
jangjong

Members
  • Posts: 365
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

jangjong's Achievements

Contributor (5/14)

Reputation: 2

  1. Not as good a photographer as speeding_ant, but here it is.
  2. http://lime-technology.com/forum/index.php?topic=26639.msg235245#msg235245
  3. I can comment on this. I recently switched from 1 x MV8 to 2 x M1015. First of all, HDDs can't even reach the maximum speed of SATA II (3 Gb/s), so SATA III vs. SATA II doesn't matter much, and there is almost no performance gain when copying to or from unRAID with either the M1015 or the MV8. HOWEVER, you do see an improvement during a parity check or data rebuild. The M1015 uses PCIe x8, which can move about 1.6 GB/s in a single direction; the MV8 uses x4, which can move only about 800 MB/s. Say you have 8 drives plugged into one of these cards, and your HDDs can read 110 MB/s max (which is what I saw during the pre-read phase of preclear for most drives). With the MV8, your total speed is limited to 800 MB/s or less; in practice it never reaches that high, since that is just the theoretical speed. With the SimpleFeatures stats plugin you can actually see the transfer speed of every drive during a data rebuild / parity check, and I was getting about 500 MB/s total there, while the index page showed about 80 MB/s for the parity check. When I switched over to the M1015, I was getting about 700 - 800 MB/s total and about 100 - 120 MB/s during the parity check (I also removed the green drives from my array, which increased my speed as well). So with the MV8, a parity check took 10 - 11 hours; with the M1015, it took about 7 - 8 hours or less. That is a big improvement in my opinion. To sum it up: in regular usage there is no difference, because you're only accessing 1 or 2 drives at a time, so the PCIe link is not the limit. But when a parity check or data rebuild reads all 8 drives at the same time, you see the improvement; a rough calculation is sketched after this list.
  4. Yes, it's possible, since your unRAID system and configuration are actually stored on the flash drive, and you use that to boot the VM. That link should be a good guide on how to do this. However, without being able to do hardware passthrough, like I said before, you have to set up RDM for all the HDDs you have; there is no way around it. I am not a big fan of RDM personally, so I wouldn't go with ESXi if my hardware didn't support hardware passthrough.
  5. You don't have to pass through the NIC; you can just add a virtual NIC provided by ESXi. It's normal for it to show as inactive under DirectPath I/O if you have a virtual NIC. I don't know if the Precision 690 supports hardware passthrough. If you go into the BIOS, do you have VT-d or some other virtualization setting? If it doesn't support it, you're going to have to use RDM for all your HDDs, which is not too convenient.
  6. You just add it as a USB device in your VM. Go to the settings screen of your VM and click "Add...". Add a USB Controller, then add the USB Device. See this post: http://lime-technology.com/forum/index.php?topic=14695.msg138465#msg138465 and scroll down to "VM#3 unRAID VMDirectPath Hardware Passthough".
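
A rough way to see the bottleneck described in post 3 is to compare the array's aggregate read demand with each card's single-direction PCIe budget. The Python sketch below is a minimal illustration using the figures quoted in the post (8 drives at ~110 MB/s each, ~800 MB/s for the MV8's x4 link, ~1.6 GB/s for the M1015's x8 link); the 3 TB drive size is an assumption, not something stated in the post.

```python
# Back-of-envelope sketch of the PCIe bottleneck described in post 3.
# Bus and per-drive figures are taken from the post; DRIVE_TB is an
# assumption (the post does not state the drive size).

DRIVES = 8
PER_DRIVE_MBPS = 110                      # max read seen during preclear pre-read
BUS_MBPS = {"MV8 (PCIe x4)": 800, "M1015 (PCIe x8)": 1600}
DRIVE_TB = 3                              # assumed drive size

demand = DRIVES * PER_DRIVE_MBPS          # 880 MB/s if nothing throttles
for card, bus in BUS_MBPS.items():
    total = min(demand, bus)              # aggregate ceiling imposed by the link
    per_drive = total / DRIVES            # what each drive gets during a check
    hours = DRIVE_TB * 1e6 / per_drive / 3600
    print(f"{card}: ~{total} MB/s total, ~{per_drive:.0f} MB/s per drive, "
          f"parity check ~{hours:.1f} h")
```

The MV8 ceiling this gives (100 MB/s per drive, ~8.3 h) is still optimistic; the ~80 MB/s the post actually observed, presumably controller overhead, stretches the check to the reported 10 - 11 hours, while the M1015 estimate (~7.6 h) lines up with the reported 7 - 8 hours.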