UnRAID on VMware ESXi with Raw Device Mapping



I've tested it here, and unRAID v5b7 now supports spinning drives up and down with Mapped Raw LUNs, using either the LSI Logic SAS (mptsas driver) or the Paravirtual SCSI controller (with a recompiled kernel that includes the PVSCSI driver). Temperatures are still a no-go, but I can read them with smartctl, so it shouldn't be hard to fix.
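In case anyone wants to check the temps by hand while the GUI can't show them, something like this works from the unRAID console (just a sketch: /dev/sdb is a placeholder for one of your array disks, and depending on how the disk is presented you may need -d sat instead of -d ata, or no -d flag at all):

smartctl -A -d ata /dev/sdb | grep -i temperature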

 

Why do I insist on using Mapped Raw LUNs rather than passing through the controller? Because VT-d/IOMMU-capable hardware is more expensive and harder to find, while MRL only requires a disk controller that ESXi supports (e.g. an LSI 1068E or LSI 2008). No need for a Xeon processor and ECC RAM; any computer with a cheap Intel PCI card should do the trick.
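For anyone who wants to try the MRL route, the mapping file is created per disk from the ESXi Tech Support console with vmkfstools; a minimal sketch (the vml identifier and datastore path below are placeholders you must replace with your own):

vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXX /vmfs/volumes/datastore1/unRAID/disk1-rdm.vmdk

The -z switch creates a physical compatibility mapping, which passes SCSI commands through to the drive (-r would create a virtual compatibility mapping instead); you then attach the resulting .vmdk to the unRAID VM like any other disk.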

 

Has Tom expressed any interest in supporting unRAID on ESXi? Maybe someone could talk to him about getting the temperature bug fixed.

 

I understand, and I'm sure there are others who support the same idea, since this thread is about RDM. But do keep an open mind: "No need for a Xeon processor and ECC RAM" is a gray area. First, you get what you pay for, and second, it depends on what you are doing. Fully buffered RAM is great if you have the option, and if you were running many VMs or one or more processor-hungry apps like HandBrake, then a Xeon is the way you would want to go.

Link to comment

I understand, and I'm sure there are others who support the same idea, since this thread is about RDM. But do keep an open mind: "No need for a Xeon processor and ECC RAM" is a gray area. First, you get what you pay for, and second, it depends on what you are doing. Fully buffered RAM is great if you have the option, and if you were running many VMs or one or more processor-hungry apps like HandBrake, then a Xeon is the way you would want to go.

 

I know that, madburg. I have a Xeon E3-1230/Supermicro X9SCM-F combo arriving, but this kind of hardware is impossible to get here in Brazil, and when imported it can easily reach double the US price. So I think VT-d/IOMMU shouldn't be the answer for running unRAID in ESXi.

Link to comment

So I think VT-d/IOMMU shouldn't be the answer for running unRAID in ESXi.

 

Yes, but what you are doing is not what unRAID was designed for.  Tom might well see the value in it and try to get it working, but frankly don't be surprised if it is not addressed.  I would like to run unRAID under ESXi but then another part of me screams at me to NOT do that.

 

It's all a matter of personal preference.

Link to comment

I understand, and I'm sure there are others who support the same idea, since this thread is about RDM. But do keep an open mind: "No need for a Xeon processor and ECC RAM" is a gray area. First, you get what you pay for, and second, it depends on what you are doing. Fully buffered RAM is great if you have the option, and if you were running many VMs or one or more processor-hungry apps like HandBrake, then a Xeon is the way you would want to go.

 

I know that, madburg. I have a Xeon E3-1230/Supermicro X9SCM-F combo arriving, but this kind of hardware is impossible to get here in Brazil, and when imported it can easily reach double the US price. So I think VT-d/IOMMU shouldn't be the answer for running unRAID in ESXi.

 

I understand your situation regarding pricing, not being in the States. But to say "I think VT-d/IOMMU shouldn't be the answer for running unRAID in ESXi" is not correct. If you were just trying to run unRAID, you would not bother to virtualize it; the underlined part means you are trying to accomplish MORE than just running unRAID in one chassis. How much more (and its demand on CPU and memory) will dictate the hardware you purchase to accomplish it.

 

To add to that: the closer you can get to having unRAID BELIEVE it is running on physical hardware, the better the chance you have of it working and being as stable as if it were not virtualized. Example: by passing through controller cards, drive temps and spin-up/spin-down work just fine. RDM is not the same. To each their own; if you've got something that works for you and accomplishes your goals, that's great, and everyone would like to hear about your experience.

Link to comment

Another advantage of passing through the controller is that it leaves the disks outside the virtualization layer. If something happens to your controller or your mainboard, just plug the disks into another computer and you'll find your data again...

 

By the way, you can find cheap mainboards with VT-d support, like the Asus P5Q-VM DO.

Link to comment

With "raw mapped LUN" the disk is accessed directly by the guest, the entire drive. No virtual filesystem is involved.

WRONG

 

Mounting the drive as a LUN does not constitute ownership of the DRIVE itself, just disk access: the ability to write to the amount of space carved out by the LUN, which can be the size of the whole disk.

 

"RDM is a disk access management systems just like VMFS. RDM is a mapping file in a VMFS volume that acts as a proxy for a raw physical device. The RDM file contains metadata used to manage and redirect disk accesses to the physical device. ". Do not kid yourself thinking your passing through a hard drive instead of a controller like you would via advanced passthrough.

 

"RDM provides access to most hardware characteristics of the mapped device. VMkernel passes all SCSI commands to the device, with one exception, thereby exposing all the physical characteristics of the underlying hardware."

 

Link to comment

I thought ESXi allowed one to pass through a physical device without the use of VT-d/IOMMU. Is that incorrect?

 

Also, semi-related to this, does anyone know of any Intel motherboards for 1155 socket / i7-2600 CPU that do support VT-d? I only see three from Intel but they don't support integrated video. I see some that claim to support it but in reality do not. :( [ http://siphon9.net/loune/2011/01/list-of-sandy-bridge-lga1155-h67p67-motherboards-that-support-vt-d/ ]

Link to comment

I thought ESXi allowed one to pass through a physical device without the use of VT-d/IOMMU. Is that incorrect?

 

Not that I am aware of, BRiT. USB ports are the only thing that is auto-passthrough, via some other mechanism they use, and you can assign them to a VM without having VT-d/IOMMU (if my memory serves correctly).

 

Your motherboard, processor, and BIOS must all properly support VT-d/IOMMU. (Many PC-based boards claim VT-d/IOMMU but then don't add what's needed to their BIOS; I remember reading that one manufacturer even put an "Enable VT-d" option in the BIOS that actually did nothing, and they later confirmed this.) Then comes VMware certification for the hardware you are using. Not easy or fun unless you're using the big guys' (HP, etc.) hardware; they work hand in hand with VMware as partners. But then they don't use TV/SATA cards etc... LOL, right  :P

 

I was not picking on gfjardim, I just don't want his statements to confuse anyone. Knowledge is key to having a better chance of achieving the goals you're setting with these types of custom configs.

Link to comment

I thought ESXi allowed one to pass through a physical device without the use of VT-d/IOMMU. Is that incorrect?

As far as I know, both the CPU and the motherboard have to support VT-d in order to pass through PCI/PCIe devices.

 

Also, semi-related to this, does anyone know of any Intel motherboards for 1155 socket / i7-2600 CPU that do support VT-d? I only see three from Intel but they don't support integrated video. I see some that claim to support it but in reality do not. :( [ http://siphon9.net/loune/2011/01/list-of-sandy-bridge-lga1155-h67p67-motherboards-that-support-vt-d/ ]

This document may help you:

http://wiki.xensource.com/xenwiki/VTdHowTo
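If you just want a rough sanity check before installing ESXi, booting the machine from any Linux live CD and grepping the kernel log will usually show whether the BIOS is actually exposing VT-d (not definitive, but a cheap first test):

dmesg | grep -e DMAR -e IOMMU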

Link to comment

I thought ESXi allowed one to pass through a physical device without the use of VT-d/IOMMU. Is that incorrect?

 

Also, semi-related to this, does anyone know of any Intel motherboards for 1155 socket / i7-2600 CPU that do support VT-d? I only see three from Intel but they don't support integrated video. I see some that claim to support it but in reality do not. :( [ http://siphon9.net/loune/2011/01/list-of-sandy-bridge-lga1155-h67p67-motherboards-that-support-vt-d/ ]

 

I am using a Tyan S5510GM3NR (Socket 1155, C204 chipset, integrated video + IPMI/KVM-over-IP) motherboard with a Xeon E3-1230 to pass through an LSI 9211-8i SAS controller/Intel RES2SV240 expander directly to unRAID via VT-d. Works great, and with the latest beta supports full temps and spindown. Full details of the build are in my UCD thread

Link to comment

I am using a Tyan S5510GM3NR (Socket 1155, C204 chipset, integrated video + IPMI/KVM-over-IP) motherboard with a Xeon E3-1230 to pass through an LSI 9211-8i SAS controller/Intel RES2SV240 expander directly to unRAID via VT-d. Works great, and with the latest beta supports full temps and spindown. Full details of the build are in my UCD thread

 

Thanks for the pointer, especially with the SAS expander, and thanks to everyone else for sharing their experiences. Definitely more research for me to do. I just got the idea to give this a shot a day or two ago.

 

It looks like the Intel C204 chipset is the one to look for. I'm currently looking at the Tyan S5512 with LSISAS2008 onboard and perhaps the Intel Xeon E3-1275  - http://www.newegg.com/Product/Product.aspx?Item=N82E16813151247 . I'm fairly certain this will work for the unRAID guest.

 

What I'm not certain of is how well it will work with a Windows 7 guest OS, in particular for Media Center TV capture using the Ceton InfiniTV. I know you need to run the Digital Cable Advisor tool, but I don't know exactly what it requires, or whether the C204/ESXi video is enough to pass the test.

 

I can save some money by opting for the CPU without integrated graphics, but having it, even if it's unused now, gives me some future flexibility should my needs change.

Link to comment

I'm running Windows 7 as a guest alongside unRAID; 7MC will start up ok, but I have not tried it with any capture cards. In theory it should work if you pass the card through. I will say, however, that I have had trouble trying to run XBMC in a guest because it's expecting direct access to the video card for acceleration; in a VM scenario, ESXi is just presenting Windows with a basic virtual video adapter. I don't think it's ever even aware of what your hardware video is.

Link to comment

Well, with a Supermicro X9SCM-F and a Xeon E3-1230 here, I tried to pass through the AOC-SASLP-MV8 to an unRAID VM but, like others, had no luck, so ESXi will have to wait for the two BR10i cards I bought on eBay. This corroborates the statements that not every PCIe peripheral can be successfully passed through to VMs.

 

 

Link to comment

For those of you with BR10i cards passed through, have you tried beta7 with 3 TB drives on them yet? LSI has been a bit cagey on whether the older 1068E-based cards will work with 3 TB drives, but I'm not clear on whether that is just a limitation on booting from them or not.

Link to comment

Well, with a Supermicro X9SCM-F and a Xeon E3-1230 here, I tried to pass through the AOC-SASLP-MV8 to an unRAID VM but, like others, had no luck, so ESXi will have to wait for the two BR10i cards I bought on eBay. This corroborates the statements that not every PCIe peripheral can be successfully passed through to VMs.

 

 

 

Well, I successfully got the AOC-SASLP-MV8 working with ESXi 4.1 and VMDirectPath.

 

With "Remote Tech Support" enabled, use WinSCP to connect to ESXi, and add there two lines to the /etc/vmware/passthru.map file:

 

# Marvell Technologies, Inc. MV64460/64461/64462 System Controller, Revision B

11ab  6485  d3d0     false
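# (columns, as I understand the format: vendor-id  device-id  resetMethod  fptShareable; d3d0 = reset via a D3hot->D0 power transition)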

 

Now open your VM's .vmx file and change this:

pciPassthru0.present = "TRUE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

to this:

 

pciPassthru0.present = "TRUE"

pciPassthru0.msiEnabled = "FALSE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

The catch is to force the use of IOAPIC mode (instead of MSI) with the pciPassthru0.msiEnabled = "FALSE" statement.

 

Reboot the hypervisor and start your unRAID VM!

 

Good luck.

Link to comment

Well, with a Supermicro X9SCM-F and a Xeon E3-1230 here, I tried to pass through the AOC-SASLP-MV8 to an unRAID VM but, like others, had no luck, so ESXi will have to wait for the two BR10i cards I bought on eBay. This corroborates the statements that not every PCIe peripheral can be successfully passed through to VMs.

 

 

 

Well, I successfully got the AOC-SASLP-MV8 working with ESXi 4.1 and VMDirectPath.

 

With "Remote Tech Support" enabled, use WinSCP to connect to ESXi, and add there two lines to the /etc/vmware/passthru.map file:

 

# Marvell Technologies, Inc. MV64460/64461/64462 System Controller, Revision B

11ab  6485  d3d0     false

 

Now open your VM's .vmx file and change this:

pciPassthru0.present = "TRUE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

to this:

 

pciPassthru0.present = "TRUE"

pciPassthru0.msiEnabled = "FALSE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

The catch is to force the use of IOAPIC mode (instead of MSI) with the pciPassthru0.msiEnabled = "FALSE" statement.

 

Reboot the hypervisor and start your unRAID VM!

 

Good luck.

 

Good stuff, I never got my hands on one of those cards to work with. Question: did ESXi auto-passthrough this card since it did not have a driver for it? I have never seen that happen unless you enable it in the advanced options, but that doesn't mean it's not what happens for a card it has no driver for; that's why I am asking.

 

Because I am thinking two things: you would want to pass it through if you wanted to assign the whole card, with its hard drives, to the guest (unRAID), and you would not want to pass it through at all if you wanted to utilize RDM, but I believe the card must be supported (with a driver) in order to utilize RDM.
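If it helps with the RDM half of that: from the Tech Support shell you can list the SCSI devices ESXi has actually claimed; if the disks behind a controller don't show up there, ESXi has no working driver for it and RDM won't be an option (a rough check on my part, not gospel):

esxcfg-scsidevs -l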

 

Just trying to understand for future cases, as well as how you got there.

 

Last question that comes to mind: did you flash it with any particular firmware update (like from LSI), and if so, which version?

Link to comment

Good stuff, I never got my hands on one of those cards to work with. Question: did ESXi auto-passthrough this card since it did not have a driver for it? I have never seen that happen unless you enable it in the advanced options, but that doesn't mean it's not what happens for a card it has no driver for; that's why I am asking.

 

Because I am thinking two things: you would want to pass it through if you wanted to assign the whole card, with its hard drives, to the guest (unRAID), and you would not want to pass it through at all if you wanted to utilize RDM, but I believe the card must be supported (with a driver) in order to utilize RDM.

 

Just trying to understand for future cases, as well as how you got there.

 

Last question that comes to mind: did you flash it with any particular firmware update (like from LSI), and if so, which version?

I had to go to the Advanced Options first and select the card for passthrough, but that's not enough, since the default options ESXi uses for passthrough make the card unstable, with a lot of PSODs (pink screens of death). With those modifications the card is apparently stable; no more PSODs so far.

 

The AOC-SASLP-MV8 isn't supported by ESXi, so passthrough is the only way to go. I'm using the stock firmware, I think it is 3.1.1.5N.

Link to comment

This has probably been asked previously, but how do you get the unRAID license 'installed' when booting under ESXi? Is it possible to boot from the USB stick in ESXi 4.1, or does that need tweaking too?

So far the no-go for SASLP-MV8s has been the showstopper for using ESXi for me.

Link to comment

This has probably been asked previously, but how do you get the unRAID license 'installed' when booting under ESXi? Is it possible to boot from the USB stick in ESXi 4.1, or does that need tweaking too?

So far the no-go for SASLP-MV8s has been the showstopper for using ESXi for me.

 

Look at the post above yours. It looks to me like he has the SASLP cards working in passthrough.

Link to comment

Apologies for my laziness, drealit, but the thread has grown so big that it takes longer to read through 24 pages of less relevant posts than to ask a one-line question.

 

gfjardim: thanks once again for a valuable answer.

 

By the way, is VT-d also something you need on your motherboard, or only on the CPU?

I've got an Asus P8H67-M with a Sandy Bridge CPU that I know has VT-d, but I'm not sure about the mobo; I can't find much about it online.

Link to comment
