Virtualized unRAID



This was part of a larger project to consolidate all of my hardware into a single 42U rack-mount server cabinet, as well as to virtualize all server duties from multiple machines into a single physical box running ESXi. Though not officially supported, all of the research and experiences in this thread gave me confidence that I could successfully virtualize unRAID alongside multiple other guest operating systems. My goal was maximum incremental expandability, for both unRAID storage and the ability to add additional virtual machines as needed, without sacrificing the performance or flexibility of unRAID itself. To that end, I selected components capable of using VT-d/VMDirectPath to provide direct hardware access to both the drive controller and the NIC for the virtualized unRAID instance.

 

In addition to expansion to 20 data drives in the primary chassis, I can eventually add up to 3 more PCI-Express SAS HBA's attached to external chassis in the rack and assign them to additional instances of unRAID for a total of 60 additional data drives.

 

OS at time of building:  unRAID Server Pro 5.0-beta6a as a guest OS under ESXi 4.1 (on the bleeding edge for the sake of SAS2008 support)

CPU: Intel Xeon E3-1230 Sandy Bridge 3.2 GHz (1 vCPU allocated to unRAID)

Motherboard:  TYAN S5510GM3NR

RAM: 16 GB (4GBx4) Crucial DDR3 1333 ECC (2 GB allocated to unRAID)

Case: Norco RPC-4224

Drive Cage(s): SYBA SY-MRA2508 expansion slot cage for 2.5" ESXi datastore drive

Power Supply:  Corsair HX 750 Watt

SAS Expansion Card(s): LSI 9211-8i 6 Gb/s (passed through to unRAID via VMDirectPath) + Intel RES2SV240 SAS Expander, for a total of 24 ports

Cables:  6x Norco SFF-8087 to SFF-8087 to the 6 case backplanes; generic SATA to SATA to the internal datastore drive

Fans:  2x Arctic Cooling CF8 PWM 80mm, 3x Scythe SY1225SL12L 120mm

NIC: Intel 82574L (onboard, passed through to unRAID via VMDirectPath) + virtual NIC for exclusive communication between other VMs on the same machine

 

ESXi Datastore Drive: WD Scorpio 2.5 500 GB 7200 RPM

Parity Drive: Seagate 5900 RPM 2 TB

Data Drives: 3x Seagate 5900 RPM 2 TB + 2x Hitachi 5400 RPM 2 TB

Cache Drive: Hitachi 500 GB 7200 RPM

Total Drive Capacity: 10 TB, expandable to 40 TB in the primary chassis, and 160 TB via additional unRAID virtualized on the same ESXi installation, given current 2 TB drives
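
For reference, those capacity figures are just data-drive counts times drive size (parity and cache excluded). A quick Python sketch of the math, using the drive counts quoted above:

# Capacity math for this build, using the drive counts from the post.
# Parity and cache drives are excluded since they do not add usable capacity.
drive_tb = 2                   # current 2 TB data drives

current_data = 5               # 3x Seagate + 2x Hitachi
primary_max = 20               # data drives the primary chassis can hold
external_max = 3 * 20          # 3 more HBAs, each feeding an external 20-drive chassis

print(current_data * drive_tb)                  # 10 TB today
print(primary_max * drive_tb)                   # 40 TB with the primary chassis full
print((primary_max + external_max) * drive_tb)  # 160 TB with three external chassis added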

 

Primary Use: Media serving to XBMC

Add Ons Used: unMenu, VMWare Tools

Other VMs on the machine: Windows 7 Enterprise (runs Homeseer home automation, Plex Media Server, and CyberPower Powercenter for UPS management and shutdown/resume automation); Gentoo Linux (runs MySQL for the XBMC library, LDAP, MediaTomb DLNA server, SABnzbd, Sick Beard, CouchPotato, and Deluge); PBX in a Flash (CentOS) Asterisk VoIP server; XBMC Live for central library management

 

Unassembled components:

 

aNEZ0.jpg

 

 

I used some epoxy to custom mount the SAS expander card directly to the fan wall of the case. Since the card can be powered by MOLEX, I did not want to waste a PCIe slot. This also allowed me to use shorter SAS cables:

 

GAfyh.jpg

 

Fully assembled and ready to move into the rack:

 

FWf80.jpg

 

In its final home in the rack along with UPS (Cyberpower OR1500 LCDRM2U 1500VA, 900 Watt), patch panel and 24 port Gigabit switch (Netgear GS724-T300NAS).

 

2YsLT.jpg

 

Low light shot of the same to emphasize the LEDs:

 

6SRat.jpg

 

I used a USB label printer with white-on-clear laminated label tape to label each drive bay with the last four digits of the drive serial number:

 

QVBWc.jpg

 

The full rack. In addition to the equipment mentioned above, it is also home to my primary desktop PC, an older (and former NAS) box running PFSense for router duties, a testbed box, and a Yamaha AVR:

 

3uWqY.jpg

Link to comment

LoL at Craigslist deals.

 

I had originally planned to build my unRAID like this.

 

I wanted to run 3 to 5 servers on one box in a Norco 4224.

 

1. unRAID with 22 drives.

2. WHS2011 with the last 2 drives, strictly for PC backups (possibly storage, see #5).

3. Win2008r2 PDC (DHCP, WINS, DNS) on an internally mounted SSD (drill holes in the side of the case if need be).

4. Win2008r2 or W7 client running Newsbin 24x7 on an internally mounted SSD or 2.5" laptop drive.

5. Possibly adding another Windows 2008 VM with external SAS connectors and expanders to a second 4224 or a 3216 (or adding the storage to the WHS2011 if resources are low).

 

As long as I have 4 servers running 24x7 on Atoms, why not put them all on one Sandy Bridge Xeon and save power, heat, and space?

 

I've already got the motherboard (SUPERMICRO MBD-X9SCM-F-O). I had the Xeon on order and canceled it.

I have 2 4224's.

2x MV8's

I would just need the Xeon, a 4-port NIC so that each VM can have a dedicated NIC (it looks like you are running shared?), and another SAS card with an internal and an external port that supports expanders. I might also toss my 2 MV8s into another box and get something closer to your setup.

 

Maybe I'll still go that path; I only have 6 data drives in my unRAID right now and it's all duplicated on another box, so it would be a good time to change my mind.

I'd love to see how you set up ESXi.

Link to comment

The motherboard has 3 NICs (1 is shared with IPMI/KVM-over-IP). I'm passing one of those through directly to unRAID, and the other two are shared among the remaining VMs (letting ESXi do the load balancing). I've also added a second virtual NIC to unRAID for communication with the Gentoo VM on a separate subnet so that traffic never actually goes out to the router.

 

The ESXi configuration itself is pretty standard. Really the only hoops I had to jump through were using Plop Boot Manager on the unRAID VM (since you cannot boot a VM from a passed-through USB drive), and some Perl scripts on the Windows 7 VM (I'll have to dig up the URL for those) to interact with my UPS and initiate a clean powerdown of all of the guest VMs on power failure.
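
I don't have those Perl scripts handy, but the general idea is simple: when the UPS software sees a power failure, it runs a script that tells ESXi to shut each guest down cleanly. Below is a minimal Python sketch of that idea (not the actual scripts), assuming SSH is enabled on the ESXi host, key-based login is set up, and VMware Tools is installed in each guest; the host address and VM names are placeholders.

import subprocess

ESXI_HOST = "root@esxi.local"                 # placeholder; SSH must be enabled on the host
SHUTDOWN_ORDER = ["gentoo", "pbx", "unraid"]  # hypothetical VM names, least to most critical

def vim_cmd(*args):
    # Run a vim-cmd subcommand on the ESXi host over SSH and return its output.
    return subprocess.check_output(["ssh", ESXI_HOST, "vim-cmd", *args], text=True)

def vm_ids_by_name():
    # Parse "vim-cmd vmsvc/getallvms" into {VM name: vmid}.
    # Assumes VM names contain no spaces; quick and dirty.
    ids = {}
    for line in vim_cmd("vmsvc/getallvms").splitlines()[1:]:
        parts = line.split()
        if parts and parts[0].isdigit():
            ids[parts[1]] = parts[0]
    return ids

def shutdown_guests():
    ids = vm_ids_by_name()
    for name in SHUTDOWN_ORDER:
        if name in ids:
            # Graceful guest OS shutdown via VMware Tools (not a hard power-off).
            vim_cmd("vmsvc/power.shutdown", ids[name])

if __name__ == "__main__":
    shutdown_guests()

Presumably the UPS software on the Windows 7 VM calls something like this on power failure and then shuts Windows itself down last.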

Link to comment
  • 2 weeks later...

Thanks for posting this, I'll be setting up a new ESXi/unRAID build later this year and this is exactly what I'm aiming for. I was planning to look at the new Bulldozer CPUs, but these entry-level Xeons look great - I might end up duplicating your build almost exactly!

 

I already picked up a BR10i card on eBay; I wonder if it will work OK with a SAS expander or if there would be performance issues (3 Gb/sec vs 6 Gb/sec).

 

You might be interested to know that the latest unraid beta supports 3TB drives, I did a bit of research and your controller supports 3TB but I'm not sure about the expander.  Assuming it does or will with a firmware update, you might be able to bump up those maximum storage figures by 50% soon - probably 100% by this time next year!

 

Do you find any performance issues running all your guest operating systems on a laptop drive?

 

edit: Forgot to ask - are drive temps and standby working?

Link to comment

Isn't that LSI card you use like $250? I'm using that SuperMicro card that just about everyone uses since no one really knows what will work or not. I would have liked to get a better performing card but couldn't find the correct information out there to make sure it would work with unRAID. Would that LSI card perform better than the SuperMicro card?

 

 

Link to comment

Well for one, that LSI card will work with ESXi, unlike the Supermicro :). A proper alternative/comparable card would be the $50-100 BR10i that everyone has been buying up. However, it isn't clear whether or not the BR10i will receive firmware to support 3 TB drives. As for the 6 Gb/s vs 3 Gb/s speed difference... that shouldn't really matter with mechanical drives, but I'm not sure whether it would affect how far the expander would comfortably scale (I have zero experience with them and don't feel like researching right now).

 

Are you sure it's the card that is hampering your performance and not the drives/unraid itself?

Link to comment

As for the 6 Gb/s vs 3 Gb/s speed difference... that shouldn't really matter with mechanical drives, but I'm not sure whether it would affect how far the expander would comfortably scale (I have zero experience with them and don't feel like researching right now).

 

Are you sure it's the card that is hampering your performance and not the drives/unraid itself?

It doesn't matter if the ports are used normally.  My limited research is this:

 

Controller: 2 x SFF8087 ports

Expander:  6 x SFF8087 ports

 

The normal configuration is to connect one SFF-8087 cable between the controller and the expander - losing one port on each leaves you with one port on the controller and five ports on the expander - six SFF-8087 ports for a total of 24 SATA connections.

 

The problem is that the 20 drives on the expander are all sharing the bandwidth of that one SFF-8087 port on the controller. At 3 Gb/sec per lane, with five drives sharing each lane, that's roughly 75 MB/sec each? Which sounds OK for current green drives, but maximum theoretical speeds never seem to match actual real-world speeds...

 

It's possible to use both controller ports to supply more bandwidth to the expander, but that only leaves you with 16 SATA connections - you'd be better off (and have more money in the bank) with two BR10i's.
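
To put rough numbers on that (back-of-the-envelope only; real-world drive and protocol overhead will vary), here is the same math in Python, including the ~20% hit from 8b/10b encoding on SATA/SAS links:

# Shared-bandwidth math for an expander hanging off a 3 Gb/s HBA.
# SATA/SAS use 8b/10b encoding, so 10 bits go on the wire per data byte.

def per_drive_mb_s(lane_gbps, lanes_to_hba, drives):
    usable_per_lane = lane_gbps * 1000 / 10      # ~300 MB/s per 3 Gb/s lane after encoding
    return usable_per_lane * lanes_to_hba / drives

# Single link: one SFF-8087 cable (4 lanes) to the expander, 20 drives behind it.
print(per_drive_mb_s(3, 4, 20))   # ~60 MB/s per drive if every drive streams at once

# Dual link: both HBA ports (8 lanes) to the expander, 16 drives left on the expander.
print(per_drive_mb_s(3, 8, 16))   # ~150 MB/s per drive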

 

Probably worth investing a little bit more to do it right as fade23 has done.  3TB+ support is critical to me though as I'm hoping this next build will last me for a good 7-10 years and I'm guessing 4TB drives will be cost effective in 2013!

Link to comment

Hey Fade23

 

I am thinking of building a new unRAID box since mine is many years old, and I would like to set up a VM server like the one you built. I was wondering, if you had it to do all over again, what would you change, or are you happy with the way it is now? I really do like the path you took here and may just go the same route.

 

Thanks again!

 

JM

Link to comment

Thanks for posting this, I'll be setting up a new ESXi/unRAID build later this year and this is exactly what I'm aiming for. I was planning to look at the new Bulldozer CPUs, but these entry-level Xeons look great - I might end up duplicating your build almost exactly!

 

I already picked up a BR10i card on eBay; I wonder if it will work OK with a SAS expander or if there would be performance issues (3 Gb/sec vs 6 Gb/sec).

 

You might be interested to know that the latest unraid beta supports 3TB drives, I did a bit of research and your controller supports 3TB but I'm not sure about the expander.  Assuming it does or will with a firmware update, you might be able to bump up those maximum storage figures by 50% soon - probably 100% by this time next year!

 

Do you find any performance issues running all your guest operating systems on a laptop drive?

 

edit: Forgot to ask - are drive temps and standby working?

 

Yes, I'm definitely planning on going with 3 TB drives from here out now that they are supported. In fact one of the main reasons I went with the SAS2008 card is because LSI has stated that the 3 Gbps SAS controllers may not support 3 TB drives (though I'm not clear on whether that is just for booting or not).

 

I have not found any appreciable difference between running the ESXi datastore on a 7200 rpm 2.5" drive vs. a 7200 rpm 3.5" drive. One of the reasons I went with the 2.5" drive is that eventually I will replace it with an SSD, and I can just pop the new drive in via the hotswap bay.

 

Drive temps work fine in 5.0beta6; beta7 supposedly adds spindown support, but I have not upgraded yet.

Link to comment

If you are building today you might also want to consider Chenbro's new 36-port expander if you can find it; it uses the same chipset as the Intel one I used but has more ports, so it can fully populate a Norco 4224 using a single-port HBA (or with dual link). Availability still seems a bit limited, though; when I built, I was debating waiting for that one, but in the end I got impatient. If and when I build slave chassis for additional drives, I'll be using one of those.
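
Rough port math on that (treating each expander "port" as a single PHY/lane; a quick sketch, not figures from the datasheet):

# Port budget for a 36-port expander feeding a Norco RPC-4224 (24 drive bays).
total_phys = 36
single_link = 4               # one SFF-8087 cable back to the HBA
dual_link = 8                 # both HBA ports, for more uplink bandwidth

print(total_phys - single_link)  # 32 PHYs left for drives -> covers all 24 bays
print(total_phys - dual_link)    # 28 PHYs left for drives -> still covers all 24 bays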

Link to comment

 

The normal configuration is to connect one SFF-8087 cable between the controller and the expander - losing one port on each leaves you with one port on the controller and five ports on the expander - six SFF-8087 ports for a total of 24 SATA connections.

 

The problem is that the 20 drives on the expander are all sharing the bandwidth of that one SFF-8087 port on the controller. At 3 Gb/sec per lane, with five drives sharing each lane, that's roughly 75 MB/sec each? Which sounds OK for current green drives, but maximum theoretical speeds never seem to match actual real-world speeds...

 

It's possible to use both controller ports to supply more bandwidth to the expander, but that only leaves you with 16 SATA connections - you'd be better off (and have more money in the bank) with two BR10i's.

 

 

Keep in mind that though the bandwidth IS shared, in most real-world usage scenarios for unRAID the only time you are accessing all the disks at once is when you are doing a parity calculation/rebuild. For day-to-day media usage, you're very rarely going to saturate it.

Link to comment

3. Win2008r2 PDC (DHCP, WINS, DNS) on an internally mounted SSD (drill holes in the side of the case if need be).

 

http://www.scythe-eu.com/en/products/pc-accessory/slot-rafter.html

http://www.newegg.com/Product/Product.aspx?Item=N82E16817998079&Tpk=SY-MRA25018

 

I'm using the latter link in my build. The nice thing about that one is that it's a hotswap bay, so I can change the datastore out without having to open the case.

Link to comment

If you are building today you might also want to consider Chenbro's new 36-port expander if you can find it

 

Thanks for pointing that out. Here are the manufacturer's details on the card (well, at least a picture of it): http://www.chenbro.com/corporatesite/products_detail.php?sku=187 . It does seem a bit odd how the ports are set up, with 1 external SFF-8088 input and 1 internal SFF-8087 input, but that's how the other Chenbro SAS expanders are set up as well.

Link to comment
  • 1 month later...

Does the dual-port SAS2008 card show up as one device for VMDirectPath, or is each port a different device? I.e., one port to unRAID server #1 and the second port to unRAID server #2.

 

It shows up as a single device.  This is actually a good thing as you can only pass through two PCIe devices to any individual VM - if it was the other way you'd be limited to passing through 8 drives.

 

This is my understanding from what I've read anyway.

 

 

Link to comment

Yes, it shows up as a single device, expander and all.

That might be one of the best reasons to use an expander for unRAID.

You get your full 22 drives in ESXi without having to pass through any individual disks.

 

The only advantage I had thought of until now was being able to use a single-PCIe-slot Atom mobo for a 22-drive budget box... (and I am still thinking about doing it for my second box).

 

Usually an expander-aware card and an expander cost more than 2 (or 3) cheap-o Supermicro MV8s.

 

That brings me to the question that is stopping me: how slow would a parity check be on a 21-drive array (20 data + 1 parity) running entirely on one 4x or 8x PCIe slot?

Link to comment

Quick math gives you about 140 MB/s per device (6 Gbps x 4 channels / 21 drives) with all 21 drives reading at the same time. That's if you use a single link; some expander/HBA combos will support dual linking to the expander (though obviously doing that you lose ports). Slower than a dedicated port for sure, but if you're running green drives you're more likely to be bottlenecked by rotational speed than you ever will be by the bus.
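
For what it's worth, here's that arithmetic spelled out (raw link rate first; 8b/10b encoding shaves off roughly another 20%):

# ~140 MB/s figure: a single 4-lane 6 Gb/s link shared across 21 drives.
raw_per_drive = 6 * 4 * 1000 / 8 / 21       # ~143 MB/s per drive at the raw link rate
usable_per_drive = 6 * 4 * 1000 / 10 / 21   # ~114 MB/s per drive after 8b/10b encoding
print(round(raw_per_drive), round(usable_per_drive))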

Link to comment

It shows up as a single device.  This is actually a good thing as you can only pass through two PCIe devices to any individual VM - if it was the other way you'd be limited to passing through 8 drives.

 

This is my understanding from what I've read anyway.

 

This has actually changed for ESXi 4.1.  ESXi 4.0 supported only 2 devices for passthrough, but ESXi 4.1 supports 4!

 

http://kb.vmware.com/kb/1010789

 

fade23: This build is fantastic, thank you for posting it. You have inspired me to shell out a bit of money and move away from my software-based VMware solution to a full-blown ESXi config. I will be going with the IBM M1015 cards that do support 3TB+ drives - I only have 15 drives right now.

 

I'm curious to see the new feature set of ESXi 5.0, but with the new licensing model I'm not sure many folks will be moving to it.

 

Link to comment
