HBA + SAS expander for 24 bay Norco



So I'm about to build my first Unraid server, but am having trouble finding the best [read: cheapest] way to connect all the hard drives.

 

After cannibalizing my current ghetto Ubuntu server and grabbing an eBay bargain, I have the following parts:

 

Intel Pentium E5200 (dual core)

Supermicro C2SBM-Q

4GB [2x2GB] of G.Skill DDR2

Antec Neo Power 650W

 

 

Now for the case, I'm looking at the Norco RPC-4224,

which has 6 backplanes, each using a SAS SFF-8087 connector.

 

My first plan was to get a 4xSATA to SFF-8087 reverse breakout cable, for the top row/4 drives.

Then two AOC-SASLP-MV8s with SFF-8087 to SFF-8087 cables, which would take care of the next 4 rows/16 drives.

 

This however would leave me with one row/4 drives unconnected. So I'd need to get another card, most likely PCI, to connect the last 4 drives.

 

So after reading around this forum and others, I've found out about HBA cards and SAS expanders. I'm very new to SAS cards and such; my usual computer knowledge didn't go much further than assembling basic PCs with Windows installs. So I'm not entirely sure whether what I'm thinking here makes any sense.

 

Now I'm wondering if I could get some sort of basic SAS card and then use something like a Chenbro CK12804 which would give me 6 SFF-8087 ports for connecting all 6 backplanes/24 drives?
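The port math for that idea can be sketched out quickly. This is just a sanity check based on the layout described above (6 backplanes, one SFF-8087 per backplane); the constants are simply the numbers already stated in the thread:

```python
# Sanity check of the port math, assuming the Norco RPC-4224 layout described
# above: 6 backplanes, each fed by one SFF-8087 connector that carries
# 4 SAS/SATA lanes (one lane per drive).
LANES_PER_SFF8087 = 4
BACKPLANES = 6

drives_supported = BACKPLANES * LANES_PER_SFF8087
print(drives_supported)  # 24 -- an expander with 6 SFF-8087 ports covers every bay
```

So an expander exposing 6 downstream SFF-8087 ports, like the Chenbro, does line up exactly with the 24 bays.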

 

 

So my questions are:

Is there a set of matching hardware in this configuration that is recommended for Unraid builds? Preferably one that doesn't cost thousands.

Is there going to be any sort of bottlenecking from running all 24 drives through one PCIe card to the motherboard?


Are you feeling lucky?  You can certainly try the SAS expander, but there's no guarantee that it will work.  There are a ton of new SAS cards on the market right now, and we as a community haven't had time to test them all.  However, there are quite a few described in this thread.  PCIe has more than enough bandwidth; you won't have any bottlenecks running all your drives on that bus.

 

In case you missed it, unRAID only supports 22 drives at the current time (20 data, 1 parity, 1 cache).  However, you can mount the two extra drives outside the parity-protected array with the SNAP add-on or through unMenu.  If you don't want to mess around with unknown controllers/expanders, then your only option is to use a four-port PCI card (or two 2-port ones), which will definitely introduce a bottleneck during parity checks or other disk operations that require reading from or writing to multiple disks simultaneously.

 

 


Yeah, that was one of the main threads that I've been reading, trying to absorb some knowledge.

 

So it looks like the BR10i [LSI SAS1068E] has everything working since 5.0b7, so if I was to get one of those with an expander like the Chenbro in my first post would that work?

 

Does the expander card need to be compatible with Unraid also, or is it invisible to the OS due to running through the BR10i HBA?

 

If you don't want to mess around with unknown controllers/expanders, then your only option is to use a four port PCI card (or two 2 port ones) which will definitely introduce a bottleneck during parity checks or other disk operations that require reading from or writing to multiple disks simultaneously.

 

That's exactly why I'm looking into the expanders, as I don't want to have a bottleneck or always have to think about avoiding the disks that are connected to the PCI slot.


 

Does the expander card need to be compatible with Unraid also, or is it invisible to the OS due to running through the BR10i HBA?

...

 

That's exactly why I'm looking into the expanders, as I don't want to have a bottleneck or always have to think about avoiding the disks that are connected to the PCI slot.

 

My 2 cents: if you want to use an expander, I recommend using an LSI SAS2008-based card and an LSI SAS2X36-based expander. The HBA can be, for example, an IBM M1015 or a Supermicro AOC-USAS2-L8i; the expander can be the recently released Chenbro CK23601, which already supports SATA III, meaning more bandwidth is available.

 

http://www.provantage.com/chenbro-micom-ck23601~7CHEN11K.htm

 

As I see it, the expander should be transparent to the system itself, but there are incompatibilities between expanders and some HBAs or RAID controllers, so keeping everything LSI or PMC-Sierra can avoid a lot of trouble.


I'm starting to think my best option would be to just grab one AOC-SASLP-MV8 for now, and hope that by the time I run out of drive bays the support for expanders is further along, or at least there are more documented success stories. If not, I'll just get another AOC-SASLP-MV8.

 

If I do go the expander route, the only thing I'm worried about is that the AOC-SASLP-MV8 is only PCIe x4, whereas something like the BR10i is PCIe x8. Combined with an expander, that would mean all 24 drives running through one PCIe slot, and x4 speed would be a bit of a bottleneck.

But the only time all 24 drives would be active is during a parity check, correct? So the bottleneck would only be relevant then, which wouldn't be too much of a problem.
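To put a rough number on that worry, here's a back-of-the-envelope sketch of what each drive gets during a parity check when the slot is the shared limit. The figures are assumptions on my part, not from this thread: PCIe 1.0 at roughly 250 MB/s per lane, and roughly 100 MB/s sustained per mechanical drive of that era.

```python
# Rough oversubscription estimate for a parity check, when all 24 drives are
# read at once through a single HBA slot. Assumed figures (not from this
# thread): PCIe 1.0 ~250 MB/s per lane, ~100 MB/s sustained per drive.
PCIE1_LANE_MBPS = 250
DRIVE_MBPS = 100
DRIVES = 24

def per_drive_throughput(lanes):
    """Effective MB/s each drive gets when the slot bandwidth is shared."""
    slot_bw = lanes * PCIE1_LANE_MBPS
    return min(DRIVE_MBPS, slot_bw / DRIVES)

print(per_drive_throughput(4))  # x4 slot: ~41.7 MB/s per drive
print(per_drive_throughput(8))  # x8 slot: ~83.3 MB/s per drive
```

Under those assumptions an x4 slot roughly halves parity-check speed, while an x8 slot comes much closer to letting the drives run at full tilt.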


I would just run two SASLP cards and be done with it.  Tried and true, and it will probably end up being cheaper than using an expander anyway.  No bottlenecks either.

 

All drives are also active during a rebuild from parity or during the emulation of a failed disk.

