
PCI-E SATA Cards 1 vs 2 and x1-x16


erikatcuse


Basically, what would be better: one or two PCI-E cards?

 

I have an x1 and an x16 slot.  So would I be better off with an x1 card and an x4 card, each with 4 ports?  Or how about one x4 card with 8 ports?  Will I see a difference?

 

What about someone who has two x4 slots: would two 4-port cards be better than one x4 8-port card?

 

 

What's the advantage?  What if you could afford an x8 card?

 

 

 


I wanted to find out about the Highpoint RocketRaid 2320 (8-port PCI-E x4) and asked Tom about support via email and received a reply almost right away.  Since I only have one x16 slot, a single card might be beneficial to me.  Now to find one on eBay for a decent price.

 

I don't have any experience with that card.  It's going to depend on whether you can 'disable' the BIOS and have the card just provide an interface to your hard drives that Linux will recognize.  Might be worth a try - Newegg is generally pretty good about letting you return stuff, less the shipping fee of course.  Be sure to let me know how it works out.

 

Each PCI-E lane is capable of 250MB/sec (compared to PCI, which is 133MB/sec).  An x4 card is thus theoretically capable of 1GB/sec.  It all depends on the card architecture, though.  For example, a Silicon Image based PCI-E x1 card only does 133MB/sec because the chip was designed for PCI and that's the max it will do.  The Marvell chip used on that Highpoint card, as well as the Adaptec controller, should get close to 1GB/sec, but then you start to run into the northbridge/southbridge bottleneck.

 

Hence the reason we went with the Supermicro motherboard: it has a PCI-E x4, a PCI-E x16, and a PCI-E x1 slot.  We use all three slots and see parity sync rates between 50-60MB/sec with 15 drives, which equates to close to 900MB/sec aggregate.
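
If anyone wants to sanity-check these numbers, here's a rough back-of-the-envelope sketch in Python.  The rates are the theoretical PCIe 1.x / PCI figures quoted above, and the helper name card_ceiling is just for illustration:

PCIE_LANE_MB_S = 250   # theoretical per-lane rate quoted above (PCIe 1.x)
PCI_BUS_MB_S = 133     # classic 32-bit/33MHz PCI bus

def card_ceiling(lanes, chip_limit=None):
    # Slot bandwidth, optionally capped by a controller chip that was
    # designed for PCI (like the Silicon Image example above).
    slot = lanes * PCIE_LANE_MB_S
    return min(slot, chip_limit) if chip_limit else slot

print(card_ceiling(1))                            # x1 card: 250 MB/sec
print(card_ceiling(4))                            # x4 card (RocketRaid 2320 class): 1000 MB/sec
print(card_ceiling(1, chip_limit=PCI_BUS_MB_S))   # PCI-design chip on x1: 133 MB/sec

# Cross-check the 15-drive parity sync figure quoted above:
print(15 * 60)   # 900 MB/sec aggregate at 60 MB/sec per drive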


 

Although the Silicon Image PCI-E x1 card may be limited to 133MB/sec, that single card gets the full 133MB/sec.  So if you had two of them, each card would get 133MB/sec.  This is different from the PCI bus, on which ALL of the PCI devices (controllers, LAN, etc.) share the single 133MB/sec bus.
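
To make the shared-bus vs. dedicated-link point concrete, here's a tiny illustrative model in Python.  The numbers are as quoted above; the function names are made up for this sketch:

PCI_BUS_MB_S = 133        # one bus shared by every PCI device
SIL_X1_CARD_MB_S = 133    # the chip-limited x1 card discussed above

def per_device_pci(num_devices):
    # Every PCI device (controllers, LAN, etc.) splits the same bus.
    return PCI_BUS_MB_S / num_devices

def per_card_pcie(num_cards):
    # Each PCIe card has its own link, so nothing is divided between cards.
    return SIL_X1_CARD_MB_S

print(per_device_pci(2))   # two devices on PCI: 66.5 MB/sec each
print(per_card_pcie(2))    # two SiI x1 PCIe cards: 133 MB/sec each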

 

UPDATE:

 

To answer the OP's question: you are dealing with three variables - performance, cost, and expandability.

 

Performance will be best if each drive gets 100MB/sec+ even during heavy multi-disk usage (e.g., parity checks).  unRAID is not exactly a performance-critical platform, however, so most wouldn't complain too much with half that bandwidth per drive, or even less.

 

Cards with more high-speed ports (e.g., an 8-port x4 card) are typically very pricey.  That is why you see so few of them in use here.  If you found an x4 or wider PCIe card for $200-$300 that supports 8 drives, I think it would become pretty popular here.

 

unRAID expandability is currently limited to 16 array drives + 1 cache drive = 17 total drives.  Tom has hinted at raising this limit.  You only get a couple of PCIe slots, so using them wisely is important, especially if you want to be able to expand beyond that level.
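
If you're planning slot usage against that 17-drive ceiling, a quick tally like this helps (Python; the onboard port count and card combo below are made-up examples, not product recommendations):

def total_ports(onboard_sata, cards):
    # cards is a list of (slot, ports) pairs
    return onboard_sata + sum(ports for _slot, ports in cards)

# Hypothetical: 6 onboard ports plus the OP's x1 and x16 slots.
combo = [("x1", 4), ("x16", 8)]   # a 4-port x1 card + an 8-port x4 card
print(total_ports(6, combo))      # 18 ports, enough for the 17-drive max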

 

I like the Adaptec 1430SA (x4, 4-port) as good bang for the buck for folks who have an x4 or faster slot.

 

There is a ~$100 8-port PCI/PCI-X card (the Supermicro MV8), but it suffers from performance issues on the PCI bus.  I recently read about a Rosewill controller that uses this same chipset on a PCIe x1 card.  8 drives sharing 250MB/sec = ~31MB/sec per drive, which wouldn't be bad.  But I think this is the one that Tom is saying is limited to 133MB/sec.
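
For reference, the per-drive share on a single x1 link works out like this (Python; theoretical ceilings only, real-world throughput will be lower):

PCIE_X1_MB_S = 250   # PCIe 1.x x1 link ceiling

for drives in (4, 8):
    print(drives, round(PCIE_X1_MB_S / drives, 1))   # 4 -> 62.5, 8 -> 31.2 MB/sec

print(round(133 / 8, 1))   # if the chip is PCI-limited: ~16.6 MB/sec per drive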

 
