
SATA controller card recommendations?


maxbear

Recommended Posts

Hello,

 

I am looking for a SATA controller card to add more SATA ports to my file server.

 

I don't need RAID on the card; all I want is to add more SATA ports.

 

Is there any good 8-port card? If an 8-port card is hard to find, is it better to use two 4-port cards?

 

Thanks a lot.

Link to comment

Stick with somewhat of a name brand.  My first choice for the cost-to-brand ratio is Promise.  Here's one of the only 4-channel SATA II cards: PROMISE SATA300 TX4 PCI SATA II Controller Card - OEM - $59.99

This is a controller only.  No RAID or anything else fancy.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816102062

 

The only downer is that you're limited to a quantity of 9999 ;)

 

Oz

Link to comment

Very impressive!  This $102 8-port SATA300 Supermicro card is designed for a 64-bit PCI-X interface, which is generally only found on more expensive server boards.  Perhaps someone will be our guinea pig and find out if it works (and how well) in a PCI slot, AND if it is supported in unRAID?  It's currently on backorder.

 

Link to comment

Yes, too bad, it's backorder only.

 

Just a question: if all the ports go through one PCI slot, will that make the transfer speed really slow? Or is it better if I split them across two PCI slots, or use a PCIe card?

 

PCI is fine for most people in most circumstances, as a single stream won't come close to saturating the PCI bus.  However, parity will probably take longer (especially for those with lots of drives), and those with multiple intensive streams (like concurrent HD streams) may encounter issues.

 

So, it is best to avoid the PCI bus for hard disk traffic, but it isn't a disaster for most if it happens.
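For a rough sense of the numbers, here is a back-of-envelope sketch; the bandwidth and bitrate figures are assumptions for illustration, not measurements from this thread. A classic 32-bit/33 MHz PCI bus peaks at about 133 MB/s, a single DVD stream only needs a couple of MB/s, but a parity check has to read every drive on the bus at once.

```python
# Back-of-envelope PCI bandwidth sketch (all figures are illustrative assumptions).
PCI_PEAK_MBPS = 133      # theoretical peak of a 32-bit / 33 MHz PCI bus
PCI_USABLE_MBPS = 100    # assumed realistic usable bandwidth after bus overhead
DVD_STREAM_MBPS = 1.5    # a single DVD-quality stream
DRIVES_ON_BUS = 8        # drives read simultaneously during a parity check

# One stream barely touches the bus.
print(f"One DVD stream: ~{DVD_STREAM_MBPS / PCI_USABLE_MBPS:.0%} of usable PCI bandwidth")

# A parity check splits the bus across every drive at once.
print(f"Parity check: ~{PCI_USABLE_MBPS / DRIVES_ON_BUS:.1f} MB/s per drive "
      f"with {DRIVES_ON_BUS} drives on the bus")
```

Under these assumptions a lone stream sits around 2% of the bus, while a parity check across 8 drives is limited to roughly 12 MB/s per drive by the bus rather than by the drives, which is why it takes longer.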

 

 

Bill

Link to comment

I agree with Bill, PCI is fine for most things.  Probably 99% of the time, most of us (including me) access our unRAID servers one disk at a time.  The PCI bus is not really a bottleneck for single-disk access (if not using a PCI gigabit network card).  Simultaneous disk access IS affected, but can be managed by running backups, synchronization jobs, and file moves at off hours, and by knowing and working within your media streaming limits (e.g. 2 music streams with 1 video stream, or 1 HD video plus 1 standard video stream plus 1 music stream, etc.).  You can expect slower performance during simultaneous disk operations like parity checks, parity builds, and during a drive failure, but those are hopefully rare events.
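To make the "know your streaming limits" idea concrete, here is a minimal sketch; the per-stream bitrates and the comfort margin are assumptions, not figures from this thread.

```python
# Hypothetical per-stream bitrates in MB/s (assumptions for illustration).
STREAM_RATES = {"music": 0.04, "SD video": 1.0, "HD video": 4.0}
USABLE_PCI_MBPS = 100      # assumed usable PCI bandwidth shared by disks and NIC
COMFORT_FRACTION = 0.25    # arbitrary margin: stay well under a quarter of the bus

def streams_fit(streams):
    """Sum the requested streams and check them against the comfort margin."""
    total = sum(STREAM_RATES[s] for s in streams)
    return total, total <= COMFORT_FRACTION * USABLE_PCI_MBPS

total, ok = streams_fit(["HD video", "SD video", "music"])
print(f"{total:.2f} MB/s combined -> {'fits comfortably' if ok else 'push it to off hours'}")
```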

 

The unRAID system is great in its niche, as economical and reliable protected storage.  PCI suits that philosophy.  For the highest-performance servers, you would never use PCI.  You would want faster buses, Areca and 3ware RAID cards, server boards, high-RPM drives, hardware redundancy, etc.

 

I tend to emphasize PCI Express over PCI because I live with the problems of a saturated PCI bus every day, due to money issues (current lack of).  One of my primary computers is an older 866 MHz Pentium III Dell with an AGP card, PCI gigabit network card, PCI Promise card, PCI sound card, and PCI tuner card, and I have to manage its use very carefully (no more than 25% network bandwidth) or failures occur.  I'd love to replace it with a faster computer and all PCIe-based cards.

 

(Edited: changed 'very poor performance' to 'slower performance' due to Joe's usage test below)

 

Link to comment

Just to add my 2 cents: anyone that's ever tried to expand a RAID 5 array using an Adaptec or 3ware card knows unRAID beats it hands down.  In the real world, where arrays have to expand to keep up, there are no hardware RAID solutions that touch it.

 

I know this at my expense (after spending hundreds and hundreds on now-useless RAID cards).

Link to comment
  You can expect very poor performance during simultaneous disk operations like parity checks, parity builds, and during a drive failure, but those are hopefully rare events.

I did an experiment: I tried to stress the PCI bus on my original Intel motherboard, an original MD1200 unRAID server, with an all-IDE array of 8 data drives + parity.

 

I stopped my array, made a backup copy of my config folder, unassigned a data drive on the drive assignment page, and then restarted the unRAID array.  This simulated a failed drive.

 

Then, I played a DVD ISO image file stored on the currently "unassigned" disk.  unRAID reconstructed it from the 8 other disks on the fly, and the movie played just fine.  (I was pretty happy)
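unRAID's single parity is XOR-based, so the on-the-fly reconstruction described here amounts to XORing the parity block with the corresponding blocks from the surviving disks.  Here is a toy sketch of that idea; the byte values and helper name are made up for illustration.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Toy example: three data "disks" plus a parity "disk" (real stripes are far larger).
d1, d2, d3 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = xor_blocks([d1, d2, d3])

# If d2 fails, its contents can be rebuilt from parity plus the surviving disks.
rebuilt_d2 = xor_blocks([parity, d1, d3])
assert rebuilt_d2 == d2
```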

 

I then went to each of the other PCs and media players on my LAN and did the same thing.  I played a different DVD ISO file on each, all from the currently "unassigned" drive.

 

I ended up with 4 DVDs, each playing just fine, all from a (simulated) failed disk.

 

The PCI bus on the Intel motherboard was able to keep up with 8 drives, across two Promise cards and the motherboard's IDE controller.  (I was very happy)

 

Now, it might not be able to handle multiple HD streams when in a degraded state, but there are good odds it will handle one HD stream.

 

Don't sell the PCI bus short... depending on the motherboard, it might just do better than you think.

 

At the end, I stopped the array, re-assigned the drive I had un-assigned, started back up, and it set about re-constructing the drive.

I tried again to play my movies... they still played, while the re-build was occurring... 

 

With SATA drives and a PCI-X bus, odds are you will never know the array has a failed drive unless you look at the web-interface page.

 

Joe L.

Link to comment

Joe's well-structured experiment points out a key issue that most folks don't understand ...

 

There is a difference between peak performance and real-world performance.

 

Consider two highways:

* The first is two lanes each way and designed to allow speeds of up to 150 mph - many stretches of the autobahn are like this.  Gently banked turns, incredibly smooth surfaces, no signs or other obstacles along the sides of the road.

* The second is five lanes each way and designed for speeds of up to 100 mph - the typical US highway.  Swells in the road, horribly designed expansion joints, changing road surfaces, etc.

 

Which is faster?

 

The obvious answer is "the first one", since it's 50% faster.  However, the right answer is the second one, since very few people have cars that go that fast and, even if they do, they won't typically drive that fast.  And even if someone does drive that fast, the person in front of them won't, so they will slow down to traffic speed anyway.  Real-world performance (aka throughput) on a freeway is tied far more to the number of lanes than to the peak speed of the road.

 

Similarly, too many people get caught up in "my RAID10 box does 150 MB/s" vs. unRAID's far slower speed.  But what home application needs that speed?  HD is the heaviest possible load in a typical home, and there you only need a small fraction of 150 MB/s.  So like the "extra 50 mph" on the first highway, that headroom is nearly useless when the dollars could have been spent to make more, lower-quality lanes that would add to the real-world usefulness.
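As a quick worked figure (the HD bitrate is an assumption, roughly Blu-ray class, not a number from this thread): even a heavy HD stream is only a few percent of that 150 MB/s peak.

```python
HD_STREAM_MBPS = 5       # assumed high-bitrate HD stream, roughly 40 Mbps
RAID10_PEAK_MBPS = 150   # the peak figure quoted above

print(f"One HD stream uses about {HD_STREAM_MBPS / RAID10_PEAK_MBPS:.0%} of that peak")
# -> about 3%
```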

 

Instead of ultimate speed performance, we have flexibility (drive expansion, SATA+PATA, etc.) and cost benefits.  Two things far more useful to the average home geek.  PLUS, per Joe's experiment, we still have more performance than most of us need.

 

 

Bill

 

P.S. I drove 150 mph on the autobahn north of Munich.  Fun and scary.

Link to comment
