Real World Experience with Seagate 2T drive?


bblue


Has anyone done any i/o or other performance tests on the Seagate 2T drive, compared to the performance of a Seagate 1.5T?

 

All I've been able to find on it is one review that compares it favorably to a Seagate 320G (which is not a stellar performer) and some hints that it outperforms the WD 2T.  But the WD drive scores pretty low on almost every i/o test, so that doesn't say much either.

 

I'm trying to determine whether I'd take a big performance hit by using the Seagate 2T for parity, with plans to add more 2T data drives when the prices come down some more.

 

Any thoughts?

 

--Bill

 

Link to comment

I had the same concern regarding the 2TB's.

I'm going to wait a while until the price is much lower, or they come out with a 7200RPM 2TB drive.

 

I have a 1.5TB parity drive, and when I upgraded it from a WD 1TB Green (16MB cache) drive to the Seagate 7200RPM/32MB cache 1.5TB drive I saw a noticeable performance boost.

For me, with rtorrent and all the little files I use, I do see a difference.

 

The Seagate 2TB drive is a 5900 RPM drive with 32MB cache, so you may not notice that much of a performance difference depending on your usage pattern.

 

If anyone has a 2TB Seagate drive, can you run

 

dd if=/dev/sdx(your drive) of=/dev/null count=8192000

 

and post the results?

 

ST31500341AS - 1.5tb 7200rpm, 32MB cache

4194304000 bytes (4.2 GB) copied, 36.4742 s, 115 MB/s

 

ST31000340AS - 1TB 7200rpm, 32MB cache

4194304000 bytes (4.2 GB) copied, 39.7161 s, 106 MB/s

 

WDC WD10EACS-00ZJB0 - 1TB 5400 RPM 16MB cache

dd if=/dev/sdf of=/dev/null count=8192000

4194304000 bytes (4.2 GB) copied, 48.0557 s, 87.3 MB/s

 

 

Link to comment

My results using my various WD 2TB drives and a WD 1TB drive... It didn't seem to make any difference using the main device instead of the partition.

 

dd if=/dev/sdd of=/dev/null count=8192000

WDC_WD20EADS-00R6B0_WD-WCAVY0252670 - 2tb - 5400rpm  - 32MB - Data

4194304000 bytes (4.2 GB) copied, 40.0207 s, 105 MB/s

4194304000 bytes (4.2 GB) copied, 39.9925 s, 105 MB/s

4194304000 bytes (4.2 GB) copied, 40.4503 s, 104 MB/s

 

dd if=/dev/sdc of=/dev/null count=8192000

WDC_WD20EADS-00R6B0_WD-WCAVY0247937 - 2tb - 5400rpm - 32MB - Data

4194304000 bytes (4.2 GB) copied, 42.2654 s, 99.2 MB/s

4194304000 bytes (4.2 GB) copied, 42.6418 s, 98.4 MB/s

4194304000 bytes (4.2 GB) copied, 42.665 s, 98.3 MB/s

 

dd if=/dev/sdb of=/dev/null count=8192000

WDC_WD20EADS-00R6B0_WD-WCAVY0211284 - 2tb - 5400rpm - 32MB - Parity

4194304000 bytes (4.2 GB) copied, 41.2877 s, 102 MB/s

4194304000 bytes (4.2 GB) copied, 41.3244 s, 101 MB/s

4194304000 bytes (4.2 GB) copied, 41.2867 s, 102 MB/s

 

dd if=/dev/sda of=/dev/null count=8192000

WDC_WD1001FALS-00J7B0_WD-WMATV1120303 - 1TB - 32MB - System / Cache

4194304000 bytes (4.2 GB) copied, 38.8415 s, 108 MB/s

4194304000 bytes (4.2 GB) copied, 38.479 s, 109 MB/s

 

Link to comment

As long as I use the motherboard's SATA ports (ICH8 on this D915PBL Intel board) I get 126 MB/s on the Seagate 1.5T's with that test.  If I do the same on a budget PCI controller with a sil3114 or 3512 chip (1.5Gb/s) it drops to around 90 MB/s.  I have some better ones coming this week (with sil3132 and 3124 chips) and it will be interesting to see how they fare.

 

Also, I broke down and picked up a couple of the 2T Seagates (only $175 each at Fry's), so I'll try them in the next day or so and see how they do.

 

--Bill

Link to comment

Ok, finally got all the Seagate 1.5T and 2T drives installed in one place for testing.

 

Motherboard SATA ports (ICH8) 3Gb/s:

Parity SATA 0 2T    111 MB/s

Disk1  SATA 1 1.5T  123 MB/s

Disk2  SATA 2 1.5T  122 MB/s

Disk3  SATA 3 1.5T  122 MB/s

 

Syba PCI card w/SIL3124 3Gb/s chipset

Cache  SATA 0 500G  65 MB/s

Disk4  SATA 2 1.5T  81 MB/s

Disk5  SATA 3 1.5T  83 MB/s

Disk6  SATA 1 2T    84 MB/s

 

So the Seagate 2T stacks up reasonably well overall, especially for a 5900 RPM drive.  These read-only numbers don't really tell the whole tale, however.

 

Is the lower throughput on the PCI controller due just to this motherboard (Intel D915PBL w/3.4GHz P4, 4GB RAM), or to PCI in general?  I now have a couple of Syba PCI-e boards with the Sil3132 3Gb/s chipset for testing, but haven't done so yet.  The Sil3132 is the 2-port version of the sil3124, both of which support port multipliers based on the Sil3726 chipset.

 

With parity checking and pre-clearing taking as much time as they do, I'd certainly rather have all the drives up in the 110-120MB/s range than in the 80's, but I need to find out why the rates are lower on PCI.  Anyone with ideas about this?

 

BTW, the boards I have tested with Sil3114 and Sil3512 chipsets, which are 1.5Gb/s SATA I, perform about 20% lower on this motherboard for the same drives.  They are also PCI.

 

--Bill

Link to comment

Ok, I have finished testing the last four drives on sil3132 PCI-e cards.  The rate is the same whether or not they are connected through a port multiplier (sil3726).

 

Motherboard SATA ports (ICH8) 3Gb/s:

Parity SATA 0 2T    111 MB/s

Disk1  SATA 1 1.5T  123 MB/s

Disk2  SATA 2 1.5T  122 MB/s

Disk3  SATA 3 1.5T  122 MB/s

 

Syba PCI card w/SIL3124 3Gb/s chipset

Cache  SATA 0 500G  65 MB/s

Disk4  SATA 2 1.5T  81 MB/s

Disk5  SATA 3 1.5T  83 MB/s

Disk6  SATA 1 2T    84 MB/s

 

Syba PCI-e card w/SIL3132 3Gb/s chipset

Cache  SATA 0 500G  72 MB/s

Disk4  SATA 2 1.5T  94 MB/s

Disk5  SATA 3 1.5T  122 MB/s

Disk6  SATA 1 2T    120 MB/s

 

In the third group, the reason for the difference in throughput between SATA 2 and SATA 3, both Seagate 1.5T's, is that SATA 2 is a Seagate 1.5T LP (5900 RPM), model number ending in 541AS.  All the others are standard Seagate 1.5T's (7200 RPM), with model numbers ending in 341AS.  Not sure quite how I ended up with that LP but it wasn't intentional.  The LP version also doesn't show a temperature reading in either unRaid management or unMenu.

 

When a parity calc is running, everything goes to hell.  All the Syba controller rates drop to about 13.3 MB/s for array drives behind a port multiplier, or about 21 MB/s without the PM.  The motherboard SATA drives are also down to around 21 MB/s.  The cache drive in my case (a much slower Seagate 500G drive) drops to 30 MB/s.  If you have a bunch of files on your cache drive, mover will take forever and will undoubtedly slow down the parity process as well.
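
If you want to see exactly which drives fall over while parity and mover are running, a per-device view helps.  This is just a sketch and assumes the sysstat package (which provides iostat) is available on your build; I don't believe it ships with stock unRaid:

iostat -m 5

That prints read/write throughput in MB/s for every disk at 5 second intervals, so you can watch the array drives, the cache drive, and the parity drive separately while the parity calc runs.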

 

With parity and mover running, load average rises way up into the 5-6 range, but CPU utilization peaks at just 40%, so a faster processor (mine is 3.4Ghz) probably won't make much difference.  It's all I/O bound.

 

So, some questions for anyone who might have experience with all this:

 

1. Do the port multipliers do just basic switching between devices depending on demand, much like two devices on an IDE channel, i.e. only one device is active at a time, period?  Or is there more intelligence to it, where device switching is more transparent?

 

2. My numbers seem to imply that you'd be better off with multiple PCI-e 3Gb/s cards and motherboard SATA ports (native) than PCI-e with PM's, but this could just be the behavior of my board.  General thoughts about this?

 

3. How much impact on I/O (realistically) would a faster FSB and RAM have on a newer motherboard, without necessarily increasing CPU speed?  This board uses DDR2-800 RAM and it's operating pretty much 1:1 with the capabilities of the MB.  But newer ones can achieve up to double that without overclocking.  It seems that if this 122MB/s figure isn't really limited by the drive, a faster FSB would improve things.

 

4. Maybe in some way relevant:

 

The shfs process invoked to mount /user0 *apparently* starts using a lot of CPU (20-40%) and quite a bit of memory when mover begins, but when mover is stopped, the usage doesn't drop back to normal.  Even with Samba killed and nothing camping on any shares, you cannot umount /mnt/user0.  However, about 10-15 minutes later there's this message in syslog:

 

Sep 10 08:34:13 Bench kernel: shfs[31002]: segfault at 0 ip b7f421a3 sp bf86f02c error 4 in libc-2.7.so[b7ed0000+146000]

 

and the high CPU thrashing stops.  But of course /mnt/user0 isn't there until you recreate it with "/usr/local/sbin/shfs -cache 0 /mnt/user0 -o allow_other,attr_timeout=0,entry_timeout=0,negative_timeout=0".  Then restart Samba and the shares reappear and everything continues on.  Does anyone recognize what is happening with the crazy behavior of this process?  It takes a minute or two after mover starts before the CPU and RAM usage start escalating.  (I'm currently using 4.5b6)
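
One thing that might help narrow down the stuck umount: check whether anything still has /mnt/user0 open before the segfault happens.  A generic sketch (fuser is part of psmisc; lsof may need to be added separately depending on your build):

fuser -vm /mnt/user0
lsof /mnt/user0

If neither shows a holder yet umount still says the filesystem is busy, that would point at the shfs/FUSE process itself rather than Samba or some other client.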

 

--Bill

 

Link to comment

What are the block sizes used by default with dd?

512K? What is the Reiser FS default block size?

Just wondering if it is a valid test.

 

It's good enough to get a "general" feel of the drive performance.

hdparm -t is close to these numbers too.

 

When you dd a /dev/sdX device, it's a raw read of the device itself.

The filesystem does not come into play (unless there is other activity that moves the heads).
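
For a quick sanity check without worrying about block sizes at all, hdparm's built-in timing test lands in the same ballpark:

hdparm -t /dev/sdX

It does a few seconds of buffered sequential reads from the start of the disk and reports MB/sec; run it two or three times on an idle array and average the results.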

 

 

Link to comment

What are the block sizes used by default with dd?

512K?

 

As specified,

 

dd if=/dev/sda of=/dev/null count=8192000

 

I'd assume that count is in 512-byte blocks, which are native to the drive, for a total of roughly 4.2 GB (4,194,304,000 bytes) read in.

Using a block size greater than the default of 512 bytes will result in much better performance.

 

I would try something like

dd if=/dev/sda of=/dev/null count=128000 bs=32k

or even

dd if=/dev/sda of=/dev/null count=4096 bs=1024k
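
If repeat runs start looking suspiciously fast because of the Linux buffer cache, you can also add direct I/O to take the cache out of the picture.  This assumes a GNU coreutils dd, which supports the iflag option:

dd if=/dev/sda of=/dev/null count=4096 bs=1024k iflag=direct

Same amount of data read, just without the page cache in the middle.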

 

Joe L.

Link to comment

I would try something like

dd if=/dev/sda of=/dev/null count=128000 bs=32k

or even

dd if=/dev/sda of=/dev/null count=4096 bs=1024k

 

Joe L.

I'm sure either of those would improve things in a standalone world (I'll try later when I have a free drive).  But in terms of how they seem to work in unRAID, the numbers I'm getting with the existing test and SATA 3Gb/s interfaces are just shy of those: typically 110-120MB/s during a pre-clear (with no other disk activity).  But that could be caused (in part) by limits in this motherboard architecture.  I'm going to test that notion shortly.  The current MB is an Intel D915PBL with a P4 3.4GHz processor and ICH8 chipset, and is limited to DDR2-800 RAM (a 400MHz FSB).  I just picked up a Gigabyte EP43-UD3L MB which can handle P4 through quad core processors and up to DDR2-1333 RAM (a 667MHz FSB), with the ICH10/P43 chipset.  With some fast RAM and an E8400 3.0GHz dual core processor, it should be easy to determine the effect of the motherboard on disk i/o.

 

Which brings me to a question I was going to post separately, but maybe you know.  I read somewhere on the forum or wiki that Linux only uses one core of a processor, so a dual core doesn't help except for better speed-stepping, lower power consumption, and a faster FSB/RAM.  But if hyperthreading is active on a single core processor, does Linux take advantage of that, or is it strictly one thread whether it's running on an HT-enabled processor or not?  On my Intel board I've always run with HT disabled, assuming only one thread can be used.

 

Is there any benefit to be had, under any conditions, from having more than one thread available?

 

--Bill

 

Link to comment

Linux will utilize all the cores available.

 

At one point in time, unRaid was built without SMP support.  The latest versions have SMP support enabled.  unRaid does not need a powerful CPU, and the parity calculations are fast enough that they don't need more than a single core; they significantly outpace the slowest part of the system -- the hard drives.

 

Generally, switching from something recent like an ICH8 to an ICH9/ICH10 will not improve your hard drive performance.

Link to comment

dd if=/dev/sda of=/dev/null count=8192000 ->  36.2441 s, 116.0 MB/s - WDC_WD10EADS-00L5B1

dd if=/dev/sdb of=/dev/null count=8192000 ->  36.8217 s, 114.0 MB/s - SAMSUNG_HD103UJ

dd if=/dev/sdc of=/dev/null count=8192000 ->  53.4214 s,  78.5 MB/s - ST3400620NS

dd if=/dev/sdd of=/dev/null count=8192000 ->  56.0731 s,  74.8 MB/s - Maxtor_7V300F0

dd if=/dev/sde of=/dev/null count=8192000 ->  57.4346 s,  73.0 MB/s - Maxtor_7V300F0 (cache)

dd if=/dev/sdf of=/dev/null count=8192000 -> 164.453  s,  24.5 MB/s - Sandisk Cruzer 4GB

dd if=/dev/sdg of=/dev/null count=8192000 ->  50.077  s,  83.8 MB/s - WDC_WD5000AAKS-00TMA0

dd if=/dev/sdh of=/dev/null count=8192000 ->  43.6976 s,  96.0 MB/s - WDC_WD7500AAKS-00RBA0

dd if=/dev/sdi of=/dev/null count=8192000 ->  49.0074 s,  85.6 MB/s - WDC_WD5000AAKS-00TMA0

dd if=/dev/sdj of=/dev/null count=8192000 ->  57.0859 s,  73.5 MB/s - WDC_WD5000KS-00MNB0

dd if=/dev/sdk of=/dev/null count=8192000 ->  40.3138 s, 104.0 MB/s - WDC_WD10EADS-00M2B0 (parity)

dd if=/dev/sdl of=/dev/null count=8192000 ->  40.3138 s, 104.0 MB/s - WDC_WD10EADS-00M2B0

Link to comment

Linux will utilize all the cores available.

 

At one point in time, unRaid was built without SMP support.  The latest versions have SMP support enabled.  unRaid does not need a powerful CPU, and the parity calculations are fast enough that they don't need more than a single core; they significantly outpace the slowest part of the system -- the hard drives.

 

Generally, switching from something recent like an ICH8 to an ICH9/ICH10 will not improve your hard drive performance.

 

Thanks for the info.  I have the EP43-UD3L in now with a Core 2 Duo 3.0GHz processor and 8GB of RAM (I just happened to have a matching set).  You're correct that when testing the single-drive throughput with either the manual dd command posted earlier, or within unMenu, the numbers look pretty much the same as I was getting on the older motherboard.

 

However, in practice, pre-clears, writes to array drives w/parity, etc., are noticeably improved.  The CPU horsepower helps, but I think the biggest gain is in the drive controllers and faster busses being able to loaf along with the slower drives, plus the higher UDMA mode that the newer drives (Seagate 1.5T 7200 RPM and up) are capable of.  Even the 1T drives show good performance.  But 750G and below use an older ATA standard and don't support the higher UDMA modes, so they chug along pretty much as they always have (65-75 MB/s).  The Seagate 2T 5900 RPM can achieve *almost* the rate of a 1.5T (120MB/s vs 129 MB/s), though the actual figure seems to vary from drive to drive.  I have two 2T's in this array; one gets 120MB/s and the other 107MB/s, regardless of port position on the controllers.
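
If you want to confirm which UDMA mode a particular drive has actually negotiated (as opposed to what it supports on paper), hdparm will show the supported modes with the active one starred:

hdparm -I /dev/sdX | grep -i udma

I'd treat it as a rough check rather than gospel, but it's a quick way to spot a drive being held down at udma5 by the controller.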

 

Of course, I realize trying to make a 'performance' unRaid machine is counter to what most are doing, but it's an exercise that I enjoy pursuing.  You can learn a lot along the way if you have time to experiment.

 

--Bill

Link to comment

Hard drive controllers, what a quagmire!

 

First, if you're going for performance on a machine that is capable of it, forget most any PCI controller.  SATA ports on the motherboard and PCI-e controllers are pretty much all you have to choose from.  But like everything else, they're not created equal.
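
The rough arithmetic behind that, for anyone curious: a standard 32-bit/33MHz PCI slot tops out at about 133 MB/s theoretical for the entire bus, shared with everything else on it, and real-world aggregate throughput comes in below that.  So a single modern drive capable of 120-130 MB/s already can't run flat out on PCI, and two or more drives reading at once during a parity check have to split that single pipe, which fits the 80-90 MB/s single-drive numbers I was seeing earlier.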

 

I chose to stick with the Silicon Image chipsets that support FIS-based port multipliers for any interfaces other than the SATA ports on the motherboard.  That means a sil3132 based 2-port SATA controller, paired with one or more sil3726 port multipliers (5 ports each).  The sil3124 based boards are 4 ports, but PCI.  That might have some benefits in a traditional RAID system, but not with JBOD (separate drives) like unRaid uses.

 

One thing that is really a gain with FIS-based PM's is that they can address all the drives on that card simultaneously, eliminating much of the back-and-forth cycling between drives on a conventional controller.  The two ports of the sil3132 controller are multiplexed as well.  This seems to be the perfect place to put (for example) the cache and parity drives to speed things up.  Further, placing the most frequently written-to drives (in a fill-up scenario) on the same PM can improve total performance.

 

So that all seems like pretty positive stuff, right?  What Silicon Image doesn't say in *any* of their literature is that they only support up to UDMA mode 5, which is 100MB/s, even if the drive supports the highest mode 6 of 133MB/s.  So on one hand you have the benefit of multiplexing and FIS based sharing of drives on the same interface, at the expense of a per-drive limit of UDMA 5's 100MB/s.  OR, you have the motherboard ports which have no problem with UDMA 6 133MB/s, but no multiplexing.

 

Nifty, eh?  Now this only applies to newer drives (the last year or so) that support the ATA-8 specification, which includes UDMA mode 6.  Anything else will back down to lower rates.  There are a couple of statements in Silicon Image's detailed specifications suggesting that the entire pathway between motherboard and drives is transparent to transfer modes, be it PIO or UDMA, and that it is up to the DRIVERS to set things correctly!  They don't come right out and detail it, they just hint at it.
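
A quick, if crude, way to see what mode the driver actually negotiated for each drive is to check the libata boot messages.  The exact wording varies with kernel version, so treat this as a sketch:

dmesg | grep -i udma

On my systems each drive logs a line along the lines of "configured for UDMA/133" or "configured for UDMA/100", which should show directly whether the sil3124/3132 driver is capping things at mode 5.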

 

So, does anyone know if the current sil3124 drivers (that are used in unRaid, and support both the 3124 and 3132 chipsets) are in fact limited to UDMA mode 5?  And if so, are there newer driver modules compatible with this kernel that can be installed in their place?

 

For those who are curious, the sil3132 cards made by Syba can be had at http://www.eforcity.com/ part number PCRDSATACON4.

A variety of sil3726 based port multipliers can be had at http://addonics.com.  Addonics also has some sil3132 and 3124 based controllers, but they're quite a bit more expensive.

 

Any other card/multiplier choices?

 

--Bill

 

Link to comment

Here's some information I just came across which I've never seen mentioned here.

 

I was reading the 1.3 draft of the SATA-AHCI document to try to understand a bit more about its role in the Intel chipsets.  First, it's an Intel creation and apparently only relevant to motherboards which use the ICH series chipsets.  I don't know what version of AHCI is in each set, but the ICH10 in the newer motherboards is supposed to implement the full AHCI spec when it is enabled on the motherboard.

 

What I *didn't* know was that it supports port multipliers!  Apparently, depending on the version and the specific driver code, it may support command-based PM switching (one drive queued at a time) or FIS-based switching (multiple drives handled simultaneously).  As I read further into it, the behavior was described remarkably like the text I'd read on the Silicon Image 3726 PM chip, though it never once mentions Sil.
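
For what it's worth, the ahci driver prints its capability flags when it loads, so the boot log may tell you what the chipset claims to support.  I wouldn't rely on the exact flag names across kernel versions, but something like:

dmesg | grep -i ahci

should show the controller's reported capabilities, and newer drivers note port multiplier / FIS-based switching support there when the hardware has it.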

 

Hmmm.  So I connected one of the Sil PM's I have on to one of the motherboard SATA connectors.  It works!  And it's fully UDMA mode 6 (ATA133) compatible.  I'm running and testing it but so far so good!

 

Oddities include:

 

1) Only four of the five ports on my PM's seem to work when connected as above.  If I connect them to the Sil3132 controller, all five ports work, but UDMA mode 6 doesn't, so there appears to be an ATA100 limit imposed.

 

2) Whatever drive is connected to the lowest numbered connected port (port 0, or port 1 if nothing is on port 0) will not return all of its SMART information, and claims the drive conforms to the ATA-7 spec when the drives really are all ATA-8.  This is true regardless of which controller you connect the PM to.  It means that smartctl cannot return things like temperature (see the smartctl note after this list)...

 

3) Certain drives (just one of mine so far) will not work AT ALL if it's connected to the lowest numbered active port on the PM but is fine on any other port.  I don't know if this is controller sensitive or not.
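
On the SMART oddity in item 2: it might be worth forcing smartctl to use SAT pass-through explicitly instead of letting it autodetect.  I can't say whether it gets past whatever the PM is doing on the lowest port, but it's a cheap test:

smartctl -a -d sat /dev/sdX

If that still comes back truncated on the lowest-numbered port but works on the others, the filtering is presumably happening in the PM or its firmware rather than in smartctl.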

 

Has anyone run across any of this type of stuff?

 

On a motherboard with 6 SATA ports, you could add (presumably) 4 port multipliers and support a total of 18 drives. 

 

--Bill

Link to comment

I've tested port multipliers with the Intel chipsets.  Apparently I do not have a new enough chipset to test PM on Intel.

 

I've had the same issue with certain drives on the PM preventing all of the drives on the PM from initializing.

This was mostly with the rosewill RC-218 card.

 

With a SIL 3124 everything worked fine.

Although it reported UDMA 100, the drives still worked around the same speed.

Actually I think it was faster on the SIL hardware.

 

I'm not sure if the Linux drivers are mature enough on the Intel chipsets or the Marvell chipsets.

 

My tests were with the 1x5 port multiplier and with the 1x2 SteelVine chipset.

The SteelVine chipset worked no matter what.

In addition, it supports RAID0, RAID1, SAFE33, SAFE50, BIG, etc.

Link to comment

I've tested port multipliers with the Intel chipsets.  Apparently I do not have a new enough chipset to test PM on Intel.

 

I've had the same issue with certain drives on the PM preventing all of the drives on the PM from initializing.

This was mostly with the rosewill RC-218 card.

 

With a SIL 3124 everything worked fine.

Although it reported UDMA 100, the drives still worked around the same speed.

Actually I think it was faster on the SIL hardware.

 

What size drives were they?

 

I'm not sure if the Linux drivers are mature enough on the Intel chipsets or the Marvell chipsets.

 

Are you aware of any changes or improvements in those drivers on later Slackware versions?

 

My tests were with the 1x5 port multiplier and with the 1x2 SteelVine chipset.

The SteelVine chipset worked no matter what.

In addition, it supports RAID0, RAID1, SAFE33, SAFE50, BIG, etc.

 

Well, SteelVine = Silicon Image, and that setup consists of the sil3132 PCI-e two-port cards in tandem with the sil3726-based 5-port PM.  That's exactly the same chipset combination I'm using here, except my 3132 boards are made by Syba and the 3726 boards are by Addonics.  The sil3124 is also part of that series, but I haven't re-tested it in this motherboard; in my previous motherboard its throughput was considerably below that of the sil3132 or the built-in SATA ports.  I have the latest firmware in all of them, except possibly the PM's.  Their newest firmware is dated 2006 on the Sil website, so I'd guess they're current.

 

There is another SteelVine PM, I believe it's a 3752, but it is only two ports.

 

I wonder why we're seeing such differences in behavior?

 

--Bill

Link to comment

I've tested port multipliers with the Intel chipsets.  Apparently I do not have a new enough chipset to test PM on Intel.

 

I've had the same issue with certain drives on the PM preventing all of the drives on the PM from initializing.

This was mostly with the rosewill RC-218 card.

 

With a SIL 3124 everything worked fine.

Although it reported UDMA 100, the drives still worked around the same speed.

Actually I think it was faster on the SIL hardware.

 

What size drives were they?

 

250GB to 1TB drives.

 

I'm not sure if the Linux drivers are mature enough on the Intel chipsets or the Marvell chipsets.

Are you aware of any changes or improvements in those drivers on later Slackware versions?

 

No.

 

I wonder why we're seeing such differences in behavior?

 

The drivers are not mature enough on other hardware yet.

I think the ICH9R is the first Intel chipset to support PM.

Link to comment

eBay has a PCI-e bridged SIL3124.  (item # 280391868133)

 

The documentation says PCI-e X1, and it physically looks like a PCI-e x1 interface.  It says it provides 4 SATA300 ports, including PM support.

So in theory you could have 20 HDDs, but they would share one 250MB/s channel.  However, a couple of these cards and two PMs would give you 16 drives at reasonable cost and fairly good performance, especially if you have four or more onboard SATA ports.

 

Looks like a good card for those with PCI-e X1 slots to fill, or as a replacement for a SIL3132, giving four SATA ports as opposed to two.

 

All my SATA parts report as UDMA100 under PCI (SIL3112, 3114 and 3124).

With my PCI SIL3124 I get lots of noise in the syslog when attaching HDDs via PM.  They seem to work fine though.  Marginal performance hit using PM when running parity.

 

I don't have any PCI-e motherboards for testing yet.  Will report back when they arrive.

 

Rgds

 

Kevin

Link to comment

eBay has a PCI-e bridged SIL3124.  (item # 280391868133)

 

The documentation says PCI-e X1, and it physically looks like a PCI-e x1 interface.  It says it provides 4 SATA300 ports, including PM support.

So in theory you could have 20 HDDs, but they would share one 250MB/s channel.  However, a couple of these cards and two PMs would give you 16 drives at reasonable cost and fairly good performance, especially if you have four or more onboard SATA ports.

 

Looks like a good card for those with PCI-e X1 slots to fill, or as a replacement for a SIL3132, giving four SATA ports as opposed to two.

 

That's interesting.  I was always under the impression that to properly support 4 drives at UDMA 6 (133MB/s) you would have to use PCI-e 4X.  I just purchased one of them, so we'll see!
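
Some rough arithmetic, assuming it really is a standard PCI-e 1.x x1 link: x1 gives about 250 MB/s of raw bandwidth per direction, a bit less after protocol overhead.  Four drives all streaming at 133 MB/s would want over 500 MB/s, so with everything reading at once they'd be bus-limited to roughly 60 MB/s each; with only one or two drives active they should still run near full speed.  Either way it's a big step up from a 133 MB/s shared PCI bus.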

 

All my SATA parts report as UDMA100 under PCI (SIL3112, 3114 and 3124).

With my PCI SIL3124 I get lots of noise in the syslog when attaching HDDs via PM.  They seem to work fine though.  Marginal performance hit using PM when running parity.

 

What drives are you using?  Anything below 1T will come up as ATA-7 UDMA mode 5 (100MB/s).  Most Seagate 1T's and all 1.5T and up support the higher speeds. 

 

I just re-tested my PCI 3124 board with four ports (SIIG) and its performance is pretty dismal.  On a 1.5T 32M buffer drive where I'd normally get around 129 MB/s on a read test in PCI-e or Motherboard SATA ports, I only get around 85 MB/s with the PCI 3124... same drive.

 

I don't have any PCI-e motherboards for testing yet.  Will report back when they arrive.

 

Please do!

 

--Bill

 

Link to comment

eBay has a PCI-e bridged SIL3124.  (item # 280391868133)

 

The documentation says PCI-e X1, and it physically looks like a PCI-e x1 interface.  It says it provides 4 SATA300 ports, including PM support.

So in theory you could have 20 HDDs, but they would share one 250MB/s channel.  However, a couple of these cards and two PMs would give you 16 drives at reasonable cost and fairly good performance, especially if you have four or more onboard SATA ports.

 

Looks like a good card for those with PCI-e X1 slots to fill, or as a replacement for a SIL3132, giving four SATA ports as opposed to two.

 

All my SATA parts report as UDMA100 under PCI (SIL3112, 3114 and 3124).

With my PCI SIL3124 I get lots of noise in the syslog when attaching HDDs via PM.  They seem to work fine though.  Marginal performance hit using PM when running parity.

 

I don't have any PCI-e motherboards for testing yet.  Will report back when they arrive.

 

Rgds

 

Kevin

 

Newegg  has this card: http://www.newegg.com/Product/Product.aspx?Item=N82E16816124027

 

I think there have been a couple of people who tried it in an unRAID server without success.  Hopefully someone will get it to work.

Link to comment
What drives are you using?  Anything below 1T will come up as ATA-7 UDMA mode 5 (100MB/s).  Most Seagate 1T's and all 1.5T and up support the higher speeds. 

 

Samsung 1TB (HD103UJ).  Speed-wise I get 100MB/s reads, much better than the SIL3112/4 SATA I PCI cards (65MB/s).  This is still fast enough to feed a Gb network IMO.
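
That matches the arithmetic, for what it's worth: gigabit Ethernet tops out at 125 MB/s theoretical, and real-world transfers come in noticeably below that once TCP and SMB overhead are factored in, so a drive sustaining 100 MB/s is unlikely to be the bottleneck on a Gb link.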

 

I ran the SIL3124 under Windows with a couple of PM cards to external silos, 2 x 5 HDD RAID5 arrays.  It worked well for several years; I prefer the unRAID system though, and am migrating the data to unRAID.

Link to comment
