SATA Controller Advice



I've been looking to add a SATA controller and have been trying to find out if this card is compatible with UNRAID - the HighPoint RocketRAID 2720SGL - http://www.newegg.ca/Product/Product.aspx?Item=N82E16816115100 - but I haven't had much luck.

 

It seems like everyone on here uses the SUPERMICRO cards, but I haven't been able to find out whether the HighPoint RocketRAID controller is compatible or not. Is there anyone that uses this HighPoint card, or has at least tried it?

 

 

Link to comment

I know there are some people who have had success with the HPT controllers, but I do not remember about this one specifically.

You'll need to find out what the chipset is on the card, then compare it to the chipsets on known-working controllers.

 

Is there a reason to go with HighPoint vs. Supermicro?

So many people have had success with the Supermicro cards, and there is a wealth of knowledge about them on the board.

 

Link to comment

I realize that most people here use Supermicro cards; they seem to be the most popular choice by far! The way I see it:

 

HP 2720SGL - PCIe 2.0 x8, 6Gb/s (no, I don't have 6Gb/s drives, but any drive that I replace will be a 6Gb/s HDD), 3TB support @ $129

 

vs.

 

SM AOC-SASLP-MV8 - PCIe (1.0? not sure about this) x4, 3Gb/s, 3TB support @ $120

 

Just seems that for the price the HP card is the better choice, but whether or not it's supported by UNRAID is the question... I have read posts suggesting that HDDs would never see the benefit of 6Gb/s, but even if that's true, x4 vs. x8 PCIe means a potential throughput of roughly 2GB/s vs. 4GB/s (at PCIe 2.0 rates).

 

I haven't been able to find out what chipset the HP card is based on, so that comparison hasn't gotten me very far.

 

To be honest, I'll probably go with the SM card because there is so much knowledge here about that particular card, and I certainly don't feel the urge to compile a custom kernel for UNRAID :P

Link to comment

If each lane can do 250MB/s in one direction (500MB/s full duplex) and each hard drive maxes out around 120-130MB/s, an x4 card will handle 8 drives just fine.
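To put rough numbers on that, here's a quick back-of-the-envelope sketch (assuming PCIe v1 lane rates and the low end of the drive speeds above; the variable names and figures are just for illustration, so adjust to taste):

#!/bin/bash
# Rough sanity check: can a PCIe v1 x4 slot feed 8 spinning drives at once?
lanes=4
lane_mb=250      # PCIe v1: roughly 250 MB/s per lane, per direction
drives=8
drive_mb=120     # typical sustained rate (quoted above as 120-130 MB/s)

slot_mb=$(( lanes * lane_mb ))      # 1000 MB/s of slot bandwidth
need_mb=$(( drives * drive_mb ))    # 960 MB/s if all 8 drives stream flat out

echo "slot: ${slot_mb} MB/s   drives: ${need_mb} MB/s"
if (( need_mb <= slot_mb )); then
    echo "the x${lanes} slot is not the bottleneck"
else
    echo "the x${lanes} slot caps a parity check"
fi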

 

In days of yesteryear, unRAID was built with two Promise TX4s on a PCI bus.

 

The main reason you would want to go all-out crazy is to minimize your parity create/check time. Current magnetic hard drives usually do not exceed 130MB/s.

 

The only ones I've seen exceed that were the Samsung 1TB drives. Must be something in the cache; they maxed around 135-140MB/s on the outer tracks.

 

In this case I would go with tested, tried and true. I chose to do the same with the AOC-SATA2 cards for my build.

I dropped them in, they worked and there was no muss, no fuss, no motor cards, no need for luxureee...  ;D

 

I would not use an HPT card in my build.

I have tested and used the Areca ARC-12XX and 3Ware 9550 4LP with success.

 

So if you were going for the Areca line, I would say that might be a good choice; you can do really interesting things like running a hybrid RAID0 (parity) / RAID1 (cache) setup plus 6 drives. It does add a bit to performance on protected drives.

Link to comment
On 1/15/2012 at 10:22 PM, WeeboTech said:

If each lane can do 250MB/s in one direction (500MB/s Full duplex)

Note: these are the specs for PCIe v1 (v2 doubles them).

Quote

and each hard drive maxes out around 120-130MB/s, an x4 card will handle 8 drives just fine.

Realize that the per lane bandwidth figures are specs, and can only be relied upon to be a theoretical maximum, and certainly not a real-world minimum. Sort of like the advertising world's favorite hype term "up to ..." (which should be heard by the astute consumer as "never more than ... [and likely much less than ...]").

 

For sensible capacity planning, it's best to take real measurements. Consider the following little script. Call it with the drive letters [D in /dev/sdD] you want to test. If the first arg is X, it will test the drives sequentially and list their models--save this output for later reference. When called without the X, the drives specified are tested simultaneously. You might be (unpleasantly) surprised.

 

#!/bin/sh
# dskt - quick per-drive throughput check using hdparm.
# Usage: dskt [X] a b c ...   (letters D as in /dev/sdD)
#   With X as the first argument, drives are tested one at a time and their
#   models are listed; without it, the specified drives are tested simultaneously.
x=0
if [[ $1 == "X" ]]
then
    x=1
    shift
fi
for i in $*
do
    if (( $x == 0 ))
    then
        # simultaneous: background each test, keep only the "= NNN MB/sec" part
        echo sd$i `hdparm -t --direct /dev/sd$i | sed -n "s/^.*= / = /p"` &
    else
        # sequential: also pull the drive model from hdparm -i
        echo sd$i `hdparm -i /dev/sd$i | sed -n "/Model=/s/,.*$//p" | sed -n "s/Model=//p"` `hdparm -t --direct /dev/sd$i | sed -n "s/^.*= / = /p"`
    fi
done
wait
 

(Attached below)

I think the general premise is very useful; someone else can pick it up and run with it [comments, cleaner, etc] (I'm retired :))

 

I threw it together to get to the bottom of an underperformance situation: an Intel SS-4200 with 4x 7K2000 drives.

 

dskt X a b c d

yielded:

sda Hitachi HDS722020ALA330 = 123.49 MB/sec
sdb Hitachi HDS722020ALA330 = 122.47 MB/sec
sdc Hitachi HDS722020ALA330 = 128.06 MB/sec
sdd Hitachi HDS722020ALA330 = 130.28 MB/sec
 

dskt a b c d

yielded

sdd = 47.09 MB/sec
sdb = 122.29 MB/sec
sda = 123.53 MB/sec
sdc = 122.64 MB/sec
 

 

and

dskt b c d

yielded

sdb = 122.45 MB/sec
sdd = 130.57 MB/sec
sdc = 127.72 MB/sec
 

 

(Note: the SS-4200 has the 4 internal SATA2's on its ICH7R Southbridge.)

 

[ Weebo, I showed you mine; (please) show me yours. ]

 

--UhClem

dskt.txt

(use below instead; see comments at head of script for usage/info)

 

[5 years later !! and a few improvements ...]

 

dskt2.txt

Edited by UhClem
include updated version of script [dskt2.txt]
Link to comment

I do realize that the speed is a theoretical maximum. I also feel that sometimes people get caught up in the spec values vs. the real world. In the real world, drives rarely do 140MB/s sustained over the whole surface of the drive.

 

I've run the test on my setup. (neat test by the way).

but my config varies from others.

I have

5 drives on 6 of the motherboard ports.

5 drives on #1 pci-x AOC-SATA2 controller

5 drives on #2 pci-x AOC-SATA2 controller

parity and cache in a SAFE raid configuration on an Areca-ARC1200.

0 drives (but 4 slots available) on a PCIe x4 Rosewill MV8 controller.

 


root@atlas /boot/bin #./drivetest.sh 
usage: ./drivetest.sh [bg] a b c d
Where bg is optional to run tests simultaneously in background
       a b c d are single drive letter characters to test (no limit).

./drivetest.sh n o p q r s
sdn SAMSUNG HD103SJ = 142.47 MB/sec
sdo ST31500341AS = 116.30 MB/sec
sdp ST31500541AS = 103.47 MB/sec
sdq ST31500341AS = 124.25 MB/sec
sdr ST32000542AS = 101.11 MB/sec
sds INTEL SSDSA2CW160G3 = 260.14 MB/sec

root@atlas /boot/bin #./drivetest.sh bg n o p q r s
sdr = 107.07 MB/sec
sdo = 119.02 MB/sec
sds = 258.65 MB/sec
sdn = 142.82 MB/sec
sdq = 119.16 MB/sec
sdp = 98.39 MB/sec


5 drives on #1 PCI-X Controller
root@atlas /boot/bin #./drivetest.sh b c d e f 
sdb WDC WD10EACS-00ZJB0 = 73.17 MB/sec
sdc WDC WD10EACS-00ZJB0 = 75.25 MB/sec
sdd WDC WD10EACS-00ZJB0 = 69.84 MB/sec
sde WDC WD10EACS-00ZJB0 = 74.47 MB/sec
sdf ST31500541AS = 99.85 MB/sec

root@atlas /boot/bin #./drivetest.sh bg b c d e f 
sdb = 73.57 MB/sec
sdd = 72.09 MB/sec
sde = 75.54 MB/sec
sdf = 96.31 MB/sec
sdc = 72.10 MB/sec

5 drives on #2 PCI-X Controller
root@atlas /boot/bin #./drivetest.sh g h j k i 
sdg WDC WD10EACS-32ZJB0 = 78.18 MB/sec
sdh WDC WD10EACS-00D6B1 = 84.21 MB/sec
sdj WDC WD10EADS-00L5B1 = 85.57 MB/sec
sdk WDC WD10EACS-00ZJB0 = 72.58 MB/sec
sdi SAMSUNG HD103SJ = 137.11 MB/sec

root@atlas /boot/bin #./drivetest.sh bg g h j k i 
sdj = 83.84 MB/sec
sdg = 81.83 MB/sec
sdh = 80.43 MB/sec
sdi = 137.11 MB/sec
sdk = 72.56 MB/sec


root@atlas /boot/bin #./drivetest.sh bg b c d e f g h j k i 
sdd = 70.56 MB/sec
sde = 72.30 MB/sec
sdj = 83.92 MB/sec
sdf = 96.52 MB/sec
sdc = 75.18 MB/sec
sdi = 133.79 MB/sec
sdh = 84.72 MB/sec
sdk = 75.27 MB/sec
sdb = 73.59 MB/sec
sdg = 80.91 MB/sec

Link to comment

Here's my version of the script modified a lil

#!/bin/bash

[ ${DEBUG:=0} -gt 0 ] && set -x -v 

if [ -z "${1}" ]
   then echo "usage: $0 [bg] a b c d"
        echo " Where bg is optional to run tests simultaneously in backgroun"
        echo "       a b c d is single drive letter character to test. (no limit)."
        exit
fi

if [ "${1}" = "bg" ]
   then bg=yes
        shift
fi

for i in $*
do 
    if [[ "${bg:=no}" = "yes" ]] ; then 
        # background run: report throughput only
        echo sd$i `hdparm -t --direct /dev/sd$i | sed -n "s/^.*= / = /p"` &
    else
        # foreground run: also report the drive model via hdparm -i
        echo sd$i `hdparm -i /dev/sd$i | sed -n "/Model=/s/,.*$//p" | sed -n "s/Model=//p"` `hdparm -t --direct /dev/sd$i | sed -n "s/^.*= / = /p"`
    fi
done

wait

Link to comment

...

I've run the test on my setup. (neat test by the way).

but my config varies from others.

...

Thanks -- and thanks for posting your results. Your system is an excellent example of good-quality components that work well together and deliver the full realizable throughput of their specs.

 

A couple of additional tests I'd find interesting:

 

1) ./drivetest.sh l m  (I assume these are the drives on the ARC-1200.)

1a) ./drivetest.sh bg l m

 

[I'm curious how well a no-compromise PCIe x1 2-drive controller performs. (See below for my result on the bargain-bin variety.)]

 

and

 

2) ./drivetest.sh bg b c d e f g h i j k l m n o p q r s

  [ showing no mercy :) ]

 

I recently picked up a HighPoint Rocket 620, a 2-port PCIe v2 x1 controller using a Marvell 9128. Results:

(sequential)
sda SAMSUNG HD103SJ = 131.70 MB/sec
sdb Hitachi HDS5C3020ALA632 = 125.87 MB/sec

(and simultaneous/background)
sda = 111.75 MB/sec
sdb = 91.08 MB/sec

(using a Gigabyte GA-EP35-DS3R mobo - note: PCIe v1)

 

Weebo, before testing, check my next post for some refinements in the script.

 

--UhClem

 

Link to comment

UhClem, thanks for the script!

 

@Johnm - I looked into it and you are right! HighPoint is certainly not known for their support!  :-\

 

Just for kicks I ran UhClem's script on my setup:

 

I currently have a 4-port PCI Vantec card (this is the card I'm replacing) and it's certainly not up to WeeboTech's level :P But I at least expected to get 100 MB/s... almost but not quite, and not even close on one HDD...

 

./dskt.sh X g h i j

sdg SAMSUNG HD103UJ = 92.91 MB/sec
sdh SAMSUNG HD103UJ = 3.71 MB/sec
sdi ST31000528AS = 93.64 MB/sec
sdj ST31000333AS = 93.38 MB/sec

 

./dskt.sh g h i j

sdg = 86.99 MB/sec
sdi = 62.74 MB/sec
sdh = 13.00 MB/sec
sdj = 63.26 MB/sec

 

 

Here are the results for my 6 motherboard HDDs... I expected that they would all get above 110MB/s; some of them are rather old and may be replaced in the near future. Would the age of the drives (3+ years) have a bearing on these results?

 

root@MR_NAS:/boot# ./dskt.sh a b c d e f
sdd = 80.42 MB/sec
sdf = 112.34 MB/sec
sda = 68.28 MB/sec
sde = 127.13 MB/sec
sdb = 77.44 MB/sec
sdc = 76.36 MB/sec

root@MR_NAS:/boot# ./dskt.sh X a b c d e f
sda ST32000542AS = 107.95 MB/sec
sdb ST3500830AS = 77.95 MB/sec
sdc ST3320620AS = 77.36 MB/sec
sdd SAMSUNG HD501LJ = 82.54 MB/sec
sde ST31000528AS = 128.61 MB/sec
sdf SAMSUNG HD103UJ = 115.60 MB/sec

 

 

Link to comment

Here's my version of the script modified a lil

...

 

Modify without restraint. My version's only value was as a "proof of concept". (When I "learned" the shell, there were no variables, and the control structures were limited to "if" and "goto", which were each separate commands! And I've only nibbled on a few measly crumbs since.) "... old dog ... new tricks."

 

One minor improvement:

echo sd$i `hdparm -i /dev/sd$i | sed -n "s/^.*Model=\(.*\), Fw.*$/\1/p"` `hdparm .... `

for the foreground line. "sed ... | sed ..." is so low-rent :)

 

And, the background report is improved by having the lines output in the same order as the drive letter arguments. (not some random inconsistent order at the whim of silicon)

 

I did it using "echo ... > /tmp/dskt_$$_$i" and then, after the wait, looping again through $*, cat'ing and rm'ing each /tmp/... .

 

But, someone more adept than I at sh/bash can surely use variables (built with $i) to store each (backgrounded) echo result, and avoid my /tmp klugery.
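For anyone who wants to try it, here is a minimal sketch of the /tmp-file approach just described (untested as written; the file-name pattern follows the post above, and the rest is just illustrative):

#!/bin/bash
# Ordered background report: each test writes its result to /tmp/dskt_$$_<letter>,
# then a second pass prints (and removes) the files in argument order.
for i in $*
do
    echo sd$i `hdparm -t --direct /dev/sd$i | sed -n "s/^.*= / = /p"` > /tmp/dskt_$$_$i &
done
wait                        # wait for all the simultaneous tests to finish
for i in $*                 # second loop restores the original argument order
do
    cat /tmp/dskt_$$_$i
    rm -f /tmp/dskt_$$_$i
done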

 

I believe this has the potential to be a very useful little tool, and if packaged properly, could lead to a lot of interesting data on various controllers (and their usable limitations) and, in the case of many budget/spare-parts builds, realistic throughput expectations. I invite/encourage someone to pursue this. [I lack the motivation/justification ... (remember, old dog ...)]

 

--UhClem

 

 

 

 

Link to comment

1) ./drivetest.sh l m  (I assume these are the drives on the ARC-1200.)

 

These are not really good tests.

I have the ARC-1200 configured in a special way with two 1.5TB drives.

 

The drives are in a RAID set, configured in a SAFE33 arrangement.

 

70% of each of the two drives is configured as RAID0; this is the parity drive (sdl).

30% of each of the two drives is configured as RAID1; this is the cache and app drive (sdm).

 

root@atlas /boot/bin #./drivetest.sh  l m 
HDIO_DRIVE_CMD(identify) failed: Invalid exchange
HDIO_GET_IDENTITY failed: Invalid argument

sdl = 173.24 MB/sec

HDIO_DRIVE_CMD(identify) failed: Invalid exchange
HDIO_GET_IDENTITY failed: Invalid argument

sdm = 136.45 MB/sec

 

Doing both of them together will kill the statistics, because both virtual drives use the same physical drives in different ways.

Here goes.

 

root@atlas /boot/bin #./drivetest.sh  bg l m 
sdl = 134.17 MB/sec
sdm = 59.47 MB/sec

 

 

And the big one... it makes my machine look like some kind of Christmas decoration or disco lights.

 

root@atlas /boot/bin #./drivetest.sh bg b c d e f g h i j k l m n o p q r s
sdb = 71.16 MB/sec
sdm = 96.61 MB/sec
sdd = 71.76 MB/sec
sdj = 83.74 MB/sec
sdr = 101.48 MB/sec
sdi = 137.16 MB/sec
sdf = 97.25 MB/sec
sdg = 81.56 MB/sec
sdq = 120.65 MB/sec
sdh = 81.55 MB/sec
sdl = 159.92 MB/sec
sde = 74.40 MB/sec
sdk = 74.42 MB/sec
sdn = 141.81 MB/sec
sdc = 75.58 MB/sec
sdo = 118.98 MB/sec
sds = 252.29 MB/sec
sdp = 99.81 MB/sec

 

I'm not even sure they all run at the same time. I'll have to investigate it more, and also see if I can populate the x4 card and test on that.

Link to comment

1) ./drivetest.sh l m  (I assume these are the drives on the ARC-1200.)

 

These are not really good tests.

I have the ARC-1200 configured in a special way with two 1.5TB drives.

 

The drives are in a RAID set, configured in a SAFE33 arrangement.

 

70% of each of the two drives is configured as RAID0; this is the parity drive (sdl).

30% of each of the two drives is configured as RAID1; this is the cache and app drive (sdm).

Aaaahh! I noted your earlier mention of the RAID0/RAID1 double-duty, but I didn't anticipate that the "hardware" drive slots would be usurped to serve that end. Gee, I wonder what will happen if ...

 

root@atlas /boot/bin #./drivetest.sh  l m 
HDIO_DRIVE_CMD(identify) failed: Invalid exchange
HDIO_GET_IDENTITY failed: Invalid argument

sdl = 173.24 MB/sec

HDIO_DRIVE_CMD(identify) failed: Invalid exchange
HDIO_GET_IDENTITY failed: Invalid argument

sdm = 136.45 MB/sec

Yup--an identity crisis :). (And accessing SMART must cause retardation.)

The (individual) transfer rates are good, though.

Doing both of them together will kill the statistics, because both virtual drives use the same physical drives in different ways.

Here goes.

 

root@atlas /boot/bin #./drivetest.sh  bg l m 
sdl = 134.17 MB/sec
sdm = 59.47 MB/sec

I agree. All bets are off in this scenario. (I expected worse.)

And the big one... it makes my machine look like some kind of Christmas decoration or disco lights.

...

Very impressive!  The numbers, that is. (I can't see the lights.)

The l & m rates (combined) are (possibly) too good at ~257 MB/sec--it does make one question the "simultaneity". But, doesn't all the report output come in a single burst? And within 3-4 seconds of invocation? (As much as I like a good puzzle, I'm gonna pass on this one. Too many unknowns.)

 

I can't resist asking this one question: Given this l/m (parity/cache) set-up, how does it impact the performance of copying cached files to the array? How can it avoid a fair amount of "head-thrashing" (due to reads on m competing for the heads with writes to l)?  Very large buffers??

 

--UhClem

 

 

Link to comment
I can't resist asking this one question: Given this l/m (parity/cache) set-up, how does it impact the performance of copying cached files to the array? How can it avoid a fair amount of "head-thrashing" (due to reads on m competing for the heads with writes to l)?  Very large buffers??

 

rsync has its own buffers (two processes are run and communicate with one another), plus there is a RAM read/write-back cache on the controller.

I rarely use the cache drive to cache user share writes.

 

I use the cache more for local file system storage (since it's RAID1 and retains writes after reboot).

I can't wait for the day we have a union FS layer in the kernel.

 

The array itself performs well enough for writing to the disk shares with the faster RAID0 parity.

I get anywhere from 40-50 MB/s burst for the first 500MB which then slides slowly down to about 35-40MB/s after about 10GB.

Besides I have so many machines, I can easily start a copy to a disk share and go do something else somewhere else.

Teracopy & rsync are great tools for that, as they provide a level of confidence.

Link to comment

...

I rarely use the cache drive to cache user share writes.

Well, that sort of removes the premise for my question.

(Just because my cat is named "Dog", don't expect him to bark.)

 

Btw, does Areca provide any workaround to allow for explicit SMART checks (ie, smartctl) with your set-up?

 

Re: disktest.sh

If you do play around with it some more (Rosewill x4 etc.), it would be interesting to know what the "tipping point" is on that and the AOC-SAT2-MV8 (6-8 drives), if re-cabling drives for a quick test is feasible.

 

"If you push something hard enough, it will fall over." --Fud's First Law of Opposition

 

--UhClem

 

 

Link to comment

Btw, does Areca provide any workaround to allow for explicit SMART checks (ie, smartctl) with your set-up?

I think if you set the card up in JBOD mode it does. I cannot remember, I'll have to test with my x8 card one of these days.

 

Re: disktest.sh

If you do play around with it some more (Rosewill x4 etc.), it would be interesting to know what the "tipping point" is on that and the AOC-SAT2-MV8 (6-8 drives), if re-cabling drives for a quick test is feasible.

 

I have no intention of rewiring the AOC-SAT2-MV8 card. The card's SATA ports are finicky.

 

This weekend I added one SATA cable to each card, then somehow bumped the other cables out, and it was a bear rebooting and checking if all the drives were online.

 

The current configuration of 5 drives per controller works because 2 ports are on top and 3 are on the other side of the card.

The next 3 are on top of the first three, which lends itself to crowding and interference from the card next to it.

 

Since I have real data on this machine and it holds back the whole network while it's down, I refrain from tests on it.

I moved all spinners and large storage to this machine and put SSD's in all the others. They all rely on having the server up.

So when it's down, it's a pain.

 

This is why I chose hardware that was a little expensive, but solid, stable and reliable.

The only trouble I get is when I go into the machine and bump a cable.

I do have cables with clips, but the card does not lock them in.

Plus, if you look closely, when you use the outer ports, the plastic SATA holder bulges slightly.

 

With the x4 card I'll have to see; maybe I can re-jumper it for the 2 external ports and use the two internal ports simultaneously.

 

Although I can easily support 20 drives, I'm only using 15 data drives,  and hybrid cache/parity. 17 total.  3 unused, but standby.

I only installed this card so I could preclear and move data from the current set of 15 data drives to spare drives rapidly, or reassign a failed drive to a new slot without moving drives. Sort of a warm standby spare.

 

I have another 9 drive machine I can test with at another time.

Link to comment

Btw, does Areca provide any workaround to allow for explicit SMART checks (ie, smartctl) with your set-up?

I think if you set the card up in JBOD mode it does. I cannot remember, I'll have to test with my x8 card one of these days.

I wouldn't expect any problem in standard/JBOD ("bunch of disks" *not* "big ol' disk") since each drive has a dedicated slot/letter/minor-dev. I was just surprised that Areca did not make some other provision, in the hybrid set-ups, to allow for normal usage of, ie, smartctl. Even if it meant arbitrarily associating each of the two drives to one, or the other, drive slots for low-level (ie, IDENTIFY and SMART) queries. Not trying to be a trouble-maker, just a little nit-picking.
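(For what it's worth, newer smartmontools builds do have an Areca pass-through device type; something along these lines might work, though I haven't tried it on this particular card, and the /dev/sg node and slot number below are just placeholders:)

# query SMART on the drive in Areca slot 1, via the controller's SCSI generic node
smartctl -a -d areca,1 /dev/sg2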

 

Re: disktest.sh

If you do play around with it some more ... if re-cabling drives for a quick test is feasible.

I have no intention of rewiring the AOC-SAT2-MV8 card. The card's SATA ports are finicky.

 

This weekend I added one SATA cable to each card, then somehow bumped the other cables out, and it was a bear rebooting and checking if all the drives were online. ...

No further explanation needed. "Not feasible" is a gross understatement. I only mess with 4-6 drive configs and curses are a standard part of the protocol.

 

Thanks for at least considering it. I only asked since you mentioned possibly mucking with the x4. Please don't expend any effort  on this. But, if you do expand your MV8 config, or populate the x4, and can rerun the "bg" test, please PM me the results.

 

I'm surprised that such real-world throughput limits for various controllers are not easily found. For example, someone documented that an Areca ARC-1261ML (16-port PCIe x8) maxes out at 800-900 MB/sec. That's a lot, but not for the $$$ *and* taking an x8 slot *and* "supporting" 16 drives, imho. Yes, I realize you get all kinds of Raid goodness, but ...

 

PS re:disktest.sh, I've accepted having to use /tmp files (for ordered report output of a bg run), since using shell variables in conjunction with background processing proved futile (for me, at least).

 

--UhClem

 

Link to comment

OK, now I officially do not like you guys...

 

I have to figure out why my SDJ is so slow.

 

dskt.sh bg b c d e f g h i j k l m yields:

sdi = 110.50 MB/sec
sde = 137.39 MB/sec
sdf = 136.51 MB/sec
sdh = 125.49 MB/sec
sdg = 130.94 MB/sec
sdj = 39.88 MB/sec
sdl = 132.40 MB/sec
sdd = 115.17 MB/sec
sdb = 455.86 MB/sec
sdc = 124.61 MB/sec
sdm = 125.33 MB/sec
sdk = 132.14 MB/sec

 

This is on my ESXi server.

SDI is a 1TB EARS that is not part of the array.

SDB is my Cache drive.

All of the rest are 3TB Hitachi greens.

 

All 12 drives are on a single M1015 with an Intel expander.

 

[the X command fails when I try it]

 

 

EDIT:

OK, I ran it 10 minutes later and now a different drive is slow.

I must have cache_dirs or something accessing that drive during the test.

sdi = 99.08 MB/sec
sdj = 114.81 MB/sec
sdd = 104.47 MB/sec
sdg = 118.85 MB/sec
sdk = 119.32 MB/sec
sde = 125.57 MB/sec
sdl = 118.38 MB/sec
sdf = 123.08 MB/sec
sdm = 112.71 MB/sec
sdh = 39.99 MB/sec
sdc = 107.58 MB/sec
sdb = 449.77 MB/sec

Link to comment

mine is all over :P

 

root@p5bplus:/boot# ./dskt.sh b c d e f g h i j k l m n o p q r s t u v w x
sdi = 113.05 MB/sec
sdr = 97.17 MB/sec
sdh = 39.29 MB/sec
sdf = 35.28 MB/sec
sdb = 53.27 MB/sec
sde = 41.36 MB/sec
sdd = 38.66 MB/sec
sdq = 86.52 MB/sec
sdg = 38.42 MB/sec
sdt = 97.48 MB/sec
sdn = 86.63 MB/sec
sdj = 116.44 MB/sec
sdp = 75.62 MB/sec
sdl = 85.88 MB/sec
sdv = 17.07 MB/sec
sdo = 87.02 MB/sec
sdx = 17.92 MB/sec
sdk = 96.47 MB/sec
sds = 74.47 MB/sec
sdu = 73.60 MB/sec
sdm = 105.27 MB/sec
sdw = 40.27 MB/sec
sdc = 41.84 MB/sec

 

sdv and sdx are on the eSATA port with a port multiplier.

The rest, I guess, is a bit all over because Headphones is scanning.

To accurately test this, I guess you need to shut down all other apps.

 

 

Link to comment

Hi uhclem, Weebo

 

Nice script. Need to see how the new build performs.

 

Could you maybe move it to the User Customizations forum so it does not get lost :P

Why don't you? :)

 

I may have conceived the little rascal, but I really don't have the motherly instinct to nurture it. Yes, it would be nice if it could mature into a useful member of society. So, if someone capable (Weebo, JoeL, Bubba etc.) wishes to adopt it, ...  (But, remember, it's not your kid!--just say it came from some Bozo.)

 

--UhClem  "5 jobs; 2 detached."

 

Link to comment

...

So, if someone capable (Weebo, JoeL, Bubba etc.) wishes to adopt it, ...

--UhClem

 

I'll adopt it and post it on my Google Code site tonight.

Link to comment

...

I'll adopt it and post it on my Google Code site tonight.

Weebo, just a little 411: the X switch does not detect my hardware on my M1015. I get an error back. What info do you need from me to try and fix this?

Link to comment

...

Weebo, just a little 411: the X switch does not detect my hardware on my M1015. I get an error back. What info do you need from me to try and fix this?

 

X switch or i switch?

Link to comment

If I run it like this

 

root@Goliath:/boot# dskt.sh b c d

HDIO_GET_IDENTITY failed: Invalid argument
sdb = 455.93 MB/sec
HDIO_GET_IDENTITY failed: Invalid argument
sdc = 137.57 MB/sec
HDIO_GET_IDENTITY failed: Invalid argument
sdd = 129.74 MB/sec
root@Goliath:/boot#

Is my output.

 

It looks like it is my setup, not the script.

If I run

root@Goliath:/boot# hdparm -i /dev/sdb

/dev/sdb:
HDIO_GET_IDENTITY failed: Invalid argument
root@Goliath:/boot# hdparm -i /dev/sdc

/dev/sdc:
HDIO_GET_IDENTITY failed: Invalid argument
root@Goliath:/boot#

 

and if I run

root@Goliath:/boot# hdparm -I /dev/sdb

 

/dev/sdb:

 

ATA device, with non-removable media

        Model Number:      OCZ-VERTEX3

        Serial Number:      OCZ-92413U56JQ580172

        Firmware Revision:  2.15

        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0

~snip~

root@Goliath:/boot# hdparm -I /dev/sdc

/dev/sdc:

ATA device, with non-removable media
        Model Number:       Hitachi HDS5C3030ALA630
        Serial Number:      MJ1321YNG14A6A
        Firmware Revision:  MEAOA580
        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b
~snip~

It knows my hardware; it's just the -i call that is failing to return the hardware info.

I'd guess it has something to do with my setup.
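Maybe something like this would work as a fallback in the script's foreground line (untested, just a sketch; the "model" variable is only for illustration). It falls back to hdparm -I, which clearly does see the hardware here, when -i comes back empty:

model=`hdparm -i /dev/sd$i 2>/dev/null | sed -n "s/^.*Model=\(.*\), Fw.*$/\1/p"`
# fall back to -I (works behind the M1015) if -i returned nothing
[ -z "$model" ] && model=`hdparm -I /dev/sd$i 2>/dev/null | sed -n "s/^.*Model Number: *//p"`
echo sd$i $model `hdparm -t --direct /dev/sd$i | sed -n "s/^.*= / = /p"`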

Link to comment
