Drive performance testing (version 2.6.5) for UNRAID 5 thru 6.4



diskspeed.sh, version 2.6.5

 

UNRAID 5 Users: Download version 2.5

A new, enhanced version is being created for UNRAID 6 and later. {View}

Version 2.6.5 contains an edit by bonienl that allows it to work on UNRAID 6.4.

Note: Disks under 25 GB will be skipped. The script should work in virtualized environments.

 

This utility performs read tests at different points on each drive attached to the system (even those not assigned to the UNRAID array) to compute the average speed and generate a graph. It's useful if you want to see whether you have a drive that's slower than the others and is negatively affecting your parity drive speeds. Even drives of the same make and model can perform marginally differently; for example, compare the cache drive and disk 9 in the graph below, which was generated on my backup server using drives retired from my main server.

 

To execute, ensure no other processes are running on your UNRAID server that access your hard drives (they will skew the results toward slower speeds), then run the diskspeed.sh script. Execution time will be approximately 90 seconds multiplied by the number of drives attached to the system with the default sample & test iterations.

 

If you suspect a disk is failing, run the following command, replacing "sdx" with the drive you want to test. It will test just that drive at every 2% of its capacity (51 sample points give 50 even intervals, i.e. a reading every 2%).

diskspeed.sh -s 51 -n sdx

 

Syntax

diskspeed.sh -i # -s #

diskspeed.sh --iterations # --samples #

diskspeed.sh -f

 

Examples

diskspeed.sh              Test 11 sample points one time (0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%)

diskspeed.sh -s 11 -i 1   Same as above; these are the default values

diskspeed.sh -s 5         Test five sample points (0%, 25%, 50%, 75%, 100%)

diskspeed.sh -s 3         Test three sample points (0%, 50%, 100%)

diskspeed.sh -i 3         Test each sample point three times and take the average

diskspeed.sh -s 21        Test the drive every 5%

diskspeed.sh -f           Perform a fast test where 200 MB is read at each location instead of 1 GB. Not as accurate.

diskspeed.sh -x sda,sdb   Exclude drives sda & sdb

diskspeed.sh -n sdc,sdd   Only test drives sdc & sdd

 

More on sample points: the script always tests the start of the hard drive (0%) and the end of the hard drive (100%); the rest of the sample points are divided evenly between the start and the end. So a sample request of 3 would test the start, middle, and end of the drive.
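
As a rough illustration (not the script's actual code; DiskGB and samples are placeholder names), evenly spaced sample offsets can be computed in bash like this:

# illustrative only - variable names are placeholders, not the script's internals
DiskGB=4000     # drive size in GB
samples=3       # number of sample points requested (-s)

for (( i = 0; i < samples; i++ )); do
    # 0% at the start, 100% at the end, the rest spread evenly in between
    offsetGB=$(( DiskGB * i / (samples - 1) ))
    echo "Sample $((i + 1)) at $offsetGB GB"
done

With samples=3 on a 4000 GB drive, this prints sample points at 0, 2000, and 4000 GB.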

 

If your graph seems "spiky", try running the script with the "-i 3" option to test each location three times and take the average.

 

To view the graph, navigate to the location where you ran the script using your preferred file explorer on your UNRAID share (ex: \\tower\flash\scripts) and open the diskspeed.html file. You can toggle each drive off & on by clicking on its designation in the legend.

 

Example console output

The drives are tested in the order they were assigned by the OS.

diskspeed.sh for UNRAID, version 2.3
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

/dev/sdb (Disk 4): 107 MB/sec avg
/dev/sdc (Disk 2): 158 MB/sec avg
/dev/sdd (Disk 5): 98 MB/sec avg
/dev/sde (Disk 10): 98 MB/sec avg
/dev/sdf (Disk 6): 100 MB/sec avg
/dev/sdg (Disk 7): 99 MB/sec avg
/dev/sdh (Disk 8): 97 MB/sec avg
/dev/sdi (Disk 9): 97 MB/sec avg
/dev/sdj (Disk 3): 123 MB/sec avg
/dev/sdk (Disk 11): 104 MB/sec avg
/dev/sdl (Disk 1): 112 MB/sec avg

To see a graph of the drive's speeds, please browse to the current
directory and open the file diskspeed.html in your Internet Browser
application.
 

 

Example graph

diskspeed_2.3.png

 

How it works

The script uses the dd utility to perform a direct read at various offsets on each drive.
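
For example, a single test read at one offset might look something like the following (an approximation of the technique only; the script's actual flags, block sizes, and offsets may differ, and /dev/sdX is a placeholder):

# read 1 GB starting roughly 2 TB into the drive, bypassing the page cache;
# dd prints the elapsed time and throughput in its summary line on stderr
dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=2000000 iflag=direct 2>&1 | tail -1

Repeating a read like this at each sample point and averaging the reported speeds gives the per-drive figures shown in the report.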

 

Change Log

Version 2.6.4
Added support for UNRAID 6.3.0-RC9

Version 2.6.3
Changed memory check to ignore cache memory

Version 2.6.2
Added a check to ensure there is enough free memory available to execute the dd command
Added -n | --include option to specify which drives to test, comma delimited
Ignore floppy drives
Added support for nvme drives

Version 2.6.1
Fixed issue identifying drives assigned sdxx (more than 26 drives attached)
Fixed issue with data drives over 9 having the last digit truncated

Version 2.6
Removed checks for invalid drives (redundant)
Altered drive inventory to exclude md? devices and to identify drive/cache/parity assignments
Modified to support UNRAID 6.2 running on kernel 4.4.x and higher

Version 2.5
Fixed computation for percentages less than 10%
Reverted to 1 GB scans for better (but slower) results
Added -f --fast to scan 200 MB instead of 1 GB, same as version 2.3 & 2.4

Version 2.4
If the drive model cannot be determined via fdisk, extract it from mdcmd
Add -l --log option to create the debug log file diskspeed.log
Modified to not display the MB/sec in the drive inventory report for excluded drives
Modified to compute the drive capacity from the number of bytes UNRAID reports for external drive cards.
Added -g --graph option to display the drive by percentage comparison graph
Added warning if files on the array are open which could mean drives are active
Added spin up drive support by reading a random spot on the drive

Version 2.3
Changed to use the "dd" command for speed testing, which eliminates the risk of hitting the end of the drive. The app will read 200 MB
  of data at each testing location.
Before scanning each spot, the script uses the "dd" command to place the drive head at the start of the test location.
Added -o --output option for saving the file to a given location/name (credit pkn)
Added report generation date & server name to the end of the report (credit pkn)
Added a Y axis floor of zero to keep the graph from displaying negative ranges
Hid graph that compared each drive by percentage. If you wish to re-enable it, change the line "ShowGraph1=0" to
  "ShowGraph1=1"
Added average speed to the drive inventory list below the graph
Added -x --exclude option to ignore drives, comma separated. Ex: -x sda,sdb,sdc
Added -o --output option to specify report HTML file name

Version 2.2
Changed method of identifying the UNRAID boot drive and/or USB by looking for the file /bzimage or /config/ident.cfg if the
  device is mounted
Skip drives < 25 GB
Route fdisk errors to the bit bucket
Removed the max size on the 2nd graph to allow smaller drives to scale if larger drives are hidden

Version 2.1
Fixed GB Size determination to minimize hdparm hitting the end of the drive while performing a read test at the end of the
  drive (credit doron)
Fixed division error in averaging sample sizes (credit doron)
Updated graphs to size to 1000 px wide
Added 2nd graph which shows drive speeds in relation to the largest drive size; this is a better indication of how your
  parity speeds may run
Added drive identification details below the graphs
Added support for scanning all hard drives attached to the system

Version 2.0
Added ability to specify the number of tests performed at each sample spot
Added ability to specify the number of samples to take, minimum of 3; the first sample will be at the start of the drive,
  the last sample at the end, and the rest spread out evenly across the drive
Added help screen
Formatted the graph tool tip to display the information in an easy-to-read format
Do not run if a parity sync is in progress
Added support for gaps in drive assignments
Added support for arrays with no parity drive

Version 1.1
Fix bug for >= 10 drives in array (credit bonienl)
Fix graph bug so graph displays in MB

Version 1.0
Initial Release
 

 

UNRAID 5 Users: Use version 2.5

diskspeed.v2.5.zip

diskspeed_2.6.4.zip

 

diskspeed_2.6.5.zip


Link to comment

Yes, but I will need some instruction. I tried to figure out how to do it but failed. I am at a beginner level.

 

Telnet into your server and log in. Then enter the following command:

cp /proc/mdcmd /boot/mdcmd.txt

 

Then you'll be able to use Windows Explorer to navigate to your UNRAID server and mdcmd.txt will be in the "flash" share.

Link to comment

Nice work,

I have the same problem with the last drive (disk 10) in the array failing the test:

 

(...)

Drive 9: 168 MB/sec avg

 

/dev/=sdh: No such file or directory

./diskspeed.sh: line 58: / 1000: syntax error: operand expected (error token is

./diskspeed.sh: line 70: /tmp/diskspeed.disk10: No such file or directory

rm: cannot remove `/tmp/diskspeed.disk10': No such file or directory

 

Attached mdcmd.

 

Thanks.

mdcmd.txt

Link to comment

Yes, but I will need some instruction. I tried to figure out how to do it but failed. I am at a beginner level.

 

Telnet into your server and log in. Then enter the following command:

cp /proc/mdcmd /boot/mdcmd.txt

 

Then you'll be able to use Windows Explorer to navigate to your UNRAID server and mdcmd.txt will be in the "flash" share.

 

Got it. I was close, but still learning slowly. Thanks! Here ya go.

[Don't know if this is useful or not, but the graph does list the disks not being tested.]

mdcmd.txt

Link to comment

Very useful but dangerous script; my results are nowhere near the ones you've posted and make me want to run to the shop for an upgrade ;)

 

Version 1.0 has two minor bugs which can be easily corrected (as noted in the previous responses):

 

Change:

if [[ $linelen -gt 9 ]];then

 

into

if [[ $disk -gt 9 ]];then

 

This will allow testing to go beyond drive 9.

 

Change:

speedidx[$inc]=$(($speed * 1000))

diskavgspeed=$(($total / 10 / 1000))

 

into

speedidx[$inc]=$(($speed * 1000000))

diskavgspeed=$(($total / 10 / 1000000))

 

This will display M units in the graph.

 

And something to think about for the future: some people have non-contiguous drive assignments; it would be nice if the script could handle that.

 

Thanks again for making this available :)

Link to comment

Thanks bonienl, I implemented your drive fix into my script but needed to take a different route to fix the graph and still maintain the console output.

 

Good point on the out-of-sequence drives, I used to have one of those myself.

 

Unfortunately, I can't test the script/graph on a physical server, as the new motherboard in my UNRAID server let the smoke out last night, but fortunately I have an UNRAID test server running under VirtualPC. Things match up properly now.

Link to comment

Did a lot of work on the script yesterday adding new features. Please see the first post to download.

 

Version 2.0
Added ability to specify the number of tests performed at each sample spot
Added ability to specify the number of samples to take, minimum of 3; the first sample
  will be at the start of the drive, the last sample at the end, and the rest spread out
  evenly across the drive
Added help screen
Formatted the graph tool tip to display the information in an easy-to-read format
Do not run if a parity sync is in progress
Added support for gaps in drive assignments
Added support for arrays with no parity drive

Link to comment

Did a lot of work on the script yesterday adding new features. Please see the first post to download.

 

Version 2.0
Added ability to specify the number of tests performed at each sample spot
Added ability to specify the number of samples to take, minimum of 3; the first sample
  will be at the start of the drive, the last sample at the end, and the rest spread out
  evenly across the drive
Added help screen
Formatted the graph tool tip to display the information in an easy-to-read format
Do not run if a parity sync is in progress
Added support for gaps in drive assignments
Added support for arrays with no parity drive

 

It is getting better and better, well done  :)

Link to comment

This repeats for each disk, and it does complete for all drives. Is this expected behavior?

Ran as: diskspeed.sh -i 3

 

Performance testing parity drive at 3811 GB (hit end of disk) (100%), pass 1 of
Performance testing parity drive at 3809 GB (hit end of disk) (100%), pass 1 of
Performance testing parity drive at 3807 GB (hit end of disk) (100%), pass 1 of
(...)
Performance testing parity drive at 3727 GB (hit end of disk) (100%), pass 1 of
Performance testing parity drive at 3725 GB (hit end of disk) (100%), pass 1 of

Parity: 140 MB/sec avg

 

Link to comment

Neat script! Well done, jbartlett, and thank you!

 

A small bug (which triggers the "hit end of disk" issue) can be fixed with the following patch:

 

--- diskspeed.sh.orig	2014-01-11 19:30:04.000000000 +0200
+++ diskspeed.sh	2014-01-11 19:30:17.000000000 +0200
@@ -151,7 +151,7 @@
		rm /tmp/diskspeed.tmp
		DiskGB=${DiskGB:32}
		DiskGB=${DiskGB/" MBytes"/""}
-		DiskGB=$(($DiskGB / 1000))
+		DiskGB=$(($DiskGB / 1024))
		SlowestTest=999999999

		LoopEnd=$(( $samples - 1 ))

Link to comment

In v2.0, if I use "-s" with any number other than the default, the average comes out skewed (check e.g. with -s 3; you can check with -s 33 to get a really interesting result). The average calculation seems wrong.

 

Cumulative patch against v2.0:

 

--- diskspeed.sh.orig	2014-01-11 19:30:04.000000000 +0200
+++ diskspeed.sh	2014-01-11 19:50:26.000000000 +0200
@@ -151,7 +151,7 @@
		rm /tmp/diskspeed.tmp
		DiskGB=${DiskGB:32}
		DiskGB=${DiskGB/" MBytes"/""}
-		DiskGB=$(($DiskGB / 1000))
+		DiskGB=$(($DiskGB / 1024))
		SlowestTest=999999999

		LoopEnd=$(( $samples - 1 ))
@@ -257,7 +257,7 @@
		else
			drivenum="Drive $disk"
		fi
-		diskavgspeed=$(($total / 10 / 1000))
+		diskavgspeed=$(($total / $samples / 1000))
		echo -e "\033[1A$drivenum: $diskavgspeed MB/sec avg\033[K"
		echo
	fi
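
To see why the hard-coded 10 skews the average, here's a quick sanity check (values and units are illustrative only; the script's internal scaling differs):

# three readings of roughly 150 MB/sec, summed the way the script accumulates them
samples=3
total=$(( 150000 + 152000 + 148000 ))
echo "old: $(( total / 10 / 1000 )) MB/sec avg"        # always divides by 10 -> reports 45
echo "new: $(( total / samples / 1000 )) MB/sec avg"   # divides by the actual sample count -> reports 150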

Link to comment

Thanks doron for the help in finding & fixing bugs. I swear that 1000/1024 thing is a PITA because different parts of the app use either 1000 or 1024. It did minimize the issue of hdparm hitting the end of the drive.

 

I'm waiting for the USB key file for my backup NAS, which has a wide mix of drives I retired from my regular NAS (still out of commission due to a faulty motherboard). Once I get the key file, I'll test the script against an actual server instead of UNRAID running in an Oracle VM with three VM drives of varying sizes on a RAM drive; speeds are very unpredictable with that setup. Once everything checks out, and barring new features that pop into my mind, I'll upload the script.

 

# Version 2.1

# Fixed GB Size determination to minimize hdparm hitting the end of the drive while

#  performing a read test at the end of the drive (credit doron)

# Fixed division error in averaging sample sizes (credit doron)

# Updated graphs to size to 1000 px wide but shrinkable

# Added 2nd graph which shows drive speeds in relation to the largest drive size; this

#  is a better indication of how your parity speeds may run

# Added drive identification details below the graphs

 

Link to comment

Man, getting it to scan all drives (in the array & out) was a PITA! But here you go. Download the update in the first post of this thread.

 

 

Version 2.1

Fixed GB Size determination to minimize hdparm hitting the end of the drive while

  performing a read test at the end of the drive (credit doron)

Fixed division error in averaging sample sizes (credit doron)

Updated graphs to size to 1000 px wide

Added 2nd graph which shows drive speeds in relation to the largest drive size; this

  is a better indication of how your parity speeds may run

Added drive identification details below the graphs

Added support for scanning all hard drives attached to the system

Link to comment

Version 2.1

(...)

 

Thanks for posting the new version! Clearly lots of hard work goes into this.

 

Some issues with the new version:

 

- fdisk (on my system at least, 5.0) doesn't like GPT and spews a warning message. It doesn't interrupt the script, just aesthetics. Suggest adding "2> /dev/null" to the "fdisk -l" invocation.

- On my system, there's one HDD that's smaller than 10 GB. It's not part of the array, so it didn't show up until now. However, it now breaks the script completely, as the calculations don't seem to have anticipated such a small drive. The thing is that fdisk reports its size as "8589 MB" (MB, not GB); the script assumes the number is in GB and then hell breaks loose (okay, not THAT bad ;) ). Suggest looking at the string output by fdisk and, if the unit is MB, calculating accordingly. For all I care, you can just ignore such drives and move on. (In case you're curious, this is the unRAID boot drive, since my unRAID is virtualized and hypervisors cannot boot from USB for some god-knows-what reason.)

- During the run on that small disk, for some reason, awk spews out its input line. See below.

- Suggestion: add a new command-line argument to test only the drives in the array. Or reverse the logic: a "-a" option to test all drives, with array-only as the default.

 

This is what I get:

 

root@Tower:/boot/plug# ./diskspeed.sh

diskspeed.sh for UNRAID, version 2.1
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

Syncing disks...

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

/dev/sda (Disk 2): 148 MB/sec avg
/dev/sdb (Parity): 151 MB/sec avg
/dev/sdc (Disk 1): 144 MB/sec avg

SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 
Performance testing /dev/sdd at 0 GB (0%)
awk: BEGIN{printf("%0.0f", * 0.10)}
Performance testing /dev/sdd at  GB (10%)
awk: BEGIN{printf("%0.0f", * 0.20)}
Performance testing /dev/sdd at  GB (20%)
awk: BEGIN{printf("%0.0f", * 0.30)}
Performance testing /dev/sdd at  GB (30%)
awk: BEGIN{printf("%0.0f", * 0.40)}
Performance testing /dev/sdd at  GB (40%)
awk: BEGIN{printf("%0.0f", * 0.50)}
Performance testing /dev/sdd at  GB (50%)
awk: BEGIN{printf("%0.0f", * 0.60)}
Performance testing /dev/sdd at  GB (60%)
awk: BEGIN{printf("%0.0f", * 0.70)}
Performance testing /dev/sdd at  GB (70%)
awk: BEGIN{printf("%0.0f", * 0.80)}
Performance testing /dev/sdd at  GB (80%)
awk: BEGIN{printf("%0.0f", * 0.90)}
Performance testing /dev/sdd at  GB (100%)
Performance testing /dev/sdd at -10 GB (hit end of disk) (100%)
Performance testing /dev/sdd at -20 GB (hit end of disk) (100%)
Performance testing /dev/sdd at -30 GB (hit end of disk) (100%)
(...)
Performance testing /dev/sdd at -890 GB (hit end of disk) (100%)
Performance testing /dev/sdd at -900 GB (hit end of disk) (100%)
^C
root@Tower:/boot/plug#

 

And here's a quick-and-dirty patch that solved the problems for me. Perhaps the "correct" way to fix the size issue is to look at the fourth positional field of the fdisk output (the number of bytes) and calculate MB/GB/TB from that.

 

--- diskspeed.sh.orig   2014-01-14 09:14:43.000000000 +0200
+++ diskspeed.sh        2014-01-14 10:12:24.000000000 +0200
@@ -165,7 +165,7 @@
fi

# Inventory drives
-fdisk -l | grep "Disk /" > /tmp/inventory.txt
+fdisk -l 2> /dev/null | grep "Disk /" > /tmp/inventory.txt
sort /tmp/inventory.txt -o /tmp/inventory.txt
DriveCount=0
LastDrive=""
@@ -175,7 +175,8 @@
        CurrLine=( $line )
        tmp1=${CurrLine[1]}
        tmp2=${tmp1:5:3}
-       if [ "$tmp2" != "$FlashID" ];then
+       tmpunit=${CurrLine[3]:0:2}
+       if [ "$tmp2" != "$FlashID" ] && [ "$tmpunit" != "MB" ] ; then
                DiskID[$DriveCount]=$tmp2
                LastDrive=$tmp2
                tmp1=${CurrLine[2]}
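
For what it's worth, a rough sketch of that byte-based approach (illustrative only; the variable names are made up, and the field positions assume fdisk output of the form "Disk /dev/sda: 500.1 GB, 500107862016 bytes"):

# illustrative sketch, not a drop-in patch
while read -r line; do
    fields=( $line )
    dev=${fields[1]%:}                    # device name, e.g. /dev/sda
    bytes=${fields[4]}                    # raw byte count reported by fdisk
    DiskGB=$(( bytes / 1000000000 ))      # decimal GB, matching how drive vendors label capacity
    echo "$dev: $DiskGB GB"
done < <(fdisk -l 2> /dev/null | grep "Disk /")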

Link to comment

Thanks for those. I did test with a 25 GB drive on my Virtual PC setup, but that's the smallest I went, and I didn't test this current version on it at all since I had my backup UNRAID server up & running. I assumed it would always be GB because drives that small haven't been made in over a decade, but a virtual drive never occurred to me.

 

I'll also add an exclusion to omit the drive mounted as /boot. The script uses data from fdisk to display the location being scanned, so that's probably why it's barfing out all those awk errors. I'll redirect the fdisk errors to a file and use that to ignore devices it can't handle.
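
Something along these lines, perhaps (a rough sketch of the idea, not the actual change; file names are placeholders):

# capture fdisk warnings separately so devices it can't handle can be skipped later
fdisk -l 2> /tmp/fdisk_errors.txt | grep "Disk /" > /tmp/inventory.txt

# hypothetical check for a single device
dev=/dev/sdd
if grep -q "$dev" /tmp/fdisk_errors.txt; then
    echo "Skipping $dev (fdisk reported a problem with it)"
fi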

Link to comment
