Ultimate Unraid Server - The Sequel


GaryMaster


There are significant read and write performance gains with the better-performing I/O controllers and processors (this test doesn't yet distinguish which is responsible): an 18% improvement in writes and a 33% improvement in reads.

 

There is a huge write performance hit when using the new EARS drive vs the fast parity drive (78% drop).

 

I appreciate this type of test. One comment, though: could the differences between mobos also be due to different LAN chips?

 

And wow, that is really a huge performance drop. This was comparing two vRaptors (one data, one parity) against one vRaptor and one EARS drive, right? Same SATA ports used? That is a lot more than one would expect; I think I have seen faster write speeds reported with all-green drives around these forums.

 

Link to comment

There is a huge write performance hit when using the new EARS drive vs the fast parity drive (78% drop).

 

You didn't align the partition. Use fdisk in expert mode.

 

You can check the alignment by looking in:

 

/sys/block/sdX/sdX1/start

 

where X is the device letter.
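For example (sdb is only an illustrative device name here), on a drive already partitioned by unRAID:

cat /sys/block/sdb/sdb1/start
# 63 -> the legacy MS-DOS start sector, which is misaligned on a 4K-sector (Advanced Format) drive
# 64, or any other multiple of 8 -> 4K-aligned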

 

You can also use:

 

./hdparm -I /dev/sdb | grep Sector

 

With the later versions of hdparm, it will tell you both physical and logical sector size.
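For example, on an Advanced Format drive a recent hdparm reports something along these lines (the exact layout varies by version):

Logical  Sector size:                   512 bytes
Physical Sector size:                  4096 bytes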

 

Link to comment

There is a huge write performance hit when using the new EARS drive vs the fast parity drive (78% drop).

 

You didn't align the partition. Use fdisk in expert mode.

 

You can check the alignment by looking in:

 

/sys/block/sdX/sdX1/start

 

where X is the device letter.

Unfortunately, I think if you were to change the first partition in unRAID to start on sector 64 instead of 63, it would no longer be recognized as a valid unRAID file system. unRAID expects the partition to start on the first full cylinder as reported by the disk's geometry (typically sector 63). If the disk reported cylinders of 64 sectors rather than 63, the partition created and subsequently looked for by unRAID would start on sector 64. I do not think this is possible, since the field in the MBR for the number of sectors per track can only hold values from 1-63.

 

I've not done any experiments with this; it is just from my experience in writing the pre-clear disk utility and seeing how unRAID decides whether a disk is one it can mount or one that is foreign to it. I suppose the trick would be to get the disk's reported geometry to use cylinders of 64 sectors instead of 63. This whole historical start on sector 63 is a hold-over from the MS-DOS days... unRAID is just trying to keep the disks recognizable by MS-Windows-based tools. There is no other underlying reason for the start on sector 63, which leaves sectors 1-62 unused (sector 0 is the MBR).

 

Joe L.

Link to comment

 

I appreciate this type of test. One comment, though: could the differences between mobos also be due to different LAN chips?

 

And wow, that is really a huge performance drop. This was comparing two vRaptors (one data, one parity) against one vRaptor and one EARS drive, right? Same SATA ports used? That is a lot more than one would expect; I think I have seen faster write speeds reported with all-green drives around these forums.

 

 

Correct - two vRaptors for all tests except the EARS test where the parity vRaptor was replaced with the 1TB WD10EARS drive.

 

The LAN chips are comparable, but it is always possible that there are other motherboard specific issues at play.

 

The EARS drive was in an identical configuration, same hardware, same SATA port.

 

Manual drive alignment did not work, as Joe L. reported (I had already attempted this prior to testing, bubbaQ).

 

I notice there are some jumper configurations available on this drive, but the documentation states that they are for Windows-based OS configurations, so I didn't research it further.

Link to comment

Gary, I appreciate the tests, but... I think you need to run the following tests to get a real idea of the performance tradeoffs. Then one can determine whether the slight performance differences justify the limitations of array size and cost. Frankly, if I were even considering using 10,000 RPM drives in an array, I wouldn't consider any sort of software RAID; only hardware-based solutions would be on the table.

 

Controls:

Scenario 1) 10000rpm Data and 10000rpm Parity

Scenario 2) 7200rpm Data and 7200rpm Parity

Scenario 3) 5400rpm Data and 5400rpm Parity

 

Variables:

Scenario 4) 7200rpm Data and 10000rpm Parity

Scenario 5) 5400rpm Data and 10000rpm Parity

 

Scenario 6) 10000rpm Data and 7200rpm Parity

Scenario 7) 5400rpm Data and 7200rpm Parity

 

Scenario 8) 10000rpm Data and 5400rpm Parity

Scenario 9) 7200rpm Data and 5400rpm Parity

 

 

 

 

Link to comment

Gary, I appreciate the tests, but... I think you need to run the following tests to get a real idea of the performance tradeoffs. Then one can determine whether the slight performance differences justify the limitations of array size and cost. Frankly, if I were even considering using 10,000 RPM drives in an array, I wouldn't consider any sort of software RAID; only hardware-based solutions would be on the table.

 

Controls:

Scenario 1) 10000rpm Data and 10000rpm Parity

Scenario 2) 7200rpm Data and 7200rpm Parity

Scenario 3) 5400rpm Data and 5400rpm Parity

 

Variables:

Scenario 4) 7200rpm Data and 10000rpm Parity

Scenario 5) 5400rpm Data and 10000rpm Parity

 

Scenario 6) 10000rpm Data and 7200rpm Parity

Scenario 7) 5400rpm Data and 7200rpm Parity

 

Scenario 8) 10000rpm Data and 5400rpm Parity

Scenario 9) 7200rpm Data and 5400rpm Parity

It has already been proven that you can use the hdparm -N command to add an HPA to a larger drive to make it appear smaller, so you can still use drives that are not all the same size by forcing them to report the same capacity. That way, you can swap the parity drive and the data drive as needed for the tests.
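A minimal sketch of the idea (the device name and the sector count are only examples; newer hdparm builds may also insist on --yes-i-know-what-i-am-doing for this):

hdparm -N /dev/sdc                 # show the current and native max sector counts
hdparm -N p1953525168 /dev/sdc     # clip the visible capacity to ~1TB; the "p" makes the setting persist across power cycles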
Link to comment

Gary, I appreciate the tests, but... I think you need to run the following tests to get a real idea of the performance tradeoffs. Then one can determine whether the slight performance differences justify the limitations of array size and cost. Frankly, if I were even considering using 10,000 RPM drives in an array, I wouldn't consider any sort of software RAID; only hardware-based solutions would be on the table.

 

BRiT:

 

I will test these scenarios as time (and hardware) allows. 

 

I had posted earlier that I'm not using the vRaptors to proclaim the merits of using a 10,000 RPM drive.  It is just that the vRaptor allows me to demonstrate a best case scenario.  There are other high performance 7200 RPM drives that are fairly practical, including the 2TB Western Digital Black and 2TB WD RE4 - both put in performance numbers similar to the vRaptor in terms of average read and write throughput.  I just don't have any on hand right now (and they cost $290), so I used the raptors.

Link to comment

Also, this is not really as simple as testing 10,000 RPM vs. 7,200 RPM vs. 5,400 RPM. There are big differences in platter densities, cache sizes, and drive controllers.

 

For example, I recently benchmarked a 2 year old Western Digital Black 1TB 7,200 RPM drive against a new Green 5400 RPM drive with 500GB platters and both put in the same 85MB/s average read and write performance in Windows (HD Tune measurement).

Link to comment

An interesting read: http://www.osnews.com/story/22872/Linux_Not_Fully_Prepared_for_4096-Byte_Sector_Hard_Drives

 

$ time cp winxp.img /mnt/sdc  # ALIGNED

real    5m9.360s

user    0m0.090s

sys    0m20.420s

 

$ time cp winxp.img /mnt/sdd  # UNALIGNED

real    13m26.943s

user    0m0.110s

sys    0m19.350s

 

$ time cp -r "Computer Architecture/" /mnt/sdc  # ALIGNED

real    42m9.602s

user    0m0.680s

sys    1m59.070s

 

$ time cp -r "Computer Architecture/" /mnt/sdd  # UNALIGNED

real    138m54.610s

user    0m0.660s

sys    2m15.630s

 

This performance hit of a factor of about 3.3 is surprisingly consistent across operations, and it is severe. I've read people guessing that there would be a 30% performance loss, but a 230% performance loss is exceptionally bad.

 

 

 

Link to comment

That was a good read, BRiT.

 

Does anyone here have a proven method to align these drives for UNRAID?  I only have a few more days with this drive before I have to ship it out, so I don't have a lot of time for trial and error.

 

This magnitude of performance loss is certainly tied (at least in part) to this issue, and I would like to find out how much of the problem can be mitigated by aligning to the 4K sectors.

 

This is really an important lesson for people to see here, since I suspect 99% of users are simply going to plug this drive in and walk away because it seems to be working.

Link to comment

This is really an important lesson for people to see here, since I suspect 99% of users are simply going to plug this drive in and walk away because it seems to be working.

 

That's a pretty bold statement considering many of the users who are active on this forum have offered varying degrees of technical assistance to you in your endeavor to create "The Ultimate Unraid Server".  Of course there are no metrics to prove or disprove this comment, but I somehow find it mildly presumptuous to assume that those of us who don't do benchmark testing can't measure some discernible difference in drive performance, be it normal parity builds or data transfer.

Link to comment

This is really an important lesson for people to see here, since I suspect 99% of users are simply going to plug this drive in and walk away because it seems to be working.

 

That's a pretty bold statement considering many of the users who are active on this forum have offered varying degrees of technical assistance to you in your endeavor to create "The Ultimate Unraid Server".  Of course there are no metrics to prove or disprove this comment, but I somehow find it mildly presumptuous to assume that those of us who don't do benchmark testing can't measure some discernible difference in drive performance, be it normal parity builds or data transfer.

 

Sorry if I offended you in some way... this wasn't aimed at anyone who frequents these forums, since most who hang out here are not casual users.

 

I was just stating that I believe most users are going to plug these drives in and assume everything is OK, when in fact they are likely running their systems at a big performance deficit without recognizing it.

Link to comment

It has already been proven that you can use the hdparm -N command to add an HPA to a larger drive to make it appear smaller, so you can still use drives that are not all the same size by forcing them to report the same capacity. That way, you can swap the parity drive and the data drive as needed for the tests.

 

Joe L:  Thanks for the tip, I was not aware I could do this.

 

BubbaQ: You seem to see a way to use fdisk to align the drive. When I tried this before the test, the disk was not recognized in unRAID. Would you please provide additional detail on how you suggest this be done? I also have an Ubuntu workstation and a Windows 7 box where I could align the drive, if there is a better way to accomplish this in another OS. All of the WD tools are for Windows, and WD claims the drive is ready as-is on all other operating systems (they probably didn't have ReiserFS in mind).

 

WD claims the jumper solution is a workaround for Windows XP only and should only be applied prior to formatting, explicitly stating NOT to set the jumper after a format. After doing a little more research, it sounds like the jumper simply performs an addressing offset inside the drive itself, so that when the operating system thinks it is writing to LBA 63, it is actually writing to LBA 64. Not very elegant, but it seems that this may work.

 

Thoughts from the file system experts? 

 

http://www.westerndigital.com/en/products/advancedformat/

Link to comment

You can use the jumper only if you are creating a single partition (which unRAID does).

 

Changing the jumper after formatting will make all the data on the drive inaccessible.... but since you are just going to reformat it in unRAID anyway, it won't matter.

 

There should be a way to override the SPT (sectors per track) from 63 to 32 in fdisk, but I'll have to research how to do it.
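For reference, a rough sketch of one way to force an aligned start sector with fdisk's expert mode (hypothetical device /dev/sdd; as Joe L. describes above, unRAID may refuse to mount a partition that does not start on sector 63):

fdisk -u /dev/sdd    # -u works in sector units instead of cylinders
#  n -> create the single primary partition as usual
#  x -> enter expert mode
#  b -> move the beginning of data in partition 1; enter 64 (any multiple of 8 is 4K-aligned)
#  w -> write the table and exit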

Link to comment

BubbaQ:  Great!  I'll give it a try tonight and rerun the transfer tests.  Should be interesting.

 

When I reinitialize this drive in unRAID, it will want to rerun the parity check because parity will be invalidated.

 

Can anyone tell me whether UNRAID would continue to perform all read/write parity operations even if I cancel this check? 

 

I have been performing the parity rebuild on each of these tests because I didn't want to take the chance that unraid would perform differently if I didn't first create valid parity.

 

 

Link to comment

Thanks, WeeboTech...

 

I already started the rebuild for this test, so I guess I will just let it finish this time (two hours to go... joy!).

 

Do you think this "Trust my Array" procedure would be appropriate in my case? I am swapping disks in and out, and even though each disk is essentially blank, I wouldn't rule out the occasional piece of data being left on a drive, which would generate a parity error.

 

My concern is whether or not this will impact performance during read/write testing and consequently change the results. 

 

Rebuilding parity each time is probably the safest thing I can do to assure equality between tests, but it is very time consuming.

 

I am also still wondering if unRAID goes through all of the same motions of writing parity even if you cancel the parity check and it thinks parity is not valid.

Link to comment

I don't have all the answers on this one.

 

I would think that it won't impact read performance, and it will probably not impact write performance. But if there is any prior data on the drives, you cannot rely on parity to rebuild anything accurately (which probably doesn't matter in this case).

 

If you do a parity check there will surely be errors.

 

I think there are two cases with unRAID and parity writing.

If the array has had difficulty and/or drives have been swapped, unRAID will not re-write parity.

If parity was good and you started a parity check and canceled it, unRAID may continue to re-write parity at another point in time. (I've done this with older versions; I don't know if this is the case in more recent versions.)

Link to comment

Good news!  Setting the jumper on pins 7-8 of the WD Advanced Format drive corrects the alignment problem with unRAID. Attached is the updated performance graph showing the before and after results. There is still a significant performance hit vs. the faster vRaptor drive (a 27% drop on 4GB transfers), but it is a big improvement over the unaligned drive.

 

Unfortunately, I think users who already have these drives in use (and want the improved performance from alignment) are in for some time-consuming work unless someone creates an alignment tool for this operating system. The WDALIGN software for Windows will move existing data for the user while aligning the drive; I don't see a way to do that here.

 

 

 

[Attached image: Raptor_Write_Performance_-_EARS_Aligned_-_Unraid_4.5_Stock.JPG]

Link to comment

Great work Gary! Your benchmarks and report of findings are greatly appreciated.

 

The performance drop-off at 4GB is a bit curious to me, considering the drive performs closer than I expected in the earlier sections. Does the system by any chance have 4GB of physical memory? If so, it might indicate the earlier sections are influenced more by memory caching than by physical drive limitations.
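One way to rule out page-cache effects between runs (a standard kernel facility, nothing unRAID-specific) would be to flush the caches before each transfer:

sync
echo 3 > /proc/sys/vm/drop_caches    # drop the page cache, dentries and inodes before the next test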

 

Overall, it's not that bad, considering the EARS spins at only 54% of the vRaptor's rotational speed. I think this shows how performance is really dictated by multiple factors, not a single one.

Link to comment
Overall, it's not that bad, considering the EARS spins at only 54% of the vRaptor's rotational speed.

 

That's not the whole picture. The areal density of the EARS is higher, and the data rate is higher on the outer cylinders than on the inner ones. And you can't buy a 2TB Raptor, so it is a moot point for anyone who wants more than 300GB drives in unRAID. Also, the ICH10 is known to be faster than the ICH7... you should test the drives outside the array on each controller to get an idea of the controller's contribution.

 

If you are writing 4GB, take the time the subject drive needs to read 4GB sequentially and add the time it needs to write 4GB sequentially. Then divide 4GB by that total time, and you get the theoretical maximum write performance of unRAID for that drive. Do the same for the parity drive. Your maximum is the SLOWER of the two. If you can get over 90% of that maximum, you are doing well.
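As a rough illustration (the 100 MB/s figure is an assumed example, not a measurement from these tests):

# read  4096 MB at 100 MB/s  ->  ~41 s
# write 4096 MB at 100 MB/s  ->  ~41 s
# theoretical max write speed = 4096 MB / 82 s  ->  ~50 MB/s
# repeat for the parity drive; the slower of the two figures is your ceiling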

 

Remember, however, that disk read and write speed can vary by 50% between the outer and inner cylinders... so if you benchmark a system with a new (empty) drive, performance will decrease as the drive fills up.

Link to comment
