SSDs as RAID 0 Cache Drive



4) I would miss the camaraderie and concern for my personal finances which can only be found here in the unRAID forums!

We are not only frugal with our money, but also with your money.  ;D

 

I doubt the extra $200 or so will delay your retirement by a few years, but it might, and we care.

 

I can appreciate your need for speed.  Obviously your usage profile is different from that of many of us.  I do share your love for unRAID's hardware independence.

 

Joe L.


More to the point, why marginally optimize cache drive speed when your LAN is the bottleneck?

 

Throw 4GB in a server and set up a large RAMdisk, and test your writes to the RAMdisk.  When you can write more than 70MB/sec to a RAMdisk on unRAID, then you should start considering a faster cache.... until then, you are just rearranging deck chairs on the Titanic.
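A quick way to run the RAM-disk test described above is to write into the tmpfs that most Linux systems (unRAID included) already mount at /dev/shm; the file name and transfer size here are only illustrative:

```shell
# Time a 32 MB write into RAM-backed tmpfs; dd reports the throughput
# when it finishes. If this number isn't well above 70 MB/s, the cache
# drive was never your bottleneck. (Path and size are illustrative.)
dd if=/dev/zero of=/dev/shm/ramtest bs=1M count=32 conv=fdatasync

# clean up the test file
rm -f /dev/shm/ramtest
```

A larger count (e.g. a few GB, if RAM allows) gives a steadier number; the point is only to compare this figure against what you see over the wire.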

 

I'm in agreement with this, which is why I suggested one large SSD vs. two small ones.

Also, add as much RAM as your mobo/wallet can handle.

 

4) I would miss the camaraderie and concern for my personal finances which can only be found here in the unRAID forums!

 

I'm sorry, Gary.  Camaraderie, yes.  Concern for your wallet compared to pushing unRAID to the limits?  Maybe not.  LOL.


You guys crack me up (thanks for caring, Joe L.!). 

 

I haven't given up on the SSD cache drive, but maybe trying to run two for the sake of cheaper small units is more trouble than it is worth.  If I can find a decent bargain on a 60-80GB drive with 150MB/s write speeds, it will most certainly saturate the network.

 

I hate to say it, but part of me still wants to know if the system will recognize the stripe set correctly and (even though I already know the answer) test the difference in throughput.  I think I have some kind of unhealthy OCD addiction going on here.

 

purko:

 

At the risk of offending the unRAID admins, what operating system would you recommend if I wanted to create a separate homebrew NAS to experiment with the speed potential of a standard RAID 5 set?  I would ideally want to do this using the intel onboard raid to keep a very small form factor.  It sounds like you have spent some time with these systems.  I have done RAID 5 arrays on a workstation but never as a dedicated NAS.


At the risk of offending the unRAID admins, what operating system would you recommend if I wanted to create a separate homebrew NAS to experiment with the speed potential of a standard RAID 5 set?  I would ideally want to do this using the intel onboard raid to keep a very small form factor.  It sounds like you have spent some time with these systems.  I have done RAID 5 arrays on a workstation but never as a dedicated NAS.

 

I would suggest getting your hands dirty and installing Slackware.  Once you do that, you can compile your own unRAID packages and do your speed tests with the standard RAID modules, or recompile the kernel with unRAID drivers and custom controller drivers.


purko:

 

At the risk of offending the unRAID admins, what operating system would you recommend if I wanted to create a separate homebrew NAS to experiment with the speed potential of a standard RAID 5 set?  I would ideally want to do this using the intel onboard raid to keep a very small form factor.  It sounds like you have spent some time with these systems.  I have done RAID 5 arrays on a workstation but never as a dedicated NAS.

 

Slackware is your answer.  (I see that WeeboTech beat me to that)

But you may also want to check this.

 


Slackware is your answer.  (I see that WeeboTech beat me to that)

But you may also want to check this.

 

Thanks!  I was already thinking about an Ubuntu server with Apache for web administration.  I haven't used Slackware before.  Would you mind saving me a few hours of web research and giving me your opinion on what Slackware does better than Ubuntu?  I have some experience setting up Ubuntu web servers already, so that would be more familiar territory for me.


Gary,

 

I'm the proud owner of two new (less than a month old) SSDs:

 

Corsair Reactor Series CSSD-R60GB2-BRKT 2.5" 60GB USB 2.0 & SATAII Internal Solid State Drive (SSD)

Sequential Access - Read  	up to 250MB/s
Sequential Access - Write 	up to 110MB/s

and

OCZ Agility Series OCZSSD2-1AGT30G 2.5" 30GB SATA II MLC Internal Solid State Drive (SSD)

Sequential Access - Read  	Up to 185 MB/s
Sequential Access - Write 	Up to 100 MB/s

 

While I cannot stripe them, I would be happy to try each one as a cache drive in my server and report the results, if that would help your cause.


 

While I cannot stripe them, I would be happy to try each one as a cache drive in my server and report the results, if that would help your cause.

 

Fantastic!  I'd love to take you up on that!  

 

Would you mind humoring me and running the IOzone measurement I referenced in post #9 of this thread from your windows workstation to see what the throughput differences are over a direct cable connection?  This would give me a way to compare it to what I am getting here.

 

Do you currently use a rotating cache drive that would allow us to see how it compares to this "baseline" method?


Would you mind humoring me and running the IOzone measurement I referenced earlier from your workstation to see what the throughput differences are over a direct cable connection?  This would give me a way to compare it to what I am getting here.

As long as it's not too difficult, sure.

 

Do you currently use a rotating cache drive that would allow us to see how it compares to this "baseline" method?

Yes, I currently use a rather old Seagate 320 GB 7200 rpm w/ 8 MB cache as my cache drive.  I'll provide those numbers as well.


That's awesome! It's very easy stuff.

 

Just download the free utility here:

 

http://www.iozone.org/

 

Open a command prompt as administrator by right-clicking on the command prompt and selecting "Run as administrator", browse to your installation directory, and then paste the following line
(I believe the default is "program files (x86)/benchmarks/Iozone 3.321"):

 

iozone -Rab RESULTSFILENAME.xls -i 0 -i 1 -+u -f \\tower\disk1\filetest -y 64k -q 64k -n 64k -g 8G -z

 

To eliminate noise variables and network traffic, I always disable my virus scanner and plug the unRAID server I am testing directly into my workstation rather than going through a switch.  I'm not sure these make much difference, but that's what I do.  I don't do anything else on the workstation while the test is running.  Set your computer's sleep time to something > 1 hour, because the test may take 30 minutes or so to get all the way through the 8GB files.

 

You would replace tower\disk1 in the command line above with the name of your server and the share you want to test.  filetest is just an arbitrary name for a file it will create and then delete.  After it finishes, it will put an Excel spreadsheet with the name you specify in RESULTSFILENAME into the Iozone 3.321 directory.  It will provide read and write transfer rates (in kB/s) for file sizes ranging from 64kB to 8GB.  There will also be columns for re-read and re-write speeds so that you can see how effective buffering is for files that are re-read or re-written.

 

Another nice thing about this when you are done is that you can directly compare your performance to commercially available NAS systems here (he takes the average of the transfer rates for files sized 32MB to 4GB):

 

http://www.smallnetbuilder.com/index.php?option=com_nas&Itemid=&chart=13

 


It looks like the fastest NAS ever tested at his site measures:

84.5MB/s for Average Read

110.6MB/s for Average Write

 

The fastest non-cache unRAID box I have ever tested measures:

52.5MB/s for Average Read

49.5MB/s for Average Write

 

These are the averages of file sizes from 32MB to 4GB only from the Iozone output results (8 data points each for read and write).
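For anyone repeating this comparison, the averaging step can be scripted.  Here is a rough sketch; the two-column `writer.txt` layout (file size in kB, throughput in kB/s) and the sample numbers are made up for illustration, not real results:

```shell
# Sample WRITER rows (file size in kB, throughput in kB/s); in practice
# these would come from the IOzone spreadsheet export.
printf '16384 60123\n32768 51200\n1048576 49152\n4194304 46080\n8388608 30000\n' > writer.txt

# Average only the 32 MB .. 4 GB rows (32768..4194304 kB) and convert
# the kB/s figure to MB/s, matching the smallnetbuilder methodology.
awk '$1 >= 32768 && $1 <= 4194304 { sum += $2; n++ }
     END { printf "%.1f MB/s over %d sizes\n", sum / n / 1024, n }' writer.txt
# -> 47.7 MB/s over 3 sizes
```

The 16 MB and 8 GB rows are deliberately excluded by the range test, which is why only three sizes contribute to the average here.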

 

I believe he tests with a command line ending in "4G -z" instead of what I posted above (he only tests up to 4GB transfers).

I run the tests through 8GB to measure any buffering effects that may be occurring with 4GB of RAM in the machine.

 

I post this only as anecdotal information in case anyone is curious where unRAID falls in the spectrum.  Most know it wasn't designed for maximum speed.  But it is nice to know that there are quite a few modern-day commercial NAS platforms out there that perform substantially worse than unRAID.


Thanks for the suggestion, PhatalOne, but the case has additional space only for a 2.5" HDD, and I am also trying to maximize performance and minimize my power draw at the same time.

 

I already have a couple of the vRaptors here.  The review I read had it performing almost as well as the vRaptor, if I recall; I don't remember it being better.  In addition, the vRaptor is a 2.5" notebook-sized HDD and consumes much less power.  However, it uses a giant heat sink that makes it take up the full 3.5" bay, or I might have considered doing this.

 

I'm really just trying to determine if I can run onboard ICH10R RAID to stripe one pair of drives, run the others as standalone disks on the same controller, and have the configuration recognized properly by the unRAID OS without any performance penalties.

 

I may be better off waiting for a single 80GB SandForce SSD that will hit 200MB/s write speeds and not complicate things so much.

 

Onboard RAID does work on this MB:  http://lime-technology.com/forum/index.php?topic=3630.0  but at the time (I have not tried it with the latest rev of unRAID) it did have a few quirks, e.g. not shutting down the two drives.

 


Another angle to consider is that a person can run a large number of low-power, low-cost green drives and spend a little of the money saved on a small SSD, which will idle at 0.1W.  This reduces the cost of the drives substantially vs. higher-performance drives, while providing a very nice reduction in power consumption for large arrays.

 

With the SSD cache drive, the write speed for the user is far better than an array filled with high performance drives and the power consumption is optimized.

 

I hear what everyone is saying regarding performance and agree that I could blow away this performance by going to a full hardware RAID 5 system for my data and use freeNAS or the like... but I really like the functionality of unraid and the flexibility it offers me on drive configuration, so I plan to stick with it.  Striping the SSD is probably overkill for any sane person - I just wanted to know if it would work because I had an opportunity to pickup two smaller units for a good price.  Besides, it's my money I'm wasting, so there is no reason for anyone to become personally upset or concerned!

 

Gary, if money is not an issue, why not try a Fusion-io card for your cache?  Not sure if it would work in unRAID without the drivers, but then you would know you had hit the max speed you can have.  I say Fusion-io, but everyone has a PCIe RAM/SSD storage device today.  SLC is better than MLC for long-term storage.


Onboard RAID does work on this MB:  http://lime-technology.com/forum/index.php?topic=3630.0  but at the time (I have not tried it with the latest rev of unRAID) it did have a few quirks, e.g. not shutting down the two drives.

 

 

Good to know.  These guys have already convinced me that I should just find a fast 80-120GB drive rather than mess around with the added complexity of doing a stripe set to try to force two smaller drives to work.  I'm eagerly awaiting Weebotech's results.


 

Now to torment you some more: what if you ran multiple software RAID 5 sets on Linux or ESX or VMware/VirtualBox, and ran unRAID in a VirtualBox VM?  You would still have the network as a bottleneck, although you could do network card teaming, and you could run any hardware you like (assuming there is a Linux/Windows driver for it).  Then you could use up all the bandwidth your CPU has!  Hmmm, maybe even run your HTPC with it.  Wow, that would be fast.  I think I am going to try this with my test box... Grin.

 

OK Gary, once you are done with this, you need to test 10 GigE vs. InfiniBand.  I hear the prices are coming down to the point that for $3000 you could have a SOHO setup.  10 GigE is roughly 1.25 gigabytes per second of raw bandwidth.  Yes, they have much faster options, but you have to stop somewhere...


Onboard RAID does work on this MB:  http://lime-technology.com/forum/index.php?topic=3630.0  but at the time (I have not tried it with the latest rev of unRAID) it did have a few quirks, e.g. not shutting down the two drives.

 

This is using the SteelVine processor.  It does work, but there would not be an increase in speed.

I've tried this with 2 hard drives. You are still limited to the bandwidth of a single SATA link.

 

In addition, there are other add-on SteelVine processors that can be used (see addonics.com).

It would be the same issue.

 

So for size, this would help, but not for speed.

 

You can also take a look at the ACARD ANS-9010 DDR2 SATA RAM-Drive.

 

I don't see the benefit here for unRAID. Yes, I do plan to get one for my vmware services, but not for unRAID.

 

Frankly, for the price, I would spend the funds on a high-powered server board that takes very large amounts of RAM.  The cache would help a great deal, or you could make a tmpfs ramdrive and set the mover to run hourly.
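A rough sketch of that tmpfs-plus-hourly-mover idea; the mount point, size, and mover-script path are assumptions rather than confirmed unRAID specifics, and since RAM contents vanish on power loss, the hourly flush matters:

```shell
# Hypothetical: back the cache with a RAM-based tmpfs (size must fit
# comfortably inside installed RAM). Requires root, hence the guard.
if [ "$(id -u)" -eq 0 ]; then
    mount -t tmpfs -o size=3g tmpfs /mnt/cache || echo "mount failed: does /mnt/cache exist?"
else
    echo "re-run as root to mount the tmpfs cache"
fi

# crontab entry to flush the RAM cache to the array at the top of every
# hour (the mover-script path is an assumption):
# 0 * * * * /usr/local/sbin/mover
```

Anything written to the tmpfs between mover runs is at risk, so a UPS would be a sensible companion to this setup.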


 

Now to torment you some more: what if you ran multiple software RAID 5 sets on Linux or ESX or VMware/VirtualBox, and ran unRAID in a VirtualBox VM?  You would still have the network as a bottleneck, although you could do network card teaming, and you could run any hardware you like (assuming there is a Linux/Windows driver for it).  Then you could use up all the bandwidth your CPU has!  Hmmm, maybe even run your HTPC with it.  Wow, that would be fast.  I think I am going to try this with my test box... Grin.

 

I've read about this approach from the WHS crowd.

 

VMware with WHS running, then another VM with Openfiler running, with the physical disks raw-attached in VMware.

 

Carve up disks in Openfiler and export them as iSCSI to be mounted in the WHS virtual machine.

 

Gives the WHS frontend / feature set without burning disk space on the raid-1-esque duplication requirements.

 

Also allows you to add more 'servers' by simply installing Openfiler on them and exporting via iSCSI.  Thus you have a modular 'storage brick' solution.

 

All sorts of interconnects being used, including 10GbE and InfiniBand, though I think the latter worked out cheaper for a small-scale installation.

 

It's a complicated, but appealing, approach to give the best of both worlds with high performance.


Attached are the Iozone results for the 60 GB Corsair SSD, which should be the fastest of my SSDs.  Hopefully Gary can translate it into plain English, because I'm not quite sure what I'm looking at.  I'll try to post the results for the 30 GB OCZ Agility and my standard 320 GB Seagate 7200 rpm drive later this week.

 

I'm unable to provide any anecdotal results for using an SSD as a cache drive because there's something wrong in my configuration, and so the cache drive isn't being used during any of my transfers to a user share!  I didn't change anything besides the drive itself, so I'm wondering if maybe the 'min free space' setting needs to be changed?  It was on 2000 before (which means 2 GB, right?).  I changed it to 1000, but that didn't fix the problem.  Any ideas?

60_GB_Corsair_SSD.zip


I'm unable to provide any anecdotal results for using an SSD as a cache drive because there's something wrong in my configuration, and so the cache drive isn't being used during any of my transfers to a user share!  I didn't change anything besides the drive itself, so I'm wondering if maybe the 'min free space' setting needs to be changed?  It was on 2000 before (which means 2 GB, right?).  I changed it to 1000, but that didn't fix the problem.  Any ideas?

 

That setting is in kilobytes, not in megabytes.  So your setting of 1000 there means 1MB.

 

On the 'Shares' page under 'Cache' is a setting called 'Min. free space', which defaults to 2000000.  This is the minimum amount of free space, specified in 1024-byte blocks, that must exist on the cache disk in order to create a new object on the cache disk.  So a value of 2000000 means 2GB.  Did you change this setting?

 

I might add that this default value is too low if you are creating large ISO files that can be bigger than 2GB.  If, during the transfer of a single large file, the cache disk becomes full, the server will terminate the transfer with an "Out of Space" error.
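For reference, the arithmetic behind those two values (the setting counts 1024-byte blocks):

```shell
# 'Min. free space' is specified in 1024-byte blocks, so:
echo $(( 2000000 * 1024 ))   # 2048000000 bytes -- the ~2 GB default
echo $(( 1000 * 1024 ))      # 1024000 bytes -- the ~1 MB that a value of 1000 gives
```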

 


Attached are the Iozone results for the 60 GB Corsair SSD, which should be the fastest of my SSDs.  Hopefully Gary can translate it into plain English, because I'm not quite sure what I'm looking at.  I'll try to post the results for the 30 GB OCZ Agility and my standard 320 GB Seagate 7200 rpm drive later this week.

 

I'm unable to provide any anecdotal results for using an SSD as a cache drive because there's something wrong in my configuration, and so the cache drive isn't being used during any of my transfers to a user share!  I didn't change anything besides the drive itself, so I'm wondering if maybe the 'min free space' setting needs to be changed?  It was on 2000 before (which means 2 GB, right?).  I changed it to 1000, but that didn't fix the problem.  Any ideas?

 

Rajahal:

 

Thanks again for doing this!

 

Those results are simply the transfer speeds for each file size, listed in kB/s... the number on the left is the file size in kBytes and the number on the right is the transfer speed in kB/s.  There are four tests: writing, re-writing, reading, and re-reading.  I tend to look only at the READER and WRITER reports, since the re-writes and re-reads merely evaluate the effectiveness of buffering.

 

so, as an example from your data for the WRITER report (Write speeds):

 

2097152 32710

4194304 21246

8388608 19342

 

The 8GB file wrote at a rate of 19.3MB/s (19342kB/s).  This isn't really very good, since I was able to get speeds of over 21MB/s from fast hard drives without a cache drive.

 

The strange thing is that your READER report (read speeds) is off the chart:

 

2097152 78532

4194304 75501

8388608 80803

 

Here, an 8GB file transferred at over 80MB/s.  That's a phenomenal rate.  Reading your post again, I'm wondering if you just plugged the SSD in as a data disk in your unRAID server instead of as a cache.  If so, your results make much more sense, since your parity drive is still a rotating disk and would limit only the write speed.  This is about 30MB/s faster than my vRaptor drives could transmit data over the wire!  Network headroom does not appear to be the primary limiting factor in over-the-wire performance.  Once you get your cache drive working properly for writes, I would expect them to run near this 80MB/s rate (though write speeds for MLC SSDs are much worse than their read speeds, so it may not be quite this good).


I forgot to mention in my first post: all of these tests are being performed through my GigE router and over a total of 110 ft of Cat6 cable (100 ft from router to unRAID, 10 ft from router to desktop).  I tried hooking up the desktop and the unRAID server directly, but for some reason it didn't work - the desktop couldn't see the unRAID server.  I didn't feel like messing around with it.  I did eliminate all other network activity during these tests, and didn't run anything but Iozone and a web browser (Chrome, with only these forums and the unRAID web management page open).  In all cases my test transfer (for the anecdotal results) is a folder containing two Video_TS DVD rips and one .iso disc image, totaling 13.2 GB.

 

I understand your logic about it seeming like I'm using the SSD as a data drive, but it is in fact a cache drive.  As proof, I present:

[screenshot: zH0FTl.jpg]

 

Also, changing the 'min free space' setting back to the default 2 GB did allow me to use the SSD as a cache drive as per normal.  Therefore, anecdotally I can offer the 60 GB Corsair SSD at ~73 MB/s:

[screenshot: WZSLt.png]

 

This is only slightly faster than I remember my 320 GB Seagate 7200 rpm drive being, which is certainly somewhat disappointing.

 

Now for the 30 GB OCZ Agility SSD:

Again, to demonstrate that the OCZ SSD is properly installed as a cache drive:

[screenshot: dAlJel.jpg]

 

For some reason I can't get the OCZ SSD working as a cache drive in a normal capacity.  I've tried all sorts of values in the 'min free space' field, ranging from 2 MB to 20 GB (including the default 2 GB), with no success.  Every test transfer writes directly to the disk at the predictable 20 MB/s.  Any ideas how I can get this one working?

 

I can at least send my test transfer directly to the cache share, so here are those results (~72 MB/s):

[screenshot: pcmiA.png]

Roughly the same as the Corsair SSD, and sadly not much faster than a HDD.

 

Iozone results attached.

 

I still have one more drive to test, my original 320 GB Seagate 7200 rpm w/ 8 MB cache, but the early results are in - don't do it, it isn't worth it!  Not with these SSDs, at least.

 

Interesting that the Corsair SSD reports a drive temp of 40 C, which is surprisingly hot but feasible, whereas the OCZ SSD reports 0 C, which is clearly wrong.  I wonder which one is closer to the truth?

 

Another interesting note - overnight unRAID actually 'spun down' the OCZ SSD Cache drive:

[screenshot: d5QiPs.jpg]

How does that work?

30_GB_OCZ_Agility_SSD.zip


You must not have autosensing ports on your Ethernet adapters, so you would require a "crossover" Ethernet cable.  Either that, or your machines are set up for DHCP and there was no DHCP server connected to provide an IP address.  In either case, I measured comparable performance through a good gigabit switch, and it had very little impact on the performance.

 

I am puzzled by why you are seeing these great read speeds and yet no performance increase in write speeds.  Seems completely backwards to me.

 

I am traveling this week and don't have a lot of time to analyze your information, but it looks like you are writing to your user share which should write directly to your cache drive...

