Transfer speeds?!?



Hi guys,

 

I'm running the latest beta without a parity or cache drive.

 

When I initially tested the machine, I was getting speeds of 80MB/sec+!! Now I barely hit 50!

 

I'm getting the following write speeds with 1+GB video files:

 

[attached screenshot: speeds.jpg]

 

I am preclearing a disk in the background. Would that really cause these peaks and jumps in speed!?

 

Always over AFP.

 

3GHz AMD 250, connected via gigabit, 4GB memory.

 

System log doesn't show anything untoward, but can post it anyway.

 

Also, Finder will sometimes randomly say the server has shut down even when I haven't touched it, and then it needs an array restart to get working again! :(


That is about what I experienced this week (first unRAID build).

 

I have done all my testing with 6-20GB MKVs and 50-200GB RAR files. I bet 10,000+ 1MB files will perform worse.
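The small-file penalty is easy to ballpark: every file adds a fixed per-file cost (metadata updates, protocol round-trips) on top of the raw transfer. A rough model; the 10ms per-file figure here is an assumed illustrative number, not a measurement:

```python
# Rough transfer-time model: time = bytes/throughput + a fixed per-file overhead.
# The 10 ms per-file overhead is an assumed illustrative figure, not measured.
def transfer_seconds(n_files, file_mb, throughput_mbs=60.0, per_file_s=0.010):
    return n_files * (file_mb / throughput_mbs + per_file_s)

big = transfer_seconds(1, 10_000)        # one 10GB MKV
small = transfer_seconds(10_000, 1)      # 10,000 x 1MB files
print(f"effective MB/s, one 10GB file:   {10_000 / big:.1f}")    # 60.0
print(f"effective MB/s, 10,000 x 1MB:    {10_000 / small:.1f}")  # 37.5
```

Same total bytes, but per-file overhead alone knocks the small-file case from 60MB/s down to 37.5MB/s under these assumptions.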

 

I was seeing 80MB/s to my cache drive while setting up the server. I believe it showed "parity invalid" at that time due to disk swapping.

 

Now that it is in "pre-production" with valid parity, using a 7200rpm cache drive, 5900rpm parity drive, and six 5400rpm pool drives, I am getting 40-60MB/s, averaging in the mid 50s, using the cache drive.

 

If I write straight to the root of the cache drive I can still get 60-80MB/s. I'm not sure how the mover script likes that, though; I'll use it as it was intended and write to the shares.

 

If I disable the cache drive, I get around 35MB/s. So there is a performance boost from using the cache.

 

I'm going to keep messing with it over the next week and see what I can do. My cache drive is not the best or fastest; I'm going to try a few other drives, maybe an SSD, before I am either happy or at least accept that this is the limit of unRAID.

 

I'm just used to high-performance arrays and SSDs. I need to remember unRAID is not that; it is an inexpensive, long-term, secure storage solution for the home user. It is what it is, and it does its job.

 

Edit: I should mention I am using Samba, not AFP.

Edit 2: You are preclearing in the background? It is possible that it could affect system speeds; preclear uses CPU, memory, and SATA bus bandwidth.

That last one might have a slight impact.

I would let it finish and retry your testing.


Now that it is in "pre-production" with valid parity, using a 7200rpm cache drive, 5900rpm parity drive, and six 5400rpm pool drives, I am getting 40-60MB/s, averaging in the mid 50s, using the cache drive.

This kind of makes sense, especially if you are going through User Shares to write the data. The User Share system is another layer the data has to go through, so it may be, and probably is, a little slower.

 

If I write straight to the root of the cache drive I can still get 60-80MB/s. I'm not sure how the mover script likes that, though; I'll use it as it was intended and write to the shares.

The mover script does not really care, but you have to make sure you get the data into the correct folder structure so that it moves it over to where you intend it to go.
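In other words, a file written to /mnt/cache/&lt;share&gt;/... ends up at the same relative path under /mnt/user/&lt;share&gt;/... after the mover runs. A minimal sketch of that mapping; the helper name is mine, not part of the mover script:

```python
# Hypothetical helper: map a cache-drive path to its post-move destination.
# unRAID's mover preserves the share/subfolder structure when moving files,
# so only the root changes from /mnt/cache to /mnt/user.
def mover_destination(cache_path, cache_root="/mnt/cache", user_root="/mnt/user"):
    if not cache_path.startswith(cache_root + "/"):
        raise ValueError("not a path on the cache drive")
    return user_root + cache_path[len(cache_root):]

print(mover_destination("/mnt/cache/Movies/Avatar/Avatar.mkv"))
# -> /mnt/user/Movies/Avatar/Avatar.mkv
```

So writing to the cache root works, as long as the top-level folder matches an existing share name.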

 

If I disable the cache drive, I get around 35MB/s. So there is a performance boost from using the cache.

By writing directly to the parity-protected data store you will have slower speeds than the other routes above. The overhead of calculating parity slows it down.
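The overhead is structural: unRAID keeps single-disk XOR parity, so each write to the array becomes a read-modify-write (read old data and old parity, write new data and updated parity). A sketch of the parity update itself:

```python
# XOR-parity read-modify-write: why each array write costs four disk ops.
def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    # new_parity = old_parity XOR old_data XOR new_data, byte by byte
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

d1, d2 = b"\x0f\xf0", b"\x33\xcc"
parity = bytes(a ^ b for a, b in zip(d1, d2))   # initial parity over two data disks
new_d1 = b"\xff\x00"                            # overwrite a block on disk 1
parity = update_parity(parity, d1, new_d1)
# Parity is still consistent with the new contents of both disks:
assert parity == bytes(a ^ b for a, b in zip(new_d1, d2))
```

The XOR math is cheap; it is the extra reads and the second write (to the parity disk) that cost the throughput.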

 

I'm going to keep messing with it over the next week and see what I can do. My cache drive is not the best or fastest; I'm going to try a few other drives, maybe an SSD, before I am either happy or at least accept that this is the limit of unRAID.

Rajahal did some tests and found that in most cases an SSD offers little to no speed advantage over the network.


Well, I've tried a few things to no avail (even changing my go script back to stock), plus cancelling the preclear.

 

Changing to jumbo frames has improved the situation slightly, but it's still not as fast as it was.

 

Where is the problem!?

 

Frankly 50 is pretty good, and more than most people get.
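For what it's worth, "slightly" is about what the framing arithmetic predicts; jumbo frames only shave fixed per-frame overhead. Assuming standard Ethernet framing plus IPv4/TCP headers:

```python
# Wire efficiency of a TCP stream at a given MTU.
# Per-frame fixed cost: 38 bytes of Ethernet overhead (preamble + header +
# FCS + interframe gap), plus 40 bytes of IPv4+TCP headers inside the MTU.
def efficiency(mtu):
    payload = mtu - 40      # TCP payload per frame
    wire = mtu + 38         # bytes actually on the wire per frame
    return payload / wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} of line rate")
# MTU 1500: 94.9% of line rate
# MTU 9000: 99.1% of line rate
```

About a 4% gain, i.e. a few MB/s on gigabit at best; jumbo frames cannot explain or recover a 30MB/s drop.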



Frankly 50 is pretty good, and more than most people get.

 

Well, 50MB/sec was short-lived; I restarted and we are down to 7 bytes a second.

 

This is getting annoying :(

Link to comment


 

Well, 50MB/sec was short-lived; I restarted and we are down to 7 bytes a second.

 

This is getting annoying :(

 

Syslog, perhaps? Without that we are just guessing.


If I write straight to the root of the cache drive I can still get 60-80MB/s. I'm not sure how the mover script likes that, though; I'll use it as it was intended and write to the shares.

The mover script does not really care, but you have to make sure you get the data into the correct folder structure so that it moves it over to where you intend it to go.

In that case: I have a Beyond Compare script that kicks off on a workstation and copies the files in my dropbox to the cache drive root with the correct share folder structure. If it passes binary compare, it then moves the local workstation file to a temp folder to delete at my leisure.

 

I'm going to keep messing with it over the next week and see what I can do. My cache drive is not the best or fastest; I'm going to try a few other drives, maybe an SSD, before I am either happy or at least accept that this is the limit of unRAID.

Rajahal did some tests and found that in most cases an SSD offers little to no speed advantage over the network.

The question then is what kind of SSDs were used; there are some that top out at 60MB/s and some that top out at 550+MB/s. Then the next question is how much unRAID overhead is eating into the data transfer speed.

I can copy from SSD to SSD on my network at a fully saturated 1Gig. While I don't expect to see that, nor do I need it, it would be nice to know I could.
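For reference, a "fully saturated 1Gig" link tops out around 119MB/s of TCP payload, so any drive faster than that on either end cannot show its speed over the network anyway. A quick back-of-the-envelope, assuming MTU 1500 and standard framing:

```python
# Theoretical TCP payload ceiling on gigabit Ethernet at MTU 1500.
link_bps = 1_000_000_000
raw_mbs = link_bps / 8 / 1_000_000             # 125 MB/s raw line rate
# Subtract per-frame framing (38B) and IPv4+TCP headers (40B of the MTU):
usable = raw_mbs * (1500 - 40) / (1500 + 38)
print(f"~{usable:.0f} MB/s max TCP payload")   # ~119 MB/s
```

So a 550MB/s SSD and a 150MB/s hard disk look identical through a gigabit pipe; the SSD only helps if the disk itself is the bottleneck below ~119MB/s.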

My Supermicro server board has a 6Gb/s SATA port on it; it would be fun to test a SATA III SSD on it.

Maybe I'll take one for the unRAID team and report the results.

 

Anyways, thanks for the info, most of it I knew already from reading the forums.


That's good you got that issue solved.

 

I just tweaked my setup a tad.

I swapped motherboards in my unRAID box. Huge increase in speed!

 

I'm now averaging around 90MB/s with bursts up to 115MB/s from my Win2008r2 "NAS boxes".

 

It is a little faster from my large RAID array on my Areca card or from my desktop's SSD; those are averaging 95MB/s.

 

I bet if I defrag my servers, I'll see a little cleaner throughput. But the arrays are at 98% full. Oops.

I may also get worse results as the cache drive gets closer to full.

 

I don't think I'll need that SSD after all. I can't see spending any more cash for a few more MB/s.

 

 

This is from a Windows server with 4 Samsung F4s in software RAID5 on an old Intel ITX motherboard, to my unRAID with a 1.5TB Seagate cache drive.

[attached screenshot: KevHi.png]

 

Right now, I'm happy with unRAID.

