X9SCM-F slow write speed, good read speed



Just wanted to chime in here. I have a different motherboard and CPU, but was experiencing very similar issues with slow internal writes (< 1 MB/s).

 

The command that Tom posted, sysctl vm.highmem_is_dirtyable=1, also worked for me to get writes back to normal.

 

My unRAID specs:

Motherboard: eVGA 132-BL-E758
CPU: Intel Core i7 @ 2.666 GHz
RAM: 24 GB
Data: 3 x 2 TB
Cache: 500 GB

 

The common thread, obviously: 24 GB of RAM.

Link to comment

Please type this command at the console or telnet session and report if it makes any difference in write transfer throughput:

 

sysctl vm.highmem_is_dirtyable=1
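If you want to confirm the setting took effect (note that it resets at the next reboot), you can read it back; a quick sketch:

sysctl vm.highmem_is_dirtyable        # print the current value; the default is 0
sysctl -w vm.highmem_is_dirtyable=1   # explicit-write form of the same command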

 

Yes indeed, the command above changed the write speed from ~1 MB/s to ~40 MB/s with 32 GB of RAM.  My concern with the command was this comment in the post (http://thread.gmane.org/gmane.linux.kernel/1311355) that lists it as a workaround:

 

> You can make highmem dirtyable by setting
>
> sysctl vm.highmem_is_dirtyable=1
>
> But this will make the number of dirtyable pages very high compared to your lowmem.
>
> I wonder if it would be best to just enforce a minimum amount of dirtyable memory. A percentage of lowmem or so to keep it from expanding with highmem.

 

You're right, it is probably advisable to set a minimum value, or at least to display a warning message. This is probably a rare case depending on the memory configuration, but it may occur.
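If that open question about lowmem is a worry, one way to bound the effect is to pin the writeback thresholds to absolute byte values, so that making highmem dirtyable cannot inflate them. This is my own belt-and-braces sketch, not something suggested in the thread, and the values are only illustrative:

sysctl vm.highmem_is_dirtyable=1
sysctl vm.dirty_background_bytes=268435456   # start background writeback at 256 MB of dirty data
sysctl vm.dirty_bytes=536870912              # hard-throttle writers at 512 MB of dirty data

Setting the *_bytes variants overrides the percentage-based vm.dirty_ratio / vm.dirty_background_ratio defaults.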

 

In subsequent searching, I found this post, and I believe it might indicate a fix that properly accounts for dirtyable memory (https://patchwork.kernel.org/patch/1767451/).  I just don't know enough to be certain whether this fix is directly tied to the earlier post.  If it is a proper fix for this problem, how does it get into the kernel?  Any knowledgeable kernel-workflow folks who can comment?

 

The system uses global_dirtyable_memory() to calculate the number of dirtyable pages (pages that can be allocated to the page cache). A bug causes an underflow, thus making the page count look like a big unsigned number. This in turn confuses the dirty writeback throttling to aggressively write back pages as they become dirty (usually 1 page at a time). This generally only affects systems with highmem, because the underflowed count gets subtracted from the global count of dirtyable memory.

 

The problem was introduced with v3.2-4896-gab8fabd

 

The fix is to ensure we don't get an underflowed total of either highmem or global dirtyable memory.
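A quick way to check whether a box is actually hitting this (my own diagnostic sketch, not from the patch): the effective limits are exported in /proc/vmstat, and thresholds stuck near zero would be consistent with the one-page-at-a-time writeback described above.

grep -E 'nr_dirty|nr_writeback' /proc/vmstat
# compare nr_dirty_threshold and nr_dirty_background_threshold
# before and after toggling vm.highmem_is_dirtyable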

 

In the interim, I'm going to place the "sysctl vm.highmem_is_dirtyable=1" command in my GO script and use the 5.0-rc8a release...I just don't know if there might be other consequences...guess I'll find out.  :)
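For anyone else doing the same, a minimal sketch of that GO script change, assuming the stock unRAID go file at /boot/config/go (the emhttp line is whatever your existing script already contains):

#!/bin/bash
# /boot/config/go - runs once at every boot
sysctl vm.highmem_is_dirtyable=1   # workaround for the slow-write behavior
/usr/local/sbin/emhttp &           # start the unRAID management interface as usual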

Link to comment

Hi there !

 

I am reading through this whole thread right now, wow... the "bug tracking" you are performing is hard to follow :D

 

With almost the same config (Xeon E3-1220L v2, X9SCM-F, 8 GB Kingston, 9 x WD Red 3 TB, 2 x SASLP), I get about 30 MB/s write rates over SMB, without any cache disk!

 

Is this write rate okay? Or can I push it up without using a cache drive?

 

best regards

 

Christoph

Link to comment

Good morning, guys! (GMT+1 ;))

 

Your opinion/experience is exactly why I asked!

 

I had no idea whether something wrong was bottlenecking it.

 

The only thing I can add is that I run an 8 GB kit of Kingston ECC RAM (KVR1333D3E9SK2/8G), and the network load is constant at 25% = ~30 MB/s, with no drops to 5% or anything strange.

 

I hope this helps, given our almost identical configs!

 

best regards

 

Christoph

Link to comment
  • 2 weeks later...

I put this command from Tom in my GO file. It seemed to "fix" my slow write speed problem. Does v5.0 rc9 permanently address this issue that many of us have experienced?

 

I believe it should, due to a bug fix in kernel 3.4.24, but I haven't been able to test/verify since my 5.0-rc9a server is performing a parity sync check at the moment.  I should be able to test/verify this evening, but perhaps others can test/verify beforehand.

Link to comment

I installed 5.0-rc9a last night and kicked off a parity check with update.  It ran for a little over 9 hours and showed an 89 MB/s speed.

 

So far rc9 appears to be fine.

 

That is without the "sysctl vm.highmem_is_dirtyable=1" command, correct?

 

I'm fairly certain the parity "check" wasn't affected; it was only a "write" to the array that exhibited the slow write speed.  tspotorno, can you try a "write" to the array with 5.0-rc9a?

 

Edit:  With 5.0-rc9a, my parity "sync" for a 3 TB parity disk was 59217 sec => 50.7 MB/s, if I am calculating this correctly.  However, when I write to the array, the slow ~1 MB/s speed is present unless I issue the sysctl vm.highmem_is_dirtyable=1 command; then the write speed is great, ~45 MB/s.
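For anyone checking that figure, a quick sanity check at the console (assuming decimal units, 3 TB = 3,000,000,000,000 bytes):

echo $(( 3000000000000 / 59217 ))   # prints 50661127 bytes/s, i.e. ~50.7 MB/s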

 

Edit #2:  Attached are meminfo, ifconfig, and ethtool command outputs.  I am running with sysctl vm.highmem_is_dirtyable=1 in my GO script.

meminfo_inconfig_ethtool.txt

Link to comment

Well, here are my results on 5.0-rc9a:

 

1. Created a new user share that is uncached.

2. Copied a 1.2 GB file to the share; the upload fluctuated between 36-37 MB/s.

3. Downloaded the file back to my PC at 105-112 MB/s.

 

Ran the command: sysctl vm.highmem_is_dirtyable=1

 

4. Upload speed jumped to 110-123 MB/s.

5. Download speed stayed pretty much the same, 115 MB/s to 123 MB/s.

 

That's roughly triple the write speed with the command.

 

The only add-on I have running is UnMenu.

 

I'm running on gig ethernet.
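Since the network is gigabit, it may also be worth separating the disk path from the network path. A local write test from the console; the share name and the 2 GB size are placeholders:

dd if=/dev/zero of=/mnt/user/testshare/ddtest bs=1M count=2048 conv=fdatasync
# conv=fdatasync flushes to disk before dd reports its rate, so the figure
# reflects the array write path rather than RAM caching
rm /mnt/user/testshare/ddtest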

 

I am surprised by the speed jump, it is dramatic!

 

Does this command cause any harm?  The reason I ask is that I powered down the server and it failed to shut down properly.  I rebooted and all was fine, but it had me worried for a few minutes.

Link to comment

Here is the problem I'm having with the vm.highmem_is_dirtyable variable.

 

When I first boot up the machine, without setting "vm.highmem_is_dirtyable=1", I get less than 1 MB/s for the write speed.

 

After I set vm.highmem_is_dirtyable=1, I sometimes get speeds close to 100 MB/s.

 

HOWEVER, this ONLY lasts for about 500-600 MB.

I don't know if anyone has tested with larger files (6-10 GB), but after a while it drops below 1 MB/s again.

AND, sometimes it won't even let me write anything; it throws some kind of error. I don't have it handy, but it was something about the network drive being inaccessible.

When I tested with a 10 GB file, it failed after copying about 70%. Not all the time, though; maybe half of the times I tested.

 

Then I set vm.highmem_is_dirtyable back to 0, and I get about 5-15 MB/s.

 

 

I have 16 GB of RAM.

Link to comment

For me, the 80 MB/s I get at the start drops down to 20-30 MB/s. I notice the same behaviour you mention (the high speed is somewhat burst-like), but the 20-30 MB/s I am left with is perfectly acceptable for me; 1 MB/s is not, and that is what I used to get before this setting.

 

We have the same amount of memory; I am guessing you are using more (low) memory than I am.
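One way to actually watch the write-behind cache fill during a big copy (a diagnostic sketch, assuming watch is available at the console):

watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
# Dirty climbs while the burst lasts; once writeback throttling kicks in,
# the client-side speed falls to whatever the disks can actually drain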

Link to comment

I'm getting this behaviour too now.

 

Transferring to a cache drive which I KNOW can do 110 MB/s writes.

I've tested via a RAMDISK over AFP, which manages 110 MB/s writes and 100 MB/s reads up to 4 GB.

Currently transferring 120 GB to my cache drive.

 

With sysctl vm.highmem_is_dirtyable=0, I got an average of 35 MB/s over 60 seconds.

After changing it to 1, the 60-second average rate increased to 60 MB/s.

 

Neither is exactly stellar performance, nor anywhere near the capabilities of the cache drive itself.

 

The OP's chipset is the C204; mine is the C206, and my cache drive is running off a Sil3132 PCIe card.

 

Cheers!

Link to comment

I really don't need anything over 20 MB/s, since I don't usually upload that much to unRAID.

 

I just want a constant, solid write speed.

 

Honestly, it's not a HUGE problem for me, but I heard the new rc9a was supposed to fix this issue in a way... and I just wanted to mention that it doesn't seem to be fixed.

 

--- EDIT

 

I just realized... most of you have an X9SCM-F.

I just have some cheap board with a Celeron G530.

 

But I guess what I want to say is that we all have this similar issue where the high speed is kind of "burst-like".

It wasn't too slow when I only had 4 GB of RAM, though. Maybe I should just switch back to 4 GB. idk.

Link to comment

> But I guess what I want to say is that we all have this similar issue where the high speed is kind of "burst-like". It wasn't too slow when I only had 4 GB of RAM, though. Maybe I should just switch back to 4 GB. idk.

 

You are always going to see "burst-like" behavior on the network if the sink is slower than the source (which is the case most of the time in unRAID).

Link to comment

> > But I guess what I want to say is that we all have this similar issue where the high speed is kind of "burst-like". It wasn't too slow when I only had 4 GB of RAM, though. Maybe I should just switch back to 4 GB. idk.
>
> You are always going to see "burst-like" behavior on the network if the sink is slower than the source (which is the case most of the time in unRAID).

 

I see... well, not too big of a deal then.

But the problem is still there for me without setting vm.highmem_is_dirtyable=1... oh well, I'm going to change the board soon, so maybe that'll change, who knows.

Thanks for all the work you do.

Link to comment

> > But I guess what I want to say is that we all have this similar issue where the high speed is kind of "burst-like". It wasn't too slow when I only had 4 GB of RAM, though. Maybe I should just switch back to 4 GB. idk.
>
> You are always going to see "burst-like" behavior on the network if the sink is slower than the source (which is the case most of the time in unRAID).

[Just from my interpretation of the users' descriptions of their "observations" ...]

 

It appears as though the "sink" is [easily] as fast as the "source" until you reach the hairball :) (otherwise known as the point where your system buffers (the write-behind cache) have all been filled), and then your "sink" speed is reduced to the reality of your actual disk subsystem's write performance.

 

By timing the upload side, this is partially masked, because after the upload appears to have finished (and reported its transfer rates accordingly), the destination side (unRAID) still has to complete the output (draining) of the write-behind cache to the disk subsystem.

 

Maybe it would be better if users performed the timing on the unRAID side (i.e., as a download) AND made sure to include a sync at the end (to include the buffer draining):

time (transfer WinPC:10GB_file unRAID_box ; sync)

(substitute your choice of transfer method)

Then use the real time to calculate the transfer rate.
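To make that concrete, one possible version from the unRAID console; the CIFS mount of the Windows box and the share names are hypothetical, so adjust to taste:

mkdir -p /mnt/winpc
mount -t cifs //WinPC/share /mnt/winpc -o guest      # hypothetical source mount
time sh -c 'cp /mnt/winpc/10GB_file /mnt/user/test/ ; sync'
# divide 10 GB by the reported real time to get the sustained write rate,
# write-behind cache drain included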

 

[Is that highmem toggle just making more RAM available to system buffers? (Which is still good: "RAM is a terrible thing to waste.") Speaking of a 64-bit kernel ... :) ]

 

Link to comment

I have been experiencing slow write speeds since upgrading from rc6-test2 to rc8a (and now rc9a; no difference).

 

I have tried running the command in this thread; no change. My write speeds are 12-14 MB/s (averaging 12.8 MB/s) under rc9a, and they are identical with and without the command.

 

From 4.x through rc6-test2, I always had a consistent write speed of 24 MB/s over the last 5+ years, until now.

Link to comment

Well, I'm having the same problem!  I "upgraded" my C2SEE to an X9SCM-F today, and my speeds plummeted.  I added sysctl vm.highmem_is_dirtyable=1 to my GO script and the speeds shot back up, but only for small files.  A file larger than 5 GB just craps out at the 5 GB mark.

 

I know I'm running a couple of plugins, but I have most of them disabled right now.  I literally unplugged my old motherboard (which was working great) and plugged in the new one; I made no changes to the software.

 

Any help would be much appreciated.

syslog_2013-01-12.zip

Link to comment
