X9SCM-F slow write speed, good read speed



A couple of experiments to try.  Please use a completely "stock" configuration, i.e., no plugins, no memory limit, stock 'go' file, etc.

 

Experiment #1:

 

Go to Settings/Disk Settings and change these parameters:

 

md_num_stripes: 4096

md_write_limit: 2048

 

Stop and then Start the array for changes to take effect.  Please let me know if there is any difference in write transfer rate to an array disk.
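To compare write rates before and after the change, a simple local throughput check avoids network variables entirely. This is a sketch, not part of Tom's instructions: the TARGET path is an assumption (on a live server it would be an array disk such as /mnt/disk1; it defaults to /tmp here so the sketch runs anywhere).

```shell
#!/bin/sh
# Measure raw write throughput so runs before and after the
# md_num_stripes/md_write_limit change can be compared.
# TARGET is an assumption -- point it at /mnt/disk1 on a real server.
TARGET="${TARGET:-/tmp}"
# conv=fdatasync forces the data to disk before dd reports a rate,
# so the number reflects the disk path rather than the page cache
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=64 conv=fdatasync
# clean up the test file
rm -f "$TARGET/speedtest.bin"
```

Running it once with the stock parameters and once after the change (array restarted in between) gives two directly comparable MB/s figures.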

 

Experiment #2:

 

(This will invalidate your parity, so only try this on a test array, or an array where you won't mind rebuilding the parity afterwards.)

 

Stop array and unassign the parity disk.  Start array, see if slow write persists.

 

Twenty-four pages of complaints, comments and suggestions.  A couple of suggested band-aid solutions.  Tom requests that users who have said they had a problem conduct two experiments to see if the parameters used in the unRAID code might be contributing to this issue.  After five days, only one person has tried one of his ideas.  Apparently, this is not much of an issue at this point...    ::)  Perhaps version 5 should be released without a full resolution to the problem.    ;)

 

Point taken... I'll test this week as soon as I get a chance. I've been working late at work on a regular basis, but I'll test the sequence Tom requests and post the results in the next day or so.


I just started building my unRAID server: X9SCM-F, E3-1230V2, onboard IDE. Gigabit switch with Cat5e.

 

Here are the FastCopy stats from Windows to unRAID:

 

TotalRead = 3821.3 MB

TotalWrite = 3821.3 MB

TotalFiles = 1 (0)

TotalTime= 85.88 sec

TransRate= 44.50 MB/s

FileRate  = 0.01 files/s
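As a sanity check, FastCopy's reported rate follows directly from the size and time above:

```shell
# 3821.3 MB transferred in 85.88 s should reproduce FastCopy's figure
awk 'BEGIN { printf "%.2f MB/s\n", 3821.3 / 85.88 }'
# prints: 44.50 MB/s
```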

 

LAN1 doesn't work under the latest unRAID RC. I didn't check whether it works on earlier versions, though. I checked LAN1 with Debian and ESXi; it works under both.

(Quoting the FastCopy results above: 44.50 MB/s from Windows to unRAID, with LAN1 not working under the latest RC.)

 

Is that with a cache disk or without? That's really good if without cache, not so great if with.


Don't have a cache disk for now. It is just a test setup with 2 disks: a Toshiba (Hitachi) 2TB 7K3000 as the data disk and a Hitachi 2TB 5K3000 as parity. I think both are on SATA 3 ports.

 


I am having the same issue with my new Toshiba drive.  I had four 2TB WD Greens set up (5 with parity) and it worked great.  I added a Toshiba 2TB drive last week, and now my write speed is very slow and will actually stop.  It will create the new folders and transfer some small files, but it won't transfer the movie files, and eventually gives me an error that it can't finish the transfer.  I pre-cleared the drive before adding it to the array.  I did also add a second controller card, but it is the exact same model as the other one I have been using without issue.

 

Any thoughts?


(Quoting Tom's two experiments above: set md_num_stripes to 4096 and md_write_limit to 2048, restart the array, and test writes; then unassign the parity disk and test again.)

Results:

  • Using unRAID 5.0-rc11, "stock" configuration, i.e., no plugins, no memory limit, stock 'go' file, etc.
  • test #1 - md_num_stripes set to 1280 and md_write_limit set to 768, with parity disk (no modifications) => write speed to array at ~ 1.2 MB/s
  • test #2 - md_num_stripes from 1280 to 4096 and md_write_limit from 768 to 2048, with parity disk => write speed to array at ~ 1.2 MB/s (no change)
  • test #3 - md_num_stripes set to 4096 and md_write_limit set to 2048, parity disk unassigned => write speed to array at ~ 1.2 MB/s (no change)
  • test #4 - md_num_stripes from 4096 to 1280 and md_write_limit from 2048 to 768, parity disk unassigned => write speed to array at ~ 1.2 MB/s (no change)

 

Please let me know if you need further testing.


I get ~ 1.2 - 2.5MB/sec transferring from Windows to my unRAID with the Supermicro MBD-X9SCM-F-O.

 

I can't do the parity wipe, but I'll see if I can do the other test and write back.

 

Thanks!

 

Edit: I did the tests and still get approximately the same speed. I realized I also just recently upgraded my Parity drive to a WD Red 3TB, so I doubt that is the problem.

 

Edit 2: Saw that you need to run this command: sysctl vm.highmem_is_dirtyable=1

 

That worked. Sorry.
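A sysctl set at the console is lost on reboot, so a common follow-up is to persist it via the 'go' startup script. This is a hedged sketch: /boot/config/go is the usual unRAID location for that file (an assumption to check against your own install), and GO_FILE defaults to a scratch path here so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# Persist the workaround: apply it now, and append it to the go file
# so it is re-applied at boot. GO_FILE would be /boot/config/go on a
# real server (assumed); it defaults to a scratch file here.
GO_FILE="${GO_FILE:-/tmp/go}"
SETTING='sysctl vm.highmem_is_dirtyable=1'
# apply immediately; this needs root and a 32-bit kernel, so failure
# is tolerated when run elsewhere
$SETTING 2>/dev/null || true
# append only if missing, so repeated runs stay idempotent
touch "$GO_FILE"
grep -qF "$SETTING" "$GO_FILE" || echo "$SETTING" >> "$GO_FILE"
```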


This issue also exists on a Gigabyte GA-880GMA-UD2H (BIOS F6f) motherboard. I am on a Gigabyte AMD motherboard, so if the issue is the same as others are reporting in this thread, it seems to be broader than just the X9SCM. The issue occurs whether disk1, disk2 and the parity disk are plugged directly into motherboard SATA or into an AOC-SASLP-MV8.

 

In the last week I've upgraded my unRAID server from an Athlon II processor and 4GB RAM (2x2GB) to a Phenom II 955 and 16GB RAM (4x4GB), and have suddenly experienced write speeds slowing down to 1MB/sec.

 

There were no other material changes to the hardware setup when I began to notice this issue, other than adding 4TB HDDs that are plugged in but not part of the array.

 

The speed is from a 'mv' command for large files from /mnt/disk1 -> /mnt/disk2, so there is no network in play, and I am not copying via the shares or via my Windows desktop PC. Both disk1 and disk2 are < 60% full.

 

I have tried the file move as a 'mv' command from the console as well as using Midnight Commander.

The issue seems to be repeatable: if I select one of my media directories, which holds around 270GB (70 files in 10 directories, some individual files ~14-16GB in size), I get the slow write speed; however, another media directory on the same disk, with maybe 6-10 files of ~10-14GB, copies at 35MB/s.
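The local disk-to-disk test described above can be sketched as a small timed copy. The SRC/DST paths and sample file are stand-ins (defaults point at scratch directories so the sketch runs anywhere); on the server they would be directories under /mnt/disk1 and /mnt/disk2 and one of the large media files.

```shell
#!/bin/sh
# Time a local copy between two mounts, keeping the network out of
# the path entirely. SRC/DST are assumptions -- on a real server they
# would live under /mnt/disk1 and /mnt/disk2.
SRC="${SRC:-/tmp/wombat_src}"; DST="${DST:-/tmp/wombat_dst}"
mkdir -p "$SRC" "$DST"
# create a 16 MB stand-in for a media file if none exists
[ -f "$SRC/sample.bin" ] || dd if=/dev/zero of="$SRC/sample.bin" bs=1M count=16 2>/dev/null
# time the copy; MB/s = file size / elapsed seconds
START=$(date +%s)
cp "$SRC/sample.bin" "$DST/"
END=$(date +%s)
echo "copied in $((END - START))s"
```

With a multi-gigabyte real file the elapsed seconds make the MB/s figure meaningful; the tiny stand-in just demonstrates the procedure.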

 

When I go home tonight I will try the following, testing after each individual change. I will abort before each test completes, delete any files copied onto disk2, and then repeat with the new configuration. In each case I am using Midnight Commander to move the same original source directory/files from disk1 to disk2:

 

a) Try transfer with 16GB RAM (4x4GB): update: FAILED. Transfer drops to 1MB/s.

b) Try sysctl vm.highmem_is_dirtyable=1 with 16GB RAM (4x4GB): update: FAILED. I typed this into the console while Midnight Commander was still transferring at < 1MB/s with the 16GB RAM. Midnight Commander instantly burst up to 80MB/s, gradually slowed to about 26-35MB/s, burst back up to 80MB/s, and so on. However, after transferring over 100GB of files the speed dropped back down to 1MB/s.

c) Reduce to 4GB (2x2GB) RAM: update: PASSED. I rebooted and tested the transfer using only 2 sticks of 2GB RAM (4GB total) instead of 4x4GB (16GB total). No other changes were made in the BIOS, etc. Midnight Commander transferred at a steady 40MB/s and got more than 3x further through the 270GB transfer with no loss of performance. Summary: 4GB (2x2GB) does not suffer from the slowdown.

d) Reduce to 4GB (1x4GB) RAM: update: PASSED. Same procedure with 1 stick of 4GB RAM (4GB total). Steady 40MB/s, and more than 3x further through the 270GB transfer with no loss of performance. Summary: 4GB (1x4GB) does not suffer from the slowdown.

e) Reduce to 8GB (4x2GB) RAM: update: PASSED. Same procedure with 4 sticks of 2GB RAM (8GB total). Steady 40MB/s, and more than 3x further through the 270GB transfer with no loss of performance. Summary: 8GB (4x2GB) does not suffer from the slowdown.

f) Reduce to 8GB (2x4GB) RAM: update: PASSED. Same procedure with 2 sticks of 4GB RAM (8GB total). Steady 40MB/s, and more than 3x further through the 270GB transfer with no loss of performance. Summary: 8GB (2x4GB) does not suffer from the slowdown.

g) Recheck BIOS settings to ensure the smallest amount of RAM is allocated to graphics, switch PnP OS to off, and look for any write-caching options. If I find any appropriate settings I will retry the transfer with 16GB RAM to see if there is any difference. update: I've looked in the BIOS and cannot see any applicable settings to change.
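Alongside those RAM swaps, two quick checks go naturally with a memory-size-dependent slowdown like this: how much RAM the kernel actually sees after each change, and the dirty-page thresholds that govern when cached writes get flushed (standard Linux procfs paths; shown as a sketch, not something from the thread):

```shell
#!/bin/sh
# how much memory the kernel sees after each RAM change
free -m
# dirty-page thresholds: writeback throttling kicks in when dirty data
# exceeds these percentages of memory, which is why total RAM matters
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
```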

 

I will also continue to read the latest posts in this thread to see if there is anything else I have missed; there are no obvious errors listed in the syslog.

 

UPDATE:

 

I can provide syslogs for each of the above tests. I'm not sure they provide any further insight, but maybe there is something in them that will help show how the RAM is being configured.

 

Since the above tests I have successfully copied 650GB from disk1 to disk2 with no slowdown using 8GB RAM (2x4GB). I will reset back to 16GB and make one final attempt to confirm that the slow transfer speeds return.

 

I'm happy to do further tests if Tom can provide guidance on what I can do to help.

 

very odd!

 

TheWombat


Note: While I have modified my AOC-SASLP cards to disable RAID on firmware .21, I do not believe this has affected the performance, as I was seeing the drop in transfer speed to 1MB/s after I upgraded to 16GB RAM but while still running the standard .21 firmware/configuration with RAID enabled (i.e., before I modified the configuration of the AOC-SASLP cards).

 

Just in case people see this post in the storage sub-forum!

 

http://lime-technology.com/forum/index.php?topic=27129.0

 

As a positive, all the rebooting of the server I am doing has tested the boot up sequence and the AOC-SASLP-MV8 with the modified .21 configuration has been 100% successful in finding the HDDs and booting much quicker than when RAID was enabled. No more time outs and pauses.

 

TheWombat


So is this fixed in the latest unRAID release, or do we still need the mem=4095M RAM limit and the highmem_is_dirtyable=1 option?

 

I have the same setup as most of you:

 

ESXi 5.5

unRAID 5 latest in a VM

M1015 passed through in IT mode

Supermicro X9SCM-F-O

RPC-4224

 

If I transfer to a share with parity on => 10MB/s avg

If I transfer to my ESXi datastore using an SSD I get 80MB/s, bypassing the M1015 and unRAID

 

Did not try the band-aids yet, since this post is quite old now...

 

So the issue is either unRAID or the M1015 in IT mode.
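For reference, the two band-aids discussed in this thread were a RAM cap and the dirtyable flag. A sketch of how the RAM cap was typically applied on unRAID 5, via the kernel append line in /boot/syslinux/syslinux.cfg (the label and kernel lines are assumptions matching a stock install; check them against your own file):

```
label unRAID OS
  kernel bzimage
  append mem=4095M initrd=bzroot
```

The other band-aid was running sysctl vm.highmem_is_dirtyable=1, usually added to the 'go' file so it survives a reboot.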

