X9SCM-F slow write speed, good read speed



 

I don't know which speeds are mentioned here, but yesterday I was moving files from one disk in the array to another in MC. Speeds reported by MC were between 28 and 42 MB/s; 42 MB/s was reported with files larger than 4 GB. My VM was assigned 4 GB of memory; total memory is 16 GB. I don't know what is "normal", BTW.

If you're talking about parity-protected drives, then that is normal.  I get 16MB/s on some drives, up to 45MB/s on others.  I'm working on replacing the slower drives.  I have completely replaced them on one unRAID server and now get a consistent 40+MB/s when I copy, dropping to around 30MB/s when the drives get closer to being full.

 

 

Which drives have you found to be slow?  Are these the typical "green" drives?  I would not stop using them, because of the power consumption, but if you have found some other way to identify a "slow" drive then I would like to know.

It was some 5400rpm Seagates and Seagate-made Samsungs that I removed and replaced with 7200rpm Hitachis.  I was trying to keep my 7200rpm drives for other computers.  I found the slower drives by copying files to the drive shares: I set up a block of media files totalling around 100GB and copied it to each drive from another computer.  I noticed that I wasn't getting the same speed copying to the Seagates as I was to the Hitachis, even though all the drives were the same 5400rpm speed.  The unRAID server is now Hitachi-only (8x 5400rpm and 6x 7200rpm), but I will be adding a WD Red I just bought when the preclear gets done.  The Seagates I removed are being used elsewhere, but they will eventually become offline backups once I get more drives.
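For anyone wanting to run the same kind of check, here is a rough sketch of a per-disk copy test run from the unRAID console rather than from another computer; /mnt/disk* are the standard unRAID disk mounts, and the test-set path is a placeholder for whatever large files you use:

#!/bin/bash
# Copy the same large test set to every data disk and report the elapsed time,
# so unusually slow drives stand out. TESTSET is a placeholder directory.
TESTSET=/mnt/cache/testset

for d in /mnt/disk*; do
    echo "=== writing test set to $d ==="
    start=$(date +%s)
    cp -r "$TESTSET" "$d/speedtest"
    sync                      # flush the page cache so the timing reflects the disk
    end=$(date +%s)
    echo "$d: $((end - start)) seconds"
    rm -rf "$d/speedtest"     # clean up before moving on to the next disk
done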

 

For further discussion:

sysctl vm.highmem_is_dirtyable=1 seems to have a positive effect on write speed:

http://lime-technology.com/forum/index.php?topic=25431.msg221288
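For anyone wanting to try it, this is roughly what applying and checking the setting looks like; a sketch, assuming the usual unRAID go script location at /boot/config/go for making it persistent:

sysctl vm.highmem_is_dirtyable=1
cat /proc/sys/vm/highmem_is_dirtyable     # should now print 1

# optional: make it survive a reboot by appending it to the go script
echo 'sysctl vm.highmem_is_dirtyable=1' >> /boot/config/go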

 

WeeboTech,

 

That command has been mentioned already:

 

http://lime-technology.com/forum/index.php?topic=22675.msg213845#msg213845

 

Also, there is still a write speed issue even with that command; see this post:

 

http://lime-technology.com/forum/index.php?topic=22675.msg219647#msg219647

 

 

The problem is that, even with that, if you are copying a large file (say a 10GB file) over the network, it seems to work okay for the first few gigs, then the problem comes back.

 

The real fix we are trying to point to is the mem parameter (I think something like mem=4095); it's the one you mentioned in an earlier thread.

 

This seems like more of a 'solid fix'.


I responded in the other thread also.

 

My system isn't broken, and the parameter improves write burst, which may be good for people who write a lot of small files.

Has anyone tested the mem= parameter along with the sysctl parameter to see if it has any effect?

Point is, with some kernel tuning, write burst increases.

While it doesn't solve the issue with high memory on some boards, it shows that there's definitely a hindrance somewhere.

I couldn't understand how I could double the RAM in my old system and get hardly any improvement in caching, and I think this is part of the answer.

In any case, I understand what was previously discussed. I posted a separate thread so other people could make an attempt at testing to see if it helped improve the short term write burst speed.

 

mem=4095 fixes (bandaids) some high memory issue with this board.

 

sysctl vm.highmem_is_dirtyable=1 improves my write burst drastically while keeping a normal write speed.

 

 

In fact it was so drastic that it is worth other people with a lot of RAM trying it and discussing the results outside of this particular highmem issue.

 

I wonder how both work for people having the write performance issue.
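For anyone who wants to test the combination: the sysctl can be flipped at runtime (as above), while the mem limit has to go on the kernel append line on the flash drive. A minimal sketch, assuming a stock unRAID 5 syslinux configuration; depending on the release the file is /boot/syslinux.cfg or /boot/syslinux/syslinux.cfg, and the label and initrd names below are the stock ones:

label unRAID OS
  kernel bzimage
  append mem=4095M initrd=bzroot

After a reboot, cat /proc/cmdline should show the mem=4095M parameter.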


Actually, it does not start with limiting unRAID; it limits the current kernel. Linux is WAY broader than unRAID, and 32-bit is not obscure, so this will get noticed; it's a matter of time until it's fixed in the kernel (and that's not something Tom does).

 

Until that kernel is fixed, everything we can do to fix this in unRAID is a workaround or temporary bandaid. The current mem parameter is a perfect bandaid for me, and I do not notice ANY adverse effects from running in this state.

 

This is why I would prefer development time going towards making unRAID run on a 64-bit kernel; it makes more sense to me.

 

 



Just checking into this thread.  I've been building a new server with an X9SCM-F-O and I'm experiencing mediocre write speeds to my user shares.

 

Here's my relevant hardware:

  • X9SCM-F-O
  • Xeon E3-1230 V2 (Ivy Bridge)
  • KVR1333D3E9SK2/16G (2x8GB)
  • IBM M1015 crossflashed to IT mode

 

I have unRAID virtualized in ESXi 5.1 with the M1015 in passthrough mode.  I am running 5.0rc10.  When writing to the array directly, I get around 10MB/sec, which is not as high as I would expect.  My read speeds are around 25MB/sec.  My unRAID VM currently has 2GB of RAM allocated.  I've tried bumping it up to 4GB and adding the mem=4095 kernel boot parameter, and it had no effect.  I also tried vm.highmem_is_dirtyable=1, and that didn't have much effect either.

 

This evening I tried adding a cache drive (WD 500GB Caviar Blue).  With the cache drive, I get much better write performance - around 100MB/sec.  I'm going to see if I can do some tinkering/profiling to try to find where this performance discrepancy is.
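If anyone else is doing the same kind of digging, a couple of stock tools make it easy to see whether the time during a test write goes to the CPU or to I/O wait; a sketch (vmstat and top should be present on the standard unRAID/Slackware install):

vmstat 2      # watch the 'wa' (I/O wait) and 'bo' (blocks written out) columns
top           # same idea; the %wa figure in the Cpu(s) header line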


I have unRAID virtualized in ESXi 5.1 with the M1015 in passthrough mode. [...] When writing to the array directly, I get around 10MB/sec, which is not as high as I would expect. [...] I've tried bumping it up to 4GB and trying the mem=4095 kernel boot parameter and it had no effect. I also tried vm.highmem_is_dirtyable=1 and that didn't have much effect either. [...] With the cache drive, I get much better write performance - around 100MB/sec.

 

The enormous "increase in speed with a cache drive" is totally to be expected: with a cache drive enabled you are bypassing the core unRAID process that costs a lot of time, parity protection (every parity-protected write also has to read and update the parity disk). Do not worry about analyzing that speed difference; it is expected.

 

The sustained 10 MB/sec write, however, is low; that should be higher. That the dirtyable and the mem parameter do not help is also logical: these bandaids are there to "fix" the fact that the kernel unRAID uses gets into memory trouble when you have more than 4GB assigned to unRAID. You do not have more than 4GB assigned, so the parameters should not have any real effect.

 

How is your network set up: 100Mb or gigabit? Wired or wireless? And what kind of performance do you get when you transfer inside a telnet session on the system itself, disk-to-disk and disk-to-user-share?
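A sketch of the kind of local test meant here, with placeholder paths and share names:

# disk-to-disk, timed
time cp /mnt/disk1/somefolder/bigfile.mkv /mnt/disk2/

# disk-to-user-share (user shares are mounted under /mnt/user)
time cp /mnt/disk1/somefolder/bigfile.mkv /mnt/user/Movies/

# or take the source disk out of the picture entirely and just write zeros (~4GB)
dd if=/dev/zero of=/mnt/disk2/ddtest.bin bs=1M count=4096
rm /mnt/disk2/ddtest.bin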


What models are your parity and data drive?

 

I have a mish mash of data drives - I have a WD20EARS as my parity drive and the data drive I'm writing to for my test is a WD20EARX.  I upgraded the system from unRAID 4.5, and the EARX drive has the jumper set for Advanced Format drives.  I did not apply the jumper to the EARS parity drive - should that matter?  Writing to other drives in the array gets me similar or worse results.


How is your network set up: 100Mb or gigabit? Wired or wireless? And what kind of performance do you get when you transfer inside a telnet session on the system itself, disk-to-disk and disk-to-user-share?

 

I should have mentioned that I was testing on the console using dd.  Disk-to-disk and disk-to-user-share performance is similar.  I'm using the VMXNET3 driver on unRAID and on another test VM to do performance tests over NFS, but I don't believe that to be a bottleneck; I'm able to get ~100MB/sec when writing to the cache drive over NFS.  To me it does not make sense that parity calculation should take *that* much time.  When I look at top during a dd, most of the time is spent in wait.  There's definitely something else going on here...
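For reference, this is roughly what such a console test looks like; conv=fdatasync makes dd include the final flush so page-cache buffering does not inflate the reported speed (paths are placeholders):

dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=8192 conv=fdatasync
rm /mnt/disk1/ddtest.bin

# in a second telnet session: in top, a high %wa with low %us/%sy means the time
# is going to disk I/O (and the parity read-modify-write cycle), not to the CPU
top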

I did not apply the jumper to the EARS parity drive - should that matter?

 

Yes, this can, potentially, have a very significant impact on your disk write performance - assuming that it was initialised with the partition starting on sector 63 (the only option under 4.5).
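A quick way to verify is to list the start sector of each disk's partition from the console; a sketch:

for d in /dev/sd?; do
    fdisk -lu "$d" 2>/dev/null | grep '^/dev/'
done
# a Start value of 63 is the old unaligned layout; 64 is the 4k-aligned layout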


See systems below. Neither of the write-speed "fixes" works on them. One has just WD30EARS and WD30EZRS drives; the other has a few other models mixed in, but they are all still WD Green models. None of the drives have the jumper installed. The couple of old WD20EADS I have are unaligned (by design), and everything else is 4k-aligned (also by design). All drives went through 1-3 preclears with no errors or speed issues before being added. Crappy writes and slow parity check speeds on everything after beta 12a.

 

I'm seeing a pattern, or maybe it's just because these drives are popular. Is anyone experiencing these issues who doesn't have WD Green drives? Is it possible it's caused by 4k-aligned drives in general?

 

What models are your parity and data drive?

 

I have a mish mash of data drives - I have a WD20EARS as my parity drive and the data drive I'm writing to for my test is a WD20EARX.  I upgraded the system from unRAID 4.5, and the EARX drive has the jumper set for Advanced Format drives.  I did not apply the jumper to the EARS parity drive - should that matter?  Writing to other drives in the array gets me similar or worse results.

 

I forget what version of unRAID added 4k alignment support, but 4k-aligned drives (EARX and EARS) would have needed the jumper installed BEFORE preclearing and adding them to the array on those older versions. I don't think it's as simple as installing the jumper now: you'll need to move the data off, reformat the drive as 4k, then transfer the data back. The jumper isn't needed anymore as long as the drive is correctly formatted.

 

If the EARS parity drive says 4k-aligned in the web GUI then it is fine; if not, I suggest removing the jumper, reformatting the drive using the preclear script, and then rebuilding parity.


Yes, this can, potentially, have a very significant impact on your disk write performance - assuming that it was initialised with the partition starting on sector 63 (the only option under 4.5).

 

I wonder if there is a way I can verify that?  Assuming that's the case, would it make sense to rebuild parity with a newly formatted drive?

 

 


Yes, this can, potentially, have a very significant impact on your disk write performance - assuming that it was initialised with the partition starting on sector 63 (the only option under 4.5).

 

I wonder if there is a way I can verify that?  Assuming that's the case, would it make sense to rebuild parity with a newly formatted drive?

 

The drive, if the jumper is not installed, should say 4k-aligned if you click on it in the web GUI. If you do not have a jumper installed and it says unaligned, I'd recommend reformatting it. Any other drives that are also supposed to be 4k-aligned would need this done too.


Yup, it's showing up as unaligned.  I guess the first thing for me to do is reformat my parity drive to properly align it.  Since 5.0 supports AF disks, should I just be able to wipe the partition table on the parity drive using dd and then try to start the array?  Ideally unRAID would see the disk as "unformatted" and then re-format it and rebuild parity?


Yup, it's showing up as unaligned.  I guess the first thing for me to do is reformat my parity drive to properly align it.  Since 5.0 supports AF disks, should I just be able to wipe the partition table on the parity drive using dd and then try to start the array?  Ideally unRAID would see the disk as "unformatted" and then re-format it and rebuild parity?

 

That should work; just reformatting the drive should get it to 4k, assuming you didn't change the "Default partition format" setting in unRAID.

 

Then just sit back and hope you don't have a drive failure without parity.  :)
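If anyone else needs to do this, the wipe itself only touches the first 512-byte sector. A sketch, with sdX as a placeholder for the parity device; triple-check the device letter and stop the array first:

dd if=/dev/sdX of=/boot/parity_old_mbr.bin bs=512 count=1   # back up the old MBR, just in case
dd if=/dev/zero of=/dev/sdX bs=512 count=1                  # zero the MBR and its partition table
# on the next array start, unRAID should treat the drive as new, re-partition it
# starting at sector 64, and begin a parity sync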


Looks like that did the trick.  I backed up the old MBR (just in case) and cleared it with dd.  Restarted the array, and the drive now shows up as a new parity drive and has started the parity sync.

 

The UI now says that it's 4k aligned, but just to make sure, I dumped out the partition table entry and it looks right:

 

root@nas:/mnt/tmp/tmp# hexdump -C -n 16 -s 446 parity_new_mbr.bin 
000001be  00 00 00 00 83 00 00 00  40 00 00 00 70 88 e0 e8  |........@...p...|
000001ce

 

It shows the partition now starting at 0x40 (64), whereas if I look at the old partition table it was starting at 0x3F (63).  Thanks for the help.  Once this finishes I'll see if my write performance improves.
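For anyone wanting to check a live disk the same way: the starting LBA of the first partition is the 4-byte little-endian value at offset 454 (0x1BE + 8) of the MBR, so it can be read straight off the device (sdX is a placeholder):

dd if=/dev/sdX bs=1 skip=454 count=4 2>/dev/null | od -An -tu4
# prints 63 for the old unaligned layout, 64 for the 4k-aligned layout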


This is getting slightly annoying :-)

 

I am still running with 5GB of RAM (one stick) and speeds are down to 5 MB/s again, with no plugins active. I am copying off of a full drive to an almost empty one.

 

I no longer have the mem parameter active since I am now running with a 4GB physical limit.

 

Setting the vm.highmem_is_dirtyable=1 parameter immediately increases sustained transfer speed to 20MB/s and up...

 

Does this make sense to anyone ?

 

I will go ahead and change the boot parameter to mem=4095 and reboot to see if that also makes a difference...

 

 


Using a Supermicro MBD-X9SCL+-F, Core i3 2120T 2.6G, 16GB RAM.  2x Seagate ST3000DM001, 2x WD WDBAAY0030HNC-NRSN (but only writing to the Seagates).

 

I see no slowdown.

 

Writes to disk share: around 40MB/sec

Writes to user share: around 39MB/sec

 

Writes to cache disk or non-parity protected array peg the network at around 98MB/sec

 

I did have to "fix" win7 network throttling per this:

http://www.daveherbert.info/network-bandwidth-throttling-in-windows-7

 

This is actually the best performance I've ever measured.  Hmm...


Interesting link on bandwidth throttling.. I wonder, does anyone experiencing this issue stream music all the time? lol

 

Just did some more testing, hope this is interesting:

 

cached share @ 6Gb RAM - started at 81MB/s and maintained

disk share - started at 35MB/s and dropped, stopping then starting and hovering around 2MB/s

 

cached share @ 2Gb RAM - started at 95MB/s and maintained, +/- 5MB/s

disk share - started at 35MB/s and dropped, stopping then starting and hovering around 2MB/s (interesting that 2Gb still had this issue)

 

stayed at 2Gb RAM and added the network throttling registry fix mentioned above, even though I'm on Windows 8 (I like breaking things :P). 

 

cached share @ 6Gb - started at 81MB/s and maintained

disk share - started at 35MB/s and dropped all the way to 0 for a few seconds, then started again and hovered around 2MB/s

 

cached share @ 2Gb - started at 95MB/s and maintained, +/- 5MB/s

disk share - started at 35MB/s and sustained for 30 seconds or so, then dropped to 0 for about 2 seconds, then sustained ~35MB/s for the rest of the transfer. 

 

I am not really bothered by this, as all writes I do to the array go via the cache, but these are some interesting outcomes nevertheless.

 

This is on 5.0rc11 and an X9SCM-F, but as a virtual machine in ESXi, as per the link in my sig.


Using a Supermicro MBD-X9SCL+-F, Core i3 2120T 2.6G, 16GB RAM. [...] I see no slowdown. [...] I did have to "fix" win7 network throttling per this: http://www.daveherbert.info/network-bandwidth-throttling-in-windows-7 [...] This is actually the best performance I've ever measured.  Hmm...

 

Everything I test is done using MC within the array in a telnet session, so the network cannot be the issue. I am now rebooting with the mem parameter. Will report back in a minute or so.


Now running RC11 with the mem=4095M parameter. Transfers are around 18MB/sec right from the start. That is about the best I have seen (turning on the dirtyable parameter shows bigger bursts, but the average is still somewhere just below 20MB/sec).

 

My system has one stick of memory (4GB) and, running without the mem=4095 parameter and without the dirtyable parameter, was giving me transfers of around 5MB/sec.

 

In both situations I was running my full set of plugins, but they were not doing anything; this behaviour is also something I have seen for weeks now.

 

The million-dollar question is: what is the mem=4095 parameter actually doing on a system that has 4GB? It obviously is not the amount of memory by itself.

 

Tom: Anything you want me to test ?

 

EDIT: An hour or so later the transfer seems to drop again, to 2 or 3 MB/sec; in the period before that it had dropped to something like 10 MB/sec. All transfer speeds are sustained for more than 10 minutes with no plugin activity (the plugins are running, though). After I had seen 2 to 3 MB/sec for 10 minutes, I set the dirtyable parameter in another telnet window. Transfer was back to around 20MB/sec within 20 seconds and appears to stay at that level.

At that point I set the parameter back to '0' and the transfer remains at the 20MB/sec level. I expect it to drop again after some time.

 

It really looks like a memory-management issue that might occur more quickly with more RAM but occurs regardless; the dirtyable parameter frees up something that then slowly fills up again.
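One way to see what that knob is doing is to watch the kernel's dirty-page counters while the copy runs and flip the setting mid-transfer; a sketch for a second telnet window:

watch -n 2 'grep -E "Dirty|Writeback" /proc/meminfo; sysctl vm.highmem_is_dirtyable vm.dirty_ratio vm.dirty_background_ratio'

# then, from another session, toggle the setting and note when the speed changes
sysctl vm.highmem_is_dirtyable=1     # or =0 to turn it back off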


I started posting a problem here: http://lime-technology.com/forum/index.php?topic=25640.0

...Then realized that I was duplicating a problem that was under discussion in this thread.

 

I have a Gigabyte Z77-DS3H with 24GB of RAM (ridiculous, I know), 3 data drives, a parity drive, and an SSD cache drive connected to an M1015 (the parity drive may be connected to the mobo), plus another non-array drive connected to the mobo.  I had severely slow write speeds on all drives.

 

I found that setting vm.highmem_is_dirtyable=1 solved my problem (speeds jumped from 1MB/s to 250+MB/s).

I have now done some more tests...

3 write tests to 3 different drives, with dirtyable=1.  The first drive is the SSD cache drive, the second is a data disk, and the third is the non-array drive (this third drive is SATA I; the others are all SATA III).  Each test writes an 8GB file from the console.  The cache and non-array drives were screaming.  The parity-protected disk1 was slower (but not as dismal as I had seen before):

dirtyable=1; cache    ; 8GB; speed=202MB/s
dirtyable=1; disk1    ; 8GB; speed=29.8MB/s
dirtyable=1; non-array; 8GB; speed=303MB/s

I tried this twice and got similar results.  What was interesting (to me) was that, watching the memory monitor, it looks like the (slower) disk1 write was fast at first, then slowed fairly soon after the operation started.  The attached graph (mem1) shows the cache memory growing fairly quickly for about the first 1/3 of the operation, then very slowly thereafter.  I am not sure if this is related to parity or not, but that doesn't feel like it should matter to me.

 

Then I set dirtyable=0 and tried again.  In this case, all writes took ages, and the cache memory growth was uniformly slow.

dirtyable=0; cache    ; 8GB; speed=19.9MB/s
dirtyable=0; disk1    ; 8GB; speed=35.8MB/s
dirtyable=0; non-array; 8GB; speed=27.1MB/s

 

In this case, I was surprised to see that these were significantly better than the 1MB/s I had seen before.  In fact, the disk1 parity protected write was even better than with dirtyable=1.  The graph shows that the cache memory consumption growth was uniform and stable (mem2.jpg attached).

 

Then... I realized that my test was deleting each test file as soon as it was created.  I moved the deletions to the end of my script, and (duh) the cache buffer filled more; my deletions had been freeing the associated cache buffer memory.  After moving the 3 deletions to the end of the script, the speed of writing and memory growth was still fast, then slow, then fast again (see mem3), with cache consumption peaking over 20GB.  This confirms that the actual disk writes underneath the cache were still going along slowly.  At least I know that having lots of memory will distance me from the problem in most operations.
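A sketch of what that test script might look like with the deletions moved to the end; the mount points are placeholders for the cache, array and non-array drives, and dd's reported speed still includes page-cache buffering unless conv=fdatasync is added:

#!/bin/bash
TARGETS="/mnt/cache /mnt/disk1 /mnt/extra"

# write an 8GB file to each target and keep only dd's summary line (the MB/s figure)
for t in $TARGETS; do
    echo "=== $t ==="
    dd if=/dev/zero of="$t/8gb.test" bs=1M count=8192 2>&1 | tail -n 1
done

# deletions at the very end, so freeing a file's cached pages doesn't skew
# the memory graph of the following run
for t in $TARGETS; do
    rm -f "$t/8gb.test"
done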

 

 

Attachments: mem1.JPG, mem2.jpg, mem3.jpg (cache memory usage graphs for the dirtyable=1, dirtyable=0, and delete-at-end runs).


Using a Supermicro MBD-X9SCL+-F, Core i3 2120T 2.6G, 16GB RAM. [...] I see no slowdown. [...] I did have to "fix" win7 network throttling per this: http://www.daveherbert.info/network-bandwidth-throttling-in-windows-7

 

A couple of other tweaks that may (in general) speed things up are to disable Remote Differential Compression and to disable TCP window auto-tuning (link).  Also, if one is using mapped drives to unRAID shares and running anti-virus, excluding those network maps from the anti-virus program could speed things up, depending on the Win 7 system hardware.
