[Maybe solved] Array not responsive during a parity check with V6


jbuszkie

Recommended Posts

Ok..  so I solved/improved my parity check speed by changing to a 2-core processor...  see here.  But now my array isn't responsive while doing a parity check.  It's reallly sssssllllloooooowwww when accessing files.  My automatic Acronis backup now fails, whereas it used to run just fine during a parity check with V5.

Is there some way to lower the priority of the parity check, or something else I can do?  I can't really have the array non-responsive while the parity check is going on.

I mean, I can tolerate some slowness..  but if programs fail then that's just not good.

Link to comment


As far as I know you cannot change the parity check priority.

 

What type of disk controllers do you have?  They are normally the limiting factor.  I use either my motherboard SATA connections or SATA connections on a SASLP-MV8, and find that I still get good performance during a parity check.
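
If it helps to narrow that down, here's a rough sketch of how I'd check from the console what the drives are hanging off of and what a single disk can do on its own (lspci and hdparm should be on stock v6, but treat /dev/sdb as just a placeholder for one of your disks):

# List the SATA/SAS/RAID controllers the kernel can see
lspci | grep -iE 'sata|sas|raid'

# Show which controller each disk is attached to
ls -l /sys/block/sd*/device

# Raw sequential read speed of one disk - run this while the array is idle,
# not in the middle of a parity check
hdparm -tT /dev/sdb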

 

Having said that, I wonder if there is something LimeTech can do to improve performance during a parity sync/check (when the system is under heavy I/O load).

Link to comment

It's not actually surprising that array operations are much slower during a parity check (or disk rebuild) operation => since ALL of the disks are very busy during these operations, any activity beyond that causes excessive head movement on the drive(s) being accessed, as they thrash back-and-forth between the current parity check location and the area being read/written to.    Writes are particularly impacted, since each sector being written requires 4 I/O operations, and these are being shared with the parity check operations.
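
To spell out the "4 I/O operations" part: a parity-protected write is a read-modify-write cycle, and each of those four operations has to interleave with the sequential parity-check reads.  If you want to actually watch the contention, something like the sketch below would do it, assuming iostat (from the sysstat package) is available - it isn't part of stock unRAID, so this is purely illustrative:

# One parity-protected write = 4 disk operations:
#   1. read the old data sector        3. write the new data sector
#   2. read the old parity sector      4. write the new parity sector
# Watch per-disk utilization and wait times every 5 seconds during a check;
# %util near 100 with climbing await on a data disk means the heads are thrashing.
iostat -x 5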

 

But this is true regardless of which version of UnRAID you're using, so I'm a bit surprised the slowdown seems notably worse than it was with v5.    Did you have your "disk tunables" optimized with v5?  ... it could simply be that these are different with v6, and this is impacting the buffering for read/write operations and thus causing different performance during times when the disks are VERY busy.
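
For reference, by "disk tunables" I mean the md driver settings (md_num_stripes, md_sync_window, md_write_limit, etc.) under Settings -> Disk Settings in the v6 webGUI.  If I remember right they can also be read and poked from the console with unRAID's mdcmd utility - the sketch below is from memory, and the 2560 value is only an example, so double-check before relying on it:

# Show the md driver's current status, filtered down to the tunables
mdcmd status | grep -iE 'md_num_stripes|md_sync_window|md_write_limit'

# Change a tunable on the fly (takes effect immediately, but is NOT persistent
# across reboots - set it in Settings -> Disk Settings to make it stick)
mdcmd set md_num_stripes 2560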

 

Link to comment

 


Since I don't know what the "disk tunables" were for V5..  my guess is I didn't have them optimized.  I did have this:

# Bump read-ahead on every unRAID md device to 2048 sectors (1 MiB),
# then read it back to confirm the new value took.
for i in /dev/md*
do
  echo "Setting $i"
  blockdev --setra 2048 "$i"
  blockdev --getra "$i"
done

 

That was in the go file for V5..  but I don't have it for V6.  Has anyone shown that it makes a difference?
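
One quick way to tell whether that go-file tweak still matters is to just look at what v6 already uses for read-ahead - if it already defaults to 2048 (or more), the loop is redundant.  A minimal check, along the lines of the block above:

# Print the current read-ahead (in 512-byte sectors) for every md device
for i in /dev/md*
do
  echo -n "$i: "
  blockdev --getra "$i"
done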

 

And I guess I should look up disk tunables?

 

Thanks,

 

Jim

 

Link to comment

This is very frustrating.    Just updated my oldest server to v6.1.3 and it very notably degraded parity check times.    Just before doing the upgrade, I ran a parity check on v5.0.6 => took 8:05:42.    Reformatted the flash drive to a clean v6.1.3, booted and added the drives, and did a parity check => took 10:34:04 ... over 30% longer !!
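
(Quick sanity check on the "over 30%" figure, converting both run times to seconds:)

# v5.0.6:  8:05:42 =  8*3600 +  5*60 + 42 = 29142 s
# v6.1.3: 10:34:04 = 10*3600 + 34*60 +  4 = 38044 s
awk 'BEGIN { printf "%.1f%% longer\n", (38044/29142 - 1) * 100 }'   # -> 30.5% longer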

 

System is an old SuperMicro C2SEA with a dual-core Pentium E6300, 4GB of RAM, and a 14 drive mix of 1.5TB Seagates (4) and 2TB WD Greens.

 

Just for grins, I reverted to v5, booted, and ran another parity check ... which just finished in 8:05 (same as before).

 

I was never on an earlier v6, so I don't know for sure, but based on others' reports on the forum, it seems this has only been an issue with 6.1.3.  I'm inclined to leave it on v5 for now, and try v6 again when the next release comes out ... hopefully 6.1.4 will resolve this  :)

 

 

Link to comment

Mine actually went up!  But that's most likely because I put in a dual-core processor, bumped the speed up, and mucked with md_num_stripes.

Before I did those things I was waaaaayyyy slower with my single-core Celeron on V6.  I wonder if I should try the old V5 with these settings to see if it's even faster!  I'm running out of time though..  I have two new 4TB drives that just finished pre-clearing that I want to add to the array.  They will be XFS now, which I hear isn't backwards compatible with V5....

 

Is your processor pegged when you do the parity check?  Mine was, which hints that V6, for some reason, needs more CPU horsepower!
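
If you want a number instead of a gut feel, here's a quick-and-dirty way to watch it from a telnet/SSH session while the check runs (top is on stock unRAID; this is just a sketch):

# Print an overall CPU snapshot once a minute during the parity check;
# if the box is pegged, the idle (id) figure in the Cpu(s) line will sit near 0.
while true
do
  top -bn1 | head -5
  sleep 60
done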

 

Jim

Link to comment

Yes, once you add an XFS drive you no longer have the option of reverting to v5.  [I'm not certain, but I believe you could actually do that and the drives would show as "unformatted" => as long as you absolutely do NOT allow it to format them, you could run a parity check.  I may test this on one of my spare "test" systems - I'll post back if I get around to doing that in the next few days  :) ]

 

I didn't actually look at the Dashboard during the parity check, but I suspect the CPU was indeed pretty much pegged.  Nevertheless, it seems strange that it has no problem with v5 yet is SO much slower with v6 for the exact same function.

 

Link to comment


 

Uhh, gary, how could you run a parity check if the 2 XFS drives are recognized as unformatted and would therefore be treated as "missing" by unRAID? :)

Link to comment

In theory the 64-bit code should be FASTER ... since it's able to process twice as much data per instruction.

 

Further, although I never installed a version prior to v6.1.3 on that server, others have reported that they had fine parity check speeds with 6.1.1 and 6.1.2, but have seen the same degradation I noted with 6.1.3 => and these are all 64-bit versions.    In fact, there's a post on the 6.1.3 announcement thread that notes a check took 9 hours on 6.1.1 and is looking like about 14 hours or more on 6.1.3  [already at 13 hours with an hour to go].

 

Link to comment


 

I don't believe they show as "missing" -- they just show as Unformatted.  You can Start an array with unformatted disks ... and as long as you don't check the "Format" box they'll remain that way and not be formatted.  I've run both parity syncs and parity checks with unformatted disks in an array ... although I haven't specifically tried it with disks that were already formatted with XFS.  I'll try to do so in the next couple days on one of my test servers.

 

Link to comment

...not something I'll be trying out on my one and only server ;)

 

Agree ... there are some things that are best tried only on a test server.    I have two 3-drive servers set up for "playing" ... and have been thinking about buying a "bundle" of 8-10 old small laptop SATA drives (often available on e-bay for under $100) just to have a test unit that has enough drives to try various scenarios of mixed formatting; drive failures; dual parity rebuilds (hopefully in the not too distant future); etc.

 

[If anyone wants to donate a few old 40-80GB SATA drives (preferably 2.5")  feel free to PM me for an address  :)  ]

 

One of the test servers is all XFS, so I need to redo it with one Reiser and one XFS disk on v6, and then try it with v5 to see if what I noted above works as I expect.

 

 

Link to comment


 

That's exactly right - I did that on my test server (the XFS disks just showed as unformatted and the parity check ran) with no problems.

Link to comment


 

v6 on my #1 media server is definitely more CPU intensive than v5..

 

I ran some rough tests of CPU usage:

 

At idle:
  v5: 0.7% CPU usage
  v6: 14% CPU usage

Playback of a 1080p file:
  v5: avg 2%, peak 4.6%
  v6: avg 9%, peak 11%

Recording (writing 1080p streams in real time):
  v5: 15% for 3 streams/shows
  v6: about 15% per stream, roughly 45% for 3 files

Deleting 1 file:
  v5: 34%
  v6: shot up to around 78%

Reloading the web GUI in v5 ran the CPU up to 17% with nothing else going on.
I didn't try that on v6, but the v6 GUI does seem to consume more than v5's, judging by the freeze issues prior to 6.1.3.

RAM usage:
  v5: 80-100 MB
  v6: ~400 MB

 

Not very scientific, but these are observations gathered over a period of time, watching v5 through a telnet session using htop and v6 with the web GUI.

Since I typically record 4 1080p shows and watch 1 at the same time, then delete while still recording 4, v6 was hitting around 98% at that point - I think it hit 100% once.  I had to revert to v5 because of this.
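
For what it's worth, the v5-vs-v6 comparison would be easier to nail down with a simple log instead of eyeballing htop - a minimal sketch, with the log path on the flash drive just as an example:

# Append a timestamped CPU and memory snapshot every 30 seconds so the same
# playback / record / delete tests can be compared between v5 and v6.
LOG=/boot/cpu_mem_log.txt   # example location; adjust to taste
while true
do
  echo "=== $(date) ===" >> "$LOG"
  top -bn1 | head -5      >> "$LOG"
  free -m                 >> "$LOG"
  sleep 30
done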

 

Thinking about what Apple did to migrate to 64-bit: they still had a lot of 32-bit code in the 64-bit OS until, I think, 10.7, and then maybe some even later, until 10.9.  As they converted everything to 64-bit, the OS got smaller, lighter, and faster.

 

So I wonder if that could be some of the issue.

 

Link to comment
