UnRaid_11317

Server Performance - Suddenly very slow/hard to work in


Hello,

I am having performance issues with my 6.4.1 unRAID server.

  • Opening folders, creating text files, transferring files, and moving files around all run extremely slowly. Sometimes I have to wait a full minute before the system is responsive again.
  • Video Playback from server will sometimes just stop.
  • Docker containers can become unresponsive.
  • Nothing ever crashes; tasks just take a long time or become unresponsive. If I give it a minute, unRAID eventually does what it was told to do, but each task can suddenly take a full minute to complete.
  • No problem navigating the WebGUI. No apparent slow downs.
  • HDDs are a bit full (can that be a problem?)
  • In Dashboard HDDs are Green, AVG CPU Load is around 20-30%, Memory Usage 20-30%
  • Network equipment reset, just in case - no change.

Nothing red or yellow appears in the logs. I have attached my diagnostics file. In the System folder, memory.txt reads a total of 15G, 3.9G used, and 229M free. Is there a problem there, or is that normal because it also reads 11G buff/cache and 10G available?
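In case it helps, this is what I ran on the console to pull those numbers; as far as I understand, memory.txt and the buff/cache figure both come from the standard Linux /proc/meminfo, so this is just plain Linux, nothing unRAID-specific:

```shell
# MemFree alone is misleading: Linux parks idle RAM in the page cache and
# counts it as "used", but reclaims it on demand. MemAvailable is the
# kernel's own estimate of what can be allocated without swapping.
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
```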

Obviously, I don't know what I should be looking for in the diagnostics files.

 

I appreciate any and all help!

media-diagnostics-20180809-0631.zip

14 minutes ago, UnRaid_11317 said:

Opening folders, creating text files, transferring files, and moving files around all run extremely slowly. Sometimes I have to wait a full minute before the system is responsive again.

This sounds like the typical ReiserFS nearly-full-disk issue.


My disks are very full: 98%-100%.

So as the disks get full, the ReiserFS file system takes a performance hit? Does XFS have the same limitation? Is there a percentage threshold I should stay below? Is there an easy way to move the data off those drives to new, larger drives?

17 minutes ago, UnRaid_11317 said:

Does XFS have the same limitation?

No, though you should avoid filling them to 100%; always leave a few GB free.

 

17 minutes ago, UnRaid_11317 said:

Is there a % threshold I should stay below?

You should convert anyway, since ReiserFS is a dead filesystem.

 

18 minutes ago, UnRaid_11317 said:

Is there an easy way to move the data off those drives to new larger drives?

 

11 minutes ago, UnRaid_11317 said:

So as the disks get full, the ReiserFS file system takes a performance hit?

 

ReiserFS is notorious for becoming slow at handling new updates when close to full. If you fill a disk once and then just read the data back, it's fine to cram it full. But not if you want to modify some of the content; then ReiserFS spends huge amounts of time trying to optimize its internal structures.

 

But strictly speaking, for most file systems you should think twice about filling past maybe 95%. If a newer version of the file system adds new functionality or optimizations, the metadata may need to grow. XFS, for example, has introduced support for checksumming of metadata, and it isn't impossible that they may want to introduce checksumming of file data too.

 

Leaving a bit of free space also makes it easier to change to a different file system that might require a different amount of metadata.

 

17 minutes ago, UnRaid_11317 said:

Is there an easy way to move the data off those drives to new larger drives?

 

Easy but slow: you need to move all the data to other disks, either in the array or outside of it.

Then reformat.

Then restore the content.

Then repeat for the other data drives.


The easiest time to do this is when upgrading to larger drives. One new 8 TB disk can hold the contents of two older 4 TB disks, so there's much less copying of data around.
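The move/reformat/restore loop above boils down to a copy-then-verify pattern. The sketch below runs on throwaway directories so it's safe to try anywhere; on a real server the source and destination would be array mount points such as /mnt/disk2 and /mnt/disk5 (disk numbers made up for illustration), and rsync would replace cp:

```shell
# Copy, then verify, before reformatting the source. Paths here are temp
# directories for demonstration; on unRAID they would be /mnt/diskN mounts.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "sample payload" > "$SRC/movie.mkv"   # stand-in for real data

cp -a "$SRC/." "$DST/"    # real box: rsync -avh --progress /mnt/disk2/ /mnt/disk5/
diff -r "$SRC" "$DST" && echo "verified: source disk is safe to reformat"

rm -rf "$SRC" "$DST"
```

Only reformat the source once the verify pass comes back clean.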

But not if you want to modify some of the content; then ReiserFS spends huge amounts of time trying to optimize its internal structures.

Note that this slowdown is felt in unRAID even when not writing to those disks: as long as they are part of the user share you're writing to, unRAID can spend a long time, up to a few minutes, deciding which disk it can write to.
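A quick way to spot which disks are pinned at the ceiling (a sketch: /mnt/disk* is unRAID's data-disk mount layout; the fallback just shows the same columns on any Linux box):

```shell
# Disks showing 98-100% use are the ones that stall user-share allocation.
# /mnt/disk[0-9]* matches unRAID's data-disk mounts; if none exist (e.g. on
# a non-unRAID machine) fall back to root to illustrate the output format.
df -h /mnt/disk[0-9]* 2>/dev/null || df -h /
```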

 

 

1 minute ago, johnnie.black said:

Note that this slowdown is felt in unRAID even when not writing to those disks: as long as they are part of the user share you're writing to, unRAID can spend a long time, up to a few minutes, deciding which disk it can write to.


I have not seen any such behavior. I have a number of rather full RFS disks, but with 100% static content, and I can read data from them at full gigabit network speed. I will wait to replace the FS until it's time to replace all the drives.

 

Are you talking about user shares that don't have minimum free space configured? I have never seen a full RFS disk with static content be slow at reporting free space. It sounds more like you are talking about an RFS volume that has been allowed to create a very fragmented inode "table", in which case it takes lots of disk seeks to read in the directory tree and inode contents to present the merged user share content. I haven't seen these issues with a one-time fill-up of archival data to RFS, only when data has been added and removed on an almost-full RFS volume.


This effect can be mitigated somewhat by reformatting the drive: a fresh ReiserFS drive, even packed full, is much more responsive than a battle-worn file system that has had many deletes and such. Before unRAID supported XFS, I would occasionally clean off a drive entirely, format it, and fill it back up.

 

Now that drive sizes are much larger, ReiserFS seems to run into problems sooner rather than later; XFS is the way to go in my experience. BTRFS is getting there, but I'm not comfortable with it yet.

 

As far as keeping space free, my general rule of thumb is to keep enough free space on the array collectively to allow emptying my largest single drive.  When the total free space drops below that, it's time to add a drive or swap one of the smaller drives for a larger one.
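That rule of thumb is trivial to check with shell arithmetic. The figures below are invented for illustration; on a live array you'd sum the Avail column from df over /mnt/disk* instead:

```shell
# Hypothetical figures, in GiB: free space summed across the data disks,
# and the capacity of the largest single data disk in the array.
free_total=$((300 + 120 + 80))
largest_disk=4000

if [ "$free_total" -lt "$largest_disk" ]; then
  echo "free space ($free_total GiB) < largest disk ($largest_disk GiB): add or upsize a drive"
else
  echo "enough headroom to empty the largest disk"
fi
```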

9 minutes ago, pwm said:

Are you talking about user shares that don't have minimum free space configured? Because I have never seen a full RFS disk with static content be slow at reporting free space. It sounds more like you are talking about an RFS volume that has been allowed to create a very fragmented inode "table", in which case it takes lots of disk seeks to read in the directory tree and inode contents to present the merged user share content.

I'm talking about user shares with minimum free space configured. Though most of my data is constant, there are some deletes and refills over time. I remember well that before converting my disks from Reiser, it would take about a minute or more for unRAID to read all the ReiserFS disks in that share and finally start writing; sometimes the copy operation would even time out in Windows and I would need to retry.

8 minutes ago, jonathanm said:

Now that drive sizes are much larger, ReiserFS seems to run into problems sooner rather than later

 

RFS doesn't have preallocated inodes. The advantage is that you can add a ridiculous number of small files; the disadvantage is that the inodes get spread all over the surface. And the more adds and deletes are performed, the more fragmented the inodes become.

 

And as disks become larger, we can store more files on them, so they use more inodes that can be fragmented over more disk blocks.

 

So RFS is a very bad FS to use for new disks. But an archival disk that has just been filled with files can work very well without any urgent need to move the data.

4 minutes ago, johnnie.black said:

there are some deletes and refills over time


I have never done any refills. That might be why I have never suffered any significant slowdowns for directory-tree reads as done by unRAID user shares. All my RFS disks (on unRAID or other Linux machines) have been used for basically archival storage; work disks have used ext2/3/4 or Btrfs.


Thank you for all the help! I have a couple of 10 TB IronWolfs on the way to start my great XFS migration.




Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.