unRAID Server Release 6.0-betaX-x86_64 Discussion



Since they decided to lock us out of the announcements forum, we need a new central location to talk about what's new and what's coming (and also a new place to complain about the lack of communication ;)). I figured a thread per major release would work, since the beta releases can sometimes be frequent.

 

Anyway, onto more topical things, this popped up in the IRC yesterday:

[screenshot: dynamixdash.png]

 

Looks like they're merging Dynamix into the normal Web GUI for the next beta release. The uptime is 3 days, which hopefully means we'll see something soon, with some more goodies attached:

 

We need a little more time to finish the prep, but seriously, you guys are in for some real treats this week ;-).

 

I imagine this is one of the said "treats".

  • 2 months later...

1. An ESXi build that has 12GB total in the box, 4GB of it to the unRAID VM

2. Crashplan backup server (HP N40L) with 4GB

3. Dev/Test/preclear server also has 4GB in it

 

I seem to have a thing for 4

 

You just described almost exactly the setup I have.  The only difference is the ESXi box currently only has 10GB of RAM in it.  I have 382 days of uptime on that ESXi box, so I don't want to take it down to put more RAM in it.


I've been running the v6 beta on my backup unRAID for a while now but hadn't upgraded to the newest releases until recently, so I just saw the new interface changes and I'm wonderfully surprised and impressed!  So much more information about the server is now available in the GUI, though I'm not yet decided whether I like the way "Array Operation" has been moved to its own tab.

 

Still, it sure is much more informative than v5, and since I have had zero issues through the v6 beta releases on my backup unRAID, I'm wondering if everyone else has also experienced reliable operation; enough that it would be safe to upgrade my media unRAID to version 6?

 

EDIT:  I also just discovered that the RFS drive format is being discontinued and unRAID is switching to XFS as the default format.  Since I'm gradually swapping out to 6TB drives, I'm curious whether it's better to go to XFS formatting with version 6, further adding impetus to upgrade my media server to v6.


Still, it sure is much more informative than v5, and since I have had zero issues through the v6 beta releases on my backup unRAID, I'm wondering if everyone else has also experienced reliable operation; enough that it would be safe to upgrade my media unRAID to version 6?

I have had zero issues with reliability, etc. with v6 Beta 12.  There are a couple of outstanding GUI issues, and some users are reporting spindown problems with some WD drives, but those are relatively minor.  IMHO there's no reason not to upgrade.  Utilizing all the memory in the machine, support for Docker & KVM, and the ability to use other file systems are huge reasons to take the leap.

 

Both of my production servers are on 6B12 with no hiccups.

I have had zero issues with reliability, etc. with v6 Beta 12.  There are a couple of outstanding GUI issues, and some users are reporting spindown problems with some WD drives, but those are relatively minor.  IMHO there's no reason not to upgrade.  Utilizing all the memory in the machine, support for Docker & KVM, and the ability to use other file systems are huge reasons to take the leap.

 

Both of my production servers are on 6B12 with no hiccups.

 

I've also discovered that the APC battery backup plugin has been updated for v6 (http://lime-technology.com/forum/index.php?topic=34994.0) so there are no longer any issues holding me back from upgrading to v6.

 

I'm only worried about data integrity and reliability, so any GUI glitches are of no concern to me.  But none of the v6 betas I've tried has resulted in any data issues, losses, or other drive issues, so it looks like I will be upgrading today once a replacement 4TB arrives via UPS.


I have had zero issues with reliability, etc. with v6 Beta 12.  There are a couple of outstanding GUI issues, and some users are reporting spindown problems with some WD drives, but those are relatively minor.  IMHO there's no reason not to upgrade.  Utilizing all the memory in the machine, support for Docker & KVM, and the ability to use other file systems are huge reasons to take the leap.

 

Both of my production servers are on 6B12 with no hiccups.

 

I've also discovered that the APC battery backup plugin has been updated for v6 (http://lime-technology.com/forum/index.php?topic=34994.0) so there are no longer any issues holding me back from upgrading to v6.

 

I'm only worried about data integrity and reliability, so any GUI glitches are of no concern to me.  But none of the v6 betas I've tried has resulted in any data issues, losses, or other drive issues, so it looks like I will be upgrading today once a replacement 4TB arrives via UPS.

There was only one beta (I think 9) that had the possibility of corruption with ReiserFS (although I don't think there were any reports on unRAID that could be proven).

 

Considering that ReiserFS is no longer being maintained, it's not a bad idea to convert all of your disks to XFS.  But there's really no pressing need to.  Any new disks I add to my array are set as XFS, and the old ones are mostly still Reiser.  I've been gradually working on transferring them over, but life always seems to get in the way.

 

Every 3 months I run a complete MD5 check against the entire array, and it has never found any corruption of files, bit-rot, etc.  My last one just completed last week.
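For anyone wanting to set up the same kind of periodic check, here's a rough sketch of how it can be scripted. The disk and manifest paths are illustrative assumptions (unRAID mounts data disks under /mnt/diskN and the flash drive at /boot), not the poster's actual setup:

```shell
#!/bin/bash
# Sketch of a per-disk MD5 manifest check. Paths below are assumptions.
DISK=/mnt/disk1
SUMFILE=/boot/checksums/disk1.md5

mkdir -p "$(dirname "$SUMFILE")"

# Generate the manifest (re-run after adding new files to the disk)
find "$DISK" -type f -print0 | xargs -0 -r md5sum > "$SUMFILE"

# Months later: verify. With --quiet, md5sum prints only files that
# FAILED, so any output at all points to corruption or bit-rot.
md5sum -c --quiet "$SUMFILE"
```

Repeat per disk (disk1 through diskN) and the whole array gets covered; the verify step is read-only, so it's safe to run against a live array.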


More than a handful of users have been having stability issues caused by CPU stalls that appear to be RFS-related, judging by the kernel dump info. Once they converted completely over to XFS, their CPU stall issues were solved. It's likely a good idea to move off of RFS when possible, before you get hit by the same issue.


More than a handful of users have been having stability issues caused by CPU stalls that appear to be RFS-related, judging by the kernel dump info. Once they converted completely over to XFS, their CPU stall issues were solved. It's likely a good idea to move off of RFS when possible, before you get hit by the same issue.

 

I'm assuming this is with v6 only; that is, there have been no common RFS/CPU stalls under v5, no?

 

And were these stalls during read, write, or both types of operations?  My media server is mostly read, except when transferring media over to the server, whereas my backup unRAID (backup in the sense that it's used as a backup for my computers' hard drives) is mostly involved with write operations, and I don't recall experiencing stalls or stability issues in the year or more that I've been running the v6 betas.  Were these problematic issues using mixed RFS/XFS/BTRFS formatting?

 

EDIT: I just noticed that one of the drives in my backup unRAID, the last one added to expand the array, had been formatted as XFS.  Hmm, never knew that, though I must admit I rarely look at change logs or keep up to date with announcements, as I upgrade betas whenever I see a new version on LT's website.  So I've been running a mixed RFS/XFS system for the past year with no issues.


Ugh.  I just discovered that the only way to reformat drives is to basically clear off the contents of a drive, stop the array, change the file system type on the Device Settings page, restart the array and allow unRAID to format the drive.  Rinse, repeat.  For each drive.

 

94TB.  This... will... take... some... time...  :(


Ugh.  I just discovered that the only way to reformat drives is to basically clear off the contents of a drive, stop the array, change the file system type on the Device Settings page, restart the array and allow unRAID to format the drive.  Rinse, repeat.  For each drive.

 

94TB.  This... will... take... some... time...  :(

Yes - but although the elapsed time is long, at least you can let the computer do most of the work.

 

Because of the elapsed time it takes, many people seem to be doing it when a drive needs replacing/upgrading and otherwise leaving things as they are.


Ugh.  I just discovered that the only way to reformat drives is to basically clear off the contents of a drive, stop the array, change the file system type on the Device Settings page, restart the array and allow unRAID to format the drive.  Rinse, repeat.  For each drive.

 

94TB.  This... will... take... some... time...  :(

Yes - but although the elapsed time is long, at least you can let the computer do most of the work.

 

Because of the elapsed time it takes, many people seem to be doing it when a drive needs replacing/upgrading and otherwise leaving things as they are.

 

True, the computer does all the work.  But moving essentially 4TB worth of data at a time probably takes about a day of unattended operation, with the completion status checked the next day (I'm only guesstimating, because a 4TB data rebuild takes about a day to complete; I haven't performed a cp/mv operation of that magnitude yet).  At that rate, 23 data drives would take close to a month.

 

But I guess now's the best time for me since I just lost 4TB of data (system crashed while performing a reiserfsck --rebuild-tree, resulting in unRAID v5 marking the drive as unformatted with no option to rebuild the data from parity).  I can repopulate the lost video media after I convert to XFS...

ReiserFS disks wouldn't just corrupt on their own in v5, so why did you need to be running reiserfsck in the first place? And once you switch, whatever you did could also corrupt an XFS file system.

 

Did a SMART check and there were no errors.  There were no tell-tale signs in the syslog of an obvious hardware failure (just read errors on that drive while performing a copy operation between two different drives in which the suspect drive had no direct involvement).  Then I ran reiserfsck, which came back recommending --rebuild-tree.  Since there appeared to be no hardware errors that I could detect, and since on the several occasions over the years that I'd hit this rare "corruption" the repair options recommended by reiserfsck had always worked, I went ahead and performed the procedure.  During the rebuild, unRAID coughed up a syslog dump at the console and froze (http://lime-technology.com/forum/index.php?topic=37772.msg349454#msg349454).  After forcing a restart, unRAID marked the drive as unformatted.  Any subsequent reiserfsck attempt came back with some sort of incomplete status (IIRC) that I could not recover from or bypass.  When I installed a replacement drive, unRAID simply formatted it and data-rebuilt it as an empty drive.

 

It was only afterwards that I discovered that while unRAID is in Maintenance mode and performing a reiserfsck operation, it essentially marks the target drive as unformatted until the reiserfsck completes successfully.  Since I could never restart the reiserfsck and have it complete successfully, the drive's contents were doomed.

 

The replacement drive is a brand new one so it's a freshly minted XFS format with no data coming from the original drive whatsoever so there is no "corruption" being propagated into the "new" array (its data is unrecoverable anyways).  There were no errors noted by unRAID on any other drive.  There have been no errors, hardware or otherwise, since installing a new drive.  So "whatever I did" or am doing should not cause any corruption of the new XFS system as there is absolutely nothing to indicate any current issues with the system, which has been running fine for the past 11 days.

 

Anyways, I'm not sure why you are implying that reiserfsck should never be performed, when the unRAID wiki actually specifies doing so as part of the troubleshooting process; I followed all the recommended steps from the wiki and from the reiserfsck results.  The one possible repercussion never mentioned anywhere was what could go wrong during a reiserfsck rebuild process: if I had known that a crash or other interruption preventing that command from completing successfully could result in total loss of the data on that drive, then perhaps I would have simply replaced the drive and had unRAID perform a data rebuild.

 

Then again, nothing is foolproof: a crash or other such event could occur even during a data rebuild, or the parity drive could go bad during said rebuild.  The end result is still data loss of the original drive.

 

We all take our chances in whatever course of remedial action we take when array errors occur.  At least the beauty of unRAID is that we only lose the data on the drive(s) that become unusable for whatever reason.

I would have simply replaced the drive and had unRAID perform a data rebuild.
I'm not sure from reading your saga whether you meant this statement at face value or not, but I want to clarify just in case. unRAID has no way to rebuild your data; it can only recreate the drive in total. So any corruption on the drive will exist on the rebuilt drive as well. If you had physically pulled the drive and worked on the replacement drive, you would most likely have had the same results, but you would have had another copy of the corrupted drive to work on, to possibly try different recovery options.

I would have simply replaced the drive and had unRAID perform a data rebuild.
I'm not sure from reading your saga whether you meant this statement at face value or not, but I want to clarify just in case. unRAID has no way to rebuild your data; it can only recreate the drive in total. So any corruption on the drive will exist on the rebuilt drive as well. If you had physically pulled the drive and worked on the replacement drive, you would most likely have had the same results, but you would have had another copy of the corrupted drive to work on, to possibly try different recovery options.

 

Yes, I understand the distinction: unRAID can only "rebuild the data" in regards to whatever the parity/data drives calculate the data to be, and cannot "remove" any corruption that had been incorporated into the parity drive.

 

The suspect drive has been completely replaced, and since unRAID v5 simply created no data on the replacement drive from its "data rebuild", there is no "corruption" on that new drive (which I subsequently reformatted to XFS).  I did NOT perform any parity check/update on the parity drive whatsoever.  In fact, since upgrading to v6 I decided to have the parity drive rebuilt from scratch, trusting the integrity of the existing data drives.

 

You've brought up a great point which further strengthens the case for performing a reiserfsck: if the operation results in a recommendation for a --rebuild-tree, doing so should also correct any "corruption" of the parity data that may have been incorporated from a corrupted data drive.  Simply replacing the suspect drive outright and allowing unRAID to perform a data rebuild (by the way, I'm using the exact term "data rebuild" that all unRAID documentation and references use) would allow any corrupted parity data to be propagated onto the replacement data drive.

 

UPDATE:  Just getting back to the original thread topic discussion, it looks like it will be much faster copying data over as it has taken two hours to copy 600+ GB of data so far.  At this rate, it should take just under 8 hours for each 4TB drive, meaning I could theoretically do a couple drives or more per day and get the entire media server converted in under two weeks, if all goes well.


...  it has taken two hours to copy 600+ GB of data so far.  At this rate, it should take just under 8 hours for each 4TB drive ...

 

Interesting math  :)

 

Note that 2 hours for 600GB = 300GB/hour

At that rate, in 8 hours you'll copy 2400GB -- this is hardly "... just under 8 hours for each 4TB ..."
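Spelling the arithmetic out (using the 600GB-in-2-hours figure from the quoted post, and treating a 4TB data disk as roughly 4000GB):

```shell
# 600 GB copied in 2 hours -> 300 GB/hour
rate=$((600 / 2))

# Hours needed for one ~4000 GB (4TB) data disk
awk -v r="$rate" 'BEGIN { printf "%.1f hours per 4TB drive\n", 4000 / r }'
# prints "13.3 hours per 4TB drive"
```

So at that rate a full 4TB disk is closer to an overnight-plus-half-a-day job than an 8-hour one, which stretches the "couple drives per day" estimate accordingly.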

 

