UnRAID 6.0 not recognizing precleared drives properly



I am really getting frustrated by this, and am hoping someone can help.

 

Since moving to 6.0 I've hit an issue where, even though I run the preclear process on each drive, when I go to add it to unRAID I see the following:

 

Stopped. Found 1 new disk(s).  Clear will completely clear (set to zero) the new disk(s). Once clear completes, the array may be Started, expanding the array to include the new disk(s).

Caution: any data on the new disk(s) will be erased! If you want to preserve the data on the new disk(s), reset the array configuration and rebuild parity instead.

Yes I want to do this

 

I just finished preclearing a 4TB drive (which took 42 hours), and now, when I stop the array and try to add it, unRAID wants to clear it again (which will take 8-10 hours, during which unRAID is unavailable).

 

I had been running 6.0-beta3 and saw this issue with the drive before rebooting to move to beta4 (I tested with the drive I ran the standard preclear on, and with the one I precleared using bjp's new script - both behave the same). After rebooting into beta4 I am having the same issue.

 

Can anyone suggest anything I can do to resolve this? It's really starting to piss me off - these are the 3rd and 4th drives I've had this issue with.

 

Is anyone else experiencing anything similar?

Link to comment

Yes, I am using Joe's latest version, as well as the drive I precleared with bjp's new script - both drives behave the same way. I have used the same scripts on another 5.0.5 server I am building for a family member and this issue never occurred.

 

I will try to start up a 5.0.5 USB key later tonight and see if I get the message. I don't need to start the array for it to appear (just assigning the drive to an empty slot in the GUI makes the message pop up), so at least I don't need to muck up the drive like the last one, which I formatted on 5.0.5 to try to avoid this issue.

 

I am surprised no one else has reported anything similar (I had noted this issue early in the beta3 forum), which makes me wonder if it's something with my config - however, I did a clean install of 6.0 when I moved from 5.0; I just copied over the shares file and re-assigned all the drives manually.

 

 

Link to comment

Does unRAID then proceed to clear the drive once more? Or does it always give that message and then skip the clearing step once it determines the drive has already been cleared?

 

In any case, at least one 5.X version did not properly recognize the preclear signature.  That bug might be back in 6.0.

Link to comment

I just now tried assigning a precleared drive to the array and I saw the same thing: a warning that starting the array will cause the disk to be cleared. I unassigned the disk before starting the array. I then checked the preclear signature with "preclear_disk.sh -t" and the disk is reported as NOT precleared. In hindsight, I should have checked the preclear signature before assigning it to the array, but the only things I've done with the drive since preclear are use hdparm to set the spindown time and manually spin it down.

 

The disk was a Seagate 4T precleared on unRAID 6.0 b3 with preclear_disk.sh v1.14.

Disk was assigned in unRAID 6.0 b4.
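
For anyone wanting to reproduce the check, this is roughly the sequence (a sketch; /dev/sdX stands in for your actual device node):

# Check whether the preclear signature is still intact; the -t option
# only reads the drive, it doesn't clear anything:
preclear_disk.sh -t /dev/sdX
# The script then reports the disk as either precleared or NOT precleared.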

Link to comment

I then checked the preclear signature with "preclear_disk.sh -t" and the disk is reported as NOT precleared.

 

Since it's unlikely Joe's pre-clear script has suddenly decided to stop working [ :) ], it seems that either preclear doesn't work correctly under the 64-bit kernel (Joe??) or something in the 64-bit code is keeping the system from seeing the signature correctly.

Link to comment

Since it's unlikely Joe's pre-clear script has suddenly decided to stop working [ :) ], it seems that either preclear doesn't work correctly under the 64-bit kernel (Joe??) or something in the 64-bit code is keeping the system from seeing the signature correctly.

Or the process of assigning the disk in 6.0 somehow changes the preclear signature. I'm currently preclearing to try again.

Link to comment

Since it's unlikely Joe's pre-clear script has suddenly decided to stop working [ :) ], it seems that either preclear doesn't work correctly under the 64-bit kernel (Joe??) or something in the 64-bit code is keeping the system from seeing the signature correctly.

Or the process of assigning the disk in 6.0 somehow changes the preclear signature. I'm currently preclearing to try again.

Yes, that's also possible. I hadn't considered that, since you were checking the signature using preclear; but since this was on a disk you had already tried assigning, that could indeed be the case.

 

Link to comment

Does unRAID then proceed to clear the drive once more? Or does it always give that message and then skip the clearing step once it determines the drive has already been cleared?

 

In any case, at least one 5.X version did not properly recognize the preclear signature.  That bug might be back in 6.0.

 

Yes, unRAID proceeds to clear the drive again (clearing a 4TB drive takes almost 10 hours). unRAID is completely unusable during this time (it doesn't mount the array). Once the clear is done, you are back at the menu where you can start the array. Once started, you get the option to format the drive.

Link to comment

I just now tried assigning a precleared drive to the array and I saw the same thing: a warning that starting the array will cause the disk to be cleared. I unassigned the disk before starting the array. I then checked the preclear signature with "preclear_disk.sh -t" and the disk is reported as NOT precleared. In hindsight, I should have checked the preclear signature before assigning it to the array, but the only things I've done with the drive since preclear are use hdparm to set the spindown time and manually spin it down.

 

The disk was a Seagate 4T precleared on unRAID 6.0 b3 with preclear_disk.sh v1.14.

Disk was assigned in unRAID 6.0 b4.

 

I see the same thing, but like you I had already tried to add the drive to the array once, so I'm not sure how it reported prior to that.

 

I am happy to see I am not the only one with the issue though.

Link to comment

I then checked the preclear signature with "preclear_disk.sh -t" and the disk is reported as NOT precleared.

 

Since it's unlikely Joe's pre-clear script has suddenly decided to stop working [ :) ], it seems that either preclear doesn't work correctly under the 64-bit kernel (Joe??) or something in the 64-bit code is keeping the system from seeing the signature correctly.

 

You are correct. My title isn't 100% accurate, as it's not an issue with preclear (presumably, since it hasn't changed). It's more likely an issue with how 6.0 handles precleared drives.

Link to comment

Okay, I updated my subject to be more accurate (I believe) since the preclear script hasn't changed in a while.

 

I also tried booting into a virgin 5.0.5 USB and manually reassigned all my disks, started the array, stopped it, and tried to add the disks, but they are still reported as needing to be cleared. As someone else suggested, though, I don't know whether my earlier attempt to add them in unRAID 6.0 already wiped whatever flag indicates the disk has been cleared.

 

If I run preclear_disk.sh -t, both 4TB disks report they are not precleared.

 

I also noticed one other thing... I had thought I was using the latest preclear from Joe, but I forgot to copy it to my 5.0.5 USB and noticed the latest is 1.14. It got me thinking, because the reports I had been running for bjp's new script show 1.13:

 

================================================================
===1.13
== invoked as: ./preclear_disk.sh -A /dev/sdc
==  WDC WD40EZRX-00SPEB0    WD-WCC4E0503635
== Disk /dev/sdc has been successfully precleared
== with a starting sector of 1
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 12:00:21 (92 MB/s)
== Last Cycle's Zeroing time  : 10:39:12 (104 MB/s)
== Last Cycle's Post Read Time : 19:35:00 (56 MB/s)
== Last Cycle's Total Time    : 42:15:32
==
== Total Elapsed Time 42:15:32
==
== Disk Start Temperature: 27C
==
== Current Disk Temperature: 29C,

 

Could this be the source of my issues? I see a note from Joe that versions earlier than 1.14 have issues with 2.2TB or larger drives.

 

Freddie - can you confirm whether you are using 1.13 or 1.14? You can check the rpt file on your flash under preclear_reports. It will be named something like preclear_rpt_WD-WCC4E0503635_2014-03-29.
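
If it helps, this is roughly how I checked mine (a sketch; it assumes the flash drive is mounted at /boot and the reports live in preclear_reports as described above):

# List the saved preclear reports and show the top of each one; the
# version number appears in the ===1.NN header line of the report:
ls /boot/preclear_reports/
head -n 3 /boot/preclear_reports/preclear_rpt_*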

 

Maybe that is my issue... I don't know. Unfortunately, since I'm using 4TB drives, it's not quick to retest and check the results.

 

I am going to add one 4TB drive to the array and let UnRAID re-clear it. I will then re-run preclear 1.14 on my other 4TB disk and see if there is any difference (unless Freddie can confirm his issue happens on 1.14 as well).

 

Link to comment

I am using preclear_disk.sh v1.14

 

Thanks for checking.

 

As I was thinking about this, I realized that even if I was using 1.13 instead of 1.14, I've had the same version of preclear for a while and had previously precleared 3TB drives with no issue on unRAID 5.0.x.

It looks like it may be unRAID 6.0 related. I will be interested to see your new preclear results, Freddie, since you will likely get yours before I do.

Link to comment

Although the issue doesn't appear to be due to the earlier version, I'd nevertheless make sure you switch to the latest pre-clear version (I suspect you've already done that -- just wanted to be sure).

The older 1.13 preclear version cleared large drives just fine. The 5.X (and probably 6.X) versions of unRAID use the disk config files differently... The 1.13 preclear script could not accurately detect which drives were assigned until after the array had been started. The 1.14 version added a check using the super.dat file to address that, so you would be less likely to shoot yourself in the foot by accidentally clearing the wrong drive.

 

The act of simply assigning a drive to the array will erase the "preclear signature", as you've discovered (in any version of unRAID). You must test the precleared drive (-t option) prior to assignment to verify it is still precleared.
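
In other words, the safe order of operations is something like this (a sketch, with /dev/sdX standing in for the drive you intend to add):

# Run the preclear, then verify the signature BEFORE touching the webGui:
preclear_disk.sh /dev/sdX      # takes many hours on a large drive
preclear_disk.sh -t /dev/sdX   # read-only check of the signature
# Only assign the drive to a slot after -t reports it as precleared;
# assigning first erases the signature.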

 

Joe L.

Link to comment

I precleared again without preread or postread.  I then checked the preclear signature with preclear_disk.sh -t and the drive is reported as NOT precleared.

 

The only things that happened between the finish of the preclear and the signature check were:

1. Drive spun down due to standby timeout previously set with hdparm -S

2. Mover ran

3. During mover, syslog shows one strange line (I think unrelated, but I don't know):

Mar 30 03:40:38 uwer logger: Cannot stat file /proc/2910/fd/255: No such file or directory

4. I checked drive power state with hdparm -S

 

I think my next step is to boot up a fresh install of unRAID 5.0.5 and check the preclear signature on the same drive, then try the cycle again in unRAID 6 beta 4 with addons removed (I currently have apcupsd, SNAP and ntfs-3g installed).

 

Link to comment

I precleared again without preread or postread.  I then checked the preclear signature with preclear_disk.sh -t and the drive is reported as NOT precleared.

 

The only things that happened between the finish of the preclear and the signature check were:

1. Drive spun down due to standby timeout previously set with hdparm -S

2. Mover ran

3. During mover, syslog shows one strange line (I think unrelated, but I don't know):

Mar 30 03:40:38 uwer logger: Cannot stat file /proc/2910/fd/255: No such file or directory

4. I checked drive power state with hdparm -S

 

I think my next step is to boot up a fresh install of unRAID 5.05 and check the preclear signature on the same drive.  Then try the cycle again in unRAID 6 beta 4 with addons removed (I currently have apcupsd, SNAP and ntfs-3g installed)

 

Since you're using a drive > 2TB, you might be running into an issue with GPT partitioning. Definitely try it without any addons, specifically anything that might modify sgdisk or the libraries it uses.

 

I had some oddities on my self-made 64-bit Slackware distro relating to this a while back. I eventually got it working manually after updating some things here and there. But that should absolutely not be needed for these official builds.
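
If you want to see exactly what preclear left on the disk, something like this should show it without writing anything (a sketch; substitute your device for /dev/sdX):

# Dump the MBR (first 512 bytes) and the partition table as the kernel
# sees it; both commands are read-only:
dd if=/dev/sdX bs=512 count=1 2>/dev/null | hexdump -C
fdisk -l /dev/sdX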

Link to comment

I precleared again without preread or postread.  I then checked the preclear signature with preclear_disk.sh -t and the drive is reported as NOT precleared.

 

The only things that happened between the finish of the preclear and the signature check were:

1. Drive spun down due to standby timeout previously set with hdparm -S

2. Mover ran

3. During mover, syslog shows one strange line (I think unrelated, but I don't know):

Mar 30 03:40:38 uwer logger: Cannot stat file /proc/2910/fd/255: No such file or directory

4. I checked drive power state with hdparm -S

 

I think my next step is to boot up a fresh install of unRAID 5.05 and check the preclear signature on the same drive.  Then try the cycle again in unRAID 6 beta 4 with addons removed (I currently have apcupsd, SNAP and ntfs-3g installed)

 

Since you're using a drive > 2TB, you might be running into an issue with GPT partitioning. Definitely try it without any addons, specifically anything that might modify sgdisk or the libraries it uses.

 

I had some oddities on my self-made 64-bit Slackware distro relating to this a while back. I eventually got it working manually after updating some things here and there. But that should absolutely not be needed for these official builds.

 

My 6.0-beta4 server is plugin-free. I have ArchVM running with all my plugins, so that shouldn't be the issue.

 

I am wondering if it's a >2TB issue as well. I am starting to reclear my 4TB drive and will post the result.

Link to comment

Just precleared 3 older 750GB and 1TB drives to create a v6 test bed. Using 1.13 on v6b4, all 3 got the correct preclear signature. Not sure what you guys are seeing.

 

Guess I spoke too soon. The first of the 3 drives was added to the array as disk1 without delay. Yesterday the second drive was added as parity and it kicked off a parity check? It's been so long since I've added a precleared parity disk that I don't remember if this is expected behavior.

 

Just now, however, adding the 3rd disk as disk2 resulted in it wanting to clear the disk again. I guess this is what you are referring to? It's only been 12 hours since the batch were all precleared, and disk2 was never touched otherwise. I then precleared disk2 again with v1.14 and it still wasn't precleared when checked with the -t option of preclear_disk.sh v1.14.

 

################################################################## 1.14
Model Family:     Hitachi Deskstar 7K1000
Device Model:     Hitachi HDS721075KLA330
Serial Number:    GTF202P8GGB6EF
LU WWN Device Id: 5 000cca 215c68744
Firmware Version: GK8OAB0A
User Capacity:    750,156,374,016 bytes [750 GB]

Disk /dev/sdd: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  1416016384   708008161    0  Empty
########################################################################
failed test 6
========================================================================1.14
==
== Disk /dev/sdd is NOT precleared
== 63 1416016322 1465149105
============================================================================
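
Reading the "failed test 6" output above, the three numbers appear to be the partition's start sector, the sector count recorded in the MBR, and the count the test expected: 63 + 1416016322 falls well short of the 1465149105 usable sectors, which would explain why the signature check fails. If you want to pull those fields straight off the disk, something like this should work (a read-only sketch; the offsets are the standard MBR locations for partition 1's LBA-start and sector-count fields):

# Partition entry 1 starts at byte 446 of the MBR; its 32-bit LBA-start
# field sits at offset 454 and its 32-bit sector count at offset 458:
dd if=/dev/sdd bs=1 skip=454 count=4 2>/dev/null | od -An -tu4   # start (expect 63)
dd if=/dev/sdd bs=1 skip=458 count=4 2>/dev/null | od -An -tu4   # size in sectors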

 

 

Link to comment

Just precleared 3 older 750GB and 1TB drives to create a v6 test bed. Using 1.13 on v6b4, all 3 got the correct preclear signature. Not sure what you guys are seeing.

 

Guess I spoke too soon. The first of the 3 drives was added to the array as disk1 without delay. Yesterday the second drive was added as parity and it kicked off a parity check? It's been so long since I've added a precleared parity disk that I don't remember if this is expected behavior.

 

Just now, however, adding the 3rd disk as disk2 resulted in it wanting to clear the disk again. I guess this is what you are referring to? It's only been 12 hours since the batch were all precleared, and disk2 was never touched otherwise. I then precleared disk2 again with v1.14 and it still wasn't precleared when checked with the -t option of preclear_disk.sh v1.14.

 

Yes, this is the same behaviour I was seeing. I don't know if your first disk was successful because you didn't have a parity drive yet, but the 3rd-disk experience is what I've seen. I am re-preclearing a 4TB drive to try this again; I want to confirm the drive is marked as precleared before I try to add it to the array (you can run preclear_disk.sh -t /dev/sdX to check the status).

 

As for the parity drive, it presumably wasn't a parity check but a parity build that it kicked off, and yes, I think that behaviour is normal as well.

Link to comment

As for the parity drive, it presumably wasn't a parity check but a parity build that it kicked off, and yes, I think that behaviour is normal as well.

 

That's correct. It wasn't a parity check -- it was the initial parity sync. Pre-clearing has NO impact on whether or not a parity sync is required when you add the parity drive. The sole purpose of a pre-clear (other than confidence-testing the drive) is to allow a drive to be added to a parity-protected array without the need to clear it first. The pre-clear "signature" tells unRAID that the drive has been completely zeroed -- and since unRAID then "knows" it's all zeroes, it also knows that adding the drive doesn't require any changes to the parity drive for parity to remain valid. That's the reason, by the way, that unRAID wipes out the pre-clear "signature" -- it writes zeroes over it so the drive is truly all zeroes.

 

Pre-clearing is completely unnecessary (again, except as a confidence check for the drive) if you're adding a drive to a system without a parity drive; and it's also irrelevant on a parity drive, since a parity sync will be required as soon as it's assigned to that role.
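
To make the "all zeroes leaves parity valid" point concrete: single parity is, as I understand it, a bitwise XOR across the data drives, so a disk full of zeroes contributes nothing. A toy demonstration in bash:

# Toy demonstration (assuming single-parity XOR semantics): XORing in
# an all-zero "disk" leaves the parity byte unchanged.
d1=0xA5; d2=0x3C
parity=$((d1 ^ d2))
parity_after=$((d1 ^ d2 ^ 0x00))   # the new, all-zero disk
echo "$parity $parity_after"       # same value twice: parity still valid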

 

Link to comment

I've done a bunch of testing and concluded that preclear is not writing a valid preclear signature on some drives in unRAID 6. My tests included:

  • A 4TB disk precleared in 6.0 beta4 tests as NOT precleared in both 6.0b4 and 5.0.5. I precleared with all plugins removed, but I still had the screen package installed, and I had booted unRAID with Xen and IronicBadger's archVM running.

  • A 250GB disk precleared in 6.0 beta4 tests as precleared, even with Xen and my selection of plugins installed.

I also modified the preclear_disk.sh script to be able to create very dangerous disks that carry the preclear signature without actually having cleared the whole disk. I call it fake_clear, and it lets me test much more quickly (a conceptual sketch of the idea follows below).

  • A 250GB disk fake_cleared in 6.0b4 tests as precleared in both 6.0b4 and 5.0.5.

  • A 4TB disk fake_cleared in 6.0b4 tests as NOT precleared in both 6.0b4 and 5.0.5. I have run the fake_clear in many 6.0b4 boot configurations, including Safe Mode without Xen and with the default go file.

  • A 4TB disk fake_cleared in 5.0.5 tests as precleared in both 6.0b4 and 5.0.5.

I would have guessed the problem is related to disks over 2TB, but tr0910's results seem to contradict this.
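
I won't post the modified script, but conceptually something equivalent to fake_clear boils down to stamping a valid signature without the hours of zeroing. A hypothetical sketch, assuming the -t test only inspects the MBR and that both disks are the same size - and again, this is dangerous, so scratch disks only:

# DANGER: the second dd writes to /dev/sdY. Test/scratch disks only.
dd if=/dev/sdX of=/tmp/preclear_mbr.bin bs=512 count=1   # sdX genuinely tests as precleared
dd if=/tmp/preclear_mbr.bin of=/dev/sdY bs=512 count=1   # stamp the signature onto sdY
preclear_disk.sh -t /dev/sdY                             # should now report precleared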

Link to comment
