Preclear Report Question



Preclear completed for 3 out of 4 of my drives.  The Hitachi looks fine, but there are comments on both of the new Samsung drives that did not appear on the Hitachi.  They refer to Program Fail Cnt Total (on one of the drives) and Multi Zone Error Rate (on both drives).

 

Can anyone help me interpret these reports?  I'm not familiar with a lot of this.

 

 

 

preclear_finish__S2H7J1PB500855_2011-07-12.txt

preclear_rpt__S2H7J1PB500855_2011-07-12.txt

preclear_start__S2H7J1PB500855_2011-07-12.txt


All you need to read is in the report(s).

 

========================================================================1.11

== invoked as: ./preclear_disk.sh -A /dev/sda

==  SAMSUNG HD204UI    S2H7J1PB500855

== Disk /dev/sda has been successfully precleared

== with a starting sector of 64

== Ran 1 cycle

==

== Using :Read block size = 8225280 Bytes

== Last Cycle's Pre Read Time  : 6:41:34 (83 MB/s)

== Last Cycle's Zeroing time  : 5:55:50 (93 MB/s)

== Last Cycle's Post Read Time : 13:57:14 (39 MB/s)

== Last Cycle's Total Time    : 26:36:05

==

== Total Elapsed Time 26:36:05

==

== Disk Start Temperature: 28C

==

== Current Disk Temperature: 28C,

==

============================================================================

** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda

                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

    Multi_Zone_Error_Rate =  100    252            0        ok          0

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

============================================================================

 

 

and

 

========================================================================1.11

== invoked as: ./preclear_disk.sh -A /dev/sdc

==  SAMSUNG HD204UI    S2H7J1PB500857

== Disk /dev/sdc has been successfully precleared

== with a starting sector of 64

== Ran 1 cycle

==

== Using :Read block size = 8225280 Bytes

== Last Cycle's Pre Read Time  : 6:41:11 (83 MB/s)

== Last Cycle's Zeroing time  : 5:54:39 (94 MB/s)

== Last Cycle's Post Read Time : 13:55:55 (39 MB/s)

== Last Cycle's Total Time    : 26:33:12

==

== Total Elapsed Time 26:33:12

==

== Disk Start Temperature: 28C

==

== Current Disk Temperature: 28C,

==

============================================================================

** Changed attributes in files: /tmp/smart_start_sdc  /tmp/smart_finish_sdc

                ATTRIBUTE  NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

  Program_Fail_Cnt_Total =  100    252            0        ok          16994118

    Multi_Zone_Error_Rate =  100    252            0        ok          0

No SMART attributes are FAILING_NOW

 

0 sectors were pending re-allocation before the start of the preclear.

0 sectors were pending re-allocation after pre-read in cycle 1 of 1.

0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.

0 sectors are pending re-allocation at the end of the preclear,

    the number of sectors pending re-allocation did not change.

0 sectors had been re-allocated before the start of the preclear.

0 sectors are re-allocated at the end of the preclear,

    the number of sectors re-allocated did not change.

============================================================================
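If you want to see exactly what changed, the script leaves its before-and-after SMART snapshots in /tmp (the same paths shown in the report header above), so you can diff them directly.  A quick sketch, assuming the files are still there (on unRAID /tmp lives in RAM, so they vanish on reboot):

# Compare the SMART snapshot taken before the preclear with the one taken after
diff /tmp/smart_start_sda /tmp/smart_finish_sda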

 


What does this:

 

** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda

               ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

   Multi_Zone_Error_Rate =   100     252            0        ok          0

No SMART attributes are FAILING_NOW

 

and this:

 

** Changed attributes in files: /tmp/smart_start_sdc  /tmp/smart_finish_sdc

               ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE

  Program_Fail_Cnt_Total =   100     252            0        ok          16994118

   Multi_Zone_Error_Rate =   100     252            0        ok          0

No SMART attributes are FAILING_NOW

 

mean?

 

This is what I saw at the end of the preclear.  What does a Program Fail Cnt Total of 16994118 mean?  That reading only shows up on one of the drives.  And what does an attribute value changing from 252 to 100 mean?

 

Thanks

 


Been doing some online digging.  A number of people have reported high and quickly rising raw values for the "Program Fail Count" attribute specifically on this drive, without having any actual HD issues.  I saw a few comments on the same issue with a Crucial SSD as well.  No one seems to be certain what it is or means, but it's a vendor-specific attribute.

 

I did find:

 

Description

The Program Fail Count (chip) S.M.A.R.T. parameter indicates the number of flash program failures.

 

Recommendations

This parameter is considered informational by most hardware vendors. Although degradation of this parameter can be an indicator of drive aging and/or potential electromechanical problems, it does not directly indicate imminent drive failure. Regular backup is recommended. Pay closer attention to other parameters and overall drive health.
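For anyone who wants to keep an eye on the raw value themselves, it can be read back at any time with smartctl (a sketch; /dev/sdc matches the report above, and the attribute name is as smartctl prints it for this drive):

# Print just the Program_Fail_Cnt_Total row from the SMART attribute table
smartctl -A /dev/sdc | grep Program_Fail_Cnt_Total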

 

Any further insight is appreciated.


I've not pre-cleared my drives - ever... even on a brand new system I built just last weekend.  Last I saw, Tom still didn't officially support pre-clearing drives.

 

Is that still the situation?

 

Russell

He did not write it, I did.  For that reason, he provides no support for it, but he has supported me in writing it for the general unRAID user population.  He provided me the details of how to create the older-style MBR pre-clear signature, and the newer one for GPT-partitioned drives over 2.2TB.

 

His only comment on the preclear script is in the release notes for the most recent versions of unRAID:

Finally, the MBR of any drive that already has a valid "unRAID" MBR will not be re-written, regardless of the setting of "Default partition format".  This includes unRAID MBR with "factory-erased" signature.  Therefore, if you use Joe L.'s excellent "preclear_disk" script, you must explicitly use the -A option to position partition 1 in sector 64 if this is what you desire.
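If you ever want to confirm where partition 1 was placed once the drive is in use, you can list the partitions in sector units.  A quick sketch (fdisk's output format varies by version):

# List partitions with offsets in sectors; after a preclear with -A,
# partition 1 should show a start sector of 64
fdisk -lu /dev/sda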

 

Obviously, you need not use it.  It is completely optional... but it does burn in the new drives and test them before you use them for your data.  About 1 out of every 5 drives fails during the preclear, or even before it starts.  The script has shown its value, but it does not guarantee anything, as a drive can still fail shortly after being cleared and installed...  Do you feel lucky?  Odds are in your favor.


Hi Joe,

 

Thanks for filling me in on the details of preclear - and thanks for creating it.  While I found the instructions on the main thread lengthy, the guys at GreenLeaf have written instructions for those of us unfamiliar with Linux, like me:  http://www.greenleaf-technology.com/Blogs/BlogHowTo/index.php?published-min=2011-02-01T00:00:00-00:00&published-max=2011-03-01T00:00:00-00:00

 

Since I've just built my second unRAID server, I decided to try out your script following their instructions.  I hope there are no issues with running four Telnet connections to preclear four drives at a time - that's what I'm doing - running the three-iteration preclear to check things out before I put this machine into use.

 

Neither set of instructions (yours or theirs) says what to do when the preclears complete.  I assume I just go to the Devices tab and assign the drives to an array - will I then be able to simply start the array, or do I need to do anything special to have that option available?  (Just asking ahead of time - my 2TB drives, times 3 iterations, look like they'll take a couple of days.)

 

One other question, if you're still here, Joe...  How much of the lifespan of my drives am I using up with these Preclear cycles?

 

Thanks again for the direct help Joe,

 

Russell


Thanks for filling me in on the details of preclear - and thanks for creating it.  While I found the instructions on the main thread lengthy, the guys at GreenLeaf have written instructions for those of us unfamiliar with Linux, like me:  http://www.greenleaf-technology.com/Blogs/BlogHowTo/index.php?published-min=2011-02-01T00:00:00-00:00&published-max=2011-03-01T00:00:00-00:00

 

Kyle (prostuff1) wrote that tutorial.  I'm glad you found it useful.  If you found any part of it difficult to follow, I'm sure he would appreciate any feedback.  You can email him at [email protected]

 

Since I've just built my second unRAID server, I decided to try out your script following their instructions.  I hope there are no issues with running four Telnet connections to preclear four drives at a time - that's what I'm doing - running the three-iteration preclear to check things out before I put this machine into use.

 

There's no problem with running four preclears at once.  The max is generally six, since you can only have six sessions open at once on the unRAID system console.  It is possible to bypass this restriction by using screen, but running too many preclears at once can cause your system to run out of memory and crash.  Exactly how many it takes to crash the system depends on the size of the drives and the amount of RAM you have.
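If you do go the screen route, a minimal pattern looks something like this (a sketch; the session names are just examples, and it assumes you start from the directory holding the script):

# Start one detached screen session per drive, each running its own preclear
screen -dmS preclear_sda ./preclear_disk.sh -A /dev/sda
screen -dmS preclear_sdb ./preclear_disk.sh -A /dev/sdb
# Re-attach to a session to check progress (Ctrl-A then d detaches again)
screen -r preclear_sda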

 

At GreenLeaf we run a minimum of two passes of preclear per drive.  However, we try not to run excessive preclear iterations as they do add needless wear-and-tear to the drive.  Three is OK, and I would say that four should be your maximum.
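For reference, the number of passes is set on the command line when you start the run; if I remember the script's options correctly, -c takes the cycle count:

# Run three back-to-back preclear cycles on one drive
# (-A positions partition 1 at sector 64; -c sets the number of cycles)
./preclear_disk.sh -A -c 3 /dev/sda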

 

Neither set of instructions (yours or theirs) says what to do when the preclears complete.  I assume I just go to the Devices tab and assign the drives to an array - will I then be able to simply start the array, or do I need to do anything special to have that option available?  (Just asking ahead of time - my 2TB drives, times 3 iterations, look like they'll take a couple of days.)

 

Correct, just go to the Devices tab and assign them to the array.  unRAID will see the special preclear signature on the disk and skip the 'clearing' phase.  The drives will still need to be formatted, though.  I recommend formatting at most two drives at once, as in some cases formatting multiple drives at once can cause system crashes.  No permanent damage done, just an annoyance.
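If you want to double-check a drive before assigning it, I believe the script can also verify the signature in a read-only test mode:

# Check for a valid preclear signature without writing to the disk
./preclear_disk.sh -t /dev/sda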

 

One other question, if you're still here, Joe...  How much of the lifespan of my drives am I using up with these Preclear cycles?

 

I'm also interested to hear Joe's answer to this one...  As I understand it, running a drive through a few passes of preclear is not going to significantly affect its lifespan, and it is definitely worth the trouble if you plan on trusting important data to the drive.

At GreenLeaf we have some test drives that we use to burn in new systems.  One of the tests we run involves running a pass of preclear on every single drive bay in the server (this tests the drive bays' backplanes, the SATA controller cards, the motherboard SATA ports, cabling, etc.).  These drives will see 10+ passes of preclear over the course of a month or so.  We've definitely seen some of these test drives fail within a few months of use.  In many cases, these test drives were already several years old, so we can chalk those failures up to old age.

However, in one case I used a brand new 2 TB Seagate LP drive (running firmware CC34) as a test drive.  Within a few months, it failed.  Unfortunately there's no way to tell if the failure was due to the faulty firmware, a mechanical error, or the excessive preclearing.  I RMA'd the drive, upgraded the firmware to CC35 on the replacement, and started using it as my personal server's parity drive (after running it through two passes of preclear).  It does report a few SMART anomalies, but nothing I'm too worried about.


I'd say you are using up about 20 to 30 hours of your drive's life.

 

If you wish to know more, and if your manufacturer rates the life of their drives in terms of "seek" operations, then perhaps you can check the corresponding SMART parameter (if it exists) against the expected life.
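For example, the usage counters most vendors expose all sit in the standard SMART table, so a quick look is (a sketch; attribute names vary by vendor, and not every drive reports all of these):

# Pull the wear-related counters from the SMART attribute table
smartctl -A /dev/sda | egrep 'Power_On_Hours|Load_Cycle_Count|Start_Stop_Count'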

 

Most disks in servers seek constantly, for years, and do not spin down.

 

Dust and dirt kill drives.  Poor power connections kill drives.  Excessive "G" forces kill drives.  Spinning on fluid bearings for 30 hours does not, unless there was dirt in the bearings to start with.

