unRAID Server Release 5.0-beta6a Available



Upgraded from 4.7 to 5.0b6a this week. All 21 drives in the array showed up properly as aligned or unaligned. Cache drive working properly. Everything seems to be working fine. Sickbeard, SABnzbd and the powerdown script are the only things I've installed so far, but all are working with no issues.

 

Josh

Link to comment

Anyone else had the issue of preclearing all disks with the preclear script, and then, upon creating an array, the format process proceeds to format the entire array rather than detecting the precleared disks and the format taking 2-3 min?

 

any ideas?

 

I then have to run "initconfig", then preclear each disk again using "preclear_disk.sh -n -D -A /dev/sdX" etc. for each disk. Here's hoping it will work after this second preclear.
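For reference, the preclear flags mentioned above could be scripted across several disks in one go. This is a hedged sketch, not something from the thread: sdb/sdc/sdd are placeholder device names (check yours first, e.g. with fdisk -l), and the commands are only echoed rather than run.

```shell
# Sketch only: print the preclear command for each disk in a list.
# sdb/sdc/sdd are placeholder device names -- check yours before running.
DISKS="sdb sdc sdd"
for d in $DISKS; do
  CMD="preclear_disk.sh -n -D -A /dev/$d"
  echo "$CMD"    # drop the echo (run the command directly) to actually preclear
done
```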

 

 

Link to comment

Quote: "Anyone else had the issue of preclearing all disks with the preclear script..." [quoted from above]

 

 

 

Not sure what you mean by "format the entire array". Each new disk has to be formatted. Parity will always be built for a new array, or after the initconfig command.

 

Peter

Link to comment

Quote: "Not sure what you mean by 'format the entire array'..." [quoted from above]

 

 

From scratch with 5.0-beta6a, I precleared 1x 2TB drive and 9x 1.5TB drives. After the 28 hours it took to complete the lot, I assigned them to an array. Upon starting the array, it went on to do a parity sync. I stopped the parity sync immediately and then chose to format the array (as all disks showed unformatted). This stage, rather than taking 5 min or so, was actually ignoring the pre-cleared state of the drives and formatting them from scratch: after 30 min of leaving it to think, I noticed it was only at 2% complete on the format. So I wonder if I am doing something wrong. According to the preclear_disk.sh thread here on the forum, this process should not take that long. Or do I need to assign them all to an array, then reboot the box, then format?

 

 

Any ideas? This issue defeats the purpose of being able to quickly add drives because they are pre-cleared.

 

 

Link to comment

Quote: "From scratch with 5.0-beta6a, I precleared 1x 2TB drive and 9x 1.5TB drives..." [quoted from above]

 

 

Basically, since you interrupted the normal process, I have no idea what unRAID will do.

 

Once parity is established, THEN the pre-clearing will be recognized, and unRAID will not clear a pre-cleared drive when adding it to an already-established array.

 

Since you did not establish parity, it is as if it is a new array; but the drives are no longer "new" either, because you interrupted the process after it had already erased the pre-clear signature and put a file-system partition in place. So it proceeds as if the disks were not pre-cleared.

 

Good luck with your array.  Do not try to cut corners, and do not cancel processes once they start unless you know the impact.

 

I personally think starting an array without parity is a mistake, but others think loading the data fastest (without parity) is their highest priority.  Take your choice: fastest write speed, or safer data.  I personally prefer safer data.

 

Joe L.

Link to comment

So if I

 

1. Preclear All Disks

2. Create Array (and let it calculate Parity for it)

3. Then Format the array.

 

based on your experience, will it detect that the drives are clean when I hit format, and then just work, rather than formatting the entire array, which takes hours?

 

 

Link to comment
Quote: "then upon starting the array it went on to do a parity sync. now i stopped the parity sync immediately then choose to format the array (as all disks show unformated)" [quoted from above]

 

The parity sync must complete for a new array; you can't stop it, or you will not have parity protection. I'm quite certain you can press the format button at any time during the parity sync.

 

Once you interrupted it, unRAID must have thought the parity was correct and that you were adding new, uncleared drives to an existing array.

 

There is absolutely no point in running the preclear again. Do another initconfig and it should just allow you to format if necessary, and it will also build parity.

 

unRAID will never preclear any disk when you create a new array. It just builds parity to match whatever is on the disks, cleared or not. It will also create the partitions and allow you to format the disks if necessary.

 

There isn't any issue. You screwed up the process when you cancelled the initial parity build.

 

Peter

 

Link to comment

Quote: "There is absolutely no point in running the preclear again. Do another initconfig..." [quoted from above]

 

 

I had already precleared the disks again and started with a freshly formatted unRAID USB stick, created the array, and will now let it do its initial parity sync before the format stage. From what has been said above, it should detect the clean drives and the drives should "format" very quickly.

 

feel free to correct me :P

Link to comment

You're still not listening. unRAID doesn't give a damn about the disks being "clean" when you create a new array.

 

If the format button is available then you can use it at any time.

 

Peter

 

OK, with the parity sync now at 10% (I let it go ahead without cancelling), I then hit format.

 

I now have logs showing just this....

 

May 9 13:38:15 Aurora-NAS01 logger: mkreiserfs 3.6.21 (2009 www.namesys.com)
May 9 13:38:15 Aurora-NAS01 logger: 
May 9 13:38:15 Aurora-NAS01 logger: mkreiserfs 3.6.21 (2009 www.namesys.com)
May 9 13:38:15 Aurora-NAS01 logger: mkreiserfs 3.6.21 (2009 www.namesys.com)
May 9 13:38:15 Aurora-NAS01 logger: 
May 9 13:38:15 Aurora-NAS01 logger: 
May 9 13:38:15 Aurora-NAS01 logger: mkreiserfs 3.6.21 (2009 www.namesys.com)
May 9 13:38:15 Aurora-NAS01 logger: 
May 9 13:38:15 Aurora-NAS01 logger: mkreiserfs 3.6.21 (2009 www.namesys.com)
May 9 13:38:15 Aurora-NAS01 logger: 

Link to comment

Hi!

 

I've been using version 5.0-beta6a for weeks now and I'm quite happy. But a week or two ago I noticed some unwanted behaviour.

 

I created a disk share which is bound to only one disk (include disks: disk7; exclude disks: none). All my downloads go to this share from my Windows 7 machine. All the other drives should go into standby, but they don't; the complete array stays up and spinning. I guess this has something to do with the error I see at 4.40 in my syslog.

 

Besides, I don't know why I'm getting these "afpd" messages. I deactivated AFP after switching to Windows and don't need it.

 

Rebooting the machine helps, but the next day I have the same problem.

 

Installed plugins: unmenu, unrar

 

Thanks.

 

Bye.

unraidlog.txt

Link to comment

I'm doing my head in. Can someone explain how to have the array "see" the drives as pre-cleared? I have a fresh install of unRAID 5.0b6a along with all my drives fully pre-cleared. I assigned all the drives and then started the array. After that I let it finish its parity sync. Then I hit the format button to format the array. After this it actually started formatting (clearing again), as the writes for all drives were climbing very fast, and have been for the last 3 hours (I killed the earlier attempt, then wrote zeros to the MBRs of all drives, then precleared them again for this run).

 

What am I doing wrong? I just want to have a new build with a clean pre-cleared array. HELP!!

 

Should I just create a small array (parity + 2x data) and then add drives one at a time? Will that fix it?

Link to comment

WngmanNZ - Did the system come up with all green indicators after the parity build completed? If so, then I doubt it is clearing the drives again, but look in the syslog, because there should be a percent-complete logged in there if it is clearing. And if you find it, then this is an issue with the OS to report.

 

It sounds to me like the OS is having trouble formatting that many drives at once. You may have to manually format some or all of them. I believe the formatting can be done like so:

 

mkreiserfs -q /dev/mdX  where X is the drive number.

 

Hopefully, someone else will confirm.
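If more than one disk needs that manual treatment, the same command could be looped over the md devices. This is only a sketch of Peter's suggestion, not a confirmed procedure: disk numbers 1-3 are placeholders, and since mkreiserfs destroys data the commands are echoed rather than run.

```shell
# Sketch: print the manual format command for placeholder data disks 1-3.
# /dev/mdX is the parity-protected device for disk X.
for n in 1 2 3; do
  FMT="mkreiserfs -q /dev/md$n"
  echo "$FMT"    # drop the echo to really format (destroys data on that disk)
done
```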

 

The thing is probably hung up trying to format, so you would either have to try to kill any format processes and emhttp and then restart emhttp (/usr/local/sbin/emhttp &), or just hard reset and endure another parity check. You can do the manual formats and such during the new parity check, though - the mdX devices refer to the parity-protected disks, so any command-line operation on an mdX device maintains parity.
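The kill-and-restart recovery described above might be sketched as follows. Assumptions on my part: the stuck processes are mkreiserfs and emhttp, and everything is echoed so nothing is actually killed here.

```shell
# Sketch: stop stuck format processes and emhttp, then relaunch emhttp.
for p in mkreiserfs emhttp; do
  KILLCMD="killall $p"
  echo "$KILLCMD"          # drop the echo to really send SIGTERM
done
RESTART="/usr/local/sbin/emhttp &"
echo "$RESTART"            # relaunch the unRAID web interface in the background
```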

 

You could do a small array and then add a few drives at a time, but that would require you to preclear again or let the server clear the disks. There is a preclear option to just zero the drives, which makes it quicker, but it's still 6 or 8 hours per drive you shouldn't have to spend.

 

You could also simply add a few data disks with an initconfig and format them, then add a few more and format them, continuing until all the data disks are in place. Then, assign the parity and build the initial parity. unRAID won't clear the data disks when there is no parity disk assigned.

 

Peter

 

Link to comment

Hi!

 

The AFP entries in the log don't matter. As long as this "spinning drives" problem is not solved I'm going back to 4.7, which is running a parity check right now. I can't afford running 10 disks 24 hours a day.

 

Bye.

All you need do is change the spindown setting in unRAID to "never", and then put a line like this in your config/go script for each of your disks:

hdparm -S 242 /dev/sdX

where sdX is the three-letter designation for your disk.
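Joe L.'s go-script addition could be written as a loop over several disks. A sketch under assumptions: sdb/sdc are placeholder device names, -S 242 encodes a drive-firmware standby timeout of roughly one hour, and the commands are echoed here rather than run.

```shell
# Sketch of config/go additions: one hdparm standby timeout per disk.
for d in sdb sdc; do       # placeholder device names
  LINE="hdparm -S 242 /dev/$d"
  echo "$LINE"             # in the real go script, run hdparm directly
done
```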

Link to comment

Quote: "As long as this 'Spinning drives' problem is not solved I'm going back to 4.7..." [quoted from above]

 

Have you tried a manual spin-down? Others here are reporting the spin-down works fine after doing it manually one time.

 

Peter
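For reference, one way to issue that manual spin-down from the console (my assumption about what is meant here, not something stated in the thread): hdparm -y puts a drive into standby immediately. sdb is a placeholder device name, and the command is echoed rather than run.

```shell
# Sketch: build the immediate-standby command for one placeholder disk.
DEV="sdb"                  # placeholder device name
SPIN="hdparm -y /dev/$DEV"
echo "$SPIN"               # drop the echo to really spin the drive down
```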

Link to comment
