unRAID Server Release 6.2.0-beta23 Available


Recommended Posts

I just upgraded to b23 from 6.1.9. I am not assigning a second parity drive right away, and I have something I don't see any other posts on.

 

I don't have a way to initiate a parity check. I've been looking for a setting I missed, but I'm lost.

 

ADDED:

Also, the main tab shows Parity2 as unassigned after starting up. Am I supposed to do a new config after upgrading?

 

Also, the start array button says the array will be unprotected, but if I start I get a green ball. So....?

On my system the button to initiate a parity check is on the Main tab, just under the button to stop/start the array. If you do not have it, then you need to provide a screenshot of the Main tab so we can see what you are seeing.
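
If the button genuinely isn't there, a parity check can usually also be started from a console/SSH session. A minimal sketch, assuming the stock mdcmd helper is present at /root/mdcmd (its location and arguments here are from memory, so verify on your own system):

    /root/mdcmd check              # start a correcting parity check (what the Main tab button does)
    /root/mdcmd check NOCORRECT    # or a read-only check that only reports sync errors
    /root/mdcmd status | grep -iE "mdState|mdResync"    # rough progress (mdResyncPos vs mdResyncSize)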
Link to comment

I just upgraded to b23 from 6.1.9. I am not assigning a second parity drive right away, and I have something I don't see any other posts on.

 

I don't have a way to initiate a parity check. I've been looking for a setting I missed, but I'm lost.

 

ADDED:

Also, the main tab shows Parity2 as unassigned after starting up. Am I supposed to do a new config after upgrading?

 

Also, the start array button says the array will be unprotected, but if I start I get a green ball. So....?

 

 

Looks like we have the same issue...

 

Looks like this, right?

unraid-master-diagnostics-20160624-1727_1.zip

Link to comment

There is a minor error in the "Help" system. Go to 'Shares', 'User Shares', and click on any User Share. Look at the 'NFS Security Settings' tab and you will find no 'Help' for those settings. There is 'Help' for both the 'Share Settings' and 'SMB Security Settings' tabs.

 

Looks like a minor oversight that should probably be easy to correct.

 

This is not an oversight. Unlike SMB or AFP, NFS doesn't have hidden or special shares, hence there are no italics in the display.

 

Link to comment

I just upgraded to b23 from 6.1.9. I am not assigning a second parity drive right away, and I have something I don't see any other posts on.

 

I don't have a way to initiate a parity check. I've been looking for a setting I missed, but I'm lost.

 

ADDED:

Also, the main tab shows Parity2 as unassigned after starting up. Am I supposed to do a new config after upgrading?

 

Also, the start array button says the array will be unprotected, but if I start I get a green ball. So....?

 

 

Looks like we have the same issue...

 

Looks like this, right?

 

Yes, exactly, complete with Cache 30. Also, looking at the cache tab, it will not allow assigning of slots. I also discovered that if I try to do a new config, I can't assign ANY disks to any slots, array or otherwise (thank you, unRAID, for the revert-back instructions).

 

If I reboot in safe mode all appears normal. So a plugin conflict, maybe? How do I go about resolving that? I have tried removing all of the plugins that were installed, and nothing changed. Only booting in safe mode seems to make any difference.

 

[UPDATE rather than adding a new post]

 

Like mikefallen below, I also just reinstalled fresh (I went all the way to a clean USB). Had a couple of hiccups (had to create a blank libvirt.conf file with nano so the updated libvirt would load), but all in all it was a pretty easy rebuild of the system. I have reinstalled all of the plugins and dockers that were running, and everything appears ship-shape and passed the parity check.
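
For anyone who hits the same symptoms and would rather isolate the plugin conflict than rebuild the flash, a minimal sketch, assuming plugins are loaded from .plg files under /boot/config/plugins (the usual location, but check your own flash first):

    mkdir -p /boot/config/plugins-disabled          # parking area on the flash
    mv /boot/config/plugins/*.plg /boot/config/plugins-disabled/
    # Reboot, confirm the webGUI behaves, then move the .plg files back one at a
    # time (rebooting between each) until the offending plugin shows itself.
    mv /boot/config/plugins-disabled/example.plg /boot/config/plugins/   # "example.plg" is a placeholder name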

Link to comment

I just upgraded to b23 from 6.1.9. I am not assigning a second parity drive right away, and I have something I don't see any other posts on.

 

I don't have a way to initiate a parity check. I've been looking for a setting I missed, but I'm lost.

 

ADDED:

Also, the main tab shows Parity2 as unassigned after starting up. Am I supposed to do a new config after upgrading?

 

Also, the start array button says the array will be unprotected, but if I start I get a green ball. So....?

 

 

Looks like we have the same issue...

 

Looks like this, right?

 

Yes, exactly, complete with Cache 30. Also, looking at the cache tab, it will not allow assigning of slots. I also discovered that if I try to do a new config, I can't assign ANY disks to any slots, array or otherwise (thank you, unRAID, for the revert-back instructions).

 

If I reboot in safe mode all appears normal. So a plugin conflict, maybe? How do I go about resolving that? I have tried removing all of the plugins that were installed, and nothing changed. Only booting in safe mode seems to make any difference.

 

 

I did a fresh install and everything is working great now; there definitely seem to be some hiccups when updating. I saved a backup of my USB, deleted everything, and put a fresh beta23 image on it. I have to redo all my dockers and stuff, but I just saved all the config files from appdata, so no big deal.
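
For anyone planning the same rebuild, a rough sketch of the backup step, assuming the flash is mounted at /boot and appdata lives at /mnt/user/appdata (adjust the destination share to your own layout):

    rsync -a /boot/ /mnt/user/backup/flash-$(date +%Y%m%d)/                  # whole flash: config, key file, plugins
    rsync -a /mnt/user/appdata/ /mnt/user/backup/appdata-$(date +%Y%m%d)/    # container settings for re-creating dockers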

Link to comment

There is a minor error in the "Help" system. Go to 'Shares', 'User Shares', and click on any User Share. Look at the 'NFS Security Settings' tab and you will find no 'Help' for those settings. There is 'Help' for both the 'Share Settings' and 'SMB Security Settings' tabs.

 

Looks like a minor oversight that should probably be easy to correct.

 

This is not an oversight. Unlike SMB or AFP, NFS doesn't have hidden or special shares, hence there are no italics in the display.

 

See this thread as the reason, I requested the addition of some 'Help' messages for the 'NFS Security Setting' tab:

 

    http://lime-technology.com/forum/index.php?topic=49960.0

 

There should be some explanation of what the 'Public', 'Secure' and 'Private' settings do in the 'Security' dropdown.  As an example, this is what appears for the 'SMB Security Setting' tab for 'Security':

 

    Summary of security modes:

    Public: All users including guests have full read/write access.

    Secure: All users including guests have read access; you select which of your users have write access.

    Private: No guest access at all; you select which of your users have read/write or read-only access.

 

Apparently, there are some other options that can be specified when choosing 'Private'.
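
For anyone hunting for those options before the help text is added, the 'Rule' field for Secure/Private NFS exports appears to take ordinary /etc/exports-style host entries; the hosts and options below are purely illustrative and not taken from any unRAID documentation:

    # Example rule: read/write for one trusted host, read-only for the rest of the subnet
    192.168.1.50(rw) 192.168.1.0/24(ro)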

 

Link to comment

Just upgraded, and now I'm seeing a blue 'Update Ready' mark next to the containers linuxserver/Plex:latest and yujiod/minecraft-mineos:latest. When I click update next to MineOS, it seems to be working, but as soon as I press Check for Updates, it turns back to the 'Update Ready' state.

 

Now this is where things get interesting. When I update Plex, however, after the job completes it turns red with the message 'Not Available'. As soon as I press Check for Updates, it turns back to the blue 'Update Ready' state.

 

I'm assuming this is a bug that has already been fixed for the next build?
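
Until that's sorted out, one way to check from the console whether an image really was refreshed is to compare digests with stock docker commands (the container name 'plex' below is just an example):

    docker images --digests | grep -i plex          # digest of the locally pulled image
    docker inspect --format '{{.Image}}' plex       # image ID the running container was created from
    docker pull linuxserver/plex:latest             # "Image is up to date" means nothing new was fetched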

Link to comment

Is anyone having trouble stopping the array?

 

I also had to reset my config because of some bug where I could not add hard drives (the option was grayed out), and the cache did not have a selection for how many drives; it was just blank. After renaming the config folder this issue was resolved, but I am still having trouble stopping the array.

Link to comment

Is anyone having trouble stopping the array?

 

I also had to reset my config because of some bug where I could not add hard drives (the option was grayed out), and the cache did not have a selection for how many drives; it was just blank. After renaming the config folder this issue was resolved, but I am still having trouble stopping the array.

 

I had this until I did a fresh install.

 

Link to comment

Is anyone having trouble stopping the array?

 

I also had to reset my config because of some bug where I could not add hard drives (the option was grayed out), and the cache did not have a selection for how many drives; it was just blank. After renaming the config folder this issue was resolved, but I am still having trouble stopping the array.

 

If you want some intelligent help, you will have to post your diagnostics file (go to 'Tools'); a list of the plugins and Dockers that you are running would also be helpful.

Link to comment

Just upgraded, and now I'm seeing a blue 'Update Ready' mark next to the containers linuxserver/Plex:latest and yujiod/minecraft-mineos:latest. When I click update next to MineOS, it seems to be working, but as soon as I press Check for Updates, it turns back to the 'Update Ready' state.

 

Now this is where things get interesting. When I update Plex, however, after the job completes it turns red with the message 'Not Available'. As soon as I press Check for Updates, it turns back to the blue 'Update Ready' state.

 

I'm assuming this is a bug that has already been fixed for the next build?

 

I'm having this same issue. None of the containers will update, and now I can't access Community Apps either to reinstall some containers.

Link to comment

Check your DNS settings. See the unRAID FAQ.

 

Sent from my LG-D852 using Tapatalk

 

DNS looks correct. Not sure what is going on. It only happened after the upgrade to beta 23.

 

From what I am seeing on my two servers, there are some differences in the setup screens between the beta and stable versions. Why don't you post a screenshot of your 'Network Settings' page? Perhaps someone will see something you need to adjust. (BTW, apparently you still have to stop the array to make any changes, but that notification line was dropped from the beta version!)
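
Independently of the GUI, DNS can also be sanity-checked from a console/SSH session with standard tools; a quick sketch (the lookup targets are only examples):

    cat /etc/resolv.conf                # which nameservers the server is actually using
    ping -c 1 github.com                # fails quickly if name resolution is broken
    nslookup registry.hub.docker.com    # a host the Docker update check needs to resolve (if nslookup is installed)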

Link to comment

I have been getting the same problem with Docker updates always showing as available as well (however, not all Dockers).

 

------------------------------------------------------------------------------------------------------------------------------------

 

I have found that since my upgrade to 23 I can't SSH or Telnet into my server.

 

I get this error

 

ssh: connect to host 192.168.1.199 port 22: Connection refused

 

I haven't changed anything, and I have tried restarting the server with the same result.

prime-diagnostics-20160628-1944.zip
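
A 'Connection refused' usually means nothing is listening on port 22 at all, which can be checked from the local console; a sketch, assuming unRAID still uses the Slackware-style rc script for sshd:

    ps aux | grep "[s]shd"         # is the SSH daemon running?
    netstat -lnt | grep ":22"      # is anything listening on port 22?
    /etc/rc.d/rc.sshd restart      # restart sshd and watch the syslog if it refuses to come up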

Link to comment

I've never had an issue with updating the betas via the Plugin Update button. However, on 23 this does not seem to work.

 

When I press the update button I get:

 

plugin: updating: unRAIDServer.plg

plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/beta/unRAIDServer-6.2.0-beta23-x86_64.zip ... 0%

 

At that point it just sits there and nothing happens; I have to close the box to get back to the tower.

 

I'd appreciate any suggestions or pointers; I'd like to update from 21 -> 23.

 

Thanx
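
One way to separate a webGUI problem from a download problem is to fetch the same zip from the server console, using the URL from the plugin output above; a sketch (the working directory is just an example):

    cd /tmp
    wget https://s3.amazonaws.com/dnld.lime-technology.com/beta/unRAIDServer-6.2.0-beta23-x86_64.zip
    ls -lh unRAIDServer-6.2.0-beta23-x86_64.zip    # if this also stalls at 0%, it's a network/DNS issue rather than the plugin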

Link to comment

I've never had an issue with updating the betas via the Plugin Update button. However, on 23 this does not seem to work.

......

I'd appreciate any suggestions or pointers; I'd like to update from 21 -> 23.

 

Thanx

 

A couple of things to try. First, go to the 'Plugins' page and click on the 'Check for updates' tab. Does it check for updates?

 

If that works, try to update again. It could be that there is a server-side (amazonaws.com) issue.

Link to comment

Frank, thank you. Yes, I tried that; no different. Also, I've attempted the update many times over the last few weeks. No dice.

....

 

I'd appreciate any suggestions or direction to have a go at.

 

Post the diagnostics files ('Tools' >> 'Diagnostics') in your next post. Some guru will then be able to go through them and see if he/she can spot the issue.

Link to comment

Your drive assignments are actually stored in super.dat ...

 

A comment for Tom or Eric - there's been a change in how super.dat is modified, sometime recently, so probably as part of 6.2 development. My guess is that pre-6.2, super.dat was modified by seeking and then reading or writing in place, whereas now it is modified by clearing the file and then writing out the whole file. The problem is that there is now a window of opportunity that wasn't there before: we have already seen 3 to 5 cases where the super.dat file exists but is zero bytes (a loss of all array assignments). I don't remember that ever happening before this. It's my impression that each of the cases is different, which implies it's the general super.dat update routine that has changed, and that it is clearing the file to zero bytes well before it is rewritten, leaving a window of time that's vulnerable to a power outage or system crash. Hopefully that can be improved, either by never clearing the file or by clearing it only immediately before the write.
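
To make the suggestion concrete, here is the write-then-rename pattern in shell terms (the real update happens inside the md driver, so the file names below only mirror the discussion): the new contents are built in a separate file and renamed over the old one, so there is never a moment when super.dat exists but is empty.

    cp /boot/config/super.dat /boot/config/super.dat.new    # build the new copy alongside the original
    # ... write the updated drive assignments into super.dat.new ...
    mv /boot/config/super.dat.new /boot/config/super.dat    # a rename on the same filesystem never leaves a zero-byte file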

 

Is there a system log somewhere from a server that exhibited this issue?

 

Found another case, the best one yet because you can see evidence of the failure ->  here

 

The user booted on March 16 and had been running fine except for numerous power outages, but is on a UPS so the server stayed operational. Super.dat appears to have been fine then, until June 23:

Jun 23 21:54:48 Tower kernel: sd 1:0:10:0: [sdl] Synchronizing SCSI cache

Jun 23 21:54:48 Tower kernel: sd 1:0:10:0: [sdl] UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00

Jun 23 21:54:48 Tower kernel: sd 1:0:10:0: [sdl] CDB: opcode=0x88 88 00 00 00 00 02 9b b1 01 e8 00 00 00 08 00 00

Jun 23 21:54:48 Tower kernel: blk_update_request: I/O error, dev sdl, sector 11202003432

Jun 23 21:54:48 Tower kernel: sd 1:0:10:0: [sdl] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00

Jun 23 21:54:48 Tower kernel: md: disk10 read error, sector=11202003368

Jun 23 21:54:48 Tower kernel: mpt2sas0: removing handle(0x0014), sas_addr(0x500605b400006fcf)

Jun 23 21:54:49 Tower kernel: md: disk10 write error, sector=11202003368

Jun 23 21:54:49 Tower kernel: md: recovery thread woken up ...

Jun 23 21:54:49 Tower kernel: write_file: write error 4

Jun 23 21:54:49 Tower kernel: md: could not write superblock from /boot/config/super.dat

Jun 23 21:54:49 Tower kernel: md: recovery thread has nothing to resync

Jun 23 21:54:52 Tower kernel: scsi 1:0:18:0: Direct-Access    ATA      WDC WD60EFRX-68M 0A82 PQ: 0 ANSI: 6

Jun 23 21:54:52 Tower kernel: scsi 1:0:18:0: SATA: handle(0x0014), sas_addr(0x500605b400006fcf), phy(15), device_name(0x0000000000000000)

Jun 23 21:54:52 Tower kernel: scsi 1:0:18:0: SATA: enclosure_logical_id(0x500605b400006fff), slot(15)

Jun 23 21:54:52 Tower kernel: scsi 1:0:18:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: Attached scsi generic sg11 type 0

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: [sdu] 11721045168 512-byte logical blocks: (6.00 TB/5.45 TiB)

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: [sdu] 4096-byte physical blocks

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: [sdu] Write Protect is off

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: [sdu] Mode Sense: 7f 00 10 08

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: [sdu] Write cache: enabled, read cache: enabled, supports DPO and FUA

Jun 23 21:54:52 Tower kernel: sdu: sdu1

Jun 23 21:54:52 Tower kernel: sd 1:0:18:0: [sdu] Attached SCSI disk

 

More than 3 months after booting, the SAS card or drive connection fails, and Disk 10 first has a read error, then loses its handle, resulting in the write error, followed by the failure to update super.dat.  Disk 10 was sdl on sd 1:0:10:0, was lost, then about 4 seconds later is brought back on sd 1:0:18:0 as sdu.  I have to say that mpt2sas is a miserable unhelpful module!  It hates to provide helpful clues.

 

The diagnostics include a zero-byte super.dat. This is not related to a crash or power outage (I think), so I'm probably wrong in my original guess at the cause of the problem.

 

If you examine the syslogs, you'll note numerous network drops.  Apparently the user has their server on the UPS but not their routers or switches.

Link to comment

I've never had an issue with updating the betas via the Plugin Update button. However, on 23 this does not seem to work.

 

When I press the update button I get:

 

plugin: updating: unRAIDServer.plg

plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/beta/unRAIDServer-6.2.0-beta23-x86_64.zip ... 0%

 

At that point it just sits there and nothing happens; I have to close the box to get back to the tower.

 

I'd appreciate any suggestions or pointers; I'd like to update from 21 -> 23.

I looked at your diagnostics, and while there are a couple of things that are unique to your system, I don't believe they are relevant to this issue. The syslog basically agrees with you: the unRAID download is started and reported as 'downloading', but nothing ever happens. It doesn't look like a problem on the server end, so it must be an Internet thing. Have you tried downloading it manually, from the LimeTech download page? I just tried that and didn't have any trouble. Then you can update the old manual way, by extracting the bz files to the flash drive and rebooting.
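
A sketch of that manual route, assuming unzip is available on the server and that the bz files sit at the top level of the release zip (both worth verifying before relying on this):

    cd /tmp
    wget https://s3.amazonaws.com/dnld.lime-technology.com/beta/unRAIDServer-6.2.0-beta23-x86_64.zip
    unzip -o unRAIDServer-6.2.0-beta23-x86_64.zip 'bz*' -d /boot/    # overwrite bzimage/bzroot on the flash
    reboot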

 

It's my first time seeing a system with ZFS installed. I didn't learn much, as the ZFS modules log practically nothing! On installation it does report altering the kernel, but that is probably not relevant to your issue. You may possibly need a ZFS plugin update for the kernel in beta23.

 

You had 2 general protection faults in Python 3.4, which on the surface could be serious, but these appear to have been expected and were trapped, so probably not an issue either. Python had 5 to 7 instances running in the ps report.

Link to comment

Small bug in the network settings GUI. If you select "static" for the IP address, it greys out the static/automatic choice for the DNS server... so if both were Auto and you changed the IP address to static, you could not then change the DNS server to static as well. You have to change the DNS server to static FIRST, before you change the IP address to static.
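
If the ordering workaround is missed, the values can also be set by hand in the flash config; a sketch, with the caveat that the file location and key names below are from memory and should be verified against your own /boot/config/network.cfg before editing:

    cat /boot/config/network.cfg
    # Illustrative entries: set the address statically and name the DNS server, then reboot.
    #   USE_DHCP="no"
    #   IPADDR="192.168.1.100"
    #   DNS_SERVER1="192.168.1.1"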

Link to comment

Small bug in the network settings GUI. If you select "static" for the IP address, it greys out the static/automatic choice for the DNS server... so if both were Auto and you changed the IP address to static, you could not then change the DNS server to static as well. You have to change the DNS server to static FIRST, before you change the IP address to static.

 

Can confirm; I noticed this this morning and hadn't had a chance to report it :)

Link to comment

Small bug in the network settings GUI. If you select "static" for the IP address, it greys out the static/automatic choice for the DNS server... so if both were Auto and you changed the IP address to static, you could not then change the DNS server to static as well. You have to change the DNS server to static FIRST, before you change the IP address to static.

 

Can confirm; I noticed this this morning and hadn't had a chance to report it :)

 

Yep, I noticed that too.

Link to comment
This topic is now closed to further replies.