unRAID Server Release 6.1-rc2 Available



Is there any alternative way to install the upgrade? Unfortunately I cannot download files from amazonaws from my unRAID server. Any chance that I can download the zip file from another machine and have unRAID extract and install it?

Why can't your server download?

In RC1, I tried to install the OpenELEC virtual machine. When I clicked the download button, it changed to "Downloading". I just upgraded to RC2 and it still says "Downloading". How can I clear this? I've tried 2 browsers and cleared the cache. Nothing has been downloaded to the specified download location.

 

splnut,

 

sorry for the delay in getting back to you on this.  Can you confirm this is still an issue?  If so, please try this.  Delete the openelec.cfg file from your flash device.  It's located under the /config/plugins/dynamix.vm.manager folder.  After deleting, revisit the Add VM page and select the OpenELEC template again from the drop down.
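
If it's easier from the console, something like this should do it (a minimal sketch, assuming the flash device is mounted at /boot as usual):

    # remove the stale OpenELEC download state from the flash drive
    rm /boot/config/plugins/dynamix.vm.manager/openelec.cfg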

 

I haven't seen anyone else report this issue yet.  Let me know if this doesn't work for you.

 

Thanks for the response. I deleted the config file and tried to download again. Same result. I also stopped Docker in case something there was causing an issue. Same response.

 

Any logs I can provide?

Diagnostics would be good. Are you even seeing the progress indicator that displays a percentage complete? Can you screenshot where it's frozen?
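
If the webGui is misbehaving, the diagnostics archive can also be generated from the console (the output path here is from memory, so treat it as an assumption):

    # collects logs and config into a zip on the flash drive
    diagnostics    # the archive should land under /boot/logs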

 

No progress indicator. Just says "Downloading".

 

Syslog - http://1drv.ms/1JQvRa4

 

[attached screenshot: Scren.jpg]

Have you tried to change the destination folder to one without a space in the name?

 

I tried multiple destinations, but they all had spaces. This fixed the issue.

 

Thanks

Ok, sounds like a bug!  I'll ask Eric to look into this.
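
For reference, this smells like the classic unquoted-path failure in a shell download script. A purely hypothetical sketch of the failure mode (none of these names come from the actual plugin code):

    URL="http://example.com/openelec.img"   # placeholder download URL
    DEST="/mnt/user/VM Downloads"           # hypothetical destination containing a space
    wget -O $DEST/openelec.img "$URL"       # unquoted: the shell splits the path on the space
    wget -O "$DEST/openelec.img" "$URL"     # quoted: works regardless of spaces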

Link to comment

Any interim fix? I actually have the zip file (downloaded elsewhere), but I have no idea how to install it.

Assuming that you are upgrading an existing v6 system, you could just copy the bzimage and bzroot files from the ZIP file to your flash drive and then reboot.
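
If you have console access, something like this works (a sketch; the zip filename is illustrative, and the flash mounts at /boot):

    # extract just the kernel and root filesystem images onto the flash, then reboot
    unzip -o /boot/unRAIDServer.zip bzimage bzroot -d /boot
    reboot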


When I'm checking the syslog, I never see spin-up messages, while spin-downs are logged OK... It would be nice to log spin-ups too.

Thanks

If you click on a device's spin-up icon, or click Spin Up All, you will see log messages. Spin-up as a result of I/O is not logged.
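
You can see what does get logged with a quick filter from the console (assuming the stock syslog location):

    # show spin-related messages recorded so far
    grep -i spin /var/log/syslog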

 

Isn't it a good idea to log them? Maybe with different wording than the web-action messages.

For me it would be useful to know when spin-ups occur, to easily count spin-ups/spin-downs, average spin-up time, and so on.

 

Thanks for the reply, and for all your work.


In the case of parity and data disks, it turns out yes, this is possible. No one had requested this before. In the case of the cache device, it's not possible at present.


+1 for adding this feature.  I still get occasional instances (on both my servers) where the temperature is displayed but the 'Device' status indicator says the disk is spun down.  From the temperature, it appears the disk is really spun up.  I can then manually spin the disk(s) down and things come back into 'sync'.  I have never made an issue of it as I can't identify what is causing the disk to spin up and why it didn't get spun down.  Adding this feature might provide a clue as to what is causing this behavior.

 

PS --- I don't consider this behavior a real big 'deal', as I seriously doubt it reduces the life of the disk, but it does cause unneeded power consumption and it would be nice to get to the bottom of the 'problem'.


Attributes (temperature) are only polled every 30 minutes by default.  This means that if a disk spins down, it can take up to 30 minutes for the temperature to turn to "*".  I have mine set to poll every 5 minutes, for more responsive updates.  You can tune this under Disk Settings.
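
If you want to check or script the current setting, it's stored on the flash; the variable name here is from memory, so treat it as an assumption:

    # show the SMART attribute polling interval (in seconds)
    grep poll_attributes /boot/config/disk.cfg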

Not the issue here.  I usually find the 'problem' in the morning when I get the Status Report e-mails.  The report, with a time stamp of 20 minutes after midnight, will say the disk(s) are active. I will then open the GUI (this now being after 7:00 AM), the disk temperature(s) are there, and the indicator says the disk(s) are in 'Stand By Mode (spun down)'.  So it is not a matter of attribute polling not having had time to 'sync up' with the actual disk state.  PLUS, I have set the polling time to 60 seconds, since I don't keep the GUI open unless I am actively using it!


There are 2 things I know of that cause temps to be out of sync with spin status:

* using the S3 sleep plugin: waking up causes the drives to be spun up without informing emhttp, so the drives show temps but also show as spun down

* clicking on a drive brings up the drive info, Check Filesystem, and SMART info sections; it will often briefly indicate something about only showing SMART info when the drive is spun up, then will apparently spin the drive up(!) and show all of the SMART info; on return to the Main page, the drive still appears spun down but the temp is showing

* I'm sure there are other causes; it would be good to identify them

 

Mine too is set to a 60-second update.  It looks like emhttp assumes it knows all, and trusts all other agents to inform it of spin-ups and spin-downs, which obviously isn't happening.  It needs to check the true physical state for itself, much more often, or find ways to be better informed of spin-ups.
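
For what it's worth, a drive can be asked for its true power state without spinning it up, e.g. from the console (device name illustrative):

    # CHECK POWER MODE query; reports "standby" (spun down) or "active/idle"
    hdparm -C /dev/sdb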


I am sure my temp/drive-status out-of-sync problem is related to S3 sleep, and I figured there was not much I could do about that.  I have put the server to sleep a couple of times over the past few weeks when I knew it would get no use.  Polling is set to 5 minutes, but the polling interval really doesn't matter, as polling drive status does not fix the temp display issue on a spun-down drive.  The only fix is to spin up all drives and spin down again to get things back in sync (and even that used to not always work).

 

Right now, this is my biggest "problem" with unRAID so there is really nothing to complain about as I figured S3 sleep was the issue.  I am not losing sleep over it and my server is getting more sleep  :)


Re: the closed beta.

 

I don't know for sure what LT's thoughts are for this, but if it were me, I'd want folks to do at least the following, in addition to a few parity checks and just basic "is everything working" testing ...

 

(a)  "Fail" two drives (unplug the SATA connections or just pull them from a hot-swap)  and confirm that all data is still accessible.  Both "cold" (with system off) and "hot" -- unplug while system is on and currently copying (or streaming) a file from one of the drives you're going to pull.

 

(b)  "Fail" a drive in the middle of a rebuild.    i.e. pull one drive; then put in another drive for UnRAID to rebuild to;  and, in the middle of that rebuild,  unplug a different drive.    Confirm that the rebuild finishes error-free.

 

I plan to set up a small (5-6 drive) test system for this using small drives, so the various rebuilds, parity checks, etc. don't take too many hours.  I have a bunch of relatively small drives (250-500GB), so that's what I'll use for that.

 

 


I have 2 unused 4T HGSTs and plan to put them in my backup server and make them into a RAID 0 raid set, and then create several small volumes (volumes look like physical disks to unRAID with an Areca RAID card). 5x15G (2 parity and 3 data) should be plenty to run a bunch of use cases in a hurry. (Performance will likely suck on parity checks and rebuilds, since all the drives are reading off of the same 2 physical disks, but at such a small size it should still be quick.) Interestingly, Areca provides a web interface that lets you reconfigure the RAID array in real time, as it is being used. I expect that if I suddenly delete a volume, unRAID will start getting I/O errors on that drive. This level of control in real time seems really ideal for testing nasty failure scenarios without risking dangerous unplugging of powered-on drives.


FWIW, the SATA connector is designed for hot plug, with staggered pin lengths so ground, power, and data make contact in a safe order. A hot-swap chassis with power buttons for individual drive bays is even better. The biggest risk in unplugging a running drive is the physical shock if you move the spinning drive while removing the cables, but you could just unplug the data cable at the controller and leave the drive connections alone.

 

I think a collection of old 64GB SSD drives would be ideal for playing with various failure and rebuild tests.

 

It will be interesting to see how your virtual volumes react to being deleted and created on the fly while unRAID tries to manage them. Do they have assignable serial numbers, so you can "reinsert" a deleted volume that has been failed by unRAID?


Interesting test scenario.  I agree a bunch of 15GB "drives" should be a nice test setup, and if you can indeed "fail" these in real time (by just deleting the volume) it should be the equivalent of unplugging one.  Not sure if there will be other consequences.  Have you tried doing that with the current version of unRAID?  You can easily test the concept by building a 3-drive (2 data, 1 parity) Basic setup in v5 (so no license needed) and then deleting one of the drives while copying some data to/from it.

 

 

 


Any chance this will make it into 6.1 FINAL:  http://lime-technology.com/forum/index.php?topic=41079.msg388890#msg388890

 

Interesting.  When we changed the network startup script to wait in-line (up to 60 sec) for a DHCP lease, I almost added a 'brctl setfd br0 0' to the default initialization for the root bridge, but I wasn't sure how that would affect down-link virtual bridges.  Replies in the Red Hat bug report you found seem to imply setting the forwarding delay to 0 is OK.  I think what we'll do is add this as a configurable setting on the Network page.
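
For reference, the command under discussion (brctl is part of bridge-utils, which ships with unRAID):

    brctl setfd br0 0    # set the forwarding delay on bridge br0 to 0 seconds
    brctl showstp br0    # show the bridge's STP parameters to verify the change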


Have you checked rc2?  It's already there...


OK.  I still had the entry in my 'go' script and didn't want to remove it until I asked.

 

Thank you for addressing this!  :)

 

John

