unRAID Server Release 6.2.0-beta20 Available



 

@RobJ, There's always a first.

 

That is a lousy thing to say.

 

It's as perfectly fine a thing to say as posting about the history is. History does not always dictate the future. That's all I was conveying.

 

Past performance may not always indicate future performance, but the best way to predict future performance is still by using past performance.

Link to comment

 

- When necessary to query keyserver, poll up to 45 seconds for a connection.

 

 

Does this mean that when booting unRAID, the key check now allows 45 seconds for the array to come online and VMs to start (i.e. a virtualised pfSense) before the key check disables the system for an invalid key?

 

Yup, the license check is still a piece of crap. Whatever was done in this release is even worse than the previous two. Previously, in 18 and 19, I could reach the unRAID web GUI and would be told the license was invalid. Now, in 20, the web GUI won't load at all until I get a WAN connection, which means I have to manually patch in a router to get internet access for the license check. It is as ridiculous as it sounds!!

 

I really don't know why more focus isn't placed on resolving this rather poorly thought-out license validation instead of wasting effort on throwaway inclusions like updating Firefox in the console GUI, which no one asked for, wants, or even needs.

 

+1

 

My system refuses to validate, and it's the first time I've ever had any interaction with LT support; they have been zero help.

I waited most of the day for them to verify it would not validate, and then they took the rest of the day to tell me I need a reliable internet connection.

 

How did you do this ?

"manually go patch in a router to get internet access for the license check"

 

First and foremost Stew, I responded to your e-mail, which you sent us at 1:30 AM this morning, not but two hours later because I happened to wake up at that time, saw the message, and decided to respond to you in the middle of the night to at least get the discussion going.  As others have indicated, this is beta software and you do not have to participate in the beta if you don't like its requirements.

 

We publicly addressed why we want someone to have their server internet accessible to participate in the beta:

 

For -beta and -rc releases, of all key types, the server must validate at boot time.  The reason is that this lets us "invalidate" a beta release.  That is, if a beta gets out there with a major snafu we can prevent it being run by new users who stumble upon the release zip file.  For example, if there is a bug in P+Q handling.  Remember the reiserfs snafu last year?  We want to minimize that.

 

For stable releases, Basic/Plus/Pro keys do not validate at boot time; that is, it works the same as it always has.

 

Starting with 6.2, Trials will require validation with key server.  This is in preparation for making the Trial experience easier.

 

This is a very clear policy and if you missed that post, well, now you've seen it.

 

As far as the other posts in this thread going back and forth about this topic, enough.  Any further posts on licensing will be moved to the bilge.  This is not up for debate.  This thread is to focus on fixing software bugs, not debate our licensing policy on beta software.  If you want to do that, you can send us an e-mail with your complaint as per this thread.

 

We're not trying to be jerks here.  We have good reasons for requiring Internet access for the beta period and they have already been stated.  If your server cannot make a connection to the Internet for the beta, then you will need to refrain from participating in it.  I'm sorry if you don't like that answer, but there it is.

Link to comment

This is a very clear policy and if you missed that post, well, now you've seen it.

 

As far as the other posts in this thread going back and forth about this topic, enough.  Any further posts on licensing will be moved to the bilge.  This is not up for debate.  This thread is to focus on fixing software bugs, not debate our licensing policy on beta software.  If you want to do that, you can send us an e-mail with your complaint as per this thread.

 

We're not trying to be jerks here.  We have good reasons for requiring Internet access for the beta period and they have already been stated.  If your server cannot make a connection to the Internet for the beta, then you will need to refrain from participating in it.  I'm sorry if you don't like that answer, but there it is.

 

***MIC DROP***

Link to comment

Hi, this beta is working fine now.

 

Updating from 6.1.9 to beta 18 somehow messed up my SSH keys (which was why I could not connect to my server through SSH for the last few beta builds). Deleting the /config/ssh folder on my boot drive fixed the issue, though.
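In case anyone else hits the same SSH problem, here is a minimal sketch of that fix as a script, assuming the unRAID flash drive is mounted at `/boot` (the usual default) and keeping a backup of the old keys first:

```shell
# reset_ssh_keys DIR: back up, then remove the stored SSH config folder so
# fresh host keys get generated on the next boot.
reset_ssh_keys() {
  dir="$1"
  if [ -d "$dir" ]; then
    cp -a "$dir" "$dir.bak"   # keep the old keys around, just in case
    rm -rf "$dir"
    echo "removed $dir (backup at $dir.bak)"
  else
    echo "nothing to do: $dir not present"
  fi
}

# On unRAID the flash drive is normally mounted at /boot, so the folder
# described above would be /boot/config/ssh (assumption; adjust if different):
# reset_ssh_keys /boot/config/ssh
```

After rebooting, clients will warn about the changed host key, which is expected after regeneration.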

 

 

One request: please add a German keyboard option to the VNC section. There is no remote keyboard layout that enables me to properly type my passwords, paths, etc. over VNC. (Swiss German is not at all similar to the standard German layout.)

 

An additional option to select other keyboard layouts for the GUI/default boot modes would be AWESOME!

Link to comment

Just found by accident that clearing a disk no longer puts the array offline; the array is available during the process. I don't think this was in the release notes.

 

Yeah, this is new behavior in v6.2 and should be mentioned in the release notes (present since b18).

 

Link to comment

Just found by accident that clearing a disk no longer puts the array offline; the array is available during the process. I don't think this was in the release notes.

 

Can you explain how this works (to the user), please? Do you "ask" it to clear a HDD and it moves all files to other HDDs (like the unBalance plugin)? If so, that is nice (and deprecates the plugin). Does it move files to existing directories on HDDs first (i.e. fill them) before creating new directories, as I prefer to avoid having my files scattered over random HDDs?

Link to comment

Can you explain how this works (to the user), please? Do you "ask" it to clear a HDD and it moves all files to other HDDs (like the unBalance plugin)? If so, that is nice (and deprecates the plugin). Does it move files to existing directories on HDDs first (i.e. fill them) before creating new directories, as I prefer to avoid having my files scattered over random HDDs?

 

Clearing is needed when adding a new disk to the protected array; it has nothing to do with moving data. The difference is that prior to 6.2-beta the array had to be offline during this process, which could take many hours with large disks.
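For anyone wondering what "clearing" actually is: it just writes zeros across the entire new disk, and since XOR-ing zeros into parity changes nothing, a fully zeroed disk can join the array without touching parity. A rough sketch of the operation (unRAID does this internally; the device name is a placeholder and this must never be run against a disk that already holds data):

```shell
# clear_device TARGET SIZE_MB: overwrite TARGET with SIZE_MB megabytes of
# zeros, the way unRAID "clears" a new data disk before adding it.
clear_device() {
  dd if=/dev/zero of="$1" bs=1M count="$2" conv=notrunc 2>/dev/null
}

# Placeholder example -- NEVER run against a disk that is already in use:
# clear_device /dev/sdX 3815447   # roughly a 4 TB drive
```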

Link to comment

Can you explain how this works (to the user), please? Do you "ask" it to clear a HDD and it moves all files to other HDDs (like the unBalance plugin)? If so, that is nice (and deprecates the plugin). Does it move files to existing directories on HDDs first (i.e. fill them) before creating new directories, as I prefer to avoid having my files scattered over random HDDs?

 

When adding a new disk that isn't precleared, unRAID will clear it for you, but the array would stay offline.

It seems this is no longer the case.

Link to comment

Just found by accident that clearing a disk no longer puts the array offline; the array is available during the process. I don't think this was in the release notes.

 

I completely agree that this is a nice new feature, and needs to be mentioned ...  BUT it also needs to have a warning with it!

 

There was a very important reason this wasn't done before, and that is that Parity is invalid while clearing is being performed!  (Unless the drive has been Precleared of course!)  It was always safer the old way, keeping the array offline until the new drive is completely zeroed.

 

When the array is started, parity is assumed to be correct, so if anything happens, the bits of the parity drive can be used with the other drives to rebuild any data drive.  As soon as you add a new disk of unknown contents to the array, parity is wrong for every non-zero bit on that drive!  And it won't be correct until the last bit is zeroed, cleared.  Normally, nothing should happen during the clearing, so the likelihood of a mishap is extremely small.  But if anything goes wrong during the clearing, then the array is in a degraded state, with an invalid parity drive.  At that point, no drives can be rebuilt.
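That argument can be seen with a one-byte toy model (pure illustration; real parity is computed bit-for-bit across whole drives, but the XOR arithmetic is identical):

```shell
# Parity is the XOR of all data disks. A new disk with unknown contents
# breaks that equation; a zeroed disk leaves it intact.
d1=170 d2=85
parity=$(( d1 ^ d2 ))        # parity currently matches the two data disks

new=199                      # freshly added disk, unknown non-zero contents
check=$(( d1 ^ d2 ^ new ))   # recomputed parity no longer matches
echo "before clearing: parity=$parity recomputed=$check"

new=0                        # after clearing, the new disk is all zeros
check=$(( d1 ^ d2 ^ new ))   # XOR with 0 is a no-op, so parity matches again
echo "after clearing:  parity=$parity recomputed=$check"
```

Until the last non-zero byte of the new disk is zeroed, the recomputed value disagrees with stored parity, which is exactly the window of risk described above.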

 

On the other hand, this is not as serious as that sounds, since if the clearing is aborted for any reason, you would simply remove the new drive, then 'trust parity' again, and you should be back where you started, before adding the new drive.  If what goes wrong is a *different* drive 'red-balling', then it's vital that you either make sure the clearing proceeds to completion (thereby returning parity to correctness), or you remove the new drive and attempt to get parity trusted (if that's possible in this situation).

 

It's still a nice feature, but I wanted to make sure users understand the ramifications.  There's a small risk involved.

 

Edit: I should add that Tom may have already implemented this in a safer way, such that if anything at all goes wrong, the new drive is immediately unassigned.

Link to comment

Just found by accident that clearing a disk no longer puts the array offline; the array is available during the process. I don't think this was in the release notes.

 

I completely agree that this is a nice new feature, and needs to be mentioned ...  BUT it also needs to have a warning with it!

 

There was a very important reason this wasn't done before, and that is that Parity is invalid while clearing is being performed!  (Unless the drive has been Precleared of course!)  It was always safer the old way, keeping the array offline until the new drive is completely zeroed.

 

When the array is started, parity is assumed to be correct, so if anything happens, the bits of the parity drive can be used with the other drives to rebuild any data drive.  As soon as you add a new disk of unknown contents to the array, parity is wrong for every non-zero bit on that drive!  And it won't be correct until the last bit is zeroed, cleared.  Normally, nothing should happen during the clearing, so the likelihood of a mishap is extremely small.  But if anything goes wrong during the clearing, then the array is in a degraded state, with an invalid parity drive.  At that point, no drives can be rebuilt.

 

On the other hand, this is not as serious as that sounds, since if the clearing is aborted for any reason, you would simply remove the new drive, then 'trust parity' again, and you should be back where you started, before adding the new drive.  If what goes wrong is a *different* drive 'red-balling', then it's vital that you either make sure the clearing proceeds to completion (thereby returning parity to correctness), or you remove the new drive and attempt to get parity trusted (if that's possible in this situation).

 

It's still a nice feature, but I wanted to make sure users understand the ramifications.  There's a small risk involved.

 

Edit: I should add that Tom may have already implemented this in a safer way, such that if anything at all goes wrong, the new drive is immediately unassigned.

Lol, I believe this feature is in 6.1.9 as well as a few earlier versions. The way it works (I'll let Tom clarify the technical details) is that the disk isn't truly added to the array until AFTER the clearing process is complete.  This eliminates all the concerns brought up.

Link to comment

Possible to add, clear and format a disk larger than parity...

 

WOW is my first reaction, because the excess will be unprotected with no indication of a problem on that page!

 

And then there is a second reaction: what happens when you have to replace one of the parity disks???

Link to comment

Just found by accident that clearing a disk no longer puts the array offline; the array is available during the process. I don't think this was in the release notes.

 

I completely agree that this is a nice new feature, and needs to be mentioned ...  BUT it also needs to have a warning with it!

 

There was a very important reason this wasn't done before, and that is that Parity is invalid while clearing is being performed!  (Unless the drive has been Precleared of course!)  It was always safer the old way, keeping the array offline until the new drive is completely zeroed.

...[snipped]...

Edit: I should add that Tom may have already implemented this in a safer way, such that if anything at all goes wrong, the new drive is immediately unassigned.

Lol, I believe this feature is in 6.1.9 as well as a few earlier versions. The way it works (I'll let Tom clarify the technical details) is that the disk isn't truly added to the array until AFTER the clearing process is complete.  This eliminates all the concerns brought up.

Thank you jonp, that's perfect!

Link to comment

Please excuse what may be an annoying, redundant question, but I can't remember whether I or someone else asked if you had exhaustively tested the RAM. I'm thinking something like 24 hours of Memtest.

 

Although there have been reports of issues with certain USB 3.0 drivers, what I see above looks too low level to be USB related.  Looks more like RAM or timers or CPU or VM/driver race condition or the like.

 

Edit: It would be interesting to know if anyone else has the same motherboard and BIOS and CPU etc, and what issues they are having?

 

Thanks for the reply, Rob. I have done some 24-hour memtest runs with no issues. I haven't done one recently, though; worth another set of runs, do we think?

 

Edit:

I have started a memtest so let's see what that does

 

Let's try this:

 

Reboot the system so you're at a clean boot.  Without the array started yet, login via SSH or Telnet.  Type the following command:

 

killall /usr/sbin/irqbalance

 

After this is done, start the array and begin using the system like normal.  Report back if you still have issues afterwards.
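For anyone following along, a slightly more defensive version of that step (a sketch; `killall` and `pgrep` are the stock psmisc/procps tools, and psmisc `killall` accepts a full path as well as a bare command name):

```shell
# stop_service CMD: kill every instance of CMD, then confirm none survive.
stop_service() {
  killall "$1" 2>/dev/null              # psmisc killall accepts a full path
  if pgrep -x "$(basename "$1")" >/dev/null 2>&1; then
    echo "$1 still running"
  else
    echo "$1 stopped"
  fi
}

# stop_service /usr/sbin/irqbalance
```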

Link to comment

Another thing I noted:

 

If my Windows 10 VM is in 3.0 USB mode, my 3.0 drives work flawlessly on my passed-through PCIe controller. My 2.0 flash drive is not detected, however.

 

Is this expected behavior, or should I post diagnostics?

Link to comment

Hi, this beta is working fine now.

 

Updating from 6.1.9 to beta 18 somehow messed up my SSH keys (which was why I could not connect to my server through SSH for the last few beta builds). Deleting the /config/ssh folder on my boot drive fixed the issue, though.

 

 

One request: please add a German keyboard option to the VNC section. There is no remote keyboard layout that enables me to properly type my passwords, paths, etc. over VNC. (Swiss German is not at all similar to the standard German layout.)

 

An additional option to select other keyboard layouts for the GUI/default boot modes would be AWESOME!

 

Thanks for reporting this.  I believe you can manually edit the XML for now to resolve this, but we will add this option to the VNC keyboard selector drop-down.  The manual XML edit required would be to this area:

 

    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='de'>

      <listen type='address' address='0.0.0.0'/>

    </graphics>

 

The keymap value is what I changed. Instead of "de-ch" I used just "de". Let me know if that works for you.
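The same edit can be scripted against a domain XML dump (a sketch; `virsh dumpxml` and `virsh define` are standard libvirt commands, and the VM and file names here are placeholders):

```shell
# set_vnc_keymap FILE KEYMAP: rewrite the keymap attribute on the VNC
# <graphics> element of a libvirt domain XML dump.
set_vnc_keymap() {
  sed -i "s/\(keymap='\)[^']*'/\1$2'/" "$1"
}

# Typical use (names are placeholders):
#   virsh dumpxml Windows10 > win10.xml
#   set_vnc_keymap win10.xml de
#   virsh define win10.xml
```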

Link to comment

Another thing I noted:

 

If my Windows 10 VM is in 3.0 USB mode, my 3.0 drives work flawlessly on my passed-through PCIe controller. My 2.0 flash drive is not detected, however.

 

Is this expected behavior, or should I post diagnostics?

 

The virtual USB controllers are hit and miss when it comes to device support. Some devices will work fine with both controllers, others only work with the 3.0 controller, whereas yet others may only work with the 2.0 controller.

 

If you want to help get support for your 2.0 flash device, your best bet would be to e-mail the QEMU mailing list with the details.  In the meantime, if the 2.0 flash device is required, you'll need to go back to passing through the entire PCIe controller.

Link to comment

Docker updates worked perfectly but now I am having some issues getting my Win10 VM updated.

 

I went into the VM tab and attempted to "edit" the VM (Win10ProIsolCPUs) so that it would update to the new settings. I may have done something wrong during that process, because now the VM won't start and there are errors displayed on the VM tab. See screen capture.

 

I have copies of my old XML file and copies of the Win10 disk image. Should I attempt to fix this current "edit" or would it be better to use a template and import the existing Win10 disk image and then make changes to the generated XML if needed?

 

edit

 

I went into the terminal and used virsh to see if I could start the VM from there. It reported that the VM started, and when I turned on my TV the VM was passing through audio and video as well as the USB controller and attached devices. See screenshots.

 

The VM tab is still showing errors, and the dashboard is no longer showing my working Dockers or VMs.

 

Diagnostics attached

 

Couldn't add the diagnostics file to the previous post so here it is

 

Jude,

 

Please try booting into safe mode and report back if the errors persist.

Link to comment

If you want to help get support for your 2.0 flash device, your best bet would be to e-mail the QEMU mailing list with the details.  In the meantime, if the 2.0 flash device is required, you'll need to go back to passing through the entire PCIe controller.

That's what I'll do for now. Thanks for the info (I'll get in touch with the QEMU community, though).

 

The keymap value is what I changed. Instead of "de-ch" I used just "de". Let me know if that works for you.

 

That worked right away! :D

 

Thank you. If this could make it into the drop-down for 6.2.0, I (and a few Germans) would be super happy!

 

 

Is there anything I could add to the go script to also change the keyboard layout for the boot modes?
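One hedged guess at an answer to my own question: the Linux console layout is normally set with `loadkeys`, so a line in the go script might do it (assumptions: the go script lives at `/boot/config/go` as on stock unRAID, and whether a Swiss German map is actually shipped in the build's kbd package is unverified):

```shell
# Addition to /boot/config/go -- sets the console keyboard layout at boot.
# 'sg' is the traditional kbd map name for Swiss German (assumption that it
# is included in this build); fall back to plain German if it isn't.
loadkeys sg 2>/dev/null || loadkeys de 2>/dev/null || echo "no suitable keymap found"
```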

Link to comment
This topic is now closed to further replies.