limetech

Administrators
  • Content count: 7413
  • Days Won: 32

limetech last won the day on June 13

limetech had the most liked content!

Community Reputation: 423 (Very Good)

About limetech

  • Rank: Advanced Member



  1. limetech

    unRAID OS version 6.5.3 available

    Yep.
  2. limetech

    unRAID OS version 6.5.3 available

    Hmm, that's a bug which has been there a while - fixed now, sorry about that.
  3. To upgrade:

     If you are running any 6.4/6.5 stable release or any 6.4-rc/6.5-rc release, click 'Check for Updates' on the Tools/Update OS page.

     If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page.

     If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install:

     https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg

     Refer also to @ljm42's excellent 6.4 Update Notes, which are still helpful if you are upgrading from a pre-6.4 release.

     In terms of code changes this is a very minor release; however, we changed a significant Linux kernel CONFIG setting that changes the kernel preemption model. This change should not have any deleterious effect on your server, and in fact may improve performance in some areas, certainly in VM startup (see below). This change has been thoroughly tested - thank you to all who participated in the 6.5.3-rc series testing!

     Background: several users have reported, and we have verified, that as the number of cores assigned to a VM increases, the POST time required to start a VM increases seemingly exponentially with OVMF and at least one GPU/PCI device passed through. Complicating matters, the issue only appears for certain Intel CPU families. It took a lot of work by @eschultz, in consultation with a couple of Linux kernel developers, to figure out what was causing this issue. It turns out that QEMU makes heavy use of a function associated with the kernel setting CONFIG_PREEMPT_VOLUNTARY=yes to handle locking/unlocking of critical sections during VM startup. Our previous kernel setting, CONFIG_PREEMPT=yes, makes this function a NO-OP and thus introduces serious, unnecessary locking delays as CPU cores are initialized. For core counts around 4-8 this delay is not that noticeable, but as the core count increases, VM start can take several minutes(!).

     This release also brings us up to date with the latest LTS kernel and fixes a handful of bugs.

     Version 6.5.3 2018-06-12

     Linux kernel:
     - version 4.14.49
     - set CONFIG_PREEMPT=no and CONFIG_PREEMPT_VOLUNTARY=yes

     Management:
     - Small update to create_network_ini to suppress progress information when using cURL.
     - update smartmontools drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
     - webgui: Remove unused tags from docker templates
     - webgui: apcups: ensure numeric fields are 0 if the values are empty
     - webgui: bug fix: prevent deleting user template when (letter case) renaming a docker container
     - webgui: Strip HTML from back-end
     - webgui: make entire menu items clickable for gray and azure themes
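     If you want to double-check which preemption model your running kernel was built with, and the kernel exposes its configuration via CONFIG_IKCONFIG_PROC (an assumption - not every build does), the live config can be read from /proc/config.gz. Here is a minimal sketch in C using zlib; the file name check_preempt.c is just a placeholder:

         #include <stdio.h>
         #include <string.h>
         #include <zlib.h>

         /* Print the CONFIG_PREEMPT* lines from the running kernel's config.
          * Assumes the kernel was built with CONFIG_IKCONFIG_PROC, so that
          * /proc/config.gz exists.  Build with: gcc check_preempt.c -lz */
         int main(void)
         {
             gzFile f = gzopen("/proc/config.gz", "rb");
             if (!f) {
                 perror("gzopen /proc/config.gz");
                 return 1;
             }
             char line[256];
             while (gzgets(f, line, sizeof line)) {
                 /* matches set options and "# CONFIG_... is not set" lines */
                 if (strstr(line, "CONFIG_PREEMPT"))
                     fputs(line, stdout);
             }
             gzclose(f);
             return 0;
         }

     On a kernel configured as above you would expect to see CONFIG_PREEMPT_VOLUNTARY=y and "# CONFIG_PREEMPT is not set" - note the .config file spells values as y/n rather than yes/no.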
  4. Installation and Bug Reporting Instructions

     Getting back in the saddle after a move which took far longer than it should have... This is a small update to bring us up to date with the latest LTS kernel and fix a handful of bugs. Let's test this over the weekend and release 6.5.3 stable on Monday, with the 6.6.0 release soon thereafter.

     Version 6.5.3-rc2 2018-06-08

     Linux kernel:
     - version 4.14.48

     Management:
     - Small update to create_network_ini to suppress progress information when using cURL.
     - update smartmontools drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
     - webgui: Remove unused tags from docker templates
     - webgui: apcups: ensure numeric fields are 0 if the values are empty
     - webgui: bug fix: prevent deleting user template when (letter case) renaming a docker container
     - webgui: Strip HTML from back-end
     - webgui: make entire menu items clickable for gray and azure themes

     Version 6.5.3-rc1 2018-05-18

     Summary: in order to fix the VM startup issue we need to change the Linux kernel preemption model.

     CONFIG_PREEMPT=yes (previous setting)

     This option reduces the latency of the kernel by making all kernel code (that is not executing in a critical section) preemptible. This allows reaction to interactive events by permitting a low-priority process to be preempted involuntarily even if it is in kernel mode executing a system call and would otherwise not be about to reach a natural preemption point. This allows applications to run more 'smoothly' even when the system is under load, at the cost of slightly lower throughput and a slight runtime overhead to kernel code. Select this if you are building a kernel for a desktop or embedded system with latency requirements in the milliseconds range.

     CONFIG_PREEMPT_VOLUNTARY=yes (new in this release)

     This option reduces the latency of the kernel by adding more "explicit preemption points" to the kernel code. These new preemption points have been selected to reduce the maximum latency of rescheduling, providing faster application reactions, at the cost of slightly lower throughput. This allows reaction to interactive events by allowing a low-priority process to voluntarily preempt itself even if it is in kernel mode executing a system call. This allows applications to run more 'smoothly' even when the system is under load. Select this if you are building a kernel for a desktop system.

     Linux kernel:
     - version 4.14.41
     - set CONFIG_PREEMPT=no and CONFIG_PREEMPT_VOLUNTARY=yes
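     To make the two help texts above a bit more concrete, here is a small userspace analogy (an illustration only - it assumes nothing about the actual kernel implementation): a worker loop that offers the scheduler an explicit chance to run something else at chosen points is the PREEMPT_VOLUNTARY idea, while a fully preemptible kernel needs no such calls because the scheduler may interrupt the loop almost anywhere - which is also why those explicit yield points can compile down to no-ops under full preemption, as happened with the function QEMU relies on.

         #include <sched.h>
         #include <stdio.h>

         /* Userspace analogy of the two preemption models described above.
          * With VOLUNTARY set, the loop yields the CPU at explicit points
          * (the role cond_resched()-style calls play inside the kernel).
          * A fully preemptible system needs no such calls - the scheduler
          * can interrupt the loop at any time - so the yield would be a
          * no-op there. */

         #define VOLUNTARY 1

         static void busy_work(void)
         {
             /* stand-in for a long, lock-heavy stretch of work */
         }

         int main(void)
         {
             for (int i = 0; i < 1000; i++) {
                 busy_work();
         #if VOLUNTARY
                 sched_yield();  /* explicit preemption point */
         #endif
             }
             puts("done");
             return 0;
         }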
  5. limetech

    We're moving

    Ok, move pretty much complete. Quite a pain moving both residence and corp headquarters at the same time! FYI, here's a brief history:

     - circa 2005/2006: unRAID born, Sunnyvale, CA
     - 2008-2011: Fort Collins, CO
     - 2012-early 2018: San Diego, CA (incorporated 2015)
     - present: Anaheim, CA (no, we're not in the tree house in Disneyland)

     Jon, meanwhile, moved within the same city near Chicago.

     Next up: release 6.5.3-rc2, which brings us up to date with the Linux 4.14 LTS kernel, along with a handful of bug fixes. As soon as that release is promoted to stable, we'll get the next release, unRAID 6.6, out there. Thanks to everyone for your patience during this time.
  6. limetech

    We're moving

    Hey, sorry everyone for not announcing this sooner, and also sorry for our sparse involvement over the last week. Both Lime Tech corp and I are in the process of moving, and coincidentally Jon is moving as well. Eric is busy chasing various compilation errors while getting unRAID OS running on the latest Linux kernel. I'm anticipating being fully back to work on June 1; Jon should be back sooner.
  7. Sure! Every data point is appreciated.
  8. Some notes from @eschultz:
  9. Installation and Bug Reporting Instructions

     This is a somewhat unusual release: vs. 6.5.2 stable we only updated the kernel to the latest patch release, but we also changed a single kernel CONFIG setting that changes the kernel preemption model. This change should not have any deleterious effect on the server, and in fact may improve performance in some areas, certainly in VM startup (see below). However, we want to make this change in isolation and release it to the Community in order to get testing on a wider range of hardware, especially to find out if parity operations are negatively affected.

     The reason we are releasing this as 6.5.3 is that for upcoming unRAID OS 6.6 we are moving to the Linux 4.16 kernel and will also be including a fairly large base package update, including updated Samba, Docker, Libvirt and QEMU. If we released this kernel CONFIG change along with all those other changes and something was not working right, it could be very difficult to isolate.

     Background: several users have reported, and we have verified, that as the number of cores assigned to a VM increases, the POST time required to start a VM increases seemingly exponentially with OVMF and at least one GPU/PCI device passed through. Complicating matters, the issue only appears for certain Intel CPU families. It took a lot of work by @eschultz, in consultation with a couple of Linux kernel developers, to figure out what was causing this issue. It turns out that QEMU makes heavy use of a function associated with the kernel setting CONFIG_PREEMPT_VOLUNTARY=yes to handle locking/unlocking of critical sections during VM startup. Using our previous kernel setting CONFIG_PREEMPT=yes makes this function a NO-OP and thus introduces serious, unnecessary locking delays as CPU cores are initialized. For core counts around 4-8 this delay is not that noticeable, but as the core count increases, VM start can take several minutes(!).

     We are very interested in seeing reports of any performance issues not seen in the 6.5.2 release. As soon as we get this verified, we'll get this released to stable and get 6.6.0-rc1 out there. Thank you!!

     Version 6.5.3-rc1 2018-05-18

     Summary: in order to fix the VM startup issue we need to change the Linux kernel preemption model.

     CONFIG_PREEMPT=yes (previous setting)

     This option reduces the latency of the kernel by making all kernel code (that is not executing in a critical section) preemptible. This allows reaction to interactive events by permitting a low-priority process to be preempted involuntarily even if it is in kernel mode executing a system call and would otherwise not be about to reach a natural preemption point. This allows applications to run more 'smoothly' even when the system is under load, at the cost of slightly lower throughput and a slight runtime overhead to kernel code. Select this if you are building a kernel for a desktop or embedded system with latency requirements in the milliseconds range.

     CONFIG_PREEMPT_VOLUNTARY=yes (new in this release)

     This option reduces the latency of the kernel by adding more "explicit preemption points" to the kernel code. These new preemption points have been selected to reduce the maximum latency of rescheduling, providing faster application reactions, at the cost of slightly lower throughput. This allows reaction to interactive events by allowing a low-priority process to voluntarily preempt itself even if it is in kernel mode executing a system call. This allows applications to run more 'smoothly' even when the system is under load. Select this if you are building a kernel for a desktop system.

     Linux kernel:
     - version 4.14.41
     - set CONFIG_PREEMPT=no and CONFIG_PREEMPT_VOLUNTARY=yes
  10. limetech

    [6.5.0] Upgrade to 6.5.1

    Changed Status to Open
  11. limetech

    [6.5.0] Upgrade to 6.5.1

    This is one of the intended uses for the 'system' share. I will update this report to 'Open' so that it stays on my radar.
  12. limetech

    unRAID OS version 6.5.2 available

    Yes, if we are still in the unRAID 6.6-rc phase when 4.17 gets released, we would almost certainly move to that kernel - which is to say, we'll probably upgrade to it. I guess the only possible exception would be if 4.16 gets marked 'LTS', but that is not likely.
  13. limetech

    unRAID OS version 6.5.1 Stable Release Available

    Be sure to back up the contents of your current flash 'config' directory.
  14. limetech

    unRAID OS version 6.5.2 available

    Kudos to @bonienl for that one!
  15. limetech

    unRAID OS version 6.5.2 available

    Yes, we are moving to the 4.16 kernel in the unRAID 6.6 release, under construction now.

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.