Everything posted by jonp

  1. A feature we are considering REMOVING at some point in the future of unRAID 6 is spin-up groups. This feature was originally designed to combat an issue specific to IDE hard drives, which are no longer prevalent in today's computers. The poll question here is to ascertain who, if anyone, is using this feature today and, if so, for what purpose. Please provide your feedback, and if you do use spin-up groups, please let us know how they help you! Thanks!
  2. From a discussion that started in this thread, we decided to create a simple plugin that installs a set of additional command-line tools onto your unRAID server. WARNING: These tools are for advanced users. Their use is not officially supported by Lime Tech. Use at your own risk and do not ask for direct support from us on the use of these tools. Eric put this plugin together mainly so that folks who want to do development or more advanced functions can do so without having to manually download and copy the Slackbuilds to their USB flash device. Here are the packages included in this plugin:
     apr 1.5.0
     apr-util 1.5.3
     bwm-ng 0.6
     cpio 2.11
     git 2.3.5
     iftop 1.0pre2
     inotify-tools 3.14
     iotop 0.6
     iperf 3.0.11
     kbd 1.15.3
     lftp 4.6.1
     lshw B.02.17
     neon 0.29.6
     p7zip 9.38.1
     perl 5.22.0
     python 2.7.9
     readline 6.3
     screen 4.2.1
     sshfs-fuse 2.5
     strace 4.10
     subversion 1.7.16
     unrar 5.2.5
     utempter 1.1.5
     vim 7.4.898
     If someone wants to write up a small description of each of these tools, I will incorporate it into the OP here. Otherwise, Google is your friend. Here's the link to the PLG itself (now on dmacias' repo). Copy and paste this on the "install plugin" page of the webGui and you'll be on your way! https://raw.githubusercontent.com/dmacias72/unRAID-NerdPack/master/plugin/NerdPack.plg
  3. https://lime-technology.com/wp/security-benefits-of-gaming-in-a-vm/
  4. jonp

    Some clarity

    Hi Lucannus! It actually is stated on the pricing page in the FAQ section: Will I have to pay for upgrades to newer versions of unRAID as they are released? No. All Basic, Plus and Pro registration keys are eligible to run new releases of unRAID Server OS at no additional cost.
  5. That is actually a point I make in the article, though I say it with more subtlety, as quoted below: With respect to this: I don't actually agree with this sentiment because I think the security holes generated from having multiple users on the same running copy of Windows are far greater than the security holes generated from having multiple VMs to patch-manage. Especially with Windows, because Windows automatically patches itself nowadays anyway, so there really isn't anything for the individual user to manage. The downside (today) of multiple VMs vs. sharing a single computer is purely storage utilization. Clearly you will use more storage by having multiple VMs as opposed to one single PC, but we are working on solutions for that as well (just not ready to unveil them yet).
  6. jonp

    [Support] Linuxserver.io - Organizr

    I resolved this. Let me know if there are other posts lingering out there that need to be hidden / addressed.
  7. jonp

    Help save a sale!

    Wow, I can't believe we let this thread go on this long without chiming in ourselves. Let's be ultra clear here: the Internet connection requirement we are discussing here ONLY applies to the Trial and not to paid licenses. The Internet connection requirement also only applies when you first boot the system and on any subsequent reboots. Starting and stopping the array doesn't require Internet validation. It is very rare for us to see complaints about the Trial requirements, but sure, we do get them from time to time. The bottom line is this: it is very difficult to make software (especially on Linux) that isn't easily pirated in some way. Internet validation is one of the best tools we have to work with, and while it's not ideal (we know this), it's the best option available. I have seen no alternatives suggested in this thread that would adequately deal with the concerns relating to licensing. Until someone comes up with one, this is what we're sticking with. I should also mention that the ability to use pfSense in a VM with unRAID isn't something we necessarily recommend or tout. It's something that some of our users do and do very successfully, but it does come with some limitations and this is one of them (the inability to adequately test it in the Trial experience). If this is a showstopper for you, then you'll need to look at another solution, because we're not going to change licensing requirements for this use case (it's not justifiable). With respect to running VMs while the array is stopped, this has been discussed pretty heavily and there are LOTS of challenges and concerns with that. They might not be obvious at first, but spend more than an hour thinking about it and I bet you'll fill up both hands with reasons why this isn't a 2-minute patch for us to include in the OS. Trust me when I say, that's a feature I was asking for internally over 2 years ago, and we still can't justify doing it for the sheer amount of work it would take at this point. It's not to say that we would never consider doing that in the future, but there is a long list of to-dos on the pile that will definitely come before we shift focus on to that.
  8. UPDATE 4/27/2015: --cpuset will be deprecated in Docker 1.6. For those using unRAID 6 beta 15, you are not affected, but when we upgrade to Docker 1.6, this will be impacted. The new method will be to use --cpuset-cpus (it's just being renamed).
     Hey guys, wanted to share something cool we figured out today that can substantially impact how Docker and VMs work together on the same host. In short, you can force individual containers to be bound to specific CPU cores inside unRAID.
     Why is this useful?
     The number one thing that can affect user experience for localized VMs running on an unRAID host is context switching. When applications are competing for access to the CPU, they essentially take turns, and when that happens, the processor performs a context switch where it temporarily unloads data from the processor's L1, L2, and L3 caches back into RAM so that the other process can load into that cache quickly to perform its job, then unloads and reloads the first process. While this is a normal thing to occur, it can cause some undesirable effects when severely processor-intensive activities are happening in both a container and a VM at the same time. By pinning specific containers to specific cores, similar to how we can with virtual machines, we can eliminate the need for that context switching and, as a result, avoid undesirable impacts to user experience.
     How to do it
     The plan is to implement this into dockerMan in an upcoming release as an advanced configuration option that you can choose to apply to all Docker containers or individual containers, but for now, you can take advantage of this TODAY by modifying your existing containers in dockerMan like so: in the "repository name" field, simply add the following code before the name of the author/repo: --cpuset=# If you want to set multiple cores, you can do so by using commas, or to specify a range of cores, you can use a dash. Examples:
     --cpuset=0,2,4,6
     --cpuset=0-3
     Note that cores are numbered starting with 0. Also note that you can check the number of cores you have in total on your system by typing the following in a command line session (SSH or Telnet): nproc
     A rough command-line equivalent of the dockerMan trick follows below.
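     For those comfortable at the command line, here is a rough sketch of what the equivalent docker run invocation looks like. The image name author/repo and the container name my-container are placeholders for whatever you actually run; only the --cpuset flags are the point here:

     # Docker prior to 1.6: pin the container to cores 0 through 3
     docker run -d --name my-container --cpuset=0-3 author/repo

     # Docker 1.6 and later: same idea, renamed flag (cores 0, 2, 4, and 6)
     docker run -d --name my-container --cpuset-cpus=0,2,4,6 author/repo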
  9. In a recent reply to a post by another forum member (archedraft), I provided this guide to help him assign one of his NIC devices to a virtual machine, leaving the other for host networking (unRAID OS). I didn't see much point in this at first because, with KVM and VirtIO, we can create virtual network interfaces that add little to no overhead compared to a physical NIC, but after testing with pfSense, archedraft confirmed for me that he saw a dramatic performance increase. The reason? In this particular instance, pfSense was acting as a firewall and is based on FreeBSD. The FreeBSD kernel used by pfSense, while having support for VirtIO, appears to be out of date and was not allowing full 1gbps LAN throughput as it does with Linux or Windows VMs. Passing through a physical Ethernet controller to his pfSense VM in this instance resolved his issue. So we have found at least one use case thus far where such a method is worth considering, and in the future we may find more. And since the question comes up from time to time, I thought it prudent to post this here as an advanced guide for those who want to try it.
WARNING: If you do not have multiple NICs in your system, doing this will result in your server losing all network connectivity.
IMPORTANT: Regarding VM to Host Networking Performance
When VMs utilize VirtIO, there is another distinct advantage in that networking between the host and guest can take place without traversing the copper wire. This allows for much faster throughput than the physical NIC hardware even supports at the port level. As an example, in mounting an SMB share to my SSD-based cache pool from inside my Windows VM, I was able to see IO throughput to the share exceed 250MB/s (that's megabytes, not megabits). When a VM is assigned a physical network controller, this advantage disappears, as the VM will communicate with the host as if it were a separate physical machine, going out the one NIC, down to your router/switching infrastructure, and then back in. This will limit your network throughput to that of the physical hardware. In my previous Windows VM / SMB example, I would be limited to 1gbps, or 125MB/s.
Guide
1 - Log in to your server via SSH.
2 - Type the following command: lspci
You will get a list like this:
00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d4)
00:1c.3 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 (rev d4)
00:1f.0 ISA bridge: Intel Corporation Z87 Express LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 04)
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cedar [Radeon HD 5000/6000/7350/8350 Series]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300 Series]
02:00.0 VGA compatible controller: NVIDIA Corporation GK110 [GeForce GTX 780] (rev a1)
02:00.1 Audio device: NVIDIA Corporation GK110 HDMI Audio (rev a1)
04:00.0 Multimedia video controller: Device 1a0a:6202 (rev 01)
Identify the Ethernet controller you wish to assign. Note the PCI address for the device (from my list, it would be 00:19.0). From my list, I only have one network card, so I shouldn't do this, but if you have multiple, either one SHOULD be fine to select.
3 - Type the following command: lspci -n
00:00.0 0600: 8086:0c00 (rev 06)
00:01.0 0604: 8086:0c01 (rev 06)
00:01.1 0604: 8086:0c05 (rev 06)
00:02.0 0300: 8086:0412 (rev 06)
00:03.0 0403: 8086:0c0c (rev 06)
00:14.0 0c03: 8086:8c31 (rev 04)
00:16.0 0780: 8086:8c3a (rev 04)
00:19.0 0200: 8086:153b (rev 04)
00:1b.0 0403: 8086:8c20 (rev 04)
00:1c.0 0604: 8086:8c10 (rev d4)
00:1c.3 0604: 8086:8c16 (rev d4)
00:1f.0 0601: 8086:8c44 (rev 04)
00:1f.2 0106: 8086:8c02 (rev 04)
00:1f.3 0c05: 8086:8c22 (rev 04)
01:00.0 0300: 1002:68f9
01:00.1 0403: 1002:aa68
02:00.0 0300: 10de:1004 (rev a1)
02:00.1 0403: 10de:0e1a (rev a1)
04:00.0 0400: 1a0a:6202 (rev 01)
4 - Identify your network card by PCI address (first column of results).
5 - Obtain the vendor/product ID for that device from the last column. 00:19.0 from my example is 8086:153b.
6 - Edit your syslinux.cfg file and add the following after the append but before initrd=/bzroot:
pci-stub.ids=8086:153b
REPLACE THE VENDOR/PRODUCT ID FROM MY EXAMPLE ABOVE WITH THE ONE YOU OBTAINED IN STEP 5.
7 - Reboot your system.
8 - Edit your VM using the XML editor mode.
9 - Add the following between the <devices> and </devices> tags.
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/>
  </source>
</hostdev>
Modify the address line, entering the two-digit bus, slot, and function from your PCI address. So 00:19.0 translates to what I have above. Save the XML and start your VM. All should be right as rain!
NOTE: If you get an error, it could be because your NIC is in an IOMMU group with another in-use PCI device (either assigned to the host or to another VM). In this instance, you can attempt to use the PCIe ACS Override option under the VM Manager settings page, but use of this toggle is considered experimental.
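Two optional sanity checks, for anyone who wants them. Both assume the example values from above, so substitute your own vendor/product ID and PCI address:

# After step 6, the edited append line in syslinux.cfg should look roughly like this
append pci-stub.ids=8086:153b initrd=/bzroot

# After the reboot in step 7, confirm the NIC is bound to the stub driver
# (the output should report "Kernel driver in use: pci-stub")
lspci -k -s 00:19.0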
  10. So I know we don't have a blog up on the website yet, but given that this video hit YouTube faster than I expected, I wanted to share it here now so you could see it. I will add a blog post with some behind-the-scenes details on this video as soon as I can. For now, enjoy:
  11. Hello unRAID Community! For far too long now we have gone without a proper logo for unRAID. We have wanted one for some time now, but because none of us at LT are graphic artists, we were afraid we'd end up with something like this: So instead of assaulting your eyes with our pathetic attempts at art, we thought we'd throw it out to you, our community, to see if any talented artists out there want to submit their ideas and concepts for consideration. We will be working with a professional designer to generate the final artwork, but your ideas could shape the look and feel of the logo. Here's a quick run-down of how we're going to do this:
      - All proposals must be in before December 3rd, 2017.
      - The logo should be 1024 x 1024 pixels in size, and should look good when downsized to 512 x 512 or even 256 x 256. Creating a vector-based image (think Adobe Illustrator) is highly recommended.
      - No "tagline" is required to be in the logo (e.g. "The Ultimate Home Media Server"), but if you have an interesting one that you want to include, go for it!
      - Feel free to submit multiple options if you have more than one!
      - You must own the copyright to any images/art you use and be willing to transfer said ownership to Lime Technology, Inc.
      - Any fonts you use must be royalty-free, otherwise we can't use them. Open source fonts are a big plus.
      - To submit a proposal, simply reply to this thread and include your work in the post. If you wish to incorporate the use of stock photos/imagery, please provide a link to where it can be purchased along with your submission (original artwork will definitely be looked at more favorably).
      - It is possible that we may want to include concepts from multiple designs.
      NOTE: If you are a professional designer and wish to be considered for contracting the final work, please indicate as such in your response and be sure to include a link to a portfolio of your work. Thanks in advance for all of your submissions!
  12. Hey, thanks for calling us out on this. Seriously. We dropped the ball on providing you guys with feedback and for that, we are truly sorry. Good news though! Following this exercise, we did end up contracting with a firm that is helping us finalize a new logo, and the work done in this thread has definitely contributed towards that end. While I think the final product will probably end up looking a bit different from what we've seen here, this definitely helped us figure out what we did and didn't like and guided us in the right direction. That being said, it's worth noting that the investment we are making extends far beyond just a logo, and I think you guys will be awfully pleased with the end result once we get there. We're still probably at least a few months away from revealing the efforts of this project, but we may have a few things to share along the way ;-).
  13. Hi all, It's been a while since I chimed in on this topic, and given some recent updates/patches from the QEMU team and some testing I've been performing, I thought it was about time I provided an update.
Background
Many Intel CPUs feature support for an on-die GPU. We refer to this as an Intel IGD (Integrated Graphics Device). This essentially means being able to supply and power graphics for a system without needing a discrete GPU (either soldered onto the chipset, as with many AMD systems and their Radeon graphics chips, or as a PCIe device). However, up to this point, unRAID has not supported assigning that GPU to a virtual machine in the same way we support it for PCIe add-on GPUs such as AMD and NVIDIA. That will be changing in the future.
Why is IGD pass through to VMs important?
While discrete GPUs can provide a powerful performance boost to 3D applications (making them ideal for gaming), they do require an additional investment above and beyond the CPU/motherboard, increase the demand for power, generate more heat, and require you to supply an enclosure (case) that is large enough to house them effectively. If all you want is a VM with a local desktop for basic 2D applications (office / productivity / browser), an IGD has more than enough horsepower to satisfy those tasks. A powerful use case could be to build a small 4-bay NAS for the living room that has a CPU with both an IGD and Intel VT-d support. This small form-factor system could be placed in the living room and a video connection could be made from the IGD directly to the main TV, while also using the NAS for streaming media to other devices over the LAN. The VMs that system could run would include both OpenELEC (for playing media) and SteamOS (for streaming games from another PC).
Why doesn't IGD pass through work today?
In short, IGDs are not like your typical discrete GPU. They behave very differently and therefore require a lot of special coding to pass them through to VMs.
Are there any efforts to support IGD pass through to VMs?
There is an entire project dedicated to the use of IGDs with virtual machines: Intel GVT. This project is hard at work to bring multiple benefits to users. You can read more about the project on its official site: https://01.org/igvt-g
What hardware will be supported?
Initial support will be for Intel Haswell and Broadwell CPUs only. But wait, what about my shiny new Skylake?! Sorry but no, it doesn't appear that there are any plans to support Skylake at this time, but that may change in the future. Note that Skylake CPUs are not recommended for VMs with GPU pass through (harder to isolate the IOMMU groups; no support for the ACS override). What about my Ivy Bridge or <INSERT OLDER CPU HERE>? Nope.
What is the current status of IGD pass through?
Back in December, a patch was released for QEMU to add support for passing through Intel IGDs to virtual machines. This patch has yet to be merged into the stable branch of QEMU, but we at Lime Tech built a special version of QEMU to include this patch and tested it ourselves. In short...it works. However, it is not yet ready for prime-time inclusion with unRAID. First and foremost, there is no libvirt support for this at all yet, which means to do it we have to invoke QEMU manually, which is a huge problem from a manageability standpoint. We also need to thoroughly test this (with various hardware and VM configurations).
And lastly, we need to do the logic coding in VM Manager to make it as simple as selecting the IGD from the graphics device list when adding/editing a VM.
So??? When???
Supporting IGD pass through to VMs should be an achievable objective for us in 2016. I will be able to narrow down that timeline sometime in the next 1-2 months, as that will be the earliest we will be able to dedicate significant R&D time towards this item.
  14. jonp

    Tapatalk No UnReads

    We upgraded our forum software and, of course, Tapatalk broke again. We're looking into it. Thank goodness the forum actually works really well just using a standard web browser on a mobile device.
  15. Hey everyone, just thought I'd put this up here after reading a syslog from another forum member and noticing a recurring pattern where folks let Plex create temporary files for transcoding on an array or cache device instead of in RAM.
      Why should I move transcoding into RAM? What do I gain? In short, transcoding is both CPU and IO intensive. Many write operations occur to the storage medium used for transcoding, and when using an SSD specifically, this causes unnecessary wear and tear that can burn out the SSD more quickly than necessary. By moving transcoding to RAM, you take that burden off your non-volatile storage devices. RAM isn't subject to "burn out" from usage like an SSD would be, and transcoding doesn't need nearly as much space in memory as some would think.
      How much RAM do I need for this? A single stream of video content transcoded to 12 Mbps on my test system took up 430MB on the root RAM filesystem. The quality of the source content shouldn't matter, only the bitrate to which you are transcoding. In addition, there are other transcoding settings you can tweak that would impact this number, including how many seconds of transcoding should occur in advance of being played. Bottom line: if you have 4GB or less of total RAM on your system, you may have to tweak settings based on how many different streams you intend to transcode simultaneously. If you have 8GB or more, you are probably in the safe zone, but obviously the more RAM you use in general, the less space will be available for transcoding.
      How do I do this? There are two tweaks to be made in order to move your transcoding into RAM. One is to the Docker container you are running and the other is a setting within the Plex web client itself.
      Step 1: Changing your Plex container properties. From within the webGui, click on "Docker" and click on the name of the PlexMediaServer container. From here, add a new volume mapping: /transcode to /tmp. Click "Apply" and the container will be started with the new mapping.
      Step 2: Changing the Plex Media Server to use the new transcode directory. Connect to the Plex web interface from a browser (e.g. http://tower:32400/web). From there, click the wrench in the top right corner of the interface to get to settings. Now click the "Server" tab at the top of this page. On the left, you should see a setting called "Transcoder." Clicking on that and then clicking the "Show Advanced" button will reveal the magical setting that lets you redirect the transcoding directory. Type "/transcode" in there, click apply, and you're all set. You can tweak some of the other settings if desired to see if that improves your media streaming experience. Thanks for reading and enjoy! (For the command-line-curious, a rough sketch of the equivalent docker run mapping follows below.)
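      For reference, here is a hypothetical sketch of what that same volume mapping looks like if you were running the container by hand rather than through dockerMan. The image name author/plex-repo and the container name are placeholders, and the other options a real Plex container needs (ports, config paths, media mappings) are omitted; only the -v mapping is the point:

      # unRAID's root filesystem lives in RAM, so mapping the host's /tmp into the
      # container at /transcode puts Plex's transcode files in RAM
      docker run -d --name plex -v /tmp:/transcode author/plex-repo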
  16. jonp

    VMs not Starting Up + Other Questions

    Hi Dave, Here are the answers to your questions: By default, you need to have a GPU dedicated to unRAID OS (especially if you are booting into GUI mode) in order to pass through other GPUs to virtual machines. There are possibilities to work around this issue, but they require you to pass the ROM file for your GPU through manually (there are instructions on how to do this here, and a rough XML sketch follows below): Of course it would! Windows 7 is pretty old and GPU pass through works only so-so in that world. Windows 8 or 10 would result in MAJOR improvements. Unfortunately no. USB hubs themselves are not assignable devices. The other way you could do this would be to purchase a 4-in-1 USB controller that presents 4 discrete USB controllers to the host OS. Then you could attach a USB hub to each port on that device, then pass each of those individual controllers through to individual VMs.
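    For context, the manual ROM approach mentioned above boils down to adding a <rom> element to the GPU's <hostdev> entry in the VM's XML. This is a minimal, illustrative sketch only; the PCI address and file path are placeholders for your own GPU and wherever you keep your dumped ROM:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <!-- Point this at your own dumped GPU ROM file -->
      <rom file='/mnt/user/isos/vbios/gpu.rom'/>
    </hostdev>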
  17. Supporting the various web-browsers out there to provide a common experience is a real challenge nowadays. I mean, it has always been something we've had to deal with to some extent, but as the webGui continues to evolve into something bigger and better, with more technologies being used to create a superior experience, we expect that we'll run into some browser-specific challenges along the way.
  18. Thanks for the great feedback everyone!
  19. We discovered a thread in the Debian mailing list that documents an issue with Intel processors of both the Skylake and Kaby Lake families. You can read the thread yourself for a complete debrief on the issue, but here is the synopsis, as also documented in the thread from the mailing list: Due to the nature of this issue, we are recommending all affected users do the following:
      - Read the Debian mailing list post regarding this issue to confirm your CPU is affected.
      - Check to see if there is a BIOS update available for your hardware.
      - If no BIOS update is available, disable Hyperthreading in your system BIOS immediately.
      We are looking into providing a way for users to apply a microcode update as a workaround that temporarily patches out this bug on a per-boot basis, but until then, users with these systems should consider it risky to continue using the Hyperthreading feature. (A quick command-line check is sketched below.)
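      If you are not sure whether your server is in scope, a quick sanity check from the unRAID command line might look like the sketch below; the Debian post and your motherboard vendor remain the authoritative sources:

      # Show the CPU model so you can compare it against the affected Skylake/Kaby Lake list
      grep -m1 "model name" /proc/cpuinfo

      # If "Thread(s) per core" reports 2, Hyperthreading is currently enabled
      lscpu | grep "Thread(s) per core"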
  20. Just to be clear, it is not our intention to support the virt-manager tool you are using, so I can't really commit to anything with respect to that. I think adding the SCSI option will suffice for now and folks that want to further adjust options can do manual xml edits to make that happen.
  21. Our thinking exactly. Longer term, we want to do some fancy stuff by auto-detecting the use of an SSD and auto-tuning the XML based on that, but that's a more complicated feature.
  22. OK, we are going to add the option to select SCSI as a bus type for storage devices. This will also automatically generate the XML for the virtio-scsi controller that you'll need to talk to these devices. We are NOT adding the discard='unmap' option directly to the GUI VM editor yet. That may be something we do in the future. For now, users will have to use the XML editor mode for the VM to add that special option (a sketch follows below).
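      For anyone who wants to add it by hand in the meantime, here is a minimal, illustrative sketch of what a virtio-scsi disk with the discard option can look like in the VM's XML. The source path and target name are placeholders; your existing <disk> and <controller> entries will differ:

      <disk type='file' device='disk'>
        <!-- discard='unmap' lets TRIM/discard requests from the guest reach the underlying storage -->
        <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
        <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
        <target dev='sda' bus='scsi'/>
      </disk>
      <controller type='scsi' index='0' model='virtio-scsi'/>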
