Xen/unRAID-6 Discussion


limetech


I know there is a lot of interest in this feature, so let's move Xen-specific discussion out of the -beta announcement threads and into this one, please.

 

First, here is my "vision" for Xen/unRAID-6:

 

The base “unRaid OS” is derived from a minimum installation of slackware64-14.1.  What I mean by “minimum” is that only the set of packages required for unRaid to function as a “NAS appliance” is included, though now we also include the packages required for it to run as a dom0.  This means no video drivers (other than basic VGA), no sound drivers, no X-windows, no desktop manager, and no libraries to support all those things.  BTW this is a big reason I like Slack: it’s very easy to customize your installation exactly the way you need it and very easy to understand everything happening “under the hood” – but I digress…

 

The good part about this: the OS is simple, fast, and has a minimal memory footprint (remember, the root file system is in RAM).

 

The bad part about this: if you want to install anything beyond a trivial application it’s going to require the installation of many dependent packages.  This is what people gripe about the most, because different “plugins” download different packages and/or versions from who-knows-where.  Once you get more than a few applications installed it becomes something of a mess to manage.  This is one of the reasons people think moving to a “newer” distro like Arch would be a benefit: a plugin could at least use Arch’s “pacman” package manager to manage this.

 

But Xen changes all this.  What I want to do is keep unRaid dom0 “minimal” – just enough to support storage management and the hypervisor.  Now if you want to run true applications, create a VM with your distro of choice and install those applications there.  The distro running in the VM can get at unRaid storage in a couple ways: accessing shares via the virtual network, and/or via virtualized storage assigned to it at VM-start time.
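For example (just a sketch – “tower” is unRaid’s default hostname and the share and image names below are placeholders), a guest could mount an unRaid share over the virtual network:

mount -t cifs //tower/media /mnt/media -o username=someuser   # SMB over the xenbr0 bridge; an NFS mount works the same way

or be handed a chunk of storage directly in its .cfg file, which it then sees as an ordinary block device:

disk = ['file:/mnt/cache/VMs/Ubuntu.img,xvda,w']   # virtual disk backed by an image file on the cache drive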

 

In this environment we still have plugins.  But plugins are much simpler – they would just extend the functionality of the webGui mainly, but could also implement some specialized web-based applications such as a simple VM manager.  (But how about this: run your VM manager in a VM!)

 

This is definitely a work in progress as I come up to speed with the requirements of Xen.

 

Link to comment

Some have expressed concerns over the 'road map'.  bkastner got it exactly right.  Yes I took a bit of a "detour" off the map to get Xen working, but I think it will be well worth it.  Why Xen vs KVM?  Flipped a coin and it came up Xen  ;)

 

 

Tom, please excuse the dumb question, but if I'm reading this right you're not going to be building in KVM hypervisor support then, you've settled on Xen only, yes?

Link to comment

Some have expressed concerns over the 'road map'.  bkastner got it exactly right.  Yes I took a bit of a "detour" off the map to get Xen working, but I think it will be well worth it.  Why Xen vs KVM?  Flipped a coin and it came up Xen  ;)

 

 

Tom, please excuse the dumb question, but if I'm reading this right you're not going to be building in KVM hypervisor support then, you've settled on Xen only, yes?

 

Not ruling out KVM but I had to pick one to start with.

Link to comment

That's pretty much the best case for the official release.  Really, dom0 shouldn't have any "app" plugins and unraid can run in a very streamlined manner.

 

Obviously some people will still want to run unraid as a VM or add unraid functionality to other distros, which is already being done, so there's no need to officially develop/support that yet, as our amazing community has that covered.

 

With this very simple yet substantial change to unraid, hopefully we can accelerate the beta/rc testing phases, as pretty much everyone can get away with running stock unraid while not sacrificing their usenet downloading, media server and other applications.  Hopefully this can speed up our ability to move on to things like cache pools, P+Q, etc. in a more timely fashion and reduce the number of support cases caused by plugins.

 

Running unraid on ESXi has been a dream and I would never install another app plugin on unraid ever again; however, I'm very much looking forward to testing unraid as dom0 and giving it direct access to my hardware again.

Link to comment

This is probably a question for those more experienced with setting up Xen etc...

 

Does the base OS have to have drivers for graphics cards etc. to be able to pass those through to the VMs, or is that all taken care of in the VMs themselves and you just pass through at the PCI level?

The device itself is passed through; the host doesn't really need to know what that device is, so the drivers etc. live in the receiving OS.
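For what it's worth, here's a rough sketch of how that looks with Xen's xl toolstack (03:00.0 is only an example – substitute whatever lspci reports for your device):

lspci                           # find the device's bus:device.function address, e.g. 03:00.0
xl pci-assignable-add 03:00.0   # detach it from dom0 so it can be handed to a guest
xl pci-assignable-list          # confirm it is now available for passthrough

and then in the guest's .cfg file:

pci = [ '03:00.0' ]             # the guest sees the raw device and loads its own driver for it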

Link to comment

So would the default way to manage the VMs be virt-manager over X11 forwarding?  Will SSH be included with the standard build now?

 

Eventually there will be a plugin.

 

However, creating a Ubuntu.cfg or Arch.cfg isn't complicated. Once you have one set up, you just copy it and customize it for the rest of your VMs.

 

Example of an Ubuntu VM .cfg file:

name = "Ubuntu"                                    # <--- Name of the VM.
memory = 1024                                      # <--- Amount of memory (in MB) to assign to the VM.
vcpus = 1                                          # <--- Number of virtual CPUs to assign to the VM.
vif = ['mac=00:16:3e:01:01:01,bridge=xenbr0']      # <--- The MAC address and bridge name for the VM.
disk = ['file:/mnt/cache/VMs/Ubuntu.img,xvda,w']   # <--- Where the VM hard drive is located.
bootloader = "pygrub"                              # <--- Needed to read / boot the Ubuntu bootloader.
vnc = '1'                                          # <--- Enable VNC so you can remote into the VM.
vnclisten = '0.0.0.0'                              # <--- Which IP addresses VNC listens on. In this example, all of them.
on_shutdown = 'destroy'                            # <--- Tells Xen to stop the VM when the guest shuts itself down.
on_reboot = 'restart'                              # <--- Tells Xen to restart the VM when the guest reboots.
on_crash = 'destroy'                               # <--- Tells Xen to stop the VM if it crashes.
pci = [ '03:00.0' ]                                # <--- PCI device IDs for any devices you are passing through to the VM.

Another example of how you would set up the VM hard disk and boot straight into the Ubuntu live CD installer ISO:

disk = ['file:/mnt/cache/VMs/Ubuntu.img,xvda,w', 'file:/mnt/user/ISOs/ubuntu.iso,hdc:cdrom,r']   # <--- The VM hard drive plus the Ubuntu live CD .iso to install from.
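Once the .cfg file exists, starting and reaching the VM is just a couple of commands. A minimal sketch, assuming the xl toolstack and that the file was saved as /mnt/cache/VMs/Ubuntu.cfg (adjust the path to wherever you keep yours):

xl create /mnt/cache/VMs/Ubuntu.cfg   # define and start the domain
xl list                               # confirm it is running
xl console Ubuntu                     # attach to its text console (Ctrl+] to detach)
xl shutdown Ubuntu                    # ask the guest to shut down cleanly

Since vnc = '1' is set above, you can also point any VNC client at the host on the port Xen assigns (5900 plus the domain's display number).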

Link to comment

So would the default way to manage the VMs be virt-manager over X11 forwarding?

I can't answer this one...

 

Will SSH be included with the standard build now?

 

However, I can cut and paste this.

 

- other: updated samba, php.  Added mailx (for future email notifications).  Added openssh.  The initial set of host keys will be generated upon first boot and stored on the flash in the config/ssh directory.
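So once that's in the build, reaching the console from another machine on the LAN should be as simple as (assuming the default hostname):

ssh root@tower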

Link to comment

That's very helpful, thanks Bus Driver  8)

 

That was just a general overview.

 

If you are installing a Linux VM, you will want to use the distro's Xen installer and not a standard ISO. Otherwise you will not load/install the paravirtualized drivers in your VM, and going from HVM to PVHVM or PV is a pain in the ass.

 

Paravirtualized drivers – special drivers that let the host and VM communicate directly instead of the host having to emulate a full PC. They make the graphics, hard drive, network, memory, CPU, etc. in your VM a lot faster than a fully emulated PC, and the host doesn't have to work as hard either.
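A quick way to sanity-check that the PV drivers are actually in use from inside a Linux guest (just a sketch – on some distros these are built into the kernel rather than loaded as modules):

lsmod | grep xen            # expect xen_blkfront, xen_netfront, etc. on a PV/PVHVM guest
cat /sys/hypervisor/type    # prints "xen" when the guest is running under the hypervisor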

 

Examples / instructions on how to install three different Linux distros with PV drivers into a VM (a generic PV-install sketch follows the list):

 

CentOS

 

Ubuntu

 

Arch Linux
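As a rough sketch of what those guides boil down to: for the installation pass you temporarily replace the pygrub line in the .cfg with a kernel and ramdisk taken from the distro's Xen/netboot installer (the paths below are only examples), then put pygrub back once the install has finished.

kernel = "/mnt/cache/VMs/install/vmlinuz"      # <--- the distro's netboot/Xen installer kernel (example path)
ramdisk = "/mnt/cache/VMs/install/initrd.gz"   # <--- the matching installer initrd (example path)
extra = "console=hvc0"                         # <--- send the installer's output to the Xen PV console
# bootloader = "pygrub"                        # <--- commented out during the install, restored afterwards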

 

 

Link to comment

But Xen changes all this.  What I want to do is keep unRaid dom0 “minimal” – just enough to support storage management and the hypervisor.  Now if you want to run true applications, create a VM with your distro of choice and install those applications there.  The distro running in the VM can get at unRaid storage in a couple ways: accessing shares via the virtual network, and/or via virtualized storage assigned to it at VM-start time.

 

In this environment we still have plugins.  But plugins are much simpler – they would just extend the functionality of the webGui mainly, but could also implement some specialized web-based applications such as a simple VM manager.  (But how about this: run your VM manager in a VM!)

 

IMHO I think this move is counter-productive to unRAID.  Adding VM support is great, but if you're competing with the QNAPs, Synologys, etc. out there, then the average home user just wants a simple plugin system where he can find a repository and just click install.  From what I am reading, VMs are a long way from that simplicity.  Instead of making the home NAS+Apps server segment more open to unRAID, you seem to be shutting it down.

 

unRAID still needs a solid plugin framework for it to be a player in the SOHO market.  Maybe nicinabox's tools are the answer, or maybe it's Docker.  Dunno.

Link to comment

disk = ['file:/mnt/cache/VMs/Ubuntu.img,xvda,w']   # <--- Where the VM hard drive is located.

disk = ['file:/mnt/cache/VMs/Ubuntu.img,xvda,w', 'file:/mnt/user/ISOs/ubuntu.iso,hdc:cdrom,r']   # <--- The VM hard drive plus the Ubuntu live CD .iso to install from.

SchoolBusDriver's examples above made me realize that VMs on unRAID 6 probably won't live on the protected array.

 

I think we all love the fact that the array disks get spun down when not being used, so the next most logical place for them is the cache drive, probably an SSD. Since the cache drive is not protected, it will be nice to have a method to regularly back up (or snapshot) them to protected storage.
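Until something nicer exists, a crude approach (paths are only examples, and the VM should be shut down first so the image is consistent) would be a scheduled copy from the cache drive to a protected share:

xl shutdown -w Ubuntu                             # stop the VM and wait for it to finish
rsync -a /mnt/cache/VMs/ /mnt/user/Backups/VMs/   # copy the disk images and .cfg files to the array
xl create /mnt/cache/VMs/Ubuntu.cfg               # start it back up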

 

Failing that, it reinforces the need for a mirrored cache system.

Link to comment

But Xen changes all this.  What I want to do is keep unRaid dom0 “minimal” – just enough to support storage management and the hypervisor.  Now if you want to run true applications, create a VM with your distro of choice and install those applications there.  The distro running in the VM can get at unRaid storage in a couple ways: accessing shares via the virtual network, and/or via virtualized storage assigned to it at VM-start time.

 

In this environment we still have plugins.  But plugins are much simpler – they would just extend the functionality of the webGui mainly, but could also implement some specialized web-based applications such as a simple VM manager.  (But how about this: run your VM manager in a VM!)

 

IMHO I think this move is counter-productive to unRAID.  Adding VM support is great, but if you're competing with the QNAPs, Synologys, etc. out there, then the average home user just wants a simple plugin system where he can find a repository and just click install.  From what I am reading, VMs are a long way from that simplicity.  Instead of making the home NAS+Apps server segment more open to unRAID, you seem to be shutting it down.

 

unRAID still needs a solid plugin framework for it to be a player in the SOHO market.  Maybe nicinabox's tools are the answer, or maybe it's Docker.  Dunno.

 

How about if there is a plugin manager and one of your options is "Install XBMCbuntu in a VM"?

Link to comment
