[REQ] A Newbie Guide to Virtualization, unRAID 64, and unRAID Distro



Hi Guys,

 

I have been reading and following the colorful threads on unRAID 64, unRAID Distro, XBMC-in-unRAID, etc.

 

I clearly fall into the "not a Linux expert who got unRAID to store my XBMC media" category; I have mentioned this in other posts. As a tinkerer and envelope-stretcher, I am fascinated by what I think the possibilities are. Grumpy, Ironic, and others have a vision for unRAID. As I have said before, I am thrilled that super-smart experts are involved with unRAID. I think there are exciting possibilities we can tap into.

 

While I read all these posts, the first thing that becomes clear is how little I know. I get the basic concepts of virtualization (I think). I get that my hardware will need to support something called VT-d to properly pass through devices (which I assume means the virtualized environment talking directly to the hardware).

 

Something I believe many of us non-experts would like is a post describing what these things can do for us in our real-world situations. I understand that one can set up media serving in a variety of ways: HTPCs that connect to unRAID to read the files (which is what we all do today); Plex Media Server to serve files to Rokus, smart TVs, etc.; HTPCs that remote-boot via PXE from images stored on unRAID; and somehow virtualized HTPCs, all running from unRAID and somehow connected to our TVs. These virtual machines could also be our desktop Win7 machines.

Can some kind soul please describe the possibilities in near-layman's terms, so we can grasp the concepts and advantages? Please outline the type of hardware required to make this work (i.e., for virtualization, do I need three video cards in the unRAID box with cables to the TVs? What types of video cards? What type of motherboard accepts these cards; are they all PCIe x16 slots? What about this HDMI-over-Ethernet business?). Are we talking fancy, non-desktop motherboards and hardware? What about our current cheap SATA controller cards; do they still work? What about USB IR remotes for the various TVs? BTW, I do not mind spending money to upgrade my unRAID version and my hardware.

 

I know there will be options within options... and chapters could be written on any small issue. But if someone can just paint the landscape in broad strokes, I think it could inspire the broad base of unRAID adopters.

 

I, for one, want to take advantage of the new possibilities... I rarely let my lack of knowledge stop me. This is all very exciting, and I am positive I am not the only non-pro eager to learn about this.

 

Many thanks,

 

H.


Link to comment

Not sure exactly what you are asking here, but I want to chime in with my own observations that might answer some of the things in the OP's post.

 

First and foremost, unRaid is a file server (NAS) type of setup.

Many people want/try to add functionality to the setup via plug-ins, and many also try to improve the main user experience by creating newer GUIs and such.

But as it is, core unRaid is limited not only by its design and its own long development/update cycle, but also because it is based on an outdated Linux distro with a very long release cycle: outdated kernel, limited number of drives, etc.

 

Many current users have been trying to overcome some of these issues by virtualizing the setup. That creates other issues by itself, as the product was not created with virtualization in mind (it expects a bare-metal setup) and needs a lot of modifications and specific hardware compatibility to work; hence the number of threads by experienced users providing specially modified VM files with instructions on how to use them for each VM host.

 

What grumpy and ironic are trying to do is push unRaid development onto a more mainstream route: first, by converting unRaid into a module based on a newer, regularly updated 64-bit distro;

then by changing the way unRaid is set up and configured, so it installs easily on a hard drive like a normal distro and uses the USB stick only for the license and some config storage.

 

Since the goal is to use a regular, up-to-date distro, virtualization would get much better: you could run KVM/Xen natively on the unRaid host, removing many of the hardware compatibility issues we have today (most of them exist because of the need to pass controllers through into unRaid; with unRaid as the host system, that will no longer be an issue).

 

As it is today, many plug-ins have been developed to add functionality to unRaid that is readily available on other Linux distros. If this new setup comes to be, you will not be locked into custom plug-ins with limited support and could use the standard applications already available.

 

Also, in some limited cases, if you need/want to use the server hardware as an XBMC box as well, using the mainstream distro will let you do just that.

Current unRaid does not support this functionality.

 

 

Link to comment

Thanks Grumpy. I have been reading/following all the threads on this.

BTW, do you know of any guides on how to set up a KVM VLAN with a VM firewall on OpenSuse?

I could not find any current pages for it; most of what I can find is for Ubuntu and Xen.

 

I hope to set up OpenSuse 13.1 with a pfSense VM to replace my router.
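For what it's worth, on a libvirt/KVM host the rough shape of that setup is just an isolated bridge plus two NICs on the firewall VM. A sketch only; the bridge and network names are made-up examples, not from any particular guide:

    # Define an isolated "LAN" network (a bridge with no physical NIC attached),
    # so the firewall VM can sit between it and the WAN-facing bridge.
    cat > lan-net.xml <<'EOF'
    <network>
      <name>lan</name>
      <bridge name='virbr-lan'/>
    </network>
    EOF
    virsh net-define lan-net.xml
    virsh net-start lan
    virsh net-autostart lan

    # The pfSense VM then gets two NICs: one on the WAN bridge (e.g. br0, bridged
    # to the physical NIC) and one on 'lan'. Other guests attach only to 'lan',
    # so all their traffic has to pass through the firewall VM.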

Link to comment

This may add to the confusion, but wouldn't it be correct to say that by moving unRAID to a mainstream, up-to-date distro you'd eliminate the need for members of the unRAID community to virtualize in order to install additional functionality?

 

So then all that remains is: what would be the other reasons for virtualizing? I'm sure it is a huge list, but I think the OP is asking for the most common. For example:

a pfSense VM to replace my router
Link to comment

This may add to the confusion, but wouldn't it be correct to say that by moving unRAID to a mainstream, up-to-date distro you'd eliminate the need for members of the unRAID community to virtualize in order to install additional functionality?

 

So then all that remains is: what would be the other reasons for virtualizing? I'm sure it is a huge list, but I think the OP is asking for the most common. For example:

a pfSense VM to replace my router

 

That is partially true.

I believe the main reasons people try to virtualize are that they want to run things that are not compatible with unRaid, or they want to use the hardware they have to its fullest potential. There is also consolidation of hardware to save money on energy.

 

Making unRaid the main host system, capable of virtualization à la KVM/Xen, has several advantages:

#1 - Since unRaid is the host, you can use a protected share as a datastore for all the VMs (see the sketch below).

#2 - You do not need to run any plugins on the host, so unRaid runs in stock mode and is therefore more stable. It truly is a set-it-and-forget-it system; everything else can run in a VM.

#3 - Much easier UPS integration, with unRaid as the Dom0.
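To make #1 concrete: creating a guest whose disk image lives on a protected share might look roughly like this on a KVM-capable unRaid host (a sketch only; the /mnt/user/vms path, VM name and sizes are made-up examples):

    virt-install \
      --name sab-vm \
      --ram 1024 --vcpus 1 \
      --disk path=/mnt/user/vms/sab-vm.qcow2,size=10,format=qcow2 \
      --cdrom /mnt/user/isos/ubuntu-server.iso \
      --network bridge=br0 \
      --graphics vnc

The disk image sits on the parity-protected array, so the guest's data gets the same protection as any other share.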

Link to comment

This may add to the confusion, but wouldn't it be correct to say that by moving unRAID to a mainstream, up-to-date distro you'd eliminate the need for members of the unRAID community to virtualize in order to install additional functionality?

 

So then all that remains is: what would be the other reasons for virtualizing? I'm sure it is a huge list, but I think the OP is asking for the most common. For example:

a pfSense VM to replace my router

You could run "virtual full desktops" (guests) from within a hardware server (host). In plain terms, you could run, for example, Windows, Linux and OS X from one single box.

 

And of course one of these guests could be pfSense ;-)

 

Sent from my GT-I9305 using Tapatalk

 

 

Link to comment

Thank you guys...  ;)

 

I am not sure you guys understood what I was asking.

 

What I want is a clear description of the benefits of what you guys are trying to achieve; in other words, what new things can this do for me? The description should be as if you had to explain it to someone slightly more knowledgeable than my mom or my wife.

 

Because I read through all the threads, I believe I follow your explanation. The key words are "I believe". I believe I understand what virtualization is, but I am not sure.

 

With that in mind, let's assume that a large portion of the unRAID user base consists of people who:

  • implemented it for home media storage
  • learned how to use Putty
  • followed some guides on what to type at the command prompt
  • got to learn about MySQL to centralize their movie db
  • started using Sab, CP, SB
  • learned to install Openelec on an htpc
  • are fiddling around with a Raspberry Pi or an Ouya or something that will run XBMC

 

Please also assume that we (the user base described above) do NOT need convincing that what you guys are doing is smart. You are "smarter" than us... we buy it... you had us at hello. But please clarify what we could do that we do not know exists. Tell us what we are missing.

 

For example:

 

With 64-bit unRAID, as Tom is attempting, you can:

  • address more RAM

  • benefit two

  • benefit three

 

With unRAID Distro you will be able to:

  • use virtualization

  • benefit two

  • benefit three

 

What can you do with virtualization?

An explanation here like: you can eliminate your OpenELEC HTPCs by doing this and that. You need to buy this and that... You can also do this and that with your desktop... If you do not buy this, you can then do that, but then this will happen.

 

You can install this.

What this can do if I have this or that.

 

Of course there will be highly involved technical explanations for all this... and we WILL learn them if we want those things. We already learned enough to get where we are. I know we are going to want/need some of those things.

 

I hope this helps, and I think this would go a long way toward creating a push from us non-pros.

 

Thanks!

 

:D  :D  :D

Link to comment

I think the point is that consumer hardware is really powerful now for not much money, and most of it sits there idle a lot of the time. Virtualisation gives you at least two things: the ability to consolidate multiple machines onto one system, and isolation (at the software level) between them. The reason to do this depends entirely on what you want to do with your computers. For me, I see very little use in the home environment for the first benefit: the family is in at the same time and tends to do things together, so load is created at the same time. In a server environment there are lots of things going on at different times, so there you do get better utilisation of a set physical footprint.

 

The real upside is the isolation. Software has dependencies and sometimes they conflict or just silently break stuff. A VM helps you avoid that by letting you change things independently and giving an easy way to revert to a known good state as well as an easy way to back up that known state.
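As a concrete example of reverting to a known good state: most hypervisors expose snapshots. With libvirt/KVM it is roughly the following (a sketch; the VM and snapshot names are illustrative, and internal snapshots assume a qcow2 disk):

    virsh snapshot-create-as squeezebox-vm pre-upgrade   # take a named snapshot
    # ...upgrade things inside the guest, test them...
    virsh snapshot-revert squeezebox-vm pre-upgrade      # roll back if it broke
    virsh snapshot-list squeezebox-vm                    # see what snapshots exist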

 

I am quite happy with unraid in a VM for this reason as it means I can just leave it there untouched knowing that nothing else (except the host) can break it.

 

FWIW the only other VM I need atm is one for a squeezebox server, again because that tends to bring in a shed load of dependencies.

 

Ultimately, it is usually easier to manage a few self-contained things than one big monolith.

Link to comment

Virtualization really comes down to two main benefits in my mind - better hardware utilization (many of us have way more CPU and memory capacity than UnRAID needs), as well as server consolidation/isolation.

 

Like many people using UnRAID, I started with a base setup just to manage my media storage. I followed the guide to install UnRAID to give me some additional options and was off to the races. As my comfort level grew I added things like UnMENU, SimpleFeatures, SABnzbd, SickBeard, Couch Potato, Headphones and most recently MySQL as I am in the process of setting up a unified database for my various XBMC clients. I've also installed Plex to play around with the options it brings.

 

As I add more and more to UnRAID, I do worry about the worst-case scenario where UnRAID dies and I have to remember how to set up all these plugged-in applications again, plus there is the downtime associated with all the automation that has been configured.

 

The point of virtualization in general is usually server/computer consolidation. If you look at it from a business perspective, many companies have applications that are business-critical but have small CPU and/or RAM requirements; however, it's nearly impossible to buy a server that just meets these low requirements (try to buy a computer with a single CPU core and 2GB of memory - it's easier in the consumer space, but usually not cost-effective, and it's worse in the corporate space).

 

Now imagine that you have 30 computers that really only have low-end CPU and RAM requirements, but you have 30 physical servers sitting in your data center or server room. Not only does this take up a large amount of space, generate a lot of heat and cabling, and require management to oversee, but each server is also underutilized (sometimes using only 5-10% of its capabilities), and each server is a single point of failure.

 

Virtualization from a corporate perspective is based on the strategy that instead of having 30 underutilized servers sucking power and generating heat, you buy one monster server (comparatively speaking), or better yet two (for redundancy), and then you virtualize each of those underutilized computers and run them from the new beefy servers. Because each server is virtualized within its own self-contained bubble, you can mix Linux, Unix and Windows. Each virtual machine (VM) believes it's installed on bare metal and has access to all the system resources, where in reality there is a virtualization layer (ESX, Xen, Hyper-V or whatever) serving up virtualized instances of your resources (CPU, RAM, storage, network card, video, etc.) to each VM and managing all 30 VM instances from a central pool.

 

There are significant benefits to this, including:

 

- Reduced management (you are only managing 2 physical servers, not 30 - though some VM management is required as well)

- Reduced footprint - you only have 2 servers sucking power and generating heat - potentially with some shared storage.

- Better hardware utilization - instead of having 30 machines using 5% of the available CPU and RAM, you can now balance those 30 machines across multiple virtualization hosts, and leverage 80% of the available CPU and RAM - which is a much better return on your investment

- Lower risk/better availability - Usually in the corporate world, when building virtualization you plan for N+1 hosts. That means you determine how many hosts you need to support all your VMs (add up the CPU and RAM requirements and figure out how best to lay them out), and then you add one additional host so that if a single server dies, its VMs automatically fail over to another host - this way the hardware failure is invisible to the virtual servers, and hopefully to the end users of those applications.

 

From a consumer perspective (specifically with UnRAID), virtualization allows you to remove some of the plugins from the UnRAID server and run those applications in isolated VMs. This gives you the following benefits:

 

- Improved stability of UnRAID - Right now you could have 10 different plugins running on UnRAID. Each is potentially created by a different person, each has unique requirements, and each has the potential to blow up UnRAID. If one of those 10 plugins is updated incorrectly, it could cause unexpected side effects with nasty consequences. The more you add on top of UnRAID, the higher the associated risk of something going wrong. By taking plugins away and running them in a VM (such as SAB, SickBeard, CouchPotato, Plex, MySQL), you reduce the complexity of your UnRAID environment and decrease the possibility of an unexpected event caused by an incorrectly updated plugin or a conflict between plugins.

- Application Isolation - Along with improving the stability of UnRAID, you also reduce the impact of UnRAID blowing up. Currently, if you have everything running on UnRAID as I mentioned at the top of this post, then if your USB stick died it would be a ton of work to set everything up and reconfigure it again. If you had a separate VM running SAB, SB and CP, another VM running Plex, and possibly another running MySQL and an XBMC updater, then if UnRAID died it would only impact UnRAID. All your other VMs would still be fine, which means you don't need to worry about re-creating or re-configuring those apps. If UnRAID is your base OS, as suggested in the virtualization categories, then these VMs would be out of commission until the server was rebuilt, but you would only need to re-point the virtualization software to the pre-existing VMs and restart them.

- Server/Computer Consolidation - Even though this is not as prevalent an issue as in the corporate world, many users here have multiple computers for different tasks. Some have spoken about software based routers, whereas others (such as myself) have Active Directory and Exchange running at home, or other software. Rather than buying additional servers to manage this, being able to virtualize on top of UnRAID allows you to expand your home server environment as your needs grow, without having to invest in additional hardware (unless you have a minimal UnRAID hardware config, in which case you may need to make an investment).

- Test environments - One of the really nice options virtualization allows is that you can easily create and destroy VMs (see the sketch just below this list). This gives you a ton of options to try different configurations (again, without needing new hardware) and figure out exactly what you want to do before moving it into production. For example, say you want to add new functionality to UnRAID (like MySQL and maybe an XBMC centralized library), but have no idea what you are doing and don't really want to mess around with your UnRAID server. With virtualization you could easily create another UnRAID server in a VM (you could even just copy your current production configuration), tweak and modify the test VM until you are happy with the results, and then, once you are comfortable, make the necessary changes to your production UnRAID server. This is a far better approach than trying to undo changes on your production server after something goes wrong. It is also a great way to test things like the different GUI options without modifying your production server until you are sure which one you want.
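A sketch of that test-environment workflow with libvirt's standard tools (the VM names are just examples):

    # Copy a production VM, giving the clone fresh disks and a new MAC address:
    virt-clone --original unraid-prod --name unraid-test --auto-clone

    # Tinker with unraid-test, then throw it away when done:
    virsh destroy unraid-test                        # power it off
    virsh undefine unraid-test --remove-all-storage  # delete it and its disks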

 

As each of us matures in our technical understanding and our desire to further improve our personal environments, virtualization makes a lot of sense as a platform to work from.

 

With all the above said, the key to taking advantage of virtualization is making sure you have the proper hardware to support it, as well as enough resources to exploit it.

 

To truly have a setup that can take advantage of these options, you need to invest in a higher-end configuration than what is often suggested for UnRAID. Again, one of UnRAID's claimed features is a minimal hardware config. This means that many of us, myself included, bought low-end AMD/Intel processors, fairly basic motherboards and minimal RAM configurations. If you are planning on virtualization for the future, you need to learn some new terminology and re-plan the hardware requirements for your UnRAID server.

 

If you are an AMD fan, then you are looking for motherboards and CPUs that support AMD-Vi (AMD's IOMMU, often just listed as "IOMMU" in BIOS menus; HyperTransport is the interconnect, not the virtualization feature).

If you are an Intel fan, then you are looking for motherboards and CPUs that support Virtualization Technology for Directed I/O, abbreviated VT-d.

 

I prefer Intel, and just did this research for myself for my updated UnRAID server. Intel has a website (http://ark.intel.com) that you can use to look up any CPU to confirm its feature set. AMD may have something similar, but I haven't looked into it.

 

You also need to make sure that the motherboard you select supports virtualization (AMD-Vi/VT-d). This is largely a matter of picking which CPU you want (so you know the correct CPU socket type), then picking your favorite motherboard manufacturer and finding a board that supports the virtualization technology you need. Then you want to make sure you stock it with enough RAM to support multiple VMs (or at least make sure you have the option to expand as your needs increase - i.e. if you have 4 slots, don't fill them with 1GB DIMMs; start with 2x4GB or 2x8GB DIMMs so you can expand down the road).
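Once you have hardware in hand (or want to check what you already own), standard Linux commands will confirm both the CPU flags and whether the IOMMU actually came up. These are generic checks, not UnRAID-specific, and both features must also be enabled in the BIOS:

    egrep -o 'vmx|svm' /proc/cpuinfo | sort -u   # vmx = Intel VT-x, svm = AMD-V
    dmesg | grep -i -e DMAR -e IOMMU             # evidence of VT-d / AMD-Vi being active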

 

I believe everything above is accurate, however I may be mistaken on some of the AMD requirements. If so, I am sure someone will correct me.

 

Lastly, while I've made reference to UnRAID blowing up and other nasty things happening to highlight my points, the likelihood of this is pretty small (I think). The way we run things now (i.e. UnRAID with plugins) works for thousands of people without issue, and likely will continue to do so for years to come - I don't want to leave anyone with the impression they are running around with a ticking time bomb just waiting for it to explode.

 

Hopefully this was helpful.

 

Link to comment

Bkastner.... thank you very much for your post!!  :) :) Very informative and helpful. It really sheds light, as I was hoping.

 

It is interesting how similar our experiences with unRAID are; we really are the audience Limetech targeted.

 

A few questions... I read that you can virtualize your HTPCs with something like OpenELEC... How would this work? I think this feature is where VMs would take things to another level.

 

Many thanks again!

 

H.

Link to comment

FWIU, the importance of "directed I/O" (IOMMU, etc.) comes into play if you need to pass host physical resources to a guest machine. With the idea of unRAID with virtualization baked in on the host machine, it opens the possibility that passing physical I/O between host and guest would not be needed. To me this lessens the "stringent" requirements for the host, as long as the host's CPU/mobo/BIOS supports "VT-x" (in Intel speak) or, more basically, "virtualization technology."

 

This is a long way around to saying it would allow older hardware to be used, e.g. some Pentium Ds!

 

Here's a wonderful page on Intel's processors: http://ark.intel.com/products/virtualizationtechnology

Link to comment

FWIU, the importance of "directed I/O" (IOMMU, etc.) comes into play if you need to pass host physical resources to a guest machine. With the idea of unRAID with virtualization baked in on the host machine, it opens the possibility that passing physical I/O between host and guest would not be needed.

 

So baked-in virtualization in unRAID means that the unRAID portion (pooling the drives and the shares) runs as part of the OS... not virtualized? Like it is today...?

 

I have been mulling this over... So if my above statement is correct, within the server I would create several VMs... Would I create a VM strictly for Sab, SB and CP? Then would I create one for Maraschino (currently my Maraschino plugin's Python fights with the SB and CP Python) and perhaps throw MySQL onto that VM? If the above is correct, would each of these VMs have its own internal IP address when it's time to do the settings in Maraschino, Sab, etc.?

 

Now, what OS is each of these VMs? I assume not a full-blown OpenSUSE or Ubuntu... Is this what Arch Linux is, a bare-bones OS? CentOS? Is this the part where grumpy says, just go "yum install sickbeard" (I know the syntax is not correct, but it's only for discussion)?

 

To me (as I mentioned before) the Holy Grail is virtualizing the HTPCs... What type of user experience would one expect from this... say I have a powerful i7 or i5 unRAID box and VMs: how would the speed/user experience compare to my current OpenELEC/centralized MySQL running on $280 Atom/ION hardware? How are the USB IR remotes handled? Can you assign individual USB ports on the mobo to specific VMs? Is this the passthrough everyone refers to?

 

I am going to search for some type of newbie videos about VMs. Curious to see how it all gets managed...

 

Thank you guys.

 

In a year I hope I run into this thread again, and shake my head about how ignorant and naive I was...

 

Link to comment

To me (as I mentioned before) the Holy Grail is virtualizing the HTPCs... What type of user experience would one expect from this... say I have a powerful i7 or i5 unRAID box and VMs: how would the speed/user experience compare to my current OpenELEC/centralized MySQL running on $280 Atom/ION hardware? How are the USB IR remotes handled? Can you assign individual USB ports on the mobo to specific VMs? Is this the passthrough everyone refers to?

 

This requires PCI passthrough and an IOMMU. I seem to recall from one of the threads that you can drive HDMI over a long cable run, or over IP, to the TV/monitor. That's pretty heady stuff. I think I saw in your sig that you have a big Norco case, and I'm sure you're not going to put it in the "home theatre", so the A/V over IP would be a good fit.

 

Edit: To me, I'd just as soon plop down a little box like my Pivos DS as route HDMI all over. But that's just me.

Link to comment

Well hernandito >> most of your post is right on target. A few things, however, need some corrections or explanation.

 

Now, what OS is each of these VMs? I assume not a full-blown OpenSUSE or Ubuntu... Is this what Arch Linux is, a bare-bones OS? CentOS? Is this the part where grumpy says, just go "yum install sickbeard" (I know the syntax is not correct, but it's only for discussion)?

The VMs can be anything you want, and yes, you can have a full-blown distro installed in a VM if needed.

For Sab/SB/CP you probably would not waste all that space and those resources; you'd just drop a server version of your preferred distro in the VM.

 

To me (as I mentioned before) the Holy Grail is virtualizing the HTPCs...

Why would you virtualize an HTPC on a server?

#1. You still need a computer at the point of consumption (the TV) to use it,

plus all the issues with controlling it via remote, etc.

 

The only useful thing I can see in virtualizing an HTPC setup is doing automatic recordings from the server, so you do not need to keep the point-of-consumption PC on - but not actually using it as an HTPC.

 

And no - you can connect a USB port/device to a VM as needed without passing it through fully.

Passthrough usually means completely disconnecting the controller/device from the host and allowing the VM full hardware-level access to the controller/device.
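To illustrate the difference on a libvirt/KVM host (a sketch; the vendor/product IDs are placeholders you would take from lsusb for your own IR receiver):

    lsusb    # find the remote's ID, e.g. "ID 1784:0008"

    cat > ir-remote.xml <<'EOF'
    <hostdev mode='subsystem' type='usb'>
      <source>
        <vendor id='0x1784'/>
        <product id='0x0008'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device htpc-vm ir-remote.xml   # hands just this device to the VM,
                                                # not the whole USB controller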


Link to comment

Well hernandito >> most of your post is right on target. A few things, however, need some corrections or explanation.

 

Now, what OS is each of these VMs? I assume not a full-blown OpenSUSE or Ubuntu... Is this what Arch Linux is, a bare-bones OS? CentOS? Is this the part where grumpy says, just go "yum install sickbeard" (I know the syntax is not correct, but it's only for discussion)?

The VMs can be anything you want, and yes, you can have a full-blown distro installed in a VM if needed.

For Sab/SB/CP you probably would not waste all that space and those resources; you'd just drop a server version of your preferred distro in the VM.

 

To me (as I mentioned before) the Holy Grail is virtualizing the HTPCs...

Why would you virtualize an HTPC on a server?

#1. You still need a computer at the point of consumption (the TV) to use it,

plus all the issues with controlling it via remote, etc.

 

The only useful thing I can see in virtualizing an HTPC setup is doing automatic recordings from the server, so you do not need to keep the point-of-consumption PC on - but not actually using it as an HTPC.

 

And no - you can connect a USB port/device to a VM as needed without passing it through fully.

Passthrough usually means completely disconnecting the controller/device from the host and allowing the VM full hardware-level access to the controller/device.

 

Thanks Vl... This is one clarification I needed... I see the discussions about the different distros, etc.; it is hard for me to distinguish which would be the full distro vs. what would be used to run, say, the Sab and CP VM... So, what would you run Sab and CP in? MySQL?

 

I read posts before about virtualizing the HTPC... about how $20 and cabling gives you a new HTPC. I was very intrigued by that.

 

 

Link to comment

If you already have the infrastructure to support a remote HTPC, then you tend to have a rack with all the kit in it away from the cinema room, and you can have some sort of automation setup to deal with it. In this case, virtualising the media player is a nice option.

 

USB device passthrough is technically possible but seems quite underdeveloped as far as I can see. There is some info on the Xen page and the KVM page to kick off further reading.

Link to comment

Well, as of now I think I would use openSUSE in server mode (no desktop, or a minimal desktop),

but you can use the Ubuntu Server edition as well.

There are plenty of how-to forums on setting it up for SB/CP, etc.

Same for MySQL (just an FYI, but the MySQL community fork is now called MariaDB).
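The usual recipe in those how-tos boils down to a few commands in the server guest - a sketch for a Debian/Ubuntu-style VM; package names differ per distro:

    sudo apt-get install python git                       # runtime for SickBeard
    git clone https://github.com/midgetspy/Sick-Beard.git
    cd Sick-Beard && python SickBeard.py                  # web UI on port 8081 by default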

 

As for virtualizing the HTPC, I don't know; it seems kind of convoluted to me, and I do not believe the implied simplicity.

about how $20 and cabling gives you a new HTPC

I would like to see this.

 

#1. What exactly do you pay $20 for?

#2. Cabling can cost you over $20 by itself.

Let's see: I have a TV on the second floor; that is over 100 feet of cable for HDMI alone. You also need an IR extender for the remote, etc., etc.

Last time I priced all that, it came out to well over $300.

For that I can have a full-fledged PC running next to my TV, connected to my fileserver/media server in the basement, with no issues.

Heck, for a couple hundred more I can even get a slimline, silent, solid-state PC with no fans or anything.

Link to comment

FWIU, the importance of "directed I/O" (IOMMU, etc.) comes into play if you need to pass host physical resources to a guest machine. With the idea of unRAID with virtualization baked in on the host machine, it opens the possibility that passing physical I/O between host and guest would not be needed. To me this lessens the "stringent" requirements for the host, as long as the host's CPU/mobo/BIOS supports "VT-x" (in Intel speak) or, more basically, "virtualization technology."

 

This is a long way around to saying it would allow older hardware to be used, e.g. some Pentium Ds!

 

Here's a wonderful page on Intel's processors: http://ark.intel.com/products/virtualizationtechnology

 

You are correct - sorry, I should have clarified that. Realistically, any Intel CPU/motherboard that supports VT-x will support virtualization. I had already mentally taken it to the next step: if I am rebuilding my UnRAID server with a new CPU/motherboard, I want to make sure my investment will support everything I want now, and may want in the future - hence the VT-d requirement and passthrough.

 

 

Link to comment

You are correct - sorry, I should have clarified that. Realistically, any Intel CPU/motherboard that supports VT-x will support virtualization. I had already mentally taken it to the next step: if I am rebuilding my UnRAID server with a new CPU/motherboard, I want to make sure my investment will support everything I want now, and may want in the future - hence the VT-d requirement and passthrough.

 

Yes, anyone looking at new hardware should/would want VT-d. One of the current strengths of unRAID is the "ability" to leverage older hardware...

 

"Honey, where's that dusty old Gateway XP system the kids got all the spyware on; the one I wouldn't let you give to Goodwill?"  ;)

 

I'd hate to see that feature go away. I don't think it ever will, as it brings new users into the fold and eases adding a second system.

Link to comment

You are correct - sorry, I should have clarified that. Realistically, any Intel CPU/motherboard that supports VT-x will support virtualization. I had already mentally taken it to the next step: if I am rebuilding my UnRAID server with a new CPU/motherboard, I want to make sure my investment will support everything I want now, and may want in the future - hence the VT-d requirement and passthrough.

 

Yes, anyone looking at new hardware should/would want VT-d. One of the current strengths of unRAID is the "ability" to leverage older hardware...

 

"Honey, where's that dusty old Gateway XP system the kids got all the spyware on; the one I wouldn't let you give to Goodwill?"  ;)

 

I'd hate to see that feature go away. I don't think it ever will, as it brings new users into the fold and eases adding a second system.

 

I don't think it will go away.

The original unRaid will still be around; the unRaid Geek edition is for people who are planning to build a beefier server with virtualization in mind, and/or who want a fully up-to-date distro so they can use off-the-shelf applications instead of custom plug-in adaptations.

Most current Linux distros can still run on most older hardware, especially if installed without a desktop GUI.

 

Link to comment

FWIU, the importance of "directed I/O" (IOMMU, etc.) comes into play if you need to pass host physical resources to a guest machine. With the idea of unRAID with virtualization baked in on the host machine, it opens the possibility that passing physical I/O between host and guest would not be needed.

 

So baked-in virtualization in unRAID means that the unRAID portion (pooling the drives and the shares) runs as part of the OS... not virtualized? Like it is today...?

 

I have been mulling this over... So if my above statement is correct, within the server I would create several VMs... Would I create a VM strictly for Sab, SB and CP? Then would I create one for Maraschino (currently my Maraschino plugin's Python fights with the SB and CP Python) and perhaps throw MySQL onto that VM? If the above is correct, would each of these VMs have its own internal IP address when it's time to do the settings in Maraschino, Sab, etc.?

 

Now, what OS is each of these VMs? I assume not a full-blown OpenSUSE or Ubuntu... Is this what Arch Linux is, a bare-bones OS? CentOS? Is this the part where grumpy says, just go "yum install sickbeard" (I know the syntax is not correct, but it's only for discussion)?

 

To me (as I mentioned before) the Holy Grail is virtualizing the HTPCs... What type of user experience would one expect from this... say I have a powerful i7 or i5 unRAID box and VMs: how would the speed/user experience compare to my current OpenELEC/centralized MySQL running on $280 Atom/ION hardware? How are the USB IR remotes handled? Can you assign individual USB ports on the mobo to specific VMs? Is this the passthrough everyone refers to?

 

I am going to search for some type of newbie videos about VMs. Curious to see how it all gets managed...

 

Thank you guys.

 

In a year I hope I run into this thread again, and shake my head about how ignorant and naive I was...

 

As I think others have mentioned, your Holy Grail is likely overly convoluted, and I am not sure of the true value.

 

To get HDMI out to each TV, you either need to run a long HDMI cable or do HDMI over Ethernet; however, if you already have Ethernet out to the TV, you will likely get a better experience with a local endpoint device. I've had an HDMI-over-Ethernet setup previously, but had issues with flickering, and it just wasn't seamless.

 

Currently I have Ethernet throughout my house, so I am playing around with different endpoints:

 

Raspberry Pi - You can get a base unit for $35, though you really need a fast SDHC card and USB3 key to make it work, plus you will likely want a housing and power cable, plus you need an IR receiver and remote - so it's really about $100 to set up. I've been able to play my 1080p movies fairly well with this, though I get occasional pauses. I do find the menus really slow, and I have had issues with the Pi rebooting periodically.

 

Pivos Xios DS - This is an XBMC-approved device, roughly $90-110 depending on where you get it. It definitely runs smoother than the Pi and has a better overall experience. I don't really like the remote, as it's cheap plastic crap, but it does the job. My bigger issue is that XBMC is at 12.3, while the Pivos-supported version of XBMC is still 12.0 - which is over a year old (the latest Pivos XBMC update is from May 2013). For an officially supported device, I am disappointed that the software is so outdated.

 

Intel NUC - This is a proper Intel computer (mine is a Celeron 847) that you need to buy an mSATA drive and RAM for. Mine cost about $400 total, but it can run Windows or OpenElec, which I have on it. Because it's a full PC, it is nice and fast, and it works the best of the endpoint units I've tried.

 

I also have a full-blown PC on my primary TV (it's actually in the basement, with a 50-foot HDMI connection from the receiver to the TV in the living room). I had been running Windows 7/XBMC, but just replaced that with OpenElec, as I had gotten to a point where I needed to reboot Windows periodically.

 

The local endpoints are far easier to set up, I would think, and far easier to troubleshoot than a fully centralized solution (if that's even possible).

 

Personally, I see a lot of value in consolidating the server-side details and centralizing common components (such as the XBMC database), but I am not sure whether virtualizing the endpoints is worthwhile.

 

As I write this, there is one solution that comes to mind... Plex. This is another media center solution that can run in place of, or beside, XBMC. Many newer TVs can install Plex, which would mean you wouldn't need a local endpoint. As I understand it, pretty much all the processing is done on the Plex server (which can run on UnRAID), and you can share your media library with your TV, tablet and smartphone, as well as with friends. If you have newer TVs that support this, it may be a good option for you, but other than that I'd stick with real endpoints.

Link to comment

To get HDMI out to each TV, you either need to run a long HDMI cable or do HDMI over Ethernet; however, if you already have Ethernet out to the TV, you will likely get a better experience with a local endpoint device. I've had an HDMI-over-Ethernet setup previously, but had issues with flickering, and it just wasn't seamless.

 

I have had great success with HDMI over Ethernet. My server boots up and starts two XBMC VMs automatically. I can play 1080p with HD audio, no problems. I've also tested it with Windows 7/8 running all kinds of benchmarks/games without an issue.

 

Did you do this utilizing virtualization? Assuming you did, were you passing through an AMD / nVidia (Quadro series) video card?

 

Did you enable hardware video acceleration on the card, or were you using the VMware (if ESXi) or Cirrus (if Xen / XenServer) video drivers (which it will use by default)?

 

A simple way to tell what driver you are using in XBMC: go to System, System Info, Video Hardware. Also, after starting a video, hit "o" on your keyboard / Info on your remote, and you should see VAAPI for Intel, VDPAU for nVidia, and XVBA or VDPAU for AMD. If you don't see that, you aren't using the hardware video acceleration on your video card, and you will notice that your CPU utilization is very high (it's on that same screen).

 

Most people don't know that when passing through a video card in ESXi / Xen / XenServer, by default it won't use the hardware video acceleration on the video card. The CPU then has to do all the processing, and depending on the CPU, that usually means around 70-80% CPU utilization.

 

If you enable/configure/install your XBMC (and XBMC VMs) to use hardware video acceleration on the video card, CPU utilization drops to around 10%, and on newer CPUs it should be 5% or lower.
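A quick way to verify from inside the guest that the acceleration stack is actually present (standard Linux tools, not XBMC-specific):

    vainfo      # VAAPI (Intel): should list supported profiles rather than error out
    vdpauinfo   # VDPAU (nVidia): same idea
    top         # watch CPU while playing; a high % means you are software-decoding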

Link to comment
