Dump ESXi?


Recommended Posts

  • 1 month later...

I am planning to upgrade to unRAID 6.0 this weekend.

 

I don't think I am going to dump ESXi.

 

I have a few VMs on ESXi: unRAID, Plex Media Server, an Ubuntu VM, and a Windows 7 VM.

 

What would the performance difference be if I ran Plex Media Server in a Docker container on unRAID instead of in a VM on ESXi?

 

For example:

 

VM Unraid -> Docker -> Plex Server

VM Ubuntu -> Plex Server

 

I think "VM Ubuntu - > Plex Server" much faster performance.

Link to comment

If these USB reset issues are not resolved (and I don't expect them to be anytime soon, since LT doesn't officially support virtualized unRAID installations), that may be reason enough to dump ESXi.  In fact it's to the point that I'm going to start testing unRAID in Hyper-V and Proxmox this week.

Link to comment

Have you noticed the webgui is painfully slow when the array is stopped, but manageable when it's started?

 

Hadn't really noticed this, and not sure I'd call it "painfully slow" => but I did just try it, and indeed the response is notably slower when the array is stopped.  Switching between tabs can take a couple seconds, whereas when the array is started it's almost instantaneous; and the initial response to //Tower can be even slower if it's stopped.  This is on a native UnRAID -- not virtualized -- but I assume this is what you're referring to.  Is it even slower under ESXi?
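 

If you want to put a number on it, something like this from any machine on the LAN gives a rough response time (assuming the default //Tower hostname; adjust the URL for your server):

# Time one request to the Main tab of the webgui.
time curl -s -o /dev/null http://tower/Main

Run it once with the array stopped and once with it started and compare.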

 

Link to comment

Have you noticed the webgui is painfully slow when the array is stopped, but manageable when it's started?

 

Hadn't really noticed this, and not sure I'd call it "painfully slow" => but I did just try it, and indeed the response is notably slower when the array is stopped.  Switching between tabs can take a couple seconds, whereas when the array is started it's almost instantaneous; and the initial response to //Tower can be even slower if it's stopped.  This is on a native UnRAID -- not virtualized -- but I assume this is what you're referring to.  Is it even slower under ESXi?

 

 

I'm on ESXi via PLOP. And it can be 15-30 seconds with the array stopped.

Link to comment

I'm on ESXi via PLOP. And it can be 15-30 seconds with the array stopped.

 

Ahh ... THAT indeed qualifies as "... painfully slow ..."  :)

 

Does it have the same behavior if you're using a VMDK to boot UnRAID?  [This, of course, has its own disadvantages -- notably the nifty new in-GUI updates won't work correctly]
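 

For reference -- and with the caveat that this is just a sketch, with an illustrative datastore path -- creating the blank boot disk for the VMDK method on the ESXi host looks roughly like this; it still needs syslinux installed and the bzimage/bzroot files copied over from the flash (typically via a helper VM) before it will actually boot UnRAID:

# Create a small thin-provisioned virtual disk on the datastore.
# This is only step one -- it must still be made bootable afterwards.
vmkfstools -c 1G -d thin /vmfs/volumes/datastore1/unRAID/unraid_boot.vmdk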

 

Link to comment

I'm on ESXi via PLOP. And it can be 15-30 seconds with the array stopped.

 

Ahh ... THAT indeed qualifies as "... painfully slow ..."  :)

 

Does it have the same behavior if you're using a VMDK to boot UnRAID?  [This, of course, has its own disadvantages -- notably the nifty new in-GUI updates won't work correctly]

 

Or I'm just impatient.......  I'll try with 6.0.0 tonight and let you know.

Link to comment

Just to chime in: unRAID 6 under ESXi 5.5 is just as good as previous versions of unRAID and ESXi.  And I haven't noticed an unusually slow webgui (I am using the vmdk boot option, but I doubt that would matter once the boot has completed).

 

I only run one Docker under unRAID and no VMs inside it (a VM under a VM would be unnecessary, obviously), plus 3 VMs under ESXi in addition to unRAID (Windows 7, Ubuntu 14.04 server, and the CyberPower UPS appliance).  Not sure if it was necessary for me to move that unRAID plugin over to a Docker, but I haven't noticed any drawbacks.

 

For me, the only reason I'd drop ESXi and go 100% unRAID would be to better leverage the 2 TB Red and 2 SSDs I have as datastore drives.  I could then do an SSD cache pool with 3 SSDs (I'm currently using one with unRAID) and add the 2 TB drive to the unRAID array.  But, as much as I like to tinker, ESXi has been ROCK SOLID, so I really can't justify messing with it.

Link to comment

... ESXi has been ROCK SOLID, so I really can't justify messing with this.

 

Definitely something to be said for "If it ain't broke, don't fix it"  :)

 

While I'll probably "tinker" a bit with v6 VM's, I must admit it seems like ESXi has some notable advantages => the most compelling is simply that you don't have to shut down all your VM's to do anything with UnRAID [reboot it; reconfigure it; etc.].  ESXi is absolutely rock solid ... I know folks who haven't rebooted the hypervisor in over 2 years!!

 

While KVM is a nice hypervisor, it is still tied to the underlying Linux OS, and UnRAID rides on this same OS ... so to reboot UnRAID you have to stop all your VM's.  This can be very inconvenient if, for example, you're using something like pfSense in a VM to control your network, or have an HTPC that's acting as your PVR.  With ESXi all of those activities could continue while you made changes to UnRAID.
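 

To make that concrete: rebooting the host under KVM means first doing something like this (guest names come from your own libvirt setup), whereas under ESXi the other guests just keep running:

# Gracefully shut down every running KVM guest before rebooting the host.
for vm in $(virsh list --name); do
  virsh shutdown "$vm"
done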

 

It's not at all clear which direction is "better" => I think the answer is that neither is; they're just "different", with their own sets of pros/cons.  It'd be interesting to do some performance comparisons ... I may do that later this year, after I build the new high-performance server I'm planning for Nov/Dec.

 

 

Link to comment

unRAID 6.0 on ESXi 6.0 is running just fine.  No need to change anything for me.

 

Without pre-judging until I've actually tried a couple of variations, I will say that's the combination where I think I'll end up.  Several folks running it have indicated that Dockers run just fine in the virtualized UnRAID; and it lets ALL of your operating environments - including UnRAID - be completely independent of each other.

 

Link to comment

unRAID 6.0 on ESXi 6.0 is running just fine.  No need to change anything for me.

 

Without pre-judging until I've actually tried a couple of variations, I will say that's the combination where I think I'll end up.  Several folks running it have indicated that Dockers run just fine in the virtualized UnRAID; and it lets ALL of your operating environments - including UnRAID - be completely independent of each other.

 

I forget if I posted this here or in a different forum, but that's what I'm running as well.  All is great except for that USB warning.

Link to comment

unRAID 6.0 on ESXi 6.0 is running just fine.  No need to change anything for me.

 

Without pre-judging until I've actually tried a couple of variations, I will say that's the combination where I think I'll end up.  Several folks running it have indicated that Dockers run just fine in the virtualized UnRAID; and it lets ALL of your operating environments - including UnRAID - be completely independent of each other.

 

 

Yep...they do!  I'm running Emby, CrashPlan and PlexWatch in Docker.  And I run Plex as a plugin; I just never switched it to a Docker since it works perfectly as-is.

 

And a big THANK YOU to Zeron for keeping up with VMTools!

Link to comment

From the feedback I've seen from folks running UnRAID on ESXi, v6 runs quite nicely in this environment, and Dockers run perfectly as well ==> so I see NO reason to switch from ESXi.

 

While I don't personally run UnRAID in a virtualized environment [I run a bare-metal UnRAID and have a separate system for all my VM's], I DO plan to move to this when I build my next system.  I agree with the comments above that ESXi has more flexibility in VM configuration and management; and it's certainly nice to be able to reboot ANY of your systems (including UnRAID) without impacting the others.

 

As for the comment re: no fault tolerance for your VM datastore => that's easily resolved with a hardware RAID controller and a RAID-1 array  :)  Your UnRAID VM can provide a larger array for your data, so you don't need a high-end controller with a lot of ports.

 

I have an unRAID NFS share as a datastore; works fine.

Link to comment

As for the comment re: no fault tolerance for your VM datastore => that's easily resolved with a hardware RAID controller and a RAID-1 array  :)  Your UnRAID VM can provide a larger array for your data, so you don't need a high-end controller with a lot of ports.

 

Or a fibre channel SAN!  :)

 

[image: the poster's fibre channel SAN setup]

Link to comment

I have an unRAID NFS share as a datastore; works fine.

 

I'm running ESXi 5.5 with unRAID 5 and a ZFS VM to provide datastore storage.  Set it up a few years ago when SSDs were painfully expensive.

 

Given how cheap SSDs are now, I'm considering ditching the Napp-it install and throwing a couple of biggish SSDs into a cache pool... using that as an NFS datastore... but I hadn't seen anyone on the forums mention doing that successfully (until now).

 

What is the speed/stability like for your NFS shares?  I'm hoping for no re-emergence of the old 'stale file handle' NFS issue in unRAID 6?

 

The other option is just to assign the HBA to ESXi instead of the ZFS VM, and use the SSDs directly in ESXi.  I just like leaving some of the SSD space in unRAID as cache.

Link to comment

I am using SSDs natively under ESXi for my main VMs.  But since they were ridiculously expensive some years ago (sounds familiar), I made an NFS share on my unRAID system and used that for a couple of VMs.  My vCenter, for example, runs off the NFS share, same as some testing stuff, and I just made a Windows Server 2012 VM on it yesterday.  Granted, booting and such is a little slow, but most of what a system does happens in memory, so for a couple of them it is fine.

 

However... I can now get a 500 GB SSD for 200 euro, so I will probably get one and use that... The SSD I am using now in ESXi I will most probably pull out and use in unRAID as a btrfs pool (although I have a 200 GB SSD for that now, and it is big enough).

 

I have never had any issues with the NFS datastore.  I will keep it in any case, if only as an easy parking area for VMs that are not in use.
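 

For anyone wanting to replicate this, mounting an unRAID NFS export as a datastore from the ESXi shell is roughly as follows (hostname, share path, and volume name are illustrative):

# Mount an unRAID user share, exported over NFS, as an ESXi datastore.
esxcli storage nfs add -H tower -s /mnt/user/vmstore -v unraid-nfs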

Link to comment
  • 1 month later...

I'm late to the discussion, but I'd point out an advantage I've found with Windows VMs under Unraid vs. ESXi.  In ESXi I was never able to get HDCP working correctly when passing through a PCI video card; the only way I could get things to work was to turn off hardware video acceleration.  So far, in my testing of Unraid Windows VMs, I have no such issue.  I've successfully set up 3 VMs with video and USB passed through, and did it all in less than a quarter of the time it took me to set up something similar in ESXi (which could be because I understand more now than when I set up ESXi).

 

Hardware support also seems much more flexible with Unraid's KVM, including much better passthrough of onboard USB based on address.  ESXi made it easier to identify devices to pass through, but if it didn't recognize a device you couldn't do anything with it.  So far Unraid's KVM solution lets me pick any PCI device to pass through.  Shockingly, I have a choice of 5 distinct USB PCI buses I can pass through on my motherboard; no more extra USB3 PCI card for me in my new configuration.
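 

For anyone trying the same thing, here's a sketch of how I'd identify the controllers (the ID shown is a placeholder; as I understand it, unRAID 6 currently stubs devices for passthrough by listing their IDs on the kernel append line in /boot/syslinux/syslinux.cfg):

# List USB controllers with their [vendor:device] IDs.
lspci -nn | grep -i usb

# Then stub the chosen controller so KVM can claim it, e.g. (placeholder ID):
#   append pci-stub.ids=8086:8c31 initrd=/bzroot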

 

ESXi does give you much more control over the system from the GUI, which I will miss.  I'm going to try out ESXi 6 once I finish moving things off my current main Unraid server.  If it solves my issue with HDCP I may go back, but for now Unraid KVM seems to be the better solution for me.

 

Brian

Link to comment
  • 2 weeks later...

Going to chime in here..

 

In 2011-2012, I purchased the hardware for a white-box ESXi build for the home (Intel E3-1230v2, Supermicro X9SCM-IIF, 16 GB, a few IBM M1015s).  I actually left the unRAID platform for FlexRAID (and then my own Debian + SnapRAID solution).  It's been running well, but with my move back to unRAID, I'm actually going to try running unRAID bare metal and use KVM or Xen if I need anything extra.  I only had a few VMs: a Mac one (which may be a challenge under KVM), another Debian one, a virtual appliance for a CyberPower UPS, a Windows VM, etc.

 

With my previous setup of Debian + SnapRAID + mhddfs (a pooling solution), I ran multiple Dockers (so ESXi -> Debian -> Docker) without any issue for over a year.  There is absolutely no problem running ESXi -> VM -> Dockers.  Heck, we do it in our production environment at work (we have virtualized Docker servers in production running smaller applications; it saves a lot of resources and money).

 

For the comments that mentioned VMs are easier than Dockers, I would beg to differ.  I would almost always choose to run an application in a Docker unless it was resource intensive, in which case I might use a dedicated VM.  I don't see the need to run dedicated VMs for things like Plex, CrashPlan, Sonarr, etc.  These are much simpler to run, secure, and update through Docker images.
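 

The update flow those images give you is essentially this (container and image names are illustrative; the unRAID webgui wraps the same steps for you):

# Pull the new image, then recreate the container from it.
docker pull linuxserver/plex
docker stop plex && docker rm plex
# ...then re-run the original 'docker run' command; settings persist
# because /config lives on the host, not inside the container.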

 

Question...

If I'm running bare metal unRAID, what would be the migration path from bare metal back to ESXi (should I choose to go back)?  I'm figuring I may be able to use PLOP with the existing USB stick and that's it.  Hoping someone can confirm.

 

One thing I like about unRAID: no messy re-installs.  Very quick to pick up and go.  I've had to rebuild FlexRAID / Debian + SnapRAID over the past few years, and even with the advent of Docker, it still takes some time.

Link to comment

Question...

If I'm running bare metal unRAID, what would be the migration path from bare metal back to ESXi (should I choose to go back)?  I'm figuring I may be able to use PLOP with the existing USB stick and that's it.  Hoping someone can confirm.

You can go with plopkexec as a faster option than the original PLOP.

I'm using the VMDK method and it works just fine.

If you want to go the ESXi route, keep in mind the USB reset bug with unRAID 6; more info and possible workarounds here: http://lime-technology.com/forum/index.php?topic=40605.0

Link to comment
