Put VMs on SSD array?


Recommended Posts

I'm paranoid about cache drive corruption or just a mistake hosing lots of VMs. Is it crazy to put my system/libvirt share on an SSD that is in my array? Ditto for my docker.img, I guess.

 

Reads should happen at SSD speeds, and writes at HD speeds (since my parity drive is a hard drive). But since most disk I/O is reads, maybe it will be reasonably fast.

 

My parity drive will be running nonstop. Is that a bad thing?

Link to comment

Both the docker and libvirt images can be easily recreated, so I don't see the point in having them on the array. For dockers, the important folder is appdata, and that can be backed up, e.g., with CA. For VMs it's the vdisks; you could have them on the array, but write performance will be considerably degraded, so it's easier to make regular backups after shutting down the VMs.
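
Something along these lines works for the vdisk part; a rough sketch, assuming a VM named "Windows10" and stock unRAID share paths (both are examples, adjust for your setup):

    virsh shutdown Windows10                       # ask the guest to power off cleanly
    while virsh domstate Windows10 | grep -q running; do sleep 5; done
    mkdir -p /mnt/user/backups/Windows10
    rsync -a /mnt/user/domains/Windows10/vdisk1.img /mnt/user/backups/Windows10/
    virsh start Windows10                          # bring the VM back up afterwards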

Link to comment

Exactly, don't put them on the array.

I had my VMs on cache, and then for one of them I attached a second large vdisk, stored on the array, that I installed games onto.

Performance was not good, so I took that vdisk off the array and used an unassigned disk to store my game data, while keeping the VM's boot disk on the cache SSD.

I back up the VM images off the cache in case of a problem.

 

 

Link to comment

Yeah, I was talking about putting an SSD into the parity-protected array, then putting the vdisks on that drive. Has anyone done that?

The big reason for not doing this is that writes will be very slow, so performance will be poor. This is because under unRAID each 'write' operation to an array device actually involves four I/O operations (reads of both the parity and data drives, followed by writes to both the data and parity drives). In such a scenario an SSD is likely to be very little better than a HDD.
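
To make that concrete: parity is the XOR of the corresponding sectors on all data drives, so updating one sector means reading the old data and the old parity first, then writing the new data and the new parity. A toy byte-level sketch with made-up values:

    old_data=0x5A; new_data=0x3C; old_parity=0xF0         # made-up example bytes
    new_parity=$(( old_parity ^ old_data ^ new_data ))    # needs the two reads first
    printf 'new parity byte: 0x%02X\n' "$new_parity"      # then the two writes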

Link to comment

The best performance would be to have an SSD as an unassigned drive outside of the array, then to pass through the physical disk (or partitions of it) to the VM, therefore not using a vdisk image.

For example:

/dev/disk/by-id/ata-ST31000528AS_6VP41EPS

To protect your data, you can use any normal backup software that you would use for a bare-metal machine and store the backups on the array.
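
Roughly like this with virsh; the VM name "Windows10" and the "vdb" target are placeholders, and on unRAID you would normally add the equivalent <disk> entry through the VM's XML instead:

    ls -l /dev/disk/by-id/                         # find the disk's stable ID
    virsh attach-disk Windows10 \
        /dev/disk/by-id/ata-ST31000528AS_6VP41EPS vdb --config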

Link to comment
This is because under unRAID each 'write' operation to an array device actually involves four I/O operations (reads of both the parity and data drives, followed by writes to both the data and parity drives).

 

Ouch. That does sound bad.

 

The best performance would be to have an SSD as an unassigned drive outside of the array, then to pass through the physical disk (or partitions of it) to the VM, therefore not using a vdisk image.

 

Yes, in theory. But SSDs being as fast as they are, I doubt passthrough will make a noticeable difference. Seems like with 6.2, LT is defaulting to putting vdisk images on the cache drive. I'm going to try going with the flow on this one.

Link to comment

For the above listed reasons I would also strongly advise you not to put the VMs onto the array, purely from a performance point of view.

 

Since I've had issues with VMs getting hosed in the past, I take regular backups (i.e., shut down the VM, hop into MC via SSH, then just copy the vdisk into my backup share on the array, and toss the VM XML into a text file). It's surprisingly fast and gives me peace of mind (especially since my primary VMs are running Win10).
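
For the XML half of that routine, something like this does it (the VM name and backup path are examples):

    virsh dumpxml Windows10 > /mnt/user/backup/Windows10.xml   # save the definition
    # restore later, if needed, with: virsh define /mnt/user/backup/Windows10.xml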

 

Someone brought up using Unassigned Devices and attaching an SSD there for gaming. I'm wondering if I should scrap my btrfs SSD cache pool (250GBx2) and make one drive a dedicated cache and the other a games library share. I ask because load times from my HDD games library on the array are pretty horrendous. What do you guys think?

Link to comment

What I do is have a 50GB vdisk for my Windows VM on the SSD cache, with just the OS installed on it. That way, if the VM gets hosed I can restore my backup, as you do.

But I attach an unassigned 1TB as a secondary drive (D: drive), where I install all my games and programs. As the disk is passed through and not a vdisk, it is very fast, a lot faster than using an array share. In my opinion it's the best of both worlds, because I have my VMs on the SSD but keep the data separate, so my cache isn't full of VM images.

Link to comment

lol, almost the same, I keep my vdisks at 60GB on the cache.  :D

 

So I took the leap tonight and pulled the 2nd drive from my cache pool, following the directions in the manual (except I did power-downs before pulling cables). I found a command via Google (which I've already forgotten) to show me the re-balancing of the cache drive and waited for the balance to finish. Initially I was unclear what you meant about attaching the drive, but I sorted it out and found the hard drive pass-through post from April, plus what you wrote above. Got it added into both my VMs as a D: drive now.
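
(For anyone searching later: I can't be sure, but the command was most likely one of these, run against the cache mount point:)

    btrfs balance status /mnt/cache      # progress of a running balance
    btrfs filesystem show /mnt/cache     # confirm the pool is down to one device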

 

I did notice some wonkiness, though. With both VMs up and running, new files won't show up in the other VM until after a restart; i.e., VM1 copies file A to the D: drive, and VM2 won't see that file until after it restarts. I wonder if that has something to do with NTFS? Also, with this setup, will the same program on both VMs be able to run concurrently? I'm also very interested in how you set up your D: drive inside of Windows.

Link to comment

Oh, don't use the attached D: drive on two VMs running at the same time. I use it on more than one VM, but you should only have one accessing it at a time. It is not like a network share, so it will not work properly the way you want it to. You could create a second partition on the passed-through drive and attach that to the second VM to use at the same time, but then you would not be sharing data.

 

Edit: although I say the above, I haven't tried it, to be honest. I guess it could work so long as the VMs don't write to the same file at the same time; maybe they could both load the same software (i.e., a game) so long as game saves were stored to the C: drive. And to see a file straight away from the other VM, maybe map a network drive of a share of the D: drive on both VMs?

But I wouldn't really do it myself, as there are too many potential problems even if it did somehow work.

What I did try once, though, is running two VMs off the same vdisk at the same time. They both started fine, but they never did again, as they were both saving different Windows files etc. I kind of knew it wouldn't work, but I tried it anyway, lol.

 

I use the D: drive in my Win 10 VMs by installing the programs, games, etc. into a Programs folder I created on D:. Then on the other VM I install the same program onto the D: drive as well. That way both VMs have the registry entries and anything else that may be written to C: during the install that they may need for the program to run.

You can also change your desktop and documents locations etc. to the D: drive if you want both machines to have the same documents, but I don't do this myself.

I use it so I can save space across multiple Windows VMs by not having separate installs on each, as I rarely run two Windows VMs at once.

 

Link to comment

Ya, it got real weird real quick, lol. I got some really interesting permission errors from Windows that I'm not even going to bother trying to translate into English. I really did enjoy the speed of the drive attached this way, though. While it doesn't have redundancy of any kind, I think the OP would enjoy the speed: loading my games was easily 3-4x faster than from the HDD array share I've been using. Unfortunately, I don't think it will work for the way I intend to use it, though I can definitely see the perks of attaching it this way for a single user.

 

That being said, I think I'll just mount and share it via the Unassigned Devices plugin. I'm pretty sure the VMs will then be able to access it simultaneously, like a standard share. That, plus being able to access the drive via MC, and running XFS, has its perks. I think I'll do some read/write tests before I switch over, just to see how much of a performance hit there actually is. Thank you for all the info!
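
In case anyone repeats this, a crude version of those tests (the device and mount names are examples, and the dd line writes a throwaway 1GiB file):

    hdparm -t /dev/sdX                                   # raw sequential read speed
    dd if=/dev/zero of=/mnt/disks/ssd_games/testfile bs=1M count=1024 oflag=direct
    rm /mnt/disks/ssd_games/testfile                     # clean up the test file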

 

 

EDIT - I actually used to have the games library in a second vdisk that I kept on the cache drive. I was able to access that from both VMs concurrently without issues (if memory serves), and I was initially hoping that btrfs would RAID the 2x250GB SSDs into 1x500GB. I was going to go that route, but until unRAID adds that natively, I think I'll go the UD plugin route for the meantime.

Link to comment
