VM CPUs (cores?): how does it work, and how many should you check?



So, when creating a VM, you get an option to select the number of CPUs (CPU cores?) to associate with your VM. For example, I have a single i7 4770 CPU in my unRAID server, and when adding a VM I get 8 CPU checkboxes, of which I can check as many as I want (at least 1) to associate with the VM.

 

So, my questions are:

 

1. What happens if you create a VM and check all the CPU checkboxes? Would you get any errors or issues with your unRAID UI or with any plugins and/or Docker apps that are running? How does it all work?

2. What if you create 2 VMs, one using 6 CPUs (out of 8) and the second also using 6 (out of 8)? Would the 2 VMs have issues? Would any plugins, the unRAID UI, or Docker have any issues?

3. Unless it's already answered in 1 and 2, is there any specific reason why you shouldn't check all the CPU checkboxes when creating a VM?

 

Cheers.

Link to comment

CPUs = cores, not entire processor chips. Your i7 4770 is 4 physical cores with Hyper-Threading, which unRAID presents as 8 logical CPUs.
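
If you want to see which of the 8 checkboxes are HT siblings of each other, you can check from the unRAID console; a quick sketch (the exact pairing varies by CPU, so verify on your own box):

  lscpu -e                                                        # lists each logical CPU with the physical CORE it belongs to
  cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list  # prints cpu 0 and its HT sibling, e.g. "0,4"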

 

1. Your performance might be a little slower, because the first core (core 0) and its hyperthread are used by unRAID itself.

 

2. Since the cores can be shared, it shouldn't be a problem. Performance should be OK with Linux VMs, but with Windows VMs performance will be a lot slower and latency a lot higher.

 

3. Because you shouldn't use core 0 and its hyperthread if you need high performance. You can use core 0 and its hyperthread for NAS software in a VM (Xpenology) or for a server that doesn't need a lot of resources, but only if it's Linux or a similar OS (Windows likes to use CPU even when it's idle, and it has a lot of problems when the latency isn't perfect).

Also, on some CPUs your performance might actually be better with fewer cores; this mostly happens on Xeons because of their lower-clocked cores.

You can also try changing the isolated cores, emulator pins, and much more to improve performance and reduce latency! unRAID isn't perfect, but it's a lot of fun to try different settings to get the best performance out of your VMs!
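
For reference, core isolation is done by adding isolcpus to the append line of syslinux.cfg on the unRAID flash drive. A sketch, assuming you want to keep logical cores 2-7 away from unRAID and Docker; the core list is just an example:

  label unRAID OS
    kernel /bzimage
    append isolcpus=2-7 initrd=/bzroot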

 

Pro tip: If you need extremely low latency and don't mind losing a little CPU performance, you can pin the emulator to the hyperthreads. In the sketch below, "6-7,14-15" is just an example cpuset; replace it with your own HT cores. Don't forget to remove those cores from the normal VM CPU assignment and change threads to 1 in the CPU topology; the number of sockets and cores can stay at the default.

  <cputune>
    <emulatorpin cpuset='6-7,14-15'/> <!-- replace with your HT cores -->
  </cputune>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>

 

More detailed explanation of CPU pinning and assignment.


 

I'm still pretty new to this, but I'll share the following:

 

It all depends on what you are trying to do. Sometimes less is more, and sometimes more is more. On one machine (dual 4-core Xeon processors: 16 logical cores total), I have a non-Windows VM that is used as part of a cluster for distributed transcoding. If I utilize all 16 cores, it runs faster on batch/segmented transcoding than running only 14 and leaving core 0 and its HT for unRAID, etc. Even if I use emulator pins, overall performance degrades unless I assign the VM all 16 cores with no other caveats.

 

On another VM on the exact same machine, I use a non-Windows VM as a video workstation with an Nvidia GPU. Again, in this instance, more cores = more power for editing, since the program is CPU-intensive. Leaving any cores available (and/or isolated) for unRAID to manage its other business (including VM management) seems only to lower my overall performance and responsiveness. It shouldn't be that way, but all benchmark programs and my own testing confirm it. This is probably because I don't ask unRAID to do much else while I'm doing processor-intensive work.

 

BUT, if I try to assign all cores to a Windows VM with the intention of using the same graphics card, playback of videos is choppy. Even YouTube. In fact, the Windows VM with a graphics card seems to hate any HT cores I assign it. Video playback only smooths out when it is assigned only physical cores (including core 0). If I run the VM headless, video playback is handled by the processors and is just fine, albeit at lower quality (even with HT cores assigned). I had read here and in a few other places about Windows sometimes not liking HT cores with some video cards. (My problem is also partly due to the single-core clock speed of my processor getting maxed out by the GPU, which will be remedied soon. But it speaks to physical cores being "faster" and more responsive than HT cores in this circumstance.)

 

Anyway, my situation is a little different from most, it seems. The average person doesn't use a VM for the sole purpose of batch transcoding, needing all available CPU power for long stretches. If you're gaming and have a good GPU, then it seems you need just enough fast cores to feed the GPU and load files. If you're checking email, browsing the web, and writing documents, you can get away with far fewer, and have more VMs doing the same at the same time, on the same cores.

 

+1 on isolating cores. It does provide a *slight* bump to VMs, but I rarely use it, as I don't like waiting on the applications stuck outside the isolated cores to load/work, because they get less total CPU time (example: Plex transcoding takes 8-15 seconds to start a stream on 2 cores versus 2-3 seconds on 16 cores; I'm just too impatient). I don't run much in Docker on most machines, so I don't have tons of different apps demanding processor time.

 

It is also worth noting that I'm not doing any of this on consumer desktops.

 

 

 


Thanks for the answers, guys. I'll just clarify what my use cases are.

 

As mentioned, I have an unRAID system with an i7 4770 (8 cores as shown in unRAID).

 

1st use case (current setup):

Currently I want to run a W10 VM, and I want to maximise the performance within this VM. The VM itself will be running things like Plex, download clients, media automation software, backup software, and some other random Windows apps. I just want them to run without too much issue, especially Plex. I currently have 6 cores assigned to this VM, and was wondering if assigning all 8 cores to it would do any good or harm.

 

2nd use case (future setup, potentially):

Given the system/setup of the first use case, I want to try adding a 2nd VM (W10) with GPU passthrough and use it as an everyday computer. Not really heavy gaming, but mainly media consumption, browsing, programming, and maybe some light gaming (arcade/Steam games, maybe).

 

Any suggestions for the above use cases?


If possible, you should switch the first VM to Linux, give it cores 0 and 1 (and their hyperthreads), and give the rest to the Windows VM. If the Linux VM needs more performance, you can assign it more cores. You can share cores between the Linux and Windows VMs, but it could cause slower performance and higher latency. If the Linux VM is not using many resources most of the time, but you sometimes use it for applications that use multiple cores, like rendering videos or 3D models, sharing can be very useful.
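
As a rough sketch of what that pinning could look like in the Linux VM's XML (assuming unRAID pairs core N with hyperthread N+4 on the i7 4770, which you should verify first):

  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/> <!-- core 0 -->
    <vcpupin vcpu='1' cpuset='4'/> <!-- its hyperthread -->
    <vcpupin vcpu='2' cpuset='1'/> <!-- core 1 -->
    <vcpupin vcpu='3' cpuset='5'/> <!-- its hyperthread -->
  </cputune>

The Windows VM would then get 2, 6, 3, and 7 pinned the same way.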


 

-not an expert reply-

 

Use Plex in Docker, not in a VM. That way you can allow it to use all the processing power available (if you want, which it does by default). If you find it is messing with your VMs' performance, then determine the minimum number of cores you need to create a stable, non-buffering transcoded stream (assuming you need to transcode for specific devices), isolate the remaining cores from unRAID's use, and use those for VMs. There is another way to assign specific cores to Docker apps without isolating cores (see the sketch below), but I've never fiddled with that; it just seemed like too much work. Plus, I like having Plex use all processor power to spin up transcoded videos quickly before idling back down to maintain the stream.
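
(The "other way" is Docker's own --cpuset-cpus flag, which you can add to a container's Extra Parameters in the unRAID template or use on the command line. A hypothetical example; the core range and image are just illustrations:)

  docker run -d --name=plex --cpuset-cpus="2-7" plexinc/pms-docker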

 

Try adding 2 more cores to your VM and see what happens; you won't break anything. Also, take cores away and see what happens to your performance. Play with adding emulator pins (or a single pin). Run benchmarks and play around with the VM using different settings. Try making it a 4-core VM using 2 physical cores and their HT pairs (a sketch follows below). There are some good guidelines in regards to CPU pinning, but my setup sort of defies how things are supposed to operate, and I would never have maxed out its performance if I hadn't fiddled with things and gone against what works for most people.
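
For the 4-core/2-physical-core experiment, the idea is to pin the HT pairs and also tell the guest about the pairing through the topology line, so the OS knows which of its vCPUs share a core. A sketch, again assuming the N/N+4 sibling layout:

  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='7'/>
  </cputune>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>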

 

As mentioned before, you can share cores with little impact IF the VMs using them aren't pegging them constantly on their own. The more you add, and the more you do, the more it will bog down. Just a fact. But you might find that one VM only needs 4 cores and the other 2, and that running them independently is the way to go. Or you might mix and match, taking 6 cores and giving each of the two VMs 4 cores (sharing 2 cores).


Thanks, I'll try those suggestions. One of the reasons I created the thread was to make sure I won't break or blow something up by assigning more cores than I have (i.e. double-booking cores). But now that people have clarified that it won't break anything, I can experiment a little.

 

As for the Plex docker suggestion, I have heard some bad stories about that, and I will stay away from it. Same goes for other dockers for things like CouchPotato/SickRage/Sonarr. These apps themselves seem to be pretty badly managed and break very often (they commit broken code to the stable/release branch, which then gets picked up by the auto-updater, which then breaks the app entirely, and you have to do something manual to fix it). I can manage doing these manual fixes on Windows (as I'm used to it now), but the effort of doing that in Docker is more than I'm willing to spend at this stage :P Thus, I'm staying away from Docker for the above-mentioned apps.


Ok, I'll give it a go. But I'm holding you responsible.

 


NVM, don't download the docker! Plex has an official unRAID plugin: https://www.plex.tv/downloads/

I really wouldn't advise that. Plugins interface directly with the unRAID OS and have a bad habit of breaking things. Docker is isolated and has a much better chance of working through upgrades and changes. I'd read the support threads before choosing.


Which docker is better? The official unRAID Plex docker or the linuxserver.io Plex docker? Or is it best to just use a Linux VM with Plex installed? It's not a Linux VM, but I use the official Plex plugin for Synology inside an Xpenology VM (Synology DSM 5.2) and have never had any problems with it.

 

Just the last Plex update broke my Plex server a little, because you can't connect to the server from domains (even if they are just local) without being logged in :( It took me a long time to figure out what the problem was, lol. I just manually allowed all local IPs to access the server without logging in, and it worked again (sadly only via the IP address, but a whitelist for domains is supposed to be added soon)! It's not a problem with the server, it's just the new Plex update.
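
(For anyone hitting the same thing: the setting in question is Plex's "List of IP addresses and networks that are allowed without auth" under Settings -> Network. It takes comma-separated IP/netmask entries, for example:)

  192.168.1.0/255.255.255.0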


 

I use the official one through Community Applications and have no problems with local or remote viewing. I don't have a Plex Pass, so I can't comment on that aspect.

 

As for the Plex docker suggestion, I have heard some bad stories about that, and I will stay away from it.

 

BOOOOOOOOOOO. Those folks probably had issues using non-standard versions of Plex that were docker apps created by others. Give the official one a try. If you hate it, you can just delete it and the docker image, and then disable Docker.


Well, it's not that simple. Thing is, once you swap your Plex installation to a different environment, you need to figure out a way to sync your libraries/settings (or else you lose all your settings, progress, etc.). Same goes the other way too, so you can't simply decide to delete the docker image; you first need to figure out a way to export/import your settings/progress to the new/old environment you are going to.
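
For what it's worth, the move is mostly copying Plex's data folder between environments; a rough sketch with typical default paths (verify yours before copying anything):

  # On the Windows VM, Plex keeps its database under:
  #   %LOCALAPPDATA%\Plex Media Server
  # In the docker, the same folder sits inside the mapped /config path, e.g.:
  #   /mnt/user/appdata/plex/Library/Application Support/Plex Media Server
  # Stop both servers, copy the folder across, then fix ownership on unRAID:
  chown -R nobody:users "/mnt/user/appdata/plex"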

 

That being said, I'll give it a go when I have a chance.
