
CPU Questions


KayakNate

Recommended Posts

Doing some testing on my setup yesterday, I noticed an interesting benchmark result. With both of my VMs running, but only one VM running Cinebench, it gets a score of ~900. With both VMs running Cinebench at the same time, they each get a score of ~645. That's still an acceptable score, but it makes me wonder a couple of things:

 

1) When unRAID lists the CPUs to be chosen for a VM, which are real cores and which are HT? Is it evens real, odds HT? Or first half listed real, second half HT? Etc.? I have two CPUs and I'm trying to make sure one VM gets one whole CPU (real and HT) and the other VM gets the other whole CPU, so instructions aren't mixed across CPUs. These have QPI...but still.

 

2) If the hardware that would be used by Cinebench for each VM is independent, why would I get a slowdown running both at the same time? Northbridge limitation?

Link to comment

It says on the Home page which core is paired with which so you can tell the HT pairs apart.
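If you'd rather confirm the pairings from the command line on the host, the Linux kernel exposes the topology directly. This is a generic Linux sketch, not an unRAID-specific tool:

```shell
# Logical CPUs that share a CORE value are hyperthread siblings
command -v lscpu >/dev/null && lscpu -e=CPU,CORE,SOCKET

# Or ask the kernel directly for a given logical CPU's sibling(s):
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
```

On a dual-socket 12-core/24-thread box this typically prints either 0,12 or 0,1 depending on how the firmware enumerates the threads, which is exactly why it's worth checking rather than guessing.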

Virtualisation is a very complicated thing when it comes to scalability; I wouldn't be surprised that it doesn't scale properly. Just like having 2 CPUs won't make rendering twice as fast.

 

Makes sense that scaling limitations wouldn't just let you double your power. GPUs deal with the same issue. But I'd like to give the machine the best power and scalability I can.

 

So in my attached image, the first row is 0/12. Does that mean that 0 is a real core and 12 is its HT counterpart? If so, I'm guessing the correct load, if VM1 were running Cinebench, would be the top 6 rows all at 100%. Mine's currently not set up like that; it's set up so that when VM1 is benchmarking, the left column is at 100%.

[Attached image: FullSizeRender.jpg — screenshot of the unRAID CPU pairing list]
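To confirm which threads actually light up during a benchmark without watching the dashboard, you can sample /proc/stat on the host. This is a rough Linux-only sketch (it assumes the usual /proc/stat field order), not an unRAID feature:

```python
import time

def per_cpu_busy(interval=0.5):
    """Rough per-logical-CPU utilization (%) sampled from /proc/stat.

    Linux-only; assumes the usual field order (user nice system idle
    iowait ...). Handy for confirming which threads actually light up
    while a VM runs Cinebench.
    """
    def snapshot():
        stats = {}
        with open('/proc/stat') as f:
            for line in f:
                # per-CPU lines look like "cpu0 ..."; skip the aggregate "cpu" line
                if line.startswith('cpu') and line[3].isdigit():
                    name, *fields = line.split()
                    vals = [int(v) for v in fields]
                    idle = vals[3] + vals[4]  # idle + iowait ticks
                    stats[name] = (sum(vals), idle)
        return stats

    before = snapshot()
    time.sleep(interval)
    after = snapshot()
    busy = {}
    for cpu, (t0, i0) in before.items():
        t1, i1 = after[cpu]
        total = t1 - t0
        busy[cpu] = 100.0 * (total - (i1 - i0)) / total if total else 0.0
    return busy

if __name__ == '__main__':
    for cpu, pct in sorted(per_cpu_busy().items(), key=lambda kv: int(kv[0][3:])):
        print(f'{cpu}: {pct:.1f}%')
```

Run it while one VM is in Cinebench: if the pinning is core+sibling, a contiguous block of cpuN entries plus their HT partners should sit near 100% while the rest stay idle.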

Link to comment

Why do you feel the need to give a VM so many cores? What are you doing on the VM's that are so CPU intensive?

 

Gaming.

 

Gaming isn't nearly as CPU intensive as it used to be, considering that most games have a really hard time making good use of the extra cores.  Now if each VM is going to be streaming to Twitch and capturing/encoding content while you're gaming, maybe more core assignments are necessary, but if you're just gaming for the sake of gaming, you could easily drop your core assignments dramatically and you wouldn't notice much of a difference in framerate/quality.

Link to comment

While this is sometimes true, there are games that do use 4 cores completely, as well as a few that even use all the hyperthreads. The Division maxes out almost every modern CPU's real cores. Overwatch is multi-threaded like a beast and totally maxes out most CPUs as well.

 

I don't have any other purpose for this machine. Already have a separate Plex box and separate NAS. So I figure there is no reason to not use as many cores as are available. I've switched it to the configuration I listed above. I don't see any immediate differences, but I'll sleep better knowing that my hardware is more appropriately allocated.

Link to comment

Here is some testing I did recently on CPU pinning in OS X and Win10. Enjoy...or not? : https://lime-technology.com/forum/index.php?topic=56139.msg535326#msg535326

 

This looks like it would save me some testing, but I'm having a hard time interpreting some of this. Lol.

 

It's a little dense; it deals not only with testing VMs on single threads for performance, but also with paired threads, and with presenting different virtual topologies to see how the VMs react differently. But it allowed me to fine-tune my setup, and I know how to get the most out of all the VMs I run, plus the pros and cons of each setup.

Link to comment

Based on your findings, and the pic of my available CPUs above, does the configuration I described above follow your most successful tests?

Link to comment

No. Your pinning follows the commonly accepted practice for Windows VMs, which, in terms of total power/performance, is limited by the fact that both threads on a physical core can't each run at 100%. My testing showed that virtualized Windows doesn't know or care whether a given core/thread you've assigned it sits alone on a physical core or shares one with another thread.

 

With that said, some people find they improve latency issues in Win10 by putting a VM on its "HT pairs." I haven't posted anything about this, but I did testing on it a short while ago and found essentially no gain doing it that way.

 

For me, the way I run my VMs is as follows: I stack 2 VMs across each core's paired threads (so, for you, that would be VM 1: 1-11 and VM 2: 13-23). Why? Because neither of them typically runs at 100% at the same time.
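unRAID's VM manager writes this pinning as libvirt domain XML under the hood; a hand-edited fragment for the layout described above (VM 1 on host threads 1-11) might look like this. The vCPU count and cpuset values are illustrative, taken from the post's example, not a config to paste verbatim:

```xml
<vcpu placement='static'>11</vcpu>
<cputune>
  <!-- vCPU n pinned to host logical CPU n+1: threads 1-11 for VM 1 -->
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <!-- ...continue through... -->
  <vcpupin vcpu='10' cpuset='11'/>
</cputune>
```

The second VM would get an identical block with cpuset values 13-23, so each guest sees only the sibling threads of the other, never both threads of the same pair.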

 

A physical core can run about 180% total utilization with both threads maxed (these numbers aren't exact and vary from processor to processor, but we'll go with them for now as general estimates based on testing). If both threads attempt 100%, they bottleneck each other and each gets capped at about 90%. If a VM is using all single threads (for instance 1-11), it has access to 100% of each thread's potential, as long as the other VM running on the paired threads is attempting roughly 80% utilization or less. If you put each VM on its own HT-paired cores, then when it attempts 100% utilization on each thread, it will never achieve it. This is essentially what was shown in my testing.
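The arithmetic in that explanation can be written down as a toy model — the 180%/90% figures are the post's rough estimates, not measured constants:

```python
def ht_throughput(demand_a, demand_b, core_capacity=1.8, thread_cap=1.0):
    """Toy model of two HT siblings contending for one physical core.

    Each thread is capped by its own limit (100%) and by the shared
    core capacity (~180% total, per the post's rough estimates).
    Returns the utilization each thread actually achieves.
    """
    a = min(demand_a, thread_cap)
    b = min(demand_b, thread_cap)
    if a + b <= core_capacity:
        return a, b  # no contention: both threads get their full demand
    fair_share = core_capacity / 2  # ~90% each when both are saturated
    if a <= fair_share:
        return a, min(b, core_capacity - a)  # lighter thread keeps its demand
    if b <= fair_share:
        return min(a, core_capacity - b), b
    return fair_share, fair_share  # both greedy: capped at ~90% each
```

Matching the post: two maxed siblings each land at ~90% (`ht_throughput(1.0, 1.0)` gives `(0.9, 0.9)`), while a thread paired with an 80%-busy sibling still gets its full 100% (`ht_throughput(1.0, 0.8)` gives `(1.0, 0.8)`).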

 

So again, I reiterate: this is all about pure power and has nothing to do with the latency issues some people get. But I'm also beginning to think that many people's problems with audio/video latency and other CPU pinning issues come from using single processors that either don't have the power or motherboard support to back up running multiple VMs, or even to get a single one to play nice. I have a hard time getting my system to behave "badly" in terms of VM performance. But I'm also on older enterprise equipment that was made for virtualization.

 

As I tell everyone, my results may come down to my hardware, and people should experiment for themselves to see what works best for them. All I really know is how to get almost every ounce out of my own machines, and the benchmarks/testing that back up why I do what I do.

Link to comment

That's not a lot! When I'm running my transcoding cluster I have a 23-core VM isolated from unRAID, leaving core 0 for unRAID. I actually get (ever so) slightly better performance if I give the VM all 24 cores, but something makes me feel like it's a little better giving unRAID 1 core and the VM an emulator pin... can't wait till I get an 80-core server in the future!

Link to comment

Ok. I definitely understand now the benefits of doing the CPUs the way that you say. But since I'm using this for gaming, and therefore my CPUs WOULD be maxed out at the same time, my situation might be different. Which, as you said, means I need to do some of my own testing.

 

Thank you for that detailed response. It gives me a good baseline and a better understanding of why some of my results might not match what I previously guessed.

 

I'll report back when testing is done.

Link to comment

Archived

This topic is now archived and is closed to further replies.
