CPU Pinning in Proxmox: Configuration and Performance Testing

CPU pinning locks a VM to specific CPU threads. While not officially supported in Proxmox, it can be configured with standard Linux tools. I test the performance impact CPU pinning can have and when it makes sense to use it.

00:00 Intro
00:22 What is CPU pinning?
01:39 Setting up CPU pinning in Proxmox
09:39 Benchmark results
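
The pinning shown in the video relies on standard Linux tools rather than anything Proxmox-specific. A minimal sketch of the manual approach, assuming VM ID 100 and host cores 0-3 (both placeholders):

    # Proxmox records each VM's QEMU PID in /var/run/qemu-server/<vmid>.pid
    pid=$(cat /var/run/qemu-server/100.pid)
    # Pin every thread of the QEMU process, vCPU threads included
    taskset --all-tasks --cpu-list --pid 0-3 "$pid"
    # Verify the resulting masks
    taskset --all-tasks --cpu-list --pid "$pid"

The mask applies only to the running process, so it has to be reapplied each time the VM starts.
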
Comments

Thank you, your guide was really useful in setting up a 13900K for 4 Windows VMs. None of the hypervisors currently know how to utilize the E-cores, so by doing per-core CPU pinning I am able to pass the E-cores to specific VMs and let the guest OS handle them.

harrytsang
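
Sketching what that per-core split can look like on a 13900K, assuming the common enumeration where the P-cores and their HT siblings occupy host CPUs 0-15 and the sixteen E-cores 16-31 (verify on your own box):

    # E-cores show a lower max clock and no SMT sibling
    lscpu --extended=CPU,CORE,MAXMHZ
    # Hand VM 101 four E-cores (VM ID and core IDs are placeholders)
    taskset --all-tasks --cpu-list --pid 16-19 "$(cat /var/run/qemu-server/101.pid)"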

Hey, what's the link to the Proxmox thread/script?

giovannifrancesco

Dude I love how detailed you are and the findings are fascinating. Thanks for making this!

stephenflee

Wow... that is a lot of extra work lol. Surprised there isn't a more automatic method. Very grateful to you for making this video. Definitely not enough of these, at least for me. Conclusion: maybe I can just trade someone for a non-X3D Ryzen lol

Oo-_-oOoO

Using CPU units, I have seen a positive response in prioritizing CPU for one machine over others, but it is not very noticeable without benchmarks. Also, the priority is not that aggressive: giving a machine the lowest possible priority will not in any way make it stop working if the server is overloaded on all cores. And the machine with the highest priority will still be interrupted by the other machines, but it will get a lot more computing power, so the function is doing something, just maybe not as much and as aggressively as one might want.

burnsnu
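
For context, cpuunits is a relative scheduling weight rather than a reservation, which matches that description. A small illustration (VM IDs are placeholders):

    # Under full contention VM 100 gets roughly twice VM 101's CPU time;
    # with idle cores available, neither is throttled at all
    qm set 100 --cpuunits 2048
    qm set 101 --cpuunits 1024
    # A hard ceiling is a different knob: cap VM 101 at two cores' worth
    qm set 101 --cpulimit 2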

I actually have a vendor that requires this feature on the VMs I'm about to deploy. Perfect timing. Thanks!

vitz

Hey, thanks for this! I'm moving three Proxmox nodes into a single one with dual CPUs and I knew I needed to do something like this to get the best performance depending on VM use/hardware passthrough. I wonder if it can be applied to a container as well.

GeoffSeeley

You make great videos! I'm just a mailman with this little computer hobby; no idea why I'm learning this stuff besides my own curiosity, and it's hella cool.

TheRaginghalfasian

Can you talk about the updated Proxmox version 7.3.3? There is an option in the GUI called 'CPU Affinity' which simplifies the process of CPU pinning. It's also related to your way of CPU pinning, since the option is based on the taskset command. I would like to see the performance difference between setting it manually vs. via the GUI.

cineaviation
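
For anyone on 7.3 or later, that GUI field maps to a plain VM config option, so the manual steps collapse to one command (VM ID and core list are placeholders):

    # Equivalent to the GUI 'CPU Affinity' field; applied at every VM start
    qm set 100 --affinity 0-3,8-11
    # Stored in /etc/pve/qemu-server/100.conf as:
    #   affinity: 0-3,8-11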

Good content. The overall takeaway here is to overcompensate a bit with regard to CPU: have enough cores, use NVMe and faster network cards and switches, plus the fastest ISP connectivity you can afford (emphasis on symmetrical bandwidth and fast upload speeds). #comptia

shephusted

this was a lot more useful and informative than I expected, thanks!

ImTheKaiser

I really enjoy your channel and analysis. Thanks for doing this!

zacharysandberg

Thanks for the info!
Testing 1c/2t of isolcpus each (out of a 4c/8t Kaby Lake) for my pfSense and TrueNAS VMs, with the remaining threads for all other, less sensitive VMs.
Instead of decoding the affinity mask via taskset and having to deal with PIDs, I found htop useful: it has a human-readable affinity editor on the 'a' hotkey for a highlighted process.

Mr.Leeroy
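
For reference, the isolcpus half of that setup, assuming cores 2-3 and their HT siblings 6-7 are the reserved set (the sibling numbering is an assumption; confirm it with lscpu --extended):

    # Keep the host scheduler off the reserved threads: append
    #   isolcpus=2,3,6,7
    # to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
    update-grub   # and reboot; only explicitly pinned tasks land on 2,3,6,7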

Thanks for the video, but it's not very clear which of these optimisations you are doing on the host and which ones on the VMs.

rishi

This is an awesome explanation, very thorough. I have been using this to great effect!

svengustavsson

I've been struggling with Cinebench on my virtual Windows machine: Cinebench will have one or two threads that get stuck while the others carry on quite happily, so I'm very keen to try isolating the CPU.

mattruddick

I have seen examples of individual processes within the guest VM benefiting from single-vCPU affinity set in the guest, where a thread becomes core-bound, combined with setting that process's priority to Above Normal. I'm wondering if there's some extra performance to be had by doing the same on the hypervisor too.

johnwalshaw
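
The same guest-side combination on a Linux guest, as a rough sketch (./worker is a hypothetical core-bound process):

    # Bind the process to vCPU 0 and raise its priority a notch; negative
    # niceness needs root and is roughly analogous to 'Above Normal'
    taskset --cpu-list 0 nice -n -5 ./worker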

Great video, thank you. I was seeing the same thing, with CPU Units seemingly doing nothing. Now I have been able to do pinning and am seeing much better results!

ambient

Very useful info. Great video. Thank you!

dustinhess

Your explanation of pinning is somewhat backwards (you sort of get there in the end, but how you talk about it in the explanation section is flawed).
Pinning a process to a core (or cores) will limit it to using only that core, but it won't stop OTHER processes from using it.
You can't really guarantee that a core will be free for a specific machine without a BUNCH of fiddling.
CPU pinning is most useful when you have to deal with multiple NUMA nodes, because moving a thread to another NUMA node can cost tens or hundreds of cycles.
And even then, multiple studies have shown that current-generation NUMA-aware CPU schedulers do just as good a job, or better, at managing which core a process runs on. This is especially true if you have a VM that maxes out the cores you are pinning it to; in that case it is almost always better to let a thread jump to another NUMA node than to sit and wait for a core to free up.

The only other reason to pin is if you happen to be using a 12th- or 13th-generation Intel CPU with P and E cores. Support for managing these automatically is limited (in 2022), so you MIGHT want to pin VMs to specific cores.

TL;DR: If you don't have a multi-CPU, first-generation Epyc, or Intel 12th/13th-gen system, you shouldn't be pinning cores.

Prophesr
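
A quick sanity check for whether that NUMA caveat applies to a given host, sketched with stock tools (numactl comes from the numactl package):

    # A single node here means no cross-node migration penalty to worry about;
    # first-gen Epyc and multi-socket boards report several nodes
    numactl --hardware
    lscpu | grep -i numa

On a multi-node box, pinning a VM within one node's CPU list (and enabling NUMA in the VM config so its memory is allocated alongside) is where the technique is most likely to pay off.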