VMware vs Proxmox Performance Benchmarks

Just a quick video comparing Proxmox and VMware hypervisor performance using some common benchmarking tools.

#VMware #Proxmox
Comments

I listen to this video when I have trouble falling asleep

seeithappen

I think this test may be inaccurate, because I found that Proxmox uses the kvm64 CPU type by default. It should run more efficiently if Host is selected. By the way, there are Windows VirtIO drivers.

liudas
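
(For reference, a minimal sketch of the change liudas describes, assuming the Proxmox qm CLI on the host; the VM ID 100 is a placeholder.)

# Sketch: switch a Proxmox VM's CPU type from the kvm64 default to "host",
# run on the Proxmox node itself. VM ID 100 is a placeholder.
import subprocess

VMID = "100"  # placeholder; substitute your own VM ID

# Show the current config; if no "cpu:" line appears, the kvm64 default is in use.
subprocess.run(["qm", "config", VMID], check=True)

# Expose the host CPU model (and its instruction-set flags) to the guest.
# Note: "host" ties the VM to this CPU model, which can complicate live
# migration between nodes with different CPUs.
subprocess.run(["qm", "set", VMID, "--cpu", "host"], check=True)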

This is really interesting and I appreciate you sharing how you did these benchmarks.

mrjamiebowman

@9:43 the processor shows as "Common KVM", so it's emulated. Configure the VM to pass through the host processor and I'm sure it will be faster.

GeoffSeeley

Strange results - did you install all the VirtIO drivers for your Windows VM? Hard drive, NIC, etc.

davidstievenard

great stuff buddy - just got myself an R720 and I'm thinking about which hypervisor to install - probably going to go with VMware ESXi. I'm new to all of this, so my question is: how do you access your VMs? I see that you connected via Remote Desktop and it went straight into Windows - how did you do that? You create the VM on the hypervisor, which is on the server, and you use another PC to get into the management console to manage the VMs, like in VMware - but how do you get the VM up so you can actually use it?

austinstallion

You have to install the native drivers for KVM yourself; VMware would have installed its drivers for Windows automatically.

mjoconr

With the right drivers / virtual controller, configured correctly, AND a better virtual disk format, I get 691.01 MB/s and 745.65 MB/s on the same test from my SSD.
There may be a little truth in what someone said in a video - Hyper-V if you are running mainly Windows, and QEMU/KVM for mainly Linux - but I'm not sure where VMware fits into that.

peteradshead
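
(For reference, a rough sketch of the kind of reconfiguration described in the comments above, assuming the Proxmox qm CLI; the VM ID and bridge name are placeholders, and it assumes the VirtIO drivers are already installed in the Windows guest.)

# Sketch: point an existing Proxmox VM at paravirtual (VirtIO) devices.
# Assumes the Windows VirtIO drivers are already installed in the guest;
# VM ID 100 and bridge vmbr0 are placeholders.
import subprocess

VMID = "100"

for cmd in (
    # Use the VirtIO SCSI controller instead of the default emulated one.
    ["qm", "set", VMID, "--scsihw", "virtio-scsi-pci"],
    # Replace the emulated NIC with a VirtIO NIC on bridge vmbr0.
    ["qm", "set", VMID, "--net0", "virtio,bridge=vmbr0"],
):
    subprocess.run(cmd, check=True)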

You should select the HOST CPU option to enable AVX/AVX2/AVX-512; those CPU extensions do not get enabled for non-host CPU types. You need this to compare SIMD-enabled performance against SIMD-enabled performance, so both measurements start from the same point.

not-another-dev
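
(One way to verify this from inside a Linux guest - a small sketch; /proc/cpuinfo is Linux-specific, and on a Windows guest a tool like CPU-Z shows the same information.)

# Sketch: check which SIMD extensions the guest actually sees.
# With the kvm64 CPU type these flags are typically missing; with
# "host" they should mirror the physical CPU.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for ext in ("avx", "avx2", "avx512f"):
    print(f"{ext}: {'present' if ext in flags else 'missing'}")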

Just curious, does anyone have any clues why my Linux VMs keep freezing in Hyper-V but the Windows 10 VM works fine?

nemonada

Looks unreal based on a similar comparison we did. In our case, VMware was slower compared with Proxmox.

The test bench used for Proxmox and VMware:

[ A ] Compute node

2 x 8-core Xeon CPUs [ Supermicro ]
Memory: 128 GB RAM
Boot drive: 2 x 240 GB SSD [ RAID 1 ]
10 Gbps network card

[ B ] Storage node [ common to both Proxmox and VMware ] [ Dell ]

Ubuntu Server: 21.04.2
32 GB RAM
Boot disk: 256 GB SSD
Hard drives: 6 x 4 TB [ RAIDZ2 using ZFS ]
10 Gbps network card

For each test we destroyed the storage pool, to keep the tests consistent.

Number of runs averaged for each result: 5

Storage exposed to the compute node: as NFS as well as an iSCSI target.

Result: Proxmox was 9% faster than VMware.

Please re-verify your results....

mithubopensourcelab
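
(For reference, the run-five-times-and-average methodology described above could look roughly like this; "my_benchmark", its arguments, and its output format are placeholders, not the tools they actually used.)

# Sketch: run a benchmark N times and average the results.
import subprocess

RUNS = 5

def parse_throughput(output: str) -> float:
    # Placeholder parser: expects the tool to print a line like
    # "READ: 691.01 MB/s", which is an assumption about its output.
    for line in output.splitlines():
        if "MB/s" in line:
            return float(line.split()[-2])
    raise ValueError("no throughput line found")

results = []
for i in range(RUNS):
    out = subprocess.run(
        ["my_benchmark", "--target", "/mnt/nfs-test"],  # placeholder command
        capture_output=True, text=True, check=True,
    ).stdout
    results.append(parse_throughput(out))
    print(f"run {i + 1}: {results[-1]:.2f} MB/s")

print(f"average over {RUNS} runs: {sum(results) / RUNS:.2f} MB/s")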

You need to install VirtIO drivers on the KVM VM (or VMware Tools on the VMware VM) and update the VM config. In my test, KVM is far faster - near physical-machine performance.

fujiaki

this was super chill, would definitely watch more @techThis

eddied

VMware ESXi and Proxmox are both free hypervisors, and both have premium options that give you support and sometimes better features.

VMware spent a lot of time making its hypervisor's out-of-box experience much more user friendly. Proxmox does have a learning curve, and these tests do need to be run again once you've played around more with Proxmox and learned some of the configuration options, especially when it comes to running Windows.

I couldn't quite make it out, but I think you did have the VirtIO drivers needed for a more stable Windows environment; taking advantage of them does require reconfiguring the VM, though.

frogboy

just curious, is anybody using proxmox in an enterprise environment?

HyPex-

Would be interesting to see the Cinebench test run again on ESXi with the VM given only 5 CPUs. I would expect you would get even better results.

In ESXi, when a VM is scheduled, all of its virtual CPUs are scheduled together at the same time*, leaving nothing available for the VMkernel whenever it needs time on the processor. So the VM is forced to wait (you can see these metrics on the host if you run ESXTOP) while the VMkernel does its processing. By leaving a CPU free so that the VM and the VMkernel can run simultaneously, you remove those wait states and generally end up with better performance.

* Relaxed co-scheduling, introduced in ESX 3.5, helps to some degree, but when running a CPU-intensive application / stress test it doesn't really help that much.

TJSSheppard
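
(The sizing argument boils down to simple arithmetic; a tiny illustration, where the core count is made up rather than taken from the video.)

# Sketch: the "leave a core for the VMkernel" sizing rule, illustrated.
# With strict co-scheduling, a VM whose vCPU count equals the host's
# core count is co-stopped whenever the VMkernel itself needs a core.
host_cores = 6  # illustrative value, not taken from the video

for vcpus in (host_cores, host_cores - 1):
    spare = host_cores - vcpus
    if spare == 0:
        note = "VM must stop while the VMkernel runs"
    else:
        note = "VM and VMkernel can run concurrently"
    print(f"{vcpus} vCPUs on {host_cores} cores: {spare} spare core(s) - {note}")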

@6:16 CPU usage is at ~18% and memory usage at 87%.

Was the test on Proxmox run after this operation completed and the CPU/memory were free?

anantmishra

I've run a couple of Proxmox clusters (7 nodes per cluster) at work for three years and I noticed a lot of the VMs ran a bit slow. I always thought it was because of Ceph. Mind you, these are AMD EPYC 32-core processors with 512 GB of RAM per server, so there's plenty of horsepower. When I switched them to VMware and vSAN it was like night and day. VMs run a lot better and my users no longer complain about how slow the apps are.

What made a difference is that VMware has a special ISO image just for Dell servers with custom hardware drivers - something I couldn't get for Proxmox, which is a shame. Proxmox's web GUI is easier to deal with than vSphere's, though.

I still use Proxmox for my home lab on a Dell Precision workstation.

Darkk

You can't put a Threadripper and anything below a normal X30-X40 server in the same category 🤷🏽‍♂️😤

It's old, lol - I run 210s, 710s, 720s and one 740-series at home.

kristopherleslie

Passing the hard drive through to the VM will yield near-native results.

Alphahydro
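
(For reference, a sketch of the passthrough Alphahydro mentions, assuming the Proxmox qm CLI; the VM ID and the /dev/disk/by-id path are placeholders.)

# Sketch: pass a whole physical disk through to a Proxmox VM.
# The stable /dev/disk/by-id path is preferable to /dev/sdX names.
import subprocess

VMID = "100"                                 # placeholder VM ID
DISK = "/dev/disk/by-id/ata-EXAMPLE_SERIAL"  # placeholder device path

# Attach the raw disk as scsi1. The guest then talks (nearly) directly to
# the drive, bypassing the image layer - hence the near-native numbers -
# at the cost of features like snapshots on that disk.
subprocess.run(["qm", "set", VMID, "--scsi1", DISK], check=True)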