Checking out VirtioFS in PVE 8.4

In this video I take a look at VirtioFS in PVE 8.4. VirtioFS is a new way to share files between the host and guest in PVE, and in this video I check out its speed and compare it to a network share using Samba.

00:00 Intro
01:36 Setting up VirtioFS
03:15 Mounting in Windows
06:15 Mounting in Linux
07:44 Comparing VirtioFS to Samba
08:44 Performance Comparisons
12:26 VirtioFS vs Samba Ease of Use
14:52 Conclusion
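
For anyone following along with the Linux chapter, a minimal sketch of the guest-side mount, assuming the share was exported under the tag ssdShare (the tag and mount point here are illustrative):

    # Inside the Linux guest: a VirtioFS share is mounted by its tag,
    # not by a device path. 'ssdShare' is the tag given to the VirtioFS
    # entry in the VM's hardware settings.
    mkdir -p /mnt/ssdShare
    mount -t virtiofs ssdShare /mnt/ssdShare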

Comments

Very informative and clear video with solid coverage of the issues and of the questions an admin will have. Well done. Keep this quality content coming!

luckygreen

Excellent overview and great job with the benchmarks. Only thing I would suggest is putting the benchmark comparison on a graph for us. Love your channel!

Faustetheus

Thanks for reminding me I needed to update my Proxmox :)

mattybbg

I super appreciate your videos!!!
And thank you for cutting all of the dead time, for example when you were showing rebooting for the mounts...

I'm a super happy subscriber and I really hope your channel gets a lot larger.

Mikesco

We love your videos bud, thanks for taking the time to test everything for us before we even implement it in our homelabs!!!

sking

Awesome overview. Thanks for this! Very well done; comprehensive and clear, yet concise. 10/10

crackshot

After watching many of your videos, I've finally upgraded my AM4 server from a 5600 to a 5900XT, installed Proxmox, and am now virtualising my UnRaid under Proxmox (passed the whole HBA to the UnRaid VM). I'll be going over your videos again to work out what I want to do with Proxmox next. I'll probably virtualise pfSense (or OPNsense) next, for starters.

shootinputin

It is mainly for sharing configuration between guest and host and so on, not for data sharing. That's my thought on it.

syncbricks

Try changing the CPU type from host to either x86-64-v2-AES or x86-64-v3-AES; it might help, as Windows could otherwise be doing mitigations against CPU hardware vulnerabilities, tanking its performance. It made a massive difference to I/O speeds and overall latency for me.

neilmartin
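
For reference, the CPU type can also be switched from the PVE CLI instead of the GUI; a sketch assuming VM ID 100 (the ID is illustrative):

    # Change the VM's CPU type away from 'host'; takes effect on the next start.
    qm set 100 --cpu x86-64-v2-AES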

For years I've used an LXC with a self-made Samba server on all my Proxmox machines. It uses the host file systems like Btrfs and ZFS at full performance. The downside is that it has to be privileged. It can't be beaten in flexibility and performance, IMHO. I can access all my data from VMs, notebooks, and desktop PCs: Windows, Linux, macOS, and BSD, no issues. Love it.

sheldonkupa
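
For anyone curious what that setup looks like, a minimal sketch of a share definition inside such a container; the share name, path, and user are illustrative:

    # /etc/samba/smb.conf -- one share exported from the privileged LXC
    [data]
        path = /tank/data        # host filesystem path visible in the CT
        read only = no
        valid users = alice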

Been using it on Flatcar Linux since 2023 or so; really great option for sharing ZFS-backed storage into guests at the file level, or vice versa.

joshenders

Windows always performed quite well for me, most of the time very well, and even better than Linux when it comes to virtualization and disk/network I/O; with my hardware and in my tests at least. The VirtioFS implementation on Windows looks like it can be improved; there's no reason it should be that slow and freeze the UI (I assume), but I guess I would also rather use Samba as a file on/off ramp, since network access is more versatile, so I agree with all you say.. o)

I've had a weird configuration for some months now: I use a *.vhdx file located on a Windows Server share, the share is mounted into a Debian VM, and the *.vhdx file is mounted as a block device in that same Debian VM as a Btrfs disk. I needed a remote "offload" Linux filesystem to store things while keeping Linux permissions etc., and had no other Linux system around. The performance is quite good; it maxes out my 1 Gb network on relatively old hardware (a gen6 i5), and the virtual "disk" inside the VM (running on Hyper-V) even resumes perfectly fine after the host system (my daily computer) sleeps. The VM reconnects the network and Samba share just fine, and the Btrfs filesystem on that remote *.vhdx file hasn't broken yet.. I wonder how long this can keep working without problems.. o)

Thank you for doing these tests, very interesting and surely will save a lot of people some time as well.

ytbone
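
The comment doesn't say how the VHDX gets attached as a block device; one way to do it is qemu-nbd from qemu-utils, which understands the VHDX format. A sketch with illustrative hostnames and paths:

    # Inside the Debian VM: mount the Windows share, then attach the VHDX.
    mount -t cifs //winserver/share /mnt/winshare -o credentials=/root/.smbcred
    modprobe nbd
    qemu-nbd --connect=/dev/nbd0 --format=vhdx /mnt/winshare/offload.vhdx
    mount /dev/nbd0 /mnt/btrfs   # the Btrfs filesystem inside the VHDX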

Very clean and straight-to-the-point video format! Such perfect timing as well. Have you tested VirtioFS to share a folder path from a VM to an LXC? I was hoping this could be an easy way to set up and manage share points between TrueNAS and containers, for instance a TrueNAS VM and a Jellyfin LXC. And if that is possible, I wonder how the performance would be impacted for streams.

funkytux
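
On the LXC half of that question: VirtioFS in PVE targets VMs, while containers normally get host paths through bind mounts instead. A sketch with pct, where the CT ID and paths are illustrative:

    # Bind-mount a host directory into CT 101 at /mnt/media.
    pct set 101 -mp0 /mnt/tank/media,mp=/mnt/media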

Great to hear about this new feature, and even better to have this thorough information on it, although I can't think of an immediate scenario where I could use this.

One thing surprised me though: you always kept comparing it to Samba. It never occurred to me to use Samba under Linux instead of NFS; in my mind Samba always equals Windows. Maybe I should change my prejudice here?

demorez

Excellent video! Apart from the lack of graphs, as mentioned by @Faustetheus, I wonder if there are any security implications. A VM's root has unrestricted access to a VirtioFS mount; is this something those POSIX ACLs help with?

shoaloak
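
For reference, POSIX ACLs are managed on the host with setfacl/getfacl; a generic sketch (user and path illustrative), though whether an ACL actually constrains a guest depends on how virtiofsd maps user IDs:

    # Grant one host user read/execute on the shared directory, then inspect it.
    setfacl -m u:backup:rx /srv/ssdShare
    getfacl /srv/ssdShare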

I would love a video exploring those advanced options and what they affect, especially since I suspect they differ depending on the underlying filesystem (ZFS, ext4, Btrfs, or whatever).. It did look like they can have an important effect on performance, and maybe need matching mount options in fstab?

With the current numbers it's a bit of a bummer, since it's so much slower than qcow2; I had hoped it would be an alternative to LXC bind mounts..

jansontomatoboy
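
On the fstab question, a persistent VirtioFS mount in the guest might look like the line below; the tag and mount point are illustrative, and nofail just keeps boot from hanging if the share is detached:

    # /etc/fstab -- mount the VirtioFS tag at boot
    ssdShare  /mnt/ssdShare  virtiofs  defaults,nofail  0  0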

Great seeing the performance testing. You should see if that bad performance is bypassed with direct I/O; it should help with the pegged CPU.

silversword
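
For anyone wanting to try that, fio can force direct I/O with a single flag; a sketch with an illustrative file path and sizes:

    # Rerun the write test with the page cache bypassed (O_DIRECT).
    fio --name=directtest --filename=/mnt/ssdShare/fio.bin \
        --rw=write --bs=1M --size=2G --direct=1 --ioengine=libaio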

I like SMB, as that way I can kind of limit the speeds/throughput via the Proxmox GUI.
Though VirtioFS looks like a good option for a GUI-based setup.

jamescrook

What was the filesystem type on the ssdShare you added to the Directory Mapping?

WalterBoring

What about sharing CephFS? How would you share CephFS, mounted on the PVE nodes, with guest VMs?
I think that use case is a better fit for VirtioFS, and one that SMB and NFS don't address.

RobertoSantos-rbgu
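
For context, one way this could be wired up, assuming CephFS is already reachable from the node (the monitor address, paths, and keyring are illustrative): kernel-mount it on the PVE host, then point a Directory Mapping at that path.

    # On the PVE node: mount CephFS, then map /mnt/cephfs into VMs via VirtioFS.
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret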