How *I* Reduced Proxmox RAM Usage with NFS Instead of ZFS

In this video I'll show you how I reduced RAM usage on my Proxmox homelab by switching from local ZFS storage to NFS.

Proxmox is a popular open-source virtualization platform that supports a variety of storage backends, including local ZFS pools and NFS shares. Strictly speaking, ZFS is a local filesystem and volume manager while NFS is a network file-sharing protocol, so the real comparison here is local ZFS storage versus network-attached storage mounted over NFS. Even so, there are several reasons the switch can make sense.

Firstly, mounting storage over NFS offloads filesystem work, including the RAM that ZFS uses for its ARC cache, to the NAS, which can reduce resource usage on the Proxmox host itself, especially in environments with many virtual machines. Additionally, NFS is a widely supported protocol that works with almost any NAS or file server, which can make it easier to integrate with existing storage infrastructure.

Furthermore, NFS allows for easier sharing of storage resources between different hosts, which can be particularly useful in clustered environments. NFS also offers a number of features such as file locking and client-side caching, which can help improve performance and reduce network traffic.

Overall, while ZFS has clear advantages in terms of data integrity and reliability, moving VM storage to NFS can free up RAM on the host and add compatibility and flexibility for Proxmox users. However, it's important to carefully evaluate your specific needs and use case before making a switch, as both storage setups have their own unique strengths and weaknesses, and network storage typically adds latency.
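As a sketch of what the switch looks like on the Proxmox side: once the NAS exports a share, it can be added as a storage backend with `pvesm`. The server IP, export path, and storage ID below are assumptions for illustration, not values from the video.

```shell
# Assumed values: adjust the NAS IP, export path, and storage ID for your setup.
# Add an NFS share as a Proxmox storage backend for disk images,
# container root filesystems, and templates:
pvesm add nfs nas-nfs \
    --server 192.168.1.50 \
    --export /volume1/proxmox \
    --content images,rootdir,vztmpl \
    --options vers=4

# Verify the storage is active and see its free space:
pvesm status
```

The same storage can also be added from the web UI under Datacenter > Storage > Add > NFS; the CLI and the UI both write to /etc/pve/storage.cfg.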

Timestamps
0:00 Intro
0:15 Prerequisites and Caveats
2:24 Baseline RAM Usage
4:52 Setting Up NFS
7:42 Adding NFS Storage to Proxmox
9:25 Download a Proxmox Template to NFS
11:03 Deploying Containers on NFS
14:13 Checking New RAM Usage
14:28 Additional Thoughts and Wrap Up

/=========================================/

Get early, ad-free access to new content by becoming a channel member or a Patron, or by signing up for the members-only website!

/=========================================/

The hardware in my recording studio is:
✔ Custom PC w/ Ryzen 2600, 32GB RAM, RTX 2070, Assorted Storage

/=========================================/

The hardware in my current home servers:

/=========================================/

✨Find all my social accounts here:

✨Ways to support DB Tech:

✨Come chat in Discord:

✨Join this channel to get access to perks:

✨Hardware (Affiliate Links):
Comments

For everyone watching this video, please do not follow the advice in this video.

First: Switching from local storage to network storage will cause a performance hit: if not in throughput, then certainly in latency.
Second: There is no RAM saved by following this guide. ZFS uses all available RAM for cache. If Proxmox shows you 70% of 32 GB used with only a small VM or two, that means the rest is being used as ZFS cache.
Third: If you continue to create new virtual machines, ZFS will automatically free up memory for them and only use the remainder for cache.
Fourth: 50% free memory is half of your memory wasted. Free resources are wasted resources; let ZFS use whatever is available.

@DBTechYT You should really revisit your setup and read up on how ZFS actually works. ZFS will use 100 GB of RAM if you have 100 GB available; it will use 10 GB if your virtual machines are using the other 90 GB. If you want to know how much memory would be available for creating additional VMs, run arc_summary to see how much RAM the ARC is holding and could give back.

kaspersergej
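The arc_summary check suggested above looks something like this on a host with the ZFS kernel module loaded; a sketch, not output from the video's setup:

```shell
# Summary report of ARC usage (ships with the ZFS utilities):
arc_summary | head -n 25

# Or read the raw kernel counters directly; "size" is current ARC usage
# and "c_max" is the configured ceiling, both in bytes:
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```

Comparing "size" before and after starting a new VM shows the ARC shrinking as it hands memory back.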

I'm going to sound negative here... this isn't really a "ZFS vs NFS" comparison, for a few reasons.

One - that comparison doesn't exist; one (ZFS) is a disk file *system* and the other is networked *storage* (which can be any file system underneath). If anything, this test was ZFS vs the BTRFS you're using on the Synology.
Two - it's local vs networked attached storage. Of course memory use will go down as you're offloading the work (memory, CPU...electricity) to a second device.
Three - It could still be a ZFS test of local vs NAS, just a ZFS NAS that you attached via NFS. (However, you show in the video you're using BTRFS on your NAS.)

Bonus: You could just use BTRFS on your Proxmox and gain back the memory ZFS uses for caching and file checking.
----Note that the memory used by the bare-metal device/hypervisor isn't the same as having it committed to a VM (i.e. TrueNAS). The hypervisor will use basically as much memory as is available to support the ZFS effort by increasing cache, error-checking use, etc., but can give it back to VMs when needed. If you've committed it all to a VM, as many people do, THEN it is much more dedicated memory use. I.e. memory used by Proxmox for its ZFS maintenance isn't lost. It may as well use as much memory as is available; it just makes file transfers faster.

You've also lost all the benefits of ZFS doing this - unless your NFS is also ZFS, and as #3 above that would be a more valid test.

You've really changed a BUNCH of variables, but not any that would show why (if) ZFS is a better choice.

I'd recommend showing local vs NFS (which you did), but with local BTRFS vs NFS on a BTRFS system to keep it "apples to apples"...and then local ZFS vs ZFS on NAS.
However, if you don't *need* the benefits of ZFS, then why use it? It is much more memory dependent. Too many people are on the TrueNAS/ZFS train without any real reason and then concerned because they're short on memory.

StephenDail

You can also just adjust the ARC size and limit the amount of ram ZFS can use. Worked great on my setup.

acollins
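For reference, the ARC limit mentioned above can be capped like this on a Proxmox/Linux host; the 8 GiB value is just an example, not a recommendation from the video:

```shell
# Cap the ZFS ARC at 8 GiB immediately (takes effect at runtime):
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Make the limit persistent across reboots:
echo "options zfs zfs_arc_max=$((8 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # refresh the initramfs so the option applies at boot
```

This keeps local ZFS while putting a hard ceiling on its cache, which addresses the RAM concern without moving storage off the host.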

ZFS will use all memory possible, but it gives back memory to other processes when they need it. Or at least that is how I understood it???

FTLN

ZFS uses a lot of resources because people don't take the time to read up and set the ARC cache size correctly for the amount of storage they have.

ffjmr

Uhm... did you just directly compare ZFS with NFS? They are two entirely different things: a filesystem + volume manager compared to a network sharing protocol... Uhm... maybe I misunderstood you.

JPEaglesandKatz

Why do you set user permissions (especially for guest user) if you want to use the share for nfs?
What are you doing with the free memory?

SebastianMBraun

You have substituted hardware RAID and BTRFS for ZFS... that's cool if that's your choice. But why use NFS? Why not set up your Proxmox server to run hardware RAID and BTRFS locally? Save yourself the headache of mapping NFS shares, the network latency, etc.

louisperugini

I tried to do this and it worked fine using my normal 1Gb home network. However, when I set up a private connection from my Synology's 10Gb NIC to my Proxmox 10Gb NIC, I cannot get access to the shared folder. From the Proxmox server I can ping the Synology's private IP, and I can also SSH to the Synology NAS through that IP, so I know there is no firewall issue. Does anyone know why this works on a 1Gb connection and not a 10Gb connection? I saw in the comments that some people have a 10Gb connection, so how did you get it to work?

timuckotter

A local file system compared with an over-the-wire one... I don't know.
A few things to consider:
First, ZFS uses ARC. You can adjust it. It uses memory like a sponge but gives it back if Proxmox asks for it. It gives it back slowly, though, so if you overprovision too much you may see some crashes.
Second, ZFS is not the only file system you can use. LVM can live on a different partition of the same drive. I mix and match those a lot: things that need an extra boost from memory cache go on ZFS; things like slow network shares, or things I don't plan to move, go on LVM. ZFS with snapshots allows replication and HA on a cluster, which LVM cannot do.
Third, there are issues with NFS itself. It's great for bulk operations, but random IO on a running system is... only really workable with a 10Gb/s network for HDDs, or a few 1Gb/s NICs in a bond or LAG. Even then it is still the slowest type of connection modern hardware can use.
Lastly, Ceph, in all the glory of clusters and HA. Big topic. As I see from the video it's far too advanced for your needs, but worth learning.

hawwestin

In Linux, and in Proxmox for that matter, unused RAM is wasted RAM.
In your case, what is the point of having 32GB of RAM and only using 9?

ZFS will happily take all of the server's RAM, but at the same time it has no problem giving it back if the host / a VM / a CT requests it.
My Proxmox NUC server with 64GB RAM / an 18TB ZFS mirror / 12 VMs and 10 LXC containers sits at 80% used constantly, and I am fine with that.

MRPtech

I tried running my containers on NFS, and over 1-gig connections they seem to run really slow.

itsathejoey

I have an old server computer with a lot of disk space. I think I could create the NFS share on the server through shell commands and later add it in the Proxmox web console. What do you think?

DiomedesDominguez
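The shell-command route mentioned above could look like this on a stock Debian/Ubuntu server; the export path and subnet are assumptions to adjust for your network:

```shell
# Install the NFS server and create the directory to share:
apt install -y nfs-kernel-server
mkdir -p /srv/proxmox-share

# Export it to the local subnet (adjust the path and subnet for your network):
echo "/srv/proxmox-share 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra            # re-read /etc/exports and apply the export
showmount -e localhost  # confirm the export is visible
```

Once `showmount` lists the export, it can be added in the Proxmox web console under Datacenter > Storage > Add > NFS.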

I have my storage and compute split into 2 boxes. I have an R720XD with TrueNAS and an NFS Share to my Hypervisor (multiple XCP-ng nodes)

UltralifeTech

Thank you DB 👍
I want to investigate further the digital trail created by some copy-on-write (COW) systems.
Sure, you get snapshots. But if undeleted copies of interim revisions (with file bits not actually zeroed on flash or non-magnetic media) are hanging around, I am not sure whether that is best for my use case. So ZFS might or might not be staying around with me.
Kindest regards, neighbours and friends.

chromerims

You can set the NFS share up on your Proxmox server itself.

SteveStowell

For this use case, network latency matters more than network speed. You can run KDiskMark prior to moving VM disks (within Proxmox) from local to NFS to compare the disk performance; you'll see the 4k random read/write performance will be very slow. I've noticed that while running the OS on remote storage can be troublesome, using NFS for remote data-directory mounts works much better for general files, configuration, and docker containers that the VM can access. One benefit of your setup is utilizing HA within a Proxmox cluster: since your VM storage is already remote, migrating VMs from one host to another only takes a few seconds.

ryanbell
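The latency point above is easy to check with fio as a CLI alternative to KDiskMark. This sketch measures 4k random reads; the file path is an assumption, and running the same job against a local path gives the comparison:

```shell
# 4k random reads against a file on the NFS mount; run the identical job
# against local storage and compare the reported IOPS and latency.
# The filename below is an example path for an NFS storage named "nas-nfs":
fio --name=randread-test \
    --filename=/mnt/pve/nas-nfs/fio-test.bin \
    --size=1G --bs=4k --rw=randread \
    --ioengine=libaio --direct=1 --iodepth=16 \
    --runtime=30 --time_based --group_reporting
```

The `--direct=1` flag bypasses the page cache so the result reflects the storage path rather than client-side caching.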

Or you could just set the max ARC size.

mfqmozb

This is great stuff. You should mention that you chose Btrfs as the actual filesystem behind the NFS share. I completely forgot about Btrfs; it is also a snapshot-capable copy-on-write filesystem, similar to ZFS. A lot of people shy away from it and just go with ext3/ext4 or ZFS, me included. But I may go this route for my Proxmox build.

joechristl

Hi David... love the content. I'm definitely going to try switching to NFS to reduce my RAM usage.
Side note: have you ever tried having your CTs and/or VMs connect to different VLANs within your network?

kslim