How Much Memory Does ZFS Need and Does It Have To Be ECC?


ZFS is a COW

Explaining ZFS LOG and L2ARC Cache: Do You Need One and How Do They Work?

Set ZFS Arc Size on TrueNAS Scale

Connecting With Us
---------------------------------------------------

Lawrence Systems Shirts and Swag
---------------------------------------------------

AFFILIATES & REFERRAL LINKS
---------------------------------------------------
Amazon Affiliate Store

UniFi Affiliate Link

All Of Our Affiliates that help us out and can get you discounts!

Gear we use on Kit

Use OfferCode LTSERVICES to get 10% off your order at

Digital Ocean Offer Code

HostiFi UniFi Cloud Hosting Service

Protect your privacy with a VPN from Private Internet Access

Patreon

⏱️ Time Stamps ⏱️
00:00 ZFS Memory Requirement
01:32 Minimum Memory ZFS System
03:04 TrueNAS Scale Linux ZFS Memory Usage
04:03 ZFS Memory For Performance

#TrueNAS #ZFS
Comments

This will be a common video for all newbies to look up.

Jimmy_Jones

Finally, good advice without fearmongering. There is so much fearmongering around ZFS for some reason.

marshalleq

Easily the most informative video/content I've seen yet on TrueNAS. Thanks for sharing this!

bdhaliwal

Thanks for the comment about the Linux 50% rule with ZFS. zfs_arc_max is a critical setting to adjust.
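For anyone who wants to act on that, here is a rough sketch (mine, not from the video) of computing and applying zfs_arc_max on a generic Linux/OpenZFS box. The 25% fraction is an arbitrary placeholder, and the module-parameter paths are the standard OpenZFS-on-Linux locations rather than anything TrueNAS-specific:

#!/usr/bin/env python3
# Sketch: compute a zfs_arc_max value as a fraction of physical RAM (Linux).
# ARC_FRACTION is a placeholder target, not a recommendation from the video.
ARC_FRACTION = 0.25

def mem_total_bytes() -> int:
    # /proc/meminfo reports MemTotal in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemTotal not found in /proc/meminfo")

arc_max = int(mem_total_bytes() * ARC_FRACTION)
print(f"zfs_arc_max = {arc_max} bytes ({arc_max / 2**30:.1f} GiB)")
# Apply at runtime (root required, lost on reboot):
#   echo <value> > /sys/module/zfs/parameters/zfs_arc_max
# Persist it with "options zfs zfs_arc_max=<value>" in /etc/modprobe.d/zfs.conf,
# or on TrueNAS SCALE the same echo is commonly run from a post-init script.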

edwardallenthree

Thanks for the easy-to-understand ZFS guide 👍

Tntdruid

Nicely explained; however, I am still not clear: if ECC is not strictly required and data integrity is still there without it, what precisely is the benefit of ECC? Or should I ask, in what situations would a non-ECC system fail where an ECC system would not?
Thanks for the video, please keep uploading more great content!

healthy

Great work, Tom. Good onya. Just watching this now because you posted it again in my YT timeline. 😊

ashuggtube

Thank you so much for bringing light to this, and I also love the new frequency of high-quality content!

nixxblikka

It took me days of research to come to these same conclusions a few months ago; thanks for setting the record straight!

Ecker

A nice, calm discussion. Thanks for a well-reasoned argument about memory use in ZFS. In my shop, we have a general rule of thumb of 128GB of memory per 100TB of zpools served. In other words, if I have a 200TB zpool, the server managing it will have 256GB of memory. We get very good performance this way, with most of the memory dedicated to ZFS, which is what you want.
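That rule of thumb is easy to turn into a quick calculation. A small sketch of my own (the power-of-two rounding is my assumption, not part of the commenter's rule):

def recommended_ram_gb(pool_tb: float, gb_per_100tb: float = 128.0) -> int:
    # Rule of thumb from the comment above: ~128GB of RAM per 100TB of zpool served.
    raw = pool_tb * gb_per_100tb / 100.0
    ram = 64
    # Round up to a realistic server memory size (64, 128, 256, 512, ... GB).
    while ram < raw:
        ram *= 2
    return ram

print(recommended_ram_gb(200))  # 256, matching the 200TB example above
print(recommended_ram_gb(450))  # 1024, rounded up from 576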

paulhenderson

Thank you for these explanations about ZFS.

jonathanchevallier

I appreciate your videos so much. Thank you for your hard work. You are my hero.

henderstech

Well, to be honest, I'm a bit disappointed by the video. I would have expected some benchmarks showing when TrueNAS becomes unstable/unusable below a certain amount of memory. Or you could take an ECC system and a non-ECC system, overclock the RAM on both until it's unstable, and see what it does to your data.
This video doesn't show a lot, and I'm sure it didn't require a lot of work.
What happens if you run TrueNAS with 2GB of RAM? Or even 1GB?
What happens if you run TrueNAS with 8GB (the bare recommended minimum) but with 100TB+ of storage and some load? How does it affect read and write performance?
How is resilvering affected by a lack of memory?

All these tests would be useful and interesting to watch, and would also offer a definitive answer to the question we see so many times on the forums: "How much memory do I need for my system?"

Okeur

Great vid 👍
My brain read the title as: *"How much money does ZFS need?"*
Kindest regards, friends.

chromerims

My ZFS memory usage is occasionally measured in MB, not GB. My use case is running VMs on an Ubuntu desktop, and I have only one pair of hands to keep the VMs occupied. My hardware is cheap: Ryzen 3 2200G; 16GB RAM; 512GB NVMe SSD; 2TB HDD supported by a 128GB SATA SSD as cache. My 3 data pools are: the NVMe SSD (3400/2300MB/s) for the most-used VMs; a 1TB partition at the beginning of the HDD with a 100GB L2ARC and 5GB LOG for VMs; and a 1TB partition at the end of the HDD with a 20GB L2ARC and 3GB LOG for my data. The L2ARC and LOG partitions together add up to the 128GB again :) I capped the in-memory cache (L1ARC) at 3GB.

My NVMe SSD data pool runs with primarycache=metadata, so I don't use the in-memory L1ARC for caching data records. My NVMe SSD access doesn't gain very much in performance from the L1ARC: the boot time of e.g. Xubuntu improves from ~8 seconds to ~6.5 seconds. My metadata L1ARC size is 200MB, saving space to load another VM :)

I have a backup server with FreeBSD 13.1 and OpenZFS; it runs on a 2003 Pentium 4 HT (3.0GHz) with 1.5GB of DDR, of which ~1GB is used :) So OpenZFS can run in 1GB :)

The VMs on the HDD run from L1ARC and L2ARC, so basically they boot assisted by the L2ARC and afterwards run from the L1ARC. After a couple of seconds it is like running the VMs from a RAM disk or a very fast NVMe SSD :) :) Here the VMs fully use the 3GB (lz4 compressed), say 5.8GB uncompressed, and my disk I/O hit rates for the L1ARC are ~93%. Using a 4GB L1ARC I can get that to ~98%.

For all these measurements I use Conky in the VMs and in the host. Conky also displays data from /proc/spl/kstat/zfs/arcstats and from the zfs commands.
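For anyone who wants the same numbers without Conky, here is a minimal sketch (mine, not the commenter's setup) that reads /proc/spl/kstat/zfs/arcstats directly; note the hit/miss counters are cumulative since boot, so a live hit rate needs deltas between samples:

#!/usr/bin/env python3
# Minimal ARC summary from the kstat file mentioned above (Linux/OpenZFS path).
def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            # Data lines look like "<name> <type> <value>"; the two header lines are skipped.
            if len(parts) == 3 and parts[2].isdigit():
                stats[parts[0]] = int(parts[2])
    return stats

s = read_arcstats()
hits, misses = s["hits"], s["misses"]
rate = 100.0 * hits / (hits + misses) if (hits + misses) else 0.0
print(f"ARC size    : {s['size'] / 2**30:.2f} GiB (cap {s['c_max'] / 2**30:.2f} GiB)")
print(f"ARC hit rate: {rate:.1f}% since boot ({hits} hits / {misses} misses)")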

PERFORMANCE:
The relatively small difference between using the NVMe SSD alone and the NVMe SSD + L1ARC is probably caused by the second-slowest Ryzen CPU available. I expect most of the boot time goes to CPU overhead and decompression, so reading from NVMe instead of memory does not add much more delay. That would change in favor of the L1ARC with a faster CPU, e.g. a Ryzen 5 5600G.

More memory would make tuning the L1ARC easy: just make it, say, 6GB. It would not make the system much faster, since the L1ARC hit rates for disk I/O are already very high in my use case. However, I could load more VMs at the same time.

The 2TB HDD is new. In the past I used 2 smaller HDDs in RAID-0. They were older, slower HDDs, but the responsiveness felt better; I expect that while one HDD moved its head, the other could read. Those HDDs had 9 and 10 power-on years, and one of them died of old age, so I don't trust the remaining one anymore for serious work. Another advantage was that my private dataset was stored with copies=2, creating a kind of mirror for that data. Once it corrected an error in my data automatically :) I'm considering buying a second HDD again.

My Pentium backup server has one advantage: I reuse two 3.5" IDE HDDs (320+250GB) and two 2.5" SATA HDDs (320+320GB). It also has one disadvantage: the throughput is limited to ~22MB/s due to a 95% load on one CPU thread. That good old PC gets overworked for about 1 hour per week.

bertnijhof

Great video topic and timely for me! I am in the process of deciding how much to expand my TrueNAS Core usage. I currently only use it for iSCSI (ESXi). I would like to move to editing videos directly off TrueNAS instead of copying all assets to my local machine, so I was curious about the RAM usage - currently running 4x8GB DDR3 ECC registered. I could probably stand to search for some 16GB or 32GB DIMMs.

STS

I'm running ZFS on my desktop. It has 8 NVMe drives, all mirrored in pairs, which results in about 5TB of storage in total (not evenly sized - but I run it for reliability, not optimal speed, and 14/10 GB/s read/write is plainly good enough for me).
It doesn't really matter, since that desktop is a bit beyond most normal setups (5975WX, 512GB ECC RAM, etc.), so it is of only anecdotal value. And yes, that is too much RAM even for ZFS - it only uses about 50-150GB out of the box for those 5TB of storage. So I will need to look into how to tune it to do better caching. ;-)

My file server, on the other hand, is just a trusty old workhorse (an i7 from 2015) that until recently ran Linux md-raid with 16GB of non-ECC RAM. It is a very normal home file server: normal (recycled) PC hardware, running 24/7 for about 8 years without issue. Only the PSU has needed replacement once so far.

It was always running RAID 6 with 8 drives; the last incarnation was 8x9TB. Of course, after a few years that once again became too small.
So a few days ago I replaced the 9TB drives with 18TB drives, and this time I also switched from md-raid to ZFS (RAID-Z2).
What can I say? It just works, at least as well as before. It's a bit faster, since the drives are a bit faster than before. The hardware is old but not super slow, and the memory is modest, but with a 10GbE connection it is still good enough for me.
md-raid certainly stood the test of time in my home, so I can still fully recommend it. With ext4 it is simply very robust.

But now running ZFS of course has its added value. And when the hardware finally dies, I will switch this to ECC RAM, too. Of course.

thisiswaytoocomplicated

Once again, great video! A question for you: you mentioned an S3 target. Are you using MinIO? And if so, how is the performance when it's running on top of ZFS?

zparihar

Timely video as I've just upgraded an old FreeNAS 8 server to TrueNAS. The performance I'm seeing definitely aligns with this video.

HP Gen 7 MicroServer N54L (2x 2.2GHz AMD Turion 64-bit cores), 16GB ECC RAM, LSI 9211-8i SAS controller (PCIe x16 slot), Intel NIC (PCIe x1 slot).

TrueNAS Core 13.0-U5, booting off 250GB Crucial MX500 SSD (internal SATA port).

* RAID-Z1 Pool: 4x 8TB IronWolf Pro 7200 RPM HDD (connected to 1st port on LSI Controller)

* RAID-Z1 Pool: 4x 1TB Crucial MX500 SSD (connected to 2nd port on LSI Controller, via 2.5" 4-bay dock in optical drive bay)

* Mirrored Pool (encrypted dataset): 2x 4TB IronWolf Pro 7200 RPM HDD (connected over eSATA to external 2-bay enclosure).

This is an ancient system, massively underpowered these days, but for home use (i.e. SMB/NFS file sharing - mostly media/movies/TV shows on HDD, plus the occasional git repo or document on SSD) it's still perfect, as it saturates the 1Gbps NIC for pretty much everything (reads AND writes, even from the 4xHDD pool, which has a sequential read rate of 640MB/s).

At idle the system pulls about 55W, and maxes out at about 105W during a 4x HDD scrub. It's nearly silent but stick it in a closet (as mine is) and you absolutely won't hear it.

Even the older and slower N36L can saturate the 1Gbps network with a similar controller/disk setup (I recently swapped out the N36L motherboard for the N54L as a final upgrade!)

The only possible improvement now would be to upgrade the network side of things as that's definitely become the limiting factor, but to be honest for home use there's really no need...

milhousevh

I’ve got a slightly nasty 32GB stick that writes incredibly slowly so would take hours to fill but works well as a cache device though has taken weeks to fill. Now it’s full (ZFS-stats -L) it has improved things beyond what just some smaller faster SSD cache partitions did on their own. So if you have “spare” USB memory sticks and ports no risk in stuffing them in as cache devices. As I only run the machine for hours per weeks persistent cache is good for me.

jms