I had VDEV Layouts all WRONG! ...and you probably do too!

Look, it's hard to know which VDEV layout is best for #TrueNAS CORE and SCALE. Rich has built numerous different disk layouts and never really saw much of a performance difference between them, so he's endeavoring to find out why. In this video, we show you the results of our testing on which data VDEV layout is best and whether read/write caches (L2ARC & SLOG) actually make a difference. Spoiler alert: we had to go get answers from the experts! Thank you again @TrueNAS, @iXsystemsInc, and Chris Peredun for helping us answer the tough questions and setting us straight!

**GET SOCIAL AND MORE WITH US HERE!**
Get help with your Homelab, ask questions, and chat with us!

Subscribe and follow us on all the socials, would ya?

Find all things 2GT at our website!

More of a podcast kinda person? Check out our Podcast here:

Support us through the YouTube Membership program! Becoming a member gets you priority comments, special emojis, and helps us make videos!

**TIMESTAMPS!**
0:00 Introduction
0:40 OpenZFS and VDEV types (Data VDEVs, L2ARC VDEVs, and Log VDEVs, OH MY!)
2:21 The hardware we used to test
3:09 The VDEV layout combinations we tested
3:30 A word about the testing results
3:53 The results of our Data VDEV tests
5:22 Something's not right here. Time to get some help
5:42 Interview with Chris Peredun at iXsystems
6:00 Why are my performance results so similar regardless of VDEV layout?
7:30 Where do caching VDEVs make sense to deploy in OpenZFS?
10:04 How much RAM should you put into your TrueNAS server?
11:37 What are the best VDEV layouts for simple home file sharing? (SMB, NFS, and mixed SMB/NFS)
13:27 What's the best VDEV layout for high random read/writes?
14:09 What's the best VDEV layout for iSCSI and Virtualization?
16:41 Conclusions, final thoughts, and what you should do moving forward!
17:10 Closing! Thanks for watching!
Comments

Chris's expertise is astounding, we're glad to have him in the iX Family!

This video is a must-watch for anyone looking to expand their TrueNAS knowledge. As always, fantastic job on the video, 2GT Team!

TrueNAS

Bro.... this cleared up a TON of the same questions I also had. One big thing I learned was that the SLOG "write cache" is not used for SMB shares, which is how I use my NAS at home; but if my VMs were writing to the same pool, then maybe it would benefit from that. The other thing I learned was that "write every 5 seconds" is really "up to 5 seconds unless something tells it to write". I found this super helpful. I'm glad you asked all the same questions.
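For anyone curious what "unless something tells it to write" means in practice, here's a minimal Python sketch (not OpenZFS code; the dirty-data threshold is purely illustrative) of how async writes collect in a transaction group and get flushed either on the txg timer (zfs_txg_timeout, 5 seconds by default) or earlier when enough dirty data piles up:

```python
# Toy model of OpenZFS transaction-group flushing (illustration only).
TXG_TIMEOUT_S = 5              # default zfs_txg_timeout: flush at least every ~5 s
DIRTY_FLUSH_BYTES = 1 << 30    # assumed dirty-data trigger (1 GiB), not a real default

def simulate(writes):
    """writes: list of (seconds_since_start, bytes_written). Prints each flush."""
    txg_start, dirty = 0.0, 0
    for ts, nbytes in writes:
        dirty += nbytes
        timed_out = (ts - txg_start) >= TXG_TIMEOUT_S
        too_dirty = dirty >= DIRTY_FLUSH_BYTES
        if timed_out or too_dirty:
            print(f"t={ts:5.1f}s  flush {dirty / 2**20:6.0f} MiB "
                  f"({'timer' if timed_out else 'dirty data'})")
            txg_start, dirty = ts, 0

# Light SMB traffic rides the ~5-second timer; a heavy burst forces an early flush.
simulate([(1.0, 50 << 20), (5.5, 50 << 20), (6.0, 900 << 20), (6.1, 200 << 20)])
```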

SteveHartmanVideos

Coming from a classic RAID background in IT, I support the use of RAID10 (a stripe of mirrors) or its equivalent. Sure, it's 50% usable capacity, but I/O is fast and resilient, plus a drive replacement takes only as long as a single full-disk read and write (since that is all it needs to do), meaning you get back to nominal quicker than with other RAID configs. Just remember that RAID (including RAIDZ) is not a replacement for backups.

Tip: Backups are a complete waste of money, until you need them, then they are worth every single cent ten times over.

Tip 2: if you want to use TrueNAS, you will want to start with at least four drives: two cheap, small ones for the TrueNAS OS (mirrored), and two (or more) large ones for data storage. You cannot use the OS drives for data storage (this is not a bug, it is by design). You can skirt this a bit by virtualizing the TrueNAS OS (installed to a virtual disk) and passing the physical data drives through to the TrueNAS VM to use directly (just don't use the VM capabilities of TrueNAS SCALE in that case, as nested VMs are a whole new level of WTF). Took me a long time to fully understand this; hope it saves someone on the internet some time.
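To put rough numbers on the "50% usable capacity" trade-off above, here's a quick back-of-the-envelope sketch (raw numbers only; real pools lose a bit more to metadata, padding, and the usual keep-it-under-80%-full guideline):

```python
# Rough usable capacity for a single-vdev pool of n identical disks (illustration only).
def usable_tb(n_disks, disk_tb, layout):
    if layout == "mirror":   # striped 2-way mirrors: half the raw space
        return (n_disks // 2) * disk_tb
    if layout == "raidz1":   # one disk's worth of parity
        return (n_disks - 1) * disk_tb
    if layout == "raidz2":   # two disks' worth of parity
        return (n_disks - 2) * disk_tb
    raise ValueError(f"unknown layout: {layout}")

for layout in ("mirror", "raidz1", "raidz2"):
    print(f"6 x 4 TB as {layout:7}: ~{usable_tb(6, 4, layout)} TB usable")
# Mirrors give ~12 TB (50%) vs ~20 TB for RAIDZ1 and ~16 TB for RAIDZ2; the trade
# is capacity for IOPS and the quick read-one-copy-one resilver noted above.
```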

stevedixon

This is fantastic stuff!
I have noticed the performance improvements by throwing more RAM in a box. My friends complain that ZFS is a RAM hog. I just tell them that if you spent money on memory, wouldn't you want the server to use it all? By watching ZFS use up all the memory, I know that it is respecting my hard-earned dollars and putting that memory to work. I never want my resources sitting idle while performance suffers.

Thanks for this video!

ScottPlude

This is probably one of the best videos on TrueNAS on YouTube. I've learned a lot, thanks!

lumpiataoge

Concise and straightforward explanations. My TrueNAS server has 24GB of RAM, and I am using it primarily for backups of my home lab VMs and a network share for the occasions I decide to stash data away. Now I know that I could probably pull that SSD and use it for something else. Unfortunately, you have now planted the seed that I might want to reconfigure my VDEVs into 2 mirrors, versus a Z1. May have to dedicate an afternoon to a new project!

wagnonforcolorado

Huge thanks to you and Chris for explaining all this for me, exactly what I needed as I am setting up my first TrueNAS.

GreySectoid

Great video, and so thankful for the interview with Chris. Amazing.

murphy

Did those tests a few years ago with NVMe and came to the same conclusion and explanation. For an enterprise with heavy data access you will need those caches, but for labs or small enterprises the best optimization is to ADD MORE RAM. That's it! Once you understand this, everything is quite simple.

Traumatree

Dropped in to add this comment; really like the style of this!! Don't get the expected results? Go get a real expert to explain it in plain English with real-world examples. Very useful and informative.

simon_bingham

FYI: Optane NVMe (U.2) is basically ideal for a SLOG. I use two of the cheap 16GB NVMe versions in a stripe to decent effect (boosted ~30MB/s sync writes to ~250MB/s, until they fill). I am more concerned about unexpected power loss than about those drives failing; obviously a mirror would be wiser.
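Some back-of-the-envelope math on that setup (a sketch using the commenter's quoted numbers; the real limit is how much dirty sync data ZFS keeps in flight, not the raw device size):

```python
# How long a 2 x 16 GB SLOG stripe lasts at the quoted sync-write rate,
# and how much log space a single ~5-second transaction-group window needs.
slog_bytes = 2 * 16e9      # two 16 GB devices, striped
sync_rate = 250e6          # ~250 MB/s of sustained sync writes (quoted above)
txg_window_s = 5           # default txg timeout; the SLOG only has to cover
                           # a few seconds of in-flight writes, not hours
print(f"time to fill the stripe outright: ~{slog_bytes / sync_rate:.0f} s")
print(f"log space used per {txg_window_s} s window: ~{sync_rate * txg_window_s / 1e9:.2f} GB")
```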

foxale

As someone in the process of upgrading my TrueNAS SCALE hardware, I found this very helpful. Thank you!

stefanbondzulic

Note that, like most of these videos, the ZIL/log is being confused with a write cache.
It's frustrating to see iX reps reinforce the misunderstanding.

It is not a write cache, it's a write backup. As long as the write completes normally, the ZIL will never be read at all. ZFS's write cache is in RAM (transaction groups); the ZIL or SLOG backs that up for sync writes in case the RAM copy is lost.
A poorly designed SLOG will slow your pool down: since it's NOT a write cache, you are adding another operation.
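A toy model of that write path (not real ZFS internals; the latencies are made-up placeholders) may help: a sync write is acknowledged once it is in RAM and logged, the data vdevs are later written from RAM, and the log is only read back after a crash, which is why a slow log device just adds latency:

```python
from dataclasses import dataclass, field

@dataclass
class ToyPool:
    log_latency_ms: float                          # latency of the ZIL/SLOG device
    ram_txg: list = field(default_factory=list)    # pending transaction group (the real "write cache")
    zil: list = field(default_factory=list)        # intent log: backup copy of sync writes

    def sync_write(self, record):
        self.ram_txg.append(record)    # normal in-RAM write path
        self.zil.append(record)        # the *extra* operation: log it for safety
        return self.log_latency_ms     # ack time is dominated by the log device

    def txg_flush(self):
        # Data reaches the data vdevs from RAM; the ZIL is discarded, never read.
        flushed, self.ram_txg, self.zil = self.ram_txg, [], []
        return flushed

    def crash_replay(self):
        # The only time the ZIL is read: replaying records whose RAM copy was lost.
        return list(self.zil)

for name, pool in [("ZIL on spinning data disks", ToyPool(8.0)),
                   ("fast dedicated SLOG", ToyPool(0.02)),
                   ("slow 'bargain' SLOG", ToyPool(40.0))]:
    print(f"{name:27}: sync write acked in ~{pool.sync_write('rec'):g} ms")
```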

artlessknave

This was extremely helpful.
I'm in the process of doing my homework to migrate my home server to TrueNAS and was planning to spend money on SSDs for caching. Now I know to get more memory instead.

fredericv

10:44 "mostly it's coming down to the performance you're after for your workload."

Thank you! I have been shouting my damn lungs out about this for nearly a decade now. Cache in memory is meant to act as a way to not have to reach down to the disk. This was really irksome when people were like, "You need 1GB of memory per 1TB of total storage on your ZFS pool!" No. You don't. That's dumb. Closer to the truth would be that you need 1GB of memory per 1TB of DATA in your pool, because it takes exactly 0MB of memory to track empty storage. A better metric would be: you need more memory if your ZFS ARC hit ratio regularly drops below about 70% and the performance hit is starting to irritate you.
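If you want to check that hit-ratio rule of thumb on your own box, here's a small sketch (assumes Linux OpenZFS such as TrueNAS SCALE, which exposes ARC counters in /proc/spl/kstat/zfs/arcstats; CORE/FreeBSD exposes them via sysctl kstat.zfs.misc.arcstats instead). Note the counters are cumulative since boot, and the 70% threshold is just the commenter's rule of thumb:

```python
# Read cumulative ARC hits/misses and report the hit ratio (Linux OpenZFS only).
def arc_hit_ratio(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:        # skip the two kstat header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    hits, misses = stats["hits"], stats["misses"]
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

if __name__ == "__main__":
    ratio = arc_hit_ratio()
    print(f"ARC hit ratio since boot: {ratio:.1f}%"
          + ("  -> consider more RAM" if ratio < 70 else ""))
```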

One thing I don't like is how people, even people in the know, talk about the SLOG. It's not a write cache, it's a separate ZFS intent log. The ZFS intent log (ZIL) exists in every pool, typically on the data disks themselves. It's essentially the journal found in every other journaling file system: before you write data to the disk, you write that you're starting an operation, then you write the data, then you tell the journal that you've written the data. ZFS does the same thing, though it actually writes the data to the ZIL as well.

When people talk about caching writes, they're usually thinking of something like battery-backed storage on RAID controllers. ZFS will never do that exact thing. When you add a SLOG to a pool, your ZIL basically moves over there. This takes I/O pressure off your spinning-rust data drives and puts it on another drive. To that end, if you have a pool of spinning rust and you add another spinning rust drive as the SLOG, you will see write performance increase, just not the massive increase you might expect from a typical write cache. Your spinning-rust data drives will absolutely thank you in the long run.

praecorloth

TL;DR: the average user doesn't push things hard enough to notice that, yes, there is a difference in reads depending on your VDEV layout.

LackofFaithify

Dang! Thanks Rich. I definitely got so many of my burning questions answered.

bobjb

5:03: That's because a SLOG is not exactly a write cache. It will only speed up synchronous workloads, and mostly benefits random writes, not sequential ones.

kungfujesus

Wow! Such an informative video! Thank you so much!

mariozielu

Back in my Oracle DBA + Unix sysadmin days, we had an acronym: SAME (Stripe And Mirror Everything). It kinda still applies, especially with spinning disks. Disk is cheap; mirror everything if you're concerned about write performance and fault tolerance. Of course, if your data sets can fit on SSD or NVMe, then do that and get on with your life (I would still mirror it, though).

adam