Make Your Home Server Go FAST with SSD Caching


Follow me:

Support the channel:

Music:
Hale – Moment
Delavoh – Always With Me
Meod – Crispy Cone
Steven Beddall – Cuts So Deep
When Mountains Move – Alone Atlast

Videos are edited with DaVinci Resolve Studio. I use Affinity Photo for thumbnails and Ableton Live for audio editing.

Video gear:

Other stuff that I use:

As an Amazon Associate, I earn from qualifying purchases.

Timestamps:
00:00 Intro
01:27 2.5Gbit Networking
03:15 10Gbit Networking
06:26 SATA or NVMe SSDs?
07:51 WD Red SSDs
08:46 Filesystems
09:08 BTRFS
11:30 ZFS
13:04 Mergerfs & Snapraid
14:43 Tiered Caching
17:10 Outro
Comments

mergerfs author here. Thanks for the coverage.

trapexit

Small note about BTRFS: its RAID1 is not actually RAID1, it's a different type of RAID that is confusingly named "1".
To cut to the chase: BTRFS RAID1 (and RAID10, for that matter) can tolerate only ONE disk loss, regardless of the number of disks in the array. Please consider this before committing to BTRFS on your NAS.

Suppose you have two 6TB drives and one 8TB drive in a BTRFS RAID1 (yes, you can use an odd number of disks, and different sizes as well). Now you write a 1TB file to it, for the sake of example. The way BTRFS works, it will write the 1TB file to the drive with the most free space, which is the 8TB drive, and then write a copy of it to ANOTHER drive with the most free space, which is either of the 6TB drives. Let's keep writing 1TB files until the BTRFS RAID1 is full and watch the free space on each disk:
6TB#1 | 6TB#2 | 8TB
6TB | 6TB | 8TB
5TB | 6TB | 7TB
5TB | 5TB | 6TB
4TB | 5TB | 5TB
4TB | 4TB | 4TB
3TB | 3TB | 4TB
2TB | 3TB | 3TB
2TB | 2TB | 2TB
1TB | 1TB | 2TB
0TB | 1TB | 1TB
0TB | 0TB | 0TB
We can see that the biggest disk (8TB) is used the most until its free space becomes equal to the other two drives' free space; from that point on, writes are balanced evenly across all three disks.
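
For anyone who wants to play with that allocation rule, below is a minimal Python sketch of the "two copies on the two drives with the most free space" behaviour. It is a simplified model rather than btrfs's real chunk allocator, and the drive names and sizes are just the ones from the example above.

```python
# Simplified model of BTRFS RAID1 allocation: every chunk gets two copies,
# placed on the two drives that currently have the most free space.
drives = {"6TB#1": 6, "6TB#2": 6, "8TB": 8}   # free space in TB

def write_chunk(drives, size_tb=1):
    """Put two copies of a chunk on the two drives with the most free space."""
    targets = sorted(drives, key=drives.get, reverse=True)[:2]
    for name in targets:
        drives[name] -= size_tb
    return targets

written = 0
# Keep writing while at least two drives still have room for another copy.
while sorted(drives.values(), reverse=True)[1] >= 1:
    targets = write_chunk(drives)
    written += 1
    print(f"{written} TB written, copies on {targets}, free: {drives}")

print(f"Usable capacity: {written} TB out of {6 + 6 + 8} TB raw")
```

Running it reproduces the table above and ends at 10TB usable out of 20TB raw, which is why the 8TB drive is drawn on the most until its free space matches the other two.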

TheTeregor

Thank you! Because of your videos I've learned a lot about server stuff and improved my own server as well!
I already have a NAS with a Corsair NVMe drive, and in 2023 I'll probably be able to switch to a 2.5Gb/s network. Btrfs has been my FS of choice on all my OSes at home; on my server it's running RAID 1 + zstd:3 compression without any problems at all on two Seagate IronWolf 4TB drives.

myghi

You're right that you need a network upgrade to get more sequential performance out of SSD caching, but even on a single gigabit link you're getting a huge performance advantage in random IOPS. I love your videos, keep them coming!

chrisd

Thank you very much for this and all the other videos. Even though I don't use a server (yet?), it is so interesting to see these tutorials, especially the ones about power efficiency. Have a good one! <3

nichtgestalt

15:11 The ZIL (ZFS Intent Log) is part of ZFS's copy-on-write design (it's there to prevent data loss) and only gets used for sync writes (e.g. if you use your storage server for virtual machine storage).
On normal file copy operations, the ZIL never gets used. If you do have a lot of sync writes, you should put the ZIL on a dedicated log device (SLOG), which would usually be an SSD.

The L2ARC is an extension of the ARC read cache, which caches frequently accessed data in your free/unused RAM.
An L2ARC is useful if the data you want cached doesn't fit into your RAM, but it will only speed up operations that access files already on your ZFS pool.
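
To make the sync-write distinction concrete, here is a small Python sketch. The path is just a placeholder for a file on a ZFS dataset, and the snippet only demonstrates generic fsync semantics rather than anything ZFS-specific.

```python
import os

# A "sync write": the application waits until the OS confirms the data is on
# stable storage. On ZFS, this is the kind of write that has to be committed
# to the ZIL (or a dedicated SLOG device) before the call can return.
path = "/tank/vmstore/test.bin"   # placeholder path on a ZFS dataset

with open(path, "wb") as f:
    f.write(b"critical VM data")
    f.flush()              # push Python's buffer into the kernel
    os.fsync(f.fileno())   # sync write: block until the data is persisted

# An async write returns as soon as the data sits in RAM; ZFS batches it into
# the next transaction group, and the ZIL is never involved.
with open(path, "ab") as f:
    f.write(b"bulk file copy data")   # no fsync: async write
```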

Felix-vehs

Your videos are always so helpful and well made. Thanks for sharing this!

Guilherme-qkso

When will you do a video about home automation?

muazahmed

There's a Linux kernel driver that does tiered storage as well; it's called btier. It moves the most frequently used data to the SSDs. I use it and it works great. Thanks for the video!

halbouma

12:10 The memory requirements of ZFS depend entirely on your use case, and the rule mentioned in the video applies to a server with many users (say >20). I use ZFS on my desktop and laptop (Ubuntu) and I limit the ARC cache to 20-25% of my RAM, mainly to save some time starting VMs. ZFS will free up cache memory if programs or VMs need it. On my 16GB desktop I limit the cache to 3-4GB; on my 8GB laptop I limit it to 1.5-2GB. On my 2003 backup server (Pentium 4; 1C/2T; 3.0GHz) with 1GB of DDR (400MHz) I did not set limits, but FreeBSD/OpenZFS limits the cache to 100-200MB there.

Currently I use a 512GB SP NVMe SSD (3400/2300MB/s) and a 2TB HDD backed by a 128GB SATA SSD cache (L2ARC and ZIL). Often I run the datasets on the NVMe SSD with only the metadata cached (ARC), because full caching only speeds up disk IO by 10-20%. That small difference is due to the fast NVMe SSD and my slow Ryzen 3 2200G, which needs a relatively long time to compress and decompress the records. I'm not complaining, since I boot e.g. Xubuntu 22.04 LTS in ~6 seconds mainly from the cache, or in ~7 seconds directly from the NVMe SSD.

Note that all my data storage and all transfers of changed records during an incremental backup are lz4-compressed.

The ZFS snapshots on my desktop saved me twice from hacks I experienced this year.
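
If you want to cap the ARC the same way, here is a minimal sketch for Linux/OpenZFS. The 25% fraction is just the value from the comment above; zfs_arc_max is the real module parameter, but treat the rest as an illustration and adjust it to your own setup.

```python
import os

# Compute a zfs_arc_max value (in bytes) as a fraction of installed RAM.
# The 25% fraction mirrors the comment above; tune it for your workload.
ARC_FRACTION = 0.25

total_ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")  # bytes
arc_max = int(total_ram * ARC_FRACTION)

print(f"Total RAM : {total_ram / 2**30:.1f} GiB")
print(f"ARC limit : {arc_max / 2**30:.1f} GiB")
# Put this line into /etc/modprobe.d/zfs.conf and reboot, or write the value
# to /sys/module/zfs/parameters/zfs_arc_max to apply it immediately:
print(f"options zfs zfs_arc_max={arc_max}")
```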

bertnijhof

I have been using Btrfs over mdadm RAID-6 on several servers for many years now. Apart from the occasional dead drive that needed replacing, this has worked without problems. I did switch to combining smaller (1TB) partitions from each of the 6 drives into RAID arrays, then combining those arrays with LVM and putting Btrfs on top of that. The smaller partitions have the advantage that I can run a RAID check on them overnight, instead of one huge check that starts early Sunday morning and doesn't finish until very late in the day, making access very slow the whole time. Of course some people will roll their eyes at this layering and the loss of speed, but for me and my usage pattern that is not a concern.

anthonvanderneut

Thanks for your tips and videos. I like your channel because it helps me in my daily life as a non-programmer. Greetings from southern Germany!

NN-ucfh

A better way to add disks to ZFS is to add another vdev (array) to the pool. Yes, you should not simply add a single disk, but by adding pairs of disks, or even another raidz vdev with several, you can grow your storage pool without the kind of gymnastics you outlined. ZFS will stripe across the vdevs in a pool.

It is a trade-off in convenience and cost (it requires multiple disks), but what you get is security. ZFS is essentially designed to force the user to do things the "safe" way.

This also lets you avoid the mergerfs-on-ZFS setup you alluded to, which sounds like a bad idea.
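
As a sketch of what that looks like in practice: the pool name "tank" and the device paths below are placeholders, and the snippet just shells out to the standard zpool commands, so treat it as an illustration rather than a recipe.

```python
import subprocess

# Grow a ZFS pool by adding a whole new mirror vdev (a pair of disks) instead
# of trying to bolt a single disk onto an existing vdev.
POOL = "tank"   # placeholder pool name
NEW_DISKS = [   # placeholder device paths
    "/dev/disk/by-id/ata-NEW_DISK_1",
    "/dev/disk/by-id/ata-NEW_DISK_2",
]

# 'zpool add' appends the vdev; ZFS then stripes new writes across all vdevs.
subprocess.run(["zpool", "add", POOL, "mirror", *NEW_DISKS], check=True)

# Show the resulting layout: the pool should now list an additional mirror.
subprocess.run(["zpool", "status", POOL], check=True)
```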

ciaduck

Not really mentioned in the video, but the cache in Unraid is read/write only while the files are in the cache pool. Once the files have been moved to regular storage, you have to move them back to the cache manually. I would expect tiered storage to do both read and write caching, but Unraid only does write caching.

kevinwuestenbergs

StableBit DrivePool now has an SSD caching option: files saved to the pool are written to the SSD first and then moved to HDD afterwards.

shawn

This is timely for me. Thank you.
Will watch tonight.

chromerims

Wolfgang, do you think you could make a video about your current OS choice for your server? When you were upgrading to your server rack, you said that now that you have CMR drives you would try TrueNAS, but I see you're using Unraid now. It would be great to hear your thoughts on Unraid vs TrueNAS Core/Scale vs Ubuntu Server, and to see the process of switching the OS on an existing server.

bradlee

This is my favorite YouTube channel. Straight to the point, no BS, no irrelevant talk; it hits hard on just the important information.

I found out that despite always getting your videos in my feed, I wasn't subscribed. Fixed that.

baricdondarion

I'm really glad you talked about NAS.
I want to build a medium-sized (50TB) array, but out of M.2 NVMe drives, for pure speed at all costs.
In my research I found that due to chipset limitations, the fastest RAID arrays for M.2 use three drives. Although some boards and many M.2 RAID cards have four slots, adding the fourth drive actually lowers performance, so each sub-machine of the array would be limited to three drives. I could plug in four cards with four drives each, but those 16 SSDs wouldn't get the full bus bandwidth (I think?).
So instead of building one server with 16-24 drives, the ultra-fast M.2 NAS would need to be made out of a network of machines, ideally built with the smallest boards with fully supporting chipsets I can find. Well, I know nothing about networking. What would manage this NAS array? Another machine with terabytes of RAM?
Building a high-speed, high-performance array is more complicated than I thought 😢

lizzyfrizzy

I learned more about RAID from this video than from 5 years of running a server.

rasaskitchen