Tuesday Tech Tip - Accelerating ZFS Workloads with NVMe Storage

Every second Tuesday, we release a tech tip video covering a range of topics related to our Storinator storage servers.

Previously, we released our plan for the newly designed Stornado 2U all-flash storage server. Brett talked about our plan for the SATA version, with an NVMe version coming in early 2023. Brett also explained how you can incorporate NVMe storage into your workflow now (if you don't want to wait until 2023). You can check out those videos linked below.

Brett is back this week to talk more about NVMe storage. Specifically, we are talking about accelerating a ZFS storage pool using NVMe devices. Brett covers ZFS and special vdevs, and shows a comparison between NVMe and HDD metadata performance.
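For readers who want to try this themselves, a special vdev is added to an existing pool with `zpool add`. A minimal sketch, assuming a hypothetical pool named `tank` and two placeholder NVMe device paths (these are illustrative, not the ones used in the video):

```shell
# Add a mirrored special vdev for metadata; mirroring matters because
# losing the special vdev loses the whole pool (names are placeholders)
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally route small data blocks (here, up to 64K) to the special vdev too
zfs set special_small_blocks=64K tank

# Confirm the new vdev appears in the pool layout
zpool status tank
```

`special_small_blocks` is optional; at the default of 0, only metadata lands on the special vdev.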

Chapters:
00:00 – Introduction: Accelerating Your ZFS Workload with NVMe Storage
01:20 – Comparing Two ZFS Storage Pools
02:00 – Running Listing Tests in the Terminals with Performance Stats
03:50 – Understanding Metadata and Metadata Performance
05:55 – Comparing HDD vs NVMe Time Performance Listing 1K Files
08:20 – Comparing HDD vs NVMe Time Performance Listing 10K Files
08:55 – Comparing HDD vs NVMe Time Performance Listing 50K Files
09:04 – Comparing HDD vs NVMe Time Performance Listing 100K Files
09:33 – Comparing HDD vs NVMe Time Performance Listing 500K Files
10:30 – Comparing HDD vs NVMe Time Performance Listing 1000K Files
11:27 – Outro
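The listing comparisons in the chapters above boil down to timing `ls` in directories of increasing size. A rough, reproducible sketch (the path and file count are placeholders, and this measures whatever filesystem backs the directory, not necessarily ZFS):

```shell
# Create a scratch directory with 10,000 empty files; the video scales this
# pattern up to 1,000K files
DIR=$(mktemp -d)
touch "$DIR"/file{00001..10000}

# Time an unsorted listing; with a cold cache this is dominated by metadata
# reads, which is exactly what a special vdev accelerates
time ls -f "$DIR" > /dev/null

# Sanity-check the number of entries created
ls -f "$DIR" | grep -c '^file'
```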
--

Check out our videos about the Stornado and NVMe storage:

#45drives #storinator #stornado #storageserver #serverstorage #storagenas #nasstorage #networkattachedstorage #proxmox #virtualization #cephstorage #storageclustering #virtualmachines #cephcluster #storagecluster #ansible #prometheus #samba #cephfs #allflash #ssdstorage #ssdserver #allflashserver #allflashstorage
Comments

Document management servers actually do frequently have over 100K files in one directory; massive workloads do exist. A human may not create them directly, but applications frequently do.

midnightwatchman

Would having a SLOG and a cache device still make sense when there's a special vdev in the pool?
I'm building a compact storage server that fits 64 GB of RAM, two NVMe drives, and four HDDs. I could imagine partitioning the NVMe drives equally so I have everything mirrored plus a striped cache. Would that be useful? What is a good way to measure this?

HoshPak
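A hedged sketch of the layout this comment describes, assuming two hypothetical NVMe devices each split into two partitions; all device names, partition sizes, and the pool name `tank` are placeholders:

```shell
# Split each NVMe drive into a partition for the special vdev mirror and a
# partition for cache (sizes and device names are illustrative only)
sgdisk -n 1:0:+100G -n 2:0:0 /dev/nvme0n1
sgdisk -n 1:0:+100G -n 2:0:0 /dev/nvme1n1

# Mirror the first partitions as a special vdev; losing it loses the pool
zpool add tank special mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# Stripe the second partitions as L2ARC cache; cache devices need no
# redundancy, since losing one only costs cached copies
zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2
```

As for measuring: `zpool iostat -v 1` shows per-vdev throughput while a workload runs, which makes it easy to see which vdevs are actually doing the work.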

Stinks that you lose the whole pool if the special vdev mirror dies. I'd want to RAID-Z2 the special vdev as well.

StephenCunningham

Thank you for the explanation. You mentioned that the metadata is stored on the disks and that an NVMe will help speed that up. Would you recommend adding one to an all-flash storage pool?

tsupra

Do you usually run some performance metrics on your customers' machines once they have been built out?
It feels like you could easily let the same tools run in the background to generate some example "load at 10 am might look like this" data, which should easily show the differences.

For StarWind vSAN I used DiskSpd, which seems to have a Linux-port Git repo (YT doesn't like links; it's the first result on Google).

nc

Great demo. What are the risks? Do we need to mirror the special device? What happens if it dies?

zparihar

Great vid, but won't "hot" metadata live in your ARC (RAM) anyway? Surely that's the fastest place to have it.

chrisparkin
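On the ARC point above, OpenZFS ships diagnostic tools that show how much metadata the ARC is already absorbing; a quick sketch (exact output fields vary between OpenZFS versions):

```shell
# Summarize ARC usage, including its metadata portion
arc_summary | grep -i metadata

# Sample metadata hits/misses once per second, five times
arcstat -f time,mhit,mmis,mh% 1 5
```

A high metadata hit rate suggests the ARC is already covering the hot metadata, and a special vdev would mostly help cold listings and datasets larger than RAM.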

If you add a metadata vdev to a pool, is it safe to remove it later? Is this a cache, or does no metadata go to the data disks anymore?

Solkre

In my experience, it really accelerates file transfers, especially when doing large backups of entire drives and file systems.

TheChadXperience

I'm curious whether this benefits iSCSI LUNs and VM disks. Say I want to use TrueNAS as an iSCSI storage target for Windows VMs, and I'd also like to use an SR (storage repo) for VM disks to live on.

cyberpunk

Hi, what capacity HDDs and NVMe drives were used for the video? I'm terrible at reading Linux's storage capacity counters. I'm trying to work out a good NVMe capacity to get for my 32TB (raw) pool; is 500GB a good amount?

teagancollyer

I'm honestly quite disappointed that the speedup from the NVMe special device is quite a lot smaller in the larger folders:
500K files: 18/11 ≈ 1.64x

The first examples were nice, a 6x speedup, why not. But a 2x speedup is not so impressive any more, considering that NVMe should normally be 10x faster even at the largest block sizes.

In the iostat output I also see the NVMe being read at often just 5 MB/s. Why is it so low?!

shittubes

Is it good to have a metadata disk even if we use ZFS primarily as a virtualization target?

pivotindia

Unfortunately, it adds a single point of failure when using only one NVMe device, as all data will be gone when the metadata SSD dies.

steveo