INSANE PetaByte Homelab! (TrueNAS Scale ZFS + 10Gb Networking + 40Gb SMB Fail)

Check out our INSANE 1PB network share powered by TrueNAS Scale!
FEATURED GEAR:

👇HOMELAB GEAR (#ad)👇

DISK SHELF (JBOD) + CABLE

RAM

SERVER

SERVER RAILS + CABLE MANAGEMENT

HBA

ENCLOSURE

SWITCH

UPS

Be sure to 👍✅Subscribe✅👍 for more content like this!

Please share this video to help spread the word and drop a comment below with your thoughts or questions. Thanks for watching!

🛒Shop

DSP Website

Chapters
0:00 TrueNAS Scale PetaByte Project
0:48 Unboxing a PetaByte
1:55 Putting drives in NetApp DE6600
4:22 JBOD Power Up
4:47 Wiring Up 40Gb Network
7:00 ZFS SSD Array Install
8:10 TrueNAS Scale Hardware Overview
9:24 Create ZFS Flash Array
10:00 Create PB ZFS Array
11:00 Setup SMB Share TrueNAS Scale
12:30 Map 1PB Network Share
13:05 Moving Files over 40Gb
14:30 40Gb network SMB Windows 11
16:20 Troubleshooting SMB Windows networking performance
19:35 Could it be the EPYC CPU?

#homelab #datacenter #truenas #zfs #homedatacenter #homenetwork #networking

Disclaimers: This is not financial advice. Do your own research to make informed decisions about how you mine, farm, invest in and/or trade cryptocurrencies.

*****
As an Amazon Associate I earn from qualifying purchases.

When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.

Other Merchant Affiliate Partners for this site include, but are not limited to, Newegg, Best Buy, Lenovo, Samsung, and LG. I earn a commission if you click on links and make a purchase from the merchant.
*****
Comments

2:36 Ooooh that perfect drive cube stack!! Wow, 1PB in a single array - you're making my 8x 18TB look tiny.

HomeSysAdmin

You probably need to look into NUMA and QPI bus saturation being the issue on your TrueNAS box, since it's an older dual-socket Xeon setup.
Odds are the QPI bus is saturated when performing this test.
For some context:
I've successfully run single-connection sustained transfers of up to 93Gbit/s (excluding networking overhead on the link) between two Windows 2012 R2 boxes in a routed network as part of an unpaid POC back in the day (2017).
Servers used were dual-socket Xeon E5-2650 v4 (originally) w/ 128GB of RAM, running a StarWind RAM disk (because we couldn't afford NVMe VROC for an unpaid POC).
Out of the box, without any tuning on W2012R2, I could only sustain about 46-50Gbit/s.
With tuning on the Windows stack (RSC, RSS, NUMA pinning & process affinity pinning), that went up to about 70Gbit/s (the QPI bus was the bottleneck here).
Eventually, I took the 2nd socket's processor out of each server to eliminate QPI bus saturation and the pinning/affinity issues, and obtained 93Gbit/s sustained (on the Arista switches running OSPF for routing, the actual utilization including networking overhead was about 97Gbit/s). The single 12C/24T Xeon was only about 50% loaded with non-RDMA TCP transfers. The file transfer test was a Q1T1 test in CrystalDiskMark (other utilities like diskspd or Windows Explorer copies seem to have other limitations/inefficiencies).

For the best chance at testing such transfers, I'd say that you should remove one processor from the Dell server running TrueNAS.
1) Processes running on cores on socket 1 will need to traverse the QPI to reach memory attached to socket 2 (and vice versa).
2) If your NIC and HBA are attached to PCIe lanes on different sockets, that's also traffic that will hit your QPI bus.
3) Processes on socket 1 accessing either the NIC or HBA attached to PCIe on the 2nd socket will also hit your QPI bus.

All of these can end up saturating the QPI and 'artificially' limiting the performance you can get.
By placing all memory, the NIC, and the HBA on only one socket, you effectively eliminate QPI link saturation issues.
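A minimal sketch of how point 2) could be checked on the TrueNAS side, assuming a Linux/TrueNAS SCALE host that exposes sysfs (paths and device names are generic assumptions, not taken from the video). It lists which NUMA node each PCI network interface and SCSI host adapter reports, so you can see whether the 40Gb NIC and the SAS HBA share a socket:

```python
#!/usr/bin/env python3
"""Rough NUMA-locality check for NICs and HBAs on a Linux (TrueNAS SCALE) host.

Reads sysfs to show which NUMA node each PCI network interface and SCSI host
adapter hangs off, so you can tell whether the 40Gb NIC and the SAS HBA share
a socket or force traffic across the inter-socket (QPI/UPI) link.
"""
import glob
import os

def numa_node(device_path: str) -> str:
    """Return the NUMA node recorded in sysfs for a PCI device, if any."""
    try:
        with open(os.path.join(device_path, "numa_node")) as f:
            return f.read().strip()
    except OSError:
        return "unknown"

# Network interfaces: /sys/class/net/<iface>/device points at the PCI device.
for iface in sorted(os.listdir("/sys/class/net")):
    dev = os.path.realpath(f"/sys/class/net/{iface}/device")
    if os.path.exists(os.path.join(dev, "numa_node")):
        print(f"NIC  {iface:10s} -> NUMA node {numa_node(dev)}")

# SAS/SATA HBAs: each SCSI host's device symlink leads back to its PCI parent.
for host in sorted(glob.glob("/sys/class/scsi_host/host*")):
    dev = os.path.realpath(os.path.join(host, "device"))
    # Walk up the tree until we find a directory that carries a numa_node entry.
    while dev != "/" and not os.path.exists(os.path.join(dev, "numa_node")):
        dev = os.path.dirname(dev)
    if dev != "/":
        print(f"HBA  {os.path.basename(host):10s} -> NUMA node {numa_node(dev)}")
```

If the NIC and HBA report different nodes (or -1 on platforms that don't expose the mapping), traffic between them is crossing the inter-socket link, which matches the saturation theory above.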

BigBenAdv

I believe that the Windows Explorer copy/paste is limited to 1 core, so that would be the bottleneck.

Also I think at 14:40 you said the "write cache", but the RAM in ZFS is not used for write cache as far as I know, only for read cache.
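On the single-stream point: a rough, hypothetical sketch (placeholder paths, nothing from the video) of spreading a copy across several threads instead of Explorer's single stream:

```python
#!/usr/bin/env python3
"""Minimal multi-stream copy sketch: copy a directory's files over several
threads instead of one Explorer copy stream. Paths are placeholders."""
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path(r"D:\staging")          # hypothetical local source directory
DST = Path(r"\\truenas\tank")      # hypothetical SMB share
STREAMS = 8                        # number of parallel copy streams

def copy_one(src_file: Path) -> str:
    """Copy one file to the share, preserving the relative directory layout."""
    target = DST / src_file.relative_to(SRC)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, target)  # copies data plus timestamps
    return src_file.name

if __name__ == "__main__":
    files = [p for p in SRC.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        for name in pool.map(copy_one, files):
            print(f"copied {name}")
```

On Windows, robocopy's multi-threaded mode gets the same effect without any scripting; either way, multiple streams sidestep a single-core copy path.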

rodrimora

He must be single. There is no way the wife would allow that much server hardware in the house.

CaleMcCollough

Hey just wanted to say your videos inspired me to purchase a DE6600 and they were invaluable to the decision. The result has been so friggin' good. Extremely happy!

PeterKocic

19:38 "now we have this set up in a much more common-sense [...]" -- I'm a ZFS noob, but is 60 disks in a single Z2 really a good idea? Seems like the odds of losing 3/60 disks would be relatively high, particularly if they all come from one batch of returned drives.

What if it was 6x (RaidZ2 | 10 wide) instead, say? Then it could stripe the reads and writes over all those vdevs too...
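A back-of-envelope comparison of the two layouts being debated; the drive size is an assumption (the description doesn't state the model), and RAIDZ padding/metadata overhead is ignored:

```python
#!/usr/bin/env python3
"""Back-of-envelope RAIDZ2 layout comparison (ignores padding/metadata overhead).
The per-drive capacity is an assumption, not taken from the video."""

DRIVE_TB = 18          # assumed drive size in TB
TOTAL_DRIVES = 60      # one NetApp DE6600 shelf

def layout(vdev_width: int, parity: int = 2):
    """Return vdev count and usable capacity for equal-width RAIDZ vdevs."""
    vdevs = TOTAL_DRIVES // vdev_width
    usable_tb = vdevs * (vdev_width - parity) * DRIVE_TB
    return vdevs, usable_tb

for width in (60, 10):
    vdevs, usable_tb = layout(width)
    print(f"{vdevs} x RAIDZ2, {width} wide: ~{usable_tb} TB usable, "
          f"survives 2 failures per vdev, {vdevs} vdev(s) to stripe across")
```

With these assumed numbers the 6x 10-wide layout gives up roughly ten drives' worth of capacity to extra parity, in exchange for two-disk fault tolerance per vdev and reads/writes striped over six vdevs.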

pfeilspitze

Awesome vid! Thanks g! Picked up another 66TB for my farm.

punxlife

I have a humble homelab, but what would you even realistically need a petabyte storage system for?

Mruktz

A 60-wide RAIDZ2 doesn't make much sense haha. Try 10-wide RAIDZ2 x 6 - that would make much more sense, no?
Maybe you're limited by SMB; have you tried using iSCSI or NFS?
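One way to separate "the 40Gb link is the limit" from "SMB is the limit" is a raw TCP test between the same two machines; iperf3 is the usual tool, and the sketch below is only a dependency-free stand-in with a placeholder address (a single Python stream won't saturate 40Gb by itself, it just sanity-checks that raw TCP comfortably beats the SMB numbers):

```python
#!/usr/bin/env python3
"""Tiny raw-TCP throughput test: run with --server on the TrueNAS box,
then point the client at it. The host address is a placeholder."""
import argparse
import socket
import time

CHUNK = 1024 * 1024          # 1 MiB send/recv buffer
PORT = 5201                  # same default port iperf3 uses

def server() -> None:
    """Accept one connection and report how fast data arrives."""
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total, start = 0, time.time()
            while data := conn.recv(CHUNK):
                total += len(data)
            secs = time.time() - start
            print(f"received {total / 1e9:.1f} GB from {addr[0]} "
                  f"at {total * 8 / secs / 1e9:.1f} Gbit/s")

def client(host: str, seconds: int = 10) -> None:
    """Push zero-filled buffers at the server for a fixed duration."""
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        total, start = 0, time.time()
        while time.time() - start < seconds:
            conn.sendall(payload)
            total += len(payload)
    secs = time.time() - start
    print(f"sent {total / 1e9:.1f} GB at {total * 8 / secs / 1e9:.1f} Gbit/s")

if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("--server", action="store_true")
    ap.add_argument("--host", default="192.168.1.100")  # placeholder TrueNAS IP
    args = ap.parse_args()
    server() if args.server else client(args.host)
```

If raw TCP runs far ahead of the SMB copy speed, the bottleneck is in the SMB/CPU path rather than the network itself.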

philippemiller

Love catching up on your build. You never stop building.

thecryptoecho

You don't plan on using a single Rz2 in production, right? Right? One Rz2 shouldn't be much wider than 8 drives for optimal performance and redundancy. Recovering from a failure with a 60-drive Z2 would take a freaking long time, and the chances are really high that other drives will go boom as well. It has to read all 1PiB after all...
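Rough arithmetic on why the resilver concern matters for a single 60-wide RAIDZ2; every number below is an assumption for illustration (drive size, fill level, sustained throughput), not something measured in the video:

```python
#!/usr/bin/env python3
"""Back-of-envelope resilver time for one failed drive in a wide RAIDZ2 vdev.
All inputs are assumed values for illustration only."""

DRIVE_TB = 18            # assumed drive size
POOL_FILL = 0.80         # assume the pool is 80% full
DISK_MBPS = 150          # assumed sustained per-disk throughput, best case

# Resilvering the replacement drive rewrites roughly that drive's share of the
# allocated data, reconstructed by reading the surviving drives in the vdev.
data_to_rebuild_tb = DRIVE_TB * POOL_FILL
hours = data_to_rebuild_tb * 1e6 / DISK_MBPS / 3600   # TB -> MB, then MB/s -> hours

print(f"~{data_to_rebuild_tb:.1f} TB to rebuild onto the replacement drive")
print(f"best case ~{hours:.0f} hours at a sustained {DISK_MBPS} MB/s")
```

That works out to roughly a day in the best case; real resilvers on a busy, fragmented pool often take several times longer, during which the whole pool is one more double failure away from data loss.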

LampJustin

That's freakin insane. You're out of your mind, DS. That's a ton of storage, and you look like you just came home from the grocery store or something.

electronicparadiseonline

thanks for the demo and info, MegaUpload lol... Have a great day

chrisumali

Thanks for your video. Can you tell me where you bought these disks (they're not available in your shop)?

christiandu

That's awesome!
Right now I have a Supermicro SC836 16-bay with 7x 12TB HDDs and 96GB of RAM.
I'm upgrading little by little, saving money to upgrade my network.

juaorok

Not sure of the differences, but Dell, before EMC, used the same enclosure style as well - the PowerVault MD3060E and other varieties. Though the prices may be a bit different.

samishiikihaku

Hi, love your video!
I noticed when you were in the iDRAC that you were on a Dell 720XD. I am looking at going to 10Gb for my setup and was wondering what 10Gb NIC you have installed?

mitchell

that sound of fans is just relaxing to me

laughingvampire

I am new to your channel. Is there any chance you can do an overall tour of your setup and how you got to where you are?

TVJAY

Just found this channel, but what is your use case for all of this?

notmyname