Fixing our server's biggest flaw



We made a big mistake when setting up our main editing server, Whonnock - it wasn't serviceable. Today we fix that with a brand new machine and brand new Sabrent NVMe drives!

Purchases made through some store links may provide some compensation to Linus Media Group.

FOLLOW US
---------------------------------------------------  

MUSIC CREDIT
---------------------------------------------------
Intro: Laszlo - Supernova

Outro: Approaching Nirvana - Sugar High

CHAPTERS
---------------------------------------------------
0:00 Intro
3:00 Meet the new server
4:00 Specs / Config
6:10 THERMAL PASTE
7:49 Secret screwdriver
8:10 Loading the drives into sleds
10:10 Powering it on
13:53 Outro
COMMENTS
---------------------------------------------------

What do you think of our Nu Nu...wait, how many Nus? Anyways, what do you think of our new server? Let us know below!


Purchases made through some store links may provide some compensation to Linus Media Group.

LinusTechTips

I like how he said "leftover 8 TB drives".

xmchonkyx

Linus and his son messing about with server hardware has to be my favourite type of LTT video.

DrathVader

Every server update seems to start with "our last server update was a bit of a lie..."

adhdengineer

The year is 2341. Linus releases what he calls "the final file server room update"

SavvasDalkitsis

Linus: We can use cheap consumer SSDs
Also Linus: proceeds to put in a Sabrent 8TB SSD, which costs more than my whole computer

Michalosnup

Not lying, server videos are my favourite content on this channel. Not to mention that I've gained a vast amount of server knowledge from these videos.
Also I need to know who is the new editor. 4:42 8:04 😂

Urboyfromfuture

It's always fun watching amateurs learning the lessons that IT pros learned decades ago. Document everything, hot swap is not optional, and if you really care about high availability you need a clustered solution. One day the janky solutions will end in tears.

radman

5:39 I love how he blows the dust up and then away. That's something everyone should do when the case is lying down; otherwise the dust lands right back inside it. Smart. I thought I was the only one.

corner_store_bill

Regarding identifying a failed drive: you can do it using the M.2 serial number, which is visible in the OS.

adamleszek
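
Following up on the comment above: on a Linux host, each NVMe controller's model and serial number are exposed through sysfs, so a short script can dump a serial-to-device map to keep next to the chassis. This is a minimal sketch assuming standard /sys/class/nvme attributes; it won't tell you which physical sled a drive sits in - that mapping still has to be recorded by hand or pulled from the enclosure tooling.

#!/usr/bin/env python3
# Dump NVMe controller model/serial pairs from Linux sysfs so a failed drive
# can be matched to the serial printed on its label. Assumes a Linux host;
# the serial-to-physical-slot mapping still has to be documented separately.
from pathlib import Path

def list_nvme_serials():
    base = Path("/sys/class/nvme")
    drives = {}
    if not base.is_dir():
        return drives  # not Linux, or no NVMe controllers present
    for ctrl in sorted(base.glob("nvme*")):
        model = (ctrl / "model").read_text().strip()
        serial = (ctrl / "serial").read_text().strip()
        drives[ctrl.name] = (model, serial)
    return drives

if __name__ == "__main__":
    for name, (model, serial) in list_nvme_serials().items():
        print(f"{name}: {model}  S/N {serial}")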

I think you should be more precise about the read parity check for ZFS. It's a feature at the filesystem level, not at the RAIDZ level. No RAID controller in the world does a parity check on every read; all of them do background parity scrubs to catch silent corruption, and that includes ZFS RAIDZ. Since the on-read checksum verification happens at the ZFS filesystem level, you could swap RAIDZ for another RAID controller, including GRAID's GPU RAID, and still keep that on-read verification.

leanderyu
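
To illustrate the distinction drawn above (checksum verification on every read at the filesystem layer versus background parity scrubs at the RAID layer), here is a toy sketch. It is purely conceptual and not how ZFS is actually implemented: real ZFS stores checksums in block pointers and can self-heal from redundant copies, while this only detects corruption.

import hashlib

class ChecksummedStore:
    # Toy block store that verifies a checksum on every read.
    # Illustrative only: real ZFS keeps checksums in block pointers and can
    # self-heal from redundant copies; this sketch only detects corruption.

    def __init__(self):
        self._blocks = {}  # block_id -> bytes
        self._sums = {}    # block_id -> sha256 digest

    def write(self, block_id, data):
        self._blocks[block_id] = data
        self._sums[block_id] = hashlib.sha256(data).digest()

    def read(self, block_id):
        data = self._blocks[block_id]
        if hashlib.sha256(data).digest() != self._sums[block_id]:
            raise IOError(f"checksum mismatch on block {block_id} (silent corruption caught)")
        return data

store = ChecksummedStore()
store.write(0, b"project files")
assert store.read(0) == b"project files"  # every read is re-verified
store._blocks[0] = b"project filez"       # simulate bit rot on the raw device
# store.read(0) would now raise instead of silently returning bad data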

I don't know why, but these server builds are always really fun to watch. So much faster than anything I could ever need or afford.

jonmayer

tbf, I kinda feel like Linus should try to see if he can scale the editing servers more horizontally instead of vertically.
Sure, it'll add a bit of complexity, but if that one machine goes down, all the editors are screwed instead of just a bunch of them.

FinlayDaGk

10:27 Those blanks are actually to equalize airflow across all drives.

tylerebowers

Storage requires performance - but it also requires reliability, availability and serviceability. It's been fun (if painful) to watch LTT make mistakes and correct them... Admittedly, for something so vital to your company, one would have inter-site replication.

Currently, my concerns are underlying software issues (as hit previously) and stuff like the server mobo itself failing. It's relatively rare, but you'd have a bad time. A twin canister based approach would probably be more bulletproof, but would still have a midplane to crap out. So, yeah, good replication and redundancy over separated failure domains feels like where I'd go.

Admittedly, I'm basically suggesting getting a second storage array for no additional capacity. But keeping your data is always handy.

mirrorsandstuff

We have a build like this in progress currently using a Dell PowerEdge R7525, a pair of AMD Epyc 7763 CPUs, 1TB of RAM, and 24 of Micron's 30.72TB 9400 Pro U.3 drives. There's so much to learn about tuning these to run in TrueNAS for workloads at this performance level. It's been quite the long haul but I'm happy to say it performs better than anything I've ever seen, even on this channel.

firesyde

A cluster of 3-5 Whonnock servers makes more sense. Each could have a less extreme setup with much more combined CPU/RAM/drive space. A cluster has a lower risk of downtime and spare lanes for more IO devices like NICs/GPUs/accelerators. You'd also have spare CPU for other work, like other network services. If you ever needed more CPU/IO/storage, you just slap another server into the cluster. 45Drives could probably help with that.

edwarddesposito

Been getting into making a homelab. I'll never need something this insane for my home, but it's great information for my already packed brain.

dylanguerrero

I've used StarTech products for years now and have found all their stuff to be incredibly reliable and really good value. The only problem I had was the casing came loose on an HDMI cable, and they replaced it despite it being over 2 years old.

Holycurative

I am loving the server content! Been learning the hard way by diving into the deep end. My team has no one else to configure or maintain our servers...and I'm relatively new at sysadmin type stuff. Lots of limitations to work with and things to learn.

So, standard IT job? 😂

Would love to know more about what you use to stress test your server configuration. Thanks for the show and tell!

Nick
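
On the stress-testing question above: dedicated tools like fio are the usual answer for storage, but as a rough, self-contained illustration of the idea, here is a crude sequential write-then-read throughput check in Python. It is only a sketch (single-threaded, the test file path is hypothetical, and the read-back may be served from the OS page cache), not a substitute for a real benchmark.

#!/usr/bin/env python3
# Crude sequential write-then-read throughput check. Illustrative only;
# a real storage benchmark should use a dedicated tool (e.g. fio) that
# controls block size, queue depth, caching and parallelism properly.
import os
import time

TEST_FILE = "throughput_test.bin"  # hypothetical path on the array under test
BLOCK = 1024 * 1024                # 1 MiB per write/read
COUNT = 1024                       # 1 GiB total

def run():
    payload = os.urandom(BLOCK)

    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(COUNT):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the drives
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):   # NOTE: may be served from the OS page cache,
            pass               # which is one reason dedicated tools exist
    read_s = time.perf_counter() - start

    size_mib = COUNT * BLOCK / (1024 * 1024)
    print(f"write: {size_mib / write_s:.0f} MiB/s, read: {size_mib / read_s:.0f} MiB/s")
    os.remove(TEST_FILE)

if __name__ == "__main__":
    run()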