Building My ULTIMATE, All-in-One HomeLab Server

Today I built the ultimate, all-in-one HomeLab home server to handle everything.

Sliger sent this case to me but asked for nothing in return.

Other 4U Cases

Other Parts

(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)

00:00 - What I want out of a HomeLab Home Server
01:19 - Selecting a case / chassis
02:23 - Use Old case?
02:58 - New or Reuse?
03:33 - Other Case Options (Zack Morris style)
03:51 - Thinking about Hacking this chassis
04:19 - CPU & Motherboard
05:39 - Disassembling
06:48 - Component layout
08:11 - How to get 15 SSDs in here
08:57 - Maybe print some parts?
09:45 - For now, it's jank
10:24 - Test flight
11:02 - Power usage
11:37 - Testing components with an OS
12:18 - Networking
13:02 - Temperature checks
13:30 - Testing GPU
14:45 - SSDs are here
15:19 - Racking Server
15:56 - Weird Gap
16:20 - Selecting the operating system

Thank you for watching!
Comments

Sorry about the mistake of saying 5.25" drives! While researching and testing, I was trying to figure out how many drives I could fit in the Corsair's 5.25" bays, and somehow that got into my script. 🤦‍♂ In the spirit of mixing things up, let me know what you've mixed up before!

TechnoTim

I also hate using 5.25" hard drives. Such a pain ;)

corrpendragon

Hey Tim, I'm hard of hearing, and I just want to say thank you for the time and effort you put into adding subtitles 😊

YakDuck

Your public library might have a 3D printer if you don't want to purchase one. I use my library's printer often.

techaddressed

For an ultimate all-in-one homelab server, go with a hypervisor without even thinking. One solution (nearly) fits all.

Krushx

I went bare metal on my homelab server for a while, because it could do everything I wanted. However, things changed and now I've reinstalled everything on Proxmox. The overhead is low and you have the flexibility to change anything in the future. So I would just install a hypervisor of your choosing.

redhonu

4:40: The x16 slot will only run at x8, and the x8 slot runs at x4, if you are using a feature that shares the same lanes (the x16 slot and an NVMe Gen4 slot, for example).
It's important to know your motherboard's limitations, like how many PCIe lanes you have and which features share them.

hakunamatata
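
To make the lane-sharing math above concrete, here is a minimal Python sketch of a lane budget. The 48-lane total and the device list are illustrative placeholders, not this build's actual block diagram; fill them in from your own motherboard manual.

```python
# Lane-budget sketch. The 48-lane total and the devices below are
# illustrative assumptions, not this board's actual block diagram.
CPU_LANES = 48

devices = [                      # (slot/device, lanes it's wired for)
    ("GPU in x16 slot (drops to x8 when the Gen4 M.2 is populated)", 8),
    ("HBA in x8 slot", 8),
    ("10GbE NIC in x4 slot", 4),
    ("NVMe Gen4 M.2 (shares lanes with the x16 slot)", 4),
]

used = sum(lanes for _, lanes in devices)
for name, lanes in devices:
    print(f"{lanes:>2} lanes  {name}")
print(f"{used}/{CPU_LANES} CPU lanes spoken for; the rest feed chipset, USB, SATA, IPMI")
```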

For years I've run an unRaid server with a Windows 10 VM using GPU and NVMe passthrough that I used for playing games, while the rest of the system handled Docker stuff: Plex, the *arr suite, Home Assistant, and a lot more.

Just make sure you have a Renesas-chip-based USB PCIe card passed through to the VM so you can plug and unplug peripherals without freezing the VM.

blinkitogaming

Don't go with the Samsung 870 EVOs; I made the same mistake (they are consumer drives and wear out quickly!). I replaced all of them with Samsung SM883s (ZFS pool).

ASFokkema
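
A rough endurance calculation shows why the drive swap above matters. In this sketch the daily write volume is an assumption, and the TBW figures are approximate ratings from Samsung's spec sheets:

```python
# Rough endurance comparison. DAILY_WRITES_TB is a guess for a ZFS pool
# with VMs and sync writes; TBW ratings are approximate spec-sheet values.
DAILY_WRITES_TB = 0.5

drives = {
    "870 EVO 1TB (consumer)":   600,   # ~600 TBW rating
    "SM883 960GB (enterprise)": 5256,  # ~3 DWPD x 5 years
}

for name, tbw in drives.items():
    years = tbw / DAILY_WRITES_TB / 365
    print(f"{name}: ~{years:.0f} years at {DAILY_WRITES_TB} TB/day")
```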

I moved from my own giant rack of Dell PowerEdge servers to a single giant server build a while back, because the rack used too much power. For my use case, I found Unraid to be the best base OS for a single-server build. Five years later it's still rock solid and I've never had any major issues. It's kind of on autopilot and it just works.

Docmeir

Also, thanks for the info on the Sliger cases. You just made me spend more money; they look great and are made in the USA for a reasonable price. My Threadripper platform is getting a new home. :)

dieseldrax

I rackmounted my PC a week ago. The chassis was less than (the equivalent of) £100, including rails, and fits my 3 chunky radiators in too. My office is so much cooler and incredibly quiet now.

Sossingro

Great video!
Just be sure to look at your motherboard spec when you're concerned about PCIe lane availability. Motherboards also come with a chipset controller that provides additional PCIe lanes.
Basically, the CPU's PCIe lane count is only half the story.

chedderpop

200 watts idle. Where I live (the Netherlands) that's about €500/year.

evertgbakker
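
The arithmetic behind that estimate checks out; here is a quick sketch, with the electricity rate as an assumption since Dutch household prices vary:

```python
# Back-of-the-envelope check of the yearly cost above; the electricity
# rate is an assumed figure, so plug in your own.
IDLE_WATTS = 200
EUR_PER_KWH = 0.29

kwh_per_year = IDLE_WATTS / 1000 * 24 * 365   # ~1752 kWh
print(f"~{kwh_per_year:.0f} kWh/year -> ~EUR {kwh_per_year * EUR_PER_KWH:.0f}/year")
```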

15:18 When racking servers by myself, I've found that there are usually holes in both the sliding part of the rail and the stationary part (the portion that attaches to the rack itself). Every rail is different, so it always takes some experimentation, but I can put a spare screw/toothpick/pointy thing through both holes so the sliding part of the rail doesn't push back while I get things lined up. I do this on both sides, sticking out by different amounts, so I can line things up one side at a time. Just be sure that the holes you pick in the rail can be reached from the front of the rack.

questionablecommands

Proxmox or XCP-ng are what I'd go with. It's nice not to have projects competing for ports or service configs.

haxwithaxe

Proxmox for the win!
No reason why, I just love it.

atomycal

Great video. Some of your inflections remind me of LGR. I was chuckling.

sanguineel

Same here. 😁👍

Taking a play from my corporate sysadmin world: separate storage and compute boxes.

Going to build a TrueNAS Scale box as the central storage for the entire homelab, then use the 2nd unRAID license to build a fresh compute server. Both will have 10GbE until I can swap for fiber.

CampRusso

Hey Tim, love your videos. Fellow homelabber here, but I can't stress enough that as a homelabber you need to RTFM on the motherboard and CPU. In particular, look at the block diagram in the documentation. Supermicro docs are always good and show how the lanes are used, since some of those 48 lanes have to service USB, SATA, IPMI, etc. Also, the motherboard silkscreen will say things like "x8 in x16" to tell you a physical x16 slot only gets 8 lanes. Don't put that GPU in the first x16 slot if you want those juicy 16 lanes.

simbozoni
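
One way to verify the block-diagram advice above is to check what link width each device actually negotiated. This is a sketch that parses `lspci -vv` on Linux (it assumes pciutils is installed, and some fields may require root), comparing each device's maximum width against its current one, which is how you'd catch a GPU silently running at x8:

```python
import re
import subprocess

# Parse `lspci -vv` and report each device's maximum link width (LnkCap)
# versus the width it actually negotiated (LnkSta).
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device, max_width = None, None
for line in out.splitlines():
    if line and not line[0].isspace():   # unindented line = new device header
        device, max_width = line.strip(), None
    m = re.search(r"LnkCap:.*Width (x\d+)", line)
    if m:
        max_width = m.group(1)
    m = re.search(r"LnkSta:.*Width (x\d+)", line)
    if m and max_width:
        print(f"{device}: capable of {max_width}, running at {m.group(1)}")
```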