My ENTIRE Home-Lab On A SINGLE CPU???


#homelab #selfhosted #ryzenserver

Affiliate links for stuff I used in this video:
---------------------------------------------------
Music (in order):
"If You Want To" - Me
---------------------------------------------------
Gear I Use: (affiliate links)

Recording Gear

Servers and Networking
---------------------------------------------------
Timestamps:
0:00 Intro
0:35 Setup Email Campaigns with Squarespace (today's sponsor)
1:21 The CPU
2:15 The rest of the hardware
4:00 The software I'm running
7:58 Assembly
9:07 Getting started with the setup
10:45 pfSense
13:33 Everything went wrong
15:28 What all I set up in Proxmox
18:54 Jellyfin testing
20:04 Running Cinebench, Jellyfin, and a Minecraft server while pulling lots of power
23:08 Was this a good idea?
Comments:

CORRECTIONS:
- The memory is DDR4 3200 not 2400 (Thanks Ian!)
- I completely forgot to mention the 2TB Samsung SSD for the boot drive. This was also purchased for my new editing PC build.

HardwareHaven

Now you just need to build a hot spare. Then add another for 3-way quorum for storage. And an identical offsite backup machine. Then we're back to square one ;)

JeffGeerling

You know a tech channel is good, reliable, and helpful when the host talks about the problems he encountered himself. Love your work!

jumpmaster

It's pretty incredible what you can run on even modest, modern x86 hardware. For several months, I ran everything on a Pentium Gold G6400 with 32GB of RAM. The host OS was Unraid, with a virtualized Untangle for router/firewall (gigabit fiber) and a virtualized Windows with a GPU passed through as a "Console PC" in my homelab for management/admin... it was running Plex, Pi-hole, a VPN, etc, etc. All of it, with no performance issues. So why did I move away from it? Maintenance... Of course I still wanted to tinker, but with a full household of very connected people, it really constrained my maintenance window to before 8am and after 11pm. Not worth it.

jimkirk

I feel you on that GPU passthrough. It has been the bane of my existence on Proxmox. I've been working on it for the past 8 hours for the GPU on my laptop Intel chip. I worked my way through 3-4 different errors - used SEVERAL different guides that all gave different instructions - and finally got stuck.

ZachariahWiedeman

I think people greatly overestimate how much power they need to run stuff in their homelab. I am also EXTREMELY guilty of this lol. Great stuff!

RaidOwl

This video covers so many things that I want to get back to doing, but I've just been distracted with life and other things. Thank you for showing us your struggles. It helps the rest of us know that we're not alone when we run into hardware challenges. And I do plan on getting back to trying to virtualize all of my appliances on less hardware; it just takes time to overcome the issues while trying to keep everything else in life running too. So, yeah, I feel your struggles and time constraints.

PoeLemic

You pretty much demonstrated a conclusion I've been struggling with for a while... there are some things that CAN be done on a virtual machine, but probably shouldn't be. The two glaring examples I picked out were the router and the NAS. I've been trying for a couple of months now to put as much on my NAS as possible, but found that all too often taking down the NAS to get a container working would cause problems with the NAS itself. I finally removed everything from it but Jellyfin, Syncthing and Nextcloud... and it might get an instance of pihole. Now it sits quietly drawing about 21 watts, storing my stuff and always available when needed. For a router I wanted something with more capability than the typical consumer device, but it needs to be always available so my family can do what they do while I'm playing with stuff. So I chose to buy a Netgate device.

I think it comes down to risk/availability tolerance. If it needs to be (nearly) always available, it shouldn't really be virtualized... or it should be virtualized in a cluster. For a home lab, where a cluster isn't really feasible for most people, you just have to pick your battles. Whichever way you go, thanks for sharing the struggle!

cameronfrye

Hey, Congrats on 100K subscribers !!!

awesomearizona-dino

About the passthrough issues you had at 14:50, I think I had the same issue last year when passing through my GPU. In my case (also B550), enabling Above 4G Decoding in the BIOS would *silently* also enable Resizable BAR support. Manually disabling that, while obviously keeping 4G Decoding on, fixed my issue.

mttkl

It seems I'm always on the hunt to bring power draw down, or to think about services in a different way. I'm with you: I'd not put the firewall/router on the same machine you're trying other things on, unless it's one that is just part of the lab. Overall a neat concept, and I understand the frustration of just trying to get a video done.

ntgm

I am quite confident this kind of setup can be an amazing workstation if you only focus on the storage and computing aspects while keeping the networking on a separate low-power machine.
The most power-efficient machine is one that is spun up only on demand, so for my personal needs, the NAS wouldn't have to run 24/7. What I would need here instead is the means to power it on from afar, i.e. by hooking it up to a Pi-KVM (or hoping for WoL to work properly). Then it's a matter of getting a ZFS-root setup to work, throwing in 2 NVMe drives and a couple of HDDs, and you're off to the races.
You should still have one dedicated NAS for cold-storage backups, as well as one large-capacity external drive you can just unplug and throw into a safe or a remote location, though.
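Hoping for WoL to work is mostly a matter of the NIC and BIOS cooperating; the packet itself is trivial. A magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, so as a rough sketch (the broadcast address, port, and MAC are placeholders, and the sender assumes bash's /dev/udp plus xxd are available):

```shell
# Build the Wake-on-LAN "magic packet" as a hex string:
# 6 bytes of 0xFF followed by the target MAC repeated 16 times
# (102 bytes of payload in total).
build_magic_packet() {
  local mac="${1//:/}"           # strip colons, e.g. "aabbccddeeff"
  local packet="ffffffffffff"    # 6 x 0xFF
  local i
  for i in $(seq 16); do
    packet+="$mac"
  done
  printf '%s' "$packet"
}

# Broadcast it as UDP to port 9. Uses bash's /dev/udp redirection and
# xxd to turn the hex string into raw bytes; both are common but not
# guaranteed on every system.
send_wol() {
  build_magic_packet "$1" | xxd -r -p > /dev/udp/255.255.255.255/9
}
```

A dedicated tool like `wakeonlan` or `etherwake` does the same thing; the sketch just shows there is no magic in the magic packet.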

HoshPak

Regarding your IOMMU issue with the HBA:

For okay-ish IOMMU groups you would need an X570 board rather than B550 or A520.

The X570 I/O hub is basically what AMD used as the I/O die on Zen 2 CPUs anyway, so it has much cleaner IOMMU grouping. X570 as a chipset was developed by AMD themselves, based on Zen 2 silicon, and its I/O hub mimics the one EPYC uses, since EPYC by itself doesn't have a PCH at all.

B550, A520, and for that matter all the 400- and 300-series chipsets, were subcontracted to ASMedia, and ASMedia did not do the best job with their IOMMU groupings.

I am running quite a similar box in my lab (though currently shelved for the summer, as it's not needed at the moment) on ESXi 8 with a Ryzen 9 3900X on an X570 board.
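How good or bad a board's grouping actually is can be checked straight from sysfs. A small sketch; the sysfs root is parameterised only so the function can be pointed at a test directory, and on a real host you would just pass /sys/kernel/iommu_groups:

```shell
# list_iommu_groups: print each IOMMU group and the PCI addresses in it.
list_iommu_groups() {
  local root="${1:-/sys/kernel/iommu_groups}"
  local g d
  for g in "$root"/*; do
    [ -d "$g" ] || continue
    echo "Group ${g##*/}:"
    for d in "$g"/devices/*; do
      # each entry is named after the PCI address, e.g. 0000:01:00.0
      [ -e "$d" ] && echo "  ${d##*/}"
    done
  done
}
```

Feeding each address to `lspci -nns` turns the bare PCI addresses into device names. A device can only be passed through cleanly if everything else in its group is also passed through (or is a bridge).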

itmkoeln

I attempted something similar. AMD IOMMU is always a pain, but these kernel parameters may allow the devices connected to the chipset to be split into separate groups:
iommu=pt pcie_acs_override=downstream,multifunction nomodeset
Also, if you ever plan an AMD Proxmox server, try using a "G" series processor; it will make GPU passthrough so much easier.
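Those parameters go on the kernel command line via the bootloader config. A sketch assuming a GRUB-booted Proxmox install (root-on-ZFS installs boot via systemd-boot instead and put the same string in /etc/kernel/cmdline, applied with proxmox-boot-tool refresh):

```shell
# /etc/default/grub on the Proxmox host
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt pcie_acs_override=downstream,multifunction"

# then apply and reboot:
#   update-grub
#   reboot
```

Worth noting: pcie_acs_override only makes the kernel pretend the devices are isolated; it does not add real hardware isolation between them, so it trades some security for passthrough flexibility.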

elmestguzman

I like the idea of having a hyper-converged home lab. I use LXD containers to run all my services (router, OTA/satellite TV, DHCP, DNS, SMB, IoT automation, VMs for a kubernetes cluster, among other services) on a single node (Ryzen 3700X, X470D4U motherboard). I didn't want to maintain multiple physical servers and took a minimalist approach; heck, I even replaced my GPON modem/ONT with an SFP module connected to my MikroTik switch on its own VLAN. It's been running rock solid for the past year, and the entire process has been very educational. I can try new configurations on my containerized router, and if things go south I just roll back to a working snapshot; plus I can spin up as many virtual routers with as many virtual interfaces as needed to try out new stuff or learn how routing protocols work. I will probably be adding another node or two for high availability.

akachomba

For the IOMMU group issue, you can add an ACS override line to the GRUB config and force the devices into different IOMMU groups. I did the same thing and it worked.

jaskaransandhu

My thinking: start with your information and system needs, and also look at local power prices (HUGE in the UK, Australia, etc.). A lot of stuff can be handled by ultra-low-power solutions such as a Raspberry Pi with attached USB 2.5” storage (SSD or spinners).


I think virtualising your outward-facing firewall/router is too dangerous; the risk of a zero-day virtualisation-layer breach is always there, and incoming packets have to be handled by Proxmox before pfSense gets them, so it's an extra layer of vulnerability of course. You can run a dedicated physical hardware solution for as little as 5-10W of power nowadays, and 20W is common (though you need 64-bit-capable x86 to run pfSense, last time I looked).

Personally, I consider this type of setup useful for a not-always-on lab solution. I've run ESXi for 15 years or so and used to do what you're doing here (I lacked the Linux skills, and initially the ultra-low-powered SBC hardware, to do otherwise). Today I use it as a fire-up-when-I-need-the-lab setup: I use Wake-on-LAN packets to wake the system up, and a script which I shell into the ESXi host and execute to shut it down. It's really effective, and the host runs headless in a cupboard on an 11-year-old Xeon.

Also, for those with archive servers: split things up a bit. I use two small drives for always-available stuff, and the rest are on two storage tanks which are powered down unless archiving or retrieving. I can do backups that way too: a WoL packet wakes the server, MQTT messages notify of boot completion, and the backup or archive proceeds, after which it's powered down (I use an MQTT subscription script on the backup server which puts it into standby or shutdown).
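That wake/backup/shutdown handshake boils down to a tiny dispatcher on the controller. A sketch, where the topic and payload names are invented for illustration and `mosquitto_sub` (from the mosquitto-clients package) provides the subscription:

```shell
# handle_power_msg: map a status payload from the archive server to the
# next action in the wake -> backup -> shutdown cycle described above.
# Payload names here are hypothetical.
handle_power_msg() {
  case "$1" in
    boot-complete) echo "start-backup"  ;;   # server is up, begin rsync/zfs send
    backup-done)   echo "send-shutdown" ;;   # archive finished, power it off
    *)             echo "ignore"        ;;   # unknown message, do nothing
  esac
}

# On the controller it would be fed from an MQTT subscription, e.g.:
#   mosquitto_sub -t homelab/archive/status | while read -r msg; do
#     action=$(handle_power_msg "$msg")
#     ...
#   done
```

Keeping the message-to-action mapping in one function makes the flow easy to test without a broker or real hardware.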

Another thing I do is I have an old half-missing laptop I power up for chomping on things (e.g. compression tasks) in the background – I can script its wake, compression task and even shutdown. Cost me nothing and is only on when I am actually using it. It’s surprising what you can do with older kit that has a lot of grunt but chews power if you use this technique – sure it might chew X watts at the wall but if it’s on for an hour a week it’s not that big a deal.

davocc

If you have a managed switch that supports trunk/LAGG, I would suggest setting up the 4 ports on the NIC as LAGG ports and running the VLANs and network through that. Since you have two 2.5GbE onboard NICs, you can run one as WAN in pfSense and the other for something else. I know it is just a temporary setup, but it is totally doable. I run LAGG on my pfSense system, TrueNAS, Proxmox, and Proxmox Backup Server. It just gives you the flexibility to have more lanes when transmitting/receiving data to multiple devices that demand high throughput, like the servers mentioned.
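On the Proxmox side that would be an LACP bond underneath a VLAN-aware bridge. A sketch of /etc/network/interfaces; the interface names and addresses are placeholders, and the switch ports must be configured as a matching LACP trunk carrying the same VLANs:

```shell
# LACP bond across the quad-port NIC (names are examples only)
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

# VLAN-aware bridge for the VMs, riding on the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Note that 802.3ad hashes each flow onto one link, so a single transfer still tops out at one port's speed; the win is aggregate throughput to multiple clients.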

NightHawkATL

Love your content; it scratches that itch I have to play with homelab stuff without having to spend hours fixing the things I break.

BinaryBroadcast

Oof! He's running HDDs on a smooth, hard, flat desktop surface. The vibrations could harm the drives. At least set them on some kind of vibration-damping material, like a mouse pad.

TheChadXperience