$250 Proxmox Cluster gets HYPER-CONVERGED with Ceph! Basic Ceph, RADOS, and RBD for Proxmox VMs

I previously set up a Proxmox high-availability cluster on my $35 Dell Wyse 5060 thin clients. Now I'm improving this cluster to make it *hyperconverged*. It's a huge buzzword in the industry right now; in essence, it combines storage and compute in the same nodes: each node has some compute and some storage, and both the storage and the compute are clustered. In traditional clustering you have a storage system (a SAN) and a compute system (a virtualization cluster, Kubernetes, etc.), so merging the SAN into the compute nodes means all of the nodes are identical and network traffic, in aggregate, flows from all nodes to all nodes without a bottleneck between the compute half and the SAN half.

Today I am limiting this tutorial to the features provided through the Proxmox GUI for Ceph, and only to RBD (RADOS Block Device) storage (not CephFS). Ceph is a BIG topic for BIG data, but I'm planning to cover erasure-coded RBD pools, followed by CephFS, in the future. Be sure to let me know if there's anything specific you'd like to see.
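For reference, the GUI steps covered in the video have CLI counterparts in Proxmox's pveceph wrapper. This is a rough sketch, not a full walkthrough: the subnet and device paths below are placeholder examples, and exact flags can differ by Proxmox version, so check `man pveceph` before running anything.

```shell
# Run on each node as root. Order matters: packages, init, mon/mgr, OSDs, pool.
pveceph install                                    # install the Ceph packages
pveceph init --network 10.0.0.0/24                 # set the Ceph cluster network (example subnet)
pveceph mon create                                 # create a monitor on this node
pveceph mgr create                                 # create a manager daemon
pveceph osd create /dev/sdb                        # turn a blank disk into an OSD (example device)
pveceph pool create vm-pool --size 3 --min_size 2  # replicated RBD pool: 3 copies, writable with 2
```

The `--size 3 --min_size 2` pair is what gives the three-copy redundancy discussed below while still accepting writes with one node down.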

Merging your storage and compute can make sense, even in the homelab, if you are concerned with single points of failure. I currently rely on TrueNAS for my storage needs, but any maintenance on the TrueNAS server will kick the Proxmox cluster offline. The Proxmox cluster can handle a failed node (or a node down for maintenance), but with shared storage on TrueNAS we don't get the same failure tolerance on the storage side, so we are still a single point of failure away from losing functionality. I could add storage to every Proxmox node and use ZFS replication to keep the VM disks in sync, but then I either have to accept a copy of every VM on every node, or individually pick two nodes for each VM, replicate the disk to those two, and create the corresponding HA groups so the VMs don't get migrated anywhere else.

With Ceph, I can let Ceph handle storage balancing on the back end and know that VM disks are truly stored with no single point of failure. Any node in the cluster can access any VM disk, and as the cluster expands beyond 3 nodes I am still only storing each VM disk 3 times. With erasure coding I can get that down to 1.5 times or less, but that's a topic for a future video.
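The 3x and 1.5x figures come straight from how Ceph pools store data; a quick sketch of the math (the k=4, m=2 erasure-code profile here is just one example that lands at 1.5x, not necessarily what a future video will use):

```python
# Raw-storage overhead per logical byte for the two Ceph pool types.

def replicated_overhead(replicas: int) -> float:
    """Replicated pool: every byte is stored `replicas` times."""
    return float(replicas)

def ec_overhead(k: int, m: int) -> float:
    """Erasure-coded pool with k data + m coding chunks:
    raw usage is (k + m) / k per logical byte, tolerating m lost chunks."""
    return (k + m) / k

print(replicated_overhead(3))  # 3.0 -> the default 3-replica pool
print(ec_overhead(4, 2))       # 1.5 -> k=4, m=2 survives 2 failures at half the cost
```

So erasure coding buys the same two-failure tolerance as 3 replicas at half the raw capacity, at the cost of more CPU and network work per write.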

As a bonus, I can use CephFS to store files used by the VMs, and the VMs can mount the cluster filesystem themselves if they need to, getting the same level of redundancy while sharing the data with multiple servers or with NFS/SMB gateways. Of course, that's also a topic for a future video.

Link to the blog post:

Cost accounting:
I actually spent $99 on the three thin clients (as shown in a previous video). I spent another $25 each on 8G DDR3L SODIMMs to upgrade the thin clients to 12G each (one 8G stick plus the 4G stick they came with), and $16 each on the flash drives. The total is $222, so round up to $250 to cover shipping and taxes.
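A quick sanity check on the parts list above (prices in USD as stated, shipping and taxes excluded):

```python
# Cluster cost breakdown for the three-node build.
thin_clients = 99       # three Dell Wyse 5060s, bought as one lot
ram_upgrades = 3 * 25   # one 8G DDR3L SODIMM per node
flash_drives = 3 * 16   # one flash drive per node

total = thin_clients + ram_upgrades + flash_drives
print(total)  # 222 -> rounds up to the $250 in the title
```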

My Discord server:

Chapters:
00:00 - Introduction
00:35 - Hardware
02:13 - Ceph Installation
06:15 - Ceph Dashboard
08:06 - Object Storage Daemons
16:02 - Basic Pool Creation
18:04 - Basic VM Storage
19:34 - Degraded Operation
21:50 - Conclusions

#Ceph
#Proxmox
#BigData
#Homelab
#Linux
#Virtualization
#Cluster

Proxmox is a trademark of Proxmox Server Solutions GmbH
Ceph is a trademark of Red Hat Inc
Comments

Your channel has great potential, it already has its own style. I hope you keep the momentum, I will be watching: meaning I find your video useful and interesting 😁Thanks.

robertopontone

You got it, man.. I read "hyperconverged" and I was like lol 😆.. $250 for what we pay millions for between Azure, AWS, and 3 data centers.. that's gotta be awesome 👌 👏 but it's still fun to watch, thank u

rimonmikhael

Man, super enjoyed this presentation of CEPH

bluesquadron

Thanks for the video, I really appreciate the practical step by step explanation. Looking forward to more ceph videos!

gustersongusterson

Excellent content, thank you. Ceph scares me, but I will get there at some point, hopefully. I really like your editing, which removes all the dead air that too many others leave in.

mikebakkeyt

If anyone is getting the message "Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement" when trying to enable the dashboard, running apt install ceph-mgr-dashboard on all the nodes in the cluster fixed it for me.

Chris_Cable

So youtubers are just out here hyperconverging their hardware for views now? Disgusting! (Thanks, this was very helpful)

funkijote

*Thanks a lot for sharing your know-how! Your videos are great.*

tomschi

outstanding experiment! thanks for sharing 😀👍🏆

enkaskal

Really appreciate the step by step detail on CEPH as it relates to Proxmox and an inexpensive home lab setup. Looking forward to future videos!

twincitiespcmd

Fun video! It's cool that Proxmox makes Ceph available to new users in a stripped-down way. I've found it to be excellent for home use; since it has none of the limitations of traditional NAS solutions, it allows the kind of random cobbled-together setups that are common outside of enterprise. CephFS in particular is an absolute godsend, and as a whole Ceph is among the most reliable software projects I've ever used. Issues have always been possible to work out, and I've never lost any data despite hardware failures. I've changed and added parts, altered CRUSH configs, and upgraded across major versions without any downtime too (from 15->16->17 over the years). It's real FOSS too (I've had some small PRs merged), and despite the IBM happenings, it actually feels like the community aspect is still growing.

kelownatechkid

LOVE this video!

Thank you!

I'm just getting around to setting up my 3-node HA Proxmox Cluster with ceph and this video is TREMENDOUSLY helpful.

ewenchan

Just stumbled into this absolute gem. Thank you for the incredible content

JosephJohnson-sqbu

Thanks, very good demo and presentation.

goodcitizen

I didn't even know prox had a ceph wizard. Cool 👍

SB-qmwg

This was really helpful. Great explanation and just the right amount of detail for me. Thank you very much!

dn

Great video. I'm going to have to watch this a few times. You really go into great detail on Proxmox, which is exactly what I've been looking for.

mistakek

I'm about to jump into Ceph so I watched this one again and really appreciate your Ceph coverage. We anxiously await the next PROMISED instalment. Heh, no pressure 🙂

MarkConstable

Love your channel. Thanks for the great video!

curtisjones

You can use ZFS as a backend for Ceph. That way you get the best of both, but speed is not a priority in that setup, although that's true for ZFS in general.

JohnSmith-yzuh