Proxmox 8 Cluster with Ceph Storage Configuration

Are you looking to set up a server cluster in your home lab? Proxmox is a great option, along with Ceph storage. In this video we take a deep dive into Proxmox clustering and how to configure a 3-node Proxmox server cluster with Ceph shared storage, including Ceph OSDs, monitors, and a Ceph storage pool. At the end I test a live migration of a Windows Server 2022 virtual machine between Proxmox nodes using the shared Ceph storage. Cool stuff. Rough CLI sketches of the main steps follow the chapter list below.

Introduction to running a Proxmox server cluster - 0:00
Talking about Proxmox, open-source hypervisors, etc - 0:48
Thinking about high-availability requires thinking about storage - 1:20
Overview of creating a Proxmox 8 cluster and Ceph - 2:10
Beginning the process to configure a Proxmox 8 cluster - 2:24
Looking at the create cluster operation - 3:03
Kicking off the cluster creation process - 3:25
Join information to use with the member nodes to join the cluster - 3:55
Joining the cluster on another node and entering the root password - 4:15
Joining the 3rd node to the Proxmox 8 cluster - 5:13
Refreshing the browser and checking that we can see all the Proxmox nodes - 5:40
Overview of Ceph - 6:11
Ceph as a distributed storage system sharing a logical storage volume between nodes - 6:30
Beginning the installation of Ceph on the Proxmox nodes - 6:52
Changing the repository to the no-subscription model - 7:30
Verifying the installation of Ceph - 7:51
Selecting the IP subnet available under Public network and Cluster network - 8:06
Looking at the replicas configuration - 8:35
Installation is successful and looking at the checklist to install Ceph on other nodes - 8:50
The Ceph Object Storage Daemon (OSD) - 9:27
Creating the OSD and designating the disk in our Proxmox hosts for Ceph - 9:50
Selecting the disk for the OSD - 10:15
Creating OSD on node 2 - 10:40
Creating OSD on node 3 - 11:00
Looking at the Ceph dashboard and health status - 11:25
Creating the Ceph pool - 11:35
All Proxmox nodes display the Ceph pool - 12:00
Ceph Monitor overview - 12:22
Beginning the process to create additional monitors - 13:00
Setting up the test for live migration using Ceph storage - 13:30
Beginning a continuous ping - 14:00
The VM is on the Ceph storage pool - 14:25
Kicking off the migration - 14:35
Only the memory state is copied between the two Proxmox hosts - 14:45
Distributed shared storage is working between the nodes - 15:08
Nested configuration in my lab but still works great - 15:35
Concluding thoughts on Proxmox clustering in Proxmox 8 and Ceph for shared storage - 15:49
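
For reference, the cluster-creation steps above also have shell equivalents. A minimal sketch, assuming an example cluster name and an example IP for the first node (neither is taken from the video):

  # On the first node: create the cluster
  pvecm create homelab-cluster

  # On each of the other two nodes: join, using the first node's IP
  # (you are prompted for that node's root password, as in the video)
  pvecm add 192.168.1.10

  # Confirm all three nodes are members
  pvecm status
  pvecm nodes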
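
Installing and initializing Ceph (7:30 - 8:35) can also be done with pveceph. A rough sketch, assuming the no-subscription repository and an example 192.168.1.0/24 subnet for the public and cluster networks; the install step is repeated on every node:

  # Install the Ceph packages from the no-subscription repository
  pveceph install --repository no-subscription

  # Initialize the Ceph configuration on the first node, pointing at the chosen subnet
  pveceph init --network 192.168.1.0/24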
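
Creating the OSDs (9:27 - 11:00) is one command per node; /dev/sdb is only an example device name, so check which disk is actually spare first:

  # List the disks the node can see
  lsblk

  # Turn the spare disk on each node into a Ceph OSD
  pveceph osd create /dev/sdb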
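
The pool creation step (11:35) looks roughly like this from the shell; the pool name is an example, and --add_storages registers the pool as Proxmox storage so it shows up on all nodes, as in the video:

  # Create a replicated pool (default size 3 / min_size 2) and add it as Proxmox storage
  pveceph pool create ceph-pool --add_storages

  # Check overall Ceph health
  ceph -s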
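
Additional monitors on the remaining nodes (13:00) can be created the same way; a minimal sketch:

  # Run on each node that should host a monitor
  pveceph mon create

  # Optionally add a standby manager as well
  pveceph mgr create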
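
Finally, the live-migration test (13:30 - 15:08) has a CLI equivalent. The VM ID, target node name, and guest IP below are placeholders; because the disk lives on the shared Ceph pool, only the memory state has to move:

  # From another machine, keep a continuous ping running against the guest to watch for drops
  # (ping -t is the Windows form; on Linux, plain ping already runs continuously)
  ping -t 192.168.1.50

  # Live-migrate the running VM to the target node
  qm migrate 100 pve2 --online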

Proxmox 8: New Features and Home Lab Upgrade Instructions:

Proxmox 8 and Ceph:

Top VMware vSphere Configurations in 2023:
Comments

That's how a tutorial should be done! Thoroughly explained and step-by-step detailed!!! THANK YOU SO VERY MUCH!!

davidefuzzati

This was a perfect tutorial, watched it once, built a test lab, everything worked as expected.

substandard

Best detailed HOW-TO video in the Proxmox universe...

pg_usa

Not only is this a perfect Proxmox/Ceph tutorial, but also an amazing tutorial on how to make proper videos that deliver results! Thank you!

Stingray

Great video and thoroughly detailed. My only advice for properly monitoring a "migrating VM" would be to send a ping to the 'Migrating VM' from a different machine/VM. When doing anything from the VM being migrated, the process will pause in order to be transferred over to the new host (thus not showing any dropped packets "from" the VM's point of view).
Keep up the good work!

rich-it-hp

This is great. I would love to see a real-world home lab version using 3 mini PCs and a 2.5GbE switch. I think there are a lot of users like me running Home Assistant in a Proxmox VM along with a bunch of containers for CCTV / DNS etc. There are no videos covering this Ceph scenario and I need a hero 😊

substandard

The best Proxmox & Ceph tutorial, thank you.

naami

Thanks SO MUCH for this video. It literally turned things around for me. Cheers from Panama.

RobertoRubio-zm

Nice presentation and explanation of some key core steps of the procedure. Yet you omit to mention that:
- nodes should be the same from a h/w perspective, especially when the VMs running are Windows Servers, since you could easily lose your license just by transferring it to a different node with different h/w specs;
- even if someone might get it just from pausing the video and noticing that the 3 storages are the same on all 3 nodes, a mention of that wouldn't hurt;
- finally, a video like this could be a nice start for several others about maintaining and troubleshooting a cluster with Ceph, covering the usual situations like a node going down for good, or going down for a long time because parts need to be ordered before it can be fixed (which results in syslog messages flooding the user, so you might want to show how to stop or suppress them until the node is fixed again)...etc

ierosgr

The best tutorial for clustering, 😊thank you sir....We will try it on three server devices, to be applied to the Republic of Indonesia radio data center...

achmadsdjunaedi

I’ve been planning to move from VMware & vSAN to “Pmox” :) & Ceph for a while now. I just need the time to set everything up and test. I love that you virtualized this first! My used storage is about 90% testing VMs like these. 🤷‍♂️

youNOOBsickle

Good video sir, I played with this on a few Lenovo Mini machines and loved it!!

JasonsLabVideos

Ceph is an incredibly nice distributed object storage solution which is open source. I need to check it out myself.

samegoi

Thank you, that was very informative and spot-on...

One thing I did pick up, and this is my "weirdness": you might be trying a little too hard with the explicit descriptions. For example, in the migration testing you explicitly call out the full hostnames several times - at this stage in the video, viewers are intimately familiar with the servers, so stating "server 1 to 2" would feel more natural.

EViL

FYI: when you click in a CMD window, the title changes to "Select" and the process running in the window pauses. For most of the demo it was in selection mode (paused ping command); it would be interesting to see how it worked without the select. Otherwise, loved the demo and the Ceph storage setup, exactly what I was looking for.

PaulKling

At 3:26 it would be useful to mention that Ceph and HA benefit greatly from a different network for their data exchange. That would be the point where you should choose a different network (in case, of course, there is one to choose from) than the management one. Yes, it will work with the same network for everything, but it won't be as performant as with a dedicated one.
New edit: Stand corrected by 8:26, where you do mention it.

dimitristsoutsouras

Advice: in production environments, use 10Gbps links on all servers; otherwise a "bottleneck" is created when the disks are running at 6Gbps.

felipemarriagabenavides

We definitely love the content, we appreciate your attention to detail!!!

sking

Thanks for this awesome tutorial. It was easy to understand, even for a non-native English speaker.

cafeaffe

Super helpful with really clear steps and explanations, saved me a lot of time and learnt a lot too - many thanks.

NetDevAutomate