Kubernetes Home Lab Storage with MicroK8s, Rook, and Ceph

Easy Kubernetes storage with MicroK8s, Rook, and Ceph. Who would have thought we could say that Kubernetes storage was easy? In this video we explore MicroK8s and shared storage using Rook and MicroCeph, then create a pod with a persistent volume claim to test persistent storage. A sketch of the commands covered follows the chapter list below.

Introduction - 0:00
Learning Kubernetes and persistent storage - 0:42
MicroK8s - 0:57
MicroCeph - 1:08
Installing MicroK8s - 1:32
Checking the status of MicroK8s - 2:03
Repeating the MicroK8s installation on additional nodes - 2:14
Adding the additional nodes to the cluster - 2:35
Pasting the command on the 2nd node - 3:00
Generating a new token - 3:10
All nodes are joined to the cluster - 3:28
Getting the nodes in the cluster - 3:35
Beginning to install MicroCeph - 3:52
Installing it on additional nodes - 4:14
Bootstrapping the MicroCeph cluster configuration - 4:34
Creating the join token for the additional MicroCeph cluster nodes - 4:55
Copying the join token and joining the nodes to the Ceph cluster - 5:19
Looking at the status of the Ceph cluster - 5:52
Overview of allocating disks to the Ceph cluster - 6:20
After adding the disks, checking that they are attached to the nodes - 7:15
Wiping the disks and adding them to the storage pool - 7:40
Checking the status of Ceph once again - 8:05
Summarizing where we are currently - 8:29
Overview of enabling the rook-ceph plugin - 8:42
Steps to enable rook-ceph in MicroK8s - 8:56
Connecting to the external Ceph cluster - 9:30
Viewing the default storage provider - 9:40
Overview of creating a pod to take advantage of the persistent storage - 9:53
Creating a pod with a PVC - 10:08
Running the kubectl create command to create the pod with a PVC - 10:55
Looking at persistent volume claims - 11:18
Concluding thoughts on MicroK8s and persistent storage with Rook and Ceph - 11:35
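
The chapter list above maps to roughly the following command flow. This is a hedged sketch rather than a transcript of the video: node names, the IP address, the disk path, and the token are placeholders, and flags can differ between MicroK8s and MicroCeph releases.

# On every node: install MicroK8s and wait for it to come up
sudo snap install microk8s --classic
microk8s status --wait-ready

# On the first node: print a join command, then run it on each additional node
microk8s add-node
# example output to run on node 2: microk8s join 192.168.1.10:25000/<token>
microk8s kubectl get nodes

# On every node: install MicroCeph; bootstrap the cluster on the first node only
sudo snap install microceph
sudo microceph cluster bootstrap

# Generate a join token per additional node, then join from that node
sudo microceph cluster add node2
# on node2: sudo microceph cluster join <token>

# Hand each node's spare disk to Ceph (--wipe destroys any data on it), then check health
sudo microceph disk add /dev/sdb --wipe
sudo microceph.ceph status

# Enable the rook-ceph addon in MicroK8s and connect it to the external MicroCeph cluster
microk8s enable rook-ceph
sudo microk8s connect-external-ceph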

Other posts you may like, including MicroK8s with Portainer:

Proxmox and Ceph storage:
Comments

Never heard of MicroCeph. I've worked with regular Ceph, but this one is pretty darn easy and cool!

Darkk

Thank you, it's very clear and the blog is great.

fredericdiaz

Maybe I'm missing something, but I thought you would have something to tie the pod to the PVC? In the pod definition it looked like it referenced pod-pvc, but the PVC you created was nginx-pvc. Still, very cool to see the simplicity of MicroK8s and MicroCeph in a "virtual" world.

DavidC-rtor

I loved your video, awesome!!! Thank you very much, keep creating content at this level.

ToniSimpson-ztcj

Hi Brendan, great video! Although I do have one concern, and that is combining MicroK8s and MicroCeph on a single network. I built a K3s/Longhorn cluster and experienced huge performance issues due to Longhorn replication and automatic snapshotting processes. How difficult would it be to segregate the storage network from the MicroK8s pod and ingress network? Cheers
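
One possibility, sketched under the assumption that a second subnet (10.10.10.0/24 below is a placeholder) is reserved for storage: Ceph can carry OSD replication traffic on a separate cluster_network, configurable through the ceph CLI that the MicroCeph snap bundles.

# Move OSD-to-OSD replication onto the dedicated storage subnet
sudo microceph.ceph config set global cluster_network 10.10.10.0/24

# Confirm the setting, then restart the MicroCeph services so the OSDs pick it up
sudo microceph.ceph config get osd cluster_network
sudo snap restart microceph

Pod and ingress traffic stays on the MicroK8s network; only Ceph's internal replication moves to the storage subnet, while client I/O from the CSI driver still uses Ceph's public network.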

paulfx

It's great, but please explain why you did what you did, starting with why MicroK8s and not regular k8s, pros and cons.


Hi, thanks as always for these useful videos. I'm wondering if it is possible to use OpenLens for a

zippi

Can we mount a MicroCeph pool to a folder?
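
One way, sketched with the caveat that the pool name, image name, and mount point are placeholders, and that the rbd alias shipped by the MicroCeph snap (microceph.rbd) is an assumption here: create an RBD image in the pool, map it to a block device, and mount that device on the folder.

sudo microceph.ceph osd pool create mypool
sudo microceph.rbd pool init mypool
sudo microceph.rbd create mypool/demo --size 10G
sudo microceph.rbd map mypool/demo        # prints a device such as /dev/rbd0
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/demo && sudo mount /dev/rbd0 /mnt/demo

For a folder shared by several hosts at once, CephFS is a better fit than RBD, since an RBD image should only be mounted read-write by one client at a time.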

BrunoBernard-knvt

Longhorn was a nightmare compared to this and very buggy.

sylnsr

Why use MicroK8s? I think vanilla k8s or k3s is better for a home lab.

ЕгорГончаренко-нб

How come it worked? That YAML file has an issue, right?

metadata:
  name: nginx-pvc
spec:
  storageClassName: ceph-rbd
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }

and


kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: pod-pvc


For claimName you have given pod-pvc. It won't work, right? It should be nginx-pvc.
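
For reference, a minimal corrected pair with matching names, using the ceph-rbd storage class that the rook-ceph addon sets up:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  storageClassName: ceph-rbd
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: nginx-pvc   # must match the PVC's metadata.name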

ab.

A shame that microk8s insists on using snap.

tcurdt

Running ceph on virtualized storage isn't a good idea. The performance will be terrible.

yourjjrjjrjj