[ Kube 13 ] Using Persistent Volumes and Claims in Kubernetes Cluster

In this video I will demonstrate how you can use Persistent Volumes and request them in a Pod specification using Persistent Volume Claims.

For any questions or issues, please leave me a comment. If you like this video, please share it with your friends and don't forget to subscribe to my channel.

If you wish to support me:

Thanks for watching this video and appreciate your feedback.
Comments

Great stuff! Love the rolling interaction with kubectl and the nodeSelector. :) Excellent work!

xmlviking

Very detailed, step-by-step video with practical demonstration. It helped me a lot. Thank you!

GunjanSharma

Your videos are very informative and useful. They helped me solve several issues with my deployment. Thank you!

vishalkole

Glad this video popped up in YouTube results. Very helpful because everything is shown through command execution and verification. Thanks a lot!

krishnak

Very good explanation - Thanks for the session - Appreciate all your efforts in making this video available.

prasadrayudu

It's very helpful; you explained it clearly! Thank you very much, Venkat.

venkatnunepalli

Hi,
This is the best video for understanding PV and PVC in K8s. Thank you very much.

Pallikkoodam

Informative and simple to understand. Thank You!

blaupaunk

It's absolutely brilliant, thanks man!!

clementduval

Great episode again; I completed the exercise in my local "Kubeadm-dind-cluster". My head was reeling when I read the documentation, and this episode helped me a lot. I'm completing the storage-related episodes (13, 20, 23) together. If you were in Hyderabad, I would have treated you to beer and biryani.

ajitdb

This was immensely helpful. Please consider another tutorial using dynamic provisioning.

TomGrubbe
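For the dynamic-provisioning request above, a rough sketch of what that involves: a StorageClass with a provisioner, plus a PVC that references it so the PV is created automatically on demand. The provisioner shown is an assumption (an AWS in-tree one); yours depends on the cluster, and all names here are hypothetical.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs   # assumption: cluster runs on AWS
parameters:
  type: gp2
---
# A PVC referencing the class; a matching PV is provisioned automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```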

Thanks, @venkat bro excellent explanation.

sarankumaark

If the directory is not available, we can still create the PV by adding "type: DirectoryOrCreate" under the hostPath section:


apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
    type: DirectoryOrCreate

dubareddy
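To actually consume a PV like the one in the comment above, you pair it with a claim and mount the claim in a Pod. A minimal sketch, assuming the pv-hostpath/local-storage names from that manifest; the claim, Pod, and mount path names here are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath
spec:
  storageClassName: local-storage   # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-hostpath
```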

Hi Venkat, thanks for the video. I have a question: based on what does a PVC bind to a PV? There are no selectors, labels, or anything like that in the PVC. For example, if I have 3 Pods and need a different volume for each of them, so I create 3 PVs and 3 PVCs respectively, which PVC will bind to which PV? Can we control it?

oleggv
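On the binding question above: as I understand it, the control plane matches a PVC to a PV by storageClassName, access modes, and sufficient capacity. To pin a specific pairing, a PVC can name the PV directly via spec.volumeName, or narrow the candidates with a label selector. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  volumeName: pv-1        # bind to exactly this PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# Alternatively, match PVs by label:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-2
spec:
  selector:
    matchLabels:
      app: my-db          # only PVs carrying this label are candidates
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```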

Hi Venkat, I have seen your ELK videos; those are great. Now watching the K8s ones, awesome :) I have a request: can you make a video on deploying ELK with Helm?

FreshersKosam

Hi Buddy, what kind of volume type (instead of hostPath) do we have to use to make the same data available to all running Pods (across all worker nodes)?

palanisamy-dlqe
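On sharing data across nodes, as asked above: one common approach is a volume type that supports the ReadWriteMany access mode, such as NFS. A sketch, where the server address and export path are placeholders, not values from the video:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany       # mountable read-write by Pods on any node
  nfs:
    server: 10.0.0.10     # placeholder NFS server address
    path: /exports/data   # placeholder export path
```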

Great videos on Kubernetes topics. Could you please do one on authentication and authorization in a Kubernetes cluster, possibly with Dex, Gangway, and GitHub?

rajivbaviskar

Crisp and clear explanation :) Great!
But I am facing the error below:
spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations). I tried deleting the pods and creating them again in AWS EKS, but got no expected results for the volume mount.

ponmanimaran

Hi Venkat, thanks for the video.
1. Can we use AWS S3 as a PV?
2. Is there any other method to mount a PV on all nodes?

ramalingamvarkala

Hi Venkat,

Thanks for sharing knowledge.

I have one question related to PV and PVC.

For example, I do a Helm release-1, which creates pod-1 and pvc-1, and pvc-1 is bound to pv-1 created by the StorageClass. Some data related to pod-1 is now in pv-1.
After some time, I delete Helm release-1, which deletes pod-1 and pvc-1, but pv-1 still remains as per the Retain policy.

My question is: if I create Helm release-1 again, is there any chance to rebind the same pv-1 so that we can get our data back?

charank
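On rebinding a retained PV, as asked above: one possible approach, sketched here as an assumption and not verified against any particular Helm chart. After the PVC is deleted, a Retain-policy PV sits in the Released phase with a stale claimRef; removing that claimRef returns it to Available, and a new PVC can then pin it by name. All names below are hypothetical:

```yaml
# First clear the stale claim reference, e.g.:
#   kubectl patch pv pv-1 --type json \
#     -p '[{"op": "remove", "path": "/spec/claimRef"}]'
# Then create a PVC that pins the retained PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  volumeName: pv-1            # rebind to the retained PV and its data
  storageClassName: standard  # assumption: must match the PV's class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```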