Using Pod Volumes
- Pod Volumes are part of the Pod specification and have the storage reference hard-coded in the Pod manifest (a sketch follows this list)
- This is not wrong, but it does not allow for flexible storage allocation
- Pod Volumes can be used for any storage type
- A ConfigMap can also be mounted as a Pod Volume
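For illustration only, here is a minimal sketch (not taken from the course environment) of a Pod manifest with the storage reference hard-coded; the NFS server name nfs.example.com and the export path /exports/data are hypothetical placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: nfs-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-data
    nfs:                            # storage details live directly in the Pod manifest
      server: nfs.example.com       # hypothetical NFS server
      path: /exports/data           # hypothetical export path

Because the server and path are baked into the Pod spec, moving this Pod to a site with different storage means editing the manifest, which is exactly the flexibility problem that PVs and PVCs solve later in this section.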
[root@k8s ~]# kubectl explain pod.spec.volumes
KIND:       Pod
VERSION:    v1

FIELD: volumes <[]Volume>

DESCRIPTION:
    List of volumes that can be mounted by containers belonging to the pod.
    More info: https://kubernetes.io/docs/concepts/storage/volumes
    Volume represents a named volume in a pod that may be accessed by any container in the pod.

FIELDS:
  awsElasticBlockStore <AWSElasticBlockStoreVolumeSource>
    awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
  azureDisk <AzureDiskVolumeSource>
    azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
  azureFile <AzureFileVolumeSource>
    azureFile represents an Azure File Service mount on the host and bind mount to the pod.
  cephfs <CephFSVolumeSource>
    cephFS represents a Ceph FS mount on the host that shares a pod's lifetime
  cinder <CinderVolumeSource>
    cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
  configMap <ConfigMapVolumeSource>
    configMap represents a configMap that should populate this volume
  csi <CSIVolumeSource>
    csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
  downwardAPI <DownwardAPIVolumeSource>
    downwardAPI represents downward API about the pod that should populate this volume
  emptyDir <EmptyDirVolumeSource>
    emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
  ephemeral <EphemeralVolumeSource>
    ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time.
  fc <FCVolumeSource>
    fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.
  flexVolume <FlexVolumeSource>
    flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.
  flocker <FlockerVolumeSource>
    flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running
  gcePersistentDisk <GCEPersistentDiskVolumeSource>
    gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
  gitRepo <GitRepoVolumeSource>
    gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.
  glusterfs <GlusterfsVolumeSource>
    glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md
  hostPath <HostPathVolumeSource>
    hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
  iscsi <ISCSIVolumeSource>
    iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
  name <string> -required-
    name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
  nfs <NFSVolumeSource>
    nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
  persistentVolumeClaim <PersistentVolumeClaimVolumeSource>
    persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
  photonPersistentDisk <PhotonPersistentDiskVolumeSource>
    photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine
  portworxVolume <PortworxVolumeSource>
    portworxVolume represents a portworx volume attached and mounted on kubelets host machine
  projected <ProjectedVolumeSource>
    projected items for all in one resources secrets, configmaps, and downward API
  quobyte <QuobyteVolumeSource>
    quobyte represents a Quobyte mount on the host that shares a pod's lifetime
  rbd <RBDVolumeSource>
    rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md
  scaleIO <ScaleIOVolumeSource>
    scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
  secret <SecretVolumeSource>
    secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
  storageos <StorageOSVolumeSource>
    storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.
  vsphereVolume <VsphereVirtualDiskVolumeSource>
    vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine
The following example shows how two containers in the same Pod can share storage using a simple emptyDir volume.
[root@k8s cka]# vim morevolumes.yaml
[root@k8s cka]# cat morevolumes.yaml
apiVersion: v1
kind: Pod
metadata:
  name: morevol
spec:
  containers:
  - name: centos1
    image: centos:7
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /centos1
      name: test
  - name: centos2
    image: centos:7
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /centos2
      name: test
  volumes:
  - name: test
    emptyDir: {}
[root@k8s cka]# kubectl apply -f morevolumes.yaml
pod/morevol created
[root@k8s cka]# kubectl get pods
NAME                         READY   STATUS              RESTARTS         AGE
deploydaemon-zzllp           1/1     Running             0                4h16m
firstnginx-d8679d567-249g9   1/1     Running             0                29h
firstnginx-d8679d567-66c4s   1/1     Running             0                29h
firstnginx-d8679d567-72qbd   1/1     Running             0                29h
firstnginx-d8679d567-rhhlz   1/1     Running             0                12h
init-demo                    1/1     Running             0                14h
morevol                      0/2     ContainerCreating   0                12s
mydaemon-d4dcd               1/1     Running             0                4h27m
sleepy                       1/1     Running             4 (27m ago)      15h
testpod                      1/1     Running             0                29h
two-containers               2/2     Running             26 (6m22s ago)   12h
web-0                        1/1     Running             0                17h
web-1                        1/1     Running             0                4h27m
web-2                        1/1     Running             0                4h27m
[root@k8s cka]# kubectl describe morevol
error: the server doesn't have a resource type "morevol"
[root@k8s cka]# kubectl describe pod morevol
Name:             morevol
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s.netico.pl/172.30.9.24
Start Time:       Thu, 01 Feb 2024 20:26:03 -0500
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.0.18
IPs:
  IP:  10.244.0.18
Containers:
  centos1:
    Container ID:  docker://7bd6b49a892aeedda9a4b829268e08bd83e924459e995d85c607da8abe9de1c5
    Image:         centos:7
    Image ID:      docker-pullable://centos@sha256:be65f488b7764ad3638f236b7b515b3678369a5124c47b8d32916d6487418ea4
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Thu, 01 Feb 2024 20:26:18 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/centos1 from test (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-87m8c (ro)
  centos2:
    Container ID:  docker://6872581086a3f760e2511a373419f7be1136f3aa38828b7d767b86afe91a667f
    Image:         centos:7
    Image ID:      docker-pullable://centos@sha256:be65f488b7764ad3638f236b7b515b3678369a5124c47b8d32916d6487418ea4
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Thu, 01 Feb 2024 20:26:19 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/centos2 from test (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-87m8c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  test:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-87m8c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  58s   default-scheduler  Successfully assigned default/morevol to k8s.netico.pl
  Normal  Pulling    58s   kubelet            Pulling image "centos:7"
  Normal  Pulled     46s   kubelet            Successfully pulled image "centos:7" in 12.355s (12.355s including waiting)
  Normal  Created    44s   kubelet            Created container centos1
  Normal  Started    44s   kubelet            Started container centos1
  Normal  Pulled     44s   kubelet            Container image "centos:7" already present on machine
  Normal  Created    43s   kubelet            Created container centos2
  Normal  Started    43s   kubelet            Started container centos2
[root@k8s cka]# kubectl exec -it morevol -c centos1 -- touch /centos1/centos1file
[root@k8s cka]# kubectl exec -it morevol -c centos2 -- ls /centos2

If the shared emptyDir volume works as intended, the last command should list the centos1file that was just created from the centos1 container, even though it runs in the centos2 container.
Managing Persistent Volumes
- PersistentVolumes (PV) are an API resource that represents specific storage
- PVs can be created manually, or automatically using StorageClass and storage provisioners
- Pods do not connect to PVs directly, but indirectly using PersistentVolumeClaim (PVC)
[root@k8s cka]# cat pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: demo
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mydata"
[root@k8s cka]# kubectl apply -f pv.yaml
persistentvolume/pv-volume created
[root@k8s cka]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM               STORAGECLASS   REASON   AGE
pv-volume                                  2Gi        RWO            Retain           Available                       demo                    16s
pvc-3a0733ec-1795-47fe-88e1-efb340c7d90d   1Gi        RWX            Delete           Bound       default/www-web-1   standard                4h41m
pvc-4cc89455-9cc1-4c27-b97a-e9c045a12744   1Gi        RWX            Delete           Bound       default/www-web-2   standard                4h41m
pvc-e4dcae51-6d74-4535-93b9-40b020b120b1   1Gi        RWX            Delete           Bound       default/www-web-0   standard                4h42m
[root@k8s cka]# kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"pv-volume"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"2Gi"},"hostPath":{"path":"/mnt/mydata"},"storageClassName":"demo"}}
    creationTimestamp: "2024-02-02T01:40:23Z"
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      type: local
    name: pv-volume
    resourceVersion: "42864"
    uid: 5a6fa4b7-ec31-490d-831a-3a7eca7df9e6
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 2Gi
    hostPath:
      path: /mnt/mydata
      type: ""
    persistentVolumeReclaimPolicy: Retain
    storageClassName: demo
    volumeMode: Filesystem
  status:
    phase: Available
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      hostPathProvisionerIdentity: 1d973bdb-d473-44e6-b861-681c66c35790
      pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
    creationTimestamp: "2024-02-01T20:59:04Z"
    finalizers:
    - kubernetes.io/pv-protection
    name: pvc-3a0733ec-1795-47fe-88e1-efb340c7d90d
    resourceVersion: "28808"
    uid: b45da20f-8c8b-4996-908c-9a3f4fe3b417
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 1Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: www-web-1
      namespace: default
      resourceVersion: "28799"
      uid: 3a0733ec-1795-47fe-88e1-efb340c7d90d
    hostPath:
      path: /tmp/hostpath-provisioner/default/www-web-1
      type: ""
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
    volumeMode: Filesystem
  status:
    phase: Bound
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      hostPathProvisionerIdentity: 1d973bdb-d473-44e6-b861-681c66c35790
      pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
    creationTimestamp: "2024-02-01T20:59:07Z"
    finalizers:
    - kubernetes.io/pv-protection
    name: pvc-4cc89455-9cc1-4c27-b97a-e9c045a12744
    resourceVersion: "28835"
    uid: 78ec7535-586f-4770-b508-b9d28128edd4
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 1Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: www-web-2
      namespace: default
      resourceVersion: "28825"
      uid: 4cc89455-9cc1-4c27-b97a-e9c045a12744
    hostPath:
      path: /tmp/hostpath-provisioner/default/www-web-2
      type: ""
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
    volumeMode: Filesystem
  status:
    phase: Bound
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      hostPathProvisionerIdentity: 1d973bdb-d473-44e6-b861-681c66c35790
      pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
    creationTimestamp: "2024-02-01T20:58:27Z"
    finalizers:
    - kubernetes.io/pv-protection
    name: pvc-e4dcae51-6d74-4535-93b9-40b020b120b1
    resourceVersion: "28669"
    uid: 4d0110fe-5376-40ad-bfad-32d22f8f5cf6
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 1Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: www-web-0
      namespace: default
      resourceVersion: "20117"
      uid: e4dcae51-6d74-4535-93b9-40b020b120b1
    hostPath:
      path: /tmp/hostpath-provisioner/default/www-web-0
      type: ""
    persistentVolumeReclaimPolicy: Delete
    storageClassName: standard
    volumeMode: Filesystem
  status:
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
Configuring PersistentVolumeClaim
- PVCs allow Pods to connect to any type of storage that is provided at a specific site
- Site-specific storage needs to be created as a PersistentVolume, either manually or automatically using a StorageClass
- Behind a StorageClass, a storage provisioner is required
[root@k8s cka]# cat pvc2.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim
spec:
  storageClassName: demo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@k8s cka]# kubectl apply -f pvc2.yaml
persistentvolumeclaim/pv-claim created
[root@k8s cka]# kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim   Pending                                      demo           4s
[root@k8s cka]# kubectl describe pvc pv-claim
Name:          pv-claim
Namespace:     default
StorageClass:  demo
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason              Age                From                         Message
  ----     ------              ----               ----                         -------
  Warning  ProvisioningFailed  9s (x2 over 21s)   persistentvolume-controller  storageclass.storage.k8s.io "demo" not found
[root@k8s cka]#
[root@k8s cka]# kubectl delete -f pvc2.yaml
persistentvolumeclaim "pv-claim" deleted
[root@k8s cka]# cat pvc1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@k8s cka]# kubectl apply -f pvc1.yaml
persistentvolumeclaim/pv-claim created
[root@k8s cka]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-claim   Bound    pvc-3bd7c987-ee98-4d63-b8bb-93bb37ef9475   1Gi        RWO            standard       6s
[root@k8s cka]# kubectl get pvc,pv
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pvc-3bd7c987-ee98-4d63-b8bb-93bb37ef9475   1Gi        RWO            standard       3m24s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv-volume                                  2Gi        RWO            Retain           Released   default/pv-claim   demo                    39m
persistentvolume/pvc-3bd7c987-ee98-4d63-b8bb-93bb37ef9475   1Gi        RWO            Delete           Bound      default/pv-claim   standard                3m24s
Now let's use the PVC in a Pod:
[root@k8s cka]# cat pv-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pv-pod
spec:
  volumes:
    - name: pv-storage
      persistentVolumeClaim:
        claimName: pv-claim
  containers:
    - name: pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pv-storage
[root@k8s cka]# kubectl apply -f pv-pod.yaml
pod/pv-pod created
[root@k8s cka]# kubectl exec -it pv-pod -- touch /usr/share/nginx/html/hellothere
[root@k8s cka]# kubectl describe pv
Name:            pv-volume
Labels:          type=local
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    demo
Status:          Released
Claim:           default/pv-claim
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/mydata
    HostPathType:
Events:            <none>
[root@k8s cka]# kubectl get pods pv-pod -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
pv-pod   1/1     Running   0          4m41s   10.244.0.19   k8s.example.pl   <none>           <none>
StorageClass
- StorageClass is an API resource that allows storage to be provisioned automatically
- StorageClass can also be used as a property that connects a PVC to a PV without using an actual StorageClass resource
- Multiple StorageClass resources can co-exist in the same cluster to provide access to different types of storage
- For automatic provisioning to work, one StorageClass must be set as the default (see the manifest sketch after the command below)
kubectl patch storageclass mysc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
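For reference, here is a minimal sketch of what such a default StorageClass object could look like. The name mysc matches the patch command above; the provisioner k8s.io/minikube-hostpath is only an assumption based on the Minikube-provisioned PVs shown earlier, so substitute the provisioner that exists in your cluster.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this StorageClass as the default
provisioner: k8s.io/minikube-hostpath                      # assumed provisioner; replace with yours
reclaimPolicy: Delete
volumeBindingMode: Immediate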
Using StorageClass
- To enable automatic provisioning, a StorageClass needs a backing storage provisioner
- In the PV and PVC definitions, the storageClassName property can be set to connect to a specific StorageClass, which is useful if multiple StorageClass resources are available
- If the storageClassName property is not set, the PVC will get storage from the default StorageClass
- If no default StorageClass is set either, the PVC will get stuck in a Pending status (a PVC sketch follows this list)
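As an illustration of the last two points, here is a minimal sketch of a PVC that does not set storageClassName and therefore relies on the default StorageClass; the name default-claim is made up for this example.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-claim              # hypothetical name
spec:
  # no storageClassName: the default StorageClass is used;
  # if no default StorageClass exists, this PVC stays Pending
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi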
Using an NFS Storage Provisioner
- The Storage Provisioner works with a StorageClass to automatically provide storage
- It runs as a Pod in the Kubernetes cluster, provided with access control configured through Roles, RoleBindings, and ServiceAccounts
- Once operational, you don’t have to manually create PersistentVolumes anymore
Requirements
- To create a storage provisioner, access permissions to the API are required
- Roles and RoleBindings are created to provide these permissions
- A ServiceAccount is created to connect the provisioner Pod to the appropriate RoleBinding (a sketch of these objects follows)
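As a rough sketch of these requirements: the names nfs-provisioner, nfs-provisioner-runner, and run-nfs-provisioner as well as the exact rule list below are assumptions, not part of this course; real provisioners ship their own RBAC manifests, and because PersistentVolumes are cluster-scoped, a ClusterRole/ClusterRoleBinding is commonly used instead of a namespaced Role.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner                 # hypothetical name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner          # hypothetical name
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-provisioner             # hypothetical name
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io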
Configuring a Storage Provisioner
- On control:
- Ubuntu:
sudo apt install -y nfs-server
- RHEL/Centos:
sudo dnf install -y nfs-utils
- On other nodes:
- Debian/Ubuntu:
sudo apt install nfs-client
- RHEL/Centos:
sudo dnf install nfs-utils nfs4-acl-tools
- On control:
sudo mkdir /nfsexport
sudo sh -c 'echo "/nfsexport *(rw,no_root_squash)" > /etc/exports'
sudo systemctl restart nfs-server
- On other nodes:
showmount -e control
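With the NFS export working, the provisioner itself still has to be deployed in the cluster. That step is not shown in the notes above; one common approach, sketched here as an assumption, is the nfs-subdir-external-provisioner Helm chart, pointed at the control host and the /nfsexport path created above (verify the chart values against its own documentation):

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=control \
  --set nfs.path=/nfsexport
# the chart creates a StorageClass (named nfs-client by default) that provisions PVs on the NFS export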
ConfigMap
- A ConfigMap is an API resource used to store site-specific data
- A Secret is a base64 encoded ConfigMap
- ConfigMaps are used to store either environment variables, startup parameters or configuration files
- When a configuration file is stored in a ConfigMap or Secret, it is mounted as a volume to give the Pod access to its contents (an environment-variable sketch follows this list)
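Besides the volume-based use shown below, a ConfigMap can also provide environment variables. A minimal sketch, with the ConfigMap name mydbvars and its key made up for this example:

kubectl create cm mydbvars --from-literal=ROOT_PASSWORD=secret

apiVersion: v1
kind: Pod
metadata:
  name: cm-env-pod                 # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - configMapRef:
        name: mydbvars             # every key in the ConfigMap becomes an environment variable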
Creating a ConfigMap
echo "hello world" > index.html
kubectl create cm webindex --from-file=index.html
kubectl describe cm webindex
kubectl create deploy webserver --image=nginx
kubectl edit deploy webserver
spec.template.spec
  volumes:
  - name: cmvol
    configMap:
      name: webindex

spec.template.spec.containers
  volumeMounts:
  - mountPath: /usr/share/nginx/html
    name: cmvol
Let's check out how to use a ConfigMap as a volume.
[root@k8s cka]# echo hello world > index.html
[root@k8s cka]# kubectl create cm webindex --from-file=index.html
configmap/webindex created
[root@k8s cka]# kubectl describe cm webindex
Name:         webindex
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
index.html:
----
hello world


BinaryData
====

Events:  <none>
[root@k8s cka]# kubectl create deployment webserver --image=nginx
deployment.apps/webserver created
[root@k8s cka]# kubectl edit deployments.apps webserver
Now edit the webserver Deployment and add the volume in the Pod spec section, at the same level as the containers section:
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: cmvol
      volumes:
      - name: cmvol
        configMap:
          name: webindex
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
Let’s test it:
[root@k8s cka]# kubectl edit deployments.apps webserver
deployment.apps/webserver edited
[root@k8s cka]# kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
firstnginx   4/4     4            4           41h
webserver    1/1     1            1           14m
[root@k8s cka]# kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
...
pod/webserver-76d44586d-8gqhf   1/1     Running   0          90s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   46h
service/nginx        ClusterIP   None         <none>        80/TCP    29h

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/deploydaemon   1         1         1       1            1           <none>          16h
daemonset.apps/mydaemon       1         1         1       1            1           <none>          40h

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/firstnginx   4/4     4            4           41h
deployment.apps/webserver    1/1     1            1           15m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/firstnginx-d8679d567   4         4         4       41h
replicaset.apps/webserver-667ddc69b6   0         0         0       15m
replicaset.apps/webserver-76d44586d    1         1         1       90s

NAME                   READY   AGE
statefulset.apps/web   3/3     29h
[root@k8s cka]# kubectl exec pod/webserver-76d44586d-8gqhf -- cat /usr/share/nginx/html/index.html
hello world
That proves that the ConfigMap has been successfully mounted.
Lab: Setting up Storage
- Create a PersistentVolume, using the HostPath storage type to access the directory /storage
- Create a file /storage/index.html, containing the text "hello lab4"
- Run a Pod that uses an Nginx image and mounts the HostPath storage on the directory /usr/share/nginx/html
- On the running Pod, use kubectl exec to verify the existence of the file /usr/share/nginx/html/index.html
Solution:
[root@k8s cka]# mkdir /mnt/storage
[root@k8s cka]# vi /mnt/storage/index.html
[root@k8s cka]# cat /mnt/storage/index.html
hello
[root@k8s cka]# vi lab4pv.yaml
[root@k8s cka]# cat lab4pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: lab4-volume
  labels:
    type: local
spec:
  storageClassName: lab4
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/storage"
[root@k8s cka]# kubectl apply -f lab4pv.yaml
persistentvolume/lab4-volume created
[root@k8s cka]# kubectl describe pv lab4-volume
Name:            lab4-volume
Labels:          type=local
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    lab4
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/storage
    HostPathType:
Events:            <none>
[root@k8s cka]# vi lab4pvc.yaml
[root@k8s cka]# cat lab4pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lab4-claim
spec:
  storageClassName: lab4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
[root@k8s cka]# kubectl apply -f lab4pvc.yaml
persistentvolumeclaim/lab4-claim created
[root@k8s cka]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     ...   REASON   AGE
persistentvolume/lab4-volume                                10Gi       RWO            Retain           Bound                      3m5s
persistentvolume/pv-volume                                  2Gi        RWO            Retain           Released                   12h
persistentvolume/pvc-3a0733ec-1795-47fe-88e1-efb340c7d90d   1Gi        RWX            Delete           Bound                      17h
persistentvolume/pvc-3bd7c987-ee98-4d63-b8bb-93bb37ef9475   1Gi        RWO            Delete           Bound                      12h
persistentvolume/pvc-4cc89455-9cc1-4c27-b97a-e9c045a12744   1Gi        RWX            Delete           Bound                      17h
persistentvolume/pvc-e4dcae51-6d74-4535-93b9-40b020b120b1   1Gi        RWX            Delete           Bound                      17h

NAME                               STATUS        VOLUME                                     CAPACITY   ACCESS MODES
persistentvolumeclaim/lab4-claim   Bound         lab4-volume                                10Gi       RWO
persistentvolumeclaim/pv-claim     Bound         pvc-3bd7c987-ee98-4d63-b8bb-93bb37ef9475   1Gi        RWO
persistentvolumeclaim/www-web-0    Terminating   pvc-e4dcae51-6d74-4535-93b9-40b020b120b1   1Gi        RWX
persistentvolumeclaim/www-web-1    Terminating   pvc-3a0733ec-1795-47fe-88e1-efb340c7d90d   1Gi        RWX
persistentvolumeclaim/www-web-2    Terminating   pvc-4cc89455-9cc1-4c27-b97a-e9c045a12744   1Gi        RWX
[root@k8s cka]# vi lab4pod.yaml
[root@k8s cka]# kubectl apply -f lab4pod.yaml
pod/lab4-pod created
[root@k8s cka]# cat lab4pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: lab4-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: lab4-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
[root@k8s cka]# kubectl exec lab4-pod -- cat /usr/share/nginx/html/index.html
hello
You can find the example YAML files in the Kubernetes documentation:
https://kubernetes.io/docs/home/ -> persistent volume -> Configure a Pod to Use a PersistentVolume for Storage