{"id":4869,"date":"2023-07-22T15:57:30","date_gmt":"2023-07-22T13:57:30","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=4869"},"modified":"2023-09-22T09:18:59","modified_gmt":"2023-09-22T07:18:59","slug":"manage-storage-on-openshift","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/07\/22\/manage-storage-on-openshift\/","title":{"rendered":"Openshift Storage"},"content":{"rendered":"<p><!--more--><\/p>\n<p><span style=\"color: #3366ff;\">Understanding Container Storage<\/span><\/p>\n<ul>\n<li>Container storage is ephemeral by default<\/li>\n<li>Upon deletion of a container, all files and data inside it are also deleted<\/li>\n<li>Containers can use volumes or bind mounts to provide persistent storage<\/li>\n<li>Bind mounts are useful in stand-alone containers; volumes are needed to decouple the storage from the container<\/li>\n<li>Using volumes guarantees that storage outlives the container lifetime<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Understanding OpenShift Storage<\/span><\/p>\n<ul>\n<li>OpenShift uses persistent volumes to provision storage<\/li>\n<li>Storage can be provisioned in a static or dynamic way<\/li>\n<li>Static provisioning means that the cluster administrator creates the persistent volumes manually<\/li>\n<li>Dynamic provisioning uses storage classes to create persistent volumes on demand<\/li>\n<li>OpenShift provides storage classes as the default solution<\/li>\n<li>Developers use persistent volume claims to dynamically add storage to their applications<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using Pod Volumes<\/span><\/p>\n<p>Consider such a simple pod definition (<code>morevolumes.yaml<\/code>):<\/p>\n<pre class=\"lang:default decode:true \">apiVersion: v1\r\nkind: Pod\r\nmetadata: \r\n  name: morevol2\r\nspec:\r\n  containers:\r\n  - name: centos1\r\n    image: centos:7\r\n    command:\r\n      - sleep\r\n      - \"3600\" \r\n    volumeMounts:\r\n      - mountPath: \/centos1\r\n        name: test\r\n  - name: centos2\r\n    image: centos:7\r\n    command:\r\n      - sleep\r\n      - \"3600\"\r\n    volumeMounts:\r\n      - mountPath: \/centos2\r\n        name: test\r\n  volumes: \r\n    - name: test\r\n      emptyDir: {}<\/pre>\n<p>Let&#8217;s create this pod:<\/p>\n<pre class=\"lang:default decode:true\">$ oc create -f morevolumes.yaml\r\npod\/morevol2 created\r\n$ oc get pods\r\nNAME READY STATUS RESTARTS AGE\r\nmorevol2 2\/2 Running 0 21s\r\nnginx-1-658588fd7-rw6nn 1\/1 Running 0 4h51m\r\nnginx-7548849bf9-mddnj 1\/1 Running 0 4h51m\r\nworkspacebd3b7f49b120402c-5544445d75-98c2k 2\/2 Running 0 27m<\/pre>\n<p>Checking again a minute later, the pod is still running:<\/p>\n<pre class=\"lang:default decode:true \">$ oc get pods\r\nNAME READY STATUS RESTARTS AGE\r\nmorevol2 2\/2 Running 0 62s\r\nnginx-1-658588fd7-rw6nn 1\/1 Running 0 4h51m\r\nnginx-7548849bf9-mddnj 1\/1 Running 0 4h51m\r\nworkspacebd3b7f49b120402c-5544445d75-98c2k 2\/2 Running 0 28m<\/pre>\n<p>Let&#8217;s see the created pod:<\/p>\n<pre class=\"lang:default decode:true\">$ oc describe pod morevol2\r\nName: morevol2\r\nNamespace: makarewicz-openshift-dev\r\nPriority: -3\r\nPriority Class Name: sandbox-users-pods\r\nService Account: default\r\nNode: ip-10-0-196-172.ec2.internal\/10.0.196.172\r\nStart Time: Sun, 23 Jul 2023 14:55:03 +0000\r\nLabels: &lt;none&gt;\r\nAnnotations: k8s.ovn.org\/pod-networks:\r\n{\"default\":{\"ip_addresses\":[\"10.128.7.223\/23\"],\"mac_address\":\"0a:58:0a:80:07:df\",\"gateway_ips\":[\"10.128.6.1\"],\"ip_address\":\"10.128.7.223\/2...\r\nk8s.v1.cni.cncf.io\/network-status:\r\n[{\r\n\"name\": \"ovn-kubernetes\",\r\n\"interface\": \"eth0\",\r\n\"ips\": [\r\n\"10.128.7.223\"\r\n],\r\n\"mac\": \"0a:58:0a:80:07:df\",\r\n\"default\": true,\r\n\"dns\": {}\r\n}]\r\nkubernetes.io\/limit-ranger:\r\nLimitRanger plugin set: cpu, memory request for container centos1; cpu, memory limit for container centos1; cpu, memory request for 
contai...\r\nopenshift.io\/scc: restricted-v2\r\nseccomp.security.alpha.kubernetes.io\/pod: runtime\/default\r\nStatus: Running\r\nIP: 10.128.7.223\r\nIPs:\r\nIP: 10.128.7.223\r\n\r\nContainers:\r\n<strong>\r\ncentos1:<\/strong>\r\nContainer ID: cri-o:\/\/a9e548baa7632e6f63898b8cab0e22975e2b83dd2d28933c29f2aaaa973bf14c\r\nImage: centos:7\r\nImage ID: quay.io\/centos\/centos@sha256:e4ca2ed0202e76be184e75fb26d14bf974193579039d5573fb2348664deef76e\r\nPort: &lt;none&gt;\r\nHost Port: &lt;none&gt;\r\nCommand:\r\nsleep\r\n3600\r\nState: Running\r\nStarted: Sun, 23 Jul 2023 14:55:08 +0000\r\nReady: True\r\nRestart Count: 0\r\nLimits:\r\ncpu: 1\r\nmemory: 1000Mi\r\nRequests:\r\ncpu: 10m\r\nmemory: 64Mi\r\nEnvironment: &lt;none&gt;\r\nMounts:\r\n\/centos1 from test (rw)\r\n\/var\/run\/secrets\/kubernetes.io\/serviceaccount from kube-api-access-bk75f (ro)\r\n<strong>\r\ncentos2:<\/strong>\r\nContainer ID: cri-o:\/\/7f7dcc73df4b6bd60e9e156ab0fd19477a475b70c9404cb03a3b07a3071434d9\r\nImage: centos:7\r\nImage ID: quay.io\/centos\/centos@sha256:e4ca2ed0202e76be184e75fb26d14bf974193579039d5573fb2348664deef76e\r\nPort: &lt;none&gt;\r\nHost Port: &lt;none&gt;\r\nCommand:\r\nsleep\r\n3600\r\nState: Running\r\nStarted: Sun, 23 Jul 2023 14:55:08 +0000\r\nReady: True\r\nRestart Count: 0\r\nLimits:\r\ncpu: 1\r\nmemory: 1000Mi\r\nRequests:\r\ncpu: 10m\r\nmemory: 64Mi\r\nEnvironment: &lt;none&gt;\r\nMounts:\r\n\/centos2 from test (rw)\r\n\/var\/run\/secrets\/kubernetes.io\/serviceaccount from kube-api-access-bk75f (ro)\r\nConditions:\r\nType Status\r\nInitialized True\r\nReady True\r\nContainersReady True\r\nPodScheduled True\r\nVolumes:\r\ntest:\r\nType: EmptyDir (a temporary directory that shares a pod's lifetime)\r\nMedium:\r\nSizeLimit: &lt;unset&gt;\r\nkube-api-access-bk75f:\r\nType: Projected (a volume that contains injected data from multiple sources)\r\nTokenExpirationSeconds: 3607\r\nConfigMapName: kube-root-ca.crt\r\nConfigMapOptional: &lt;nil&gt;\r\nDownwardAPI: 
true\r\nConfigMapName: openshift-service-ca.crt\r\nConfigMapOptional: &lt;nil&gt;\r\nQoS Class: Burstable\r\nNode-Selectors: &lt;none&gt;\r\nTolerations: node.kubernetes.io\/memory-pressure:NoSchedule op=Exists\r\nnode.kubernetes.io\/not-ready:NoExecute op=Exists for 300s\r\nnode.kubernetes.io\/unreachable:NoExecute op=Exists for 300s\r\nEvents:\r\nType Reason Age From Message\r\n---- ------ ---- ---- -------\r\nNormal Scheduled 5m43s default-scheduler Successfully assigned makarewicz-openshift-dev\/morevol2 to ip-10-0-196-172.ec2.internal\r\nNormal AddedInterface 5m43s multus Add eth0 [10.128.7.223\/23] from ovn-kubernetes\r\nNormal Pulling 5m43s kubelet Pulling image \"centos:7\"\r\nNormal Pulled 5m38s kubelet Successfully pulled image \"centos:7\" in 4.431932317s (4.431940349s including waiting)\r\nNormal Created 5m38s kubelet Created container centos1\r\nNormal Started 5m38s kubelet Started container centos1\r\nNormal Pulled 5m38s kubelet Container image \"centos:7\" already present on machine\r\nNormal Created 5m38s kubelet Created container centos2\r\nNormal Started 5m38s kubelet Started container centos2<\/pre>\n<p>As we can see, the morevol2 pod has two containers, centos1 and centos2, and both mount the same <code>test<\/code> volume.<\/p>\n<p>Let&#8217;s check it:<\/p>\n<pre class=\"lang:default decode:true\">$ oc exec -it morevol2 -c centos1 -- sh\r\nsh-4.2$ cd \/\r\nsh-4.2$ ls\r\nanaconda-post.log  bin  centos1  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var\r\nsh-4.2$ echo hello-world &gt; \/centos1\/hello-world.txt\r\nsh-4.2$ ls -l \/centos1\/\r\ntotal 4\r\n-rw-r--r--. 1 1004130000 1004130000 12 Jul 23 15:10 hello-world.txt\r\nsh-4.2$ exit\r\nexit<\/pre>\n<p>We can also exec into the container in a different way:<\/p>\n<pre class=\"lang:default decode:true\">$ oc exec morevol2 -c centos1 -- ls -l \/centos1\r\ntotal 4\r\n-rw-r--r--. 
1 1004130000 1004130000 12 Jul 23 15:10 hello-world.txt<\/pre>\n<p>&#8212;<\/p>\n<p><strong>Decoupling Storage with Persistent Volumes<\/strong><\/p>\n<p><span style=\"color: #3366ff;\">Understanding Persistent Volume<\/span><\/p>\n<ul>\n<li>Persistent volumes (PVs) provide storage in a decoupled way<\/li>\n<li>Administrators create persistent volumes of a type that matches the site-specific storage solution<\/li>\n<li>Alternatively, StorageClass can be used to automatically provision persistent volumes<\/li>\n<li>Persistent volumes are available for the entire cluster and not bound to a specific project<\/li>\n<li>Once a persistent volume is bound to a persistent volume claim (PVC), it cannot service any other claims<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Understanding Persistent Volume Claim<\/span><\/p>\n<ul>\n<li>Developers define a persistent volume claim to add access to persistent volumes to their applications<\/li>\n<li>The Pod volume uses the persistent volume claim to access storage in a decoupled way<\/li>\n<li>The persistent volume claim does not bind to a specific persistent volume, but uses any persistent volume that matches the claim requirements<\/li>\n<li>If no matching persistent volume is found, the persistent volume claim will wait until one becomes available<\/li>\n<li>When a matching persistent volume is found, the persistent volume binds to the persistent volume claim<\/li>\n<\/ul>\n<p>Let&#8217;s explore the different options that exist for persistent volumes:<\/p>\n<pre class=\"lang:default decode:true \">$ oc explain pv.spec | less\r\n\r\nKIND:     PersistentVolume\r\nVERSION:  v1\r\nRESOURCE: spec &lt;Object&gt;\r\nDESCRIPTION:\r\n     spec defines a specification of a persistent volume owned by the cluster.\r\n     Provisioned by an administrator. 
More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#persistent-volumes\r\n     PersistentVolumeSpec is the specification of a persistent volume.\r\nFIELDS:\r\n   accessModes  &lt;[]string&gt;\r\n     accessModes contains all ways the volume can be mounted. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#access-modes\r\n\r\n   awsElasticBlockStore &lt;Object&gt;\r\n     awsElasticBlockStore represents an AWS Disk resource that is attached to a\r\n     kubelet's host machine and then exposed to the pod. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#awselasticblockstore\r\n\r\n   azureDisk    &lt;Object&gt;\r\n     azureDisk represents an Azure Data Disk mount on the host and bind mount to\r\n     the pod.\r\n\r\n   azureFile    &lt;Object&gt;\r\n     azureFile represents an Azure File Service mount on the host and bind mount\r\n     to the pod.\r\n\r\n   capacity     &lt;map[string]string&gt;\r\n     capacity is the description of the persistent volume's resources and\r\n     capacity. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#capacity\r\n\r\n   cephfs       &lt;Object&gt;\r\n     cephFS represents a Ceph FS mount on the host that shares a pod's lifetime\r\n\r\n   cinder       &lt;Object&gt;\r\n     cinder represents a cinder volume attached and mounted on kubelets host\r\n     machine. More info: https:\/\/examples.k8s.io\/mysql-cinder-pd\/README.md\r\n\r\n   claimRef     &lt;Object&gt;\r\n     claimRef is part of a bi-directional binding between PersistentVolume and\r\n     PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName\r\n     is the authoritative bind between PV and PVC. 
More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#binding\r\n\r\n   csi  &lt;Object&gt;\r\n     csi represents storage that is handled by an external CSI driver (Beta\r\n     feature).\r\n\r\n   fc   &lt;Object&gt;\r\n     fc represents a Fibre Channel resource that is attached to a kubelet's host\r\n     machine and then exposed to the pod.\r\n\r\n   flexVolume   &lt;Object&gt;\r\n     flexVolume represents a generic volume resource that is\r\n     provisioned\/attached using an exec based plugin.\r\n\r\n   flocker      &lt;Object&gt;\r\n     flocker represents a Flocker volume attached to a kubelet's host machine\r\n     and exposed to the pod for its usage. This depends on the Flocker control\r\n     service being running\r\n\r\n   gcePersistentDisk    &lt;Object&gt;\r\n     gcePersistentDisk represents a GCE Disk resource that is attached to a\r\n     kubelet's host machine and then exposed to the pod. Provisioned by an\r\n     admin. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#gcepersistentdisk\r\n\r\n   glusterfs    &lt;Object&gt;\r\n     glusterfs represents a Glusterfs volume that is attached to a host and\r\n     exposed to the pod. Provisioned by an admin. More info:\r\n     https:\/\/examples.k8s.io\/volumes\/glusterfs\/README.md\r\n\r\n   hostPath     &lt;Object&gt;\r\n     hostPath represents a directory on the host. Provisioned by a developer or\r\n     tester. This is useful for single-node development and testing only!\r\n     On-host storage is not supported in any way and WILL NOT WORK in a\r\n     multi-node cluster. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#hostpath\r\n\r\n   iscsi        &lt;Object&gt;\r\n     iscsi represents an ISCSI Disk resource that is attached to a kubelet's\r\n     host machine and then exposed to the pod. 
Provisioned by an admin.\r\n\r\n   local        &lt;Object&gt;\r\n     local represents directly-attached storage with node affinity\r\n\r\n   mountOptions &lt;[]string&gt;\r\n     mountOptions is the list of mount options, e.g. [\"ro\", \"soft\"]. Not\r\n     validated - mount will simply fail if one is invalid. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/#mount-options\r\n\r\n   nfs  &lt;Object&gt;\r\n     nfs represents an NFS mount on the host. Provisioned by an admin. More\r\n     info: https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes#nfs\r\n\r\n   nodeAffinity &lt;Object&gt;\r\n     nodeAffinity defines constraints that limit what nodes this volume can be\r\n     accessed from. This field influences the scheduling of pods that use this\r\n     volume.\r\n\r\n   persistentVolumeReclaimPolicy        &lt;string&gt;\r\n     persistentVolumeReclaimPolicy defines what happens to a persistent volume\r\n     when released from its claim. Valid options are Retain (default for\r\n     manually created PersistentVolumes), Delete (default for dynamically\r\n     provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be\r\n     supported by the volume plugin underlying this PersistentVolume. More info:\r\n     https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes#reclaiming\r\n\r\n     Possible enum values:\r\n     - `\"Delete\"` means the volume will be deleted from Kubernetes on release\r\n     from its claim. The volume plugin must support Deletion.\r\n     - `\"Recycle\"` means the volume will be recycled back into the pool of\r\n     unbound persistent volumes on release from its claim. The volume plugin\r\n     must support Recycling.\r\n     - `\"Retain\"` means the volume will be left in its current phase (Released)\r\n     for manual reclamation by the administrator. 
The default policy is Retain.\r\n\r\n   photonPersistentDisk &lt;Object&gt;\r\n     photonPersistentDisk represents a PhotonController persistent disk attached\r\n     and mounted on kubelets host machine\r\n\r\n   portworxVolume       &lt;Object&gt;\r\n     portworxVolume represents a portworx volume attached and mounted on\r\n     kubelets host machine\r\n\r\n   quobyte      &lt;Object&gt;\r\n     quobyte represents a Quobyte mount on the host that shares a pod's lifetime\r\n\r\n   rbd  &lt;Object&gt;\r\n     rbd represents a Rados Block Device mount on the host that shares a pod's\r\n     lifetime. More info: https:\/\/examples.k8s.io\/volumes\/rbd\/README.md\r\n\r\n   scaleIO      &lt;Object&gt;\r\n     scaleIO represents a ScaleIO persistent volume attached and mounted on\r\n     Kubernetes nodes.\r\n\r\n   storageClassName     &lt;string&gt;\r\n     storageClassName is the name of StorageClass to which this persistent\r\n     volume belongs. Empty value means that this volume does not belong to any\r\n     StorageClass.\r\n\r\n   storageos    &lt;Object&gt;\r\n     storageOS represents a StorageOS volume that is attached to the kubelet's\r\n     host machine and mounted into the pod More info:\r\n     https:\/\/examples.k8s.io\/volumes\/storageos\/README.md\r\n\r\n   volumeMode   &lt;string&gt;\r\n     volumeMode defines if a volume is intended to be used with a formatted\r\n     filesystem or to remain in raw block state. 
Value of Filesystem is implied\r\n     when not included in spec.\r\n\r\n   vsphereVolume        &lt;Object&gt;\r\n     vsphereVolume represents a vSphere volume attached and mounted on kubelets\r\n     host machine<\/pre>\n<p>Consider such a <code>pv.yaml<\/code> file:<\/p>\n<pre class=\"lang:default decode:true \">kind: PersistentVolume\r\napiVersion: v1\r\nmetadata:\r\n  name: pv-volume\r\n  labels:\r\n    type: local\r\nspec:\r\n  capacity:\r\n    storage: 2Gi\r\n  accessModes:\r\n    - ReadWriteOnce\r\n  hostPath:\r\n    path: \"\/mnt\/mydata\"<\/pre>\n<p>We can&#8217;t create persistent volumes as the developer user, so we must switch to the kubeadmin user:<\/p>\n<pre class=\"lang:default decode:true\">$ oc login -u developer -p developer\r\nLogin successful.\r\n\r\n$ oc create -f pv.yaml\r\nError from server (Forbidden): error when creating \"pv.yaml\": persistentvolumes is forbidden: User \"developer\" cannot create persistentvolumes at the cluster scope: no RBAC policy matched\r\n\r\n$ oc login -u kubeadmin -p 6FJQ-4NAO4gaRD5Fk_2r2L-zufcFrIrKzwJ-Tsrxtr0 https:\/\/api.sandbox-m3.1530.p1.openshiftapps.com:6443\r\nUsing project \"default\".\r\n\r\n$ oc create -f pv.yaml\r\npersistentvolume\/pv-volume created\r\n\r\n$ oc get pv\r\nNAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE\r\n<strong>pv-volume   2Gi        RWO            Retain           Available                                      1m<\/strong>\r\npv0001      100Gi      RWO,ROX,RWX    Recycle          Available                                      21h\r\npv0002      100Gi      RWO,ROX,RWX    Recycle          Available                                      21h\r\npv0003      100Gi      RWO,ROX,RWX    Recycle          Available                                      21h\r\npv0004      100Gi      RWO,ROX,RWX    Recycle          Available                                      21h\r\n...\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>Now, consider <code>pvc.yaml<\/code>:<\/p>\n<pre class=\"lang:default decode:true\">kind: PersistentVolumeClaim\r\napiVersion: v1\r\nmetadata:\r\n  name: <strong>pv-claim<\/strong>\r\nspec:\r\n  accessModes:\r\n    - ReadWriteOnce\r\n  resources:\r\n    requests:\r\n      storage: 1Gi<\/pre>\n<p>Let&#8217;s create <code>pv-claim<\/code>:<\/p>\n<pre class=\"lang:default decode:true\">$ oc login -u developer -p developer\r\n\r\n$ oc create -f pvc.yaml\r\npersistentvolumeclaim\/pv-claim created\r\n\r\n$ oc get pvc\r\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\r\npv-claim Pending gp3 58m\r\n\r\n$ oc get pv\r\nNo resources found.\r\nError from server (Forbidden): persistentvolumes is forbidden: User \"developer\" cannot list persistentvolumes at the cluster scope: no RBAC policy matched\r\n$ oc login -u kubeadmin -p 6FJQ-4NAO4gaRD5Fk_2r2L-zufcFrIrKzwJ-Tsrxtr0 https:\/\/api.sandbox-m3.1530.p1.openshiftapps.com:6443\r\nUsing project \"default\".\r\n\r\n$ oc get pv\r\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\r\n<strong>pv-volume <\/strong>2Gi RWO Retain Bound<strong> myproject\/pv-claim  <\/strong>1h\r\npv0001 100Gi RWO,ROX,RWX Recycle Available 23h\r\npv0002 100Gi RWO,ROX,RWX Recycle Available 23h\r\npv0003 100Gi RWO,ROX,RWX Recycle Available 23h<\/pre>\n<p>As we can see above, <code>pv-claim<\/code> is bound to <code>pv-volume<\/code> in <code>myproject<\/code>.<\/p>\n<p>&nbsp;<\/p>\n<p>Consider such a <code>pv-pod.yaml<\/code> file:<\/p>\n<pre class=\"lang:default decode:true\">kind: Pod\r\napiVersion: v1\r\nmetadata:\r\n  name: pv-pod\r\nspec:\r\n  volumes:\r\n    - name: pv-storage\r\n      persistentVolumeClaim:\r\n        claimName: pv-claim\r\n  containers:\r\n    - name: pv-container\r\n      image: bitnami\/nginx\r\n      securityContext:\r\n        privileged: true\r\n      ports:\r\n        - containerPort: 80\r\n          name: \"http-server\"\r\n      volumeMounts:\r\n        - mountPath: \"\/usr\/share\/nginx\/html\"\r\n          name: pv-storage\r\n<\/pre>\n<p><code>bitnami\/nginx<\/code> is a rootless container.<\/p>\n<pre class=\"lang:default decode:true \">$ oc create -f pv-pod.yaml\r\npod\/pv-pod created\r\n<\/pre>\n<p>Let&#8217;s check it out:<\/p>\n<pre class=\"lang:default decode:true \">$ oc describe pod\/pv-pod\r\nName:               pv-pod\r\nNamespace:          default\r\nPriority:           0\r\nPriorityClassName:  &lt;none&gt;\r\nNode:               localhost\/172.30.9.22\r\nStart Time:         Sun, 23 Jul 2023 20:31:25 +0200\r\nLabels:             &lt;none&gt;\r\nAnnotations:        openshift.io\/scc=privileged\r\nStatus:             <strong>Running<\/strong>\r\nIP:                 172.17.0.9\r\nContainers:\r\n  pv-container:\r\n    Container ID:   docker:\/\/0c67067e54302435159b1faacbd126c7359023a145f37a3ff14ae89569de35b1\r\n    Image:          bitnami\/nginx\r\n    Image ID:       docker-pullable:\/\/bitnami\/nginx@sha256:2d1f6e1612377bbff8f0aa08b1e577da899c16446df5fb47f015a2cc6a54225f\r\n    Port:           80\/TCP\r\n    Host Port:      0\/TCP\r\n    State:          Running\r\n      Started:      Sun, 23 Jul 2023 20:31:28 +0200\r\n    Ready:          True\r\n    Restart Count:  0\r\n    Environment:    &lt;none&gt;\r\n    Mounts:\r\n      \/usr\/share\/nginx\/html from pv-storage (rw)\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from default-token-r665c (ro)\r\nConditions:\r\n  Type              Status\r\n  Initialized       True\r\n  Ready             True\r\n  ContainersReady   True\r\n  PodScheduled      True\r\nVolumes:\r\n  pv-storage:\r\n    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\r\n    ClaimName:  pv-claim\r\n    ReadOnly:   false\r\n  default-token-r665c:\r\n    Type:        Secret (a volume populated by a Secret)\r\n    SecretName:  default-token-r665c\r\n    Optional:    false\r\nQoS Class:       BestEffort\r\nNode-Selectors:  &lt;none&gt;\r\nTolerations:     &lt;none&gt;\r\nEvents:\r\n  Type 
   Reason     Age   From                Message\r\n  ----    ------     ----  ----                -------\r\n  Normal  Scheduled  4s    default-scheduler   Successfully assigned default\/pv-pod to localhost\r\n  Normal  Pulling    3s    kubelet, localhost  pulling image \"bitnami\/nginx\"\r\n  Normal  Pulled     2s    kubelet, localhost  Successfully pulled image \"bitnami\/nginx\"\r\n  Normal  Created    2s    kubelet, localhost  Created container\r\n  Normal  Started    1s    kubelet, localhost  Started container\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Understanding StorageClass<\/span><\/p>\n<ul>\n<li>Persistent Volumes are used to statically allocate storage<\/li>\n<li>StorageClass allows containers to use the default storage that is provided in a cluster<\/li>\n<li>From the developer perspective it doesn&#8217;t make a difference, as the developer uses only a PVC to connect to the available storage<\/li>\n<li>Based on their properties, PVCs can bind to any StorageClass<\/li>\n<li>Set a default StorageClass to allow developers to bind to the default storage class automatically, without specifying anything specific in the PVC<\/li>\n<li>If no default StorageClass is set, the PVC needs to specify the name of the StorageClass it wants to bind to<\/li>\n<li>To set a StorageClass as default, use <code>oc annotate storageclass standard --overwrite \"storageclass.kubernetes.io\/is-default-class=true\"<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Understanding StorageClass Provisioners<\/span><\/p>\n<ul>\n<li>In order to create persistent volumes on demand, the storage class needs a provisioner<\/li>\n<li>The following default provisioners are provided:\n<ul>\n<li>AWS EBS<\/li>\n<li>Azure File<\/li>\n<li>Azure Disk<\/li>\n<li>Cinder<\/li>\n<li>GCE Persistent Disk<\/li>\n<li>VMware vSphere<\/li>\n<\/ul>\n<\/li>\n<li>If you create a storage class for a volume plug-in that does not have a corresponding provisioner, use a storage class provisioner value of <code>kubernetes.io\/no-provisioner<\/code><\/li>\n<\/ul>\n<p>Let&#8217;s see how we can use a storage class in a manual configuration.<\/p>\n<p>Consider such a <code>pv-pvc-pod.yaml<\/code> file:<\/p>\n<pre class=\"lang:default decode:true \">apiVersion: v1\r\nkind: PersistentVolume \r\nmetadata: \r\n  name: local-pv-volume \r\nspec: \r\n  storageClassName: manual \r\n  capacity: \r\n    storage: 10Gi \r\n  accessModes: \r\n    - ReadWriteOnce \r\n  hostPath: \r\n    path: \"\/mnt\/data\"\r\n--- \r\napiVersion: v1 \r\nkind: PersistentVolumeClaim \r\nmetadata: \r\n  name: local-pv-claim \r\n  namespace: myvol \r\nspec: \r\n  storageClassName: manual \r\n  accessModes: \r\n    - ReadWriteOnce \r\n  resources: \r\n    requests: \r\n      storage: 3Gi\r\n--- \r\napiVersion: v1 \r\nkind: Pod \r\nmetadata: \r\n  name: local-pv-pod \r\n  namespace: myvol \r\nspec: \r\n  volumes: \r\n    - name: local-pv-storage \r\n      persistentVolumeClaim: \r\n        claimName: local-pv-claim \r\n  containers: \r\n    - name: local-pv-container \r\n      image: nginx\r\n      ports: \r\n        - containerPort: 80 \r\n          name: \"http-server\" \r\n      volumeMounts: \r\n        - mountPath: \"\/usr\/share\/nginx\/html\" \r\n          name: local-pv-storage \r\n<\/pre>\n<p>Let&#8217;s create it:<\/p>\n<pre class=\"lang:default decode:true\">$ oc create -f pv-pvc-pod.yaml \r\npersistentvolume\/local-pv-volume created \r\npersistentvolumeclaim\/local-pv-claim created \r\npod\/local-pv-pod created \r\n\r\n$ oc get pv \r\nNAME   CAPACITY ACCESS MODES RECLAIM POLICY   STATUS  CLAIM  STORAGECLASS REASON  AGE \r\nlocal-pv-volume 10Gi RWO Retain  Bound  myvol\/local-pv-claim manual  3s \r\npv-volume \r\npv0001 
\r\n...<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Understanding ConfigMap<\/span><\/p>\n<ul>\n<li>ConfigMaps are used to decouple information<\/li>\n<li>Different types of information can be stored in ConfigMaps\n<ul>\n<li>Command line parameters<\/li>\n<li>Variables<\/li>\n<li>Configuration files<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Procedure to work with ConfigMaps<\/span><\/p>\n<ul>\n<li>Start by defining the ConfigMap and create it\n<ul>\n<li>Consider the different sources that can be used for ConfigMaps<\/li>\n<li><code>kubectl create cm myconf --from-file=my.conf<\/code><\/li>\n<li><code>kubectl create cm variables --from-env-file=variables<\/code><\/li>\n<li><code>kubectl create cm special --from-literal=VAR3=cow --from-literal=VAR4=goat<\/code><\/li>\n<li>Verify creation using <code>kubectl describe cm &lt;cmname&gt;<\/code><\/li>\n<\/ul>\n<\/li>\n<li>Use <code>--from-file<\/code> to put the contents of a configuration file in the ConfigMap<\/li>\n<li>Use <code>--from-env-file<\/code> to define variables<\/li>\n<li>Use <code>--from-literal<\/code> to define variables or command line arguments<\/li>\n<\/ul>\n<p>Let&#8217;s see how to create config maps from variables:<\/p>\n<pre class=\"lang:default decode:true \">$ oc create configmap NAME -h\r\nCreate a configmap based on a file, directory, or specified literal value.\r\n...\r\nAliases:\r\nconfigmap, cm\r\nUsage:\r\n  oc create configmap NAME [--from-file=[key=]source] [--from-literal=key1=value1] [--dry-run] [flags]\r\nExamples:\r\n  # Create a new configmap named my-config based on folder bar\r\n  oc create configmap my-config --from-file=path\/to\/bar\r\n\r\n  # Create a new configmap named my-config with specified keys instead of file basenames on disk\r\n  oc create configmap my-config --from-file=key1=\/path\/to\/bar\/file1.txt --from-file=key2=\/path\/to\/bar\/file2.txt\r\n\r\n  # Create a new configmap named my-config with key1=config1 and key2=config2\r\n  oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2\r\n\r\n  # Create a new configmap named my-config from the key=value pairs in the file\r\n  oc create configmap my-config --from-file=path\/to\/bar\r\n\r\n  # Create a new configmap named my-config from an env file\r\n  oc create configmap my-config --from-env-file=path\/to\/bar.env\r\n...\r\n<\/pre>\n<p>We have such a file with variables:<\/p>\n<pre class=\"lang:default decode:true \">$ cat varfile.txt\r\nVAR1=Hello\r\nVAR2=World\r\n<\/pre>\n<p>We can create a config map from this file:<\/p>\n<pre class=\"lang:default decode:true \">$ oc create cm variables --from-env-file=varfile.txt\r\nconfigmap\/variables created\r\n\r\n$ oc get cm variables -o yaml\r\napiVersion: v1\r\ndata:\r\n  VAR1: Hello\r\n  VAR2: World\r\nkind: ConfigMap\r\nmetadata:\r\n  creationTimestamp: 2023-07-23T19:41:57Z\r\n  name: variables\r\n  namespace: default\r\n  resourceVersion: \"360417\"\r\n  selfLink: \/api\/v1\/namespaces\/default\/configmaps\/variables\r\n  uid: fbea531c-2990-11ee-8f96-8e5760356a66\r\n\r\n$ oc describe cm variables\r\nName:         variables\r\nNamespace:    default\r\nLabels:       &lt;none&gt;\r\nAnnotations:  &lt;none&gt;\r\n\r\nData\r\n====\r\nVAR1:\r\n----\r\nHello\r\nVAR2:\r\n----\r\nWorld\r\nEvents:  &lt;none&gt;\r\n<\/pre>\n<p>If we want to use it:<\/p>\n<pre class=\"lang:default decode:true\">$ cat cm-test-pod.yaml\r\n\r\napiVersion: v1\r\nkind: Pod \r\nmetadata: \r\n  name: test1 \r\nspec: \r\n  containers: \r\n  - name: test1 \r\n    image: cirros \r\n    command: [\"\/bin\/sh\", \"-c\", \"env\"] \r\n    envFrom: \r\n      - configMapRef: \r\n          name: variables \r\n<\/pre>\n<p>Create pod:<\/p>\n<pre class=\"lang:default decode:true \">$ oc create -f cm-test-pod.yaml\r\npod\/test1 created\r\n<\/pre>\n<p>Let&#8217;s check the logs.<\/p>\n<p>
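<\/p>\n<p>As a side note, the same ConfigMap could also be consumed one key at a time with the standard Kubernetes <code>valueFrom<\/code> field instead of <code>envFrom<\/code> (a minimal sketch, assuming the <code>variables<\/code> ConfigMap created above; the pod name <code>test2<\/code> is made up for illustration):<\/p>\n<pre class=\"lang:default decode:true \">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: test2\r\nspec:\r\n  containers:\r\n  - name: test2\r\n    image: cirros\r\n    command: [\"\/bin\/sh\", \"-c\", \"env\"]\r\n    env:\r\n    # inject only the VAR1 key from the variables ConfigMap\r\n    - name: VAR1\r\n      valueFrom:\r\n        configMapKeyRef:\r\n          name: variables\r\n          key: VAR1<\/pre>\n<p>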
In the logs we see VAR1 and VAR1.<\/p>\n<pre class=\"lang:default decode:true\">$ oc logs test1\r\nROUTER_PORT_80_TCP_PROTO=tcp\r\nKUBERNETES_SERVICE_PORT=443\r\nKUBERNETES_PORT=tcp:\/\/172.30.0.1:443\r\nHOSTNAME=test1\r\nSHLVL=1\r\nDOCKER_REGISTRY_PORT_5000_TCP_ADDR=172.30.1.1\r\nHOME=\/root\r\nROUTER_SERVICE_PORT_80_TCP=80\r\nROUTER_PORT_80_TCP=tcp:\/\/172.30.110.170:80\r\nROUTER_PORT_443_TCP_ADDR=172.30.110.170\r\nDOCKER_REGISTRY_PORT_5000_TCP_PORT=5000\r\nDOCKER_REGISTRY_PORT_5000_TCP_PROTO=tcp\r\nROUTER_PORT_443_TCP_PORT=443\r\nROUTER_PORT_443_TCP_PROTO=tcp\r\nDOCKER_REGISTRY_SERVICE_HOST=172.30.1.1\r\nDOCKER_REGISTRY_PORT_5000_TCP=tcp:\/\/172.30.1.1:5000\r\nDOCKER_REGISTRY_SERVICE_PORT_5000_TCP=5000\r\nKUBERNETES_PORT_443_TCP_ADDR=172.30.0.1\r\n<strong>VAR1=Hello<\/strong>\r\nROUTER_SERVICE_PORT_443_TCP=443\r\nROUTER_PORT_443_TCP=tcp:\/\/172.30.110.170:443\r\nPATH=\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin\r\n<strong>VAR2=World<\/strong>\r\nKUBERNETES_PORT_443_TCP_PORT=443\r\nROUTER_SERVICE_HOST=172.30.110.170\r\nKUBERNETES_PORT_443_TCP_PROTO=tcp\r\nDOCKER_REGISTRY_SERVICE_PORT=5000\r\nDOCKER_REGISTRY_PORT=tcp:\/\/172.30.1.1:5000\r\nROUTER_PORT_1936_TCP_ADDR=172.30.110.170\r\nROUTER_PORT_1936_TCP_PORT=1936\r\nROUTER_SERVICE_PORT=80\r\nROUTER_PORT=tcp:\/\/172.30.110.170:80\r\nROUTER_PORT_1936_TCP_PROTO=tcp\r\nKUBERNETES_PORT_443_TCP=tcp:\/\/172.30.0.1:443\r\nKUBERNETES_SERVICE_PORT_HTTPS=443\r\nKUBERNETES_SERVICE_HOST=172.30.0.1\r\nPWD=\/\r\nROUTER_PORT_80_TCP_ADDR=172.30.110.170\r\nROUTER_SERVICE_PORT_1936_TCP=1936\r\nROUTER_PORT_1936_TCP=tcp:\/\/172.30.110.170:1936\r\nROUTER_PORT_80_TCP_PORT=80\r\n<\/pre>\n<p>Now, let&#8217;s look on tthe second demo which is using config map in diffrent way.<\/p>\n<pre class=\"lang:default decode:true\">$ oc create cm morevars --from-literal=VAR3=goat --from-literal=VAR4=cow\r\nconfigmap\/morevars created\r\n\r\n$ oc get cm morevars\r\nNAME       DATA      AGE\r\nmorevars   2         
34s\r\n[root@okd ~]# oc get cm morevars -o yaml\r\napiVersion: v1\r\ndata:\r\n  VAR3: goat\r\n  VAR4: cow\r\nkind: ConfigMap\r\nmetadata:\r\n  creationTimestamp: 2023-07-24T08:35:57Z\r\n  name: morevars\r\n  namespace: default\r\n  resourceVersion: \"547714\"\r\n  selfLink: \/api\/v1\/namespaces\/default\/configmaps\/morevars\r\n  uid: 1c6f04a0-29fd-11ee-8f96-8e5760356a66\r\n<\/pre>\n<p>The third way of using config maps is also very interesting: it mounts an entire configuration file into the pod.<\/p>\n<p>We have an <code>nginx-custom-config.conf<\/code> file:<\/p>\n<pre class=\"lang:default decode:true \">server {\r\n    listen       8888;\r\n    server_name  localhost;\r\n    location \/ {\r\n        root   \/usr\/share\/nginx\/html;\r\n        index  index.html index.htm;\r\n    }\r\n}<\/pre>\n<p>Let&#8217;s create a config map from this file:<\/p>\n<pre class=\"lang:default decode:true\">$ oc create cm nginx-cm --from-file=nginx-custom-config.conf\r\nconfigmap\/nginx-cm created\r\n\r\n$ oc describe cm nginx-cm\r\nName:         nginx-cm\r\nNamespace:    default\r\nLabels:       &lt;none&gt;\r\nAnnotations:  &lt;none&gt;\r\n\r\nData\r\n====\r\nnginx-custom-config.conf:\r\n----\r\nserver {\r\n    listen       8888;\r\n    server_name  localhost;\r\n    location \/ {\r\n        root   \/usr\/share\/nginx\/html;\r\n        index  index.html index.htm;\r\n    }\r\n}\r\nEvents:  &lt;none&gt;\r\n<\/pre>\n<p>We can use this config map by creating a pod (<code>nginx-cm.yml<\/code>):<\/p>\n<pre class=\"lang:default decode:true\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx-cm\r\n  labels:\r\n    role: web\r\nspec:\r\n  containers:\r\n  - name: nginx-cm\r\n    image: nginx\r\n    volumeMounts:\r\n    - name: conf\r\n      mountPath: \/etc\/nginx\/conf.d\r\n  volumes:\r\n  - name: conf\r\n    configMap:\r\n      name: nginx-cm\r\n      items:\r\n      - key: nginx-custom-config.conf\r\n        path: default.conf<\/pre>\n<p>And create it:<\/p>\n<pre class=\"lang:default 
decode:true\">$ oc create -f nginx-cm.yml\r\npod\/nginx-cm created\r\n<\/pre>\n<p>We can use <code>oc exec<\/code> to start a shell in the pod that was just created:<\/p>\n<pre class=\"lang:default decode:true \">$ oc exec -it nginx-cm -- \/bin\/bash<\/pre>\n<p>Inside the pod we can <code>cat<\/code> the nginx configuration:<\/p>\n<pre class=\"lang:default decode:true \">root@nginx-cm:\/# cat \/etc\/nginx\/conf.d\/default.conf\r\nserver {\r\n    listen       8888;\r\n    server_name  localhost;\r\n    location \/ {\r\n        root   \/usr\/share\/nginx\/html;\r\n        index  index.html index.htm;\r\n    }\r\n}\r\n\r\nroot@nginx-cm:\/# (Ctrl+D)\r\nexit\r\n$\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Local Storage Operator<\/span><\/p>\n<ul>\n<li>Operators can be used to configure additional resources based on custom resource definitions<\/li>\n<li>Different storage types in OpenShift are provided as operators<\/li>\n<li>The local storage operator creates a new LocalVolume resource, but also sets up RBAC to allow integration of this resource in the cluster<\/li>\n<li>The operator itself can be implemented as ready-to-run code, which makes setting it up much easier<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Installing the Operator<\/span><\/p>\n<ul>\n<li>Type <code>crc console<\/code>; log in as <code>kubeadmin<\/code> user<\/li>\n<li>Select <strong>Operators &gt; OperatorHub<\/strong>; check the <strong>Storage<\/strong> category<\/li>\n<li>Select <strong>LocalStorage<\/strong>, click <strong>Install<\/strong> to install it<\/li>\n<li>Explore its properties in <strong>Operators &gt; Installed Operators<\/strong><\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Using the LocalStorage Operator<\/span><\/p>\n<ul>\n<li>Explore operator resources: <code>oc get all -n openshift-local-storage<\/code><\/li>\n<li>Create a block device on the CoreOS CRC machine\n<ul>\n<li><code>ssh -i ~\/.crc\/machines\/crc\/id_rsa core@$(crc 
ip)<\/code><\/li>\n<li><code>sudo -i; cd \/mnt; dd if=\/dev\/zero of=loopbackfile bs=1M count=1000<\/code><\/li>\n<li><code>losetup -fP loopbackfile<\/code><\/li>\n<li><code>ls -l \/dev\/loop0; exit<\/code><\/li>\n<\/ul>\n<\/li>\n<li><code>oc create -f localstorage.yml<\/code><\/li>\n<li><code>oc get all -n openshift-local-storage<\/code><\/li>\n<li><code>oc get sc<\/code> will show the StorageClass in a waiting state<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Lab: Managing Storage<\/span><br \/>\nRun an nginx Pod that uses persistent storage to store data in <code>\/usr\/share\/nginx\/html<\/code> persistently.<\/p>\n<p>As a solution for this lab, let&#8217;s create the following <code>pv-pvc-lab.yaml<\/code> file, which already contains everything we need:<\/p>\n<pre class=\"lang:default decode:true\">apiVersion: v1\r\nkind: PersistentVolume\r\nmetadata:\r\n  name: lab4pv\r\nspec:\r\n  storageClassName: lab4\r\n  capacity:\r\n    storage: 1Gi\r\n  accessModes:\r\n    - ReadWriteOnce\r\n  hostPath:\r\n    path: \"\/mnt\/lab4\"\r\n---\r\napiVersion: v1\r\nkind: PersistentVolumeClaim\r\nmetadata:\r\n  name: lab4-pvc\r\n  namespace: default\r\nspec:\r\n  storageClassName: lab4\r\n  accessModes:\r\n    - ReadWriteOnce\r\n  resources:\r\n    requests:\r\n      storage: 1Gi\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: lab4pod\r\n  namespace: default\r\nspec:\r\n  volumes:\r\n    - name: local-pv-storage\r\n      persistentVolumeClaim:\r\n        claimName: lab4-pvc\r\n  containers:\r\n    - name: lab4-container\r\n      image: bitnami\/nginx\r\n      ports:\r\n        - containerPort: 8888\r\n          name: \"http-server\"\r\n      volumeMounts:\r\n        - mountPath: \"\/usr\/share\/nginx\/html\"\r\n          name: local-pv-storage<\/pre>\n<p>Let&#8217;s create the resources from this file:<\/p>\n<pre class=\"lang:default decode:true \">$ oc create -f pv-pvc-lab.yaml\r\npersistentvolume\/lab4pv 
created\r\npersistentvolumeclaim\/lab4-pvc created\r\npod\/lab4pod created\r\n\r\n$ oc get pv\r\nNAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON    AGE\r\nlab4pv      1Gi        RWO            Retain           Bound       default\/lab4-pvc     lab4                     9s\r\npv-volume   2Gi        RWO            Retain           Bound       myproject\/pv-claim                            17h\r\npv0001      100Gi      RWO,ROX,RWX    Recycle          Available                                                 1d\r\n...\r\n\r\n$ oc describe pod lab4pod\r\nName:               lab4pod\r\nNamespace:          default\r\nPriority:           0\r\nPriorityClassName:  &lt;none&gt;\r\nNode:               localhost\/172.30.9.22\r\nStart Time:         Mon, 24 Jul 2023 12:29:08 +0200\r\nLabels:             &lt;none&gt;\r\nAnnotations:        openshift.io\/scc=anyuid\r\nStatus:             Running\r\nIP:                 172.17.0.12\r\nContainers:\r\n  lab4-container:\r\n    Container ID:   docker:\/\/589d0c99e2f7f06f7b1ddfcc4b1ef225b02b395a3e9de1517ca712bce9e3b9c9\r\n    Image:          bitnami\/nginx\r\n    Image ID:       docker-pullable:\/\/bitnami\/nginx@sha256:e91f0a4171ea26b0612398a4becb5503d35de67dcbbc7bde9f50374572dea6ac\r\n    Port:           8888\/TCP\r\n    Host Port:      0\/TCP\r\n    State:          Running\r\n      Started:      Mon, 24 Jul 2023 12:29:17 +0200\r\n    Ready:          True\r\n    Restart Count:  0\r\n    Environment:    &lt;none&gt;\r\n    Mounts:\r\n      \/usr\/share\/nginx\/html from local-pv-storage (rw)\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from default-token-r665c (ro)\r\nConditions:\r\n  Type              Status\r\n  Initialized       True\r\n  Ready             True\r\n  ContainersReady   True\r\n  PodScheduled      True\r\nVolumes:\r\n  local-pv-storage:\r\n    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\r\n    
ClaimName:  lab4-pvc\r\n    ReadOnly:   false\r\n  default-token-r665c:\r\n    Type:        Secret (a volume populated by a Secret)\r\n    SecretName:  default-token-r665c\r\n    Optional:    false\r\nQoS Class:       BestEffort\r\nNode-Selectors:  &lt;none&gt;\r\nTolerations:     &lt;none&gt;\r\nEvents:\r\n  Type    Reason     Age   From                Message\r\n  ----    ------     ----  ----                -------\r\n  Normal  Scheduled  38s   default-scheduler   Successfully assigned default\/lab4pod to localhost\r\n  Normal  Pulling    37s   kubelet, localhost  pulling image \"bitnami\/nginx\"\r\n  Normal  Pulled     29s   kubelet, localhost  Successfully pulled image \"bitnami\/nginx\"\r\n  Normal  Created    29s   kubelet, localhost  Created container\r\n  Normal  Started    29s   kubelet, localhost  Started container\r\n\r\n$ oc get pods\r\nNAME                            READY     STATUS             RESTARTS   AGE\r\ndocker-registry-1-ctgff         1\/1       Running            0          1d\r\n<strong>lab4pod                        <\/strong> 1\/1       Running            0          1m\r\nnginx-cm                        1\/1       Running            0          1h\r\npersistent-volume-setup-8f6lt   0\/1       Completed          0          1d\r\npv-pod                          1\/1       Running            0          15h\r\nrouter-1-k8zgt                  1\/1       Running            0          1d\r\ntest1                           0\/1       CrashLoopBackOff   29         2h\r\n<\/pre>\n<p>Everything went 
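ok.<\/p>\n<p>To verify that the storage really persists, a hedged sketch: write a file into the mounted volume (the <code>index.html<\/code> name is just an example), delete the pod, recreate it from the same file, and read the file back. Recreating from <code>pv-pvc-lab.yaml<\/code> will report the still-existing PV and PVC as already present, which can be ignored, and writing into the volume assumes the container user has write permission on the hostPath:<\/p>\n<pre class=\"lang:default decode:true \">$ oc exec lab4pod -- bash -c 'echo hello &gt; \/usr\/share\/nginx\/html\/index.html'\r\n$ oc delete pod lab4pod\r\n$ oc create -f pv-pvc-lab.yaml\r\n$ oc exec lab4pod -- cat \/usr\/share\/nginx\/html\/index.html\r\n<\/pre>\n<p>If the last command prints the file contents, the data survived the pod restart. Once again, everything went 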
ok.<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":4875,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[93],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/4869"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=4869"}],"version-history":[{"count":44,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/4869\/revisions"}],"predecessor-version":[{"id":4915,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/4869\/revisions\/4915"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media\/4875"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=4869"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=4869"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=4869"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}