OpenShift Storage

Understanding Container Storage

  • Container Storage by default is ephemeral
  • Upon deletion of a container, all files and data inside it are also deleted
  • Containers can use volumes or bind mounts to provide persistent storage
  • Bind mounts are useful in stand-alone containers; volumes are needed to decouple the storage from the container
  • Using volumes guarantees that storage outlives the container lifetime


Understanding OpenShift Storage

  • OpenShift uses persistent volumes to provision storage
  • Storage can be provisioned in a static or dynamic way
  • Static provisioning means that the cluster administrator creates the persistent volumes manually
  • Dynamic provisioning uses storage classes to create persistent volumes on demand
  • OpenShift provides storage classes as the default solution
  • Developers use persistent volume claims to dynamically add storage to their applications


Using Pod Volumes

Consider the following simple pod definition (morevolumes.yaml):
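The file itself isn’t reproduced here. A minimal sketch consistent with the names used below (pod morevol2 with containers centos1 and centos2) might look like this; the image, command, and mount paths are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: morevol2
spec:
  containers:
  - name: centos1
    image: centos:7            # image is an assumption
    command: ["sleep", "3600"] # keep the container running
    volumeMounts:
    - name: test
      mountPath: /centos1      # mount path is an assumption
  - name: centos2
    image: centos:7
    command: ["sleep", "3600"]
    volumeMounts:
    - name: test
      mountPath: /centos2
  volumes:
  - name: test
    emptyDir: {}               # shared ephemeral volume, removed with the pod
```

Because both containers mount the same emptyDir volume, a file written under the mount point in one container is visible in the other.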

Let’s create this pod:

After a while the pod is running:

Let’s see the created pod:

As we can see, the morevol2 pod has two containers: centos1 and centos2. Both containers have volumes mounted.

Let’s check it:

Exec into the container in another way:

Decoupling Storage with Persistent Volumes

Understanding Persistent Volume

  • Persistent volumes (PVs) provide storage in a decoupled way
  • Administrators create persistent volumes of a type that matches the site-specific storage solution
  • Alternatively, StorageClass can be used to automatically provision persistent volumes
  • Persistent volumes are available for the entire cluster and not bound to a specific project
  • Once a persistent volume is bound to a persistent volume claim (PVC), it cannot service any other claims


Understanding Persistent Volume Claim

  • Developers define a persistent volume claim to add access to persistent volumes to their applications
  • The Pod volume uses the persistent volume claim to access storage in a decoupled way
  • The persistent volume claim does not bind to a specific persistent volume, but uses any persistent volume that matches the claim requirements
  • If no matching persistent volume is found, the persistent volume claim will wait until it becomes available
  • When a matching persistent volume is found, the persistent volume binds to the persistent volume claim

Let’s explore the different options that exist for persistent volumes:

Consider the following pv.yaml file:
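The file contents aren’t shown here. A sketch consistent with the pv-volume name used below might be as follows; the hostPath type, capacity, and path are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 2Gi               # size is an assumption
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:                    # hostPath is an assumption; use the type matching your site storage
    path: /mnt/data
```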

We can’t create persistent volumes as the developer user, so we must switch to the kubeadmin user:


Now, consider pvc.yaml:
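A sketch consistent with the pv-claim name used below; the requested size is an assumption:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  accessModes:
  - ReadWriteOnce              # must be offered by the PV
  resources:
    requests:
      storage: 1Gi             # any PV with at least this capacity can bind
```

Note that the claim does not name a specific PV; the cluster binds it to any PV that satisfies the access mode and size.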

Let’s create the pv-claim:

As we see above, pv-claim is bound to pv-volume in myproject.


Consider the following pv-pod.yaml file:
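A sketch of such a pod, using the bitnami/nginx image mentioned below and the pv-claim created above; the pod name, port, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  volumes:
  - name: pv-storage
    persistentVolumeClaim:
      claimName: pv-claim      # refers to the PVC, not to a specific PV
  containers:
  - name: nginx
    image: bitnami/nginx       # rootless image; listens on 8080, not 80
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: pv-storage
      mountPath: /app          # mount path is an assumption
```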

bitnami/nginx is a rootless container image.

Let’s check it out:


Understanding StorageClass

  • Persistent Volumes are used to statically allocate storage
  • StorageClass allows containers to use the default storage that is provided in a cluster
  • From the developer perspective it doesn’t make a difference, as the developer uses only a PVC to connect to the available storage
  • Based on its properties, a PVC can bind to any StorageClass
  • Set a default StorageClass to allow developers to bind to the default storage class automatically, without specifying anything specific in the PVC
  • If no default StorageClass is set, the PVC needs to specify the name of the StorageClass it wants to bind to
  • To set a StorageClass as default, use oc annotate storageclass standard --overwrite "storageclass.kubernetes.io/is-default-class=true"


Understanding StorageClass Provisioners

  • In order to create persistent volumes on demand, the storage class needs a provisioner
  • The following default provisioners are provided:
    • AWS EBS
    • Azure File
    • Azure Disk
    • Cinder
    • GCE Persistent Disk
    • VMware vSphere
  • If you create a storage class for a volume plug-in that does not have a corresponding provisioner, use a storage class provisioner value of kubernetes.io/no-provisioner

Let’s see how we can use a storage class in a manual configuration.

Consider the following pv-pvc-pod.yaml file:
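The file isn’t reproduced here. A sketch that combines a PV, a PVC, and a pod in one file, with a manually assigned storageClassName; all names, the hostPath, and the sizes are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  storageClassName: manual     # manual class name is an assumption
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/local
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  storageClassName: manual     # must match the PV to bind to it
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: local-pod
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-pvc
  containers:
  - name: nginx
    image: bitnami/nginx
    volumeMounts:
    - name: data
      mountPath: /app          # mount path is an assumption
```

Because the PVC names the same storageClassName as the PV, the claim binds to that PV rather than waiting for dynamic provisioning.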

Let’s create it:


Understanding ConfigMap

  • ConfigMaps are used to decouple configuration data from applications
  • Different types of information can be stored in ConfigMaps
    • Command line parameters
    • Variables
    • Configuration files

Procedure to Work with ConfigMaps

  • Start by defining the ConfigMap and create it
    • Consider the different sources that can be used for ConfigMaps
    • kubectl create cm myconf --from-file=my.conf
    • kubectl create cm variables --from-env-file=variables
    • kubectl create cm special --from-literal=VAR3=cow --from-literal=VAR4=goat
    • Verify creation, using kubectl describe cm <cmname>
  • Use --from-file to put the contents of a configuration file in the ConfigMap
  • Use --from-env-file to define variables
  • Use --from-literal to define variables or command line arguments

Let’s see how to create config maps from variables:

We have a file with variables:
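The file’s exact contents aren’t shown. Based on the variable names mentioned below, it might look like this; the values are made up:

```
VAR1=goat
VAR2=cow
```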

We can create config map from this file:

If we want to use it:

Create the pod:
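A sketch of such a pod, assuming the ConfigMap is named variables as in the kubectl example above; the pod name and image are assumptions. envFrom injects all ConfigMap keys as environment variables, and running env prints them to the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                    # image is an assumption
    command: ["/bin/sh", "-c", "env"] # print the environment, then exit
    envFrom:
    - configMapRef:
        name: variables               # ConfigMap created above
```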

Let’s check the logs. In the logs we see VAR1 and VAR2.

Now, let’s look at the second demo, which uses a ConfigMap in a different way.

The third way of using ConfigMaps is also very interesting. It has more to do with configuration files.

We have an nginx-custom-config.conf file:
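The file contents aren’t reproduced here. A minimal custom server block might look like this; everything in it is an assumption:

```
server {
    listen 8080;
    server_name localhost;
    location / {
        root /app;
    }
}
```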

Let’s create a ConfigMap from this file:

We can use this ConfigMap by creating the pod (nginx-cm.yml):
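A sketch of such a pod, assuming the ConfigMap was named nginx-cm. The mount path reflects where the bitnami/nginx image looks for extra server blocks, but the path and all names here are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cm
spec:
  volumes:
  - name: conf
    configMap:
      name: nginx-cm           # ConfigMap created from the .conf file
  containers:
  - name: nginx
    image: bitnami/nginx
    volumeMounts:
    - name: conf
      mountPath: /opt/bitnami/nginx/conf/server_blocks  # path is an assumption
```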

And create it:

And we can use oc exec to start a shell in the pod that was just created:

Now we are inside the pod and we can cat the nginx configuration:


Local Storage Operator

  • Operators can be used to configure additional resources based on custom resource definitions
  • Different storage types in OpenShift are provided as operators
  • The local storage operator creates a new LocalVolume resource, but also sets up RBAC to allow integration of this resource in the cluster
  • The operator itself can be implemented as ready-to-run code, which makes setting it up much easier

Installing the Operator

  • Type crc console; log in as kubeadmin user
  • Select Operators > OperatorHub; check the Storage category
  • Select LocalStorage, click Install to install it
  • Explore its properties in Operators > Installed Operators

Using the LocalStorage Operator

  • Explore operator resources: oc get all -n openshift-local-storage
  • Create a block device on the CoreOS CRC machine
    • ssh -i ~/.crc/machines/crc/id_rsa core@$(crc ip)
    • sudo -i
    • cd /mnt; dd if=/dev/zero of=loopbackfile bs=1M count=1000
    • losetup -fP loopbackfile
    • ls -l /dev/loop0; exit
  • oc create -f localstorage.yml
  • oc get all -n openshift-local-storage
  • oc get sc will show the StorageClass in a waiting state
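The localstorage.yml used above isn’t shown. A sketch of a LocalVolume resource pointing at the /dev/loop0 device created earlier; the resource name and storage class name are assumptions:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks                  # name is an assumption
  namespace: openshift-local-storage
spec:
  storageClassDevices:
  - storageClassName: localblock-sc  # name is an assumption
    volumeMode: Filesystem
    devicePaths:
    - /dev/loop0                     # loop device created above
```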


Lab: Managing Storage
Run an nginx Pod that uses persistent storage to store data in /usr/share/nginx/html persistently.

For the solution of this lab, let’s create the following pv-pvc-lab.yaml file, because it already contains what we need:
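The file isn’t reproduced here. A sketch that covers the lab requirement; only the /usr/share/nginx/html mount path comes from the assignment, while all names, the hostPath, and the sizes are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: lab-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                    # hostPath is an assumption
    path: /mnt/lab
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lab-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-lab
spec:
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: lab-pvc
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html  # data here survives pod deletion
```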

Let’s create the pod from this file:

Everything went ok.