Running and Deploying Kubernetes Applications

All the services need to be in a running state. With Kubernetes, our ultimate aim is to deploy our application in the form of containers on a set of machines that are configured as worker nodes in a cluster. Kubernetes does not deploy containers directly on the worker nodes. Instead, the containers are encapsulated into a Kubernetes object known as a Pod. A Pod is a single instance of an application, and it is the smallest object that you can create in Kubernetes.

What is a Pod?

  • A Pod is an abstraction of a server
    • It can run multiple containers within a single namespace, exposed by a single IP address
  • The Pod is the minimal entity that can be managed by Kubernetes
  • From a container perspective, a Pod is an entity that typically runs one or more containers by using container images
  • Typically, Pods are only started through a Deployment, because “naked” Pods are not rescheduled in case of a node failure


Naked Pod Disadvantages

  • Naked Pods are not rescheduled in case of failure
  • Rolling updates don’t apply to naked Pods; you can only bring a naked Pod down and bring it up again with the new settings
  • Naked Pods cannot be scaled
  • Naked Pods cannot be replaced automatically


Using Deployments

  • The Deployment is the standard way for running containers in Kubernetes
  • Deployments are responsible for starting Pods in a scalable way
  • The Deployment resource uses a ReplicaSet to manage scalability
  • Also, the Deployment offers the RollingUpdate feature to allow for zero-downtime application updates
  • To start a Deployment the imperative way, use kubectl create deploy
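For example, the imperative command below creates a Deployment running NGINX (the Deployment name nginx is illustrative):

```shell
# Create a Deployment named nginx from the nginx image on Docker Hub
kubectl create deploy nginx --image=nginx
```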

Kubernetes Tools

  • Before starting the installation, you’ll have to install the Kubernetes tools
  • These include the following:
    • kubeadm: used to install and manage a Kubernetes cluster
    • kubelet: the core Kubernetes service that starts all Pods
    • kubectl: the interface that allows you to run and manage applications in Kubernetes


This command deploys a Docker container by creating a pod, so it first creates a pod automatically and deploys an instance of the NGINX Docker image. But where does it get the application image from? For that, you need to specify the image name using the --image parameter. The application image, in this case the NGINX image, is downloaded from the Docker Hub repository. You could configure Kubernetes to pull the image from the public Docker Hub or from a private repository within the organization.

Now that we have a pod created, how do we see the list of pods available?

The kubectl get pods command helps us see the list of pods in our cluster. In this case, we see the pod is in a ContainerCreating state, and it soon changes to a Running state when it is actually running.
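A sketch of the command and its output (the Pod name suffix is generated and will differ in your cluster):

```shell
kubectl get pods

NAME                     READY   STATUS              RESTARTS   AGE
nginx-748c667d99-x7kdl   0/1     ContainerCreating   0          5s
```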

To see detailed information about the pod, run:
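A sketch of the command, assuming the Pod from the earlier example is named nginx:

```shell
# Show detailed information about a single Pod
kubectl describe pod nginx
```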

This will tell you information about the Pod: when it was created, what labels are assigned to it, which Docker containers are part of it, and the events associated with that Pod.



Understanding DaemonSets

  • A DaemonSet is a resource that starts one application instance on each cluster node
  • It is commonly used to start agents like the kube-proxy that need to be running on all cluster nodes
  • It can also be used for user workloads
  • If the DaemonSet needs to run on control-plane nodes, a toleration must be configured to allow the node to run regardless of the control-plane taints

Let’s create a DaemonSet as it is written at

Copy the YAML from

After editing the kind and removing the replicas and strategy spec, the mydaemon.yaml file looks like:
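The edited file itself is not included here; below is a minimal sketch of what mydaemon.yaml could look like after those edits (the labels and the nginx image are assumptions):

```yaml
# mydaemon.yaml - a minimal DaemonSet sketch
apiVersion: apps/v1
kind: DaemonSet          # changed from Deployment; replicas and strategy removed
metadata:
  name: mydaemon
  labels:
    app: mydaemon
spec:
  selector:
    matchLabels:
      app: mydaemon
  template:
    metadata:
      labels:
        app: mydaemon
    spec:
      containers:
      - name: nginx
        image: nginx
```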

Now let’s create the DaemonSet:
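A sketch of the commands, assuming the file above is saved as mydaemon.yaml:

```shell
# Create the DaemonSet and check where its Pods are scheduled
kubectl create -f mydaemon.yaml
kubectl get daemonsets
kubectl get pods -o wide
```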

We have no DaemonSet Pods running on worker nodes because we use Minikube, which has no separate worker nodes.


Stateful and Stateless Applications

  • A stateless application is an application that doesn’t store any session data
  • Redirecting traffic in a stateless application is easy, the traffic can just be directed to another Pod instance
  • A stateful application saves session data to persistent storage
  • Databases are an example of stateful applications
  • Even if stateful applications can be started by a Deployment, it’s better to start them in a StatefulSet


A StatefulSet offers features that are needed by stateful applications

    • It provides guarantees about ordering and uniqueness of Pods
    • It maintains a sticky identifier for each of the Pods it creates
    • Pods in a StatefulSet are not interchangeable: each Pod has a persistent identifier that it maintains while being rescheduled
    • The unique Pod identifiers make it easier to match existing volumes to replaced Pods

StatefulSet Considerations

  • Storage must be automatically provisioned by a persistent volume provisioner. Pre-provisioning is challenging, as volumes need to be dynamically added when new Pods are scheduled
  • When a StatefulSet is deleted, associated volumes will not be deleted
  • A headless Service resource must be created in order to manage the network identity of Pods
  • Pods are not guaranteed to be stopped while deleting a StatefulSet, and it is recommended to scale down to zero Pods before deleting the StatefulSet

Let’s look at the below YAML file:
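The YAML file itself is not included here; below is a minimal sketch consistent with the discussion that follows (Pod names web-0, web-1 and a headless Service with clusterIP set to None; the image and storage size are assumptions):

```yaml
# Headless Service: clusterIP None gives each Pod a stable DNS identity
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"       # must reference the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  # One PVC is provisioned per Pod (www-web-0, www-web-1, ...)
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```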

StatefulSets require a headless Service, so clusterIP is set to None in the above YAML file. Let’s run it and see what it is doing:
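A sketch of the commands (the filename webserver.yaml is an assumption):

```shell
# Apply the manifest, then watch the Pods and their provisioned volumes
kubectl apply -f webserver.yaml
kubectl get pods
kubectl get pvc
```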

The generated Pod names (web-0, web-1, and so on) don’t contain a random ID; they are predictable names. Because a storage class is available here, the StatefulSet has been able to automatically allocate storage for every single instance in the StatefulSet.

Running Individual Pods

  • Running individual Pods has disadvantages:
    • No workload protection
    • No load balancing
    • No zero-downtime application updates
  • Use individual Pods only for testing, troubleshooting, and analyzing
  • In all other cases, use a Deployment, DaemonSet, or StatefulSet

Here is how we can run an individual pod:
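A sketch of the command (the Pod name sleepy and the busybox image are illustrative):

```shell
# Start a single naked Pod that just sleeps for an hour
kubectl run sleepy --image=busybox -- sleep 3600
```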

-- (dash dash, followed by a space) is the easy way to pass a command (sleep 3600) that should be started by the pod.

What can you do if you want to initialize something before the main application is started? You can run an init container.

Using Init Containers

  • If preparation is required before running the main container, use an init container
  • Init containers run to completion, and once completed the main container can be started
  • Use init containers in any case where preliminary setup is required

Consider such an init container template:
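The template itself is not included here; a minimal sketch of a Pod with an init container (the names, images, and the sleep command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  # Init containers run to completion before the main container starts
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'echo preparing... && sleep 5']
  containers:
  - name: main
    image: nginx
```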


Scaling Applications

  • kubectl scale is used to manually scale Deployment, ReplicaSet, or StatefulSet
    • kubectl scale deployment myapp --replicas=3
  • Alternatively, HorizontalPodAutoscaler can be used
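A sketch of both approaches (the Deployment name myapp is illustrative, and kubectl autoscale requires a metrics server to be installed in the cluster):

```shell
# Manual scaling to a fixed number of replicas
kubectl scale deployment myapp --replicas=3

# Automatic scaling between 1 and 5 replicas based on CPU usage
kubectl autoscale deployment myapp --min=1 --max=5 --cpu-percent=50
```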


Multi-container Pods

  • As a Pod should be created for each specific task, running single-container Pods is the standard
  • In some cases, an additional container is needed to modify or present data generated by the main container
  • Specific use cases are defined:
    • Sidecar: provides additional functionality to the main container
    • Ambassador: is used as a proxy to connect containers externally
    • Adapter: is used to standardize or normalize main container output

Multi-container Storage

  • In a multi-container Pod, Pod Volumes are often used as shared storage
  • The Pod Volume may use PersistentVolumeClaim (PVC) to refer to a PersistentVolume, but may also directly refer to the required storage
  • By using shared storage, the main container can write to it, and the helper Pod will pick up information written to the shared storage
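As a sketch of these ideas, the following Pod shares an emptyDir volume between a main container and a sidecar (all names and commands are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  volumes:
  - name: shared
    emptyDir: {}          # Pod Volume shared by both containers
  containers:
  - name: main
    image: busybox
    # Main container writes to the shared storage
    command: ['sh', '-c', 'while true; do date >> /data/log.txt; sleep 5; done']
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: sidecar
    image: busybox
    # Sidecar picks up the information written by the main container
    command: ['sh', '-c', 'tail -f /data/log.txt']
    volumeMounts:
    - name: shared
      mountPath: /data
```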


Lab: Running a DaemonSet

  • Create a DaemonSet with the name nginxdaemon.
  • Ensure it runs an Nginx Pod on every worker node.

Edit deploydaemon.yaml. Change kind and delete replicas and strategy lines.