Deploying Applications the DevOps Way on Kubernetes

Helm

  • Helm is used to streamline installing and managing Kubernetes applications
  • Helm consists of the helm tool, which needs to be installed, and a chart
  • A chart is a Helm package, which contains the following:
    • A description of the package
    • One or more templates containing Kubernetes manifest files
  • Charts can be stored locally, or accessed from remote Helm repositories
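
A chart's layout can be sketched as follows; the directory and chart names here are illustrative, but the file roles match the description above. A minimal Chart.yaml:

```yaml
# Illustrative layout of a chart directory:
#
#   mychart/
#     Chart.yaml        # description of the package
#     values.yaml       # default values applied to the templates
#     templates/        # templates containing Kubernetes manifest files
#       deployment.yaml
#       service.yaml
#
# A minimal Chart.yaml (apiVersion v2 is the Helm 3 chart format):
apiVersion: v2
name: mychart
description: An example application packaged as a Helm chart
version: 0.1.0
appVersion: "1.0.0"
```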

 

Installing the Helm Binary

  • Fetch the binary from https://github.com/helm/helm/releases; check for
    the latest release!
  • tar xvf helm-xxxx.tar.gz
  • sudo mv linux-amd64/helm /usr/local/bin
  • helm version

 

Getting Access to Helm Charts

  • The main site for finding Helm charts is https://artifacthub.io
  • This is the main way to find repository names
  • Search for specific software here, and run the commands to install it; for instance, to run the Kubernetes Dashboard:
    • helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
    • helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard

 

Installing Helm Charts

  • After adding repositories, use helm repo update to ensure access to the
    most up-to-date information
  • Use helm install to install the chart with default parameters
  • After installation, use helm list to list currently installed charts
  • Optionally, use helm uninstall (or its alias helm delete) to remove currently installed charts

 

Installing Helm Charts: Commands

  • helm repo add bitnami https://charts.bitnami.com/bitnami
  • helm install bitnami/mysql --generate-name
  • kubectl get all
  • helm show chart bitnami/mysql
  • helm show all bitnami/mysql
  • helm list
  • helm status mysql-xxxx

 

Customizing Before Installing

  • A Helm chart consists of templates to which specific values are applied
  • The values are stored in the values.yaml file, within the helm chart
  • The easiest way to modify these values is by first using helm pull to fetch a local copy of the helm chart
  • Next use your favorite editor on chartname/values.yaml to change any values
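
Alternatively, keep a small override file and pass it with -f: only the keys you set override the chart defaults. A sketch, assuming the bitnami/nginx chart (verify the real key names with helm show values bitnami/nginx):

```yaml
# my-values.yaml -- overrides for selected chart defaults
# (key names are illustrative; check them against the chart's values.yaml)
replicaCount: 2
service:
  type: ClusterIP
```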

 

Customizing Before Install: Commands

  • helm show values bitnami/nginx
  • helm pull bitnami/nginx
  • tar xvf nginx-xxxx
  • vim nginx/values.yaml
  • helm template --debug nginx
  • helm install -f nginx/values.yaml my-nginx nginx/

 

Kustomize

  • kustomize is a Kubernetes feature, that uses a file with the name
    kustomization.yaml to apply changes to a set of resources
  • This is convenient for applying changes to input files that the user does not control, and whose contents may change as new versions appear in Git
  • Use kubectl apply -k ./ in the directory with the kustomization.yaml and the files it refers to in order to apply the changes
  • Use kubectl delete -k ./ in the same directory to delete all that was created by the Kustomization

 

Understanding a Sample kustomization.yaml

resources:           # defines which resources (in YAML files) apply
- deployment.yaml
- service.yaml
namePrefix: test-    # specifies a prefix to be added to all names
namespace: testing   # objects will be created in this specific namespace
commonLabels:        # labels that will be added to all objects
  environment: testing

 

Using Kustomization Overlays

  • Kustomization can be used to define a base configuration, as well as
    multiple deployment scenarios (overlays) as in dev, staging and prod for
    instance
  • In such a configuration, the main kustomization.yaml defines the structure:

- base
   - deployment.yaml
   - service.yaml
   - kustomization.yaml
- overlays
   - dev
      - kustomization.yaml
   - staging
      - kustomization.yaml
   - prod
      - kustomization.yaml

  • In each of the overlays/{dev,staging,prod}/kustomization.yaml files, users
    reference the base configuration in the resources field, and specify changes
    for that specific environment:

resources:
- ../../base
namePrefix: dev-
namespace: development
commonLabels:
  environment: development

 

Using Kustomization Commands

  • cat deployment.yaml
  • cat service.yaml
  • kubectl apply -f deployment.yaml -f service.yaml
  • cat kustomization.yaml
  • kubectl apply -k .

 

Blue/Green Deployment

  • Blue/green Deployments are a strategy to accomplish zero-downtime
    application upgrades
  • The essential element is the ability to test the new version of the application before taking it into production
  • The blue Deployment is the current application
  • The green Deployment is the new application
  • Once the green Deployment is tested and ready, traffic is re-routed to the new application version
  • Blue/green Deployments can easily be implemented using Kubernetes Services

 

Procedure Overview

  • Start with the already running application
  • Create a new Deployment that is running the new version, and test with a temporary Service resource
  • If all tests pass, remove the temporary Service resource
  • Remove the old Service resource (pointing to the blue Deployment), and immediately create a new Service resource exposing the green Deployment
  • After successful transition, remove the blue Deployment
  • It is essential to keep the Service name unchanged, so that front-end resources such as Ingress will automatically pick up the transition
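
The transition hinges on the Service selector: the Service name never changes, and only its selector moves from the blue Pods to the green Pods. A sketch (names and labels are illustrative):

```yaml
# bgnginx-service.yaml -- the stable front-end Service
# Before the switch, the selector matches the blue Deployment's Pods;
# recreating the Service with the selector below routes traffic to green
# instead, while Ingress and other front-end resources keep working
# because the Service name is unchanged.
apiVersion: v1
kind: Service
metadata:
  name: bgnginx          # name stays the same across the transition
spec:
  selector:
    app: green-nginx     # was: app: blue-nginx
  ports:
  - port: 80
```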

 

Using Blue/Green Deployments

  • kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
  • kubectl expose deploy blue-nginx --port=80 --name=bgnginx
  • kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
    • Clean up dynamically generated fields (such as status, uid, resourceVersion, and creationTimestamp)
    • Change the image version
    • Change “blue” to “green” throughout
  • kubectl create -f green-nginx.yaml
  • kubectl get pods
  • kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
  • kubectl delete deploy blue-nginx

 

Canary Deployments

  • A canary Deployment is an update strategy where you first push the update
    at small scale to see if it works well
  • In terms of Kubernetes, you could imagine a Deployment that runs 4 replicas
  • Next, you add a new Deployment that uses the same label
  • Then you create a Service that uses the same selector label for all
  • As the Service is load balancing, only 1 out of 5 requests would be serviced by the newer version
  • And if that doesn’t seem to be working, you can easily delete it
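
The mechanics can be sketched with two Deployments whose Pods share a label, and one Service selecting on that label (names, labels, and replica counts are illustrative):

```yaml
# The old Deployment runs 4 Pods labeled type: canary; an otherwise
# identical new Deployment (image: nginx:latest, replicas: 1) labels its
# Pod the same way. The Service below selects on type: canary, so it
# load balances across all 5 Pods, sending roughly 1 in 5 requests to
# the canary. Deleting the new Deployment instantly rolls it back.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: old-nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      type: canary
  template:
    metadata:
      labels:
        type: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.14
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    type: canary
  ports:
  - port: 80
```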

 

Step 1: Running the Old Version

  • kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml > ~/oldnginx.yaml
  • vim oldnginx.yaml
    • set labels: type: canary in deploy metadata as well as Pod metadata
  • kubectl create -f oldnginx.yaml
  • kubectl expose deploy old-nginx --name=nginx --port=80 --selector type=canary
  • kubectl get svc; kubectl get endpoints
  • minikube ssh; curl <svc-ip-address> a few times; you’ll see the same result every time

 

Step 2: Creating a ConfigMap

  • kubectl cp <old-nginx-pod>:/usr/share/nginx/html/index.html index.html
  • vim index.html
    • Add a line that uniquely identifies this as the canary Pod
  • kubectl create cm canary --from-file=index.html
  • kubectl describe cm canary

 

Step 3: Preparing the New Version

  • cp oldnginx.yaml canary.yaml
  • vim canary.yaml
    • image: nginx:latest
    • replicas: 1
    • :%s/old/new/g
    • Mount the configMap as a volume (see Git repo canary.yaml)
  • kubectl create -f canary.yaml
  • kubectl get svc; kubectl get endpoints
  • minikube ssh; curl <service-ip> and notice different results: this is canary in action
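
Mounting the ConfigMap over nginx's document root might look like the following fragment of the Pod template in canary.yaml (a sketch; the course Git repo's canary.yaml is authoritative):

```yaml
# Fragment of the canary Deployment's Pod template: the ConfigMap
# created in step 2 is mounted over /usr/share/nginx/html, so the
# canary Pod serves the modified index.html.
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: canary-index
      mountPath: /usr/share/nginx/html
  volumes:
  - name: canary-index
    configMap:
      name: canary
```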

Go to the Kubernetes documentation -> search: ConfigMap

 

Step 4: Activating the New Version

  • Use kubectl get deploy to verify the names of the old and the new
    deployment
  • Use kubectl scale to scale the canary deployment up to the desired number of replicas
  • kubectl delete deploy to delete the old deployment

 

Custom Resource Definitions

  • Custom Resource Definitions (CRDs) allow users to add custom resources to
    clusters
  • Doing so allows anything to be integrated in a cloud-native environment
  • The CRD allows users to add resources in a very easy way
    • The resources are added as an extension to the original Kubernetes API server
    • No programming skills required
  • The alternative way to build custom resources is via API integration
    • This will build a custom API server
    • Programming skills are required

 

Creating Custom Resources

  • Creating Custom Resources using CRDs is a two-step procedure
  • First, you’ll need to define the resource, using the CustomResourceDefinition API kind
  • After defining the resource, it can be added through its own API resource

Creating Custom Resources Commands

  • cat crd-object.yaml
  • kubectl create -f crd-object.yaml
  • kubectl api-resources | grep backup
  • cat crd-backup.yaml
  • kubectl create -f crd-backup.yaml
  • kubectl get backups
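
The two files used above might look as follows; this is a sketch, and the group, version, and field names are illustrative assumptions rather than the course's actual files:

```yaml
# crd-object.yaml -- defines the custom resource type
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:              # structural schema, required in v1
        type: object
        properties:
          spec:
            type: object
            properties:
              backupType:
                type: string
---
# crd-backup.yaml -- an instance of the new type, served by the
# Kubernetes API once the CRD above has been created
apiVersion: stable.example.com/v1
kind: Backup
metadata:
  name: mybackup
spec:
  backupType: full
```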

 

 

Operators and Controllers

  • Operators are custom applications, based on Custom Resource Definitions
  • Operators can be seen as a way of packaging, running and managing applications in Kubernetes
  • Operators are based on Controllers, which are Kubernetes components that continuously operate dynamic systems
  • The Controller loop is the essence of any Controller
  • The Kubernetes Controller manager runs a reconciliation loop, which continuously observes the current state, compares it to the desired state, and adjusts it when necessary
  • Operators are application-specific Controllers
  • Operators can be added to Kubernetes by developing them yourself
  • Operators are also available from community websites
  • A common registry for operators is found at operatorhub.io (which is rather OpenShift oriented)
  • Many solutions from the Kubernetes ecosystem are provided as operators
    • Prometheus: a monitoring and alerting solution
    • Tigera: the operator that manages the Calico network plugin
    • Jaeger: used for tracing transactions between distributed services

 

Lab:  Using Canary Deployments

  • Run an nginx Deployment that meets the following requirements
    • Use a ConfigMap to provide an index.html file containing the text “welcome to the old version”
    • Use image version 1.14
    • Run 3 replicas
  • Use the canary Deployment upgrade strategy to replace with a newer version of the application
    • Use a ConfigMap to provide an index.html in the new application, containing the text “welcome to the new version”
    • Set the image version to latest
  • Complete the transition such that the old application is completely removed after verifying successful working of the updated application