Helm
- Helm is used to streamline installing and managing Kubernetes applications
- Helm consists of the helm tool, which needs to be installed, and a chart
- A chart is a Helm package (see the layout sketch below), which contains the following:
  - A description of the package
  - One or more templates containing Kubernetes manifest files
- Charts can be stored locally, or accessed from remote Helm repositories
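As an illustration, a chart directory typically looks roughly like this (a generic sketch; mychart is a placeholder name):

mychart/
  Chart.yaml          # the description of the package: name, version, and so on
  values.yaml         # default values applied to the templates
  templates/          # the Kubernetes manifest templates
    deployment.yaml
    service.yaml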
Installing the Helm Binary
- Fetch the binary from https://github.com/helm/helm/releases; check for the latest release!
tar xvf helm-xxxx.tar.gz
sudo mv linux-amd64/helm /usr/local/bin
helm version
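The releases page also publishes a checksum file for every archive; comparing it before unpacking is a sensible habit (a sketch, keeping the same xxxx placeholder as above):

sha256sum helm-xxxx.tar.gz
# compare the output with the .sha256sum file published alongside the release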
Getting Access to Helm Charts
- The main site for finding Helm charts is https://artifacthub.io
- This is the primary way to find repository names
- Search for specific software there and run the listed commands to install it; for instance, to run the Kubernetes Dashboard:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
[root@controller ~]# helm version
version.BuildInfo{Version:"v3.14.2", GitCommit:"c309b6f0ff63856811846ce18f3bdc93d2b4d54b", GitTreeState:"clean", GoVersion:"go1.21.7"}
[root@controller ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
[root@controller ~]# helm search repo bitnami
NAME                       CHART VERSION   APP VERSION   DESCRIPTION
bitnami/airflow            16.8.2          2.8.1         Apache Airflow is a tool to express and execute...
bitnami/apache             10.6.2          2.4.58        Apache HTTP Server is an open-source HTTP serve...
bitnami/apisix             2.8.2           3.8.0         Apache APISIX is high-performance, real-time AP...
bitnami/appsmith           2.7.2           1.13.0        Appsmith is an open source platform for buildin...
bitnami/argo-cd            5.9.0           2.10.0        Argo CD is a continuous delivery tool for Kuber...
bitnami/argo-workflows     6.6.3           3.5.4         Argo Workflows is meant to orchestrate Kubernet...
bitnami/aspnet-core        5.6.2           8.0.2         ASP.NET Core is an open-source framework for we...
bitnami/cassandra          10.11.2         4.1.4         Apache Cassandra is an open source distributed ...
bitnami/cert-manager       0.22.0          1.14.2        cert-manager is a Kubernetes add-on to automate...
bitnami/clickhouse         5.2.2           24.1.5        ClickHouse is an open-source column-oriented OL...
bitnami/common             2.16.1          2.16.1        A Library Helm Chart for grouping common logic ...
bitnami/concourse          3.5.2           7.11.2        Concourse is an automation system written in Go...
bitnami/consul             10.20.0         1.17.3        HashiCorp Consul is a tool for discovering and ...
bitnami/contour            15.5.2          1.27.1        Contour is an open source Kubernetes ingress co...
bitnami/contour-operator   4.2.1           1.24.0        DEPRECATED The Contour Operator extends the Kub...
...
[root@controller ~]# helm repo list
NAME      URL
bitnami   https://charts.bitnami.com/bitnami
[root@controller ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
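Artifact Hub can also be searched without leaving the terminal; a quick sketch:

helm search hub nginx
# lists charts published on https://artifacthub.io that match "nginx"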
Installing Helm Charts
- After adding repositories, use helm repo update to ensure access to the most up-to-date information
- Use helm install to install the chart with default parameters
- After installation, use helm list to list currently installed charts
- Optionally, use helm uninstall (helm delete still works as an alias) to remove installed charts
Installing Helm Charts: Commands
helm install bitnami/mysql --generate-name
kubectl get all
helm show chart bitnami/mysql
helm show all bitnami/mysql
helm list
helm status mysql-xxxx
[root@controller ~]# helm install bitnami/mysql --generate-name
NAME: mysql-1708951513
LAST DEPLOYED: Mon Feb 26 07:45:16 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.21.2
APP VERSION: 8.0.36

** Please be patient while the chart is being deployed **
...
[root@controller ~]# kubectl get pods -w --namespace default
NAME                            READY   STATUS    RESTARTS       AGE
counter                         2/2     Running   0              6d20h
lab124deploy-7c7c8457f9-lclk4   1/1     Running   1 (7d1h ago)   8d
lab126deploy-fff46cd4b-4drk6    1/1     Running   1 (7d1h ago)   8d
lab126deploy-fff46cd4b-lhmfs    1/1     Running   1 (7d1h ago)   8d
lab126deploy-fff46cd4b-zw5fq    1/1     Running   1 (7d1h ago)   8d
mysql-1708951513-0              0/1     Pending   0              56s
[root@controller ~]# kubectl get all
NAME                                READY   STATUS    RESTARTS       AGE
pod/counter                         2/2     Running   0              6d20h
pod/lab124deploy-7c7c8457f9-lclk4   1/1     Running   1 (7d1h ago)   8d
pod/lab126deploy-fff46cd4b-4drk6    1/1     Running   1 (7d1h ago)   8d
pod/lab126deploy-fff46cd4b-lhmfs    1/1     Running   1 (7d1h ago)   8d
pod/lab126deploy-fff46cd4b-zw5fq    1/1     Running   1 (7d1h ago)   8d
pod/mysql-1708951513-0              0/1     Pending   0              68s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes                  ClusterIP   10.96.0.1       <none>        443/TCP        10d
service/lab126deploy                NodePort    10.105.103.37   <none>        80:32567/TCP   8d
service/mysql-1708951513            ClusterIP   10.103.12.9     <none>        3306/TCP       68s
service/mysql-1708951513-headless   ClusterIP   None            <none>        3306/TCP       68s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab124deploy   1/1     1            1           8d
deployment.apps/lab126deploy   3/3     3            3           8d

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/lab124deploy-7c7c8457f9   1         1         1       8d
replicaset.apps/lab126deploy-fff46cd4b    3         3         3       8d

NAME                                READY   AGE
statefulset.apps/mysql-1708951513   0/1     68s
[root@controller ~]# helm show chart bitnami/mysql
annotations:
  category: Database
  images: |
    - name: mysql
      image: docker.io/bitnami/mysql:8.0.36-debian-12-r8
    - name: mysqld-exporter
      image: docker.io/bitnami/mysqld-exporter:0.15.1-debian-12-r8
    - name: os-shell
      image: docker.io/bitnami/os-shell:12-debian-12-r16
  licenses: Apache-2.0
..
[root@controller ~]# helm show all bitnami/mysql | more
annotations:
  category: Database
  images: |
    - name: mysql
      image: docker.io/bitnami/mysql:8.0.36-debian-12-r8
    - name: mysqld-exporter
      image: docker.io/bitnami/mysqld-exporter:0.15.1-debian-12-r8
    - name: os-shell
      image: docker.io/bitnami/os-shell:12-debian-12-r16
  licenses: Apache-2.0
..
[root@controller ~]# helm list
NAME               NAMESPACE   REVISION   UPDATED                                    STATUS     CHART          APP VERSION
mysql-1708951513   default     1          2024-02-26 07:45:16.679479351 -0500 EST   deployed   mysql-9.21.2   8.0.36
[root@controller ~]# helm status mysql-1708951513
NAME: mysql-1708951513
LAST DEPLOYED: Mon Feb 26 07:45:16 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.21.2
APP VERSION: 8.0.36

** Please be patient while the chart is being deployed **

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace default

Services:

  echo Primary: mysql-1708951513.default.svc.cluster.local:3306

Execute the following to get the administrator credentials:

  echo Username: root
  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1708951513 -o jsonpath="{.data.mysql-root-password}" | base64 -d)

To connect to your database:

  1. Run a pod that you can use as a client:

      kubectl run mysql-1708951513-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.36-debian-12-r8 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash

  2. To connect to primary service (read/write):

      mysql -h mysql-1708951513.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - primary.resources
  - secondary.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
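To undo an installation like the one above, pass the release name shown by helm list to helm uninstall; a short sketch using the release name from this transcript:

helm uninstall mysql-1708951513
# removes the release and the Kubernetes resources it created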
Customizing Before Installing
- A Helm chart consists of templates to which specific values are applied
- The values are stored in the values.yaml file within the Helm chart
- The easiest way to modify these values is by first using helm pull to fetch a local copy of the Helm chart
- Next, use your favorite editor on chartname/values.yaml to change any values
Customizing Before Install: Commands
helm show values bitnami/nginx
helm pull bitnami/nginx
tar xvf nginx-xxxx.tgz
vim nginx/values.yaml
helm template --debug nginx
helm install -f nginx/values.yaml my-nginx nginx/
[root@controller ~]# helm show values bitnami/nginx | more
# Copyright VMware, Inc.
# SPDX-License-Identifier: APACHE-2.0

## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []

## @section Common parameters
[root@controller ~]# helm pull bitnami/nginx
[root@controller ~]# tar xvf nginx-15.12.2.tgz
nginx/.helmignore
nginx/README.md
nginx/charts/common/Chart.yaml
nginx/charts/common/values.yaml
nginx/charts/common/templates/_affinities.tpl
nginx/charts/common/templates/_capabilities.tpl
...
[root@controller ~]# cd nginx/
[root@controller nginx]# cat values.yaml | more
# Copyright VMware, Inc.
# SPDX-License-Identifier: APACHE-2.0

## @section Global parameters
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
## @param global.imageRegistry Global Docker image registry
## @param global.imagePullSecrets Global Docker registry secret names as an array
##
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []

## @section Common parameters
## @param nameOverride String to partially override nginx.fullname template (will maintain the release name)
##
nameOverride: ""
...
[root@controller nginx]# cd ..
[root@controller ~]# helm template --debug nginx
install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: /root/nginx

---
# Source: nginx/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: release-name-nginx
  namespace: "default"
  labels:
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx
    app.kubernetes.io/version: 1.25.4
    helm.sh/chart: nginx-15.12.2
spec:
...
[root@controller ~]# helm install -f nginx/values.yaml my-nginx nginx/
NAME: my-nginx
LAST DEPLOYED: Mon Feb 26 08:11:01 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 15.12.2
APP VERSION: 1.25.4

** Please be patient while the chart is being deployed **
NGINX can be accessed through the following DNS name from within your cluster:

    my-nginx.default.svc.cluster.local (port 80)

To access NGINX from outside the cluster, follow the steps below:

1. Get the NGINX URL by running these commands:

    NOTE: It may take a few minutes for the LoadBalancer IP to be available.
    Watch the status with: 'kubectl get svc --namespace default -w my-nginx'

    export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].port}" services my-nginx)
    export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "http://${SERVICE_IP}:${SERVICE_PORT}"

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - cloneStaticSiteFromGit.gitSync.resources
  - resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
[root@controller ~]# kubectl get all
NAME                                READY   STATUS    RESTARTS       AGE
pod/counter                         2/2     Running   0              6d20h
pod/lab124deploy-7c7c8457f9-lclk4   1/1     Running   1 (7d1h ago)   8d
pod/lab126deploy-fff46cd4b-4drk6    1/1     Running   1 (7d1h ago)   8d
pod/lab126deploy-fff46cd4b-lhmfs    1/1     Running   1 (7d1h ago)   8d
pod/lab126deploy-fff46cd4b-zw5fq    1/1     Running   1 (7d1h ago)   8d
pod/my-nginx-f8bf59cd9-clnj5        0/1     Pending   0              38s
pod/mysql-1708951513-0              0/1     Pending   0              26m

NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes                  ClusterIP      10.96.0.1       <none>        443/TCP        10d
service/lab126deploy                NodePort       10.105.103.37   <none>        80:32567/TCP   8d
service/my-nginx                    LoadBalancer   10.98.101.190   <pending>     80:32378/TCP   38s
service/mysql-1708951513            ClusterIP      10.103.12.9     <none>        3306/TCP       26m
service/mysql-1708951513-headless   ClusterIP      None            <none>        3306/TCP       26m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab124deploy   1/1     1            1           8d
deployment.apps/lab126deploy   3/3     3            3           8d
deployment.apps/my-nginx       0/1     1            0           38s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/lab124deploy-7c7c8457f9   1         1         1       8d
replicaset.apps/lab126deploy-fff46cd4b    3         3         3       8d
replicaset.apps/my-nginx-f8bf59cd9        1         1         0       38s

NAME                                READY   AGE
statefulset.apps/mysql-1708951513   0/1     26m
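As an alternative to editing values.yaml, individual values can be overridden on the command line with --set; a minimal sketch (the release name my-nginx2 is illustrative, and service.type/replicaCount are assumed to be valid values for this chart):

helm install my-nginx2 bitnami/nginx --set service.type=NodePort --set replicaCount=2

Values given with --set take precedence over values supplied with -f and over the chart defaults.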
Kustomize
- kustomize is a Kubernetes feature that uses a file named kustomization.yaml to apply changes to a set of resources
- This is convenient for applying changes to input files that the user does not control, and whose contents may change as new versions appear in Git
- Use kubectl apply -k ./ in the directory with the kustomization.yaml and the files it refers to, to apply changes
- Use kubectl delete -k ./ in the same directory to delete everything the Kustomization created
Understanding a Sample kustomization.yaml
resources:              # defines which resources (in YAML files) apply
- deployment.yaml
- service.yaml
namePrefix: test-       # specifies a prefix that should be added to all names
namespace: testing      # objects will be created in this specific namespace
commonLabels:           # labels that will be added to all objects
  environment: testing
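The rendered result of such a file can be previewed without changing anything in the cluster:

kubectl kustomize .
# prints the resources with the prefix, namespace, and labels applied, without creating them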
Using Kustomization Overlays
- Kustomization can be used to define a base configuration, as well as multiple deployment scenarios (overlays), such as dev, staging, and prod
- In such a configuration, the kustomization.yaml files are organized in a directory structure with a base and per-environment overlays:
  base/
    deployment.yaml
    service.yaml
    kustomization.yaml
  overlays/
    dev/
      kustomization.yaml
    staging/
      kustomization.yaml
    prod/
      kustomization.yaml
- In each overlays/{dev,staging,prod}/kustomization.yaml, users reference the base configuration in the resources field and specify changes for that specific environment:
resources:
- ../../base
namePrefix: dev-
namespace: development
commonLabels:
  environment: development
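Overlays can also patch individual fields of the base resources; a sketch of what a prod overlay might add (the patches entry and the file name patch-replicas.yaml are illustrative):

resources:
- ../../base
namePrefix: prod-
namespace: production
commonLabels:
  environment: production
patches:
- path: patch-replicas.yaml

Here patch-replicas.yaml would contain a partial Deployment manifest, for instance one that raises spec.replicas for production.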
Using Kustomization Commands
cat deployment.yaml
cat service.yaml
kubectl apply -f deployment.yaml -f service.yaml
cat kustomization.yaml
kubectl apply -k .
[root@controller ckad]# cd kustomization
[root@controller kustomization]# ls
deployment.yaml  kustomization.yaml  service.yaml
[root@controller kustomization]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-09-20T14:54:12Z"
  generation: 1
  labels:
    k8s-app: nginx-friday20
  name: nginx-friday20
  namespace: default
  resourceVersion: "24766"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nginx-friday20
  uid: 4c4e3217-0fcf-4365-987c-10d089a09c1e
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: nginx-friday20
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: nginx-friday20
      name: nginx-friday20
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx-friday20
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
[root@controller kustomization]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: nginx-friday20
  name: nginx-friday20
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: nginx-friday20
status:
  loadBalancer: {}
[root@controller kustomization]# cat kustomization.yaml
resources:
- deployment.yaml
- service.yaml
namePrefix: test-
commonLabels:
  environment: testing
[root@controller kustomization]# kubectl apply -k .
service/test-nginx-friday20 created
deployment.apps/test-nginx-friday20 created
[root@controller kustomization]# kubectl get all --selector environment=testing
NAME                                       READY   STATUS    RESTARTS   AGE
pod/test-nginx-friday20-757bb757c5-4k6m8   0/1     Pending   0          4m48s
pod/test-nginx-friday20-757bb757c5-lmt2r   0/1     Pending   0          4m48s
pod/test-nginx-friday20-757bb757c5-wnfdw   0/1     Pending   0          4m47s

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/test-nginx-friday20   ClusterIP   10.105.107.35   <none>        80/TCP    4m50s

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-nginx-friday20   0/3     3            0           4m51s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/test-nginx-friday20-757bb757c5   3         3         0       4m49s
Blue/Green Deployment
- Blue/green Deployments are a strategy to accomplish zero-downtime application upgrades
- Essential is the possibility to test the new version of the application before taking it into production
- The blue Deployment is the current application
- The green Deployment is the new application
- Once the green Deployment is tested and ready, traffic is re-routed to the new application version
- Blue/green Deployments can easily be implemented using Kubernetes Services
Procedure Overview
- Start with the already running application
- Create a new Deployment that is running the new version, and test it with a temporary Service resource (see the sketch after this list)
- If all tests pass, remove the temporary Service resource
- Remove the old Service resource (pointing to the blue Deployment), and immediately create a new Service resource exposing the green Deployment
- After successful transition, remove the blue Deployment
- It is essential to keep the Service name unchanged, so that front-end resources such as Ingress will automatically pick up the transition
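A minimal sketch of the test step mentioned above (the temporary Service name green-test and the busybox test Pod are illustrative):

kubectl expose deploy green-nginx --port=80 --name=green-test
kubectl run testpod --rm -it --restart=Never --image=busybox -- wget -qO- http://green-test
# if the new version responds correctly, remove the temporary Service again
kubectl delete svc green-test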
Using Blue/Green Deployments
kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
kubectl expose deploy blue-nginx --port=80 --name=bgnginx
kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
- Clean up dynamically generated content (status, and metadata fields such as creationTimestamp, resourceVersion, and uid)
- Change the image version
- Change "blue" to "green" throughout
kubectl create -f green-nginx.yaml
kubectl get pods
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
kubectl delete deploy blue-nginx
[root@controller ~]# kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
[root@controller ~]# kubectl get all
NAME                              READY   STATUS    RESTARTS      AGE
pod/blue-nginx-69497ddbcd-55dj4   1/1     Running   0             4m38s
pod/blue-nginx-69497ddbcd-h7tqp   1/1     Running   0             4m38s
pod/blue-nginx-69497ddbcd-plglc   1/1     Running   0             4m38s
pod/counter                       2/2     Running   2 (20h ago)   8d

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   12d

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/blue-nginx   3/3     3            3           4m38s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/blue-nginx-69497ddbcd   3         3         3       4m38s
[root@controller ~]# kubectl expose deploy blue-nginx --port=80 --name=bgnginx
service/bgnginx exposed
[root@controller ~]# kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
[root@controller ~]# cat green-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2024-02-28T12:25:49Z"
  generation: 1
  labels:
    app: blue-nginx
  name: blue-nginx
  namespace: default
  resourceVersion: "2429551"
  uid: 4971a862-61b0-4f82-bf0f-3a34150ecf8b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: blue-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: blue-nginx
    spec:
      containers:
      - image: nginx:1.14
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-02-28T12:30:22Z"
    lastUpdateTime: "2024-02-28T12:30:22Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-02-28T12:25:49Z"
    lastUpdateTime: "2024-02-28T12:30:22Z"
    message: ReplicaSet "blue-nginx-69497ddbcd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
[root@controller ~]# vi green-nginx.yaml
[root@controller ~]# cat green-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: green-nginx
  name: green-nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: green-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: green-nginx
    spec:
      containers:
      - image: nginx:1.17
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
[root@controller ~]# kubectl create -f green-nginx.yaml
deployment.apps/green-nginx created
[root@controller ~]# kubectl expose deploy green-nginx --port=80 --name=green
service/green exposed
[root@controller ~]# kubectl get endpoints
NAME         ENDPOINTS                                             AGE
bgnginx      192.168.0.141:80,192.168.0.145:80,192.168.0.146:80    39m
green        192.168.0.131:80,192.168.0.142:80,192.168.0.143:80    8s
kubernetes   172.30.9.25:6443                                      12d
[root@controller ~]# kubectl delete svc green
service "green" deleted
[root@controller ~]# kubectl delete svc bgnginx
service "bgnginx" deleted
[root@controller ~]# kubectl expose deploy green-nginx --port=80 --name=bgnginx
service/bgnginx exposed
[root@controller ~]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
blue-nginx-69497ddbcd-8gc4h    1/1     Running   0          26m   192.168.0.145   worker2.example.com   <none>           <none>
blue-nginx-69497ddbcd-msnjc    1/1     Running   0          26m   192.168.0.146   worker2.example.com   <none>           <none>
blue-nginx-69497ddbcd-ww7rq    1/1     Running   0          26m   192.168.0.141   worker2.example.com   <none>           <none>
green-nginx-78cf677cf8-djnjl   1/1     Running   0          36m   192.168.0.143   worker2.example.com   <none>           <none>
green-nginx-78cf677cf8-gqbjr   1/1     Running   0          36m   192.168.0.131   worker2.example.com   <none>           <none>
green-nginx-78cf677cf8-pvcbb   1/1     Running   0          36m   192.168.0.142   worker2.example.com   <none>           <none>
[root@controller ~]# kubectl get endpoints
NAME         ENDPOINTS                                             AGE
bgnginx      192.168.0.131:80,192.168.0.142:80,192.168.0.143:80    76s
kubernetes   172.30.9.25:6443                                      12d
[root@controller ~]# kubectl delete deployment.apps/blue-nginx
deployment.apps "blue-nginx" deleted
[root@controller ~]# kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/green-nginx-78cf677cf8-djnjl   1/1     Running   0          38m
pod/green-nginx-78cf677cf8-gqbjr   1/1     Running   0          38m
pod/green-nginx-78cf677cf8-pvcbb   1/1     Running   0          38m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/bgnginx      ClusterIP   10.110.110.222   <none>        80/TCP    2m46s
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   12d

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/green-nginx   3/3     3            3           38m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/green-nginx-78cf677cf8   3         3         3       38m
Canary Deployments
- A canary Deployment is an update strategy where you first push the update at small scale to see if it works well
- In terms of Kubernetes, you could imagine a Deployment that runs 4 replicas
- Next, you add a new Deployment that uses the same label
- Then you create a Service that uses the same selector label for all
- As the Service is load balancing, only 1 out of 5 requests would be serviced by the newer version
- And if that doesn’t seem to be working, you can easily delete it
Step 1: Running the Old Version
kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml > ~/oldnginx.yaml
vim oldnginx.yaml
- Set the label type: canary in the Deployment metadata as well as in the Pod template metadata
kubectl create -f oldnginx.yaml
kubectl expose deploy old-nginx --name=nginx --port=80 --selector type=canary
kubectl get svc; kubectl get endpoints
minikube ssh; curl <svc-ip-address>
- Repeat the curl a few times; you'll see the same page every time
[root@controller ~]# kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml > ~/oldnginx.yaml
[root@controller ~]# cat oldnginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: old-nginx
  name: old-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: old-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: old-nginx
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
status: {}
[root@controller ~]# vim oldnginx.yaml
[root@controller ~]# cat oldnginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: old-nginx
    type: canary
  name: old-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: old-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: old-nginx
        type: canary
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
status: {}
[root@controller ~]# kubectl create -f oldnginx.yaml
deployment.apps/old-nginx created
[root@controller ~]# kubectl expose deploy old-nginx --name=nginx --port=80 --selector type=canary
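On a cluster that was not set up with Minikube (such as the one in the transcript above), the same check can be run from a temporary Pod instead of minikube ssh; a sketch (curlimages/curl is one commonly used client image):

kubectl run curlpod --rm -it --restart=Never --image=curlimages/curl -- curl -s http://<svc-ip-address>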
Step 2: Creating a ConfigMap
kubectl cp <old-nginx-pod>:/usr/share/nginx/html/index.html index.html
vim index.html
- Add a line that uniquely identifies this as the canary Pod
kubectl create cm canary --from-file=index.html
kubectl describe cm canary
[root@controller ~]# kubectl get all
NAME                             READY   STATUS    RESTARTS   AGE
pod/old-nginx-5d6fc48749-7x8t6   1/1     Running   0          39m
pod/old-nginx-5d6fc48749-knph2   1/1     Running   0          39m
pod/old-nginx-5d6fc48749-tn5d5   1/1     Running   0          39m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   46m
service/nginx        ClusterIP   10.97.203.127   <none>        80/TCP    10m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/old-nginx   3/3     3            3           39m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/old-nginx-5d6fc48749   3         3         3       39m
[root@controller ~]# kubectl cp old-nginx-5d6fc48749-tn5d5:/usr/share/nginx/html/index.html index.html
tar: Removing leading `/' from member names
[root@controller ~]# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@controller ~]# vi index.html
[root@controller ~]# cat index.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to the Canary!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to Canary !</h1>
<p>Hello Canary</p>
</body>
</html>
[root@controller ~]# kubectl create cm canary --from-file=index.html
configmap/canary created
[root@controller ~]# kubectl describe cm canary
Name:         canary
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
index.html:
----
<!DOCTYPE html>
<html>
<head>
<title>Welcome to the Canary!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to Canary !</h1>
<p>Hello Canary</p>
</body>
</html>
Step 3: Preparing the New Version
cp oldnginx.yaml canary.yaml
vim canary.yaml
image: nginx:latest
replicas: 1
:%s/old/new/g
- Mount the configMap as a volume (see Git repo canary.yaml)
kubectl create -f canary.yaml
kubectl get svc; kubectl get endpoints
minikube ssh; curl <service-ip>
- Repeat the curl and notice the different results: this is the canary in action
[root@controller ~]# cp oldnginx.yaml canary.yaml
[root@controller ~]# cat canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: old-nginx
    type: canary
  name: old-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: old-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: old-nginx
        type: canary
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
status: {}
[root@controller ~]# vim canary.yaml
[root@controller ~]# cat canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: new-nginx
    type: canary
  name: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: new-nginx
        type: canary
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
status: {}
Go to the Kubernetes documentation -> search: ConfigMap
[root@controller ~]# vim canary.yaml
[root@controller ~]# cat canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: new-nginx
    type: canary
  name: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: new-nginx
        type: canary
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
        volumeMounts:
        - name: canvol
          mountPath: "/usr/share/nginx/html/index.html"
      volumes:
      - name: canvol
        configMap:
          name: canary
status: {}
[root@controller ~]# kubectl create -f canary.yaml
deployment.apps/new-nginx created
[root@controller ~]# kubectl get svc; kubectl get endpoints
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   74m
oldnginx     ClusterIP   10.97.203.127   <none>        80/TCP    38m

NAME         ENDPOINTS                                             AGE
kubernetes   172.30.9.25:6443                                      74m
nginx        172.16.102.131:80,172.16.71.192:80,172.16.71.193:80   38m
[root@controller ~]# kubectl get pods --show-labels
NAME                         READY   STATUS             RESTARTS      AGE   LABELS
new-nginx-b7bfd4469-6hdcs    0/1     CrashLoopBackOff   3 (36s ago)   84s   app=new-nginx,pod-template-hash=b7bfd4469,type=canary
old-nginx-5d6fc48749-7x8t6   1/1     Running            0             67m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary
old-nginx-5d6fc48749-knph2   1/1     Running            0             67m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary
old-nginx-5d6fc48749-tn5d5   1/1     Running            0             67m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary
[root@controller ~]# kubectl describe new-nginx-b7bfd4469-6hdcs
error: the server doesn't have a resource type "new-nginx-b7bfd4469-6hdcs"
[root@controller ~]# kubectl describe pod new-nginx-b7bfd4469-6hdcs
Name:             new-nginx-b7bfd4469-6hdcs
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker2.example.com/172.30.9.27
Start Time:       Thu, 29 Feb 2024 12:38:07 -0500
Labels:           app=new-nginx
                  pod-template-hash=b7bfd4469
                  type=canary
Annotations:      cni.projectcalico.org/containerID: 1aba5965b8b90c8aca42dacf6fe882ed25aa73ead94fd1bcb7e426d7beb6903d
                  cni.projectcalico.org/podIP: 172.16.71.194/32
                  cni.projectcalico.org/podIPs: 172.16.71.194/32
Status:           Running
IP:               172.16.71.194
IPs:
  IP:           172.16.71.194
Controlled By:  ReplicaSet/new-nginx-b7bfd4469
Containers:
  nginx:
    Container ID:   containerd://e0e8ffe81b49b64788a6a9474870b52151e88764d8bf2f5f49c841665fc57f13
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:25ff478171a2fd27d61a1774d97672bb7c13e888749fc70c711e207be34d370a
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/5600e3eb-ed9c-4834-940b-fb15c114d56c/volumes/kubernetes.io~configmap/canvol" to rootfs at "/usr/share/nginx/html/index.html": mount /var/lib/kubelet/pods/5600e3eb-ed9c-4834-940b-fb15c114d56c/volumes/kubernetes.io~configmap/canvol:/usr/share/nginx/html/index.html (via /proc/self/fd/6), flags: 0x5001: not a directory: unknown
      Exit Code:    128
      Started:      Wed, 31 Dec 1969 19:00:00 -0500
      Finished:     Thu, 29 Feb 2024 12:39:48 -0500
    Ready:          False
    Restart Count:  4
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html/index.html from canvol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmxvc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  canvol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      canary
    Optional:  false
  kube-api-access-vmxvc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  105s                default-scheduler  Successfully assigned default/new-nginx-b7bfd4469-6hdcs to worker2.example.com
  Normal   Pulled     103s                kubelet            Successfully pulled image "nginx:latest" in 1.26s (1.26s including waiting)
  Normal   Pulled     101s                kubelet            Successfully pulled image "nginx:latest" in 934ms (934ms including waiting)
  Normal   Pulled     85s                 kubelet            Successfully pulled image "nginx:latest" in 897ms (897ms including waiting)
  Normal   Created    57s (x4 over 103s)  kubelet            Created container nginx
  Warning  Failed     57s (x4 over 103s)  kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/5600e3eb-ed9c-4834-940b-fb15c114d56c/volumes/kubernetes.io~configmap/canvol" to rootfs at "/usr/share/nginx/html/index.html": mount /var/lib/kubelet/pods/5600e3eb-ed9c-4834-940b-fb15c114d56c/volumes/kubernetes.io~configmap/canvol:/usr/share/nginx/html/index.html (via /proc/self/fd/6), flags: 0x5001: not a directory: unknown
  Normal   Pulled     57s                 kubelet            Successfully pulled image "nginx:latest" in 908ms (908ms including waiting)
  Warning  BackOff    18s (x8 over 100s)  kubelet            Back-off restarting failed container nginx in pod new-nginx-b7bfd4469-6hdcs_default(5600e3eb-ed9c-4834-940b-fb15c114d56c)
  Normal   Pulling    5s (x5 over 104s)   kubelet            Pulling image "nginx:latest"
[root@controller ~]# vim canary.yaml
[root@controller ~]# kubectl delete -f canary.yaml
deployment.apps "new-nginx" deleted
[root@controller ~]# cat canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: new-nginx
    type: canary
  name: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: new-nginx
        type: canary
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
        volumeMounts:
        - name: canvol
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: canvol
        configMap:
          name: canary
status: {}
[root@controller ~]# kubectl create -f canary.yaml
deployment.apps/new-nginx created
[root@controller ~]# kubectl get pods --show-labels
NAME                         READY   STATUS    RESTARTS   AGE   LABELS
new-nginx-7c59c857b9-pvwvb   1/1     Running   0          7s    app=new-nginx,pod-template-hash=7c59c857b9,type=canary
old-nginx-5d6fc48749-7x8t6   1/1     Running   0          69m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary
old-nginx-5d6fc48749-knph2   1/1     Running   0          69m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary
old-nginx-5d6fc48749-tn5d5   1/1     Running   0          69m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary
[root@controller ~]# kubectl get svc; kubectl get endpoints
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   76m
oldnginx     ClusterIP   10.97.203.127   <none>        80/TCP    40m

NAME         ENDPOINTS                                                         AGE
kubernetes   172.30.9.25:6443                                                  76m
nginx        172.16.102.131:80,172.16.71.192:80,172.16.71.193:80 + 1 more...   40m
[root@controller ~]# curl 10.97.203.127
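The CrashLoopBackOff above deserves a note: the mount fails because a ConfigMap volume is a directory, and it cannot be mounted on top of the existing file /usr/share/nginx/html/index.html. The fix used here mounts the volume over the whole directory instead. An alternative sketch, assuming the same canary ConfigMap, mounts only the single file by adding subPath:

        volumeMounts:
        - name: canvol
          mountPath: /usr/share/nginx/html/index.html
          subPath: index.html

With subPath, only the index.html key of the ConfigMap is mounted as a file, and the rest of the directory stays intact.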
Step 4: Activating the New Version
- Use kubectl get deploy to verify the names of the old and the new Deployment
- Use kubectl scale to scale the canary Deployment up to the desired number of replicas
- Use kubectl delete deploy to delete the old Deployment
[root@controller ~]# kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
new-nginx   1/1     1            1           94m
old-nginx   3/3     3            3           162m
[root@controller ~]# kubectl scale deploy new-nginx --replicas=3
deployment.apps/new-nginx scaled
[root@controller ~]# kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
new-nginx   3/3     3            3           95m
old-nginx   3/3     3            3           164m
[root@controller ~]# kubectl describe svc oldnginx
Name:              oldnginx
Namespace:         default
Labels:            app=old-nginx
                   type=canary
Annotations:       <none>
Selector:          type=canary
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.97.203.127
IPs:               10.97.203.127
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.16.102.131:80,172.16.102.132:80,172.16.71.192:80 + 3 more...
Session Affinity:  None
Events:            <none>
[root@controller ~]# kubectl scale deploy old-nginx --replicas=0
deployment.apps/old-nginx scaled
[root@controller ~]# kubectl get deploy
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
new-nginx   3/3     3            3           97m
old-nginx   0/0     0            0           166m
[root@controller ~]# kubectl get all
NAME                             READY   STATUS    RESTARTS   AGE
pod/new-nginx-7c59c857b9-75l29   1/1     Running   0          2m19s
pod/new-nginx-7c59c857b9-cmdct   1/1     Running   0          2m19s
pod/new-nginx-7c59c857b9-pvwvb   1/1     Running   0          97m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   173m
service/oldnginx     ClusterIP   10.97.203.127   <none>        80/TCP    137m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/new-nginx   3/3     3            3           97m
deployment.apps/old-nginx   0/0     0            0           166m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/new-nginx-7c59c857b9   3         3         3       97m
replicaset.apps/old-nginx-5d6fc48749   0         0         0       166m
Custom Resource Definitions
- Custom Resource Definitions (CRDs) allow users to add custom resources to clusters
- Doing so allows anything to be integrated in a cloud-native environment
- The CRD allows users to add resources in a very easy way
  - The resources are added as an extension to the original Kubernetes API server
  - No programming skills are required
- The alternative way to build custom resources is via API integration
  - This builds a custom API server
  - Programming skills are required
Creating Custom Resources
- Creating Custom Resources using CRDs is a two-step procedure
- First, you need to define the resource using the CustomResourceDefinition API kind
- After defining the resource, it can be added through its own API resource
Creating Custom Resources Commands
cat crd-object.yaml
kubectl create -f crd-object.yaml
kubectl api-resources | grep backup
cat crd-backup.yaml
kubectl create -f crd-backup.yaml
kubectl get backups
[root@controller ckad]# cat crd-object.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              backupType:
                type: string
              image:
                type: string
              replicas:
                type: integer
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    shortNames:
    - bks
    kind: BackUp
[root@controller ckad]# kubectl create -f crd-object.yaml
customresourcedefinition.apiextensions.k8s.io/backups.stable.example.com created
[root@controller ckad]# kubectl api-versions | grep backup
[root@controller ckad]# kubectl api-resources | grep backup
backups   bks   stable.example.com/v1   true   BackUp
[root@controller ckad]# cat crd-backup.yaml
apiVersion: "stable.example.com/v1"
kind: BackUp
metadata:
  name: mybackup
spec:
  backupType: full
  image: linux-backup-image
  replicas: 5
[root@controller ckad]# kubectl create -f crd-backup.yaml
backup.stable.example.com/mybackup created
[root@controller ckad]# kubectl get backups
NAME       AGE
mybackup   12s
[root@controller ckad]# kubectl describe backups.stable.example.com mybackup
Name:         mybackup
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  stable.example.com/v1
Kind:         BackUp
Metadata:
  Creation Timestamp:  2024-03-05T11:26:52Z
  Generation:          1
  Resource Version:    646523
  UID:                 3c69b166-da8a-47d1-82a6-f9c27f616142
Spec:
  Backup Type:  full
  Image:        linux-backup-image
  Replicas:     5
Events:  <none>
Operators and Controllers
- Operators are custom applications, based on Custom Resource Definitions
- Operators can be seen as a way of packaging, running and managing applications in Kubernetes
- Operators are based on Controllers, which are Kubernetes components that continuously operate dynamic systems
- The Controller loop is the essence of any Controller
- The Kubernetes Controller manager runs a reconciliation loop, which continuously observes the current state, compares it to the desired state, and adjusts it when necessary
- Operators are application-specific Controllers
- Operators can be added to Kubernetes by developing them yourself
- Operators are also available from community websites
- A common registry for operators is found at operatorhub.io (which is rather OpenShift oriented)
- Many solutions from the Kubernetes ecosystem are provided as operators
- Prometheus: a monitoring and alerting solution
- Tigera: the operator that manages the Calico network plugin
- Jaeger: used for tracing transactions between distributed services
Lab: Using Canary Deployments
- Run an nginx Deployment that meets the following requirements
- Use a ConfigMap to provide an index.html file containing the text “welcome to the old version”
- Use image version 1.14
- Run 3 replicas
- Use the canary Deployment upgrade strategy to replace with a newer version of the application
- Use a ConfigMap to provide an index.html in the new application, containing the text “welcome to the new version”
- Set the image version to latest
- Complete the transition such that the old application is completely removed after verifying successful working of the updated application
[root@controller ckad]# vim index.html
[root@controller ckad]# cat index.html
Welcome to the old version !
[root@controller ckad]# kubectl create cm -h | more
...
Aliases:
configmap, cm

Examples:
  # Create a new config map named my-config based on folder bar
  kubectl create configmap my-config --from-file=path/to/bar
...
[root@controller ckad]# kubectl create cm oldversion --from-file=index.html
configmap/oldversion created
[root@controller ckad]# kubectl get cm oldversion
NAME         DATA   AGE
oldversion   1      30s
[root@controller ckad]# kubectl get cm oldversion -o yaml
apiVersion: v1
data:
  index.html: |
    Welcome to the old version !
kind: ConfigMap
metadata:
  creationTimestamp: "2024-03-05T14:07:41Z"
  name: oldversion
  namespace: default
  resourceVersion: "661988"
  uid: edc224bf-7059-44bb-b96c-30275858019f
[root@controller ckad]# echo Welcome to the new version ! > index.html
[root@controller ckad]# cat index.html
Welcome to the new version !
[root@controller ckad]# kubectl create cm newversion --from-file=index.html
configmap/newversion created
[root@controller ckad]# kubectl get cm -o yaml
- apiVersion: v1
  data:
    index.html: |
      Welcome to the new version !
  kind: ConfigMap
  metadata:
    creationTimestamp: "2024-03-05T14:10:35Z"
    name: newversion
    namespace: default
    resourceVersion: "662262"
    uid: d7db8273-dd05-45af-b592-7f8e16dd08cc
- apiVersion: v1
  data:
    index.html: |
      Welcome to the old version !
  kind: ConfigMap
  metadata:
    creationTimestamp: "2024-03-05T14:07:41Z"
    name: oldversion
    namespace: default
    resourceVersion: "661988"
    uid: edc224bf-7059-44bb-b96c-30275858019f
kind: List
metadata:
  resourceVersion: ""
[root@controller ckad]# kubectl create deploy oldnginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml > oldnginx.yaml
[root@controller ckad]# cat oldnginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: oldnginx
  name: oldnginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: oldnginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: oldnginx
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
status: {}
[root@controller ckad]# vim oldnginx.yaml
[root@controller ckad]# cat oldnginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: oldnginx
    type: canary
  name: oldnginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: oldnginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: oldnginx
        type: canary
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
        volumeMounts:
        - name: indexfile   # pod volume name
          mountPath: "/usr/share/nginx/html/"
      volumes:
      - name: indexfile
        configMap:
          name: oldversion
status: {}
[root@controller ckad]# kubectl create -f oldnginx.yaml
deployment.apps/oldnginx created
[root@controller ckad]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/oldngix-7f6d676788-bglsk   1/1     Running   0          16s
pod/oldngix-7f6d676788-gsxxg   1/1     Running   0          16s
pod/oldngix-7f6d676788-jsfft   1/1     Running   0          16s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d21h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oldngix   3/3     3            3           16s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/oldngix-7f6d676788   3         3         3       16s
[root@controller ckad]# kubectl get all --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   LABELS
pod/oldngix-7f6d676788-bglsk   1/1     Running   0          63s   app=oldngix,pod-template-hash=7f6d676788,type=canary
pod/oldngix-7f6d676788-gsxxg   1/1     Running   0          63s   app=oldngix,pod-template-hash=7f6d676788,type=canary
pod/oldngix-7f6d676788-jsfft   1/1     Running   0          63s   app=oldngix,pod-template-hash=7f6d676788,type=canary

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     LABELS
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d21h   component=apiserver,provider=kubernetes

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
deployment.apps/oldngix   3/3     3            3           64s   app=oldngix,type=canary

NAME                                 DESIRED   CURRENT   READY   AGE   LABELS
replicaset.apps/oldngix-7f6d676788   3         3         3       64s   app=oldngix,pod-template-hash=7f6d676788,type=canary
[root@controller ckad]# kubectl get all -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE                  NOMINATED NODE   READINESS GATES
pod/oldngix-7f6d676788-bglsk   1/1     Running   0          69s   172.16.71.197    worker2.example.com   <none>           <none>
pod/oldngix-7f6d676788-gsxxg   1/1     Running   0          69s   172.16.102.136   worker1.example.com   <none>           <none>
pod/oldngix-7f6d676788-jsfft   1/1     Running   0          69s   172.16.71.198    worker2.example.com   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d21h   <none>

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR
deployment.apps/oldngix   3/3     3            3           69s   nginx        nginx:1.14   app=oldngix

NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES       SELECTOR
replicaset.apps/oldngix-7f6d676788   3         3         3       69s   nginx        nginx:1.14   app=oldngix,pod-template-hash=7f6d676788
[root@controller ckad]# kubectl expose -h | more
...
  # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000
  kubectl expose deployment nginx --port=80 --target-port=8000
...
    --selector='': A label selector to use for this service. Only equality-based selector requirements are supported.
    If empty (the default) infer the selector from the replication controller or replica set.)
[root@controller ckad]# kubectl expose deployment oldnginx --name=canary --port=80 --selector=type=canary
service/canary exposed
[root@controller ckad]# kubectl get endpoints
NAME         ENDPOINTS                                             AGE
canary       172.16.102.137:80,172.16.71.199:80,172.16.71.200:80   25m
kubernetes   172.30.9.25:6443                                      4d22h
[root@controller ckad]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
canary       ClusterIP   10.96.148.14   <none>        80/TCP    4s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   4d22h
[root@controller ckad]# kubectl describe svc canary
Name:              canary
Namespace:         default
Labels:            app=oldngix
                   type=canary
Annotations:       <none>
Selector:          type=canary
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.148.14
IPs:               10.96.148.14
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.16.102.136:80,172.16.71.197:80,172.16.71.198:80
Session Affinity:  None
Events:            <none>
[root@controller ckad]# curl 10.96.148.14
[root@controller ckad]# cp oldnginx.yaml newnginx.yaml
[root@controller ckad]# cat newnginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: oldnginx
    type: canary
  name: oldnginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: oldnginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: oldnginx
        type: canary
    spec:
      containers:
      - image: nginx:1.14
        name: nginx
        resources: {}
        volumeMounts:
        - name: indexfile   # pod volume name
          mountPath: "/usr/share/nginx/html/"
      volumes:
      - name: indexfile
        configMap:
          name: oldversion
status: {}
[root@controller ckad]# vim newnginx.yaml
[root@controller ckad]# cat newnginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: newnginx
    type: canary
  name: newnginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: newnginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: newnginx
        type: canary
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: indexfile   # pod volume name
          mountPath: "/usr/share/nginx/html/"
      volumes:
      - name: indexfile
        configMap:
          name: newversion
status: {}
[root@controller ckad]# kubectl create -f newnginx.yaml
deployment.apps/newnginx created
[root@controller ckad]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
canary       ClusterIP   10.96.148.14   <none>        80/TCP    33m
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   4d22h
[root@controller ckad]# curl 10.96.148.14
[root@controller ckad]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
newnginx   1/1     1            1           115s
oldnginx   3/3     3            3           12m
[root@controller ckad]# kubectl scale deployment newnginx --replicas=3
deployment.apps/newnginx scaled
[root@controller ckad]# kubectl scale deployment oldnginx --replicas=0
deployment.apps/oldnginx scaled
[root@controller ckad]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
newnginx   3/3     3            3           3m20s
oldnginx   0/0     0            0           14m
[root@controller ckad]# curl 10.96.148.14