{"id":5588,"date":"2023-12-30T09:40:38","date_gmt":"2023-12-30T08:40:38","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=5588"},"modified":"2025-05-05T19:18:37","modified_gmt":"2025-05-05T17:18:37","slug":"deploying-applications-the-devops-way-on-kubernetes","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/12\/30\/deploying-applications-the-devops-way-on-kubernetes\/","title":{"rendered":"Deploying Applications the DevOps way On Kubernetes"},"content":{"rendered":"<p><!--more--><\/p>\n<p><span style=\"color: #3366ff;\">Helm<\/span><\/p>\n<ul>\n<li>Helm streamlines installing and managing Kubernetes applications<\/li>\n<li>Helm consists of the <code>helm<\/code> tool, which needs to be installed, and charts<\/li>\n<li>A chart is a Helm package, which contains the following:\n<ul>\n<li>A description of the package<\/li>\n<li>One or more templates containing Kubernetes manifest files<\/li>\n<\/ul>\n<\/li>\n<li>Charts can be stored locally or accessed from remote Helm repositories<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Installing the Helm Binary<\/span><\/p>\n<ul>\n<li>Fetch the binary from https:\/\/github.com\/helm\/helm\/releases; check for the latest release!<\/li>\n<li><code>tar xvf helm-xxxx.tar.gz<\/code><\/li>\n<li><code>sudo mv linux-amd64\/helm \/usr\/local\/bin<\/code><\/li>\n<li><code>helm version<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Getting Access to Helm Charts<\/span><\/p>\n<ul>\n<li>The main site for finding Helm charts is https:\/\/artifacthub.io<\/li>\n<li>It is also the main place to find repository names<\/li>\n<li>Search for specific software and run the listed commands to install it; for instance, to run the Kubernetes Dashboard:\n<ul>\n<li><code>helm repo add kubernetes-dashboard https:\/\/kubernetes.github.io\/dashboard\/<\/code><\/li>\n<li><code>helm install kubernetes-dashboard 
kubernetes-dashboard\/kubernetes-dashboard<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@controller ~]# helm version\r\nversion.BuildInfo{Version:\"v3.14.2\", GitCommit:\"c309b6f0ff63856811846ce18f3bdc93d2b4d54b\", GitTreeState:\"clean\", GoVersion:\"go1.21.7\"}\r\n\r\n[root@controller ~]# helm repo add bitnami https:\/\/charts.bitnami.com\/bitnami\r\n\"bitnami\" has been added to your repositories\r\n\r\n[root@controller ~]# helm search repo bitnami\r\nNAME                                            CHART VERSION   APP VERSION     DESCRIPTION\r\nbitnami\/airflow                                 16.8.2          2.8.1           Apache Airflow is a tool to express and execute...\r\nbitnami\/apache                                  10.6.2          2.4.58          Apache HTTP Server is an open-source HTTP serve...\r\nbitnami\/apisix                                  2.8.2           3.8.0           Apache APISIX is high-performance, real-time AP...\r\nbitnami\/appsmith                                2.7.2           1.13.0          Appsmith is an open source platform for buildin...\r\nbitnami\/argo-cd                                 5.9.0           2.10.0          Argo CD is a continuous delivery tool for Kuber...\r\nbitnami\/argo-workflows                          6.6.3           3.5.4           Argo Workflows is meant to orchestrate Kubernet...\r\nbitnami\/aspnet-core                             5.6.2           8.0.2           ASP.NET Core is an open-source framework for we...\r\nbitnami\/cassandra                               10.11.2         4.1.4           Apache Cassandra is an open source distributed ...\r\nbitnami\/cert-manager                            0.22.0          1.14.2          cert-manager is a Kubernetes add-on to automate...\r\nbitnami\/clickhouse                              5.2.2           24.1.5          ClickHouse is an open-source column-oriented OL...\r\nbitnami\/common                                  2.16.1   
       2.16.1          A Library Helm Chart for grouping common logic ...\r\nbitnami\/concourse                               3.5.2           7.11.2          Concourse is an automation system written in Go...\r\nbitnami\/consul                                  10.20.0         1.17.3          HashiCorp Consul is a tool for discovering and ...\r\nbitnami\/contour                                 15.5.2          1.27.1          Contour is an open source Kubernetes ingress co...\r\nbitnami\/contour-operator                        4.2.1           1.24.0          DEPRECATED The Contour Operator extends the Kub...\r\n...          \r\n\r\n[root@controller ~]# helm repo list\r\nNAME    URL\r\nbitnami https:\/\/charts.bitnami.com\/bitnami\r\n\r\n[root@controller ~]# helm repo update\r\nHang tight while we grab the latest from your chart repositories...\r\n...Successfully got an update from the \"bitnami\" chart repository\r\nUpdate Complete. \u2388Happy Helming!\u2388\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Installing Helm Charts<\/span><\/p>\n<ul>\n<li>After adding repositories, use <code>helm repo update<\/code> to ensure access to the most up-to-date information<\/li>\n<li>Use <code>helm install<\/code> to install the chart with default parameters<\/li>\n<li>After installation, use <code>helm list<\/code> to list currently installed releases<\/li>\n<li>Optionally, use <code>helm uninstall<\/code> (or its alias <code>helm delete<\/code>) to remove installed releases<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Installing Helm Charts: Commands<\/span><\/p>\n<ul>\n<li><code>helm install bitnami\/mysql --generate-name<\/code><\/li>\n<li><code>kubectl get all<\/code><\/li>\n<li><code>helm show chart bitnami\/mysql<\/code><\/li>\n<li><code>helm show all bitnami\/mysql<\/code><\/li>\n<li><code>helm list<\/code><\/li>\n<li><code>helm status mysql-xxxx<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@controller ~]# helm install bitnami\/mysql 
--generate-name\r\nNAME: mysql-1708951513\r\nLAST DEPLOYED: Mon Feb 26 07:45:16 2024\r\nNAMESPACE: default\r\nSTATUS: deployed\r\nREVISION: 1\r\nTEST SUITE: None\r\nNOTES:\r\nCHART NAME: mysql\r\nCHART VERSION: 9.21.2\r\nAPP VERSION: 8.0.36\r\n\r\n** Please be patient while the chart is being deployed **\r\n\r\n...\r\n[root@controller ~]# kubectl get pods -w --namespace default\r\nNAME                            READY   STATUS    RESTARTS       AGE\r\ncounter                         2\/2     Running   0              6d20h\r\nlab124deploy-7c7c8457f9-lclk4   1\/1     Running   1 (7d1h ago)   8d\r\nlab126deploy-fff46cd4b-4drk6    1\/1     Running   1 (7d1h ago)   8d\r\nlab126deploy-fff46cd4b-lhmfs    1\/1     Running   1 (7d1h ago)   8d\r\nlab126deploy-fff46cd4b-zw5fq    1\/1     Running   1 (7d1h ago)   8d\r\nmysql-1708951513-0              0\/1     Pending   0              56s\r\n\r\n[root@controller ~]# kubectl get all\r\nNAME                                READY   STATUS    RESTARTS       AGE\r\npod\/counter                         2\/2     Running   0              6d20h\r\npod\/lab124deploy-7c7c8457f9-lclk4   1\/1     Running   1 (7d1h ago)   8d\r\npod\/lab126deploy-fff46cd4b-4drk6    1\/1     Running   1 (7d1h ago)   8d\r\npod\/lab126deploy-fff46cd4b-lhmfs    1\/1     Running   1 (7d1h ago)   8d\r\npod\/lab126deploy-fff46cd4b-zw5fq    1\/1     Running   1 (7d1h ago)   8d\r\npod\/mysql-1708951513-0              0\/1     Pending   0              68s\r\n\r\nNAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE\r\nservice\/kubernetes                  ClusterIP   10.96.0.1       &lt;none&gt;        443\/TCP        10d\r\nservice\/lab126deploy                NodePort    10.105.103.37   &lt;none&gt;        80:32567\/TCP   8d\r\nservice\/mysql-1708951513            ClusterIP   10.103.12.9     &lt;none&gt;        3306\/TCP       68s\r\nservice\/mysql-1708951513-headless   ClusterIP   None            &lt;none&gt;        3306\/TCP 
      68s\r\n\r\nNAME                           READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/lab124deploy   1\/1     1            1           8d\r\ndeployment.apps\/lab126deploy   3\/3     3            3           8d\r\n\r\nNAME                                      DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/lab124deploy-7c7c8457f9   1         1         1       8d\r\nreplicaset.apps\/lab126deploy-fff46cd4b    3         3         3       8d\r\n\r\nNAME                                READY   AGE\r\nstatefulset.apps\/mysql-1708951513   0\/1     68s\r\n\r\n\r\n[root@controller ~]# helm show chart bitnami\/mysql\r\nannotations:\r\n  category: Database\r\n  images: |\r\n    - name: mysql\r\n      image: docker.io\/bitnami\/mysql:8.0.36-debian-12-r8\r\n    - name: mysqld-exporter\r\n      image: docker.io\/bitnami\/mysqld-exporter:0.15.1-debian-12-r8\r\n    - name: os-shell\r\n      image: docker.io\/bitnami\/os-shell:12-debian-12-r16\r\n  licenses: Apache-2.0\r\n..\r\n\r\n[root@controller ~]# helm show all bitnami\/mysql | more\r\nannotations:\r\n  category: Database\r\n  images: |\r\n    - name: mysql\r\n      image: docker.io\/bitnami\/mysql:8.0.36-debian-12-r8\r\n    - name: mysqld-exporter\r\n      image: docker.io\/bitnami\/mysqld-exporter:0.15.1-debian-12-r8\r\n    - name: os-shell\r\n      image: docker.io\/bitnami\/os-shell:12-debian-12-r16\r\n  licenses: Apache-2.0\r\n..\r\n\r\n[root@controller ~]# helm list\r\nNAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION\r\nmysql-1708951513        default         1               2024-02-26 07:45:16.679479351 -0500 EST deployed        mysql-9.21.2    8.0.36 \r\n\r\n[root@controller ~]# helm status mysql-1708951513\r\nNAME: mysql-1708951513\r\nLAST DEPLOYED: Mon Feb 26 07:45:16 2024\r\nNAMESPACE: default\r\nSTATUS: deployed\r\nREVISION: 1\r\nTEST SUITE: None\r\nNOTES:\r\nCHART NAME: mysql\r\nCHART VERSION: 
9.21.2\r\nAPP VERSION: 8.0.36\r\n\r\n** Please be patient while the chart is being deployed **\r\n\r\nTip:\r\n\r\n  Watch the deployment status using the command: kubectl get pods -w --namespace default\r\n\r\nServices:\r\n\r\n  echo Primary: mysql-1708951513.default.svc.cluster.local:3306\r\n\r\nExecute the following to get the administrator credentials:\r\n\r\n  echo Username: root\r\n  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1708951513 -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\r\n\r\nTo connect to your database:\r\n\r\n  1. Run a pod that you can use as a client:\r\n\r\n      kubectl run mysql-1708951513-client --rm --tty -i --restart='Never' --image  docker.io\/bitnami\/mysql:8.0.36-debian-12-r8 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash\r\n\r\n  2. To connect to primary service (read\/write):\r\n\r\n      mysql -h mysql-1708951513.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\r\n\r\nWARNING: There are \"resources\" sections in the chart not set. Using \"resourcesPreset\" is not recommended for production. 
For production installations, please set the following values according to your workload needs:\r\n  - primary.resources\r\n  - secondary.resources\r\n+info https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Customizing Before Installing<\/span><\/p>\n<ul>\n<li>A Helm chart consists of templates to which specific values are applied<\/li>\n<li>The default values are stored in the <code>values.yaml<\/code> file within the Helm chart<\/li>\n<li>The easiest way to modify these values is to first use <code>helm pull<\/code> to fetch a local copy of the chart<\/li>\n<li>Next, use your favorite editor on <code>chartname\/values.yaml<\/code> to change any values<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Customizing Before Install: Commands<\/span><\/p>\n<ul>\n<li><code>helm show values bitnami\/nginx<\/code><\/li>\n<li><code>helm pull bitnami\/nginx<\/code><\/li>\n<li><code>tar xvf nginx-xxxx<\/code><\/li>\n<li><code>vim nginx\/values.yaml<\/code><\/li>\n<li><code>helm template --debug nginx<\/code><\/li>\n<li><code>helm install -f nginx\/values.yaml my-nginx nginx\/<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@controller ~]# helm show values bitnami\/nginx | more\r\n# Copyright VMware, Inc.\r\n# SPDX-License-Identifier: APACHE-2.0\r\n\r\n## @section Global parameters\r\n## Global Docker image parameters\r\n## Please, note that this will override the image parameters, including dependencies, configured to use the global value\r\n## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass\r\n\r\n## @param global.imageRegistry Global Docker image registry\r\n## @param global.imagePullSecrets Global Docker registry secret names as an array\r\n##\r\nglobal:\r\n  imageRegistry: \"\"\r\n  ## E.g.\r\n  ## imagePullSecrets:\r\n  ##   - myRegistryKeySecretName\r\n  ##\r\n  imagePullSecrets: 
[]\r\n## @section Common parameters\r\n\r\n\r\n[root@controller ~]# helm pull bitnami\/nginx\r\n\r\n[root@controller ~]# tar xvf nginx-15.12.2.tgz\r\nnginx\/.helmignore\r\nnginx\/README.md\r\nnginx\/charts\/common\/Chart.yaml\r\nnginx\/charts\/common\/values.yaml\r\nnginx\/charts\/common\/templates\/_affinities.tpl\r\nnginx\/charts\/common\/templates\/_capabilities.tpl\r\n...\r\n\r\n[root@controller ~]# cd nginx\/\r\n\r\n[root@controller nginx]# cat values.yaml | more\r\n# Copyright VMware, Inc.\r\n# SPDX-License-Identifier: APACHE-2.0\r\n\r\n## @section Global parameters\r\n## Global Docker image parameters\r\n## Please, note that this will override the image parameters, including dependencies, configured to use the global value\r\n## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass\r\n\r\n## @param global.imageRegistry Global Docker image registry\r\n## @param global.imagePullSecrets Global Docker registry secret names as an array\r\n##\r\nglobal:\r\n  imageRegistry: \"\"\r\n  ## E.g.\r\n  ## imagePullSecrets:\r\n  ##   - myRegistryKeySecretName\r\n  ##\r\n  imagePullSecrets: []\r\n## @section Common parameters\r\n\r\n## @param nameOverride String to partially override nginx.fullname template (will maintain the release name)\r\n##\r\nnameOverride: \"\"\r\n...\r\n[root@controller nginx]# cd ..\r\n\r\n[root@controller ~]# helm template --debug nginx\r\ninstall.go:214: [debug] Original chart version: \"\"\r\ninstall.go:231: [debug] CHART PATH: \/root\/nginx\r\n\r\n---\r\n# Source: nginx\/templates\/networkpolicy.yaml\r\nkind: NetworkPolicy\r\napiVersion: networking.k8s.io\/v1\r\nmetadata:\r\n  name: release-name-nginx\r\n  namespace: \"default\"\r\n  labels:\r\n    app.kubernetes.io\/instance: release-name\r\n    app.kubernetes.io\/managed-by: Helm\r\n    app.kubernetes.io\/name: nginx\r\n    app.kubernetes.io\/version: 1.25.4\r\n    helm.sh\/chart: 
nginx-15.12.2\r\nspec:\r\n...\r\n\r\n[root@controller ~]# helm install -f nginx\/values.yaml my-nginx nginx\/\r\nNAME: my-nginx\r\nLAST DEPLOYED: Mon Feb 26 08:11:01 2024\r\nNAMESPACE: default\r\nSTATUS: deployed\r\nREVISION: 1\r\nTEST SUITE: None\r\nNOTES:\r\nCHART NAME: nginx\r\nCHART VERSION: 15.12.2\r\nAPP VERSION: 1.25.4\r\n\r\n** Please be patient while the chart is being deployed **\r\nNGINX can be accessed through the following DNS name from within your cluster:\r\n\r\n    my-nginx.default.svc.cluster.local (port 80)\r\n\r\nTo access NGINX from outside the cluster, follow the steps below:\r\n\r\n1. Get the NGINX URL by running these commands:\r\n\r\n  NOTE: It may take a few minutes for the LoadBalancer IP to be available.\r\n        Watch the status with: 'kubectl get svc --namespace default -w my-nginx'\r\n\r\n    export SERVICE_PORT=$(kubectl get --namespace default -o jsonpath=\"{.spec.ports[0].port}\" services my-nginx)\r\n    export SERVICE_IP=$(kubectl get svc --namespace default my-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\r\n    echo \"http:\/\/${SERVICE_IP}:${SERVICE_PORT}\"\r\n\r\nWARNING: There are \"resources\" sections in the chart not set. Using \"resourcesPreset\" is not recommended for production. 
For production installations, please set the following values according to your workload needs:\r\n  - cloneStaticSiteFromGit.gitSync.resources\r\n  - resources\r\n+info https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-resources-containers\/\r\n\r\n\r\n[root@controller ~]# kubectl get all\r\nNAME                                READY   STATUS    RESTARTS       AGE\r\npod\/counter                         2\/2     Running   0              6d20h\r\npod\/lab124deploy-7c7c8457f9-lclk4   1\/1     Running   1 (7d1h ago)   8d\r\npod\/lab126deploy-fff46cd4b-4drk6    1\/1     Running   1 (7d1h ago)   8d\r\npod\/lab126deploy-fff46cd4b-lhmfs    1\/1     Running   1 (7d1h ago)   8d\r\npod\/lab126deploy-fff46cd4b-zw5fq    1\/1     Running   1 (7d1h ago)   8d\r\npod\/my-nginx-f8bf59cd9-clnj5        0\/1     Pending   0              38s\r\npod\/mysql-1708951513-0              0\/1     Pending   0              26m\r\n\r\nNAME                                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE\r\nservice\/kubernetes                  ClusterIP      10.96.0.1       &lt;none&gt;        443\/TCP        10d\r\nservice\/lab126deploy                NodePort       10.105.103.37   &lt;none&gt;        80:32567\/TCP   8d\r\nservice\/my-nginx                    LoadBalancer   10.98.101.190   &lt;pending&gt;     80:32378\/TCP   38s\r\nservice\/mysql-1708951513            ClusterIP      10.103.12.9     &lt;none&gt;        3306\/TCP       26m\r\nservice\/mysql-1708951513-headless   ClusterIP      None            &lt;none&gt;        3306\/TCP       26m\r\n\r\nNAME                           READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/lab124deploy   1\/1     1            1           8d\r\ndeployment.apps\/lab126deploy   3\/3     3            3           8d\r\ndeployment.apps\/my-nginx       0\/1     1            0           38s\r\n\r\nNAME                                      DESIRED   CURRENT   READY   
AGE\r\nreplicaset.apps\/lab124deploy-7c7c8457f9   1         1         1       8d\r\nreplicaset.apps\/lab126deploy-fff46cd4b    3         3         3       8d\r\nreplicaset.apps\/my-nginx-f8bf59cd9        1         1         0       38s\r\n\r\nNAME                                READY   AGE\r\nstatefulset.apps\/mysql-1708951513   0\/1     26m\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Kustomize<\/span><\/p>\n<ul>\n<li><code>kustomize<\/code> is a Kubernetes feature that uses a file named <code>kustomization.yaml<\/code> to apply changes to a set of resources<\/li>\n<li>This is convenient for applying changes to input files that the user does not control and whose contents may change as new versions appear in Git<\/li>\n<li>Use <code>kubectl apply -k .\/<\/code> in the directory that contains the <code>kustomization.yaml<\/code> and the files it references to apply the changes<\/li>\n<li>Use <code>kubectl delete -k .\/<\/code> in the same directory to delete everything that was created by the Kustomization<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Understanding a Sample kustomization.yaml<\/span><\/p>\n<p><code>resources:<\/code> # defines which resources (in YAML files) apply<br \/>\n<code>- deployment.yaml<\/code><br \/>\n<code>- service.yaml<\/code><br \/>\n<code>namePrefix: test-<\/code> # specifies a prefix that should be added to all names<br \/>\n<code>namespace: testing<\/code> # objects will be created in this specific namespace<br \/>\n<code>commonLabels:<\/code> # labels that will be added to all objects<br \/>\n<code>\u00a0\u00a0 environment: testing<\/code><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using Kustomization Overlays<\/span><\/p>\n<ul>\n<li>Kustomization can be used to define a base configuration, as well as multiple deployment scenarios (overlays), for instance dev, staging, and prod<\/li>\n<li>In such a configuration, the main 
<code>kustomization.yaml<\/code> defines the structure:<\/li>\n<\/ul>\n<p><code>- base<\/code><br \/>\n<code>\u00a0\u00a0 - deployment.yaml<\/code><br \/>\n<code>\u00a0\u00a0 - service.yaml<\/code><br \/>\n<code>\u00a0\u00a0 - kustomization.yaml<\/code><br \/>\n<code>- overlays<\/code><br \/>\n<code>\u00a0\u00a0 - dev<\/code><br \/>\n<code>\u00a0\u00a0\u00a0\u00a0 - kustomization.yaml<\/code><br \/>\n<code>\u00a0\u00a0 - staging<\/code><br \/>\n<code>\u00a0\u00a0\u00a0\u00a0 - kustomization.yaml<\/code><br \/>\n<code>\u00a0\u00a0 - prod<\/code><br \/>\n<code>\u00a0\u00a0\u00a0\u00a0 - kustomization.yaml<\/code><\/p>\n<ul>\n<li>In each of the overlays\/{dev,staging,prod}\/kustomization.yaml files, users reference the base configuration in the resources field and specify changes for that specific environment:<\/li>\n<\/ul>\n<p><code>resources:<\/code><\/p>\n<p><code>- ..\/..\/base<\/code><\/p>\n<p><code>namePrefix: dev-<\/code><\/p>\n<p><code>namespace: development<\/code><\/p>\n<p><code>commonLabels:<\/code><\/p>\n<p><code>\u00a0\u00a0 environment: development<\/code><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using Kustomization Commands<\/span><\/p>\n<ul>\n<li><code>cat deployment.yaml<\/code><\/li>\n<li><code>cat service.yaml<\/code><\/li>\n<li><code>kubectl apply -f deployment.yaml -f service.yaml<\/code><\/li>\n<li><code>cat kustomization.yaml<\/code><\/li>\n<li><code>kubectl apply -k .<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@controller ckad]# cd kustomization\r\n\r\n[root@controller kustomization]# ls\r\ndeployment.yaml  kustomization.yaml  service.yaml\r\n\r\n[root@controller kustomization]# cat deployment.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  annotations:\r\n    deployment.kubernetes.io\/revision: \"1\"\r\n  creationTimestamp: \"2019-09-20T14:54:12Z\"\r\n  generation: 1\r\n  labels:\r\n    k8s-app: nginx-friday20\r\n  name: nginx-friday20\r\n  namespace: default\r\n  resourceVersion: \"24766\"\r\n  selfLink: 
\/apis\/apps\/v1\/namespaces\/default\/deployments\/nginx-friday20\r\n  uid: 4c4e3217-0fcf-4365-987c-10d089a09c1e\r\nspec:\r\n  progressDeadlineSeconds: 600\r\n  replicas: 3\r\n  revisionHistoryLimit: 10\r\n  selector:\r\n    matchLabels:\r\n      k8s-app: nginx-friday20\r\n  strategy:\r\n    rollingUpdate:\r\n      maxSurge: 25%\r\n      maxUnavailable: 25%\r\n    type: RollingUpdate\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        k8s-app: nginx-friday20\r\n      name: nginx-friday20\r\n    spec:\r\n      containers:\r\n      - image: nginx\r\n        imagePullPolicy: Always\r\n        name: nginx-friday20\r\n        resources: {}\r\n        securityContext:\r\n          privileged: false\r\n        terminationMessagePath: \/dev\/termination-log\r\n        terminationMessagePolicy: File\r\n      dnsPolicy: ClusterFirst\r\n      restartPolicy: Always\r\n      schedulerName: default-scheduler\r\n      securityContext: {}\r\n      terminationGracePeriodSeconds: 30\r\n\r\n[root@controller kustomization]# cat service.yaml\r\napiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    k8s-app: nginx-friday20\r\n  name: nginx-friday20\r\nspec:\r\n  ports:\r\n  - port: 80\r\n    protocol: TCP\r\n    targetPort: 80\r\n  selector:\r\n    k8s-app: nginx-friday20\r\nstatus:\r\n  loadBalancer: {}\r\n[root@controller kustomization]# cat kustomization.yaml\r\nresources:\r\n  - deployment.yaml\r\n  - service.yaml\r\nnamePrefix: test-\r\ncommonLabels:\r\n  environment: testing\r\n\r\n[root@controller kustomization]# kubectl apply -k .\r\nservice\/test-nginx-friday20 created\r\ndeployment.apps\/test-nginx-friday20 created\r\n\r\n[root@controller kustomization]# kubectl get all --selector environment=testing\r\nNAME                                       READY   STATUS    RESTARTS   AGE\r\npod\/test-nginx-friday20-757bb757c5-4k6m8   0\/1     Pending   0          
4m48s\r\npod\/test-nginx-friday20-757bb757c5-lmt2r   0\/1     Pending   0          4m48s\r\npod\/test-nginx-friday20-757bb757c5-wnfdw   0\/1     Pending   0          4m47s\r\n\r\nNAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nservice\/test-nginx-friday20   ClusterIP   10.105.107.35   &lt;none&gt;        80\/TCP    4m50s\r\n\r\nNAME                                  READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/test-nginx-friday20   0\/3     3            0           4m51s\r\n\r\nNAME                                             DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/test-nginx-friday20-757bb757c5   3         3         0       4m49s\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Blue\/Green Deployment<\/span><\/p>\n<ul>\n<li>Blue\/green Deployments are a strategy to accomplish zero-downtime application upgrades<\/li>\n<li>It is essential to be able to test the new version of the application before taking it into production<\/li>\n<li>The blue Deployment is the current application<\/li>\n<li>The green Deployment is the new application<\/li>\n<li>Once the green Deployment is tested and ready, traffic is re-routed to the new application version<\/li>\n<li>Blue\/green Deployments can easily be implemented using Kubernetes Services<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Procedure Overview<\/span><\/p>\n<ul>\n<li>Start with the already running application<\/li>\n<li>Create a new Deployment that runs the new version, and test it with a temporary Service resource<\/li>\n<li>If all tests pass, remove the temporary Service resource<\/li>\n<li>Remove the old Service resource (pointing to the blue Deployment), and immediately create a new Service resource exposing the green Deployment<\/li>\n<li>After successful transition, remove the blue Deployment<\/li>\n<li>It is essential to keep the Service name unchanged, so that front-end resources such as Ingress will 
automatically pick up the transition<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using Blue\/Green Deployments<\/span><\/p>\n<ul>\n<li><code>kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3<\/code><\/li>\n<li><code>kubectl expose deploy blue-nginx --port=80 --name=bgnginx<\/code><\/li>\n<li><code>kubectl get deploy blue-nginx -o yaml &gt; green-nginx.yaml<\/code>\n<ul>\n<li>Clean up the dynamically generated fields (status, uid, resourceVersion, creationTimestamp)<\/li>\n<li>Change the image version<\/li>\n<li>Change &#8220;blue&#8221; to &#8220;green&#8221; throughout<\/li>\n<\/ul>\n<\/li>\n<li><code>kubectl create -f green-nginx.yaml<\/code><\/li>\n<li><code>kubectl get pods<\/code><\/li>\n<li><code>kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx<\/code><\/li>\n<li><code>kubectl delete deploy blue-nginx<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@controller ~]# kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3\r\n[root@controller ~]# kubectl get all\r\nNAME                                READY   STATUS    RESTARTS      AGE\r\npod\/blue-nginx-69497ddbcd-55dj4     1\/1     Running   0             4m38s\r\npod\/blue-nginx-69497ddbcd-h7tqp     1\/1     Running   0             4m38s\r\npod\/blue-nginx-69497ddbcd-plglc     1\/1     Running   0             4m38s\r\npod\/counter                         2\/2     Running   2 (20h ago)   8d\r\n\r\nNAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE\r\nservice\/kubernetes     ClusterIP   10.96.0.1       &lt;none&gt;        443\/TCP        12d\r\n\r\nNAME                           READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/blue-nginx     3\/3     3            3           4m38s\r\n\r\nNAME                                      DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/blue-nginx-69497ddbcd     3         3         3       4m38s\r\n\r\n\r\n[root@controller ~]# kubectl expose deploy blue-nginx --port=80 
--name=bgnginx\r\nservice\/bgnginx exposed\r\n\r\n[root@controller ~]# kubectl get deploy blue-nginx -o yaml &gt; green-nginx.yaml\r\n[root@controller ~]# cat green-nginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  annotations:\r\n    deployment.kubernetes.io\/revision: \"1\"\r\n  creationTimestamp: \"2024-02-28T12:25:49Z\"\r\n  generation: 1\r\n  labels:\r\n    app: blue-nginx\r\n  name: blue-nginx\r\n  namespace: default\r\n  resourceVersion: \"2429551\"\r\n  uid: 4971a862-61b0-4f82-bf0f-3a34150ecf8b\r\nspec:\r\n  progressDeadlineSeconds: 600\r\n  replicas: 3\r\n  revisionHistoryLimit: 10\r\n  selector:\r\n    matchLabels:\r\n      app: blue-nginx\r\n  strategy:\r\n    rollingUpdate:\r\n      maxSurge: 25%\r\n      maxUnavailable: 25%\r\n    type: RollingUpdate\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: blue-nginx\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        imagePullPolicy: IfNotPresent\r\n        name: nginx\r\n        resources: {}\r\n        terminationMessagePath: \/dev\/termination-log\r\n        terminationMessagePolicy: File\r\n      dnsPolicy: ClusterFirst\r\n      restartPolicy: Always\r\n      schedulerName: default-scheduler\r\n      securityContext: {}\r\n      terminationGracePeriodSeconds: 30\r\nstatus:\r\n  availableReplicas: 3\r\n  conditions:\r\n  - lastTransitionTime: \"2024-02-28T12:30:22Z\"\r\n    lastUpdateTime: \"2024-02-28T12:30:22Z\"\r\n    message: Deployment has minimum availability.\r\n    reason: MinimumReplicasAvailable\r\n    status: \"True\"\r\n    type: Available\r\n  - lastTransitionTime: \"2024-02-28T12:25:49Z\"\r\n    lastUpdateTime: \"2024-02-28T12:30:22Z\"\r\n    message: ReplicaSet \"blue-nginx-69497ddbcd\" has successfully progressed.\r\n    reason: NewReplicaSetAvailable\r\n    status: \"True\"\r\n    type: Progressing\r\n  observedGeneration: 1\r\n  readyReplicas: 3\r\n  replicas: 3\r\n  updatedReplicas: 
3\r\n\r\n[root@controller ~]# vi green-nginx.yaml\r\n\r\n[root@controller ~]# cat green-nginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  labels:\r\n    app: green-nginx\r\n  name: green-nginx\r\n  namespace: default\r\nspec:\r\n  progressDeadlineSeconds: 600\r\n  replicas: 3\r\n  revisionHistoryLimit: 10\r\n  selector:\r\n    matchLabels:\r\n      app: green-nginx\r\n  strategy:\r\n    rollingUpdate:\r\n      maxSurge: 25%\r\n      maxUnavailable: 25%\r\n    type: RollingUpdate\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: green-nginx\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.17\r\n        imagePullPolicy: IfNotPresent\r\n        name: nginx\r\n        resources: {}\r\n        terminationMessagePath: \/dev\/termination-log\r\n        terminationMessagePolicy: File\r\n      dnsPolicy: ClusterFirst\r\n      restartPolicy: Always\r\n      schedulerName: default-scheduler\r\n      securityContext: {}\r\n      terminationGracePeriodSeconds: 30\r\n\r\n[root@controller ~]# kubectl create -f green-nginx.yaml\r\ndeployment.apps\/green-nginx created\r\n\r\n[root@controller ~]# kubectl expose deploy green-nginx --port=80 --name=green\r\nservice\/green exposed\r\n\r\n[root@controller ~]# kubectl get endpoints\r\nNAME           ENDPOINTS                                            AGE\r\nbgnginx        192.168.0.141:80,192.168.0.145:80,192.168.0.146:80   39m\r\ngreen          192.168.0.131:80,192.168.0.142:80,192.168.0.143:80   8s\r\nkubernetes     172.30.9.25:6443                                     12d\r\n\r\n[root@controller ~]# kubectl delete svc green\r\nservice \"green\" deleted\r\n\r\n[root@controller ~]# kubectl delete svc bgnginx\r\nservice \"bgnginx\" deleted\r\n\r\n[root@controller ~]# kubectl expose deploy green-nginx --port=80 --name=bgnginx\r\nservice\/bgnginx exposed\r\n\r\n[root@controller ~]# kubectl get pods -o wide\r\nNAME                            READY   STATUS    
RESTARTS      AGE   IP              NODE                  NOMINATED NODE   READINESS GATES\r\nblue-nginx-69497ddbcd-8gc4h     1\/1     Running   0             26m   192.168.0.145   worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\nblue-nginx-69497ddbcd-msnjc     1\/1     Running   0             26m   192.168.0.146   worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\nblue-nginx-69497ddbcd-ww7rq     1\/1     Running   0             26m   192.168.0.141   worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\ngreen-nginx-78cf677cf8-djnjl    1\/1     Running   0             36m   192.168.0.143   worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\ngreen-nginx-78cf677cf8-gqbjr    1\/1     Running   0             36m   192.168.0.131   worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\ngreen-nginx-78cf677cf8-pvcbb    1\/1     Running   0             36m   192.168.0.142   worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\n\r\n[root@controller ~]# kubectl get endpoints\r\nNAME           ENDPOINTS                                            AGE\r\nbgnginx        192.168.0.131:80,192.168.0.142:80,192.168.0.143:80   76s\r\nkubernetes     172.30.9.25:6443                                     12d\r\n\r\n[root@controller ~]# kubectl delete deployment.apps\/blue-nginx\r\ndeployment.apps \"blue-nginx\" deleted\r\n\r\n[root@controller ~]# kubectl get all\r\nNAME                                READY   STATUS    RESTARTS      AGE\r\npod\/green-nginx-78cf677cf8-djnjl    1\/1     Running   0             38m\r\npod\/green-nginx-78cf677cf8-gqbjr    1\/1     Running   0             38m\r\npod\/green-nginx-78cf677cf8-pvcbb    1\/1     Running   0             38m\r\n\r\nNAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\r\nservice\/bgnginx        ClusterIP   10.110.110.222   &lt;none&gt;        80\/TCP         2m46s\r\nservice\/kubernetes     ClusterIP   10.96.0.1        &lt;none&gt;        443\/TCP        
12d\r\n\r\nNAME                           READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/green-nginx    3\/3     3            3           38m\r\n\r\nNAME                                      DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/green-nginx-78cf677cf8    3         3         3       38m\r\n\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Canary Deployments<\/span><\/p>\n<ul>\n<li>A canary Deployment is an update strategy where you first push the update at small scale to see if it works well<\/li>\n<li>In terms of Kubernetes, you could imagine a Deployment that runs 4 replicas<\/li>\n<li>Next, you add a new Deployment that uses the same label<\/li>\n<li>Then you create a Service that selects on that shared label, so it load balances across both Deployments<\/li>\n<li>As the Service is load balancing, only 1 out of 5 requests would be serviced by the newer version<\/li>\n<li>If the new version doesn&#8217;t seem to be working, you can easily delete it again<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Step 1: Running the Old Version<\/span><\/p>\n<ul>\n<li><code>kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml &gt; ~\/oldnginx.yaml<\/code><\/li>\n<li><code>vim oldnginx.yaml<\/code>\n<ul>\n<li>Add the label <code>type: canary<\/code> to the Deployment metadata as well as the Pod template metadata<\/li>\n<\/ul>\n<\/li>\n<li><code>kubectl create -f oldnginx.yaml<\/code><\/li>\n<li><code>kubectl expose deploy old-nginx --name=nginx --port=80 --selector type=canary<\/code><\/li>\n<li><code>kubectl get svc; kubectl get endpoints<\/code><\/li>\n<li><code>minikube ssh; curl &lt;svc-ip-address&gt;<\/code> a few times; you&#8217;ll see the same page every time<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@controller ~]# kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml &gt; ~\/oldnginx.yaml\r\n[root@controller ~]# cat oldnginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  
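# (annotation, not part of the generated file) The next step is to edit this\r\n  # manifest and add the label type: canary to both the Deployment metadata and\r\n  # the Pod template metadata; a Service that selects on type=canary then load\r\n  # balances across old and new Pods. With 3 old replicas plus 1 new replica,\r\n  # about 1 in 4 requests reaches the canary.\r\n  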
creationTimestamp: null\r\n  labels:\r\n    app: old-nginx\r\n  name: old-nginx\r\nspec:\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: old-nginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: old-nginx\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        name: nginx\r\n        resources: {}\r\nstatus: {}\r\n\r\n[root@controller ~]# vim oldnginx.yaml\r\n[root@controller ~]# cat oldnginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: old-nginx\r\n    type: canary\r\n  name: old-nginx\r\nspec:\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: old-nginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: old-nginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        name: nginx\r\n        resources: {}\r\nstatus: {}\r\n\r\n[root@controller ~]# kubectl create -f oldnginx.yaml\r\ndeployment.apps\/old-nginx created\r\n\r\n[root@controller ~]# kubectl expose deploy old-nginx --name=nginx --port=80 --selector type=canary\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Step 2: Creating a ConfigMap<\/span><\/p>\n<ul>\n<li><code>kubectl cp &lt;old-nginx-pod&gt;:\/usr\/share\/nginx\/html\/index.html index.html<\/code><\/li>\n<li><code>vim index.html<\/code>\n<ul>\n<li>Add a line that uniquely identifies this as the canary Pod<\/li>\n<\/ul>\n<\/li>\n<li><code>kubectl create cm canary --from-file=index.html<\/code><\/li>\n<li><code>kubectl describe cm canary<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@controller ~]# kubectl get all\r\nNAME                             READY   STATUS    RESTARTS   AGE\r\npod\/old-nginx-5d6fc48749-7x8t6   1\/1     Running   0          39m\r\npod\/old-nginx-5d6fc48749-knph2   1\/1     Running   0          
39m\r\npod\/old-nginx-5d6fc48749-tn5d5   1\/1     Running   0          39m\r\n\r\nNAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nservice\/kubernetes   ClusterIP   10.96.0.1       &lt;none&gt;        443\/TCP   46m\r\nservice\/nginx        ClusterIP   10.97.203.127   &lt;none&gt;        80\/TCP    10m\r\n\r\nNAME                        READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/old-nginx   3\/3     3            3           39m\r\n\r\nNAME                                   DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/old-nginx-5d6fc48749   3         3         3       39m\r\n\r\n\r\n[root@controller ~]# kubectl cp old-nginx-5d6fc48749-tn5d5:\/usr\/share\/nginx\/html\/index.html index.html\r\ntar: Removing leading `\/' from member names\r\n\r\n\r\n[root@controller ~]# cat index.html\r\n&lt;!DOCTYPE html&gt;\r\n&lt;html&gt;\r\n&lt;head&gt;\r\n&lt;title&gt;Welcome to nginx!&lt;\/title&gt;\r\n&lt;style&gt;\r\n    body {\r\n        width: 35em;\r\n        margin: 0 auto;\r\n        font-family: Tahoma, Verdana, Arial, sans-serif;\r\n    }\r\n&lt;\/style&gt;\r\n&lt;\/head&gt;\r\n&lt;body&gt;\r\n&lt;h1&gt;Welcome to nginx!&lt;\/h1&gt;\r\n&lt;p&gt;If you see this page, the nginx web server is successfully installed and\r\nworking. 
Further configuration is required.&lt;\/p&gt;\r\n\r\n&lt;p&gt;For online documentation and support please refer to\r\n&lt;a href=\"http:\/\/nginx.org\/\"&gt;nginx.org&lt;\/a&gt;.&lt;br\/&gt;\r\nCommercial support is available at\r\n&lt;a href=\"http:\/\/nginx.com\/\"&gt;nginx.com&lt;\/a&gt;.&lt;\/p&gt;\r\n\r\n&lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;\/em&gt;&lt;\/p&gt;\r\n&lt;\/body&gt;\r\n&lt;\/html&gt;\r\n\r\n\r\n[root@controller ~]# vi index.html\r\n[root@controller ~]# cat index.html\r\n&lt;!DOCTYPE html&gt;\r\n&lt;html&gt;\r\n&lt;head&gt;\r\n&lt;title&gt;Welcome to the Canary!&lt;\/title&gt;\r\n&lt;style&gt;\r\n    body {\r\n        width: 35em;\r\n        margin: 0 auto;\r\n        font-family: Tahoma, Verdana, Arial, sans-serif;\r\n    }\r\n&lt;\/style&gt;\r\n&lt;\/head&gt;\r\n&lt;body&gt;\r\n&lt;h1&gt;Welcome to Canary !&lt;\/h1&gt;\r\n&lt;p&gt;Hello Canary&lt;\/p&gt;\r\n&lt;\/body&gt;\r\n&lt;\/html&gt;\r\n\r\n\r\n[root@controller ~]# kubectl create cm canary --from-file=index.html\r\nconfigmap\/canary created\r\n\r\n\r\n[root@controller ~]# kubectl describe cm canary\r\nName:         canary\r\nNamespace:    default\r\nLabels:       &lt;none&gt;\r\nAnnotations:  &lt;none&gt;\r\n\r\nData\r\n====\r\nindex.html:\r\n----\r\n&lt;!DOCTYPE html&gt;\r\n&lt;html&gt;\r\n&lt;head&gt;\r\n&lt;title&gt;Welcome to the Canary!&lt;\/title&gt;\r\n&lt;style&gt;\r\n    body {\r\n        width: 35em;\r\n        margin: 0 auto;\r\n        font-family: Tahoma, Verdana, Arial, sans-serif;\r\n    }\r\n&lt;\/style&gt;\r\n&lt;\/head&gt;\r\n&lt;body&gt;\r\n&lt;h1&gt;Welcome to Canary !&lt;\/h1&gt;\r\n&lt;p&gt;Hello Canary&lt;\/p&gt;\r\n&lt;\/body&gt;\r\n&lt;\/html&gt;\r\n\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Step 3: Preparing the New Version<\/span><\/p>\n<ul>\n<li><code>cp oldnginx.yaml canary.yaml<\/code><\/li>\n<li><code>vim canary.yaml<\/code>\n<ul>\n<li><code>image: nginx:latest<\/code><\/li>\n<li><code>replicas: 1<\/code><\/li>\n<li><code> 
:%s\/old\/new\/g<\/code><\/li>\n<li>Mount the configMap as a volume (see Git repo canary.yaml)<\/li>\n<\/ul>\n<\/li>\n<li><code>kubectl create -f canary.yaml<\/code><\/li>\n<li><code>kubectl get svc; kubectl get endpoints<\/code><\/li>\n<li><code>minikube ssh; curl &lt;service-ip&gt;<\/code> and notice different results: this is canary in action<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@controller ~]# cp oldnginx.yaml canary.yaml\r\n\r\n[root@controller ~]# cat canary.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: old-nginx\r\n    type: canary\r\n  name: old-nginx\r\nspec:\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: old-nginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: old-nginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        name: nginx\r\n        resources: {}\r\nstatus: {}\r\n\r\n\r\n[root@controller ~]# vim canary.yaml\r\n[root@controller ~]# cat canary.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: new-nginx\r\n    type: canary\r\n  name: new-nginx\r\nspec:\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: new-nginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: new-nginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:latest\r\n        name: nginx\r\n        resources: {}\r\nstatus: {}\r\n<\/pre>\n<p>Go to the Kubernetes documentation -&gt; search: ConfigMap<\/p>\n<pre class=\"lang:default mark:28-35 decode:true\">[root@controller ~]# vim canary.yaml\r\n[root@controller ~]# cat canary.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: new-nginx\r\n    type: canary\r\n  name: 
new-nginx\r\nspec:\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: new-nginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: new-nginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:latest\r\n        name: nginx\r\n        resources: {}\r\n        volumeMounts:\r\n        - name: canvol\r\n          mountPath: \"\/usr\/share\/nginx\/html\/index.html\"\r\n      volumes:\r\n      - name: canvol\r\n        configMap:\r\n          name: canary\r\nstatus: {}\r\n\r\n\r\n[root@controller ~]# kubectl create -f canary.yaml\r\ndeployment.apps\/new-nginx created\r\n\r\n\r\n[root@controller ~]# kubectl get svc; kubectl get endpoints\r\nNAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nkubernetes   ClusterIP   10.96.0.1       &lt;none&gt;        443\/TCP   74m\r\noldnginx     ClusterIP   10.97.203.127   &lt;none&gt;        80\/TCP    38m\r\nNAME         ENDPOINTS                                             AGE\r\nkubernetes   172.30.9.25:6443                                      74m\r\nnginx        172.16.102.131:80,172.16.71.192:80,172.16.71.193:80   38m\r\n\r\n\r\n[root@controller ~]# kubectl get pods --show-labels\r\nNAME                         READY   STATUS             RESTARTS      AGE   LABELS\r\nnew-nginx-b7bfd4469-6hdcs    0\/1     CrashLoopBackOff   3 (36s ago)   84s   app=new-nginx,pod-template-hash=b7bfd4469,type=canary\r\nold-nginx-5d6fc48749-7x8t6   1\/1     Running            0             67m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary\r\nold-nginx-5d6fc48749-knph2   1\/1     Running            0             67m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary\r\nold-nginx-5d6fc48749-tn5d5   1\/1     Running            0             67m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary\r\n\r\n\r\n[root@controller ~]# kubectl describe new-nginx-b7bfd4469-6hdcs\r\nerror: the server doesn't have 
a resource type \"new-nginx-b7bfd4469-6hdcs\"\r\n\r\n\r\n[root@controller ~]# kubectl describe pod new-nginx-b7bfd4469-6hdcs\r\nName:             new-nginx-b7bfd4469-6hdcs\r\nNamespace:        default\r\nPriority:         0\r\nService Account:  default\r\nNode:             worker2.example.com\/172.30.9.27\r\nStart Time:       Thu, 29 Feb 2024 12:38:07 -0500\r\nLabels:           app=new-nginx\r\n                  pod-template-hash=b7bfd4469\r\n                  type=canary\r\nAnnotations:      cni.projectcalico.org\/containerID: 1aba5965b8b90c8aca42dacf6fe882ed25aa73ead94fd1bcb7e426d7beb6903d\r\n                  cni.projectcalico.org\/podIP: 172.16.71.194\/32\r\n                  cni.projectcalico.org\/podIPs: 172.16.71.194\/32\r\nStatus:           Running\r\nIP:               172.16.71.194\r\nIPs:\r\n  IP:           172.16.71.194\r\nControlled By:  ReplicaSet\/new-nginx-b7bfd4469\r\nContainers:\r\n  nginx:\r\n    Container ID:   containerd:\/\/e0e8ffe81b49b64788a6a9474870b52151e88764d8bf2f5f49c841665fc57f13\r\n    Image:          nginx:latest\r\n    Image ID:       docker.io\/library\/nginx@sha256:25ff478171a2fd27d61a1774d97672bb7c13e888749fc70c711e207be34d370a\r\n    Port:           &lt;none&gt;\r\n    Host Port:      &lt;none&gt;\r\n    State:          Waiting\r\n      Reason:       RunContainerError\r\n    Last State:     Terminated\r\n      Reason:       StartError\r\n      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"\/var\/lib\/kubelet\/pods\/5600e3eb-ed9c-4834-940b-fb15c114d56c\/volumes\/kubernetes.io~configmap\/canvol\" to rootfs at \"\/usr\/share\/nginx\/html\/index.html\": mount \/var\/lib\/kubelet\/pods\/5600e3eb-ed9c-4834-940b-fb15c114d56c\/volumes\/kubernetes.io~configmap\/canvol:\/usr\/share\/nginx\/html\/index.html (via \/proc\/self\/fd\/6), flags: 0x5001: not a directory: unknown\r\n      
Exit Code:    128\r\n      Started:      Wed, 31 Dec 1969 19:00:00 -0500\r\n      Finished:     Thu, 29 Feb 2024 12:39:48 -0500\r\n    Ready:          False\r\n    Restart Count:  4\r\n    Environment:    &lt;none&gt;\r\n    Mounts:\r\n      \/usr\/share\/nginx\/html\/index.html from canvol (rw)\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from kube-api-access-vmxvc (ro)\r\nConditions:\r\n  Type              Status\r\n  Initialized       True\r\n  Ready             False\r\n  ContainersReady   False\r\n  PodScheduled      True\r\nVolumes:\r\n  canvol:\r\n    Type:      ConfigMap (a volume populated by a ConfigMap)\r\n    Name:      canary\r\n    Optional:  false\r\n  kube-api-access-vmxvc:\r\n    Type:                    Projected (a volume that contains injected data from multiple sources)\r\n    TokenExpirationSeconds:  3607\r\n    ConfigMapName:           kube-root-ca.crt\r\n    ConfigMapOptional:       &lt;nil&gt;\r\n    DownwardAPI:             true\r\nQoS Class:                   BestEffort\r\nNode-Selectors:              &lt;none&gt;\r\nTolerations:                 node.kubernetes.io\/not-ready:NoExecute op=Exists for 300s\r\n                             node.kubernetes.io\/unreachable:NoExecute op=Exists for 300s\r\nEvents:\r\n  Type     Reason     Age                 From               Message\r\n  ----     ------     ----                ----               -------\r\n  Normal   Scheduled  105s                default-scheduler  Successfully assigned default\/new-nginx-b7bfd4469-6hdcs to worker2.example.com\r\n  Normal   Pulled     103s                kubelet            Successfully pulled image \"nginx:latest\" in 1.26s (1.26s including waiting)\r\n  Normal   Pulled     101s                kubelet            Successfully pulled image \"nginx:latest\" in 934ms (934ms including waiting)\r\n  Normal   Pulled     85s                 kubelet            Successfully pulled image \"nginx:latest\" in 897ms (897ms including waiting)\r\n  Normal   
Created    57s (x4 over 103s)  kubelet            Created container nginx\r\n  Warning  Failed     57s (x4 over 103s)  kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"\/var\/lib\/kubelet\/pods\/5600e3eb-ed9c-4834-940b-fb15c114d56c\/volumes\/kubernetes.io~configmap\/canvol\" to rootfs at \"\/usr\/share\/nginx\/html\/index.html\": mount \/var\/lib\/kubelet\/pods\/5600e3eb-ed9c-4834-940b-fb15c114d56c\/volumes\/kubernetes.io~configmap\/canvol:\/usr\/share\/nginx\/html\/index.html (via \/proc\/self\/fd\/6), flags: 0x5001: not a directory: unknown\r\n  Normal   Pulled     57s                 kubelet            Successfully pulled image \"nginx:latest\" in 908ms (908ms including waiting)\r\n  Warning  BackOff    18s (x8 over 100s)  kubelet            Back-off restarting failed container nginx in pod new-nginx-b7bfd4469-6hdcs_default(5600e3eb-ed9c-4834-940b-fb15c114d56c)\r\n  Normal   Pulling    5s (x5 over 104s)   kubelet            Pulling image \"nginx:latest\"\r\n[root@controller ~]#\r\n\r\n[root@controller ~]# vim canary.yaml\r\n\r\n[root@controller ~]# kubectl delete -f canary.yaml\r\ndeployment.apps \"new-nginx\" deleted\r\n\r\n\r\n[root@controller ~]# cat canary.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: new-nginx\r\n    type: canary\r\n  name: new-nginx\r\nspec:\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: new-nginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: new-nginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:latest\r\n        name: nginx\r\n        resources: {}\r\n        volumeMounts:\r\n        - name: canvol\r\n          mountPath: \"\/usr\/share\/nginx\/html\"\r\n      volumes:\r\n      - 
name: canvol\r\n        configMap:\r\n          name: canary\r\nstatus: {}\r\n\r\n\r\n[root@controller ~]# kubectl create -f canary.yaml\r\ndeployment.apps\/new-nginx created\r\n\r\n\r\n[root@controller ~]# kubectl get pods --show-labels\r\nNAME                         READY   STATUS    RESTARTS   AGE   LABELS\r\nnew-nginx-7c59c857b9-pvwvb   1\/1     Running   0          7s    app=new-nginx,pod-template-hash=7c59c857b9,type=canary\r\nold-nginx-5d6fc48749-7x8t6   1\/1     Running   0          69m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary\r\nold-nginx-5d6fc48749-knph2   1\/1     Running   0          69m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary\r\nold-nginx-5d6fc48749-tn5d5   1\/1     Running   0          69m   app=old-nginx,pod-template-hash=5d6fc48749,type=canary\r\n\r\n\r\n[root@controller ~]# kubectl get svc; kubectl get endpoints\r\nNAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nkubernetes   ClusterIP   10.96.0.1       &lt;none&gt;        443\/TCP   76m\r\noldnginx     ClusterIP   10.97.203.127   &lt;none&gt;        80\/TCP    40m\r\nNAME         ENDPOINTS                                                         AGE\r\nkubernetes   172.30.9.25:6443                                                  76m\r\nnginx        172.16.102.131:80,172.16.71.192:80,172.16.71.193:80 + 1 more...   
40m\r\n\r\n[root@controller ~]# curl 10.97.203.127\r\n\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Step 4: Activating the New Version<\/span><\/p>\n<ul>\n<li>Use <code>kubectl get deploy<\/code> to verify the names of the old and the new<br \/>\ndeployment<\/li>\n<li>Use<code> kubectl scale<\/code> to scale the canary deployment up to the desired number of replicas<\/li>\n<li><code>kubectl delete deploy<\/code> to delete the old deployment<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@controller ~]# kubectl get deploy\r\nNAME        READY   UP-TO-DATE   AVAILABLE   AGE\r\nnew-nginx   1\/1     1            1           94m\r\nold-nginx   3\/3     3            3           162m\r\n\r\n[root@controller ~]# kubectl scale deploy new-nginx --replicas=3\r\ndeployment.apps\/new-nginx scaled\r\n\r\n[root@controller ~]# kubectl get deploy\r\nNAME        READY   UP-TO-DATE   AVAILABLE   AGE\r\nnew-nginx   3\/3     3            3           95m\r\nold-nginx   3\/3     3            3           164m\r\n\r\n[root@controller ~]# kubectl describe svc oldnginx\r\nName:              oldnginx\r\nNamespace:         default\r\nLabels:            app=old-nginx\r\n                   type=canary\r\nAnnotations:       &lt;none&gt;\r\nSelector:          type=canary\r\nType:              ClusterIP\r\nIP Family Policy:  SingleStack\r\nIP Families:       IPv4\r\nIP:                10.97.203.127\r\nIPs:               10.97.203.127\r\nPort:              &lt;unset&gt;  80\/TCP\r\nTargetPort:        80\/TCP\r\nEndpoints:         172.16.102.131:80,172.16.102.132:80,172.16.71.192:80 + 3 more...\r\nSession Affinity:  None\r\nEvents:            &lt;none&gt;\r\n\r\n[root@controller ~]# kubectl scale deploy old-nginx --replicas=0\r\ndeployment.apps\/old-nginx scaled\r\n\r\n[root@controller ~]# kubectl get deploy\r\nNAME        READY   UP-TO-DATE   AVAILABLE   AGE\r\nnew-nginx   3\/3     3            3           97m\r\nold-nginx   0\/0     0            0           
166m\r\n\r\n[root@controller ~]# kubectl get all\r\nNAME                             READY   STATUS    RESTARTS   AGE\r\npod\/new-nginx-7c59c857b9-75l29   1\/1     Running   0          2m19s\r\npod\/new-nginx-7c59c857b9-cmdct   1\/1     Running   0          2m19s\r\npod\/new-nginx-7c59c857b9-pvwvb   1\/1     Running   0          97m\r\n\r\nNAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nservice\/kubernetes   ClusterIP   10.96.0.1       &lt;none&gt;        443\/TCP   173m\r\nservice\/oldnginx     ClusterIP   10.97.203.127   &lt;none&gt;        80\/TCP    137m\r\n\r\nNAME                        READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/new-nginx   3\/3     3            3           97m\r\ndeployment.apps\/old-nginx   0\/0     0            0           166m\r\n\r\nNAME                                   DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/new-nginx-7c59c857b9   3         3         3       97m\r\nreplicaset.apps\/old-nginx-5d6fc48749   0         0         0       166m\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Custom Resource Definitions<\/span><\/p>\n<ul>\n<li>Custom Resource Definitions (CRDs) allow users to add custom resources to clusters<\/li>\n<li>Doing so allows almost anything to be integrated in a cloud-native environment<\/li>\n<li>A CRD allows users to add resources in a very easy way\n<ul>\n<li>The resources are added as an extension to the original Kubernetes API server<\/li>\n<li>No programming skills are required<\/li>\n<\/ul>\n<\/li>\n<li>The alternative way to build custom resources is via API integration\n<ul>\n<li>This builds a custom API server<\/li>\n<li>Programming skills are required<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Creating Custom Resources<\/span><\/p>\n<ul>\n<li>Creating Custom Resources using CRDs is a two-step procedure<\/li>\n<li>First, you&#8217;ll need to define the resource, using the CustomResourceDefinition 
API kind<\/li>\n<li>After defining the resource, it can be added through its own API resource<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Creating Custom Resources Commands<\/span><\/p>\n<ul>\n<li><code>cat crd-object.yaml<\/code><\/li>\n<li><code>kubectl create -f crd-object.yaml<\/code><\/li>\n<li><code>kubectl api-resources | grep backup<\/code><\/li>\n<li><code>cat crd-backup.yaml<\/code><\/li>\n<li><code>kubectl create -f crd-backup.yaml<\/code><\/li>\n<li><code>kubectl get backups<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<pre class=\"lang:default decode:true \">[root@controller ckad]# cat crd-object.yaml\r\napiVersion: apiextensions.k8s.io\/v1\r\nkind: CustomResourceDefinition\r\nmetadata:\r\n  name: backups.stable.example.com\r\nspec:\r\n  group: stable.example.com\r\n  versions:\r\n  - name: v1\r\n    served: true\r\n    storage: true\r\n    schema:\r\n      openAPIV3Schema:\r\n        type: object\r\n        properties:\r\n          spec:\r\n            type: object\r\n            properties:\r\n              backupType:\r\n                type: string\r\n              image:\r\n                type: string\r\n              replicas:\r\n                type: integer\r\n  scope: Namespaced\r\n  names:\r\n    plural: backups\r\n    singular: backup\r\n    shortNames:\r\n     - bks\r\n    kind: BackUp\r\n\r\n[root@controller ckad]# kubectl create -f crd-object.yaml\r\ncustomresourcedefinition.apiextensions.k8s.io\/backups.stable.example.com created\r\n\r\n[root@controller ckad]# kubectl api-versions | grep backup\r\n\r\n[root@controller ckad]# kubectl api-resources | grep backup\r\nbackups                           bks          stable.example.com\/v1                  true         BackUp\r\n\r\n[root@controller ckad]# cat crd-backup.yaml\r\napiVersion: \"stable.example.com\/v1\"\r\nkind: BackUp\r\nmetadata:\r\n  name: mybackup\r\nspec:\r\n  backupType: full\r\n  image: linux-backup-image\r\n  replicas: 5\r\n\r\n[root@controller ckad]# kubectl create -f 
crd-backup.yaml\r\nbackup.stable.example.com\/mybackup created\r\n\r\n[root@controller ckad]# kubectl get backups\r\nNAME       AGE\r\nmybackup   12s\r\n\r\n[root@controller ckad]# kubectl describe backups.stable.example.com mybackup\r\nName:         mybackup\r\nNamespace:    default\r\nLabels:       &lt;none&gt;\r\nAnnotations:  &lt;none&gt;\r\nAPI Version:  stable.example.com\/v1\r\nKind:         BackUp\r\nMetadata:\r\n  Creation Timestamp:  2024-03-05T11:26:52Z\r\n  Generation:          1\r\n  Resource Version:    646523\r\n  UID:                 3c69b166-da8a-47d1-82a6-f9c27f616142\r\nSpec:\r\n  Backup Type:  full\r\n  Image:        linux-backup-image\r\n  Replicas:     5\r\nEvents:         &lt;none&gt;\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Operators and Controllers<\/span><\/p>\n<ul>\n<li>Operators are custom applications based on Custom Resource Definitions<\/li>\n<li>Operators can be seen as a way of packaging, running and managing applications in Kubernetes<\/li>\n<li>Operators are based on Controllers, which are Kubernetes components that continuously operate dynamic systems<\/li>\n<li>The Controller loop is the essence of any Controller<\/li>\n<li>The Kubernetes Controller manager runs a reconciliation loop, which continuously observes the current state, compares it to the desired state, and adjusts the current state when necessary<\/li>\n<li>Operators are application-specific Controllers<\/li>\n<li>Operators can be added to Kubernetes by developing them yourself<\/li>\n<li>Operators are also available from community websites<\/li>\n<li>A common registry for operators is found at operatorhub.io (which is rather OpenShift oriented)<\/li>\n<li>Many solutions from the Kubernetes ecosystem are provided as operators\n<ul>\n<li><strong>Prometheus<\/strong>: a monitoring and alerting solution<\/li>\n<li><strong>Tigera<\/strong>: the operator that manages the Calico network plugin<\/li>\n<li><strong>Jaeger<\/strong>: used for tracing transactions 
between distributed services<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Lab: Using Canary Deployments<\/span><\/p>\n<ul>\n<li>Run an nginx Deployment that meets the following requirements\n<ul>\n<li>Use a ConfigMap to provide an index.html file containing the text &#8220;welcome to the old version&#8221;<\/li>\n<li>Use image version 1.14<\/li>\n<li>Run 3 replicas<\/li>\n<\/ul>\n<\/li>\n<li>Use the canary Deployment upgrade strategy to replace it with a newer version of the application\n<ul>\n<li>Use a ConfigMap to provide an index.html in the new application, containing the text &#8220;welcome to the new version&#8221;<\/li>\n<li>Set the image version to latest<\/li>\n<\/ul>\n<\/li>\n<li>Complete the transition such that the old application is removed completely after you have verified that the updated application works<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@controller ckad]# vim index.html\r\n[root@controller ckad]# cat index.html\r\nWelcome to the old version !\r\n[root@controller ckad]# kubectl create cm -h | more\r\n...\r\nAliases:\r\nconfigmap, cm\r\n\r\nExamples:\r\n  # Create a new config map named my-config based on folder bar\r\n  kubectl create configmap my-config --from-file=path\/to\/bar\r\n ...\r\n[root@controller ckad]# kubectl create cm oldversion --from-file=index.html\r\nconfigmap\/oldversion created\r\n\r\n[root@controller ckad]# kubectl get cm oldversion\r\nNAME         DATA   AGE\r\noldversion   1      30s\r\n\r\n[root@controller ckad]# kubectl get cm oldversion -o yaml\r\napiVersion: v1\r\ndata:\r\n  index.html: |\r\n    Welcome to the old version !\r\nkind: ConfigMap\r\nmetadata:\r\n  creationTimestamp: \"2024-03-05T14:07:41Z\"\r\n  name: oldversion\r\n  namespace: default\r\n  resourceVersion: \"661988\"\r\n  uid: edc224bf-7059-44bb-b96c-30275858019f\r\n\r\n[root@controller ckad]# echo Welcome to the new version ! 
&gt; index.html\r\n[root@controller ckad]# cat index.html\r\nWelcome to the new version !\r\n\r\n[root@controller ckad]# kubectl create cm newversion --from-file=index.html\r\nconfigmap\/newversion created\r\n\r\n[root@controller ckad]# kubectl get cm  -o yaml\r\n\r\n- apiVersion: v1\r\n  data:\r\n    index.html: |\r\n      Welcome to the new version !\r\n  kind: ConfigMap\r\n  metadata:\r\n    creationTimestamp: \"2024-03-05T14:10:35Z\"\r\n    name: newversion\r\n    namespace: default\r\n    resourceVersion: \"662262\"\r\n    uid: d7db8273-dd05-45af-b592-7f8e16dd08cc\r\n- apiVersion: v1\r\n  data:\r\n    index.html: |\r\n      Welcome to the old version !\r\n  kind: ConfigMap\r\n  metadata:\r\n    creationTimestamp: \"2024-03-05T14:07:41Z\"\r\n    name: oldversion\r\n    namespace: default\r\n    resourceVersion: \"661988\"\r\n    uid: edc224bf-7059-44bb-b96c-30275858019f\r\nkind: List\r\nmetadata:\r\n  resourceVersion: \"\"\r\n\r\n\r\n[root@controller ckad]# kubectl create deploy oldnginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml &gt; oldnginx.yaml\r\n[root@controller ckad]# cat oldnginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: oldnginx\r\n  name: oldnginx\r\nspec:\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: oldnginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: oldnginx\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        name: nginx\r\n        resources: {}\r\nstatus: {}\r\n\r\n[root@controller ckad]# vim oldnginx.yaml\r\n[root@controller ckad]# cat oldnginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: oldnginx\r\n    type: canary\r\n  name: oldnginx\r\nspec:\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: oldnginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      
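# (annotation) Both this lab Deployment and the new version that replaces it carry\r\n      # the label type: canary, so a single Service selecting on type=canary can send\r\n      # traffic to old and new Pods at the same time. The ConfigMap oldversion is\r\n      # mounted over \/usr\/share\/nginx\/html further down, replacing the default page\r\n      # that the nginx image ships with.\r\n      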
creationTimestamp: null\r\n      labels:\r\n        app: oldnginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        name: nginx\r\n        resources: {}\r\n        volumeMounts:\r\n        - name: indexfile # pod volume name\r\n          mountPath: \"\/usr\/share\/nginx\/html\/\"\r\n      volumes:\r\n      - name: indexfile\r\n        configMap:\r\n          name: oldversion\r\nstatus: {}\r\n[root@controller ckad]# kubectl create -f oldnginx.yaml\r\ndeployment.apps\/oldnginx created\r\n\r\n\r\n[root@controller ckad]# kubectl get all\r\nNAME                           READY   STATUS    RESTARTS   AGE\r\npod\/oldngix-7f6d676788-bglsk   1\/1     Running   0          16s\r\npod\/oldngix-7f6d676788-gsxxg   1\/1     Running   0          16s\r\npod\/oldngix-7f6d676788-jsfft   1\/1     Running   0          16s\r\n\r\nNAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE\r\nservice\/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443\/TCP   4d21h\r\n\r\nNAME                      READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/oldngix   3\/3     3            3           16s\r\n\r\nNAME                                 DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/oldngix-7f6d676788   3         3         3       16s\r\n\r\n\r\n[root@controller ckad]# kubectl get all --show-labels\r\nNAME                           READY   STATUS    RESTARTS   AGE   LABELS\r\npod\/oldngix-7f6d676788-bglsk   1\/1     Running   0          63s   app=oldngix,pod-template-hash=7f6d676788,type=canary\r\npod\/oldngix-7f6d676788-gsxxg   1\/1     Running   0          63s   app=oldngix,pod-template-hash=7f6d676788,type=canary\r\npod\/oldngix-7f6d676788-jsfft   1\/1     Running   0          63s   app=oldngix,pod-template-hash=7f6d676788,type=canary\r\n\r\nNAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     LABELS\r\nservice\/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443\/TCP 
  4d21h   component=apiserver,provider=kubernetes\r\n\r\n
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   LABELS\r\ndeployment.apps\/oldnginx  3\/3     3            3           64s   app=oldnginx,type=canary\r\n\r\n
NAME                                 DESIRED   CURRENT   READY   AGE   LABELS\r\nreplicaset.apps\/oldnginx-7f6d676788  3         3         3       64s   app=oldnginx,pod-template-hash=7f6d676788,type=canary\r\n\r\n\r\n
[root@controller ckad]# kubectl get all -o wide\r\nNAME                           READY   STATUS    RESTARTS   AGE   IP               NODE                  NOMINATED NODE   READINESS GATES\r\npod\/oldnginx-7f6d676788-bglsk  1\/1     Running   0          69s   172.16.71.197    worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\npod\/oldnginx-7f6d676788-gsxxg  1\/1     Running   0          69s   172.16.102.136   worker1.example.com   &lt;none&gt;           &lt;none&gt;\r\npod\/oldnginx-7f6d676788-jsfft  1\/1     Running   0          69s   172.16.71.198    worker2.example.com   &lt;none&gt;           &lt;none&gt;\r\n\r\n
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR\r\nservice\/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443\/TCP   4d21h   &lt;none&gt;\r\n\r\n
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR\r\ndeployment.apps\/oldnginx  3\/3     3            3           69s   nginx        nginx:1.14   app=oldnginx\r\n\r\n
NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES       SELECTOR\r\nreplicaset.apps\/oldnginx-7f6d676788  3         3         3       69s   nginx        nginx:1.14   app=oldnginx,pod-template-hash=7f6d676788\r\n\r\n\r\n
[root@controller ckad]# kubectl expose -h | more\r\n...\r\n  # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000\r\n  kubectl expose deployment nginx --port=80 --target-port=8000\r\n...\r\n
    --selector='':\r\n        A label selector to use for this service. Only equality-based selector requirements are supported. If empty (the default) infer the selector from the replication controller or replica set.)\r\n\r\n\r\n
[root@controller ckad]# kubectl expose deployment oldnginx --name=canary --port=80 --selector=type=canary\r\nservice\/canary exposed\r\n\r\n
[root@controller ckad]# kubectl get endpoints\r\nNAME         ENDPOINTS                                             AGE\r\ncanary       172.16.102.136:80,172.16.71.197:80,172.16.71.198:80   4s\r\nkubernetes   172.30.9.25:6443                                      4d22h\r\n\r\n
[root@controller ckad]# kubectl get svc\r\nNAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE\r\ncanary       ClusterIP   10.96.148.14   &lt;none&gt;        80\/TCP    4s\r\nkubernetes   ClusterIP   10.96.0.1      &lt;none&gt;        443\/TCP   4d22h\r\n\r\n
[root@controller ckad]# kubectl describe svc canary\r\nName:              canary\r\nNamespace:         default\r\nLabels:            app=oldnginx\r\n                   type=canary\r\nAnnotations:       &lt;none&gt;\r\nSelector:          type=canary\r\nType:              ClusterIP\r\nIP Family Policy:  SingleStack\r\nIP Families:       IPv4\r\nIP:                10.96.148.14\r\nIPs:               10.96.148.14\r\nPort:              &lt;unset&gt;  80\/TCP\r\nTargetPort:        80\/TCP\r\nEndpoints:         172.16.102.136:80,172.16.71.197:80,172.16.71.198:80\r\nSession Affinity:  None\r\nEvents:            &lt;none&gt;\r\n
[root@controller ckad]# curl 10.96.148.14\r\nWelcome to the old version !\r\n\r\n
[root@controller ckad]# cp oldnginx.yaml newnginx.yaml\r\n[root@controller ckad]# cat newnginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: oldnginx\r\n    type: canary\r\n  name: oldnginx\r\nspec:\r\n  replicas: 3\r\n  selector:\r\n    matchLabels:\r\n      app: oldnginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: oldnginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx:1.14\r\n        name: nginx\r\n        resources: {}\r\n        volumeMounts:\r\n        - name: indexfile # pod volume name\r\n          mountPath: \"\/usr\/share\/nginx\/html\/\"\r\n      volumes:\r\n      - name: indexfile\r\n        configMap:\r\n          name: oldversion\r\nstatus: {}\r\n\r\n
[root@controller ckad]# vim newnginx.yaml\r\n[root@controller ckad]# cat newnginx.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    app: newnginx\r\n    type: canary\r\n  name: newnginx\r\nspec:\r\n  replicas: 1\r\n  selector:\r\n    matchLabels:\r\n      app: newnginx\r\n  strategy: {}\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: newnginx\r\n        type: canary\r\n    spec:\r\n      containers:\r\n      - image: nginx\r\n        name: nginx\r\n        resources: {}\r\n        volumeMounts:\r\n        - name: indexfile # pod volume name\r\n          mountPath: \"\/usr\/share\/nginx\/html\/\"\r\n      volumes:\r\n      - name: indexfile\r\n        configMap:\r\n          name: newversion\r\nstatus: {}\r\n\r\n
[root@controller ckad]# kubectl create -f newnginx.yaml\r\ndeployment.apps\/newnginx created\r\n\r\n
[root@controller ckad]# kubectl get svc\r\nNAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE\r\ncanary       ClusterIP   10.96.148.14   &lt;none&gt;        80\/TCP    33m\r\nkubernetes   ClusterIP   10.96.0.1      &lt;none&gt;        443\/TCP   4d22h\r\n\r\n
[root@controller ckad]# curl 10.96.148.14\r\n\r\n
[root@controller ckad]# kubectl get deploy\r\nNAME       READY   UP-TO-DATE   AVAILABLE   AGE\r\nnewnginx   1\/1     1            1           115s\r\noldnginx   3\/3     3            3           12m\r\n\r\n
[root@controller ckad]# kubectl scale deployment newnginx --replicas=3\r\ndeployment.apps\/newnginx scaled\r\n\r\n
[root@controller ckad]# kubectl scale deployment oldnginx --replicas=0\r\ndeployment.apps\/oldnginx scaled\r\n\r\n
[root@controller ckad]# kubectl get deploy\r\nNAME       READY   UP-TO-DATE   AVAILABLE   AGE\r\nnewnginx   3\/3     3            3           3m20s\r\noldnginx   0\/0     0            0           14m\r\n\r\n
[root@controller ckad]# curl 10.96.148.14\r\nWelcome to the new version !\r\n\r\n<\/pre>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":5946,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[99],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5588"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=5588"}],"version-history":[{"count":58,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5588\/revisions"}],"predecessor-version":[{"id":5947,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5588\/revisions\/5947"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media\/5946"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=5588"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=5588"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=5588"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","te
mplated":true}]}}