Working with Namespaces
Create a Namespace ckad-ns1 in your cluster. In this Namespace, run
the following Pods:
1. A Pod with the name pod-a, running the httpd server image
2. A Pod with the name pod-b, running the nginx server image as well
as the alpine image
[root@controller ~]# kubectl create ns ckad-ns1 namespace/ckad-ns1 created [root@controller ~]# kubectl get ns NAME STATUS AGE ckad-ns1 Active 5s default Active 6d19h kube-node-lease Active 6d19h kube-public Active 6d19h kube-system Active 6d19h myvol Active 37h [root@controller ~]# kubectl run pod-a --image=httpd -n ckad-ns1 pod/pod-a created [root@controller ~]# kubectl run pod-b -n ckad-ns1 --image=alpine --dry-run=client -o yaml -- sleep 3600 > task1.yaml [root@controller ~]# cat task1.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: pod-b name: pod-b namespace: ckad-ns1 spec: containers: - args: - sleep - "3600" image: alpine name: pod-b resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} [root@controller ~]# vim task1.yaml [root@controller ~]# cat task1.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: pod-b name: pod-b namespace: ckad-ns1 spec: containers: - args: - sleep - "3600" image: alpine name: alpine resources: {} - name: nginx image: nginx dnsPolicy: ClusterFirst restartPolicy: Always status: {} [root@controller ~]# kubectl create -f task1.yaml pod/pod-b created [root@controller ~]# kubectl get pods -n ckad-ns1 NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 10m pod-b 2/2 Running 0 17s [root@controller ~]# kubectl describe pod pod-b -n ckad-ns1 Name: pod-b Namespace: ckad-ns1 Priority: 0 Service Account: default Node: worker2.example.com/172.30.9.27 Start Time: Thu, 07 Mar 2024 07:14:13 -0500 Labels: run=pod-b Annotations: cni.projectcalico.org/containerID: f311336c22851ad11ff2fa155dc1c5332f11b14884e25fe537277657662e706d cni.projectcalico.org/podIP: 172.16.71.219/32 cni.projectcalico.org/podIPs: 172.16.71.219/32 Status: Running IP: 172.16.71.219 IPs: IP: 172.16.71.219 Containers: alpine: Container ID: containerd://86d25366b5f7e1d7977286263b8d9c44d6cf2f9542d67bcfeeff6a1005304de4 Image: alpine Image ID: docker.io/library/alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b Port: <none> Host Port: <none> Args: sleep 3600 State: Running Started: Thu, 07 Mar 2024 07:14:15 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tkw6v (ro) nginx: Container ID: containerd://3ea012e6a6173466fe38315d1e7a57f274b10307bfd98dafda5d69837b3ebd0c Image: nginx Image ID: docker.io/library/nginx@sha256:25ff478171a2fd27d61a1774d97672bb7c13e888749fc70c711e207be34d370a Port: <none> Host Port: <none> State: Running Started: Thu, 07 Mar 2024 07:14:16 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tkw6v (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-tkw6v: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 58s default-scheduler Successfully assigned ckad-ns1/pod-b to worker2.example.com Normal Pulling 57s kubelet Pulling image "alpine" Normal Pulled 56s kubelet Successfully pulled image "alpine" in 952ms (952ms including waiting) Normal Created 56s kubelet Created container alpine Normal 
Started 56s kubelet Started container alpine Normal Pulling 56s kubelet Pulling image "nginx" Normal Pulled 55s kubelet Successfully pulled image "nginx" in 923ms (923ms including waiting) Normal Created 55s kubelet Created container nginx Normal Started 55s kubelet Started container nginx |
Using Secrets
Create a Secret that defines the variable password=secret. Create a
Deployment with the name secretapp, which starts the nginx image
and uses this variable.
[root@controller ~]# kubectl create secret -h | more
Create a secret with specified type.
...
Available Commands:
  docker-registry   Create a secret for use with a Docker registry
  generic           Create a secret from a local file, directory, or literal value
  tls               Create a TLS secret
...
[root@controller ~]# kubectl create secret generic -h | more
...
  # Create a new secret named my-secret with key1=supersecret and key2=topsecret
  kubectl create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret
...
[root@controller ~]# kubectl create secret generic secretpw --from-literal=password=secret
secret/secretpw created
[root@controller ~]# kubectl describe secrets secretpw
Name:         secretpw
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  6 bytes
[root@controller ~]# kubectl create deploy secretap --image=nginx
...
  kubectl set env --from=secret/mysecret deployment/myapp
...
[root@controller ~]# kubectl set env --from=secret/secretpw deployment/secretap
Warning: key password transferred to PASSWORD
deployment.apps/secretap env updated
[root@controller ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
secretap-7b54c85f6d-qq6ll   1/1     Running   0          28m
[root@controller ~]# kubectl exec secretap-7b54c85f6d-qq6ll -- env
...
PASSWORD=secret
...
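As an aside, instead of injecting the variable afterwards with kubectl set env, the same result can be declared in the Deployment manifest itself. A minimal sketch, assuming the secretpw Secret created above (note that envFrom keeps the key name as-is, so the variable would be password rather than PASSWORD):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secretapp
  labels:
    app: secretapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secretapp
  template:
    metadata:
      labels:
        app: secretapp
    spec:
      containers:
      - name: nginx
        image: nginx
        envFrom:
        # every key in the Secret becomes an environment variable
        - secretRef:
            name: secretpw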
Creating Custom Images
Create a Dockerfile that runs an alpine image with the command “echo
hello world” as the default command. Build the image, and export it in
OCI format to a tar file with the name “greetworld”.
[root@controller ~]# vim Dockerfile [root@controller ~]# cat Dockerfile FROM alpine CMD ["echo","hello world"] [root@controller ~]# docker build -t greetworld . [+] Building 3.1s (5/5) FINISHED docker:default => [internal] load build definition from Dockerfile 0.1s => => transferring dockerfile: 76B 0.0s => [internal] load metadata for docker.io/library/alpine:latest 2.1s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/1] FROM docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b 0.7s => => resolve docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b 0.0s => => sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b 1.64kB / 1.64kB 0.0s => => sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 528B / 528B 0.0s => => sha256:05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd 1.47kB / 1.47kB 0.0s => => sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 3.41MB / 3.41MB 0.3s => => extracting sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 0.3s => exporting to image 0.0s => => exporting layers 0.0s => => writing image sha256:27735e917715dde71cfbf396ed13edeeef9603c18bfbdf750c437956d9c9779a 0.0s => => naming to docker.io/library/greetworld 0.0s [root@controller ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE mysshd latest 88ace079c865 12 days ago 231MB myapache latest 3b7004c30a95 12 days ago 167MB namp latest 6d3d4ddc5fcc 12 days ago 387MB nmap latest 6d3d4ddc5fcc 12 days ago 387MB ubuntu latest 3db8720ecbf5 3 weeks ago 77.9MB mariadb latest 2f62d6fb2c8b 3 weeks ago 405MB localhost:5000/mariadb latest 2f62d6fb2c8b 3 weeks ago 405MB greetworld latest 27735e917715 5 weeks ago 7.38MB busybox latest 3f57d9401f8d 7 weeks ago 4.26MB httpd latest 2776f4da9d55 7 weeks ago 167MB [root@controller ~]# docker save --help Usage: docker save [OPTIONS] IMAGE [IMAGE...] Save one or more images to a tar archive (streamed to STDOUT by default) Aliases: docker image save, docker save Options: -o, --output string Write to a file, instead of STDOUT [root@controller ~]# docker save -o greetworld.tar greetworld [root@controller ~]# ls -l greet* -rw------- 1 root root 7677952 03-07 11:32 greetworld.tar |
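Note that docker save writes a Docker-format archive rather than a true OCI image layout. If the "OCI format" requirement is taken literally, two hedged alternatives (assuming podman or docker buildx is available on the build host) are:

# podman can export an image directly as an OCI archive
podman save --format oci-archive -o greetworld.tar greetworld

# docker buildx can emit an OCI tarball straight from the build
docker buildx build -t greetworld --output type=oci,dest=greetworld.tar .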
Using Sidecars
Create a Multi-container Pod with the name sidecar-pod, that runs in the
ckad-ns3 Namespace
- The primary container runs busybox and writes the output of the date command to the /var/log/date.log file every 5 seconds
- The second container should run as a sidecar and provide nginx web access to this file, using a hostPath shared volume (mounted on /usr/share/nginx/html)
- Make sure the image for this container is only pulled if it is not yet available on the local system (a consolidated manifest with this setting is sketched after the walkthrough below)
Go to the documentation
1. search -> Communicate Between Containers in the Same Pod
2. search: hostpath -> Configure a Pod to Use a PersistentVolume for Storage
[root@controller ~]# vi task156.yaml [root@controller ~]# cat task156.yaml apiVersion: v1 kind: Pod metadata: name: two-containers spec: restartPolicy: Never volumes: - name: shared-data emptyDir: {} containers: - name: nginx-container image: nginx volumeMounts: - name: shared-data mountPath: /usr/share/nginx/html - name: debian-container image: debian volumeMounts: - name: shared-data mountPath: /pod-data command: ["/bin/sh"] args: ["-c", "echo Hello from the debian container > /pod-data/index.html"] [root@controller ~]# vim task156.yaml [root@controller ~]# cat task156.yaml apiVersion: v1 kind: Pod metadata: name: sidecar-pod namespace: ckad-ns3 spec: restartPolicy: Never volumes: - name: shared-data hostPath: path: "/mydata" containers: - name: nginx-container image: nginx volumeMounts: - name: shared-data mountPath: /usr/share/nginx/html - name: busybox-container image: busybox volumeMounts: - name: shared-data mountPath: /var/log command: ["/bin/sh"] args: ["-c", "echo Hello from the debian container > /pod-data/index.html"] [root@controller ~]# kubectl run busybox --image=busybox --dry-run=client -o yaml -- sh -c "while sleep 5; do date >> /var/log/date.log; done" apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: busybox name: busybox spec: containers: - args: - sh - -c - while sleep 5; do date >> /var/log/date.log; done image: busybox name: busybox resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} [root@controller ~]# vim task156.yaml [root@controller ~]# cat task156.yaml apiVersion: v1 kind: Pod metadata: name: sidecar-pod namespace: ckad-ns3 spec: restartPolicy: Never volumes: - name: shared-data hostPath: path: "/mydata" containers: - name: nginx-container image: nginx volumeMounts: - name: shared-data mountPath: /usr/share/nginx/html - name: busybox-container image: busybox volumeMounts: - name: shared-data mountPath: /var/log args: - sh - -c - while sleep 5; do date >> /var/log/date.log; done [root@controller ~]# kubectl create ns ckad-ns3 namespace/ckad-ns3 created [root@controller ~]# kubectl create -f task156.yaml pod/sidecar-pod created [root@controller ~]# kubectl exec -it sidecar-pod -n ckad-ns3 sh kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Defaulted container "nginx-container" out of: nginx-container, busybox-container # cat /usr/share/nginx/html/date.log Fri Mar 8 09:04:09 UTC 2024 Fri Mar 8 09:04:14 UTC 2024 Fri Mar 8 09:04:19 UTC 2024 Fri Mar 8 09:04:24 UTC 2024 Fri Mar 8 09:04:29 UTC 2024 Fri Mar 8 09:04:34 UTC 2024 Fri Mar 8 09:04:39 UTC 2024 Fri Mar 8 09:04:44 UTC 2024 Fri Mar 8 09:04:49 UTC 2024 Fri Mar 8 09:04:54 UTC 2024 Fri Mar 8 09:04:59 UTC 2024 Fri Mar 8 09:05:04 UTC 2024 Fri Mar 8 09:05:09 UTC 2024 Fri Mar 8 09:05:14 UTC 2024 Fri Mar 8 09:05:19 UTC 2024 Fri Mar 8 09:05:24 UTC 2024 Fri Mar 8 09:05:29 UTC 2024 Fri Mar 8 09:05:34 UTC 2024 Fri Mar 8 09:05:39 UTC 2024 Fri Mar 8 09:05:44 UTC 2024 Fri Mar 8 09:05:49 UTC 2024 Fri Mar 8 09:05:54 UTC 2024 Fri Mar 8 09:05:59 UTC 2024 Fri Mar 8 09:06:04 UTC 2024 Fri Mar 8 09:06:09 UTC 2024 Fri Mar 8 09:06:14 UTC 2024 Fri Mar 8 09:06:19 UTC 2024 Fri Mar 8 09:06:24 UTC 2024 Fri Mar 8 09:06:29 UTC 2024 Fri Mar 8 09:06:34 UTC 2024 Fri Mar 8 09:06:39 UTC 2024 Fri Mar 8 09:06:44 UTC 2024 Fri Mar 8 09:06:49 UTC 2024 Fri Mar 8 09:06:54 UTC 2024 Fri Mar 8 09:06:59 UTC 2024 Fri Mar 8 09:07:04 UTC 2024 Fri Mar 8 09:07:09 UTC 2024 Fri Mar 8 09:07:14 UTC 2024 # exit |
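For reference, a consolidated version of the final manifest. The "only pull if not available locally" requirement maps to imagePullPolicy: IfNotPresent, which the edited file above does not set explicitly (images without a tag default to an Always pull policy), so adding it is a small assumption on top of the walkthrough:

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
  namespace: ckad-ns3
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    hostPath:
      path: "/mydata"
  containers:
  - name: busybox-container           # primary: appends the date every 5 seconds
    image: busybox
    args: ['sh', '-c', 'while sleep 5; do date >> /var/log/date.log; done']
    volumeMounts:
    - name: shared-data
      mountPath: /var/log
  - name: nginx-container             # sidecar: serves the shared file over HTTP
    image: nginx
    imagePullPolicy: IfNotPresent     # only pull if the image is not present locally
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html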
Fixing a Deployment
Start the Deployment from the redis.yaml file in the course Git repository.
Fix any problems that may occur while starting it.
[root@controller ckad]# cat redis.yaml
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas:
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
          name: redis
[root@controller ckad]# kubectl create -f redis.yaml
error: resource mapping not found for name: "redis" namespace: "" from "redis.yaml": no matches for kind "Deployment" in version "apps/v1beta1"
ensure CRDs are installed first
[root@controller ckad]# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
crd.projectcalico.org/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta2
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
stable.example.com/v1
storage.k8s.io/v1
v1
[root@controller ckad]# vim redis.yaml
[root@controller ckad]# kubectl create -f redis.yaml
deployment.apps/redis created
[root@controller ckad]# cat redis.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas:
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        ports:
        - containerPort: 6379
          name: redis
Using Probes
Create a Pod that runs the nginx webserver
- The webserver should offer its services on port 80 and run in the ckad-ns3 Namespace
- This Pod should check the /healthz path on the API server before starting the main container (an alternative using an initContainer is sketched after the walkthrough below)
Go to the documentation, search “healthz api” -> Kubernetes API health endpoints.
[root@controller ckad]# curl -k https://localhost:6443/livez?verbose [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok livez check passed [root@controller ckad]# curl -k https://localhost:8443/healthz?verbose curl: (7) Failed to connect to localhost port 8443: Połączenie odrzucone [root@controller ckad]# curl -k https://localhost:6443/healthz?verbose [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check passed [root@controller ckad]# echo $? 
0 [root@controller ckad]# ps aux | grep api root 3488764 0.0 0.0 12144 1108 pts/1 S+ 08:43 0:00 grep --color=auto api root 3687337 5.0 10.3 1630584 385488 ? Ssl lut29 577:17 kube-apiserver --advertise-address=172.30.9.25 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key [root@controller ckad]# kubectl get pods -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES calico-kube-controllers-658d97c59c-7vxhd 0/1 CrashLoopBackOff 2752 (53s ago) 7d21h 172.16.102.151 worker1.example.com <none> <none> calico-node-d2plz 0/1 CrashLoopBackOff 2237 (2m16s ago) 7d21h 172.30.9.27 worker2.example.com <none> <none> calico-node-vwl67 0/1 CrashLoopBackOff 2239 (2m58s ago) 7d21h 172.30.9.26 worker1.example.com <none> <none> calico-node-zgsx7 1/1 Running 0 7d21h 172.30.9.25 controller.example.com <none> <none> coredns-5dd5756b68-9hwls 0/1 CrashLoopBackOff 2200 (2m35s ago) 7d21h 172.16.102.152 worker1.example.com <none> <none> coredns-5dd5756b68-wwq8f 0/1 CrashLoopBackOff 2200 (16s ago) 7d21h 172.16.102.154 worker1.example.com <none> <none> etcd-controller.example.com 1/1 Running 6 7d21h 172.30.9.25 controller.example.com <none> <none> kube-apiserver-controller.example.com 1/1 Running 10 7d21h 172.30.9.25 controller.example.com <none> <none> kube-controller-manager-controller.example.com 1/1 Running 0 7d21h 172.30.9.25 controller.example.com <none> <none> kube-proxy-26j88 1/1 Running 3 7d21h 172.30.9.26 worker1.example.com <none> <none> kube-proxy-jprp5 1/1 Running 0 7d21h 172.30.9.25 controller.example.com <none> <none> kube-proxy-jz8hj 1/1 Running 3 7d21h 172.30.9.27 worker2.example.com <none> <none> kube-scheduler-controller.example.com 1/1 Running 202 7d21h 172.30.9.25 controller.example.com <none> <none> metrics-server-6db4d75b97-sd7jb 0/1 CrashLoopBackOff 909 (4s ago) 46h 172.16.71.220 worker2.example.com <none> <none> metrics-server-7697b55fbd-srpjn 0/1 CrashLoopBackOff 1074 (4m4s ago) 2d6h 172.16.102.153 worker1.example.com <none> <none> |
Go to the documentation, search “probes” -> Configure Liveness, Readiness and Startup Probes
[root@controller ckad]# vi task158.yaml [root@controller ckad]# cat task158.yaml apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-exec spec: containers: - name: liveness image: registry.k8s.io/busybox args: - /bin/sh - -c - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600 livenessProbe: exec: command: - cat - /tmp/healthy initialDelaySeconds: 5 periodSeconds: 5 [root@controller ckad]# kubectl explain pod.spec.containers.ports KIND: Pod VERSION: v1 FIELD: ports <[]ContainerPort> DESCRIPTION: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. ContainerPort represents a network port in a single container. FIELDS: containerPort <integer> -required- Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP <string> What host IP to bind the external port to. hostPort <integer> Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name <string> If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol <string> Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - `"SCTP"` is the SCTP protocol. - `"TCP"` is the TCP protocol. - `"UDP"` is the UDP protocol. [root@controller ckad]# vi task158.yaml [root@controller ckad]# cat task158.yaml apiVersion: v1 kind: Pod metadata: labels: test: readiness name: readiness-exec namespace: ckad-ns3 spec: containers: - name: nginx image: nginx - containerPort: 80 readinessProbe: exec: command: - curl - -k - https://localhost:6443/healthz initialDelaySeconds: 5 periodSeconds: 5 [root@controller ckad]# kubectl create -f task158.yaml error: error parsing task158.yaml: error converting YAML to JSON: yaml: line 11: did not find expected key [root@controller ckad]# vi task158.yaml [root@controller ckad]# cat task158.yaml apiVersion: v1 kind: Pod metadata: labels: test: readiness name: readiness-exec namespace: ckad-ns3 spec: containers: - name: nginx image: nginx ports: - containerPort: 80 readinessProbe: exec: command: - curl - "-k" - https://localhost:6443/healthz initialDelaySeconds: 5 periodSeconds: 5 [root@controller ckad]# kubectl get pods -n ckad-ns3 NAME READY STATUS RESTARTS AGE readiness-exec 0/1 Running 0 23m sidecar-pod 2/2 Running 0 6h12m |
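The readiness probe used above execs curl against https://localhost:6443/healthz from inside the nginx container, where no API server is listening, which is why the Pod never reports Ready. Since the task asks for the check to happen before the main container starts, an initContainer is one alternative. A hedged sketch, where the curlimages/curl image and anonymous access to /healthz via the kubernetes.default service are assumptions about this cluster:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-healthz
  namespace: ckad-ns3
spec:
  initContainers:
  - name: check-apiserver
    image: curlimages/curl
    # keep retrying until the API server reports healthy, then let nginx start
    command: ['sh', '-c', 'until curl -k -s https://kubernetes.default.svc/healthz; do sleep 2; done']
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80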
Creating a Deployment
Write a manifest file with the name nginx-exam.yaml that meets the
following requirements:
- It starts 5 replicas that run the nginx:1.18 image
- Each Pod has the label type=webshop
- Create the Deployment such that while updating, it can temporarily run 8 application instances at the same time, of which 3 should always be available
- The Deployment itself should use the label service=nginx
- Update the Deployment to the latest version of the nginx image
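The surge and availability requirements translate directly into the rollingUpdate strategy: with 5 replicas, allowing 8 instances during an update means maxSurge: 3, and keeping at least 3 instances available means maxUnavailable: 2. The relevant stanza, as used in the walkthrough below:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # 5 replicas + 3 surge = at most 8 instances
      maxUnavailable: 2    # 5 replicas - 2 unavailable = at least 3 available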
[root@controller ~]# kubectl create deploy nginx-exam --image=nginx:1.18 --dry-run=client -o yaml > nginx-exam.yaml [root@controller ~]# cat nginx-exam.yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: nginx-exam name: nginx-exam spec: replicas: 1 selector: matchLabels: app: nginx-exam strategy: {} template: metadata: creationTimestamp: null labels: app: nginx-exam spec: containers: - image: nginx:1.18 name: nginx resources: {} status: {} [root@controller ~]# vim nginx-exam.yaml [root@controller ~]# cat nginx-exam.yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: nginx-exam service: nginx name: nginx-exam spec: replicas: 5 selector: matchLabels: app: nginx-exam strategy: {} template: metadata: creationTimestamp: null labels: app: nginx-exam type: webshop spec: containers: - image: nginx:1.18 name: nginx resources: {} status: {} [root@controller ~]# kubectl explain --recursive deployment.spec.strategy ... FIELDS: rollingUpdate <RollingUpdateDeployment> maxSurge <IntOrString> maxUnavailable <IntOrString> type <string> [root@controller ~]# vim nginx-exam.yaml [root@controller ~]# cat nginx-exam.yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: nginx-exam service: nginx name: nginx-exam spec: replicas: 5 selector: matchLabels: app: nginx-exam strategy: rollingUpdate: maxSurge: 3 maxUnavailable: 2 template: metadata: creationTimestamp: null labels: app: nginx-exam type: webshop spec: containers: - image: nginx:1.18 name: nginx resources: {} status: {} [root@controller ~]# kubectl create -f nginx-exam.yaml deployment.apps/nginx-exam created [root@controller ~]# kubectl get all --selector app=nginx-exam NAME READY STATUS RESTARTS AGE pod/nginx-exam-548d9c4767-gbrll 1/1 Running 0 25s pod/nginx-exam-548d9c4767-hc7qz 1/1 Running 0 25s pod/nginx-exam-548d9c4767-mrlh2 1/1 Running 0 25s pod/nginx-exam-548d9c4767-n88jj 1/1 Running 0 25s pod/nginx-exam-548d9c4767-v6qrx 1/1 Running 0 25s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-exam 5/5 5 5 25s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-exam-548d9c4767 5 5 5 25s [root@controller ~]# kubectl set image -h | more Update existing container image(s) of resources. Possible resources include (case insensitive): pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), statefulset (sts), cronjob (cj), replicaset (rs) Examples: # Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox' kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1 ... 
[root@controller ~]# kubectl set image deployment/nginx-exam nginx=nginx:latest deployment.apps/nginx-exam image updated [root@controller ~]# kubectl get all --selector app=nginx-exam NAME READY STATUS RESTARTS AGE pod/nginx-exam-67d9b7fc84-f5xzt 1/1 Running 0 2s pod/nginx-exam-67d9b7fc84-sdw5v 1/1 Running 0 2s pod/nginx-exam-67d9b7fc84-wzxd6 1/1 Running 0 2s pod/nginx-exam-67d9b7fc84-xxzvv 0/1 ContainerCreating 0 2s pod/nginx-exam-67d9b7fc84-z2257 0/1 ContainerCreating 0 2s pod/nginx-exam-79947bcdb8-24xs8 1/1 Terminating 0 100s pod/nginx-exam-79947bcdb8-6dprv 1/1 Running 0 100s pod/nginx-exam-79947bcdb8-p2kkz 1/1 Terminating 0 100s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-exam 3/5 5 3 27m NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-exam-548d9c4767 0 0 0 27m replicaset.apps/nginx-exam-67d9b7fc84 5 5 3 2m47s replicaset.apps/nginx-exam-79947bcdb8 0 1 1 100s [root@controller ~]# kubectl get all --selector app=nginx-exam NAME READY STATUS RESTARTS AGE pod/nginx-exam-67d9b7fc84-f5xzt 1/1 Running 0 7s pod/nginx-exam-67d9b7fc84-sdw5v 1/1 Running 0 7s pod/nginx-exam-67d9b7fc84-wzxd6 1/1 Running 0 7s pod/nginx-exam-67d9b7fc84-xxzvv 1/1 Running 0 7s pod/nginx-exam-67d9b7fc84-z2257 1/1 Running 0 7s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-exam 5/5 5 5 27m NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-exam-548d9c4767 0 0 0 27m replicaset.apps/nginx-exam-67d9b7fc84 5 5 5 2m52s replicaset.apps/nginx-exam-79947bcdb8 0 0 0 105s |
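To confirm that the image update rolled out completely, a few optional checks (not shown in the output above) could be:

kubectl rollout status deployment/nginx-exam
kubectl rollout history deployment/nginx-exam
kubectl get deployment nginx-exam -o jsonpath='{.spec.template.spec.containers[0].image}'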
Exposing Applications
In the ckad-ns6 Namespace, create a Deployment that runs the nginx 1.19
image and give it the name nginx-deployment
- Ensure it runs 3 replicas
- After verifying that the Deployment runs successfully, expose it such that users that are external to the cluster can reach it by addressing the Node Port 32000 on the Kubernetes Cluster node
- Configure Ingress to access the application at mynginx.info
[root@controller ~]# kubectl create ns ckad-ns6
namespace/ckad-ns6 created
[root@controller ~]# kubectl create deploy nginx-deployment --image=nginx:1.19 --replicas=3 -n ckad-ns6
deployment.apps/nginx-deployment created
[root@controller ~]# kubectl expose -h
...
  # Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000
  kubectl expose deployment nginx --port=80 --target-port=8000
...
[root@controller ~]# kubectl expose deployment -n ckad-ns6 nginx-deployment --port=80
service/nginx-deployment exposed
[root@controller ~]# kubectl edit svc nginx-deployment -n ckad-ns6
Change:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
  sessionAffinity: None
  type: ClusterIP
To:
  - port: 80
    nodePort: 32000
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment
  sessionAffinity: None
  type: NodePort
And now:
[root@controller ~]# kubectl get svc -n ckad-ns6
NAME               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-deployment   NodePort   10.110.142.232   <none>        80:32000/TCP   13m
[root@controller ~]# kubectl create ingress -h | more
...
  # Create an ingress with the same host and multiple paths
  kubectl create ingress multipath --class=default \
    --rule="foo.com/=svc:port" \
    --rule="foo.com/admin/=svcadmin:portadmin"
...
[root@controller ~]# kubectl create ingress nginxdeploy --class=default --rule="mynginx.info/=nginx-deployment:80"
ingress.networking.k8s.io/nginxdeploy created
[root@controller ~]# kubectl delete ingress nginxdeploy
ingress.networking.k8s.io "nginxdeploy" deleted
[root@controller ~]# kubectl create ingress nginxdeploy --class=default --rule="mynginx.info/=nginx-deployment:80" -n ckad-ns6
ingress.networking.k8s.io/nginxdeploy created
[root@controller ~]# kubectl get ingress -n ckad-ns6
NAME          CLASS     HOSTS          ADDRESS   PORTS   AGE
nginxdeploy   default   mynginx.info             80      26s
[root@controller ~]# vim /etc/hosts
[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.30.9.25 controller controller.example.com
172.30.9.26 worker1 worker1.example.com mynginx.info
172.30.9.27 worker2 worker2.example.com
[root@controller ~]# kubectl get pod
No resources found in default namespace.
[root@controller ~]# kubectl get pod -n ckad-ns6
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-799c656846-68gt5   1/1     Running   0          21m
nginx-deployment-799c656846-8l78n   1/1     Running   0          21m
nginx-deployment-799c656846-dc6l8   1/1     Running   0          21m
[root@controller ~]# kubectl get pod -n ckad-ns6 -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE                  NOMINATED NODE   READINESS GATES
nginx-deployment-799c656846-68gt5   1/1     Running   0          21m   172.16.102.171   worker1.example.com   <none>           <none>
nginx-deployment-799c656846-8l78n   1/1     Running   0          21m   172.16.71.241    worker2.example.com   <none>           <none>
nginx-deployment-799c656846-dc6l8   1/1     Running   0          21m   172.16.71.242    worker2.example.com   <none>           <none>
[root@controller ~]# ping mynginx.info
PING worker1 (172.30.9.26) 56(84) bytes of data.
64 bytes from worker1 (172.30.9.26): icmp_seq=1 ttl=64 time=0.423 ms
64 bytes from worker1 (172.30.9.26): icmp_seq=2 ttl=64 time=0.350 ms
^C
--- worker1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1012ms
rtt min/avg/max/mdev = 0.350/0.386/0.423/0.041 ms
[root@controller ~]# curl mynginx.info
curl: (7) Failed to connect to mynginx.info port 80: No route to host
[root@controller ~]# curl 172.30.9.26
curl: (7) Failed to connect to 172.30.9.26 port 80: No route to host
[root@controller ~]# curl 10.110.142.232
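The empty ADDRESS column on the Ingress and the failing curl on port 80 suggest that no Ingress controller is actually serving this Ingress on that node. A hedged troubleshooting sketch (assuming an ingress-nginx-style controller would be the missing piece) is to check for a controller and to test the application directly through the NodePort configured earlier:

kubectl get pods -A | grep -i ingress     # is any ingress controller running at all?
curl http://172.30.9.26:32000             # reach the Deployment directly through NodePort 32000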
Using NetworkPolicies
Create a YAML file with the name my-nw-policy that runs two Pods and a
NetworkPolicy
- The first Pod should run an Nginx server with default settings
- The second Pod should run a busybox image with the sleep 3600 command
- Use a NetworkPolicy to restrict traffic between Pods in the following way:
- Access to the nginx server is allowed for the busybox Pod
- The busybox Pod is not restricted in any way
Go to the documentation, search: networkpolicy -> Network Policies
[root@controller ~]# vim my-nw-policy.yaml
[root@controller ~]# cat my-nw-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Go to the documentation, search: pods -> Pods and copy simple-pod.yaml
[root@controller ~]# vi my-nw-policy.yaml [root@controller ~]# cat my-nw-policy.yaml apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: default spec: podSelector: matchLabels: role: db policyTypes: - Ingress - Egress ingress: - from: - ipBlock: cidr: 172.17.0.0/16 except: - 172.17.1.0/24 - namespaceSelector: matchLabels: project: myproject - podSelector: matchLabels: role: frontend ports: - protocol: TCP port: 6379 egress: - to: - ipBlock: cidr: 10.0.0.0/24 ports: - protocol: TCP port: 5978 [root@controller ~]# vim my-nw-policy.yaml [root@controller ~]# cat my-nw-policy.yaml apiVersion: v1 kind: Pod metadata: name: nwp-nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Pod metadata: name: nwp-busybox spec: containers: - name: busybox image: busybox ports: - containerPort: 80 --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: default spec: podSelector: matchLabels: role: db policyTypes: - Ingress - Egress ingress: - from: - ipBlock: cidr: 172.17.0.0/16 except: - 172.17.1.0/24 - namespaceSelector: matchLabels: project: myproject - podSelector: matchLabels: role: frontend ports: - protocol: TCP port: 6379 egress: - to: - ipBlock: cidr: 10.0.0.0/24 ports: - protocol: TCP port: 5978 [root@controller ~]# kubectl run busybox --image=busybox --dry-run=client -o yaml -- sleep 3600 apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: busybox name: busybox spec: containers: - args: - sleep - "3600" image: busybox name: busybox resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} [root@controller ~]# cat my-nw-policy.yaml apiVersion: v1 kind: Pod metadata: name: nwp-nginx labels: app: nginx spec: containers: - name: nginx image: nginx --- apiVersion: v1 kind: Pod metadata: name: nwp-busybox labels: access: allowed spec: containers: - name: busybox image: busybox args: - sleep - "3600" --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: default spec: podSelector: matchLabels: app: nginx policyTypes: - Ingress ingress: - from: - podSelector: matchLabels: role: frontend [root@controller ~]# kubectl create -f my-nw-policy.yaml pod/nwp-nginx created pod/nwp-busybox created networkpolicy.networking.k8s.io/test-network-policy created [root@controller ~]# kubectl expose pod nwp-nginx --port=80 service/nwp-nginx exposed [root@controller ~]# kubectl get all NAME READY STATUS RESTARTS AGE pod/nwp-busybox 1/1 Running 0 58s pod/nwp-nginx 1/1 Running 0 58s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d service/nwp-nginx ClusterIP 10.101.186.122 <none> 80/TCP 8s [root@controller ~]# kubectl exec -it busybox -- wget --spider --timeout=1 nginx Error from server (NotFound): pods "busybox" not found [root@controller ~]# kubectl exec -it nwp-busybox -- wget --spider --timeout=1 nginx wget: bad address 'nginx' command terminated with exit code 1 [root@controller ~]# kubectl exec -it nwp-busybox -- wget --spider --timeout=1 10.101.186.122 Connecting to 10.101.186.122 (10.101.186.122:80) wget: download timed out command terminated with exit code 1 [root@controller ~]# kubectl 
label pod nwp-busybox role=frontend pod/nwp-busybox labeled [root@controller ~]# kubectl exec -it nwp-busybox -- wget --spider --timeout=1 10.101.186.122 Connecting to 10.101.186.122 (10.101.186.122:80) wget: download timed out command terminated with exit code 1 |
Using Storage
All objects in this assignment should be created in the ckad-1311 Namespace.
- Create a PersistentVolume with the name 1311-pv. It should provide 2 GiB of storage and read/write access to multiple clients simultaneously. Use the hostPath storage type
- Next, create a PersistentVolumeClaim that requests 1 GiB from any PersistentVolume that allows multiple clients simultaneous read/write access. The name of the object should be 1311-pvc
- Finally, create a Pod with the name 1311-pod that uses this PersistentVolume. It should run an nginx image and mount the volume on the directory /webdata
Go to the documentation, search: persistent volume -> Configure a Pod to Use a PersistentVolume for Storage -> Create a PersistentVolume
[root@controller ~]# vi task1512.yaml [root@controller ~]# cat task1512.yaml apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi --- apiVersion: v1 kind: Pod metadata: name: task-pv-pod spec: volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim containers: - name: task-pv-container image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - mountPath: "/usr/share/nginx/html" name: task-pv-storage [root@controller ~]# vi task1512.yaml [root@controller ~]# cat task1512.yaml apiVersion: v1 kind: PersistentVolume metadata: name: 1312-pv namespace: ckad-1312 labels: type: local spec: storageClassName: manual capacity: storage: 2Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: 1312-pvc namespace: ckad-1312 spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 1Gi --- apiVersion: v1 kind: Pod metadata: name: 1312-pod namespace: ckad-1312 spec: volumes: - name: task-pv-storage persistentVolumeClaim: claimName: 1312-pvc containers: - name: task-pv-container image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - mountPath: "/webdata" name: task-pv-storage [root@controller ~]# kubectl create ns ckad-1312 namespace/ckad-1312 created [root@controller ~]# kubectl create -f task1512.yaml persistentvolume/1312-pv created persistentvolumeclaim/1312-pvc created pod/1312-pod created [root@controller ~]# kubectl get pods,pvc,pv -n ckad-1312 NAME READY STATUS RESTARTS AGE pod/1312-pod 1/1 Running 0 76s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/1312-pvc Bound 1312-pv 2Gi RWX manual 76s NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/1312-pv 2Gi RWX Retain Bound ckad-1312/1312-pvc manual 76s [root@controller ~]# kubectl exec -n ckad-1312 -it 1312-pod -- touch /webdata/testfile [root@controller ~]# ls /mnt/data ls: nie ma dostępu do '/mnt/data': Nie ma takiego pliku ani katalogu [root@controller ~]# kubectl get pods -n ckad-1312 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 1312-pod 1/1 Running 0 69m 172.16.71.247 worker2.example.com <none> <none> [root@controller ~]# ssh root@worker2 Last login: Sat Mar 9 02:20:36 2024 from 10.8.152.84 [root@worker2 ~]# ls /mnt/data testfile [root@worker2 ~]# wylogowanie Connection to worker2 closed. |
Using Quota
Create a Namespace with the name limited, in which 5 Pods can be started and a total of 1000 millicores of CPU and 2 GiB of RAM is available.
Run a Deployment with the name restrict-nginx in this Namespace, with 3 Pods, where every Pod initially requests 64 MiB of RAM, with an upper limit of 256 MiB of RAM.
Note: 1000 millicores equal 1 CPU.
[root@controller ~]# kubectl create ns limited [root@controller ~]# kubectl create quota -h | more Create a resource quota with the specified name, hard limits, and optional scopes. ... # Create a new resource quota named my-quota kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 ... [root@controller ~]# # kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 [root@controller ~]# kubectl create quota limitedquota -n limited --hard=cpu=1,memory=2G,pods=5 resourcequota/limitedquota created [root@controller ~]# kubectl describe ns limited Name: limited Labels: kubernetes.io/metadata.name=limited Annotations: <none> Status: Active Resource Quotas Name: limitedquota Resource Used Hard -------- --- --- cpu 0 1 memory 0 2G pods 0 5 No LimitRange resource. [root@controller ~]# kubectl create deploy restrict-nginx --replicas=3 --image=nginx -n limited deployment.apps/restrict-nginx created [root@controller ~]# kubectl set resuorces -h | more ... # Set the resource request and limits for all containers in nginx kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi ... [root@controller ~]# # kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi [root@controller ~]# kubectl set resources deployment restrict-nginx --limits=memory=256Mi --requests=memory=64Mi -n limited deployment.apps/restrict-nginx resource requirements updated [root@controller ~]# kubectl describe ns limited Name: limited Labels: kubernetes.io/metadata.name=limited Annotations: <none> Status: Active Resource Quotas Name: limitedquota Resource Used Hard -------- --- --- cpu 0 1 memory 0 2G pods 0 5 No LimitRange resource. 
[root@controller ~]# kubectl get all -n limited NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/restrict-nginx 0/3 0 0 3m25s NAME DESIRED CURRENT READY AGE replicaset.apps/restrict-nginx-67f4786c7 1 0 0 61s replicaset.apps/restrict-nginx-857b64fb78 3 0 0 3m25s [root@controller ~]# kubectl describe replicaset.apps/restrict-nginx-857b64fb78 -n limited Name: restrict-nginx-857b64fb78 Namespace: limited Selector: app=restrict-nginx,pod-template-hash=857b64fb78 Labels: app=restrict-nginx pod-template-hash=857b64fb78 Annotations: deployment.kubernetes.io/desired-replicas: 3 deployment.kubernetes.io/max-replicas: 4 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/restrict-nginx Replicas: 0 current / 3 desired Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=restrict-nginx pod-template-hash=857b64fb78 Containers: nginx: Image: nginx Port: <none> Host Port: <none> Environment: <none> Mounts: <none> Volumes: <none> Conditions: Type Status Reason ---- ------ ------ ReplicaFailure True FailedCreate Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-ttdnd" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-xzbqr" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-hjnvg" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-khf8q" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-q6bg4" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-qvmps" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m20s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-x4jnp" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m19s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-n8f5c" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 4m19s replicaset-controller Error creating: pods "restrict-nginx-857b64fb78-hfd27" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx Warning FailedCreate 96s (x7 over 4m17s) replicaset-controller (combined from similar events): Error creating: pods "restrict-nginx-857b64fb78-gfnhk" is forbidden: failed quota: limitedquota: must specify cpu for: nginx; memory for: nginx [root@controller ~]# [root@controller ~]# # kubectl set resources deployment nginx --limits=cpu=1 --requests=cpu=1 -n limited [root@controller ~]# kubectl set resources deployment nginx --limits=cpu=1 --requests=cpu=1 -n limited Error from server (NotFound): deployments.apps "nginx" not found [root@controller ~]# kubectl set resources deployment restrict-nginx --limits=cpu=1 --requests=cpu=1 -n limited deployment.apps/restrict-nginx resource 
requirements updated [root@controller ~]# kubectl describe ns limited Name: limited Labels: kubernetes.io/metadata.name=limited Annotations: <none> Status: Active Resource Quotas Name: limitedquota Resource Used Hard -------- --- --- cpu 1 1 memory 64Mi 2G pods 1 5 [root@controller ~]# kubectl set resuorces -h | more ... # Set the resource request and limits for all containers in nginx kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi ... [root@controller ~]# kubectl create deploy restrict-nginx --replicas=3 --image=nginx -n limited deployment.apps/restrict-nginx created [root@controller ~]# kubectl set resources deployment restrict-nginx --limits=cpu=200m,memory=256Mi --requests=cpu=200m,memory=64Mi -n limited deployment.apps/restrict-nginx resource requirements updated [root@controller ~]# kubectl describe ns limited Name: limited Labels: kubernetes.io/metadata.name=limited Annotations: <none> Status: Active Resource Quotas Name: limitedquota Resource Used Hard -------- --- --- cpu 600m 1 memory 192Mi 2G pods 3 5 No LimitRange resource. |
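Collecting the retries above into one working sequence: the ResourceQuota forces every Pod in the Namespace to declare both CPU and memory, so the Deployment only schedules once requests and limits for both resources are set (values as used above):

kubectl create ns limited
kubectl create quota limitedquota -n limited --hard=cpu=1,memory=2G,pods=5
kubectl create deploy restrict-nginx --image=nginx --replicas=3 -n limited
kubectl set resources deploy restrict-nginx -n limited \
  --requests=cpu=200m,memory=64Mi --limits=cpu=200m,memory=256Mi
kubectl get pods -n limited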
Creating Canary Deployments
Run a Deployment with the name myweb, using the nginx:1.14 image and 3
replicas. Ensure this Deployment is accessible through a Service with the name
canary, which uses the NodePort Service type.
Update the Deployment to the latest version of nginx, using the canary Deployment update strategy, in such a way that 40% of the application instances serve the updated application and 60% still serve the old application.
[root@controller ~]# kubectl create deploy myweb-old --image=nginx:1.14 --replicas=3
deployment.apps/myweb created
[root@controller ~]# kubectl edit deploy myweb-old
Add label type: canary:
  labels:
    app: myweb-old
  name: myweb-old
  namespace: default
  resourceVersion: "1249762"
  uid: b42e79fa-a7a2-4b5c-b3ee-f6cccd3b7d1b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myweb-old
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myweb-old
To:
  labels:
    app: myweb-old
    type: canary
  name: myweb
  namespace: default
  resourceVersion: "1249762"
  uid: b42e79fa-a7a2-4b5c-b3ee-f6cccd3b7d1b
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myweb-old
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myweb-old
        type: canary
And now:
[root@controller ~]# kubectl get all --selector type=canary
NAME                           READY   STATUS    RESTARTS   AGE
pod/myweb-old-c6d768fc-h8pf6   1/1     Running   0          5m3s
pod/myweb-old-c6d768fc-rcfnx   1/1     Running   0          5m6s
pod/myweb-old-c6d768fc-srs7g   1/1     Running   0          5m5s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myweb-old   3/3     3            3           18m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/myweb-old-c6d768fc   3         3         3       5m6s
[root@controller ~]# kubectl expose deploy myweb-old --name=myweb --selector type=canary --port=80
service/myweb exposed
[root@controller ~]# kubectl describe svc myweb
Name:              myweb
Namespace:         default
Labels:            app=myweb-old
                   type=canary
Annotations:       <none>
Selector:          type=canary
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.105.145.29
IPs:               10.105.145.29
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.16.102.182:80,172.16.71.251:80,172.16.71.252:80
Session Affinity:  None
Events:            <none>
[root@controller ~]# kubectl create deploy myweb-new --image=nginx --replicas=2
deployment.apps/myweb-new created
[root@controller ~]# kubectl edit deploy myweb-new
Add type: canary label:
  labels:
    app: myweb-new
  name: myweb-new
  namespace: default
  resourceVersion: "1252010"
  uid: ff3655ab-dbae-41ae-a92d-05e4b488e3f4
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myweb-new
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myweb-new
To:
  labels:
    app: myweb-new
    type: canary
  name: myweb-new
  namespace: default
  resourceVersion: "1252010"
  uid: ff3655ab-dbae-41ae-a92d-05e4b488e3f4
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: myweb-new
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myweb-new
        type: canary
And now we have three replicas of the old version and two replicas of the new version, so two of the five endpoints behind the myweb Service (40%) point to the updated application:
[root@controller ~]# kubectl describe svc myweb
Name:              myweb
Namespace:         default
Labels:            app=myweb
                   type=canary
Annotations:       <none>
Selector:          type=canary
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.105.145.29
IPs:               10.105.145.29
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         172.16.102.182:80,172.16.102.184:80,172.16.71.251:80 + 2 more...
Session Affinity:  None
Events:            <none>
[root@controller ~]# kubectl get endpoints
NAME         ENDPOINTS                                                           AGE
kubernetes   172.30.9.25:6443                                                    9d
myweb        172.16.102.182:80,172.16.102.184:80,172.16.71.251:80 + 2 more...   15m
Managing Pod Permissions
Create a Pod manifest file to run a Pod with the name sleepybox. It should run
the latest version of busybox, with the sleep 3600 command as the default
command. Ensure the primary Pod user is a member of the supplementary
group 2000 while this Pod is started.
[root@controller ~]# kubectl run sleepybox --image=busybox --dry-run=client -o yaml -- sleep 3600 > task1515.yaml [root@controller ~]# cat task1515.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: sleepybox name: sleepybox spec: containers: - args: - sleep - "3600" image: busybox name: sleepybox resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} [root@controller ~]# kubectl explain pod.spec.securityContext .. FIELDS: fsGroup <integer> A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: ... [root@controller ~]# vim task1515.yaml [root@controller ~]# cat task1515.yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: sleepybox name: sleepybox spec: containers: - args: - sleep - "3600" image: busybox name: sleepybox resources: {} dnsPolicy: ClusterFirst securityContext: fsGroup: 2000 restartPolicy: Always status: {} [root@controller ~]# kubectl create -f task1515.yaml pod/sleepybox created [root@controller ~]# kubectl get pods sleepybox -o yaml apiVersion: v1 kind: Pod metadata: annotations: cni.projectcalico.org/containerID: 81507fea72e2a315dd543f6103689a7db6d41d2bf0d4dc6645d97d0919d37fc9 cni.projectcalico.org/podIP: 172.16.71.255/32 cni.projectcalico.org/podIPs: 172.16.71.255/32 creationTimestamp: "2024-03-09T21:16:57Z" labels: run: sleepybox name: sleepybox namespace: default resourceVersion: "1254387" uid: ba0a9671-41f4-4fd6-89b9-032ecd44c3d0 spec: containers: - args: - sleep - "3600" image: busybox imagePullPolicy: Always name: sleepybox resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-cmnjt readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: worker2.example.com preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 2000 serviceAccount: default ... |
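The walkthrough uses fsGroup, which applies group 2000 to volumes owned by the Pod. If the intent is strictly that the primary Pod user is a member of supplementary group 2000, the supplementalGroups field expresses that more directly; a minimal sketch of the relevant part of the Pod spec:

spec:
  securityContext:
    supplementalGroups:      # extra group memberships for the first process in each container
    - 2000
  containers:
  - name: sleepybox
    image: busybox
    args:
    - sleep
    - "3600"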
Using a ServiceAccount
Create a Pod with the name allaccess. Also create a ServiceAccount with the
name allaccess and ensure that the Pod is using the ServiceAccount. Notice
that no further RBAC setup is required.
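A minimal sketch of one possible solution (the busybox image and the sleep command are arbitrary choices, since the task does not specify them):

kubectl create serviceaccount allaccess
kubectl run allaccess --image=busybox --dry-run=client -o yaml -- sleep 3600 > allaccess.yaml
# edit allaccess.yaml and add under spec:
#   serviceAccountName: allaccess
kubectl create -f allaccess.yaml
kubectl get pod allaccess -o yaml | grep serviceAccount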