Creating a Kubernetes Cluster
- Create a 3-node Kubernetes cluster, using one control plane node and 2
worker nodes.
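No transcript is shown for this task. The commands below are a minimal sketch of a kubeadm-based setup, assuming a container runtime and the kubeadm, kubelet and kubectl packages are already installed on all three nodes; the network plugin manifest and all addresses, tokens and hashes are placeholders.

# On the control plane node
[root@controller ~]# kubeadm init --pod-network-cidr=192.168.0.0/16
[root@controller ~]# mkdir -p $HOME/.kube
[root@controller ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Install a Pod network add-on such as Calico (the manifest URL depends on the plugin and version you choose)
[root@controller ~]# kubectl apply -f <network-plugin-manifest.yaml>
# On each worker node, run the join command printed by kubeadm init
[root@worker1 ~]# kubeadm join <controller-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Back on the controller, verify
[root@controller ~]# kubectl get nodes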
Scheduling a Pod
- Schedule a Pod with the name lab123 that runs the Nginx and Redis applications.
[root@controller ~]# kubectl run lab123 --image=nginx --dry-run=client -o yaml > lab123.yaml
[root@controller ~]# cat lab123.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: lab123
  name: lab123
spec:
  containers:
  - image: nginx
    name: lab123
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@controller ~]# cat lab123.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: lab123
  name: lab123
spec:
  containers:
  - image: nginx
    name: lab123
    resources: {}
  - image: redis
    name: redis
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@controller ~]# kubectl apply -f lab123.yaml
pod/lab123 created
[root@controller ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
lab123   2/2     Running   0          6s    192.168.0.155   worker1.example.com   <none>           <none>
Managing Application Initialization
- Create a deployment with the name lab124deploy which runs the Nginx image,
but waits 30 seconds before starting the actual Pods.
[root@controller ~]# kubectl create deploy lab124deploy --image=busybox --dry-run=client -o yaml -- sleep 30 > lab124deploy.yaml
[root@controller ~]# cat lab124deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lab124deploy
  name: lab124deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lab124deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: lab124deploy
    spec:
      containers:
      - command:
        - sleep
        - "30"
        image: busybox
        name: busybox
        resources: {}
status: {}
Go to the kubernetes documentation page -> search: init container -> copy the example from "Init Containers in use":
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
Modify lab124deploy.yaml so that it looks like this:
[root@controller ~]# cat lab124deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: lab124deploy
  name: lab124deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lab124deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: lab124deploy
    spec:
      containers:
      - name: nginx
        image: nginx
      initContainers:
      - command:
        - sleep
        - "30"
        image: busybox
        name: busybox
        resources: {}
status: {}
[root@controller ~]# kubectl apply -f lab124deploy.yaml
deployment.apps/lab124deploy created
[root@controller ~]# kubectl get pods
NAME                            READY   STATUS     RESTARTS   AGE
lab123                          2/2     Running    0          18m
lab124deploy-7c7c8457f9-lclk4   0/1     Init:0/1   0          7s
[root@controller ~]# kubectl get pods
NAME                            READY   STATUS            RESTARTS   AGE
lab123                          2/2     Running           0          19m
lab124deploy-7c7c8457f9-lclk4   0/1     PodInitializing   0          35s
[root@controller ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
lab123                          2/2     Running   0          19m
lab124deploy-7c7c8457f9-lclk4   1/1     Running   0          42s
Setting up Persistent Storage
- Create a PersistentVolume with the name lab125 that uses HostPath on the directory /lab125.
Go to Kubernetes documentation: persistent volume -> Configure a Pod to Use a PersistentVolume for Storage -> Create a PersistentVolume
[root@controller ~]# vi lab125.yaml
[root@controller ~]# cat lab125.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: lab125
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/lab125"
[root@controller ~]# kubectl apply -f lab125.yaml
persistentvolume/lab125 created
[root@controller ~]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
lab125   10Gi       RWO            Retain           Available           manual                  9s
[root@controller ~]# kubectl describe pv lab125
Name:            lab125
Labels:          type=local
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /lab125
    HostPathType:
Events:            <none>
Configuring Application Access
- Create a Deployment with the name lab126deploy, running 3 instances of the Nginx image.
- Configure it such that it can be accessed by external users on port 32567 on each cluster node.
[root@controller ~]# kubectl create deployment lab126deploy --image=nginx --replicas=3
deployment.apps/lab126deploy created
[root@controller ~]# kubectl expose deployment lab126deploy --port=80
service/lab126deploy exposed
[root@controller ~]# kubectl get all --selector app=lab126deploy
NAME                               READY   STATUS    RESTARTS   AGE
pod/lab126deploy-fff46cd4b-4drk6   1/1     Running   0          51s
pod/lab126deploy-fff46cd4b-lhmfs   1/1     Running   0          51s
pod/lab126deploy-fff46cd4b-zw5fq   1/1     Running   0          51s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/lab126deploy   ClusterIP   10.105.103.37   <none>        80/TCP    34s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab126deploy   3/3     3            3           51s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/lab126deploy-fff46cd4b   3         3         3       51s
[root@controller ~]# kubectl explain service.spec.ports
...
  nodePort      <integer>
    The port on each node on which this service is exposed when type is NodePort
    or LoadBalancer. Usually assigned by the system. If a value is specified,
    in-range, and not in use it will be used, otherwise the operation will fail.
    If not specified, a port will be allocated if this Service requires one. If
    this field is specified when creating a Service which does not need it,
    creation will fail. This field will be wiped when updating a Service to no
    longer need it (e.g. changing type from NodePort to ClusterIP). More info:
    https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
...
[root@controller ~]# kubectl edit svc lab126deploy
Edit the svc:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-02-18T12:07:17Z"
  labels:
    app: lab126deploy
  name: lab126deploy
  namespace: default
  resourceVersion: "495475"
  uid: 591535a4-24ba-406c-8f37-cb0d2e594ba3
spec:
  clusterIP: 10.105.103.37
  clusterIPs:
  - 10.105.103.37
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: lab126deploy
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
After svc has been edited:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-02-18T12:07:17Z"
  labels:
    app: lab126deploy
  name: lab126deploy
  namespace: default
  resourceVersion: "495475"
  uid: 591535a4-24ba-406c-8f37-cb0d2e594ba3
spec:
  clusterIP: 10.105.103.37
  clusterIPs:
  - 10.105.103.37
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 32567
  selector:
    app: lab126deploy
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Let’s check:
[root@controller ~]# kubectl get all --selector app=lab126deploy
NAME                               READY   STATUS    RESTARTS   AGE
pod/lab126deploy-fff46cd4b-4drk6   1/1     Running   0          75m
pod/lab126deploy-fff46cd4b-lhmfs   1/1     Running   0          75m
pod/lab126deploy-fff46cd4b-zw5fq   1/1     Running   0          75m

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/lab126deploy   NodePort   10.105.103.37   <none>        80:32567/TCP   75m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab126deploy   3/3     3            3           75m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/lab126deploy-fff46cd4b   3         3         3       75m
[root@controller ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        2d21h
lab126deploy   NodePort    10.105.103.37   <none>        80:32567/TCP   75m
[root@controller ~]# kubectl describe svc lab126deploy
Name:                     lab126deploy
Namespace:                default
Labels:                   app=lab126deploy
Annotations:              <none>
Selector:                 app=lab126deploy
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.103.37
IPs:                      10.105.103.37
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32567/TCP
Endpoints:                192.168.0.157:80,192.168.0.158:80,192.168.0.159:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
[root@controller ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
lab123                          2/2     Running   0          23h   192.168.0.155   worker1.example.com   <none>           <none>
lab124deploy-7c7c8457f9-lclk4   1/1     Running   0          23h   192.168.0.156   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-4drk6    1/1     Running   0          88m   192.168.0.157   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-lhmfs    1/1     Running   0          88m   192.168.0.159   worker1.example.com   <none>           <none>
lab126deploy-fff46cd4b-zw5fq    1/1     Running   0          88m   192.168.0.158   worker1.example.com   <none>           <none>
[root@controller ~]# curl worker1.example.com:32567
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@controller ~]# curl worker2.example.com:32567
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Securing Network Traffic
- Create a Namespace with the name restricted, and configure it such that it only allows access to Pods exposing port 80 for Pods coming from the Namespace access.
Go to the kubernetes documentation page -> search: network policy -> Network Policies -> The NetworkPolicy resource
[root@controller ~]# kubectl create ns restricted
namespace/restricted created
[root@controller ~]# kubectl create ns access
namespace/access created
[root@controller ~]# kubectl run testnginx --image=nginx -n restricted
pod/testnginx created
[root@controller ~]# kubectl run testbox --image=busybox -n access -- sleep 3600
pod/testbox created
[root@controller ~]# kubectl run testbox --image=busybox -- sleep 3600
pod/testbox created
[root@controller ~]# vi lab127.yaml
[root@controller ~]# cat lab127.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Let’s edit lab127.yaml.
[root@controller ~]# cat lab127.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: restricted
spec:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    ports:
    - protocol: TCP
      port: 80
[root@controller ~]# kubectl label ns access project=myproject
namespace/access labeled
[root@controller ~]# kubectl get ns --show-labels
NAME               STATUS   AGE    LABELS
access             Active   154m   kubernetes.io/metadata.name=access,project=myproject
calico-apiserver   Active   3d     kubernetes.io/metadata.name=calico-apiserver,name=calico-apiserver,pod-security.kubernetes.io/enforce-version=latest,pod-security.kubernetes.io/enforce=privileged
calico-system      Active   3d     kubernetes.io/metadata.name=calico-system,name=calico-system,pod-security.kubernetes.io/enforce-version=latest,pod-security.kubernetes.io/enforce=privileged
default            Active   3d     kubernetes.io/metadata.name=default
kube-node-lease    Active   3d     kubernetes.io/metadata.name=kube-node-lease
kube-public        Active   3d     kubernetes.io/metadata.name=kube-public
kube-system        Active   3d     kubernetes.io/metadata.name=kube-system
restricted         Active   155m   kubernetes.io/metadata.name=restricted
tigera-operator    Active   3d     kubernetes.io/metadata.name=tigera-operator,name=tigera-operator,pod-security.kubernetes.io/enforce=privileged
[root@controller ~]# kubectl create -f lab127.yaml
networkpolicy.networking.k8s.io/test-network-policy created
[root@controller ~]# kubectl describe <TAB>
(the shell completion lists every resource type in the cluster; networkpolicies.networking.k8s.io is the one we need)
[root@controller ~]# kubectl describe networkpolicies.networking.k8s.io -n restricted
Name:         test-network-policy
Namespace:    restricted
Created on:   2024-02-18 11:31:10 -0500 EST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: 80/TCP
    From:
      NamespaceSelector: project=myproject
  Not affecting egress traffic
  Policy Types: Ingress
[root@controller ~]# kubectl get pods -n restricted
NAME        READY   STATUS    RESTARTS   AGE
testnginx   1/1     Running   0          157m
[root@controller ~]# kubectl expose pod testnginx --port=80 -n restricted
service/testnginx exposed
[root@controller ~]# kubectl get svc -n restricted
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
testnginx   ClusterIP   10.110.72.60   <none>        80/TCP    28s
[root@controller ~]# kubectl get svc -n restricted -o wide
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   SELECTOR
testnginx   ClusterIP   10.110.72.60   <none>        80/TCP    38s   run=testnginx
[root@controller ~]# kubectl exec -it testbox -n access -- curl 10.110.72.60
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "73803fa926c75637709e237ef1c55e9ca8fdc4a6a50557d585ca55e8dc567a2b": OCI runtime exec failed: exec failed: unable to start container process: exec: "curl": executable file not found in $PATH: unknown
[root@controller ~]# kubectl exec -it testbox -n access -- wget 10.110.72.60
Connecting to 10.110.72.60 (10.110.72.60:80)
saving to 'index.html'
index.html           100% |********************************|   615  0:00:00 ETA
'index.html' saved
[root@controller ~]# kubectl exec -it testbox -- wget 10.110.72.60
Connecting to 10.110.72.60 (10.110.72.60:80)
^Ccommand terminated with exit code 130
Setting up Quota
- Create a Namespace with the name limited and configure it such that only 5 Pods can be started and the total amount of available memory for applications running in that Namespace is limited to 2 GiB.
- Run a webserver Deployment with the name lab128deploy using 3 Pods in this Namespace.
- Each of the Pods should request 128MiB memory and be limited to 256MiB.
[root@controller ~]# kubectl create ns limited
namespace/limited created
[root@controller ~]# kubectl create quota my-quota --hard=memory=2G,pods=5 -n limited
resourcequota/my-quota created
[root@controller ~]# kubectl describe ns limited
Name:         limited
Labels:       kubernetes.io/metadata.name=limited
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:     my-quota
  Resource  Used  Hard
  --------  ---   ---
  memory    0     2G
  pods      0     5

No LimitRange resource.
[root@controller ~]# kubectl create deploy lab128deploy --image=nginx --replicas=3 -n limited
deployment.apps/lab128deploy created
[root@controller ~]# kubectl set resources -h
...
  # Set the resource request and limits for all containers in nginx
  kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
...
[root@controller ~]# kubectl set resources deployment lab128deploy --limits=memory=256Mi --requests=memory=128Mi -n limited
deployment.apps/lab128deploy resource requirements updated
[root@controller ~]# kubectl get all -n limited
NAME                                READY   STATUS              RESTARTS   AGE
pod/lab128deploy-6f9c55779d-shcdl   0/1     ContainerCreating   0          15s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab128deploy   0/3     1            0           14m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/lab128deploy-595cd4d5cb   3         0         0       14m
replicaset.apps/lab128deploy-6f9c55779d   1         1         0       15s
[root@controller ~]# kubectl describe ns limited
Name:         limited
Labels:       kubernetes.io/metadata.name=limited
Annotations:  <none>
Status:       Active

Resource Quotas
  Name:     my-quota
  Resource  Used   Hard
  --------  ---    ---
  memory    128Mi  2G
  pods      1      5

No LimitRange resource.
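Only one new Pod shows up at first, most likely because the quota forces every Pod in the Namespace to declare a memory request: the original ReplicaSet (created before kubectl set resources) has none, so its Pods are rejected, while the updated ReplicaSet is still scaling up. The Namespace events make this visible; a quick check whose output will differ per cluster:

[root@controller ~]# kubectl get events -n limited --sort-by=.metadata.creationTimestamp | grep -i quota
[root@controller ~]# kubectl describe rs -n limited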
Creating a Static Pod
- Configure a Pod with the name lab129pod that will be started by the kubelet on node worker2 as a static Pod.
On controller:
[root@controller ~]# kubectl run lab129pod --image=nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: lab129pod
  name: lab129pod
spec:
  containers:
  - image: nginx
    name: lab129pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
On worker2:
[root@worker2 ~]# cd /etc/kubernetes/manifests
[root@worker2 manifests]# vi lab129pod.yaml
[root@worker2 manifests]# cat lab129pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: lab129pod
  name: lab129pod
spec:
  containers:
  - image: nginx
    name: lab129pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
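If static Pods do not appear, the kubelet's staticPodPath setting decides which directory is watched; a quick check, assuming the default kubeadm location for the kubelet config file:

[root@worker2 ~]# grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests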
On controller:
[root@controller ~]# kubectl get pods -o wide
NAME                            READY   STATUS              RESTARTS             AGE     IP              NODE                  NOMINATED NODE   READINESS GATES
lab123                          2/2     Running             0                    44h     192.168.0.155   worker1.example.com   <none>           <none>
lab124deploy-7c7c8457f9-lclk4   1/1     Running             0                    44h     192.168.0.156   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-4drk6    1/1     Running             0                    22h     192.168.0.157   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-lhmfs    1/1     Running             0                    22h     192.168.0.159   worker1.example.com   <none>           <none>
lab126deploy-fff46cd4b-zw5fq    1/1     Running             0                    22h     192.168.0.158   worker1.example.com   <none>           <none>
lab128deploy-595cd4d5cb-595b8   0/1     Terminating         0                    52m     <none>          worker2.example.com   <none>           <none>
lab128deploy-595cd4d5cb-bhnqm   0/1     Terminating         0                    52m     <none>          worker1.example.com   <none>           <none>
lab128deploy-595cd4d5cb-t86fz   0/1     Terminating         0                    52m     <none>          worker2.example.com   <none>           <none>
lab129pod-worker2.example.com   0/1     ContainerCreating   0                    6m33s   <none>          worker2.example.com   <none>           <none>
testbox                         1/1     Running             19 (<invalid> ago)   20h     192.168.0.162   worker1.example.com   <none>           <none>
Troubleshooting Node Services
- Assume that node worker2 is not currently available. Ensure that the appropriate service is started on that node so that the node shows as Ready again.
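No transcript is included for this task; a minimal sketch, assuming the node shows NotReady because the kubelet (or the container runtime) is stopped on worker2:

[root@controller ~]# kubectl get nodes
[root@worker2 ~]# systemctl status kubelet
[root@worker2 ~]# systemctl status containerd
[root@worker2 ~]# systemctl enable --now kubelet
[root@controller ~]# kubectl get nodes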
Configuring Cluster Access
Create a ServiceAccount that has permissions to create Pods, Deployments,
DaemonSets and StatefulSets in the Namespace “access”.
Go to the kubernetes documentation page -> search: role -> Using RBAC Authorization -> Role examples
[root@controller ~]# kubectl create ns access
Error from server (AlreadyExists): namespaces "access" already exists
[root@controller ~]# kubectl create role -h
Create a role with single rule.

Examples:
  # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
  kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods
...
[root@controller ~]# kubectl create role app-creator --verb=get --verb=list --verb=watch --verb=create --verb=update --verb=patch --verb=delete --resource=pods,deployment,daemonset,statefulset -n access
role.rbac.authorization.k8s.io/app-creator created
[root@controller ~]# kubectl describe role app-creator -n access
Name:         app-creator
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources          Non-Resource URLs  Resource Names  Verbs
  ---------          -----------------  --------------  -----
  pods               []                 []              [get list watch create update patch delete]
  daemonsets.apps    []                 []              [get list watch create update patch delete]
  deployments.apps   []                 []              [get list watch create update patch delete]
  statefulsets.apps  []                 []              [get list watch create update patch delete]
[root@controller ~]# kubectl get clusterroles | more
NAME                                                                   CREATED AT
admin                                                                  2024-02-15T16:08:33Z
...                                                                    2024-02-15T16:08:33Z
edit                                                                   2024-02-15T16:08:33Z
kubeadm:get-nodes                                                      2024-02-15T16:08:34Z
system:aggregate-to-admin                                              2024-02-15T16:08:33Z
system:aggregate-to-edit                                               2024-02-15T16:08:33Z
system:aggregate-to-view                                               2024-02-15T16:08:33Z
system:auth-delegator                                                  2024-02-15T16:08:33Z
system:basic-user                                                      2024-02-15T16:08:33Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient       2024-02-15T16:08:33Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2024-02-15T16:08:33Z
system:certificates.k8s.io:kube-apiserver-client-approver              2024-02-15T16:08:33Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver      2024-02-15T16:08:33Z
system:certificates.k8s.io:kubelet-serving-approver                    2024-02-15T16:08:33Z
system:certificates.k8s.io:legacy-unknown-approver                     2024-02-15T16:08:33Z
system:controller:attachdetach-controller                              2024-02-15T16:08:33Z
system:controller:certificate-controller                               2024-02-15T16:08:33Z
system:controller:clusterrole-aggregation-controller                   2024-02-15T16:08:33Z
system:controller:cronjob-controller                                   2024-02-15T16:08:33Z
system:controller:daemon-set-controller                                2024-02-15T16:08:33Z
system:controller:deployment-controller                                2024-02-15T16:08:33Z
system:controller:disruption-controller                                2024-02-15T16:08:33Z
system:controller:endpoint-controller                                  2024-02-15T16:08:33Z
system:controller:endpointslice-controller                             2024-02-15T16:08:33Z
system:controller:endpointslicemirroring-controller                    2024-02-15T16:08:33Z
system:controller:ephemeral-volume-controller                          2024-02-15T16:08:33Z
system:controller:expand-controller                                    2024-02-15T16:08:33Z
system:controller:generic-garbage-collector                            2024-02-15T16:08:33Z
system:controller:horizontal-pod-autoscaler                            2024-02-15T16:08:33Z
system:controller:job-controller                                       2024-02-15T16:08:33Z
system:controller:namespace-controller                                 2024-02-15T16:08:33Z
system:controller:node-controller                                      2024-02-15T16:08:33Z
system:controller:persistent-volume-binder                             2024-02-15T16:08:33Z
system:controller:pod-garbage-collector                                2024-02-15T16:08:33Z
[root@controller ~]# kubectl create sa app-creator -n access
serviceaccount/app-creator created
[root@controller ~]# kubectl create rolebinding app-creator --role=app-creator --serviceaccount=access:app-creator -n access
rolebinding.rbac.authorization.k8s.io/app-creator created
[root@controller ~]# kubectl get role,rolebinding,serviceaccount -n access
NAME                                          CREATED AT
role.rbac.authorization.k8s.io/app-creator    2024-02-19T12:31:59Z

NAME                                                 ROLE               AGE
rolebinding.rbac.authorization.k8s.io/app-creator    Role/app-creator   58s

NAME                         SECRETS   AGE
serviceaccount/app-creator   0         3m7s
serviceaccount/default       0         22h
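A quick way to verify that the binding gives the ServiceAccount the intended permissions is kubectl auth can-i with impersonation; with the names created above, both commands should answer yes:

[root@controller ~]# kubectl auth can-i create pods -n access --as=system:serviceaccount:access:app-creator
[root@controller ~]# kubectl auth can-i create deployments -n access --as=system:serviceaccount:access:app-creator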
Configuring Taints and Tolerations
- Configure node worker2 such that it will only allow Pods to run that have been configured with the setting type:db.
- After verifying this works, remove the node restriction to return to normal operation.
Go to the kubernetes documentation page -> search: taint -> Taints and Tolerations
[root@controller ~]# kubectl taint nodes worker2.example.com type=db:NoSchedule
node/worker2.example.com tainted
[root@controller ~]# kubectl create deploy tolerate-nginx --image=nginx --replicas=3 --dry-run=client -o yaml > lab1212.yaml
[root@controller ~]# vim lab1212.yaml
[root@controller ~]# cat lab1212.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tolerate-nginx
  name: tolerate-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tolerate-nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tolerate-nginx
    spec:
      tolerations:
      - key: "type"
        operator: "Equal"
        value: "db"
        effect: "NoSchedule"
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
[root@controller ~]# kubectl apply -f lab1212.yaml
deployment.apps/tolerate-nginx created
[root@controller ~]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS       AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
lab124deploy-7c7c8457f9-lclk4     1/1     Running   1 (107m ago)   46h   192.168.0.167   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-4drk6      1/1     Running   1 (107m ago)   24h   192.168.0.166   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-lhmfs      1/1     Running   1 (97m ago)    24h   192.168.0.177   worker1.example.com   <none>           <none>
lab126deploy-fff46cd4b-zw5fq      1/1     Running   1 (97m ago)    24h   192.168.0.175   worker1.example.com   <none>           <none>
tolerate-nginx-74bb955695-bl5vt   1/1     Running   0              8s    192.168.0.183   worker1.example.com   <none>           <none>
tolerate-nginx-74bb955695-dm27q   1/1     Running   0              8s    192.168.0.182   worker2.example.com   <none>           <none>
tolerate-nginx-74bb955695-h2vrf   1/1     Running   0              8s    192.168.0.184   worker1.example.com   <none>           <none>
[root@controller ~]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS       AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
lab124deploy-7c7c8457f9-lclk4     1/1     Running   1 (108m ago)   46h   192.168.0.167   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-4drk6      1/1     Running   1 (108m ago)   24h   192.168.0.166   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-lhmfs      1/1     Running   1 (98m ago)    24h   192.168.0.177   worker1.example.com   <none>           <none>
lab126deploy-fff46cd4b-zw5fq      1/1     Running   1 (98m ago)    24h   192.168.0.175   worker1.example.com   <none>           <none>
tolerate-nginx-74bb955695-bl5vt   1/1     Running   0              69s   192.168.0.183   worker1.example.com   <none>           <none>
tolerate-nginx-74bb955695-dm27q   1/1     Running   0              69s   192.168.0.182   worker2.example.com   <none>           <none>
tolerate-nginx-74bb955695-h2vrf   1/1     Running   0              69s   192.168.0.184   worker1.example.com   <none>           <none>
[root@controller ~]# kubectl create deploy test-deploy --image=nginx --replicas=4
deployment.apps/test-deploy created
[root@controller ~]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS       AGE     IP              NODE                  NOMINATED NODE   READINESS GATES
lab124deploy-7c7c8457f9-lclk4     1/1     Running   1 (110m ago)   46h     192.168.0.167   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-4drk6      1/1     Running   1 (110m ago)   24h     192.168.0.166   worker2.example.com   <none>           <none>
lab126deploy-fff46cd4b-lhmfs      1/1     Running   1 (99m ago)    24h     192.168.0.177   worker1.example.com   <none>           <none>
lab126deploy-fff46cd4b-zw5fq      1/1     Running   1 (99m ago)    24h     192.168.0.175   worker1.example.com   <none>           <none>
test-deploy-859f95ffcc-8st6q      1/1     Running   0              5s      192.168.0.186   worker1.example.com   <none>           <none>
test-deploy-859f95ffcc-bcxfl      1/1     Running   0              5s      192.168.0.188   worker1.example.com   <none>           <none>
test-deploy-859f95ffcc-g9t6k      1/1     Running   0              5s      192.168.0.185   worker1.example.com   <none>           <none>
test-deploy-859f95ffcc-xw2gv      1/1     Running   0              5s      192.168.0.187   worker1.example.com   <none>           <none>
tolerate-nginx-74bb955695-bl5vt   1/1     Running   0              2m31s   192.168.0.183   worker1.example.com   <none>           <none>
tolerate-nginx-74bb955695-dm27q   1/1     Running   0              2m31s   192.168.0.182   worker2.example.com   <none>           <none>
tolerate-nginx-74bb955695-h2vrf   1/1     Running   0              2m31s   192.168.0.184   worker1.example.com   <none>           <none>
[root@controller ~]# kubectl delete deploy test-deploy
deployment.apps "test-deploy" deleted
[root@controller ~]# kubectl taint nodes worker2.example.com type=db:NoSchedule-
node/worker2.example.com untainted
[root@controller ~]# kubectl delete -f lab1212.yaml
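To confirm the taint is present before the test (and gone again after it is removed), the node description can be checked; the Taints line should show type=db:NoSchedule while the taint is active:

[root@controller ~]# kubectl describe node worker2.example.com | grep -i taints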
Configuring a High Availability Cluster
- Configure a High Availability cluster with three control plane nodes and two worker nodes.
- Ensure that each control plane node can be used as a client as well.
- Use the scripts provided in the course Git repository at https://github.com/sandervanvugt/cka to install the CRI, kubetools and load balancer.
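No transcript is shown for this task either. The commands below are a minimal sketch of the kubeadm side, assuming the load balancer from the course repository is already forwarding a virtual address to the API servers; all addresses, ports, tokens and keys are placeholders:

# On the first control plane node
[root@control1 ~]# kubeadm init --control-plane-endpoint "<lb-address>:<lb-port>" --upload-certs
# On the other control plane nodes, use the control-plane join command printed above
[root@control2 ~]# kubeadm join <lb-address>:<lb-port> --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --certificate-key <key>
# On the worker nodes, use the regular join command
[root@worker1 ~]# kubeadm join <lb-address>:<lb-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# On every control plane node, set up kubectl so the node can be used as a client
[root@control1 ~]# mkdir -p $HOME/.kube; cp -i /etc/kubernetes/admin.conf $HOME/.kube/config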
Etcd Backup and Restore
Note: all tasks from here on should be performed on a non-HA cluster
- Before creating the backup, create a Deployment that runs nginx.
- Create a backup of the etcd and write it to /tmp/etcdbackup.
- Delete the Deployment you just created.
- Restore the backup that you have created in the first step of this procedure and verify that the Deployment is available again.
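A minimal sketch of one way to do this with etcdctl, assuming a kubeadm cluster with the default certificate locations; the Deployment name is arbitrary, and the exact restore steps (where the restored data directory goes and how the etcd static Pod is pointed at it) depend on the setup:

[root@controller ~]# kubectl create deploy mynginx --image=nginx
[root@controller ~]# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcdbackup \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key
[root@controller ~]# kubectl delete deploy mynginx
[root@controller ~]# ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcdbackup --data-dir=/var/lib/etcd-backup
# Edit /etc/kubernetes/manifests/etcd.yaml so the etcd-data hostPath points to /var/lib/etcd-backup,
# wait for the etcd and kube-apiserver Pods to come back, then verify the Deployment is available again:
[root@controller ~]# kubectl get deploy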
Performing a Control Node Upgrade
- Notice that this task requires you to have a control node running an older version of Kubernetes available.
- Update the control node to the latest version of Kubernetes.
- Ensure that the kubelet and kubectl are updated as well.
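A minimal sketch of the usual kubeadm upgrade flow on the control node, assuming an RPM-based host like the ones used in these labs; the package names and target version are placeholders and depend on the configured repositories:

[root@controller ~]# kubectl drain controller.example.com --ignore-daemonsets
[root@controller ~]# dnf install -y kubeadm-<target-version>
[root@controller ~]# kubeadm upgrade plan
[root@controller ~]# kubeadm upgrade apply v<target-version>
[root@controller ~]# dnf install -y kubelet-<target-version> kubectl-<target-version>
[root@controller ~]# systemctl daemon-reload
[root@controller ~]# systemctl restart kubelet
[root@controller ~]# kubectl uncordon controller.example.com
[root@controller ~]# kubectl get nodes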
Configuring Application Logging
- Create a Pod with a logging agent that runs as a sidecar container.
- Configure the main application to use Busybox and run the Linux date command every minute. The result of this command should be written to the file /output/date.log.
- Set up a sidecar container that runs Nginx and provides access to the date.log file at /usr/share/nginx/html/date.log.
Go to the kubernetes documentation page -> search: logging-> Logging Architecture
[root@controller ~]# vi lab135.yaml
[root@controller ~]# cat lab135.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox:1.28
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
[root@controller ~]# vi lab135.yaml
[root@controller ~]# cat lab135.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      while sleep 60;
      do
        echo "$(date)" >> /output/date.log
      done
    volumeMounts:
    - name: varlog
      mountPath: /output
  - name: count-log-1
    image: nginx
    volumeMounts:
    - name: varlog
      mountPath: /usr/share/nginx/html
  volumes:
  - name: varlog
    emptyDir: {}
[root@controller ~]# kubectl apply -f lab135.yaml
pod/counter created
[root@controller ~]# kubectl describe pod counter
Name:             counter
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker1.example.com/172.30.9.26
Start Time:       Mon, 19 Feb 2024 11:32:05 -0500
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: 749bc21f17631ce5c99e0012bb32da1713812b25fe6c3257b8f3dcb3a7195c0e
                  cni.projectcalico.org/podIP: 192.168.0.189/32
                  cni.projectcalico.org/podIPs: 192.168.0.189/32
Status:           Running
IP:               192.168.0.189
IPs:
  IP:  192.168.0.189
Containers:
  count:
    Container ID:  containerd://beeb87d6485daca25884b9cfda033aa029fae1f9a16c7ea99c6f315ab29dbf4f
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      while sleep 60; do
        echo "$(date)" >> /output/date.log
      done
    State:          Running
      Started:      Mon, 19 Feb 2024 11:32:07 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /output from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c6f7x (ro)
  count-log-1:
    Container ID:  containerd://0068e32f24123c84c4e4607bc9e4f6f709aba7af956092ee45204101d3430780
...
[root@controller ~]# kubectl exec -it counter -c count-log-1 -- cat /usr/share/nginx/html/date.log
Mon Feb 19 16:33:07 UTC 2024
Mon Feb 19 16:34:07 UTC 2024
[root@controller ~]# kubectl exec -it counter -c count-log-1 -- cat /usr/share/nginx/html/date.log
Mon Feb 19 16:33:07 UTC 2024
Mon Feb 19 16:34:07 UTC 2024
Mon Feb 19 16:35:07 UTC 2024
Managing Persistent Volume Claims
- Create a PersistentVolume that uses 1GB of HostPath storage.
- Create a PersistentVolumeClaim that uses the PersistentVolume; the PersistentVolumeClaim should request 100 MiB of storage.
- Run a Pod with the name storage, using the Nginx image and mounting this PVC on the directory /data.
- After creating the configuration, change the PersistentVolumeClaim to request a size of 200MiB.
[root@controller cka]# cat resize_pvc.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myvol
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mystorageclass
allowVolumeExpansion: true
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: mystorageclass
  hostPath:
    path: /tmp/pv1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: myvol
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: mystorageclass
---
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
  namespace: myvol
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: "/vol1"
      name: myvolume
  volumes:
  - name: myvolume
    persistentVolumeClaim:
      claimName: mypvc
[root@controller cka]# kubectl apply -f resize_pvc.yaml
namespace/myvol created
storageclass.storage.k8s.io/mystorageclass created
persistentvolume/mypv created
persistentvolumeclaim/mypvc created
pod/pv-pod created
[root@controller cka]# kubectl get pv,pvc -n myvol
NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS     REASON   AGE
persistentvolume/mypv   1Gi        RWO            Recycle          Bound    myvol/mypvc   mystorageclass            3m48s

NAME                          STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/mypvc   Bound    mypv     1Gi        RWO            mystorageclass   3m48s
[root@controller cka]# kubectl edit pvc mypvc -n myvol
persistentvolumeclaim/mypvc edited
And change to:
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
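To see how the resize request was recorded, the claim can be inspected afterwards, using the names from the course file:

[root@controller cka]# kubectl get pvc -n myvol
[root@controller cka]# kubectl describe pvc mypvc -n myvol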
Investigating Pod Logs
- Run a Pod with the name failingdb, which starts the mariadb image without any further options (it should fail).
- Investigate the Pod logs and write all lines that start with ERROR to /tmp/failingdb.log.
[root@controller ~]# kubectl run failingdb --image=mariadb
pod/failingdb created
[root@controller ~]# kubectl get pods
NAME            READY   STATUS              RESTARTS       AGE
busybox-ready   0/1     Running             12 (53m ago)   12h
failingdb       0/1     ContainerCreating   0              6s
liveness-exec   1/1     Running             9 (44m ago)    9h
nginx-probes    1/1     Running             0              12h
[root@controller ~]# kubectl get pods
NAME            READY   STATUS             RESTARTS      AGE
busybox-ready   0/1     Running            12 (53m ago)  12h
failingdb       0/1     CrashLoopBackOff   1 (8s ago)    23s
liveness-exec   1/1     Running            9 (45m ago)   9h
nginx-probes    1/1     Running            0              12h
[root@controller ~]# kubectl logs failingdb
2024-03-06 06:46:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.3.2+maria~ubu2204 started.
2024-03-06 06:46:27+00:00 [Warn] [Entrypoint]: /sys/fs/cgroup/blkio:/system.slice/containerd.service/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
11:pids:/system.slice/containerd.service/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
10:rdma:/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
9:devices:/system.slice/containerd.service/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
8:net_cls,net_prio:/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
7:cpuset:/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
6:cpu,cpuacct:/system.slice/containerd.service/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
5:memory:/system.slice/containerd.service/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
4:hugetlb:/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
3:perf_event:/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
2:freezer:/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
1:name=systemd:/system.slice/containerd.service/kubepods-besteffort-pod21510976_7a31_46ec_80dd_1a4ebd5ba06d.slice:cri-containerd:5a50cff6815a8d19a3c66f23c65b60e40b347b5c3f2e8d1231d766ba1e305dab
0:://memory.pressure not writable, functionality unavailable to MariaDB
2024-03-06 06:46:27+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2024-03-06 06:46:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:11.3.2+maria~ubu2204 started.
2024-03-06 06:46:27+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
        You need to specify one of MARIADB_ROOT_PASSWORD, MARIADB_ROOT_PASSWORD_HASH, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD and MARIADB_RANDOM_ROOT_PASSWORD
[root@controller ~]# kubectl logs failingdb | tail -2
2024-03-06 06:46:53+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
        You need to specify one of MARIADB_ROOT_PASSWORD, MARIADB_ROOT_PASSWORD_HASH, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD and MARIADB_RANDOM_ROOT_PASSWORD
[root@controller ~]# kubectl logs failingdb | tail -2 > /tmp/failingdb.log
[root@controller ~]# cat /tmp/failingdb.log
2024-03-06 06:47:42+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
        You need to specify one of MARIADB_ROOT_PASSWORD, MARIADB_ROOT_PASSWORD_HASH, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD and MARIADB_RANDOM_ROOT_PASSWORD
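The tail -2 shown above happens to catch the ERROR block, but filtering on the ERROR marker matches the task more literally; a small variation on the same idea:

[root@controller ~]# kubectl logs failingdb | grep ERROR > /tmp/failingdb.log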
Analyzing Performance
- Find out which Pod currently has the highest CPU load.
kubectl top pods
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get pods -n kube-system
kubectl logs metrics-server-6db4d75b97-d57gq
kubectl edit -n kube-system deploy metrics-server
Change
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --secure-port=10250
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-use-node-status-port
    - --metric-resolution=15s
to
spec:
  containers:
  - args:
    - --cert-dir=/tmp
    - --kubelet-insecure-tls
    - --secure-port=10250
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-use-node-status-port
    - --metric-resolution=15s
And run:
kubectl top pods
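To answer the question directly, kubectl top can sort by CPU across all Namespaces once metrics-server is serving data; the Pod at the top of this list is the one with the highest CPU load:

kubectl top pods -A --sort-by=cpu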
Managing Scheduling
- Run a Pod with the name lab139pod.
- Ensure that it only runs on nodes that have the label storage=ssd set.
[root@controller ~]# kubectl label node worker1.example.com storage=ssd
node/worker1.example.com labeled
[root@controller ~]# kubectl run lab139pod --image=nginx --dry-run=client -o yaml > lab139pod.yaml
[root@controller ~]# cat lab139pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: lab139pod
  name: lab139pod
spec:
  containers:
  - image: nginx
    name: lab139pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@controller ~]# vim lab139pod.yaml
[root@controller ~]# cat lab139pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: lab139pod
  name: lab139pod
spec:
  nodeSelector:
    storage: ssd
  containers:
  - image: nginx
    name: lab139pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@controller ~]# kubectl apply -f lab139pod.yaml
pod/lab139pod created
[root@controller ~]# kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS       AGE   IP               NODE                  NOMINATED NODE   READINESS GATES
busybox-ready   0/1     Running   18 (12m ago)   18h   172.16.71.203    worker2.example.com   <none>           <none>
lab139pod       1/1     Running   0              10s   172.16.102.142   worker1.example.com   <none>           <none>
liveness-exec   1/1     Running   14 (60m ago)   15h   172.16.71.204    worker2.example.com   <none>           <none>
nginx-probes    1/1     Running   0              17h   172.16.102.139   worker1.example.com   <none>           <none>
Configuring Ingress
- Run a Pod with the name lab1310pod, using the Nginx image.
- Expose this Pod using a NodePort type Service.
- Configure Ingress such that its web content is available on the path lab1310.info/hi.
- You will not have to configure an Ingress controller for this assignment; creating the Ingress API resource is enough.
[root@controller ~]# kubectl run lab1310pod --image=nginx
pod/lab1310pod created
[root@controller ~]# kubectl expose pod lab1310pod --port=80 --type=NodePort
service/lab1310pod exposed
[root@controller ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        5d22h
lab1310pod   NodePort    10.102.16.147   <none>        80:30453/TCP   12s
[root@controller ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
lab1310pod   1/1     Running   0          30s   172.16.71.211   worker2.example.com   <none>           <none>
[root@controller ~]# kubectl create ingress -h | more
...
Examples:
  # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc
  # svc1:8080 with a TLS secret "my-cert"
  kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"
...
[root@controller ~]# kubectl create ingress simple --rule="lab1310.info/hi=lab1310pod:80"
ingress.networking.k8s.io/simple created
[root@controller ~]# kubectl describe ingress simple
Name:             simple
Labels:           <none>
Namespace:        default
Address:
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host          Path  Backends
  ----          ----  --------
  lab1310.info
                /hi   lab1310pod:80 (172.16.71.211:80)
Annotations:    <none>
Events:         <none>
Preparing for Node Maintenance
- Schedule node worker2 for maintenance in such a way that all running Pods are evicted.
[root@controller ~]# kubectl drain worker2.example.com
node/worker2.example.com cordoned
error: unable to drain node "worker2.example.com" due to error:[cannot delete Pods declare no controller (use --force to override): default/lab1310pod, myvol/pv-pod, cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-d2plz, kube-system/kube-proxy-jz8hj, cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-6db4d75b97-d57gq], continuing command...
There are pending nodes to be drained:
 worker2.example.com
cannot delete Pods declare no controller (use --force to override): default/lab1310pod, myvol/pv-pod
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-d2plz, kube-system/kube-proxy-jz8hj
cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-6db4d75b97-d57gq
[root@controller ~]# kubectl drain worker2.example.com --ignore-daemonsets --force --delete-emptydir-data
node/worker2.example.com already cordoned
Warning: deleting Pods that declare no controller: default/lab1310pod, myvol/pv-pod; ignoring DaemonSet-managed Pods: kube-system/calico-node-d2plz, kube-system/kube-proxy-jz8hj
evicting pod myvol/pv-pod
evicting pod default/lab1310pod
evicting pod kube-system/metrics-server-6db4d75b97-d57gq
pod/lab1310pod evicted
pod/metrics-server-6db4d75b97-d57gq evicted
pod/pv-pod evicted
node/worker2.example.com drained
[root@controller ~]# kubectl get nodes
NAME                     STATUS                     ROLES           AGE     VERSION
controller.example.com   NotReady                   control-plane   5d23h   v1.28.2
worker1.example.com      Ready                      <none>          5d23h   v1.28.2
worker2.example.com      Ready,SchedulingDisabled   <none>          5d23h   v1.28.2
[root@controller ~]# kubectl uncordon worker2.example.com
node/worker2.example.com uncordoned
[root@controller ~]# kubectl get nodes
NAME                     STATUS     ROLES           AGE     VERSION
controller.example.com   NotReady   control-plane   5d23h   v1.28.2
worker1.example.com      Ready      <none>          5d23h   v1.28.2
worker2.example.com      Ready      <none>          5d23h   v1.28.2
Scaling Applications
- Run a Deployment with the name lab1312deploy, using the Nginx image.
- Scale it such that it runs 6 application instances.
[root@controller ~]# kubectl create deploy lab1312deploy --image=nginx
deployment.apps/lab1312deploy created
[root@controller ~]# kubectl get all --selector app=lab1312deploy
NAME                                 READY   STATUS    RESTARTS   AGE
pod/lab1312deploy-54478d59c6-pl922   1/1     Running   0          24s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab1312deploy   1/1     1            1           24s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/lab1312deploy-54478d59c6   1         1         1       24s
[root@controller ~]# kubectl scale deployment lab1312deploy --replicas=6
deployment.apps/lab1312deploy scaled
[root@controller ~]# kubectl get all --selector app=lab1312deploy
NAME                                 READY   STATUS    RESTARTS   AGE
pod/lab1312deploy-54478d59c6-64dm6   1/1     Running   0          7s
pod/lab1312deploy-54478d59c6-c7gxn   1/1     Running   0          7s
pod/lab1312deploy-54478d59c6-ns5m5   1/1     Running   0          7s
pod/lab1312deploy-54478d59c6-pl922   1/1     Running   0          113s
pod/lab1312deploy-54478d59c6-s7gnz   1/1     Running   0          7s
pod/lab1312deploy-54478d59c6-wjwng   1/1     Running   0          7s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/lab1312deploy   6/6     6            6           113s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/lab1312deploy-54478d59c6   6         6         6       113s