Analyzing Cluster Nodes
- Kubernetes cluster nodes run Linux processes. To monitor these processes, generic Linux rules apply.
- Use systemctl status kubelet to get runtime information about the kubelet.
- Use log files in /var/log as well as journalctl output to get access to logs.
- Generic node information is obtained through kubectl describe.
- If the Metrics Server is installed, use kubectl top nodes to get a summary of CPU/memory usage on a node.
Analyzing Node State Commands
ls -lrt /var/log
journalctl
systemctl status kubelet
Now, let’s reproduce a failure:
[root@k8s ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s.example.pl Ready control-plane 3d3h v1.28.3 [root@k8s ~]# systemctl stop kubelet [root@k8s ~]# systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: inactive (dead) since Sat 2024-02-03 13:55:06 EST; 8s ago Docs: http://kubernetes.io/docs/ Process: 14821 ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/ku> Main PID: 14821 (code=exited, status=0/SUCCESS) lut 03 13:54:33 k8s.example.pl kubelet[14821]: E0203 13:54:33.646323 14821 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartCon> lut 03 13:54:45 k8s.example.pl kubelet[14821]: I0203 13:54:45.645915 14821 scope.go:117] "RemoveContainer" containerID="a31a7becd0cfd282ab55f4c39c573b4> lut 03 13:54:45 k8s.example.pl kubelet[14821]: E0203 13:54:45.646441 14821 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartCon> lut 03 13:54:59 k8s.example.pl kubelet[14821]: E0203 13:54:59.729342 14821 desired_state_of_world_populator.go:320] "Error processing volume" err="erro> lut 03 13:55:00 k8s.example.pl kubelet[14821]: I0203 13:55:00.646196 14821 scope.go:117] "RemoveContainer" containerID="a31a7becd0cfd282ab55f4c39c573b4> lut 03 13:55:00 k8s.example.pl kubelet[14821]: E0203 13:55:00.646616 14821 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartCon> lut 03 13:55:06 k8s.example.pl systemd[1]: Stopping kubelet: The Kubernetes Node Agent... lut 03 13:55:06 k8s.example.pl kubelet[14821]: I0203 13:55:06.846437 14821 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bu> lut 03 13:55:06 k8s.example.pl systemd[1]: kubelet.service: Succeeded. 
lut 03 13:55:06 k8s.example.pl systemd[1]: Stopped kubelet: The Kubernetes Node Agent. [root@k8s ~]# kubectl describe node k8s.example.pl Name: k8s.example.pl Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=k8s.example.pl kubernetes.io/os=linux minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_01_31T10_03_27_0700 minikube.k8s.io/version=v1.32.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 31 Jan 2024 10:03:23 -0500 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: HolderIdentity: k8s.example.pl AcquireTime: <unset> RenewTime: Sat, 03 Feb 2024 13:54:59 -0500 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure Unknown Sat, 03 Feb 2024 13:51:48 -0500 Sat, 03 Feb 2024 13:55:39 -0500 NodeStatusUnknown Kubelet stopped posting node status. DiskPressure Unknown Sat, 03 Feb 2024 13:51:48 -0500 Sat, 03 Feb 2024 13:55:39 -0500 NodeStatusUnknown Kubelet stopped posting node status. PIDPressure Unknown Sat, 03 Feb 2024 13:51:48 -0500 Sat, 03 Feb 2024 13:55:39 -0500 NodeStatusUnknown Kubelet stopped posting node status. Ready Unknown Sat, 03 Feb 2024 13:51:48 -0500 Sat, 03 Feb 2024 13:55:39 -0500 NodeStatusUnknown Kubelet stopped posting node status. 
Addresses: InternalIP: 172.30.9.24 Hostname: k8s.example.pl Capacity: cpu: 8 ephemeral-storage: 64177544Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16099960Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 64177544Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16099960Ki pods: 110 System Info: Machine ID: 0cc7c63085694b83adcd204eff748ff8 System UUID: 3e3ec47d-1fe1-b5b7-cbca-edd2da14db37 Boot ID: 79a4e58f-5d2a-4f44-ad34-520bab9b01cc Kernel Version: 4.18.0-500.el8.x86_64 OS Image: CentOS Stream 8 Operating System: linux Architecture: amd64 Container Runtime Version: docker://25.0.1 Kubelet Version: v1.28.3 Kube-Proxy Version: v1.28.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (35 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default apples-78656fd5db-4rpj7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20h default apples-78656fd5db-qsm4x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20h default apples-78656fd5db-t82tg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 20h default deploydaemon-zzllp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45h default firstnginx-d8679d567-249g9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d22h default firstnginx-d8679d567-66c4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d22h default firstnginx-d8679d567-72qbd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d22h default firstnginx-d8679d567-rhhlz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h default init-demo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d7h default lab4-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28h default morevol 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41h default mydaemon-d4dcd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45h default newdep-749c9b5675-2x9mb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21h default nginxsvc-5f8b7d4f4d-dtrs7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22h default pv-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h default sleepy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d8h default testpod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d23h default two-containers 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h default web-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h default web-1 0 (0%) 0 
(0%) 0 (0%) 0 (0%) 45h default web-2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45h default webserver-76d44586d-8gqhf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29h default webshop-7f9fd49d4c-92nj2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h default webshop-7f9fd49d4c-kqllw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h default webshop-7f9fd49d4c-x2czc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h ingress-nginx ingress-nginx-controller-6858749594-27tm9 100m (1%) 0 (0%) 90Mi (0%) 0 (0%) 22h kube-system coredns-5dd5756b68-sgfkj 100m (1%) 0 (0%) 70Mi (0%) 170Mi (1%) 3d3h kube-system etcd-k8s.example.pl 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 3d3h kube-system kube-apiserver-k8s.example.pl 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system kube-controller-manager-k8s.example.pl 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system kube-proxy-5nmms 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system kube-scheduler-k8s.example.pl 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h kubernetes-dashboard dashboard-metrics-scraper-7fd5cb4ddc-9ld5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h kubernetes-dashboard kubernetes-dashboard-8694d4445c-xjlsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 850m (10%) 0 (0%) memory 260Mi (1%) 170Mi (1%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotReady 74s node-controller Node k8s.example.pl status is now: NodeNotReady [root@k8s ~]# systemctl start kubelet [root@k8s ~]# kubectl describe node k8s.example.pl Name: k8s.example.pl Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=k8s.example.pl kubernetes.io/os=linux minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_01_31T10_03_27_0700 minikube.k8s.io/version=v1.32.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 31 Jan 2024 10:03:23 -0500 Taints: <none> Unschedulable: false Lease: HolderIdentity: k8s.example.pl AcquireTime: <unset> RenewTime: Sat, 03 Feb 2024 13:59:11 -0500 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 03 Feb 2024 13:59:11 -0500 Sat, 03 Feb 2024 13:59:11 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 03 Feb 2024 13:59:11 -0500 Sat, 03 Feb 2024 13:59:11 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 03 Feb 2024 13:59:11 -0500 Sat, 03 Feb 2024 13:59:11 -0500 KubeletHasSufficientPID kubelet has sufficient PID av ailable Ready True Sat, 03 Feb 2024 13:59:11 -0500 Sat, 03 Feb 2024 13:59:11 -0500 KubeletReady kubelet is posting ready stat us 
Addresses: InternalIP: 172.30.9.24 Hostname: k8s.example.pl Capacity: cpu: 8 ephemeral-storage: 64177544Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16099960Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 64177544Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16099960Ki pods: 110 System Info: Machine ID: 0cc7c63085694b83adcd204eff748ff8 System UUID: 3e3ec47d-1fe1-b5b7-cbca-edd2da14db37 Boot ID: 79a4e58f-5d2a-4f44-ad34-520bab9b01cc Kernel Version: 4.18.0-500.el8.x86_64 OS Image: CentOS Stream 8 Operating System: linux Architecture: amd64 Container Runtime Version: docker://25.0.1 Kubelet Version: v1.28.3 Kube-Proxy Version: v1.28.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (35 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default apples-78656fd5db-4rpj7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21h default apples-78656fd5db-qsm4x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21h default apples-78656fd5db-t82tg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21h default deploydaemon-zzllp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45h default firstnginx-d8679d567-249g9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d22h default firstnginx-d8679d567-66c4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d22h default firstnginx-d8679d567-72qbd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d22h default firstnginx-d8679d567-rhhlz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h default init-demo 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d7h default lab4-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 28h default morevol 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41h default mydaemon-d4dcd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h default newdep-749c9b5675-2x9mb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21h default nginxsvc-5f8b7d4f4d-dtrs7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22h default pv-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h default sleepy 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d8h default testpod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d23h default two-containers 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d5h default web-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d10h default web-1 0 (0%) 0 
(0%) 0 (0%) 0 (0%) 46h default web-2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h default webserver-76d44586d-8gqhf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29h default webshop-7f9fd49d4c-92nj2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h default webshop-7f9fd49d4c-kqllw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h default webshop-7f9fd49d4c-x2czc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 25h ingress-nginx ingress-nginx-controller-6858749594-27tm9 100m (1%) 0 (0%) 90Mi (0%) 0 (0%) 22h kube-system coredns-5dd5756b68-sgfkj 100m (1%) 0 (0%) 70Mi (0%) 170Mi (1%) 3d3h kube-system etcd-k8s.example.pl 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 3d3h kube-system kube-apiserver-k8s.example.pl 250m (3%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system kube-controller-manager-k8s.example.pl 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system kube-proxy-5nmms 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system kube-scheduler-k8s.example.pl 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3d3h kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d3h kubernetes-dashboard dashboard-metrics-scraper-7fd5cb4ddc-9ld5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h kubernetes-dashboard kubernetes-dashboard-8694d4445c-xjlsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 850m (10%) 0 (0%) memory 260Mi (1%) 170Mi (1%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotReady 3m36s node-controller Node k8s.example.pl status is now: NodeNotReady Normal Starting 4s kubelet Starting kubelet. 
Normal NodeAllocatableEnforced 4s kubelet Updated Node Allocatable limit across pods Normal NodeReady 4s kubelet Node k8s.example.pl status is now: NodeReady Normal NodeHasSufficientMemory 3s (x2 over 4s) kubelet Node k8s.example.pl status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 3s (x2 over 4s) kubelet Node k8s.example.pl status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 3s (x2 over 4s) kubelet Node k8s.example.pl status is now: NodeHasSufficientPID [root@k8s ~]# ls -lrt /var/log razem 46576 drwxr-xr-x. 2 root root 6 2021-08-31 glusterfs drwx------. 2 root root 6 2023-07-17 private drwxr-xr-x. 3 root root 21 2023-07-17 swtpm drwxr-xr-x. 2 root root 4096 2023-07-17 anaconda -rw-------. 1 root root 54470 2023-07-18 boot.log-20230718 drwxr-xr-x. 2 root root 23 09-04 12:17 tuned -rw-r--r--. 1 root root 1048499 10-13 18:11 dnf.librepo.log.1 -rw-------. 1 root root 0 10-22 03:19 secure-20231029 -rw-------. 1 root root 0 10-22 03:19 maillog-20231029 -rw-------. 1 root root 0 10-22 03:19 spooler-20231029 -rw-r--r--. 1 root root 2700 10-29 02:00 hawkey.log-20231029 -rw-------. 1 root root 53654 10-29 02:00 messages-20231029 -rw-------. 1 root root 45594 10-29 03:01 cron-20231029 -rw-------. 1 root root 0 10-29 03:07 spooler-20231105 -rw-------. 1 root root 0 10-29 03:07 secure-20231105 -rw-------. 1 root root 0 10-29 03:07 maillog-20231105 -rw-------. 1 root root 46120 11-05 03:01 cron-20231105 -rw-r--r--. 1 root root 2580 11-05 03:09 hawkey.log-20231105 -rw-------. 1 root root 53957 11-05 03:09 messages-20231105 -rw-------. 1 root root 0 11-05 03:22 spooler-20231112 -rw-------. 1 root root 0 11-05 03:22 secure-20231112 -rw-------. 1 root root 0 11-05 03:22 maillog-20231112 drwx------. 2 root root 23 11-06 03:13 audit -rw-r--r--. 1 root root 2640 11-12 01:35 hawkey.log-20231112 -rw-------. 1 root root 53494 11-12 02:50 messages-20231112 -rw-------. 1 root root 45736 11-12 03:34 cron-20231112 -rw-------. 
1 root root 0 11-12 03:34 spooler-20240131 -rw-------. 1 root root 0 11-12 03:34 maillog-20240131 drwx------. 3 root root 18 12-12 13:03 libvirt drwxr-x---. 2 sssd sssd 6 01-13 11:33 sssd drwxr-xr-x. 2 root root 6 01-17 14:34 qemu-ga drwx------. 3 root root 17 01-18 06:29 samba drwxr-x---. 2 chrony chrony 6 01-23 11:14 chrony -rw-r--r--. 1 root root 1048519 01-31 08:35 dnf.log.4 -rw-r--r--. 1 root root 976665 01-31 08:45 dnf.log.3 -rw-r--r--. 1 root root 922460 01-31 08:49 dnf.log.2 -rw-r--r--. 1 root root 929588 01-31 08:50 dnf.log.1 -rw-------. 1 root root 4340 01-31 09:01 cron-20240131 -rw-------. 1 root root 7290 01-31 09:44 secure-20240131 -rw-r--r--. 1 root root 2040 01-31 09:47 hawkey.log-20240131 drwx------. 3 root root 18 01-31 09:49 crio -rw-------. 1 root root 269633 01-31 09:49 messages-20240131 -rw-------. 1 root root 18241 01-31 09:51 boot.log-20240131 -rw-------. 1 root root 0 01-31 09:51 spooler -rw-------. 1 root root 0 01-31 09:51 maillog -rw-rw----. 1 root utmp 384 01-31 10:06 btmp-20240201 -rw-rw----. 1 root utmp 0 02-01 03:25 btmp -rw-------. 1 root root 6190 02-01 08:38 kdump.log -rw-r-----. 1 root root 20742 02-01 15:47 firewalld -rw------- 1 root root 9508 02-02 03:40 boot.log-20240202 -rw-------. 1 root root 0 02-02 03:40 boot.log drwxr-xr-x. 37 root root 4096 02-02 16:57 pods -rw-r--r--. 1 root root 1740 02-03 11:40 hawkey.log -rw-------. 1 root root 19604 02-03 13:01 cron -rw-r--r--. 1 root root 497330 02-03 13:38 dnf.librepo.log -rw-r--r--. 1 root root 182173 02-03 13:38 dnf.rpm.log -rw-r--r--. 1 root root 785398 02-03 13:38 dnf.log -rw-rw-r--. 1 root utmp 40320 02-03 13:52 wtmp -rw-rw-r--. 1 root utmp 292584 02-03 13:52 lastlog -rw-------. 1 root root 352561 02-03 13:52 secure drwxr-xr-x. 2 root root 8192 02-03 13:59 containers -rw-------. 1 root root 38193765 02-03 13:59 messages [root@k8s ~]# journalctl -u kubelet -- Logs begin at Thu 2024-02-01 08:38:25 EST, end at Sat 2024-02-03 14:02:35 EST. 
-- lut 01 08:40:55 k8s.example.pl systemd[1]: Started kubelet: The Kubernetes Node Agent. lut 01 08:40:55 k8s.example.pl kubelet[3174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file sp> lut 01 08:40:55 k8s.example.pl kubelet[3174]: I0201 08:40:55.958248 3174 server.go:467] "Kubelet version" kubeletVersion="v1.28.3" lut 01 08:40:55 k8s.example.pl kubelet[3174]: I0201 08:40:55.958354 3174 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" lut 01 08:40:55 k8s.example.pl kubelet[3174]: I0201 08:40:55.958716 3174 server.go:895] "Client rotation is on, will bootstrap in background" lut 01 08:40:55 k8s.example.pl kubelet[3174]: I0201 08:40:55.965902 3174 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/k> lut 01 08:40:55 k8s.example.pl kubelet[3174]: I0201 08:40:55.967590 3174 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle:> lut 01 08:40:55 k8s.example.pl kubelet[3174]: E0201 08:40:55.970239 3174 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Fai> lut 01 08:40:55 k8s.example.pl kubelet[3174]: I0201 08:40:55.992341 3174 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specifi> [root@k8s ~]# systemctl status kubelet ● kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active (running) since Sat 2024-02-03 13:59:11 EST; 3min 50s ago Docs: http://kubernetes.io/docs/ Main PID: 555436 (kubelet) Tasks: 16 (limit: 100376) Memory: 56.0M CGroup: /system.slice/kubelet.service └─555436 /var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/c> lut 03 14:02:55 k8s.example.pl kubelet[555436]: E0203 14:02:55.687080 555436 kubelet.go:1907] "Unable to attach or mount volumes 
for pod; skipping pod" > lut 03 14:02:55 k8s.example.pl kubelet[555436]: E0203 14:02:55.687181 555436 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[> lut 03 14:02:56 k8s.example.pl kubelet[555436]: E0203 14:02:56.435108 555436 desired_state_of_world_populator.go:320] "Error processing volume" err="err> lut 03 14:02:56 k8s.example.pl kubelet[555436]: E0203 14:02:56.686939 555436 kubelet.go:1907] "Unable to attach or mount volumes for pod; skipping pod" > lut 03 14:02:56 k8s.example.pl kubelet[555436]: E0203 14:02:56.687005 555436 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[> lut 03 14:02:57 k8s.example.pl kubelet[555436]: E0203 14:02:57.445006 555436 desired_state_of_world_populator.go:320] "Error processing volume" err="err> lut 03 14:02:57 k8s.example.pl kubelet[555436]: E0203 14:02:57.686464 555436 kubelet.go:1907] "Unable to attach or mount volumes for pod; skipping pod" > lut 03 14:02:57 k8s.example.pl kubelet[555436]: E0203 14:02:57.686526 555436 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[> lut 03 14:02:59 k8s.example.pl kubelet[555436]: I0203 14:02:59.386449 555436 scope.go:117] "RemoveContainer" containerID="e6e599d43e20022cb554472cf2e128> lut 03 14:02:59 k8s.example.pl kubelet[555436]: E0203 14:02:59.386925 555436 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartCo> |
The crictl Command
- All Pods are started as containers on the nodes.
- crictl is a generic tool that communicates with the container runtime to get information about running containers. As such, it replaces runtime-specific tools like docker and podman.
- To use it, a runtime-endpoint and an image-endpoint need to be set.
- The most convenient way to do so is by defining the /etc/crictl.yaml file on the nodes where you want to run crictl.
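A minimal /etc/crictl.yaml might look like the following sketch. The endpoint shown matches the cri-dockerd socket used on the node in the transcripts below; on a containerd-based node you would point both endpoints at the containerd socket instead:

```yaml
# /etc/crictl.yaml (sketch; endpoint paths depend on your container runtime)
runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
timeout: 10
debug: false
```

With this file in place, crictl can be run without passing --runtime-endpoint on every invocation.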
Using crictl
- List containers: sudo crictl ps
- List Pods that have been scheduled on this node: sudo crictl pods
- Inspect container configuration: sudo crictl inspect <name-or-id>
- Pull an image: sudo crictl pull <imagename>
- List images: sudo crictl images
- For more options, use crictl --help
[root@k8s ~]# crictl ps CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD bc6f6c3b226a9 busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74 2 minutes ago Running busybox-container 272 776a9d9f213bc two-containers 44bb58b75411b busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74 26 minutes ago Running sleepy 46 b598cba0e6d7f sleepy 04f6f10d3e46a eeb6ee3f44bd0 59 minutes ago Running centos2 41 4eb967073bfbd morevol c8d89745f340c eeb6ee3f44bd0 59 minutes ago Running centos1 41 4eb967073bfbd morevol a65e79f99bba9 nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 21 hours ago Running nginx 0 9793a44f77d5a apples-78656fd5db-qsm4x 383fb11871cd9 nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 21 hours ago Running nginx 0 93f7305ee0d77 apples-78656fd5db-4rpj7 bc4f405f4de6f nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 21 hours ago Running nginx 0 58dbf2f78a5b6 apples-78656fd5db-t82tg 464ab8ab35afc gcr.io/google-samples/hello-app@sha256:7104356ed4e3476a96a23b96f8d7c04dfa7a1881aa97d66a76217f6bc8a370d0 22 hours ago Running hello-app 0 64eac220b635b newdep-749c9b5675-2x9mb 3a65e1db97169 nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 23 hours ago Running nginx 0 0fd201ae2d934 nginxsvc-5f8b7d4f4d-dtrs7 1a4722cbaaf94 registry.k8s.io/ingress-nginx/controller@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c 23 hours ago Running controller 0 ea8cb6f3530f0 ingress-nginx-controller-6858749594-27tm9 332cd7a3b2aa9 nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 26 hours ago Running nginx 0 8956ee62249ab webshop-7f9fd49d4c-x2czc e136bd99527b9 nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 26 hours ago Running nginx 0 ae8bad2f8c457 webshop-7f9fd49d4c-92nj2 2c1067f28073c 
nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 26 hours ago Running nginx 0 746bebf244884 webshop-7f9fd49d4c-kqllw c6c00eece623f nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 29 hours ago Running task-pv-container 0 d39bb441ef944 lab4-pod 036a4a1599a1a nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 30 hours ago Running nginx 0 6b4abfa363771 webserver-76d44586d-8gqhf f7426897bdb2e nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9 41 hours ago Running pv-container 0 a09dd92ff2186 pv-pod 8dc188f5131a2 nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b 46 hours ago Running nginx 0 ced6eefe01d16 deploydaemon-zzllp 29bf2d747ac9a nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b 46 hours ago Running nginx 0 2a08474cc3d2a init-demo ... 47 hours ago Running kube-scheduler 1 6e075c204c3c8 kube-scheduler-k8s.example.pl [root@k8s cka]# crictl pods POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTI ME 93f7305ee0d77 22 hours ago Ready apples-78656fd5db-4rpj7 default 0 (defa ult) 9793a44f77d5a 22 hours ago Ready apples-78656fd5db-qsm4x default 0 (defa ult) 58dbf2f78a5b6 22 hours ago Ready apples-78656fd5db-t82tg default 0 (defa ult) 64eac220b635b 23 hours ago Ready newdep-749c9b5675-2x9mb default 0 (defa ult) 0fd201ae2d934 24 hours ago Ready nginxsvc-5f8b7d4f4d-dtrs7 default 0 (defa ult) ea8cb6f3530f0 24 hours ago Ready ingress-nginx-controller-6858749594-27tm9 ingress-nginx 0 (defa ult) ae8bad2f8c457 26 hours ago Ready webshop-7f9fd49d4c-92nj2 default 0 (defa ult) 8956ee62249ab 26 hours ago Ready webshop-7f9fd49d4c-x2czc default 0 (defa ult) 746bebf244884 26 hours ago Ready webshop-7f9fd49d4c-kqllw default 0 (defa ult) d39bb441ef944 29 hours ago Ready lab4-pod default 0 (defa ult) 6b4abfa363771 30 hours ago Ready webserver-76d44586d-8gqhf default 0 (defa ult) a09dd92ff2186 41 hours ago Ready pv-pod default 
0 (defa ult) 4eb967073bfbd 42 hours ago Ready morevol default 0 (defa ult) ced6eefe01d16 47 hours ago Ready deploydaemon-zzllp default 0 (defa ult) 11c7ed7ad36e9 47 hours ago Ready web-2 default 0 (defa ult) b1be4ab59e2ca 47 hours ago Ready web-1 default 0 (defa ult) 15ef4cc356862 47 hours ago Ready web-0 default 0 (defa ult) 776a9d9f213bc 47 hours ago Ready two-containers default 0 (defa ult) ... [root@k8s cka]# crictl inspect cadb21950a794 { "status": { "id": "cadb21950a7944a7fccd11cdd28bfc0b243338638f970e83055f0f3cc0d4f104", "metadata": { "attempt": 1, "name": "POD" }, "state": "CONTAINER_RUNNING", "createdAt": "2024-02-01T15:18:29.256532148-05:00", "startedAt": "2024-02-01T15:18:29.524703813-05:00", "finishedAt": "0001-01-01T00:00:00Z", "exitCode": 0, "image": { "annotations": {}, "image": "registry.k8s.io/pause:3.9" }, "imageRef": "docker-pullable://registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097", "reason": "", "message": "", "labels": { "component": "kube-controller-manager", "io.kubernetes.pod.name": "kube-controller-manager-k8s.example.pl", "io.kubernetes.pod.namespace": "kube-system", "io.kubernetes.pod.uid": "5579bcc112143af09d2938747a302b57", "tier": "control-plane" }, "annotations": { "kubernetes.io/config.hash": "5579bcc112143af09d2938747a302b57", "kubernetes.io/config.seen": "2024-02-01T15:18:28.614844905-05:00", "kubernetes.io/config.source": "file" }, "mounts": [], "logPath": "" }, "info": { "sandboxID": "", "pid": 14988 } } [root@k8s cka]# crictl images IMAGE TAG IMAGE ID SIZE busybox latest 3f57d9401f8d4 4.26MB centos 7 eeb6ee3f44bd0 204MB gcr.io/google-samples/hello-app 2.0 f59157bf39125 27.2MB gcr.io/k8s-minikube/storage-provisioner v5 6e38f40d628db 31.5MB k8s.gcr.io/nginx-slim 0.8 18ea23a675dae 110MB kubernetesui/dashboard <none> 07655ddf2eebe 246MB kubernetesui/metrics-scraper <none> 115053965e86b 43.8MB nginx latest b690f5f0a2d53 187MB nginx <none> a8758716bb6aa 187MB 
registry.k8s.io/coredns/coredns v1.10.1 ead0a4a53df89 53.6MB registry.k8s.io/etcd 3.5.9-0 73deb9a3f7025 294MB registry.k8s.io/ingress-nginx/controller <none> 2bdab7410148a 261MB registry.k8s.io/ingress-nginx/kube-webhook-certgen <none> eb825d2bb76b9 53.6MB registry.k8s.io/kube-apiserver v1.28.3 5374347291230 126MB registry.k8s.io/kube-controller-manager v1.28.3 10baa1ca17068 122MB registry.k8s.io/kube-proxy v1.28.3 bfc896cf80fba 73.1MB registry.k8s.io/kube-scheduler v1.28.3 6d1b4fd1b182d 60.1MB registry.k8s.io/pause 3.9 e6f1816883972 744kB [root@k8s cka]# crictl pull docker.io/library/mysql Image is up to date for mysql@sha256:d7c20c5ba268c558f4fac62977f8c7125bde0630ff8946b08dde44135ef40df3 [root@k8s cka]# crictl --help NAME: crictl - client for CRI USAGE: crictl [global options] command [command options] [arguments...] VERSION: v1.24.1 COMMANDS: attach Attach to a running container create Create a new container exec Run a command in a running container version Display runtime version information images, image, img List images inspect Display the status of one or more containers inspecti Return the status of one or more images imagefsinfo Return image filesystem info inspectp Display the status of one or more pods logs Fetch the logs of a container port-forward Forward local port to a pod ps List containers pull Pull an image from a registry run Run a new container inside a sandbox runp Run a new pod rm Remove one or more containers rmi Remove one or more images rmp Remove one or more pods pods List pods start Start one or more created containers ... |
Static Pods
- The kubelet systemd process is configured to run static Pods from the /etc/kubernetes/manifests directory.
- On the control node, static Pods are an essential part of how Kubernetes works: systemd starts the kubelet, and the kubelet starts the core Kubernetes services as static Pods.
- Administrators can manually add static Pods if so desired: just copy a manifest file into the /etc/kubernetes/manifests directory and the kubelet process will pick it up.
- To modify the path where the kubelet picks up static Pods, edit staticPodPath in /var/lib/kubelet/config.yaml and use sudo systemctl restart kubelet to restart the kubelet. Never do this on the control node!
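For reference, the relevant fragment of the kubelet configuration looks roughly like this (a sketch showing only the staticPodPath setting, with its kubeadm default; the real file contains many more settings):

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
```

After changing staticPodPath, the kubelet must be restarted before it watches the new directory.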
Running Static Pods
kubectl run staticpod --image=nginx --dry-run=client -o yaml > staticpod.yaml
sudo cp staticpod.yaml /etc/kubernetes/manifests/
kubectl get pods -o wide
[root@k8s cka]# kubectl run staticpod --image=nginx --dry-run=client -o yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: run: staticpod name: staticpod spec: containers: - image: nginx name: staticpod resources: {} dnsPolicy: ClusterFirst restartPolicy: Always status: {} [root@k8s /]# kubectl run staticpod --image=nginx --dry-run=client -o yaml > /etc/kubernetes/manifests/staticpod.yaml [root@k8s /]# kubectl get pods NAME READY STATUS RESTARTS AGE apples-78656fd5db-4rpj7 1/1 Running 0 23h apples-78656fd5db-qsm4x 1/1 Running 0 23h apples-78656fd5db-t82tg 1/1 Running 0 23h deploydaemon-zzllp 1/1 Running 0 2d firstnginx-d8679d567-249g9 1/1 Running 0 3d1h firstnginx-d8679d567-66c4s 1/1 Running 0 3d1h firstnginx-d8679d567-72qbd 1/1 Running 0 3d1h firstnginx-d8679d567-rhhlz 1/1 Running 0 2d8h init-demo 1/1 Running 0 2d10h lab4-pod 1/1 Running 0 30h morevol 2/2 Running 86 (45m ago) 43h mydaemon-d4dcd 1/1 Running 0 2d newdep-749c9b5675-2x9mb 1/1 Running 0 23h nginxsvc-5f8b7d4f4d-dtrs7 1/1 Running 0 24h pv-pod 1/1 Running 0 42h sleepy 1/1 Running 48 (11m ago) 2d10h staticpod-k8s.netico.pl 1/1 Running 0 29s testpod 1/1 Running 0 3d1h two-containers 2/2 Running 282 (7m30s ago) 2d7h web-0 1/1 Running 0 2d13h web-1 1/1 Running 0 2d web-2 1/1 Running 0 2d webserver-76d44586d-8gqhf 1/1 Running 0 31h webshop-7f9fd49d4c-92nj2 1/1 Running 0 27h webshop-7f9fd49d4c-kqllw 1/1 Running 0 27h webshop-7f9fd49d4c-x2czc 1/1 Running 0 27h |
That’s how it works: the moment you place a manifest in the static Pod path, the kubelet picks it up and starts the Pod automatically.
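The reverse also holds: a static Pod is owned by the kubelet, not by the API server, so deleting its mirror Pod with kubectl only makes the kubelet recreate it. A sketch of removing the static Pod created above (assuming the default manifest path):

```shell
# Deleting the mirror Pod via the API does not help; the kubelet recreates it.
# To remove a static Pod for good, remove its manifest file instead:
sudo rm /etc/kubernetes/manifests/staticpod.yaml

# The kubelet notices the file is gone and tears the Pod down shortly after:
kubectl get pods | grep staticpod
```

This is a cluster-dependent command sketch; the manifest filename matches the example above.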
Managing Node State
- kubectl cordon is used to mark a node as unschedulable
- kubectl drain is used to mark a node as unschedulable and remove all running Pods from it
  - Pods that have been started from a DaemonSet will not be removed by kubectl drain; add --ignore-daemonsets to ignore them
  - Add --delete-emptydir-data to delete data from emptyDir Pod volumes
- While using cordon or drain, a taint is set on the node
- Use kubectl uncordon to get the node back in a schedulable state
Managing Node State – Commands
kubectl cordon worker2
kubectl describe node worker2 # look for taints
kubectl get nodes
kubectl uncordon worker2
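Putting the drain flags from the bullets together, a typical maintenance cycle on a worker looks like this (a sketch, assuming a node named worker2 as in the commands above):

```shell
# Evict all Pods and mark the node unschedulable; DaemonSet Pods are skipped
# and emptyDir volume data is discarded.
kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data

# The node now shows Ready,SchedulingDisabled and carries the
# node.kubernetes.io/unschedulable:NoSchedule taint.
kubectl get nodes

# ... perform the maintenance ...

# Make the node schedulable again.
kubectl uncordon worker2
```

Note that uncordon removes the taint but does not move evicted Pods back; they stay wherever the scheduler placed them.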
[root@k8s /]# kubectl get nodes
NAME             STATUS   ROLES           AGE    VERSION
k8s.example.pl   Ready    control-plane   3d6h   v1.28.3
[root@k8s /]# kubectl get pods
NAME                         READY   STATUS    RESTARTS         AGE
apples-78656fd5db-4rpj7      1/1     Running   0                23h
apples-78656fd5db-qsm4x      1/1     Running   0                23h
apples-78656fd5db-t82tg      1/1     Running   0                23h
deploydaemon-zzllp           1/1     Running   0                2d
firstnginx-d8679d567-249g9   1/1     Running   0                3d1h
firstnginx-d8679d567-66c4s   1/1     Running   0                3d1h
firstnginx-d8679d567-72qbd   1/1     Running   0                3d1h
firstnginx-d8679d567-rhhlz   1/1     Running   0                2d8h
init-demo                    1/1     Running   0                2d10h
lab4-pod                     1/1     Running   0                31h
morevol                      2/2     Running   88 (11m ago)     44h
mydaemon-d4dcd               1/1     Running   0                2d
newdep-749c9b5675-2x9mb      1/1     Running   0                24h
nginxsvc-5f8b7d4f4d-dtrs7    1/1     Running   0                25h
pv-pod                       1/1     Running   0                43h
sleepy                       1/1     Running   48 (38m ago)     2d11h
testpod                      1/1     Running   0                3d1h
two-containers               2/2     Running   285 (4m7s ago)   2d8h
web-0                        1/1     Running   0                2d13h
web-1                        1/1     Running   0                2d
web-2                        1/1     Running   0                2d
webserver-76d44586d-8gqhf    1/1     Running   0                32h
webshop-7f9fd49d4c-92nj2     1/1     Running   0                27h
webshop-7f9fd49d4c-kqllw     1/1     Running   0                27h
webshop-7f9fd49d4c-x2czc     1/1     Running   0                27h
[root@k8s /]# kubectl cordon k8s.example.pl
node/k8s.example.pl cordoned
[root@k8s /]# kubectl get nodes
NAME             STATUS                     ROLES           AGE    VERSION
k8s.example.pl   Ready,SchedulingDisabled   control-plane   3d6h   v1.28.3
[root@k8s /]# kubectl describe node k8s.example.pl
Name:               k8s.example.pl
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s.example.pl
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_01_31T10_03_27_0700
                    minikube.k8s.io/version=v1.32.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 31 Jan 2024 10:03:23 -0500
Taints:             node.kubernetes.io/unschedulable:NoSchedule
Unschedulable:      true
Lease:
  HolderIdentity:  k8s.example.pl
  AcquireTime:     <unset>
  RenewTime:       Sat, 03 Feb 2024 16:42:05 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 03 Feb 2024 16:37:14 -0500   Sat, 03 Feb 2024 13:59:11 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 03 Feb 2024 16:37:14 -0500   Sat, 03 Feb 2024 13:59:11 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 03 Feb 2024 16:37:14 -0500   Sat, 03 Feb 2024 13:59:11 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 03 Feb 2024 16:37:14 -0500   Sat, 03 Feb 2024 13:59:11 -0500   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.30.9.24
  Hostname:    k8s.example.pl
Capacity:
  cpu:                8
  ephemeral-storage:  64177544Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16099960Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  64177544Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16099960Ki
  pods:               110
System Info:
  Machine ID:                 0cc7c63085694b83adcd204eff748ff8
  System UUID:                3e3ec47d-1fe1-b5b7-cbca-edd2da14db37
  Boot ID:                    79a4e58f-5d2a-4f44-ad34-520bab9b01cc
  Kernel Version:             4.18.0-500.el8.x86_64
  OS Image:                   CentOS Stream 8
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://25.0.1
  Kubelet Version:            v1.28.3
  Kube-Proxy Version:         v1.28.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (35 in total)
  Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------             ----                                        ------------  ----------  ---------------  -------------  ---
  default               apples-78656fd5db-4rpj7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  default               apples-78656fd5db-qsm4x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  default               apples-78656fd5db-t82tg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23h
  default               deploydaemon-zzllp                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d
  default               firstnginx-d8679d567-249g9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h
  default               firstnginx-d8679d567-66c4s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h
  default               firstnginx-d8679d567-72qbd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h
  default               firstnginx-d8679d567-rhhlz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d8h
  default               init-demo                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d10h
  default               lab4-pod                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31h
  default               morevol                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         44h
  default               mydaemon-d4dcd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d
  default               newdep-749c9b5675-2x9mb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24h
  default               nginxsvc-5f8b7d4f4d-dtrs7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25h
  default               pv-pod                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         43h
  default               sleepy                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d11h
  default               testpod                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d1h
  default               two-containers                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d8h
  default               web-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d13h
  default               web-1                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d
  default               web-2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d
  default               webserver-76d44586d-8gqhf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         32h
  default               webshop-7f9fd49d4c-92nj2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27h
  default               webshop-7f9fd49d4c-kqllw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27h
  default               webshop-7f9fd49d4c-x2czc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27h
  ingress-nginx         ingress-nginx-controller-6858749594-27tm9   100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         25h
  kube-system           coredns-5dd5756b68-sgfkj                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     3d6h
  kube-system           etcd-k8s.example.pl                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3d6h
  kube-system           kube-apiserver-k8s.example.pl               250m (3%)     0 (0%)      0 (0%)           0 (0%)         3d6h
  kube-system           kube-controller-manager-k8s.example.pl      200m (2%)     0 (0%)      0 (0%)           0 (0%)         3d6h
  kube-system           kube-proxy-5nmms                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d6h
  kube-system           kube-scheduler-k8s.example.pl               100m (1%)     0 (0%)      0 (0%)           0 (0%)         3d6h
  kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d6h
  kubernetes-dashboard  dashboard-metrics-scraper-7fd5cb4ddc-9ld5n  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d1h
  kubernetes-dashboard  kubernetes-dashboard-8694d4445c-xjlsr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d1h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (10%)  0 (0%)
  memory             260Mi (1%)  170Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason              Age   From     Message
  ----    ------              ----  ----     -------
  Normal  NodeNotSchedulable  27s   kubelet  Node k8s.example.pl status is now: NodeNotSchedulable
[root@k8s /]# kubectl uncordon k8s.example.pl
node/k8s.example.pl uncordoned
[root@k8s /]# kubectl get nodes
NAME             STATUS   ROLES           AGE    VERSION
k8s.example.pl   Ready    control-plane   3d6h   v1.28.3
Managing Node Services
- The container runtime (often containerd) and the kubelet are managed by the Linux systemd service manager
- Use systemctl status kubelet to check the current status of the kubelet
- To manually start it, use sudo systemctl start kubelet
- Notice that Pods scheduled on a node show up as container processes in ps aux output. Don’t use Linux tools to manage Pods!
Managing Node Services – Commands
ps aux | grep kubelet
ps aux | grep containerd
systemctl status kubelet
sudo systemctl stop kubelet
sudo systemctl start kubelet
[root@k8s /]# ps aux | grep kubelet
root       15161  4.2  1.9 1042036 311412 ?      Ssl  lut01 127:51 kube-apiserver --advertise-address=172.30.9.24 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
root      555436  4.4  0.8 1938596 133496 ?      Ssl  13:59   8:08 /var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=k8s.example.pl --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.30.9.24
root      591406  0.0  0.0   12216  1168 pts/0   S+   17:01   0:00 grep --color=auto kubelet
[root@k8s /]# ps aux | grep containerd
root       14187  4.3  0.9 3733916 153944 ?      Ssl  lut01 131:09 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       14899  0.0  0.1  720064  17276 ?      Sl   lut01   0:09 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 437b912e567ef421738d84dfea73f07f9854e00f9b5c142b5506db6d5ed580d1 -address /run/containerd/containerd.sock
root       14900  0.0  0.0  720064  15620 ?      Sl   lut01   0:09 /usr/bin/containerd-shim-runc-v2 -namespace moby -id cadb21950a7944a7fccd11cdd28bfc0b243338638f970e83055f0f3cc0d4f104 -address /run/containerd/containerd.sock
root       14915  0.0  0.0  720384  15992 ?      Sl   lut01   0:08 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6e075c204c3c87f7f8340dccd39d5426ac3038dbbf5f54913d4919009e70834c -address /run/containerd/containerd.sock
...
[root@k8s /]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2024-02-03 13:59:11 EST; 3h 3min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 555436 (kubelet)
    Tasks: 16 (limit: 100376)
   Memory: 70.8M
   CGroup: /system.slice/kubelet.service
           └─555436 /var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/c>

lut 03 17:02:46 k8s.example.pl kubelet[555436]: E0203 17:02:46.407467  555436 desired_state_of_world_populator.go:320] "Error processing volume" err="err>
...
[root@k8s /]# systemctl cat kubelet
# /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
StartLimitIntervalSec=0

[Service]
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet
Restart=always
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml>
[root@k8s /]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2024-02-03 13:59:11 EST; 3h 7min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 555436 (kubelet)
    Tasks: 16 (limit: 100376)
   Memory: 70.4M
   CGroup: /system.slice/kubelet.service
           └─555436 /var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/c>

lut 03 17:06:57 k8s.example.pl kubelet[555436]: W0203 17:06:57.828054  555436 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch o>
lut 03 17:06:58 k8s.example.pl kubelet[555436]: E0203 17:06:58.406624  555436 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:">
lut 03 17:06:58 k8s.example.pl kubelet[555436]: E0203 17:06:58.449716  555436 desired_state_of_world_populator.go:320] "Error processing volume" err="err>
lut 03 17:06:58 k8s.example.pl kubelet[555436]: I0203 17:06:58.603442  555436 scope.go:117] "RemoveContainer" containerID="14c60aafe505b57dcec7dbcf2d50dd>
lut 03 17:06:58 k8s.example.pl kubelet[555436]: I0203 17:06:58.603939  555436 scope.go:117] "RemoveContainer" containerID="c739ddcefc6ada2c434f3e403abefb>
lut 03 17:06:58 k8s.example.pl kubelet[555436]: I0203 17:06:58.604584  555436 status_manager.go:853] "Failed to get status for pod" podUID="ea109a012b2a2>
lut 03 17:06:58 k8s.example.pl kubelet[555436]: E0203 17:06:58.687274  555436 kubelet.go:1907] "Unable to attach or mount volumes for pod; skipping pod" >
lut 03 17:06:58 k8s.example.pl kubelet[555436]: E0203 17:06:58.687341  555436 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[>
lut 03 17:06:58 k8s.example.pl kubelet[555436]: W0203 17:06:58.745783  555436 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed >
lut 03 17:06:58 k8s.example.pl kubelet[555436]: E0203 17:06:58.745856  555436 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed >
[root@k8s /]# kill -9 15161
[root@k8s /]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2024-02-03 13:59:11 EST; 3h 8min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 555436 (kubelet)
    Tasks: 16 (limit: 100376)
   Memory: 71.0M
   CGroup: /system.slice/kubelet.service
           └─555436 /var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/c>

lut 03 17:07:00 k8s.example.pl kubelet[555436]: E0203 17:07:00.923579  555436 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch >
lut 03 17:07:01 k8s.example.pl kubelet[555436]: E0203 17:07:01.439804  555436 desired_state_of_world_populator.go:320] "Error processing volume" err="err>
lut 03 17:07:01 k8s.example.pl kubelet[555436]: E0203 17:07:01.688453  555436 kubelet.go:1907] "Unable to attach or mount volumes for pod; skipping pod" >
lut 03 17:07:01 k8s.example.pl kubelet[555436]: E0203 17:07:01.688506  555436 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[>
lut 03 17:07:02 k8s.example.pl kubelet[555436]: I0203 17:07:02.386672  555436 scope.go:117] "RemoveContainer" containerID="e391b0bd09e900fcb6aa768fa23d0b>
lut 03 17:07:02 k8s.example.pl kubelet[555436]: E0203 17:07:02.387187  555436 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartCo>
lut 03 17:07:08 k8s.example.pl kubelet[555436]: E0203 17:07:08.490599  555436 desired_state_of_world_populator.go:320] "Error processing volume" err="err>
lut 03 17:07:08 k8s.example.pl kubelet[555436]: E0203 17:07:08.687894  555436 kubelet.go:1907] "Unable to attach or mount volumes for pod; skipping pod" >
lut 03 17:07:08 k8s.example.pl kubelet[555436]: E0203 17:07:08.687958  555436 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[>
lut 03 17:07:11 k8s.example.pl kubelet[555436]: E0203 17:07:11.331441  555436 file.go:187] "Could not process manifest file" err="/etc/kubernetes/manifes>
As we can see, even after the process is killed, systemd brings the service right back up. (Note that PID 15161 in the transcript is actually the kube-apiserver, which matched grep kubelet only because of its --kubelet-* options; the kubelet itself runs as PID 555436.) If instead you stop the service cleanly with systemctl stop kubelet, systemd will not restart it: Restart=always only covers unexpected exits, so it works for disasters, not for intended operations.
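This restart behavior can be confirmed from systemd itself rather than by killing processes; a sketch on a node where the kubelet unit is installed:

```shell
# Show the restart policy and delay that systemd applies to the kubelet.
# Restart=always means an unexpected exit is retried after RestartUSec,
# while an explicit 'systemctl stop' is a clean stop and is not retried.
systemctl show kubelet -p Restart -p RestartUSec
```

The property names (Restart, RestartUSec) are standard systemd service properties; the values shown will match the unit file printed by systemctl cat kubelet above.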
Lab: Managing Static Pods
- On node worker1, run a static Pod with the name mypod, using an Nginx image and no further configuration
- Use the appropriate tools to verify that the static Pod has started successfully
[root@k8s ~]# kubectl run static --image=nginx --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: static
  name: static
spec:
  containers:
  - image: nginx
    name: static
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8s ~]# ssh root@worker2
ssh: Could not resolve hostname worker2: Name or service not known
[root@k8s ~]# cd /etc/kubernetes/manifests
[root@k8s manifests]# vi mystaticpod.yaml
[root@k8s manifests]# cat mystaticpod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: static
  name: mystaticpod
spec:
  containers:
  - image: nginx
    name: mystaticpod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8s manifests]# crictl pods
POD ID          CREATED              STATE   NAME                         NAMESPACE   ATTEMPT   RUNTIME
51c54def1614c   About a minute ago   Ready   mystaticpod-k8s.example.pl   default     0         (default)
93f7305ee0d77   37 hours ago         Ready   apples-78656fd5db-4rpj7      default     0         (default)
9793a44f77d5a   37 hours ago         Ready   apples-78656fd5db-qsm4x      default     0         (default)
...
[root@k8s manifests]# kubectl get pods
NAME                         READY   STATUS    RESTARTS          AGE
apples-78656fd5db-4rpj7      1/1     Running   0                 36h
apples-78656fd5db-qsm4x      1/1     Running   0                 36h
apples-78656fd5db-t82tg      1/1     Running   0                 36h
deploydaemon-zzllp           1/1     Running   0                 2d13h
firstnginx-d8679d567-249g9   1/1     Running   0                 3d14h
firstnginx-d8679d567-66c4s   1/1     Running   0                 3d14h
firstnginx-d8679d567-72qbd   1/1     Running   0                 3d14h
firstnginx-d8679d567-rhhlz   1/1     Running   0                 2d21h
init-demo                    1/1     Running   0                 2d23h
lab4-pod                     1/1     Running   0                 44h
morevol                      2/2     Running   114 (7m50s ago)   2d9h
mydaemon-d4dcd               1/1     Running   0                 2d13h
mystaticpod-k8s.example.pl   1/1     Running   0                 97s
newdep-749c9b5675-2x9mb      1/1     Running   0                 37h
...