{"id":5326,"date":"2023-11-18T16:16:42","date_gmt":"2023-11-18T15:16:42","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=5326"},"modified":"2025-05-17T19:33:48","modified_gmt":"2025-05-17T17:33:48","slug":"kubernetes_node_maintenance","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/11\/18\/kubernetes_node_maintenance\/","title":{"rendered":"Kubernetes Node Maintenance"},"content":{"rendered":"<p><!--more--><\/p>\n<p><span style=\"color: #3366ff;\">Kubernetes Monitoring<\/span><\/p>\n<ul>\n<li>Kubernetes monitoring is offered by the integrated Metrics Server<\/li>\n<li>The server, after installation, exposes a standard API and can be used to<br \/>\nexpose custom metrics<\/li>\n<li>Use <code>kubectl top<\/code> to get a top-like view of resource usage<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Setting up Metrics Server<\/span><\/p>\n<ul>\n<li>See <code>https:\/\/github.com\/kubernetes-sigs\/metrics-server.git<\/code><\/li>\n<li>Read the GitHub documentation!<\/li>\n<li><code>kubectl apply -f https:\/\/github.com\/kubernetes-sigs\/metrics-server\/releases\/latest\/download\/components.yaml<\/code><\/li>\n<li><code>kubectl -n kube-system get pods <\/code># look for metrics-server<\/li>\n<li><code>kubectl -n kube-system edit deployment metrics-server<\/code>\n<ul>\n<li>In <code>spec.template.spec.containers.args<\/code>, use the following\n<ul>\n<li><code>- --kubelet-insecure-tls<\/code><\/li>\n<li><code>- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname<\/code><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n<li><code>kubectl -n kube-system logs metrics-server&lt;TAB&gt; <\/code>should show &#8220;Generated self-signed cert&#8221; and &#8220;Serving securely on [::]:10250&#8221;<\/li>\n<li><code>kubectl top pods --all-namespaces<\/code> will show the most active Pods<\/li>\n<\/ul>\n<p>Let&#8217;s investigate the Metrics Server.<\/p>\n<pre class=\"lang:default mark:20 decode:true\">[root@k8s manifests]# 
kubectl apply -f https:\/\/github.com\/kubernetes-sigs\/metrics-server\/releases\/latest\/download\/components.yaml\r\nserviceaccount\/metrics-server created\r\nclusterrole.rbac.authorization.k8s.io\/system:aggregated-metrics-reader created\r\nclusterrole.rbac.authorization.k8s.io\/system:metrics-server created\r\nrolebinding.rbac.authorization.k8s.io\/metrics-server-auth-reader created\r\nclusterrolebinding.rbac.authorization.k8s.io\/metrics-server:system:auth-delegator created\r\nclusterrolebinding.rbac.authorization.k8s.io\/system:metrics-server created\r\nservice\/metrics-server created\r\ndeployment.apps\/metrics-server created\r\napiservice.apiregistration.k8s.io\/v1beta1.metrics.k8s.io created\r\n\r\n[root@k8s manifests]# kubectl -n kube-system get pods\r\nNAME                                    READY   STATUS             RESTARTS        AGE\r\ncoredns-5dd5756b68-sgfkj                0\/1     CrashLoopBackOff   799 (67s ago)   4d1h\r\netcd-k8s.example.pl                      1\/1     Running            1 (2d20h ago)   4d1h\r\nkube-apiserver-k8s.example.pl            1\/1     Running            5 (18h ago)     4d1h\r\nkube-controller-manager-k8s.example.pl   1\/1     Running            3 (2d20h ago)   4d1h\r\nkube-proxy-5nmms                        1\/1     Running            1 (2d20h ago)   4d1h\r\nkube-scheduler-k8s.example.pl            1\/1     Running            1 (2d20h ago)   4d1h\r\nmetrics-server-6db4d75b97-z54v6         0\/1     Running            0               60s\r\nstorage-provisioner                     1\/1     Running            0               4d1h\r\n\r\n[root@k8s manifests]# kubectl logs -n kube-system metrics-server-6db4d75b97-z54v6\r\nI0204 16:24:12.739473       1 serving.go:374] Generated self-signed cert (\/tmp\/apiserver.crt, \/tmp\/apiserver.key)\r\nI0204 16:24:13.292873       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager\r\nI0204 16:24:13.403268       1 requestheader_controller.go:169] Starting 
RequestHeaderAuthRequestController\r\nI0204 16:24:13.403309       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController\r\nI0204 16:24:13.403390       1 configmap_cafile_content.go:202] \"Starting controller\" name=\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\"\r\nI0204 16:24:13.403423       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\r\nI0204 16:24:13.403458       1 configmap_cafile_content.go:202] \"Starting controller\" name=\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\"\r\nI0204 16:24:13.403476       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\r\nI0204 16:24:13.404129       1 secure_serving.go:213] Serving securely on [::]:10250\r\nI0204 16:24:13.404193       1 dynamic_serving_content.go:132] \"Starting controller\" name=\"serving-cert::\/tmp\/apiserver.crt::\/tmp\/apiserver.key\"\r\nI0204 16:24:13.404331       1 tlsconfig.go:240] \"Starting DynamicServingCertificateController\"\r\nI0204 16:24:13.503728       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\r\nI0204 16:24:13.503792       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController\r\nI0204 16:24:13.503799       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\r\nI0204 16:26:56.309835       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nI0204 16:26:59.392609       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:26:59.424509       1 scraper.go:149] \"Failed to scrape node\" err=\"Get 
\\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\nI0204 16:27:06.309637       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:27:14.400536       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\nI0204 16:27:16.311738       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nI0204 16:27:26.309031       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:27:29.440485       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\nI0204 16:27:36.311114       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:27:44.417699       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\nI0204 16:27:46.309503       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nI0204 16:27:56.309958       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:27:59.456455       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\n[root@k8s manifests]#\r\n\r\n[root@k8s manifests]# kubectl -n kube-system edit deployments.apps metrics-server\r\ndeployment.apps\/metrics-server edited\r\n[root@k8s manifests]# kubectl -n kube-system edit deployments.apps 
metrics-server -o yaml\r\n<\/pre>\n<p>There is an issue in the Metrics Server, and we have to edit the metrics-server deployment. One line has been added:<\/p>\n<pre class=\"lang:default mark:7 decode:true\">   spec:\r\n      containers:\r\n      - args:\r\n        - --cert-dir=\/tmp\r\n        - --secure-port=10250\r\n        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\r\n        - --kubelet-insecure-tls\r\n        - --kubelet-use-node-status-port\r\n        - --metric-resolution=15s<\/pre>\n<p>And now:<\/p>\n<pre class=\"lang:default decode:true\">[root@k8s manifests]# kubectl -n kube-system get pods\r\nNAME                                    READY   STATUS             RESTARTS        AGE\r\ncoredns-5dd5756b68-sgfkj                0\/1     CrashLoopBackOff   803 (42s ago)   4d1h\r\netcd-k8s.example.pl                      1\/1     Running            1 (2d20h ago)   4d1h\r\nkube-apiserver-k8s.example.pl            1\/1     Running            5 (18h ago)     4d1h\r\nkube-controller-manager-k8s.example.pl   1\/1     Running            3 (2d20h ago)   4d1h\r\nkube-proxy-5nmms                        1\/1     Running            1 (2d20h ago)   4d1h\r\nkube-scheduler-k8s.example.pl            1\/1     Running            1 (2d20h ago)   4d1h\r\nmetrics-server-5f8988d664-7r8j7         0\/1     Running            0               12m\r\nmetrics-server-6db4d75b97-z54v6         0\/1     Running            0               21m\r\nstorage-provisioner                     1\/1     Running            0               4d1h\r\n\r\n[root@k8s manifests]# kubectl logs -n kube-system metrics-server-5f8988d664-7r8j7\r\nI0204 16:33:15.992406       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:33:19.328452       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\nI0204 
16:33:25.991109       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:33:34.368456       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\nI0204 16:33:35.991518       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nI0204 16:33:45.989133       1 server.go:191] \"Failed probe\" probe=\"metric-storage-ready\" err=\"no metrics to serve\"\r\nE0204 16:33:49.344463       1 scraper.go:149] \"Failed to scrape node\" err=\"Get \\\"https:\/\/172.30.9.24:10250\/metrics\/resource\\\": dial tcp 172.30.9.24:10250: connect: no route to host\" node=\"k8s.example.pl\"\r\n...\r\n\r\n[root@k8s manifests]# firewall-cmd --permanent --add-port=10250\/tcp\r\nsuccess\r\n[root@k8s manifests]# firewall-cmd --reload\r\nsuccess\r\n\r\n[root@k8s manifests]# kubectl -n kube-system get pods\r\nNAME                                    READY   STATUS             RESTARTS          AGE\r\ncoredns-5dd5756b68-sgfkj                0\/1     CrashLoopBackOff   803 (2m28s ago)   4d1h\r\netcd-k8s.example.pl                      1\/1     Running            1 (2d20h ago)     4d1h\r\nkube-apiserver-k8s.example.pl            1\/1     Running            5 (18h ago)       4d1h\r\nkube-controller-manager-k8s.example.pl   1\/1     Running            3 (2d20h ago)     4d1h\r\nkube-proxy-5nmms                        1\/1     Running            1 (2d20h ago)     4d1h\r\nkube-scheduler-k8s.example.pl            1\/1     Running            1 (2d20h ago)     4d1h\r\nmetrics-server-5f8988d664-7r8j7         1\/1     Running            0                 14m\r\nstorage-provisioner                     1\/1     Running            0                 4d1h\r\n\r\n[root@k8s manifests]# kubectl top pods\r\nNAME                         CPU(cores)   MEMORY(bytes)\r\napples-78656fd5db-4rpj7    
  0m           7Mi\r\napples-78656fd5db-qsm4x      0m           7Mi\r\napples-78656fd5db-t82tg      0m           7Mi\r\ndeploydaemon-zzllp           0m           7Mi\r\nfirstnginx-d8679d567-249g9   0m           7Mi\r\nfirstnginx-d8679d567-66c4s   0m           7Mi\r\nfirstnginx-d8679d567-72qbd   0m           7Mi\r\nfirstnginx-d8679d567-rhhlz   0m           7Mi\r\ninit-demo                    0m           7Mi\r\nlab4-pod                     0m           7Mi\r\nmorevol                      0m           0Mi\r\nmydaemon-d4dcd               0m           7Mi\r\nmystaticpod-k8s.example.pl    0m           7Mi\r\nnewdep-749c9b5675-2x9mb      0m           2Mi\r\nnginxsvc-5f8b7d4f4d-dtrs7    0m           7Mi\r\npv-pod                       0m           7Mi\r\nsleepy                       0m           0Mi\r\ntestpod                      0m           7Mi\r\ntwo-containers               0m           7Mi\r\nweb-0                        1m           2Mi\r\nweb-1                        1m           2Mi\r\nweb-2                        1m           2Mi\r\nwebserver-76d44586d-8gqhf    0m           7Mi\r\nwebshop-7f9fd49d4c-92nj2     0m           7Mi\r\nwebshop-7f9fd49d4c-kqllw     0m           7Mi\r\nwebshop-7f9fd49d4c-x2czc     0m           7Mi\r\n\r\n[root@k8s manifests]# kubectl top nodes\r\nNAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%\r\nk8s.example.pl   288m         3%     3330Mi          21%\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Etcd<\/span><\/p>\n<ul>\n<li>The etcd is a core Kubernetes service that contains all resources that have<br \/>\nbeen created<\/li>\n<li>It is started by the kubelet as a static Pod on the control node<\/li>\n<li>Losing the etcd means losing all your configuration<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Etcd Backup<\/span><\/p>\n<ul>\n<li>To back up the etcd, root access is required to run the <code>etcdctl<\/code> tool<\/li>\n<li>Use <code>sudo apt install etcd-client<\/code> to install this 
tool<\/li>\n<li><code>etcdctl<\/code> uses the wrong API version by default; fix this by using <code>sudo ETCDCTL_API=3 etcdctl ... snapshot save<\/code><\/li>\n<li>To use <code>etcdctl<\/code>, you need to specify the etcd service API endpoint, as well as the cacert, cert, and key to be used<\/li>\n<li>Values for all of these can be obtained by using <code>ps aux | grep etcd<\/code><\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Backing up the Etcd<\/span><\/p>\n<ul>\n<li><code>sudo apt install etcd-client<\/code><\/li>\n<li><code>sudo etcdctl --help; sudo ETCDCTL_API=3 etcdctl --help<\/code><\/li>\n<li><code>ps aux | grep etcd<\/code><\/li>\n<li><code>sudo ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 --cacert \/etc\/kubernetes\/pki\/etcd\/ca.crt --cert \/etc\/kubernetes\/pki\/etcd\/server.crt --key \/etc\/kubernetes\/pki\/etcd\/server.key get \/ --prefix --keys-only<\/code><\/li>\n<li><code>sudo ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 --cacert \/etc\/kubernetes\/pki\/etcd\/ca.crt --cert \/etc\/kubernetes\/pki\/etcd\/server.crt --key \/etc\/kubernetes\/pki\/etcd\/server.key snapshot save \/tmp\/etcdbackup.db<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s ~]# ETCD_RELEASE=$(curl -s https:\/\/api.github.com\/repos\/etcd-io\/etcd\/releases\/latest|grep tag_name | cut -d '\"' -f 4)\r\n[root@k8s ~]# echo $ETCD_RELEASE\r\nv3.5.12\r\n[root@k8s ~]# wget https:\/\/github.com\/etcd-io\/etcd\/releases\/download\/${ETCD_RELEASE}\/etcd-${ETCD_RELEASE}-linux-amd64.tar.gz\r\n--2024-02-04 12:51:29--  https:\/\/github.com\/etcd-io\/etcd\/releases\/download\/v3.5.12\/etcd-v3.5.12-linux-amd64.tar.gz\r\nTranslacja github.com (github.com)... 140.82.121.3\r\n\u0141\u0105czenie si\u0119 z github.com (github.com)|140.82.121.3|:443... po\u0142\u0105czono.\r\n\u017b\u0105danie HTTP wys\u0142ano, oczekiwanie na odpowied\u017a... 
302 Found\r\nLokalizacja: https:\/\/objects.githubusercontent.com\/github-production-release-asset-2e65be\/11225014\/f198beb0-cda9-4776-bc21-3ee9ce967646?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240204%           2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Date=20240204T175129Z&amp;X-Amz-Expires=300&amp;X-Amz-Signature=58301bf185577765f3b913e6cb7647a1ec517a7cb6076d6c390bb28659a4a0e0&amp;X-Amz-SignedHeaders=host&amp;actor_id=0&amp;key_id=0&amp;repo_id=112250           14&amp;response-content-disposition=attachment%3B%20filename%3Detcd-v3.5.12-linux-amd64.tar.gz&amp;response-content-type=application%2Foctet-stream [pod\u0105\u017canie]\r\n--2024-02-04 12:51:29--  https:\/\/objects.githubusercontent.com\/github-production-release-asset-2e65be\/11225014\/f198beb0-cda9-4776-bc21-3ee9ce967646?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Credential=AKIAVCODYLSA53PQK4ZA           %2F20240204%2Fus-east-1%2Fs3%2Faws4_request&amp;X-Amz-Date=20240204T175129Z&amp;X-Amz-Expires=300&amp;X-Amz-Signature=58301bf185577765f3b913e6cb7647a1ec517a7cb6076d6c390bb28659a4a0e0&amp;X-Amz-SignedHeaders=host&amp;actor_id=0&amp;key_id=0&amp;re           po_id=11225014&amp;response-content-disposition=attachment%3B%20filename%3Detcd-v3.5.12-linux-amd64.tar.gz&amp;response-content-type=application%2Foctet-stream\r\nTranslacja objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.108.133, ...\r\n\u0141\u0105czenie si\u0119 z objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... po\u0142\u0105czono.\r\n\u017b\u0105danie HTTP wys\u0142ano, oczekiwanie na odpowied\u017a... 
200 OK\r\nD\u0142ugo\u015b\u0107: 20337842 (19M) [application\/octet-stream]\r\nZapis do: `etcd-v3.5.12-linux-amd64.tar.gz'\r\n\r\netcd-v3.5.12-linux-amd64.tar.gz                        100%[==========================================================================================================================&gt;]  19,40M  3,28MB\/s     w 13s\r\n\r\n2024-02-04 12:51:43 (1,51 MB\/s) - zapisano `etcd-v3.5.12-linux-amd64.tar.gz' [20337842\/20337842]\r\n\r\n[root@k8s ~]# tar xvf etcd-${ETCD_RELEASE}-linux-amd64.tar.gz\r\netcd-v3.5.12-linux-amd64\/\r\netcd-v3.5.12-linux-amd64\/README.md\r\netcd-v3.5.12-linux-amd64\/READMEv2-etcdctl.md\r\netcd-v3.5.12-linux-amd64\/etcdutl\r\netcd-v3.5.12-linux-amd64\/etcdctl\r\netcd-v3.5.12-linux-amd64\/Documentation\/\r\netcd-v3.5.12-linux-amd64\/Documentation\/README.md\r\netcd-v3.5.12-linux-amd64\/Documentation\/dev-guide\/\r\netcd-v3.5.12-linux-amd64\/Documentation\/dev-guide\/apispec\/\r\netcd-v3.5.12-linux-amd64\/Documentation\/dev-guide\/apispec\/swagger\/\r\netcd-v3.5.12-linux-amd64\/Documentation\/dev-guide\/apispec\/swagger\/v3election.swagger.json\r\netcd-v3.5.12-linux-amd64\/Documentation\/dev-guide\/apispec\/swagger\/rpc.swagger.json\r\netcd-v3.5.12-linux-amd64\/Documentation\/dev-guide\/apispec\/swagger\/v3lock.swagger.json\r\netcd-v3.5.12-linux-amd64\/README-etcdutl.md\r\netcd-v3.5.12-linux-amd64\/README-etcdctl.md\r\netcd-v3.5.12-linux-amd64\/etcd\r\n[root@k8s ~]# cd etcd-${ETCD_RELEASE}-linux-amd64\r\n[root@k8s etcd-v3.5.12-linux-amd64]# mv etcd* \/usr\/local\/bin\r\n\r\n[root@k8s etcd-v3.5.12-linux-amd64]# ps ax | grep etcd\r\n  15154 ?        
Ssl   75:24 etcd --advertise-client-urls=https:\/\/172.30.9.24:2379 --cert-file=\/var\/lib\/minikube\/certs\/etcd\/server.crt --client-cert-auth=true --data-dir=\/var\/lib\/minikube\/etcd --experimental-initial-co           rrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https:\/\/172.30.9.24:2380 --initial-cluster=k8s.netico.pl=https:\/\/172.30.9.24:2380 --key-file=\/var\/lib\/minikube\/certs\/etcd\/           server.key --listen-client-urls=https:\/\/127.0.0.1:2379,https:\/\/172.30.9.24:2379 --listen-metrics-urls=http:\/\/127.0.0.1:2381 --listen-peer-urls=https:\/\/172.30.9.24:2380 --name=k8s.netico.pl --peer-cert-file=\/var\/lib\/min           ikube\/certs\/etcd\/peer.crt --peer-client-cert-auth=true --peer-key-file=\/var\/lib\/minikube\/certs\/etcd\/peer.key --peer-trusted-ca-file=\/var\/lib\/minikube\/certs\/etcd\/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10           000 --trusted-ca-file=\/var\/lib\/minikube\/certs\/etcd\/ca.crt\r\n 592463 ?        
Ssl   52:12 kube-apiserver --advertise-address=172.30.9.24 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=\/var\/lib\/minikube\/certs\/ca.crt --enable-admission-plugins=NamespaceLif           ecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=\/var\/lib           \/minikube\/certs\/etcd\/ca.crt --etcd-certfile=\/var\/lib\/minikube\/certs\/apiserver-etcd-client.crt --etcd-keyfile=\/var\/lib\/minikube\/certs\/apiserver-etcd-client.key --etcd-servers=https:\/\/127.0.0.1:2379 --kubelet-client-cert           ificate=\/var\/lib\/minikube\/certs\/apiserver-kubelet-client.crt --kubelet-client-key=\/var\/lib\/minikube\/certs\/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cer           t-file=\/var\/lib\/minikube\/certs\/front-proxy-client.crt --proxy-client-key-file=\/var\/lib\/minikube\/certs\/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=\/var\/lib\/mini           kube\/certs\/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer           =https:\/\/kubernetes.default.svc.cluster.local --service-account-key-file=\/var\/lib\/minikube\/certs\/sa.pub --service-account-signing-key-file=\/var\/lib\/minikube\/certs\/sa.key --service-cluster-ip-range=10.96.0.0\/12 --tls-ce           rt-file=\/var\/lib\/minikube\/certs\/apiserver.crt --tls-private-key-file=\/var\/lib\/minikube\/certs\/apiserver.key\r\n 822039 pts\/0    S+     0:00 grep --color=auto etcd\r\n[root@k8s etcd-v3.5.12-linux-amd64]#\r\n[root@k8s etcd-v3.5.12-linux-amd64]#\r\n[root@k8s etcd-v3.5.12-linux-amd64]#\r\n[root@k8s etcd-v3.5.12-linux-amd64]#\r\n\r\n[root@k8s 
etcd-v3.5.12-linux-amd64]# ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 --cacert \/var\/lib\/minikube\/certs\/etcd\/ca.crt --cert \/var\/lib\/minikube\/certs\/etcd\/server.crt --key \/var\/lib\/minikube\/certs\/etcd\/server.key get  --prefix --keys-only\r\nError: get command needs one argument as key and an optional argument as range_end\r\n[root@k8s etcd-v3.5.12-linux-amd64]# ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 --cacert \/var\/lib\/minikube\/certs\/etcd\/ca.crt --cert \/var\/lib\/minikube\/certs\/etcd\/server.crt --key \/var\/lib\/minikube\/certs\/etcd\/server.key get \/ --prefix --keys-only\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.\r\n\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.admissionregistration.k8s.io\r\n\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.apiextensions.k8s.io\r\n...\r\n\/registry\/storageclasses\/standard\r\n\r\n\/registry\/validatingwebhookconfigurations\/ingress-nginx-admission\r\n\r\n[root@k8s etcd-v3.5.12-linux-amd64]# ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 --cacert \/var\/lib\/minikube\/certs\/etcd\/ca.crt --cert \/var\/lib\/minikube\/certs\/etcd\/server.crt --key \/var\/lib\/minikube\/certs\/etcd\/server.key snapshot save \/tmp\/etcdbackup.db\r\n{\"level\":\"info\",\"ts\":\"2024-02-04T13:07:48.656201-0500\",\"caller\":\"snapshot\/v3_snapshot.go:65\",\"msg\":\"created temporary db file\",\"path\":\"\/tmp\/etcdbackup.db.part\"}\r\n{\"level\":\"info\",\"ts\":\"2024-02-04T13:07:48.668951-0500\",\"logger\":\"client\",\"caller\":\"v3@v3.5.12\/maintenance.go:212\",\"msg\":\"opened snapshot stream; downloading\"}\r\n{\"level\":\"info\",\"ts\":\"2024-02-04T13:07:48.669004-0500\",\"caller\":\"snapshot\/v3_snapshot.go:73\",\"msg\":\"fetching snapshot\",\"endpoint\":\"localhost:2379\"}\r\n{\"level\":\"info\",\"ts\":\"2024-02-04T13:07:48.73457-0500\",\"logger\":\"client\",\"caller\":\"v3@v3.5.12\/maintenance.go:220\",\"msg\":\"completed snapshot read; 
closing\"}\r\n{\"level\":\"info\",\"ts\":\"2024-02-04T13:07:48.75807-0500\",\"caller\":\"snapshot\/v3_snapshot.go:88\",\"msg\":\"fetched snapshot\",\"endpoint\":\"localhost:2379\",\"size\":\"4.5 MB\",\"took\":\"now\"}\r\n{\"level\":\"info\",\"ts\":\"2024-02-04T13:07:48.758224-0500\",\"caller\":\"snapshot\/v3_snapshot.go:97\",\"msg\":\"saved\",\"path\":\"\/tmp\/etcdbackup.db\"}\r\nSnapshot saved at \/tmp\/etcdbackup.db\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Verifying the Etcd Backup<\/span><\/p>\n<ul>\n<li><code>sudo etcdutl --write-out=table snapshot status \/tmp\/etcdbackup.db<\/code><\/li>\n<li>Just to be sure: <code>cp \/tmp\/etcdbackup.db \/tmp\/etcdbackup.db.2<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@k8s ~]# ETCDCTL_API=3 etcdctl --write-out=table snapshot status \/tmp\/etcdbackup.db\r\nDeprecated: Use `etcdutl snapshot status` instead.\r\n\r\n+----------+----------+------------+------------+\r\n|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |\r\n+----------+----------+------------+------------+\r\n| 98a917f2 |   258850 |        778 |     4.5 MB |\r\n+----------+----------+------------+------------+\r\n\r\n[root@k8s ~]# etcdutl --write-out=table snapshot status \/tmp\/etcdbackup.db\r\n+----------+----------+------------+------------+\r\n|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |\r\n+----------+----------+------------+------------+\r\n| 98a917f2 |   258850 |        778 |     4.5 MB |\r\n+----------+----------+------------+------------+\r\n\r\n[root@k8s ~]# cp \/tmp\/etcdbackup.db \/tmp\/etcdbackup.db.2\r\n<\/pre>\n<p>If anything happens to one of these backups, we still have a spare copy.<\/p>\n<p><span style=\"color: #3366ff;\">Restoring the Etcd<\/span><\/p>\n<ul>\n<li><code>sudo etcdutl snapshot restore \/tmp\/etcdbackup.db --data-dir \/var\/lib\/etcd-backup<\/code> restores the etcd backup in a non-default folder<\/li>\n<li>To start using 
it, the Kubernetes core services must be stopped, after which the etcd can be reconfigured to use the new directory<\/li>\n<li>To stop the core services, temporarily move <code>\/etc\/kubernetes\/manifests\/*.yaml<\/code> to <code>\/etc\/kubernetes\/<\/code><\/li>\n<li>As the kubelet process periodically polls for static Pod files, the etcd process will disappear within a minute<\/li>\n<li>Use <code>sudo crictl ps<\/code> to verify that it has been stopped<\/li>\n<li>Once the etcd Pod has stopped, reconfigure the etcd to use the non-default etcd path<\/li>\n<li>In etcd.yaml you&#8217;ll find a HostPath volume with the name etcd-data, pointing to the location where the Etcd files are found. Change this to the location where the restored files are<\/li>\n<li>Move the static Pod files back to <code>\/etc\/kubernetes\/manifests\/<\/code><\/li>\n<li>Use <code>sudo crictl ps<\/code> to verify the Pods have restarted successfully<\/li>\n<li>Next, <code>kubectl get all<\/code> should show the original Etcd resources<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Restoring the Etcd Commands<\/span><\/p>\n<ul>\n<li><code>kubectl delete --all deploy<\/code><\/li>\n<li><code>cd \/etc\/kubernetes\/manifests\/<\/code><\/li>\n<li><code>sudo mv * .. 
<\/code># this will stop all running pods<\/li>\n<li><code>sudo crictl ps<\/code><\/li>\n<li><code>sudo etcdutl snapshot restore \/tmp\/etcdbackup.db --data-dir \/var\/lib\/etcd-backup<\/code><\/li>\n<li><code>sudo ls -l \/var\/lib\/etcd-backup\/<\/code><\/li>\n<li><code>sudo vi \/etc\/kubernetes\/etcd.yaml <\/code># change the etcd-data HostPath volume to \/var\/lib\/etcd-backup<\/li>\n<li><code>sudo mv ..\/*.yaml .<\/code><\/li>\n<li><code>sudo crictl ps <\/code># should show all resources<\/li>\n<li><code>kubectl get deploy -A<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@k8s ~]# kubectl get deploy\r\nNAME         READY   UP-TO-DATE   AVAILABLE   AGE\r\napples       3\/3     3            3           47h\r\nfirstnginx   4\/4     4            4           4d1h\r\nnewdep       1\/1     1            1           47h\r\nnginxsvc     1\/1     1            1           2d\r\nwebserver    1\/1     1            1           2d7h\r\nwebshop      3\/3     3            3           2d3h\r\n\r\n[root@k8s ~]# kubectl delete deploy apples\r\ndeployment.apps \"apples\" deleted\r\n\r\n[root@k8s ~]# kubectl delete deploy newdep\r\ndeployment.apps \"newdep\" deleted\r\n\r\n[root@k8s ~]# cd \/etc\/kubernetes\/manifests\r\n[root@k8s manifests]# ll\r\nrazem 20\r\n-rw-------. 1 root root 2497 02-01 15:18 etcd.yaml\r\n-rw-------. 1 root root 3800 02-01 15:18 kube-apiserver.yaml\r\n-rw-------. 1 root root 3124 02-01 15:18 kube-controller-manager.yaml\r\n-rw-------. 
1 root root 1464 02-01 15:18 kube-scheduler.yaml\r\n-rw-r--r--  1 root root  246 02-04 05:32 mystaticpod.yaml\r\n-rw-r--r--  1 root root    0 02-03 16:17 staticpod.yaml\r\n[root@k8s manifests]# mv * ..\r\n[root@k8s manifests]# ll\r\nrazem 0\r\n[root@k8s manifests]# crictl ps\r\nCONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID                      POD\r\n7aec53d95a6a2       busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74                                    2 minutes ago       Running             busybox-container           426                 776a9d        9f213bc       two-containers\r\ne500dd1ee227f       busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74                                    11 minutes ago      Running             sleepy                      72                  b598cb        a0e6d7f       sleepy\r\nb3ceda46f1ac7       eeb6ee3f44bd0                                                                                                      45 minutes ago      Running             centos2                     67                  4eb967        073bfbd       morevol\r\n352c3daf52c65       eeb6ee3f44bd0                                                                                                      45 minutes ago      Running             centos1                     67                  4eb967        073bfbd       morevol\r\n34fd825715348       b9a5a1927366a                                                                                                      5 hours ago         Running             metrics-server              0                   09837f        97cd991       metrics-server-5f8988d664-7r8j7\r\n3d63e59315a29       5374347291230                                                                                          
            23 hours ago        Running             kube-apiserver              5                   2ba0d9        b8722c4       kube-apiserver-k8s.netico.pl\r\n3a65e1db97169       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          Running             nginx                       0                   0fd201        ae2d934       nginxsvc-5f8b7d4f4d-dtrs7\r\n1a4722cbaaf94       registry.k8s.io\/ingress-nginx\/controller@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c   2 days ago          Running             controller                  0                   ea8cb6        f3530f0       ingress-nginx-controller-6858749594-27tm9\r\n332cd7a3b2aa9       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          Running             nginx                       0                   8956ee        62249ab       webshop-7f9fd49d4c-x2czc\r\ne136bd99527b9       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          Running             nginx                       0                   ae8bad        2f8c457       webshop-7f9fd49d4c-92nj2\r\n2c1067f28073c       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          Running             nginx                       0                   746beb        f244884       webshop-7f9fd49d4c-kqllw\r\nc6c00eece623f       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          Running             task-pv-container           0                   d39bb4        41ef944       lab4-pod\r\n036a4a1599a1a       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          
Running             nginx                       0                   6b4abf        a363771       webserver-76d44586d-8gqhf\r\nf7426897bdb2e       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      2 days ago          Running             pv-container                0                   a09dd9        2ff2186       pv-pod\r\n8dc188f5131a2       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   ced6ee        fe01d16       deploydaemon-zzllp\r\n29bf2d747ac9a       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   2a0847        4cc3d2a       init-demo\r\n86bd1107d80e0       18ea23a675dae                                                                                                      3 days ago          Running             nginx                       0                   11c7ed        7ad36e9       web-2\r\ndc28e2ca0b0f6       18ea23a675dae                                                                                                      3 days ago          Running             nginx                       0                   b1be4a        b59e2ca       web-1\r\n9232cbdd25263       k8s.gcr.io\/nginx-slim@sha256:8b4501fe0fe221df663c22e16539f399e89594552f400408303c42f3dd8d0e52                      3 days ago          Running             nginx                       0                   15ef4c        c356862       web-0\r\n440f7bffbcf2e       kubernetesui\/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c               3 days ago          Running             dashboard-metrics-scraper   0                   b894aa        1e8f3df       
dashboard-metrics-scraper-7fd5cb4ddc-9ld5n\r\n0cde8ae538723       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             testpod                     0                   a69881        1bcb3d2       testpod\r\n11bee0d7f2f94       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   ac3431        dfc0d2d       firstnginx-d8679d567-rhhlz\r\n963cd068358d1       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   61727f        7186357       mydaemon-d4dcd\r\n56cb9d7954e7b       kubernetesui\/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93                     3 days ago          Running             kubernetes-dashboard        0                   f7e102        fa08dbe       kubernetes-dashboard-8694d4445c-xjlsr\r\n5f83a2ff3e40c       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   e33888        e473986       firstnginx-d8679d567-249g9\r\n077ed650c7764       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   bc0b06        833b812       firstnginx-d8679d567-66c4s\r\n6e11c05ad56f0       nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx-container             0                   776a9d        9f213bc       two-containers\r\ne4766099186a1     
  nginx@sha256:985224176778a8939b3869d3b9b9624ea9b3fe4eb1e9002c5f444d99ef034a9b                                      3 days ago          Running             nginx                       0                   10ffa7        290bd38       firstnginx-d8679d567-72qbd\r\nd132702c0281b       bfc896cf80fba                                                                                                      3 days ago          Running             kube-proxy                  1                   e968a0        c3cc86b       kube-proxy-5nmms\r\n\r\n[root@k8s manifests]# etcdutl snapshot restore \/tmp\/etcdbackup.db --data-dir \/var\/lib\/etcd-backup\r\n2024-02-04T16:14:51-05:00       info    snapshot\/v3_snapshot.go:260     restoring snapshot      {\"path\": \"\/tmp\/etcdbackup.db\", \"wal-dir\": \"\/var\/lib\/etcd-backup\/member\/wal\", \"data-dir\": \"\/var\/lib\/etcd-backup\", \"snap-dir\": \"\/var\/lib\/etcd-backup\/member\/snap\"}\r\n2024-02-04T16:14:51-05:00       info    membership\/store.go:141 Trimming membership information from the backend...\r\n2024-02-04T16:14:51-05:00       info    membership\/cluster.go:421       added member    {\"cluster-id\": \"cdf818194e3a8c32\", \"local-member-id\": \"0\", \"added-peer-id\": \"8e9e05c52164694d\", \"added-peer-peer-urls\": [\"http:\/\/localhost:2380\"]}\r\n2024-02-04T16:14:51-05:00       info    snapshot\/v3_snapshot.go:287     restored snapshot       {\"path\": \"\/tmp\/etcdbackup.db\", \"wal-dir\": \"\/var\/lib\/etcd-backup\/member\/wal\", \"data-dir\": \"\/var\/lib\/etcd-backup\", \"snap-dir\": \"\/var\/lib\/etcd-backup\/member\/snap\"}\r\n\r\n[root@k8s manifests]# ls -l \/var\/lib\/etcd-backup\r\nrazem 0\r\ndrwx------ 4 root root 29 02-04 16:14 member\r\n\r\n[root@k8s manifests]# ls -l \/var\/lib\/etcd-backup\/member\r\nrazem 0\r\ndrwx------ 2 root root 62 02-04 16:14 snap\r\ndrwx------ 2 root root 51 02-04 16:14 wal\r\n[root@k8s manifests]# vim \/etc\/kubernetes\/etcd.yaml\r\n\r\n[root@k8s manifests]# 
cat \/etc\/kubernetes\/etcd.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  annotations:\r\n    kubeadm.kubernetes.io\/etcd.advertise-client-urls: https:\/\/172.30.9.24:2379\r\n  creationTimestamp: null\r\n  labels:\r\n    component: etcd\r\n    tier: control-plane\r\n  name: etcd\r\n  namespace: kube-system\r\nspec:\r\n  containers:\r\n  - command:\r\n    - etcd\r\n    - --advertise-client-urls=https:\/\/172.30.9.24:2379\r\n    - --cert-file=\/var\/lib\/minikube\/certs\/etcd\/server.crt\r\n    - --client-cert-auth=true\r\n    - --data-dir=\/var\/lib\/minikube\/etcd\r\n    - --experimental-initial-corrupt-check=true\r\n    - --experimental-watch-progress-notify-interval=5s\r\n...\r\n  - hostPath:\r\n      #path: \/var\/lib\/minikube\/etcd\r\n      path: \/var\/lib\/etcd-backup\r\n      type: DirectoryOrCreate\r\n    name: etcd-data\r\nstatus: {}\r\n\r\n[root@k8s manifests]# mv ..\/*.yaml .\r\n\r\n[root@k8s manifests]# ll\r\nrazem 20\r\ndrwx------  3 root root   20 02-04 16:13 default.etcd\r\n-rw-------  1 root root 2530 02-04 16:17 etcd.yaml\r\n-rw-------. 1 root root 3800 02-01 15:18 kube-apiserver.yaml\r\n-rw-------. 1 root root 3124 02-01 15:18 kube-controller-manager.yaml\r\n-rw-------. 
1 root root 1464 02-01 15:18 kube-scheduler.yaml\r\n-rw-r--r--  1 root root  246 02-04 05:32 mystaticpod.yaml\r\n-rw-r--r--  1 root root    0 02-03 16:17 staticpod.yaml\r\n[root@k8s manifests]#\r\n\r\n[root@k8s manifests]# crictl ps\r\nCONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                        ATTEMPT             POD ID                      POD\r\n9d6694c088673       ead0a4a53df89                                                                                                      3 seconds ago       Running             coredns                     857                 7d04b5        a159192       coredns-5dd5756b68-sgfkj\r\na285ffd1ce0a0       73deb9a3f7025                                                                                                      21 seconds ago      Running             etcd                        0                   df26fd        67d3b54       etcd-k8s.netico.pl\r\n9c56e218c8d67       busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74                                    2 minutes ago       Running             busybox-container           427                 776a9d        9f213bc       two-containers\r\nc00b990d0741d       nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9                                      3 minutes ago       Running             mystaticpod                 0                   8e1741        461cf68       mystaticpod-k8s.netico.pl\r\n1cd28d372b0b5       6d1b4fd1b182d                                                                                                      3 minutes ago       Running             kube-scheduler              0                   dd2089        8212225       kube-scheduler-k8s.netico.pl\r\n...                                                                                                   
 3 days ago          Running             kube-proxy                  1                   e968a0        c3cc86b       kube-proxy-5nmms\r\n\r\n[root@k8s manifests]# kubectl get deploy\r\nNAME         READY   UP-TO-DATE   AVAILABLE   AGE\r\napples       3\/3     3            3           47h\r\nfirstnginx   4\/4     4            4           4d1h\r\nnewdep       1\/1     1            1           2d\r\nnginxsvc     1\/1     1            1           2d1h\r\nwebserver    1\/1     1            1           2d8h\r\nwebshop      3\/3     3            3           2d3h\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Cluster Nodes Upgrade<\/span><\/p>\n<ul>\n<li>Kubernetes clusters can be upgraded from one minor version to the next<br \/>\nSkipping minor versions (e.g., 1.23 to 1.25) is not supported<\/li>\n<li>First, you&#8217;ll have to upgrade <code>kubeadm<\/code><\/li>\n<li>Next, you&#8217;ll need to upgrade the control plane node<\/li>\n<li>After that, the worker nodes are upgraded<\/li>\n<li>Use &#8220;Upgrading kubeadm clusters&#8221; from the documentation<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Control Plane Node Upgrade Overview<\/span><\/p>\n<ul>\n<li>upgrade <code>kubeadm<\/code><\/li>\n<li>use <code>kubeadm upgrade plan<\/code> to check available versions<\/li>\n<li>use <code>kubeadm upgrade apply v1.xx.y<\/code> to run the upgrade<\/li>\n<li>use <code>kubectl drain controlnode --ignore-daemonsets<\/code><\/li>\n<li>upgrade kubelet and kubectl, then restart the kubelet<\/li>\n<li>use <code>kubectl uncordon controlnode<\/code> to bring back the control node<\/li>\n<li>proceed with the other nodes<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">High Availability Options<\/span><\/p>\n<ul>\n<li><em>Stacked control plane nodes<\/em> require less infrastructure, as the etcd<br \/>\nmembers and control plane nodes are co-located\n<ul>\n<li>Control planes and etcd members are running together on the same node<\/li>\n<li>For optimal protection, 
requires a minimum of 3 stacked control plane nodes<\/li>\n<\/ul>\n<\/li>\n<li><em>External etcd cluster<\/em> requires more infrastructure, as the control plane nodes and etcd members are separated\n<ul>\n<li>The etcd service is running on external nodes, so this requires twice the number of nodes<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">High Availability Requirements<\/span><\/p>\n<ul>\n<li>In a Kubernetes HA cluster, a load balancer is needed to distribute the<br \/>\nworkload between the cluster nodes<\/li>\n<li>The load balancer can be externally provided using open source software or a load balancer appliance<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Exploring Load Balancer Configuration<\/span><\/p>\n<ul>\n<li>In the load balancer setup, HAProxy is running on each server to provide<br \/>\naccess to port 8443 on all IP addresses on that server<\/li>\n<li>Incoming traffic on port 8443 is forwarded to the kube-apiserver on port 6443<\/li>\n<li>The keepalived service is running on all HA nodes to provide a virtual IP address on one of the nodes<\/li>\n<li>kubectl clients connect to this VIP on port 8443<\/li>\n<li>Use the <code>setup-lb-ubuntu.sh<\/code> script provided in the GitHub repository for easy setup<\/li>\n<li>Additional instructions are in the script<\/li>\n<li>After running the load balancer setup, use <code>nc 192.168.29.100 8443<\/code> to verify the availability of the load balancer IP and port<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Setting up a Highly Available Kubernetes Cluster<\/span><\/p>\n<ul>\n<li>3 VMs to be used as controllers in the cluster; install K8s software but don&#8217;t<br \/>\nset up the cluster yet<\/li>\n<li>2 VMs to be used as worker nodes; install K8s software<\/li>\n<li>Ensure <code>\/etc\/hosts<\/code> is set up for name resolution of all nodes and copy it to all nodes<\/li>\n<li>Disable SELinux on all nodes if applicable<\/li>\n<li>Disable the firewall if applicable<\/li>\n<\/ul>\n<p><span style=\"color: 
#3366ff;\">Initializing the HA Setup<\/span><\/p>\n<ul>\n<li><code>sudo kubeadm init --control-plane-endpoint \"192.168.29.100:8443\" --upload-certs<\/code><\/li>\n<li>Save the output of the command, which shows the next steps<\/li>\n<li>Configure networking\n<ul>\n<li><code>kubectl apply -f https:\/\/docs.projectcalico.org\/manifests\/calico.yaml<\/code><\/li>\n<\/ul>\n<\/li>\n<li>Copy the <code>kubeadm join<\/code> command that was printed after successfully initializing the first control node\n<ul>\n<li>Make sure to use the command that has <code>--control-plane<\/code> in it!<\/li>\n<\/ul>\n<\/li>\n<li>Complete setup on other control nodes as instructed<\/li>\n<li>Use <code>kubectl get nodes<\/code> to verify the setup<\/li>\n<li>Continue and join worker nodes as instructed<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Configuring the HA Client<\/span><\/p>\n<ul>\n<li>On the machine you want to use as the operator workstation, create a <code>.kube<\/code><br \/>\ndirectory and copy <code>\/etc\/kubernetes\/admin.conf<\/code> from any control node to<br \/>\nthe client machine<\/li>\n<li>Install the <code>kubectl<\/code> utility<\/li>\n<li>Ensure that host name resolution goes to the new control plane VIP<\/li>\n<li>Verify using <code>kubectl get nodes<\/code><\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Testing it<\/span><\/p>\n<ul>\n<li>On all nodes: find the VIP using <code>ip a<\/code><\/li>\n<li>On all nodes with <code>kubectl<\/code> installed, use <code>kubectl get all<\/code> to verify that the client is working<\/li>\n<li>Shut down the node that has the VIP<\/li>\n<li>Verify that <code>kubectl get all<\/code> still works<\/li>\n<li>Troubleshooting: consider using <code>sudo systemctl restart haproxy<\/code><\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Lab: Etcd Backup and Restore<\/span><\/p>\n<ul>\n<li>Create a backup of etcd<\/li>\n<li>Remove a few resources (Pods and\/or Deployments)<\/li>\n<li>Restore the backup of etcd and verify that it 
gets your resources back<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":5953,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[99],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5326"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=5326"}],"version-history":[{"count":31,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5326\/revisions"}],"predecessor-version":[{"id":5468,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5326\/revisions\/5468"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media\/5953"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=5326"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=5326"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=5326"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}