{"id":5356,"date":"2023-11-25T11:42:56","date_gmt":"2023-11-25T10:42:56","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=5356"},"modified":"2025-05-17T19:22:21","modified_gmt":"2025-05-17T17:22:21","slug":"scheduling-on-kubernetes","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/11\/25\/scheduling-on-kubernetes\/","title":{"rendered":"Scheduling on Kubernetes"},"content":{"rendered":"<p><!--more--><\/p>\n<p><span style=\"color: #3366ff;\">Scheduling<\/span><\/p>\n<ul>\n<li>Kube-scheduler takes care of finding a node to schedule new Pods<\/li>\n<li>Nodes are filtered according to specific requirements that may be set\n<ul>\n<li>Resource requirements<\/li>\n<li>Affinity and anti-affinity<\/li>\n<li>Taints and tolerations, and more<\/li>\n<\/ul>\n<\/li>\n<li>The scheduler first finds feasible nodes, then scores them and picks the node with the highest score<\/li>\n<li>Once this node is found, the scheduler notifies the API server in a process called binding<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">From Scheduler to Kubelet<\/span><\/p>\n<ul>\n<li>Once the scheduler decision has been made, it is picked up by the kubelet<\/li>\n<li>The kubelet will instruct the CRI to fetch the image of the required container<\/li>\n<li>After fetching the image, the container is created and started<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Setting Node Preferences<\/span><\/p>\n<ul>\n<li>The nodeSelector field in the pod.spec specifies a key-value pair that must<br \/>\nmatch a label which is set on nodes that are eligible to run the Pod<\/li>\n<li>Use <code>kubectl label nodes worker1.example.com disktype=ssd<\/code> to set the label on a node<\/li>\n<li>Use <code>nodeSelector: {disktype: ssd}<\/code> in the pod.spec to match the Pod to the specific node<\/li>\n<li><code>nodeName<\/code> is part of the pod.spec and can be used to always run a Pod on a node with a specific name\n<ul>\n<li>Not recommended: if that node is not currently available, 
the Pod will never run<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Using Node Preferences<\/span><\/p>\n<ul>\n<li><code>kubectl label nodes worker2 disktype=ssd<\/code><\/li>\n<li><code>kubectl apply -f selector-pod.yaml<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@k8s cka]# kubectl get nodes\r\nNAME            STATUS   ROLES           AGE     VERSION\r\nk8s.example.pl   Ready    control-plane   4d21h   v1.28.3\r\n\r\n[root@k8s cka]# kubectl label nodes k8s.example.pl disktype=ssd\r\nnode\/k8s.example.pl labeled\r\n\r\n[root@k8s cka]# cat selector-pod.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx\r\nspec:\r\n  containers:\r\n  - name: nginx\r\n    image: nginx\r\n    imagePullPolicy: IfNotPresent\r\n  nodeSelector:\r\n    disktype: ssd\r\n\r\n[root@k8s cka]# kubectl cordon k8s.example.pl\r\nnode\/k8s.example.pl cordoned\r\n\r\n[root@k8s cka]# kubectl apply -f selector-pod.yaml\r\npod\/nginx created\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                         READY   STATUS    RESTARTS          AGE\r\ndeploydaemon-zzllp           1\/1     Running   0                 3d15h\r\nfirstnginx-d8679d567-249g9   1\/1     Running   0                 4d16h\r\nfirstnginx-d8679d567-66c4s   1\/1     Running   0                 4d16h\r\nfirstnginx-d8679d567-72qbd   1\/1     Running   0                 4d16h\r\nfirstnginx-d8679d567-rhhlz   1\/1     Running   0                 3d23h\r\ninit-demo                    1\/1     Running   0                 4d1h\r\nlab4-pod                     1\/1     Running   0                 2d22h\r\nmorevol                      2\/2     Running   166 (8m12s ago)   3d11h\r\nmydaemon-d4dcd               1\/1     Running   0                 3d15h\r\nmystaticpod-k8s.example.pl    1\/1     Running   0                 26h\r\nnginx                        0\/1     Pending   0                 8s\r\nnginxsvc-5f8b7d4f4d-dtrs7    1\/1     Running   0                 2d16h\r\npv-pod     
                  1\/1     Running   0                 3d10h\r\nsleepy                       1\/1     Running   87 (33m ago)      4d2h\r\ntestpod                      1\/1     Running   0                 4d16h\r\ntwo-containers               2\/2     Running   518 (2m12s ago)   3d23h\r\nweb-0                        1\/1     Running   0                 4d4h\r\nweb-1                        1\/1     Running   0                 3d15h\r\nweb-2                        1\/1     Running   0                 3d15h\r\nwebserver-76d44586d-8gqhf    1\/1     Running   0                 2d23h\r\nwebshop-7f9fd49d4c-92nj2     1\/1     Running   0                 2d18h\r\nwebshop-7f9fd49d4c-kqllw     1\/1     Running   0                 2d18h\r\nwebshop-7f9fd49d4c-x2czc     1\/1     Running   0                 2d18h\r\n\r\n[root@k8s cka]# kubectl get all\r\nNAME                             READY   STATUS    RESTARTS          AGE\r\npod\/deploydaemon-zzllp           1\/1     Running   0                 3d15h\r\npod\/firstnginx-d8679d567-249g9   1\/1     Running   0                 4d16h\r\npod\/firstnginx-d8679d567-66c4s   1\/1     Running   0                 4d16h\r\npod\/firstnginx-d8679d567-72qbd   1\/1     Running   0                 4d16h\r\npod\/firstnginx-d8679d567-rhhlz   1\/1     Running   0                 3d23h\r\npod\/init-demo                    1\/1     Running   0                 4d1h\r\npod\/lab4-pod                     1\/1     Running   0                 2d22h\r\npod\/morevol                      2\/2     Running   166 (8m36s ago)   3d11h\r\npod\/mydaemon-d4dcd               1\/1     Running   0                 3d15h\r\npod\/mystaticpod-k8s.example.pl    1\/1     Running   0                 26h\r\npod\/nginx                        0\/1     Pending   0                 32s\r\npod\/nginxsvc-5f8b7d4f4d-dtrs7    1\/1     Running   0                 2d16h\r\npod\/pv-pod                       1\/1     Running   0                 3d10h\r\npod\/sleepy                       
1\/1     Running   87 (34m ago)      4d2h\r\npod\/testpod                      1\/1     Running   0                 4d16h\r\npod\/two-containers               2\/2     Running   518 (2m36s ago)   3d23h\r\npod\/web-0                        1\/1     Running   0                 4d4h\r\npod\/web-1                        1\/1     Running   0                 3d15h\r\npod\/web-2                        1\/1     Running   0                 3d15h\r\npod\/webserver-76d44586d-8gqhf    1\/1     Running   0                 2d23h\r\npod\/webshop-7f9fd49d4c-92nj2     1\/1     Running   0                 2d18h\r\npod\/webshop-7f9fd49d4c-kqllw     1\/1     Running   0                 2d18h\r\npod\/webshop-7f9fd49d4c-x2czc     1\/1     Running   0                 2d18h\r\n\r\nNAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\r\nservice\/apples       ClusterIP   10.101.6.55      &lt;none&gt;        80\/TCP         2d14h\r\nservice\/kubernetes   ClusterIP   10.96.0.1        &lt;none&gt;        443\/TCP        4d21h\r\nservice\/newdep       ClusterIP   10.100.68.120    &lt;none&gt;        8080\/TCP       2d15h\r\nservice\/nginx        ClusterIP   None             &lt;none&gt;        80\/TCP         4d4h\r\nservice\/nginxsvc     ClusterIP   10.104.155.180   &lt;none&gt;        80\/TCP         2d16h\r\nservice\/webshop      NodePort    10.109.119.90    &lt;none&gt;        80:32064\/TCP   2d17h\r\n\r\nNAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE\r\ndaemonset.apps\/deploydaemon   1         1         1       1            1           &lt;none&gt;          3d15h\r\ndaemonset.apps\/mydaemon       1         1         1       1            1           &lt;none&gt;          4d15h\r\n\r\nNAME                         READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/firstnginx   4\/4     4            4           4d16h\r\ndeployment.apps\/nginxsvc     1\/1     1            1           
2d16h\r\ndeployment.apps\/webserver    1\/1     1            1           2d23h\r\ndeployment.apps\/webshop      3\/3     3            3           2d18h\r\n\r\nNAME                                   DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/firstnginx-d8679d567   4         4         4       4d16h\r\nreplicaset.apps\/nginxsvc-5f8b7d4f4d    1         1         1       2d16h\r\nreplicaset.apps\/webserver-667ddc69b6   0         0         0       2d23h\r\nreplicaset.apps\/webserver-76d44586d    1         1         1       2d23h\r\nreplicaset.apps\/webshop-7f9fd49d4c     3         3         3       2d18h\r\n\r\nNAME                   READY   AGE\r\nstatefulset.apps\/web   3\/3     4d4h\r\n\r\n\r\n\r\n[root@k8s cka]# kubectl describe pod\/nginx\r\nName:             nginx\r\nNamespace:        default\r\nPriority:         0\r\nService Account:  default\r\nNode:             &lt;none&gt;\r\nLabels:           &lt;none&gt;\r\nAnnotations:      &lt;none&gt;\r\nStatus:           Pending\r\nIP:\r\nIPs:              &lt;none&gt;\r\nContainers:\r\n  nginx:\r\n    Image:        nginx\r\n    Port:         &lt;none&gt;\r\n    Host Port:    &lt;none&gt;\r\n    Environment:  &lt;none&gt;\r\n    Mounts:\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from kube-api-access-ttksw (ro)\r\nConditions:\r\n  Type           Status\r\n  PodScheduled   False\r\nVolumes:\r\n  kube-api-access-ttksw:\r\n    Type:                    Projected (a volume that contains injected data from multiple sources)\r\n    TokenExpirationSeconds:  3607\r\n    ConfigMapName:           kube-root-ca.crt\r\n    ConfigMapOptional:       &lt;nil&gt;\r\n    DownwardAPI:             true\r\nQoS Class:                   BestEffort\r\nNode-Selectors:              disktype=ssd\r\nTolerations:                 node.kubernetes.io\/not-ready:NoExecute op=Exists for 300s\r\n                             node.kubernetes.io\/unreachable:NoExecute op=Exists for 300s\r\nEvents:\r\n  Type     Reason            Age   
From               Message\r\n  ----     ------            ----  ----               -------\r\n  Warning  FailedScheduling  76s   default-scheduler  0\/1 nodes are available: 1 node(s) were unschedulable. preemption: 0\/1 nodes are available: 1 Preemption is not helpful for scheduling..\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                         READY   STATUS    RESTARTS          AGE\r\ndeploydaemon-zzllp           1\/1     Running   0                 3d15h\r\nfirstnginx-d8679d567-249g9   1\/1     Running   0                 4d16h\r\nfirstnginx-d8679d567-66c4s   1\/1     Running   0                 4d16h\r\nfirstnginx-d8679d567-72qbd   1\/1     Running   0                 4d16h\r\nfirstnginx-d8679d567-rhhlz   1\/1     Running   0                 3d23h\r\ninit-demo                    1\/1     Running   0                 4d1h\r\nlab4-pod                     1\/1     Running   0                 2d22h\r\nmorevol                      2\/2     Running   166 (10m ago)     3d11h\r\nmydaemon-d4dcd               1\/1     Running   0                 3d15h\r\nmystaticpod-k8s.example.pl    1\/1     Running   0                 26h\r\nnginx                        1\/1     Running   0                 2m31s\r\nnginxsvc-5f8b7d4f4d-dtrs7    1\/1     Running   0                 2d16h\r\npv-pod                       1\/1     Running   0                 3d10h\r\nsleepy                       1\/1     Running   87 (36m ago)      4d2h\r\ntestpod                      1\/1     Running   0                 4d16h\r\ntwo-containers               2\/2     Running   518 (4m35s ago)   3d23h\r\nweb-0                        1\/1     Running   0                 4d4h\r\nweb-1                        1\/1     Running   0                 3d15h\r\nweb-2                        1\/1     Running   0                 3d15h\r\nwebserver-76d44586d-8gqhf    1\/1     Running   0                 2d23h\r\nwebshop-7f9fd49d4c-92nj2     1\/1     Running   0                 2d18h\r\nwebshop-7f9fd49d4c-kqllw  
   1\/1     Running   0                 2d18h\r\nwebshop-7f9fd49d4c-x2czc     1\/1     Running   0                 2d18h\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Affinity and Anti-Affinity<\/span><\/p>\n<ul>\n<li>(Anti-)Affinity is used to define advanced scheduler rules<\/li>\n<li>Node affinity constrains the nodes that can receive a Pod by matching labels of those nodes<\/li>\n<li>Inter-pod affinity constrains the nodes that can receive a Pod by matching labels of Pods already running on those nodes<\/li>\n<li>Anti-affinity can only be applied between Pods<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">How it Works<\/span><\/p>\n<ul>\n<li>A Pod that has a node affinity label of key=value will only be scheduled to<br \/>\nnodes with a matching label<\/li>\n<li>A Pod that has a Pod affinity label of key=value will only be scheduled to nodes running Pods with the matching label<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Setting Node Affinity<\/span><\/p>\n<ul>\n<li>To define node affinity, two different statements can be used<\/li>\n<li><code>requiredDuringSchedulingIgnoredDuringExecution<\/code> requires the node to meet the constraint that is defined<\/li>\n<li><code>preferredDuringSchedulingIgnoredDuringExecution<\/code> defines a soft affinity that is ignored if it cannot be fulfilled<\/li>\n<li>At the moment, affinity is only applied while scheduling Pods, and cannot be used to change where Pods are already running<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Defining Affinity Labels<\/span><\/p>\n<ul>\n<li>Affinity rules go beyond labels that use a <code>key=value<\/code> label<\/li>\n<li>A <code>matchExpressions<\/code> statement is used to define a key (the label), an operator, and optionally one or more values<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">affinity:\r\n  nodeAffinity:\r\n    requiredDuringSchedulingIgnoredDuringExecution:\r\n      nodeSelectorTerms:\r\n      - matchExpressions:\r\n  
      - key: type\r\n          operator: In\r\n          values:   \r\n          - blue\r\n          - green<\/pre>\n<ul>\n<li>\u00a0Matches any node that has type set to either blue or green<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">nodeSelectorTerms:\r\n- matchExpressions:\r\n  - key: storage\r\n    operator: Exists<\/pre>\n<ul>\n<li>Matches any node where the key storage is defined<\/li>\n<\/ul>\n<p>Examples:<\/p>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# cat pod-with-node-affinity.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: with-node-affinity\r\nspec:\r\n  affinity:\r\n    nodeAffinity:\r\n      requiredDuringSchedulingIgnoredDuringExecution:\r\n        nodeSelectorTerms:\r\n        - matchExpressions:\r\n          - key: kubernetes.io\/e2e-az-name\r\n            operator: In\r\n            values:\r\n            - e2e-az1\r\n            - e2e-az2\r\n      preferredDuringSchedulingIgnoredDuringExecution:\r\n      - weight: 1\r\n        preference:\r\n          matchExpressions:\r\n          - key: another-node-label-key\r\n            operator: In\r\n            values:\r\n            - another-node-label-value\r\n  containers:\r\n  - name: with-node-affinity\r\n    image: k8s.gcr.io\/pause:2.0\r\n\r\n[root@k8s cka]# kubectl apply -f pod-with-node-affinity.yaml\r\npod\/with-node-affinity created\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                         READY   STATUS    RESTARTS          AGE\r\n...\r\nwith-node-affinity           0\/1     Pending   0                 7s\r\n\r\n\r\n\r\n[root@k8s cka]# kubectl delete -f pod-with-node-affinity.yaml\r\npod \"with-node-affinity\" deleted\r\n\r\n\r\n[root@k8s cka]# cat pod-with-node-antiaffinity.yaml\r\n#kubectl label nodes node01 disktype=ssd\r\n\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: antinginx\r\nspec:\r\n  affinity:\r\n    nodeAffinity:\r\n      requiredDuringSchedulingIgnoredDuringExecution:\r\n        nodeSelectorTerms:\r\n        - 
matchExpressions:\r\n          - key: disktype\r\n            operator: NotIn\r\n            values:\r\n            - ssd\r\n  containers:\r\n  - name: nginx\r\n    image: nginx\r\n    imagePullPolicy: IfNotPresent\r\n\r\n[root@k8s cka]# kubectl apply -f pod-with-node-antiaffinity.yaml\r\npod\/antinginx created\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                         READY   STATUS    RESTARTS          AGE\r\nantinginx                    0\/1     Pending   0                 5s\r\ndeploydaemon-zzllp           1\/1     Running   0                 3d18h\r\n...\r\n\r\n[root@k8s cka]# kubectl describe node k8s.example.pl\r\nName:               k8s.example.pl\r\nRoles:              control-plane\r\nLabels:             beta.kubernetes.io\/arch=amd64\r\n                    beta.kubernetes.io\/os=linux\r\n                    disktype=ssd\r\n...\r\n\r\n[root@k8s cka]# kubectl describe node k8s.example.pl | grep ssd\r\n                    disktype=ssd\r\n\r\n[root@k8s cka]# kubectl label nodes k8s.example.pl disktype-\r\nnode\/k8s.example.pl unlabeled\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                         READY   STATUS    RESTARTS          AGE\r\nantinginx                    1\/1     Running   0                 34m\r\ndeploydaemon-zzllp           1\/1     Running   0                 3d18h\r\n...\r\n\r\n[root@k8s cka]# cat pod-with-pod-affinity.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: with-pod-affinity\r\nspec:\r\n  affinity:\r\n    podAffinity:\r\n      requiredDuringSchedulingIgnoredDuringExecution:\r\n      - labelSelector:\r\n          matchExpressions:\r\n          - key: security\r\n            operator: In\r\n            values:\r\n            - S1\r\n        topologyKey: failure-domain.beta.kubernetes.io\/zone\r\n    podAntiAffinity:\r\n      preferredDuringSchedulingIgnoredDuringExecution:\r\n      - weight: 100\r\n        podAffinityTerm:\r\n          labelSelector:\r\n            matchExpressions:\r\n            - 
key: security\r\n              operator: In\r\n              values:\r\n              - S2\r\n          topologyKey: failure-domain.beta.kubernetes.io\/zone\r\n  containers:\r\n  - name: with-pod-affinity\r\n    image: k8s.gcr.io\/pause:2.0\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">TopologyKey<\/span><\/p>\n<ul>\n<li>When defining Pod affinity and anti-affinity, a topologyKey property is<br \/>\nrequired<\/li>\n<li>The topologyKey refers to a label that exists on nodes, and typically has a format containing a slash\n<ul>\n<li><code>kubernetes.io\/hostname<\/code><\/li>\n<\/ul>\n<\/li>\n<li>Using topologyKeys allows Pods to be assigned only to hosts matching the topologyKey<\/li>\n<li>This allows administrators to spread workloads across the zones where they are implemented<\/li>\n<li>If no matching topologyKey is found on the host, the specified topologyKey will be ignored in the affinity<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Using Pod Anti-Affinity<\/span><\/p>\n<ul>\n<li><code>kubectl create -f redis-with-pod-affinity.yaml<\/code><\/li>\n<li>On a two-node cluster, one Pod stays in a state of Pending<\/li>\n<li><code>kubectl create -f web-with-pod-affinity.yaml<\/code><\/li>\n<li>This will run web instances only on nodes where redis is running as well<\/li>\n<\/ul>\n<pre class=\"lang:default mark:49-50 decode:true\">[root@k8s cka]# cat redis-with-pod-affinity.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  name: redis-cache\r\nspec:\r\n  selector:\r\n    matchLabels:\r\n      app: store\r\n  replicas: 3\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: store\r\n    spec:\r\n      affinity:\r\n        podAntiAffinity:\r\n          requiredDuringSchedulingIgnoredDuringExecution:\r\n          - labelSelector:\r\n              matchExpressions:\r\n              - key: app\r\n                operator: In\r\n                values:\r\n                - store\r\n            topologyKey: \"kubernetes.io\/hostname\"\r\n    
  containers:\r\n      - name: redis-server\r\n        image: redis:3.2-alpine\r\n\r\n[root@k8s cka]# kubectl create -f redis-with-pod-affinity.yaml\r\ndeployment.apps\/redis-cache created\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                           READY   STATUS    RESTARTS        AGE\r\nantinginx                      1\/1     Running   0               113m\r\n...\r\nredis-cache-8478cbdc86-cfsmz   0\/1     Pending   0               6s\r\nredis-cache-8478cbdc86-kr8qr   0\/1     Pending   0               6s\r\nredis-cache-8478cbdc86-w2swz   1\/1     Running   0               6s\r\nsleepy                         1\/1     Running   92 (14m ago)    4d6h\r\ntestpod                        1\/1     Running   0               4d21h\r\ntwo-containers                 2\/2     Running   546 (94s ago)   4d3h\r\nweb-0                          1\/1     Running   0               4d9h\r\nweb-1                          1\/1     Running   0               3d20h\r\nweb-2                          1\/1     Running   0               3d20h\r\nwebserver-76d44586d-8gqhf      1\/1     Running   0               3d3h\r\nwebshop-7f9fd49d4c-92nj2       1\/1     Running   0               2d23h\r\nwebshop-7f9fd49d4c-kqllw       1\/1     Running   0               2d23h\r\nwebshop-7f9fd49d4c-x2czc       1\/1     Running   0               2d23h\r\n<\/pre>\n<p>The anti-affinity rule ensures that two Pods of the same application will never run on the same node.<\/p>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# cat web-with-pod-affinity.yaml\r\napiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  name: web-server\r\nspec:\r\n  selector:\r\n    matchLabels:\r\n      app: web-store\r\n  replicas: 3\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: web-store\r\n    spec:\r\n      affinity:\r\n        podAntiAffinity:\r\n          requiredDuringSchedulingIgnoredDuringExecution:\r\n          - labelSelector:\r\n              matchExpressions:\r\n        
      - key: app\r\n                operator: In\r\n                values:\r\n                - web-store\r\n            topologyKey: \"kubernetes.io\/hostname\"\r\n        podAffinity:\r\n          requiredDuringSchedulingIgnoredDuringExecution:\r\n          - labelSelector:\r\n              matchExpressions:\r\n              - key: app\r\n                operator: In\r\n                values:\r\n                - store\r\n            topologyKey: \"kubernetes.io\/hostname\"\r\n      containers:\r\n      - name: web-app\r\n        image: nginx:1.16-alpine\r\n\r\n[root@k8s cka]# kubectl create -f web-with-pod-affinity.yaml\r\ndeployment.apps\/web-server created\r\n\r\n[root@k8s cka]# kubectl get pods\r\nNAME                           READY   STATUS    RESTARTS        AGE\r\nantinginx                      1\/1     Running   0               162m\r\n...\r\nredis-cache-8478cbdc86-cfsmz   0\/1     Pending   0               48m\r\nredis-cache-8478cbdc86-kr8qr   0\/1     Pending   0               48m\r\nredis-cache-8478cbdc86-w2swz   1\/1     Running   0               48m\r\n...\r\nweb-server-55f57c89d4-25qhr    0\/1     Pending   0               96s\r\nweb-server-55f57c89d4-crtfn    1\/1     Running   0               96s\r\nweb-server-55f57c89d4-vl4p5    0\/1     Pending   0               96s\r\nwebserver-76d44586d-8gqhf      1\/1     Running   0               3d4h\r\nwebshop-7f9fd49d4c-92nj2       1\/1     Running   0               3d\r\nwebshop-7f9fd49d4c-kqllw       1\/1     Running   0               3d\r\nwebshop-7f9fd49d4c-x2czc       1\/1     Running   0               3d\r\n<\/pre>\n<p>That means that web instances will run only on nodes where redis is running as well.<\/p>\n<p><span style=\"color: #3366ff;\">Taints<\/span><\/p>\n<ul>\n<li><em>Taints<\/em> are applied to a node to mark that the node should not accept any<br \/>\nPod that doesn&#8217;t tolerate the taint<\/li>\n<li><em>Tolerations<\/em> are applied to Pods and allow (but do not require) Pods 
to schedule on nodes with matching Taints; they are an exception to taints that are applied<\/li>\n<li>Where <em>Affinities<\/em> are used on Pods to attract them to specific nodes, Taints allow a node to repel a set of Pods<\/li>\n<li><em>Taints<\/em> and <em>Tolerations<\/em> are used to ensure Pods are not scheduled on inappropriate nodes, and thus make sure that dedicated nodes can be configured for dedicated tasks<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Taint Types<\/span><\/p>\n<ul>\n<li>Three types of Taint can be applied:\n<ul>\n<li><code>NoSchedule<\/code>: does not schedule new Pods<\/li>\n<li><code>PreferNoSchedule<\/code>: does not schedule new Pods, unless there is no other option<\/li>\n<li><code>NoExecute<\/code>: migrates all Pods away from this node<\/li>\n<\/ul>\n<\/li>\n<li>If the Pod has a matching toleration, however, it will ignore the taint<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Setting Taints<\/span><\/p>\n<ul>\n<li>Taints are set in different ways<\/li>\n<li>Control plane nodes automatically get taints that won&#8217;t schedule user Pods<\/li>\n<li>When <code>kubectl drain<\/code> and <code>kubectl cordon<\/code> are used, a taint is applied on the target node<\/li>\n<li>Taints can be set automatically by the cluster when critical conditions arise, such as a node running out of disk space<\/li>\n<li>Administrators can use <code>kubectl taint<\/code> to set taints:\n<ul>\n<li><code>kubectl taint nodes worker1 key1=value1:NoSchedule<\/code> sets the taint<\/li>\n<li><code>kubectl taint nodes worker1 key1=value1:NoSchedule-<\/code> removes it (note the trailing minus)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Tolerations<\/span><\/p>\n<ul>\n<li>To allow a Pod to run on a node with a specific taint, a toleration can be<br \/>\nused<\/li>\n<li>This is essential for running core Kubernetes Pods on the control plane nodes<\/li>\n<li>While creating taints and tolerations, a key and value are defined to allow for more specific access\n<ul>\n<li><code>kubectl 
taint nodes worker1 storage=ssd:NoSchedule<\/code><\/li>\n<\/ul>\n<\/li>\n<li>This will allow a Pod to run if it has a toleration containing the key <code>storage<\/code> and the value <code>ssd<\/code><\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Taint Key and Value<\/span><\/p>\n<ul>\n<li>While defining a toleration, the Pod needs a key, operator, and value:<br \/>\n<code>tolerations:<\/code><br \/>\n<code>- key: \"storage\"<\/code><br \/>\n<code>\u00a0 operator: \"Equal\"<\/code><br \/>\n<code>\u00a0 value: \"ssd\"<\/code><\/li>\n<li>The default value for the operator is &#8220;Equal&#8221;; as an alternative, &#8220;Exists&#8221; is commonly used<\/li>\n<li>If the operator &#8220;Exists&#8221; is used, the key should match the taint key and the value is ignored<\/li>\n<li>If the operator &#8220;Equal&#8221; is used, the key and value must match<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Node Conditions and Taints<\/span><\/p>\n<ul>\n<li>Node conditions can automatically create taints on nodes if one of the<br \/>\nfollowing applies\n<ul>\n<li>memory-pressure<\/li>\n<li>disk-pressure<\/li>\n<li>pid-pressure<\/li>\n<li>unschedulable<\/li>\n<li>network-unavailable<\/li>\n<\/ul>\n<\/li>\n<li>If any of these conditions apply, a taint is automatically set<\/li>\n<li>Node conditions can be ignored by adding corresponding Pod tolerations<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Using Taints &#8211; commands<\/span><\/p>\n<ul>\n<li><code>kubectl taint nodes worker1 storage=ssd:NoSchedule<\/code><\/li>\n<li><code>kubectl describe nodes worker1<\/code><\/li>\n<li><code>kubectl create deployment nginx-taint --image=nginx<\/code><\/li>\n<li><code>kubectl scale deployment nginx-taint --replicas=3<\/code><\/li>\n<li><code>kubectl get pods -o wide <\/code># will show that pods are all on worker2<\/li>\n<li><code>kubectl create -f taint-toleration.yaml <\/code># will run<\/li>\n<li><code>kubectl create -f taint-toleration2.yaml <\/code># 
will not run<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl get nodes\r\nNAME            STATUS   ROLES           AGE     VERSION\r\nk8s.example.pl   Ready    control-plane   5d17h   v1.28.3\r\n\r\n[root@k8s cka]# kubectl taint nodes k8s.example.pl storage=ssd:NoSchedule\r\nnode\/k8s.example.pl tainted\r\n\r\n[root@k8s cka]# kubectl describe node k8s.example.pl | grep Taints\r\nTaints:             storage=ssd:NoSchedule\r\n\r\n[root@k8s cka]# kubectl create deploy nginx-taint --image=nginx\r\ndeployment.apps\/nginx-taint created\r\n\r\n[root@k8s cka]# kubectl scale deploy nginx-taint --replicas=3\r\ndeployment.apps\/nginx-taint scaled\r\n\r\n[root@k8s cka]# kubectl get pods --selector app=nginx-taint\r\nNAME                           READY   STATUS    RESTARTS   AGE\r\nnginx-taint-68bd5db674-7skqs   0\/1     Pending   0          2m2s\r\nnginx-taint-68bd5db674-vjq89   0\/1     Pending   0          2m2s\r\nnginx-taint-68bd5db674-vqz2z   0\/1     Pending   0          2m38s\r\n<\/pre>\n<p>None of the nginx-taint pods run, because the taint is set on the control-plane node and it is the only node in the cluster. That node will now only accept Pods that tolerate the <code>storage=ssd<\/code> taint. 
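<\/p>\n<p>For reference, this is how the taint set by the <code>kubectl taint<\/code> command above would appear on the Node object itself (a sketch of the relevant node.spec fragment, not taken from the session):<\/p>\n<pre class=\"lang:default decode:true\">spec:\r\n  taints:\r\n  - key: storage\r\n    value: ssd\r\n    effect: NoSchedule<\/pre>\n<p>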
Let&#8217;s create a toleration.<\/p>\n<pre class=\"lang:default decode:true \">[root@k8s cka]# kubectl describe node k8s.example.pl | grep Taints\r\nTaints:             storage=ssd:NoSchedule\r\n\r\n[root@k8s cka]# cat taint-toleration.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx-ssd\r\n  labels:\r\n    env: test\r\nspec:\r\n  containers:\r\n  - name: nginx-ssd\r\n    image: nginx\r\n    imagePullPolicy: IfNotPresent\r\n  tolerations:\r\n  - key: \"storage\"\r\n    operator: \"Equal\"\r\n    value: \"ssd\"\r\n    effect: \"NoSchedule\"\r\n\r\n[root@k8s cka]# kubectl apply -f taint-toleration.yaml\r\npod\/nginx-ssd created\r\n\r\n[root@k8s cka]# kubectl get pods nginx-ssd -o wide\r\nNAME        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES\r\nnginx-ssd   1\/1     Running   0          24s   10.244.0.53   k8s.netico.pl   &lt;none&gt;           &lt;none&gt;\r\n\r\n\r\n[root@k8s cka]# cat taint-toleration2.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx-hdd\r\n  labels:\r\n    env: test\r\nspec:\r\n  containers:\r\n  - name: nginx-hdd\r\n    image: nginx\r\n    imagePullPolicy: IfNotPresent\r\n  tolerations:\r\n  - key: \"storage\"\r\n    operator: \"Equal\"\r\n    value: \"hdd\"\r\n    effect: \"NoSchedule\"\r\n\r\n[root@k8s cka]# kubectl apply -f taint-toleration2.yaml\r\npod\/nginx-hdd created\r\n\r\n[root@k8s cka]# kubectl get pods nginx-hdd -o wide\r\nNAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES\r\nnginx-hdd   0\/1     Pending   0          11s   &lt;none&gt;   &lt;none&gt;   &lt;none&gt;           &lt;none&gt;\r\n<\/pre>\n<p>The nginx-ssd Pod has a toleration for the <code>storage=ssd:NoSchedule<\/code> taint and is running on the control-plane node. 
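<\/p>\n<p>As a side note, a toleration using the <code>Exists<\/code> operator would tolerate any value of the <code>storage<\/code> taint key (a sketch, not part of the original session):<\/p>\n<pre class=\"lang:default decode:true\">tolerations:\r\n- key: \"storage\"\r\n  operator: \"Exists\"   # value is ignored; any storage taint is tolerated\r\n  effect: \"NoSchedule\"<\/pre>\n<p>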
The nginx-hdd Pod only has a toleration for a <code>storage=hdd:NoSchedule<\/code> taint, which does not match the node&#8217;s taint, so it stays Pending.<\/p>\n<p><span style=\"color: #3366ff;\">LimitRange<\/span><\/p>\n<ul>\n<li><em>LimitRange<\/em> is an API object that limits resource usage per container or Pod<br \/>\nin a Namespace<\/li>\n<li>It uses three relevant options:\n<ul>\n<li><code>type:<\/code> specifies whether it applies to Pods or containers<\/li>\n<li><code>defaultRequest:<\/code> the default resources the application will request<\/li>\n<li><code>default:<\/code> the default resource limit, i.e. the maximum resources the application can use<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Quota<\/span><\/p>\n<ul>\n<li><em>Quota<\/em> is an API object that limits total resources available in a Namespace<\/li>\n<li>If a Namespace is configured with <em>Quota<\/em>, applications in that Namespace must be configured with resource settings in <code>pod.spec.containers.resources<\/code><\/li>\n<li>Where the goal of the <em>LimitRange<\/em> is to set default restrictions for each application running in a Namespace, the goal of Quota is to define maximum resources that can be consumed within a Namespace by all applications<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Managing Quota<\/span><\/p>\n<ul>\n<li><code>kubectl create quota qtest --hard pods=3,cpu=100m,memory=500Mi --namespace limited<\/code><\/li>\n<li><code>kubectl describe quota --namespace limited<\/code><\/li>\n<li><code>kubectl create deploy nginx --image=nginx:latest --replicas=3 -n limited<\/code><\/li>\n<li><code>kubectl get all -n limited <\/code># no pods<\/li>\n<li><code>kubectl describe rs\/nginx-xxx -n limited <\/code># pod creation fails because no resource requests or limits have been set on the deployment<\/li>\n<li><code>kubectl set resources deploy nginx --requests cpu=100m,memory=5Mi --limits cpu=200m,memory=20Mi -n limited<\/code><\/li>\n<li><code>kubectl get pods -n limited<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default 
decode:true\">[root@k8s cka]# kubectl create ns limited\r\nnamespace\/limited created\r\n\r\n[root@k8s cka]# kubectl create quota qtest --hard pods=3,cpu=100m,memory=500Mi --namespace limited\r\nresourcequota\/qtest created\r\n\r\n[root@k8s cka]# kubectl describe quota --namespace limited\r\nName:       qtest\r\nNamespace:  limited\r\nResource    Used  Hard\r\n--------    ----  ----\r\ncpu         0     100m\r\nmemory      0     500Mi\r\npods        0     3\r\n\r\n[root@k8s cka]# kubectl describe quota -n limited\r\nName:       qtest\r\nNamespace:  limited\r\nResource    Used  Hard\r\n--------    ----  ----\r\ncpu         0     100m\r\nmemory      0     500Mi\r\npods        0     3\r\n\r\n[root@k8s cka]# kubectl create deploy nginx --image=nginx:latest --replicas=3 -n limited\r\ndeployment.apps\/nginx created\r\n\r\n[root@k8s cka]# kubectl get all -n limited\r\nNAME                    READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/nginx   0\/3     0            0           17s\r\n\r\nNAME                               DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/nginx-56fcf95486   3         0         0       17s\r\n\r\n[root@k8s cka]# kubectl describe -n limited replicaset.apps\/nginx-56fcf95486\r\nName:           nginx-56fcf95486\r\nNamespace:      limited\r\nSelector:       app=nginx,pod-template-hash=56fcf95486\r\nLabels:         app=nginx\r\n                pod-template-hash=56fcf95486\r\nAnnotations:    deployment.kubernetes.io\/desired-replicas: 3\r\n                deployment.kubernetes.io\/max-replicas: 4\r\n                deployment.kubernetes.io\/revision: 1\r\nControlled By:  Deployment\/nginx\r\nReplicas:       0 current \/ 3 desired\r\nPods Status:    0 Running \/ 0 Waiting \/ 0 Succeeded \/ 0 Failed\r\nPod Template:\r\n  Labels:  app=nginx\r\n           pod-template-hash=56fcf95486\r\n  Containers:\r\n   nginx:\r\n    Image:        nginx:latest\r\n    Port:         &lt;none&gt;\r\n    Host Port:    &lt;none&gt;\r\n    Environment:  
&lt;none&gt;\r\n    Mounts:       &lt;none&gt;\r\n  Volumes:        &lt;none&gt;\r\nConditions:\r\n  Type             Status  Reason\r\n  ----             ------  ------\r\n  ReplicaFailure   True    FailedCreate\r\nEvents:\r\n  Type     Reason        Age               From                   Message\r\n  ----     ------        ----              ----                   -------\r\n  Warning  FailedCreate  83s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-6457s\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  83s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-8pr6v\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  83s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-szt9c\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  83s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-lr5qn\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  83s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-pgt4r\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  83s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-8dvpm\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  82s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-lwk76\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  82s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-n84vk\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  
81s               replicaset-controller  Error creating: pods \"nginx-56fcf95486-mt69h\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n  Warning  FailedCreate  1s (x6 over 80s)  replicaset-controller  (combined from similar events): Error creating: pods \"nginx-56fcf95486-mcfxv\" is forbidden: failed quota: qtest: must specify cpu for: nginx; memory for: nginx\r\n<\/pre>\n<p>It doesn&#8217;t work because no resource requests or limits have been set on the deployment. This can easily be done using <code>kubectl set resources<\/code>:<\/p>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl set resources -h\r\n[root@k8s cka]# kubectl set resources deploy nginx --requests cpu=100m,memory=5Mi --limits cpu=200m,memory=20Mi -n limited\r\ndeployment.apps\/nginx resource requirements updated\r\n\r\n[root@k8s cka]# kubectl get all -n limited\r\nNAME                        READY   STATUS    RESTARTS   AGE\r\npod\/nginx-77d7cdd4d-p5dhh   0\/1     Pending   0          32s\r\n\r\nNAME                    READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/nginx   0\/3     1            0           8m47s\r\n\r\nNAME                               DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/nginx-56fcf95486   3         0         0       8m47s\r\nreplicaset.apps\/nginx-77d7cdd4d    1         1         0       32s\r\n\r\n[root@k8s cka]# kubectl get pods -n limited\r\nNAME                    READY   STATUS    RESTARTS   AGE\r\nnginx-77d7cdd4d-p5dhh   0\/1     Pending   0          55s\r\n\r\n[root@k8s cka]# kubectl taint nodes k8s.netico.pl storage-\r\nnode\/k8s.netico.pl untainted\r\n\r\n[root@k8s cka]# kubectl get pods -n limited\r\nNAME                    READY   STATUS              RESTARTS   AGE\r\nnginx-77d7cdd4d-p5dhh   0\/1     ContainerCreating   0          117s\r\n\r\n[root@k8s cka]# kubectl get pods -n limited\r\nNAME                    READY   STATUS    RESTARTS   AGE\r\nnginx-77d7cdd4d-p5dhh   1\/1     Running   0        
  2m15s\r\n\r\n[root@k8s cka]# kubectl describe quota -n limited\r\nName:       qtest\r\nNamespace:  limited\r\nResource    Used  Hard\r\n--------    ----  ----\r\ncpu         100m  100m\r\nmemory      5Mi   500Mi\r\npods        1     3\r\n<\/pre>\n<p>Only one Pod is running because the CPU quota is exhausted: that Pod requests 100m, which uses the entire 100m hard limit. We can edit the quota and set <code>spec.hard.cpu<\/code> to 1000m instead of 100m.<\/p>\n<pre class=\"lang:default mark:13 decode:true\">[root@k8s cka]# kubectl edit quota -n limited\r\n\r\napiVersion: v1\r\nkind: ResourceQuota\r\nmetadata:\r\n  creationTimestamp: \"2024-02-06T12:50:47Z\"\r\n  name: qtest\r\n  namespace: limited\r\n  resourceVersion: \"405684\"\r\n  uid: 88b14c91-7097-48f9-a71e-0b159ad49916\r\nspec:\r\n  hard:\r\n    cpu: 1000m\r\n    memory: 500Mi\r\n    pods: \"3\"\r\nstatus:\r\n  hard:\r\n    cpu: 100m\r\n    memory: 500Mi\r\n    pods: \"3\"\r\n  used:\r\n    cpu: 100m\r\n    memory: 5Mi\r\n    pods: \"1\"\r\n\r\n\r\n[root@k8s cka]# kubectl describe quota -n limited\r\nName:       qtest\r\nNamespace:  limited\r\nResource    Used  Hard\r\n--------    ----  ----\r\ncpu         300m  1\r\nmemory      15Mi  500Mi\r\npods        3     3\r\n<\/pre>\n<p>Now we can see that all three Pods have been scheduled.<\/p>\n<p><span style=\"color: #3366ff;\">Defining LimitRange<\/span><\/p>\n<ul>\n<li><code>kubectl explain limitrange.spec<\/code><\/li>\n<li><code>kubectl create ns limited<\/code><\/li>\n<li><code>kubectl apply -f limitrange.yaml -n limited<\/code><\/li>\n<li><code>kubectl describe ns limited<\/code><\/li>\n<li><code>kubectl run limitpod --image=nginx -n limited<\/code><\/li>\n<li><code>kubectl describe pod limitpod -n limited<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default mark:96-99 decode:true\">[root@k8s cka]# kubectl explain -h\r\n\r\n[root@k8s cka]# kubectl explain limitrange.spec.limits\r\n\r\n[root@k8s cka]# kubectl delete ns limited\r\nnamespace \"limited\" deleted\r\n\r\n[root@k8s cka]# kubectl create ns limited\r\nnamespace\/limited created\r\n\r\n[root@k8s cka]# cat 
limitrange.yaml\r\napiVersion: v1\r\nkind: LimitRange\r\nmetadata:\r\n  name: mem-limit-range\r\nspec:\r\n  limits:\r\n  - default:\r\n      memory: 512Mi\r\n    defaultRequest:\r\n      memory: 256Mi\r\n    type: Container\r\n\r\n[root@k8s cka]# kubectl apply -f limitrange.yaml -n limited\r\nlimitrange\/mem-limit-range created\r\n\r\n[root@k8s cka]# kubectl describe ns limited\r\nName:         limited\r\nLabels:       kubernetes.io\/metadata.name=limited\r\nAnnotations:  &lt;none&gt;\r\nStatus:       Active\r\n\r\nNo resource quota.\r\n\r\nResource Limits\r\n Type       Resource  Min  Max  Default Request  Default Limit  Max Limit\/Request Ratio\r\n ----       --------  ---  ---  ---------------  -------------  -----------------------\r\n Container  memory    -    -    256Mi            512Mi          -\r\n\r\n[root@k8s cka]# cat limitedpod.yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: limitedpod\r\nspec:\r\n  containers:\r\n  - name: demo\r\n    image: registry.k8s.io\/pause:2.0\r\n    resources:\r\n      requests:\r\n        cpu: 700m\r\n      limits:\r\n        cpu: 700m\r\n\r\n[root@k8s cka]# kubectl run limited --image=nginx -n limited\r\npod\/limited created\r\n\r\n[root@k8s cka]# kubectl describe ns limited\r\nName:         limited\r\nLabels:       kubernetes.io\/metadata.name=limited\r\nAnnotations:  &lt;none&gt;\r\nStatus:       Active\r\n\r\nNo resource quota.\r\n\r\nResource Limits\r\n Type       Resource  Min  Max  Default Request  Default Limit  Max Limit\/Request Ratio\r\n ----       --------  ---  ---  ---------------  -------------  -----------------------\r\n Container  memory    -    -    256Mi            512Mi          -\r\n\r\n\r\n[root@k8s cka]# kubectl describe pod limited -n limited\r\nName:             limited\r\nNamespace:        limited\r\nPriority:         0\r\nService Account:  default\r\nNode:             k8s.netico.pl\/172.30.9.24\r\nStart Time:       Tue, 06 Feb 2024 08:44:37 -0500\r\nLabels:           
run=limited\r\nAnnotations:      kubernetes.io\/limit-ranger: LimitRanger plugin set: memory request for container limited; memory limit for container limited\r\nStatus:           Running\r\nIP:               10.244.0.61\r\nIPs:\r\n  IP:  10.244.0.61\r\nContainers:\r\n  limited:\r\n    Container ID:   docker:\/\/a7f1549fb345c6200b35c5a9880fac91863b887df85a043f1c75088d6e75580c\r\n    Image:          nginx\r\n    Image ID:       docker-pullable:\/\/nginx@sha256:31754bca89a3afb25c04d6ecfa2d9671bc3972d8f4809ff855f7e35caa580de9\r\n    Port:           &lt;none&gt;\r\n    Host Port:      &lt;none&gt;\r\n    State:          Running\r\n      Started:      Tue, 06 Feb 2024 08:44:39 -0500\r\n    Ready:          True\r\n    Restart Count:  0\r\n    Limits:\r\n      memory:  512Mi\r\n    Requests:\r\n      memory:     256Mi\r\n    Environment:  &lt;none&gt;\r\n    Mounts:\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from kube-api-access-vtc92 (ro)\r\nConditions:\r\n  Type              Status\r\n  Initialized       True\r\n  Ready             True\r\n  ContainersReady   True\r\n  PodScheduled      True\r\nVolumes:\r\n  kube-api-access-vtc92:\r\n    Type:                    Projected (a volume that contains injected data from multiple sources)\r\n    TokenExpirationSeconds:  3607\r\n    ConfigMapName:           kube-root-ca.crt\r\n    ConfigMapOptional:       &lt;nil&gt;\r\n    DownwardAPI:             true\r\nQoS Class:                   Burstable\r\nNode-Selectors:              &lt;none&gt;\r\nTolerations:                 node.kubernetes.io\/not-ready:NoExecute op=Exists for 300s\r\n                             node.kubernetes.io\/unreachable:NoExecute op=Exists for 300s\r\nEvents:\r\n  Type    Reason     Age    From               Message\r\n  ----    ------     ----   ----               -------\r\n  Normal  Scheduled  3m45s  default-scheduler  Successfully assigned limited\/limited to k8s.netico.pl\r\n  Normal  Pulling    3m45s  kubelet            Pulling image 
\"nginx\"\r\n  Normal  Pulled     3m43s  kubelet            Successfully pulled image \"nginx\" in 1.258s (1.258s including waiting)\r\n  Normal  Created    3m43s  kubelet            Created container limited\r\n  Normal  Started    3m43s  kubelet            Started container limited\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Lab: Configuring Taints<\/span><\/p>\n<ul>\n<li>Create a taint on node worker2 that doesn&#8217;t allow new Pods to be<br \/>\nscheduled there unless they have the appropriate toleration for the<br \/>\nSSD hard disk taint set<\/li>\n<li>Remove the taint after verifying that it works<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl describe node k8s.example.pl | grep Taint\r\nTaints:             storage=ssd:NoSchedule\r\n\r\n[root@k8s cka]# kubectl create deploy newtaint --image=nginx replicas=3\r\nerror: exactly one NAME is required, got 2\r\nSee 'kubectl create deployment -h' for help and examples\r\n\r\n[root@k8s cka]# kubectl create deploy newtaint --image=nginx --replicas=3\r\ndeployment.apps\/newtaint created\r\n\r\n[root@k8s cka]# kubectl get all --selector app=newtaint\r\nNAME                            READY   STATUS    RESTARTS   AGE\r\npod\/newtaint-85fc66d575-bjlt5   0\/1     Pending   0          26s\r\npod\/newtaint-85fc66d575-h9ht7   0\/1     Pending   0          26s\r\npod\/newtaint-85fc66d575-lqfxm   0\/1     Pending   0          26s\r\n\r\nNAME                       READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/newtaint   0\/3     3            0           26s\r\n\r\nNAME                                  DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/newtaint-85fc66d575   3         3         0       26s\r\n[root@k8s cka]#\r\n[root@k8s cka]# kubectl edit deploy newtaint\r\ndeployment.apps\/newtaint edited\r\n<\/pre>\n<p>Add <code>tolerations:<\/code> in the Pod template spec, at the same level as <code>containers:<\/code>:<\/p>\n<pre class=\"lang:default decode:true \">      terminationGracePeriodSeconds: 30\r\n      
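# The toleration below must match the node taint storage=ssd:NoSchedule exactly\r\n      # (explanatory comment added here; not part of the original kubectl edit session)\r\n      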
tolerations:\r\n      - key: \"storage\"\r\n        operator: \"Equal\"\r\n        value: \"ssd\"\r\n        effect: \"NoSchedule\"\r\nstatus:\r\n<\/pre>\n<p>And now:<\/p>\n<pre class=\"lang:default decode:true \">[root@k8s cka]# kubectl get all --selector app=newtaint\r\nNAME                           READY   STATUS    RESTARTS   AGE\r\npod\/newtaint-bb94b7647-4bnzq   1\/1     Running   0          4m39s\r\npod\/newtaint-bb94b7647-cmsrf   1\/1     Running   0          4m44s\r\npod\/newtaint-bb94b7647-xnx5r   1\/1     Running   0          4m41s\r\n\r\nNAME                       READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/newtaint   3\/3     3            3           9m34s\r\n\r\nNAME                                  DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/newtaint-85fc66d575   0         0         0       9m34s\r\nreplicaset.apps\/newtaint-bb94b7647    3         3         3       4m44s\r\n\r\n[root@k8s cka]# kubectl get all --selector app=newtaint -o wide\r\nNAME                           READY   STATUS    RESTARTS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES\r\npod\/newtaint-bb94b7647-4bnzq   1\/1     Running   0          5m1s   10.244.0.64   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\npod\/newtaint-bb94b7647-cmsrf   1\/1     Running   0          5m6s   10.244.0.62   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\npod\/newtaint-bb94b7647-xnx5r   1\/1     Running   0          5m3s   10.244.0.63   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\n\r\nNAME                       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR\r\ndeployment.apps\/newtaint   3\/3     3            3           9m56s   nginx        nginx    app=newtaint\r\n\r\nNAME                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR\r\nreplicaset.apps\/newtaint-85fc66d575   0         0         0       9m56s   nginx        nginx    
app=newtaint,pod-template-hash=85fc66d575\r\nreplicaset.apps\/newtaint-bb94b7647    3         3         3       5m6s    nginx        nginx    app=newtaint,pod-template-hash=bb94b7647\r\n\r\n[root@k8s cka]# kubectl scale deploy newtaint --replicas=5\r\ndeployment.apps\/newtaint scaled\r\n\r\n\r\n[root@k8s cka]# kubectl get all --selector app=newtaint -o wide\r\nNAME                           READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES\r\npod\/newtaint-bb94b7647-4bnzq   1\/1     Running   0          6m20s   10.244.0.64   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\npod\/newtaint-bb94b7647-cmsrf   1\/1     Running   0          6m25s   10.244.0.62   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\npod\/newtaint-bb94b7647-cvwn2   1\/1     Running   0          7s      10.244.0.66   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\npod\/newtaint-bb94b7647-l4brf   1\/1     Running   0          7s      10.244.0.65   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\npod\/newtaint-bb94b7647-xnx5r   1\/1     Running   0          6m22s   10.244.0.63   k8s.example.pl   &lt;none&gt;           &lt;none&gt;\r\n\r\nNAME                       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR\r\ndeployment.apps\/newtaint   5\/5     5            5           11m   nginx        nginx    app=newtaint\r\n\r\nNAME                                  DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR\r\nreplicaset.apps\/newtaint-85fc66d575   0         0         0       11m     nginx        nginx    app=newtaint,pod-template-hash=85fc66d575\r\nreplicaset.apps\/newtaint-bb94b7647    5         5         5       6m25s   nginx        nginx    app=newtaint,pod-template-hash=bb94b7647\r\n\r\n[root@k8s cka]# kubectl delete deploy newtaint\r\ndeployment.apps \"newtaint\" deleted\r\n\r\n[root@k8s cka]# kubectl taint nodes k8s.example.pl storage-\r\nnode\/k8s.example.pl 
untainted\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":5951,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[99],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5356"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=5356"}],"version-history":[{"count":47,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5356\/revisions"}],"predecessor-version":[{"id":5952,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5356\/revisions\/5952"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media\/5951"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=5356"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=5356"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=5356"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}