{"id":5103,"date":"2023-08-05T15:22:48","date_gmt":"2023-08-05T13:22:48","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=5103"},"modified":"2023-09-22T09:18:27","modified_gmt":"2023-09-22T07:18:27","slug":"pod-scheduling-on-openshift","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/08\/05\/pod-scheduling-on-openshift\/","title":{"rendered":"Pod scheduling on Openshift"},"content":{"rendered":"<p><!--more--><\/p>\n<p><span style=\"color: #3366ff;\">Understanding the Scheduler<\/span><\/p>\n<ul>\n<li>3 types of rules are applied for Pod scheduling\n<ul>\n<li>Node labels<\/li>\n<li>Affinity rules<\/li>\n<li>Anti-affinity rules<\/li>\n<\/ul>\n<\/li>\n<li>The Pod Scheduler works through 3 steps:\n<ul>\n<li>Node filter: this is where node availability, but also selectors, taints, resource availability and more is evaluated<\/li>\n<li>Node prioritizing: based on affinity rules, nodes are prioritized<\/li>\n<li>Select the best nodes: the best scoring node is used, and if multiple nodes apply, round robin is used to select one of them<\/li>\n<\/ul>\n<\/li>\n<li>When used on cloud, scheduling by default happens within the boundaries of a region<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using Node Labels to Control Pod Placement<\/span><\/p>\n<ul>\n<li>Nodes can be configured with a label<\/li>\n<li>A label is an arbitrary key-value pair that is set with <code>oc label<\/code><\/li>\n<li>Pods can next be configured with a <code>nodeSelector<\/code> property on the Pod so that they&#8217;ll only run on the node that has the right label<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Applying Labels to Nodes<\/span><\/p>\n<ul>\n<li>A label is an arbitrary key-value pair that can be used as a selector for Pod placement<\/li>\n<li>Use <code>oc label node workerl.example.com env=test<\/code> to set the label<\/li>\n<li>Use <code>oc label node workerl.example.com env=prod --overwrite<\/code> to 
overwrite<\/li>\n<li>Use <code>oc label node worker1.example.com env-<\/code> to remove the label<\/li>\n<li>Use <code>oc get ... --show-labels<\/code> to show labels set on any type of resource<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Applying Labels to Machine Sets<\/span><\/p>\n<ul>\n<li>A machine set is a group of machines that is created when installing OpenShift using full stack automation<\/li>\n<li>Machine sets can be labeled so that nodes generated from the machine set will automatically get a label<\/li>\n<li>To see which nodes are in which machine set, use <code>oc get machines -n openshift-machine-api -o wide<\/code><\/li>\n<li>Use <code>oc edit machineset<\/code> &#8230; to set a label under <code>spec.template.spec.metadata<\/code> in the machine set<\/li>\n<li>Notice that nodes that were already generated from the machine set will not be updated with the new label<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Configuring NodeSelector on Pods<\/span><\/p>\n<ul>\n<li>Infrastructure-related Pods in an OpenShift cluster are configured to run on a control plane node<\/li>\n<li>Use <code>nodeSelector<\/code> on the Deployment or DeploymentConfig to configure its Pods to run on a node that has a specific label<\/li>\n<li>Use <code>oc edit<\/code> to apply a nodeSelector to existing Deployments or DeploymentConfigs<\/li>\n<li>If a Deployment is configured with a nodeSelector that doesn&#8217;t match any node label, its Pods will show as Pending<\/li>\n<li>Fix this by setting the Deployment <code>spec.template.spec.nodeSelector<\/code> to the desired key-value pair<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Configuring NodeSelector on Projects<\/span><\/p>\n<ul>\n<li>A nodeSelector can also be set on a project so that resources created in the project are automatically placed on the right nodes: <code>oc adm new-project test --node-selector \"env=test\"<\/code><\/li>\n<li>To configure a default nodeSelector on an existing project, add an 
annotation to its underlying namespace resource: <code>oc annotate namespace test openshift.io\/node-selector=\"env=test\" --overwrite<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using NodeSelector<\/span><\/p>\n<ul>\n<li><code>oc login -u developer -p password <\/code><\/li>\n<li><code>oc create deployment simple --image=bitnami\/nginx:latest <\/code><\/li>\n<li><code>oc get all <\/code><\/li>\n<li><code>oc scale --replicas 4 deployment\/simple <\/code><\/li>\n<li><code>oc get pods -o wide <\/code><\/li>\n<li><code>oc login -u admin -p password <\/code><\/li>\n<li><code>oc get nodes -L env <\/code><\/li>\n<li><code>oc label node NODE_NAME env=dev<\/code><\/li>\n<\/ul>\n<p>As a normal user:<\/p>\n<pre class=\"lang:default decode:true\">$ oc login -u developer -p developer\r\nLogin successful.\r\n\r\nYou have access to the following projects and can switch between them with 'oc project &lt;projectname&gt;':\r\n\r\n    debug\r\n    myproject\r\n  * network-security\r\n\r\nUsing project \"network-security\".\r\n\r\n$ oc new-project nodesel\r\nNow using project \"nodesel\" on server \"https:\/\/172.30.9.22:8443\".\r\n\r\nYou can add applications to this project with the 'new-app' command. 
For example, try:\r\n\r\n    oc new-app centos\/ruby-25-centos7~https:\/\/github.com\/sclorg\/ruby-ex.git\r\n\r\nto build a new example application in Ruby.\r\n\r\n$ oc create deployment simple --image=bitnami\/nginx:latest\r\ndeployment.apps\/simple created\r\n\r\n$ oc get all\r\nNAME                          READY     STATUS    RESTARTS   AGE\r\npod\/simple-776bd789d8-zmldg   1\/1       Running   0          5s\r\n\r\nNAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/simple   1         1         1            1           5s\r\n\r\nNAME                                DESIRED   CURRENT   READY     AGE\r\nreplicaset.apps\/simple-776bd789d8   1         1         1         5s\r\n\r\n$ oc scale --replicas 4 deployment\/simple\r\ndeployment.extensions\/simple scaled\r\n\r\n$ oc get pods -o wide\r\nNAME                      READY     STATUS    RESTARTS   AGE       IP            NODE        NOMINATED NODE\r\nsimple-776bd789d8-26zqb   1\/1       Running   0          8s        172.17.0.25   localhost   &lt;none&gt;\r\nsimple-776bd789d8-nphph   1\/1       Running   0          8s        172.17.0.24   localhost   &lt;none&gt;\r\nsimple-776bd789d8-tjltf   1\/1       Running   0          8s        172.17.0.26   localhost   &lt;none&gt;\r\nsimple-776bd789d8-zmldg   1\/1       Running   0          35s       172.17.0.22   localhost   &lt;none&gt;\r\n<\/pre>\n<p>As admin:<\/p>\n<pre class=\"lang:default decode:true\">$ oc login -u system:admin\r\nLogged into \"https:\/\/172.30.9.22:8443\" as \"system:admin\" using existing credentials.\r\n\r\nYou have access to the following projects and can switch between them with 'oc project &lt;projectname&gt;':\r\n\r\n    default\r\n  * nodesel\r\n\r\nUsing project \"nodesel\".\r\n\r\n$ oc get nodes --show-labels\r\nNAME        STATUS    ROLES     AGE       VERSION           LABELS\r\nlocalhost   Ready     &lt;none&gt;    4d        v1.11.0+d4cacc0   beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,kubernetes.io\/hostname=localhost\r\n\r\n$ oc 
label node localhost env=dev\r\nnode\/localhost labeled<\/pre>\n<p>Back to the developer user:<\/p>\n<pre class=\"lang:default decode:true\">$ oc login -u developer -p developer\r\nLogin successful.\r\n\r\nYou have access to the following projects and can switch between them with 'oc project &lt;projectname&gt;':\r\n\r\n    debug\r\n    myproject\r\n    network-security\r\n  * nodesel\r\n\r\nUsing project \"nodesel\".\r\n\r\n$ oc edit deployment\/simple\r\n\r\napiVersion: extensions\/v1beta1\r\nkind: Deployment\r\nmetadata:\r\n  annotations:\r\n    deployment.kubernetes.io\/revision: \"1\"\r\n  creationTimestamp: 2023-07-27T14:19:39Z\r\n  generation: 2\r\n  labels:\r\n    app: simple\r\n  name: simple\r\n  namespace: nodesel\r\n  resourceVersion: \"1679815\"\r\n  selfLink: \/apis\/extensions\/v1beta1\/namespaces\/nodesel\/deployments\/simple\r\n  uid: 9f3a67e2-2c88-11ee-8f96-8e5760356a66\r\nspec:\r\n  progressDeadlineSeconds: 600\r\n  replicas: 4\r\n  revisionHistoryLimit: 10\r\n  selector:\r\n    matchLabels:\r\n      app: simple\r\n  strategy:\r\n    rollingUpdate:\r\n      maxSurge: 25%\r\n      maxUnavailable: 25%\r\n    type: RollingUpdate\r\n  template:\r\n    metadata:\r\n      creationTimestamp: null\r\n      labels:\r\n        app: simple\r\n    spec:\r\n      containers:\r\n      - image: bitnami\/nginx:latest\r\n        imagePullPolicy: Always\r\n        name: nginx\r\n        resources: {}\r\n        terminationMessagePath: \/dev\/termination-log\r\n        terminationMessagePolicy: File\r\n      dnsPolicy: ClusterFirst\r\n      restartPolicy: Always\r\n      schedulerName: default-scheduler\r\n      securityContext: {}\r\n      terminationGracePeriodSeconds: 30\r\nstatus:\r\n  availableReplicas: 4\r\n  conditions:\r\n  - lastTransitionTime: 2023-07-27T14:19:39Z\r\n    lastUpdateTime: 2023-07-27T14:19:42Z\r\n    message: ReplicaSet \"simple-776bd789d8\" has successfully progressed.\r\n    reason: NewReplicaSetAvailable\r\n    status: \"True\"\r\n    
type: Progressing\r\n  - lastTransitionTime: 2023-07-27T14:20:11Z\r\n    lastUpdateTime: 2023-07-27T14:20:11Z\r\n    message: Deployment has minimum availability.\r\n    reason: MinimumReplicasAvailable\r\n    status: \"True\"\r\n    type: Available\r\n  observedGeneration: 2\r\n  readyReplicas: 4\r\n  replicas: 4\r\n  updatedReplicas: 4\r\n\r\n<\/pre>\n<p>Edit <code>deployment\/simple<\/code>, adding a <code>nodeSelector<\/code> to <code>spec.template.spec<\/code>, right after <code>dnsPolicy<\/code>:<\/p>\n<pre class=\"lang:default decode:true \">dnsPolicy: ClusterFirst\r\nnodeSelector:\r\n  env: blah<\/pre>\n<p>And then:<\/p>\n<pre class=\"lang:default decode:true\">$ oc get all\r\nNAME                          READY     STATUS    RESTARTS   AGE\r\npod\/simple-776bd789d8-26zqb   1\/1       Running   0          1h\r\npod\/simple-776bd789d8-nphph   1\/1       Running   0          1h\r\npod\/simple-776bd789d8-zmldg   1\/1       Running   0          1h\r\npod\/simple-77bd5f84cf-dhdch   0\/1       Pending   0          16m\r\npod\/simple-77bd5f84cf-hlzdv   0\/1       Pending   0          16m\r\n\r\n\r\nNAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/simple   4         5         2            3           47m\r\n\r\nNAME                                DESIRED   CURRENT   READY     AGE\r\nreplicaset.apps\/simple-776bd789d8   3         3         3         47m\r\nreplicaset.apps\/simple-77bd5f84cf   2         2         0         6m\r\n\r\n$ oc edit deployment.apps\/simple\r\ndeployment.apps\/simple edited\r\n<\/pre>\n<p>Let&#8217;s describe one of the pods that is not running due to the non-matching nodeSelector <code>env=blah<\/code>:<\/p>\n<pre class=\"lang:default decode:true\">$ oc get pods\r\nNAME                      READY     STATUS    RESTARTS   AGE\r\nsimple-776bd789d8-26zqb   1\/1       Running   0          1h\r\nsimple-776bd789d8-nphph   1\/1       Running   0          1h\r\nsimple-776bd789d8-zmldg   1\/1       Running   0          1h\r\nsimple-77bd5f84cf-dhdch   0\/1       Pending   0          16m\r\nsimple-77bd5f84cf-hlzdv   
0\/1       Pending   0          16m\r\n\r\n$ oc describe  pod simple-77bd5f84cf-hlzdv\r\nName:               simple-77bd5f84cf-hlzdv\r\nNamespace:          nodesel\r\nPriority:           0\r\nPriorityClassName:  &lt;none&gt;\r\nNode:               &lt;none&gt;\r\nLabels:             app=simple\r\n                    pod-template-hash=3368194079\r\nAnnotations:        openshift.io\/scc=restricted\r\nStatus:             Pending\r\nIP:\r\nControlled By:      ReplicaSet\/simple-77bd5f84cf\r\nContainers:\r\n  nginx:\r\n    Image:        bitnami\/nginx:latest\r\n    Port:         &lt;none&gt;\r\n    Host Port:    &lt;none&gt;\r\n    Environment:  &lt;none&gt;\r\n    Mounts:\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from default-token-pnms6 (ro)\r\nConditions:\r\n  Type           Status\r\n  PodScheduled   False\r\nVolumes:\r\n  default-token-pnms6:\r\n    Type:        Secret (a volume populated by a Secret)\r\n    SecretName:  default-token-pnms6\r\n    Optional:    false\r\nQoS Class:       BestEffort\r\nNode-Selectors:  env=blah\r\nTolerations:     &lt;none&gt;\r\nEvents:\r\n  Type     Reason            Age                From               Message\r\n  ----     ------            ----               ----               -------\r\n  Warning  FailedScheduling  2m (x93 over 17m)  default-scheduler  0\/1 nodes are available: 1 node(s) didn't match node selector.\r\n\r\n<\/pre>\n<p>Now change\u00a0 again nodeSelector in deployment\/simple :<\/p>\n<pre class=\"lang:default decode:true\">nodeSelector:\r\n\u00a0 env: dev<\/pre>\n<p>After the changes all pods are running again:<\/p>\n<pre class=\"lang:default decode:true \">$ oc get all\r\nNAME                          READY     STATUS    RESTARTS   AGE\r\npod\/simple-6f55965d79-5d59d   1\/1       Running   0          22s\r\npod\/simple-6f55965d79-5dt56   1\/1       Running   0          21s\r\npod\/simple-6f55965d79-mklpc   1\/1       Running   0          25s\r\npod\/simple-6f55965d79-q8pq9   1\/1       Running  
 0          25s\r\n\r\nNAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/simple   4         4         4            4           3h\r\n\r\nNAME                                DESIRED   CURRENT   READY     AGE\r\nreplicaset.apps\/simple-57f7866b4b   0         0         0         2h\r\nreplicaset.apps\/simple-6f55965d79   4         4         4         25s\r\nreplicaset.apps\/simple-776bd789d8   0         0         0         3h\r\nreplicaset.apps\/simple-77bd5f84cf   0         0         0         2h\r\nreplicaset.apps\/simple-8559698ddc   0         0         0         1h\r\n<\/pre>\n<p>Now let&#8217;s remove the label that was added earlier:<\/p>\n<pre class=\"lang:default decode:true \">$ oc login -u system:admin\r\nLogged into \"https:\/\/172.30.9.22:8443\" as \"system:admin\" using existing credentials.\r\n\r\nYou have access to the following projects and can switch between them with 'oc project &lt;projectname&gt;':\r\n\r\n  * default\r\n    nodesel\r\n\r\nUsing project \"default\".\r\n\r\n$ oc project nodesel\r\nNow using project \"nodesel\" on server \"https:\/\/172.30.9.22:8443\".\r\n\r\n$ oc get nodes --show-labels\r\nNAME        STATUS    ROLES     AGE       VERSION           LABELS\r\nlocalhost   Ready     &lt;none&gt;    4d        v1.11.0+d4cacc0   beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,env=dev,kubernetes.io\/hostname=localhost\r\n\r\n$ oc label node localhost env-\r\nnode\/localhost labeled\r\n\r\n$ oc get nodes --show-labels\r\nNAME        STATUS    ROLES     AGE       VERSION           LABELS\r\nlocalhost   Ready     &lt;none&gt;    4d        v1.11.0+d4cacc0   beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,kubernetes.io\/hostname=localhost\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Deployment or DeploymentConfig<\/span><\/p>\n<ul>\n<li>Deployment is the Kubernetes resource, DeploymentConfig is the OpenShift resource<\/li>\n<li>It doesn&#8217;t matter which 
one is used<\/li>\n<li>DeploymentConfig is created when working with the console<\/li>\n<li>Deployment is the standard, but when using <code>oc new-app --as-deployment-config<\/code> a DeploymentConfig is created instead<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Pod Affinity<\/span><\/p>\n<ul>\n<li>[Anti-]affinity defines relations between Pods<\/li>\n<li>podAffinity is a Pod property that tells the scheduler to locate a new Pod on the same node as other Pods<\/li>\n<li>podAntiAffinity tells the scheduler not to locate a new Pod on the same node as other Pods<\/li>\n<li>nodeAffinity tells a Pod (not) to schedule on nodes with specific labels<\/li>\n<li>[Anti-]affinity is applied based on Pod labels<\/li>\n<li><code>Required<\/code> affinity rules must be met before a Pod can be scheduled on a node<\/li>\n<li><code>Preferred<\/code> rules are not guaranteed<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">matchExpressions<\/span><\/p>\n<ul>\n<li>In affinity rules, a matchExpression specifies the key and the values that must match the label<\/li>\n<li>In a matchExpression the operator can have the following values\n<ul>\n<li>In<\/li>\n<li>NotIn<\/li>\n<li>Exists<\/li>\n<li>DoesNotExist<\/li>\n<li>Lt<\/li>\n<li>Gt<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Node Affinity<\/span><\/p>\n<ul>\n<li>Node affinity can be used to only run a Pod on a node that meets specific requirements<\/li>\n<li>Node affinity works like Pod affinity, but with labels that are set on the node<\/li>\n<li>Required rules must be met<\/li>\n<li>Preferred rules should be met<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using podAntiAffinity<\/span><\/p>\n<ul>\n<li><code>oc login -u admin -p password <\/code><\/li>\n<li><code>oc create -f anti-affinity.yaml <\/code><\/li>\n<li><code>oc get pods <\/code><\/li>\n<li><code>oc describe pod anti2<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<pre class=\"lang:default decode:true \">$ cat anti-affinity.yaml\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: anti1\r\n  labels:\r\n    love: ihateyou\r\nspec:\r\n  containers:\r\n  - name: ocp\r\n    image: docker.io\/ocpqe\/hello-pod\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: anti2\r\n  labels:\r\n    love: ihateyou\r\nspec:\r\n  containers:\r\n  - name: ocp\r\n    image: docker.io\/ocpqe\/hello-pod\r\n  affinity:\r\n    podAntiAffinity:\r\n      requiredDuringSchedulingIgnoredDuringExecution:\r\n      - labelSelector:\r\n          matchExpressions:\r\n          - key: love\r\n            operator: In\r\n            values:\r\n            - ihateyou\r\n        topologyKey: kubernetes.io\/hostname\r\n\r\n$ oc new-project love\r\nNow using project \"love\" on server \"https:\/\/172.30.9.22:8443\".\r\n\r\nYou can add applications to this project with the 'new-app' command. For example, try:\r\n\r\n    oc new-app centos\/ruby-25-centos7~https:\/\/github.com\/sclorg\/ruby-ex.git\r\n\r\nto build a new example application in Ruby.\r\n\r\n$ oc create -f anti-affinity.yaml\r\npod\/anti1 created\r\npod\/anti2 created\r\n\r\n$ oc get pods\r\nNAME      READY     STATUS    RESTARTS   AGE\r\nanti1     1\/1       Running   0          4s\r\nanti2     0\/1       Pending   0          4s\r\n\r\n$ oc describe pod anti2\r\nName:               anti2\r\nNamespace:          love\r\nPriority:           0\r\nPriorityClassName:  &lt;none&gt;\r\nNode:               &lt;none&gt;\r\nLabels:             love=ihateyou\r\nAnnotations:        openshift.io\/scc=anyuid\r\nStatus:             Pending\r\nIP:\r\nContainers:\r\n  ocp:\r\n    Image:        docker.io\/ocpqe\/hello-pod\r\n    Port:         &lt;none&gt;\r\n    Host Port:    &lt;none&gt;\r\n    Environment:  &lt;none&gt;\r\n    Mounts:\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from default-token-bcpgf (ro)\r\nConditions:\r\n  Type      
     Status\r\n  PodScheduled   False\r\nVolumes:\r\n  default-token-bcpgf:\r\n    Type:        Secret (a volume populated by a Secret)\r\n    SecretName:  default-token-bcpgf\r\n    Optional:    false\r\nQoS Class:       BestEffort\r\nNode-Selectors:  &lt;none&gt;\r\nTolerations:     &lt;none&gt;\r\nEvents:\r\n  Type     Reason            Age               From               Message\r\n  ----     ------            ----              ----               -------\r\n  Warning  FailedScheduling  1s (x4 over 15s)  default-scheduler  0\/1 nodes are available: 1 node(s) didn't match pod affinity\/anti-affinity, 1 node(s) didn't match pod anti-affinity rules.\r\n\r\n$ oc get pods\r\nNAME      READY     STATUS    RESTARTS   AGE\r\nanti1     1\/1       Running   0          37s\r\nanti2     0\/1       Pending   0          37s\r\n\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Using nodeAffinity<\/span><\/p>\n<ul>\n<li><code>oc login -u admin -p password <\/code><\/li>\n<li><code>oc create -f node-affinity.yaml <\/code><\/li>\n<li><code>oc describe pod runonssd <\/code><\/li>\n<li><code>oc label node crc-[Tab] disktype=nvme <\/code><\/li>\n<li><code>oc describe pod runonssd<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<pre class=\"lang:default decode:true\">$ oc whoami\r\nsystem:admin\r\n\r\n$ cat node-affinity*\r\n\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: runonssd\r\nspec:\r\n  affinity:\r\n    nodeAffinity:\r\n      requiredDuringSchedulingIgnoredDuringExecution:\r\n        nodeSelectorTerms:\r\n        - matchExpressions:\r\n          - key: disktype\r\n            operator: In\r\n            values:\r\n            - ssd\r\n            - nvme\r\n  containers:\r\n  - name: onssd\r\n    image: docker.io\/ocpqe\/hello-pod\r\n\r\n\r\n$ oc create -f node-affinity.yaml\r\npod\/runonssd created\r\n\r\n\r\n$ oc get pods\r\nNAME       READY     STATUS    RESTARTS   AGE\r\nanti1      1\/1       Running   0          1h\r\nanti2      0\/1       
Pending   0          1h\r\nrunonssd   1\/1       Running   0          7s\r\n\r\n$ oc describe pod runonssd\r\nName:               runonssd\r\nNamespace:          love\r\nPriority:           0\r\nPriorityClassName:  &lt;none&gt;\r\nNode:               localhost\/172.30.9.22\r\nStart Time:         Sat, 29 Jul 2023 12:46:04 +0200\r\nLabels:             &lt;none&gt;\r\nAnnotations:        openshift.io\/scc=anyuid\r\nStatus:             Running\r\nIP:                 172.17.0.37\r\nContainers:\r\n  onssd:\r\n    Container ID:   docker:\/\/f0167be7cd30fdb106fe08c50df644d5a705a65ac1dc2ae0e07f35a4006ce7b4\r\n    Image:          docker.io\/ocpqe\/hello-pod\r\n    Image ID:       docker-pullable:\/\/ocpqe\/hello-pod@sha256:04b6af86b03c1836211be2589db870dba09b7811c197c47c07fbbe33c7f80ef7\r\n    Port:           &lt;none&gt;\r\n    Host Port:      &lt;none&gt;\r\n    State:          Running\r\n      Started:      Sat, 29 Jul 2023 12:46:06 +0200\r\n    Ready:          True\r\n    Restart Count:  0\r\n    Environment:    &lt;none&gt;\r\n    Mounts:\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from default-token-bcpgf (ro)\r\nConditions:\r\n  Type              Status\r\n  Initialized       True\r\n  Ready             True\r\n  ContainersReady   True\r\n  PodScheduled      True\r\nVolumes:\r\n  default-token-bcpgf:\r\n    Type:        Secret (a volume populated by a Secret)\r\n    SecretName:  default-token-bcpgf\r\n    Optional:    false\r\nQoS Class:       BestEffort\r\nNode-Selectors:  &lt;none&gt;\r\nTolerations:     &lt;none&gt;\r\nEvents:\r\n  Type    Reason     Age   From                Message\r\n  ----    ------     ----  ----                -------\r\n  Normal  Scheduled  20s   default-scheduler   Successfully assigned love\/runonssd to localhost\r\n  Normal  Pulling    19s   kubelet, localhost  pulling image \"docker.io\/ocpqe\/hello-pod\"\r\n  Normal  Pulled     18s   kubelet, localhost  Successfully pulled image \"docker.io\/ocpqe\/hello-pod\"\r\n  
Normal  Created    18s   kubelet, localhost  Created container\r\n  Normal  Started    18s   kubelet, localhost  Started container\r\n\r\n\r\n$ oc label node localhost disktype=nvme\r\nnode\/localhost labeled\r\n\r\n\r\n$ oc get pods\r\nNAME       READY     STATUS    RESTARTS   AGE\r\nanti1      1\/1       Running   0          1h\r\nanti2      0\/1       Pending   0          1h\r\nrunonssd   1\/1       Running   0          1m\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Taints and Tolerations<\/span><\/p>\n<ul>\n<li>A taint allows a node to refuse a Pod unless the Pod has a matching toleration<\/li>\n<li><code>taints<\/code> are applied to nodes through the node spec<\/li>\n<li><code>tolerations<\/code> are applied to a Pod through the Pod spec<\/li>\n<li>Taints and tolerations consist of a key, a value, and an effect<\/li>\n<li>The effect is one of the following:\n<ul>\n<li><code>NoSchedule<\/code>: new Pods will not be scheduled<\/li>\n<li><code>PreferNoSchedule<\/code>: the scheduler tries to avoid scheduling new Pods<\/li>\n<li><code>NoExecute<\/code>: new Pods won&#8217;t be scheduled and existing Pods will be removed<\/li>\n<li>Each effect only applies to Pods that don&#8217;t have a matching toleration<\/li>\n<\/ul>\n<\/li>\n<li>Use <code>tolerationSeconds<\/code> to specify how long it takes before Pods are evicted when <code>NoExecute<\/code> is set<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">Managing Taints (Fails on CRC!)<\/span><\/p>\n<ul>\n<li><code>oc login -u admin -p password <\/code><\/li>\n<li><code>oc adm taint nodes crc-[Tab] key1=value1:NoSchedule <\/code><\/li>\n<li><code>oc run newpod --image=bitnami\/nginx <\/code><\/li>\n<li><code>oc get pods <\/code><\/li>\n<li><code>oc describe pod newpod <\/code><\/li>\n<li><code>oc edit pod newpod <\/code><\/li>\n<\/ul>\n<p>Then add a toleration to the Pod spec:<\/p>\n<pre class=\"lang:default decode:true\">spec:\r\n  tolerations:\r\n  - key: key1\r\n    value: value1\r\n    operator: Equal\r\n    effect: NoExecute<\/pre>\n<ul>\n<li><code>oc adm taint nodes crc-[Tab] key1-<\/code><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<pre class=\"lang:default decode:true\">$ oc whoami\r\nsystem:admin\r\n\r\n$ oc adm taint nodes localhost key1=value1:NoSchedule\r\nnode\/localhost tainted\r\n\r\n$ oc run newpod --image=bitnami\/nginx\r\ndeploymentconfig.apps.openshift.io\/newpod created\r\n\r\n$ oc get pods\r\nNAME              READY     STATUS    RESTARTS   AGE\r\nanti1             1\/1       Running   0          1h\r\nanti2             0\/1       Pending   0          1h\r\nnewpod-1-deploy   0\/1       Pending   0          6s\r\nrunonssd          1\/1       Running   0          15m\r\n\r\n$ oc describe pods newpod-1-deploy\r\nName:               newpod-1-deploy\r\nNamespace:          love\r\nPriority:           0\r\nPriorityClassName:  &lt;none&gt;\r\nNode:               &lt;none&gt;\r\nLabels:             openshift.io\/deployer-pod-for.name=newpod-1\r\nAnnotations:        openshift.io\/deployment-config.name=newpod\r\n                    openshift.io\/deployment.name=newpod-1\r\n                    openshift.io\/scc=restricted\r\nStatus:             Pending\r\nIP:\r\nContainers:\r\n  deployment:\r\n    Image:      openshift\/origin-deployer:v3.11\r\n    Port:       &lt;none&gt;\r\n    Host Port:  &lt;none&gt;\r\n    Environment:\r\n      OPENSHIFT_DEPLOYMENT_NAME:       newpod-1\r\n      OPENSHIFT_DEPLOYMENT_NAMESPACE:  love\r\n    Mounts:\r\n      \/var\/run\/secrets\/kubernetes.io\/serviceaccount from deployer-token-cdhw9 (ro)\r\nConditions:\r\n  Type           Status\r\n  PodScheduled   False\r\nVolumes:\r\n  deployer-token-cdhw9:\r\n    Type:        Secret (a volume populated by a Secret)\r\n    SecretName:  deployer-token-cdhw9\r\n    Optional:    false\r\nQoS Class:       BestEffort\r\nNode-Selectors:  &lt;none&gt;\r\nTolerations:     
&lt;none&gt;\r\nEvents:\r\n  Type     Reason            Age               From               Message\r\n  ----     ------            ----              ----               -------\r\n  Warning  FailedScheduling  4s (x5 over 31s)  default-scheduler  0\/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\r\n\r\n$ oc edit pod newpod-1-deploy\r\n\r\n\r\n<\/pre>\n<p>Editing:<\/p>\n<pre class=\"lang:default decode:true\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  annotations:\r\n    openshift.io\/deployment-config.name: newpod\r\n    openshift.io\/deployment.name: newpod-1\r\n    openshift.io\/scc: restricted\r\n  creationTimestamp: 2023-07-29T11:01:32Z\r\n  labels:\r\n    openshift.io\/deployer-pod-for.name: newpod-1\r\n  name: newpod-1-deploy\r\n  namespace: love\r\n  ownerReferences:\r\n  - apiVersion: v1\r\n    kind: ReplicationController\r\n    name: newpod-1\r\n    uid: 471096bf-2dff-11ee-8f96-8e5760356a66\r\n  resourceVersion: \"2333327\"\r\n  selfLink: \/api\/v1\/namespaces\/love\/pods\/newpod-1-deploy\r\n  uid: 471355a8-2dff-11ee-8f96-8e5760356a66\r\nspec:\r\n  activeDeadlineSeconds: 21600\r\n  containers:\r\n  - env:\r\n    - name: OPENSHIFT_DEPLOYMENT_NAME\r\n      value: newpod-1\r\n    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE\r\n      value: love\r\n    image: openshift\/origin-deployer:v3.11\r\n    imagePullPolicy: IfNotPresent\r\n    name: deployment\r\n    resources: {}\r\n    securityContext:\r\n      capabilities:\r\n        drop:\r\n        - KILL\r\n        - MKNOD\r\n        - SETGID\r\n        - SETUID\r\n      runAsUser: 1000410000\r\n    terminationMessagePath: \/dev\/termination-log\r\n    terminationMessagePolicy: File\r\n    volumeMounts:\r\n    - mountPath: \/var\/run\/secrets\/kubernetes.io\/serviceaccount\r\n      name: deployer-token-cdhw9\r\n      readOnly: true\r\n  dnsPolicy: ClusterFirst\r\n  imagePullSecrets:\r\n  - name: deployer-dockercfg-dpdps\r\n  priority: 0\r\n  restartPolicy: Never\r\n  schedulerName: 
default-scheduler\r\n  securityContext:\r\n    fsGroup: 1000410000\r\n    seLinuxOptions:\r\n      level: s0:c20,c15\r\n  serviceAccount: deployer\r\n  serviceAccountName: deployer\r\n  terminationGracePeriodSeconds: 10\r\n  volumes:\r\n  - name: deployer-token-cdhw9\r\n    secret:\r\n      defaultMode: 420\r\n      secretName: deployer-token-cdhw9\r\nstatus:\r\n  conditions:\r\n  - lastProbeTime: null\r\n    lastTransitionTime: 2023-07-29T11:01:32Z\r\n    message: '0\/1 nodes are available: 1 node(s) had taints that the pod didn''t tolerate.'\r\n    reason: Unschedulable\r\n    status: \"False\"\r\n    type: PodScheduled\r\n  phase: Pending\r\n  qosClass: BestEffort\r\n\r\n\r\n<\/pre>\n<p>Add the following under <code>spec<\/code>:<\/p>\n<pre class=\"lang:default decode:true\">  tolerations:\r\n  - key: key1\r\n    value: value1\r\n    operator: Equal\r\n    effect: NoExecute<\/pre>\n<p>After editing:<\/p>\n<pre class=\"lang:default decode:true\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  annotations:\r\n    openshift.io\/deployment-config.name: newpod\r\n    openshift.io\/deployment.name: newpod-1\r\n    openshift.io\/scc: restricted\r\n  creationTimestamp: 2023-07-29T11:01:32Z\r\n  labels:\r\n    openshift.io\/deployer-pod-for.name: newpod-1\r\n  name: newpod-1-deploy\r\n  namespace: love\r\n  ownerReferences:\r\n  - apiVersion: v1\r\n    kind: ReplicationController\r\n    name: newpod-1\r\n    uid: 471096bf-2dff-11ee-8f96-8e5760356a66\r\n  resourceVersion: \"2333327\"\r\n  selfLink: \/api\/v1\/namespaces\/love\/pods\/newpod-1-deploy\r\n  uid: 471355a8-2dff-11ee-8f96-8e5760356a66\r\nspec:\r\n  activeDeadlineSeconds: 21600\r\n  containers:\r\n  - env:\r\n    - name: OPENSHIFT_DEPLOYMENT_NAME\r\n      value: newpod-1\r\n    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE\r\n      value: love\r\n    image: openshift\/origin-deployer:v3.11\r\n    imagePullPolicy: IfNotPresent\r\n    name: deployment\r\n    resources: {}\r\n    securityContext:\r\n      capabilities:\r\n        drop:\r\n        - KILL\r\n        - MKNOD\r\n        - SETGID\r\n        - SETUID\r\n      runAsUser: 1000410000\r\n    terminationMessagePath: \/dev\/termination-log\r\n    terminationMessagePolicy: File\r\n    volumeMounts:\r\n    - mountPath: \/var\/run\/secrets\/kubernetes.io\/serviceaccount\r\n      name: deployer-token-cdhw9\r\n      readOnly: true\r\n  dnsPolicy: ClusterFirst\r\n  imagePullSecrets:\r\n  - name: deployer-dockercfg-dpdps\r\n  priority: 0\r\n  restartPolicy: Never\r\n  schedulerName: default-scheduler\r\n  securityContext:\r\n    fsGroup: 1000410000\r\n    seLinuxOptions:\r\n      level: s0:c20,c15\r\n  serviceAccount: deployer\r\n  serviceAccountName: deployer\r\n  terminationGracePeriodSeconds: 10\r\n  tolerations:\r\n  - key: key1\r\n    value: value1\r\n    operator: Equal\r\n    effect: NoExecute\r\n  volumes:\r\n  - name: deployer-token-cdhw9\r\n    secret:\r\n      defaultMode: 420\r\n      secretName: deployer-token-cdhw9\r\nstatus:\r\n  conditions:\r\n  - lastProbeTime: null\r\n    lastTransitionTime: 2023-07-29T11:01:32Z\r\n    message: '0\/1 nodes are available: 1 node(s) had taints that the pod didn''t tolerate.'\r\n    reason: Unschedulable\r\n    status: \"False\"\r\n    type: PodScheduled\r\n  phase: Pending\r\n  qosClass: BestEffort\r\n<\/pre>\n<p>Note that the pod stays Pending: the taint was set with effect <code>NoSchedule<\/code>, and a toleration with effect <code>NoExecute<\/code> does not tolerate it; the effect must match (or be left out to tolerate any effect):<\/p>\n<pre class=\"lang:default decode:true \">$ oc get pods\r\nNAME              READY     STATUS    RESTARTS   AGE\r\nanti1             1\/1       Running   0          1h\r\nanti2             0\/1       Pending   0          1h\r\nnewpod-1-deploy   0\/1       Pending   0          21m\r\nrunonssd          1\/1       Running   0          37m\r\n<\/pre>\n<p>Let&#8217;s remove the taint; the pod that was Pending before now starts to run:<\/p>\n<pre class=\"lang:default decode:true \">$ oc adm taint nodes localhost key1-\r\nnode\/localhost untainted\r\n\r\n$ oc get pods\r\nNAME             READY     STATUS    RESTARTS   AGE\r\nanti1            1\/1       Running   0          1h\r\nanti2            0\/1       Pending   0          1h\r\nnewpod-1-qgmpj   1\/1       Running   0          12s\r\nrunonssd         1\/1       Running   0          38m\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[93],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5103"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=5103"}],"version-history":[{"count":4,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5103\/revisions"}],"predecessor-version":[{"id":5115,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5103\/revisions\/5115"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=5103"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=5103"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-jso
n\/wp\/v2\/tags?post=5103"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}