Understanding Pod Scaling
- The desired number of Pods is set in the Deployment (or DeploymentConfig)
- From there, the ReplicaSet (or ReplicationController) guarantees that this number of replicas is running
- The Deployment uses a selector to identify the Pods it replicates
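The pieces above can be seen together in a minimal Deployment sketch (hypothetical names; only the fields discussed are shown):

```yaml
# Hypothetical example: replicas plus the selector that ties
# the Deployment to the Pods it manages.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # desired number of Pods
  selector:
    matchLabels:
      app: myapp           # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: bitnami/nginx:latest
```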
Scaling Pods Manually
- Use oc scale to manually scale the number of Pods
- oc scale --replicas=3 deployment myapp
 
- While doing this, the new desired number of replicas is written to the Deployment, and from there to the ReplicaSet
Let's try to scale Pods manually:
# oc login -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    debug
    myproject
    network-security
  * nodesel

Using project "nodesel".
[root@okd ~]# oc get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/simple-6f55965d79-5d59d   1/1       Running   0          18m
pod/simple-6f55965d79-5dt56   1/1       Running   0          18m
pod/simple-6f55965d79-mklpc   1/1       Running   0          18m
pod/simple-6f55965d79-q8pq9   1/1       Running   0          18m

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/simple   4         4         4            4           3h

NAME                                DESIRED   CURRENT   READY     AGE
replicaset.apps/simple-57f7866b4b   0         0         0         2h
replicaset.apps/simple-6f55965d79   4         4         4         18m
replicaset.apps/simple-776bd789d8   0         0         0         3h
replicaset.apps/simple-77bd5f84cf   0         0         0         3h
replicaset.apps/simple-8559698ddc   0         0         0         1h
[root@okd ~]# oc scale --replicas=3 deployment deployment.apps/simple
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'oc get resource/<resource_name>' instead of 'oc get resource resource/<resource_name>')
[root@okd ~]# oc scale --replicas=3 deployment.apps/simple
deployment.apps/simple scaled
[root@okd ~]# oc get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/simple-6f55965d79-5d59d   1/1       Running   0          19m
pod/simple-6f55965d79-mklpc   1/1       Running   0          19m
pod/simple-6f55965d79-q8pq9   1/1       Running   0          19m

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/simple   3         3         3            3           3h

NAME                                DESIRED   CURRENT   READY     AGE
replicaset.apps/simple-57f7866b4b   0         0         0         2h
replicaset.apps/simple-6f55965d79   3         3         3         19m
replicaset.apps/simple-776bd789d8   0         0         0         3h
replicaset.apps/simple-77bd5f84cf   0         0         0         3h
replicaset.apps/simple-8559698ddc   0         0         0         1h
Another way to do the same:
$ oc edit deployment.apps/simple
deployment.apps/simple edited
And change:
spec:
  progressDeadlineSeconds: 600
  replicas: 3
to
spec:
  progressDeadlineSeconds: 600
  replicas: 2
Now the number of pods is limited to two:
$ oc get all
NAME                          READY     STATUS    RESTARTS   AGE
pod/simple-6f55965d79-mklpc   1/1       Running   0          23m
pod/simple-6f55965d79-q8pq9   1/1       Running   0          23m

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/simple   2         2         2            2           3h

NAME                                DESIRED   CURRENT   READY     AGE
replicaset.apps/simple-57f7866b4b   0         0         0         3h
replicaset.apps/simple-6f55965d79   2         2         2         23m
replicaset.apps/simple-776bd789d8   0         0         0         3h
replicaset.apps/simple-77bd5f84cf   0         0         0         3h
replicaset.apps/simple-8559698ddc   0         0         0         2h
Autoscaling Pods
- OpenShift provides the HorizontalPodAutoscaler resource for automatically scaling Pods
- This resource depends on the OpenShift Metrics subsystem, which is pre-installed in OpenShift 4
- To use autoscaling, resource requests need to be specified so that the autoscaler knows when to add or remove Pods
- Use resource requests or project resource limitations to take care of this

- Currently, autoscaling is based on CPU usage; autoscaling on memory utilization is in tech preview
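As a sketch, the HorizontalPodAutoscaler that `oc autoscale deploy auto --min 5 --max 10 --cpu-percent 20` would create could look like this (hypothetical names and thresholds; the autoscaling/v1 API supports CPU only):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: auto
spec:
  scaleTargetRef:                      # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: auto
  minReplicas: 5
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20   # scale up when average CPU use exceeds 20% of the requested CPU
```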
$ oc login -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * nodesel

Using project "nodesel".
$ oc new-project auto
Now using project "auto" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
$ oc new-app --name auto php-https://github.com/sandervanvugt/simpleapp
error: git ls-remote failed with: fatal: Unable to find remote helper for 'php-https'; local file access failed with: stat php-https://github.com/sandervanvugt/simpleapp: no such file or directory
error: unable to locate any images in image streams, templates loaded in accessible projects, template files, local docker images with name "php-https://github.com/sandervanvugt/simpleapp"

Argument 'php-https://github.com/sandervanvugt/simpleapp' was classified as an image, image~source, or loaded template reference.

The 'oc new-app' command will match arguments to the following types:

  1. Images tagged into image streams in the current project or the 'openshift' project
     - if you don't specify a tag, we'll add ':latest'
  2. Images in the Docker Hub, on remote registries, or on the local Docker engine
  3. Templates in the current project or the 'openshift' project
  4. Git repository URLs or local paths that point to Git repositories

--allow-missing-images can be used to point to an image that does not exist yet.

See 'oc new-app -h' for examples.
$ oc new-app --name auto https://github.com/sandervanvugt/simpleapp
--> Found Docker image 5d0da3d (22 months old) from Docker Hub for "centos"

    * An image stream tag will be created as "centos:latest" that will track the source image
    * A Docker build using source code from https://github.com/sandervanvugt/simpleapp will be created
      * The resulting image will be pushed to image stream tag "auto:latest"
      * Every time "centos:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "auto"
    * The image does not expose any ports - if you want to load balance or send traffic to this component
      you will need to create a service with 'expose dc/auto --port=[port]' later
    * WARNING: Image "centos" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream.image.openshift.io "centos" created
    imagestream.image.openshift.io "auto" created
    buildconfig.build.openshift.io "auto" created
    deploymentconfig.apps.openshift.io "auto" created
--> Success
    Build scheduled, use 'oc logs -f bc/auto' to track its progress.
    Run 'oc status' to view your app.
$ oc status
In project auto on server https://172.30.9.22:8443

dc/auto deploys istag/auto:latest <-
  bc/auto docker builds https://github.com/sandervanvugt/simpleapp on istag/centos:latest
    build #1 failed 26 seconds ago - a6c13bc: message (sandervanvugt <mail@sandervanvugt.nl>)
  deployment #1 waiting on image or update

Errors:
  * build/auto-1 has failed.

1 error, 1 warning, 2 infos identified, use 'oc status --suggest' to see details.
$ oc get deploy
No resources found.
$ oc new-app --name auto php-https://github.com/sandervanvugt/simpleapp
error: git ls-remote failed with: fatal: Unable to find remote helper for 'php-https'; local file access failed with: stat php-https://github.com/sandervanvugt/simpleapp: no such file or directory
error: unable to locate any images in image streams, templates loaded in accessible projects, template files, local docker images with name "php-https://github.com/sandervanvugt/simpleapp"

Argument 'php-https://github.com/sandervanvugt/simpleapp' was classified as an image, image~source, or loaded template reference.

The 'oc new-app' command will match arguments to the following types:

  1. Images tagged into image streams in the current project or the 'openshift' project
     - if you don't specify a tag, we'll add ':latest'
  2. Images in the Docker Hub, on remote registries, or on the local Docker engine
  3. Templates in the current project or the 'openshift' project
  4. Git repository URLs or local paths that point to Git repositories

--allow-missing-images can be used to point to an image that does not exist yet.

See 'oc new-app -h' for examples.
$ oc get pods
NAME           READY     STATUS    RESTARTS   AGE
auto-1-build   0/1       Error     0          2m
$ oc get deploy
No resources found.
$ oc autoscale deploy auto --min 5 --max 10 --cpu-percent 20
Error from server (NotFound): deployments.extensions "auto" not found
$ oc get hpa
No resources found.
$ oc get pods
NAME           READY     STATUS    RESTARTS   AGE
auto-1-build   0/1       Error     0          18m
$ oc get deploy
No resources found.
$ oc get hpa -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Resource Requests and Limits
- Resource requests and limits are applied on a per-application basis
- Quotas are enforced on a project or cluster basis
- In a Pod's spec.containers.resources.requests, a Pod can request minimal amounts of CPU and memory resources
- The scheduler will look for a node that meets these requirements

- In a Pod's spec.containers.resources.limits, the Pod can be limited to a maximum use of resources
- cgroups are used on the node to enforce the limits
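In YAML, requests and limits sit side by side under each container; a sketch with illustrative values (the same ones used in the demo further down):

```yaml
spec:
  containers:
  - name: nginx
    image: bitnami/nginx
    resources:
      requests:          # the scheduler looks for a node with this much free
        cpu: 10m
        memory: 1Mi
      limits:            # enforced on the node via cgroups
        cpu: 20m
        memory: 5Mi
```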
 
Setting Resources
- Use oc set resources to set resource requests as well as limits, or edit the YAML directly
- Resource restrictions can be set on individual containers, as well as on a complete Deployment
- oc set resources deployment hello-world-nginx --requests cpu=10m,memory=10Mi --limits cpu=50m,memory=50Mi
- Use oc set resources -h for ready-to-use examples
Setting Resource Limits
- oc create deployment nee --image=bitnami/nginx:latest --replicas=3
- oc get pods
- oc set resources deploy nee --requests cpu=10m,memory=1Mi --limits cpu=20m,memory=5Mi
- oc get pods # one new pod will be stuck in CreateContainerError
- oc describe pods nee-xxxx # will show this is because of resource limits
- oc set resources deploy nee --requests cpu=0m,memory=0Mi --limits cpu=0m,memory=0Mi
- oc get pods
$ oc login -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * auto

Using project "auto".
$ oc new-project limits
Now using project "limits" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
$ oc create deployment nee --image=bitnami/nginx --replicas=3
Error: unknown flag: --replicas
$ oc create deployment -h
Create a deployment with the specified name.

Aliases: deployment, deploy

Usage:
  oc create deployment NAME --image=image [--dry-run] [flags]

Examples:
  # Create a new deployment named my-dep that runs the busybox image.
  oc create deployment my-dep --image=busybox

Options:
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
      --dry-run=false: If true, only print the object that would be sent, without sending it.
      --generator='': The name of the API generator to use.
      --image=[]: Image name to run.
  -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|templatefile|template|jsonpath|jsonpath-file.
      --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
      --validate=false: If true, use a schema to validate the input before sending it

Use "oc options" for a list of global command-line options (applies to all commands).
$ oc create deployment nee --image=bitnami/nginx
deployment.apps/nee created
$ oc get all
NAME                       READY     STATUS    RESTARTS   AGE
pod/nee-6f4f4dbf77-p247v   1/1       Running   0          26s

NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nee   1         1         1            1           26s

NAME                             DESIRED   CURRENT   READY     AGE
replicaset.apps/nee-6f4f4dbf77   1         1         1         26s
$ oc get pods
NAME                   READY     STATUS    RESTARTS   AGE
nee-6f4f4dbf77-p247v   1/1       Running   0          37s
$ oc set resources deploy nee --requests=cpu=10m,memory=1Mi --limits=cpu=20m,memory=5Mi
deployment.extensions/nee resource requirements updated
$ oc get pods
NAME                   READY     STATUS                 RESTARTS   AGE
nee-6f4f4dbf77-p247v   1/1       Running                0          7m
nee-7b855dcd99-zh7z6   0/1       CreateContainerError   0          3m
$ oc get deploy
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nee       1         2         1            1           8m
[root@okd ~]# oc describe pods nee-7b855dcd99-zh7z6
Name:               nee-7b855dcd99-zh7z6
Namespace:          limits
Priority:           0
PriorityClassName:  <none>
Node:               localhost/172.30.9.22
Start Time:         Fri, 28 Jul 2023 18:16:29 +0200
Labels:             app=nee
                    pod-template-hash=3641187855
Annotations:        openshift.io/scc=restricted
Status:             Pending
IP:                 172.17.0.24
Controlled By:      ReplicaSet/nee-7b855dcd99
Containers:
  nginx:
    Container ID:
    Image:          bitnami/nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     20m
      memory:  5Mi
    Requests:
      cpu:        10m
      memory:     1Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5g6h (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-h5g6h:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h5g6h
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:
  Type     Reason     Age              From                Message
  ----     ------     ----             ----                -------
  Normal   Scheduled  4m               default-scheduler   Successfully assigned limits/nee-7b855dcd99-zh7z6 to localhost
  Normal   Pulled     3m (x8 over 4m)  kubelet, localhost  Successfully pulled image "bitnami/nginx"
  Warning  Failed     3m (x8 over 4m)  kubelet, localhost  Error: Error response from daemon: Minimum memory limit allowed is 6MB
  Normal   Pulling    3m (x9 over 4m)  kubelet, localhost  pulling image "bitnami/nginx"
$ oc set resources deploy nee --requests=cpu=0m,memory=0Mi --limits=cpu=0m,memory=0Mi
deployment.extensions/nee resource requirements updated
$ oc get pods
NAME                   READY     STATUS        RESTARTS   AGE
nee-597889d8c7-p6tc2   1/1       Running       0          4s
nee-6f4f4dbf77-p247v   0/1       Terminating   0          10m
$ oc describe deployment nee-6f4f4dbf77-p247v
Error from server (NotFound): deployments.extensions "nee-6f4f4dbf77-p247v" not found
$ oc describe deployment nee
Name:                   nee
Namespace:              limits
CreationTimestamp:      Fri, 28 Jul 2023 18:12:04 +0200
Labels:                 app=nee
Annotations:            deployment.kubernetes.io/revision=5
Selector:               app=nee
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nee
  Containers:
   nginx:
    Image:      bitnami/nginx
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nee-597889d8c7 (1/1 replicas created)
Events:
  Type    Reason             Age               From                   Message
  ----    ------             ----              ----                   -------
  Normal  ScalingReplicaSet  10m               deployment-controller  Scaled up replica set nee-6f4f4dbf77 to 1
  Normal  ScalingReplicaSet  6m                deployment-controller  Scaled up replica set nee-c944fdfd6 to 1
  Normal  ScalingReplicaSet  6m (x2 over 6m)   deployment-controller  Scaled up replica set nee-7b855dcd99 to 1
  Normal  ScalingReplicaSet  6m                deployment-controller  Scaled down replica set nee-c944fdfd6 to 0
  Normal  ScalingReplicaSet  32s (x2 over 6m)  deployment-controller  Scaled down replica set nee-7b855dcd99 to 0
  Normal  ScalingReplicaSet  32s               deployment-controller  Scaled up replica set nee-597889d8c7 to 1
  Normal  ScalingReplicaSet  30s               deployment-controller  Scaled down replica set nee-6f4f4dbf77 to 0
$ oc get pods
NAME                   READY     STATUS    RESTARTS   AGE
nee-597889d8c7-p6tc2   1/1       Running   0          44s
Monitoring Resource Availability
- Use oc describe node nodename to get information about current CPU and memory usage for each Pod running on the node
- Notice the summary at the end of the output, where you'll see the requests as well as the limits that have been set

- Use oc adm top to get actual resource usage
- Notice this requires the metrics server to be installed and configured
 
$ oc login -u developer -p developer
$ oc describe node localhost
Error from server (Forbidden): nodes "localhost" is forbidden: User "developer" cannot get nodes at the cluster scope: no RBAC policy matched
$ oc login -u kubadmin -p kubepass
$ oc describe node localhost
Error from server (Forbidden): nodes "localhost" is forbidden: User "kubadmin" cannot get nodes at the cluster scope: no RBAC policy matched
$ oc login -u system:admin
Logged into "https://172.30.9.22:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    auto
    debug
  * default

Using project "default".
$ oc describe node localhost
Name:               localhost
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=localhost
Annotations:        volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Sat, 22 Jul 2023 20:54:32 +0200
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Fri, 28 Jul 2023 18:29:58 +0200   Sat, 22 Jul 2023 20:54:22 +0200   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Fri, 28 Jul 2023 18:29:58 +0200   Sat, 22 Jul 2023 20:54:22 +0200   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 28 Jul 2023 18:29:58 +0200   Sat, 22 Jul 2023 20:54:22 +0200   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 28 Jul 2023 18:29:58 +0200   Sat, 22 Jul 2023 20:54:22 +0200   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 28 Jul 2023 18:29:58 +0200   Sat, 22 Jul 2023 20:54:22 +0200   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.30.9.22
  Hostname:    localhost
Capacity:
 cpu:            8
 hugepages-1Gi:  0
 hugepages-2Mi:  0
 memory:         7981844Ki
 pods:           250
Allocatable:
 cpu:            8
 hugepages-1Gi:  0
 hugepages-2Mi:  0
 memory:         7879444Ki
 pods:           250
System Info:
 Machine ID:                 a37388a4746444f1b3f079f777748845
 System UUID:                6099DE02-9EA8-C210-7553-A7697F2C302A
 Boot ID:                    7c076895-85e9-45ce-ae2c-8bbe7127be73
 Kernel Version:             3.10.0-1160.92.1.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://24.0.3
 Kubelet Version:            v1.11.0+d4cacc0
 Kube-Proxy Version:         v1.11.0+d4cacc0
Non-terminated Pods:         (32 in total)
  Namespace                      Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                      ----                                                       ------------  ----------  ---------------  -------------
  debug                          dnginx-88c7766dd-hlbtd                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        bitginx-1-jzk9r                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        busybox                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        docker-registry-1-ctgff                                    100m (1%)     0 (0%)      256Mi (3%)       0 (0%)
  default                        lab4pod                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        linginx1-dc9f65f54-6zw8j                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        linginx2-69bf6fc66b-mv6wx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        nginx                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        nginx-cm                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        pv-pod                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                        router-1-k8zgt                                             100m (1%)     0 (0%)      256Mi (3%)       0 (0%)
  default                        test1                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-dns                       kube-dns-t727w                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-proxy                     kube-proxy-cr7kh                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                    kube-controller-manager-localhost                          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                    kube-scheduler-localhost                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                    master-api-localhost                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                    master-etcd-localhost                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)
  limits                         nee-597889d8c7-p6tc2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  network-security               nginxlab-1-bcgkt                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  nodesel                        simple-6f55965d79-mklpc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  nodesel                        simple-6f55965d79-q8pq9                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-apiserver            openshift-apiserver-thwpd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-controller-manager   openshift-controller-manager-c9ms5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-core-operators       openshift-service-cert-signer-operator-6d477f986b-jzcgw    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-core-operators       openshift-web-console-operator-664b974ff5-px7gw            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-service-cert-signer  apiservice-cabundle-injector-8ffbbb6dc-x9l4r               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-service-cert-signer  service-serving-cert-signer-668c45d5f-lxvff                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-web-console          webconsole-78f59b4bfb-qqv4p                                100m (1%)     0 (0%)      100Mi (1%)       0 (0%)
  source-project                 nginx-access                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  source-project                 nginx-noaccess                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  target-project                 nginx-target-1-9kdn6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       300m (3%)   0 (0%)
  memory    612Mi (7%)  0 (0%)
Events:     <none>
Using Quotas
- Quotas are used to apply limits
- On the number of objects, such as Pods, services, and routes
- On compute resources, such as CPU, memory and storage
 
- Quotas are useful for preventing the exhaustion of vital resources
- Etcd
- IP addresses
- Compute capacity of worker nodes
 
- Quotas are applied to new resources but do not limit current resources
- To apply quota, the ResourceQuota resource is used
- Use a YAML file, or: oc create quota my-quota --hard services=10,cpu=1400,memory=1.8Gi
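The same quota expressed as a YAML manifest, as a sketch using the values from the command above:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
spec:
  hard:
    services: "10"     # at most 10 Service objects in the project
    cpu: "1400"        # total CPU requests across all Pods
    memory: 1.8Gi      # total memory requests across all Pods
```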
Quota Scope
- resourcequotas are applied to projects to limit use of resources
- clusterresourcequotas apply quota with a cluster scope
- Multiple resourcequotas can be applied to the same project
- The effect is cumulative
- Limit one specific resource type for each quota resource used

- Use oc create quota -h for command-line help on how to apply quota
- Avoid using YAML
Verifying Resource Quota
- oc get resourcequota gives an overview of all resourcequota API resources
- oc describe quota shows the cumulative quotas from all resourcequotas in the current project
Quota-related Failures
- If a modification exceeds the resource count (like the number of Pods), OpenShift denies the modification immediately
- If a modification exceeds the quota for a compute resource (such as available RAM), OpenShift does not fail immediately, which gives the administrator some time to fix the issue
- If a quota that restricts usage of compute resources is active, OpenShift also refuses to create Pods that do not have resource requests or limits set
- It's also recommended to use LimitRange to specify default values for resource requests
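A LimitRange can supply those defaults, so Pods that set no explicit requests or limits are still admitted under the quota; a sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-defaults
spec:
  limits:
  - type: Container
    default:            # limits applied when a container sets none
      cpu: 20m
      memory: 20Mi
    defaultRequest:     # requests applied when a container sets none
      cpu: 10m
      memory: 5Mi
```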
Applying Resource Quota
- oc login -u developer -p password
- oc new-project quota-test
- oc login -u admin -p password
- oc create quota qtest --hard pods=3,cpu=100,memory=500Mi
- oc describe quota
- oc login -u developer -p password
- oc create deploy bitginx --image=bitnami/nginx:latest --replicas=3
- oc get all # no pods
- oc describe rs/bitginx-xxx # it fails because the quota requires resource requests and limits, which have not been set on the deployment
- oc set resources deploy bitginx --requests cpu=10m,memory=5Mi --limits cpu=20m,memory=20Mi
| 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 | $ oc login -u developer -p developer Login successful. You have access to the following projects and can switch between them with 'oc project <projectname>':   * auto Using project "auto". $ oc new-project quota-test Now using project "quota-test" on server "https://172.30.9.22:8443". You can add applications to this project with the 'new-app' command. For example, try:     oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git to build a new example application in Ruby. $ oc login -u system:admin Logged into "https://172.30.9.22:8443" as "system:admin" using existing credentials. 
You have access to the following projects and can switch between them with 'oc project <projectname>':   * quota-test Using project "quota-test". $ oc create quota qtest --hard pods=3,cpu=100,memory=500Mi resourcequota/qtest created $ oc describe quota Name:       qtest Namespace:  quota-test Resource    Used  Hard --------    ----  ---- cpu         0     100 memory      0     500Mi pods        0     3 $ oc login -u developer -p developer Login successful. You have access to the following projects and can switch between them with 'oc project <projectname>':   * quota-test Using project "quota-test". $ oc create deploy bitginx --image=bitnami/nginx:latest --replicas=3 Error: unknown flag: --replicas Aliases: deployment, deploy Usage:   oc create deployment NAME --image=image [--dry-run] [flags] Examples:   # Create a new deployment named my-dep that runs the busybox image.   oc create deployment my-dep --image=busybox Options:       --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only appl                                                  ies to golang and jsonpath output formats.       --dry-run=false: If true, only print the object that would be sent, without sending it.       --generator='': The name of the API generator to use.       --image=[]: Image name to run.   -o, --output='': Output format. One of: json|yaml|name|go-template-file|templatefile|template|go-template|jsonpath|jsonpath-file.       --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unch                                                  anged. This flag is useful when you want to perform kubectl apply on this object in the future.       --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. 
The template format is golang te                                                  mplates [http://golang.org/pkg/text/template/#pkg-overview].       --validate=false: If true, use a schema to validate the input before sending it Use "oc options" for a list of global command-line options (applies to all commands). $ oc create deploy bitginx --image=bitnami/nginx:latest deployment.apps/bitginx created $ oc get all NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE deployment.apps/bitginx   1         0         0            0           6s NAME                                 DESIRED   CURRENT   READY     AGE replicaset.apps/bitginx-794f7cf64f   1         0         0         6s $ oc get all NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE deployment.apps/bitginx   1         0         0            0           21s NAME                                 DESIRED   CURRENT   READY     AGE replicaset.apps/bitginx-794f7cf64f   1         0         0         21s $ oc describe rs/bitginx-794f7cf64f Name:           bitginx-794f7cf64f Namespace:      quota-test Selector:       app=bitginx,pod-template-hash=3509379209 Labels:         app=bitginx                 pod-template-hash=3509379209 Annotations:    deployment.kubernetes.io/desired-replicas=1                 deployment.kubernetes.io/max-replicas=2                 deployment.kubernetes.io/revision=1 Controlled By:  Deployment/bitginx Replicas:       0 current / 1 desired Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template:   Labels:  app=bitginx            pod-template-hash=3509379209   Containers:    nginx:     Image:        bitnami/nginx:latest     Port:         <none>     Host Port:    <none>     Environment:  <none>     Mounts:       <none>   Volumes:        <none> Conditions:   Type             Status  Reason   ----             ------  ------   ReplicaFailure   True    FailedCreate Events:   Type     Reason        Age                From                
   Message   ----     ------        ----               ----                   -------   Warning  FailedCreate  50s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-hk8l5" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-jp7p9" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-plhwc" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-gc5l4" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-bgrcz" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-k89qz" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-w5hv7" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  49s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-xcknq" is forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  48s                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-r5frr" is 
forbidden: failed quota:                                                   qtest: must specify cpu,memory   Warning  FailedCreate  29s (x5 over 47s)  replicaset-controller  (combined from similar events): Error creating: pods "bitginx-794f7cf64f-p78                                                  6m" is forbidden: failed quota: qtest: must specify cpu,memory $ oc set resources deploy bitginx --requests cpu=10m,memory=5Mi --limits cpu=50m,memory=20Mi deployment.extensions/bitginx resource requirements updated $ oc get all NAME                           READY     STATUS    RESTARTS   AGE pod/bitginx-84b698ff5c-x8228   1/1       Running   0          7s NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE deployment.apps/bitginx   1         1         1            1           2m NAME                                 DESIRED   CURRENT   READY     AGE replicaset.apps/bitginx-794f7cf64f   0         0         0         2m replicaset.apps/bitginx-84b698ff5c   1         1         1         7s [root@okd ~]# oc describe rs/bitginx-794f7cf64f Name:           bitginx-794f7cf64f Namespace:      quota-test Selector:       app=bitginx,pod-template-hash=3509379209 Labels:         app=bitginx                 pod-template-hash=3509379209 Annotations:    deployment.kubernetes.io/desired-replicas=1                 deployment.kubernetes.io/max-replicas=2                 deployment.kubernetes.io/revision=1 Controlled By:  Deployment/bitginx Replicas:       0 current / 0 desired Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template:   Labels:  app=bitginx            pod-template-hash=3509379209   Containers:    nginx:     Image:        bitnami/nginx:latest     Port:         <none>     Host Port:    <none>     Environment:  <none>     Mounts:       <none>   Volumes:        <none> Events:   Type     Reason        Age               From                   Message   ----     ------        ----              ----                   -------   Warning  
FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-hk8l5" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-jp7p9" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-plhwc" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-gc5l4" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-bgrcz" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-k89qz" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-w5hv7" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-xcknq" is forbidden: failed quota: q                                                  test: must specify cpu,memory   Warning  FailedCreate  2m                replicaset-controller  Error creating: pods "bitginx-794f7cf64f-r5frr" is forbidden: failed quota: q                                                  test: must specify cpu,memory   
Warning  FailedCreate  27s (x7 over 2m)  replicaset-controller  (combined from similar events): Error creating: pods "bitginx-794f7cf64f-m85v                                                  g" is forbidden: failed quota: qtest: must specify cpu,memory $ oc scale deployment bitginx --replicas=3 deployment.extensions/bitginx scaled $ oc get all NAME                           READY     STATUS    RESTARTS   AGE pod/bitginx-84b698ff5c-2tbzt   1/1       Running   0          14s pod/bitginx-84b698ff5c-48mn7   1/1       Running   0          14s pod/bitginx-84b698ff5c-x8228   1/1       Running   0          1m NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE deployment.apps/bitginx   3         3         3            3           4m NAME                                 DESIRED   CURRENT   READY     AGE replicaset.apps/bitginx-794f7cf64f   0         0         0         4m replicaset.apps/bitginx-84b698ff5c   3         3         3         1m $ oc describe rs/bitginx-794f7cf64f Name:           bitginx-794f7cf64f Namespace:      quota-test Selector:       app=bitginx,pod-template-hash=3509379209 Labels:         app=bitginx                 pod-template-hash=3509379209 Annotations:    deployment.kubernetes.io/desired-replicas=1                 deployment.kubernetes.io/max-replicas=2                 deployment.kubernetes.io/revision=1 Controlled By:  Deployment/bitginx Replicas:       0 current / 0 desired Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template:   Labels:  app=bitginx            pod-template-hash=3509379209   Containers:    nginx:     Image:        bitnami/nginx:latest     Port:         <none>     Host Port:    <none>     Environment:  <none>     Mounts:       <none>   Volumes:        <none> Events:   Type     Reason        Age              From                   Message   ----     ------        ----             ----                   -------   Warning  FailedCreate  4m               replicaset-controller  Error 
creating: pods "bitginx-794f7cf64f-hk8l5" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-jp7p9" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-plhwc" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-gc5l4" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-bgrcz" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-k89qz" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-w5hv7" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-xcknq" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  4m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-r5frr" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  2m (x7 over 4m)  replicaset-controller  
(combined from similar events): Error creating: pods "bitginx-794f7cf64f-m85vg                                                  " is forbidden: failed quota: qtest: must specify cpu,memory $ oc get all NAME                           READY     STATUS    RESTARTS   AGE pod/bitginx-84b698ff5c-2tbzt   1/1       Running   0          1m pod/bitginx-84b698ff5c-48mn7   1/1       Running   0          1m pod/bitginx-84b698ff5c-x8228   1/1       Running   0          2m NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE deployment.apps/bitginx   3         3         3            3           5m NAME                                 DESIRED   CURRENT   READY     AGE replicaset.apps/bitginx-794f7cf64f   0         0         0         5m replicaset.apps/bitginx-84b698ff5c   3         3         3         2m $ oc delete rs bitginx-84b698ff5c-2tbzt Error from server (NotFound): replicasets.extensions "bitginx-84b698ff5c-2tbzt" not found [root@okd ~]# oc delete rs pod/bitginx-84b698ff5c-2tbzt error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 
'oc get resource/<                                                  resource_name>' instead of 'oc get resource resource/<resource_name>' $ oc delete pod bitginx-84b698ff5c-2tbzt pod "bitginx-84b698ff5c-2tbzt" deleted $ oc delete pod bitginx-84b698ff5c-48mn7 pod "bitginx-84b698ff5c-48mn7" deleted $ oc delete pod bitginx-84b698ff5c-x8228 pod "bitginx-84b698ff5c-x8228" deleted $ oc get all NAME                           READY     STATUS    RESTARTS   AGE pod/bitginx-84b698ff5c-h5j57   1/1       Running   0          27s pod/bitginx-84b698ff5c-qxwd5   1/1       Running   0          13s pod/bitginx-84b698ff5c-xfnp4   1/1       Running   0          47s NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE deployment.apps/bitginx   3         3         3            3           7m NAME                                 DESIRED   CURRENT   READY     AGE replicaset.apps/bitginx-794f7cf64f   0         0         0         7m replicaset.apps/bitginx-84b698ff5c   3         3         3         5m $ oc describe rs/bitginx-794f7cf64f Name:           bitginx-794f7cf64f Namespace:      quota-test Selector:       app=bitginx,pod-template-hash=3509379209 Labels:         app=bitginx                 pod-template-hash=3509379209 Annotations:    deployment.kubernetes.io/desired-replicas=1                 deployment.kubernetes.io/max-replicas=2                 deployment.kubernetes.io/revision=1 Controlled By:  Deployment/bitginx Replicas:       0 current / 0 desired Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template:   Labels:  app=bitginx            pod-template-hash=3509379209   Containers:    nginx:     Image:        bitnami/nginx:latest     Port:         <none>     Host Port:    <none>     Environment:  <none>     Mounts:       <none>   Volumes:        <none> Events:   Type     Reason        Age              From                   Message   ----     ------        ----             ----                   -------   Warning  FailedCreate  7m       
        replicaset-controller  Error creating: pods "bitginx-794f7cf64f-hk8l5" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-jp7p9" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-plhwc" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-gc5l4" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-bgrcz" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-k89qz" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-w5hv7" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-xcknq" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  7m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-r5frr" is forbidden: failed quota: qt                                                  est: must specify cpu,memory   Warning  FailedCreate  5m (x7 
over 7m)  replicaset-controller  (combined from similar events): Error creating: pods "bitginx-794f7cf64f-m85vg                                                  " is forbidden: failed quota: qtest: must specify cpu,memory $ oc describe rs/bitginx-794f7cf64f Name:           bitginx-794f7cf64f Namespace:      quota-test Selector:       app=bitginx,pod-template-hash=3509379209 Labels:         app=bitginx                 pod-template-hash=3509379209 Annotations:    deployment.kubernetes.io/desired-replicas=1                 deployment.kubernetes.io/max-replicas=2                 deployment.kubernetes.io/revision=1 Controlled By:  Deployment/bitginx Replicas:       0 current / 0 desired Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template:   Labels:  app=bitginx            pod-template-hash=3509379209   Containers:    nginx:     Image:        bitnami/nginx:latest     Port:         <none>     Host Port:    <none>     Environment:  <none>     Mounts:       <none>   Volumes:        <none> Events:   Type     Reason        Age              From                   Message   ----     ------        ----             ----                   -------   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-hk8l5" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-jp7p9" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-plhwc" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-gc5l4" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-bgrcz" is forbidden: failed 
quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-k89qz" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-w5hv7" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-xcknq" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  8m               replicaset-controller  Error creating: pods "bitginx-794f7cf64f-r5frr" is forbidden: failed quota: qtest: must specify cpu,memory   Warning  FailedCreate  5m (x7 over 8m)  replicaset-controller  (combined from similar events): Error creating: pods "bitginx-794f7cf64f-m85vg" is forbidden: failed quota: qtest: must specify cpu,memory | 
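The quota behavior shown above can be sketched in a few lines: a minimal, hypothetical model (not the real OpenShift admission code) of how a ResourceQuota that tracks cpu and memory rejects pods that declare no requests, and how pod creation fails once usage would exceed the hard limits. The `admit` helper and the plain-number quantities are assumptions made for illustration; real quotas parse units such as `100m` and `500Mi`.

```python
def admit(pod_requests, used, hard):
    """Decide whether a new pod fits the quota. Returns (allowed, reason).

    pod_requests: resource requests declared by the pod, e.g. {"cpu": 10}
    used:         current usage tracked by the quota
    hard:         hard limits of the quota
    """
    # A quota that covers cpu/memory forces every pod to declare requests,
    # which is exactly the "must specify cpu,memory" error in the transcript.
    for resource in ("cpu", "memory"):
        if resource in hard and resource not in pod_requests:
            return False, "must specify cpu,memory"
    # The pod count may not exceed the hard pod limit.
    if used.get("pods", 0) + 1 > hard.get("pods", float("inf")):
        return False, "exceeded quota: pods"
    # Adding the pod's requests may not exceed any hard resource limit.
    for resource, req in pod_requests.items():
        if used.get(resource, 0) + req > hard.get(resource, float("inf")):
            return False, f"exceeded quota: {resource}"
    return True, ""

hard = {"pods": 3, "cpu": 100, "memory": 500}        # like the qtest quota above
print(admit({}, {"pods": 0}, hard))                  # -> (False, 'must specify cpu,memory')
print(admit({"cpu": 10, "memory": 5}, {"pods": 0}, hard))  # -> (True, '')
```

This mirrors the demo: the bitginx pods were rejected until `oc set resources` added requests, after which scaling to 3 replicas fit within `pods=3`.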
Using Limit Ranges
- A LimitRange resource defines default, minimum, and maximum values for compute resource requests
- A limit range is set on a project and applies to the individual resources within it
- Limit ranges can specify CPU and memory for containers and Pods
- Limit ranges can specify storage for images and PVCs
- Use a project template to apply the limit range to any new project created from that moment on
- The main difference between a limit range and a resource quota is that a limit range constrains the values allowed for each individual resource, whereas a project quota sets the maximum that all resources in a project may use combined
Creating a Limit Range
- oc new-project limits
- oc login -u admin -p password
- oc explain limitrange.spec.limits
- oc create --save-config -f limits.yaml
- oc get limitrange
- oc describe limitrange limit-limits
$ oc login -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    debug
    limits
  * quota-test

Using project "quota-test".
$ oc new-project limits
Error from server (AlreadyExists): project.project.openshift.io "limits" already exists
$ oc project limits
Now using project "limits" on server "https://172.30.9.22:8443".
$ oc explain limitrange.spec.limits
KIND:     LimitRange
VERSION:  v1

RESOURCE: limits <[]Object>

DESCRIPTION:
     Limits is the list of LimitRangeItem objects that are enforced.

     LimitRangeItem defines a min/max usage limit for any resource that matches
     on kind.

FIELDS:
   default      <map[string]string>
     Default resource requirement limit value by resource name if resource limit
     is omitted.

   defaultRequest       <map[string]string>
     DefaultRequest is the default resource requirement request value by
     resource name if resource request is omitted.

   max  <map[string]string>
     Max usage constraints on this kind by resource name.

   maxLimitRequestRatio <map[string]string>
     MaxLimitRequestRatio if specified, the named resource must have a request
     and limit that are both non-zero where limit divided by request is less
     than or equal to the enumerated value; this represents the max burst for
     the named resource.

   min  <map[string]string>
     Min usage constraints on this kind by resource name.

   type <string>
     Type of resource that this limit applies to.

$ oc explain limitrange.spec.limits.type
KIND:     LimitRange
VERSION:  v1

FIELD:    type <string>

DESCRIPTION:
     Type of resource that this limit applies to.

$ cd ex280
$ vi limitrange.yaml
$ cat limitrange.yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: null
    name: admin
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ${PROJECT_ADMIN_USER}
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: ${PROJECT_NAME}-quota
  spec:
    hard:
      cpu: 3
      memory: 10G
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: ${PROJECT_NAME}-limits
  spec:
    limits:
      - type: Container
        defaultRequest:
          cpu: 30m
          memory: 30M
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
$ cat limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-limits
spec:
  limits:
  - type: Pod
    max:
      cpu: 500m
      memory: 2Mi
    min:
      cpu: 10m
      memory: 1Mi
  - type: Container
    max:
      cpu: 500m
      memory: 500Mi
    min:
      cpu: 10m
      memory: 5Mi
    default:
      cpu: 250m
      memory: 200Mi
    defaultRequest:
      cpu: 20m
      memory: 20Mi
  - type: openshift.io/Image
    max:
      storage: 1Gi
  - type: openshift.io/ImageStream
    max:
      openshift.io/image-tags: 10
      openshift.io/images: 20
  - type: PersistentVolumeClaim
    min:
      storage: 2Gi
    max:
      storage: 50Gi
$ oc create --save-config -f limits.yaml
Error from server (Forbidden): error when creating "limits.yaml": limitranges is forbidden: User "developer" cannot create limitranges in the namespace "limits": no RBAC policy matched
$ oc login -u system:admin
Logged into "https://172.30.9.22:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
  * limits

Using project "limits".
$ oc create --save-config -f limits.yaml
limitrange/limit-limits created
$ oc get limitrange
NAME           CREATED AT
limit-limits   2023-07-29T07:34:24Z
$ oc describe limitrange limit-limits
Name:                     limit-limits
Namespace:                limits
Type                      Resource                 Min  Max    Default Request  Default Limit  Max Limit/Request Ratio
----                      --------                 ---  ---    ---------------  -------------  -----------------------
Pod                       cpu                      10m  500m   -                -              -
Pod                       memory                   1Mi  2Mi    -                -              -
Container                 cpu                      10m  500m   20m              250m           -
Container                 memory                   5Mi  500Mi  20Mi             200Mi          -
openshift.io/Image        storage                  -    1Gi    -                -              -
openshift.io/ImageStream  openshift.io/image-tags  -    10     -                -              -
openshift.io/ImageStream  openshift.io/images      -    20     -                -              -
PersistentVolumeClaim     storage                  2Gi  50Gi   -                -              -
$ oc delete -f limits.yaml
limitrange "limit-limits" deleted
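The defaulting behavior from the `oc describe limitrange` output above can be illustrated with a small sketch. This is a hypothetical model, not Kubernetes code: `apply_limit_range` is an invented helper, values are plain millicores/MiB numbers, and only the container limits are validated against min/max (the real admission plugin also validates requests and the Pod-level totals). The numbers mirror the Container entry in limits.yaml.

```python
# Container constraints from limits.yaml, as plain numbers (cpu in
# millicores, memory in MiB) for simplicity.
CONTAINER_LIMITS = {
    "min": {"cpu": 10, "memory": 5},
    "max": {"cpu": 500, "memory": 500},
    "defaultRequest": {"cpu": 20, "memory": 20},
    "default": {"cpu": 250, "memory": 200},
}

def apply_limit_range(container, lr=CONTAINER_LIMITS):
    """Fill in missing requests/limits from the LimitRange defaults,
    then reject limits outside the allowed min/max window."""
    # defaultRequest fills omitted requests; default fills omitted limits.
    requests = {**lr["defaultRequest"], **container.get("requests", {})}
    limits = {**lr["default"], **container.get("limits", {})}
    for resource, value in limits.items():
        if not lr["min"][resource] <= value <= lr["max"][resource]:
            raise ValueError(
                f"{resource} limit {value} outside "
                f"[{lr['min'][resource]}, {lr['max'][resource]}]")
    return {"requests": requests, "limits": limits}

# A container that specifies nothing gets all four default values.
print(apply_limit_range({}))
```

A container that sets only `limits: {cpu: 900}` would be rejected by this sketch, much like the LimitRange admission check rejects out-of-range values.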
Applying Quotas to Multiple Projects
- The ClusterResourceQuota resource is created at cluster level and applies to multiple projects
- Administrators can specify which projects are subject to a cluster resource quota:
- By using the openshift.io/requester annotation to specify the project owner: all projects with that specific owner are subject to the quota
- By using a selector and labels: all projects whose labels match the selector are subject to the quota
 
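The two selection mechanisms above can be sketched as a simple matching function. This is a hypothetical illustration (the `selected` helper is invented, not part of any OpenShift API): a project falls under a cluster resource quota when either every label in the quota's label selector matches, or the project's openshift.io/requester annotation matches the specified owner.

```python
def selected(project, label_selector=None, requester=None):
    """Return True if a project is covered by a cluster resource quota
    defined with the given label selector and/or requester annotation."""
    if label_selector:
        labels = project.get("labels", {})
        # Every key/value in the selector must match the project's labels.
        if all(labels.get(k) == v for k, v in label_selector.items()):
            return True
    if requester is not None:
        annotations = project.get("annotations", {})
        # The openshift.io/requester annotation records the project owner.
        if annotations.get("openshift.io/requester") == requester:
            return True
    return False

test_project = {"labels": {"env": "testing"}}
dev_project = {"annotations": {"openshift.io/requester": "developer"}}
print(selected(test_project, label_selector={"env": "testing"}))  # True
print(selected(dev_project, requester="developer"))               # True
```

This corresponds to the two `oc create clusterquota` forms shown below: `--project-label-selector env=testing` and `--project-annotation-selector openshift.io/requester=developer`.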
Using Annotations or Labels
- This will set a cluster resource quota that applies to all projects owned by user developer
- oc create clusterquota user-developer --project-annotation-selector openshift.io/requester=developer --hard pods=10,secrets=10
 
- This will add a quota for all projects that have the label env=testing
- oc create clusterquota testing --project-label-selector env=testing --hard pods=5,services=2
- oc new-project test-project
- oc label ns test-project env=testing
 
- Project users can run oc describe quota to view the quotas that currently apply
- Tip! Set quotas on individual projects and try to avoid cluster-wide quotas; looking them up in large clusters may take a lot of time!
```
$ oc login -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * limits

Using project "limits".
$ oc create clusterquota testing --project-label-selector env=testing --hard pods=5,services=2
Error from server (Forbidden): clusterresourcequotas.quota.openshift.io is forbidden: User "developer" cannot create clusterresourcequotas.quota.openshift.io at the cluster scope: no RBAC policy matched
$ oc login -u system:admin
Logged into "https://172.30.9.22:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * limits

Using project "limits".
$ oc create clusterquota testing --project-label-selector env=testing --hard pods=5,services=2
clusterresourcequota.quota.openshift.io/testing created
$ oc get clusterquota
NAME      LABEL SELECTOR   ANNOTATION SELECTOR
testing   env=testing      map[]
$ oc get clusterquota -A
Error: unknown shorthand flag: 'A' in -A
$ oc new-project test-project
Now using project "test-project" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
$ oc describe quota
$ oc label ns test-project env=testing
namespace/test-project labeled
$ oc describe quota
$ oc describe quota -A
Error: unknown shorthand flag: 'A' in -A
$ oc describe ns test-project
Name:         test-project
Labels:       env=testing
Annotations:  openshift.io/description=
              openshift.io/display-name=
              openshift.io/requester=system:admin
              openshift.io/sa.scc.mcs=s0:c19,c9
              openshift.io/sa.scc.supplemental-groups=1000360000/10000
              openshift.io/sa.scc.uid-range=1000360000/10000
Status:       Active

No resource quota.

No resource limits.
$ oc create deploy nginxmany --image=bitnami/nginx
deployment.apps/nginxmany created
$ oc get all
NAME                             READY     STATUS    RESTARTS   AGE
pod/nginxmany-5859c9dbb6-6ljwm   1/1       Running   0          37s

NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginxmany   1         1         1            1           37s

NAME                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/nginxmany-5859c9dbb6   1         1         1         37s
$ oc scale --replicas 6 deployment.apps/nginxmany
deployment.apps/nginxmany scaled
$ oc get pods
NAME                         READY     STATUS    RESTARTS   AGE
nginxmany-5859c9dbb6-5xxr6   1/1       Running   0          15s
nginxmany-5859c9dbb6-6ljwm   1/1       Running   0          1m
nginxmany-5859c9dbb6-9lv6c   1/1       Running   0          15s
nginxmany-5859c9dbb6-dgr7k   1/1       Running   0          15s
nginxmany-5859c9dbb6-hk2sm   1/1       Running   0          15s
$ oc describe deploy nginxmany
Name:                   nginxmany
Namespace:              test-project
CreationTimestamp:      Sat, 29 Jul 2023 09:58:33 +0200
Labels:                 app=nginxmany
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=nginxmany
Replicas:               6 desired | 5 updated | 5 total | 5 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginxmany
  Containers:
   nginx:
    Image:        bitnami/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetAvailable
  ReplicaFailure   True    FailedCreate
  Available        True    MinimumReplicasAvailable
OldReplicaSets:    <none>
NewReplicaSet:     nginxmany-5859c9dbb6 (5/6 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set nginxmany-5859c9dbb6 to 1
  Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set nginxmany-5859c9dbb6 to 6
$ oc describe rs nginxmany-5859c9dbb6
Name:           nginxmany-5859c9dbb6
Namespace:      test-project
Selector:       app=nginxmany,pod-template-hash=1415758662
Labels:         app=nginxmany
                pod-template-hash=1415758662
Annotations:    deployment.kubernetes.io/desired-replicas=6
                deployment.kubernetes.io/max-replicas=8
                deployment.kubernetes.io/revision=1
Controlled By:  Deployment/nginxmany
Replicas:       5 current / 6 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginxmany
           pod-template-hash=1415758662
  Containers:
   nginx:
    Image:        bitnami/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason            Age               From                   Message
  ----     ------            ----              ----                   -------
  Normal   SuccessfulCreate  2m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-6ljwm
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-ksbmv" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Normal   SuccessfulCreate  1m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-9lv6c
  Normal   SuccessfulCreate  1m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-dgr7k
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-2njqh" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Normal   SuccessfulCreate  1m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-hk2sm
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-hxsp9" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Normal   SuccessfulCreate  1m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-5xxr6
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-g8x4g" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-49bcp" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-v7jgf" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-g78kq" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-tbnzj" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-2stw4" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m (x10 over 1m)  replicaset-controller  (combined from similar events): Error creating: pods "nginxmany-5859c9dbb6-k4qpx" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
$ oc describe rs nginxmany-5859c9dbb6
Name:           nginxmany-5859c9dbb6
Namespace:      test-project
Selector:       app=nginxmany,pod-template-hash=1415758662
Labels:         app=nginxmany
                pod-template-hash=1415758662
Annotations:    deployment.kubernetes.io/desired-replicas=6
                deployment.kubernetes.io/max-replicas=8
                deployment.kubernetes.io/revision=1
Controlled By:  Deployment/nginxmany
Replicas:       5 current / 6 desired
Pods Status:    5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginxmany
           pod-template-hash=1415758662
  Containers:
   nginx:
    Image:        bitnami/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason            Age               From                   Message
  ----     ------            ----              ----                   -------
  Normal   SuccessfulCreate  2m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-6ljwm
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-ksbmv" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Normal   SuccessfulCreate  2m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-9lv6c
  Normal   SuccessfulCreate  2m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-dgr7k
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-2njqh" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Normal   SuccessfulCreate  2m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-hk2sm
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-hxsp9" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Normal   SuccessfulCreate  2m                replicaset-controller  Created pod: nginxmany-5859c9dbb6-5xxr6
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-g8x4g" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-49bcp" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-v7jgf" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-g78kq" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-tbnzj" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      2m                replicaset-controller  Error creating: pods "nginxmany-5859c9dbb6-2stw4" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
  Warning  FailedCreate      1m (x10 over 2m)  replicaset-controller  (combined from similar events): Error creating: pods "nginxmany-5859c9dbb6-k4qpx" is forbidden: exceeded quota: testing, requested: pods=1, used: pods=5, limited: pods=5
```
As we can see, only 5 replicas were created because of the quota limit; the ReplicaSet keeps retrying the sixth Pod, and quota admission rejects it each time.
```
$ oc delete clusterquota testing
clusterresourcequota.quota.openshift.io "testing" deleted
```
Templates
- A Template is an API resource that can set different properties when creating a new project, such as:
- quota
- limit ranges
- network policies
- Use oc adm create-bootstrap-project-template -o yaml > mytemplate.yaml to generate a YAML file that can be modified further
- Add new resources under objects, specifying the kind of resource you want to add
- Next, edit projects.config.openshift.io/cluster to use the new template
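For instance, to give every new project a default LimitRange, you could append an object like the following to the objects: list of the generated template. This sketch mirrors the ${PROJECT_NAME}-limits object used in the limitrange.yaml template shown later in this section:

```yaml
# Appended under objects: in mytemplate.yaml;
# ${PROJECT_NAME} is substituted by the template engine at project creation.
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: ${PROJECT_NAME}-limits
  spec:
    limits:
    - type: Container
      defaultRequest:
        cpu: 30m
        memory: 30M
```

Because the object is part of the project template, the LimitRange is created automatically in each project requested after the template is activated.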
Setting Project Restrictions
- oc login -u admin -p password
- oc adm create-bootstrap-project-template -o yaml > mytemplate.yaml  # skipping the full edit here, as it is a lot of work to create
- oc create -f limitrange.yaml -n openshift-config
- oc describe limitrange test-limits
- oc edit projects.config.openshift.io/cluster
spec: 
  projectRequestTemplate: 
     name: project-request 
- watch oc get pods -n openshift-apiserver  # wait about 2 minutes for the API server pods to redeploy
- oc new-project test-project
- oc get resourcequotas,limitranges
- oc delete project test-project
- oc edit projects.config.openshift.io/cluster  # remove the spec again
```
$ oc login -u system:admin
Logged into "https://172.30.9.22:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * test-project

Using project "test-project".
$ oc adm create-bootstrap-project-template -o yaml > mytemplate.yaml
$ cat mytemplate.yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    annotations:
      openshift.io/description: Allows all pods in this namespace to pull images from
        this namespace.  It is auto-managed by a controller; remove subjects to disable.
    creationTimestamp: null
    name: system:image-pullers
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:image-puller
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts:${PROJECT_NAME}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    annotations:
      openshift.io/description: Allows builds in this namespace to push images to
        this namespace.  It is auto-managed by a controller; remove subjects to disable.
    creationTimestamp: null
    name: system:image-builders
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:image-builder
  subjects:
  - kind: ServiceAccount
    name: builder
    namespace: ${PROJECT_NAME}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    annotations:
      openshift.io/description: Allows deploymentconfigs in this namespace to rollout
        pods in this namespace.  It is auto-managed by a controller; remove subjects
        to disable.
    creationTimestamp: null
    name: system:deployers
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:deployer
  subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: ${PROJECT_NAME}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: null
    name: admin
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ${PROJECT_ADMIN_USER}
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
$ cat limitrange.yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: null
    name: admin
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ${PROJECT_ADMIN_USER}
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: ${PROJECT_NAME}-quota
  spec:
    hard:
      cpu: 3
      memory: 10G
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: ${PROJECT_NAME}-limits
  spec:
    limits:
    - type: Container
      defaultRequest:
        cpu: 30m
        memory: 30M
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER
$ oc create -f limitrange.yaml -n openshift-config
Error from server (NotFound): error when creating "limitrange.yaml": namespaces "openshift-config" not found
$ oc new-project openshift-config
Error from server (Forbidden): project.project.openshift.io "openshift-config" is forbidden: cannot request a project starting with "openshift-"
$ oc create -f limitrange.yaml
template.template.openshift.io/project-request created
$ oc get template
NAME              DESCRIPTION   PARAMETERS    OBJECTS
project-request                 5 (5 blank)   4
$ oc get template -n openshift-config
No resources found.
$ oc describe limitrange test-limits
Error from server (NotFound): limitranges "test-limits" not found
$ oc edit projects.config.openshift.io/cluster
error: the server doesn't have a resource type "projects"
$ watch oc get pods -n openshift-apiserver
$ oc new-project test-project
Error from server (AlreadyExists): project.project.openshift.io "test-project" already exists
$ oc new-project template-project
Now using project "template-project" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
$ oc get resourcequotas,limitranges
No resources found.
$ oc get resourcequotas
No resources found.
$ oc get limitranges
No resources found.
$ oc edit project.config.openshift.io/cluster
error: the server doesn't have a resource type "project"
```
Lab: Using Quota
- Create a new project with the name limit-project. Set quota on this project that meets the following requirements:
- Pods can use a max of 1Gi of memory
- A maximum of 4 Pods can be created
 
```
$ oc login -u system:admin
Logged into "https://172.30.9.22:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * love

Using project "love".
$ oc new-project limit-project
Now using project "limit-project" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
$ oc create quota qtest --hard pods=4,memory=1Gi
resourcequota/qtest created
$ oc describe project limit-project
Name:           limit-project
Created:        41 seconds ago
Labels:         <none>
Annotations:    openshift.io/description=
                openshift.io/display-name=
                openshift.io/requester=system:admin
                openshift.io/sa.scc.mcs=s0:c21,c5
                openshift.io/sa.scc.supplemental-groups=1000430000/10000
                openshift.io/sa.scc.uid-range=1000430000/10000
Display Name:   <none>
Description:    <none>
Status:         Active
Node Selector:  <none>
Quota:
                Name:           qtest
                Resource        Used    Hard
                --------        ----    ----
                memory          0       1Gi
                pods            0       4
Resource limits:        <none>
$ oc get quota
NAME      CREATED AT
qtest     2023-07-29T11:34:08Z
```
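The imperative oc create quota command used in the lab can also be expressed declaratively. A sketch of the equivalent ResourceQuota manifest for the qtest quota (apply it in the limit-project namespace with oc create -f):

```yaml
# Namespaced ResourceQuota: at most 4 Pods, 1Gi of memory in total
apiVersion: v1
kind: ResourceQuota
metadata:
  name: qtest
  namespace: limit-project
spec:
  hard:
    pods: "4"
    memory: 1Gi
```

Note that the memory key constrains the sum of the Pods' memory requests, so Pods without a memory request will be rejected once this quota is active.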

