Pod scaling, limit ranges and quotas on OpenShift

Understanding pod scaling

  • The desired number of Pods is set in the Deployment or Deployment Configuration
  • From there, the replicaset or replication controller is used to guarantee that this number of replicas is running
  • The Deployment uses a selector to identify the replicated Pods
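The relationship above can be sketched in YAML; the name myapp and the image are placeholders, not taken from a real cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # the desired number of Pods
  selector:
    matchLabels:
      app: myapp           # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: bitnami/nginx:latest
```

The ReplicaSet that the Deployment creates inherits this selector and continuously reconciles the actual number of Pods with spec.replicas.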

Scaling Pods Manually

  • Use oc scale to manually scale the number of Pods
    • oc scale --replicas=3 deployment myapp
  • While doing this, the new desired number of replicas is added to the Deployment, and from there written to the ReplicaSet

Let’s try to scale Pods manually:

  • oc scale --replicas=2 deployment myapp

Another way to do the same is to edit the Deployment directly:

  • oc edit deployment myapp

And change spec.replicas to 2. Now the number of Pods is limited to two:

  • oc get pods


Autoscaling Pods

  • OpenShift provides the HorizontalPodAutoscaler resource for automatically scaling Pods
  • This resource depends on the OpenShift Metrics subsystem, which is pre-installed in OpenShift 4
  • To use autoscaling, resource requests must be specified so that the autoscaler knows when to scale:
    • Use resource requests or project resource limitations to take care of this
  • Currently, autoscaling is based on CPU usage; autoscaling on memory utilization is in tech preview
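As a minimal sketch, a CPU-based HorizontalPodAutoscaler could look like this; the target name myapp, the replica bounds, and the 75% utilization target are placeholder values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:           # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # scale out when average CPU exceeds 75% of requests
```

The same resource can be generated from the command line with oc autoscale deployment myapp --min 2 --max 10 --cpu-percent 75.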


Resource Requests and Limits

  • Resource requests and limits are used on a per-application basis
  • Quotas are enforced on a project or cluster basis
  • In a Pod’s spec.containers[].resources.requests, a Pod can request minimal amounts of CPU and memory resources
    • The scheduler will look for a node that meets these requirements
  • In a Pod’s spec.containers[].resources.limits, the Pod can be limited to a maximum use of resources
    • cgroups are used on the node to enforce the limits
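In YAML, requests and limits sit side by side in the container spec. This sketch reuses the values from the oc set resources example below; the Pod and container names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: bitnami/nginx:latest
    resources:
      requests:            # the scheduler looks for a node with this much spare capacity
        cpu: 10m
        memory: 10Mi
      limits:              # enforced on the node via cgroups
        cpu: 50m
        memory: 50Mi
```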


Setting Resources

  • Use oc set resources to set resource requests as well as limits, or edit the YAML code directly
  • Resource restrictions can be set on individual containers, as well as on a complete deployment
  • oc set resources deployment hello-world-nginx --requests cpu=10m,memory=10Mi --limits cpu=50m,memory=50Mi
  • Use oc set resources -h for ready-to-use examples


Setting Resource Limits

  • oc create deployment nee --image=bitnami/nginx:latest --replicas=3
  • oc get pods
  • oc set resources deploy nee --requests cpu=10m,memory=1Mi --limits cpu=20m,memory=5Mi
  • oc get pods # one new pod will be stuck in state “Creating”
  • oc describe pods nee-xxxx # will show this is because of resource limits
  • oc set resources deploy nee --requests cpu=0m,memory=0Mi --limits cpu=0m,memory=0Mi
  • oc get pods


Monitoring Resource Availability

  • Use oc describe node nodename to get information about current CPU and memory usage for each Pod running on the node
    • Notice the summary line at the end of the output, where you’ll see requests as well as limits that have been set
  • Use oc adm top to get actual resource usage
    • Notice this requires the Metrics Server to be installed and configured


Using Quotas

  • Quotas are used to apply limits
    • On the number of objects, such as Pods, services, and routes
    • On compute resources, such as CPU, memory and storage
  • Quotas are useful for preventing the exhaustion of vital resources
    • Etcd
    • IP addresses
    • Compute capacity of worker nodes
  • Quotas are applied to new resources but do not limit current resources
  • To apply quota, the ResourceQuota resource is used
  • Use a YAML file, or oc create quota my-quota --hard services=10,cpu=1400m,memory=1.8Gi
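The YAML equivalent of the command above could look like the following sketch; the pods count is an added placeholder to illustrate an object-count limit alongside compute limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
spec:
  hard:
    services: "10"       # object count: maximum number of Services in the project
    pods: "10"           # object count: maximum number of Pods (placeholder value)
    cpu: 1400m           # compute: total CPU requested by all Pods combined
    memory: 1.8Gi        # compute: total memory requested by all Pods combined
```

Apply it to a project with oc create -f my-quota.yaml -n myproject.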


Quota Scope

  • resourcequotas are applied to projects to limit the use of resources
  • clusterresourcequotas apply quota with a cluster scope
  • Multiple resourcequotas can be applied to the same project
    • The effect is cumulative
    • Limit one specific resource type for each quota resource used
  • Use oc create quota -h for command line help on how to apply
  • Avoid using YAML


Verifying Resource Quota

  • oc get resourcequota gives an overview of all resourcequota API resources
  • oc describe quota will show cumulative quotas from all resourcequota in the current project


Quota-related Failure

  • If a modification exceeds a resource count quota (such as the number of Pods), OpenShift denies the modification immediately
  • If a modification exceeds quota for a compute resource (such as available RAM), OpenShift does not fail immediately, which gives the administrator some time to fix the issue
  • If a quota that restricts usage of compute resources is set, OpenShift also refuses to create Pods that do not have resource requests or limits set
  • It’s also recommended to use a LimitRange to specify default values for resource requests


Applying Resource Quota

  • oc login -u developer -p password
  • oc new-project quota-test
  • oc login -u admin -p password
  • oc create quota qtest --hard pods=3,cpu=100,memory=500Mi
  • oc describe quota
  • oc login -u developer -p password
  • oc create deploy bitginx --image=bitnami/nginx:latest --replicas=3
  • oc get all # no pods
  • oc describe rs/bitginx-xxx # it fails because the Pods do not set resource requests or limits, which the quota requires
  • oc set resources deploy bitginx --requests cpu=10m,memory=5Mi --limits cpu=20m,memory=20Mi


Using Limit Ranges

  • A limit range resource defines default, minimum, and maximum values for compute resource requests
  • Limit range can be set on a project, as well as on individual resources
  • Limit range can specify CPU and memory for containers and Pods
  • Limit range can specify storage for Image and PVC
  • Use a template to apply the limit range to any new project created from that moment on
  • The main difference between a limit range and a resource quota is that a limit range specifies allowed values for individual resources, whereas a quota sets the maximum values that can be used by all resources in a project combined
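A limits.yaml for the exercise below could look like this sketch; it reuses the limit-limits name from the commands that follow, while all numeric values are placeholders:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-limits
spec:
  limits:
  - type: Container
    defaultRequest:      # requests applied when a container sets none
      cpu: 50m
      memory: 64Mi
    default:             # limits applied when a container sets none
      cpu: 100m
      memory: 128Mi
    min:                 # smallest values a container may request
      cpu: 10m
      memory: 5Mi
    max:                 # largest values a container may use
      cpu: 500m
      memory: 512Mi
```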


Creating a Limit Range

  • oc new-project limits
  • oc login -u admin -p password
  • oc explain limitrange.spec.limits
  • oc create --save-config -f limits.yaml
  • oc get limitrange
  • oc describe limitrange limit-limits


Applying Quotas to Multiple Projects

  • The ClusterResourceQuota resource is created at cluster level and applies to multiple projects
  • Administrator can specify which projects are subject to cluster resource quotas
    • By using the openshift.io/requester annotation to specify project owner, in which all projects with that specific owner are subject to the quota
    • Using a selector and labels: all projects that have labels matching the selector are subject to the quota
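As a sketch, a label-based ClusterResourceQuota matching the env=testing example below could be written like this; the quota values are placeholders:

```yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: testing
spec:
  selector:
    labels:
      matchLabels:
        env: testing     # applies to every project labeled env=testing
  quota:
    hard:
      pods: "5"
      services: "2"
```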

Using Annotations or labels

  • This will set a cluster resource quota that applies to all projects owned by user developer
    • oc create clusterquota user-developer --project-annotation-selector openshift.io/requester=developer --hard pods=10,secrets=10
  • This will add a quota for all projects that have the label env=testing
    • oc create clusterquota testing --project-label-selector env=testing --hard pods=5,services=2
    • oc new-project test-project
    • oc label ns test-project env=testing
  • Project users can use oc describe quota to view quota that currently apply
  • Tip! Set quota on individual projects and try to avoid cluster-wide quota, looking them up in large clusters may take a lot of time!

As we can see, only 5 replicas have been created because of the quota limit.



Using Project Templates

  • A Template is an API resource that can set different properties when creating a new project

    • quota
    • limit ranges
    • network policies
  • Use oc adm create-bootstrap-project-template -o yaml > mytemplate.yaml to generate a YAML file that can be further modified
  • Add new resources under objects, specifying the kind of resource you want to add
  • Next, edit projects.config.openshift.io/cluster to use the new template
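As a sketch, a default quota can be embedded in the generated mytemplate.yaml by adding a ResourceQuota under objects; the quota name and value are placeholders, and ${PROJECT_NAME} is a parameter the bootstrap template already defines:

```yaml
# Fragment of mytemplate.yaml: an extra object added under objects
objects:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: default-quota
    namespace: ${PROJECT_NAME}   # resolved when the project is created
  spec:
    hard:
      pods: "10"
```

After applying the template in the openshift-config namespace, reference it from projects.config.openshift.io/cluster so every new project gets the quota.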


Setting Project Restrictions

  • oc login -u admin -p password
  • oc adm create-bootstrap-project-template -o yaml > mytemplate.yaml # we skip editing the template here, as it is a lot of work to create
  • oc create -f limitrange.yaml -n openshift-config
  • oc describe limitrange test-limits
  • oc edit projects.config.openshift.io/cluster


    spec:
      projectRequestTemplate:
        name: project-request

  • watch oc get pods -n openshift-apiserver # wait 2 minutes
  • oc new-project test-project
  • oc get resourcequotas,limitranges
  • oc delete project test-project
  • oc edit projects.config.openshift.io/cluster # remove the spec section again