Managing OpenShift Clusters

Cluster Troubleshooting

  • An OpenShift cluster has two focal areas for troubleshooting
  • OpenShift operators are cluster applications that can be monitored and fixed
    like any other application that runs in OpenShift
  • OpenShift nodes can be monitored individually
  • Other problems may come from version mismatches


Verifying Node Health

  • oc get nodes is a good first step to investigate current health of nodes
    • Any status other than Ready means that the control plane cannot use the node
  • oc adm top nodes shows current CPU and memory usage per node, based on statistics gathered by the metrics server
  • oc describe node may be used to investigate recent events and resource usage
    • Events shows an event log
    • Allocated resources gives an overview of allocated resources and requests
    • Capacity shows available capacity
    • Non-terminated Pods lists the Pods currently running on the node
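
The Ready status that oc get nodes reports is derived from the node's status conditions. As a minimal sketch (in Python, with simplified, hypothetical node data standing in for the output of oc get nodes -o json), this is how a NotReady node can be flagged:

```python
# Sketch: flagging unhealthy nodes from their conditions.
# The node data below is a simplified, hypothetical stand-in for
# what `oc get nodes -o json` returns.
nodes = [
    {"name": "master-0", "conditions": [{"type": "Ready", "status": "True"}]},
    {"name": "worker-1", "conditions": [{"type": "Ready", "status": "False"},
                                        {"type": "MemoryPressure", "status": "True"}]},
]

def unhealthy_nodes(nodes):
    """Return the names of nodes whose Ready condition is not True."""
    bad = []
    for node in nodes:
        ready = next((c for c in node["conditions"] if c["type"] == "Ready"), None)
        if ready is None or ready["status"] != "True":
            bad.append(node["name"])
    return bad

print(unhealthy_nodes(nodes))  # ['worker-1']
```

Any condition set other than Ready=True (including a missing Ready condition) is treated as a problem, matching the rule of thumb above.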


Monitoring Operators

  • Operators are the programs responsible for starting the different components that run in the cluster
  • These components are started by operators as DaemonSets or Deployments
  • oc get clusteroperators shows the current status of operators
    • If an operator is in the Progressing state, it is currently being updated
    • If it is in the Degraded state, something is wrong and further investigation is required
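
The Available, Progressing, and Degraded columns shown by oc get clusteroperators come from the operator's status conditions. A minimal sketch of how those conditions map to a single state (the sample condition list is hypothetical):

```python
# Sketch: classifying a ClusterOperator from its status conditions,
# mirroring the columns of `oc get clusteroperators`.
def operator_state(conditions):
    """Map a list of status conditions to a single state string."""
    status = {c["type"]: c["status"] for c in conditions}
    if status.get("Degraded") == "True":
        return "degraded"      # something is wrong: investigate the operator's namespace
    if status.get("Progressing") == "True":
        return "progressing"   # the operator is currently being updated
    if status.get("Available") == "True":
        return "available"     # healthy
    return "unavailable"

sample = [
    {"type": "Available", "status": "True"},
    {"type": "Progressing", "status": "True"},
    {"type": "Degraded", "status": "False"},
]
print(operator_state(sample))  # 'progressing'
```

Note that an operator can be Available and Progressing at the same time; Degraded is the state that always warrants investigation.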


Analyzing Operators

  • ClusterOperator resources are non-namespaced
  • Each operator starts its resources in dedicated namespaces
    • Some operators use one namespace, some operators use more
  • If an operator shows a degraded status in oc get co, investigate resources running in its namespace and use common tools to check their status


Verifying Cluster Versions

  • oc get clusterversion shows details about the cluster version that is currently in use
  • oc describe clusterversion shows more details about versions of the different components
  • oc version shows OpenShift version, Kubernetes version, as well as client version


Understanding Nodes

  • OpenShift worker nodes run Red Hat Enterprise Linux CoreOS (RHCOS)
  • CoreOS is a minimal operating system that is managed like a container image
    • No direct modifications allowed
  • Most services on the CoreOS node run as containers
    • Investigate like any other container
  • Some services are managed by systemd
    • CRI-O is the container engine that is required to run the containers
    • kubelet is the interface that allows the OpenShift cluster to schedule containers on top of the container engine


Investigating Node Logs

  • oc adm node-logs nodename will show logs generated by a CoreOS node
  • oc adm node-logs -u crio nodename will show logs generated by the CRI-O service
  • oc adm node-logs -u kubelet nodename will show logs generated by the kubelet service


Opening a Shell on a Node

  • Opening a shell session on nodes in a managed full-stack automation OpenShift cluster is rarely necessary, because the cloud provider manages the nodes for you
  • Use oc debug node/nodename to open a debug shell on a node
    • The debug shell mounts the node root file system at the /host folder, which allows you to inspect files from the node
    • To run host binaries, use chroot /host
    • Notice that the host is running a minimal operating system and does not provide access to all Linux tools
    • Use systemctl status kubelet or systemctl status crio to investigate status of these vital services
    • Use crictl ps for low-level information about CRI-O containers
  • If the control plane is not running, you cannot use oc debug node


Using Direct SSH Access

  • You should not use direct SSH access
  • If you want to do it anyway, use the SSH keys that some deployment scenarios store on the client machine
  • On CRC, use ssh -i ~/.crc/machines/crc/id_rsa coreos@$(crc ip) to open a shell as user coreos


Cluster Scaling

  • Manual or automatic cluster scaling works through the Machine API
  • Installing an additional worker node is not considered scaling!
  • Machine API is a standard component that runs as an operator
  • This operator provides controllers that interact with cluster resources
  • In a full-stack automated environment, it communicates with the provider to take care of cluster scaling


Machine API Custom Resources

  • Machines are the compute units in the cluster
  • MachineSets describe groups of machines, but not control plane nodes
    • MachineSets are to Machines what ReplicaSets are to Pods
    • A MachineSet includes labels that allow you to work with regions, zones, and instance types
    • When deployed to public cloud, you’ll typically get one machine set per availability zone
  • MachineHealthChecks verify the health of a machine and take action if required
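
As an illustration, a MachineHealthCheck resource could look like the sketch below; the resource name and the MachineSet label value are placeholders:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-healthcheck            # hypothetical name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <machineset-name>
  unhealthyConditions:                 # when a machine counts as unhealthy
  - type: Ready
    status: "False"
    timeout: 300s
  maxUnhealthy: 40%                    # stop remediation if too many machines fail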


Manually Scaling Machines

  • Manually scaling the number of machines works in two ways:
    • Use oc scale
    • Change the number of replicas in the MachineSet resource
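
Both approaches can be sketched as follows; the MachineSet name is a placeholder:

```yaml
# Option 1: use oc scale (placeholder name):
#   oc scale machineset <machineset-name> -n openshift-machine-api --replicas=3
#
# Option 2: edit the replicas field of the MachineSet directly:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <machineset-name>
  namespace: openshift-machine-api
spec:
  replicas: 3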


Automatic Scaling

  • Automatic scaling in full-stack automation requires two custom resources:
    • MachineAutoscaler
    • ClusterAutoscaler
  • The Machine API operator must be operational in order to configure any type of scaling
  • The machine autoscaler automatically scales the number of replicas based on load
  • The cluster autoscaler enforces limits for the entire cluster
    • maxNodesTotal (under resourceLimits) sets the maximum number of nodes in the cluster
    • memory (under resourceLimits) sets the minimum and maximum amount of total memory, in GiB
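
A ClusterAutoscaler sketch with hypothetical limits (the ClusterAutoscaler resource must be named default):

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                # the ClusterAutoscaler must be named default
spec:
  resourceLimits:
    maxNodesTotal: 10          # hypothetical cluster-wide node limit
    memory:
      min: 4                   # total memory limits, in GiB
      max: 256
  scaleDown:
    enabled: true              # allow downscaling as well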


Implementing Autoscaling

  • To implement autoscaling, the following requirements must be met:
    • The cluster is deployed in full-stack automation
    • There is a cluster autoscaler resource
      • Set scaleDown to enabled: true to allow for downscaling as well
    • At least one machine autoscaler resource exists
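
A MachineAutoscaler sketch; the resource name and the target MachineSet name are placeholders:

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-scaler              # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:                  # the MachineSet to scale (placeholder name)
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <machineset-name>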


Cluster Updates

  • OpenShift 4.x offers Over-the-Air (OTA) upgrades
  • The OTA software distribution system manages controller manifests, cluster roles and other resources necessary to update a cluster
  • OTA is offered as a hosted service through OpenShift Cluster Manager, which provides a web interface to easily perform the update
  • OTA requires the cluster to have a persistent connection to the Internet

How OTA Works

  • Prometheus-based telemetry is used to determine the update path
  • Supported operators can be automatically updated
  • Future versions will allow Independent Software Vendor (ISV) operators to be updated in this way as well
  • From the interface, an update channel can be selected to determine the version of OpenShift to update to
  • Notice that rollbacks are not supported

The OTA Update Flow

  • First, all operators need to be updated to the newer version
  • Next, the CoreOS images can be updated
    • The node will first pull the new image
    • Next, the image is written to disk
    • Then the bootloader is changed to boot the new image
    • To complete, the CoreOS machine reboots


Manually Updating the Cluster

  • oc get clusterversion will show the current version
  • oc adm upgrade will show if an upgrade is available
  • oc adm upgrade --to-latest=true will upgrade to the latest version
  • oc adm upgrade --to=version will upgrade to a specific version
  • oc get clusterversion allows you to verify the update
  • oc get clusteroperators will show if operators are in the right state
  • oc describe clusterversion will show an overview of past upgrades



Lab: Monitoring Cluster Health

  • Use the appropriate tools to create a full cluster health report, and write the output of these commands to the file /tmp/health.txt
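
One possible approach, assuming an active oc login with sufficient privileges (the command selection is a suggestion, not the only valid answer):

```shell
# Collect cluster health information into /tmp/health.txt.
# Assumes you are logged in to the cluster as a cluster administrator.
{
  echo "=== Nodes ===";             oc get nodes
  echo "=== Node usage ===";        oc adm top nodes
  echo "=== Cluster operators ==="; oc get clusteroperators
  echo "=== Cluster version ===";   oc get clusterversion
} > /tmp/health.txt 2>&1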