Troubleshooting OpenShift Applications

You can usually ignore the differences between Kubernetes deployments and OpenShift deployment configurations when troubleshooting applications. The common failure scenarios and the ways to troubleshoot them are essentially the same.

Troubleshooting Pods That Fail to Start
A common scenario is that OpenShift creates a pod and that pod never reaches the Running state. Instead, the pod is stuck in an error state, such as ErrImagePull or ImagePullBackOff. Useful troubleshooting commands:

  • oc get pod
  • oc status
  • oc get events
  • oc describe pod <my-pod-name>
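
For example, a minimal session for a pod that is stuck in ImagePullBackOff might look like this (the pod name mypod-12345 is hypothetical):

  oc get pods                  # find the failing pod and its state
  oc describe pod mypod-12345  # check the Events section at the end of the output
  oc get events --sort-by=.metadata.creationTimestamp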

Troubleshooting Running and Terminated Pods
OpenShift creates a pod, and for a short time no problem is encountered: the pod enters the Running state, which means at least one of its containers started running. Later, an application running inside one of the pod's containers stops working. OpenShift tries to restart the container several times. If the application keeps terminating, whether because of failing health probes or for other reasons, the pod is left in the CrashLoopBackOff state.

  • oc logs <my-pod-name>
    If the pod contains multiple containers, then the oc logs command requires the -c option:
  • oc logs <my-pod-name> -c <my-container-name>
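    If the container crashed and was restarted, the --previous option shows the logs of the previous container instance:
  • oc logs <my-pod-name> --previous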

Using oc debug

  • When troubleshooting, it is useful to get an exact copy of a running Pod and troubleshoot from there
  • Because a failing Pod may never start, and is therefore not accessible to oc rsh or oc exec, the oc debug command provides an alternative
  • The debug Pod starts a shell inside the first container of the referenced Pod
  • The started Pod is a copy of the source Pod, with labels stripped, no probes, and the command changed to /bin/sh
  • Useful command arguments are --as-root or --as-user=10000 to run as root or as a specific user
  • Use exit to close and destroy the debug Pod
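
For example, a short debug session on a failing pod might look like this (pod name and paths are placeholders):

  oc debug pod/<my-pod-name>
  # inside the debug shell:
  id                        # verify which UID the container runs as
  ls -ld /var/cache/nginx   # check permissions on paths the application needs
  exit                      # closes the shell and destroys the debug Pod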

Demo: Using oc debug

  • oc login -u developer -p developer
  • oc create deployment dnginx --image=nginx
  • oc get pods # shows failure
  • oc debug deployment/dnginx --as-user=10000 # will fail, select a user ID in the suggested range
    • nginx # will fail
    • exit
  • oc debug deployment/dnginx --as-root # will fail, log in as admin and try again
    • nginx # will run
    • exit
  • This test has shown that the nginx image needs to run as root

Let's create a new project and a new deployment. Following the demo steps above, that might look like this (the project name debug-demo is just an example):
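
  oc login -u developer -p developer
  oc new-project debug-demo
  oc create deployment dnginx --image=nginx
  oc get pods   # the dnginx pod does not reach the Running state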

Let's debug the pod. As the developer user, --as-root is not allowed, so we start the debug shell with a regular user ID and try to run nginx by hand:
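
  oc debug deployment/dnginx --as-user=10000
  # inside the debug shell, try to start nginx manually:
  nginx
  exit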

As we see in the log, we are getting “Permission denied” errors; that is why the pod is in an error state.

Now let's debug the pod as admin (on CRC, log in as kubeadmin; the password is environment-specific):
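
  oc login -u kubeadmin -p <password>
  oc debug deployment/dnginx --as-root
  # inside the debug shell:
  nginx   # now starts successfully
  exit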

As we can see, when running as root the nginx pod works properly.


Lab: Fixing Application Permissions

  • Use oc run mynginx --image=nginx to run an Nginx webserver Pod
  • It fails. Fix it.

Change the YAML file. One possible fix, assuming you are not allowed to grant the anyuid SCC as the developer user, is to switch to an nginx build that runs as a non-root user, such as the community image nginxinc/nginx-unprivileged:
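
  oc get pod mynginx -o yaml > mynginx.yaml
  # edit mynginx.yaml and change the image:
  #   containers:
  #   - name: mynginx
  #     image: nginxinc/nginx-unprivileged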

And then recreate the pod from the modified file:
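
  oc delete pod mynginx
  oc create -f mynginx.yaml
  oc get pods   # mynginx should now be Running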


Lab: Configuring MySQL

  • As the developer user, use a deployment to create an application named mysql in the microservice project
  • Create a generic secret named mysql, using password as the key and mypassword as its value. Use this secret to set the MYSQL_ROOT_PASSWORD environment variable to the value of the password in the secret
  • Configure the MySQL application to mount a PVC on /mnt. The PVC must have a 1GiB size and the ReadWriteOnce access mode
  • Use a nodeSelector to ensure that MySQL will only run on your CRC node
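
A possible solution sketch using oc commands (the mysql:8.0 image and the node label are assumptions; check your node's labels with oc get nodes --show-labels):

  oc project microservice
  oc create secret generic mysql --from-literal=password=mypassword
  oc create deployment mysql --image=mysql:8.0
  # the --prefix option maps the password key to MYSQL_ROOT_PASSWORD:
  oc set env deployment/mysql --from=secret/mysql --prefix=MYSQL_ROOT_
  oc set volume deployment/mysql --add --name=mysql-pvc --type=pvc \
    --claim-size=1Gi --claim-mode=ReadWriteOnce --mount-path=/mnt
  # assumed CRC node label; adjust to match your node:
  oc patch deployment mysql -p \
    '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master":""}}}}}'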


Lab: Configuring WordPress

  • As the developer user, use a deployment to create an application named wordpress in the microservice project
  • Run this application with the anyuid security context assigned to the wordpress-sa service account
  • Create a route to the WordPress application, using the hostname wordpress-microservice.apps-crc.testing
  • Use secrets and/or ConfigMaps to set environment variables:
    • WORDPRESS_DB_HOST is set to mysql
    • WORDPRESS_DB_NAME is set to the value wordpress
    • WORDPRESS_DB_USER has the value root
    • WORDPRESS_DB_PASSWORD is set to the value of the password key in the mysql secret
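
A possible solution sketch (the SCC step requires admin rights; other names follow the lab text):

  oc project microservice
  oc create deployment wordpress --image=wordpress
  oc create sa wordpress-sa
  # as the admin user:
  oc adm policy add-scc-to-user anyuid -z wordpress-sa -n microservice
  # back as developer, assign the service account and set the variables:
  oc set serviceaccount deployment/wordpress wordpress-sa
  oc set env deployment/wordpress WORDPRESS_DB_HOST=mysql \
    WORDPRESS_DB_NAME=wordpress WORDPRESS_DB_USER=root
  # the --prefix option maps the password key to WORDPRESS_DB_PASSWORD:
  oc set env deployment/wordpress --from=secret/mysql --prefix=WORDPRESS_DB_
  oc expose deployment wordpress --port=80
  oc expose service wordpress --hostname=wordpress-microservice.apps-crc.testing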