Deploying Applications on OpenShift

Red Hat OpenShift Container Platform (RHOCP) is a set of modular components and services built on top of Red Hat Enterprise Linux CoreOS (RHCOS) and Kubernetes. RHOCP adds PaaS capabilities such as remote management, increased security, monitoring and auditing, application life-cycle management, and self-service interfaces for developers. An OpenShift cluster is a Kubernetes cluster that can be managed the same way, but also with the management tools provided by OpenShift, such as the command-line interface or the web console. This allows for more productive workflows and makes common tasks much easier.

The following diagram illustrates the OpenShift Container Platform stack.

The main method of interacting with an RHOCP cluster is using the oc command. The basic usage of the command is through its subcommands in the following syntax:
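The general shape is shown below; placeholders are in angle brackets, and oc -h lists all available subcommands:

  oc <subcommand> [resources] [options]
  # for example:
  oc get pods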

Before interacting with a cluster, most operations require a logged-in user. The syntax to log in is shown below:
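For example (the API URL and the credentials are placeholders; substitute the values for your cluster):

  oc login https://api.ocp4.example.com:6443 -u developer -p <password>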

 

Understanding Projects

  • The Linux kernel provides namespaces to offer strict isolation between processes
  • Kubernetes implements namespaces in a cluster environment:
    • To limit inter-namespace access
    • To apply resource limitations on namespaces
    • To delegate management tasks to users
  • OpenShift implements Kubernetes namespaces as a vehicle to manage access to resources for users
  • In OpenShift, working with namespaces directly is reserved for cluster-level admin access
  • Users work with projects to store their resources

To list projects:
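  oc projects
  # or, using the generic get verb:
  oc get projects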

To list namespaces:
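  oc get namespaces
  # listing namespaces requires cluster-admin privileges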

 

Understanding Applications

  • After creating a project, oc new-app can be used to create an application
  • While creating an application, different Kubernetes resources are created:
    • Deployment or DeploymentConfig: the application itself, including its cluster properties
    • ReplicationController or ReplicaSet: takes care of running pods in a scalable way, using multiple instances
    • Pod: the actual instance of the application, which typically runs one container
    • Service: a load balancer that exposes access to the application
    • Route: the resource that allows incoming traffic by exposing an FQDN
  • When Source-to-Image (S2I) is used, additional resources such as a BuildConfig and an ImageStream are created as well

The following command shows everything that is available with oc new-app:
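  oc new-app -h
  # the --help output lists the supported options, input types, and usage examples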

Let’s create an nginx application:
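One possible command; whether a given nginx image runs unmodified depends on the image and on the cluster's security settings, so the nginx image stream shipped with OpenShift is a safe starting point:

  oc new-app nginx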

To show all resources currently present in this environment, use:
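  oc get all
  # lists the main workload resources (pods, services, deployments, routes, and so on) in the current project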

To see imagestreams:
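  oc get is
  # "is" is the short name for imagestream; add -n openshift to see the image streams shipped with the cluster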

 

Understanding Resources

  • To run an application in OpenShift, different Kubernetes resources are used
  • Each resource is defined in the API to offer specific functionality
  • The API defines how resources connect to each other
  • Many resources are used to define how a component in the cluster is running:
    • Pods define how containers are started using images
    • Services implement a load balancer to distribute incoming workload
    • Routes define an FQDN for accessing the application
  • Resources are defined in the API

 

Monitoring Applications

  • The Pod is the representation of the running processes
  • To see process STDOUT, use oc logs podname
  • To see how the Pods (as well as other resources) are created in the cluster, use oc describe
  • To see all status information about the Pods, use oc get pod podname -o yaml

OpenShift has awesome command-line completion.

Let’s enable this feature:
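For the bash shell (other shells such as zsh are supported as well):

  source <(oc completion bash)
  # add the line above to ~/.bashrc to make it permanent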

And we can use it:
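Type part of a subcommand or resource name and press Tab to complete it, for example:

  oc get dep<TAB>
  # the shell completes the resource name (deployments)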

Let’s deploy MariaDB:
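A possible command; the --name option makes the deployment name mymariadb, which matches the name used in the next steps:

  oc new-app --name mymariadb mariadb
  # "mariadb" resolves to a MariaDB image stream or image reachable from the cluster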

Let’s check the deployment:
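  oc get deployment mymariadb
  # on older clusters where oc new-app creates a DeploymentConfig, use: oc get dc mymariadb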

More information about mymariadb is available with this command:
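  oc describe deployment mymariadb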

And let’s inspect the pod itself:
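The pod name carries a generated suffix, so list the pods first:

  oc get pods
  oc describe pod mymariadb-<suffix>
  # replace <suffix> with the value shown by oc get pods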

As we can see, the last state of the application is Terminated.

Let’s check the logs:
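  oc logs deployment/mymariadb
  # or target the pod directly: oc logs mymariadb-<suffix>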

The reason for the MariaDB error is that we did not specify a password: the image requires credentials to be provided through environment variables.

We can delete the deployment:
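  oc delete deployment mymariadb
  # oc new-app also created other resources; oc delete all -l app=mymariadb removes everything labeled for this application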

 

Understanding the API

  • OpenShift is based on the Kubernetes APIs
  • On top of the Kubernetes APIs, OpenShift-specific APIs are added
  • OpenShift uses Kubernetes resources, but in many cases offers its own functionality using different APIs
  • As a result, OpenShift resources are not always guaranteed to be compatible with Kubernetes resources

Exploring the APIs

  • oc api-resources shows resources as defined in the API
  • oc api-versions shows versions of the APIs
  • oc explain [--recursive] can be used to explore what is in the APIs
  • Based on this information, OpenShift resources can be defined in a declarative way in YAML files
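A few examples of exploring the API from the command line:

  oc api-resources | less
  oc api-versions
  oc explain pod.spec.containers
  oc explain route --recursive | less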

 

Lab: Managing Resources

  1. As user developer, create a project with the name myproject
  2. In this project, create an application using oc new-app
  3. Use oc -h to find usage information about this command
  4. Ensure that the application was created successfully
  5. Write the resources created with the oc new-app command to a YAML file such that the resources can easily be re-created

Solution
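A possible solution; the cluster URL, the credentials, and the application image are examples and can be replaced by whatever is available in your environment:

  oc login https://api.ocp4.example.com:6443 -u developer -p <password>
  oc new-project myproject
  oc new-app -h | less                     # usage information for oc new-app
  oc new-app --name myapp nginx            # any image or source repository will do
  oc get all                               # verify that the application resources were created
  oc get all -o yaml > myproject.yaml      # save the resources so they can be re-created later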

 

Deploying a Database Server on OpenShift

Let’s log in to the OpenShift cluster:
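As before, the API URL and the credentials are placeholders for your cluster's values:

  oc login https://api.ocp4.example.com:6443 -u developer -p <password>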

Create a new project:
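The project name below is just an example:

  oc new-project database-app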

Create a new application from a container image using the oc new-app command. This image requires the -e option to set the MYSQL_USER, MYSQL_PASSWORD, MYSQL_DATABASE, and MYSQL_ROOT_PASSWORD environment variables.
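For example (mysql resolves to a MySQL image stream or image available to your cluster, and the credential values below are just examples):

  oc new-app --name mysql-openshift mysql \
    -e MYSQL_USER=user1 -e MYSQL_PASSWORD=mypassword \
    -e MYSQL_DATABASE=testdb -e MYSQL_ROOT_PASSWORD=rootpassword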

If you want, you can use the --template option with the oc new-app command to specify a template with persistent storage so that OpenShift does not try to pull the image from the internet:
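A sketch, assuming a template such as mysql-persistent is available in the openshift namespace (template and parameter names vary between clusters, so check with oc get templates -n openshift first):

  oc new-app --template=mysql-persistent \
    -p MYSQL_USER=user1 -p MYSQL_PASSWORD=mypassword \
    -p MYSQL_DATABASE=testdb -p MYSQL_ROOT_PASSWORD=rootpassword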

Verify that the database pod was created successfully and view the details about the pod and its service. Run the oc status command to view the status of the new application:
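  oc status
  # summarizes the deployments, services, and pods in the current project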

List the pods in this project to verify that the MySQL pod is ready and running:
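  oc get pods
  oc get pods -o wide
  # -o wide also shows the node on which each pod runs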

Notice the worker on which the pod is running. You need this information to be able to log in to the MySQL database server later.

Use the oc describe command to view more details about the pod:
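  oc describe pod mysql-openshift-<suffix>
  # replace <suffix> with the value shown by oc get pods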

List the services in this project and verify that the service to access the MySQL pod was created:
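  oc get services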

Retrieve the details of the mysql-openshift service using the oc describe command and note that the Service type is ClusterIP by default:
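  oc describe service mysql-openshift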

View details about the deployment configuration (dc) for this application:
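On clusters where oc new-app still creates a DeploymentConfig:

  oc describe dc mysql-openshift
  # newer releases create a Deployment instead; in that case use: oc describe deployment mysql-openshift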

Expose the service by creating a route with a default name and a fully qualified domain name (FQDN):
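  oc expose service mysql-openshift
  oc get routes
  # oc get routes shows the generated FQDN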

Connect to the MySQL database server and verify that the database was created successfully. To do that, first configure port forwarding between the workstation and the database pod running on OpenShift using port 3306. The terminal will hang after executing the command.
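Using the pod name reported earlier (it carries a generated suffix):

  oc port-forward mysql-openshift-<suffix> 3306:3306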

Open another terminal and connect to the MySQL server using the MySQL client.
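Using the credentials that were passed to oc new-app earlier:

  mysql -u user1 -p -h 127.0.0.1 -P 3306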

Verify the creation of the testdb database.
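  mysql> SHOW DATABASES;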

Exit from the MySQL prompt:
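  mysql> exit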

Close the terminal and return to the previous one. Finish the port forwarding process by pressing Ctrl+C.

Delete the project to remove all the resources within the project:
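  oc delete project database-app
  # use the name of the project you created at the start of this exercise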

 

Creating Routes

Services allow for network access between pods inside an OpenShift instance, and routes allow for network access to pods from users and applications outside the OpenShift instance.

A route connects a public-facing IP address and DNS host name to an internal-facing service IP. It uses the service resource to find the endpoints; that is, the ports exposed by the service. OpenShift routes are implemented by a cluster-wide router service, which runs as a containerized application in the OpenShift cluster. OpenShift scales and replicates router pods like any other OpenShift application.

In practice, to improve performance and reduce latency, the OpenShift router connects directly to the pods using the internal pod software-defined network (SDN).

The router service uses HAProxy as the default implementation. An important consideration for OpenShift administrators is that the public DNS host names configured for routes need to point to the public-facing IP addresses of the nodes running the router. Router pods, unlike regular application pods, bind to their nodes’ public IP addresses instead of to the internal pod SDN.

The following example shows a minimal route defined using JSON syntax:
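A minimal sketch; the host value below is only an example and must resolve to the IP address of your cluster's router:

  {
      "apiVersion": "route.openshift.io/v1",
      "kind": "Route",
      "metadata": {
          "name": "quoteapp"
      },
      "spec": {
          "host": "quoteapp.cloudapps.example.com",
          "to": {
              "kind": "Service",
              "name": "quoteapp"
          }
      }
  }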

The apiVersion, kind, and metadata attributes follow standard Kubernetes resource definition rules. The Route value for kind shows that this is a route resource, and the metadata.name attribute gives this particular route the identifier quoteapp. As with pods and services, the main part is the spec attribute, which is an object containing the following attributes:

  • host is a string containing the FQDN associated with the route. DNS must resolve this FQDN to the IP address of the OpenShift router. The details to modify DNS configuration are outside the scope of this course.
  • to is an object stating the resource this route points to. In this case, the route points to an OpenShift Service with the name set to quoteapp.

Use the oc create command to create route resources, just like any other OpenShift resource. You must provide a JSON or YAML resource definition file, which defines the route, to the oc create command.
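For example, if the definition above is saved as quoteapp-route.json (the file name is arbitrary):

  oc create -f quoteapp-route.json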

Another way to create a route is to use the oc expose service command, passing a service resource name as the input. The --name option can be used to control the name of the route resource. For example:
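Assuming a service named quoteapp, as in the JSON example above:

  oc expose service quoteapp --name quote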

By default, routes created by oc expose generate DNS names of the form: route_name-project_name.default-domain

For example, creating a route named quote in a project named test on an OpenShift instance whose wildcard domain is cloudapps.example.com results in the FQDN quote-test.cloudapps.example.com.


Leveraging the Default Routing Service

The default routing service is implemented as an HAProxy pod. Router pods, containers, and their configuration can be inspected just like any other resource in an OpenShift cluster:
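On OpenShift 4, the default router pods run in the openshift-ingress namespace:

  oc get pods -n openshift-ingress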

Use oc describe pod command to get the routing configuration details:
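The router pod name carries a generated suffix:

  oc describe pod router-default-<suffix> -n openshift-ingress
  # the Environment section of the output includes entries such as ROUTER_CANONICAL_HOSTNAME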

The subdomain, or default domain to be used in all default routes, takes its value from the ROUTER_CANONICAL_HOSTNAME entry in the router pod's environment.

Exposing a Service as a Route

Log in to the OpenShift cluster.

Create a new project for the resources you create during this exercise.

Use the oc new-app command to create the application:
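A sketch; the application name and the source repository below are placeholders, and oc new-app detects a suitable builder image from the repository contents:

  oc new-app --name myapp https://github.com/example/my-app.git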

Wait until the application finishes building and deploying by monitoring the progress with the oc get pods -w command:

You can monitor the build and deployment logs with the oc logs -f command. Press Ctrl+C to exit the command if necessary.

Review the service for this application using the oc describe command:

Expose the service, which creates a route. Use the default name and fully qualified domain name (FQDN) for the route:
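Using the placeholder application name from the previous steps:

  oc expose service myapp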

Access the service to verify that the service and route are working:
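One way is to read the route host name from the API and query it with curl:

  curl http://$(oc get route myapp -o jsonpath='{.spec.host}')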

Replace this route with a route named xyz.

Delete the current route:
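  oc delete route myapp
  # the route created by oc expose takes the name of the service by default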

Create a route for the service with a name of xyz:
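  oc expose service myapp --name xyz
  oc get route xyz
  # oc get route shows the new FQDN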

Note the new FQDN that was generated based on the new route name. Both the route name and the project name contain your user name, hence it appears twice in the route FQDN.
Make an HTTP request using the FQDN on port 80:
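  curl http://$(oc get route xyz -o jsonpath='{.spec.host}'):80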

Delete the project:
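  oc delete project <project-name>
  # substitute the name of the project created at the start of this exercise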