Network Security on OpenShift

OpenShift Networking

  • Services provide load balancing to the replicated Pods in an application and are essential in providing access to applications
    • Services connect to Endpoints, which track the individual Pod IP addresses
  • Ingress is a Kubernetes resource that exposes services to external users
    • Ingress adds URLs, load balancing, as well as access rules
    • Ingress is not used as such in OpenShift
  • OpenShift routes are an alternative to Ingress

OpenShift SDN

  • OpenShift uses Software Defined Networking (SDN) to implement connectivity
  • OpenShift SDN separates the network into a control plane and a data plane
  • The SDN solves five requirements
    • Manage network traffic and resources as software such that they can be programmed by the application owners
    • Communicate between containers running within the same project
    • Communicate between Pods within and beyond project boundaries
    • Manage network communication from a Pod to a service
    • Manage network communication from external network to service
  • The network is managed by the OpenShift Cluster Network Operator

The DNS Operator

  • The DNS operator implements the CoreDNS DNS server
  • The internal CoreDNS server is used by Pods for DNS resolution
  • Use oc describe dns.operator/default to see its config
  • The DNS Operator has different roles
    • Create a default cluster DNS name cluster.local
    • Assign DNS names to namespaces
    • Assign DNS names to services
    • Assign DNS names to Pods

Managing DNS Records for Services

  • DNS names are composed as servicename.projectname.svc.cluster-dns-name
    • db.myproject.svc.cluster.local
  • Apart from the A resource records, CoreDNS also implements SRV records, in which the port name and protocol are prepended to the service A record name
    • _443._tcp.webserver.myproject.svc.cluster.local
  • If a service has no cluster IP (a headless service), DNS records are created for the IP addresses of the individual Pods, and round-robin is applied
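
A quick way to verify this (the Pod name mypod is hypothetical) is to resolve a service name from inside a Pod:

oc exec -it mypod -- nslookup db.myproject.svc.cluster.local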

The Cluster Network Operator

  • The OpenShift Cluster Network Operator defines how the network is shaped and provides information about the following:
    • Network address
    • Network mode
    • Network provider
    • IP address pools
  • Use oc get network/cluster -o yaml for details
  • Notice that currently OpenShift only supports the OpenShift SDN network provider; this may have changed by the time you read this
  • Future versions will use OVN-Kubernetes to manage the cluster network
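
The output resembles the following sketch (the addresses are the CRC defaults and may differ in your cluster):

apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14      # Pod network address pool
    hostPrefix: 23           # subnet size assigned to each node
  networkType: OpenShiftSDN  # the network provider
  serviceNetwork:
  - 172.30.0.0/16            # service IP address pool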

Network Policy

  • Network policy allows defining Ingress and Egress filtering
  • If no network policy exists, all traffic is allowed
  • If a network policy exists, it blocks all traffic to the Pods it selects, with the exception of the explicitly allowed Ingress and Egress traffic

Service Types

  • ClusterIP: the service is exposed on an IP address that is internal to the cluster. This is the default type; such services cannot be addressed directly from outside the cluster
  • NodePort: a service type that exposes a port on the node IP address.
  • LoadBalancer: exposes the service through a cloud provider load balancer. The cloud provider LB talks to the OpenShift network controller to automatically create a node port to route incoming requests
  • ExternalName: creates a CNAME in DNS to match an external host name. Use this to create different access points for applications external to the cluster
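
As a minimal sketch (all names and ports here are hypothetical), the service type is set in the manifest; this example exposes a NodePort:

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort            # ClusterIP is the default if omitted
  selector:
    app: myapp              # assumes Pods labeled app=myapp
  ports:
  - port: 80                # cluster-internal service port
    targetPort: 8080        # port the container listens on
    nodePort: 30080         # port opened on every node IP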

 

The Ingress Resource

  • Ingress traffic is generic terminology for incoming traffic (and is more than just Kubernetes Ingress)
  • The Ingress resource is managed by the Ingress operator and accepts external requests that will be proxied
  • The route resource is an OpenShift resource that provides more features than Ingress
    • TLS re-encryption
    • TLS passthrough
    • Split traffic for blue-green deployments

OpenShift Route

  • OpenShift route resources are implemented by the shared router service that runs as a Pod in the cluster
  • Router Pods bind to public-facing IP addresses on the nodes
  • A DNS wildcard domain is required for this to work
  • Routes can be implemented as secure and as insecure routes

Requirements for Creating Routes

  • Route resources need the following values:
    • Name of the service that the route accesses
    • A host name for the route that is related to the cluster DNS wildcard domain
    • An optional path for path-based routes
    • A target port, which is where the application listens
    • An encryption strategy
    • Optional labels that can be used as selectors
  • Notice that the route does not use the service directly; it only needs it to find out which Pods it should connect to
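
Put together, a route manifest carrying these values might look like this sketch (the names and the hostname are hypothetical):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myroute
  labels:
    app: myapp                       # optional labels usable as selectors
spec:
  host: myroute.apps-crc.testing     # must fall under the cluster wildcard domain
  path: /api                         # optional, for path-based routes
  to:
    kind: Service
    name: myservice                  # service used to locate the Pods
  port:
    targetPort: 8080                 # where the application listens
  tls:
    termination: edge                # the encryption strategy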

Route Options and Types

  • Secure routes can use different types of TLS termination
  • Edge: TLS is terminated at the router, so the TLS certificates must be configured in the route
  • Pass-through: termination happens at the Pods, which means that the Pods are responsible for serving the certificates. Use this to support mutual authentication
  • Re-encryption: TLS traffic is terminated at the router, and a new encrypted connection is established with the Pods
  • Insecure routes require no keys or certificates

Creating Insecure Routes

  • Easy: just use oc expose service my.service --hostname my.name.example.com
    • The service my.service is exposed
    • The hostname my.name.example.com is set for the route
  • If no hostname is specified, the name routename-projectname.defaultdomain is used
  • Notice that only the OpenShift router, and not the CoreDNS DNS server, knows about route names
    • DNS has a wildcard domain name that sends traffic to the IP address of the node that runs the router software, which then takes care of resolving the specific names
    • Therefore, the route name must always be a subdomain of the cluster wildcard domain

Let’s create an insecure route:
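
For example (the service name nginx is hypothetical; the hostname must fall under the cluster wildcard domain):

oc expose service nginx --hostname nginx-myproject.apps-crc.testing
oc get routes
curl http://nginx-myproject.apps-crc.testing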

 

Why Do We Need Certificates?

  • PKI certificates are everywhere in OpenShift
  • To secure resources – like routes – it’s essential to understand how certificates work
  • To use public keys, they need to be signed by a Certificate Authority
  • Self-signed certificates are an easy way to get started with your own certificates
  • Next, these certificates can be used in OpenShift resources like routes

Creating self-signed certificates

  • Creating the CA
    • mkdir ~/openssl
    • cd ~/openssl
    • openssl genrsa -des3 -out myCA.key 2048
    • public key: openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem
  • Creating the certificate
    • openssl genrsa -out tls.key 2048
    • openssl req -new -key tls.key -out tls.csr # make sure the CN matches the DNS name of route which is project.apps-crc.testing
  • Self-signing the certificate
    • openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256

To generate a certificate, let’s use the openssl utility. Creating the CA:
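
mkdir ~/openssl
cd ~/openssl
openssl genrsa -des3 -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem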

The most important thing is to specify the Common Name.

Creating the certificate:
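
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr   # the CN must match the DNS name of the route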

Self-signing the certificate:
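
openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256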

At this point the certificate has been created and is ready to use.

 

Edge Routes

  • Edge routes hold TLS key material so that TLS termination can occur at the router
  • Connections between router and application are not encrypted, so no TLS configuration is needed at the application
  • Re-encryption routes offer a variation on edge termination
    • The router terminates TLS with a certificate, and re-encrypts its connection to the endpoint (typically with a different certificate)

Configuring an Edge Route

  • Creating deploy, svc, route
    • oc new-project myproject
    • oc create cm linginx1 --from-file linginx1.conf
    • as admin: oc create sa linginx-sa # creates the dedicated service account
    • As administrator: oc adm policy add-scc-to-user anyuid -z linginx-sa
    • oc create -f linginx-v1.yaml
    • oc get pods
    • oc get svc
    • oc create route edge linginx1 --service linginx1 --cert=../openssl/tls.crt --key=../openssl/tls.key --ca-cert=../openssl/myCA.pem --hostname=okd.netico.pl --port=80
  • Testing from another pod in the cluster
    • curl -svv https://linginx-myproject.apps-crc.testing # will show a self-signed certificate error
    • curl -s -k https://linginx-myproject.apps-crc.testing # will give access

First thing we need is the linginx1.conf file:
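
The exact file isn’t reproduced here; a minimal configuration consistent with the deployment (nginx listening on the non-privileged port 8080) could look like this:

server {
    listen 8080;                       # matches the container port in the deployment
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}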

Let’s put this config into a config map:
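
oc create cm linginx1 --from-file linginx1.conf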

We are going to use the linginx-v1.yaml file. It uses the docker.io/nginx container and exposes container port 8080, it mounts the config map we have just created, and it runs under the service account linginx-sa.
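
The exact file isn’t reproduced here; a sketch consistent with that description (the mount path is an assumption):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: linginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linginx1
  template:
    metadata:
      labels:
        app: linginx1
    spec:
      serviceAccountName: linginx-sa     # the dedicated service account
      containers:
      - name: linginx1
        image: docker.io/nginx
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/conf.d   # assumption: config mounted as a conf.d snippet
      volumes:
      - name: conf
        configMap:
          name: linginx1
---
apiVersion: v1
kind: Service
metadata:
  name: linginx1
spec:
  selector:
    app: linginx1
  ports:
  - port: 80
    targetPort: 8080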

Let’s create the service account:
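
oc create sa linginx-sa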

In the admin shell we must define a policy as the admin user:
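
oc adm policy add-scc-to-user anyuid -z linginx-sa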

Then, as an ordinary user, create the resources:
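
oc create -f linginx-v1.yaml
oc get pods
oc get svc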

Wait a couple of minutes for the resources to be created successfully.

Now we can create the route:
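
oc create route edge linginx1 --service linginx1 --cert=../openssl/tls.crt --key=../openssl/tls.key --ca-cert=../openssl/myCA.pem --hostname=okd.netico.pl --port=80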

Now, we can test:
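
From another Pod in the cluster:

curl -svv https://linginx-myproject.apps-crc.testing   # shows a self-signed certificate error
curl -s -k https://linginx-myproject.apps-crc.testing  # gives access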

 

Passthrough Routes

  • A passthrough route forwards TLS connections unmodified to the Pods, guaranteeing client-router-application end-to-end encryption
  • To make this happen, a secret providing the certificate as well as the certificate key is created and mounted in the application
  • The passthrough route type doesn’t hold any key materials, but transparently presents the key materials that are available in the application — the router doesn’t provide TLS termination
  • Passthrough is the only method that supports mutual authentication between application and client

Configuring a Passthrough Route

  • Part 1: Creating Certificates: ensure that the subject name matches the name used in the route
    • mkdir openssl; cd openssl
    • openssl genrsa -des3 -out myCA.key 2048
    • openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem
    • openssl genrsa -out tls.key 2048
    • openssl req -new -key tls.key -out tls.csr # set the common name to linginx-default.apps-crc.testing
    • openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256
  • Part 2: Creating a Secret
    • oc create secret tls linginx-certs --cert tls.crt --key tls.key
    • oc get secret linginx-certs -o yaml
  • Part 3: Create a ConfigMap
    • oc create cm nginxconfigmap --from-file default.conf
    • oc create sa linginx-sa # creates the dedicated service account
    • oc adm policy add-scc-to-user anyuid -z linginx-sa
  • Part 4: Starting Deployment and Service
    • vim linginx-v2.yaml #check volumes
    • oc create -f linginx-v2.yaml
  • Part 5: Creating the Passthrough Route
    • oc create route passthrough linginx --service linginx2 --port 8443 --hostname=linginx-default.apps-crc.testing
    • oc get routes
    • oc get svc
  • Part 6: Testing in a Debug Pod
  • oc debug -t deployment/linginx2 --image registry.access.redhat.com/ubi8/ubi:8.0
    • curl -s -k https://172.25.201.41:8443 # only works from same network
  • curl https://linginx-default.apps-crc.testing
  • curl --insecure https://linginx-default.apps-crc.testing

 

For this exercise we need to go through the entire procedure again, so we need to delete the old openssl directory.
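
For example:

rm -rf ~/openssl      # removing the old directory is implied by the text above
mkdir ~/openssl; cd ~/openssl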

Now, let’s do the certificate stuff:
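
openssl genrsa -des3 -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem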

Now we can generate the server keys:
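
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr   # set the common name to linginx-default.apps-crc.testing
openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256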

Creating a Secret:
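
oc create secret tls linginx-certs --cert tls.crt --key tls.key
oc get secret linginx-certs -o yaml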

We also need to create a config map:
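
oc create cm nginxconfigmap --from-file default.conf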

Starting Deployment and Service:
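
oc create -f linginx-v2.yaml
oc get pods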

Create a route:
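
oc create route passthrough linginx --service linginx2 --port 8443 --hostname=linginx-default.apps-crc.testing
oc get routes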

Testing in a Debug Pod:
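
oc debug -t deployment/linginx2 --image registry.access.redhat.com/ubi8/ubi:8.0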

curl -s -k https://172.25.201.41:8443 # only works from same network

curl https://okd.netico.pl

curl --insecure https://linginx-default.apps-crc.testing

 

Network Policies

  • By default, there are no restrictions to network traffic in K8s
  • Pods can always communicate, even if they’re in other namespaces
  • To limit this, Network Policies can be used
  • If in a policy there is no match, traffic will be denied
  • If no Network Policy is used, all traffic is allowed

Network Policy Identifiers

  • In network policies, three different identifiers can be used
    • Pods: (podSelector) note that a Pod cannot block access to itself
    • Namespaces: (namespaceSelector) to grant access to specific namespaces
    • IP blocks: (ipBlock) notice that traffic to and from the node where a Pod is running is always allowed
  • When defining a Pod- or namespace-based network policy, a selector label is used to specify which traffic is allowed to and from the Pods that match the selector
  • Network policies do not conflict, they are additive; a sketch combining all three identifiers follows below
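
A minimal sketch (all labels and the CIDR here are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-selected
spec:
  podSelector:              # the Pods this policy applies to
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:          # Pods with this label may connect
        matchLabels:
          access: "true"
    - namespaceSelector:    # any Pod in namespaces with this label
        matchLabels:
          type: incoming
    - ipBlock:              # an external address range
        cidr: 10.0.0.0/16
    ports:
    - protocol: TCP
      port: 80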

Allowing Ingress and Monitoring

  • If cluster monitoring or exposed routes are used, Ingress from them needs to be included in the network policy
  • Use spec.ingress.from.namespaceSelector.matchLabels to define:
    • network.openshift.io/policy-group: monitoring
    • network.openshift.io/policy-group: ingress
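
For example, a policy that admits traffic from the default router (the policy name is hypothetical; the policy-group label is set by OpenShift on the relevant namespaces):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}           # applies to all Pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress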

Configuring Network Policy

  • oc login -u admin -p password
  • oc apply -f nwpolicy-complete-example.yaml
  • oc expose pod nginx --port=80
  • oc exec -it busybox -- wget --spider --timeout=1 nginx # will fail
  • oc label pod busybox access=true
  • oc exec -it busybox -- wget --spider --timeout=1 nginx # will work

Exercise

Now we need to expose the nginx pod:
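
oc expose pod nginx --port=80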

Let’s check if there is a service:
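
oc get svc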

Let’s check the network policy:
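
The file nwpolicy-complete-example.yaml isn’t reproduced here; judging from the test flow, the policy part of it resembles this sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: nginx            # the policy selects the nginx Pod
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"    # only Pods with this label may connect

With this policy in place, the test from the step list times out:

oc exec -it busybox -- wget --spider --timeout=1 nginx   # times out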

The download timed out because the network policy did not find any matching rule.

So, let’s create such a rule:
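
oc label pod busybox access=true
oc exec -it busybox -- wget --spider --timeout=1 nginx   # now works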

 

Advanced Network Policies

  • oc login -u kubeadmin -p ...
  • oc new-project source-project
  • oc label ns source-project type=incoming
  • oc create -f nginx-source1.yml
  • oc create -f nginx-source2.yml
  • oc login -u developer -p developer
  • oc new-project target-project
  • oc project target-project # ensure we are in the new project
  • oc new-app --name nginx-target --docker-image quay.io/openshifttest/hello-openshift:openshift
  • oc get pods -o wide 
  • oc login -u kubeadmin -p ...
  • oc exec -it nginx-access -n source-project -- curl <ip-of-nginx-target-pod>:8080
  • oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080
  • oc create -f nwpol-allow-specific.yaml
  • oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080
  • oc label pod nginx-target-1-<xxxxx> type=incoming
  • oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080

Let’s create a target project.
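
oc login -u developer -p developer
oc new-project target-project
oc new-app --name nginx-target --docker-image quay.io/openshifttest/hello-openshift:openshift
oc get pods -o wide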

Now, back in the admin shell:
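
oc login -u kubeadmin -p ...
oc exec -it nginx-access -n source-project -- curl <ip-of-nginx-target-pod>:8080
oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080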

We have no network policy yet, so there are no traffic restrictions, even for traffic between different namespaces.

Let’s look at this network policy:
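
The file nwpol-allow-specific.yaml isn’t reproduced here; a sketch consistent with the exercise flow (the Pod label on nginx-access is hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nwpol-allow-specific
  namespace: target-project
spec:
  podSelector:
    matchLabels:
      type: incoming            # only applies to Pods carrying this label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          type: incoming        # namespaces labeled type=incoming
      podSelector:
        matchLabels:
          app: nginx-access     # hypothetical label on the allowed Pod
    ports:
    - protocol: TCP
      port: 8080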

Let’s create this network policy:
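
oc create -f nwpol-allow-specific.yaml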

And we can still reach the “Hello OpenShift” application:
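
oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080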

That is because the type=incoming label is not yet set on the nginx-target Pod, so the policy does not select it and all traffic is still allowed.

 

Lab: Creating an Edge Route
Run an Nginx deployment, and ensure this deployment is accessible by addressing an edge route
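
One possible solution, reusing the certificates and files from the edge-route demo earlier in this section (the project name is hypothetical):

oc new-project edge-lab
oc create cm linginx1 --from-file linginx1.conf
oc create sa linginx-sa
oc adm policy add-scc-to-user anyuid -z linginx-sa     # as admin
oc create -f linginx-v1.yaml
oc create route edge linginx1 --service linginx1 --cert=../openssl/tls.crt --key=../openssl/tls.key --ca-cert=../openssl/myCA.pem --port=80
curl -s -k https://<route-hostname>                    # verify access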

That concludes the lab.