OpenShift Networking
- Services provide load balancing to replicated Pods in an application, and are essential in providing access to applications
- Services connect to Endpoints, which track the individual Pod IP addresses behind the service
- Ingress is a Kubernetes resource that exposes services to external users
- Ingress adds URLs, load balancing, as well as access rules
- Ingress is not used as such in OpenShift
- OpenShift routes are an alternative to Ingress
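As a quick illustration of how these objects relate (a hedged sketch; the service name bitginx is only an example that also appears later in this section):
oc get svc bitginx          # the service and its ClusterIP
oc get endpoints bitginx    # the individual Pod IP addresses behind the service
oc get routes               # OpenShift routes that expose services to external users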
OpenShift SDN
- OpenShift uses Software Defined Networking (SDN) to implement connectivity
- OpenShift SDN separates the network into a control plane and a data plane
- The SDN solves five requirements
- Manage network traffic and resources as software such that they can be programmed by the application owners
- Communicate between containers running within the same project
- Communicate between Pods within and beyond project boundaries
- Manage network communication from a Pod to a service
- Manage network communication from external network to service
- The network is managed by the OpenShift Cluster Network Operator
The DNS Operator
- The DNS Operator deploys and manages the CoreDNS DNS server
- The internal CoreDNS server is used by Pods for DNS resolution
- Use oc describe dns.operator/default to see its configuration
- The DNS Operator has several roles:
- Create a default cluster DNS name cluster.local
- Assign DNS names to namespaces
- Assign DNS names to services
- Assign DNS names to Pods
Managing DNS Records for Services
- DNS names are composed as servicename.projectname.svc.cluster-dns-name, for example db.myproject.svc.cluster.local
- Apart from the A resource records, CoreDNS also implements SRV records, in which the port name and protocol are prepended to the service A record name, for example _443._tcp.webserver.myproject.svc.cluster.local
- If a service has no cluster IP address (a headless service), DNS A records are created for the IP addresses of its Pods, and round-robin is applied (see the lookup example below)
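To verify these records, you can run a DNS lookup from inside a running Pod; a minimal sketch, assuming a service named db in project myproject and a Pod that has nslookup available:
oc rsh <running-pod>                                                   # open a shell in any running Pod (placeholder name)
nslookup db.myproject.svc.cluster.local                                # A record of the service
nslookup -type=SRV _443._tcp.webserver.myproject.svc.cluster.local    # SRV record example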
The Cluster Network Operator
- The OpenShift Cluster Network Operator defines how the network is shaped and provides information about the following:
- Network address
- Network mode
- Network provider
- IP address pools
- Use oc get network/cluster -o yaml for details
- Notice that at the time of writing, OpenShift uses the OpenShift SDN network provider by default; this may have changed by the time you read this
- Future versions will use OVN-Kubernetes to manage the cluster network
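For illustration, the cluster Network resource typically contains fields like the following; the values shown here are common defaults, not taken from a specific cluster:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14      # Pod network address pool (example value)
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16            # service network (example value)
  networkType: OpenShiftSDN  # network provider; newer releases use OVNKubernetes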
Network Policy
- Network policy allows defining Ingress and Egress filtering
- If no network policy exists, all traffic is allowed
- If a network policy exists, it blocks all traffic to and from the Pods it selects, except the Ingress and Egress traffic that is explicitly allowed (see the default-deny sketch below)
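A minimal sketch of such a default-deny policy (not part of the exercises later in this section): once it exists in a namespace, only traffic that other policies explicitly allow gets through.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}      # selects all Pods in the namespace
  policyTypes:
  - Ingress
  - Egress             # no rules are listed, so all ingress and egress traffic is blocked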
Service Types
- ClusterIP: the service is exposed on an IP address that is internal to the cluster. This is the default type; the service cannot be addressed directly from outside the cluster
- NodePort: a service type that exposes a port on the node IP address.
- LoadBalancer: exposes the service through a cloud provider load balancer. The cloud provider LB talks to the OpenShift network controller to automatically create a node port to route incoming requests
- ExternalName: creates a CNAME in DNS to match an external host name. Use this to create different access points for applications external to the cluster
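A hedged sketch of how these types appear in service manifests (names and ports are placeholders, not resources used elsewhere in this section):
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport           # hypothetical name
spec:
  type: NodePort                 # ClusterIP is the default if type is omitted
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: external-db              # hypothetical ExternalName service
spec:
  type: ExternalName
  externalName: db.example.com   # DNS CNAME target outside the cluster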
The Ingress Resource
- Ingress traffic is generic terminology for incoming traffic (and covers more than just the Kubernetes Ingress resource)
- The Ingress resource is managed by the Ingress Operator and accepts external requests that will be proxied
- The route resource is an OpenShift resource that provides more features than Ingress
- TLS re-encryption
- TLS passthrough
- split traffic for blue-green deployments
OpenShift Route
- OpenShift route resources are implemented by the shared router service that runs as a Pod in the cluster
- Router Pods bind to public-facing IP addresses on the nodes
- A DNS wildcard domain is required for this to work
- Routes can be implemented as secure and as insecure routes
Requirements for Creating Routes
- Route resources need the following values:
- Name of the service that the route accesses
- A host name for the route that is related to the cluster DNS wildcard domain
- An optional path for path-based routes
- A target port, which is where the application listens
- An encryption strategy
- Optional labels that can be used as selectors
- Notice that the route does not use the service directly; it only needs the service to determine which Pods it should connect to (see the manifest sketch below)
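A minimal sketch of a route manifest carrying these values (hostname, service name, and port are placeholders):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  labels:
    app: myapp                    # optional labels that can be used as selectors
spec:
  host: myapp.apps.example.com    # must be a subdomain of the cluster wildcard domain
  path: /api                      # optional, for path-based routes
  to:
    kind: Service
    name: myapp                   # the service the route accesses
  port:
    targetPort: 8080              # the port where the application listens
  tls:
    termination: edge             # the encryption strategy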
Route Options and Types
- Secure routes can use different types of TLS termination
- Edge: TLS is terminated at the router, so TLS certificates must be configured in the route
- Pass-through: termination is happening at the Pods, which means that the Pods are responsible for serving the certificates. Use this to support mutual authentication
- Re-encryption: TLS traffic is terminated at the route, and new encrypted traffic is established with the Pods
- Insecure routes require no key or certificates
Creating Insecure Routes
- Easy: just use oc expose service my.service --hostname my.name.example.com
- The service my.service is exposed
- The hostname my.name.example.com is set for the route
- If no hostname is specified, the name routename.projectname.defaultdomain is used
- Notice that only the OpenShift router, and not the CoreDNS server, knows about route names
- DNS has a wildcard domain name that sends traffic to the IP address that runs the router software, which will further take care of the specific name resolving
- Therefore, the route name must always be a subdomain of the cluster wildcard domain
Let’s create an insecure route:
$ oc new-app --docker-image=bitnami/nginx --name bitginx
--> Found Docker image b3834d0 (45 hours old) from Docker Hub for "bitnami/nginx"

    * An image stream tag will be created as "bitginx:latest" that will track this image
    * This image will be deployed in deployment config "bitginx"
    * Ports 8080/tcp, 8443/tcp will be load balanced by service "bitginx"
      * Other containers can access this service through the hostname "bitginx"

--> Creating resources ...
    imagestream.image.openshift.io "bitginx" created
    deploymentconfig.apps.openshift.io "bitginx" created
    service "bitginx" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/bitginx'
    Run 'oc status' to view your app.

$ oc get all
NAME                                READY   STATUS             RESTARTS   AGE
pod/bitginx-1-jzk9r                 1/1     Running            0          7s
pod/docker-registry-1-ctgff         1/1     Running            0          1d
pod/lab4pod                         1/1     Running            0          8h
pod/nginx-cm                        1/1     Running            0          9h
pod/persistent-volume-setup-8f6lt   0/1     Completed          0          1d
pod/pv-pod                          1/1     Running            0          1d
pod/router-1-k8zgt                  1/1     Running            0          1d
pod/test1                           0/1     CrashLoopBackOff   125        10h

NAME                                      DESIRED   CURRENT   READY   AGE
replicationcontroller/bitginx-1           1         1         1       10s
replicationcontroller/docker-registry-1   1         1         1       1d
replicationcontroller/router-1            1         1         1       1d

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
service/bitginx           ClusterIP   172.30.122.7     <none>        8080/TCP,8443/TCP         12s
service/docker-registry   ClusterIP   172.30.1.1       <none>        5000/TCP                  1d
service/kubernetes        ClusterIP   172.30.0.1       <none>        443/TCP                   1d
service/router            ClusterIP   172.30.110.170   <none>        80/TCP,443/TCP,1936/TCP   1d

NAME                                DESIRED   SUCCESSFUL   AGE
job.batch/persistent-volume-setup   1         1            1d

NAME                                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/bitginx            1          1         1         config,image(bitginx:latest)
deploymentconfig.apps.openshift.io/docker-registry    1          1         1         config
deploymentconfig.apps.openshift.io/router             1          1         1         config

NAME                                      DOCKER REPO                       TAGS     UPDATED
imagestream.image.openshift.io/bitginx    172.30.1.1:5000/default/bitginx   latest   10 seconds ago

$ oc expose service bitginx
route.route.openshift.io/bitginx exposed

$ oc get routes
NAME      HOST/PORT                          PATH   SERVICES   PORT       TERMINATION   WILDCARD
bitginx   bitginx-default.127.0.0.1.nip.io          bitginx    8080-tcp                 None

$ oc describe routes bitginx
Name:                   bitginx
Namespace:              default
Created:                3 minutes ago
Labels:                 app=bitginx
Annotations:            openshift.io/host.generated=true
Requested Host:         bitginx-default.127.0.0.1.nip.io
                          exposed on router router 3 minutes ago
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        bitginx
Weight:         100 (100%)
Endpoints:      172.17.0.14:8080, 172.17.0.14:8443
Why Do We Need Certificates?
- PKI certificates are everywhere in OpenShift
- To secure resources, like routes, it is essential to understand how certificates work
- Public keys need to be signed by a Certificate Authority before clients will trust them
- Self-signed certificates are an easy way to get started with your own certificates
- Next, these certificates can be used in OpenShift resources like routes
Creating self-signed certificates
- Creating the CA
mkdir ~/openssl
cd ~/openssl
openssl genrsa -des3 -out myCA.key 2048
- Creating the CA certificate (public key):
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem
- Creating the certificate
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -out tls.csr
# make sure the CN matches the DNS name of the route, which is project.apps-crc.testing
- Self-signing the certificate
openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256
To generate a certificate, let's use the openssl utility. Creating the CA:
$ mkdir ~/openssl
$ cd ~/openssl
$ openssl genrsa -des3 -out myCA.key 2048
Generating RSA private key, 2048 bit long modulus
............................................................+++
....................+++
e is 65537 (0x10001)
Enter pass phrase for myCA.key:
Verifying - Enter pass phrase for myCA.key:

[root@okd ~]# openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem
Enter pass phrase for myCA.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:PL
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:okd.netico.pl
Email Address []:

$ ll
total 8
-rw-r--r-- 1 root root 1751 07-24 21:39 myCA.key
-rw-r--r-- 1 root root 1285 07-24 21:45 myCA.pem
The most important thing is to specify the Common Name.
Creating the certificate:
$ openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
..........+++
....................................+++
e is 65537 (0x10001)

$ openssl req -new -key tls.key -out tls.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:okd2.netico.pl
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Self-signing the certificate:
$ openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd/CN=okd2.netico.pl
Getting CA Private Key
Enter pass phrase for myCA.key: [the password which was used to create the CA]
At this point the certificate has been created and is ready to use.
Edge Routes
- Edge routes hold TLS key material so that TLS termination can occur at the router
- Connections between router and application are not encrypted, so no TLS configuration is needed at the application
- Re-encryption routes offer a variation on edge termination
- The router terminates TLS with a certificate, and re-encrypts its connection to the endpoint (typically with a different certificate)
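A re-encryption route is not demonstrated later in this section; as a sketch, reusing the certificate files created earlier and a hypothetical destination CA file (destCA.pem) for the Pod certificates, such a route could be created like this:
oc create route reencrypt linginx-reencrypt --service linginx1 \
  --cert=../openssl/tls.crt --key=../openssl/tls.key --ca-cert=../openssl/myCA.pem \
  --dest-ca-cert=../openssl/destCA.pem --hostname=okd.netico.pl --port=8443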
Configuring an Edge Route
- Creating the deployment, service, and route:
oc new-project myproject
oc create cm linginx1 --from-file linginx1.conf
oc create sa linginx-sa
# creates the dedicated service account
- As administrator:
oc adm policy add-scc-to-user anyuid -z linginx-sa
- Back as the regular user:
oc create -f linginx-v1.yaml
oc get pods
oc get svc
oc create route edge linginx1 --service linginx1 --cert=../openssl/tls.crt --key=../openssl/tls.key --ca-cert=../openssl/myCA.pem --hostname=okd.netico.pl --port=80
- Testing from another pod in the cluster:
curl -svv https://linginx-myproject.apps-crc.testing
# will show a self-signed certificate error
curl -s -k https://linginx-myproject.apps-crc.testing
# will give access
The first thing we need is the linginx1.conf file:
server {
    listen 8080 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /usr/share/nginx/html;
    index index.html index.htm;
    server_name localhost;
}
Let's put this config into a config map:
$ oc create cm linginx1 --from-file linginx1.conf
configmap/linginx1 created

$ oc describe cm
Name:         linginx1
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
linginx1.conf:
----
server {
    listen 8080 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /usr/share/nginx/html;
    index index.html;
    server_name localhost;
}

Events:  <none>
We are going to use the linginx-v1.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linginx1
  labels:
    deployment: linginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: linginx1
  template:
    metadata:
      labels:
        deployment: linginx1
    spec:
      containers:
      - image: docker.io/nginx
        name: linginx1
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - mountPath: "/etc/nginx/conf.d"
          name: configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: linginx1
          items:
          - key: linginx1.conf
            path: default.conf
      serviceAccount: linginx-sa
      serviceAccountName: linginx-sa
---
apiVersion: v1
kind: Service
metadata:
  labels:
    deployment: linginx1
  name: linginx1
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    deployment: linginx1
It uses the docker.io/nginx container image and exposes container port 8080. It mounts the config map we have just created and uses the service account linginx-sa.
Let's create the service account:
$ oc create sa linginx-sa
serviceaccount/linginx-sa created
In an admin shell, we must assign the SCC policy as an admin user:
$ oc adm policy add-scc-to-user anyuid -z linginx-sa
scc "anyuid" added to: ["system:serviceaccount:default:linginx-sa"]
Then, as an ordinary user, create the resources:
$ oc create -f linginx-v1.yaml
deployment.apps/linginx1 created
service/linginx1 created
Wait a couple of minutes for the resources to be successfully created.
$ oc get pods
NAME                       READY   STATUS    RESTARTS   AGE
linginx1-dc9f65f54-6zw8j   1/1     Running   0          1m

$ oc get svc
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
linginx1   ClusterIP   172.30.157.239   <none>        8080/TCP   1m
Now we can create a route:
$ oc create route edge linginxl --service linginxl --cert=../openssl/tls.crt --key=../openssl/tls.key --ca-cert=../openssl/myCA.pem --hostname=okd.netico.pl --port=80
route.route.openshift.io/linginxl created

$ oc get routes
NAME       HOST/PORT       PATH   SERVICES   PORT   TERMINATION   WILDCARD
linginxl   okd.netico.pl          linginxl   80     edge          None
Now, we can test:
$ curl -svv https://okd.netico.pl
* About to connect() to okd.netico.pl port 443 (#0)
*   Trying 172.30.9.22...
* Connected to okd.netico.pl (172.30.9.22) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
* Server certificate:
*       subject: CN=okd2.netico.pl,O=Default Company Ltd,L=Default City,C=XX
*       start date: lip 24 19:57:16 2023 GMT
*       expire date: sty 29 19:57:16 2028 GMT
*       common name: okd2.netico.pl
*       issuer: CN=okd.netico.pl,O=Default Company Ltd,L=Default City,C=PL
* NSS error -8172 (SEC_ERROR_UNTRUSTED_ISSUER)
* Peer's certificate issuer has been marked as not trusted by the user.
* Closing connection 0

$ curl -s -k https://okd.netico.pl
<html>
  <head>
  ...
  <body>
    <div>
      <h1>Application is not available</h1>
      <p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p>
      <div class="alert alert-info">
        <p class="info">Possible reasons you are seeing this page:</p>
        <ul>
          <li><strong>The host doesn't exist.</strong> Make sure the hostname was typed correctly and that a route matching this hostname exists.</li>
          <li><strong>The host exists, but doesn't have a matching path.</strong> Check if the URL path was typed correctly and that the route was created using the desired path.</li>
          <li><strong>Route and path matches, but all pods are down.</strong> Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.</li>
        </ul>
      </div>
    </div>
  </body>
</html>
Passthrough Routes
- A passthrough route configures the route to pass forward the certificate to guarantee client-route-application end-to-end encryption
- To make this happen, a secret providing the certificate as well as the certificate key is created and mounted in the application
- The passthrough route type doesn't hold any key material itself; the router forwards the encrypted traffic to the application without providing TLS termination
- Passthrough is the only method that supports mutual authentication between application and client
Configuring a Passthrough Route
- Part 1: Creating certificates; ensure that the subject name matches the name used in the route
mkdir openssl; cd openssl
openssl genrsa -des3 -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem
openssl genrsa -out tls.key 2048
# set the common name to linginx-default.apps-crc.testing
openssl req -new -key tls.key -out tls.csr
openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256
- Part 2: Creating a Secret
oc create secret tls linginx-certs --cert tls.crt --key tls.key
oc get secret linginx-certs -o yaml
- Part 3: Creating a ConfigMap and service account
oc create cm nginxconfigmap --from-file default.conf
oc create sa linginx-sa
# creates the dedicated service account
oc adm policy add-scc-to-user anyuid -z linginx-sa
- Part 4: Starting Deployment and Service
vim linginx-v2.yaml
# check the volumes
oc create -f linginx-v2.yaml
- Part 5: Creating the Passthrough Route
oc create route passthrough linginx --service linginx2 --port 8443 --hostname=linginx-default.apps-crc.testing
oc get routes
oc get svc
- Part 6: Testing in a Debug Pod
oc debug -t deployment/linginx2 --image registry.access.redhat.com/ubi8/ubi:8.0
curl -s -k https://172.25.201.41:8443
# only works from the same network
curl https://linginx-default.apps-crc.testing
curl --insecure https://linginx-default.apps-crc.testing
For this exercise we need to go through the entire procedure again, so we need to delete the old openssl directory.
$ cd ~
$ rm -rf openssl
$ mkdir openssl
$ cd openssl
Now, let’s do the certificate stuff:
$ openssl genrsa -des3 -out myCA.key 2048
Generating RSA private key, 2048 bit long modulus
...........+++
..........................+++
e is 65537 (0x10001)
Enter pass phrase for myCA.key:
Verifying - Enter pass phrase for myCA.key:

$ openssl req -x509 -new -nodes -key myCA.key -sha256 -days 3650 -out myCA.pem
Enter pass phrase for myCA.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:PL
State or Province Name (full name) []:silesia
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:okd.netico.pl
Email Address []:
Now we can generate the server keys
$ openssl genrsa -out tls.key 2048

$ openssl req -new -key tls.key -out tls.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:PL
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:okd.netico.pl
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

$ openssl x509 -req -in tls.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out tls.crt -days 1650 -sha256
Signature ok
subject=/C=PL/L=Default City/O=Default Company Ltd/CN=okd.netico.pl
Getting CA Private Key
Enter pass phrase for myCA.key:
Creating a Secret:
$ oc create secret tls linginx-certs --cert tls.crt --key tls.key
secret/linginx-certs created

$ oc get secret linginx-certs -o yaml
apiVersion: v1
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURRakNDQWlvQ0NRRDk4bFlaMTl2VkNUQU5CZ2txaGtpRzl3MEJBUXNGQURCc01Rc3dDUVlEVlFRR0V3SlEK... (base64 data truncated)
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeXkyNjducHNhdCszUU5jeUNSSGI0S2JMWFFuZ3laWXVKZTFkUU5KakxwL3cwQ2xP... (base64 data truncated)
kind: Secret
metadata:
  creationTimestamp: 2023-07-26T15:45:17Z
  name: linginx-certs
  namespace: default
  resourceVersion: "1350402"
  selfLink: /api/v1/namespaces/default/secrets/linginx-certs
  uid: 6b86b042-2bcb-11ee-8f96-8e5760356a66
type: kubernetes.io/tls
We also need to create a config map:
$ cd ../ex280
$ vi default.conf
$ cat default.conf
server {
    listen 8080 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 8443 ssl;

    root /usr/share/nginx/html;
    index index.html;
    server_name localhost;

    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        try_files $uri $uri/ =404;
    }
}

$ oc create cm nginxconfigmap --from-file default.conf
configmap/nginxconfigmap created

$ oc create sa linginx-sa
Error from server (AlreadyExists): serviceaccounts "linginx-sa" already exists

$ oc adm policy add-scc-to-user anyuid -z linginx-sa
scc "anyuid" added to: ["system:serviceaccount:default:linginx-sa"]
Starting Deployment and Service:
$ cat linginx-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linginx2
  labels:
    deployment: linginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      deployment: linginx2
  template:
    metadata:
      labels:
        deployment: linginx2
    spec:
      containers:
      - image: docker.io/nginx
        name: linginx2
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        volumeMounts:
        - mountPath: "/etc/nginx/ssl"
          name: tls-certs
        - mountPath: "/etc/nginx/conf.d"
          name: configmap-volume
      volumes:
      - name: tls-certs
        secret:
          secretName: linginx-certs
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      serviceAccount: linginx-sa
      serviceAccountName: linginx-sa
---
apiVersion: v1
kind: Service
metadata:
  labels:
    deployment: linginx2
  name: linginx2
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    deployment: linginx2

$ oc create -f linginx-v2.yaml
deployment.apps/linginx2 created
service/linginx2 created

$ oc get pods
NAME                        READY   STATUS    RESTARTS   AGE
linginx1-dc9f65f54-6zw8j    1/1     Running   0          2h
linginx2-69bf6fc66b-mv6wx   1/1     Running   0          21s
Create a route:
$ oc create route passthrough linginx --service linginx2 --port 8443 --hostname=okd.netico.pl
route.route.openshift.io/linginx created

$ oc get routes
NAME       HOST/PORT                          PATH   SERVICES   PORT       TERMINATION   WILDCARD
bitginx    bitginx-default.127.0.0.1.nip.io          bitginx    8080-tcp                 None
linginx    HostAlreadyClaimed                        linginx2   8443       passthrough   None
linginxl   okd.netico.pl                             linginxl   80         edge          None

$ oc delete route linginxl
route.route.openshift.io "linginxl" deleted

$ oc get routes
NAME      HOST/PORT                          PATH   SERVICES   PORT       TERMINATION   WILDCARD
bitginx   bitginx-default.127.0.0.1.nip.io          bitginx    8080-tcp                 None
linginx   okd.netico.pl                             linginx2   8443       passthrough   None

$ oc get svc
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
linginx1   ClusterIP   172.30.157.239   <none>        8080/TCP            3h
linginx2   ClusterIP   172.30.240.91    <none>        8080/TCP,8443/TCP   5m
Testing in a Debug Pod
$ oc debug -t deployment/linginx2 --image registry.access.redhat.com/ubi8/ubi:8.0
Defaulting container name to linginx2.
Use 'oc describe pod/linginx2-debug -n default' to see all of the containers in this pod.

Debugging with pod/linginx2-debug, original command: <image entrypoint>
Waiting for pod to start ...
If you don't see a command prompt, try pressing enter.

# curl -s -k https://172.30.240.91:8443
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
curl -s -k https://172.25.201.41:8443
# only works from the same network
curl https://okd.netico.pl
curl --insecure https://linginx-default.apps-crc.testing
Network Policies
- By default, there are no restrictions to network traffic in K8s
- Pods can always communicate, even if they’re in other namespaces
- To limit this, Network Policies can be used
- If a policy selects a Pod but none of its rules match the traffic, that traffic is denied
- If no Network Policy is used, all traffic is allowed
Network Policy Identifiers
- In network policies, three different identifiers can be used
- Pods: (podSelector) note that a Pod cannot block access to itself
- Namespaces: (namespaceSelector) to grant access to specific namespaces
- IP blocks: (ipBlock) notice that traffic to and from the node where a Pod is running is always allowed
- When defining a Pod- or namespace-based network policy, a selector label is used to specify what traffic is allowed to and from the Pods that match the selector
- Network policies do not conflict; they are additive (see the ipBlock sketch below)
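The ipBlock selector is not used in the exercises below; a minimal sketch with placeholder CIDR values:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-subnet
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16      # allow traffic from this range ...
        except:
        - 10.0.5.0/24          # ... except this subnet
    ports:
    - port: 8080
      protocol: TCP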
Allowing Ingress and Monitoring
- If cluster monitoring or exposed routes are used, Ingress from them needs to be included in the network policy
- Use spec.ingress.from.namespaceSelector.matchLabels to define:
network.openshift.io/policy-group: monitoring
network.openshift.io/policy-group: ingress
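A hedged sketch of a policy built on these policy-group labels, allowing traffic from the ingress controller and from cluster monitoring into all Pods of the current namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-openshift-ingress-and-monitoring
spec:
  podSelector: {}                # applies to all Pods in this namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: monitoring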
Configuring Network Policy
oc login -u admin -p password
oc apply -f nwpolicy-complete-example.yaml
oc expose pod nginx --port=80
oc exec -it busybox -- wget --spider --timeout=1 nginx # will fail
oc label pod busybox access=true
oc exec -it busybox -- wget --spider --timeout=1 nginx # will work
Exercise
$ cat nwpolicy-complete-example.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
...
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nwp-nginx
    image: nginx:1.17
...
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: sleepy
spec:
  containers:
  - name: nwp-busybox
    image: busybox
    command:
    - sleep
    - "3600"

$ oc apply -f nwpolicy-complete-example.yaml
networkpolicy.networking.k8s.io/access-nginx created
pod/nginx created
pod/busybox created

$ oc get pods
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          15s
nginx     1/1     Running   0          15s
Now we need to expose the nginx pod:
$ oc expose pod nginx --port=80
service/nginx exposed
Let's check if there is a service:
$ oc get svc
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   172.30.188.121   <none>        80/TCP    34s
Let’s check the network policy:
$ oc exec -it busybox -- wget --spider --timeout=1 nginx
Connecting to nginx (172.30.188.121:80)
wget: download timed out
command terminated with exit code 1
The download timed out because the network policy didn't find any matching rules.
So, let’s create such a rule:
$ oc label pod busybox access=true
pod/busybox labeled

$ oc exec -it busybox -- wget --spider --timeout=1 nginx
Connecting to nginx (172.30.188.121:80)
remote file exists
Advanced Network Policies
oc login -u kubeadmin -p ...
oc new-project source-project
oc label ns source-project type=incoming
oc create -f nginx-source1.yml
oc create -f nginx-source2.yml
oc login -u developer -p developer
oc new-project target-project
oc project target-project
oc new-app --name nginx-target --docker-image quay.io/openshifttest/hello-openshift:openshift
oc get pods -o wide
oc login -u kubeadmin -p ...
oc exec -it nginx-access -n source-project -- curl <ip-of-nginx-target-pod>:8080
oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080
oc create -f nwpol-allow-specific.yaml
oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080
oc label pod nginx-target-1-<xxxxx> type=incoming
oc exec -it nginx-noaccess -n source-project -- curl <ip-of-nginx-target-pod>:8080
$ oc new-project source-project
Now using project "source-project" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.

$ oc label ns source-project type=incoming
namespace/source-project labeled

$ cat nginx-source1.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-access
  labels:
    type: access
  namespace: source-project
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
      protocol: TCP

$ cat nginx-source2.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-noaccess
  labels:
    type: noaccess
  namespace: source-project
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
      protocol: TCP

$ oc create -f nginx-source1.yml
pod/nginx-access created

$ oc create -f nginx-source2.yml
pod/nginx-noaccess created

$ oc get all
NAME                 READY   STATUS    RESTARTS   AGE
pod/nginx-access     1/1     Running   0          25s
pod/nginx-noaccess   1/1     Running   0          20s
Let's create a target project:
$ oc new-project target-project
Now using project "target-project" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.

$ oc new-app --name nginx-target --docker-image quay.io/openshifttest/hello-openshift:openshift
--> Found Docker image 7af3297 (5 years old) from quay.io for "quay.io/openshifttest/hello-openshift:openshift"

    * An image stream tag will be created as "nginx-target:openshift" that will track this image
    * This image will be deployed in deployment config "nginx-target"
    * Ports 8080/tcp, 8888/tcp will be load balanced by service "nginx-target"
      * Other containers can access this service through the hostname "nginx-target"

--> Creating resources ...
    imagestream.image.openshift.io "nginx-target" created
    deploymentconfig.apps.openshift.io "nginx-target" created
    service "nginx-target" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/nginx-target'
    Run 'oc status' to view your app.

$ oc get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE
nginx-target-1-9kdn6   1/1     Running   0          49s   172.17.0.21   localhost   <none>
Now, back to the admin shell:
$ oc exec -it nginx-access -n source-project -- curl 172.17.0.21:8080
Hello OpenShift!

$ oc exec -it nginx-noaccess -n source-project -- curl 172.17.0.21:8080
Hello OpenShift!
We have no network policy yet, so there are no traffic restrictions, even for traffic between different namespaces.
Let’s look at this network policy:
$ cat nwpol-allow-specific.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-some
spec:
  podSelector:
    matchLabels:
      type: incoming
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          type: incoming
      podSelector:
        matchLabels:
          type: access
    ports:
    - port: 8080
      protocol: TCP

$ oc get pods -n source-project --show-labels
NAME             READY   STATUS    RESTARTS   AGE   LABELS
nginx-access     1/1     Running   0          21m   type=access
nginx-noaccess   1/1     Running   0          21m   type=noaccess
Let’s create this network policy:
$ oc create -f nwpol-allow-specific.yaml
networkpolicy.networking.k8s.io/allow-some created
And we can still reach the "Hello OpenShift" application:
$ oc exec -it nginx-noaccess -n source-project -- curl 172.17.0.21:8080
Hello OpenShift!
That is because the type=incoming label has not yet been set on the target pod, so the network policy does not select it. Once the target pod is labeled, the nginx-noaccess pod (which lacks the type=access label) can no longer reach it:
$ oc get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx-target-1-9kdn6   1/1     Running   0          17m

$ oc get pods --show-labels
NAME                   READY   STATUS    RESTARTS   AGE   LABELS
nginx-target-1-9kdn6   1/1     Running   0          18m   app=nginx-target,deployment=nginx-target-1,deploymentconfig=nginx-target

$ oc label pod nginx-target-1-9kdn6 type=incoming
pod/nginx-target-1-9kdn6 labeled

$ oc exec -it nginx-noaccess -n source-project -- curl 172.17.0.21:8080
PROBLEM
Lab: Creating an Edge Route
Run an Nginx deployment, and ensure this deployment is accessible through an edge route
$ oc login -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * debug
    myproject

Using project "debug".

[root@okd ~]# oc new-project network-security
Now using project "network-security" on server "https://172.30.9.22:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.

$ oc new-app --name nginxlab --docker-image=bitnami/ngninx
error: unable to locate any local docker images with name "bitnami/ngninx"

The 'oc new-app' command will match arguments to the following types:

  1. Images tagged into image streams in the current project or the 'openshift' project
     - if you don't specify a tag, we'll add ':latest'
  2. Images in the Docker Hub, on remote registries, or on the local Docker engine
  3. Templates in the current project or the 'openshift' project
  4. Git repository URLs or local paths that point to Git repositories

--allow-missing-images can be used to point to an image that does not exist yet.

See 'oc new-app -h' for examples.

[root@okd ~]# oc new-app --name nginxlab --docker-image=bitnami/nginx
--> Found Docker image 1005528 (35 hours old) from Docker Hub for "bitnami/nginx"

    * An image stream tag will be created as "nginxlab:latest" that will track this image
    * This image will be deployed in deployment config "nginxlab"
    * Ports 8080/tcp, 8443/tcp will be load balanced by service "nginxlab"
      * Other containers can access this service through the hostname "nginxlab"

--> Creating resources ...
    imagestream.image.openshift.io "nginxlab" created
    deploymentconfig.apps.openshift.io "nginxlab" created
    service "nginxlab" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/nginxlab'
    Run 'oc status' to view your app.

$ oc get pods
NAME                READY   STATUS              RESTARTS   AGE
nginxlab-1-bcgkt    0/1     ContainerCreating   0          11s
nginxlab-1-deploy   1/1     Running             0          13s

$ oc get svc
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
nginxlab   ClusterIP   172.30.146.79   <none>        8080/TCP,8443/TCP   38s

$ oc create route edge ngnixlab --service ngninxlab --cert=openssl/tls.crt --key=openssl/tls.key --ca-cert=openssl/myCA.crt
error: you need to provide a route port via --port when exposing a non-existent service

[root@okd ~]# oc create route edge ngnixlab --service ngninxlab --cert=openssl/tls.crt --key=openssl/tls.key --ca-cert=openssl/myCA.crt --port=8080
error: open openssl/myCA.crt: no such file or directory

[root@okd ~]# oc create route edge ngnixlab --service ngninxlab --cert=openssl/tls.crt --key=openssl/tls.key --ca-cert=openssl/myCA.pem --port=8080
route.route.openshift.io/ngnixlab created

$ oc get routes
NAME       HOST/PORT                                    PATH   SERVICES    PORT   TERMINATION   WILDCARD
ngnixlab   ngnixlab-network-security.127.0.0.1.nip.io          ngninxlab   8080   edge          None

$ curl -svv https://ngnixlab-network-security.127.0.0.1.nip.io
* About to connect() to ngnixlab-network-security.127.0.0.1.nip.io port 443 (#0)
*   Trying 127.0.0.1...
* Connected to ngnixlab-network-security.127.0.0.1.nip.io (127.0.0.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
    CApath: none
* Server certificate:
*       subject: CN=okd.netico.pl,O=Default Company Ltd,L=Default City,C=PL
*       start date: lip 26 15:42:13 2023 GMT
*       expire date: sty 31 15:42:13 2028 GMT
*       common name: okd.netico.pl
*       issuer: CN=okd.netico.pl,O=Default Company Ltd,L=Default City,ST=silesia,C=PL
* NSS error -8172 (SEC_ERROR_UNTRUSTED_ISSUER)
* Peer's certificate issuer has been marked as not trusted by the user.
* Closing connection 0

$ curl -s -k https://ngnixlab-network-security.127.0.0.1.nip.io
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
  ...
  </head>
  <body>
    <div>
      <h1>Application is not available</h1>
      <p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p>
      <div class="alert alert-info">
        <p class="info">Possible reasons you are seeing this page:</p>
        <ul>
          <li><strong>The host doesn't exist.</strong> Make sure the hostname was typed correctly and that a route matching this hostname exists.</li>
          <li><strong>The host exists, but doesn't have a matching path.</strong> Check if the URL path was typed correctly and that the route was created using the desired path.</li>
          <li><strong>Route and path matches, but all pods are down.</strong> Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.</li>
        </ul>
      </div>
    </div>
  </body>
</html>
That concludes the lab.