Secrets
- A Secret is essentially a ConfigMap whose data is base64-encoded
- To really protect the data in a Secret, Etcd encryption can be enabled
- Secrets are commonly used to decouple configuration and data from the applications running in OpenShift
- Using secrets allows OpenShift to load site-specific data from external sources
- Secrets can be used to store different kinds of data
- Passwords
- Sensitive configuration files
- Credentials such as SSH keys or OAuth tokens
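- As a minimal sketch, a generic secret looks roughly like this when stored (the name and values below are made up for illustration); note that base64 is an encoding, not encryption, which is why Etcd encryption matters:
apiVersion: v1
kind: Secret
metadata:
  name: secretvars              # hypothetical name
type: Opaque
data:
  user: cm9vdA==                # base64 of "root"
  password: dmVyeXNlY3JldA==    # base64 of "verysecret"
- Decoding is trivial: echo dmVyeXNlY3JldA== | base64 -d prints "verysecret"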
Secret Types
- Different types of secrets exist:
- docker-registry
- generic
- tls
- When information is stored in a secret, OpenShift validates that the data conforms to the type of secret
- In OpenShift, secrets are mainly used for two reasons
- To store credentials used by Pods in a microservices architecture
- To store TLS certificates and keys
- A TLS secret stores the certificate as tls.crt and the certificate key as tls.key
- Developers can mount the secret as a volume and create a pass-through route to the application
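- A minimal sketch of that workflow (secret, deployment, service, and route names are hypothetical; the certificate files are assumed to exist):
oc create secret tls myapp-tls --cert tls.crt --key tls.key
oc set volume deployment/myapp --add --type secret --secret-name myapp-tls --mount-path /etc/pki/myapp
oc create route passthrough myapp --service myapp --port 8443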
Creating Secrets
- Generic secrets:
oc create secret generic secretvars --from-literal user=root --from-literal password=verysecret
- Generic secrets, containing SSH keys:
oc create secret generic ssh-keys --from-file id_rsa=~/.ssh/id_rsa --from-file id_rsa.pub=~/.ssh/id_rsa.pub
- Secrets containing TLS certificate and key:
oc create secret tls secret-tls --cert certs/tls.crt --key certs/tls.key
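- To verify what was stored, the secret can be inspected; for example, with the secretvars secret created above:
oc get secret secretvars -o yaml       # shows the base64-encoded data
oc extract secret/secretvars --to=-    # prints the decoded key/value pairs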
Exposing Secrets to Pods
- Secrets can be referred to as variables, or as files from the Pod
- Use oc set env to write the environment variables obtained from a secret to a Pod or Deployment:
oc set env deployment/mysql --from secret/mysql --prefix MYSQL_
- Use oc set volume to mount secrets as volumes
- Notice that when using oc set volume, all files currently in the target directory are no longer accessible
oc set volume deployment/mysql --add --type secret --mount-path /run/secrets/mysql --secret-name mysql
- Notice that oc set env can use --prefix to add a prefix to the environment variables defined in the secret
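- As a rough sketch, the two commands above result in a Pod template along the following lines (the volume name is chosen by oc, and secret keys are normalized to environment-variable form, so the exact output may differ):
spec:
  containers:
  - name: mysql
    env:
    - name: MYSQL_USER                 # prefix MYSQL_ + secret key "user"
      valueFrom:
        secretKeyRef:
          name: mysql
          key: user
    volumeMounts:
    - name: mysql-volume               # hypothetical volume name
      mountPath: /run/secrets/mysql
  volumes:
  - name: mysql-volume
    secret:
      secretName: mysql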
$ oc create secret generic mysql --from-literal user=sqluser --from-literal password=password \
  --from-literal database=secretdb --from-literal hostname=mysql --from-literal root_password=password
secret/mysql created
$ oc new-app --name mysql --docker-image bitnami/mysql
$ oc get pods -w
NAME             READY   STATUS              RESTARTS   AGE
mysql-1-deploy   0/1     ContainerCreating   0          23s
$ oc logs mysql-1-deploy
Error from server (BadRequest): container "deployment" in pod "mysql-1-deploy" is waiting to start: ContainerCreating
$ oc get all
NAME                 READY   STATUS              RESTARTS   AGE
pod/mysql-2-deploy   0/1     ContainerCreating   0          47s

NAME                            DESIRED   CURRENT   READY   AGE
replicationcontroller/mysql-1   0         0         0       12m
replicationcontroller/mysql-2   0         0         0       47s

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   172.30.55.23   <none>        3306/TCP   12m

NAME                                        REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/mysql    2          1         0         config,image(mysql:latest)

NAME                                    DOCKER REPO                       TAGS     UPDATED
imagestream.image.openshift.io/mysql    172.30.1.1:5000/userstaff/mysql   latest   12 minutes ago

$ oc set env --from=secret/mysql --prefix=MYSQL_ deploymentconfig.apps.openshift.io/mysql
deploymentconfig.apps.openshift.io/mysql updated
$ oc get pods -w
NAME             READY   STATUS              RESTARTS   AGE
mysql-2-deploy   0/1     ContainerCreating   0          25s
$ oc get all
NAME                 READY   STATUS              RESTARTS   AGE
pod/mysql-2-deploy   0/1     ContainerCreating   0          1m

NAME                            DESIRED   CURRENT   READY   AGE
replicationcontroller/mysql-1   0         0         0       12m
replicationcontroller/mysql-2   0         0         0       1m

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/mysql   ClusterIP   172.30.55.23   <none>        3306/TCP   12m

NAME                                        REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/mysql    2          1         0         config,image(mysql:latest)

NAME                                    DOCKER REPO                       TAGS     UPDATED
imagestream.image.openshift.io/mysql    172.30.1.1:5000/userstaff/mysql   latest   12 minutes ago

[root@okd ~]# oc exec -it pod/mysql-2-deploy -- env
error: invalid resource name "pod/mysql-2-deploy": [may not contain '/']
[root@okd ~]#
[root@okd ~]# oc exec -it mysql-2-deploy -- env
error: unable to upgrade connection: container not found ("deployment")
[root@okd ~]#
[root@okd ~]# oc get pods -w
NAME             READY   STATUS              RESTARTS   AGE
mysql-2-deploy   0/1     ContainerCreating   0          2m
^C[root@okd ~]#
[root@okd ~]# oc get pods
NAME             READY   STATUS              RESTARTS   AGE
mysql-2-deploy   0/1     ContainerCreating   0          2m
[root@okd ~]# oc get pods
NAME             READY   STATUS              RESTARTS   AGE
mysql-2-deploy   0/1     ContainerCreating   0          4m
[root@okd ~]# oc logs mysql-2-deploy
Error from server (BadRequest): container "deployment" in pod "mysql-2-deploy" is waiting to start: ContainerCreating
ServiceAccounts
- A ServiceAccount (SA) is an account used by a Pod to determine the Pod's access privileges to system resources
- The default ServiceAccount used by Pods allows for very limited access to cluster resources
- Sometimes a Pod cannot run with this very restricted ServiceAccount
- After creating the ServiceAccount, specific access privileges need to be set
Configuring ServiceAccount Access Restrictions
- To create a ServiceAccount, use oc create serviceaccount mysa
- Optionally, add -n namespace to assign the SA to a specific namespace
- After creating the SA, use a role binding to connect the SA to a specific role
- Or associate the SA with a specific Security Context Constraint (see the command sketch below)
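- A minimal command sketch of these steps (namespace, role, SCC, and deployment names are hypothetical):
oc create serviceaccount mysa -n myproject
oc adm policy add-role-to-user view -z mysa -n myproject     # bind the SA to a role
oc adm policy add-scc-to-user anyuid -z mysa -n myproject    # or associate it with an SCC
oc set serviceaccount deployment/myapp mysa                  # use the SA in a Deployment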
# oc login -u kubeadmin -p redahat
$ oc get pods
NAME             READY   STATUS   RESTARTS   AGE
mysql-2-deploy   0/1     Error    0          17h
$ oc get pods mysql-2-deploy -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/deployment-config.name: mysql
    openshift.io/deployment.name: mysql-2
    openshift.io/scc: restricted
  creationTimestamp: 2023-08-05T20:09:07Z
  labels:
    openshift.io/deployer-pod-for.name: mysql-2
  name: mysql-2-deploy
  namespace: userstaff
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: mysql-2
    uid: eeb46f7e-33cb-11ee-8f96-8e5760356a66
  resourceVersion: "4936318"
  selfLink: /api/v1/namespaces/userstaff/pods/mysql-2-deploy
  uid: eeb9b6b4-33cb-11ee-8f96-8e5760356a66
spec:
  activeDeadlineSeconds: 21600
  containers:
  - env:
    - name: OPENSHIFT_DEPLOYMENT_NAME
      value: mysql-2
    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE
      value: userstaff
    image: openshift/origin-deployer:v3.11
    imagePullPolicy: IfNotPresent
    name: deployment
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000460000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: deployer-token-snmch
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: deployer-dockercfg-rbbvc
  nodeName: localhost
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000460000
    seLinuxOptions:
      level: s0:c21,c20
  serviceAccount: deployer
  serviceAccountName: deployer
  terminationGracePeriodSeconds: 10
  volumes:
  - name: deployer-token-snmch
    secret:
      defaultMode: 420
      secretName: deployer-token-snmch
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2023-08-05T20:09:07Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2023-08-05T21:03:58Z
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2023-08-05T20:09:07Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://50203c3593f8c4ed15fc43514ec64c8ee126b788e709ea7b70ee97262f5c7762
    image: openshift/origin-deployer:v3.11
    imageID: docker-pullable://openshift/origin-deployer@sha256:0b09d18f5616617b222394b726be2a860250c10a41737057dba69147ec894ad0
    lastState: {}
    name: deployment
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://50203c3593f8c4ed15fc43514ec64c8ee126b788e709ea7b70ee97262f5c7762
        exitCode: 1
        finishedAt: 2023-08-05T21:03:57Z
        reason: Error
        startedAt: 2023-08-05T20:53:55Z
  hostIP: 172.30.9.22
  phase: Failed
  podIP: 172.17.0.38
  qosClass: BestEffort
  startTime: 2023-08-05T20:09:07Z
$ oc create sa newsa
serviceaccount/newsa created
$ oc get sa
NAME       SECRETS   AGE
builder    2         19h
default    2         19h
deployer   2         19h
newsa      2         12m
Security Context Constraints
- A Security Context Constraint (SCC) is an OpenShift resource, similar to the Kubernetes SecurityContext setting, that restricts access to resources
- The purpose is to limit access from a Pod to the host environment
- Different SCCs are available to control:
- Running privileged containers
- Requesting additional capabilities for a container
- Using host directories as volumes
- Changing SELinux context of a container
- Changing the user ID
- Using SCCs may be necessary to run community containers that by default don’t work under the tight OpenShift security restrictions
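- For reference, these controls correspond to fields in the SCC resource itself; an abbreviated, illustrative fragment (not a complete SCC) looks roughly like this:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-scc               # hypothetical name
allowPrivilegedContainer: false   # privileged containers
allowedCapabilities: []           # additional capabilities
allowHostDirVolumePlugin: false   # host directories as volumes
seLinuxContext:
  type: MustRunAs                 # SELinux context handling
runAsUser:
  type: MustRunAsRange            # allowed user IDs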
Exploring SCCs
- Use oc get scc for an overview of SCCs
- For more details, use oc describe scc <name>, as in oc describe scc nonroot
- Use oc describe pod <podname> | grep scc to see which SCC is currently used by a Pod
- If a Pod can’t run due to an SCC, use oc get pod <name> -o yaml | oc adm policy scc-subject-review -f -
- To change a container to run with a different SCC, you must create a service account and use that in the Pod
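- To check in advance whether a particular ServiceAccount would be sufficient, oc adm policy scc-review can be useful; a sketch, assuming a deployment myapp and a service account nginx-sa exist and that your oc version supports the -z option:
oc get deployment/myapp -o yaml | oc adm policy scc-review -z nginx-sa -f -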
$ oc login -u kubeadmin -p redahat
$ oc get scc
NAME               PRIV    CAPS   SELINUX     RUNASUSER          FSGROUP     SUPGROUP    PRIORITY   READONLYROOTFS   VOLUMES
anyuid             false   []     MustRunAs   RunAsAny           RunAsAny    RunAsAny    10         false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
hostaccess         false   []     MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <none>     false            [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid   false   []     MustRunAs   RunAsAny           RunAsAny    RunAsAny    <none>     false            [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork        false   []     MustRunAs   MustRunAsRange     MustRunAs   MustRunAs   <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
nonroot            false   []     MustRunAs   MustRunAsNonRoot   RunAsAny    RunAsAny    <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged         true    [*]    RunAsAny    RunAsAny           RunAsAny    RunAsAny    <none>     false            [*]
restricted         false   []     MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <none>     false            [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
[root@okd ~]# oc describe scc nonroot
Name:                           nonroot
Priority:                       <none>
Access:
  Users:                        <none>
  Groups:                       <none>
Settings:
  Allow Privileged:             false
  Allow Privilege Escalation:   0xc42111004c
  Default Add Capabilities:     <none>
  Required Drop Capabilities:   KILL,MKNOD,SETUID,SETGID
  Allowed Capabilities:         <none>
  Allowed Seccomp Profiles:     <none>
  Allowed Volume Types:         configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret
  Allowed Flexvolumes:          <all>
  Allowed Unsafe Sysctls:       <none>
  Forbidden Sysctls:            <none>
  Allow Host Network:           false
  Allow Host Ports:             false
  Allow Host PID:               false
  Allow Host IPC:               false
  Read Only Root Filesystem:    false
  Run As User Strategy: MustRunAsNonRoot
    UID:                        <none>
    UID Range Min:              <none>
    UID Range Max:              <none>
  SELinux Context Strategy: MustRunAs
    User:                       <none>
    Role:                       <none>
    Type:                       <none>
    Level:                      <none>
  FSGroup Strategy: RunAsAny
    Ranges:                     <none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:                     <none>
$ oc run nginx --image=ngninx
deploymentconfig.apps.openshift.io/nginx created
$ oc run pod --image=ngninx
deploymentconfig.apps.openshift.io/pod created
$ oc get all
NAME                 READY   STATUS              RESTARTS   AGE
pod/auto-1-build     0/1     Error               0          8d
pod/nginx-1-deploy   0/1     ContainerCreating   0          1m
pod/pod-1-deploy     0/1     ContainerCreating   0          53s

NAME                            DESIRED   CURRENT   READY   AGE
replicationcontroller/nginx-1   0         0         0       1m
replicationcontroller/pod-1     0         0         0       53s

NAME                                       REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/auto    0          1         0         config,image(auto:latest)
deploymentconfig.apps.openshift.io/nginx   1          1         0         config
deploymentconfig.apps.openshift.io/pod     1          1         0         config

NAME                                  TYPE     FROM   LATEST
buildconfig.build.openshift.io/auto   Docker   Git    1

NAME                              TYPE     FROM          STATUS                       STARTED      DURATION
build.build.openshift.io/auto-1   Docker   Git@a6c13bc   Failed (DockerBuildFailed)   8 days ago   26s

NAME                                    DOCKER REPO                   TAGS     UPDATED
imagestream.image.openshift.io/auto     172.30.1.1:5000/auto/auto
imagestream.image.openshift.io/centos   172.30.1.1:5000/auto/centos   latest   8 days ago

$ oc get pod/pod-1-deploy -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/deployment-config.name: pod
    openshift.io/deployment.name: pod-1
    openshift.io/scc: restricted
  creationTimestamp: 2023-08-06T13:37:56Z
  labels:
    openshift.io/deployer-pod-for.name: pod-1
  name: pod-1-deploy
  namespace: auto
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: pod-1
    uid: 73cbd6d5-345e-11ee-8f96-8e5760356a66
  resourceVersion: "5178541"
  selfLink: /api/v1/namespaces/auto/pods/pod-1-deploy
  uid: 73cff627-345e-11ee-8f96-8e5760356a66
spec:
  activeDeadlineSeconds: 21600
  containers:
  - env:
    - name: OPENSHIFT_DEPLOYMENT_NAME
      value: pod-1
    - name: OPENSHIFT_DEPLOYMENT_NAMESPACE
      value: auto
    image: openshift/origin-deployer:v3.11
    imagePullPolicy: IfNotPresent
    name: deployment
    resources: {}
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
      runAsUser: 1000290000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: deployer-token-r2dtz
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: deployer-dockercfg-xtpcd
  nodeName: localhost
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000290000
    seLinuxOptions:
      level: s0:c17,c9
  serviceAccount: deployer
  serviceAccountName: deployer
  terminationGracePeriodSeconds: 10
  volumes:
  - name: deployer-token-r2dtz
    secret:
      defaultMode: 420
      secretName: deployer-token-r2dtz
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2023-08-06T13:37:56Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2023-08-06T13:37:56Z
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    message: 'containers with unready status: [deployment]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2023-08-06T13:37:56Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: openshift/origin-deployer:v3.11
    imageID: ""
    lastState: {}
    name: deployment
    ready: false
    restartCount: 0
    state:
      waiting:
        reason: ContainerCreating
  hostIP: 172.30.9.22
  phase: Pending
  qosClass: BestEffort
  startTime: 2023-08-06T13:37:56Z
$ oc get pod/pod-1-deploy -o yaml | oc adm policy scc-subject-review -f -
RESOURCE           ALLOWED BY
Pod/pod-1-deploy   anyuid
Using SCCs
- oc get scc gives an overview of all SCCs
- oc describe scc anyuid shows information about a specific SCC
- oc describe pod shows a line openshift.io/scc: restricted; most Pods run as restricted
- Some Pods require access beyond the scope of their own containers, such as S2I Pods. To provide this access, SAs are needed
- To change the container to run using a different SCC, you need to create a service account and use that with the Pod or Deployment
Understanding SCC and ServiceAccount
- The service account is what connects a workload to an SCC
- Once the service account is associated with the SCC, it can be bound to a Deployment or Pod so that the workload runs with the permissions of that SCC
- This allows, for instance, a Pod that requires root access to use the anyuid SCC so that it can run anyway
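- The result in the Deployment is simply the serviceAccountName field in the Pod template; a minimal fragment (names are hypothetical):
spec:
  template:
    spec:
      serviceAccountName: nginx-sa   # SA that has been granted the anyuid SCC
      containers:
      - name: sccnginx
        image: nginx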
Demo: using SCCs
- As developer: oc new-project sccs
- oc new-app --name sccnginx --docker-image nginx
- oc get pods will show an error
- oc logs pod/nginx[Tab] will show that it fails because of a permission problem
- As admin: oc get pod nginx[Tab] -o yaml | oc adm policy scc-subject-review -f - will show which SCC to use
- As admin: oc create sa nginx-sa creates the dedicated service account
- As administrator: oc adm policy add-scc-to-user anyuid -z nginx-sa
- As developer: oc set serviceaccount deployment sccnginx nginx-sa
- oc get pods sccs[Tab] -o yaml; look for serviceAccount
- oc get pods should show the pod as running (may have to wait a minute)
$ oc login -u kubeadmin -p redahat
$ oc new-project scc-demo
$ oc new-app --name sccnginx --docker-image nginx
$ oc get pods
NAME                        READY   STATUS             RESTARTS      AGE
sccnginx-784bd9587d-6lpw8   0/1     CrashLoopBackOff   4 (60s ago)   2m44s
$ oc logs sccnginx-784bd9587d-6lpw8
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/09/10 12:32:14 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2023/09/10 12:32:14 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
$ oc get pod sccnginx-784bd9587d-6lpw8 -o yaml | grep message
    message: 'containers with unready status: [sccnginx]'
    message: 'containers with unready status: [sccnginx]'
      message: back-off 5m0s restarting failed container=sccnginx pod=sccnginx-784bd9587d-6lpw8_scc(877967db-9891-4576-b440-86d792629f6e)
$ oc get pod sccnginx-784bd9587d-6lpw8 -o yaml | oc adm policy scc-subject-review -f -
RESOURCE                ALLOWED BY
Pod/sccnginx-1-deploy   anyuid
$ oc create sa nginx-sa
serviceaccount/nginx-sa created
$ oc adm policy add-scc-to-user anyuid -z nginx-sa
scc "anyuid" added to: ["system:serviceaccount:scc-demo:nginx-sa"]
$ oc set sa deployment sccnginx nginx-sa
$ oc get pods
NAME                       READY   STATUS    RESTARTS   AGE
sccnginx-bf44d4dd6-2v2zv   1/1     Running   0          19s
Running Containers as Non-root
- By default, OpenShift does not allow containers to run as root
- Many containers run as root by default
- A container that runs as root has root privileges on the container host as well, which should be avoided
- If you build your own container images, specify which user they should run as
- Frequently, non-root alternatives are available for the images you’re using
- Images on quay.io are often made with OpenShift in mind
- bitnami has reworked common images to be started as non-root: https://engineering.bitnami.com/articles/running-non-root-containers-on-openshift.html
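- When building your own images, a minimal Containerfile sketch along these lines makes the image friendlier to non-root execution (paths follow the common Nginx example; a real image typically also needs to listen on an unprivileged port):
FROM docker.io/library/nginx:latest
# make the directories Nginx writes to group-writable for the arbitrary non-root UID (group 0)
RUN chgrp -R 0 /var/cache/nginx /var/log/nginx /etc/nginx/conf.d && \
    chmod -R g=u /var/cache/nginx /var/log/nginx /etc/nginx/conf.d
USER 1001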
Managing Non-root Container Ports
- Non-root containers cannot bind to a privileged port
- In OpenShift, this is not an issue, as containers are accessed through services and routes
- Configure the port on the service/route, not on the Pod
- Also, non-root containers are limited in which files they can access: directories the application writes to must be writable by the non-root user or group the container runs as
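- A short sketch (deployment and service names are hypothetical): the container listens on an unprivileged port such as 8080, the Service maps to it, and the Route makes it reachable on the router's standard HTTP/HTTPS ports:
oc expose deployment bginx --port 8080   # Service pointing at container port 8080
oc expose service bginx                  # Route; clients connect on port 80/443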
Running Bitnami non-root Nginx
oc new-app --docker-image bitnami/nginx:latest --name=bginx
oc get pods -o wide
oc describe pods bginx-<xxx>
oc get services
$ oc whoami
system:admin
$ oc new-app --docker-image bitnami/nginx:latest --name=bginx
--> Found Docker image c80f470 (2 days old) from Docker Hub for "bitnami/nginx:latest"

    * An image stream tag will be created as "bginx:latest" that will track this image
    * This image will be deployed in deployment config "bginx"
    * Ports 8080/tcp, 8443/tcp will be load balanced by service "bginx"
      * Other containers can access this service through the hostname "bginx"

--> Creating resources ...
    imagestream.image.openshift.io "bginx" created
    deploymentconfig.apps.openshift.io "bginx" created
    service "bginx" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/bginx'
    Run 'oc status' to view your app.
$ oc get pods -o wide
NAME             READY   STATUS              RESTARTS   AGE   IP            NODE        NOMINATED NODE
auto-1-build     0/1     Error               0          8d    172.17.0.22   localhost   <none>
bginx-1-deploy   0/1     ContainerCreating   0          11s   <none>        localhost   <none>
$ oc get all
NAME                 READY   STATUS              RESTARTS   AGE
pod/bginx-1-deploy   0/1     ContainerCreating   0          8m

NAME                            DESIRED   CURRENT   READY   AGE
replicationcontroller/bginx-1   0         0         0       8m

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/bginx   ClusterIP   172.30.66.189   <none>        8080/TCP,8443/TCP   8m

NAME                                       REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/auto    0          1         0         config,image(auto:latest)
deploymentconfig.apps.openshift.io/bginx   1          1         0         config,image(bginx:latest)

NAME                                   DOCKER REPO                  TAGS     UPDATED
imagestream.image.openshift.io/bginx   172.30.1.1:5000/auto/bginx   latest   8 minutes ago

$ oc describe pod/bginx-1-deploy
Name:               bginx-1-deploy
Namespace:          auto
Priority:           0
PriorityClassName:  <none>
Node:               localhost/172.30.9.22
Start Time:         Sun, 06 Aug 2023 16:49:07 +0200
Labels:             openshift.io/deployer-pod-for.name=bginx-1
Annotations:        openshift.io/deployment-config.name=bginx
                    openshift.io/deployment.name=bginx-1
                    openshift.io/scc=restricted
Status:             Pending
IP:
Containers:
  deployment:
    Container ID:
    Image:          openshift/origin-deployer:v3.11
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      OPENSHIFT_DEPLOYMENT_NAME:       bginx-1
      OPENSHIFT_DEPLOYMENT_NAMESPACE:  auto
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-r2dtz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  deployer-token-r2dtz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  deployer-token-r2dtz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                  Age                From                Message
  ----     ------                  ----               ----                -------
  Normal   Scheduled               9m                 default-scheduler   Successfully assigned auto/bginx-1-deploy to localhost
  Warning  FailedCreatePodSandBox  8m (x13 over 9m)   kubelet, localhost  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "bginx-1-deploy": Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF: unknown
  Normal   SandboxChanged          4m (x225 over 9m)  kubelet, localhost  Pod sandbox changed, it will be killed and re-created.
$ oc get services
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
bginx   ClusterIP   172.30.66.189   <none>        8080/TCP,8443/TCP   10m