Kubernetes defines a network model that helps provide simplicity and consistency across a range of networking environments and network implementations. The Kubernetes network model provides the foundation for understanding how containers, pods, and services within Kubernetes communicate with each other.
CNI
- The Container Network Interface (CNI) is the common interface used for networking when starting the kubelet on a worker node
- The CNI itself doesn't take care of networking; that is done by the network plugin
- CNI ensures the pluggable nature of networking and makes it easy to select between the different network plugins provided by the ecosystem
Exploring CNI Configuration
- The CNI plugin configuration is in /etc/cni/net.d
- Some plugins keep the complete network setup in this directory
- Other plugins keep only generic settings there and use additional configuration
- Often, the additional configuration is implemented by Pods
- Generic CNI documentation is at https://github.com/containernetworking/cni
[root@k8s cka]# kubectl get ns
NAME                   STATUS   AGE
default                Active   6d6h
ingress-nginx          Active   4d1h
kube-node-lease        Active   6d6h
kube-public            Active   6d6h
kube-system            Active   6d6h
kubernetes-dashboard   Active   5d1h
limited                Active   8h
[root@k8s cka]# kubectl get all -n kube-system
NAME                                         READY   STATUS             RESTARTS           AGE
pod/coredns-5dd5756b68-sgfkj                 0/1     CrashLoopBackOff   1426 (3m15s ago)   6d6h
pod/etcd-k8s.example.pl                      1/1     Running            0                  6d6h
pod/kube-apiserver-k8s.example.pl            1/1     Running            0                  47h
pod/kube-controller-manager-k8s.example.pl   1/1     Running            0                  47h
pod/kube-proxy-hgh55                         1/1     Running            0                  47h
pod/kube-scheduler-k8s.example.pl            1/1     Running            0                  47h
pod/metrics-server-5f8988d664-7r8j7          1/1     Running            0                  2d5h
pod/storage-provisioner                      1/1     Running            8 (47h ago)        6d6h

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   6d6h
service/metrics-server   ClusterIP   10.102.216.61   <none>        443/TCP                  2d5h

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   6d6h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns          0/1     1            0           6d6h
deployment.apps/metrics-server   1/1     1            1           2d5h

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5dd5756b68          1         1         0       6d6h
replicaset.apps/metrics-server-5f8988d664   1         1         1       2d5h
replicaset.apps/metrics-server-6db4d75b97   0         0         0       2d5h
[root@k8s cka]#
[root@k8s cka]# ps aux | grep api
root      875261  4.3  1.8 1041508 298416 ?      Ssl  lut04 125:21 kube-apiserver --advertise-address=172.30.9.24 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
root     1399886  0.0  0.0   12216   1080 pts/0   S+   16:57   0:00 grep --color=auto api
[root@k8s cka]# ls /etc/cni/net.d
100-crio-bridge.conf.mk_disabled  1-k8s.conflist  200-loopback.conf
[root@k8s cka]# cat /etc/cni/net.d/1-k8s.conflist
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
[root@k8s cka]# cat /etc/cni/net.d/100-crio-bridge.conf.mk_disabled
{
  "cniVersion": "0.3.1",
  "name": "crio",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "routes": [
      { "dst": "0.0.0.0/0" },
      { "dst": "1100:200::1/24" }
    ],
    "ranges": [
      [{ "subnet": "10.85.0.0/16" }],
      [{ "subnet": "1100:200::/24" }]
    ]
  }
}
[root@k8s cka]# cat /etc/cni/net.d/200-loopback.conf
{
  "cniVersion": "1.0.0",
  "name": "loopback",
  "type": "loopback"
}
Kubernetes networking comes in different parts: the Pod network is implemented by the network plugin (Calico, for instance), the cluster (Service) network is managed through the API server, and beyond that there is just the physical external network.
Service Auto Registration
- Kubernetes runs the coredns Pods in the kube-system Namespace as internal DNS servers
- These Pods are exposed by the kube-dns Service
- Services automatically register with this kube-dns Service
- Pods are automatically configured with the IP address of the kube-dns Service as their DNS resolver
- As a result, all Pods can access all Services by name (a quick way to verify this is shown below)
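A quick verification sketch (the Pod name dnstest is made up for illustration; the kube-dns ClusterIP 10.96.0.10 matches the transcripts in this article but will differ per cluster):

kubectl get svc -n kube-system kube-dns
kubectl run dnstest --image=busybox -- sleep 3600
kubectl exec -it dnstest -- cat /etc/resolv.conf
# the nameserver line should point at the kube-dns ClusterIP, e.g. 10.96.0.10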
Accessing Service in other Namespaces
- If a Service is running in the same Namespace, it can be reached by its short hostname
- If a Service is running in another Namespace, an FQDN of the form servicename.namespace.svc.clustername must be used
- The clustername is defined in the coredns Corefile and is set to cluster.local if it hasn't been changed; use kubectl get cm -n kube-system coredns -o yaml to verify (a typical fragment is shown below)
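The relevant fragment of a default, kubeadm-style Corefile typically looks like the sketch below; the cluster name appears on the kubernetes plugin line. Your cluster's Corefile may differ.

.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}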
Accessing Services by Name
kubectl run webserver --image=nginx
kubectl expose pod webserver --port=80
kubectl run testpod --image=busybox -- sleep 3600
kubectl get svc
kubectl exec -it testpod -- wget webserver
[root@k8s cka]# kubectl run webserver --image=nginx
pod/webserver created
[root@k8s cka]# kubectl expose pod webserver --port=80
service/webserver exposed
[root@k8s cka]# kubectl get all
NAME                               READY   STATUS    RESTARTS          AGE
pod/antinginx                      1/1     Running   0                 31h
pod/deploydaemon-zzllp             1/1     Running   0                 5d1h
pod/firstnginx-d8679d567-249g9     1/1     Running   0                 6d2h
pod/firstnginx-d8679d567-66c4s     1/1     Running   0                 6d2h
pod/firstnginx-d8679d567-72qbd     1/1     Running   0                 6d2h
pod/firstnginx-d8679d567-rhhlz     1/1     Running   0                 5d9h
pod/init-demo                      1/1     Running   0                 5d11h
pod/lab4-pod                       1/1     Running   0                 4d8h
pod/morevol                        2/2     Running   234 (7m12s ago)   4d21h
pod/mydaemon-d4dcd                 1/1     Running   0                 5d1h
pod/mystaticpod-k8s.netico.pl      1/1     Running   0                 2d12h
pod/nginx                          1/1     Running   0                 33h
pod/nginx-hdd                      1/1     Running   0                 11h
pod/nginx-ssd                      1/1     Running   0                 12h
pod/nginx-taint-68bd5db674-7skqs   1/1     Running   0                 12h
pod/nginx-taint-68bd5db674-vjq89   1/1     Running   0                 12h
pod/nginx-taint-68bd5db674-vqz2z   1/1     Running   0                 12h
pod/nginxsvc-5f8b7d4f4d-dtrs7      1/1     Running   0                 4d2h
pod/pv-pod                         1/1     Running   0                 4d20h
pod/redis-cache-8478cbdc86-cfsmz   0/1     Pending   0                 29h
pod/redis-cache-8478cbdc86-kr8qr   0/1     Pending   0                 29h
pod/redis-cache-8478cbdc86-w2swz   1/1     Running   0                 29h
pod/sleepy                         1/1     Running   121 (18m ago)     5d12h
pod/testpod                        1/1     Running   0                 6d2h
pod/two-containers                 2/2     Running   714 (7m30s ago)   5d9h
pod/web-0                          1/1     Running   0                 5d14h
pod/web-1                          1/1     Running   0                 5d1h
pod/web-2                          1/1     Running   0                 5d1h
pod/web-server-55f57c89d4-25qhr    0/1     Pending   0                 28h
pod/web-server-55f57c89d4-crtfn    1/1     Running   0                 28h
pod/web-server-55f57c89d4-vl4p5    0/1     Pending   0                 28h
pod/webserver                      1/1     Running   0                 81s
pod/webserver-76d44586d-8gqhf      1/1     Running   0                 4d9h
pod/webshop-7f9fd49d4c-92nj2       1/1     Running   0                 4d4h
pod/webshop-7f9fd49d4c-kqllw       1/1     Running   0                 4d4h
pod/webshop-7f9fd49d4c-x2czc       1/1     Running   0                 4d4h

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/apples       ClusterIP   10.101.6.55      <none>        80/TCP         4d
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        6d7h
service/newdep       ClusterIP   10.100.68.120    <none>        8080/TCP       4d1h
service/nginx        ClusterIP   None             <none>        80/TCP         5d14h
service/nginxsvc     ClusterIP   10.104.155.180   <none>        80/TCP         4d2h
service/webserver    ClusterIP   10.109.5.62      <none>        80/TCP         64s
service/webshop      NodePort    10.109.119.90    <none>        80:32064/TCP   4d3h

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/deploydaemon   1         1         1       1            1           <none>          5d1h
daemonset.apps/mydaemon       1         1         1       1            1           <none>          6d1h

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/firstnginx    4/4     4            4           6d2h
deployment.apps/nginx-taint   3/3     3            3           12h
deployment.apps/nginxsvc      1/1     1            1           4d2h
deployment.apps/redis-cache   1/3     3            1           29h
deployment.apps/web-server    1/3     3            1           28h
deployment.apps/webserver     1/1     1            1           4d9h
deployment.apps/webshop       3/3     3            3           4d4h

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/firstnginx-d8679d567     4         4         4       6d2h
replicaset.apps/nginx-taint-68bd5db674   3         3         3       12h
replicaset.apps/nginxsvc-5f8b7d4f4d      1         1         1       4d2h
replicaset.apps/redis-cache-8478cbdc86   3         3         1       29h
replicaset.apps/web-server-55f57c89d4    3         3         1       28h
replicaset.apps/webserver-667ddc69b6     0         0         0       4d9h
replicaset.apps/webserver-76d44586d      1         1         1       4d9h
replicaset.apps/webshop-7f9fd49d4c       3         3         3       4d4h

NAME                   READY   AGE
statefulset.apps/web   3/3     5d14h
[root@k8s cka]# kubectl run testpod --image=busybox -- sleep 3600
pod/testpod created
[root@k8s cka]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
apples       ClusterIP   10.101.6.55      <none>        80/TCP         4d
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        6d7h
newdep       ClusterIP   10.100.68.120    <none>        8080/TCP       4d1h
nginx        ClusterIP   None             <none>        80/TCP         5d14h
nginxsvc     ClusterIP   10.104.155.180   <none>        80/TCP         4d2h
webserver    ClusterIP   10.109.5.62      <none>        80/TCP         105s
webshop      NodePort    10.109.119.90    <none>        80:32064/TCP   4d3h
[root@k8s cka]# kubectl exec -it testpod -- wget webserver
wget: bad address 'webserver'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it testpod -- wget 10.109.5.62
Connecting to 10.109.5.62 (10.109.5.62:80)
saving to 'index.html'
index.html           100% |********************************|   615  0:00:00 ETA
'index.html' saved
This is easy because everything runs in the same Namespace. (Note that in this particular cluster the lookup by name fails, most likely because CoreDNS is in CrashLoopBackOff as seen earlier, so the Service is reached by its ClusterIP instead.) Between different Namespaces it gets a little more complex.
Accessing Pods in other Namespaces
kubectl create ns remote
kubectl run interginx --image=nginx
kubectl run remotebox --image=busybox -n remote -- sleep 3600
kubectl expose pod interginx --port=80
kubectl exec -it remotebox -n remote -- cat /etc/resolv.conf
kubectl exec -it remotebox -n remote -- nslookup interginx
# fails
kubectl exec -it remotebox -n remote -- nslookup interginx.default.svc.cluster.local
[root@k8s cka]# kubectl create ns remote
namespace/remote created
[root@k8s cka]# kubectl run interginx --image=nginx
pod/interginx created
[root@k8s cka]# kubectl run remotebox --image=busybox -n remote -- sleep 3600
pod/remotebox created
[root@k8s cka]# kubectl expose pod interginx --port=80
service/interginx exposed
[root@k8s cka]# kubectl exec -it remotebox -n remote -- cat /etc/resolv.conf
nameserver 10.96.0.10
search remote.svc.cluster.local svc.cluster.local cluster.local netico.pl
options ndots:5
[root@k8s cka]# kubectl exec -it remotebox -n remote -- nslookup interginx
nslookup: write to '10.96.0.10': Connection refused
;; connection timed out; no servers could be reached
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it remotebox -n remote -- nslookup interginx.default.svc.cluster.local
nslookup: write to '10.96.0.10': Connection refused
;; connection timed out; no servers could be reached
command terminated with exit code 1
Network Policy
- By default, there are no restrictions to network traffic in K8s
- Pods can always communicate, even if they’re in other Namespaces
- To limit this, NetworkPolicies can be used
- NetworkPolicies need to be supported by the network plugin though
- Not every network plugin enforces NetworkPolicies (Flannel, for instance, does not)
- If a Pod is selected by a NetworkPolicy but none of its rules match the traffic, that traffic is denied
- If no NetworkPolicy selects a Pod, all traffic to and from it is allowed (a minimal default-deny example is sketched below)
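A minimal default-deny sketch (the name default-deny-ingress is arbitrary): the empty podSelector selects every Pod in the Namespace, and because the policy lists no ingress rules, all incoming traffic to those Pods is denied.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress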
Using NetworkPolicy Identifiers
- In NetworkPolicy, three different identifiers can be used
- Pods: (podSelector) note that a Pod cannot block access to itself
- Namespaces: (namespaceSelector) to grant access to specific Namespaces
- IP blocks: (ipBlock) notice that traffic to and from the node where a Pod is running is always allowed
- When defining a Pod- or Namespace-based NetworkPolicy, a selector label is used to specify what traffic is allowed to and from the Pods that match the selector
- NetworkPolicies do not conflict, they are additive (a sketch combining all three identifiers follows this list)
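A sketch combining the three identifier types in a single ingress rule; the policy name, the app: web label, the monitoring Namespace, and the CIDR are made up for illustration. Note that the three entries under from: are alternatives, so traffic matching any one of them is allowed.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: combined-identifiers
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - protocol: TCP
      port: 80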
Exploring NetworkPolicy
kubectl apply -f nwpolicy-complete-example.yaml
kubectl expose pod nginx --port=80
kubectl exec -it busybox -- wget --spider --timeout=1 nginx
# will fail
kubectl label pod busybox access=true
kubectl exec -it busybox -- wget --spider --timeout=1 nginx
# will work
The steps of the demo are shown in the transcript below.
[root@k8s cka]# cat nwpolicy-complete-example.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
...
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nwp-nginx
    image: nginx
...
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: sleepy
spec:
  containers:
  - name: nwp-busybox
    image: busybox
    command:
    - sleep
    - "3600"
[root@k8s cka]# kubectl apply -f nwpolicy-complete-example.yaml
networkpolicy.networking.k8s.io/access-nginx created
pod/nginx created
pod/busybox created
[root@k8s cka]# kubectl expose pod nginx --port=80
service/nginx exposed
[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 nginx
wget: bad address 'nginx'
command terminated with exit code 1
[root@k8s cka]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
apples       ClusterIP   10.101.6.55      <none>        80/TCP         4d12h
interginx    ClusterIP   10.102.130.239   <none>        80/TCP         10h
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        6d18h
newdep       ClusterIP   10.100.68.120    <none>        8080/TCP       4d12h
nginx        ClusterIP   10.107.22.126    <none>        80/TCP         85s
nginxsvc     ClusterIP   10.104.155.180   <none>        80/TCP         4d13h
webserver    ClusterIP   10.109.5.62      <none>        80/TCP         11h
webshop      NodePort    10.109.119.90    <none>        80:32064/TCP   4d14h
[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 10.107.22.126
Connecting to 10.107.22.126 (10.107.22.126:80)
wget: server returned error: HTTP/1.1 403 Forbidden
command terminated with exit code 1
[root@k8s cka]# kubectl label pod busybox access=true
pod/busybox labeled
[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 10.107.22.126
Connecting to 10.107.22.126 (10.107.22.126:80)
wget: server returned error: HTTP/1.1 403 Forbidden
command terminated with exit code 1
[root@k8s cka]# kubectl describe networkpolicy access-nginx
Name:         access-nginx
Namespace:    default
Created on:   2024-02-07 04:40:26 -0500 EST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=nginx
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      PodSelector: access=true
  Not affecting egress traffic
  Policy Types: Ingress
[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 10.107.22.126
Connecting to 10.107.22.126 (10.107.22.126:80)
wget: server returned error: HTTP/1.1 403 Forbidden
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it busybox -- curl 10.107.22.126
OCI runtime exec failed: exec failed: unable to start container process: exec: "curl": executable file not found in $PATH: unknown
command terminated with exit code 126
Applying NetworkPolicy to Namespaces
- To apply a NetworkPolicy to a Namespace, set the Namespace in the metadata of the NetworkPolicy (or apply it with -n namespace)
- To allow ingress and egress traffic between Namespaces, use the namespaceSelector to match the traffic (see the sketch below)
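A sketch of this approach; the policy name allow-from-frontend and the Namespace name frontend are made up for illustration. Every Namespace automatically carries the kubernetes.io/metadata.name label, which makes it convenient to select a Namespace by name in a namespaceSelector.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend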
Using NetworkPolicy between Namespaces
kubectl create ns nwp-namespace
kubectl apply -f nwp-lab9-1.yaml
kubectl expose pod nwp-nginx --port=80
kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx
# gives a bad address error
kubectl exec -it nwp-busybox -n nwp-namespace -- nslookup nwp-nginx
# explains that it's looking in the wrong Namespace
kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local
# is allowed
[root@k8s cka]# kubectl create ns nwp-namespace
namespace/nwp-namespace created
[root@k8s cka]# cat nwp-lab9-1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nwp-nginx
  namespace: default
  labels:
    app: nwp-nginx
spec:
  containers:
  - name: nwp-nginx
    image: nginx
...
---
apiVersion: v1
kind: Pod
metadata:
  name: nwp-busybox
  namespace: nwp-namespace
  labels:
    app: sleepy
spec:
  containers:
  - name: nwp-busybox
    image: busybox
    command:
    - sleep
    - "3600"
[root@k8s cka]# kubectl apply -f nwp-lab9-1.yaml
pod/nwp-nginx created
pod/nwp-busybox created
[root@k8s cka]# kubectl expose pod nwp-nginx --port=80
service/nwp-nginx exposed
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx
wget: bad address 'nwp-nginx'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- nslookup nwp-nginx
nslookup: write to '10.96.0.10': Connection refused
;; connection timed out; no servers could be reached
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- timeout=1 nwp-nginx.default.svc.cluster.local
OCI runtime exec failed: exec failed: unable to start container process: exec: "timeout=1": executable file not found in $PATH: unknown
command terminated with exit code 126
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 timeout=1 nwp-nginx.default.svc.cluster.local
wget: bad address 'timeout=1'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local
wget: bad address 'nwp-nginx.default.svc.cluster.local'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- ping nwp-nginx.default.svc.cluster.local
ping: bad address 'nwp-nginx.default.svc.cluster.local'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- cat /etc/resolv.conf
nameserver 10.96.0.10
search nwp-namespace.svc.cluster.local svc.cluster.local cluster.local netico.pl
Using NetworkPolicy between Namespaces
kubectl apply -f nwp-lab9-2.yaml
kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local
# is not allowed
kubectl create deployment busybox --image=busybox -- sleep 3600
kubectl exec -it busybox[Tab] -- wget --spider --timeout=1 nwp-nginx
[root@k8s cka]# cat nwp-lab9-2.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: default
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
This NetworkPolicy denies incoming traffic from all other Namespaces: the empty podSelector under spec selects all Pods in the default Namespace, and the ingress rule with an empty podSelector under from: only allows traffic coming from Pods in that same Namespace.
[root@k8s cka]# kubectl apply -f nwp-lab9-2.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local
wget: bad address 'nwp-nginx.default.svc.cluster.local'
command terminated with exit code 1
[root@k8s cka]# kubectl create deployment busybox --image=busybox -- sleep 3600
deployment.apps/busybox created
[root@k8s cka]# kubectl get pods
NAME                           READY   STATUS    RESTARTS         AGE
antinginx                      1/1     Running   0                44h
busybox                        1/1     Running   1 (50m ago)      110m
busybox-6fc6c44c5b-xmmxd       1/1     Running   0                48s
deploydaemon-zzllp             1/1     Running   0                5d14h
firstnginx-d8679d567-249g9     1/1     Running   0                6d15h
firstnginx-d8679d567-66c4s     1/1     Running   0                6d15h
firstnginx-d8679d567-72qbd     1/1     Running   0                6d15h
firstnginx-d8679d567-rhhlz     1/1     Running   0                5d22h
init-demo                      1/1     Running   0                6d
interginx                      1/1     Running   0                12h
lab4-pod                       1/1     Running   0                4d20h
morevol                        2/2     Running   260 (3m ago)     5d10h
mydaemon-d4dcd                 1/1     Running   0                5d14h
mystaticpod-k8s.netico.pl      1/1     Running   0                3d
nginx                          1/1     Running   0                110m
nginx-hdd                      1/1     Running   0                24h
nginx-ssd                      1/1     Running   0                25h
nginx-taint-68bd5db674-7skqs   1/1     Running   0                25h
nginx-taint-68bd5db674-vjq89   1/1     Running   0                25h
nginx-taint-68bd5db674-vqz2z   1/1     Running   0                25h
nginxsvc-5f8b7d4f4d-dtrs7      1/1     Running   0                4d15h
nwp-nginx                      1/1     Running   0                32m
pv-pod                         1/1     Running   0                5d9h
redis-cache-8478cbdc86-cfsmz   0/1     Pending   0                42h
redis-cache-8478cbdc86-kr8qr   0/1     Pending   0                42h
redis-cache-8478cbdc86-w2swz   1/1     Running   0                42h
sleepy                         1/1     Running   134 (13m ago)    6d1h
testpod                        1/1     Running   12 (55m ago)     12h
two-containers                 2/2     Running   792 (57s ago)    5d22h
web-0                          1/1     Running   0                6d3h
web-1                          1/1     Running   0                5d14h
web-2                          1/1     Running   0                5d14h
web-server-55f57c89d4-25qhr    0/1     Pending   0                41h
web-server-55f57c89d4-crtfn    1/1     Running   0                41h
web-server-55f57c89d4-vl4p5    0/1     Pending   0                41h
webserver                      1/1     Running   0                12h
webserver-76d44586d-8gqhf      1/1     Running   0                4d21h
webshop-7f9fd49d4c-92nj2       1/1     Running   0                4d17h
webshop-7f9fd49d4c-kqllw       1/1     Running   0                4d17h
webshop-7f9fd49d4c-x2czc       1/1     Running   0                4d17h
[root@k8s cka]# kubectl exec -it busybox-6fc6c44c5b-xmmxd -- wget --spider --timeout=1 nwp-nginx
wget: bad address 'nwp-nginx'
command terminated with exit code 1
The first wget dont’t work because network policy deny traffic from other namespace. The second wget should work because it is a traffic from the same namespace.
Lab: Using NetworkPolicies
- Run a webserver with the name lab9server in the Namespace restricted, using the Nginx image, and ensure it is exposed by a Service
- From the default Namespace, start two Pods: sleepybox1 and sleepybox2, each based on the Busybox image and using the sleep 3600 command as the command
- Create a NetworkPolicy that limits Ingress traffic to restricted, in such a way that only the sleepybox1 Pod from the default Namespace has access and all other access is forbidden (one possible approach is sketched below)
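One possible set of imperative commands for the workloads the lab asks for is sketched below (the NetworkPolicy itself still has to be written as YAML). The transcript that follows instead solves the whole lab declaratively with a single YAML file, and names the webserver Pod lab-nginx rather than lab9server.

kubectl create ns restricted
kubectl run lab9server -n restricted --image=nginx
kubectl expose pod lab9server -n restricted --port=80
kubectl run sleepybox1 --image=busybox -- sleep 3600
kubectl run sleepybox2 --image=busybox -- sleep 3600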
[root@k8s cka]# cat lesson9lab.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: restricted
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mynp
  namespace: restricted
spec:
  podSelector:
    matchLabels:
      target: "yes"
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          access: "yes"
    ports:
    - protocol: TCP
      port: 80
  egress:
  - {}
---
apiVersion: v1
kind: Pod
metadata:
  name: lab-nginx
  namespace: restricted
  labels:
    target: "yes"
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: sleepybox1
  namespace: default
  labels:
    access: "yes"
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "3600"
---
apiVersion: v1
kind: Pod
metadata:
  name: sleepybox2
  namespace: default
  labels:
    access: "noway"
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "3600"
The first part of this YAML file creates the Namespace restricted. The second part creates the NetworkPolicy mynp, which applies to the restricted Namespace and has the policy types Ingress and Egress. The ingress rule only admits traffic from the default Namespace: only Pods in the default Namespace that carry the label access: "yes" get access, and only to TCP port 80 of the selected Pods. The remaining parts create the Pods. The first creates the Pod lab-nginx (the lab's webserver) in the restricted Namespace with the label target: "yes", so the policy selects it. The second creates the Pod sleepybox1 in the default Namespace with the label access: "yes". The third creates the Pod sleepybox2 in the default Namespace with the label access set to "noway".
[root@k8s cka]# kubectl apply -f lesson9lab.yaml
namespace/restricted created
networkpolicy.networking.k8s.io/mynp created
pod/lab-nginx created
pod/sleepybox1 created
pod/sleepybox2 created
[root@k8s cka]# kubectl get all -n restricted
NAME            READY   STATUS    RESTARTS   AGE
pod/lab-nginx   1/1     Running   0          7s
[root@k8s cka]# kubectl expose pod lab-nginx -n restricted --port=80
service/lab-nginx exposed
[root@k8s cka]# kubectl get all -n restricted
NAME            READY   STATUS    RESTARTS   AGE
pod/lab-nginx   1/1     Running   0          8m31s

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/lab-nginx   ClusterIP   10.110.67.192   <none>        80/TCP    11s
[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx.restricted.svc.cluster.local
wget: bad address 'lab-nginx.restricted.svc.cluster.local'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it lab-nginx -- cat /etc/resolv.conf
Error from server (NotFound): pods "lab-nginx" not found
[root@k8s cka]# kubectl get all -n restricted
NAME            READY   STATUS    RESTARTS   AGE
pod/lab-nginx   1/1     Running   0          11m

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/lab-nginx   ClusterIP   10.110.67.192   <none>        80/TCP    2m45s
[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat /etc/resolv.conf
nameserver 10.96.0.10
search restricted.svc.cluster.local svc.cluster.local cluster.local netico.pl
options ndots:5
[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx.restricted.svc.cluster.local
wget: bad address 'lab-nginx.restricted.svc.cluster.local'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1     localhost
::1           localhost ip6-localhost ip6-loopback
fe00::0       ip6-localnet
fe00::0       ip6-mcastprefix
fe00::1       ip6-allnodes
fe00::2       ip6-allrouters
10.244.0.84   lab-nginx
[root@k8s cka]# kubectl get all -n restricted
NAME            READY   STATUS    RESTARTS   AGE
pod/lab-nginx   1/1     Running   0          14m

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/lab-nginx   ClusterIP   10.110.67.192   <none>        80/TCP    5m47s
[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx.restricted.svc.cluster.local
wget: bad address 'lab-nginx.restricted.svc.cluster.local'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx
wget: bad address 'lab-nginx'
command terminated with exit code 1
[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1     localhost
::1           localhost ip6-localhost ip6-loopback
fe00::0       ip6-localnet
fe00::0       ip6-mcastprefix
fe00::1       ip6-allnodes
fe00::2       ip6-allrouters
10.244.0.84   lab-nginx
[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- echo '10.244.0.84 lab-nginx.restricted.svc.cluster.local' >> /etc/hosts
[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1     localhost
::1           localhost ip6-localhost ip6-loopback
fe00::0       ip6-localnet
fe00::0       ip6-mcastprefix
fe00::1       ip6-allnodes
fe00::2       ip6-allrouters
10.244.0.84   lab-nginx
[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 10.244.0.84
Connecting to 10.244.0.84 (10.244.0.84:80)
remote file exists
[root@k8s cka]# kubectl exec -it sleepybox2 -- wget --spider --timeout=1 10.244.0.84
Connecting to 10.244.0.84 (10.244.0.84:80)
remote file exists
[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 10.110.67.192
Connecting to 10.110.67.192 (10.110.67.192:80)
remote file exists
[root@k8s cka]# kubectl exec -it sleepybox2 -- wget --spider --timeout=1 10.110.67.192
Connecting to 10.110.67.192 (10.110.67.192:80)
remote file exists
[root@k8s cka]# curl 10.110.67.192
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
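Note that in the transcript above both sleepybox1 and sleepybox2 can reach lab-nginx, which suggests that the bridge CNI plugin used in this cluster does not enforce NetworkPolicies; as noted earlier, enforcement depends on the network plugin. On a cluster with a policy-capable plugin such as Calico, one would expect roughly the following (the Service IP is the one from this transcript and will differ elsewhere):

kubectl exec -it sleepybox1 -- wget --spider --timeout=1 10.110.67.192
# remote file exists
kubectl exec -it sleepybox2 -- wget --spider --timeout=1 10.110.67.192
# blocked by the mynp NetworkPolicy, so the request times out
kubectl -n restricted describe networkpolicy mynp
# shows the ingress rule that only admits access=yes Pods from the default Namespace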