Kubernetes Networking

Kubernetes defines a network model that helps provide simplicity and consistency across a range of networking environments and network implementations. The Kubernetes network model provides the foundation for understanding how containers, pods, and services within Kubernetes communicate with each other.


  • The Container Network Interface (CNI) is the common interface used for
    networking when starting kubelet on a worker node
  • The CNI itself doesn’t take care of networking; that is done by the network plugin
  • CNI ensures the pluggable nature of networking, and makes it easy to select between the different network plugins provided by the ecosystem

Exploring CNI Configuration

  • The CNI plugin configuration is in /etc/cni/net.d
  • Some plugins have the complete network setup in this directory
  • Other plugins provide only generic settings here and rely on additional configuration
  • Often, that additional configuration is implemented by Pods
  • Generic CNI documentation is on https://github.com/containernetworking/cni
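As an illustration, a file in /etc/cni/net.d might look like the following. This is a minimal sketch for the standard bridge plugin; file names, plugin types, and values vary per network plugin, and the subnet shown is just an example:

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```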

Kubernetes internal networking consists of two parts. One part is Calico, which provides the Pod network; the cluster network is implemented by the API server. Everything beyond that is just the physical external network.

Service Auto Registration

  • Kubernetes runs the coredns Pods in the kube-system Namespace as
    internal DNS servers
  • These Pods are exposed by the kubedns Service
  • Services register with this kubedns Service
  • Pods are automatically configured with the IP address of the kubedns Service as their DNS resolver
  • As a result, all Pods can access all Services by name
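Inside any Pod, this resolver configuration is visible in /etc/resolv.conf. A typical example is shown below; the nameserver IP is the ClusterIP of the kubedns Service and will differ per cluster:

```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```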

Accessing Services in other Namespaces

  • If a Service is running in the same Namespace, it can be reached by the
    short hostname
  • If a Service is running in another Namespace, an FQDN consisting of servicename.namespace.svc.clustername must be used
  • The clustername is defined in the coredns Corefile and is set to cluster.local if it hasn’t been changed; use kubectl get cm -n kube-system coredns -o yaml to verify
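For reference, the relevant part of a default Corefile looks roughly like this (a sketch of the typical kubeadm default; your cluster’s ConfigMap may differ). The clustername appears as the first argument to the kubernetes plugin:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```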

Accessing Services by Name

  • kubectl run webserver --image=nginx
  • kubectl expose pod webserver --port=80
  • kubectl run testpod --image=busybox -- sleep 3600
  • kubectl get svc
  • kubectl exec -it testpod -- wget webserver

This was easy because everything is in the same Namespace. Between different Namespaces, it gets a little more complex.

Accessing Pods in other Namespaces

  • kubectl create ns remote
  • kubectl run interginx --image=nginx
  • kubectl run remotebox --image=busybox -n remote -- sleep 3600
  • kubectl expose pod interginx --port=80
  • kubectl exec -it remotebox -n remote -- cat /etc/resolv.conf
  • kubectl exec -it remotebox -n remote -- nslookup interginx # fails
  • kubectl exec -it remotebox -n remote -- nslookup interginx.default.svc.cluster.local


Network Policy

  • By default, there are no restrictions to network traffic in K8s
  • Pods can always communicate, even if they’re in other Namespaces
  • To limit this, NetworkPolicies can be used
  • NetworkPolicies need to be supported by the network plugin though
    • The flannel plugin does NOT support NetworkPolicies!
  • If in a policy there is no match, traffic will be denied
  • If no NetworkPolicy is used, all traffic is allowed
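For example, a minimal “default deny all ingress” policy looks like this; the empty podSelector matches all Pods in the Namespace where the policy is created:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Once this policy exists, only traffic explicitly allowed by other NetworkPolicies in the Namespace will reach the Pods.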

Using NetworkPolicy Identifiers

  • In NetworkPolicy, three different identifiers can be used
    • Pods: (podSelector) note that a Pod cannot block access to itself
    • Namespaces: (namespaceSelector) to grant access to specific Namespaces
    • IP blocks: (ipBlock) notice that traffic to and from the node where a Pod is running is always allowed
  • When defining a Pod- or Namespace-based NetworkPolicy, a selector label is used to specify what traffic is allowed to and from the Pods that match the selector
  • NetworkPolicies do not conflict, they are additive
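A sketch combining the three identifier types in one ingress rule (the labels, CIDR, and names here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
```

Note that the three entries under from are separate list items, so they are OR-ed: traffic matching any one of them is allowed.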

Exploring NetworkPolicy

  • kubectl apply -f nwpolicy-complete-example.yaml
  • kubectl expose pod nginx --port=80
  • kubectl exec -it busybox -- wget --spider --timeout=1 nginx will fail
  • kubectl label pod busybox access=true
  • kubectl exec -it busybox -- wget --spider --timeout=1 nginx will work
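The contents of nwpolicy-complete-example.yaml are not reproduced here, but a policy with the behavior demonstrated above could look like the following sketch. It assumes the nginx Pod carries the run=nginx label that kubectl run sets, and allows ingress only from Pods labeled access=true:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
```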

Applying NetworkPolicy to Namespaces

  • To apply a NetworkPolicy to a Namespace, use -n namespace in the
    definition of the NetworkPolicy
  • To allow ingress and egress traffic, use the namespaceSelector to match the traffic
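On recent Kubernetes versions (1.21 and later), every Namespace automatically carries the label kubernetes.io/metadata.name, which is convenient for the namespaceSelector. A sketch that allows ingress into the restricted Namespace only from the default Namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-default
  namespace: restricted
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
```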

Using NetworkPolicy between Namespaces

  • kubectl create ns nwp-namespace
  • kubectl apply -f nwp-lab9-1.yaml
  • kubectl expose pod nwp-nginx --port=80
  • kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx: gives a bad address error
  • kubectl exec -it nwp-busybox -n nwp-namespace -- nslookup nwp-nginx explains that it’s looking in the wrong ns
  • kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local is allowed


Using NetworkPolicy between Namespaces

  • kubectl apply -f nwp-lab9-2.yaml
  • kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local is not allowed
  • kubectl create deployment busybox --image=busybox -- sleep 3600
  • kubectl exec -it busybox[Tab] -- wget --spider --timeout=1 nwp-nginx

This NetworkPolicy denies incoming traffic from all other Namespaces. It only allows traffic that matches a specific podSelector.

The first wget doesn’t work because the NetworkPolicy denies traffic from other Namespaces. The second wget should work because that traffic comes from the same Namespace.

Lab: Using NetworkPolicies

  • Run a webserver with the name lab9server in Namespace restricted, using
    the Nginx image and ensure it is exposed by a Service
  • From the default Namespace start two Pods: sleepybox1 and sleepybox2, each based on the Busybox image using the sleep 3600 command as the command
  • Create a NetworkPolicy that limits Ingress traffic to restricted, in such a way that only the sleepybox1 Pod from the default Namespace has access and all other access is forbidden

The first part of this YAML file creates the Namespace restricted. The second part creates the NetworkPolicy mynp, which applies to the Namespace restricted. The policy types are Ingress and Egress. The Ingress rule applies to traffic from the default Namespace: only Pods in the default Namespace that have the label access set to “yes” get access, and only to target port 80. The next parts create the Pods. The first creates the Pod lab9server in the restricted Namespace, with the access label set to “yes”. The second creates the Pod sleepybox1 in the default Namespace, with the access label set to “yes”. The third creates the Pod sleepybox2 in the default Namespace, with the access label set to “noway”.
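Under those assumptions, the Namespace and NetworkPolicy parts of the file could look roughly like this sketch (the name mynp and the label values come from the description above; combining namespaceSelector and podSelector in a single from entry means both must match):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: restricted
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mynp
  namespace: restricted
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
      podSelector:
        matchLabels:
          access: "yes"
    ports:
    - protocol: TCP
      port: 80
```

The Pod definitions then follow in the same file, with sleepybox1 labeled access: "yes" and sleepybox2 labeled access: "noway", so only sleepybox1 can reach lab9server.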