{"id":5401,"date":"2023-12-02T22:01:45","date_gmt":"2023-12-02T21:01:45","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=5401"},"modified":"2025-05-17T19:16:13","modified_gmt":"2025-05-17T17:16:13","slug":"kubernetes-networking","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/12\/02\/kubernetes-networking\/","title":{"rendered":"Kubernetes Networking"},"content":{"rendered":"<p>Kubernetes defines a network model that helps provide simplicity and consistency across a range of networking environments and network implementations. The Kubernetes network model provides the foundation for understanding how containers, pods, and services within Kubernetes communicate with each other.<!--more--><\/p>\n<p><span style=\"color: #3366ff;\">CNI<\/span><\/p>\n<ul>\n<li>The Container Network Interface (CNI) is the common interface used for<br \/>\nnetworking when starting kubelet on a worker node<\/li>\n<li>The CNI doesn&#8217;t take care of networking, that is done by the network plugin<\/li>\n<li>CNl ensures the pluggable nature of networking, and makes it easy to select between different network plugins provided by the ecosystem<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Exploring CNI Configuration<\/span><\/p>\n<ul>\n<li>The CNI plugin configuration is in <code>\/etc\/cni\/net.d<\/code><\/li>\n<li>Some plugins have the complete network setup in this directory<\/li>\n<li>Other plugins have generic settings, and are using additional configuration<\/li>\n<li>Often, the additional configuration is implemented by Pods<\/li>\n<li>Generic CNI documentation is on <em>https:\/\/github.com\/containernetworking\/cni<\/em><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl get ns\r\nNAME                   STATUS   AGE\r\ndefault                Active   6d6h\r\ningress-nginx          Active   4d1h\r\nkube-node-lease        Active   6d6h\r\nkube-public            Active   6d6h\r\nkube-system            Active   
6d6h\r\nkubernetes-dashboard   Active   5d1h\r\nlimited                Active   8h\r\n\r\n[root@k8s cka]# kubectl get all -n kube-system\r\nNAME                                        READY   STATUS             RESTARTS           AGE\r\npod\/coredns-5dd5756b68-sgfkj                0\/1     CrashLoopBackOff   1426 (3m15s ago)   6d6h\r\npod\/etcd-k8s.example.pl                      1\/1     Running            0                  6d6h\r\npod\/kube-apiserver-k8s.example.pl            1\/1     Running            0                  47h\r\npod\/kube-controller-manager-k8s.example.pl   1\/1     Running            0                  47h\r\npod\/kube-proxy-hgh55                        1\/1     Running            0                  47h\r\npod\/kube-scheduler-k8s.example.pl            1\/1     Running            0                  47h\r\npod\/metrics-server-5f8988d664-7r8j7         1\/1     Running            0                  2d5h\r\npod\/storage-provisioner                     1\/1     Running            8 (47h ago)        6d6h\r\n\r\nNAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE\r\nservice\/kube-dns         ClusterIP   10.96.0.10      &lt;none&gt;        53\/UDP,53\/TCP,9153\/TCP   6d6h\r\nservice\/metrics-server   ClusterIP   10.102.216.61   &lt;none&gt;        443\/TCP                  2d5h\r\n\r\nNAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE\r\ndaemonset.apps\/kube-proxy   1         1         1       1            1           kubernetes.io\/os=linux   6d6h\r\n\r\nNAME                             READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/coredns          0\/1     1            0           6d6h\r\ndeployment.apps\/metrics-server   1\/1     1            1           2d5h\r\n\r\nNAME                                        DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/coredns-5dd5756b68          1         1         0       
6d6h\r\nreplicaset.apps\/metrics-server-5f8988d664   1         1         1       2d5h\r\nreplicaset.apps\/metrics-server-6db4d75b97   0         0         0       2d5h\r\n[root@k8s cka]#\r\n\r\n[root@k8s cka]# ps aux | grep api\r\nroot      875261  4.3  1.8 1041508 298416 ?      Ssl  lut04 125:21 kube-apiserver --advertise-address=172.30.9.24 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=\/var\/lib\/minikube\/certs\/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=\/var\/lib\/minikube\/certs\/etcd\/ca.crt --etcd-certfile=\/var\/lib\/minikube\/certs\/apiserver-etcd-client.crt --etcd-keyfile=\/var\/lib\/minikube\/certs\/apiserver-etcd-client.key --etcd-servers=https:\/\/127.0.0.1:2379 --kubelet-client-certificate=\/var\/lib\/minikube\/certs\/apiserver-kubelet-client.crt --kubelet-client-key=\/var\/lib\/minikube\/certs\/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=\/var\/lib\/minikube\/certs\/front-proxy-client.crt --proxy-client-key-file=\/var\/lib\/minikube\/certs\/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=\/var\/lib\/minikube\/certs\/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https:\/\/kubernetes.default.svc.cluster.local --service-account-key-file=\/var\/lib\/minikube\/certs\/sa.pub --service-account-signing-key-file=\/var\/lib\/minikube\/certs\/sa.key --service-cluster-ip-range=10.96.0.0\/12 --tls-cert-file=\/var\/lib\/minikube\/certs\/apiserver.crt --tls-private-key-file=\/var\/lib\/minikube\/certs\/apiserver.key\r\nroot     1399886  0.0  
0.0  12216  1080 pts\/0    S+   16:57   0:00 grep --color=auto api\r\n[root@k8s cka]# ls \/etc\/cni\/net.d\r\n100-crio-bridge.conf.mk_disabled  1-k8s.conflist  200-loopback.conf\r\n\r\n[root@k8s cka]# cat \/etc\/cni\/net.d\/1-k8s.conflist\r\n\r\n{\r\n  \"cniVersion\": \"0.3.1\",\r\n  \"name\": \"bridge\",\r\n  \"plugins\": [\r\n    {\r\n      \"type\": \"bridge\",\r\n      \"bridge\": \"bridge\",\r\n      \"addIf\": \"true\",\r\n      \"isDefaultGateway\": true,\r\n      \"forceAddress\": false,\r\n      \"ipMasq\": true,\r\n      \"hairpinMode\": true,\r\n      \"ipam\": {\r\n          \"type\": \"host-local\",\r\n          \"subnet\": \"10.244.0.0\/16\"\r\n      }\r\n    },\r\n    {\r\n      \"type\": \"portmap\",\r\n      \"capabilities\": {\r\n          \"portMappings\": true\r\n      }\r\n    }\r\n  ]\r\n}\r\n\r\n[root@k8s cka]# cat \/etc\/cni\/net.d\/100-crio-bridge.conf.mk_disabled\r\n{\r\n    \"cniVersion\": \"0.3.1\",\r\n    \"name\": \"crio\",\r\n    \"type\": \"bridge\",\r\n    \"bridge\": \"cni0\",\r\n    \"isGateway\": true,\r\n    \"ipMasq\": true,\r\n    \"hairpinMode\": true,\r\n    \"ipam\": {\r\n        \"type\": \"host-local\",\r\n        \"routes\": [\r\n            { \"dst\": \"0.0.0.0\/0\" },\r\n            { \"dst\": \"1100:200::1\/24\" }\r\n        ],\r\n        \"ranges\": [\r\n            [{ \"subnet\": \"10.85.0.0\/16\" }],\r\n            [{ \"subnet\": \"1100:200::\/24\" }]\r\n        ]\r\n    }\r\n}\r\n\r\n[root@k8s cka]# cat \/etc\/cni\/net.d\/200-loopback.conf\r\n{\r\n    \"cniVersion\": \"1.0.0\",\r\n    \"name\": \"loopback\",\r\n    \"type\": \"loopback\"\r\n}\r\n<\/pre>\n<p>Kubernetes internal networking comes in two parts. One part is the network plugin, such as Calico or the bridge plugin shown above, which provides the Pod network; the other, the cluster (Service) network, is implemented by the API server. 
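Both address ranges can be checked on the cluster itself; a quick sketch, assuming a running cluster like the kubeadm\/minikube-style setup shown above:<\/p>\n<pre class=\"lang:default decode:true\"># Pod network: the per-node podCIDR allocated by the controller manager\r\nkubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'\r\n\r\n# Service (cluster) network: taken from the kube-apiserver options\r\nps aux | grep -o 'service-cluster-ip-range=[^ ]*'\r\n<\/pre>\n<p>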
The rest is just the physical external network.<\/p>\n<p><span style=\"color: #3366ff;\">Service Auto Registration<\/span><\/p>\n<ul>\n<li>Kubernetes runs the coredns Pods in the kube-system Namespace as<br \/>\ninternal DNS servers<\/li>\n<li>These Pods are exposed by the kube-dns Service<\/li>\n<li>Services register with this kube-dns Service<\/li>\n<li>Pods are automatically configured with the IP address of the kube-dns Service as their DNS resolver<\/li>\n<li>As a result, all Pods can access all Services by name<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Accessing Services in other Namespaces<\/span><\/p>\n<ul>\n<li>If a Service is running in the same Namespace, it can be reached by the<br \/>\nshort hostname<\/li>\n<li>If a Service is running in another Namespace, an FQDN consisting of servicename.namespace.svc.clustername must be used<\/li>\n<li>The clustername is defined in the coredns Corefile and set to cluster.local if it hasn&#8217;t been changed, use <code>kubectl get cm -n kube-system coredns -o yaml<\/code> to verify<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Accessing Services by Name<\/span><\/p>\n<ul>\n<li><code>kubectl run webserver --image=nginx<\/code><\/li>\n<li><code>kubectl expose pod webserver --port=80<\/code><\/li>\n<li><code>kubectl run testpod --image=busybox -- sleep 3600<\/code><\/li>\n<li><code>kubectl get svc<\/code><\/li>\n<li><code>kubectl exec -it testpod -- wget webserver<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl run webserver --image=nginx\r\npod\/webserver created\r\n\r\n[root@k8s cka]# kubectl expose pod webserver --port=80\r\nservice\/webserver exposed\r\n\r\n[root@k8s cka]# kubectl get all\r\nNAME                               READY   STATUS    RESTARTS          AGE\r\npod\/antinginx                      1\/1     Running   0                 31h\r\npod\/deploydaemon-zzllp             1\/1     Running   0                 5d1h\r\npod\/firstnginx-d8679d567-249g9     
1\/1     Running   0                 6d2h\r\npod\/firstnginx-d8679d567-66c4s     1\/1     Running   0                 6d2h\r\npod\/firstnginx-d8679d567-72qbd     1\/1     Running   0                 6d2h\r\npod\/firstnginx-d8679d567-rhhlz     1\/1     Running   0                 5d9h\r\npod\/init-demo                      1\/1     Running   0                 5d11h\r\npod\/lab4-pod                       1\/1     Running   0                 4d8h\r\npod\/morevol                        2\/2     Running   234 (7m12s ago)   4d21h\r\npod\/mydaemon-d4dcd                 1\/1     Running   0                 5d1h\r\npod\/mystaticpod-k8s.netico.pl      1\/1     Running   0                 2d12h\r\npod\/nginx                          1\/1     Running   0                 33h\r\npod\/nginx-hdd                      1\/1     Running   0                 11h\r\npod\/nginx-ssd                      1\/1     Running   0                 12h\r\npod\/nginx-taint-68bd5db674-7skqs   1\/1     Running   0                 12h\r\npod\/nginx-taint-68bd5db674-vjq89   1\/1     Running   0                 12h\r\npod\/nginx-taint-68bd5db674-vqz2z   1\/1     Running   0                 12h\r\npod\/nginxsvc-5f8b7d4f4d-dtrs7      1\/1     Running   0                 4d2h\r\npod\/pv-pod                         1\/1     Running   0                 4d20h\r\npod\/redis-cache-8478cbdc86-cfsmz   0\/1     Pending   0                 29h\r\npod\/redis-cache-8478cbdc86-kr8qr   0\/1     Pending   0                 29h\r\npod\/redis-cache-8478cbdc86-w2swz   1\/1     Running   0                 29h\r\npod\/sleepy                         1\/1     Running   121 (18m ago)     5d12h\r\npod\/testpod                        1\/1     Running   0                 6d2h\r\npod\/two-containers                 2\/2     Running   714 (7m30s ago)   5d9h\r\npod\/web-0                          1\/1     Running   0                 5d14h\r\npod\/web-1                          1\/1     Running   0                 5d1h\r\npod\/web-2    
                      1\/1     Running   0                 5d1h\r\npod\/web-server-55f57c89d4-25qhr    0\/1     Pending   0                 28h\r\npod\/web-server-55f57c89d4-crtfn    1\/1     Running   0                 28h\r\npod\/web-server-55f57c89d4-vl4p5    0\/1     Pending   0                 28h\r\npod\/webserver                      1\/1     Running   0                 81s\r\npod\/webserver-76d44586d-8gqhf      1\/1     Running   0                 4d9h\r\npod\/webshop-7f9fd49d4c-92nj2       1\/1     Running   0                 4d4h\r\npod\/webshop-7f9fd49d4c-kqllw       1\/1     Running   0                 4d4h\r\npod\/webshop-7f9fd49d4c-x2czc       1\/1     Running   0                 4d4h\r\n\r\nNAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\r\nservice\/apples       ClusterIP   10.101.6.55      &lt;none&gt;        80\/TCP         4d\r\nservice\/kubernetes   ClusterIP   10.96.0.1        &lt;none&gt;        443\/TCP        6d7h\r\nservice\/newdep       ClusterIP   10.100.68.120    &lt;none&gt;        8080\/TCP       4d1h\r\nservice\/nginx        ClusterIP   None             &lt;none&gt;        80\/TCP         5d14h\r\nservice\/nginxsvc     ClusterIP   10.104.155.180   &lt;none&gt;        80\/TCP         4d2h\r\nservice\/webserver    ClusterIP   10.109.5.62      &lt;none&gt;        80\/TCP         64s\r\nservice\/webshop      NodePort    10.109.119.90    &lt;none&gt;        80:32064\/TCP   4d3h\r\n\r\nNAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE\r\ndaemonset.apps\/deploydaemon   1         1         1       1            1           &lt;none&gt;          5d1h\r\ndaemonset.apps\/mydaemon       1         1         1       1            1           &lt;none&gt;          6d1h\r\n\r\nNAME                          READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/firstnginx    4\/4     4            4           6d2h\r\ndeployment.apps\/nginx-taint   3\/3     3          
  3           12h\r\ndeployment.apps\/nginxsvc      1\/1     1            1           4d2h\r\ndeployment.apps\/redis-cache   1\/3     3            1           29h\r\ndeployment.apps\/web-server    1\/3     3            1           28h\r\ndeployment.apps\/webserver     1\/1     1            1           4d9h\r\ndeployment.apps\/webshop       3\/3     3            3           4d4h\r\n\r\nNAME                                     DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/firstnginx-d8679d567     4         4         4       6d2h\r\nreplicaset.apps\/nginx-taint-68bd5db674   3         3         3       12h\r\nreplicaset.apps\/nginxsvc-5f8b7d4f4d      1         1         1       4d2h\r\nreplicaset.apps\/redis-cache-8478cbdc86   3         3         1       29h\r\nreplicaset.apps\/web-server-55f57c89d4    3         3         1       28h\r\nreplicaset.apps\/webserver-667ddc69b6     0         0         0       4d9h\r\nreplicaset.apps\/webserver-76d44586d      1         1         1       4d9h\r\nreplicaset.apps\/webshop-7f9fd49d4c       3         3         3       4d4h\r\n\r\nNAME                   READY   AGE\r\nstatefulset.apps\/web   3\/3     5d14h\r\n\r\n[root@k8s cka]# kubectl run testpod --image=busybox -- sleep 3600\r\npod\/testpod created\r\n\r\n[root@k8s cka]# kubectl get svc\r\nNAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\r\napples       ClusterIP   10.101.6.55      &lt;none&gt;        80\/TCP         4d\r\nkubernetes   ClusterIP   10.96.0.1        &lt;none&gt;        443\/TCP        6d7h\r\nnewdep       ClusterIP   10.100.68.120    &lt;none&gt;        8080\/TCP       4d1h\r\nnginx        ClusterIP   None             &lt;none&gt;        80\/TCP         5d14h\r\nnginxsvc     ClusterIP   10.104.155.180   &lt;none&gt;        80\/TCP         4d2h\r\nwebserver    ClusterIP   10.109.5.62      &lt;none&gt;        80\/TCP         105s\r\nwebshop      NodePort    10.109.119.90    &lt;none&gt;        80:32064\/TCP   4d3h\r\n\r\n[root@k8s 
cka]# kubectl exec -it testpod -- wget webserver\r\nwget: bad address 'webserver'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it testpod -- wget 10.109.5.62\r\nConnecting to 10.109.5.62 (10.109.5.62:80)\r\nsaving to 'index.html'\r\nindex.html           100% |********************************|   615  0:00:00 ETA\r\n'index.html' saved\r\n<\/pre>\n<p>That was easy, because it is all in the same namespace. Between different namespaces it gets a little bit more complex.<\/p>\n<p><span style=\"color: #3366ff;\">Accessing Pods in other Namespaces<\/span><\/p>\n<ul>\n<li><code>kubectl create ns remote<\/code><\/li>\n<li><code>kubectl run interginx --image=nginx<\/code><\/li>\n<li><code>kubectl run remotebox --image=busybox -n remote -- sleep 3600<\/code><\/li>\n<li><code>kubectl expose pod interginx --port=80<\/code><\/li>\n<li><code>kubectl exec -it remotebox -n remote -- cat \/etc\/resolv.conf<\/code><\/li>\n<li><code>kubectl exec -it remotebox -n remote -- nslookup interginx <\/code># fails<\/li>\n<li><code>kubectl exec -it remotebox -n remote -- nslookup interginx.default.svc.cluster.local<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl create ns remote\r\nnamespace\/remote created\r\n\r\n[root@k8s cka]# kubectl run interginx --image=nginx\r\npod\/interginx created\r\n\r\n[root@k8s cka]# kubectl run remotebox --image=busybox -n remote -- sleep 3600\r\npod\/remotebox created\r\n\r\n[root@k8s cka]# kubectl expose pod interginx --port=80\r\nservice\/interginx exposed\r\n\r\n[root@k8s cka]# kubectl exec -it remotebox -n remote -- cat \/etc\/resolv.conf\r\nnameserver 10.96.0.10\r\nsearch remote.svc.cluster.local svc.cluster.local cluster.local netico.pl\r\noptions ndots:5\r\n\r\n[root@k8s cka]# kubectl exec -it remotebox -n remote -- nslookup interginx\r\nnslookup: write to '10.96.0.10': Connection refused\r\n;; connection timed out; no servers could be reached\r\n\r\ncommand terminated with 
exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it remotebox -n remote -- nslookup interginx.default.svc.cluster.local\r\nnslookup: write to '10.96.0.10': Connection refused\r\n;; connection timed out; no servers could be reached\r\n\r\ncommand terminated with exit code 1\r\n<\/pre>\n<p><span style=\"color: #3366ff;\">Network Policy<\/span><\/p>\n<ul>\n<li>By default, there are no restrictions to network traffic in K8s<\/li>\n<li>Pods can always communicate, even if they&#8217;re in other Namespaces<\/li>\n<li>To limit this, NetworkPolicies can be used<\/li>\n<li>NetworkPolicies need to be supported by the network plugin though\n<ul>\n<li>The flannel plugin, for instance, does NOT support NetworkPolicies!<\/li>\n<\/ul>\n<\/li>\n<li>If in a policy there is no match, traffic will be denied<\/li>\n<li>If no NetworkPolicy is used, all traffic is allowed<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Using NetworkPolicy Identifiers<\/span><\/p>\n<ul>\n<li>In NetworkPolicy, three different identifiers can be used\n<ul>\n<li><em>Pods<\/em>: (podSelector) note that a Pod cannot block access to itself<\/li>\n<li><em>Namespaces<\/em>: (namespaceSelector) to grant access to specific Namespaces<\/li>\n<li><em>IP blocks<\/em>: (ipBlock) notice that traffic to and from the node where a Pod is running is always allowed<\/li>\n<\/ul>\n<\/li>\n<li>When defining a Pod- or Namespace-based NetworkPolicy, a selector label is used to specify what traffic is allowed to and from the Pods that match the selector<\/li>\n<li>NetworkPolicies do not conflict, they are additive<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Exploring NetworkPolicy<\/span><\/p>\n<ul>\n<li><code>kubectl apply -f nwpolicy-complete-example.yaml<\/code><\/li>\n<li><code>kubectl expose pod nginx --port=80<\/code><\/li>\n<li><code>kubectl exec -it busybox -- wget --spider --timeout=1 nginx <\/code>will fail<\/li>\n<li><code>kubectl label pod busybox access=true<\/code><\/li>\n<li><code>kubectl exec -it busybox 
-- wget --spider --timeout=1 nginx <\/code>will work<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# cat nwpolicy-complete-example.yaml\r\napiVersion: networking.k8s.io\/v1\r\nkind: NetworkPolicy\r\nmetadata:\r\n  name: access-nginx\r\nspec:\r\n  podSelector:\r\n    matchLabels:\r\n      app: nginx\r\n  ingress:\r\n  - from:\r\n    - podSelector:\r\n        matchLabels:\r\n          access: \"true\"\r\n...\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx\r\n  labels:\r\n    app: nginx\r\nspec:\r\n  containers:\r\n  - name: nwp-nginx\r\n    image: nginx\r\n...\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: busybox\r\n  labels:\r\n    app: sleepy\r\nspec:\r\n  containers:\r\n  - name: nwp-busybox\r\n    image: busybox\r\n    command:\r\n    - sleep\r\n    - \"3600\"\r\n\r\n[root@k8s cka]# kubectl apply -f nwpolicy-complete-example.yaml\r\nnetworkpolicy.networking.k8s.io\/access-nginx created\r\npod\/nginx created\r\npod\/busybox created\r\n\r\n[root@k8s cka]# kubectl expose pod nginx --port=80\r\nservice\/nginx exposed\r\n\r\n[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 nginx\r\nwget: bad address 'nginx'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl get svc\r\nNAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE\r\napples       ClusterIP   10.101.6.55      &lt;none&gt;        80\/TCP         4d12h\r\ninterginx    ClusterIP   10.102.130.239   &lt;none&gt;        80\/TCP         10h\r\nkubernetes   ClusterIP   10.96.0.1        &lt;none&gt;        443\/TCP        6d18h\r\nnewdep       ClusterIP   10.100.68.120    &lt;none&gt;        8080\/TCP       4d12h\r\nnginx        ClusterIP   10.107.22.126    &lt;none&gt;        80\/TCP         85s\r\nnginxsvc     ClusterIP   10.104.155.180   &lt;none&gt;        80\/TCP         4d13h\r\nwebserver    ClusterIP   10.109.5.62      &lt;none&gt;        80\/TCP         
11h\r\nwebshop      NodePort    10.109.119.90    &lt;none&gt;        80:32064\/TCP   4d14h\r\n\r\n[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 10.107.22.126\r\nConnecting to 10.107.22.126 (10.107.22.126:80)\r\nwget: server returned error: HTTP\/1.1 403 Forbidden\r\ncommand terminated with exit code 1\r\n[root@k8s cka]# kubectl label pod busybox access=true\r\npod\/busybox labeled\r\n\r\n[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 10.107.22.126\r\nConnecting to 10.107.22.126 (10.107.22.126:80)\r\nwget: server returned error: HTTP\/1.1 403 Forbidden\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl describe networkpolicy access-nginx\r\nName:         access-nginx\r\nNamespace:    default\r\nCreated on:   2024-02-07 04:40:26 -0500 EST\r\nLabels:       &lt;none&gt;\r\nAnnotations:  &lt;none&gt;\r\nSpec:\r\n  PodSelector:     app=nginx\r\n  Allowing ingress traffic:\r\n    To Port: &lt;any&gt; (traffic allowed to all ports)\r\n    From:\r\n      PodSelector: access=true\r\n  Not affecting egress traffic\r\n  Policy Types: Ingress\r\n\r\n[root@k8s cka]# kubectl exec -it busybox -- wget --spider --timeout=1 10.107.22.126\r\nConnecting to 10.107.22.126 (10.107.22.126:80)\r\nwget: server returned error: HTTP\/1.1 403 Forbidden\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it busybox -- curl 10.107.22.126\r\nOCI runtime exec failed: exec failed: unable to start container process: exec: \"curl\": executable file not found in $PATH: unknown\r\ncommand terminated with exit code 126\r\n\r\n<\/pre>\n<p><span style=\"color: #3366ff;\">Applying NetworkPolicy to Namespaces<\/span><\/p>\n<ul>\n<li>To apply a NetworkPolicy to a Namespace, use <code>-n namespace<\/code> in the<br \/>\ndefinition of the NetworkPolicy<\/li>\n<li>To allow ingress and egress traffic, use the namespaceSelector to match the traffic<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Using NetworkPolicy between 
Namespaces<\/span><\/p>\n<ul>\n<li><code>kubectl create ns nwp-namespace<\/code><\/li>\n<li><code>kubectl apply -f nwp-lab9-1.yaml<\/code><\/li>\n<li><code>kubectl expose pod nwp-nginx --port=80<\/code><\/li>\n<li><code>kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx <\/code>gives a bad address error<\/li>\n<li><code>kubectl exec -it nwp-busybox -n nwp-namespace -- nslookup nwp-nginx <\/code>explains that it&#8217;s looking in the wrong ns<\/li>\n<li><code>kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local <\/code>is allowed<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@k8s cka]#  kubectl create ns nwp-namespace\r\nnamespace\/nwp-namespace created\r\n\r\n[root@k8s cka]# cat nwp-lab9-1.yaml\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nwp-nginx\r\n  namespace: default\r\n  labels:\r\n    app: nwp-nginx\r\nspec:\r\n  containers:\r\n  - name: nwp-nginx\r\n    image: nginx\r\n...\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nwp-busybox\r\n  namespace: nwp-namespace\r\n  labels:\r\n    app: sleepy\r\nspec:\r\n  containers:\r\n  - name: nwp-busybox\r\n    image: busybox\r\n    command:\r\n    - sleep\r\n    - \"3600\"\r\n[root@k8s cka]#  kubectl apply -f nwp-lab9-1.yaml\r\npod\/nwp-nginx created\r\npod\/nwp-busybox created\r\n\r\n[root@k8s cka]# kubectl expose pod nwp-nginx --port=80\r\nservice\/nwp-nginx exposed\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx\r\nwget: bad address 'nwp-nginx'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- nslookup nwp-nginx\r\nnslookup: write to '10.96.0.10': Connection refused\r\n;; connection timed out; no servers could be reached\r\n\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- timeout=1 
nwp-nginx.default.svc.cluster.local\r\nOCI runtime exec failed: exec failed: unable to start container process: exec: \"timeout=1\": executable file not found in $PATH: unknown\r\ncommand terminated with exit code 126\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 timeout=1 nwp-nginx.default.svc.cluster.local\r\nwget: bad address 'timeout=1'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local\r\nwget: bad address 'nwp-nginx.default.svc.cluster.local'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- ping  nwp-nginx.default.svc.cluster.local\r\nping: bad address 'nwp-nginx.default.svc.cluster.local'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]#  kubectl exec -it nwp-busybox -n nwp-namespace -- cat \/etc\/resolv.conf\r\nnameserver 10.96.0.10\r\nsearch nwp-namespace.svc.cluster.local svc.cluster.local cluster.local netico.pl\r\n<\/pre>\n<p><span style=\"color: #3366ff;\">Using NetworkPolicy between Namespaces<\/span><\/p>\n<ul>\n<li><code>kubectl apply -f nwp-lab9-2.yaml<\/code><\/li>\n<li><code>kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local <\/code>is not allowed<\/li>\n<li><code>kubectl create deployment busybox --image=busybox -- sleep 3600<\/code><\/li>\n<li><code>kubectl exec -it busybox[Tab] -- wget --spider --timeout=1 nwp-nginx<\/code><\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@k8s cka]# cat  nwp-lab9-2.yaml\r\nkind: NetworkPolicy\r\napiVersion: networking.k8s.io\/v1\r\nmetadata:\r\n  namespace: default\r\n  name: deny-from-other-namespaces\r\nspec:\r\n  podSelector:\r\n    matchLabels:\r\n  ingress:\r\n  - from:\r\n    - podSelector: {}\r\n<\/pre>\n<p>This network policy is going to deny incoming traffic from all 
other namespaces. The policy only allows traffic that matches a specific pod selector, and the empty <code>podSelector<\/code> in the from section matches only Pods in the policy&#8217;s own namespace.<\/p>\n<pre class=\"lang:default decode:true\">[root@k8s cka]# kubectl apply -f nwp-lab9-2.yaml\r\nnetworkpolicy.networking.k8s.io\/deny-from-other-namespaces created\r\n\r\n[root@k8s cka]# kubectl exec -it nwp-busybox -n nwp-namespace -- wget --spider --timeout=1 nwp-nginx.default.svc.cluster.local\r\nwget: bad address 'nwp-nginx.default.svc.cluster.local'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl create deployment busybox --image=busybox -- sleep 3600\r\ndeployment.apps\/busybox created\r\n\r\n[root@k8s cka]# kubectl  get pods\r\nNAME                           READY   STATUS    RESTARTS        AGE\r\nantinginx                      1\/1     Running   0               44h\r\nbusybox                        1\/1     Running   1 (50m ago)     110m\r\nbusybox-6fc6c44c5b-xmmxd       1\/1     Running   0               48s\r\ndeploydaemon-zzllp             1\/1     Running   0               5d14h\r\nfirstnginx-d8679d567-249g9     1\/1     Running   0               6d15h\r\nfirstnginx-d8679d567-66c4s     1\/1     Running   0               6d15h\r\nfirstnginx-d8679d567-72qbd     1\/1     Running   0               6d15h\r\nfirstnginx-d8679d567-rhhlz     1\/1     Running   0               5d22h\r\ninit-demo                      1\/1     Running   0               6d\r\ninterginx                      1\/1     Running   0               12h\r\nlab4-pod                       1\/1     Running   0               4d20h\r\nmorevol                        2\/2     Running   260 (3m ago)    5d10h\r\nmydaemon-d4dcd                 1\/1     Running   0               5d14h\r\nmystaticpod-k8s.netico.pl      1\/1     Running   0               3d\r\nnginx                          1\/1     Running   0               110m\r\nnginx-hdd                      1\/1     Running   0               24h\r\nnginx-ssd                      1\/1     Running   0               
25h\r\nnginx-taint-68bd5db674-7skqs   1\/1     Running   0               25h\r\nnginx-taint-68bd5db674-vjq89   1\/1     Running   0               25h\r\nnginx-taint-68bd5db674-vqz2z   1\/1     Running   0               25h\r\nnginxsvc-5f8b7d4f4d-dtrs7      1\/1     Running   0               4d15h\r\nnwp-nginx                      1\/1     Running   0               32m\r\npv-pod                         1\/1     Running   0               5d9h\r\nredis-cache-8478cbdc86-cfsmz   0\/1     Pending   0               42h\r\nredis-cache-8478cbdc86-kr8qr   0\/1     Pending   0               42h\r\nredis-cache-8478cbdc86-w2swz   1\/1     Running   0               42h\r\nsleepy                         1\/1     Running   134 (13m ago)   6d1h\r\ntestpod                        1\/1     Running   12 (55m ago)    12h\r\ntwo-containers                 2\/2     Running   792 (57s ago)   5d22h\r\nweb-0                          1\/1     Running   0               6d3h\r\nweb-1                          1\/1     Running   0               5d14h\r\nweb-2                          1\/1     Running   0               5d14h\r\nweb-server-55f57c89d4-25qhr    0\/1     Pending   0               41h\r\nweb-server-55f57c89d4-crtfn    1\/1     Running   0               41h\r\nweb-server-55f57c89d4-vl4p5    0\/1     Pending   0               41h\r\nwebserver                      1\/1     Running   0               12h\r\nwebserver-76d44586d-8gqhf      1\/1     Running   0               4d21h\r\nwebshop-7f9fd49d4c-92nj2       1\/1     Running   0               4d17h\r\nwebshop-7f9fd49d4c-kqllw       1\/1     Running   0               4d17h\r\nwebshop-7f9fd49d4c-x2czc       1\/1     Running   0               4d17h\r\n\r\n[root@k8s cka]# kubectl exec -it busybox-6fc6c44c5b-xmmxd -- wget --spider --timeout=1 nwp-nginx\r\nwget: bad address 'nwp-nginx'\r\ncommand terminated with exit code 1\r\n<\/pre>\n<p>The first wget doesn&#8217;t work because the network policy denies traffic from other namespaces. 
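If the cross-namespace traffic should be allowed again, a namespaceSelector can be added to the policy. A minimal sketch (the policy name <em>allow-from-nwp-namespace<\/em> is made up; it relies on the automatic kubernetes.io\/metadata.name label that Kubernetes sets on every Namespace):<\/p>\n<pre class=\"lang:default decode:true \">kind: NetworkPolicy\r\napiVersion: networking.k8s.io\/v1\r\nmetadata:\r\n  namespace: default\r\n  name: allow-from-nwp-namespace\r\nspec:\r\n  podSelector: {}\r\n  ingress:\r\n  - from:\r\n    - namespaceSelector:\r\n        matchLabels:\r\n          kubernetes.io\/metadata.name: nwp-namespace\r\n<\/pre>\n<p>Because NetworkPolicies are additive, this policy would allow traffic from nwp-namespace even with deny-from-other-namespaces still in place. 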
The second wget should work because the traffic comes from the same namespace.<\/p>\n<p><span style=\"color: #3366ff;\">Lab: Using NetworkPolicies<\/span><\/p>\n<ul>\n<li>Run a webserver with the name lab9server in Namespace restricted, using<br \/>\nthe Nginx image and ensure it is exposed by a Service<\/li>\n<li>From the default Namespace start two Pods: sleepybox1 and sleepybox2, each based on the Busybox image using the sleep 3600 command as the command<\/li>\n<li>Create a NetworkPolicy that limits Ingress traffic to restricted, in such a way that only the sleepybox1 Pod from the default Namespace has access and all other access is forbidden<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true \">[root@k8s cka]# cat lesson9lab.yaml\r\napiVersion: v1\r\nkind: Namespace\r\nmetadata:\r\n  name: restricted\r\n\r\n---\r\napiVersion: networking.k8s.io\/v1\r\nkind: NetworkPolicy\r\nmetadata:\r\n  name: mynp\r\n  namespace: restricted\r\nspec:\r\n  podSelector:\r\n    matchLabels:\r\n      target: \"yes\"\r\n  policyTypes:\r\n  - Ingress\r\n  - Egress\r\n  ingress:\r\n  - from:\r\n    - namespaceSelector:\r\n        matchLabels:\r\n          kubernetes.io\/metadata.name: default\r\n      podSelector:\r\n        matchLabels:\r\n          access: \"yes\"\r\n    ports:\r\n    - protocol: TCP\r\n      port: 80\r\n  egress:\r\n  - {}\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: lab-nginx\r\n  namespace: restricted\r\n  labels:\r\n    target: \"yes\"\r\nspec:\r\n  containers:\r\n    - name: nginx\r\n      image: nginx\r\n      ports:\r\n        - containerPort: 80\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: sleepybox1\r\n  namespace: default\r\n  labels:\r\n    access: \"yes\"\r\nspec:\r\n  containers:\r\n    - name: busybox\r\n      image: busybox\r\n      args:\r\n        - sleep\r\n        - \"3600\"\r\n\r\n---\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: sleepybox2\r\n  namespace: default\r\n  labels:\r\n    access: 
\"noway\"\r\nspec:\r\n  containers:\r\n    - name: busybox\r\n      image: busybox\r\n      args:\r\n        - sleep\r\n        - \"3600\"\r\n<\/pre>\n<p>The first part of this YAML file creates the namespace <em>restricted<\/em>. The second part creates the NetworkPolicy <em>mynp<\/em>, which applies to Pods in the restricted namespace that carry the label target: &#8220;yes&#8221;. The policy types are Ingress and Egress. The single ingress rule combines a namespaceSelector and a podSelector, so it only admits traffic from Pods in the default namespace that also have the label access: &#8220;yes&#8221;, and only to TCP port 80; the empty egress rule leaves all outgoing traffic open. The remaining parts create the Pods: <em>lab-nginx<\/em> in the restricted namespace with the target label set to &#8220;yes&#8221;, <em>sleepybox1<\/em> in the default namespace with the access label set to &#8220;yes&#8221;, and <em>sleepybox2<\/em> in the default namespace with the access label set to &#8220;noway&#8221;.<\/p>\n<pre class=\"lang:default decode:true \">[root@k8s cka]# kubectl apply -f lesson9lab.yaml\r\nnamespace\/restricted created\r\nnetworkpolicy.networking.k8s.io\/mynp created\r\npod\/lab-nginx created\r\npod\/sleepybox1 created\r\npod\/sleepybox2 created\r\n\r\n[root@k8s cka]# kubectl get all -n restricted\r\nNAME            READY   STATUS    RESTARTS   AGE\r\npod\/lab-nginx   1\/1     Running   0          7s\r\n\r\n[root@k8s cka]# kubectl expose pod lab-nginx -n restricted --port=80\r\nservice\/lab-nginx exposed\r\n\r\n[root@k8s cka]# kubectl get all -n restricted\r\nNAME            READY   STATUS    RESTARTS   AGE\r\npod\/lab-nginx   1\/1     Running   0          8m31s\r\n\r\nNAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nservice\/lab-nginx   ClusterIP   10.110.67.192   &lt;none&gt;        80\/TCP    11s\r\n\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx.restricted.svc.cluster.local\r\nwget: bad address 
'lab-nginx.restricted.svc.cluster.local'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it lab-nginx -- cat \/etc\/resolv.conf\r\nError from server (NotFound): pods \"lab-nginx\" not found\r\n\r\n[root@k8s cka]# kubectl get all -n restricted\r\nNAME            READY   STATUS    RESTARTS   AGE\r\npod\/lab-nginx   1\/1     Running   0          11m\r\n\r\nNAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nservice\/lab-nginx   ClusterIP   10.110.67.192   &lt;none&gt;        80\/TCP    2m45s\r\n\r\n[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat \/etc\/resolv.conf\r\nnameserver 10.96.0.10\r\nsearch restricted.svc.cluster.local svc.cluster.local cluster.local netico.pl\r\noptions ndots:5\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx.restricted.svc.cluster.local\r\nwget: bad address 'lab-nginx.restricted.svc.cluster.local'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat \/etc\/hosts\r\n# Kubernetes-managed hosts file.\r\n127.0.0.1       localhost\r\n::1     localhost ip6-localhost ip6-loopback\r\nfe00::0 ip6-localnet\r\nfe00::0 ip6-mcastprefix\r\nfe00::1 ip6-allnodes\r\nfe00::2 ip6-allrouters\r\n10.244.0.84     lab-nginx\r\n\r\n[root@k8s cka]# kubectl get all -n restricted\r\nNAME            READY   STATUS    RESTARTS   AGE\r\npod\/lab-nginx   1\/1     Running   0          14m\r\n\r\nNAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\r\nservice\/lab-nginx   ClusterIP   10.110.67.192   &lt;none&gt;        80\/TCP    5m47s\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx.restricted.svc.cluster.local\r\nwget: bad address 'lab-nginx.restricted.svc.cluster.local'\r\ncommand terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 lab-nginx\r\nwget: bad address 'lab-nginx'\r\ncommand 
terminated with exit code 1\r\n\r\n[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat \/etc\/hosts\r\n# Kubernetes-managed hosts file.\r\n127.0.0.1       localhost\r\n::1     localhost ip6-localhost ip6-loopback\r\nfe00::0 ip6-localnet\r\nfe00::0 ip6-mcastprefix\r\nfe00::1 ip6-allnodes\r\nfe00::2 ip6-allrouters\r\n10.244.0.84     lab-nginx\r\n[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- echo '10.244.0.84     lab-nginx.restricted.svc.cluster.local' &gt;&gt; \/etc\/hosts\r\n[root@k8s cka]# kubectl exec -it lab-nginx -n restricted -- cat \/etc\/hosts\r\n# Kubernetes-managed hosts file.\r\n127.0.0.1       localhost\r\n::1     localhost ip6-localhost ip6-loopback\r\nfe00::0 ip6-localnet\r\nfe00::0 ip6-mcastprefix\r\nfe00::1 ip6-allnodes\r\nfe00::2 ip6-allrouters\r\n10.244.0.84     lab-nginx\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 10.244.0.84\r\nConnecting to 10.244.0.84 (10.244.0.84:80)\r\nremote file exists\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox2 -- wget --spider --timeout=1 10.244.0.84\r\nConnecting to 10.244.0.84 (10.244.0.84:80)\r\nremote file exists\r\n\r\n[root@k8s cka]# kubectl exec -it sleepybox1 -- wget --spider --timeout=1 10.110.67.192\r\nConnecting to 10.110.67.192 (10.110.67.192:80)\r\nremote file exists\r\n[root@k8s cka]# kubectl exec -it sleepybox2 -- wget --spider --timeout=1 10.110.67.192\r\nConnecting to 10.110.67.192 (10.110.67.192:80)\r\nremote file exists\r\n\r\n[root@k8s cka]# curl 10.110.67.192\r\n&lt;!DOCTYPE html&gt;\r\n&lt;html&gt;\r\n&lt;head&gt;\r\n&lt;title&gt;Welcome to nginx!&lt;\/title&gt;\r\n&lt;style&gt;\r\nhtml { color-scheme: light dark; }\r\nbody { width: 35em; margin: 0 auto;\r\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\r\n&lt;\/style&gt;\r\n&lt;\/head&gt;\r\n&lt;body&gt;\r\n&lt;h1&gt;Welcome to nginx!&lt;\/h1&gt;\r\n&lt;p&gt;If you see this page, the nginx web server is successfully installed and\r\nworking. 
Further configuration is required.&lt;\/p&gt;\r\n\r\n&lt;p&gt;For online documentation and support please refer to\r\n&lt;a href=\"http:\/\/nginx.org\/\"&gt;nginx.org&lt;\/a&gt;.&lt;br\/&gt;\r\nCommercial support is available at\r\n&lt;a href=\"http:\/\/nginx.com\/\"&gt;nginx.com&lt;\/a&gt;.&lt;\/p&gt;\r\n\r\n&lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;\/em&gt;&lt;\/p&gt;\r\n&lt;\/body&gt;\r\n&lt;\/html&gt;\r\n<\/pre>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Kubernetes defines a network model that helps provide simplicity and consistency across a range of networking environments and network implementations. The Kubernetes network model provides the foundation for understanding how containers, pods, and services within Kubernetes communicate with each other.<\/p>\n","protected":false},"author":1,"featured_media":5950,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[99],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5401"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=5401"}],"version-history":[{"count":26,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5401\/revisions"}],"predecessor-version":[{"id":5470,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5401\/revisions\/5470"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media\/5950"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=5401"}],"wp:term":[{"taxonomy":"ca
tegory","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=5401"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=5401"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}