The Kubernetes architecture consists of many components that communicate with each other in different ways, so each component needs to know where the others are. On top of that, there are different modes of authentication, authorization, encryption, and security between them.
Kubernetes Ecosystem:
- Cloud Native Computing Foundation (CNCF) hosts many projects related to cloud native computing
- Kubernetes is among the most important of these projects, but many other projects are offered as well, implementing a wide range of functionality:
- Networking
- Dashboard
- Storage
- Observability
- Ingress
- To get a completely working Kubernetes solution, products from the ecosystem also need to be installed
- This can be done manually, or by using a distribution
Running Kubernetes Anywhere
- Kubernetes is a platform for cloud native computing, and as such is commonly used in cloud
- All major cloud providers have their own integrated Kubernetes distribution
- Kubernetes can also be installed on premise, within the secure boundaries of your own datacenter
- There are also all-in-one solutions that are perfect for learning Kubernetes
Understanding Kubernetes Distributions
- Kubernetes distributions add products from the ecosystem to vanilla kubernetes and provide support
- Normally, distributions run one or two Kubernetes versions behind
- Some distributions are opinionated: they pick one product for a specific solution and support only that
- Other distributions are less opinionated and integrate multiple products to offer specific solutions
Common Kubernetes Distributions
- In Cloud
- Amazon Elastic Kubernetes Services (EKS)
- Azure Kubernetes Services (AKS)
- Google Kubernetes Engine (GKE)
- On Premise
- OpenShift
- Google Anthos
- Rancher
- Canonical Charmed Kubernetes
- Minimal (learning) Solutions
- Minikube
- K3s
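For a quick lab environment, one of these all-in-one solutions is usually enough. A minimal sketch, assuming Minikube and kubectl are already installed on your workstation:

$ minikube start          # brings up a local single-node cluster
$ kubectl get nodes       # verify the node is Ready
$ kubectl cluster-info    # shows the API server address of the new cluster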
Kubernetes Node Roles
- The control plane runs the Kubernetes core services and Kubernetes agents, but no user workloads
- The worker plane runs user workloads and Kubernetes agents
- All nodes are configured with a container runtime, which is required for running containerized workloads
- The kubelet systemd service is responsible for running orchestrated containers as Pods on any node
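To check which role a node has and which container runtime it is configured with, you can query the nodes; a small sketch (column layout varies by Kubernetes version):

$ kubectl get nodes -o wide
# The ROLES column shows control-plane (or master) versus <none> for workers,
# and the CONTAINER-RUNTIME column shows e.g. containerd://1.6.x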
Node Requirements
- To install a Kubernetes cluster using kubeadm, you’ll need at least two nodes that meet the following requirements:
- Running a recent version of Ubuntu or CentOS
- 2GiB RAM or more
- 2 CPUs or more on the control-plane node
- Network connectivity between the nodes
- Before setting up the cluster with kubeadm, install the following:
- A container runtime
- The Kubernetes tools
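As an illustration, the Kubernetes tools (kubeadm, kubelet, kubectl) can be installed from the community package repository. This is only a sketch for Ubuntu; the repository URL and the version pin (v1.30) are examples, so check the current installation documentation:

$ sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg
$ sudo mkdir -p -m 755 /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl   # prevent unplanned upgrades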
Installing a Container Runtime
- The container runtime is the component that allows you to run containers
- Kubernetes supports different container runtimes
- containerd
- CRI-O
- Docker Engine
- Mirantis Container Runtime
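For example, with containerd as the runtime you can verify that it is installed and running before initializing the cluster. A sketch, assuming a systemd-based Ubuntu node; extra configuration (such as the systemd cgroup driver) is usually needed, so consult the containerd documentation:

$ sudo apt-get install -y containerd    # or install a package from your vendor
$ systemctl status containerd           # should report active (running)
$ containerd --version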
Kubernetes Networking
Different types of network communication are used in Kubernetes:
- Node communication: handled by the physical network
- External-to-Service communication: handled by Kubernetes Service resources
- Pod-to-Service communication: handled by Kubernetes Services
- Pod-to-Pod communication: handled by the network plugin
Network Add-on
- To create the software defined Pod network, a network add-on is needed
- Different network add-ons are provided by the Kubernetes ecosystem
- Vanilla Kubernetes doesn’t come with a default add-on, as it doesn’t want to favor a specific solution
- Kubernetes provides the Container Network Interface (CNI), a generic interface that allows different plugins to be used
- Availability of specific features depends on the network plugin that is used
- NetworkPolicy
- IPv6
- Role Based Access Control (RBAC)
Common Network Add-ons
- Calico: probably the most common network plugin, with support for all relevant features
- Flannel: a generic network add-on that was used a lot in the past, but doesn’t support NetworkPolicy
- Multus: a plugin that can work with multiple network plugins. Current default in OpenShift
- Weave: a common network add-on that supports the common features
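For example, Calico can be installed by applying its manifest right after kubeadm init. The URL and version below are illustrative; use the manifest recommended by the Calico documentation for your cluster version:

$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
$ kubectl get pods -n kube-system | grep calico   # wait until the Calico pods are Running
$ kubectl get nodes                               # nodes change from NotReady to Ready once the network is up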
ETCD
etcd is a distributed, reliable key-value store that is simple, secure, and fast. The etcd data store holds information about the cluster, such as:
• Nodes
• PODs
• Configs
• Secrets
• Accounts
• Roles
• Bindings
• Others
All the information you see when you run a kubectl get command comes from the etcd server. Every change you make to your cluster, such as adding nodes or deploying pods or ReplicaSets, is updated in the etcd server.
Only once the change is updated in the etcd server is it considered complete. Depending on how you set up your cluster, etcd is deployed differently.
There are two common ways to deploy Kubernetes:
- deploying from scratch
- deploying using the kubeadm tool
Setup – Manual
$ wget -q --https-only \
  "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
The advertised client URL is the address on which etcd listens. It is typically the IP of the server and port 2379, which is the default port on which etcd listens. This is the URL that must be configured on the kube-apiserver so it can reach the etcd server.
etcd.service
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://${CONTROLLER0_IP}:2380,controller-1=https://${CONTROLLER1_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Setup – kubeadm
If you set up your cluster using kubeadm, then kubeadm deploys the etcd server for you as a pod in the kube-system namespace. You can explore the etcd database using the etcdctl utility within this pod.
$ kubectl get pods -n kube-system
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-prwvl          1/1     Running   0          1h
kube-system   coredns-78fcdf6894-vqd9w          1/1     Running   0          1h
kube-system   etcd-master                       1/1     Running   0          1h
kube-system   kube-apiserver-master             1/1     Running   0          1h
kube-system   kube-controller-manager-master    1/1     Running   0          1h
kube-system   kube-proxy-f6k26                  1/1     Running   0          1h
kube-system   kube-proxy-hnzsw                  1/1     Running   0          1h
kube-system   kube-scheduler-master             1/1     Running   0          1h
kube-system   weave-net-924k8                   2/2     Running   1          1h
kube-system   weave-net-hzfcz                   2/2     Running   1          1h
To list all keys stored by Kubernetes, run the etcdctl get command like this:
$ kubectl exec etcd-master -n kube-system -- etcdctl get / --prefix --keys-only
/registry/apiregistration.k8s.io/apiservices/v1.
/registry/apiregistration.k8s.io/apiservices/v1.apps
/registry/apiregistration.k8s.io/apiservices/v1.authentication.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.autoscaling
/registry/apiregistration.k8s.io/apiservices/v1.batch
/registry/apiregistration.k8s.io/apiservices/v1.networking.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.rbac.authorization.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1.storage.k8s.io
/registry/apiregistration.k8s.io/apiservices/v1beta1.admissionregistration.k8s.io
Kubernetes stores data in a specific directory structure. The root directory is /registry, and under that you have the various Kubernetes constructs, such as nodes (minions), pods, ReplicaSets, Deployments, and so on.
In a high availability environment, you will have multiple master nodes in your cluster. Then you will have multiple etcd instances spread across the master nodes. In that case, make sure that the etcd instances know about each other by setting the right parameter in the etcd service configuration.
ETCD in HA Environment
etcd.service
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://${CONTROLLER0_IP}:2380,controller-1=https://${CONTROLLER1_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
The --initial-cluster option is where you specify the different instances of the etcd service.
etcdctl is the CLI tool used to interact with etcd. etcdctl can talk to the etcd server using two API versions – version 2 and version 3. By default it is set to use version 2, and each version has a different set of commands.
For example ETCDCTL version 2 supports the following commands:
etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set
Whereas in version 3 the commands are different:
etcdctl snapshot save
etcdctl endpoint health
etcdctl get
etcdctl put
To set the right API version, set the ETCDCTL_API environment variable:
export ETCDCTL_API=3
When the API version is not set, it defaults to version 2, and the version 3 commands listed above don’t work. When the API version is set to version 3, the version 2 commands listed above don’t work.
Apart from that, you must also specify the paths to the certificate files so that etcdctl can authenticate to the etcd server. On a kubeadm cluster, the certificate files are available on the etcd-master at the following paths:
--cacert /etc/kubernetes/pki/etcd/ca.crt
--cert /etc/kubernetes/pki/etcd/server.crt
--key /etc/kubernetes/pki/etcd/server.key
So for the commands shown earlier to work, you must specify both the ETCDCTL_API version and the paths to the certificate files. Below is the final form:
$ kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 \
  etcdctl get / --prefix --keys-only --limit=10 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key"
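The same pattern applies to the version 3 snapshot save command, which is the usual way to back up the cluster state. A sketch, reusing the etcd-master pod name and the standard kubeadm certificate paths from above; the snapshot path is just an example:

$ kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 \
  etcdctl snapshot save /var/lib/etcd/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key"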
kube-apiserver
The kube-apiserver is the primary management component in Kubernetes. When you run a kubectl command, the kubectl utility is in fact reaching out to the kube-apiserver.
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   20m   v1.11.3
node01   Ready    <none>   20m   v1.11.3
The kube-apiserver first authenticates the request and validates it. It then retrieves the data from the etcd cluster and responds with the requested information. The kube-apiserver is at the center of all the different tasks that need to be performed to make a change in the cluster. To summarize, the kube-apiserver is responsible for authenticating and validating requests, and for retrieving and updating data in the etcd data store. In fact, the kube-apiserver is the only component that interacts directly with the etcd data store.
The other components, such as the scheduler, the kube-controller-manager, and the kubelet, use the API server to perform updates in the cluster in their respective areas.
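You can also reach the kube-apiserver directly over HTTP, which is essentially what kubectl does under the hood. A quick sketch using kubectl proxy, which takes care of authentication from your kubeconfig:

$ kubectl proxy --port=8001 &                          # authenticated local proxy to the kube-apiserver
$ curl http://localhost:8001/version                   # server version, straight from the API
$ curl http://localhost:8001/api/v1/namespaces/default/pods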
Installing kube-apiserver
If you’re setting up the cluster the hard way (manually, without kubeadm), the kube-apiserver is available as a binary on the Kubernetes release page. Download it and configure it to run as a service on your Kubernetes master node.
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-apiserver
The kube-apiserver is run with a lot of parameters, as you can see here.
kube-apiserver.service
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-servers=https://127.0.0.1:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --v=2
How to view the kube-apiserver options in an existing cluster depends on how you set up your cluster.
View api-server – kubeadm
$ kubectl get pods -n kube-system
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-hwrq9          1/1     Running   0          16m
kube-system   coredns-78fcdf6894-rzhjr          1/1     Running   0          16m
kube-system   etcd-master                       1/1     Running   0          15m
kube-system   kube-apiserver-master             1/1     Running   0          15m
kube-system   kube-controller-manager-master    1/1     Running   0          15m
kube-system   kube-proxy-lzt6f                  1/1     Running   0          16m
kube-system   kube-proxy-zm5qd                  1/1     Running   0          16m
kube-system   kube-scheduler-master             1/1     Running   0          15m
kube-system   weave-net-29z42                   2/2     Running   1          16m
kube-system   weave-net-snmdl                   2/2     Running   1          16m
View api-server options – cluster set up with the kubeadm tool
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=172.17.0.32
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
View api-server options – non-kubeadm setup
$ cat /etc/systemd/system/kube-apiserver.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
You can also see the running process and the effective options by listing the process on the master node and searching for kube-apiserver.
$ ps -aux | grep kube-apiserver
root      2348  3.3 15.4 399040 315604 ?   Ssl  15:46   1:22 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.17.0.32 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
Kube Controller Manager
In Kubernetes terms, a controller is a process that continuously monitors the state of various components within the system and works towards bringing the whole system to the desired functioning state.
For example, the node controller is responsible for monitoring the status of the nodes and taking the necessary actions to keep the applications running. It does that through the kube-apiserver. The node controller checks the status of the nodes every five seconds, which is how it monitors the health of the nodes.
The replication controller is responsible for monitoring the status of ReplicaSets and ensuring that the desired number of Pods is available at all times within the set. If a Pod dies, it creates another one.
There are many more such controllers available within Kubernetes. All of them are packaged into a single process known as the Kubernetes Controller Manager.
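The monitoring intervals mentioned above are controlled by kube-controller-manager flags. On a kubeadm cluster you can check whether they are set explicitly; if they are absent, compiled-in defaults apply (the exact defaults can differ per Kubernetes version). A sketch:

$ grep -E 'node-monitor|pod-eviction' /etc/kubernetes/manifests/kube-controller-manager.yaml
# Typical values:
#   --node-monitor-period=5s
#   --node-monitor-grace-period=40s
#   --pod-eviction-timeout=5m0s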
Installing kube-controller-manager
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-controller-manager

kube-controller-manager.service
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
How to view the kube-controller-manager options depends on how you set up your cluster.
View kube-controller-manager – cluster set up with the kubeadm tool
$ kubectl get pods -n kube-system
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-hwrq9          1/1     Running   0          16m
kube-system   coredns-78fcdf6894-rzhjr          1/1     Running   0          16m
kube-system   etcd-master                       1/1     Running   0          15m
kube-system   kube-apiserver-master             1/1     Running   0          15m
kube-system   kube-controller-manager-master    1/1     Running   0          15m
kube-system   kube-proxy-lzt6f                  1/1     Running   0          16m
kube-system   kube-proxy-zm5qd                  1/1     Running   0          16m
kube-system   kube-scheduler-master             1/1     Running   0          15m
kube-system   weave-net-29z42                   2/2     Running   1          16m
kube-system   weave-net-snmdl                   2/2     Running   1          16m
The options are in the Pod definition file located in the /etc/kubernetes/manifests folder.
$ cat /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
View controller-manager options – non-kubeadm setup
$ cat /etc/systemd/system/kube-controller-manager.service
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5
To see the running process and the effective options, list the processes on the master node and search for kube-controller-manager.
$ ps -aux | grep kube-controller-manager
root      1994  2.7  5.1 154360 105024 ?   Ssl  06:45   1:25 kube-controller-manager --address=127.0.0.1 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
Kube Scheduler
The Kubernetes scheduler is responsible for scheduling pods on nodes: it only decides which pod goes on which node; actually placing the pod there is the job of the kubelet.
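To see the scheduler at work, create a pod and check which node it was assigned to; the decision is also recorded as an event. A small sketch:

$ kubectl run nginx --image=nginx
$ kubectl get pod nginx -o wide                          # the NODE column shows where it was scheduled
$ kubectl get events --field-selector reason=Scheduled   # the scheduler's binding decisions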
Installing kube-scheduler
Download the kube-scheduler binary from the Kubernetes release page, extract it, and run it as a service. When you run it as a service, you specify the scheduler configuration file.
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-scheduler

kube-scheduler.service
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
View kube-scheduler options – kubeadm
If you set it up with the kubeadm tool, you can see the options within the pod definition file located in the /etc/kubernetes/manifests folder.
$ cat /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
You can also see the running process and the effective options by listing the process on the master node and searching for kube-scheduler.
$ ps -aux | grep kube-scheduler
root      2477  0.8  1.6  48524  34044 ?   Ssl  17:31   0:08 kube-scheduler --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
Kubelet
The kubelet on a Kubernetes worker node registers the node with the Kubernetes cluster. When it receives instructions to load a container or a pod on the node, it asks the container runtime engine, which may be Docker, to pull the required image and run an instance. The kubelet then continues to monitor the state of the pod and its containers and periodically reports back to the kube-apiserver.
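Because the kubelet runs as a systemd service on the node rather than as a pod, you inspect it on the node itself. A sketch, assuming a systemd-based node:

$ systemctl status kubelet                 # should be active (running) on every node
$ journalctl -u kubelet -f                 # follow kubelet logs, e.g. registration or image pull errors
$ sudo cat /var/lib/kubelet/config.yaml    # kubelet configuration file on a kubeadm cluster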
Installing kubelet
If you use the kubeadm tool to deploy your cluster, it does not automatically deploy the kubelet; that is the difference from the other components. You must always manually install the kubelet on your worker nodes. Download the binary, extract it, and run it as a service.
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet

kubelet.service
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
View kubelet options
You can view the running kubelet process and the effective options by listing the process on the worker node and searching for kubelet.
$ ps -aux | grep kubelet
root      2095  1.8  2.4 960676  98788 ?   Ssl  02:32   0:36 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
kube-proxy
kube-proxy is a process that runs on each node in the Kubernetes cluster. Its job is to look for new Services, and every time a new Service is created, it creates the appropriate rules on each node to forward traffic destined for that Service to the backend pods. One way it does this is using iptables rules.
Within a Kubernetes cluster, every pod can reach every other pod. This is accomplished by deploying a pod networking solution to the cluster. A pod network is an internal virtual network that spans all the nodes in the cluster and to which all the pods connect. Through this network, they are able to communicate with each other.
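You can watch kube-proxy do its work by exposing a deployment and then inspecting the nat table on a node. A sketch, assuming kube-proxy runs in the default iptables mode; chain and rule names are generated and will differ in your cluster:

$ kubectl create deployment web --image=nginx
$ kubectl expose deployment web --port=80
$ kubectl get svc web                                     # note the ClusterIP
$ sudo iptables -t nat -L KUBE-SERVICES -n | grep web     # rules forwarding the ClusterIP to the backend pods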
Installing kube-proxy
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy

kube-proxy.service
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
View kube-proxy – kubeadm
The kubeadm tool deploys kube-proxy as pods on each node.
$ kubectl get pods -n kube-system
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-hwrq9          1/1     Running   0          16m
kube-system   coredns-78fcdf6894-rzhjr          1/1     Running   0          16m
kube-system   etcd-master                       1/1     Running   0          15m
kube-system   kube-apiserver-master             1/1     Running   0          15m
kube-system   kube-controller-manager-master    1/1     Running   0          15m
kube-system   kube-proxy-lzt6f                  1/1     Running   0          16m
kube-system   kube-proxy-zm5qd                  1/1     Running   0          16m
kube-system   kube-scheduler-master             1/1     Running   0          15m
kube-system   weave-net-29z42                   2/2     Running   1          16m
kube-system   weave-net-snmdl                   2/2     Running   1          16m
In fact, it is deployed as a DaemonSet, so a single pod is always deployed on each node in the cluster.
$ kubectl get daemonset -n kube-system
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-proxy   2         2         2       2            2           beta.kubernetes.io/arch=amd64   1h
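The proxy mode (iptables, ipvs, and so on) comes from the kube-proxy ConfigMap that kubeadm creates; a quick way to check which mode is in use (exact output depends on your version):

$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
$ kubectl -n kube-system logs daemonset/kube-proxy | grep -i proxier   # logs state which proxier was chosen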