{"id":5191,"date":"2023-10-14T09:24:19","date_gmt":"2023-10-14T07:24:19","guid":{"rendered":"http:\/\/miro.borodziuk.eu\/?p=5191"},"modified":"2024-02-13T18:03:02","modified_gmt":"2024-02-13T17:03:02","slug":"the-kube-api-server-in-kubernetes","status":"publish","type":"post","link":"http:\/\/miro.borodziuk.eu\/index.php\/2023\/10\/14\/the-kube-api-server-in-kubernetes\/","title":{"rendered":"Kubernetes cluster architecture"},"content":{"rendered":"<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">The Kubernetes architecture consists of many different components that work together and communicate with each other in many different ways, so they all need to know where the other components are. There are different modes of authentication, authorization, encryption and security.<br \/>\n<\/span><\/p>\n<\/div>\n<p><span style=\"color: #3366ff;\"><!--more--><\/span><\/p>\n<p><span style=\"color: #3366ff;\"><strong>Kubernetes Ecosystem:<\/strong><\/span><\/p>\n<ul>\n<li>The Cloud Native Computing Foundation (CNCF) hosts many projects related to<br \/>\ncloud native computing<\/li>\n<li>Kubernetes is among the most important projects, but many other projects are offered as well, implementing a wide range of functionality\n<ul>\n<li>Networking<\/li>\n<li>Dashboard<\/li>\n<li>Storage<\/li>\n<li>Observability<\/li>\n<li>Ingress<\/li>\n<\/ul>\n<\/li>\n<li>To get a completely working Kubernetes solution, products from the ecosystem also need to be installed<\/li>\n<li>This can be done manually, or by using a distribution<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Running Kubernetes Anywhere<\/span><\/p>\n<ul>\n<li>Kubernetes is a platform for cloud native computing, and as such is commonly used in the cloud<\/li>\n<li>All major cloud providers have their own integrated Kubernetes distribution<\/li>\n<li>Kubernetes can also be 
installed on premises, within the secure boundaries of your own datacenter<\/li>\n<li>And also, there are all-in-one solutions which are perfect for learning Kubernetes<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Understanding Kubernetes Distributions<\/span><\/p>\n<ul>\n<li>Kubernetes distributions add products from the ecosystem to vanilla Kubernetes and provide support<\/li>\n<li>Normally, distributions run one or two Kubernetes versions behind<\/li>\n<li>Some distributions are opinionated: they pick one product for a specific solution and support only that<\/li>\n<li>Other distributions are less opinionated and integrate multiple products to offer specific solutions<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Common Kubernetes Distributions<\/span><\/p>\n<ul>\n<li>In Cloud\n<ul>\n<li>Amazon Elastic Kubernetes Service (EKS)<\/li>\n<li>Azure Kubernetes Service (AKS)<\/li>\n<li>Google Kubernetes Engine (GKE)<\/li>\n<\/ul>\n<\/li>\n<li>On Premise\n<ul>\n<li>OpenShift<\/li>\n<li>Google Anthos<\/li>\n<li>Rancher<\/li>\n<li>Canonical Charmed Kubernetes<\/li>\n<\/ul>\n<\/li>\n<li>Minimal (learning) Solutions\n<ul>\n<li>Minikube<\/li>\n<li>K3s<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Kubernetes Node Roles<\/span><\/p>\n<ul>\n<li>The control plane runs Kubernetes core services, Kubernetes agents, and no user workloads<\/li>\n<li>The worker plane runs user workloads and Kubernetes agents<\/li>\n<li>All nodes are configured with a container runtime, which is required for running containerized workloads<\/li>\n<li>The kubelet systemd service is responsible for running orchestrated containers as Pods on any node<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Node Requirements<\/span><\/p>\n<ul>\n<li>To install a Kubernetes cluster using <code>kubeadm<\/code>, you&#8217;ll need at least two nodes that meet the following requirements:\n<ul>\n<li>Running a recent version of Ubuntu or CentOS<\/li>\n<li>2GiB RAM or more<\/li>\n<li>2 CPUs or 
more on the control-plane node<\/li>\n<li>Network connectivity between the nodes<\/li>\n<\/ul>\n<\/li>\n<li>Before setting up the cluster with <code>kubeadm<\/code>, install the following:\n<ul>\n<li>A container runtime<\/li>\n<li>The Kubernetes tools<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Installing a Container Runtime<\/span><\/p>\n<ul>\n<li>The container runtime is the component that allows you to run containers<\/li>\n<li>Kubernetes supports different container runtimes\n<ul>\n<li>containerd<\/li>\n<li>CRI-O<\/li>\n<li>Docker Engine<\/li>\n<li>Mirantis Container Runtime<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Kubernetes Networking<\/span><\/p>\n<p>Different types of network communication are used in Kubernetes<\/p>\n<ul>\n<li>Node communication: handled by the physical network<\/li>\n<li>External-to-Service communication: handled by Kubernetes Service resources<\/li>\n<li>Pod-to-Service communication: handled by Kubernetes Services<\/li>\n<li>Pod-to-Pod communication: handled by the network plugin<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Network Add-on<\/span><\/p>\n<ul>\n<li>To create the software-defined Pod network, a network add-on is needed<\/li>\n<li>Different network add-ons are provided by the Kubernetes ecosystem<\/li>\n<li>Vanilla Kubernetes doesn&#8217;t come with a default add-on, as it doesn&#8217;t want to favor a specific solution<\/li>\n<li>Kubernetes provides the Container Network Interface (CNI), a generic interface that allows different plugins to be used<\/li>\n<li>Availability of specific features depends on the network plugin that is used\n<ul>\n<li>NetworkPolicy<\/li>\n<li>IPv6<\/li>\n<li>Role Based Access Control (RBAC)<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"color: #3366ff;\">Common Network Add-ons<\/span><\/p>\n<ul>\n<li><strong>Calico<\/strong>: probably the most common network plugin with support for all<br \/>\nrelevant 
features<\/li>\n<li><strong>Flannel<\/strong>: a generic network add-on that was used a lot in the past, but doesn&#8217;t support NetworkPolicy<\/li>\n<li><strong>Multus<\/strong>: a plugin that can work with multiple network plugins. Current default in OpenShift<\/li>\n<li><strong>Weave<\/strong>: a common network add-on that supports the most common features<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt; color: #3366ff;\"><strong>ETCD<\/strong><\/span><\/p>\n<p><code>etcd<\/code> is a distributed, reliable key-value store that is simple, secure &amp; fast. <span class=\"\" data-purpose=\"cue-text\">The etcd data store stores information regarding the cluster <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">such as:<\/span><\/p>\n<p>\u2022 Nodes<br \/>\n\u2022 Pods<br \/>\n\u2022 Configs<br \/>\n\u2022 Secrets<br \/>\n\u2022 Accounts<br \/>\n\u2022 Roles<br \/>\n\u2022 Bindings<br \/>\n\u2022 Others<\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">All the information you see when you run a <code>kubectl get<\/code> command comes from the etcd server. Every change you make to your cluster, such as adding nodes or deploying Pods or ReplicaSets, is updated in the etcd server.<\/span><\/p>\n<\/div>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">Only once it is updated in the etcd server is the change considered to be complete. Depending on how you set up your cluster, etcd is deployed differently. 
<\/span><\/p>\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">There\u00a0are two types of Kubernetes deployments:<\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<ul>\n<li class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">deploying from scratch<\/span><\/li>\n<li class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">deploying using the kubeadm tool<\/span><\/li>\n<\/ul>\n<\/div>\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\"><strong><span class=\"\" data-purpose=\"cue-text\">Setup &#8211; Manual<\/span><\/strong><\/span><\/p>\n<pre class=\"lang:default decode:true\">$ wget -q --https-only \\\r\n\"https:\/\/github.com\/coreos\/etcd\/releases\/download\/v3.3.9\/etcd-v3.3.9-linux-amd64.tar.gz\"<\/pre>\n<\/div>\n<p><span class=\"\" data-purpose=\"cue-text\">The advertised client URL is the address on which etcd listens. It happens to be on the IP of the server and on port 2379, which is the default port on which etcd listens. 
<\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">This is the URL that should be configured <\/span><span class=\"\" data-purpose=\"cue-text\">on the kube API server when it tries to reach the etcd server.<br \/>\n<\/span><\/p>\n<p><code>etcd.service<\/code><\/p>\n<pre class=\"lang:default decode:true\">ExecStart=\/usr\/local\/bin\/etcd \\\\\r\n--name ${ETCD_NAME} \\\\\r\n--cert-file=\/etc\/etcd\/kubernetes.pem \\\\\r\n--key-file=\/etc\/etcd\/kubernetes-key.pem \\\\\r\n--peer-cert-file=\/etc\/etcd\/kubernetes.pem \\\\\r\n--peer-key-file=\/etc\/etcd\/kubernetes-key.pem \\\\\r\n--trusted-ca-file=\/etc\/etcd\/ca.pem \\\\\r\n--peer-trusted-ca-file=\/etc\/etcd\/ca.pem \\\\\r\n--peer-client-cert-auth \\\\\r\n--client-cert-auth \\\\\r\n--initial-advertise-peer-urls https:\/\/${INTERNAL_IP}:2380 \\\\\r\n--listen-peer-urls https:\/\/${INTERNAL_IP}:2380 \\\\\r\n--listen-client-urls https:\/\/${INTERNAL_IP}:2379,https:\/\/127.0.0.1:2379 \\\\\r\n--advertise-client-urls https:\/\/${INTERNAL_IP}:2379 \\\\\r\n--initial-cluster-token etcd-cluster-0 \\\\\r\n--initial-cluster controller-0=https:\/\/${CONTROLLER0_IP}:2380,controller-1=https:\/\/${CONTROLLER1_IP}:2380 \\\\\r\n--initial-cluster-state new \\\\\r\n--data-dir=\/var\/lib\/etcd\r\netcd.service\r\nExecStart=\/usr\/local\/bin\/etcd \\\\\r\n--name ${ETCD_NAME} \\\\\r\n--cert-file=\/etc\/etcd\/kubernetes.pem \\\\\r\n--key-file=\/etc\/etcd\/kubernetes-key.pem \\\\\r\n--peer-cert-file=\/etc\/etcd\/kubernetes.pem \\\\\r\n--peer-key-file=\/etc\/etcd\/kubernetes-key.pem \\\\\r\n--trusted-ca-file=\/etc\/etcd\/ca.pem \\\\\r\n--peer-trusted-ca-file=\/etc\/etcd\/ca.pem \\\\\r\n--peer-client-cert-auth \\\\\r\n--client-cert-auth \\\\\r\n--initial-advertise-peer-urls https:\/\/${INTERNAL_IP}:2380 \\\\\r\n--listen-peer-urls https:\/\/${INTERNAL_IP}:2380 \\\\\r\n--listen-client-urls https:\/\/${INTERNAL_IP}:2379,https:\/\/127.0.0.1:2379 \\\\\r\n--advertise-client-urls https:\/\/${INTERNAL_IP}:2379 
\\\\\r\n--initial-cluster-token etcd-cluster-0 \\\\\r\n--initial-cluster controller-0=https:\/\/${CONTROLLER0_IP}:2380,controller-1=https:\/\/${CONTROLLER1_IP}:2380 \\\\\r\n--initial-cluster-state new \\\\\r\n--data-dir=\/var\/lib\/etcd<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\"><strong>Setup &#8211; kubeadm<\/strong><\/span><\/p>\n<p><span class=\"\" data-purpose=\"cue-text\">If you set up your cluster using <code>kubeadm<\/code>, then <code>kubeadm<\/code> deploys the etcd server for you as a pod in the kube system namespace. You can explore the etcd database using the etcd control utility within this pod.<br \/>\n<\/span><\/p>\n<pre class=\"lang:default mark:6 decode:true \">$ kubectl get pods - n kube- system\r\n\r\nNAMESPACE NAME READY STATUS RESTARTS AGE\r\nkube-system coredns-78fcdf6894-prwvl 1\/1 Running 0 1h\r\nkube-system coredns-78fcdf6894-vqd9w 1\/1 Running 0 1h\r\nkube-system etcd-master 1\/1 Running 0 1h\r\nkube-system kube-apiserver-master 1\/1 Running 0 1h\r\nkube-system kube-controller-manager-master 1\/1 Running 0 1h\r\nkube-system kube-proxy-f6k26 1\/1 Running 0 1h\r\nkube-system kube-proxy-hnzsw 1\/1 Running 0 1h\r\nkube-system kube-scheduler-master 1\/1 Running 0 1h\r\nkube-system weave-net-924k8 2\/2 Running 1 1h\r\nkube-system weave-net-hzfcz 2\/2 Running 1 1h\r\n<\/pre>\n<p><span class=\"\" data-purpose=\"cue-text\">To list all keys stored by Kubernetes, run the etcd control get command like this.<\/span><\/p>\n<pre class=\"lang:default decode:true \">$ kubectl exec etcd- master \u2013n kube- system etcdctl get \/ -- prefix \u2013keys- 
only\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.apps\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.authentication.k8s.io\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.authorization.k8s.io\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.autoscaling\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.batch\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.networking.k8s.io\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.rbac.authorization.k8s.io\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1.storage.k8s.io\r\n\/registry\/apiregistration.k8s.io\/apiservices\/v1beta1.admissionregistration.k8s.io<\/pre>\n<p><span class=\"\" data-purpose=\"cue-text\">Kubernetes stores data in a specific directory structure. The root directory is \/registry, and under that you have the various Kubernetes constructs, such as minions (nodes), pods, replica sets, <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">deployments, etc.<\/span><\/p>\n<p><span class=\"\" data-purpose=\"cue-text\">In a high availability environment, you will have multiple master nodes in your cluster, and multiple etcd instances spread across the master nodes. In that case, make sure that the etcd instances know about each other by setting the right parameter in the etcd service configuration.<\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\"><strong>ETCD in HA Environment<\/strong><\/span><\/p>\n<pre class=\"lang:default mark:18 decode:true\">etcd.service\r\n\r\nExecStart=\/usr\/local\/bin\/etcd \\\\\r\n--name ${ETCD_NAME} \\\\\r\n--cert-file=\/etc\/etcd\/kubernetes.pem \\\\\r\n--key-file=\/etc\/etcd\/kubernetes-key.pem \\\\\r\n--peer-cert-file=\/etc\/etcd\/kubernetes.pem \\\\\r\n--peer-key-file=\/etc\/etcd\/kubernetes-key.pem \\\\\r\n--trusted-ca-file=\/etc\/etcd\/ca.pem \\\\\r\n--peer-trusted-ca-file=\/etc\/etcd\/ca.pem \\\\\r\n--peer-client-cert-auth \\\\\r\n--client-cert-auth \\\\\r\n--initial-advertise-peer-urls https:\/\/${INTERNAL_IP}:2380 \\\\\r\n--listen-peer-urls https:\/\/${INTERNAL_IP}:2380 \\\\\r\n--listen-client-urls https:\/\/${INTERNAL_IP}:2379,https:\/\/127.0.0.1:2379 
\\\\\r\n--advertise-client-urls https:\/\/${INTERNAL_IP}:2379 \\\\\r\n--initial-cluster-token etcd-cluster-0 \\\\\r\n--initial-cluster controller-0=https:\/\/${CONTROLLER0_IP}:2380,controller-1=https:\/\/${CONTROLLER1_IP}:2380 \\\\\r\n--initial-cluster-state new \\\\\r\n--data-dir=\/var\/lib\/etcd<\/pre>\n<p><span class=\"\" data-purpose=\"cue-text\">The <code>--initial-cluster<\/code> option is where you specify the different instances of the etcd service.<\/span><\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<div class=\"text-viewer--scroll-container--1iy0Z\">\n<div class=\"text-viewer--content--3hoqQ\">\n<div class=\"ud-heading-xxl text-viewer--main-heading--ZbxZA\"><span style=\"color: #3366ff;\"><strong>ETCD &#8211; Commands<\/strong><\/span><\/div>\n<div class=\"article-asset--container--3djM8\">\n<div class=\"article-asset--content--1dAQ9 rt-scaffolding\" data-purpose=\"safely-set-inner-html:rich-text-viewer:html\">\n<p>ETCDCTL is the CLI tool used to interact with etcd. ETCDCTL can interact with the etcd server using two API versions &#8211; version 2 and version 3.\u00a0By default it is set to use version 2. Each version has a different set of commands.<\/p>\n<p>For example, ETCDCTL version 2 supports the following commands:<\/p>\n<pre class=\"lang:default decode:true \">etcdctl backup\r\netcdctl cluster-health\r\netcdctl mk\r\netcdctl mkdir\r\netcdctl set<\/pre>\n<p>Whereas the commands are different in version 3:<\/p>\n<div class=\"ud-component--base-components--code-block\">\n<div>\n<pre class=\"lang:default decode:true \">etcdctl snapshot save\r\netcdctl endpoint health\r\netcdctl get\r\netcdctl put<\/pre>\n<p>To set the right API version, set the ETCDCTL_API environment variable:<\/p>\n<\/div>\n<\/div>\n<pre class=\"lang:default decode:true \">export ETCDCTL_API=3<\/pre>\n<p>When the API\u00a0version is not set, it is assumed to be version 2, and the version 3 commands listed above don&#8217;t work. 
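<\/p>\n<p>For example, a minimal sketch of taking an etcd backup with the version 3 API (the backup file path and endpoint below are illustrative assumptions, not values from the lecture):<\/p>\n<pre class=\"lang:default decode:true \">ETCDCTL_API=3 etcdctl snapshot save \/tmp\/etcd-backup.db \\\r\n--endpoints=https:\/\/127.0.0.1:2379 \\\r\n--cacert=\/etc\/kubernetes\/pki\/etcd\/ca.crt \\\r\n--cert=\/etc\/kubernetes\/pki\/etcd\/server.crt \\\r\n--key=\/etc\/kubernetes\/pki\/etcd\/server.key<\/pre>\n<p>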
When API\u00a0version is set to version 3, the version 2 commands listed above don&#8217;t work.<\/p>\n<p>Apart from that, you must also specify the path to the certificate files so that ETCDCTL can authenticate to the etcd server. The certificate files are available on the etcd-master at the following paths:<\/p>\n<div class=\"ud-component--base-components--code-block\">\n<div>\n<pre class=\"lang:default decode:true \">--cacert \/etc\/kubernetes\/pki\/etcd\/ca.crt\r\n--cert \/etc\/kubernetes\/pki\/etcd\/server.crt\r\n--key \/etc\/kubernetes\/pki\/etcd\/server.key<\/pre>\n<p>So for the commands shown earlier to work, you must specify the ETCDCTL API version and the path to the certificate files. Below is the final form:<\/p>\n<\/div>\n<\/div>\n<div class=\"ud-component--base-components--code-block\">\n<div>\n<pre class=\"lang:default decode:true \">$ kubectl exec etcd-master -n kube-system -- sh -c \"ETCDCTL_API=3 \\\r\netcdctl get \/ --prefix --keys-only --limit=10 --cacert \/etc\/kubernetes\/pki\/etcd\/ca.crt \\\r\n--cert \/etc\/kubernetes\/pki\/etcd\/server.crt  --key \/etc\/kubernetes\/pki\/etcd\/server.key\"<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt;\"><strong><span class=\"\" style=\"color: #3366ff;\" data-purpose=\"cue-text\">kube-apiserver<\/span><\/strong><\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"transcript--cue-container--wu3UY\">\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">The kube-apiserver is the primary management component in Kubernetes. <\/span><span class=\"\" data-purpose=\"cue-text\">When you run a <code>kubectl<\/code> command, <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">the <code>kubectl<\/code> utility is in fact reaching out <\/span><span class=\"\" data-purpose=\"cue-text\">to the kube-apiserver. 
<\/span><\/p>\n<pre class=\"lang:default decode:true \">$ kubectl get nodes\r\nNAME STATUS ROLES AGE VERSION\r\nmaster Ready master 20m v1.11.3\r\nnode01 Ready &lt;none&gt; 20m v1.11.3<\/pre>\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">The kube-apiserver first authenticates the request and validates it. It then retrieves the data from the etcd cluster and responds back with the requested information. <\/span>The kube-apiserver is at the center of all the different tasks that need to be performed to make a change in the cluster. To summarize, the kube-apiserver is responsible for authenticating and validating requests, and for retrieving and updating data in the etcd data store. In fact, the kube-apiserver is the only component that interacts directly with the etcd data store.<\/p>\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\">The other components, such as the scheduler, kube-controller-manager and kubelet, use the API server to perform updates in the cluster in their respective areas.<\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\"><strong>Installing kube-apiserver<\/strong><\/span><\/p>\n<p>If you&#8217;re setting up the cluster manually, the kube-apiserver is available as a binary on the Kubernetes release page. 
Download it and configure it to run as a service on your Kubernetes master node.<\/p>\n<pre class=\"lang:default decode:true\">wget https:\/\/storage.googleapis.com\/kubernetes-release\/release\/v1.13.0\/bin\/linux\/amd64\/kube-apiserver<\/pre>\n<p>The kube-apiserver runs with a lot of parameters, as you can see here.<\/p>\n<p><code>kube-apiserver.service<\/code><\/p>\n<pre class=\"lang:default mark:9 decode:true\">ExecStart=\/usr\/local\/bin\/kube-apiserver \\\\\r\n--advertise-address=${INTERNAL_IP} \\\\\r\n--allow-privileged=true \\\\\r\n--apiserver-count=3 \\\\\r\n--authorization-mode=Node,RBAC \\\\\r\n--bind-address=0.0.0.0 \\\\\r\n--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\\\\r\n--enable-swagger-ui=true \\\\\r\n--etcd-servers=https:\/\/127.0.0.1:2379 \\\\\r\n--event-ttl=1h \\\\\r\n--experimental-encryption-provider-config=\/var\/lib\/kubernetes\/encryption-config.yaml \\\\\r\n--runtime-config=api\/all \\\\\r\n--service-account-key-file=\/var\/lib\/kubernetes\/service-account.pem \\\\\r\n--service-cluster-ip-range=10.32.0.0\/24 \\\\\r\n--service-node-port-range=30000-32767 \\\\\r\n--v=2\r\n<\/pre>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">How to view the kube-apiserver options in an existing cluster <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">depends on how you set up your cluster.<\/span><\/p>\n<\/div>\n<p><span style=\"color: #3366ff;\"><strong>View api-server &#8211; kubeadm<\/strong><\/span><\/p>\n<pre class=\"lang:default decode:true\">$ kubectl get pods -n kube-system\r\n\r\nNAMESPACE NAME READY STATUS RESTARTS AGE\r\nkube-system coredns-78fcdf6894-hwrq9 1\/1 Running 0 16m\r\nkube-system coredns-78fcdf6894-rzhjr 1\/1 Running 0 
16m\r\nkube-system etcd-master 1\/1 Running 0 15m\r\nkube-system kube-apiserver-master 1\/1 Running 0 15m\r\nkube-system kube-controller-manager-master 1\/1 Running 0 15m\r\nkube-system kube-proxy-lzt6f 1\/1 Running 0 16m\r\nkube-system kube-proxy-zm5qd 1\/1 Running 0 16m\r\nkube-system kube-scheduler-master 1\/1 Running 0 15m\r\nkube-system weave-net-29z42 2\/2 Running 1 16m\r\nkube-system weave-net-snmdl 2\/2 Running 1 16m<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">View api-server options &#8211;\u00a0cluster <span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\"><span style=\"color: #3366ff;\">set up with the kubeadm tool<\/span><br \/>\n<\/span><\/span><\/p>\n<pre class=\"lang:default decode:true\">$ cat \/etc\/kubernetes\/manifests\/kube-apiserver.yaml\r\n\r\nspec:\r\ncontainers:\r\n- command:\r\n- kube-apiserver\r\n- --authorization-mode=Node,RBAC\r\n- --advertise-address=172.17.0.32\r\n- --allow-privileged=true\r\n- --client-ca-file=\/etc\/kubernetes\/pki\/ca.crt\r\n- --disable-admission-plugins=PersistentVolumeLabel\r\n- --enable-admission-plugins=NodeRestriction\r\n- --enable-bootstrap-token-auth=true\r\n- --etcd-cafile=\/etc\/kubernetes\/pki\/etcd\/ca.crt\r\n- --etcd-certfile=\/etc\/kubernetes\/pki\/apiserver-etcd-client.crt\r\n- --etcd-keyfile=\/etc\/kubernetes\/pki\/apiserver-etcd-client.key\r\n- --etcd-servers=https:\/\/127.0.0.1:2379\r\n- --insecure-port=0\r\n- --kubelet-client-certificate=\/etc\/kubernetes\/pki\/apiserver-kubelet-client.crt\r\n- --kubelet-client-key=\/etc\/kubernetes\/pki\/apiserver-kubelet-client.key\r\n- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\r\n- --proxy-client-cert-file=\/etc\/kubernetes\/pki\/front-proxy-client.crt\r\n- --proxy-client-key-file=\/etc\/kubernetes\/pki\/front-proxy-client.key\r\n- --requestheader-allowed-names=front-proxy-client\r\n- --requestheader-client-ca-file=\/etc\/kubernetes\/pki\/front-proxy-ca.crt\r\n- 
--requestheader-extra-headers-prefix=X-Remote-Extra-\r\n- --requestheader-group-headers=X-Remote-Group\r\n- --requestheader-username-headers=X-Remote-User\r\n- --secure-port=6443<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">View api-server options &#8211; <span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">non-kubeadm setup<\/span><\/span><\/p>\n<pre class=\"lang:default decode:true \">$ cat \/etc\/systemd\/system\/kube-apiserver.service\r\n\r\n[Service]\r\nExecStart=\/usr\/local\/bin\/kube-apiserver \\\\\r\n--advertise-address=${INTERNAL_IP} \\\\\r\n--allow-privileged=true \\\\\r\n--apiserver-count=3 \\\\\r\n--audit-log-maxage=30 \\\\\r\n--audit-log-maxbackup=3 \\\\\r\n--audit-log-maxsize=100 \\\\\r\n--audit-log-path=\/var\/log\/audit.log \\\\\r\n--authorization-mode=Node,RBAC \\\\\r\n--bind-address=0.0.0.0 \\\\\r\n--client-ca-file=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\\\\r\n--enable-swagger-ui=true \\\\\r\n--etcd-cafile=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--etcd-certfile=\/var\/lib\/kubernetes\/kubernetes.pem \\\\\r\n--etcd-keyfile=\/var\/lib\/kubernetes\/kubernetes-key.pem \\\\\r\n--etcd-servers=https:\/\/10.240.0.10:2379,https:\/\/10.240.0.11:2379,https:\/\/10.240.0.12:2379 \\\\\r\n--event-ttl=1h \\\\\r\n--experimental-encryption-provider-config=\/var\/lib\/kubernetes\/encryption-config.yaml \\\\\r\n--kubelet-certificate-authority=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--kubelet-client-certificate=\/var\/lib\/kubernetes\/kubernetes.pem \\\\\r\n--kubelet-client-key=\/var\/lib\/kubernetes\/kubernetes-key.pem \\\\<\/pre>\n<p>&nbsp;<\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">You can also see the running process and the 
effective options by listing the process <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">on the master node and searching for kube-apiserver.<\/span><\/p>\n<\/div>\n<pre class=\"lang:default decode:true\">$ ps -aux | grep kube-apiserver\r\nroot 2348 3.3 15.4 399040 315604 ? Ssl 15:46 1:22 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.17.0.32 --allow-privileged=true --client-ca-file=\/etc\/kubernetes\/pki\/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=\/etc\/kubernetes\/pki\/etcd\/ca.crt --etcd-certfile=\/etc\/kubernetes\/pki\/apiserver-etcd-client.crt --etcd-keyfile=\/etc\/kubernetes\/pki\/apiserver-etcd-client.key --etcd-servers=https:\/\/127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=\/etc\/kubernetes\/pki\/apiserver-kubelet-client.crt --kubelet-client-key=\/etc\/kubernetes\/pki\/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=\/etc\/kubernetes\/pki\/front-proxy-client.crt --proxy-client-key-file=\/etc\/kubernetes\/pki\/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=\/etc\/kubernetes\/pki\/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=\/etc\/kubernetes\/pki\/sa.pub --service-cluster-ip-range=10.96.0.0\/12 --tls-cert-file=\/etc\/kubernetes\/pki\/apiserver.crt --tls-private-key-file=\/etc\/kubernetes\/pki\/apiserver.key<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt;\"><strong><span style=\"color: #3366ff;\">Kube Controller Manager<\/span><\/strong><\/span><\/p>\n<p>In Kubernetes terms, a controller is a process that continuously monitors the state of various components within the system and 
works towards bringing the whole system to the desired functioning state.<\/p>\n<p>For example, the node controller is responsible for monitoring the status of the nodes and taking necessary actions to keep the applications running. It does that through the kube-apiserver. The node controller checks the status of the nodes every five seconds. That way the node controller can monitor the health of the nodes.<\/p>\n<p>The replication controller is responsible for monitoring the status of ReplicaSets and ensuring that the desired number of Pods is available at all times within the set. If a Pod dies, it creates another one.<\/p>\n<p>There are many more such controllers available within Kubernetes. All <span class=\"\" data-purpose=\"cue-text\">of them are packaged into a single process known <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">as the Kubernetes Controller Manager.<\/span><\/p>\n<p><span style=\"color: #3366ff;\">Installing kube-controller-manager<\/span><\/p>\n<pre class=\"lang:default decode:true\">$ wget https:\/\/storage.googleapis.com\/kubernetes-release\/release\/v1.13.0\/bin\/linux\/amd64\/kube-controller-manager\r\n\r\nkube-controller-manager.service\r\n\r\nExecStart=\/usr\/local\/bin\/kube-controller-manager \\\\\r\n--address=0.0.0.0 \\\\\r\n--cluster-cidr=10.200.0.0\/16 \\\\\r\n--cluster-name=kubernetes \\\\\r\n--cluster-signing-cert-file=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--cluster-signing-key-file=\/var\/lib\/kubernetes\/ca-key.pem \\\\\r\n--kubeconfig=\/var\/lib\/kubernetes\/kube-controller-manager.kubeconfig \\\\\r\n--leader-elect=true \\\\\r\n--root-ca-file=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--service-account-private-key-file=\/var\/lib\/kubernetes\/service-account-key.pem \\\\\r\n--service-cluster-ip-range=10.32.0.0\/24 \\\\\r\n--use-service-account-credentials=true \\\\\r\n--v=2\r\n<\/pre>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" 
tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\"> How to view the Kube controller managers server options depends on how you set up your cluster.<br \/>\n<\/span><\/p>\n<\/div>\n<div class=\"transcript--cue-container--wu3UY\"><\/div>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue-active\"><span style=\"color: #3366ff;\">View kube-controller-manager &#8211; <span class=\"\" data-purpose=\"cue-text\">set it up with the Kube admin tool<\/span><\/span><\/p>\n<\/div>\n<pre class=\"lang:default decode:true \">$ kubectl get pods -n kube-system\r\nNAMESPACE NAME READY STATUS RESTARTS AGE\r\nkube-system coredns-78fcdf6894-hwrq9 1\/1 Running 0 16m\r\nkube-system coredns-78fcdf6894-rzhjr 1\/1 Running 0 16m\r\nkube-system etcd-master 1\/1 Running 0 15m\r\nkube-system kube-apiserver-master 1\/1 Running 0 15m\r\nkube-system kube-controller-manager-master 1\/1 Running 0 15m\r\nkube-system kube-proxy-lzt6f 1\/1 Running 0 16m\r\nkube-system kube-proxy-zm5qd 1\/1 Running 0 16m\r\nkube-system kube-scheduler-master 1\/1 Running 0 15m\r\nkube-system weave-net-29z42 2\/2 Running 1 16m\r\nkube-system weave-net-snmdl 2\/2 Running 1 16m NAMESPACE NAME READY STATUS RESTARTS AGE\r\nkube-system coredns-78fcdf6894-hwrq9 1\/1 Running 0 16m\r\nkube-system coredns-78fcdf6894-rzhjr 1\/1 Running 0 16m\r\nkube-system etcd-master 1\/1 Running 0 15m\r\nkube-system kube-apiserver-master 1\/1 Running 0 15m\r\nkube-system kube-controller-manager-master 1\/1 Running 0 15m\r\nkube-system kube-proxy-lzt6f 1\/1 Running 0 16m\r\nkube-system kube-proxy-zm5qd 1\/1 Running 0 16m\r\nkube-system kube-scheduler-master 1\/1 Running 0 15m\r\nkube-system weave-net-29z42 2\/2 Running 1 16m\r\nkube-system weave-net-snmdl 2\/2 Running 1 16m<\/pre>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" 
tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">You can see the options within the Pod definition file located <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">in the \/etc\/kubernetes\/manifests folder.<\/span><\/p>\n<\/div>\n<pre class=\"lang:default decode:true \">$ cat \/etc\/kubernetes\/manifests\/kube-controller-manager.yaml\r\nspec:\r\n  containers:\r\n  - command:\r\n    - kube-controller-manager\r\n    - --address=127.0.0.1\r\n    - --cluster-signing-cert-file=\/etc\/kubernetes\/pki\/ca.crt\r\n    - --cluster-signing-key-file=\/etc\/kubernetes\/pki\/ca.key\r\n    - --controllers=*,bootstrapsigner,tokencleaner\r\n    - --kubeconfig=\/etc\/kubernetes\/controller-manager.conf\r\n    - --leader-elect=true\r\n    - --root-ca-file=\/etc\/kubernetes\/pki\/ca.crt\r\n    - --service-account-private-key-file=\/etc\/kubernetes\/pki\/sa.key\r\n    - --use-service-account-credentials=true<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">View controller-manager options &#8211; <span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">non-kubeadm setup<\/span><\/span><\/p>\n<pre class=\"lang:default decode:true \">$ cat \/etc\/systemd\/system\/kube-controller-manager.service\r\n[Service]\r\nExecStart=\/usr\/local\/bin\/kube-controller-manager \\\\\r\n--address=0.0.0.0 \\\\\r\n--cluster-cidr=10.200.0.0\/16 \\\\\r\n--cluster-name=kubernetes \\\\\r\n--cluster-signing-cert-file=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--cluster-signing-key-file=\/var\/lib\/kubernetes\/ca-key.pem \\\\\r\n--kubeconfig=\/var\/lib\/kubernetes\/kube-controller-manager.kubeconfig \\\\\r\n--leader-elect=true \\\\\r\n--root-ca-file=\/var\/lib\/kubernetes\/ca.pem \\\\\r\n--service-account-private-key-file=\/var\/lib\/kubernetes\/service-account-key.pem \\\\\r\n--service-cluster-ip-range=10.32.0.0\/24 \\\\\r\n--use-service-account-credentials=true \\\\\r\n--v=2\r\nRestart=on-failure\r\nRestartSec=5<\/pre>\n<p><span class=\"\" 
data-purpose=\"cue-text\">To see the running process and the effective options, list the processes on the master node <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">and search for kube-controller-manager.<\/span><\/p>\n<pre class=\"lang:default decode:true \">$ ps aux | grep kube-controller-manager\r\nroot 1994 2.7 5.1 154360 105024 ? Ssl 06:45 1:25 kube-controller-manager --address=127.0.0.1 --cluster-signing-cert-file=\/etc\/kubernetes\/pki\/ca.crt --cluster-signing-key-file=\/etc\/kubernetes\/pki\/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=\/etc\/kubernetes\/controller-manager.conf --leader-elect=true --root-ca-file=\/etc\/kubernetes\/pki\/ca.crt --service-account-private-key-file=\/etc\/kubernetes\/pki\/sa.key --use-service-account-credentials=true<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: 14pt;\"><strong><span style=\"color: #3366ff;\">Kube Scheduler<\/span><\/strong><\/span><\/p>\n<p>The Kubernetes scheduler is responsible for scheduling Pods on nodes: it only decides which Pod goes on which node, while the kubelet actually places the Pod there.<\/p>\n<p><span style=\"color: #3366ff;\"><strong>Installing kube-scheduler<\/strong><\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">Download the kube-scheduler binary from the Kubernetes release page, extract it, and run it as a service. 
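<\/span><\/p>\n<\/div>\n<p><span class=\"\" data-purpose=\"cue-text\">When run as a service, kube-scheduler is typically pointed at a configuration file of kind KubeSchedulerConfiguration. The following is only a minimal sketch: the apiVersion shown is the one used by recent Kubernetes releases (older versions used earlier API versions), and the paths are assumptions.<\/span><\/p>\n<pre class=\"lang:default decode:true \"># \/etc\/kubernetes\/config\/kube-scheduler.yaml (hypothetical example)\r\napiVersion: kubescheduler.config.k8s.io\/v1\r\nkind: KubeSchedulerConfiguration\r\nclientConnection:\r\n  kubeconfig: \/etc\/kubernetes\/scheduler.conf\r\nleaderElection:\r\n  leaderElect: true<\/pre>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">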
When you run it as a service, <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">you specify the scheduler configuration file.<\/span><\/p>\n<\/div>\n<pre class=\"lang:default decode:true \">$ wget https:\/\/storage.googleapis.com\/kubernetes-release\/release\/v1.13.0\/bin\/linux\/amd64\/kube-scheduler\r\n\r\nkube-scheduler.service\r\n\r\nExecStart=\/usr\/local\/bin\/kube-scheduler \\\\\r\n--config=\/etc\/kubernetes\/config\/kube-scheduler.yaml \\\\\r\n--v=2<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\">View kube-scheduler options &#8211; kubeadm<\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">If you set it up with the kubeadm tool, you can see the options within the Pod definition file located in the \/etc\/kubernetes\/manifests folder.<\/span><\/p>\n<\/div>\n<pre class=\"\" data-purpose=\"cue-text\">$ cat \/etc\/kubernetes\/manifests\/kube-scheduler.yaml\r\n\r\nspec:\r\n  containers:\r\n  - command:\r\n    - kube-scheduler\r\n    - --address=127.0.0.1\r\n    - --kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --leader-elect=true<\/pre>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">You can also see the running process and the effective options by listing the processes <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">on the master node and searching for kube-scheduler.<\/span><\/p>\n<\/div>\n<pre class=\"lang:default decode:true\">$ ps aux | grep kube-scheduler\r\nroot 2477 0.8 1.6 48524 34044 ? 
Ssl 17:31 0:08 kube-scheduler --address=127.0.0.1 --kubeconfig=\/etc\/kubernetes\/scheduler.conf --leader-elect=true<\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #3366ff;\"><strong>Kubelet<\/strong><\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">The kubelet on a Kubernetes worker node registers the node with the Kubernetes cluster. When it receives instructions to load a container or a Pod on the node, it asks the container runtime engine, which may be Docker, to pull the required image and run an instance. The kubelet then continues to monitor the state of the Pod and the containers in it, and reports to the kube-apiserver <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">at regular intervals.<\/span><\/p>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\"><strong>Installing kubelet<\/strong><\/span><\/p>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">If you use the kubeadm tool to deploy your cluster, it does not automatically deploy the kubelet; that is where it differs from the other components. You must always manually install the kubelet on your worker nodes. Download the binary, extract it, and run it as a service. 
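<\/span><\/p>\n<p><span class=\"\" data-purpose=\"cue-text\">Besides its command-line flags, the kubelet reads a config file of kind KubeletConfiguration, referenced by its --config flag. The following is only a minimal sketch; the paths and values are assumptions, not a definitive configuration:<\/span><\/p>\n<pre class=\"lang:default decode:true \"># \/var\/lib\/kubelet\/kubelet-config.yaml (hypothetical example)\r\napiVersion: kubelet.config.k8s.io\/v1beta1\r\nkind: KubeletConfiguration\r\nauthentication:\r\n  x509:\r\n    clientCAFile: \/var\/lib\/kubernetes\/ca.pem\r\nauthorization:\r\n  mode: Webhook\r\nclusterDomain: cluster.local\r\nclusterDNS:\r\n- 10.32.0.10<\/pre>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">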
<\/span><\/p>\n<pre class=\"lang:default decode:true \">$ wget https:\/\/storage.googleapis.com\/kubernetes-release\/release\/v1.13.0\/bin\/linux\/amd64\/kubelet\r\n\r\nkubelet.service\r\nExecStart=\/usr\/local\/bin\/kubelet \\\\\r\n--config=\/var\/lib\/kubelet\/kubelet-config.yaml \\\\\r\n--container-runtime=remote \\\\\r\n--container-runtime-endpoint=unix:\/\/\/var\/run\/containerd\/containerd.sock \\\\\r\n--image-pull-progress-deadline=2m \\\\\r\n--kubeconfig=\/var\/lib\/kubelet\/kubeconfig \\\\\r\n--network-plugin=cni \\\\\r\n--register-node=true \\\\\r\n--v=2<\/pre>\n<p>&nbsp;<\/p>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\">View kubelet options<\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">You can view the running kubelet process and the effective options by listing the processes on the worker node and searching for kubelet.<\/span><\/p>\n<\/div>\n<pre class=\"lang:default decode:true \">$ ps aux | grep kubelet\r\nroot 2095 1.8 2.4 960676 98788 ? 
Ssl 02:32 0:36 \/usr\/bin\/kubelet --bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf --kubeconfig=\/etc\/kubernetes\/kubelet.conf --config=\/var\/lib\/kubelet\/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=\/opt\/cni\/bin --cni-conf-dir=\/etc\/cni\/net.d --network-plugin=cni<\/pre>\n<p>&nbsp;<\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\"><strong><span class=\"\" data-purpose=\"cue-text\">kube-proxy<\/span><\/strong><\/span><\/p>\n<\/div>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">kube-proxy is a process that runs on each node in the Kubernetes cluster. Its job is to look for new services, and every time a new service is created, it creates the appropriate rules on each node to forward traffic for that service to the backend Pods. <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">One way it does this is using iptables rules.<\/span><\/p>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">Within a Kubernetes cluster, every Pod can reach every other Pod. This is accomplished by deploying a Pod networking solution to the cluster. A Pod network is an internal virtual network that spans all the nodes in the cluster and to which all the Pods connect. 
Through this network, they&#8217;re able to communicate with each other.<\/span><\/p>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\"><strong>Installing kube-proxy<\/strong><\/span><\/p>\n<pre class=\"lang:default decode:true\">$ wget https:\/\/storage.googleapis.com\/kubernetes-release\/release\/v1.13.0\/bin\/linux\/amd64\/kube-proxy\r\n\r\nkube-proxy.service\r\nExecStart=\/usr\/local\/bin\/kube-proxy \\\\\r\n--config=\/var\/lib\/kube-proxy\/kube-proxy-config.yaml\r\nRestart=on-failure\r\nRestartSec=5<\/pre>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span style=\"color: #3366ff;\"><strong>View kube-proxy &#8211; kubeadm<\/strong><\/span><\/p>\n<p tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">The kubeadm tool deploys kube-proxy as Pods on each node.<\/span><\/p>\n<pre class=\"lang:default decode:true\">$ kubectl get pods --all-namespaces\r\nNAMESPACE NAME READY STATUS RESTARTS AGE\r\nkube-system coredns-78fcdf6894-hwrq9 1\/1 Running 0 16m\r\nkube-system coredns-78fcdf6894-rzhjr 1\/1 Running 0 16m\r\nkube-system etcd-master 1\/1 Running 0 15m\r\nkube-system kube-apiserver-master 1\/1 Running 0 15m\r\nkube-system kube-controller-manager-master 1\/1 Running 0 15m\r\nkube-system kube-proxy-lzt6f 1\/1 Running 0 16m\r\nkube-system kube-proxy-zm5qd 1\/1 Running 0 16m\r\nkube-system kube-scheduler-master 1\/1 Running 0 15m\r\nkube-system weave-net-29z42 2\/2 Running 1 16m\r\nkube-system weave-net-snmdl 2\/2 Running 1 16m<\/pre>\n<div class=\"transcript--cue-container--wu3UY\">\n<p class=\"transcript--underline-cue--3osdw\" tabindex=\"-1\" role=\"button\" data-purpose=\"transcript-cue\"><span class=\"\" data-purpose=\"cue-text\">In fact, it is deployed as a DaemonSet, so a single Pod is always deployed <\/span><span class=\"transcript--highlight-cue--1bEgq\" data-purpose=\"cue-text\">on each node in the cluster.<\/span><\/p>\n<\/div>\n<pre 
class=\"lang:default decode:true \">$ kubectl get daemonset -n kube-system\r\nNAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE\r\nkube-proxy 2 2 2 2 2 beta.kubernetes.io\/arch=amd64 1h<\/pre>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The Kubernetes architecture consists of a lot of different components working with each other, talking to each other in many different ways. So they all need to know where the other components are. There are different modes of authentication, authorization, encryption and security.<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[99],"tags":[],"_links":{"self":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5191"}],"collection":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/comments?post=5191"}],"version-history":[{"count":25,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5191\/revisions"}],"predecessor-version":[{"id":5467,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/posts\/5191\/revisions\/5467"}],"wp:attachment":[{"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/media?parent=5191"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"h
ttp:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/categories?post=5191"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/miro.borodziuk.eu\/index.php\/wp-json\/wp\/v2\/tags?post=5191"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}