Kubernetes is an open source platform for managing containerized applications. It allows you to deploy, scale, and manage containerized applications in a clustered environment. Kubernetes was originally developed by Google.
With Kubernetes, you can orchestrate containers across multiple hosts, scale the containerized applications with all resources on the fly, and have a centralized container management environment.
In this tutorial, I will show you step by step how to install and configure Kubernetes on CentOS 8. We will use one server, 'KubeMaster', as the Kubernetes master node, and two servers, 'minion-1' and 'minion-2', as Kubernetes workers:
- KubeMaster: 192.168.4.130
- minion-1 : 192.168.4.131
- minion-2 : 192.168.4.132
Master Node – This machine generally acts as the control plane and runs the cluster database and the API server (which the kubectl CLI communicates with).
Our 3-node Kubernetes cluster will consist of one master node and two worker nodes.
Prepare Hostname, Firewall, swap and SELinux
On your CentOS 8 master node, set the system hostname and add the host entries to your /etc/hosts file.

[root@KubeMaster ~]# cat <<EOF >> /etc/hosts
192.168.4.130 KubeMaster
192.168.4.131 minion-1
192.168.4.132 minion-2
EOF
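The heredoc above appends the entries unconditionally, so running it twice duplicates them. A minimal idempotent variant is sketched below; it targets a temporary file (`/tmp/hosts.test`) rather than the real /etc/hosts so it is safe to try anywhere.

```shell
#!/bin/sh
# Sketch: add cluster host entries only if they are not already present.
# HOSTS_FILE stands in for /etc/hosts so the script is safe to test.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.test}"
: > "$HOSTS_FILE"                    # start from an empty file for the demo

add_host() {
    ip="$1"; name="$2"
    # grep -qw succeeds if the hostname is already listed as a whole word
    if ! grep -qw "$name" "$HOSTS_FILE"; then
        printf '%s %s\n' "$ip" "$name" >> "$HOSTS_FILE"
    fi
}

add_host 192.168.4.130 KubeMaster
add_host 192.168.4.131 minion-1
add_host 192.168.4.132 minion-2
add_host 192.168.4.130 KubeMaster   # duplicate call: no second entry written

cat "$HOSTS_FILE"
```

Pointing HOSTS_FILE at /etc/hosts (as root) applies the same logic to the real file.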
Next, disable SELinux (on all three machines), as this is required to allow containers to access the host filesystem, which pod networks and other services need.
[root@KubeMaster ~]# setenforce 0
setenforce: SELinux is disabled
To completely disable it, use the below command and reboot.
[root@KubeMaster ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Configure the firewall rules to open the required ports on the master node.

# firewall-cmd --zone=public --permanent --add-port={2379,2380,6443,10250,10251,10252,10255}/tcp
# firewall-cmd --reload
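The brace expansion above opens all the ports in one call. If you prefer to review each rule before applying it, the list can be expanded into individual invocations, echoed here as a dry run (remove the `echo` to execute for real). The port set below assumes the standard kubeadm control-plane ports, where 10251 is kube-scheduler and 10252 is kube-controller-manager.

```shell
# Sketch: expand the control-plane port list into individual firewall-cmd
# calls. Echoed as a dry run so the commands can be reviewed first.
PORTS="2379 2380 6443 10250 10251 10252 10255"
for p in $PORTS; do
    echo firewall-cmd --zone=public --permanent --add-port="${p}/tcp"
done
echo firewall-cmd --reload
```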
Next, disable swap (on all three machines) with the following command:
[root@KubeMaster ~]# swapoff -a
We must also ensure that swap isn't re-enabled during a reboot on each server. Open /etc/fstab and comment out the swap entry like this:

[root@KubeMaster ~]# cat /etc/fstab | grep swap
#/dev/mapper/cs_controller-swap swap swap defaults 0 0
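Commenting the entry by hand works fine; if you want to script it across all three machines, the edit can be done with sed. The sketch below operates on a temporary copy with sample contents so it runs anywhere; on a real host you would point FSTAB at /etc/fstab (after taking a backup).

```shell
# Sketch: comment out any uncommented entry whose filesystem type is "swap".
# FSTAB is a throwaway sample file; use /etc/fstab on a real host.
FSTAB=/tmp/fstab.test
cat > "$FSTAB" <<'EOF'
/dev/mapper/cs-root / xfs defaults 0 0
/dev/mapper/cs_controller-swap swap swap defaults 0 0
EOF

# Prefix matching lines with '#'; already-commented lines are skipped
sed -i '/^[^#].*[[:space:]]swap[[:space:]]/ s/^/#/' "$FSTAB"
cat "$FSTAB"
```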
Enable br_netfilter
For our next trick, we'll enable the br_netfilter kernel module on all three servers. This is done with the following commands:

[root@KubeMaster ~]# modprobe br_netfilter
[root@KubeMaster ~]# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
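Writing to /proc sets the value only until the next reboot. A common way to make it persistent is a drop-in file under /etc/sysctl.d followed by `sysctl --system`; the sketch below writes the file to a temporary directory (SYSCTL_DIR) so it can be run and inspected without touching system configuration.

```shell
# Sketch: persist the bridge-netfilter sysctl so it survives reboots.
# On a real host, set SYSCTL_DIR=/etc/sysctl.d and then run 'sysctl --system'.
SYSCTL_DIR="${SYSCTL_DIR:-/tmp/sysctl.d}"
mkdir -p "$SYSCTL_DIR"
cat > "$SYSCTL_DIR/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
EOF
cat "$SYSCTL_DIR/k8s.conf"
```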
Install Docker:
1. Add the repository for the docker installation package.
[root@KubeMaster ~]# dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
2. Install containerd.io, which is not yet provided by CentOS 8's package manager, before installing Docker.
[root@KubeMaster ~]# dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
Last metadata expiration check: 0:03:03 ago on Mon 08 Jun 2020 05:38:24 PM IST.
containerd.io-1.2.6-3.3.el7.x86_64.rpm                    5.3 MB/s |  26 MB  00:04
Dependencies resolved.
===============================================================================
 Package            Arch    Version                                  Repository    Size
===============================================================================
Installing:
 containerd.io      x86_64  1.2.6-3.3.el7                            @commandline   26 M
Installing dependencies:
 container-selinux  noarch  2:2.124.0-1.module_el8.1.0+298+41f9343a  AppStream      47 k

Transaction Summary
===============================================================================
Install  2 Packages
3. Then install Docker from the repositories.
[root@KubeMaster ~]# dnf install docker-ce
Last metadata expiration check: 0:39:00 ago on Mon 08 Jun 2020 05:38:24 PM IST.
Dependencies resolved.
===============================================================================
 Package        Arch    Version           Repository        Size
===============================================================================
Installing:
 docker-ce      x86_64  3:19.03.11-3.el7  docker-ce-stable   24 M
Installing dependencies:
 docker-ce-cli  x86_64  1:19.03.11-3.el7  docker-ce-stable   38 M
 libcgroup      x86_64  0.41-19.el8       BaseOS             70 k

Transaction Summary
===============================================================================
Install  3 Packages
4. Start & enable the docker service.
[root@KubeMaster ~]# systemctl start docker.service
[root@KubeMaster ~]# systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
5. Check the Docker version.

[root@KubeMaster ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:13:48 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:12:26 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
6. List the Docker images on the system. (Note: the listing below was captured after the full cluster was set up; on a fresh Docker installation this list will be empty.)

[root@KubeMaster ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
weaveworks/weave-npc                 2.6.4     78ae9e32f34e   9 days ago     36.8MB
weaveworks/weave-kube                2.6.4     32950afead86   9 days ago     123MB
k8s.gcr.io/kube-proxy                v1.18.3   3439b7546f29   2 weeks ago    117MB
k8s.gcr.io/kube-controller-manager   v1.18.3   da26705ccb4b   2 weeks ago    162MB
k8s.gcr.io/kube-apiserver            v1.18.3   7e28efa976bd   2 weeks ago    173MB
k8s.gcr.io/kube-scheduler            v1.18.3   76216c34ed0c   2 weeks ago    95.3MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   3 months ago   683kB
k8s.gcr.io/coredns                   1.6.7     67da37a9a360   4 months ago   43.8MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   7 months ago   288MB
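If you want to check such a listing programmatically, awk can filter it by repository prefix. The sketch below uses a small canned sample of `docker images`-style output so it runs without Docker installed; on a real host you would pipe the actual `docker images` output instead.

```shell
# Sketch: count k8s.gcr.io images in 'docker images'-style output.
# 'images' is a canned sample; replace with $(docker images) on a real host.
images='REPOSITORY TAG
weaveworks/weave-npc 2.6.4
k8s.gcr.io/kube-proxy v1.18.3
k8s.gcr.io/etcd 3.4.3-0'

# Skip the header row, count rows whose repository starts with k8s.gcr.io/
count=$(echo "$images" | awk 'NR>1 && $1 ~ /^k8s\.gcr\.io\// {n++} END {print n+0}')
echo "k8s.gcr.io images: $count"
```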
Now that Docker is ready to go, continue below to install Kubernetes itself.
Installing Kubernetes:
Add the Kubernetes repository to your package manager by creating the following file:

[root@KubeMaster ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
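Since a typo in the repo file only shows up later as a confusing dnf error, it is worth verifying the file right after writing it. The sketch below generates the same repo definition into a test directory (REPO_DIR) and checks the baseurl; on a real host you would set REPO_DIR=/etc/yum.repos.d.

```shell
# Sketch: write the Kubernetes repo file and verify its baseurl.
# REPO_DIR is a test location; use /etc/yum.repos.d on a real host.
REPO_DIR="${REPO_DIR:-/tmp/yum.repos.d}"
mkdir -p "$REPO_DIR"
cat > "$REPO_DIR/kubernetes.repo" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Quick verification that the repo file landed with the expected baseurl
grep '^baseurl=' "$REPO_DIR/kubernetes.repo"
```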
Kubeadm is a tool built to provide "kubeadm init" and "kubeadm join" as best-practice “fast paths” for creating Kubernetes clusters.
kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons, like the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons, is not in scope.
[root@KubeMaster ~]# dnf install kubeadm -y
Last metadata expiration check: 0:00:56 ago on Mon 08 Jun 2020 06:50:40 PM IST.
Dependencies resolved.
===============================================================================
 Package                 Arch    Version        Repository         Size
===============================================================================
Installing:
 kubeadm                 x86_64  1.18.3-0       kubernetes        8.8 M
Installing dependencies:
 conntrack-tools         x86_64  1.4.4-10.el8   Stream-BaseOS     204 k
 cri-tools               x86_64  1.13.0-0       kubernetes        5.1 M
 kubectl                 x86_64  1.18.3-0       kubernetes        9.5 M
 kubelet                 x86_64  1.18.3-0       kubernetes         21 M
 kubernetes-cni          x86_64  0.7.5-0        kubernetes         10 M
 libnetfilter_cthelper   x86_64  1.0.0-15.el8   Stream-BaseOS      24 k
 libnetfilter_cttimeout  x86_64  1.0.0-11.el8   BaseOS             24 k
 libnetfilter_queue      x86_64  1.0.2-11.el8   BaseOS             30 k
 socat                   x86_64  1.7.3.3-2.el8  Stream-AppStream  302 k

Transaction Summary
===============================================================================
Install  10 Packages
Start the Kubernetes services and enable them to run at startup.

[root@KubeMaster ~]# systemctl start kubelet.service
[root@KubeMaster ~]# systemctl enable kubelet.service
Set up the Kubernetes Control Plane
After installing the Kubernetes related tooling on all your machines, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.
The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. You can easily initialize the Kubernetes master node with all the necessary control plane components using kubeadm.
[root@KubeMaster ~]# kubeadm init
Next, copy the following command and store it somewhere safe, as we will need to run it on the worker nodes later.

kubeadm join 192.168.4.130:6443 --token pknv09.7zu6jfcqpjp1r9cf \
    --discovery-token-ca-cert-hash sha256:98685c3b9ea0611c28c17889784e8fe0d058996de36138e761f932b2a08a90db
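Rather than relying on copy-and-paste, you can save the join command to a file and pull fields back out of it when needed. A sketch (the join string below is the one captured above; if you ever lose it, running `kubeadm token create --print-join-command` on the master regenerates a valid one):

```shell
# Sketch: store the join command printed by 'kubeadm init' and extract
# the bootstrap token from it for later inspection.
JOIN_CMD='kubeadm join 192.168.4.130:6443 --token pknv09.7zu6jfcqpjp1r9cf --discovery-token-ca-cert-hash sha256:98685c3b9ea0611c28c17889784e8fe0d058996de36138e761f932b2a08a90db'
echo "$JOIN_CMD" > /tmp/kubeadm-join.sh

# Extract the token field (the word following --token)
TOKEN=$(echo "$JOIN_CMD" | sed 's/.*--token \([^ ]*\).*/\1/')
echo "$TOKEN"
```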
Make the following directory and configuration files.

[root@KubeMaster ~]# mkdir -p $HOME/.kube
[root@KubeMaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@KubeMaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now confirm that the kubectl command is working.

[root@KubeMaster ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
kubemaster   NotReady   master   33m   v1.18.3
At this moment, you will see that the status of the master node is 'NotReady'. This is because we have yet to deploy the pod network to the cluster.
The pod network is an overlay network for the cluster that is deployed on top of the existing node network. It is designed to allow connectivity between pods.
Setting up Pod Networking:
Deploying the pod network is a highly flexible process depending on your needs, and many options are available. Since we want to keep our installation as simple as possible, we will use the Weave Net plugin, which does not require any configuration or extra code and provides one IP address per pod, which is great for us.
[root@KubeMaster ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@KubeMaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
Now if you check the status of your master node, it should be 'Ready'.

[root@KubeMaster ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
kubemaster   Ready    master   41m   v1.18.3
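If you script such checks, a small awk helper can flag any node that has not reached Ready. The sketch below is fed the canned listing from above so it runs without a cluster; on the master you would pipe real `kubectl get nodes` output into it instead.

```shell
# Sketch: report nodes whose STATUS column is not "Ready".
# 'sample' is canned 'kubectl get nodes' output for offline testing.
sample='NAME         STATUS   ROLES    AGE   VERSION
kubemaster   Ready    master   41m   v1.18.3'

# Skip the header row; collect node names where column 2 != Ready
not_ready=$(echo "$sample" | awk 'NR>1 && $2 != "Ready" {print $1}')
if [ -n "$not_ready" ]; then
    echo "Nodes not ready: $not_ready"
else
    echo "All nodes Ready"
fi
```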
Next, we add the worker nodes to the cluster.

Adding Worker Nodes to the Kubernetes Cluster:
Prepare Hostname, Firewall, swap and SELinux
First set the hostname on your worker-node-1 and worker-node-2, then add the host entries to the /etc/hosts file.

# cat <<EOF >> /etc/hosts
192.168.4.130 KubeMaster
192.168.4.131 minion-1
192.168.4.132 minion-2
EOF
Next, disable SElinux and update your firewall rules.
# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Configure the firewall rules to open the required ports on the worker nodes.

# firewall-cmd --zone=public --permanent --add-port={6783,10250,10255,30000-32767}/tcp
# firewall-cmd --reload
Next, disable swap on the worker nodes with the following command:

# swapoff -a

We must also ensure that swap isn't re-enabled during a reboot on each server. Open /etc/fstab and comment out the swap entry, as we did on the master.

Then enable the br_netfilter kernel module:

# modprobe br_netfilter
# echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
Setup Docker-CE and Kubernetes Repo
Add the Docker repository first using DNF config-manager.
# dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
Next, add the containerd.io package.
# dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
Install the latest version of docker-ce.
# dnf install docker-ce -y
Enable and start the docker service.
# systemctl enable docker
# systemctl start docker
You will need to add the Kubernetes repository manually, as it is not available in the default CentOS 8 repositories.

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install Kubernetes
Install Kubernetes with kubeadm, as we did on the master.

# dnf install kubeadm -y
Start and enable the service.
# systemctl enable kubelet
# systemctl start kubelet
Join the Worker Node to the Kubernetes Cluster
We now need the token that kubeadm init generated in order to join the cluster. If you saved it earlier, copy and paste it on node-1 and node-2.
# kubeadm join 192.168.4.130:6443 --token pknv09.7zu6jfcqpjp1r9cf --discovery-token-ca-cert-hash sha256:98685c3b9ea0611c28c17889784e8fe0d058996de36138e761f932b2a08a90db
[root@minion-1 ~]# kubeadm join 192.168.4.130:6443 --token pknv09.7zu6jfcqpjp1r9cf --discovery-token-ca-cert-hash sha256:98685c3b9ea0611c28c17889784e8fe0d058996de36138e761f932b2a08a90db
W0609 06:51:23.718580    2433 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
See if the worker nodes successfully joined. Go back to the Master node and issue the following command.

[root@KubeMaster ~]# kubectl get nodes

(The original post showed a screenshot here: the status of all nodes from the master server.)

To list all container images in all namespaces, the standard kubectl invocation is:

[root@KubeMaster ~]# kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"

(The original post showed a screenshot here: the container images in all namespaces.)
Finished!
Congratulations, you should now have a working Kubernetes installation running on three nodes.
In case anything goes wrong, you can always repeat the process.
Run this on Master and Workers:
# kubeadm reset && rm -rf /etc/cni/net.d
Have fun clustering.