July 3, 2020

Set up a Kubernetes cluster using kubeadm on vSphere virtual machines.

Create Virtual Machines.

Let us create the required number of virtual machines for setting up the cluster, using the preferred operating system. Here, I am going with Ubuntu 18.04.3. I plan to set up a cluster with a single control plane (master) node and three worker nodes.

Each node should be equipped with at least 2GB of memory, 20GB of disk space and 2 vCPUs. To keep disk space usage optimal in VMware, enable thin provisioning while creating the virtual disks.

Let us customise the virtual machines with the preferred configuration and boot them from the ISO. Once the virtual machines are created successfully, go ahead with the below steps to configure the Kubernetes cluster.

Setup Networking

Based on your networking solution, configure network settings in the virtual machines. Ensure that all the machines are connected to each other.
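For a quick check, ping every other node from each machine; the IP below is just a placeholder for one of your nodes.

ping -c 3 <node-ip>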


Setup hostname (Optional)

Set a meaningful hostname on all the nodes if necessary.

sudo hostnamectl set-hostname <hostname>

Reboot the machine to make the change effective.
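If you are not relying on DNS, you can also map the node names to their IPs in /etc/hosts on every machine. The addresses below are placeholders; substitute the IPs of your own nodes.

192.168.10.10   master
192.168.10.11   host1
192.168.10.12   host2
192.168.10.13   host3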

Enable ssh on the machines

If ssh is not configured, install openssh-server on the virtual machines and enable connectivity between them.

sudo apt-get install openssh-server -y
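Once installed, confirm that the service is running and that you can log in to the other nodes; the user and hostname below are placeholders for your own setup.

sudo systemctl status ssh
ssh <user>@host1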

Disable swap on the virtual machines.

As a super user, execute the below command on all the machines to disable swap.

swapoff -a

In order to disable swap permanently, comment out the swap entry in the /etc/fstab file.
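For example, the following commands (one convenient way, not the only one) back up /etc/fstab and comment out any swap entry; verify the file afterwards before rebooting.

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab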

This can be verified using the following command.

root@host1:~# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G        990M        6.0G         13M        797M        6.6G
Swap:          2.0G          0B        2.0G

Note: This has to be done on all the machines.

Install necessary Packages

Let us install curl and `apt-transport-https` on all the machines.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

Obtain the key for the Kubernetes repository and add it to your local keyring by executing the below command.

root@host1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK

After adding the above key, execute the below command to add the Kubernetes repository to your local system.

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

kubeadm, kubectl and kubelet installation

After adding the repository, install kubeadm, kubelet and kubectl on all the machines.

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

After installing the above packages, let us hold them at their current version by executing the following command.

root@host1:~# sudo apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
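You can confirm the hold status at any time with the following command.

apt-mark showhold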

Install Container Runtime

On each node, a container runtime should be installed to manage the containers. In this setup, I will install `docker` as the container runtime by executing the below command.

sudo apt-get install docker.io -y
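After the installation, it is worth making sure Docker is enabled and running on every node.

sudo systemctl enable docker
sudo systemctl status docker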

Install Control plane

In the master node, execute the kubeadm init command to deploy the control plane components. The --pod-network-cidr value should match the pod CIDR expected by the CNI plugin you install later; the Calico manifest used below defaults to 192.168.0.0/16.

kubeadm init --pod-network-cidr=192.168.0.0/16

When the above command executes successfully, it will print a kubeadm join command to be executed on all the worker nodes to join them to the master.
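If you lose that output, you can print a fresh join command from the master node at any time.

kubeadm token create --print-join-command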

Worker nodes.

After configuring the master node successfully, configure the worker nodes by executing the join command displayed on the master node.

kubeadm join x.x.x.x:6443 --token <token> \
    --discovery-token-ca-cert-hash <hash>

Accessing Cluster

You can communicate with the cluster components using the kubectl interface. In order to communicate, you need the cluster's kubeconfig file to be placed in the home directory of the user from which you want to access the cluster. Once the cluster is created, a file named admin.conf is generated in the /etc/kubernetes directory. This file has to be copied to the home directory of the target user.

Let us execute the below commands as the non-root user to access the cluster from that user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
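If you are operating as root, you can instead point kubectl at the admin kubeconfig directly, as suggested in the kubeadm init output.

export KUBECONFIG=/etc/kubernetes/admin.conf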

After setting up the kubeconfig file, check the node status. All the machines will be in the NotReady state.

k8s@master:~$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   5m41s   v1.17.2
host1    NotReady   <none>   3m2s    v1.17.2
host2    NotReady   <none>   2m58s   v1.17.2
host3    NotReady   <none>   2m54s   v1.17.2

And you can observe that the coredns pods have not started.

NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-9nlw5         0/1     Pending   0          4m33s
kube-system   coredns-6955765f44-wjxj2         0/1     Pending   0          4m33s
kube-system   etcd-master                      1/1     Running   0          4m45s
kube-system   kube-apiserver-master            1/1     Running   0          4m45s
kube-system   kube-controller-manager-master   1/1     Running   0          4m45s
kube-system   kube-proxy-bzcbw                 1/1     Running   0          2m6s
kube-system   kube-proxy-clmpz                 1/1     Running   0          2m14s
kube-system   kube-proxy-crx5v                 1/1     Running   0          4m32s
kube-system   kube-proxy-xcmlv                 1/1     Running   0          2m10s
kube-system   kube-scheduler-master            1/1     Running   0          4m45s

This will be resolved when you deploy a CNI network plugin in the cluster. Here, I will deploy Calico by executing the following command on the master node.

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

In the next few minutes, your cluster will come up successfully. Check the node status and ensure the successful creation.

k8s@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   50m   v1.17.2
host1    Ready    <none>   47m   v1.17.2
host2    Ready    <none>   47m   v1.17.2
host3    Ready    <none>   47m   v1.17.2

You can check the cluster state by executing the following command.

k8s@master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       abc1-b95b76d84-2qmhw                       1/1     Running   0          2m41s
kube-system   calico-kube-controllers-5c45f5bd9f-r9rxj   1/1     Running   0          4m59s
kube-system   calico-node-bd4tx                          1/1     Running   0          5m
kube-system   calico-node-lxk75                          1/1     Running   0          5m
kube-system   calico-node-zmnn4                          1/1     Running   0          5m
kube-system   calico-node-zzvhk                          1/1     Running   0          5m
kube-system   coredns-6955765f44-9nlw5                   1/1     Running   0          10m
kube-system   coredns-6955765f44-wjxj2                   1/1     Running   0          10m
kube-system   etcd-master                                1/1     Running   0          10m
kube-system   kube-apiserver-master                      1/1     Running   0          10m
kube-system   kube-controller-manager-master             1/1     Running   0          10m
kube-system   kube-proxy-bzcbw                           1/1     Running   0          8m19s
kube-system   kube-proxy-clmpz                           1/1     Running   0          8m27s
kube-system   kube-proxy-crx5v                           1/1     Running   0          10m
kube-system   kube-proxy-xcmlv                           1/1     Running   0          8m23s
kube-system   kube-scheduler-master                      1/1     Running   0          10m

Now, the kubernetes cluster has been created successfully. You can verify this by setting up a deployment/pod.

k8s@master:~$ kubectl create deploy nginx --image=nginx
deployment.apps/nginx created

You can check the pod status by executing the below command.

k8s@master:~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86c57db685-rpzm2   1/1     Running   0          70s
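Optionally, you can also expose the deployment through a NodePort service and reach it from outside the cluster; the node IP and node port below are placeholders you can read from the kubectl get svc output.

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx
curl http://<node-ip>:<node-port>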

Deleting the cluster.

The Kubernetes cluster can be torn down by executing the below command on each node.

sudo kubeadm reset
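Note that kubeadm reset does not remove the CNI configuration or the copied kubeconfig file; if you want a clean slate, you can remove them manually. The paths below are the defaults used in this setup.

sudo rm -rf /etc/cni/net.d
rm -f $HOME/.kube/config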

Thus, a cluster can be deleted.