October 20

Removing worker and control-plane nodes from the cluster

1) List the nodes and identify the one to remove:

kubectl get nodes
NAME  STATUS  ROLES  AGE  VERSION
ip-192-168-31-20  Ready control-plane  5d23h  v1.30.5
ip-192-168-31-21  Ready control-plane  21h  v1.30.5
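
Before draining, it helps to see what is actually running on the node that is about to go away. A quick check, using the same node name as above:

# list everything scheduled on the node before evicting it
kubectl get pods -A --field-selector spec.nodeName=ip-192-168-31-21 -o wide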

2) Drain the node so its workloads are evicted and nothing new gets scheduled on it:

kubectl drain ip-192-168-31-21 --ignore-daemonsets --delete-emptydir-data
node/ip-192-168-31-21 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/cilium-envoy-lfdl5, kube-system/cilium-rtht8
evicting pod kube-system/cilium-operator-768b9ff749-pdbh2
evicting pod default/my-nginx-fdd6574f7-rm92k
pod/my-nginx-fdd6574f7-rm92k evicted
pod/cilium-operator-768b9ff749-pdbh2 evicted
node/ip-192-168-31-21 drained
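
After the drain the node stays Ready but is cordoned; a quick check before deleting it:

# the node should now report Ready,SchedulingDisabled in the STATUS column
kubectl get node ip-192-168-31-21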

3) Delete the Node object from the cluster:

kubectl delete node ip-192-168-31-21
node "ip-192-168-31-21" deleted

4) Not sure about this one, it might bring the whole thing down. Run kubeadm reset on the node being removed:

pi@ip-192-168-31-21:~$ sudo kubeadm reset -f
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1020 20:11:59.472225    3274 configset.go:78] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
W1020 20:11:59.485375    3274 reset.go:124] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "ip-192-168-31-21" not found
[preflight] Running pre-flight checks
W1020 20:11:59.485467    3274 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
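
As the output says, kubeadm reset does not touch iptables or IPVS state. A minimal cleanup sketch following its hints (only run the ipvsadm line if kube-proxy actually ran in IPVS mode):

# flush rules left over by kube-proxy / the CNI
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# clear IPVS tables, if IPVS was used
sudo ipvsadm --clear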

5) On the node, remove the leftover CNI config, certificates, etcd data, kubelet state and the local kubeconfig:

sudo rm -rf /etc/cni
sudo rm -rf /etc/kubernetes
sudo rm -rf /var/lib/etcd
sudo rm -rf /var/lib/kubelet
sudo rm -rf .kube/
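
A quick sanity check that nothing was left behind (just listing the directories removed above):

# should print nothing if the cleanup worked
ls -d /etc/cni /etc/kubernetes /var/lib/etcd /var/lib/kubelet 2>/dev/null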

6) Clean up the remaining containers and images in containerd:

sudo crictl rmi --prune
sudo crictl rm --all
sudo ctr -n k8s.io containers list
sudo ctr -n k8s.io containers rm 487df21e97d9fda1674c25eff01b82acca720200b40c603faa388b737a0ccbcf
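
Instead of removing leftovers one by one by ID, everything left in containerd's k8s.io namespace can be cleaned up in bulk; a sketch assuming containerd and GNU xargs:

# remove any remaining containers and images in the k8s.io namespace
sudo ctr -n k8s.io containers list -q | xargs -r sudo ctr -n k8s.io containers rm
sudo ctr -n k8s.io images list -q | xargs -r sudo ctr -n k8s.io images rm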

https://iam.slys.dev/p/removing-worker-and-control-plane-nodes-from-the-kubernetes-cluster

https://paranoiaque.fr/en/2020/04/19/remove-master-node-from-ha-kubernetes/