October 13

Adding additional master nodes to an existing cluster to make it HA

If the cluster was installed with kubeadm init --skip-phases=addon/kube-proxy alone (that is, without --control-plane-endpoint), then when adding an additional master node we get the error

One or more conditions for hosting a new control plane instance is not satisfied.

unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.


To see the stack trace of this error execute with --v=5 or higher
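
A quick way to confirm the cause is to check whether controlPlaneEndpoint is set in the kubeadm-config ConfigMap (here it is absent, which is exactly the problem):

kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint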

To install an HA cluster you need to pass --control-plane-endpoint, so the installation would look like this:

kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs --skip-phases=addon/kube-proxy
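
LOAD_BALANCER_DNS:LOAD_BALANCER_PORT should point at something that balances TCP traffic across all apiservers. The 127.0.0.1:8443 endpoint used later in this note suggests a local proxy running on every node; a minimal haproxy sketch under that assumption (the backend addresses 10.0.0.11-13 are hypothetical):

frontend kube-apiserver
    bind 127.0.0.1:8443
    mode tcp
    default_backend kube-apiservers

backend kube-apiservers
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 10.0.0.11:6443 check
    server master2 10.0.0.12:6443 check
    server master3 10.0.0.13:6443 check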

To fix the existing cluster, we change its config:

kubectl -n kube-system get cm kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.30.5
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
kind: ConfigMap
metadata:
  creationTimestamp: "2024-10-06T17:58:21Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "238"
  uid: 8ecbc051-e14f-4a84-a8b4-6e91b9f36d34

Edit the config with kubectl -n kube-system edit cm kubeadm-config -o yaml

and add controlPlaneEndpoint: LOAD_BALANCER_DNS:LOAD_BALANCER_PORT
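
One caveat: the apiserver serving certificate only contains the SANs that kubeadm knew about at init time, so if the new endpoint is a DNS name or IP that is not already in the certificate, clients going through it will fail TLS verification. A sketch of re-issuing the certificate on a master node (the file name kubeadm.yaml is arbitrary; add the endpoint under apiServer.certSANs before running the last command):

kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml

# edit kubeadm.yaml: add the load balancer name/IP to apiServer.certSANs,
# then move the old certificate out of the way and let kubeadm re-issue it
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/backup/
kubeadm init phase certs apiserver --config kubeadm.yaml

# restart kube-apiserver so it picks up the new certificate (for example by
# briefly moving /etc/kubernetes/manifests/kube-apiserver.yaml out and back)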

kubectl -n kube-system get cm kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 127.0.0.1:8443
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.30.5
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
kind: ConfigMap
metadata:
  creationTimestamp: "2024-10-06T17:58:21Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "36598"
  uid: 8ecbc051-e14f-4a84-a8b4-6e91b9f36d34
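
With controlPlaneEndpoint in place, the precondition check passes and a new master can be joined. A sketch of the usual sequence (the token, hash, and certificate key below are placeholders printed by the first two commands):

# on an existing master: re-upload the control plane certificates
# and print the certificate key needed to decrypt them
kubeadm init phase upload-certs --upload-certs

# on an existing master: print a fresh join command (token + CA cert hash)
kubeadm token create --print-join-command

# on the new node: combine the outputs and join as a control plane member
kubeadm join LOAD_BALANCER_DNS:LOAD_BALANCER_PORT --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>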