Kubernetes theory
September 24, 2022

PodAffinity/PodAntiAffinity

These two policies control how pods are placed relative to each other on the cluster's nodes.

To illustrate how they are used, here is an example. The resource used is a Deployment; we will discuss that resource and its uses later.

An example of using podAntiAffinity.

In this example, two Redis pods are created, and the podAntiAffinity policy assigned to them forbids scheduling both on the same cluster node. This makes the application's architecture more resilient.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 2
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
EOF

Here the policy applies to pods with the label app=store.

The pods to match are selected by label here:

        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store

An important parameter is topologyKey. It is what tells the scheduler which nodes count as the same placement domain and which do not. With kubernetes.io/hostname every node is its own domain, so the rule keeps the pods on separate nodes.
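Note that requiredDuringSchedulingIgnoredDuringExecution is a hard rule: if it cannot be satisfied (for example, fewer nodes than replicas), the extra pods stay Pending. Kubernetes also offers a soft variant, preferredDuringSchedulingIgnoredDuringExecution, which the scheduler may violate when no suitable node exists. A minimal sketch of the same anti-affinity expressed as a preference (the weight value here is illustrative):

```yaml
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100               # 1-100; a higher weight means a stronger preference
            podAffinityTerm:          # the soft form wraps the term in podAffinityTerm
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - store
              topologyKey: "kubernetes.io/hostname"
```

With this form, a third replica on a two-node cluster would still be scheduled, just alongside another one.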

podAffinity

podAffinity works in exactly the opposite way: all the pods created here will end up on the same cluster node, which speeds up their communication with each other.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  selector:
    matchLabels:
      app1: store1
  replicas: 5
  template:
    metadata:
      labels:
        app1: store1
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app1
                operator: In
                values:
                - store1
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx-server
        image: nginx
EOF

All 5 pods ended up on the same cluster node:

kubectl describe no cl10sucmg9fveg2v9mka-ijym
Name:               cl10sucmg9fveg2v9mka-ijym
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=standard-v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/zone=ru-central1-c
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=cl10sucmg9fveg2v9mka-ijym
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=standard-v2
                    node.kubernetes.io/kube-proxy-ds-ready=true
                    node.kubernetes.io/masq-agent-ds-ready=true
                    node.kubernetes.io/node-problem-detector-ds-ready=true
                    topology.kubernetes.io/zone=ru-central1-c
                    yandex.cloud/node-group-id=catnlb80c8vgpc1651n7
                    yandex.cloud/pci-topology=k8s
                    yandex.cloud/preemptible=true
Annotations:        csi.volume.kubernetes.io/nodeid:
                      {"disk-csi-driver.mks.ycloud.io":"ef33754ftqa4eme6fs70","io.ycloud.mks.disk-csi-driver":"ef33754ftqa4eme6fs70"}
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 10.130.0.5/24
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 07 Sep 2022 23:02:47 +0300
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  cl10sucmg9fveg2v9mka-ijym
  AcquireTime:     <unset>
  RenewTime:       Sun, 25 Sep 2022 00:10:47 +0300
Conditions:
  Type                          Status    LastHeartbeatTime                 LastTransitionTime                Reason                          Message
  ----                          ------    -----------------                 ------------------                ------                          -------
  FrequentContainerdRestart     Unknown   Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:41 +0300   NoFrequentContainerdRestart     containerd is functioning properly
  FrequentUnregisterNetDevice   Unknown   Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:41 +0300   NoFrequentUnregisterNetDevice   node is functioning properly
  CorruptDockerOverlay2         Unknown   Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:41 +0300   NoCorruptDockerOverlay2         docker overlay2 is functioning properly
  KernelDeadlock                False     Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:39 +0300   KernelHasNoDeadlock             kernel has no deadlock
  ReadonlyFilesystem            False     Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:39 +0300   FilesystemIsNotReadOnly         Filesystem is not read-only
  FrequentKubeletRestart        Unknown   Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:41 +0300   NoFrequentKubeletRestart        kubelet is functioning properly
  FrequentDockerRestart         Unknown   Sun, 25 Sep 2022 00:10:05 +0300   Sat, 24 Sep 2022 20:09:41 +0300   NoFrequentDockerRestart         docker is functioning properly
  NetworkUnavailable            False     Wed, 07 Sep 2022 23:02:49 +0300   Wed, 07 Sep 2022 23:02:49 +0300   RouteCreated                    RouteController created a route
  MemoryPressure                False     Sun, 25 Sep 2022 00:09:40 +0300   Sat, 24 Sep 2022 20:08:50 +0300   KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure                  False     Sun, 25 Sep 2022 00:09:40 +0300   Sat, 24 Sep 2022 20:08:50 +0300   KubeletHasNoDiskPressure        kubelet has no disk pressure
  PIDPressure                   False     Sun, 25 Sep 2022 00:09:40 +0300   Sat, 24 Sep 2022 20:08:50 +0300   KubeletHasSufficientPID         kubelet has sufficient PID available
  Ready                         True      Sun, 25 Sep 2022 00:09:40 +0300   Sat, 24 Sep 2022 20:09:01 +0300   KubeletReady                    kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.130.0.5
  ExternalIP:  51.250.45.201
  Hostname:    cl10sucmg9fveg2v9mka-ijym
Capacity:
  cpu:                2
  ephemeral-storage:  99034708Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4026052Ki
  pods:               32
Allocatable:
  cpu:                1930m
  ephemeral-storage:  49394455606
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2772676Ki
  pods:               32
System Info:
  Machine ID:                 230000073c6888b69deb8cf2a12dc048
  System UUID:                23000007-3c63-3948-fee9-44759c67f0e0
  Boot ID:                    3cfd5fd1-0498-45c7-aa3a-ecc4c684efec
  Kernel Version:             5.4.0-124-generic
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.7
  Kubelet Version:            v1.22.6
  Kube-Proxy Version:         v1.22.6
PodCIDR:                      192.168.2.0/26
PodCIDRs:                     192.168.2.0/26
ProviderID:                   yandex://ef33754ftqa4eme6fs70
Non-terminated Pods:          (18 in total)
  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
  default                     nginx-test-6fc74bfc6d-56dgx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
  default                     nginx-test-6fc74bfc6d-9f6gh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
  default                     nginx-test-6fc74bfc6d-bjxvz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
  default                     nginx-test-6fc74bfc6d-gbhcg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
  default                     nginx-test-6fc74bfc6d-jlzgz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
  default                     redis-cache-56cd95799f-s86nh                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4h3m
  kube-system                 calico-node-rnwt9                                      250m (12%)    0 (0%)      0 (0%)           0 (0%)         17d
  kube-system                 calico-typha-79cddf6bd8-8dhqc                          200m (10%)    0 (0%)      0 (0%)           0 (0%)         17d
  kube-system                 calico-typha-horizontal-autoscaler-8495b957fc-tk5xl    10m (0%)      500m (25%)  0 (0%)           0 (0%)         17d
  kube-system                 calico-typha-vertical-autoscaler-6cc57f94f4-6b7kc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17d
  kube-system                 coredns-5f8dbbff8f-r49l6                               100m (5%)     0 (0%)      70Mi (2%)        170Mi (6%)     30h
  kube-system                 coredns-5f8dbbff8f-zjc57                               100m (5%)     0 (0%)      70Mi (2%)        170Mi (6%)     17d
  kube-system                 ip-masq-agent-zhtvb                                    10m (0%)      0 (0%)      16Mi (0%)        0 (0%)         17d
  kube-system                 kube-dns-autoscaler-598db8ff9c-b4xgg                   20m (1%)      0 (0%)      10Mi (0%)        0 (0%)         17d
  kube-system                 kube-proxy-pl9b9                                       100m (5%)     0 (0%)      0 (0%)           0 (0%)         17d
  kube-system                 metrics-server-7574f55985-hktc6                        69m (3%)      164m (8%)   202Mi (7%)       452Mi (16%)    17d
  kube-system                 npd-v0.8.0-9tdw5                                       20m (1%)      200m (10%)  20Mi (0%)        100Mi (3%)     4h3m
  kube-system                 yc-disk-csi-node-v2-qqtxt                              30m (1%)      600m (31%)  96Mi (3%)        600Mi (22%)    17d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                909m (47%)   1464m (75%)
  memory             484Mi (17%)  1492Mi (55%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>
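The two policies can also be combined. A classic pattern (shown, for example, in the Kubernetes documentation) is a web tier whose pods avoid each other but are attracted to the nodes where the cache already runs. A sketch reusing the app=store label of the Redis cache above (the web-store label and deployment name are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 2
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        # spread the web pods across different nodes...
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        # ...but only onto nodes that already run a Redis cache pod
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx
```

The result is one web pod per node, co-located with a cache pod, which combines resilience with fast local access to the cache.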

Useful links.

More examples of using podAffinity/podAntiAffinity:
https://prudnitskiy.pro/post/2021-01-15-k8s-pod-distribution/

A detailed article on assigning pods to nodes: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity

I wrote about pods here: https://teletype.in/@cameda2/Yme-IYqYWB0
And about cluster nodes here: https://teletype.in/@cameda2/PVXtu-qHWcv