A pod with statically and dynamically provisioned volumes on yc-network-hdd/yc-network-ssd

In this article we walk through creating a pod backed by a PV in Yandex Cloud. In the first case we provision the volume in advance (static provisioning); in the second it is created on demand from a claim (dynamic provisioning).
Pod with a yc-network-ssd PV.

Create a disk of the required size in the same availability zone as the k8s cluster master; this applies to zonal masters.
yc compute disk create \
  --zone ru-central1-a \
  --name k8s-pv \
  --type network-ssd \
  --block-size 4K \
  --size 4 \
  --description "k8s pv for test" \
  --async
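Because the command above runs with --async, it is worth confirming that the disk was actually created and capturing its ID, which the volumeHandle field in the PV manifest below expects. A sketch, assuming the yc CLI is configured; the sed extraction simply pulls the first "id" field from the JSON output:

```shell
# Show the disk; its status should no longer be CREATING.
yc compute disk get --name k8s-pv

# Extract the disk ID into a variable for later use in volumeHandle.
DISK_ID=$(yc compute disk get --name k8s-pv --format json \
  | sed -n 's/.*"id": *"\([^"]*\)".*/\1/p' | head -n 1)
echo "$DISK_ID"
```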
Set yc-network-ssd as the default StorageClass:
kubectl patch storageclass yc-network-ssd \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

kubectl patch storageclass yc-network-hdd \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
If we skip this step, a PVC without an explicit storage class will be dynamically provisioned from yc-network-hdd, which is not the disk type we prepared.
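The -p argument to kubectl patch is plain JSON, and a missing brace or quote makes kubectl reject the whole patch. A quick local sanity check of the payload before sending it to the cluster (a sketch; assumes python3 is available):

```shell
# Validate the patch payload locally: python3 -m json.tool pretty-prints
# valid JSON and exits non-zero on malformed input.
patch='{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
printf '%s' "$patch" | python3 -m json.tool
```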
Create the PV, PVC, and Pod on top of this disk.

In the volumeHandle: field, insert the ID of the disk created earlier.
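To avoid hand-editing the manifest each time, the ID can be substituted into a placeholder with sed. A minimal sketch; the __DISK_ID__ placeholder, the temp file path, and the DISK_ID value here are illustrative, not part of the manifests below:

```shell
# Example disk ID; replace with the ID of your own disk.
DISK_ID="fhmkr5gsa9fbao9efm98"

# A one-line manifest fragment with a placeholder instead of the ID.
printf '    volumeHandle: __DISK_ID__\n' > /tmp/pv-fragment.yaml

# Substitute the real ID into the fragment.
sed "s/__DISK_ID__/$DISK_ID/" /tmp/pv-fragment.yaml
```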
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nginx
  labels:
    pv: nginx
  annotations:
    author: cameda
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "yc-network-ssd"
  csi:
    driver: disk-csi-driver.mks.ycloud.io
    fsType: ext4
    volumeHandle: fhmkr5gsa9fbao9efm98
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
  namespace: default
  labels:
    pvc: nginx
  annotations:
    author: cameda
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  volumeName: pv-nginx
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cam-nginx
  namespace: default
  labels:
    app: nginx
    env: prod
  annotations:
    author: cameda
spec:
  containers:
    - name: cam-nginx
      image: nginx:latest
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          cpu: 300m
          memory: 300Mi
        limits:
          memory: 500Mi
      ports:
        - containerPort: 80
        - containerPort: 443
      livenessProbe:
        failureThreshold: 10
        successThreshold: 1
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 5
      readinessProbe:
        failureThreshold: 3
        successThreshold: 1
        exec:
          command:
            - curl
            - http://127.0.0.1:80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 7
      volumeMounts:
        - name: cam-volume
          mountPath: /mnt/cameda
  restartPolicy: OnFailure
  volumes:
    - name: cam-volume
      persistentVolumeClaim:
        claimName: pvc-nginx
  securityContext:
    fsGroup: 1000
    runAsUser: 0
EOF
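Once the pod is running, the PV/PVC binding and the mount itself can be verified; these commands assume a working cluster context:

```shell
# Both the PV and the PVC should report STATUS Bound.
kubectl get pv pv-nginx
kubectl get pvc pvc-nginx

# The disk should be mounted inside the container at /mnt/cameda.
kubectl exec cam-nginx -- df -h /mnt/cameda
```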
Dynamic provisioning: create only a PVC, without referencing an existing PV. The CSI driver will create a matching disk automatically from the default StorageClass.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx2
  namespace: default
  labels:
    pvc: nginx
  annotations:
    author: cameda
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cameda-nginx
  namespace: default
  labels:
    app: nginx
    env: prod
  annotations:
    author: cameda
spec:
  containers:
    - name: cameda-nginx
      image: nginx:latest
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          cpu: 300m
          memory: 300Mi
        limits:
          memory: 500Mi
      ports:
        - containerPort: 80
        - containerPort: 443
      livenessProbe:
        failureThreshold: 10
        successThreshold: 1
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 5
      readinessProbe:
        failureThreshold: 3
        successThreshold: 1
        exec:
          command:
            - curl
            - http://127.0.0.1:80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 7
      volumeMounts:
        - name: cam-volume
          mountPath: /mnt/cameda
  restartPolicy: OnFailure
  volumes:
    - name: cam-volume
      persistentVolumeClaim:
        claimName: pvc-nginx2
  securityContext:
    fsGroup: 1000
    runAsUser: 0
EOF
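Unlike the static case, no PV was created by hand here: the CSI provisioner creates both the PV object and the underlying Compute Cloud disk when the claim is bound. This can be observed as follows (assumes a working cluster context and a configured yc CLI):

```shell
# The dynamically provisioned PVC is bound to an auto-generated PV name.
kubectl get pvc pvc-nginx2
kubectl get pv

# The backing disk, created by the driver, also shows up in Compute Cloud.
yc compute disk list
```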
Pod with a yc-network-hdd PV.

Create a disk of the required size in the same availability zone as the k8s cluster master; this applies to zonal masters.
yc compute disk create \
  --zone ru-central1-a \
  --name k8s-hdd-pv \
  --type network-hdd \
  --block-size 4K \
  --size 8 \
  --description "k8s pv hdd for test" \
  --async
Set yc-network-hdd as the default StorageClass:
kubectl patch storageclass yc-network-ssd \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

kubectl patch storageclass yc-network-hdd \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
If we skip this step, a PVC without an explicit storage class will be dynamically provisioned from yc-network-ssd, which is not the disk type we prepared.
Create the PV, PVC, and Pod on top of this disk (static provisioning).

In the volumeHandle: field, insert the ID of the disk created earlier.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nginx-hdd
  labels:
    pv: nginx
  annotations:
    author: cameda
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "yc-network-hdd"
  csi:
    driver: disk-csi-driver.mks.ycloud.io
    fsType: ext4
    volumeHandle: fhmlre3ckae4nlaanr5k
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx-hdd
  namespace: default
  labels:
    pvc: nginx
  annotations:
    author: cameda
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  volumeName: pv-nginx-hdd
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cam-nginx-hdd
  namespace: default
  labels:
    app: nginx
    env: prod
  annotations:
    author: cameda
spec:
  containers:
    - name: cam-nginx-hdd
      image: nginx:latest
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          cpu: 300m
          memory: 300Mi
        limits:
          memory: 500Mi
      ports:
        - containerPort: 80
        - containerPort: 443
      livenessProbe:
        failureThreshold: 10
        successThreshold: 1
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 5
      readinessProbe:
        failureThreshold: 3
        successThreshold: 1
        exec:
          command:
            - curl
            - http://127.0.0.1:80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 7
      volumeMounts:
        - name: cam-volume
          mountPath: /mnt/cameda
  restartPolicy: OnFailure
  volumes:
    - name: cam-volume
      persistentVolumeClaim:
        claimName: pvc-nginx-hdd
  securityContext:
    fsGroup: 1000
    runAsUser: 0
EOF
Dynamic provisioning: as before, create only a PVC and let the CSI driver provision the disk from the default StorageClass.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx-hdd-dynamic
  namespace: default
  labels:
    pvc: nginx
  annotations:
    author: cameda
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
EOF
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cameda-nginx2
  namespace: default
  labels:
    app: nginx
    env: prod
  annotations:
    author: cameda
spec:
  containers:
    - name: cameda-nginx2
      image: nginx:latest
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          cpu: 300m
          memory: 300Mi
        limits:
          memory: 500Mi
      ports:
        - containerPort: 80
        - containerPort: 443
      livenessProbe:
        failureThreshold: 10
        successThreshold: 1
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 5
      readinessProbe:
        failureThreshold: 3
        successThreshold: 1
        exec:
          command:
            - curl
            - http://127.0.0.1:80
        periodSeconds: 10
        timeoutSeconds: 1
        initialDelaySeconds: 7
      volumeMounts:
        - name: cam-volume
          mountPath: /mnt/cameda
  restartPolicy: OnFailure
  volumes:
    - name: cam-volume
      persistentVolumeClaim:
        claimName: pvc-nginx-hdd-dynamic
  securityContext:
    fsGroup: 1000
    runAsUser: 0
EOF
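When you are done experimenting, remove the test resources so the disks do not keep accruing charges. A sketch of the cleanup; the names match the manifests above, and whether deleting a dynamically provisioned PVC also deletes its disk depends on the StorageClass reclaim policy:

```shell
# Delete the pods first so the volumes can be detached.
kubectl delete pod cam-nginx cameda-nginx cam-nginx-hdd cameda-nginx2

# Delete the claims and the statically created PVs.
kubectl delete pvc pvc-nginx pvc-nginx2 pvc-nginx-hdd pvc-nginx-hdd-dynamic
kubectl delete pv pv-nginx pv-nginx-hdd

# Statically created disks are not managed by Kubernetes; remove them
# from Compute Cloud explicitly.
yc compute disk delete --name k8s-pv
yc compute disk delete --name k8s-hdd-pv
```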