Prometheus+Grafana+Loki
March 3, 2023

Prometheus+Grafana in an mk8s cluster

Managed Service for Kubernetes lets you export cluster object metrics to monitoring systems.

I set up the cluster here: https://teletype.in/@cameda/3BqWWeOkRZ0

Installing Prometheus.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install my-prom prometheus-community/prometheus
kubectl get pods -l "app=prometheus"
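
To make sure Prometheus answers queries, you can port-forward to its server service and hit the HTTP API. A quick check, assuming the chart created the my-prom-prometheus-server service on port 80 (the same service the trickster config below points at):

# forward local port 9090 to the Prometheus server service
kubectl port-forward svc/my-prom-prometheus-server 9090:80
# in another terminal; a "success" response with "up" samples means scraping works
curl 'http://localhost:9090/api/v1/query?query=up'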

Installing the trickster caching proxy.

The trickster caching proxy speeds up reads from the Prometheus database, which lets Grafana display metrics almost in real time and reduces the load on Prometheus.

helm repo add tricksterproxy https://helm.tricksterproxy.io && \
helm repo update

Create a trickster.yaml file with the chart values.

frontend:
  listenAddress: ""
  tlsListenAddress: ""
  tlsListenPort: ""
  connectionsLimit: "0"
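# the default origin points trickster at the Prometheus server service installed above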
origins:
  - name: default
    originType: prometheus
    originURL: http://my-prom-prometheus-server:80
profiler:
  enabled: false
  port: 6060
prometheusScrape: false
prometheus:
  serviceMonitor:
    enabled: false
    interval: 30s
    labels: {}
replicaCount: 1
image:
  repository: tricksterproxy/trickster
  tag: "1.1"
  pullPolicy: IfNotPresent
service:
  annotations: {}
  labels: {}
  clusterIP: ""
  externalIPs: []
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  metricsPort: 8481
  servicePort: 8480
  type: ClusterIP
ingress:
  enabled: false
  annotations: {}
  extraLabels: {}
  hosts: []
  tls: []
volumes:
  persistent:
    type: "persistentVolume"
    enabled: false
    mountPath: "/tmp/trickster"
    accessModes:
      - ReadWriteOnce
    annotations: {}
    existingClaim: ""
    size: 15Gi
  generic:
    type: "generic"
    enabled: true
    mountPath: "/tmp/trickster"
podAnnotations: {}
resources: {}
securityContext: {}

Now run the installation.

helm install trickster tricksterproxy/trickster --namespace default -f trickster.yaml
kubectl get pods -l "app=trickster"
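
Trickster exposes the same HTTP API as Prometheus, so the installation can be checked with the same kind of query routed through the proxy. A quick check, assuming the release created a service named trickster listening on the servicePort 8480 from the values above:

# forward local port 8480 to the trickster service
kubectl port-forward svc/trickster 8480:8480
# in another terminal; the response is served through the cache-backed proxy
curl 'http://localhost:8480/api/v1/query?query=up'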

Installing Grafana.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
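# LoadBalancer Service that publishes Grafana's port 3000 on an external IP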
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
EOF
kubectl get pods -l "app=grafana"
export GRAFANA_IP=$(kubectl get service/grafana -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && \
export GRAFANA_PORT=$(kubectl get service/grafana -o jsonpath='{.spec.ports[0].port}') && \
echo http://$GRAFANA_IP:$GRAFANA_PORT
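
If echo prints an empty address, the cloud load balancer is probably still being provisioned; watch the service until an external IP appears:

kubectl get service grafana --watch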

Configuring Grafana.

Default login/password: admin/admin

1. Add a data source of type Prometheus (a sample configuration is sketched after this list).

2. Import the dashboard with ID 8588.
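
To read metrics through the caching proxy instead of querying Prometheus directly, point the data source at the trickster service. A minimal provisioning sketch, assuming the in-cluster service name trickster and the servicePort 8480 from the chart values above (the same URL can simply be entered in the Grafana UI):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://trickster:8480
    isDefault: true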