Prometheus monitoring on Kubernetes

Manually deploy Grafana as a StatefulSet in Kubernetes, use a StorageClass to persist its data, and configure access through ingress-nginx.

In this article, a StorageClass is used to persist data, Grafana is built as a StatefulSet, its datasource is pointed at the in-cluster address of the Prometheus cluster created earlier, dashboards are imported, and external access is configured through ingress-nginx.

Environment

My local environment was deployed with sealos in one click, mainly to make testing easier.

OS            Kubernetes  HostName          IP             Service
Ubuntu 18.04  1.17.7      sealos-k8s-m1     192.168.1.151  node-exporter prometheus-federate-0
Ubuntu 18.04  1.17.7      sealos-k8s-m2     192.168.1.152  node-exporter grafana alertmanager-0
Ubuntu 18.04  1.17.7      sealos-k8s-m3     192.168.1.150  node-exporter alertmanager-1
Ubuntu 18.04  1.17.7      sealos-k8s-node1  192.168.1.153  node-exporter prometheus-0 kube-state-metrics
Ubuntu 18.04  1.17.7      sealos-k8s-node2  192.168.1.154  node-exporter prometheus-1
Ubuntu 18.04  1.17.7      sealos-k8s-node3  192.168.1.155  node-exporter prometheus-2

Deploy Grafana

Create the ServiceAccount (SA) file for Grafana

mkdir /data/manual-deploy/grafana/
cat grafana-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana
  namespace: kube-system

Create the StorageClass (sc) configuration file for Grafana

cat grafana-data-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: grafana-lpv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Create a PersistentVolume (pv) configuration file for Grafana

cat grafana-data-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv-0
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: grafana-lpv
  local:
    path: /data/grafana-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - sealos-k8s-m2
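
Because the StorageClass above uses volumeBindingMode: WaitForFirstConsumer, this PV stays in Available status after creation and only becomes Bound once the grafana-0 pod is actually scheduled onto sealos-k8s-m2. A quick check after the deploy step below:

kubectl get pv grafana-pv-0
# STATUS is Available before grafana-0 is scheduled, Bound afterwards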

Create the pv directory on the node selected by nodeAffinity and set its ownership

mkdir /data/grafana-data
chown -R 65534.65534 /data/grafana-data

The dashboard ConfigMap file is too large to list here; download it and change the namespace yourself.

# Download to local
cat grafana-dashboard-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: grafana-dashboards
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
data:
....
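
Judging by the creationTimestamp: null field, this file was likely generated with kubectl's dry-run. If you keep the dashboard JSON files in a local dashboards/ directory (a hypothetical path), you can regenerate the ConfigMap the same way and add the labels afterwards:

kubectl create configmap grafana-dashboards \
  --namespace kube-system \
  --from-file=dashboards/ \
  --dry-run -o yaml > grafana-dashboard-configmap.yaml
# on kubectl 1.18+ use --dry-run=client instead of --dry-run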

Create Grafana's ConfigMap configuration files. The Prometheus URL is the in-cluster DNS address; adjust it to match your own deployment (it can be verified with the sketch after the file).

cat grafana-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - access: proxy
      isDefault: true
      name: prometheus
      type: prometheus
      url: http://prometheus-0.prometheus.kube-system.svc.cluster.local:9090
      version: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboardproviders
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
data:
  dashboardproviders.yaml: |
    apiVersion: 1
    providers:
    - disableDeletion: false
      editable: true
      folder: ""
      name: default
      options:
        path: /var/lib/grafana/dashboards
      orgId: 1
      type: file
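
Before relying on the datasource URL, it is worth checking that the in-cluster DNS name actually resolves. A throwaway pod sketch (assumes the busybox image is pullable in your cluster):

kubectl -n kube-system run dns-test --rm -it --restart=Never \
  --image=busybox:1.28 -- nslookup prometheus-0.prometheus.kube-system.svc.cluster.local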

The Secret is not actually consumed here; adapt it yourself if you want to use it. The StatefulSet contains the corresponding references, which I have commented out.

cat grafana-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana-secret
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
type: Opaque
data:
  admin-user: YWRtaW4=
  admin-password: MTIzNDU2  # base64 of "123456"
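
Values under a Secret's data field must be base64-encoded. For reference, the values above can be generated like this:

echo -n 'admin' | base64    # YWRtaW4=
echo -n '123456' | base64   # MTIzNDU2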

Create Grafana's StatefulSet configuration file

cat grafana-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: grafana
  namespace: kube-system
  labels: &Labels
    k8s-app: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
spec:
  serviceName: grafana
  replicas: 1
  selector:
    matchLabels: *Labels
  template:
    metadata:
      labels: *Labels
    spec:
      serviceAccountName: grafana
      initContainers:
          - name: "init-chmod-data"
            image: debian:9
            imagePullPolicy: "IfNotPresent"
            command: ["chmod", "777", "/var/lib/grafana"]
            volumeMounts:
            - name: grafana-data
              mountPath: "/var/lib/grafana"
      containers:
        - name: grafana
          image: grafana/grafana:7.1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: dashboards
              mountPath: "/var/lib/grafana/dashboards"
            - name: datasources
              mountPath: "/etc/grafana/provisioning/datasources"              
            - name: grafana-dashboardproviders
              mountPath: "/etc/grafana/provisioning/dashboards"
            - name: grafana-data
              mountPath: "/var/lib/grafana"
          ports:
            - name: service
              containerPort: 80
              protocol: TCP
            - name: grafana
              containerPort: 3000
              protocol: TCP
          env:
            - name: GF_SECURITY_ADMIN_USER
              value: "admin"
              #valueFrom:
              #  secretKeyRef:
              #    name: grafana-secret
              #    key: admin-user
            - name: GF_SECURITY_ADMIN_PASSWORD
              value: "admin"
              #valueFrom:
              #  secretKeyRef:
              #    name: grafana-secret
              #    key: admin-password
          livenessProbe:
            httpGet:
              path: /api/health
              port: 3000
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
            initialDelaySeconds: 60
            timeoutSeconds: 30
            failureThreshold: 10
            periodSeconds: 10
          resources:
            limits:
              cpu: 50m
              memory: 100Mi
            requests:
              cpu: 50m
              memory: 100Mi
      volumes:
        - name: datasources
          configMap:
            name: grafana-datasources
        - name: grafana-dashboardproviders
          configMap:
            name: grafana-dashboardproviders
        - name: dashboards
          configMap:
            name: grafana-dashboards
  volumeClaimTemplates:
  - metadata:
      name: grafana-data
    spec:
      storageClassName: "grafana-lpv"
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "2Gi"

Create the svc configuration file for Grafana's StatefulSet

cat grafana-service-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    k8s-app: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana
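
Before wiring up ingress, the Service can be smoke-tested with a port-forward (a quick check, not part of the deployment):

kubectl -n kube-system port-forward svc/grafana 3000:80
# then open http://localhost:3000 and log in with the admin credentials above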

deploy

cd /data/manual-deploy/grafana
ls
grafana-configmap.yaml
grafana-dashboard-configmap.yaml
grafana-data-pv.yaml
grafana-data-storageclass.yaml
grafana-secret.yaml
grafana-serviceaccount.yaml
grafana-service-statefulset.yaml
grafana-statefulset.yaml
kubectl apply -f .

verification

kubectl -n kube-system get sa,pod,svc,ep,sc,secret|grep grafana
serviceaccount/grafana                              1         1h
pod/grafana-0                                  1/1     Running   0          1h
service/grafana                   ClusterIP   10.101.176.62    <none>        80/TCP                         1h
endpoints/grafana                   100.73.217.86:3000                                                         1h
storageclass.storage.k8s.io/grafana-lpv               kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  33h
secret/grafana-token-lrsbd                              kubernetes.io/service-account-token   3      1h

Deploy ingress nginx

cd /data/manual-deploy/ingress-nginx
# Create ns and svc of ingress nginx
cat ingress-nginx-svc.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

# Create mandatory file
cat ingress-nginx-mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container

deploy

cd /data/manual-deploy/ingress-nginx
ls
ingress-nginx-mandatory.yaml
ingress-nginx-svc.yaml
kubectl apply -f  .

verification

kubectl -n ingress-nginx get pod,svc,ep
NAME                                            READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-controller-6ffc8fdf96-45ksg   1/1     Running   0          3d12h
pod/nginx-ingress-controller-6ffc8fdf96-76rxj   1/1     Running   0          3d13h
pod/nginx-ingress-controller-6ffc8fdf96-xrhlp   1/1     Running   0          3d13h

NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx   NodePort   10.110.106.22   <none>        80:31926/TCP,443:31805/TCP   3d13h

NAME                      ENDPOINTS                                                        AGE
endpoints/ingress-nginx   192.168.1.153:80,192.168.1.154:80,192.168.1.155:80 + 3 more...   3d13h
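
Note that the endpoints list node IPs on port 80 because the controller runs with hostNetwork: true; each controller pod binds ports 80/443 directly on its node. The controller's health endpoint can also be probed on port 10254 (a sketch; substitute one of your node IPs):

curl -i http://192.168.1.153:10254/healthz
# expect HTTP/1.1 200 OK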

Configure ingress access

Prometheus ingress nginx configuration file

cat prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "prometheus-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: prom.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
  tls:
  - hosts:
      - prom.example.com

Alertmanager ingress nginx configuration file

cat alertmanager-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alertmanager-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "alert-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: alert.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: alertmanager-operated
          servicePort: 9093
  tls:
  - hosts:
      - alert.example.com

Grafana ingress nginx configuration file

cat grafana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "grafana-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: http
  tls:
  - hosts:
      - grafana.example.com

deploy

cd /data/manual-deploy/ingress-nginx
ls
alertmanager-ingress.yaml
grafana-ingress.yaml
prometheus-ingress.yaml
kubectl apply -f alertmanager-ingress.yaml
kubectl apply -f prometheus-ingress.yaml
kubectl apply -f grafana-ingress.yaml

verification

kubectl -n kube-system get ingresses
NAME                   HOSTS                 ADDRESS         PORTS     AGE
alertmanager-ingress   alert.example.com     10.110.106.22   80, 443   15h
grafana-ingress        grafana.example.com   10.110.106.22   80, 443   30h
prometheus-ingress     prom.example.com      10.110.106.22   80, 443   15h
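
Without DNS in place yet, the ingress rules can be tested by sending the Host header directly to a node running the controller (a sketch; substitute your own node IP):

curl -s -H 'Host: grafana.example.com' http://192.168.1.153/api/health
curl -s -H 'Host: prom.example.com' http://192.168.1.153/-/healthy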

Then you can point these domain names at the ingress nodes in your local DNS server or hosts file to reach the services. I don't configure SSL certificates here; if you have that requirement, configure them separately.
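
For example, on a client machine the hosts entries could look like this (pointing at any node that runs the ingress controller, taken from the verification output above):

cat >> /etc/hosts <<EOF
192.168.1.153 prom.example.com alert.example.com grafana.example.com
EOF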

That concludes the whole manual deployment process. The services built here are relatively new versions and there may be unknown problems in their interdependencies, so try to keep the versions consistent.
