Sidecar Component

First, delete all of the Prometheus-related resource objects created in the previous chapters. Since Prometheus still needs to auto-discover some of the cluster's resource objects, we again have to declare the corresponding RBAC permissions:

# prometheus-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-mon
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - services
      - endpoints
      - pods
      - nodes/proxy
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
      - nodes/metrics
    verbs:
      - get
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: kube-mon

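As an optional sanity check, once the resources above have been applied you could verify that the ClusterRoleBinding really grants the prometheus ServiceAccount the expected permissions by impersonating it with kubectl (this assumes you have cluster-admin rights to impersonate); both commands should print yes:

☸ ➜ kubectl auth can-i list pods --as=system:serviceaccount:kube-mon:prometheus
☸ ➜ kubectl auth can-i get configmaps --as=system:serviceaccount:kube-mon:prometheus
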
Next we need to deploy the Prometheus configuration. The resource object below is a template for the Prometheus configuration file: the template is read by the Thanos sidecar component, which renders it into the actual configuration file, and the Prometheus container in the same Pod then reads that final file. Adding external_labels to the configuration is very important, so that the Querier can deduplicate data based on these labels:

# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-mon
data:
  prometheus.yaml.tmpl: | # note that the key name here is prometheus.yaml.tmpl
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
      external_labels:
        cluster: ydzs-test
        replica: $(POD_NAME)  # each Prometheus replica gets a unique label

    rule_files:  # alerting rule files
    - /etc/prometheus/rules/*rules.yaml

    alerting:
      alert_relabel_configs:  # drop the replica label so alerts from different replicas are also deduplicated
      - regex: replica
        action: labeldrop
      alertmanagers:
      - scheme: http
        path_prefix: /
        static_configs:
        - targets: ['alertmanager:9093']

    scrape_configs:
    ......  # the remaining scrape jobs are the same as in the previous chapters

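When a Pod starts, the Sidecar's reloader substitutes $(POD_NAME) with the actual Pod name, so each replica carries a unique replica label in the rendered configuration. For illustration only, on the first replica of the StatefulSet defined later the rendered external_labels would look roughly like this:

# rendered external_labels on the prometheus-0 replica (illustration only)
external_labels:
  cluster: ydzs-test
  replica: prometheus-0  # substituted from the POD_NAME environment variable
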
The configuration above references alerting rule files. Because the configuration is already quite large, we split the alerting rules into a separate ConfigMap object to keep things clearer; the following alerting rule is defined:

# prometheus-rules.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: kube-mon
data:
  alert-rules.yaml: |-
    groups:
      - name: K8sObjects_Alerts
        rules:
        - alert: Deployment_Replicas_0
          expr: |
            sum(kube_deployment_status_replicas) by (deployment, namespace) < 1
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: Deployment {{$labels.deployment}} in namespace {{$labels.namespace}} currently has no pods running
            description: Deployment {{$labels.deployment}} in namespace {{$labels.namespace}} has no running pods; describe the Deployment to check its events, or inspect its replica status.

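If promtool is available locally, you could optionally validate the rule definitions before creating the ConfigMap, for example by saving the contents of the alert-rules.yaml key to a local file and running:

☸ ➜ promtool check rules alert-rules.yaml
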
Thanos integrates with an existing Prometheus through the Sidecar and backs up Prometheus data to object storage, so first we need to deploy Prometheus and the Sidecar in the same Pod. In addition, the following two flags must be enabled on Prometheus (see the examples after this list):

  • --web.enable-admin-api allows the Thanos Sidecar to obtain metadata from Prometheus.
  • --web.enable-lifecycle allows the Thanos Sidecar to reload Prometheus's configuration and rule files.

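For reference, with these two flags enabled Prometheus exposes HTTP endpoints such as the following, which you could hit manually, e.g. via kubectl port-forward, to trigger a configuration reload or take a TSDB snapshot (the snapshot endpoint is just one example of what the admin API enables):

☸ ➜ curl -X POST http://localhost:9090/-/reload                    # requires --web.enable-lifecycle
☸ ➜ curl -X POST http://localhost:9090/api/v1/admin/tsdb/snapshot  # requires --web.enable-admin-api
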
Since Prometheus only writes out a TSDB block every 2 hours by default, this still does not mean Prometheus can be completely stateless: if it crashes and restarts, we would lose roughly 2 hours of metrics. It is therefore strongly recommended to keep persisting Prometheus data, so here we manage the application with a StatefulSet and add volumeClaimTemplates to declare a PVC template for data persistence:

# thanos-sidecar.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: kube-mon
  labels:
    app: prometheus
spec:
  serviceName: prometheus
  replicas: 2
  selector:
    matchLabels:
      app: prometheus
      thanos-store-api: "true"
  template:
    metadata:
      labels:
        app: prometheus
        thanos-store-api: "true"
    spec:
      serviceAccountName: prometheus
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - prometheus
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
        - name: prometheus-rules
          configMap:
            name: prometheus-rules
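        # shared emptyDir: the sidecar renders the final config here for Prometheus to read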
        - name: prometheus-config-shared
          emptyDir: {}
      initContainers:
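        # chown the data volume so the nobody user that Prometheus runs as can write to it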
        - name: fix-permissions
          image: busybox:stable
          command: [chown, -R, "nobody:nobody", /prometheus]
          volumeMounts:
            - name: data
              mountPath: /prometheus
      containers:
        - name: prometheus
          image: prom/prometheus:v2.34.0
          imagePullPolicy: IfNotPresent
          args:
            - "--config.file=/etc/prometheus-shared/prometheus.yaml"
            - "--storage.tsdb.path=/prometheus"
            - "--storage.tsdb.retention.time=6h"
            - "--storage.tsdb.no-lockfile"
            - "--storage.tsdb.min-block-duration=2h" # Thanos处理数据压缩
            - "--storage.tsdb.max-block-duration=2h"
            - "--web.enable-admin-api" # 通过一些命令去管理数据
            - "--web.enable-lifecycle" # 支持热更新  localhost:9090/-/reload 加载
          ports:
            - name: http
              containerPort: 9090
          resources:
            requests:
              memory: 1Gi
              cpu: 500m
            limits:
              memory: 1Gi
              cpu: 500m
          volumeMounts:
            - name: prometheus-config-shared
              mountPath: /etc/prometheus-shared/
            - name: prometheus-rules
              mountPath: /etc/prometheus/rules
            - name: data
              mountPath: /prometheus
        - name: thanos
          image: thanosio/thanos:v0.25.1
          imagePullPolicy: IfNotPresent
          args:
            - sidecar
            - --log.level=debug
            - --tsdb.path=/prometheus
            - --prometheus.url=http://localhost:9090
            - --reloader.config-file=/etc/prometheus/prometheus.yaml.tmpl
            - --reloader.config-envsubst-file=/etc/prometheus-shared/prometheus.yaml
            - --reloader.rule-dir=/etc/prometheus/rules/
          ports:
            - name: http-sidecar
              containerPort: 10902
            - name: grpc
              containerPort: 10901
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources:
            requests:
              memory: 1Gi
              cpu: 500m
            limits:
              memory: 1Gi
              cpu: 500m
          volumeMounts:
            - name: prometheus-config-shared
              mountPath: /etc/prometheus-shared/
            - name: prometheus-config
              mountPath: /etc/prometheus
            - name: prometheus-rules
              mountPath: /etc/prometheus/rules
            - name: data
              mountPath: /prometheus
  volumeClaimTemplates: # Prometheus still writes a TSDB block every 2h, so local data must be persisted
    - metadata:
        name: data
        labels:
          app: prometheus
      spec:
        storageClassName: longhorn # do not use NFS storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi

Since Prometheus and the Thanos Sidecar now run in the same Pod, they can reach each other over localhost. The data directory is declared and mounted in both containers, so it is shared between them; pay close attention to how the various configuration files are mounted. In addition, in the configuration above we attach the POD_NAME environment variable as an external label to each Prometheus instance, and we set that environment variable via the Downward API.

Since we are using a StatefulSet controller, we need to create a headless Service; the Thanos Query component introduced later will also use this headless Service to query data from all Prometheus instances. We could also create a Service object for each Prometheus instance to make debugging easier, although that is not required (a sketch of such a per-instance Service is shown after the manifest below):

# prometheus-headless.yaml
# this Service creates SRV records for the querier so it can discover the store-api endpoints
apiVersion: v1
kind: Service
metadata:
  name: thanos-store-gateway
  namespace: kube-mon
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: grpc
      port: 10901
      targetPort: grpc
  selector:
    thanos-store-api: "true"

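As mentioned above, a per-instance Service for debugging is optional; a minimal sketch for the first replica might look like the following (the statefulset.kubernetes.io/pod-name label is added to each Pod automatically by the StatefulSet controller, and the file name here is just an example):

# prometheus-0-svc.yaml (optional, debugging only)
apiVersion: v1
kind: Service
metadata:
  name: prometheus-0
  namespace: kube-mon
spec:
  ports:
    - name: http
      port: 9090
      targetPort: http
  selector:
    statefulset.kubernetes.io/pod-name: prometheus-0
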
We can now use the resource objects above to create a highly available Prometheus deployment with the Thanos Sidecar container:

☸ ➜ kubectl apply -f https://p8s.io/docs/thanos/manifests/prometheus-rbac.yaml
☸ ➜ kubectl apply -f https://p8s.io/docs/thanos/manifests/prometheus-config.yaml
☸ ➜ kubectl apply -f https://p8s.io/docs/thanos/manifests/prometheus-rules.yaml
☸ ➜ kubectl apply -f https://p8s.io/docs/thanos/manifests/prometheus-headless.yaml
☸ ➜ kubectl apply -f https://p8s.io/docs/thanos/manifests/thanos-sidecar.yaml
☸ ➜ kubectl get pods -n kube-mon -l app=prometheus
NAME           READY   STATUS    RESTARTS      AGE
prometheus-0   2/2     Running   1 (78s ago)   106s
prometheus-1   2/2     Running   1 (47s ago)   75s

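Since the StatefulSet creates one PVC per replica from the volumeClaimTemplates, you could also confirm that the data volumes were provisioned; the claims should be named data-prometheus-0 and data-prometheus-1, following the usual <template-name>-<pod-name> convention:

☸ ➜ kubectl get pvc -n kube-mon -l app=prometheus
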
Once created, we can see that each Prometheus Pod contains two containers. The Sidecar container is started with two very important flags, --reloader.config-file and --reloader.config-envsubst-file: the first specifies the Prometheus configuration template file, which the reloader renders, here replacing the value of external_labels.replica: $(POD_NAME) with the POD_NAME environment variable, and the rendered file is written to the path given by --reloader.config-envsubst-file, i.e. /etc/prometheus-shared/prometheus.yaml, which is exactly the path the main Prometheus container points to with --config.file. We can verify this by checking the Sidecar container's logs:

☸ ➜ kubectl logs -f prometheus-0 -n kube-mon -c thanos
level=debug ts=2022-03-20T03:30:48.476658968Z caller=main.go:66 msg="maxprocs: Updating GOMAXPROCS=[1]: using minimum allowed GOMAXPROCS"
level=info ts=2022-03-20T03:30:48.477201922Z caller=sidecar.go:123 msg="no supported bucket was configured, uploads will be disabled"
level=info ts=2022-03-20T03:30:48.477275662Z caller=options.go:27 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2022-03-20T03:30:48.477623162Z caller=sidecar.go:357 msg="starting sidecar"
level=info ts=2022-03-20T03:30:48.477804709Z caller=intrumentation.go:75 msg="changing probe status" status=healthy
level=info ts=2022-03-20T03:30:48.477833439Z caller=http.go:73 service=http/server component=sidecar msg="listening for requests and metrics" address=0.0.0.0:10902
level=info ts=2022-03-20T03:30:48.477973149Z caller=tls_config.go:195 service=http/server component=sidecar msg="TLS is disabled." http2=false
level=debug ts=2022-03-20T03:30:48.478060891Z caller=promclient.go:623 msg="build version" url=http://localhost:9090/api/v1/status/buildinfo
level=info ts=2022-03-20T03:30:48.479035037Z caller=intrumentation.go:56 msg="changing probe status" status=ready
level=info ts=2022-03-20T03:30:48.480670711Z caller=grpc.go:131 service=gRPC/server component=sidecar msg="listening for serving gRPC" address=0.0.0.0:10901
level=warn ts=2022-03-20T03:30:48.480775546Z caller=sidecar.go:172 msg="failed to fetch prometheus version. Is Prometheus running? Retrying" err="perform GET request against http://localhost:9090/api/v1/status/buildinfo: Get \"http://localhost:9090/api/v1/status/buildinfo\": dial tcp [::1]:9090: connect: connection refused"
level=error ts=2022-03-20T03:30:48.480838975Z caller=runutil.go:101 component=reloader msg="function failed. Retrying in next tick" err="trigger reload: reload request failed: Post \"http://localhost:9090/-/reload\": dial tcp [::1]:9090: connect: connection refused"
level=debug ts=2022-03-20T03:30:50.478628169Z caller=promclient.go:623 msg="build version" url=http://localhost:9090/api/v1/status/buildinfo
level=info ts=2022-03-20T03:30:50.479976025Z caller=sidecar.go:179 msg="successfully loaded prometheus version"
level=info ts=2022-03-20T03:30:50.481937285Z caller=sidecar.go:201 msg="successfully loaded prometheus external labels" external_labels="{cluster=\"ydzs-test\", replica=\"prometheus-0\"}"
level=info ts=2022-03-20T03:30:53.485019736Z caller=reloader.go:373 component=reloader msg="Reload triggered" cfg_in=/etc/prometheus/prometheus.yaml.tmpl cfg_out=/etc/prometheus-shared/prometheus.yaml watched_dirs=/etc/prometheus/rules/
level=info ts=2022-03-20T03:30:53.485067975Z caller=reloader.go:235 component=reloader msg="started watching config file and directories for changes" cfg=/etc/prometheus/prometheus.yaml.tmpl out=/etc/prometheus-shared/prometheus.yaml dirs=/etc/prometheus/rules/

Since we have not configured any object storage parameters in the Sidecar, the warning no supported bucket was configured, uploads will be disabled appears, meaning that metric data will not be uploaded for now. With that, the Thanos Sidecar component has been deployed successfully.
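
If you want to double-check the configuration file that the reloader rendered into the shared volume, you could also cat it directly from the running Pod; the replica label should show the Pod's own name:

☸ ➜ kubectl exec -n kube-mon prometheus-0 -c prometheus -- cat /etc/prometheus-shared/prometheus.yaml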