
Installing kube-prometheus on k8s

This article walks through installing kube-prometheus on k8s (Kubernetes).

 

kube-prometheus on GitHub: https://github.com/prometheus-operator/kube-prometheus

 

kube-prometheus is essentially a bundle of the following components:

 

  • Prometheus Operator
  • Prometheus
  • Alertmanager
  • node-exporter
  • Prometheus Adapter for Kubernetes Metrics APIs
  • kube-state-metrics
  • Grafana

 

Note the version compatibility between kube-prometheus and Kubernetes:

The cluster used in this article runs Kubernetes 1.21.0-0, which does not appear in that list; after checking other sources, it turns out to be supported:

Based on the compatibility table, release-0.9 is the version to use. Download: https://github.com/prometheus-operator/kube-prometheus/releases/tag/v0.9.0

02 Installing kube-prometheus

2.1 Step 1: Organize the yaml files

First, get the kube-prometheus archive onto the server, either by uploading it with an ssh tool or by downloading it directly as shown below.

 

Download and extract:

wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.9.0.tar.gz
tar -zxvf v0.9.0.tar.gz

Sort the yaml files into categories:

cd kube-prometheus-0.9.0/manifests

# Create the directories
mkdir -p node-exporter alertmanager grafana kube-state-metrics prometheus serviceMonitor adapter

# Move the yaml files into their respective directories
mv *-serviceMonitor* serviceMonitor/
mv grafana-* grafana/
mv kube-state-metrics-* kube-state-metrics/
mv alertmanager-* alertmanager/
mv node-exporter-* node-exporter/
mv prometheus-adapter* adapter/
mv prometheus-* prometheus/

The resulting layout:

[root@master manifests]# pwd
/root/backup/prometheus/tmp/kube-prometheus-0.9.0/manifests
[root@master manifests]# tree .
.
|-- adapter
|   |-- prometheus-adapter-apiService.yaml
|   |-- prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
|   |-- prometheus-adapter-clusterRoleBindingDelegator.yaml
|   |-- prometheus-adapter-clusterRoleBinding.yaml
|   |-- prometheus-adapter-clusterRoleServerResources.yaml
|   |-- prometheus-adapter-clusterRole.yaml
|   |-- prometheus-adapter-configMap.yaml
|   |-- prometheus-adapter-deployment.yaml
|   |-- prometheus-adapter-podDisruptionBudget.yaml
|   |-- prometheus-adapter-roleBindingAuthReader.yaml
|   |-- prometheus-adapter-serviceAccount.yaml
|   `-- prometheus-adapter-service.yaml
|-- alertmanager
|   |-- alertmanager-alertmanager.yaml
|   |-- alertmanager-podDisruptionBudget.yaml
|   |-- alertmanager-prometheusRule.yaml
|   |-- alertmanager-secret.yaml
|   |-- alertmanager-serviceAccount.yaml
|   `-- alertmanager-service.yaml
|-- blackbox-exporter-clusterRoleBinding.yaml
|-- blackbox-exporter-clusterRole.yaml
|-- blackbox-exporter-configuration.yaml
|-- blackbox-exporter-deployment.yaml
|-- blackbox-exporter-serviceAccount.yaml
|-- blackbox-exporter-service.yaml
|-- grafana
|   |-- grafana-dashboardDatasources.yaml
|   |-- grafana-dashboardDefinitions.yaml
|   |-- grafana-dashboardSources.yaml
|   |-- grafana-deployment.yaml
|   |-- grafana-pvc.yaml
|   |-- grafana-serviceAccount.yaml
|   `-- grafana-service.yaml
|-- kube-prometheus-prometheusRule.yaml
|-- kubernetes-prometheusRule.yaml
|-- kube-state-metrics
|   |-- kube-state-metrics-clusterRoleBinding.yaml
|   |-- kube-state-metrics-clusterRole.yaml
|   |-- kube-state-metrics-deployment.yaml
|   |-- kube-state-metrics-prometheusRule.yaml
|   |-- kube-state-metrics-serviceAccount.yaml
|   `-- kube-state-metrics-service.yaml
|-- node-exporter
|   |-- node-exporter-clusterRoleBinding.yaml
|   |-- node-exporter-clusterRole.yaml
|   |-- node-exporter-daemonset.yaml
|   |-- node-exporter-prometheusRule.yaml
|   |-- node-exporter-serviceAccount.yaml
|   `-- node-exporter-service.yaml
|-- prometheus
|   |-- prometheus-clusterRoleBinding.yaml
|   |-- prometheus-clusterRole.yaml
|   |-- prometheus-operator-prometheusRule.yaml
|   |-- prometheus-podDisruptionBudget.yaml
|   |-- prometheus-prometheusRule.yaml
|   |-- prometheus-prometheus.yaml
|   |-- prometheus-roleBindingConfig.yaml
|   |-- prometheus-roleBindingSpecificNamespaces.yaml
|   |-- prometheus-roleConfig.yaml
|   |-- prometheus-roleSpecificNamespaces.yaml
|   |-- prometheus-serviceAccount.yaml
|   `-- prometheus-service.yaml
|-- serviceMonitor
|   |-- alertmanager-serviceMonitor.yaml
|   |-- blackbox-exporter-serviceMonitor.yaml
|   |-- grafana-serviceMonitor.yaml
|   |-- kubernetes-serviceMonitorApiserver.yaml
|   |-- kubernetes-serviceMonitorCoreDNS.yaml
|   |-- kubernetes-serviceMonitorKubeControllerManager.yaml
|   |-- kubernetes-serviceMonitorKubelet.yaml
|   |-- kubernetes-serviceMonitorKubeScheduler.yaml
|   |-- kube-state-metrics-serviceMonitor.yaml
|   |-- node-exporter-serviceMonitor.yaml
|   |-- prometheus-adapter-serviceMonitor.yaml
|   |-- prometheus-operator-serviceMonitor.yaml
|   `-- prometheus-serviceMonitor.yaml
`-- setup
    |-- 0namespace-namespace.yaml
    |-- prometheus-operator-0alertmanagerConfigCustomResourceDefinition.yaml
    |-- prometheus-operator-0alertmanagerCustomResourceDefinition.yaml
    |-- prometheus-operator-0podmonitorCustomResourceDefinition.yaml
    |-- prometheus-operator-0probeCustomResourceDefinition.yaml
    |-- prometheus-operator-0prometheusCustomResourceDefinition.yaml
    |-- prometheus-operator-0prometheusruleCustomResourceDefinition.yaml
    |-- prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
    |-- prometheus-operator-0thanosrulerCustomResourceDefinition.yaml
    |-- prometheus-operator-clusterRoleBinding.yaml
    |-- prometheus-operator-clusterRole.yaml
    |-- prometheus-operator-deployment.yaml
    |-- prometheus-operator-serviceAccount.yaml
    `-- prometheus-operator-service.yaml

8 directories, 84 files

2.2 Step 2: Configure persistent storage

 

By default Prometheus mounts its data on an emptyDir volume. An emptyDir shares its lifecycle with the Pod: once the Pod is gone, the data is gone too, which is exactly why all previous metrics disappear after a Pod is rebuilt. So here we change it to persistent storage.
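
As a generic illustration (not taken from the kube-prometheus manifests; the volume and claim names here are made up), the difference in plain Kubernetes terms is that an emptyDir volume is deleted together with the Pod, while a PersistentVolumeClaim keeps its data across Pod rebuilds:

# Generic sketch only, for illustration
volumes:
- name: scratch-data
  emptyDir: {}              # removed when the Pod is deleted or rescheduled
- name: durable-data
  persistentVolumeClaim:
    claimName: my-data      # bound to a PersistentVolume; survives Pod restarts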

 

This article assumes a dynamic storage provisioner (such as openebs or an NFS provisioner) is already installed; setting one up is out of scope here.

 

Use the following command to find the name of an available StorageClass (a provisioner must already be installed):

## List the available StorageClasses
[root@master manifests]# kubectl get sc
NAME                  PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
harbor-storageclass   example.com/nfs                               Delete          Immediate           false                  19d
nfs-client            k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  20d

The StorageClass to use here is nfs-client. With that confirmed, the configuration files can be modified.
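
Optionally, double-check the chosen StorageClass before using it (a quick sanity check; the output depends on your provisioner):

kubectl describe sc nfs-client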

2.2.1 Configure Prometheus persistence

 

Prometheus is deployed as a StatefulSet, so the StorageClass can be configured directly in its resource; add the persistence block below to the yaml file:

 

File: manifests/prometheus/prometheus-prometheus.yaml

 

Append the following at the end of the file, indented so that storage sits under spec:

  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: nfs-client
        resources:
          requests:
            storage: 5Gi
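
After the stack is deployed in step 4, the Operator turns this template into one PVC per Prometheus replica. A quick way to confirm that persistence took effect (PVC names are generated by the StatefulSet, so expect something similar to, not necessarily identical to, the example below):

kubectl get pvc -n monitoring
# e.g. prometheus-k8s-db-prometheus-k8s-0   Bound   ...   nfs-client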

 

2.2.2 Configure Grafana persistence

 

Grafana is deployed as a Deployment, so we first create a grafana-pvc.yaml file containing the PVC below.

 

File: manifests/grafana/grafana-pvc.yaml

 

Full contents:

cat manifests/grafana/grafana-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: monitoring  # use the monitoring namespace
spec:
  storageClassName: nfs-client  # the StorageClass found above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

 

Next, modify grafana-deployment.yaml so that the Deployment uses the PVC created above (file: manifests/grafana/grafana-deployment.yaml).

 

The modified file looks like this:

# cat manifests/grafana/grafana-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.1
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: grafana
      app.kubernetes.io/name: grafana
      app.kubernetes.io/part-of: kube-prometheus
  template:
    metadata:
      annotations:
        checksum/grafana-datasources: fbf9c3b28f5667257167c2cec0ac311a
      labels:
        app.kubernetes.io/component: grafana
        app.kubernetes.io/name: grafana
        app.kubernetes.io/part-of: kube-prometheus
        app.kubernetes.io/version: 8.1.1
    spec:
      containers:
      - env: []
        image: grafana/grafana:8.1.1
        name: grafana
        ports:
        - containerPort: 3000
          name: http
        readinessProbe:
          httpGet:
            path: /api/health
            port: http
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-storage
          readOnly: false
        - mountPath: /etc/grafana/provisioning/datasources
          name: grafana-datasources
          readOnly: false
        - mountPath: /etc/grafana/provisioning/dashboards
          name: grafana-dashboards
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/alertmanager-overview
          name: grafana-dashboard-alertmanager-overview
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/apiserver
          name: grafana-dashboard-apiserver
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/cluster-total
          name: grafana-dashboard-cluster-total
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/controller-manager
          name: grafana-dashboard-controller-manager
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/k8s-resources-cluster
          name: grafana-dashboard-k8s-resources-cluster
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/k8s-resources-namespace
          name: grafana-dashboard-k8s-resources-namespace
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/k8s-resources-node
          name: grafana-dashboard-k8s-resources-node
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/k8s-resources-pod
          name: grafana-dashboard-k8s-resources-pod
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/k8s-resources-workload
          name: grafana-dashboard-k8s-resources-workload
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/k8s-resources-workloads-namespace
          name: grafana-dashboard-k8s-resources-workloads-namespace
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/kubelet
          name: grafana-dashboard-kubelet
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/namespace-by-pod
          name: grafana-dashboard-namespace-by-pod
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/namespace-by-workload
          name: grafana-dashboard-namespace-by-workload
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/node-cluster-rsrc-use
          name: grafana-dashboard-node-cluster-rsrc-use
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/node-rsrc-use
          name: grafana-dashboard-node-rsrc-use
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/nodes
          name: grafana-dashboard-nodes
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/persistentvolumesusage
          name: grafana-dashboard-persistentvolumesusage
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/pod-total
          name: grafana-dashboard-pod-total
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/prometheus-remote-write
          name: grafana-dashboard-prometheus-remote-write
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/prometheus
          name: grafana-dashboard-prometheus
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/proxy
          name: grafana-dashboard-proxy
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/scheduler
          name: grafana-dashboard-scheduler
          readOnly: false
        - mountPath: /grafana-dashboard-definitions/0/workload-total
          name: grafana-dashboard-workload-total
          readOnly: false
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        fsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: grafana
      volumes:
      - name: grafana-storage       # new: persistent storage volume
        persistentVolumeClaim:
          claimName: grafana        # name of the PVC created above
      - name: grafana-datasources
        secret:
          secretName: grafana-datasources
      - configMap:
          name: grafana-dashboards
        name: grafana-dashboards
      - configMap:
          name: grafana-dashboard-alertmanager-overview
        name: grafana-dashboard-alertmanager-overview
      - configMap:
          name: grafana-dashboard-apiserver
        name: grafana-dashboard-apiserver
      - configMap:
          name: grafana-dashboard-cluster-total
        name: grafana-dashboard-cluster-total
      - configMap:
          name: grafana-dashboard-controller-manager
        name: grafana-dashboard-controller-manager
      - configMap:
          name: grafana-dashboard-k8s-resources-cluster
        name: grafana-dashboard-k8s-resources-cluster
      - configMap:
          name: grafana-dashboard-k8s-resources-namespace
        name: grafana-dashboard-k8s-resources-namespace
      - configMap:
          name: grafana-dashboard-k8s-resources-node
        name: grafana-dashboard-k8s-resources-node
      - configMap:
          name: grafana-dashboard-k8s-resources-pod
        name: grafana-dashboard-k8s-resources-pod
      - configMap:
          name: grafana-dashboard-k8s-resources-workload
        name: grafana-dashboard-k8s-resources-workload
      - configMap:
          name: grafana-dashboard-k8s-resources-workloads-namespace
        name: grafana-dashboard-k8s-resources-workloads-namespace
      - configMap:
          name: grafana-dashboard-kubelet
        name: grafana-dashboard-kubelet
      - configMap:
          name: grafana-dashboard-namespace-by-pod
        name: grafana-dashboard-namespace-by-pod
      - configMap:
          name: grafana-dashboard-namespace-by-workload
        name: grafana-dashboard-namespace-by-workload
      - configMap:
          name: grafana-dashboard-node-cluster-rsrc-use
        name: grafana-dashboard-node-cluster-rsrc-use
      - configMap:
          name: grafana-dashboard-node-rsrc-use
        name: grafana-dashboard-node-rsrc-use
      - configMap:
          name: grafana-dashboard-nodes
        name: grafana-dashboard-nodes
      - configMap:
          name: grafana-dashboard-persistentvolumesusage
        name: grafana-dashboard-persistentvolumesusage
      - configMap:
          name: grafana-dashboard-pod-total
        name: grafana-dashboard-pod-total
      - configMap:
          name: grafana-dashboard-prometheus-remote-write
        name: grafana-dashboard-prometheus-remote-write
      - configMap:
          name: grafana-dashboard-prometheus
        name: grafana-dashboard-prometheus
      - configMap:
          name: grafana-dashboard-proxy
        name: grafana-dashboard-proxy
      - configMap:
          name: grafana-dashboard-scheduler
        name: grafana-dashboard-scheduler
      - configMap:
          name: grafana-dashboard-workload-total
        name: grafana-dashboard-workload-total
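For reference, relative to the upstream grafana-deployment.yaml the only functional change is in the volumes section; roughly the following diff (a sketch, assuming the upstream release-0.9 file mounts grafana-storage from an emptyDir):

       volumes:
-      - emptyDir: {}
-        name: grafana-storage
+      - name: grafana-storage
+        persistentVolumeClaim:
+          claimName: grafana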

 

2.3 Step 3: Change the Service type and ports

 

2.3.1 Modify the Prometheus Service

 

Change the Prometheus Service type to NodePort and set the NodePort to 32101:

 

File: manifests/prometheus/prometheus-service.yaml

 

The modified prometheus-service.yaml:

cat manifests/prometheus/prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.29.1
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 32101
  selector:
    app: prometheus
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
  sessionAffinity: ClientIP

2.3.2 Modify the Grafana Service

 

Change the Grafana Service type to NodePort and set the NodePort to 32102:

 

File: manifests/grafana/grafana-service.yaml

 

The modified grafana-service.yaml:

cat manifests/grafana/grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.1.1
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 32102
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus

List the images that will be needed:

find ./manifests/ -type f |xargs grep 'image: '|sort|uniq|awk '{print $3}'|grep ^[a-zA-Z]|grep -Evw 'error|kubeRbacProxy'|sort -rn|uniq

[root@master kube-prometheus-0.9.0]# find ./manifests/ -type f |xargs grep 'image: '|sort|uniq|awk '{print $3}'|grep ^[a-zA-Z]|grep -Evw 'error|kubeRbacProxy'|sort -rn|uniq
quay.io/prometheus/prometheus:v2.29.1
quay.io/prometheus-operator/prometheus-operator:v0.49.0
quay.io/prometheus/node-exporter:v1.2.2
quay.io/prometheus/blackbox-exporter:v0.19.0
quay.io/prometheus/alertmanager:v0.22.2
quay.io/brancz/kube-rbac-proxy:v0.11.0
k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.0
k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.1
jimmidyson/configmap-reload:v0.5.0
grafana/grafana:8.1.1

 

Prepare the images (repeat this on every node).

The images hosted on k8s.gcr.io cannot be pulled directly from within mainland China, so prepare them in advance.
Pull these two images from a mirror repository:

docker pull zfhub/prometheus-adapter:v0.9.0
docker pull zfhub/kube-state-metrics:v2.1.1

Re-tag them to the names the manifests expect:

docker tag zfhub/prometheus-adapter:v0.9.0 k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.0
docker tag zfhub/kube-state-metrics:v2.1.1 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.1

Pull the remaining images:

docker pull quay.io/prometheus/prometheus:v2.29.1
docker pull quay.io/prometheus-operator/prometheus-operator:v0.49.0
docker pull quay.io/prometheus/node-exporter:v1.2.2
docker pull quay.io/prometheus/blackbox-exporter:v0.19.0
docker pull quay.io/prometheus/alertmanager:v0.22.2
docker pull quay.io/brancz/kube-rbac-proxy:v0.11.0
docker pull jimmidyson/configmap-reload:v0.5.0
docker pull grafana/grafana:8.1.1
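
Because this has to be repeated on every node, a small helper script can save some typing. This is only a convenience sketch; it assumes the same mirror tags (zfhub/...) used above:

#!/bin/bash
# Pull the images that k8s.gcr.io would normally serve from a mirror, then re-tag them locally.
set -e
pairs=(
  "zfhub/prometheus-adapter:v0.9.0 k8s.gcr.io/prometheus-adapter/prometheus-adapter:v0.9.0"
  "zfhub/kube-state-metrics:v2.1.1 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.1.1"
)
for p in "${pairs[@]}"; do
  src=${p%% *}   # mirror image
  dst=${p##* }   # expected k8s.gcr.io name
  docker pull "$src"
  docker tag "$src" "$dst"
done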

 

2.4 Step 4: Install prometheus-operator

 

Note: all of the following commands are run from the manifests directory!

cd kube-prometheus-0.9.0/manifests/

Install the Operator:

kubectl apply -f setup/
[root@master tmp]# cd kube-prometheus-0.9.0/manifests/
[root@master manifests]# kubectl apply -f setup/
namespace/monitoring unchanged
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com configured
clusterrole.rbac.authorization.k8s.io/prometheus-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator unchanged
deployment.apps/prometheus-operator unchanged
service/prometheus-operator unchanged
serviceaccount/prometheus-operator unchanged

Check the Pod and wait until it is running before moving on:

kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-75d9b475d9-2srpf   2/2     Running   0          112m
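
Instead of re-running kubectl get pods, you can block until the Operator rollout completes (same deployment name as in the apply output above):

kubectl rollout status deployment/prometheus-operator -n monitoring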

Next, install the remaining components:
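
The transcript below applies each directory one at a time; a single loop over the directories created in step 1 works just as well (a sketch):

for d in adapter alertmanager node-exporter kube-state-metrics grafana prometheus serviceMonitor; do
  kubectl apply -f "$d/"
done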

 

[root@master manifests]# kubectl apply -f adapter/
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-adapter unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter unchanged
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator unchanged
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources unchanged
configmap/adapter-config unchanged
deployment.apps/prometheus-adapter configured
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/prometheus-adapter configured
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader unchanged
service/prometheus-adapter unchanged
serviceaccount/prometheus-adapter unchanged
[root@master manifests]# kubectl apply -f alertmanager/
alertmanager.monitoring.coreos.com/main unchanged
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/alertmanager-main configured
prometheusrule.monitoring.coreos.com/alertmanager-main-rules unchanged
secret/alertmanager-main configured
service/alertmanager-main unchanged
serviceaccount/alertmanager-main unchanged
[root@master manifests]# kubectl apply -f node-exporter/
clusterrole.rbac.authorization.k8s.io/node-exporter unchanged
clusterrolebinding.rbac.authorization.k8s.io/node-exporter unchanged
daemonset.apps/node-exporter unchanged
prometheusrule.monitoring.coreos.com/node-exporter-rules unchanged
service/node-exporter unchanged
serviceaccount/node-exporter unchanged
[root@master manifests]# kubectl apply -f kube-state-metrics/
clusterrole.rbac.authorization.k8s.io/kube-state-metrics unchanged
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics unchanged
deployment.apps/kube-state-metrics unchanged
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules unchanged
service/kube-state-metrics unchanged
serviceaccount/kube-state-metrics unchanged
[root@master manifests]# kubectl apply -f grafana/
secret/grafana-datasources unchanged
configmap/grafana-dashboard-alertmanager-overview unchanged
configmap/grafana-dashboard-apiserver unchanged
configmap/grafana-dashboard-cluster-total unchanged
configmap/grafana-dashboard-controller-manager unchanged
configmap/grafana-dashboard-k8s-resources-cluster unchanged
configmap/grafana-dashboard-k8s-resources-namespace unchanged
configmap/grafana-dashboard-k8s-resources-node unchanged
configmap/grafana-dashboard-k8s-resources-pod unchanged
configmap/grafana-dashboard-k8s-resources-workload unchanged
configmap/grafana-dashboard-k8s-resources-workloads-namespace unchanged
configmap/grafana-dashboard-kubelet unchanged
configmap/grafana-dashboard-namespace-by-pod unchanged
configmap/grafana-dashboard-namespace-by-workload unchanged
configmap/grafana-dashboard-node-cluster-rsrc-use unchanged
configmap/grafana-dashboard-node-rsrc-use unchanged
configmap/grafana-dashboard-nodes unchanged
configmap/grafana-dashboard-persistentvolumesusage unchanged
configmap/grafana-dashboard-pod-total unchanged
configmap/grafana-dashboard-prometheus-remote-write unchanged
configmap/grafana-dashboard-prometheus unchanged
configmap/grafana-dashboard-proxy unchanged
configmap/grafana-dashboard-scheduler unchanged
configmap/grafana-dashboard-workload-total unchanged
configmap/grafana-dashboards unchanged
deployment.apps/grafana configured
persistentvolumeclaim/grafana unchanged
service/grafana unchanged
serviceaccount/grafana unchanged
[root@master manifests]# kubectl apply -f prometheus/
clusterrole.rbac.authorization.k8s.io/prometheus-k8s unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
prometheusrule.monitoring.coreos.com/prometheus-operator-rules unchanged
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/prometheus-k8s configured
prometheus.monitoring.coreos.com/k8s unchanged
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
rolebinding.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
role.rbac.authorization.k8s.io/prometheus-k8s unchanged
service/prometheus-k8s unchanged
serviceaccount/prometheus-k8s unchanged
[root@master manifests]# kubectl apply -f serviceMonitor/
servicemonitor.monitoring.coreos.com/alertmanager unchanged
servicemonitor.monitoring.coreos.com/blackbox-exporter unchanged
servicemonitor.monitoring.coreos.com/grafana unchanged
servicemonitor.monitoring.coreos.com/kube-state-metrics unchanged
servicemonitor.monitoring.coreos.com/kube-apiserver unchanged
servicemonitor.monitoring.coreos.com/coredns unchanged
servicemonitor.monitoring.coreos.com/kube-controller-manager unchanged
servicemonitor.monitoring.coreos.com/kube-scheduler unchanged
servicemonitor.monitoring.coreos.com/kubelet unchanged
servicemonitor.monitoring.coreos.com/node-exporter unchanged
servicemonitor.monitoring.coreos.com/prometheus-adapter unchanged
servicemonitor.monitoring.coreos.com/prometheus-operator unchanged
servicemonitor.monitoring.coreos.com/prometheus-k8s unchanged

Check the Pod status and wait until everything is Running:

[root@master manifests]# kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          112m
alertmanager-main-1                    2/2     Running   0          112m
alertmanager-main-2                    2/2     Running   0          112m
grafana-595b555899-2qpgs               1/1     Running   0          60m
kube-state-metrics-74964b6cd4-5sxq5    3/3     Running   0          112m
node-exporter-5cndt                    2/2     Running   0          112m
node-exporter-kghsv                    2/2     Running   0          112m
node-exporter-m4zpn                    2/2     Running   0          112m
prometheus-adapter-5b8db7955f-8mpmv    1/1     Running   0          112m
prometheus-adapter-5b8db7955f-h69qw    1/1     Running   0          112m
prometheus-k8s-0                       2/2     Running   0          109m
prometheus-k8s-1                       2/2     Running   0          109m
prometheus-operator-75d9b475d9-2srpf   2/2     Running   0          113m
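
To block until everything is ready instead of watching the list, something like the following works (adjust the timeout as needed):

kubectl wait --for=condition=Ready pod --all -n monitoring --timeout=600s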

2.5 Step 5: Verification

 

Open http://101.43.196.155:32101/targets and check that all targets look healthy:

 

Many metrics are already being scraped. Prometheus runs with two replicas here and is accessed through the Service, so you might expect requests to round-robin across the two instances; in practice every request lands on the same backend, because the Service was created with sessionAffinity: ClientIP, which pins each client IP to one replica. There is therefore no need to worry about requests bouncing between replicas.
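
The affinity setting can be confirmed directly on the Service:

kubectl get svc prometheus-k8s -n monitoring -o jsonpath='{.spec.sessionAffinity}{"\n"}'
# Expected output: ClientIP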

 

Grafana may fail to start successfully; check its status in the pod list:

[root@master kube-prometheus-0.9.0]# kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          127m
alertmanager-main-1                    2/2     Running   0          127m
alertmanager-main-2                    2/2     Running   0          127m
grafana-595b555899-2qpgs               1/1     Running   0          74m
kube-state-metrics-74964b6cd4-5sxq5    3/3     Running   0          126m
node-exporter-5cndt                    2/2     Running   0          126m
node-exporter-kghsv                    2/2     Running   0          126m
node-exporter-m4zpn                    2/2     Running   0          126m
prometheus-adapter-5b8db7955f-8mpmv    1/1     Running   0          127m
prometheus-adapter-5b8db7955f-h69qw    1/1     Running   0          127m
prometheus-k8s-0                       2/2     Running   0          123m
prometheus-k8s-1                       2/2     Running   0          123m
prometheus-operator-75d9b475d9-2srpf   2/2     Running   0          128m

 

If it is not running, delete the pod with the command below and the Deployment will recreate it automatically:

kubectl delete pod grafana-595b555899-2qpgs -n monitoring

Open http://101.43.196.155:32102; the default username and password are both admin:

All components have been installed successfully!

03 Conclusion


At this point, kube-prometheus has been installed on k8s successfully!

 

 

 

 

 

 

 
