|NO.Z.00297|——————————|CloudNative|——|KuberNetes&Ops.V18|——|Monitoring.v04|Deploy kube-prometheus|

1. Install kube-prometheus
### --- Download a released version of kube-prometheus

~~~     # kube-prometheus repository:
~~~     https://github.com/coreos/kube-prometheus.git
~~~     On the GitHub page: main --> Switch branches/tags --> Branches: release-0.5
### --- Clone the installation files

[root@k8s-master01 prometheus]# git clone -b release-0.5 --single-branch https://github.com/coreos/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 8051, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 8051 (delta 0), reused 1 (delta 0), pack-reused 8049
Receiving objects: 100% (8051/8051), 4.54 MiB | 27.00 KiB/s, done.
Resolving deltas: 100% (4876/4876), done.
### --- Inspect the cloned files

[root@k8s-master01 prometheus]# cd kube-prometheus/
[root@k8s-master01 kube-prometheus]# ls
build.sh            DCO   example.jsonnet  experimental  go.sum  jsonnet           jsonnetfile.lock.json  LICENSE   manifests  OWNERS     scripts                            tests
code-of-conduct.md  docs  examples         go.mod        hack    jsonnetfile.json  kustomization.yaml     Makefile  NOTICE     README.md  sync-to-internal-registry.jsonnet  test.sh
### --- The manifests directory in detail
~~~     Note: this is the working directory; it contains predefined manifest templates that can be applied directly.

[root@k8s-master01 kube-prometheus]# ls manifests/
alertmanager-alertmanager.yaml     // Deploys Alertmanager
node-exporter-daemonset.yaml       // Defines node-exporter, which scrapes host-level metrics; this data is more detailed than what zabbix collects
prometheus-prometheus.yaml         // Deploys the Prometheus server
grafana-dashboardDatasources.yaml  // Grafana defines many dashboards, and they are stored in ConfigMaps. Without backend storage, adding a new dashboard means mounting it into the ConfigMap so Grafana can read it; with a host deployment you can upload a dashboard directly and it is stored on the host. Host deployment is used for this environment, since Grafana only displays data and an outage is not critical.
prometheus-rules.yaml              // Defines Prometheus's default alerting and recording rules
grafana-deployment.yaml            // Defines Grafana; the bundled dashboards are injected via ConfigMaps. If storage is available, it is best to mount it into the pod's data directory; otherwise consider deploying Grafana on a host, since parameters and dashboards change frequently and a pure in-container deployment is inconvenient (a storage sketch follows this listing)
setup
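~~~     Note: if persistent storage is available for Grafana, the simplest approach is to back its data volume with a PVC instead of an emptyDir. A minimal sketch of the volumes section to merge into grafana-deployment.yaml, assuming the shipped Deployment uses an emptyDir volume named grafana-storage; the PVC name grafana-pvc is hypothetical and must be created in the monitoring namespace first:

      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-pvc        # hypothetical PVC, create it beforehand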
2. Install the Operator
### --- Enter the operator setup directory

[root@k8s-master01 setup]# pwd
/root/README/EFK/prometheus/kube-prometheus/manifests/setup
### --- Install the operator
~~~     Note: this creates a namespace named monitoring.

[root@k8s-master01 setup]# kubectl create -f .
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
### --- Check the operator installation result

[root@k8s-master01 setup]# kubectl get po -n monitoring -owide
NAME                                   READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
prometheus-operator-848d669f6d-j5vjd   2/2     Running   0          64m   172.17.125.16   k8s-node01   <none>           <none>
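~~~     Note: if the operator pod does not reach Running, its logs are the first place to look. A sketch of a quick check; the container name prometheus-operator is an assumption based on the 2/2 pod above, whose second container is a kube-rbac-proxy sidecar:

kubectl logs -n monitoring deploy/prometheus-operator -c prometheus-operator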
### --- Verify that the CRDs have been created

[root@k8s-master01 setup]# until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
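~~~     Note: the loop above polls until the servicemonitors API answers. A one-off equivalent is to list the CRDs in the monitoring.coreos.com group:

kubectl get crd | grep monitoring.coreos.com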
3. Create the Prometheus cluster
### --- Enter the manifests directory

[root@k8s-master01 manifests]# pwd
/root/README/EFK/prometheus/kube-prometheus/manifests
### --- Edit the configuration files

[root@k8s-master01 manifests]# vim alertmanager-alertmanager.yaml 
~~~     Note 1:
  replicas: 1                              // the default replica count is 3; one replica is enough here, but production should run at least 3
~~~     Note 2:
  nodeSelector:
    kubernetes.io/hostname: k8s-node02     // pin the pod to k8s-node02
[root@k8s-master01 manifests]# vim prometheus-prometheus.yaml           // set the Prometheus replica count
  replicas: 1
[root@k8s-master01 manifests]# vim prometheus-adapter-deployment.yaml   // set the replica count to 1
  replicas: 1
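~~~     Note: after the edits, the relevant part of alertmanager-alertmanager.yaml should look roughly like the sketch below; only the changed fields are shown, everything else stays as shipped:

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: main
  namespace: monitoring
spec:
  replicas: 1                            # default is 3; one is enough for a lab, use >=3 in production
  nodeSelector:
    kubernetes.io/hostname: k8s-node02   # schedule the Alertmanager pod onto k8s-node02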
### --- Create the Prometheus cluster

[root@k8s-master01 manifests]# kubectl create -f . 
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
### --- Check the status of the created pods

[root@k8s-master01 manifests]# kubectl get po -n monitoring -owide
NAME                                   READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
alertmanager-main-0                    2/2     Running   0          22m   172.25.244.211   k8s-master01   <none>           <none>
grafana-5d9d5f67c4-68kxb               1/1     Running   0          22m   172.17.125.18    k8s-node01     <none>           <none>
kube-state-metrics-7fddf8779f-g7959    3/3     Running   0          22m   172.25.244.212   k8s-master01   <none>           <none>
node-exporter-db78b                    2/2     Running   0          22m   192.168.1.15     k8s-node02     <none>           <none>
node-exporter-rwdf8                    2/2     Running   0          22m   192.168.1.14     k8s-node01     <none>           <none>
node-exporter-sxf9d                    2/2     Running   0          22m   192.168.1.11     k8s-master01   <none>           <none>
prometheus-adapter-cb548cdbf-qnjgd     1/1     Running   0          22m   172.17.125.17    k8s-node01     <none>           <none>
prometheus-k8s-0                       3/3     Running   2          22m   172.27.14.209    k8s-node02     <none>           <none>
prometheus-k8s-1                       3/3     Running   2          22m   172.27.14.208    k8s-node02     <none>           <none>
prometheus-operator-848d669f6d-j5vjd   2/2     Running   0          96m   172.17.125.16    k8s-node01     <none>           <none>
### --- Three services of interest after the deployment

~~~     Service 1: alertmanager-main: used frequently, so give it a domain name; here you can see the current alerts, which ones need handling, and which ones to silence.
~~~     Service 2: grafana: give it a domain name; this is where the data is visualized.
~~~     Service 3: prometheus-k8s: give it a domain name; used to create rules, check whether rule syntax is correct, run queries, and inspect targets.
[root@k8s-master01 manifests]# kubectl get svc -n monitoring -owide
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
alertmanager-main       ClusterIP   10.111.201.48   <none>        9093/TCP                     24m   alertmanager=main,app=alertmanager
alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   24m   app=alertmanager
grafana                 ClusterIP   10.98.164.98    <none>        3000/TCP                     24m   app=grafana
kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            24m   app.kubernetes.io/name=kube-state-metrics
node-exporter           ClusterIP   None            <none>        9100/TCP                     24m   app.kubernetes.io/name=node-exporter,app.kubernetes.io/version=v0.18.1
prometheus-adapter      ClusterIP   10.98.176.139   <none>        443/TCP                      24m   name=prometheus-adapter
prometheus-k8s          ClusterIP   10.110.112.47   <none>        9090/TCP                     23m   app=prometheus,prometheus=k8s
prometheus-operated     ClusterIP   None            <none>        9090/TCP                     23m   app=prometheus
prometheus-operator     ClusterIP   None            <none>        8443/TCP                     99m   app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator
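~~~     Note: until the Ingress from the next section is in place, the three UIs can be reached temporarily with kubectl port-forward against the services listed above, for example:

kubectl port-forward -n monitoring svc/grafana 3000:3000            # http://127.0.0.1:3000
kubectl port-forward -n monitoring svc/prometheus-k8s 9090:9090     # http://127.0.0.1:9090
kubectl port-forward -n monitoring svc/alertmanager-main 9093:9093  # http://127.0.0.1:9093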
4. Create the Prometheus Ingress
### --- Create the prometheus-ingress.yaml file

[root@k8s-master01 prometheus]# vim prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prom-ingresses
  namespace: monitoring
spec:
  rules:
  - host: alert.test.com
    http:
      paths:
      - backend:
          serviceName: alertmanager-main
          servicePort: 9093
        path: /
  - host: grafana.test.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
        path: /
  - host: prom.test.com
    http:
      paths:
      - backend:
          serviceName: prometheus-k8s
          servicePort: 9090
        path: /
### --- Create the Ingress

[root@k8s-master01 prometheus]# kubectl create -f prometheus-ingress.yaml -n monitoring
ingress.extensions/prom-ingresses created
### --- Check the result

[root@k8s-master01 prometheus]# kubectl get ingress -n monitoring
NAME             CLASS    HOSTS                                           ADDRESS          PORTS   AGE
prom-ingresses   <none>   alert.test.com,grafana.test.com,prom.test.com   10.107.150.111   80      44s
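~~~     Note: extensions/v1beta1 works on the cluster version used here, but that Ingress API was removed in Kubernetes 1.22. On newer clusters the same rules would be written against networking.k8s.io/v1; a sketch for one of the three hosts, with the other two following the same pattern:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prom-ingresses
  namespace: monitoring
spec:
  rules:
  - host: grafana.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000

~~~     Note: alert.test.com, grafana.test.com and prom.test.com are placeholder domains; point them at the Ingress address (10.107.150.111 above) via DNS or /etc/hosts before opening the UIs in a browser.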