kube-prometheus Installation

Official repository: https://github.com/prometheus-operator/kube-prometheus

Clone the code; make sure the release branch matches your Kubernetes version.

git clone -b  release-0.8 https://github.com/prometheus-operator/kube-prometheus.git

Add kubelet parameters. The two webhook flags (--authentication-token-webhook and --authorization-mode) are the additions; the rest of KUBELET_OPTS is shown for context.

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=master-1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--authentication-token-webhook=true \
--authorization-mode=Webhook \
--pod-infra-container-image=liuyanzhen/pause-amd64:3.0"
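These flags live in the kubelet options file on each node; the path below is an assumption based on the /opt/kubernetes layout shown above, so adjust it to your environment. After editing, reload and restart kubelet on every node:

# assumed path for the file that defines KUBELET_OPTS; adjust as needed
vim /opt/kubernetes/cfg/kubelet.conf
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet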

Mode:

A ClusterIP Service is created, a hostNetwork ingress controller is deployed as a DaemonSet to expose ports directly on the host, and an Ingress is then created to proxy the ClusterIP Service.

DaemonSet+HostNetwork+nodeSelector

Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller onto specific nodes, and use hostNetwork to put the pod directly on the host node's network, so services can be reached on the host's ports 80/443. The nodes running the ingress-controller then behave much like the edge nodes of a traditional architecture, such as the nginx servers at a data-center entrance. This approach has the simplest request path and better performance than the NodePort mode. The drawback is that, because it uses the host's network and ports directly, each node can run only one ingress-controller pod. It is well suited to high-concurrency production environments.
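Once the controller DaemonSet is deployed (later in this post), this pattern is easy to confirm: nginx listens directly on the host network of each selected node, and the pod IP equals the node IP. A minimal check (the pod label follows the ingress-nginx chart defaults and is an assumption here):

# run on a node selected for the ingress controller: nginx binds 80/443 on the host
ss -lntp | grep nginx
# the pod IP should equal the node IP because of hostNetwork
kubectl get pod -l app.kubernetes.io/name=ingress-nginx -o wide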

Deploy the Prometheus components

Modify the image addresses

[root@master-1 manifests]# ll |grep deployment
-rw-r--r-- 1 root root    3080 5月  30 10:39 blackbox-exporter-deployment.yaml
-rw-r--r-- 1 root root    8065 5月  30 10:46 grafana-deployment.yaml
-rw-r--r-- 1 root root    2957 5月  30 10:39 kube-state-metrics-deployment.yaml
-rw-r--r-- 1 root root    1804 5月  30 10:39 prometheus-adapter-deployment.yaml
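The image: lines in these manifests can be pointed at a private registry. A minimal sketch, run from the manifests directory and assuming a Harbor project at harbor.example.com/library (placeholder) and the grafana image from the pull list below:

# list the images referenced by the deployment manifests
grep -n "image:" *deployment.yaml
# example rewrite for grafana (harbor.example.com/library is a placeholder address)
sed -i 's#image: grafana/grafana:7.5.4#image: harbor.example.com/library/grafana:7.5.4#' grafana-deployment.yaml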

For images that cannot be pulled directly, pull them on an overseas server, push them to Harbor, and pull them from Harbor instead.

docker pull quay.io/prometheus/blackbox-exporter:v0.18.0
docker pull jimmidyson/configmap-reload:v0.5.0
docker pull quay.io/brancz/kube-rbac-proxy:v0.8.0
docker pull grafana/grafana:7.5.4
docker pull directxman12/k8s-prometheus-adapter:v0.8.4
docker pull quay.io/prometheus-operator/prometheus-operator:v0.47.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
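After pulling on a machine that can reach these registries, retag and push the images to your Harbor instance; harbor.example.com/library below is a placeholder project:

docker tag quay.io/prometheus/blackbox-exporter:v0.18.0 harbor.example.com/library/blackbox-exporter:v0.18.0
docker push harbor.example.com/library/blackbox-exporter:v0.18.0
# repeat for the remaining images, then update the image addresses in the manifests accordingly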

Do not use image digest validation

    registry: registry.cn-hangzhou.aliyuncs.com
    image: google_containers/nginx-ingress-controller
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
    ## repository:
    tag: "v1.3.0"

Deploy

Deploy the prerequisites (namespace and CRDs) first

kubectl apply --server-side -f manifests/setup

Verify

kubectl wait \
> --for condition=Established \
> --all CustomResourceDefinition \
> --namespace=monitoring

customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org condition met
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com condition met

Deploy the main components

kubectl apply -f manifests/

Check the pods

[root@master-1 kube-prometheus]# kubectl   get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    1/2     Running   2          6m36s
alertmanager-main-1                    1/2     Running   3          6m36s
alertmanager-main-2                    1/2     Running   2          6m36s
blackbox-exporter-55f94897c4-rw8nt     3/3     Running   0          6m36s
grafana-5b5c586c88-kzhtk               1/1     Running   0          6m33s
kube-state-metrics-65c7449585-kpv4d    3/3     Running   0          6m32s
node-exporter-b54fs                    2/2     Running   0          6m31s
node-exporter-cq2s7                    2/2     Running   0          6m31s
node-exporter-k4b76                    2/2     Running   0          6m31s
node-exporter-qnhw9                    2/2     Running   0          6m31s
node-exporter-qpql5                    2/2     Running   0          6m31s
prometheus-adapter-758696b565-dmb6r    1/1     Running   0          6m30s
prometheus-adapter-758696b565-xwnrh    1/1     Running   0          6m30s
prometheus-k8s-0                       2/2     Running   1          6m30s
prometheus-k8s-1                       2/2     Running   1          6m30s
prometheus-operator-7775c66ccf-bqx8n   2/2     Running   0          13m

Check the Services

[root@master-1 ingress]# kubectl   get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                        AGE
default                ingress-nginx-controller    ClusterIP   10.0.220.24    <none>        7899/TCP,53443/TCP             52m
default                kubernetes                  ClusterIP   10.0.0.1       <none>        443/TCP                        227d
kube-system            calico-typha                ClusterIP   10.0.240.85    <none>        5473/TCP                       226d
kube-system            kube-dns                    ClusterIP   10.0.0.2       <none>        53/UDP,53/TCP,9153/TCP         226d
kube-system            kubelet                     ClusterIP   None           <none>        10250/TCP,10255/TCP,4194/TCP   37m
kube-system            metrics-server              ClusterIP   10.0.236.238   <none>        443/TCP                        218d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.0.108.242   <none>        8000/TCP                       218d
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.0.196.6     <none>        443:30002/TCP                  218d
monitoring             alertmanager-main           ClusterIP   10.0.83.70     <none>        9093/TCP                       31m
monitoring             alertmanager-operated       ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP     31m
monitoring             blackbox-exporter           ClusterIP   10.0.88.253    <none>        9115/TCP,19115/TCP             31m
monitoring             grafana                     ClusterIP   10.0.205.91    <none>        3000/TCP                       31m
monitoring             kube-state-metrics          ClusterIP   None           <none>        8443/TCP,9443/TCP              31m
monitoring             node-exporter               ClusterIP   None           <none>        9100/TCP                       31m
monitoring             prometheus-adapter          ClusterIP   10.0.62.215    <none>        443/TCP                        31m
monitoring             prometheus-k8s              ClusterIP   10.0.133.157   <none>        9090/TCP                       31m
monitoring             prometheus-operated         ClusterIP   None           <none>        9090/TCP                       31m
monitoring             prometheus-operator         ClusterIP   None           <none>        8443/TCP                       38m

Deploy the ingress controller

Check the available chart versions; the chart version must be compatible with your Kubernetes version.

helm search repo ingress-nginx --versions
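If the search returns nothing, the repository has probably not been added yet; the official ingress-nginx chart repository can be added first:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update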

Pull the chart

 helm pull ingress-nginx/ingress-nginx --version 4.2.2
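helm pull downloads a .tgz archive; unpack it so values.yaml can be edited and the chart can later be installed from the local directory (the file name follows helm's chart-version naming convention):

tar -zxvf ingress-nginx-4.2.2.tgz
# values.yaml is now at ingress-nginx/values.yaml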

Edit values.yaml

Modify the node label selector (comment out the default)

      #nodeSelector:
       # beta.kubernetes.io/os: linux

Network type and DNS policy

hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet

Controller workload type

kind: DaemonSet

Node selector

nodeSelector:
  ingress: "true" # 增加选择器,如果 node 上有 ingress=true 就部署

Disable the admission webhook

Set admissionWebhooks.enabled to false.

Service type

Change the Service type from LoadBalancer to ClusterIP; only use LoadBalancer when the cluster runs on a cloud platform that provides one.

Deploy the chart

helm install ingress-nginx ingress-nginx/
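A quick post-install check that the release exists and the DaemonSet has a pod on each labeled node (the resource name ingress-nginx-controller follows the chart defaults):

helm list
kubectl get ds ingress-nginx-controller
kubectl get pod -o wide | grep ingress-nginx-controller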

Check the ingress controller (IngressClass)

[root@master-1 kube-prometheus]# kubectl   get  ingressClass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       116m

Deploy the Ingress forwarding rules

# Access via domain names (if you have no DNS, add hosts entries on the client machine)
# Because the controller runs as a DaemonSet, a pod is deployed on every labeled node, so any of those node IPs can be used.
# If a load balancer sits in front, use the load balancer's IP or domain name instead.
#192.168.43.115  grafana.web.cn
#192.168.43.115  prometheus.web.cn
#192.168.43.115  alertmanager.web.cn

# Create prometheus-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: monitoring
  name: prometheus-ingress
spec:
  ingressClassName: nginx    # which ingress controller handles this Ingress -- important!
  rules:
  - host: grafana.web.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
  - host: prometheus.web.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              number: 9090
  - host: alertmanager.web.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093
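Apply the rules and confirm the Ingress is picked up by the nginx class; curl can then be used against any controller node's IP (192.168.43.115 is the example node from the hosts comment above):

kubectl apply -f prometheus-ingress.yaml
kubectl get ingress -n monitoring
# test without touching DNS: send the Host header directly to a controller node
curl -H "Host: grafana.web.cn" http://192.168.43.115/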

Configure hosts
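For example, on the client machine (the node IP is the example one from the hosts comment above):

cat >> /etc/hosts <<'EOF'
192.168.43.115  grafana.web.cn
192.168.43.115  prometheus.web.cn
192.168.43.115  alertmanager.web.cn
EOF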

Access

Further topics

View the nginx.conf generated by the ingress controller

kubectl exec ingress-nginx-controller-9rkv2 -- cat /etc/nginx/nginx.conf > nginx.conf

You can see that requests are forwarded to the backend Services.
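For example, the generated server blocks for the three hosts can be located like this (assuming the standard server_name directives in the rendered configuration):

grep -n "server_name" nginx.conf
grep -A 3 "server_name grafana.web.cn" nginx.conf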

Configure HTTPS Ingress forwarding for multiple domains

Create the certificate and Secret

# Generate a self-signed certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=magedu-nginx-service/O=magedu-nginx-service"
# Create the TLS Secret in Kubernetes (in the monitoring namespace, where the Ingress lives)
kubectl create secret tls tls-secret --key tls.key --cert tls.crt -n monitoring
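The certificate and Secret can be checked before wiring them into the Ingress:

# inspect the self-signed certificate
openssl x509 -in tls.crt -noout -subject -dates
# confirm the Secret exists in the monitoring namespace
kubectl get secret tls-secret -n monitoring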

Configure the certificate in the Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: monitoring
  name: prometheus-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - web.cn    # common entry; the individual subdomains can also be listed here
    secretName: tls-secret   # name of the TLS Secret
  rules:
  - host: grafana.web.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
  - host: prometheus.web.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              number: 9090
  - host: alertmanager.web.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093
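Apply the updated Ingress and test HTTPS with curl; -k skips certificate verification because the certificate is self-signed, and --resolve pins the domain to a controller node IP (the example IP from the hosts section):

kubectl apply -f prometheus-ingress.yaml
curl -kv --resolve grafana.web.cn:443:192.168.43.115 https://grafana.web.cn/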

Access

 

Export the ingress controller configuration again and check that the TLS server blocks were generated.

 
