Cloud-Native Monitoring with Prometheus: Installing Prometheus + Grafana on a Kubernetes Cluster with Prometheus-Operator
This article installs Prometheus using CoreOS's Prometheus-Operator, via the kube-prometheus project. From the upstream README:
Note that everything is experimental and may change significantly at any time.
This repository collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
The content of this project is written in jsonnet, and the project can be described both as a package and as a library.
Components included in this package:
- The Prometheus Operator
- Highly available Prometheus
- Highly available Alertmanager
- Prometheus node-exporter
- Prometheus Adapter for Kubernetes Metrics APIs
- kube-state-metrics
- Grafana
This stack is meant for cluster monitoring, so it is pre-configured to collect metrics from all Kubernetes components. In addition, it delivers a default set of dashboards and alerting rules. Many of the useful dashboards and alerts come from the kubernetes-mixin project, which, like this project, provides composable jsonnet as a library for users to customize to their needs.
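The manifests we apply below are pre-generated, but if you need to customize the stack (different namespace, extra dashboards, and so on), the upstream workflow is to edit the jsonnet and regenerate them. A minimal sketch of that flow, assuming the tooling the upstream README calls for (jsonnet-bundler's jb, gojsontoyaml):

```shell
# Fetch jsonnet dependencies, then regenerate manifests/ from example.jsonnet
jb install                   # jsonnet-bundler, reads jsonnetfile.json
./build.sh example.jsonnet   # writes the rendered YAML into manifests/
```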
I. Environment preparation:
Prometheus-Operator project: https://github.com/prometheus-operator/prometheus-operator
Kubernetes host OS: CentOS Linux release 7.9.2009 (Core)
Kubernetes host kernel: 4.18.16-1.el7.elrepo.x86_64
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:51:19Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
prometheus-operator/kube-prometheus version: v0.8.0
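For reference, the upstream README's compatibility matrix pins each kube-prometheus release to specific Kubernetes versions; release-0.8 is the branch intended for Kubernetes 1.20/1.21, so the v1.20.0 cluster above is within the supported range.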
II. Installation: GO, GO, GO!
1. Download kube-prometheus and extract it:
```shell
[root@k8s-master01 k8s-prometheus]# wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.8.0.tar.gz -O kube-prometheus-v0.8.0.tar.gz
--2021-07-07 09:32:30--  https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.8.0.tar.gz
Resolving github.com (github.com)... 52.74.223.119
Connecting to github.com (github.com)|52.74.223.119|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://codeload.github.com/prometheus-operator/kube-prometheus/tar.gz/refs/tags/v0.8.0 [following]
--2021-07-07 09:32:31--  https://codeload.github.com/prometheus-operator/kube-prometheus/tar.gz/refs/tags/v0.8.0
Resolving codeload.github.com (codeload.github.com)... 13.250.162.133
Connecting to codeload.github.com (codeload.github.com)|13.250.162.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘kube-prometheus-v0.8.0.tar.gz’

    [ <=>                                ] 315,444      846KB/s   in 0.4s

2021-07-07 09:32:32 (846 KB/s) - ‘kube-prometheus-v0.8.0.tar.gz’ saved [315444]

[root@k8s-master01 k8s-prometheus]# ls
kube-prometheus-v0.8.0.tar.gz
[root@k8s-master01 k8s-prometheus]# tar -zxf kube-prometheus-v0.8.0.tar.gz
[root@k8s-master01 k8s-prometheus]# ls -l
total 316
drwxrwxr-x 11 root root   4096 Apr 27 19:19 kube-prometheus-0.8.0
-rw-r--r--  1 root root 315444 Jul  7 09:32 kube-prometheus-v0.8.0.tar.gz
[root@k8s-master01 k8s-prometheus]# cd kube-prometheus-0.8.0/
[root@k8s-master01 kube-prometheus-0.8.0]# ls -l
total 184
-rwxrwxr-x 1 root root   679 Apr 27 19:19 build.sh
-rw-rw-r-- 1 root root  3039 Apr 27 19:19 code-of-conduct.md
-rw-rw-r-- 1 root root  1422 Apr 27 19:19 DCO
drwxrwxr-x 2 root root  4096 Apr 27 19:19 docs
-rw-rw-r-- 1 root root  2051 Apr 27 19:19 example.jsonnet
drwxrwxr-x 7 root root  4096 Apr 27 19:19 examples
drwxrwxr-x 3 root root    28 Apr 27 19:19 experimental
-rw-rw-r-- 1 root root   237 Apr 27 19:19 go.mod
-rw-rw-r-- 1 root root 59996 Apr 27 19:19 go.sum
drwxrwxr-x 3 root root    68 Apr 27 19:19 hack
drwxrwxr-x 3 root root    29 Apr 27 19:19 jsonnet
-rw-rw-r-- 1 root root   206 Apr 27 19:19 jsonnetfile.json
-rw-rw-r-- 1 root root  4857 Apr 27 19:19 jsonnetfile.lock.json
-rw-rw-r-- 1 root root  4437 Apr 27 19:19 kustomization.yaml
-rw-rw-r-- 1 root root 11325 Apr 27 19:19 LICENSE
-rw-rw-r-- 1 root root  2101 Apr 27 19:19 Makefile
drwxrwxr-x 3 root root  4096 Apr 27 19:19 manifests
-rw-rw-r-- 1 root root   126 Apr 27 19:19 NOTICE
-rw-rw-r-- 1 root root 38246 Apr 27 19:19 README.md
drwxrwxr-x 2 root root   187 Apr 27 19:19 scripts
-rw-rw-r-- 1 root root   928 Apr 27 19:19 sync-to-internal-registry.jsonnet
drwxrwxr-x 3 root root    17 Apr 27 19:19 tests
-rwxrwxr-x 1 root root   808 Apr 27 19:19 test.sh
[root@k8s-master01 kube-prometheus-0.8.0]#
```
2. Inspect the bundled kustomization.yaml:
```shell
[root@k8s-master01 kube-prometheus-0.8.0]# cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./manifests/alertmanager-alertmanager.yaml
- ./manifests/alertmanager-podDisruptionBudget.yaml
- ./manifests/alertmanager-prometheusRule.yaml
- ./manifests/alertmanager-secret.yaml
- ./manifests/alertmanager-service.yaml
- ./manifests/alertmanager-serviceAccount.yaml
- ./manifests/alertmanager-serviceMonitor.yaml
- ./manifests/blackbox-exporter-clusterRole.yaml
- ./manifests/blackbox-exporter-clusterRoleBinding.yaml
- ./manifests/blackbox-exporter-configuration.yaml
- ./manifests/blackbox-exporter-deployment.yaml
- ./manifests/blackbox-exporter-service.yaml
- ./manifests/blackbox-exporter-serviceAccount.yaml
- ./manifests/blackbox-exporter-serviceMonitor.yaml
- ./manifests/grafana-dashboardDatasources.yaml
- ./manifests/grafana-dashboardDefinitions.yaml
- ./manifests/grafana-dashboardSources.yaml
- ./manifests/grafana-deployment.yaml
- ./manifests/grafana-service.yaml
- ./manifests/grafana-serviceAccount.yaml
- ./manifests/grafana-serviceMonitor.yaml
- ./manifests/kube-prometheus-prometheusRule.yaml
- ./manifests/kube-state-metrics-clusterRole.yaml
- ./manifests/kube-state-metrics-clusterRoleBinding.yaml
- ./manifests/kube-state-metrics-deployment.yaml
- ./manifests/kube-state-metrics-prometheusRule.yaml
- ./manifests/kube-state-metrics-service.yaml
- ./manifests/kube-state-metrics-serviceAccount.yaml
- ./manifests/kube-state-metrics-serviceMonitor.yaml
- ./manifests/kubernetes-prometheusRule.yaml
- ./manifests/kubernetes-serviceMonitorApiserver.yaml
- ./manifests/kubernetes-serviceMonitorCoreDNS.yaml
- ./manifests/kubernetes-serviceMonitorKubeControllerManager.yaml
- ./manifests/kubernetes-serviceMonitorKubeScheduler.yaml
- ./manifests/kubernetes-serviceMonitorKubelet.yaml
- ./manifests/node-exporter-clusterRole.yaml
- ./manifests/node-exporter-clusterRoleBinding.yaml
- ./manifests/node-exporter-daemonset.yaml
- ./manifests/node-exporter-prometheusRule.yaml
- ./manifests/node-exporter-service.yaml
- ./manifests/node-exporter-serviceAccount.yaml
- ./manifests/node-exporter-serviceMonitor.yaml
- ./manifests/prometheus-adapter-apiService.yaml
- ./manifests/prometheus-adapter-clusterRole.yaml
- ./manifests/prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
- ./manifests/prometheus-adapter-clusterRoleBinding.yaml
- ./manifests/prometheus-adapter-clusterRoleBindingDelegator.yaml
- ./manifests/prometheus-adapter-clusterRoleServerResources.yaml
- ./manifests/prometheus-adapter-configMap.yaml
- ./manifests/prometheus-adapter-deployment.yaml
- ./manifests/prometheus-adapter-roleBindingAuthReader.yaml
- ./manifests/prometheus-adapter-service.yaml
- ./manifests/prometheus-adapter-serviceAccount.yaml
- ./manifests/prometheus-adapter-serviceMonitor.yaml
- ./manifests/prometheus-clusterRole.yaml
- ./manifests/prometheus-clusterRoleBinding.yaml
- ./manifests/prometheus-operator-prometheusRule.yaml
- ./manifests/prometheus-operator-serviceMonitor.yaml
- ./manifests/prometheus-podDisruptionBudget.yaml
- ./manifests/prometheus-prometheus.yaml
- ./manifests/prometheus-prometheusRule.yaml
- ./manifests/prometheus-roleBindingConfig.yaml
- ./manifests/prometheus-roleBindingSpecificNamespaces.yaml
- ./manifests/prometheus-roleConfig.yaml
- ./manifests/prometheus-roleSpecificNamespaces.yaml
- ./manifests/prometheus-service.yaml
- ./manifests/prometheus-serviceAccount.yaml
- ./manifests/prometheus-serviceMonitor.yaml
- ./manifests/setup/0namespace-namespace.yaml
- ./manifests/setup/prometheus-operator-0alertmanagerConfigCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0alertmanagerCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0podmonitorCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0probeCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0prometheusCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0prometheusruleCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-0thanosrulerCustomResourceDefinition.yaml
- ./manifests/setup/prometheus-operator-clusterRole.yaml
- ./manifests/setup/prometheus-operator-clusterRoleBinding.yaml
- ./manifests/setup/prometheus-operator-deployment.yaml
- ./manifests/setup/prometheus-operator-service.yaml
- ./manifests/setup/prometheus-operator-serviceAccount.yaml
[root@k8s-master01 kube-prometheus-0.8.0]#
```
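Because the repo ships this kustomization.yaml, the whole stack can in principle be applied in one shot with kubectl's built-in kustomize support. The catch is ordering: the CRDs under manifests/setup/ must be registered before the custom resources that reference them, so a one-shot apply may need a second run; the walkthrough below avoids this by applying setup/ first. A sketch of the one-shot alternative:

```shell
# From the kube-prometheus-0.8.0 root directory; re-run once if the first pass
# errors out because the monitoring.coreos.com CRDs were not yet established.
kubectl apply -k .
```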
3. Some of these images live on registries that are hard to reach from mainland China, so we switch the image sources for prometheus-operator, prometheus, alertmanager, kube-state-metrics, node-exporter, and prometheus-adapter to a domestic mirror. Here we use the USTC mirror of quay.io. Run the following from the manifests/ directory:
```shell
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' setup/prometheus-operator-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-prometheus.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' alertmanager-alertmanager.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' kube-state-metrics-deployment.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' node-exporter-daemonset.yaml
sed -i 's/quay.io/quay.mirrors.ustc.edu.cn/g' prometheus-adapter-deployment.yaml
```
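To confirm the rewrite took effect before applying anything, a quick spot-check (a sketch; adjust the file list as needed):

```shell
# List the image: lines; references that were on quay.io should now
# point at quay.mirrors.ustc.edu.cn
grep -n 'image:' setup/prometheus-operator-deployment.yaml prometheus-prometheus.yaml \
  alertmanager-alertmanager.yaml kube-state-metrics-deployment.yaml \
  node-exporter-daemonset.yaml prometheus-adapter-deployment.yaml
```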
4. Change the Service type of prometheus, alertmanager, and grafana to NodePort by adding `type: NodePort` under `spec:` in manifests/prometheus-service.yaml, manifests/alertmanager-service.yaml, and manifests/grafana-service.yaml:
```yaml
# manifests/prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.26.0
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
  selector:
    app: prometheus
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
```
```yaml
# manifests/alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.21.0
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
  selector:
    alertmanager: main
    app: alertmanager
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
```
```yaml
# manifests/grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 7.5.4
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
```
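With plain `type: NodePort`, Kubernetes assigns a random port from the NodePort range, as seen in step 10 below. If you prefer stable port numbers, you can pin them explicitly; a sketch for the prometheus Service, where 30090 is an arbitrary value from the default 30000-32767 range:

```yaml
  # Hypothetical explicit nodePort for the ports entry above
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30090
```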
5. Install the CRDs and prometheus-operator:
```shell
[root@k8s-master01 manifests]# kubectl apply -f setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
[root@k8s-master01 manifests]#
```
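Before applying the rest of the stack, it can help to confirm the CRDs are established; a quick sanity check (not strictly required):

```shell
# All eight monitoring.coreos.com CRDs should exist and report Established
kubectl get crd | grep monitoring.coreos.com
kubectl wait --for=condition=Established crd/prometheuses.monitoring.coreos.com \
  crd/servicemonitors.monitoring.coreos.com
```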
6. Check the prometheus-operator status:
```shell
[root@k8s-master01 manifests]# kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-7775c66ccf-ssfq4   2/2     Running   0          110s
[root@k8s-master01 manifests]#
```
7. Install prometheus, alertmanager, grafana, kube-state-metrics, and node-exporter:
```shell
[root@k8s-master01 manifests]# pwd
/root/k8s-prometheus/kube-prometheus-0.8.0/manifests
[root@k8s-master01 manifests]# kubectl apply -f .
alertmanager.monitoring.coreos.com/main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
Warning: resource apiservices/v1beta1.metrics.k8s.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io configured
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
Warning: resource clusterroles/system:aggregated-metrics-reader is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
servicemonitor.monitoring.coreos.com/prometheus-operator created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
[root@k8s-master01 manifests]#
```
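The two warnings about the missing kubectl.kubernetes.io/last-applied-configuration annotation are harmless. Both objects report "configured" rather than "created" because they already existed in the cluster (an APIService and an aggregated-metrics-reader ClusterRole of this shape are typically left behind by a metrics-server install), and, as the message says, kubectl patches the annotation automatically.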
8. Check the status of each service:
```shell
[root@k8s-master01 manifests]# kubectl get pod -n monitoring
NAME                                   READY   STATUS              RESTARTS   AGE
alertmanager-main-0                    0/2     ContainerCreating   0          61s
alertmanager-main-1                    0/2     ContainerCreating   0          61s
alertmanager-main-2                    0/2     ContainerCreating   0          61s
blackbox-exporter-55c457d5fb-j24lw     0/3     ContainerCreating   0          60s
grafana-9df57cdc4-j4qfz                0/1     ContainerCreating   0          59s
kube-state-metrics-76f6cb7996-9dslf    0/3     ContainerCreating   0          57s
node-exporter-pz2hs                    0/2     ContainerCreating   0          55s
node-exporter-rqv7s                    0/2     ContainerCreating   0          55s
node-exporter-t2z29                    2/2     Running             0          55s
prometheus-adapter-59df95d9f5-pmtf8    0/1     ContainerCreating   0          53s
prometheus-adapter-59df95d9f5-sgbqr    0/1     ContainerCreating   0          54s
prometheus-k8s-0                       0/2     ContainerCreating   0          46s
prometheus-k8s-1                       0/2     ContainerCreating   0          45s
prometheus-operator-7775c66ccf-ssfq4   2/2     Running             0          4m18s
[root@k8s-master01 manifests]#
```
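Pods stuck in ContainerCreating at this stage are usually still pulling images. If any stay that way for long, the pod events will say why (a sketch, using one of the pod names above as an example):

```shell
# Inspect a pod that is still ContainerCreating
kubectl -n monitoring describe pod prometheus-k8s-0 | tail -n 20
# Recent events show image pulls, mount failures, and scheduling problems
kubectl -n monitoring get events --sort-by=.lastTimestamp | tail -n 20
```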
9. Check the status of each service again; once the images have been pulled, everything should be Running:
```shell
[root@k8s-master01 ~]# kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          4h46m
alertmanager-main-1                    2/2     Running   0          4h46m
alertmanager-main-2                    2/2     Running   0          4h46m
blackbox-exporter-55c457d5fb-j24lw     3/3     Running   0          4h46m
grafana-9df57cdc4-j4qfz                1/1     Running   0          4h46m
kube-state-metrics-76f6cb7996-9dslf    3/3     Running   0          4h46m
node-exporter-pz2hs                    2/2     Running   0          4h46m
node-exporter-rqv7s                    2/2     Running   0          4h46m
node-exporter-t2z29                    2/2     Running   0          4h46m
prometheus-adapter-59df95d9f5-pmtf8    1/1     Running   0          4h46m
prometheus-adapter-59df95d9f5-sgbqr    1/1     Running   0          4h46m
prometheus-k8s-0                       2/2     Running   0          4h46m
prometheus-k8s-1                       2/2     Running   1          4h46m
prometheus-operator-7775c66ccf-ssfq4   2/2     Running   0          4h49m
```
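At this point prometheus-adapter should be backing the v1beta1.metrics.k8s.io APIService applied above, so the resource metrics API answers too; a quick sanity check:

```shell
# Served by prometheus-adapter rather than metrics-server in this stack
kubectl top nodes
kubectl top pods -n monitoring
```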
10. Access prometheus, alertmanager, and grafana:
```shell
[root@k8s-master01 ~]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       NodePort    10.109.25.4      <none>        9093:30568/TCP               5h
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   5h
blackbox-exporter       ClusterIP   10.109.192.70    <none>        9115/TCP,19115/TCP           5h
grafana                 NodePort    10.108.237.230   <none>        3000:30376/TCP               5h
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            5h
node-exporter           ClusterIP   None             <none>        9100/TCP                     5h
prometheus-adapter      ClusterIP   10.104.222.122   <none>        443/TCP                      5h
prometheus-k8s          NodePort    10.105.229.133   <none>        9090:30207/TCP               5h
prometheus-operated     ClusterIP   None             <none>        9090/TCP                     5h
prometheus-operator     ClusterIP   None             <none>        8443/TCP                     5h4m
```
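NodePort services are exposed on every node in the cluster, so any node IP works. Given the ports assigned above:

```shell
# Pick a node IP from the INTERNAL-IP column:
kubectl get nodes -o wide
# Then browse to:
#   Prometheus:    http://<node-ip>:30207
#   Alertmanager:  http://<node-ip>:30568
#   Grafana:       http://<node-ip>:30376   (Grafana's default login is admin/admin)
```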