Configuring prometheus-operator with Jsonnet to quickly generate YAML resource manifests
kube-prometheus
1. Introduction
Prometheus is an open-source combination of system monitoring, alerting, and a time-series database, while Prometheus Operator is a controller open-sourced by CoreOS for managing Prometheus on Kubernetes clusters; it exists to simplify deploying, managing, and running Prometheus and Alertmanager clusters on Kubernetes.
Comparing prometheus-operator with kube-prometheus:
prometheus-operator uses Kubernetes CRDs (custom resources) to simplify the deployment and configuration of Prometheus, Alertmanager, and related monitoring components (a minimal sketch of such a custom resource follows this list).
kube-prometheus provides a complete example configuration for cluster monitoring based on Prometheus and prometheus-operator, including:
- multiple Prometheus and Alertmanager instances for high availability
- metrics exporters (such as node_exporter for collecting node metrics)
- target configuration linking Prometheus to various metrics endpoints
- example alerting rules for notification of potential problems in the cluster
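As a small illustration of the CRD approach, a Prometheus custom resource looks roughly like this (a minimal sketch written as a Jsonnet object; the field values are illustrative, not taken from this article):

{
  apiVersion: 'monitoring.coreos.com/v1',
  kind: 'Prometheus',
  metadata: { name: 'k8s', namespace: 'monitoring' },
  spec: {
    replicas: 2,                 // the operator reconciles this declaration into a StatefulSet
    serviceMonitorSelector: {},  // scrape targets are selected via ServiceMonitor objects
  },
}

The operator watches objects of this kind and generates the underlying StatefulSets, Secrets, and Prometheus configuration from them.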
Jsonnet is a configuration language open-sourced by Google to address the shortcomings of JSON. It is fully compatible with JSON and adds features JSON lacks, including comments, references, arithmetic, conditional operators, deep merging of arrays and objects, functions, local variables, and inheritance; a Jsonnet program compiles to JSON-compatible data. In short, Jsonnet is an enhanced JSON.
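To make that concrete, here is a tiny sketch (not from the original article) that exercises most of those features; evaluating it with the jsonnet CLI prints plain JSON:

// comments are supported
local base = { replicas: 1, labels: { app: 'demo' } };  // local variable
local port(n) = { containerPort: n };                   // function
base + {                                                // inheritance / object merge
  replicas: base.replicas + 2,                          // reference and arithmetic
  labels+: { tier: if self.replicas > 1 then 'ha' else 'single' },  // conditional
  ports: [port(p) for p in [8080, 9090]],               // array comprehension
}

This evaluates to {"labels": {"app": "demo", "tier": "ha"}, "ports": [{"containerPort": 8080}, {"containerPort": 9090}], "replicas": 3}.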
The kube-prometheus project is written in Jsonnet and contains base templates for the following components, plus default dashboards and alerting rules:
- Prometheus Operator
- Highly available Prometheus
- Highly available Alertmanager
- Prometheus node-exporter
- Prometheus Adapter for Kubernetes Metrics APIs
- kube-state-metrics
- Grafana
2. Installation
Install the Go toolchain and git:
wget https://golang.google.cn/dl/go1.13.15.linux-amd64.tar.gz
yum -y install git
Unpack:
tar -xf go1.13.15.linux-amd64.tar.gz
mv go /opt/software/go1.13.15
ln -s /opt/software/go1.13.15 /opt/software/go
Environment variables (note the quoted heredoc delimiter, so the variables below are written to /etc/profile literally instead of being expanded while writing):
cat >> /etc/profile <<'EOF'
# Golang
SOFT=/opt/software
export GOROOT=$SOFT/go
export GOPATH=$SOFT/gopath
export GO111MODULE=on
export GOPROXY=https://goproxy.io,direct
export PATH=${PATH}:$GOROOT/bin:${GOPATH}/bin
EOF
source /etc/profile
Verify the version:
$ go version
go version go1.13.15 linux/amd64
Install jsonnet-bundler, jsonnet, and gojsontoyaml:
go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
go get github.com/google/go-jsonnet/cmd/jsonnet
go get github.com/brancz/gojsontoyaml
Create a working directory:
mkdir my-kube-prometheus ; cd my-kube-prometheus
Initialize the directory and download kube-prometheus:
# create the initial (empty) jsonnetfile.json
jb init
$ cat jsonnetfile.json
{
"version": 1,
"dependencies": [],
"legacyImports": true
}
Install the kube-prometheus base dependencies:
version="main"
# requires access to GitHub (a proxy may be needed in mainland China)
# creates `vendor/` & `jsonnetfile.lock.json`, and fills in `jsonnetfile.json`
jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@${version}
# example jsonnet file
wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/${version}/example.jsonnet -O example.jsonnet
# build.sh generates the corresponding YAML files from the jsonnet file
wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/${version}/build.sh -O build.sh
Run the command to generate the YAML files for the resources:
sh build.sh example.jsonnet
Apply the YAML files:
- Make sure the current environment can pull images from foreign registries.
- Both applying and deleting the resources must follow a specific order.
kubectl apply -f manifests/setup/.
kubectl apply -f manifests/.
3. Syntax notes
(to be expanded)
The example file example.jsonnet serves as the reference.
Snippet 1
local kp =
(import 'kube-prometheus/main.libsonnet') +
// Uncomment the following imports to enable its patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
- `kp` is the local variable being defined.
- `import 'kube-prometheus/main.libsonnet'` imports the contents of the given file, here the main kube-prometheus entry point, whose real path is vendor/kube-prometheus/main.libsonnet. Why can the vendor prefix be omitted and the file still be found? Because the command in the build.sh script passes the `-J` flag with the library directory: `jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {}`
Snippet 2
{
values+:: {
common+: {
namespace: 'monitoring',
},
},
};
- `+` appends the given content to the base configuration instead of overwriting it.
- `::` marks a hidden field that is not emitted when jsonnet runs; it can be used as an internal variable.
- `common+: { namespace: 'monitoring' }` defines the global namespace used throughout the configuration (a small illustration follows).
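A minimal illustration of both operators (a standalone sketch, independent of kube-prometheus):

local base = {
  values:: { common: { namespace: 'default' }, replicas: 2 },  // hidden field
  visible: 'rendered',
};
base + {
  values+:: { common+: { namespace: 'monitoring' } },  // merge; replicas survives
}

Evaluating this prints only {"visible": "rendered"}: the hidden values field is not emitted, yet within the program values.common.namespace is now 'monitoring' and values.replicas is still 2, because `+:` merges into the base object instead of replacing it.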
Snippet 3
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
- `setup/0namespace-namespace` sets the output path of the generated YAML file (note: the setup directory is created by the build.sh script, it does not appear automatically).
- `kp.kubePrometheus.namespace` supplies the namespace manifest content.
- `std.filter(func, arr)`: per the standard-library docs, returns a new array containing all elements of `arr` for which `func` returns true.
- `std.objectFields(o)`: per the standard-library docs, returns an array of strings, one for each field of the given object; hidden fields are not included.
- `//` introduces a comment, either on its own line or at the end of a line. A small sketch combining these functions follows.
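Putting these together, the pattern above can be reproduced in miniature (a self-contained sketch, not part of the real manifests):

local kp = { a: 'A', serviceMonitor: 'SM', prometheusRule: 'PR' };
{
  ['setup/prometheus-operator-' + name]: kp[name]
  for name in std.filter(
    function(name) name != 'serviceMonitor' && name != 'prometheusRule',
    std.objectFields(kp)
  )
}
// evaluates to { 'setup/prometheus-operator-a': 'A' }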
4. Use cases
Because the jsonnet configuration files are long or repetitive, most of their content is elided with `...` in this article.
Typora has no highlighting for additions, so newly added content is bracketed with `// ******`.
4.1. Monitoring an external etcd service
Copy the certificates into the current directory (certificates under other paths can also be referenced); they can be deleted once the YAML has been generated.
# certificate files provided by the kubernetes cluster; the required files are ca.pem, etcd.pem, etcd-key.pem
cp -r /opt/software/kubernetes/certs .
Write the jsonnet configuration file example.jsonnet:
local kp =
(import 'kube-prometheus/main.libsonnet') +
// Uncomment the following imports to enable its patches
// (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
// (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
// (import 'kube-prometheus/addons/node-ports.libsonnet') +
// ******
(import 'kube-prometheus/addons/static-etcd.libsonnet') +
// ******
// (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
// (import 'kube-prometheus/addons/external-metrics.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+:
// ******
etcd+: {
ips: ["10.0.10.53","10.0.20.109","10.0.20.108"],
clientCA: importstr 'certs/ca.pem',
clientCert: importstr 'certs/etcd.pem',
clientKey: importstr 'certs/etcd-key.pem',
insecureSkipVerify: true,
}, // etcd+:
// ******
}, //values+::
};
...
...
...
Files involved: kube-prometheus/addons/static-etcd.libsonnet and github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet
- Importing the bundled static-etcd.libsonnet file is what enables this feature.
- `ips` must be IP addresses, otherwise apply reports an error.
- The certificate files must exist (empty files are allowed), otherwise an error is raised; `certs/ca.pem` is a path relative to example.jsonnet. Following the template yields a secret YAML whose metadata.name is `kube-etcd-client-certs`; do not change it unless you have to, as many other places would need matching changes.
- Running build.sh generates these files:
  - the secret holding the etcd certificates: prometheus-secretEtcdCerts.yaml
  - the etcd service: prometheus-serviceEtcd.yaml
  - the etcd serviceMonitor: prometheus-serviceMonitorEtcd.yaml
  - the etcd endpoints: prometheus-endpointsEtcd.yaml
  - the etcd secret reference appended to prometheus-prometheus.yaml
4.2. The etcd mixin.libsonnet generates no etcd rule files or dashboards
Searching github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet turns up two blocks, `prometheusAlerts+::` and `grafanaDashboards+::`, yet neither is rendered; checking the template references shows these two blocks are simply never referenced
(for details see kube-prometheus/main.libsonnet, kube-prometheus/components/prometheus.libsonnet, and kube-prometheus/components/grafana.libsonnet).
Add them following the official examples (reusing the configuration from the previous example):
...
...
...
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+:
etcd+: {
ips: ["10.0.10.53","10.0.20.109","10.0.20.108"],
clientCA: importstr 'certs/ca.pem',
clientCert: importstr 'certs/etcd.pem',
clientKey: importstr 'certs/etcd-key.pem',
insecureSkipVerify: true,
}, // etcd+:
// ******
grafana+: {
dashboards+:: {
// import a local etcd dashboard json file (recommended)
// 'etcd-dashboard.json' : (import 'etcd-dashboard.json')
// import the etcd dashboard json bundled with kube-prometheus (used in this example)
'etcd-dashboard.json': (kp.grafanaDashboards['etcd.json']),
}, // dashboards+::
}, // grafana+:
// ******
}, //values+::
// ******
etcd+: {
prometheusRule:{
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
name: 'my-prometheus-rule',
namespace: $.values.common.namespace,
},
spec: {
groups: [x for x in kp.prometheusAlerts.groups if x.name == 'etcd'], // keep only the etcd group
},
},
},
// ******
};
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
// ******
{ ['etcd-' + name]: kp.etcd[name] for name in std.objectFields(kp.etcd) }
// ******
Three things are added:
- `values+::.grafana`: kube-prometheus ships an etcd grafana dashboard template, which this example references directly; the dashboard is not very pretty, so treat it as a reference only. For etcd-dashboard.json, grafana dashboard template 3070 is recommended instead in real environments.
- `etcd+:`: adds a prometheusRule in the etcd block, modeled on other examples and referencing the `prometheusAlerts+::` rules from github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet. kp.prometheusAlerts.groups is an array that may contain several groups; the trailing `if` in the comprehension keeps only the group named etcd (see the sketch below).
- The final line maps the etcd block to its own output paths, producing YAML files with an etcd- prefix.
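A sketch of the group filtering (made-up group data, standalone):

local groups = [
  { name: 'etcd', rules: [] },
  { name: 'node', rules: [] },
];
{
  viaComprehension: [g for g in groups if g.name == 'etcd'],
  viaFilter: std.filter(function(g) g.name == 'etcd', groups),
}
// both fields evaluate to [{ name: 'etcd', rules: [] }]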
4.3. The etcd YAML files all get the prometheus- prefix; how to change it to etcd-
As kube-prometheus/addons/static-etcd.libsonnet shows, all of the configuration sits in the prometheus block, which is why the generated etcd YAML resources all carry the prometheus- prefix.
So the corresponding blocks need to be moved. Note: the prometheus YAML references the etcd secret, so the etcd secret must be generated before applying prometheus.
Modify kube-prometheus/addons/static-etcd.libsonnet (back it up first); only the resources' positions change here, and their contents are elided:
(import 'github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet') + {
values+:: {
etcd: {
...
},
},
// unchanged
prometheus+: {
secretEtcdCerts: {
...
},
prometheus+: {
...
},
// new etcd block, separated from prometheus
etcd+: {
serviceEtcd: {
...
},
endpointsEtcd: {
...
},
serviceMonitorEtcd: {
...
},
},
}
With the example.jsonnet line
{ ['etcd-' + name]: kp.etcd[name] for name in std.objectFields(kp.etcd) }
the generated YAML files are:
etcd-endpointsEtcd.yaml
etcd-prometheusRule.yaml
etcd-serviceEtcd.yaml
etcd-serviceMonitorEtcd.yaml
prometheus-secretEtcdCerts.yaml
plus the secret section inside prometheus-prometheus.yaml
4.4. Prometheus data persistence
Deploy an nfs-server and an nfs-client-provisioner in advance.
...
...
...
// prometheus Start
prometheus+: {
// prometheus configuration
prometheus+:{
spec+: {
retention: '30d',
storage: {
volumeClaimTemplate: {
apiVersion: 'v1',
kind: 'PersistentVolumeClaim',
metadata: {
name: "prometheus-claim",
annotations: {
'volume.beta.kubernetes.io/storage-class': "managed-nfs-storage",
}, // annotations
}, // metadata
spec: {
accessModes: ['ReadWriteMany'],
resources: { requests: { storage: '10Gi' } },
}, // spec
}, // volumeClaimTemplate
}, // storage:
}, // spec+
}, // prometheus+:
}, // prometheus+:
// prometheus End
...
...
...
- retention: data retention period, 24h by default; if the value is not set, prometheus-operator passes --storage.tsdb.retention=24h to prometheus-k8s.
- resources.requests.storage: storage size for prometheus-k8s.
- Inspecting prometheus-prometheus.yaml shows the resulting resource:
storage:
volumeClaimTemplate:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
volume.beta.kubernetes.io/storage-class: managed-nfs-storage
name: prometheus-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
4.5. Adding an external URL and route prefix for Prometheus
By default Prometheus is reached at http://192.168.1.100:9090/. Here an ingress exposes the Prometheus service, but without adding a dedicated subdomain for it:
match: Host(`test.k8s.com`) && PathPrefix(`/prometheus`)
With this ingressroute configured, visiting the Prometheus service ends up redirected to http://test.k8s.com/graph, which does not match the route and produces a 404.
No solution was found on the traefik side, so the only option is to change the backend path by adding a prefix. The change affects prometheus-adapter access, Prometheus's self-monitoring, and grafana's default datasource.
...
...
...
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+::
prometheusAdapter+: {
prometheusURL: 'http://prometheus-' + $.values.prometheus.name + '.' + $.values.common.namespace + '.svc.cluster.local:9090/prometheus',
},
}, // values+::
// prometheus Start
prometheus+: {
serviceMonitor+:{
spec+:{
endpoints:[
{"path": "/prometheus/metrics"},
],
},
},
// prometheus configuration
prometheus+:{
spec+: {
retention: '30d',
externalUrl: "http://test.k8s.com/prometheus", // externally exposed address
routePrefix: "/prometheus", // route prefix
} // spec+
} // prometheus+:
}, // prometheus+:
// prometheus End
...
...
...
- `prometheusAdapter`: the prometheus-adapter service needs to reach the in-cluster Prometheus metrics; since the actual Prometheus path changed, the parameter has to change too. See prometheusURL under prometheusAdapter in kube-prometheus/main.libsonnet.
- `prometheus.serviceMonitor`: scrapes the metrics Prometheus itself exposes; same situation, so the metrics path is set explicitly (default: /metrics).
- `prometheus.spec.externalUrl`: defines the externally exposed address; so far it is seen used by alertmanager.
- `prometheus.spec.routePrefix`: defines the actual Prometheus serving path, adding the /prometheus prefix.
4.6. Exposing Prometheus through an IngressRoute
Prerequisite: deploy traefik and have it watch the monitoring namespace.
An ingressroute function is defined in the configuration and filled in by passing parameters.
The GitHub example defines an Ingress resource with the ingress part written inline; here an IngressRoute resource is defined instead, and the ingress portion is written into each service's own block, so the generated files are named with the service's prefix.
// ******
local ingressroute(name, namespace, entrypoint, route) = {
apiVersion: 'traefik.containo.us/v1alpha1',
kind: 'IngressRoute',
metadata: {
name: name,
namespace: namespace,
},
spec: {
entryPoints: entrypoint,
routes: route
},
};
// ******
local kp =
(import 'kube-prometheus/main.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+::
}, // values+::
// ******
prometheus+:{
ingressroute+: ingressroute(
'prometheus',
$.values.common.namespace,
[ 'web' ],
[{
match: 'Host(`test.k8s.com`) && PathPrefix(`/prometheus`)',
kind: 'Rule',
services: [{
name: 'prometheus-k8s',
port: 9090,
}],
}],
), // ingressroute
}, // prometheus+
// ******
};
...
...
...
4.7. Adding a route prefix for Grafana
Same problem as with Prometheus: after exposing the service through an ingressroute, a redirect points at the wrong address and returns a 404.
match: Host(`test.k8s.com`) && PathPrefix(`/grafana`)
Modify the grafana configuration:
// grafana Start
grafana+:: {
_config+:: {
env+: [
{"name": "GF_SERVER_ROOT_URL", "value": "http://localhost:3000/grafana/"},
{"name": "GF_SERVER_SERVE_FROM_SUB_PATH", "value": "true"},
{"name": "GF_LOG_LEVEL", "value": "debug"},
], // env
}, // _config+:
}, // _grafana+:
// grafana End
Reference: github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet
- GF_SERVER_ROOT_URL: defines grafana's URL, with the grafana suffix appended; corresponds to the `root_url` setting.
- GF_SERVER_SERVE_FROM_SUB_PATH: serve from the sub path given in root_url; corresponds to the `serve_from_sub_path` setting.
4.8. Changing Grafana's default datasource
Reference template: github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet
Because Prometheus now has a path prefix, the default datasource that grafana generates no longer works, and editing the default datasource in the grafana UI has no effect either, while newly added datasources work fine.
// grafana Start
grafana+: {
_config+:: {
datasources: [{
name: 'Prometheus', // must match the datasource name used by the dashboard templates; case-sensitive
type: 'prometheus',
access: 'proxy',
orgId: 1,
url: 'http://prometheus-k8s.monitoring.svc:9090/prometheus',
version: 1,
editable: false,
}], //datasources+::
}, // _config+:
}, // _grafana+:
// grafana End
Inspect the YAML:
$ cat manifests/grafana-dashboardDatasources.yaml | grep "datasources.yaml" | cut -d " " -f 4 | base64 -d
{
"apiVersion": 1,
"datasources": [
{
"access": "proxy",
"editable": false,
"name": "prometheus",
"orgId": 1,
"type": "prometheus",
"url": "http://prometheus-k8s.monitoring.svc:9090/prometheus",
"version": 1
}
]
}
- The url defaults to 'http://' + $._config.prometheus.serviceName + '.' + $._config.namespace + '.svc:9090', which cannot be referenced correctly from example.jsonnet, so the url is hardcoded here (see the sketch below for an alternative).
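As an alternative to hardcoding, the url could be derived at the kp top level from the kube-prometheus values, the same way the prometheusAdapter prometheusURL is built in 4.5 (a fragment for example.jsonnet, assuming the upstream default $.values.prometheus.name of 'k8s'):

grafana+: {
  _config+:: {
    datasources: [{
      name: 'Prometheus',
      type: 'prometheus',
      access: 'proxy',
      orgId: 1,
      // 'prometheus-' + name resolves to the prometheus-k8s service
      url: 'http://prometheus-' + $.values.prometheus.name + '.' + $.values.common.namespace + '.svc:9090/prometheus',
      version: 1,
      editable: false,
    }],
  },
},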
4.9. Adding custom dashboards before Grafana starts
[GitHub example](https://github.com/prometheus-operator/kube-prometheus/blob/main/examples/grafana-additional-rendered-dashboard-example.jsonnet)
Download a dashboard template JSON from the Grafana site into the directory; the example uses dashboard 13105: [kubernetes-dashboard.json](https://grafana.com/api/dashboards/13105/revisions/5/download)
wget https://grafana.com/api/dashboards/13105/revisions/5/download -O kubernetes-dashboard.json
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+::
grafana+: {
// grafana-dashboardDefinitions.yaml
// make sure the dashboard's datasource reference matches a correctly spelled datasource name
dashboards+:: {
'kubernetes-dashboard.json' : (import 'kubernetes-dashboard.json'),
}, // dashboards+::
}, // grafana+
}, // values+::
$ grep "kubernetes-dashboard" manifests/grafana-dashboardDefinitions.yaml
kubernetes-dashboard.json: |-
name: grafana-dashboard-kubernetes-dashboard
- The dashboard then appears in the Grafana UI.
4.10. Grafana persistence
Adjust to your actual situation.
Prerequisite: an nfs-server and an nfs-client-provisioner.
In github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet, the three bindings storageVolumeName, storageVolume, and storageVolumeMount are all declared with the `local` keyword, so they cannot be overridden from example.jsonnet (a short illustration follows).
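Why `local` blocks overriding, in miniature (a standalone sketch, independent of grafana.libsonnet):

local base = {
  local size = '1Gi',  // a binding, not a field: invisible to object merges
  claim: { storage: size },
};
// Merging in a field called `size` only adds a new field;
// it does not rebind the local, so claim.storage stays '1Gi':
(base + { size: '10Gi' }).claim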
Modify the template file directly:
vim github.com/brancz/kubernetes-grafana/grafana/grafana.libsonnet
# original
local storageVolumeName = 'grafana-storage';
local storageVolume = { name: storageVolumeName, emptyDir: {} };
local storageVolumeMount = { name: storageVolumeName, mountPath: '/var/lib/grafana', readOnly: false };
# modified
local storageVolumeName = 'grafana-storage';
local storageVolumeClaim = 'grafana-claim';
local storageVolume = { name: storageVolumeName, persistentVolumeClaim: { claimName: storageVolumeClaim } };
local storageVolumeMount = { name: storageVolumeName, mountPath: '/var/lib/grafana', readOnly: false };
- storageVolumeClaim: grafana-claim is the name given to the NFS PersistentVolumeClaim defined below.
grafana+: {
PersistentVolumeClaim: {
apiVersion: 'v1',
kind: 'PersistentVolumeClaim',
metadata: {
name: 'grafana-claim',
namespace: $.values.common.namespace,
annotations: {
'volume.beta.kubernetes.io/storage-class' : 'managed-nfs-storage',
}, // annotations
}, // metadata
spec: {
accessModes: ['ReadWriteMany'],
resources: { requests: { storage: '10G'},},
}, // spec
}, // PersistentVolumeClaim:
}, // grafana+:
4.11. Exposing Grafana through an IngressRoute
Prerequisite: deploy traefik and have it watch the monitoring namespace.
An ingressroute function is defined in the configuration and filled in by passing parameters.
The GitHub example defines an Ingress resource with the ingress part written inline; here an IngressRoute resource is defined instead, and the ingress portion is written into each service's own block, so the generated files are named with the service's prefix.
// ******
local ingressroute(name, namespace, entrypoint, route) = {
apiVersion: 'traefik.containo.us/v1alpha1',
kind: 'IngressRoute',
metadata: {
name: name,
namespace: namespace,
},
spec: {
entryPoints: entrypoint,
routes: route
},
};
// ******
local kp =
(import 'kube-prometheus/main.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+::
}, // values+::
// ******
grafana+:{
ingressroute+: ingressroute(
'grafana',
$.values.common.namespace,
[ 'web' ],
[{
match: 'Host(`test.k8s.com`) && PathPrefix(`/grafana`)',
kind: 'Rule',
services: [{
name: 'grafana',
port: 3000,
}],
}],
), // ingressroute
}, // grafana+
// ******
};
...
...
...
4.12. Alertmanager DingTalk alerting
Alertmanager cannot send alerts to DingTalk directly; a dingtalk webhook service is needed to relay them. Custom alert templates can be written following the alertmanager template documentation.
(This jsonnet file is hand-written and will be adjusted later to make the parameters configurable; the URLs in the template_configmap must be adapted to your environment.)
Write dingtalk.libsonnet; change the url's token, the secret, and the phone numbers of the people to @ (several are allowed).
// DingTalk robot alerting; further adjustments pending
// change the url, secret, mobiles, and the text '@your-@-phone' in the config_configmap resource
{
dingtalk: {
// service
service: {
apiVersion: 'v1',
kind: 'Service',
metadata: {
labels: {
app: 'alertmanager-webhook-dingtalk'
},
name: 'alertmanager-webhook-dingtalk',
namespace: $.values.common.namespace,
},
spec: {
selector: {
app: 'alertmanager-webhook-dingtalk'
},
ports: [
{ name: 'http', port: 8060, targetPort: 'http' }
]
},
},
// deployment
deployment: {
apiVersion: 'apps/v1',
kind: 'Deployment',
metadata: {
labels: {
app: 'alertmanager-webhook-dingtalk'
}, // label
name: 'alertmanager-webhook-dingtalk',
namespace: $.values.common.namespace,
}, // metadata
spec: {
selector: {
matchLabels: {
app: 'alertmanager-webhook-dingtalk'
}, // matchLabels
}, // selector
template: {
metadata: {
labels: {
app: 'alertmanager-webhook-dingtalk'
}, // labels
}, // metadata
spec: {
containers: [{
name: 'alertmanager-webhook-dingtalk',
image: 'timonwong/prometheus-webhook-dingtalk',
args: [
'--web.listen-address=:8060',
'--config.file=/config/config.yaml',
'--web.enable-ui',
'--web.enable-lifecycle',
'--log.level=info'
], // args
volumeMounts: [
{
name: 'dingtalk-config',
mountPath: '/config/config.yaml',
subPath: 'config.yaml'
},
{
name: 'dingtalk-template',
mountPath: '/config/template.tmpl',
subPath: 'template.tmpl'
}
], // volumeMounts
resources: { limits: { cpu: '100m', memory: '100Mi' } },
ports: [ { name: 'http', containerPort: 8060 } ]
}], // containers
volumes: [
{ name: 'dingtalk-config',configMap: { name: 'dingtalk-config' } },
{ name: 'dingtalk-template',configMap: { name: 'dingtalk-template' } }
] // volumes
}, // spec
}, // template
}, // spec
}, // deployment
// template_configmap
template_configmap:{
apiVersion: 'v1',
kind: 'ConfigMap',
metadata: {
name: 'dingtalk-template',
namespace: $.values.common.namespace,
},
data: {
'template.tmpl': |||
{{/* Alert List Begin */}}
{{ define "example.__text_alert_list" }}{{ range . }}
**{{ .Annotations.message }}**
[Prometheus]({{ .GeneratorURL }}) | [Alertmanager](https://alertmanager.example.com/#/alerts) | [Grafana](https://grafana.example.com/dashboards)
**\[{{ .Labels.severity | upper }}\]** {{ .Annotations.summary }}
**Description:** {{ .Annotations.description }}
{{ range .Labels.SortedPairs }}> - {{ .Name }}: {{ .Value | markdown | html }}
{{ end }}
{{ end }}{{ end }}
{{/* Alert List End */}}
{{/* Message Title Begin */}}
{{ define "example.title" }}{{ template "__subject" . }}{{ end }}
{{/* Message Title End */}}
{{/* Message Content Begin */}}
{{ define "example.content" }}
### [{{ index .GroupLabels "alertname" }}](https://example.app.opsgenie.com/alert/list)
{{ if gt (len .Alerts.Firing) 0 -}}
{{ template "example.__text_alert_list" .Alerts.Firing }}
{{- end }}
{{ if gt (len .Alerts.Resolved) 0 -}}
{{ template "example.__text_alert_list" .Alerts.Resolved }}
{{- end }}
{{- end }}
{{/* Message Content End */}}
|||,
},
},
// config_configmap
config_configmap: {
apiVersion: 'v1',
kind: 'ConfigMap',
metadata: {
name: 'dingtalk-config', // must match the configMap name referenced by the deployment's volumes
namespace: $.values.common.namespace,
},
data: {
'config.yaml': |||
templates:
- /config/template.tmpl
targets:
webhook1: &target_base
url: https://oapi.dingtalk.com/robot/send?access_token=xxx
# secret for signature
secret: xxx
message:
title: '{{ template "example.title" . }}'
text: '{{ template "example.content" . }}'
webhook2:
<<: *target_base
mention:
mobiles: ['phone1','phone2']
message:
text: |
@phone1 @phone2
{{ template "example.content" . }}
|||,
},
},
},
}
Write the alertmanager.yaml file.
Do not rename alertmanager.yaml, otherwise it cannot override alertmanager's corresponding file.
global:
resolve_timeout: 5m
route:
group_by: ["severity",'alertname']
receiver: 'webhook1'
routes:
- match_re:
severity: warning
receiver: webhook2
receivers:
- name: webhook1
webhook_configs:
- &dingtalk_config
url: 'http://alertmanager-webhook-dingtalk.monitoring.svc.cluster.local:8060/dingtalk/webhook1/send'
- name: webhook2
webhook_configs:
- <<: *dingtalk_config
url: 'http://alertmanager-webhook-dingtalk.monitoring.svc.cluster.local:8060/dingtalk/webhook2/send'
Write example.jsonnet.
Testing showed that without the configSecret parameter, the configuration above does not override the original configuration in the pod.
local kp =
(import 'kube-prometheus/main.libsonnet') +
// new: import the dingtalk library
(import 'dingtalk.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+::
alertmanager+: {
config: importstr 'alertmanager.yaml',
}, // alertmanager
}, // values+::
alertmanager+: {
alertmanager+: {
configSecret: 'alertmanager-main',
},
},
};
...
...
{ ['prometheus/prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter/prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }+
// new: dingtalk manifests
{ ['dingtalk-' + name]: kp.dingtalk[name] for name in std.objectFields(kp.dingtalk) }
4.13. Exposing Alertmanager through an IngressRoute
`super` has the same meaning as Python's `super`: it refers to the parent object being extended, so fields looked up through it come from the original definition (a small illustration follows).
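In miniature (a standalone sketch mirroring the usage below):

local parent = { alertmanagers: [{ name: 'main', port: 'web' }] };
parent + {
  // super.alertmanagers is the parent's list; extend its first element
  alertmanagers: [super.alertmanagers[0] + { pathPrefix: '/alertmanager' }],
}
// -> { alertmanagers: [{ name: 'main', port: 'web', pathPrefix: '/alertmanager' }] }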
// ******
local ingressroute(name, namespace, entrypoint, route) = {
apiVersion: 'traefik.containo.us/v1alpha1',
kind: 'IngressRoute',
metadata: {
name: name,
namespace: namespace,
},
spec: {
entryPoints: entrypoint,
routes: route
},
};
// ******
local kp =
(import 'kube-prometheus/main.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+::
}, // values+::
prometheus+: {
prometheus+:{
spec+: {
alerting+:{ alertmanagers: [super.alertmanagers[0] + { pathPrefix: '/alertmanager'} ]},
},
},
}, // prometheus
alertmanager+:{
alertmanager+: {
routePrefix: '/alertmanager'
},
ingressroute+: ingressroute(
'alertmanager',
$.values.common.namespace,
[ 'web' ],
[{
match: 'Host(`test.k8s.com`) && PathPrefix(`/alertmanager`)',
kind: 'Rule',
services: [{
name: kp.alertmanager.service.metadata.name,
port: kp.alertmanager.service.spec.ports[0]['port']
}],
}],
), // ingressroute
}, // alertmanager+
// ******
};
...
...
...
4.14. Storing each service's files in its own directory
Simply add a directory prefix to the paths in example.jsonnet, e.g. change ['grafana-' + name] to ['grafana/grafana-' + name].
One problem appears, though: the YAML files then sit in separate directories and have to be applied directory by directory.
Modify the build.sh script to match:
# original
mkdir -p manifests/setup
# modified
mkdir -p manifests/{setup,prometheus-operator,prometheus,alertmanager,blackbox-exporter,grafana,kube-state-metrics,kubernetes,node-exporter,prometheus-adapter}
Modify example.jsonnet:
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator/prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator/prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'prometheus/kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager/alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter/blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana/grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics/kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes/kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter/node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus/prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter/prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
Apply the YAML; the setup directory must go first, because the CRDs need to be created before everything else:
kubectl apply -f manifests/setup
kubectl apply -f manifests/prometheus
kubectl apply -f manifests/prometheus-operator
kubectl apply -f manifests/grafana
kubectl apply -f manifests/prometheus-adapter
kubectl apply -f manifests/kubernetes
kubectl apply -f manifests/blackbox-exporter
kubectl apply -f manifests/kube-state-metrics
kubectl apply -f manifests/node-exporter
kubectl apply -f manifests/alertmanager
Delete the YAML:
kubectl delete -f manifests/grafana
kubectl delete -f manifests/prometheus-adapter
kubectl delete -f manifests/kubernetes
kubectl delete -f manifests/blackbox-exporter
kubectl delete -f manifests/kube-state-metrics
kubectl delete -f manifests/node-exporter
kubectl delete -f manifests/alertmanager
kubectl delete -f manifests/prometheus-operator
kubectl delete -f manifests/prometheus
kubectl delete -f manifests/setup
5. Problems
Problem 1: Grafana's default prometheus datasource is wrong and changes have no effect
With the prometheus-operator installation, after adding a path prefix to Prometheus, the default prometheus datasource in Grafana can no longer be reached, even after editing it.
The reason is that grafana mounts a datasources.yaml configuration file with a hardcoded prometheus url; whatever you change, that url is always used.
So the place where datasources.yaml is defined has to be found:
kube-prometheus/manifests/grafana-dashboardDatasources.yaml
This file is a base64-encoded Secret; how do we modify it?
apiVersion: v1
data:
datasources.yaml: ewogICAgImFwaVZlcnNpb24iOiAxLAogICAgImRhdGFzb3VyY2VzIjogWwogICAgICAgIHsKICAgICAgICAgICAgImFjY2VzcyI6ICJwcm94eSIsCiAgICAgICAgICAgICJlZGl0YWJsZSI6IGZhbHNlLAogICAgICAgICAgICAibmFtZSI6ICJwcm9tZXRoZXVzIiwKICAgICAgICAgICAgIm9yZ0lkIjogMSwKICAgICAgICAgICAgInR5cGUiOiAicHJvbWV0aGV1cyIsCiAgICAgICAgICAgICJ1cmwiOiAiaHR0cDovL3Byb21ldGhldXMtazhzLm1vbml0b3Jpbmcuc3ZjOjkwOTAiLAogICAgICAgICAgICAidmVyc2lvbiI6IDEKICAgICAgICB9CiAgICBdCn0=
kind: Secret
metadata:
labels:
app.kubernetes.io/component: grafana
app.kubernetes.io/name: grafana
app.kubernetes.io/part-of: kube-prometheus
app.kubernetes.io/version: 8.0.3
name: grafana-datasources
namespace: monitoring
type: Opaque
Decode:
echo -n "ewogICAgImFwaVZlcnNpb24iOiAxLAogICAgImRhdGFzb3VyY2VzIjogWwogICAgICAgIHsKICAgICAgICAgICAgImFjY2VzcyI6ICJwcm94eSIsCiAgICAgICAgICAgICJlZGl0YWJsZSI6IGZhbHNlLAogICAgICAgICAgICAibmFtZSI6ICJwcm9tZXRoZXVzIiwKICAgICAgICAgICAgIm9yZ0lkIjogMSwKICAgICAgICAgICAgInR5cGUiOiAicHJvbWV0aGV1cyIsCiAgICAgICAgICAgICJ1cmwiOiAiaHR0cDovL3Byb21ldGhldXMtazhzLm1vbml0b3Jpbmcuc3ZjOjkwOTAiLAogICAgICAgICAgICAidmVyc2lvbiI6IDEKICAgICAgICB9CiAgICBdCn0="| base64 --decode
{
"apiVersion": 1,
"datasources": [
{
"access": "proxy",
"editable": false,
"name": "prometheus",
"orgId": 1,
"type": "prometheus",
"url": "http://prometheus-k8s.monitoring.svc:9090",
"version": 1
}
]
}
Create a temporary file tmp.json with the modified url:
{
"apiVersion": 1,
"datasources": [
{
"access": "proxy",
"editable": false,
"name": "prometheus",
"orgId": 1,
"type": "prometheus",
"url": "http://prometheus-k8s.monitoring.svc:9090/prometheus",
"version": 1
}
]
}
Encode:
base64 < tmp.json | tr -d '\n'
ewogICAgImFwaVZlcnNpb24iOiAxLAogICAgImRhdGFzb3VyY2VzIjogWwogICAgICAgIHsKICAgICAgICAgICAgImFjY2VzcyI6ICJwcm94eSIsCiAgICAgICAgICAgICJlZGl0YWJsZSI6IGZhbHNlLAogICAgICAgICAgICAibmFtZSI6ICJwcm9tZXRoZXVzIiwKICAgICAgICAgICAgIm9yZ0lkIjogMSwKICAgICAgICAgICAgInR5cGUiOiAicHJvbWV0aGV1cyIsCiAgICAgICAgICAgICJ1cmwiOiAiaHR0cDovL3Byb21ldGhldXMtazhzLm1vbml0b3Jpbmcuc3ZjOjkwOTAvcHJvbWV0aGV1cyIsCiAgICAgICAgICAgICJ2ZXJzaW9uIjogMQogICAgICAgIH0KICAgIF0KfQo=
Put the new value in the corresponding place in the manifest, then re-apply and restart the affected resources.
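The same change can also be produced from Jsonnet rather than by hand-editing, since a Secret's data is just base64 (a sketch; std.manifestJsonEx and std.base64 are Jsonnet standard-library functions):

local datasources = {
  apiVersion: 1,
  datasources: [{
    access: 'proxy',
    editable: false,
    name: 'prometheus',
    orgId: 1,
    type: 'prometheus',
    url: 'http://prometheus-k8s.monitoring.svc:9090/prometheus',
    version: 1,
  }],
};
// the value to place under data['datasources.yaml'] in the Secret
{ 'datasources.yaml': std.base64(std.manifestJsonEx(datasources, '    ')) }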
Problem 2: highly available alertmanagers fail to form a cluster
level=warn ts=2021-08-19T09:51:05.495Z caller=cluster.go:251 component=cluster msg="failed to join cluster" err="3 errors occurred:\n\t* Failed to resolve alertmanager-main-0.alertmanager-operated:9094: lookup alertmanager-main-0.alertmanager-operated on 10.254.0.2:53: no such host\n\t* Failed to resolve alertmanager-main-1.alertmanager-operated:9094: lookup alertmanager-main-1.alertmanager-operated on 10.254.0.2:53: no such host\n\t* Failed to resolve alertmanager-main-2.alertmanager-operated:9094: lookup alertmanager-main-2.alertmanager-operated on 10.254.0.2:53: no such host\n\n"
This is usually a coredns problem; restart the coredns service.
Testing: in a busybox container, ping succeeds but nslookup fails; nslookup also fails inside alertmanager-main-0, which rules out alertmanager itself. After restarting coredns and testing again, nslookup works, the alertmanager logs show no errors, and the cluster forms.
6. References
- [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus)
- [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator)
- [Jsonnet](https://jsonnet.org/)
7. Complete example
An example.jsonnet combining every use case above; for reference only, adjust to your actual environment.
// files to prepare
// the server can reach foreign image registries
// etcd: ca.pem etcd.pem etcd-key.pem; adjust the etcd settings
// prometheus: dnsUrl
// grafana: dnsurl
// dingtalk template: token secret
// nfs-provisioner service (its jsonnet can be written separately later)
// modify build.sh so mkdir creates the per-service directories
local ingressroute(name, namespace, entrypoint, route) = {
apiVersion: 'traefik.containo.us/v1alpha1',
kind: 'IngressRoute',
metadata: {
name: name,
namespace: namespace,
},
spec: {
entryPoints: entrypoint,
routes: route
},
};
local kp =
(import 'kube-prometheus/main.libsonnet') +
(import 'kube-prometheus/addons/static-etcd.libsonnet') +
(import 'dingtalk.libsonnet') +
{
values+:: {
common+: {
namespace: 'monitoring',
}, //common+:
// external etcd monitoring (see 4.1)
etcd+: {
ips: ["10.0.10.53","10.0.20.109","10.0.20.108"],
clientCA: importstr 'ca.pem',
clientCert: importstr 'etcd.pem',
clientKey: importstr 'etcd-key.pem',
insecureSkipVerify: true,
}, // etcd+:
prometheusAdapter+: {
prometheusURL: 'http://prometheus-' + $.values.prometheus.name + '.' + $.values.common.namespace + '.svc.cluster.local:9090/prometheus',
hostNetwork: true
},
// the etcd mixin does not render rules or dashboards on its own (see 4.2)
grafana+: {
dashboards+:: {
'kubernetes-dashboard.json' : (import 'kubernetes-dashboard.json'),
// import a local etcd dashboard json file (recommended)
'etcd-dashboard.json' : (import 'etcd-dashboard.json'),
// or import the etcd dashboard json bundled with kube-prometheus
// 'etcd-dashboard.json': (kp.grafanaDashboards['etcd.json']),
}, // dashboards+::
}, // grafana+:
alertmanager+: {
config: importstr 'alertmanager.yaml',
}, // alertmanager
}, //values+::
etcd+: {
prometheusRule:{
apiVersion: 'monitoring.coreos.com/v1',
kind: 'PrometheusRule',
metadata: {
name: 'etcd-rule',
namespace: $.values.common.namespace,
}, // metadata
spec: {
groups: [x for x in kp.prometheusAlerts.groups if x.name == 'etcd'], // keep only the etcd group
}, // spec
}, // prometheusRule
}, // etcd
// prometheus Start
prometheus+: {
serviceMonitor+:{
spec+:{
endpoints:[
{"path": "/prometheus/metrics"},
],
},
},
// prometheus configuration
prometheus+:{
spec+: {
alerting+:{ alertmanagers: [super.alertmanagers[0] + { pathPrefix: '/alertmanager'} ]}, // moved out of values: points prometheus at alertmanager's /alertmanager prefix (see 4.13)
retention: '30d',
externalUrl: "http://test.k8s-host.com/prometheus", // externally exposed address
routePrefix: "/prometheus", // route prefix
storage: {
volumeClaimTemplate: {
apiVersion: 'v1',
kind: 'PersistentVolumeClaim',
metadata: {
name: "prometheus-claim",
annotations: {
'volume.beta.kubernetes.io/storage-class': "managed-nfs-storage",
}, // annotations
}, // metadata
spec: {
accessModes: ['ReadWriteMany'],
resources: { requests: { storage: '10Gi' } },
}, // spec
}, // volumeClaimTemplate
}, // storage:
}, // spec+
}, // prometheus+:
ingressroute+: ingressroute(
'prometheus',
$.values.common.namespace,
[ 'web' ],
[{
match: 'Host(`test.k8s-host.com`) && PathPrefix(`/prometheus`)',
kind: 'Rule',
services: [{
name: 'prometheus-k8s',
port: 9090,
}],
}],
), // ingressroute
}, // prometheus
grafana+: {
_config+:: {
env+: [
{"name": "GF_SERVER_ROOT_URL", "value": "http://localhost:3000/grafana/"},
{"name": "GF_SERVER_SERVE_FROM_SUB_PATH", "value": "true"},
{"name": "GF_LOG_LEVEL", "value": "debug"},
], // env
datasources: [{
name: 'Prometheus', // must match the datasource name used by the dashboard templates; case-sensitive
type: 'prometheus',
access: 'proxy',
orgId: 1 ,
url: 'http://prometheus-k8s.monitoring.svc:9090/prometheus',
version: 1,
editable: false,
}], //datasources+::
}, // _config+:
PersistentVolumeClaim: {
apiVersion: 'v1',
kind: 'PersistentVolumeClaim',
metadata: {
name: 'grafana-claim', // must match 'storageVolumeClaim' in grafana.libsonnet
namespace: $.values.common.namespace,
annotations: {
'volume.beta.kubernetes.io/storage-class' : 'managed-nfs-storage',
}, // annotations
}, // metadata
spec: {
accessModes: ['ReadWriteMany'],
resources: { requests: { storage: '10G'},},
}, // spec
}, // PersistentVolumeClaim:
ingressroute+: ingressroute(
'grafana',
$.values.common.namespace,
[ 'web' ],
[{
match: 'Host(`test.k8s-host.com`) && PathPrefix(`/grafana`)',
kind: 'Rule',
services: [{
name: 'grafana',
port: 3000,
}],
}],
), // ingressroute
}, // _grafana+:
alertmanager+: {
alertmanager+: {
routePrefix: '/alertmanager',
configSecret: 'alertmanager-main',
},
ingressroute+: ingressroute(
'alertmanager',
$.values.common.namespace,
[ 'web' ],
[{
match: 'Host(`test.k8s-host.com`) && PathPrefix(`/alertmanager`)',
kind: 'Rule',
services: [{
name: kp.alertmanager.service.metadata.name,
port: kp.alertmanager.service.spec.ports[0]['port']
}],
}],
), // ingressroute
},
};
{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
{
['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
{ 'prometheus-operator/prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ 'prometheus-operator/prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
{ 'prometheus/kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
{ ['alertmanager/alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['blackbox-exporter/blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
{ ['grafana/grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
{ ['kube-state-metrics/kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['kubernetes/kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
{ ['node-exporter/node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['prometheus/prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter/prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['etcd/etcd-' + name]: kp.etcd[name] for name in std.objectFields(kp.etcd) } +
{ ['dingtalk/dingtalk-' + name]: kp.dingtalk[name] for name in std.objectFields(kp.dingtalk) }