Cloud-Native Monitoring with Prometheus: Prometheus Query Language

Prometheus Query Language

  Prometheus ships with its own functional expression query language, PromQL (Prometheus Query Language). It lets users select and aggregate time series data in real time, making it easy to query and retrieve data from Prometheus. Expression results can be displayed in the browser as graphs or tables, or consumed by external systems through the HTTP API. Although the name ends in QL, PromQL is not a SQL-like language: when it comes to computations over time series, SQL often lacks the necessary expressiveness.

  PromQL is highly expressive. Beyond the usual operators, it provides a large set of built-in functions for advanced data processing, letting the monitoring data speak for itself. The three core feature areas, ad hoc queries, visualization, and alerting configuration, are all built on PromQL.

  PromQL is the core of hands-on Prometheus work, the foundation of every Prometheus scenario, and required learning for anyone using Prometheus.

1. A First Look at PromQL

  Let's start with a few examples to get a feel for PromQL and see how it helps users understand system performance through metrics.

  Example 1: get the amount of free memory on the current host, in MB.

node_memory_free_bytes_total / (1024 * 1024)

  Explanation: node_memory_free_bytes_total is an instant vector expression, and its result is an instant vector. It returns the host's currently free memory; the default sample unit is bytes (B), so we divide to convert the unit to MB.

  Example 2: based on 2 hours of sample data, predict whether the disk will fill up within the next 24 hours.

predict_linear(node_filesystem_free[2h], 24*3600) < 0

  Explanation: predict_linear(v range-vector, t scalar) predicts the value of the time series v after t seconds. It fits a simple linear regression over the samples in the time window to project the series' trend. The expression above takes the file system's free space over the past 2 hours and computes whether free space will drop below 0 within the next 24 hours. To build an alert on top of this linear prediction, it can be extended as follows.

ALERT DiskWillFillIn24Hours
    IF predict_linear(node_filesystem_free[2h], 24*3600) < 0
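The linear regression behind predict_linear can be sketched in a few lines of Python. This is a simplified model under stated assumptions: PromQL fits least squares over the samples in the window and extrapolates from the query evaluation time, which is approximated here by the last sample's timestamp.

```python
def predict_linear(samples, t):
    """Least-squares linear fit over (timestamp, value) samples, then
    extrapolate t seconds past the last sample. Simplified model of
    PromQL's predict_linear, which extrapolates from evaluation time."""
    xs = [ts for ts, _ in samples]
    ys = [v for _, v in samples]
    n = len(samples)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return slope * (xs[-1] + t) + intercept

# Free disk space shrinking by 1 GiB per hour over a 2-hour window:
window = [(0, 100.0), (3600, 99.0), (7200, 98.0)]   # (seconds, GiB)
print(predict_linear(window, 24 * 3600))            # 74.0 GiB left in 24h
```

A negative return value is exactly the "disk will be full" condition the alert rule above tests for.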

  Example 3: nine common PromQL statements for http_requests_total (the total number of HTTP requests).

# 1. Query the total number of HTTP requests.
http_requests_total

# 2. Return all time series of the metric http_requests_total with the given job and handler labels.
http_requests_total{job="apiserver",handler="/api/comments"}

# 3. Conditional query: total requests with status code 200.
http_requests_total{code="200"}

# 4. Range query: request totals over the last 5 minutes.
http_requests_total{}[5m]

# 5. Using a built-in function:
# sum all HTTP requests across the system.
sum(http_requests_total)

# 6. Regex: select series for jobs whose name matches a pattern (e.g. jobs ending in server).
http_requests_total{job=~".*server"}

# 7. Select data for every HTTP status code except 4xx.
http_requests_total{status!~"4.."}

# 8. Subquery: evaluate the 5-minute rate at a 1-minute resolution
# over the last 30 minutes.
rate(http_requests_total[5m])[30m:1m]

# 9. The rate function: the per-second average rate of increase over the last 5 minutes, returned as a time series.
rate(http_requests_total[5m])

  As shown above, we built nine representative monitoring queries around the single metric http_requests_total; PromQL is clearly very flexible.

1.1 The 4 PromQL Data Types

  In the examples above we met the instant vector and the range vector; they belong to the four data types of the Prometheus expression language.

1. Instant vector: a set of time series, each containing a single sample, all sharing the same timestamp. In other words, the expression returns only the latest sample of each time series; such an expression is called an instant vector expression.

2. Range vector: a set of time series, each containing samples over a range of time.

3. Scalar: a single floating-point value.

4. String: a simple string value.

1.2 Time Series

  Unlike a relational database such as MySQL, a time series database produces data points at a fixed interval, stored in the order their timestamps and values were generated; this yields the vectors mentioned above. With time on the horizontal axis and series on the vertical axis, connecting these points forms a matrix.

  Each point in the matrix is a sample (Sample), which consists of three parts.

    • Metric: the metric name (metric name) plus a set of labels (label set), e.g. request_total{path="/status",method="GET"}.
    • Timestamp: by default precise to the millisecond.
    • Value: by default a Float64 floating-point value.

  Prometheus periodically collects new data points for all series.

1.3 Metrics

  Time series metrics can be designed as key-value storage in the style of Bigtable (from the Google paper), as shown in the figure below.

  Take http_requests_total{status="401",method="GET"} @1434317560938 94358 from the figure above as an example. In key-value terms, 94358 is the value (the sample value), and everything before it, http_requests_total{status="401",method="GET"} @1434317560938, constitutes the key. The key in turn has three parts: the metric name (http_requests_total in the example), the labels ({status="401",method="GET"}), and the timestamp (@1434317560938).

  In the Prometheus world, all numbers are 64-bit: each time series records a 64-bit timestamp and a 64-bit sample value.
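The key-value layout described above can be modeled in a few lines of Python. This is an illustrative structure only, not Prometheus's actual storage code.

```python
from typing import NamedTuple

class Sample(NamedTuple):
    """One sample: the key is metric name + label set + timestamp,
    and the value is the 64-bit sample value."""
    name: str
    labels: tuple          # sorted (label, value) pairs, hashable
    timestamp_ms: int
    value: float

s = Sample("http_requests_total",
           (("method", "GET"), ("status", "401")),
           1434317560938, 94358)
key = (s.name, s.labels, s.timestamp_ms)
print(key, "->", s.value)
```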

  As the figure shows, Prometheus metrics can be written in two ways. The first is the classic form.

<Metric Name>{<Label name>=<label value>, ...}

  Here, Metric Name is the metric's name, reflecting what the monitored sample means. Metric names may contain only ASCII letters, digits, underscores, and colons, and must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*.

  Labels capture the different dimensions of a sample. Through these dimensions, Prometheus can filter, aggregate, and compute over sample data, producing new derived time series. Label names may contain only ASCII letters, digits, and underscores, and must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*.
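The two naming rules can be checked directly with the quoted regular expressions. A Python sketch with fully anchored matching:

```python
import re

# The naming rules quoted above, applied with fully anchored matching.
METRIC_NAME_RE = re.compile(r"[a-zA-Z_:][a-zA-Z0-9_:]*")
LABEL_NAME_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*")

def is_valid_metric_name(name):
    return METRIC_NAME_RE.fullmatch(name) is not None

def is_valid_label_name(name):
    return LABEL_NAME_RE.fullmatch(name) is not None

print(is_valid_metric_name("http_requests_total"))  # True
print(is_valid_metric_name("2xx_total"))            # False: leading digit
print(is_valid_label_name("method"))                # True
print(is_valid_label_name("api:path"))              # False: labels disallow colons
```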

  Running go_gc_duration_seconds{quantile="0"} in Prometheus's Graph console produces the following chart:

  The second form comes from Prometheus's internals.

  {__name__="<metric name>", <label name>=<label value>, ...}

  The second form denotes the same time series as the first. It is Prometheus's internal representation; __name__ is a reserved keyword, and the official recommendation is to use it only internally. In Prometheus's underlying implementation, the metric name is in fact stored in the database as __name__=<metric name>; __name__ is a special label holding the metric name. Label values may contain any Unicode characters.

  Running {__name__="go_gc_duration_seconds",quantile="0"} in Prometheus's Graph console gives the following result:

2. The 4 Selectors in PromQL

  When a metric comes from many different kinds of servers or applications, engineers usually need to narrow it down, for example to view, among countless metrics, only the series for one instance or one handler label. That is what label filtering is for, and it is done with selectors (Selector).

http_requests_total{job="HelloWorld",status="200",method="POST",handler="/api/comments"}

  This is a selector: it returns http_requests_total where the job is HelloWorld, the status code is 200, the method is POST, and the handler label is "/api/comments". It is an instant vector selector over the total number of HTTP requests.

  In the example, job="HelloWorld" is a matcher (Matcher); a selector can contain multiple matchers, used in combination.

  The rest of this section introduces PromQL through four topics: matchers, instant vector selectors, range vector selectors, and the offset modifier.

2.1 Matchers

  Matchers operate on labels. Label matchers filter time series; Prometheus supports both exact matching and regular expression matching.

2.1.1 The Equality Matcher (=)

  The equality matcher (Equality Matcher) selects labels exactly equal to the provided string. The example below uses equality matchers to filter on several conditions at once.

http_requests_total{job="HelloWorld",status="200",method="POST",handler="/api/comments"}

  Note that a missing or empty label can also be matched with the form Label="". For a label that does not exist, such as demo, go_gc_duration_seconds_count and go_gc_duration_seconds_count{demo=""} behave identically, as the comparison below shows:

2.1.2 The Negative Equality Matcher (!=)

  The negative equality matcher (Negative Equality Matcher) selects labels not equal to the provided string; it is the exact opposite of the equality matcher. For example, to view the total HTTP requests whose job is not HelloWorld, use:

http_requests_total{job!="HelloWorld"}

2.1.3 The Regular Expression Matcher (=~)

  The regular expression matcher (Regular Expression Matcher) selects labels whose value matches the provided regular expression. Prometheus regular expressions are fully anchored: the pattern a matches only the string a, not ab, ba, or abc. To relax this anchoring, add ".*" before or after the pattern. For example, the following selects the total HTTP requests for all jobs starting with Hello.

http_requests_total{job=~"Hello.*"}

  http_requests_total is directly equivalent to {__name__="http_requests_total"}, and the latter supports the same four matchers (=, !=, =~, !~). For example, the following selects all metrics whose name starts with Hello.

{__name__=~"Hello.*"}

  To view requests whose job starts with Hello, whose environment is production (prod), test (test), or staging (pre), and whose response code is not 200, query like this:

http_requests_total{job=~"Hello.*",env=~"prod|test|pre",code!="200"}

  Every PromQL vector selector must contain at least a metric name, or at least one label matcher that does not match the empty string. Following the official Prometheus documentation, the following examples are invalid:

{job=~".*"}    # invalid!
{job=""}       # invalid!
{job!=""}      # invalid!

  By contrast, these expressions are valid:

{job=~".+"}                # valid: .+ matches at least one character
{job=~".*",method="get"}   # valid: .* matches any (even zero) characters
{job=~"",method="post"}    # valid: one matcher is non-empty
{job=~".+",method="post"}  # valid: one matcher is non-empty

2.1.4 The Negative Regular Expression Matcher (!~)

  The negative regular expression matcher (Negative Regular Expression Matcher) selects labels whose value does not match the provided regular expression. PromQL's regular expressions use RE2 syntax, and RE2 does not support negative lookahead; !~ exists as the alternative, providing a way to exclude label values by regular expression. Within one selector, multiple matchers may target the same label. The following example finds all file systems (and their sizes) whose job is node and whose mountpoint is under /prometheus but not under /prometheus/user.

node_filesystem_size_bytes{job="node",mountpoint=~"/prometheus/.*",mountpoint !~ "/prometheus/user/.*"}

  PromQL uses the RE2 regex engine. RE2 comes from Go and is designed to run in linear time, which suits time series workloads like PromQL well. But as noted above, RE2 supports neither negative lookahead (lookahead assertions) nor backreferences, and it lacks many other advanced features.
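The four matchers and the full anchoring can be simulated in Python. Note that Python's re module is not RE2, but anchoring via fullmatch behaves the same for patterns like these:

```python
import re

def label_matches(value, op, pattern):
    """One PromQL-style label matcher. Regex matchers are fully
    anchored, as in Prometheus's RE2 engine (simulated here with
    Python's re.fullmatch)."""
    if op == "=":
        return value == pattern
    if op == "!=":
        return value != pattern
    if op == "=~":
        return re.fullmatch(pattern, value) is not None
    if op == "!~":
        return re.fullmatch(pattern, value) is None
    raise ValueError(f"unknown matcher {op!r}")

print(label_matches("ab", "=~", "a"))                # False: fully anchored
print(label_matches("HelloWorld", "=~", "Hello.*"))  # True
print(label_matches("/prometheus/user/db", "!~", "/prometheus/user/.*"))  # False
```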


 Going deeper:

  The four matchers =, !=, =~, and !~ are extremely useful in practice, but piling regex matchers onto a label one value at a time quickly gets unwieldy. HTTP status codes span 1xx through 5xx, so counting all 5xx responses that way produces something like http_requests_total{job="HelloWorld",status=~"500",status=~"501",status=~"502",status=~"503",status=~"504",status=~"505"…}

  We know that 5xx means a server error: the server hit an internal problem while trying to handle the request. These errors come from the server itself, not from the request.

  500 (Internal Server Error): the server encountered an error and could not complete the request.
  501 (Not Implemented): the server lacks the functionality to complete the request, e.g. when it does not recognize the request method.
  502 (Bad Gateway): acting as a gateway or proxy, the server received an invalid response from the upstream server.
  503 (Service Unavailable): the server is currently unavailable (overloaded or down for maintenance); usually a temporary state.
  504 (Gateway Timeout): acting as a gateway or proxy, the server did not receive a timely response from the upstream server.
  505 (HTTP Version Not Supported): the server does not support the HTTP protocol version used in the request.

    ……

  To avoid such unwieldy queries, consider the following optimizations:

  Optimization 1: separate alternatives with "|": http_requests_total{job="HelloWorld",status=~"500|501|502|503|504|505"}.

  Optimization 2: roll these codes up into a single 5xx label value at collection time, so the matcher becomes simply http_requests_total{job="HelloWorld",status=~"5xx"}.

  Optimization 3: to select all HTTP status codes that do not start with 4, use http_requests_total{status!~"4.."}.

2.2 Instant Vector Selectors

  An instant vector selector returns, for a given query timestamp, the most recent sample of each matching series: an instant vector containing zero or more time series. In its simplest form, you specify only the metric name, e.g. http_requests_total, yielding an instant vector with every series carrying that name. The series can be filtered further by appending a set of label matchers in curly braces {}:

http_requests_total{job="HelloWorld",group="middleware"}
http_requests_total{}    # select the current latest data

  Instant vectors are not meant to return stale data, and here Prometheus 1.x and 2.x behave differently.

  Prometheus 1.x returns series whose latest sample is no more than 5 minutes older than the query time, which satisfies most scenarios. But if, within that 5-minute window after a first query such as http_requests_total{job="HelloWorld"}, a label is added, e.g. http_requests_total{job="HelloWorld",group="middleware"}, then a fresh instant query will count the series twice. That is a problem.

  Prometheus 2.x handles this with staleness markers: if a time series disappears from one scrape to the next, or Prometheus's service discovery can no longer find the target, a staleness marker is appended to the series. With an instant vector selector, besides finding the series that satisfy the matchers, Prometheus considers each series' latest sample within the 5 minutes before the evaluation time: a normal sample is returned in the instant vector, but if it is a staleness marker, the series is omitted. Note that if an exporter exposes its own timestamps, staleness markers and the 2.x staleness handling do not apply, and the affected series fall back to the old 5-minute logic.

2.3 Range Vector Selectors

  A range vector selector returns a set of time series, each containing samples over a range of time. Unlike the instant vector selector, it selects samples reaching back a certain duration from the current time. The duration is written in square brackets [] at the end of the selector, specifying how far back samples should be fetched for each returned series. For example, the following selects all HTTP request samples from the last 5 minutes; the [5m] turns the instant vector selector into a range vector selector.

http_requests_total{}[5m]

  The time range is an integer followed by one of these units: seconds (s), minutes (m), hours (h), days (d), weeks (w), years (y). The duration must be a single integer: 38m is correct, while 2h 15m and 1.5h are not. Note that a year here ignores leap years and is always 60 * 60 * 24 * 365 seconds.
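Duration parsing under these rules can be sketched in Python. Note that newer Prometheus versions also accept compound durations such as 1h30m; this sketch follows the integer-plus-unit rule as stated above.

```python
import re

# Seconds per PromQL duration unit; a year ignores leap years (365 days).
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400,
                "w": 604800, "y": 31536000}

def parse_duration(text):
    """Parse a single integer+unit duration like '38m' or '2h'.
    Fractional or compound forms ('1.5h', '2h 15m') are rejected,
    matching the rule stated above."""
    m = re.fullmatch(r"(\d+)([smhdwy])", text)
    if m is None:
        raise ValueError(f"invalid duration: {text!r}")
    return int(m.group(1)) * UNIT_SECONDS[m.group(2)]

print(parse_duration("38m"))  # 2280
print(parse_duration("1y"))   # 31536000
```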

  One more point about range vector selection: it returns all samples within the range, and although the scrape interval is the same, the timestamps of different time series usually do not line up, as shown below:

http_requests_total{code="200",job="HelloWorld",method="get"}=[
1@1518096812.678
1@1518096817.678
1@1518096822.678
1@1518096827.678
1@1518096832.678
1@1518096837.678
]
http_requests_total{code="200",job="HelloWorld",method="get"}=[
4@1518096813.233
4@1518096818.233
4@1518096823.233
4@1518096828.233
4@1518096833.233
4@1518096838.233
]

  This is because range vectors keep the samples' original timestamps, and scrapes of different targets are spread out to even the load. We can control the scrape and rule-evaluation frequency, say once every 5 seconds (first group at 12, 17, 22, 27, 32, 37; second group at 13, 18, 23, 28, 33, 38), but we cannot force the timestamps to align exactly (1@1518096812.678 versus 4@1518096813.233): with hundreds or thousands of targets, each 5-second scrape cycle processes them at different offsets, so the series inevitably sit at slightly different points in time. In practice this rarely matters (an occasional harmless instantaneous blip is unimportant), because metric monitoring like Prometheus is positioned for trend accuracy rather than the point precision of log monitoring.

  Finally, let's apply what this section covered to a few CPU-related PromQL examples to solidify the theory.

  Example 1: compute the CPU usage of system processes over 2 minutes.

  rate is a built-in PromQL function that returns the average per-second rate of increase over a time window: the total increase over 2m divided by 2m.

rate(node_cpu_seconds_total{}[2m])
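The arithmetic behind rate can be sketched in a few lines of Python. This is a simplified model: the real rate() also handles counter resets and extrapolates to the window edges.

```python
def simple_rate(samples):
    """Average per-second increase over a window of (timestamp, value)
    samples: total increase divided by elapsed seconds. Real PromQL
    rate() additionally handles counter resets and extrapolation."""
    (t0, v0), (tn, vn) = samples[0], samples[-1]
    return (vn - v0) / (tn - t0)

# A counter that grew by 120 over a 2-minute window -> 1.0 per second
window = [(0, 1000), (60, 1060), (120, 1120)]
print(simple_rate(window))  # 1.0
```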

  Example 2: compute overall system CPU usage by excluding the idle CPU share.

  without removes the listed labels from the result while keeping the rest; by does the opposite, keeping only the listed labels and dropping the others. With without and by, samples can be aggregated along the desired dimensions.

  avg without(cpu) averages across the cpu label instead of grouping by it.

1 - avg without(cpu) (rate(node_cpu_seconds_total{mode="idle"}[2m]))

  Example 3: node_cpu_seconds_total exposes all the current CPU information. After aggregating with avg, use by to split the result by instance, so each instance's data can be queried separately.

  irate over [5m] computes the rate from the two most recent data points within the range; it suits fast-changing counters.

avg(irate(node_cpu_seconds_total{job="node-exporter"}[5m])) by (instance)


Going deeper:

1) Range vector selectors are often combined with the rate function. For example, the following subquery evaluates the 5-minute rate of http_requests_total at a 1-minute resolution over the past 30 minutes:

rate(http_requests_total{}[5m])[30m:1m]

2) A range vector expression cannot be graphed directly in the Graph view, but it can be displayed in the Console view.

2.4 The Offset Modifier

  The offset modifier shifts instant and range vector selectors in time: it takes the query's evaluation time and pushes it back on a per-selector basis.

  Both selector types fetch samples relative to the current evaluation time. To fetch the HTTP request data as of 5 minutes before the evaluation time, write:

http_requests_total{} offset 5m

  For instant vectors, the offset keyword must immediately follow the selector {}:

sum(http_requests_total{method="GET"} offset 5m)   # correct
sum(http_requests_total{method="GET"}) offset 5m   # wrong

  For range vectors, the keyword likewise must follow the selector:

sum(http_requests_total[5m] offset 5m)   # correct
sum(http_requests_total[5m]) offset 5m   # wrong

  The offset modifier is most helpful for debugging individual series; it is rarely used with trend data.

3. The 4 Prometheus Metric Types

  Prometheus has four metric types (Metrics Type): Counter, Gauge, Histogram, and Summary. These are the four core types provided by the Prometheus client libraries (currently available for Go, Java, Python, Ruby, and more), but the Prometheus server does not distinguish between them: it simply treats all metrics as untyped time series.

3.1 Counter

  A counter is a monotonically increasing metric: barring a reset (such as a service or application restart), it only ever goes up. Use Counter metrics for things like the number of requests served, tasks completed, or errors that occurred. A counter has two main methods:

1) Inc()        // add 1 to the Counter
2) Add(float64) // add the given value to the Counter; a value below 0 triggers a Go panic, which can crash the process

  The raw total of a counter, however, is rarely useful on its own. Never use a counter for values that can go down, such as the number of currently running processes or currently logged-in users.

  To visualize how the samples change, you usually compute their rate of growth, typically with PromQL functions such as rate, topk, increase, and irate:

rate(http_requests_total[5m]) // growth rate of HTTP requests via rate()
topk(10,http_requests_total)  // the 10 most-requested HTTP endpoints in the system

Going deeper:

  Prometheus must apply rate() first and sum() second, never sum() first and then rate().

  This follows from how rate() is implemented: it assumes its input is a counter, which can only incr (increase) or reset (drop to zero). After sum() or any other aggregation, the result is no longer a counter.


  increase(v range-vector) takes a range vector; it reads the first and last samples in the range and returns the growth between them. The query below computes a Counter's growth rate: the average per-second increase of http_requests_total over the last 5 minutes, where 300 stands for 300 seconds.

increase(http_requests_total[5m]) / 300
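The reset compensation that increase relies on can be sketched in Python. This is a simplified model: the real increase() also extrapolates to the window edges, which this sketch ignores.

```python
def increase_with_resets(samples):
    """Sum of positive deltas over (timestamp, value) samples, treating
    any drop as a counter reset (the counter restarted from 0)."""
    total = 0.0
    prev = samples[0][1]
    for _, v in samples[1:]:
        total += v - prev if v >= prev else v  # after a reset, count from 0
        prev = v
    return total

# Counter reset mid-window (e.g. a service restart at t=150):
window = [(0, 200), (100, 350), (150, 20), (300, 170)]
print(increase_with_resets(window))        # 320.0
print(increase_with_resets(window) / 300)  # average per-second increase
```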

Going deeper:

  The growth computed by rate and increase is prone to the long-tail effect. For example, if a traffic surge or another problem pins the CPU at 100%, the average growth rate over the time window will fail to surface it.

  Why do monitoring and performance testing focus on p95/p99? Because of the long tail. When a few requests take 1 second or longer, the traditional average response time hides those spikes; smoothing out spikes is an important step in data collection, and this is what the long-tail effect refers to. p95/p99 draw the dividing line of the long tail: 99% of requests complete within some bound, and the remaining 1% fall outside it.
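A tiny Python sketch shows why the average hides the spike while p99 draws the long-tail line. This uses a nearest-rank percentile and illustrative latency data:

```python
def percentile(values, p):
    """Nearest-rank percentile: the value below which roughly p% of
    the observations fall."""
    ordered = sorted(values)
    k = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[k]

# 99 requests at 100 ms plus one 5-second outlier:
latencies = [0.1] * 99 + [5.0]
print(sum(latencies) / len(latencies))  # mean ~0.149 s: the spike is diluted
print(percentile(latencies, 99))        # 0.1 s: 99% finish within 100 ms
print(max(latencies))                   # 5.0 s: the 1% long tail
```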


  irate(v range-vector) is PromQL's higher-sensitivity answer to the long tail. Like rate, it computes the growth rate of a range vector, but it reflects the instantaneous rate: irate uses only the last two samples in the range vector. This avoids the "long-tail problem" within the time window and responds far more sharply, so graphs drawn with irate better reflect the instantaneous state of the data. It is invoked as follows:

irate(http_requests_total[5m])
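The "last two samples" logic of irate can be sketched in Python (simplified model, timestamps in seconds):

```python
def simple_irate(samples):
    """Instant rate from the last two (timestamp, value) samples in the
    window, mirroring how irate() achieves its sensitivity."""
    (t1, v1), (t2, v2) = samples[-2], samples[-1]
    return (v2 - v1) / (t2 - t1)

# Traffic spikes in the final 10 seconds of a 5-minute window:
window = [(0, 100), (290, 120), (300, 220)]
print(simple_irate(window))  # 10.0 per second
# versus the window-wide average: (220 - 100) / 300 = 0.4 per second
```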

Going deeper:

  irate is more sensitive than rate, but for long-term trend analysis or in alerting rules that very sensitivity becomes noise. For long-term trends and alerting, rate is the better choice.


3.2 Gauge

  A gauge is a metric whose sample value can change arbitrarily, going up or down. Think of it as a snapshot of state: gauges typically represent things like temperature or memory usage, or "totals" that can rise and fall at any time, such as the number of concurrent requests, node_memory_MemFree (the host's current free memory), or node_memory_MemAvailable (available memory). With gauges, users typically want to compute sums, averages, minimums, and maximums.

  Take the classic Node Exporter metric node_filesystem_size_bytes: it reports the size of each collected file system and carries labels such as device, fstype, and mountpoint. To sum the total file system size on each machine, use the following PromQL:

sum without(device,fstype,mountpoint)(node_filesystem_size_bytes)
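How sum without(...) groups series can be modeled in a few lines of Python. The label sets and sizes here are illustrative, not real Node Exporter output:

```python
from collections import defaultdict

def sum_without(series, drop):
    """Aggregate (labels, value) series by summing, after removing the
    listed labels (the behavior of PromQL's `sum without(...)`)."""
    out = defaultdict(float)
    for labels, value in series:
        kept = tuple(sorted((k, v) for k, v in labels.items() if k not in drop))
        out[kept] += value
    return dict(out)

series = [
    ({"instance": "a", "device": "sda1", "mountpoint": "/"},     50e9),
    ({"instance": "a", "device": "sda2", "mountpoint": "/data"}, 150e9),
    ({"instance": "b", "device": "sda1", "mountpoint": "/"},     80e9),
]
print(sum_without(series, {"device", "fstype", "mountpoint"}))
# one total per instance: a -> 200e9, b -> 80e9
```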

  Minimums and averages over gauges work the same way as sums and maximums. Beyond these basics, gauges are often combined with PromQL's predict_linear and delta functions.

3.3 Histogram

  In most cases people reach for the average of some quantity, such as average CPU usage or average page response time. The pitfall is obvious: take the average response time of a system's API calls; if most requests come back within 100 ms but a few individual requests take 5 s, there is a long-tail problem.

  Slow responses may come from a genuinely high average or from the long tail, and the simplest way to tell them apart is to partition by latency range: count how many requests fall between 0 and 10 ms, how many between 10 and 20 ms, and so on. Presenting metrics as a Histogram lets us quickly see how the samples are distributed.

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 5.1373e-05
go_gc_duration_seconds{quantile="0.25"} 9.5224e-05
go_gc_duration_seconds{quantile="0.5"} 0.000133418
go_gc_duration_seconds{quantile="0.75"} 0.000273065
go_gc_duration_seconds{quantile="1"} 1.565256115
go_gc_duration_seconds_sum 2.600561302
go_gc_duration_seconds_count 473
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 269
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.16.2"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 3.00471752e+08
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 8.9008069072e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.190072e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.138419718e+09
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0.0005366479170588193
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.8316152e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 3.00471752e+08
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.73113088e+08
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 3.25197824e+08
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 2.138627e+06
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 1.33824512e+08
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 5.98310912e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6262509307495074e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.140558345e+09
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 9600
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 3.778488e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 6.504448e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.16926496e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 2.062552e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 5.668864e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 5.668864e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 6.46069384e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 14
# HELP net_conntrack_dialer_conn_attempted_total Total number of connections attempted by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_attempted_total counter
net_conntrack_dialer_conn_attempted_total{dialer_name="alertmanager"} 69
net_conntrack_dialer_conn_attempted_total{dialer_name="default"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/alertmanager/0"} 44
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0"} 12
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/coredns/0"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/grafana/0"} 17
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0"} 31
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0"} 0
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0"} 4
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1"} 8
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kubelet/0"} 33
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kubelet/1"} 30
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/kubelet/2"} 28
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/node-exporter/0"} 59
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0"} 9
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0"} 14
net_conntrack_dialer_conn_attempted_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0"} 15
# HELP net_conntrack_dialer_conn_closed_total Total number of connections closed which originated from the dialer of a given name.
# TYPE net_conntrack_dialer_conn_closed_total counter
net_conntrack_dialer_conn_closed_total{dialer_name="alertmanager"} 30
net_conntrack_dialer_conn_closed_total{dialer_name="default"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0"} 20
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0"} 5
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/coredns/0"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/grafana/0"} 6
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0"} 16
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0"} 0
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0"} 3
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1"} 6
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kubelet/0"} 18
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kubelet/1"} 17
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/kubelet/2"} 14
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0"} 27
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0"} 7
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0"} 7
net_conntrack_dialer_conn_closed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0"} 8
# HELP net_conntrack_dialer_conn_established_total Total number of connections successfully established by the given dialer a given name.
# TYPE net_conntrack_dialer_conn_established_total counter
net_conntrack_dialer_conn_established_total{dialer_name="alertmanager"} 33
net_conntrack_dialer_conn_established_total{dialer_name="default"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/alertmanager/0"} 23
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0"} 6
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/coredns/0"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/grafana/0"} 7
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0"} 19
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0"} 0
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0"} 4
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1"} 7
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kubelet/0"} 21
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kubelet/1"} 20
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/kubelet/2"} 17
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/node-exporter/0"} 30
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0"} 9
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0"} 9
net_conntrack_dialer_conn_established_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0"} 9
# HELP net_conntrack_dialer_conn_failed_total Total number of connections failed to dial by the dialer a given name.
# TYPE net_conntrack_dialer_conn_failed_total counter
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="refused"} 3
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="timeout"} 33
net_conntrack_dialer_conn_failed_total{dialer_name="alertmanager",reason="unknown"} 36
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="default",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="refused"} 3
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="timeout"} 18
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/alertmanager/0",reason="unknown"} 21
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="timeout"} 6
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/blackbox-exporter/0",reason="unknown"} 6
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/coredns/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="timeout"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/grafana/0",reason="unknown"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="timeout"} 11
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-apiserver/0",reason="unknown"} 12
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-controller-manager/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-scheduler/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="refused"} 1
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kube-state-metrics/1",reason="unknown"} 1
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="timeout"} 11
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/0",reason="unknown"} 12
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="timeout"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/1",reason="unknown"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="timeout"} 10
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/kubelet/2",reason="unknown"} 11
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="refused"} 1
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="timeout"} 26
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/node-exporter/0",reason="unknown"} 29
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="timeout"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-adapter/0",reason="unknown"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="timeout"} 5
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-k8s/0",reason="unknown"} 5
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="refused"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="resolution"} 0
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="timeout"} 6
net_conntrack_dialer_conn_failed_total{dialer_name="serviceMonitor/monitoring/prometheus-operator/0",reason="unknown"} 6
# HELP net_conntrack_listener_conn_accepted_total Total number of connections opened to the listener of a given name.
# TYPE net_conntrack_listener_conn_accepted_total counter
net_conntrack_listener_conn_accepted_total{listener_name="http"} 4231
# HELP net_conntrack_listener_conn_closed_total Total number of connections closed that were made to the listener of a given name.
# TYPE net_conntrack_listener_conn_closed_total counter
net_conntrack_listener_conn_closed_total{listener_name="http"} 4227
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 934.63
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 61
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 6.1779968e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.6262298738e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.802285056e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP prometheus_api_remote_read_queries The current number of remote read queries being executed or waiting.
# TYPE prometheus_api_remote_read_queries gauge
prometheus_api_remote_read_queries 0
# HELP prometheus_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which prometheus was built.
# TYPE prometheus_build_info gauge
prometheus_build_info{branch="HEAD",goversion="go1.16.2",revision="3cafc58827d1ebd1a67749f88be4218f0bab3d8d",version="2.26.0"} 1
# HELP prometheus_config_last_reload_success_timestamp_seconds Timestamp of the last successful configuration reload.
# TYPE prometheus_config_last_reload_success_timestamp_seconds gauge
prometheus_config_last_reload_success_timestamp_seconds 1.6262257000479589e+09
# HELP prometheus_config_last_reload_successful Whether the last configuration reload attempt was successful.
# TYPE prometheus_config_last_reload_successful gauge
prometheus_config_last_reload_successful 1
# HELP prometheus_engine_queries The current number of queries being executed or waiting.
# TYPE prometheus_engine_queries gauge
prometheus_engine_queries 0
# HELP prometheus_engine_queries_concurrent_max The max number of concurrent queries.
# TYPE prometheus_engine_queries_concurrent_max gauge
prometheus_engine_queries_concurrent_max 20
# HELP prometheus_engine_query_duration_seconds Query timings
# TYPE prometheus_engine_query_duration_seconds summary
prometheus_engine_query_duration_seconds{slice="inner_eval",quantile="0.5"} 0.000172349
prometheus_engine_query_duration_seconds{slice="inner_eval",quantile="0.9"} 0.006378077
prometheus_engine_query_duration_seconds{slice="inner_eval",quantile="0.99"} 0.092900003
prometheus_engine_query_duration_seconds_sum{slice="inner_eval"} 508.9111094559962
prometheus_engine_query_duration_seconds_count{slice="inner_eval"} 122911
prometheus_engine_query_duration_seconds{slice="prepare_time",quantile="0.5"} 8.8421e-05
prometheus_engine_query_duration_seconds{slice="prepare_time",quantile="0.9"} 0.001274996
prometheus_engine_query_duration_seconds{slice="prepare_time",quantile="0.99"} 0.005844206
prometheus_engine_query_duration_seconds_sum{slice="prepare_time"} 142.2880246389999
prometheus_engine_query_duration_seconds_count{slice="prepare_time"} 122911
prometheus_engine_query_duration_seconds{slice="queue_time",quantile="0.5"} 4.857e-06
prometheus_engine_query_duration_seconds{slice="queue_time",quantile="0.9"} 1.4419e-05
prometheus_engine_query_duration_seconds{slice="queue_time",quantile="0.99"} 5.3215e-05
prometheus_engine_query_duration_seconds_sum{slice="queue_time"} 20.567446440999838
prometheus_engine_query_duration_seconds_count{slice="queue_time"} 122911
prometheus_engine_query_duration_seconds{slice="result_sort",quantile="0.5"} NaN
prometheus_engine_query_duration_seconds{slice="result_sort",quantile="0.9"} NaN
prometheus_engine_query_duration_seconds{slice="result_sort",quantile="0.99"} NaN
prometheus_engine_query_duration_seconds_sum{slice="result_sort"} 0.000177348
prometheus_engine_query_duration_seconds_count{slice="result_sort"} 35
# HELP prometheus_engine_query_log_enabled State of the query log.
# TYPE prometheus_engine_query_log_enabled gauge
prometheus_engine_query_log_enabled 0
# HELP prometheus_engine_query_log_failures_total The number of query log failures.
# TYPE prometheus_engine_query_log_failures_total counter
prometheus_engine_query_log_failures_total 0
# HELP prometheus_http_request_duration_seconds Histogram of latencies for HTTP requests.
# TYPE prometheus_http_request_duration_seconds histogram
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.2"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.4"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="3"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="8"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/"} 2.3757e-05
prometheus_http_request_duration_seconds_count{handler="/"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="0.1"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="0.2"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="0.4"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="1"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="3"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="8"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="20"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="60"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="120"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/ready",le="+Inf"} 4205
prometheus_http_request_duration_seconds_sum{handler="/-/ready"} 0.044763871999999934
prometheus_http_request_duration_seconds_count{handler="/-/ready"} 4205
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="0.1"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="0.2"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="0.4"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="1"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="3"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="8"} 0
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/-/reload",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/-/reload"} 12.747278755
prometheus_http_request_duration_seconds_count{handler="/-/reload"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="0.1"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="0.2"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="0.4"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="1"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="3"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="8"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="20"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="60"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="120"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/label/:name/values",le="+Inf"} 7
prometheus_http_request_duration_seconds_sum{handler="/api/v1/label/:name/values"} 0.056257193999999996
prometheus_http_request_duration_seconds_count{handler="/api/v1/label/:name/values"} 7
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.1"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.2"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.4"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="1"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="3"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="8"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="20"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="60"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="120"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="+Inf"} 76
prometheus_http_request_duration_seconds_sum{handler="/api/v1/query"} 0.12029475700000002
prometheus_http_request_duration_seconds_count{handler="/api/v1/query"} 76
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="0.1"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="0.2"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="0.4"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="1"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="3"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="8"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="20"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="60"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="120"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query_range",le="+Inf"} 38
prometheus_http_request_duration_seconds_sum{handler="/api/v1/query_range"} 0.06583801699999998
prometheus_http_request_duration_seconds_count{handler="/api/v1/query_range"} 38
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="0.1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="0.2"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="0.4"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="3"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="8"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/targets",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/api/v1/targets"} 0.006790512
prometheus_http_request_duration_seconds_count{handler="/api/v1/targets"} 1
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="0.1"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="0.2"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="0.4"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="1"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="3"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="8"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="20"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="60"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="120"} 4
prometheus_http_request_duration_seconds_bucket{handler="/favicon.ico",le="+Inf"} 4
prometheus_http_request_duration_seconds_sum{handler="/favicon.ico"} 0.003068569
prometheus_http_request_duration_seconds_count{handler="/favicon.ico"} 4
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="0.1"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="0.2"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="0.4"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="1"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="3"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="8"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="20"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="60"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="120"} 6
prometheus_http_request_duration_seconds_bucket{handler="/graph",le="+Inf"} 6
prometheus_http_request_duration_seconds_sum{handler="/graph"} 0.001303871
prometheus_http_request_duration_seconds_count{handler="/graph"} 6
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="0.1"} 1395
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="0.2"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="0.4"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="1"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="3"} 1396
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="8"} 1397
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="20"} 1397
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="60"} 1398
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="120"} 1398
prometheus_http_request_duration_seconds_bucket{handler="/metrics",le="+Inf"} 1398
prometheus_http_request_duration_seconds_sum{handler="/metrics"} 41.05895542500007
prometheus_http_request_duration_seconds_count{handler="/metrics"} 1398
# HELP prometheus_http_requests_total Counter of HTTP requests.
# TYPE prometheus_http_requests_total counter
prometheus_http_requests_total{code="200",handler="/-/ready"} 4202
prometheus_http_requests_total{code="200",handler="/-/reload"} 1
prometheus_http_requests_total{code="200",handler="/api/v1/label/:name/values"} 7
prometheus_http_requests_total{code="200",handler="/api/v1/query"} 73
prometheus_http_requests_total{code="200",handler="/api/v1/query_range"} 35
prometheus_http_requests_total{code="200",handler="/api/v1/targets"} 1
prometheus_http_requests_total{code="200",handler="/favicon.ico"} 4
prometheus_http_requests_total{code="200",handler="/graph"} 6
prometheus_http_requests_total{code="200",handler="/metrics"} 1398
prometheus_http_requests_total{code="302",handler="/"} 1
prometheus_http_requests_total{code="400",handler="/api/v1/query"} 3
prometheus_http_requests_total{code="400",handler="/api/v1/query_range"} 3
prometheus_http_requests_total{code="503",handler="/-/ready"} 3
# HELP prometheus_http_response_size_bytes Histogram of response size for HTTP requests.
# TYPE prometheus_http_response_size_bytes histogram
prometheus_http_response_size_bytes_bucket{handler="/",le="100"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1000"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="100000"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+06"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+07"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+08"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="1e+09"} 1
prometheus_http_response_size_bytes_bucket{handler="/",le="+Inf"} 1
prometheus_http_response_size_bytes_sum{handler="/"} 29
prometheus_http_response_size_bytes_count{handler="/"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="100"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1000"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="10000"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="100000"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+06"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+07"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+08"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="1e+09"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/ready",le="+Inf"} 4205
prometheus_http_response_size_bytes_sum{handler="/-/ready"} 88299
prometheus_http_response_size_bytes_count{handler="/-/ready"} 4205
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="100"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1000"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="100000"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+06"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+07"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+08"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="1e+09"} 1
prometheus_http_response_size_bytes_bucket{handler="/-/reload",le="+Inf"} 1
prometheus_http_response_size_bytes_sum{handler="/-/reload"} 0
prometheus_http_response_size_bytes_count{handler="/-/reload"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="10000"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="100000"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+06"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+07"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+08"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="1e+09"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/label/:name/values",le="+Inf"} 7
prometheus_http_response_size_bytes_sum{handler="/api/v1/label/:name/values"} 49810
prometheus_http_response_size_bytes_count{handler="/api/v1/label/:name/values"} 7
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="100"} 21
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1000"} 69
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="10000"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="100000"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+06"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+07"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+08"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="1e+09"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query",le="+Inf"} 76
prometheus_http_response_size_bytes_sum{handler="/api/v1/query"} 31427
prometheus_http_response_size_bytes_count{handler="/api/v1/query"} 76
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="100"} 31
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1000"} 35
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="10000"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="100000"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+06"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+07"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+08"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="1e+09"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/query_range",le="+Inf"} 38
prometheus_http_response_size_bytes_sum{handler="/api/v1/query_range"} 14573
prometheus_http_response_size_bytes_count{handler="/api/v1/query_range"} 38
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="100000"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+06"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+07"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+08"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="1e+09"} 1
prometheus_http_response_size_bytes_bucket{handler="/api/v1/targets",le="+Inf"} 1
prometheus_http_response_size_bytes_sum{handler="/api/v1/targets"} 5249
prometheus_http_response_size_bytes_count{handler="/api/v1/targets"} 1
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="10000"} 0
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="100000"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+06"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+07"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+08"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="1e+09"} 4
prometheus_http_response_size_bytes_bucket{handler="/favicon.ico",le="+Inf"} 4
prometheus_http_response_size_bytes_sum{handler="/favicon.ico"} 60344
prometheus_http_response_size_bytes_count{handler="/favicon.ico"} 4
prometheus_http_response_size_bytes_bucket{handler="/graph",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/graph",le="10000"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="100000"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+06"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+07"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+08"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="1e+09"} 6
prometheus_http_response_size_bytes_bucket{handler="/graph",le="+Inf"} 6
prometheus_http_response_size_bytes_sum{handler="/graph"} 13818
prometheus_http_response_size_bytes_count{handler="/graph"} 6
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="100"} 0
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1000"} 0
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="10000"} 1
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="100000"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+06"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+07"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+08"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="1e+09"} 1398
prometheus_http_response_size_bytes_bucket{handler="/metrics",le="+Inf"} 1398
prometheus_http_response_size_bytes_sum{handler="/metrics"} 1.8615e+07
prometheus_http_response_size_bytes_count{handler="/metrics"} 1398
# HELP prometheus_notifications_alertmanagers_discovered The number of alertmanagers discovered and active.
# TYPE prometheus_notifications_alertmanagers_discovered gauge
prometheus_notifications_alertmanagers_discovered 3
# HELP prometheus_notifications_dropped_total Total number of alerts dropped due to errors when sending to Alertmanager.
# TYPE prometheus_notifications_dropped_total counter
prometheus_notifications_dropped_total 33
# HELP prometheus_notifications_errors_total Total number of errors sending alert notifications.
# TYPE prometheus_notifications_errors_total counter
prometheus_notifications_errors_total{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 23
prometheus_notifications_errors_total{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 19
prometheus_notifications_errors_total{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 20
# HELP prometheus_notifications_latency_seconds Latency quantiles for sending alert notifications.
# TYPE prometheus_notifications_latency_seconds summary
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.35:9093/api/v2/alerts",quantile="0.5"} 0.001683286
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.35:9093/api/v2/alerts",quantile="0.9"} 0.002870164
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.35:9093/api/v2/alerts",quantile="0.99"} 0.00966798
prometheus_notifications_latency_seconds_sum{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 284.101079177001
prometheus_notifications_latency_seconds_count{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 1547
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.36:9093/api/v2/alerts",quantile="0.5"} 0.001656781
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.36:9093/api/v2/alerts",quantile="0.9"} 0.002943216
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.36:9093/api/v2/alerts",quantile="0.99"} 0.010048782
prometheus_notifications_latency_seconds_sum{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 257.0053810620001
prometheus_notifications_latency_seconds_count{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 1547
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.37:9093/api/v2/alerts",quantile="0.5"} 0.001654836
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.37:9093/api/v2/alerts",quantile="0.9"} 0.002892869
prometheus_notifications_latency_seconds{alertmanager="http://172.162.195.37:9093/api/v2/alerts",quantile="0.99"} 0.010021074
prometheus_notifications_latency_seconds_sum{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 259.53750336499985
prometheus_notifications_latency_seconds_count{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 1547
# HELP prometheus_notifications_queue_capacity The capacity of the alert notifications queue.
# TYPE prometheus_notifications_queue_capacity gauge
prometheus_notifications_queue_capacity 10000
# HELP prometheus_notifications_queue_length The number of alert notifications in the queue.
# TYPE prometheus_notifications_queue_length gauge
prometheus_notifications_queue_length 0
# HELP prometheus_notifications_sent_total Total number of alerts sent.
# TYPE prometheus_notifications_sent_total counter
prometheus_notifications_sent_total{alertmanager="http://172.162.195.35:9093/api/v2/alerts"} 3286
prometheus_notifications_sent_total{alertmanager="http://172.162.195.36:9093/api/v2/alerts"} 3286
prometheus_notifications_sent_total{alertmanager="http://172.162.195.37:9093/api/v2/alerts"} 3286
# HELP prometheus_remote_storage_highest_timestamp_in_seconds Highest timestamp that has come into the remote storage via the Appender interface, in seconds since epoch.
# TYPE prometheus_remote_storage_highest_timestamp_in_seconds gauge
prometheus_remote_storage_highest_timestamp_in_seconds 1.626250946e+09
# HELP prometheus_remote_storage_samples_in_total Samples in to remote storage, compare to samples out for queue managers.
# TYPE prometheus_remote_storage_samples_in_total counter
prometheus_remote_storage_samples_in_total 4.855752e+07
# HELP prometheus_remote_storage_string_interner_zero_reference_releases_total The number of times release has been called for strings that are not interned.
# TYPE prometheus_remote_storage_string_interner_zero_reference_releases_total counter
prometheus_remote_storage_string_interner_zero_reference_releases_total 0
# HELP prometheus_rule_evaluation_duration_seconds The duration for a rule to execute.
# TYPE prometheus_rule_evaluation_duration_seconds summary
prometheus_rule_evaluation_duration_seconds{quantile="0.5"} 0.000405493
prometheus_rule_evaluation_duration_seconds{quantile="0.9"} 0.008879674
prometheus_rule_evaluation_duration_seconds{quantile="0.99"} 0.096290033
prometheus_rule_evaluation_duration_seconds_sum 1056.0493569379776
prometheus_rule_evaluation_duration_seconds_count 122803
# HELP prometheus_rule_evaluation_failures_total The total number of rule evaluation failures.
# TYPE prometheus_rule_evaluation_failures_total counter
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 0
prometheus_rule_evaluation_failures_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 0
# HELP prometheus_rule_evaluations_total The total number of rule evaluations.
# TYPE prometheus_rule_evaluations_total counter
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 5608
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 1404
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 1404
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 4212
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 701
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 1404
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 7020
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 3510
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 2804
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 14742
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 6309
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 2106
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 10530
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 5608
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 2106
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 1402
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 4212
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 702
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 9126
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 701
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 2103
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 11232
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 7711
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 11232
prometheus_rule_evaluations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 4914
# HELP prometheus_rule_group_duration_seconds The duration of rule group evaluations.
# TYPE prometheus_rule_group_duration_seconds summary
prometheus_rule_group_duration_seconds{quantile="0.01"} 0.000344771
prometheus_rule_group_duration_seconds{quantile="0.05"} 0.000446823
prometheus_rule_group_duration_seconds{quantile="0.5"} 0.002459279
prometheus_rule_group_duration_seconds{quantile="0.9"} 0.016124292
prometheus_rule_group_duration_seconds{quantile="0.99"} 0.781796405
prometheus_rule_group_duration_seconds_sum 1066.3853881880052
prometheus_rule_group_duration_seconds_count 16956
# HELP prometheus_rule_group_interval_seconds The interval of a rule group.
# TYPE prometheus_rule_group_interval_seconds gauge
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 180
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 30
prometheus_rule_group_interval_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 30
# HELP prometheus_rule_group_iterations_missed_total The total number of rule group evaluations missed due to slow rule group evaluation.
# TYPE prometheus_rule_group_iterations_missed_total counter
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 23
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 139
prometheus_rule_group_iterations_missed_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 139
# HELP prometheus_rule_group_iterations_total The total number of scheduled rule group evaluations, whether executed or missed.
# TYPE prometheus_rule_group_iterations_total counter
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 140
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 840
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 841
prometheus_rule_group_iterations_total{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 841
# HELP prometheus_rule_group_last_duration_seconds The duration of the last rule group evaluation.
# TYPE prometheus_rule_group_last_duration_seconds gauge
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 0.002209278
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 0.005564863
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 0.0008587
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 0.008148938
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 0.000951374
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 0.001208625
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 0.014632362
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 0.199707258
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 0.001216636
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 0.715759288
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 0.000844075
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 0.001544574
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 0.00338814
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 0.00318991
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 0.002515068
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 0.002272828
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 0.01248441
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 0.000350055
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 0.004622683
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 0.000407063
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 0.003120466
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 0.017802478
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 0.004361154
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 0.003392752
prometheus_rule_group_last_duration_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 0.001903696
# HELP prometheus_rule_group_last_evaluation_samples The number of samples returned during the last rule group evaluation.
# TYPE prometheus_rule_group_last_evaluation_samples gauge
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 13
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 13
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 423
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 59
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 457
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 9
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 16
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 2
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 36
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 6
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 57
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 0
prometheus_rule_group_last_evaluation_samples{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 0
# HELP prometheus_rule_group_last_evaluation_timestamp_seconds The timestamp of the last rule group evaluation in seconds.
# TYPE prometheus_rule_group_last_evaluation_timestamp_seconds gauge
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 1.626250918174826e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 1.6262509390158134e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 1.6262509428751833e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 1.6262509336427326e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 1.6262509204856255e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 1.6262509298138967e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 1.626250937404332e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 1.6262508789433932e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 1.626250920584692e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 1.626250930246908e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 1.6262509245451539e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 1.6262509473252597e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 1.6262509436568923e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 1.6262509189774363e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 1.6262509344838426e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 1.6262509236114223e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 1.626250928798736e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 1.6262509335774076e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 1.6262509447650206e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 1.6262509202492197e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 1.6262509268543773e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 1.6262509465485084e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 1.6262509234589508e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 1.6262509281589828e+09
prometheus_rule_group_last_evaluation_timestamp_seconds{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 1.626250931304166e+09
# HELP prometheus_rule_group_rules The number of rules.
# TYPE prometheus_rule_group_rules gauge
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-alertmanager-main-rules.yaml;alertmanager.rules"} 8
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;general.rules"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-general.rules"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;kube-prometheus-node-recording.rules"} 6
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-prometheus-rules.yaml;node-network"} 1
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kube-state-metrics-rules.yaml;kube-state-metrics"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;k8s.rules"} 10
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-availability.rules"} 30
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver-slos"} 4
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-apiserver.rules"} 21
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kube-scheduler.rules"} 9
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubelet.rules"} 3
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-apps"} 15
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-resources"} 8
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-storage"} 3
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system"} 2
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-apiserver"} 6
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-controller-manager"} 1
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-kubelet"} 13
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;kubernetes-system-scheduler"} 1
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-kubernetes-monitoring-rules.yaml;node.rules"} 3
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter"} 16
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-node-exporter-rules.yaml;node-exporter.rules"} 11
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-k8s-prometheus-rules.yaml;prometheus"} 16
prometheus_rule_group_rules{rule_group="/etc/prometheus/rules/prometheus-k8s-rulefiles-0/monitoring-prometheus-operator-rules.yaml;prometheus-operator"} 7
# HELP prometheus_sd_consul_rpc_duration_seconds The duration of a Consul RPC call in seconds.
# TYPE prometheus_sd_consul_rpc_duration_seconds summary
prometheus_sd_consul_rpc_duration_seconds{call="service",endpoint="catalog",quantile="0.5"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="service",endpoint="catalog",quantile="0.9"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="service",endpoint="catalog",quantile="0.99"} NaN
prometheus_sd_consul_rpc_duration_seconds_sum{call="service",endpoint="catalog"} 0
prometheus_sd_consul_rpc_duration_seconds_count{call="service",endpoint="catalog"} 0
prometheus_sd_consul_rpc_duration_seconds{call="services",endpoint="catalog",quantile="0.5"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="services",endpoint="catalog",quantile="0.9"} NaN
prometheus_sd_consul_rpc_duration_seconds{call="services",endpoint="catalog",quantile="0.99"} NaN
prometheus_sd_consul_rpc_duration_seconds_sum{call="services",endpoint="catalog"} 0
prometheus_sd_consul_rpc_duration_seconds_count{call="services",endpoint="catalog"} 0
# HELP prometheus_sd_consul_rpc_failures_total The number of Consul RPC call failures.
# TYPE prometheus_sd_consul_rpc_failures_total counter
prometheus_sd_consul_rpc_failures_total 0
# HELP prometheus_sd_discovered_targets Current number of discovered targets.
# TYPE prometheus_sd_discovered_targets gauge
prometheus_sd_discovered_targets{config="config-0",name="notify"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/alertmanager/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/blackbox-exporter/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/coredns/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/grafana/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-apiserver/0",name="scrape"} 3
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-controller-manager/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-scheduler/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-state-metrics/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kube-state-metrics/1",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kubelet/0",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kubelet/1",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/kubelet/2",name="scrape"} 13
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/node-exporter/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/prometheus-adapter/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/prometheus-k8s/0",name="scrape"} 44
prometheus_sd_discovered_targets{config="serviceMonitor/monitoring/prometheus-operator/0",name="scrape"} 44
# HELP prometheus_sd_dns_lookup_failures_total The number of DNS-SD lookup failures.
# TYPE prometheus_sd_dns_lookup_failures_total counter
prometheus_sd_dns_lookup_failures_total 0
# HELP prometheus_sd_dns_lookups_total The number of DNS-SD lookups.
# TYPE prometheus_sd_dns_lookups_total counter
prometheus_sd_dns_lookups_total 0
# HELP prometheus_sd_failed_configs Current number of service discovery configurations that failed to load.
# TYPE prometheus_sd_failed_configs gauge
prometheus_sd_failed_configs{name="notify"} 0
prometheus_sd_failed_configs{name="scrape"} 0
# HELP prometheus_sd_file_read_errors_total The number of File-SD read errors.
# TYPE prometheus_sd_file_read_errors_total counter
prometheus_sd_file_read_errors_total 0
# HELP prometheus_sd_file_scan_duration_seconds The duration of the File-SD scan in seconds.
# TYPE prometheus_sd_file_scan_duration_seconds summary
prometheus_sd_file_scan_duration_seconds{quantile="0.5"} NaN
prometheus_sd_file_scan_duration_seconds{quantile="0.9"} NaN
prometheus_sd_file_scan_duration_seconds{quantile="0.99"} NaN
prometheus_sd_file_scan_duration_seconds_sum 0
prometheus_sd_file_scan_duration_seconds_count 0
# HELP prometheus_sd_kubernetes_events_total The number of Kubernetes events handled.
# TYPE prometheus_sd_kubernetes_events_total counter
prometheus_sd_kubernetes_events_total{event="add",role="endpoints"} 48
prometheus_sd_kubernetes_events_total{event="add",role="endpointslice"} 0
prometheus_sd_kubernetes_events_total{event="add",role="ingress"} 0
prometheus_sd_kubernetes_events_total{event="add",role="node"} 0
prometheus_sd_kubernetes_events_total{event="add",role="pod"} 0
prometheus_sd_kubernetes_events_total{event="add",role="service"} 48
prometheus_sd_kubernetes_events_total{event="delete",role="endpoints"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="endpointslice"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="ingress"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="node"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="pod"} 0
prometheus_sd_kubernetes_events_total{event="delete",role="service"} 0
prometheus_sd_kubernetes_events_total{event="update",role="endpoints"} 1003
prometheus_sd_kubernetes_events_total{event="update",role="endpointslice"} 0
prometheus_sd_kubernetes_events_total{event="update",role="ingress"} 0
prometheus_sd_kubernetes_events_total{event="update",role="node"} 0
prometheus_sd_kubernetes_events_total{event="update",role="pod"} 0
prometheus_sd_kubernetes_events_total{event="update",role="service"} 851
# HELP prometheus_sd_kubernetes_http_request_duration_seconds Summary of latencies for HTTP requests to the Kubernetes API by endpoint.
# TYPE prometheus_sd_kubernetes_http_request_duration_seconds summary
prometheus_sd_kubernetes_http_request_duration_seconds_sum{endpoint="/api/v1/namespaces/%7Bnamespace%7D/endpoints"} 121.74581206600004
prometheus_sd_kubernetes_http_request_duration_seconds_count{endpoint="/api/v1/namespaces/%7Bnamespace%7D/endpoints"} 40
prometheus_sd_kubernetes_http_request_duration_seconds_sum{endpoint="/api/v1/namespaces/%7Bnamespace%7D/pods"} 41.886705843
prometheus_sd_kubernetes_http_request_duration_seconds_count{endpoint="/api/v1/namespaces/%7Bnamespace%7D/pods"} 44
prometheus_sd_kubernetes_http_request_duration_seconds_sum{endpoint="/api/v1/namespaces/%7Bnamespace%7D/services"} 0.23398366799999998
prometheus_sd_kubernetes_http_request_duration_seconds_count{endpoint="/api/v1/namespaces/%7Bnamespace%7D/services"} 23
# HELP prometheus_sd_kubernetes_http_request_total Total number of HTTP requests to the Kubernetes API by status code.
# TYPE prometheus_sd_kubernetes_http_request_total counter
prometheus_sd_kubernetes_http_request_total{status_code="200"} 820
# HELP prometheus_sd_kubernetes_workqueue_depth Current depth of the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_depth gauge
prometheus_sd_kubernetes_workqueue_depth{queue_name="endpoints"} 24
# HELP prometheus_sd_kubernetes_workqueue_items_total Total number of items added to the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_items_total counter
prometheus_sd_kubernetes_workqueue_items_total{queue_name="endpoints"} 1831
# HELP prometheus_sd_kubernetes_workqueue_latency_seconds How long an item stays in the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_latency_seconds summary
prometheus_sd_kubernetes_workqueue_latency_seconds_sum{queue_name="endpoints"} 3.3407492769999902
prometheus_sd_kubernetes_workqueue_latency_seconds_count{queue_name="endpoints"} 1807
# HELP prometheus_sd_kubernetes_workqueue_longest_running_processor_seconds Duration of the longest running processor in the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_longest_running_processor_seconds gauge
prometheus_sd_kubernetes_workqueue_longest_running_processor_seconds{queue_name="endpoints"} 0
# HELP prometheus_sd_kubernetes_workqueue_unfinished_work_seconds How long an item has remained unfinished in the work queue.
# TYPE prometheus_sd_kubernetes_workqueue_unfinished_work_seconds gauge
prometheus_sd_kubernetes_workqueue_unfinished_work_seconds{queue_name="endpoints"} 0
# HELP prometheus_sd_kubernetes_workqueue_work_duration_seconds How long processing an item from the work queue takes.
# TYPE prometheus_sd_kubernetes_workqueue_work_duration_seconds summary
prometheus_sd_kubernetes_workqueue_work_duration_seconds_sum{queue_name="endpoints"} 0.28262065699999955
prometheus_sd_kubernetes_workqueue_work_duration_seconds_count{queue_name="endpoints"} 1807
# HELP prometheus_sd_received_updates_total Total number of update events received from the SD providers.
# TYPE prometheus_sd_received_updates_total counter
prometheus_sd_received_updates_total{name="notify"} 773
prometheus_sd_received_updates_total{name="scrape"} 1034
# HELP prometheus_sd_updates_delayed_total Total number of update events that couldn't be sent immediately.
# TYPE prometheus_sd_updates_delayed_total counter
prometheus_sd_updates_delayed_total{name="notify"} 11
# HELP prometheus_sd_updates_total Total number of update events sent to the SD consumers.
# TYPE prometheus_sd_updates_total counter
prometheus_sd_updates_total{name="notify"} 77
prometheus_sd_updates_total{name="scrape"} 118
# HELP prometheus_target_interval_length_seconds Actual intervals between scrapes.
# TYPE prometheus_target_interval_length_seconds summary
prometheus_target_interval_length_seconds{interval="15s",quantile="0.01"} 14.998068585
prometheus_target_interval_length_seconds{interval="15s",quantile="0.05"} 14.998389585
prometheus_target_interval_length_seconds{interval="15s",quantile="0.5"} 15.000114951
prometheus_target_interval_length_seconds{interval="15s",quantile="0.9"} 15.001035609
prometheus_target_interval_length_seconds{interval="15s",quantile="0.99"} 15.002019479
prometheus_target_interval_length_seconds_sum{interval="15s"} 84120.07245244607
prometheus_target_interval_length_seconds_count{interval="15s"} 5606
prometheus_target_interval_length_seconds{interval="30s",quantile="0.01"} 29.998144878
prometheus_target_interval_length_seconds{interval="30s",quantile="0.05"} 29.998508468
prometheus_target_interval_length_seconds{interval="30s",quantile="0.5"} 30.000040285
prometheus_target_interval_length_seconds{interval="30s",quantile="0.9"} 30.001209116
prometheus_target_interval_length_seconds{interval="30s",quantile="0.99"} 30.00179937
prometheus_target_interval_length_seconds_sum{interval="30s"} 483375.48618420045
prometheus_target_interval_length_seconds_count{interval="30s"} 16112
# HELP prometheus_target_metadata_cache_bytes The number of bytes that are currently used for storing metric metadata in the cache
# TYPE prometheus_target_metadata_cache_bytes gauge
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 15804
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 2108
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/coredns/0"} 0
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/grafana/0"} 3879
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 25597
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 12146
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 1964
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kubelet/0"} 20265
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kubelet/1"} 8835
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/kubelet/2"} 504
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 45530
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 7366
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 20502
prometheus_target_metadata_cache_bytes{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 3207
# HELP prometheus_target_metadata_cache_entries Total number of metric metadata entries in the cache
# TYPE prometheus_target_metadata_cache_entries gauge
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 285
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 40
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/coredns/0"} 0
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/grafana/0"} 84
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 328
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 217
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 38
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kubelet/0"} 287
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kubelet/1"} 174
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/kubelet/2"} 6
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 935
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 122
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 352
prometheus_target_metadata_cache_entries{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 56
# HELP prometheus_target_scrape_pool_exceeded_target_limit_total Total number of times scrape pools hit the target limit, during sync or config reload.
# TYPE prometheus_target_scrape_pool_exceeded_target_limit_total counter
prometheus_target_scrape_pool_exceeded_target_limit_total 0
# HELP prometheus_target_scrape_pool_reloads_failed_total Total number of failed scrape pool reloads.
# TYPE prometheus_target_scrape_pool_reloads_failed_total counter
prometheus_target_scrape_pool_reloads_failed_total 0
# HELP prometheus_target_scrape_pool_reloads_total Total number of scrape pool reloads.
# TYPE prometheus_target_scrape_pool_reloads_total counter
prometheus_target_scrape_pool_reloads_total 0
# HELP prometheus_target_scrape_pool_sync_total Total number of syncs that were executed on a scrape pool.
# TYPE prometheus_target_scrape_pool_sync_total counter
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/coredns/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/grafana/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kubelet/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kubelet/1"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/kubelet/2"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 118
prometheus_target_scrape_pool_sync_total{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 118
# HELP prometheus_target_scrape_pool_targets Current number of targets in this scrape pool.
# TYPE prometheus_target_scrape_pool_targets gauge
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/coredns/0"} 0
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/grafana/0"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 1
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kubelet/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kubelet/1"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/kubelet/2"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 3
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 2
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 2
prometheus_target_scrape_pool_targets{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 1
# HELP prometheus_target_scrape_pools_failed_total Total number of scrape pool creations that failed.
# TYPE prometheus_target_scrape_pools_failed_total counter
prometheus_target_scrape_pools_failed_total 0
# HELP prometheus_target_scrape_pools_total Total number of scrape pool creation attempts.
# TYPE prometheus_target_scrape_pools_total counter
prometheus_target_scrape_pools_total 16
# HELP prometheus_target_scrapes_cache_flush_forced_total How many times a scrape cache was flushed due to getting big while scrapes are failing.
# TYPE prometheus_target_scrapes_cache_flush_forced_total counter
prometheus_target_scrapes_cache_flush_forced_total 0
# HELP prometheus_target_scrapes_exceeded_sample_limit_total Total number of scrapes that hit the sample limit and were rejected.
# TYPE prometheus_target_scrapes_exceeded_sample_limit_total counter
prometheus_target_scrapes_exceeded_sample_limit_total 0
# HELP prometheus_target_scrapes_exemplar_out_of_order_total Total number of exemplar rejected due to not being out of the expected order.
# TYPE prometheus_target_scrapes_exemplar_out_of_order_total counter
prometheus_target_scrapes_exemplar_out_of_order_total 0
# HELP prometheus_target_scrapes_sample_duplicate_timestamp_total Total number of samples rejected due to duplicate timestamps but different values.
# TYPE prometheus_target_scrapes_sample_duplicate_timestamp_total counter
prometheus_target_scrapes_sample_duplicate_timestamp_total 0
# HELP prometheus_target_scrapes_sample_out_of_bounds_total Total number of samples rejected due to timestamp falling outside of the time bounds.
# TYPE prometheus_target_scrapes_sample_out_of_bounds_total counter
prometheus_target_scrapes_sample_out_of_bounds_total 0
# HELP prometheus_target_scrapes_sample_out_of_order_total Total number of samples rejected due to not being out of the expected order.
# TYPE prometheus_target_scrapes_sample_out_of_order_total counter
prometheus_target_scrapes_sample_out_of_order_total 0
# HELP prometheus_target_sync_length_seconds Actual interval to sync the scrape pool.
# TYPE prometheus_target_sync_length_seconds summary
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.01"} 0.001180311
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.05"} 0.001180311
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.5"} 0.001323402
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.9"} 0.002732891
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/alertmanager/0",quantile="0.99"} 0.002732891
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 0.31129993499999997
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/alertmanager/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.01"} 0.001118503
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.05"} 0.001118503
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.5"} 0.001288263
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.9"} 0.003551095
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0",quantile="0.99"} 0.003551095
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 0.35547638200000015
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/blackbox-exporter/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.01"} 0.00013841
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.05"} 0.00013841
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.5"} 0.00017262
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.9"} 0.000504102
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/coredns/0",quantile="0.99"} 0.000504102
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/coredns/0"} 0.03321076300000001
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/coredns/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.01"} 0.001157473
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.05"} 0.001157473
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.5"} 0.001196569
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.9"} 0.009884538
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/grafana/0",quantile="0.99"} 0.009884538
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/grafana/0"} 0.35717074400000015
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/grafana/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.01"} 0.000124401
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.05"} 0.000124401
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.5"} 0.00019638
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.9"} 0.000301557
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-apiserver/0",quantile="0.99"} 0.000301557
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 0.024725554999999993
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-apiserver/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.01"} 0.000182843
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.05"} 0.000182843
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.5"} 0.000222847
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.9"} 0.000363633
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0",quantile="0.99"} 0.000363633
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 0.10367830199999997
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-controller-manager/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.01"} 0.000162849
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.05"} 0.000162849
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.5"} 0.000171137
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.9"} 0.001299137
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-scheduler/0",quantile="0.99"} 0.001299137
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 0.027109514999999994
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-scheduler/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.01"} 0.001387123
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.05"} 0.001387123
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.5"} 0.00143535
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.9"} 0.003176553
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0",quantile="0.99"} 0.003176553
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 0.24503347199999986
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-state-metrics/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.01"} 0.00112645
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.05"} 0.00112645
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.5"} 0.001250423
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.9"} 0.004949593
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1",quantile="0.99"} 0.004949593
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 0.3167213809999998
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kube-state-metrics/1"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.01"} 0.000263929
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.05"} 0.000263929
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.5"} 0.000290283
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.9"} 0.000508372
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/0",quantile="0.99"} 0.000508372
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kubelet/0"} 0.1427956660000001
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kubelet/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.01"} 0.000234882
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.05"} 0.000234882
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.5"} 0.000332774
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.9"} 0.001499824
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/1",quantile="0.99"} 0.001499824
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kubelet/1"} 0.13505725500000001
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kubelet/1"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.01"} 0.000241385
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.05"} 0.000241385
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.5"} 0.000358913
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.9"} 0.000496108
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/kubelet/2",quantile="0.99"} 0.000496108
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/kubelet/2"} 0.1453281079999999
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/kubelet/2"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.01"} 0.001439707
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.05"} 0.001439707
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.5"} 0.001480579
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.9"} 0.003830324
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/node-exporter/0",quantile="0.99"} 0.003830324
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 0.3816530239999999
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/node-exporter/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.01"} 0.001664772
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.05"} 0.001664772
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.5"} 0.001780822
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.9"} 0.007717476
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0",quantile="0.99"} 0.007717476
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 0.33285981600000003
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/prometheus-adapter/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.01"} 0.001094397
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.05"} 0.001094397
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.5"} 0.001297703
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.9"} 0.002738727
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0",quantile="0.99"} 0.002738727
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 0.34355014900000014
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/prometheus-k8s/0"} 118
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.01"} 0.00107138
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.05"} 0.00107138
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.5"} 0.001113514
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.9"} 0.002571519
prometheus_target_sync_length_seconds{scrape_job="serviceMonitor/monitoring/prometheus-operator/0",quantile="0.99"} 0.002571519
prometheus_target_sync_length_seconds_sum{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 0.233227486
prometheus_target_sync_length_seconds_count{scrape_job="serviceMonitor/monitoring/prometheus-operator/0"} 118
# HELP prometheus_template_text_expansion_failures_total The total number of template text expansion failures.
# TYPE prometheus_template_text_expansion_failures_total counter
prometheus_template_text_expansion_failures_total 0
# HELP prometheus_template_text_expansions_total The total number of template text expansions.
# TYPE prometheus_template_text_expansions_total counter
prometheus_template_text_expansions_total 49562
# HELP prometheus_treecache_watcher_goroutines The current number of watcher goroutines.
# TYPE prometheus_treecache_watcher_goroutines gauge
prometheus_treecache_watcher_goroutines 0
# HELP prometheus_treecache_zookeeper_failures_total The total number of ZooKeeper failures.
# TYPE prometheus_treecache_zookeeper_failures_total counter
prometheus_treecache_zookeeper_failures_total 0
# HELP prometheus_tsdb_blocks_loaded Number of currently loaded data blocks
# TYPE prometheus_tsdb_blocks_loaded gauge
prometheus_tsdb_blocks_loaded 6
# HELP prometheus_tsdb_checkpoint_creations_failed_total Total number of checkpoint creations that failed.
# TYPE prometheus_tsdb_checkpoint_creations_failed_total counter
prometheus_tsdb_checkpoint_creations_failed_total 0
# HELP prometheus_tsdb_checkpoint_creations_total Total number of checkpoint creations attempted.
# TYPE prometheus_tsdb_checkpoint_creations_total counter
prometheus_tsdb_checkpoint_creations_total 2
# HELP prometheus_tsdb_checkpoint_deletions_failed_total Total number of checkpoint deletions that failed.
# TYPE prometheus_tsdb_checkpoint_deletions_failed_total counter
prometheus_tsdb_checkpoint_deletions_failed_total 0
# HELP prometheus_tsdb_checkpoint_deletions_total Total number of checkpoint deletions attempted.
# TYPE prometheus_tsdb_checkpoint_deletions_total counter
prometheus_tsdb_checkpoint_deletions_total 2
# HELP prometheus_tsdb_compaction_chunk_range_seconds Final time range of chunks on their first compaction
# TYPE prometheus_tsdb_compaction_chunk_range_seconds histogram
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="100"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="400"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="1600"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="6400"} 63
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="25600"} 77
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="102400"} 659
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="409600"} 1342
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="1.6384e+06"} 63035
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="6.5536e+06"} 445464
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="2.62144e+07"} 445559
prometheus_tsdb_compaction_chunk_range_seconds_bucket{le="+Inf"} 445559
prometheus_tsdb_compaction_chunk_range_seconds_sum 1.241090818295e+12
prometheus_tsdb_compaction_chunk_range_seconds_count 445559
# HELP prometheus_tsdb_compaction_chunk_samples Final number of samples on their first compaction
# TYPE prometheus_tsdb_compaction_chunk_samples histogram
prometheus_tsdb_compaction_chunk_samples_bucket{le="4"} 680
prometheus_tsdb_compaction_chunk_samples_bucket{le="6"} 812
prometheus_tsdb_compaction_chunk_samples_bucket{le="9"} 872
prometheus_tsdb_compaction_chunk_samples_bucket{le="13.5"} 1420
prometheus_tsdb_compaction_chunk_samples_bucket{le="20.25"} 54207
prometheus_tsdb_compaction_chunk_samples_bucket{le="30.375"} 56268
prometheus_tsdb_compaction_chunk_samples_bucket{le="45.5625"} 57798
prometheus_tsdb_compaction_chunk_samples_bucket{le="68.34375"} 60661
prometheus_tsdb_compaction_chunk_samples_bucket{le="102.515625"} 118251
prometheus_tsdb_compaction_chunk_samples_bucket{le="153.7734375"} 442730
prometheus_tsdb_compaction_chunk_samples_bucket{le="230.66015625"} 445559
prometheus_tsdb_compaction_chunk_samples_bucket{le="345.990234375"} 445559
prometheus_tsdb_compaction_chunk_samples_bucket{le="+Inf"} 445559
prometheus_tsdb_compaction_chunk_samples_sum 4.4569207e+07
prometheus_tsdb_compaction_chunk_samples_count 445559
# HELP prometheus_tsdb_compaction_chunk_size_bytes Final size of chunks on their first compaction
# TYPE prometheus_tsdb_compaction_chunk_size_bytes histogram
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="32"} 37379
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="48"} 53055
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="72"} 162893
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="108"} 278966
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="162"} 358962
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="243"} 388819
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="364.5"} 407643
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="546.75"} 426213
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="820.125"} 433195
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="1230.1875"} 445144
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="1845.28125"} 445559
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="2767.921875"} 445559
prometheus_tsdb_compaction_chunk_size_bytes_bucket{le="+Inf"} 445559
prometheus_tsdb_compaction_chunk_size_bytes_sum 6.5486443e+07
prometheus_tsdb_compaction_chunk_size_bytes_count 445559
# HELP prometheus_tsdb_compaction_duration_seconds Duration of compaction runs
# TYPE prometheus_tsdb_compaction_duration_seconds histogram
prometheus_tsdb_compaction_duration_seconds_bucket{le="1"} 1
prometheus_tsdb_compaction_duration_seconds_bucket{le="2"} 3
prometheus_tsdb_compaction_duration_seconds_bucket{le="4"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="8"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="16"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="32"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="64"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="128"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="256"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="512"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="1024"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="2048"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="4096"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="8192"} 4
prometheus_tsdb_compaction_duration_seconds_bucket{le="+Inf"} 4
prometheus_tsdb_compaction_duration_seconds_sum 5.190013199000001
prometheus_tsdb_compaction_duration_seconds_count 4
# HELP prometheus_tsdb_compaction_populating_block Set to 1 when a block is currently being written to the disk.
# TYPE prometheus_tsdb_compaction_populating_block gauge
prometheus_tsdb_compaction_populating_block 0
# HELP prometheus_tsdb_compactions_failed_total Total number of compactions that failed for the partition.
# TYPE prometheus_tsdb_compactions_failed_total counter
prometheus_tsdb_compactions_failed_total 0
# HELP prometheus_tsdb_compactions_skipped_total Total number of skipped compactions due to disabled auto compaction.
# TYPE prometheus_tsdb_compactions_skipped_total counter
prometheus_tsdb_compactions_skipped_total 0
# HELP prometheus_tsdb_compactions_total Total number of compactions that were executed for the partition.
# TYPE prometheus_tsdb_compactions_total counter
prometheus_tsdb_compactions_total 4
# HELP prometheus_tsdb_compactions_triggered_total Total number of triggered compactions for the partition.
# TYPE prometheus_tsdb_compactions_triggered_total counter
prometheus_tsdb_compactions_triggered_total 357
# HELP prometheus_tsdb_data_replay_duration_seconds Time taken to replay the data on disk.
# TYPE prometheus_tsdb_data_replay_duration_seconds gauge
prometheus_tsdb_data_replay_duration_seconds 12.107785783
# HELP prometheus_tsdb_head_active_appenders Number of currently active appender transactions
# TYPE prometheus_tsdb_head_active_appenders gauge
prometheus_tsdb_head_active_appenders 0
# HELP prometheus_tsdb_head_chunks Total number of chunks in the head block.
# TYPE prometheus_tsdb_head_chunks gauge
prometheus_tsdb_head_chunks 153944
# HELP prometheus_tsdb_head_chunks_created_total Total number of chunks created in the head
# TYPE prometheus_tsdb_head_chunks_created_total counter
prometheus_tsdb_head_chunks_created_total 599503
# HELP prometheus_tsdb_head_chunks_removed_total Total number of chunks removed in the head
# TYPE prometheus_tsdb_head_chunks_removed_total counter
prometheus_tsdb_head_chunks_removed_total 445559
# HELP prometheus_tsdb_head_gc_duration_seconds Runtime of garbage collection in the head block.
# TYPE prometheus_tsdb_head_gc_duration_seconds summary
prometheus_tsdb_head_gc_duration_seconds_sum 0.294262921
prometheus_tsdb_head_gc_duration_seconds_count 4
# HELP prometheus_tsdb_head_max_time Maximum timestamp of the head block. The unit is decided by the library consumer.
# TYPE prometheus_tsdb_head_max_time gauge
prometheus_tsdb_head_max_time 1.626250946508e+12
# HELP prometheus_tsdb_head_max_time_seconds Maximum timestamp of the head block.
# TYPE prometheus_tsdb_head_max_time_seconds gauge
prometheus_tsdb_head_max_time_seconds 1.626250946e+09
# HELP prometheus_tsdb_head_min_time Minimum time bound of the head block. The unit is decided by the library consumer.
# TYPE prometheus_tsdb_head_min_time gauge
prometheus_tsdb_head_min_time 1.626242402664e+12
# HELP prometheus_tsdb_head_min_time_seconds Minimum time bound of the head block.
# TYPE prometheus_tsdb_head_min_time_seconds gauge
prometheus_tsdb_head_min_time_seconds 1.626242402e+09
# HELP prometheus_tsdb_head_samples_appended_total Total number of appended samples.
# TYPE prometheus_tsdb_head_samples_appended_total counter
prometheus_tsdb_head_samples_appended_total 4.855752e+07
# HELP prometheus_tsdb_head_series Total number of series in the head block.
# TYPE prometheus_tsdb_head_series gauge
prometheus_tsdb_head_series 73035
# HELP prometheus_tsdb_head_series_created_total Total number of series created in the head
# TYPE prometheus_tsdb_head_series_created_total counter
prometheus_tsdb_head_series_created_total 226914
# HELP prometheus_tsdb_head_series_not_found_total Total number of requests for series that were not found.
# TYPE prometheus_tsdb_head_series_not_found_total counter
prometheus_tsdb_head_series_not_found_total 0
# HELP prometheus_tsdb_head_series_removed_total Total number of series removed in the head
# TYPE prometheus_tsdb_head_series_removed_total counter
prometheus_tsdb_head_series_removed_total 153879
# HELP prometheus_tsdb_head_truncations_failed_total Total number of head truncations that failed.
# TYPE prometheus_tsdb_head_truncations_failed_total counter
prometheus_tsdb_head_truncations_failed_total 0
# HELP prometheus_tsdb_head_truncations_total Total number of head truncations attempted.
# TYPE prometheus_tsdb_head_truncations_total counter
prometheus_tsdb_head_truncations_total 4
# HELP prometheus_tsdb_isolation_high_watermark The highest TSDB append ID that has been given out.
# TYPE prometheus_tsdb_isolation_high_watermark gauge
prometheus_tsdb_isolation_high_watermark 144550
# HELP prometheus_tsdb_isolation_low_watermark The lowest TSDB append ID that is still referenced.
# TYPE prometheus_tsdb_isolation_low_watermark gauge
prometheus_tsdb_isolation_low_watermark 144550
# HELP prometheus_tsdb_lowest_timestamp Lowest timestamp value stored in the database. The unit is decided by the library consumer.
# TYPE prometheus_tsdb_lowest_timestamp gauge
prometheus_tsdb_lowest_timestamp 1.6261488e+12
# HELP prometheus_tsdb_lowest_timestamp_seconds Lowest timestamp value stored in the database.
# TYPE prometheus_tsdb_lowest_timestamp_seconds gauge
prometheus_tsdb_lowest_timestamp_seconds 1.6261488e+09
# HELP prometheus_tsdb_mmap_chunk_corruptions_total Total number of memory-mapped chunk corruptions.
# TYPE prometheus_tsdb_mmap_chunk_corruptions_total counter
prometheus_tsdb_mmap_chunk_corruptions_total 0
# HELP prometheus_tsdb_out_of_bound_samples_total Total number of out of bound samples ingestion failed attempts.
# TYPE prometheus_tsdb_out_of_bound_samples_total counter
prometheus_tsdb_out_of_bound_samples_total 0
# HELP prometheus_tsdb_out_of_order_exemplars_total Total number of out of order exemplars ingestion failed attempts.
# TYPE prometheus_tsdb_out_of_order_exemplars_total counter
prometheus_tsdb_out_of_order_exemplars_total 0
# HELP prometheus_tsdb_out_of_order_samples_total Total number of out of order samples ingestion failed attempts.
# TYPE prometheus_tsdb_out_of_order_samples_total counter
prometheus_tsdb_out_of_order_samples_total 14390
# HELP prometheus_tsdb_reloads_failures_total Number of times the database failed to reloadBlocks block data from disk.
# TYPE prometheus_tsdb_reloads_failures_total counter
prometheus_tsdb_reloads_failures_total 0
# HELP prometheus_tsdb_reloads_total Number of times the database reloaded block data from disk.
# TYPE prometheus_tsdb_reloads_total counter
prometheus_tsdb_reloads_total 354
# HELP prometheus_tsdb_retention_limit_bytes Max number of bytes to be retained in the tsdb blocks, configured 0 means disabled
# TYPE prometheus_tsdb_retention_limit_bytes gauge
prometheus_tsdb_retention_limit_bytes 0
# HELP prometheus_tsdb_size_retentions_total The number of times that blocks were deleted because the maximum number of bytes was exceeded.
# TYPE prometheus_tsdb_size_retentions_total counter
prometheus_tsdb_size_retentions_total 0
# HELP prometheus_tsdb_storage_blocks_bytes The number of bytes that are currently used for local storage by all blocks.
# TYPE prometheus_tsdb_storage_blocks_bytes gauge
prometheus_tsdb_storage_blocks_bytes 1.73152149e+08
# HELP prometheus_tsdb_symbol_table_size_bytes Size of symbol table in memory for loaded blocks
# TYPE prometheus_tsdb_symbol_table_size_bytes gauge
prometheus_tsdb_symbol_table_size_bytes 4864
# HELP prometheus_tsdb_time_retentions_total The number of times that blocks were deleted because the maximum time limit was exceeded.
# TYPE prometheus_tsdb_time_retentions_total counter
prometheus_tsdb_time_retentions_total 3
# HELP prometheus_tsdb_tombstone_cleanup_seconds The time taken to recompact blocks to remove tombstones.
# TYPE prometheus_tsdb_tombstone_cleanup_seconds histogram
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.005"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.01"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.025"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.05"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.1"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.25"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="0.5"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="1"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="2.5"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="5"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="10"} 0
prometheus_tsdb_tombstone_cleanup_seconds_bucket{le="+Inf"} 0
prometheus_tsdb_tombstone_cleanup_seconds_sum 0
prometheus_tsdb_tombstone_cleanup_seconds_count 0
# HELP prometheus_tsdb_vertical_compactions_total Total number of compactions done on overlapping blocks.
# TYPE prometheus_tsdb_vertical_compactions_total counter
prometheus_tsdb_vertical_compactions_total 0
# HELP prometheus_tsdb_wal_completed_pages_total Total number of completed pages.
# TYPE prometheus_tsdb_wal_completed_pages_total counter
prometheus_tsdb_wal_completed_pages_total 7863
# HELP prometheus_tsdb_wal_corruptions_total Total number of WAL corruptions.
# TYPE prometheus_tsdb_wal_corruptions_total counter
prometheus_tsdb_wal_corruptions_total 0
# HELP prometheus_tsdb_wal_fsync_duration_seconds Duration of WAL fsync.
# TYPE prometheus_tsdb_wal_fsync_duration_seconds summary
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.5"} NaN
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.9"} NaN
prometheus_tsdb_wal_fsync_duration_seconds{quantile="0.99"} NaN
prometheus_tsdb_wal_fsync_duration_seconds_sum 0.002690881
prometheus_tsdb_wal_fsync_duration_seconds_count 4
# HELP prometheus_tsdb_wal_page_flushes_total Total number of page flushes.
# TYPE prometheus_tsdb_wal_page_flushes_total counter
prometheus_tsdb_wal_page_flushes_total 75330
# HELP prometheus_tsdb_wal_segment_current WAL segment index that TSDB is currently writing to.
# TYPE prometheus_tsdb_wal_segment_current gauge
prometheus_tsdb_wal_segment_current 28
# HELP prometheus_tsdb_wal_truncate_duration_seconds Duration of WAL truncation.
# TYPE prometheus_tsdb_wal_truncate_duration_seconds summary
prometheus_tsdb_wal_truncate_duration_seconds_sum 1.919097646
prometheus_tsdb_wal_truncate_duration_seconds_count 2
# HELP prometheus_tsdb_wal_truncations_failed_total Total number of WAL truncations that failed.
# TYPE prometheus_tsdb_wal_truncations_failed_total counter
prometheus_tsdb_wal_truncations_failed_total 0
# HELP prometheus_tsdb_wal_truncations_total Total number of WAL truncations attempted.
# TYPE prometheus_tsdb_wal_truncations_total counter
prometheus_tsdb_wal_truncations_total 2
# HELP prometheus_tsdb_wal_writes_failed_total Total number of WAL writes that failed.
# TYPE prometheus_tsdb_wal_writes_failed_total counter
prometheus_tsdb_wal_writes_failed_total 0
# HELP prometheus_web_federation_errors_total Total number of errors that occurred while sending federation responses.
# TYPE prometheus_web_federation_errors_total counter
prometheus_web_federation_errors_total 0
# HELP prometheus_web_federation_warnings_total Total number of warnings that occurred while sending federation responses.
# TYPE prometheus_web_federation_warnings_total counter
prometheus_web_federation_warnings_total 0
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1398
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
Some of the built-in Histogram information exposed by http://192.168.153.40:30207/metrics

  As the example above shows, a Histogram-type metric provides three kinds of series. Assume the metric name is <basename>.

# HELP prometheus_http_request_duration_seconds Histogram of latencies for HTTP requests.
# TYPE prometheus_http_request_duration_seconds histogram
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.2"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="0.4"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="1"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="3"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="8"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="20"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="60"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="120"} 1
prometheus_http_request_duration_seconds_bucket{handler="/",le="+Inf"} 1
prometheus_http_request_duration_seconds_sum{handler="/"} 2.3757e-05
prometheus_http_request_duration_seconds_count{handler="/"} 1

  The number of samples falling into each bucket, named <basename>_bucket{le="<upper bound>"}. Buckets are cumulative: the value is the count of all samples less than or equal to the upper bound. In the example above, prometheus_http_request_duration_seconds_bucket{handler="/",le="0.1"} 1 means that of the 1 request observed in total, 1 had an HTTP response time <= 0.1s.

  The sum of all sample values, named <basename>_sum. In the example above, prometheus_http_request_duration_seconds_sum{handler="/"} 2.3757e-05 means the total response time of the 1 HTTP request that occurred was 2.3757e-05s.

  The total number of samples, named <basename>_count; its value equals that of <basename>_bucket{le="+Inf"}.
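  To make the bucket mechanics concrete, here is a minimal Python sketch (using hypothetical bucket data, not the output above, and not Prometheus source code) of the linear-interpolation estimate that PromQL's histogram_quantile() function performs over <basename>_bucket series:

```python
def bucket_quantile(q, buckets):
    """Estimate the q-quantile from cumulative histogram buckets.

    buckets: list of (upper_bound, cumulative_count), sorted by bound,
    ending with (float("inf"), total_count), mirroring le="..." semantics.
    """
    total = buckets[-1][1]
    rank = q * total  # rank of the sample we are looking for
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # quantile falls in the +Inf bucket
            if count == prev_count:
                return bound
            # linear interpolation inside [prev_bound, bound]
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return prev_bound

# Hypothetical bucket data in the style of <basename>_bucket{le="..."}:
buckets = [(0.1, 60), (0.2, 90), (0.4, 100), (float("inf"), 100)]
print(round(bucket_quantile(0.5, buckets), 4))  # 0.0833
```

  Because only the bucket counts survive, the result is an estimate: the true median could be anywhere inside the 0.1 bucket, which is why bucket boundaries should be chosen to bracket the values you care about.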

  Dividing the rate of the _sum series by the rate of the _count series yields an average value. For example, the average compaction duration of Prometheus over the past day, aggregated across all instances, can be computed as follows:

sum without(instance) (rate(prometheus_tsdb_compaction_duration_seconds_sum[1d])) / sum without(instance) (rate(prometheus_tsdb_compaction_duration_seconds_count[1d]))
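  Numerically, this division amounts to taking the increase of _sum between two scrapes over the increase of _count. A small Python sketch with made-up sample values:

```python
def avg_from_sum_count(sum_t0, sum_t1, count_t0, count_t1):
    """Average observed value between two scrapes of a histogram/summary."""
    new_observations = count_t1 - count_t0
    if new_observations == 0:
        return float("nan")  # no new observations; rate() would yield nothing useful
    return (sum_t1 - sum_t0) / new_observations

# Hypothetical scrapes: _sum went from 5.19s to 7.89s, _count from 4 to 6:
print(round(avg_from_sum_count(5.19, 7.89, 4, 6), 2))  # 1.35 (seconds per compaction)
```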

  Besides the built-in compaction duration, prometheus_local_storage_series_chunks_persisted, which represents the number of chunks each time series in Prometheus needs to persist, can also be used to compute quantiles over the data waiting to be persisted.

3.4 Summary

  Similar to the Histogram type, a Summary represents the result of sampling data over a period of time (typically request durations or response sizes), but it stores quantiles directly (computed on the client side and then exposed) rather than deriving them from buckets. Consequently, for quantile calculations a Summary performs better when queried with PromQL, while a Histogram consumes more server-side resources. Conversely, on the client side a Histogram is cheaper. Users should choose between the two based on their own actual scenario.


Further reading: similarities and differences between Summary and Histogram

1) Both provide the <basename>_sum and <basename>_count series.

2) A Histogram computes quantiles from <basename>_bucket on the server, whereas a Summary stores the quantile values directly.

3) If you need to aggregate across instances, or to understand the range and distribution of the observed values, use a Histogram; if you only need accurate quantile values and do not care about the range and distribution, use a Summary.
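  The trade-off can be illustrated with a toy sketch (hypothetical classes, not the official client library): a Histogram client only increments a bucket counter on each observation, while a Summary client keeps the samples and computes the quantile itself, so the server just reads a precomputed value:

```python
import bisect

class MiniHistogram:
    def __init__(self, bounds):
        self.bounds = sorted(bounds)
        self.counts = [0] * (len(self.bounds) + 1)  # last slot plays the role of +Inf
    def observe(self, v):
        # cheap on the client: one counter increment per observation
        # (note: real Prometheus buckets are cumulative; per-bucket counts kept here for brevity)
        self.counts[bisect.bisect_left(self.bounds, v)] += 1

class MiniSummary:
    def __init__(self):
        self.values = []
    def observe(self, v):
        self.values.append(v)  # client pays to retain samples
    def quantile(self, q):
        # computed on the client; the server only reads the result
        s = sorted(self.values)
        return s[min(int(q * len(s)), len(s) - 1)]

h, s = MiniHistogram([0.1, 0.2, 0.4]), MiniSummary()
for v in [0.05, 0.15, 0.3, 0.35]:
    h.observe(v)
    s.observe(v)
print(h.counts)         # [1, 1, 2, 0]
print(s.quantile(0.5))  # 0.3
```

  The histogram's per-bucket counts can be summed across instances; the summary's quantile cannot be meaningfully averaged across instances, which is the aggregation limitation noted in point 3.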


4. Aggregation Operators

  In real production environments there are often hundreds or thousands of instances, and users cannot inspect the metrics of every instance one by one. Aggregation operators allow users to aggregate metrics within one application or across several applications, combining the sample data returned by an instant vector expression into a new time series with fewer samples. Aggregation operators only work on instant vectors, and their output is also an instant vector.

# Total number of HTTP requests across the whole system
sum(http_requests_total)

# Average CPU time per host, grouped by mode
avg(node_cpu) by (mode)

# CPU usage of each host (share of non-idle CPU time)
sum(irate(node_cpu{mode!='idle'}[5m])) by (instance) / sum(irate(node_cpu[5m])) by (instance)
  Prometheus provides the following aggregation operators:

  • sum (sum over dimensions)
  • min (minimum)
  • max (maximum)
  • avg (average)
  • stddev (standard deviation)
  • stdvar (standard variance)
  • count (number of elements)
  • count_values (number of elements with the same value)
  • bottomk (the k elements with the smallest sample values)
  • topk (the k elements with the largest sample values)
  • quantile (φ-quantile over dimensions)
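  To see what grouping does, here is a small Python sketch (with hypothetical sample data) of the by-label aggregation that operators such as sum and avg perform on an instant vector: the retained labels form the group key, and every other label is dropped.

```python
from collections import defaultdict

def aggregate(samples, by, op):
    """samples: list of (labels_dict, value); keep only the labels named in 'by'."""
    groups = defaultdict(list)
    for labels, value in samples:
        key = tuple((k, labels.get(k)) for k in by)
        groups[key].append(value)
    return {key: op(vals) for key, vals in groups.items()}

# Hypothetical instant vector for a node_cpu-style metric:
samples = [
    ({"instance": "a", "mode": "idle"}, 80.0),
    ({"instance": "a", "mode": "user"}, 15.0),
    ({"instance": "b", "mode": "idle"}, 70.0),
    ({"instance": "b", "mode": "user"}, 25.0),
]
# Like avg(node_cpu) by (mode): the instance label is collapsed away.
print(aggregate(samples, ["mode"], lambda v: sum(v) / len(v)))
```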
posted @ 2021-07-08 14:31  左扬