kubernetes(31): Custom HPA with prometheus-adapter and the custom metrics API

Implementing custom HPA in Kubernetes with prometheus-adapter and custom-metrics-api

Reference: https://blog.51cto.com/juestnow/2413581

1  HPA overview

Horizontal Pod Autoscaling (HPA) is the Kubernetes feature that automatically scales Pods horizontally. Why horizontal rather than vertical? Because autoscaling comes in two flavors:

Horizontal scaling (scale out): increasing or decreasing the number of instances

Vertical scaling (scale up): increasing or decreasing the resources available to a single instance, e.g. adding CPU or memory

 

For more background, see kubernetes(24): metrics-server-based horizontal scaling (resource HPA)

1.1 Resource-based HPA

See kubernetes(24): metrics-server-based horizontal scaling (resource HPA)

Metrics Server is a cluster-wide aggregator of resource usage data and the successor to Heapster. It collects CPU and memory usage for nodes and pods via the kubelet Summary API, and pods are scaled automatically based on that CPU and memory usage. This approach operates purely at the resource level: scaling is computed from resource consumption.
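As a quick illustration, a minimal resource-based HPA manifest looks roughly like this (a sketch only; my-app is a hypothetical Deployment that sets CPU requests):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                        # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80    # scale out when average CPU usage exceeds 80% of requests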

1.2  Custom-metric HPA (e.g. QPS)

Alternatively, you can register a custom API service with the aggregation layer via Prometheus and a custom API server, then configure an HPA against custom metrics exposed by a demo application, for example an HPA based on the number of HTTP requests.

Both Metrics Server and custom-metrics-api can be deployed in several ways; a recommended reference is

https://github.com/stefanprodan/k8s-prom-hpa

Since we already deployed the Prometheus Operator earlier, we will use the custom-metrics-api it provides.

2  Preparation

All of the Prometheus Operator's pods run in the monitoring namespace.

Repository: git clone https://github.com/coreos/kube-prometheus

Prometheus itself has already been deployed.

If not, see Prometheus monitoring k8s (10): PrometheusOperator, a more elegant way to deploy Prometheus

3   Deploy custom-metrics-api

cd kube-prometheus/experimental/custom-metrics-api/
kubectl apply -f custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
kubectl apply -f custom-metrics-apiservice.yaml
kubectl apply -f custom-metrics-cluster-role.yaml
kubectl apply -f custom-metrics-configmap.yaml
kubectl apply -f hpa-custom-metrics-cluster-role-binding.yaml
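
You can verify that the custom metrics APIService was registered with the aggregation layer (a quick sanity check; the output line is illustrative):

kubectl get apiservice | grep custom.metrics
# v1beta1.custom.metrics.k8s.io   monitoring/prometheus-adapter   ...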

 

 

4   Adjust the prometheus-adapter deployment

For Prometheus installation, see:

Prometheus monitoring k8s (2): deploying Prometheus manually

Prometheus monitoring k8s (10): PrometheusOperator, a more elegant way to deploy Prometheus

 

The adapter was already deployed along with Prometheus; we now need to modify it.

 

[root@k8s-master experimental]# kubectl get pods -n monitoring -o wide | grep prometheus-adapter
prometheus-adapter-668748ddbd-9h8g4   1/1     Running   0          19h   10.254.1.247   k8s-node-1   <none>           <none>
[root@k8s-master experimental]#

 

4.1   Organize the prometheus-adapter YAML

[root@k8s-master kube-prometheus]# cd -
/root/prometheus/kube-prometheus/experimental
[root@k8s-master experimental]# cd ../manifests/
[root@k8s-master manifests]# mkdir prometheus-adapter
[root@k8s-master manifests]# mv prometheus-adapter*.yaml prometheus-adapter
[root@k8s-master manifests]# cd prometheus-adapter
[root@k8s-master prometheus-adapter]# ls
prometheus-adapter-apiService.yaml                          prometheus-adapter-configMap.yaml
prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-adapter-deployment.yaml
prometheus-adapter-clusterRoleBindingDelegator.yaml         prometheus-adapter-roleBindingAuthReader.yaml
prometheus-adapter-clusterRoleBinding.yaml                  prometheus-adapter-serviceAccount.yaml
prometheus-adapter-clusterRoleServerResources.yaml          prometheus-adapter-service.yaml
prometheus-adapter-clusterRole.yaml
[root@k8s-master prometheus-adapter]#
[root@k8s-master prometheus-adapter]# kubectl delete -f  .
clusterrole.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrolebinding.rbac.authorization.k8s.io "resource-metrics:system:auth-delegator" deleted
clusterrole.rbac.authorization.k8s.io "resource-metrics-server-resources" deleted
configmap "adapter-config" deleted
deployment.apps "prometheus-adapter" deleted
rolebinding.rbac.authorization.k8s.io "resource-metrics-auth-reader" deleted
service "prometheus-adapter" deleted
serviceaccount "prometheus-adapter" deleted
[root@k8s-master prometheus-adapter]# mv prometheus-adapter-configMap.yaml /tmp/
[root@k8s-master prometheus-adapter]#
Note: the custom-metrics-api manifests already created a ConfigMap with the same name (adapter-config), and it must not be overwritten, so delete the adapter's own ConfigMap file or move it out of the way.
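
To confirm that the ConfigMap applied from custom-metrics-configmap.yaml is still the one in place:

kubectl get configmap adapter-config -n monitoring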

 

 

4.2   Generate the TLS certificate required by the prometheus-adapter

### Create the secret

kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key  -n monitoring
kubectl get secret -n monitoring | grep volume-serving-cert
kubectl get secret volume-serving-cert -n monitoring -o yaml

 

 

I installed Kubernetes with kubeadm, so an API server certificate already exists:

[root@k8s-master ~]# cd /etc/kubernetes/
manifests/ pki/
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt              apiserver.key                 ca.crt  front-proxy-ca.crt      front-proxy-client.key  serving.crt  wx.crt
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.key      sa.key                  serving.csr  wx.csr
apiserver-etcd-client.key  apiserver-kubelet-client.key  etcd    front-proxy-client.crt  sa.pub                  serving.key  wx.key
[root@k8s-master pki]#
[root@k8s-master pki]# kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key  -n monitoring
secret/volume-serving-cert created
[root@k8s-master pki]# kubectl get secret -n monitoring | grep volume-serving-cert
volume-serving-cert               Opaque                                2      6s
[root@k8s-master pki]# kubectl get secret volume-serving-cert -n monitoring volume-serving-cert -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
apiserver.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURXakNDQWtLZ0F3SUJBZ0lJYnQ4MS9hSW8xRGN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RBNE1qa3dNVEl3TXpGYUZ3MHlNREE0TWpnd01USXdNekZhTUJreApGekFWQmdOVkJBTVREbXQxWW1VdFlYQnBjMlZ5ZG1WeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBCk1JSUJDZ0tDQVFFQXhzVGlsV0ZrSi82RnpRV21RNjA0NU9TYjRXcEowWGcwSDc3bW51dCtmOUVzRlRSQkMwQWcKTnBka09sZVN4aUt6Mi9GYXh2dndVZGtXaHBlY1hlT2xFbHM0VXlRanlpS
…..

 

 

If you don't have a certificate, generate one as follows (for reference):

# Install the CFSSL tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH

cd /etc/kubernetes/pki
cat << EOF | tee apiserver.json
{
  "CN": "apiserver",
  "hosts": [""], 
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "wx",
      "OU": "wx"
    }
  ]
}
EOF

### Generate the certificate
# The CA paths below are specific to your own environment; the CSR JSON is the one created above
cfssl gencert -ca=/apps/work/k8s/cfssl/pki/k8s/k8s-ca.pem -ca-key=/apps/work/k8s/cfssl/pki/k8s/k8s-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes ./apiserver.json | cfssljson -bare ./apiserver
### Rename the certificates
mv apiserver-key.pem apiserver.key
mv apiserver.pem apiserver.crt
### Create the secret
kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key  -n monitoring
kubectl get secret -n monitoring | grep volume-serving-cert
kubectl get secret volume-serving-cert -n monitoring -o yaml
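
If you would rather avoid CFSSL entirely, a self-signed serving certificate generated with openssl should also do for testing (an untested alternative sketch; the CN is a placeholder):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout apiserver.key -out apiserver.crt \
  -subj "/CN=prometheus-adapter.monitoring.svc"
kubectl create secret generic volume-serving-cert --from-file=apiserver.crt --from-file=apiserver.key -n monitoring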

 

4.3   Apply the prometheus-adapter YAML
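
Note that the adapter deployment is expected to mount the volume-serving-cert secret created above, along the lines of the fragment below (an assumption to verify against prometheus-adapter-deployment.yaml in your checkout; mount paths may differ):

        volumeMounts:
        - name: volume-serving-cert
          mountPath: /var/run/serving-cert
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: volume-serving-cert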

 

[root@k8s-master pki]# cd /root/prometheus/kube-prometheus/manifests/prometheus-adapter/
[root@k8s-master prometheus-adapter]# ls
prometheus-adapter-apiService.yaml                          prometheus-adapter-clusterRole.yaml
prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-adapter-deployment.yaml
prometheus-adapter-clusterRoleBindingDelegator.yaml         prometheus-adapter-roleBindingAuthReader.yaml
prometheus-adapter-clusterRoleBinding.yaml                  prometheus-adapter-serviceAccount.yaml
prometheus-adapter-clusterRoleServerResources.yaml          prometheus-adapter-service.yaml
[root@k8s-master prometheus-adapter]# kubectl apply -f .
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
[root@k8s-master prometheus-adapter]#

 

4.4 Verify that the prometheus-adapter deployment works

[root@k8s-master prometheus-adapter]# kubectl get pods -n monitoring -o wide | grep prometheus-adapter
prometheus-adapter-668748ddbd-d9hxz   1/1     Running   0          6m55s   10.254.1.3     k8s-node-1   <none>           <none>
[root@k8s-master prometheus-adapter]# kubectl get service -n monitoring  | grep prometheus-adapter
prometheus-adapter      ClusterIP   10.98.19.67      <none>        443/TCP                      7m
[root@k8s-master prometheus-adapter]# kubectl get --raw "/apis/custom.metrics.k8s.io" | jq .
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "custom.metrics.k8s.io",
  "versions": [
    {
      "groupVersion": "custom.metrics.k8s.io/v1beta1",
      "version": "v1beta1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "custom.metrics.k8s.io/v1beta1",
    "version": "v1beta1"
  }
}
[root@k8s-master prometheus-adapter]#

 

 

List the custom metrics provided by Prometheus:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "nodes/kubelet_pleg_relist_duration_seconds_count",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "jobs.batch/node_memory_Active_file_bytes",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/node_memory_PageTables_bytes",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
…..
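
The full list is long, so a jq filter is handy for checking whether a particular metric is exposed (a convenience sketch):

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq '.resources[].name' | grep http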

 

 

Get the filesystem usage of all pods in the monitoring namespace:

 

[root@k8s-master prometheus-adapter]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/fs_usage_bytes" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/fs_usage_bytes"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "alertmanager-main-0",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "alertmanager-main-1",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "alertmanager-main-2",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "grafana-57bfdd47f8-d9fns",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "kube-state-metrics-ff5cb7949-lh945",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "node-exporter-9mhxs",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "node-exporter-csqzm",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "node-exporter-xc8tb",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-adapter-668748ddbd-d9hxz",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-k8s-0",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-k8s-1",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-operator-55b978b89-cgczz",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "redis-6dc489fd96-77np5",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-10-09T04:52:11Z",
      "value": "0"
    }
  ]
}

 

If this data comes back, the prometheus-adapter deployment is working.
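
If the queries fail instead, the adapter's logs and the APIService status are the first places to look (generic troubleshooting steps):

kubectl logs -n monitoring deploy/prometheus-adapter
kubectl describe apiservice v1beta1.custom.metrics.k8s.io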

 

 

5  Test custom-metric scaling with the official sample HPA app

 

[root@k8s-master prometheus]# cd kube-prometheus/experimental/custom-metrics-api/
[root@k8s-master custom-metrics-api]# kubectl apply -f  sample-app.yaml
servicemonitor.monitoring.coreos.com/sample-app created
service/sample-app created
deployment.apps/sample-app created
horizontalpodautoscaler.autoscaling/sample-app created
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-k5c5b                 1/1     Running   0          17s
[root@k8s-master custom-metrics-api]# kubectl get service | grep sample-app
sample-app                ClusterIP   10.106.102.68    <none>        8080/TCP         25s
[root@k8s-master custom-metrics-api]#
[root@k8s-master custom-metrics-api]# kubectl get hpa | grep sample-app
sample-app   Deployment/sample-app   264m/500m                1         10        1          52s
[root@k8s-master custom-metrics-api]#
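
For reference, the HPA in sample-app.yaml looks roughly like this (reconstructed from the 500m target above and the describe output below; consult the file in the repo for the authoritative version):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 500m    # 0.5 requests/second per pod on average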

[root@k8s-master custom-metrics-api]# curl 10.106.102.68:8080/metrics
# HELP http_requests_total The amount of requests served by the server in total
# TYPE http_requests_total counter
http_requests_total 34

 

We get the metric http_requests_total.
Note that for every counter metric, the _total suffix is stripped when it is exposed through this API, so http_requests_total becomes http_requests.
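
That renaming comes from the adapter's rule configuration. An illustrative prometheus-adapter rule (not necessarily the exact one shipped in custom-metrics-configmap.yaml):

rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"                # strips the _total suffix
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'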
 
[root@k8s-master custom-metrics-api]# kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "sample-app-74684b97f-k5c5b",
        "apiVersion": "/v1"
      },
      "metricName": "http_requests",
      "timestamp": "2019-10-09T06:12:38Z",
      "value": "418m"
    }
  ]
}
# Test autoscaling
# Install hey (an HTTP load generator)
go get -u github.com/rakyll/hey
hey -n 10000 -q 5 -c 5 http://10.106.102.68:8080
# 10000 requests in total, 5 workers (-c) each limited to 5 req/s (-q),
# i.e. about 25 requests/second against a 500m (0.5 req/s) per-pod target

 

 

After a few minutes, the HPA starts scaling the deployment up.

 

[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 0/1     ContainerCreating   0          2s
sample-app-74684b97f-k5c5b                 1/1     Running             0          4m5s
sample-app-74684b97f-n5tvb                 0/1     ContainerCreating   0          2s
sample-app-74684b97f-sbkvn                 0/1     ContainerCreating   0          2s
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 1/1     Running             0          27s
sample-app-74684b97f-6jg8x                 1/1     Running             0          12s
sample-app-74684b97f-gq622                 1/1     Running             0          12s
sample-app-74684b97f-k5c5b                 1/1     Running             0          4m30s
sample-app-74684b97f-n5tvb                 1/1     Running             0          27s
sample-app-74684b97f-sbkvn                 1/1     Running             0          27s
sample-app-74684b97f-x6thr                 0/1     ContainerCreating   0          12s
sample-app-74684b97f-zk9dz                 1/1     Running             0          12s
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 1/1     Running             0          67s
sample-app-74684b97f-6jg8x                 1/1     Running             0          52s
sample-app-74684b97f-969vx                 1/1     Running             0          36s
sample-app-74684b97f-gq622                 1/1     Running             0          52s
sample-app-74684b97f-k5c5b                 1/1     Running             0          5m10s
sample-app-74684b97f-n5tvb                 1/1     Running             0          67s
sample-app-74684b97f-q8h2m                 1/1     Running             0          36s
sample-app-74684b97f-sbkvn                 1/1     Running             0          67s
sample-app-74684b97f-x6thr                 0/1     ContainerCreating   0          52s
sample-app-74684b97f-zk9dz                 1/1     Running             0          52s
[root@k8s-master custom-metrics-api]#
[root@k8s-master custom-metrics-api]# kubectl get pod | grep sample-app
sample-app-74684b97f-6gftk                 1/1     Running   0          112s
sample-app-74684b97f-6jg8x                 1/1     Running   0          97s
sample-app-74684b97f-969vx                 1/1     Running   0          81s
sample-app-74684b97f-gq622                 1/1     Running   0          97s
sample-app-74684b97f-k5c5b                 1/1     Running   0          5m55s
sample-app-74684b97f-n5tvb                 1/1     Running   0          112s
sample-app-74684b97f-q8h2m                 1/1     Running   0          81s
sample-app-74684b97f-sbkvn                 1/1     Running   0          112s
sample-app-74684b97f-x6thr                 1/1     Running   0          97s
sample-app-74684b97f-zk9dz                 1/1     Running   0          97s
[root@k8s-master custom-metrics-api]#

[root@k8s-master ~]# kubectl get hpa  | grep sample-app
NAME         REFERENCE               TARGETS                  MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   4656m/500m               1         10        4          4m46s
[root@k8s-master ~]# kubectl get hpa | grep sample-app
sample-app   Deployment/sample-app   3315m/500m               1         10        8          4m57s
[root@k8s-master ~]# kubectl get hpa | grep sample-app
sample-app   Deployment/sample-app   3315m/500m               1         10        10         5m


[root@k8s-master ~]# kubectl describe hpa
podinfo     sample-app
[root@k8s-master ~]# kubectl describe hpa sample-app
Name:                       sample-app
Namespace:                  default
Labels:                     <none>
Annotations:                kubectl.kubernetes.io/last-applied-configuration:
                              {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"sample-app","namespace":"default...
CreationTimestamp:          Wed, 09 Oct 2019 14:10:28 +0800
Reference:                  Deployment/sample-app
Metrics:                    ( current / target )
  "http_requests" on pods:  2899m / 500m
Min replicas:               1
Max replicas:               10
Deployment pods:            10 current / 10 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from pods metric http_requests
  ScalingLimited  True    TooManyReplicas      the desired replica count is more than the maximum replica count
Events:
  Type    Reason             Age    From                       Message
  ----    ------             ----   ----                       -------
  Normal  SuccessfulRescale  4m52s  horizontal-pod-autoscaler  New size: 4; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  4m37s  horizontal-pod-autoscaler  New size: 8; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  4m21s  horizontal-pod-autoscaler  New size: 10; reason: pods metric http_requests above target
[root@k8s-master ~]#

# After a long while, the HPA scales back down
[root@k8s-master ~]# kubectl describe hpa sample-app | tail -10
  Type    Reason             Age    From                       Message
  ----    ------             ----   ----                       -------
  Normal  SuccessfulRescale  35m    horizontal-pod-autoscaler  New size: 4; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  35m    horizontal-pod-autoscaler  New size: 8; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  35m    horizontal-pod-autoscaler  New size: 10; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  23m    horizontal-pod-autoscaler  New size: 8; reason: All metrics below target
  Normal  SuccessfulRescale  18m    horizontal-pod-autoscaler  New size: 7; reason: All metrics below target
  Normal  SuccessfulRescale  13m    horizontal-pod-autoscaler  New size: 6; reason: All metrics below target
  Normal  SuccessfulRescale  8m18s  horizontal-pod-autoscaler  New size: 5; reason: All metrics below target
  Normal  SuccessfulRescale  3m15s  horizontal-pod-autoscaler  New size: 4; reason: All metrics below target
[root@k8s-master ~]# kubectl get pod | grep sample
sample-app-74684b97f-6gftk                 1/1     Running   0          36m
sample-app-74684b97f-k5c5b                 1/1     Running   0          40m
sample-app-74684b97f-n5tvb                 1/1     Running   0          36m
sample-app-74684b97f-sbkvn                 1/1     Running   0          36m
[root@k8s-master ~]#

 

 

The autoscaler does not react to usage spikes immediately. By default, metrics are synced every 30 seconds, and scaling up or down only happens if there has been no rescaling within the last 5 minutes. This way the HPA avoids executing conflicting decisions in rapid succession, and it gives the Cluster Autoscaler time to kick in.
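
These intervals are governed by kube-controller-manager flags (flag names valid around k8s 1.15; shown with the values described above, check your version's defaults):

--horizontal-pod-autoscaler-sync-period=30s
--horizontal-pod-autoscaler-downscale-stabilization=5m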

 

 

Notes on threshold values

The m suffix stands for milli-units; for example, 901m means 901 milli-requests, and

1000m = 1

So 44315m against a target of 100 means a measured average of about 44.3 requests per pod, with a threshold of 100:

[root@k8s-master custom-metrics-api]# kubectl get hpa
NAME         REFERENCE               TARGETS                  MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   44315m/100                 1         10        4          43m
[root@k8s-master custom-metrics-api]#

 

If the target is instead set to 10000m, that is effectively 10 concurrent requests, and the TARGETS column displays it simply as 10:

    pods:
      metricName: http_requests
      targetAverageValue: 10000m
[root@k8s-master custom-metrics-api]# kubectl get hpa| grep sample
sample-app   Deployment/sample-app   212m/10                  1         10        10         52m
[root@k8s-master custom-metrics-api]#

 
