HPA && metrics-server

27. HPA

27.1 Introduction to Pod Scaling

Based on the current load on the pods, the number of pod replicas is adjusted dynamically: during business peaks the replica count is scaled out automatically so that requests are served promptly,
and during off-peak periods the pods are scaled in, reducing cost while keeping efficiency.
Public clouds additionally support node-level elastic scaling.

27.2 Scaling with the scale Command

# When --replicas is set to a value lower than the current number of pods, k8s kills the surplus pods; below, 3 replicas are reduced to 1
root@k8s-master1:~/k8s-data/yaml/dubbo# kubectl get deployments.apps -n demo 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment         1/1     1            1           6d21h
tomcat-app1-deployment   1/1     1            1           6d22h

root@k8s-master1:~/k8s-data/yaml/dubbo# kubectl scale deployment -n demo nginx-deployment --replicas=1
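To scale back out, set --replicas to a larger value and verify the result with kubectl get pods. A minimal sketch (the replica count of 3 is only an example):

kubectl scale deployment -n demo nginx-deployment --replicas=3
kubectl get pods -n demo -o wide   #the new pods should appear within a few seconds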

27.3 Types of Autoscaling Controllers

  • Horizontal Pod Autoscaler (HPA)
Adjusts the number of pod replicas horizontally based on pod resource utilization
  • Vertical Pod Autoscaler (VPA)
Adjusts the resource limits of individual pods based on pod resource utilization; cannot be used together with HPA
  • Cluster Autoscaler (CA)
Dynamically scales the node pool based on node resource usage across the cluster, so that CPU and memory remain available for scheduling new pods

27.4 Introduction to the HPA Controller

# kube-controller-manager --help | grep initial-readiness-delay
The Horizontal Pod Autoscaler (HPA) controller automatically adjusts the number of pods running in a k8s cluster based on predefined thresholds and the pods' current resource utilization (automatic horizontal scaling). The relevant kube-controller-manager flags are:
--horizontal-pod-autoscaler-sync-period #interval at which the HPA controller queries metrics and reconciles the number of pod replicas; defaults to 15s and can be changed with this flag
--horizontal-pod-autoscaler-downscale-stabilization #scale-down stabilization window, default 5 minutes
--horizontal-pod-autoscaler-cpu-initialization-period #initialization delay; CPU metrics from a pod are ignored during this period after it starts, default 5 minutes
--horizontal-pod-autoscaler-initial-readiness-delay #pod readiness delay; pods within this window are treated as not ready and their metrics are not collected, default 30 seconds
--horizontal-pod-autoscaler-tolerance #tolerance for metric deviation (a float, default 0.1). The ratio of the current metric to the target must differ from 1 by more than the tolerance, i.e. be greater than 1+0.1=1.1 or less than 1-0.1=0.9, to trigger scaling. For example, with a CPU utilization target of 50% and a current utilization of 80%, 80/50=1.6 > 1.1, so a scale-out is triggered; a ratio below 0.9 would trigger a scale-in.
Trigger condition: avg(CurrentPodsConsumption)/Target > 1.1 or < 0.9, i.e. sum the metric across the N pods, divide by the number of pods to get the average, then divide by the target; a result greater than 1.1 triggers a scale-out and a result less than 0.9 triggers a scale-in.

Formula: TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization)/Target) #ceil rounds up to the nearest integer number of pods
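A quick worked example of the formula (the numbers are made up for illustration): assume the CPU utilization target is 50% and three pods are currently running at 80%, 70% and 90%.

# avg = (80 + 70 + 90) / 3 = 80, ratio = 80 / 50 = 1.6 > 1.1, so scaling is triggered
# TargetNumOfPods = ceil((80 + 70 + 90) / 50) = ceil(240 / 50) = ceil(4.8) = 5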

The metric data requires deploying metrics-server, i.e. the HPA controller uses metrics-server as its data source.
The HPA controller was introduced in k8s 1.1. Early versions used the Heapster component to collect pod metrics; since k8s 1.11, Metrics Server is used for data collection. The collected data is exposed through APIs such as metrics.k8s.io, custom.metrics.k8s.io and external.metrics.k8s.io, and is then queried by the HPA controller to scale pods in or out based on a given resource utilization.
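Once metrics-server is deployed (see 27.5 below), the resource metrics API can be queried directly through the apiserver; a minimal check (the demo namespace is taken from the later examples):

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/demo/pods"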

27.5 Deploying metrics-server

metrics-server is the source of container resource metrics for Kubernetes' built-in autoscaling pipelines.
metrics-server collects resource metrics from the kubelet on each node and exposes them in the Kubernetes apiserver through the Metrics API, for use by the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler; the metrics can also be viewed with kubectl top node/pod.
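For example, once metrics-server is running, the following commands should return live usage data (a quick sanity check rather than part of the deployment itself):

kubectl top node
kubectl top pod -n kube-system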

  • YAML
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# cat metrics-server-v0.6.1.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
#the image name has been changed (pulled from a local Harbor registry instead of k8s.gcr.io)
        #image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        image: harbor.nbrhce.com/demo/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
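A minimal deploy-and-verify sequence for the manifest above (assuming the file name shown in the cat command):

kubectl apply -f metrics-server-v0.6.1.yaml
kubectl get pods -n kube-system -l k8s-app=metrics-server
kubectl get apiservices v1beta1.metrics.k8s.io   #AVAILABLE should turn True once metrics-server is serving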

27.6 Implementing HPA

  • TOMCAT YAML
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# cat tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: tomcat-app1-deployment-label
  name: tomcat-app1-deployment
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat-app1-selector
  template:
    metadata:
      labels:
        app: tomcat-app1-selector
    spec:
      containers:
      - name: tomcat-app1-container
        #image: harbor.magedu.local/magedu/tomcat-app1:v7
        #image: tomcat:7.0.93-alpine 
#stress-test image: it runs workers that drive 2 CPUs straight to 100%, which makes it easy to hit the scaling target
        image: lorel/docker-stress-ng 
        args: ["--vm", "2", "--vm-bytes", "256M"]
        ##command: ["/apps/tomcat/bin/run_tomcat.sh"]
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: tomcat-app1-service-label
  name: tomcat-app1-service
  namespace: demo
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: tomcat-app1-selector
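A sketch of applying and checking the deployment above (assuming the demo namespace already exists and the file name shown in the cat command):

kubectl apply -f tomcat-app1.yaml
kubectl get pods -n demo -l app=tomcat-app1-selector
kubectl top pod -n demo   #the stress-ng pods should report CPU close to their 1-core limit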
  • HPA
#note which controller name and kind scaleTargetRef points to
#minReplicas: 3 sets the minimum number of replicas
#maxReplicas: 10 sets the maximum; it will scale out to at most 10
#targetCPUUtilizationPercentage is the CPU threshold, i.e. the trigger for scaling
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# cat hpa.yaml 
#apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v1 
kind: HorizontalPodAutoscaler
metadata:
  namespace: demo
  name: tomcat-app1-podautoscaler
  labels:
    app: tomcat-app1
    version: v2beta1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    #apiVersion: extensions/v1beta1 
    kind: Deployment
    name: tomcat-app1-deployment 
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
  #metrics:
  #- type: Resource
  #  resource:
  #    name: cpu
  #    targetAverageUtilization: 60
  #- type: Resource
  #  resource:
  #    name: memory

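Apply the HPA and watch it pick up metrics; a sketch (the same HPA could also be created imperatively with kubectl autoscale, shown commented out):

kubectl apply -f hpa.yaml
kubectl get hpa -n demo -w   #TARGETS shows <unknown>/60% until the first metrics are collected
#kubectl autoscale deployment tomcat-app1-deployment -n demo --min=3 --max=10 --cpu-percent=60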
#output like the following means the HPA has scaled out
root@k8s-master1:~/20220814/metrics-server-0.6.1-case# kubectl get hpa -n demo
NAME                        REFERENCE                           TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
tomcat-app1-podautoscaler   Deployment/tomcat-app1-deployment   199%/60%   3         10        10         15m
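To see when and why the HPA scaled, inspect its events and the resulting pods (a quick check):

kubectl describe hpa tomcat-app1-podautoscaler -n demo   #the Events section records each SuccessfulRescale
kubectl get pods -n demo -l app=tomcat-app1-selector   #should now list 10 running pods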