K8S-Metrics Server

Kubernetes version: 1.17.11

Official docs: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server

Metrics Server YAML: https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.3.6

kubectl autoscale automatically controls the number of Pods running in a Kubernetes cluster (horizontal autoscaling); the allowed Pod count range and the trigger conditions must be defined in advance.
Since version 1.1, Kubernetes has included a controller named HPA (Horizontal Pod Autoscaler) that automatically scales Pods out and in based on resource (CPU/Memory) utilization. Early versions could only use the Heapster component, with CPU utilization as the sole trigger condition. Starting with Kubernetes 1.11, data collection is handled by Metrics Server instead; the collected data is exposed through the Aggregated API (e.g. metrics.k8s.io, custom.metrics.k8s.io, external.metrics.k8s.io) and is then queried by the HPA controller, which scales Pods up or down based on the utilization of the chosen resource.
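
As a quick illustration (the Deployment name "web" below is hypothetical), such an HPA can be created with kubectl autoscale and inspected afterwards:

]# kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=50   # keep 2-10 replicas, target 50% average CPU
]# kubectl get hpa web        # current/target utilization and replica count
]# kubectl describe hpa web   # scaling events and the metrics the controller sees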

By default the controller manager queries metrics usage every 15s (configurable via --horizontal-pod-autoscaler-sync-period; a way to check this flag is sketched after the lists below).
Three metrics types are supported:
  -> predefined metrics (e.g. Pod CPU), computed as a utilization ratio
  -> custom Pod metrics, computed as raw values
  -> custom object metrics

Two metrics query paths are supported:
  -> Heapster
  -> a custom REST API
Using multiple metrics at the same time is also supported.
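
A minimal check of the sync period, assuming a kubeadm cluster where the controller manager runs as a static Pod under /etc/kubernetes/manifests:

]# grep horizontal-pod-autoscaler-sync-period /etc/kubernetes/manifests/kube-controller-manager.yaml   # no output means the 15s default applies
# To change it, add a flag such as the following to the command list in that manifest;
# kubelet restarts the static Pod automatically once the file is saved:
#   - --horizontal-pod-autoscaler-sync-period=30s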

Kubernetes Metrics Server:

  • Kubernetes Metrics Server is the aggregator of the cluster's core monitoring data; kubeadm does not deploy it by default.

  • Metrics Server serves other components such as the Dashboard. It is an extension API server and depends on the API Aggregator, so the API Aggregator must be enabled in kube-apiserver before installing Metrics Server.

  • The Metrics API only exposes current measurements; it does not store historical data.

  • The Metrics API URI is /apis/metrics.k8s.io/ and the API is maintained under k8s.io/metrics.

  • metrics-server must be deployed for this API to be available; metrics-server collects its data by calling the kubelet Summary API.
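
Once metrics-server is running (deployed later in this post), the Metrics API mentioned above can also be queried directly:

]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes                           # node metrics as JSON
]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods     # pod metrics for one namespace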

Check whether the API Server has Aggregator Routing enabled: look for the --enable-aggregator-routing=true flag on the kube-apiserver process.

1.8+]# ps -ef | grep apiserver
root       2824   2683 10 12月01 ?      02:27:19 kube-apiserver --advertise-address=192.168.64.110 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt 
--enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client,system:metrics-server,metrics-server --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --enable-aggregator-routing=true
root 68993 40840 0 21:37 pts/0 00:00:00 grep --color=auto apiserver

Enable Aggregator Routing by editing kube-apiserver.yaml on every API Server node: once a manifest under /etc/kubernetes/manifests is modified, the API Server restarts automatically and the change takes effect.

]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.64.110
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client,system:metrics-server,metrics-server
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --enable-aggregator-routing=true   # enable Aggregator Routing
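
After saving the manifest, one way (among others) to confirm the restarted API Server picked up the flag:

]# ps -ef | grep kube-apiserver | grep -o 'enable-aggregator-routing=true'
]# kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-aggregator-routing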

Clone the code

]# git clone  -b  v0.3.6   https://github.com/kubernetes-sigs/metrics-server.git
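
In the v0.3.6 tag the deployment manifests should live under deploy/1.8+/ (which is what the "1.8+]#" prompt below refers to); the exact layout is an assumption about that release:

]# cd metrics-server/deploy/1.8+/
]# ls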

Try to fetch metrics data (before metrics-server is deployed)

kubectl top nodes   # fails with the error below
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
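
At this point no provider of the resource-metrics API is registered in the cluster; this can be confirmed by checking that the metrics.k8s.io APIService does not exist yet (it will appear after the deployment below):

]# kubectl get apiservices | grep metrics.k8s.io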

Prepare the required image

# docker pull k8s.gcr.io/metrics-server-amd64:v0.3.6   # Google registry
# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6   # Aliyun mirror registry
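
If the nodes cannot reach k8s.gcr.io, a common workaround (an assumption about your environment, not a requirement of the manifest) is to pull the Aliyun mirror and retag it to the image name referenced in the Deployment below:

# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6 mirrorgooglecontainers/metrics-server-amd64:v0.3.6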

Modify the metrics-server configuration file

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        # Added: override the container command so kubelet TLS certificates are not verified
        # and node addresses are resolved via InternalIP
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        resources:
          limits:
            cpu: 300m
            memory: 200Mi
          requests:
            cpu: 200m
            memory: 100Mi

Full version

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

Deploy after the modifications are done

1.8+]# kubectl  apply -f .
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check the Pods

1.8+]# kubectl  get pod -A
NAMESPACE              NAME                                             READY   STATUS    RESTARTS   AGE
kube-system            coredns-6955765f44-8vsfj                         1/1     Running   28         323d
kube-system            coredns-6955765f44-fqt6n                         1/1     Running   28         323d
kube-system            etcd-master                                      1/1     Running   74         323d
kube-system            kube-apiserver-master                            1/1     Running   10         102d
kube-system            kube-controller-manager-master                   1/1     Running   37         323d
kube-system            kube-flannel-ds-64qdh                            1/1     Running   16         323d
kube-system            kube-flannel-ds-d2g8j                            1/1     Running   28         323d
kube-system            kube-flannel-ds-qmssw                            1/1     Running   29         323d
kube-system            kube-proxy-4vkl5                                 1/1     Running   28         323d
kube-system            kube-proxy-mls6g                                 1/1     Running   29         323d
kube-system            kube-proxy-t4xmg                                 1/1     Running   16         323d
kube-system            kube-scheduler-master                            1/1     Running   38         323d
kube-system            metrics-server-d8669575f-xl6mw                   1/1     Running   0          15s
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-757sw       1/1     Running   11         102d
kubernetes-dashboard   kubernetes-dashboard-75fdb885bf-kj4kf            1/1     Running   0          2d17h
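
Besides the Pod being Ready, it is worth confirming that the aggregated API itself is serving:

]# kubectl get apiservice v1beta1.metrics.k8s.io           # should report AVAILABLE=True
]# kubectl -n kube-system logs deploy/metrics-server       # if not, the metrics-server logs usually explain why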

Check again whether monitoring metrics can now be retrieved

1.8+]# kubectl  top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   429m         10%    1605Mi          43%
node1    162m         4%     691Mi           18%
node2    154m         3%     753Mi           20%
1.8+]# kubectl  top pod -A
NAMESPACE              NAME                                             CPU(cores)   MEMORY(bytes)
kube-system            coredns-6bcbd68c66-9j6sx                         8m           12Mi
kube-system            coredns-6bcbd68c66-9mjck                         5m           7Mi
kube-system            etcd-master                                      23m          168Mi
kube-system            kube-apiserver-master                            75m          309Mi
kube-system            kube-controller-manager-master                   32m          45Mi
kube-system            kube-flannel-ds-64qdh                            4m           42Mi
kube-system            kube-flannel-ds-d2g8j                            3m           47Mi
kube-system            kube-flannel-ds-qmssw                            3m           44Mi
kube-system            kube-proxy-4vkl5                                 1m           20Mi
kube-system            kube-proxy-mls6g                                 1m           18Mi
kube-system            kube-proxy-t4xmg                                 2m           20Mi
kube-system            kube-scheduler-master                            6m           20Mi
kube-system            metrics-server-d8669575f-xl6mw                   3m           14Mi
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-757sw       1m           17Mi
kubernetes-dashboard   kubernetes-dashboard-75fdb885bf-f29ng            4m           17Mi