Authentication, Authorization, and Admission Control (Part 2)
I. Kubernetes Dashboard
Dashboard is the web GUI for Kubernetes. It can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot those applications, and manage the cluster itself along with its attached resources. Administrators commonly use it to get a quick overview of the cluster and its applications, to create or modify individual resources (such as Deployments, Jobs, and DaemonSets), and to scale a Deployment, start a rolling update, restart a Pod, or deploy an application with the deployment wizard.
Note: Dashboard relies on Heapster or Metrics Server for metrics collection and visualization.
Both authentication and authorization for Dashboard are handled by the Kubernetes cluster itself; Dashboard is merely a proxy, forwarding all relevant operations to the API Server rather than performing them on its own. It currently supports two authentication methods, bearer token and kubeconfig, so the corresponding credentials must be prepared before access.
1. Deploying Dashboard over HTTPS
Versions of Dashboard prior to 1.7 were deployed with administrative privileges granted outright, which posed a security risk. Starting with 1.7, the default deployment defines only the minimum privileges needed to run Dashboard, and access is allowed only from the local host after creating a proxy with "kubectl proxy" on the Master; requests from any other host are denied by default. To bypass "kubectl proxy" and access Dashboard directly, a certificate must be supplied at deployment time so that a secure HTTPS connection can be established with clients. The certificate can either be issued by a trusted CA or created with a tool such as openssl or cfssl.
At deployment time, Dashboard loads the required private key and certificate from a Secret object, so the key, the certificate, and the Secret must be prepared in advance. The commands below must be run as an administrator on the Master node; otherwise the ca.key file will not be accessible:
[root@k8s-master1 ~]# mkdir dashboard
[root@k8s-master1 ~]# cd dashboard/
[root@k8s-master1 dashboard]#
[root@k8s-master1 dashboard]# (umask 077;openssl genrsa -out dashboard.key 2048)
Generating RSA private key, 2048 bit long modulus
..........+++
..........+++
e is 65537 (0x10001)
[root@k8s-master1 dashboard]# ls -lrt
total 4
-rw------- 1 root root 1679 Nov  3 21:15 dashboard.key
[root@k8s-master1 dashboard]# openssl req -new -key dashboard.key -out dashboard.csr -subj "/O=iLinux/CN=dashboard"
[root@k8s-master1 dashboard]# ls -lrt
total 8
-rw------- 1 root root 1679 Nov  3 21:15 dashboard.key
-rw-r--r-- 1 root root  915 Nov  3 21:16 dashboard.csr
[root@k8s-master1 dashboard]# openssl x509 -req -in dashboard.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out dashboard.crt -days 3650
Signature ok
subject=/O=iLinux/CN=dashboard
Getting CA Private Key
[root@k8s-master1 dashboard]# ls -lrt
total 12
-rw------- 1 root root 1679 Nov  3 21:15 dashboard.key
-rw-r--r-- 1 root root  915 Nov  3 21:16 dashboard.csr
-rw-r--r-- 1 root root 1001 Nov  3 21:18 dashboard.crt
Next, create an Opaque Secret named kubernetes-dashboard-certs from the generated private key and certificate, using the key names dashboard.key and dashboard.crt:
[root@k8s-master1 dashboard]# kubectl create ns kubernetes-dashboard
namespace/kubernetes-dashboard created
[root@k8s-master1 dashboard]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.crt=./dashboard.crt --from-file=dashboard.key=./dashboard.key -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created
[root@k8s-master1 dashboard]# kubectl get secrets kubernetes-dashboard-certs -n kubernetes-dashboard
NAME                         TYPE     DATA   AGE
kubernetes-dashboard-certs   Opaque   2      19s
[root@k8s-master1 dashboard]# kubectl describe secrets kubernetes-dashboard-certs -n kubernetes-dashboard
Name:         kubernetes-dashboard-certs
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
dashboard.crt:  1001 bytes
dashboard.key:  1679 bytes
With the Secret in place, Dashboard can be deployed. The configuration manifest is shown below. Because the manifest also creates the Namespace and Secret objects, warnings may be emitted during creation.
[root@k8s-master1 dashboard]# cat kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Create the Dashboard with the following command:
[root@k8s-master1 dashboard]# kubectl apply -f kubernetes-dashboard.yaml
Warning: resource namespaces/kubernetes-dashboard is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
Warning: resource secrets/kubernetes-dashboard-certs is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
View the created resources:
[root@k8s-master1 dashboard]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7445d59dfd-5875f   1/1     Running   0          3m
kubernetes-dashboard-54f5b6dc4b-wz68l        1/1     Running   0          3m
[root@k8s-master1 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.97.28.167   <none>        8000/TCP   3m11s
kubernetes-dashboard        ClusterIP   10.98.71.77    <none>        443/TCP    3m12s
As shown above, the created Service object is of type ClusterIP, which is reachable only from Pod clients. To access Dashboard from outside the cluster with a browser, change the Service to the NodePort type:
[root@k8s-master1 dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited
[root@k8s-master1 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.97.28.167   <none>        8000/TCP        8m25s
kubernetes-dashboard        NodePort    10.98.71.77    <none>        443:30302/TCP   8m26s
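Instead of editing the Service interactively, the same type change can be applied non-interactively. The following is a sketch using kubectl patch against the objects deployed above; pinning a fixed nodePort is optional, and the value 30302 is shown only for illustration:

# Switch the Service to NodePort without opening an editor; the ports entry
# is merged by its "port" key, so only the nodePort is added.
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30302}]}}'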
Once the NodePort is known, Dashboard can be reached from outside the cluster with a browser at https://10.0.0.131:30302. The default login page appears as shown in the figure below; it supports two authentication methods, kubeconfig and token:
Dashboard is an application running in a Pod object, so the account it uses to connect to the API Server must be of the ServiceAccount type. The account supplied on the login page therefore has to be such an account, and the access rights are determined by the ServiceAccount used for kubeconfig or token authentication. A ServiceAccount's cluster resource permissions in turn depend on the Role or ClusterRole bound to it.
2. Configuring token authentication
Cluster-level administrative operations, such as managing PersistentVolumes and namespaces, require cluster administrator privileges. The built-in cluster-admin ClusterRole holds all of the relevant permissions, so creating a ServiceAccount and binding it to that role completes the cluster administrator authorization. A user who authenticates to Dashboard with that ServiceAccount's token then acts as a cluster administrator in the Dashboard UI. For example, bind the kubernetes-dashboard ServiceAccount created above to the cluster-admin ClusterRole so that it gains administrative privileges:
[root@k8s-master1 ~]# kubectl get sa -n kubernetes-dashboard
NAME                   SECRETS   AGE
default                1         2d22h
kubernetes-dashboard   1         2d22h
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
[root@k8s-master1 ~]# kubectl get clusterrolebinding dashboard-cluster-admin
NAME                      ROLE                        AGE
dashboard-cluster-admin   ClusterRole/cluster-admin   14s
When a ServiceAccount object is created, a token for authentication is generated automatically; it can be retrieved from the associated Secret object:
[root@k8s-master1 ~]# kubectl get secrets -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-9snc4                kubernetes.io/service-account-token   3      2d22h
kubernetes-dashboard-certs         Opaque                                2      2d22h
kubernetes-dashboard-csrf          Opaque                                1      2d22h
kubernetes-dashboard-key-holder    Opaque                                2      2d22h
kubernetes-dashboard-token-s599v   kubernetes.io/service-account-token   3      2d22h
[root@k8s-master1 ~]# kubectl describe secrets kubernetes-dashboard-token-s599v -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-s599v
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: c7533525-7164-4fc7-801e-68e460851f88

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxKVHlZMHFIMWpxRnlOSWRPMmVDQnBPdGFwdWRlbkdrLUpuOG50WUhfbEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1zNTk5diIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImM3NTMzNTI1LTcxNjQtNGZjNy04MDFlLTY4ZTQ2MDg1MWY4OCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.SuEBft4utO45S49XEeoxp-3uZevUWnbZopEasiCtzSMaRf4GxrMhaYTerlq66Noh2SjVUeATBqxSJnZxZsGDqdGcblgT4ubfqoSQ6_Supt5x1UfuzV6s9O3GC76c4OcRCwXjzFzw9moAkMRdz67wJ7UdMKw2TY9JN26mVh5qBr4Ws39YbwP045nmr9OVPTjoKrZM4PUsf9_B6JUjrlWvIB55nqPS7GoOO3xdjADrUrwJxWw9ZhXb8uvA6kf3BFb6GbqDaAiRCbd4yN8pVbqmVE7V-zM_YC6ZnPeR2tiNOfTqVnFrQwK-7ugXI9youPW8MG9Kf2daZT6UE32ssSjaow
ca.crt:     1066 bytes
Select the token authentication method on the login page and paste in the token obtained above to log in to the Dashboard, as shown in the figure:
After clicking Sign in, the result looks like the following figure:
The token used for authentication above is a long encoded string whose raw content must be pasted into the text box at every login, which makes it inconvenient to store and use. It is therefore advisable to save it into a kubeconfig configuration file and log in via the kubeconfig authentication method instead.
3. Configuring kubeconfig authentication
A kubeconfig file is a carrier for authentication information: it can hold a private key and certificate, or an authentication token, as a user's credential configuration file. To illustrate how to configure a login account with administrative rights over only a specific namespace, create a new ServiceAccount for managing the default namespace and bind it to the admin ClusterRole:
[root@k8s-master1 dashboard]# kubectl create sa def-ns-admin -n default
serviceaccount/def-ns-admin created
[root@k8s-master1 dashboard]# kubectl get sa def-ns-admin -n default
NAME           SECRETS   AGE
def-ns-admin   1         28s
[root@k8s-master1 dashboard]# kubectl create rolebinding def-ns-admin --clusterrole=admin --serviceaccount=default:def-ns-admin
rolebinding.rbac.authorization.k8s.io/def-ns-admin created
[root@k8s-master1 dashboard]# kubectl get rolebinding def-ns-admin -n default
NAME           ROLE                AGE
def-ns-admin   ClusterRole/admin   21s
The steps below walk through creating the required kubeconfig file.
1) Initialize the cluster information, supplying the API Server URL and the CA certificate used to verify the API Server's certificate:
[root@k8s-master1 dashboard]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.0.0.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-user1
  name: kube-user1@kubernetes
- context:
    cluster: kubernetes
    namespace: default
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kube-user1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@k8s-master1 dashboard]# kubectl config set-cluster kubernetes --embed-certs=true --server="https://10.0.0.131:6443" --certificate-authority=/etc/kubernetes/pki/ca.crt --kubeconfig=./def-ns-admin.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master1 dashboard]# ll
total 24
-rw-r--r-- 1 root root 1001 Nov  3 21:18 dashboard.crt
-rw-r--r-- 1 root root  915 Nov  3 21:16 dashboard.csr
-rw------- 1 root root 1679 Nov  3 21:15 dashboard.key
-rw------- 1 root root 1624 Nov  6 21:01 def-ns-admin.kubeconfig
-rw-r--r-- 1 root root 7614 Nov  3 21:38 kubernetes-dashboard.yaml
[root@k8s-master1 dashboard]# cat def-ns-admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EZ3dNVEV5TkRNMU1Wb1hEVE15TURjeU9URXlORE0xTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDJOCmRLRjdRZnZqYjkzWExneXd6endxaGQvZ3lLOWhuZHQzUk05dlQ5QngvQTNWSTBwdnE2Q3NKOE8yUlBrRHA4V2gKWXhlTFZlS29zU1Y2eVROMWhvdkJoc1cwTWZEL2FxYjAzdjN1TUo3QVdvY1dhako3cWNocjRTT2NzcTRqbmJSagoxUmtsdDY3eFVKYjA2ZXZsUFFKSFUyem5ZYkE5VUFsYUlOYytycmVaZE1FbnM5eE9UcU1ZcE9BcjJzNkV5NXhRCkZJY2FCSG5uT0l6WDYxcWhnUFNmMzlVQmJ3bGozK0pVT0M3alR2ZDNSRGkycGNXZVJtZUVNNVNkUDRCSFN5dXkKOE5POGc4d28zVGFTSXZhN0VseTdHUDhNcFZHMHpuR2V3dVA2WGJUOW0rbjgwei9ZcnUwZmoyQXRsbHhoUmJ2LwpyWHcwYUZYU29DOEt4dXZ3S01FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMkdLSWJYcDRhTGRiRWw4aks5UTVrWldqN1hNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCakFncE5mNEJFOXFEY1k4dUk5a3lYa2ZjaE5Ic0JvTVExbnFFbkhUclJhdXlnUnBoTwpYMWVlZ0J4UjlxL2JkNGVxbmVBMjRBZ1A2VXJPYVJCNHZmM2ppTXFrOC9PdmlNMmFVVVczaXdiNHczWnRhY2VUCnFPWGlVRTYxVTJDamVSbFdNTkR6MUpWQWVud0JqeWFadFpuQlIxWFhuVzBKZy9EUm1VYkpxdkZER3Q3ekJ2cHkKcXZZNDZXZTRFYkp2R05UWWdIZE9uZXd2d1ZudFZLeWIydDlocmx5OGxJQlc2UGtQbGpEc2Zma21CV0NHcTJObgpOU2FZa29iQXlGOU43bk0yUElEdUZKQjEvZVR2NGVYbWg2bEFqRjgzZjRlS1o3TUJqczdjOHFjeVJlYjNrVE5VCkhMN1VsWVVGR0ZvQyt1cHFCamJ4ZVpYbkNENUZUcitSWWZKdQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.0.0.131:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
2) Obtain the token of def-ns-admin and use it as the credential. Since the token retrieved directly from the Secret is base64-encoded, the "base64 -d" command is used below to decode it:
[root@k8s-master1 dashboard]# kubectl get secrets | grep -i def-ns-admin
def-ns-admin-token-rrvkx   kubernetes.io/service-account-token   3      15m
[root@k8s-master1 dashboard]# DEFNS_ADMIN_TOKEN=$(kubectl get secrets def-ns-admin-token-rrvkx -n default -o jsonpath={.data.token}|base64 -d)
[root@k8s-master1 dashboard]# echo $DEFNS_ADMIN_TOKEN
eyJhbGciOiJSUzI1NiIsImtpZCI6IkxKVHlZMHFIMWpxRnlOSWRPMmVDQnBPdGFwdWRlbkdrLUpuOG50WUhfbEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1ycnZreCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjkwYWU2Ni01MTYyLTQ3NWUtOGIwZC1iZGVlMWE1YzliYWQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.YpF94wYp5071Te8mzUlZltu1Umdl5tuNdqlA3RxLbOMeKSn0LfTVk8sbRnV_EwoiPL00mPm_hFIoGzvRBWqhREpzU7iZfSYwph4QtQL5MCfbePAdmrJIYpXgIvqYYDXK0Nw8sK7QK0s_OsL0S17GsGqPmLXXYssu14p49AeJXkI_0VHTO3DlPOQg1xXY_0ZDJsLtRwKXsD9QjfBOr4jvB1qBIT5NHg2Yb0hYJsR2yN8lMRsQgqZucvMQg9EQFuN7ud06GFUIasjKmVKRfFIyqRHgoQxr2zfG6rlj6o0IE3Nrdx_Z_yTcPpiVnHoKYtOGTT4lEvF0JnkpGd8PtSVPZw
[root@k8s-master1 dashboard]# kubectl describe secrets def-ns-admin-token-rrvkx -n default
Name:         def-ns-admin-token-rrvkx
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: def-ns-admin
              kubernetes.io/service-account.uid: 9290ae66-5162-475e-8b0d-bdee1a5c9bad

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkxKVHlZMHFIMWpxRnlOSWRPMmVDQnBPdGFwdWRlbkdrLUpuOG50WUhfbEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1ycnZreCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjkwYWU2Ni01MTYyLTQ3NWUtOGIwZC1iZGVlMWE1YzliYWQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.YpF94wYp5071Te8mzUlZltu1Umdl5tuNdqlA3RxLbOMeKSn0LfTVk8sbRnV_EwoiPL00mPm_hFIoGzvRBWqhREpzU7iZfSYwph4QtQL5MCfbePAdmrJIYpXgIvqYYDXK0Nw8sK7QK0s_OsL0S17GsGqPmLXXYssu14p49AeJXkI_0VHTO3DlPOQg1xXY_0ZDJsLtRwKXsD9QjfBOr4jvB1qBIT5NHg2Yb0hYJsR2yN8lMRsQgqZucvMQg9EQFuN7ud06GFUIasjKmVKRfFIyqRHgoQxr2zfG6rlj6o0IE3Nrdx_Z_yTcPpiVnHoKYtOGTT4lEvF0JnkpGd8PtSVPZw
[root@k8s-master1 dashboard]# kubectl config set-credentials def-ns-admin --token=${DEFNS_ADMIN_TOKEN} --kubeconfig=./def-ns-admin.kubeconfig
User "def-ns-admin" set.

# View the kubeconfig file contents
[root@k8s-master1 dashboard]# cat def-ns-admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EZ3dNVEV5TkRNMU1Wb1hEVE15TURjeU9URXlORE0xTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDJOCmRLRjdRZnZqYjkzWExneXd6endxaGQvZ3lLOWhuZHQzUk05dlQ5QngvQTNWSTBwdnE2Q3NKOE8yUlBrRHA4V2gKWXhlTFZlS29zU1Y2eVROMWhvdkJoc1cwTWZEL2FxYjAzdjN1TUo3QVdvY1dhako3cWNocjRTT2NzcTRqbmJSagoxUmtsdDY3eFVKYjA2ZXZsUFFKSFUyem5ZYkE5VUFsYUlOYytycmVaZE1FbnM5eE9UcU1ZcE9BcjJzNkV5NXhRCkZJY2FCSG5uT0l6WDYxcWhnUFNmMzlVQmJ3bGozK0pVT0M3alR2ZDNSRGkycGNXZVJtZUVNNVNkUDRCSFN5dXkKOE5POGc4d28zVGFTSXZhN0VseTdHUDhNcFZHMHpuR2V3dVA2WGJUOW0rbjgwei9ZcnUwZmoyQXRsbHhoUmJ2LwpyWHcwYUZYU29DOEt4dXZ3S01FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMkdLSWJYcDRhTGRiRWw4aks5UTVrWldqN1hNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCakFncE5mNEJFOXFEY1k4dUk5a3lYa2ZjaE5Ic0JvTVExbnFFbkhUclJhdXlnUnBoTwpYMWVlZ0J4UjlxL2JkNGVxbmVBMjRBZ1A2VXJPYVJCNHZmM2ppTXFrOC9PdmlNMmFVVVczaXdiNHczWnRhY2VUCnFPWGlVRTYxVTJDamVSbFdNTkR6MUpWQWVud0JqeWFadFpuQlIxWFhuVzBKZy9EUm1VYkpxdkZER3Q3ekJ2cHkKcXZZNDZXZTRFYkp2R05UWWdIZE9uZXd2d1ZudFZLeWIydDlocmx5OGxJQlc2UGtQbGpEc2Zma21CV0NHcTJObgpOU2FZa29iQXlGOU43bk0yUElEdUZKQjEvZVR2NGVYbWg2bEFqRjgzZjRlS1o3TUJqczdjOHFjeVJlYjNrVE5VCkhMN1VsWVVGR0ZvQyt1cHFCamJ4ZVpYbkNENUZUcitSWWZKdQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.0.0.131:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: def-ns-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkxKVHlZMHFIMWpxRnlOSWRPMmVDQnBPdGFwdWRlbkdrLUpuOG50WUhfbEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1ycnZreCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjkwYWU2Ni01MTYyLTQ3NWUtOGIwZC1iZGVlMWE1YzliYWQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.YpF94wYp5071Te8mzUlZltu1Umdl5tuNdqlA3RxLbOMeKSn0LfTVk8sbRnV_EwoiPL00mPm_hFIoGzvRBWqhREpzU7iZfSYwph4QtQL5MCfbePAdmrJIYpXgIvqYYDXK0Nw8sK7QK0s_OsL0S17GsGqPmLXXYssu14p49AeJXkI_0VHTO3DlPOQg1xXY_0ZDJsLtRwKXsD9QjfBOr4jvB1qBIT5NHg2Yb0hYJsR2yN8lMRsQgqZucvMQg9EQFuN7ud06GFUIasjKmVKRfFIyqRHgoQxr2zfG6rlj6o0IE3Nrdx_Z_yTcPpiVnHoKYtOGTT4lEvF0JnkpGd8PtSVPZw
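The token secret name (def-ns-admin-token-rrvkx here) carries a random suffix, so on a different cluster it should be looked up from the ServiceAccount rather than typed by hand. The following is a sketch of the same step in generic form, assuming a cluster (like the one in this article) where ServiceAccount token Secrets are auto-created:

# Resolve the token secret name from the ServiceAccount, decode the token,
# and write it into the kubeconfig; names follow the example above.
SA_SECRET=$(kubectl get sa def-ns-admin -n default -o jsonpath='{.secrets[0].name}')
DEFNS_ADMIN_TOKEN=$(kubectl get secrets "$SA_SECRET" -n default -o jsonpath='{.data.token}' | base64 -d)
kubectl config set-credentials def-ns-admin --token="${DEFNS_ADMIN_TOKEN}" --kubeconfig=./def-ns-admin.kubeconfig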
3) Set the context list, defining a context named def-ns-admin:
[root@k8s-master1 dashboard]# kubectl config set-context def-ns-admin --cluster=kubernetes --user=def-ns-admin --kubeconfig=./def-ns-admin.kubeconfig
Context "def-ns-admin" created.
[root@k8s-master1 dashboard]# cat def-ns-admin.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EZ3dNVEV5TkRNMU1Wb1hEVE15TURjeU9URXlORE0xTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDJOCmRLRjdRZnZqYjkzWExneXd6endxaGQvZ3lLOWhuZHQzUk05dlQ5QngvQTNWSTBwdnE2Q3NKOE8yUlBrRHA4V2gKWXhlTFZlS29zU1Y2eVROMWhvdkJoc1cwTWZEL2FxYjAzdjN1TUo3QVdvY1dhako3cWNocjRTT2NzcTRqbmJSagoxUmtsdDY3eFVKYjA2ZXZsUFFKSFUyem5ZYkE5VUFsYUlOYytycmVaZE1FbnM5eE9UcU1ZcE9BcjJzNkV5NXhRCkZJY2FCSG5uT0l6WDYxcWhnUFNmMzlVQmJ3bGozK0pVT0M3alR2ZDNSRGkycGNXZVJtZUVNNVNkUDRCSFN5dXkKOE5POGc4d28zVGFTSXZhN0VseTdHUDhNcFZHMHpuR2V3dVA2WGJUOW0rbjgwei9ZcnUwZmoyQXRsbHhoUmJ2LwpyWHcwYUZYU29DOEt4dXZ3S01FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHMkdLSWJYcDRhTGRiRWw4aks5UTVrWldqN1hNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCakFncE5mNEJFOXFEY1k4dUk5a3lYa2ZjaE5Ic0JvTVExbnFFbkhUclJhdXlnUnBoTwpYMWVlZ0J4UjlxL2JkNGVxbmVBMjRBZ1A2VXJPYVJCNHZmM2ppTXFrOC9PdmlNMmFVVVczaXdiNHczWnRhY2VUCnFPWGlVRTYxVTJDamVSbFdNTkR6MUpWQWVud0JqeWFadFpuQlIxWFhuVzBKZy9EUm1VYkpxdkZER3Q3ekJ2cHkKcXZZNDZXZTRFYkp2R05UWWdIZE9uZXd2d1ZudFZLeWIydDlocmx5OGxJQlc2UGtQbGpEc2Zma21CV0NHcTJObgpOU2FZa29iQXlGOU43bk0yUElEdUZKQjEvZVR2NGVYbWg2bEFqRjgzZjRlS1o3TUJqczdjOHFjeVJlYjNrVE5VCkhMN1VsWVVGR0ZvQyt1cHFCamJ4ZVpYbkNENUZUcitSWWZKdQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.0.0.131:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: def-ns-admin
  name: def-ns-admin
current-context: ""
kind: Config
preferences: {}
users:
- name: def-ns-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkxKVHlZMHFIMWpxRnlOSWRPMmVDQnBPdGFwdWRlbkdrLUpuOG50WUhfbEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZi1ucy1hZG1pbi10b2tlbi1ycnZreCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJkZWYtbnMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI5MjkwYWU2Ni01MTYyLTQ3NWUtOGIwZC1iZGVlMWE1YzliYWQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWYtbnMtYWRtaW4ifQ.YpF94wYp5071Te8mzUlZltu1Umdl5tuNdqlA3RxLbOMeKSn0LfTVk8sbRnV_EwoiPL00mPm_hFIoGzvRBWqhREpzU7iZfSYwph4QtQL5MCfbePAdmrJIYpXgIvqYYDXK0Nw8sK7QK0s_OsL0S17GsGqPmLXXYssu14p49AeJXkI_0VHTO3DlPOQg1xXY_0ZDJsLtRwKXsD9QjfBOr4jvB1qBIT5NHg2Yb0hYJsR2yN8lMRsQgqZucvMQg9EQFuN7ud06GFUIasjKmVKRfFIyqRHgoQxr2zfG6rlj6o0IE3Nrdx_Z_yTcPpiVnHoKYtOGTT4lEvF0JnkpGd8PtSVPZw
4) Set the context to be used to the def-ns-admin context defined above:
[root@k8s-master1 dashboard]# kubectl config use-context def-ns-admin --kubeconfig=./def-ns-admin.kubeconfig
Switched to context "def-ns-admin".
At this point, def-ns-admin.kubeconfig, the configuration file for a Dashboard login account with administrative rights over the default namespace, is complete; copy it to a remote client and it can be used for login authentication.
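Before handing the file to Dashboard, it can be sanity-checked from the command line. A quick sketch (the exact error text may vary by Kubernetes version):

# Requests in the default namespace succeed...
kubectl --kubeconfig=./def-ns-admin.kubeconfig get pods -n default

# ...while requests against another namespace are rejected, roughly as:
# Error from server (Forbidden): pods is forbidden: User
# "system:serviceaccount:default:def-ns-admin" cannot list resource "pods"
# in API group "" in the namespace "kube-system"
kubectl --kubeconfig=./def-ns-admin.kubeconfig get pods -n kube-system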
As expected, only resources in the default namespace are visible after logging in.
4. Creating containers through Dashboard
Log in to Dashboard with the token method and click the "+" icon marked by the red arrow in the figure below. There are three ways to create resources: a. writing a resource manifest directly; b. uploading a prepared resource manifest; c. creating from a form. The example below creates a resource by writing a manifest file:
Click Upload to create the resource.
II. Admission Controllers and Application Examples
After the authentication and authorization plugins have completed identity authentication and permission checking respectively, admission controllers intercept create, update, and delete requests in order to enforce the functionality they define, including semantic validation of objects, setting default values for missing fields, restricting all container images to a specific registry, and checking whether a Pod object's resource requirements exceed a specified limit range.
At runtime, admission control proceeds in two phases: the first phase runs the mutating admission controllers in sequence, and the second phase runs the validating admission controllers in sequence. If any controller in either phase rejects a request, the whole request is rejected immediately and an error is returned to the user.
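Which admission plugins the API Server runs is governed by its --enable-admission-plugins flag. On a kubeadm-provisioned control plane like the one used in this article, it can be inspected in the static Pod manifest; a small sketch (the flag value shown is illustrative, and LimitRanger and ResourceQuota discussed below are already enabled by default in recent releases):

# Show the admission-plugin flag on a kubeadm control plane node
grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
# Typical output (illustrative):
#    - --enable-admission-plugins=NodeRestriction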
1. The LimitRange resource and the LimitRanger admission controller
Although users can specify resource requests and limits for containers, a containerized application with no resource limit set may well, for whatever reason, swallow all of the available compute resources on its worker node. A prudent practice is therefore to use a LimitRange resource in each namespace to specify the minimum and maximum compute resource usage per container, and even to set default compute resource requests and limits. Once a LimitRange object is defined in a namespace, resource objects submitted for creation or modification are checked by the LimitRanger controller, and any request violating the maximum usage defined by the LimitRange object is rejected outright.
A LimitRange resource can constrain system resource usage for three kinds of objects: Container, Pod, and PersistentVolumeClaim. Pod and Container constraints chiefly define the allowed ranges of CPU and memory, while PersistentVolumeClaim constraints define the allowed range of storage space. The manifest below uses container CPU as an example: default defines the default resource limit, defaultRequest the default resource request, and min the minimum usage; the maximum usage can be given either as a fixed value with max or as a multiple of the request with maxLimitRequestRatio:
[root@k8s-master1 ~]# mkdir limitrange
[root@k8s-master1 ~]# cd limitrange/
[root@k8s-master1 limitrange]# vim cpu-limit-range.yaml
[root@k8s-master1 limitrange]# cat cpu-limit-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1000m
    defaultRequest:
      cpu: 1000m
    min:
      cpu: 500m
    max:
      cpu: 2000m
    maxLimitRequestRatio:
      cpu: 4
    type: Container
Create the resource from the manifest in the cluster's default namespace, then use the describe command to check the effective result. Different Pod objects are created below to test the default values and the minimum and maximum constraints, verifying the enforcement mechanism.
[root@k8s-master1 limitrange]# kubectl apply -f cpu-limit-range.yaml
limitrange/cpu-limit-range created
[root@k8s-master1 limitrange]# kubectl get limitrange cpu-limit-range -o wide
NAME              CREATED AT
cpu-limit-range   2022-11-07T14:16:04Z
[root@k8s-master1 limitrange]# kubectl describe limitrange cpu-limit-range
Name:       cpu-limit-range
Namespace:  default
Type        Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---  ---------------  -------------  -----------------------
Container   cpu       500m  2    1                1              4
Create a Pod object containing a single container with no resource requests or limits specified:
[root@k8s-master1 limitrange]# kubectl run limit-pod1 --image=ikubernetes/myapp:v1 --restart=Never
pod/limit-pod1 created
[root@k8s-master1 limitrange]# kubectl describe pods limit-pod1
Name:         limit-pod1
Namespace:    default
Priority:     0
Node:         k8s-node1/10.0.0.132
Start Time:   Mon, 07 Nov 2022 22:21:05 +0800
Labels:       run=limit-pod1
Annotations:  cni.projectcalico.org/podIP: 10.244.36.89/32
              cni.projectcalico.org/podIPs: 10.244.36.89/32
              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container limit-pod1; cpu limit for container limit-pod1
Status:       Pending
IP:
IPs:          <none>
Containers:
  limit-pod1:
    Container ID:
    Image:          ikubernetes/myapp:v1
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:  1
    Requests:
      cpu:        1
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5n29f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-5n29f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5n29f
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/limit-pod1 to k8s-node1
  Normal  Pulled     10s   kubelet            Container image "ikubernetes/myapp:v1" already present on machine
  Normal  Created    10s   kubelet            Created container limit-pod1
  Normal  Started    9s    kubelet            Started container limit-pod1
As the output above shows, in the container status section of the limit-pod1 details, the CPU resources were defaulted to the following configuration, which matches the defaults defined in the LimitRange object:
Limits:
  cpu:  1
Requests:
  cpu:  1
If a Pod object specifies a resource request below the minimum defined in the LimitRange, the LimitRanger admission controller rejects the request.
[root@k8s-master1 limitrange]# kubectl run limit-pod2 --image=ikubernetes/myapp:v1 --restart=Never --requests='cpu=400m'
Error from server (Forbidden): pods "limit-pod2" is forbidden: minimum cpu usage per Container is 500m, but request is 400m
Likewise, if a Pod object specifies a resource limit above the maximum defined in the LimitRange, the LimitRanger admission controller also rejects the request:
[root@k8s-master1 limitrange]# kubectl run limit-pod3 --image=ikubernetes/myapp:v1 --restart=Never --limits='cpu=3000m'
Error from server (Forbidden): pods "limit-pod3" is forbidden: maximum cpu usage per Container is 2, but limit is 3
In fact, the default resource requests and limits set in a LimitRange object can combine with the minimum and maximum usage constraints into many different scenarios, and the effective result can differ considerably between them. Memory and PVC constraints work much the same way as CPU, as sketched below.
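The following is a hedged sketch of a LimitRange covering container memory and PVC storage; the object name and all values here are hypothetical, chosen only to illustrate the field layout:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-pvc-limit-range      # hypothetical name
spec:
  limits:
  - type: Container              # default/min/max memory per container
    default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    min:
      memory: 128Mi
    max:
      memory: 1Gi
  - type: PersistentVolumeClaim  # allowed storage range per PVC
    min:
      storage: 1Gi
    max:
      storage: 10Gi
EOF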
2. The ResourceQuota resource and admission controller
Although a LimitRange resource can cap the compute or storage usage of an individual container, Pod, or PVC, users can still create large numbers of such objects and thereby occupy all system resources. Kubernetes therefore provides the ResourceQuota resource to define per-namespace quotas on object counts or system resources: it can limit the total number of objects of each resource type as well as the total compute and storage resources that all objects may consume. The ResourceQuota admission controller watches incoming requests and ensures that none of them violates the constraints defined by the ResourceQuota objects in the corresponding namespace.
Administrators can thus create one ResourceQuota object per namespace; as users create resources in the namespace, the ResourceQuota admission controller tracks usage to ensure it never exceeds the limits defined in the corresponding ResourceQuota object. A create or update operation that would violate a quota constraint fails, with the API Server responding with HTTP status code "403 FORBIDDEN" and a message indicating the constraint that would have been violated. Note that once quotas on system resources such as CPU and memory are enabled in a namespace, users must specify resource requests or limits when creating Pod objects; otherwise the ResourceQuota admission controller rejects the operation.
For full details on resource quotas, see the official documentation: https://kubernetes.io/docs/concepts/policy/resource-quotas/
1) Compute resource quotas
A ResourceQuota object can limit the total compute resource requests and limits of all Pod objects in a non-terminal state within the specified namespace.
Per the official documentation, the resource types supported by the quota mechanism include cpu, memory, requests.cpu, requests.memory, limits.cpu, limits.memory, and hugepages-<size>.
2) Storage resource quotas
A ResourceQuota object can also limit the number of PVCs in a given namespace and the total storage space of those PVCs, as well as the number and total capacity of PVCs that may use a specific StorageClass in that namespace.
For example, an operator who wants to quota the gold storage class separately from the bronze storage class can define quotas as follows:
gold.storageclass.storage.k8s.io/requests.storage: 500Gi
bronze.storageclass.storage.k8s.io/requests.storage: 100Gi
As of Kubernetes 1.8, quota support for local ephemeral storage is available as an alpha feature, using the requests.ephemeral-storage and limits.ephemeral-storage resource names.
Note: if a CRI container runtime is used, container logs count against the ephemeral storage quota. This may cause Pods to be unexpectedly evicted from the node once the storage quota is exhausted.
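A hedged sketch of such a quota follows; the object name and values are hypothetical, while the resource names match the documentation cited above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota          # hypothetical name
  namespace: test
spec:
  hard:
    requests.ephemeral-storage: 10Gi     # total ephemeral-storage requests
    limits.ephemeral-storage: 20Gi       # total ephemeral-storage limits
EOF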
3) Object count quotas
Quotas can be set on all standard namespaced resource types using the following syntax:
count/<resource>.<group>: for resources in non-core groups
count/<resource>: for resources in the core group
For example:
count/persistentvolumeclaims
count/services
count/secrets
count/configmaps
count/replicationcontrollers
count/deployments.apps
count/replicasets.apps
count/statefulsets.apps
count/jobs.batch
count/cronjobs.batch
When the count/* form of resource quota is used, an object counts against the quota as soon as it exists in the server's storage. This type of quota helps protect against exhaustion of storage resources: for instance, a user may want to cap the number of Secrets on a server according to its storage capacity, since too many Secrets in a cluster can actually prevent servers and controllers from starting. Quotas can also be set on Jobs to protect against a misconfigured CronJob creating so many Jobs in a namespace that it causes a denial of service.
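As a quick illustration of the count/ syntax, such a quota can also be created imperatively; the following is a sketch with a hypothetical quota name and limits:

# Create an object-count quota in the test namespace; the name
# "object-counts" and the hard limits are purely illustrative.
kubectl create quota object-counts -n test \
  --hard=count/secrets=10,count/configmaps=10,count/deployments.apps=2

kubectl describe quota object-counts -n test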
A generic object count quota can also be applied to a limited set of resources; per the documentation, the supported types include configmaps, persistentvolumeclaims, pods, replicationcontrollers, resourcequotas, services, services.loadbalancers, services.nodeports, and secrets.
The manifest below defines a ResourceQuota object that configures limits along several dimensions: compute resources, storage resources, and object counts:
[root@k8s-master1 resourcequota]# kubectl create ns test
namespace/test created
[root@k8s-master1 resourcequota]# vim quota-example.yaml
[root@k8s-master1 resourcequota]# cat quota-example.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-example
  namespace: test
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    count/deployments.apps: "1"
    count/deployments.extensions: "1"
    persistentvolumeclaims: "2"
[root@k8s-master1 resourcequota]# kubectl apply -f quota-example.yaml
resourcequota/quota-example created
[root@k8s-master1 resourcequota]# kubectl get quota -n test
NAME            AGE   REQUEST                                                                                                                                 LIMIT
quota-example   15s   count/deployments.apps: 0/1, count/deployments.extensions: 0/1, persistentvolumeclaims: 0/2, pods: 0/5, requests.cpu: 0/1, requests.memory: 0/1Gi   limits.cpu: 0/2, limits.memory: 0/2Gi
Once created, the describe command prints the effective limits. For example, with the ResourceQuota object above created in the test namespace, the output looks like this:
[root@k8s-master1 resourcequota]# kubectl describe quota quota-example -n test
Name:                         quota-example
Namespace:                    test
Resource                      Used  Hard
--------                      ----  ----
count/deployments.apps        0     1
count/deployments.extensions  0     1
limits.cpu                    0     2
limits.memory                 0     2Gi
persistentvolumeclaims        0     2
pods                          0     5
requests.cpu                  0     1
requests.memory               0     1Gi
Creating a Deployment or similar object in the test namespace exercises the quota. For example, the following command creates a Deployment object myapp-deploy with three Pod replicas:
[root@k8s-master1 resourcequota]# vim myapp-deploy.yaml
[root@k8s-master1 resourcequota]# cat myapp-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: test
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 0.2
            memory: 256Mi
          limits:
            cpu: 0.5
            memory: 256Mi
[root@k8s-master1 resourcequota]# kubectl apply -f myapp-deploy.yaml
deployment.apps/myapp-deploy created
[root@k8s-master1 resourcequota]# kubectl get deployments -n test
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deploy   3/3     3            3           8s
After creation, the attributes of the quota-example ResourceQuota object in the test namespace change to the following state:
[root@k8s-master1 resourcequota]# kubectl describe quota quota-example -n test
Name:                         quota-example
Namespace:                    test
Resource                      Used   Hard
--------                      ----   ----
count/deployments.apps        1      1
count/deployments.extensions  0      1
limits.cpu                    1500m  2
limits.memory                 768Mi  2Gi
persistentvolumeclaims        0      2
pods                          3      5
requests.cpu                  600m   1
requests.memory               768Mi  1Gi
Scaling up myapp-deploy now quickly hits one of the quota limits, blocking the expansion. By calculation, scaling the Pod replica count to 5 would push limits.cpu past its ceiling (5 × 0.5 CPU = 2.5 > 2), so the fifth replica fails. The test below shows the Deployment expanding only to 4 replicas, with the fifth failing because limits.cpu has reached its ceiling (4 × 0.5 = 2) and requests.memory has reached its ceiling as well (4 × 256Mi = 1Gi).
[root@k8s-master1 resourcequota]# kubectl scale --current-replicas=3 --replicas=5 deployment/myapp-deploy -n test
deployment.apps/myapp-deploy scaled
[root@k8s-master1 resourcequota]# kubectl get deployment myapp-deploy -n test -o wide
NAME           READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
myapp-deploy   4/5     4            4           5m38s   nginx        nginx:latest   app=nginx
[root@k8s-master1 resourcequota]# kubectl describe quota quota-example -n test
Name:                         quota-example
Namespace:                    test
Resource                      Used   Hard
--------                      ----   ----
count/deployments.apps        1      1
count/deployments.extensions  0      1
limits.cpu                    2      2
limits.memory                 1Gi    2Gi
persistentvolumeclaims        0      2
pods                          4      5
requests.cpu                  800m   1
requests.memory               1Gi    1Gi
Note that a resource quota applies only to objects created after the ResourceQuota object itself; it imposes no constraint on objects that already exist. Also, once quotas on compute resource requests and limits are enabled, every Pod object created must set both kinds of attributes, or its creation will be blocked by the corresponding ResourceQuota object. To avoid setting these attributes manually on every Pod object, use a LimitRange object to provide defaults.
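For instance, with quota-example active in the test namespace, a Pod created without resource attributes is rejected at admission. A sketch of the expected behavior (the exact message may vary by version):

# Expected to fail admission, roughly as follows:
# Error from server (Forbidden): pods "quota-test" is forbidden:
# failed quota: quota-example: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
kubectl run quota-test --image=nginx:latest --restart=Never -n test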
4) Quota scopes
Each ResourceQuota object can additionally define a set of scopes, so that the quota applies only to resources within those scopes. The quota mechanism then counts only the resources in the intersection of the enumerated scopes. Adding a scope to a quota also limits the set of resources the quota may cover: specifying a resource outside the set allowed by the scope results in a validation error.
The currently available scopes include Terminating, NotTerminating, BestEffort, NotBestEffort, and PriorityClass.
Since Kubernetes 1.8, administrators can define PriorityClass objects to create Pods with different priorities; since 1.11, Kubernetes supports setting a separate resource quota per PriorityClass object. Using the scopeSelector field, administrators can control how much of the system resources Pods consume according to their priority.
The example below creates a quota object that matches Pods with a specific priority. It works as follows:
Pods in the cluster can take one of three priority classes: "low", "medium", or "high".
[root@k8s-master1 resourcequota]# vim high-priority.yaml
[root@k8s-master1 resourcequota]# cat high-priority.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high
value: 1000000
globalDefault: false
description: "first"
[root@k8s-master1 resourcequota]# kubectl create -f high-priority.yaml
priorityclass.scheduling.k8s.io/high created
[root@k8s-master1 resourcequota]# kubectl get priorityclass high -o wide
NAME   VALUE     GLOBAL-DEFAULT   AGE
high   1000000   false            28s
[root@k8s-master1 resourcequota]# vim medium-priority.yaml
[root@k8s-master1 resourcequota]# cat medium-priority.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium
value: 5000
globalDefault: false
description: "second"
[root@k8s-master1 resourcequota]# kubectl create -f medium-priority.yaml
priorityclass.scheduling.k8s.io/medium created
[root@k8s-master1 resourcequota]# kubectl get priorityclass medium -o wide
NAME     VALUE   GLOBAL-DEFAULT   AGE
medium   5000    false            23s
[root@k8s-master1 resourcequota]# vim low-priority.yaml
[root@k8s-master1 resourcequota]# cat low-priority.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low
value: 100
globalDefault: false
description: "third"
[root@k8s-master1 resourcequota]# kubectl create -f low-priority.yaml
priorityclass.scheduling.k8s.io/low created
[root@k8s-master1 resourcequota]# kubectl get priorityclass low -o wide
NAME   VALUE   GLOBAL-DEFAULT   AGE
low    100     false            9s
Create one quota object per priority. Save the following YAML to the file quota.yml:
[root@k8s-master1 resourcequota]# vim quota.yml
[root@k8s-master1 resourcequota]# cat quota.yml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-high
  spec:
    hard:
      cpu: "1"
      memory: 2Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-medium
  spec:
    hard:
      cpu: "0.5"
      memory: 1Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["medium"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-low
  spec:
    hard:
      cpu: "0.02"
      memory: 0.3Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["low"]
[root@k8s-master1 resourcequota]# kubectl create -f quota.yml
resourcequota/pods-high created
resourcequota/pods-medium created
resourcequota/pods-low created
Use kubectl describe quota to verify that the Used value of each quota is 0:
[root@k8s-master1 resourcequota]# kubectl describe quota
Name:       pods-high
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     1
memory      0     2Gi
pods        0     10

Name:       pods-low
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     20m
memory      0     322122547200m
pods        0     10

Name:       pods-medium
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     500m
memory      0     1Gi
pods        0     10
Create a Pod object with priority "high":
[root@k8s-master1 resourcequota]# vim high-priority-pod.yml
[root@k8s-master1 resourcequota]# cat high-priority-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: high-priority
spec:
  containers:
  - name: high-priority
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10;done"]
    resources:
      requests:
        memory: "1.2Gi"
        cpu: "500m"
      limits:
        memory: "1.5Gi"
        cpu: "500m"
  priorityClassName: high
[root@k8s-master1 resourcequota]# kubectl apply -f high-priority-pod.yml
pod/high-priority created
[root@k8s-master1 resourcequota]# kubectl get pods high-priority -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
high-priority   1/1     Running   0          17s   10.244.36.94   k8s-node1   <none>           <none>
确认 "high" 优先级配额 pods-high 的 "Used" 统计信息已更改,并且其他两个配额未更改
[root@k8s-master1 resourcequota]# kubectl describe quota
Name:       pods-high
Namespace:  default
Resource    Used           Hard
--------    ----           ----
cpu         500m           1
memory      1288490188800m 2Gi
pods        1              10

Name:       pods-low
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     20m
memory      0     322122547200m
pods        0     10

Name:       pods-medium
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     500m
memory      0     1Gi
pods        0     10