Deploying the Kubernetes Dashboard with Helm

Kubernetes Dashboard is a web UI for managing a Kubernetes cluster. The code is hosted on GitHub: https://github.com/kubernetes/dashboard

Create a TLS secret

Serving the dashboard over HTTPS requires a certificate and private key, which Kubernetes can provide through a TLS secret.

Since this is only for my own use, a self-signed certificate will do:

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ./tls.key -out ./tls.crt -subj "/CN=192.168.236.130"

This produces two files, tls.key and tls.crt; you can rename them or put them in a specific directory (if you generate them on a shared server, make sure nobody else can read them). The 192.168.236.130 in the command is my server's IP address; replace it with your own.
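
Before loading the pair into the cluster, it is worth sanity-checking what was generated (paths assume the tls.key/tls.crt produced by the command above):

```shell
# Show the certificate's subject and validity window.
openssl x509 -in ./tls.crt -noout -subject -dates

# Confirm the key and the certificate belong together:
# the RSA modulus of each must hash to the same value.
openssl rsa  -in ./tls.key -noout -modulus | openssl md5
openssl x509 -in ./tls.crt -noout -modulus | openssl md5
```

If the two hashes differ, the secret would be rejected or the TLS handshake would fail later, so it is cheaper to catch the mismatch here.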

Install the TLS secret

Next, load these two files into a Kubernetes TLS secret. I name it hongda-com-tls-secret, which the Ingress configuration refers to later. If you change this name, remember to update the YAML configuration below to match.

kubectl -n kube-system  create secret tls hongda-com-tls-secret --key ./tls.key --cert ./tls.crt

Verify:

kubectl get secret -n kube-system |grep hongda
hongda-com-tls-secret                            kubernetes.io/tls                     2      43s

Install

kubernetes-dashboard.yaml:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.hongda.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: hongda-com-tls-secret
      hosts:
        - k8s.hongda.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

Compared with the chart's default values, the following settings are changed:

  • ingress.enabled - set to true to create an Ingress that exposes the Kubernetes Dashboard service, so it can be reached from a browser
  • ingress.annotations - the previously installed Nginx Ingress Controller reverse-proxies the Dashboard. The Dashboard backend listens over HTTPS, while the controller forwards requests to backends over plain HTTP by default, so the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation (the successor of the older secure-backends annotation) tells it to forward over HTTPS instead
  • ingress.hosts - replaced with the domain the certificate was issued for
  • ingress.tls - secretName is set to the hongda-com-tls-secret created earlier, and hosts to the certificate's domain
  • rbac.clusterAdminRole - set to true so the dashboard has broad enough permissions to conveniently operate across multiple namespaces
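
If several ingress controllers are installed in the cluster, the Ingress should also be pinned to the Nginx one explicitly. That would be one more annotation in the values file; it is not part of the configuration above, shown here only as a sketch:

```yaml
ingress:
  annotations:
    kubernetes.io/ingress.class: "nginx"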

Install with Helm (note: with Helm v2, -n sets the release name and --namespace sets the target namespace):

helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system  \
-f kubernetes-dashboard.yaml

Output:

[root@master /]# helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system  -f kubernetes-dashboard.yaml
NAME:   kubernetes-dashboard
LAST DEPLOYED: Tue Aug  6 16:11:37 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                  READY  UP-TO-DATE  AVAILABLE  AGE
kubernetes-dashboard  0/1    1           0          <invalid>

==> v1/Pod(related)
NAME                                   READY  STATUS             RESTARTS  AGE
kubernetes-dashboard-848b8dd798-gtddg  0/1    ContainerCreating  0         <invalid>

==> v1/Secret
NAME                  TYPE    DATA  AGE
kubernetes-dashboard  Opaque  0     <invalid>

==> v1/Service
NAME                  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)  AGE
kubernetes-dashboard  ClusterIP  10.108.244.10  <none>       443/TCP  <invalid>

==> v1/ServiceAccount
NAME                  SECRETS  AGE
kubernetes-dashboard  1        <invalid>

==> v1beta1/ClusterRoleBinding
NAME                  AGE
kubernetes-dashboard  <invalid>

==> v1beta1/Ingress
NAME                  HOSTS           ADDRESS  PORTS  AGE
kubernetes-dashboard  k8s.hongda.com  80, 443  <invalid>


NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
From outside the cluster, the server URL(s) are:
     https://k8s.hongda.com

Check the pods:

[root@master /]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS             RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gts57                1/1     Running            1          3d6h   10.244.2.2      slaver2   <none>           <none>
coredns-5c98db65d4-qhwrw                1/1     Running            1          3d6h   10.244.1.2      slaver1   <none>           <none>
etcd-master                             1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-apiserver-master                   1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-controller-manager-master          1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kube-flannel-ds-amd64-2lwl8             1/1     Running            0          3d1h   18.16.202.227   slaver1   <none>           <none>
kube-flannel-ds-amd64-9bjck             1/1     Running            0          3d1h   18.16.202.95    slaver2   <none>           <none>
kube-flannel-ds-amd64-gxxqg             1/1     Running            0          3d1h   18.16.202.163   master    <none>           <none>
kube-proxy-8cwj4                        1/1     Running            0          107m   18.16.202.163   master    <none>           <none>
kube-proxy-j9zpz                        1/1     Running            0          107m   18.16.202.227   slaver1   <none>           <none>
kube-proxy-vfgjv                        1/1     Running            0          107m   18.16.202.95    slaver2   <none>           <none>
kube-scheduler-master                   1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kubernetes-dashboard-64f97ccb4f-nbpkx   0/1     ImagePullBackOff   0          33m    10.244.0.4      master    <none>           <none>
tiller-deploy-6787c946f8-6b5tv          1/1     Running            0          44m    10.244.1.4      slaver1   <none>           <none>

Troubleshooting

Check the chart version available in the current repository:

[root@master /]# helm search kubernetes-dashboard
NAME                       	CHART VERSION	APP VERSION	DESCRIPTION                                   
stable/kubernetes-dashboard	0.6.0        	1.8.3      	General-purpose web UI for Kubernetes clusters

This looks like a version mismatch: the latest chart in the Aliyun repository only ships app version 1.8.3, while the Helm values specify v1.10.1, which is why the expected image was never pulled.

Add a new chart repository

[root@master /]# helm repo add stable http://mirror.azure.cn/kubernetes/charts/
"stable" has been added to your repositories
[root@master /]# helm search kubernetes-dashboard
NAME                       	CHART VERSION	APP VERSION	DESCRIPTION                                   
stable/kubernetes-dashboard	1.8.0        	1.10.1     	General-purpose web UI for Kubernetes clusters

After switching repositories and reinstalling, the same problem occurred. Digging further:

[root@master /]# kubectl get namespace
NAME              STATUS   AGE
default           Active   3d8h
ingress-nginx     Active   152m
kube-node-lease   Active   3d8h
kube-public       Active   3d8h
kube-system       Active   3d8h

[root@master /]# kubectl describe pod kubernetes-dashboard-7ffdf885d6-t4htt -n kube-system
Name:           kubernetes-dashboard-7ffdf885d6-t4htt
Namespace:      kube-system
Priority:       0
Node:           master/18.16.202.163
Start Time:     Wed, 31 Jul 2019 16:46:40 +0800
Labels:         app=kubernetes-dashboard
                kubernetes.io/cluster-service=true
                pod-template-hash=7ffdf885d6
                release=kubernetes-dashboard
Annotations:    <none>
Status:         Pending
IP:             10.244.0.20
Controlled By:  ReplicaSet/kubernetes-dashboard-7ffdf885d6
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-pph4g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-pph4g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-pph4g
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  node-role.kubernetes.io/edge=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node-role.kubernetes.io/master:PreferNoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m47s                default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-7ffdf885d6-t4htt to master
  Normal   Pulling    89s (x4 over 3m45s)  kubelet, master    Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Error: ErrImagePull
  Normal   BackOff    61s (x6 over 3m30s)  kubelet, master    Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     46s (x7 over 3m30s)  kubelet, master    Error: ImagePullBackOff

So the image is clearly being pulled from the k8s.gcr.io registry, which is not reachable from this network.

In short: the image cannot be pulled directly.

Fixing the problem

Pull the same image version from Docker Hub, then retag it under the k8s.gcr.io name the chart expects.

Pull:

docker pull sacred02/kubernetes-dashboard-amd64:v1.10.1

Retag:

docker tag sacred02/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Remove the now-redundant Docker Hub tag:

docker rmi sacred02/kubernetes-dashboard-amd64:v1.10.1
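
Retagging works but has to be repeated on every node the pod can be scheduled to (here the nodeSelector pins it to one node). An alternative sketch is to override the image location in kubernetes-dashboard.yaml instead; the Aliyun mirror repository below is an assumption, not verified here, so substitute any registry reachable from your nodes:

```yaml
image:
  # hypothetical mirror; replace with a registry your nodes can reach
  repository: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64
  tag: v1.10.1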

Install with Helm again:

helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system  -f kubernetes-dashboard.yaml

Check the release:

[root@master /]# helm ls
NAME                	REVISION	UPDATED                 	STATUS  	CHART                     	APP VERSION	NAMESPACE    
kubernetes-dashboard	1       	Wed Jul 31 17:11:35 2019	DEPLOYED	kubernetes-dashboard-1.8.0	1.10.1     	kube-system  
nginx-ingress       	1       	Wed Jul 31 13:59:14 2019	DEPLOYED	nginx-ingress-1.11.5      	0.25.0     	ingress-nginx

Check the pods and services:

[root@master /]# kubectl get po,svc --all-namespaces -o wide
NAMESPACE       NAME                                                 READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
default         pod/curl-6bf6db5c4f-vhsqc                            1/1     Running   1          10d   10.244.2.3      slaver2   <none>           <none>
ingress-nginx   pod/nginx-ingress-controller-b89575c7f-2xtkk         1/1     Running   0          26m   18.16.202.163   master    <none>           <none>
ingress-nginx   pod/nginx-ingress-default-backend-7b8b45bd49-g4mbz   1/1     Running   0          26m   10.244.0.23     master    <none>           <none>
kube-system     pod/coredns-5c98db65d4-gts57                         1/1     Running   7          11d   10.244.2.2      slaver2   <none>           <none>
kube-system     pod/coredns-5c98db65d4-qhwrw                         1/1     Running   6          11d   10.244.1.2      slaver1   <none>           <none>
kube-system     pod/etcd-master                                      1/1     Running   4          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-apiserver-master                            1/1     Running   4          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-controller-manager-master                   1/1     Running   8          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-flannel-ds-amd64-2lwl8                      1/1     Running   0          11d   18.16.202.227   slaver1   <none>           <none>
kube-system     pod/kube-flannel-ds-amd64-9bjck                      1/1     Running   0          11d   18.16.202.95    slaver2   <none>           <none>
kube-system     pod/kube-flannel-ds-amd64-gxxqg                      1/1     Running   3          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kube-proxy-8cwj4                                 1/1     Running   3          8d    18.16.202.163   master    <none>           <none>
kube-system     pod/kube-proxy-j9zpz                                 1/1     Running   0          8d    18.16.202.227   slaver1   <none>           <none>
kube-system     pod/kube-proxy-vfgjv                                 1/1     Running   0          8d    18.16.202.95    slaver2   <none>           <none>
kube-system     pod/kube-scheduler-master                            1/1     Running   8          11d   18.16.202.163   master    <none>           <none>
kube-system     pod/kubernetes-dashboard-848b8dd798-gtddg            1/1     Running   0          40s   10.244.0.24     master    <none>           <none>
kube-system     pod/tiller-deploy-6787c946f8-6b5tv                   1/1     Running   0          8d    10.244.1.4      slaver1   <none>           <none>

NAMESPACE       NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default         service/kubernetes                      ClusterIP      10.96.0.1        <none>        443/TCP                      11d   <none>
ingress-nginx   service/nginx-ingress-controller        LoadBalancer   10.111.25.193    <pending>     80:31577/TCP,443:31246/TCP   26m   app=nginx-ingress,component=controller,release=nginx-ingress
ingress-nginx   service/nginx-ingress-default-backend   ClusterIP      10.106.126.222   <none>        80/TCP                       26m   app=nginx-ingress,component=default-backend,release=nginx-ingress
kube-system     service/kube-dns                        ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       11d   k8s-app=kube-dns
kube-system     service/kubernetes-dashboard            ClusterIP      10.108.244.10    <none>        443/TCP                      40s   app=kubernetes-dashboard,release=kubernetes-dashboard
kube-system     service/tiller-deploy                   ClusterIP      10.98.116.74     <none>        44134/TCP                    8d    app=helm,name=tiller

View the login token

[root@master /]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-4v624                 kubernetes.io/service-account-token   3      5m42s
[root@master /]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-4v624
Name:         kubernetes-dashboard-token-4v624
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6688cc3b-5f28-4e38-a37a-67c0927752ab

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi00djYyNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2ODhjYzNiLTVmMjgtNGUzOC1hMzdhLTY3YzA5Mjc3NTJhYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Wq6xvzLSJNnt9Zg9u5J-85RB0-Slf6HMFfHzNwDGJDn3Yc2lfxL88YXi0ForX4Q9F0v96nt_GNKOm6DB8FGoKR3cALeWpeuoXSSY_ryY8tj6KFN1mrOlvVnRRgsk_lReOxLZexvR58OQ7N04pDrZ6Okr3PDB22i-31xPaVPBt6BhZU5ee6VZyXr7y3pj8VAJSki7tnr7ZRlG6WJizrMf25sZ9xdznwcGJ7yGz2gD3moYhNKQa5KPwcLOGTfg3GuLUNoQjdz5wUmvx4X2YMhfj6Fx7I3mZzr9whrfhO2PWuNtFheaKscSg2UyIPH5Zav9WTSzXxDedORh8BjX3cUJcQ
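
The token is a standard JWT: its second dot-separated segment is base64url-encoded JSON, so the service-account identity it carries can be inspected offline. A sketch with a deliberately shortened, made-up token (the real token above decodes the same way):

```shell
# Hypothetical, shortened token -- header {"alg":"RS256"}, payload {"sub":"admin"}.
TOKEN='eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJhZG1pbiJ9.c2ln'

# Extract the payload and map base64url's '-'/'_' back to '+'/'/'.
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')

# JWTs omit '=' padding; restore it so base64 -d accepts the input.
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}="  ;;
esac

printf '%s' "$PAYLOAD" | base64 -d; echo   # prints {"sub":"admin"}
```

Running this on the real token shows the "sub" claim system:serviceaccount:kube-system:kubernetes-dashboard, i.e. the service account the dashboard session will act as.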

Check k8s.hongda.com:

[root@master /]# ping k8s.hongda.com
PING k8s.hongda.com (13.209.58.121) 56(84) bytes of data.
From 18.16.202.169 (18.16.202.169): icmp_seq=2 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
From 18.16.202.169 (18.16.202.169): icmp_seq=3 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
^C
--- k8s.hongda.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2002ms
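
The ping above shows k8s.hongda.com resolving to an unrelated public address, so the name still has to be pointed at the node running the Nginx ingress controller (the master, 18.16.202.163, in this cluster). On a real client that means appending a line to /etc/hosts; the sketch below writes to a temporary file instead, so the idea can be shown without touching the system file:

```shell
# HOSTS_FILE would be /etc/hosts on the client machine (needs root).
HOSTS_FILE=$(mktemp)

# Map the dashboard hostname to the ingress node's IP.
echo '18.16.202.163 k8s.hongda.com' >> "$HOSTS_FILE"

grep 'k8s.hongda.com' "$HOSTS_FILE"   # prints 18.16.202.163 k8s.hongda.com
```

With that entry in place, https://k8s.hongda.com from the client reaches the ingress, which terminates TLS with hongda-com-tls-secret and proxies through to the dashboard.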


posted @ 2019-08-01 18:41  hongdada