Highly available k8s cluster (kubeadm)

1. GitHub address of the yaml files needed below:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

The yaml files used in the following lab need to be cloned or downloaded from the GitHub repo above and then uploaded to the master node of the k8s cluster; copying and pasting the content directly can break the formatting.
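A minimal sketch of this step (assuming git and scp are available on your workstation and that master1 is reachable at the address configured below; the /root/ destination is just an example):

git clone https://github.com/luckylucky421/kubernetes1.17.3.git
scp kubernetes1.17.3/*.yaml root@192.168.0.6:/root/    # 192.168.0.6 is master1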

Main text

I. Prepare the lab environment

1. Prepare four CentOS 7 virtual machines for the k8s cluster

II. Initialize the lab environment

1. Configure static IPs

1.1 Configure the network on the master1 node (the other three nodes are similar)

Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so that it looks like the following:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.0.6
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.1
DEFROUTE=yes
NAME=ens33
DEVICE=ens33
ONBOOT=yes
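After saving the file, restart the network service and check that the address took effect (a quick sanity check, assuming the legacy network service of CentOS 7 is in use):

systemctl restart network
ip addr show ens33        # should now show 192.168.0.6/24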

2. Change the yum repos (run on every node)

(1) Back up the original yum repo and replace it with the Aliyun mirror

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum makecache fast

(2) Configure the yum repo needed to install k8s

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum clean all
yum makecache fast
yum -y update
yum -y install yum-utils device-mapper-persistent-data lvm2

(3) Add the Docker CE repo

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum clean all
yum makecache fast

3. Install basic packages (run on every node)

yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate

4. Stop the firewalld firewall (run on every node). CentOS 7 uses firewalld by default; stop the firewalld service and disable it:

 systemctl stop firewalld  && systemctl  disable  firewalld

5. Install iptables (run on every node). If you are not comfortable with firewalld you can install iptables instead; this step is optional, depending on your needs.

5.1 Install iptables

yum install iptables-services -y

5.2 Disable iptables

service iptables stop   && systemctl disable iptables

6. Time synchronization (run on every node)

6.1 Sync the time once

ntpdate cn.pool.ntp.org

6.2 Add a cron job to sync once every hour

1) crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

2) Restart the crond service:
service crond restart

7. Disable SELinux (run on every node)

Disable SELinux and reboot:

sed -i  's/SELINUX=enforcing/SELINUX=disabled/'  /etc/sysconfig/selinux
sed -i  's/SELINUX=enforcing/SELINUX=disabled/g'  /etc/selinux/config
reboot -f

8. Disable the swap partition (run on every node)

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
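You can verify that swap is really off (optional check):

free -m        # the Swap line should show 0 total
swapon -s      # should print nothing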

9. Adjust kernel parameters (run on every node)

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

10. Set the hostname

hostnamectl set-hostname master1
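Run the matching command on each of the other nodes so the hostnames line up with the /etc/hosts entries in the next step, for example:

hostnamectl set-hostname master2    # on master2
hostnamectl set-hostname master3    # on master3
hostnamectl set-hostname node1      # on node1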

11. Configure the hosts file (run on every node)

Add the following lines to /etc/hosts:

192.168.0.6  master1
192.168.0.16 master2
192.168.0.26 master3
192.168.0.56 node1

12. Configure passwordless SSH from master1 to node1, and from master1 to master2 and master3

Run on master1:

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub root@master2
ssh-copy-id -i .ssh/id_rsa.pub root@master3
ssh-copy-id -i .ssh/id_rsa.pub root@node1
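You can confirm passwordless login works before continuing; each command should print the remote hostname without asking for a password:

ssh master2 hostname
ssh master3 hostname
ssh node1 hostname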

III. Install the Kubernetes 1.18.2 high-availability cluster

1. Install Docker 19.03 (run on every node)

1.1 List the available Docker versions

yum list docker-ce --showduplicates |sort -r

1.2 Install version 19.03.7

yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker

1.3 Edit the Docker configuration file

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

1.4 Restart Docker so the configuration takes effect

systemctl daemon-reload && systemctl restart docker

1.5 Make bridged packets pass through iptables and make the settings permanent

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables

echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1

""" > /etc/sysctl.conf

sysctl -p

1.6 Enable IPVS. Without IPVS, kube-proxy falls back to iptables, which is less efficient, so the official docs recommend loading the IPVS kernel modules.

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do 
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module} 
fi
done
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

2. Install Kubernetes 1.18.2

2.1 Install kubeadm and kubelet on master1, master2, master3 and node1

yum install kubeadm-1.18.2 kubelet-1.18.2 -y

systemctl enable kubelet

2.2 Upload the images to master1, master2, master3 and node1, then load them manually with docker load -i as shown below; the images are in the Baidu netdisk.

docker load -i   1-18-kube-apiserver.tar.gz
docker load -i   1-18-kube-scheduler.tar.gz
docker load -i   1-18-kube-controller-manager.tar.gz
docker load -i   1-18-pause.tar.gz
docker load -i   1-18-cordns.tar.gz
docker load -i   1-18-etcd.tar.gz
docker load -i   1-18-kube-proxy.tar.gz

Notes:

The pause version is 3.2; the image used is k8s.gcr.io/pause:3.2

The etcd version is 3.4.3; the image used is k8s.gcr.io/etcd:3.4.3-0

The coredns version is 1.6.7; the image used is k8s.gcr.io/coredns:1.6.7

The apiserver, scheduler, controller-manager and kube-proxy version is 1.18.2; the images used are:

k8s.gcr.io/kube-apiserver:v1.18.2

k8s.gcr.io/kube-controller-manager:v1.18.2

k8s.gcr.io/kube-scheduler:v1.18.2

k8s.gcr.io/kube-proxy:v1.18.2

2.3 Deploy keepalived + LVS to make the master nodes highly available (HA for the apiserver)

(1) Install keepalived + LVS (run on every master node)

yum install -y socat keepalived ipvsadm conntrack

(2) Edit master1's keepalived.conf as follows

vim  /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

(3) Edit master2's keepalived.conf as follows

vim /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

(4) Edit master3's keepalived.conf as follows

Edit /etc/keepalived/keepalived.conf:

global_defs {
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface ens33
    virtual_router_id 80
    priority 30
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.0.199
    }
}
virtual_server 192.168.0.199 6443 {
    delay_loop 6
    lb_algo loadbalance
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.6 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.16 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.0.26 6443 {
        weight 1
        SSL_GET {
            url {
              path /healthz
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Important point, must read, otherwise you will hit a major pitfall in production

keepalived must be configured as BACKUP and in non-preempt mode (nopreempt). Suppose master1 goes down: after it comes back up, the VIP will not automatically float back to master1, which keeps the k8s cluster healthy. When master1 boots, the apiserver and other components are not running immediately; if the VIP floated back to master1 at that moment, the whole cluster would go down. That is why we configure non-preempt mode.

Start in the order master1 -> master2 -> master3; run the following command on master1, master2 and master3 in turn:

systemctl enable keepalived && systemctl start keepalived && systemctl status keepalived 
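Once keepalived is running you can check which master currently holds the VIP and that the LVS virtual server exists (a quick check; right after startup the VIP should normally sit on master1):

ip addr show ens33 | grep 192.168.0.199    # the VIP should appear on exactly one master
ipvsadm -Ln                                # should list 192.168.0.199:6443 with the three real servers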

2.4 Initialize the k8s cluster on the master1 node; run the following on master1

If you uploaded the images to every node manually as described in section 2.2, initialize with the yaml file below. Please use that method consistently (upload the images to every machine and load them manually) so the rest of the lab works as expected.

cat   kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.0.199:6443
apiServer:
 certSANs:
 - 192.168.0.6
 - 192.168.0.16
 - 192.168.0.26
 - 192.168.0.56
 - 192.168.0.199
networking:
 podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind:  KubeProxyConfiguration
mode: ipvs

 kubeadm init --config kubeadm-config.yaml

Note: if you did not upload the images to every node as in section 2.2, use the yaml file below instead. It adds the imageRepository: registry.aliyuncs.com/google_containers parameter, which pulls from the Aliyun registry that we can reach directly. This method is simpler, but just be aware of it for now and do not use it here; if you do, manually joining nodes to the k8s cluster later may run into problems.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.0.199:6443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
 certSANs:
 - 192.168.0.6
 - 192.168.0.16
 - 192.168.0.26
 - 192.168.0.56
 - 192.168.0.199
networking:
 podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind:  KubeProxyConfiguration
mode: ipvs

When kubeadm init --config kubeadm-config.yaml succeeds, it prints output like the following, which means the initialization was successful:

To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.0.199:6443 --token 7dwluq.x6nypje7h55rnrhl \
    --discovery-token-ca-cert-hash sha256:fa75619ab0bb6273126350a9dbda9aa6c89828c2c4650299fe1647ab510a7e6c

Note: keep the kubeadm join ... command; we will need to run it on master2, master3 and node1 to join them to the cluster. The token and hash are different every time init is run, so record the output of your own run; it is used below.
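If you lose this output, you do not have to re-run the init: a fresh worker join command can be generated on master1 at any time with kubeadm's built-in command below (for the control-plane masters, append --control-plane as shown above, since we copy the certificates over manually in section 2.6):

kubeadm token create --print-join-command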

2.5 Run the following on master1 so that kubectl has permission to operate on k8s resources

mkdir -p $HOME/.kube
sudo cp -i  /etc/kubernetes/admin.conf  $HOME/.kube/config
sudo chown $(id -u):$(id -g)  $HOME/.kube/config

Run on master1:

kubectl get nodes

The output looks like this; the master1 node is NotReady:

NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   8m11s   v1.18.2

kubectl get pods -n kube-system

The output looks like this; you can see coredns is also in Pending state:

coredns-7ff77c879f-j48h6         0/1     Pending   0          3m16s
coredns-7ff77c879f-lrb77         0/1     Pending   0          3m16s

Above, the STATUS is NotReady and coredns is Pending because no network plugin is installed yet; we need calico or flannel. Next we install calico. Install the calico network plugin from the master1 node:

The images calico needs are quay.io/calico/cni:v3.5.3 and quay.io/calico/node:v3.5.3; they are in the Baidu netdisk mentioned at the beginning of the article.

Manually upload the two image archives above to every node and load them with docker load -i:

docker load -i   cni.tar.gz
docker load -i   calico-node.tar.gz

Run the following on master1:

kubectl apply -f calico.yaml

The content of calico.yaml can be copied from the following link:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/calico.yaml

If the link above does not open, you can visit the GitHub address below, clone or download the repository, extract it, and then upload the file to the master1 node:

https://github.com/luckylucky421/kubernetes1.17.3/tree/master

Run on master1:

kubectl get nodes

The output now shows STATUS Ready:

NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   98m   v1.18.2

kubectl get pods -n kube-system

coredns is also Running now, which means calico on master1 is installed successfully:

NAME                              READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                 1/1     Running   0          17m
coredns-7ff77c879f-j48h6          1/1     Running   0          97m
coredns-7ff77c879f-lrb77          1/1     Running   0          97m
etcd-master1                      1/1     Running   0          97m
kube-apiserver-master1            1/1     Running   0          97m
kube-controller-manager-master1   1/1     Running   0          97m
kube-proxy-njft6                  1/1     Running   0          97m
kube-scheduler-master1            1/1     Running   0          97m

2.6 Copy the certificates from master1 to master2 and master3

(1) Create the certificate directories on master2 and master3

cd /root && mkdir -p /etc/kubernetes/pki/etcd &&mkdir -p ~/.kube/

(2) Copy the certificates from master1 to master2 and master3; run the following on master1. It is best to copy the scp commands one line at a time to avoid mistakes:

scp /etc/kubernetes/pki/ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master2:/etc/kubernetes/pki/etcd/    (this requires the /etc/kubernetes/pki/etcd/ directory created in step (1))
scp /etc/kubernetes/pki/etcd/ca.key master2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key master3:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt master3:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key master3:/etc/kubernetes/pki/etcd/
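The same copies can also be done with a small loop instead of line by line (a sketch that relies on the passwordless SSH configured in step 12 of section II):

for host in master2 master3; do
  scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} ${host}:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} ${host}:/etc/kubernetes/pki/etcd/
done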

After the certificates are copied, run the kubeadm join ... --control-plane command on master2 and master3 (copy it from your own kubeadm init output); this joins master2 and master3 to the cluster. For reference, the init output looked like this:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.24.160.194:6443 --token qubui1.kw617wpcc9vhjks0 \
    --discovery-token-ca-cert-hash sha256:849f3089e1702e557444637a9e2375c474bab2c61f168774c7c3d67124d42c25 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.24.160.194:6443 --token qubui1.kw617wpcc9vhjks0 \
    --discovery-token-ca-cert-hash sha256:849f3089e1702e557444637a9e2375c474bab2c61f168774c7c3d67124d42c25 

--control-plane: this flag indicates that the node being joined to the k8s cluster is a master (control-plane) node.

Then run on master2 and master3:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.7 Join node1 to the k8s cluster; run on node1

kubeadm join 172.24.160.194:6443 --token qubui1.kw617wpcc9vhjks0 \
    --discovery-token-ca-cert-hash sha256:849f3089e1702e557444637a9e2375c474bab2c61f168774c7c3d67124d42c25  

Note: the kubeadm join command above is the one generated during initialization in section 2.4.

2.8 Check the cluster node status on master1

kubectl get nodes  

The output should show master1, master2, master3 and node1 all in Ready state, which means node1 has also joined the k8s cluster. With the steps above, the multi-master high-availability k8s cluster is complete.

If the hostnames were not set consistently, they can be changed after a node has joined; see https://mp.weixin.qq.com/s/Na5Ic3gRKS6YjVDQ_6UJRg

2.9 Install traefik

Upload the traefik image to every node and load it with docker load -i as shown below; the image is in the Baidu netdisk mentioned at the beginning of the article.

docker load -i  traefik_1_7_9.tar.gz

The image traefik uses is k8s.gcr.io/traefik:1.7.9

1) Generate the traefik certificate; run on master1

mkdir  ~/ikube/tls/ -p

echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes

[ req_distinguished_name ]
countryName                     = Country Name (2 letter code)
countryName_value               = CN
stateOrProvinceName             = State or Province Name (full name)
stateOrProvinceName_value       = Beijing
localityName                    = Locality Name (eg, city)
localityName_value              = Haidian
organizationName                = Organization Name (eg, company)
organizationName_value          = Channelsoft
organizationalUnitName          = Organizational Unit Name (eg, section)
organizationalUnitName_value    = R & D Department
commonName                      = Common Name (eg, your name or your server\'s hostname)
commonName_value                = *.multi.io
emailAddress                    = Email Address
emailAddress_value              = lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf

openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key

kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
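Before applying traefik.yaml you can confirm the tls secret was created in the kube-system namespace:

kubectl get secret ssl -n kube-system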

2) Apply the yaml file to create traefik

kubectl apply -f traefik.yaml

The content of traefik.yaml can be copied from the following link:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/traefik.yaml

3) Check whether traefik was deployed successfully:

kubectl get pods -n kube-system

Output like the following means the deployment succeeded:

traefik-ingress-controller-csbp8   1/1     Running   0     5s
traefik-ingress-controller-hqkwf   1/1     Running   0     5s
traefik-ingress-controller-wtjqd   1/1     Running   0     5s

3. Install kubernetes-dashboard 2.0 (the Kubernetes web UI)

Upload the kubernetes-dashboard images to every node and load them with docker load -i as shown below; the images are in the Baidu netdisk mentioned at the beginning of the article.

docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz

Run on master1:

kubectl apply -f kubernetes-dashboard.yaml

The content of kubernetes-dashboard.yaml can be copied from the following link: https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/kubernetes-dashboard.yaml

If the link above is not reachable, you can visit the link below, clone or download the branch, and manually upload the yaml file to master1:

https://github.com/luckylucky421/kubernetes1.17.3

Check whether the dashboard installed successfully:

kubectl get pods -n kubernetes-dashboard

Output like the following means the dashboard installed successfully:

NAME                                         READY   STATUS    RESTARTS   AGE  
dashboard-metrics-scraper-694557449d-8xmtf   1/1     Running   0          60s   
kubernetes-dashboard-5f98bdb684-ph9wg        1/1     Running   2          60s   

Check the dashboard front-end service:

kubectl get svc -n kubernetes-dashboard

The output looks like this:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP   50s   
kubernetes-dashboard        ClusterIP   10.105.253.155   <none>        443/TCP    50s 

Change the service type to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort, then save and exit.
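Alternatively, the same change can be made non-interactively with kubectl patch:

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'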

kubectl get svc -n kubernetes-dashboard

The output looks like this:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.23.9      <none>        8000/TCP        3m59s
kubernetes-dashboard        NodePort    10.105.253.155   <none>        443:31175/TCP   4m

Above you can see the service type is now NodePort. You can reach the kubernetes dashboard via the master1 node IP on port 31175; in my environment the address is:

https://192.168.0.6:31175/

You should see the dashboard login page.

3.1 Log in to the dashboard with the default token specified in the yaml file

1) List the secrets in the kubernetes-dashboard namespace

kubectl get secret -n kubernetes-dashboard

The output looks like this:

NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3      5m27s

2) Find the secret that carries the token, kubernetes-dashboard-token-ngcmg

kubectl  describe  secret  kubernetes-dashboard-token-ngcmg  -n   kubernetes-dashboard

The output looks like this:

...

...
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Take the value after token: and paste it into the token field on the browser login page to log in:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
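If you prefer, the token can also be extracted in a single command instead of reading it from the describe output (using the secret name shown above; the token is stored base64-encoded in the secret's data):

kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-ngcmg -o jsonpath='{.data.token}' | base64 -d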

 

Click Sign in to log in. By default you can only see the contents of the default namespace.

3.2 Create an administrator token with permission to view any namespace

kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

1) List the secrets in the kubernetes-dashboard namespace

kubectl get secret -n kubernetes-dashboard

The output looks like this:

NAME                               TYPE                                  DATA   AGE
default-token-vxd7t                kubernetes.io/service-account-token   3      5m27s
kubernetes-dashboard-certs         Opaque                                0      5m27s
kubernetes-dashboard-csrf          Opaque                                1      5m27s
kubernetes-dashboard-key-holder    Opaque                                2      5m27s
kubernetes-dashboard-token-ngcmg   kubernetes.io/service-account-token   3   

2) Find the secret that carries the token, kubernetes-dashboard-token-ngcmg

kubectl  describe  secret  kubernetes-dashboard-token-ngcmg  -n   kubernetes-dashboard

The output looks like this:

...

...
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

Take the value after token: and paste it into the token field on the browser login page; with the cluster role binding above, this token now grants access to every namespace:

eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA

4. Install the metrics monitoring components

Upload the metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz images to every node and load them with docker load -i:

docker load -i metrics-server-amd64_0_3_1.tar.gz
docker load -i addon.tar.gz

The metrics-server version is 0.3.1; the image used is k8s.gcr.io/metrics-server-amd64:v0.3.1

The addon-resizer version is 1.8.4; the image used is k8s.gcr.io/addon-resizer:1.8.4

Run on the k8s master node (master1):

kubectl apply -f metrics.yaml

The content of metrics.yaml can be copied from the following link:

https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml

If the link above is not reachable, visit the link below, clone or download the branch, and manually upload the yaml file to master1:

https://github.com/luckylucky421/kubernetes1.17.3

After all the components above are installed, run kubectl get pods -n kube-system -o wide to check that they are healthy; a STATUS of Running means the component is fine, as shown below:

NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-6rvqm                       1/1     Running   10         14h
calico-node-cbrvw                       1/1     Running   4          14h
calico-node-l6628                       0/1     Running   0          9h
coredns-7ff77c879f-j48h6                1/1     Running   2          16h
coredns-7ff77c879f-lrb77                1/1     Running   2          16h
etcd-master1                            1/1     Running   37         16h
etcd-master2                            1/1     Running   7          9h
kube-apiserver-master1                  1/1     Running   52         16h
kube-apiserver-master2                  1/1     Running   11         14h
kube-controller-manager-master1         1/1     Running   42         16h
kube-controller-manager-master2         1/1     Running   13         14h
kube-proxy-dq6vc                        1/1     Running   2          14h
kube-proxy-njft6                        1/1     Running   2          16h
kube-proxy-stv52                        1/1     Running   0          9h
kube-scheduler-master1                  1/1     Running   37         16h
kube-scheduler-master2                  1/1     Running   15         14h
kubernetes-dashboard-85f499b587-dbf72   1/1     Running   1          8h
metrics-server-8459f8db8c-5p59m         2/2     Running   0          33s
traefik-ingress-controller-csbp8        1/1     Running   0          8h
traefik-ingress-controller-hqkwf        1/1     Running   0          8h
traefik-ingress-controller-wtjqd        1/1     Running   0          8h
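With metrics-server Running you can also check that resource metrics are being collected (it can take a minute or two after the pod starts before data is returned):

kubectl top nodes
kubectl top pods -n kube-system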

 

https://mp.weixin.qq.com/s/FZkATvHxcZuhqx5Hsxd1OA

https://mp.weixin.qq.com/s/UuyhPhe15sV8D4lApe7X7w   (more detailed article)



 
