Kubernetes Binary Deployment (Multi-Master)
High-availability scheme for a multi-Master k8s cluster
- The purpose is to achieve high availability
- The apiserver communicates externally over secure port 6443 and internally over port 8080
High-availability implementation
- etcd: the etcd cluster needs at least 3 replicas (always an odd number); the Raft algorithm guarantees data consistency
- Node: runs the workloads and communicates with the Master
- Master: high availability uses a keepalived + LB scheme. keepalived provides the VIP and master/backup failover; the LB (nginx or haproxy) provides load balancing. The masters are added to the nginx upstream pool, nginx forwards each request to one of the apiservers using round-robin, and the scheduler then schedules the workload onto the appropriate node.
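The round-robin forwarding described above can be sketched in a few lines of bash. This is illustrative only (nginx's upstream module performs the equivalent selection internally); the two backend addresses are the master IPs used later in this guide:

```shell
#!/bin/bash
# Illustrative round-robin over the two apiserver backends; nginx's
# upstream module performs the equivalent selection internally.
backends=(192.168.80.10:6443 192.168.80.20:6443)
for i in 0 1 2 3; do
    # pick backend i mod (number of backends)
    echo "request $i -> ${backends[$((i % ${#backends[@]}))]}"
done
```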
Multi-Master HA setup process
- Copy the etcd and k8s certificates, binaries, config files, and the master components' service unit files from master01 to master02
- On master02, edit the apiserver configuration to use master02's own addresses
- Start the master components
- Deploy keepalived + LB (nginx, haproxy) for high availability and load balancing
- keepalived needs a health-check script to perform automatic failover
- On all Node nodes, edit the kubeconfig files of the node components so the server IP points at the VIP
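The keepalived failover decision in the steps above can be illustrated with a minimal sketch. The priorities 100 and 90 are the values configured later in this guide; keepalived itself implements the real VRRP election, so this is only a model of the rule it follows:

```shell
#!/bin/bash
# Sketch of VRRP master election: the healthy node with the highest
# priority holds the VIP; if its health check fails, the backup takes over.
lb01_priority=100    # lb01 is MASTER in this guide
lb02_priority=90     # lb02 is BACKUP
lb01_alive=true      # set to false to simulate lb01 failing its nginx check

if $lb01_alive && [ "$lb01_priority" -gt "$lb02_priority" ]; then
    holder="lb01"
else
    holder="lb02"
fi
echo "VIP 192.168.80.100 is held by $holder"
```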
Multi-Master cluster setup (deploying the master02 node)
This continues from the single-node deployment in the previous article.
```shell
# Copy the certificate files, master component config files, and service
# unit files from master01 to master02
scp -r /opt/etcd/ root@192.168.80.20:/opt/
scp -r /opt/kubernetes/ root@192.168.80.20:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.80.20:/usr/lib/systemd/system/

# Edit the IPs in the kube-apiserver config file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.80.10:2379,https://192.168.80.11:2379,https://192.168.80.12:2379 \
--bind-address=192.168.80.20 \        # change
--secure-port=6443 \
--advertise-address=192.168.80.20 \   # change
......

# On master02, start each service and enable it at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

# Check node status
ln -s /opt/kubernetes/bin/* /usr/local/bin/
kubectl get nodes
kubectl get nodes -o wide    # -o wide: extra columns; for Pods this shows the Node each Pod runs on
# The node status seen on master02 at this point is only what was read from
# etcd; the nodes have not actually established a connection with master02 yet,
# so a VIP is needed to associate the nodes with both masters.
```
Run on master01
Run on master02
Load balancer deployment
```shell
# Configure the load balancer cluster with active/standby failover
# (nginx provides load balancing, keepalived provides failover)
##### Run on lb01 and lb02 #####
# Configure the official nginx yum repository
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

yum install nginx -y

# Edit the nginx config: a layer-4 reverse proxy that load-balances across
# the two k8s master node IPs on port 6443
vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

# add
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.80.10:6443;
        server 192.168.80.20:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......

# Check the config file syntax
nginx -t

# Start nginx and confirm it is listening on port 6443
systemctl start nginx
systemctl enable nginx
netstat -natp | grep nginx


# Deploy keepalived
yum install keepalived -y

# Edit the keepalived config file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    # notification recipient addresses
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    # notification sender address
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER     # NGINX_MASTER on lb01, NGINX_BACKUP on lb02
}

# add a periodically executed script
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # path to the nginx liveness-check script
}

vrrp_instance VI_1 {
    state MASTER               # MASTER on lb01, BACKUP on lb02
    interface ens33            # network interface name, ens33
    virtual_router_id 51       # VRID, must match on both nodes
    priority 100               # 100 on lb01, 90 on lb02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.100/24      # the VIP
    }
    track_script {
        check_nginx            # the vrrp_script defined above
    }
}


# Create the nginx status-check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# egrep -cv "grep|$$" filters out the grep process itself and the current shell's PID
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi


chmod +x /etc/nginx/check_nginx.sh

# Start keepalived (nginx MUST be started before keepalived)
systemctl start keepalived
systemctl enable keepalived
ip a    # verify the VIP is present

# On the node nodes, point bootstrap.kubeconfig and kubelet.kubeconfig at the VIP
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
server: https://192.168.80.100:6443

vim kubelet.kubeconfig
server: https://192.168.80.100:6443

vim kube-proxy.kubeconfig
server: https://192.168.80.100:6443

# Restart kubelet and kube-proxy
systemctl restart kubelet.service
systemctl restart kube-proxy.service

# View nginx's k8s access log on lb01
tail /var/log/nginx/k8s-access.log


##### Run on master01 #####
# Test creating a pod
kubectl run nginx --image=nginx

# Check the Pod's status
kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-nf9sk   0/1     ContainerCreating   0          33s    # being created

kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-nf9sk   1/1     Running   0          80s    # created, now running

kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-dbddb74b8-26r9l   1/1     Running   0          10m   172.17.36.2   192.168.80.15   <none>
# READY 1/1 means this Pod contains 1 container

# On a node in the matching network segment, access it directly with a browser or curl
curl 172.17.36.2

# Viewing the nginx logs on master01 now fails with a permissions error
kubectl logs nginx-dbddb74b8-nf9sk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( nginx-dbddb74b8-nf9sk)

# On master01, grant the cluster-admin role to user system:anonymous
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

# View the nginx logs again
kubectl logs nginx-dbddb74b8-nf9sk
```
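The `egrep -cv "grep|$$"` filter in check_nginx.sh is easy to misread. This self-contained sketch uses a hard-coded sample in place of real `ps -ef` output to show that it excludes both the grep process itself and the current shell's PID before counting:

```shell
#!/bin/bash
# Simulated "ps -ef | grep nginx" output: one real nginx process plus the
# grep command itself (line 1 has no digits, so $$ cannot accidentally match).
sample='root nginx: master process
root grep nginx'

# -c counts matches, -v inverts the match: count lines containing neither
# "grep" nor the current shell PID, i.e. only genuine nginx processes.
count=$(printf '%s\n' "$sample" | grep nginx | grep -Ecv "grep|$$")
echo "$count"
```

A count of 0 means no real nginx process survived the filter, which is the condition under which check_nginx.sh stops keepalived so the VIP fails over.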
Run on lb01 and lb02
yum install keepalived -y
vim /etc/keepalived/keepalived.conf
Modify the kubeconfig files on the node nodes
```shell
# Point the bootstrap.kubeconfig and kubelet.kubeconfig files at the VIP

cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
server: https://192.168.208.100:6443

vim kubelet.kubeconfig
server: https://192.168.208.100:6443

vim kube-proxy.kubeconfig
server: https://192.168.208.100:6443

# Restart kubelet and kube-proxy
systemctl restart kubelet.service
systemctl restart kube-proxy.service
```
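Rather than opening each file in vim, the server line can be rewritten in all three kubeconfig files with a single sed command. A self-contained sketch follows; it writes sample files into a temp directory so it can run anywhere, whereas on a real node the files live in /opt/kubernetes/cfg/ and the VIP is the one configured above:

```shell
#!/bin/bash
# Create sample kubeconfig fragments in a temp dir to keep the sketch runnable.
dir=$(mktemp -d)
for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
    echo 'server: https://192.168.208.10:6443' > "$dir/$f"
done

# Rewrite the apiserver address to the VIP in every kubeconfig at once.
sed -i 's#server: https://.*:6443#server: https://192.168.208.100:6443#' "$dir"/*.kubeconfig

# All three files should now carry the same VIP server line.
result=$(grep -h server "$dir"/*.kubeconfig | sort -u)
echo "$result"
rm -rf "$dir"
```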
Run on master01
Run on node2
View the logs on master01