Deploying a highly available K8s cluster with keepalived + nginx + 3 masters (v1.29.1)
Preface
I installed keepalived and nginx directly on the master hosts themselves, on all 3 masters. This saves machines.
Preparation
Basic environment setup
Run the following on every machine that will be part of the K8s cluster:
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux and swap
sed -i 's/enforcing/disabled/' /etc/selinux/config
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Host names; adjust these to your own environment
echo -e "192.168.50.10 centos-k8s-master0\n192.168.50.11 centos-k8s-master1\n192.168.50.12 centos-k8s-master2\n192.168.50.16 centos-k8s-node0\n192.168.50.17 centos-k8s-node1\n192.168.50.18 centos-k8s-node2\n" >> /etc/hosts

# Kernel parameters
echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
sysctl --system

# Time sync
yum install ntpdate wget -y
ntpdate time.windows.com

# Docker repository
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
echo '{"registry-mirrors": ["https://registry.docker-cn.com","https://gg3gwnry.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
systemctl enable docker.service

# Generate the default containerd config
containerd config default > /etc/containerd/config.toml
# Point the pause image at a reachable mirror
sed -i 's|registry.k8s.io/pause:3.6|registry.aliyuncs.com/google_containers/pause:3.9|' /etc/containerd/config.toml

# Official Kubernetes repository
echo -e "[kubernetes]\nname=Kubernetes\nbaseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/\nenabled=1\ngpgcheck=1\ngpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key\nexclude=kubelet kubeadm kubectl cri-tools kubernetes-cni" > /etc/yum.repos.d/kubernetes.repo
yum clean all
yum install -y --showduplicates kubeadm-1.29.1 kubelet-1.29.1 kubectl-1.29.1 --disableexcludes=kubernetes
systemctl enable kubelet.service
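One detail worth calling out: the `sed` edit to /etc/fstab only disables swap for the next boot. Since kubeadm refuses to run with swap active, it helps to turn it off immediately as well. A minimal sketch:

```shell
# Turn swap off for the running system (the fstab edit only applies after a reboot)
swapoff -a
# Comment out every swap line in fstab so it stays off across reboots
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Verify: this should print nothing once swap is fully off
swapon --show
```

If you follow the reboot advice below, the fstab edit alone is enough; `swapoff -a` just avoids surprises in between.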
Set the default container runtime endpoints for crictl:
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
crictl config image-endpoint unix:///run/containerd/containerd.sock
At this point I recommend rebooting the machine; in any case, a reboot is needed before kubeadm init anyway.
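The two crictl commands above write their settings to /etc/crictl.yaml (the default config location). After the reboot you can verify that crictl can actually talk to containerd:

```shell
# crictl stores its endpoint settings in /etc/crictl.yaml
cat /etc/crictl.yaml
# Expected (roughly):
#   runtime-endpoint: unix:///run/containerd/containerd.sock
#   image-endpoint: unix:///run/containerd/containerd.sock

# Quick sanity check that crictl can reach containerd over that socket
crictl info >/dev/null && echo "containerd is reachable"
```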
Install and configure keepalived
Run on all 3 master nodes:
yum install keepalived -y
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak20240119
For the meaning of the keepalived configuration parameters, see: https://blog.csdn.net/MssGuo/article/details/127330115
Configuration on master0:
[root@centos-k8s-master0 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    # vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 971225
    }
    virtual_ipaddress {
        192.168.50.2
    }
}
[root@centos-k8s-master0 ~]#
Configuration on master1:
[root@centos-k8s-master1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    # vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 971225
    }
    virtual_ipaddress {
        192.168.50.2
    }
}
[root@centos-k8s-master1 ~]#
Configuration on master2:
[root@centos-k8s-master2 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    # vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 971225
    }
    virtual_ipaddress {
        192.168.50.2
    }
}
[root@centos-k8s-master2 ~]#
Note: do not enable vrrp_strict, or the virtual IP will be completely unreachable: kubeadm init will time out, and you cannot even ping the virtual address. This one setting cost me a long time.
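The steps above install and configure keepalived but never start it, so for completeness: enable and start the service on all three masters, then check that the VIP landed where expected. The interface name (eth1) and VIP come from the configs above; with these priorities, master0 should hold the address.

```shell
# Enable and start keepalived on all three masters
systemctl enable --now keepalived

# On the node currently holding the VIP (master0 at priority 100),
# the address should show up on eth1:
ip -4 addr show eth1 | grep 192.168.50.2

# And the VIP should answer from anywhere on the subnet:
ping -c 3 192.168.50.2
```

If the grep shows nothing on any master and the ping fails, re-check the keepalived config (in my case, the culprit was vrrp_strict).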
Install and configure nginx
Run on all 3 master nodes:
# nginx depends on the pcre library (PCRE: Perl Compatible Regular Expressions),
# which the rewrite module needs for URL rewriting; without pcre, nginx cannot
# use the rewrite module.
# Install nginx's dependencies
yum -y install gcc gcc-c++ make pcre pcre-devel zlib-devel zlib openssl-devel openssl

# Install nginx following the official instructions: http://nginx.org/en/linux_packages.html#RHEL
yum install yum-utils
cat >/etc/yum.repos.d/nginx.repo<<'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF
yum-config-manager --enable nginx-mainline
yum install nginx -y
Configure nginx:
[root@centos-k8s-master0 ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

# Only this stream block was added to the stock config
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.50.10:6443;   # master0 IP and port 6443
        server 192.168.50.11:6443;   # master1 IP and port 6443
        server 192.168.50.12:6443;   # master2 IP and port 6443
    }
    server {
        listen 16443;                # listen on 16443: nginx shares the machine with the apiserver, so 6443 is taken
        proxy_pass k8s-apiserver;    # reverse proxy via proxy_pass
    }
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
[root@centos-k8s-master0 ~]#
Enable nginx at boot and start it:
systemctl enable nginx && systemctl restart nginx
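A quick sanity check that the load balancer is actually up. The /healthz probe only works after kubeadm init has completed (until then the upstreams refuse connections), and -k is needed because the apiserver uses a self-signed certificate:

```shell
# nginx should now be listening on 16443 on every master
ss -lntp | grep 16443

# Once the cluster is initialized, the apiserver answers through the VIP;
# on a healthy cluster this should return "ok"
curl -k https://192.168.50.2:16443/healthz
```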
Initialize the first master
On master0, run:
kubeadm init --apiserver-advertise-address=192.168.50.10 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.29.1 \
  --apiserver-bind-port=6443 \
  --control-plane-endpoint=192.168.50.2:16443 \
  --upload-certs
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.50.2:16443 --token in7aj9.e38arf9cyz7nxit5 \
        --discovery-token-ca-cert-hash sha256:65f9e96ffdaaaa1623fec83ca127dd0e7b72f1a1029c304a74183cc1c72d55c0 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.2:16443 --token in7aj9.e38arf9cyz7nxit5 \
        --discovery-token-ca-cert-hash sha256:65f9e96ffdaaaa1623fec83ca127dd0e7b72f1a1029c304a74183cc1c72d55c0
[root@centos-k8s-master0 ~]#
Following the output above, set up kubectl access for your user:
[root@centos-k8s-master0 ~]# mkdir -p $HOME/.kube
[root@centos-k8s-master0 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@centos-k8s-master0 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@centos-k8s-master0 ~]# kubectl get nodes
NAME                 STATUS     ROLES           AGE   VERSION
centos-k8s-master0   NotReady   control-plane   24m   v1.29.1
[root@centos-k8s-master0 ~]#
Join the other master nodes to the cluster:
[root@centos-k8s-master1 ~]# kubeadm join 192.168.50.2:16443 --token kaxj0v.quptoehlg8lvzkfl \
> --discovery-token-ca-cert-hash sha256:2f912f71bd6ed637e4e5fa59b7873b7f4f377e964bfd53bca47092fcccb22390 \
> --control-plane --certificate-key 87b7d66290d0ba474699237fdf158624433ee146bc213c4de105f12eea635dd7
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos-k8s-master1 localhost] and IPs [192.168.18.121 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos-k8s-master1 localhost] and IPs [192.168.18.121 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos-k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.18.121 192.168.50.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
W0119 22:31:25.743376    5153 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
W0119 22:31:26.345947    5153 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
W0119 22:31:26.979688    5153 endpoint.go:57] [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
{"level":"warn","ts":"2024-01-19T22:31:28.979091+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0004b6c40/192.168.50.10:2379","attempt":0,"error":"rpc error: code = FailedPrecondition desc = etcdserver: can only promote a learner member which is in sync with leader"}
(the same etcd-client retry warning repeats a few more times while the new member catches up with the leader)
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node centos-k8s-master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node centos-k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@centos-k8s-master1 ~]# mkdir -p $HOME/.kube
[root@centos-k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@centos-k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@centos-k8s-master1 ~]# kubectl get nodes
NAME                 STATUS     ROLES           AGE     VERSION
centos-k8s-master0   NotReady   control-plane   3m27s   v1.29.1
centos-k8s-master1   NotReady   control-plane   48s     v1.29.1
[root@centos-k8s-master1 ~]#
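One thing to keep in mind when joining more nodes later: the bootstrap token expires after 24 hours, and the uploaded certificate key after 2 hours. Both can be regenerated on any existing control-plane node:

```shell
# Print a fresh worker join command (this creates a new token)
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key;
# append "--control-plane --certificate-key <key>" to the join command above
# to join another master
kubeadm init phase upload-certs --upload-certs
```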
Join the worker nodes to the cluster
[root@centos-k8s-node0 ~]# kubeadm join 192.168.50.2:16443 --token kaxj0v.quptoehlg8lvzkfl \
> --discovery-token-ca-cert-hash sha256:2f912f71bd6ed637e4e5fa59b7873b7f4f377e964bfd53bca47092fcccb22390
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@centos-k8s-node0 ~]#
Run the same kubeadm join command on node1 and node2; the output is identical apart from the hostname.
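All nodes report NotReady at this point because no pod network add-on has been installed yet, which the kubeadm init output also points out. A sketch using Calico, one common choice; the manifest URL and version here are assumptions, so check the Calico documentation for the current release:

```shell
# Install a CNI plugin (Calico shown; version is an assumption)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

# Watch the nodes flip to Ready once the calico pods are running
kubectl get nodes -w
```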