Installing Kubernetes v1.20 with kubeadm

Node deployment

[root@master1 ~]# kubectl  get nodes -o wide
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
master1   Ready    control-plane,master   11h   v1.20.9   10.70.51.105   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://20.10.7
master2   Ready    control-plane,master   10h   v1.20.9   10.70.51.106   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://20.10.7
node1     Ready    <none>                 11h   v1.20.9   10.70.51.107   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://20.10.7
node2     Ready    <none>                 11h   v1.20.9   10.70.51.108   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://20.10.7
node3     Ready    <none>                 11h   v1.20.9   10.70.51.109   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://20.10.7

1. Install the container runtime: Docker

1.1 Add the Aliyun yum repo

sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.2 Install Docker

List the installable versions, sorted by version number from highest to lowest:
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7  containerd.io-1.4.6

1.3 Start Docker

systemctl enable docker --now
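
A quick sanity check that Docker is running and enabled at boot:

systemctl is-active docker    # should print: active
docker version                # client and server should both report 20.10.7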

1.4 Configure the registry mirror

# Besides the registry mirror, this also sets Docker's key production options: the systemd cgroup driver and log rotation
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
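
To confirm the daemon picked up the new settings, you can query the active drivers via docker info's Go-template output:

docker info -f '{{.CgroupDriver}}'   # should print: systemd
docker info -f '{{.Driver}}'         # should print: overlay2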

2. Install Kubernetes

2.1 Base environment

Run the following on every machine.

# Set each machine's own hostname
hostnamectl set-hostname xxxx


# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab
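
A quick way to confirm swap is really off on each machine:

free -h        # the Swap line should show 0 total
swapon -s      # should list no swap devices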

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
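
To confirm the module is loaded and both sysctls took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1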

2.2 Install kubelet, kubeadm, and kubectl

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

List the available versions:
yum list kubeadm --showduplicates
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
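
A quick version check to confirm all three tools are at the pinned 1.20.9 release:

kubeadm version -o short            # v1.20.9
kubelet --version                   # Kubernetes v1.20.9
kubectl version --client --short    # Client Version: v1.20.9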

kubelet will now restart every few seconds: it is crash-looping while it waits for instructions from kubeadm.
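
You can watch the loop yourself; at this stage it is expected and harmless:

systemctl status kubelet     # shows the service restarting (activating / auto-restart)
journalctl -u kubelet -f     # errors repeat until kubeadm init or join runs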

2.3 Pull the images

sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
   
chmod +x ./images.sh && ./images.sh
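
Once the script finishes, a quick check that all seven images landed locally:

docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images
# expect 7 entries: kube-apiserver, kube-proxy, kube-controller-manager, kube-scheduler, coredns, etcd, pause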

2.4 Configure hosts resolution

echo -e "10.70.51.105 master1\n10.70.51.106 master2\n10.70.51.107 node1\n10.70.51.108 node2\n10.70.51.109 node3" >> /etc/hosts
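
A quick check that the new entries resolve (getent reads /etc/hosts through NSS):

getent hosts master1 master2 node1 node2 node3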

2.5 Initialize the cluster on the master node

Notes:
1. None of the network ranges may overlap.
2. --apiserver-advertise-address must be the master's own IP.
3. --control-plane-endpoint must be the master name recorded in the hosts file.

kubeadm init --apiserver-advertise-address=10.70.51.105 --control-plane-endpoint=master1 --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images --kubernetes-version v1.20.9 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.168.0.0/16

Initialization succeeded.

Save the printed join commands; they are needed later for adding master or worker nodes.

Upgrading to a highly available configuration

1. kubeadm-config
[root@master1 kubesphere]# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.9
controlPlaneEndpoint: 10.70.51.10:16443   # virtual IP
imageRepository: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images
apiServer:
  certSANs:  # master node IPs and the virtual IP
  - 10.70.51.105
  - 10.70.51.106
  - 10.70.51.107
  - 10.70.51.108
  - 10.70.51.10
networking:
  podSubnet: 10.168.0.0/16
  serviceSubnet: 10.96.0.0/16
2. Run the initialization:
kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=SystemVerification --v=5
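
Once the re-initialization finishes, you can verify that the extra certSANs made it into the apiserver certificate with plain openssl (assuming the default kubeadm certificate path):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
# the IP Address list should include 10.70.51.105, 10.70.51.106 and the VIP 10.70.51.10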

2.6 Set up kubectl access

Run the following commands only on the master node:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
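
Alternatively, when working as root you can point kubectl at the admin kubeconfig directly (this is the same alternative kubeadm prints in its init output):

export KUBECONFIG=/etc/kubernetes/admin.conf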

2.7 Join the worker nodes

Run the join command printed by kubeadm init above on each worker node:

kubeadm join 10.70.51.10:6443 --token jseunn.umuscu7bn2tcc8jl     --discovery-token-ca-cert-hash sha256:430ba8de7b53ffb254eb25a649753f5f429f9974ca18ae7a701d6d95dcef4710

2.8 Verify

The cluster is now built, but every node is NotReady. Kubernetes nodes communicate through a third-party network plugin, so the next step is to install one.

[root@master1 ~]# kubectl get node
NAME      STATUS     ROLES                  AGE     VERSION
master1   NotReady   control-plane,master   7m51s   v1.20.9
master2   NotReady   control-plane,master   7m51s   v1.20.9
node1     NotReady   <none>                 2m3s    v1.20.9
node2     NotReady   <none>                 2m      v1.20.9

3. Install the Calico network plugin

3.1 Download the Calico manifest

curl https://docs.projectcalico.org/v3.18/manifests/calico.yaml -O

3.2 Edit the manifest

Note:
Calico's default CALICO_IPV4POOL_CIDR is 192.168.0.0/16. In step 2.5 the cluster was initialized with --pod-network-cidr=10.168.0.0/16 precisely to avoid overlapping ranges, so the CIDR value in the Calico manifest must be changed to the same value and the relevant lines uncommented, as sketched below.
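
A minimal sed sketch of the edit, assuming the stock v3.18 manifest formatting (the two lines are commented out with '# ' and carry the default 192.168.0.0/16); editing calico.yaml by hand works just as well:

sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.168.0.0/16"|' calico.yaml
# double-check the result before applying
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml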

3.3 Install the Calico network plugin

kubectl create -f calico.yaml
[root@master1 docker]# kubectl  get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-sl5dn   1/1     Running   0          134m
calico-node-dfjr5                         1/1     Running   0          134m
calico-node-gnqx5                         1/1     Running   0          134m
calico-node-lgq9d                         1/1     Running   0          134m
calico-node-r47mg                         1/1     Running   0          134m
calico-node-vzpsr                         1/1     Running   0          134m
coredns-5897cd56c4-54jzv                  1/1     Running   0          11h
coredns-5897cd56c4-prtt6                  1/1     Running   0          11h
etcd-master1                              1/1     Running   0          11h
etcd-master2                              1/1     Running   0          102m
kube-apiserver-master1                    1/1     Running   0          11h
kube-apiserver-master2                    1/1     Running   0          102m
kube-controller-manager-master1           1/1     Running   1          11h
kube-controller-manager-master2           1/1     Running   0          102m
kube-proxy-4b5pz                          1/1     Running   0          11h
kube-proxy-7rssk                          1/1     Running   0          11h
kube-proxy-7wtb7                          1/1     Running   0          11h
kube-proxy-ctkqb                          1/1     Running   0          11h
kube-proxy-gtn2p                          1/1     Running   0          11h
kube-scheduler-master1                    1/1     Running   1          11h
kube-scheduler-master2                    1/1     Running   0          102m

Once all the pods are Running, the Calico network plugin has been installed successfully.
Check the nodes again and they are all Ready: Kubernetes is now fully installed.

4. Add a new master node

Run the following commands on the current, still-only master node.

4.1 Re-upload the certificates

kubeadm init phase upload-certs --upload-certs
I1109 14:34:00.836965    5988 version.go:255] remote version is much newer: v1.25.3; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
 ecf2abbfdf3a7bc45ddb2de75152ec12889971098d69939b98e4451b53aa3033

4.2 Get a join token

[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 10.70.51.10:6443 --token xxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxx

4.3 Splice the token and the certificate key together into the following command

kubeadm join 10.70.51.10:6443 --token q466v0.hbk3qjreznjsf8ew --discovery-token-ca-cert-hash xxxxxxx --control-plane --certificate-key xxxxxxx
Notes:
  1. Do not use --experimental-control-plane; it will throw an error.
  2. Remember to add --control-plane --certificate-key, otherwise the machine joins as a worker node instead of a master.
  3. Nothing should already be deployed on the node when joining; if it is, run kubeadm reset first and then join.

4.4 After the join succeeds, the newly added node prints the following message:

This node has joined the cluster and a new control plane instance was created:
 
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
 
To start administering your cluster from this node, you need to run the following as a regular user:
 
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Run 'kubectl get nodes' to see this node join the cluster.

5. Deploy an Nginx + Keepalived high-availability load balancer

5.1 kube-apiserver high-availability architecture

  • Nginx is a mainstream web server and reverse proxy; here it load-balances the apiservers at layer 4.
  • Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a VIP. In the topology above, Keepalived decides whether to fail over (float the VIP) based on Nginx's state: when the Nginx master node goes down, the VIP automatically binds to the Nginx backup node, keeping the VIP reachable at all times and making Nginx highly available.

Note 1: To save machines, the load balancers here are co-located with the K8s master nodes. They can also be deployed outside the cluster, as long as Nginx can reach the apiservers.

Note 2: Public clouds generally do not support Keepalived; there, use the provider's load balancer product to balance across the master kube-apiservers directly. The architecture is otherwise the same.

5.2 Perform the following on both master nodes.

1. Install the packages (master and backup):
yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file (identical on master and backup):
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing across the two master apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 10.70.51.105:6443;   # Master1 APISERVER IP:PORT
       server 10.70.51.106:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 16443; # Nginx is co-located with the masters, so it cannot listen on 6443 or it would clash with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
3. Keepalived configuration file (Nginx master):

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33  # change to your actual NIC name
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; set the backup server to 90
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        10.70.51.10/24
    }
    track_script {
        check_nginx
    }
}
EOF

•    vrrp_script: the script that checks Nginx's health (failover is decided from Nginx's state)
•    virtual_ipaddress: the virtual IP (VIP)
Prepare the Nginx health-check script referenced above:
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
4. Keepalived configuration file (Nginx backup):
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.70.51.10/24
    }
    track_script {
        check_nginx
    }
}
EOF
Prepare the same Nginx health-check script on this node as well:
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: Keepalived decides whether to fail over based on the script's exit code (0 means healthy, non-zero means unhealthy).
5. Start the services and enable them at boot:
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived

6. Check Keepalived's working state:
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff
    inet 10.70.51.105/24 brd 10.70.51.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 10.70.51.10/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe04:f72c/64 scope link 
       valid_lft forever preferred_lft forever

You can see the virtual IP 10.70.51.10 bound to the ens33 NIC as a secondary address, which means Keepalived is working correctly.
7. Nginx + Keepalived failover test
Stop Nginx on the master node and check whether the VIP floats to the backup server:
run pkill nginx on the Nginx master;
then run ip addr on the Nginx backup and confirm the VIP is now bound there.
8. Load balancer access test
From any node in the K8s cluster, query the K8s version through the VIP with curl:
curl -k https://10.70.51.10:16443/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.4",
  "gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",
  "gitTreeState": "clean",
  "buildDate": "2021-02-18T16:03:00Z",
  "goVersion": "go1.15.8",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The K8s version information comes back correctly, so the load balancer is set up properly. The request flow is: curl -> VIP (Nginx) -> apiserver.
The Nginx access log also shows which apiserver IP each request was forwarded to:
[root@master2 kubernetes]# tail /var/log/nginx/k8s-access.log -f
10.70.51.106 10.70.51.105:6443 - [14/Feb/2023:18:32:49 +0800] 200 421
10.70.51.106 10.70.51.106:6443 - [14/Feb/2023:18:33:49 +0800] 200 421

 
