Adding a new node to a k8s cluster

Add the new node node-3
1 Basic environment configuration. The part between the dashed lines below is already in place on the cloned VM, so it does not need to be run again here.
---------------------------------------------
1.1 Install some required tools with yum:
yum install net-tools vim wget lrzsz git -y

1.2 Firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
reboot

1.3 Set the time zone:
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime -rf
yum install -y ntpdate
ntpdate -u ntp.api.bz
echo "*/5 * * * * ntpdate time7.aliyun.com >/dev/null 2>&1" >> /etc/crontab
service crond restart
chkconfig crond on
---------------------------------------------
1.4 Set the hostname
hostnamectl set-hostname node-3;bash
cat > /etc/hosts <<EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.52.38 master-1
192.168.52.39 master-2
192.168.52.40 master-3
192.168.52.41 node-1
192.168.52.42 node-2
192.168.52.43 node-3
EOF
1.5 Add the entry 192.168.52.43 node-3 to /etc/hosts on every node (a sketch for this follows after the ssh-copy-id loop below).
Distribute the SSH key from master-1 so it can log in to node-3 without a password.
Run on master-1:
export mypass=xxxxxx   # your root password
name=(node-3)
for i in ${name[@]};do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
\"*yes/no*\" {send \"yes\r\"; exp_continue}
\"*password*\" {send \"$mypass\r\"; exp_continue}
\"*Password*\" {send \"$mypass\r\";}
}"
done
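For reference, a minimal sketch for step 1.5 (run on master-1, assuming passwordless SSH to the existing nodes is already set up) that appends the node-3 entry to /etc/hosts on every other node:
# Append node-3's hosts entry on all existing nodes; skip nodes that already have it
for i in master-1 master-2 master-3 node-1 node-2;do
ssh $i "grep -q 'node-3' /etc/hosts || echo '192.168.52.43 node-3' >> /etc/hosts"
done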

1.6 Kernel parameter changes:
cat >>/etc/sysctl.conf<<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
fs.file-max=52706963
fs.nr_open=52706963
EOF

modprobe br_netfilter
sysctl -p
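To confirm the parameters took effect, they can be queried directly (the values should match the file above):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward vm.swappiness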

2 Install Docker on node-3
# Install the CE edition
[root@node-3 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node-3 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@node-3 ~]# yum install -y docker-ce-19.03.6 docker-ce-cli-19.03.6 containerd.io
If the rpm files are already available locally, you can instead run yum localinstall containerd.io-1.2.10-3.2.el7.x86_64.rpm docker-ce-19.03.6-3.el7.x86_64.rpm docker-ce-cli-19.03.6-3.el7.x86_64.rpm -y, which avoids downloading the three large rpm packages above and saves time.

2.1 Start the Docker service
[root@node-3 ~]# chkconfig docker on
[root@node-3 ~]# service docker start
[root@node-3 ~]# service docker status

2.2 Configure a registry mirror (on all worker nodes)
[root@node-3 ~]# mkdir -p /etc/docker
[root@node-3 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://plqjafsr.mirror.aliyuncs.com"]
}
EOF
2.3 Restart Docker to apply the configuration
[root@node-3 ~]# systemctl daemon-reload
[root@node-3 ~]# systemctl restart docker
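After the restart, the mirror configuration can be verified with docker info (the exact output format depends on the Docker version):
[root@node-3 ~]# docker info | grep -A1 "Registry Mirrors"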

3 Configure flannel
3.1 Run on master-1:
# Copy the etcd certificates
for i in node-3;do ssh $i mkdir -p /etc/etcd/{cfg,ssl};done
for i in node-3;do scp /etc/etcd/ssl/* $i:/etc/etcd/ssl/;done
# Copy the flanneld binary and the mk-docker-opts.sh helper script
for i in node-3;do scp /usr/local/bin/mk-docker-opts.sh /usr/local/bin/flanneld $i:/usr/local/bin/;done
for i in node-3;do ssh $i "mkdir -p /etc/flannel";done
for i in node-3;do scp /etc/flannel/flannel.cfg $i:/etc/flannel/;done
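For reference, the /etc/flannel/flannel.cfg copied above normally just defines the FLANNEL_OPTIONS variable consumed by the systemd unit in 3.2. The etcd endpoints and certificate file names below are assumptions based on this cluster's layout, not the actual contents of the file:
# Hypothetical example of /etc/flannel/flannel.cfg -- adjust to your real file
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.52.38:2379,https://192.168.52.39:2379,https://192.168.52.40:2379 --etcd-cafile=/etc/etcd/ssl/ca.pem --etcd-certfile=/etc/etcd/ssl/server.pem --etcd-keyfile=/etc/etcd/ssl/server-key.pem"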

3.2 Run on node-3:
Create the flanneld systemd service file
[root@node-3 ~]# cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/flannel/flannel.cfg
ExecStart=/usr/local/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start flanneld
[root@node-3 ~]# service flanneld start
[root@node-3 ~]# chkconfig flanneld on
[root@node-3 ~]# service flanneld status
# Stop flanneld on node-3 again; it will be restarted after Docker is reconfigured in 3.3/3.4
[root@node-3 ~]# service flanneld stop

3.3 Modify the Docker startup file (on node-3)
[root@node-3 ~]# cat >/usr/lib/systemd/system/docker.service<<EOFL
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOFL

3.4 Restart the Docker service
[root@node-3 ~]# systemctl daemon-reload
[root@node-3 ~]# service flanneld restart
[root@node-3 ~]# service docker restart
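Before checking the interfaces, you can confirm that mk-docker-opts.sh generated the Docker options from the flannel-allocated subnet; the values below are only an illustration and will match whatever subnet flannel assigned on your node:
[root@node-3 ~]# cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.17.94.1/24 --ip-masq=false --mtu=1450"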

# Check the IP addresses: docker0 and flannel.1 should be on the same subnet
[root@node-3 flannel]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fe:fc:fe:53:2d:34 brd ff:ff:ff:ff:ff:ff
inet 192.168.52.43/24 brd 192.168.52.255 scope global enp0s18
valid_lft forever preferred_lft forever
inet6 fe80::fcfc:feff:fe53:2d34/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:93:36:69:28 brd ff:ff:ff:ff:ff:ff
inet 172.17.94.1/24 brd 172.17.94.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether b2:10:19:ba:ed:d2 brd ff:ff:ff:ff:ff:ff
inet 172.17.94.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::b010:19ff:feba:edd2/64 scope link
valid_lft forever preferred_lft forever
# Check the routing table for flannel routes to the other nodes:
[root@node-3 flannel]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.52.1 0.0.0.0 UG 0 0 0 enp0s18
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 enp0s18
172.17.31.0 172.17.31.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.32.0 172.17.32.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.79.0 172.17.79.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.81.0 172.17.81.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.94.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
172.17.96.0 172.17.96.0 255.255.255.0 UG 0 0 0 flannel.1
192.168.52.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s18
# Ping each node's gateway from the other nodes to verify they can all reach each other.
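A quick way to do this from node-3 is to ping the docker0 gateway of each peer subnet; the .1 gateway addresses below are inferred from the routing table above and are an assumption about how each node's subnet is laid out:
for gw in 172.17.31.1 172.17.32.1 172.17.79.1 172.17.81.1 172.17.96.1;do
ping -c 2 -W 1 $gw
done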
At this point Docker and flannel are fully installed on node-3.

4 Deploy kubelet:
4.1 Copy files from master-1 to node-3
cd /soft
for i in node-3;do scp kubernetes/server/bin/kubelet kubernetes/server/bin/kube-proxy $i:/usr/local/bin/;done
Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to node-3:
for i in node-3;do ssh $i "mkdir -p /etc/kubernetes/{cfg,ssl}";done
Copy the certificate files:
for i in node-3;do scp /etc/kubernetes/ssl/* $i:/etc/kubernetes/ssl/;done
Copy the kubeconfig files (on master-1):
[root@master-1 bin]# cd /root/config
[root@master-1 config]# for i in node-3;do scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig $i:/etc/kubernetes/cfg/;done
4.2 Create the kubelet parameter configuration file
# The IP address differs for each node and must be changed accordingly (run on the node)
[root@node-3 bin]# cat >/etc/kubernetes/cfg/kubelet.config<<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.52.43
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
4.3 Create the kubelet configuration file
# The /etc/kubernetes/cfg/kubelet.kubeconfig file is generated automatically.
# After startup, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver. Once the CSR is approved,
# kube-controller-manager creates the TLS client certificate and private key for the kubelet, plus the file referenced by --kubeconfig.
[root@node-3 bin]# cat >/etc/kubernetes/cfg/kubelet<<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.52.43 \
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \
--config=/etc/kubernetes/cfg/kubelet.config \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=docker.io/kubernetes/pause:latest"
EOF
4.4 Create the kubelet systemd unit file (on node-3)
[root@node-3 bin]#cat >/usr/lib/systemd/system/kubelet.service<<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kubelet
ExecStart=/usr/local/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

4.5 Start the kubelet service (on node-3)
[root@node-3 bin]#chkconfig kubelet on
[root@node-3 bin]#service kubelet start
[root@node-3 bin]#service kubelet status
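As noted in 4.3, the kubelet first submits a CSR through the bootstrap kubeconfig. If the cluster does not auto-approve node bootstrap CSRs, approve the pending request on master-1 (the CSR name below is illustrative):
[root@master-1 ~]# kubectl get csr
[root@master-1 ~]# kubectl certificate approve node-csr-xxxxx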
4.6 Enable automatic certificate rotation (on node-3; the masters and the other nodes were already updated earlier):
Delete the existing client certificates on node-3:
rm -f /etc/kubernetes/ssl/kubelet-client-current.pem /etc/kubernetes/ssl/kubelet-client*pem /etc/kubernetes/ssl/kubelet.key /etc/kubernetes/ssl/kubelet.crt
Modify the kubelet options file:
cat >/etc/kubernetes/cfg/kubelet<<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.52.43 \
--kubeconfig=/etc/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/cfg/bootstrap.kubeconfig \
--config=/etc/kubernetes/cfg/kubelet.config \
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
--rotate-certificates \
--cert-dir=/etc/kubernetes/ssl \
--pod-infra-container-image=docker.io/kubernetes/pause:latest"
EOF
Restart the service:
service kubelet restart
Check whether new client certificates have been generated under /etc/kubernetes/ssl.
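For example (the timestamped file names are generated by the kubelet and will differ on your node):
[root@node-3 ~]# ls -l /etc/kubernetes/ssl/kubelet-client-*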
4.7 Check whether the certificate validity period has changed
# The default is one year
openssl x509 -in kubelet-client-current.pem -noout -text | grep "Not"
openssl x509 -in server.crt -text
openssl x509 -in /etc/kubernetes/ssl/kubelet.crt -noout -text | grep -A2 "Validity"

5 Deploy the kube-proxy component
# kube-proxy runs on every node. It watches the apiserver for changes to Services and Endpoints and creates forwarding rules to load-balance traffic to services.

5.1 Create the kube-proxy configuration file
# Note that the hostname-override address differs from node to node; adjust it accordingly.
[root@node-3 ~]#cat >/etc/kubernetes/cfg/kube-proxy<<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--metrics-bind-address=0.0.0.0 \
--hostname-override=192.168.52.43 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/etc/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

5.2 Create the kube-proxy systemd unit file
[root@node-3 ~]#cat >/usr/lib/systemd/system/kube-proxy.service<<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5.3 Start the kube-proxy service
[root@node-3 ~]#chkconfig kube-proxy on
[root@node-3 ~]#service kube-proxy start
[root@node-3 ~]#service kube-proxy status
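On master-1, you can now confirm that node-3 has registered with the cluster (it may take a moment to report Ready):
[root@master-1 ~]# kubectl get nodes -o wide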

5.4 The newly added node can be marked as unschedulable:
kubectl cordon 192.168.52.43
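When node-3 is ready to receive workloads, scheduling can be re-enabled with:
kubectl uncordon 192.168.52.43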
