Adding new master nodes to a single-master Kubernetes cluster with kubeadm

Server information
master1 10.38.0.50
master2 10.38.0.58
master3 10.38.0.166
node1 10.38.0.77
lb1 10.38.0.182
lb2 10.38.0.18
vip  10.38.0.144

 

1. Server initialization

1) Disable the firewall, SELinux, and the swap partition

systemctl stop firewalld && systemctl disable firewalld

sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0 && getenforce

swapoff -a && vim /etc/fstab   # comment out the swap line

2) Set the hostname and add hosts entries

hostnamectl set-hostname test-ceshi-master-2

vim /etc/hosts

10.12.8.16 registry.kubeoperator.io
10.38.0.58 test-ceshi-master-1
10.38.0.77 test-ceshi-worker-1
10.38.0.50 test-ceshi-master-2
10.38.0.166 test-ceshi-master-3
10.38.0.144 master.k8s.io k8s-vip

3) Configure bridged traffic to pass through iptables

cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

4) Configure time synchronization

yum install ntpdate -y && ntpdate time.windows.com

 

2. Install Docker

wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

tar -zxvf docker-19.03.9.tgz

mv docker/* /usr/local/bin/

Write the Docker configuration file, setting the Docker data directory to /data/docker

mkdir /etc/docker

vim /etc/docker/daemon.json

{
"registry-mirrors": ["http://10.12.8.16:8082","http://registry.kubeoperator.io:8082","https://reg-mirror.qiniu.com","https://hub-mirror.c.163.com","http://nexus.goldwind.com.cn:9000"],
"insecure-registries": ["harbor.goldwind.com","nexus.goldwind.com.cn:8082","nexus.goldwind.com.cn:8080","nexus.goldwind.com.cn:9000","10.12.8.16:8082","10.12.8.16:8083","registry.kubeoperator.io:8083","registry.kubeoperator.io:8082","192.168.0.0/16","192.168.255.0/24"],
"max-concurrent-downloads": 10,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"bip": "172.17.0.1/16",
"data-root": "/data/docker",
"exec-opts": ["native.cgroupdriver=cgroupfs"]
}

Write the systemd unit for Docker

vim /etc/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/usr/local/bin/dockerd
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
Restart=always
RestartSec=5
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=1048576
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

Start Docker and enable it at boot

systemctl daemon-reload

systemctl start docker && systemctl enable docker 

Check Docker status

systemctl status docker
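A malformed daemon.json is a common reason dockerd fails to start. A quick validity check before (re)starting is a sketch worth having; the CONF variable is only for illustration, and it assumes python3 is available on the host:

```shell
# Validate /etc/docker/daemon.json as JSON before starting dockerd.
# CONF is overridable so the check can be pointed at any file.
CONF="${CONF:-/etc/docker/daemon.json}"
if python3 -m json.tool "$CONF" > /dev/null 2>&1; then
    echo "daemon.json OK"
else
    echo "daemon.json INVALID"
fi
```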

3. Install kubeadm, kubelet, and kubectl

mkdir /etc/yum.repos.d/bak 
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak 
vim /etc/yum.repos.d/kubeops.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Check the kubeadm version on the existing master (kubeadm version) and install the same version
yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4

Installation fails with:

Error: Package: kubelet-1.20.4-0.x86_64 (kubernetes) Requires: conntrack
Error: Package: kubelet-1.20.4-0.x86_64 (kubernetes) Requires: socat

This is because the conntrack and socat packages are missing. Add a working yum repo (such as the Aliyun base repo), then install the two packages:
yum install conntrack socat -y
Then retry the install, which now succeeds:
yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4

 

4. Install and configure nginx + keepalived

  We need nginx (or haproxy) + keepalived to provide a highly available VIP for the apiserver. nginx + keepalived could be installed directly on the master nodes, but ports 80/443 there are occupied by the ingress controller, so an on-master install could only load-balance the apiserver. By using two separate machines instead, nginx can also load-balance HTTP (80) and HTTPS (443) traffic across all nodes in the cluster, and domain names can be resolved to this VIP without creating a DNS single point of failure.

  Because the test servers are Huawei Cloud instances, the VIP must be registered or it will not be reachable: apply for a virtual IP address under Elastic Load Balance → Subnets and bind it to the two lb servers.

 

1) Install keepalived

yum install -y conntrack-tools libseccomp libtool-ltdl

yum -y install keepalived
2) Configure keepalived

Edit the keepalived configuration file.

On lb1:

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   script_user root
   enable_script_security
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id NGINX
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script nginx_check {
        script "/etc/keepalived/nginx_health.sh"
        interval 2
        weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.38.0.144
    }
    track_script {
        nginx_check
    }
}

On lb2:
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   script_user root
   enable_script_security
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id NGINX
   vrrp_skip_check_adv_addr
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_script nginx_check {
        script "/etc/keepalived/nginx_health.sh"
        interval 2
        weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 55
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.38.0.144
    }
    track_script {
        nginx_check
    }
}
The custom monitor (vrrp_script) nginx_check, referenced in track_script, watches nginx by running the script /etc/keepalived/nginx_health.sh and adjusts the VRRP priority based on the result.
Add the monitoring script on both lb1 and lb2: if the nginx process count is 0, restart nginx; wait two seconds and check again, and if the count is still 0, stop keepalived so the VIP fails over.
vim /etc/keepalived/nginx_health.sh
#!/bin/bash
counter=$(ps -C nginx --no-heading|wc -l)
if [ "${counter}" = "0" ]; then
    systemctl restart nginx
    sleep 2
    counter=$(ps -C nginx --no-heading|wc -l)
    if [ "${counter}" = "0" ]; then
       systemctl stop keepalived
    fi
fi
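keepalived will not run a vrrp_script that is not executable, and with enable_script_security it also requires the script to be owned by root; set both on lb1 and lb2 (a short sketch):

```shell
# keepalived refuses to run the check script unless it is executable;
# with enable_script_security it must also be root-owned.
chown root:root /etc/keepalived/nginx_health.sh
chmod 700 /etc/keepalived/nginx_health.sh
```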
3) Install nginx
yum install -y pcre  pcre-devel zlib  zlib-devel openssl openssl-devel
wget http://nginx.org/download/nginx-1.23.4.tar.gz
tar -zxvf nginx-1.23.4.tar.gz
cd nginx-1.23.4
./configure --prefix=/data/nginx --with-http_stub_status_module --with-http_ssl_module --with-stream
make && make install
4) Configure nginx
On both lb1 and lb2:
vim /data/nginx/conf/nginx.conf

user root;
worker_processes auto;
error_log logs/error.log;
pid logs/nginx.pid;
#include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  logs/k8s-access.log  main;

    upstream k8s-http {
       server 10.38.0.50:80;
       server 10.38.0.58:80;
       server 10.38.0.166:80;
       server 10.38.0.77:80;

    }
    upstream k8s-https {
       server 10.38.0.50:443;
       server 10.38.0.58:443;
       server 10.38.0.166:443;
       server 10.38.0.77:443;
    }
    upstream k8s-apiserver {
       server 10.38.0.50:6443;
       server 10.38.0.58:6443;
       server 10.38.0.166:6443;
    }
    server {
       listen 80;
       proxy_connect_timeout 2s;
       proxy_timeout 5m;
       proxy_upload_rate 0;
       proxy_download_rate 0;
       proxy_buffer_size 4k;
       proxy_pass k8s-http;
    }
    server {
       listen 443;
       proxy_connect_timeout 2s;
       proxy_timeout 5m;
       proxy_upload_rate 0;
       proxy_download_rate 0;
       proxy_buffer_size 4k;
       proxy_pass k8s-https;
    }
   server {
       listen 26443;
       proxy_connect_timeout 2s;
       proxy_timeout 5m;
       proxy_upload_rate 0;
       proxy_download_rate 0;
       proxy_buffer_size 4k;
       proxy_pass k8s-apiserver;
    }
}

Add nginx as a systemd service
vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx
After=network.target

[Service]
Type=forking
PIDFile=/data/nginx/logs/nginx.pid
ExecStart=/data/nginx/sbin/nginx -c /data/nginx/conf/nginx.conf
ExecReload=/data/nginx/sbin/nginx -s reload
ExecStop=/data/nginx/sbin/nginx -s quit
PrivateTmp=true

[Install]
WantedBy=multi-user.target
After adding the unit (and running systemctl daemon-reload), nginx can be started with systemctl start nginx.

5. Download the images offline

6. Regenerate certificates and join the new master nodes

kubectl edit  cm kubeadm-config  -n kube-system 

Under data.ClusterConfiguration.apiServer.certSANs, add the IPs and hostnames of the two new master servers, and set controlPlaneEndpoint to the apiserver load balancer configured above: 10.38.0.144:26443.

Export the configuration:

kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
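Before regenerating certificates, it is worth confirming the edit actually took: the exported file should contain the new SANs and the VIP endpoint. A sketch (the CFG variable is only for illustration; sample values match this cluster):

```shell
# Print the certSANs block and the controlPlaneEndpoint value from the
# exported ClusterConfiguration to confirm the edits were saved.
CFG="${CFG:-kubeadm.yaml}"
grep -A6 'certSANs' "$CFG"
awk '/controlPlaneEndpoint/ {print $2}' "$CFG"
```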

If this config file were used directly, the regenerated apiserver certificate would only be valid for one year, because the validity period is hard-coded in kubeadm. To issue certificates valid for 100 years, kubeadm must be recompiled.

1) Install Go if it is not already present

wget https://studygolang.com/dl/golang/go1.15.4.linux-amd64.tar.gz

tar zxvf go1.15.4.linux-amd64.tar.gz -C /usr/local/

Add Go to the PATH:

vim /etc/bashrc

export PATH=$PATH:/usr/local/go/bin

source /etc/bashrc

2) Download the Kubernetes source

git clone https://github.com/kubernetes/kubernetes.git

Check the local kubeadm version:

kubeadm version

Switch the source to v1.20.4:

cd kubernetes

git checkout -b remotes/origin/release-1.20.4 v1.20.4

This fails with: error: Your local changes to the following files would be overwritten by checkout

Discard the local changes:

git checkout .

If it still complains about a file, delete that file and run the checkout again:

git checkout -b remotes/origin/release-1.20.4 v1.20.4
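An equivalent way to clear both the tracked changes and any leftover untracked files in one step, instead of deleting them by hand (a sketch; note that git clean -fd permanently deletes untracked files):

```shell
# Discard all local modifications to tracked files, then remove any
# untracked files/directories, leaving a clean worktree for checkout.
git checkout -- .
git clean -fd
```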

3) Modify the kubeadm source to change the certificate validity policy

vim cmd/kubeadm/app/util/pkiutil/pki_helpers.go

1. Add the constant duration36500d inside the NewSignedCert function:

func NewSignedCert(cfg *certutil.Config, key crypto.Signer, caCert *x509.Certificate, caKey crypto.Signer) (*x509.Certificate, error) {
      const duration36500d = time.Hour * 24 * 365 * 100 // add this line; 365 * 100 makes certificates valid for 100 years

2. Change the NotAfter field:

NotAfter:     time.Now().Add(duration36500d).UTC(),

After the changes, rebuild kubeadm:

make WHAT=cmd/kubeadm GOFLAGS=-v

4) After a successful build, the new kubeadm binary is at _output/bin/kubeadm.
Back up the original kubeadm and install the new one into /usr/local/bin/:
mv /usr/local/bin/kubeadm{,_bak}
mv _output/bin/kubeadm /usr/local/bin/kubeadm && chmod +x /usr/local/bin/kubeadm

First move the old apiserver certificate out of the way:

mv /etc/kubernetes/pki/apiserver.{crt,key} ~

Then create the new apiserver certificate:

kubeadm init phase certs apiserver --config kubeadm.yaml

Restart the apiserver container so it picks up the new certificate:

docker restart `docker ps | grep kube-apiserver | grep -v pause | awk '{print $1}'`

Use openssl to check that the new certificate includes the newly added VIP:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
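To see just the SAN list rather than the full certificate dump, the output can be filtered (a sketch; the CERT variable is only for illustration):

```shell
# Print only the Subject Alternative Name entries of the apiserver cert;
# the VIP 10.38.0.144 should appear in this list.
CERT="${CERT:-/etc/kubernetes/pki/apiserver.crt}"
openssl x509 -noout -text -in "$CERT" | grep -A1 'Subject Alternative Name'
```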

Upload the new certificates to the cluster:

kubeadm init phase upload-certs --upload-certs

Copy the certificates generated on master1 to the other master nodes:

scp /etc/kubernetes/admin.conf 10.38.0.50:/etc/kubernetes/

scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} 10.38.0.50:/etc/kubernetes/pki 

scp /etc/kubernetes/pki/apiserver-etcd-client.* 10.38.0.50:/etc/kubernetes/pki

scp /etc/kubernetes/pki/etcd/ca.* 10.38.0.50:/etc/kubernetes/pki/etcd 

On master1, generate the join token (by default the token expires after 24 hours):

kubeadm token create --print-join-command

Run the join command on master2; --control-plane means it joins as a master node. Other guides also pass --certificate-key (the key produced by kubeadm init phase upload-certs --upload-certs), but that approach kept failing here with: error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get "https://127.0.0.1:8443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 127.0.0.1:8443: connect: connection refused. The apiserver address seems to resolve to 127.0.0.1:8443, which is wrong (the VIP should be used), so instead of --certificate-key, the certificates copied to the new master above are used for authentication.

 kubeadm join 10.38.0.58:8443 --token kq09ow.i5i7yf77wn6orwzo     --discovery-token-ca-cert-hash sha256:a532c49b0e34bbb27d019af344fe1dc777eb162aed4db2a527a1a63e5122fccf --control-plane

After the command completes, check the node list to confirm the node was added successfully.

Check the Kubernetes certificate expiry dates:

kubeadm certs check-expiration

Check the apiserver certificate expiry on its own:

openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep 'Not'

 

 

 
 

7. Manually sign the apiserver certificate with openssl (alternative method)

1) Generate a private key

openssl genpkey -algorithm RSA -out apiserver.key

2) Generate a certificate signing request

openssl req -new -key apiserver.key -out csr.csr -config apiserver.cnf
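The apiserver.cnf referenced here is not shown in the original. A hypothetical minimal version for this cluster might look like the following; every value is an assumption to adapt (the service IP 10.96.0.1 in particular depends on your service CIDR):

```ini
[ req ]
prompt             = no
distinguished_name = dn
req_extensions     = v3_req

[ dn ]
CN = kube-apiserver

[ v3_req ]
keyUsage         = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = master.k8s.io
DNS.6 = test-ceshi-master-1
DNS.7 = test-ceshi-master-2
DNS.8 = test-ceshi-master-3
IP.1  = 10.38.0.144
IP.2  = 10.38.0.58
IP.3  = 10.38.0.50
IP.4  = 10.38.0.166
IP.5  = 10.96.0.1
```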

3) Have the CA sign the certificate, valid for 100 years (either form works; the second reuses an existing serial file instead of creating one):

openssl x509 -req -days 36500 -in csr.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt -extensions v3_req -extfile apiserver.cnf

openssl x509 -req -days 36500 -in csr.csr -CA ca.crt -CAkey ca.key -CAserial ca.srl  -out apiserver.crt -extensions v3_req -extfile apiserver.cnf

4) Inspect the certificate:

openssl x509 -in apiserver.crt -noout -text

5) Upload the renewed certificates to the cluster:

kubeadm init phase upload-certs --upload-certs

posted @ 潇潇暮鱼鱼