Offline Deployment of K8s v1.29.1

Preparation

The OS ISO image used is CentOS-7-x86_64-Everything-1908.iso.

The installation profile is Server with GUI.

Architecture

K8s cluster plan:

VIP: 192.168.24.2

         provided by keepalived

harbor: image registry, NFS, NTP

         has Internet access;

         internal address: 192.168.24.5

k8s-master0:

         internal address: 192.168.24.10

k8s-master1:

         internal address: 192.168.24.11

k8s-master2:

         internal address: 192.168.24.12

k8s-node0:

         internal address: 192.168.24.15

k8s-node1:

         internal address: 192.168.24.16

k8s-node2:

         internal address: 192.168.24.17

The harbor host runs the following services:

         Harbor private image registry

         NTP time-synchronization service

         NFS share providing the offline packages and other shared files, located at /nfs

Deploy Harbor

See: https://www.cnblogs.com/love-DanDan/p/17977316
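Once Harbor is up, a quick check against its health endpoint confirms it is reachable (a sketch, assuming Harbor 2.x serving HTTPS on 192.168.24.5):

curl -sk https://192.168.24.5/api/v2.0/health
#expected: a JSON body reporting "status":"healthy"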

Deploy the NTP Service

Perform this step on the harbor host:

systemctl disable chronyd.service && systemctl stop chronyd.service

 

timedatectl set-timezone 'Asia/Shanghai'

rm -rf /etc/localtime

ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

 

#Edit the NTP configuration: comment out the default upstream servers, then add our own

line_number=$(awk '/server 3.centos.pool.ntp.org iburst/{print NR}' /etc/ntp.conf)

sed -i -e 's/^server/#server/' /etc/ntp.conf

sed -i "$line_number a server time.windows.com iburst" /etc/ntp.conf

sed -i "$line_number a server ntp.aliyun.com iburst" /etc/ntp.conf

line_number=$(awk '/restrict ::1/{print NR}' /etc/ntp.conf)

sed -i "$line_number a restrict 192.168.24.1 mask 255.255.255.0 nomodify" /etc/ntp.conf

 

systemctl start ntpd && systemctl enable ntpd
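Once ntpd is running, verify that it has picked an upstream source (the peer marked with * is the one currently selected):

ntpq -p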

 

Deploy the NFS Service

Perform this step on the harbor host:

mkdir /nfs

chmod 777 -Rf /nfs/    #grant 777 on /nfs, otherwise clients will hit permission problems

yum install nfs-utils -y

#Allow hosts on the 192.168.24.* network to access /nfs; root is mapped to the anonymous user (root_squash).

echo '/nfs 192.168.24.*(rw,sync,root_squash)' >> /etc/exports

#需要rpc服务配合

systemctl enable rpcbind nfs-server.service && systemctl restart rpcbind nfs-server.service
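A quick check that the export is visible:

showmount -e localhost
exportfs -v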

 

Prepare Software Packages

Perform this step on the harbor host. You may need to configure some yum repos in advance and download the following packages:

Docker

Commands:

mkdir -p /nfs/docker-ce && cd /nfs/docker-ce

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-25.0.1-1.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-rootless-extras-25.0.1-1.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-25.0.1-1.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.6.27-3.1.el7.x86_64.rpm

wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm

yumdownloader --destdir=/nfs/docker-ce slirp4netns docker-compose-plugin fuse-overlayfs docker-buildx-plugin

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm

wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/fuse3-libs-3.6.1-4.el7.x86_64.rpm

wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/slirp4netns-0.4.3-4.el7_8.x86_64.rpm

wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm

 

Downloaded packages:

container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm

docker-compose-plugin-2.24.2-1.el7.x86_64.rpm

docker-buildx-plugin-0.12.1-1.el7.x86_64.rpm

docker-ce-cli-25.0.1-1.el7.x86_64.rpm

containerd.io-1.6.27-3.1.el7.x86_64.rpm

libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm

libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm

libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm

fuse3-libs-3.6.1-4.el7.x86_64.rpm

slirp4netns-0.4.3-4.el7_8.x86_64.rpm

fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm

docker-ce-25.0.1-1.el7.x86_64.rpm

docker-ce-rootless-extras-25.0.1-1.el7.x86_64.rpm

 

K8s

Yum repo:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/

enabled=1

gpgcheck=1

gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key

exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

EOF

Command:

yumdownloader --destdir=/nfs  --disableexcludes=kubernetes kubelet kubeadm kubectl cri-tools kubernetes-cni conntrack socat

Packages:

kubeadm-1.29.1-150500.1.1.x86_64.rpm

kubelet-1.29.1-150500.1.1.x86_64.rpm

kubectl-1.29.1-150500.1.1.x86_64.rpm

cri-tools-1.29.0-150500.1.1.x86_64.rpm

kubernetes-cni-1.3.0-150500.1.1.x86_64.rpm

socat-1.7.3.2-2.el7.x86_64.rpm

conntrack-tools-1.4.4-7.el7.x86_64.rpm

crictl

Command:

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz -O /nfs/crictl-v1.29.0-linux-amd64.tar.gz

Package:

crictl-v1.29.0-linux-amd64.tar.gz
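The cri-tools rpm above already ships crictl, so this tarball is a fallback; if you do use it, it can be unpacked on a target host like this (a sketch):

tar -C /usr/local/bin -xzf /nfs/crictl-v1.29.0-linux-amd64.tar.gz
crictl --version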

keepalived

Commands:

yumdownloader --destdir=/nfs keepalived

cd /nfs

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/net-snmp-agent-libs-5.7.2-49.el7.x86_64.rpm

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/net-snmp-libs-5.7.2-49.el7.x86_64.rpm

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/perl-Data-Dumper-2.145-3.el7.x86_64.rpm

 

Packages:

keepalived-1.3.5-19.el7.x86_64.rpm

net-snmp-agent-libs-5.7.2-49.el7.x86_64.rpm

net-snmp-libs-5.7.2-49.el7.x86_64.rpm

perl-Data-Dumper-2.145-3.el7.x86_64.rpm

Nginx

Yum repo:

See:

https://nginx.org/en/linux_packages.html

cat > /etc/yum.repos.d/nginx.repo <<'EOF'

[nginx-mainline]

name=nginx mainline repo

baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/

gpgcheck=1

enabled=1

gpgkey=https://nginx.org/keys/nginx_signing.key

module_hotfixes=true

EOF

Command:

yumdownloader --destdir=/nfs yum-utils nginx

Packages:

yum-utils-1.1.31-54.el7_8.noarch.rpm

nginx-1.25.3-1.el7.ngx.x86_64.rpm

kubectl Auto-completion

Commands:

cd /nfs

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/bash-completion-2.1-8.el7.noarch.rpm

Package:

bash-completion-2.1-8.el7.noarch.rpm

 

Kernel Upgrade

Commands:

mkdir /nfs/kernel -p && cd /nfs/kernel

wget https://mirrors.aliyun.com/elrepo/kernel/el7/x86_64/RPMS/kernel-lt-5.4.267-1.el7.elrepo.x86_64.rpm

Package:

kernel-lt-5.4.267-1.el7.elrepo.x86_64.rpm

 

 

 

Prepare Container Images

Perform this step on the harbor host. First install kubeadm:

yum install -y --showduplicates kubeadm-1.29.1 kubelet-1.29.1 kubectl-1.29.1 --disableexcludes=kubernetes

Prepare the images kubeadm needs

Use the following command to list the images a K8s installation requires.

Images needed for an online installation:

[root@k8s-master0 ~]# kubeadm config images list

registry.k8s.io/kube-apiserver:v1.29.1

registry.k8s.io/kube-controller-manager:v1.29.1

registry.k8s.io/kube-scheduler:v1.29.1

registry.k8s.io/kube-proxy:v1.29.1

registry.k8s.io/coredns/coredns:v1.11.1

registry.k8s.io/pause:3.9

registry.k8s.io/etcd:3.5.10-0

[root@k8s-master0 ~]#

Image names when using a custom registry:

[root@k8s-master0 ~]# kubeadm config images list --image-repository 192.168.24.5/k8s

192.168.24.5/k8s/kube-apiserver:v1.29.1

192.168.24.5/k8s/kube-controller-manager:v1.29.1

192.168.24.5/k8s/kube-scheduler:v1.29.1

192.168.24.5/k8s/kube-proxy:v1.29.1

192.168.24.5/k8s/coredns:v1.11.1

192.168.24.5/k8s/pause:3.9

192.168.24.5/k8s/etcd:3.5.10-0

[root@k8s-master0 ~]#

Pull the images and push them to Harbor.

Pull:

for image_name in registry.k8s.io/kube-apiserver:v1.29.1 registry.k8s.io/kube-controller-manager:v1.29.1 registry.k8s.io/kube-scheduler:v1.29.1 registry.k8s.io/kube-proxy:v1.29.1 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0  ; do ctr image pull $image_name --all-platforms; done

Push:

k8s_image=$(ctr image list -q | grep "registry.k8s.io")

for image in $k8s_image ;do new_name=$(echo $image|sed "s@registry.k8s.io@192.168.24.5/k8s@g");ctr image tag $image $new_name;ctr image push --skip-verify --user k8s:Lovedan@971220 $new_name --plain-http;done;

coredns ends up with a nested path after the rename (k8s/coredns/coredns), while kubeadm expects k8s/coredns, so it needs an extra tag and push:

ctr image tag 192.168.24.5/k8s/coredns/coredns:v1.11.1 192.168.24.5/k8s/coredns:v1.11.1

ctr image push --skip-verify --user k8s:Lovedan@971220 192.168.24.5/k8s/coredns:v1.11.1 --plain-http
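To confirm the pushes landed, the registry's v2 API can be queried (a sketch, using the k8s project and the credentials from above):

curl -sk -u 'k8s:Lovedan@971220' https://192.168.24.5/v2/k8s/pause/tags/list
#expected: {"name":"k8s/pause","tags":["3.9"]}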

Prepare the images for the Flannel network plugin

Choose images according to your actual needs.

Download the Flannel manifest: https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Extract the image names from it, then pull the images and push them to Harbor:

for image_name in docker.io/flannel/flannel:v0.24.2 docker.io/flannel/flannel-cni-plugin:v1.4.0-flannel1; do ctr image pull $image_name --all-platforms; done

k8s_image=$(ctr image list -q | grep "docker.io/flannel")

for image in $k8s_image ;do new_name=$(echo $image|sed "s@docker.io/flannel@192.168.24.5/k8s@g");ctr image tag $image $new_name;ctr image push --skip-verify --user k8s:Lovedan@971220 $new_name --plain-http;done;

Delete the local images:

ctr image rm $(ctr image list -q)

Prepare Files

Put the Flannel deployment yaml into /nfs:

cd /nfs

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Also rewrite the image names inside it:

sed -i 's:docker.io/flannel:192.168.24.5/k8s:' kube-flannel.yml
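A quick check that no docker.io references remain:

grep 'image:' kube-flannel.yml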

Deploy K8s

Upgrade the Kernel

Perform this on every host that will join the K8s cluster.

echo -e "192.168.24.5 harbor\n192.168.50.10 k8s-master0\n192.168.50.11 k8s-master1\n192.168.50.12 k8s-master2\n192.168.50.16 k8s-node0\n192.168.50.17 k8s-node1\n192.168.50.18 k8s-node2\n" >> /etc/hosts

mkdir /nfs && chmod 777 -Rf /nfs/

echo "harbor:/nfs /nfs nfs defaults 0 0" >> /etc/fstab

mount -t nfs harbor:/nfs /nfs

cd /nfs/kernel/ && rpm -ivh kernel-lt-5.4.267-1.el7.elrepo.x86_64.rpm

grub2-set-default 0

#List the kernel menu entries:

awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg

grub2-mkconfig -o /boot/grub2/grub.cfg

reboot

uname -sa

Linux k8s-master0 5.4.267-1.el7.elrepo.x86_64 #1 SMP Tue Jan 16 13:02:38 EST 2024 x86_64 x86_64 x86_64 GNU/Linux

[root@k8s-master0 ~]#

Base Environment Preparation

Perform this on every host that will join the K8s cluster.

Note: make sure a gateway is written into the NIC configuration file.

#Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

#Disable SELinux and swap (takes effect after the reboot recommended below)

sed -i 's/enforcing/disabled/' /etc/selinux/config

sed -ri 's/.*swap.*/#&/' /etc/fstab

#Set the hostname according to your cluster plan, e.g.: hostnamectl set-hostname k8s-master0

 

#Already done during the kernel upgrade

#echo -e "192.168.24.5 harbor\n192.168.24.10 k8s-master0\n192.168.24.11 k8s-master1\n192.168.24.12 k8s-master2\n192.168.24.15 k8s-node0\n192.168.24.16 k8s-node1\n192.168.24.17 k8s-node2\n" >> /etc/hosts

#Kernel parameters; load br_netfilter first so the bridge sysctls below exist

modprobe br_netfilter && echo br_netfilter > /etc/modules-load.d/k8s.conf

echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" >/etc/sysctl.d/k8s.conf

sysctl --system

#Mount NFS

#Already done during the kernel upgrade

# mkdir /nfs

# chmod 777 -Rf /nfs/

# echo "harbor:/nfs /nfs nfs defaults 0 0" >> /etc/fstab

# mount -t nfs harbor:/nfs /nfs

#Set up time synchronization

systemctl disable chronyd.service && systemctl stop chronyd.service

timedatectl set-timezone 'Asia/Shanghai'

rm -rf /etc/localtime

ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

 

#Point ntpd at the harbor host

line_number=$(awk '/server 3.centos.pool.ntp.org iburst/{print NR}' /etc/ntp.conf)   #this match targets the stock CentOS 7 ntp.conf; other releases may differ

sed -i -e 's/^server/#server/' /etc/ntp.conf

sed -i "$line_number a server harbor iburst" /etc/ntp.conf


ntpdate harbor

systemctl start ntpd && systemctl enable ntpd

 

#Archive the online yum repo files so that only local rpm installs are used from here on
cd /etc/yum.repos.d/ && tar -cf all.tar *.repo --remove-files

 

#Install Docker

cd /nfs/docker-ce

yum localinstall container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm -y

yum localinstall docker-compose-plugin-2.24.2-1.el7.x86_64.rpm docker-buildx-plugin-0.12.1-1.el7.x86_64.rpm -y

yum localinstall docker-ce-cli-25.0.1-1.el7.x86_64.rpm -y

yum localinstall containerd.io-1.6.27-3.1.el7.x86_64.rpm -y

yum localinstall libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm -y

yum localinstall  fuse3-libs-3.6.1-4.el7.x86_64.rpm -y

yum localinstall  slirp4netns-0.4.3-4.el7_8.x86_64.rpm  fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm  -y

#These last two must be installed together; installing them separately fails due to their circular dependency.

yum localinstall  docker-ce-25.0.1-1.el7.x86_64.rpm  docker-ce-rootless-extras-25.0.1-1.el7.x86_64.rpm -y

 

echo '{"insecure-registries": ["http://192.168.24.5:80"]}' > /etc/docker/daemon.json

systemctl enable docker.service && systemctl start docker.service
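Optionally, confirm that Docker can authenticate against Harbor with the k8s account used earlier:

docker login 192.168.24.5 -u k8s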

#Copy the CA certificate from the harbor host:

mkdir /etc/containerd/certs.d/192.168.24.5/ -p

scp root@192.168.24.5:/etc/containerd/certs.d/192.168.24.5/ca.crt /etc/containerd/certs.d/192.168.24.5

 

#Generate the default configuration file

containerd config default > /etc/containerd/config.toml

#Point the sandbox image at our registry

sed -i 's|registry.k8s.io/pause:3.6|192.168.24.5/k8s/pause:3.9|' /etc/containerd/config.toml

#The following sections also need editing:

vi /etc/containerd/config.toml

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.24.5".tls]
          ca_file = "/etc/containerd/certs.d/192.168.24.5/ca.crt"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.24.5".auth]
          username = "k8s"
          password = "Lovedan@971220"

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.24.5"]
          endpoint = ["https://192.168.24.5"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

#These settings must be in place so that kubeadm can pull images later without problems.

 

#Install kubelet, kubectl, kubeadm

yum localinstall /nfs/kubernetes-cni-1.3.0-150500.1.1.x86_64.rpm -y

yum localinstall /nfs/docker-ce/libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm -y

yum localinstall /nfs/docker-ce/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm -y

yum localinstall /nfs/docker-ce/libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm -y

yum localinstall /nfs/conntrack-tools-1.4.4-7.el7.x86_64.rpm -y

yum localinstall /nfs/socat-1.7.3.2-2.el7.x86_64.rpm -y

yum localinstall /nfs/kubelet-1.29.1-150500.1.1.x86_64.rpm -y

yum localinstall /nfs/cri-tools-1.29.0-150500.1.1.x86_64.rpm  -y

yum localinstall /nfs/kubectl-1.29.1-150500.1.1.x86_64.rpm -y

yum localinstall /nfs/kubeadm-1.29.1-150500.1.1.x86_64.rpm -y

#Set up kubectl auto-completion (worker nodes can skip this)

yum localinstall /nfs/bash-completion-2.1-8.el7.noarch.rpm -y

sleep 5

source /usr/share/bash-completion/bash_completion

kubectl completion bash | tee /etc/bash_completion.d/kubectl > /dev/null

chmod a+r /etc/bash_completion.d/kubectl

source ~/.bashrc

 

#Start kubelet on boot

systemctl enable kubelet.service

#Add a static route for the cluster subnet; ens33 is the NIC name, adjust to your environment

echo '192.168.24.0/24 via 192.168.24.1' > /etc/sysconfig/network-scripts/route-ens33

 

#Set the default CRI endpoints for crictl

crictl config runtime-endpoint unix:///run/containerd/containerd.sock

crictl config image-endpoint unix:///run/containerd/containerd.sock

systemctl daemon-reload && systemctl restart containerd
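With containerd restarted, you can verify that it can pull from Harbor through the CRI:

crictl pull 192.168.24.5/k8s/pause:3.9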

 

 

At this point, rebooting is strongly recommended (on every host that will run K8s).

Install and Configure keepalived and Nginx

Perform this on the hosts acting as masters.

keepalived

Install

yum localinstall -y /nfs/perl-Data-Dumper-2.145-3.el7.x86_64.rpm

yum localinstall -y /nfs/net-snmp-libs-5.7.2-49.el7.x86_64.rpm

yum localinstall -y /nfs/net-snmp-agent-libs-5.7.2-49.el7.x86_64.rpm

yum localinstall -y /nfs/keepalived-1.3.5-19.el7.x86_64.rpm

Configure

On master0 (all three nodes use state MASTER; the differing priority values decide which one actually holds the VIP):

[root@centos-k8s-master0 ~]# cat /etc/keepalived/keepalived.conf

global_defs {

   router_id LVS_DEVEL

   vrrp_skip_check_adv_addr

#   vrrp_strict

   vrrp_garp_interval 0

   vrrp_gna_interval 0

}

 

vrrp_instance VI_1 {

    state MASTER

    interface ens33

    virtual_router_id 51

    priority 100

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 971225

    }

    virtual_ipaddress {

        192.168.24.2

    }

}

[root@centos-k8s-master0 ~]#

On master1:

[root@centos-k8s-master1 ~]# cat /etc/keepalived/keepalived.conf

global_defs {

   router_id LVS_DEVEL

   vrrp_skip_check_adv_addr

#   vrrp_strict

   vrrp_garp_interval 0

   vrrp_gna_interval 0

}

 

vrrp_instance VI_1 {

    state MASTER

    interface ens33

    virtual_router_id 51

    priority 80

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 971225

    }

    virtual_ipaddress {

        192.168.24.2

    }

}

[root@centos-k8s-master1 ~]#

On master2:

[root@centos-k8s-master2 ~]# cat /etc/keepalived/keepalived.conf

global_defs {

   router_id LVS_DEVEL

   vrrp_skip_check_adv_addr

#   vrrp_strict

   vrrp_garp_interval 0

   vrrp_gna_interval 0

}

 

vrrp_instance VI_1 {

    state MASTER

    interface ens33

    virtual_router_id 51

    priority 60

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 971225

    }

    virtual_ipaddress {

        192.168.24.2

    }

}

[root@centos-k8s-master2 ~]#

Start the service:

systemctl enable keepalived.service && systemctl restart keepalived.service

Check the virtual IP; it should currently be on master0:

[root@k8s-master0 ~]# ip a|grep 'inet 192.168.24.2'

    inet 192.168.24.2/32 scope global ens33

[root@k8s-master0 ~]#
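A quick failover test (run on master0): stop keepalived and the VIP should appear on master1 within a few seconds, then move back after a restart.

systemctl stop keepalived
#on master1: ip a | grep 'inet 192.168.24.2'
systemctl start keepalived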

Nginx

Install

yum localinstall /nfs/yum-utils-1.1.31-54.el7_8.noarch.rpm -y

yum localinstall -y /nfs/nginx-1.25.3-1.el7.ngx.x86_64.rpm

Configure

[root@centos-k8s-master0 ~]# cat /etc/nginx/nginx.conf

 

user  nginx;

worker_processes  auto;

 

error_log  /var/log/nginx/error.log notice;

pid        /var/run/nginx.pid;

 

 

events {

    worker_connections  1024;

}

#Only this stream block was added

stream{

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {

       server 192.168.24.10:6443;       #master0's IP and port 6443

       server 192.168.24.11:6443;       #master1's IP and port 6443

       server 192.168.24.12:6443;       #master2's IP and port 6443

    }

    server {

       listen 16443;    #listen on 16443 rather than 6443, since nginx shares these machines with the kube-apiservers

       proxy_pass k8s-apiserver;    #reverse proxy the TCP stream to the apiservers

    }

}

http {

    include       /etc/nginx/mime.types;

    default_type  application/octet-stream;

 

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

                      '$status $body_bytes_sent "$http_referer" '

                      '"$http_user_agent" "$http_x_forwarded_for"';

 

    access_log  /var/log/nginx/access.log  main;

 

    sendfile        on;

    #tcp_nopush     on;

 

    keepalive_timeout  65;

 

    #gzip  on;

 

    include /etc/nginx/conf.d/*.conf;

}

[root@centos-k8s-master0 ~]#
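Validate the configuration before starting:

nginx -t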

Start nginx:

systemctl enable nginx && systemctl restart nginx

Check Nginx status:

[root@k8s-master0 ~]# systemctl status nginx.service

● nginx.service - nginx - high performance web server

   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)

   Active: active (running) since Fri 2024-01-26 13:43:45 CST; 29s ago

Initialize the Masters

Initialize the master0 host first.

kubeadm init --apiserver-advertise-address=192.168.24.10 --image-repository 192.168.24.5/k8s --kubernetes-version v1.29.1 --apiserver-bind-port=6443 --control-plane-endpoint=192.168.24.2:16443 --upload-certs --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Alternatively, if you are the root user, you can run:

 

  export KUBECONFIG=/etc/kubernetes/admin.conf

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of the control-plane node running the following command on each as root:

 

  kubeadm join 192.168.24.2:16443 --token b82f3g.tvq08xgbo66b8qwj \

        --discovery-token-ca-cert-hash sha256:d4563e2bb534cfeba098c3985e19b37c1906e4c84ccc28b202129847912dfaee \

        --control-plane --certificate-key 70cd80056e7fe0c1c0761ca5b7525ca2bda2978855e5051f40981a2cf5976cde

 

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!

As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use

"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join 192.168.24.2:16443 --token b82f3g.tvq08xgbo66b8qwj \

        --discovery-token-ca-cert-hash sha256:d4563e2bb534cfeba098c3985e19b37c1906e4c84ccc28b202129847912dfaee
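As an aside, the same init flags can be captured in a kubeadm configuration file for repeatable installs (a sketch using the v1beta3 API; not used in this deployment):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.24.10
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.1
imageRepository: 192.168.24.5/k8s
controlPlaneEndpoint: 192.168.24.2:16443
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
EOF
#kubeadm init --config kubeadm-config.yaml --upload-certs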

Problems Encountered

After the second master joined, the third one could not join.

The cause was a duplicate hostname: the hostname had not been changed on each machine.

Deploy the Flannel Network Plugin

[root@k8s-master0 ~]# kubectl create -f /nfs/kube-flannel.yml

namespace/kube-flannel created

serviceaccount/flannel created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds created

[root@k8s-master0 ~]# kubectl get -n kube-flannel pod -o wide

NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE          NOMINATED NODE   READINESS GATES

kube-flannel-ds-7dh6s   1/1     Running   0          93s   192.168.24.12   k8s-master2   <none>           <none>

kube-flannel-ds-g4m7m   1/1     Running   0          93s   192.168.24.10   k8s-master0   <none>           <none>

kube-flannel-ds-zttsw   1/1     Running   0          93s   192.168.24.11   k8s-master1   <none>           <none>

[root@k8s-master0 ~]#

Join the Worker Nodes to the Cluster

Check that the kernel upgrade and base environment steps have been done on each worker; if not, complete them first.

If the worker join command has been lost, regenerate it with:

kubeadm token create --print-join-command

kubeadm join 192.168.24.2:16443 --token j88id7.7j2i4rqxant4mg31 --discovery-token-ca-cert-hash sha256:867d2958d687b509450c793948ab62f7e72d5e3dc8b72e428fda8a69cec69836

Join the cluster:

[root@k8s-node2 ~]# kubeadm join 192.168.24.2:16443 --token j88id7.7j2i4rqxant4mg31 --discovery-token-ca-cert-hash sha256:867d2958d687b509450c793948ab62f7e72d5e3dc8b72e428fda8a69cec69836

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

 

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

 

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

[root@k8s-node2 ~]#

[root@k8s-master0 ~]# kubectl get node

NAME          STATUS   ROLES           AGE   VERSION

k8s-master0   Ready    control-plane   33h   v1.29.1

k8s-master1   Ready    control-plane   33h   v1.29.1

k8s-master2   Ready    control-plane   33h   v1.29.1

k8s-node0     Ready    <none>          32s   v1.29.1

k8s-node1     Ready    <none>          32s   v1.29.1

k8s-node2     Ready    <none>          32s   v1.29.1

[root@k8s-master0 ~]#
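As a final smoke test, schedule a pod that uses an image from the private registry (the pause image pushed earlier), then clean it up:

kubectl run smoke-test --image=192.168.24.5/k8s/pause:3.9 --restart=Never
kubectl get pod smoke-test -o wide
kubectl delete pod smoke-test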

With that, the offline deployment of the K8s v1.29.1 cluster is complete.
