k8s Cluster Deployment (Happy April Fools' Day)

Cluster Planning

Software        Version                                 Notes
OS              CentOS Linux release 7.9.2009 (Core)
kubernetes      v1.29.2
docker          Docker version 25.0.3, build 4debf41
calico          v3.27.2

Role            IP                  Notes
k8s-master-01   192.168.11.121
k8s-node-01     192.168.11.122
k8s-node-02     192.168.11.123

I. Environment Preparation

1. Set hostnames and hosts entries

cat >> /etc/hosts << EOF 
192.168.11.121 k8s-master-01
192.168.11.122 k8s-node-01
192.168.11.123 k8s-node-02 
EOF
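
The hosts entries above only provide name resolution; each node's own hostname still has to be set. A minimal sketch, assuming the hostnames from the planning table (run the matching command on each machine):

hostnamectl set-hostname k8s-master-01   # on 192.168.11.121
hostnamectl set-hostname k8s-node-01     # on 192.168.11.122
hostnamectl set-hostname k8s-node-02     # on 192.168.11.123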

2. Disable firewalld and SELinux

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux permanently in the config file, and immediately for the current boot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
setenforce 0
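
A quick check that SELinux is off for the current boot and will stay off after a reboot:

getenforce                              # should print Permissive or Disabled
grep '^SELINUX=' /etc/selinux/config    # should print SELINUX=disabled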

3. Enable time synchronization

yum install chrony ntpdate -y
ntpdate ntp1.aliyun.com
# Edit /etc/chrony.conf and replace the default server lines with:
vim /etc/chrony.conf
server ntp1.aliyun.com iburst
server ntp.aliyun.com iburst
# Write the system time to the hardware clock
hwclock -w
systemctl start chronyd && systemctl enable chronyd && chronyc sources && date
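
To confirm the node is actually synchronizing, chrony's own status commands can be used:

chronyc tracking     # shows the current reference server and time offset
timedatectl          # "NTP synchronized: yes" once chronyd has synced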

4. Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
grep swap /etc/fstab
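
After swapoff, free should report no swap at all:

free -h     # the Swap line should show 0B total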

5. Configure IP forwarding and bridge filtering

# Load the br_netfilter module first, otherwise the bridge sysctls below cannot be applied
modprobe br_netfilter
lsmod | grep br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl --system
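
modprobe only loads br_netfilter for the current boot. To have it load automatically after a reboot, one option (a sketch using systemd-modules-load, which CentOS 7 provides) is:

cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF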

6. Configure IPVS forwarding

yum -y install ipset ipvsadm
# Configure how the IPVS modules are loaded
# Add the modules that need to be loaded
mkdir -p /etc/sysconfig/ipvsadm
cat > /etc/sysconfig/ipvsadm/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# Make the script executable, run it, and verify the modules are loaded
chmod 755 /etc/sysconfig/ipvsadm/ipvs.modules && bash /etc/sysconfig/ipvsadm/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
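
Note that the script above is only run by hand here, so the IPVS modules are not reloaded automatically after a reboot. A simple alternative sketch, again relying on systemd-modules-load, is:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF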

7. Upgrade the OS kernel

# Check the current kernel version
[root@node2 yum.repos.d]# uname -r
3.10.0-1160.el7.x86_64
# 7. Upgrade the OS kernel (use as recent a kernel as possible in production; 5.4+ recommended)
# 7.1 Import the ELRepo GPG key
sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# 7.2 Install the ELRepo yum repository
sudo yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# 7.3 Install the kernel-lt package (lt = long-term support; ml = mainline, i.e. the newest kernel)
sudo yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
# 7.4 Set the default GRUB2 boot entry to 0 (the newly installed kernel)
sudo grub2-set-default 0
# 7.5 Regenerate the GRUB2 configuration
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# 7.6 Reboot for the new kernel to take effect
reboot
# 7.7 Check the kernel version after the reboot
[root@k8s-node-01 ~]# uname -r
5.4.269-1.el7.elrepo.x86_64
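
Before rebooting in step 7.6, you can confirm which kernel GRUB2 will boot by default; grubby ships with CentOS 7:

grubby --default-kernel     # should point at the newly installed 5.4 kernel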

II. Deploy docker-ce

1. Add the docker-ce repo from the Aliyun mirror

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install docker

yum -y install docker-ce
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://f5l1gayk.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com"
  ]
}
EOF
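
Before starting docker it does not hurt to confirm the JSON is well formed; CentOS 7 ships Python 2, whose json.tool module is enough for this check:

python -m json.tool /etc/docker/daemon.json    # prints the parsed JSON, or an error if the file is malformed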

3. Give docker proxy access so it can pull the current Kubernetes images from the foreign registry https://registry.k8s.io.

mkdir /etc/systemd/system/docker.service.d/
cat >/etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.11.55:7890" "HTTPS_PROXY=http://192.168.11.55:7890"
EOF
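
It is usually worth excluding local addresses from the proxy so traffic to the nodes themselves is not proxied. A hedged example addition to the same drop-in (node IPs taken from the cluster plan, adjust to your environment), plus the daemon-reload needed whenever systemd drop-ins change:

cat >> /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
Environment="NO_PROXY=localhost,127.0.0.1,192.168.11.121,192.168.11.122,192.168.11.123"
EOF
systemctl daemon-reload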

4. With the proxy up and reachable, start docker (otherwise you will see errors)

systemctl enable docker && systemctl start docker && systemctl status docker && docker info|grep systemd

III. Deploy cri-dockerd

Download the v0.3.10 release from the Mirantis/cri-dockerd GitHub releases page:
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.10/cri-dockerd-0.3.10.amd64.tgz
tar -zxf cri-dockerd-0.3.10.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
cat > /etc/systemd/system/cri-docker.service <<'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
# ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
# Image used as the base container for Pods (the "pause" image)
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.k8s.io/pause:3.9 --container-runtime-endpoint fd:// 
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

cat > /etc/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

systemctl daemon-reload
systemctl enable cri-docker && systemctl start cri-docker && systemctl status cri-docker

IV. Deploy kubelet/kubeadm/kubectl

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
You can also choose a specific version to install:
sudo yum list kubeadm.x86_64 --showduplicates |sort -r
sudo yum list kubelet.x86_64 --showduplicates |sort -r
sudo yum list kubectl.x86_64 --showduplicates |sort -r
This lists the versions available from yum; install the one you want with yum install (see the pinned-version example below).
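For example, to pin everything to the version from the cluster plan (v1.29.2; a sketch assuming the usual name-version package pattern of this repo):

sudo yum install -y kubelet-1.29.2 kubeadm-1.29.2 kubectl-1.29.2 --disableexcludes=kubernetes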
# Set the kubelet cgroup driver to systemd so it matches docker
cp /etc/sysconfig/kubelet{,.bak}
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
systemctl enable kubelet
# Install the bash completion helper (optional)
yum install bash-completion -y 
source /usr/share/bash-completion/bash_completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source  ~/.bashrc   
# List the images kubeadm needs
[root@k8s-node-01 ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.29.2
registry.k8s.io/kube-controller-manager:v1.29.2
registry.k8s.io/kube-scheduler:v1.29.2
registry.k8s.io/kube-proxy:v1.29.2
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.10-0
Since docker is going through the proxy, it is worth pulling the Kubernetes images now and saving them for later offline use in China. Here a script is used:
cat > image_download.sh <<'EOF'
#!/bin/bash
images_list='
registry.k8s.io/kube-apiserver:v1.29.2
registry.k8s.io/kube-controller-manager:v1.29.2
registry.k8s.io/kube-scheduler:v1.29.2
registry.k8s.io/kube-proxy:v1.29.2
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.10-0
'
for i in $images_list
do
        docker pull $i
done
docker save -o k8s-1-29-2.tar $images_list
EOF
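The script only needs to run on a machine that can reach registry.k8s.io through the proxy; the resulting archive can then be copied to the other nodes (a sketch assuming root SSH access to the node IPs from the plan):

bash image_download.sh
scp k8s-1-29-2.tar root@192.168.11.122:/root/
scp k8s-1-29-2.tar root@192.168.11.123:/root/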
Another command that can pull the images:
kubeadm config images pull --cri-socket unix:///var/run/cri-dockerd.sock --kubernetes-version=v1.29.2
Later, the saved image archive can be loaded directly:
docker load -i k8s-1-29-2.tar

V. Cluster Initialization

kubeadm init --kubernetes-version=v1.29.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.11.121 --cri-socket unix:///var/run/cri-dockerd.sock

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
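
At this point kubectl on the master should be able to reach the API server:

kubectl cluster-info
kubectl get pods -n kube-system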

Run on node1 and node2:
kubeadm join 192.168.11.121:6443 --token cwky34.xg90mv77pemyygt6 --discovery-token-ca-cert-hash sha256:63dabe6aaa643b99936c6832d38a3940c9e19f62afb19568e2e860d98a16ceb7 --cri-socket unix:///var/run/cri-dockerd.sock
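
If the bootstrap token has expired by the time the workers join, a fresh join command can be printed on the master (the --cri-socket flag still has to be appended by hand):

kubeadm token create --print-join-command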

kubectl get nodes


Because the network plugin has not been installed yet, all nodes are in the NotReady state.

VI. Deploy the Calico Network Plugin

https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
Install following the official docs:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml

proxychains4 wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
vim custom-resources.yaml   # edit the ipPools cidr so it matches the --pod-network-cidr passed to kubeadm init (10.244.0.0/16)
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

kubectl create -f  custom-resources.yaml

watch kubectl get pods -n calico-system

The downloads are quite slow and errors keep scrolling by (my VPN is admittedly cheap, but it works). Just wait: the errors gradually taper off, and once everything in calico-system is Running you are done.


Everything is Running now.

Quickly back up the Calico images:

docker save -o k8s-calico-v3.27.2.tar  calico/cni calico/node-driver-registrar calico/csi calico/pod2daemon-flexvol calico/node
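
The exact set of Calico images differs between nodes (the operator, typha, kube-controllers and apiserver images usually only exist where those pods were scheduled), so it may be worth listing what is actually present on a node and adjusting the save command accordingly:

docker images | grep -E 'calico|tigera'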

Readers without proxy access can use the offline image archive instead: remove the half-finished install, load the archive on every node with docker load, and then re-create the Calico manifests.
kubectl delete -f custom-resources.yaml
kubectl delete -f tigera-operator.yaml
docker load -i k8s-calico-v3.27.2.tar
Check node status:
kubectl get nodes

kubectl get pods -n kube-system

References:
https://blog.csdn.net/weixin_41904935/article/details/135894609
https://blog.csdn.net/ou5157/article/details/135281150
https://blog.csdn.net/Bensonofljb/article/details/135897501
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
