Setting Up a Kubernetes Cluster on CentOS 7.9 (One Master, Two Worker Nodes)

I. Hardware Preparation (Virtual Machines)

Role    Hostname    IP address
master  k8s-master  172.16.36.198
node    k8s-node1   172.16.36.199
node    k8s-node2   172.16.36.200

All three machines run CentOS Linux release 7.9.2009 (Core) with 4 CPU cores and 16 GB of RAM.
Set each machine's hostname with hostnamectl set-hostname <name>; note that this change is persistent and survives reboots.
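The host table above is exactly what gets appended to /etc/hosts on every node later on. As a small, safe-to-run-anywhere illustration, awk can derive those hosts entries (IP followed by hostname) mechanically from the table:

```shell
#!/bin/sh
# Derive "/etc/hosts"-style lines (IP then hostname) from the
# role/hostname/IP table above; the output can be appended to /etc/hosts.
cat << 'EOF' | awk '{ print $3, $2 }'
master k8s-master 172.16.36.198
node   k8s-node1  172.16.36.199
node   k8s-node2  172.16.36.200
EOF
```

This prints one line per node, e.g. "172.16.36.198 k8s-master".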

II. Environment Preparation

  • 1. Disable the firewall on all machines
systemctl stop firewalld	# stop it now
systemctl disable firewalld	# do not start on boot
systemctl status firewalld	# check its status
  • 2. Disable SELinux on all machines
sed -i 's/enforcing/disabled/' /etc/selinux/config	# permanent, takes effect after reboot
setenforce 0	# temporary, effective immediately
  • 3. Disable swap on all machines (kubelet refuses to start while swap is enabled)
swapoff -a	# temporary, until reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab	# permanent: comment out the swap entries
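The sed expression above comments out every /etc/fstab line that mentions swap. A minimal sketch of what it does, applied to a throwaway copy so nothing system-wide is touched (the temp file is purely illustrative):

```shell
#!/bin/sh
# Demonstrate the fstab swap-commenting sed on a scratch copy
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
/dev/mapper/centos-root /    xfs   defaults 0 0
/dev/mapper/centos-swap swap swap  defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$tmp"   # same expression as above, on the copy
grep '^#' "$tmp"                  # the swap line is now commented out
rm -f "$tmp"
```

After editing the real /etc/fstab and running swapoff -a, free -m should report 0 for swap.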
  • 4. Add hostname-to-IP mappings on all machines
vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.36.198 k8s-master
172.16.36.199 k8s-node1
172.16.36.200 k8s-node2
  • 5. On all machines, let bridged IPv4 traffic pass through the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter	# these settings require the br_netfilter module to be loaded
sysctl --system	# apply all /etc/sysctl.d/*.conf fragments

III. Install Docker on All Nodes

yum install wget  -y
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/centos7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-20.10.11 -y
systemctl start docker
systemctl enable docker
# To configure an Aliyun registry mirror for Docker, see:
# https://www.cnblogs.com/qddbky/p/18305018

IV. Cluster Deployment

  • 1. On all nodes, add the Kubernetes yum repository and install kubeadm, kubelet, and kubectl (all three pinned to the target cluster version)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2 -y
systemctl enable kubelet && systemctl start kubelet
  • 2. Adjust the Docker configuration on all nodes (registry mirror, plus the systemd cgroup driver, which must match kubelet's)
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://jfn3yf7d.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload
systemctl restart docker.service
At this point it is normal for kubelet.service on the worker nodes to report code=exited, status=1/FAILURE: kubelet keeps restarting until the node actually joins a cluster.
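dockerd will refuse to start if /etc/docker/daemon.json is not strict JSON (a single trailing comma is enough), so it is worth validating the file before restarting Docker. A sketch using python3's json.tool on a scratch copy (python3 availability is an assumption; a stock CentOS 7 host may only have python2, whose json.tool works the same way):

```shell
#!/bin/sh
# Validate a daemon.json candidate before installing it to /etc/docker/
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "registry-mirrors": ["https://jfn3yf7d.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json OK"
rm -f "$tmp"
```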
  • 3. Deploy the control plane (on the master node, k8s-master). Note that --service-cidr and --pod-network-cidr must not overlap with each other or with the node network (172.16.36.0/24 here).
kubeadm init \
--apiserver-advertise-address=172.16.36.198 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.2 \
--control-plane-endpoint k8s-master \
--service-cidr=192.168.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Be sure to save the kubeadm join command printed at the end of kubeadm init's output; it is what later joins worker nodes to the existing cluster. The tail of a successful run looks like this:
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token tmtcli.u0ycx3hwoa52nxh5 \
	--discovery-token-ca-cert-hash sha256:7d12f19434bba1e0e4e469284e4feeec5f69d2677c0ac471580f312f11e40d19 \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token tmtcli.u0ycx3hwoa52nxh5 \
	--discovery-token-ca-cert-hash sha256:7d12f19434bba1e0e4e469284e4feeec5f69d2677c0ac471580f312f11e40d19 
  • (1) A possible error when re-running kubeadm init:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-6443]: Port 6443 is in use
	[ERROR Port-10259]: Port 10259 is in use
	[ERROR Port-10257]: Port 10257 is in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR Port-2379]: Port 2379 is in use
	[ERROR Port-2380]: Port 2380 is in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
  • (2) Fix:
kubeadm reset
# then run kubeadm init again
  • 4. Run the commands from the init output (the first three as a regular user; the export line is the alternative for root):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  export KUBECONFIG=/etc/kubernetes/admin.conf
  • 5. Check the status of kubelet.service
systemctl status kubelet.service
  • 6. Install the flannel network plugin

Official docs: https://github.com/flannel-io/flannel

# Pulling the required image in advance avoids image-pull timeouts during deployment
docker pull quay.io/coreos/flannel:v0.14.0
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
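flannel works here without any extra configuration only because the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init matches the Network value in flannel's default net-conf. A small offline illustration of that invariant (the JSON fragment mirrors the net-conf shipped in kube-flannel.yml's ConfigMap; the temp file is illustrative):

```shell
#!/bin/sh
# Check that flannel's net-conf Network matches the kubeadm pod CIDR
POD_CIDR="10.244.0.0/16"   # value passed to kubeadm init above
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }
EOF
flannel_net=$(sed -n 's/.*"Network": "\([^"]*\)".*/\1/p' "$tmp")
[ "$flannel_net" = "$POD_CIDR" ] && echo "CIDRs match" || echo "CIDR mismatch"
rm -f "$tmp"
```

If a different --pod-network-cidr is used, the net-conf in kube-flannel.yml has to be edited to match before applying it.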

  • 7. Register the worker nodes with the master (run on each node as root; if the token has expired — the default lifetime is 24 hours — print a fresh join command on the master with kubeadm token create --print-join-command)
kubeadm join k8s-master:6443 --token tmtcli.u0ycx3hwoa52nxh5 \
	--discovery-token-ca-cert-hash sha256:7d12f19434bba1e0e4e469284e4feeec5f69d2677c0ac471580f312f11e40d19 
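The --discovery-token-ca-cert-hash value is not a secret: it is the SHA-256 digest of the cluster CA's DER-encoded public key, which can be recomputed at any time from /etc/kubernetes/pki/ca.crt on the master. The sketch below demonstrates the standard openssl pipeline on a throwaway self-signed certificate, generated only so the example is self-contained:

```shell
#!/bin/sh
# Recompute a kubeadm discovery hash from a CA certificate.
# On a real master the input is /etc/kubernetes/pki/ca.crt; here we
# generate a demo cert purely to illustrate the pipeline.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
openssl x509 -pubkey -in "$tmp/ca.crt" -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
rm -rf "$tmp"
```

The printed 64-character hex string is what follows "sha256:" in the join command; a node can use it to verify it is talking to the right control plane.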
Posted by 钱超多