Deploying a K8s Cluster (2): Initializing the Master Node

Master node:

The following operations are performed only on the master.

  1. Install kubelet, kubeadm, and kubectl
hostnamectl set-hostname master --static
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet --now
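The yum install above assumes a Kubernetes package repository has already been configured. If it has not, a repo definition along these lines can be created first (the Aliyun mirror URL below is an assumption, chosen to match the image mirror used later in this post; adjust it for your environment):

```shell
# Assumed repo definition -- not shown in the original steps.
# gpgcheck is disabled here for brevity; enable it in production.
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum makecache
```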
  2. Install containerd
    Download: https://github.com/containerd/containerd/releases
    Find the release tagged Latest and download the package for your system architecture.
# Extract the archive into /usr/local
tar Cxzvf /usr/local containerd-1.7.21-linux-amd64.tar.gz
# Create the configuration file
mkdir -p /etc/containerd
cat > /etc/containerd/config.toml << EOF
version = 2
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
EOF
# Wrap containerd in a systemd unit so systemctl can manage it
cat > /usr/lib/systemd/system/containerd.service << EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
# Load containerd.service and enable the service to start on boot
systemctl daemon-reload
systemctl enable --now containerd
systemctl status containerd
  3. Install runc
    Download: https://github.com/opencontainers/runc/releases
    Find the release tagged Latest and download the binary for your system architecture.
# Install the binary
install -m 755 runc.amd64 /usr/local/sbin/runc
  4. Install the CNI plugins
    Download: https://github.com/containernetworking/plugins/releases
    Find the release tagged Latest and download the package for your system architecture.
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

Initializing the master node

  1. Create the configuration file /root/kubeadm-init-config.yaml

Note that advertiseAddress is the master node's IP address (172.17.48.27 in this post) and kubernetesVersion is the Kubernetes version being used (1.28.2 in this post).

cat > /root/kubeadm-init-config.yaml << EOF
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- token: abcdef.0123456789abcdef
  ttl: 24h0m0s
localAPIEndpoint:
  advertiseAddress: 172.17.48.27
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.28.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
address: 0.0.0.0
enableServer: true
cgroupDriver: cgroupfs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  strictARP: true
EOF
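kubeadm's pre-flight checks also expect IP forwarding and bridge netfilter to be enabled. That preparation is not shown above, so here is a minimal sketch of the commonly required settings:

```shell
# Commonly required kernel settings for kubeadm
# (assumed prerequisite, not shown in the original steps)
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```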
  2. Run the initialization
kubeadm init --config /root/kubeadm-init-config.yaml

If all goes well, you will see output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.48.27:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:12dbea4e7f42d9700129fbd26832b19a6dd47615a425eae983391052a65e11af

Set the environment variables as instructed in the output above:

# As root
export KUBECONFIG=/etc/kubernetes/admin.conf
# As a non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Be sure to record the following output; worker nodes will need it later to join the cluster.

kubeadm join 172.17.48.27:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:12dbea4e7f42d9700129fbd26832b19a6dd47615a425eae983391052a65e11af
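If the join output is lost, or the token expires (the ttl in the config above is 24h), the kubeadm documentation provides a way to generate a fresh join command on the master at any time:

```shell
# Print a new, complete join command (creates a fresh token)
kubeadm token create --print-join-command

# The discovery hash can also be recomputed from the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```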
  3. Verify the initialization result
kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   2m49s   v1.28.2

The node still shows NotReady at this point because no network plugin is installed yet.
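Before installing flannel, the cause of the NotReady status can be confirmed (the messages mentioned in the comments are typical, not guaranteed verbatim):

```shell
# The node conditions typically report "container runtime network not ready:
# ... cni plugin not initialized" until a CNI plugin is deployed
kubectl describe node master

# CoreDNS pods also stay Pending until the pod network is up
kubectl get pods -n kube-system -o wide
```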

  4. Install flannel
# Download the official manifest
wget -c https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -O /root/kube-flannel.yml
# Install
kubectl apply -f /root/kube-flannel.yml
If the node still shows NotReady afterwards, ctr failed to pull the images. Pull them on another machine instead; for example, pull them with docker there and then import them here:
# Check which flannel image versions the manifest uses
cat /root/kube-flannel.yml | grep image
        image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
        image: docker.io/flannel/flannel:v0.25.6
        image: docker.io/flannel/flannel:v0.25.6
# Pull the images with docker and export them as tar archives
docker pull docker.io/flannel/flannel:v0.25.6 && docker save docker.io/flannel/flannel:v0.25.6 -o D:\flannel_flannel_v0.25.6.tar
docker pull docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2 && docker save docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2 -o D:\flannel_flannel-cni-plugin_v1.5.1-flannel2.tar
# Import with ctr on the master
ctr -n k8s.io image import flannel_flannel_v0.25.6.tar --digests=true
ctr -n k8s.io image import flannel_flannel-cni-plugin_v1.5.1-flannel2.tar --digests=true
# Delete the flannel DaemonSet and reinstall
kubectl delete daemonset kube-flannel-ds -n kube-flannel
kubectl apply -f /root/kube-flannel.yml

When importing images with ctr, you must add -n k8s.io so the images land in the k8s.io namespace; otherwise the local images will not be used during installation.
Also note that if the machine cannot pull images on its own, the imported images will very likely not be recognized under their correct names either, and must be renamed manually with ctr -n k8s.io image tag. I recommend renaming each image immediately after importing it, to avoid confusion.
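A sketch of such a rename (the source name below is illustrative, not the name your import will actually produce; check `ctr -n k8s.io image ls` for the real one):

```shell
# Hypothetical imported name on the left; the name kubelet expects on the right
ctr -n k8s.io image tag docker.io/library/imported-flannel:latest \
    docker.io/flannel/flannel:v0.25.6
```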

  5. Check again after installation:
kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   10h   v1.28.2

The master node is now ready.

posted @ 2024-09-05 13:50  Ar4te