Deploying a Single-Master Kubernetes Cluster with kubeadm
Environment Preparation
Software Versions
Component | Version |
---|---|
OS | Ubuntu 18.04 |
Docker | 20.10.12 |
Kubernetes | 1.23.0 |
Server Plan
IP | Role |
---|---|
192.168.94.8 | K8s-master |
192.168.94.9 | K8s-node1 |
192.168.94.10 | K8s-node2 |
Installation Steps
All commands in this guide are run as root; if you are working as a regular user, switch to root or prefix the commands with sudo.
1. Server Initialization
Run on all nodes.
# Disable the firewall
sudo systemctl stop ufw
sudo systemctl disable ufw
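# (Optional check, not in the original steps) confirm the firewall is inactive:
sudo ufw status   # should report: Status: inactive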
# Disable swap
sudo swapoff -a # temporary (until reboot)
sudo sed -i '/swap/s/^/#/' /etc/fstab # permanent
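# (Optional check, not in the original steps) confirm swap is really off:
sudo swapon --show   # prints nothing when swap is disabled
free -h              # the Swap line should read 0B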
# Switch the apt sources to the Aliyun mirror
sed -i 's/cn.archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
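If you want to confirm the settings took effect (an optional check, not part of the original steps), inspect the module and sysctls:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables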
2. Install Docker
Install on all nodes.
2.1 Remove old Docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc
2.2 Update the apt package index and install the packages that let apt use repositories over HTTPS:
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
2.3 Install the GPG key and add the repository
sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
2.4 Update and install Docker CE
By default this installs the latest version available in the apt repository.
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io
2.4.1 Install a specific Docker CE version (only follow this step if you need to pin a version; otherwise the latest from the previous step is fine)
# Step 1: List the available Docker CE versions:
sudo apt-cache madison docker-ce
docker-ce | 5:20.10.13~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:20.10.12~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:20.10.11~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:20.10.10~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:20.10.9~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages
docker-ce | 5:20.10.8~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages
...
# Step 2: Install the chosen version (VERSION is e.g. 5:20.10.9~3-0~ubuntu-bionic from the list above)
sudo apt-get -y install docker-ce=[VERSION]
2.5 Configure a domestic registry mirror and set the cgroup driver to systemd
# Create /etc/docker/daemon.json with the following content, then restart Docker.
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"registry-mirrors": ["https://e53aervw.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
2.6 Restart Docker to apply the configuration
sudo systemctl daemon-reload
sudo systemctl restart docker
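Optionally (not part of the original steps), verify that Docker picked up the systemd cgroup driver and enable it at boot:
sudo docker info | grep -i "cgroup driver"   # should report: systemd
sudo systemctl enable docker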
3. Install kubeadm, kubelet and kubectl
Install on all nodes.
Strictly speaking the worker nodes do not need kubectl, but installing kubeadm pulls in kubectl automatically (the latest version in the repository).
3.1 Download the public signing key (from the Aliyun mirror here):
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
3.2 Add the Kubernetes apt repository:
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
3.3 Update the apt package index and install kubelet, kubeadm and kubectl
By default this installs the latest versions of kubelet, kubeadm and kubectl in the repository:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
3.3.1 Install specific versions:
kubeadm is used as the example here; the same steps apply to kubelet and kubectl.
It is recommended to keep kubelet, kubeadm and kubectl at the same version.
# Step 1: List the available versions:
apt-cache madison kubeadm
kubeadm | 1.22.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.22.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
# Step 2: Install the chosen version (VERSION is e.g. 1.22.0-00 from the list above)
sudo apt-get -y install kubeadm=[VERSION]
# For this guide, pin all three packages to 1.23.0 to match the planned cluster version:
sudo apt install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
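Optionally (not part of the original steps), confirm the installed versions match and hold the packages so apt does not upgrade them unexpectedly:
kubeadm version -o short
kubelet --version
kubectl version --client --short
sudo apt-mark hold kubelet kubeadm kubectl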
Add extra kubelet arguments (unverified; the cluster may work without this step, and the --allow-privileged flag no longer exists in recent kubelet releases, so this drop-in is likely unnecessary on 1.23).
# Append extra kubelet arguments (unverified, see the note above)
cat <<EOF | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
EOF
4. Initialize the master node
On the master node (192.168.94.8), run kubeadm init to initialize it:
sudo kubeadm init \
--apiserver-advertise-address=192.168.94.8 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
- --apiserver-advertise-address: the address the control plane advertises to the cluster
- --image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so the Aliyun mirror is used instead
- --kubernetes-version: the K8s version, matching the packages installed above
- --service-cidr: the cluster-internal virtual network that Services (the unified entry point to Pods) are allocated from
- --pod-network-cidr: the Pod network, which must match the CNI manifest deployed below
- --ignore-preflight-errors=all: ignore all pre-flight check errors
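The same settings can also be expressed as a kubeadm configuration file and passed via --config instead of individual flags; below is a minimal sketch (the file name kubeadm-config.yaml is arbitrary), not the exact form used in this guide:
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.94.8
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
EOF
sudo kubeadm init --config kubeadm-config.yaml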
On success, kubeadm init prints output like the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.94.8:6443 --token q720ko.mzgf34ihnthb44yp \
--discovery-token-ca-cert-hash sha256:d20eef7be62ac973859cd493bab375559767f3d012a67119ad91523347f7c2ca
Following the hint in the output, copy the kubeconfig file that kubectl needs into place:
# As a regular user, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# As root, this is enough:
export KUBECONFIG=/etc/kubernetes/admin.conf
# The export above only applies to the current shell; to make it permanent, switch to root and run:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc
5. Join the worker nodes to the cluster
Log in to the worker nodes (192.168.94.9/10) and run the kubeadm join command printed by kubeadm init to add them to the cluster:
root@k8s-node1:~# kubeadm join 192.168.94.8:6443 --token ccg0ka.xpv4lqk1b4j2osre --discovery-token-ca-cert-hash sha256:fc7408be3ca8bb1c1954a48fd5af2d6df2b761b0dc16710dcfa068f9e0a49d62
------------------------- output --------------------------------
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0316 03:50:26.153519 11865 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The token is valid for 24 hours by default; once it expires, generate a new one with the command below:
root@k8s-master:~# kubeadm token create --print-join-command
------------------------- output --------------------------------
kubeadm join 192.168.94.8:6443 --token ri2axq.ibwqlq57b1xxo93t --discovery-token-ca-cert-hash sha256:fc7408be3ca8bb1c1954a48fd5af2d6df2b761b0dc16710dcfa068f9e0a49d62
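If needed, the existing tokens and the CA certificate hash can also be retrieved separately; the openssl pipeline below is the approach documented for kubeadm:
# List bootstrap tokens and their expiry
kubeadm token list
# Recompute the --discovery-token-ca-cert-hash value
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'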
Back on the master, run kubectl get nodes to check the node status:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 18h v1.23.0
k8s-node1 NotReady <none> 20s v1.23.0
k8s-node2 NotReady <none> 20s v1.23.0
As shown, both the master and the worker nodes are NotReady because no network plugin (CNI) has been installed yet. The next step is to deploy one.
6. Deploy the container network (CNI)
Calico is a pure layer-3 data-center networking solution and currently one of the mainstream network choices for Kubernetes.
On the master node, download the manifest and change the Pod network it defines (CALICO_IPV4POOL_CIDR) to match the --pod-network-cidr passed to kubeadm init. Run the following on the master:
# Download calico.yaml
wget https://docs.projectcalico.org/manifests/calico.yaml
# Uncomment the Pod network setting and change it to our Pod CIDR, 10.244.0.0/16
sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/' calico.yaml
sed -i 's/# value: "192.168.0.0\/16"/  value: "10.244.0.0\/16"/' calico.yaml
# Double-check that the two uncommented lines keep valid YAML indentation
# Apply the CNI manifest
kubectl apply -f calico.yaml
...
poddisruptionbudget.policy/calico-kube-controllers created # when the last line of output looks like this, the manifest has been applied
Verify that the CNI is working:
# Check the Calico pods; once they are all Running, Calico is up
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-56fcbf9d6b-42k56 1/1 Running 0 7h45m
calico-node-psgd6 1/1 Running 0 7h45m
calico-node-th9kl 1/1 Running 0 7h45m
# Now all nodes show as Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 26h v1.23.0
k8s-node1 Ready <none> 7h47m v1.23.0
k8s-node2 Ready <none> 7h47m v1.23.0
At this point, the single-master cluster built with kubeadm is up and running.
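As an optional smoke test (not part of the original steps), deploy a test workload and expose it to confirm that scheduling and networking work:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide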