Installing a Kubernetes Cluster with kubeadm
Environment Preparation (CentOS)
- Prepare three or more virtual machines
- Disable the firewall
sudo systemctl stop firewalld
sudo systemctl disable firewalld
- Set the hostname (names must follow DNS naming conventions)
cat <<EOF | sudo tee /etc/hostname
k8sv23-n03-p102
EOF
# apply immediately (use the same name written to /etc/hostname above)
sudo hostnamectl set-hostname k8sv23-n03-p102
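If these node names are not resolvable via DNS, a minimal sketch is to map them in /etc/hosts on every node (the IPs below are the example addresses used later in this guide; adjust to your own):
cat <<EOF | sudo tee -a /etc/hosts
192.168.10.109 k8s.master
192.168.10.108 k8sv23-n03-p102
EOF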
- Install Docker (just follow the official docs)
# remove any older Docker installation
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
# add the Docker yum repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# install Docker Engine and plugins
sudo yum -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- Configure Docker's cgroup driver as systemd (what kubelet expects), then start it
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": [
    "http://mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com",
    "https://uy35zvn6.mirror.aliyuncs.com",
    "https://cr.console.aliyun.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com",
    "http://mirror.azure.cn/"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# enable at boot, reload, and (re)start
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
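A quick sanity check that Docker actually picked up the systemd cgroup driver (it must match what kubelet expects):
sudo docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd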
- Configure kernel network modules and sysctls
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
# a drop-in file here is better than editing /etc/sysctl.conf directly
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# modules-load.d only takes effect at boot, so load the module right away as well
sudo modprobe br_netfilter
sudo sysctl --system
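Confirm the module is loaded and the sysctls took effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward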
- Disable swap (kubelet will not start with swap enabled)
sudo swapoff -a
sudo sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab
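Verify that swap is fully off, both now and across reboots:
swapon --show           # no output means swap is disabled
grep swap /etc/fstab    # the swap entry should now start with '#'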
All of the steps above must be done on every node.
Installing the master node
This guide installs Kubernetes 1.23.17, which still ships the built-in dockershim and therefore works with Docker Engine out of the box; from 1.24 onward dockershim was removed and a separate cri-dockerd adapter is required (this is the CRI spec, the container runtime interface, not CNI).
- Install components
Three non-containerized components: kubectl, kubeadm, and kubelet; see the official docs for what each does. If yum cannot find these packages, configure the Kubernetes yum repository first (see below).
# remove any previously installed versions
yum remove -y kubelet kubeadm kubectl
#指定版本 1.23.17
sudo yum install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes
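A quick check that the pinned versions actually landed:
kubeadm version -o short    # expect v1.23.17
kubelet --version
kubectl version --client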
- Initialize the cluster (kubeadm init)
# enable kubelet at boot
sudo systemctl enable kubelet.service
# apiserver-advertise-address: this node's own IP (192.168.10.109); image-repository: a domestic
# mirror of the official images (from v1.24 on, the default registry is the community-run registry.k8s.io)
# service-cidr: the Service network range; pod-network-cidr: the Pod network range (must match the CNI plugin config, see Flannel below)
kubeadm init \
    --apiserver-advertise-address=192.168.10.109 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.17 \
    --service-cidr=10.10.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
If pulling images fails during init, you can list exactly which images kubeadm needs with `kubeadm config images list --kubernetes-version v1.23.17`.
# list the required images; by default they come from Google-hosted registries, hence the domestic mirror
# (example output below was captured for v1.28.1; the v1.23.17 list has the same shape, with older tags)
[root@k8s ~]# kubeadm config images list --kubernetes-version v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
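You can also pre-pull all of these from the domestic mirror before running kubeadm init, so pull failures surface early (kubeadm's built-in images pull subcommand, same flags as above):
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.23.17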
Configure a domestic yum repository if needed:
# if yum cannot find the kubelet/kubeadm/kubectl packages, add the Aliyun mirror repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
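To confirm the repo works and the pinned version is visible to yum (same --disableexcludes flag as the install command):
yum list --showduplicates kubelet --disableexcludes=kubernetes | grep 1.23.17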
On success, kubeadm prints:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.10.109:6443 --token l85iz5.bg08rhr8x8gmhain \
--discovery-token-ca-cert-hash sha256:4a1634f8c06e2c47cb06196df8ae59e6997cdfc9e08848ace5fb968c02b790a9
Follow its printed suggestions (copy the admin kubeconfig):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now check the cluster:
# only the single master node exists, and it is NotReady because no network plugin is installed yet
[root@k8s ~]# kubectl get node
NAME         STATUS     ROLES                  AGE     VERSION
k8s.master   NotReady   control-plane,master   4m42s   v1.23.17
- Install the Flannel network plugin (other CNI plugins work too). Download kube-flannel.yml from GitHub and make sure its Network field matches the --pod-network-cidr passed to kubeadm init (see the command-line sketch after the snippet below).
#kubectl apply -f kube-flannel.yml
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
-------------------------------
# note: change the Network field in kube-flannel.yml to match --pod-network-cidr
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
-----------------------------
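A minimal sketch of making that edit from the command line (the sed pattern and CIDR are assumptions; adjust to your own --pod-network-cidr):
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# show the current Network value and compare it with --pod-network-cidr
grep '"Network"' kube-flannel.yml
# rewrite it if it differs (here to 10.244.0.0/16, the value given to kubeadm init above)
sed -i 's#"Network": "[^"]*"#"Network": "10.244.0.0/16"#' kube-flannel.yml
kubectl apply -f kube-flannel.yml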
# once Flannel is running, the node turns Ready
[root@k8s ~]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s.master   Ready    control-plane,master   28m   v1.23.17
Installing worker nodes
- Install components
The same three non-containerized components as on the master: kubectl, kubeadm, and kubelet (strictly speaking, kubectl is optional on workers); see the official docs for details.
# remove any previously installed versions
yum remove -y kubelet kubeadm kubectl
#指定版本 1.23.17
sudo yum install -y kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17 --disableexcludes=kubernetes
- Join the node to the cluster (kubeadm join)
# join points at the master's IP; tokens expire, so generate a new one on the master with kubeadm token create if needed, and keep the CA cert hash yourself
kubeadm join 192.168.10.109:6443 --token l85iz5.bg08rhr8x8gmhain \
--discovery-token-ca-cert-hash sha256:4a1634f8c06e2c47cb06196df8ae59e6997cdfc9e08848ace5fb968c02b790a9
- Verify
# check node status from the master
[root@k8s ~]# kubectl get nodes
NAME                 STATUS   ROLES                  AGE     VERSION
k8s.192.168.10.108   Ready    <none>                 8m17s   v1.23.17
k8s.master           Ready    control-plane,master   42m     v1.23.17
At this point Kubernetes is installed; to add more nodes later, just run kubeadm join on each of them.
Miscellaneous
- Operating the cluster from another machine
For example, moving kubectl to a console/jump server:
# that machine only needs the kubectl binary plus a copy of the kubeconfig; you can scp both straight from the master, e.g.
scp `which kubectl` chrono@192.168.10.208:~/
# or: scp /usr/bin/kubectl root@192.168.10.120:/usr/bin/
scp ~/.kube/config chrono@192.168.10.208:~/.kube
# or: scp ~/.kube/config root@192.168.10.120:~/.kube
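Note the target ~/.kube directory must already exist on the console server, or scp will create a regular file named .kube instead; afterwards kubectl there talks to the cluster directly (user/IP follow the example above):
ssh chrono@192.168.10.208 'mkdir -p ~/.kube'   # run before copying the config file
kubectl get nodes                              # run on the console server to verify access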
- Official troubleshooting guide: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
# reset a node (undoes kubeadm init / kubeadm join)
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
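kubeadm reset does not clean up everything; its own output reminds you to remove CNI configuration and stale kubeconfig files manually, e.g.:
sudo rm -rf /etc/cni/net.d    # leftover CNI configuration
rm -f $HOME/.kube/config      # stale kubeconfig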
Summary
- kubeadm is an easy-to-use tool that can deploy production-grade Kubernetes clusters.
- Before installing Kubernetes, the hosts need preparation: hostname, Docker configuration, network settings, and swap.
- The component images are hosted on Google-run registries (gcr.io / registry.k8s.io), which are hard to reach from mainland China, so pull them through a domestic mirror.
- Install the master with kubeadm init and workers with kubeadm join, then deploy a network plugin such as Flannel before the cluster works properly.
- The kubeadm join command is time-limited; if the token has expired, create a new one with kubeadm token create:
[root@k8s ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.109:6443 --token i26bzd.v8bctu5zzgrj8fs7 --discovery-token-ca-cert-hash sha256:349254d493aee98b96718848f8d72a01454fa717383fc69ccc27ad13fee38cb8
Other operations
[root@localhost ~]# kubectl get nodes
NAME              STATUS     ROLES                  AGE   VERSION
k8s.master        Ready      control-plane,master   18h   v1.23.17
k8sv23-n03-p102   Ready      <none>                 15h   v1.23.17
k8sv23-n03-p115   Ready      <none>                 15h   v1.23.17
k8sv23-n07-124    NotReady   <none>                 15h   v1.23.17
k8sv23-n08-p132   Ready      <none>                 15h   v1.23.17
k8sv23-n10-p133   Ready      <none>                 15h   v1.23.17
k8sv23-n11-136    Ready      <none>                 15h   v1.23.17
k8sv23-n11-p108   Ready      <none>                 16h   v1.23.17
## check the DaemonSets; one flannel pod is not ready
[root@localhost ~]# kubectl get daemonsets --all-namespaces
NAMESPACE      NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel   kube-flannel-ds   8         8         7       8            7           <none>                   79m
kube-system    kube-proxy        8         8         8       8            8           kubernetes.io/os=linux   18h
## describe the DaemonSet for details
[root@localhost ~]# kubectl describe daemonset kube-flannel-ds -n kube-flannel
Name:           kube-flannel-ds
Selector:       app=flannel,k8s-app=flannel
Node-Selector:  <none>
Labels:         app=flannel
                k8s-app=flannel
                tier=node
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 8
Current Number of Nodes Scheduled: 8
Number of Nodes Scheduled with Up-to-date Pods: 8
Number of Nodes Scheduled with Available Pods: 7
Number of Nodes Misscheduled: 0
Pods Status:  7 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
........
# list all pods; on one node the flannel init container failed to pull its image: Init:ImagePullBackOff
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                                  READY   STATUS                  RESTARTS   AGE
kube-flannel   kube-flannel-ds-5mxvl                 1/1     Running                 0          77m
kube-flannel   kube-flannel-ds-5wsz7                 1/1     Running                 0          77m
kube-flannel   kube-flannel-ds-c4k2s                 0/1     Init:ImagePullBackOff   0          77m
kube-flannel   kube-flannel-ds-h7c5d                 1/1     Running                 0          77m
kube-flannel   kube-flannel-ds-j8tlh                 1/1     Running                 0          77m
kube-flannel   kube-flannel-ds-kp9kn                 1/1     Running                 0          77m
kube-flannel   kube-flannel-ds-l7g2v                 1/1     Running                 0          77m
kube-flannel   kube-flannel-ds-tcklb                 1/1     Running                 0          77m
kube-system    coredns-7bff545f9f-szb6n              1/1     Running                 0          53m
kube-system    coredns-7bff545f9f-vwp85              1/1     Running                 0          53m
kube-system    etcd-k8s.master                       1/1     Running                 0          18h
kube-system    kube-apiserver-k8s.master             1/1     Running                 0          18h
kube-system    kube-controller-manager-k8s.master    1/1     Running                 0          18h
kube-system    kube-proxy-24v9j                      1/1     Running                 0          15h
kube-system    kube-proxy-7gvqk                      1/1     Running                 0          15h
kube-system    kube-proxy-bl6zf                      1/1     Running                 0          15h
kube-system    kube-proxy-fwtkk                      1/1     Running                 0          18h
kube-system    kube-proxy-gqlqc                      1/1     Running                 0          16h
kube-system    kube-proxy-lxbcr                      1/1     Running                 0          15h
kube-system    kube-proxy-mphk5                      1/1     Running                 0          15h
kube-system    kube-proxy-wjfp6                      1/1     Running                 1          16h
kube-system    kube-scheduler-k8s.master             1/1     Running
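To pin down the failing node and image, a sketch using the pod name from the list above (the Events section at the bottom of describe shows the exact image and pull error):
kubectl describe pod kube-flannel-ds-c4k2s -n kube-flannel     # read the Events at the bottom
kubectl get pod kube-flannel-ds-c4k2s -n kube-flannel -o wide  # shows which node it is on
# then on that node, pull the image manually to see the real error, e.g.
# docker pull <image-name-from-the-events-output>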