Installing a Kubernetes Cluster with kubeadm
The component layout and deployment environment are listed below. At a high level, the kubeadm workflow is:

- master and nodes: install kubelet, kubeadm, and docker
- master: run kubeadm init to initialize the control plane
- nodes: run kubeadm join to join the cluster
Base environment
| Hostname | IP | OS version | Kernel version |
| --- | --- | --- | --- |
| master | 192.168.1.220 | CentOS Linux release 7.4.1708 (Core) | 3.10.0-693.el7.x86_64 |
| node01 | 192.168.1.221 | CentOS Linux release 7.4.1708 (Core) | 3.10.0-693.el7.x86_64 |
| node02 | 192.168.1.222 | CentOS Linux release 7.4.1708 (Core) | 3.10.0-693.el7.x86_64 |
Base configuration (all hosts)
```
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@master ~]# sestatus
SELinux status:                 disabled
[root@master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak
[root@master yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo
[root@master yum.repos.d]# wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
[root@master yum.repos.d]# ls -l
total 36
-rw-r--r--  1 root root 1572 Dec  1  2016 CentOS7-Base-163.repo
-rw-r--r--  1 root root 2523 Jun 16  2018 Centos-7.repo
-rw-r--r--. 1 root root 1664 Aug 30  2017 CentOS-Base.repo.bak
-rw-r--r--. 1 root root 1309 Aug 30  2017 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Aug 30  2017 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  314 Aug 30  2017 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Aug 30  2017 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Aug 30  2017 CentOS-Sources.repo
-rw-r--r--. 1 root root 3830 Aug 30  2017 CentOS-Vault.repo
[root@master yum.repos.d]# yum clean all && yum makecache
```
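The transcript above assumes SELinux is already disabled. If `sestatus` reports `enforcing` on your hosts, a minimal sketch to disable it (permissive takes effect immediately; fully disabled only after a reboot):

```bash
# Switch to permissive mode for the current boot
setenforce 0
# Persist the change; takes full effect (disabled) after the next reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```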
Add hosts entries (all hosts)
```
[root@master yum.repos.d]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.220 master
192.168.1.221 node01
192.168.1.222 node02
```
Create /etc/sysctl.d/k8s.conf with the following content:
```
[root@master ~]# vi /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```
```
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```
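`modprobe br_netfilter` does not survive a reboot. A minimal sketch to have systemd load the module at boot (the file name k8s.conf is arbitrary):

```bash
# systemd-modules-load reads /etc/modules-load.d/*.conf at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```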
Configure the Aliyun Docker repo (all hosts)
```
# Install required system tools
[root@master yum.repos.d]# yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the repo
[root@master yum.repos.d]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Refresh the cache
[root@master yum.repos.d]# yum makecache fast
```
Install docker-ce and docker-ce-cli (all hosts)
```
[root@master yum.repos.d]# yum install -y docker-ce docker-ce-cli
[root@master yum.repos.d]# systemctl enable docker && systemctl start docker
[root@master yum.repos.d]# docker --version
Docker version 19.03.5, build 633a0ea
```
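The command above installs whatever docker-ce is newest in the repo (19.03.5 at the time of writing). For a reproducible install, a sketch for pinning a specific version (the version string here is just an example):

```bash
# List available versions, newest first
yum list docker-ce --showduplicates | sort -r
# Install a specific version (example: 19.03.5)
yum install -y docker-ce-19.03.5 docker-ce-cli-19.03.5
```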
Configure the Aliyun Kubernetes repo (all hosts)
```
[root@master yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install the components (all hosts)
```
[root@master yum.repos.d]# yum install -y kubelet kubeadm kubectl
[root@master yum.repos.d]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
```
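This installs the newest version in the repo (v1.17.2 below). To pin all three packages to the same version and avoid skew on later reinstalls, a sketch (the version is an example):

```bash
yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2
```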
```
[root@master yum.repos.d]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
[root@master yum.repos.d]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
The "connection refused" message from kubectl version is expected at this point: the cluster has not been initialized yet, so there is no API server for kubectl to talk to.
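Optionally, the control-plane images can be pulled ahead of time (the init output below also suggests this). A sketch using the same Aliyun image repository and version that the init command uses:

```bash
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.17.0
```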
Configure a registry mirror (all hosts)
```
[root@master yum.repos.d]# cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}
EOF
[root@master yum.repos.d]# systemctl restart docker
```
Note the use of `>` (overwrite) rather than the original `>>` (append): appending to an existing daemon.json produces invalid JSON and Docker will fail to start. Replace the xxxxx mirror address with your own Aliyun accelerator address.
Switch Docker's cgroup driver to systemd (all hosts)
```
[root@master yum.repos.d]# sed -i "s#^ExecStart=/usr/bin/dockerd.*#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd#g" /usr/lib/systemd/system/docker.service
[root@master yum.repos.d]# systemctl daemon-reload && systemctl restart docker
```
This step is optional: if you skip it, kubeadm prints a warning during init and when worker nodes join, because it recommends the systemd cgroup driver. Since the unit file is edited in place, `systemctl daemon-reload` is required before restarting Docker, as shown above.
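If you prefer not to patch the unit file, the cgroup driver can also be set in the same /etc/docker/daemon.json used for the mirror above. A minimal sketch (the xxxxx mirror placeholder is carried over from the previous step):

```bash
# Keep the mirror and set the cgroup driver in one file
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```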
Initialize the master (on the master node only)
```
[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.1.220 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
W0209 15:12:41.111682   12650 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0209 15:12:41.111740   12650 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.1.220]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.1.220 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.1.220 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0209 15:14:08.552160   12650 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0209 15:14:08.553832   12650 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 44.003126 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: v2yw0n.xaq2uu2oqqsk4wlv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.220:6443 --token v2yw0n.xaq2uu2oqqsk4wlv \
    --discovery-token-ca-cert-hash sha256:2773274565fa8f13ca7de2466ad98f5cb4f5815a20665f10d46399f318aa937c
[root@master ~]#
```
```
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
```
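With the kubeconfig in place, kubectl can now reach the API server. A quick sanity check using standard kubectl commands:

```bash
# Control-plane component health (scheduler, controller-manager, etcd)
kubectl get componentstatuses
# API server and DNS endpoints
kubectl cluster-info
```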
Download and apply the flannel resource manifest (on the master node)
```
# Pull the flannel image manually (the manifest references quay.io, which can be hard to reach)
docker pull easzlab/flannel:v0.11.0-amd64
# Retag it to the name the manifest expects
docker tag easzlab/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
```
```
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f kube-flannel.yml
```
```
# Install the Calico network plugin (an alternative to flannel; install only one of the two)
# POD_SUBNET must be set to the pod network CIDR passed to kubeadm init, i.e. 10.244.0.0/16 here
wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
sed -i "s#192\.168\.0\.0/16#${POD_SUBNET}#" calico.yaml
kubectl apply -f calico.yaml
```
Note: my internal network is built over OpenVPN, and I spent a long time fighting the network plugin here; I recommend building the cluster on a real internal network instead. By default, Calico auto-detects each node's IP address and subnet. In most cases this is sufficient, but when a server has multiple NICs, or a NIC has multiple addresses, detection can pick the wrong one. See the official docs at https://docs.projectcalico.org/v3.9/networking/node#understanding-caliconode-ip-autodetection-logic for how to set IP_AUTODETECTION_METHOD, as sketched below.
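A minimal sketch for forcing the detection method on a running cluster, assuming the calico-node DaemonSet name used by the v3.9 manifest and that your interfaces are named ens*:

```bash
# Pick the address on an interface matching a regex
kubectl set env daemonset/calico-node -n kube-system \
  'IP_AUTODETECTION_METHOD=interface=ens.*'
# Or pick the interface that can reach a given internal address
kubectl set env daemonset/calico-node -n kube-system \
  'IP_AUTODETECTION_METHOD=can-reach=192.168.1.220'
```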
For the differences between the various network plugins, see http://dockone.io/article/8722.
```
[root@master ~]# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
coredns-9d85f5447-fvhs4          0/1     Running   0          2m27s   <none>          <none>   <none>           <none>
coredns-9d85f5447-sf6vv          0/1     Running   0          2m27s   <none>          <none>   <none>           <none>
etcd-master                      1/1     Running   0          2m22s   192.168.1.220   master   <none>           <none>
kube-apiserver-master            1/1     Running   0          2m22s   192.168.1.220   master   <none>           <none>
kube-controller-manager-master   1/1     Running   0          2m22s   192.168.1.220   master   <none>           <none>
kube-flannel-ds-amd64-mhsrw      0/1     Running   0          59s     192.168.1.220   master   <none>           <none>
kube-proxy-w6pg8                 1/1     Running   0          2m27s   192.168.1.220   master   <none>           <none>
kube-scheduler-master            1/1     Running   0          2m22s   192.168.1.220   master   <none>           <none>
```
Join node01 to the cluster
```
[root@node01 ~]# kubeadm join 192.168.1.220:6443 --token v2yw0n.xaq2uu2oqqsk4wlv --discovery-token-ca-cert-hash sha256:2773274565fa8f13ca7de2466ad98f5cb4f5815a20665f10d46399f318aa937c
```
Join node02 to the cluster
```
[root@node02 yum.repos.d]# kubeadm join 192.168.1.220:6443 --token v2yw0n.xaq2uu2oqqsk4wlv --discovery-token-ca-cert-hash sha256:2773274565fa8f13ca7de2466ad98f5cb4f5815a20665f10d46399f318aa937c
```
If you lose the join command, run `kubeadm token create --print-join-command` on the master to regenerate it.
Check node status on the master
```
[root@master ~]# kubectl get nodes -o wide
NAME     STATUS     ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master   NotReady   master   6m15s   v1.17.2   192.168.1.220   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://19.3.5
node01   NotReady   <none>   2m3s    v1.17.2   192.168.1.221   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://19.3.5
node02   NotReady   <none>   2m2s    v1.17.2   192.168.1.222   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://19.3.5
```
The nodes report NotReady until the network plugin pods are up; they flip to Ready once the CNI is running on each node.
If initialization goes wrong and you need to start over, first run `kubeadm reset -f`, then re-run `kubeadm init`.
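`kubeadm reset` does not clean up everything. A commonly used follow-up sketch; adapt it to your environment and review before running, since it flushes all iptables rules on the host:

```bash
# Remove leftover CNI configuration and the old kubeconfig
rm -rf /etc/cni/net.d $HOME/.kube/config
# Flush iptables rules left behind by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```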
Test the Kubernetes cluster
```
## Create a deployment running the nginx image
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
## Describe the pod; the Events section shows the creation progress
[root@master ~]# kubectl describe pod nginx
Name:         nginx-86c57db685-2vbwc
Namespace:    default
Priority:     0
Node:         node02/192.168.1.222
Start Time:   Sun, 09 Feb 2020 15:36:31 +0800
Labels:       app=nginx
              pod-template-hash=86c57db685
Annotations:  cni.projectcalico.org/podIP: 192.168.140.65/32
Status:       Pending
IP:
IPs:          <none>
Controlled By:  ReplicaSet/nginx-86c57db685
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cxpk7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-cxpk7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cxpk7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/nginx-86c57db685-2vbwc to node02
  Normal  Pulling    40s        kubelet, node02    Pulling image "nginx"
```
Check the pod IP
```
[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
nginx-86c57db685-2vbwc   1/1     Running   0          3m36s   192.168.140.65   node02   <none>           <none>
```
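To verify the pod is actually serving traffic, a sketch that exposes the deployment as a NodePort service and curls it from outside the cluster (the node port is assigned randomly in the 30000-32767 range; the port in the curl line is only illustrative):

```bash
# Expose the nginx deployment on a random NodePort
kubectl expose deployment nginx --port=80 --type=NodePort
# Look up the assigned port, shown as 80:<nodePort>/TCP
kubectl get svc nginx
# Fetch the nginx welcome page via any node's IP and that port
curl http://192.168.1.222:30080
```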
kubectl command auto-completion
```
## Install the package
[root@master ~]# yum install -y bash-completion*
## One-off for the current shell
[root@master ~]# source <(kubectl completion bash)
## Make it permanent
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
## Load the bash-completion framework into the current shell; otherwise tab completion
## fails with "-bash: _get_comp_words_by_ref: command not found".
## Note this must be sourced, not run with sh, or it will not affect the current shell.
[root@master ~]# source /usr/share/bash-completion/bash_completion
## Reload the environment
[root@master ~]# source /etc/profile
## kubectl tab completion now works
```
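If you also alias `k=kubectl`, completion can be wired to the alias through the `__start_kubectl` function that `kubectl completion bash` defines:

```bash
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```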
If more nodes need to join the cluster later: the default token is valid for 24 hours, and once it expires it can no longer be used. Resolve this as follows.
Regenerate a token with `kubeadm token create`.

```
# 1. List the current tokens
[root@master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
l507qh.nqysbrdxjtcfx4c9   23h   2020-02-10T15:20:19+08:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
v2yw0n.xaq2uu2oqqsk4wlv   23h   2020-02-10T15:14:53+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```
Generate a new token
```
[root@master ~]# kubeadm token create
W0209 15:44:21.172784   43064 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0209 15:44:21.172852   43064 validation.go:28] Cannot validate kubelet config - no validator is available
0xpz5e.7fcygebnug44a3xm
```
List the tokens again
```
[root@master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
0xpz5e.7fcygebnug44a3xm   23h   2020-02-10T15:44:21+08:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
l507qh.nqysbrdxjtcfx4c9   23h   2020-02-10T15:20:19+08:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
v2yw0n.xaq2uu2oqqsk4wlv   23h   2020-02-10T15:14:53+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```
Get the SHA-256 hash of the CA certificate
```
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
2773274565fa8f13ca7de2466ad98f5cb4f5815a20665f10d46399f318aa937c
```
Join the new node to the cluster
Using the new token and the sha256-prefixed hash obtained above (the API server endpoint is the same 192.168.1.220:6443 as in the earlier join commands):

```
[root@k8s-node03 ~]# kubeadm join 192.168.1.220:6443 --token 0xpz5e.7fcygebnug44a3xm --discovery-token-ca-cert-hash sha256:2773274565fa8f13ca7de2466ad98f5cb4f5815a20665f10d46399f318aa937c
```
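For automated node provisioning it can be convenient to create a token that never expires; `kubeadm token create` accepts a TTL for this. Note that a non-expiring token is a standing credential for joining nodes, so guard it like a secret:

```bash
# --ttl 0 creates a token that never expires
kubeadm token create --ttl 0
```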