Installing a Kubernetes Cluster with kubeadm
1. Pre-installation Preparation
1.1 Host Planning
IP | OS | Role | Hostname |
---|---|---|---|
192.168.80.7 | CentOS7.6 | master | k8s-master-1 |
192.168.80.17 | CentOS7.6 | node | k8s-node-1 |
192.168.80.27 | CentOS7.6 | node | k8s-node-2 |
192.168.80.37 | CentOS7.6 | node | k8s-node-3 |
1.2 Set Hostnames
Set the hostname on each host according to the plan above, and add name resolution entries to /etc/hosts.
# Set the hostname
hostnamectl set-hostname k8s-master-1
# Edit /etc/hosts and add the following entries
vim /etc/hosts
192.168.80.7 k8s-master-1
192.168.80.17 k8s-node-1
192.168.80.27 k8s-node-2
192.168.80.37 k8s-node-3
1.3 Disable the Firewall
# Stop firewalld
systemctl stop firewalld.service
# Disable it at boot
systemctl disable firewalld.service
1.4 Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
1.5 Disable Swap
swapoff -a
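Note that `swapoff -a` only disables swap until the next reboot. To make the change permanent, comment out the swap entry in /etc/fstab. A minimal sketch, shown here against a demo copy of the file (on a real host, run the `sed` line against /etc/fstab itself):

```shell
# Demo fstab with a swap entry (stand-in for /etc/fstab)
FSTAB=/tmp/fstab.demo
printf '%s\n' \
    '/dev/mapper/centos-root /    xfs  defaults 0 0' \
    '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
# Comment out every active swap line so the change survives a reboot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```

After the edit, the root filesystem line is untouched and the swap line is commented out, so the kernel will not re-enable swap on reboot.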
1.6 Configure Time Synchronization
# Set the time zone
timedatectl set-timezone Asia/Shanghai
# Sync the time
yum install -y ntpdate
ntpdate time1.aliyun.com
2. Install Docker
# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.119.1-1.c57a6f9.el7.noarch.rpm
yum install -y ./container-selinux-2.119.1-1.c57a6f9.el7.noarch.rpm
# Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
yum update && yum install -y containerd.io-1.2.10 docker-ce-19.03.4 docker-ce-cli-19.03.4
# Configure a registry mirror and the systemd cgroup driver
mkdir /etc/docker
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl daemon-reload
systemctl restart docker
# Start Docker at boot
systemctl enable docker
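As a non-interactive alternative to editing daemon.json in vim, the file can be written with a heredoc and sanity-checked before restarting Docker. A sketch, writing to /tmp for demonstration (on a real host, set DEST to /etc/docker/daemon.json; the python3 check assumes python3 is installed):

```shell
# DEST is /tmp here for demonstration; use /etc/docker/daemon.json on a real host
DEST=${DEST:-/tmp/daemon.json}
cat > "$DEST" <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Verify the file is valid JSON before restarting Docker; a malformed
# daemon.json prevents the Docker daemon from starting at all
python3 -m json.tool < "$DEST" > /dev/null && echo "daemon.json is valid JSON"
```

The `native.cgroupdriver=systemd` setting matters: kubeadm's preflight checks warn when Docker and the kubelet use different cgroup drivers.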
3. Install kubeadm
3.1 Configure the Repository
vim /etc/yum.repos.d/kubernetes.repo
# Add the following content
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
3.2 Adjust Kernel Parameters
yum install -y bridge-utils.x86_64
modprobe br_netfilter
vim /etc/sysctl.d/kubernetes.conf
# Add the following content
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Apply the settings
sysctl --system
3.3 Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.18.10 kubeadm-1.18.10 kubectl-1.18.10 --disableexcludes=kubernetes
# Enable kubelet at boot (and start it now)
systemctl enable --now kubelet
4. Initialize Kubernetes
4.1 Initialize the Master Node
The steps above are performed on all nodes; the steps in this section are performed only on the master node.
# Generate the default configuration file
kubeadm config print init-defaults > kubeadm.yml
Open kubeadm.yml and modify the fields marked with comments below.
[root@k8s-master-1 ~]# vim kubeadm.yml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Change to the master node's IP
  advertiseAddress: 192.168.80.7
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master-1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# Change the image repository
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  # Pod network CIDR; must not overlap with the network the hosts are on.
  # 10.244.0.0/16 is Flannel's default; if the hosts already use this range,
  # you must change it.
  podSubnet: 10.244.0.0/16
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Start the initialization.
Option 1:
# List the images that will be downloaded
kubeadm config images list --config kubeadm.yml
# Step 1: pull the images
kubeadm config images pull --config kubeadm.yml
# Step 2: initialize the cluster
kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log
## The command output looks like this:
W0413 17:15:10.957178 24033 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.80.7]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.80.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [192.168.80.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0413 17:15:14.374392 24033 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0413 17:15:14.376030 24033 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.502639 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
73707e269516fbc3c6c0b572d82f2760f5637206e8b22009f4d239229aeb4184
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.80.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e2497085aac14070f12d157d0851e4884e492c6034a2ea8a5d50ef93b1d387c7
Option 2:
# Alternatively, initialize the master node directly with command-line flags
kubeadm init \
    --apiserver-advertise-address=192.168.80.7 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs
Configure kubectl.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
4.2 Install the Calico Network
# Install Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
4.3 Initialize the Worker Nodes
The following steps are performed on each of the three worker nodes.
kubeadm join 192.168.80.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e2497085aac14070f12d157d0851e4884e492c6034a2ea8a5d50ef93b1d387c7
# After the join completes, check the node status from the master node
[root@k8s-master-1 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master-1   Ready    master   25m     v1.18.10
k8s-node-1     Ready    <none>   8m      v1.18.10
k8s-node-2     Ready    <none>   4m19s   v1.18.10
k8s-node-3     Ready    <none>   2m35s   v1.18.10
Note:
# Tokens generated by kubeadm expire after 24 hours by default, so joining
# another node later requires creating a new token.
# 1. Generate a new token
kubeadm token create
# 2. List the existing tokens
kubeadm token list
# 3. Get the SHA256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# 4. Join the new node, substituting the new token and SHA256 hash
kubeadm join 192.168.80.7:6443 --token bq9xsp.bpf3zfl7mndpl9h2 \
    --discovery-token-ca-cert-hash sha256:937e143e3bd79a24f1cdefd2693072484757beeb06869af07ba4962a78b4544d
# Alternatively, "kubeadm token create --print-join-command" prints a complete
# join command (token plus CA hash) in one step.
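The CA-hash pipeline in step 3 can be demonstrated on a throwaway self-signed certificate (a sketch; on a real master you would point it at /etc/kubernetes/pki/ca.crt, and the demo cert paths below are hypothetical):

```shell
# Generate a throwaway self-signed certificate as a stand-in for ca.crt
CRT=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out "$CRT" -days 365 2>/dev/null
# Extract the public key, encode it as DER, and hash it; this is exactly the
# value kubeadm expects after "sha256:" in --discovery-token-ca-cert-hash
HASH=$(openssl x509 -pubkey -in "$CRT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

The hash pins the cluster's CA public key, so a joining node can verify it is talking to the right API server even though the bootstrap token itself is short-lived.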
4.4 Enable kubectl on the Worker Nodes
(1) On the master node, copy admin.conf to the other nodes
scp /etc/kubernetes/admin.conf 192.168.80.17:/etc/kubernetes/
(2) Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
5. Renewing Expired Kubernetes Certificates
(1) Check the current certificate expiry dates
for item in $(find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"); do
    openssl x509 -in "$item" -text -noout | grep Not
    echo "======================$item==============="
done
(2) Back up the expired certificates
cp -rp /etc/kubernetes /etc/kubernetes.bak
(3) Export the cluster configuration
kubeadm config view > /tmp/cluster.yaml
(4) Renew the certificates
kubeadm alpha certs renew all --config=/tmp/cluster.yaml
(5) Restart the affected control-plane containers
docker ps |grep -E 'k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd' | awk -F ' ' '{print $1}' |xargs docker restart
(6) Check the expiry dates again
for item in $(find /etc/kubernetes/pki -maxdepth 2 -name "*.crt"); do
    openssl x509 -in "$item" -text -noout | grep Not
    echo "======================$item==============="
done
(7) Replace the kubectl config
rm -rf /root/.kube/
mkdir /root/.kube/
cp -i /etc/kubernetes/admin.conf /root/.kube/config
(8) Verify
kubectl get nodes