Kubernetes Non-HA Offline Deployment
Cluster Overview
1. Node Information
The nodes in this Kubernetes deployment are divided into two roles:
- master: the cluster's management node; this deployment is non-HA, so a single master is used
- slave: a worker node of the cluster; slave nodes are optional
Note: each machine needs at least 2 CPU cores and 4 GB of RAM.
2. Node Plan
| Hostname | Role | Components |
| --- | --- | --- |
| k8s-master | master | etcd, kube-apiserver, kube-controller-manager, kubectl, kubeadm, kubelet, kube-proxy, flannel, registry, httpd |
| k8s-slave | slave | kubectl, kubeadm, kubelet, kube-proxy, flannel |
3. Component Versions

| Component | Version | Notes |
| --- | --- | --- |
| CentOS | 7.5.1804 | |
| kernel | linux 3.10.0-862.el7.x86_64 | |
| etcd | v3.2.24 | uses local storage |
| coredns | v1.2.6 | |
| kubeadm | v1.13.3 | |
| kubectl | v1.13.3 | |
| kubelet | v1.13.3 | |
| kube-proxy | v1.13.3 | |
| flannel | v0.11.0 | uses the flannel vxlan backend |
| httpd | v2.4.6 | serves on port 60081 (changed from the default 80) |
| registry | v2.3.1 | serves on port 60080 |
Cluster Initialization
1. Set the machine hostname
```
# master node
$ hostnamectl set-hostname k8s-master

# slave node
$ hostnamectl set-hostname k8s-slave
```
Note: set the hostname in the same way on however many slave nodes you deploy.
2. Add hosts resolution entries
```
$ cat >> /etc/hosts <<EOF
10.0.129.84 k8s-master
10.0.128.240 k8s-slave
EOF
```
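To confirm that the new entries resolve on each node, a quick check (assuming the example IPs above) could be:
```
# Should print the IPs configured in /etc/hosts
$ getent hosts k8s-master k8s-slave
```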
3. Adjust system settings
All nodes must perform the following steps.
Open ports in the security group
If there are no security-group restrictions between the nodes (machines on the internal network can reach each other freely), this step can be skipped; otherwise, at least the following ports must be reachable:
- k8s-master node: TCP 6443, 2379, 2380, 7443, 60080, 60081; all UDP ports open
- k8s-slave node: all UDP ports open
Set iptables
```
$ iptables -P FORWARD ACCEPT
```
Disable swap
```
$ swapoff -a
# Prevent the swap partition from being mounted automatically at boot
$ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
Disable SELinux and the firewall
```
$ sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
$ setenforce 0
$ systemctl disable firewalld && systemctl stop firewalld
```
Adjust kernel parameters
```
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
EOF
$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf
```
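To confirm the parameters took effect, they can be read back with sysctl (a minimal check):
```
# Each value should be reported as 1
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```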
Load the ipvs modules
```
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
4. Copy the installer package
Run on: the k8s-master node only
```
# Copy the installer package to the /opt directory on the k8s-master node
$ scp k8s-installer.tar.gz root@k8s-master:/opt
```
```
# Extract and inspect the installer package
$ tar -zxf /opt/k8s-installer.tar.gz -C /opt
$ ls -lh /opt/k8s-installer   # the package contains the following 4 items
total 337M
drwxr-xr-x 3 root root 4.0K Jun 16 21:00 docker-ce
-rw-r--r-- 1 root root  13K Jun 16 14:00 kube-flannel.yml
drwxr-xr-x 3 root root 4.0K Jun 15 15:19 registry
-rw------- 1 root root 337M Jun 16 10:24 registry-image.tar
```
5. Deploy the yum repository
Run on: k8s-master only
Configure the local repo file
```
$ cat <<EOF > /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///opt/k8s-installer/docker-ce
gpgcheck=0
enabled=1
EOF
$ yum clean all && yum makecache
```
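If you want to confirm the repo is usable before continuing, yum can list it (a minimal check; the repo id local matches the file above):
```
# The "local" repo should appear with a non-zero package count
$ yum repolist --disablerepo=* --enablerepo=local
```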
Install and configure the httpd service
```
$ yum install -y httpd --disablerepo=* --enablerepo=local
```
httpd listens on port 80 by default; to avoid port conflicts, change it to port 60081:
```
$ sed -i 's/Listen 80/Listen 60081/g' /etc/httpd/conf/httpd.conf
```
Copy the packages into the web root, which defaults to /var/www/html:
```
$ cp -r /opt/k8s-installer/docker-ce/ /var/www/html/
$ systemctl enable httpd && systemctl start httpd
```
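To verify that httpd is actually serving the repository, the repo metadata can be fetched over HTTP (a minimal check; replace 10.0.129.84 with your k8s-master IP, and note the path assumes the standard repodata layout inside the docker-ce directory):
```
# An HTTP 200 response means the repo is reachable on port 60081
$ curl -sI http://10.0.129.84:60081/docker-ce/repodata/repomd.xml
```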
6. Install and configure docker
Run on: all nodes (k8s-master, k8s-slave)
Configure the yum repo. Replace 10.0.128.210:60081 with the actual ip:port used by the httpd service on the k8s-master node.
```
$ cat <<EOF > /etc/yum.repos.d/local-http.repo
[local-http]
name=local-http
baseurl=http://10.0.128.210:60081/docker-ce
gpgcheck=0
enabled=1
EOF
$ yum clean all && yum makecache
```
Configure the docker daemon file. Replace 10.0.128.210 with the actual k8s-master node IP. Port 60080 is the default image registry port; if you use a different port, replace every occurrence of 60080 in this document with the port you actually use.
```
$ mkdir /etc/docker
$ cat <<EOF > /etc/docker/daemon.json
{
  "insecure-registries": [
    "10.0.128.210:60080"
  ],
  "storage-driver": "overlay2"
}
EOF
```
Install and start docker
```
$ yum install -y docker-ce docker-ce-cli containerd.io --disablerepo=* --enablerepo=local-http
$ systemctl enable docker && systemctl start docker
```
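To confirm that docker is running and picked up the daemon.json settings, the relevant fields of docker info can be checked (a minimal check):
```
# Should show "overlay2" and the configured 60080 registry address
$ docker info | grep -A 2 -iE 'storage driver|insecure registries'
```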
7. Configure the image registry
This registry stores the images of the components needed by the Kubernetes deployment (kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flannel, coredns, and so on). It is deployed with docker run and by default exposes port 60080 on the machine.
Run on: the k8s-master node only
Load the registry image into the local docker:
```
$ docker load -i /opt/k8s-installer/registry-image.tar
$ docker images
REPOSITORY                               TAG      IMAGE ID       CREATED       SIZE
index.alauda.cn/alaudaorg/distribution   latest   2aee66f2203d   2 years ago   347MB
```
Start the registry. Port 60080 is used as the registry's external service port by default; if you change it, also update the port configured under insecure-registries in /etc/docker/daemon.json on every node.
```
$ docker run -d --restart=always --name pkg-registry -p 60080:5000 \
    -v /opt/k8s-installer/registry/:/var/lib/registry \
    index.alauda.cn/alaudaorg/distribution:latest
```
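Assuming the bundled image is a standard Docker Registry v2 (as the distribution name suggests), the running registry can be checked against the v2 API (a minimal check; replace 10.0.129.84 with your k8s-master IP):
```
# Should return a JSON list of the pre-loaded repositories
$ curl http://10.0.129.84:60080/v2/_catalog
```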
Deploy Kubernetes
1. Install kubeadm, kubelet, and kubectl
Run on: all master and slave nodes (k8s-master, k8s-slave)
```
$ yum install -y kubeadm kubectl kubelet --disablerepo=* --enablerepo=local-http
```
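A quick version check confirms that the expected 1.13.3 packages were installed (a minimal check):
```
# All three should report v1.13.3
$ kubeadm version -o short
$ kubelet --version
$ kubectl version --client --short
```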
2. Configure kubelet
Run on: all master and slave nodes (k8s-master, k8s-slave)
Enable kubelet to start at boot:
```
$ systemctl enable kubelet
```
Write the kubelet unit file /etc/systemd/system/kubelet.service. Note that the --pod-infra-container-image address must point to the actual image registry (by default, the k8s-master machine's ip:60080). The heredoc delimiter is quoted so that the $VARIABLES are written literally into the unit file instead of being expanded by the shell:
```
$ cat <<'EOF' > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/

[Service]
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_INFRA_CONTAINER_IMAGE=--pod-infra-container-image=10.0.129.84:60080/k8s/pause:3.1"
ExecStart=/usr/bin/kubelet $KUBELET_SYSTEM_PODS_ARGS $KUBELET_INFRA_CONTAINER_IMAGE
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
```
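Since this overwrites the unit file installed by the RPM, systemd may need to reload its unit definitions before the change is picked up; kubelet itself is started by kubeadm during init/join. A minimal sketch:
```
# Reload systemd so the new kubelet.service definition takes effect
$ systemctl daemon-reload
```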
3. Configure the kubeadm init file
Run on: the master node only (k8s-master)
Two fields need to be modified:
- advertiseAddress: change to the internal IP address of k8s-master
- imageRepository: change the IP to the internal IP address of k8s-master (keeping the :60080/k8s suffix)
```
$ cat <<EOF > /opt/kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.129.84
  bindPort: 6443
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 10.0.129.84:60080/k8s
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
```
4. Pre-pull the images
Run on: the master node only (k8s-master)
```
# List the images that will be used; if everything is correct, the list looks like this
$ kubeadm config images list --config /opt/kubeadm.conf
10.0.129.84:60080/k8s/kube-apiserver:v1.13.3
10.0.129.84:60080/k8s/kube-controller-manager:v1.13.3
10.0.129.84:60080/k8s/kube-scheduler:v1.13.3
10.0.129.84:60080/k8s/kube-proxy:v1.13.3
10.0.129.84:60080/k8s/pause:3.1
10.0.129.84:60080/k8s/etcd:3.2.24
10.0.129.84:60080/k8s/coredns:1.2.6
```
```
# Pull the images to the local docker ahead of time
$ kubeadm config images pull --config /opt/kubeadm.conf
```
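After the pull finishes, the images should be present in the local docker cache (a minimal check; the prefix assumes the registry at 10.0.129.84:60080):
```
# Expect the seven images listed above
$ docker images | grep '10.0.129.84:60080/k8s'
```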
5. Initialize the master node
Run on: the master node only (k8s-master)
```
$ kubeadm init --config /opt/kubeadm.conf
```
If initialization succeeds, a message like the following is printed at the end:
```
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.129.84:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:6bb7e2646f1f846efddf2525c012505b76831ff9453329d0203d010814783a51
```
Next, follow the instructions above to configure the kubectl client credentials:
```
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
At this point kubectl get nodes should show the node in NotReady status, because the network plugin has not been configured yet.
If an error occurs during initialization, fix the issue indicated by the error message, run kubeadm reset, and then run the init command again.
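A quick look at the kube-system pods shows the control-plane components running; the coredns pods normally stay Pending until the network plugin is installed in a later step (a minimal check):
```
# coredns pods remain Pending until flannel is deployed
$ kubectl get pods -n kube-system
```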
6. Add slave nodes to the cluster
Run on: all slave nodes (k8s-slave)
On each slave node, run the following command. It is the join command printed in the message after kubeadm init succeeded, so replace it with the command actually printed by your init run.
```
$ kubeadm join 10.0.129.84:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:6bb7e2646f1f846efddf2525c012505b76831ff9453329d0203d010814783a51
```
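The bootstrap token in the kubeadm config above has a 24-hour TTL; if it has expired or the join command was lost, a fresh one can be generated on the master (a minimal sketch):
```
# Run on k8s-master; prints a ready-to-use kubeadm join command
$ kubeadm token create --print-join-command
```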
7. Install the flannel plugin
Run on: the master node only (k8s-master)
Copy the kube-flannel.yml file to the /opt directory on the master node:
```
$ cp /opt/k8s-installer/kube-flannel.yml /opt
```
Replace the flannel image address. 10.0.129.84:60080 must be replaced with the actual image registry address (k8s-master node ip:60080):
```
$ sed -i "s#quay.io/coreos#10.0.129.84:60080/k8s#g" /opt/kube-flannel.yml
```
If podSubnet was set to a value other than 10.244.0.0/16 when configuring the kubeadm init file, the following part of kube-flannel.yml must be changed to the same value, otherwise flannel will fail to start.
```
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```
Create the flannel resources:
```
$ kubectl create -f /opt/kube-flannel.yml
```
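Flannel runs as a DaemonSet in the kube-system namespace; shortly after creation there should be one Running flannel pod per node, and the nodes should turn Ready (a minimal check):
```
# One kube-flannel pod per node, all in Running state
$ kubectl get pods -n kube-system -o wide | grep flannel
$ kubectl get nodes
```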
8. Make the master node schedulable (optional)
Run on: k8s-master
By default, after a successful deployment the master node cannot schedule workload pods. To let the master node take part in pod scheduling as well, run:
```
$ kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
```
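The change can be confirmed by checking that the NoSchedule taint is gone from the master node (a minimal check):
```
# Should print "Taints: <none>" once the taint has been removed
$ kubectl describe node k8s-master | grep Taints
```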
9. Verify the cluster
Run on: the master node (k8s-master)
```
$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   22h   v1.13.3
k8s-slave    Ready    <none>   22h   v1.13.3
```
Create a test nginx service; replace 10.0.129.84 with the actual IP address of the k8s-master node:
```
$ kubectl run test-nginx --image=10.0.129.84:60080/k8s/nginx
```
Check whether the pod was created successfully, then access the pod IP to confirm the service works:
```
$ kubectl get po -o wide | grep test-nginx
test-nginx-7d65ddddc9-lcg9z   1/1   Running   0   12s   10.244.1.3   k8s-slave   <none>   <none>
$ curl 10.244.1.3
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
...
```
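Once the check passes, the test workload can be removed. On Kubernetes 1.13, kubectl run creates a Deployment, so a minimal cleanup sketch is:
```
# Remove the test deployment and its pod
$ kubectl delete deployment test-nginx
```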