Setting Up a Highly Available Kubernetes Cluster (1.17.3) on Alibaba Cloud

First, prepare five CentOS 7 ECS instances (2 vCPUs / 4 GB RAM minimum each) and provision a private-network SLB (Server Load Balancer).

We will build the highly available cluster using the stacked-etcd topology. Because etcd relies on the Raft consensus algorithm, which only makes progress while a majority of members is alive, high availability requires at least 3 master nodes; on top of those we add 2 workers.
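The 3-master requirement follows directly from Raft's majority rule: an n-member etcd cluster needs ⌊n/2⌋+1 live members to stay writable, so it tolerates ⌊(n-1)/2⌋ failures. A quick sketch of the arithmetic for common cluster sizes:

```shell
# Raft quorum arithmetic: an n-member etcd cluster needs floor(n/2)+1 live
# members and therefore tolerates floor((n-1)/2) member failures.
for n in 1 3 5; do
  echo "$n members: quorum $(( n/2 + 1 )), tolerates $(( (n-1)/2 )) failure(s)"
done
```

With a single master any failure is fatal; with 3 masters you can lose one and keep serving, which is why 3 is the practical minimum here.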

master01 172.26.0.1
master02 172.26.0.2
master03 172.26.0.3
work01 172.26.0.4
work02 172.26.0.5
slb 172.26.0.99

  

First, run the following script on every machine; it installs Docker plus the three Kubernetes packages (kubelet, kubeadm, kubectl):

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker.service
systemctl enable docker.service
mkdir -p /etc/docker
# overwrite (>) rather than append (>>): appending would corrupt existing JSON
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-registry-mirror>"]
}
EOF
systemctl daemon-reload
systemctl restart docker.service
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
systemctl enable kubelet.service
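After the script finishes, a quick sanity check (assuming the installs succeeded) confirms the versions pinned above:

```shell
# Verify the runtime and the three Kubernetes tools are installed and pinned
docker version --format '{{.Server.Version}}'   # Docker daemon version
kubeadm version -o short                        # should print v1.17.3
kubectl version --client -o yaml | grep gitVersion
systemctl is-enabled kubelet                    # should print "enabled"
```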

Since Kubernetes officially announced that Docker support (dockershim) will be phased out after 1.20, here is an alternative node setup that uses containerd directly as the standard OCI/CRI runtime. Note that this variant targets a newer release (1.19.4) than the 1.17.3 used in the rest of this article:

yum install -y yum-utils libseccomp 
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd.io
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl enable containerd
systemctl start containerd
sed -i 's:k8s.gcr.io/pause:registry.aliyuncs.com/google_containers/pause:g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl restart containerd
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
yum install -y kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4
systemctl enable kubelet.service
setenforce 0
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=cri --kubernetes-version=1.19.4
Optional install (replaces the docker CLI for CRI runtimes):

VERSION="v1.19.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz
echo "runtime-endpoint: unix:///run/containerd/containerd.sock" > /etc/crictl.yaml
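Once crictl is pointed at the containerd socket, it covers the day-to-day docker subcommands. A few illustrative equivalents (assuming the runtime is up):

```shell
# crictl mirrors the familiar docker subcommands against the CRI socket
crictl images                # roughly: docker images
crictl ps -a                 # roughly: docker ps -a
crictl logs <container-id>   # roughly: docker logs
crictl pull registry.aliyuncs.com/google_containers/pause:3.2
```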

Next, log in to master01, add the Kubernetes API endpoint to /etc/hosts (this hostname is the key to high availability), and initialize the cluster:

cat>>/etc/hosts<<EOF
172.26.0.1 k8sapi
EOF
kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=cri --control-plane-endpoint "k8sapi:6443"  --kubernetes-version=1.17.3
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
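The kubeadm init output prints the join command only once; if it scrolls away, or the bootstrap token (valid for 24 hours by default) expires before the other nodes join, it can be regenerated on master01. These are standard kubeadm/openssl invocations:

```shell
# Re-create a bootstrap token and print a ready-to-use worker join command
kubeadm token create --print-join-command

# Recompute the CA cert hash by hand if needed
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```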

  Once initialization succeeds, map the SLB to master01's TCP port 6443, then copy the certificates generated on master01 to master02 and master03. Log in to 02 and 03 and run on each:

cat>>/etc/hosts<<EOF
172.26.0.99 k8sapi
EOF
mkdir -p /etc/kubernetes/pki/etcd

  Log back in to master01 and run:

cd /etc/kubernetes/pki/
scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@172.26.0.2:/etc/kubernetes/pki/
scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@172.26.0.3:/etc/kubernetes/pki/
cd etcd
scp ca.crt ca.key root@172.26.0.2:/etc/kubernetes/pki/etcd/
scp ca.crt ca.key root@172.26.0.3:/etc/kubernetes/pki/etcd/
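Since both target hosts receive the same set of files, the copy above can equally be written as a loop (same files, same destinations):

```shell
cd /etc/kubernetes/pki/
for host in 172.26.0.2 172.26.0.3; do
  # cluster CA, service-account keys, and front-proxy CA
  scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key \
      root@$host:/etc/kubernetes/pki/
  # etcd CA for the stacked etcd members
  scp etcd/ca.crt etcd/ca.key root@$host:/etc/kubernetes/pki/etcd/
done
```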

  Log in to 02 and 03 again and run on each:

kubeadm join k8sapi:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx --control-plane
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

  Log in to 04 and 05 and run:

cat>>/etc/hosts<<EOF
172.26.0.99 k8sapi
EOF
kubeadm join k8sapi:6443 --token xxx   --discovery-token-ca-cert-hash sha256:xxx
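After the workers join, you can watch them register from any master (the node names below are assumptions based on the host list at the top). They will report NotReady until the network plugin is installed in the next step:

```shell
kubectl get nodes -w                       # watch until worker01/worker02 appear
kubectl get pods -n kube-system -o wide    # one kube-proxy pod should exist per node
```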

  Finally, log back in to 02 and 03, change the k8sapi entry in /etc/hosts to point at each node's own internal IP, and add port mappings for 02 and 03 to the SLB. Then, on any master node, install the network plugin:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: it is best to raise flannel's CPU and memory limits; otherwise flannel
# can be OOM-killed, leaving pods unable to restart and blocking the network.
kubectl edit daemonset.apps/kube-flannel-ds -n kube-system -o yaml
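Instead of an interactive edit, the limits can also be raised non-interactively. A sketch with illustrative values (the values are assumptions, and the DaemonSet may be named kube-flannel-ds-amd64 in some flannel releases):

```shell
# Raise flannel's resource limits so it is not OOM-killed under load
# (the cpu/memory values here are illustrative, not recommendations)
kubectl -n kube-system set resources daemonset kube-flannel-ds \
  --limits=cpu=300m,memory=200Mi \
  --requests=cpu=100m,memory=100Mi
```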

  The end result looks like this:

$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   92m   v1.17.3
master02   Ready    master   50m   v1.17.3
master03   Ready    master   51m   v1.17.3
worker01   Ready    <none>   77m   v1.17.3
worker02   Ready    <none>   50m   v1.17.3

  

posted @ 2020-02-27 16:57  a1010