Binary Installation of Kubernetes v1.11.2 (Chapter 4: etcd Cluster Deployment)
This continues the deployment from Chapter 1.
4. Deploy the etcd cluster
4.1 Kubernetes stores all of its state in etcd. This section deploys a two-node etcd cluster, reusing the master nodes from Chapter 1. Note that a two-member cluster cannot tolerate the loss of a node (quorum requires both members), so a production high-availability deployment should use three or more nodes.
192.168.56.20 k8s-m1
192.168.56.21 k8s-m2
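The loops throughout this chapter rely on variables defined in /opt/k8s/bin/environment.sh from Chapter 1 (MASTER_IPS, NODE_NAMES, ETCD_NODES). That file is not reproduced here; the following is only a sketch of what it is assumed to contain for this two-node layout, with member names chosen to match the rendered unit file shown in section 4.8. Adjust it to your own environment.

#!/usr/bin/env bash
# Sketch of the assumed /opt/k8s/bin/environment.sh from Chapter 1 -- not the original file.
MASTER_IPS=(192.168.56.20 192.168.56.21)      # IPs of the master/etcd nodes
NODE_NAMES=(kube-node1 kube-node2)            # etcd member names (need not match hostnames)
# Initial cluster string: <name>=https://<ip>:2380 pairs, comma separated
export ETCD_NODES="kube-node1=https://192.168.56.20:2380,kube-node2=https://192.168.56.21:2380"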
4.2 Download and distribute the etcd binaries
[k8s@k8s-m1 ~]$ cd /home/k8s/k8s
[k8s@k8s-m1 k8s]$ wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz
[k8s@k8s-m1 k8s]$ tar -xzf etcd-v3.3.7-linux-amd64.tar.gz
[k8s@k8s-m1 k8s]$ source /opt/k8s/bin/environment.sh
[k8s@k8s-m1 k8s]$ for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    scp etcd-v3.3.7-linux-amd64/etcd* k8s@${master_ip}:/opt/k8s/bin
    ssh k8s@${master_ip} "chmod +x /opt/k8s/bin/*"
  done
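Optionally, confirm the binaries landed and run on every node by printing the etcd version remotely (a quick sanity check, not part of the original steps):

[k8s@k8s-m1 k8s]$ for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    # should print "etcd Version: 3.3.7" on each node
    ssh k8s@${master_ip} "/opt/k8s/bin/etcd --version | head -1"
  done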
4.3 Create the etcd certificate and private key
Create the certificate signing request:
[k8s@k8s-m1 k8s]$ cd /opt/k8s/cert/
[k8s@k8s-m1 cert]$ cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.56.20",
    "192.168.56.21"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
- The hosts field lists the IP addresses or domain names of the etcd nodes that are authorized to use this certificate.
4.4 Generate the certificate and private key
cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*
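To confirm that the hosts list made it into the certificate as Subject Alternative Names, you can inspect the generated certificate with openssl (an optional check, not in the original steps):

openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"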
4.5 Distribute the certificates to each etcd node
[k8s@k8s-m1 cert]$ source /opt/k8s/bin/environment.sh
[k8s@k8s-m1 cert]$ for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /etc/etcd/cert && chown -R k8s /etc/etcd/cert"
    scp etcd*.pem k8s@${master_ip}:/etc/etcd/cert/
  done
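If you want to double-check that the certificate files arrived on every node, a simple listing is enough (optional):

[k8s@k8s-m1 cert]$ for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    # expect etcd.pem and etcd-key.pem on each node
    ssh k8s@${master_ip} "ls -l /etc/etcd/cert"
  done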
4.6 Create the etcd systemd unit template file
[k8s@k8s-m1 cert]$ mkdir -p /opt/k8s/template && cd /opt/k8s/template
[k8s@k8s-m1 template]$ source /opt/k8s/bin/environment.sh
[k8s@k8s-m1 template]$ cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=/var/lib/etcd \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##master_ip##:2380 \\
  --initial-advertise-peer-urls=https://##master_ip##:2380 \\
  --listen-client-urls=https://##master_ip##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##master_ip##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4.7 Render the template with the correct values and distribute it to each etcd node
# Substitute the correct values into the template
[k8s@k8s-m1 template]$ for (( i=0; i < 2; i++ ))
  do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##master_ip##/${MASTER_IPS[i]}/" etcd.service.template > etcd-${MASTER_IPS[i]}.service
  done

# Create the etcd data and working directory on each node, then distribute the rendered
# unit files, renaming each one to etcd.service
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "mkdir -p /var/lib/etcd && chown -R k8s /var/lib/etcd"
    scp etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
  done
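A quick way to verify that both placeholders were substituted in every rendered file is to grep for the ## markers; no matches means the substitution succeeded (an optional check):

[k8s@k8s-m1 template]$ grep -n '##' etcd-*.service || echo "all placeholders replaced"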
4.8 The complete rendered etcd configuration file, for reference (k8s-m1 node)
[root@k8s-m1 ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/var/lib/etcd \
  --name=kube-node1 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://192.168.56.20:2380 \
  --initial-advertise-peer-urls=https://192.168.56.20:2380 \
  --listen-client-urls=https://192.168.56.20:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://192.168.56.20:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=kube-node1=https://192.168.56.20:2380,kube-node2=https://192.168.56.21:2380 \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4.9 Start the etcd service
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd &"
  done

The restart is backgrounded with & because the first etcd member blocks until enough peers have joined to form a quorum; without it, the loop would hang on the first node.
4.10 Check that the etcd service started
source /opt/k8s/bin/environment.sh
for master_ip in ${MASTER_IPS[@]}
  do
    echo ">>> ${master_ip}"
    ssh root@${master_ip} "systemctl status etcd|grep Active"
  done
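If a node does not report "Active: active (running)", inspect that node's etcd logs before retrying (a standard systemd troubleshooting step, not in the original text):

ssh root@${master_ip} "journalctl -u etcd --no-pager | tail -n 50"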
Verify the cluster health
[k8s@k8s-m1 ~]$ for master_ip in ${MASTER_IPS[@]}
> do
>   echo ">>> ${master_ip}"
>   ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
>   --endpoints=https://${master_ip}:2379 \
>   --cacert=/etc/kubernetes/cert/ca.pem \
>   --cert=/etc/etcd/cert/etcd.pem \
>   --key=/etc/etcd/cert/etcd-key.pem endpoint health
> done
>>> 192.168.56.20
https://192.168.56.20:2379 is healthy: successfully committed proposal: took = 5.458846ms
>>> 192.168.56.21
https://192.168.56.21:2379 is healthy: successfully committed proposal: took = 3.662995ms
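You can also list the cluster members to confirm that both nodes joined the same cluster (an optional extra check; the exact output will vary):

[k8s@k8s-m1 ~]$ ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://192.168.56.20:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem member list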