Kubernetes Backup and Restore
- Backing up the etcd data
Since etcd runs as a three-member cluster whose members stay in sync, the backup only needs to be taken on one master node.
Also, before running the commands below, make sure kube-apiserver is running on that machine:
ps -ef | grep kube-apiserver

Run the backup:
export ETCD_SERVERS=$(ps -ef | grep apiserver | grep -Eo "etcd-servers=.*2379" | awk -F= '{print $NF}')
mkdir -p /var/lib/etcd_backup/
export ETCDCTL_API=3
etcdctl snapshot save /var/lib/etcd_backup/backup_$(date "+%Y%m%d%H%M%S").db \
  --endpoints=$ETCD_SERVERS \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem
Snapshot saved at /var/lib/etcd_backup/backup_20180107172459.db
Once it completes, you will find the snapshot in /var/lib/etcd_backup:
[root@iZwz95q64qi83o88y9lq4cZ etcd_backup]# cd /var/lib/etcd_backup/
[root@iZwz95q64qi83o88y9lq4cZ etcd_backup]# ls
backup_20180107172459.db
[root@iZwz95q64qi83o88y9lq4cZ etcd_backup]# du -sh backup_20180107172459.db
8.0M    backup_20180107172459.db
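As snapshots accumulate in the backup directory, a little housekeeping helps. The two helpers below are only a sketch: the function names, the non-empty-file check, and the retention count of 7 are assumptions, not part of the procedure above.

```shell
# Housekeeping sketches for the backup directory; names and defaults are assumptions.

# Refuse a missing or zero-byte snapshot file; for a deeper check,
# "ETCDCTL_API=3 etcdctl snapshot status <file>" prints the hash, revision and size.
check_snapshot() {
  [ -s "$1" ] || { echo "snapshot $1 is missing or empty" >&2; return 1; }
}

# Keep only the newest $2 snapshots in directory $1 (newest decided by mtime).
prune_backups() {
  dir=$1; keep=$2
  # List newest first; everything past the first $keep entries is removed.
  ls -1t "$dir"/backup_*.db 2>/dev/null | tail -n +"$((keep + 1))" | while read -r f; do
    rm -f -- "$f"
  done
}

# Example:
# check_snapshot /var/lib/etcd_backup/backup_20180107172459.db && prune_backups /var/lib/etcd_backup 7
```

Running the prune right after each backup keeps disk usage bounded without touching the most recent snapshots.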
- Restoring the Kubernetes cluster from an etcd backup
First, stop kube-apiserver on each of the three master nodes. Confirm it has stopped; the following command should print 0:
ps -ef | grep kube-api | grep -v grep | wc -l
0
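Rather than eyeballing the count, a small wait loop can block until the process is really gone before the restore proceeds. This is a sketch: the function name and the timeout default are assumptions.

```shell
# Wait until no process matches the given pattern, or give up after $2 seconds
# (hypothetical helper; defaults are assumptions).
wait_gone() {
  pattern=$1; timeout=${2:-60}
  while [ "$timeout" -gt 0 ]; do
    # pgrep -f exits non-zero once nothing matches the pattern
    if ! pgrep -f "$pattern" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
    timeout=$((timeout - 1))
  done
  return 1
}

# Example: wait_gone kube-apiserver 120 || echo "kube-apiserver is still running" >&2
```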
Stop the etcd service on each of the three master nodes:
service etcd stop
Move the etcd data directory out of the way:
mv /var/lib/etcd/data.etcd /var/lib/etcd/data.etcd_bak
Restore the data on each node. First, copy the snapshot to every master node; assume the backup lives at /var/lib/etcd_backup/backup_20180107172459.db:
scp /var/lib/etcd_backup/backup_20180107172459.db root@master1:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@master2:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@master3:/var/lib/etcd_backup/
Run the restore command on each node:
set -x
export ETCD_NAME=$(cat /usr/lib/systemd/system/etcd.service | grep ExecStart | grep -Eo "name.*-name-[0-9].*--client" | awk '{print $2}')
export ETCD_CLUSTER=$(cat /usr/lib/systemd/system/etcd.service | grep ExecStart | grep -Eo "initial-cluster.*--initial" | awk '{print $2}')
export ETCD_INITIAL_CLUSTER_TOKEN=$(cat /usr/lib/systemd/system/etcd.service | grep ExecStart | grep -Eo "initial-cluster-token.*" | awk '{print $2}')
export ETCD_INITIAL_ADVERTISE_PEER_URLS=$(cat /usr/lib/systemd/system/etcd.service | grep ExecStart | grep -Eo "initial-advertise-peer-urls.*--listen-peer" | awk '{print $2}')
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd_backup/backup_20180107172459.db \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  --name $ETCD_NAME \
  --data-dir /var/lib/etcd/data.etcd \
  --initial-cluster $ETCD_CLUSTER \
  --initial-cluster-token $ETCD_INITIAL_CLUSTER_TOKEN \
  --initial-advertise-peer-urls $ETCD_INITIAL_ADVERTISE_PEER_URLS
Then fix ownership of the restored data directory:

chown -R etcd:etcd /var/lib/etcd/data.etcd
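The grep/awk chains above are brittle: they depend on the exact flag order inside the unit file's ExecStart line. A sketch of a more tolerant helper, assuming flags are written as `--flag=value` (the function name is hypothetical):

```shell
# Pull "--<flag>=<value>" out of a systemd unit file one token at a time,
# instead of relying on flag order (a sketch; the unit path is passed by the caller).
unit_flag() {
  flag=$1; unit=$2
  # Break the file into one token per line (spaces and backslash
  # line continuations both become newlines), then strip the "--flag=" prefix.
  tr ' \\' '\n\n' < "$unit" | sed -n "s/^--$flag=//p" | head -n 1
}

# Example: unit_flag name /usr/lib/systemd/system/etcd.service
```

Because the match is anchored on the whole token, querying `initial-cluster` will not accidentally return the value of `initial-cluster-token`.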
Start etcd on each of the three master nodes and confirm with the service command that it started successfully:
# service etcd start
# service etcd status
Check the health of the etcd cluster:

# export ETCD_SERVERS=$(cat /etc/kubernetes/manifests-backups/kube-apiserver.yaml | grep etcd-server | awk -F= '{print $2}')
# ETCDCTL_API=3 etcdctl endpoint health --endpoints=$ETCD_SERVERS --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd-client.pem --key=/etc/kubernetes/ssl/etcd-key.pem
https://192.168.250.198:2379 is healthy: successfully committed proposal: took = 2.238886ms
https://192.168.250.196:2379 is healthy: successfully committed proposal: took = 3.390819ms
https://192.168.250.197:2379 is healthy: successfully committed proposal: took = 2.925103ms
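Right after a restore, members can take a few seconds to elect a leader, so the first health check may fail transiently. A small retry wrapper (a sketch; the name and one-second spacing are assumptions) avoids treating that as a real failure:

```shell
# Retry a command until it succeeds or the attempts run out
# (hypothetical helper; sleeps one second between attempts).
retry() {
  tries=$1; shift
  while [ "$tries" -gt 0 ]; do
    if "$@"; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Example:
# retry 30 etcdctl endpoint health --endpoints="$ETCD_SERVERS" \
#   --cacert=/etc/kubernetes/ssl/ca.pem \
#   --cert=/etc/kubernetes/ssl/etcd-client.pem \
#   --key=/etc/kubernetes/ssl/etcd-key.pem
```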
Once etcd is healthy, start kube-apiserver again on each master node, then check that the cluster has recovered. As shown below, the cluster is up and the previously deployed applications are still present.
# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Backing up Kubernetes comes down to backing up etcd. When restoring, the main thing to get right is the order of operations: stop kube-apiserver, stop etcd, restore the data, start etcd, and finally start kube-apiserver.
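That order can be sketched as a single driver script. This is only an outline: every command is routed through `$RUN`, so by default it prints the plan instead of executing it; the restore flags are abbreviated relative to the full command shown earlier, and stopping kube-apiserver via systemctl is an assumption (adjust if your apiserver runs as a static pod).

```shell
# Dry-run-able outline of the full restore order. With the default RUN=echo it
# only prints the plan; set RUN="" to execute. Paths follow this document, but
# the systemd unit name for kube-apiserver is an assumption.
RUN=${RUN:-echo}
SNAP=/var/lib/etcd_backup/backup_20180107172459.db

restore_sequence() {
  $RUN systemctl stop kube-apiserver                            # 1. stop the apiserver
  $RUN service etcd stop                                        # 2. stop etcd
  $RUN mv /var/lib/etcd/data.etcd /var/lib/etcd/data.etcd_bak   #    set the old data aside
  $RUN etcdctl snapshot restore "$SNAP" --data-dir /var/lib/etcd/data.etcd   # 3. restore (flags abbreviated)
  $RUN chown -R etcd:etcd /var/lib/etcd/data.etcd
  $RUN service etcd start                                       # 4. start etcd
  $RUN systemctl start kube-apiserver                           # 5. start the apiserver
}

restore_sequence   # with RUN=echo this prints the seven commands in order
```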