Migrating etcd Cluster Data to a New Cluster
Back up the data in the old etcd environment
Back up the v2 data:
etcdctl backup --data-dir /var/lib/etcd --backup-dir /opt/etcdv2
Note: here the data directory is /var/lib/etcd and the backup directory is /opt/etcdv2.
Back up the v3 data:
ETCDCTL_API=3 etcdctl snapshot save /opt/etcdv2/member/snap/db
Note: the v3 backup is written to /opt/etcdv2/member/snap/db. This path is tied to the v2 backup directory as follows: <v2-backdir>/member/snap/db.
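Putting the two backup commands together makes that relationship explicit. A minimal sketch, assuming the same paths as above; BACKUP_DIR is just a convenience variable introduced here:
BACKUP_DIR=/opt/etcdv2                                     # <v2-backdir>
etcdctl backup --data-dir /var/lib/etcd --backup-dir "$BACKUP_DIR"
ETCDCTL_API=3 etcdctl snapshot save "$BACKUP_DIR/member/snap/db"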
Copy the data to a new node
Package the data on the old node:
zip -r etcdv2.zip /opt/etcdv2
Transfer it to the new node:
scp etcdv2.zip root@xxxx:/opt # scp to the new machine (one node is enough; here it goes to new-01)
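Optionally, confirm the archive survived the transfer intact; this check is an extra precaution, not part of the original steps:
sha256sum etcdv2.zip        # on the old node
sha256sum /opt/etcdv2.zip   # on new-01; the two digests should match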
Restore on the new cluster
1. Unpack the backup and place it under the etcd data directory
unzip /opt/etcdv2.zip -d / && mv /opt/etcdv2/member /var/lib/etcd/infra1.etcd/ # the archive stores paths relative to /, so extract there
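After the move, the data directory should contain the standard member layout: snap (snapshots, now also holding the v3 db saved earlier) and wal (write-ahead logs). A quick check:
ls /var/lib/etcd/infra1.etcd/member
# expected: snap  wal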
2. Start the new node (new-01)
The backup still contains the old service's cluster membership information. Since this is a migration, that old cluster information has to be overwritten (user data is not affected): add --force-new-cluster to the startup parameters, and once the service has started successfully the old cluster information has been replaced. Then remove the flag and restart the service.
Note: in the node configuration, do not add the other members too early; configure only the current node. The remaining members will be joined one by one later.
Preview of the etcd configuration on new-01
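Here is a hypothetical sketch of what that configuration might look like, built from the node name and URLs that appear in the member listings below; the exact keys and the file path are assumptions:
# /etc/etcd/etcd.conf on new-01 (hypothetical; only the current node is listed)
ETCD_NAME="infra1"
ETCD_DATA_DIR="/var/lib/etcd/infra1.etcd"
ETCD_LISTEN_PEER_URLS="http://10.94.19.179:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.94.19.179:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.94.19.179:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.94.19.179:2379"
ETCD_INITIAL_CLUSTER="infra1=http://10.94.19.179:2380"
# --force-new-cluster itself goes into ExecStart in /usr/lib/systemd/system/etcd.service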
Start the service:
[root@prod-k8s-01 ~]# systemctl start etcd
Once the service has started successfully, delete the --force-new-cluster parameter from /usr/lib/systemd/system/etcd.service and restart etcd.
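Since a systemd unit file was edited, systemd must reload it before the restart takes effect. For example (the sed pattern assumes the flag appears verbatim in ExecStart; editing by hand works just as well):
sed -i 's/ --force-new-cluster//' /usr/lib/systemd/system/etcd.service
systemctl daemon-reload
systemctl restart etcd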
3. Fix the current node's peerURLs
During the migration the current node came up with the wrong peerURLs, which needs to be corrected.
Check the member info:
[root@prod-k8s-01 ~]# etcdctl member list
76926a56d901: name=infra1 peerURLs=http://10.94.19.179:2379 clientURLs=http://10.94.19.179:2379 isLeader=false
Here peerURLs=http://10.94.19.179:2379 points at the client port instead of the peer port 2380 from the configuration, so it has to be reset:
[root@prod-k8s-01 ~]# etcdctl member update 76926a56d901 http://10.94.19.179:2380 # fix the member's peerURLs
At this point the old cluster's data has been restored on the new cluster, but with only a single member the service does not meet high-availability requirements, so more nodes must be added.
Add new members
1. Add member 2
[root@prod-k8s-01 ~]# etcdctl member add infra2 http://10.94.19.180:2380
Added member named infra2 with ID ee0b43bac93847e to cluster
ETCD_NAME="infra2"
ETCD_INITIAL_CLUSTER="infra1=http://10.94.19.179:2380,infra2=http://10.94.19.180:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Preview of the etcd configuration on new-02
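A hypothetical equivalent for new-02, with ETCD_INITIAL_CLUSTER and ETCD_INITIAL_CLUSTER_STATE taken from the member add output above (the other keys mirror the new-01 sketch and remain assumptions):
# /etc/etcd/etcd.conf on new-02 (hypothetical)
ETCD_NAME="infra2"
ETCD_DATA_DIR="/var/lib/etcd/infra2.etcd"
ETCD_LISTEN_PEER_URLS="http://10.94.19.180:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.94.19.180:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.94.19.180:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.94.19.180:2379"
ETCD_INITIAL_CLUSTER="infra1=http://10.94.19.179:2380,infra2=http://10.94.19.180:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"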
Start node 02; its key configuration items must be set to the values output above.
Check the member list:
[root@prod-k8s-01 ~]# etcdctl member list
76926a56d901: name=infra1 peerURLs=http://10.94.19.179:2380 clientURLs=http://10.94.19.179:2379 isLeader=false
ee0b43bac93847e: name=infra2 peerURLs=http://10.94.19.180:2380 clientURLs=http://10.94.19.180:2379 isLeader=true
2. Add member 3
[root@prod-k8s-01 ~]# etcdctl member add infra3 http://10.94.19.181:2380
Added member named infra3 with ID 58805300c0ea60c2 to cluster
ETCD_NAME="infra3"
ETCD_INITIAL_CLUSTER="infra1=http://10.94.19.179:2380,infra2=http://10.94.19.180:2380,infra3=http://10.94.19.181:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Preview of the etcd configuration on new-03
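Again hypothetical, mirroring the new-02 sketch but with infra3's name and URLs and the three-member initial cluster from the output above:
# /etc/etcd/etcd.conf on new-03 (hypothetical)
ETCD_NAME="infra3"
ETCD_DATA_DIR="/var/lib/etcd/infra3.etcd"
ETCD_LISTEN_PEER_URLS="http://10.94.19.181:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.94.19.181:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.94.19.181:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.94.19.181:2379"
ETCD_INITIAL_CLUSTER="infra1=http://10.94.19.179:2380,infra2=http://10.94.19.180:2380,infra3=http://10.94.19.181:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"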
Start node 03; its key configuration items must be set to the values output above.
Check the member list:
[root@prod-k8s-01 ~]# etcdctl member list
76926a56d901: name=infra1 peerURLs=http://10.94.19.179:2380 clientURLs=http://10.94.19.179:2379 isLeader=false
ee0b43bac93847e: name=infra2 peerURLs=http://10.94.19.180:2380 clientURLs=http://10.94.19.180:2379 isLeader=true
58805300c0ea60c2: name=infra3 peerURLs=http://10.94.19.181:2380 clientURLs=http://10.94.19.181:2379 isLeader=false
Adjust the new-01 and new-02 configurations
Modify etcd.conf on new-01 and new-02, changing the value of the ETCD_INITIAL_CLUSTER parameter to match new-03's ETCD_INITIAL_CLUSTER, then restart the etcd service on new-01 and new-02.
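Concretely, both nodes end up with the same three-member line in etcd.conf, followed by a restart:
ETCD_INITIAL_CLUSTER="infra1=http://10.94.19.179:2380,infra2=http://10.94.19.180:2380,infra3=http://10.94.19.181:2380"
systemctl restart etcd   # run on new-01 and then new-02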
Check the health of the new cluster:
[root@prod-k8s-01 ~]# etcdctl cluster-health
member 76926a56d901 is healthy: got healthy result from http://10.94.19.179:2379
member ee0b43bac93847e is healthy: got healthy result from http://10.94.19.180:2379
member 58805300c0ea60c2 is healthy: got healthy result from http://10.94.19.181:2379
cluster is healthy
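cluster-health above exercises the v2 API. Since v3 data was migrated as well, a v3-side check is worth running too; the endpoints below are the clientURLs from the member list:
ETCDCTL_API=3 etcdctl --endpoints=http://10.94.19.179:2379,http://10.94.19.180:2379,http://10.94.19.181:2379 endpoint health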