
Overview

 

An etcd cluster needs periodic maintenance to remain reliable. Depending on the needs of the etcd application, this maintenance can usually be automated and performed without downtime or significant performance degradation.

All etcd maintenance is about managing the storage resources consumed by the etcd keyspace. The keyspace size is controlled through a storage quota; if an etcd member runs low on space, a cluster-wide alarm is raised, putting the system into a limited-operation maintenance mode. To avoid running out of space for writes to the keyspace, the etcd keyspace history must be compacted. The storage space itself can be reclaimed by defragmenting etcd members. Finally, periodic snapshot backups of etcd member state make it possible to recover from any unintended logical data loss or corruption caused by operational error.

 

History compaction

 

Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys that were superseded prior to a given keyspace revision. The space used by those keys then becomes available for further writes to the keyspace.

The keyspace can be compacted automatically with etcd's time-windowed history retention policy, or manually with etcdctl. The etcdctl approach gives fine-grained control over the compaction process, whereas automatic compaction suits applications that only need key history for a certain length of time. (Parameters like this are best settled before deployment; changing them afterwards is no different from restarting the service.)

etcd can be set to automatically compact the keyspace with the --auto-compaction-retention option, given as a period in hours:

 

# keep one hour of history
$ etcd --auto-compaction-retention=1
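Alternatively, retention can be expressed as a number of revisions instead of a time window. A minimal sketch, assuming etcd v3.3 or later where --auto-compaction-mode is available:

# keep only the most recent 1000 revisions (assumes etcd v3.3+)
$ etcd --auto-compaction-mode=revision --auto-compaction-retention=1000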

 

A compaction is initiated with etcdctl as follows:

 

One small detail: only a revision that currently exists can be compacted. The current revision can be read with the endpoint status command below; after that, run the compaction with the command that follows.

 

 

[root@master ~]# etcdctl ${ep} endpoint status -w fields
"ClusterID" : 2943589120715358745
"MemberID" : 5519034610221305200
"Revision" : 4
"RaftTerm" : 281
"Version" : "3.4.16"
"DBSize" : 20480
"Leader" : 6237433037948641366
"IsLearner" : false
"RaftIndex" : 136
"RaftTerm" : 281
"RaftAppliedIndex" : 136
"Errors" : []
"Endpoint" : "https://172.21.130.169:2379"

"ClusterID" : 2943589120715358745
"MemberID" : 14703050348134501601
"Revision" : 4
"RaftTerm" : 281
"Version" : "3.4.16"
"DBSize" : 20480
"Leader" : 6237433037948641366
"IsLearner" : false
"RaftIndex" : 136
"RaftTerm" : 281
"RaftAppliedIndex" : 136
"Errors" : []
"Endpoint" : "https://172.21.130.168:2379"

"ClusterID" : 2943589120715358745
"MemberID" : 6237433037948641366
"Revision" : 4
"RaftTerm" : 281
"Version" : "3.4.16"
"DBSize" : 20480
"Leader" : 6237433037948641366
"IsLearner" : false
"RaftIndex" : 136
"RaftTerm" : 281
"RaftAppliedIndex" : 136
"Errors" : []
"Endpoint" : "https://172.28.17.85:2379"

 

 

Run the compaction

 

 

[root@master ~]# etcdctl ${ep} compact 4
compacted revision 4
[root@master ~]# 

 

# compact up to revision 3
$ etcdctl compact 3
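To avoid copying the revision by hand, the two steps can be combined. The following is only a sketch, assuming jq is installed, that ${ep} holds the endpoint and TLS flags used throughout this post, and etcd v3.4's JSON field names:

# read the current revision from one endpoint, then compact up to it (assumes jq)
$ rev=$(etcdctl ${ep} endpoint status -w json | jq -r '.[0].Status.header.revision')
$ etcdctl ${ep} compact "$rev"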

 

Revisions prior to the compaction revision become inaccessible:

 

 

[root@master ~]# etcdctl ${ep} get --rev=3 hello
{"level":"warn","ts":"2021-05-22T12:05:00.501+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-b621f55a-2041-4c6f-8d4f-eb291ee5fdbe/172.21.130.169:2379","attempt":0,"error":"rpc error: code = OutOfRange desc = etcdserver: mvcc: required revision has been compacted"}
Error: etcdserver: mvcc: required revision has been compacted
[root@master ~]# etcdctl ${ep} get --rev=4 hello
hello
world
[root@master ~]# etcdctl ${ep} get --rev=4 test
test
123
[root@master ~]# 

 

 

$ etcdctl get --rev=2 somekey
Error: rpc error: code = 11 desc = etcdserver: mvcc: required revision has been compacted

 

Defragmentation

After compacting the keyspace, the backend database may exhibit internal fragmentation. Internal fragmentation is space that the backend can reuse but that still consumes storage; the defragmentation process releases this storage space back to the file system. Defragmentation is issued on a per-member basis.

Compacting old revisions internally fragments etcd by leaving gaps in the backend database. The fragmented space is usable by etcd, but unavailable to the host file system.

To defragment an etcd member, use the etcdctl defrag command:

 

 

[root@master ~]# etcdctl ${ep} defrag
Finished defragmenting etcd member[https://172.21.130.169:2379]
Finished defragmenting etcd member[https://172.21.130.168:2379]
Finished defragmenting etcd member[https://172.28.17.85:2379]

 

 

$ etcdctl defrag
Finished defragmenting etcd member[127.0.0.1:2379]
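Because defragmentation rewrites the member's database and blocks reads and writes on that member while it runs, it is often safer to defragment one endpoint at a time rather than all members in a single command. A minimal sketch; substitute your own endpoint list and add the same TLS flags carried in ${ep} if your endpoints use https:

# defragment members one by one to limit the impact on the cluster
$ for e in https://172.21.130.169:2379 https://172.21.130.168:2379 https://172.28.17.85:2379; do etcdctl --endpoints="$e" defrag; done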

 

Storage quota (i.e. the storage size limit)

In etcd, the storage quota ensures the cluster operates in a reliable fashion. Without a storage quota, etcd may suffer from poor performance if the keyspace grows excessively large, or it may simply run out of storage space, leading to unpredictable cluster behavior. If the backend database of any member exceeds the storage quota, etcd raises a cluster-wide alarm that puts the cluster into a maintenance mode which only accepts key reads and deletes. Only after enough space has been freed in the keyspace can the alarm be disarmed and the cluster resume normal operation.

By default, etcd sets a conservative storage quota (2 GB) suitable for most applications, but it may be configured on the command line, in bytes:

 

# set a very small 16 MB quota
$ etcd --quota-backend-bytes=16777216
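To see how close each member is to the quota, compare the reported database size against the configured limit. A sketch, assuming jq is installed, ${ep} holds the usual endpoint and TLS flags, and etcd v3.4's JSON field names:

# print each member's backend database size in bytes (assumes jq)
$ etcdctl ${ep} endpoint status -w json | jq -r '.[] | "\(.Endpoint): \(.Status.dbSize) bytes"'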

 

The storage quota can be triggered with a loop:

 

This simulation needs to be adapted and tested in your own environment:

The size of a single write is limited by the --max-request-bytes parameter (the maximum request size; etcd defaults it to 1.5 MB). I raised it to 30 MB myself, but testing showed the RPC layer still has a send limit of its own: the etcdctl gRPC client caps outgoing messages at 2 MB by default, which is the 2097152-byte limit seen in the error further below.
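For reference, the server-side limit is set like any other flag. A sketch only; note that raising it far beyond the recommended 10 MB triggers the warning shown in the log below:

# example: raise the server-side request limit to 10 MB
$ etcd --max-request-bytes=10485760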

 

 

May 22 13:24:28 iZuf6h1kfgutxc3el68z2lZ etcd: {"level":"warn","ts":"2021-05-22T13:24:28.090+0800","caller":"etcdserver/server.go:297","msg":"exceeded recommended request limit","max-request-bytes":31457280,"max-request-size":"32 MB","recommended-request-bytes":10485760,"recommended-request-size":"10 MB"}

 

 

[root@master ~]# while [ 1 ]; do dd if=/dev/urandom bs=1024 count=2048 | etcdctl ${ep} put key || break; done
2048+0 records in
2048+0 records out
2097152 bytes (2.1 MB) copied, 0.0282497 s, 74.2 MB/s
{"level":"warn","ts":"2021-05-22T13:12:25.154+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-b93503c1-f1fe-4df3-a241-cf5db1baba89/172.21.130.169:2379","attempt":0,"error":"rpc error: code = ResourceExhausted desc = trying to send message larger than max (2097162 vs. 2097152)"}
Error: rpc error: code = ResourceExhausted desc = trying to send message larger than max (2097162 vs. 2097152)

 

 

Start pumping in data to consume the storage space

 

 

[root@master ~]# while [ 1 ]; do dd if=/dev/urandom bs=1024 count=2047 | etcdctl ${ep} put key || break; done
2047+0 records in
2047+0 records out
2096128 bytes (2.1 MB) copied, 0.0298826 s, 70.1 MB/s
OK
.............
2047+0 records in
2047+0 records out
2096128 bytes (2.1 MB) copied, 0.0299517 s, 70.0 MB/s
{"level":"warn","ts":"2021-05-22T13:17:50.684+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-b5b8ace3-6988-4da8-964f-56a13b6b26a6/172.21.130.169:2379","attempt":0,"error":"rpc error: code = ResourceExhausted desc = etcdserver: mvcc: database space exceeded"}
Error: etcdserver: mvcc: database space exceeded

 

 

The database is full

 

Test writing a new key

 

 

[root@master ~]# etcdctl ${ep} put king QQ
{"level":"warn","ts":"2021-05-22T13:20:02.236+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-1f99cfab-08fe-4670-b761-c207cb77882d/172.21.130.169:2379","attempt":0,"error":"rpc error: code = ResourceExhausted desc = etcdserver: mvcc: database space exceeded"}
Error: etcdserver: mvcc: database space exceeded

 

 

The write fails; check the status and confirm the database space limit has been exceeded

 

 

[root@master2 ~]# etcdctl ${ep} endpoint status -w table
{"level":"warn","ts":"2021-05-22T13:26:27.482+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.28.17.85:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Failed to get the status of endpoint https://172.28.17.85:2379 (context deadline exceeded)
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX |             ERRORS             |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
| https://172.21.130.169:2379 | 4c978cbca553cd70 |  3.4.16 |  1.1 GB |      true |      false |       377 |       2516 |               2516 |   memberID:5519034610221305200 |
|                             |                  |         |         |           |            |           |            |                    |                 alarm:NOSPACE  |
| https://172.21.130.168:2379 | cc0bba643b3d8ce1 |  3.4.16 |  1.1 GB |     false |      false |       377 |       2516 |               2516 |   memberID:5519034610221305200 |
|                             |                  |         |         |           |            |           |            |                    |                 alarm:NOSPACE  |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+

[root@master2 ~]# etcdctl ${ep} endpoint status -w fields
{"level":"warn","ts":"2021-05-22T13:28:12.434+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.28.17.85:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Failed to get the status of endpoint https://172.28.17.85:2379 (context deadline exceeded)
"ClusterID" : 2943589120715358745
"MemberID" : 5519034610221305200
"Revision" : 2349
"RaftTerm" : 377
"Version" : "3.4.16"
"DBSize" : 1078022144
"Leader" : 5519034610221305200
"IsLearner" : false
"RaftIndex" : 2517
"RaftTerm" : 377
"RaftAppliedIndex" : 2517
"Errors" : [memberID:5519034610221305200 alarm:NOSPACE ]
"Endpoint" : "https://172.21.130.169:2379"

"ClusterID" : 2943589120715358745
"MemberID" : 14703050348134501601
"Revision" : 2349
"RaftTerm" : 377
"Version" : "3.4.16"
"DBSize" : 1077481472
"Leader" : 5519034610221305200
"IsLearner" : false
"RaftIndex" : 2517
"RaftTerm" : 377
"RaftAppliedIndex" : 2517
"Errors" : [memberID:5519034610221305200 alarm:NOSPACE ]
"Endpoint" : "https://172.21.130.168:2379"

[root@master2 ~]# etcdctl ${ep} alarm list
memberID:5519034610221305200 alarm:NOSPACE 
[root@master2 ~]# 

 

 

That went a bit too far and accidentally killed one member.

 

Now start the repair. Removing excess keyspace data brings the cluster back within the quota, so the alarm can be disarmed:

 

1. Get the current revision. It is already shown in the output above, so skip the query and go straight to compaction:

 

 

[root@master ~]# etcdctl ${ep} compact 2349
compacted revision 2349
[root@master ~]# 

 

 

2. Defragment the excess space. After compaction there is space that etcd can still use and that still occupies storage, yet the file system cannot use it; defragmentation releases that space so the file system can use it again. Run it on every member, because the data is replicated through the log, so every member has the same problem. (The member that got killed just needs its service restarted later; it is not dead for good.)

 

 

[root@master ~]# etcdctl ${ep} defrag
Finished defragmenting etcd member[https://172.21.130.169:2379]
Finished defragmenting etcd member[https://172.21.130.168:2379]
{"level":"warn","ts":"2021-05-22T13:36:37.102+0800","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.28.17.85:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Failed to defragment etcd member[https://172.28.17.85:2379] (context deadline exceeded)

 

 

3. Disarm the alarm manually; otherwise it will not clear on its own:

 

 

[root@master ~]# etcdctl ${ep} alarm list
memberID:5519034610221305200 alarm:NOSPACE 
[root@master ~]# etcdctl ${ep} alarm disarm
memberID:5519034610221305200 alarm:NOSPACE 
[root@master ~]# etcdctl ${ep} alarm list
[root@master ~]# etcdctl ${ep} put king QQ
OK
[root@master ~]#

 

Handling the member that died (this situation only arises from special problems)

 

Can you just leave the cluster state as new and delete the data directory? No. The member ID computed from the flags together with --initial-cluster is still the same value, so the member will use it to establish connections; and even though the service on that machine is down, the member state stored by the cluster still records it, so the cluster considers it alive. This approach therefore does not work either. (That covers the case where the cluster is not completely dead. If it is completely dead, use the method below and restart the services to recover. As long as the leader is still alive, the other members can simply wipe their data and restart the service; if everything is dead it is even simpler: take a backup and start every node's service from that data.)

 

 

[root@master ~]# etcdctl ${ep}  member list
4c978cbca553cd70, started, etcd-1, https://172.21.130.169:2380, https://172.21.130.169:2379, false
568fd04cf936e056, started, etcd-3, https://172.28.17.85:2380, https://172.28.17.85:2379, false
cc0bba643b3d8ce1, started, etcd-2, https://172.21.130.168:2380, https://172.21.130.168:2379, false


[root@master1 ~]# cat /var/log/messages | grep "discovery failed"
May 22 14:02:24 iZuf6h1kfgutxc3el68z2lZ etcd: {"level":"fatal","ts":"2021-05-22T14:02:24.860+0800","caller":"etcdmain/etcd.go:271","msg":"discovery failed","error":"member 568fd04cf936e056 has already been bootstrapped","stacktrace":"go.etcd.io/etcd/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.4.16/etcd/release/etcd/etcdmain/etcd.go:271\ngo.etcd.io/etcd/etcdmain.Main\n\t/tmp/etcd-release-3.4.16/etcd/release/etcd/etcdmain/main.go:46\nmain.main\n\t/tmp/etcd-release-3.4.16/etcd/release/etcd/main.go:28\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}

 

 

 

How to fix it:

 

Back up the data from any one member and rebuild the cluster around it; either new or existing works, and both were tested to recover. (It is best to restart the services at the same time.) Keep a close eye on the logs.

 

If a member fails to start, don't panic: when the log contains the typical message below, just change the cluster state to existing and restart the service (the larger the data set, the longer recovery after restart will take):

 

 

May 22 19:45:19 master1 etcd[32368]: {"level":"fatal","ts":"2021-05-22T19:45:19.783+0800","caller":"etcdmain/etcd.go:271","msg":"discovery failed","error":"member 568fd04cf936e056 has already been bootstrapped","stacktrace":"go.etcd.io/etcd/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.4.16/etcd/release/etcd/etcdmain/etcd.go:271\ngo.etcd.io/etcd/etcdmain.Main\n\t/tmp/etcd-release-3.4.16/etcd/release/etcd/etcdmain/main.go:46\nmain.main\n\t/tmp/etcd-release-3.4.16/etcd/release/etcd/main.go:28\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
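A minimal sketch of that change, assuming a systemd-managed member configured through /etc/etcd/etcd.conf (both the path and the unit name are assumptions about this deployment; everything else in the member's configuration stays as it was):

# switch the member from bootstrapping a new cluster to joining the existing one, then restart
$ sed -i 's/^ETCD_INITIAL_CLUSTER_STATE="new"/ETCD_INITIAL_CLUSTER_STATE="existing"/' /etc/etcd/etcd.conf
$ systemctl restart etcd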

 

 

 

 

 

Snapshot backup

Snapshotting the etcd cluster on a regular basis serves as a durable backup of the etcd keyspace. By taking periodic snapshots of an etcd member's backend database, the cluster can be recovered to a point in time with a known good state.

Take a snapshot with etcdctl:

 

$ etcdctl snapshot save backup.db
$ etcdctl --write-out=table snapshot status backup.db
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 |       10 |          7 |     2.1 MB |
+----------+----------+------------+------------+
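The backup is only useful if it can be restored. A minimal restore sketch for a single member (the data directory path is an assumption; restoring a multi-member cluster also needs the member and cluster flags matching your deployment):

# restore the snapshot into a fresh data directory, then point etcd at it
$ etcdctl snapshot restore backup.db --data-dir /var/lib/etcd-restored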