Elasticsearch 5.0.1: the disaster caused by accidentally deleting the .kibana index from a cluster
1. How the problem happened:
In the morning one of the reports showed no data, so I reloaded that report several times; that presumably overloaded the cluster, and it started returning the error: Elasticsearch is still initializing the kibana index
Slightly panicked and clearly not thinking straight, I followed the advice in some Baidu blog post and ran:
curl -XDELETE http://localhost:9200/.kibana
And that is when the real disaster happened: the Kibana console was empty, and the business teams could no longer view their reports.
Later I found an explanation on http://stackoverflow.com/ confirming that this wipes out all of the Kibana settings, index patterns, graphs and dashboards:
the curl -XDELETE http://localhost:9200/.kibana command works fine, however you lost all your kibana' settings (indexes, graphs, dashboards)
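Looking back, the right first move would have been to diagnose rather than delete: "Elasticsearch is still initializing the kibana index" usually just means the .kibana shards have not finished recovering. A quick look at the index, its shard recovery and the overall cluster health (a sketch, run against any node of the cluster) would have shown whether it was simply a matter of waiting:
# curl 'http://localhost:9200/_cat/indices/.kibana?v'
# curl 'http://localhost:9200/_cat/recovery/.kibana?v'
# curl 'http://localhost:9200/_cluster/health?pretty'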
2. Recovery plan
The cluster had 5 nodes on Alibaba Cloud. Two of them had previously been moved to bigger disks because they were running out of space, and a disk image of each node had been saved before that migration. On the off chance that an image still held a usable copy, I bought a new server and attached one of those saved images,
then dug out the operation records from before the incident:
They showed that the .kibana index had the UUID E9kS4THKREKR36IuPICPIA and was about 179 KB, as below:
# curl '10.26.241.237:9200/_cat/indices?v'
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana             E9kS4THKREKR36IuPICPIA   1   1         41            5    179.3kb         89.6kb
green  open   voice:user:login    VMpfcIiFS9OixB-J7-ZLWw   5   1  695465821            0    323.4gb        162.3gb
green  open   voice:user:logout   W8MAAbp7RO6ZYYx7FcfcVA   5   1  686515590            0      279gb        138.3gb
green  open   push:task:result    qWs38E_eQbCicgB312PE8w   5   1   91340303            0     22.1gb         10.9gb
yellow open   push:user:req       wqJi6jTFT-a0ZN-57Z63Yw   5   1 8476340396            0      5.4tb          3.4tb
green  open   voice:mic:used      pdZZr8mdSwirBGSAYTvIwg   5   1    8925004            0      3.5gb          1.7gb
green  open   user:register       LZ_DwUpDRAyc0gfsFlCdGA   5   1   64626999            0     22.4gb         11.1gb
green  open   push:user:app       0ivr0VubTCG5mFM0yoq34w   5   1  138403810            0    305.4gb        152.2gb
green  open   voice:send:text     rrS8Kd4nRlim7wEOMqJ1wA   5   1      27960            0     21.6mb         10.8mb
green  open   voice:mic:lose      iUhw676hTTSsyJnv3K0s6Q   5   1    2847952            0      1.7gb          902mb
green  open   user:login          C-qSmB0ST2CrMkb7snrbKQ   5   1 2623263965            0    606.4gb        304.1gb
green  open   push:user:task      7XPBJeBWRbas5t1XwiKt2Q   5   1   53518658            0       21gb         10.5gb
green  open   speech:voice:result 7wISjRCeQZSY4SsCNunh8w   5   1  650234559            0      578gb        289.6gb
green  open   script:user:info    qTZEpjkmRiSyyL1WBpMS5g   5   1          0            0      1.4kb           766b
green  open   speech:voice:upload VBrFZq8QScOFYN1jsCkN4A   5   1 4469453923            0      1.9tb        973.5gb
The mounted image did contain a .kibana index, but looking under the Elasticsearch data directory /data/es/data/nodes/0/indices it was less than 30 KB, clearly not the data we needed.
I returned that server, bought another one, and attached the image saved from the other node. This time the index directory was about 212 KB, roughly its size before the incident, so this was most likely the copy we needed; there was some luck involved, since across the 5 servers each index had two copies of its data. Copying these files straight into the target server's data directory did not work, though (Elasticsearch simply deleted them).
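For reference, the size check on a mounted image is just a matter of looking at the index's UUID directory under the data path; the command below assumes the image is mounted so that the data directory sits at the same /data/es/data location:
# du -sh /data/es/data/nodes/0/indices/E9kS4THKREKR36IuPICPIA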
Searching online I then found the elasticdump tool, which can export an index from one cluster and import it into another.
Source server: 10.30.138.62
Target server: 10.26.241.237
3. Install the elasticdump tool
yum install epel-release -y
yum install nodejs npm -y
npm install elasticdump
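These commands install elasticdump into ./node_modules in the current directory, which is why the commands below call it through the relative path node_modules/elasticdump/bin/elasticdump. If you prefer having the tool on the PATH, a global install also works:
npm install -g elasticdump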
4. Export the Kibana configuration data
# node_modules/elasticdump/bin/elasticdump --ignore-errors=true --scrollTime=120m --bulk=true --input=http://10.30.138.62:9200/.kibana --output=data.json --type=data
Fri, 10 Mar 2017 08:17:45 GMT | starting dump
Fri, 10 Mar 2017 08:17:45 GMT | got 40 objects from source elasticsearch (offset: 0)
Fri, 10 Mar 2017 08:17:45 GMT | sent 40 objects to destination file, wrote 40
Fri, 10 Mar 2017 08:17:45 GMT | got 0 objects from source elasticsearch (offset: 40)
Fri, 10 Mar 2017 08:17:45 GMT | Total Writes: 40
Fri, 10 Mar 2017 08:17:45 GMT | dump complete
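elasticdump writes one JSON document per line, so a quick sanity check on the export is that the file has about as many lines as the Total Writes figure above (40 here) and that the first line looks like a Kibana object:
# wc -l data.json
# head -1 data.json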
Export the mapping:
# /root/node_modules/elasticdump/bin/elasticdump --ignore-errors=true --scrollTime=120m --bulk=true --input=http://10.30.138.62:9200/.kibana --output=mapping.json --type=mapping
Fri, 10 Mar 2017 08:19:11 GMT | starting dump
Fri, 10 Mar 2017 08:19:11 GMT | got 1 objects from source elasticsearch (offset: 0)
Fri, 10 Mar 2017 08:19:11 GMT | sent 1 objects to destination file, wrote 1
Fri, 10 Mar 2017 08:19:11 GMT | got 0 objects from source elasticsearch (offset: 1)
Fri, 10 Mar 2017 08:19:11 GMT | Total Writes: 1
Fri, 10 Mar 2017 08:19:11 GMT | dump complete
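The mapping export is a single JSON object. Before importing it, it is worth pretty-printing it to confirm it really contains the .kibana document types (in Kibana 5.x these include config, index-pattern, visualization, dashboard and search):
# python -m json.tool mapping.json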
5. Restore to the target server
These commands were run on the same server where elasticdump was already installed, which saved having to install the tool on the target server as well.
Import the mapping:
# node_modules/elasticdump/bin/elasticdump --input=mapping.json --output=http://10.26.241.237:9200/.kibana --type=mapping
Fri, 10 Mar 2017 08:23:08 GMT | starting dump
Fri, 10 Mar 2017 08:23:08 GMT | got 1 objects from source file (offset: 0)
Fri, 10 Mar 2017 08:23:08 GMT | sent 1 objects to destination elasticsearch, wrote 7
Fri, 10 Mar 2017 08:23:08 GMT | got 0 objects from source file (offset: 1)
Fri, 10 Mar 2017 08:23:08 GMT | Total Writes: 7
Fri, 10 Mar 2017 08:23:08 GMT | dump complete
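At this point the target cluster should again have a (still empty) .kibana index, which can be confirmed with:
# curl '10.26.241.237:9200/_cat/indices/.kibana?v'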
Import the Kibana configuration data:
# node_modules/elasticdump/bin/elasticdump --input=data.json --output=http://10.26.241.237:9200/.kibana --type=data
Fri, 10 Mar 2017 08:23:25 GMT | starting dump
Fri, 10 Mar 2017 08:23:25 GMT | got 40 objects from source file (offset: 0)
Fri, 10 Mar 2017 08:23:25 GMT | sent 40 objects to destination elasticsearch, wrote 40
Fri, 10 Mar 2017 08:23:25 GMT | got 0 objects from source file (offset: 40)
Fri, 10 Mar 2017 08:23:25 GMT | Total Writes: 40
Fri, 10 Mar 2017 08:23:25 GMT | dump complete
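As a final check, refresh the index and make sure the document count on the target matches the number of objects written above:
# curl -XPOST '10.26.241.237:9200/.kibana/_refresh'
# curl '10.26.241.237:9200/.kibana/_count?pretty'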
Summary: elasticdump is a convenient way to back up the Kibana configuration, or to migrate it to another cluster (see the sketch at the end of this post).
Before you run anything, test it first or at least understand exactly what it will do; don't act rashly, especially in production or when you are panicking.
Keeping a record of what you do is a good habit.
Alibaba Cloud's disk image feature is quite good; set up an image policy, for example one snapshot per day, or one every two days if the data volume is large, just in case.
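To make the backup suggestion concrete, here is a minimal sketch of a nightly .kibana export with elasticdump; the script path, backup directory and schedule are assumptions to adapt to your own environment:
#!/bin/bash
# backup_kibana.sh - nightly export of the .kibana index with elasticdump (hypothetical paths)
DUMP=/root/node_modules/elasticdump/bin/elasticdump
ES=http://10.26.241.237:9200/.kibana
OUT=/backup/kibana-$(date +%F)
$DUMP --input=$ES --output=${OUT}-mapping.json --type=mapping
$DUMP --input=$ES --output=${OUT}-data.json --type=data
Schedule it from cron, for example: 0 1 * * * /root/backup_kibana.sh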