Adding a second node to a single-node Elasticsearch to form a master/slave cluster

Environment:
OS: CentOS 7
Elasticsearch: 6.5.0

192.168.1.135:19200  existing single node
192.168.1.134:19200  new node to be added

--------Install Elasticsearch on the new node (192.168.1.134)---------------
1. Install the same Elasticsearch version as the existing node
The version used here is 6.5.0.
[root@localhost hxlmiao]# tar -xvf elasticsearch-6.5.0.tar.gz
[root@localhost hxlmiao]# mv elasticsearch-6.5.0 single_elasticsearch
[root@localhost hxlmiao]# chown -R hxlmiao:hxlmiao ./single_elasticsearch
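Before installing, it is worth confirming the exact version running on the existing node so the new node matches it; the root endpoint of the HTTP API reports the version:

# "version.number" in the response should read 6.5.0
curl -X GET 'http://192.168.1.135:19200'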

2. Create the data directory
su - hxlmiao
[hxlmiao@localhost single_elasticsearch]$ cd /home/hxlmiao/single_elasticsearch
[hxlmiao@localhost single_elasticsearch]$ mkdir data

3. Modify the configuration file
vi /home/hxlmiao/single_elasticsearch/config/elasticsearch.yml
Make sure the following settings are adjusted:
cluster.name: escluster ## must be identical on both nodes
node.name: node-slave ## must differ from the node name used on the master node
path.data: /home/hxlmiao/single_elasticsearch/data
path.logs: /home/hxlmiao/single_elasticsearch/logs
network.host: 192.168.1.134
http.port: 19200
discovery.zen.ping.unicast.hosts: ["192.168.1.134:9300", "192.168.1.135:9300"] ## must list port 9300 here, the internal transport port, not the HTTP port
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-credentials: true
http.cors.allow-origin: "/.*/"
http.cors.allow-headers: WWW-Authenticate,X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
xpack.security.enabled: false
transport.tcp.port: 9300 ## port used for node-to-node (transport) communication
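Note: once network.host is bound to a non-loopback address, Elasticsearch 6.x enforces its bootstrap checks, so the usual OS limits need to be in place before the node will start. A minimal sketch of the typical prerequisites (run as root; if the existing node already runs on a similarly prepared host, these may already be configured there):

# mmap count required by the bootstrap check
sysctl -w vm.max_map_count=262144
# raise the open-file limit for the hxlmiao user, e.g. in /etc/security/limits.conf:
#   hxlmiao soft nofile 65536
#   hxlmiao hard nofile 65536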

4. Start the node
[root@localhost opt]# su - hxlmiao
[hxlmiao@localhost bin]$ cd /home/hxlmiao/single_elasticsearch/bin
[hxlmiao@localhost bin]$ ./elasticsearch -d
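Once it is up, a quick health check confirms the new node is responding (at this point it is still a standalone, empty cluster):

curl -X GET 'http://192.168.1.134:19200/_cluster/health?pretty'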

5. At this point the existing (master) node's configuration has not been changed and it has not been restarted; check whether any data has been synchronized over:
[hxlmiao@localhost logs]$ curl -X GET 'http://192.168.1.134:19200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

You can see that no data has been synchronized over yet.
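This is expected: the existing node has not been reconfigured or restarted yet, so the new node cannot discover it and simply forms its own empty cluster. Listing the nodes it knows about should show only itself:

curl -X GET 'http://192.168.1.134:19200/_cat/nodes?v'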


---------------------Modify the configuration on the original node (192.168.1.135)-----------------
1. Check the existing data on the original node
[hxlmiao@localhost config]$ curl -X GET 'http://192.168.1.135:19200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open inoculate_new tWt0-g8wQreRCu-dp0Iuow 5 1 18280 0 4mb 2mb
green open reservation_new Shg_bn80TFucvknXxikmZw 5 1 5607 0 1.4mb 734.8kb


2. Modify the configuration file
vi /home/hxlmiao/single_elasticsearch/config/elasticsearch.yml
Make sure the following settings are adjusted:
cluster.name: escluster ## must be identical on both nodes
node.name: node-master ## must differ from the node name used on the new node
path.data: /home/hxlmiao/single_elasticsearch/data
path.logs: /home/hxlmiao/single_elasticsearch/logs
network.host: 192.168.1.135
http.port: 19200
discovery.zen.ping.unicast.hosts: ["192.168.1.134:9300", "192.168.1.135:9300"] ## must list port 9300 here, the internal transport port, not the HTTP port
discovery.zen.minimum_master_nodes: 1
http.cors.enabled: true
http.cors.allow-credentials: true
http.cors.allow-origin: "/.*/"
http.cors.allow-headers: WWW-Authenticate,X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
xpack.security.enabled: false
transport.tcp.port: 9300 ## port used for node-to-node (transport) communication
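Before restarting, it can be prudent to back up the existing node's data directory (the path.data location above), since the discovery settings of a node holding live data are about to change. This is a precaution, not a step in the original procedure:

cp -a /home/hxlmiao/single_elasticsearch/data /home/hxlmiao/single_elasticsearch/data.bak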

3. Restart the node
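Since the original node is already running, the existing process has to be stopped before starting it again with the new configuration. Assuming it was started with -d and no pid file, one way is to locate the process and stop it (the PID below is a placeholder for whatever the first command prints):

ps -ef | grep -i elasticsearch | grep -v grep
kill <PID>    # replace <PID> with the process id found above

Then start it again: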
[root@localhost opt]# su - hxlmiao
[hxlmiao@localhost bin]$ cd /home/hxlmiao/single_elasticsearch/bin
[hxlmiao@localhost bin]$ ./elasticsearch -d

4. Verify on the new node that the data has been synchronized over
[hxlmiao@localhost logs]$ curl -X GET 'http://192.168.1.134:19200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open inoculate_new tWt0-g8wQreRCu-dp0Iuow 5 1 18280 0 4mb 2mb
green open reservation_new Shg_bn80TFucvknXxikmZw 5 1 5607 0 1.4mb 734.8kb

You can see that the data has now been synchronized over.
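As a final check, both nodes should now appear as members of the cluster (the elected master is marked with *), and the overall cluster health can be confirmed as well:

curl -X GET 'http://192.168.1.135:19200/_cat/nodes?v'
curl -X GET 'http://192.168.1.135:19200/_cluster/health?pretty'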
