Elasticsearch 7 Study Notes: Quickly Standing Up an ES7 Test Cluster (X-Pack, Kibana)
0x00 Overview
A recent testing task required quickly standing up an ES cluster with the following requirements:

1. X-Pack security enabled
2. Kibana enabled, with the sample data sets imported
3. ES and Kibana at version 7.13.2
4. 3 nodes in total: 1 master-eligible node and 3 data nodes (the master node carries both the master and data roles)
0x01 Cluster Deployment
1.1 Create a dedicated es user and group and grant privileges (run on all 3 nodes)
Elasticsearch refuses to start as root, so create a dedicated es user and group:

```shell
groupadd es          # create the es group
useradd -g es es     # create the es user in the es group
```
Grant the es user sudo privileges:

```shell
vim /etc/sudoers
```

Add one line:

```
# below "root ALL=(ALL) ALL", add:
es   ALL=(ALL)   ALL
# save and quit with :wq!
```
1.2 Adjust system limits (run on all 3 nodes)
```shell
# raise the open-file and process limits for the es user
cat << EOF >> /etc/security/limits.conf
es soft nofile 65536
es hard nofile 65536
es soft nproc 4096
es hard nproc 4096
EOF

# allow the es user to lock memory (avoids "cannot allocate memory" errors)
cat << EOF >> /etc/security/limits.conf
es soft memlock unlimited
es hard memlock unlimited
EOF
```
There is also a per-user nproc override under /etc/security/limits.d that must be adjusted:

```shell
[root@localhost ~]# cd /etc/security/limits.d
[root@localhost limits.d]# ll
total 4
-rw-r--r--. 1 root root 191 Nov  6  2016 20-nproc.conf
[root@localhost limits.d]# vi 20-nproc.conf
```

The file contains:

```
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     4096
root       soft    nproc     unlimited
```
Change the `*` above to the es user name:

```
# See rhbz #432903 for reasoning.
es         soft    nproc     4096
root       soft    nproc     unlimited
```
```shell
# raise vm.max_map_count and persist it
sysctl -w vm.max_map_count=655360
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf
sysctl -p
```

Set the JVM heap to roughly 50% of the machine's memory (the test nodes have 2 GB):

```shell
vim elasticsearch-7.13.2/config/jvm.options
```

```
-Xms1g
-Xmx1g
```
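The 50% rule can also be computed rather than hard-coded. A minimal sketch, assuming Linux (it reads /proc/meminfo); the ~31 GB cap is a common rule of thumb that keeps compressed object pointers enabled:

```shell
# Compute half of physical RAM in MB, capped at 31744 MB (~31 GB).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
heap_mb=$(( mem_kb / 1024 / 2 ))
if [ "$heap_mb" -gt 31744 ]; then heap_mb=31744; fi
# The two lines to place in jvm.options:
echo "-Xms${heap_mb}m"
echo "-Xmx${heap_mb}m"
```

On a 2 GB node this prints -Xms1024m / -Xmx1024m, matching the -Xms1g / -Xmx1g above.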
1.3 Prepare the installation package
```shell
# upload the downloaded tarball to node1's root directory
[root@node1 /]# cd /
# extract it
[root@node1 /]# tar -zvxf elasticsearch-7.13.2-linux-x86_64.tar.gz
# hand ownership to the es user
[root@node1 /]# chown -R es:es elasticsearch-7.13.2
# copy the extracted directory to node2 and node3
[root@node1 /]# scp -r elasticsearch-7.13.2 root@$node2:/
[root@node1 /]# scp -r elasticsearch-7.13.2 root@$node3:/
```
1.4 Edit the configuration files
```shell
# switch to the es user
[root@node1 /]# su - es
# edit the ES config file (note: the directory is config/, not conf/)
[es@node1 ~]$ vim /elasticsearch-7.13.2/config/elasticsearch.yml
```
Set node1's elasticsearch.yml to:
```yaml
cluster.name: elastic_test
node.name: test_node1
node.roles: [master, data]
path.data: /app/data   # create the directory first and chown -R it to the es user
path.logs: /app/logs   # create the directory first and chown -R it to the es user
bootstrap.memory_lock: false
network.host: 0.0.0.0  # bind beyond loopback so the other nodes can reach this one
http.port: 19200
discovery.seed_hosts: ["192.168.59.128","192.168.59.129","192.168.59.130"]
cluster.initial_master_nodes: ["test_node1"]   # must match the master-eligible node's node.name
# enable X-Pack security and TLS-encrypted transport between nodes
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```
Set node2's elasticsearch.yml to:
```yaml
cluster.name: elastic_test
node.name: test_node2
node.roles: [data]
path.data: /app/data   # create the directory first and chown -R it to the es user
path.logs: /app/logs   # create the directory first and chown -R it to the es user
bootstrap.memory_lock: false
network.host: 0.0.0.0  # bind beyond loopback so the other nodes can reach this one
http.port: 19200
discovery.seed_hosts: ["192.168.59.128","192.168.59.129","192.168.59.130"]
cluster.initial_master_nodes: ["test_node1"]   # must match the master-eligible node's node.name
# enable X-Pack security and TLS-encrypted transport between nodes
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```
Set node3's elasticsearch.yml to:
```yaml
cluster.name: elastic_test
node.name: test_node3
node.roles: [data]
path.data: /app/data   # create the directory first and chown -R it to the es user
path.logs: /app/logs   # create the directory first and chown -R it to the es user
bootstrap.memory_lock: false
network.host: 0.0.0.0  # bind beyond loopback so the other nodes can reach this one
http.port: 19200
discovery.seed_hosts: ["192.168.59.128","192.168.59.129","192.168.59.130"]
cluster.initial_master_nodes: ["test_node1"]   # must match the master-eligible node's node.name
# enable X-Pack security and TLS-encrypted transport between nodes
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```
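To summarize, the three files differ only in `node.name` and `node.roles`; every other line is identical:

```yaml
# node1 (master-eligible, also holds data)
node.name: test_node1
node.roles: [master, data]

# node2 (data only)
node.name: test_node2
node.roles: [data]

# node3 (data only)
node.name: test_node3
node.roles: [data]
```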
0x02 Generate the Certificates for Inter-Node TLS
Don't rush to start the cluster yet. First generate the certificates with the tool that ships with ES.
```shell
# run these two commands on any one node; press Enter at every prompt
# (leave the passwords empty)
/elasticsearch-7.13.2/bin/elasticsearch-certutil ca
/elasticsearch-7.13.2/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
```
After both commands complete, two new files appear under the elasticsearch-7.13.2 directory (in 7.13.2 the generated files land in the installation root):

```
elastic-certificates.p12
elastic-stack-ca.p12
```
Copy these two files to the following directory on each node:

```
/elasticsearch-7.13.2/config
```
0x03 Start the Cluster
Remember to switch to the es user before starting:

```shell
su - es
```
Start the cluster by running the command below on each node. For now, leave off the -d (daemonize) flag so the process stays in the foreground and any startup error is visible immediately:

```shell
/elasticsearch-7.13.2/bin/elasticsearch
```
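Even after the process starts, it can take a little while before a node answers on its HTTP port. A small generic poll helper can wait for that; a sketch, where the host, port, tries, and delay in the example are assumptions to adapt:

```shell
# Retry a command until it succeeds, up to <tries> attempts, <delay>s apart.
wait_for() {
  tries=$1; delay=$2; shift 2
  n=0
  until "$@" >/dev/null 2>&1; do
    n=$((n + 1))
    if [ "$n" -ge "$tries" ]; then
      return 1   # gave up
    fi
    sleep "$delay"
  done
}

# Example: block until node1's HTTP port responds. Use plain -s (not -f),
# since once security is on an unauthenticated request returns 401, which
# still proves the port is up:
# wait_for 30 2 curl -s http://192.168.59.128:19200
```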
0x04 Set the Cluster's Usernames and Passwords
Use the bundled tool to generate passwords for the built-in users, which control access to the cluster.
4.1 The auto mode generates random passwords
Run the following command on the master node (note the tool is named elasticsearch-setup-passwords, with a trailing s):
```shell
/elasticsearch-7.13.2/bin/elasticsearch-setup-passwords auto
```
Make sure to save the randomly generated passwords somewhere safe!
```shell
[es@node1 ~]$ /elasticsearch-7.13.2/bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y    # confirm random generation
Changed password for user apm_system
PASSWORD apm_system = g7JOAUBi4jJh7PAaXdAN
Changed password for user kibana                         # used by Kibana to connect to ES
PASSWORD kibana = 0B3EINVicVRbsnyJHk99
Changed password for user logstash_system                # used by Logstash to connect to ES
PASSWORD logstash_system = b34Aradp6gSqJMe3SbXK
Changed password for user beats_system                   # used by Beats to connect to ES
PASSWORD beats_system = EWjwNoDZILqCOCjCEjSc
Changed password for user remote_monitoring_user         # used for remote monitoring of ES
PASSWORD remote_monitoring_user = N92vKgJ4AHrSfhg5mFUK
Changed password for user elastic                        # used by applications calling the ES API
PASSWORD elastic = 26tBktGolYCyZD2pPISW
```
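Since the passwords scroll past only once, it helps to capture them. A minimal sketch that parses the `PASSWORD <user> = <value>` lines shown above into KEY=value pairs (the file paths in the comment are just examples):

```shell
# Demonstrated on one captured line; in a real run, pipe the tool through
# tee first, e.g.:
#   bin/elasticsearch-setup-passwords auto | tee /root/es-passwords.txt
#   awk '/^PASSWORD/ {print $2 "=" $4}' /root/es-passwords.txt > /root/es.env
sample='PASSWORD elastic = 26tBktGolYCyZD2pPISW'
printf '%s\n' "$sample" | awk '/^PASSWORD/ {print $2 "=" $4}'
# prints: elastic=26tBktGolYCyZD2pPISW
```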
4.2 The interactive mode lets you set passwords by hand
```shell
/elasticsearch-7.13.2/bin/elasticsearch-setup-passwords interactive
```
0x05 Verify the Cluster's Username and Password
```shell
[es@node1 ~]$ curl -u elastic:26tBktGolYCyZD2pPISW -XGET 'http://192.168.59.128:19200/_cat/nodes?v'
ip             heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.59.128           29          68   2    0.04    0.17     0.26 dilm      *      test_node1
192.168.59.129           22          68   1    0.04    0.17     0.26 dilm      -      test_node2
192.168.59.130           28          68   1    0.04    0.17     0.26 dilm      -      test_node3
```
0x06 Deploy Kibana
It is advisable to deploy Kibana on a separate, modest machine rather than co-locating it with an ES node, so Kibana and ES do not compete for resources and drag each other down.
6.1 Turn on cluster monitoring collection
Enter the following in Kibana Dev Tools:
```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
```
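The same setting can also be applied without the Kibana console by calling the cluster settings REST API directly. A sketch, reusing the elastic password and the host/port from this guide:

```shell
# JSON body identical to the Dev Tools request above:
payload='{"persistent":{"xpack.monitoring.collection.enabled":true}}'
echo "$payload"
# Send it once the cluster is reachable:
# curl -u elastic:26tBktGolYCyZD2pPISW -H 'Content-Type: application/json' \
#      -XPUT 'http://192.168.59.128:19200/_cluster/settings' -d "$payload"
```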
6.2 Edit the Kibana configuration file
```shell
[root@kibana ~]# vim /kibana-7.13.2/config/kibana.yml
```

```yaml
server.port: 15601
server.host: "192.168.59.131"
server.name: "kibana"
elasticsearch.hosts: "http://192.168.59.128:19200"   # point at an ES node, not the Kibana host itself
kibana.index: ".kibana"
logging.verbose: true
elasticsearch.username: "kibana_system"   # the Kibana user generated by setup-passwords above
elasticsearch.password: "0B3EINVicVRbsnyJHk99"
```
6.3 Start Kibana
```shell
[root@kibana ~]# nohup /kibana-7.13.2/bin/kibana --allow-root &
```
6.4 Import the sample data
Open the Kibana web UI in a browser:

```
http://192.168.59.131:15601
```
Follow the built-in guide to import the three sample data sets.
0x07 References
https://blog.csdn.net/qq_20143059/article/details/112992016