ELK 7.x Cluster Configuration

Elasticsearch 7.x Cluster Configuration

Concepts

Cluster

An Elasticsearch cluster consists of one or more nodes (Node). All nodes in a cluster share the same cluster name, which identifies the cluster.

Node

  • An Elasticsearch instance is a Node. One machine can run multiple instances, but in normal use each instance should be deployed on its own machine. In the Elasticsearch configuration file, node.master and node.data set the node's role.
  • node.master: whether the node is eligible to be elected master
    • true: eligible to run for master
    • false: not eligible to run for master
  • node.data: whether the node stores data
    • true: stores data
    • false: does not store data

Node role combinations

  • Master + data node (master+data)
    // The node is eligible to become master and also stores data
    node.master: true
    node.data: true
    
  • Data node (data)
    // The node is not eligible to become master and does not run in elections; it only stores data
    node.master: false
    node.data: true
    
  • Client node (client)
    // The node never becomes master and stores no data; it is mainly used to load-balance heavy request traffic
    node.master: false
    node.data: false
    

Shards

Each index has one or more shards, and each shard stores a different portion of the data. Shards come in two kinds: primary shards and replica shards; a replica shard is a copy of a primary shard. By default each primary shard has one replica. The number of replicas per index can be adjusted dynamically, and a replica shard is never allocated to the same node as its primary shard.
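
For example, a minimal sketch of creating an index with explicit shard/replica settings (the index name my-index and the host are placeholders, not from this setup):

# Create an index with 3 primary shards and 1 replica per primary
curl -XPUT 'http://localhost:9200/my-index' -H 'Content-Type: application/json' -d '{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'

# The replica count can be changed later; the primary shard count cannot
curl -XPUT 'http://localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
  "number_of_replicas": 2
}'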

Cluster Setup

Deploying from the binary package

  1. Make three copies of the Elasticsearch package, named es-a, es-b, and es-c
  2. Edit each copy's elasticsearch.yml (see below)
  3. Start the a, b, and c nodes
  4. Open http://localhost:9200/_cat/health?v in a browser; if node.total in the response is 3, the cluster is up
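
You can also confirm from the command line which nodes joined, using the bind address from the configuration below:

curl 'http://192.168.11.220:9200/_cat/health?v'
curl 'http://192.168.11.220:9200/_cat/nodes?v'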

    Configure elasticsearch.yml

  • Node a
    # Cluster name
    cluster.name: elasticsearch
    # Node name
    node.name: node-a
    # Whether this node is eligible to be elected master
    node.master: true
    # Whether this node stores data
    node.data: true
    # Maximum number of nodes allowed to share this machine's data path
    node.max_local_storage_nodes: 3
    # Bind address
    network.host: 192.168.11.220
    # HTTP port
    http.port: 9200
    # Transport port for inter-node communication
    transport.tcp.port: 9300
    # New in ES 7.x: addresses of the master-eligible seed nodes; once started, they can be elected master
    discovery.seed_hosts: ["192.168.11.220:9300","192.168.11.220:9301","192.168.11.220:9302"]
    # New in ES 7.x: required when bootstrapping a new cluster, to elect the first master
    cluster.initial_master_nodes: ["node-a", "node-b", "node-c"]
    # Data path
    path.data: /home/es/software/es/data
    # Log path
    path.logs: /home/es/software/es/logs

  • Node b
    # Cluster name
    cluster.name: elasticsearch
    # Node name
    node.name: node-b
    # Whether this node is eligible to be elected master
    node.master: true
    # Whether this node stores data
    node.data: true
    # Maximum number of nodes allowed to share this machine's data path
    node.max_local_storage_nodes: 3
    # Bind address
    network.host: 192.168.11.220
    # HTTP port
    http.port: 9201
    # Transport port for inter-node communication
    transport.tcp.port: 9301
    # New in ES 7.x: addresses of the master-eligible seed nodes; once started, they can be elected master
    discovery.seed_hosts: ["192.168.11.220:9300","192.168.11.220:9301","192.168.11.220:9302"]
    # New in ES 7.x: required when bootstrapping a new cluster, to elect the first master
    cluster.initial_master_nodes: ["node-a", "node-b", "node-c"]
    # Data path
    path.data: /home/es/software/es/data
    # Log path
    path.logs: /home/es/software/es/logs

  • Node c
    # Cluster name
    cluster.name: elasticsearch
    # Node name
    node.name: node-c
    # Whether this node is eligible to be elected master
    node.master: true
    # Whether this node stores data
    node.data: true
    # Maximum number of nodes allowed to share this machine's data path
    node.max_local_storage_nodes: 3
    # Bind address
    network.host: 192.168.11.220
    # HTTP port
    http.port: 9202
    # Transport port for inter-node communication
    transport.tcp.port: 9302
    # New in ES 7.x: addresses of the master-eligible seed nodes; once started, they can be elected master
    discovery.seed_hosts: ["192.168.11.220:9300","192.168.11.220:9301","192.168.11.220:9302"]
    # New in ES 7.x: required when bootstrapping a new cluster, to elect the first master
    cluster.initial_master_nodes: ["node-a", "node-b", "node-c"]
    # Data path
    path.data: /home/es/software/es/data
    # Log path
    path.logs: /home/es/software/es/logs
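
A minimal startup sketch, assuming the three copies live under /home/es/software and are started as a non-root user (Elasticsearch refuses to run as root); -d daemonizes, -p writes a pid file:

/home/es/software/es-a/bin/elasticsearch -d -p /tmp/es-a.pid
/home/es/software/es-b/bin/elasticsearch -d -p /tmp/es-b.pid
/home/es/software/es-c/bin/elasticsearch -d -p /tmp/es-c.pid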

Installing on three hosts with yum

  1. Download the rpm package
    [root@elasticsearch-node00 src]# ll
    total 289592
    -rw-r--r-- 1 root root 296535521 Apr 29 17:54 elasticsearch-7.6.2-x86_64.rpm
    
  2. Install with yum
    yum install elasticsearch-7.6.2-x86_64.rpm -y
    # Create the data/log directories first, then hand them to the elasticsearch user
    mkdir -p /data/elasticsearch/{data,logs}
    chown -R elasticsearch:elasticsearch /data
    
  3. Start the service on each of the three nodes
    systemctl start elasticsearch.service 
    systemctl enable elasticsearch.service
    

    Configure elasticsearch.yml

  • elasticsearch-node00

    # Cluster name
    cluster.name: tuoren-elasticsearch00
    # Node name
    node.name: elasticsearch-node00
    # Whether this node is eligible to be elected master
    node.master: true
    # Whether this node stores data
    node.data: true
    # Maximum number of nodes allowed to share this machine's data path
    node.max_local_storage_nodes: 3
    # Bind address
    network.host: 192.168.204.30
    # HTTP port
    http.port: 9200
    # Transport port for inter-node communication
    transport.tcp.port: 9300
    # New in ES 7.x: addresses of the master-eligible seed nodes; once started, they can be elected master
    discovery.seed_hosts: ["elasticsearch-node00:9300", "elasticsearch-node01:9300", "elasticsearch-node02:9300"]
    # New in ES 7.x: required when bootstrapping a new cluster, to elect the first master
    cluster.initial_master_nodes: ["elasticsearch-node00", "elasticsearch-node01", "elasticsearch-node02"]
    # Data path
    path.data: /data/elasticsearch/data
    # Log path
    path.logs: /data/elasticsearch/logs
    
  • elasticsearch-node01

    # Cluster name
    cluster.name: tuoren-elasticsearch00
    # Node name
    node.name: elasticsearch-node01
    # Whether this node is eligible to be elected master
    node.master: true
    # Whether this node stores data
    node.data: true
    # Maximum number of nodes allowed to share this machine's data path
    node.max_local_storage_nodes: 3
    # Bind address
    network.host: 192.168.204.31
    # HTTP port
    http.port: 9200
    # Transport port for inter-node communication
    transport.tcp.port: 9300
    # New in ES 7.x: addresses of the master-eligible seed nodes; once started, they can be elected master
    discovery.seed_hosts: ["elasticsearch-node00:9300", "elasticsearch-node01:9300", "elasticsearch-node02:9300"]
    # New in ES 7.x: required when bootstrapping a new cluster, to elect the first master
    cluster.initial_master_nodes: ["elasticsearch-node00", "elasticsearch-node01", "elasticsearch-node02"]
    # Data path
    path.data: /data/elasticsearch/data
    # Log path
    path.logs: /data/elasticsearch/logs
    
  • elasticsearch-node02

    # Cluster name
    cluster.name: tuoren-elasticsearch00
    # Node name
    node.name: elasticsearch-node02
    # Whether this node is eligible to be elected master
    node.master: true
    # Whether this node stores data
    node.data: true
    # Maximum number of nodes allowed to share this machine's data path
    node.max_local_storage_nodes: 3
    # Bind address
    network.host: 192.168.204.32
    # HTTP port
    http.port: 9200
    # Transport port for inter-node communication
    transport.tcp.port: 9300
    # New in ES 7.x: addresses of the master-eligible seed nodes; once started, they can be elected master
    discovery.seed_hosts: ["elasticsearch-node00:9300", "elasticsearch-node01:9300", "elasticsearch-node02:9300"]
    # New in ES 7.x: required when bootstrapping a new cluster, to elect the first master
    cluster.initial_master_nodes: ["elasticsearch-node00", "elasticsearch-node01", "elasticsearch-node02"]
    # Data path
    path.data: /data/elasticsearch/data
    # Log path
    path.logs: /data/elasticsearch/logs
    

Enable memory locking

[root@elasticsearch-node00 src]# vi /etc/elasticsearch/elasticsearch.yml  
# Lock the memory on startup:
#
bootstrap.memory_lock: true
[root@elasticsearch-node00 src]# vi /usr/lib/systemd/system/elasticsearch.service
# Specifies the maximum file size
LimitFSIZE=infinity

LimitMEMLOCK=infinity
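
After editing the unit file, reload systemd and restart the service, then verify that memory locking took effect (the _nodes filter below is a standard ES API):

systemctl daemon-reload
systemctl restart elasticsearch.service
curl -XGET 'http://192.168.204.30:9200/_nodes?filter_path=**.mlockall&pretty'
# every node should report "mlockall" : true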

Adjust the JVM heap parameters

Set Xms and Xmx to the same value; as a rule of thumb, give the heap no more than half of the machine's RAM.

[root@elasticsearch-node00 src]# vi /etc/elasticsearch/jvm.options

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms2g
-Xmx2g

Restart the services

[root@elasticsearch-node00 ~]# systemctl restart elasticsearch.service
[root@elasticsearch-node01 ~]# systemctl restart elasticsearch.service
[root@elasticsearch-node02 ~]# systemctl restart elasticsearch.service

Production configuration

  • elasticsearch-node00
    [root@elasticsearch-node00 src]# grep '^[^#|$]' /etc/elasticsearch/elasticsearch.yml
    cluster.name: tuoren-elasticsearch00
    node.name: elasticsearch-node00
    node.master: true
    node.data: true
    node.max_local_storage_nodes: 3
    path.data: /data/elasticsearch/data
    path.logs: /data/elasticsearch/logs
    network.host: 192.168.204.30
    http.port: 9200
    transport.tcp.port: 9300
    discovery.seed_hosts: ["elasticsearch-node00:9300", "elasticsearch-node01:9300", "elasticsearch-node02:9300"]
    cluster.initial_master_nodes: ["elasticsearch-node00", "elasticsearch-node01", "elasticsearch-node02"]
    

Set a default shard/replica template for all indices

[root@elasticsearch-node00 ~]# curl -XPUT 'http://192.168.204.30:9200/_template/template_http_request_record' -H 'Content-Type: application/json' -d '{"index_patterns": ["*"],"settings": {"number_of_shards": 3,"number_of_replicas": 2}}'

# {"index_patterns": ["*"] 所有索引
# number_of_shards": 3 分片数
# number_of_replicas": 2 副本数
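
To confirm the template was stored:

curl -XGET 'http://192.168.204.30:9200/_template/template_http_request_record?pretty'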

Collecting kafka-cluster logs

[root@kafka-node01 src]# grep -v "#" /etc/filebeat/filebeat.yml |grep  -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages
    - /opt/kafka/logs/*.log*
    - /data/zookeeper/logs/version-2/log.*
  exclude_lines: ["^DBG","^$"]
  scan_frequency: 120s
  max_bytes: 10485760
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 100
#output.file:
#  path: "/tmp"
#  filename: filebeat.file
output.kafka:
  hosts: ["kafka-node00:9092", "kafka-node01:9092", "kafka-node02:9092"]
  topic: "kafka-cluster-kafka-node00-log" 
  ## 另两个topic为kafka-node01-log和kafka-node02-log
  version: 2.0.0
  required_acks: 0
  max_message_bytes: 10485760
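
Before starting Filebeat, the config and the Kafka output connectivity can be validated with Filebeat's built-in test commands:

filebeat test config
filebeat test output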

Verify the topics exist in the Kafka cluster

[root@kafka-node00 kafka]# bin/kafka-topics.sh --list --zookeeper kafka-node02:2181
__consumer_offsets
kafka-cluster-kafka-node00-log
kafka-cluster-kafka-node01-log
kafka-cluster-kafka-node02-log
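
To confirm log events are actually flowing into a topic, consume a few messages (standard kafka-console-consumer options):

bin/kafka-console-consumer.sh --bootstrap-server kafka-node00:9092 \
  --topic kafka-cluster-kafka-node00-log --from-beginning --max-messages 5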

Configure Logstash

kafka-node00

input {
  kafka {
    bootstrap_servers => "kafka-node00:9092,kafka-node01:9092,kafka-node02:9092"
    client_id => "logstash00"
    consumer_threads => 4
    group_id => "kafka-cluster"
    topics_pattern => "kafka-cluster-kafka-node00-log"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch-node00:9200", "elasticsearch-node01:9200", "elasticsearch-node02:9200"]
    index => "kafka-cluster-kafka-node00-log-%{+YYYY.MM.dd}"
  }
}

kafka-node01

input {
  kafka {
    bootstrap_servers => "kafka-node00:9092,kafka-node01:9092,kafka-node02:9092"
    client_id => "logstash01"
    consumer_threads => 4
    group_id => "kafka-cluster"
    topics_pattern => "kafka-cluster-kafka-node01-log"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch-node00:9200", "elasticsearch-node01:9200", "elasticsearch-node02:9200"]
    index => "kafka-cluster-kafka-node01-log-%{+YYYY.MM.dd}"
  }
}

kafka-node02

input {
  kafka {
    bootstrap_servers => "kafka-node00:9092,kafka-node01:9092,kafka-node02:9092"
    client_id => "logstash02"
    consumer_threads => 4
    group_id => "kafka-cluster"
    topics_pattern => "kafka-cluster-kafka-node02-log"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch-node00:9200", "elasticsearch-node01:9200", "elasticsearch-node02:9200"]
    index => "kafka-cluster-kafka-node02-log-%{+YYYY.MM.dd}"
  }
}
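
Before (re)starting Logstash, a syntax check avoids crash loops; the config path below is an assumption, adjust it to wherever these pipeline files live:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-cluster.conf --config.test_and_exit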

Kibana configuration file

[root@kibana-node00 ~]# grep -v "#" /etc/kibana/kibana.yml | grep -v "^$"
server.port: 5601
server.host: "kibana-node00.tuoren.com"
elasticsearch.hosts: ["http://elasticsearch-node01:9200", "http://elasticsearch-node02:9200", "http://elasticsearch-node00:9200"]
# UI language; the default is English ("en"); for Chinese use "zh-CN"
i18n.locale: "en"
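
Once Kibana is up, its status endpoint works as a quick health check:

curl 'http://kibana-node00.tuoren.com:5601/api/status'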

Nginx proxy configuration file

[root@kibana-node00 nginx]# grep -v "#" /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  access_json  '{"@timestamp":"$time_iso8601",'
                     '"host":"$server_addr",'
                     '"clientip":"$remote_addr",'
                     '"size":$body_bytes_sent,'
                     '"responsetime":$request_time,'
                     '"upstreamtime":"$upstream_response_time",'
                     '"upstreamhost":"$upstream_addr",'
                     '"http_host":"$host",'
                     '"url":"$uri",'
                     '"domain":"$host",'
                     '"xff":"$http_x_forwarded_for",'
                     '"referer":"$http_referer",'
                     '"status":"$status"}';

    access_log  /var/log/nginx/access.log  access_json;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;
    upstream kibana_server {
       server  kibana-node00:5601 weight=1 max_fails=3 fail_timeout=60;
    }

    server {
        listen 80;
        server_name kibana-node00;

        location / {
            proxy_pass http://kibana_server;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }


}
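
Validate the configuration and reload Nginx after editing:

[root@kibana-node00 nginx]# nginx -t
[root@kibana-node00 nginx]# systemctl reload nginx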

Scheduled index deletion script

1. Write all the index name prefixes into a text file; when a new index appears, just append it

[root@elasticsearch-node00 els_delete]# pwd
/opt/script/els_delete
[root@elasticsearch-node00 els_delete]# vi indexname.txt
kafka-cluster-kafka-node00-log
kafka-cluster-kafka-node01-log
kafka-cluster-kafka-node02-log

2. Deletion scripts

2.1 Test deleting a single index

[root@elasticsearch-node00 script]# cat els_delete.sh 
#!/bin/bash
DATE=`date -d "2 days ago" +%Y.%m.%d`
LOG_NAME="kafka-cluster-kafka-node00-log"
FILE_NAME=${LOG_NAME}-${DATE}
curl -XDELETE  http://elasticsearch-node01:9200/${FILE_NAME}
echo "${FILE_NAME} delete success"

2.2 Delete the indices from one month ago (run daily, this keeps a rolling 30-day window)

[root@elasticsearch-node00 els_delete]# vi els-delete.sh 
#!/bin/bash
DATE=`date -d "30 days ago" +%Y.%m.%d`
for LOG_NAME in `cat /opt/script/els_delete/indexname.txt`
do
  FILE_NAME=${LOG_NAME}-${DATE}
  curl -XDELETE http://elasticsearch-node01:9200/${FILE_NAME}
  echo "${FILE_NAME} delete success"
done

[root@elasticsearch-node00 els_delete]# chmod +x els-delete.sh
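
To confirm which kafka-cluster indices remain after a run:

curl 'http://elasticsearch-node01:9200/_cat/indices/kafka-cluster-*?v'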

3. Create the cron job

crontab -e
30 2 * * * /opt/script/els_delete/els-delete.sh # daily at 02:30, clear the indices from one month ago