Setting Up ELK 6.6.2 on CentOS 7.x

1. Environment

OS: CentOS Linux release 7.5.1804 (Core)
elasticsearch: 6.6.2
filebeat: 6.6.2
logstash: 6.6.2
kibana: 6.6.2

2. Deployment and architecture

2.1 Deployment

elasticsearch + nginx + kibana 10.80.8.22 (standby master + data node + elastalert node + curator node)
elasticsearch + logstash       10.80.8.23 (standby master + data node)
elasticsearch                  10.80.8.24 (active master + data node)
elasticsearch                  10.80.8.25 (data node)
elasticsearch                  10.80.8.26 (data node)

2.2 Architecture diagram

[Architecture diagram: filebeat → logstash → elasticsearch → kibana, plus alerting (ElastAlert / Watcher)]

(1) On the far left is filebeat, which only collects logs; it has to be installed on every server whose logs are to be collected.

(2) filebeat then ships the collected logs to logstash, which takes care of data processing and cleanup.

(3) logstash then sends the parsed data to elasticsearch, which is responsible for storing it.

(4) Next comes kibana, the presentation layer, which only displays the data stored in ES.

(5) Finally, alerting: we currently use two alerting tools, ElastAlert (written in Python) and the Watcher feature that ships with X-Pack.

3. Server configuration

3.1 Java environment

JAVA_HOME=/usr/local/java/jdk1.8.0_152
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME CLASSPATH PATH
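
These variables are typically appended to a shell profile; the original post does not say which file is used, so /etc/profile below is an assumption. After editing, reload the profile and confirm the JDK is picked up:

source /etc/profile        # assumes the exports above were appended to /etc/profile
echo $JAVA_HOME            # should print /usr/local/java/jdk1.8.0_152
java -version              # should report 1.8.0_152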

3.2 limits configuration

Append the following four lines to the end of /etc/security/limits.conf:

* soft nofile 65536
* hard nofile 65536
* soft nproc  65536
* hard nproc  65536
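
The limits only apply to new login sessions. A quick sanity check after logging in again:

ulimit -n    # open files, expect 65536
ulimit -u    # max user processes, expect 65536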

3.3 sysctl.conf configuration

Tip: if the server was provisioned with our initialization script, most of the kernel settings ELK needs are already in place.

vm.swappiness = 0
vm.max_map_count=262144
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
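
Apply the kernel parameters without a reboot and verify the value elasticsearch cares about most:

sysctl -p                    # reload /etc/sysctl.conf
sysctl vm.max_map_count      # expect 262144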

4. Installing elasticsearch

The stack is installed from RPM packages; elasticsearch needs to be installed on all three servers.

4.1 Download and install

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.2.rpm
yum install elasticsearch-6.6.2.rpm -y

4.2 Configure the Java path

If Java was installed via yum, this step is not needed; if Java was installed manually to a custom location, the path must be set, otherwise elasticsearch will fail with a "could not find java" error.

Below is the file /etc/sysconfig/elasticsearch:

################################
# Elasticsearch
################################

# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch

# Elasticsearch Java path
#JAVA_HOME=
JAVA_HOME=/usr/local/java/jdk1.8.0_152  # add/modify this line

# Elasticsearch configuration directory
ES_PATH_CONF=/etc/elasticsearch

# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch

# Additional Java OPTS
#ES_JAVA_OPTS=

# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true

################################
# Elasticsearch service
################################

# SysV init.d
#
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5

################################
# System properties
################################

# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536

# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited

# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144

4.3 Configure elasticsearch

elasticsearch configuration on server 10.80.8.22; below is the production config file /etc/elasticsearch/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

cluster.name: super-log-system
node.name: ser8-22.super-idc.net
node.master: true 
node.data: true
network.host: 10.80.8.22
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.80.8.22", "10.80.8.23","10.80.8.24"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.ssl.key: elasticsearch/elasticsearch.key 
xpack.ssl.certificate: elasticsearch/elasticsearch.crt 
xpack.ssl.certificate_authorities: ca/ca.crt
xpack.security.transport.ssl.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.monitoring.user.enabled: true
#cluster.routing.allocation.disk.threshold_enabled: false
xpack.notification.email.account:
    exchange_account:
        profile: outlook
        email_defaults:
            from: monitor@super.com
        smtp:
            auth: true
            starttls.enable: false
            host: smtp.exmail.qq.com
            port: 25
            user: monitor@super.com
            password: <mailbox password>

elasticsearch configuration on server 10.80.8.23; below is the production config file /etc/elasticsearch/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

cluster.name: super-log-system
node.name: ser8-23.super-idc.net
node.master: true
node.data: true
network.host: 10.80.8.23
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.80.8.22", "10.80.8.23","10.80.8.24"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.ssl.key: elasticsearch/elasticsearch.key
xpack.ssl.certificate: elasticsearch/elasticsearch.crt
xpack.ssl.certificate_authorities: ca/ca.crt
xpack.security.transport.ssl.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.monitoring.user.enabled: true
#cluster.routing.allocation.disk.threshold_enabled: false
xpack.notification.email.account:
    exchange_account:
        profile: outlook
        email_defaults:
            from: monitor@super.com
        smtp:
            auth: true
            starttls.enable: false
            host: smtp.exmail.qq.com
            port: 25

elasticsearch configuration on server 10.80.8.24; below is the production config file /etc/elasticsearch/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

cluster.name: super-log-system
node.name: ser8-24.super-idc.net
node.master: true
node.data: true
network.host: 10.80.8.24
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.80.8.22", "10.80.8.23","10.80.8.24"]
discovery.zen.minimum_master_nodes: 2
xpack.security.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.ssl.key: elasticsearch/elasticsearch.key
xpack.ssl.certificate: elasticsearch/elasticsearch.crt
xpack.ssl.certificate_authorities: ca/ca.crt
xpack.security.transport.ssl.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
xpack.monitoring.user.enabled: true
#cluster.routing.allocation.disk.threshold_enabled: false
xpack.notification.email.account:
    exchange_account:
        profile: outlook
        email_defaults:
            from: monitor@super.com
        smtp:
            auth: true
            starttls.enable: false
            host: smtp.exmail.qq.com
            port: 25

The configuration of the three ES servers is basically the same; only node.name and network.host differ between them. A data-only node needs three changes, with everything else identical to the master-eligible nodes: node.name: "<node hostname>", node.master: false, network.host: "<node IP>".
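
The xpack.ssl.* paths above are relative to /etc/elasticsearch, but the post does not show how the certificate files were produced. A minimal sketch using the bundled elasticsearch-certutil tool is given below; the --name value and directory layout are chosen here only so the output matches the paths in elasticsearch.yml:

# On one node: create a CA and a node certificate in PEM format
/usr/share/elasticsearch/bin/elasticsearch-certutil ca --pem --out /tmp/ca.zip
unzip /tmp/ca.zip -d /tmp/certs                  # extracts ca/ca.crt and ca/ca.key
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --pem \
    --ca-cert /tmp/certs/ca/ca.crt --ca-key /tmp/certs/ca/ca.key \
    --name elasticsearch --out /tmp/cert.zip
unzip /tmp/cert.zip -d /tmp/certs                # extracts elasticsearch/elasticsearch.crt and .key

# Copy the ca/ and elasticsearch/ directories into /etc/elasticsearch on every node
cp -r /tmp/certs/ca /tmp/certs/elasticsearch /etc/elasticsearch/
chown -R root:elasticsearch /etc/elasticsearch/ca /etc/elasticsearch/elasticsearch
chmod 640 /etc/elasticsearch/elasticsearch/elasticsearch.key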

4.4 Configure the heap size

Edit the /etc/elasticsearch/jvm.options configuration file:

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms31g
-Xmx31g

4.5 Start elasticsearch and enable it at boot

systemctl start elasticsearch
systemctl enable elasticsearch
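
Because xpack.security.enabled is true and the elastic superuser is referenced later in the logstash and kibana configs, the built-in users' passwords have to be initialized once against a running node. A sketch (the actual passwords are site-specific):

# Run once on any node after the cluster is up
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

# Verify the node and cluster state, authenticating as elastic
curl -u elastic 'http://10.80.8.22:9200/_cluster/health?pretty'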

4.6 elasticsearch installation, log and data directories

RPM install (config) directory: /etc/elasticsearch
Log directory: /var/log/elasticsearch (customizable in elasticsearch.yml)
Data directory: /data/elasticsearch  (customizable in elasticsearch.yml)

5. Installing logstash

5.1 Download and install

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.6.2.rpm
yum install logstash-6.6.2.rpm -y

5.2 Configure the Java path

echo "JAVA_HOME=/usr/local/java/jdk1.8.0_152" >>/etc/sysconfig/logstash

Change JAVACMD in /etc/logstash/startup.options to /usr/local/java/jdk1.8.0_152/bin/java.

5.3 Configure logstash

Append the following lines to the end of /etc/logstash/logstash.yml.
Purpose: with these settings in place, logstash performance metrics can be viewed in kibana.

xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: <password>
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://10.80.8.22:9200
xpack.monitoring.collection.interval: 10s

5.4 Configure the heap size

Edit the /etc/logstash/jvm.options configuration file:

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms4g
-Xmx4g

5.5 logstash installation and log directories

RPM install (config) directory: /etc/logstash
Log directory: /var/log/logstash (customizable in logstash.yml)

5.6 Example

Take the user service's application log as an example.

Directory notes:

Launch directory: /usr/share/logstash

Configuration directory: /etc/logstash/conf.d/   (the whole directory, or a single file inside it, can serve as the configuration; just point to it at startup)

A logstash configuration file is roughly divided into three blocks:

input {
}

filter {
}

output {
}

The configuration for collecting the user logs is as follows:

filter {
  if [fields][log_topic] == "logstash-user-info" {
    grok {
      match => [
        "message", "%{TIMESTAMP_ISO8601:tm} \[(?<thread>\S+)\] %{LOGLEVEL:level} %{DATA:class} - (?<logmsg>.*)"
      ]
    }
    mutate {
      remove_field => ["input","beat","prospector","logmsg","log","thread"]
    }
  }
}


output {
  if [fields][log_topic] == "logstash-user-info" {
    elasticsearch {
      hosts => ["10.80.8.22:9200","10.80.8.23:9200","10.80.8.24:9200"]
      user => "elastic"
      password => "<password>"
      index => "logstash-user-info-%{+YYYY.MM.dd}"
    }
  }
}
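
The snippet above contains only the filter and output blocks; the input side is not shown in the original. Assuming filebeat ships to port 5044 on 10.80.8.23 (as configured in section 8.2), a minimal beats input would look like this:

input {
  beats {
    port => 5044    # must match output.logstash.hosts in filebeat.yml
  }
}

The whole pipeline can be syntax-checked with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/ --config.test_and_exit before it is started for real.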

6. Installing kibana

6.1 Download and install

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.2-x86_64.rpm
yum install kibana-6.6.2-x86_64.rpm -y

6.2 Configure kibana

The kibana configuration file is /etc/kibana/kibana.yml. Bind kibana to 127.0.0.1 and put nginx in front of it as a reverse proxy to improve security.

server.basePath: "/awesome"
server.port: 5601                              # listening port
server.host: "127.0.0.1"                       # bind address (reverse-proxied by nginx)
server.name: "elk.super-in.com"                # public name served by nginx
elasticsearch.url: "http://10.80.8.22:9200"    # URL of the elasticsearch node kibana talks to
elasticsearch.username: elastic
elasticsearch.password: <password>

6.3 Start kibana and enable it at boot

systemctl start kibana
systemctl enable kibana
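
A quick check that kibana is actually listening on the loopback address before putting nginx in front of it (a sketch; /api/status is kibana's own status endpoint):

curl -I http://127.0.0.1:5601/api/status    # expect an HTTP response (200, or 401/302 before login) once kibana has finished starting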

7. nginx configuration

kibana.conf

server {
    listen 80;
    access_log  logs/elk.super-in.com_access.log logstash_json;
    server_name elk.super-in.com;

    location /awesome/ {
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;

      proxy_pass  http://127.0.0.1:5601/;
      rewrite ^/awesome/(.*)$ /$1 break;
    }
}

nginx.conf

user nginx;
worker_processes auto;

error_log  logs/error.log  error;
pid        logs/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections  51200;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    server_names_hash_bucket_size 256;
    client_header_buffer_size 64k;
    large_client_header_buffers 4 64k;
    client_max_body_size 80m;
    sendfile        on;
    tcp_nopush     on;
    keepalive_timeout  120;
    send_timeout 360;
    proxy_ignore_client_abort on;
    proxy_connect_timeout 600;
    proxy_read_timeout 600;
    proxy_send_timeout 600;
    proxy_buffer_size 512k;
    proxy_buffers 16 512k;
    charset utf-8;
    gzip  on;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_static on;
    gzip_min_length  1k;
    gzip_buffers     4 32k;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_disable        "MSIE [1-6]\.";
    gzip_comp_level 6;
    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    log_format logstash_json '{ "hostname": "$hostname",'
                         '"log_time": "$time_iso8601", '
                         '"remote_addr": "$remote_addr", '
                         '"remote_user": "$remote_user", '
                         '"body_bytes_sent": "$body_bytes_sent", '
                         '"request_time": "$request_time", '
                         '"status": "$status", '
                         '"request_uri": "$request_uri", '
                         '"server_protocol":"$server_protocol",'
                         '"request_method": "$request_method", '
                         '"http_referrer": "$http_referer", '
                         '"http_x_forwarded_for": "$http_x_forwarded_for", '
                         '"http_user_agent": "$http_user_agent", '
                         '"http_cookie": "$http_cookie" }';
    include servers/*.conf;
}
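
After dropping kibana.conf into the servers/ directory, validate the configuration and reload nginx:

nginx -t                  # test the configuration syntax
systemctl reload nginx    # reload without dropping connections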

8. Installing filebeat

8.1 Download and install

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.2-x86_64.rpm
yum install filebeat-6.6.2-x86_64.rpm -y

8.2 Configure filebeat

Using the user service as an example:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data/home/www/super/logs/user/user.log
  multiline:
    pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    negate: true
    match: after
    max_lines: 500
    timeout: 5s
  fields:
    log_topic: logstash-user-info
output.logstash:
  hosts: ["10.80.8.23:5044"]
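
filebeat 6.x can check its own configuration and the connection to logstash before the service is started; a short sketch:

filebeat test config -c /etc/filebeat/filebeat.yml    # validate filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml    # check connectivity to 10.80.8.23:5044
systemctl start filebeat
systemctl enable filebeat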

8.3 filebeat installation and log directories

RPM install (config) directory: /etc/filebeat
Log directory: /var/log/filebeat