ELK02 - Collecting Application Logs from Linux Systems with ELK

1. Introduction to ELK Log Collection

Log display: Kibana. Kibana stores no data of its own; it pulls data from ES and visualizes it.

Log collection: Filebeat. Install Filebeat on whichever host you want to collect logs from; it is just an agent, the same idea as the Zabbix agent.

  1) ELK component roles

   Components used for log collection:

    Elasticsearch: the datastore; holds the data. (Java)

    Logstash: collects logs and filters/transforms data. (Java)

    Kibana: analyzes, filters, and visualizes. (Java)

    Filebeat: collects logs and ships them to ES. (Go)

  2) Log sources collected by ELK

   Typical tiers whose logs are collected:

    Proxy tier: nginx, haproxy

    Web tier: nginx, tomcat

    DB tier: ES, mysql, redis, mongo

2. ELK Log Collection Examples

  Collecting Nginx logs with ELK

 

Lab environment:

 Two hosts are used for the initial setup (a third host, elk03, is added later in the multi-host section):

Host 1 (elk01): ES, Kibana, Nginx, Filebeat

Host 2 (elk02): Nginx, Filebeat

 Workflow: Filebeat collects the logs (Nginx, etc.) and ships them to ES; Kibana or es-head then reads the data from ES and displays it.

Deploy ES

    Deploy ES: it was installed earlier, so only the configuration file needs to be revisited. A single ES node is used; make sure NTP time sync is in place (a sketch follows).
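
    A minimal sketch of the NTP sync mentioned above, assuming a CentOS/RHEL 7 host using chrony (adjust to your environment; any NTP client works):

      [root@elk01 ~]# yum install chrony -y
      [root@elk01 ~]# systemctl enable chronyd
      [root@elk01 ~]# systemctl start chronyd
      [root@elk01 ~]# chronyc sources        # confirm at least one NTP source is reachable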

Deploy Kibana

    Deploy Kibana: Kibana is installed on the same host as ES.

Install Kibana

            Install Kibana:

      Note: Elasticsearch on this host already requires a JDK; Kibana itself bundles its own Node.js runtime and does not need the JDK. The java -version check below simply confirms the existing install:

      [root@localhost es_soft]# java -version
      openjdk version "1.8.0_312"
      OpenJDK Runtime Environment (build 1.8.0_312-b07)
      OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)

      [root@localhost es_soft]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.0-x86_64.rpm

      [root@localhost es_soft]# rpm -ivh kibana-6.6.0-x86_64.rpm

Check the Kibana configuration

     With Kibana installed, locate its configuration file:

      [root@localhost es_soft]# rpm -qc kibana
      /etc/kibana/kibana.yml

Modify the Kibana configuration

    The Kibana configuration after the changes:

  [root@localhost es_soft]# grep "^[a-z]" /etc/kibana/kibana.yml
  server.port: 5601
  server.host: "10.96.211.209"
  server.name: "ELK01"
  elasticsearch.hosts: ["http://localhost:9200"]
  kibana.index: ".kibana"

  Note on elasticsearch.hosts: ["http://localhost:9200"]: because ES and Kibana run on the same host, localhost works here; if they were on different hosts, the ES server's actual IP address would be required.
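
  Before starting Kibana, it is worth confirming that ES answers on that address (an optional sanity check, not part of the original steps):

      [root@localhost es_soft]# curl -s http://localhost:9200
      [root@localhost es_soft]# curl -s http://localhost:9200/_cluster/health?pretty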

Start Kibana

       Start Kibana:

  [root@localhost es_soft]# systemctl start kibana
  [root@localhost es_soft]# echo $?
  0

Check the Kibana status

         Check Kibana's status after startup:
  [root@localhost es_soft]# systemctl status kibana
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: disabled)
   Active: active (running) since 一 2022-01-17 11:01:47 CST; 40s ago
 Main PID: 25780 (node)
   CGroup: /system.slice/kibana.service
           └─25780 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

1月 17 11:02:05 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:05Z","tags":["status","plugin:reporting@6.6.0","info"],"pid":25780,"state":"green","message":"Status changed from yellow to green - Re...r Elasticsearch"}
1月 17 11:02:05 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:05Z","tags":["info","monitoring-ui","kibana-monitoring"],"pid":25780,"message":"Starting monitoring stats collection"}
1月 17 11:02:05 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:05Z","tags":["status","plugin:security@6.6.0","info"],"pid":25780,"state":"green","message":"Status changed from yellow to green - Rea...r Elasticsearch"}
1月 17 11:02:05 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:05Z","tags":["license","info","xpack"],"pid":25780,"message":"Imported license information from Elasticsearch for the [monitoring] clu... status: active"}
1月 17 11:02:06 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:06Z","tags":["reporting","browser-driver","warning"],"pid":25780,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}
1月 17 11:02:06 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:06Z","tags":["info","migrations"],"pid":25780,"message":"Creating index .kibana_1."}
1月 17 11:02:07 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:07Z","tags":["info","migrations"],"pid":25780,"message":"Pointing alias .kibana to .kibana_1."}
1月 17 11:02:07 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:07Z","tags":["info","migrations"],"pid":25780,"message":"Finished in 459ms."}
1月 17 11:02:07 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:07Z","tags":["listening","info"],"pid":25780,"message":"Server running at http://10.96.211.209:5601"}
1月 17 11:02:08 elk01 kibana[25780]: {"type":"log","@timestamp":"2022-01-17T03:02:08Z","tags":["status","plugin:spaces@6.6.0","info"],"pid":25780,"state":"green","message":"Status changed from yellow to green - Ready...r Elasticsearch"}
Hint: Some lines were ellipsized, use -l to show in full.

  

  [root@localhost es_soft]# netstat -lntup|grep 5601
tcp        0      0 10.96.211.209:5601      0.0.0.0:*               LISTEN      25780/node

       Log in to Kibana at 10.96.211.209:5601; if the web UI loads, the installation and deployment succeeded (screenshot omitted).

 Deploy Nginx

Install Nginx and the ab tool

Install Nginx and the ab tool (ab is used for load testing):

[root@elk01 ~]# yum install nginx httpd-tools  -y

[root@elk01 ~]# netstat -lntup|grep 80

Start Nginx:
[root@elk01 ~]# systemctl start nginx
[root@elk01 ~]# netstat -lntup|grep 80
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      11423/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      11423/nginx: master

ab test

Run an ab load test (the access log is tailed in one terminal while ab runs in another):

[root@elk01 ~]# tail -f /var/log/nginx/access.log
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:07:54 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"

[root@elk01 ~]# ab -n 100 -c 100 http://10.96.211.209/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.96.211.209 (be patient).....done


Server Software:        nginx/1.20.1
Server Hostname:        10.96.211.209
Server Port:            80

Document Path:          /
Document Length:        4833 bytes

Concurrency Level:      100
Time taken for tests:   0.007 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      506800 bytes
HTML transferred:       483300 bytes
Requests per second:    14505.37 [#/sec] (mean)
Time per request:       6.894 [ms] (mean)
Time per request:       0.069 [ms] (mean, across all concurrent requests)
Transfer rate:          71790.23 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   0.7      2       3
Processing:     1    2   0.7      2       3
Waiting:        0    2   0.6      2       2
Total:          3    4   0.5      4       5

Percentage of the requests served within a certain time (ms)
  50%      4
  66%      4
  75%      5
  80%      5
  90%      5
  95%      5
  98%      5
  99%      5
 100%      5 (longest request)
[root@elk01 ~]# tail -f /var/log/nginx/access.log
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [17/Jan/2022:15:08:36 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3" "-"

 

Deploy Filebeat

Install Filebeat:

[root@elk01 es_soft]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.6.0-x86_64.rpm

[root@elk01 es_soft]# rpm -ivh filebeat-6.6.0-x86_64.rpm
警告:filebeat-6.6.0-x86_64.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID d88e42b4: NOKEY
准备中...                          ################################# [100%]
正在升级/安装...
   1:filebeat-6.6.0-1                 ################################# [100%]

List Filebeat's configuration files:

[root@elk01 es_soft]# rpm -qc filebeat
/etc/filebeat/filebeat.yml
/etc/filebeat/modules.d/apache2.yml.disabled
/etc/filebeat/modules.d/auditd.yml.disabled
/etc/filebeat/modules.d/elasticsearch.yml.disabled
/etc/filebeat/modules.d/haproxy.yml.disabled
/etc/filebeat/modules.d/icinga.yml.disabled
/etc/filebeat/modules.d/iis.yml.disabled
/etc/filebeat/modules.d/kafka.yml.disabled
/etc/filebeat/modules.d/kibana.yml.disabled
/etc/filebeat/modules.d/logstash.yml.disabled
/etc/filebeat/modules.d/mongodb.yml.disabled
/etc/filebeat/modules.d/mysql.yml.disabled
/etc/filebeat/modules.d/nginx.yml.disabled
/etc/filebeat/modules.d/osquery.yml.disabled
/etc/filebeat/modules.d/postgresql.yml.disabled
/etc/filebeat/modules.d/redis.yml.disabled
/etc/filebeat/modules.d/suricata.yml.disabled
/etc/filebeat/modules.d/system.yml.disabled
/etc/filebeat/modules.d/traefik.yml.disabled

Modify the Filebeat configuration

The filebeat.yml after the relevant parameters were changed:

[root@elk01 es_soft]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
output.elasticsearch:
  hosts: ["10.96.211.209:9200"]

Restart the Filebeat service:

[root@elk01 es_soft]# systemctl restart filebeat

Check the Filebeat status:

[root@elk01 es_soft]# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)
   Active: active (running) since 一 2022-01-17 16:18:13 CST; 10s ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 16263 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─16263 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

1月 17 16:18:13 elk01 systemd[1]: Started Filebeat sends log files to Logstash or directly to Elasticsearch..

Check the Filebeat log

Note: the Filebeat log file has no .log suffix; it is simply named filebeat.
[root@elk01 es_soft]# tail -f /var/log/filebeat/filebeat
2022-01-17T16:18:14.025+0800    INFO    input/input.go:114      Starting input of type: log; ID: 4059289650930770670
2022-01-17T16:18:14.025+0800    INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2022-01-17T16:18:14.025+0800    INFO    log/harvester.go:255    Harvester started for file: /var/log/nginx/access.log
2022-01-17T16:18:15.026+0800    INFO    pipeline/output.go:95   Connecting to backoff(elasticsearch(http://10.96.211.209:9200))
2022-01-17T16:18:15.039+0800    INFO    elasticsearch/client.go:721     Connected to Elasticsearch version 6.6.0
2022-01-17T16:18:15.040+0800    INFO    template/load.go:83     Loading template for Elasticsearch version: 6.6.0
2022-01-17T16:18:15.159+0800    INFO    template/load.go:146    Elasticsearch template with name 'filebeat-6.6.0' loaded
2022-01-17T16:18:15.159+0800    INFO    instance/beat.go:894    Template successfully loaded.
2022-01-17T16:18:15.159+0800    INFO    pipeline/output.go:105  Connection to backoff(elasticsearch(http://10.96.211.209:9200)) established
2022-01-17T16:18:44.036+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":57}},"total":{"ticks":110,"time":{"ms":119},"value":110},"user":{"ticks":60,"time":{"ms":62}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"c09a003c-bf5e-460b-a727-8b819ef0b0ce","uptime":{"ms":30029}},"memstats":{"gc_next":8742192,"memory_alloc":6891456,"memory_total":20921088,"rss":24117248}},"filebeat":{"events":{"added":274,"done":274},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":273,"batches":6,"total":273},"read":{"bytes":4533},"type":"elasticsearch","write":{"bytes":186552}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"published":273,"retry":50,"total":274},"queue":{"acked":273}}},"registrar":{"states":{"current":1,"update":274},"writes":{"success":7,"total":7}},"system":{"cpu":{"cores":2},"load":{"1":0.01,"15":0.17,"5":0.14,"norm":{"1":0.005,"15":0.085,"5":0.07}}}}}}

 View the collected plain-format Nginx access log in Kibana

  Kibana URL: 10.96.211.209:5601 (Kibana screenshots omitted)
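
  Besides checking in Kibana, the index created by Filebeat can be confirmed directly against ES (assuming the default index name pattern filebeat-6.6.0-&lt;date&gt;):

  [root@elk01 es_soft]# curl -s 'http://10.96.211.209:9200/_cat/indices/filebeat-*?v'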

 

 

 

 

 Collecting the Nginx access log as JSON with ELK

  Convert the Nginx access log format to JSON (key/value pairs).

  1) Modify the Nginx configuration file

  Using the Nginx log as the example, switch the log format to JSON. The nginx.conf after the change is shown below; the json log_format block and the access_log line are what was added/replaced.

  

[root@elk01 ~]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format  json  '{ "time_local": "$time_local", '
                      '"remote_addr": "$remote_addr", '
                      '"referer": "$http_referer", '
                      '"request": "$request", '
                      '"status": $status, '
                      '"bytes": $body_bytes_sent, '
                      '"agent": "$http_user_agent", '
                      '"x_forwarded": "$http_x_forwarded_for", '
                      '"up_addr": "$upstream_addr",'
                      '"up_host": "$upstream_http_host",'
                      '"upstream_time": "$upstream_response_time",'
                      '"request_time": "$request_time"'
                      ' }';


    access_log  /var/log/nginx/access.log  json;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    # Settings for a TLS enabled server.
    #
    #    server {
    #        listen       443 ssl http2;
    #        listen       [::]:443 ssl http2;
    #        server_name  _;
    #        root         /usr/share/nginx/html;
    #
    #        ssl_certificate "/etc/pki/nginx/server.crt";
    #        ssl_certificate_key "/etc/pki/nginx/private/server.key";
    #        ssl_session_cache shared:SSL:1m;
    #        ssl_session_timeout 10m;
    #        ssl_ciphers HIGH:!aNULL:!MD5;
    #        ssl_prefer_server_ciphers on;
    #
    #        # Load configuration files for the default server block.
    #        include /etc/nginx/default.d/*.conf;
    #
    #        error_page 404 /404.html;
    #        location = /40x.html {
    #        }
    #
    #        error_page 500 502 503 504 /50x.html;
    #        location = /50x.html {
    #        }
    #    }

}

         2) Truncate the old-format log

    [root@elk01 ~]# >/var/log/nginx/access.log

    3) Restart the Nginx service

    [root@elk01 ~]# systemctl restart nginx 

    4) Generate test traffic with ab

     [root@elk01 ~]# ab -n 100 -c 100 http://10.96.211.209/

       5) Check the JSON-format Nginx access log

    [root@elk01 ~]# tail -f /var/log/nginx/access.log
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:22:52 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }

   6) Configure Filebeat to parse the log lines as JSON

a. Note: once the log has been converted to JSON, Filebeat must be configured to parse each line as JSON, i.e. add the following two options to the input in filebeat.yml:

  json.keys_under_root: true

  json.overwrite_keys: true

The filebeat.yml after the change (the two json.* lines are the additions):

[root@elk01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]

b. Truncate the existing log

[root@elk01 ~]# > /var/log/nginx/access.log

 

c. Restart the Filebeat service

[root@elk01 ~]# systemctl restart filebeat

d. Generate new traffic with ab (or any other method) so fresh JSON log lines are written and collected

 [root@elk01 ~]#  ab -n 100 -c 100 http://10.96.211.209/
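
To verify that the JSON keys (status, request, remote_addr, ...) now arrive as top-level fields instead of a single message string, pull one sample document from ES (a quick check against the default filebeat-6.6.0-* index):

 [root@elk01 ~]# curl -s 'http://10.96.211.209:9200/filebeat-6.6.0-*/_search?size=1&pretty'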

Custom index names

  Override the default Filebeat template (custom index name)

By default Filebeat creates one index per day, which does not suit this use case; the following changes create one index per month instead (a whole month's logs go into a single index).

The filebeat.yml after the change (the index, setup.kibana and setup.template.* lines are the additions):

[root@elk01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
  index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true

  Note: since Filebeat 6.6 is installed here, the option uses %{[beat.version]}, i.e. index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"; with Filebeat 7.x it would be index: "nginx_access-%{[agent.version]}-%{+yyyy.MM}".

Restart the Filebeat service:

[root@elk01 ~]# systemctl restart filebeat

Check the Filebeat log:

[root@elk01 ~]# tail -f /var/log/filebeat/filebeat
2022-01-18T15:09:53.660+0800    INFO    [publisher]     pipeline/module.go:110  Beat name: elk01
2022-01-18T15:09:53.660+0800    INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
2022-01-18T15:09:53.660+0800    INFO    instance/beat.go:403    filebeat start running.
2022-01-18T15:09:53.660+0800    INFO    registrar/registrar.go:134      Loading registrar data from /var/lib/filebeat/registry
2022-01-18T15:09:53.660+0800    INFO    registrar/registrar.go:141      States Loaded from registrar: 1
2022-01-18T15:09:53.660+0800    INFO    crawler/crawler.go:72   Loading Inputs: 1
2022-01-18T15:09:53.661+0800    INFO    log/input.go:138        Configured paths: [/var/log/nginx/access.log]
2022-01-18T15:09:53.661+0800    INFO    input/input.go:114      Starting input of type: log; ID: 15383831961883387157
2022-01-18T15:09:53.661+0800    INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2022-01-18T15:10:23.668+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":35},"value":30},"user":{"ticks":10,"time":{"ms":11}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":6},"info":{"ephemeral_id":"a6d2a55b-b2bb-4b6b-955b-54eaf3d6af82","uptime":{"ms":30019}},"memstats":{"gc_next":4203040,"memory_alloc":2869776,"memory_total":4550272,"rss":15110144}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":1,"update":1},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":2},"load":{"1":0.26,"15":0.14,"5":0.13,"norm":{"1":0.13,"15":0.07,"5":0.065}}}}}}

 Test that the new index is generated:

Generate new log entries with ab (or any other method), then delete the previously created index in Kibana; the new custom index will appear. Concretely:

1) Generate traffic with ab (or another method); 2) delete the previously created index in Kibana.

[root@elk01 ~]#  ab -n 100 -c 100 http://10.96.211.209/    ----> then delete the previously created index in Kibana.

In Kibana the monthly index now shows up, and a new index pattern can be created for it (screenshot omitted).
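
The new monthly index can also be listed directly from ES (index name as configured above):

[root@elk01 ~]# curl -s 'http://10.96.211.209:9200/_cat/indices/nginx_access-*?v'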

 

Collecting logs from multiple hosts

  Everything so far collected logs from a single host; this section covers collecting from multiple hosts.

  The configuration for multiple hosts is exactly the same as for a single host.

  Collecting logs from the second and third hosts

   Preparation on the second and third hosts:

    On the second host (elk02):

    [root@elk02 es_soft]# rpm -ivh filebeat-6.6.0-x86_64.rpm
警告:filebeat-6.6.0-x86_64.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID d88e42b4: NOKEY
准备中...                          ################################# [100%]
正在升级/安装...
   1:filebeat-6.6.0-1                 ################################# [100%]

    [root@elk02 es_soft]# yum install epel-release -y

    [root@elk02 es_soft]#  yum install nginx httpd-tools -y

 

    On the third host (elk03):

    [root@localhost es_soft]# rpm -ivh filebeat-6.6.0-x86_64.rpm
警告:filebeat-6.6.0-x86_64.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID d88e42b4: NOKEY
准备中...                          ################################# [100%]
正在升级/安装...
   1:filebeat-6.6.0-1                 ################################# [100%]

    [root@localhost es_soft]#  yum install epel-release -y

    [root@localhost es_soft]# yum install nginx httpd-tools -y


  Nginx and Filebeat configuration on the second and third hosts

    The second host's nginx and filebeat configuration files are identical to the first (single-host) configuration:

      [root@elk02 es_soft]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format  json '{ "time_local": "$time_local", '
                           '"remote_addr": "$remote_addr", '
                           '"referer": "$http_referer", '
                           '"request": "$request", '
                           '"status": $status, '
                           '"bytes": $body_bytes_sent, '
                           '"agent": "$http_user_agent", '
                           '"x_forwarded": "$http_x_forwarded_for", '
                           '"up_addr": "$upstream_addr",'
                           '"up_host": "$upstream_http_host",'
                           '"upstream_time": "$upstream_response_time",'
                           '"request_time": "$request_time"'
    ' }';


    access_log  /var/log/nginx/access.log  json;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

 

      [root@elk02 es_soft]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
  index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true

 

    The third host's nginx and filebeat configuration files are likewise identical to the first host's:

      [root@elk03 ~]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format  json '{ "time_local": "$time_local", '
                           '"remote_addr": "$remote_addr", '
                           '"referer": "$http_referer", '
                           '"request": "$request", '
                           '"status": $status, '
                           '"bytes": $body_bytes_sent, '
                           '"agent": "$http_user_agent", '
                           '"x_forwarded": "$http_x_forwarded_for", '
                           '"up_addr": "$upstream_addr",'
                           '"up_host": "$upstream_http_host",'
                           '"upstream_time": "$upstream_response_time",'
                           '"request_time": "$request_time"'
    ' }';


    access_log  /var/log/nginx/access.log  json;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

 

      [root@elk03 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
  index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"
setup.template.name: "nginx"
setup.template.pattern: "nginx_*"
setup.template.enabled: false
setup.template.overwrite: true

   Start nginx and filebeat on the second and third hosts

    Restart nginx and start filebeat on the second and third hosts:

      [root@elk02 es_soft]# systemctl restart nginx
      [root@elk02 es_soft]# systemctl start filebeat

      [root@elk02 es_soft]# tail -f /var/log/filebeat/filebeat
2022-01-17T18:41:46.746+0800    INFO    registrar/registrar.go:97       No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2022-01-17T18:41:46.761+0800    INFO    registrar/registrar.go:134      Loading registrar data from /var/lib/filebeat/registry
2022-01-17T18:41:46.761+0800    INFO    registrar/registrar.go:141      States Loaded from registrar: 0
2022-01-17T18:41:46.761+0800    INFO    crawler/crawler.go:72   Loading Inputs: 1
2022-01-17T18:41:46.762+0800    INFO    log/input.go:138        Configured paths: [/var/log/nginx/access.log]
2022-01-17T18:41:46.762+0800    INFO    input/input.go:114      Starting input of type: log; ID: 15383831961883387157
2022-01-17T18:41:46.762+0800    INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2022-01-17T18:41:46.762+0800    INFO    log/harvester.go:255    Harvester started for file: /var/log/nginx/access.log
2022-01-17T18:42:16.769+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":28}},"total":{"ticks":30,"time":{"ms":44},"value":30},"user":{"ticks":10,"time":{"ms":16}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"c5df38a9-9075-4dab-89ff-574af8cb60f7","uptime":{"ms":30029}},"memstats":{"gc_next":4194304,"memory_alloc":2827792,"memory_total":4509440,"rss":15347712}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":1,"update":1},"writes":{"success":2,"total":2}},"system":{"cpu":{"cores":2},"load":{"1":0.09,"15":0.15,"5":0.11,"norm":{"1":0.045,"15":0.075,"5":0.055}}}}}}
2022-01-17T18:42:46.754+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":4}},"total":{"ticks":50,"time":{"ms":9},"value":50},"user":{"ticks":20,"time":{"ms":5}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"c5df38a9-9075-4dab-89ff-574af8cb60f7","uptime":{"ms":60021}},"memstats":{"gc_next":4194304,"memory_alloc":3121200,"memory_total":4802848,"rss":929792}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0.05,"15":0.14,"5":0.1,"norm":{"1":0.025,"15":0.07,"5":0.05}}}}}}
2022-01-17T18:43:16.769+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":4}},"total":{"ticks":50,"time":{"ms":7},"value":50},"user":{"ticks":20,"time":{"ms":3}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"c5df38a9-9075-4dab-89ff-574af8cb60f7","uptime":{"ms":90038}},"memstats":{"gc_next":4194304,"memory_alloc":3385952,"memory_total":5067600}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0.09,"15":0.14,"5":0.1,"norm":{"1":0.045,"15":0.07,"5":0.05}}}}}}

      

      [root@elk03 ~]# systemctl restart nginx
      [root@elk03 ~]# systemctl start filebeat

      [root@elk03 ~]# tail -f /var/log/filebeat/filebeat
2022-01-17T14:50:59.287+0800    INFO    crawler/crawler.go:72   Loading Inputs: 1
2022-01-17T14:50:59.287+0800    INFO    log/input.go:138        Configured paths: [/var/log/nginx/access.log]
2022-01-17T14:50:59.287+0800    INFO    input/input.go:114      Starting input of type: log; ID: 15383831961883387157
2022-01-17T14:50:59.287+0800    INFO    crawler/crawler.go:106  Loading and starting Inputs completed. Enabled inputs: 1
2022-01-17T14:50:59.287+0800    INFO    log/harvester.go:255    Harvester started for file: /var/log/nginx/access.log
2022-01-17T14:51:29.287+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":25}},"total":{"ticks":30,"time":{"ms":37},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"83e09678-40dd-4ac9-848a-c887bddc2985","uptime":{"ms":30023}},"memstats":{"gc_next":4285200,"memory_alloc":2875216,"memory_total":4489344,"rss":15212544}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":1,"update":1},"writes":{"success":2,"total":2}},"system":{"cpu":{"cores":2},"load":{"1":0.01,"15":0.11,"5":0.07,"norm":{"1":0.005,"15":0.055,"5":0.035}}}}}}
2022-01-17T14:51:59.295+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":6}},"total":{"ticks":40,"time":{"ms":10},"value":40},"user":{"ticks":10,"time":{"ms":4}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"83e09678-40dd-4ac9-848a-c887bddc2985","uptime":{"ms":60030}},"memstats":{"gc_next":4285200,"memory_alloc":3242336,"memory_total":4856464,"rss":892928}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0.01,"15":0.11,"5":0.07,"norm":{"1":0.005,"15":0.055,"5":0.035}}}}}}
2022-01-17T14:52:29.285+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":1}},"total":{"ticks":40,"time":{"ms":4},"value":40},"user":{"ticks":10,"time":{"ms":3}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"83e09678-40dd-4ac9-848a-c887bddc2985","uptime":{"ms":90022}},"memstats":{"gc_next":4285200,"memory_alloc":3439920,"memory_total":5054048}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0,"15":0.11,"5":0.06,"norm":{"1":0,"15":0.055,"5":0.03}}}}}}
2022-01-17T14:52:59.283+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":2}},"total":{"ticks":50,"time":{"ms":7},"value":50},"user":{"ticks":20,"time":{"ms":5}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"83e09678-40dd-4ac9-848a-c887bddc2985","uptime":{"ms":120019}},"memstats":{"gc_next":4194304,"memory_alloc":1782112,"memory_total":5328960,"rss":253952}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0,"15":0.1,"5":0.05,"norm":{"1":0,"15":0.05,"5":0.025}}}}}}
2022-01-17T14:53:29.302+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":{"ms":7}},"total":{"ticks":60,"time":{"ms":9},"value":60},"user":{"ticks":20,"time":{"ms":2}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"83e09678-40dd-4ac9-848a-c887bddc2985","uptime":{"ms":150040}},"memstats":{"gc_next":4194304,"memory_alloc":2057032,"memory_total":5603880}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0,"15":0.1,"5":0.05,"norm":{"1":0,"15":0.05,"5":0.025}}}}}}
2022-01-17T14:53:59.286+0800    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":{"ms":6}},"total":{"ticks":70,"time":{"ms":10},"value":70},"user":{"ticks":30,"time":{"ms":4}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"83e09678-40dd-4ac9-848a-c887bddc2985","uptime":{"ms":180024}},"memstats":{"gc_next":4194304,"memory_alloc":2334424,"memory_total":5881272}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":1}},"system":{"load":{"1":0,"15":0.1,"5":0.04,"norm":{"1":0,"15":0.05,"5":0.02}}}}}}

   Test log collection from the second and third hosts

    With nginx and filebeat configured and running on both hosts, start testing log collection:

      From the first host, generate traffic with ab (or another method):

        [root@elk01 es_soft]#  ab -n 100 -c 100 http://10.96.211.110/

        [root@elk01 es_soft]#  ab -n 100 -c 100 http://10.96.211.111/ 

        [root@elk01 es_soft]#  ab -n 100 -c 100 http://10.96.211.110/elk02.html

        [root@elk01 es_soft]#  ab -n 100 -c 100 http://10.96.211.111/elk03.html

 

      Check the logs on the second and third hosts:

        [root@elk02 es_soft]# tail -f /var/log/nginx/access.log
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:19:03:22 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }    

 

        [root@elk03 ~]# tail -f /var/log/nginx/access.log
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }
{ "time_local": "17/Jan/2022:15:14:15 +0800", "remote_addr": "10.96.211.209", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 4833, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000" }  

 

 Collecting multiple logs (multiple indices)

  So far only one log per host (one index) has been collected; this section shows how to collect several logs (several indices) from the same host.

    The filebeat.yml after the change (the tags, the second input, and the indices block are the additions):

     [root@elk01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
#  index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

 After updating filebeat.yml on the second and third hosts as well, delete all existing indices in Kibana, restart filebeat on all three hosts, and then simulate traffic with ab (or another method) as before:

  [root@elk01 ~]#  ab -n 100 -c 100 http://10.96.211.110/elk02.html

  [root@elk01 ~]#  ab -n 100 -c 100 http://10.96.211.111/elk03.html     
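
  After fresh traffic has been generated, both conditional indices should appear in ES (a quick check matching the index names configured above):

  [root@elk01 ~]# curl -s 'http://10.96.211.209:9200/_cat/indices/nginx-*?v'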

    

  Collecting Tomcat logs with ELK

     1. Install the JDK && Tomcat (one host as the example)

        Tomcat requires a JDK, which was already installed earlier, so that step is skipped here.

        [root@elk01 ~]# yum install tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp tomcat-javadoc -y

        Start && check Tomcat:

          [root@elk01 ~]# systemctl start tomcat

          [root@elk01 ~]# netstat -lntup|grep 8080
          tcp6       0      0 :::8080                 :::*                    LISTEN      28801/java

          

 

       

    2. Switch the Tomcat access log to JSON format (one host as the example)

        Steps to configure the Tomcat log as JSON:

        a. Delete line 139 (the original AccessLogValve pattern line) of /etc/tomcat/server.xml.

         b. Paste the following content in its place at line 139 of server.xml:

            Content to paste (note the leading "{" so the logged line is valid JSON):

         pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/> 

            Line 139 after the edit is shown in the screenshot (omitted).

         c. Truncate the Tomcat log:

          [root@elk01 ~]# >/var/log/tomcat/localhost_access_log.2022-01-20.txt

        d. Restart Tomcat:

          [root@elk01 ~]# systemctl restart tomcat

        e. Configure Filebeat:

          filebeat.yml after the change (the tomcat input and the tomcat index entry are the additions):

            [root@elk01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
##########nginx#############
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

###############tomcat###################

- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["tomcat"]

##############output##########################
setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
#  index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
    - index: "tomcat-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "tomcat"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

        f. Restart the Filebeat service:

          [root@elk01 ~]# systemctl restart filebeat

        g. You can now test in Kibana: browse to http://<tomcat-host-IP>:8080 and click through the Tomcat pages to generate access-log entries.
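
        As an alternative to clicking around in the browser, traffic can be generated with ab and the JSON access log checked on the command line (a sketch; the log file name contains the current date):

          [root@elk01 ~]# ab -n 20 -c 5 http://10.96.211.209:8080/
          [root@elk01 ~]# tail -n 1 /var/log/tomcat/localhost_access_log.$(date +%F).txt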

 

 

 

 

    

  Collecting Java multi-line logs with ELK

    Elasticsearch itself is written in Java, so its log is a typical Java (multi-line) log.

    Modify the Filebeat configuration

     Note: there is no need to convert Java multi-line logs to JSON.

    To collect Java multi-line logs, configure Filebeat as follows:

      filebeat.yml after the change (the ES input with the multiline.* options and the es-java index entry are the additions; an explanation of the multiline options follows the file):

[root@elk01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
##########nginx#############
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

###############tomcat###################

#################ES######################
- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["es"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["tomcat"]


##############output##########################
setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
#  index: "nginx_access-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"
    - index: "tomcat-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "tomcat"

    - index: "es-java-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "es"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
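
    A note on the three multiline options (standard Filebeat behavior, summarized here): multiline.pattern: '^\[' matches lines that begin with "[", multiline.negate: true inverts that match, and multiline.match: after appends every non-matching line to the preceding matching line. So each "[timestamp]..." line starts a new event and the indented "at ..." stack-trace lines that follow it are folded into the same event, roughly like this (illustrative example, not real output):

      [2022-01-20T17:40:28,529][ERROR]...    <- matches ^\[ : starts a new event
              at org.elasticsearch....       <- no match    : appended to the event above
              at org.elasticsearch....       <- no match    : appended to the event above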

    Producing Java multi-line error logs

    The Elasticsearch log currently contains only normal startup messages and no errors, so a Java multi-line error has to be provoked for testing:

      To make Elasticsearch throw an error, simply break its configuration file. Here an extra "h" is added to http.port (hhttp.port below), so Elasticsearch fails to start:

        

[root@elk01 ~]# cat /etc/elasticsearch/elasticsearch.yml
#cluster.name: Linux
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
#bootstrap.memory_lock: true
network.host: 10.96.211.209,127.0.0.1
hhttp.port: 9200
#discovery.zen.ping.unicast.hosts: ["10.96.211.209", "10.96.211.110"]
#discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: "GET"

Then restart ES; it fails, and the Java error stack we need is written to the ES log:

[root@elk01 ~]# systemctl restart elasticsearch
[root@elk01 ~]# tail -f /var/log/elasticsearch/elasticsearch.log
        at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:398) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:369) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.common.settings.SettingsModule.<init>(SettingsModule.java:148) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.node.Node.<init>(Node.java:372) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.node.Node.<init>(Node.java:265) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:212) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:212) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-6.6.0.jar:6.6.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-6.6.0.jar:6.6.0]

Finally, restore the Elasticsearch configuration to its correct form and restart elasticsearch and filebeat so the captured error log is shipped to Kibana:

[root@elk01 ~]# cat /etc/elasticsearch/elasticsearch.yml
#cluster.name: Linux
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
#bootstrap.memory_lock: true
network.host: 10.96.211.209,127.0.0.1
http.port: 9200
#discovery.zen.ping.unicast.hosts: ["10.96.211.209", "10.96.211.110"]
#discovery.zen.minimum_master_nodes: 2
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: "GET"

[root@elk01 ~]# systemctl restart elasticsearch

[root@elk01 ~]# systemctl restart filebeat

[root@elk01 ~]# tail -f /var/log/elasticsearch/elasticsearch.log
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_312]
[2022-01-20T17:40:28,529][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]] ...]).
[2022-01-20T17:40:40,439][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [es-java-6.6.0-2022.01/-QLrP4RnReSkztNaXM6pAw] update_mapping [doc]

 

[root@elk01 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/elasticsearch.service.d
           └─override.conf
   Active: active (running) since 四 2022-01-20 17:40:12 CST; 1min 43s ago
     Docs: http://www.elastic.co
 Main PID: 13244 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─13244 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch...
           └─13331 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

1月 20 17:40:12 elk01 systemd[1]: Started Elasticsearch.

    Viewing the Java multi-line error entries

 Finally, check Kibana: the Java multi-line error entries are now displayed clearly (screenshots omitted).

  

 

 

    

3. Collecting logs with Filebeat's built-in modules

    Without the built-in modules you would need Logstash with fairly complex matching rules. Filebeat's modules already know how to parse the standard log formats, so there is no need to modify each application's own log-format configuration. That keeps Filebeat simple and convenient.

    List Filebeat's configuration files:

      [root@elk01 ~]# rpm -qc filebeat
/etc/filebeat/filebeat.yml
/etc/filebeat/modules.d/apache2.yml.disabled
/etc/filebeat/modules.d/auditd.yml.disabled
/etc/filebeat/modules.d/elasticsearch.yml.disabled
/etc/filebeat/modules.d/haproxy.yml.disabled
/etc/filebeat/modules.d/icinga.yml.disabled
/etc/filebeat/modules.d/iis.yml.disabled
/etc/filebeat/modules.d/kafka.yml.disabled
/etc/filebeat/modules.d/kibana.yml.disabled
/etc/filebeat/modules.d/logstash.yml.disabled
/etc/filebeat/modules.d/mongodb.yml.disabled
/etc/filebeat/modules.d/mysql.yml.disabled
/etc/filebeat/modules.d/nginx.yml.disabled
/etc/filebeat/modules.d/osquery.yml.disabled
/etc/filebeat/modules.d/postgresql.yml.disabled
/etc/filebeat/modules.d/redis.yml.disabled
/etc/filebeat/modules.d/suricata.yml.disabled
/etc/filebeat/modules.d/system.yml.disabled
/etc/filebeat/modules.d/traefik.yml.disabled

  Collecting Nginx logs with Filebeat's built-in nginx module

        The rough workflow is summarized in a screenshot (omitted); the detailed steps follow below.

      1) Point filebeat.yml at the module directory

          filebeat.yml after the change:

[root@elk01 modules.d]# cat /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
  reload.period: 10s

setup.kibana:
  host: "10.96.211.209:5601"

output.elasticsearch:
  hosts: ["10.96.211.209:9200"]
  indices:
    - index: "nginx-access-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        fileset.name: "access"

    - index: "nginx-error-%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        fileset.name: "error"


setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

       

      2) Enable the Filebeat nginx module

[root@elk01 ~]# filebeat modules enable nginx
Enabled nginx

[root@elk01 ~]# filebeat modules list
Enabled:
nginx

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
osquery
postgresql
redis
suricata
system
traefik

     

      3) Restore the original (main) log format in the Nginx configuration

         Part of the restored nginx.conf (access_log switched back to the main format):

[root@elk01 ~]# cat /etc/nginx/nginx.conf
# For more information on configuration, see:
#   * Official English Documentation: http://nginx.org/en/docs/
#   * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format  json  '{ "time_local": "$time_local", '
                      '"remote_addr": "$remote_addr", '
                      '"referer": "$http_referer", '
                      '"request": "$request", '
                      '"status": $status, '
                      '"bytes": $body_bytes_sent, '
                      '"agent": "$http_user_agent", '
                      '"x_forwarded": "$http_x_forwarded_for", '
                      '"up_addr": "$upstream_addr",'
                      '"up_host": "$upstream_http_host",'
                      '"upstream_time": "$upstream_response_time",'
                      '"request_time": "$request_time"'
                      ' }';


    access_log  /var/log/nginx/access.log  main;

    

      4) Restart the Nginx service

[root@elk01 ~]# >/var/log/nginx/access.log
[root@elk01 ~]# systemctl restart nginx

      5) Testing steps

        a. Generate plain-format Nginx log entries

          [root@elk01 modules.d]# curl 10.96.211.209

          [root@elk01 modules.d]# ab -n 20 -c 20 http://10.96.211.209/test001
          [root@elk01 modules.d]# ab -n 20 -c 20 http://10.96.211.209/test002

        b. Check the plain-format Nginx log

          [root@elk01 modules.d]# tail -f /var/log/nginx/access.log
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"
10.96.211.209 - - [20/Jan/2022:23:26:44 +0800] "GET /test002 HTTP/1.0" 404 3650 "-" "ApacheBench/2.3" "-"

        c. Edit the Filebeat nginx module configuration file

         nginx.yml after the change:

[root@elk01 modules.d]# pwd
/etc/filebeat/modules.d
[root@elk01 modules.d]# cat nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/error.log"]

          d. Restart the Filebeat service

[root@elk01 modules.d]# systemctl restart filebeat

        Check the Filebeat log: it reports that two Elasticsearch ingest plugins are missing and must be installed. Note: from Elasticsearch 6.7 onward these two plugins are bundled with Elasticsearch and no longer need to be installed separately.


[root@elk01 modules.d]# tail -f /var/log/filebeat/filebeat
2022-01-20T23:43:18.161+0800 INFO instance/beat.go:281 Setup Beat: filebeat; Version: 6.6.0
2022-01-20T23:43:18.162+0800 INFO elasticsearch/client.go:165 Elasticsearch url: http://10.96.211.209:9200
2022-01-20T23:43:18.162+0800 INFO [publisher] pipeline/module.go:110 Beat name: elk01
2022-01-20T23:43:18.162+0800 INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
2022-01-20T23:43:18.163+0800 INFO instance/beat.go:403 filebeat start running.
2022-01-20T23:43:18.163+0800 INFO registrar/registrar.go:134 Loading registrar data from /var/lib/filebeat/registry
2022-01-20T23:43:18.163+0800 INFO registrar/registrar.go:141 States Loaded from registrar: 5
2022-01-20T23:43:18.163+0800 INFO crawler/crawler.go:72 Loading Inputs: 0
2022-01-20T23:43:18.163+0800 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 0
2022-01-20T23:43:18.163+0800 INFO cfgfile/reload.go:150 Config reloader started
2022-01-20T23:43:28.196+0800 INFO log/input.go:138 Configured paths: [/var/log/nginx/access.log]
2022-01-20T23:43:28.197+0800 INFO log/input.go:138 Configured paths: [/var/log/nginx/error.log]
2022-01-20T23:43:28.197+0800 INFO elasticsearch/client.go:165 Elasticsearch url: http://10.96.211.209:9200
2022-01-20T23:43:28.206+0800 INFO elasticsearch/client.go:721 Connected to Elasticsearch version 6.6.0
2022-01-20T23:43:28.209+0800 ERROR fileset/factory.go:142 Error loading pipeline: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip
2022-01-20T23:43:28.209+0800 INFO input/input.go:114 Starting input of type: log; ID: 11511766710343781629
2022-01-20T23:43:28.209+0800 INFO input/input.go:114 Starting input of type: log; ID: 14645289590541825168

 

As the log above shows, two plugins are required. Install them on the Elasticsearch node:

[root@elk01 modules.d]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
-> Downloading ingest-user-agent from elastic
[=================================================] 100%  
-> Installed ingest-user-agent
[root@elk01 modules.d]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
-> Downloading ingest-geoip from elastic
[=================================================] 100%  
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessDeclaredMembers
* java.lang.reflect.ReflectPermission suppressAccessChecks
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
-> Installed ingest-geoip

      e. Generate log entries and then verify the results in Kibana:

      [root@elk01 modules.d]# ab -n 20 -c 20 http://10.96.211.209/test002
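
      To confirm that the module's ingest pipelines were loaded and the module-driven indices exist (an optional check; pipeline names follow the filebeat-<version>-nginx-* convention):

      [root@elk01 modules.d]# curl -s 'http://10.96.211.209:9200/_ingest/pipeline?pretty' | grep nginx
      [root@elk01 modules.d]# curl -s 'http://10.96.211.209:9200/_cat/indices/nginx-*?v'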

 

        
