Hands-on ELK Case Studies

I. LogStash filter: introduction and usage

1. logstash filter overview

http://grokdebug.herokuapp.com #online grok/regex debugger
https://github.com/elastic/logstash/blob/v1.4.2/patterns/grok-patterns #grok patterns shipped with logstash
root@web1:~# vim /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-patterns-core-4.3.4/patterns/legacy/grok-patterns #built-in grok patterns installed with the local logstash
ll /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/
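To see how a built-in pattern such as IPORHOST or HTTPDATE is defined, the pattern file can be searched directly; a quick look-up sketch (the gem path is an assumption that depends on the installed logstash-patterns-core version, as shown above):

root@web1:~# grep -E "^(IPORHOST|HTTPDATE) " /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-patterns-core-4.3.4/patterns/legacy/grok-patterns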
 
(1) filter plugins take events arriving from the input stage and, according to specified conditions, parse data, remove fields, convert data types, and so on, before the events are sent from the output stage to a destination server such as Elasticsearch for storage and display. The filter stage implements different functions through different plugins (a combined usage sketch follows the list below); official documentation:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
a. aggregate: aggregates multiple log lines belonging to the same event, https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html
b. bytes: converts storage units such as MB, GB, and TB into bytes, https://www.elastic.co/guide/en/logstash/current/plugins-filters-bytes.html
c. date: parses dates from events and uses them as the logstash timestamp, https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html
..........
e. geoip: looks up geographic information for an IP address and adds it to the event, https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html
f. grok: matches events against regular expressions and outputs them in JSON format. grok is often used to restructure non-JSON logs (system error logs, logs from middleware such as MySQL and ZooKeeper, network device logs, etc.) into JSON, after which the converted logs are written to Elasticsearch for storage and then visualized with Kibana, https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
g. mutate: renames, removes, and modifies fields in events, https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html
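As a quick illustration of how several of these plugins combine, here is a minimal filter sketch (the field names "timestamp", "clientip", and "response_code" are assumptions for illustration, not taken from a real pipeline):

filter {
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]   #parse the captured timestamp and use it as @timestamp
  }
  geoip {
    source => "clientip"                               #add geographic fields derived from the client IP
  }
  mutate {
    convert => { "response_code" => "integer" }        #change the field's data type
    remove_field => ["timestamp"]                      #drop the now-redundant raw timestamp field
  }
}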
 

2. Configuration steps for log collection with logstash filter

(1) Install the nginx web service
(2) Configure nginx to serve requests for a domain (a minimal server-block sketch follows this list)
(3) Start nginx
(4) Configure logstash to collect the nginx access log and filter/process it with filter plugins
(5) Restart logstash and verify the nginx access log in Kibana
(6) Configure logstash to collect the nginx error log and filter/process it with filter plugins
(7) Restart logstash and verify the nginx error log in Kibana
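For step (2), a minimal server block is enough; the domain name below is a placeholder assumption, and the log paths match the logstash input in section 3:

server {
    listen       80;
    server_name  www.example.com;                 #placeholder domain, replace with your own
    access_log   /apps/nginx/logs/access.log;     #default combined format, matched by the grok pattern below
    error_log    /apps/nginx/logs/error.log;
    location / {
        root   html;
        index  index.html;
    }
}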
 

3. logstash configuration

input {
  file {
    path => "/apps/nginx/logs/access.log"
    type => "nginx-accesslog"
    stat_interval => "1"
    start_position => "beginning"
  }

  file {
    path => "/apps/nginx/logs/error.log"
    type => "nginx-errorlog"
    stat_interval => "1"
    start_position => "beginning"
  }

}

filter {
  if [type] == "nginx-accesslog" {
    grok {
      match => { "message" => ["%{IPORHOST:clientip} - %{DATA:username} \[%{HTTPDATE:request-time}\] \"%{WORD:request-method} %{DATA:request-uri} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:useragent}\""] }
      remove_field => "message"
      add_field => { "project" => "nuo"}
    }
    mutate {
      convert => [ "[response_code]", "integer"]
    }
  }
  if [type] == "nginx-errorlog" {
    grok {
      # capture the free-text part into "errormessage" instead of "message":
      # capturing into the existing "message" field and then removing it with
      # remove_field would also discard the captured text
      match => { "message" => ["(?<timestamp>%{YEAR}[./]%{MONTHNUM}[./]%{MONTHDAY} %{TIME}) \[%{LOGLEVEL:loglevel}\] %{POSINT:pid}#%{NUMBER:threadid}\: \*%{NUMBER:connectionid} %{GREEDYDATA:errormessage}, client: %{IPV4:clientip}, server: %{GREEDYDATA:server}, request: \"(?:%{WORD:request-method} %{NOTSPACE:request-uri}(?: HTTP/%{NUMBER:httpversion}))\", host: %{GREEDYDATA:domainname}"]}
      remove_field => "message"
    }
  }
}

output {
  if [type] == "nginx-accesslog" {
    elasticsearch {
      hosts => ["192.168.84.132:9200"]
      index => "nuo-nginx-accesslog-%{+yyyy.MM.dd}"
      user => "nuo"
      password => "12345678"
    }
  }

  if [type] == "nginx-errorlog" {
    elasticsearch {
      hosts => ["192.168.84.132:9200"]
      index => "nuo-nginx-errorlog-%{+yyyy.MM.dd}"
      user => "nuo"
      password => "12345678"
    }
  }
}
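When first testing the grok patterns, it can help to print parsed events to the console instead of (or alongside) Elasticsearch; a common debugging sketch using the standard stdout output plugin:

output {
  stdout {
    codec => rubydebug    #pretty-print each parsed event with all of its fields
  }
}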

 

 
[root@logstash conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginxlog-to-es.conf   #foreground run for testing
root@web1:/etc/logstash/conf.d# systemctl restart logstash.service   #or restart the service to pick up the new config
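Before (re)starting, the configuration syntax can be checked without actually running the pipeline, using logstash's -t (--config.test_and_exit) flag:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginxlog-to-es.conf -t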

 

II. Collecting nginx JSON-format access logs with LogStash

1. Define a custom nginx log format

vim /usr/local/nginx/conf/nginx.conf
log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"uri":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"tcp_xff":"$proxy_protocol_addr",'
        '"http_user_agent":"$http_user_agent",'
        '"status":"$status"}';
    access_log  /var/log/nginx/access.log  access_json;

mkdir -p /var/log/nginx

/usr/local/nginx/sbin/nginx -t   #check the configuration syntax
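After the syntax check passes, reload nginx and send a request; each access is then logged as one JSON object per line. An illustrative log line (all field values here are made up for the example):

/usr/local/nginx/sbin/nginx -s reload

{"@timestamp":"2023-03-14T21:00:00+08:00","host":"192.168.84.131","clientip":"192.168.84.1","size":612,"responsetime":0.000,"upstreamtime":"-","upstreamhost":"-","http_host":"192.168.84.131","uri":"/index.html","domain":"192.168.84.131","xff":"-","referer":"-","tcp_xff":"","http_user_agent":"curl/7.81.0","status":"200"}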

2. logstash configuration

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "end"
    type => "nginx-json-accesslog"
    stat_interval => "1"
    codec => json
  }
}


output {
  if [type] == "nginx-json-accesslog" {
    elasticsearch {
      hosts => ["192.168.84.132:9200"]
      index => "nginx-accesslog-2.107-%{+YYYY.MM.dd}"
      user => "nuo"
      password => "12345678"
  }}
}

 

 /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx-json-log-to-es.conf
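A quick smoke test, assuming nginx is listening locally on port 80: send a request and confirm a JSON line is appended to the log:

curl -s http://127.0.0.1/ > /dev/null
tail -n 1 /var/log/nginx/access.log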

3. Verify the nginx access logs in Kibana:

 

III. Collecting Java service logs with LogStash

1. Example: collecting Java service logs with LogStash

(1) Install logstash on the ES server:

(2) multiline codec plugin documentation:

https://www.elastic.co/guide/en/logstash/8.6/plugins-codecs-multiline.html (Multiline codec plugin | Logstash Reference [8.6] | Elastic)
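The multiline codec is needed because a Java/ES log event spans several lines: stack traces follow the timestamped first line. An illustrative (made-up) event that the codec merges back into a single logstash event:

[2023-03-14T21:10:00,123][ERROR][o.e.b.Bootstrap          ] [node-1] Exception
java.lang.IllegalStateException: failed to obtain node locks
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:300)
        ... 6 more

Only the first line matches ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}, so with negate => "true" and what => "previous" the remaining lines are folded into the preceding event.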

(3) logstash configuration

input {
  file {
    path => "/data/eslogs/nuo-es-cluster.log"
    type => "eslog"
    stat_interval => "1"
    start_position => "beginning"
    codec => multiline {
      #pattern => "^\["
      pattern => "^\[[0-9]{4}\-[0-9]{2}\-[0-9]{2}" #regex marking the boundary (start or end) of an event, used to split events apart
      negate => "true"     #act on lines that do NOT match the pattern (true) or that do (false)
      what => "previous"   #merge such lines into the previous event or the next one
    }
  }
}

output {
  if [type] == "eslog" {
    elasticsearch {
      hosts => ["192.168.84.132:9200"]
      index => "nuo-eslog-%{+YYYY.ww}"
      user => "nuo"
      password => "12345678"
    }
  }
}

2. Create the index pattern in Kibana and verify the ES logs
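Before creating the index pattern, the index itself can be confirmed from the command line with the standard _cat API (assuming the "nuo" user configured above has permission to call it):

curl -u nuo:12345678 "http://192.168.84.132:9200/_cat/indices?v" | grep nuo-eslog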
