Log Collection, Part 1: Collecting Logs with Logstash


Several ways to collect logs:
1. logstash (uses more memory, but has the richest feature set)
2. logstash TCP/UDP listener: logstash listens on a port, and the "other" servers send their logs to it with the nc command (see the sketch below)
3. rsyslog: collect the logs with rsyslog and have logstash receive and forward them to ES
4. filebeat: ships logs to es / redis / logstash / kafka (uses little memory and does not need Java; does not support multiple outputs or if-type conditionals, so fields: must be defined first in the filebeat server's configuration file)
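
A minimal sketch of method 2 (the listening port 9889 and the index name are assumptions for illustration; the IP addresses follow the lab setup described later in this post): logstash opens a TCP listener, and any other server pushes logs into it with nc.

# /etc/logstash/conf.d/tcp.conf on the logstash server (192.168.80.130)
input {
  tcp {
    port => 9889              # port logstash listens on (assumed value)
    mode => "server"
    type => "tcplog"          # tag these events so the output can filter on them
  }
}
output {
  elasticsearch {
    hosts => ["192.168.80.120:9200"]
    index => "tcplog-%{+YYYY.MM.dd}"
  }
}

# on the "other" server, send a file or a single message with nc:
# nc 192.168.80.130 9889 < /var/log/messages
# echo "hello from nc" | nc 192.168.80.130 9889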



Deploying logstash:
Logstash is an open-source data collection engine written in Ruby. It can scale horizontally and has the most plugins of any component in the ELK stack; it accepts data from many different sources and ships it to one or more destinations.
Logstash reference documentation: https://www.elastic.co/guide/en/logstash/current/index.html
Package mirror: https://mirrors.aliyun.com/elasticstack/yum/elastic-7.x/7.6.2/

Install the JDK:
# yum install jdk-8u121-linux-x64.rpm

Install logstash:
# yum install logstash-5.3.0.rpm
# chown logstash.logstash /usr/share/logstash/data/queue -R   # change ownership to the logstash user and group, otherwise errors appear in the log at startup

Paths:
/etc/logstash/logstash.yml       main configuration file
/etc/logstash/conf.d/            custom log-collection configurations
/var/log/logstash/logstash-*     log files

Common options (/usr/share/logstash/bin/logstash --help):
-e      pass the pipeline configuration as a string on the command line (used for quick terminal tests)
-f      path to a pipeline configuration file
-t      check the configuration file syntax

Check a configuration file:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/dmesg.conf -t
Run as a foreground process:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/dmesg.conf
Run as a service:
systemctl restart logstash

Basic components: input, filter, output
input:  where to read data from — stdin (standard input), file, jdbc (databases), syslog, ganglia (monitoring logs), tcp/udp (network protocols), kafka (message queue)
filter: optional; filters and transforms events
output: where to write the collected data — stdout (standard output), file, es, kafka, webhdfs; codec => rubydebug selects the output format
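
The three components map one-to-one onto a pipeline file. A minimal skeleton for illustration (stdin in, rubydebug to stdout, empty filter):

input {              # where events come from
  stdin {}
}
filter {             # optional: parse / enrich / drop events here
}
output {             # where events go
  stdout { codec => rubydebug }
}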

 

2. Logstash features and usage

(The tests below run in the foreground; stop them with Ctrl+C, which is when the "data" is produced.)
1. Standard input and output test:
        # /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => "rubydebug" } }'
        hello
        {
         "@timestamp" => 2017-04-20T02:30:01.600Z,  # time the event occurred
         "@version" => "1",                         # event version number; each event is one Ruby object
        "host" => "linux-host3.exmaple.com",         # where the event occurred
        "message" => "hello"                        # the message content itself
        }
2. Collect data from standard input and write it to a file:
        # /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { file { path => "/tmp/logstash-linux39.txt" } }'
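
        The file output writes one JSON document per line by default, so the result can be checked directly (the line shown is illustrative):
        # tail -1 /tmp/logstash-linux39.txt
        {"@timestamp":"2017-04-20T02:35:12.001Z","@version":"1","host":"linux-host3.exmaple.com","message":"hello"}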

3. Change the output to elasticsearch:
        # /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.80.120:9200"]  index => "linux39-%{+YYYY.MM.dd}" } }'

4. Change the input to a log file:
        start_position  string, one of ["beginning", "end"]     where in the source file to start reading into ES: "beginning" reads the entire file; "end" starts from the current position and skips earlier content.
        stat_interval   number or string_duration               collection interval: how often, in seconds, the file (e.g. /var/log/dmesg) is checked for new content
        elasticsearch:  output service type
        index:          index name on the ES server
        The input is a file, so the user running logstash must have read permission on it.
        hosts:          ES host IP
        
        # /usr/share/logstash/bin/logstash -e 'input { file { path => "/var/log/dmesg" start_position => "beginning" stat_interval => "3"} } output { elasticsearch { hosts => ["192.168.80.120:9200"]  index => "linux40-%{+YYYY.MM.dd}" } }'

5. Use a configuration file and run logstash in the background: the four examples above run as foreground processes, while a configuration file lets logstash run as a service. (Kibana deployment and log collection are covered below.)
# cat /etc/logstash/conf.d/dmesg.conf 
input { 
  file { 
    path => "/var/log/dmesg" 
    start_position => "beginning" 
    stat_interval => "3"
  } 
} 
output { 
  elasticsearch { 
    hosts => ["192.168.80.120:9200"]  
    index => "linux39-%{+YYYY.MM.dd}" 
  } 
}
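
Once the pipeline is running, you can confirm from any host that the index was created through the ES REST API (the listing shown is illustrative):

# curl -s 'http://192.168.80.120:9200/_cat/indices?v' | grep linux39
green  open  linux39-2021.11.30  ...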


Use a browser to check the index status shown by the head / cerebro plugins:

 

 

 

Lab environment:
192.168.80.100 localhost7A.localdomain    node1   head  cerebro  kibana
192.168.80.110 localhost7B.localdomain    node2  
192.168.80.120 localhost7C.localdomain    node3  
192.168.80.130 localhost7D.localdomain    logstash   nginx   tomcat
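
For the hostnames above to resolve, each machine can carry the mapping in /etc/hosts (a sketch derived from the table; adjust to your own environment):

# cat /etc/hosts
192.168.80.100  localhost7A.localdomain
192.168.80.110  localhost7B.localdomain
192.168.80.120  localhost7C.localdomain
192.168.80.130  localhost7D.localdomain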


# ES cluster configuration file
[root@localhost7A ~]# grep  -v  ^#  /usr/local/elasticsearch-7.6.1/config/elasticsearch.yml 
cluster.name: ZZHZ
node.name: node-1 
path.data: /usr/local/elasticsearch-7.6.1/data
path.logs: /usr/local/elasticsearch-7.6.1/logs
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.80.100", "192.168.80.110", "192.168.80.120"]
cluster.initial_master_nodes: ["192.168.80.100", "192.168.80.110", "192.168.80.120"]
gateway.recover_after_nodes: 2

http.cors.enabled: true
http.cors.allow-origin: "*"
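
With all three ES nodes started, the cluster state can be verified from any node (standard ES API; output abbreviated):

# curl -s http://192.168.80.100:9200/_cluster/health?pretty
{
  "cluster_name" : "ZZHZ",
  "status" : "green",
  "number_of_nodes" : 3,
  ...
}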


 

Collect a single system log file with logstash and ship it to elasticsearch:
cat /etc/logstash/conf.d/dmesg.conf 
input { 
  file {                         # if there are multiple file inputs they all end up in the same output and the logs get mixed together, so use type to tag and separate them.
    path => "/var/log/dmesg"   # mind the permissions on the file and its parent directories
    start_position => "beginning" 
    stat_interval => "3"
  } 
} 
output { 
  elasticsearch { 
    hosts => ["192.168.80.100:9200"]  
    index => "syslog-%{+YYYY.MM.dd}" 
  } 
}

chmod 644 /var/log/dmesg    # also check the permissions of the parent directories
/usr/share/logstash/bin/logstash  -f /etc/logstash/conf.d/dmesg.conf  -t 
systemctl  restart logstash

# tail  -f  /var/log/logstash/logstash-plain.log   # the messages below indicate a successful start
[2021-11-30T14:44:47,160][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-11-30T14:44:47,218][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-11-30T14:44:47,238][INFO ][filewatch.observingtail  ][main] START, creating Discoverer, Watch with file and sincedb collections
[2021-11-30T14:44:47,517][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
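
The last line also means the Logstash monitoring API is listening on port 9600, which gives another quick health check (output abbreviated):

# curl -s http://127.0.0.1:9600/?pretty
{
  "host" : "localhost7D.localdomain",
  "http_address" : "127.0.0.1:9600",
  ...
}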
 

Check the index status in the head / cerebro plugins:

 


 

In http://192.168.80.100:5601, go to Management -> Kibana -> Index Patterns and create an index pattern named syslog-*
In http://192.168.80.100:5601, use Discover to view the log entries for that index pattern

 


 

------------------------------------------------------------------------------------------------------------

Collecting multiple log files (Tomcat and Java logs) with logstash:
Collect the Tomcat servers' access logs and error logs for real-time statistics, then search and display them in Kibana. Each Tomcat server needs logstash installed to collect its logs,
forward them to elasticsearch for analysis, and present them through the Kibana front end.

Collecting multiple log files with logstash — Logstash configuration:
input { 
  file { 
    path => "/var/log/syslog" 
    start_position => "beginning" 
    stat_interval => "3"
    type => "syslog"   # type tag, used in output conditionals
  } 
  file {
    path => "/usr/lcoal/tomcat/logs/localhost_access_log.*" #注意,多天的日志也能匹配,需要配合start_position => "end"使用。
    start_position => "end"
    stat_interval => "3"
    type => "tomcat-access-log"
  }
} 

output { 
  if [type] == "syslog" {
  elasticsearch { 
    hosts => ["192.168.80.100:9200"]  
    index => "syslog-130-%{+YYYY.MM.dd}" 
    }
  }
  if [type] == "tomcat-access-log" {
    elasticsearch {
      hosts => ["192.168.80.100:9200"]
      index => "tomcat-accesslog-130-%{+YYYY.MM.dd}"
    }
  file {
    # also keep a local copy of the access log
    path => "/tmp/tomcat-accesslog-130-%{+YYYY.MM.dd}.log"
    }
  }
}

systemctl  restart logstash
# tail  -f  /var/log/logstash/logstash-plain.log 
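
After the restart, both indices defined in the output section should show up in ES (a quick check; the listing is illustrative):

# curl -s 'http://192.168.80.100:9200/_cat/indices?v' | grep 130
green  open  syslog-130-2021.12.01            ...
green  open  tomcat-accesslog-130-2021.12.01  ...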

----------------------------------------------------------------------

# Emit the Tomcat access log in JSON so that its fields can be visualized in Kibana.
# Tomcat JSON log format (server.xml):
  <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true"> 
       <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                prefix="localhost_access_log" suffix=".log"
    pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,
    &quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
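
With this Valve pattern, every request is written as one JSON object per line; a line in localhost_access_log.*.log would look roughly like this (values are illustrative):

{"clientip":"192.168.80.1","ClientUser":"-","authenticated":"-","AccessTime":"[07/Dec/2021:16:30:00 +0800]","method":"GET /index.html HTTP/1.1","status":"200","SendBytes":"11250","Query?string":"","partner":"-","AgentVersion":"Mozilla/5.0"}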



Logstash configuration with the JSON codec:
input {
  file {
    path => "/var/log/dmesg"
    start_position => "beginning"
    stat_interval => "3"
    type => "syslog"
  }
  file {
    path => "/usr/local/apache-tomcat-8.5.42/logs/localhost_access_log.*.log"
    start_position => "end"
    stat_interval => "3"
    type => "tomcat-access-log"
    codec => "json"   # parse the Tomcat access-log lines as JSON
  }
}
output {
  if [type] == "syslog" {
  elasticsearch {
    hosts => ["192.168.80.100:9200"]
    index => "syslog.104-%{+YYYY.MM.dd}"
    }
  file {
    path => "/tmp/dmesg-linux39.log"
    }
  }
  if [type] == "tomcat-access-log" {
    elasticsearch {
      hosts => ["192.168.80.100:9200"]
      index => "tomcat-accesslog_%{+YYYY.MM.dd}"
    }
  file {
    path => "/tmp/tomcat-accesslog-2.104.log"
    }
  }
}

In http://192.168.80.100:5601, go to Management -> Kibana -> Index Patterns and create an index pattern (named linux39-* here)
In http://192.168.80.100:5601, use Discover to view the log entries
Build charts in Kibana under Visualize; logs used for visualization must be in JSON format

------------------------------------------------------------------------------------

Collecting multi-line Java logs:
Use the multiline codec plugin to merge several lines into one event; its what option controls whether a matched line is merged with the preceding lines or with the following lines.

A Java error is logged across multiple lines, and those lines together make up one error entry.
Example: cat /var/log/java.conf
[ main ] INFO  c.j.training.logging.service.UserService - exception while reading the configuration file
java.io.FileNotFoundException: File not exists
  at cn.justfly.training.logging.service.UserServiceTest.testLogResult(UserServiceTest.java:31) ~ [ test-classes/:na ]
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~ [ na:1.6.0_45 ]
[ main ] INFO  c.j.training.logging.service.UserService - exception while reading the configuration file
java.io.FileNotFoundException: File not exists
  at cn.justfly.training.logging.service.UserServiceTest.testLogResult(UserServiceTest.java:31) ~ [ test-classes/:na ]
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~ [ A1 ]


cat /etc/logstash/conf.d/java.conf
input {
  stdin {
    codec => multiline {
      pattern => "^\["        # a line starting with [ marks the beginning of a new event
      negate => true          # true: apply "what" to lines that do NOT match the pattern; false: to lines that do
      what => "previous"      # merge with the preceding lines; "next" would merge with the following lines
    }
  }
}

filter { # filtering; a filter that applies to all logs goes here, while processing specific to one input (such as a codec) goes inside that input in the input section
}

output {
 stdout {
 codec => rubydebug
}
}

Test that it starts correctly:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/java.conf
[INFO ] 2021-12-07 16:19:33.759 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
111
222
333
[        
{
          "tags" => [
        [0] "multiline"
    ],
       "message" => "[\n111\n222\n333",
      "@version" => "1",
          "host" => "localhost7D.localdomain",
    "@timestamp" => 2021-12-07T08:20:28.196Z
}

444
555[
6[6]
[
{
          "tags" => [
        [0] "multiline"
    ],
       "message" => "[\n444\n555[\n6[6]",
      "@version" => "1",
          "host" => "localhost7D.localdomain",
    "@timestamp" => 2021-12-07T08:21:40.650Z
}

Change the output to elasticsearch:
Configure logstash to read the log file and write it to ES:
vim /etc/logstash/conf.d/java.conf
input {
  file {
    path => "/elk/logs/ELK-Cluster.log"#模拟java日志
    type => "javalog"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "javalog" {
    elasticsearch {
      hosts => ["192.168.80.100:9200"]
      index => "javalog-1511-%{+YYYY.MM.dd}"
    }
  }
}

systemctl restart logstash
In http://192.168.80.100:5601, go to Management -> Kibana -> Index Patterns and create the index pattern:
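
To confirm that the stack-trace lines really were merged into single events, a document can also be pulled straight from ES (standard search API; output trimmed to the message field, values illustrative):

# curl -s 'http://192.168.80.100:9200/javalog-1511-*/_search?pretty&size=1'
...
"message" : "[ main ] INFO  c.j.training.logging.service.UserService - exception while reading the configuration file\njava.io.FileNotFoundException: File not exists\n  at cn.justfly.training.logging.service.UserServiceTest.testLogResult(UserServiceTest.java:31) ...",
...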

 
