filebeat.yml

Goal: collect the files action-access.log, other.log, main.log, and error.log under /var/applog/sncfc-trcs on host 172.31.1.184, and ship them to Kafka.

[root@trcs01-prd filebeat-5.5.0]# cat filebeat.yml
name: "172.31.1.184"
filebeat.prospectors:
- input_type: log                  # input type: "log" (default) or "stdin"
  paths:                           # logs to monitor; specific files or glob patterns
    - /var/applog/sncfc-trcs/action-access.log
  document_type: action            # sets the "type" field on output events, useful for classifying logs
  tags: ["regex"]
  multiline:                       # how Filebeat handles entries that span multiple lines, e.g. Java stack traces
    pattern: '^\d{4}\-\d{2}\-\d{2}'  # regex marking the start of a new entry; Filebeat's regexp syntax differs from Logstash's
    negate: true                   # negate the match: lines NOT matching the pattern belong to the adjacent matching line
    match: after                   # append those non-matching lines after the matching line
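  # Illustration (hypothetical log excerpt, not from the source): with the
  # pattern, negate and match settings above, continuation lines that do not
  # start with a date are appended to the preceding dated line, so a stack
  # trace becomes a single event:
  #   2020-12-04 11:00:01 ERROR request failed     <- new event starts here
  #   java.lang.NullPointerException               <- appended to the event
  #       at com.example.Foo.bar(Foo.java:42)      <- appended to the event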
- input_type: log
  paths:
    - /var/applog/sncfc-trcs/other.log
  document_type: other
- input_type: log
  paths:
    - /var/applog/sncfc-trcs/main.log
  document_type: main
- input_type: log
  paths:
    - /var/applog/sncfc-trcs/error.log
  document_type: error
  multiline:
    pattern: '^\d{4}\-\d{2}\-\d{2}'
    negate: true
    match: after
# note: in Filebeat 5.x, ignore_older and encoding are per-prospector options
ignore_older: 1h                   # skip files not modified within the last hour
encoding: utf-8
tags: ["trcs01-prd"]               # appended to the "tags" field of every event
processors:
- drop_fields:                     # remove noisy fields from every event before output
    fields: ["beat.version", "input_type", "offset"]
output.kafka:
  enabled: true                    # enable the Kafka output
  hosts: ["172.16.255.192:9092", "172.16.255.193:9092", "172.16.255.194:9092"]
  topic: 'sncfc'                   # Kafka topic to publish events to
  worker: 1
  required_acks: 1                 # wait for the leader's acknowledgement only
  compression: gzip
  max_message_bytes: 10000000

logging.level: info                # top-level logging setting, not part of output.kafka
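
To smoke-test the pipeline (assuming the stock Kafka command-line tools are available on one of the brokers), start Filebeat in the foreground with this config, then consume the topic to confirm events arrive:

[root@trcs01-prd filebeat-5.5.0]# ./filebeat -e -c filebeat.yml

# on a broker host, e.g. 172.16.255.192:
bin/kafka-console-consumer.sh --bootstrap-server 172.16.255.192:9092 --topic sncfc --from-beginning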
