ELK: Using Filebeat to Collect Logs and Write Them to Kafka

Filebeat is a lightweight log shipper. It does not depend on a Java runtime and has a small memory footprint, so it can run on servers or in containers where a Java environment cannot be installed.

1. Using Filebeat to collect logs and write them to Kafka

[root@linux-host2 src]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log          # which logs to collect
    - /var/log/messages
  exclude_lines: ["^DBG"]
  exclude_files: [".gz$"]
  document_type: "system-log-1512-filebeat"
output.file:                  # test output: also write events to a local file
  path: "/tmp"
  filename: "filebeat.txt"
output.kafka:                 # write to Kafka
  hosts: ["192.168.15.11:9092","192.168.15.12:9092","192.168.15.13:9092"]   # Kafka cluster
  topic: "systemlog-1512-filebeat"    # topic name
  partition.round_robin:
    reachable_only: true
  required_acks: 1            # 1 = wait for the leader's local write to complete
  compression: gzip           # enable compression
  max_message_bytes: 1000000  # maximum message size

Start Filebeat and verify that data is being written to the local test file.
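A minimal check, assuming Filebeat was installed as a system service (the file name and path come from the output.file settings above):

systemctl start filebeat       # start the service
tail -f /tmp/filebeat.txt      # each collected log line should appear here as a JSON event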

Verify that the events were also written to Kafka:

/usr/local/kafka/bin/kafka-topics.sh  --list --zookeeper   192.168.15.11:2181,192.168.15.12:2181,192.168.15.13:2181
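If the topic shows up in the list, you can also consume a few messages to inspect them. A sketch using the console consumer that ships with Kafka (the --zookeeper form matches the old-consumer style used above; on newer Kafka versions use --bootstrap-server with a broker address instead):

/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.15.11:2181 --topic systemlog-1512-filebeat --from-beginning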

Configure Logstash to read the logs from Kafka and write them to Elasticsearch:

input {
  kafka {
    bootstrap_servers => "192.168.15.11:9092"
    topics => "systemlog-1512-filebeat"
    consumer_threads => 1
    decorate_events => true
    codec => "json"
    auto_offset_reset => "latest"
  }
}

output {
  if [type] == "system-log-1512-filebeat" {
    elasticsearch {
      hosts => ["192.168.15.11:9200"]
      index => "system-log-1512-filebeat-%{+YYYY.MM.dd}"
    }
  }
}
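It is worth validating the pipeline syntax before starting Logstash. A sketch, assuming Logstash is installed under /usr/local/logstash and the configuration above was saved as /etc/logstash/conf.d/kafka-es.conf (both paths are assumptions, adjust to your layout):

/usr/local/logstash/bin/logstash -f /etc/logstash/conf.d/kafka-es.conf -t    # syntax check only
systemctl start logstash                                                     # start the service once the check passes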

2. Using Filebeat to collect multiple log files

grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/syslog.log
    - /var/log/messages
  exclude_lines: ["^DBG"]
  exclude_files: [".gz$"]
  #document_type: "system-log-1512"
  fields:
    type: "system-log-1512"
  fields_under_root: true     # promote fields.type to a top-level [type] so the Logstash conditionals below match

- input_type: log
  paths:
    - /var/log/nginx/access.log
  exclude_lines: ["^DBG"]
  exclude_files: [".gz$"]
  fields:
    type: "nginx-accesslog-1512"
  fields_under_root: true
output.file:
  path: "/tmp"
  filename: "filebeat.txt"
output.kafka:
  hosts: ["192.168.15.11:9092","192.168.15.12:9092","192.168.15.13:9092"]
  topic: '%{[type]}'          # route each event to a topic named after its type, matching the Logstash inputs below
  partition.round_robin:
    reachable_only: true      # if true, events are published only to reachable partitions; if false, to all partitions
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
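After restarting Filebeat, verify that the two per-type topics were created (this assumes automatic topic creation is enabled on the brokers):

/usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.15.11:2181,192.168.15.12:2181,192.168.15.13:2181    # expect system-log-1512 and nginx-accesslog-1512 in the output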

Configure Logstash to read the nginx and system logs from Kafka and write them to Elasticsearch:

input {
  kafka {
    bootstrap_servers => "192.168.15.11:9092"
    topics => "nginx-accesslog-1512"
    codec => "json"
    consumer_threads => 1
    decorate_events => true
  }

  kafka {
    bootstrap_servers => "192.168.15.11:9092"
    topics => "system-log-1512"
    consumer_threads => 1
    decorate_events => true
    codec => "json"
  }
}

output {
  if [type] == "nginx-accesslog-1512" {
    elasticsearch {
      hosts => ["192.168.15.11:9200"]
      index => "nginx-accesslog-1512-%{+YYYY.MM.dd}"
    }
  }

  if [type] == "system-log-1512" {
    elasticsearch {
      hosts => ["192.168.15.12:9200"]
      index => "system-log-1512-%{+YYYY.MM.dd}"
    }
  }
}
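Finally, confirm that both daily indices are being created in Elasticsearch (any node in the cluster will answer):

curl 'http://192.168.15.11:9200/_cat/indices?v'    # look for nginx-accesslog-1512-* and system-log-1512-* entries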

 
