Deployment files: filebeat -> kafka cluster (zk cluster) -> logstash -> es cluster -> kibana
The archive contains the following files:
1. install_java.txt              sets up the Java environment (required by logstash)
2. es.txt                        three-node es cluster
3. filebeat.txt                  collects logs and ships them to the kafka cluster
4. install_zookeeper_cluster.txt zk cluster
5. install_kafka_cluster.txt     kafka cluster
6. logstash.txt
7. kibana.txt
Download: https://files.cnblogs.com/files/sanduzxcvbnm/部署文件.zip
Extras:
Create the Kafka topic manually:
/opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic apache
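To confirm the topic was actually created, a quick check (assuming the same kafka install path and ZooKeeper address as above; requires a running cluster):

```shell
# List all topics registered in ZooKeeper; "apache" should appear.
/opt/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181

# Show the partition and replica layout for the new topic.
/opt/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic apache
```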
filebeat.yml configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /etc/filebeat/access.log

output.kafka:
  codec.format:
    string: '%{[@timestamp]} %{[message]}'
  hosts: ["192.168.43.192:9092"]
  topic: apache
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
Note the use of the codec.format directive: it renders each event as plain text containing only the timestamp and message fields. Without it, each line would be sent to Kafka as a full JSON document.
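Before starting the service, the configuration and the broker connection can be checked with filebeat's built-in test subcommands, and the topic can be tailed from the broker side to confirm lines arrive as plain text (a sketch, assuming filebeat is installed and the broker address above; requires running services):

```shell
# Validate filebeat.yml syntax.
filebeat test config -c /etc/filebeat/filebeat.yml

# Check connectivity to the Kafka broker defined under output.kafka.
filebeat test output -c /etc/filebeat/filebeat.yml

# Consume the topic to verify events land as "timestamp message" text, not JSON.
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.43.192:9092 \
  --topic apache --from-beginning
```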
Logstash pipeline configuration, apache.conf:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["apache"]
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.43.220:9200"]
  }
}
Start logstash with the specified config file: ./bin/logstash -f apache.conf
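The pipeline file can be syntax-checked before a real start, and once events flow, the resulting index can be confirmed in Elasticsearch (a sketch, assuming the es address above; by default the elasticsearch output writes to date-stamped logstash-* indices):

```shell
# Dry-run: parse apache.conf and exit without starting the pipeline.
./bin/logstash -f apache.conf --config.test_and_exit

# After events have been processed, list indices; a logstash-* index should appear.
curl 'http://192.168.43.220:9200/_cat/indices?v'
```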