ELK - Part 2
Collect the nginx log and the system log into Kafka, then use Logstash to read them back out and write them to Elasticsearch.
##node1: write the nginx log into Kafka
[root@node1 conf.d]# vim /etc/logstash/conf.d/nginx.conf
input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx-access-log-1105"
    start_position => "beginning"
    stat_interval => "2"
    codec => "json"
  }
  file {
    path => "/var/log/messages"
    type => "system-log-1105"
    start_position => "beginning"
    stat_interval => "2"
  }
}
output {
  if [type] == "nginx-access-log-1105" {
    kafka {
      bootstrap_servers => "192.168.1.106:9092"
      topic_id => "nginx-accesslog-1105"
      codec => "json"
    }
  }
  if [type] == "system-log-1105" {
    kafka {
      bootstrap_servers => "192.168.1.106:9092"
      topic_id => "system-log-1105"
      codec => "json"
    }
  }
}
##node2: read from Kafka and write to Elasticsearch
input {
  kafka {
    bootstrap_servers => "192.168.1.105:9092"
    topics => "nginx-accesslog-1105"
    group_id => "nginx-access-log"
    codec => "json"
    consumer_threads => 1
    decorate_events => true
  }
  kafka {
    bootstrap_servers => "192.168.1.105:9092"
    topics => "system-log-1105"
    group_id => "nginx-access-log"
    codec => "json"
    consumer_threads => 1
    decorate_events => true
  }
}
output {
#  stdout {
#    codec => "rubydebug"
#  }
  if [type] == "nginx-access-log-1105" {
    elasticsearch {
      hosts => ["192.168.1.105:9200"]
      index => "logstash-nginx-access-log-1105-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system-log-1105" {
    elasticsearch {
      hosts => ["192.168.1.106:9200"]
      index => "logstash-systemzzz-log-1105-%{+YYYY.MM.dd}"
    }
  }
}
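To confirm that events are actually landing in Kafka, list the topics and consume a few messages (a quick sanity-check sketch; it assumes Kafka is installed under /usr/local/kafka, matching the topic-list command used later in this post):
[root@node1 ~]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.1.105:2181
[root@node1 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.106:9092 --topic nginx-accesslog-1105 --from-beginning --max-messages 5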
Then add the new indices to Kibana.
##Use filebeat to collect logs and write them to Kafka
node1: upload filebeat-5.6.5-x86_64.rpm
yum install filebeat-5.6.5-x86_64.rpm -y
systemctl stop logstash.service
[root@node1 tmp]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG"]
  exclude_files: [".gz$"]
  document_type: "system-log-1105-filebeat"
output.file:
  path: "/tmp"
  filename: "filebeat.txt"
output.logstash:
  hosts: ["192.168.1.105:5044"]   # logstash server address; multiple hosts are allowed
  enabled: true                   # enable the logstash output (default: true)
  worker: 1                       # number of worker threads
  compression_level: 3            # compression level
  #loadbalance: true              # enable load balancing across multiple outputs
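Since output.file is enabled alongside output.logstash above, filebeat can be smoke-tested locally before the logstash side is wired up (a sketch; -configtest is the filebeat 5.x config-validation flag):
[root@node1 tmp]# filebeat -configtest -c /etc/filebeat/filebeat.yml
[root@node1 tmp]# systemctl start filebeat
[root@node1 tmp]# tail -f /tmp/filebeat.txt    # events should show up here via output.file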
[root@node1 src]# vim /etc/logstash/conf.d/filebate.conf
input {
  beats {
    port => "5044"
    codec => "json"
  }
}
output {
  if [type] == "system-log-1105-filebeat" {
    kafka {
      bootstrap_servers => "192.168.1.105:9092"
      topic_id => "system-log-filebeat-1105"
      codec => "json"
    }
  }
}
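Before restarting logstash, the pipeline syntax can be checked with -t (a sketch; the binary path assumes the standard RPM layout):
[root@node1 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebate.conf -t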
[root@node1 conf.d]# systemctl restart logstash.service
[root@node1 conf.d]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.1.105:2181,192.168.1.106:2181,192.168.1.101:2181
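If the new topic appears in the list, its content can be spot-checked with the console consumer (sketch):
[root@node1 conf.d]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.105:9092 --topic system-log-filebeat-1105 --from-beginning --max-messages 3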
##node2
[root@node2 conf.d]# vim kafka-es.conf
input {
  kafka {
    bootstrap_servers => "192.168.1.105:9092"
    topics => "system-log-filebeat-1105"
    group_id => "system-log-filebeat"
    codec => "json"
    consumer_threads => 1
    decorate_events => true
  }
}
output {
#  stdout {
#    codec => "rubydebug"
#  }
  if [type] == "system-log-1105-filebeat" {
    elasticsearch {
      hosts => ["192.168.1.106:9200"]
      index => "system-log-1105-filebeat-%{+YYYY.MM.dd}"
    }
  }
}
[root@node2 conf.d]# systemctl restart logstash.service
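Once both logstash instances are running, the index should appear in Elasticsearch within a few seconds; a quick check via the _cat API (sketch):
[root@node2 conf.d]# curl -s 'http://192.168.1.106:9200/_cat/indices?v' | grep system-log-1105-filebeat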
#Test: append a line to /var/log/messages on node1, then check on port 9100; if the data shows up there, the pipeline works. Then add the index to Kibana.
##The flow: filebeat reads from /var/log/messages and ships to 192.168.1.105:5044; logstash on node1 reads from local port 5044 and writes into Kafka on port 9092; then node2's logstash input reads from 192.168.1.105:9092 and its output writes to Elasticsearch.
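The same flow as a one-line diagram:
/var/log/messages -> filebeat -> logstash@node1 (beats :5044) -> kafka (:9092) -> logstash@node2 (kafka input) -> elasticsearch (:9200) -> kibana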
###Collecting nginx logs
[root@node1 conf.d]# vim /etc/filebeat/filebeat.yml
- input_type: log
  paths:
    - /var/log/nginx/access.log
  exclude_lines: ["^DBG"]
  exclude_files: [".gz$"]
  document_type: "nginx-accesslog-1105-filebeat"
output.logstash:
  hosts: ["192.168.1.105:5044"]   # logstash server address; multiple hosts are allowed
  enabled: true                   # enable the logstash output (default: true)
  worker: 1                       # number of worker threads
  compression_level: 3            # compression level
  #loadbalance: true              # enable load balancing across multiple outputs
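Note that the json codecs used along this pipeline assume the nginx access log is already written as JSON. A minimal log_format along these lines would satisfy that (a hypothetical example; the actual format was defined in part 1 and its field names may differ):
log_format access_json '{"@timestamp":"$time_iso8601","clientip":"$remote_addr","status":"$status","uri":"$uri","size":$body_bytes_sent}';
access_log /var/log/nginx/access.log access_json;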
[root@node1 src]# vim /etc/logstash/conf.d/filebate.conf
output {
  if [type] == "nginx-accesslog-1105-filebeat" {
    kafka {
      bootstrap_servers => "192.168.1.105:9092"
      topic_id => "nginx-accesslog-filebeat-1105"
      codec => "json"
    }
  }
}
##node2
input {
  kafka {
    bootstrap_servers => "192.168.1.105:9092"
    topics => "nginx-accesslog-filebeat-1105"
    group_id => "nginx-accesslog-filebeat"
    codec => "json"
    consumer_threads => 1
    decorate_events => true
  }
}
output {
  if [type] == "nginx-accesslog-1105-filebeat" {
    elasticsearch {
      hosts => ["192.168.1.106:9200"]
      index => "logstash-nginx-accesslog-1105-filebeat-%{+YYYY.MM.dd}"
    }
  }
}
[root@node2 conf.d]# systemctl restart logstash.service
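To test end to end, restart filebeat on node1 as well (its prospectors changed), generate some traffic against nginx, and look for the index (a sketch; it assumes nginx is serving on node1 at 192.168.1.105):
[root@node1 ~]# systemctl restart filebeat
[root@node1 ~]# for i in $(seq 1 10); do curl -s http://192.168.1.105/ >/dev/null; done
[root@node2 ~]# curl -s 'http://192.168.1.106:9200/_cat/indices?v' | grep nginx-accesslog-1105-filebeat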
##Collecting Java logs
[root@node1 conf.d]# vim /etc/logstash/conf.d/java.conf
input {
  file {
    path => "/var/log/logstash/logstash-plain.log"
    type => "javalog"
    codec => multiline {
      pattern => "^\[(\d{4}-\d{2}-\d{2})"
      negate => true
      what => "previous"
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.105:9200"]
    index => "javalog-1105-%{+YYYY.MM}"
  }
}
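What the multiline codec does here: any line that does not start with "[YYYY-MM-DD" is appended to the previous event, so a multi-line Java stack trace is indexed as a single document. Illustrative input:
[2017-11-05T10:00:00,123][ERROR][logstash.pipeline] pipeline worker error
java.lang.NullPointerException
        at com.example.Foo.bar(Foo.java:42)
All three lines above become one event, because only the first matches the pattern.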