ELK 1.2: Logstash

Deploying and installing Logstash

1. Purpose: Logstash collects logs, formats and filters the data, and ships the final result to Elasticsearch.

2. Install Logstash and OpenJDK

[root@hd1 elk]# tar -xf logstash-7.9.3.tar.gz
[root@hd1 elk]# mv logstash-7.9.3 logstash
[root@hd1 elk]# yum install java-1.8.0-openjdk -y         # Logstash is written in Java, so a Java runtime is required

3. Edit the main Logstash configuration file

vim /opt/elasticsearch/logstash/config/logstash.yml

Locate and set the following:

pipeline:
  batch:
    size: 125
    delay: 5
config.reload.automatic: false
config.reload.interval: 3s
http.enabled: true
http.host: 0.0.0.0
http.port: 9600-9700
log.level: info
path.logs: /opt/elasticsearch/logstash/logs           # create the logs directory yourself
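path.logs above points at a directory that does not exist yet, so create it before the first start. A minimal sketch, assuming the install prefix used in this post:

```shell
# Create the logs directory referenced by path.logs in logstash.yml.
mkdir -p /opt/elasticsearch/logstash/logs
```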

 

4. Create the systemd service unit as follows:

 
[root@hd1 config]# vim  /usr/lib/systemd/system/logstash.service
[Unit]
Description=logstash
 
[Service]
ExecStart=/opt/elasticsearch/logstash/bin/logstash
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
#Restart=on-failure
 
[Install]
WantedBy=multi-user.target

5. Start Logstash manually to verify it works

[root@hd1 logstash]# bin/logstash -e 'input { stdin {} } output { stdout {} }'

 

6. As a simple test, create a file named mua.conf in the Logstash install directory

 
[root@hd1 logstash]# vim mua.conf
input {
    stdin {
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

7. Start Logstash with this file and type hello

[root@hd1 logstash]# bin/logstash -f mua.conf

8. Reload the unit file and start Logstash with systemd

[root@hd1 config]# systemctl daemon-reload
[root@hd1 config]# systemctl start logstash
 
9. Input stage: where the logs come from. Common plugins:
• Stdin (generally used for debugging)
• File
• Redis
• Beats (e.g. Filebeat)
 
File plugin: reads the specified log files. Common fields:
• path            path of the log files; wildcards are allowed
• exclude         log files to exclude from collection
• start_position  where to start reading a file; the default is the end, set it to beginning to read from the start
Case 1: read a log file and write it to another file
 
(1) Create a conf file, test.conf, in /opt/elasticsearch/logstash/conf.d
(2) Enable the path.config setting in logstash.yml:
path.config: /opt/elasticsearch/logstash/conf.d  # create the conf.d directory yourself
(3) Edit test.conf: the input file is /var/log/messages, the output goes to /tmp/test.log, and the filter section is left empty
 
[root@hd1 logstash]# cd conf.d/
[root@hd1 conf.d]# vim test.conf
input {
    file {
        path => "/var/log/messages"
    }
}

filter {

}

output {
    file {
        path => "/tmp/test.log"
    }
}

(4) Restart Logstash and check whether output shows up in /tmp/test.log.
Note: if Logstash will not start, check two things. First, pipelines.yml must contain nothing active (by default it is fully commented out). Second, a previously running instance may leave stale state behind: delete the hidden .lock file under the data directory and restart, and the output should appear.
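The stale-state cleanup described above can be sketched as follows (the path assumes this post's install prefix; only run it while Logstash is stopped):

```shell
# Remove the hidden lock file left behind by a previous instance,
# then restart Logstash.
rm -f /opt/elasticsearch/logstash/data/.lock
```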

 

 

 

 

Case 2: filter out the error log
   (1) Create a new conf file, errortest.conf
input {
    file {
       path => "/var/log/test/*.log"
       exclude => "error.log"
       start_position => "beginning"
    }
}
filter {

}
output {
    file {
        path => "/tmp/test.log"
    }
}

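The path/exclude combination above behaves like shell globbing minus the excluded name. A rough sketch in plain shell (the /tmp/logdemo files are invented purely for illustration, not part of the setup):

```shell
# Simulate which files the file input would pick up:
# everything matching /var/log/test/*.log except error.log.
mkdir -p /tmp/logdemo
touch /tmp/logdemo/access.log /tmp/logdemo/app.log /tmp/logdemo/error.log
for f in /tmp/logdemo/*.log; do
    [ "$(basename "$f")" = "error.log" ] && continue   # the exclude rule
    echo "collect: $f"
done
# prints:
# collect: /tmp/logdemo/access.log
# collect: /tmp/logdemo/app.log
```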

(2) Restart Logstash

[root@logstash logstash]# systemctl daemon-reload
[root@logstash logstash]# systemctl restart logstash

(3) Run Logstash against the new config file

[root@logstash conf.d]# /opt/logstash/bin/logstash -f errortest.conf
Sending Logstash logs to /opt/logstash/logs which is now configured via log4j2.properties
[2022-03-23T05:21:13,538][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.9.3", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.322-b06 on 1.8.0_322-b06 +indy +jit [linux-x86_64]"}
[2022-03-23T05:21:14,064][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-03-23T05:21:15,780][INFO ][org.reflections.Reflections] Reflections took 53 ms to scan 1 urls, producing 22 keys and 45 values 
[2022-03-23T05:21:16,323][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/opt/logstash/conf.d/errortest.conf"], :thread=>"#<Thread:0x2f6f31f run>"}
[2022-03-23T05:21:17,060][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.72}
[2022-03-23T05:21:17,263][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/opt/logstash/data/plugins/inputs/file/.sincedb_54fd3bc452299b50db7e60530cbeaef2", :path=>["/var/log/test/*.log"]}
[2022-03-23T05:21:17,306][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-03-23T05:21:17,420][INFO ][filewatch.observingtail  ][main][218539607883934df4c41723bf522f23856cdcc58ec69110e2f61f3a7871af11] START, creating Discoverer, Watch with file and sincedb collections
[2022-03-23T05:21:17,435][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-03-23T05:21:17,812][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2022-03-23T05:23:35,474][INFO ][logstash.outputs.file    ][main][13b19f0a29415b2a2b24d296c3c754e02eb15f82753c29552425cbcc773679e8] Opening file {:path=>"/tmp/test.log"}
[2022-03-23T05:24:02,337][INFO ][logstash.outputs.file    ][main][13b19f0a29415b2a2b24d296c3c754e02eb15f82753c29552425cbcc773679e8] Closing file /tmp/test.log

(4) Append some content to both log files

[root@logstash conf.d]# echo lalal >> '/var/log/test/error.log'
[root@logstash conf.d]# echo test >> '/var/log/test/access.log'

(5) Check /tmp/test.log: only content from access.log shows up; error.log was filtered out

[root@logstash test]# cat /tmp/test.log 
{"path":"/var/log/test/access.log","@version":"1","host":"logstash","message":"123","@timestamp":"2022-03-22T20:19:47.762Z"}
{"path":"/var/log/test/access.log","@version":"1","host":"logstash","message":"123","@timestamp":"2022-03-22T20:19:47.734Z"}
{"path":"/var/log/test/access.log","@version":"1","host":"logstash","message":"123","@timestamp":"2022-03-22T20:19:47.760Z"}
{"path":"/var/log/test/access.log","@version":"1","host":"logstash","message":"hahahah","@timestamp":"2022-03-22T20:21:11.135Z"}
{"@timestamp":"2022-03-22T21:23:35.365Z","message":"test","@version":"1","path":"/var/log/test/access.log","host":"logstash"}
{"@timestamp":"2022-03-22T21:23:43.418Z","message":"okok","@version":"1","path":"/var/log/test/access.log","host":"logstash"}

Case 3: mark the source of the logs and add attributes to them

• add_field  adds a field to an event, at the top level; typically used to mark where a log comes from, e.g. which project or application
• tags       adds any number of tags, used to mark other attributes of the log, e.g. whether it is an access log or an error log
• type       adds a field to everything from this input, e.g. to indicate the log type

Edit the configuration file:

input {
    file {
        path => "/var/log/test/*.log"
        exclude => "error.log"
        start_position => "beginning"
        tags => ["web", "nginx"]
        type => "access"
        add_field => {
            "project" => "cloud service"
            "app" => "douyu"
        }
    }
}

filter {

}

output {
    file {
        path => "/tmp/test.log"
    }
}

Start it with the absolute path to the logstash binary (the config above is saved as nginx.conf):

[root@logstash conf.d]# /opt/logstash/bin/logstash -f nginx.conf 

Write some content to /var/log/test/test.log and check the output:

[root@logstash conf.d]# echo test >> /var/log/test/test.log
[root@logstash conf.d]# cat /tmp/test.log 

{"message":"test","project":"cloud service","@version":"1","type":"access","@timestamp":"2022-03-22T23:42:38.615Z","host":"logstash","app":"douyu","tags":["web","nginx"],"path":"/var/log/test/test.log"}
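Each output line is a JSON event, so the added attributes can be checked with any JSON parser. A small sketch using python3 (jq would work too, if installed), fed with the sample event captured above:

```shell
# Pull the enriched fields out of a captured Logstash event.
echo '{"message":"test","project":"cloud service","@version":"1","type":"access","host":"logstash","app":"douyu","tags":["web","nginx"],"path":"/var/log/test/test.log"}' \
  | python3 -c '
import json, sys
e = json.load(sys.stdin)
print("tags:", ",".join(e["tags"]))     # tags: web,nginx
print("type:", e["type"])               # type: access
print("project:", e["project"])         # project: cloud service
'
```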

 

# Tip: if Logstash fails to start with a conf file, try giving the absolute path to the conf file. Otherwise it may keep reporting that it cannot find the config source even though the file itself looks fine.

 

 

 

posted @ 多次拒绝黄宗泽