ELK: Logstash 6.5
First comes installation. Here we use the RPM approach, starting by importing the Elastic GPG key:
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create the repo file:
[root@node1 logstash]# cat /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
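As an optional sanity check (not shown in the original walkthrough), confirm that yum now sees the repository:

# yum repolist | grep logstash    # should list the logstash-6.x repo added above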
Before running yum install logstash, make sure a JDK is installed, i.e. that a Java environment is available:
[root@node1 logstash]# java -version
java version "1.8.0_191"
Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)

# yum install logstash
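If java -version reports nothing, one option on CentOS (an assumption on my part; any JDK 8 will do) is the OpenJDK package:

# yum install java-1.8.0-openjdk    # hypothetical remedy; skip if a JDK is already present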
Inspect the Logstash configuration files:
[root@node1 logstash]# pwd
/etc/logstash
[root@node1 logstash]# ll
total 36
drwxrwxr-x. 2 root root    6 Dec 18 06:06 conf.d
-rw-r--r--. 1 root root 1846 Dec 18 06:06 jvm.options
-rw-r--r--. 1 root root 4568 Dec 18 06:06 log4j2.properties
-rw-r--r--. 1 root root  342 Dec 18 06:06 logstash-sample.conf
-rw-r--r--. 1 root root 8194 Dec 23 20:32 logstash.yml
-rw-r--r--. 1 root root  285 Dec 18 06:06 pipelines.yml
-rw-------. 1 root root 1696 Dec 18 06:06 startup.options
Start with a simple stdin-to-stdout pipeline:
# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
But it reports an error:
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Workaround:
mkdir -p /usr/share/logstash/config/
ln -s /etc/logstash/* /usr/share/logstash/config
chown -R logstash:logstash /usr/share/logstash/config/
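A quick way to confirm the symlinks landed where Logstash looks for them (a hypothetical check, not from the original session):

# ls -l /usr/share/logstash/config/log4j2.properties    # should point back into /etc/logstash/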
[root@node1 conf.d]# /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-24T20:28:50,213][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-24T20:28:50,240][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-24T20:28:53,997][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-24T20:29:04,221][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x45200d9d run>"}
The stdin plugin is now waiting for input:
[2018-12-24T20:29:04,293][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-24T20:29:04,570][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
hello world
{
       "message" => "hello world",
          "host" => "node1",
      "@version" => "1",
    "@timestamp" => 2018-12-24T12:29:50.015Z
}
To exit this Logstash session, press Ctrl+D.
Now feed the contents of the es.log file into Redis:
[root@node1 conf.d]# cat redis_output.conf
input {
    file {
        path => ["/var/log/elasticsearch/es.log"]
        start_position => "beginning"
    }
}

output {
    redis {
        db => "0"                    # which Redis database to use
        data_type => "list"          # Redis data type to write to
        host => ["172.16.23.129"]    # Redis server address
        key => "es_log"              # name of the key
    }
}
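Before starting the pipeline, you can have Logstash validate the config syntax and exit; the --config.test_and_exit flag is standard in 6.x:

# /usr/share/logstash/bin/logstash -f redis_output.conf --config.test_and_exit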
Spin up a Redis server with Docker:
# docker run --name redis -p 6379:6379 -d redis
# yum install redis    # provides the redis-cli command
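Assuming the container came up cleanly, a quick PING confirms the server is reachable (a sanity check added here, not part of the original session):

# redis-cli -h 172.16.23.129 ping
PONG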
Then run:
# /usr/share/logstash/bin/logstash -f redis_output.conf
While this is running, stop the elasticsearch service so that some log entries are generated:
[root@node1 ~]# systemctl stop elasticsearch
The following output appears:
[root@node1 conf.d]# /usr/share/logstash/bin/logstash -f redis_output.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-25T20:55:22,977][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-25T20:55:23,004][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-25T20:55:28,021][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-25T20:55:38,691][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_573723e58bddd528c972283d168c6f3f", :path=>["/var/log/elasticsearch/es.log"]}
[2018-12-25T20:55:38,901][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3582c34c run>"}
[2018-12-25T20:55:39,132][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-25T20:55:39,226][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2018-12-25T20:55:40,236][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Then open another terminal and inspect the data in Redis:
[root@node1 ~]# redis-cli -h 172.16.23.129
172.16.23.129:6379> KEYS *
1) "es_log"
172.16.23.129:6379> llen es_log
(integer) 7
172.16.23.129:6379> lrange es_log 0 7
1) "{\"message\":\"[2018-12-25T20:59:02,371][INFO ][o.e.n.Node ] [node1] stopping ...\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.484Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
2) "{\"message\":\"[2018-12-25T20:59:02,981][INFO ][o.e.n.Node ] [node1] stopped\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.525Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
3) "{\"message\":\"[2018-12-25T20:59:02,877][INFO ][o.e.x.m.j.p.NativeController] [node1] Native controller process has stopped - no new native processes can be started\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.524Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
4) "{\"message\":\"[2018-12-25T20:59:02,399][INFO ][o.e.x.w.WatcherService ] [node1] stopping watch service, reason [shutdown initiated]\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.523Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
5) "{\"message\":\"[2018-12-25T20:59:02,981][INFO ][o.e.n.Node ] [node1] closing ...\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.525Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
6) "{\"message\":\"[2018-12-25T20:59:02,866][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [node1] [controller/1513] [Main.cc@148] Ml controller exiting\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.524Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
7) "{\"message\":\"[2018-12-25T20:59:02,998][INFO ][o.e.n.Node ] [node1] closed\",\"host\":\"node1\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T12:59:03.526Z\",\"path\":\"/var/log/elasticsearch/es.log\"}"
The log data has thus been shipped to Redis and stored under the es_log key.
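Because data_type is "list", the redis output pushes each event onto the tail of a Redis list; a consumer can pop entries back off it, for example (a hypothetical command, running it removes an entry from the list):

172.16.23.129:6379> lpop es_log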
Now ship the nginx logs to Redis:
[root@node1 ~]# cat /etc/logstash/conf.d/nginx_output_redis.conf
input {
    file {
        path => ["/var/log/nginx/access.log"]
        start_position => "beginning"
    }
}

output {
    redis {
        db => "0"
        data_type => "list"
        host => ["172.16.23.129"]
        key => "nginx_log"
    }
}
Configure nginx to emit its access log in JSON format (note that the original config spells the first field "@timstamp", without the "e"; the same spelling appears verbatim in the captured Redis data below):
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

log_format  json  '{"@timstamp":"$time_iso8601","@version":"1","client":"$remote_addr","url":"$uri","status":"$status","domain":"$host","host":"$server_addr","size":"$body_bytes_sent","responsetime":"$request_time","referer":"$http_referer","ua":"$http_user_agent"}';
Then comment out the main format and switch access_log to json:
#access_log  /var/log/nginx/access.log  main;
access_log  /var/log/nginx/access.log  json;
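After switching the format, validate and reload nginx with its standard tooling so the change takes effect (the original session implies this step but does not show it):

# nginx -t               # check the edited configuration for syntax errors
# systemctl reload nginx # apply the new log format without dropping connections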
Now run:
[root@node1 conf.d]# /usr/share/logstash/bin/logstash -f nginx_output_redis.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-25T21:22:52,300][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-25T21:22:52,320][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-25T21:22:56,773][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-25T21:23:07,349][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d883144359d3b4f516b37dba51fab2a2", :path=>["/var/log/nginx/access.log"]}
[2018-12-25T21:23:07,459][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4e31d96 run>"}
[2018-12-25T21:23:07,633][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-25T21:23:07,688][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2018-12-25T21:23:08,510][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Then hit the nginx page manually a few times:
[root@node1 ~]# for i in `seq 1 10`;do echo $i;curl http://172.16.23.129 &> /dev/null;done
Now check the corresponding key and its values in Redis (llen reports 14 rather than 10 because start_position => "beginning" also picked up entries already present in access.log):
172.16.23.129:6379> keys *
1) "es_log"
2) "nginx_log"
172.16.23.129:6379> llen nginx_log
(integer) 14
172.16.23.129:6379> lrange nginx_log 0 4
1) "{\"path\":\"/var/log/nginx/access.log\",\"message\":\"{\\\"@timstamp\\\":\\\"2018-12-25T21:19:54+08:00\\\",\\\"@version\\\":\\\"1\\\",\\\"client\\\":\\\"172.16.23.129\\\",\\\"url\\\":\\\"/index.html\\\",\\\"status\\\":\\\"200\\\",\\\"domain\\\":\\\"172.16.23.129\\\",\\\"host\\\":\\\"172.16.23.129\\\",\\\"size\\\":\\\"14\\\",\\\"responsetime\\\":\\\"0.000\\\",\\\"referer\\\":\\\"-\\\",\\\"ua\\\":\\\"curl/7.29.0\\\"}\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T13:23:09.318Z\",\"host\":\"node1\"}"
2) "{\"path\":\"/var/log/nginx/access.log\",\"message\":\"{\\\"@timstamp\\\":\\\"2018-12-25T21:24:06+08:00\\\",\\\"@version\\\":\\\"1\\\",\\\"client\\\":\\\"172.16.23.129\\\",\\\"url\\\":\\\"/index.html\\\",\\\"status\\\":\\\"200\\\",\\\"domain\\\":\\\"172.16.23.129\\\",\\\"host\\\":\\\"172.16.23.129\\\",\\\"size\\\":\\\"14\\\",\\\"responsetime\\\":\\\"0.000\\\",\\\"referer\\\":\\\"-\\\",\\\"ua\\\":\\\"curl/7.29.0\\\"}\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T13:24:06.952Z\",\"host\":\"node1\"}"
3) "{\"path\":\"/var/log/nginx/access.log\",\"message\":\"{\\\"@timstamp\\\":\\\"2018-12-25T21:24:27+08:00\\\",\\\"@version\\\":\\\"1\\\",\\\"client\\\":\\\"172.16.23.129\\\",\\\"url\\\":\\\"/index.html\\\",\\\"status\\\":\\\"200\\\",\\\"domain\\\":\\\"172.16.23.129\\\",\\\"host\\\":\\\"172.16.23.129\\\",\\\"size\\\":\\\"14\\\",\\\"responsetime\\\":\\\"0.000\\\",\\\"referer\\\":\\\"-\\\",\\\"ua\\\":\\\"curl/7.29.0\\\"}\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T13:24:28.040Z\",\"host\":\"node1\"}"
4) "{\"path\":\"/var/log/nginx/access.log\",\"message\":\"{\\\"@timstamp\\\":\\\"2018-12-25T21:24:27+08:00\\\",\\\"@version\\\":\\\"1\\\",\\\"client\\\":\\\"172.16.23.129\\\",\\\"url\\\":\\\"/index.html\\\",\\\"status\\\":\\\"200\\\",\\\"domain\\\":\\\"172.16.23.129\\\",\\\"host\\\":\\\"172.16.23.129\\\",\\\"size\\\":\\\"14\\\",\\\"responsetime\\\":\\\"0.000\\\",\\\"referer\\\":\\\"-\\\",\\\"ua\\\":\\\"curl/7.29.0\\\"}\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T13:24:28.041Z\",\"host\":\"node1\"}"
5) "{\"path\":\"/var/log/nginx/access.log\",\"message\":\"{\\\"@timstamp\\\":\\\"2018-12-25T21:31:59+08:00\\\",\\\"@version\\\":\\\"1\\\",\\\"client\\\":\\\"172.16.23.129\\\",\\\"url\\\":\\\"/index.html\\\",\\\"status\\\":\\\"200\\\",\\\"domain\\\":\\\"172.16.23.129\\\",\\\"host\\\":\\\"172.16.23.129\\\",\\\"size\\\":\\\"14\\\",\\\"responsetime\\\":\\\"0.000\\\",\\\"referer\\\":\\\"-\\\",\\\"ua\\\":\\\"curl/7.29.0\\\"}\",\"@version\":\"1\",\"@timestamp\":\"2018-12-25T13:32:00.394Z\",\"host\":\"node1\"}"
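Note that the nginx JSON arrives as an escaped string inside the message field, because the file input read each line as plain text. If you want those fields expanded into top-level event fields, one option (a sketch, not used in the original setup) is a json codec on the input:

input {
    file {
        path => ["/var/log/nginx/access.log"]
        start_position => "beginning"
        codec => "json"    # parse each line as JSON instead of shipping it as raw text
    }
}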
Now push the nginx_log key from Redis into an Elasticsearch index:
[root@node1 ~]# cat /etc/logstash/conf.d/redis_output_es.conf
input {
    redis {
        db => "0"
        data_type => "list"
        host => ["172.16.23.129"]
        key => "nginx_log"
    }
}

output {
    elasticsearch {
        hosts => ["172.16.23.129"]
        index => "nginx-log-%{+YYYY.MM.dd}"
    }
}
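Unlike lrange, which only reads, the redis input consumes events: entries are popped off the list as they are indexed, so nginx_log should drain toward zero while this pipeline runs. You can watch that happen with llen (an added check, not part of the original session):

172.16.23.129:6379> llen nginx_log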
Then run:
[root@node1 conf.d]# /usr/share/logstash/bin/logstash -f redis_output_es.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-12-25T21:44:26,608][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-25T21:44:26,631][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-25T21:44:31,074][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-25T21:44:32,062][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.16.23.129:9200/]}}
[2018-12-25T21:44:32,690][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://172.16.23.129:9200/"}
[2018-12-25T21:44:32,927][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-12-25T21:44:32,935][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-12-25T21:44:32,987][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//172.16.23.129"]}
[2018-12-25T21:44:33,026][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-12-25T21:44:33,092][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-12-25T21:44:33,177][INFO ][logstash.inputs.redis    ] Registering Redis {:identity=>"redis://@172.16.23.129:6379/0 list:nginx_log"}
[2018-12-25T21:44:33,251][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1361ed6f run>"}
[2018-12-25T21:44:33,371][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-12-25T21:44:33,540][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-25T21:44:34,552][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Finally, check on the Elasticsearch side:
[root@node1 ~]# curl -X GET "localhost:9200/_cat/indices?v"
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   test1                ZAjj9y_sSPmGz8ZscIXUsA   5   1          0            0      1.2kb          1.2kb
yellow open   nginx-log-2018.12.25 Zr4q_U5bTk2dY9PfEpZz_Q   5   1         14            0     31.8kb         31.8kb
test1 was created manually earlier and can be ignored; nginx-log-2018.12.25 is the index that was just created.
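To confirm the documents landed with the expected fields, you can pull one back with the standard _search API (an added check, not part of the original session):

# curl -s 'localhost:9200/nginx-log-2018.12.25/_search?size=1&pretty'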