Deploying ELK + Filebeat with Docker
- Kibana: an open-source analytics and visualization platform
- Logstash: a log collection tool (its shipper, logstash-forwarder, was originally called lumberjack)
- Elasticsearch: search and query; Filebeat: lightweight log collection
ELK is a log analysis stack made up of three components:
Elasticsearch: a JSON-based analytics and search engine. Elasticsearch is an open-source distributed search engine whose features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash: a dynamic data collection pipeline. Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
Kibana: the visualization layer, which presents the data collected into Elasticsearch as views. Kibana is a free, open-source tool that provides a friendly web interface for analyzing the logs handled by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Docker should be allocated at least 3 GB of memory;
Elasticsearch alone needs at least 2 GB of memory.
Environment:
- Docker version: 19.03.5
- Host address: 192.168.1.220
一、Deploy ELK
1、Modify the vm.max_map_count parameter
vm.max_map_count must be at least 262144:
# Temporary change
[root@docker-01 ~]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144

# Permanent change
[root@docker-01 ~]# vim /etc/sysctl.conf
vm.max_map_count=262144
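After writing the value into /etc/sysctl.conf it only takes effect at the next boot unless it is reloaded. A quick way to apply and verify it (standard sysctl usage, nothing specific to this setup):

```bash
# Reload /etc/sysctl.conf so the permanent setting takes effect immediately
sysctl -p

# Verify the kernel parameter is now 262144
sysctl vm.max_map_count
# expected output: vm.max_map_count = 262144
```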
2、Pull the ELK image
[root@docker-01 ~]# docker pull sebp/elk
3、Run the ELK service
[root@docker-01 ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -itd --name elk sebp/elk
a008628736052e01fcfc44b5e494b46496d7f0b015163257435cc368728256f8
Port mapping:
5601 (Kibana web interface): front-end UI
9200 (Elasticsearch JSON interface): search
5044 (Logstash Beats interface, receives logs from Beats such as Filebeat): log shipping
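A quick sanity check after the container starts (assuming the ELK services have finished booting, which can take a minute or two):

```bash
# Elasticsearch should answer on 9200 with JSON cluster information
curl -s http://192.168.1.220:9200/

# Kibana should answer on 5601 (an HTTP status line is enough to confirm it is up)
curl -sI http://192.168.1.220:5601/

# Confirm the published ports are listening on the Docker host
ss -lntp | grep -E '5601|9200|5044'
```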
4、Modify Logstash to disable SSL
# 1、Enter the elk container
[root@docker-01 ~]# docker ps -a
CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS                                                                              NAMES
a00862873605   sebp/elk   "/usr/local/bin/star…"   2 minutes ago   Up 2 minutes   0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp   elk
[root@docker-01 ~]# docker exec -it a00862873605 /bin/bash

# 2、Edit the configuration file so it looks like the following
root@a00862873605:/# vi /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
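Before restarting, the edited pipeline can be syntax-checked from inside the container. The Logstash path below (/opt/logstash) is the one used later in this post for the sebp/elk image; adjust it if your layout differs:

```bash
# Run Logstash's built-in config test against the Beats input file (exits non-zero on errors)
root@a00862873605:/# /opt/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/02-beats-input.conf

# If it complains that another instance is using the data directory (the same error shown
# further below), point the test run at a scratch directory instead:
root@a00862873605:/# /opt/logstash/bin/logstash --config.test_and_exit --path.data /tmp/logstash-test -f /etc/logstash/conf.d/02-beats-input.conf
```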
5、Restart elk
[root@docker-01 ~]# docker restart elk
elk
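It takes a moment for Elasticsearch, Logstash and Kibana to come back up inside the container; the container log can be followed to confirm they started cleanly:

```bash
# Follow the container's startup output until all three services report they are running
docker logs -f elk
```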
二、Deploy Filebeat (installed on the host)
- Filebeat package
- Download link: https://pan.baidu.com/s/1fx12E7N_O2a6CHIBmQV6Ig
- Password: osxk
1、Install Filebeat
[root@docker-01 ~]# rpm -ivh filebeat-6.5.4-x86_64.rpm
warning: filebeat-6.5.4-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-6.5.4-1                 ################################# [100%]
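A quick check that the package installed correctly (standard rpm and filebeat commands):

```bash
# Confirm the installed package and the binary version
rpm -q filebeat
filebeat version
```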
2、Edit the configuration file
[root@docker-01 etc]# cd /etc/filebeat/
[root@docker-01 filebeat]# ls
fields.yml  filebeat.reference.yml  filebeat.yml  modules.d
[root@docker-01 filebeat]# cp filebeat.yml filebeat.yml.bak
[root@docker-01 filebeat]# echo 0 > filebeat.yml
[root@docker-01 filebeat]# vi filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /docker/service/zs/java/javalog/*.log
  tags: ["java"]
- type: log
  enabled: true
  paths:
    - /docker/service/zs/nginx/log/*.log
  tags: ["nginx"]
- type: log
  enabled: true
  paths:
    - /docker/service/zs/redis/log/*.log
  tags: ["redis"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "192.168.1.220:5601"
output.logstash:
  hosts: ["192.168.1.220:5044"]
[root@docker-01 ~]# systemctl restart filebeat
The same configuration, annotated:
filebeat.prospectors:
# Specify the log paths to collect
- type: log
  enabled: true
  paths:
    # Specify the path
    - /docker/service/zs/java/javalog/*.log
  # Custom tag name
  tags: ["java"]
# Specify the log paths to collect
- type: log
  enabled: true
  paths:
    # Specify the path
    - /docker/service/zs/nginx/log/*.log
  # Custom tag name
  tags: ["nginx"]
# Specify the log paths to collect
- type: log
  enabled: true
  paths:
    # Specify the path
    - /docker/service/zs/redis/log/*.log
  # Custom tag name
  tags: ["redis"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
# Specify the Kibana host address
setup.kibana:
  host: "192.168.1.220:5601"
# Specify the Logstash address
output.logstash:
  hosts: ["192.168.1.220:5044"]
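After restarting, Filebeat's built-in checks can confirm that the configuration is valid and that the Logstash output is reachable (standard filebeat subcommands in the 6.x series):

```bash
# Validate filebeat.yml syntax
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify connectivity to the configured Logstash output on 192.168.1.220:5044
filebeat test output -c /etc/filebeat/filebeat.yml
```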
三、Kibana management
Open 192.168.1.220:5601 in a browser.
1、Management --> Index Patterns --> Create index pattern
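Before creating the index pattern, it can help to confirm that Logstash has actually written indices into Elasticsearch (standard Elasticsearch cat API; the index names assume the default logstash-* naming used by the pipeline above):

```bash
# List indices; entries like logstash-2020.02.29 should appear once Filebeat has shipped some logs
curl -s 'http://192.168.1.220:9200/_cat/indices?v'
```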
Test:
Push a test message through the pipeline.
[root@docker-01 ~]# docker exec -it a00862873605 /bin/bash
root@a00862873605:/# /opt/logstash/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost"] } }'
Sending Logstash logs to /opt/logstash/logs which is now configured via log4j2.properties
[2020-02-29T09:29:27,910][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-02-29T09:29:28,181][FATAL][logstash.runner          ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[2020-02-29T09:29:28,238][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
Error:
If you see an error message like this:
Logstash could not be started because there is already another instance using the configured data directory.
If you wish to run multiple instances, you must change the "path.data" setting.
Fix: run service logstash stop, then run the command above again.
root@a00862873605:/# service logstash stop
Killing logstash (pid 285) with SIGTERM
Waiting for logstash (pid 285) to die...
Waiting for logstash (pid 285) to die...
Waiting for logstash (pid 285) to die...
Waiting for logstash (pid 285) to die...
Waiting for logstash (pid 285) to die...
logstash stop failed; still running.
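As the output above shows, the init script may report that Logstash is still running after SIGTERM. In that case one workaround is to check the process and stop it by hand (plain shell, not specific to the sebp/elk image; the PID below is the one printed above):

```bash
# Check whether the Logstash process survived the stop request
root@a00862873605:/# ps -ef | grep [l]ogstash

# If it did, terminate it by PID (285 in the output above), forcefully only if SIGTERM is ignored
root@a00862873605:/# kill 285
root@a00862873605:/# kill -9 285
```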
3. Type a test message: this is a test
Open a browser and go to http://192.168.1.220:9200/_search?pretty, and you will see the log entry we just typed.
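The same check can be done from the command line, filtering for the test string (standard Elasticsearch query-string search; logstash-* is the default index pattern created by the pipeline above):

```bash
# Search all logstash indices for the message typed on stdin
curl -s 'http://192.168.1.220:9200/logstash-*/_search?q=message:test&pretty'
```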