Filebeat: Installation, Configuration, and Testing
I. Filebeat installation, configuration, and testing
1. Install Filebeat
# yum install filebeat-6.6.1-x86_64.rpm
2. Configure Filebeat to collect system logs and write them to a file (/etc/filebeat/filebeat.yml)
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG","^$"]
  document_type: system-log-5612
output.file:
  path: "/tmp"
  filename: "filebeat.txt"
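The exclude_lines setting drops any line matching one of the listed regexes before the event is shipped; here that means debug lines and blank lines never leave the host. A minimal Python sketch of that filtering, using the same two patterns (the sample log lines are invented for illustration):

```python
import re

# The same patterns as exclude_lines above: drop debug lines ("^DBG") and empty lines ("^$").
EXCLUDE = [re.compile(p) for p in (r"^DBG", r"^$")]

def keep(line):
    """Return True if the line survives the exclude_lines filter."""
    return not any(p.search(line) for p in EXCLUDE)

lines = ["DBG cache miss", "", "Mar  1 10:00:01 host sshd[42]: session opened"]
shipped = [l for l in lines if keep(l)]
print(shipped)  # only the sshd line survives
```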
3. Start the Filebeat service
systemctl start filebeat
4. Append test data to the system log (/var/log/messages), then check whether /tmp/filebeat.txt received it.
5. Configure Filebeat to collect system logs and output them to Redis (/etc/filebeat/filebeat.yml)
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
    - /var/log/messages
  exclude_lines: ["^DBG","^$"]
  document_type: system-log-5612
output.redis:
  hosts: "192.168.56.12"
  db: "3"
  port: "6379"
  password: "123456"
  key: "system-log-5612"
# systemctl restart filebeat
# Append data to /var/log/messages
# Verify in Redis that the data arrived
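Filebeat's redis output pushes each event onto the list named by key, so one way to verify is `redis-cli -h 192.168.56.12 -p 6379 -a 123456 -n 3 LLEN system-log-5612` and watching the length grow. On the wire that command is just a RESP array of bulk strings; a sketch of the encoding (`resp_encode` is a hypothetical helper, not part of any config here):

```python
def resp_encode(*args):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [("*%d\r\n" % len(args)).encode()]
    for a in args:
        b = a.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))
    return b"".join(out)

# What `LLEN system-log-5612` looks like on the wire:
wire = resp_encode("LLEN", "system-log-5612")
print(wire)
```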
6. Ship the system logs stored in Redis to Elasticsearch
# cat redis-elasticsearch.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "3"
    port => "6379"
    password => "123456"
    key => "system-log-5612"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.11:9200"]
    index => "system-log-5612-%{+YYYY.MM.dd}"
  }
}
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-elasticsearch.conf -t
# systemctl restart logstash
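The %{+YYYY.MM.dd} sprintf in the index name makes Logstash write to a new index each day, derived from the event's @timestamp in UTC. The equivalent naming, sketched in Python with a fixed example timestamp:

```python
from datetime import datetime, timezone

def daily_index(prefix, ts):
    """Mimic Logstash's index => "prefix-%{+YYYY.MM.dd}" naming (UTC-based)."""
    return "%s-%s" % (prefix, ts.astimezone(timezone.utc).strftime("%Y.%m.%d"))

ts = datetime(2019, 3, 1, 23, 30, tzinfo=timezone.utc)
print(daily_index("system-log-5612", ts))  # system-log-5612-2019.03.01
```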
7. Test
# echo "aaaaaaaaaaaa" >> /var/log/messages
# echo "bbbbbbbbbbbb" >> /var/log/messages
II. Filebeat lab configuration
Environment:
Server role           | IP address       | Applications
web server            | 192.168.56.100   | nginx, filebeat
redis server          | 192.168.56.12    | redis
logstash server       | 192.168.56.11    | logstash
elasticsearch cluster | 192.168.56.15/16 | java, elasticsearch
kibana server         | 192.168.56.12    | kibana, nginx reverse proxy with auth
1. Filebeat configuration: collect nginx logs and output them to the Redis server
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.56.12"]
  port: 6379
  key: "nginx-log"
2. Logstash server configuration: read data from Redis and output it to Elasticsearch
# cat /etc/logstash/conf.d/redis-es-logstash-nginx.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "0"
    port => "6379"
    key => "nginx-log"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.56.15:9200"]
    index => "nginx-log-%{+YYYY.MM.dd}"
  }
}
3. Kibana configuration
# grep -Evi "^#|^$" /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.56.12"
elasticsearch.hosts: ["http://192.168.56.15:9200","http://192.168.56.16:9200"]
4. Nginx reverse-proxy configuration for Kibana
# cat /etc/nginx/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format access_log_json '{"user_ip":"$http_x_forwarded_for","lan_ip":"$remote_addr",'
                               '"log_time":"$time_iso8601","user_rqp":"$request",'
                               '"http_code":"$status","body_bytes_sent":"$body_bytes_sent",'
                               '"req_time":"$request_time","user_ua":"$http_user_agent"}';
    sendfile on;
    keepalive_timeout 65;
    include conf.d/*.conf;
}
# cat /etc/nginx/conf.d/http-www.conf
server {
    listen 81;
    server_name localhost;
    auth_basic "User Authentication";
    auth_basic_user_file /etc/nginx/conf.d/kibana.passwd;
    access_log /var/log/nginx/http-access.log access_log_json;
    location / {
        proxy_set_header Host $host;
        proxy_set_header x-for $remote_addr;
        proxy_set_header x-server $host;
        proxy_set_header x-agent $http_user_agent;
        proxy_pass http://kibana;
    }
}
# cat /etc/nginx/conf.d/upstream.conf
upstream kibana {
    server 192.168.56.12:5601;
}
# cat /etc/nginx/conf.d/kibana.passwd
admin:$apr1$21NJ.Fx/$gmT0bwS4GoW1gmsHDRq911
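Because the access_log_json format emits one JSON object per request, downstream consumers can parse each access-log line directly instead of writing a grok pattern. A sketch with a made-up sample line matching the fields in the log_format above:

```python
import json

# A hypothetical request line as the access_log_json format above would emit it.
sample = ('{"user_ip":"-","lan_ip":"192.168.56.1","log_time":"2019-03-01T10:00:00+08:00",'
          '"user_rqp":"GET / HTTP/1.1","http_code":"200","body_bytes_sent":"612",'
          '"req_time":"0.003","user_ua":"curl/7.29.0"}')

entry = json.loads(sample)
print(entry["http_code"], entry["user_rqp"])  # 200 GET / HTTP/1.1
```

Note that nginx logs every value as a string ("http_code":"200"), so numeric fields need casting downstream if you want range queries on them.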
III. Filebeat collecting multiple log files
# 1. Filebeat collects nginx access logs and system logs, and outputs them to the Redis server.
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx-log-56-100"]
- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["system-messages-log-56-100"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.56.12"]
  port: 6379
  timeout: 5
  key: "default_list"
# 2. The Logstash server reads the data from Redis and outputs it to Elasticsearch.
# cat redis-es-logstash-nginx.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "0"
    port => "6379"
    key => "default_list"
  }
}
output {
  if "nginx-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "nginx-log-56100-%{+YYYY.MM.dd}"
    }
  }
  if "system-messages-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "system-messages-log-56100-%{+YYYY.MM.dd}"
    }
  }
}
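Both inputs share one Redis list (default_list); the tags Filebeat attaches are what let the Logstash output fan events out to per-source indices. The routing logic of the `if ... in [tags]` blocks, sketched as a plain function (index prefixes taken from the config above):

```python
# Map each Filebeat tag to its Elasticsearch index prefix, as in the output section above.
ROUTES = {
    "nginx-log-56-100": "nginx-log-56100",
    "system-messages-log-56-100": "system-messages-log-56100",
}

def route(tags):
    """Return the index prefixes an event is written to."""
    return [idx for tag, idx in ROUTES.items() if tag in tags]

print(route(["nginx-log-56-100"]))  # ['nginx-log-56100']
print(route(["unknown-tag"]))       # [] -- matches no output block, written nowhere
```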
IV. Filebeat collecting multiple log files (syslog, nginx, and multi-line Java merging)
# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx-log-56-100"]
- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["system-messages-log-56-100"]
- type: log
  enabled: true
  paths:
    - /data/tomcat/logs/catalina.out
  tags: ["tomcat-catalina-log-56-100"]
  multiline:
    pattern: '^\['
    negate: true
    match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.redis:
  hosts: ["192.168.56.12"]
  port: 6379
  timeout: 5
  key: "default_list"
# cat redis-es-logstash-nginx-system-tomcat.conf
input {
  redis {
    data_type => "list"
    host => "192.168.56.12"
    db => "0"
    port => "6379"
    key => "default_list"
  }
}
output {
  if "nginx-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "nginx-log-56100-%{+YYYY.MM.dd}"
    }
  }
  if "system-messages-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "system-messages-log-56100-%{+YYYY.MM.dd}"
    }
  }
  if "tomcat-catalina-log-56-100" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.15:9200"]
      index => "tomcat-catalina-log-56100-%{+YYYY.MM.dd}"
    }
  }
}
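With pattern '^\[', negate true, and match after, any catalina.out line that does not start with [ is appended to the preceding event, so a Java stack trace ships as one document instead of one per line. A sketch of that merge rule (the sample log lines are invented):

```python
import re

pattern = re.compile(r"^\[")  # same pattern as the multiline setting above

def merge(lines):
    """negate: true, match: after -- lines NOT matching the pattern join the previous event."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)           # line starts a new event
        else:
            events[-1] += "\n" + line     # continuation line (e.g. stack-trace frame)
    return events

log = [
    "[2019-03-01 10:00:00] SEVERE: request failed",
    "java.lang.NullPointerException",
    "\tat com.example.App.run(App.java:42)",
    "[2019-03-01 10:00:01] INFO: recovered",
]
print(len(merge(log)))  # 2 events: the full stack trace is folded into the first
```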