Collecting TCP and UDP logs with Logstash
Logs can be collected through Logstash's tcp/udp input plugins. This is typically used to backfill log entries that were lost on their way to Elasticsearch: write the missing entries to a file, send that file to Logstash over TCP, and let Logstash write them into the Elasticsearch server.
https://www.elastic.co/guide/en/logstash/5.6/input-plugins.html
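For example, if the missing entries have been saved to a file, they can be replayed to the TCP input configured below with nc; a minimal sketch, assuming the port 8899 used in this section and a hypothetical file name lost-logs.txt:
# replay previously lost log lines to the Logstash TCP input (file name is hypothetical)
nc <logstash-host> 8899 < lost-logs.txt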
Prerequisites:
Environment: JDK installed, Logstash installed.
Install nc
[root@es-web2 ~]# apt install netcat
Install the JDK
[root@es-web2 ~]# apt install openjdk-8-jdk -y
Install Logstash from the deb package with dpkg
[root@es-web2 src]# dpkg -i logstash-7.12.1-amd64.deb
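A quick sanity check that both the JDK and Logstash are in place (the Logstash path is the default from the deb package):
java -version
/usr/share/logstash/bin/logstash --version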
Create a configuration file and test collection to stdout first
[root@es-web2 ]# vim /etc/logstash/conf.d/tcp-log-es.conf
input{
  tcp{
    port => 8899            # listen on TCP port 8899
    type => "tcplog"        # tag events so they can be routed later
    mode => "server"        # act as the TCP server side
  }
}
output{
  stdout{
    codec => rubydebug      # pretty-print events to the terminal for testing
  }
}
Verify by running Logstash in the foreground
[root@es-web2 ]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp-log-es.conf
Install nc (the networking "Swiss Army knife") on the sending server
[root@es-web2 ~]# apt install netcat
Test by sending a line
[root@es-web2 ]# echo "nc test" | nc 172.31.2.107 8899
Check the listening port
[root@es-web2 ]# ss -tnl | grep 8899
LISTEN 0 128 *:8899 *:*
Test sending a whole file
[root@es-web2 ]# nc 172.31.2.107 8899 < /etc/passwd
Change the output to Elasticsearch
root@long:~# vim /etc/logstash/conf.d/tcp-log-es.conf
input{
  tcp{
    port => 8899
    type => "tcplog"
    mode => "server"
  }
}
output{
  elasticsearch{
    hosts => ["172.31.2.101:9200"]             # Elasticsearch node
    index => "long-tcplog-%{+YYYY.MM.dd}"      # one index per day, named after the event date
  }
}
Restart Logstash to load the new config
root@long:~# systemctl restart logstash
Send data with nc again
root@long:~# echo "nc test1" | nc 172.31.2.108 8899
root@long:~# echo "pseudo-device test 1" > /dev/tcp/172.31.2.108/8899
Check Elasticsearch
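One way to confirm the index was created from the command line, assuming the Elasticsearch node from the output section above:
curl 'http://172.31.2.101:9200/_cat/indices?v' | grep long-tcplog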
Add the index to Kibana
(omitted)
Collecting UDP logs with Logstash
Prepare a CentOS machine to stand in for a switch
Install rsyslog and haproxy
[root@localhost ~]# yum install rsyslog
[root@localhost ~]# yum install haproxy -y
rsyslog configuration
[root@localhost ~]# vim /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
# add at the end of the file: forward the local2 facility to a remote syslog server
local2.* @@remote-host:514
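Note that @@ forwards over TCP while a single @ forwards over UDP. Since this section is about UDP collection, the forwarding rule can also be written in the UDP form; a sketch, assuming the Logstash listener configured later in this section:
local2.* @172.31.0.18:6514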
haproxy configuration
[root@localhost ~]# vim /etc/haproxy/haproxy.cfg
listen web-port
    bind 0.0.0.0:80
    server 172.31.2.108 172.31.2.108:80 check inter 3s fall 3 rise 5
Restart haproxy
[root@localhost ~]# systemctl restart haproxy
Test the proxied web page; as long as it is reachable, this step is done (see the example below)
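For example, from any machine that can reach the haproxy box (substitute the CentOS server's address for the placeholder):
curl -I http://<haproxy-server-ip>/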
Configure rsyslog to write the haproxy log to a local file instead of forwarding it
[root@localhost ~]# vim /etc/rsyslog.conf
#local2.* @@remote-host:514
local2.* /var/log/haproxy.log
Restart rsyslog
[root@localhost ~]# systemctl restart rsyslog
Modify the haproxy configuration so that it actually logs requests
[root@localhost ~]# vim /etc/haproxy/haproxy.cfg
listen web-port
    bind 0.0.0.0:80
    log global
    mode http
    server 172.31.2.108 172.31.2.108:80 check inter 3s fall 3 rise 5
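log global only takes effect if the global section defines a syslog target; the stock CentOS haproxy.cfg already contains a line like the following, which sends the log to the local rsyslog via the local2 facility (shown for reference, adjust if yours differs):
global
    log 127.0.0.1 local2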
Restart haproxy
[root@localhost ~]# systemctl restart haproxy
Check that log lines are being written
[root@localhost ~]# tail -f /var/log/haproxy.log
Configure Logstash with a syslog input for testing, writing to stdout (the file name matches the startup command below)
root@long:~# vim /etc/logstash/conf.d/rsys-log-es.conf
input{
  syslog{
    host => "172.31.0.18"          # local address to bind the syslog listener to
    port => "6514"                 # port for incoming syslog messages
    type => "system-rsyslog"
  }
}
output {
  stdout {}
}
Stop the logstash service
root@long:~# systemctl stop logstash
Start Logstash in the foreground with the new config
root@long:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsys-log-es.conf
Then modify the rsyslog configuration on the centos-18 server so that local2 is forwarded to the Logstash listener
[root@localhost ~]# vim /etc/rsyslog.conf
local2.* @@172.31.0.18:6514
Restart rsyslog
[root@localhost ~]# systemctl restart rsyslog
Start Logstash in the foreground again (if the test instance from above is not still running)
root@long:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsys-log-es.conf
Then refresh the proxied web page a few times and watch the Logstash terminal; if the haproxy log events appear there, the forwarding chain works. An alternative check is shown below.
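Instead of refreshing the page, a test message can be written to the local2 facility on the CentOS box; rsyslog forwards it to the Logstash listener, so it should show up in the same terminal:
[root@localhost ~]# logger -p local2.info "rsyslog forwarding test"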
Add a Kibana listener to the haproxy configuration
[root@localhost ~]# vim /etc/haproxy/haproxy.cfg
listen web1-port
    bind 172.31.2.108:5601
    log global
    mode tcp
    server 172.31.2.101 172.31.2.101:5601 check inter 3s fall 3 rise 5
    server 172.31.2.102 172.31.2.102:5601 check inter 3s fall 3 rise 5
Restart haproxy
[root@localhost ~]# systemctl restart haproxy
Check the listening ports
[root@localhost ~]# ss -tnl
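To confirm that the proxy actually forwards to Kibana, a quick request against the bind address (assuming Kibana is already running on the backend nodes):
curl -I http://172.31.2.108:5601/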
Modify the configuration, building on the one above (the rsyslog setup on Ubuntu proved problematic, so a CentOS system is used as the syslog sender here)
[root@es-web2 ~]# vim /etc/logstash/conf.d/tcp-log-es.conf
input{
  tcp{
    port => 8899
    type => "tcplog"
    mode => "server"
  }
  syslog {
    type => "system-rsyslog"
    port => "6514"
  }
}
#output{
#  stdout{
#    codec => rubydebug
#  }
#}
# route events to different indices based on the type tag set by each input
output{
  if [type] == "tcplog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "long-tcplog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system-rsyslog" {
    elasticsearch {
      hosts => ["172.31.2.101:9200"]
      index => "long-rsyslog-%{+YYYY.MM.dd}"
    }
  }
}
Start Logstash with the combined config
root@long:~# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp-log-es.conf
Access the proxied web page a few times and confirm the corresponding events arrive (a quick check is sketched below)
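A quick way to exercise both inputs and confirm the two daily indices were created, assuming the Elasticsearch node used above (the other two addresses are placeholders, substitute your Logstash and haproxy hosts):
echo "tcp conditional test" | nc <logstash-host> 8899
curl -I http://<haproxy-server-ip>/
curl 'http://172.31.2.101:9200/_cat/indices?v' | grep long-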
Then add the indices to Kibana
(omitted)
If the system time is not synchronized, run the commands below
[root@localhost ~]# ntpdate time1.aliyun.com
[root@localhost ~]# hwclock -w
Restart rsyslog
[root@localhost ~]# systemctl restart rsyslog
If the timezone is wrong, run the following command (CentOS 7)
[root@localhost ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
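On CentOS 7 the same can also be done with timedatectl:
[root@localhost ~]# timedatectl set-timezone Asia/Shanghai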