ELK Stack: Scaling with a Message Queue (Part 10)

Official documentation: https://www.elastic.co/guide/en/logstash/5.x/deploying-and-scaling.html

Pipeline

Flow: data source --> logstash (input collects, output to the message queue) --> MQ --> logstash (input reads from the message queue, filter parses, output to ES) --> ES

This layout is mainly a performance consideration. The first-tier logstash only collects raw data and does no processing, which keeps collection from the data source fast. That first tier also runs as an agent directly on the production servers, so keeping it this lightweight minimizes the load it puts on those hosts.
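The division of labour between the two tiers can be sketched with an in-memory queue standing in for Redis (the deque and the function names below are illustrative, not Logstash APIs):

```python
from collections import deque

# In-memory stand-in for the Redis list that sits between the two tiers.
broker = deque()

def shipper(raw_line):
    """First-tier logstash: collect only, no parsing - cheap on the agent host."""
    broker.append(raw_line)            # roughly what the redis output's push does

def indexer():
    """Second-tier logstash: pop one event, do the expensive parsing, hand to ES."""
    raw = broker.popleft()             # roughly what the redis input's pop does
    return {"message": raw, "fields": raw.split()}  # stand-in for grok + ES output

shipper('192.168.137.1 - - [07/Jan/2017:12:43:00 +0800] "GET / HTTP/1.1" 200 12')
event = indexer()                      # broker is drained again once indexed
```

The point of the sketch: the shipper does constant-time appends while the indexer absorbs all the parsing cost, and the queue buffers any speed mismatch between the two.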

This chapter uses Redis as the message queue.

yum install -y redis

Configure /etc/redis.conf:

daemonize yes
bind 192.168.137.11

Start it with systemctl start redis (you can verify it is up with redis-cli -h 192.168.137.11 ping, which should answer PONG).

Using the logstash redis output plugin

Official documentation: https://www.elastic.co/guide/en/logstash/5.x/plugins-outputs-redis.html

1. Stdin input, Redis output

input {
    stdin {}
}

filter {
}

output{
    redis {
        host => ["192.168.137.11"]
        port => 6379
        db => 1
        data_type => "list"
        key => "demo"
        timeout => 10
    }
}

Start: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
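After typing a few lines, you can check that they landed in Redis with redis-cli -h 192.168.137.11 (SELECT 1, then LLEN demo and LRANGE demo 0 -1). Each list entry is a JSON-encoded event; the sketch below shows the rough shape (the exact field set, timestamp, and hostname are assumptions, not captured output):

```python
import json

# Assumed shape of one entry on the "demo" list: a JSON document holding the
# typed line in "message" plus the metadata logstash normally adds.
event = {
    "message": "hello redis",
    "@version": "1",
    "@timestamp": "2017-01-07T04:43:00.000Z",  # hypothetical event time
    "host": "agent-node",                      # hypothetical agent hostname
}
wire = json.dumps(event)       # roughly what LRANGE prints for one entry
decoded = json.loads(wire)     # what the second-tier redis input recovers
```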

2. Apache log input, Redis output

input {
    file {
        path => "/etc/httpd/logs/access_log"
        start_position => "beginning"
    }
}

filter {
}

output{
    redis {
        host => ["192.168.137.11"]
        port => 6379
        db => 1
        data_type => "list"
        key => "apache"
        timeout => 10
    }
}

Start: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/apache.conf

Using the logstash redis input plugin

Official documentation: https://www.elastic.co/guide/en/logstash/5.x/plugins-inputs-redis.html

1. Redis input, stdout output

input{
    redis {
        host => ["192.168.137.11"]
        port => 6379
        db => 1
        data_type => "list"
        key => "apache"
        timeout => 10
    }
}

filter {
}

output{
    stdout{
        codec => rubydebug
    }
}

Start: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/indexer.conf

2. Redis input, grok filter, stdout output

input{
    redis {
        host => ["192.168.137.11"]
        port => 6379
        db => 1
        data_type => "list"
        key => "apache"
        timeout => 10
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}

output{
    stdout{
        codec => rubydebug
    }
}

Start: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/indexer.conf
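What grok's %{COMBINEDAPACHELOG} pattern does can be approximated with a plain regular expression; the sketch below extracts a few of the fields the real pattern produces (the regex is a simplification written for this example, not the actual grok definition, which also captures referrer and user agent):

```python
import re

# Simplified stand-in for COMBINEDAPACHELOG, covering the common-log core.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) \S+" (?P<response>\d{3}) (?P<bytes>\d+|-)'
)

line = ('192.168.137.1 - - [07/Jan/2017:12:43:00 +0800] '
        '"GET /index.html HTTP/1.1" 200 4523')
fields = COMBINED.match(line).groupdict()  # same field names grok would emit
```

With rubydebug output you would see these extracted fields alongside the original message on each event.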

3. Redis input, grok filter, Elasticsearch output

input{
    redis {
        host => ["192.168.137.11"]
        port => 6379
        db => 1
        data_type => "list"
        key => "apache"
        timeout => 10
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}

output{
    elasticsearch {
        hosts => ["192.168.137.11:9200"]
        index => "apache-accesslog-%{+YYYY.MM.dd}"
    }
}

Start: /opt/logstash/bin/logstash -f /etc/logstash/conf.d/indexer.conf
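The %{+YYYY.MM.dd} in the index option is a date-format sprintf: Logstash expands it from the event's @timestamp (in UTC), so events are routed into one index per day. A minimal sketch of the expansion:

```python
from datetime import datetime, timezone

# The event's @timestamp drives the index name: one index per calendar day.
ts = datetime(2017, 1, 7, 4, 43, tzinfo=timezone.utc)
index = "apache-accesslog-" + ts.strftime("%Y.%m.%d")  # -> apache-accesslog-2017.01.07
```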

Pipeline summary diagram: (image not preserved; the flow is the one described at the top of this post)

posted @ 2017-01-07 12:43 shhnwangjian