Using Redis and Kafka as a Buffer for Log Collection


When log volume is very high, you may need to introduce a buffer layer as temporary storage, so that logs are not lost because Elasticsearch cannot keep up.
Filebeat can send logs to Redis or Kafka, which then act as a message-queue buffer.
With a buffer layer in place, however, the Filebeat modules (templates) can no longer be used to configure log collection,
so it is best to have the logs in JSON format.
https://www.elastic.co/guide/en/beats/filebeat/6.6/redis-output.html

Using a single Redis instance as the buffer
A note on using Redis as the buffer:
you can write each log type to its own key, which keeps things clear but makes the logstash config more complex;
or you can write all logs into one key and let the downstream logstash filter and route them.


redis 10.192.27.115 redis-115
nginx(filebeat) 10.192.27.100  nginx-100
logstash、es、kibana 10.192.27.111  ELK-111


1. Install, start, and test Redis   # redis 10.192.27.115 redis-115
[root@redis-115 ~]# yum install redis -y
[root@redis-115 ~]# systemctl start redis
[root@redis-115 ~]# systemctl status redis
[root@redis-115 ~]# redis-cli 
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> GET k1
"v1"
127.0.0.1:6379>

2. Configure nginx JSON logging   # nginx(filebeat) 10.192.27.100  nginx-100
Switch the nginx access log to JSON format (in the http block of nginx.conf):
    log_format  json  '{ "time_local": "$time_local", '
                           '"remote_addr": "$remote_addr", '
                           '"referer": "$http_referer", '
                           '"request": "$request", '
                           '"status": $status, '
                           '"bytes": $body_bytes_sent, '
                           '"agent": "$http_user_agent", '
                           '"x_forwarded": "$http_x_forwarded_for", '
                           '"up_addr": "$upstream_addr",'
                           '"up_host": "$upstream_http_host",'
                           '"upstream_time": "$upstream_response_time",'
                           '"request_time": "$request_time"' ' }';
3. Configure filebeat to write to different keys   # nginx(filebeat) 10.192.27.100  nginx-100
[root@nginx-100 ~]# cat /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "10.192.27.111:5601"
output.redis:
  hosts: ["10.192.27.115"]
  keys:
    - key: "nginx_access"   
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"
        
[root@nginx-100 ~]# systemctl restart filebeat      
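Before wiring up logstash, confirm that events are actually landing in Redis. A quick check on the Redis host (the key names come from the filebeat config above; list lengths will vary with traffic):

[root@redis-115 ~]# redis-cli KEYS 'nginx_*'
[root@redis-115 ~]# redis-cli LLEN nginx_access
[root@redis-115 ~]# redis-cli LRANGE nginx_access 0 0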

4. Configure logstash to read from the different keys  # logstash, es, kibana 10.192.27.111  ELK-111
[root@ELK-111 soft]# rpm -ivh logstash-6.6.0.rpm     
[root@ELK-111 ~]# cat /etc/logstash/conf.d/redis.conf    
input {
  redis {
    host => "10.192.27.115"
    port => "6379"
    db => "0"
    key => "nginx_access"
    data_type => "list"
  }
  redis {
    host => "10.192.27.115"
    port => "6379"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}

filter {
  mutate {
    convert => ["upstream_time", "float"]   # convert this field's value to float
    convert => ["request_time", "float"]    # convert this field's value to float
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

# start logstash
[root@ELK-111 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf 
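Once a few events have flowed through, the daily indices should show up in Elasticsearch. A quick check (index names as defined in the output block above):

[root@ELK-111 ~]# curl -s 'http://localhost:9200/_cat/indices?v' | grep nginx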





The optimized approach:
5. Collect all logs into a single key with filebeat   # nginx(filebeat) 10.192.27.100  nginx-100
[root@nginx-100 ~]# cat /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "10.192.27.111:5601"
output.redis:
  hosts: ["10.192.27.115"]
  key: "filebeat"
[root@nginx-100 ~]# systemctl restart filebeat    
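As before, a quick sanity check on the Redis host that the single list is being filled (both access and error events should land in it):

[root@redis-115 ~]# redis-cli LLEN filebeat
[root@redis-115 ~]# redis-cli LRANGE filebeat 0 0
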
6. logstash separates the different logs in the single key by tag
[root@ELK-111 ~]# cat /etc/logstash/conf.d/redis.conf 
input {
  redis {
    host => "10.192.27.115"
    port => "6379"
    db => "0"
    key => "filebeat"
    data_type => "list"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}
# start logstash
[root@ELK-111 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf 
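While logstash is running it consumes from the list, so its length should fall back toward zero; watching it drain confirms the whole pipeline is moving:

[root@redis-115 ~]# redis-cli LLEN filebeat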


II. Using Kafka as a buffer to collect logs

Environment preparation
Three CentOS 7 servers, each with /etc/hosts entries for the others and able to ping one another:
10.192.27.100 kafka100
10.192.27.111 kafka111
10.192.27.114 kafka114
[root@kafka100 ~]# vim /etc/hosts
[root@kafka100 ~]# cat /etc/hosts
10.192.27.100 kafka100
10.192.27.111 kafka111
10.192.27.114 kafka114
[root@kafka100 ~]# ping kafka100
[root@kafka100 ~]# ping kafka111
[root@kafka100 ~]# ping kafka114


Download, install, and verify ZooKeeper
1. ZooKeeper downloads:
http://zookeeper.apache.org/releases.html
2. Kafka downloads:
http://kafka.apache.org/downloads.html


3. Install ZooKeeper
ZooKeeper cluster property: the cluster as a whole stays available as long as more than half of its nodes are working. With a 2-node ensemble, if either node fails or goes down the cluster becomes unavailable, because the single remaining node is not more than half of the cluster. With 3 nodes, losing one leaves two, which is more than half of three, so the cluster keeps running; lose a second node and only one remains, and the cluster is unavailable.
With 4 nodes, losing one is fine, but losing two leaves two, which is not more than half of four. So a 3-node and a 4-node ensemble both survive only a single failure, which is why ensembles are usually sized with an odd number of nodes.
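As a quick reference, a quorum is floor(N/2)+1 nodes:

3 nodes -> quorum 2, tolerates 1 failure
4 nodes -> quorum 3, tolerates 1 failure
5 nodes -> quorum 3, tolerates 2 failures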
    
3.1 Upload the packages to the target directory on every node
Upload all packages to /opt/soft. Note: do this on all nodes!
[root@kafka100 ~]# mkdir /opt/soft
[root@kafka100 ~]# cd /opt/soft/
[root@kafka100 soft]# ls -lh
total 264M
-rw-r--r-- 1 root root 181M Jan  9 18:33 jdk-8u151-linux-x64.tar.gz
-rw-r--r-- 1 root root  48M Dec 15 15:34 kafka_2.11-1.0.0.tgz
-rw-r--r-- 1 root root  35M Dec 15 15:27 zookeeper-3.4.11.tar.gz

[root@kafka111 ~]# mkdir /opt/soft
[root@kafka111 ~]# cd /opt/soft/
[root@kafka111 soft]# ls -lh
total 264M
-rw-r--r-- 1 root root 181M Jan  9 18:33 jdk-8u151-linux-x64.tar.gz
-rw-r--r-- 1 root root  48M Dec 15 15:34 kafka_2.11-1.0.0.tgz
-rw-r--r-- 1 root root  35M Dec 15 15:27 zookeeper-3.4.11.tar.gz
[root@kafka114 ~]# mkdir /opt/soft
[root@kafka114 ~]# cd /opt/soft/
[root@kafka114 soft]# ls -lh
total 264M
-rw-r--r-- 1 root root 181M Jan  9 18:33 jdk-8u151-linux-x64.tar.gz
-rw-r--r-- 1 root root  48M Dec 15 15:34 kafka_2.11-1.0.0.tgz
-rw-r--r-- 1 root root  35M Dec 15 15:27 zookeeper-3.4.11.tar.gz



3.2 Node 1 configuration
Install the Java environment and verify it
[root@kafka100 soft]# cd /opt/soft
[root@kafka100 soft]# tar zxf jdk-8u151-linux-x64.tar.gz  -C /opt/
[root@kafka100 soft]# ln -s /opt/jdk1.8.0_151/ /opt/jdk
[root@kafka100 soft]# sed -i.bak '$a export JAVA_HOME=/opt/jdk\nexport PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH\nexport CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar' /etc/profile
[root@kafka100 soft]# source /etc/profile
[root@kafka100 soft]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@kafka100 soft]# ls -lh /opt/
total 8.0K
lrwxrwxrwx 1 root root   18 Mar 12 14:05 jdk -> /opt/jdk1.8.0_151/
drwxr-xr-x 8 uucp  143 4.0K Sep  6  2017 jdk1.8.0_151
drwxr-xr-x 2 root root 4.0K Mar 12 13:53 soft
Install and configure ZooKeeper
[root@kafka100 soft]# cd /opt/soft
[root@kafka100 soft]# tar zxf zookeeper-3.4.11.tar.gz -C /opt/
[root@kafka100 soft]# ln -s /opt/zookeeper-3.4.11/ /opt/zookeeper
[root@kafka100 soft]# tree -L 1 /opt/
/opt/
├── jdk -> /opt/jdk1.8.0_151/
├── jdk1.8.0_151
├── soft
├── zookeeper -> /opt/zookeeper-3.4.11/
└── zookeeper-3.4.11
5 directories, 0 files
[root@kafka100 soft]# mkdir -p /data/zookeeper
[root@kafka100 soft]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
[root@kafka100 soft]# vim /opt/zookeeper/conf/zoo.cfg
[root@kafka100 soft]# grep "^[a-zA-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.192.27.100:2888:3888
server.2=10.192.27.111:2888:3888
server.3=10.192.27.114:2888:3888
[root@kafka100 soft]# echo "1" > /data/zookeeper/myid
[root@kafka100 soft]# ls -lh /data/zookeeper/
total 4.0K
-rw-r--r-- 1 root root 2 Mar 12 14:17 myid
[root@kafka100 soft]# cat /data/zookeeper/myid 
1
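For reference: in the server.N lines, port 2888 is used by followers to connect to the leader and port 3888 is used for leader election; the value in /data/zookeeper/myid must match the N of this host's own server.N entry.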


3.3 Node 2 configuration
Configure Java: same steps as node 1; only the final myid value differs
[root@kafka111 soft]# cd /opt/soft
[root@kafka111 soft]# tar zxf jdk-8u151-linux-x64.tar.gz  -C /opt/
[root@kafka111 soft]# ln -s /opt/jdk1.8.0_151/ /opt/jdk
[root@kafka111 soft]# sed -i.bak '$a export JAVA_HOME=/opt/jdk\nexport PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH\nexport CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar' /etc/profile
[root@kafka111 soft]# source /etc/profile
[root@kafka111 soft]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Configure ZooKeeper
[root@kafka111 soft]# cd /opt/soft
[root@kafka111 soft]# tar zxf zookeeper-3.4.11.tar.gz -C /opt/
[root@kafka111 soft]# ln -s /opt/zookeeper-3.4.11/ /opt/zookeeper
[root@kafka111 soft]# mkdir -p /data/zookeeper
[root@kafka111 soft]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
[root@kafka111 soft]# vim /opt/zookeeper/conf/zoo.cfg
[root@kafka111 soft]# grep "^[a-zA-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.192.27.100:2888:3888
server.2=10.192.27.111:2888:3888
server.3=10.192.27.114:2888:3888
[root@kafka111 soft]# echo "2" > /data/zookeeper/myid
[root@kafka111 soft]# cat /data/zookeeper/myid 
2
[root@kafka111 soft]# tree -L 1 /opt/
/opt/
├── jdk -> /opt/jdk1.8.0_151/
├── jdk1.8.0_151
├── soft
├── zookeeper -> /opt/zookeeper-3.4.11/
└── zookeeper-3.4.11

3.4 Node 3 configuration
Configure Java
[root@kafka114 soft]# cd /opt/soft
[root@kafka114 soft]# tar zxf jdk-8u151-linux-x64.tar.gz  -C /opt/
[root@kafka114 soft]# ln -s /opt/jdk1.8.0_151/ /opt/jdk
[root@kafka114 soft]# sed -i.bak '$a export JAVA_HOME=/opt/jdk\nexport PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH\nexport CLASSPATH=.$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar' /etc/profile
[root@kafka114 soft]# vim /etc/profile
[root@kafka114 soft]# source /etc/profile
[root@kafka114 soft]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Configure ZooKeeper
[root@kafka114 soft]# cd /opt/soft
[root@kafka114 soft]# tar zxf zookeeper-3.4.11.tar.gz -C /opt/
[root@kafka114 soft]# ln -s /opt/zookeeper-3.4.11/ /opt/zookeeper
[root@kafka114 soft]# mkdir -p /data/zookeeper
[root@kafka114 soft]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
[root@kafka114 soft]# vim /opt/zookeeper/conf/zoo.cfg
[root@kafka114 soft]# grep "^[a-zA-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.192.27.100:2888:3888
server.2=10.192.27.111:2888:3888
server.3=10.192.27.114:2888:3888
[root@kafka114 soft]# echo "3" > /data/zookeeper/myid
[root@kafka114 soft]# cat /data/zookeeper/myid
3
[root@kafka114 soft]# tree -L 1 /opt/
/opt/
├── jdk -> /opt/jdk1.8.0_151/
├── jdk1.8.0_151
├── soft
├── zookeeper -> /opt/zookeeper-3.4.11/
└── zookeeper-3.4.11


3.5 Start ZooKeeper on each node
Node 1
[root@kafka100 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Node 2
[root@kafka111 soft]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Node 3
[root@kafka114 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

3.6 Check the ZooKeeper status on each node
Node 1
[root@kafka100 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
Node 2
[root@kafka111 soft]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
Node 3
[root@kafka114 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
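The same information is available through ZooKeeper's four-letter-word commands on the client port (enabled by default in the 3.4.x line; nc here comes from the nmap-ncat package), which is handy for scripted checks:

[root@kafka100 ~]# echo stat | nc 10.192.27.100 2181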


3.7 Basic ZooKeeper commands
Connect to any node and create some data:
we create the data on node 1, then verify it from the other nodes
[root@kafka100 ~]# /opt/zookeeper/bin/zkCli.sh -server 10.192.27.100:2181
Connecting to 10.192.27.100:2181
=================
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 10.192.27.100:2181(CONNECTED) 0] create /test "hello"
Created /test
[zk: 10.192.27.100:2181(CONNECTED) 1]
Verify the data on the other nodes
[root@kafka111 ~]# /opt/zookeeper/bin/zkCli.sh -server 10.192.27.111:2181
Connecting to 10.192.27.111:2181
===========================
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 10.192.27.111:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Mon Mar 12 15:15:52 CST 2018
mZxid = 0x100000002
mtime = Mon Mar 12 15:15:52 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 10.192.27.111:2181(CONNECTED) 1]
[root@kafka114 ~]# /opt/zookeeper/bin/zkCli.sh -server 10.192.27.114:2181
Connecting to 10.192.27.114:2181
===========================
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 10.192.27.114:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Mon Mar 12 15:15:52 CST 2018
mZxid = 0x100000002
mtime = Mon Mar 12 15:15:52 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 10.192.27.114:2181(CONNECTED) 1]


4. Install and test Kafka
4.1 Node 1 configuration
[root@kafka100 ~]# cd /opt/soft/
[root@kafka100 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka100 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka100 soft]# mkdir /opt/kafka/logs
[root@kafka100 soft]# vim /opt/kafka/config/server.properties
# settings changed from the defaults:
broker.id=1                     # must be unique for every broker in the cluster
listeners=PLAINTEXT://10.192.27.100:9092
log.dirs=/opt/kafka/logs        # where Kafka stores message data (not application logs)
log.retention.hours=24
zookeeper.connect=10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181


4.2 Node 2 configuration
[root@kafka111 ~]# cd /opt/soft/
[root@kafka111 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka111 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka111 soft]# mkdir /opt/kafka/logs
[root@kafka111 soft]# vim /opt/kafka/config/server.properties
broker.id=2
listeners=PLAINTEXT://10.192.27.111:9092
log.dirs=/opt/kafka/logs
log.retention.hours=24
zookeeper.connect=10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181

4.3 Node 3 configuration
[root@kafka114 ~]# cd /opt/soft/
[root@kafka114 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka114 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka114 soft]# mkdir /opt/kafka/logs
[root@kafka114 soft]# vim /opt/kafka/config/server.properties
broker.id=3
listeners=PLAINTEXT://10.192.27.114:9092
log.dirs=/opt/kafka/logs
log.retention.hours=24
zookeeper.connect=10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181

4.4 Start Kafka on each node
Node 1: start in the foreground first, which makes errors easier to spot
[root@kafka100 soft]# /opt/kafka/bin/kafka-server-start.sh  /opt/kafka/config/server.properties
===========================
[2018-03-14 11:04:05,397] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-03-14 11:04:05,397] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-03-14 11:04:05,414] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
When the last line shows "KafkaServer id" and "started", startup succeeded; you can then stop the foreground process and start it in the background
[root@kafka100 logs]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka100 logs]# tail -f /opt/kafka/logs/server.log
=========================
[2018-03-14 11:04:05,414] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

Node 2: start it straight in the background this time and check the log
[root@kafka111 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka111 kafka]# tail -f /opt/kafka/logs/server.log
====================================
[2018-03-14 11:04:13,679] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)
Node 3: likewise, start in the background and check the log
[root@kafka114 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka114 kafka]# tail -f /opt/kafka/logs/server.log
=======================================
[2018-03-14 11:06:38,274] INFO [KafkaServer id=3] started (kafka.server.KafkaServer)

4.5 Verify the processes
Node 1
[root@kafka100 ~]# /opt/jdk/bin/jps
4531 Jps
4334 Kafka
1230 QuorumPeerMain
Node 2
[root@kafka111 kafka]# /opt/jdk/bin/jps
2513 Kafka
2664 Jps
1163 QuorumPeerMain
Node 3
[root@kafka114 kafka]# /opt/jdk/bin/jps
2835 Jps
2728 Kafka
1385 QuorumPeerMain


4.6 Create a test topic
Create a topic named kafkatest with partitions=3 and replication-factor=3; this can be run on any node
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh  --create  --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic "kafkatest".

4.7 Describe the topic
This can be tested from any Kafka server
Node 1
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181  --topic kafkatest
Topic:kafkatest    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: kafkatest    Partition: 0    Leader: 2    Replicas: 2,3,1    Isr: 2,3,1
    Topic: kafkatest    Partition: 1    Leader: 3    Replicas: 3,1,2    Isr: 3,1,2
    Topic: kafkatest    Partition: 2    Leader: 1    Replicas: 1,2,3    Isr: 1,2,3
Node 2
[root@kafka111 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181  --topic kafkatest
Topic:kafkatest    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: kafkatest    Partition: 0    Leader: 2    Replicas: 2,3,1    Isr: 2,3,1
    Topic: kafkatest    Partition: 1    Leader: 3    Replicas: 3,1,2    Isr: 3,1,2
    Topic: kafkatest    Partition: 2    Leader: 1    Replicas: 1,2,3    Isr: 1,2,3
Node 3
[root@kafka114 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181  --topic kafkatest
Topic:kafkatest    PartitionCount:3    ReplicationFactor:3    Configs:
    Topic: kafkatest    Partition: 0    Leader: 2    Replicas: 2,3,1    Isr: 2,3,1
    Topic: kafkatest    Partition: 1    Leader: 3    Replicas: 3,1,2    Isr: 3,1,2
    Topic: kafkatest    Partition: 2    Leader: 1    Replicas: 1,2,3    Isr: 1,2,3
Reading the output: kafkatest has three partitions, numbered 0, 1, and 2. The leader of partition 0 is broker 2 (its broker.id); partition 0 has three replicas, all of which are in the Isr (in-sync replica) list, meaning they are eligible to be elected leader.

4.8 Test deleting a topic
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh --delete --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181  --topic kafkatest
Topic kafkatest is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

4.9 Verify the topic is really gone
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181  --topic kafkatest
[root@kafka100 ~]#
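As the note says, deletion only takes effect if delete.topic.enable is true. In Kafka 1.0 it already defaults to true; on older brokers it has to be set in server.properties on every broker, followed by a restart:

delete.topic.enable=true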

4.10 List all topics
First create two topics
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh --create  --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic "kafkatest".
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh --create  --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --partitions 3 --replication-factor 3 --topic kafkatest2
Created topic "kafkatest2".
Then list all topics
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh  --list --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181
kafkatest
kafkatest2
[root@kafka100 ~]#

4.11 Send messages with the console producer
Create a topic named messagetest
[root@kafka100 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --partitions 3 --replication-factor 3 --topic  messagetest
Created topic "messagetest".
Send messages. Note: the port is Kafka's 9092, not ZooKeeper's 2181
[root@kafka100 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list  10.192.27.100:9092,10.192.27.111:9092,10.192.27.114:9092 --topic  messagetest
>hello
>mymy
>Yo!
>





4.12 Consume the messages from the other Kafka servers
Note that the consumers print the messages in different orders: the topic has three partitions, and Kafka only guarantees ordering within a single partition.
[root@kafka100 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka111 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka114 ~]#  /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
mymy
Yo!


5. Troubleshooting

5.1 Wrong server entries in zoo.cfg leave ZooKeeper in standalone mode
The server entries in zoo.cfg were misnumbered (all written as server.1), so at startup each node could only find itself:
[root@kafka100 soft]# grep "^[a-zA-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.192.27.100:2888:3888
server.1=10.192.27.111:2888:3888
server.1=10.192.27.114:2888:3888
[root@kafka100 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kafka100 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
Fix: correct the server numbering on each node, then restart the ZooKeeper service. Note: this must be done on every node!
[root@kafka100 soft]# grep "^[a-zA-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=10.192.27.100:2888:3888
server.2=10.192.27.111:2888:3888
server.3=10.192.27.114:2888:3888
[root@kafka100 soft]# /opt/zookeeper/bin/zkServer.sh restart
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kafka111 soft]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower


5.2 Sending messages fails
[root@kafka100 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list  10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic  messagetest
>hellp

mymy
meme
[2018-03-14 11:47:31,269] ERROR Error when sending message to topic messagetest with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>hello
[2018-03-14 11:48:31,277] ERROR Error when sending message to topic messagetest with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>    
Cause of the error: wrong port; it should be Kafka's 9092, not ZooKeeper's 2181.
Fix: use the correct port
[root@kafka100 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list  10.192.27.100:9092,10.192.27.111:9092,10.192.27.114:9092 --topic  messagetest
>hello
>mymy
>Yo!
>

5.3 Consuming messages fails with an error
[root@kafka111 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
[2018-03-14 12:02:01,648] ERROR Unknown error when running consumer:  (kafka.tools.ConsoleConsumer$)
java.net.UnknownHostException: kafka111: kafka111: Name or service not known
    at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
    at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:135)
    at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:159)
    at kafka.consumer.Consumer$.create(ConsumerConnector.scala:112)
    at kafka.consumer.OldConsumer.<init>(BaseConsumer.scala:130)
    at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
    at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
    at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: java.net.UnknownHostException: kafka111: Name or service not known
    at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
    at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
    at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
    at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
    ... 7 more


Cause of the error: the hostname does not match the name resolved in /etc/hosts
[root@kafka111 ~]# cat /etc/hostname 
kafka111
[root@kafka111 ~]# tail -3 /etc/hosts
10.192.27.100 kafka100
10.192.27.111 kafka111
10.192.27.114 kafka114
Fix: make every host's hostname match its /etc/hosts entry, then consume again.
Set the hostname on every host (example below) and verify:
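On CentOS 7 the hostname can be set with hostnamectl; run the matching command on each of the three hosts:

[root@kafka100 ~]# hostnamectl set-hostname kafka100
[root@kafka111 ~]# hostnamectl set-hostname kafka111
[root@kafka114 ~]# hostnamectl set-hostname kafka114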
[root@kafka100 ~]# hostname
kafka100
[root@kafka100 ~]# tail -3 /etc/hosts
10.192.27.100 kafka100
10.192.27.111 kafka111
10.192.27.114 kafka114
[root@kafka111 ~]# hostname
kafka111
[root@kafka111 ~]# tail -3 /etc/hosts
10.192.27.100 kafka100
10.192.27.111 kafka111
10.192.27.114 kafka114
[root@kafka114 ~]# hostname
kafka114
[root@kafka114 ~]# tail -3 /etc/hosts
10.192.27.100 kafka100
10.192.27.111 kafka111
10.192.27.114 kafka114
Consume the messages again
[root@kafka111 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka114 ~]#  /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 10.192.27.100:2181,10.192.27.111:2181,10.192.27.114:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
mymy
Yo!


6. ELK configuration

6.1 filebeat configuration
[root@nginx-100 conf.d]# cat /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]
- type: log
  enabled: true 
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "10.192.27.111:5601"
output.kafka:
  hosts: ["10.192.27.100:9092","10.192.27.111:9092","10.192.27.114:9092"]
  topic: elklog

[root@nginx-100 ~]# systemctl restart filebeat  
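Before starting logstash, you can confirm that events are reaching Kafka by attaching a console consumer to the elklog topic (new-consumer syntax; any broker works):

[root@kafka100 ~]# /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.192.27.100:9092 --topic elklog --from-beginning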






6.2 logstash configuration
[root@ELK-111 conf.d]# cat /etc/logstash/conf.d/kafka.conf
input {
  kafka {
    bootstrap_servers => "10.192.27.100:9092"
    topics => ["elklog"]
    group_id => "logstash"
    codec => "json"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]                                                                                                                        
    convert => ["request_time", "float"]
  }
}
output {
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://localhost:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

[root@ELK-111 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf # start logstash
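One caveat about the input above: bootstrap_servers names a single broker, so logstash cannot bootstrap if kafka100 happens to be down when it starts. Listing all brokers in the comma-separated string is safer:

    bootstrap_servers => "10.192.27.100:9092,10.192.27.111:9092,10.192.27.114:9092"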
