Troubleshooting Kafka 0.8 broker outages

How to investigate high CPU utilization

Check whether this machine has more connections than its peers; count its sockets (connections plus listening ports): netstat -anlp | wc -l
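One way to make the comparison concrete (a hypothetical helper, not from the original notes): count connections by TCP state, then run the same pipeline on a healthy peer and diff the results.

```shell
# Hypothetical helper: summarize TCP connections by state. Column 6 of
# `netstat -ant` output is the connection state; header lines are filtered
# out because their 6th field is not an all-caps state name.
conn_states() {
  awk 'NF >= 6 && $6 ~ /^[A-Z_]+$/ {print $6}' | sort | uniq -c | sort -rn
}

# Typical use on a broker:
#   netstat -ant | conn_states
```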

Stopping and starting Kafka 0.8

  • Start: cd /usr/local/kafka-0.8.0-release/ && nohup ./bin/kafka-server-start.sh config/server.properties &
  • Stop: ps ax | grep -i 'kafka.Kafka' | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
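The kill above returns immediately; a sketch of a stop that waits for the broker to actually exit before reporting success (an assumption about desired behavior, not a script from the post):

```shell
# Hypothetical graceful stop: send SIGTERM, then poll until the process is
# gone or the retry budget runs out. Returns 0 once the process has exited.
stop_kafka() {
  local pid=$1 tries=${2:-30}
  kill -SIGTERM "$pid" 2>/dev/null || return 0   # already gone
  while kill -0 "$pid" 2>/dev/null && [ "$tries" -gt 0 ]; do
    sleep 1
    tries=$((tries - 1))
  done
  ! kill -0 "$pid" 2>/dev/null   # succeeds only if the process exited
}
```

The pid argument would come from the same ps | grep | awk pipeline shown above.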

Starting and stopping Kafka 0.10

  • Start: sh /usr/local/kafka/kafka-release/bin/kafka-server-start.sh -daemon /usr/local/kafka/kafka-release/config/server.properties (no trailing & needed: -daemon already detaches)
  • Stop: kill the broker pid with SIGTERM (or use bin/kafka-server-stop.sh)

1 Problem description

A broker's ZooKeeper session timed out, its ephemeral node in ZooKeeper was deleted, and an alert fired. The goal: find out why the session timed out.

2 Steps

1. Check byte-out (network traffic) on hpm

2. Check the load around that time on hpm

3. Use sar to check the broker's network load at the time: sar -n ALL -f /var/log/sa/sa08 | grep bond0 > ~/netinfo
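To turn the sar dump into something actionable, a sketch that flags sampling intervals whose inbound rate crosses a threshold (column positions assume sar's 12-hour-clock device output: time, AM/PM, IFACE, rxpck/s, txpck/s, rxkB/s, txkB/s, ...):

```shell
# Hypothetical filter: print intervals where inbound traffic ($6 = rxkB/s in
# the assumed layout) exceeds a threshold given in kB/s.
flag_rx_spikes() {
  local threshold=$1
  awk -v t="$threshold" '$6 + 0 > t {print $1, $2, $6 " kB/s"}'
}

# Typical use:
#   sar -n ALL -f /var/log/sa/sa08 | grep bond0 | flag_rx_spikes 100000
```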

3 Tips

When there is spare time, investigate the cause of the 2016-07-22 04:27 incident on broker 10.39.48.158.

4 A detailed case

Machine 10.39.48.158 hit the session-timeout problem twice. The second time: at 12:36 on 2016-07-24, inbound traffic (byte_in) spiked to 230 M/s and load reached 70. Normally this machine sees a few M/s with load steady around 1. Because it had already gone down once, it was no longer the leader for any topic. So where did the traffic spike come from?

  1. Check whether a topic-partition follower reports the BytesInPerSec metric
  • The leader has it; the follower does not. Method: use JMX to read BytesInPerSec from both the leader and the follower of the same topic partition.

2. Check whether, at some moment, this machine (as a follower) saw its inbound traffic spike because inbound traffic to a topic it replicates spiked on the leader, and the follower had to sync it.

  • Topic weibomobile had 130 M/s of inbound traffic at that moment, and 48.158 had to replicate 6 of its partitions. Together with several other topics carrying smaller inbound traffic, this pushed up 48.158's inbound traffic.
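The replication arithmetic above can be sketched as a back-of-the-envelope estimate. Assuming partitions of a topic receive roughly even traffic, a follower's replication inflow is about topic_bytes_in × partitions_followed / total_partitions; the partition totals in the example below are made up for illustration:

```shell
# Hypothetical estimator. Input lines:
#   topic  total_MBps  total_partitions  partitions_on_this_broker
# Sums each topic's share of traffic replicated to this broker.
follower_inflow() {
  awk '{ sum += $2 * $4 / $3 } END { printf "%.1f MB/s\n", sum }'
}

# e.g. weibomobile at 130 MB/s, this broker following 6 of an assumed 24
# partitions, plus a smaller topic:
#   printf 'weibomobile 130 24 6\nsmalltopic 10 10 5\n' | follower_inflow
```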

3. Resolution

  • If a similar problem recurs, raise 48.158's session timeout from 6 s to 30 s and observe.
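A sketch of the corresponding broker config change (the 0.8 property is zookeeper.session.timeout.ms, whose 6000 ms default matches the 6 s above):

```properties
# config/server.properties on 48.158 — raise the ZooKeeper session
# timeout from the default 6000 ms to 30 s
zookeeper.session.timeout.ms=30000
```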

5 Machine 48.158 loses its ZK node again

1. Symptoms

The alert came at 12:58: the node was lost at 12:57 and the broker failed to reconnect within 1 min, triggering the alert.
Observing the machine:

  • load_one: steady at 75 from 12:55:30 to 12:59:00;
  • cpu_system: utilization at 12.8% from 13:00:00 to 13:02:00;
  • bytes_in: traffic spiked to 243 M at 13:02:30.

2. Analysis

Check whether the partitions on 48.158 for any of its 46 topics saw a traffic spike. This data is not currently collected and needs to be.

Examining the LogEndOffsetIncrease metric for these 46 topics: at 12:55:00 the increase jumped for all topics.
(??? Could a GC have occurred at that moment?)
The original process had already been killed and restarted. The GC stats of the newly started Kafka process, below, are normal: young GCs average 38 ms each, full GCs average 325 ms each.

jstat -gcutil 7089
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
  5.18   0.00  68.29  14.54  60.00 308028 11991.504  1258  409.274 12400.778
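The averages quoted above can be cross-checked from the jstat counters (YGCT and FGCT are cumulative seconds; YGC and FGC are collection counts):

```shell
# Average pause per collection = cumulative GC time / collection count.
awk 'BEGIN {
  printf "avg young GC: %.1f ms\n", 11991.504 / 308028 * 1000  # prints 38.9 ms
  printf "avg full GC: %.1f ms\n", 409.274 / 1258 * 1000       # prints 325.3 ms
}'
```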

A key piece of GC information, from 杰哥:

(image: GC log screenshot, not preserved)
When Kafka went down, load and traffic were very high, and the CMS GC remark phase took 7 s, which broke the ZK session and took Kafka down. (remark is stop-the-world)
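Given that a 7 s remark pause was the trigger, two standard HotSpot flags are commonly used to shorten the stop-the-world CMS remark phase. This is general CMS tuning, an assumption on my part, not the fix the author applied:

```shell
# General CMS tuning (not from the post): CMSScavengeBeforeRemark runs a
# young GC right before remark so remark scans fewer live objects;
# CMSParallelRemarkEnabled uses multiple threads during remark.
export KAFKA_OPTS="$KAFKA_OPTS \
  -XX:+UseConcMarkSweepGC \
  -XX:+CMSParallelRemarkEnabled \
  -XX:+CMSScavengeBeforeRemark"
```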

6 Machines 4.215, 4.216, and 4.217 each lost their ZK node once

1. Symptoms + analysis

16/08/08 14:42:09 INFO eager.MonitorBrokerDown: -----broker down info:kafka broker had shutdown. Maybe this is a normal behavior.    last alive brokers:[19, 48151, 17, 18, 15, 48153, 48152, 16, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      current alive brokers:[19, 48151, 17, 18, 15, 48153, 48152, 16, 13, 48155, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      invalid broker ip info:{ "host":"10.39.4.215", "jmx_port":9999, "port":19092, "version":1 }

byte-out, byte-in, and load all dropped; load around 2.5.
                IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   (columns apply to all sar excerpts below)
02:30:01 PM     bond0 119951.04 125071.66 114175.78  91512.58      0.00      0.00     11.26
02:40:01 PM     bond0 118530.20 125037.93 110370.88  95596.69      0.00      0.00     11.13
02:50:01 PM     bond0  85377.47  83437.07  90981.47  50017.23      0.00      0.00     10.94


16/08/08 16:35:33 INFO eager.MonitorBrokerDown: -----broker down info:kafka broker had shutdown. Maybe this is a normal behavior.    last alive brokers:[19, 48151, 17, 18, 15, 48153, 48152, 16, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      current alive brokers:[19, 48151, 17, 18, 48153, 48152, 16, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      invalid broker ip info:{ "host":"10.39.4.216", "jmx_port":9999, "port":19092, "version":1 }

byte-out: 41–62 M; byte-in: 95–120 M; load steady around 3.
04:10:01 PM     bond0 132737.19 130567.37 138791.12  81211.07      0.00      0.00     11.25
04:20:01 PM     bond0 131918.26 134324.10 129449.86  88444.40      0.00      0.00     11.22
04:30:01 PM     bond0  97577.10  98807.90  93055.00  61815.56      0.00      0.00     11.32
04:40:01 PM     bond0  98629.53  94743.99 108407.71  44847.14      0.00      0.00     11.34

16/08/08 17:08:07 INFO eager.MonitorBrokerDown: -----broker down info:kafka broker had shutdown. Maybe this is a normal behavior.    last alive brokers:[19, 48151, 17, 18, 15, 48153, 48152, 16, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      current alive brokers:[19, 48151, 17, 18, 15, 48153, 48152, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      invalid broker ip info:{ "host":"10.39.4.217", "jmx_port":9999, "port":19092, "version":1 }

byte-out: 102–182 M; byte-in: 70–84 M; load steady around 1.6.
05:00:01 PM     bond0 140487.48 174511.06  82636.05 184825.13      0.00      0.00     11.58
05:10:01 PM     bond0 106820.66 125645.42  72597.47 118687.98      0.00      0.00     11.39
05:20:01 PM     bond0  69833.26  64922.26  82521.11  24865.14      0.00      0.00     11.47

A new one: 10.39.4.210 lost its ZK node twice in a single day.

16/08/09 23:16:37 INFO eager.MonitorBrokerDown: -----broker down info:kafka broker had shutdown. Maybe this is a normal behavior.    last alive brokers:[19, 48151, 17, 18, 15, 48153, 16, 48152, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      current alive brokers:[19, 48151, 17, 18, 15, 48153, 16, 48152, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9].      invalid broker ip info:{ "host":"10.39.4.210", "jmx_port":9999, "port":19092, "version":1 }

Analysis:
byte-out around 5 M, byte-in around 20 M, load around 5. Nothing abnormal in the server log.
11:10:01 PM     bond0 109463.54 123069.26  70532.96 107504.21      0.00      0.00     10.76
11:20:02 PM     bond0 104557.60 111693.01  66477.78  80714.76      0.00      0.00     10.58
11:30:02 PM     bond0  81705.83  73025.27  65993.75   7403.50      0.00      0.00     10.74

16/08/10 04:55:39 INFO eager.MonitorBrokerDown: -----broker down info:kafka broker had shutdown. Maybe this is a normal behavior.    last alive brokers:[19, 48151, 17, 18, 15, 48153, 16, 48152, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9, 8].      current alive brokers:[19, 48151, 17, 18, 15, 48153, 16, 48152, 13, 48155, 14, 48154, 11, 48157, 12, 48156, 48158, 48159, 48160, 3, 2, 10, 1, 0, 7, 6, 5, 4, 9].      invalid broker ip info:{ "host":"10.39.4.210", "jmx_port":9999, "port":19092, "version":1 }

byte-out around 2 M, byte-in around 10 M, load around 11.
04:50:01 AM     bond0  31866.19  28762.38  11614.37   3181.36      0.00      0.00     10.58
05:00:01 AM     bond0  30127.26  27063.09  11909.66   3022.31      0.00      0.00     10.54
05:10:01 AM     bond0  32618.83  29096.87  13870.89   3218.26      0.00      0.00     10.36

7 Machine 4.220's Kafka process could not serve requests

1. Symptoms

Its ZK node was lost.

2. Analysis

In the few minutes before the outage, load climbed from 2 to 83.

8 Machine 4.210 hit zookeeper.session.timeout roughly once a day over the past 5 days

Investigating the machine: each time it disconnected, load_one spiked while byte_in, byte_out, and CPU stayed normal.
The default ZooKeeper session timeout is 6000 ms; see kafka/utils/ZkUtils.scala:743 ("zookeeper.session.timeout.ms", 6000).

Resolution

  • Raise the timeout: zookeeper.session.timeout.ms=45000
  • Add GC logging to bin/kafka-server-start.sh: export KAFKA_OPTS="-Xms4G -Xmx8G -Xmn3G -XX:+UseConcMarkSweepGC -XX:ParallelGCThreads=4 -server -Dlog4j.configuration=file:$base_dir/config/log4j.properties -Xloggc:/data0/kafka/log/gc.log -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime"
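With -XX:+PrintGCApplicationStoppedTime enabled as above, a sketch that scans the GC log for stop-the-world pauses long enough to break a ZK session (the log-line format assumed here is HotSpot's standard "Total time for which application threads were stopped: N seconds" message):

```shell
# Hypothetical scanner: print GC log lines whose stopped time exceeds a
# threshold in seconds (e.g. anything near the ZK session timeout).
long_pauses() {
  local threshold=$1
  awk -v t="$threshold" \
    '/Total time for which application threads were stopped/ {
       for (i = 1; i < NF; i++)
         if ($(i+1) == "seconds" && $i + 0 > t) { print; break }
     }'
}

# Typical use:
#   long_pauses 5 < /data0/kafka/log/gc.log
```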

posted on 2017-02-05 16:48  BYRHuangQiang
