Flume Monitoring
Flume itself ships with HTTP and Ganglia monitoring support, but we currently rely on Zabbix for monitoring. We therefore added a Zabbix monitoring module to Flume that integrates seamlessly with the SA (sysadmin) team's monitoring service.
We also trim Flume's metrics down: only the metrics we actually need are sent to Zabbix, so as not to put unnecessary load on the Zabbix server. What we care about most is whether Flume can write the logs sent from the application side to HDFS in time, so the metrics we watch are (see the extraction sketch after this list):
- Source: number of events received and number of events accepted
- Channel: number of events backlogged in the channel
- Sink: number of events already drained (processed)
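For reference, here is a minimal sketch of pulling exactly these counters out of the JSON metrics endpoint described further down (it assumes the agent was started with -Dflume.monitoring.type=http -Dflume.monitoring.port=34545 as in the startup command below; ChannelSize is an assumption, since the sample output later on this page only shows source and sink groups):
# Sketch: extract only the counters we care about from the JSON metrics endpoint
# Source  -> EventReceivedCount / EventAcceptedCount
# Channel -> ChannelSize (events currently backlogged; assumption, not present in the sample output below)
# Sink    -> EventDrainSuccessCount
curl -s http://localhost:34545/metrics \
  | sed -e 's/\([,]\)\s*/\1\n/g' -e 's/[{}]/\n/g' -e 's/[",]//g' \
  | grep -E 'EventReceivedCount|EventAcceptedCount|ChannelSize|EventDrainSuccessCount'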
Zabbix installation
http://my.oschina.net/yunnet/blog/173161
Monitoring Flume with Zabbix
#JVM performance monitoring
Young GC counts
sudo /usr/local/jdk1.7.0_21/bin/jstat -gcutil $(pgrep java)|tail -1|awk '{print $6}'
Full GC counts
sudo /usr/local/jdk1.7.0_21/bin/jstat -gcutil $(pgrep java)|tail -1|awk '{print $8}'
JVM total memory usage
sudo /usr/local/jdk1.7.0_21/bin/jmap -histo $(pgrep java)|grep Total|awk '{print $3}'
JVM total instances usage
sudo /usr/local/jdk1.7.0_21/bin/jmap -histo $(pgrep java)|grep Total|awk '{print $2}'
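To see where the awk field numbers come from: on JDK 7, jstat -gcutil prints the columns S0 S1 E O P YGC YGCT FGC FGCT GCT, so $6 is the young-GC count (YGC) and $8 is the full-GC count (FGC). A quick way to confirm the positions on your own JDK (a sketch; piping through head -1 keeps only the header):
# Print only the jstat -gcutil header line to confirm the column order
# (on JDK 7 it reads: S0 S1 E O P YGC YGCT FGC FGCT GCT, i.e. YGC = $6, FGC = $8)
sudo /usr/local/jdk1.7.0_21/bin/jstat -gcutil $(pgrep java|head -1) | head -1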
#Flume application metrics monitoring
Add the JSON reporting parameters at startup so the metrics can be accessed at http://localhost:34545/metrics:
bin/flume-ng agent -n consumer -c conf -f bin/conf.properties -Dflume.monitoring.type=http -Dflume.monitoring.port=34545 &
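If you prefer not to pass these options on every start, the same monitoring properties can also go into JAVA_OPTS in conf/flume-env.sh (a sketch, assuming the stock flume-env.sh template is in use):
# conf/flume-env.sh (sketch): expose the JSON metrics endpoint on every agent start
export JAVA_OPTS="$JAVA_OPTS -Dflume.monitoring.type=http -Dflume.monitoring.port=34545"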
#Generate some test data
for i in {1..100};do echo "exec test$i" >> /usr/logs/log.10;echo $i;done
#Pretty-print the JSON output with a shell one-liner
curl http://localhost:34545/metrics 2>/dev/null|sed -e 's/\([,]\)\s*/\1\n/g' -e 's/[{}]/\n/g' -e 's/[",]//g'
SOURCE.kafka:
OpenConnectionCount:0
AppendBatchAcceptedCount:0
AppendBatchReceivedCount:0
Type:SOURCE
EventAcceptedCount:7252225
AppendReceivedCount:0
StopTime:0
EventReceivedCount:0
StartTime:1407731371546
AppendAcceptedCount:0
SINK.es:
BatchCompleteCount:10697
ConnectionFailedCount:0
EventDrainAttemptCount:7253061
ConnectionCreatedCount:1
BatchEmptyCount:226
Type:SINK
ConnectionClosedCount:0
EventDrainSuccessCount:7253061
StopTime:0
StartTime:1407731371546
BatchUnderflowCount:14857
SINK.hdp:
BatchCompleteCount:1290
ConnectionFailedCount:0
EventDrainAttemptCount:8057502
ConnectionCreatedCount:35787
BatchEmptyCount:54894
Type:SINK
ConnectionClosedCount:35609
EventDrainSuccessCount:8057502
StopTime:0
StartTime:1407731371545
BatchUnderflowCount:45433
-------------- Note: $1 below is a variable holding the metric name to grep for, e.g. EventDrainSuccessCount (source, channel, and sink counters all work the same way).
#Script that Zabbix calls to query Flume metrics
cat /opt/monitor_flume.sh
curl http://localhost:34545/metrics 2>/dev/null|sed -e 's/\([,]\)\s*/\1\n/g' -e 's/[{}]/\n/g' -e 's/[",]//g'|grep $1|awk -F: '{print $2}'
curl http://localhost:34545/metrics 2>/dev/null|sed -e 's/\([,]\)\s*/\1\n/g' -e 's/[{}]/\n/g' -e 's/[",]//g'|grep Total|awk -F: '{print $2}'
curl http://localhost:34545/metrics 2>/dev/null|sed -e 's/\([,]\)\s*/\1\n/g' -e 's/[{}]/\n/g' -e 's/[",]//g'|grep StartTime|awk -F: '{print $2}'
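A quick manual test of the script (hypothetical invocation; the argument is whichever counter name you want Zabbix to poll):
# Example: poll a counter by name; prints the matching value(s), one per matching component
/bin/bash /opt/monitor_flume.sh EventDrainSuccessCount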
#Deploy the keys in the Zabbix agent configuration file
cat zabbix_flume_jdk.conf
UserParameter=ygc.counts,sudo /usr/local/jdk1.7.0_21/bin/jstat -gcutil $(pgrep java|head -1)|tail -1|awk '{print $6}'
UserParameter=fgc.counts,sudo /usr/local/jdk1.7.0_21/bin/jstat -gcutil $(pgrep java|head -1)|tail -1|awk '{print $8}'
UserParameter=jvm.memory.usage,sudo /usr/local/jdk1.7.0_21/bin/jmap -histo $(pgrep java|head -1)|grep Total|awk '{print $3}'
UserParameter=jvm.instances.usage,sudo /usr/local/jdk1.7.0_21/bin/jmap -histo $(pgrep java|head -1)|grep Total|awk '{print $2}'
UserParameter=flume.monitor[*],/bin/bash /opt/monitor_flume.sh $1
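After reloading the Zabbix agent, the new keys can be checked from the Zabbix server with zabbix_get (a sketch; 10.0.0.1 stands in for the agent host):
# Verify the custom keys remotely (hypothetical agent address 10.0.0.1)
zabbix_get -s 10.0.0.1 -k "flume.monitor[EventDrainSuccessCount]"
zabbix_get -s 10.0.0.1 -k "ygc.counts"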