Chukwa: A Distributed Log Collection System

1. Installation and Deployment

1.1 Environment Requirements

1. The JDK version must be 1.6 or later; JDK 1.6 is used in this example.

2. The Hadoop version must be 0.20.205.1 or later; Hadoop 1.0.1 is used in this example.

3. Running HICC requires HBase 0.90.4.
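Before proceeding, it may help to confirm the versions installed on the machine. A minimal check, assuming java and hadoop are already on the PATH:

# Print the JDK and Hadoop versions
java -version
hadoop version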

1.2 Version Selection

Chukwa 0.5 is used here.

1.3 Installation Steps

1. Download Chukwa 0.5 from the following mirror:

http://labs.renren.com/apache-mirror/incubator/chukwa/chukwa-0.5.0/

Download these two files:

chukwa-incubating-0.5.0.tar.gz

chukwa-incubating-src-0.5.0.tar.gz

and extract both archives.

2. Copy the conf and script directories from chukwa-incubating-src-0.5.0 into the chukwa-incubating-0.5.0 directory, then rename chukwa-incubating-0.5.0 to chukwa (see the sketch below).
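A minimal shell sketch of these two steps, assuming both archives sit in the current directory (paths are illustrative; the next section instead extracts into /usr/local/cloud/src and uses a symlink rather than a rename):

# Extract both packages
tar -zxvf chukwa-incubating-0.5.0.tar.gz
tar -zxvf chukwa-incubating-src-0.5.0.tar.gz
# Copy conf and script from the source package into the binary package
cp -r chukwa-incubating-src-0.5.0/conf chukwa-incubating-0.5.0/
cp -r chukwa-incubating-src-0.5.0/script chukwa-incubating-0.5.0/
# Rename the binary package directory to chukwa
mv chukwa-incubating-0.5.0 chukwa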

1.4 Directory Layout

Program directory

tar -zxvf chukwa-incubating-0.5.0.tar.gz -C /usr/local/cloud/src/
cd /usr/local/cloud/
ln -s -f /usr/local/cloud/src/chukwa-incubating-0.5.0 chukwa

 

Data directories

mkdir -p /data/logs/chukwa
mkdir -p /data/pids/chukwa

 

1.5 Configuration

 

vim /etc/profile
export CHUKWA_HOME=/usr/local/cloud/chukwa
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$CHUKWA_HOME/bin:$PATH
source /etc/profile
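A quick sanity check that the environment variables took effect in the current shell:

# Should print /usr/local/cloud/chukwa and list the configuration files
echo $CHUKWA_HOME
ls $CHUKWA_HOME/etc/chukwa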

 

Agent configuration

 

  • Specify the agent addresses in $CHUKWA_HOME/etc/chukwa/agents
# Agent list; this walkthrough uses a single-machine setup
localhost

  • Configure agent parameters in $CHUKWA_HOME/etc/chukwa/chukwa-agent-conf.xml
<!-- Polling interval for detecting new file content -->
<property>
    <name>chukwaAgent.adaptor.context.switch.time</name>
    <value>5000</value>
</property>
<!-- Maximum amount of file content read in one increment -->
<property>
    <name>chukwaAgent.fileTailingAdaptor.maxReadSize</name>
    <value>2097152</value>
</property>

Collector configuration

  • Specify the collector addresses in $CHUKWA_HOME/etc/chukwa/collectors
# In a single-machine deployment this is the same as agents
localhost

 

  • Configure collector parameters in $CHUKWA_HOME/etc/chukwa/chukwa-collector-conf.xml

<!-- Chukwa 0.5 adds an HBase writer implementation; if it is not needed, keep the default pipeline -->
<!-- Sequence File Writer parameters -->
<property>
    <name>chukwaCollector.pipeline</name>
    <value>org.apache.hadoop.chukwa.datacollection.writer.SocketTeeWriter,org.apache.hadoop.chukwa.datacollection.writer.SeqFileWriter</value>
</property>

<!-- HDFS filesystem address the collector writes to -->
<property>
    <name>writer.hdfs.filesystem</name>
    <value>hdfs://hadooptest:9000</value>
</property>
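Since the collector writes into HDFS, it is worth confirming that the address configured in writer.hdfs.filesystem is reachable before starting any services. A minimal check, assuming the Hadoop client on this machine can reach the NameNode:

# Should list the HDFS root without errors (hadooptest:9000 is the address configured above)
hadoop fs -ls hdfs://hadooptest:9000/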

 

Global configuration

 

# Add or modify the following entries in $CHUKWA_HOME/etc/chukwa/chukwa-env.sh
export JAVA_HOME=/usr/java/default
export CLASSPATH=.:$JAVA_HOME/lib
export HADOOP_HOME=/usr/local/cloud/hadoop
export CHUKWA_HOME=/usr/local/cloud/chukwa
export CHUKWA_CONF_DIR=${CHUKWA_HOME}/etc/chukwa
export CHUKWA_PID_DIR=/data/pids/chukwa
export CHUKWA_LOG_DIR=/data/logs/chukwa

 

Monitored file settings

 

 

# Log files to monitor can be listed in $CHUKWA_HOME/etc/chukwa/initial_adaptors,
# although adaptors are usually added by connecting to the agent control port with telnet instead (see 3.3)
# Format: add [name =] <adaptor_class_name> <datatype> <adaptor specific params> <initial offset>
# Fields in order: adaptor implementation class, data type, start offset, log file, bytes already collected
add filetailer.CharFileTailingAdaptorUTF8 typeone 0 /data/logs/web/typeone.log 0
add filetailer.CharFileTailingAdaptorUTF8 typetwo 0 /data/logs/web/typetwo.log 0

 

2. Starting the Services

2.1 Start the collector process

cd $CHUKWA_HOME/
sbin/start-collectors.sh

 

2.2 Start the agent process

sbin/start-agents.sh

2.3 Start the data processors

 

sbin/start-data-processors.sh

 

Sample startup output:

[hadoop@hadooptest chukua]$ sbin/start-collectors.sh
localhost: starting collector, logging to /data/logs/chukwa/chukwa-hadoop-collector-hadooptest.out
localhost: WARN: option chukwa.data.dir may not exist; val = /chukwa
localhost: Guesses:
localhost:  chukwaRootDir null
localhost:  fs.default.name URI
localhost:  nullWriter.dataRate Time
localhost: WARN: option chukwa.tmp.data.dir may not exist; val = /chukwa/temp
localhost: Guesses:
localhost:  chukwaRootDir null
localhost:  nullWriter.dataRate Time
localhost:  chukwaCollector.tee.port Integral
[hadoop@hadooptest chukua]$ sbin/start-agents.sh
localhost: starting agent, logging to /data/logs/chukwa/chukwa-hadoop-agent-hadooptest.out
localhost: OK chukwaAgent.adaptor.context.switch.time [Time] = 5000
localhost: OK chukwaAgent.checkpoint.dir [File] = /data/logs/chukwa/
localhost: OK chukwaAgent.checkpoint.interval [Time] = 5000
localhost: WARN: option chukwaAgent.collector.retries may not exist; val = 144000
localhost: Guesses:
localhost:  chukwaAgent.connector.retryRate Time
localhost:  chukwaAgent.sender.retries Integral
localhost:  chukwaAgent.control.remote Boolean
localhost: WARN: option chukwaAgent.collector.retryInterval may not exist; val = 20000
localhost: Guesses:
[hadoop@hadooptest chukua]$ sbin/start-data-processors.sh
starting archive, logging to /data/logs/chukwa/chukwa-hadoop-archive-hadooptest.out
starting demux, logging to /data/logs/chukwa/chukwa-hadoop-demux-hadooptest.out
starting dp, logging to /data/logs/chukwa/chukwa-hadoop-dp-hadooptest.out
[hadoop@hadooptest chukua]$
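To shut the services down again, the matching stop scripts can be used; the names below are assumed to mirror the start scripts, so verify they exist under sbin/ in your installation:

# Stop the data processors, agents, and collectors (script names assumed; check sbin/ first)
sbin/stop-data-processors.sh
sbin/stop-agents.sh
sbin/stop-collectors.sh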

 

3. Collection Test

3.1 Create test data

# Write the following test log entries to /data/logs/web/webone
- 10.0.0.10 [17/Oct/2011:23:20:40 +0800] GET /img/chukwa0.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.11 [17/Oct/2011:23:20:41 +0800] GET /img/chukwa1.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.12 [17/Oct/2011:23:20:42 +0800] GET /img/chukwa2.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.13 [17/Oct/2011:23:20:43 +0800] GET /img/chukwa3.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.14 [17/Oct/2011:23:20:44 +0800] GET /img/chukwa4.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.15 [17/Oct/2011:23:20:45 +0800] GET /img/chukwa5.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.16 [17/Oct/2011:23:20:46 +0800] GET /img/chukwa6.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.17 [17/Oct/2011:23:20:47 +0800] GET /img/chukwa7.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.18 [17/Oct/2011:23:20:48 +0800] GET /img/chukwa8.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 10.0.0.19 [17/Oct/2011:23:20:49 +0800] GET /img/chukwa9.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"

# Write the following test log entries to /data/logs/web/webtwo
- 192.168.0.10 [17/Oct/2011:23:20:40 +0800] GET /img/chukwa0.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.11 [17/Oct/2011:23:21:40 +0800] GET /img/chukwa1.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.12 [17/Oct/2011:23:22:40 +0800] GET /img/chukwa2.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.13 [17/Oct/2011:23:23:40 +0800] GET /img/chukwa3.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.14 [17/Oct/2011:23:24:40 +0800] GET /img/chukwa4.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.15 [17/Oct/2011:23:25:40 +0800] GET /img/chukwa5.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.16 [17/Oct/2011:23:26:40 +0800] GET /img/chukwa6.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.17 [17/Oct/2011:23:27:40 +0800] GET /img/chukwa7.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.18 [17/Oct/2011:23:28:40 +0800] GET /img/chukwa8.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"
- 192.168.0.19 [17/Oct/2011:23:29:40 +0800] GET /img/chukwa9.jpg HTTP/1.0 "404" "16" "Mozilla/5.0 (MSIE 9.0; Windows NT 6.1;)"

 

3.2 Simulate web log generation

# Save the following script as /data/logs/web/weblogadd.sh
#!/bin/bash
cat /data/logs/web/webone >> /data/logs/web/typeone.log
cat /data/logs/web/webtwo >> /data/logs/web/typetwo.log

# Make the script executable
chmod +x weblogadd.sh

# Add an entry to /etc/crontab to simulate web log generation once a minute
*/1 * * * * hadoop /data/logs/web/weblogadd.sh
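To confirm the cron job is appending data, a simple check is to watch the monitored logs grow; each run should add ten lines to each file:

# Repeat after a minute or two; the counts should keep increasing
wc -l /data/logs/web/typeone.log /data/logs/web/typetwo.log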


 

3.3 Add log monitoring

# Connect to the agent's control port with telnet (9093 by default) and register the adaptors
telnet hadooptest 9093
add org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8 typeone 0 /data/logs/web/typeone.log 0
add org.apache.hadoop.chukwa.datacollection.adaptor.filetailer.CharFileTailingAdaptorUTF8 typetwo 0 /data/logs/web/typetwo.log 0
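To verify that the adaptors were registered and data is flowing, you can query the agent over the same telnet session and then look for output in HDFS. The list command and the /chukwa location reflect Chukwa defaults (chukwa.data.dir defaults to /chukwa, as the startup warnings above show); treat the exact subdirectories as assumptions to check against your deployment:

# Inside the telnet session: each adaptor added above should be listed
list
# Back in a shell: collected and demuxed data should start appearing under /chukwa
hadoop fs -ls /chukwa
hadoop fs -ls /chukwa/logs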

See also:

http://hi.baidu.com/zhangxinandala/item/db5d8adc22bab0d5241f4017

http://hadoop.readthedocs.org/en/latest/Hadoop-Chukwa.html#id3

 
