Project in Practice from 0 to 1: Hive (34) - Big Data Project: E-commerce Data Warehouse (User Behavior Data Collection), Part 2

Chapter 4: Data Collection Module

4.1 Hadoop Installation

1) Cluster plan: (see the cluster plan diagram)

Note: prefer an offline installation whenever possible.

4.1.1 Project Experience: Multiple HDFS Storage Directories

If HDFS storage space runs low, add disks to the DataNodes.

1) Add a disk to each DataNode node and mount it. (see the mount diagram)

2) Configure the additional directories in hdfs-site.xml, paying attention to the access permissions on the newly mounted disks (a sketch follows the configuration below).

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///${hadoop.tmp.dir}/dfs/data1,file:///hd2/dfs/data2,file:///hd3/dfs/data3,file:///hd4/dfs/data4</value>
</property>
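
A minimal sketch of step 1), assuming the new disk appears as /dev/sdb and is mounted at /hd2 (device name, filesystem, and owner are assumptions; adjust for your environment):

sudo mkfs.ext4 /dev/sdb            # format the new disk (destroys existing data)
sudo mkdir -p /hd2                 # create the mount point
sudo mount /dev/sdb /hd2           # mount it; add an /etc/fstab entry to persist across reboots
sudo mkdir -p /hd2/dfs/data2       # the data directory referenced in hdfs-site.xml
sudo chown -R kgg:kgg /hd2/dfs     # the user running the DataNode (kgg here) must own it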

4.1.2 Project Experience: Enabling LZO Compression

1) Hadoop itself does not support LZO compression, so the hadoop-lzo open-source component from Twitter is used. hadoop-lzo must be compiled against Hadoop and LZO before it can be used.

2) Place the compiled hadoop-lzo-0.4.20.jar into hadoop-2.7.2/share/hadoop/common/

[kgg@hadoop101 common]$ pwd
/opt/module/hadoop-2.7.2/share/hadoop/common
[kgg@hadoop101 common]$ ls
hadoop-lzo-0.4.20.jar

3) Sync hadoop-lzo-0.4.20.jar to hadoop102 and hadoop103

[kgg@hadoop101 common]$ xsync hadoop-lzo-0.4.20.jar

4) Add the following configuration to core-site.xml to enable LZO compression

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec,
            com.hadoop.compression.lzo.LzoCodec,
            com.hadoop.compression.lzo.LzopCodec
        </value>
    </property>
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>

5) Sync core-site.xml to hadoop102 and hadoop103

[kgg@hadoop101 hadoop]$ xsync core-site.xml

6) Start and inspect the cluster

[kgg@hadoop101 hadoop-2.7.2]$ sbin/start-dfs.sh
[kgg@hadoop102 hadoop-2.7.2]$ sbin/start-yarn.sh

7) Test

yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec /input /output

8) Create indexes for the LZO files

hadoop jar ./share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /output
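
When the indexer finishes, every .lzo file under /output should have a matching .lzo.index file beside it; the index is what allows MapReduce to split a large LZO file across multiple mappers instead of handing it to a single one. A quick check (file names below are illustrative):

hadoop fs -ls /output
# /output/part-r-00000.lzo
# /output/part-r-00000.lzo.index   <- written by DistributedLzoIndexer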

4.1.3 Project Experience: Benchmarking

1) Test HDFS write performance. Test content: write ten 128 MB files to the HDFS cluster.

[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 128MB

19/05/02 11:44:26 INFO fs.TestDFSIO: TestDFSIO.1.8
19/05/02 11:44:26 INFO fs.TestDFSIO: nrFiles = 10
19/05/02 11:44:26 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
19/05/02 11:44:26 INFO fs.TestDFSIO: bufferSize = 1000000
19/05/02 11:44:26 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
19/05/02 11:44:28 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
19/05/02 11:44:30 INFO fs.TestDFSIO: created control files for: 10 files
19/05/02 11:44:30 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.1.103:8032
19/05/02 11:44:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.1.103:8032
19/05/02 11:44:32 INFO mapred.FileInputFormat: Total input paths to process : 10
19/05/02 11:44:32 INFO mapreduce.JobSubmitter: number of splits:10
19/05/02 11:44:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1556766549220_0003
19/05/02 11:44:34 INFO impl.YarnClientImpl: Submitted application application_1556766549220_0003
19/05/02 11:44:34 INFO mapreduce.Job: The url to track the job: http://hadoop102:8088/proxy/application_1556766549220_0003/
19/05/02 11:44:34 INFO mapreduce.Job: Running job: job_1556766549220_0003
19/05/02 11:44:47 INFO mapreduce.Job: Job job_1556766549220_0003 running in uber mode : false
19/05/02 11:44:47 INFO mapreduce.Job: map 0% reduce 0%
19/05/02 11:45:05 INFO mapreduce.Job: map 13% reduce 0%
19/05/02 11:45:06 INFO mapreduce.Job: map 27% reduce 0%
19/05/02 11:45:08 INFO mapreduce.Job: map 43% reduce 0%

19/05/02 11:45:09 INFO mapreduce.Job: map 60% reduce 0%
19/05/02 11:45:10 INFO mapreduce.Job: map 73% reduce 0%
19/05/02 11:45:15 INFO mapreduce.Job: map 77% reduce 0%
19/05/02 11:45:18 INFO mapreduce.Job: map 87% reduce 0%
19/05/02 11:45:19 INFO mapreduce.Job: map 100% reduce 0%
19/05/02 11:45:21 INFO mapreduce.Job: map 100% reduce 100%
19/05/02 11:45:22 INFO mapreduce.Job: Job job_1556766549220_0003 completed successfully
19/05/02 11:45:22 INFO mapreduce.Job: Counters: 51
      File System Counters
              FILE: Number of bytes read=856
              FILE: Number of bytes written=1304826
              FILE: Number of read operations=0
              FILE: Number of large read operations=0
              FILE: Number of write operations=0
              HDFS: Number of bytes read=2350
              HDFS: Number of bytes written=1342177359
              HDFS: Number of read operations=43
              HDFS: Number of large read operations=0
              HDFS: Number of write operations=12
      Job Counters
              Killed map tasks=1
              Launched map tasks=10
              Launched reduce tasks=1
              Data-local map tasks=8
              Rack-local map tasks=2
              Total time spent by all maps in occupied slots (ms)=263635
              Total time spent by all reduces in occupied slots (ms)=9698
              Total time spent by all map tasks (ms)=263635
              Total time spent by all reduce tasks (ms)=9698
              Total vcore-milliseconds taken by all map tasks=263635
              Total vcore-milliseconds taken by all reduce tasks=9698
              Total megabyte-milliseconds taken by all map tasks=269962240
              Total megabyte-milliseconds taken by all reduce tasks=9930752
      Map-Reduce Framework
              Map input records=10
              Map output records=50
              Map output bytes=750
              Map output materialized bytes=910
              Input split bytes=1230
              Combine input records=0
              Combine output records=0
              Reduce input groups=5
              Reduce shuffle bytes=910
              Reduce input records=50
              Reduce output records=5
              Spilled Records=100
              Shuffled Maps =10
              Failed Shuffles=0
              Merged Map outputs=10
              GC time elapsed (ms)=17343
              CPU time spent (ms)=96930
              Physical memory (bytes) snapshot=2821341184
              Virtual memory (bytes) snapshot=23273218048
              Total committed heap usage (bytes)=2075656192
      Shuffle Errors
               BAD_ID=0
               CONNECTION=0
               IO_ERROR=0
               WRONG_LENGTH=0
               WRONG_MAP=0
               WRONG_REDUCE=0
      File Input Format Counters
              Bytes Read=1120
      File Output Format Counters
              Bytes Written=79
19/05/02 11:45:23 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
19/05/02 11:45:23 INFO fs.TestDFSIO:           Date & time: Thu May 02 11:45:23 CST 2019
19/05/02 11:45:23 INFO fs.TestDFSIO:       Number of files: 10
19/05/02 11:45:23 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 11:45:23 INFO fs.TestDFSIO:     Throughput mb/sec: 10.69751115716984
19/05/02 11:45:23 INFO fs.TestDFSIO: Average IO rate mb/sec: 14.91699504852295
19/05/02 11:45:23 INFO fs.TestDFSIO: IO rate std deviation: 11.160882132355928
19/05/02 11:45:23 INFO fs.TestDFSIO:     Test exec time sec: 52.315
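
Reading the summary: Throughput mb/sec is the total data volume divided by the sum of the per-task IO times (1280 MB at 10.70 MB/s implies roughly 120 s of combined task IO time), while Average IO rate mb/sec is the unweighted mean of each file's individual rate. A large IO rate std deviation, about 11 here, suggests the tasks saw uneven disk or network performance.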

2) Test HDFS read performance. Test content: read ten 128 MB files from the HDFS cluster.

[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 128MB

19/05/02 11:55:42 INFO fs.TestDFSIO: TestDFSIO.1.8
19/05/02 11:55:42 INFO fs.TestDFSIO: nrFiles = 10
19/05/02 11:55:42 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
19/05/02 11:55:42 INFO fs.TestDFSIO: bufferSize = 1000000
19/05/02 11:55:42 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
19/05/02 11:55:45 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
19/05/02 11:55:47 INFO fs.TestDFSIO: created control files for: 10 files
19/05/02 11:55:47 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.1.103:8032
19/05/02 11:55:48 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.1.103:8032
19/05/02 11:55:49 INFO mapred.FileInputFormat: Total input paths to process : 10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: number of splits:10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1556766549220_0004
19/05/02 11:55:50 INFO impl.YarnClientImpl: Submitted application application_1556766549220_0004
19/05/02 11:55:50 INFO mapreduce.Job: The url to track the job: http://hadoop102:8088/proxy/application_1556766549220_0004/
19/05/02 11:55:50 INFO mapreduce.Job: Running job: job_1556766549220_0004
19/05/02 11:56:04 INFO mapreduce.Job: Job job_1556766549220_0004 running in uber mode : false
19/05/02 11:56:04 INFO mapreduce.Job: map 0% reduce 0%
19/05/02 11:56:24 INFO mapreduce.Job: map 7% reduce 0%
19/05/02 11:56:27 INFO mapreduce.Job: map 23% reduce 0%
19/05/02 11:56:28 INFO mapreduce.Job: map 63% reduce 0%
19/05/02 11:56:29 INFO mapreduce.Job: map 73% reduce 0%
19/05/02 11:56:30 INFO mapreduce.Job: map 77% reduce 0%
19/05/02 11:56:31 INFO mapreduce.Job: map 87% reduce 0%
19/05/02 11:56:32 INFO mapreduce.Job: map 100% reduce 0%
19/05/02 11:56:35 INFO mapreduce.Job: map 100% reduce 100%
19/05/02 11:56:36 INFO mapreduce.Job: Job job_1556766549220_0004 completed successfully
19/05/02 11:56:36 INFO mapreduce.Job: Counters: 51
      File System Counters
              FILE: Number of bytes read=852
              FILE: Number of bytes written=1304796
              FILE: Number of read operations=0
              FILE: Number of large read operations=0
              FILE: Number of write operations=0
              HDFS: Number of bytes read=1342179630
              HDFS: Number of bytes written=78
              HDFS: Number of read operations=53
              HDFS: Number of large read operations=0
              HDFS: Number of write operations=2
      Job Counters
              Killed map tasks=1
              Launched map tasks=10
              Launched reduce tasks=1
              Data-local map tasks=8
              Rack-local map tasks=2
              Total time spent by all maps in occupied slots (ms)=233690
              Total time spent by all reduces in occupied slots (ms)=7215
              Total time spent by all map tasks (ms)=233690
              Total time spent by all reduce tasks (ms)=7215
              Total vcore-milliseconds taken by all map tasks=233690
              Total vcore-milliseconds taken by all reduce tasks=7215
              Total megabyte-milliseconds taken by all map tasks=239298560
              Total megabyte-milliseconds taken by all reduce tasks=7388160
      Map-Reduce Framework
              Map input records=10
              Map output records=50
              Map output bytes=746
              Map output materialized bytes=906
              Input split bytes=1230
              Combine input records=0
              Combine output records=0
              Reduce input groups=5
              Reduce shuffle bytes=906
              Reduce input records=50
              Reduce output records=5
              Spilled Records=100
              Shuffled Maps =10
              Failed Shuffles=0
              Merged Map outputs=10
              GC time elapsed (ms)=6473
              CPU time spent (ms)=57610
              Physical memory (bytes) snapshot=2841436160
              Virtual memory (bytes) snapshot=23226683392
              Total committed heap usage (bytes)=2070413312
      Shuffle Errors
               BAD_ID=0
               CONNECTION=0
               IO_ERROR=0
               WRONG_LENGTH=0
               WRONG_MAP=0
               WRONG_REDUCE=0
      File Input Format Counters
              Bytes Read=1120
      File Output Format Counters
              Bytes Written=78
19/05/02 11:56:36 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
19/05/02 11:56:36 INFO fs.TestDFSIO:           Date & time: Thu May 02 11:56:36 CST 2019
19/05/02 11:56:36 INFO fs.TestDFSIO:       Number of files: 10
19/05/02 11:56:36 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 11:56:36 INFO fs.TestDFSIO:     Throughput mb/sec: 16.001000062503905
19/05/02 11:56:36 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.202795028686523
19/05/02 11:56:36 INFO fs.TestDFSIO: IO rate std deviation: 4.881590515873911
19/05/02 11:56:36 INFO fs.TestDFSIO:     Test exec time sec: 49.116
19/05/02 11:56:36 INFO fs.TestDFSIO:

3) Delete the data generated by the tests

[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -clean

4) Evaluate MapReduce with the Sort program. (1) Use RandomWriter to generate random data: each node runs 10 Map tasks, each producing roughly 1 GB of binary random data.

[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar randomwriter random-data

(2) Run the Sort program

[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar sort random-data sorted-data

(3) Verify that the data is actually sorted

[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar testmapredsort -sortInput random-data -sortOutput sorted-data 

4.1.4 Project Experience: Hadoop Parameter Tuning

1) HDFS parameter tuning (hdfs-site.xml)

(1) dfs.namenode.handler.count = 20 * log2(Cluster Size); for example, with an 8-node cluster this parameter should be set to 60.

The number of Namenode RPC server threads that listen to requests from clients. If dfs.namenode.servicerpc-address is not configured then Namenode RPC server threads listen to requests from all nodes.

The NameNode maintains a pool of worker threads to handle concurrent heartbeats from DataNodes and concurrent metadata operations from clients. For large clusters, or clusters with many clients, the default of 10 for dfs.namenode.handler.count usually needs to be raised. The rule of thumb is 20 times the base-2 logarithm of the cluster size, i.e. 20 * log2(N), where N is the number of nodes; a quick way to compute this is sketched below.

(2) Keep the edit log path dfs.namenode.edits.dir on storage separate from the fsimage path dfs.namenode.name.dir, to minimize write latency.
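
A quick way to compute the recommended handler count (a sketch; assumes python is on the PATH):

# dfs.namenode.handler.count = 20 * log2(N), where N is the cluster size (N=8 gives 60)
N=8
python -c "import math; print(int(20 * math.log($N, 2)))"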

2) YARN parameter tuning (yarn-site.xml)

(1) Scenario: a 7-node cluster ingesting several hundred million records per day, flowing data source -> Flume -> Kafka -> HDFS -> Hive. The problem: the statistics are mostly HiveQL, there is no data skew, small files are already merged, JVM reuse is enabled, IO is not blocked, and memory usage stays under 50%; yet jobs still run very slowly, and when a data peak arrives the whole cluster goes down. Is there a tuning approach for this situation?

(2) Solution: memory utilization is too low. This is usually caused by two YARN settings: the maximum memory a single task may request, and the memory available to YARN on a single node. Tuning these two parameters raises the cluster's memory utilization; a sample configuration follows the descriptions below.

(a) yarn.nodemanager.resource.memory-mb: the total physical memory YARN may use on this node, default 8192 (MB). Note that if the node has less than 8 GB of memory you must lower this value yourself; YARN does not detect the node's physical memory automatically.

(b) yarn.scheduler.maximum-allocation-mb: the maximum physical memory a single task may request, default 8192 (MB).
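
A minimal yarn-site.xml sketch for the two settings above; the values shown (16 GB per node, 4 GB per container) are illustrative assumptions, not recommendations:

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <!-- total physical memory YARN may use on this node (assumed 16 GB host) -->
    <value>16384</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <!-- maximum physical memory a single container may request -->
    <value>4096</value>
</property>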

3) Hadoop crashes

(1) If MapReduce jobs bring the system down, limit the number of tasks YARN runs concurrently and the maximum memory each task may request. Parameter to adjust: yarn.scheduler.maximum-allocation-mb (the maximum physical memory a single task may request, default 8192 MB).

(2) If excessive file writes bring the NameNode down, increase Kafka's storage capacity and throttle the write rate from Kafka to HDFS: let Kafka buffer the data during peaks, and the sync to HDFS catches up on its own once the peak passes.

4.2 Zookeeper Installation

4.2.1 Installing ZK

Cluster plan: (see the cluster plan diagram)

4.2.2 ZK Cluster Start/Stop Script

1) Create the script in the /home/kgg/bin directory on hadoop101

[kgg@hadoop101 bin]$ vim zk.sh

Write the following into the script:

#!/bin/bash
# Start, stop, or query the ZooKeeper ensemble on hadoop101-103.
# Usage: zk.sh start|stop|status

case $1 in
"start"){
    for i in hadoop101 hadoop102 hadoop103
    do
        ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh start"
    done
};;
"stop"){
    for i in hadoop101 hadoop102 hadoop103
    do
        ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh stop"
    done
};;
"status"){
    for i in hadoop101 hadoop102 hadoop103
    do
        ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh status"
    done
};;
esac

2) Grant the script execute permission

[kgg@hadoop101 bin]$ chmod 777 zk.sh

3) Start the Zookeeper cluster

[kgg@hadoop101 module]$ zk.sh start

4) Stop the Zookeeper cluster

[kgg@hadoop101 module]$ zk.sh stop
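
The script's status branch checks each node's role (leader or follower) in one shot:

[kgg@hadoop101 module]$ zk.sh status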

4.2.3 Project Experience: Linux Environment Variables

1) Editing the /etc/profile file sets system environment parameters such as $PATH; variables set here apply to every user on the system. After changing it, run source /etc/profile for the current shell to pick it up.

2) Editing the ~/.bashrc file sets environment variables for one specific user only; any bash shell started as that user reads this file.

3) Append the environment variables from /etc/profile to the ~/.bashrc file (the reason is explained after the commands below):

[kgg@hadoop101 ~]$ cat /etc/profile >> ~/.bashrc
[kgg@hadoop102 ~]$ cat /etc/profile >> ~/.bashrc
[kgg@hadoop103 ~]$ cat /etc/profile >> ~/.bashrc
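
Why step 3) matters: the ssh-based cluster scripts below run their commands through non-login shells, which read ~/.bashrc but not /etc/profile, so variables defined only in /etc/profile would be invisible to them. A quick verification:

[kgg@hadoop101 ~]$ ssh hadoop102 'echo $PATH'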

4.3 Log Generation

4.3.1 Starting the Log Generator

1) Code parameters

// Parameter 1: delay in ms between sending each record; defaults to 0
Long delay = args.length > 0 ? Long.parseLong(args[0]) : 0L;

// Parameter 2: number of loop iterations (records to generate); defaults to 1000
int loop_len = args.length > 1 ? Integer.parseInt(args[1]) : 1000;
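
For example, a run with a 300 ms delay and 500 records (both values are illustrative) would look like:

java -classpath log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.kgg.appclient.AppMain 300 500 > /opt/module/test.log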

2) Copy the generated jar, log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar, to the hadoop101 server and sync it to the /opt/module path on hadoop102:

[kgg@hadoop101 module]$ xsync log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar

3) Run the jar program on hadoop101

[kgg@hadoop101 module]$ java -classpath log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.kgg.appclient.AppMain  >/opt/module/test.log

4) Check the generated log file under the /tmp/logs path

[kgg@hadoop101 module]$ cd /tmp/logs/
[kgg@hadoop101 logs]$ ls
app-2019-02-10.log

4.3.2 Cluster Log-Generation Start Script

1) Create the script lg.sh in the /home/kgg/bin directory

[kgg@hadoop101 bin]$ vim lg.sh

2) Write the following into the script:

#!/bin/bash
# Launch the log generator on each host in the background.
# $1 = per-record delay in ms, $2 = record count (both optional, see 4.3.1).

for i in hadoop101 hadoop102
do
    ssh $i "java -classpath /opt/module/log-collector-1.0-SNAPSHOT-jar-with-dependencies.jar com.kgg.appclient.AppMain $1 $2 > /opt/module/test.log &"
done

3) Grant the script execute permission

[kgg@hadoop101 bin]$ chmod 777 lg.sh

4) Run the script

[kgg@hadoop101 module]$ lg.sh 

5) Check the generated data under /tmp/logs on hadoop101 and hadoop102

[kgg@hadoop101 logs]$ ls
app-2019-02-10.log
[kgg@hadoop102 logs]$ ls
app-2019-02-10.log
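
Since lg.sh forwards its two positional parameters straight to AppMain, the delay and record count can also be set per run (values illustrative):

[kgg@hadoop101 module]$ lg.sh 300 500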

4.3.3 Cluster Time Synchronization Script

1) Create the script dt.sh in the /home/kgg/bin directory

[kgg@hadoop101 bin]$ vim dt.sh

2) Write the following into the script:

#!/bin/bash
# Set the system date on every node to $1.
# $1 is wrapped in single quotes so a "YYYY-MM-DD hh:mm:ss" argument survives the ssh hop intact;
# -t allocates a tty so sudo can prompt for a password if needed.

for i in hadoop101 hadoop102 hadoop103
do
        echo "========== $i =========="
        ssh -t $i "sudo date -s '$1'"
done

3) Grant the script execute permission

[kgg@hadoop101 bin]$ chmod 777 dt.sh

4) Run the script

[kgg@hadoop101 bin]$ dt.sh 2019-2-10
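
Because the script single-quotes $1 before it reaches the remote date command, a full timestamp can be passed as one quoted argument:

[kgg@hadoop101 bin]$ dt.sh "2019-02-10 00:00:00"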

4.3.4 Script to View All Cluster Processes

1) Create the script xcall.sh in the /home/kgg/bin directory

[kgg@hadoop101 bin]$ vim xcall.sh

2) Write the following into the script:

#!/bin/bash
# Run the given command on every node and print each host's output.

for i in hadoop101 hadoop102 hadoop103
do
        echo --------- $i ----------
        ssh $i "$*"
done

3) Grant the script execute permission

[kgg@hadoop101 bin]$ chmod 777 xcall.sh

4) Run the script

[kgg@hadoop101 bin]$ xcall.sh jps

 
