Hadoop Cluster Setup: Single Node (Pseudo-Distributed)
1. Preparation:
Prerequisite: VMware installed on your computer, with one Linux system installed inside it.
Note: this is my own summary, written after finishing the Shangxuetang (尚学堂) video course and combining it with my own understanding. The course followed was their Big Data series.
To make clear which machine each step runs on, a marker such as - - - Linux means the step is performed on the Linux machine.
2. The tutorial itself, starting with the preparation:
2.1 Configure the network:
(1) Configure the NIC config file:
- - - Linux:
cd /etc/sysconfig/network-scripts/
vi ifcfg-eth0    # interface config
{
# HWADDR="00:0C:29:92:E5:B7"    # comment this out in a VM; on real company hardware it is not needed
# UUID="2d678a8b-6c40-4ebc-8f4e-245ef6b7a969"
ONBOOT="yes"          # bring the NIC up at boot
BOOTPROTO=static      # use a static address
IPADDR=192.168.9.8
NETMASK=255.255.255.0
GATEWAY=192.168.9.2
DNS1=114.114.114.114
}
- - - VMware:
Virtual Network Editor -> NAT settings -> Gateway IP: 192.168.9.2, Subnet IP: 192.168.9.0, Subnet mask: 255.255.255.0; port-forwarding host address: 192.168.9.128 (the host's)
Check "Connect a host virtual adapter to this network"    # on the Windows host the virtual adapter is VMnet8 (the virtual NIC)
- - - Linux:
service network restart
Test:
Linux can reach the internet:  ping baidu.com
Linux pings the host:          ping 192.168.9.128
Host pings Linux:              ping 192.168.9.8
- - - Windows:
# VMnet8 settings: IP address 192.168.9.128, subnet mask 255.255.255.0, DNS same as the gateway or 114.114.114.114 / 8.8.8.8
(2) Disable the VM's firewall (do not do this in a company environment):
- - - Linux:
service iptables stop     # temporary: the firewall is a service and will start again after a reboot
chkconfig iptables off    # permanent: never start at boot
chkconfig                 # inspect iptables (runlevel 3 = command line, 5 = graphical); Windows equivalent: Manage -> Services
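To double-check that the permanent setting took, you can list the service's runlevel table (standard chkconfig/service usage on CentOS 6):
chkconfig --list iptables    # every runlevel should read "off" after chkconfig iptables off
service iptables status     # should report that the firewall is not running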
(3) Disable SELinux
- - - Linux:
cd /etc/selinux/
vi config
{
SELINUX=disabled
}
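Editing /etc/selinux/config only takes effect after a reboot; to also relax SELinux for the current session (standard commands on CentOS 6):
getenforce       # prints Enforcing / Permissive / Disabled
setenforce 0     # switch to Permissive right away (unnecessary once you have rebooted)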
(4) DNS / hostname resolution
- - - Linux:
vi /etc/hosts
{
192.168.9.11 node01
192.168.9.12 node02
192.168.9.13 node03
192.168.9.14 node04
}
(5) Delete the MAC-address udev rule; otherwise, since the current NIC already occupies eth0, a clone's NIC comes up as eth1 and everything has to be reconfigured
- - - Linux:
cd /etc/udev/rules.d/
cat 70-persistent-net.rules    # compare: right-click the VM -> Network Adapter -> Advanced -> MAC address -> 00:0C:29:96:95:65
rm -f 70-persistent-net.rules  # removed so the clones regenerate it
(6) poweroff (do not boot again before cloning)
Wrench icon -> Take Snapshot -> name it basic; then basic -> Clone -> from an existing snapshot -> create a linked clone (before first boot the clones share the template's MAC address, but after booting each gets its own)
This machine now serves as the template; clone four machines from it: node01, node02, node03, node04
2.2 Configure the rest:
- - - Linux - node01:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
{
IPADDR=192.168.9.11
}
vi /etc/sysconfig/network    # takes effect only after a reboot
{
NETWORKING=yes
HOSTNAME=node01
}
vi /etc/hosts
poweroff
- - - Linux - node02:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
{
IPADDR=192.168.9.12
}
vi /etc/sysconfig/network
{
NETWORKING=yes
HOSTNAME=node02
}
vi /etc/hosts
poweroff
node03 and node04 get the addresses ending in 13 and 14, with HOSTNAME changed to match; a scripted version of these edits is sketched below.
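If you would rather script the per-clone edits than repeat them by hand in vi, a minimal sketch (my own helper variables NODE and IP, adjusted per machine; run it on each clone):
NODE=node03; IP=192.168.9.13    # change for the clone you are on
sed -i "s/^IPADDR=.*/IPADDR=$IP/" /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i "s/^HOSTNAME=.*/HOSTNAME=$NODE/" /etc/sysconfig/network
poweroff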
- - - Windows: edit the hosts file:
C:\Windows\System32\drivers\etc\hosts
{
192.168.9.11 node01
192.168.9.12 node02
192.168.9.13 node03
192.168.9.14 node04
}
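Once all four nodes are up, you can spot-check name resolution in one pass from any Linux node (a simple bash loop; from Windows, the equivalent is pinging each node name once from a command prompt):
for h in node01 node02 node03 node04; do ping -c 1 $h; done    # each name should resolve to its 192.168.9.x address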
3. Next, the Hadoop configuration:
Reference URLs:
# -> https://hadoop.apache.org/docs/r2.6.5/
# -> https://hadoop.apache.org/docs/r2.6.5/hadoop-project-dist/hadoop-common/SingleCluster.html
3.1 Check hosts and network (verify the settings above):
cat /etc/hosts
hostname
cat /etc/sysconfig/network
3.2 Passwordless SSH login:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa    # create the key pair (DSA type): id_dsa (private key), id_dsa.pub (public key)
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    # append the public key into authorized_keys (the public key goes into the target machine's authorization file)
    # do not run this more than once; if you did, delete authorized_keys and redo it
cat authorized_keys id_dsa.pub    # check that the two files match (both contain the public key)
ssh root@localhost    # log in to yourself
exit
ssh root@node01
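Passwordless login to node01 itself is all this single-node setup needs, but a later full cluster needs the same public key in every node's authorized_keys. A minimal sketch using the standard ssh-copy-id helper (you could also scp the file and append it by hand):
for h in node02 node03 node04; do
    ssh-copy-id -i ~/.ssh/id_dsa.pub root@$h    # asks for each node's password once; logins are key-based afterwards
done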
3.3 Install the JDK
Reference URLs:
# https://blog.csdn.net/m0_54849806/article/details/123772220
# https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Prepare: jdk-8u251-linux-i586.tar.gz (the 32-bit JDK)
mkdir /usr/java
mv /root/Downloads/jdk-8u251-linux-i586.tar.gz /usr/java/
tar -zxvf /usr/java/jdk-8u251-linux-i586.tar.gz -C /usr/java/    # -C extracts into /usr/java instead of the current directory
# configure the profile file
vi /etc/profile
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
export PATH=$PATH:$JAVA_HOME/bin    # take the old PATH first, then append with ':'
}
# verify
source /etc/profile    # or: . /etc/profile
java -version
whereis java
jps
3.4 Install Hadoop
Prepare: hadoop-2.5.2.tar.gz
mkdir /usr/hadoop/
mv /root/Downloads/hadoop-2.5.2.tar.gz /usr/hadoop/
tar -zxvf /usr/hadoop/hadoop-2.5.2.tar.gz -C /usr/hadoop/
cd /usr/hadoop/hadoop-2.5.2/    # note the bin and sbin directories
vi /etc/profile
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
export HADOOP_HOME=/usr/hadoop/hadoop-2.5.2/
export PATH=$PATH:$JAVA_HOME/bin/:$HADOOP_HOME/bin/:$HADOOP_HOME/sbin/
}
. /etc/profile
hadoop    # type hadoop, hdfs, or start- and press Tab to confirm completion works
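A quick sanity check that the new PATH entries resolve (both are stock commands):
which hadoop      # should print /usr/hadoop/hadoop-2.5.2/bin/hadoop
hadoop version    # should report Hadoop 2.5.2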
3.5 Edit Hadoop's configuration files
3.5.1 Configure the *-env.sh files:
cd /usr/hadoop/hadoop-2.5.2/etc/hadoop/
vi hadoop-env.sh    # if /etc/profile has not been sourced, ${JAVA_HOME} resolves to nothing, hence this second JAVA_HOME setting
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
}
vi mapred-env.sh
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
}
vi yarn-env.sh
{
export JAVA_HOME=/usr/java/jdk1.8.0_251/
}
3.5.2 Configure the *-site.xml files:
vi core-site.xml
{
<configuration>
    <property>
        <name>fs.defaultFS</name>            <!-- decides where the NameNode starts (the entry point of the file system: the NameNode) -->
        <value>hdfs://node01:9000</value>    <!-- which machine and port the NameNode runs on; instead of localhost, use our own name: node01 -->
    </property>
</configuration>
}
vi hdfs-site.xml
{
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>    <!-- 1 replica: this is pseudo-distributed with a single node, and replicas can never share a node -->
    </property>
</configuration>
}
# So far this only configures where the NameNode lives and starts.
# Configure the DataNode:
vi slaves
{
node01    # where the DataNode starts (default localhost; a real cluster lists several nodes here)
}
vi core-site.xml
{
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/sxt/hadoop/local/</value>    <!-- where the NameNode persists its metadata; the directory may not exist yet, it gets created -->
    </property>
</configuration>
}
vi hdfs-site.xml
{
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>    <!-- where the SecondaryNameNode starts -->
        <value>node01:50090</value>
    </property>
</configuration>
}
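Before formatting, it is worth asking HDFS to echo the effective settings back, which catches typos in the XML (hdfs getconf is part of the stock 2.x CLI):
hdfs getconf -confKey fs.defaultFS                           # expect hdfs://node01:9000
hdfs getconf -confKey dfs.replication                        # expect 1
hdfs getconf -confKey dfs.namenode.secondary.http-address    # expect node01:50090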
3.6 Format the file system (do this only once)
hdfs namenode -format
    # jps output does not change before/after; this creates /var/sxt/hadoop/local/
    # note it prints a wall of output whether it succeeds or fails; look for:
    # Storage directory /var/sxt/hadoop/local/dfs/name has been successfully formatted.
cd /var/sxt/hadoop/local/dfs/name/current/
ll
{
-rw-r--r-- 1 root root 351 Jun 10 05:18 fsimage_0000000000000000000
-rw-r--r-- 1 root root  62 Jun 10 05:18 fsimage_0000000000000000000.md5
-rw-r--r-- 1 root root   2 Jun 10 05:18 seen_txid
-rw-r--r-- 1 root root 205 Jun 10 05:18 VERSION
}
cat VERSION
{
#Fri Jun 10 05:18:49 PDT 2022
namespaceID=1178112766
clusterID=CID-3ba8cea9-4994-4ad6-aff6-b159d0f716d1
cTime=0
storageType=NAME_NODE
blockpoolID=BP-2116590704-192.168.9.11-1654863529163    # the block pool ID
layoutVersion=-57
}
# the NameNode side now has its metadata in place
3.7 Start it up    # when troubleshooting, read the .log files
start-dfs.sh
{
Java HotSpot(TM) Client VM warning: You have loaded library /usr/hadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
22/06/10 05:28:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [node01]
node01: starting namenode, logging to /usr/hadoop/hadoop-2.5.2/logs/hadoop-root-namenode-node01.out
node01: Java HotSpot(TM) Client VM warning: You have loaded library /usr/hadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
node01: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
node01: starting datanode, logging to /usr/hadoop/hadoop-2.5.2/logs/hadoop-root-datanode-node01.out
Starting secondary namenodes [node01]
node01: starting secondarynamenode, logging to /usr/hadoop/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-node01.out
node01: Java HotSpot(TM) Client VM warning: You have loaded library /usr/hadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
node01: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Java HotSpot(TM) Client VM warning: You have loaded library /usr/hadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
22/06/10 05:29:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
}
# the stack-guard / NativeCodeLoader warnings come from the 32-bit native library and can usually be ignored here
jps    # one role per process; each role also has its own on-disk directory (see below)
{
4829 DataNode
4974 SecondaryNameNode
4718 NameNode
5087 Jps
}
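Besides jps, you can ask the NameNode itself how many DataNodes registered (stock command; exact wording varies by version, but in this pseudo-distributed setup it should count exactly one live node):
hdfs dfsadmin -report    # look for a live-datanode count of 1 plus capacity figures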
cd /var/sxt/hadoop/local/dfs/
ll    # in a fully distributed setup, one machine would only show name and another only data
{
total 12
drwx------ 3 root root 4096 Jun 10 05:28 data
drwxr-xr-x 3 root root 4096 Jun 10 05:28 name    # produced by the format step
drwxr-xr-x 3 root root 4096 Jun 10 05:28 namesecondary
}
3.8 Notes
cd /var/sxt/hadoop/local/dfs/name/current/
cat VERSION
{
#Fri Jun 10 05:18:49 PDT 2022
namespaceID=1178112766
clusterID=CID-3ba8cea9-4994-4ad6-aff6-b159d0f716d1
    # at cluster startup the DataNode follows the NameNode, i.e. the two share the same clusterID;
    # formatting only touches the NameNode, never the DataNode, so after a re-format the DataNode no
    # longer recognizes its NameNode and the process exits ("can't find its master, so it kills itself")
    # if the DataNode vanishes after startup, the first thing to suspect is a clusterID mismatch
cTime=0
storageType=NAME_NODE
    # when is the VERSION file on the DataNode created? After the NameNode is formatted, it is generated
    # during the DataNode's first handshake with the NameNode; the NameNode grants it
blockpoolID=BP-2116590704-192.168.9.11-1654863529163
layoutVersion=-57
}
cd /var/sxt/hadoop/local/dfs/data/current/
cat VERSION
{
#Fri Jun 10 05:28:54 PDT 2022
storageID=DS-6f5b9506-8a9c-4daa-99b9-5acdb21cf00d
clusterID=CID-3ba8cea9-4994-4ad6-aff6-b159d0f716d1    # same clusterID as the NameNode: the DataNode follows it
cTime=0
datanodeUuid=fa96bb92-0d4a-488c-87a9-649a1481f49d
storageType=DATA_NODE
layoutVersion=-55
}
http://node01:50070/    # open in the browser; port 9000 is for RPC (heartbeats, data transfer), not for the web UI
{
Overview 'node01:9000' (active)
Live Nodes 1 (Decommissioned: 0)
Utilities -> Browse the file system -> /    # HDFS's root directory
}
hdfs        # shows which sub-commands and arguments it accepts
hdfs dfs    # shows usage; hadoop fs == hdfs dfs
hdfs dfs -mkdir -p /user/root
    # create the user directory for root; check it under Utilities -> Browse the file system
    # (/user/root is the HDFS equivalent of a Linux home directory)
cd /usr/hadoop/
hdfs dfs -put ./hadoop-2.5.2.tar.gz /user/root
    # upload a file; again visible under Utilities -> Browse the file system
    # columns: Permission, Owner, Group, Size (actual size), Replication, Block Size, Name
    # click the file to see it was split into two blocks
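To convince yourself the upload round-trips intact, a minimal sketch (the local name hadoop-get.tar.gz is my own arbitrary choice):
hdfs dfs -get /user/root/hadoop-2.5.2.tar.gz ./hadoop-get.tar.gz    # pull the file back out of HDFS
md5sum hadoop-2.5.2.tar.gz hadoop-get.tar.gz                        # the two checksums should match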
3.9 Exercise
for i in `seq 100000`; do echo "hello world $i" >> test.txt; done
ll -h ./
hdfs dfs -D dfs.blocksize=1048576 -put ./test.txt /user/root    # 1 MB block size
cd /var/sxt/hadoop/local/dfs/data/current/BP-2116590704-192.168.9.11-1654863529163/current/finalized
ll
{
-rw-r--r-- 1 root root 134217728 Jun 10 06:00 blk_1073741825             # tarball: data
-rw-r--r-- 1 root root   1048583 Jun 10 06:00 blk_1073741825_1001.meta   # tarball: metadata
-rw-r--r-- 1 root root  12979764 Jun 10 06:00 blk_1073741826             # tarball: data
-rw-r--r-- 1 root root    101415 Jun 10 06:00 blk_1073741826_1002.meta   # tarball: metadata
-rw-r--r-- 1 root root   1048576 Jun 10 06:15 blk_1073741827             # test file: data
-rw-r--r-- 1 root root      8199 Jun 10 06:15 blk_1073741827_1003.meta   # test file: metadata
-rw-r--r-- 1 root root    740319 Jun 10 06:15 blk_1073741828             # test file: data
-rw-r--r-- 1 root root      5791 Jun 10 06:15 blk_1073741828_1004.meta   # test file: metadata
}
stop-dfs.sh    # shut down
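A classic follow-up experiment: HDFS splits files at raw byte offsets, so concatenating the two local block files of test.txt in order reproduces the original byte for byte; the block files stay on local disk even after stop-dfs.sh. A minimal sketch (the blk_ names are the ones from this run and will differ on your machine; the comparison assumes test.txt was created in /usr/hadoop, adjust the path otherwise):
cd /var/sxt/hadoop/local/dfs/data/current/BP-2116590704-192.168.9.11-1654863529163/current/finalized
cat blk_1073741827 blk_1073741828 > /tmp/rebuilt.txt    # concatenate the blocks in offset order
diff /tmp/rebuilt.txt /usr/hadoop/test.txt && echo identical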