Hadoop 2.5.2 Distributed Cluster Setup (Simulated with VirtualBox VMs)

This tutorial uses VirtualBox to simulate a four-machine cluster running CentOS 7 x86-64: one master and three slaves.

1. Create the master and three slaves in VirtualBox and configure the network

Set all four machines' network adapters to bridged mode, attached to the host's real network interface.

On each machine, edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 to switch from DHCP to a static IP, setting the fixed address and gateway to match the real host's network:

# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

#####  master IP settings #####

TYPE=Ethernet
#BOOTPROTO=dhcp        # default DHCP mode, commented out in favor of static
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s3
UUID=1c78b02d-5a3a-42f0-a83f-a17fd8623cdf
DEVICE=enp0s3
ONBOOT=yes             # bring the interface up at boot
IPADDR=192.168.0.200   # master's static IP
NETMASK=255.255.255.0  # subnet mask
GATEWAY=192.168.0.1    # gateway
DNS1=223.5.5.5         # primary DNS server
DNS2=8.8.8.8           # secondary DNS server

#####  slave IP settings #####

TYPE=Ethernet
#BOOTPROTO=dhcp        # default DHCP mode, commented out in favor of static
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp0s3
UUID=1c78b02d-5a3a-42f0-a83f-a17fd8623cdf
DEVICE=enp0s3
ONBOOT=yes             # bring the interface up at boot
IPADDR=192.168.0.201   # slave1's static IP; use 192.168.0.202 and 192.168.0.203 for the other two slaves
NETMASK=255.255.255.0  # subnet mask
GATEWAY=192.168.0.1    # gateway
DNS1=223.5.5.5         # primary DNS server
DNS2=8.8.8.8           # secondary DNS server

After making these changes on each machine, restart the network service for the configuration to take effect:

# service network restart

Ping the machines from one another to verify connectivity; if it fails, check whether a firewall is blocking traffic.

Disable the firewall on the Windows host and on the virtual machines.

To disable firewalld on CentOS:

systemctl stop firewalld.service      # stop firewalld

systemctl disable firewalld.service   # prevent firewalld from starting at boot

firewall-cmd --state                  # show firewall state ("not running" after stopping, "running" when active)

For more advanced firewall settings, see the article "CentOS 7.0关闭默认防火墙启用iptables防火墙" (disabling the default firewall and enabling iptables on CentOS 7.0).

2. Rename the master and the three slaves and configure /etc/hosts

Rename the master to Hmaster (the hostname must never contain underscores!):

# hostnamectl set-hostname Hmaster

Rename the slaves:

Hslave1

# hostnamectl set-hostname Hslave1
# service network restart

Hslave2

# hostnamectl set-hostname Hslave2
# service network restart

Hslave3

# hostnamectl set-hostname Hslave3
# service network restart

Then edit /etc/hosts on each of the four machines so that the cluster nodes know one another and can communicate by hostname:

# vi /etc/hosts

192.168.0.200 Hmaster
192.168.0.201 Hslave1
192.168.0.202 Hslave2
192.168.0.203 Hslave3

# service network restart

Ping each machine by hostname to confirm the configuration works.
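The hosts-file edit above has to be repeated on all four machines. A small sketch that appends the entries without creating duplicates; it writes to a demo file hosts.demo so it can be tried without root (on the cluster machines the target would be /etc/hosts):

```shell
# Append each "ip hostname" pair unless the hostname is already present.
# hosts.demo is a stand-in for /etc/hosts so the sketch is safe to try.
HOSTS_FILE="hosts.demo"
touch "$HOSTS_FILE"
while read -r ip name; do
    grep -q "[[:space:]]$name\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.0.200 Hmaster
192.168.0.201 Hslave1
192.168.0.202 Hslave2
192.168.0.203 Hslave3
EOF
```

Re-running it leaves the file unchanged, so it is safe on machines that already have some of the entries.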

3. Configure passwordless SSH from the master to the slaves

On the master:

cd ~
cd .ssh/
ssh-keygen -t rsa   # press Enter four times; generates an RSA private key id_rsa and public key id_rsa.pub

Two new files appear in the .ssh directory: the private key id_rsa and the public key id_rsa.pub.

Option 1: manual copy

Copy id_rsa.pub to a file named authorized_keys:

cp id_rsa.pub authorized_keys

Distribute the public key file authorized_keys to the nodes Hslave1, Hslave2, and Hslave3:

scp authorized_keys root@Hslave1:/root/.ssh/ 
scp authorized_keys root@Hslave2:/root/.ssh/
scp authorized_keys root@Hslave3:/root/.ssh/

Note: if the target user's home directory has no .ssh directory, create one; set its permissions to 700 and the permissions of authorized_keys to 600.
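The directory and permissions from the note above can be set up like this (a small sketch; safe to re-run):

```shell
# Create ~/.ssh and authorized_keys with the recommended permissions:
# 700 on the directory, 600 on the key file.
SSH_DIR="$HOME/.ssh"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```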

Option 2: the ssh-copy-id command

The ssh-copy-id command copies the local host's public key into the remote host's authorized_keys file, and also sets appropriate permissions on the remote user's home directory, ~/.ssh, and ~/.ssh/authorized_keys.

Source: http://man.linuxde.net/ssh-copy-id

ssh-copy-id Hmaster  # copy the key to the master itself as well
ssh-copy-id Hslave1
ssh-copy-id Hslave2
ssh-copy-id Hslave3

Verify passwordless SSH login:

# ssh Hslave1
Last login: Sat Nov  4 15:39:29 2017 from hmaster
#

4. Configure the JDK on every cluster machine

Java is already installed on the master (in this setup under /usr/java/default), so the installation directory can simply be copied to the slave nodes. If Java is not installed, download the JDK from the official site, extract it, and install it.

For details on installing the JDK on Linux, see the article "Linux上Java JDK环境的部署和配置" (deploying and configuring the Java JDK on Linux).

# scp -r /usr/java/default root@Hslave1:/usr/java
# scp -r /usr/java/default root@Hslave2:/usr/java
# scp -r /usr/java/default root@Hslave3:/usr/java

Edit /etc/profile to configure the Java environment variables:

# vi /etc/profile

# set java jdk
export JAVA_HOME=/usr/java/default
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

5. Install Hadoop

Download Hadoop from the official Apache download page. Choose the version you need; this tutorial uses hadoop-2.5.2. Download it to /usr/local on the master:

# cd /usr/local
# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz

After the download completes, extract it:

# tar -zxvf hadoop-2.5.2.tar.gz
# cd hadoop-2.5.2
# mkdir data  # used as the hadoop.tmp.dir directory in the Hadoop configuration

Edit etc/hadoop/core-site.xml as follows:

# vim /usr/local/hadoop-2.5.2/etc/hadoop/core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://Hmaster:9000</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>

        <property>
                 <name>hadoop.tmp.dir</name>
                 <value>/usr/local/hadoop-2.5.2/data</value>
        </property>
</configuration>

Edit etc/hadoop/mapred-site.xml as follows (if only mapred-site.xml.template exists, copy it to mapred-site.xml first):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>Hmaster:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>Hmaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Hmaster:19888</value>
  </property>
</configuration>

Edit etc/hadoop/hdfs-site.xml as follows. Do not use dots, commas, or other special characters in the file paths, and write each path in full, starting with file:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-2.5.2/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-2.5.2/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
       </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>Hmaster:9000</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>67108864</value>
    </property>
</configuration>

Edit etc/hadoop/yarn-site.xml:

<configuration>
     <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
     </property>
     <property>
        <name>yarn.resourcemanager.address</name>
        <value>Hmaster:8032</value>
     </property>
     <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>Hmaster:8030</value>
     </property>
     <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>Hmaster:8031</value>
     </property>
     <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>Hmaster:8033</value>
     </property>
     <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>Hmaster:8088</value>
     </property>
</configuration>

Edit etc/hadoop/slaves:

Hslave1
Hslave2
Hslave3

Configure the Java environment variable in etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh:

export JAVA_HOME=/usr/java/default

Use scp to copy the configured Hadoop directory to the other cluster machines:

# scp -r hadoop-2.5.2 root@Hslave1:/usr/local
# scp -r hadoop-2.5.2 root@Hslave2:/usr/local
# scp -r hadoop-2.5.2 root@Hslave3:/usr/local
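The three scp commands follow one pattern, so they can be generated in a loop. A sketch that collects the commands and prints them for review first (set DRY_RUN=0 to actually execute them):

```shell
# Build the scp command for every slave; print the commands by default so
# they can be reviewed, and run them only when DRY_RUN=0.
SRC="/usr/local/hadoop-2.5.2"
DEST="/usr/local"
cmds=""
for h in Hslave1 Hslave2 Hslave3; do
    cmds="${cmds}scp -r $SRC root@$h:$DEST
"
done
printf '%s' "$cmds"
if [ "${DRY_RUN:-1}" = "0" ]; then
    printf '%s' "$cmds" | sh
fi
```

The same pattern works for the earlier JDK and authorized_keys distribution steps.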

Edit /etc/profile to configure the Hadoop environment variables:

#HADOOP VARIABLES START 
export HADOOP_HOME=/usr/local/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
#HADOOP VARIABLES END 

The distributed Hadoop environment is now set up.

6. Start and verify Hadoop

(1) Format the filesystem

# cd /usr/local/hadoop-2.5.2
# ./bin/hdfs namenode -format

17/11/04 22:19:13 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Hmaster/192.168.0.200
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.5.2
STARTUP_MSG:   classpath = /usr/local/hadoop-2.5.2/etc/hadoop:/usr/local/hadoop-2.5.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.5.2/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.5.2/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-
...
...
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z
STARTUP_MSG:   java = 1.8.0_91
************************************************************/
17/11/04 22:19:13 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/04 22:19:13 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-c7182aaa-a763-4276-b7eb-f11b4bd86a63
17/11/04 22:19:14 INFO namenode.FSNamesystem: fsLock is fair:true
17/11/04 22:19:14 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/04 22:19:14 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/04 22:19:14 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/04 22:19:14 INFO blockmanagement.BlockManager: The block deletion will start around 2017 十一月 04 22:19:14
17/11/04 22:19:14 INFO util.GSet: Computing capacity for map BlocksMap
17/11/04 22:19:14 INFO util.GSet: VM type       = 64-bit
17/11/04 22:19:14 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/11/04 22:19:14 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/11/04 22:19:15 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/04 22:19:15 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/11/04 22:19:15 INFO blockmanagement.BlockManager: maxReplication             = 512
17/11/04 22:19:15 INFO blockmanagement.BlockManager: minReplication             = 1
17/11/04 22:19:15 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/11/04 22:19:15 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/11/04 22:19:15 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/04 22:19:15 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/11/04 22:19:15 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/11/04 22:19:15 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
17/11/04 22:19:15 INFO namenode.FSNamesystem: supergroup          = supergroup
17/11/04 22:19:15 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/04 22:19:15 INFO namenode.FSNamesystem: HA Enabled: false
17/11/04 22:19:15 INFO namenode.FSNamesystem: Append Enabled: true
17/11/04 22:19:15 INFO util.GSet: Computing capacity for map INodeMap
17/11/04 22:19:15 INFO util.GSet: VM type       = 64-bit
17/11/04 22:19:15 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/11/04 22:19:15 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/11/04 22:19:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/11/04 22:19:15 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/04 22:19:15 INFO util.GSet: VM type       = 64-bit
17/11/04 22:19:15 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/11/04 22:19:15 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/11/04 22:19:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/04 22:19:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/04 22:19:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/11/04 22:19:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/04 22:19:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/04 22:19:15 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/04 22:19:15 INFO util.GSet: VM type       = 64-bit
17/11/04 22:19:15 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/11/04 22:19:15 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/11/04 22:19:15 INFO namenode.NNConf: ACLs enabled? false
17/11/04 22:19:15 INFO namenode.NNConf: XAttrs enabled? true
17/11/04 22:19:15 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/11/04 22:19:15 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1165534849-192.168.0.200-1509805155604
17/11/04 22:19:15 INFO common.Storage: Storage directory /usr/local/hadoop-2.5.2/dfs/name has been successfully formatted.
17/11/04 22:19:15 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/04 22:19:15 INFO util.ExitUtil: Exiting with status 0
17/11/04 22:19:15 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Hmaster/192.168.0.200
************************************************************/

If formatting fails because the storage directory is missing, create it manually and run the format command again:

# mkdir /usr/local/hadoop-2.5.2/dfs

On success, the log shows: INFO common.Storage: Storage directory /usr/local/hadoop-2.5.2/dfs/name has been successfully formatted.


(2) Start Hadoop

# cd /usr/local/hadoop-2.5.2
# sbin/start-all.sh

Startup log:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Hmaster]
Hmaster: starting namenode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-namenode-hmaster.out
Hslave2: starting datanode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-datanode-hmaster.out
Hslave3: starting datanode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-datanode-hmaster.out
Hslave1: starting datanode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-datanode-hmaster.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-hmaster.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-resourcemanager-hmaster.out
Hslave3: starting nodemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-nodemanager-hmaster.out
Hslave1: starting nodemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-nodemanager-hmaster.out
Hslave2: starting nodemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-nodemanager-hmaster.out

Run the jps command on the master and the three slaves to check the Java processes. A successful startup looks like this:

# On Hmaster
# jps
5346 ResourceManager
5619 Jps
5206 SecondaryNameNode
5032 NameNode

# On Hslave1, Hslave2, and Hslave3
4291 NodeManager
4133 DataNode
4460 Jps
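The jps check can be scripted. This sketch defines a helper (check_daemons is a name introduced here, not part of Hadoop) that reads jps output and reports any expected daemon that is missing:

```shell
# Read jps-style "PID Name" lines on stdin and report which of the
# expected daemon names (passed as $1, space-separated) are missing.
check_daemons() {
    out=$(cat)
    missing=""
    for p in $1; do
        printf '%s\n' "$out" | grep -q "[[:space:]]$p\$" || missing="$missing $p"
    done
    if [ -z "$missing" ]; then
        echo "OK"
    else
        echo "MISSING:$missing"
    fi
}

# On the master:  jps | check_daemons "NameNode SecondaryNameNode ResourceManager"
# On a slave:     jps | check_daemons "DataNode NodeManager"
```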

If the startup hangs with output like the following, add StrictHostKeyChecking no to /etc/ssh/ssh_config and restart the SSH service (systemctl restart sshd on CentOS 7):

...
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 08:1d:db:e4:d2:e0:87:89:ed:ca:69:82:17:6a:83:57
...

7. Problems encountered

(1) If some service daemons fail to start during start-all.sh, first rule out the firewall, then check the error messages in the failing service's startup log.

(2) Initialization failed for Block pool (Datanode Uuid unassigned)

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: 
Initialization failed for Block pool (Datanode Uuid unassigned) 
service to master/xxx. Exiting. java.io.IOException: Incompatible clusterIDs

Where to look: every namenode directory, every datanode directory, and the slaves' temporary directories.
Cause:
1) The master's namenode clusterID does not match the slaves' datanode clusterID.
2) This happens after formatting the namenode multiple times: each format generates a new clusterID on the master, leaving the datanodes out of sync.
Fix:
Before reformatting, stop all services (stop-dfs.sh and stop-yarn.sh, or stop-all.sh). Once everything is down, go to the namenode directory, datanode directories, and temporary directories on every node and delete their contents, then start the cluster again. Deleting these machine by machine is tedious, so here is a script for running the same commands across all machines in batch, adapted from the blog post "Linux 集群上批量执行同一命令 shell 脚本".

Create the script allcmd.sh:

#!/bin/bash
# Run every command in a command-list file on every server in a server-list file.
if [ "$#" -ne 2 ] ; then
    echo "USAGE: $0 server_list_file cmd_list_file"
    exit 1
fi

serverlist_file="$(pwd)/$1"
cmdlist_file="$(pwd)/$2"

if [ ! -e "$serverlist_file" ] ; then
    echo "server list $serverlist_file does not exist"
    exit 1
fi

if [ ! -e "$cmdlist_file" ] ; then
    echo "command list $cmdlist_file does not exist"
    exit 1
fi

while read -r line                     # one server per line
do
    if [ -n "$line" ] ; then
        echo "DOING--->>>>>" "$line" "<<<<<<<"
        while read -r cmd_str          # one command per line
        do
            # /dev/null on stdin keeps ssh from consuming the server list
            if ssh "$line" "$cmd_str" < /dev/null > /dev/null ; then
                echo "$cmd_str done!"
            else
                echo "error: ssh exited with status $?"
            fi
        done < "$cmdlist_file"
    fi
done < "$serverlist_file"

After creating the script, run chmod +x allcmd.sh.
Create the command file cmdList:

rm -r /usr/local/hadoop-2.5.2/dfs/*
rm -r /usr/local/hadoop-2.5.2/data/*
rm -r /usr/local/hadoop-2.5.2/logs/*

Create the server list file serverList:

Hmaster
Hslave1
Hslave2
Hslave3

Usage: create the cmdList and serverList files in the script's directory, then run ./allcmd.sh serverList cmdList.

 

Posted 2017-11-04 23:32 by stonesma