Hadoop - [04] Distributed Deployment

Zookeeper distributed deployment >> Hadoop distributed deployment (deploy the Zookeeper cluster before Hadoop)

 

 

I. Cluster Planning

Component        node01  node02  node03
JDK              ✓       ✓       ✓
Zookeeper        ✓       ✓       ✓
NameNode         ✓       ✓
JournalNode      ✓       ✓       ✓
DataNode         ✓       ✓       ✓
ResourceManager  ✓       ✓
NodeManager      ✓       ✓       ✓

 

 

II. Installation and Deployment

1. Upload hadoop-2.5.2.tar.gz to the /opt/software directory on node01, node02, and node03.

2. Extract hadoop-2.5.2.tar.gz into the /opt/module directory.

[root@node01 software]# tar -zxvf hadoop-2.5.2.tar.gz -C /opt/module/
hadoop-2.5.2/
hadoop-2.5.2/bin/
hadoop-2.5.2/bin/hadoop
hadoop-2.5.2/bin/hdfs
hadoop-2.5.2/bin/mapred
hadoop-2.5.2/bin/yarn.cmd
hadoop-2.5.2/bin/hadoop.cmd
hadoop-2.5.2/bin/hdfs.cmd
hadoop-2.5.2/bin/mapred.cmd
......
[root@node01 software]# 

3. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/hadoop-env.sh

......
# Configure the JDK
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.372.b07-1.el7_9.x86_64/

......
# Define which OS user runs each daemon
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root

Note: JAVA_HOME must be set to an explicit path in hadoop-env.sh; otherwise start-dfs.sh and stop-dfs.sh will not work later.

4. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/core-site.xml; a sketch of its contents follows below.
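The original post leaves this file's contents out. Below is a minimal sketch consistent with the hdfs-site.xml and yarn-site.xml in this post; the hadoop.tmp.dir value is an assumption (only the JournalNode directory under /data/hadoop/data is confirmed elsewhere in this post), while fs.defaultFS and ha.zookeeper.quorum follow directly from the nameservice and Zookeeper quorum configured below.

<configuration>
    <!-- Default filesystem: the HA nameservice defined in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://nameservice1</value>
    </property>
    <!-- Assumed base directory for NameNode/DataNode data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/data</value>
    </property>
    <!-- Zookeeper quorum used by the ZKFCs for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node01:2181,node02:2181,node03:2181</value>
    </property>
</configuration>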

5. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Number of block replicas (default is 3; set to 2 here) -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <!-- Disable permission checking -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

    <!-- dfs.namenode.name.dir: where the NameNode stores its metadata; defaults to
		file://${hadoop.tmp.dir}/dfs/name

		dfs.datanode.data.dir: where the DataNode stores its blocks; defaults to
		file://${hadoop.tmp.dir}/dfs/data
     -->
    <!-- Logical name (alias) for the nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>nameservice1</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.nameservice1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- Active NN -->
    <property>
        <name>dfs.namenode.rpc-address.nameservice1.nn1</name>
        <value>node01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.nameservice1.nn1</name>
        <value>node01:9870</value>
    </property>
    <!-- Standby NN -->
    <property>
        <name>dfs.namenode.rpc-address.nameservice1.nn2</name>
        <value>node02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.nameservice1.nn2</name>
        <value>node02:9870</value>
    </property>
    
    <!-- JournalNode list: shared edits storage that keeps the active and standby NameNodes in sync -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node01:8485;node02:8485;node03:8485/nameservice1</value>
    </property>
    
    <!-- Failover proxy provider that clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.nameservice1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
    <!-- Where each JournalNode stores its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/data/journal/</value>
    </property>

    <!-- Enable automatic failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- Fence the old active NameNode over SSH (requires passwordless login) -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
</configuration>

dfs.replication: the number of block replicas.

dfs.permissions.enabled: permission checking; true enables permission checks in HDFS, false disables them.

dfs.nameservices: comma-separated list of nameservices (logical NameNode groups).

dfs.namenode.rpc-address.nameservice1.nn1: the RPC address that handles all client requests for nn1.

dfs.namenode.http-address.nameservice1.nn1: the address and base port on which the nn1 NameNode web UI listens.

dfs.namenode.rpc-address.nameservice1.nn2: the RPC address that handles all client requests for nn2.

dfs.namenode.http-address.nameservice1.nn2: the address and base port on which the nn2 NameNode web UI listens.

dfs.namenode.shared.edits.dir: the directory shared by the NameNodes in an HA cluster. The active NameNode writes to it and the standby reads from it to keep the namespace in sync. Leave it empty in a non-HA cluster.

dfs.client.failover.proxy.provider.nameservice1: the prefix (plus the required nameservice ID) for the class name of the configured failover proxy provider. See the "Configuration Details" section of the HDFS High Availability documentation.

dfs.journalnode.edits.dir: the directory where JournalNodes store their edit files.

dfs.ha.automatic-failover.enabled: whether automatic failover is enabled.

dfs.ha.fencing.methods: the fencing method(s) used during failover (here sshfence, which logs in over SSH and kills the old active NameNode process).

dfs.ha.fencing.ssh.private-key-files: the SSH private key file used by sshfence (/root/.ssh/id_rsa).

 

6. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Optional: MapReduce application classpath -->
    <!--  <property>
       <name>mapreduce.application.classpath</name>
       <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property> -->
  
    <!-- Once jobs run on YARN, cap the memory for map and reduce tasks explicitly -->
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>200</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx200M</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>200</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx200M</value>
    </property>
</configuration>

7. Edit the configuration file /opt/module/hadoop-2.5.2/etc/hadoop/yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
        <!-- Basic YARN settings -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.env-whitelist</name>
                <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>
        <!-- Enable ResourceManager HA -->
        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <!-- Cluster ID for the HA ResourceManager pair -->
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>yarn1</value>
        </property>
        <!-- List of ResourceManager IDs -->
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>
        <!-- ResourceManager hostnames and web UI addresses -->
        <property>
                <name>yarn.resourcemanager.hostname.rm1</name>
                <value>node01</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address.rm1</name>
                <value>node01:8088</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm2</name>
                <value>node02</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address.rm2</name>
                <value>node02:8088</value>
        </property>
        <!-- Zookeeper quorum that YARN HA depends on -->
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>node01:2181,node02:2181,node03:2181</value>
        </property>
</configuration>

Note: if this property is set as hadoop.zk.address, sbin/start-yarn.sh and sbin/stop-yarn.sh stop working, so it has to be set as yarn.resourcemanager.zk-address.

 

8. Distribute the extracted hadoop-2.5.2 directory to /opt/module on node02 and node03.

[root@node01 module]# scp -r -p hadoop-2.5.2/ root@node02:$PWD
[root@node01 module]# scp -r -p hadoop-2.5.2/ root@node03:$PWD

 

 

III. First-Time Startup and Initialization

1. Start Zookeeper (run on node01, node02, and node03).

cd /opt/module/zookeeper-3.4.5/bin
./zkServer.sh restart
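Optionally (not shown in the original post), confirm the quorum state on each node with the bundled status command, which prints whether the node is the leader or a follower:

./zkServer.sh status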

2. Start the JournalNodes (run on node01, node02, and node03).

sbin/hadoop-daemon.sh start journalnode

Script path: /opt/module/hadoop-2.5.2/sbin/hadoop-daemon.sh
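To confirm a JournalNode process is up on each node (a check not shown in the original post), the standard jps tool can be used:

jps | grep JournalNode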

3. Format the NameNode (on node01).

[root@node01 hadoop-2.5.2]# bin/hdfs namenode -format
23/06/06 01:02:22 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = node01/192.168.56.121
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.5.2
......

Script path: /opt/module/hadoop-2.5.2/bin/hdfs

4. Copy the formatted metadata to the /data/hadoop/data directory on node02 (nn2); a sketch follows below.
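The original post gives no command for this step. A plausible sketch, assuming the NameNode metadata was written under /data/hadoop/data (per the hadoop.tmp.dir assumption in the core-site.xml sketch above):

# Assumption: the formatted metadata is under /data/hadoop/data/dfs/name on node01
[root@node01 ~]# scp -r /data/hadoop/data/dfs root@node02:/data/hadoop/data/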

5. Start the NameNode on node01 (nn1); see the command below.
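This uses the same daemon script the post runs on node02 in step 7:

[root@node01 hadoop-2.5.2]# sbin/hadoop-daemon.sh start namenode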

6. On node02, run bin/hdfs namenode -bootstrapStandby

Note: the command prompts for confirmation at one point; enter Y. (This is most likely because the hadoop-root directory was already copied over from node01 in step 4.)

 

7. Start the NameNode on node02 (nn2).

# start the namenode
[root@node02 hadoop-2.5.2]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-2.5.2/logs/hadoop-root-namenode-node02.out
[root@node02 hadoop-2.5.2]# jps
2080 Jps
1795 JournalNode
2009 NameNode
1647 QuorumPeerMain
[root@node02 hadoop-2.5.2]#

8. Initialize the ZKFC state in Zookeeper on one of the Zookeeper nodes (note: Zookeeper must be running when you execute this; see the command below).
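The original post omits the command itself; the standard Hadoop command for initializing the HA state znode in Zookeeper is:

[root@node01 hadoop-2.5.2]# bin/hdfs zkfc -formatZK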

9. Start/stop the NameNodes, JournalNodes, and DataNodes together.

[root@node01 hadoop-2.5.2]# sbin/start-dfs.sh
Starting namenodes on [node01 node02]
node01: namenode running as process 4059. Stop it first.
node02: namenode running as process 2752. Stop it first.
localhost: datanode running as process 3905. Stop it first.
Starting journal nodes [node01 node02 node03]
node03: journalnode running as process 1786. Stop it first.
node01: journalnode running as process 2032. Stop it first.
node02: journalnode running as process 1795. Stop it first.
Starting ZK Failover Controllers on NN hosts [node01 node02]
node01: starting zkfc, logging to /opt/module/hadoop-2.5.2/logs/hadoop-root-zkfc-node01.out
node02: starting zkfc, logging to /opt/module/hadoop-2.5.2/logs/hadoop-root-zkfc-node02.out
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]# sbin/stop-dfs.sh
Stopping namenodes on [node01 node02]
node01: no namenode to stop
node02: stopping namenode
localhost: stopping datanode
Stopping journal nodes [node01 node02 node03]
node01: stopping journalnode
node02: stopping journalnode
node03: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [node01 node02]
node01: stopping zkfc
node02: stopping zkfc
[root@node01 hadoop-2.5.2]#
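Once start-dfs.sh has brought everything up, the standard haadmin command (not shown in the original post) reports which NameNode is active and which is standby:

[root@node01 hadoop-2.5.2]# bin/hdfs haadmin -getServiceState nn1
[root@node01 hadoop-2.5.2]# bin/hdfs haadmin -getServiceState nn2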

10. Start/stop YARN (this only starts the YARN components on the current node: the ResourceManager and the NodeManager).

[root@node01 hadoop-2.5.2]# sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
[root@node01 hadoop-2.5.2]# jps
6579 DataNode
6933 DFSZKFailoverController
1943 QuorumPeerMain
6487 NameNode
6759 JournalNode
8314 Jps
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]#
[root@node01 hadoop-2.5.2]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.5.2/logs/yarn-root-resourcemanager-node01.out
localhost: starting nodemanager, logging to /opt/module/hadoop-2.5.2/logs/yarn-root-nodemanager-node01.out
[root@node01 hadoop-2.5.2]# jps
6579 DataNode
6933 DFSZKFailoverController
8470 NodeManager
1943 QuorumPeerMain
6487 NameNode
6759 JournalNode
8366 ResourceManager
8590 Jps
[root@node01 hadoop-2.5.2]#
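Similarly, with both ResourceManagers up, the standard rmadmin command (not shown in the original post) reports which one is active:

[root@node01 hadoop-2.5.2]# bin/yarn rmadmin -getServiceState rm1
[root@node01 hadoop-2.5.2]# bin/yarn rmadmin -getServiceState rm2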

11. Scripts to start/stop HDFS and YARN together.

sbin/start-all.sh = sbin/start-dfs.sh + sbin/start-yarn.sh
sbin/stop-all.sh = sbin/stop-dfs.sh + sbin/stop-yarn.sh
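As an end-to-end smoke test (not part of the original post), the bundled example job can be submitted; the jar path below follows the standard layout of the Hadoop 2.5.2 binary distribution:

[root@node01 hadoop-2.5.2]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 2 10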

 

 

IV. Problems Encountered

1. After running start-dfs.sh and start-yarn.sh, neither the NodeManagers nor the DataNodes came up.

Resolution: the hostnames of the three cluster nodes had not been added to /opt/module/hadoop-3.3.6/etc/hadoop/workers (in Hadoop 2.x the equivalent file is etc/hadoop/slaves); see the example below.
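The file lists one worker hostname per line; for this cluster:

node01
node02
node03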
V. Web UIs

[The original post shows web UI screenshots here. Per the configuration above: NameNode web UI at node01:9870 and node02:9870; ResourceManager web UI at node01:8088 and node02:8088.]
— Make lifelong learning a habit —

posted @ 2023-06-06 01:19  HOUHUILIN