Hadoop Multi-Node Cluster
Hadoop multi-node cluster plan
Server name | Internal IP | HDFS | YARN |
master | 192.168.1.155 | NameNode | ResourceManager |
slave1 | 192.168.1.156 | DataNode | NodeManager |
slave2 | 192.168.1.157 | DataNode | NodeManager |
slave3 | 192.168.1.158 | DataNode | NodeManager |
1. Slave1 Configuration
1.1 Clone a new VM from the single-node Hadoop image, then set a static IP and MAC address (edit /etc/sysconfig/network-scripts/ifcfg-ens33)
DEVICE="ens33"
HWADDR="00:0C:29:30:BB:7E"
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.1.156
GATEWAY=192.168.1.1
NETMASK=255.255.255.0
ONBOOT="yes"
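Before restarting the network service, a quick sanity check of the edited file can be sketched as follows (CentOS 7 assumed; the file content is inlined here for illustration, rather than read from /etc/sysconfig/network-scripts/ifcfg-ens33):

```shell
# Sketch: verify the static-IP settings before applying them.
cfg='BOOTPROTO="static"
IPADDR=192.168.1.156
GATEWAY=192.168.1.1'
echo "$cfg" | grep -q '^IPADDR=192.168.1.156$' && echo "static IP set"
# then apply the change with: systemctl restart network
```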
1.2 Change the hostname to slave1 (/etc/hostname)
1.3 Update the hostname-to-IP mappings (/etc/hosts), and also map 127.0.0.1 to slave1
192.168.1.155 master
192.168.1.156 slave1
192.168.1.157 slave2
192.168.1.158 slave3
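A small check that every planned node name appears in the hosts mapping can be sketched as below; the fragment is inlined for illustration, and on a real node you would read /etc/hosts instead:

```shell
# Sketch: confirm each cluster node resolves in the hosts mapping.
hosts='192.168.1.155 master
192.168.1.156 slave1
192.168.1.157 slave2
192.168.1.158 slave3'
for h in master slave1 slave2 slave3; do
  echo "$hosts" | grep -qw "$h" && echo "$h ok"
done
```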
1.4 Edit /usr/local/hadoop/etc/hadoop/core-site.xml and set fs.defaultFS to hdfs://master:9000
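Spelled out as an XML snippet (in the same form as the property blocks below), the core-site.xml entry would be:

```xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
```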
1.5 Edit .../..../yarn-site.xml and add three extra properties (nodemanager->resourcemanager, application-master->resourcemanager, client->resourcemanager):
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
1.6 Edit mapred-site.xml and add:
<property>
<name>mapred.job.tracker</name>
<value>master:54331</value>
</property>
1.7 Edit hdfs-site.xml; slave1 is a DataNode, so set the data dir
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>
2. Slave2 Configuration
2.1 Using the Slave1 machine as a template, clone a new VM, then change its static IP and MAC
DEVICE="ens33"
HWADDR="00:0C:29:51:C4:45"
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.1.157
GATEWAY=192.168.1.1
NETMASK=255.255.255.0
ONBOOT="yes"
2.2 Change the hostname to slave2 (/etc/hostname)
2.3 Edit /etc/hosts and map 127.0.0.1 to slave2
3. Slave3 Configuration
3.1 Using the Slave1 machine as a template, clone a new VM, then change its static IP and MAC
DEVICE="ens33"
HWADDR="00:0C:29:BE:C6:0C"
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.1.158
GATEWAY=192.168.1.1
NETMASK=255.255.255.0
ONBOOT="yes"
3.2 Change the hostname to slave3 (/etc/hostname)
3.3 Edit /etc/hosts and map 127.0.0.1 to slave3
4. Master Configuration
4.1 In hdfs-site.xml, Master is the NameNode, so set the name dir
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
4.2 Configure yarn-site.xml
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
4.3 Edit masters (/usr/local/hadoop/etc/hadoop/masters); its content is: master
4.4 Edit slaves (/usr/local/hadoop/etc/hadoop/slaves); its content is:
slave1
slave2
slave3
4.5 ssh into the 3 slave machines and create the DataNode directory /usr/local/hadoop/hadoop_data/hdfs/datanode
4.6 On the master machine, create the NameNode directory /usr/local/hadoop/hadoop_data/hdfs/namenode
4.7 Format the NameNode HDFS directory (hdfs namenode -format). Note: run this only on first-time setup
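The per-slave directory creation in step 4.5 can be sketched as the loop below; the ssh commands are printed rather than executed here so the loop can be inspected first (passwordless ssh from master to the slaves is assumed):

```shell
# Sketch: build the DataNode directory on each slave from master.
for host in slave1 slave2 slave3; do
  echo ssh "$host" mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
done
```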
4.8 Start the multi-node Hadoop cluster:
start-dfs.sh
start-yarn.sh
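Once both scripts return, the daemons expected on each node follow the plan table; a SecondaryNameNode is typically also started on master by start-dfs.sh by default, and running jps on each machine is the usual way to confirm. As a summary sketch:

```shell
# Sketch: expected daemons per node; compare with `jps` output on each host.
echo "master: NameNode SecondaryNameNode ResourceManager"
echo "slaves: DataNode NodeManager"
```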
4.9 Check the ResourceManager web UI (http://master:8088) and the NameNode web UI (http://master:50070)
4.10 Stop the multi-node Hadoop cluster:
stop-dfs.sh
stop-yarn.sh
Misc: to remove an extra internal IP from the interface, run: ip addr del 192.168.1.105/24 dev ens33