Hadoop Distributed Cluster Configuration

 

Before starting, install Java by following the earlier "Install Java on Linux" post, and set up SSH by following the "Hadoop Pseudo-Distributed Configuration" post.

Then install Hadoop following the process below.

 [All] Linux OS + Java + hostname & hosts + SSH install
 [master] generate SSH key & scp to slaves + configure Hadoop & scp to slaves
 [slaves] add master's key to the authorized_keys file + create link for hadoop-2.2.0

Preparation

hostname    IP address
master      192.168.1.121
slave1      192.168.1.122

Change the hostname (to master or slave1 on the corresponding machine):

sudo vi /etc/hostname

Edit the hosts file:

sudo vi /etc/hosts
127.0.0.1    localhost
192.168.1.121    master
192.168.1.122    slave1
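
A quick way to confirm that name resolution is working before going further (hostnames assumed from the table above; run on each node):

ping -c 1 master
ping -c 1 slave1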

Disable the firewall (takes effect after a reboot):

sudo ufw disable

 

Reboot first so that the new hostnames take effect, then log in as the hadoop user.

SSH

Go to the .ssh directory on master and copy the key to each slave:

ssh-copy-id hadoop@slave1

At this point you can run ssh hadoop@slave1 from master and log in without a password.

Do the same for the remaining slaves.
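
If master does not yet have an SSH key pair, one has to be generated before it can be copied; a minimal sketch, assuming the default key path and a hadoop user on every node:

# generate an RSA key pair with an empty passphrase (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# push the public key to each slave, which appends it to that slave's authorized_keys
ssh-copy-id hadoop@slave1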

Hadoop Installation and Configuration

For the installation itself, refer to https://www.cnblogs.com/manhua/p/3529928.html

Use the backed-up configuration files

cd ~/setupEnv/hadoop_distribute_setting
sudo cp core-site.xml ~/hadoop/etc/hadoop
sudo cp hadoop-env.sh ~/hadoop/etc/hadoop
sudo cp hdfs-site.xml ~/hadoop/etc/hadoop
sudo cp mapred-site.xml ~/hadoop/etc/hadoop
sudo cp yarn-site.xml ~/hadoop/etc/hadoop
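
Because the files are copied with sudo, they may end up owned by root. If that happens, resetting the ownership avoids permission problems later; a sketch assuming the Hadoop tree lives under ~/hadoop and belongs to the hadoop user:

sudo chown hadoop:hadoop ~/hadoop/etc/hadoop/*.xml ~/hadoop/etc/hadoop/hadoop-env.sh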

 

(Manual configuration) Go to the Hadoop configuration file directory. Each group of <property> entries below goes inside the <configuration> element of the corresponding file.

cd ~/hadoop/etc/hadoop

sudo gedit core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop/tmp</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>
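
The hadoop.tmp.dir directory configured above is not guaranteed to exist; creating it up front on every node is a harmless precaution (the path is taken from the property value and may need adjusting):

mkdir -p /home/hadoop/hadoop/tmp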

sudo gedit hdfs-site.xml

<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
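
With dfs.webhdfs.enabled set to true, the NameNode's WebHDFS REST endpoint gives a quick health check once the cluster is up; a sketch assuming the Hadoop 2.2.0 default NameNode HTTP port 50070:

curl "http://master:50070/webhdfs/v1/?op=LISTSTATUS"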

This file does not exist by default and has to be created first:
sudo cp mapred-site.xml.template mapred-site.xml
sudo gedit mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>
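
Note that the JobHistory server bound to master:10020 and master:19888 above is not started by start-dfs.sh or start-yarn.sh; in Hadoop 2.2.0 it is usually started separately on master, roughly like this (assumes $HADOOP_HOME/sbin is on the PATH):

mr-jobhistory-daemon.sh start historyserver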

sudo gedit yarn-site.xml

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>
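
Once the cluster has been started (see the Testing section below), the ResourceManager configured above can be probed through the YARN REST API; a minimal check, assuming the web port 8088 from yarn.resourcemanager.webapp.address:

curl -s http://master:8088/ws/v1/cluster/info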

Create a new masters file and add: master

Edit the slaves file and add: slave1

sudo gedit slaves
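
Both files can also be created directly from the shell; a sketch assuming the current directory is ~/hadoop/etc/hadoop and a single slave named slave1:

echo master > masters
echo slave1 > slaves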

Copy the configuration to slave1

cp2slave.sh
#!/bin/bash

scp -r /home/hadoop/hadoop-2.2.0/ hadoop@slave1:~/
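
A possible way to run it (assumes the script is saved as cp2slave.sh on master):

chmod +x cp2slave.sh
./cp2slave.sh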

 

Testing

Start Hadoop on master:

hdfs namenode -format
start-dfs.sh
start-yarn.sh

Running jps on master should show NameNode, SecondaryNameNode, and ResourceManager.

Running jps on slave1 should show DataNode and NodeManager.
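
Beyond jps, a cluster report requested from master confirms that the DataNode on slave1 has actually registered with the NameNode (run as the hadoop user once the daemons are up; it should list one live datanode per slave):

hdfs dfsadmin -report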

 

=============================================

If a problem comes up that cannot be resolved, or after changing the configuration files, try the following steps:

〇 Stop any running jobs

hadoop job -kill [jobID, e.g. job_1394263427873_0002]
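
If the job ID is not known, the running jobs can be listed first with the same (old-style, but still present in 2.2.0) command:

hadoop job -list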

①stop-all.sh

② [modify the configuration files]

③ scp the settings to every slave

cd /home/casper/hadoop/hadoop-2.2.0/etc/hadoop
scp core-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
scp hdfs-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
scp mapred-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
scp yarn-site.xml casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
scp slaves casper@hdp002:~/hadoop/hadoop-2.2.0/etc/hadoop
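
With several slaves, the same copy can be wrapped in a loop; a sketch assuming hypothetical hostnames hdp002 and hdp003 and the same user and paths as above:

for host in hdp002 hdp003; do
  scp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml slaves \
      casper@$host:~/hadoop/hadoop-2.2.0/etc/hadoop
done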

④ Delete the temporary folder and the DFS data on every machine (adjust the paths to match your own configuration)

cd ~/hadoop
rm -rf dfs
rm -rf tmp
ls
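
The same cleanup can be pushed to the slaves over SSH instead of logging into each machine; a sketch assuming the same ~/hadoop layout and the user/host names used in step ③:

for host in hdp002; do
  ssh casper@$host 'rm -rf ~/hadoop/dfs ~/hadoop/tmp'
done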

⑤ Format the NameNode

hadoop namenode -format

⑥ Run start-dfs.sh / start-yarn.sh

⑦ Upload a file: hadoop fs -put ss-out.txt /

⑧ Run the jar: hadoop jar part-45-90-3-goodrule.jar RICO /ss-out.txt /rico-out 5 0.9

 


Why do map tasks always run on a single node?

If that doesn't work, check to make sure that your cluster is configured correctly. Specifically, check that your name node has paths to your other nodes set in its slaves file, and that each slave node has your name node set in its masters file.

 -----TODO

create masters file in etc/hadoop/

reduce the block size in hdfs-site.xml to increase the number of blocks; check the block size in the DFS browser later

The simplest approach:

Specify the block size when uploading: hadoop fs -D dfs.blocksize=16777216 -put ss-part-out.txt /targetDir
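
To verify how many blocks the uploaded file actually occupies (rather than checking the DFS browser), fsck can print the block list; a sketch assuming the same file and target directory as above:

hadoop fsck /targetDir/ss-part-out.txt -files -blocks -locations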
