Configuring Hadoop 2.6.5 on CentOS 7.2

Hadoop Configuration Files

/etc/profile

Configure the Java and Hadoop environment variables:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 
export HADOOP_HOME=/usr/local/hadoop-2.6.5/hadoop-2.6.5
export PATH=$JAVA_HOME/bin:${PATH}:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
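After editing /etc/profile, reload it so the current shell picks up the new variables; a quick sanity check, using the paths configured above:

```shell
# Reload the profile and confirm both toolchains are on PATH
source /etc/profile
echo "$JAVA_HOME"   # prints /usr/lib/jvm/java-1.8.0-openjdk
java -version       # JDK found via PATH
hadoop version      # Hadoop found via PATH
```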

The Hadoop configuration files are in $HADOOP_HOME/etc/hadoop.

core-site.xml

fs.defaultFS: the address of the master NameNode

hadoop.tmp.dir: the base directory on the local filesystem where the NameNode keeps its storage and persistent logs

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.94.140:9000</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.6.5/data</value>
  </property>
</configuration>
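Once Hadoop is installed, the value in effect can be checked with the standard hdfs getconf utility:

```shell
# Print the effective NameNode address from the loaded configuration
hdfs getconf -confKey fs.defaultFS
# Expected: hdfs://192.168.94.140:9000
```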

mapred-site.xml (created from mapred-site.xml.template)

mapreduce.framework.name: set the execution framework to Hadoop YARN

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
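Hadoop only ships the .template file, so the real mapred-site.xml has to be created first; a sketch of that step, using the $HADOOP_HOME set earlier:

```shell
cd $HADOOP_HOME/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
# then edit mapred-site.xml and add the mapreduce.framework.name property
```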

yarn-site.xml

yarn.resourcemanager.hostname: the host running the YARN ResourceManager (here, the NameNode host)

yarn.nodemanager.aux-services: the auxiliary shuffle service that MapReduce applications require

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>CentOS7One</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
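Since the configuration refers to hosts by name (CentOS7One), each machine must be able to resolve those names. A minimal /etc/hosts sketch, using the IP addresses that appear elsewhere in this setup:

```
192.168.94.140  CentOS7One
192.168.94.139  CentOS7Two
```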

hadoop-env.sh

Replace JAVA_HOME with an absolute path:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk

slaves

Configure the worker nodes. In pseudo-distributed mode, with only one machine, list the host itself:

CentOS7One

Passwordless SSH Setup for Hadoop

Method 1:

1. Generate a private/public key pair

[root@CentOS7One ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
eb:68:01:dc:9a:71:ee:f9:5d:3b:db:6f:46:90:47:f9 root@CentOS7One
The key's randomart image is:
+--[ RSA 2048]----+
|                .|
|               ..|
|   . .         o.|
|    + o       o E|
|     B  S      o |
|    o o  .      .|
|     . o.   .  . |
|      +o . ..o  o|
|     ...o . oo.+.|
+-----------------+

2. Copy id_rsa.pub from CentOS7One to CentOS7Two

[root@CentOS7One .ssh]# 
[root@CentOS7One .ssh]# scp id_rsa.pub  CentOS7Two:/root
The authenticity of host 'centos7two (192.168.94.139)' can't be established.
ECDSA key fingerprint is dd:e2:09:9d:e2:6e:86:c3:2a:62:52:3f:f6:3a:f2:37.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'centos7two,192.168.94.139' (ECDSA) to the list of known hosts.
root@centos7two's password: 
id_rsa.pub                                                      100%  397     0.4KB/s   00:00    
[root@CentOS7One .ssh]# 

3. Repeat step 1 on CentOS7Two to generate its own key pair

[root@CentOS7Two ~]# ssh-keygen -t rsa

4. On CentOS7Two, create authorized_keys with mode 600 and append the id_rsa.pub copied over from CentOS7One

[root@CentOS7Two .ssh]# touch authorized_keys
[root@CentOS7Two .ssh]# chmod 600 authorized_keys 
[root@CentOS7Two .ssh]# cat /root/id_rsa.pub >> authorized_keys 

authorized_keys holds the public keys of the hosts allowed to connect to CentOS7Two without a password.

[root@CentOS7Two .ssh]# cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGBMCc2AOqbMUT9uB8tDyrcNkjV/nCum/Ez5OMRZlVEITERtMLN8NUZP0kQjqiRU7kr5oIlA8FoE6sgSF7ciTu1IIDb2pg3roe0PpcaIwdSjP/b6ktnnkkcIXl74194AY/I+A9UFGErdaarTzAlMEougMC6G9IYTefqyMywsUZ5lH3PP72vqQNwZZX/LZtq6AK+yZ4C2jiErfF3i7hL1bTxVDBvGmLg37U8xNhei0Z5SDq9tCGP9EFabVuaw+mehPxGwFTbyuQj6X1xDmRD8lfjRWTK7M88dVImKdrf85KJAL5kyquIQi0tSAskkSlaroIDzNh1ebacKlOuWh6eWhd root@CentOS7One
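With the key in place, logins from CentOS7One to CentOS7Two should no longer prompt for a password; a quick check:

```shell
# Run from CentOS7One: prints the remote hostname without a password prompt
ssh CentOS7Two hostname
```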

Method 2:

First, generate a key pair on the local machine:

[root@CentOS7Three ~]# ssh-keygen -t rsa

Then run ssh-copy-id to set up passwordless login in one step:

[root@CentOS7Three ~]# ssh-copy-id CentOS7Seven
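ssh-copy-id appends the local public key to the remote authorized_keys and sets the permissions itself, so the manual touch/chmod/cat steps of method 1 are not needed. Verify the same way:

```shell
# Should print the remote hostname without asking for a password
ssh CentOS7Seven hostname
```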

 

Running Hadoop

Formatting HDFS

Format the NameNode; this writes the initial files under the data directory. (In Hadoop 2.x this command is deprecated in favor of hdfs namenode -format, but both work.)

hadoop namenode -format

Starting HDFS

The startup scripts are in sbin/:

start-dfs.sh

Use jps to check that the daemons started:

[root@CentOS7One current]# jps
5125 NameNode
5239 DataNode
5384 SecondaryNameNode
5487 Jps

Starting YARN

start-yarn.sh

Use jps to check again:

[root@CentOS7One current]# jps
5125 NameNode
5239 DataNode
5879 NodeManager
5384 SecondaryNameNode
5611 ResourceManager
5918 Jps
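Besides jps, the daemons' web UIs can be probed; 50070 and 8088 are the Hadoop 2.x default ports for the NameNode and ResourceManager UIs:

```shell
# An HTTP 200 from each port means the daemon is up and serving
curl -s -o /dev/null -w "NameNode UI: %{http_code}\n"        http://CentOS7One:50070/
curl -s -o /dev/null -w "ResourceManager UI: %{http_code}\n" http://CentOS7One:8088/
```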

Hadoop Commands

Upload

hadoop fs -put hadoop-2.6.5.tar.gz hdfs://CentOS7One:9000/
hadoop fs -copyFromLocal /usr/local/hadoop-2.6.5/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar /

Download

hadoop fs -get hdfs://CentOS7One:9000/hadoop-2.6.5.tar.gz 

Create a directory

The path is shorthand for hdfs://CentOS7One:9000/wordcount/input. Pass -p so the parent directory is created as well:

hadoop fs -mkdir -p /wordcount/input

List files

[root@CentOS7One mapreduce]# hadoop fs -ls /wordcount/output
Found 2 items
-rw-r--r--   1 root supergroup          0 2019-02-12 10:34 /wordcount/output/_SUCCESS
-rw-r--r--   1 root supergroup         47 2019-02-12 10:34 /wordcount/output/part-r-00000

View file contents

[root@CentOS7One mapreduce]# hadoop fs -cat /wordcount/output/part-r-00000
fuckbaby        1
hello   5
jim     1
kitty   1
tom     1
world   1
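The part-r-00000 listing above is the output of the bundled wordcount example. A sketch of the run that produces it, assuming an input text file was first uploaded to /wordcount/input:

```shell
cd $HADOOP_HOME/share/hadoop/mapreduce
# The output directory must not exist before the job runs
hadoop jar hadoop-mapreduce-examples-2.6.5.jar wordcount /wordcount/input /wordcount/output
hadoop fs -cat /wordcount/output/part-r-00000
```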

Delete a file

[root@CentOS7One ~]# hadoop fs -rm -r /qingshu.txt
19/03/01 10:59:34 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /qingshu.txt

Check overall filesystem space usage

[root@CentOS7One data]# hadoop fs -df -h /
Filesystem                    Size     Used  Available  Use%
hdfs://192.168.94.140:9000  17.5 G  210.6 M     14.1 G    1%

Check how much space individual files and directories use

[root@CentOS7One data]# hadoop fs -du -s -h hdfs://CentOS7One:9000/*
190.4 M  hdfs://CentOS7One:9000/hadoop-2.6.5.tar.gz
285.8 K  hdfs://CentOS7One:9000/hadoop-mapreduce-examples-2.6.5.jar
18.1 M  hdfs://CentOS7One:9000/mapreduce
0  hdfs://CentOS7One:9000/user
106  hdfs://CentOS7One:9000/wordcount

 

posted @ 2019-01-31 14:38  Rest探路者