
Hadoop (2): Hadoop Installation and Deployment

OS version: 64-bit CentOS 6.6

Hadoop version: 1.2.1

JDK version: jdk1.6.0_45

Environment preparation

1. Host allocation

Hostname   IP
master     10.0.0.10
slave1     10.0.0.11
slave2     10.0.0.12
slave3     10.0.0.13

2. Disable the firewall and SELinux (see the sketch below)
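On CentOS 6 the usual commands are the following, run as root on every node (a sketch only, since the original omits this step; adjust if your site requires the firewall to stay on):

# service iptables stop        # stop the firewall for this session
# chkconfig iptables off       # keep it off across reboots
# setenforce 0                 # switch SELinux to permissive immediately
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # disable permanently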

3. Configure hostname resolution via /etc/hosts (on every node)

[root@master conf]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.0.10 master
10.0.0.11 slave1
10.0.0.12 slave2
10.0.0.13 slave3
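Instead of editing each node by hand, the same hosts file can be pushed from master to the slaves (a sketch; at this stage scp will still prompt for each root password, since passwordless SSH is set up in step 4):

# for h in slave1 slave2 slave3; do scp /etc/hosts $h:/etc/hosts; done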

4. Passwordless SSH login

Generate the key pair: run ssh-keygen at the command line of the master VM and press Enter through every prompt; do the same on every node.

# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# scp .ssh/authorized_keys slave1:~/.ssh/
# scp .ssh/authorized_keys slave2:~/.ssh/
# scp .ssh/authorized_keys slave3:~/.ssh/
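To confirm passwordless login works, each command below should print the slave's hostname without asking for a password (a quick check):

# for h in slave1 slave2 slave3; do ssh $h hostname; done

If a password is still requested, check that ~/.ssh is mode 700 and ~/.ssh/authorized_keys is mode 600 on the slave.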
Then distribute the JDK and Hadoop directories (created in steps 1) and 2) below) to every slave node:
# cd /home/hadoop

# scp -r hadoop-1.2.1/ slave1:/home/hadoop/
# scp -r hadoop-1.2.1/ slave2:/home/hadoop/
# scp -r hadoop-1.2.1/ slave3:/home/hadoop/

 # scp -r jdk1.6.0_45/ slave1:/home/hadoop/
 # scp -r jdk1.6.0_45/ slave2:/home/hadoop/
 # scp -r jdk1.6.0_45/ slave3:/home/hadoop/
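Equivalently, the repeated scp calls collapse into one loop (a sketch, assuming the same /home/hadoop layout on every node):

# cd /home/hadoop
# for h in slave1 slave2 slave3; do scp -r hadoop-1.2.1/ jdk1.6.0_45/ $h:/home/hadoop/; done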

 

Since the hands-on part of this series is based on Hadoop 1.0, this post mainly covers how to set up a Hadoop 1.0 distributed environment. The whole environment runs in virtual machines with a Linux OS; installing the VMs and Linux itself is not covered here.

Installing the Hadoop distributed environment:

1) Install the JDK (all nodes)

# cd /home/hadoop
Official download URL (version jdk1.6.0_45):
# wget http://download.oracle.com/otn/java/jdk/6u45-b06/jdk-6u45-linux-x64.bin
# chmod +x jdk-6u45-linux-x64.bin
# ./jdk-6u45-linux-x64.bin
Configure the PATH environment variable (this makes the jps command available for inspecting Java processes):

[root@master hadoop]# grep jdk ~/.bash_profile
PATH=$PATH:$HOME/bin:/home/hadoop/jdk1.6.0_45/bin/
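After reloading the profile, both java and jps should resolve to the new JDK (a quick check):

# source ~/.bash_profile       # pick up the new PATH
# java -version                # should report 1.6.0_45
# which jps                    # should point into /home/hadoop/jdk1.6.0_45/bin/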


 

2) Download the Hadoop package (all nodes)

Version used: hadoop-1.2.1
# useradd hadoop
# cd /home/hadoop
# rz    # upload the hadoop-1.2.1.tar.gz package (rz is from the lrzsz tools)
# tar xf hadoop-1.2.1.tar.gz
# create a tmp directory for Hadoop's working files
# mkdir /home/hadoop/hadoop-1.2.1/tmp
# cd hadoop-1.2.1/conf

3) Configure Hadoop (all nodes)


[root@master conf]# pwd
/home/hadoop/hadoop-1.2.1/conf


[root@master conf]# cat masters
master

[root@master conf]# cat slaves
slave1
slave2
slave3


[root@master conf]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-1.2.1/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://10.0.0.10:9000</value>
</property>
</configuration>

 

[root@master conf]# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>10.0.0.10:9001</value>
</property>
</configuration>


[root@master conf]# cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->


<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>


[root@master conf]# grep JAVA_HOME hadoop-env.sh    # (comment out the default JDK path and set your own)
# The only required environment variable is JAVA_HOME. All others are
# set JAVA_HOME in this file, so that it is correctly defined on
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/home/hadoop/jdk1.6.0_45/
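Every node must carry identical configuration, so after editing the files on master, the conf directory can be synced out in one loop (a sketch; assumes the hadoop-1.2.1 tree was already unpacked on the slaves as in the earlier scp step):

# cd /home/hadoop/hadoop-1.2.1
# for h in slave1 slave2 slave3; do scp conf/* $h:/home/hadoop/hadoop-1.2.1/conf/; done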

 

4) Starting and stopping Hadoop

[root@master hadoop]# cd /home/hadoop/hadoop-1.2.1/bin/
# format the HDFS filesystem (run once, before the first start)
[root@master bin]# ./hadoop namenode -format
If it reports an error, check the logs and search the web for the message; it is usually easy to resolve.
# startup script

[root@master bin]# ./start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-master.out
slave3: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-slave3.out
slave2: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-slave1.out
master: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-master.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-master.out
slave3: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-slave3.out
slave1: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-slave1.out
slave2: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-slave2.out

# check the processes
Master:

[root@master bin]# jps
4672 SecondaryNameNode
4495 NameNode
4861 Jps
4756 JobTracker

Slave:

[root@slave1 ~]# jps
3525 DataNode
3627 TaskTracker
3695 Jps

If everything is normal, the master should show NameNode, SecondaryNameNode, and JobTracker, and each slave should show DataNode and TaskTracker, as above.
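The same check can be run from master without logging in to each slave (a sketch; the full jps path is used because a non-interactive ssh shell does not read ~/.bash_profile):

# for h in slave1 slave2 slave3; do echo "== $h =="; ssh $h /home/hadoop/jdk1.6.0_45/bin/jps; done

The web UIs are another quick health check: in Hadoop 1.x the NameNode listens on http://master:50070 and the JobTracker on http://master:50030 by default.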

# shutdown script

[root@master bin]# ./stop-all.sh
stopping jobtracker
slave3: stopping tasktracker
slave2: stopping tasktracker
slave1: stopping tasktracker
stopping namenode
slave3: stopping datanode
slave1: stopping datanode
slave2: stopping datanode
master: stopping secondarynamenode

5) Testing the system

# create a command alias for convenience
[root@master bin]# grep hdfs /etc/bashrc 
alias hdfs='/home/hadoop/hadoop-1.2.1/bin/hadoop'
# run a quick test with the alias
[root@master bin]# hdfs fs -ls  /
Found 8 items
drwxr-xr-x   - root supergroup          0 2018-03-19 16:41 /dir
drwxr-xr-x   - root supergroup          0 2018-03-19 15:47 /home
If the listing prints, HDFS is up and working.
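A fuller smoke test writes a file into HDFS and runs the bundled wordcount example (a sketch; /test and /test-out are arbitrary paths chosen here, and hadoop-examples-1.2.1.jar ships in the top of the install directory):

# hdfs fs -mkdir /test
# hdfs fs -put /etc/hosts /test/
# hdfs fs -cat /test/hosts               # should print the hosts file back
# cd /home/hadoop/hadoop-1.2.1
# bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /test /test-out
# hdfs fs -cat /test-out/part-r-00000    # per-word counts from the job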

 
