liangsw  

1. Software preparation:

1. Download VirtualBox: http://download.virtualbox.org/virtualbox/5.2.18/VirtualBox-5.2.18-124319-Win.exe

2. Download CentOS: http://isoredirect.centos.org/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1611.iso

3. Download XShell and XFtp (search Baidu for them and install them yourself)

4. Download the JDK (jdk-7u79-linux-x64.tar.gz)

5. Download Hadoop 2.7.3: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

2. Install VirtualBox

3. Configure the VM's network

3.1. Select "Bridged Adapter" networking for the VM

3.2. Network configuration

vi /etc/sysconfig/network

NETWORKING=yes
GATEWAY=192.168.1.1

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes   ## bring the interface up at boot
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1

3.3. Change the hostname

hostnamectl set-hostname master ### the hostname must never contain an underscore
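
To verify, hostname should now print master (hostnamectl status shows the full record):

hostname
hostnamectl status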

3.4. Restart the network service

service network restart

3.5. Network check

## Ping the host and the VM from each other to confirm connectivity. If it fails, check the firewalls: disable the Windows firewall and/or the VM's firewall.
systemctl stop firewalld
systemctl disable firewalld
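
A quick connectivity sketch, assuming the Windows host sits on the same bridged 192.168.1.x subnet as configured above:

ping -c 3 192.168.1.1     ## from the VM to the gateway
ping 192.168.1.100        ## from the Windows host to the VM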

3.6. Log in with XShell

Check the ssh service status with systemctl status sshd (or service sshd status), then verify that XShell can log in successfully.
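
If sshd turns out not to be running, start it and enable it at boot with the standard systemd commands:

systemctl start sshd
systemctl enable sshd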

3.7. Upload the files

Upload the Hadoop and JDK archives to /usr/local/src on the VM.
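
If you prefer a command line to XFtp, scp does the same job; run it from the directory holding the downloads (the IP is the master address configured above):

scp jdk-7u79-linux-x64.tar.gz hadoop-2.7.3.tar.gz root@192.168.1.100:/usr/local/src/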

3.8. Install the software

1. Unpack the Hadoop and JDK archives

cd /usr/local/src
ls
tar -xzvf ./jdk-7u79-linux-x64.tar.gz
tar -xzvf ./hadoop-2.7.3.tar.gz
mkdir -p /usr/local/software   ## the target directory must exist before moving
mv ./jdk1.7.0_79 /usr/local/software
mv ./hadoop-2.7.3 /usr/local/software

2. Configure the environment

vi /etc/profile

export JAVA_HOME=/usr/local/software/jdk1.7.0_79
export PATH=$PATH:${JAVA_HOME}/bin
export HADOOP_HOME=/usr/local/software/hadoop-2.7.3
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin   ## sbin holds hadoop-daemon.sh, used below

source /etc/profile

vi /usr/local/software/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/software/jdk1.7.0_79

3. Test

Check that the hadoop command runs directly: type hadoop in any directory.
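
Both commands below should print version information if the profile edits took effect:

java -version
hadoop version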

4. Shut down the VM and clone it three times.

Change each clone's IP and hostname, then confirm they can all ping one another.

hostnamectl set-hostname slave01   ### on each clone use its own name: slave01, slave02, slave03
vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
BOOTPROTO=static
IPADDR=192.168.1.101   ## .101/.102/.103 for slave01/slave02/slave03
GATEWAY=192.168.1.1
NETMASK=255.255.255.0
service network restart

Log in over ssh and edit /etc/hosts on every VM; confirm the machines can ping one another by name.

vi /etc/hosts
192.168.1.100 master
192.168.1.101 slave01
192.168.1.102 slave02
192.168.1.103 slave03
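
A one-line sketch to confirm name resolution from each node:

for h in master slave01 slave02 slave03; do ping -c 1 $h; done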

5. Start Hadoop (NameNode and DataNode)

[root@master ~]# cd /usr/local/software/hadoop-2.7.3/etc/hadoop/
vi core-site.xml
Configuration:
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value> <!-- master, or 192.168.1.100 -->
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/software/hadoop-2.7.3/tmp</value> <!-- any directory you choose; create it with mkdir and mind the permissions -->
</property>
Configure the slaves managed by master:
vi ${HADOOP_HOME}/etc/hadoop/slaves   ## ${HADOOP_HOME} depends on your setup
192.168.1.101
192.168.1.102
192.168.1.103
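
Create the tmp directory named above before formatting (path matches the config sketch):

mkdir -p /usr/local/software/hadoop-2.7.3/tmp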
## Start the NameNode on master:
hdfs namenode -format    # format HDFS (run once only)
hadoop-daemon.sh start namenode
jps    # verify the process started

## Start the DataNodes (run on each slave):
hadoop-daemon.sh start datanode
jps    # verify the process started
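
On master, jps should now list a NameNode process, and each slave a DataNode; the PIDs below are illustrative:

2345 NameNode
2346 Jps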

Verify in a browser: http://192.168.1.100:50070/

posted on 2018-09-01 09:19 liangsw