Hadoop HDFS installation

Introduction

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has much in common with existing distributed file systems, but the differences are also significant: HDFS is highly fault-tolerant and is meant to be deployed on inexpensive machines.

Installation

Installing Java

The Hadoop/Java version compatibility matrix is here: https://wiki.apache.org/hadoop/HadoopJavaVersions

This guide installs Hadoop 2.8.4 (a stable release) on Java 7, using the latest Java 7 build, 7u80.

Download it from: http://www.oracle.com/technetwork/java/archive-139210.html

# tar xf jdk-7u80-linux-x64.tar.gz
# ln -s jdk1.7.0_80/ jdk
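The versioned-directory-plus-symlink layout above can be sketched as follows (using scratch paths in the current directory, not /opt); ln -sfn replaces an existing link, which makes the step safe to re-run after an upgrade:

```shell
# Sketch of the layout above, with scratch paths instead of /opt.
mkdir -p jdk1.7.0_80
# -sfn overwrites any existing link, so re-running (e.g. after an upgrade) is safe
ln -sfn jdk1.7.0_80 jdk
readlink jdk
```

Upgrading later is then just unpacking the new JDK and repointing the jdk link; JAVA_HOME never has to change.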

Append the Java environment variables to /etc/profile:

# cat /etc/profile
export JAVA_HOME=/opt/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

Reload the environment:

# source /etc/profile
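A quick sanity check (a sketch, assuming the paths above) that the variables actually resolved after sourcing:

```shell
# Re-create the relevant /etc/profile lines, then verify PATH picked them up.
export JAVA_HOME=/opt/jdk
export PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "JAVA_HOME/bin is on PATH" ;;
    *)                    echo "PATH is missing $JAVA_HOME/bin" ;;
esac
```

Once the JDK is unpacked under /opt, java -version should also print 1.7.0_80.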

Installing HDFS

Pick the fastest mirror from http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.8.4/ and download:

# wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz

Create a hadoop user on hadoop-1, hadoop-2 and hadoop-3:

# useradd hadoop

Add the following to /etc/hosts on all three hosts. (Except for the passwordless-login setup, every step in this guide is performed identically on all three machines.)

# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.181 hadoop-1
192.168.0.182 hadoop-2
192.168.0.183 hadoop-3
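Editing /etc/hosts by hand on three machines is easy to get wrong; one way to script it is an idempotent append. This is a sketch that writes to a scratch file named hosts.sketch, standing in for the real /etc/hosts:

```shell
# Append each entry only if it is not already present, so re-runs are harmless.
HOSTS=hosts.sketch           # would be /etc/hosts on the real machines
touch "$HOSTS"
for entry in "192.168.0.181 hadoop-1" \
             "192.168.0.182 hadoop-2" \
             "192.168.0.183 hadoop-3"; do
    grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```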

Configure passwordless SSH login on hadoop-1. Run these as the hadoop user, not root, because the HDFS start scripts will SSH to the other nodes as hadoop:

$ ssh-keygen -t rsa
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop-1
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop-2
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop-3

Extract Hadoop into /opt:

# tar xf hadoop-2.8.4.tar.gz -C /opt
# chown -R hadoop:hadoop /opt/hadoop-2.8.4

Add Hadoop to the environment variables:

# cat /etc/profile
export JAVA_HOME=/opt/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib
export HADOOP_HOME=/opt/hadoop-2.8.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

# source /etc/profile

Set JAVA_HOME explicitly in hadoop-env.sh; the stock "export JAVA_HOME=${JAVA_HOME}" line does not survive the SSH-based start scripts, so hard-code the path:

# cat /opt/hadoop-2.8.4/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/jdk
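The edit can also be done non-interactively, which helps when mirroring it across three machines. A sketch against a scratch file (hadoop-env.sketch stands in for the real /opt/hadoop-2.8.4/etc/hadoop/hadoop-env.sh):

```shell
# Replace the stock JAVA_HOME line with a hard-coded path (GNU sed -i).
ENV_FILE=hadoop-env.sketch
echo 'export JAVA_HOME=${JAVA_HOME}' > "$ENV_FILE"   # stand-in for the stock line
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/jdk|' "$ENV_FILE"
grep '^export JAVA_HOME' "$ENV_FILE"
```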

Switch to the hadoop user and edit the configuration files core-site.xml, hdfs-site.xml and hadoop-env.sh:

# su - hadoop
[hadoop@hadoop-1 ~]$ cat /opt/hadoop-2.8.4/etc/hadoop/core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <!-- hadoop.tmp.dir is a local path, not a file:// URI -->
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hadoop</value>
    </property>
</configuration>
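Since the same core-site.xml has to land on all three nodes, generating it from a heredoc is less error-prone than hand-editing. A sketch that writes to the current directory rather than the real etc/hadoop path:

```shell
# Write the configuration shown above; quoting 'EOF' disables shell expansion.
cat > core-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hadoop</value>
    </property>
</configuration>
EOF
grep -c '<property>' core-site.xml
```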
[hadoop@hadoop-1 ~]$ cat /opt/hadoop-2.8.4/etc/hadoop/hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster-panjunbai</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/hadoop/hdfs/dn</value>
    </property>
</configuration>
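To double-check a value such as dfs.replication after editing, you can pull it back out of the XML. A sketch against a minimal scratch file (on a live cluster, "hdfs getconf -confKey dfs.replication" does this properly):

```shell
# Build a minimal stand-in file, then extract the <value> following the <name>.
printf '%s\n' '<property>' \
              '  <name>dfs.replication</name>' \
              '  <value>3</value>' \
              '</property>' > hdfs-snippet.xml
sed -n '/<name>dfs.replication<\/name>/{n;s:.*<value>\(.*\)</value>.*:\1:p;}' hdfs-snippet.xml
```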

Create the data directories and hand them to the hadoop user:

# mkdir -p /data/hadoop/hdfs/nn /data/hadoop/hdfs/snn /data/hadoop/hdfs/dn
# chown -R hadoop:hadoop /data
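The three directories can also be created in one loop. The sketch below uses a scratch base directory instead of /data and leaves the chown (which needs root and an existing hadoop user) as a comment:

```shell
BASE=./hdfs-sketch           # would be /data/hadoop/hdfs on the servers
for d in nn snn dn; do       # NameNode, SecondaryNameNode, DataNode dirs
    mkdir -p "$BASE/$d"
done
# chown -R hadoop:hadoop /data    # then hand ownership to the hadoop user
ls "$BASE"
```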

Start HDFS as the hadoop user. First, format the NameNode:

/opt/hadoop-2.8.4/bin/hdfs namenode -format

Then start HDFS:

/opt/hadoop-2.8.4/sbin/start-dfs.sh
posted @ 2018-09-08 02:12 长风七万里