Hadoop 2.6.4 single-node setup on Ubuntu, part 1

Install the JDK

tar zxvf <jdk-archive>.tar.gz

mv <extracted-jdk-directory> /usr/lib/java

Configure the JDK environment variables

vim /etc/profile

```
export JAVA_HOME=/usr/lib/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${PATH}:${JAVA_HOME}/bin:${JRE_HOME}/bin

export HADOOP_HOME=/usr/lib/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```
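Changes to /etc/profile take effect only in new login shells; to apply them to the current shell and sanity-check the paths above (output shown is for this article's layout):

```shell
# Reload the profile in the current shell and confirm the variables took effect.
source /etc/profile
echo "$JAVA_HOME"    # on this setup: /usr/lib/java
java -version        # should print the installed JDK version
```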

Install Hadoop 2.6.4

tar zxvf hadoop-2.6.4.tar.gz

mv hadoop-2.6.4 /usr/lib/hadoop

Hadoop environment variables (the same exports already added to /etc/profile above):

export HADOOP_HOME=/usr/lib/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
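With $HADOOP_HOME/bin on the PATH, a quick check that the install is visible (assumes the steps above completed):

```shell
# Confirm the hadoop launcher is found and reports the expected version.
hadoop version    # first line should read: Hadoop 2.6.4
```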

Hadoop single-node configuration

cd /usr/lib/hadoop/etc/hadoop

ls

vim core-site.xml

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.128.129:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/lib/hadoop/tmp</value>
    </property>
</configuration>
```
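The directory named in hadoop.tmp.dir must exist and be writable by the account that runs the daemons, or the NameNode and DataNode will fail to start. A short preparation sketch ("hduser" is a placeholder; substitute your own user):

```shell
# Create the directory referenced by hadoop.tmp.dir and hand it to the
# Hadoop user ("hduser" here is hypothetical; use the real daemon account).
sudo mkdir -p /usr/lib/hadoop/tmp
sudo chown -R hduser:hduser /usr/lib/hadoop/tmp
```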

vim hdfs-site.xml

```
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

    <property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>8192</value>
    <description>
        Specifies the maximum number of threads to use for transferring data
        in and out of the DN.
    </description>
</property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/lib/hadoop/hdfs/data</value>
    </property>

</configuration>
```
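Likewise, the DataNode storage directory configured above should exist before the first start (a sketch; "hduser" again stands in for whatever account runs the daemons):

```shell
# Pre-create the DataNode storage directory referenced in hdfs-site.xml.
sudo mkdir -p /usr/lib/hadoop/hdfs/data
sudo chown -R hduser:hduser /usr/lib/hadoop/hdfs/data
```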

vim mapred-site.xml

```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
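Note that a stock Hadoop 2.6.4 distribution ships only mapred-site.xml.template, not mapred-site.xml itself; if the file is missing, create it from the bundled template first:

```shell
cd /usr/lib/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
```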

vim yarn-site.xml

```
<?xml version="1.0"?>
<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```

vim hadoop-env.sh

```
# The java implementation to use.
export JAVA_HOME=/usr/lib/java

export HADOOP_PREFIX=/usr/lib/hadoop
```

vim yarn-env.sh

```
export JAVA_HOME=/usr/lib/java
```
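Before the very first start, HDFS needs its NameNode formatted. This is a one-time initialization step; re-running it later wipes the HDFS metadata:

```shell
cd /usr/lib/hadoop
# One-time initialization of the NameNode metadata (stored under hadoop.tmp.dir).
bin/hdfs namenode -format
```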

Once the configuration above is complete, start Hadoop:

```
cd /usr/lib/hadoop/sbin

# start HDFS
./start-dfs.sh

# start YARN
./start-yarn.sh

```
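After both scripts finish, the running Java daemons can be checked with jps, and the web UIs come up on the Hadoop 2.x default ports:

```shell
jps
# On a healthy single node, expect (PIDs will differ):
#   NameNode, DataNode, SecondaryNameNode   (from start-dfs.sh)
#   ResourceManager, NodeManager            (from start-yarn.sh)

# Web interfaces (Hadoop 2.6 defaults):
#   NameNode UI:        http://192.168.128.129:50070
#   ResourceManager UI: http://192.168.128.129:8088
```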

Passwordless SSH so Hadoop can start without prompting

```
Method 1:

    Run the following in a terminal (note: the current directory does not matter):

    ssh-keygen -t rsa

    Respond to the prompts as annotated in parentheses below:

    Enter file in which to save the key (/home/youruser/.ssh/id_rsa):
        (press Enter to accept the default file name)
    Enter passphrase (empty for no passphrase):
        (press Enter to leave the RSA private key without a passphrase)
    Enter same passphrase again:
        (press Enter again)
    Your identification has been saved in /home/youruser/.ssh/id_rsa.
    Your public key has been saved in /home/youruser/.ssh/id_rsa.pub.

    Then append the public key to the list of authorized keys:

    cd ~/.ssh
    cat id_rsa.pub >> authorized_keys

    chmod 600 authorized_keys
        (guides found online usually omit this line, but on my machine
        passwordless login does not work without it)
```
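The same key setup can also be done non-interactively; ssh-copy-id handles the append and the permissions in one step. A sketch, assuming openssh-server is running locally and no key exists yet:

```shell
# Generate a key with an empty passphrase, skipping all prompts
# (fails with a prompt if ~/.ssh/id_rsa already exists),
# then authorize it for logins to this machine.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id localhost        # or: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost 'echo ok'      # should log in without a password prompt
```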

 

 

posted @ 2017-12-08 14:42  佛法无边