
Installing Hadoop 2.9.2 on Ubuntu 18.04

First, use the `su` command to switch to the root user.

Download the JDK and Hadoop.

Extract each archive into its own directory, /java and /hadoop:

tar xvf xxx.tar                 # run in the directory containing the archive, or:
tar xvf /x/x/xxx.tar -C /x/x    # extract into a target directory with -C
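The two `tar` forms above are equivalent; here is a minimal self-contained demonstration of `-C` using a throwaway archive (not the real JDK/Hadoop tarballs):

```shell
# Build a throwaway tarball, then extract it into a separate target
# directory with -C -- the same pattern as extracting into /java or /hadoop.
mkdir -p /tmp/demo_src /tmp/demo_dst
echo "hello" > /tmp/demo_src/file.txt
tar cf /tmp/demo.tar -C /tmp/demo_src file.txt
tar xvf /tmp/demo.tar -C /tmp/demo_dst
cat /tmp/demo_dst/file.txt    # prints: hello
```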

Java: /java/jdk1.8.0_241
Hadoop: /hadoop/hadoop-2.9.2

Note: you must reset permissions on the Hadoop installation directory:

chmod -R 777 hadoop-x.x.x

Set the environment variables

The first file applies only to the current user, the second to all users; changes to the second still take effect after a reboot.

gedit ~/.bashrc
gedit /etc/profile

export JAVA_HOME=/java/jdk1.8.0_241
export HADOOP_HOME=/hadoop/hadoop-2.9.2
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar 
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/jre/bin:$PATH

# Change JAVA_HOME to an absolute path in hadoop-env.sh
gedit /hadoop/hadoop-2.9.2/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/java/jdk1.8.0_241

# Apply the changes immediately
source ~/.bashrc
source /etc/profile
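The exports above can be sanity-checked before moving on; the paths below are this post's install locations, so adjust them if yours differ:

```shell
# Re-declare the variables from above and confirm the Hadoop bin
# directory actually landed on PATH.
export JAVA_HOME=/java/jdk1.8.0_241
export HADOOP_HOME=/hadoop/hadoop-2.9.2
export PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$PATH
echo "$JAVA_HOME"
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin is on PATH" ;;
  *) echo "hadoop bin is MISSING from PATH" ;;
esac
```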

# Test in a new terminal (not as root)
java -version
hadoop version

Configure SSH
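The post doesn't list the SSH commands, but pseudo-distributed Hadoop needs passwordless SSH to localhost, since start-dfs.sh and start-yarn.sh log in over SSH. A typical setup, assuming OpenSSH is installed, looks like:

```shell
# Generate a key pair (skipped if one already exists) and authorize it
# for logins to this machine.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify with: ssh localhost   (should log in without a password prompt)
```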

Edit the XML configuration files under $HADOOP_HOME/etc/hadoop

core-site.xml

<configuration>
        <property>
             <name>hadoop.tmp.dir</name>
             <value>file:/hadoop/hadoop-2.9.2/tmp</value>
        </property>
        <property>
             <name>fs.defaultFS</name>
             <value>hdfs://localhost:9000</value>
        </property>
</configuration>
hdfs-site.xml (dfs.replication is 1 because this is a single-node setup)

<configuration>
        <property>
             <name>dfs.replication</name>
             <value>1</value>
        </property>
        <property>
             <name>dfs.namenode.name.dir</name>
             <value>file:/hadoop/hadoop-2.9.2/tmp/dfs/namenode</value>
        </property>
        <property>
             <name>dfs.datanode.data.dir</name>
             <value>file:/hadoop/hadoop-2.9.2/tmp/dfs/datanode</value>
        </property>
</configuration>
mapred-site.xml (Hadoop 2.x ships this as mapred-site.xml.template; copy it to mapred-site.xml first)

<configuration>
        <property>
             <name>mapreduce.framework.name</name>
             <value>yarn</value>
        </property>
</configuration>
yarn-site.xml

<configuration>
    <property> 
        <name>yarn.nodemanager.aux-services</name> 
        <value>mapreduce_shuffle</value> 
    </property> 
    <property> 
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
        <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
    </property> 

    <property> 
        <name>yarn.resourcemanager.hostname</name> 
        <value>localhost</value> 
    </property> 
</configuration>
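Before starting anything, the four files above can be checked for well-formed XML. This is a sketch assuming `xmllint` (from libxml2) is available, and it is guarded so it skips files or tools that are missing:

```shell
# Validate each Hadoop config file is well-formed XML before starting daemons.
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  p=/hadoop/hadoop-2.9.2/etc/hadoop/$f
  [ -f "$p" ] || { echo "$p: not found, skipping"; continue; }
  if command -v xmllint >/dev/null 2>&1; then
    xmllint --noout "$p" && echo "$p: OK"
  else
    echo "xmllint not installed; skipping check"
  fi
done
```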

Format and start Hadoop

  • hdfs namenode -format (format the NameNode; needed only before the first start)
  • start-dfs.sh
  • start-yarn.sh
  • mr-jobhistory-daemon.sh start historyserver (start the JobHistory server)
  • start-all.sh / stop-all.sh (start or stop all daemons at once)
  • jps (check which daemons are running)
  • Test with WordCount:
  • hadoop fs -mkdir <>
  • hadoop jar <jar dir> <class> <input> <output>
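Put together, a WordCount run might look like the sketch below. The examples jar path is where Hadoop 2.9.2 bundles it; /input and /output are made-up HDFS paths, and the whole thing is guarded so it is a no-op on a machine without hadoop on the PATH:

```shell
# End-to-end WordCount sketch (assumes the daemons from above are running).
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -mkdir -p /input
  hadoop fs -put "$HADOOP_HOME"/etc/hadoop/*.xml /input
  hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar \
      wordcount /input /output
  hadoop fs -cat /output/part-r-00000
else
  echo "hadoop not on PATH; skipping"
fi
```

Note that the output directory must not already exist, or the job fails.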

Web UIs — NameNode: localhost:50070, ResourceManager: localhost:8088, JobHistory: localhost:19888

posted on 2020-03-27 23:01 by Bingmous