20201117 - Installing and running Hadoop 3.3.0 on Linux
Many of the steps for the old 2.x releases no longer apply to 3.x, so here is a fresh write-up.
I. Download and extract
cd /home/bigdata;
wget http://mirrors.hust.edu.cn/apache/hadoop/core/stable/hadoop-3.3.0.tar.gz;
Extract and set permissions:
tar -zxvf hadoop-3.3.0.tar.gz;
chmod -R 777 /home/bigdata/hadoop-3.3.0;
chown -R root:root /home/bigdata/hadoop-3.3.0;
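Optionally (this is my own addition, not in the original steps), export HADOOP_HOME so the later commands can be run without typing the full path each time; the prefix below assumes the extract location above:

```shell
# Assumed install prefix from the extract step above
export HADOOP_HOME=/home/bigdata/hadoop-3.3.0
# Put both bin/ (hdfs, hadoop) and sbin/ (start-dfs.sh) on PATH
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
```

Add the same two lines to /etc/profile or ~/.bashrc to make them persistent.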
II. Configuration
1. Passwordless SSH
ssh-keygen -t rsa;
The key files are written to /root/.ssh by default.
touch /root/.ssh/authorized_keys;
chmod 600 /root/.ssh/authorized_keys;
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys;
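Before moving on, it is worth confirming that passwordless login actually works, since start-dfs.sh relies on it; a quick check (my own sketch):

```shell
# BatchMode makes ssh fail immediately instead of prompting for a password,
# so this distinguishes "key works" from "key not set up".
if ssh -o BatchMode=yes -o ConnectTimeout=5 localhost true 2>/dev/null; then
  SSH_OK=yes
else
  SSH_OK=no
fi
echo "passwordless ssh to localhost: $SSH_OK"
```

If this prints "no", start-dfs.sh will prompt for a password or fail.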
2. Hadoop configuration
Find the Java location: echo $JAVA_HOME;
vim /home/bigdata/hadoop-3.3.0/etc/hadoop/hadoop-env.sh;
Add:
export JAVA_HOME=/usr/java/jdk1.8.0_131
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export HADOOP_SHELL_EXECNAME=root
3. Change the execution user
vim /home/bigdata/hadoop-3.3.0/bin/hdfs;
Change: HADOOP_SHELL_EXECNAME="root"
4. Change the data storage location
vim /home/bigdata/hadoop-3.3.0/etc/hadoop/core-site.xml;
Add:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/bigdata/hadoop-data/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
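For a single-node setup it is also common (though not part of the original notes) to set the replication factor to 1 in etc/hadoop/hdfs-site.xml, since there is only one DataNode to hold block replicas; a sketch:

```xml
<!-- etc/hadoop/hdfs-site.xml - assumed addition for a single-node setup -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```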
III. Initialize
/home/bigdata/hadoop-3.3.0/bin/hdfs namenode -format;
Only format on first setup: re-running it generates a new cluster ID that no longer matches existing DataNode data.
IV. Run
/home/bigdata/hadoop-3.3.0/sbin/start-dfs.sh
Run the jps command to check the result:
27010 NameNode
26438 SecondaryNameNode
26238 DataNode
Web UI:
http://localhost:9870/
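Once the three daemons show up in jps, a quick smoke test (my own sketch, assuming the install path above) is to write a small file into HDFS and list it back:

```shell
# Fall back to the install prefix used throughout these notes
HADOOP_HOME=${HADOOP_HOME:-/home/bigdata/hadoop-3.3.0}
HDFS="$HADOOP_HOME/bin/hdfs"
if [ -x "$HDFS" ]; then
  # create a home dir for root, upload a small file, and list it back
  "$HDFS" dfs -mkdir -p /user/root
  "$HDFS" dfs -put -f /etc/hosts /user/root/
  "$HDFS" dfs -ls /user/root
else
  echo "hdfs binary not found at $HDFS - check the install path"
fi
```

If the -ls output shows /user/root/hosts, HDFS is reading and writing correctly.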