Hadoop Installation

Notes on installing Hadoop on Debian.
1. The virtual machines were created with KVM; create the hadoop user while installing the system. The simplest (minimal) installation is recommended.
2. Configure the /etc/network/interfaces file (see the sketch after this list).
3. Configure the /etc/hosts file, adding the following entries:
192.168.20.101  hadoop-master
192.168.20.102  hadoop-solver1
192.168.20.103  hadoop-solver2
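For step 2, a minimal static-address sketch for /etc/network/interfaces on hadoop-master; the interface name ens3 and the gateway 192.168.20.1 are assumptions, adjust them to your KVM network:
# /etc/network/interfaces (hadoop-master; ens3 and gateway are assumed)
auto ens3
iface ens3 inet static
    address 192.168.20.101
    netmask 255.255.255.0
    gateway 192.168.20.1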

Perform all of the following as the hadoop user.
4. Install OpenSSH and vim: sudo apt-get install openssh-server vim
# Run `ssh-keygen` on each host to create its key pair

hadoop-master 
ssh-keygen -t rsa -C "hadoop-master"

hadoop-solver1
ssh-keygen -t rsa -C "hadoop-solver1"

hadoop-solver2
ssh-keygen -t rsa -C "hadoop-solver2"
5. Passwordless login
# Run on every host:
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-master
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-solver1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop-solver2
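To confirm that passwordless login works, each host should now be able to reach the others without a password prompt, for example:
ssh hadoop-solver1 hostname   # should print hadoop-solver1 without asking for a password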
JDK configuration
The following assumes JDK 1.8.0_202 has already been unpacked under /usr/lib/jvm. Create a version-independent symlink and a profile script:
sudo ln -sf /usr/lib/jvm/jdk1.8.0_202 /usr/lib/jvm/jdk
sudo vim /etc/profile.d/jdk.sh
Add the following:
# JDK environment settings
export JAVA_HOME=/usr/lib/jvm/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Verify the Java environment (source /etc/profile.d/jdk.sh first, or log in again):
java -version
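The steps below assume the Hadoop 3.1.2 tarball has been unpacked under /opt; a sketch using the Apache archive:
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz
sudo tar -xzf hadoop-3.1.2.tar.gz -C /opt
Then fix the ownership, create a version-independent symlink, and create the log and HDFS directories: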
sudo chown -R hadoop:hadoop /opt/hadoop-3.1.2
sudo ln -sf /opt/hadoop-3.1.2 /opt/hadoop
mkdir /opt/hadoop/logs
mkdir -p /opt/hadoop/hdfs/name
mkdir -p /opt/hadoop/hdfs/data
Configure the Hadoop environment variables:
sudo vim /etc/profile.d/hadoop.sh
# Hadoop environment settings
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Apply the profile
source /etc/profile 
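As a quick check, hadoop should now be on the PATH:
hadoop version   # should report Hadoop 3.1.2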
Hadoop configuration files
All of the configuration files live under /opt/hadoop/etc/hadoop/.
vim workers (list every worker node; note that hadoop-master is itself a worker here, so it will also run a DataNode and NodeManager):
hadoop-master
hadoop-solver1
hadoop-solver2
Configure core-site.xml (vim core-site.xml):
<configuration> 

  <!-- NameNode URI: where HDFS lives -->
  <property> 
    <name>fs.defaultFS</name>  
    <value>hdfs://hadoop-master:9000</value> 
  </property>  
  
  <!-- Where Hadoop stores its runtime temporary files -->
  <property> 
    <name>hadoop.tmp.dir</name>  
    <value>/opt/hadoop/tmp</value> 
  </property> 
  
</configuration>
Configure hdfs-site.xml (vim hdfs-site.xml):
<configuration> 

  <!-- Number of HDFS block replicas (1 here; with three DataNodes up to 3 is possible) -->
  <property> 
    <name>dfs.replication</name>  
    <value>1</value> 
  </property>  
  
  <!-- Where the NameNode stores the HDFS namespace metadata -->
  <property> 
    <name>dfs.namenode.name.dir</name>  
    <value>/opt/hadoop/hdfs/name</value> 
  </property>  
  
  <!-- Physical storage location of data blocks on each DataNode -->
  <property> 
    <name>dfs.datanode.data.dir</name>  
    <value>/opt/hadoop/hdfs/data</value> 
  </property> 
  
</configuration>

Configure mapred-site.xml (vim mapred-site.xml):
<configuration> 

  <!-- Framework MapReduce runs on (defaults to local mode) -->
  <property> 
    <name>mapreduce.framework.name</name>  
    <value>yarn</value> 
  </property>  
  
  <!-- MapReduce JobHistory web UI address -->
  <property> 
    <name>mapreduce.jobhistory.webapp.address</name>  
    <value>hadoop-master:19888</value> 
  </property> 
  
</configuration>
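On Hadoop 3.x, MapReduce jobs submitted to YARN usually also need to know where the MapReduce framework lives, otherwise ApplicationMaster containers may fail to find MRAppMaster. A commonly used addition inside the same <configuration> block (the /opt/hadoop path matches this install):
  <!-- Tell YARN containers where the MapReduce framework is installed -->
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>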

Configure yarn-site.xml (vim yarn-site.xml):
<configuration> 
  <!-- Site specific YARN configuration properties --> 
   
  <!-- Hostname of the YARN ResourceManager -->
  <property> 
    <name>yarn.resourcemanager.hostname</name>  
    <value>hadoop-master</value> 
  </property>  
  
  <!--  yarn Web UI address -->  
  <property> 
    <name>yarn.resourcemanager.webapp.address</name>  
    <value>${yarn.resourcemanager.hostname}:8088</value> 
  </property>  
  
  <!-- How reducers fetch data: the shuffle auxiliary service -->
  <property> 
    <name>yarn.nodemanager.aux-services</name>  
    <value>mapreduce_shuffle</value> 
  </property> 
  
</configuration>
Copy /opt/hadoop together with hadoop.sh and jdk.sh to every other server via scp:
scp -r /opt/hadoop   xxx:/opt/
scp -r /etc/profile.d/jdk.sh xxx:/etc/profile.d/
scp -r /etc/profile.d/hadoop.sh xxx:/etc/profile.d/
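The same copy as a loop, using the worker hostnames from /etc/hosts (writing into /etc/profile.d requires root on the target, so this assumes the hadoop user may write there; otherwise copy to a temp directory and sudo mv):
for host in hadoop-solver1 hadoop-solver2; do
  scp -r /opt/hadoop ${host}:/opt/
  scp /etc/profile.d/jdk.sh /etc/profile.d/hadoop.sh ${host}:/etc/profile.d/
done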
Then run source /etc/profile on each host.
Verifying Hadoop
First, format HDFS (run on hadoop-master only, and only once):
hdfs namenode -format
Start and stop the JobHistory server:
mr-jobhistory-daemon.sh start historyserver
mr-jobhistory-daemon.sh stop historyserver
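The mr-jobhistory-daemon.sh wrapper still works on Hadoop 3.1.2 but is deprecated; the current form is:
mapred --daemon start historyserver
mapred --daemon stop historyserver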
Start and stop YARN:
start-yarn.sh
stop-yarn.sh
Start and stop HDFS:
start-dfs.sh
stop-dfs.sh
Start and stop everything at once:
start-all.sh
stop-all.sh
Verify with jps:
jps
###########
13074 SecondaryNameNode
14485 Jps
10441 JobHistoryServer
12876 NameNode
13341 ResourceManager
Access the web UIs
NameNode	http://xxx:9870	Default HTTP port is 9870.
ResourceManager	http://xxx:8088	Default HTTP port is 8088.
MapReduce JobHistory Server	http://xxx:19888	Default HTTP port is 19888.
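A quick reachability check from any node (xxx above stands for the corresponding host; the hostnames below come from /etc/hosts):
curl -s http://hadoop-master:9870 | head   # NameNode UI should return HTML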

  
