Learning Hadoop 3.0
Part 1: Installing and Configuring Hadoop 3.0
1. Installing JDK 1.8
- Add the PPA:
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
- Install oracle-java8-installer:
sudo apt-get install oracle-java8-installer
This command pre-accepts the Oracle license terms:
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections
- Set the system default JDK (switching from JDK 7 to JDK 8):
sudo update-java-alternatives -s java-8-oracle
- Verify that the JDK was installed successfully:
java -version
javac -version
- Alternatively, install from a downloaded tarball:
Download:
wget http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz
Create the target directory:
sudo mkdir /usr/lib/jvm
Extract into it:
sudo tar -zxvf jdk-8u151-linux-x64.tar.gz -C /usr/lib/jvm
Edit the environment variables:
sudo vim ~/.bashrc
Append the following at the end of the file:
#set oracle jdk environment
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_151  ## note: change this to the directory you extracted the JDK into
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Apply the changes immediately:
source ~/.bashrc
Set the system default JDK version:
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_151/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.8.0_151/bin/javac 300
sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/jdk1.8.0_151/bin/jar 300
sudo update-alternatives --install /usr/bin/javah javah /usr/lib/jvm/jdk1.8.0_151/bin/javah 300
sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/jdk1.8.0_151/bin/javap 300
sudo update-alternatives --config java
java -version
2. Downloading, Installing, and Configuring Hadoop 3.0
- Download Hadoop:
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz
- Extract to /usr/local/hadoop3:
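One way to do this, as a sketch (the tarball name comes from the download step above; the hadoop3 target path is this guide's convention):
sudo tar -zxvf hadoop-3.0.0.tar.gz -C /usr/local
sudo mv /usr/local/hadoop-3.0.0 /usr/local/hadoop3    # rename to the path used throughout this guide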
- Configure the environment variables:
vi /etc/profile and append the following at the end:
#Hadoop 3.0
export HADOOP_HOME=/usr/local/hadoop3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HDFS_DATANODE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_NAMENODE_USER=root
Apply the changes:
source /etc/profile
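The HDFS_*_USER=root variables are what allow the Hadoop 3 start-up scripts to run as root; without them, start-dfs.sh aborts with an "Attempting to operate on hdfs namenode as root" style error. After sourcing the profile, a quick sanity check:
hadoop version           # should print Hadoop 3.0.0
echo $HADOOP_CONF_DIR    # should print /usr/local/hadoop3/etc/hadoop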
- Edit the configuration files
Edit /usr/local/hadoop3/etc/hadoop/core-site.xml to set the HDFS address and port and the temporary file directory (fs.default.name is the legacy key; it still works, but fs.defaultFS is its current name):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ha01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop3/hadoop/tmp</value>
  </property>
</configuration>
In hdfs://ha01:9000, ha01 is the hostname. Here is how to change the hostname permanently:
1. Edit the network file:
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ha01    # set the hostname here
NISDOMAIN=eng-cn.platform.com
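On systemd-based systems (CentOS 7+, newer Ubuntu), an alternative that changes the persistent hostname in one step:
sudo hostnamectl set-hostname ha01
hostname    # verify the new name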
2. Edit the names in /etc/hosts:
# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
172.17.33.169 ha01    # set the hostname here
Edit hdfs-site.xml to configure the replication factor and the paths where data is stored:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop3/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop3/hadoop/hdfs/data</value>
  </property>
</configuration>
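Hadoop creates most of these directories when the NameNode is formatted and the daemons start, but creating them up front avoids permission surprises; a sketch using the paths configured above:
mkdir -p /home/hadoop3/hadoop/tmp /home/hadoop3/hadoop/hdfs/name /home/hadoop3/hadoop/hdfs/data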
Edit mapred-site.xml to configure MapReduce jobs to run on the YARN framework; compared with earlier versions, the later entries are new. If the mapreduce.application.classpath parameter is not configured, MapReduce fails at runtime with:
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>
      /usr/local/hadoop3/etc/hadoop,
      /usr/local/hadoop3/share/hadoop/common/*,
      /usr/local/hadoop3/share/hadoop/common/lib/*,
      /usr/local/hadoop3/share/hadoop/hdfs/*,
      /usr/local/hadoop3/share/hadoop/hdfs/lib/*,
      /usr/local/hadoop3/share/hadoop/mapreduce/*,
      /usr/local/hadoop3/share/hadoop/mapreduce/lib/*,
      /usr/local/hadoop3/share/hadoop/yarn/*,
      /usr/local/hadoop3/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>
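Rather than maintaining this list by hand, the hadoop classpath command prints the full classpath of the local installation, and its output is commonly pasted into the value above as a fix for the MRAppMaster error:
/usr/local/hadoop3/bin/hadoop classpath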
Edit yarn-site.xml:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ha01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Add the worker hostnames to the workers file:
ha02
ha03
- Hadoop setup is complete; now make it distributed.
Build the other nodes by cloning the Linux system or by copying the hadoop directory:
scp -r /usr/local/hadoop3 root@ha02:/usr/local
scp -r /usr/local/hadoop3 root@ha03:/usr/local
If ha02 cannot be resolved during the copy, declare the hostnames in the system hosts file:
192.168.160.101 ha01
192.168.160.102 ha02
192.168.160.103 ha03
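One way to append these entries on each node (run as root; addresses as assumed above):
cat >> /etc/hosts <<'EOF'
192.168.160.101 ha01
192.168.160.102 ha02
192.168.160.103 ha03
EOF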
- The Hadoop nodes need passwordless SSH login.
ssh-keygen -t rsa    # generates the private key id_rsa and the public key id_rsa.pub
Copy the contents of the public key into the ~/.ssh/authorized_keys file on every machine you want to ssh into without a password. For example: generate the key pair on machine A, then append A's public key to machine B's authorized_keys file; after that, A can ssh to B without a password.
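A minimal sketch that automates the copy with ssh-copy-id, run on ha01 for the worker nodes declared earlier:
ssh-keygen -t rsa        # press Enter at every prompt
ssh-copy-id root@ha02    # appends the public key to ha02's authorized_keys
ssh-copy-id root@ha03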
3. Passwordless SSH Login
3.1 Check whether passwordless login already works:
[root@master ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
3.2 CentOS does not enable passwordless SSH login by default. Uncomment the following two lines in /etc/ssh/sshd_config so they read as shown below; this must be done on every server:
RSAAuthentication yes
PubkeyAuthentication yes
3.3 Generate the key pair: run ssh-keygen -t rsa and press Enter at every prompt.
3.4 Copy the public key into the authorized keys file:
cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
3.5 Log in again; this time no password is required:
[root@master ~]# ssh localhost
Last login: Thu Oct 20 15:47:22 2016 from 192.168.0.100
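To confirm the cluster-wide setup, each of these should print the remote hostname without asking for a password (hostnames as declared in the hosts file above):
ssh ha02 hostname
ssh ha03 hostname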