bigdata-02-hadoop2.8.4-resourceHA installation
1, Machine environment preparation
1), Disable SELinux
vim /etc/selinux/config
SELINUX=disabled
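A change in /etc/selinux/config only takes effect after a reboot. As an optional sketch (standard CentOS commands, not part of the original steps), the running system can also be switched off immediately:
setenforce 0     # put SELinux into permissive mode for the running system
getenforce       # verify: prints Permissive (or Disabled after a reboot)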
2), Time synchronization
yum -y install chrony
Edit the configuration on the time server and restart the service:
vim /etc/chrony.conf
[root@dock hadoop]# cat /etc/chrony.conf | grep -v ^$ | grep -v ^#
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.199.0/16
local stratum 10
logdir /var/log/chrony
Edit the configuration on the servers that need to synchronize, and restart the service:
vim /etc/chrony.conf
[root@node1 ~]# cat /etc/chrony.conf | grep -v ^$ | grep -v ^#
server 192.168.199.131 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
Run the time synchronization:
systemctl restart chronyd
[root@node2 ~]# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* dock                          3   6   177     4  -1590ns[  +62us] +/-   13ms
Check the synchronization status:
[root@node3 ~]# timedatectl
      Local time: Wed 2018-03-21 08:16:02 EDT
  Universal time: Wed 2018-03-21 12:16:02 UTC
        RTC time: Wed 2018-03-21 12:16:02
       Time zone: America/New_York (EDT, -0400)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: yes
 Last DST change: DST began at
                  Sun 2018-03-11 01:59:59 EST
                  Sun 2018-03-11 03:00:00 EDT
 Next DST change: DST ends (the clock jumps one hour backwards) at
                  Sun 2018-11-04 01:59:59 EDT
                  Sun 2018-11-04 01:00:00 EST
3), Change the hostname (most cluster setups require this step)
hostname node1
hostname node2
hostname node3
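The hostname command only renames the running system. As a hedged sketch (192.168.199.182/247 are the NameNode addresses used later in this post; the node3 address is a placeholder), the names can be made permanent and resolvable on every node:
# make the hostname survive a reboot (run the matching command on each node)
hostnamectl set-hostname node1

# map the cluster hostnames to IPs on every node (replace with your addresses)
cat >> /etc/hosts <<EOF
192.168.199.182 node1
192.168.199.247 node2
192.168.199.100 node3
EOF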
4), JDK version
java -version    # 1.8.0_161
5), Set up passwordless SSH login
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Send the public key to the NameNode and install it:
For non-root users, remember to change the permissions of authorized_keys to 600.
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
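Appending the key locally only authorizes logins to the same machine. A hedged sketch for the other nodes (hostnames as used elsewhere in this post); the active NameNode also needs passwordless SSH to the standby for the sshfence mechanism configured below:
# copy the public key to the other nodes (repeat for every node in the cluster)
ssh-copy-id -i ~/.ssh/id_dsa.pub root@node2
ssh-copy-id -i ~/.ssh/id_dsa.pub root@node3

# verify that login no longer asks for a password
ssh node2 hostname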
2, ZooKeeper installation
Refer to other posts for the ZooKeeper setup.
3, Hadoop installation
zkfc - performs the HA monitoring and switching: it manages the active/standby state, watches the NameNode process, and records the state in ZooKeeper.
JournalNode - shares the edit log (edits) between the NameNodes so the standby can keep its fsimage up to date.
1), Set the environment variables
export HADOOP_HOME=/usr/local/hadoop-2.8.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
2), Edit hadoop-env.sh
cd $HADOOP_HOME/etc/hadoop
vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_161
3), Configure core-site.xml
<configuration>
    <!-- Name of the HDFS nameservice -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hdfscluster</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.8.4/tmp</value>
    </property>
    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>
4), Edit hdfs-site.xml
<configuration>
    <!-- HDFS nameservice name; must match the value in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>hdfscluster</value>
    </property>
    <!-- hdfscluster has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.hdfscluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.hdfscluster.nn1</name>
        <value>192.168.199.182:8020</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.hdfscluster.nn1</name>
        <value>192.168.199.182:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.hdfscluster.nn2</name>
        <value>192.168.199.247:8020</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.hdfscluster.nn2</name>
        <value>192.168.199.247:50070</value>
    </property>
    <!-- Where the NameNode metadata is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node1:8485;node2:8485;node3:8485/hdfscluster</value>
    </property>
    <!-- Where the JournalNode stores its data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop-2.8.4/journaldata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Failover proxy provider used by clients -->
    <property>
        <name>dfs.client.failover.proxy.provider.hdfscluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <!-- The sshfence mechanism requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
    </property>
    <!-- Timeout for the sshfence mechanism -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
Note: if the cluster comes up but creating a directory reports "ipc.Client: Retrying connect to server", change it to ...
5), Add slaves
vim slaves
node1
node2
node3
4, Configure YARN
1), Rename mapred-site.xml.template to mapred-site.xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
2), Configure yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id of the RM -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarncluster</value>
    </property>
    <!-- Logical names of the RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Host of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node2</value>
    </property>
    <!-- ZooKeeper quorum address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
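With all four files edited, the configuration so far lives on a single machine. A hedged sketch (install path as used above) for pushing it to the other nodes before anything is started:
# copy the configured Hadoop installation to the other nodes
scp -r /usr/local/hadoop-2.8.4 root@node2:/usr/local/
scp -r /usr/local/hadoop-2.8.4 root@node3:/usr/local/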
5, Format the NameNode
1), Start the JournalNode on all 3 machines
hadoop-daemon.sh start journalnode
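As a quick check (jps ships with the JDK), each of the 3 machines should now show a JournalNode process:
jps    # the output should include a JournalNode entry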
2), Format the NameNode and start it
hdfs namenode -format
hadoop-daemon.sh start namenode
3), On the other NameNode, pull the metadata over (or copy it manually; see the sketch after the command)
hdfs namenode -bootstrapStandby
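If bootstrapStandby is not used, a hedged manual alternative (assuming node2 is the second NameNode and the default metadata layout under the hadoop.tmp.dir configured above) is to copy the formatted name directory over:
# run on the first NameNode: copy its metadata directory to the standby
ssh root@node2 mkdir -p /usr/local/hadoop-2.8.4/tmp/dfs
scp -r /usr/local/hadoop-2.8.4/tmp/dfs/name root@node2:/usr/local/hadoop-2.8.4/tmp/dfs/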
4), Start the second NameNode
hadoop-daemon.sh start namenode
5), On the active NameNode, format the HA state in ZooKeeper
hdfs zkfc -formatZK
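A quick way to confirm the format worked (zkCli.sh ships with ZooKeeper) is to look for the HA znode it creates:
# a child named hdfscluster should appear under /hadoop-ha
zkCli.sh -server node1:2181 ls /hadoop-ha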
6), Start HDFS
start-dfs.sh
HDFS can now be reached at node1:50070.
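To confirm that failover is wired up, the state of each NameNode can be queried with the logical names nn1/nn2 configured above:
hdfs haadmin -getServiceState nn1    # one should report "active"
hdfs haadmin -getServiceState nn2    # the other should report "standby"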
6, Start YARN
1), Run on the NameNode:
start-yarn.sh
2), Start the second ResourceManager (on node2, the other RM host)
YARN HA does not need to record state, so it is very simple:
yarn-daemon.sh start resourcemanager
YARN can now be reached at node1:8088.
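The same kind of check works for the ResourceManagers, using the rm-ids configured above:
yarn rmadmin -getServiceState rm1    # one should report "active"
yarn rmadmin -getServiceState rm2    # the other should report "standby"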
For later restarts, first start ZooKeeper on all 3 machines and then run start-dfs.sh; a short sketch of the order follows.
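A hedged sketch of that restart order (zkServer.sh is the standard ZooKeeper start script; its path depends on the ZooKeeper install from section 2):
# 1. start ZooKeeper on node1, node2 and node3
zkServer.sh start

# 2. start HDFS (NameNodes, DataNodes, JournalNodes and zkfc) from one NameNode
start-dfs.sh

# 3. start YARN, plus the second ResourceManager as in section 6
start-yarn.sh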
7, Testing
1, Create the input and output directories
hadoop fs -mkdir -p /data/wordcount
hadoop fs -mkdir -p /output
2, Upload a file
hadoop fs -put README.txt /data/wordcount
3, Run the example job
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar wordcount /data/wordcount /output/wordcount
4, View the output part file
hadoop fs -text /output/wordcount/part-r-00000
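The output directory can also be listed to see the files the job produced (standard HDFS shell command):
# the job writes a _SUCCESS marker plus one part-r-* file per reducer
hadoop fs -ls /output/wordcount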
Notes when programming against the HA cluster:
1, When accessing HDFS from code, use the nameservice URI:
FileSystem.get(new URI("hdfs://hdfscluster/"), conf, "root");
The configuration files
hdfs-site.xml, core-site.xml, yarn-site.xml and mapred-site.xml need to be placed under resources;
new Configuration() will then load the configuration files from resources automatically, which is how the client resolves the hdfscluster nameservice.