Four physical machines are used:

NameNode: qzhong, node27

DataNode: qzhong, node27, node100, node101

OS environment: qzhong (Ubuntu-14.0); node27, node100, node101 (CentOS, 64-bit)

HA setup: shared edits are stored on JournalNodes (QJM) rather than on NFS

hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
        <description>The logical name of this nameservice, similar to the nameservice IDs used in HDFS federation</description>
    </property>

    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>qzhong,node27</value>
    </property>
    
    <property>
        <name>dfs.namenode.rpc-address.mycluster.qzhong</name>
        <value>qzhong:8020</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.mycluster.node27</name>
        <value>node27:8020</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.mycluster.qzhong</name>
        <value>qzhong:50070</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.mycluster.node27</name>
        <value>node27:50070</value>
    </property>

    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node27:8485;node100:8485;node101:8485/mycluster</value>
    </property>

    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/qzhong/journalnodeedit</value>    
    </property>

    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/qzhong/.ssh/id_rsa</value>
    </property>

    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>ha.zookeeper.quorum</name>
        <value>qzhong:2181,node27:2181,node100:2181</value>
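        <!-- Note: the Apache HA guide places ha.zookeeper.quorum in core-site.xml; HDFS daemons also read hdfs-site.xml, so it is picked up here as well. -->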
    </property>

    <property>
        <name>dfs.hosts</name>
        <value>/home/qzhong/hadoop-2.2.0/etc/hadoop/slaves</value>
    </property>

    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>

    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
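        <!-- 268435456 bytes = 256 MB block size -->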
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/qzhong/hadoopjournaldata</value>
    </property>

    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

</configuration>

 

core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
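    <!-- Note: fs.defaultFS is the non-deprecated name for this property in Hadoop 2.x; fs.default.name still works as a deprecated alias. -->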
    <property>
        <name>fs.default.name</name>
        <value>hdfs://mycluster</value>
    </property>

    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>

Startup procedure after configuration:

Follow the HA startup steps from the Apache Hadoop documentation: http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html

For the very first startup after configuration (a consolidated command sketch follows this list):

1. Start the JournalNode process on every JournalNode machine.

2. Format the NameNode with bin/hdfs namenode -format; after formatting, start the cluster with sbin/start-dfs.sh.

3. On the other (standby) NameNode, run bin/hdfs namenode -bootstrapStandby.
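Putting the three steps together, the first startup roughly looks like this (a sketch, assuming the directory layout above; the JournalNode hosts come from dfs.namenode.shared.edits.dir):

# 1. on each JournalNode host (node27, node100, node101)
sbin/hadoop-daemon.sh start journalnode

# 2. on the first NameNode (qzhong): format it and start HDFS
bin/hdfs namenode -format
sbin/start-dfs.sh

# 3. on the second NameNode (node27): copy over the formatted metadata
bin/hdfs namenode -bootstrapStandby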

Check a NameNode's state with bin/hdfs haadmin -getServiceState qzhong. If it reports standby, check the other NameNode with bin/hdfs haadmin -getServiceState node27. If that one is also standby, the cluster cannot serve requests yet, and one NameNode has to be switched to active manually. Here the qzhong NameNode is made active with bin/hdfs haadmin -transitionToActive qzhong, after which HDFS can serve requests.
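In command form (a sketch using the NameNode IDs from hdfs-site.xml above; if automatic failover is already enabled, these manual transitions may need the --forcemanual flag):

# check the HA state of each NameNode
bin/hdfs haadmin -getServiceState qzhong
bin/hdfs haadmin -getServiceState node27
# if both report "standby", promote one of them manually
bin/hdfs haadmin -transitionToActive qzhong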

At this stage the HA setup cannot fail over automatically; that requires the ZooKeeper-related parameters. If the HDFS cluster is still unusable at this point, follow the Apache documentation: http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html
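Per the linked guide, the ZooKeeper side of automatic failover roughly comes down to the following (a sketch, assuming the quorum listed in ha.zookeeper.quorum is already running):

# initialise the HA state znode in ZooKeeper (run once, from one of the NameNode hosts)
bin/hdfs zkfc -formatZK
# start a ZKFailoverController on each NameNode host
# (sbin/start-dfs.sh also starts it automatically once automatic failover is enabled)
sbin/hadoop-daemon.sh start zkfc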

Once everything is configured, failover happens automatically. To test it, stop one NameNode process with sbin/hadoop-daemon.sh --config etc/hadoop stop namenode, then check each NameNode's state with bin/hdfs haadmin -getServiceState [qzhong|node27] and verify that the other NameNode has become active.
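A minimal failover test, using the commands from the text:

# on the currently active NameNode host, stop the NameNode process
sbin/hadoop-daemon.sh --config etc/hadoop stop namenode
# from any node, confirm that the other NameNode has taken over
bin/hdfs haadmin -getServiceState qzhong
bin/hdfs haadmin -getServiceState node27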

 

posted on 2014-11-17 09:29 by linghuchong0605