
I. Configuration

1. Add the Secondary node's hostname to the masters file.

*Note: the masters file specifies the host of the secondary namenode, not the namenode; the slaves file specifies the datanodes and tasktrackers.

The namenode itself is specified by fs.default.name in core-site.xml, and the jobtracker by mapred.job.tracker in mapred-site.xml.
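As a minimal sketch (the hostname snn-host and the slave names are placeholders, not from the original post), the files under $HADOOP_HOME/conf might look like this:

# Append the Secondary NameNode's hostname to conf/masters
echo "snn-host" >> $HADOOP_HOME/conf/masters

# For comparison, conf/slaves lists the DataNode/TaskTracker hosts
cat $HADOOP_HOME/conf/slaves
# dn1
# dn2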

2. Modify hdfs-site.xml

<property>  
    <name>dfs.http.address</name>  
    <value>${your-namenode}:50070</value>  
    <description>The Secondary NameNode fetches the fsimage and edits via dfs.http.address.</description>
</property>  
<property>  
    <name>dfs.secondary.http.address</name>  
    <value>${your-secondarynamenode}:50090</value>  
    <description>The NameNode fetches the newest merged fsimage via dfs.secondary.http.address.</description>
</property> 

*Note:

  1. Strictly speaking, dfs.http.address only needs to be set on the secondary node and dfs.secondary.http.address only on the namenode; for ease of management, every machine in the cluster carries the same configuration.
  2. When the default ports are used (namenode:50070, secondary:50090), this configuration can be omitted.

3. Modify core-site.xml

<property>  
    <name>fs.checkpoint.period</name>  
    <value>3600</value>  
    <description>The number of seconds between two periodic checkpoints.</description>  
</property>  
<property>  
    <name>fs.checkpoint.size</name>  
    <value>67108864</value>  
    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.  </description>  
</property>  
<property>  
    <name>fs.checkpoint.dir</name>  
    <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories, then the image is replicated in all of the directories for redundancy.</description>
</property> 

*Note: this configuration only needs to be set on the secondary node; for ease of management, every machine in the cluster carries the same configuration.

4. Restart HDFS and check that it starts correctly

(*Note: this step does not strictly require restarting HDFS; you can start the SecondaryNameNode directly on the secondary node with  sh $HADOOP_HOME/bin/hadoop-daemon.sh start secondarynamenode )

(1) Restart

sh $HADOOP_HOME/bin/stop-dfs.sh

sh $HADOOP_HOME/bin/start-dfs.sh
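After the restart, a quick sanity check on each node is jps (shipped with the JDK); the daemon names below are what Hadoop 1.x registers:

jps  # expect NameNode on the namenode host, SecondaryNameNode on the secondary host, DataNode/TaskTracker on the slaves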

(2) Check the web UIs

http://namenode:50070/  # check the namenode

http://secondarynamenode:50090/  # check the secondary
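These can also be checked from a script; a minimal sketch (hostnames are the placeholders above, and curl is assumed to be available):

curl -sf http://namenode:50070/ > /dev/null && echo "namenode UI up"
curl -sf http://secondarynamenode:50090/ > /dev/null && echo "secondary UI up"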

(3) Check the directories

Check dfs.name.dir on the namenode (here /data1/hadoop/name):

current
image
previous.checkpoint
in_use.lock  # check mainly whether this file exists; its timestamp is the namenode start time

Check fs.checkpoint.dir on the secondary (${hadoop.tmp.dir}/dfs/namesecondary):

current
image
in_use.lock  # check mainly whether this file exists; its timestamp is the secondarynamenode start time
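A minimal shell sketch of this check (the namenode path is the example above; the secondary path assumes the default hadoop.tmp.dir of /tmp/hadoop-$USER, so substitute your actual value):

# On the namenode
ls /data1/hadoop/name
stat -c '%y %n' /data1/hadoop/name/in_use.lock  # timestamp ~ namenode start time

# On the secondary
ls /tmp/hadoop-$USER/dfs/namesecondary
stat -c '%y %n' /tmp/hadoop-$USER/dfs/namesecondary/in_use.lock  # timestamp ~ secondarynamenode start time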

(4) Check that checkpointing works

For easier testing, temporarily lower the parameters to fs.checkpoint.period=60 and fs.checkpoint.size=10240.

Perform some file create/delete operations on HDFS and watch how ${dfs.name.dir}/current/edits and ${fs.checkpoint.dir}/current/edits change; a rough sketch follows.
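A rough test sequence (file names and local paths are illustrative, and the lowered test parameters above are assumed to be in effect):

# Generate some namespace operations so the edit log grows
hadoop fs -mkdir /tmp/ckpt-test
hadoop fs -put /etc/hosts /tmp/ckpt-test/hosts
hadoop fs -rmr /tmp/ckpt-test

# On the namenode: edits should be rolled and shrink after a checkpoint
ls -l /data1/hadoop/name/current/edits

# On the secondary: a fresh fsimage should appear within ~fs.checkpoint.period seconds
ls -l /tmp/hadoop-$USER/dfs/namesecondary/current/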
