Hadoop 2.2 cluster setup: only one datanode will start
Following the tutorial at http://cn.soulmachine.me/blog/20140205/, I kept hitting the following error:
2014-04-13 23:53:45,450 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/local/var/hadoop/hdfs/datanode/in_use.lock acquired by nodename 19771@node-10-00.example.com
2014-04-13 23:53:45,450 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /home/hadoop/local/var/hadoop/hdfs/datanode. The directory is already locked
2014-04-13 23:53:45,451 WARN org.apache.hadoop.hdfs.server.common.Storage: Ignoring storage directory /home/hadoop/local/var/hadoop/hdfs/datanode due to an exception
java.io.IOException: Cannot lock storage /home/hadoop/local/var/hadoop/hdfs/datanode. The directory is already locked
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:637)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:460)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:152)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
In other words, every startup brought up only a single datanode. At first I wondered what other process was accessing the directory and holding it locked. Then I realized the machines were a Red Hat cluster with Red Hat GFS already configured, and I had placed the data directory under my home directory, which lives on the shared filesystem. Once the first datanode started and acquired the in_use.lock, every other node saw the same shared directory as already locked and failed with the error above.
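To verify that a data directory actually sits on shared storage, checking the filesystem type is enough; the path below mirrors the one from the log, so adjust it to your own layout:

    df -T /home/hadoop/local/var/hadoop/hdfs/datanode

If the Type column shows gfs or gfs2 rather than a local filesystem such as ext4, all datanodes are locking the same physical directory.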
Moving the data directory to a node-local path under /var solved the problem.
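For reference, a minimal hdfs-site.xml sketch of the fix; the exact path here is my choice, and any node-local disk location outside the GFS mount should work the same way:

    <configuration>
      <property>
        <name>dfs.datanode.data.dir</name>
        <!-- Must be node-local storage: each DataNode takes an exclusive
             in_use.lock here, so a shared GFS/NFS path breaks all but
             the first node to start. -->
        <value>file:///var/hadoop/hdfs/datanode</value>
      </property>
    </configuration>

After changing the setting, create the directory with the correct ownership on every node and restart the datanodes.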