Hadoop 2.2.0 cluster deployment pitfalls
Note that fs.defaultFS is the new property name in 2.2.0, replacing the old fs.default.name.
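For reference, a minimal core-site.xml sketch using the new property name (the master hostname and port here are assumptions; adjust to your cluster):

```xml
<configuration>
  <property>
    <!-- New name in 2.x; the deprecated equivalent was fs.default.name -->
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```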
Hadoop 2.2.0 cluster startup commands:
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh
bin/hdfs dfs -put input in    (note: plain "bin/hdfs -put input in" fails; the dfs subcommand is required)
bin/hdfs dfs -mkdir /in
bin/hdfs dfs -ls /
bin/hdfs dfs -put input/ /in
bin/hdfs dfs -ls /
bin/hdfs dfs -ls /in
bin/hdfs dfs -rmr /in/input
bin/hdfs dfs -mkdir /in
bin/hdfs dfs -put input/* /in
bin/hdfs dfs -ls /in
Configured according to this guide: http://hi.baidu.com/evenque/item/a91824a33556343d030a4de7
http://blog.csdn.net/shirdrn/article/details/6562292
Hostname mapping failure: java.net.UnknownHostException: localhost.localdomain: localhost.localdomain
The final fix was to edit:
/etc/sysconfig/network
then restart the network service:
/etc/rc.d/init.d/network restart
then delete the /hadoop/namenode logs.
Run the above commands on every machine in the cluster.
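A sketch of the change, assuming the master's hostname is master and the addresses below are placeholders for your own:

```
# /etc/sysconfig/network -- set a real hostname instead of localhost.localdomain
NETWORKING=yes
HOSTNAME=master

# /etc/hosts -- map every cluster hostname to its address
192.168.1.100  master
192.168.1.101  node
192.168.1.102  node1
```

Once each hostname resolves, the UnknownHostException goes away.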
Start the namenode and resourcemanager on the master:
[wyp@wyp hadoop-2.2.0]$ sbin/hadoop-daemon.sh start namenode
[wyp@wyp hadoop-2.2.0]$ sbin/yarn-daemon.sh start resourcemanager
Start the datanode and nodemanager on node and node1:
[wyp@wyp hadoop-2.2.0]$ sbin/hadoop-daemon.sh start datanode
[wyp@wyp hadoop-2.2.0]$ sbin/yarn-daemon.sh start nodemanager
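As an alternative to starting each worker daemon by hand, the worker hostnames can be listed in etc/hadoop/slaves on the master; sbin/start-dfs.sh and sbin/start-yarn.sh then start the datanodes and nodemanagers remotely (this assumes passwordless SSH from the master to the workers):

```
# etc/hadoop/slaves -- one worker hostname per line
node
node1
```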