Pitfalls encountered while installing CDH 5.12
Environment: CentOS 7 + CDH 5.12
Problem description:
We rebuilt a CDH cluster on machines that had previously hosted CDH, without deleting the old data. As a result, two DataNodes in the newly built cluster failed to start.
Special circumstance:
I had set up CDH on these two machines before, so stale metadata from the old cluster was left behind.
1. Locate the data directory of the previously deployed DataNode, /dfs/dn/current, and inspect the VERSION file.
e.g. (DataNode that fails to start):
    storageID=DS-64482928-918c-48a6-b78c-167f4cd90218
    clusterID=cluster10
    cTime=0
    datanodeUuid=1d881f40-21e5-4e5b-91ba-f6ade5223eb3
    storageType=DATA_NODE
    layoutVersion=-56
e.g. (DataNode that starts normally):
    storageID=DS-64482928-918c-48a6-b78c-167f4cd90218
    clusterID=cluster8
    cTime=0
    datanodeUuid=1d881f40-21e5-4e5b-91ba-f6ade5223eb3
    storageType=DATA_NODE
    layoutVersion=-56
Solution:
Simply change clusterID=cluster10 to clusterID=cluster8 in the failing DataNode's VERSION file, so it matches the cluster ID of the new cluster.
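The edit above can be scripted with sed. A minimal sketch, run here against a scratch copy of the VERSION file (the real file lives under /dfs/dn/current; stop the DataNode before editing it, and the target clusterID value must match your own cluster's, not necessarily cluster8):

```shell
# Build a scratch copy mimicking the failing DataNode's VERSION file
mkdir -p /tmp/dn-demo/current
cat > /tmp/dn-demo/current/VERSION <<'EOF'
storageID=DS-64482928-918c-48a6-b78c-167f4cd90218
clusterID=cluster10
cTime=0
datanodeUuid=1d881f40-21e5-4e5b-91ba-f6ade5223eb3
storageType=DATA_NODE
layoutVersion=-56
EOF

# Rewrite the clusterID line to the value the new cluster expects
sed -i 's/^clusterID=.*/clusterID=cluster8/' /tmp/dn-demo/current/VERSION

# Confirm the change took effect
grep '^clusterID=' /tmp/dn-demo/current/VERSION
```

On the real host, point the same sed at every configured DataNode data directory, then start the DataNode role again from Cloudera Manager.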
2. Problem description: the HDFS NFS Gateway role instance fails to start; causes and fixes below.
Error 1:
Cannot connect to port 111. No portmap or rpcbind service is running on this host. Please start portmap or rpcbind service before attempting to start the NFS Gateway role on this host.
Solution: install and start rpcbind:
    yum -y install nfs-utils
    yum -y install rpcbind
    systemctl start rpcbind
    systemctl status rpcbind
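After starting the service, it is worth confirming that something is actually answering on port 111 and that rpcbind will come back after a reboot. A sketch assuming a systemd-based CentOS 7 host with nfs-utils/rpcbind installed as above:

```shell
# Keep rpcbind across reboots, not just for this session
systemctl enable rpcbind
systemctl start rpcbind

# The portmapper should be registered on port 111 (tcp and udp)
rpcinfo -p localhost | grep portmapper
```

If the grep prints portmapper entries on port 111, the NFS Gateway's precondition is satisfied and the role can be restarted from Cloudera Manager.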
Error 2:
NFS service is already running on this host. Please stop the NFS service — this means the operating system's own NFS service is running, which conflicts with the CDH NFS Gateway.
Solution: stop the system NFS service:
    service nfs stop
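If the kernel NFS server is enabled at boot, stopping it once is not enough: it will grab the NFS ports again after the next reboot, before the CDH NFS Gateway starts. A sketch, assuming the standard nfs-utils systemd unit name nfs-server on CentOS 7:

```shell
# Stop the system NFS server now, and prevent it from starting at boot
systemctl stop nfs-server
systemctl disable nfs-server

# Expect "disabled" here, so the conflict cannot recur after a reboot
systemctl is-enabled nfs-server
```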
3. Kafka: Configured broker.id 2 doesn't match stored broker.id 0 in meta.properties — i.e. the Kafka broker cannot start.
Error description:
2020-11-18 11:16:43,592 FATAL kafka.server.KafkaServer: Fatal error during KafkaServer startup. Prepare to shutdown
kafka.common.InconsistentBrokerIdException: Configured broker.id 98 doesn't match stored broker.id 102 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
        at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:650)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
        at kafka.Kafka$.main(Kafka.scala:67)
        at com.cloudera.kafka.wrap.Kafka$.main(Kafka.scala:76)
        at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
2020-11-18 11:16:43,593 INFO kafka.server.KafkaServer: shutting down
Solution: the configured and stored broker.id values disagree; edit broker.id in meta.properties so it matches the configured value.
    cd /var/local/kafka/data
    $ cat meta.properties
    #
    #Mon Nov 16 13:20:17 CST 2020
    version=0
    broker.id=98
    #broker.id=102
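This fix can also be scripted the same way as the VERSION edit. A minimal sketch on a scratch copy of meta.properties (the real file sits in each directory listed in log.dirs — /var/local/kafka/data in the post — and the broker must be stopped before editing; 98 is the configured id from the post's log):

```shell
# Scratch copy mimicking the stored meta.properties with the stale id
mkdir -p /tmp/kafka-demo/data
cat > /tmp/kafka-demo/data/meta.properties <<'EOF'
#Mon Nov 16 13:20:17 CST 2020
version=0
broker.id=102
EOF

# The broker.id Cloudera Manager configured for this broker
configured_id=98

# Force the stored id to agree with the configured one
sed -i "s/^broker.id=.*/broker.id=${configured_id}/" /tmp/kafka-demo/data/meta.properties
grep '^broker.id=' /tmp/kafka-demo/data/meta.properties
```

If the broker has more than one log.dirs entry, every directory's meta.properties must be updated, since Kafka refuses to start on any mismatch.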
4. Error description:
The HDFS health test "whether HDFS has too many under-replicated blocks" reports bad health.
Solution:
1. In the HDFS configuration, search for dfs.replication and lower the replication factor to 2.
2. Then, in a shell, re-apply the factor to existing files:
    su - hdfs
    hadoop fs -setrep -R 2 /
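After re-applying the factor, fsck can confirm whether any under-replicated blocks remain. A sketch using standard HDFS tooling (run on a cluster node as the hdfs user):

```shell
# Re-apply the new replication factor to everything already in HDFS
su - hdfs -c 'hadoop fs -setrep -R 2 /'

# fsck summarizes block health; the under-replicated count should drop to 0
su - hdfs -c 'hdfs fsck / | grep -i "Under-replicated blocks"'
```

Note that lowering dfs.replication in the configuration only affects files written afterwards, which is why the explicit setrep pass over / is needed.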
This post is from cnblogs. Author: 小白啊小白,Fighting. Please credit the original link when reposting: https://www.cnblogs.com/ywjfx/p/13962955.html