Setting up a 3-node Hadoop cluster: what to do when Live Nodes shows 0
When I set up my 3-node Hadoop cluster, I followed the basic installation steps and configured the following files (a minimal sketch of core-site.xml and slaves follows the list):
- core-site.xml
- hadoop-env.sh
- hdfs-site.xml
- yarn-env.sh
- yarn-site.xml
- slaves
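For reference, here is a minimal sketch of what core-site.xml and slaves can look like for a cluster laid out like this one; the NameNode port and the hadoop.tmp.dir path are assumptions on my part, not the exact values from this setup.
core-site.xml (points every node at the NameNode on spark1):
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://spark1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>
slaves (one DataNode hostname per line):
spark1
spark2
spark3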
Next, format the NameNode:
[root@spark1 hadoop]# hdfs namenode -format
Then start the HDFS cluster:
[root@spark1 hadoop]# start-dfs.sh
Check with jps whether the daemons came up on each node.
spark1:
[root@spark1 hadoop]# jps
5575 SecondaryNameNode
5722 Jps
5443 DataNode
5336 NameNode
spark2:
[root@spark2 hadoop]# jps
1859 Jps
1795 DataNode
spark3:
[root@spark3 ~]# jps
1748 DataNode
1812 Jps
My core configuration files were fine, but when I checked the NameNode web UI on port 50070, Live Nodes showed only 1, and that node was spark1!
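A quick way to narrow this down is to look at the DataNode log on one of the nodes that is missing from the web UI and search for connection or hostname-resolution errors. Assuming the default log directory under $HADOOP_HOME/logs (the user and hostname in the file name depend on your environment):
[root@spark2 hadoop]# tail -n 50 $HADOOP_HOME/logs/hadoop-root-datanode-spark2.log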
After troubleshooting, the problem turned out to come from how /etc/hosts had been configured earlier on each node:
spark1:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.111 spark1
spark2:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.112 spark2
spark3:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.113 spark3
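With these files, spark1 has no entries for spark2 or spark3, so the nodes cannot resolve each other's hostnames. A quick check from the NameNode makes this visible (the error text below is illustrative and varies with the ping version):
[root@spark1 hadoop]# ping -c 1 spark2
ping: unknown host spark2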
Now give all three nodes the same complete /etc/hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.113 spark3
192.168.30.111 spark1
192.168.30.112 spark2
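After updating /etc/hosts on all three machines, restart HDFS from the NameNode so the DataNodes re-register with it (stop-dfs.sh and start-dfs.sh are the standard scripts that ship with Hadoop):
[root@spark1 hadoop]# stop-dfs.sh
[root@spark1 hadoop]# start-dfs.sh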
OK, the problem is solved.
To verify, run:
[root@spark1 hadoop]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 55609774080 (51.79 GB)
Present Capacity: 47725793280 (44.45 GB)
DFS Remaining: 47725719552 (44.45 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)
Live datanodes:
Name: 192.168.30.111:50010 (spark1)
Hostname: spark1
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2628579328 (2.45 GB)
DFS Remaining: 15907987456 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:06 CST 2017
Name: 192.168.30.113:50010 (spark3)
Hostname: spark3
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2627059712 (2.45 GB)
DFS Remaining: 15909507072 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.83%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:05 CST 2017
Name: 192.168.30.112:50010 (spark2)
Hostname: spark2
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2628341760 (2.45 GB)
DFS Remaining: 15908225024 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:05 CST 2017
From the report above, you can see that 3 DataNodes are connected:
- 192.168.30.111
- 192.168.30.112
- 192.168.30.113
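As the deprecation warning above points out, the same report can be produced with the hdfs command instead:
[root@spark1 hadoop]# hdfs dfsadmin -report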