The DataNode on the slave host starts successfully, but inspection shows it never connects to the NameNode. The DataNode log reports the following:
2019-02-26 13:53:16,307 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: server-21/192.168.0.21:9000
2019-02-26 13:53:22,310 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server-21/192.168.0.21:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-02-26 13:53:23,312 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server-21/192.168.0.21:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-02-26 13:53:24,315 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server-21/192.168.0.21:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-02-26 13:53:25,318 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server-21/192.168.0.21:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-02-26 13:53:26,321 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server-21/192.168.0.21:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-02-26 13:53:27,323 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: server-21/192.168.0.21:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
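Before digging into configuration, it may help to confirm whether port 9000 on the NameNode is reachable at all from the DataNode host. A minimal sketch using bash's `/dev/tcp` pseudo-device (host and port taken from the log above; adjust to your cluster):

```shell
# Probe a TCP port. Uses bash's built-in /dev/tcp redirection, so no
# extra tools beyond coreutils' timeout are required.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Host/port from the log above; "closed" points at a firewall or at a
# NameNode bound to the wrong interface (see Solution 2).
check_port server-21 9000
```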
Solution 1
Delete the state directories under the Hadoop install directory (dfs, logs, tmp), then reformat the NameNode and restart with start-dfs.sh.
The layout under the install directory (dfs, logs, and tmp are the ones to remove):

[root@server-22 hadoop-2.7.7]# ll
total 112
drwxr-xr-x 2 root root   194 Mar 23  2019 bin
drwxr-xr-x 4 root root    30 Mar 23  2019 dfs
drwxr-xr-x 3 root root    20 Mar 23  2019 etc
drwxr-xr-x 2 root root   106 Mar 23  2019 include
drwxr-xr-x 3 root root    20 Mar 23  2019 lib
drwxr-xr-x 2 root root   239 Mar 23  2019 libexec
-rw-r--r-- 1 root root 86424 Feb 26 13:36 LICENSE.txt
drwxr-xr-x 2 root root   165 Feb 26 13:53 logs
-rw-r--r-- 1 root root 14978 Feb 26 13:36 NOTICE.txt
-rw-r--r-- 1 root root  1366 Feb 26 13:36 README.txt
drwxr-xr-x 2 root root  4096 Mar 23  2019 sbin
drwxr-xr-x 4 root root    31 Mar 23  2019 share
drwxr-xr-x 3 root root    26 Feb 26 13:26 tmp
[root@server-22 hadoop-2.7.7]#
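The cleanup above can be sketched as a small helper. This is a hypothetical script assuming the state directories sit directly under the Hadoop home, as in the listing; note that wiping dfs/ destroys all HDFS block and namespace data, so this is only appropriate for a fresh or throwaway cluster.

```shell
# Remove the regenerable state directories under a Hadoop home.
# WARNING: deletes all HDFS data stored under this install.
wipe_hadoop_state() {
  local hadoop_home=$1
  rm -rf "$hadoop_home/dfs" "$hadoop_home/logs" "$hadoop_home/tmp"
}

# Typical sequence on the NameNode (repeat the wipe on every DataNode;
# the path is an example, use your own install location):
#   stop-dfs.sh
#   wipe_hadoop_state /root/hadoop-2.7.7
#   hdfs namenode -format
#   start-dfs.sh
```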
Solution 2
Check which address the NameNode on the master is listening on. As the output below shows, it is bound to 127.0.0.1 only, so no other host can reach it:
[root@server-21 ~]# netstat -an | grep 9000
tcp    0    0 127.0.0.1:9000     0.0.0.0:*          LISTEN
tcp    0    0 127.0.0.1:9000     127.0.0.1:52430    ESTABLISHED
tcp    0    0 127.0.0.1:52430    127.0.0.1:9000     ESTABLISHED
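The NameNode ends up bound to loopback because its RPC endpoint comes from `fs.defaultFS` in core-site.xml, and that hostname was resolving to 127.0.0.1 through /etc/hosts. A plausible core-site.xml fragment consistent with this post; the value `hdfs://server-21:9000` is an assumption based on the hostname and port in the logs, not the author's actual file:

```xml
<!-- core-site.xml: the NameNode binds its RPC port to whatever address
     this hostname resolves to, hence the dependence on /etc/hosts. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://server-21:9000</value>
</property>
```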
Edit /etc/hosts so the master's hostname resolves to its LAN address instead of loopback:
[root@server-21 ~]# cat /etc/hosts
#127.0.0.1 localhost.localdomain localhost4 localhost4.localdomain4 server-21 qjw-01
#::1 qjw-01 localhost localhost.localdomain localhost6 localhost6.localdomain6 server-21
192.168.0.22 server-22
192.168.0.21 server-21 qjw-01
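A quick way to catch this misconfiguration is to scan the hosts file for the NameNode hostname mapped to loopback on an uncommented line. A sketch; it is pure text processing over the hosts-file format, and the hostname is the one from this post:

```shell
# Return success if the given hostname is mapped to 127.0.0.1 on an
# uncommented line of a hosts-format file.
resolves_to_loopback() {
  local hostsfile=$1 name=$2
  grep -Ev '^[[:space:]]*#' "$hostsfile" \
    | grep -E '^127\.0\.0\.1[[:space:]]' \
    | grep -qw "$name"
}

if resolves_to_loopback /etc/hosts server-21; then
  echo "server-21 resolves to loopback: NameNode will bind to 127.0.0.1"
fi
```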
Make the hosts change take effect immediately by restarting the network service:
[root@server-21 ~]# su
[root@server-21 ~]# /etc/init.d/network restart
Start Hadoop with start-dfs.sh.
Check the listening port again; as shown below, it is now normal:
[root@server-21 ~]# netstat -an | grep 9000
tcp    0    0 192.168.0.21:9000     0.0.0.0:*             LISTEN
tcp    0    0 192.168.0.21:48224    192.168.0.21:9000     TIME_WAIT
tcp    0    0 192.168.0.21:9000     192.168.0.22:45798    ESTABLISHED
tcp    0    0 192.168.0.21:9000     192.168.0.21:48196    ESTABLISHED
tcp    0    0 192.168.0.21:48196    192.168.0.21:9000     ESTABLISHED
[root@server-21 ~]#
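For a scriptable version of this check, the listening address can be extracted from netstat output with awk. A sketch, fed a captured line from above so it is self-contained; in practice pipe `netstat -an` into it:

```shell
# Print the local address that the LISTEN socket on port 9000 is bound to.
# netstat -an columns: proto recv-q send-q local-addr foreign-addr state.
bound_addr() {
  awk '$4 ~ /:9000$/ && $6 == "LISTEN" { split($4, a, ":"); print a[1] }'
}

printf 'tcp 0 0 192.168.0.21:9000 0.0.0.0:* LISTEN\n' | bound_addr
# -> 192.168.0.21  (seeing 127.0.0.1 here reproduces the problem above)
```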
The web UI now shows everything is normal.
Reposted from: https://blog.csdn.net/bb23417274/article/details/87933725