Errors when operating on HDFS with Hadoop

This article is reposted from: http://www.aboutyun.com/blog-61-22.html

When operating on HDFS we may run into the following errors.

Error 1: permission denied

Exception in thread "main" org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=, access=WRITE, inode="":root:supergroup:rwxr-xr-x
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
 at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
 at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1216)
 at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:321)
 at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1126)
 at hdfs.hdoopapi.main(hdoopapi.java:19)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hyj, access=WRITE, inode="":root:supergroup:rwxr-xr-x
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:180)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5214)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5188)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2060)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2029)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:817)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

 at org.apache.hadoop.ipc.Client.call(Client.java:1070)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
 at sun.proxy.$Proxy1.mkdirs(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
 at sun.proxy.$Proxy1.mkdirs(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1214)
 ... 3 more

Answer: this error occurs because permission checking is enabled in the HDFS configuration file hdfs-site.xml. To disable the check, change the value of dfs.permissions to false and restart the NameNode. In the configuration below the check is explicitly switched on:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/work/hadoop_tmp</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>true</value>
  </property>
</configuration>

There is another situation that triggers the same error: leaving dfs.permissions out of the configuration entirely, as below, because permission checking is on by default.

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/work/hadoop_tmp</value>
  </property>
</configuration>
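
If you would rather keep permission checking enabled, two alternatives work: grant write access on the target directory (for example with hadoop fs -chmod / -chown run as the superuser), or perform the client operation as a user that already has write access. Below is a minimal sketch of the second approach using UserGroupInformation.doAs; it assumes simple (non-Kerberos) authentication, that the HDFS superuser is named root, and the NameNode address used elsewhere in this post. The class name is made up for illustration.

import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class MkdirAsOwner {
    public static void main(String[] args) throws Exception {
        // Assumption: "root" owns the target inode (root:supergroup:rwxr-xr-x),
        // so running the RPC as root passes the WRITE check.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("root");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(
                        URI.create("hdfs://192.168.159.10:9000"), conf);
                // The mkdirs call now executes as root instead of the local OS user.
                fs.mkdirs(new Path("/dl"));
                fs.close();
                return null;
            }
        });
    }
}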

Error 2: connection refused (port problem)

14/02/24 14:10:45 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 0 time(s).
14/02/24 14:10:47 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 1 time(s).
14/02/24 14:10:49 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 2 time(s).
14/02/24 14:10:51 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 3 time(s).
14/02/24 14:10:53 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 4 time(s).
14/02/24 14:10:55 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 5 time(s).
14/02/24 14:10:57 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 6 time(s).
14/02/24 14:10:59 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 7 time(s).
14/02/24 14:11:01 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 8 time(s).
14/02/24 14:11:03 INFO ipc.Client: Retrying connect to server: aboutyun/192.168.159.10:9000. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to aboutyun/192.168.159.10:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information
 at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
 at org.apache.hadoop.ipc.Client.call(Client.java:1075)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
 at sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
 at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
 at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
 at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
 at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
 at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
 at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
 at hdfs.hdoopapi.getFileSystem(hdoopapi.java:29)
 at hdfs.hdoopapi.main(hdoopapi.java:17)
Caused by: java.net.ConnectException: Connection refused: no further information
 at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
 at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
 at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
 at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
 at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
 at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
 at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
 at org.apache.hadoop.ipc.Client.call(Client.java:1050)
 ... 14 more

The output above also indicates a failure. Many posts suggest modifying configuration files, but this problem usually has one of two causes:

1. There really is a communication problem between the client and the NameNode (wrong host/IP, wrong port, or a firewall in the way).

2. The cluster has not actually been started (this is important, check it first).

Try start-all.sh; that alone may solve your problem. A quick client-side sanity check is sketched below.
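
A minimal sketch of such a check, assuming the Hadoop 1.x-style fs.default.name property and the NameNode host/port shown in the log above; the class name is made up for illustration:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NameNodeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Must match the address the NameNode is actually listening on.
        conf.set("fs.default.name", "hdfs://192.168.159.10:9000");
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://192.168.159.10:9000"), conf);
        // If the NameNode is up and reachable this prints the root listing;
        // otherwise the client loops on "Retrying connect to server".
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

On the server side, jps should show a running NameNode process, and netstat -nltp | grep 9000 should show port 9000 bound to an address reachable from the client (not only 127.0.0.1).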

Error 3: NameNode is in safe mode:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /dl. Name node is in safe mode.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2055)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2029)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:817)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

 at org.apache.hadoop.ipc.Client.call(Client.java:1070)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
 at sun.proxy.$Proxy1.mkdirs(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
 at sun.proxy.$Proxy1.mkdirs(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1214)
 at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:321)
 at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1126)
 at hdfs.hdoopapi.main(hdoopapi.java:21)

Solution: run hadoop dfsadmin -safemode leave to force the NameNode out of safe mode. As the message says, safe mode also ends automatically once enough DataNode blocks have been reported (the 0.9990 threshold above), so the client can instead simply wait, as sketched below.
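
A minimal sketch of waiting for safe mode to end, assuming the Hadoop 1.x API where DistributedFileSystem.setSafeMode with SAFEMODE_GET returns whether safe mode is still on (on Hadoop 2.x the enum lives in HdfsConstants instead); the class name is made up for illustration:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class WaitForSafeModeExit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://192.168.159.10:9000"), conf);
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        // SAFEMODE_GET only queries the state; it does not change it.
        while (dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET)) {
            System.out.println("Name node is in safe mode, waiting...");
            Thread.sleep(5000);
        }
        // Safe to create directories now.
        dfs.close();
    }
}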
