Hadoop cluster nodes are down and the HBase data source is corrupted — what to do?

Today five cluster nodes went down at once, and HBase's data blocks were corrupted as well.

Hadoop log:

.0.15:36642 dest: /ip:50010
2014-08-26 15:01:14,918 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:713)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:142)
        at java.lang.Thread.run(Thread.java:744)
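The "unable to create new native thread" error usually means the DataNode has hit the operating system's per-user process/thread limit (nproc), not that the JVM heap is exhausted. A minimal check-and-raise sketch, assuming the DataNode runs as the hadoop user (the limit values below are only illustrative):

su - hadoop -c 'ulimit -u'      # show the current per-user process/thread limit
# then raise it in /etc/security/limits.conf and restart the DataNode, e.g.:
# hadoop  soft  nproc  32768
# hadoop  hard  nproc  65536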

DataNode log:

2014-08-26 14:57:21,701 WARN org.mortbay.log: Failed to read file: /home/hadoop/hadoop-2.0.0-cdh4.5.0/share/hadoop/mapreduce/metrics-core-2.1.2.jar
java.util.zip.ZipException: zip file is empty
        at java.util.zip.ZipFile.open(Native Method)
        at java.util.zip.ZipFile.<init>(ZipFile.java:215)
        at java.util.zip.ZipFile.<init>(ZipFile.java:145)
        at java.util.jar.JarFile.<init>(JarFile.java:153)
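The "zip file is empty" warning means that metrics-core-2.1.2.jar on this node has been truncated to zero bytes, most likely when the node crashed. A simple recovery sketch, assuming another node still has an intact copy (healthy-node is a placeholder hostname):

scp healthy-node:/home/hadoop/hadoop-2.0.0-cdh4.5.0/share/hadoop/mapreduce/metrics-core-2.1.2.jar /home/hadoop/hadoop-2.0.0-cdh4.5.0/share/hadoop/mapreduce/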

 

HBase log:

2014-08-26 03:00:16,048 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /ip:50010 for block, add to deadNodes and continue. java.io.IOException: Connection reset by peer
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
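The HBase client keeps adding the crashed DataNodes to its dead-node list, so once the nodes are back up (or written off) the actual damage can be surveyed with the standard tools; a sketch, run on the NameNode and on an HBase client node respectively:

hdfs fsck / -list-corruptfileblocks     # list HDFS files with corrupt or missing blocks
hbase hbck                              # report HBase table/region consistency problems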

 

posted @ 2014-12-31 14:44 zhanggl