Handling NameNode Out-of-Memory and DataNode Request Timeout Exceptions
Background
During the Spring Festival holiday, the monitoring system kept sending data-anomaly alerts. I hurried onto the jump server to check the state of each service and found that the DataNodes on the second and third worker nodes had both gone offline. Reading through the DataNode and NameNode logs revealed the cause. This post records that nerve-racking troubleshooting session for reference.
Problem Description
The NameNode master reported an out-of-memory error at runtime. An excerpt from its log:
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.Long.valueOf(Long.java:577)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$StorageBlockReportProto.<init>(DatanodeProtocolProtos.java:17327)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$StorageBlockReportProto.<init>(DatanodeProtocolProtos.java:17250)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$StorageBlockReportProto$1.parsePartialFrom(DatanodeProtocolProtos.java:17381)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$StorageBlockReportProto$1.parsePartialFrom(DatanodeProtocolProtos.java:17376)
    at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
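"GC overhead limit exceeded" means the JVM is spending almost all of its time in garbage collection while recovering very little heap. You can confirm this live on the NameNode host with jstat; a minimal sketch (the pid placeholder and the 1-second/10-sample interval are just examples):

# Find the NameNode JVM's process id
jps | grep NameNode

# Sample GC statistics 10 times at 1-second intervals; an old generation (O)
# stuck near 100% with FGC/FGCT climbing rapidly confirms GC thrash
jstat -gcutil <namenode-pid> 1000 10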
Meanwhile, each DataNode reported a socket timeout when connecting to the NameNode master:
INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-029006-xxx
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.net.SocketTimeoutException: Call From xxx/xxx to xxx:xxx failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/xxx:xxx remote=xxx/xxx]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy13.sendHeartbeat(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:153)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:553)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:653)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:823)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/xxx:xxx remote=xxx/xxx]
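The two symptoms share one root cause: the stack trace shows the timeout occurring in sendHeartbeat, and a NameNode stuck in back-to-back full GC cannot answer heartbeat RPCs within the 60-second window, so the DataNodes eventually drop off. To rule out an actual network fault, you can check from a DataNode host that the NameNode RPC port still accepts connections; a quick sketch with netcat (8020 is a common default RPC port, substitute whatever your fs.defaultFS uses):

# If the TCP connection succeeds but heartbeats still time out,
# the NameNode process is stalled (e.g. in GC) rather than unreachable
nc -vz namenode-host 8020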
Solution
Adjust the memory configuration of the Hadoop daemons by updating the hadoop-env.sh file.
The hadoop-env.sh file is located at:
$HADOOP_HOME/etc/hadoop/hadoop-env.sh
Hadoop assigns a single, uniform amount of memory to all of its daemons (namenode, secondarynamenode, jobtracker, datanode, tasktracker), set in hadoop-env.sh via the HADOOP_HEAPSIZE parameter, which defaults to 1000 MB.
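Raising that blanket default looks like this in hadoop-env.sh (the value is in MB; 2000 here is only an illustration):

# Applies to every Hadoop daemon unless a per-daemon *_OPTS variable overrides it
export HADOOP_HEAPSIZE=2000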
In most cases this one-size-fits-all value is not appropriate. For the NameNode, for example, 1000 MB of memory is only enough to hold block references for a few million files (as a rule of thumb, each file, directory, and block object costs on the order of 150 bytes of heap). To size the NameNode's memory independently, set HADOOP_NAMENODE_OPTS; likewise, HADOOP_SECONDARYNAMENODE_OPTS sets the SecondaryNameNode's memory, which should be kept in line with the NameNode's. HADOOP_DATANODE_OPTS, HADOOP_BALANCER_OPTS, and HADOOP_JOBTRACKER_OPTS are also available.
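To see which -Xmx value each running daemon actually picked up, you can inspect the JVM arguments directly; a minimal sketch:

# Prints each Java process along with the flags it was launched with;
# look for -Xmx in the NameNode and SecondaryNameNode lines
jps -v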
For the problem described above, we need to raise the memory of the NameNode and the SecondaryNameNode: add -Xmx2048m (2048 MB, as a reference value) to HADOOP_NAMENODE_OPTS, and add the same -Xmx2048m to HADOOP_SECONDARYNAMENODE_OPTS. Adjust the value to your actual data volume, going higher as the metadata grows, while staying within the server's physical memory. The settings we applied:
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xmx2048m $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx2048m $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xmx2048m $HADOOP_SECONDARYNAMENODE_OPTS"
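The new heap settings only take effect after the HDFS daemons are restarted, after which the offline DataNodes should re-register. A sketch assuming the standard sbin scripts and a maintenance window for the restart:

# Restart HDFS so the new -Xmx values are picked up
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# All DataNodes should now be listed as live again
hdfs dfsadmin -report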