[Hive] Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown
Preface
After switching Hive's default execution engine from MapReduce to Tez, the error "TezSession has already shutdown" started occurring frequently.
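For reference, the engine switch is usually made in hive-site.xml (or per session at the Hive CLI with set hive.execution.engine=tez;). A minimal sketch, assuming the Tez libraries and tez-site.xml are already in place:
<property>
    <name>hive.execution.engine</name>
    <value>tez</value>
</property>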
Error Message
Exception in thread "main" java.lang.RuntimeException: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1589331609996_0001 failed 2 times due to AM Container for appattempt_1589331609996_0001_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://hadoop112:8088/cluster/app/application_1589331609996_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=3940,containerID=container_1589331609996_0001_02_000001] is running beyond virtual memory limits. Current usage: 155.1 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1589331609996_0001_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 3940 3939 3940 3940 (bash) 0 0 115838976 359 /bin/bash -c /opt/module/jdk1.8.0_144/bin/java -Xmx819m -Djava.io.tmpdir=/opt/module/hadoop-2.7.2/data/tmp/nm-local-dir/usercache/ssrs/appcache/application_1589331609996_0001/container_1589331609996_0001_02_000001/tmp -server -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=/opt/module/hadoop-2.7.2/logs/userlogs/application_1589331609996_0001/container_1589331609996_0001_02_000001 -Dtez.root.logger=INFO,CLA -Dsun.nio.ch.bugLevel='' org.apache.tez.dag.app.DAGAppMaster --session 1>/opt/module/hadoop-2.7.2/logs/userlogs/application_1589331609996_0001/container_1589331609996_0001_02_000001/stdout 2>/opt/module/hadoop-2.7.2/logs/userlogs/application_1589331609996_0001/container_1589331609996_0001_02_000001/stderr
|- 3995 3940 3940 3940 (java) 168 124 2657517568 39346 /opt/module/jdk1.8.0_144/bin/java -Xmx819m -Djava.io.tmpdir=/opt/module/hadoop-2.7.2/data/tmp/nm-local-dir/usercache/ssrs/appcache/application_1589331609996_0001/container_1589331609996_0001_02_000001/tmp -server -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC -Dlog4j.configuratorClass=org.apache.tez.common.TezLog4jConfigurator -Dlog4j.configuration=tez-container-log4j.properties -Dyarn.app.container.log.dir=/opt/module/hadoop-2.7.2/logs/userlogs/application_1589331609996_0001/container_1589331609996_0001_02_000001 -Dtez.root.logger=INFO,CLA -Dsun.nio.ch.bugLevel= org.apache.tez.dag.app.DAGAppMaster --session
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:535)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1589331609996_0001 failed 2 times due to AM Container for appattempt_1589331609996_0001_000002 exited with exitCode: -103
Cause
This error is caused by the way YARN enforces virtual memory limits. In the example above, the job requested 1 GB of physical memory for the container; YARN multiplies that value by a ratio (yarn.nodemanager.vmem-pmem-ratio, 2.1 by default) to derive the allowed virtual memory. When the process's actual virtual memory usage exceeds that computed limit, YARN kills the container and the error above is reported.
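Plugging in the numbers from the log above shows why the container was killed:
requested container memory : 1 GB (1024 MB)
virtual memory limit       : 1024 MB × 2.1 ≈ 2.1 GB
actual virtual memory used : 2.6 GB, which exceeds 2.1 GB → container killed (exit codes -103 / 143)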
Solutions
Option 1:
1. Disable YARN's virtual memory check by adding the following configuration to yarn-site.xml under the Hadoop configuration directory.
[ssrs@hadoop111 hadoop]$ pwd
/opt/module/hadoop-2.7.2/etc/hadoop
[ssrs@hadoop111 hadoop]$ vi yarn-site.xml
<!-- Disable the virtual memory check -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
2. Distribute the updated yarn-site.xml to all the other servers in the cluster and restart YARN, for example as sketched below.
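A minimal sketch of step 2, assuming passwordless SSH and a hypothetical host list (hadoop112 hosts the ResourceManager, per the tracking URL in the log above; hadoop113 is purely illustrative). Substitute your own hosts or distribution script:
# Copy the edited file to the other nodes
[ssrs@hadoop111 hadoop]$ for host in hadoop112 hadoop113; do scp yarn-site.xml $host:/opt/module/hadoop-2.7.2/etc/hadoop/; done
# Restart YARN from the ResourceManager node
[ssrs@hadoop112 ~]$ /opt/module/hadoop-2.7.2/sbin/stop-yarn.sh
[ssrs@hadoop112 ~]$ /opt/module/hadoop-2.7.2/sbin/start-yarn.sh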
Option 2:
1. Raise YARN's virtual memory limits by adding the following configuration to yarn-site.xml under the Hadoop configuration directory.
[ssrs@hadoop111 hadoop]$ pwd
/opt/module/hadoop-2.7.2/etc/hadoop
[ssrs@hadoop111 hadoop]$ vi yarn-site.xml
<!-- Raise the minimum container memory allocation -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <!-- Default is 1024; pick a value that suits your servers' memory -->
    <value>2048</value>
</property>
<!-- Raise the virtual-to-physical memory ratio -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <!-- Default is 2.1; increase it as needed -->
    <value>4</value>
</property>
2. Distribute the updated yarn-site.xml to all the other servers in the cluster and restart YARN; a quick way to verify the new values is sketched below.
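With these settings a container granted 2 GB may use up to 2048 MB × 4 = 8 GB of virtual memory, comfortably above the 2.6 GB the Tez AM needed in the log above. As a rough check that the daemons picked up the new values, you can query the /conf endpoint of the ResourceManager (hadoop112:8088, per the tracking URL in the log) or of a NodeManager on port 8042:
[ssrs@hadoop111 hadoop]$ curl -s http://hadoop112:8088/conf | grep -E 'vmem-pmem-ratio|minimum-allocation-mb'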
Notes:
Option 1 is a once-and-for-all fix: quick and convenient.
Option 2 may need several rounds of tuning, depending on the jobs you run, before you settle on values that suit your cluster.
Whichever option you choose, remember to sync yarn-site.xml to every server in the cluster!
Author: ShadowFiend
Source: http://www.cnblogs.com/ShadowFiend/