Hive error: Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)

The error occurred while executing insert into table video_orc select * from video_ori;

Checking the Hive log shows the following details:

2020-10-07T09:33:11,117  INFO [HiveServer2-Background-Pool: Thread-241] ql.Driver: Concurrency mode is disabled, not creating a lock manager
2020-10-07T09:33:11,119 ERROR [HiveServer2-Background-Pool: Thread-241] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
        at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335) ~[hive-service-3.1.2.jar:3.1.2]
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226) ~[hive-service-3.1.2.jar:3.1.2]
        at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) ~[hive-service-3.1.2.jar:3.1.2]
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) ~[hive-service-3.1.2.jar:3.1.2]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_212]
        at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_212]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) ~[hadoop-common-3.1.3.jar:?]
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) ~[hive-service-3.1.2.jar:3.1.2]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_212]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_212]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_212]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_212]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
2020-10-07T09:33:11,120  INFO [HiveServer2-Handler-Pool: Thread-52] conf.HiveConf: Using the default value passed in for log id: 544f2b3d-7a09-44a1-bf44-f25c7b2ad6e4
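
For reference, the log above is the HiveServer2 log. On a default install (an assumption here; the location is controlled by hive.log.dir in hive-log4j2.properties) it ends up under /tmp/<user>/hive.log, so something like the following shows the tail of it:

tail -n 200 /tmp/$(whoami)/hive.log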

Analysis:

After experimenting with various approaches, I could not find a more specific error message and could not determine the actual cause.

By chance I came across a post suggesting running the same SQL in the Hive shell (I had originally been running it under beeline -u jdbc:hive2://Linux201:10000 -n zls, where the error information returned is incomplete),

i.e. executing it directly at the hive (default) prompt.
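
A rough sketch of the two ways of running the statement (the JDBC URL and the zls user are the ones quoted above; these are illustrative command lines, not copied from the original post):

# under beeline the container-level diagnostics came back incomplete
beeline -u jdbc:hive2://Linux201:10000 -n zls -e "insert into table video_orc select * from video_ori;"

# the same statement run through the hive CLI printed the full MapReduce diagnostics
hive -e "insert into table video_orc select * from video_ori;"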

This surfaced a "running beyond virtual memory" error, with the following message:

Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.…………

Cause: the container running on the worker node tried to use more memory than it was allowed, so the NodeManager killed it. The virtual-memory ceiling is the container's physical memory allocation multiplied by yarn.nodemanager.vmem-pmem-ratio (2.1 by default), which is exactly where the "2.1 GB of virtual memory" limit in the message comes from.

Fix: give the tasks more memory.

In Hadoop's configuration file mapred-site.xml, set the memory for map and reduce tasks as follows (adjust the values to your own machines' memory and workload; the -Xmx heap in the *.java.opts settings should stay somewhat below the corresponding *.memory.mb container size, as in the values below):

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024M</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560M</value>
</property>

Stop the cluster, distribute the updated config to every node, and start the cluster again; a rough version of those commands is sketched below.
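
A minimal sketch of that step, assuming the stock YARN start/stop scripts; worker1 and worker2 are hypothetical hostnames, not hosts from this post, so adapt the host list and HADOOP_HOME to your cluster:

# stop YARN before rolling out the new memory limits
stop-yarn.sh
# push the updated mapred-site.xml to every node (worker1/worker2 are placeholders)
for host in worker1 worker2; do
  scp $HADOOP_HOME/etc/hadoop/mapred-site.xml $host:$HADOOP_HOME/etc/hadoop/
done
# start YARN again once the config has been distributed
start-yarn.sh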

Ran the insert again: 🆗

Summary: not much to summarize.
