Troubleshooting Notes (45): Intermittent UDF Errors in Hive Jobs
Background
A Hive SQL job registers a temporary UDF before running:
add jar hdfs:///user/hive/lib/tools-1.0.jar;
create temporary function decode as 'com.test.etl.Decoder';
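For context, a class like com.test.etl.Decoder typically implements Hive's classic UDF API. The sketch below is hypothetical: apart from the package and class name taken from the statement above, the method body and types are assumptions, since the actual logic inside tools-1.0.jar is not shown in this post.

package com.test.etl;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Hypothetical sketch of the registered UDF; the real decoding logic
// in tools-1.0.jar is not part of this post.
public class Decoder extends UDF {
    // Hive's old-style UDF API resolves evaluate() by reflection.
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        // Placeholder transformation for illustration only.
        return new Text(input.toString().trim());
    }
}

Once registered, the function is called like a built-in, e.g. select decode(col) from some_table;.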
A scheduled job using this UDF fails occasionally and seemingly at random, with output like the following:
INFO : Query ID = hive_20211026010225_545899e7-7afa-4b5c-b7db-fd71565a89c6
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Number of reduce tasks not specified. Estimated from input data size: 197
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=
INFO : Cleaning up the staging area /user/yarn/hive/.staging/job_1629262398375_145140
ERROR : Job Submission failed with exception 'java.io.FileNotFoundException(File file:/disk1/hive/download/tools-1.0.jar does not exist)'
java.io.FileNotFoundException: File file:/disk1/hive/download/tools-1.0.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:378)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:329)
at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:680)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadLibJars(JobResourceUploader.java:313)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:205)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1731)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1731)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:441)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:146)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2250)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1893)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1613)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1332)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1327)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:256)
at org.apache.hive.service.cli.operation.SQLOperation.access$600(SQLOperation.java:92)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:345)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1731)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:357)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. File file:/disk1/hive/download/tools-1.0.jar does not exist
INFO : Completed executing command(queryId=hive_20211026010225_545899e7-7afa-4b5c-b7db-fd71565a89c6); Time taken: 0.133 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. File file:/disk1/hive/download/tools-1.0.jar does not exist (state=08S01,code=1)
Closing: 0: jdbc:hive2://master:10000
Process exited with an error: 1 (Exit value: 1)
Diagnosis
The failures are intermittent with no obvious pattern. The directory named in the error message is Hive's local download directory for added resources, configured as:
hive.downloaded.resources.dir=/disk1/hive/download
An inspection found this directory missing on many nodes in the cluster, so the first suspicion was a provisioning problem: some servers had not been initialized correctly.
Further checks, however, showed that on the very same node the directory sometimes existed and sometimes did not, which ruled that hypothesis out.
The default value of this setting in Hive is:
hive.downloaded.resources.dir=${system:java.io.tmpdir}/${hive.session.id}_resources
So the directory is session-scoped: Hive creates it per session and deletes it when the session closes. If the configured path omits the session id, all sessions share one directory, and a closing session deletes jars that other, still-running sessions have downloaded but not yet used. The definition in HiveConf shows the session id is part of the default:
DOWNLOADED_RESOURCES_DIR("hive.downloaded.resources.dir",
    "${system:java.io.tmpdir}" + File.separator + "${hive.session.id}_resources",
    "Temporary local directory for added resources in the remote file system."),
The fix is to include the session id in the configured path:
hive.downloaded.resources.dir=/disk1/hive/download/${hive.session.id}_resources
Each session now downloads into its own directory, so closing one session no longer deletes another session's resources. Problem solved.
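As a sanity check, the effective value can be read back through HiveConf. This is a minimal sketch assuming hive-common is on the classpath; note that ${hive.session.id} is only substituted inside a running Hive session, so run standalone this may print the raw template.

import org.apache.hadoop.hive.conf.HiveConf;

// Prints the configured download dir. The ${hive.session.id} placeholder
// is replaced with a concrete id only within a running session.
public class PrintDownloadDir {
    public static void main(String[] args) {
        HiveConf conf = new HiveConf();
        System.out.println(conf.getVar(HiveConf.ConfVars.DOWNLOADED_RESOURCES_DIR));
    }
}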