
Error reported when submitting a task via spark shell

Caused by: java.lang.NoSuchMethodError: org.apache.spark.network.util.AbstractFileRegion.transferred()J
at org.apache.spark.network.util.AbstractFileRegion.transfered(AbstractFileRegion.java:28)
at io.netty.channel.nio.AbstractNioByteChannel.doWrite(AbstractNioByteChannel.java:228)
at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:282)
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:879)

The class AbstractFileRegion is in spark-network-common.xxx.jar, which is on the classpath, so a jar conflict was suspected.
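A quick way to confirm which jar actually supplied a class is to ask the classloader from inside spark-shell. The sketch below is only an illustration; the helper name whereIs is made up, not part of any API:

// Run inside spark-shell: print the jar a class was actually loaded from.
def whereIs(className: String): Unit = {
  val clazz = Class.forName(className)
  // getCodeSource can be null for JDK bootstrap classes, so guard it
  val src = Option(clazz.getProtectionDomain.getCodeSource).map(_.getLocation)
  println(s"$className -> ${src.getOrElse("bootstrap classloader")}")
}

whereIs("org.apache.spark.network.util.AbstractFileRegion")
whereIs("io.netty.channel.FileRegion")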

# Classpath of the failed submission
[/data/xxx/dts-executor/executions/4998-0-1/resource
 /data/xxx/dts-executor/plugins/sparkShell/conf
 /data/xxx/dts-executor/plugins/sparkShell/di-dts-plugin-ic-1.0.0.jar
 /usr/hdp/current/hadoop-client/*
 /usr/hdp/current/hadoop-client/lib/*
 /usr/hdp/current/hadoop-hdfs-client/*
 /usr/hdp/current/hadoop-hdfs-client/lib/*
 /usr/hdp/current/hadoop-mapreduce-client/*
 /usr/hdp/current/hadoop-mapreduce-client/lib/*
 /usr/hdp/current/hadoop-yarn-client/*
 /usr/hdp/current/hadoop-yarn-client/lib/*
 /usr/hdp/current/hive/lib/*
 /usr/hdp/current/spark2-client/conf
 /usr/hdp/current/spark2-client/jars/*]

# Classpath of the successful submission
 [/data/xxx/dts-executor/plugins/sparkShell/di-dts-plugin-ic-1.0.0.jar
 /data/xxx/dts-executor/plugins/sparkShell/conf
 /usr/hdp/current/spark2-client/jars/*
 /usr/hdp/current/spark2-client/conf
 /usr/hdp/current/hive/lib/*
 /usr/hdp/current/hadoop-client/*
 /usr/hdp/current/hadoop-client/lib/*
 /usr/hdp/current/hadoop-mapreduce-client/*
 /usr/hdp/current/hadoop-mapreduce-client/lib/*
 /usr/hdp/current/hadoop-yarn-client/*
 /usr/hdp/current/hadoop-yarn-client/lib/*
 /usr/hdp/current/hadoop-hdfs-client/*
 /usr/hdp/current/hadoop-hdfs-client/lib/*
 /data/xxx/dts-executor/executions/5008-0-1/resource]

The main difference is that /usr/hdp/current/spark2-client/jars/* comes earlier in the successful classpath; the root cause turned out to be a netty jar conflict.
The Hadoop entries such as /usr/hdp/current/hadoop-client/* ship the following two jars, which clash with the netty bundled under /usr/hdp/current/spark2-client/jars/* (a quick way to locate such duplicate jars is sketched after this list):
netty-3.9.9.Final.jar
netty-all-4.1.17.Final.jar
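A hedged sketch for finding which jars on the HDP classpath bundle a given netty class. The directory list and the target class entry below are examples only; adjust them to your own paths:

// Scan classpath directories for jars that contain a given class entry.
import java.io.File
import java.util.jar.JarFile

val dirs = Seq(
  "/usr/hdp/current/hadoop-client/lib",
  "/usr/hdp/current/spark2-client/jars"
)
val target = "io/netty/channel/FileRegion.class" // a netty 4.x class, as an example

for {
  dir  <- dirs
  file <- Option(new File(dir).listFiles()).getOrElse(Array.empty[File])
  if file.getName.endsWith(".jar")
} {
  val jar = new JarFile(file)
  try {
    if (jar.getEntry(target) != null) println(s"${file.getPath} contains $target")
  } finally jar.close()
}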
Solutions:
1. Move the Spark classpath entries in front of the Hadoop ones, so the Spark jars are loaded first (this is the approach we took); a sketch of the ordering follows this list.
2. If the two netty jars under the Hadoop directories are not actually needed, delete them outright and the conflict disappears.
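For reference, a minimal sketch of option 1. This is hypothetical launcher code, not the actual dts-executor plugin; the point is only the ordering, since the JVM resolves classes from left to right across classpath entries, so listing spark2-client first means Spark's own netty wins:

// Hypothetical classpath assembly illustrating "Spark jars first".
val classpath = Seq(
  "/usr/hdp/current/spark2-client/jars/*",  // Spark's netty is picked up first
  "/usr/hdp/current/spark2-client/conf",
  "/usr/hdp/current/hadoop-client/*",       // Hadoop (and its netty) afterwards
  "/usr/hdp/current/hadoop-client/lib/*"
).mkString(":")

println(s"java -cp $classpath <mainClass>") // classes resolve in this left-to-right order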
posted on 2020-09-09 14:25 by 姜小嫌