Problem 1:

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:

Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. For a full list of supported file systems, please see

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies. at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:58) at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487) ... 24 more

 

Cause:

Starting with Flink 1.11.0, a number of important new features were added, including support for Hadoop 3.0.0 and later. Flink no longer ships "flink-shaded-hadoop-*"
jars; integration with the YARN cluster is instead done by configuring environment variables. Before deploying a Flink job to a YARN cluster, confirm that Hadoop is installed on the cluster,
that the Hadoop version is at least 2.2, and that the HDFS service is installed.
Solution:
1. Configure environment variables. Add the following:
sudo vim /etc/profile
export HADOOP_HOME=/soft/install/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_CLASSPATH=`hadoop classpath`

hadoop classpath is a shell command that prints the configured Hadoop classpath.
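
After editing /etc/profile, reload it and verify the variables took effect. A minimal check, assuming the install path above (the printed classpath depends on your installation):

source /etc/profile
echo $HADOOP_HOME     # should print /soft/install/hadoop-2.7.5
hadoop classpath      # prints a colon-separated list of Hadoop jars and config dirs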

Note: Starting from Flink 1.11, the Flink project no longer officially supports the flink-shaded-hadoop-2-uber releases. Users are advised to provide Hadoop dependencies through HADOOP_CLASSPATH.
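
With HADOOP_CLASSPATH set, the job can then be submitted to YARN as usual. A minimal sketch of a per-job submission; the main class and jar name here are placeholders, not from the original setup:

export HADOOP_CLASSPATH=`hadoop classpath`
$FLINK_HOME/bin/flink run -m yarn-cluster -c com.example.MyJob my-job.jar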

 

2. Add JARs to flink/lib

flink-shaded-hadoop-3-3.1.1.7.2.8.0-224-9.0.jar

commons-cli-1.5.0.jar

They can be downloaded directly from https://mvnrepository.com/ or via the direct links below:

https://repo1.maven.org/maven2/commons-cli/commons-cli/1.5.0/commons-cli-1.5.0.jar

https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/flink/flink-shaded-hadoop-3/3.1.1.7.2.8.0-224-9.0/flink-shaded-hadoop-3-3.1.1.7.2.8.0-224-9.0.jar
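
A quick way to fetch both into Flink's lib directory, assuming $FLINK_HOME points at your Flink installation:

cd $FLINK_HOME/lib
wget https://repo1.maven.org/maven2/commons-cli/commons-cli/1.5.0/commons-cli-1.5.0.jar
wget https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/flink/flink-shaded-hadoop-3/3.1.1.7.2.8.0-224-9.0/flink-shaded-hadoop-3-3.1.1.7.2.8.0-224-9.0.jar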

 

If the cluster runs Hadoop 2.x, use the Hadoop 2 uber jar instead:

Place flink-shaded-hadoop-2-uber-2.8.3-10.0.jar under $FLINK_HOME/lib.

JAR download URL:
https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
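
Again assuming $FLINK_HOME is set:

cd $FLINK_HOME/lib
wget https://repo.maven.apache.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/2.8.3-10.0/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar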

 

3. Restart Flink.
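
For a standalone cluster the restart is just the bundled scripts (assumption: standalone deployment; a YARN session is restarted by re-launching the session instead):

$FLINK_HOME/bin/stop-cluster.sh
$FLINK_HOME/bin/start-cluster.sh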

 

Problem 2:

Caused by: java.lang.ClassCastException: cannot assign instance of org.apache.commons.collections.map.LinkedMap to field
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit of
type org.apache.commons.collections.map.LinkedMap in instance of org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
("LinkedMap cannot be cast to LinkedMap" exceptions ...)

The Flink job runs fine when submitted locally, but fails after being packaged as a jar and run on a remote Flink cluster.

Solution

Add the following to conf/flink-conf.yaml and restart Flink (the default is child-first):

classloader.resolve-order: parent-first
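
If switching the entire cluster to parent-first is too broad, Flink can also load only specific packages parent-first. A narrower sketch, assuming commons-collections is the only conflicting dependency:

# conf/flink-conf.yaml: keep child-first overall, but resolve this package from the parent classloader
classloader.parent-first-patterns.additional: org.apache.commons.collections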

Root cause

The LinkedMap class is loaded by two different classloaders (Flink's and the user code's), and an instance created by one is assigned to a field whose type was loaded by the other, so the JVM treats them as incompatible classes despite the identical name.
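
A quick way to confirm this is to check whether the job jar bundles its own copy of the class (the jar name is a placeholder); if it does, it clashes with the copy on Flink's classpath under child-first loading:

jar tf my-job.jar | grep 'org/apache/commons/collections/map/LinkedMap'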

Official documentation:

https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/debugging_classloading.html
