Summary of common Spark errors
Original article: https://my.oschina.net/tearsky/blog/629201
Contents:
1、Operation category READ is not supported in state standby
2、Setting spark.deploy.recoveryMode to ZOOKEEPER
3、How to configure multiple Masters
4、No Space Left on the device (too many temporary shuffle files)
5、java.lang.OutOfMemory, unable to create new native thread
6、The work directory on Worker nodes takes up a lot of disk space
7、How to supply dependency libraries when submitting a Spark Application with spark-shell
8、Cannot connect to the master when deploying a Spark application
9、When deploying a Spark application that integrates with Flume-NG, you may hit org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.10.156:18800
10、spark-shell cannot find the Hadoop native (.so) libraries
11、ERROR XSDB6: Another instance of Derby may have already booted the database /home/bdata/data/metastore_db.
12、java.lang.IllegalArgumentException: java.net.UnknownHostException: dfscluster
13、Exception in thread "main" java.lang.Exception: When running with master 'yarn-client' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
14、Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in
15、Long wait with no response; the web UI on the server shows memory and cores, but none are allocated
16、Executor Lost caused by insufficient memory or data skew (spark-submit)
17、java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries (Spark SQL on Hive job triggers a HiveContext NullPointerException)
18、The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx------
19、Exception in thread "main" org.apache.hadoop.security.AccessControlException : Permission denied: user=Administrator, access=WRITE, inode="/data":bdata:supergroup:drwxr-xr-x
20、Running Spark SQL fails with org.apache.spark.sql.AnalysisException: unresolved operator 'Project'
21、org.apache.spark.shuffle.MetadataFetchFailedException:Missing an output location for shuffle 0/Failed to connect to hostname/192.168.xx.xxx:50268
22、spark error already tried 45 time(s); maxRetries=45
23、Changing Spark advanced configuration in Cloudera Manager
24、spark Exception in thread "Thread-2" java.lang.OutOfMemoryError: PermGen space
Note: if the Driver code is complete and the application has been submitted (from Eclipse or uploaded), but it never starts processing data, or the job finishes almost immediately, and no error is printed to the console, open the Spark web UI, find your job, and check the stderr log of each task for errors. In general, once the driver has been submitted, errors can only be found in the task logs.
1、Operation category READ is not supported in state standby
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
If this happens, log in to the Hadoop admin UI and check whether the node you are connecting to is in standby state.
For example, the UI address is:
http://192.168.50.221:50070/dfshealth.html#tab-overview
If it is, Spark jobs cannot be run against the StandBy machine, because that machine is only a backup.
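The NameNode state can also be checked from the command line; a minimal sketch, assuming the HA service ids configured in hdfs-site.xml are nn1 and nn2:
# prints "active" or "standby" for each configured NameNode
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
Point Spark at the HDFS nameservice (or at the active NameNode) rather than at the standby one.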
2、Setting spark.deploy.recoveryMode to ZOOKEEPER
If spark.deploy.recoveryMode is not set, all of the cluster's runtime state is lost when the Master restarts; see the BlackHolePersistenceEngine implementation.
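A minimal spark-env.sh sketch for ZooKeeper-based recovery (the ZooKeeper hosts zk1/zk2/zk3 and the /spark directory are placeholders):
# persist Master state in ZooKeeper so a restarted or standby Master can recover it
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 -Dspark.deploy.zookeeper.dir=/spark"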
3、How to configure multiple Masters
Because multiple Masters are involved, submitting an application changes slightly: the application needs to know the IP address and port of the current Master. This HA scheme handles the situation simply; just point the SparkContext at a list of Masters, e.g. spark://host1:port1,host2:port2,host3:port3, and the application will try the list in turn.
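For example (the host names, application class and jar name are placeholders), pass the whole list when submitting:
# the client tries each Master in the list until it reaches the active one
spark-shell --master spark://host1:7077,host2:7077,host3:7077
spark-submit --master spark://host1:7077,host2:7077,host3:7077 --class com.example.MyApp myapp.jar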
4、No Space Left on the device (too many temporary shuffle files)
Spark writes intermediate results to the /tmp directory during computation, and most Linux distributions now use tmpfs, which effectively mounts /tmp in memory.
The problem is that too many intermediate results fill up /tmp and the following error appears:
No Space Left on the device
Solutions
Option 1: edit the spark-env.sh configuration file and redirect the temporary files to a custom directory, e.g.
export SPARK_LOCAL_DIRS=/home/utoken/datadir/spark/tmp
Option 2 (the lazy way): stop using tmpfs for the /tmp directory by editing /etc/fstab directly (a sketch follows below).
In Cloudera Manager, add the setting via Filters => Advanced => search for "spark_env" and append export SPARK_LOCAL_DIRS=/home/utoken/datadir/spark/tmp to all of the matching configuration items.
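A sketch of Option 2, assuming /tmp is mounted through a tmpfs entry in /etc/fstab:
# comment out (or delete) the tmpfs entry for /tmp so it is backed by disk again
# tmpfs   /tmp   tmpfs   defaults   0   0
# then umount /tmp or reboot for the change to take effect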
5、java.lang.OutOfMemory, unable to create new native thread
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
What this error really means is that the Linux operating system cannot create any more processes; it is not a lack of memory. The fix is therefore to allow Linux to create more processes, i.e. to raise the maximum process limit.
[utoken@nn1 ~]$ulimit -a
Temporarily raise the maximum number of processes:
[utoken@nn1 ~]$ulimit -u 65535
Temporarily raise the maximum number of open file handles:
[utoken@nn1 ~]$ulimit -n 65535
Permanently raise the maximum number of processes:
[utoken@nn1 ~]$ vim /etc/security/limits.d/90-nproc.conf
* soft nproc 60000
root soft nproc unlimited
Permanently raise the per-user limit on open file handles. The default is 1024, which is usually not enough; the typical symptom is a "too many open files" error.
[utoken@nn1 ~]$ vim /etc/security/limits.conf
bdata soft nofile 65536
bdata hard nofile 65536
6、The work directory on Worker nodes takes up a lot of disk space
Directory: /home/utoken/software/spark-1.3.0-bin-hadoop2.4/work
These are files the Driver uploads to the workers. They need to be cleaned up manually on a regular schedule, otherwise they consume a lot of disk space.
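Instead of manual cleanup, standalone workers can also clean up old application directories themselves; a spark-env.sh sketch (the interval and TTL values are only examples):
# check every 30 minutes and remove work dirs of finished applications older than 7 days
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=1800 -Dspark.worker.cleanup.appDataTtl=604800"
Note this only removes directories of applications that have already finished.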
7、How to supply dependency libraries when submitting a Spark Application with spark-shell
For spark-shell, use the --driver-class-path option to specify the dependency jar files. Note that if --driver-class-path is followed by several jars, they must be separated by colons (:).
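For example (the jar paths are placeholders):
spark-shell --driver-class-path /path/to/dep1.jar:/path/to/dep2.jar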
8、Cannot connect to the master when deploying a Spark application, as follows
15/11/19 11:35:50 INFO AppClient$ClientEndpoint: Connecting to master spark://s1:7077...
15/11/19 11:35:50 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@s1:7077] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
Solution
Check that the clocks on all machines are in sync, that the hosts mappings are configured everywhere, that the Scala version on the client matches the one on the server, and that the Scala version is compatible with Spark.
For compatibility details, refer to the official website.
9、When deploying a Spark application that integrates with Flume-NG, you may hit org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.10.156:18800
15/11/27 10:33:44 ERROR ReceiverSupervisorImpl: Stopped receiver with error: org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.10.156:18800
15/11/27 10:33:44 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 70)
org.jboss.netty.channel.ChannelException: Failed to bind to: /192.168.10.156:18800
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
Caused by: java.net.BindException: Cannot assign requested address
When Spark deploys through the Master, it automatically picks a worker node to send the work to, so the port must be bound on whichever worker actually runs it. Since we cannot know in advance which server Spark will choose, the startup fails here: the Driver program was not started on the machine at 192.168.10.156:18800 but on some other server, so that address cannot be listened on. The options are to deploy the Spark package to every worker node, or, after deployment, find out which machine it landed on, change the bind IP accordingly and redeploy; this succeeds with some probability. For details see 《印象笔记-战5渣系列——Spark Streaming启动问题 - 推酷》.
10、spark-shell cannot find the Hadoop native (.so) libraries
[main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
In Spark's conf directory, edit the spark-env.sh file and add the LD_LIBRARY_PATH environment variable, set to the path of Hadoop's native libraries.
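For example, assuming HADOOP_HOME points at the Hadoop installation:
# make the native Hadoop libraries visible to the JVM started by spark-shell
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native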
11、ERROR XSDB6: Another instance of Derby may have already booted the database /home/bdata/data/metastore_db.
This error appears when operating on Hive data in Hive on Spark mode. The cause is that Hive uses the embedded Derby database as its metastore, and Derby does not support access by multiple users at the same time. The fix is to switch the metastore database from Derby to MySQL.
How to make the change
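A minimal sketch of the properties to add inside hive-site.xml's <configuration> element for a MySQL-backed metastore (the host, database name, user and password are placeholders, and the MySQL JDBC driver jar must be on the classpath):
<property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://mysql-host:3306/hive_metastore?createDatabaseIfNotExist=true</value></property>
<property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
<property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
<property><name>javax.jdo.option.ConnectionPassword</name><value>hive_password</value></property>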
12、java.lang.IllegalArgumentException: java.net.UnknownHostException: dfscluster
Solution:
The HDFS nameservice dfscluster cannot be resolved. It is defined in hdfs-site.xml under Hadoop's etc/hadoop directory; copy that file into Spark's conf directory and restart.
For example, run a script to distribute it to all machines in the Spark cluster:
[bdata@bdata4 hadoop]$ for i in 34 35 36 37 38; do scp hdfs-site.xml 192.168.10.$i:/u01/spark-1.5.1/conf/ ; done
13、Exception in thread "main" java.lang.Exception: When running with master 'yarn-client' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
Problem: the above error is reported when running against YARN in cluster or client mode:
[bdata@bdata4 bin]$ ./spark-sql --master yarn-client
Exception in thread "main" java.lang.Exception: When running with master 'yarn-client' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
Solution
As the message says, set the HADOOP_CONF_DIR or YARN_CONF_DIR environment variable, for example:
export HADOOP_HOME=/u01/hadoop-2.6.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$SQOOP_HOME/bin:$HIVE_HOME/bin:$HADOOP_HOME/bin
14、Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in
[Stage 0:> (0 + 4) / 42]2016-01-15 11:28:16,512 [org.apache.spark.scheduler.TaskSchedulerImpl]-[ERROR] Lost executor 0 on 192.168.10.38: remote Rpc client disassociated
[Stage 0:> (0 + 4) / 42]2016-01-15 11:28:23,188 [org.apache.spark.scheduler.TaskSchedulerImpl]-[ERROR] Lost executor 1 on 192.168.10.38: remote Rpc client disassociated
[Stage 0:> (0 + 4) / 42]2016-01-15 11:28:29,203 [org.apache.spark.scheduler.TaskSchedulerImpl]-[ERROR] Lost executor 2 on 192.168.10.38: remote Rpc client disassociated
[Stage 0:> (0 + 4) / 42]2016-01-15 11:28:36,319 [org.apache.spark.scheduler.TaskSchedulerImpl]-[ERROR] Lost executor 3 on 192.168.10.38: remote Rpc client disassociated
2016-01-15 11:28:36,321 [org.apache.spark.scheduler.TaskSetManager]-[ERROR] Task 3 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException : Job aborted due to stage failure: Task 3 in stage 0.0 failed 4 times, most recent failure: Lost task 3.3 in stage 0.0 (TID 14, 192.168.10.38): ExecutorLostFailure (executor 3 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
Solution
The problem here is mainly that the source data volume is too large for the machines' memory; executors run for a long time, time out and disconnect, and the data cannot be processed effectively, so the memory needs to be increased.
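A spark-submit sketch for raising the memory (the master URL, sizes, class and jar name are placeholders; pick values the machines can actually provide):
spark-submit --master spark://s1:7077 --driver-memory 2g --executor-memory 4g --class com.example.MyApp myapp.jar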
15、Long wait with no response, and the web UI on the server shows memory and cores, but none are allocated, as below
[Stage 0:> (0 + 0) / 42]
Or the log shows:
16/01/15 14:18:56 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Solution
The main cause is that the memory requested via spark.executor.memory is larger than what the machines actually have, so the task cannot run until enough memory is available. Reduce the memory requested per executor and set it to a smaller value.
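For example (2g and the other values are placeholders; the executor memory must fit within what a single worker offers):
spark-submit --master spark://s1:7077 --executor-memory 2g --total-executor-cores 4 --class com.example.MyApp myapp.jar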
16、Executor Lost caused by insufficient memory or data skew (spark-submit)
TaskSetManager: Lost task 1.0 in stage 6.0 (TID 100, 192.168.10.37): java.lang.OutOfMemoryError: Java heap space
16/01/15 14:29:51 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on 192.168.10.37:57139 (size: 42.0 KB, free: 24.2 MB)
16/01/15 14:29:53 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on 192.168.10.38:53816 (size: 42.0 KB, free: 24.2 MB)
16/01/15 14:29:55 INFO TaskSetManager: Starting task 3.0 in stage 6.0 (TID 102, 192.168.10.37, ANY, 2152 bytes)
16/01/15 14:29:55 WARN TaskSetManager: Lost task 1.0 in stage 6.0 (TID 100, 192.168.10.37): java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:59)
at org.apache.spark.sql.execution.UnsafeRowSerializerInstance$$anon$2.<init>(UnsafeRowSerializer.scala:55)
at org.apache.spark.sql.execution.UnsafeRowSerializerInstance.serializeStream(UnsafeRowSerializer.scala:52)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:92)
at org.apa...