Spark shell timeout

 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(1,Command exited with code 1)] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:359)
at 

After the Spark cluster was deployed, start-all.sh ran successfully, but launching the Spark shell failed with the timeout above.

Netty is Spark's RPC communication framework, so the error indicates that an RPC message timed out in transit.

Fixes tried:
1. IPv6 may be one cause: try commenting out the ::1 (IPv6 loopback) entry first (did not work).
2. Increase the timeout (this works):
SparkConf: conf.set("spark.rpc.askTimeout", "600s")
spark-defaults.conf: spark.rpc.askTimeout 600s
spark-submit: --conf spark.rpc.askTimeout=600s
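All three routes set the same property; as a sketch, the command-line route can be used when launching the shell itself (the 600s value is the one from the workaround above, not a recommended default):

```shell
# Raise the RPC ask timeout from the default 120s to 600s for this
# session only; equivalent to the spark-defaults.conf or SparkConf
# settings, but without touching cluster-wide configuration.
spark-shell --conf spark.rpc.askTimeout=600s
```

Note that a longer timeout only hides the symptom if executors are genuinely failing ("Command exited with code 1"); it mainly helps when the cluster is slow to respond rather than broken.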

 


posted @ 2017-08-23 20:11  卡丽熙