How to fix the Spark task-not-serializable error: org.apache.spark.SparkException: Task not serializable

import org.elasticsearch.cluster.routing.Murmur3HashFunction;
import org.elasticsearch.common.math.MathUtils;

// Custom Partitioner
class ESShardPartitioner(settings: String) extends org.apache.spark.Partitioner {
  protected var _numPartitions = -1;  
  protected var _hashFunction = new org.elasticsearch.cluster.routing.Murmur3HashFunction; // this member field is what triggers the serialization error
  override def numPartitions: Int = {
    val newSettings = new org.elasticsearch.hadoop.cfg.PropertiesSettings().load(settings);
    // In production, set the index/type for your own data; web/blog is the index used in this experiment
    newSettings.setResourceRead("web/blog"); // ******************** !!! modify it !!! ******************** 
    newSettings.setResourceWrite("web/blog"); // ******************** !!! modify it !!! ******************** 
    val repository = new org.elasticsearch.hadoop.rest.RestRepository(newSettings);
    val targetShards = repository.getWriteTargetPrimaryShards(newSettings.getNodesClientOnly());
    repository.close();
    // targetShards ??? data structure
    _numPartitions = targetShards.size();
    println("********************numPartitions*************************");
    println(_numPartitions);
    _numPartitions;
  }

  override def getPartition(docID: Any): Int = {    
    val r = _hashFunction.hash(docID.toString());
    val shardId = org.elasticsearch.common.math.MathUtils.mod(r, _numPartitions);
    println("********************shardId*************************");
    println(shardId)
    shardId;
  }
}

Root cause: the "task not serializable" error usually appears because the function passed to map, filter, etc. uses an external variable that cannot be serialized. In particular, referencing a member function or field of some class (often the current class) forces every member of that class (the whole object) to be serializable.
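For example, the following minimal sketch (class and field names are made up for illustration) shows how referencing a member field inside map() drags the whole enclosing object into the closure:

class LogCleaner(sc: org.apache.spark.SparkContext) { // SparkContext itself is not serializable
  val suffix = "-cleaned"

  // Fails: the closure references this.suffix, so Spark must serialize the whole
  // LogCleaner instance (including sc) to ship the task -> Task not serializable.
  def bad(lines: org.apache.spark.rdd.RDD[String]) = lines.map(_ + suffix)

  // Works: copy the field into a local val, so the closure only captures a String.
  def good(lines: org.apache.spark.rdd.RDD[String]) = {
    val localSuffix = suffix
    lines.map(_ + localSuffix)
  }
}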

Solution: drop the non-serializable field and construct the hash function locally inside getPartition:

class ESShardPartitioner(settings: String) extends org.apache.spark.Partitioner {
  protected var _numPartitions = -1;  

  override def numPartitions: Int = {
    val newSettings = new org.elasticsearch.hadoop.cfg.PropertiesSettings().load(settings);
    // In production, set the index/type for your own data; web/blog is the index used in this experiment
    newSettings.setResourceRead("web/blog"); // ******************** !!! modify it !!! ******************** 
    newSettings.setResourceWrite("web/blog"); // ******************** !!! modify it !!! ******************** 
    val repository = new org.elasticsearch.hadoop.rest.RestRepository(newSettings);
    val targetShards = repository.getWriteTargetPrimaryShards(newSettings.getNodesClientOnly());
    repository.close();
    // targetShards ??? data structure
    _numPartitions = targetShards.size();
    println("********************numPartitions*************************");
    println(_numPartitions);
    _numPartitions;
  }

  override def getPartition(docID: Any): Int = {
    val _hashFunction = new org.elasticsearch.cluster.routing.Murmur3HashFunction;
    val r = _hashFunction.hash(docID.toString());
    val shardId = org.elasticsearch.common.math.MathUtils.mod(r, _numPartitions);
    println("********************shardId*************************");
    println(shardId)
    shardId;
  }
}
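For reference, here is a minimal, hypothetical sketch of how this partitioner might be wired into a write job. The pair-RDD shape (keyed by document id), the esCfgString settings string, and the web/blog resource are assumptions; saveJsonToEs comes from elasticsearch-hadoop's EsSpark helper.

import org.elasticsearch.spark.rdd.EsSpark

// `sc` is an existing SparkContext; `esCfgString` is assumed to be the same
// serialized es-hadoop settings string that is passed to ESShardPartitioner above.
val docs = sc.parallelize(Seq(
  ("1", """{"title": "hello"}"""),
  ("2", """{"title": "world"}""")))                         // (docId, jsonDoc) pairs
val routed = docs.partitionBy(new ESShardPartitioner(esCfgString))
EsSpark.saveJsonToEs(routed.values, "web/blog")             // one Spark partition per primary shard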

 

Job aborted due to stage failure: Task not serializable:

If you see this error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: ...

The above error can be triggered when you initialize a variable on the driver (master), but then try to use it on one of the workers. In that case, Spark will try to serialize the object to send it over to the worker, and will fail if the object is not serializable. Consider the following code snippet:

NotSerializable notSerializable = new NotSerializable(); // created on the driver
JavaRDD<String> rdd = sc.textFile("/tmp/myfile");

rdd.map(s -> notSerializable.doSomething(s)).collect(); // the closure captures notSerializable, which must be serialized

This will trigger that error. Here are some ways to fix it (the first two options are sketched in Scala after this list):

  • Make the class serializable.
  • Declare the instance only within the lambda function passed in map.
  • Make the NotSerializable object a static field so it is created once per machine.
  • Call rdd.foreachPartition and create the NotSerializable object in there, like this:
rdd.foreachPartition(iter -> {
  NotSerializable notSerializable = new NotSerializable();

  // ...Now process iter
});
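For the first two points, here is a minimal Scala sketch (the Parser and HeavyParser classes are hypothetical): either mark the helper Serializable so it can travel with the closure, or construct it on the executor side so it never has to be serialized at all.

import org.apache.spark.rdd.RDD

// Option 1: make the helper serializable.
class Parser extends Serializable {
  def doSomething(s: String): String = s.trim
}

// Option 2: keep the helper non-serializable, but build it inside the task,
// once per partition, so it is never shipped from the driver.
class HeavyParser {
  def doSomething(s: String): String = s.trim
}

def cleanAll(rdd: RDD[String]): RDD[String] =
  rdd.mapPartitions { iter =>
    val parser = new HeavyParser()   // created on the executor, not on the driver
    iter.map(parser.doSomething)
  }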

Reference: https://databricks.gitbooks.io/databricks-spark-knowledge-base/content/troubleshooting/javaionotserializableexception.html