
Spark Streaming

1. RDD Basics

The source comment in RDD.scala (quoted below) lists the five main properties of an RDD. RDDs are built on the driver and shipped to the executors; an RDD is best understood as a description of a computation. Apart from RDDs produced by sc.parallelize(), which capture the driver-side collection, an RDD generally contains no actual data: it only stores metadata such as the locations of the files to read and the DAG lineage.
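
A minimal sketch of that laziness (the input path and app names are hypothetical): transformations only extend the DAG description, and no data moves until an action runs.

import org.apache.spark.{SparkConf, SparkContext}

object RddLazinessDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdd-laziness").setMaster("local[2]"))

    // The exception: parallelize() captures the driver-side collection in the RDD
    val inMemory = sc.parallelize(Seq(1, 2, 3, 4))

    // textFile() only records *where* to read; nothing is read yet
    val fromFile = sc.textFile("hdfs:///tmp/input.txt")

    // A transformation just adds a node to the DAG
    val lengths = fromFile.map(_.length)

    // Only the actions below ship tasks to the executors and touch the data
    println(lengths.count() + inMemory.sum())
    sc.stop()
  }
}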

KafkaUtils.createDirectStream produces a KafkaRDD whose partitions correspond one-to-one to the partitions of the subscribed topics.
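
A sketch of the direct approach, assuming the spark-streaming-kafka-0-10 integration; the broker address, group id, and topic name are placeholders.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.KafkaUtils

val ssc = new StreamingContext(new SparkConf().setAppName("direct-kafka"), Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "broker1:9092",            // hypothetical broker
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "demo-group",
  "auto.offset.reset"  -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean) // commit manually after processing
)

// The resulting KafkaRDDs have one partition per partition of "demo-topic"
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](Seq("demo-topic"), kafkaParams)
)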

The receiver-based approach instead produces BlockRDDs: the receiver pulls data continuously and cuts it into blocks (every 200ms by default), which are managed by the BlockManager. The number of partitions is therefore determined by the number of blocks per batch, not by the number of Kafka partitions, and offsets are tracked in ZooKeeper.
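
For contrast, a receiver-based sketch using the older spark-streaming-kafka-0-8 createStream API; the ZooKeeper quorum, group id, and topic are placeholders.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils // spark-streaming-kafka-0-8 artifact

val conf = new SparkConf().setAppName("receiver-kafka")
  // Received data is cut into blocks at this interval (200ms is the default);
  // the number of blocks per batch determines the BlockRDD's partition count
  .set("spark.streaming.blockInterval", "200ms")
val ssc = new StreamingContext(conf, Seconds(2))

// ZooKeeper quorum, consumer group, and topic -> receiver-thread count;
// offsets are tracked in ZooKeeper, not by Spark
val lines = KafkaUtils.createStream(
  ssc,
  "zk1:2181,zk2:2181",
  "demo-group",
  Map("demo-topic" -> 1)
)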

Write the processing logic inside foreachRDD; this turns the job into ordinary Spark Core programming, which makes it easier to validate and reprocess the data when a failure occurs.
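
Continuing the direct-stream sketch above, a common foreachRDD pattern: record the offset ranges first, process with plain Spark Core operators, and commit offsets only after the batch succeeds, so a failed batch can be replayed and validated.

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

stream.foreachRDD { rdd =>
  // Exact Kafka offsets backing this KafkaRDD; persist or log them for replay
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // From here on this is ordinary Spark Core code
  val counts = rdd.map(record => (record.key, 1L)).reduceByKey(_ + _)
  counts.foreach(println) // runs on the executors

  // Commit only after the batch has been fully processed
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}

ssc.start()
ssc.awaitTermination()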

 * Internally, each RDD is characterized by five main properties:
 *  - A list of partitions
 *  - A function for computing each split
 *  - A list of dependencies on other RDDs
 *  - Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
 *  - Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)
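
These five properties map directly onto members of the RDD class. A toy custom RDD (hypothetical, for illustration only) that implements the first two and keeps the defaults for the rest:

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Each range of ints is one partition
class RangePartition(val index: Int, val from: Int, val until: Int) extends Partition

// A toy RDD that materializes nothing on the driver: it only *describes*
// its partitions and how to compute each one
class RangeRDD(sc: SparkContext, n: Int, numSlices: Int)
  extends RDD[Int](sc, Nil) { // Nil = no dependencies on parent RDDs (property 3)

  // Property 1: the list of partitions
  override protected def getPartitions: Array[Partition] = {
    val step = math.max(1, n / numSlices)
    (0 until numSlices).map { i =>
      new RangePartition(i, i * step, if (i == numSlices - 1) n else (i + 1) * step)
    }.toArray
  }

  // Property 2: how to compute one split
  override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
    val p = split.asInstanceOf[RangePartition]
    (p.from until p.until).iterator
  }

  // Properties 4 (partitioner) and 5 (preferred locations) keep their
  // default None / Nil implementations here
}

// Usage: new RangeRDD(sc, 10, 3).collect() returns 0 to 9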

2. Receiver

3. Stopping Gracefully

spark.streaming.stopGracefullyOnShutdown does not work in yarn-cluster mode. Instead, use an HDFS marker-file approach: the driver periodically checks for the marker file, and as soon as it detects that the file exists it calls stop(true, true). (See https://www.inovex.de/blog/247-spark-streaming-on-yarn-in-production/)
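
A sketch of that marker-file loop, assuming a hypothetical HDFS path; the driver polls between awaitTerminationOrTimeout calls and stops gracefully once the file appears (e.g. created with hdfs dfs -touchz).

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical HDFS location; create the file to request a shutdown
val markerFile = new Path("/user/streaming/stop-marker")

ssc.start()

var stopped = false
while (!stopped) {
  // Block for up to 10s; returns true if the context has already stopped
  stopped = ssc.awaitTerminationOrTimeout(10000)
  if (!stopped && FileSystem.get(new Configuration()).exists(markerFile)) {
    // stopGracefully = true: finish the in-flight batches before exiting
    ssc.stop(stopSparkContext = true, stopGracefully = true)
    stopped = true
  }
}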

 
