The correct way to use Spark Streaming
Reposted from http://bit1129.iteye.com/blog/2198531
The code is as follows:
package spark.examples.streaming

import java.sql.{Connection, DriverManager, PreparedStatement}

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// No need to call Class.forName("com.mysql.jdbc.Driver") to register the driver?
object SparkStreamingForPartition {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("NetCatWordCount")
    conf.setMaster("local[3]")
    val ssc = new StreamingContext(conf, Seconds(5))
    val dstream = ssc.socketTextStream("192.168.26.140", 9999)
    // foreachRDD is an output operation on the DStream: it triggers a job and processes
    // the RDDs created in each batch interval. If an RDD action is then called on the
    // RDD inside it, does that trigger yet another job?
    dstream.foreachRDD(rdd => {
      // Writes one partition's records into MySQL over a single connection
      def func(records: Iterator[String]) {
        var conn: Connection = null
        var stmt: PreparedStatement = null
        try {
          val url = "jdbc:mysql://192.168.26.140:3306/person"
          val user = "root"
          val password = ""
          conn = DriverManager.getConnection(url, user, password)
          // Prepare the statement once and reuse it for every word in the partition
          stmt = conn.prepareStatement("insert into TBL_WORDS(word) values (?)")
          records.flatMap(_.split(" ")).foreach(word => {
            stmt.setString(1, word)
            stmt.executeUpdate()
          })
        } catch {
          case e: Exception => e.printStackTrace()
        } finally {
          if (stmt != null) {
            stmt.close()
          }
          if (conn != null) {
            conn.close()
          }
        }
      }
      // Repartition the RDD to change the processing parallelism
      val repartitionedRDD = rdd.repartition(3)
      // Call func once per partition; its argument is an Iterator over that partition's data
      repartitionedRDD.foreachPartition(func)
    })
    ssc.start()
    ssc.awaitTermination()
  }
}
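To try it out, the socket source needs something writing lines of text to port 9999. For a quick test you can start a server on the 192.168.26.140 host with netcat (nc -lk 9999, as in the Spark examples) and type words into it; each 5-second batch will then be inserted into the TBL_WORDS table.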
What I really want to say is this: I used to call collect all the time, when I should have been using foreachRDD, or going straight to foreachPartition, which hands you each partition's data in turn so you can do the work there.
I used to be afraid of foreach, because I worried it meant a foreach over every single record. I need a database connection, and if it looped per record, then a throughput of 1000 records per batch would mean 1000 connections, which seemed terrifying... It turned out that's not how it works at all. A minimal sketch of the contrast follows.
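A minimal sketch of that contrast, reusing the same socket source and MySQL table assumed above (the old collect habit is left as a comment): the connection is opened once per partition, so a batch of 1000 records split across a few partitions costs a few connections, not 1000.

package spark.examples.streaming

import java.sql.DriverManager

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CollectVsForeachPartition {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("CollectVsForeachPartition").setMaster("local[3]")
    val ssc = new StreamingContext(conf, Seconds(5))
    val words = ssc.socketTextStream("192.168.26.140", 9999).flatMap(_.split(" "))

    words.foreachRDD { rdd =>
      // The old habit: rdd.collect().foreach(...) pulls every record back to the
      // driver before anything is written.

      // The pattern this post recommends: the writes stay on the executors, and the
      // connection is opened once per partition, not once per record.
      rdd.foreachPartition { records =>
        val conn = DriverManager.getConnection(
          "jdbc:mysql://192.168.26.140:3306/person", "root", "")
        val stmt = conn.prepareStatement("insert into TBL_WORDS(word) values (?)")
        try {
          records.foreach { word =>
            stmt.setString(1, word)
            stmt.executeUpdate()
          }
        } finally {
          stmt.close()
          conn.close()
        }
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}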