Integrating Kafka with Spark
1. Environment Preparation
1. Prepare the Kafka Cluster
1. Prepare a Kafka cluster environment and start it
2. Create the topic first
/usr/kafka/kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server 192.168.58.130:9092 --create --partitions 1 --replication-factor 3 --topic first
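To confirm the topic was created as expected, it can optionally be described with the same script (same path and address as above):
/usr/kafka/kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server 192.168.58.130:9092 --describe --topic first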
2. Prepare the Spark Environment
1. Configure the Scala runtime environment
1. Download Scala
2. Configure the runtime environment [omitted]
If Scala is not used frequently, this configuration can be skipped
3. Install the Scala plugin in the IDE [omitted]
2. Prepare a basic runnable project
1. Create a Maven project named spark-kafka [omitted]
2. Add the Scala dependency in the module settings
3. Create a scala folder under main, mark it as a source root, and create the package cn.coreqi.spark under it
4. Add the POM dependency
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.13</artifactId>
    <version>3.5.0</version>
</dependency>
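For Maven to compile the Scala sources under src/main/scala, a Scala build plugin is usually needed as well. A minimal sketch assuming scala-maven-plugin (the plugin version is an assumption; adjust to your environment):
<build>
    <plugins>
        <!-- Compiles Scala sources during the Maven build; version 4.8.1 is an assumption -->
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>4.8.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>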
5. Add the log configuration file log4j.properties under resources
log4j.rootLogger=error, stdout,R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%5L) : %m%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=../log/agent.log
log4j.appender.R.MaxFileSize=1024KB
log4j.appender.R.MaxBackupIndex=1
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%6L) : %m%n
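Note: Spark 3.3 and later use Log4j 2 internally, so with the Spark 3.5.0 dependencies above the log4j.properties file may not be picked up for Spark's own logging; an equivalent log4j2.properties can be placed under resources instead. A minimal sketch (console appender only, error level, simplified pattern):
rootLogger.level = error
rootLogger.appenderRef.stdout.ref = console
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_OUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%t] %c : %m%n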
2. Spark Producer
1. Create a new Scala object: SparkKafkaProducer
package cn.coreqi.spark.producer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer
import java.util.Properties
object SparkKafkaProducer {
  def main(args: Array[String]): Unit = {
    // 0. Kafka configuration
    val properties = new Properties()
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.58.130:9092,192.168.58.131:9092,192.168.58.132:9092")
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
    // 1. Create the Kafka producer
    val producer = new KafkaProducer[String, String](properties)
    // 2. Send data to the "first" topic
    for (i <- 1 to 5) {
      producer.send(new ProducerRecord[String, String]("first", "coreqi" + i))
    }
    // 3. Close the producer to release resources
    producer.close()
  }
}
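To see where each record is written, a hypothetical variant of step 2 sends keyed records with a callback that prints the delivery metadata (the key values are assumptions for illustration; Callback and RecordMetadata come from the same org.apache.kafka.clients.producer package):
// Hypothetical variant of step 2: send keyed records and print partition/offset on completion
import org.apache.kafka.clients.producer.{Callback, RecordMetadata}
for (i <- 1 to 5) {
  producer.send(
    new ProducerRecord[String, String]("first", "key" + i, "coreqi" + i),
    new Callback {
      override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit = {
        if (exception == null)
          println(s"record $i -> partition ${metadata.partition()}, offset ${metadata.offset()}")
        else
          exception.printStackTrace()
      }
    }
  )
}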
2. Start a Kafka console consumer
/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.58.130:9092 --topic first
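If the console consumer is started after messages have already been produced, adding --from-beginning replays the existing messages:
/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.58.130:9092 --topic first --from-beginning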
3. Run the SparkKafkaProducer program and watch the messages arrive in the Kafka console consumer
3. Spark Consumer
1. Adjust the POM and add the following dependencies
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.13</artifactId>
    <version>3.5.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.13</artifactId>
    <version>3.5.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.13</artifactId>
    <version>3.5.0</version>
</dependency>
2. Create a new Scala object: SparkKafkaConsumer
package cn.coreqi.spark.consumer
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}
object SparkKafkaConsumer {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("sparkstreaming").setMaster("local[*]")
    // 2. Create the StreamingContext with a 3-second batch interval
    val ssc = new StreamingContext(sparkConf, Seconds(3))
    // 3. Define the Kafka parameters: cluster address, consumer group id, key deserializer, value deserializer
    val kafkaPara: Map[String, Object] = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "192.168.58.130:9092,192.168.58.131:9092,192.168.58.132:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "coreqiGroup",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer]
    )
    // 4. Create a DStream by reading data from Kafka
    val kafkaDStream: InputDStream[ConsumerRecord[String, String]] =
      KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent, // Location strategy: distribute partitions evenly across executors
        ConsumerStrategies.Subscribe[String, String](Set("first"), kafkaPara) // Consumer strategy: (topics to subscribe to, Kafka parameters)
      )
    // 5. Extract the value of each record
    val valueDStream: DStream[String] = kafkaDStream.map(record => record.value())
    // 6. Print the values (see the word-count sketch after this object for an actual WordCount)
    valueDStream.print()
    // 7. Start the streaming job and wait for termination
    ssc.start()
    ssc.awaitTermination()
  }
}
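Step 6 above only prints the raw values; a minimal word-count sketch over the same value stream (assuming whitespace-separated words) could replace it:
// 6. Compute WordCount over each 3-second batch (assumes whitespace-separated words)
val wordCounts: DStream[(String, Int)] = valueDStream
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
wordCounts.print()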
3. Start the SparkKafkaConsumer program
4. Start a Kafka console producer, send a few messages, and observe them in the SparkKafkaConsumer console
/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-producer.sh --bootstrap-server 192.168.58.130:9092 --topic first