Kafka Interceptors: Timestamps & Message Counts

Kafka producer interceptors (ProducerInterceptor)

How interceptors work:

  Producer interceptors were introduced in Kafka 0.10 and are mainly used to implement client-side customization logic. For the producer, an interceptor gives users a chance to customize a message (for example, modify it) before it is sent and before the producer's callback logic runs. The producer also allows multiple interceptors to be applied to the same message in a specified order, forming an interceptor chain. The interface to implement is org.apache.kafka.clients.producer.ProducerInterceptor, which defines these methods:

  1) configure(configs): called when reading configuration and initializing.

  2) onSend(ProducerRecord): this method is wrapped into KafkaProducer.send, i.e. it runs in the user's main thread. The producer guarantees it is called before the message is serialized and before its partition is computed. You can do anything to the message here, but it is best not to change the topic or partition it belongs to, otherwise the target-partition computation is affected.

  3) onAcknowledgement(RecordMetadata, Exception): called when a message is acknowledged or when sending fails, normally before the producer's callback logic is triggered. onAcknowledgement runs in the producer's I/O thread, so do not put heavy logic in it, or it will slow down the producer's send throughput.

  4) close: closes the interceptor; mainly used for resource cleanup.

  As noted above, interceptors may run in multiple threads, so the implementation must ensure thread safety itself. Also, if multiple interceptors are specified, the producer invokes them in the given order, and it merely catches any exception an interceptor throws and writes it to the error log instead of propagating it upward. Keep this in mind when using interceptors.
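The swallow-and-log behavior described above can be sketched with a plain-Java stand-in for the chain. The names here (Interceptor, runChain) are hypothetical stand-ins for illustration; the real logic lives inside the producer's internal interceptor wrapper:

```java
import java.util.ArrayList;
import java.util.List;

public class InterceptorChainSketch {
    // Hypothetical minimal stand-in for ProducerInterceptor<String,String>.onSend
    interface Interceptor {
        String onSend(String value);
    }

    // Each interceptor is called in order; any exception it throws is
    // logged and swallowed rather than propagated to the caller.
    static String runChain(List<Interceptor> chain, String value) {
        String current = value;
        for (Interceptor ic : chain) {
            try {
                current = ic.onSend(current);
            } catch (Exception e) {
                System.err.println("Error executing interceptor onSend: " + e);
                // continue the chain with the last good value
            }
        }
        return current;
    }

    public static void main(String[] args) {
        List<Interceptor> chain = new ArrayList<>();
        chain.add(v -> "ts," + v);                          // like TimeInterCeptor below
        chain.add(v -> { throw new RuntimeException(); });  // a faulty interceptor
        chain.add(v -> v + "!");
        System.out.println(runChain(chain, "message0"));    // prints "ts,message0!"
    }
}
```

Note that the faulty second interceptor does not break the chain: the third one still runs on the value produced by the first.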

 

Example:

Requirement: build a chain of two interceptors. The first prepends a timestamp to the front of the message value before sending; the second updates, after each send, the counts of successfully sent and failed messages.

Implementation:

0) Maven dependencies (pom.xml)

 <dependencies>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.3</version>

        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.3</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
        <!-- Client 2.6.1, used when submitting to YARN from Windows
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.1</version>
        </dependency>
-->

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.mrunit/mrunit MRUnit testing -->
        <dependency>
            <groupId>org.apache.mrunit</groupId>
            <artifactId>mrunit</artifactId>
            <version>0.9.0-incubating</version>
            <classifier>hadoop2</classifier>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>com.sun</groupId>
            <artifactId>tools</artifactId>
            <version>1.8.0</version>
            <scope>system</scope>
            <systemPath>${env.JAVA_HOME}/lib/tools.jar</systemPath>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase</artifactId>
            <version>1.3.2</version>
            <type>pom</type>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-common -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>1.3.2</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-server -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>1.3.2</version>
        </dependency>


        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.3.2</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.4.6</version>
            <type>pom</type>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.glassfish.jersey.core/jersey-client -->
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.26</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.0.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-streams -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>0.10.0.0</version>
        </dependency>

    </dependencies>

 

1) Timestamp interceptor

package com.lxz.kafka.interceptor;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

public class TimeInterCeptor implements ProducerInterceptor<String,String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> producerRecord) {
        // Build a new record, prepending the current timestamp to the message value

        return new ProducerRecord<>(producerRecord.topic(), producerRecord.partition(), producerRecord.timestamp(),producerRecord.key(),
                System.currentTimeMillis() + "," + producerRecord.value().toString());
    }

    @Override
    public void onAcknowledgement(RecordMetadata recordMetadata, Exception e) {

    }

    @Override
    public void close() {

    }

    @Override
    public void configure(Map<String, ?> map) {

    }
}

2) Count successfully sent and failed messages, and print both counters when the producer is closed.

package com.lxz.kafka.interceptor;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

public class CounterInterCeptor implements ProducerInterceptor<String,String> {
    private int errorCounter = 0;
    private int successCounter = 0;

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> producerRecord) {
        return producerRecord;
    }

    @Override
    public void onAcknowledgement(RecordMetadata recordMetadata, Exception e) {
        // Count successes and failures
        if (e == null){
            successCounter++;
        }else {
            errorCounter++;
        }

    }

    @Override
    public void close() {
        // Print the final counts
        System.out.println("Successful sent:" + successCounter);
        System.out.println("Failed sent:" + errorCounter);

    }

    @Override
    public void configure(Map<String, ?> map) {

    }
}

3) Main program

package com.lxz.kafka.interceptor;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.ArrayList;
import java.util.Properties;

public class InterceptorProducer {
    public static void main(String[] args) {
        // 1. Producer configuration
        Properties properties = new Properties();
        properties.put("bootstrap.servers","hadoop1:9092");
        properties.put("acks","all");
        properties.put("retries",0);
        properties.put("batch.size",16384);
        properties.put("linger.ms",1);
        properties.put("buffer.memory",33554432);
        properties.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // 2. Build the interceptor chain
        ArrayList<String> interceptors = new ArrayList<>();

        interceptors.add("com.lxz.kafka.interceptor.TimeInterCeptor");
        interceptors.add("com.lxz.kafka.interceptor.CounterInterCeptor");
        properties.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,interceptors);

        String topic = "first";
        Producer<String, String> kafkaproducers = new KafkaProducer<>(properties);

        // 3. Send messages
        for (int i = 0; i < 10; i++) {
            ProducerRecord<String ,String> record = new ProducerRecord<>(topic,"message" + i);
            kafkaproducers.send(record);
        }

        // 4. Close the producer; only then are the interceptors' close methods invoked
        kafkaproducers.close();
    }
}

4) Fixing a runtime error

If you hit the following warning, it is because the log4j configuration file is missing:

log4j:WARN No appenders could be found for logger (org.apache.kafka.clients.producer.ProducerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Create a log4j.properties file under resources and add:

log4j.rootLogger=INFO,console,dailyFile
# TODO: add when publishing to Aliyun; also, the console should then only output WARN or ERROR

# log4j.logger.org.mybatis = INFO
log4j.logger.com.imooc.mapper=INFO

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.encoding=UTF-8
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%l] - [%p] %m%n

# Roll the log file periodically; a new file is generated each day
log4j.appender.dailyFile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyFile.encoding=UTF-8
log4j.appender.dailyFile.Threshold=INFO
# TODO: local log path; be sure to switch to the Aliyun path in production
log4j.appender.dailyFile.File=C:/logs/maven-ssm-alipay/log.log4j
log4j.appender.dailyFile.DatePattern='.'yyyy-MM-dd
log4j.appender.dailyFile.layout=org.apache.log4j.PatternLayout
log4j.appender.dailyFile.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%l] - [%p] %m%n

5) Test results

  1. Start the ZooKeeper and Kafka cluster

  2. Run the main program in IDEA

  3. The IDEA console prints:

    Successful sent:10
    Failed sent:0

  4. Start a Kafka console consumer on the server to inspect the topic:

cd /opt/module/kafka

bin/kafka-console-consumer.sh --zookeeper hadoop1:2181 --from-beginning --topic first

Output:

1629859601060,message0
1629859601130,message1
1629859601130,message2
1629859601130,message3
1629859601130,message4
1629859601130,message5
1629859601131,message6
1629859601131,message7
1629859601131,message8
1629859601131,message9
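Since TimeInterCeptor only prepends "&lt;millis&gt;," to the value, a consumer can recover the send timestamp and the original payload by splitting on the first comma. A minimal sketch (the class and method names here are illustrative, not part of the example above):

```java
public class ParseTimestamped {
    // Split "<millis>,<payload>" on the first comma only, so commas
    // inside the payload itself are preserved.
    static String[] parse(String value) {
        return value.split(",", 2);
    }

    public static void main(String[] args) {
        String[] parts = parse("1629859601060,message0");
        long sentAt = Long.parseLong(parts[0]);
        System.out.println(sentAt + " -> " + parts[1]); // prints "1629859601060 -> message0"
    }
}
```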

posted @ 明明就-