Project Summary 66: Integrating a Spring Boot Project with a Kafka Cluster

 

Background

  The project previously used a single-node Kafka service; we now plan to move to a multi-node cluster deployment (upgrading from one broker to two brokers).

 

The specific changes are as follows (problems encountered along the way are covered in the appendix):

  1- Deploy the new Kafka service; reference blog: https://www.cnblogs.com/wobuchifanqie/p/11687234.html
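  For orientation, the per-broker settings that matter for this migration look roughly like the sketch below. The listener address, log.dirs path, ZooKeeper address, and the publicIp placeholders are my assumptions, not values from the actual deployment:

# Broker 1: ./config/server.properties
# broker.id must be unique across the cluster (see appendix, problems 1 and 2)
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
# Public address handed out to clients (see appendix, problem 3)
advertised.host.name=publicIp1
log.dirs=/tmp/kafka-logs
zookeeper.connect=zkHost:2181

# Broker 2: same file, except
# broker.id=1
# advertised.host.name=publicIp2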

  2- POM file (unchanged from the single-node setup)

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.12.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

        <!--kafka-->
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.1.1.RELEASE</version>
        </dependency>

  3- application.properties configuration

# Single-node setup
spring.kafka.bootstrap-servers=publicIp2:9092
#=============== producer =======================
# Number of retries on a failed write. When the leader fails, a replica takes over
# as the new leader, and writes may fail during the transition. With retries=0 the
# producer will not resend (so no duplicates); with retries enabled, the resend
# happens once a replica has fully become the leader, so no messages are lost.
spring.kafka.producer.retries=0
# Batch size in bytes: the producer accumulates records up to this size and sends them as one batch
spring.kafka.producer.batch-size=16384
# Total bytes of memory the producer may use to buffer records awaiting transmission (buffer.memory)
spring.kafka.producer.buffer-memory=33554432

# The number of acknowledgments the producer requires the leader to have received before
# considering a request complete; this controls the durability of sent records. Possible values:
# acks=0: the producer does not wait for any acknowledgment from the server; the record is added to the socket buffer and considered sent. There is no guarantee the server received it, retries has no effect (the client generally will not learn of failures), and the offset returned for each record is always -1.
# acks=1: the leader writes the record to its local log and responds without waiting for all replicas to acknowledge. If the leader fails right after acknowledging, but before the data has been replicated to the followers, the record is lost.
# acks=all: the leader waits for the full set of in-sync replicas to acknowledge the record. The record is then not lost as long as at least one in-sync replica stays alive; this is the strongest guarantee and is equivalent to acks=-1.
# Valid values: all, -1, 0, 1
spring.kafka.producer.acks=1

# Serializers for the message key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

#=============== consumer =======================
# Default consumer group id. In Kafka, consumers in the same group never read the same message; group.id names the group.
spring.kafka.consumer.group-id=testGroup
# What to do when there is no committed offset: earliest rereads from the start of the log,
# latest reads from the current end (these match smallest/largest in the old consumer API).
# We normally use earliest.
spring.kafka.consumer.auto-offset-reset=earliest
# enable.auto.commit=true: commit offsets automatically
spring.kafka.consumer.enable-auto-commit=true
# If 'enable.auto.commit' is true, the interval in milliseconds at which consumer offsets are auto-committed to Kafka; the default is 5000.
spring.kafka.consumer.auto-commit-interval=100

# Deserializers for the message key and value
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

For the multi-node cluster, the only change needed is to point spring.kafka.bootstrap-servers at both brokers:
spring.kafka.bootstrap-servers=publicIp1:9092,publicIp2:9092
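For reference, the following is a minimal hand-rolled sketch of what Spring Boot's auto-configuration builds from the spring.kafka.producer.* keys above. The class name KafkaProducerConfig and the publicIp placeholders are mine; with the properties file in place you do not need this class at all:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configs = new HashMap<>();
        // Both brokers listed, so cluster metadata can still be fetched if one is down.
        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "publicIp1:9092,publicIp2:9092");
        configs.put(ProducerConfig.RETRIES_CONFIG, 0);
        configs.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        configs.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        configs.put(ProducerConfig.ACKS_CONFIG, "1");
        configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configs);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}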

 

  4- Producer

import java.util.HashMap;

import com.alibaba.fastjson.JSON;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class ApiFansKafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     *@Description Test producer
     *@param message payload to wrap in JSON and send
     *@return void
     *@author TangYujie
     *@date 2020/6/16 13:56
     */
    public void testProducer(String message) {
        // Wrap the message in a JSON object keyed by "key_test".
        HashMap<String, Object> map = new HashMap<>();
        map.put("key_test", message);
        String jsonData = JSON.toJSONString(map);
        kafkaTemplate.send("topic_test", jsonData);
        System.out.println("send kafka testProducer success");
    }
}
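Note that the println above only means the record was handed to the producer's buffer; send() is asynchronous, and problem 3 in the appendix surfaced exactly because the real failure happened later. Below is a sketch of a callback-based variant (the method name testProducerWithCallback is mine) that reports the broker's actual acknowledgment; it needs the additional imports org.springframework.kafka.support.SendResult, org.springframework.util.concurrent.ListenableFuture, and org.springframework.util.concurrent.ListenableFutureCallback:

    public void testProducerWithCallback(String message) {
        // send() only enqueues the record; the future completes when the broker
        // acknowledges it (or the send ultimately fails).
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send("topic_test", message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                System.out.println("sent to partition " + result.getRecordMetadata().partition()
                        + " at offset " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                System.err.println("send failed: " + ex.getMessage());
            }
        });
    }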

  5- Consumer

import com.alibaba.fastjson.JSONObject;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class ApiFansKafkaConsumer extends ApiBaseFansKafka {

    /**
     *@Description Test consumer
     *@param record record delivered by the listener container
     *@return void
     *@author TangYujie
     *@date 2020/6/16 13:56
     */
    @KafkaListener(topics = "topic_test")
    public void testConsumer(ConsumerRecord<?, String> record) {
        // 1- Read the raw message
        System.out.println("consumed record: " + record + ", partition: " + record.partition());
        String msg = record.value();
        System.out.println("testConsumer: " + msg);
        JSONObject dataJson = JSONObject.parseObject(msg);
        // 2- Pull the data out of the message
        System.out.println(String.valueOf(dataJson.get("key_test")));
    }

}
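One caveat with this listener: fastjson's parseObject throws com.alibaba.fastjson.JSONException on malformed payloads, and with enable-auto-commit=true the offset advances regardless, so an uncaught exception just means the record is lost noisily. A hedged variant of the listener above (the method name testConsumerSafe is mine) that skips bad payloads explicitly:

    @KafkaListener(topics = "topic_test")
    public void testConsumerSafe(ConsumerRecord<?, String> record) {
        try {
            JSONObject dataJson = JSONObject.parseObject(record.value());
            System.out.println(String.valueOf(dataJson.get("key_test")));
        } catch (JSONException e) {
            // Log and skip malformed payloads instead of re-throwing into the
            // listener container; with auto-commit the offset advances either way.
            System.err.println("skipping bad record at offset " + record.offset()
                    + ": " + e.getMessage());
        }
    }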

  6- Test endpoint

    @Autowired
    private ApiFansKafkaProducer apiFansKafkaProducer;

    @RequestMapping(value = {"/kafka/test/{message}"}, method = {RequestMethod.GET}, produces = {JSON_UTF8})
    @ResponseBody
    public Object testKafka(@PathVariable String message) throws Exception {
        apiFansKafkaProducer.testProducer(message);
        return "success";
    }
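With the application running, the endpoint can be exercised with a plain GET, e.g. /kafka/test/hello%20tyj3 against the local instance (host and port depend on your Spring Boot server settings), which yields the output below.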

 

  7- Test results

send kafka testProducer success
consumed record: ConsumerRecord(topic = topic_test, partition = 0, offset = 34, CreateTime = 1592295092044, checksum = 1998716755, serialized key size = -1, serialized value size = 25, key = null, value = {"key_test":"hello tyj3"}), partition: 0
testConsumer: {"key_test":"hello tyj3"}
hello tyj3

 

 

 

Appendix: Problems

  1- Error while fetching metadata with correlation id 0 : {key_test=LEADER_NOT_AVAILABLE}

Problem log:

[2020-06-16 14:04:54,505][kafka-producer-network-thread | producer-1][WARN][org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.handleResponse(NetworkClient.java:600)] Error while fetching metadata with correlation id 0 : {key_test=LEADER_NOT_AVAILABLE}

 

Cause: both Kafka brokers had broker.id=0; the conflict caused LEADER_NOT_AVAILABLE.

Solution: in ./config/server.properties on one of the two brokers, set broker.id=1.

  

  2- Configured broker.id 1 doesn't match stored broker.id 0 in meta.properties. 

Problem log:

[2020-06-16 14:22:28,388] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentBrokerIdException: Configured broker.id 1 doesn't match stored broker.id 0 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
    at kafka.server.KafkaServer.getBrokerIdAndOfflineDirs(KafkaServer.scala:684)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:209)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)

 

Cause: the broker.id in ./config/server.properties did not match the broker.id stored in ./logs/meta.properties.

Solution: set broker.id=1 in ./logs/meta.properties as well.
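For reference, meta.properties is generated under the broker's log.dirs directory on first startup and stores only a couple of fields; its content looks roughly like this (the broker.id line is what must agree with server.properties):

version=0
broker.id=1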

 

  3- Exception thrown when sending a message with key='null'

Problem log:

[2020-06-16 15:37:43,307][kafka-producer-network-thread | producer-1][ERROR][org.springframework.kafka.support.LoggingProducerListener.onError(LoggingProducerListener.java:76)] Exception thrown when sending a message with key='null' and payload='{"key_test":"hello tyj1"}' to topic topic_test:
org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for topic_test-0

 

Cause: the advertised.host.name parameter was not set in ./config/server.properties.

Solution: set advertised.host.name=IP in ./config/server.properties; I used the public IP. Without it, the broker may advertise an internal hostname that external clients cannot reach, which is why the metadata requests timed out.
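A sketch of the relevant lines (publicIp2 stands in for the actual public IP):

# ./config/server.properties
advertised.host.name=publicIp2
# On newer Kafka versions advertised.host.name is deprecated; the equivalent is:
# advertised.listeners=PLAINTEXT://publicIp2:9092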

 

END

 

posted on 2020-06-16 16:49  我不吃番茄