Basic Usage of Kafka

Command line

  1. Create a topic
[root@manager129 kafka_2.12-2.8.0]# ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic heima --partitions 2 --replication-factor 1
Created topic heima.
  • --zookeeper: address of the ZooKeeper instance that Kafka is connected to
  • --topic: name of the topic to create
  • --partitions: number of partitions
  • --replication-factor: replication factor for each partition
  • --create: the action flag; tells the tool to create the topic
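The same topic can also be created programmatically. Below is a minimal sketch using the Kafka AdminClient API; the broker address is an assumption, and NewTopic mirrors the --topic, --partitions, and --replication-factor flags of the command above:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class TopicCreate
{
    public static void main(String[] args) throws Exception
    {
        Properties props = new Properties();
        // Assumed broker address; replace with your own cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // topic name, partition count, replication factor: same as the CLI example
        NewTopic topic = new NewTopic("heima", 2, (short) 1);

        try (AdminClient admin = AdminClient.create(props))
        {
            // Blocks until the broker confirms the topic was created
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

This is equivalent to the shell command but can be embedded in application startup code.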
  2. List all topics
    Command:
[root@manager129 kafka_2.12-2.8.0]# ./bin/kafka-topics.sh --zookeeper localhost:2181 --list
heima
  3. Describe a topic
    Command:
[root@manager129 kafka_2.12-2.8.0]# ./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic heima
Topic: heima    TopicId: D0rctcUJQ-2-1QstiUHw6w PartitionCount: 2       ReplicationFactor: 1    Configs: 
        Topic: heima    Partition: 0    Leader: 0       Replicas: 0     Isr: 0
        Topic: heima    Partition: 1    Leader: 0       Replicas: 0     Isr: 0
  4. Start a console consumer
    Command:
[root@manager129 kafka_2.12-2.8.0]# ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic heima
hello
  5. Start a console producer
    Command:
[root@manager129 kafka_2.12-2.8.0]# ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic heima
>hello

Java client

  1. Maven dependencies (pom.xml)
<dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.8.2</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
  2. Producer
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerStart
{
    private static String brokerList = "172.16.177.129:9092";
    private static String topic = "heima";
    public static void main(String[] args)
    {
        Properties properties = new Properties();
        // set the key serializer
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // set the number of retries
        properties.put(ProducerConfig.RETRIES_CONFIG, 10);

        // set the value serializer
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");

        // set the cluster (bootstrap) address
        properties.put("bootstrap.servers", brokerList);

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, "kafka-demo", "hello kafka");

        producer.send(record);

        producer.close();
    }
}
  3. Consumer
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerStart
{
    private static String brokerList = "172.16.177.129:9092";
    private static String topic = "heima";
    private static String groupId = "group-demo";
    public static void main(String[] args)
    {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);

        consumer.subscribe(Collections.singletonList(topic));
        while (true)
        {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records)
            {
                System.out.println("=========>" + record);
            }
        }
    }
}

The Producer in Detail

Sending messages

Send modes

  • Fire-and-forget: producer.send(record) returns immediately and does not wait for the broker's acknowledgement

  • Synchronous: producer.send(record).get() blocks on the returned Future until the broker responds

  • Asynchronous: pass a Callback that is invoked once the send completes

producer.send(record, new Callback()
{
    @Override
    public void onCompletion(RecordMetadata recordMetadata, Exception e)
    {
        if (e == null)
        {
            System.out.println(recordMetadata.partition() + ":" + recordMetadata.offset());
        }
        else
        {
            // the send failed; log the error or retry here
            e.printStackTrace();
        }
    }
});

Serializers
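The producer above uses Kafka's built-in StringSerializer. To send whole objects you can plug in a custom serializer by implementing the Serializer interface; below is a minimal sketch that writes values as JSON using Gson (already declared in the pom above). The class name JsonSerializer is my own choice for this example, not part of the Kafka API:

```java
import com.google.gson.Gson;
import org.apache.kafka.common.serialization.Serializer;

import java.nio.charset.StandardCharsets;

// Example custom serializer: converts any value to its JSON byte representation
public class JsonSerializer<T> implements Serializer<T>
{
    private final Gson gson = new Gson();

    @Override
    public byte[] serialize(String topic, T data)
    {
        if (data == null)
        {
            return null; // Kafka treats a null value as a tombstone
        }
        return gson.toJson(data).getBytes(StandardCharsets.UTF_8);
    }
}
```

Register it in the producer configuration with properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class.getName()); the consumer side would use a matching deserializer that calls gson.fromJson on the received bytes.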


posted @ 2021-06-26 21:49  fight139