Kafka Usage Examples

Below is a Java example of using Kafka to send and receive messages.

First, install Kafka and make sure the service is running.

Then use the following example code to send and receive messages:

Producer.java:

import org.apache.kafka.clients.producer.*;

import java.util.Properties;

public class Producer {
    private final static String TOPIC_NAME = "my-topic";
    private final static String BOOTSTRAP_SERVERS = "localhost:9092";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // Declare the variable as KafkaProducer: the enclosing class is itself named
        // Producer, which shadows the org.apache.kafka.clients.producer.Producer interface.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                String key = "key-" + i;
                String value = "value-" + i;

                ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME, key, value);

                producer.send(record, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception exception) {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println("Sent message: topic = " + metadata.topic() +
                                    ", partition = " + metadata.partition() +
                                    ", offset = " + metadata.offset() +
                                    ", key = " + key +
                                    ", value = " + value);
                        }
                    }
                });
            }
        }
    }
}

Consumer.java:

import org.apache.kafka.clients.consumer.*;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class Consumer {
    private final static String TOPIC_NAME = "my-topic";
    private final static String BOOTSTRAP_SERVERS = "localhost:9092";
    private final static String GROUP_ID = "my-group";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Disable auto-commit so that the commitSync() call below controls offset commits.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(TOPIC_NAME));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));

                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Received message: topic = " + record.topic() +
                            ", partition = " + record.partition() +
                            ", offset = " + record.offset() +
                            ", key = " + record.key() +
                            ", value = " + record.value());
                }

                consumer.commitSync();
            }
        }
    }
}

In the example above, the Producer class sends messages and the Consumer class receives them.

In the Producer class, we create a producer configured with the Kafka broker address. The send method publishes each record to the specified topic, and the callback reports the result of the send.
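How does a record's key determine its partition? Kafka's default partitioner hashes the serialized key (with murmur2) and takes the result modulo the partition count, so records with the same key always land on the same partition. A minimal sketch of that idea (PartitionerSketch is a hypothetical class, and String.hashCode stands in for murmur2 purely for illustration, not Kafka's actual implementation):

```java
public class PartitionerSketch {
    // Simplified stand-in for Kafka's default partitioner, which computes
    // murmur2(keyBytes) % numPartitions. The bitmask keeps the result non-negative.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Same key always maps to the same partition; different keys spread out.
        for (int i = 0; i < 4; i++) {
            String key = "key-" + i;
            System.out.println(key + " -> partition " + partitionFor(key, 3));
        }
    }
}
```

This determinism is what lets Kafka guarantee per-key ordering within a partition.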

In the Consumer class, we create a consumer configured with the broker address and a consumer group ID. The subscribe method subscribes to the topic; poll then fetches batches of records from the broker, which we iterate over and process; finally, commitSync manually commits the consumed offsets.
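Committing only after the batch is processed gives at-least-once delivery: offsets advance only once processing has succeeded, so a crash mid-batch replays the uncommitted records instead of losing them. A broker-free sketch of that bookkeeping (CommitSketch is a hypothetical class, not part of the Kafka API):

```java
import java.util.*;

public class CommitSketch {
    // In-memory simulation of manual offset commits: the committed offset
    // only advances after the whole batch is processed, mirroring the
    // poll-process-commitSync loop in Consumer.java.
    private long committedOffset = 0;

    // Process a batch of records, then "commit" by advancing the offset.
    public long processBatch(List<String> records) {
        long position = committedOffset;
        for (String record : records) {
            // real processing of each record would happen here
            position++;
        }
        committedOffset = position; // the commitSync() equivalent
        return committedOffset;
    }

    public long committed() { return committedOffset; }

    public static void main(String[] args) {
        CommitSketch c = new CommitSketch();
        c.processBatch(Arrays.asList("a", "b", "c"));
        System.out.println("committed offset = " + c.committed()); // prints 3
    }
}
```

If the process dies before the commit line runs, the committed offset stays put and the same batch is delivered again on restart.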

Make sure the Kafka broker and the client dependencies are set up correctly, and change the broker address, topic name, and consumer group ID as needed. Once the Producer and Consumer are running, you should see messages flow from the producer to the consumer and get printed.



Here is another Java Kafka example. It sends and receives messages as before, and adds stream processing with Kafka Streams. Again, make sure Kafka is installed and the service is running before trying the code:

Producer.java and Consumer.java are the same as in the previous example, so they are not repeated here.

KafkaStreamsDemo.java:

import org.apache.kafka.common.serialization.*;
import org.apache.kafka.streams.*;
import org.apache.kafka.streams.kstream.*;

import java.util.Properties;

public class KafkaStreamsDemo {
    private final static String INPUT_TOPIC = "input-topic";
    private final static String OUTPUT_TOPIC = "output-topic";
    private final static String BOOTSTRAP_SERVERS = "localhost:9092";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streams-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> inputTopicStream = builder.stream(INPUT_TOPIC);
        KStream<String, String> transformedStream = inputTopicStream.mapValues(value -> value.toUpperCase());
        transformedStream.to(OUTPUT_TOPIC);

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

In this example, the Producer class sends messages and the Consumer class receives them, exactly as described earlier; the KafkaStreamsDemo class performs the stream processing.

In the KafkaStreamsDemo class, we create a StreamsBuilder and configure the broker address. The stream method reads records from the input topic, mapValues transforms each record's value (here, converting it to upper case), and to writes the transformed records to the output topic.
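The transformation itself is independent of Kafka and can be exercised in isolation. A sketch that applies the same mapValues logic to a plain Map (MapValuesSketch is a hypothetical helper, not part of the Kafka Streams API):

```java
import java.util.*;
import java.util.stream.*;

public class MapValuesSketch {
    // The topology's only transformation, applied to a plain Map instead of a
    // KStream: uppercase each value while keys pass through unchanged.
    public static Map<String, String> mapValues(Map<String, String> in) {
        return in.entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey,
                                          e -> e.getValue().toUpperCase()));
    }

    public static void main(String[] args) {
        Map<String, String> in = new LinkedHashMap<>();
        in.put("key-0", "value-0");
        in.put("key-1", "value-1");
        System.out.println(mapValues(in));
    }
}
```

Keeping the transformation a pure value-to-value function is exactly why mapValues is cheap in Kafka Streams: since keys are untouched, no repartitioning is needed.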

Make sure the Kafka broker and the dependencies are set up correctly, and change the broker address, topic names, and stream-processing logic as needed. With Producer, Consumer, and KafkaStreamsDemo all running, messages flow from the producer to the consumer and are printed, while the streams application reads from the input topic and writes the transformed results to the output topic. Note that KafkaStreamsDemo uses its own topics (input-topic and output-topic), separate from the my-topic used by the Producer and Consumer.

posted @ 2023-05-30 17:01  田野与天