Installing and Using Kafka
Installing and Using Kafka on Windows
Kafka download page: http://kafka.apache.org/downloads.html
Extract the binary archive to the D: drive. Keep the path shallow: I recommend at most two levels under the drive root (mine is D:\kafka_2.12-2.5.0).
The reason the Kafka directory must not sit too deep is that Zookeeper or Kafka may flash-exit when you start them, and you will likely see the classic error "The input line is too long. The syntax of the command is incorrect." The Windows startup scripts expand the install path into a very long classpath, and a deep path pushes the resulting command past the cmd.exe length limit.
Edit the config/zookeeper.properties and config/server.properties files:
config/zookeeper.properties: dataDir=D:\kafka_2.12-2.5.0\dataDir (this is my directory; if it does not exist, create it yourself)
config/server.properties: log.dirs=D:\kafka_2.12-2.5.0\logs
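One Windows-specific caveat worth noting (my own note, not from the steps above): Kafka parses these files with Java properties semantics, where the backslash is an escape character, so single backslashes in paths can be silently mangled. Forward slashes (or doubled backslashes) are the safer spelling, e.g.:

```properties
# config/zookeeper.properties -- forward slashes avoid backslash-escape issues
dataDir=D:/kafka_2.12-2.5.0/dataDir

# config/server.properties
log.dirs=D:/kafka_2.12-2.5.0/logs
```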
Start Zookeeper and Kafka (note: both startup scripts live in D:\kafka_2.12-2.5.0\bin\windows; the scripts one level up are the Linux versions).
Start Zookeeper: zookeeper-server-start.bat ..\..\config\zookeeper.properties
Start Kafka: kafka-server-start.bat ..\..\config\server.properties
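Before running the Java examples below, it can help to create the topic explicitly and confirm the broker is reachable. A sketch, run from the bin\windows directory (the topic name kafka_test matches the producer code later; localhost:9092 assumes a default local broker, so adjust to your address):

```shell
:: Create the topic with one partition and replication factor 1 (fine for a single local broker)
kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic kafka_test

:: List topics to verify it exists
kafka-topics.bat --list --bootstrap-server localhost:9092
```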
Startup is complete. Next, using Kafka from Java.
Using Kafka (messaging)
Add the Maven dependency in IDEA:

```xml
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>
```
Producer

```java
package com.example.kafaka.web;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;
import java.util.Random;
import java.util.concurrent.Future;

/**
 * @author: CSH
 * @description:
 * @create: 2020-05-14 11:00
 **/
public class Producer {

    public static final String TOPIC = "kafka_test"; // topic name

    public static void main(String[] args) {
        Properties p = new Properties();
        // Kafka broker address; for a cluster, separate multiple addresses with commas:
        // p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.20.23.40:9092,192.168.23.77:9092");
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.20.23.40:9092");
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(p);
        try {
            while (true) {
                String msg = "Hello," + new Random().nextInt(100);
                ProducerRecord<String, String> producerRecord = new ProducerRecord<>(TOPIC, msg);
                // send() is asynchronous; get() blocks until the broker acknowledges the record
                Future<RecordMetadata> future = kafkaProducer.send(producerRecord);
                future.get();
                System.out.println("Message sent: " + msg);
                Thread.sleep(2000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            kafkaProducer.close();
        }
    }
}
```
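Note that KafkaProducer.send() is asynchronous: checking Future.isDone() immediately after submitting will almost always report false, so a success log guarded only by isDone() would rarely fire. Blocking with future.get() (or passing a Callback to send()) is the reliable way to confirm delivery. A minimal stdlib-only sketch of the same Future semantics, using CompletableFuture as a stand-in so no broker is needed:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        // Simulate an async "send" that completes after a short network-like delay
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(200); // pretend round-trip to the broker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "ack";
        });

        // Immediately after submission the future is almost certainly not done yet
        System.out.println("isDone right after submit: " + future.isDone());

        // get() blocks until the result is available; only then is isDone() true
        String result = future.get();
        System.out.println("result: " + result + ", isDone: " + future.isDone());
    }
}
```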
Consumer

```java
package com.example.kafaka.web;

import com.example.kafaka.constant.CommonConstant;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Collections;
import java.util.Properties;

/**
 * @author: CSH
 * @description:
 * @create: 2020-05-14 11:09
 **/
public class Consumer {

    public static void main(String[] args) {
        Properties p = new Properties();
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.20.23.40:9092");
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // consumer group id; here the topic name constant is reused as the group id
        p.put(ConsumerConfig.GROUP_ID_CONFIG, CommonConstant.TOPIC);

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(p);
        kafkaConsumer.subscribe(Collections.singletonList(CommonConstant.TOPIC)); // subscribe to the topic

        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            records.forEach(record -> System.out.println(String.format("topic:%s, offset:%d, message:%s",
                    record.topic(), record.offset(), record.value())));
        }
    }
}
```
Result:
Start the producer and then the consumer; the consumed messages are printed to the consumer's console.
This post is from cnblogs (博客园); author: 土木转行的人才. When reposting, please credit the original link.