A worked example: installing Kafka with Docker

Reference blogs:

https://www.jianshu.com/p/06675adc3249

https://www.cnblogs.com/wxx999/p/15844557.html

https://zhuanlan.zhihu.com/p/403783402  (this is the one that finally worked for me; it also covers a cluster setup)

https://blog.csdn.net/qq_40454136/article/details/121097161  (describes practical usage scenarios)


1. Kafka requires ZooKeeper, so install ZooKeeper first

  Check whether the firewall is off; stop it first:

systemctl stop firewalld.service


  1.1 Pull the zookeeper and kafka images


docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka

  Once both images have downloaded:


  1.2 Run the zookeeper container:

    Note: -v /etc/localtime:/etc/localtime mounts the host's clock into the container (read-only via :ro), so the container uses the dev machine's time.

docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime:ro -t wurstmeister/zookeeper

    Check that it is running: docker ps


  1.3 Run the kafka container:

    Notes: KAFKA_ZOOKEEPER_CONNECT=192.168.25.128:2181 must point at the server where ZooKeeper is running.

      Likewise, KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.25.128:9092 must use the Kafka host's real IP. I originally copied 0.0.0.0:9092 from someone else's post: a huge trap, because the broker started fine but clients could not connect, and it wasted a lot of my time before I found the cause. (A quick connectivity check follows the run command below.)

docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=192.168.25.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.25.128:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka
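    To confirm the advertised listener is actually reachable from outside the container (the exact trap described above), here is a minimal sketch using AdminClient from the kafka-clients dependency of section 3.1; the class name is made up for illustration:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;

public class KafkaConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.25.128:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // listTopics() round-trips through the advertised listener;
            // with a wrong advertised address this call typically times out
            System.out.println("topics: " + admin.listTopics().names().get());
        }
    }
}

    If this prints a set of topics (possibly empty), external clients can reach the broker.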

 

 

 

  With both the zookeeper and kafka containers started successfully, docker ps shows:


  1.4 Check the logs of both containers for errors:

      docker logs <container id>


2. Test the connection with the Kafka GUI tool Offset Explorer

Offset Explorer download: https://www.kafkatool.com/download.html

 

  2.1 Install and open Offset Explorer, then fill in the two connection settings (the ZooKeeper host/port, and the Kafka bootstrap address)

   Click Test; it reports success. (The screenshots show a 192.168.130.x address because I switched to a different VM after getting home.)

 

 

After connecting, the tree looks like this:


3. Connecting to Kafka from Java

   Reference blogs: https://blog.csdn.net/LONG_Yi_1994/article/details/120841828

https://wenku.baidu.com/view/e21fb92d7a563c1ec5da50e2524de518964bd39a.html

 

  3.1 Add the Kafka dependencies to pom.xml

        <!-- Kafka-related dependencies -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>2.1.0</version>
            <scope>provided</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.12.0</version>
        </dependency>
        <!-- fastjson is used by the sample code below but was missing from the original list; the version here is an assumption -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.83</version>
        </dependency>

 

 

  3.2 Kafka producer test code

package com.example.demo01.common.kafka;



import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import com.example.demo01.test.Person;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;



public class KafkaProducerTest implements Runnable  {
    // logger instance
    private static final Logger log = LoggerFactory.getLogger(KafkaProducerTest.class);

    private final KafkaProducer<String, String> producer;
    private final String topic;
    private String clientid;
    public KafkaProducerTest(String topicName,String clientid) {
        Properties props = new Properties();
//        props.put("bootstrap.servers", "10.1.11.212:32765,10.1.11.212:32766,10.1.11.212:32767");
        props.put("bootstrap.servers", "192.168.25.128:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<String, String>(props);
        this.topic = topicName;
        this.clientid = clientid;
    }

    @Override
    public void run() {
        int messageNo = 1;
        try {
            for (;;) {
                String messageStr = "Hello, this is message #" + messageNo + " clientid=" + clientid;
                producer.send(new ProducerRecord<String, String>(topic, "Message", messageStr));
                // print every 100th message
                if (messageNo % 100 == 0) {
                    System.out.println("sent: " + messageStr);
                }
                // stop after 1000 messages
                if (messageNo == 1000) {
                    System.out.println("successfully sent " + messageNo + " messages");
                    break;
                }
                messageNo++;
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            producer.close();
        }
    }


    // 1. ======================================== Simple producer test
//    public static void main(String args[]) {
////        KafkaProducerTest test1 = new KafkaProducerTest("logstash-08-04", "clientid1");
////        Thread thread1 = new Thread(test1);
////        thread1.start();
//
//        // ====================== Producer test, minimal form: produce plain strings to a topic ======================
//        Properties p = new Properties();
//        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.25.128:9092"); // kafka address(es), comma-separated
//        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); // serializer class for message keys (implements Serializer)
//        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
//        p.put(ProducerConfig.ACKS_CONFIG, "-1"); // acks=-1: wait for all in-sync replicas to acknowledge
//
//        // create the producer
//        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(p);
//
//        try {
//            // produce 10 messages
//            for (int i = 0; i < 10; i++) {
//                String msg = "test message msg" + i;
//
//                log.info("sending: " + msg);
//
//                // build the record and send it
//                ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>("testTopic001", msg);
//                kafkaProducer.send(producerRecord);
//                try {
//                    Thread.sleep(1000); // 1000 ms between messages
//                } catch (InterruptedException e) {
//                    e.printStackTrace();
//                }
//            }
//        } finally {
//            kafkaProducer.close();
//        }
//
//        log.info("producer finished sending ==========================");
//    }



    // ======================================== 2. Producer test: send object data as a JSON string ========================================
    public static void main(String args[]) {

        List<Person> personList = new ArrayList<Person>();
        personList.add(new Person(1,"jack",18, false));
        personList.add(new Person(2,"paul",28, true));
        personList.add(new Person(3,"zion",20, false));
        personList.add(new Person(4,"jone",30, true));

        String personJsonStr = JSON.toJSONString(personList);




        // ====================== Producer test, minimal form: produce the JSON string to a topic ======================
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.25.128:9092"); // kafka address(es), comma-separated
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); // serializer class for message keys (implements Serializer)
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        p.put(ProducerConfig.ACKS_CONFIG, "-1"); // acks=-1: wait for all in-sync replicas to acknowledge

        // create the producer
        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(p);

        try {
            // build the record and send it
            ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>("testTopic003", personJsonStr);
            kafkaProducer.send(producerRecord); // send() is asynchronous

            try {
                Thread.sleep(1000); // brief pause; close() below also flushes any pending sends
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } finally {
            kafkaProducer.close();
        }

        log.info("producer finished sending ==========================");
    }







}
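  The producer imports com.example.demo01.test.Person, but that class never appears in the post. Here is a minimal sketch of what it could look like, assuming a plain POJO; the field names and the meaning of the boolean are guesses, since only the (int, String, int, boolean) constructor is implied by the calls above:

package com.example.demo01.test;

public class Person {
    private int id;
    private String name;
    private int age;
    private boolean vip; // hypothetical: the original only shows a boolean argument

    public Person() {
        // fastjson needs a no-arg constructor to deserialize back into Person
    }

    public Person(int id, String name, int age, boolean vip) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.vip = vip;
    }

    // getters are what JSON.toJSONString uses to pick up the fields
    public int getId() { return id; }
    public String getName() { return name; }
    public int getAge() { return age; }
    public boolean isVip() { return vip; }
}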

 

 

 

  3.3 Kafka consumer test code:

package com.example.demo01.common.kafka;


import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;


public class KafkaConsumerTest implements Runnable  {

    private static final Logger log = LoggerFactory.getLogger(KafkaConsumerTest.class);

    private final KafkaConsumer<String, String> consumer;
    private ConsumerRecords<String, String> msgList;
    private final String topic;
    private String clientid;
    private static final String GROUPID = "groupA";

    public KafkaConsumerTest(String topicName,String clientid) {

        Properties props = new Properties();
//        props.put("bootstrap.servers", "10.1.11.212:32765,10.1.11.212:32766,10.1.11.212:32767");
        props.put("bootstrap.servers", "192.168.25.128:9092");
        props.put("group.id", GROUPID);
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        this.consumer = new KafkaConsumer<String, String>(props);
        this.topic = topicName;
        this.consumer.subscribe(Arrays.asList(topic));
        this.clientid = clientid;
    }

    @Override
    public void run() {
        int messageNo = 1;
        System.out.println("--------- start consuming ---------");
        try {
            outer:
            for (;;) {
                msgList = consumer.poll(1000);
                if (null != msgList && msgList.count() > 0) {
                    for (ConsumerRecord<String, String> record : msgList) {
                        // print every 100th message (the printed records won't necessarily follow an exact pattern)
                        if (messageNo % 100 == 0) {
                            System.out.println(messageNo + "======= consumed: key = " + record.key() + ", value = " + record.value() + " offset===" + record.offset());
                        }
                        // stop after consuming 1000 messages; a labeled break is needed here,
                        // otherwise only the inner for-each exits and the outer loop polls forever
                        if (messageNo == 1000) {
                            break outer;
                        }
                        messageNo++;
                    }
                } else {
                    Thread.sleep(1000);
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }

    public static void main(String args[]) {
//        KafkaConsumerTest test1 = new KafkaConsumerTest("logstash-08-04", "clientid1");
//        Thread thread1 = new Thread(test1);
//        thread1.start();





        // Consumer test ===========================================================
        Properties p = new Properties();

        // kafka bootstrap address
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.25.128:9092");
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        // session timeout: if the broker receives no heartbeat from this consumer within
        // this window, the consumer is considered dead and its partitions are rebalanced
        p.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");

        p.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // false: offsets are NOT committed automatically
        p.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000"); // auto-commit interval (only relevant when auto commit is on)

        p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // earliest: start from the oldest data when no committed offset exists; latest: only new data; none: throw an error
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "G1"); // consumer group G1 contains just this one consumer

        // create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(p);

        // subscribe; partition assignment is handled automatically
        consumer.subscribe(Collections.singleton("testTopic003"));

        log.info("=========== start consuming ==================");
        while (true) {
            // the argument to poll() is a maximum wait time in ms, not a record count
            ConsumerRecords<String, String> poll = consumer.poll(10);

            for (ConsumerRecord<String, String> record : poll) { // iterate and log
                log.info(record.offset() + "\t" + record.key() + "\t" + record.value());
                log.info("================================== consuming data ==================================");
            }
        }






    }



}
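  Two caveats about the main() consumer above. First, ENABLE_AUTO_COMMIT_CONFIG is "false" but the loop never commits, so a restarted "G1" consumer re-reads everything from earliest. Second, the value is the JSON string produced in 3.2 and still has to be mapped back to objects. Here is a sketch of the poll loop with a manual commit and fastjson deserialization; it assumes the hypothetical Person class above, plus imports for com.alibaba.fastjson.JSON and java.util.List:

        while (true) {
            // poll's argument is a maximum wait time, not a record count;
            // poll(Duration) is available from kafka-clients 2.0 on and replaces the deprecated poll(long)
            ConsumerRecords<String, String> records = consumer.poll(java.time.Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                // map the JSON array string back into objects
                List<Person> people = JSON.parseArray(record.value(), Person.class);
                log.info("offset=" + record.offset() + " persons=" + people.size());
            }
            // needed because enable.auto.commit=false; committing after the batch gives at-least-once delivery
            consumer.commitSync();
        }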

 

 

 

 

 

  The test succeeds: the producer writes data to Kafka, and the consumer reads it back.