How to fix Java failing to reach Kafka: setting the Kafka broker's advertised.listeners property
Posted on 2018-04-21 16:43 by 刚泡
I created a Spring Boot project that integrates Kafka, but messages sent to Kafka never went through. The project configuration is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.lodestone</groupId>
<artifactId>lodestone-kafka</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>lodestone-kafka</name>
<description>Lodestone kafka</description>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.12.RELEASE</version>
<relativePath />
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.47</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-api -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/log4j-over-slf4j -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>log4j-over-slf4j</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
application.yml configuration:
spring:
  kafka:
    bootstrap-servers:
      - 192.168.52.131:9092
    consumer:
      auto-offset-reset: earliest
      group-id: console-consumer-53989
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
logging:
  level:
    root: DEBUG
    org:
      springframework: DEBUG
      mybatis: DEBUG
Producer code:
package com.lodestone.kafka.producer;

import java.util.Date;
import java.util.UUID;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import com.alibaba.fastjson.JSON;
import com.lodestone.kafka.message.LodestoneMessage;

@Component
public class Sender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage() {
        // Build a simple message and send it to the "test" topic as a JSON string.
        LodestoneMessage message = new LodestoneMessage();
        message.setId(UUID.randomUUID().toString().replaceAll("-", ""));
        message.setMsg(UUID.randomUUID().toString());
        message.setSendTime(new Date());
        kafkaTemplate.send("test", JSON.toJSONString(message));
    }
}
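The send() call above is fire-and-forget, so a broker connection problem only shows up later in the client's debug log. As a rough sketch (the CallbackSender class name is my own; the "test" topic and the String serializers match application.yml), the ListenableFuture returned by KafkaTemplate.send() can be given a callback so that failures are reported at send time:
package com.lodestone.kafka.producer;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Component
public class CallbackSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendWithCallback(String payload) {
        // send() returns a ListenableFuture; the callback reports the outcome
        // instead of leaving errors buried in the debug log.
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send("test", payload);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                System.out.println("Sent to partition " + result.getRecordMetadata().partition()
                        + ", offset " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                System.err.println("Send failed: " + ex.getMessage());
            }
        });
    }
}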
Message class definition:
package com.lodestone.kafka.message;
import java.io.Serializable;
import java.util.Date;
public class LodestoneMessage implements Serializable {
private static final long serialVersionUID = -6847574917429814430L;
private String id;
private String msg;
private Date sendTime;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getMsg() {
return msg;
}
public void setMsg(String msg) {
this.msg = msg;
}
public Date getSendTime() {
return sendTime;
}
public void setSendTime(Date sendTime) {
this.sendTime = sendTime;
}
}
Spring Boot application startup class:
package com.lodestone.kafka;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import com.lodestone.kafka.producer.Sender;
@SpringBootApplication
public class LodestoneKafkaApplication {
public static void main(String[] args) throws InterruptedException {
ApplicationContext app = SpringApplication.run(LodestoneKafkaApplication.class, args);
// Test: send a few messages, one every 500 ms
for(int i=0; i<5; i++) {
Sender sender = app.getBean(Sender.class);
sender.sendMessage();
Thread.sleep(500);
}
}
}
After raising the application's log level to DEBUG, the following error appeared on startup:
2018-04-21 16:05:10.755 DEBUG 10272 --- [ad | producer-1] o.apache.kafka.common.network.Selector : Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_111]
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) ~[na:1.8.0_111]
at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51) ~[kafka-clients-0.10.1.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73) ~[kafka-clients-0.10.1.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323) [kafka-clients-0.10.1.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:291) [kafka-clients-0.10.1.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260) [kafka-clients-0.10.1.1.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236) [kafka-clients-0.10.1.1.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:148) [kafka-clients-0.10.1.1.jar:na]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_111]
The first line of the error shows: Connection with localhost/127.0.0.1 disconnected.
But application.yml clearly configures the Kafka server address as 192.168.52.131:9092, yet the client used localhost/127.0.0.1 for the actual connection, which is why it could not reach the Kafka server.
After searching on Baidu, I learned that the broker's advertised.listeners property needs to be set when configuring Kafka.
The property is described in config/server.properties as follows:
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
In other words, this hostname and port are what the broker advertises to producers and consumers. If advertised.listeners is not set, the value of listeners is used; if listeners is not configured either, the broker falls back to the value returned by java.net.InetAddress.getCanonicalHostName(), which on a typical IPv4 setup often ends up being localhost.
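To see what that fallback resolves to, one can print the canonical host name directly; this is just a quick illustration (the class name is my own), and it only tells you what the broker would advertise if you run it on the broker host:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class CanonicalHostNameCheck {
    public static void main(String[] args) throws UnknownHostException {
        // Same lookup the broker falls back to when neither advertised.listeners
        // nor listeners is set; run this on the broker host to see the advertised name.
        System.out.println(InetAddress.getLocalHost().getCanonicalHostName());
    }
}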
"PLAINTEXT"表示协议,可选的值有PLAINTEXT和SSL,hostname可以指定IP地址,也可以用"0.0.0.0"表示对所有的网络接口有效,如果hostname为空表示只对默认的网络接口有效
也就是说如果你没有配置advertised.listeners,就使用listeners的配置通告给消息的生产者和消费者,这个过程是在生产者和消费者获取源数据(metadata)。
因此重新设置advertised.listeners为如下:
advertised.listeners=PLAINTEXT://192.168.52.131:9092
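Whether clients now see the right address can be checked from the client side, because the leader host in the topic metadata comes from advertised.listeners rather than from bootstrap-servers. A minimal sketch (class name is my own; it reuses the bootstrap address and topic from this post):
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringSerializer;

public class AdvertisedAddressCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.52.131:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() triggers a metadata request; the leader host/port printed
            // below come from the broker's advertised.listeners (assumes the partition
            // currently has a live leader).
            for (PartitionInfo info : producer.partitionsFor("test")) {
                System.out.println("partition " + info.partition()
                        + " -> leader " + info.leader().host() + ":" + info.leader().port());
            }
        }
    }
}
Before the fix this prints a localhost leader address even though bootstrap-servers points at 192.168.52.131, which is exactly the mismatch seen in the debug log above.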
Note that if the Kafka cluster has more than one node, each node must be configured with its own actual hostname and port, for example as sketched below.
Once every node is configured and the brokers restarted, restart the Spring Boot application; it now connects to the Kafka server and sends messages successfully.
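(The 192.168.52.132 and 192.168.52.133 addresses below are made up purely to illustrate the per-node setting.)
# config/server.properties on broker 1 (192.168.52.131)
advertised.listeners=PLAINTEXT://192.168.52.131:9092
# config/server.properties on broker 2 (hypothetical 192.168.52.132)
advertised.listeners=PLAINTEXT://192.168.52.132:9092
# config/server.properties on broker 3 (hypothetical 192.168.52.133)
advertised.listeners=PLAINTEXT://192.168.52.133:9092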
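To verify the round trip end to end, a minimal listener can be added to the same project; this is only a sketch (the package and class name are my own), reusing the "test" topic plus the group-id and String deserializers already present in application.yml:
package com.lodestone.kafka.consumer;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class Receiver {

    // Consumes from the same "test" topic the Sender writes to; Spring Boot's
    // Kafka auto-configuration supplies the listener container from application.yml.
    @KafkaListener(topics = "test")
    public void receive(String payload) {
        System.out.println("Received: " + payload);
    }
}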