Kafka REST Proxy
I. Deploying the Kafka REST Proxy (Kafka has SASL enabled in this example)
1. Create kafka_rest_jaas.conf under /etc/kafka-rest:
Client {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin@2021";
};

KafkaClient {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};
Notes:
- Client: the SASL credentials used to access ZooKeeper.
- KafkaClient: the SASL credentials used to access the Kafka brokers.
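These SCRAM credentials must already exist on the cluster. As a sketch only (the ZooKeeper address is taken from this deployment; adjust the password to match the JAAS file), the admin credential could have been created with the stock kafka-configs tool:

./kafka-configs.sh --zookeeper 10.10.5.95:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=admin-secret]' \
  --entity-type users --entity-name admin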
2. Copy krb5.conf from /etc into /etc/kafka-rest. Example contents:
[libdefaults]
    # "dns_canonicalize_hostname" and "rdns" are better set to false for improved security.
    # If set to true, the canonicalization mechanism performed by the Kerberos client may
    # allow service impersonation; the consequence is similar to conducting TLS certificate
    # verification without checking the host name.
    # If left unspecified, the two parameters default to true, which is less secure.
    dns_canonicalize_hostname = false
    rdns = false
    # default_realm = EXAMPLE.COM

[realms]
    # EXAMPLE.COM = {
    #     kdc = kerberos.example.com
    #     admin_server = kerberos.example.com
    # }

[logging]
    kdc = FILE:/var/log/krb5/krb5kdc.log
    admin_server = FILE:/var/log/krb5/kadmind.log
    default = SYSLOG:NOTICE:DAEMON
3. Edit the kafka-rest.properties configuration file:
#id=kafka-rest-test-server
#schema.registry.url=http://localhost:8081
#zookeeper.connect=localhost:2181
#bootstrap.servers=PLAINTEXT://localhost:9092
id=95
host.name=10.10.5.95
zookeeper.connect=10.10.5.95:2181,10.10.5.96:2181,10.10.5.97:2181
bootstrap.servers=SASL_PLAINTEXT://10.10.5.95:9092,SASL_PLAINTEXT://10.10.5.96:9092,SASL_PLAINTEXT://10.10.5.97:9092
client.security.protocol=SASL_PLAINTEXT
client.sasl.mechanism=SCRAM-SHA-512
client.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";

# Configure interceptor classes for sending consumer and producer metrics to Confluent Control Center
# Make sure that monitoring-interceptors-<version>.jar is on the Java class path
#consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
#producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
Parameter descriptions:
- id: a unique ID for this REST server instance.
- host.name: the IP address of this server.
- zookeeper.connect: the ZooKeeper cluster.
- bootstrap.servers: the Kafka cluster.
- client.security.protocol: the protocol used to communicate with the brokers.
- client.sasl.mechanism: the SASL mechanism used for client connections.
- client.sasl.jaas.config: JAAS login context parameters for SASL connections, in the format used by JAAS configuration files.
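If the proxy's embedded producer and consumer ever need different credentials, the Confluent documentation also describes producer.- and consumer.-prefixed variants of these settings, which take precedence over the client. prefix. A hypothetical example (these usernames are illustrative, not from this deployment):

producer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="producer-user" password="producer-secret";
consumer.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="consumer-user" password="consumer-secret";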
4. Pass the names of the JAAS file and the Kerberos configuration file to the REST startup script via an environment variable: edit kafka-rest-start under bin and add the following line above the existing configuration:
export KAFKAREST_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/confluent-4.1.1/etc/kafka-rest/kafka_rest_jaas.conf -Djava.security.krb5.conf=/usr/local/kafka/confluent-4.1.1/etc/kafka-rest/krb5.conf"
5. Start command:
nohup ./kafka-rest-start ../etc/kafka-rest/kafka-rest.properties > nohupre.out 2>&1 &
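To confirm the proxy came up cleanly, a quick sanity check using the file name from the start command:

tail -n 50 nohupre.out        # look for listen-port/startup messages or stack traces
ps -ef | grep kafka-rest      # confirm the Java process is running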
6. Verification
Open http://10.10.5.95:8082/topics in a browser.
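The same check works from the command line with curl; a JSON array of topic names of roughly this shape should come back (the exact contents depend on the cluster):

curl http://10.10.5.95:8082/topics
# e.g. ["__consumer_offsets","jsontest"]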
II. Sending messages via the API
1. Send a message. This example uses JSON; for other formats, see https://docs.confluent.io/4.1.1/kafka-rest/docs/intro.html
Run:
curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" --data '{"records":[{"value":{"foo":"bar"}}]}' "http://localhost:8082/topics/jsontest"
records: the JSON data being sent, i.e. the message payload.
jsontest: the target topic.
Response:
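The produce endpoint answers with per-record offsets; an illustrative body per the v2 API (actual partition and offset values will differ):

{"offsets":[{"partition":0,"offset":0,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":null}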
III. Receiving messages via the API
1. Create a consumer instance for this topic:
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"name": "my_consumer_instance", "format": "json", "auto.offset.reset": "earliest"}' "http://localhost:8082/consumers/my_json_consumer"
my_json_consumer: the consumer group name.
my_consumer_instance: the consumer instance name.
Response:
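Creating the instance returns its id and the base URI that later calls are addressed to; illustratively:

{"instance_id":"my_consumer_instance","base_uri":"http://localhost:8082/consumers/my_json_consumer/instances/my_consumer_instance"}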
2. Subscribe the consumer to the topic:
curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"topics":["jsontest"]}' http://localhost:8082/consumers/my_json_consumer/instances/my_consumer_instance/subscription
This call returns no response body (HTTP 204 No Content on success).
3. Fetch records:
curl -X GET -H "Accept: application/vnd.kafka.json.v2+json" http://localhost:8082/consumers/my_json_consumer/instances/my_consumer_instance/records
Response:
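Records come back as a JSON array; given the message produced earlier, the body would look roughly like the following. Note that the first poll can return an empty array [] while the consumer is still joining the group; simply repeat the request.

[{"topic":"jsontest","key":null,"value":{"foo":"bar"},"partition":0,"offset":0}]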
4. Delete the consumer instance:
curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" http://localhost:8082/consumers/my_json_consumer/instances/my_consumer_instance
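A successful DELETE likewise returns no body. For repeated testing, the whole cycle above can be scripted; a minimal sketch using only the endpoints already shown (host, group, and instance names as in this example):

#!/bin/sh
# Create the consumer instance, subscribe it, poll once, then clean up.
BASE=http://localhost:8082
GROUP=my_json_consumer
NAME=my_consumer_instance

curl -s -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data "{\"name\": \"$NAME\", \"format\": \"json\", \"auto.offset.reset\": \"earliest\"}" \
  "$BASE/consumers/$GROUP"

curl -s -X POST -H "Content-Type: application/vnd.kafka.v2+json" \
  --data '{"topics":["jsontest"]}' \
  "$BASE/consumers/$GROUP/instances/$NAME/subscription"

curl -s -X GET -H "Accept: application/vnd.kafka.json.v2+json" \
  "$BASE/consumers/$GROUP/instances/$NAME/records"

curl -s -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" \
  "$BASE/consumers/$GROUP/instances/$NAME"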