ZooKeeper and Kafka Cluster Deployment

ZooKeeper is a distributed, open-source coordination service for distributed applications and an open-source implementation of Google's Chubby. It acts as the manager of a cluster, monitoring the state of each node and taking the appropriate next action based on the feedback the nodes report. It exposes this high-performance, stable functionality to users through a simple, easy-to-use interface.
Installation preparation: zookeeper-3.4.13 from https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/ , deployed on two nodes
Download the package

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz

Install the Java runtime dependency

yum install java -y
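
ZooKeeper and Kafka both run on the JVM, so it is worth confirming a JDK is actually available before going further (the exact version string depends on what the yum repository installs; Java 8 covers both ZooKeeper 3.4 and Kafka 2.0):

java -version
# expect something like: openjdk version "1.8.0_..."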

Configure ZooKeeper

node1
# these commands assume the tarball was downloaded to /data and that /data is the working directory
cd /data
tar xf zookeeper-3.4.13.tar.gz
ln -s zookeeper-3.4.13 zookeeper
# create the data directory
mkdir -p /data/zookeeper/data
# the distribution only ships zoo_sample.cfg; create zoo.cfg from it, then edit
cp zookeeper/conf/zoo_sample.cfg zookeeper/conf/zoo.cfg
# effective configuration (comments and blank lines stripped)
egrep -v "#|^$" zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
clientPort=2181
server.0=192.168.111.142:2888:3888
server.1=192.168.111.143:2888:3888
echo 0 > /data/zookeeper/data/myid

node2
# same steps as node1, run from /data
cd /data
tar xf zookeeper-3.4.13.tar.gz
ln -s zookeeper-3.4.13 zookeeper
# create the data directory
mkdir -p /data/zookeeper/data
# create zoo.cfg from the shipped sample, then edit
cp zookeeper/conf/zoo_sample.cfg zookeeper/conf/zoo.cfg
# effective configuration (comments and blank lines stripped)
egrep -v "#|^$" zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
clientPort=2181
server.0=192.168.111.142:2888:3888
server.1=192.168.111.143:2888:3888
echo 1 > /data/zookeeper/data/myid

Note: the number after "server." must match the contents of that node's myid file.
    2888 is the port followers use to connect to the leader (peer communication); clients connect on 2181.
    3888 is used for leader election.
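
Before starting the ensemble, a quick sanity check that the two nodes differ only in their myid (paths assume the layout created above):

# run on each node
cat /data/zookeeper/data/myid                 # must print 0 on node1 and 1 on node2
egrep -v "#|^$" /data/zookeeper/conf/zoo.cfg  # the server.N lines must be identical on both nodes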

Start ZooKeeper (run on both nodes)

/data/zookeeper/bin/zkServer.sh start

Check the status

[root@web2 ~]# /data/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@web1 ~]# /data/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Mode: leader
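
Beyond zkServer.sh status, the ensemble can be probed directly; a minimal check, assuming nc is installed and the four-letter-word commands are not restricted:

# listening ports: 2181 and 3888 on both nodes, 2888 only on the current leader
ss -tlnp | grep -E '2181|2888|3888'
# "imok" means the server is up and serving requests
echo ruok | nc 192.168.111.142 2181
# connect with the bundled CLI client and list the root znode
/data/zookeeper/bin/zkCli.sh -server 192.168.111.142:2181 ls /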

 

Kafka is an open-source stream-processing platform developed by the Apache Software Foundation and written in Scala and Java. The project's goal is to provide a unified, high-throughput, low-latency platform for handling real-time data. Its persistence layer is essentially a large-scale publish/subscribe message queue built on a distributed commit log, which makes it very valuable as enterprise-grade infrastructure for processing streaming data. In addition, Kafka can connect to external systems (for data import/export) via Kafka Connect and provides Kafka Streams, a Java stream-processing library.
Package: kafka_2.12-2.0.0 from https://kafka.apache.org/downloads , deployed on two nodes
Download the package

# note: closer.cgi returns a mirror-picker page rather than the tarball itself; if the download does not
# resolve to the .tgz, fetch the release directly from the Apache archive:
# https://archive.apache.org/dist/kafka/2.0.0/kafka_2.12-2.0.0.tgz
wget https://www.apache.org/dyn/closer.cgi?path=/kafka/2.0.0/kafka_2.12-2.0.0.tgz

Configure Kafka

node1
# as with ZooKeeper, these commands assume /data is the working directory
cd /data
tar xf kafka_2.12-2.0.0.tgz
ln -s kafka_2.12-2.0.0 kafka
# create the message log (data) directory
mkdir -pv kafka/kafka-logs
# effective configuration (comments and blank lines stripped)
egrep -v "#|^$" kafka/config/server.properties
broker.id=0
listeners=PLAINTEXT://192.168.111.142:9092
advertised.listeners=PLAINTEXT://192.168.111.142:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.111.142:2181,192.168.111.143:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

node2
# same layout as node1; /data is the working directory
cd /data
tar xf kafka_2.12-2.0.0.tgz
ln -s kafka_2.12-2.0.0 kafka
# create the message log (data) directory
mkdir -pv kafka/kafka-logs
# effective configuration (comments and blank lines stripped)
egrep -v "#|^$" kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.111.143:9092
advertised.listeners=PLAINTEXT://192.168.111.143:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/kafka-logs
num.partitions=10
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.111.142:2181,192.168.111.143:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

Note: for a basic setup only the following options are normally changed (tune the rest as needed); a quick way to compare the two brokers is shown below.
broker.id              must be unique per broker
listeners
advertised.listeners
log.dirs
zookeeper.connect
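
To double-check that the two brokers differ only in the per-node options above, the effective configs can be compared; a minimal sketch, assuming SSH access from node1 to node2 and the paths used in this post:

diff <(egrep -v "#|^$" /data/kafka/config/server.properties) \
     <(ssh 192.168.111.143 'egrep -v "#|^$" /data/kafka/config/server.properties')
# expected differences: broker.id, listeners, advertised.listeners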

Start Kafka

# run from the kafka directory on both nodes
cd /data/kafka
nohup ./bin/kafka-server-start.sh config/server.properties &
tail -f nohup.out
# a log line ending in "started" indicates the broker came up normally
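
With both brokers up, the cluster can be exercised end to end with the scripts that ship with Kafka 2.0 (run from /data/kafka, using the addresses above; in 2.0 topics are still administered through ZooKeeper):

# create and inspect a replicated test topic
./bin/kafka-topics.sh --create --zookeeper 192.168.111.142:2181 --replication-factor 2 --partitions 3 --topic test
./bin/kafka-topics.sh --describe --zookeeper 192.168.111.142:2181 --topic test
# type a few messages into the producer on one broker ...
./bin/kafka-console-producer.sh --broker-list 192.168.111.142:9092 --topic test
# ... and read them back through the other broker
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.111.143:9092 --topic test --from-beginning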

  
