Kafka standalone installation and configuration

1. Download Kafka

wget https://archive.apache.org/dist/kafka/0.8.2.1/kafka_2.9.2-0.8.2.1.tgz

2. Extract

tar -zxf kafka_2.9.2-0.8.2.1.tgz

Create a symlink

ln -s /opt/workspace/apps/kafka_2.9.2-0.8.2.1 /opt/workspace/kafka
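
The startup script in step 4 calls zookeeper-server-start.sh and kafka-server-start.sh without an explicit path, so it is convenient to put Kafka's bin directory on the PATH first (assuming the symlink above; adjust the path if yours differs):

export PATH=$PATH:/opt/workspace/kafka/bin

Add the same line to ~/.bashrc (or the profile of the user that runs Kafka) to make it permanent.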

3. Edit the configuration files

(1) Kafka configuration

cd /opt/workspace/kafka/config

vim server.properties

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
# Set this to the host's IP address; otherwise the broker hands clients its hostname, which clients may not be able to resolve
host.name=10.205.28.4

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Kafka data (log segment) directory
log.dirs=/var/log/kafka

# ZooKeeper connection address and port
zookeeper.connect=localhost:2181

(2) ZooKeeper configuration

vim zookeeper.properties
# the directory where the snapshot is stored.
dataDir=/var/zookeeper
dataLogDir=/var/log/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=100

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5

For a detailed explanation of these parameters, see: http://nanchengru.com/2015/04/zookeeper%E5%AE%89%E8%A3%85%E9%85%8D%E7%BD%AE%E4%BB%A5%E5%8F%8A%E5%91%BD%E4%BB%A4/
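
Before starting anything, make sure the data directories configured above (and the log files written by the startup script in the next step) exist and are writable by the user running the services. ZooKeeper and Kafka can usually create them on first start, but creating them up front avoids permission surprises:

mkdir -p /var/log/kafka /var/zookeeper /var/log/zookeeper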

4. Start ZooKeeper and Kafka

I wrote a small script for this:

#!/bin/sh
# Run this from /opt/workspace/kafka/bin (the ../config paths below are relative to it)

# Start ZooKeeper
zookeeper-server-start.sh ../config/zookeeper.properties > /var/log/zookeeper/zk-server-start.log 2>&1 &

# Wait a few seconds for ZooKeeper to come up before starting Kafka
sleep 3

# Start Kafka
kafka-server-start.sh ../config/server.properties > /var/log/kafka/kafka-server-start.log 2>&1 &
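
After running the script, a quick sanity check is to look at the JVM processes and ask ZooKeeper which brokers have registered; the broker id should be the 0 configured earlier. (On some Kafka versions zookeeper-shell.sh only works interactively, in which case type the ls command at its prompt.)

jps                                                  # should list QuorumPeerMain (ZooKeeper) and Kafka
zookeeper-shell.sh localhost:2181 ls /brokers/ids    # should end with [0]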

5. Using Kafka

Step 3: Create a topic (create a topic named test)

Let's create a topic named "test" with a single partition and only one replica:

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

We can now see that topic if we run the list topic command:

> bin/kafka-topics.sh --list --zookeeper localhost:2181
test

Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
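
To double-check how the partition and replica were assigned, the same kafka-topics.sh tool can also describe the topic (an extra check, not part of the original quickstart):

> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
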
Step 4: Send some messages

Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
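
As described above, the producer can also read messages from a file instead of the console, e.g. by piping it in (messages.txt here is just a placeholder file name):

> cat messages.txt | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
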
Step 5: Start a consumer

Kafka also has a command line consumer that will dump out messages to standard output.

> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message

Lessons learned:

1. The producer is started with Kafka's own port (9092, via --broker-list), while the consumer is started with ZooKeeper's port (2181, via --zookeeper);

2. The topic must be created before it can be used (unless the broker is configured to auto-create topics, as mentioned in the quickstart above);

3. A topic's metadata (partitions and replica assignment) is kept in ZooKeeper, while the message data itself is stored as log files under the broker's log.dirs (/var/log/kafka here); see the quick check below.
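
A quick way to see both halves of point 3 (the znode path and the partition directory name test-0 follow the standard layout; run the shell command interactively if your version requires it):

bin/zookeeper-shell.sh localhost:2181 ls /brokers/topics    # topic metadata registered in ZooKeeper
ls -l /var/log/kafka/test-0                                 # the partition's log segment files on the broker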

References:

http://kafka.apache.org/documentation.html#quickstart

http://www.tuicool.com/articles/uQzYfq
