Building a Single-Node Kafka Cluster on Alibaba Cloud

Introduction

Setting up a single-node Kafka cluster environment on an Alibaba Cloud ECS server involves the following steps:

  • Server environment
  • Installing the JDK
  • Installing ZooKeeper
  • Installing Kafka

1. Server Environment

  • CPU: 1 core
  • Memory: 2048 MB (I/O optimized), 1 Mbps bandwidth
  • OS: Ubuntu 14.04 64-bit 
    The server performance actually feels quite good (and no, this is not an ad for Alibaba). 
    After sending some test data into Kafka, the performance graph looked like this: 
    (Kafka performance graph)

2. Installing the JDK

To run Java programs, a JDK must be installed. I used JDK 1.7. 
The basic steps are as follows:

  1. Download the JDK tar.gz package from the official JDK site;
  2. Upload the tar package to /opt/JDK on the server;
  3. Extract the tar package;
  4. Edit the /etc/profile file and append the lines below (note: on macOS you need sudo su to operate as root)
 cd /etc
 vim profile
 Then add the following at the end of the file:
 export JAVA_HOME=/opt/JDK/jdk1.7.0_79
 export PATH=$JAVA_HOME/bin:$PATH
 export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

The modified profile file is as follows:

# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).

if [ "$PS1" ]; then
  if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
    # The file bash.bashrc already sets the default PS1.
    # PS1='\h:\w\$ '
    if [ -f /etc/bash.bashrc ]; then
      . /etc/bash.bashrc
    fi
  else
    if [ "`id -u`" -eq 0 ]; then
      PS1='# '
    else
      PS1='$ '
    fi
  fi
fi

# The default umask is now handled by pam_umask.
# See pam_umask(8) and /etc/login.defs.

if [ -d /etc/profile.d ]; then
  for i in /etc/profile.d/*.sh; do
    if [ -r $i ]; then
      . $i
    fi
  done
  unset i
fi

export JAVA_HOME=/opt/JDK/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
  1. Press ESC, type ":wq", and hit Enter to save;
  2. Run java -version to check whether it took effect (if it hasn't, you apparently need to log in again; see the quick check sketched below).
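
To pick up the change in the current shell without re-logging in, something like the following should work (a minimal sketch; the JDK path is the one assumed above):

source /etc/profile
echo $JAVA_HOME   # should print /opt/JDK/jdk1.7.0_79
java -version     # should report java version "1.7.0_79"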

3. Installing ZooKeeper

A Kafka cluster relies on ZooKeeper for leader election and for storing topic metadata, so before running Kafka we also need to set up a ZooKeeper environment. 
The ZooKeeper setup steps are as follows:

  1. Download the tar.gz package from the official site;
  2. Upload the tar.gz package to /opt/zookeeper on the Alibaba Cloud server;
  3. Run tar -zxvf *.tar.gz to extract it;
  4. Enter the conf directory inside the extracted ZooKeeper directory;
  5. Rename the zoo_sample.cfg file to zoo.cfg (or copy it, to keep a backup);
  6. Modify zoo.cfg as needed; the defaults also work (see the sketch after the commands below);
  7. Start ZooKeeper.

The concrete commands for steps 3-7 are as follows:

cd /opt/zookeeper
tar -zxvf zookeeper-3.4.6.tar.gz
cd zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
cd ..
# start ZooKeeper
./bin/zkServer.sh start
# stop ZooKeeper
./bin/zkServer.sh stop
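
For reference, the defaults in zoo_sample.cfg of the 3.4.6 release look roughly like this; a single node runs fine with them unchanged, though you may want to move dataDir off /tmp:

# milliseconds per tick, the basic ZooKeeper time unit
tickTime=2000
# ticks allowed for followers to connect and sync to the leader
initLimit=10
# ticks allowed between a request and an acknowledgement
syncLimit=5
# where snapshots are stored (better kept outside /tmp for real use)
dataDir=/tmp/zookeeper
# port clients connect to
clientPort=2181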

Afterwards you can run ps -ef|grep zookeeper to check whether ZooKeeper started successfully.
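
ZooKeeper can also report its own state; a minimal sketch (the second line assumes nc is installed):

./bin/zkServer.sh status        # a single node should report "Mode: standalone"
echo ruok | nc localhost 2181   # a healthy server replies "imok"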

4. Installing Kafka

Having survived the three steps above, we can finally build our single-node Kafka cluster. (Yes, I'm calling a single machine a cluster; if you don't like it, come fight me QAQ) 
The concrete Kafka steps are as follows:

  1. Download the Kafka package; the one I used is kafka_2.11-0.10.1.0.tgz, which can be found on the official site;
  2. Upload the Kafka package to /opt/kafka on the Alibaba Cloud server;
  3. Extract the Kafka package;
  4. Enter the config directory and modify the server.properties file; 
    the main changes are:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
port=9092
host.name=<Alibaba Cloud private/internal IP>
advertised.host.name=<Alibaba Cloud public/mapped IP>
The modified configuration file is as follows:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
port=9092
host.name=<Alibaba Cloud private/internal IP>
advertised.host.name=<Alibaba Cloud public/mapped IP>

# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
  5. Start Kafka.
nohup ./bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &

6. Verify that Kafka started successfully: 
run jps and check whether a process named Kafka is listed. A quick end-to-end smoke test is sketched below.
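
The console clients bundled with Kafka make a handy smoke test; a sketch assuming the broker is reachable on localhost:9092, ZooKeeper on localhost:2181, and a hypothetical topic named "test":

# create a test topic (single broker, so the replication factor must be 1)
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# send one message
echo "hello kafka" | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# read it back (Ctrl+C to exit)
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning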

5. Pitfalls I Hit

  1. Configure the hostname, port, and other options 
    Bug: ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 1 larger than available brokers: 0 
    The message says it plainly: fewer than 1 broker is available, which means the Kafka process was never started or has crashed. 
    Fix: 1. run jps and check for a Kafka process; 2. restart Kafka.
  2. Cannot bind to a given address 
    Bug: Socket server failed to bind to xxx.xxx.xxx.xxx:9092: Cannot assign requested address. 
    When configuring Kafka on ECS, never bind to the external address, e.g. 139.225.155.153 (made up). That bind will fail, because inside Alibaba Cloud the instance only sees its private network and the public address is a NAT mapping. Configure 127.0.0.1 (which resolves to the local machine) or the private IP for binding, and use the external mapped address only for what you advertise to outside clients.
  3. Hostname misconfiguration 
    Bug: java.net.UnknownHostException: hostname: hostname
Caused by: java.net.UnknownHostException: iZuf6gsbgu35znsy7ve3s6x: iZuf6gsbgu35znsy7ve3s6x
    at java.net.InetAddress.getLocalHost(InetAddress.java:1475)
    at kafka.network.RequestChannel$.<init>(RequestChannel.scala:40)
    at kafka.network.RequestChannel$.<clinit>(RequestChannel.scala)
    ... 10 more

Configure the /etc/hostname and /etc/hosts files (a minimal hosts entry is also sketched below); for the concrete steps see: 
https://my.oschina.net/heguangdong/blog/13678
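
A minimal fix is to map the instance hostname to a local address in /etc/hosts (a sketch using the hostname from the error above; substitute your own, as printed by the hostname command):

127.0.0.1   localhost
127.0.0.1   iZuf6gsbgu35znsy7ve3s6x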

  4. External callers cannot consume from Kafka

21:45:58,162 DEBUG Selector:365 - Connection with /168.221.153.152 disconnected
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
    at java.lang.Thread.run(Thread.java:745)
21:45:58,162 DEBUG NetworkClient:463 - Node -1 disconnected.

This problem is caused by a misconfigured config/server.properties file; a sketch of the relevant settings follows, and for details see: 
1. http://stackoverflow.com/questions/33541114/kafka-0-8-2-2-unable-to-publish-messages 
2. http://www.cnblogs.com/snifferhu/p/5102629.html 
3. http://www.tuicool.com/articles/n632MvV 
4. "Several problems encountered when installing Kafka"
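
On Kafka 0.10.x, an alternative to the older host.name settings used above is the listeners/advertised.listeners pair; a sketch for server.properties (the public IP is a placeholder, substitute your instance's mapped address):

# bind on all local interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# address that clients outside the VPC should connect to
advertised.listeners=PLAINTEXT://<ECS-public-IP>:9092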

6. Other Notes

For the full details of the Kafka configuration file, how to build a multi-node Kafka cluster, common Kafka commands, and how to write a simple Kafka demo and Kafka Streams examples, see the other posts in this Kafka series.

posted on 2017-08-15 06:53 by adolfmc