A description of Kafka's server.properties file

Lines starting with # are comments from the original config file; the plain paragraphs that follow are explanatory notes (if anything is wrong, please leave a comment below).

Version: based on Kafka 2.4.0 (http://archive.apache.org/dist/kafka/2.4.0/kafka_2.11-2.4.0.tgz)

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

The header above basically says that the ASF has approved version 2.0 of the Apache License, which supports the goal of delivering reliable, long-lived software products through collaborative open-source development.

Unless explicitly stated otherwise, all software produced by the ASF is implicitly licensed under the Apache License, Version 2.0. See http://www.apache.org/licenses/LICENSE-2.0 for the full text.

Not much more to say here...

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.

In a Kafka cluster, every broker has a unique id that distinguishes it from the others. On startup, Kafka creates an ephemeral node named after the current broker's id under the /brokers/ids path in ZooKeeper; Kafka's health check relies on this node.

When a broker goes offline, its ephemeral node is deleted automatically; other brokers and clients determine whether a broker is healthy by checking whether its id still exists under /brokers/ids.

The broker id can be set via the broker.id parameter in config/server.properties, where it defaults to -1. A Kafka broker can only start normally when its id is greater than or equal to 0. However, config/server.properties is not the only place to set it: the id can also come from the meta.properties file or from Kafka's automatic broker id generation.
————————————————
Copyright notice: the explanation above is from an original article by CSDN blogger 「朱小厮」, licensed under CC 4.0 BY-SA; reproduction must include the original source link and this notice.
Original link: https://blog.csdn.net/u013256816/article/details/80546337

broker.id=0
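
For example, in a three-broker cluster each broker's config/server.properties needs its own unique id. A minimal sketch (the host names kafka1/kafka2/kafka3 are hypothetical):

# on kafka1, in config/server.properties
broker.id=1
# on kafka2, in config/server.properties
broker.id=2
# on kafka3, in config/server.properties
broker.id=3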

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#    FORMAT:
#    listeners = listener_name://host_name:port
#    EXAMPLE:
#   listeners = PLAINTEXT://your.host.name:9092

#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
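
A common pattern is to bind the socket server to all local interfaces while advertising a name that clients can actually resolve. A minimal sketch, assuming a hypothetical hostname kafka1.example.com:

# accept connections on every local interface
listeners=PLAINTEXT://0.0.0.0:9092
# but tell producers and consumers to connect via a resolvable hostname
advertised.listeners=PLAINTEXT://kafka1.example.com:9092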

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network

The number of threads handling network requests; the default is 3.

num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O

The number of threads performing disk I/O; the default is 8.

num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server

The size of the send buffer the socket server uses for outgoing data; 102400 bytes is 100 KB.

socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server

The size of the receive buffer the socket server uses for incoming data; 102400 bytes is 100 KB.

socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)

The maximum size of a single request the socket server will accept, which protects against OOM (out of memory); the default is 100 MB (104857600 bytes).

socket.request.max.bytes=104857600
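
If you would rather leave the buffer sizes to the operating system, both buffer settings accept -1, which means "use the OS default". A sketch:

# -1 lets the OS choose the socket buffer sizes
socket.send.buffer.bytes=-1
socket.receive.buffer.bytes=-1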


############################# Log Basics #############################

# A comma separated list of directories under which to store log files

Although the directory is named logs (which suggests it holds Kafka's log output), it actually stores the message data as well: it is called logs because Kafka's data segment files use the .log suffix.
log.dirs=/app/ywjcproject/kafka_2.11-2.4.0/logs
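
Because this is a comma-separated list, data can be spread across several disks; Kafka places each new partition in the directory that currently holds the fewest partitions. A sketch, assuming hypothetical mount points /data1 and /data2:

log.dirs=/data1/kafka-logs,/data2/kafka-logs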

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.

The default number of log partitions per topic; the default is 1. More partitions allow greater consumer parallelism, but also mean more files spread across the brokers.
(A partition is the unit of distributed storage: a topic's data is split into several pieces, each stored as its own partition.)

num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.

The number of threads per data directory used to recover data when Kafka starts up and to flush data when it shuts down. If the Kafka data is stored on a RAID array, it is recommended to increase this value.

num.recovery.threads.per.data.dir=1
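
For a RAID-backed data directory one might raise it, for example (the value 8 below is an illustrative guess, not a documented recommendation):

num.recovery.threads.per.data.dir=8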

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
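
For a production cluster with at least three brokers, a commonly suggested sketch looks like this (the values match Kafka's own defaults for these settings):

# keep the internal topics available even if one broker is down
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2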

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk

The threshold, in number of messages, after which messages are flushed to disk.

#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush

The maximum time, in milliseconds, that a message can sit in the log before a flush to disk is forced.

#log.flush.interval.ms=1000
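
The two thresholds can also be combined, in which case whichever is reached first triggers the flush. A sketch with both enabled:

# flush after 10000 messages or after 1000 ms, whichever comes first
log.flush.interval.messages=10000
log.flush.interval.ms=1000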

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age

Time-based retention policy: log data older than this is deleted; the default keeps data for 7 days (168 hours).

log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.

Size-based retention policy; 1073741824 bytes is 1 GB.

#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.

Segment rolling policy: once a segment file reaches this size (1 GB), a new segment is created.

log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies

How often to check whether log data meets the deletion criteria; 300000 ms is 5 minutes.

log.retention.check.interval.ms=300000
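
Putting the retention settings together: with both a time limit and a size limit enabled, segments are deleted as soon as either limit is exceeded. A sketch (the values are illustrative):

# delete data older than 7 days, or once a partition's log exceeds ~10 GB
log.retention.hours=168
log.retention.bytes=10737418240
# roll a new segment file every 1 GB
log.segment.bytes=1073741824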

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
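
With a ZooKeeper ensemble and a chroot, the connection string looks like this (a sketch; the zk1/zk2/zk3 hostnames and the /kafka chroot are hypothetical):

# three-node ensemble; all Kafka znodes live under the /kafka chroot
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka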

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
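
For production, restoring the 3-second default mentioned in the comment above is a one-line change:

group.initial.rebalance.delay.ms=3000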

 


Most of the explanatory notes above come from https://www.cnblogs.com/toutou/p/linux_install_kafka.html

This post will be updated from time to time as I develop with and learn Kafka. I have only just started working with Kafka, so there are surely many mistakes in this article; if you happen to read it and spot an error, please leave a comment below so we can discuss and learn together.

 

posted @ 2020-03-08 21:28  不朽_张