Enterprise ZooKeeper + Kafka Deployment in Practice
I. Install the ZooKeeper Cluster
1. Download:
[root@Hexindai-C11-71 software]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz
2. Extract ZooKeeper
[root@Hexindai-C11-71 software]# tar zxvf apache-zookeeper-3.5.5-bin.tar.gz -C /export/servers/
[root@Hexindai-C11-71 software]# mv /export/servers/apache-zookeeper-3.5.5-bin/ /export/servers/zookeeper-3.5.5/
3. Create the ZooKeeper heap configuration file (java.env)
[root@Hexindai-C11-71 software]# cat /export/servers/zookeeper-3.5.5/conf/java.env
#!/bin/bash
#@author :wtnyihg
#blog:http://www.cnblogs.com/wtnyihg
# Path to the JDK installation
export JAVA_HOME=/export/servers/jdk1.8.0_211
# ZooKeeper heap size
export JVMFLAGS="-Xms2048m -Xmx2048m $JVMFLAGS"
[root@Hexindai-C11-71 software]#
4. Edit the ZooKeeper configuration file zoo.cfg (it must be created by hand; "/export/servers/zookeeper-3.5.5/conf/zoo_sample.cfg" is a configuration template)
Each ZooKeeper server's configuration file lives in the conf directory of the installation and should be named zoo.cfg; the zoo_sample.cfg file in the same directory is a simple sample of the format. The options that can be configured in zoo.cfg are as follows.

1. Minimum configuration — the three options every configuration file must contain:
(1) clientPort: the port the ZooKeeper server listens on; clients connect through it, and each server may use a different value. Default 2181.
(2) dataDir: the directory where ZooKeeper stores its in-memory database snapshots and, unless dataLogDir is set, the transaction logs as well. For production clusters, set dataLogDir so that dataDir holds only snapshots; snapshot writes are cheap, so dataDir can share a mount point with other log directories.
(3) tickTime: as mentioned earlier, ZooKeeper's basic time unit is the tick; this sets the length of one tick in milliseconds. Default 3000; changing it is not recommended.

2. Advanced configuration — all optional, but some are close to mandatory if you want to run ZooKeeper well:
(1) dataLogDir: must be set in production. ZooKeeper writes transaction logs sequentially and blockingly, so this directory must sit on its own disk; never co-locate it with other write-heavy directories, or performance suffers badly.
(2) globalOutstandingLimit: ZooKeeper queues requests, so with many clients submitting writes frequently the server could run out of memory before it catches up. This caps the number of unprocessed requests in the system. Default 1000; rarely needs changing.
(3) preAllocSize: the preallocated transaction log file size in KB; the default is 64 MB. That is clearly too large: a new transaction log file is started after every snapshot, and snapCount bounds the transactions per file, so the needed file size can be estimated accurately. With the default snapCount and transactions under 100 bytes each, 10240 is enough.
(4) snapCount: the number of transactions between snapshots, default 100000. Snapshots are expensive to write and need not be frequent, so although the value looks large it usually should not be changed.
(5) traceFile: if set, ZooKeeper continuously traces its operations to a log named traceFile.year.month.day. It costs performance; enable it only for debugging, never in production.
(6) maxClientCnxns: the limit on connections between a single client machine and a single server, keyed by IP. Default 60; 0 means no limit. Note the scope carefully: it is per client machine per ZooKeeper server, not a limit on a specific client IP, not a cluster-wide limit, and not one server's total client limit. If point-to-point connection counts are low, consider lowering this value.
(7) clientPortAddress: which local IP the client port binds to, for servers with multiple network interfaces; in clusters that have both public and private addresses, expose only the private one.
(8) minSessionTimeout: the minimum client session timeout, in multiples of tickTime; default 2.
(9) maxSessionTimeout: the maximum client session timeout, in multiples of tickTime; default 20. Clients usually have their own connection-management parameters; if a client requests a timeout outside the minSessionTimeout–maxSessionTimeout range, it is clamped to the nearest bound.
(10) fsync.warningthresholdms: the threshold, in milliseconds, above which a slow storage sync triggers a warning. Default 1000; since it is only a warning, there is no need to change it.
(11) autopurge.snapRetainCount: from 3.4.0 on, ZooKeeper can automatically clean up old snapshot and transaction log files; this sets how many to retain. Default 3; usually fine.
(12) autopurge.purgeInterval: used together with the option above; the cleanup frequency in hours. Default 0 means no cleanup; a value like 6 or 12 is recommended.
(13) syncEnabled: new in 3.4.6. When the cluster uses observers, this controls whether they write snapshots and transaction logs to disk like leaders and followers do; default true. If your cluster runs observers, consider setting it to false, since it is unnecessary there.

3. Cluster configuration — advanced options for cluster mode; standalone mode can ignore them (and hardly anyone runs standalone ZooKeeper):
(1) electionAlg: the leader election algorithm. The default 3 means FastLeaderElection; as covered earlier, the other three algorithms are all deprecated, so do not change this.
(2) initLimit: the timeout for a follower's first connection and sync with the leader, in multiples of tickTime; no default. After cluster startup or a new leader election, followers pull the latest data from the leader, which can time out if there is a lot to sync.
(3) leaderServes: default yes, meaning the leader also accepts client connections and serves reads and writes. If set to no, the leader refuses client connections and only handles writes forwarded by followers plus cluster coordination. Change this with care: it can turn "the leader is busy" into "the followers are busy". If the cluster is small and the leader's CPU and I/O are not stressed, keep the default.
(4) server.x=[hostname]:nnnnn[:nnnnn][:observer]: here x is a number matching the id in the myid file. Two ports can follow: the first carries data sync and other leader/follower traffic, the second carries votes during leader election. Appending :observer marks the server as running in observer mode.
(5) syncLimit: the timeout for followers' ongoing sync with the leader, in multiples of tickTime. Unlike initLimit it depends mostly on network quality rather than data volume, and it doubles as a heartbeat: if a follower times out, the leader considers it too far behind and drops it. On high-latency networks, raise it moderately; making it too large masks real failures.
(6) group.x=nnnnn[:nnnnn]: quorum votes default to one per machine, but weighted and grouped schemes also exist; this option configures grouping.
(7) weight.x=nnnnn: see the previous item.
(8) cnxTimeout: the timeout, in milliseconds, for opening a new connection during leader election. Default 5000; no need to change.
(9) standaloneEnabled: available from 3.5.0 on; default true for backward compatibility. If set to false, a single server can start in cluster mode, a cluster can be reconfigured down to a single node, and a single node can be scaled back up to a cluster.
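To make the advanced options above concrete, here is a minimal sketch of what a production-leaning zoo.cfg fragment could look like. The dataLogDir path is an illustrative assumption (a dedicated disk), not part of the deployment below:

# snapshots and transaction logs on separate disks
dataDir=/export/data/zookeeper
dataLogDir=/export/datalog/zookeeper    # assumed dedicated mount point
# a small preallocation suffices with the default snapCount and small transactions
preAllocSize=10240
# keep 3 snapshots, purge twice a day
autopurge.snapRetainCount=3
autopurge.purgeInterval=12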
[root@Hexindai-C11-72 ~]# cat /export/servers/zookeeper-3.5.5/conf/zoo.cfg
# The number of milliseconds of each tick
# Default 2000 ms (2 s). This is ZooKeeper's smallest time unit, used to measure
# heartbeats and timeouts; the default of 2 s is usually fine.
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# Maximum number of ticks a follower may take to connect to the leader during
# initialization. Default 10 ticks, i.e. 20 s with a 2 s tick.
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# Maximum time for a follower to sync with the leader; like initLimit it is
# specified in ticks. Default 5 ticks, i.e. 10 s.
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# ZooKeeper's working directory, a very important setting: ZooKeeper keeps a
# snapshot of the system in memory and periodically writes it to this directory.
# Watch this directory's disk usage in production.
dataDir=/export/data/zookeeper
# the port at which the clients will connect
# The port ZooKeeper listens on for client connections; the default 2181 is
# usually fine.
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
# Limits the number of concurrent client connections, distinguished by IP; it can
# be used to block certain classes of DoS attacks. Setting it to 0 or leaving it
# unset removes the limit.
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# As noted above, from 3.4.0 on ZooKeeper can clean up transaction logs and
# snapshots automatically; this sets the cleanup frequency in hours (an integer
# of 1 or more). The default 0 disables auto-purge.
#autopurge.purgeInterval=1
# server.x=[hostname]:nnnnn[:nnnnn]: x matches the id in the myid file. The
# first port carries follower/leader data sync and other traffic; the second
# carries votes during leader election.
server.71=Hexindai-C11-71:2888:3888
server.72=Hexindai-C11-72:2888:3888
server.73=Hexindai-C11-73:2888:3888
[root@Hexindai-C11-72 ~]#
5. Write a ZooKeeper management script
[root@Hexindai-C11-71 software]# cat /usr/local/bin/zookeeper_manager.sh
#!/bin/bash
#@author :wtnyihg
#blog:http://www.cnblogs.com/wtnyihg
# Check that the user passed exactly one argument
if [ $# -ne 1 ];then
    echo "Invalid arguments. Usage: $0 {start|stop|restart|status}"
    exit 1
fi
# The command requested by the user
cmd=$1
# Dispatch the requested command
function zookeeperManager(){
    case $cmd in
        start)
            echo "Starting the service"
            remoteExecution start
            ;;
        stop)
            echo "Stopping the service"
            remoteExecution stop
            ;;
        restart)
            echo "Restarting the service"
            remoteExecution restart
            ;;
        status)
            echo "Checking status"
            remoteExecution status
            ;;
        *)
            echo "Invalid arguments. Usage: $0 {start|stop|restart|status}"
            ;;
    esac
}
# Run zkServer.sh on every node over SSH
function remoteExecution(){
    for (( i=71 ; i<=73 ; i++ )) ; do
        tput setaf 2
        echo ========== Hexindai-C11-${i} zkServer.sh $1 ================
        tput setaf 9
        ssh Hexindai-C11-${i} "source /etc/profile ; zkServer.sh $1"
    done
}
# Entry point
zookeeperManager
[root@Hexindai-C11-71 software]#
[root@Hexindai-C11-71 software]# chmod +x /usr/local/bin/zookeeper_manager.sh
[root@Hexindai-C11-71 software]#
6. Configure the ZooKeeper environment variables
[root@Hexindai-C11-71 software]# tail -3 /etc/profile
#Hadoop Add By Wangruopeng
export ZOOKEEPER_HOME=/export/servers/zookeeper-3.5.5
export PATH=$PATH:$ZOOKEEPER_HOME:$ZOOKEEPER_HOME/bin
[root@Hexindai-C11-71 software]#
7. Configure passwordless SSH login
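So the management and copy scripts can reach every node over SSH without a password, generate a key pair on Hexindai-C11-71 and push the public key to all three hosts. A minimal sketch using standard OpenSSH tooling (the empty passphrase is an assumption):

[root@Hexindai-C11-71 ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@Hexindai-C11-71 ~]# for node in Hexindai-C11-71 Hexindai-C11-72 Hexindai-C11-73; do ssh-copy-id root@${node}; done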
8. Copy the ZooKeeper binaries and environment variables to the other nodes
[root@Hexindai-C11-71 software]# vim /export/ssh-copy.sh
IP="
172.20.11.72
172.20.11.73
"
for node in ${IP};do
    scp -r /export/servers/zookeeper-3.5.5 ${node}:/export/servers/
    scp -r /etc/profile ${node}:/etc/profile
    echo "${node} base configuration complete"
done
[root@Hexindai-C11-71 software]# sh /export/ssh-copy.sh
9. Create the data directories and generate the myid files
[root@Hexindai-C11-71 ~]# mkdir /export/data/zookeeper/ -p
[root@Hexindai-C11-72 ~]# mkdir /export/data/zookeeper/ -p
[root@Hexindai-C11-73 ~]# mkdir /export/data/zookeeper/ -p
[root@Hexindai-C11-71 software]# echo "71" > /export/data/zookeeper/myid
[root@Hexindai-C11-71 software]#
[root@Hexindai-C11-72 ~]# echo "72" > /export/data/zookeeper/myid
[root@Hexindai-C11-72 ~]#
[root@Hexindai-C11-73 ~]# echo "73" > /export/data/zookeeper/myid
[root@Hexindai-C11-73 ~]#
10. Start ZooKeeper and check its status
[root@Hexindai-C11-71 software]# zookeeper_manager.sh start
Starting the service
========== Hexindai-C11-71 zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.5.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
========== Hexindai-C11-72 zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.5.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
========== Hexindai-C11-73 zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.5.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@Hexindai-C11-71 software]#
[root@Hexindai-C11-71 software]# zookeeper_manager.sh status
Checking status
========== Hexindai-C11-71 zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
========== Hexindai-C11-72 zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
========== Hexindai-C11-73 zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /export/servers/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@Hexindai-C11-71 software]#
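Beyond the script output, you can sanity-check any node with the CLI bundled with ZooKeeper; a quick sketch (on a fresh cluster, ls / should show only the zookeeper system node):

[root@Hexindai-C11-71 ~]# zkCli.sh -server Hexindai-C11-72:2181
[zk: Hexindai-C11-72:2181(CONNECTED) 0] ls /
[zookeeper]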
II. Monitor ZooKeeper with zkWeb
1. Download the "zkWeb For Zookeeper" jar (release page: https://github.com/zhitom/zkweb/releases)
[root@Hexindai-C11-71 software]# wget https://github.com/zhitom/zkweb/releases/download/zkWeb-v1.2.1/zkWeb-v1.2.1.jar

2. Run the zkWeb jar (the default port is 8099; the startup log below is abridged)
[root@Hexindai-C11-71 software]# java -jar /export/data/zkWeb-v1.2.1.jar
15:51:29.671 [main] INFO com.yasenagat.zkweb.ZkWebSpringBootApplication - applicationYamlFileName(application-zkweb.yaml)=file:/export/data/zkWeb-v1.2.1.jar!/BOOT-INF/classes!/application-zkweb.yaml
 :: Spring Boot ::        (v2.0.2.RELEASE)
[2019-08-08 15:51:30 INFO main] c.y.zkweb.ZkWebSpringBootApplication --> Starting ZkWebSpringBootApplication vv1.2.1 on Hexindai-C11-71 with PID 26089 (/export/data/zkWeb-v1.2.1.jar started by root in /export/software)
[2019-08-08 15:51:30 INFO main] c.y.zkweb.ZkWebSpringBootApplication --> The following profiles are active: local
[2019-08-08 15:51:32 INFO main] o.s.b.w.e.tomcat.TomcatWebServer --> Tomcat initialized with port(s): 8099 (http)
...
[2019-08-08 15:51:33 ERROR main] c.y.zkweb.util.ZkCfgManagerImpl --> isTableOk Failed,A problem occurred while trying to acquire a cached PreparedStatement in a background thread.
[2019-08-08 15:51:33 ERROR main] c.y.zkweb.util.ZkCfgManagerImpl --> create table (CREATE TABLE IF NOT EXISTS ZK(ID VARCHAR PRIMARY KEY, DESC VARCHAR, CONNECTSTR VARCHAR, SESSIONTIMEOUT VARCHAR))...
[2019-08-08 15:51:33 ERROR main] c.y.zkweb.util.ZkCfgManagerImpl --> create table OK !ret=0
[2019-08-08 15:51:33 ERROR main] c.y.zkweb.util.ZkCfgManagerImpl --> table select check OK!
[2019-08-08 15:51:33 INFO main] com.yasenagat.zkweb.util.ZkCache --> zk info size=0
[2019-08-08 15:51:33 INFO main] c.y.zkweb.util.ZkCfgManagerImpl --> afterPropertiesSet init 0 zk instance
...
[2019-08-08 15:51:34 INFO main] o.s.b.w.e.tomcat.TomcatWebServer --> Tomcat started on port(s): 8099 (http) with context path ''
[2019-08-08 15:51:34 INFO main] c.y.zkweb.ZkWebSpringBootApplication --> Started ZkWebSpringBootApplication in 4.822 seconds (JVM running for 6.077)
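Note that the java -jar command above runs zkWeb in the foreground, so it dies with the shell. One way to keep it running in the background is nohup; a sketch (the log path is an arbitrary choice):

[root@Hexindai-C11-71 software]# nohup java -jar /export/data/zkWeb-v1.2.1.jar > /export/data/zkweb.log 2>&1 &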

3. Add the ZooKeeper node in the zkWeb UI and save it successfully

4. Browse the ZooKeeper data through the zkWeb web UI

III. Build a Fully Distributed Kafka Cluster
1. Download Kafka from the official site
[root@Hexindai-C11-71 software]# wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
2. Extract Kafka and configure the environment variables
[root@Hexindai-C11-71 software]# tar xvf kafka_2.12-2.3.0.tgz -C /export/servers/
[root@Hexindai-C11-71 software]# mv /export/servers/kafka_2.12-2.3.0 /export/servers/kafka
[root@Hexindai-C11-71 software]# cat >> /etc/profile << EOF
export KAFKA_HOME=/export/servers/kafka
export PATH=\$PATH:\$KAFKA_HOME:\$KAFKA_HOME/bin:\$KAFKA_HOME/sbin
EOF
[root@Hexindai-C11-71 software]#
[root@Hexindai-C11-71 software]# source /etc/profile
[root@Hexindai-C11-71 software]#
3. Modify the Kafka start script
[root@Hexindai-C11-71 software]# cat /export/servers/kafka/bin/kafka-server-start.sh
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ]; then
    echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
    exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    # Kafka's default heap is 1 GB, which is clearly too small for production.
    # "Kafka: The Definitive Guide" suggests 5 GB and "Apache Kafka实战" suggests
    # 6 GB; the gap is small, so we use 6 GB here, which is also what I run in
    # production. Treat book advice as a starting point: if Kafka shows frequent
    # Full GCs at 6 GB, grow the heap further. Beware: if the machine has less
    # than 6 GB of available memory, this setting can throw an OOM at startup.
    export KAFKA_HEAP_OPTS="-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

# This line shows that the script delegates to kafka-run-class.sh. If you set
# the heap here, do not set it again in kafka-run-class.sh, or the heap
# configured in this script will be ignored.
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
[root@Hexindai-C11-71 software]#
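Because the if-guard above only applies when KAFKA_HEAP_OPTS is empty, an alternative to editing the script is to export the heap settings before starting the broker; a sketch with the same values as the edit above:

[root@Hexindai-C11-71 ~]# export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20"
[root@Hexindai-C11-71 ~]# kafka-server-start.sh -daemon /export/servers/kafka/config/server.properties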
4. Distribute the Kafka binaries to the other nodes
[root@Hexindai-C11-71 software]# cat /export/ssh-copy.sh
IP="
172.20.11.72
172.20.11.73
"
for node in ${IP};do
    scp -r /export/servers/kafka ${node}:/export/servers/
    scp -r /etc/profile ${node}:/etc/profile
    echo "${node} base configuration complete"
done
[root@Hexindai-C11-71 software]#
[root@Hexindai-C11-71 software]# sh /export/ssh-copy.sh
5. Edit the Kafka configuration file (server.properties)
[root@Hexindai-C11-71 ~]# cat /export/servers/kafka/config/server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# The broker's unique identifier within the cluster; must be a positive integer.
# If the server's IP changes but broker.id stays the same, consumers are unaffected.
broker.id=71

# Deleting a topic via the CLI only marks it for deletion in ZooKeeper; the actual
# delete happens only when this switch is on, so enable it up front.
delete.topic.enable=true

# Whether topics may be created automatically; if false, topics must be created
# explicitly with the CLI.
auto.create.topics.enable=false

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Broker service port.
port=9092

# Host the broker binds to. If set, the broker binds to this address; if unset, it
# binds to all interfaces and publishes one of them to ZooKeeper. Usually left unset.
host.name=Hexindai-C11-71

# Hostname and port the broker will advertise to producers and consumers.
# Kafka 0.9.x added advertised.listeners; advertised.host.name and
# advertised.host.port are deprecated and should not be used on 0.9.x and later.
# If configured, the value of "listeners" is used; otherwise the value returned
# from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols; by default they are the same.
# See the configuration documentation for details.
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# Number of threads for receiving requests from and sending responses to the network.
num.network.threads=30

# Number of threads for processing requests, which may include disk I/O.
num.io.threads=30

# Socket send buffer (SO_SNDBUF) used by the socket server.
socket.send.buffer.bytes=5242880

# Socket receive buffer (SO_RCVBUF) used by the socket server.
socket.receive.buffer.bytes=5242880

# Maximum size of a request the socket server will accept (protection against OOM).
socket.request.max.bytes=104857600

# Maximum number of requests queued for the I/O threads; beyond this the network
# threads stop accepting new requests. A self-protection mechanism.
queued.max.requests=1000

############################# Log Basics #############################

# Comma-separated list of directories in which to store log data. With multiple
# disks, configure one directory per disk to spread the I/O load.
log.dirs=/export/data/kafka/log1,/export/data/kafka/log2,/export/data/kafka/log3

# Default number of partitions per topic; overridden by the value given at topic
# creation time.
num.partitions=1

# Threads per data directory for log recovery at startup and flushing at shutdown.
# Default 1.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# Replication factor for the internal topics "__consumer_offsets" and
# "__transaction_state". For anything other than development testing, a value
# greater than 1, such as 3, is recommended to ensure availability.
offsets.topic.replication.factor=3

# Replication factor for the transaction topic (set higher for availability).
# Internal topic creation fails until the cluster size meets this factor.
transaction.state.log.replication.factor=3

# Overrides min.insync.replicas for the transaction topic; default 2.
transaction.state.log.min.isr=2

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem, but by default fsync() is
# lazy via the OS cache. The settings below control flushing to disk; there are
# trade-offs between durability, latency, and throughput, and they can be set
# globally and overridden per topic.

# Number of messages to accept before forcing a flush to disk. Lowering this
# syncs to disk more often and hurts performance; the usual advice is to rely on
# replication for durability rather than per-node fsync, though fsync adds extra
# safety. Default 10000.
#log.flush.interval.messages=10000

# Maximum time between two fsync calls, in ms; a flush is forced when this time
# elapses even if log.flush.interval.messages has not been reached. Default 3000 ms.
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# Log retention time (hours|minutes); default 7 days (168 hours). Expired segments
# are handled according to the cleanup policy; whichever of the time and size
# limits is reached first triggers it.
log.retention.hours=168

# A size-based retention policy: maximum bytes of log data to retain. Works
# independently of log.retention.hours.
#log.retention.bytes=1073741824

# Maximum size of a log segment file; when reached, a new segment is created
# (-1 means no limit).
log.segment.bytes=1073741824

# Force a new segment after this much time.
#log.roll.hours=24*7

# How often to check whether segments are eligible for deletion under
# log.retention.hours or log.retention.bytes.
log.retention.check.interval.ms=300000

# Whether to enable log cleaning.
#log.cleaner.enable=false

# Cleanup policy: delete or compact, applied to expired data or logs over the
# size limit; overridden by the value given at topic creation time.
#log.cleanup.policy=delete

# Number of log-compaction threads.
#log.cleaner.threads=2

# Maximum retention time for compacted logs.
#log.cleaner.delete.retention.ms=3600000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details): comma-separated
# host:port pairs, each a ZooKeeper server, e.g.
# "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". An optional chroot string can
# be appended to specify the root directory for all Kafka znodes.
zookeeper.connect=Hexindai-C11-71:2181,Hexindai-C11-72:2181,Hexindai-C11-73:2181

# Maximum ZooKeeper session timeout, effectively the heartbeat interval: if there
# is no response within it, the broker is considered dead. Do not set it too large.
zookeeper.session.timeout.ms=180000

# Timeout for establishing the ZooKeeper connection. (Related note: consumer
# offsets committed to ZooKeeper are updated on a time basis, not per message, so
# after a failure during a commit and a restart, already-consumed messages may be
# delivered again.)
zookeeper.connection.timeout.ms=6000

# Maximum request size in bytes; effectively an upper bound on record size. Note
# that the broker has its own message-size limits, distinct from this setting.
# This caps how much a producer sends in one batch, guarding against huge requests.
max.request.size=104857600

# Maximum bytes fetched per partition per fetch request. Fetched data is buffered
# in memory per partition, so this bounds the consumer's memory use. It must be at
# least the broker's maximum message size, or producers may publish messages
# larger than the consumer can fetch.
fetch.message.max.bytes=104857600

# How far a ZooKeeper follower may lag behind the ZooKeeper leader.
#zookeeper.sync.time.ms=2000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

############################# Replica Basics #############################

# Timeout for the leader to receive a follower's fetch request; default 10 s.
#replica.lag.time.max.ms=30000

# Maximum number of messages a follower may lag behind the leader before the
# replica is considered failed; a broker-wide setting. Network delays always
# cause some replica lag; if a replica lags badly, the leader assumes it is slow
# or bandwidth-limited. In small or network-constrained clusters, raise the value.
# Too large delays removing a truly lagging follower; too small makes followers
# bounce in and out of the ISR. There is no universally good value, so it is not
# recommended; newer Kafka versions have reportedly removed this parameter.
#replica.lag.max.messages=4000

# Socket timeout between follower and leader.
#replica.socket.timeout.ms=30000

# Maximum bytes per follower fetch.
replica.fetch.max.bytes=104857600

# Maximum wait before a follower fetch request times out and is re-sent.
replica.fetch.wait.max.ms=2000

# Minimum bytes per fetch.
#replica.fetch.min.bytes=1

# Since 0.11.0.0 the default of unclean.leader.election.enable changed from true
# to false: replicas outside the ISR (In-Sync Replica) list can no longer be
# elected partition leader, favoring durability over availability. If the ISR has
# no other replica, the partition becomes unreadable and unwritable.
unclean.leader.election.enable=false

# Number of fetcher threads on a follower; balances sync speed against system load.
num.replica.fetchers=5

# Socket timeout for communication between the partition leader and replicas.
#controller.socket.timeout.ms=30000

# Message queue size for leader-to-replica data synchronization.
#controller.message.queue.size=10

# Which inter-broker protocol version to use; typically bumped after all brokers
# have been upgraded to the new version. Set it during upgrades.
#inter.broker.protocol.version=0.10.1

# Message format version the broker uses when appending messages to the log; must
# be a valid ApiVersion (e.g. 0.8.2, 0.9.0.0, 0.10.0). Setting a specific version
# guarantees all messages on disk are at or below it. Setting it incorrectly
# breaks clients on older versions, since they would receive messages in a format
# they do not understand.
#log.message.format.version=0.10.1
[root@Hexindai-C11-71 ~]#
The server.properties on Hexindai-C11-72 and Hexindai-C11-73 is identical to the file above except for two lines on each node.

On Hexindai-C11-72:
broker.id=72
host.name=Hexindai-C11-72

On Hexindai-C11-73:
broker.id=73
host.name=Hexindai-C11-73
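The log.dirs setting above points at three directories per node that do not exist yet. It is safest to create them before the first start (Kafka can usually create missing log directories itself, but pre-creating them makes permissions and mount points explicit); run on each host:

[root@Hexindai-C11-71 ~]# mkdir -p /export/data/kafka/log{1,2,3}
[root@Hexindai-C11-72 ~]# mkdir -p /export/data/kafka/log{1,2,3}
[root@Hexindai-C11-73 ~]# mkdir -p /export/data/kafka/log{1,2,3}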
6. Start the Kafka cluster
[root@Hexindai-C11-71 ~]# kafka-server-start.sh /export/servers/kafka/config/server.properties >> /dev/null &
[1] 10736
[root@Hexindai-C11-71 ~]# jps
1184 Application
4835 DataNode
28259 Application
1031 Application
1611 Application
4685 NameNode
5230 NodeManager
11150 Jps
27376 Application
10736 Kafka
6163 Worker
25396 QuorumPeerMain
6070 Master
26590 zkWeb-v1.2.1.jar
30655 Application
30751 Application
[root@Hexindai-C11-71 ~]#
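With all three brokers up, a short smoke test verifies replication end to end; a sketch using the stock Kafka 2.3 CLI tools (the topic name "test" is an arbitrary choice; run the consumer in a second terminal):

[root@Hexindai-C11-71 ~]# kafka-topics.sh --bootstrap-server Hexindai-C11-71:9092 --create --topic test --partitions 3 --replication-factor 3
[root@Hexindai-C11-71 ~]# kafka-console-producer.sh --broker-list Hexindai-C11-71:9092 --topic test
[root@Hexindai-C11-72 ~]# kafka-console-consumer.sh --bootstrap-server Hexindai-C11-72:9092 --topic test --from-beginning

To stop a broker, use the bundled stop script, as shown below: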
[root@Hexindai-C11-71 ~]# kafka-server-stop.sh
[root@Hexindai-C11-71 ~]# jps
1184 Application
4835 DataNode
28259 Application
1031 Application
1611 Application
4685 NameNode
5230 NodeManager
27376 Application
6163 Worker
25396 QuorumPeerMain
6070 Master
11319 Jps
26590 zkWeb-v1.2.1.jar
30655 Application
30751 Application
[root@Hexindai-C11-71 ~]#
IV. Common Apache Kafka Operations Commands
For details, see: https://www.cnblogs.com/wtnyihg/articles/11239640.html
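A few of the most frequently used commands, for quick reference (stock Kafka 2.3 CLI; hostnames follow this cluster, and the "test" topic is the one created in the smoke test above):

[root@Hexindai-C11-71 ~]# kafka-topics.sh --bootstrap-server Hexindai-C11-71:9092 --list
[root@Hexindai-C11-71 ~]# kafka-topics.sh --bootstrap-server Hexindai-C11-71:9092 --describe --topic test
[root@Hexindai-C11-71 ~]# kafka-consumer-groups.sh --bootstrap-server Hexindai-C11-71:9092 --list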
