|NO.Z.00053|——————————|BigDataEnd|——|Hadoop&kafka.V38|——|kafka.v38|分区重新分配.v02|

一、Create the second Kafka cluster instance:
### --- Add Kafka instance two: set up Kafka on hadoop02. Note: ZooKeeper is NOT needed on this host!
### --- Configure the hosts file

[root@hadoop02 ~]# vim /etc/hosts
192.168.1.111 hadoop01
192.168.1.122 hadoop02
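A quick sanity check on the hosts entries can be sketched as below. This is only an illustration: `check_host` is a hypothetical helper, and a temp file stands in for the real `/etc/hosts` so the snippet runs anywhere.

```shell
#!/bin/sh
# Sketch: check that a hosts file maps each expected hostname exactly once.
# check_host prints "OK" when the name appears on exactly one line.
check_host() {
  file=$1; name=$2
  [ "$(grep -c -w "$name" "$file")" -eq 1 ] && echo "OK" || echo "MISSING"
}

# Example hosts content (stand-in for /etc/hosts)
HOSTS=$(mktemp)
printf '%s\n' '192.168.1.111 hadoop01' '192.168.1.122 hadoop02' > "$HOSTS"

check_host "$HOSTS" hadoop01   # OK
check_host "$HOSTS" hadoop02   # OK
```

Running this against the real `/etc/hosts` on both nodes before starting Kafka catches typos early, since broker registration advertises hostnames.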
### --- Set up the JDK environment: copy the JDK over and install it

~~~     # Create the installation directory
[root@hadoop02 ~]# mkdir -p /opt/yanqi/servers/
[root@hadoop02 ~]# cd /opt/yanqi/servers/

~~~     # Copy the JDK package from hadoop01 to this directory on hadoop02
[root@hadoop02 servers]# scp -r root@192.168.1.111:/opt/yanqi/servers/jdk1.8.0_231 .
### --- Configure environment variables

[root@hadoop02 ~]# vim /etc/profile
#JAVA_HOME
export JAVA_HOME=/opt/yanqi/servers/jdk1.8.0_231
export PATH=$JAVA_HOME/bin:$PATH
 
~~~     # Apply the new profile
[root@hadoop02 ~]# source /etc/profile
~~~     # Check the JDK version

[root@hadoop02 ~]# java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)
### --- Configure Kafka instance two: copy the Kafka installation from hadoop01 to hadoop02

~~~     # Change to the installation directory
[root@hadoop02 ~]# cd /opt/yanqi/servers/

~~~     # Copy the Kafka installation to this host (hadoop02)
[root@hadoop02 servers]# scp -r root@192.168.1.111:/opt/yanqi/servers/kafka .
### --- Configure environment variables

[root@hadoop02 ~]# vim /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/yanqi/servers/kafka
export PATH=$PATH:$KAFKA_HOME/bin 
 
~~~     # Apply the environment variables
[root@hadoop02 ~]# . /etc/profile 
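After sourcing the profile, it is worth confirming that `$KAFKA_HOME/bin` really landed on `PATH`. A minimal sketch (the variable values mirror the profile lines above; `msg` is a hypothetical name for the check result):

```shell
#!/bin/sh
# Sketch: verify KAFKA_HOME is set and its bin directory is on PATH.
KAFKA_HOME=/opt/yanqi/servers/kafka   # value from /etc/profile above
PATH=$PATH:$KAFKA_HOME/bin

# Wrap PATH in colons so the match works at either end of the list.
case ":$PATH:" in
  *:"$KAFKA_HOME/bin":*) msg="KAFKA_HOME/bin is on PATH" ;;
  *)                     msg="KAFKA_HOME/bin missing from PATH" ;;
esac
echo "$msg"
```

If the check fails, `kafka-server-start.sh` in the next section will not be found without an absolute path.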
### --- Modify the Kafka configuration on hadoop02:

[root@hadoop02 ~]# vim /opt/yanqi/servers/kafka/config/server.properties
############################# Server Basics #############################
# broker.id must be unique within the cluster (hadoop01 already uses 0)
broker.id=1
############################# Zookeeper #############################
# Point zookeeper.connect at the host where ZooKeeper is actually deployed
zookeeper.connect=192.168.1.111:2181/myKafka
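Before starting the second broker, a duplicate `broker.id` is the easiest mistake to rule out. The sketch below compares the ids from two config files; the heredoc-style temp files are stand-ins for the real `server.properties` on hadoop01 and hadoop02, and `get_broker_id` is a hypothetical helper.

```shell
#!/bin/sh
# Sketch: confirm the broker.id values of two configs differ.
get_broker_id() { grep '^broker.id=' "$1" | cut -d= -f2; }

CFG1=$(mktemp); CFG2=$(mktemp)
echo 'broker.id=0' > "$CFG1"   # stand-in for hadoop01's server.properties
echo 'broker.id=1' > "$CFG2"   # stand-in for hadoop02's server.properties

id1=$(get_broker_id "$CFG1")
id2=$(get_broker_id "$CFG2")
if [ "$id1" != "$id2" ]; then
  echo "broker ids $id1 and $id2 are unique: OK"
else
  echo "duplicate broker.id $id1" >&2
fi
```

Two brokers registering the same id would collide on the `/myKafka/brokers/ids` znode, so the second one fails to start.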
二、Start Kafka and check that the cluster configuration took effect
### --- Start Kafka

[root@hadoop02 ~]# kafka-server-start.sh /opt/yanqi/servers/kafka/config/server.properties
~~~     Log message: the broker registers its ID in ZooKeeper
INFO Creating /brokers/ids/1 (is it secure? false) (kafka.utils.ZKCheckedEphemeral) 
### --- Error message: if startup fails, the log shows
If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
### --- Solution: the data directory copied over from hadoop01 still holds hadoop01's broker metadata; check log.dirs and delete that directory so the broker starts fresh
[root@hadoop02 ~]# vim /opt/yanqi/servers/kafka/config/server.properties
log.dirs=/opt/yanqi/servers/kafka/kafka-logs
 
[root@hadoop02 ~]# rm -rf /opt/yanqi/servers/kafka/kafka-logs/
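The mismatch comes from the `meta.properties` file inside the copied `kafka-logs` directory, which pins hadoop01's `cluster.id` and `broker.id`. When only the metadata (not topic data) was copied, an arguably narrower fix is to delete just that file; this sketch uses a temp directory as a stand-in for the real log dir, and the assumption that removing `meta.properties` alone is sufficient should be verified in your environment.

```shell
#!/bin/sh
# Sketch of a narrower fix: remove only the stale meta.properties so the
# broker re-registers with its own broker.id instead of wiping all data.
# LOG_DIR stands in for /opt/yanqi/servers/kafka/kafka-logs.
LOG_DIR=$(mktemp -d)
printf 'version=0\nbroker.id=0\ncluster.id=mx1SabVkSemcmi6_9xl-fQ\n' \
  > "$LOG_DIR/meta.properties"

rm -f "$LOG_DIR/meta.properties"
[ -f "$LOG_DIR/meta.properties" ] || echo "stale meta.properties removed"
```

If actual partition logs were copied along with the metadata, deleting the whole directory (as above) is the safer route, since broker 1 must not claim hadoop01's partition data.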
### --- On hadoop01, check in ZooKeeper whether the Kafka broker IDs were registered

[zk: localhost:2181(CONNECTED) 2] ls /myKafka/brokers/ids
[0, 1]
~~~     Watch the Cluster ID printed when the broker on hadoop02 starts and compare it with
~~~     the Cluster ID stored in ZooKeeper; if they match, hadoop01 and hadoop02 are in the same cluster.

~~~     # Cluster ID printed when Kafka starts on hadoop02
INFO Cluster ID = mx1SabVkSemcmi6_9xl-fQ (kafka.server.KafkaServer)
### --- Cluster ID stored in ZooKeeper on hadoop01:

~~~     # In ZooKeeper on hadoop01, check whether the stored cluster id matches the one hadoop02 printed at startup
~~~     # A match means both brokers have joined the same cluster
[zk: localhost:2181(CONNECTED) 0] get /myKafka/cluster/id
{"version":"1","id":"mx1SabVkSemcmi6_9xl-fQ"}
### --- View the ZooKeeper node information on hadoop01:
~~~     # The hadoop02 broker has joined the cluster.

[zk: localhost:2181(CONNECTED) 4] ls /myKafka/brokers/ids
[0, 1]
[zk: localhost:2181(CONNECTED) 2] get /myKafka/brokers/ids/1
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://hadoop02:9092"],"jmx_port":-1,"host":"hadoop02","timestamp":"1632410383285","port":9092,"version":4}
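The registration JSON above can be picked apart without `jq` to confirm the advertised endpoint. A sketch (the `JSON` variable simply holds the output of the `get` command above; the `sed` extraction is an illustration, not part of Kafka's tooling):

```shell
#!/bin/sh
# Sketch: extract host and port from the broker registration JSON
# returned by `get /myKafka/brokers/ids/1`.
JSON='{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://hadoop02:9092"],"jmx_port":-1,"host":"hadoop02","timestamp":"1632410383285","port":9092,"version":4}'

# Greedy leading .* makes sed match the LAST occurrence, skipping jmx_port.
host=$(printf '%s' "$JSON" | sed -n 's/.*"host":"\([^"]*\)".*/\1/p')
port=$(printf '%s' "$JSON" | sed -n 's/.*"port":\([0-9]*\).*/\1/p')
echo "broker 1 advertises $host:$port"
```

The extracted `hadoop02:9092` endpoint is what clients and the other broker will use to reach this node, so it must be resolvable per the hosts file configured earlier.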
posted by yanqi_vip