Setting up a ZooKeeper cluster on CentOS
Standalone mode:
1) First, download the ZooKeeper tarball; this guide uses ZooKeeper 3.4.8.
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz
2) Extract
First create the directories; here they go under /usr/zookeeper/:
mkdir zookeeper1
mkdir zookeeper2
mkdir zookeeper3
These three directories will hold the three instances. Now extract the tarball into each target directory. From the directory containing the tarball, run:
tar -zxvf zookeeper-3.4.8.tar.gz -C /usr/zookeeper/zookeeper1
tar -zxvf zookeeper-3.4.8.tar.gz -C /usr/zookeeper/zookeeper2
tar -zxvf zookeeper-3.4.8.tar.gz -C /usr/zookeeper/zookeeper3
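The three extractions above all follow one pattern, so a small loop can do the same. Below is a sketch that builds a dummy tarball in a temp directory so it runs without the real download; substitute your actual zookeeper-3.4.8.tar.gz path and /usr/zookeeper as the base in a real setup.

```shell
# Sketch: the three "tar -C" extractions as a loop.
# A dummy tarball stands in for the real zookeeper-3.4.8.tar.gz here.
WORK="$(mktemp -d)"
mkdir -p "$WORK/src/zookeeper-3.4.8"
echo demo > "$WORK/src/zookeeper-3.4.8/README.txt"
tar -czf "$WORK/zookeeper-3.4.8.tar.gz" -C "$WORK/src" zookeeper-3.4.8

# Extract one copy per instance directory.
for i in 1 2 3; do
  mkdir -p "$WORK/zookeeper$i"
  tar -zxf "$WORK/zookeeper-3.4.8.tar.gz" -C "$WORK/zookeeper$i"
done
ls "$WORK"/zookeeper*/zookeeper-3.4.8
```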
3) Run the standalone instance
Let's start with the zookeeper1 instance:
cd zookeeper1/
Run ll to list all the files in detail:
drwxr-xr-x.  2 1000 1000    4096 Mar 17 17:27 bin
-rw-rw-r--.  1 1000 1000   83235 Feb  6 11:46 build.xml
-rw-rw-r--.  1 1000 1000   88625 Feb  6 11:46 CHANGES.txt
drwxr-xr-x.  2 1000 1000    4096 Mar 17 17:21 conf
drwxr-xr-x. 10 1000 1000    4096 Feb  6 11:46 contrib
drwxr-xr-x.  2 1000 1000    4096 Feb  6 11:50 dist-maven
drwxr-xr-x.  6 1000 1000    4096 Feb  6 11:49 docs
-rw-rw-r--.  1 1000 1000    1953 Feb  6 11:46 ivysettings.xml
-rw-rw-r--.  1 1000 1000    3498 Feb  6 11:46 ivy.xml
drwxr-xr-x.  4 1000 1000    4096 Feb  6 11:49 lib
-rw-rw-r--.  1 1000 1000   11938 Feb  6 11:46 LICENSE.txt
-rw-rw-r--.  1 1000 1000     171 Feb  6 11:46 NOTICE.txt
-rw-rw-r--.  1 1000 1000    1770 Feb  6 11:46 README_packaging.txt
-rw-rw-r--.  1 1000 1000    1585 Feb  6 11:46 README.txt
drwxr-xr-x.  5 1000 1000    4096 Feb  6 11:46 recipes
drwxr-xr-x.  8 1000 1000    4096 Feb  6 11:49 src
-rw-rw-r--.  1 1000 1000 1360961 Feb  6 11:46 zookeeper-3.4.8.jar
-rw-rw-r--.  1 1000 1000     819 Feb  6 11:50 zookeeper-3.4.8.jar.asc
-rw-rw-r--.  1 1000 1000      33 Feb  6 11:46 zookeeper-3.4.8.jar.md5
-rw-rw-r--.  1 1000 1000      41 Feb  6 11:46 zookeeper-3.4.8.jar.sha1
Create zoo.cfg from the sample configuration file:
mv conf/zoo_sample.cfg conf/zoo.cfg
Start ZooKeeper:
./bin/zkServer.sh start
4) Connect to ZooKeeper with the Java client
./bin/zkCli.sh -server 127.0.0.1:2181
You can then run all kinds of commands, which feel much like file-system operations. Type help to see the full list.
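For instance, a minimal session might run the following (all are standard zkCli.sh commands; /demo and its data are made-up names for illustration):

```
ls /                 list the znodes under the root
create /demo hello   create a znode with some data
get /demo            read the data (and stat) back
set /demo world      update the data
delete /demo         remove the znode
```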
5) Shut down
./bin/zkServer.sh stop
Replicated mode (distributed)
A ZooKeeper cluster is commonly called a ZooKeeper ensemble, or quorum.
Since we only have one machine here, we will build a pseudo-cluster by running several instances on different port numbers. If you have more machines, you can install one ZooKeeper per machine; the configuration is similar. A "pseudo-distributed cluster" runs multiple ZooKeeper instances on a single machine, while a "fully distributed cluster" runs one ZooKeeper instance per machine.
ZooKeeper has no fixed master/slave relationship: every node is a server, and if the leader goes down, a new leader is immediately elected from among the followers.
Because there is no master/slave setup, there is also no need to configure passwordless SSH login; each ZooKeeper server starts on its own, and the servers exchange data with one another over TCP ports.
Update the configuration files for zookeeper2 and zookeeper3 (zookeeper1 was already set up during the standalone steps). In the conf directory of both zookeeper2 and zookeeper3, run:
mv zoo_sample.cfg zoo.cfg
Create the data directories
Switch to /var/zookeeper and create three directories, zookeeper1, zookeeper2, and zookeeper3, to serve as each instance's dataDir (the myid file, snapshots, and the pid file are stored here):
mkdir /var/zookeeper/zookeeper1
mkdir /var/zookeeper/zookeeper2
mkdir /var/zookeeper/zookeeper3
In each of zookeeper1, zookeeper2, and zookeeper3, create a file named myid:
echo "1" > /var/zookeeper/zookeeper1/myid
echo "2" > /var/zookeeper/zookeeper2/myid
echo "3" > /var/zookeeper/zookeeper3/myid
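The same directories and myid files can also be created in one loop. A sketch follows; BASE points at a temp directory here so it runs anywhere, but set BASE=/var/zookeeper to match the layout used in this article.

```shell
# Sketch: create the three dataDirs and their myid files in a loop.
# BASE uses a temp dir for demonstration; in a real setup use
# BASE=/var/zookeeper as in this article.
BASE="$(mktemp -d)"
for i in 1 2 3; do
  mkdir -p "$BASE/zookeeper$i"
  echo "$i" > "$BASE/zookeeper$i/myid"
done
cat "$BASE"/zookeeper*/myid
```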
Note that myid may only contain a number; in my test, ZooKeeper failed to start and threw an exception when the file contained letters.
Edit each configuration file
Modify: dataDir and clientPort
Add: the cluster's server list, server.X, where "X" is the myid value stored in each instance's data directory
vi /usr/zookeeper/zookeeper1/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/zookeeper/zookeeper1
# the port at which the clients will connect
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
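The three server.X lines are the heart of the cluster setup, so an annotated fragment may help. Each has the form server.X=host:portA:portB: X must equal the myid in that server's dataDir, portA is the port followers use to connect to the leader, and portB is used during leader election; the client port is configured separately via clientPort. Because this pseudo-cluster puts all three servers on 127.0.0.1, each needs its own pair of ports:

```
# server.X=host:portA:portB
#   X     - must equal the myid in that server's dataDir
#   portA - followers connect to the leader on this port (2888 here)
#   portB - used for leader election (3888 here)
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
```

On separate machines the three lines could all reuse 2888:3888, since the hosts would differ.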
Only the parts that differ are shown below; everything else is the same as zk1.
vi /usr/zookeeper/zookeeper2/conf/zoo.cfg
dataDir=/var/zookeeper/zookeeper2
clientPort=2182
vi /usr/zookeeper/zookeeper3/conf/zoo.cfg
dataDir=/var/zookeeper/zookeeper3
clientPort=2183
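Instead of editing the three files by hand, the per-instance differences can be stamped out with sed from one template. The sketch below uses a trimmed stand-in for zoo_sample.cfg and writes to a temp directory; in a real setup, write each result to /usr/zookeeper/zookeeperN/conf/zoo.cfg instead.

```shell
# Sketch: generate the three zoo.cfg files from one template.
# The template is a trimmed stand-in for zoo_sample.cfg.
DIR="$(mktemp -d)"
cat > "$DIR/template.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=__DATADIR__
clientPort=__PORT__
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
EOF

# Substitute the per-instance dataDir and client port (2181..2183).
for i in 1 2 3; do
  sed -e "s|__DATADIR__|/var/zookeeper/zookeeper$i|" \
      -e "s|__PORT__|218$i|" \
      "$DIR/template.cfg" > "$DIR/zoo$i.cfg"
done
grep -H clientPort "$DIR"/zoo?.cfg
```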
Start each instance
/usr/zookeeper/zookeeper1/bin/zkServer.sh start
/usr/zookeeper/zookeeper2/bin/zkServer.sh start
/usr/zookeeper/zookeeper3/bin/zkServer.sh start
Check the startup status of each instance:
/usr/zookeeper/zookeeper1/bin/zkServer.sh status
/usr/zookeeper/zookeeper2/bin/zkServer.sh status
/usr/zookeeper/zookeeper3/bin/zkServer.sh status
One instance should report Mode: leader and the other two Mode: follower.