Installing a High-Availability Hadoop Ecosystem (Part 2): Installing ZooKeeper
2. Installing ZooKeeper
2.1. Extracting the package
※ Run the following on each of the 3 servers.
tar -xf ~/install/zookeeper-3.4.9.tar.gz -C /opt/cloud/packages
ln -s /opt/cloud/packages/zookeeper-3.4.9 /opt/cloud/bin/zookeeper
ln -s /opt/cloud/packages/zookeeper-3.4.9/conf /opt/cloud/etc/zookeeper
mkdir -p /opt/cloud/data/zookeeper/dat
mkdir -p /opt/cloud/data/zookeeper/logdat
mkdir -p /opt/cloud/logs/zookeeper
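The layout above can be rehearsed safely before touching the real servers. The sketch below wraps the same `mkdir`/`ln` steps in a hypothetical helper function, `setup_zk_layout`, parameterized by a base directory (on the real servers the base is `/opt/cloud`); the function name and parameters are assumptions for illustration.

```shell
# Hypothetical helper: recreates the directory layout from section 2.1
# under an arbitrary base directory, so it can be tested in a sandbox.
setup_zk_layout() {
  local base="$1" version="$2"
  # package directory (normally created by tar -xf) plus its conf dir
  mkdir -p "$base/packages/zookeeper-$version/conf"
  # target directories for symlinks, data, and logs
  mkdir -p "$base/bin" "$base/etc" \
           "$base/data/zookeeper/dat" \
           "$base/data/zookeeper/logdat" \
           "$base/logs/zookeeper"
  # stable paths pointing into the versioned package
  ln -s "$base/packages/zookeeper-$version"      "$base/bin/zookeeper"
  ln -s "$base/packages/zookeeper-$version/conf" "$base/etc/zookeeper"
}
```

Keeping the version only in `packages/` and linking to it from `bin/` and `etc/` means a later upgrade only has to repoint two symlinks.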
2.2. Editing the configuration files
2.2.1. Editing zoo.cfg
mv /opt/cloud/etc/zookeeper/zoo_sample.cfg /opt/cloud/etc/zookeeper/zoo.cfg
vi /opt/cloud/etc/zookeeper/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/cloud/data/zookeeper/dat
dataLogDir=/opt/cloud/data/zookeeper/logdat
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=100
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=5
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=6
# server.A=B:C:D
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
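A ZooKeeper ensemble needs a strict majority of the `server.N` entries alive to serve requests, so with 3 servers the quorum is 2. A small sketch that derives this from a zoo.cfg (the function name `zk_quorum_size` is an assumption, not a ZooKeeper tool):

```shell
# Count the server.N lines in a zoo.cfg and print the quorum size
# (the smallest strict majority of the ensemble).
zk_quorum_size() {
  local cfg="$1" n
  n=$(grep -c '^server\.[0-9]\+=' "$cfg")
  echo $(( n / 2 + 1 ))
}
```

This is why 3 servers tolerate 1 failure but 4 servers still only tolerate 1: quorum grows to 3, so odd-sized ensembles are the usual choice.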
2.2.2. Editing the log configuration
vi /opt/cloud/etc/zookeeper/log4j.properties
Change these settings:
zookeeper.root.logger=INFO, DRFA
zookeeper.log.dir=/opt/cloud/logs/zookeeper
Add the DRFA appender definition:
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.Append=true
log4j.appender.DRFA.DatePattern='.'yyyy-MM-dd
log4j.appender.DRFA.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.DRFA.Threshold=${zookeeper.log.threshold}
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.DRFA.Encoding=UTF-8
#log4j.appender.DRFA.MaxFileSize=20MB
2.2.3. Copying to the other two servers
scp /opt/cloud/etc/zookeeper/zoo.cfg hadoop2:/opt/cloud/etc/zookeeper
scp /opt/cloud/etc/zookeeper/log4j.properties hadoop2:/opt/cloud/etc/zookeeper
scp /opt/cloud/etc/zookeeper/zoo.cfg hadoop3:/opt/cloud/etc/zookeeper
scp /opt/cloud/etc/zookeeper/log4j.properties hadoop3:/opt/cloud/etc/zookeeper
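The four `scp` commands above can also be written as a loop over hosts and files. The sketch below is a hypothetical helper with a `DRY_RUN` switch (an assumption, not part of the original steps) so the commands can be previewed before anything is copied:

```shell
# Push both edited config files to the other two servers.
# Set DRY_RUN=echo to print the scp commands instead of running them.
push_conf() {
  local run="${DRY_RUN:-}"
  local h f
  for h in hadoop2 hadoop3; do
    for f in zoo.cfg log4j.properties; do
      $run scp "/opt/cloud/etc/zookeeper/$f" "$h:/opt/cloud/etc/zookeeper"
    done
  done
}
```

Usage: `DRY_RUN=echo push_conf` to preview, then `push_conf` to copy.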
2.3. Creating myid
Create a myid file under the dataDir directory on each machine, containing the value A from that machine's server.A entry in zoo.cfg.
ssh hadoop1 'echo 1 >/opt/cloud/data/zookeeper/dat/myid'
ssh hadoop2 'echo 2 >/opt/cloud/data/zookeeper/dat/myid'
ssh hadoop3 'echo 3 >/opt/cloud/data/zookeeper/dat/myid'
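A myid that does not match the host's server.A entry is a classic cause of a node refusing to join the ensemble. The hypothetical checker below (function name and parameters are assumptions) compares a host's myid file against its entry in zoo.cfg:

```shell
# Succeed only if the myid file content equals the A in this host's
# server.A=host:... line in zoo.cfg.
check_myid() {
  local cfg="$1" host="$2" myid_file="$3"
  local expected actual
  expected=$(sed -n "s/^server\.\([0-9]\+\)=$host:.*/\1/p" "$cfg")
  actual=$(cat "$myid_file")
  [ -n "$expected" ] && [ "$expected" = "$actual" ]
}
```

For example, on hadoop2 one would expect `check_myid zoo.cfg hadoop2 /opt/cloud/data/zookeeper/dat/myid` to succeed.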
2.4. Setting environment variables
vi ~/.bashrc
Add:
export ZOO_HOME=/opt/cloud/bin/zookeeper
export ZOOCFGDIR=${ZOO_HOME}/conf
export ZOO_LOG_DIR=/opt/cloud/logs/zookeeper
export PATH=$ZOO_HOME/bin:$PATH
Apply immediately:
source ~/.bashrc
Copy to the other two servers:
scp ~/.bashrc hadoop2:/home/hadoop
scp ~/.bashrc hadoop3:/home/hadoop
2.5. Running manually
1. Start ZooKeeper
zkServer.sh start
2. Run the jps command to check the processes:
QuorumPeerMain
Jps
QuorumPeerMain is the ZooKeeper process, so the server started normally.
3. Stop the ZooKeeper process
zkServer.sh stop
4. Start the ZooKeeper cluster
[hadoop@hadoop1 ~]$ cexec 'zkServer.sh start'
************************* cloud *************************
--------- hadoop1---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
--------- hadoop2---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
--------- hadoop3---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
5. Check the ZooKeeper cluster status
[hadoop@hadoop1 ~]$ cexec 'zkServer.sh status'
************************* cloud *************************
--------- hadoop1---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: follower
--------- hadoop2---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: follower
--------- hadoop3---------
ZooKeeper JMX enabled by default
Using config: /opt/cloud/bin/zookeeper/bin/../conf/zoo.cfg
Mode: leader
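A healthy ensemble reports exactly one `Mode: leader` and the rest as followers. The hypothetical helper below (the function name is an assumption) reads captured `zkServer.sh status` output, e.g. piped from `cexec`, on stdin and succeeds only in that case:

```shell
# Succeed only if the status output on stdin contains exactly one
# "Mode: leader" line; zero or more than one indicates a problem.
one_leader() {
  [ "$(grep -c 'Mode: leader')" -eq 1 ]
}
```

Usage sketch: `cexec 'zkServer.sh status' | one_leader && echo "cluster healthy"`.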
6. Start the client shell
zkCli.sh
ls /zookeeper
ls /zookeeper/quota
2.6. Starting automatically at boot
vi /opt/cloud/bin/zookeeper/bin/zkServer.sh
Find:

nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \

and replace it with:

nohup "$JAVA" "-Dlog4j.configuration=file:${ZOOCFGDIR}/log4j.properties" \

This makes zkServer.sh load the log4j.properties edited above instead of overriding the logger settings on the command line.
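Instead of editing by hand, the same substitution can be applied with sed. This is a sketch, assuming the line in zkServer.sh matches the text quoted above exactly; the function name and file-path parameter are illustrative:

```shell
# Replace the two -Dzookeeper.* JVM flags on the nohup line with a single
# -Dlog4j.configuration flag pointing at our edited log4j.properties.
patch_zkserver() {
  local f="$1"
  sed -i 's|"-Dzookeeper\.log\.dir=${ZOO_LOG_DIR}" "-Dzookeeper\.root\.logger=${ZOO_LOG4J_PROP}"|"-Dlog4j.configuration=file:${ZOOCFGDIR}/log4j.properties"|' "$f"
}
```

Running `patch_zkserver /opt/cloud/bin/zookeeper/bin/zkServer.sh` on one server and then copying the file out (as below) keeps all three hosts identical.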
Copy to the other two servers:
scp /opt/cloud/bin/zookeeper/bin/zkEnv.sh hadoop2:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkServer.sh hadoop2:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkEnv.sh hadoop3:/opt/cloud/bin/zookeeper/bin/
scp /opt/cloud/bin/zookeeper/bin/zkServer.sh hadoop3:/opt/cloud/bin/zookeeper/bin/
vi /etc/systemd/system/zookeeper.service
[Unit]
Description=Zookeeper service
After=network.target

[Service]
User=hadoop
Group=hadoop
Type=forking
Environment=ZOO_HOME=/opt/cloud/bin/zookeeper
Environment=ZOOCFGDIR=/opt/cloud/bin/zookeeper/conf
Environment=ZOO_LOG_DIR=/opt/cloud/logs/zookeeper
ExecStart=/usr/bin/sh -c '/opt/cloud/bin/zookeeper/bin/zkServer.sh start'
ExecStop=/usr/bin/sh -c '/opt/cloud/bin/zookeeper/bin/zkServer.sh stop'

[Install]
WantedBy=multi-user.target
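Before running daemon-reload it is worth sanity-checking the unit file, since a missing ExecStart or WantedBy fails silently until boot. A minimal lint sketch (the function name is an assumption; it only greps for the keys this unit relies on, it is not a substitute for `systemd-analyze verify`):

```shell
# Minimal check that a unit file contains the sections and keys the
# zookeeper service above depends on.
unit_ok() {
  local f="$1"
  grep -q '^\[Service\]' "$f" &&
  grep -q '^Type=forking' "$f" &&
  grep -q '^ExecStart=' "$f" &&
  grep -q '^WantedBy=multi-user.target' "$f"
}
```

`Type=forking` matters here because `zkServer.sh start` backgrounds the JVM with nohup and exits; with the default `Type=simple`, systemd would consider the service dead as soon as the script returns.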
Copy to the other two servers:
scp /etc/systemd/system/zookeeper.service hadoop2:/etc/systemd/system/ scp /etc/systemd/system/zookeeper.service hadoop3:/etc/systemd/system/
Reload unit files: systemctl daemon-reload
Start ZooKeeper: systemctl start zookeeper
Stop ZooKeeper: systemctl stop zookeeper
Check process status and logs (important): systemctl status zookeeper
Enable start at boot: systemctl enable zookeeper
Disable start at boot: systemctl disable zookeeper
Start the service and set it to start automatically:
systemctl daemon-reload
systemctl start zookeeper
systemctl status zookeeper
systemctl enable zookeeper
2.7. Uninstalling
Perform these steps as the root user.
- Stop and remove the zookeeper service
systemctl stop zookeeper
systemctl disable zookeeper
rm -f /etc/systemd/system/zookeeper.service
- Restore the environment variables
vi ~/.bashrc
Delete the ZooKeeper-related lines.
- Delete the remaining files
rm -rf /opt/cloud/bin/zookeeper
rm -f /opt/cloud/etc/zookeeper
rm -rf /opt/cloud/data/zookeeper
rm -rf /opt/cloud/logs/zookeeper
rm -rf /opt/cloud/packages/zookeeper-3.4.9

Note that /opt/cloud/bin/zookeeper and /opt/cloud/etc/zookeeper are the symlinks created in section 2.1, so removing them without a trailing slash deletes only the links.
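The removal steps can also be rehearsed safely before running them on the real servers. This sketch parameterizes them by a base directory (on the real servers the base is `/opt/cloud`); the function name is an assumption:

```shell
# Remove all ZooKeeper files under a given base directory: the bin/ and
# etc/ symlinks, data, logs, and the versioned package itself.
remove_zk_files() {
  local base="$1" version="$2"
  rm -f  "$base/bin/zookeeper" "$base/etc/zookeeper"
  rm -rf "$base/data/zookeeper" \
         "$base/logs/zookeeper" \
         "$base/packages/zookeeper-$version"
}
```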