Setting up Kafka on k8s

1. Official site

  https://kafka.apache.org

2. k8s deployment

2.1. Building the image

Dockerfile
# Base image: the original omits the FROM line; Ubuntu 20.04 is assumed here,
# since it provides the openjdk-8-jre-headless package installed below.
FROM ubuntu:20.04

ENV KAFKA_USER=kafka \
KAFKA_DATA_DIR=/var/lib/kafka/data \
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 \
KAFKA_HOME=/opt/kafka \
PATH=$PATH:/opt/kafka/bin

ARG KAFKA_VERSION=3.3.2
ARG SCALA_VERSION=2.13


RUN set -x \
    && apt-get update \
    && apt-get install -y openjdk-8-jre-headless wget gpg vim curl \
    && wget -q "https://downloads.apache.org/kafka/$KAFKA_VERSION/kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz" \
    && wget -q "https://downloads.apache.org/kafka/$KAFKA_VERSION/kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz.asc" \
    && wget -q "https://downloads.apache.org/kafka/KEYS" \
    && export GNUPGHOME="$(mktemp -d)" \
    && gpg --import KEYS \
    && gpg --batch --verify "kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz.asc" "kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz" \
    && tar -xzf "kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz" -C /opt \
    && rm -r "$GNUPGHOME" "kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz" "kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz.asc"

RUN set -x \
    && ln -s /opt/kafka_$SCALA_VERSION-$KAFKA_VERSION $KAFKA_HOME \
    && useradd $KAFKA_USER \
    && mkdir -p $KAFKA_DATA_DIR \
    && chown -R "$KAFKA_USER:$KAFKA_USER"  $KAFKA_HOME \
    && chown -R "$KAFKA_USER:$KAFKA_USER"  $KAFKA_DATA_DIR

Build the image:

sudo docker build -t duruo850/kafka:v3.3.2 .
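
The StatefulSet below pulls this image from a registry with imagePullPolicy: Always, so push it after building (a sketch, assuming a Docker Hub login for the duruo850 account):

sudo docker push duruo850/kafka:v3.3.2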

2.2. Deploying the PersistentVolumes

PersistentVolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv1
  labels:
    type: kafka
spec:
  storageClassName: kafka
  capacity:
    storage: 300Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/opt/zookeeper_data1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv2
  labels:
    type: kafka
spec:
  storageClassName: kafka
  capacity:
    storage: 300Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/opt/zookeeper_data2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv3
  labels:
    type: kafka
spec:
  storageClassName: kafka
  capacity:
    storage: 300Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/opt/zookeeper_data3"

The three PersistentVolumes must use different hostPath directories; only if they are placed on different machines can the same path be reused.

sudo kubectl apply -f PersistentVolume.yaml
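
The hostPath directories must exist on the node before the pods mount them; create them and confirm the volumes show as Available (a sketch):

sudo mkdir -p /opt/zookeeper_data1 /opt/zookeeper_data2 /opt/zookeeper_data3
sudo kubectl get pv -l type=kafka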

2.3. Setting up the namespace

namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kafka

Apply the namespace:

sudo kubectl apply -f namespace.yaml
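
Confirm the namespace exists:

sudo kubectl get namespace kafka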

2.4. Deploying Kafka on k8s

 

kafka.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  namespace: kafka
  labels:
    app: kafka
spec:
  ports:
    - port: 9092
      name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: kafka
    spec:
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      #      nodeSelector:
      #          deploy-queue: "yes"
      #      affinity:
      #        podAntiAffinity:
      #          requiredDuringSchedulingIgnoredDuringExecution:
      #            - labelSelector:
      #                matchExpressions:
      #                  - key: "app"
      #                    operator: In
      #                    values:
      #                    - kafka
      #              topologyKey: "kubernetes.io/hostname"
      #        podAffinity:
      #          preferredDuringSchedulingIgnoredDuringExecution:
      #             - weight: 1
      #               podAffinityTerm:
      #                 labelSelector:
      #                    matchExpressions:
      #                      - key: "app"
      #                        operator: In
      #                        values:
      #                        - zk
      #                 topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: kafka
          imagePullPolicy: Always
          image: duruo850/kafka:v3.3.2
          resources:
            requests:
              memory: "100Mi"
              cpu: 100m
          ports:
            - containerPort: 9092
              name: server
          command:
            - sh
            - -c
            - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zk-0.zk-hs.zookeeper.svc.cluster.local:2181,zk-1.zk-hs.zookeeper.svc.cluster.local:2181,zk-2.zk-hs.zookeeper.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=4 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 \
          --override security.inter.broker.protocol=SASL_PLAINTEXT \
          --override sasl.enabled.mechanisms=PLAIN \
          --override sasl.mechanism.inter.broker.protocol=PLAIN"
          env:
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx512M -Xms512M"
            - name: KAFKA_OPTS
              value: "-Dlogging.level=INFO -Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/kafka
          readinessProbe:
            tcpSocket:
              port: 9092
            timeoutSeconds: 1
            initialDelaySeconds: 5
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: kafka
        resources:
          requests:
            storage:  300Mi
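
Each broker derives its broker.id from the ordinal suffix of its StatefulSet hostname: the shell expansion ${HOSTNAME##*-} in the start command strips everything up to the last '-'. For example:

# demonstration of the parameter expansion used in the start command above
HOSTNAME=kafka-2
echo "${HOSTNAME##*-}"   # prints 2, so pod kafka-2 starts with broker.id=2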

Set nodeSelector/affinity according to your actual environment.
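
Note that KAFKA_OPTS references /opt/kafka/config/kafka_server_jaas.conf, which the Dockerfile above never creates. A minimal sketch of such a file for the PLAIN mechanism follows; its contents and credentials are assumptions, not from the original, so adjust them and bake the file into the image (or mount it from a ConfigMap):

# hypothetical JAAS config for sasl.enabled.mechanisms=PLAIN; place it at
# /opt/kafka/config/kafka_server_jaas.conf inside the container
cat > kafka_server_jaas.conf <<'EOF'
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
EOF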

2.5. Deployment notes

2.5.1. PersistentVolume configuration:

The StatefulSet depends on the PersistentVolumes, so the three PersistentVolumes must be created first:

volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: kafka
        resources:
          requests:
            storage: 300Mi
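
Once the StatefulSet is running, each replica gets its own claim (datadir-kafka-0, datadir-kafka-1, datadir-kafka-2) that should bind to one of the PersistentVolumes created earlier; check with:

sudo kubectl -n kafka get pvc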

2.5.2. User configuration:

By default the container process runs as UID 1000 and GID 1000 (the kafka user created in the image):

securityContext:
        runAsUser: 1000
        fsGroup: 1000

This user must be granted access to the PersistentVolume host paths:

sudo chown -R 1000:1000 /opt/zookeeper_data1
sudo chown -R 1000:1000 /opt/zookeeper_data2
sudo chown -R 1000:1000 /opt/zookeeper_data3
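
Ownership can be verified with, for example:

ls -ldn /opt/zookeeper_data1   # owner and group should both show 1000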

2.5.3. Affinity configuration:

affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - kafka
              topologyKey: "kubernetes.io/hostname"

The podAntiAffinity configuration above means only one pod can be scheduled per node.

If you only have one machine, remove it. I had just two nodes, so I removed the affinity settings when deploying.
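
After deployment you can check how the pods were spread across the nodes (a sketch):

sudo kubectl -n kafka get pods -o wide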

2.6. Deploy

sudo kubectl apply -f kafka.yaml
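
The StatefulSet creates the pods in order (kafka-0, then kafka-1, then kafka-2); the rollout can be watched with:

sudo kubectl -n kafka get pods -w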

3. Verifying basic functionality

3.1. View a pod's configuration file

sudo kubectl exec -it zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
    #This file was autogenerated DO NOT EDIT
    clientPort=2181
    dataDir=/var/lib/zookeeper/data
    dataLogDir=/var/lib/zookeeper/data/log
    tickTime=2000
    initLimit=10
    syncLimit=5
    maxClientCnxns=60
    minSessionTimeout=4000
    maxSessionTimeout=40000
    autopurge.snapRetainCount=3
    autopurge.purgeInteval=12
    server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
    server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
    server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

3.2. Get the full ZooKeeper hostnames

for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
    zk-0.zk-hs.default.svc.cluster.local
    zk-1.zk-hs.default.svc.cluster.local
    zk-2.zk-hs.default.svc.cluster.local

 

3.3. Check the cluster status: the roles of the three nodes

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: leader
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower

3.4. Check the myid of each node

for i in 0 1 2; do echo -n "zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do echo -n "zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
    zk-01
    zk-11
    zk-21

3.5. Ping the ZooKeeper hosts from another pod

sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-0.zk-hs.default.svc.cluster.local
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-0.zk-hs.default.svc.cluster.local
    PING zk-0.zk-hs.default.svc.cluster.local (10.244.1.9) 56(84) bytes of data.
    64 bytes from zk-0.zk-hs.default.svc.cluster.local (10.244.1.9): icmp_seq=1 ttl=64 time=0.340 ms
    64 bytes from zk-0.zk-hs.default.svc.cluster.local (10.244.1.9): icmp_seq=2 ttl=64 time=0.071 ms
    64 bytes from zk-0.zk-hs.default.svc.cluster.local (10.244.1.9): icmp_seq=3 ttl=64 time=0.053 ms
    ^C
    --- zk-0.zk-hs.default.svc.cluster.local ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2031ms
sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-1.zk-hs.default.svc.cluster.local
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-1.zk-hs.default.svc.cluster.local
    PING zk-1.zk-hs.default.svc.cluster.local (10.244.1.10) 56(84) bytes of data.
    64 bytes from 10-244-1-10.zk-cs.default.svc.cluster.local (10.244.1.10): icmp_seq=1 ttl=64 time=0.226 ms
    64 bytes from 10-244-1-10.zk-cs.default.svc.cluster.local (10.244.1.10): icmp_seq=2 ttl=64 time=0.044 ms
    64 bytes from 10-244-1-10.zk-cs.default.svc.cluster.local (10.244.1.10): icmp_seq=3 ttl=64 time=0.042 ms
    ^C
    --- zk-1.zk-hs.default.svc.cluster.local ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2038ms
    rtt min/avg/max/mdev = 0.042/0.104/0.226/0.086 ms
sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-2.zk-hs.default.svc.cluster.local
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-2.zk-hs.default.svc.cluster.local
    PING zk-2.zk-hs.default.svc.cluster.local (10.244.1.11) 56(84) bytes of data.
    64 bytes from zk-2.zk-hs.default.svc.cluster.local (10.244.1.11): icmp_seq=1 ttl=64 time=0.308 ms
    64 bytes from zk-2.zk-hs.default.svc.cluster.local (10.244.1.11): icmp_seq=2 ttl=64 time=0.080 ms
    64 bytes from zk-2.zk-hs.default.svc.cluster.local (10.244.1.11): icmp_seq=3 ttl=64 time=0.295 ms
    ^C
    --- zk-2.zk-hs.default.svc.cluster.local ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2024ms
    rtt min/avg/max/mdev = 0.080/0.227/0.308/0.105 ms

4. Verifying cluster functionality

4.1. Create data in container 0

Enter container 0, start the zkCli.sh client, run create /zk-test hsssss, then check the result with get /zk-test:
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-0 -- /bin/bash
    root@zk-0:/# zkCli.sh
    Connecting to localhost:2181
    2023-01-31 08:49:55,163 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
    2023-01-31 08:49:55,166 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-0.zk-hs.default.svc.cluster.local
    2023-01-31 08:49:55,167 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
    2023-01-31 08:49:55,170 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2023-01-31 08:49:55,170 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
    2023-01-31 08:49:55,170 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
    2023-01-31 08:49:55,171 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2023-01-31 08:49:55,171 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2023-01-31 08:49:55,171 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2023-01-31 08:49:55,172 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2023-01-31 08:49:55,172 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2023-01-31 08:49:55,172 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-50-generic
    2023-01-31 08:49:55,173 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2023-01-31 08:49:55,173 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2023-01-31 08:49:55,173 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
    2023-01-31 08:49:55,175 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
    Welcome to ZooKeeper!
    2023-01-31 08:49:55,234 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2023-01-31 08:49:55,410 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2023-01-31 08:49:55,494 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x18606fc8b4a0000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0]
    [zk: localhost:2181(CONNECTED) 0] create /zk-test hsssss
    Created /zk-test
    [zk: localhost:2181(CONNECTED) 1] get /zk-test
    hsssss
    cZxid = 0x100000002
    ctime = Tue Jan 31 08:50:53 UTC 2023
    mZxid = 0x100000002
    mtime = Tue Jan 31 08:50:53 UTC 2023
    pZxid = 0x100000002
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0

 

4.2. Read the data from container 1

Enter container 1, start the zkCli.sh client, and verify the data with get /zk-test:
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-1 -- /bin/bash
    root@zk-1:/# zkCli.sh
    Connecting to localhost:2181
    2023-01-31 08:51:55,701 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
    2023-01-31 08:51:55,710 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-1.zk-hs.default.svc.cluster.local
    2023-01-31 08:51:55,713 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
    2023-01-31 08:51:55,717 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2023-01-31 08:51:55,721 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
    2023-01-31 08:51:55,721 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
    2023-01-31 08:51:55,722 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2023-01-31 08:51:55,722 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2023-01-31 08:51:55,722 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2023-01-31 08:51:55,723 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2023-01-31 08:51:55,723 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2023-01-31 08:51:55,724 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-50-generic
    2023-01-31 08:51:55,725 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2023-01-31 08:51:55,725 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2023-01-31 08:51:55,726 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
    2023-01-31 08:51:55,730 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
    Welcome to ZooKeeper!
    2023-01-31 08:51:55,777 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2023-01-31 08:51:55,943 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2023-01-31 08:51:56,014 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x28606fc8ba00000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0] get /zk-test
    hsssss
    cZxid = 0x100000002
    ctime = Tue Jan 31 08:50:53 UTC 2023
    mZxid = 0x100000002
    mtime = Tue Jan 31 08:50:53 UTC 2023
    pZxid = 0x100000002
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0

The data hsssss stored at /zk-test is retrieved successfully.

4.3. Delete pods to verify that the cluster recovers when pods die:

qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl get pods -l app=zk
    NAME   READY   STATUS    RESTARTS   AGE
    zk-0   1/1     Running   0          15m
    zk-1   1/1     Running   0          15m
    zk-2   1/1     Running   0          15m
qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl delete pods zk-1 zk-0
    pod "zk-1" deleted
    pod "zk-0" deleted
qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl get pods -l app=zk -w
    NAME   READY   STATUS    RESTARTS   AGE
    zk-0   1/1     Running   0          65s
    zk-1   1/1     Running   0          44s
    zk-2   1/1     Running   0          16m

As shown, zk-0 and zk-1 are automatically recreated after being deleted, and the cluster recovers.

4.4. Verify the cluster status after recovery:

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: leader
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower

4.5. Verify the data on the recreated nodes:

Enter container 0 again, start the zkCli.sh client, and fetch the data with get /zk-test:
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-0 -- /bin/bash
    root@zk-0:/# zkCli.sh
    Connecting to localhost:2181
    2023-01-31 09:00:14,553 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
    2023-01-31 09:00:14,556 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-0.zk-hs.default.svc.cluster.local
    2023-01-31 09:00:14,556 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-50-generic
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2023-01-31 09:00:14,561 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
    2023-01-31 09:00:14,562 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
    Welcome to ZooKeeper!
    2023-01-31 09:00:14,594 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2023-01-31 09:00:14,738 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2023-01-31 09:00:14,808 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x186070b91d80000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0] get /zk-test
    hsssss
    cZxid = 0x100000002
    ctime = Tue Jan 31 08:50:53 UTC 2023
    mZxid = 0x100000002
    mtime = Tue Jan 31 08:50:53 UTC 2023
    pZxid = 0x100000002
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0

As shown, the value of /zk-test is still hsssss.

This confirms that ZooKeeper is functioning correctly.
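
The checks above exercise the ZooKeeper ensemble that Kafka depends on. The brokers themselves can be smoke-tested by creating a topic and producing and consuming one message; a sketch, not from the original (the topic name smoke-test is arbitrary):

# create a topic, produce one message, then consume it back
sudo kubectl -n kafka exec kafka-0 -- kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --topic smoke-test --partitions 1 --replication-factor 3
echo "hello" | sudo kubectl -n kafka exec -i kafka-0 -- kafka-console-producer.sh \
    --bootstrap-server localhost:9092 --topic smoke-test
sudo kubectl -n kafka exec kafka-0 -- kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic smoke-test --from-beginning --max-messages 1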
