Setting Up a ZooKeeper Cluster on Kubernetes
1. Modify the official image's startup script
To run a ZooKeeper cluster as a StatefulSet, each replica needs a distinct myid, but the official image writes a fixed value by default, so the entrypoint script at the end of the image build has to be modified. The upstream Dockerfile and script are here:
https://github.com/31z4/zookeeper-docker/tree/master/3.6.2
Modified file:
# git clone https://github.com/31z4/zookeeper-docker.git
# cd zookeeper-docker/3.6.2/
# cat docker-entrypoint.sh
#!/bin/bash

set -e

# Allow the container to be started with `--user`
if [[ "$1" = 'zkServer.sh' && "$(id -u)" = '0' ]]; then
    chown -R zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" "$ZOO_LOG_DIR" "$ZOO_CONF_DIR"
    exec gosu zookeeper "$0" "$@"
fi

# Generate the config only if it doesn't exist
if [[ ! -f "$ZOO_CONF_DIR/zoo.cfg" ]]; then
    CONFIG="$ZOO_CONF_DIR/zoo.cfg"
    # The block below generates zoo.cfg; if needed, these values can be
    # passed in as env variables defined in the Kubernetes manifest
    {
        echo "dataDir=$ZOO_DATA_DIR"
        echo "dataLogDir=$ZOO_DATA_LOG_DIR"
        echo "tickTime=$ZOO_TICK_TIME"
        echo "initLimit=$ZOO_INIT_LIMIT"
        echo "syncLimit=$ZOO_SYNC_LIMIT"
        echo "autopurge.snapRetainCount=$ZOO_AUTOPURGE_SNAPRETAINCOUNT"
        echo "autopurge.purgeInterval=$ZOO_AUTOPURGE_PURGEINTERVAL"
        echo "maxClientCnxns=$ZOO_MAX_CLIENT_CNXNS"
        echo "standaloneEnabled=$ZOO_STANDALONE_ENABLED"
        echo "admin.enableServer=$ZOO_ADMINSERVER_ENABLED"
    } >> "$CONFIG"

    if [[ -z $ZOO_SERVERS ]]; then
        ZOO_SERVERS="server.1=localhost:2888:3888;2181"
    fi

    for server in $ZOO_SERVERS; do
        echo "$server" >> "$CONFIG"
    done

    if [[ -n $ZOO_4LW_COMMANDS_WHITELIST ]]; then
        echo "4lw.commands.whitelist=$ZOO_4LW_COMMANDS_WHITELIST" >> "$CONFIG"
    fi

    for cfg_extra_entry in $ZOO_CFG_EXTRA; do
        echo "$cfg_extra_entry" >> "$CONFIG"
    done
fi

# Added line: set ZOO_MY_ID to the StatefulSet hostname ordinal plus 1
ZOO_MY_ID=$(($(hostname | sed s/.*-//) + 1))

# Write myid only if it doesn't exist
if [[ ! -f "$ZOO_DATA_DIR/myid" ]]; then
    echo "${ZOO_MY_ID:-1}" > "$ZOO_DATA_DIR/myid"
fi

exec "$@"

# docker build -t zookeeper:3.6.2-c ./    # build the ZooKeeper image
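The only change to the upstream script is the ZOO_MY_ID line. A StatefulSet pod's hostname ends in its ordinal (zookeeper-cluster-test-0, -1, -2), while ZooKeeper server IDs start at 1, hence the +1. The derivation can be sketched without a cluster, using hypothetical pod names matching the manifests below:

```shell
# Simulate the hostnames a 3-replica StatefulSet would assign and derive
# each pod's myid, exactly as the added entrypoint line does with $(hostname).
for host in zookeeper-cluster-test-0 zookeeper-cluster-test-1 zookeeper-cluster-test-2; do
    # Strip everything up to the last '-' to get the ordinal, then add 1
    ordinal=$(echo "$host" | sed 's/.*-//')
    echo "$host -> myid=$((ordinal + 1))"
done
# Prints:
# zookeeper-cluster-test-0 -> myid=1
# zookeeper-cluster-test-1 -> myid=2
# zookeeper-cluster-test-2 -> myid=3
```

The derived ID must match the `server.N=` entries in ZOO_SERVERS, which is why the StatefulSet's stable, ordinal-suffixed pod names matter here.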
2. Create the Service manifest
# cat zookeeper-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-cluster-test-svc
  namespace: default
  labels:
    app: zookeeper-cluster-test
spec:
  clusterIP: None
  ports:
  - name: client            # client-facing port; on a public cloud use type: LoadBalancer
    port: 2181
    targetPort: 2181
    protocol: TCP
  - name: peer              # intra-cluster communication (the leader listens on this port)
    port: 2888
    targetPort: 2888
    protocol: TCP
  - name: leader-election   # used for leader election
    port: 3888
    targetPort: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-test
---
# On a public cloud, use the following manifest instead
# cat zookeeper-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-cluster-test-svc
  namespace: default
  labels:
    app: zookeeper-cluster-test
  annotations:
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-pqqeergv  # subnet ID
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: 2181-tcp
    port: 2181
    targetPort: 2181
    protocol: TCP
  - name: peer
    port: 2888
    targetPort: 2888
    protocol: TCP
  - name: leader-election
    port: 3888
    targetPort: 3888
    protocol: TCP
  - name: web
    port: 8080
    targetPort: 8080
  selector:
    app: zookeeper-cluster-test
  type: LoadBalancer
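clusterIP: None makes the first Service headless: instead of one virtual IP, DNS gives every StatefulSet pod a stable record of the form <pod>.<service>.<namespace>.svc.cluster.local, which is what the ZOO_SERVERS entries in the next section rely on. A sketch of that naming pattern, using the (hypothetical) names from the manifests in this article:

```shell
# Compose the stable per-pod DNS name a headless Service provides.
pod=zookeeper-cluster-test-0
svc=zookeeper-cluster-test-svc
ns=default
fqdn="$pod.$svc.$ns.svc.cluster.local"
echo "$fqdn"
# Prints: zookeeper-cluster-test-0.zookeeper-cluster-test-svc.default.svc.cluster.local
```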
3. The ZooKeeper StatefulSet manifest
# cat zookeeper.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper-cluster-test
  namespace: default
  labels:
    app: zookeeper-cluster-test
spec:
  serviceName: zookeeper-cluster-test-svc
  replicas: 3
  selector:
    matchLabels:
      app: zookeeper-cluster-test
  template:
    metadata:
      labels:
        app: zookeeper-cluster-test
    spec:
      containers:
      - name: zookeeper
        imagePullPolicy: Always
        image: zookeeper:3.6.2-c
        resources:
          requests:
            cpu: 200m
            memory: 201Mi
          limits:
            cpu: 500m
            memory: 1024Mi
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: leader
        - containerPort: 3888
          name: leader-election
          protocol: TCP
        volumeMounts:
        - name: zookeeper-data
          mountPath: "/data/"
        readinessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          periodSeconds: 5
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          failureThreshold: 30
          periodSeconds: 10
        env:
        - name: ZOO_STANDALONE_ENABLED
          value: "false"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster-test-0.zookeeper-cluster-test-svc.default.svc.cluster.local:2888:3888;2181 server.2=zookeeper-cluster-test-1.zookeeper-cluster-test-svc.default.svc.cluster.local:2888:3888;2181 server.3=zookeeper-cluster-test-2.zookeeper-cluster-test-svc.default.svc.cluster.local:2888:3888;2181"
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-data
    spec:
      storageClassName: disk-test
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
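The long ZOO_SERVERS value is easy to get wrong when typed by hand: the server ID must be the pod ordinal plus 1, and the host part must be the pod's headless-Service DNS name. A sketch of generating it from the replica count, assuming the same StatefulSet, Service, and namespace names as the manifests in this article:

```shell
# Generate the ZOO_SERVERS string for an N-replica StatefulSet so the
# server IDs (ordinal + 1) and per-pod DNS names stay consistent.
# The names below are assumptions taken from the manifests above.
REPLICAS=3
STS=zookeeper-cluster-test
SVC=zookeeper-cluster-test-svc
NS=default

servers=""
for i in $(seq 0 $((REPLICAS - 1))); do
    servers="$servers server.$((i + 1))=$STS-$i.$SVC.$NS.svc.cluster.local:2888:3888;2181"
done
servers=${servers# }   # drop the leading space

echo "$servers"
```

The output matches the ZOO_SERVERS value in the StatefulSet above, and scaling the cluster only requires changing REPLICAS here and replicas in the manifest together.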
4. Verify the cluster from inside a container
# kubectl exec -it zookeeper-cluster-test-0 bash
root@zookeeper-cluster-test-0:/apache-zookeeper-3.6.2-bin# apt-get update -y && apt-get install procps iputils-ping net-tools -y
root@zookeeper-cluster-test-0:/apache-zookeeper-3.6.2-bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
root@zookeeper-cluster-test-0:/apache-zookeeper-3.6.2-bin# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address       Foreign Address   State    PID/Program name
tcp   0      0      0.0.0.0:2181        0.0.0.0:*         LISTEN   -
tcp   0      0      0.0.0.0:38509       0.0.0.0:*         LISTEN   -
tcp   0      0      172.16.0.80:3888    0.0.0.0:*         LISTEN   -
tcp   0      0      0.0.0.0:8080        0.0.0.0:*         LISTEN   -
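The Mode: line is the part of zkServer.sh status that confirms cluster membership: across the three pods, one replica should report leader and the other two follower. A small sketch that parses the role out of the status output shown above (replayed from a here-doc so it runs without a cluster; against a live pod you would pipe the real command instead):

```shell
# Parse the replica's role from `zkServer.sh status` output.
# The sample text is the output captured above.
status_output=$(cat <<'EOF'
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
EOF
)
mode=$(printf '%s\n' "$status_output" | awk -F': ' '/^Mode:/ {print $2}')
echo "$mode"
# Prints: follower
```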