Deploying ZooKeeper on K8s

To deploy NFS (used as the storage backend for the PVCs below), see: https://www.cnblogs.com/llds/p/17198194.html

I. ZooKeeper cluster layout

ZooKeeper will be deployed across the following nodes:

[root@k8s-master nfs-client]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   17d   v1.20.0
k8s-node1    Ready    <none>                 16d   v1.20.0
k8s-node2    Ready    <none>                 16d   v1.20.0
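
All of the manifests below target the middleware namespace; create it first if it does not already exist:

kubectl create namespace middleware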

II. Deploy the ConfigMap

1. The .yaml file


[root@k8s-master conf]# vim configmap/zk-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-scripts
  namespace: middleware
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
data:
  init-certs.sh: |-
    #!/bin/bash
  setup.sh: |-
    #!/bin/bash
    HOSTNAME="$(hostname -s)"
    echo "HOSTNAME  $HOSTNAME"
    if [[ -f "/bitnami/zookeeper/data/myid" ]]; then
        export ZOO_SERVER_ID="$(cat /bitnami/zookeeper/data/myid)"
    else
        if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
            ORD=${BASH_REMATCH[2]}
            export ZOO_SERVER_ID="$((ORD + 1 ))"
        else
            echo "Failed to get index from hostname $HOSTNAME"
            exit 1
        fi
    fi
    echo "ZOO_SERVER_ID  $ZOO_SERVER_ID"
    exec /entrypoint.sh /run.sh

Note: HOSTNAME here is the pod name, so ORD captures the ordinal suffix that the StatefulSet appends to the pod name.
For example, for zk-test-0 the ORD is 0, so ZOO_SERVER_ID becomes 1.
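
A minimal sketch of that mapping, runnable in any bash shell outside the cluster (the pod names are the ones this guide produces):

for HOSTNAME in zk-test-0 zk-test-1 zk-test-2; do
    if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
        ORD=${BASH_REMATCH[2]}
        echo "$HOSTNAME -> ORD=$ORD, ZOO_SERVER_ID=$((ORD + 1))"
    fi
done
# zk-test-0 -> ORD=0, ZOO_SERVER_ID=1
# zk-test-1 -> ORD=1, ZOO_SERVER_ID=2
# zk-test-2 -> ORD=2, ZOO_SERVER_ID=3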

2. Apply the .yaml file

kubectl apply -f configmap/zk-cm.yaml
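
To confirm that the ConfigMap landed in the middleware namespace and contains both scripts:

kubectl get configmap zk-scripts -n middleware -o yaml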

III. Deploy the Services

1. Deploy the headless Service, used for communication between the ZooKeeper nodes

1) The .yaml file
[root@k8s-master svc]# vim zk-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  namespace: middleware
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-client
      port: 2181
      targetPort: client
    - name: tcp-follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper

2) Apply the .yaml file

kubectl apply -f zk-svc-headless.yaml
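
Each ZooKeeper pod will get a stable DNS record of the form <podName>.zk-headless.middleware.svc.cluster.local, which is exactly what ZOO_SERVERS in the StatefulSet below relies on. Once the pods from section IV are running, a quick resolution check from a throwaway pod (the busybox image tag here is only an example):

kubectl get svc zk-headless -n middleware

kubectl run dns-test -n middleware --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup zk-test-0.zk-headless.middleware.svc.cluster.local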

2. Deploy the client Service, used by clients to access ZooKeeper

1) The .yaml file
[root@k8s-master svc]# vim zk-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-service
  namespace: middleware
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - name: tcp-client
      port: 2181
      targetPort: client
      nodePort: null
    - name: tcp-follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper

2) Apply the .yaml file

kubectl apply -f zk-svc.yaml
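
In-cluster clients can then reach the ensemble at zk-service.middleware.svc.cluster.local:2181. Once the pods are up, a minimal connectivity check using the whitelisted "ruok" four-letter-word command (again, the busybox image is only an example):

kubectl run zk-ruok-test -n middleware --rm -it --restart=Never --image=busybox:1.36 -- \
  sh -c 'echo ruok | nc -w 2 zk-service 2181'
# expected reply: imok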

IV. Deploy the StatefulSet

1. The .yaml file

[root@k8s-master conf]# cat zk.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk-test
  namespace: middleware
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/component: zookeeper
    role: zookeeper
spec:
  replicas: 3
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app.kubernetes.io/name: zookeeper
      app.kubernetes.io/component: zookeeper
  serviceName: zk-headless
  updateStrategy:
    rollingUpdate: {}
    type: RollingUpdate
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: zookeeper
        app.kubernetes.io/component: zookeeper
    spec:
      serviceAccountName: default
      affinity:
        podAntiAffinity:                                    # Pod anti-affinity
          preferredDuringSchedulingIgnoredDuringExecution:  # soft rule: prefer spreading the Pods across different nodes
          - weight: 49                                      # when several rules exist, their weights control the scheduling preference
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname           # use the node hostname label as the topology domain
              labelSelector:
                matchExpressions:
                - key: app.kubernetes.io/component
                  operator: In
                  values:
                  - zookeeper
      securityContext:
        fsGroup: 1001
      initContainers:
      containers:
        - name: zookeeper
          image: bitnami/zookeeper:3.8.0-debian-10-r0
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          command:
            - /scripts/setup.sh
          resources:                                       # requests equal to limits -> Guaranteed (highest) QoS class
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 500Mi
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: ZOO_DATA_LOG_DIR
              value: ""
            - name: ZOO_PORT_NUMBER
              value: "2181"
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "10"
            - name: ZOO_SYNC_LIMIT
              value: "5"
            - name: ZOO_PRE_ALLOC_SIZE
              value: "65536"
            - name: ZOO_SNAPCOUNT
              value: "100000"
            - name: ZOO_MAX_CLIENT_CNXNS
              value: "60"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr, mntr, ruok"
            - name: ZOO_LISTEN_ALLIPS_ENABLED
              value: "no"
            - name: ZOO_AUTOPURGE_INTERVAL
              value: "0"
            - name: ZOO_AUTOPURGE_RETAIN_COUNT
              value: "3"
            - name: ZOO_MAX_SESSION_TIMEOUT
              value: "40000"
            - name: ZOO_CFG_EXTRA
              value: "quorumListenOnAllIPs=true"
            - name: ZOO_SERVERS
              # Format: <podName>.<headless Service name>.<namespace>.svc.cluster.local:2888:3888::<serverId>
              # e.g. zk-test-0 is the pod name, zk-headless the headless Service, middleware the namespace
              value: "zk-test-0.zk-headless.middleware.svc.cluster.local:2888:3888::1,zk-test-1.zk-headless.middleware.svc.cluster.local:2888:3888::2,zk-test-2.zk-headless.middleware.svc.cluster.local:2888:3888::3"
            - name: ZOO_ENABLE_AUTH
              value: "no"
            - name: ZOO_HEAP_SIZE
              value: "1024"
            - name: ZOO_LOG_LEVEL
              value: "ERROR"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          ports:
            - name: client
              containerPort: 2181
            - name: follower
              containerPort: 2888
            - name: election
              containerPort: 3888
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
          readinessProbe:
            failureThreshold: 6
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
          volumeMounts:
            - name: scripts
              mountPath: /scripts/setup.sh
              subPath: setup.sh
            - name: zookeeper-data
              mountPath: /bitnami/zookeeper
      volumes:
        - name: scripts
          configMap:
            name: zk-scripts
            defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-data
    spec:
      storageClassName: zk-nfs-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

2. Apply the .yaml file

kubectl apply -f zk.yaml
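
After a minute or so the three pods should be Running and their PVCs Bound (the PVCs need the zk-nfs-storage StorageClass from the NFS provisioner linked at the top). A rough health check, using the names defined above, that also verifies one leader and two followers were elected:

kubectl get storageclass zk-nfs-storage
kubectl get pods -n middleware -l app.kubernetes.io/name=zookeeper -o wide
kubectl get pvc -n middleware

for i in 0 1 2; do
  kubectl exec -n middleware zk-test-$i -- bash -c 'echo srvr | nc -w 2 localhost 2181 | grep Mode'
done
# expected: "Mode: leader" on one pod and "Mode: follower" on the other two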
