
The StatefulSet controller for stateful deployments

1. StatefulSet Overview

  • Deploys stateful applications
  • Gives each Pod an independent lifecycle, preserving Pod startup order and uniqueness

1. Stable, unique network identifiers and stable persistent storage

2. Ordered, graceful deployment, scaling, deletion, and termination

3. Ordered, automated rolling updates

Typical use case: databases

 

 

Differences between StatefulSet and Deployment:

StatefulSet Pods have an identity!

The three elements of that identity:

  • Domain name
  • Hostname
  • Storage (PVC)

 

Stateless workloads suit web, API, and microservice deployments: the Pods can run on any node and do not depend on backend persistent storage.

Stateful workloads suit applications that need a stable network identity, give each Pod its own storage, and must be scaled up or down in a defined order.

 

2. Normal Service vs. Headless Service


normal service:

A normal service reverse-proxies requests through a single cluster IP, e.g. 10.0.0.224:80, to its endpoints:

10.244.0.58:8080   10.244.1.78:8080    10.244.1.88:8080
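To see this mapping on a live cluster you can inspect the Service and its Endpoints. A minimal sketch, assuming a normal Service named web exists in the default namespace (the name is hypothetical):

# Show the Service's single cluster IP and port
kubectl get svc web

# Show the pod IP:port pairs the Service proxies to
kubectl get endpoints web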


 

headless service:

 

A headless service requires clusterIP: None and cannot define a nodePort.

web-headlessService.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: headless-svc
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web

statefulSet.yaml — its serviceName field references the headless service defined above, headless-svc

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  serviceName: "headless-svc"
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web

 


Every pod created by a StatefulSet gets a stable, ordered identifier of the form statefulSetName-index, e.g. web-0.
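A quick check (a sketch, assuming the two manifests above have been applied):

kubectl apply -f web-headlessService.yaml
kubectl apply -f statefulSet.yaml

# Pods are named <statefulSetName>-<index>: web-0, web-1, web-2
kubectl get pods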


 

Differences in Service DNS resolution


A normal service name resolves to the service's clusterIP.


Because a headless service's clusterIP is set to None, resolving the service name returns the IPs of the pods backing the service.


 

A record format with a ClusterIP (normal service):

<service-name>.<namespace-name>.svc.cluster.local

A record format with clusterIP: None (headless service):

<statefulsetName-index>.<service-name>.<namespace-name>.svc.cluster.local

Example: web-0.headless-svc.default.svc.cluster.local

Compared with the normal-service format, the records resolved for a headless service carry one extra leading <statefulSetName-index> component.
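A minimal sketch of checking both record types from inside the cluster, using a temporary debug pod (the pod name dns-test and the busybox image are arbitrary choices) against the headless-svc Service above:

kubectl run dns-test --rm -it --image=busybox:1.28 -- sh

# Inside the pod: a normal service (e.g. kubernetes.default) resolves to its single clusterIP
nslookup kubernetes.default.svc.cluster.local

# A headless service resolves to the individual pod IPs
nslookup headless-svc.default.svc.cluster.local

# Each StatefulSet pod also gets its own A record
nslookup web-0.headless-svc.default.svc.cluster.local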

 

Each pod managed by a StatefulSet also has a hostname of the form statefulSetName-index.
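For example (a sketch):

# The hostname inside the pod matches the pod name
kubectl exec web-0 -- hostname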


 

3. StatefulSet Storage

When a StatefulSet uses volumeClaimTemplates to create a PersistentVolume for each Pod, it also allocates and creates a numbered PVC for each pod.

By contrast, the dynamically provisioned PVs and PVCs created for a stateless Deployment carry no such numbering.
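As a sketch, the web StatefulSet above could be given per-pod storage by adding a claim template (the template name www and the StorageClass managed-nfs-storage are assumptions; substitute your own StorageClass and add a matching volumeMounts entry with name: www to the nginx container):

  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage   # assumption: your own StorageClass name
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

kubectl get pvc would then show one numbered claim per pod: www-web-0, www-web-1, www-web-2.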


StatefulSet deletion behavior

Run kubectl delete -f statefulSet.yaml

Note that deleting the StatefulSet object itself gives no guarantee about the order in which its pods terminate. For an ordered, graceful shutdown, scale the StatefulSet down to 0 first; the pods are then terminated in reverse ordinal order (the highest index first).
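For example (a sketch):

# Ordered shutdown: pods terminate in reverse order, web-2 -> web-1 -> web-0
kubectl scale statefulset web --replicas=0

# Then remove the StatefulSet and the headless service
kubectl delete -f statefulSet.yaml -f web-headlessService.yaml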


StatefulSet startup behavior

Pods are created one at a time in ordinal order (web-0, then web-1, then web-2); each pod is started only after the previous one is Running and Ready.


The PVs and PVCs are created in order as well.


This ensures that each pod claims the PV and PVC that correspond to it.

 

The storage of each pod is independent and is not shared between pods.

A file created in web-0 is not visible in web-1.
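A quick check (a sketch; it assumes the volumeClaimTemplates extension sketched above, with the volume mounted at /usr/share/nginx/html):

# Write a file on web-0's volume
kubectl exec web-0 -- sh -c 'echo hello > /usr/share/nginx/html/test.txt'

# web-1 has its own independent volume, so the file is not there
kubectl exec web-1 -- ls /usr/share/nginx/html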


 

4. StatefulSet Application Example (MySQL Cluster)

Create a directory for the StatefulSet application:

mkdir  statefulset-mysqlCluster

Create the ConfigMap

configmap.yaml — mainly holds the master/slave replication configuration

 

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [client]
    default-character-set=utf8mb4

    [mysql]
    default-character-set=utf8mb4

    [mysqld]
    # Enable the binary log
    log-bin
    binlog_expire_logs_seconds=2592000
    max_connections=10000
    # The time zone needs to be set inside the container
    default-time-zone='+8:00'
    character-set-client-handshake=FALSE
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci
    init_connect='SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci'
  slave.cnf: |
    # Apply this config only on slaves.
    [client]
    default-character-set=utf8mb4

    [mysql]
    default-character-set=utf8mb4

    [mysqld]
    # Make this instance read-only
    super-read-only
    max_connections=10000
    default-time-zone='+8:00'
    character-set-client-handshake=FALSE
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci
    init_connect='SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci'


 

Create the Service

service.yaml

 

apiVersion: v1
kind: Service
metadata:
  name: mysql-svc-master
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    name: mysql
  clusterIP: None

A headless Service has to be created first so that the slaves can reach the master; a slave can reach a specific pod via <pod-name>.mysql-svc-master.
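Once the StatefulSet below is running, the master pod is reachable at a stable DNS name (a sketch; the root account has an empty password because MYSQL_ALLOW_EMPTY_PASSWORD is set in the manifest):

# From another pod in the same namespace (short form: <pod-name>.<service-name>)
mysql -h mysql-ss-0.mysql-svc-master -u root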


 

Create the StatefulSet

 

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-ss
spec: 
  selector: 
    matchLabels: 
      app: mysql # has to match .spec.template.metadata.labels
  serviceName: mysql-svc-master
  replicas: 3
  template: 
    metadata:
      labels:
        app: mysql # has to match .spec.selector.matchLabels
    spec:
      imagePullSecrets:
      - name: myregistry
      initContainers:
      - name: init-mysql
        image: registry.cn-hangzhou.aliyuncs.com/benjamin-learn/mysql:8.0 
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf

          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf

          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: registry.cn-hangzhou.aliyuncs.com/benjamin-learn/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0

          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0

          # Clone data from previous peer.
          ncat --recv-only mysql-ss-$(($ordinal-1)).mysql-svc-master 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:8.0
        args: ["--default-authentication-plugin=mysql_native_password"]
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: mzmuer/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -s xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-ss-0.mysql-svc-master',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits: 
            cpu: 200m
            memory: 200Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      # Note: change this to the name of your own StorageClass
      storageClassName: managed-nfs-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
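A sketch of applying everything and watching the pods come up in order:

kubectl apply -f configmap.yaml -f service.yaml -f statefulSet.yaml

# Pods are created one at a time: mysql-ss-0, then mysql-ss-1, then mysql-ss-2
kubectl get pods -l app=mysql -w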


  • init-mysql
    Initializes the configuration files and the server-id file, giving the master and the slaves their different configuration files.
  • clone-mysql
    Initializes the MySQL data directory. A slave first takes a full backup with xtrabackup, cloning the data from the previous pod.
  • mysql
    Starts MySQL with the generated configuration.
  • xtrabackup
    Runs the statements that start replication on the slaves, and uses ncat to listen on port 3307 so that data can be streamed to a new node when the cluster is scaled out.

Verify data replication in the cluster

Connect to the master and create a database named test:
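For example (a sketch; the pod runs two containers, so -c mysql selects the MySQL container, and root has an empty password):

kubectl exec mysql-ss-0 -c mysql -- mysql -u root -e "CREATE DATABASE test;"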


Connect to a slave and check whether it has replicated from the master:
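For example (a sketch):

kubectl exec mysql-ss-1 -c mysql -- mysql -u root -e "SHOW DATABASES;"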


The slave has successfully replicated the test database from the master.

 

posted @ 2020-03-09 09:06  benjamin杨