
Repost: Deploying a MySQL Cluster on K8s

Original article: https://zhuanlan.zhihu.com/p/104951736

Deploy with kubectl

1. Deploy the ConfigMap

Create gemfield-mysql-configmap.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: gemfield-mysql
  labels:
    app: gemfield-mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only

Then apply it:

kubectl apply -f gemfield-mysql-configmap.yaml

Check the result:

gemfield@deepvac:/bigdata/gemfield$ kubectl describe configmap gemfield-mysql
Name:         gemfield-mysql
Namespace:    default
Labels:       app=gemfield-mysql
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"master.cnf":"# Apply this config only on the master.\n[mysqld]\nlog-bin\n","slave.cnf":"# Apply this config on...

Data
====
master.cnf:
----
# Apply this config only on the master.
[mysqld]
log-bin

slave.cnf:
----
# Apply this config only on slaves.
[mysqld]
super-read-only

Events:  <none>

Once the Pods start, this ConfigMap overrides the my.cnf settings of the MySQL servers. The configuration distinguishes master from slaves: the master is told to write a binary log (log-bin) so its changes can be replicated to the slaves, while the slaves are put into super-read-only mode so they reject all writes except those coming from replication.
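
Once the StatefulSet from step 3 is running, you can spot-check that each Pod picked up the right config. This is a hedged sketch rather than part of the original article; it only relies on the container names defined in the manifests below:

# The master (ordinal 0) should have master.cnf copied in by the init container
kubectl exec gemfield-mysql-0 -c gemfield-mysql -- cat /etc/mysql/conf.d/master.cnf
# A slave should report super-read-only in effect
kubectl exec gemfield-mysql-1 -c gemfield-mysql -- mysql -h 127.0.0.1 -e "SELECT @@super_read_only"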

2. Deploy the Services

Create gemfield-mysql-services.yaml:

# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: gemfield-mysql
  labels:
    app: gemfield-mysql
spec:
  ports:
  - name: gemfield-mysql
    port: 3306
  clusterIP: None
  selector:
    app: gemfield-mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: gemfield-mysql-0.gemfield-mysql.
apiVersion: v1
kind: Service
metadata:
  name: gemfield-mysql-read
  labels:
    app: gemfield-mysql
spec:
  ports:
  - name: gemfield-mysql
    port: 3306
  selector:
    app: gemfield-mysql

Deploy the Services:

kubectl apply -f gemfield-mysql-services.yaml

Check the result:

gemfield@deepvac:/bigdata/gemfield$ kubectl get svc | grep gemfield-mysql
gemfield-mysql        ClusterIP   None             <none>         3306/TCP                                   13s
gemfield-mysql-read   ClusterIP   10.106.228.197   <none>         3306/TCP                                   13s
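
The headless service (clusterIP: None) is what gives every StatefulSet Pod a stable DNS name of the form <pod-name>.gemfield-mysql, while gemfield-mysql-read is a normal ClusterIP service used for load-balanced reads. As a rough sanity check (a sketch, assuming cluster DNS works and the StatefulSet from the next step is already running; busybox:1.28 is just a convenient image with a working nslookup), you can resolve the per-Pod names from a throwaway Pod:

kubectl run dns-test --image=busybox:1.28 -i -t --rm --restart=Never -- \
  nslookup gemfield-mysql-0.gemfield-mysql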

3. Deploy the StatefulSet

Create gemfield-mysql-statefulset.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gemfield-mysql
spec:
  selector:
    matchLabels:
      app: gemfield-mysql
  serviceName: gemfield-mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: gemfield-mysql
    spec:
      initContainers:
      - name: init-gemfield-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-gemfield-mysql
        image: gcr.azk8s.cn/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only gemfield-mysql-$(($ordinal-1)).gemfield-mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: gemfield-mysql
        image: mysql:5.7
        #command: ['/bin/bash','-c','docker-entrypoint.sh mysqld']
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: gemfield-mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.azk8s.cn/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                        MASTER_HOST='gemfield-mysql-0.gemfield-mysql', \
                        MASTER_USER='root', \
                        MASTER_PASSWORD='', \
                        MASTER_CONNECT_RETRY=10; \
                      START SLAVE;" || exit 1
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      nodeSelector:
        kubernetes.io/hostname: deepvac
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: gemfield-mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "rook-ceph-block"
      resources:
        requests:
          storage: 10Gi

Deploy the StatefulSet:

kubectl apply -f gemfield-mysql-statefulset.yaml

Watch the Pods come up:

kubectl get pods -l app=gemfield-mysql --watch

Once the StatefulSet is fully up, you can confirm with:

gemfield@deepvac:/bigdata/gemfield$ kubectl get po | grep gemfield-mysql
gemfield-mysql-0              2/2     Running       0          93m
gemfield-mysql-1              2/2     Running       0          93m
gemfield-mysql-2              2/2     Running       0          92m
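
Optionally, you can also confirm that replication is actually running on a slave. This check is not in the original article, but it only uses the standard MySQL client already inside the Pod; Slave_IO_Running and Slave_SQL_Running should both show Yes:

kubectl exec gemfield-mysql-1 -c gemfield-mysql -- mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G"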

What needs special attention here? The volumeClaimTemplates section of the StatefulSet YAML, which is built specifically for StatefulSets; make sure you understand it (ask in the comments if you don't), otherwise you may run into the error "innodb unable to lock ./ibdata1 error 11". Also make sure you understand PVCs, otherwise you may run into "Pod has unbound immediate persistentvolumeclaims"; at that point, running kubectl get pvc will make things clear.
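
As a quick reference (a sketch; the claim names follow the <template>-<statefulset>-<ordinal> pattern also used later in this article), you can inspect the claims generated from volumeClaimTemplates like this:

kubectl get pvc
# STATUS should be Bound; Pending usually means no matching StorageClass or PersistentVolume
kubectl describe pvc data-gemfield-mysql-0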

Testing

1. Write to the master

Use the following command to create the syszux database and a messages table in it:

kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never --\
  mysql -h gemfield-mysql-0.gemfield-mysql <<EOF
CREATE DATABASE syszux;
CREATE TABLE syszux.messages (message VARCHAR(250));
INSERT INTO syszux.messages VALUES ('CivilNet');
EOF

Note that this command connects to the master (gemfield-mysql-0.gemfield-mysql).

2. Read from the slaves

Then use the following command to read the record back through the read-only service (which may route to any instance):

kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h gemfield-mysql-read -e "SELECT * FROM syszux.messages"

The output is:

gemfield@deepvac:/bigdata/gemfield$ kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
>   mysql -h gemfield-mysql-read -e "SELECT * FROM syszux.messages"
If you don't see a command prompt, try pressing enter.
+---------+
| message |
+---------+
| CivilNet|
+---------+
pod "mysql-client" deleted

3. Test the K8s Service load balancing

Use the following command to read from gemfield-mysql-read in a loop; you will see the K8s Service distribute the requests across different Pods:

gemfield@deepvac:/bigdata/gemfield$ kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
>   bash -ic "while sleep 1; do mysql -h gemfield-mysql-read -e 'SELECT @@server_id,NOW()'; done"
If you don't see a command prompt, try pressing enter.
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         101 | 2020-02-03 14:43:44 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2020-02-03 14:43:46 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2020-02-03 14:43:47 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         102 | 2020-02-03 14:43:48 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2020-02-03 14:43:49 |
+-------------+---------------------+
......

Scaling the service

Since only the Pod with ordinal 0 is the master, scaling here really means scaling the slaves.

Scale up the replicas:

kubectl scale statefulset gemfield-mysql  --replicas=5
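
After scaling up, you can check that a newly added slave, for example gemfield-mysql-3, has cloned the existing data. This follows the same pattern as the read test above (a sketch, not output from the original article):

kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h gemfield-mysql-3.gemfield-mysql -e "SELECT * FROM syszux.messages"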

Then scale back down:

kubectl scale statefulset gemfield-mysql --replicas=3

After scaling down, though, the automatically created PVCs are kept. This is so that scaling back up later is faster.

If you really no longer need those PVCs, just delete them manually:

kubectl delete pvc data-gemfield-mysql-3
kubectl delete pvc data-gemfield-mysql-4
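
If you want to tear down the whole experiment afterwards (not covered by the original article, just the counterpart of the apply commands above), delete the StatefulSet, the Services, and the ConfigMap as well; the PVCs still have to be removed by hand as shown above:

kubectl delete statefulset gemfield-mysql
kubectl delete svc gemfield-mysql gemfield-mysql-read
kubectl delete configmap gemfield-mysql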