Deploying a RadonDB MySQL High-Availability Cluster on K8s with Helm, with Automatic Master/Slave Failover
Introduction
For a detailed introduction, see: the RadonDB distributed database project.
On a Kubernetes platform, Helm can deploy a genuinely highly available MySQL cluster: one master with multiple slaves, automatic election of a slave to take over when the master goes down, and the old master rejoining as a slave after it recovers, so the cluster keeps serving throughout.
RadonDB MySQL is an open-source, highly available, cloud-native cluster solution based on MySQL. RadonDB MySQL Kubernetes supports installation, deployment, and management on Kubernetes, and automates the tasks involved in running a RadonDB MySQL cluster.
- It supports a one-master-multi-slave HA architecture and ships with a full set of management features: security, automatic backup, monitoring and alerting, automatic scaling, and so on.
- It is already in large-scale production use, including at banks, insurance companies, and other large traditional enterprises.
Preparation
Clone the project code
Project URL:
[root@k8s-master01 ~]# git clone https://github.com/radondb/radondb-mysql-kubernetes.git
[root@k8s-master01 ~]# cd /root/radondb-mysql-kubernetes-main/charts/helm
[root@k8s-master01 helm]# ll
total 40K
-rw-r--r-- 1 root root 1.9K Feb  8 16:00 Chart.yaml
drwxr-xr-x 4 root root   32 Feb  8 16:00 dockerfiles
-rw-r--r-- 1 root root  20K Feb  8 16:00 README.md
drwxr-xr-x 2 root root  200 Feb  8 16:00 templates
-rw-r--r-- 1 root root    0 Feb  9 13:46 tmp.json
-rw-r--r-- 1 root root 5.4K Feb  9 13:49 values.yaml
[root@k8s-master01 helm]# cp /root/radondb-mysql-kubernetes-main/charts/helm/values.yaml /root/radondb-mysql-kubernetes-main/charts/helm/values.yaml.bak
Modify values.yaml
The modified values.yaml is shown below. The changes:
- `replicaCount: 3` — replica count; the default of 3 gives one master and two slaves. Set it to 1 for a single master, or higher for one master with more slaves (change this before deploying).
- `type: NodePort` — how the service is exposed.
- `nodePort: 30060` — the exposed port number.
- `storageClass: nfs-client` — persistence backend; see: deploying nfs-client.
# Default values for radondb-mysql.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
## Always IfNotPresent Never
imagePullPolicy: IfNotPresent

## String to partially override fullname template (will maintain the release name)
##
# nameOverride: ""

## String to fully override fullname template
##
# fullnameOverride: ""

# Please do not modify `replicaCount` after the cluster is created.
replicaCount: 3

busybox:
  image: busybox
  tag: 1.32

mysql:
  image: radondb/percona
  tag: 5.7.34
  allowEmptyRootPassword: true
  # mysqlRootPassword:
  mysqlReplicationPassword: Repl_123
  mysqlUser: radondb
  mysqlPassword: RadonDB@123
  mysqlDatabase: radondb
  initTokudb: false
  ## Additional arguments that are passed to the MySQL container.
  ## For example use --default-authentication-plugin=mysql_native_password if older clients need to
  ## connect to a MySQL 8 instance.
  args: []
  configFiles:
    node.cnf: |
      [mysqld]
      default_storage_engine=InnoDB
      max_connections=65535
      # The lines above are the chart defaults; the lines below were added
      default_storage_engine=InnoDB
      max_connections=65535
      log_bin=/var/lib/mysql/mysql-bin
      log-bin-index=/var/lib/mysql/mysql-bin.index
      basedir=/usr
      datadir=/var/lib/mysql
      tmpdir=/tmp
      skip-character-set-client-handshake
      default-storage-engine=InnoDB
      character-set-server=utf8mb4
      collation-server=utf8mb4_unicode_ci
      transaction-isolation=READ-COMMITTED
      init_connect='SET NAMES utf8mb4'
      sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
      innodb_buffer_pool_size = 4G
      wait_timeout=600
      interactive_timeout=600
      innodb_temp_data_file_path = ibtmp1:12M:autoextend:max:1G
      log-bin-trust-function-creators=1
      default-time-zone = '+8:00'

      [mysqldump]
      quick
      quote-names
      max_allowed_packet=16M

      [mysql]
      #no-auto-rehash # faster start of mysql but no tab completion
      default-character-set=utf8mb4

      [client]
      default-character-set=utf8mb4
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  ## A string to add extra environment variables
  # extraEnvVars: |
  #   - name: EXTRA_VAR
  #     value: "extra"
  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources: {}
    # requests:
    #   memory: 256Mi
    #   cpu: 100m
    # limits:
    #   memory: 1Gi
    #   cpu: 500m

xenon:
  image: radondb/xenon
  tag: 1.1.5-helm
  args: []
  ## A string to add extra environment variables
  # extraEnvVars: |
  #   - name: EXTRA_VAR
  #     value: "extra"
  ## Configure resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 3
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  resources: {}
    # requests:
    #   memory: 128Mi
    #   cpu: 50m
    # limits:
    #   memory: 256Mi
    #   cpu: 100m

metrics:
  enabled: false
  image: prom/mysqld-exporter
  tag: v0.12.1
  annotations: {}
  livenessProbe:
    initialDelaySeconds: 15
    timeoutSeconds: 5
  readinessProbe:
    initialDelaySeconds: 5
    timeoutSeconds: 1
  # Enable this if you're using https://github.com/coreos/prometheus-operator
  serviceMonitor:
    enabled: false
    ## Specify a namespace if needed
    # namespace: monitoring
    # fallback to the prometheus default unless specified
    interval: 10s
    # scrapeTimeout: 10s
    ## Defaults to what's used if you follow CoreOS [Prometheus Install Instructions](https://github.com/helm/charts/tree/master/stable/prometheus-operator#tldr)
    ## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
    ## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
    # selector:
    #   prometheus: kube-prometheus

## When set to true will create sidecar to tail mysql slow log.
slowLogTail: true
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 10m
  #   memory: 32Mi

## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
service:
  annotations: {}
  ## Specify a service type, NodePort|ClusterIP|LoadBalancer
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
  type: NodePort  # ClusterIP
  # clusterIP: None
  port: 3306
  nodePort: 30060
  # nodePort: 32000
  # loadBalancerIP:

rbac:
  # Specifies whether RBAC resources should be created
  create: true

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the serviceAccountName template
  name:

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, azure-disk on
  ## Azure, standard on GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  storageClass: nfs-client
  size: 2Gi  # 10Gi
  annotations: {}
  reclaimPolicy: ""

## Set pod priorityClassName
# priorityClassName: {}

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

# statefulset Annotations
statefulsetAnnotations: {}
# To be added to the database server pod(s)
podAnnotations: {}
podLabels: {}

nodeSelector: {}
hardAntiAffinity: true
additionalAffinities: {}
affinity: {}
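Instead of editing values.yaml in place, the same overrides can be kept in a small separate file and passed to Helm with `-f`; a minimal sketch, where the file name values-override.yaml is an arbitrary choice and the values mirror the changes listed above:

```shell
# Write the overrides from this article into a separate values file
# (values-override.yaml is an arbitrary name):
cat > values-override.yaml <<'EOF'
replicaCount: 3
service:
  type: NodePort
  nodePort: 30060
persistence:
  storageClass: nfs-client
EOF

# Then deploy with the override file instead of a hand-edited values.yaml:
#   helm install mysql -n mysql-cluster -f values-override.yaml .
```

This keeps the upstream values.yaml pristine, so a later `git pull` of the chart does not conflict with local edits.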
Deployment
cd /root/radondb-mysql-kubernetes-main/charts/helm
helm install mysql -n mysql-cluster .
PS: a message like the following can be ignored if it appears:
Error: Service "mysql-radondb-mysql-follower" is invalid: spec.ports[0].nodePort: Invalid value: 30060: provided port is already allocated
Testing
Wait for initialization to finish (roughly 4 minutes).
[root@k8s-master01 helm]# kubectl get pod -n mysql-cluster
NAME READY STATUS RESTARTS AGE
pod/mysql-radondb-mysql-0 0/3 Init:0/1 0 78s
[root@k8s-master01 helm]# kubectl get all -n mysql-cluster
NAME READY STATUS RESTARTS AGE
pod/mysql-radondb-mysql-0 3/3 Running 0 3m25s
pod/mysql-radondb-mysql-1 3/3 Running 0 114s
pod/mysql-radondb-mysql-2 3/3 Running 0 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/mysql-radondb-mysql ClusterIP None <none> 3306/TCP 3m25s
service/mysql-radondb-mysql-follower NodePort 192.168.245.239 <none> 3306:30060/TCP 3m25s
NAME READY AGE
statefulset.apps/mysql-radondb-mysql 3/3 3m25s
# Check master/slave replication status
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-0 -- bash -c 'mysql -e "show slave status\G"' | grep Yes
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
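A slightly stricter variant of the check above: grepping for Yes prints whichever lines match, but it is easier to script against a single verdict covering both replication threads. This helper only parses the `show slave status\G` text; the kubectl invocation (shown as a comment, using the pod names from this deployment) is unchanged.

```shell
# check_repl: reads `show slave status\G` output on stdin and prints
# "healthy" only when BOTH the IO and SQL replication threads report Yes,
# otherwise "broken".
check_repl() {
  awk '/Slave_IO_Running:/  { io  = $2 }
       /Slave_SQL_Running:/ { sql = $2 }
       END { print ((io == "Yes" && sql == "Yes") ? "healthy" : "broken") }'
}

# e.g.:
#   kubectl exec -n mysql-cluster mysql-radondb-mysql-0 -- \
#     bash -c 'mysql -e "show slave status\G"' | check_repl
```

Note the colons in the patterns: they keep `Slave_SQL_Running_State:` (also present in MySQL 5.7 output) from matching.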
Create a test database on the Master
By default mysql-radondb-mysql-0 is the master and nodes 1 and 2 are the slaves.
The master is read-write; the slaves are read-only.
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-0 bash
root@mysql-radondb-mysql-0:/# mysql
mysql> create database test;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| radondb |
| sys |
| test |
+--------------------+
6 rows in set (0.02 sec)
Check whether the Slaves replicated the data
# Slave 1
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-1 bash
root@mysql-radondb-mysql-1:/# mysql
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| radondb |
| sys |
| test |
+--------------------+
6 rows in set (0.02 sec)
-------------------------------------------------------------------------------------------
# Slave 2
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-2 bash
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| radondb |
| sys |
| test |
+--------------------+
6 rows in set (0.02 sec)
Automatic election after the Master is deleted
When the master is deleted, one of the Slave nodes is automatically elected to take over as the new master. The deleted master is automatically redeployed, and once it is back up it rejoins the cluster as a slave.
# Test: delete the master
[root@k8s-master01 helm]# kubectl delete pod -n mysql-cluster mysql-radondb-mysql-0
pod "mysql-radondb-mysql-0" deleted
# Watch the status: the deleted master is automatically restarted and redeployed
[root@k8s-master01 helm]# kubectl get pod -n mysql-cluster -w -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-radondb-mysql-0 0/3 Init:0/1 0 0s <none> k8s-node02 <none> <none>
mysql-radondb-mysql-1 3/3 Running 0 4m10s 172.18.195.35 k8s-master03 <none> <none>
mysql-radondb-mysql-2 3/3 Running 0 2m46s 172.17.125.44 k8s-node01 <none> <none>
mysql-radondb-mysql-0 0/3 PodInitializing 0 1s 172.27.14.222 k8s-node02 <none> <none>
mysql-radondb-mysql-0 3/3 Running 0 10m 172.27.14.222 k8s-node02 <none> <none>
mysql-radondb-mysql-1 3/3 Running 0 14m 172.18.195.35 k8s-master03 <none> <none>
mysql-radondb-mysql-2 3/3 Running 0 12m 172.17.125.44 k8s-node01 <none> <none>
Writing on the recovered old Master
After the deleted master recovers, it automatically becomes a slave with read-only access, so write operations fail:
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-0 bash
root@mysql-radondb-mysql-0:/# mysql
mysql> create database test1;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement
Finding the newly elected Master
Use `show variables like 'read_only';` to check each node's read-only state: ON means the node is a slave, OFF means it is the master. Only the newly elected Master is read-write; all other nodes are read-only.
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-0 -- bash -c mysql
mysql> show variables like 'read_only';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only | ON |
+---------------+-------+
1 row in set (0.00 sec)
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-1 -- bash -c mysql
mysql> show variables like 'read_only';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only | ON |
+---------------+-------+
1 row in set (0.00 sec)
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-2 -- bash -c mysql
mysql> show variables like 'read_only';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| read_only | OFF |
+---------------+-------+
1 row in set (0.00 sec)
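The three per-pod checks above can be collapsed into one loop. The helper below only parses the tab-separated output of `mysql -e "show variables like 'read_only'"`; the surrounding kubectl loop is a sketch shown as a comment, using the pod names and namespace from this walkthrough.

```shell
# node_role: reads the output of `mysql -e "show variables like 'read_only'"`
# on stdin and prints "leader" when read_only is OFF, "follower" when it is ON.
node_role() {
  awk '$1 == "read_only" { print (($2 == "OFF") ? "leader" : "follower") }'
}

# Sketch of the loop (assumes the pod names from this deployment):
# for i in 0 1 2; do
#   pod="mysql-radondb-mysql-$i"
#   role=$(kubectl exec -n mysql-cluster "$pod" -- \
#            mysql -e "show variables like 'read_only'" | node_role)
#   echo "$pod: $role"
# done
```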
Write data on the new Master
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-2 bash
root@mysql-radondb-mysql-2:/# mysql
mysql> create database test2;
Query OK, 1 row affected (0.01 sec)
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| radondb |
| sys |
| test |
| test1 |
| test2 |
+--------------------+
8 rows in set (0.02 sec)
Check whether the Slaves replicated the data
# Slave 1
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-0 -- bash -c 'mysql -e "show databases;"'
Defaulted container "mysql" out of: mysql, xenon, slowlog, init-mysql (init)
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| radondb |
| sys |
| test |
| test1 |
| test2 |
+--------------------+
-------------------------------------------------------------------------------------------
# Slave2
[root@k8s-master01 helm]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-1 -- bash -c 'mysql -e "show databases;"'
Defaulted container "mysql" out of: mysql, xenon, slowlog, init-mysql (init)
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| radondb |
| sys |
| test |
| test1 |
| test2 |
+--------------------+
Configuring the database password
Create the user and grant privileges on the new master node.
Running this on a slave node fails, because slaves are read-only.
[root@k8s-master02 mysql]# kubectl exec -it -n mysql-cluster mysql-radondb-mysql-2 -- bash
mysql> grant all on *.* to root@'%' identified by '123';
mysql> flush privileges;
# Test the connection
# Inside the container: no password is needed; supplying one actually fails to connect!
## Without a password:
root@mysql-radondb-mysql-0:/# mysql -uroot
mysql>
## With a password:
root@mysql-radondb-mysql-0:/# mysql -uroot -p123
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
# Outside the container: a user and password are required
## Without a password:
[root@k8s-master02 mysql]# mysql -h 172.23.0.247 -P30060 -uroot
ERROR 1045 (28000): Access denied for user 'root'@'172.23.0.247' (using password: NO)
## With a password:
[root@k8s-master02 mysql]# mysql -h 172.23.0.247 -P30060 -uroot -p123
mysql>
Configuring external connections to the database
Install a MySQL client; see: installing the MySQL client.
PS: connections go to the read-write master node by default. To connect to the read-only nodes instead, create an additional service that targets the read-only slaves.
[root@k8s-master02 mysql]# kubectl get svc -n mysql-cluster
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql-radondb-mysql ClusterIP None <none> 3306/TCP 125m
mysql-radondb-mysql-leader NodePort 192.168.166.41 <none> 3306:30060/TCP 125m
1. Connect to the database via a server IP
# .244 is the K8s cluster VIP; any cluster node's IP also works
[root@k8s-master02 mysql]# mysql -h 172.23.0.244 -P30060 -u user -p123
mysql>
2. Connect to the database via the K8s service name
# Get the service name: mysql-radondb-mysql-leader
[root@k8s-master02 mysql]# kubectl get svc -n mysql-cluster
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql-radondb-mysql ClusterIP None <none> 3306/TCP 45h
mysql-radondb-mysql-leader NodePort 192.168.166.41 <none> 3306:30060/TCP 45h
# Add a name resolution entry
[root@k8s-master02 mysql]# vim /etc/hosts
172.23.0.244 mysql-radondb-mysql-leader
# Test the connection
[root@k8s-master02 mysql]# mysql -h mysql-radondb-mysql-leader -P 30060 -u root -p123
mysql>
Appendix
For Helm-repo-based deployment, see: Deploying a RadonDB MySQL Cluster on Kubernetes.
When redeploying after deleting a previous deployment, the following errors may appear; the fixes are below.
Issue 1: cannot re-use a name that is still in use
# Helm reports the name as still in use even though the deployment was deleted
[root@k8s-master01 helm]# helm install mysql -n mysql-cluster .
Error: cannot re-use a name that is still in use
# Fix: list the releases and delete the one with that name in the corresponding namespace
[root@k8s-master01 helm]# helm ls --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
harbor devops 1 2022-01-12 18:12:35.243062163 +0800 CST deployed harbor-1.4.0-dev dev
mysql mysql-cluster 1 2022-02-09 14:56:03.137600534 +0800 CST failed radondb-mysql-1.0.0 5.7.34
nfs-client default 1 2022-01-12 10:36:42.603414816 +0800 CST deployed nfs-client-provisioner-1.0.2 3.1.0
[root@k8s-master01 helm]# helm delete -n mysql-cluster mysql
release "mysql" uninstalled
[root@k8s-master01 helm]# helm ls --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
harbor devops 1 2022-01-12 18:12:35.243062163 +0800 CST deployed harbor-1.4.0-dev dev
nfs-client default 1 2022-01-12 10:36:42.603414816 +0800 CST deployed nfs-client-provisioner-1.0.2 3.1.0
Issue 2: Pod stuck in the Init:0/1 state
[root@k8s-master01 helm]# kubectl get pod -n mysql-cluster
NAME READY STATUS RESTARTS AGE
pod/mysql-radondb-mysql-0 0/3 Init:0/1 0 78s
# Check the events: a mount problem. Most likely the PV/PVC were not deleted with the old deployment, so the pod keeps trying to mount the old persistence directory
kubectl describe pod -n mysql-cluster mysql-radondb-mysql-0
···
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 84s default-scheduler Successfully assigned mysql-cluster/mysql-radondb-mysql-0 to k8s-node02
Warning FailedMount 19s (x8 over 83s) kubelet MountVolume.SetUp failed for volume "pvc-91585191-5c24-469b-992a-f925f64b0e74" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 172.23.0.243:/data/nfs/mysql-cluster-data-mysql-radondb-mysql-0-pvc-91585191-5c24-469b-992a-f925f64b0e74 /var/lib/kubelet/pods/de613da9-b469-4f93-b277-f83607bb1791/volumes/kubernetes.io~nfs/pvc-91585191-5c24-469b-992a-f925f64b0e74
Output: mount.nfs: mounting 172.23.0.243:/data/nfs/mysql-cluster-data-mysql-radondb-mysql-0-pvc-91585191-5c24-469b-992a-f925f64b0e74 failed, reason given by server: No such file or directory
# Fix: delete the leftover PV/PVCs (back up the persisted data directory first)
## Find the leftover PVCs
[root@k8s-master01 helm]# kubectl get pvc -n mysql-cluster
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-mysql-cluster-radondb-0 Bound pvc-688ec200-f080-48a6-896b-825b3f91827f 2Gi RWO nfs-client 107m
data-mysql-radondb-mysql-0 Bound pvc-91585191-5c24-469b-992a-f925f64b0e74 2Gi RWO nfs-client 108m
data-mysql-radondb-mysql-1 Bound pvc-074d5f61-c36c-4ef4-8133-14457ae44891 2Gi RWO nfs-client 106m
data-mysql-radondb-mysql-2 Bound pvc-65710543-c97a-4059-9ba5-88ab373d0cec 2Gi RWO nfs-client 105m
## Delete the leftover PVCs
[root@k8s-master01 helm]# kubectl delete pvc -n mysql-cluster `kubectl get pvc -n mysql-cluster | awk 'NR>1{print $1}'`
persistentvolumeclaim "data-mysql-cluster-radondb-0" deleted
persistentvolumeclaim "data-mysql-radondb-mysql-0" deleted
persistentvolumeclaim "data-mysql-radondb-mysql-1" deleted
persistentvolumeclaim "data-mysql-radondb-mysql-2" deleted
## Find the leftover PVs
[root@k8s-master01 helm]# kubectl get pv |grep Released
pvc-074d5f61-c36c-4ef4-8133-14457ae44891 2Gi RWO Retain Released mysql-cluster/data-mysql-radondb-mysql-1 nfs-client 107m
pvc-65710543-c97a-4059-9ba5-88ab373d0cec 2Gi RWO Retain Released mysql-cluster/data-mysql-radondb-mysql-2 nfs-client 106m
pvc-688ec200-f080-48a6-896b-825b3f91827f 2Gi RWO Retain Released mysql-cluster/data-mysql-cluster-radondb-0 nfs-client 108m
pvc-91585191-5c24-469b-992a-f925f64b0e74 2Gi RWO Retain Released mysql-cluster/data-mysql-radondb-mysql-0 nfs-client 109m
## Delete the leftover PVs
[root@k8s-master01 helm]# kubectl delete pv `kubectl get pv |grep Released|awk '{print $1}'`
persistentvolume "pvc-074d5f61-c36c-4ef4-8133-14457ae44891" deleted
persistentvolume "pvc-65710543-c97a-4059-9ba5-88ab373d0cec" deleted
persistentvolume "pvc-688ec200-f080-48a6-896b-825b3f91827f" deleted
persistentvolume "pvc-91585191-5c24-469b-992a-f925f64b0e74" deleted
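The grep/awk pipelines above can be factored into a reusable helper. It only parses `kubectl get pv` text (in the default output the STATUS column is the 5th field), so the parsing logic can be sanity-checked offline before pointing it at a live cluster:

```shell
# released_pvs: reads `kubectl get pv` output on stdin and prints the names
# of volumes whose STATUS column (5th field) is Released; the header row
# (NR == 1) is skipped.
released_pvs() {
  awk 'NR > 1 && $5 == "Released" { print $1 }'
}

# e.g.: kubectl delete pv $(kubectl get pv | released_pvs)
```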
# Redeploy and test:
[root@k8s-master01 helm]# helm install mysql -n mysql-cluster .
NAME: mysql
LAST DEPLOYED: Wed Feb 9 15:46:12 2022
NAMESPACE: mysql-cluster
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master01 helm]# kubectl get pod -n mysql-cluster -w
NAME READY STATUS RESTARTS AGE
mysql-radondb-mysql-0 0/3 Init:0/1 0 4s
mysql-radondb-mysql-0 0/3 PodInitializing 0 14s
# Event monitoring shows no anomalies:
[root@k8s-master01 helm]# kubectl describe pod -n mysql-cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 71s (x2 over 71s) default-scheduler running "VolumeBinding" filter plugin for pod "mysql-radondb-mysql-2": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 68s default-scheduler Successfully assigned mysql-cluster/mysql-radondb-mysql-2 to k8s-master03
Normal Pulled 67s kubelet Container image "busybox:1.32" already present on machine
Normal Created 67s kubelet Created container init-mysql
Normal Started 67s kubelet Started container init-mysql
Normal Started 66s kubelet Started container mysql
Normal Created 66s kubelet Created container mysql
Normal Pulled 66s kubelet Container image "radondb/percona:5.7.34" already present on machine
Normal Pulled 66s kubelet Container image "radondb/xenon:1.1.5-helm" already present on machine
Normal Created 66s kubelet Created container xenon
Normal Started 66s kubelet Started container xenon
Normal Pulled 51s kubelet Container image "busybox:1.32" already present on machine
Normal Created 51s kubelet Created container slowlog
Normal Started 51s kubelet Started container slowlog
# The Pods are now running normally
[root@k8s-master01 helm]# kubectl get pod -n mysql-cluster -w
NAME READY STATUS RESTARTS AGE
mysql-radondb-mysql-0 3/3 Running 0 2m27s
mysql-radondb-mysql-1 3/3 Running 0 2m6s
mysql-radondb-mysql-2 3/3 Running 0 106s