Linux 7 HA (MySQL)

Server information:

Operating system: CentOS Linux release 7.7.1908 (Core)

Hostname and ens33 NIC address:

ha1  192.168.198.180/24
ha2  192.168.198.190/24
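
Both nodes also need to resolve each other's hostname before running pcs; a minimal /etc/hosts sketch for both nodes (an assumption, since name resolution is not shown in the original environment; skip it if DNS already covers the names):

# /etc/hosts entries matching the addresses above, on both ha1 and ha2
192.168.198.180 ha1
192.168.198.190 ha2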

Install the HA packages on each node:

[root@ha1 yum.repos.d]# yum install pcs fence-agents-all

 

Disable the firewall and SELinux on each node

systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
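
If the firewall has to stay enabled in production, allowing firewalld's built-in high-availability service is an alternative to stopping it outright; a hedged sketch:

# Alternative to disabling firewalld: open the cluster ports (corosync, pacemaker, pcsd)
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload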

  

Set the hacluster user password on each node (use the same password on all nodes)

[root@ha1 yum.repos.d]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
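
For scripted installs the same password can be set non-interactively; a sketch (the password value is a placeholder, use the same one on every node):

# Non-interactive alternative on each node
echo "<password>" | passwd --stdin hacluster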

 

Start the pcsd service on each node and enable it at boot

[root@ha1 yum.repos.d]# systemctl start pcsd.service
[root@ha1 yum.repos.d]# systemctl enable pcsd.service

 

Authenticate the cluster nodes as the hacluster user

[root@ha1 yum.repos.d]# pcs cluster auth ha1 ha2
Username: hacluster
Password:
ha1: Authorized
ha2: Authorized
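
The same authentication can also be done without the interactive prompt; a sketch assuming the pcs 0.9 syntax shipped with CentOS 7 (the password is a placeholder):

# Non-interactive node authentication
pcs cluster auth ha1 ha2 -u hacluster -p <password>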

  

Create the cluster and check its status

[root@ha1 yum.repos.d]# pcs cluster setup --start --name my_cluster ha1 ha2
Destroying cluster on nodes: ha1, ha2...
ha1: Stopping Cluster (pacemaker)...
ha2: Stopping Cluster (pacemaker)...
ha1: Successfully destroyed cluster
ha2: Successfully destroyed cluster
 
Sending 'pacemaker_remote authkey' to 'ha1', 'ha2'
ha1: successful distribution of the file 'pacemaker_remote authkey'
ha2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
ha1: Succeeded
ha2: Succeeded
 
Starting cluster on nodes: ha1, ha2...
ha1: Starting Cluster (corosync)...
ha2: Starting Cluster (corosync)...
ha1: Starting Cluster (pacemaker)...
ha2: Starting Cluster (pacemaker)...
 
Synchronizing pcsd certificates on nodes ha1, ha2...
ha1: Success
ha2: Success
Restarting pcsd on the nodes in order to reload the certificates...
 
ha1: Success
ha2: Success

[root@ha1 yum.repos.d]# pcs status
Cluster name: my_cluster
 
WARNINGS:
No stonith devices and stonith-enabled is not false
 
Stack: unknown
Current DC: NONE
Last updated: Sat Jul 11 21:22:04 2020
Last change: Sat Jul 11 21:21:47 2020 by hacluster via crmd on ha1
 
2 nodes configured
0 resources configured
 
Node ha1: UNCLEAN (offline)
Node ha2: UNCLEAN (offline)
 
No resources
 
 
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@ha1 yum.repos.d]#  pcs cluster enable --all
ha1: Cluster Enabled
ha2: Cluster Enabled

 

Fencing configuration (in production, add fencing to guard against a node hanging; it is simply disabled here for this lab):

[root@ha1 yum.repos.d]# pcs property set stonith-enabled=false
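
In a production VMware environment a fencing device would be configured instead of disabling STONITH; a hedged sketch using fence_vmware_soap (the vCenter address, credentials and VM names below are placeholders):

# Hypothetical STONITH device for VMware guests
pcs stonith create vmfence fence_vmware_soap \
    ipaddr=vcenter.example.com login=fence_user passwd=fence_pass \
    pcmk_host_map="ha1:ha1-vm;ha2:ha2-vm" ssl=1 ssl_insecure=1
pcs property set stonith-enabled=true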

  

Add a shared disk in VMware (it appears on both nodes as /dev/sdb)

[root@ha1 ~]# fdisk -l
 
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b122a
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    62914559    30407680   8e  Linux LVM
 
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
[root@ha2 ~]# fdisk -l
 
Disk /dev/sda: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b122a
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    62914559    30407680   8e  Linux LVM
 
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

  

Create the LVM volume

[root@ha1 yum.repos.d]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@ha1 yum.repos.d]# vgcreate my_vg /dev/sdb
  Volume group "my_vg" successfully created
[root@ha1 yum.repos.d]# lvcreate -L5G -n my_lv my_vg
  Logical volume "my_lv" created.
[root@ha1 yum.repos.d]# mkfs.xfs /dev/my_vg/my_lv
meta-data=/dev/my_vg/my_lv       isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ha1 yum.repos.d]# lvs
  LV    VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root  centos -wi-ao---- 26.99g                                                   
  swap  centos -wi-ao----  2.00g                                                   
  my_lv my_vg  -wi-a-----  5.00g     

 

Create the LVM mount point and the mysql user (run on each node)

[root@ha1 yum.repos.d]# mkdir -p /wsgw
[root@ha1 yum.repos.d]# useradd mysql
[root@ha1 yum.repos.d]# chown mysql:mysql /wsgw/
[root@ha1 yum.repos.d]# mount /dev/my_vg/my_lv /wsgw/
[root@ha1 yum.repos.d]# umount /wsgw
[root@ha2 yum.repos.d]# mkdir -p /wsgw
[root@ha2 yum.repos.d]# useradd mysql
[root@ha2 yum.repos.d]# chown mysql:mysql /wsgw/
[root@ha2 yum.repos.d]# mount /dev/my_vg/my_lv /wsgw/

  

 

 

Enable exclusive activation of the cluster LVM (configure on each node)

# Set locking_type to 1 and use_lvmetad to 0 in /etc/lvm/lvm.conf, and stop the lvmetad service
lvmconf --enable-halvm --services --startstopservices
 
# In /etc/lvm/lvm.conf, set volume_list to the volume groups that are NOT managed by the cluster (for example the local VG that holds root)
volume_list = [ "centos" ]
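
Before rebuilding the initramfs it is worth confirming that the settings took effect; a verification sketch (not part of the original steps):

# locking_type should be 1, use_lvmetad should be 0, volume_list should list only local VGs
grep -E "^\s*(locking_type|use_lvmetad|volume_list)\s*=" /etc/lvm/lvm.conf
systemctl status lvm2-lvmetad.service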

  

Rebuild the initramfs (on each node), then reboot the nodes

[root@ha1 yum.repos.d]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@ha1 yum.repos.d]# reboot
[root@ha2 yum.repos.d]# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
[root@ha2 yum.repos.d]# reboot
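
After the reboot the shared LV should no longer be activated automatically on either node (only the cluster activates it); a quick verification sketch:

# my_lv should show an attribute of -wi------- (inactive) on both nodes
lvs -o lv_name,vg_name,lv_attr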

  

Check the cluster status

[root@ha1 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: ha2 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
 Last updated: Sat Jul 11 21:48:43 2020
 Last change: Sat Jul 11 21:45:23 2020 by root via cibadmin on ha1
 2 nodes configured
 0 resources configured
 
PCSD Status:
  ha1: Online
  ha2: Online

 

Create the LVM resource

[root@ha1 ~]# pcs resource create mysql_lvm LVM volgrpname=my_vg exclusive=true --group mysqlgroup
Assumed agent name 'ocf:heartbeat:LVM' (deduced from 'LVM')
[root@ha1 ~]# pcs status
Cluster name: my_cluster
Stack: corosync
Current DC: ha2 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Sat Jul 11 21:49:42 2020
Last change: Sat Jul 11 21:49:39 2020 by root via cibadmin on ha1
 
2 nodes configured
1 resource configured
 
Online: [ ha1 ha2 ]
 
Full list of resources:
 
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha1
 
Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
# Alternatively, use pcs resource show to view the resource status

 

Create the filesystem resource

[root@ha1 ~]# pcs resource create mysql_fs Filesystem device="/dev/my_vg/my_lv" directory="/wsgw" fstype="xfs" --group mysqlgroup
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
[root@ha1 ~]# pcs status
Cluster name: my_cluster
Stack: corosync
Current DC: ha2 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Sat Jul 11 21:52:00 2020
Last change: Sat Jul 11 21:51:55 2020 by root via cibadmin on ha1
 
2 nodes configured
2 resources configured
 
Online: [ ha1 ha2 ]
 
Full list of resources:
 
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha1
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha1
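
The mount can be verified on the node currently running the group (ha1 here); a quick check:

# /wsgw should be mounted from /dev/mapper/my_vg-my_lv on the active node
df -h /wsgw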

 

Create the VIP resource

[root@ha1 ~]# pcs resource create mysql_vip IPaddr2 ip=192.168.198.185 cidr_netmask=24 nic=ens33 --group mysqlgroup
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')
 
pcs resource show
Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha1
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha1
     mysql_vip  (ocf::heartbeat:IPaddr2):   Started ha1
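
Before the MySQL resource itself is created, the data directory on the shared LV has to exist and be initialized on whichever node currently has /wsgw mounted (ha1 here); a hedged sketch assuming MySQL 5.7 binaries under /usr/local/mysql-5.7.27 (this step is not shown in the original output):

# Hypothetical one-time initialization of the shared datadir, run on the active node
/usr/local/mysql-5.7.27/bin/mysqld --initialize-insecure \
    --basedir=/usr/local/mysql-5.7.27 --user=mysql --datadir=/wsgw/data
chown -R mysql:mysql /wsgw/data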

  

Create the MySQL resource

[root@ha1 ~]# pcs resource create mysql_wsgw ocf:heartbeat:mysql binary="/usr/local/mysql-5.7.27/bin/mysqld_safe" client_binary="/usr/local/mysql-5.7.27/bin/mysql" config="/etc/my.cnf" datadir="/wsgw/data" --group mysqlgroup
 
[root@ha1 ~]# pcs resource show
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha1
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha1
     mysql_vip  (ocf::heartbeat:IPaddr2):   Started ha1
     mysql_wsgw (ocf::heartbeat:mysql): Started ha1
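
Once the resource is started, the service can be checked through the floating IP; a sketch assuming the VIP configured above and that the test account is allowed to connect remotely:

# Connect through the VIP from either node or a client machine
/usr/local/mysql-5.7.27/bin/mysql -h 192.168.198.185 -u root -p -e "select version();"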

 

Test the resource configuration:

[root@ha1 ~]# pcs cluster standby ha1
[root@ha1 ~]# pcs resource show
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha2
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha2
     mysql_vip  (ocf::heartbeat:IPaddr2):   Started ha2
     mysql_wsgw (ocf::heartbeat:mysql): Started ha2
[root@ha1 ~]# pcs cluster unstandby ha1

 

Manually move the resources to a specific node

[root@ha1 ~]# pcs resource move mysql_lvm ha2
[root@ha1 ~]# pcs resource show
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha1
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha1
     mysql_vip  (ocf::heartbeat:IPaddr2):   Started ha1
     mysql_wsgw (ocf::heartbeat:mysql): Stopping ha1
[root@ha1 ~]# pcs resource show
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Starting ha2
     mysql_fs   (ocf::heartbeat:Filesystem):    Stopped
     mysql_vip  (ocf::heartbeat:IPaddr2):   Stopped
     mysql_wsgw (ocf::heartbeat:mysql): Stopped
[root@ha1 ~]# pcs resource show
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha2
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha2
     mysql_vip  (ocf::heartbeat:IPaddr2):   Started ha2
     mysql_wsgw (ocf::heartbeat:mysql): Starting ha2
[root@ha1 ~]# pcs resource show
 Resource Group: mysqlgroup
     mysql_lvm  (ocf::heartbeat:LVM):   Started ha2
     mysql_fs   (ocf::heartbeat:Filesystem):    Started ha2
     mysql_vip  (ocf::heartbeat:IPaddr2):   Started ha2
     mysql_wsgw (ocf::heartbeat:mysql): Starting ha2
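
Note that pcs resource move works by adding a location constraint that pins the group to ha2; after the test, that constraint can be listed and removed so the cluster is again free to place the resources:

# Show the constraint created by the move, then clear it
pcs constraint show
pcs resource clear mysql_lvm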

  

References:

https://access.redhat.com/documentation/zh-tw/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index

https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index

 

 

 
