MySQL High Availability Architecture in Practice
I. Base Environment Setup
1. Prepare four machines
10.0.0.150 | Management host | Manager node |
10.0.0.151 | MASTER | Node |
10.0.0.152 | SLAVE | Node |
10.0.0.153 | SLAVE | Node |
2. Create SSH key pairs
(1) On 10.0.0.150:
[root@manage ~]# ssh-keygen -t dsa
[root@manage ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.151
[root@manage ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.152
[root@manage ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.153
(2) On 10.0.0.151:
[root@mha01 ~]# ssh-keygen -t dsa
[root@mha01 ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.152
[root@mha01 ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.153
(3) On 10.0.0.152:
[root@mha02 ~]# ssh-keygen -t dsa
[root@mha02 ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.151
[root@mha02 ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.153
(4) On 10.0.0.153:
[root@mha03 ~]# ssh-keygen -t dsa
[root@mha03 ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.151
[root@mha03 ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@10.0.0.152
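Once all the keys have been exchanged, a quick loop can confirm that passwordless SSH actually works before MHA depends on it. This is a sketch run from the Manager node; the host list comes from the table above:

```shell
# Verify passwordless SSH from this host to the other MHA nodes.
# BatchMode=yes makes ssh fail immediately instead of prompting for a password.
hosts="10.0.0.151 10.0.0.152 10.0.0.153"
for h in $hosts; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 root@"$h" true 2>/dev/null; then
    echo "OK   $h"
  else
    echo "FAIL $h"
  fi
done
```

Any FAIL line means the corresponding ssh-copy-id step above needs to be repeated.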
II. Set Up MySQL Master-Slave Replication
1. Edit the MySQL configuration file my.cnf, adding the following inside the [mysqld] section
[root@mha01 ~]# /etc/init.d/mysqld stop   # stop MySQL first, or it cannot be restarted after the config change
[root@mha01 ~]# vim /etc/my.cnf
server-id = 151          # use the IP's last octet as server_id so every host is unique
log-bin = mysql-bin      # enable the binlog on the master and all slaves
relay_log_purge = 0      # 0 = purge relay logs manually, so slave data is not lost
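Putting those settings together, the [mysqld] fragment on one of the slaves might look like this. This is a sketch; the only value that changes per host is server-id, which must be unique across the replication group:

```
# /etc/my.cnf on 10.0.0.152 (a slave) -- adjust server-id on each host
[mysqld]
server-id       = 152
log-bin         = mysql-bin
relay_log_purge = 0
```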
2. Initialize the databases and log in to MySQL
(1) Create the replication account and the MHA management account on every Node
grant replication slave on *.* to 'rep'@'10.0.0.%' identified by '123456';
grant all on *.* to 'mhamanager'@'10.0.0.%' identified by '123456';
(2) Check the grants on every Node
show grants for 'rep'@'10.0.0.%';
show grants for 'mhamanager'@'10.0.0.%';
(3) Make every slave read-only and the master read-write
Master> set global read_only=0;   # run on the MASTER
Slave>  set global read_only=1;   # run on each SLAVE
(4) Initialize the databases
[root@mha01 ~]# /etc/init.d/mysqld start
[root@mha01 ~]# mysql -uroot -p123456
mysql> reset master;
mysql> reset slave;
(5) Enable slave mode and check its status (run on the slaves only)
mysql -uroot -p123456 <<EOF
CHANGE MASTER TO
  MASTER_HOST='10.0.0.151',
  MASTER_PORT=3306,
  MASTER_USER='rep',
  MASTER_PASSWORD='123456',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
EOF
mysql> start slave;
mysql> show slave status\G
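In the SHOW SLAVE STATUS output, both Slave_IO_Running and Slave_SQL_Running must read Yes. The same check can be scripted; the helper below is a hypothetical convenience, not part of MHA:

```shell
# check_slave reads `SHOW SLAVE STATUS\G` text on stdin and reports whether
# both the IO thread and the SQL thread are running.
check_slave() {
  yes_count=$(grep -cE 'Slave_(IO|SQL)_Running: Yes')
  if [ "$yes_count" -eq 2 ]; then
    echo "replication OK"
  else
    echo "replication BROKEN"
  fi
}
# Usage on a live slave (root password from this guide):
#   mysql -uroot -p123456 -e 'show slave status\G' | check_slave
```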
III. Install and Configure MHA
1. Manager node configuration and installation
mkdir -p /etc/mha                 # MHA configuration directory
mkdir -p /data/mha/scripts        # top-level MHA scripts directory
mkdir -p /data/mha/scripts/mhm    # failover scripts directory
mkdir -p /data/mha/manager/app1   # failover data/log working directory
mkdir -p /data/mha/log/app1       # MHA log directory
2. Install the MHA manager packages (on the Manager node only)
yum install -y perl* perl-*
yum install -y python*
yum install cpan
yum install perl-DBD-MySQL
yum install perl-Config-Tiny
yum install perl-Log-Dispatch
yum install perl-Parallel-ForkManager
The node package must be installed first:
yum install mha4mysql-node-0.56-0.el6.noarch.rpm
then the manager package:
yum install mha4mysql-manager-0.56-0.el6.noarch.rpm
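Since the manager depends on several Perl modules, it is worth confirming that each one actually loads before moving on. A small sketch:

```shell
# Try to load each Perl module the MHA manager needs; report any that fail.
for m in DBD::mysql Config::Tiny Log::Dispatch Parallel::ForkManager; do
  if perl -M"$m" -e 1 2>/dev/null; then
    echo "OK      $m"
  else
    echo "MISSING $m"
  fi
done
```

Any MISSING line points to a yum package from the list above that did not install cleanly.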
3. Node configuration and installation
mkdir -p /data/mha/manager/app1   # failover data/log working directory
mkdir -p /data/mha/scripts        # node maintenance scripts directory
The MHA node service must be installed on every MySQL server:
yum install -y perl* perl-*
yum install -y python*
yum install cpan
yum install perl-DBD-MySQL
yum install mha4mysql-node-0.56-0.el6.noarch.rpm
The manager configuration lives in /etc/mha. Below is a sample app1.cnf (note that this sample was taken from a different environment, hence the 192.168.7.x addresses and credentials that differ from those used above):
[root@db-hma-manager-p-zw mha]# more app1.cnf
[server default]
manager_log=/data/mha/log/app1/manager.log
manager_workdir=/data/mha/manager/app1
master_ip_failover_script=/data/mha/scripts/master_ip_failover --vip=192.168.7.234 --devic_1=em1 --devic_2=em1 --key=1 --net_mask=255.255.255.0 --mysql_conf=/etc/my.cnf --gateway=192.168.7.1
master_ip_online_change_script=/data/mha/scripts/master_ip_online_change --vip=192.168.7.234 --key=1 --devic_1=em1 --devic_2=em1 --net_mask=255.255.255.0 --ssh_user=root --gateway=192.168.7.1 --orig_master_ssh_port=22 --new_master_ssh_port=22
ping_interval=5
remote_workdir=/data/mha/manager/app1
repl_password=pass4slave
repl_user=replication
ssh_user=root
ssh_port=22
user=mhamanager
password=pass4mha
[server1]
candidate_master=1
hostname=192.168.7.230
ignore_fail=1
master_binlog_dir="/data/data/mysqllogs/binlog"
port=3306
check_repl_delay=0
[server2]
candidate_master=1
#no_master=1
hostname=192.168.7.231
ignore_fail=1
master_binlog_dir="/data/data/mysqllogs/binlog"
port=3306
check_repl_delay=1
[server3]
#candidate_master=1
no_master=1
hostname=192.168.7.232
ignore_fail=1
master_binlog_dir="/data/data/mysqllogs/binlog"
port=3306
check_repl_delay=1
[binlog1]
#candidate_master=1
no_master=1
hostname=192.168.7.232
ignore_fail=1
master_binlog_dir="/data/data/mysqllogs/binlog"
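With app1.cnf in place, MHA ships two standard pre-flight checks that should both report OK before the manager is started. Run them on the Manager node; the background start line is shown commented out as a sketch:

```shell
# Pre-flight checks for the MHA manager; both must finish with "OK".
CONF=/etc/mha/app1.cnf
if command -v masterha_check_ssh >/dev/null 2>&1; then
  masterha_check_ssh  --conf="$CONF"   # SSH connectivity between all nodes
  masterha_check_repl --conf="$CONF"   # replication topology health
fi
# Once both pass, start the manager in the background:
# nohup masterha_manager --conf=/etc/mha/app1.cnf \
#   > /data/mha/log/app1/manager.log 2>&1 &
```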
Reference: http://www.cnblogs.com/wingsless/p/4033093.html
IV. Repairing the Crashed Master
After an automatic failover, the old master is usually left discarded. Once the failed host has been repaired, and provided its data is intact, it often makes sense to reattach the old master as a slave of the new master. The MHA log written at failover time records exactly how to do this; the relevant lines can be extracted with:
[root@192.168.0.20 app1]# grep -i "All other slaves should start" /var/log/masterha/wssc/manager.log
Fri Feb 27 14:59:53 2015 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO
MASTER_HOST='192.168.5.102', MASTER_PORT=3306,
MASTER_LOG_FILE='master-bin.000021', MASTER_LOG_POS=120,
MASTER_USER='repl_user', MASTER_PASSWORD='xxx';
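The statement from the log can then be replayed on the repaired old master to demote it to a slave of the new master. This is a sketch: the coordinates below come from the sample log lines above, MHA masks the replication password as 'xxx' so the real one must be substituted, and XXXX is a placeholder for the local root password:

```shell
# Reattach the repaired old master as a slave, using the binlog coordinates
# recorded in the MHA failover log (values from the sample log above).
# XXXX and 'xxx' are placeholders; substitute the real passwords.
if command -v mysql >/dev/null 2>&1; then
  mysql -uroot -pXXXX <<'EOF' || echo "could not apply (is the server running?)"
CHANGE MASTER TO
  MASTER_HOST='192.168.5.102', MASTER_PORT=3306,
  MASTER_LOG_FILE='master-bin.000021', MASTER_LOG_POS=120,
  MASTER_USER='repl_user', MASTER_PASSWORD='xxx';
START SLAVE;
EOF
fi
```

After START SLAVE, confirm both replication threads are running with show slave status\G before returning the host to service.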