ctu_lzq


Setting up MHA for MySQL high availability

 

Introduction to the MySQL MHA architecture:

MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. Developed by youshimaton of the Japanese company DeNA (now at Facebook), it is an excellent piece of software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the database failover automatically within 0–30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.

The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node of the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all the other slaves at it. The entire failover process is completely transparent to the application.

During automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master suffered a hardware failure or is unreachable over SSH, MHA cannot save the binary logs and performs the failover anyway, losing the most recent data. Combining MHA with the semi-synchronous replication available since MySQL 5.5 greatly reduces this risk: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping every node's data consistent.

MHA currently supports mainly one-master-multi-slave topologies. Building MHA requires at least three database servers in the replication cluster: one acting as master, one as candidate master, and one as a plain slave. Because of this three-server minimum, Taobao adapted MHA for cost reasons; their TMHA now supports a single master with a single slave. (Source: 《深入浅出MySQL(第二版)》)

Architecture diagram: (image not preserved in this copy)

MHA's failover workflow can be summarized in the following steps:

(1) Save the binary log events from the crashed master;

(2) Identify the slave with the most recent updates;

(3) Apply the differential relay logs to the other slaves;

(4) Apply the binary log events saved from the master;

(5) Promote one slave to be the new master;

(6) Point the remaining slaves at the new master and resume replication.

 

The MHA software consists of two tool sets, the Manager tools and the Node tools, described below:

MHA Manager:

1. masterha_check_ssh: checks MHA's SSH configuration

2. masterha_check_repl: checks the MySQL replication status

3. masterha_manager: starts MHA

4. masterha_check_status: reports the current MHA running state

5. masterha_master_monitor: monitors whether the master is down

6. masterha_master_switch: controls failover (automatic or manual)

7. masterha_conf_host: adds or removes configured server entries

8. masterha_stop: shuts down MHA

 

MHA Node:

save_binary_logs: saves or copies the master's binary logs

apply_diff_relay_logs: identifies differential relay log events and applies the differences to the other slaves

filter_mysqlbinlog: strips unnecessary ROLLBACK events (no longer used by MHA)

purge_relay_logs: purges relay logs (without blocking the SQL thread)

 

In addition, the following scripts must be customized:

1. master_ip_failover: manages the VIP during automatic failover

2. master_ip_online_change: manages the VIP during a planned (online) master switch

3. masterha_secondary_check: when the MHA manager detects that the master is unreachable, this script double-checks from additional routes, reducing the risk of a false failover.

4. send_report: sends an alert when a failover takes place.

 

Lab environment: CentOS 6.9, MySQL 5.7.21

 

Host settings:

Role                IP address          Hostname     server_id    Purpose
Monitor host        192.168.153.160     server160    -            monitors the replication group
Master              192.168.153.162     server162    162          writes
Candidate master    192.168.153.163     server163    163          reads
Slave               192.168.153.164     server164    164          reads

 

Part 1. Deploying MHA: install the perl modules that MHA Node requires (DBD::mysql) on every node. Use yum where possible; any packages yum cannot find must be downloaded and installed manually. As follows:

[root@localhost opt]# ls

mha4mysql-manager-0.56.tar.gz
mha4mysql-node-0.56.tar.gz
perl-Log-Dispatch-2.27-1.puias6.noarch.rpm
perl-Mail-Sender-0.8.16-3.puias6.noarch.rpm
perl-Mail-Sendmail-0.79-12.puias6.noarch.rpm
perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm

[root@localhost opt]# tar -zxvf mha4mysql-node-0.56.tar.gz

[root@localhost opt]# cd mha4mysql-node-0.56

[root@localhost mha4mysql-node-0.56]# perl Makefile.PL

*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
Can't locate CPAN.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/AutoInstall.pm line 277.
[root@localhost mha4mysql-node-0.56]# yum install -y perl-CPAN
...
 
[root@localhost mha4mysql-node-0.56]# perl Makefile.PL
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- DBI        ...missing.
- DBD::mysql ...missing.
==> Auto-install the 2 mandatory module(s) from CPAN? [y] y
*** Dependencies will be installed the next time you type 'make'.
*** Module::AutoInstall configuration finished.
Checking if your kit is complete...
Looks good
Warning: prerequisite DBD::mysql 0 not found.
Warning: prerequisite DBI 0 not found.
Writing Makefile for mha4mysql::node

The warnings above mean perl-DBD-MySQL is missing; install it online with yum:

[root@localhost mha4mysql-node-0.56]# yum -y install perl-DBD-MySQL  

After installing perl-DBD-MySQL, run the script again:

[root@localhost mha4mysql-node-0.56]# perl Makefile.PL
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- DBI        ...loaded. (1.609)
- DBD::mysql ...loaded. (4.013)
*** Module::AutoInstall configuration finished.
Writing Makefile for mha4mysql::node

Then build and install with make && make install:

[root@localhost mha4mysql-node-0.56]# make && make install
cp lib/MHA/BinlogPosFinderElp.pm blib/lib/MHA/BinlogPosFinderElp.pm
cp lib/MHA/NodeUtil.pm blib/lib/MHA/NodeUtil.pm
cp lib/MHA/BinlogManager.pm blib/lib/MHA/BinlogManager.pm
cp lib/MHA/SlaveUtil.pm blib/lib/MHA/SlaveUtil.pm
cp lib/MHA/NodeConst.pm blib/lib/MHA/NodeConst.pm
cp lib/MHA/BinlogPosFindManager.pm blib/lib/MHA/BinlogPosFindManager.pm
cp lib/MHA/BinlogPosFinderXid.pm blib/lib/MHA/BinlogPosFinderXid.pm
cp lib/MHA/BinlogHeaderParser.pm blib/lib/MHA/BinlogHeaderParser.pm
cp lib/MHA/BinlogPosFinder.pm blib/lib/MHA/BinlogPosFinder.pm
cp bin/filter_mysqlbinlog blib/script/filter_mysqlbinlog
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/filter_mysqlbinlog
cp bin/apply_diff_relay_logs blib/script/apply_diff_relay_logs
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/apply_diff_relay_logs
cp bin/purge_relay_logs blib/script/purge_relay_logs
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/purge_relay_logs
cp bin/save_binary_logs blib/script/save_binary_logs
/usr/bin/perl "-Iinc" -MExtUtils::MY -e 'MY->fixin(shift)' -- blib/script/save_binary_logs
Manifying blib/man1/filter_mysqlbinlog.1
Manifying blib/man1/apply_diff_relay_logs.1
Manifying blib/man1/purge_relay_logs.1
Manifying blib/man1/save_binary_logs.1
Installing /usr/local/share/perl5/MHA/NodeUtil.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFinderElp.pm
Installing /usr/local/share/perl5/MHA/NodeConst.pm
Installing /usr/local/share/perl5/MHA/BinlogHeaderParser.pm
Installing /usr/local/share/perl5/MHA/SlaveUtil.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFinderXid.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFinder.pm
Installing /usr/local/share/perl5/MHA/BinlogManager.pm
Installing /usr/local/share/perl5/MHA/BinlogPosFindManager.pm
Installing /usr/local/share/man/man1/apply_diff_relay_logs.1
Installing /usr/local/share/man/man1/save_binary_logs.1
Installing /usr/local/share/man/man1/purge_relay_logs.1
Installing /usr/local/share/man/man1/filter_mysqlbinlog.1
Installing /usr/local/bin/filter_mysqlbinlog
Installing /usr/local/bin/purge_relay_logs
Installing /usr/local/bin/save_binary_logs
Installing /usr/local/bin/apply_diff_relay_logs
Appending installation info to /usr/lib64/perl5/perllocal.pod

Installation complete! The following script files are generated under /usr/local/bin:

[root@localhost opt]# ls -al /usr/local/bin/
total 52
drwxr-xr-x.  2 root root  4096 Mar 26 13:19 .
drwxr-xr-x. 12 root root  4096 Dec 11 18:01 ..
-r-xr-xr-x   1 root root 16367 Mar 26 13:19 apply_diff_relay_logs
-r-xr-xr-x   1 root root  4807 Mar 26 13:19 filter_mysqlbinlog
-r-xr-xr-x   1 root root  8261 Mar 26 13:19 purge_relay_logs
-r-xr-xr-x   1 root root  7525 Mar 26 13:19 save_binary_logs
    Node script notes (these tools are normally invoked by MHA Manager scripts and need no manual operation):
        save_binary_logs               // saves and copies the master's binary logs
        apply_diff_relay_logs          // identifies differential relay log events and applies the differences to the other slaves
        filter_mysqlbinlog             // strips unnecessary ROLLBACK events (no longer used by MHA)
        purge_relay_logs               // purges relay logs (without blocking the SQL thread)

Part 2. Installing MHA Manager:

The MHA Manager host needs MHA Node installed as well, and MHA Manager depends on additional perl modules.

(1) Install MHA Node on the manager host (server160, 192.168.153.160) using the same steps as above; omitted here.

(2) Install MHA Manager. First install the perl modules MHA Manager depends on:

yum install perl-DBD-MySQL perl-Config-Tiny perl-Time-HiRes perl-Parallel-ForkManager perl-Log-Dispatch -y
Packages yum cannot find must be downloaded and installed manually, e.g.:
perl-Mail-Sender-0.8.16-3.puias6.noarch.rpm
perl-Mail-Sendmail-0.79-12.puias6.noarch.rpm
perl-Log-Dispatch-2.27-1.puias6.noarch.rpm
perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm
Install the above with rpm -ivh ....
[root@localhost opt]# tar -zxvf mha4mysql-manager-0.56.tar.gz
[root@localhost opt]# cd mha4mysql-manager-0.56
[root@localhost mha4mysql-manager-0.56]# ls
AUTHORS  blib     debian  lib       Makefile.PL  META.yml    README  samples  tests
bin      COPYING  inc     Makefile  MANIFEST     pm_to_blib  rpm     t
[root@localhost mha4mysql-manager-0.56]# perl Makefile.PL
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- DBI                   ...loaded. (1.609)
- DBD::mysql            ...loaded. (4.013)
- Time::HiRes           ...loaded. (1.9721)
- Config::Tiny          ...loaded. (2.12)
- Log::Dispatch         ...loaded. (2.26)
- Parallel::ForkManager ...loaded. (0.7.9)
- MHA::NodeConst        ...loaded. (0.56)
*** Module::AutoInstall configuration finished.
Writing Makefile for mha4mysql::manager

 [root@localhost mha4mysql-manager-0.56]# make && make install
  ...

Installation complete! After installation, the related scripts appear in /usr/local/bin:

[root@localhost mha4mysql-manager-0.56]# ls -al /usr/local/bin/
total 92
drwxr-xr-x.  2 root root  4096 Mar 21 10:26 .
drwxr-xr-x. 12 root root  4096 Dec 11 18:01 ..
-r-xr-xr-x   1 root root 16367 Mar 21 09:58 apply_diff_relay_logs
-r-xr-xr-x   1 root root  4807 Mar 21 09:58 filter_mysqlbinlog
-r-xr-xr-x   1 root root  1995 Mar 21 10:26 masterha_check_repl
-r-xr-xr-x   1 root root  1779 Mar 21 10:26 masterha_check_ssh
-r-xr-xr-x   1 root root  1865 Mar 21 10:26 masterha_check_status
-r-xr-xr-x   1 root root  3201 Mar 21 10:26 masterha_conf_host
-r-xr-xr-x   1 root root  2517 Mar 21 10:26 masterha_manager
-r-xr-xr-x   1 root root  2165 Mar 21 10:26 masterha_master_monitor
-r-xr-xr-x   1 root root  2373 Mar 21 10:26 masterha_master_switch
-r-xr-xr-x   1 root root  5171 Mar 21 10:26 masterha_secondary_check
-r-xr-xr-x   1 root root  1739 Mar 21 10:26 masterha_stop
-r-xr-xr-x   1 root root  8261 Mar 21 09:58 purge_relay_logs
-r-xr-xr-x   1 root root  7525 Mar 21 09:58 save_binary_logs
Copy the sample scripts to /usr/local/bin (they come with the unpacked tarball). This is not mandatory, but note that the scripts are incomplete templates the developers expect you to adapt: if you enable any parameter below that points at one of these scripts without editing the script first, MHA will throw an error. This is an easy way to get badly burned.
The samples are under /opt/mha4mysql-manager-0.56/samples/scripts/:
[root@localhost scripts]# ls -al
total 40
drwxr-xr-x 2 4984 users  4096 Apr  1  2014 .
drwxr-xr-x 4 4984 users  4096 Apr  1  2014 ..
-rwxr-xr-x 1 4984 users  3648 Apr  1  2014 master_ip_failover       # manages the VIP during automatic failover; optional. With keepalived you can manage the VIP yourself, e.g. monitor mysql and stop keepalived on failure so the VIP floats automatically
-rwxr-xr-x 1 4984 users  9870 Apr  1  2014 master_ip_online_change  # manages the VIP during an online switch; optional, a simple shell script can do the same
-rwxr-xr-x 1 4984 users 11867 Apr  1  2014 power_manager            # shuts down the failed host after a fault; optional
-rwxr-xr-x 1 4984 users  1360 Apr  1  2014 send_report              # sends an alert after a failover; optional, a simple shell script can do the same
[root@localhost scripts]#  cp * /usr/local/bin/   # copy the scripts
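The shipped master_ip_failover is a Perl template; the VIP handling it must implement boils down to dropping the address on the old master and raising it on the new one. A minimal sketch of that logic in shell, assuming a hypothetical VIP 192.168.153.200/24 on eth0 (both values are examples, not from this setup):

```shell
#!/bin/bash
# Hypothetical VIP values -- substitute your own.
VIP=192.168.153.200/24
DEV=eth0

# Build the command each failover phase would run on the target host.
# Echoed here so the logic is visible; a real script would wrap each
# line in `ssh root@$target_host "..."`.
vip_cmd() {
  case "$1" in
    stop)  echo "/sbin/ip addr del $VIP dev $DEV" ;;  # old master releases the VIP
    start) echo "/sbin/ip addr add $VIP dev $DEV" ;;  # new master claims the VIP
    *)     echo "usage: vip_cmd stop|start" >&2; return 1 ;;
  esac
}

vip_cmd stop
vip_cmd start
```

After claiming the VIP, a production script would also send gratuitous ARP (e.g. arping -A) so clients refresh their ARP caches quickly.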

Part 3. Configure passwordless SSH login (key-based login is common in practice; it is best not to disable password login entirely, as doing so can cause problems):

1) On 192.168.153.160 (MHA Manager):
[root@localhost scripts]# ssh-keygen -t rsa   # press Enter through all the prompts
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):   
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
99:34:a9:66:09:a5:63:c0:f6:88:e3:86:13:23:82:62 root@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
| ..   .          |
|  o. o   .       |
|.o o=   +        |
|XE...o + +       |
|B+    = S        |
|oo   o           |
|..               |
|                 |
|                 |
+-----------------+
[root@localhost scripts]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.162
The authenticity of host '192.168.153.162 (192.168.153.162)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.162' (RSA) to the list of known hosts.
root@192.168.153.162's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.162'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@localhost scripts]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.163
The authenticity of host '192.168.153.163 (192.168.153.163)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.163' (RSA) to the list of known hosts.
root@192.168.153.163's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.163'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@localhost scripts]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.164
The authenticity of host '192.168.153.164 (192.168.153.164)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.164' (RSA) to the list of known hosts.
root@192.168.153.164's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.164'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
2) On 192.168.153.162 (Master):
[root@localhost opt]# ssh-keygen -t rsa  # press Enter through all the prompts
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1c:a5:75:6d:d9:2f:52:d4:88:8b:45:4c:c1:76:cd:d9 root@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|          o===.Bo|
|         + .* B.E|
|        o  + =  .|
|       . .. o . .|
|        S    . . |
|                 |
|                 |
|                 |
|                 |
+-----------------+
[root@localhost opt]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.163
The authenticity of host '192.168.153.163 (192.168.153.163)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.163' (RSA) to the list of known hosts.
root@192.168.153.163's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.163'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@localhost opt]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.164
The authenticity of host '192.168.153.164 (192.168.153.164)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.164' (RSA) to the list of known hosts.
root@192.168.153.164's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.164'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
3) On 192.168.153.163 (Candidate master):
[root@localhost mha4mysql-node-0.56]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
93:d3:19:6e:0d:72:33:e1:1b:ec:c7:c6:4d:ff:34:87 root@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|          .      |
|         o .     |
|        . X   .  |
|         B % o o |
|        S B * E.+|
|         + o   .+|
|                .|
|                 |
|                 |
+-----------------+
[root@localhost mha4mysql-node-0.56]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.162
The authenticity of host '192.168.153.162 (192.168.153.162)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.162' (RSA) to the list of known hosts.
root@192.168.153.162's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.162'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@localhost mha4mysql-node-0.56]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.164
The authenticity of host '192.168.153.164 (192.168.153.164)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.164' (RSA) to the list of known hosts.
root@192.168.153.164's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.164'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
4) On 192.168.153.164 (Slave):
[root@localhost mha4mysql-node-0.56]# ssh-keygen -t rsa  # press Enter through all the prompts
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
61:0d:b5:04:18:bd:68:41:e5:c8:09:f5:13:c6:57:bc root@localhost.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|    .o+**ooo.    |
|     oo*o=...    |
|      +o*oo  .   |
|      o..o  E    |
|     .  S        |
|                 |
|                 |
|                 |
|                 |
+-----------------+
[root@localhost mha4mysql-node-0.56]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.162
The authenticity of host '192.168.153.162 (192.168.153.162)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.162' (RSA) to the list of known hosts.
root@192.168.153.162's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.162'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@localhost mha4mysql-node-0.56]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.153.163
The authenticity of host '192.168.153.163 (192.168.153.163)' can't be established.
RSA key fingerprint is 29:62:2e:0d:bc:2b:d5:a4:e5:2a:d5:a8:53:c9:12:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.153.163' (RSA) to the list of known hosts.
root@192.168.153.163's password: 
Now try logging into the machine, with "ssh 'root@192.168.153.163'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
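The pairwise key exchange above is repetitive, so it can be generated with a small loop. A sketch, assuming the four hosts of this lab, where the local host's own IP is passed as the first argument:

```shell
#!/bin/bash
# The four lab hosts; run the same script on each of them.
hosts="192.168.153.160 192.168.153.162 192.168.153.163 192.168.153.164"

# Print one ssh-copy-id command for every peer except ourselves.
gen_cmds() {
  local me="$1" h
  for h in $hosts; do
    [ "$h" = "$me" ] || echo "ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h"
  done
}

gen_cmds 192.168.153.160          # preview the commands for the manager host
# gen_cmds 192.168.153.160 | sh   # pipe to sh to actually run them
```

Each host still types its peers' passwords once, exactly as in the manual rounds above.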

Part 4. Set up the master-slave replication environment:

Note: the binlog-do-db and replicate-ignore-db settings must be identical across servers. MHA checks the filtering rules at startup; if they differ, MHA will not start monitoring or failover.

1) Take a full backup on the Master, 192.168.153.162 (server162):
[root@localhost opt]# mysqldump -uroot -p --master-data=2 --single-transaction -R --triggers -A > /opt/all.sql
Enter password: 
[root@localhost opt]#

Here --master-data=2 records the master's binlog file name and position as a comment in the dump, --single-transaction takes a consistent snapshot, -R dumps stored procedures and functions, --triggers dumps triggers, and -A dumps all databases. Run mysqldump --help for more details.
2) Copy the backup to 192.168.153.163 and 192.168.153.164:
[root@localhost opt]# scp all.sql root@192.168.153.163:/opt/
all.sql                                                         100%  768KB 767.7KB/s   00:00    
[root@localhost opt]# scp all.sql root@192.168.153.164:/opt/
all.sql                                                         100%  768KB 767.7KB/s   00:00 
3) Find the master's binlog name and position at backup time (MASTER_LOG_FILE and MASTER_LOG_POS):
[root@localhost opt]# head -n 30 all.sql | grep 'CHANGE MASTER TO'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=154;
4) Import the backup on both slave servers to restore the data:
[root@localhost opt]# mysql -uroot -p < /opt/all.sql
Enter password: 
[root@localhost opt]#
5) Create the replication user on the Master, 192.168.153.162 (server162):
mysql> grant replication slave on *.* to 'repl'@'192.168.153.%' identified by '<password>';
Query OK, 0 rows affected, 1 warning (0.08 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.07 sec)
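The statement that points each slave at the master is not shown in the original (the step numbering also jumps from 5 to 7, so this is presumably the missing step 6). On each slave, before `start slave`, something like the following would be run, using the coordinates found in step 3; the statement below is a sketch under that assumption, with the password as a placeholder:

```sql
-- Run on each slave (192.168.153.163 and 192.168.153.164); the
-- coordinates are the ones reported in the backup header in step 3.
CHANGE MASTER TO
  MASTER_HOST='192.168.153.162',
  MASTER_USER='repl',
  MASTER_PASSWORD='<password>',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154;
```

`start slave` and `show slave status` below are then executed on each slave.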
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.153.162
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 154
               Relay_Log_File: localhost-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 154
              Relay_Log_Space: 531
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 162
                  Master_UUID: c8a5168d-2103-11e8-a3e1-000c29cb1162
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.01 sec)

mysql> 
7) Set read_only on both slave servers (the slaves serve reads only, not writes; it is deliberately not written into the config file because either slave may be promoted to master at any time):
[root@localhost opt]# mysql -uroot -p -e "set global read_only=1"
Enter password: 
[root@localhost opt]# 
8) Create the monitoring user on the Master:
mysql> grant all privileges on *.* to 'monitor'@'%' identified by '<password>';
Query OK, 0 rows affected, 1 warning (0.04 sec)
mysql> flush  privileges;
Query OK, 0 rows affected (0.04 sec)
At this point the whole cluster environment is up; what remains is configuring the MHA software itself.


Part 5. Configure MHA:
1. On the Monitor host (192.168.153.160), create MHA's working directory and the configuration file:
[root@localhost opt]# mkdir -p /etc/masterha
[root@localhost opt]# vim /etc/masterha/app1.cnf
[server default]
manager_log=/masterha/app1/manager.log          // manager log file
manager_workdir=/masterha/app1                  // manager working directory
master_binlog_dir=/var/lib/mysql                // where the master keeps its binlogs, so MHA can find them
master_ip_failover_script= /usr/local/bin/master_ip_failover    // switch script used during automatic failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change  // switch script used during a manual (online) switchover
user=monitor            // monitoring user
password=<password>     // monitoring user's password
ping_interval=1         // interval in seconds between pings to the master (default 3); automatic failover starts after three missed replies
remote_workdir=/tmp     // where the remote mysql hosts save binlogs during a switch
repl_user=repl          // replication user in the replication setup
repl_password=repl      // replication user's password
report_script=/usr/local/bin/send_report    // alert script to run after a switch
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.153.163 -s 192.168.153.164 --user=root --master_host=192.168.153.162 --master_ip=192.168.153.162 --master_port=3306
 // once MHA's own link to the master fails, the manager checks whether the other two slaves can still reach the master's port 3306
shutdown_script=""      // script to power off the failed host after a fault (mainly to prevent split-brain)
ssh_user=root           // ssh login user

[server1]
hostname=192.168.153.162
port=3306

[server2]
hostname=192.168.153.163
port=3306
candidate_master=1   // marks this host as the candidate master: on failover it is promoted to master even if it is not the most up-to-date slave in the cluster
check_repl_delay=0   // by default MHA will not pick a slave as the new master if it is more than 100 MB of relay logs behind, because recovering it would take a long time; setting this to 0 makes MHA ignore replication delay when choosing. Very useful together with candidate_master=1, as it guarantees the candidate becomes the new master during the switch

[server3]
hostname=192.168.153.164
port=3306

When editing this file, strip the trailing comments: MHA does not recognize them as comments.

  2. Set the relay log purging mode (on every slave):

mysql> set global relay_log_purge=0;
Query OK, 0 rows affected (0.00 sec)

During a failover, MHA relies on the relay logs when recovering the other slaves, so automatic relay log purging must be switched OFF and the logs purged manually instead. By default a slave deletes relay logs automatically once the SQL thread has executed them, but in an MHA setup those logs may be needed to recover other slaves, so the automatic purge is disabled and the executed relay logs are cleaned up periodically by hand. On the ext3 filesystem, deleting a large file takes noticeable time and can cause serious replication delay, which is why large files on Linux are usually removed via a hard link.
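The hard-link trick works because the expensive delete is deferred to the removal of the last link. A small self-contained demo (file names are made up for illustration):

```shell
#!/bin/bash
# Demonstrates the hard-link deletion trick that purge_relay_logs uses:
# give the big relay log a second link, let MySQL drop its own link
# (fast), and remove the last link later, off the replication path.
set -e
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/relay-bin.000123" bs=1024 count=16 2>/dev/null  # stand-in for a relay log
ln "$tmp/relay-bin.000123" "$tmp/relay-bin.000123.ln"  # second link to the same inode -- instant
rm "$tmp/relay-bin.000123"     # what MySQL's purge does: fast, data blocks survive
ls -l "$tmp"                   # the .ln link still holds the data
rm "$tmp/relay-bin.000123.ln"  # the real (slow on ext3) delete, done later by cron
rm -rf "$tmp"
```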

 3. Set up a script to purge the relay logs periodically

The MHA node package ships the purge_relay_logs script: it hard-links the relay logs, runs set global relay_log_purge=1, waits a few seconds for the SQL thread to switch to a new relay log, then runs set global relay_log_purge=0.
[root@localhost opt]# mkdir -p /data/masterha/log
[root@localhost opt]# vi purge_relay_log.sh

#!/bin/bash
user=monitor
passwd=<password>
port=3306
log_dir='/data/masterha/log'
work_dir='/var/lib/mysql'
purge='/usr/local/bin/purge_relay_logs'
$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1

Parameter notes:
--user mysql                      // user name
--password mysql                  // password
--port                            // port number
--workdir                         // where the relay log hard links are created; defaults to /var/tmp. Hard links cannot cross filesystems, so point this at a directory on the same partition as the relay logs; once the script succeeds, the hard-linked relay log files are deleted
--disable_relay_log_purge         // by default the script cleans nothing and exits if relay_log_purge=1; with this flag set, when relay_log_purge=1 the script sets it to 0, purges the relay logs, and finally leaves the variable OFF
Schedule it with cron:
 [root ~]# crontab -l
    0 4 * * * /opt/purge_relay_log.sh >> /opt/purge_relay_logs.log 2>&1

 4. Check the SSH configuration (run on the Monitor node, server160, 192.168.153.160), as follows:

[root@localhost ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Tue Mar 27 10:10:11 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Mar 27 10:10:11 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Mar 27 10:10:11 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Mar 27 10:10:11 2018 - [info] Starting SSH connection tests..
Tue Mar 27 10:10:12 2018 - [debug] 
Tue Mar 27 10:10:11 2018 - [debug]  Connecting via SSH from root@192.168.153.162(192.168.153.162:22) to root@192.168.153.163(192.168.153.163:22)..
Tue Mar 27 10:10:11 2018 - [debug]   ok.
Tue Mar 27 10:10:11 2018 - [debug]  Connecting via SSH from root@192.168.153.162(192.168.153.162:22) to root@192.168.153.164(192.168.153.164:22)..
Tue Mar 27 10:10:12 2018 - [debug]   ok.
Tue Mar 27 10:10:13 2018 - [debug] 
Tue Mar 27 10:10:12 2018 - [debug]  Connecting via SSH from root@192.168.153.163(192.168.153.163:22) to root@192.168.153.162(192.168.153.162:22)..
Tue Mar 27 10:10:12 2018 - [debug]   ok.
Tue Mar 27 10:10:12 2018 - [debug]  Connecting via SSH from root@192.168.153.163(192.168.153.163:22) to root@192.168.153.164(192.168.153.164:22)..
Tue Mar 27 10:10:12 2018 - [debug]   ok.
Tue Mar 27 10:10:13 2018 - [debug] 
Tue Mar 27 10:10:12 2018 - [debug]  Connecting via SSH from root@192.168.153.164(192.168.153.164:22) to root@192.168.153.162(192.168.153.162:22)..
Tue Mar 27 10:10:13 2018 - [debug]   ok.
Tue Mar 27 10:10:13 2018 - [debug]  Connecting via SSH from root@192.168.153.164(192.168.153.164:22) to root@192.168.153.163(192.168.153.163:22)..
Tue Mar 27 10:10:13 2018 - [debug]   ok.
Tue Mar 27 10:10:13 2018 - [info] All SSH connection tests passed successfully.
SSH verification between all nodes checks out.

 5. Check the health of the entire replication setup (run on the Monitor node, server160, 192.168.153.160), as follows:

[root@localhost ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Tue Mar 27 10:12:59 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Mar 27 10:12:59 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Mar 27 10:12:59 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Mar 27 10:12:59 2018 - [info] MHA::MasterMonitor version 0.56.
Tue Mar 27 10:12:59 2018 - [info] GTID failover mode = 0
Tue Mar 27 10:12:59 2018 - [info] Dead Servers:
Tue Mar 27 10:12:59 2018 - [info] Alive Servers:
Tue Mar 27 10:12:59 2018 - [info]   192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:12:59 2018 - [info]   192.168.153.163(192.168.153.163:3306)
Tue Mar 27 10:12:59 2018 - [info]   192.168.153.164(192.168.153.164:3306)
Tue Mar 27 10:12:59 2018 - [info] Alive Slaves:
Tue Mar 27 10:12:59 2018 - [info]   192.168.153.163(192.168.153.163:3306)  Version=5.7.21 (oldest major version between slaves) log-bin:disabled
Tue Mar 27 10:12:59 2018 - [info]     Replicating from 192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:12:59 2018 - [info]     Primary candidate for the new Master (candidate_master is set)
Tue Mar 27 10:12:59 2018 - [info]   192.168.153.164(192.168.153.164:3306)  Version=5.7.21 (oldest major version between slaves) log-bin:disabled
Tue Mar 27 10:12:59 2018 - [info]     Replicating from 192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:12:59 2018 - [info] Current Alive Master: 192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:12:59 2018 - [info] Checking slave configurations..
Tue Mar 27 10:12:59 2018 - [warning]  log-bin is not set on slave 192.168.153.163(192.168.153.163:3306). This host cannot be a master.
Tue Mar 27 10:12:59 2018 - [warning]  log-bin is not set on slave 192.168.153.164(192.168.153.164:3306). This host cannot be a master.
Tue Mar 27 10:12:59 2018 - [info] Checking replication filtering settings..
Tue Mar 27 10:12:59 2018 - [info]  binlog_do_db= , binlog_ignore_db= information_schema,mysql,performance_schema,sys
Tue Mar 27 10:12:59 2018 - [info]  Replication filtering check ok.
Tue Mar 27 10:12:59 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln361] None of slaves can be master. Check failover configuration file or log-bin settings in my.cnf
Tue Mar 27 10:12:59 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/local/bin/masterha_check_repl line 48
Tue Mar 27 10:12:59 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Tue Mar 27 10:12:59 2018 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!
Fix: enable binary logging (log-bin) on both slaves, then run the check again:
[root@localhost ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Tue Mar 27 10:18:24 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Mar 27 10:18:24 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Mar 27 10:18:24 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Mar 27 10:18:24 2018 - [info] MHA::MasterMonitor version 0.56.
Tue Mar 27 10:18:24 2018 - [info] GTID failover mode = 0
Tue Mar 27 10:18:24 2018 - [info] Dead Servers:
Tue Mar 27 10:18:24 2018 - [info] Alive Servers:
Tue Mar 27 10:18:24 2018 - [info]   192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:18:24 2018 - [info]   192.168.153.163(192.168.153.163:3306)
Tue Mar 27 10:18:24 2018 - [info]   192.168.153.164(192.168.153.164:3306)
Tue Mar 27 10:18:24 2018 - [info] Alive Slaves:
Tue Mar 27 10:18:24 2018 - [info]   192.168.153.163(192.168.153.163:3306)  Version=5.7.21-log (oldest major version between slaves) log-bin:enabled
Tue Mar 27 10:18:24 2018 - [info]     Replicating from 192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:18:24 2018 - [info]     Primary candidate for the new Master (candidate_master is set)
Tue Mar 27 10:18:24 2018 - [info]   192.168.153.164(192.168.153.164:3306)  Version=5.7.21-log (oldest major version between slaves) log-bin:enabled
Tue Mar 27 10:18:24 2018 - [info]     Replicating from 192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:18:24 2018 - [info] Current Alive Master: 192.168.153.162(192.168.153.162:3306)
Tue Mar 27 10:18:24 2018 - [info] Checking slave configurations..
Tue Mar 27 10:18:24 2018 - [info]  read_only=1 is not set on slave 192.168.153.163(192.168.153.163:3306).
Tue Mar 27 10:18:24 2018 - [warning]  relay_log_purge=0 is not set on slave 192.168.153.163(192.168.153.163:3306).
Tue Mar 27 10:18:24 2018 - [info]  read_only=1 is not set on slave 192.168.153.164(192.168.153.164:3306).
Tue Mar 27 10:18:24 2018 - [warning]  relay_log_purge=0 is not set on slave 192.168.153.164(192.168.153.164:3306).
Tue Mar 27 10:18:24 2018 - [info] Checking replication filtering settings..
Tue Mar 27 10:18:24 2018 - [info]  binlog_do_db= , binlog_ignore_db= information_schema,mysql,performance_schema,sys
Tue Mar 27 10:18:24 2018 - [info]  Replication filtering check ok.
Tue Mar 27 10:18:24 2018 - [info] GTID (with auto-pos) is not supported
Tue Mar 27 10:18:24 2018 - [info] Starting SSH connection tests..
Tue Mar 27 10:18:26 2018 - [info] All SSH connection tests passed successfully.
Tue Mar 27 10:18:26 2018 - [info] Checking MHA Node version..
Tue Mar 27 10:18:26 2018 - [info]  Version check ok.
Tue Mar 27 10:18:26 2018 - [info] Checking SSH publickey authentication settings on the current master..
Tue Mar 27 10:18:26 2018 - [info] HealthCheck: SSH to 192.168.153.162 is reachable.
Tue Mar 27 10:18:27 2018 - [info] Master MHA Node version is 0.56.
Tue Mar 27 10:18:27 2018 - [info] Checking recovery script configurations on 192.168.153.162(192.168.153.162:3306)..
Tue Mar 27 10:18:27 2018 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000002 
Tue Mar 27 10:18:27 2018 - [info]   Connecting to root@192.168.153.162(192.168.153.162:22).. 
  Creating /tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /var/lib/mysql, up to mysql-bin.000002
Tue Mar 27 10:18:27 2018 - [info] Binlog setting check done.
Tue Mar 27 10:18:27 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Tue Mar 27 10:18:27 2018 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='monitor' --slave_host=192.168.153.163 --slave_ip=192.168.153.163 --slave_port=3306 --workdir=/tmp --target_version=5.7.21-log --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info  --relay_dir=/var/lib/mysql/  --slave_pass=xxx
Tue Mar 27 10:18:27 2018 - [info]   Connecting to root@192.168.153.163(192.168.153.163:22).. 
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to localhost-relay-bin.000005
    Temporary relay log file is /var/lib/mysql/localhost-relay-bin.000005
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue Mar 27 10:18:28 2018 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='monitor' --slave_host=192.168.153.164 --slave_ip=192.168.153.164 --slave_port=3306 --workdir=/tmp --target_version=5.7.21-log --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info  --relay_dir=/var/lib/mysql/  --slave_pass=xxx
Tue Mar 27 10:18:28 2018 - [info]   Connecting to root@192.168.153.164(192.168.153.164:22).. 
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to localhost-relay-bin.000005
    Temporary relay log file is /var/lib/mysql/localhost-relay-bin.000005
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue Mar 27 10:18:29 2018 - [info] Slaves settings check done.
Tue Mar 27 10:18:29 2018 - [info] 
192.168.153.162(192.168.153.162:3306) (current master)
 +--192.168.153.163(192.168.153.163:3306)
 +--192.168.153.164(192.168.153.164:3306)

Tue Mar 27 10:18:29 2018 - [info] Checking replication health on 192.168.153.163..
Tue Mar 27 10:18:29 2018 - [info]  ok.
Tue Mar 27 10:18:29 2018 - [info] Checking replication health on 192.168.153.164..
Tue Mar 27 10:18:29 2018 - [info]  ok.
Tue Mar 27 10:18:29 2018 - [warning] master_ip_failover_script is not defined.
Tue Mar 27 10:18:29 2018 - [warning] shutdown_script is not defined.
Tue Mar 27 10:18:29 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
No errors remain (only the relay_log_purge/read_only warnings); replication now checks out as healthy.
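The two remaining warnings are worth fixing on both slaves: a promotable slave should normally run with read_only=1, and MHA wants relay-log purging left to its own purge_relay_logs tool rather than the SQL thread. A minimal my.cnf sketch for the slaves (standard MySQL 5.7 option names; restart mysqld or use SET GLOBAL to apply):

```ini
# my.cnf on each slave (192.168.153.163 / 192.168.153.164)
[mysqld]
log-bin         = mysql-bin   # required for the slave to be promotable to master
read_only       = 1           # slaves should not take writes directly
relay_log_purge = 0           # let MHA's purge_relay_logs manage relay logs
```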

6. Check the MHA Manager status

[root@localhost ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
Note: a healthy, running manager reports "PING_OK"; "NOT_RUNNING" means MHA monitoring has not been started.
Start MHA Manager monitoring (run on server160, 192.168.153.160):
mkdir -p /var/log/masterha/app1/   # already created by the earlier scripts; create it here if not
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
[1] 28711
[root@localhost ~]# 
Startup options:
--remove_dead_master_conf      // after a failover, the dead master's entry is removed from the configuration file
--manager_log                  // manager log location (not used above; here the log path comes from manager_log in app1.cnf and the shell redirection)
--ignore_last_failover         // by default, MHA refuses to fail over again if the previous failover happened less than 8 hours earlier, to avoid ping-pong switching. After each failover the manager writes a flag file, app1.failover.complete, into its working directory (manager_workdir in app1.cnf), and a new failover is blocked while that file exists unless it is deleted first. --ignore_last_failover skips this check, which is convenient while testing.
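After a test failover, that flag file has to be removed before the next failover can run without --ignore_last_failover. A small sketch (the default path below is an assumption; pass your own manager_workdir from app1.cnf):

```shell
#!/bin/sh
# Remove the app1.failover.complete flag so the next failover is not blocked.
# Usage: clear_failover_flag [manager_workdir]
clear_failover_flag() {
    workdir="${1:-/var/log/masterha/app1}"   # default is an assumption; use your manager_workdir
    flag="$workdir/app1.failover.complete"
    if [ -f "$flag" ]; then
        rm -f "$flag" && echo "removed $flag"
    else
        echo "no failover flag in $workdir"
    fi
}
```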
Check the MHA Manager status again:
[root@localhost ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:28711) is running(0:PING_OK), master:192.168.153.162
Monitoring is now active, and the master is 192.168.153.162.

7. Check the startup log (run on server160, 192.168.153.160):

[root@localhost ~]# tail -n20 /var/log/masterha/app1/manager.log
  Checking slave recovery environment settings..
    Opening /var/lib/mysql/relay-log.info ... ok.
    Relay log found at /var/lib/mysql, up to localhost-relay-bin.000005
    Temporary relay log file is /var/lib/mysql/localhost-relay-bin.000005
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue Mar 27 10:24:01 2018 - [info] Slaves settings check done.
Tue Mar 27 10:24:01 2018 - [info] 
192.168.153.162(192.168.153.162:3306) (current master)
 +--192.168.153.163(192.168.153.163:3306)
 +--192.168.153.164(192.168.153.164:3306)

Tue Mar 27 10:24:01 2018 - [warning] master_ip_failover_script is not defined.
Tue Mar 27 10:24:01 2018 - [warning] shutdown_script is not defined.
Tue Mar 27 10:24:01 2018 - [info] Set master ping interval 1 seconds.
Tue Mar 27 10:24:01 2018 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Tue Mar 27 10:24:01 2018 - [info] Starting ping health check on 192.168.153.162(192.168.153.162:3306)..
Tue Mar 27 10:24:01 2018 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
The line "Ping(SELECT) succeeded, waiting until MySQL doesn't respond.." shows that the manager is now actively monitoring the master.

8. Stop MHA Manager monitoring (run on server160, 192.168.153.160):

Use the masterha_stop command:
[root@localhost ~]# masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
[1]+  Exit 1                  nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1
[root@localhost ~]# 

VI. Configuring the VIP

The VIP can be managed in two ways: with keepalived floating the virtual IP, or with a script that brings the VIP up and down (no keepalived- or heartbeat-style software needed). keepalived-managed VIPs are prone to split-brain: if the network between master and slave fails, the slave can seize the VIP while the master still holds it, causing an IP conflict, so keepalived is not recommended on unreliable networks. In production the second approach, managing the VIP from the failover script, is the more common choice.

1. Managing the VIP with keepalived [omitted]

2. Managing the VIP with a script: edit /usr/local/bin/master_ip_failover.

On the manager node (server160, 192.168.153.160), edit the /usr/local/bin/master_ip_failover script as follows:
cp /opt/mha4mysql-manager-0.56/samples/scripts/* /usr/local/bin/   # the source tarball ships these sample scripts; copy and adapt them

  (1). Manually bind a VIP [192.168.153.88] on the current master:
  ifconfig eth2:1 192.168.153.88/24

[root@localhost ~]# ifconfig eth2:1 192.168.153.88/24
[root@localhost ~]# ifconfig
eth2      Link encap:Ethernet  HWaddr 00:0C:29:2B:78:35  
          inet addr:192.168.153.162  Bcast:192.168.153.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe2b:7835/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:144062 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65682 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:136548917 (130.2 MiB)  TX bytes:5928077 (5.6 MiB)

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:2B:78:35  
          inet addr:192.168.153.88  Bcast:192.168.153.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:85 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:7998 (7.8 KiB)  TX bytes:7998 (7.8 KiB)

  (2). Edit the master_ip_failover script

 

[root@localhost ~]# vi /usr/local/bin/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '192.168.153.88';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth2:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth2:$key down";


GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {

    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {

        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

  (3). After the old master crashes and MHA completes the switchover, the manager process exits, and the dead server's section is automatically removed from the configuration file (because of --remove_dead_master_conf).

(i) Once the old master host is repaired, reconfigure it as a slave of the new master (192.168.153.163) and rejoin it to the topology. Use the binlog coordinates of the new master (from SHOW MASTER STATUS on 192.168.153.163, or from the coordinates printed in the manager log):
mysql> CHANGE MASTER TO
 MASTER_HOST='192.168.153.163',
 MASTER_USER='repl',
 MASTER_PASSWORD='xxx',
 MASTER_LOG_FILE='mysql-bin.000002',
 MASTER_LOG_POS=154;

mysql> start slave;

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.153.163
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 154
               Relay_Log_File: localhost-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
......
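The rejoin statement above can be generated per host with a small helper, which keeps the coordinates in one place. This is only an illustration: the host, user, file, and position below are this post's values, and the coordinates must still be read off the new master; the password is deliberately left out and must be supplied when the SQL is executed:

```shell
#!/bin/sh
# Build a CHANGE MASTER TO statement for rejoining a recovered host.
# Arguments: new-master host, replication user, binlog file, binlog position.
# Pipe the result into mysql yourself, adding MASTER_PASSWORD there.
gen_change_master() {
    printf "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;" \
        "$1" "$2" "$3" "$4"
}

gen_change_master 192.168.153.163 repl mysql-bin.000002 154
```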
(ii) The MHA Manager monitor shuts down after the failover; check its status:
[root@localhost app1]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
(iii) The MHA configuration file:
[root@localhost masterha]# cat /etc/masterha/app1.cnf
[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1.log
master_binlog_dir=/var/lib/mysql
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
password=xxx
ping_interval=1
remote_workdir=/tmp
repl_password=xxx
repl_user=repl
report_script=/usr/local/bin/send_report
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.153.163 -s 192.168.153.164 --user=root --master_host=192.168.153.162 --master_ip=192.168.153.162 --master_port=3306
shutdown_script=""
ssh_user=root
user=monitor

[server2]
hostname=192.168.153.163
port=3306
candidate_master=1
check_repl_delay=0

[server3]
hostname=192.168.153.164
port=3306

# the [server1] section below was removed by the failover and must be added back:
[server1]
hostname=192.168.153.162
port=3306
(iv) After the host and the configuration file have been fixed by hand, start the MHA Manager monitor again. (You can also switch the master back manually to restore the original topology.)
[root@localhost ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
 
[root@localhost ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

-- This completes the basic MHA build and test. The problems you hit will vary with your environment, and the sample scripts are incomplete; adapt them to your needs.
(Still to cover: the keepalived approach and online switchover.)


 

-- Supplementary: the send_report script (adapted from a fellow blogger):

#!/usr/bin/perl

#  Copyright (C) 2011 DeNA Co.,Ltd.
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#   along with this program; if not, write to the Free Software
#  Foundation, Inc.,
#  51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

## Note: This is a sample script and is not complete. Modify the script based on your environment.
use strict;
use warnings FATAL => 'all';
use Mail::Sender;
use Getopt::Long;

#new_master_host and new_slave_hosts are set only when recovering master succeeded
my ( $dead_master_host, $new_master_host, $new_slave_hosts, $subject, $body );
my $smtp='smtp.163.com';
my $mail_from='from@163.com';
my $mail_user='from@163.com';
my $mail_pass='password';
my $mail_to='xxxxxx@qq.com';

GetOptions(
  'orig_master_host=s' => \$dead_master_host,
  'new_master_host=s'  => \$new_master_host,
  'new_slave_hosts=s'  => \$new_slave_hosts,
  'subject=s'          => \$subject,
  'body=s'             => \$body,
);

mailToContacts($smtp,$mail_from,$mail_user,$mail_pass,$mail_to,$subject,$body);

sub mailToContacts {
    my ( $smtp, $mail_from, $user, $passwd, $mail_to, $subject, $msg ) = @_;
    open my $DEBUG, "> /tmp/monitormail.log"
        or die "Can't open the debug      file:$!\n";
    my $sender = new Mail::Sender {
        ctype       => 'text/plain; charset=utf-8',
        encoding    => 'utf-8',
        smtp        => $smtp,
        from        => $mail_from,
        auth        => 'LOGIN',
        TLS_allowed => '0',
        authid      => $user,
        authpwd     => $passwd,
        to          => $mail_to,
        subject     => $subject,
        debug       => $DEBUG
    };

    $sender->MailMsg(
        {   msg   => $msg,
            debug => $DEBUG
        }
    ) or print $Mail::Sender::Error;
    return 1;
}



# Do whatever you want here

exit 0;

Parts of this article draw on:

http://www.cnblogs.com/xuanzhi201111/p/4231412.html

http://www.cnblogs.com/ivictor/p/5686275.html

posted on 2018-03-27 16:53  ctu_lzq