DRBD + Heartbeat Hot-Standby (HA) Tutorial on CentOS 6.9
A dual-server hot-standby solution needs Heartbeat plus a storage device (if no storage device is available, DRBD can substitute for it, but dedicated storage is preferred).
Heartbeat: if the standby server receives no heartbeat message from the primary within the configured time, it assumes the primary is down and starts the failover procedure: it brings up the IP, the services, and so on. While doing so it takes control of the resources and services previously held by the primary and carries on serving without interruption, achieving high availability of both resources and services.
DRBD (in place of a storage device): Distributed Replicated Block Device (DRBD) is a software-based, shared-nothing storage replication solution that mirrors block-device content between servers. It keeps the data on the two servers identical, and only one server may mount the device at a time. Think of DRBD as a network RAID 1.
For background on how DRBD works, see:
https://www.cnblogs.com/guoting1202/p/3975685.html
https://blog.csdn.net/leshami/article/details/49509919
I. Environment
OS version: CentOS 6.9 x64
DRBD version: drbd-8.4.3
node1 (primary node), hostname: drbd1.gxm.com (configure the IP addresses and hostname)
eth0: 192.168.1.106
eth1: 192.168.136.6 (used for heartbeat and DRBD replication traffic)
node2 (standby node), hostname: drbd2.gxm.com (configure the IP addresses and hostname)
eth0: 192.168.1.107
eth1: 192.168.136.7 (used for heartbeat and DRBD replication traffic)
Virtual IP address (VIP): 192.168.1.105
(node1): configure on the primary node only
(node2): configure on the standby node only
(node1, node2): configure on both nodes
1. Install the operating system and UMail (node1, node2)
Install our CentOS 6.9 all-in-one ISO (an OS image bundled with UMail; burn it to disc and install). When installing, give /boot 200 MB, swap about 8 GB, and / either 50 GB or 100 GB; leave the remaining capacity unpartitioned for now (it will be partitioned for DRBD later):
http://www.comingchina.com/download/soft/linux/U-Mail_x86_64_AS6.9_common_V9.8.67.iso
http://www.comingchina.com/linux/install/962.html
Reboot when the installation finishes, then upgrade to the latest version; after upgrading, be sure to open the login page once.
2. Set the hostname and hosts entries (node1, node2)
(node1)
[root@localhost ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=drbd1.gxm.com
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost drbd1.gxm.com localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.106 drbd1.gxm.com
192.168.1.107 drbd2.gxm.com
(node2)
[root@localhost ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=drbd2.gxm.com
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost drbd2.gxm.com localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.106 drbd1.gxm.com
192.168.1.107 drbd2.gxm.com
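A quick sanity check that the hostnames and mutual name resolution are in place (shown for node1, after the hostname change has taken effect; swap the names on node2):
# uname -n
drbd1.gxm.com
# ping -c 1 drbd2.gxm.com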
3. Configure Postfix on both servers and add the VIP (node1, node2)
# Add the VIP to /etc/postfix/main.cf
mynetworks =
    192.168.1.105/32,
    127.0.0.0/8,
    mysql:/etc/postfix/mysql/mynetworks.cf
(If the Postfix under /etc/postfix is the system's stock Postfix, change the last line to /usr/local/u-mail/config/postfix/mysql/mynetworks.cf.)
Note: if problems arise you can try removing these two address ranges, but normally leave them in. 192.168.1.105 is the virtual IP address.
# Restart Postfix
# /etc/init.d/umail_postfix restart
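After the restart you can confirm Postfix picked up the VIP (a simple read-back check; it only prints the setting from the config file):
# grep -A 3 '^mynetworks' /etc/postfix/main.cf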
4. Stop the UMail services and disable them at boot (node1, node2)
for service in umail_apache umail_nginx umail_redis umail_dovecot umail_postfix umail_postgrey umail_clamd umail_spamassassin umail_mysqld umail_app ;
do
service $service stop
chkconfig $service off;
done
II. Pre-installation preparation (node1, node2)
1. Disable iptables and SELinux to avoid errors during installation.
# service iptables stop
# chkconfig iptables off
# setenforce 0
# vi /etc/selinux/config
---------------
SELINUX=disabled
---------------
2. On each of the two machines, add a 10 GB disk partition to serve as the DRBD device (10 GB is just my example; size it according to the customer's disk capacity, but give both servers the same size). On both, the partition is sdb1, 10 GB in size. Also create a /store directory in the local filesystem, but do not mount anything on it yet.
# fdisk /dev/sdb
----------------
n-p-1-1-"+10G"-w
----------------
# mkdir /store
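Before continuing, it does no harm to confirm the new partition is visible and sized as expected (verification only; partprobe is needed only if the kernel has not re-read the partition table):
# fdisk -l /dev/sdb | grep sdb1
# partprobe /dev/sdb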
3. Time synchronization:
# yum install -y rdate
# rdate -s time-b.nist.gov
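rdate makes a one-shot correction, so the clocks will drift again over time. One option is a daily cron entry (a sketch, assuming the servers are allowed outbound access to time-b.nist.gov):
# echo '0 3 * * * root rdate -s time-b.nist.gov' >> /etc/crontab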
III. Installing and configuring DRBD
1. Install dependencies (node1, node2):
# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers
2. Install DRBD (node1, node2):
# wget http://www.drbd.org/download/drbd/8.4/archive/drbd-8.4.3.tar.gz
# tar zxvf drbd-8.4.3.tar.gz
# cd drbd-8.4.3
# ./configure --prefix=/usr/local/drbd --with-km
# make KDIR=/usr/src/kernels/2.6.32-696.23.1.el6.x86_64/ (replace with your own kernel version)
# make install
# mkdir -p /usr/local/drbd/var/run/drbd
# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
# chkconfig --add drbd
# chkconfig drbd on
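If you prefer not to type the kernel version by hand, uname can supply the KDIR path (this assumes the kernel-devel package matching the running kernel is installed, which the dependency step above provides):
# make KDIR=/usr/src/kernels/$(uname -r)/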
3. Load the DRBD module (node1, node2):
# modprobe drbd
Check whether the DRBD module is loaded into the kernel:
# lsmod | grep drbd
drbd 310172 4
libcrc32c 1246 1 drbd
If loading the DRBD module fails with the following error:
# modprobe drbd
FATAL: Module drbd not found.
Note: because the kernel packages were already installed along with the dependencies, this error does not normally occur. If it does, try rebooting first; if it still fails after the reboot, proceed as follows:
Cause: the running kernel does not support this module, so the kernel needs to be updated.
Update it with: yum install kernel (note: if there was no error, updating is not recommended)
After updating, you must reboot the operating system!
After the reboot, check again; the kernel version will now be:
# uname -r
2.6.32-642.1.1.el6.x86_64
Now try loading the drbd module again:
# modprobe drbd
4. Configuration (node1, node2):
# vi /usr/local/drbd/etc/drbd.conf
Empty the file and add the following configuration:
resource r0 {
    protocol C;
    startup { wfc-timeout 0; degr-wfc-timeout 120; }
    disk { on-io-error detach; }
    net {
        timeout 60;
        connect-int 10;
        ping-int 10;
        max-buffers 2048;
        max-epoch-size 2048;
    }
    syncer { rate 200M; }
    on drbd1.gxm.com {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.136.6:7788;
        meta-disk internal;
    }
    on drbd2.gxm.com {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.136.7:7788;
        meta-disk internal;
    }
}
Note: change the hostnames, IPs and disk entries in the configuration above to match your own setup.
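Before creating any metadata it is worth letting drbdadm parse the file; on a correct configuration it prints the resource back, and on a broken one it reports the syntax error (a hedged sanity check, assuming drbdadm picks up /usr/local/drbd/etc/drbd.conf as set by the --prefix above):
# drbdadm dump r0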
5. Create the DRBD device and activate the r0 resource (node1, node2):
# mknod /dev/drbd0 b 147 0
# drbdadm create-md r0
(wait a moment; "success" means the DRBD metadata block was created successfully)
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
--== Creating metadata ==--
As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org. The counter works anonymously. It creates a
random number to identify the device and sends that random number, along
with the kernel and DRBD version, to usage.drbd.org.
http://usage.drbd.org/cgi-bin/insert_usage.pl?nu=716310175600466686&ru=15741444353112217792&rs=1085704704
* If you wish to opt out entirely, simply enter 'no'.
* To continue, just press [RETURN]
success
Note: if "success" still has not appeared after a long wait, press Enter and wait a little longer.
Run the command again:
# drbdadm create-md r0
(this successfully activates r0)
[need to type 'yes' to confirm] yes
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
6. Start the DRBD service (node1, node2):
# service drbd start
Note: the service must be started on both nodes for this to take effect.
7. Check the status (node1, node2):
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.gxm.com, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Secondary Inconsistent/Inconsistent C
Here ro: Secondary/Secondary means both hosts are in the standby role; ds is the disk state, shown as "Inconsistent" because DRBD cannot yet tell which side is the primary, i.e. whose disk data should be treated as authoritative.
8. Make drbd1.gxm.com the primary node (node1 only; be sure to wait until the status below appears before performing the next step):
# drbdsetup /dev/drbd0 primary --force
Check the DRBD status on the primary and the standby:
(node1)
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.gxm.com, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
(node2)
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.gxm.com, 2015-05-12 21:05:46
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Primary UpToDate/UpToDate C
Note: ro shows Primary/Secondary on the primary and Secondary/Primary on the standby.
ds showing UpToDate/UpToDate means the primary/standby pair is configured correctly (initialization and the first sync take time; wait until the status above appears before performing the steps below).
9. Mount DRBD (node1 only):
In the status above the mounted and fstype columns are empty, so in this step we format the DRBD device and mount it on the local directory /store:
# mkfs.ext4 /dev/drbd0
# mount /dev/drbd0 /store
# df -h
Note: no operations at all, including mounting, are allowed on the DRBD device on the Secondary node; all reads and writes happen on the Primary. Only when the Primary goes down can the Secondary be promoted to Primary, at which point it mounts the DRBD device and carries on working.
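If you want to see this protection in action, both of the following attempts on node2 should be refused while node1 holds the Primary role (an optional illustration, not a required step):
# mount /dev/drbd0 /store (rejected: the device is in the Secondary role)
# drbdadm primary r0 (rejected while the peer is still Primary)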
DRBD status after a successful mount (node1 only):
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.gxm.com, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C /store ext4
IV. UMail directory configuration
1. Perform the following on node1 (with /store mounted on node1)
# Move the directories to /store
mv /usr/local/u-mail/data/mailbox /store
mv /usr/local/u-mail/data/backup /store <this directory does not exist by default; skip if absent>
mv /usr/local/u-mail/data/www/webmail/attachment /store
mv /usr/local/u-mail/data/www/webmail/netdisk /store
mv /usr/local/u-mail/data/mysql/default/umail /store
mv /usr/local/u-mail/data/mysql/default/ibdata1 /store
mv /usr/local/u-mail/data/mysql/default/ib_logfile0 /store
mv /usr/local/u-mail/data/mysql/default/ib_logfile1 /store
# Create symlinks
ln -s /store/mailbox /usr/local/u-mail/data/mailbox
ln -s /store/backup /usr/local/u-mail/data/backup <this directory does not exist by default; skip if absent>
ln -s /store/attachment /usr/local/u-mail/data/www/webmail/attachment
ln -s /store/netdisk /usr/local/u-mail/data/www/webmail/netdisk
ln -s /store/umail /usr/local/u-mail/data/mysql/default/umail
ln -s /store/ibdata1 /usr/local/u-mail/data/mysql/default/ibdata1
ln -s /store/ib_logfile0 /usr/local/u-mail/data/mysql/default/ib_logfile0
ln -s /store/ib_logfile1 /usr/local/u-mail/data/mysql/default/ib_logfile1
# Fix ownership
chown -R umail.root /usr/local/u-mail/data/mailbox/
chown -R umail.umail /usr/local/u-mail/data/backup/ <this directory does not exist by default; skip if absent>
chown -R umail_apache.umail_apache /usr/local/u-mail/data/www/webmail/attachment/
chown -R umail_apache.umail_apache /usr/local/u-mail/data/www/webmail/netdisk/
chown -R umail_mysql.umail_mysql /usr/local/u-mail/data/mysql/default/umail
chown -R umail_mysql.umail_mysql /usr/local/u-mail/data/mysql/default/ibdata1
chown -R umail_mysql.umail_mysql /usr/local/u-mail/data/mysql/default/ib_logfile0
chown -R umail_mysql.umail_mysql /usr/local/u-mail/data/mysql/default/ib_logfile1
2. Perform the following on node2 (first move the /store mount over to node2: unmount it on node1, demote node1 with drbdadm secondary r0, promote node2 with drbdadm primary r0, then mount /dev/drbd0 on /store on node2)
# Move the original content out of the way
mv /usr/local/u-mail/data/mailbox{,_bak}
mv /usr/local/u-mail/data/backup{,_bak} <this directory does not exist by default; skip if absent>
mv /usr/local/u-mail/data/www/webmail/attachment{,_bak}
mv /usr/local/u-mail/data/www/webmail/netdisk{,_bak}
mv /usr/local/u-mail/data/mysql/default/umail{,_bak}
mv /usr/local/u-mail/data/mysql/default/ibdata1{,_bak}
mv /usr/local/u-mail/data/mysql/default/ib_logfile0{,_bak}
mv /usr/local/u-mail/data/mysql/default/ib_logfile1{,_bak}
# Create symlinks
ln -s /store/mailbox /usr/local/u-mail/data/mailbox
ln -s /store/backup /usr/local/u-mail/data/backup <this directory does not exist by default; skip if absent>
ln -s /store/attachment /usr/local/u-mail/data/www/webmail/attachment
ln -s /store/netdisk /usr/local/u-mail/data/www/webmail/netdisk
ln -s /store/umail /usr/local/u-mail/data/mysql/default/umail
ln -s /store/ibdata1 /usr/local/u-mail/data/mysql/default/ibdata1
ln -s /store/ib_logfile0 /usr/local/u-mail/data/mysql/default/ib_logfile0
ln -s /store/ib_logfile1 /usr/local/u-mail/data/mysql/default/ib_logfile1
Notes:
ln -s <source> <target>
A symlink may point to a file that does not yet exist.
A symlink may point to a directory.
/store/mailbox, /store/attachment, /store/netdisk and /store/umail do not actually exist on node2 at this point.
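Either node can quickly verify the links (verification only; on node2 the link targets dangle until /store is mounted there):
# ls -ld /usr/local/u-mail/data/mailbox
# ls -ld /usr/local/u-mail/data/www/webmail/attachment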
V. Heartbeat configuration
1. Install Heartbeat (node1, node2)
# yum install epel-release -y
# yum --enablerepo=epel install heartbeat heartbeat-ldirectord heartbeat-pils heartbeat-stonith -y
2. Set up the Heartbeat configuration files
(node1)
Edit ha.cf and add the following configuration:
# vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 2
warntime 10
deadtime 30
initdead 60
udpport 1112
ucast eth1 192.168.136.7
ucast eth0 192.168.1.107
auto_failback off
node drbd1.gxm.com
node drbd2.gxm.com
ping 192.168.1.1
respawn hacluster /usr/lib64/heartbeat/ipfail
respawn hacluster /usr/lib64/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster
(node2)
Edit ha.cf and add the following configuration:
# vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 2
warntime 10
deadtime 30
initdead 60
udpport 1112
ucast eth1 192.168.136.6
ucast eth0 192.168.1.106
auto_failback off
node drbd1.gxm.com
node drbd2.gxm.com
ping 192.168.1.1
respawn hacluster /usr/lib64/heartbeat/ipfail
respawn hacluster /usr/lib64/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster
3. Edit the node-authentication file authkeys and add the following (node1, node2):
# vi /etc/ha.d/authkeys
auth 1
1 crc
Set the file to mode 600:
# chmod 600 /etc/ha.d/authkeys
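Note that crc offers integrity checking only, with no real authentication. If the heartbeat links cross a network you do not fully trust, Heartbeat also supports sha1 with a shared secret (the secret below is a placeholder; choose your own and keep the file at mode 600):
# vi /etc/ha.d/authkeys
auth 1
1 sha1 ReplaceWithYourOwnSecret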
4. Edit the cluster resources file:
# vi /etc/ha.d/haresources
(node1,node2)
drbd1.gxm.com IPaddr::192.168.1.105/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/store umail_nginx umail_mysqld umail_app
Note: the scripts referenced in this file (IPaddr, Filesystem, etc.) live in /etc/ha.d/resource.d/. You can also put service start scripts there (e.g. mysql, www) and add the script name to the /etc/ha.d/haresources line, so that the script is started along with Heartbeat (a sketch of such a script follows after this list).
IPaddr::192.168.1.105/24/eth0: uses the IPaddr script to configure the floating virtual IP for external service
drbddisk::r0: uses the drbddisk script to promote and demote the DRBD resource on the primary and standby nodes
Filesystem::/dev/drbd0::/store: uses the Filesystem script to mount and unmount the disk
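Any script listed in haresources only has to answer start/stop/status in init-script style. A minimal sketch of a hypothetical custom resource script (the name my_service and the init script it wraps are placeholders, not part of UMail):

#!/bin/bash
# /etc/ha.d/resource.d/my_service -- hypothetical Heartbeat R1 resource script.
# Delegates to an ordinary init script so Heartbeat can start and stop it
# together with the VIP and DRBD resources.
case "$1" in
    start)  /etc/init.d/my_service start ;;
    stop)   /etc/init.d/my_service stop ;;
    status) /etc/init.d/my_service status ;;
    *)      echo "Usage: my_service {start|stop|status}"; exit 1 ;;
esac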
VI. Create the DRBD script file drbddisk (node1, node2)
Edit drbddisk and add the script content below:
# vi /etc/ha.d/resource.d/drbddisk
#!/bin/bash
#
# This script is intended to be used as resource script by heartbeat
#
# Copyright 2003-2008 LINBIT Information Technologies
# Philipp Reisner, Lars Ellenberg
#
###

DEFAULTFILE="/etc/default/drbd"
DRBDADM="/sbin/drbdadm"

if [ -f $DEFAULTFILE ]; then
    . $DEFAULTFILE
fi

if [ "$#" -eq 2 ]; then
    RES="$1"
    CMD="$2"
else
    RES="all"
    CMD="$1"
fi

## EXIT CODES
# since this is a "legacy heartbeat R1 resource agent" script,
# exit codes actually do not matter that much as long as we conform to
# http://wiki.linux-ha.org/HeartbeatResourceAgent
# but it does not hurt to conform to lsb init-script exit codes,
# where we can.
# http://refspecs.linux-foundation.org/LSB_3.1.0/
#LSB-Core-generic/LSB-Core-generic/iniscrptact.html
####

drbd_set_role_from_proc_drbd()
{
    local out
    if ! test -e /proc/drbd; then
        ROLE="Unconfigured"
        return
    fi

    dev=$( $DRBDADM sh-dev $RES )
    minor=${dev#/dev/drbd}
    if [[ $minor = *[!0-9]* ]] ; then
        # sh-minor is only supported since drbd 8.3.1
        minor=$( $DRBDADM sh-minor $RES )
    fi
    if [[ -z $minor ]] || [[ $minor = *[!0-9]* ]] ; then
        ROLE=Unknown
        return
    fi

    if out=$(sed -ne "/^ *$minor: cs:/ { s/:/ /g; p; q; }" /proc/drbd); then
        set -- $out
        ROLE=${5%/**}
        : ${ROLE:=Unconfigured} # if it does not show up
    else
        ROLE=Unknown
    fi
}

case "$CMD" in
    start)
        # try several times, in case heartbeat deadtime
        # was smaller than drbd ping time
        try=6
        while true; do
            $DRBDADM primary $RES && break
            let "--try" || exit 1 # LSB generic error
            sleep 1
        done
        ;;
    stop)
        # heartbeat (haresources mode) will retry failed stop
        # for a number of times in addition to this internal retry.
        try=3
        while true; do
            $DRBDADM secondary $RES && break
            # We used to lie here, and pretend success for anything != 11,
            # to avoid the reboot on failed stop recovery for "simple
            # config errors" and such. But that is incorrect.
            # Don't lie to your cluster manager.
            # And don't do config errors...
            let --try || exit 1 # LSB generic error
            sleep 1
        done
        ;;
    status)
        if [ "$RES" = "all" ]; then
            echo "A resource name is required for status inquiries."
            exit 10
        fi
        ST=$( $DRBDADM role $RES )
        ROLE=${ST%/**}
        case $ROLE in
            Primary|Secondary|Unconfigured)
                # expected
                ;;
            *)
                # unexpected. whatever...
                # If we are unsure about the state of a resource, we need to
                # report it as possibly running, so heartbeat can, after failed
                # stop, do a recovery by reboot.
                # drbdsetup may fail for obscure reasons, e.g. if /var/lock/ is
                # suddenly readonly. So we retry by parsing /proc/drbd.
                drbd_set_role_from_proc_drbd
        esac
        case $ROLE in
            Primary)
                echo "running (Primary)"
                exit 0 # LSB status "service is OK"
                ;;
            Secondary|Unconfigured)
                echo "stopped ($ROLE)"
                exit 3 # LSB status "service is not running"
                ;;
            *)
                # NOTE the "running" in below message.
                # this is a "heartbeat" resource script,
                # the exit code is _ignored_.
                echo "cannot determine status, may be running ($ROLE)"
                exit 4 # LSB status "service status is unknown"
                ;;
        esac
        ;;
    *)
        echo "Usage: drbddisk [resource] {start|stop|status}"
        exit 1
        ;;
esac

exit 0
Make it executable (mode 755):
# chmod 755 /etc/ha.d/resource.d/drbddisk
VII. Start the UMail services and Heartbeat
Start the UMail services and enable them at boot
(all except the four services umail_nginx, umail_mysqld, umail_app and umail_postgresql, which are managed by Heartbeat)
for service in umail_apache umail_redis umail_dovecot umail_postfix umail_postgrey umail_clamd umail_spamassassin;
do
service $service start
chkconfig $service on;
done
Start Heartbeat on both nodes, node1 first, then node2 (node1, node2):
# service heartbeat start
# chkconfig heartbeat on
If the virtual IP 192.168.1.105 can now be pinged from other machines, the configuration is working.
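You can also confirm it locally on whichever node is active (assuming the VIP is bound to eth0, as configured in haresources):
# ip addr show eth0 | grep 192.168.1.105
# ping -c 3 192.168.1.105 (from another machine on the LAN)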
---------------------------------------------------------------------------------------------------
If Heartbeat fails to start with the error below:
[root@drbd1 ~]# service heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
manually activate the virtual IP (192.168.1.105):
[root@drbd1 ~]# cd /etc/ha.d/resource.d/
[root@drbd1 resource.d]# ./IPaddr 192.168.1.105 start
INFO: Adding inet address 192.168.1.105/24 with broadcast address 192.168.1.255 to device eth0
INFO: Bringing device eth0 up
INFO:
/usr/libexec/heartbeat/send_arp -i 200 -r 5 -p
/var/run/resource-agents/send_arp-192.168.1.105 eth0 192.168.1.105 auto
not_used not_used
INFO: Success
INFO: Success
[root@drbd1 resource.d]# ARPING 192.168.1.105 from 192.168.1.105 eth0
Sent 5 probes (5 broadcast(s))
Received 0 response(s)
[root@drbd1 resource.d]#
[root@drbd1 resource.d]#
[root@drbd1 resource.d]# ping 192.168.1.105 # if this succeeds, the VIP is active
PING 192.168.1.105 (192.168.1.105) 56(84) bytes of data.
64 bytes from 192.168.1.105: icmp_seq=1 ttl=64 time=0.020 ms
--- 192.168.1.105 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 879ms
rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms
[root@drbd1 resource.d]#
[root@drbd1 resource.d]# service heartbeat start
Starting High-Availability services: INFO: Running OK
CRITICAL: Resource IPaddr::192.168.1.105/24/eth0 is active, and should not be!
CRITICAL: Non-idle resources can affect data integrity!
info: If you don't know what this means, then get help!
--------------------------------------- The above was added by engineer He, 2018-05-31 21:41 ---------------------------------------
VIII. Testing failover
Failover on abnormal shutdown (reboot or power off, but never both servers at once, or both may mount the storage simultaneously)
Force a shutdown by cutting node1's power; node2 takes over immediately and seamlessly.
Note: at this point the DRBD connection state on node2 may be WFConnection; once node1 boots again it changes to Connected, and ro and ds again show Primary/Secondary and UpToDate/UpToDate:
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd2.gxm.com, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Unknown UpToDate/DUnknown C /store ext4
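Besides the hard power-off test, a graceful switchover can be rehearsed by stopping Heartbeat on the active node (a suggested drill; with auto_failback off the resources will simply remain on node2 afterwards):
# service heartbeat stop (on node1: releases the VIP, DRBD and services)
# tail -f /var/log/ha-log (on node2: watch the takeover happen)
# service heartbeat start (on node1 afterwards, so it can stand by again)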
IX. Troubleshooting and logs
If one node has a problem (or is rebooted or shut down) and the other does not take over automatically:
1. Confirm the firewalls are off on both sides (or allow Heartbeat's UDP port), that the two servers can ping each other, and that both can ping the gateway.
2. Check the Heartbeat service status first; if it is stopped, start or restart it, then watch the logs and the resource takeover.
3. If the service is running normally, examine the ha-debug and ha-log files under /var/log/ on both servers.
Note: before manually rebooting one node for whatever reason, first confirm the Heartbeat service status on the other node.
When mail1 is rebooted or shut down, the mail2 server log looks like this (normal switchover):
Sep 13 22:35:32 mail2.gxm.com heartbeat: [3412]: info: Received shutdown notice from 'mail1.gxm.com'. # received a shutdown notice from mail1
Sep 13 22:35:32 mail2.gxm.com heartbeat: [3412]: info: Resources being acquired from mail1.gxm.com. # starting to acquire mail1's resources
Sep 13 22:35:32 mail2.gxm.com heartbeat: [3746]: info: acquire all HA resources (standby). # acquiring all HA resources
ResourceManager(default)[3772]: 2017/09/13_22:35:32 info: Acquiring resource group: mail2.gxm.com IPaddr::10.0.100.19/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/store umail_nginx umail_mysqld umail_app umail_postgresql # acquiring the resource group: VIP, DRBD partition (or the storage mount, if a storage array is used) and the services
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_10.0.100.19)[3800]: 2017/09/13_22:35:32 INFO: Resource is stopped # resource is currently stopped
ResourceManager(default)[3772]: 2017/09/13_22:35:32 info: Running /etc/ha.d/resource.d/IPaddr 10.0.100.19/24/eth0 start # starting the IPaddr resource
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_10.0.100.19)[3865]: 2017/09/13_22:35:32 INFO: Resource is stopped # resource is currently stopped
Sep 13 22:35:32 mail2.gxm.com heartbeat: [3747]: info: Local Resource acquisition completed. # local resource acquisition complete
IPaddr(IPaddr_10.0.100.19)[4008]: 2017/09/13_22:35:32 INFO: Adding inet address 10.0.100.19/24 with broadcast address 10.0.100.255 to device eth0 # adding the VIP to interface eth0
IPaddr(IPaddr_10.0.100.19)[4008]: 2017/09/13_22:35:32 INFO: Bringing device eth0 up # bringing eth0 up
IPaddr(IPaddr_10.0.100.19)[4008]: 2017/09/13_22:35:32 INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.0.100.19 eth0 10.0.100.19 auto not_used not_used # announcing the VIP via gratuitous ARP
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_10.0.100.19)[3948]: 2017/09/13_22:35:32 INFO: Success # VIP configured successfully
ResourceManager(default)[3772]: 2017/09/13_22:35:32 info: Running /etc/ha.d/resource.d/drbddisk r0 start # promoting the DRBD resource
/usr/lib/ocf/resource.d//heartbeat/Filesystem(Filesystem_/dev/drbd0)[4152]: 2017/09/13_22:35:32 INFO: Resource is stopped
ResourceManager(default)[3772]: 2017/09/13_22:35:32 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /store start # mounting the partition on /store
Filesystem(Filesystem_/dev/drbd0)[4237]: 2017/09/13_22:35:32 INFO: Running start for /dev/drbd0 on /store
Filesystem(Filesystem_/dev/drbd0)[4237]: 2017/09/13_22:35:32 INFO: Starting filesystem check on /dev/drbd0 # checking the DRBD partition
/usr/lib/ocf/resource.d//heartbeat/Filesystem(Filesystem_/dev/drbd0)[4229]: 2017/09/13_22:35:33 INFO: Success # DRBD partition mounted successfully
ResourceManager(default)[3772]: 2017/09/13_22:35:33 info: Running /etc/init.d/umail_nginx start # starting service
ResourceManager(default)[3772]: 2017/09/13_22:35:33 info: Running /etc/init.d/umail_mysqld start # starting service
ResourceManager(default)[3772]: 2017/09/13_22:35:34 info: Running /etc/init.d/umail_app start # starting service
ResourceManager(default)[3772]: 2017/09/13_22:35:36 info: Running /etc/init.d/umail_postgresql start # starting service
Sep 13 22:35:38 mail2.gxm.com heartbeat: [3746]: info: all HA resource acquisition completed (standby). # all HA resource acquisition complete
Sep 13 22:35:38 mail2.gxm.com heartbeat: [3412]: info: Standby resource acquisition done [all]. # all HA resource acquisition complete
harc(default)[5012]: 2017/09/13_22:35:38 info: Running /etc/ha.d//rc.d/status status
mach_down(default)[5028]: 2017/09/13_22:35:38 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
mach_down(default)[5028]: 2017/09/13_22:35:38 info: mach_down takeover complete for node mail1.gxm.com.
Sep 13 22:35:38 mail2.gxm.com heartbeat: [3412]: info: mach_down takeover complete.
harc(default)[5062]: 2017/09/13_22:35:38 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
ip-request-resp(default)[5062]: 2017/09/13_22:35:38 received ip-request-resp IPaddr::10.0.100.19/24/eth0 OK yes
ResourceManager(default)[5083]: 2017/09/13_22:35:38 info: Acquiring resource group: mail2.gxm.com IPaddr::10.0.100.19/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/store umail_nginx umail_mysqld umail_app umail_postgresql # running normally
/usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_10.0.100.19)[5110]: 2017/09/13_22:35:38 INFO: Running OK # running OK
/usr/lib/ocf/resource.d//heartbeat/Filesystem(Filesystem_/dev/drbd0)[5214]: 2017/09/13_22:35:38 INFO: Running OK # running OK
Sep 13 22:36:03 mail2.gxm.com heartbeat: [3412]: WARN: node mail1.gxm.com: is dead # mail1 is down
Sep 13 22:36:03 mail2.gxm.com heartbeat: [3412]: info: Dead node mail1.gxm.com gave up resources. # mail1 went down after giving up all its resources
Sep 13 22:36:03 mail2.gxm.com ipfail: [3457]: info: Status update: Node mail1.gxm.com now has status dead # mail1's status updated to dead
Sep 13 22:36:03 mail2.gxm.com heartbeat: [3412]: info: Link mail1.gxm.com:eth0 dead. # mail1's link marked dead
Sep 13 22:36:04 mail2.gxm.com ipfail: [3457]: info: NS: We are still alive! # mail2 is still alive
Sep 13 22:36:04 mail2.gxm.com ipfail: [3457]: info: Link Status update: Link mail1.gxm.com/eth0 now has status dead # mail1's link status updated to dead
Sep 13 22:36:05 mail2.gxm.com ipfail: [3457]: info: Asking other side for ping node count. # asking the other side for its ping node count
Sep 13 22:36:05 mail2.gxm.com ipfail: [3457]: info: Checking remote count of ping nodes. # checking the remote count of ping nodes
---------------------------------------------------------------------------------------------------
Appendix: Common DRBD maintenance
1. Server maintenance advice:
1. Do not reboot both servers at the same time, or they may fight over resources (the term for this is "split brain"); leave an interval of about 5 minutes.
2. Do not power on both servers at the same time, for the same reason; leave an interval of about 5 minutes.
3. The heartbeat link currently uses the 10.0.100.0 network. We recommend later adding a NIC to each server and connecting the two directly with a network cable (with IPs on a different network). This avoids resource contention (split brain) if your 10.0.100.0 network fails.
2. Upgrade notes:
1. If one server has been upgraded to the latest version, you must switch over to the other server and upgrade it to the latest version as well.
3. How to check whether replication is healthy:
The most basic method is to run df -h on both servers and inspect the storage mounts:
Normal: one server has the DRBD partition mounted and the other does not, drbd is running on both sides, and the cat /proc/drbd state is normal.
Abnormal case 1: both servers have the partition mounted. This is split brain; contact technical support to resolve it.
Abnormal case 2: one server has the partition mounted and the other does not, but the drbd service is stopped and the cat /proc/drbd state is abnormal.
In the abnormal cases the DRBD state is typically:
(1) both nodes' connection state is StandAlone
(2) one node's connection state is WFConnection and the other's is StandAlone
Check the DRBD status on the primary and standby servers:
/etc/init.d/drbd status
or
cat /proc/drbd
4. Causes of DRBD replication problems:
(1) automatic failover in an HA environment led to split brain;
(2) split brain caused by operator error or misconfiguration;
(3) in our admittedly limited experience, the two causes above are the only split-brain triggers we have met;
(4) the drbd service was stopped.
5. Recovery procedure:
A typical problem state looks like this:
Standby (hlt1):
[root@hlt1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@hlt1.holitech.net, 2016-10-31 10:43:50
m:res cs ro ds p mounted fstype
0:r0 WFConnection Secondary/Unknown UpToDate/DUnknown C
[root@hlt1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@hlt1.holitech.net, 2016-10-31 10:43:50
0: cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:383860
Primary (hlt2):
[root@hlt2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@hlt2.holitech.net, 2016-10-31 10:49:30
0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r-----
ns:0 nr:0 dw:987208 dr:3426933 al:1388 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1380568204
[root@hlt2 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@hlt2.holitech.net, 2016-10-31 10:49:30
m:res cs ro ds p mounted fstype
0:r0 StandAlone Primary/Unknown UpToDate/DUnknown r----- ext4
1. On the standby server (the resource name here is r0):
[root@hlt1 ~]# drbdadm secondary r0
[root@hlt1 ~]# drbdadm --discard-my-data connect r0 (if this returns an error, run it once more)
2. On the primary server:
[root@hlt2 ~]# drbdadm connect r0
[root@hlt2 ~]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 599f286440bd633d15d5ff985204aff4bccffadd build by root@master.luodi.com, 2013-11-03 00:03:40
1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
ns:6852 nr:0 dw:264460 dr:8393508 al:39 bm:512 lo:0 pe:2 ua:0 ap:0 ep:1 wo:d oos:257728
[>....................] sync'ed: 4.7% (257728/264412)K
finish: 0:03:47 speed: 1,112 (1,112) K/sec
3. Check on both nodes: DRBD is back to normal:
Standby server:
[root@hlt1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@hlt1.holitech.net, 2016-10-31 10:43:50
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:1455736720 dw:1455736720 dr:0 al:0 bm:140049 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Primary server:
[root@hlt2 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@hlt2.holitech.net, 2016-10-31 10:49:30
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:1455737960 nr:0 dw:85995012 dr:1403665281 al:113720 bm:139737 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
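To catch such states early, a small check can be run from cron on both nodes; it parses /proc/drbd and warns when the connection state is anything other than Connected (a hedged sketch; the alert address and delivery method are placeholders to adapt to your environment):

#!/bin/bash
# check_drbd.sh -- hypothetical watchdog: warn if DRBD is not Connected.
# Extracts the value after "cs:" from the first device line of /proc/drbd.
cs=$(awk -F'cs:' '/cs:/ { split($2, a, " "); print a[1]; exit }' /proc/drbd)
if [ "$cs" != "Connected" ]; then
    echo "WARNING: DRBD connection state is '$cs' on $(hostname)" \
        | mail -s "DRBD alert" admin@example.com
fi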
Day-to-day DRBD administration references:
http://blog.163.com/qiushuhui1989@126/blog/static/27011089201561411536667/
http://blog.csdn.net/leshami/article/details/49777677
http://www.cnblogs.com/rainy-shurun/p/5335843.html