RHCA-436-4 rhcs conga

RHCS (Red Hat Cluster Suite) is Red Hat's solution for building high-availability clusters. Its features and mechanisms are mature, and it is a well-regarded HA solution in the industry.
RHCS components and the ports they use:
corosync udp/5404
cman udp/5405
ricci tcp/11111
dlm tcp/21064
modclusterd tcp/16851
luci tcp/8084
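
If you would rather keep iptables running instead of disabling it (as is done below for simplicity), rules along these lines would open the ports listed above. This is only a sketch for a RHEL 6 style iptables setup and assumes 192.168.7.0/24 is the cluster/heartbeat network:

iptables -I INPUT -s 192.168.7.0/24 -p udp --dport 5404:5405 -j ACCEPT   # corosync/cman
iptables -I INPUT -s 192.168.7.0/24 -p tcp --dport 11111 -j ACCEPT       # ricci
iptables -I INPUT -s 192.168.7.0/24 -p tcp --dport 21064 -j ACCEPT       # dlm
iptables -I INPUT -s 192.168.7.0/24 -p tcp --dport 16851 -j ACCEPT       # modclusterd
iptables -I INPUT -s 192.168.7.0/24 -p tcp --dport 8084 -j ACCEPT        # luci web UI (open to the admin network as needed)
service iptables save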

RHCS ships with several management tools for creating and maintaining clusters; here we focus mainly on conga (luci + ricci) and ccs, and walk through RHCS with a couple of examples.

Example 1:
A 2-node cluster (xfs + nginx + lvm-ha) with a quorum disk (qdisk)
Cluster environment overview: 3 KVM virtual machines (node1 and node2 run the application, node3 provides storage). Each VM has 4 NICs: eth3 and eth4 connect to the storage network, eth2 carries the cluster heartbeat, and eth1 talks to the physical host for installing software. All machines run CentOS 6.5 x86_64.

1. Prepare the environment
For this lab we ignore iptables and SELinux, so on every node switch SELinux to permissive and flush the iptables rules, then disable unnecessary services that could interfere with or destabilize the cluster, and update the hosts file on each cluster node.
/etc/init.d/iptables stop                      # stop the firewall and flush the running rules
iptables-save >/etc/sysconfig/iptables         # persist the (now empty) ruleset
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
setenforce 0                                   # switch to permissive immediately, without a reboot
chkconfig kdump off
chkconfig NetworkManager off
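
The same preparation has to be done on all three machines; if root SSH access to the nodes is available, a small loop like this (purely a convenience sketch, not part of the original procedure, run from the luci/storage host where all three names resolve) pushes it everywhere at once:

for h in node1 node2 storage; do
    ssh root@$h '/etc/init.d/iptables stop;
                 iptables-save > /etc/sysconfig/iptables;
                 sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/sysconfig/selinux;
                 setenforce 0;
                 chkconfig kdump off;
                 chkconfig NetworkManager off'
done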


node1   # nginx service node
eth1 192.168.8.101
eth2 192.168.7.101
eth3 192.168.6.101
eth4 192.168.5.101

/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.101 node1
192.168.7.101 node1
192.168.6.101 node1
192.168.5.101 node1

192.168.7.102 node2
192.168.7.103 luci

192.168.6.103 storage
192.168.5.103 storage



node2   # nginx service node
eth1 192.168.8.102
eth2 192.168.7.102
eth3 192.168.6.102
eth4 192.168.5.102

/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.102 node2
192.168.7.102 node2
192.168.6.102 node2
192.168.5.102 node2

192.168.7.101 node1
192.168.7.103 luci

192.168.6.103 storage
192.168.5.103 storage


node3   # iSCSI storage and luci management node
eth1 192.168.8.103
eth2 192.168.7.103
eth3 192.168.6.103
eth4 192.168.5.103

/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.103 storage
192.168.7.103 storage
192.168.6.103 storage
192.168.5.103 storage

192.168.6.101 node1
192.168.5.101 node1
192.168.6.102 node2
192.168.5.102 node2

192.168.7.103 luci


2. Configure iSCSI storage; refer to:
RHCA-436-1 iSCSI
RHCA-436-3 multipath

3. Deploy conga
Install ricci on node1 and node2, and luci on node3.

yum -y install ricci
echo root | passwd --stdin ricci      # set the ricci user's password (luci needs it to manage the node)
/etc/init.d/ricci start
chkconfig ricci on

yum -y install luci
echo root | passwd --stdin luci       # password for the luci account used below
/etc/init.d/luci start
chkconfig luci on                     # keep luci enabled across reboots
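
A quick check that both daemons came up is to look at their listening ports (ricci on tcp/11111, luci on tcp/8084):

netstat -tlnp | grep -E ':(11111|8084)'    # run on the ricci nodes and on the luci host respectively
/etc/init.d/ricci status
/etc/init.d/luci status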


4. Build the nginx HA cluster with luci
I. Log in to the luci management console
In a browser, open the address of the host running luci; the exact URL is printed when the luci service starts.
https://192.168.7.103:8084
Add a security exception in the browser and trust the self-signed certificate.

The initial user is root. After logging in you will see a Warning, which can simply be ignored. It is better to add an ordinary account for day-to-day management; here I use an account named luci. After creating the account and granting it permissions, log out and log back in as that user.


II. Create the cluster
The cluster name can be anything. After clicking "Create Cluster", luci installs the required packages on every node joining the cluster in the background and then reboots the nodes. On success both nodes show up as members of the new cluster.

Before any cluster service has been added, clustat shows the following status.
Cluster Status for rhcs @ Sun Sep 28 14:32:02 2014
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1                                       1 Online, Local
 node2                                       2 Online



Infrastructure

This includes installing the application, creating the clustered logical volume, and so on. Once the infrastructure is in place, the cluster resources and cluster service can be added.
A. Install nginx; see the separate post "Nginx-1.x.x源码自动安装配置(CentOS6)".
B. lvm-ha
1. Enable LVM cluster locking on all service nodes and restart clvmd:
lvmconf --enable-cluster
/etc/init.d/clvmd restart

[root@node1 ~]# grep locking_type /etc/lvm/lvm.conf
    locking_type = 3
    # NB. This option only affects locking_type = 1 viz. local file-based
    # The external locking library to load if locking_type is set to 2.

chkconfig clvmd on
chkconfig rgmanager on
chkconfig cman on


2. Partitioning. Here I create one 2 GB partition, which will hold the clustered logical volume.
[root@node1 ~]# parted /dev/mapper/redhat mklabel gpt
Information: You may need to update /etc/fstab.                          
[root@node1 ~]# parted /dev/mapper/redhat mkpart primary 1 2048
Information: You may need to update /etc/fstab.                          
[root@node1 ~]# partprobe /dev/mapper/redhat
fdisk -l output (trimmed):
WARNING: GPT (GUID Partition Table) detected on '/dev/mapper/redhat'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/mapper/redhat: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

             Device Boot      Start         End      Blocks   Id  System
/dev/mapper/redhatp1                     1045     8388607+  ee  GPT

Disk /dev/mapper/redhatp1: 2046 MB, 2046820352 bytes
255 heads, 63 sectors/track, 248 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@node1 ~]# multipath -r
reload: redhat (1IET     00010001) undef IET,VIRTUAL-DISK
size=8.0G features='0' hwhandler='0' wp=undef
|-+- policy='service-time 0' prio=1 status=undef
| `- 3:0:0:1 sda 8:0   active ready running
`-+- policy='service-time 0' prio=1 status=undef
  `- 2:0:0:1 sdb 8:16  active ready running

[root@node1 ~]# pvcreate /dev/mapper/redhatp1
  Physical volume "/dev/mapper/redhatp1" successfully created
[root@node1 ~]# vgcreate -cy clusterlvm /dev/mapper/redhatp1
  Clustered volume group "clusterlvm" successfully created
[root@node1 ~]# lvcreate -n nginx_xfs -L 500M clusterlvm
  Logical volume "nginx_xfs" created
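
Because the volume group was created clustered (-cy) and clvmd is running on both service nodes, the new logical volume should be visible on node2 straight away. A quick sanity check (sketch):

[root@node2 ~]# vgs clusterlvm     # a "c" in the Attr column marks the VG as clustered
[root@node2 ~]# lvs clusterlvm     # nginx_xfs should be listed here as well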


3. Install the XFS userspace tools and format the logical volume.
yum -y install xfsprogs

[root@node1 ~]# mkfs.xfs /dev/clusterlvm/nginx_xfs
meta-data=/dev/clusterlvm/nginx_xfs isize=256    agcount=4, agsize=32000 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=128000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
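
Before handing the filesystem over to rgmanager, it is worth mounting it by hand once on one node to confirm it works. The mount point below (/mnt/nginx) is just an example; in the Filesystem resource you would use whatever path nginx actually serves from.

mkdir -p /mnt/nginx                         # example mount point only
mount /dev/clusterlvm/nginx_xfs /mnt/nginx
df -hT /mnt/nginx                           # should report type xfs
umount /mnt/nginx                           # unmount again; the cluster will manage it from now on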


III. Add the cluster service
1. Failover Domains
Create a failover domain that contains node1 and node2.

2. Resources
IP Address

HA LVM

FileSystem

Script


3. Service Groups

Note: when adding the Service Group, the order in which resources are added matters. In this example the four resources are added at the same level (as siblings); the next example uses a parent/child hierarchy.
IP --> HA LVM --> Filesystem --> Script
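
The same failover domain, resources and service can also be defined from the command line with ccs (see the EXAMPLES block further below). The following is only a sketch: the virtual IP, mount point and script path are placeholders for whatever was entered in luci, and the resource option names should be checked against ccs(8) and the rgmanager resource agents before use.

ccs -h node1 --addfailoverdomain nginx_fd ordered
ccs -h node1 --addfailoverdomainnode nginx_fd node1 1
ccs -h node1 --addfailoverdomainnode nginx_fd node2 2
ccs -h node1 --addresource ip address=192.168.8.100 monitor_link=on             # placeholder VIP
ccs -h node1 --addresource lvm name=nginx_lvm vg_name=clusterlvm lv_name=nginx_xfs
ccs -h node1 --addresource fs name=nginx_fs device=/dev/clusterlvm/nginx_xfs mountpoint=/usr/local/nginx/html fstype=xfs   # placeholder mount point
ccs -h node1 --addresource script name=nginx_script file=/etc/init.d/nginx      # placeholder init script path
ccs -h node1 --addservice nginx domain=nginx_fd recovery=relocate
ccs -h node1 --addsubservice nginx ip ref=192.168.8.100
ccs -h node1 --addsubservice nginx lvm ref=nginx_lvm
ccs -h node1 --addsubservice nginx fs ref=nginx_fs
ccs -h node1 --addsubservice nginx script ref=nginx_script
ccs -h node1 --sync --activate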



IV. Management and testing

[root@node1 ~]# clustat -i1
Cluster Status for rhcs @ Sun Sep 28 14:50:17 2014
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1                                       1 Online, Local, rgmanager
 node2                                       2 Online, rgmanager

 Service Name                   Owner (Last)                   State        
 ------- ----                   ----- ------                   -----        
 service:nginx                  node1                          started


clusvcadm -e nginx -m node1       # enable (start) the nginx service on the given node
clusvcadm -e nginx -F             # enable it, letting the failover domain rules pick the node
clusvcadm -r nginx -m node2       # relocate the service to node2
clusvcadm -d nginx                # disable (stop) the nginx service
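
A few other clusvcadm operations that come in handy during maintenance (see clusvcadm(8)):

clusvcadm -R nginx      # restart the service in place
clusvcadm -s nginx      # stop the service (it stays stopped until enabled again)
clusvcadm -Z nginx      # freeze it: rgmanager temporarily stops monitoring/recovering it
clusvcadm -U nginx      # unfreeze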

V. Quorum disk. For a 2-node cluster, a quorum disk is introduced to prevent split-brain; it can be as small as about 10 MB, and qdisk currently supports at most 16 nodes.
Create one more partition on /dev/mapper/redhat; here I make it 10 MB (the partitioning step is sketched below).
To combine this with fencing, see RHCA-436-5 fence.
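
The partitioning command itself is not shown; given the layout used in this article (partition 1 ends at 2048 MB and, in Example 2, partition 3 starts at 2058 MB), the 10 MB quorum partition would be created roughly like this (a sketch):

parted /dev/mapper/redhat mkpart primary 2048 2058    # 10 MB partition for the quorum disk
partprobe /dev/mapper/redhat
multipath -r                                          # refresh the maps so /dev/mapper/redhatp2 shows up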
[root@node1 ~]# mkqdisk -c /dev/mapper/redhatp2 -l qdisk
mkqdisk v3.0.12.1

Writing new quorum disk label 'qdisk' to /dev/mapper/redhatp2.
WARNING: About to destroy all data on /dev/mapper/redhatp2; proceed [N/y] ? y
Warning: Initializing previously initialized partition
Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...

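After mkqdisk, the quorum disk still has to be registered in the cluster configuration (in luci this is done in the cluster's Configure settings, pointing it at the qdisk label). A rough ccs equivalent might look like the sketch below; the exact quorumd and cman attribute names should be verified against qdisk(5), cman(5) and ccs(8) before use, and heuristics (for example a ping check) are normally added as well.

ccs -h node1 --setquorumd label=qdisk interval=2 tko=10 votes=1 min_score=1
ccs -h node1 --setcman expected_votes=3      # 2 node votes + 1 qdisk vote; two_node must not stay enabled
ccs -h node1 --sync --activate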

[root@node1 ~]# clustat
Cluster Status for rhcs @ Sun Sep 28 15:12:37 2014
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1                                       1 Online, Local, rgmanager
 node2                                       2 Online, rgmanager
 /dev/block/253:6                            0 Online, Quorum Disk

 Service Name                   Owner (Last)                   State        
 ------- ----                   ----- ------                   -----        
 service:nginx                  node1                          started



Addendum: the cluster can also be managed with ccs (installed separately) and cman_tool; these complement luci.
EXAMPLES
       Create and start a 3 node cluster with apc fencing:
       ccs -h host1 --createcluster mycluster
       ccs -h host1 --addnode host1
       ccs -h host1 --addnode host2
       ccs -h host1 --addnode host3
       ccs -h host1 --addmethod primary host1
       ccs -h host1 --addmethod primary host2
       ccs -h host1 --addmethod primary host3
       ccs -h host1 --addfencedev myfence agent=fence_apc ipaddr=192.168.0.200
       login=apc passwd=apc
       ccs -h host1 --addfenceinst myfence host1 primary port=1
       ccs -h host1 --addfenceinst myfence host2 primary port=2
       ccs -h host1 --addfenceinst myfence host3 primary port=3
       ccs -h host1 --sync --activate
       ccs -h host1 --startall

cman_tool status
cman_tool version -r                         # push the updated cluster.conf version to all nodes
cman_tool expected -e <new expected votes>   # adjust the expected vote count









Example 2: a 3-node cluster (gfs2 + nfsv4 + lvm-cluster)
Continuing from Example 1, add one more VM as an application node and remove the quorum disk qdisk.
node4   # NFS service node (note: this fourth VM joins the cluster under the hostname node3, using the 192.168.x.104 addresses, since the storage/luci machine already uses the hostname storage)
/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.104 node3
192.168.7.104 node3
192.168.6.104 node3
192.168.5.104 node3

192.168.7.101 node1
192.168.7.102 node2

192.168.6.103 storage
192.168.5.103 storage

192.168.7.103 luci


Partitioning. Carve out another 2 GB partition, build a new clustered logical volume on it, and format it as gfs2.
[root@node3 ~]# parted /dev/mapper/redhat mkpart primary 2058 4106
Information: You may need to update /etc/fstab.                          
[root@node3 ~]# partprobe /dev/mapper/redhat
[root@node3 ~]# fdisk -l|grep redhat
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: GPT (GUID Partition Table) detected on '/dev/mapper/redhat'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/mapper/redhat: 8589 MB, 8589934592 bytes
/dev/mapper/redhatp1                     1045     8388607+  ee  GPT
Disk /dev/mapper/redhatp1: 2046 MB, 2046820352 bytes
Disk /dev/mapper/redhatp2: 10 MB, 10485760 bytes
Disk /dev/mapper/redhatp3: 2047 MB, 2047868928 bytes

[root@node3 ~]# pvcreate /dev/mapper/redhatp3
  Physical volume "/dev/mapper/redhatp3" successfully created
[root@node3 ~]# vgcreate -cy clvm /dev/mapper/redhatp3
  Clustered volume group "clvm" successfully created
[root@node3 ~]# lvcreate -n gfs -L 1500M clvm
  Logical volume "gfs" created
[root@node3 ~]# mkfs.gfs2 -t rhcs:gfs -p lock_dlm -j 4 -J 32 /dev/clvm/gfs
This will destroy any data on /dev/clvm/gfs.
It appears to contain: symbolic link to `../dm-7'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clvm/gfs
Blocksize:                 4096
Device Size                1.46 GB (384000 blocks)
Filesystem Size:           1.46 GB (383999 blocks)
Journals:                  4
Resource Groups:           6
Locking Protocol:          "lock_dlm"
Lock Table:                "rhcs:gfs"
UUID:                      1a40e734-2bdc-f232-2cdd-8214fa64bd1b
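
The -j 4 above creates four journals (one for each node that may mount the filesystem at the same time) and -J 32 makes each journal 32 MB. If more nodes are added to the cluster later, extra journals can be appended to the mounted filesystem with gfs2_jadd, for example (a sketch, assuming the filesystem is mounted at /data):

gfs2_jadd -j 1 /data      # add one more journal to the mounted gfs2 filesystem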

Local test. After formatting, create a /data directory on all three nodes and mount the filesystem locally to try it out.
[root@node3 ~]# mkdir /data
[root@node3 ~]# mount /dev/clvm/gfs /data/
[root@node3 ~]# cd /data/
[root@node3 data]# touch test
[root@node3 data]# mount
/dev/mapper/vg0-root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext4 (rw)
/dev/mapper/vg0-home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /sys/kernel/config type configfs (rw)
/dev/mapper/clvm-gfs on /data type gfs2 (rw,seclabel,relatime,hostdata=jid=0)
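
Because the lock table is rhcs:gfs and lock_dlm is in use, the same filesystem can be mounted on node1 and node2 at the same time; repeating the test there should show the file just created on node3 (a quick sketch):

# on node1 (and likewise on node2)
mkdir -p /data
mount /dev/clvm/gfs /data
ls /data                  # the "test" file created on node3 should be visible
umount /data              # optional; the clusterfs resource can manage the mount later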

Once everything is ready, add the cluster resources.
Mount options can be filled in here as needed; they are the same options you would pass to mount with -o. I used lockproto=lock_dlm,acl,quota=on; these are discussed later.
Follow the same steps as before: first add a Failover Domain, then the Resources, then the Service Group.
Note: NFS is special. When building the Service Group, pay attention to the hierarchy: the IP and the gfs2 filesystem are siblings, the NFS server is a child resource of the gfs2 filesystem, and the NFS client is in turn a child of the NFS server (a ccs sketch of this nesting follows the diagram below).
IP -- gfs2
           |-- nfs server
                      |-- nfs client
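
A command-line sketch of this nesting with ccs is shown below. The failover domain and resource names are placeholders; the export path /data and the service IP 192.168.8.20 come from the test further down, and the wildcard client matches the '*' seen in showmount. Double-check the --addsubservice path syntax against ccs(8) before relying on it.

ccs -h node1 --addresource clusterfs name=gfs_data device=/dev/clvm/gfs mountpoint=/data fstype=gfs2 options="lockproto=lock_dlm,acl,quota=on"
ccs -h node1 --addresource nfsserver name=nfs_srv
ccs -h node1 --addresource nfsclient name=nfs_clients target="*" options="rw"
ccs -h node1 --addservice nfsv4 domain=nfs_fd recovery=relocate
ccs -h node1 --addsubservice nfsv4 ip ref=192.168.8.20
ccs -h node1 --addsubservice nfsv4 clusterfs ref=gfs_data
ccs -h node1 --addsubservice nfsv4 clusterfs:nfsserver ref=nfs_srv
ccs -h node1 --addsubservice nfsv4 clusterfs:nfsserver:nfsclient ref=nfs_clients
ccs -h node1 --sync --activate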




Test whether the service is usable.
[root@node3 ~]# clustat
Cluster Status for rhcs @ Mon Sep 29 10:47:13 2014
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 node1                                       1 Online, rgmanager
 node2                                       2 Online, rgmanager
 node3                                       3 Online, Local, rgmanager

 Service Name                   Owner (Last)                   State        
 ------- ----                   ----- ------                   -----        
 service:nfsv4                  node3                          started      
 service:nginx                  node1                          started


root@jun-live:~#showmount -e 192.168.8.20
Export list for 192.168.8.20:
/data *

root@jun-live:~#mount -t nfs 192.168.8.20:/data /mnt/test/
root@jun-live:~#mount
/dev/mapper/vg0-ct6_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/vg0-virt on /var/lib/libvirt/images type ext4 (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.8.20:/data on /mnt/test type nfs (rw,vers=4,addr=192.168.8.20,clientaddr=192.168.8.254)
root@jun-live:~#cd /mnt/test/
root@jun-live:test#touch hello
root@jun-live:test#ls
client  hello  test
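
With the client mount still active, it is worth testing failover: relocate the nfsv4 service to another node and confirm the client keeps working after a short pause (a sketch):

# on any cluster node
clusvcadm -r nfsv4 -m node1     # move the NFS service from node3 to node1
clustat                         # confirm the new owner

# on the client (jun-live)
ls /mnt/test                    # the mount should respond again after a brief interruption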


Through these two typical examples we now have a general picture of how RHCS works; for more detail, refer to the official Red Hat documentation.