Cluster: luci + ricci

Environment prep:

1. selinux         Enforcing            vim /etc/sysconfig/selinux
2. date            time sync            ntpdate
3. iptables        flush the firewall   iptables -F
4. NetworkManager  stop it              ( a consolidated prep sketch follows this list )
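
A minimal per-node prep sketch covering the four items above ( hedged: the NTP server
192.168.2.251 is the one used later in these notes; setenforce only changes the runtime
state, the persistent setting lives in /etc/sysconfig/selinux ):

    setenforce 1                    # runtime only; set SELINUX=enforcing in the config file to persist
    ntpdate 192.168.2.251           # one-shot time sync against the lab NTP server
    iptables -F                     # flush all firewall rules
    service NetworkManager stop     # stop NetworkManager for this session
    chkconfig NetworkManager off    # keep it disabled across reboots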

                                                              Cluster Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Managed nodes:    192.168.2.149
                   192.168.2.243
 Management host:  192.168.2.1

 Web UI address:   server1.example.com:8084


Check status        clustat
Stop a service      clusvcadm -s www
Start a service     clusvcadm -e www
Relocate a service  clusvcadm -r www -m server243.example.com ( service migration )
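
clusvcadm has a few more switches worth knowing ( standard rgmanager flags, listed here
as a quick reference ):

    clusvcadm -d www      # disable the service entirely ( stronger than -s )
    clusvcadm -Z www      # freeze: rgmanager stops monitoring and relocating www
    clusvcadm -U www      # unfreeze: resume normal monitoring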

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



HA ( high availability / hot standby: clients see only one host, but both hosts stay alive )


                  luci
                 /    \
                /      \
  (primary) ricci -HA- ricci (standby)

Resources: VIP ( IP address )   web ( the application )   filesystem


Create New Cluster
The cluster name must be unique and shorter than 15 characters.



(I) Adding nodes ( the managed hosts are the nodes, i.e. the hosts to be managed by the cluster )

   * 1. ricci  configuration on the managed hosts ( both 149 and 243 need all of this )
         (1) selinux    vim /etc/sysconfig/selinux
                        reboot
                      ( when changing the SELinux state in the config file, switching between
                        enforcing and permissive needs no reboot, but switching to or from
                        disabled does )
         (2) date     ( time sync )    ntpdate 192.168.2.251
                      ( set the time ) date -s 11:12:38        ( either of the two methods will do )
         (3) firewall   iptables -F
         (4) yum repo   rm -fr /etc/yum.repos.d/rhel-source.repo
                        lftp i
                        get dvd.repo
                        yum clean all
                      ( a repo path such as baseurl=http://192.168.2.251/pub/rhel6.5 only reaches the
                        Server directory of the image; some directories stay out of reach )
                      ( the unreachable directories are HighAvailability, LoadBalancer, ResilientStorage
                        and ScalableFileSystem; the ricci and luci packages we need are not in the Server
                        directory, so these repos must be added. dvd.repo already contains them; a sketch
                        follows at the end of this section )
         (5) install ricci   yum install -y ricci
               password      passwd ricci ( set a password for the ricci user; it is required when
                                            creating the cluster )
               start         /etc/init.d/ricci start
               autostart     chkconfig ricci on
        (6) check the port   netstat -antlp  ( 11111 )
                       tcp        0      0 :::11111       :::*              LISTEN      1330/ricci
        (7) name resolution  vim /etc/hosts
                       192.168.2.243   server243.example.com
                       192.168.2.149   server149.example.com
                       192.168.2.1     server1.example.com
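
        A sketch of what dvd.repo plausibly contains ( the real file is fetched above and not
        reproduced in these notes; the layout assumes the RHEL 6.5 DVD mirrored at
        http://192.168.2.251/pub/rhel6.5 ):

            [dvd]
            name=rhel6.5 dvd
            baseurl=http://192.168.2.251/pub/rhel6.5
            gpgcheck=0

            [HighAvailability]
            name=rhel6.5 HighAvailability
            baseurl=http://192.168.2.251/pub/rhel6.5/HighAvailability
            gpgcheck=0

            [LoadBalancer]
            name=rhel6.5 LoadBalancer
            baseurl=http://192.168.2.251/pub/rhel6.5/LoadBalancer
            gpgcheck=0

            [ResilientStorage]
            name=rhel6.5 ResilientStorage
            baseurl=http://192.168.2.251/pub/rhel6.5/ResilientStorage
            gpgcheck=0

            [ScalableFileSystem]
            name=rhel6.5 ScalableFileSystem
            baseurl=http://192.168.2.251/pub/rhel6.5/ScalableFileSystem
            gpgcheck=0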
 
  * 2. Management host ( 192.168.2.1 )
         (1) selinux    vim /etc/sysconfig/selinux
                        reboot
         (2) date     ( time sync )
         (3) firewall   iptables -F
         (4) yum repo   rm -fr /etc/yum.repos.d/rhel-source.repo
                        lftp i
                        get dvd.repo
                        yum clean all
         (5) install luci   yum install -y luci
                       rpm -q luci
                       luci-0.26.0-48.el6.x86_64
         (6) name resolution  vim /etc/hosts
                       192.168.2.243   server243.example.com
                       192.168.2.149   server149.example.com
                       192.168.2.1     server1.example.com
         (7) start the service   /etc/init.d/luci start
                     ( a URL is printed; right-click it and choose Open Link )
                     ( login: root, password: the host's root password )
             add the Nodes in the web UI
                     ( Create --> Cluster Name ( any unique name )  --> Use the Same Password.. )
                     ( add the nodes: Create -- server149.example.com ( the hostname ) -- password ( the ricci password ) --  server149.example.com -- 11111 )
                                        ( -- server243.example.com ( the hostname ) -- password ( the ricci password ) --  server243.example.com -- 11111 )
                     ( Download ..., Reboot ..., Enable ...)
                     ( Create Cluster takes roughly 5 minutes; progress can be watched with ps ax )
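
         For reference, the same cluster can also be built from the management host with the
         ccs command-line tool instead of the web UI ( a hedged sketch; assumes the ccs package
         is available and that it authenticates against ricci on each node ):

             yum install -y ccs                                # CLI front end to ricci
             ccs -h server149.example.com --createcluster wjx  # cluster name, < 15 characters
             ccs -h server149.example.com --addnode server149.example.com
             ccs -h server149.example.com --addnode server243.example.com
             ccs -h server149.example.com --sync --activate    # push cluster.conf to every node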
         
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ricci1 is the primary host and manages the resources. If it runs into trouble, say a damaged disk, ricci2 takes over ricci1's resources, but ricci1 will not let go of them; once ricci1 recovers it resumes managing the same resources, so ricci1 and ricci2 end up managing one resource at the same time. Reading concurrently is harmless, but writing concurrently corrupts the data. This condition is called split-brain, and fencing solves it. The fence device is a third party: when ricci1 and ricci2 both hold the resource, fence cuts ricci1's power and reboots it. When ricci1 comes back up and finds its resources taken over by ricci2, it becomes the standby.


(II) Adding the fence device and related steps

  * 1. Generate fence_xvm.key ( the key must be generated on the physical machine, the management host 192.168.2.1 )
                   ( once fencing is configured, if fence_virtd is not set to start on boot, then the next
                     time the cluster is used, fence_virtd must be started before the managed hosts (VMs) )
    (1) install fence   yum search fence
                    yum install -y fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd.x86_64
                    rpm -qa | grep fence
                    fence-virtd-libvirt-0.2.3-15.el6.x86_64
                    fence-virtd-multicast-0.2.3-15.el6.x86_64
                    fence-virtd-0.2.3-15.el6.x86_64
    (2) configure fence_virtd and generate fence_xvm.key
                    fence_virtd -c  ( fill in the fields below; press Enter to accept the defaults everywhere else )
                    Interface [none]: br0  ( the bridge on the physical machine )
                    Backend module [checkpoint]: libvirt
                    Replace /etc/fence_virt.conf with the above [y/N]? y
    (3) start the service     /etc/init.d/fence_virtd start
                    Starting fence_virtd:                                      [  OK  ]
    (4) create the key directory  mkdir /etc/cluster
    (5) generate the key  cd /etc/cluster/
                     dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
                     output:
                     1+0 records in
                     1+0 records out
                     128 bytes (128 B) copied, 0.000301578 s, 424 kB/s
         check the key   ll /etc/cluster/fence_xvm.key
                     -rw-r--r--. 1 root root 128 Jul 24 13:20 /etc/cluster/fence_xvm.key
     (6) copy the key to the managed hosts
                     scp /etc/cluster/fence_xvm.key 192.168.2.149:/etc/cluster/
                     scp /etc/cluster/fence_xvm.key 192.168.2.243:/etc/cluster/
    (7) restart the service   /etc/init.d/fence_virtd restart
    (8) check the port        netstat -anulp | grep 1229
                     udp        0      0 0.0.0.0:1229        0.0.0.0:*             12686/fence_virtd
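
    Before relying on it, fencing can be exercised by hand from a managed node ( hedged sketch;
    fence_xvm reads /etc/cluster/fence_xvm.key, and vm2 is the libvirt domain name mapped to
    server149 below ):

        fence_xvm -o list            # ask fence_virtd which domains it can see
        fence_xvm -H vm2 -o reboot   # forcibly reboot the vm2 domain through the hypervisor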


  * 2.  Fence setup and related steps in the web UI
    (1) find an unused IP
                     ping 192.168.2.233
                     PING 192.168.2.233 (192.168.2.233) 56(84) bytes of data.
                     From 192.168.2.1 icmp_seq=2 Destination Host Unreachable
    (2) install httpd on the managed hosts
                     yum install -y httpd
                     vim /var/www/html/index.html ( the two managed hosts get different content, purely for
                                                    testing; in a real environment it would be identical )
    (3) add the Fence Device in the web UI

        * Fence Device: Add --> Fence Virt ( Multicast Mode ) --> Name ( vmfence ) --> Submit

        * Nodes: server149.example.com --> Fence Method to Node --> server149-vmfence --> Submit
                                      --> Fence Device --> vmfence ( xvm Virtual Machine Fencing ) --> Domain
                                      ( either the VM's name or its UUID; both are shown in the VM's info view )
                                      ( vm2 , abb71331-39a0-16cc-6fe2-11f5ebfb9689 ) --> Submit
                 server243.example.com --> Fence Method to Node --> server243-vmfence --> Submit
                                      --> Fence Device --> vmfence ( xvm Virtual Machine Fencing ) --> Domain
                                      ( vm3 , 5a306666-7fef-164d-8072-09279e429725 ) --> Submit
    
        * Failover Domains  -->  Add  --> Name ( webfile; any name will do ) -->
                              ( check )     Prioritized    Order the nodes to which services failover.   ( failover follows node priority )
                              ( check )     Restricted     Service can run only on nodes specified.      ( the service is limited to the listed nodes )
                              ( unchecked ) No Failback    Do not send service back to 1st priority node when it becomes available again.
                                                       ( left unchecked: the service does fail back to the first-priority node once it is available again )
                                                                       Member     Priority
                                     server149.example.com          ( check )        1
                                     server243.example.com          ( check )        2
                              --> Create

                      resulting page: Name     Prioritized     Restricted
                                   webfile          *                 *
 
        *  Resources   -->   Add   -->   IP Address ( select )
                                         IP Address              ( the unused IP tested above )            192.168.2.233
                                         Netmask Bits (optional)                                           24
                                         Monitor Link                                                      ( check )
                                         Disable Updates to Static Routes                                  ( check )
                                         Number of Seconds to Sleep After Removing an IP Address           10
                            -->  Submit

                       -->   Add   -->   Script
                                         Name                        httpd
                                         Full Path to Script File    /etc/init.d/httpd
                            -->  Submit

                             resulting page: Name/IP             Type           In Use
                                      192.168.2.233/24     IP Address     Yes
                                      httpd                Script         Yes

       * Service Groups  -->  Add  -->  Service Name                          www
                                        Automatically Start This Service      ( check )    ( autostart )
                                        Run Exclusive                         ( check )    ( run exclusively on one node )
                                        Failover Domain                       ( webfile )
                                        Recovery Policy                       ( relocate )
                            -->  Submit
                        Add Resource --> 192.168.2.233/24
                        Add Resource --> httpd

                            resulting page: Name     Status                                 Autostart        Failover Domain
                                 www     Running on server149.example.com       ( check )           webfile
                        ( the cluster.conf fragment these clicks produce is sketched below )

                     ......
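
        The web UI writes these settings into /etc/cluster/cluster.conf on every node. A sketch of
        the resulting <rm> fragment ( attribute names follow the standard rgmanager schema; values
        taken from the steps above, so treat it as illustrative rather than a verbatim dump ):

            <rm>
              <failoverdomains>
                <failoverdomain name="webfile" ordered="1" restricted="1">
                  <failoverdomainnode name="server149.example.com" priority="1"/>
                  <failoverdomainnode name="server243.example.com" priority="2"/>
                </failoverdomain>
              </failoverdomains>
              <resources>
                <ip address="192.168.2.233/24" monitor_link="on"/>
                <script file="/etc/init.d/httpd" name="httpd"/>
              </resources>
              <service domain="webfile" exclusive="1" name="www" recovery="relocate">
                <ip ref="192.168.2.233/24"/>
                <script ref="httpd"/>
              </service>
            </rm>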
    (4) final success test ( run the checks on the managed hosts )
                     browse to 192.168.2.233; the page shows:
                             server149.example.com
                     clustat ( run on a managed host )
                     result:
                     Cluster Status for wjx @ Thu Jul 24 14:50:23 2014
                     Member Status: Quorate
                     Member Name                                                     ID   Status
                     ------ ----                                                     ---- ------
                     server149.example.com                                               1 Online, rgmanager
                     server243.example.com                                               2 Online, Local, rgmanager
                     Service Name                   Owner (Last)                         State         
                     ------- ----                   ----- ------                         -----         
                     service:www                    server149.example.com                started

                    /etc/init.d/httpd stop  ( stop httpd on 192.168.2.149 )
                    browse to 192.168.2.233 again; the page now shows:
                             server243.example.com
                    clustat ( run on a managed host )
                     result:
                     Member Name                             ID   Status
                     ------ ----                             ---- ------
                     server149.example.com                       1 Online, Local, rgmanager
                     server243.example.com                       2 Online, rgmanager
                     Service Name                   Owner (Last)                   State         
                     ------- ----                   ----- ------                   -----         
                     service:www                    server243.example.com          started
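
                     Stopping httpd only exercises service relocation. To see fencing itself kick in,
                     a common ( destructive, test-VM-only ) check is to crash the active node's kernel
                     and watch the survivor fence it and take over; a hedged sketch:

                         echo c > /proc/sysrq-trigger   # on server149: crash the kernel instantly
                         # then, on server243:
                         clustat                        # www should move to server243.example.com
                         tail -f /var/log/messages      # watch for the fence action being logged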

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(III) Data storage
  First storage method ( mkfs.ext4 ) ( only one of 149 and 243 may operate on a given mount point at a
  time, though several different mount points are supported )
 * 1. Management host ( luci )
       (1) install the iSCSI target   yum install -y scsi-target-utils.x86_64
       (2) create an LV     lvcreate -L 2G -n iscsi vol0
                     lvs
                     iscsi vol0 -wi-a---   2.00g
       (3) add the initiators   vi /etc/tgt/targets.conf
                      <target iqn.2008-09.com.example:server.target1>
                          backing-store /dev/vol0/iscsi
                          initiator-address 192.168.2.149
                          initiator-address 192.168.2.243
                      </target>
       (4) start the service   /etc/init.d/tgtd start
       (5) check the target    tgt-admin -s
                    /etc/init.d/tgtd restart
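
       A quick confirmation that the LUN is exported, plus persistence across reboots ( hedged;
       the grep patterns just match the usual tgt-admin -s layout ):

           chkconfig tgtd on                                      # start the target at boot
           tgt-admin -s | grep -i -e 'target' -e 'backing store'  # the LV should appear as a backing store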

 * 2. Managed hosts ( the ricci nodes )
       (1) install the initiator   yum install -y iscsi*
       (2) discover the target     iscsiadm -m discovery -t st -p 192.168.2.1
       (3) log in                  iscsiadm -m node -l
                     fdisk -l ( sda, absent before the login, should now be listed )
       (4) partition for LVM       fdisk -cu /dev/sda ( n, p, 1, Enter, Enter, t, 8e, p, w )
           check       cat /proc/partitions   ( the other node cannot see the new partition until partprobe is run there )
                       8        1    2096128 sda1
       (5) check clvmd          /etc/init.d/clvmd status ( running )
       (6) cluster locking      lvmconf --enable-cluster
       (7) lvm configuration    vi /etc/lvm/lvm.conf
                       locking_type = 3  ( usually 3 already by default; it means the built-in clustered locking is used )
                     /etc/init.d/clvmd restart
       (8) build the LVM stack
               pv: pvcreate /dev/sda1
                    pvs
                    result: /dev/sda1  clustervg lvm2 a--  2.00g 764.00m
               vg: vgcreate  clustervg /dev/sda1
                    vgdisplay clustervg
                    result: Clustered             yes
                    vgs
                    result: clustervg   1   1   0 wz--nc 2.00g 764.00m
               lv: lvcreate -L 1G -n clusterlv clustervg
                    lvs
                    result: clusterlv clustervg -wi-------   1.25g
       (9) format    mkfs.ext4 /dev/clustervg/clusterlv
             mount   mount /dev/clustervg/clusterlv /var/www/html
       (10) security context ( needed because selinux is enforcing )
                   restorecon -Rv /var/www/html/
                   ll -dZ /var/www/html/
                   result: drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/
       (11) test   vim /var/www/html/index.html
                   westos
                   umount /var/www/html/ ( on 149 )
                   mount /dev/clustervg/clusterlv /var/www/html/ ( on 243 )
                   start the service: clusvcadm -e www
                   cat /var/www/html/index.html
                   westos
                   umount /var/www/html/

          Web UI: add a Filesystem resource so the cluster mounts and unmounts the device itself
                  Resources --> Add --> Filesystem -->
                                              Name                                          webdate
                                              Filesystem Type                               ext4
                                              Mount Point                                   /var/www/html
                                              Device, FS Label, or UUID                     /dev/clustervg/clusterlv
                                              Mount Options                                 ( optional mount options )
                                              Filesystem ID (optional)
                                              Force Unmount                                 ( check )
                                              Force fsck                                    ( check )
                                              Enable NFS daemon and lockd workaround        ( only needed for NFS exports )
                                              Use Quick Status Checks                       ( check )
                                              Reboot Host Node if Unmount Fails             ( check )
                  ( the matching cluster.conf resource line is sketched below )
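
     For reference, the fs resource line this form generates in cluster.conf looks roughly like
     this ( attribute names from the standard rgmanager fs agent; treat it as a sketch ):

         <fs device="/dev/clustervg/clusterlv" force_fsck="1" force_unmount="1" fstype="ext4"
             mountpoint="/var/www/html" name="webdate" quick_status="1" self_fence="1"/>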

    once the web UI step is done, df shows the storage device mounted automatically, on one host only
        df
        /dev/mapper/clustervg-clusterlv   1032088   34056    945604   4% /var/www/html

        clustat
         service:www                    server149.example.com           started  

        browser check: http://192.168.2.233/
        westos


* 3. Service migration
(1) web UI
        Service Groups -- click www ( the service group ) --
        Status: Running on server149.example.com ( start on node ) -- click "start on node" -- choose server243.example.com -- click the start icon ( the small triangle )
        check: clustat
              service:www                    server243.example.com           started
        browser check: ( nothing changes on the front end; the client cannot tell that the service migrated )
        http://192.168.2.233/
        westos

(2) command line
        clusvcadm -r www -m server149.example.com
            check: clustat
              service:www                    server149.example.com           started


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Second storage method ( 149 and 243 can both operate on the same mount point at the same time; multi-node writes are supported )

First stop the resource:
    clusvcadm -s www
     service:www                    (server41.example.com)         stopped

    df ( df now shows the filesystem was unmounted automatically )



   2. (  mkfs.gfs2  )
* 1. format as gfs2
    (1) check status   /etc/init.d/gfs2 status ( not running )
                   man mkfs.gfs2 ( the manual )
    (2) unmount   ( on both 149 and 243 )
                  umount /var/www/html/
    (3) format    mkfs.gfs2 -p lock_dlm -t wjx:mygfs2 -j 3 /dev/clustervg/clusterlv ( on 149 )
                 ( wjx is the cluster name given when the cluster was created; -j 3 means three journals, and each node that mounts the filesystem needs its own journal )

           This will destroy any data on /dev/clustervg/clusterlv.
           It appears to contain: symbolic link to `../dm-2'

           Are you sure you want to proceed? [y/n] y         ( answer y to go ahead with the format )

            output:
            Device:                    /dev/clustervg/clusterlv
            Blocksize:                 4096
            Device Size                1.00 GB (262144 blocks)
            Filesystem Size:           1.00 GB (262142 blocks)
            Journals:                  3
            Resource Groups:           4
            Locking Protocol:          "lock_dlm"
            Lock Table:                "wjx-c:mygfs2"
            UUID:                      0ced770a-afc4-50d5-7224-9a06cea2415f

    (4) mount          mount /dev/clustervg/clusterlv /var/www/html/ ( on 149 )
    (5) web page test  vim /var/www/html/index.html ( on 149 )
                content: www
    (6) security context ( needed while selinux is enforcing; change it on one host while the
                           filesystem is mounted, and the other host will not have to repeat it )
                   restorecon -Rv /var/www/html/ ( on 149 )
                   ll -dZ /var/www/html/      ( on 149 )
                   result: drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/
    (7) test       mount /dev/clustervg/clusterlv /var/www/html/ ( mount on 243 and check whether the
                   files created on 149 are there; if so, the shared filesystem works )
                   touch /var/www/html/file     ( on 243, to confirm that the second node can write )
    (8) unmount    umount  /var/www/html/ ( on both 149 and 243 )
    (9) remove the old Filesystem resource in the web UI
            Service Groups -- click www ( the service group; its text is red at this point ) --  Filesystem ( remove )
    (10) permanent mount    ( write this on both managed hosts, 149 and 243 )
         get the UUID   blkid
                   /dev/mapper/clustervg-clusterlv: LABEL="wjx-a:mygfs2" UUID="1364ecd2-0c36-5e76-a506-253dcc7c8fc0" TYPE="gfs2"
                   vim /etc/fstab
                    UUID=1364ecd2-0c36-5e76-a506-253dcc7c8fc0       /var/www/html   gfs2    _netdev 0 0
                   mount -a ( verify the mount )
           df  ( mounted successfully )
           /dev/mapper/clustervg-clusterlv   1048400  397168    651232  38% /var/www/html
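
   If more nodes than there are journals ( three here ) ever need to mount the volume, or the volume
   fills up, GFS2 can be adjusted online ( hedged sketch using the standard gfs2-utils commands ):

       gfs2_jadd -j 1 /var/www/html                  # add one more journal to the mounted filesystem
       lvextend -L +512M /dev/clustervg/clusterlv    # grow the clustered LV
       gfs2_grow /var/www/html                       # let GFS2 use the new space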
