Experiments with cluster synchronization / LB / HA on CentOS

Zookeeper

Cluster synchronization

 

Download and extract

wget http://apache.fayea.com/zookeeper/stable/zookeeper-3.4.8.tar.gz

tar xvf zookeeper-3.4.8.tar.gz

cd zookeeper-3.4.8

 

 

Edit the ZooKeeper configuration file

cp conf/zoo_sample.cfg conf/zoo.cfg

vim conf/zoo.cfg

# Each tick is 2000 ms (2 s) by default

# The number of milliseconds of each tick

tickTime=2000

# Ticks allowed for the initial synchronization phase; default 10 (20 s here). Followers that exceed this are dropped.

# The number of ticks that the initial

# synchronization phase can take

initLimit=10

# The number of ticks that can pass between

# sending a request and getting an acknowledgement

# Sync ticks; default 5 (10 s here). Followers that fall further behind are dropped.

syncLimit=5

# the directory where the snapshot is stored.

# do not use /tmp for storage, /tmp here is just

# example sakes

# Changed: data (snapshot) directory

dataDir=/usr/local/zookeeper/data

# Added: transaction log directory

dataLogDir=/usr/local/zookeeper/datalog

# the port at which the clients will connect

# Client connection port

clientPort=2181

# Maximum number of client connections; default 60

# the maximum number of client connections.

# increase this if you need to handle more clients

#maxClientCnxns=60

#

# Be sure to read the maintenance section of the

# administrator guide before turning on autopurge.

#

# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance

#

# The number of snapshots to retain in dataDir

#autopurge.snapRetainCount=3

# Purge task interval in hours

# Set to "0" to disable auto purge feature

#autopurge.purgeInterval=1

 

Create the directories referenced in the config

mkdir /usr/local/zookeeper/data

mkdir /usr/local/zookeeper/datalog
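The config above runs a single standalone node. For a real ensemble (which is what the cluster-synchronization use case calls for), each node would also need server entries and a myid file. A minimal sketch, assuming three nodes with the placeholder hostnames zk1/zk2/zk3 (not part of the original setup):

# appended to conf/zoo.cfg on every node (hostnames are assumptions)
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888

# each node writes its own id into dataDir
echo 1 > /usr/local/zookeeper/data/myid    # use 2 on zk2, 3 on zk3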

 

Start ZooKeeper

cd /usr/local/src/zookeeper-3.4.8/

bin/zkServer.sh start

Connect and test

bin/zkCli.sh -server 127.0.0.1:2181
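Once the shell is connected, a quick smoke test against a throwaway znode confirms reads and writes work (the path /test is arbitrary):

create /test "hello"
get /test
ls /
delete /test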

Heartbeat

This test pairs Heartbeat with nginx.

In production, make sure the data being served sits on shared storage.

IP and hostname plan

# virtual IP (VIP)

vip 192.168.211.134/eth0:0

# Host 1:

cs01:

192.168.211.128/eth0/public

192.168.244.128/eth1/private

# Host 2:

cs02:

192.168.211.135/eth0/public

192.168.244.129/eth1/private

 

Host 1 setup

hostname cs01

vim /etc/sysconfig/network

HOSTNAME=cs01

Basic prep (firewall and SELinux)

iptables -F

service iptables save

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

vi /etc/hosts

192.168.211.128 cs01

192.168.211.135 cs02

yum install -y epel-release

yum install -y heartbeat* libnet nginx

cd /usr/share/doc/heartbeat-3.0.4/

cp authkeys ha.cf haresources /etc/ha.d/

cd /etc/ha.d/

vim authkeys

auth 3

#1 crc

#2 sha1 HI!

3 md5 Hello!

chmod 600 authkeys

vim haresources

cs01 192.168.211.134/24/eth0:0 nginx
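The haresources line reads: preferred node cs01, VIP 192.168.211.134/24 on alias eth0:0, then the nginx resource script. Heartbeat resolves the script name against /etc/ha.d/resource.d/ and /etc/init.d/ and drives it with start/stop, so a quick sanity check (a sketch, paths per the default packaging) is:

ls /etc/ha.d/resource.d/ /etc/init.d/nginx
/etc/init.d/nginx status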

 

 

vim ha.cf

#

# There are lots of options in this file. All you have to have is a set

# of nodes listed {"node ...} one of {serial, bcast, mcast, or ucast},

# and a value for "auto_failback".

#

# ATTENTION: As the configuration file is read line by line,

# THE ORDER OF DIRECTIVE MATTERS!

#

# In particular, make sure that the udpport, serial baud rate

# etc. are set before the heartbeat media are defined!

# debug and log file directives go into effect when they

# are encountered.

#

# All will be fine if you keep them ordered as in this example.

#

#

# Note on logging:

# If all of debugfile, logfile and logfacility are not defined,

# logging is the same as use_logd yes. In other case, they are

# respectively effective. if detering the logging to syslog,

# logfacility must be "none".

#

# File to write debug messages to

debugfile /var/log/ha-debug

#

#

# File to write other messages to

#

logfile /var/log/ha-log

#

#


#

#

# Facility to use for syslog()/logger

#

logfacility local0

#

#

# A note on specifying "how long" times below...

#

# The default time unit is seconds

# 10 means ten seconds

#

# You can also specify them in milliseconds

# 1500ms means 1.5 seconds

#

#

# keepalive: how long between heartbeats?

#

keepalive 2

#

# deadtime: how long-to-declare-host-dead?

#

# If you set this too low you will get the problematic

# split-brain (or cluster partition) problem.

# See the FAQ for how to use warntime to tune deadtime.

#

deadtime 30

#

# warntime: how long before issuing "late heartbeat" warning?

# See the FAQ for how to use warntime to tune deadtime.

#


warntime 10

#

#

# Very first dead time (initdead)

#

# On some machines/OSes, etc. the network takes a while to come up

# and start working right after you've been rebooted. As a result

# we have a separate dead time for when things first come up.

# It should be at least twice the normal dead time.

#

initdead 60

#

#

# What UDP port to use for bcast/ucast communication?

#

udpport 694

#

# Baud rate for serial ports...

#

#baud 19200

#

# serial serialportname ...

#serial /dev/ttyS0 # Linux

#serial /dev/cuaa0 # FreeBSD

#serial /dev/cuad0 # FreeBSD 6.x

#serial /dev/cua/a # Solaris

#

#

# What interfaces to broadcast heartbeats over?


#

#bcast eth0 # Linux

#bcast eth1 eth2 # Linux

#bcast le0 # Solaris

#bcast le1 le2 # Solaris

#

# Set up a multicast heartbeat medium

# mcast [dev] [mcast group] [port] [ttl] [loop]

#

# [dev] device to send/rcv heartbeats on

# [mcast group] multicast group to join (class D multicast address

# 224.0.0.0 - 239.255.255.255)

# [port] udp port to sendto/rcvfrom (set this value to the

# same value as "udpport" above)

# [ttl] the ttl value for outbound heartbeats. this effects

# how far the multicast packet will propagate. (0-255)

# Must be greater than zero.

# [loop] toggles loopback for outbound multicast heartbeats.

# if enabled, an outbound packet will be looped back and

# received by the interface it was sent on. (0 or 1)

# Set this value to zero.

#

#

#mcast eth0 225.0.0.1 694 1 0

#

# Set up a unicast / udp heartbeat medium

# ucast [dev] [peer-ip-addr]

#

# [dev] device to send/rcv heartbeats on


# [peer-ip-addr] IP address of peer to send packets to

#

ucast eth1 192.168.244.129

#

#

# About boolean values...

#

# Any of the following case-insensitive values will work for true:

# true, on, yes, y, 1

# Any of the following case-insensitive values will work for false:

# false, off, no, n, 0

#

#

#

# auto_failback: determines whether a resource will

# automatically fail back to its "primary" node, or remain

# on whatever node is serving it until that node fails, or

# an administrator intervenes.

#

# The possible values for auto_failback are:

# on - enable automatic failbacks

# off - disable automatic failbacks

# legacy - enable automatic failbacks in systems

# where all nodes do not yet support

# the auto_failback option.

#

# auto_failback "on" and "off" are backwards compatible with the old

# "nice_failback on" setting.

#


# See the FAQ for information on how to convert

# from "legacy" to "on" without a flash cut.

# (i.e., using a "rolling upgrade" process)

#

# The default value for auto_failback is "legacy", which

# will issue a warning at startup. So, make sure you put

# an auto_failback directive in your ha.cf file.

# (note: auto_failback can be any boolean or "legacy")

#

auto_failback on

#

#

# Basic STONITH support

# Using this directive assumes that there is one stonith

# device in the cluster. Parameters to this device are

# read from a configuration file. The format of this line is:

#

# stonith <stonith_type> <configfile>

#

# NOTE: it is up to you to maintain this file on each node in the

# cluster!

#

#stonith baytech /etc/ha.d/conf/stonith.baytech

#

# STONITH support

# You can configure multiple stonith devices using this directive.

# The format of the line is:

# stonith_host <hostfrom> <stonith_type> <params...>

# <hostfrom> is the machine the stonith device is attached


# to or * to mean it is accessible from any host.

# <stonith_type> is the type of stonith device (a list of

# supported drives is in /usr/lib/stonith.)

# <params...> are driver specific parameters. To see the

# format for a particular device, run:

# stonith -l -t <stonith_type>

#

#

# Note that if you put your stonith device access information in

# here, and you make this file publically readable, you're asking

# for a denial of service attack ;-)

#

# To get a list of supported stonith devices, run

# stonith -L

# For detailed information on which stonith devices are supported

# and their detailed configuration options, run this command:

# stonith -h

#

#stonith_host * baytech 10.0.0.3 mylogin mysecretpassword

#stonith_host ken3 rps10 /dev/ttyS1 kathy 0

#stonith_host kathy rps10 /dev/ttyS1 ken3 0

#

# Watchdog is the watchdog timer. If our own heart doesn't beat for

# a minute, then our machine will reboot.

# NOTE: If you are using the software watchdog, you very likely

# wish to load the module with the parameter "nowayout=0" or

# compile it without CONFIG_WATCHDOG_NOWAYOUT set. Otherwise even

# an orderly shutdown of heartbeat will trigger a reboot, which is

# very likely NOT what you want.


#

#watchdog /dev/watchdog

#

# Tell what machines are in the cluster

# node nodename ... -- must match uname -n

node cs01

node cs02

#

# Less common options...

#

# Treats 10.10.10.254 as a psuedo-cluster-member

# Used together with ipfail below...

# note: don't use a cluster node as ping node

#

ping 192.168.244.1

#

# Treats 10.10.10.254 and 10.10.10.253 as a psuedo-cluster-member

# called group1. If either 10.10.10.254 or 10.10.10.253 are up

# then group1 is up

# Used together with ipfail below...

#

#ping_group group1 10.10.10.254 10.10.10.253

#

# HBA ping derective for Fiber Channel

# Treats fc-card-name as psudo-cluster-member

# used with ipfail below ...

#

# You can obtain HBAAPI from http://hbaapi.sourceforge.net. You need

# to get the library specific to your HBA directly from the vender


# To install HBAAPI stuff, all You need to do is to compile the common

# part you obtained from the sourceforge. This will produce libHBAAPI.so

# which you need to copy to /usr/lib. You need also copy hbaapi.h to

# /usr/include.

#

# The fc-card-name is the name obtained from the hbaapitest program

# that is part of the hbaapi package. Running hbaapitest will produce

# a verbose output. One of the first line is similar to:

# Apapter number 0 is named: qlogic-qla2200-0

# Here fc-card-name is qlogic-qla2200-0.

#

#hbaping fc-card-name

#

#

# Processes started and stopped with heartbeat. Restarted unless

# they exit with rc=100

#

#respawn userid /path/name/to/run

respawn hacluster /usr/lib64/heartbeat/ipfail

#

# Access control for client api

# default is no access

#

#apiauth client-name gid=gidlist uid=uidlist

#apiauth ipfail gid=haclient uid=hacluster

 

###########################

#

# Unusual options.


#

###########################

#

# hopfudge maximum hop count minus number of nodes in config

#hopfudge 1

#

# deadping - dead time for ping nodes

#deadping 30

#

# hbgenmethod - Heartbeat generation number creation method

# Normally these are stored on disk and incremented as needed.

#hbgenmethod time

#

# realtime - enable/disable realtime execution (high priority, etc.)

# defaults to on

#realtime off

#

# debug - set debug level

# defaults to zero

#debug 1

#

# API Authentication - replaces the fifo-permissions-based system of the past

#

#

# You can put a uid list and/or a gid list.

# If you put both, then a process is authorized if it qualifies under either

# the uid list, or under the gid list.

#

# The groupname "default" has special meaning. If it is specified, then


# this will be used for authorizing groupless clients, and any client groups

# not otherwise specified.

#

# There is a subtle exception to this. "default" will never be used in the

# following cases (actual default auth directives noted in brackets)

# ipfail (uid=HA_CCMUSER)

# ccm (uid=HA_CCMUSER)

# ping (gid=HA_APIGROUP)

# cl_status (gid=HA_APIGROUP)

#

# This is done to avoid creating a gaping security hole and matches the most

# likely desired configuration.

#

#apiauth ipfail uid=hacluster

#apiauth ccm uid=hacluster

#apiauth cms uid=hacluster

#apiauth ping gid=haclient uid=alanr,root

#apiauth default gid=haclient

 

# message format in the wire, it can be classic or netstring,

# default: classic

#msgfmt classic/netstring

 

# Do we use logging daemon?

# If logging daemon is used, logfile/debugfile/logfacility in this file

# are not meaningful any longer. You should check the config file for logging

# daemon (the default is /etc/logd.cf)

# more infomartion can be fould in the man page.

# Setting use_logd to "yes" is recommended

 

scp authkeys ha.cf haresources cs02:/etc/ha.d/

 

 

 

 

 

 

Host 2 setup

hostname cs02

vim /etc/sysconfig/network

HOSTNAME=cs02

Basic prep (firewall and SELinux)

iptables -F

service iptables save

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

vi /etc/hosts

192.168.211.128 cs01

192.168.211.135 cs02

yum install -y epel-release

yum install -y heartbeat* libnet nginx

vim /etc/ha.d/ha.cf

ucast eth1 192.168.244.128

 

 

Start Heartbeat on the master first, then on the backup

service heartbeat start   # on cs01 (master)

service heartbeat start   # on cs02 (backup)

 

Checks

ifconfig — confirm that eth0:0 is up and carries the VIP

ps aux | grep nginx

Stop Heartbeat on the master

The backup should take over the VIP and start nginx
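A concrete failover drill, assuming the layout above (a sketch; run each block on the node named in the comment):

# on cs01 (current master): simulate a failure
service heartbeat stop

# on cs02: after deadtime (30 s here) the VIP and nginx should appear
ifconfig eth0:0
ps aux | grep nginx

# from any client: the VIP should still answer
curl http://192.168.211.134/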

LVS

Load-balancing modes: NAT and DR. For a physical deployment, the real servers are expected to share storage.

lvs-nat

IP and hostname plan

#Director

192.168.211.137/eth0

192.168.244.130/eth1

 

# Host 1:

cs01:

192.168.244.128/eth1, gateway 192.168.244.130 (the director)

# Host 2:

cs02:

192.168.244.129/eth1, gateway 192.168.244.130 (the director)

 

Director setup

yum install -y ipvsadm

[root@director ~]# vi /usr/local/sbin/lvs_nat.sh

#!/bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects

echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects

iptables -t nat -F

iptables -t nat -X

iptables -t nat -A POSTROUTING -s 192.168.244.0/24 -j MASQUERADE

IPVSADM='/sbin/ipvsadm'

$IPVSADM -C

$IPVSADM -A -t 192.168.211.137:80 -s lc -p 300

$IPVSADM -a -t 192.168.211.137:80 -r 192.168.244.128:80 -m -w 1

$IPVSADM -a -t 192.168.211.137:80 -r 192.168.244.129:80 -m -w 1

 

/bin/bash /usr/local/sbin/lvs_nat.sh
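With the script applied, the IPVS table on the director should show the virtual service 192.168.211.137:80 with both real servers in Masq forwarding mode; a verification sketch:

ipvsadm -Ln           # rules: one virtual service, two real servers, Forward = Masq
ipvsadm -Ln --stats   # per-real-server counters once requests start flowing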

 

RS setup

Install nginx on cs01 and cs02

yum install -y epel-release

yum install -y nginx

Write test content on each host

echo "rs1rs1" /usr/share/nginx/html/index.html

echo "rs2rs2" /usr/share/nginx/html/index.html

Start the service on each host

service nginx start

 

Test from a client

curl 192.168.211.137
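Note that the virtual service was created with -p 300, so repeated requests from the same client stick to one real server for 300 seconds; don't expect the curl output to alternate. Which real server a client is pinned to can be seen in the connection table (a sketch):

ipvsadm -Lnc   # lists current connections and persistence templates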

 

lvs-dr

IP and hostname plan

#Director

192.168.244.130/eth1

192.168.244.131/eth1:1

# Host 1:

cs01:

192.168.244.128/eth1

192.168.244.131/lo:0

 

# Host 2:

cs02:

192.168.244.129/eth1

192.168.244.131/lo:0

Director setup

vim /usr/local/sbin/lvs_dr.sh

#!/bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

ipv=/sbin/ipvsadm

vip=192.168.244.131

rs1=192.168.244.128

rs2=192.168.244.129

ifconfig eth1:1 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev eth1:1

$ipv -C

$ipv -A -t $vip:80 -s rr

$ipv -a -t $vip:80 -r $rs1:80 -g -w 1

$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

bash /usr/local/sbin/lvs_dr.sh
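On the director, the VIP should now be bound to eth1:1 and the IPVS table should list both real servers in Route (DR) mode; a quick check (sketch):

ifconfig eth1:1   # should show 192.168.244.131
ipvsadm -Ln       # Forward column should read Route for both real servers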

 

Setup on both RS (cs01, cs02)

vim /usr/local/sbin/lvs_dr_rs.sh

#!/bin/bash

vip=192.168.244.131

ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev lo:0

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

bash /usr/local/sbin/lvs_dr_rs.sh
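On each RS, confirm the VIP is on lo:0 and the ARP suppression took effect; otherwise the RS will answer ARP for the VIP and break DR (a sketch):

ifconfig lo:0                                                         # should show 192.168.244.131
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce    # expect 1 and 2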

 

 

Test from a Windows client

http://192.168.244.131

 

LVS combined with keepalived

 

# IP and hostname plan:

vip:

192.168.211.139

cs01:

192.168.211.137/eth0

cs02:

192.168.211.138/eth0

 

# Run on both hosts:

yum install -y epel-release

yum install -y nginx

yum install -y keepalived

echo 1 > /proc/sys/net/ipv4/ip_forward

/etc/init.d/nginx start

 

vim /usr/local/sbin/lvs_dr_rs.sh

#!/bin/bash

vip=192.168.211.139

ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up

route add -host $vip dev lo:0

echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

 

# nginx on cs01

echo "keep1rs1" > /usr/share/nginx/html/index.html

# nginx on cs02

echo "keep2rs2" > /usr/share/nginx/html/index.html

 

# keepalived on cs01

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

#global_defs {

# notification_email {

## acassen@firewall.loc

# failover@firewall.loc

# sysadmin@firewall.loc

# }

# notification_email_from Alexandre.Cassen@firewall.loc

# smtp_server 192.168.200.1

# smtp_connect_timeout 30

# router_id LVS_DEVEL

#}

 

vrrp_instance VI_1 {

state MASTER

interface eth0

virtual_router_id 51

priority 100

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.211.139

 

}

}

 

virtual_server 192.168.211.139 80 {

"/etc/keepalived/keepalived.conf" 57L, 1118C 28,5 Top

 

virtual_server 192.168.211.139 80 {

delay_loop 6

lb_algo wlc

lb_kind DR

# nat_mask 255.255.255.0

persistence_timeout 60

protocol TCP

 

real_server 192.168.211.137 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

real_server 192.168.211.138 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

}
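The TCP_CHECK blocks make keepalived probe port 80 on each real server; once keepalived is running (started at the end of this section), a real server whose nginx stops answering is removed from the IPVS table until the check passes again. This is easy to observe (a sketch, run on the node currently holding the VIP):

ipvsadm -Ln            # both real servers listed
service nginx stop     # on one of the two hosts
ipvsadm -Ln            # after the check fails, only one real server remains
service nginx start    # on the same host; it is re-added shortly after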

 

 

# keepalived on cs02:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

 

#global_defs {

# notification_email {

## acassen@firewall.loc

# failover@firewall.loc

# sysadmin@firewall.loc

# }

# notification_email_from Alexandre.Cassen@firewall.loc

# smtp_server 192.168.200.1

# smtp_connect_timeout 30

# router_id LVS_DEVEL

#}

 

vrrp_instance VI_1 {

state BACKUP

interface eth0

virtual_router_id 51

priority 90

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.211.139

 

}

}

 

 


 

 

virtual_server 192.168.211.139 80 {

delay_loop 6

lb_algo wlc

lb_kind DR

# nat_mask 255.255.255.0

persistence_timeout 60

protocol TCP

 

real_server 192.168.211.137 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

real_server 192.168.211.138 80 {

weight 100

TCP_CHECK {

connect_timeout 10

nb_get_retry 3

delay_before_retry 3

connect_port 80

}

}

}

 

Run the LVS RS script on both RS:

bash /usr/local/sbin/lvs_dr_rs.sh

 

Start keepalived on both RS:

/etc/init.d/keepalived start

 

Test by accessing the VIP from a client.
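A possible end-to-end check, with the MASTER/BACKUP roles as configured above (a sketch):

# from a client: requests to the VIP should be answered (keep1rs1 / keep2rs2)
curl http://192.168.211.139/

# on cs01 (MASTER): the VIP should be on eth0
ip addr show eth0

# stop keepalived on cs01, then confirm cs02 picks up the VIP and traffic continues
/etc/init.d/keepalived stop    # on cs01
ip addr show eth0              # on cs02: 192.168.211.139 should appear here
curl http://192.168.211.139/   # from the client again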
