OpenStack controller HA test environment setup notes (4): configuring the MySQL database cluster

Before the main content begins: I have already added a new node, controller1 (IP address 10.0.0.14), to the cluster.

Install the packages on all nodes:
# yum install -y mariadb-galera-server xinetd rsync


Initialize the database on node 1:
# systemctl start mariadb.service
# mysql_secure_installation
# mysql -u root -p -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '123456';"
# systemctl stop mariadb.service
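
If you want to verify the clustercheck login, it can be done while mariadb is still running (a quick sketch; the password 123456 is the one from the CREATE USER statement above):

```shell
# Confirm the clustercheck user can authenticate locally;
# run this before stopping mariadb.service.
mysql -u clustercheck -p123456 -h localhost -e "SELECT 1;"
```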


Configure MariaDB and Galera on all nodes:
# vi /etc/my.cnf.d/galera.cnf
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=controller1
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://controller1,controller2,controller3"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_sst_auth=root:
Set "bind-address=" to this host's own name as listed in /etc/hosts.
Note: with "bind-address=0.0.0.0", mysqld listens on port 3306 on every local IP, including the VIP. That would prevent HAProxy from later binding to port 3306 on the VIP.
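
Once mysqld is running (later in this procedure), a quick way to confirm the bind address took effect is to inspect the listening sockets (a sketch, assuming controller1 resolves to 10.0.0.14 as above):

```shell
# The 3306 listener should show the node's own IP (e.g. 10.0.0.14:3306),
# not 0.0.0.0:3306 or the VIP.
ss -tlnp | grep 3306
```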


Run the following command on node 1 to bootstrap the cluster:
# sudo -u mysql /usr/libexec/mysqld --wsrep-cluster-address='gcomm://' &
Make a note of the process IDs printed on screen; they are needed later.
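
Before joining the other nodes, you can check that the bootstrap worked; a freshly bootstrapped cluster should report a single member (a sketch):

```shell
# On node 1, the cluster size should be 1 at this point,
# since no other node has joined yet.
mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
```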


On nodes 2 and 3:
# systemctl start mariadb.service
# systemctl status -l mariadb.service


Confirm the number of cluster members from any node:
# mysql -u root -e 'SELECT VARIABLE_VALUE as "wsrep_cluster_size" FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME="wsrep_cluster_size"'
+--------------------+
| wsrep_cluster_size |
+--------------------+
|                  3 |
+--------------------+


Once everything above looks normal, restart the service on node 1:
# kill <mysql PIDs>
# systemctl start mariadb.service
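
Beyond wsrep_cluster_size, a few other status variables give a fuller picture of node health (a sketch, runnable on any node):

```shell
# wsrep_local_state_comment should be "Synced",
# wsrep_connected and wsrep_ready should both be ON,
# wsrep_cluster_status should be "Primary".
mysql -u root -e "SHOW STATUS WHERE Variable_name IN
  ('wsrep_local_state_comment','wsrep_connected','wsrep_ready','wsrep_cluster_status');"
```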


On all nodes, create the file with the health-check login credentials:
# vi /etc/sysconfig/clustercheck
MYSQL_USERNAME=clustercheck
MYSQL_PASSWORD=123456
MYSQL_HOST=localhost
MYSQL_PORT=3306


On all nodes, create the health-check service that HAProxy will call:
# vi /etc/xinetd.d/galera-monitor
service galera-monitor
{
port = 9200
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
groups = yes
server = /usr/bin/clustercheck
type = UNLISTED
per_source = UNLIMITED
log_on_success =
log_on_failure = HOST
flags = REUSE
}
The status check listens on port 9200 here.


Start the xinetd service on all nodes; clustercheck depends on it:
# systemctl daemon-reload
# systemctl enable xinetd
# systemctl start xinetd


Test the check:
# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

Galera cluster node is synced.

Or:
# telnet 10.0.0.14 9200
Trying 10.0.0.14...
Connected to 10.0.0.14.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

Galera cluster node is synced.
Connection closed by foreign host.
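
If telnet is not installed, curl can exercise the same endpoint (a sketch; clustercheck writes a plain HTTP response for any incoming connection):

```shell
# -i prints the HTTP status line and headers along with the body.
curl -i http://10.0.0.14:9200/
```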


Configure haproxy.cfg on all nodes:
# vi /etc/haproxy/haproxy.cfg
global
chroot /var/lib/haproxy
daemon
group haproxy
maxconn 4000
pidfile /var/run/haproxy.pid
user haproxy

defaults
log global
maxconn 4000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s

listen galera_cluster
bind 10.0.0.10:3306
balance source
option httpchk
server controller1 10.0.0.14:3306 check port 9200 inter 2000 rise 2 fall 5
server controller2 10.0.0.12:3306 check port 9200 inter 2000 rise 2 fall 5
server controller3 10.0.0.13:3306 check port 9200 inter 2000 rise 2 fall 5
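
After HAProxy is up on the node holding the VIP, a connection through the VIP confirms the whole chain; with "balance source", a given client should keep landing on the same backend (a sketch; 10.0.0.10 is the VIP from the listen block above):

```shell
# The reported hostname shows which backend served the connection.
mysql -h 10.0.0.10 -P 3306 -u root -p -e "SHOW VARIABLES LIKE 'hostname';"
```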


Check which node the resource currently runs on:
# crm_mon


Restart the haproxy service on the node where the resource is running:
# systemctl restart haproxy.service
# systemctl status -l haproxy.service

posted @ 2015-12-10 16:43  Endoresu