KingbaseES RAC Deployment Case: Single-Node Deployment

Case description:
Deploying KingbaseES RAC on a single node.
Applicable version:
KingbaseES V008R006C008M030B0010

Operating system:

[root@node201 KingbaseHA]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Cluster node information:

[root@node203 ~]# cat /etc/hosts
.......
192.168.1.203 node203

I. System Environment Preparation
Reference post: https://www.cnblogs.com/tiany1224/p/18342848
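The referenced post covers the OS preparation in detail. As a minimal sketch of the usual prerequisites on CentOS 7 (firewall and SELinux disabled for a test environment, hostname resolution in place; adjust to your own security policy):

# Disable the firewall and SELinux (typical for a test environment)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Make sure the node resolves its own name
echo "192.168.1.203 node203" >> /etc/hosts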

The disk environment is as follows:

[root@node203 ~]# fdisk -l
Disk /dev/sdh: 134 MB, 134217728 bytes, 262144 sectors
......
Disk /dev/sdi: 4294 MB, 4294967296 bytes, 8388608 sectors
......

II. Deploying and Configuring RAC

1. Install the database software

[root@node203 soft]# mount -o loop KingbaseES_V008R006C008M030B0010_Lin64_install.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only

[kingbase@node203 mnt]$ sh setup.sh
Now launch installer...
Choose the server type
----------------------
Please choose the server type :
  ->1- default
    2- rac

  Default Install Folder: /opt/Kingbase/ES/V8
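
Note that root mounts the ISO while setup.sh is run by the kingbase user. If the kingbase account does not exist yet, create it first; a typical sketch, with names matching this case:

# Create the kingbase owner account (skip if it already exists)
groupadd kingbase
useradd -m -g kingbase kingbase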

2. Create the cluster deployment directory
As shown below, enter the database software deployment directory and run the clusterware script, which by default creates the "/opt/KingbaseHA" directory:

[root@node203 script]# pwd
/opt/Kingbase/ES/V8/install/script
[root@node203 script]# ls -lh
total 32K
-rwxr-xr-x 1 kingbase kingbase  321 Jul 18 14:17 consoleCloud-uninstall.sh
-rwxr-x--- 1 kingbase kingbase 3.6K Jul 18 14:17 initcluster.sh
-rwxr-x--- 1 kingbase kingbase  289 Jul 18 14:17 javatools.sh
-rwxr-xr-x 1 kingbase kingbase  553 Jul 18 14:17 rootDeployClusterware.sh
-rwxr-x--- 1 kingbase kingbase  767 Jul 18 14:17 root.sh
-rwxr-x--- 1 kingbase kingbase  627 Jul 18 14:17 rootuninstall.sh
-rwxr-x--- 1 kingbase kingbase 3.7K Jul 18 14:17 startupcfg.sh
-rwxr-x--- 1 kingbase kingbase  252 Jul 18 14:17 stopserver.sh

# Run the script (it fails: the @@INSTALL_DIR@@ placeholder was never substituted)
[root@node203 script]# sh rootDeployClusterware.sh
cp: cannot stat ‘@@INSTALL_DIR@@/KingbaseHA/*’: No such file or directory

# Fix the script variable: comment out the placeholder and hard-code the actual install path
[root@node203 V8]# head  install/script/rootDeployClusterware.sh
#!/bin/sh
# copy KingbaseHA to /opt/KingbaseHA
ROOT_UID=0
#INSTALLDIR=@@INSTALL_DIR@@
INSTALLDIR=/opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010
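
Equivalently, the placeholder could be patched in one step with sed (a sketch; the path is the install location used in this case):

sed -i 's|@@INSTALL_DIR@@|/opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010|' \
    /opt/Kingbase/ES/V8/install/script/rootDeployClusterware.sh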

# Run the script again (creates /opt/KingbaseHA)
[root@node203 V8]# sh install/script/rootDeployClusterware.sh
/opt/KingbaseHA has existed. Do you want to override it?(y/n)y
y
[root@node203 V8]# ls -lh /opt/KingbaseHA/
total 64K
-rw-r--r--  1 root root 3.8K Jul 30 17:38 cluster_manager.conf
-rwxr-xr-x  1 root root  54K Jul 30 17:38 cluster_manager.sh
drwxr-xr-x  9 root root  121 Jul 30 17:38 corosync
drwxr-xr-x  7 root root  122 Jul 30 17:38 corosync-qdevice
drwxr-xr-x  8 root root   68 Jul 30 17:38 crmsh
drwxr-xr-x  7 root root   65 Jul 30 17:38 dlm-dlm
drwxr-xr-x  5 root root   39 Jul 30 17:38 fence_agents
drwxr-xr-x  5 root root   60 Jul 30 17:38 gfs2
drwxr-xr-x  6 root root   53 Jul 30 17:38 gfs2-utils
drwxr-xr-x  5 root root   39 Jul 30 17:38 ipmi_tool
drwxr-xr-x  7 root root   84 Jul 30 17:38 kingbasefs
drwxr-xr-x  5 root root   42 Jul 30 17:38 kronosnet
drwxr-xr-x  2 root root 4.0K Jul 30 17:38 lib
drwxr-xr-x  2 root root   28 Jul 30 17:38 lib64
drwxr-xr-x  7 root root   63 Jul 30 17:38 libqb
drwxr-xr-x 10 root root  136 Jul 30 17:38 pacemaker
drwxr-xr-x  6 root root   52 Jul 30 17:38 python2.7

3. Configure the cluster deployment

[root@node203 KingbaseHA]# cat cluster_manager.conf
######################################## Basic Configuration ####################################
################# install #################
##cluster node information
cluster_name=krac
node_name=(node203)
node_ip=(192.168.1.203)

##voting disk, used for qdevice
enable_qdisk=1
votingdisk=/dev/sdh                 # quorum voting disk

##shared data disk, used for gfs2
sharedata_dir=/sharedata/data_gfs2
sharedata_disk=/dev/sdi             # shared storage for cluster data

################# common ################
##cluster manager install dir
install_dir=/opt/KingbaseHA
env_bash_file=/root/.bashrc

##pacemaker
pacemaker_daemon_group=haclient
pacemaker_daemon_user=hacluster

##kingbase owner and install_dir
kingbaseowner=kingbase
kingbasegroup=kingbase
kingbase_install_dir=/opt/Kingbase/ES/V8/Server

################# crm_dsn #################
##crm_dsn, used for configuring data source connection string information.
database="test"
username="system"
# If logging in to the database without a password,
# the password item can be omitted.
password="123456"
# Do not add '-D' parameter to 'initdb_options'.
initdb_options="-A trust -U $username"
......
######################################## For KingbaseES RAC ########################################
##if install KingbaseES RAC, set 'install_rac' to 1,else set it to 0
install_rac=1

##KingbaseES RAC params
rac_port=55321
rac_lms_port=53444
rac_lms_count=7
###################
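
Before initializing the disks, it is worth confirming that the two devices named above are visible on the node (standard Linux commands, not part of cluster_manager.sh):

# Both devices should be listed: the ~128M quorum disk and the 4G data disk
lsblk /dev/sdh /dev/sdi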

4. Initialize the voting disk

[root@node203 KingbaseHA]# ./cluster_manager.sh --qdisk_init
qdisk init start
Writing new quorum disk label 'krac' to /dev/sdh.
WARNING: About to destroy all data on /dev/sdh; proceed? (Y/N):
y
/dev/block/8:112:
/dev/disk/by-id/ata-VBOX_HARDDISK_VB049744dd-80024550:
/dev/disk/by-path/pci-0000:00:0d.0-ata-8.0:
/dev/sdh:
        Magic:                eb7a62c2
        Label:                krac
        Created:              Tue Aug 20 10:44:53 2024
        Host:                 node203
        Kernel Sector Size:   512
        Recorded Sector Size: 512

qdisk init success

5. Initialize the data disk

[root@node203 KingbaseHA]# ./cluster_manager.sh --cluster_disk_init
rac disk init start
This will destroy any data on /dev/sdi
Are you sure you want to proceed? (Y/N): y
Adding journals: Done
Building resource groups: Done
Creating quota file: Done
Writing superblock and syncing: Done
Device:                    /dev/sdi
Block size:                4096
Device size:               4.00 GB (1048576 blocks)
Filesystem size:           4.00 GB (1048575 blocks)
Journals:                  2
Journal size:              32MB
Resource groups:           18
Locking protocol:          "lock_dlm"
Lock table:                "krac:gfs2"
UUID:                      dc81ac11-8871-44b1-829a-e2df8573c5d5
rac disk init success
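
The new filesystem can be checked with standard tooling; blkid should report TYPE="gfs2" together with the UUID printed above:

# Expect the UUID above and TYPE="gfs2"
blkid /dev/sdi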

6. Initialize the base components (all nodes)
Run the following command on each node to initialize all the base components, such as corosync, pacemaker, and corosync-qdevice.

[root@node203 KingbaseHA]# ./cluster_manager.sh --base_configure_init
init kernel soft watchdog start
init kernel soft watchdog success
config host start
config host success
add env varaible in /root/.bashrc
add env variable success
config corosync.conf start
config corosync.conf success
Starting Corosync Cluster Engine (corosync): [  OK  ]
add pacemaker daemon user start
groupadd: group 'haclient' already exists
useradd: user 'hacluster' already exists
add pacemaker daemon user success
config pacemaker success
Starting Pacemaker Cluster Manager[  OK  ]
config qdevice start
config qdevice success
Starting Qdisk Fenced daemon (qdisk-fenced): [  OK  ]
Starting Corosync Qdevice daemon (corosync-qdevice): [  OK  ]
Please note the configuration: superuser(system) and port(36321) for database(test) of resource(DB0)
Please note the configuration: superuser(system) and port(36321) for database(test) of resource(DB1)
config kingbase rac start
/opt/Kingbase/ES/V8/Server/log already exist
config kingbase rac success
add_udev_rule start
add_udev_rule success
insmod dlm.ko success
check and mknod for dlm start
check and mknod for dlm success

Apply the environment variables:
[root@node203 KingbaseHA]# source /root/.bashrc
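
After sourcing the file, the cluster tooling shipped under /opt/KingbaseHA should be on the PATH; a quick sanity check:

# Both should resolve to binaries under /opt/KingbaseHA
which crm
which corosync-quorumtool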

Check the cluster resource status:

[root@node203 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node203 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Aug 20 10:50:49 2024
  * Last change:  Tue Aug 20 10:48:41 2024 by hacluster via crmd on node203
  * 1 node configured
  * 0 resource instances configured

Node List:
  * Online: [ node203 ]

Full List of Resources:
  * No resources

7. Initialize the gfs2-related resources (all nodes)
As shown below, update the system's gfs2 kernel module:

[root@node203 KingbaseHA]# ./cluster_manager.sh --init_gfs2
init gfs2 start
current OS kernel version does not support updating gfs2, please confirm whether to continue? (Y/N):
y
init the OS native gfs2 success
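
Whether the native gfs2 module actually loads can be verified directly (standard kernel tooling; the stock CentOS 7.9 module is assumed):

modprobe gfs2
lsmod | grep gfs2    # gfs2 should appear in the module list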

8. Configure the cluster resources (fence, dlm, and gfs2)

[root@node203 KingbaseHA]# ./cluster_manager.sh --config_gfs2_resource
config dlm and gfs2 resource start
dc81ac11-8871-44b1-829a-e2df8573c5d5

config dlm and gfs2 resource success

Check the cluster resource status:
As shown below, the dlm and gfs2 resources have been added to the cluster:

[root@node203 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node203 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Aug 20 10:51:53 2024
  * Last change:  Tue Aug 20 10:51:52 2024 by root via cibadmin on node203
  * 1 node configured
  * 3 resource instances configured

Node List:
  * Online: [ node203 ]

Full List of Resources:
  * fence_qdisk_0       (stonith:fence_qdisk):   Started node203
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node203 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * gfs2      (ocf::heartbeat:Filesystem):     Starting node203
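
Once the gfs2 resource finishes starting, the shared filesystem should be mounted at /sharedata/data_gfs2; a quick check with standard commands:

mount | grep gfs2
df -h /sharedata/data_gfs2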

View the cluster resource configuration:

[root@node203 KingbaseHA]# crm config show
node 1: node203
primitive dlm ocf:pacemaker:controld \
        params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" \
        op start interval=0 \
        op stop interval=0 \
        op monitor interval=60 timeout=60
primitive fence_qdisk_0 stonith:fence_qdisk \
        params qdisk_path="/dev/sdh" qdisk_fence_tool="/opt/KingbaseHA/corosync-qdevice/sbin/qdisk-fence-tool" pcmk_host_list=node203 \
        op monitor interval=60s \
        meta failure-timeout=5min
primitive gfs2 Filesystem \
        params device="-U dc81ac11-8871-44b1-829a-e2df8573c5d5" directory="/sharedata/data_gfs2" fstype=gfs2 \
        op start interval=0 timeout=60 \
        op stop interval=0 timeout=60 \
        op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
        meta failure-timeout=5min
clone clone-dlm dlm \
        meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
        meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.3-4b1f869f0f \
        cluster-infrastructure=corosync \
        cluster-name=krac

9. Create the RAC database instance

[root@node203 KingbaseHA]# ./cluster_manager.sh --init_rac
init KingbaseES RAC start
create_rac_share_dir start
create_rac_share_dir success
.......
Success. You can now start the database server with:
    ./sys_ctl -D /sharedata/data_gfs2/kingbase/data -l logfile start
init KingbaseES RAC success

10. Configure the database (DB) resource

[root@node203 KingbaseHA]# ./cluster_manager.sh --config_rac_resource
crm configure DB resource start
crm configure DB resource end

View the cluster resource configuration:
As shown below, the DB database resource has been added to the configuration:

[root@node203 KingbaseHA]# crm config show
node 1: node203
primitive DB ocf:kingbase:kingbase \
        params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" kb_data="/sharedata/data_gfs2/kingbase/data" kb_dba=kingbase kb_host=0.0.0.0 kb_user=system kb_port=55321 kb_db=template1 logfile="/opt/Kingbase/ES/V8/Server/log/kingbase1.log" \
        op start interval=0 timeout=120 \
        op stop interval=0 timeout=120 \
        op monitor interval=9s timeout=30 on-fail=stop \
        meta failure-timeout=5min
primitive dlm ocf:pacemaker:controld \
        params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" \
        op start interval=0 \
        op stop interval=0 \
        op monitor interval=60 timeout=60
primitive fence_qdisk_0 stonith:fence_qdisk \
        params qdisk_path="/dev/sdh" qdisk_fence_tool="/opt/KingbaseHA/corosync-qdevice/sbin/qdisk-fence-tool" pcmk_host_list=node203 \
        op monitor interval=60s \
        meta failure-timeout=5min
primitive gfs2 Filesystem \
        params device="-U dc81ac11-8871-44b1-829a-e2df8573c5d5" directory="/sharedata/data_gfs2" fstype=gfs2 \
        op start interval=0 timeout=60 \
        op stop interval=0 timeout=60 \
        op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
        meta failure-timeout=5min
clone clone-DB DB \
        meta interleave=true target-role=Started
clone clone-dlm dlm \
        meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
        meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
order cluster-order2 clone-dlm clone-gfs2 clone-DB
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=2.0.3-4b1f869f0f \
        cluster-infrastructure=corosync \
        cluster-name=krac \
        load-threshold="0%"

III. Connecting to and Accessing RAC

1. Start the database service via the cluster

[root@node203 KingbaseHA]# crm resource stop clone-DB
[root@node203 KingbaseHA]# crm resource start clone-DB
[root@node203 KingbaseHA]# crm resource status clone-DB
resource clone-DB is running on: node203

2. Start the database manually

# As shown below, startup fails because the license file cannot be found
[kingbase@node203 bin]$ ./sys_ctl start -D /sharedata/data_gfs2/kingbase/data
waiting for server to start....FATAL:  XX000: license.dat path is dir or file does not exist.
LOCATION:  KesMasterMain, master.c:1002
 stopped waiting
sys_ctl: could not start server
Examine the log output.

[kingbase@node203 data]$ ls -lh /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat
-rw-r--r-- 1 kingbase kingbase 2.9K Aug 19 18:02 /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat

# Copy the license file into the bin directory, then start the database service
[kingbase@node203 bin]$ cp /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/license.dat ./
[kingbase@node203 bin]$ ./sys_ctl start -D /sharedata/data_gfs2/kingbase/data
waiting for server to start....2024-08-20 11:29:19.976 CST [3354] LOG:  please configure a valid archive command for WAL file archiving as soon as possible
2024-08-20 11:29:19.997 CST [3354] LOG:  sepapower extension initialized
......
server started

[kingbase@node203 bin]$ netstat -antlp |grep 55321
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:55321           0.0.0.0:*               LISTEN      3354/kingbase
tcp6       0      0 :::55321                :::*                    LISTEN      3354/kingbase

Check the cluster resources:
As shown below, the DB database resource is in the Started state:

[root@node203 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node203 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Tue Aug 20 11:34:56 2024
  * Last change:  Tue Aug 20 10:58:11 2024 by root via cibadmin on node203
  * 1 node configured
  * 4 resource instances configured

Node List:
  * Online: [ node203 ]

Full List of Resources:
  * fence_qdisk_0       (stonith:fence_qdisk):   Started node203
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node203 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node203 ]
  * Clone Set: clone-DB [DB]:
    * Started: [ node203 ]

3. Connect to the database

[kingbase@node203 bin]$ ./ksql -U system test -p 55321
Type "help" for help.

test=# \l
                              List of databases
   Name    | Owner  | Encoding |  Collate   |   Ctype    | Access privileges
-----------+--------+----------+------------+------------+-------------------
 kingbase  | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 |
 security  | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 |
 template0 | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 | =c/system        +
           |        |          |            |            | system=CTc/system
 template1 | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 | =c/system        +
           |        |          |            |            | system=CTc/system
 test      | system | UTF8     | zh_CN.utf8 | zh_CN.utf8 |
(5 rows)
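
As a simple smoke test (illustrative SQL, not part of the original session), create and query a table:

test=# create table t1(id int, name varchar(20));
CREATE TABLE
test=# insert into t1 values (1,'rac');
INSERT 0 1
test=# select * from t1;
 id | name
----+------
  1 | rac
(1 row)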

--- As shown above, the single-node deployment of KingbaseES RAC is complete.
