DM Database Deployment Case: DMDSC Shared-Storage Cluster Deployment

Case description:
Deploy a DMDSC shared-storage cluster in a Linux environment.

Applicable version:
DM8

Operating system version:

[root@node201 KingbaseHA]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Cluster architecture:
As shown below, node201 and node202 are the cluster nodes, and node203 acts as the iSCSI storage server:

Node information:

[root@node201 KingbaseHA]# vi /etc/hosts
192.168.1.201 node201
192.168.1.202 node202
192.168.1.203 node203    iscsi_Srv

I. Building the iSCSI shared storage
As shown below, create the shared storage on the iSCSI server (sdd and sde will be used as the DCR and VOTE disks, sdf as the redo log disk, and sdg as the data disk):

1. Configure shared storage on the iSCSI server

# Create the shared disks
[root@node203 ~]# targetcli
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> /backstores/block create idisk3 /dev/sdd
Created block storage object idisk3 using /dev/sdd.
/> /backstores/block create idisk4 /dev/sde
Created block storage object idisk4 using /dev/sde.
/> /backstores/block create idisk5 /dev/sdf
Created block storage object idisk5 using /dev/sdf.
/> /backstores/block create idisk6 /dev/sdg
Created block storage object idisk6 using /dev/sdg.

/> ls
o- / ........................................................................................... [...]
  o- backstores ................................................................................ [...]
  | o- block .................................................................... [Storage Objects: 6]
  | | o- idisk1 ............................................ [/dev/sdb (10.7GiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk2 ........................................... [/dev/sdc (512.0MiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk3 ......................................... [/dev/sdd (128.0MiB) write-thru deactivated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk4 ......................................... [/dev/sde (128.0MiB) write-thru deactivated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk5 ........................................... [/dev/sdf (2.2GiB) write-thru deactivated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk6 ........................................... [/dev/sdg (8.3GiB) write-thru deactivated]
  | |   o- alua ..................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................... [Storage Objects: 0]
  | o- pscsi .................................................................... [Storage Objects: 0]
  | o- ramdisk .................................................................. [Storage Objects: 0]
  o- iscsi .............................................................................. [Targets: 1]
  | o- iqn.2024-08.pip.cc:server ........................................................... [TPGs: 1]
  |   o- tpg1 ................................................................. [no-gen-acls, no-auth]
  |     o- acls ............................................................................ [ACLs: 1]
  |     | o- iqn.2024-08.pip.cc:client .............................................. [Mapped LUNs: 2]
  |     |   o- mapped_lun0 .................................................. [lun0 block/idisk1 (rw)]
  |     |   o- mapped_lun1 .................................................. [lun1 block/idisk2 (rw)]
  |     o- luns ............................................................................ [LUNs: 2]
  |     | o- lun0 ....................................... [block/idisk1 (/dev/sdb) (default_tg_pt_gp)]
  |     | o- lun1 ....................................... [block/idisk2 (/dev/sdc) (default_tg_pt_gp)]
  |     o- portals ...................................................................... [Portals: 1]
  |       o- 192.168.1.203:3260 ................................................................. [OK]
  o- loopback ........................................................................... [Targets: 0]
/>  cd iscsi/iqn.2024-08.pip.cc:server/tpg1/

# Create the LUNs
/iscsi/iqn.20...c:server/tpg1> luns/ create /backstores/block/idisk3
Created LUN 2.
Created LUN 2->2 mapping in node ACL iqn.2024-08.pip.cc:client
/iscsi/iqn.20...c:server/tpg1> luns/ create /backstores/block/idisk4
Created LUN 3.
Created LUN 3->3 mapping in node ACL iqn.2024-08.pip.cc:client
/iscsi/iqn.20...c:server/tpg1> luns/ create /backstores/block/idisk5
Created LUN 4.
Created LUN 4->4 mapping in node ACL iqn.2024-08.pip.cc:client
/iscsi/iqn.20...c:server/tpg1> luns/ create /backstores/block/idisk6
Created LUN 5.
Created LUN 5->5 mapping in node ACL iqn.2024-08.pip.cc:client

/iscsi/iqn.20...c:server/tpg1> ls
o- tpg1 ....................................................................... [no-gen-acls, no-auth]
  o- acls .................................................................................. [ACLs: 1]
  | o- iqn.2024-08.pip.cc:client .................................................... [Mapped LUNs: 6]
  |   o- mapped_lun0 ........................................................ [lun0 block/idisk1 (rw)]
  |   o- mapped_lun1 ........................................................ [lun1 block/idisk2 (rw)]
  |   o- mapped_lun2 ........................................................ [lun2 block/idisk3 (rw)]
  |   o- mapped_lun3 ........................................................ [lun3 block/idisk4 (rw)]
  |   o- mapped_lun4 ........................................................ [lun4 block/idisk5 (rw)]
  |   o- mapped_lun5 ........................................................ [lun5 block/idisk6 (rw)]
  o- luns .................................................................................. [LUNs: 6]
  | o- lun0 ............................................. [block/idisk1 (/dev/sdb) (default_tg_pt_gp)]
  | o- lun1 ............................................. [block/idisk2 (/dev/sdc) (default_tg_pt_gp)]
  | o- lun2 ............................................. [block/idisk3 (/dev/sdd) (default_tg_pt_gp)]
  | o- lun3 ............................................. [block/idisk4 (/dev/sde) (default_tg_pt_gp)]
  | o- lun4 ............................................. [block/idisk5 (/dev/sdf) (default_tg_pt_gp)]
  | o- lun5 ............................................. [block/idisk6 (/dev/sdg) (default_tg_pt_gp)]
  o- portals ............................................................................ [Portals: 1]
    o- 192.168.1.203:3260 ....................................................................... [OK]

# View the client ACL configuration
/iscsi/iqn.20...c:server/tpg1> cd acls/iqn.2024-08.pip.cc:client/
/iscsi/iqn.20...pip.cc:client> info
chap_password: 123456
chap_userid: root
wwns:
iqn.2024-08.pip.cc:client

/iscsi/iqn.20...pip.cc:client> ls /
o- / ........................................................................................... [...]
  o- backstores ................................................................................ [...]
  | o- block .................................................................... [Storage Objects: 6]
  | | o- idisk1 ............................................ [/dev/sdb (10.7GiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk2 ........................................... [/dev/sdc (512.0MiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk3 ........................................... [/dev/sdd (128.0MiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk4 ........................................... [/dev/sde (128.0MiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk5 ............................................. [/dev/sdf (2.2GiB) write-thru activated]
  | | | o- alua ..................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | | o- idisk6 ............................................. [/dev/sdg (8.3GiB) write-thru activated]
  | |   o- alua ..................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................... [Storage Objects: 0]
  | o- pscsi .................................................................... [Storage Objects: 0]
  | o- ramdisk .................................................................. [Storage Objects: 0]
  o- iscsi .............................................................................. [Targets: 1]
  | o- iqn.2024-08.pip.cc:server ........................................................... [TPGs: 1]
  |   o- tpg1 ................................................................. [no-gen-acls, no-auth]
  |     o- acls ............................................................................ [ACLs: 1]
  |     | o- iqn.2024-08.pip.cc:client .............................................. [Mapped LUNs: 6]
  |     |   o- mapped_lun0 .................................................. [lun0 block/idisk1 (rw)]
  |     |   o- mapped_lun1 .................................................. [lun1 block/idisk2 (rw)]
  |     |   o- mapped_lun2 .................................................. [lun2 block/idisk3 (rw)]
  |     |   o- mapped_lun3 .................................................. [lun3 block/idisk4 (rw)]
  |     |   o- mapped_lun4 .................................................. [lun4 block/idisk5 (rw)]
  |     |   o- mapped_lun5 .................................................. [lun5 block/idisk6 (rw)]
  |     o- luns ............................................................................ [LUNs: 6]
  |     | o- lun0 ....................................... [block/idisk1 (/dev/sdb) (default_tg_pt_gp)]
  |     | o- lun1 ....................................... [block/idisk2 (/dev/sdc) (default_tg_pt_gp)]
  |     | o- lun2 ....................................... [block/idisk3 (/dev/sdd) (default_tg_pt_gp)]
  |     | o- lun3 ....................................... [block/idisk4 (/dev/sde) (default_tg_pt_gp)]
  |     | o- lun4 ....................................... [block/idisk5 (/dev/sdf) (default_tg_pt_gp)]
  |     | o- lun5 ....................................... [block/idisk6 (/dev/sdg) (default_tg_pt_gp)]
  |     o- portals ...................................................................... [Portals: 1]
  |       o- 192.168.1.203:3260 ................................................................. [OK]
  o- loopback ........................................................................... [Targets: 0]
/iscsi/iqn.20...pip.cc:client>

# Save the configuration
/iscsi/iqn.20...pip.cc:client> cd /
/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
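To make the target configuration persistent across reboots, the target service should also be enabled so that /etc/target/saveconfig.json is restored at boot. A minimal sketch, assuming the standard targetcli/target.service packaging on CentOS 7:

[root@node203 ~]# systemctl enable target    # restore saveconfig.json at boot
[root@node203 ~]# systemctl start target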

2. Access the shared storage from the clients (all nodes)
As shown below, each cluster node connects to the shared storage as an iSCSI client, after which the shared disks can be accessed like local disks:

[root@node201 ~]# /usr/sbin/iscsiadm -m node -T iqn.2024-08.pip.cc:server -p 192.168.1.203 --login
Logging in to [iface: default, target: iqn.2024-08.pip.cc:server, portal: 192.168.1.203,3260] (multiple)
Login to [iface: default, target: iqn.2024-08.pip.cc:server, portal: 192.168.1.203,3260] successful.

[root@node201 ~]# fdisk -l
Disk /dev/sdd: 134 MB, 134217728 bytes, 262144 sectors
Disk /dev/sde: 134 MB, 134217728 bytes, 262144 sectors
Disk /dev/sdf: 2343 MB, 2343757824 bytes, 4577652 sectors
Disk /dev/sdg: 8867 MB, 8867020800 bytes, 17318400 sectors
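Note that the login above only succeeds if the client initiator name matches the ACL defined on the target and the target has been discovered first. A sketch of these client-side prerequisites, assuming the iscsi-initiator-utils package is installed on every node:

# set the initiator name to match the ACL iqn.2024-08.pip.cc:client
[root@node201 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2024-08.pip.cc:client
[root@node201 ~]# systemctl restart iscsid

# discover the target exported by node203
[root@node201 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.203

# optionally log in automatically at boot
[root@node201 ~]# iscsiadm -m node -T iqn.2024-08.pip.cc:server -p 192.168.1.203 --op update -n node.startup -v automatic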

II. Configuring the cluster environment
Reference document: https://eco.dameng.com/document/dm/zh-cn/pm/dsc-build.html

1. Cluster planning
The cluster instance configuration plan is shown below.

Of the 4 shared disks, the 2 smaller disks (128 MB each, sdd and sde) are used to create the DCR and VOTE disks; the 2 larger disks (sdf, about 2 GB, and sdg, about 8 GB) are used to create the ASM disk groups (data disk group DMDATA and online redo log disk group DMLOG).
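Summarized from the configuration files used in section IV below, the instance plan is:

  Node     IP               CSS instance   ASM instance   DB instance
  node201  192.168.1.201    CSS0 (9836)    ASM0 (5836)    DSC01 (6636)
  node202  192.168.1.202    CSS1 (9837)    ASM1 (5837)    DSC02 (6637)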

2. Configure the OS user (all nodes)

[root@node201 ~]# groupadd -g 2001 dmdba
[root@node201 ~]# useradd -u 2001 -g dmdba dmdba
[root@node201 ~]# passwd dmdba
Changing password for user dmdba.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
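It is also common practice (not shown in the original steps) to raise the resource limits for the dmdba user before installing DM; a rough sketch, assuming the usual pam_limits configuration, with limit values chosen only as examples:

[root@node201 ~]# cat >> /etc/security/limits.conf <<EOF
dmdba  soft  nofile  65536
dmdba  hard  nofile  65536
dmdba  soft  nproc   65536
dmdba  hard  nproc   65536
EOF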

3. Cluster directory planning

[root@node201 ~]# mkdir -p /home/dmdba/dmdsc   # base directory for the DSC environment
[root@node201 ~]# mkdir -p /home/dmdba/dmdsc/bin
[root@node201 ~]# mkdir -p /home/dmdba/dmdsc/data/DSC01  # configuration file directory (node201)
[root@node201 ~]# mkdir -p /home/dmdba/dmdsc/data/DSC02  # configuration file directory (node202)

4. Prepare the shared storage devices (raw/udev devices)
1) Obtain the disk identifiers with scsi_id (both nodes must return identical values)

[root@node201 ~]#  /usr/lib/udev/scsi_id -g -u /dev/sdd
36001405027e52d2bbfa49148b4dcc546
[root@node201 ~]#  /usr/lib/udev/scsi_id -g -u /dev/sde
36001405d8832f16856e45b0b44cd9252
[root@node201 ~]#  /usr/lib/udev/scsi_id -g -u /dev/sdf
360014058606671082694fca897a2404d
[root@node201 ~]#  /usr/lib/udev/scsi_id -g -u /dev/sdg
36001405067e2cf5c3a046d299ee74301

2) Configure the udev rules

[root@node201 ~]# cat /etc/udev/rules.d/66-dmdevices.rules
## DCR disk rule; the directory /dev_DSC2 is created before the symlink
KERNEL=="sd*",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36001405027e52d2bbfa49148b4dcc546",SYMLINK+="DCR", OWNER="dmdba", GROUP="dmdba", MODE="0660", RUN+="/bin/sh -c 'chown dmdba:dmdba /dev/$name;mkdir -p /dev_DSC2; ln -s /dev/DCR /dev_DSC2/DCR'"
## VOTE disk rule
KERNEL=="sd*",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36001405d8832f16856e45b0b44cd9252",SYMLINK+="VOTE", OWNER="dmdba", GROUP="dmdba", MODE="0660", RUN+="/bin/sh -c 'chown dmdba:dmdba /dev/$name; ln -s /dev/VOTE /dev_DSC2/VOTE'"

## DMLOG disk rule; after the links are created, ownership of /dev_DSC2 is handed to user dmdba of group dmdba
KERNEL=="sd*",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="360014058606671082694fca897a2404d",SYMLINK+="DMLOG", OWNER="dmdba", GROUP="dmdba", MODE="0660", RUN+="/bin/sh -c 'chown dmdba:dmdba /dev/$name; ln -s /dev/DMLOG /dev_DSC2/DMLOG ; chown -R dmdba:dmdba /dev_DSC2'"

## DMDATA disk rule
KERNEL=="sd*",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36001405067e2cf5c3a046d299ee74301",SYMLINK+="DMDATA", OWNER="dmdba", GROUP="dmdba", MODE="0660", RUN+="/bin/sh -c 'chown dmdba:dmdba /dev/$name; ln -s /dev/DMDATA /dev_DSC2/DMDATA'"

3) Create the device links

[root@node201 ES]# systemctl restart systemd-udev-trigger
[root@node201 ~]#  ls -lth /dev_DSC2/
total 0
lrwxrwxrwx 1 dmdba dmdba 10 Aug 14 15:45 DMLOG -> /dev/DMLOG
lrwxrwxrwx 1 dmdba dmdba  9 Aug 14 15:40 VOTE -> /dev/VOTE
lrwxrwxrwx 1 dmdba dmdba 11 Aug 14 15:40 DMDATA -> /dev/DMDATA
lrwxrwxrwx 1 dmdba dmdba  8 Aug 14 15:40 DCR -> /dev/DCR
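If the links do not appear after restarting systemd-udev-trigger, the rules can also be reloaded and replayed explicitly with the standard udevadm tooling:

[root@node201 ~]# udevadm control --reload-rules
[root@node201 ~]# udevadm trigger --type=devices --action=change
[root@node201 ~]# ls -l /dev/DCR /dev/VOTE /dev/DMDATA /dev/DMLOG    # verify the symlinks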

III. Installing the DM database software

1. Deploy the database software

[root@node201 soft]# mount -o loop /soft/dm8_20240712_x86_rh7_64.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only

[dmdba@node201 mnt]$ ./DMInstall.bin -i
Installer Language:
[1]: 简体中文
[2]: English
Please select the installer's language [2]:
Extract install files.........
Hardware architecture verification passed!
Welcome to DM DBMS Installer
......

[root@node201 soft]# /home/dmdba/dmdbms/script/root/root_installer.sh
Move /home/dmdba/dmdbms/bin/dm_svc.conf to /etc
Create the DmAPService service
Created symlink from /etc/systemd/system/multi-user.target.wants/DmAPService.service to /usr/lib/systemd/system/DmAPService.service.
Finished to create the service (DmAPService)
Start the DmAPService service

2. Initialize a database instance

[root@node201 bin]# ./dminit /home/dmdba/dmdbms/data PAGE_SIZE=16
initdb V8
db version: 0x7000c
file dm.key not found, use default license!
License will expire on 2025-07-03
start parameter error,please check!
fail to init db.

# Initialization succeeds after the parameters are specified
[root@node201 bin]# ./dminit path=/home/dmdba/dmdbms/data db_name=DAMENG instance_name=DMDB port_num=5236 CASE_SENSITIVE=y CHARSET=1 PAGE_SIZE=32 EXTENT_SIZE=32
initdb V8
db version: 0x7000c
file dm.key not found, use default license!
License will expire on 2025-07-03
Normal of FAST
Normal of DEFAULT
Normal of RECYCLE
Normal of KEEP
Normal of ROLL

 log file path: /home/dmdba/dmdbms/data/DAMENG/DAMENG01.log
 log file path: /home/dmdba/dmdbms/data/DAMENG/DAMENG02.log

write to dir [/home/dmdba/dmdbms/data/DAMENG].
create dm database success. 2024-08-14 19:32:38

3. Create and register the database service

[root@node202 root]# pwd
/home/dmdba/dmdbms/script/root

[root@node201 root]# ./dm_service_installer.sh -t dmserver -p DMSERVER -dm_ini /home/dmdba/dmdbms/data/DAMENG/dm.ini
Created symlink from /etc/systemd/system/multi-user.target.wants/DmServiceDMSERVER.service to /usr/lib/systemd/system/DmServiceDMSERVER.service.
Finished to create the service (DmServiceDMSERVER)

# Starting the database service fails
[root@node201 root]# systemctl start DmServiceDMSERVER
Job for DmServiceDMSERVER.service failed because the control process exited with error code. See "systemctl status DmServiceDMSERVER.service" and "journalctl -xe" for details.
[root@node201 root]# journalctl -xe
Aug 14 19:39:08 node201 iscsid[1061]: iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI
Aug 14 19:39:10 node201 iscsid[1061]: iscsid: connection1:0 is operational after recovery (1 attempts)
Aug 14 19:39:12 node201 kernel:  connection1:0: detected conn error (1020)
Aug 14 19:39:12 node201 iscsid[1061]: iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI
Aug 14 19:39:14 node201 iscsid[1061]: iscsid: connection1:0 is operational after recovery (1 attempts)
Aug 14 19:39:16 node201 kernel:  connection1:0: detected conn error (1020)
Aug 14 19:39:16 node201 iscsid[1061]: iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI
Aug 14 19:39:17 node201 DmServiceDMSERVER[20560]: [43B blob data]
Aug 14 19:39:17 node201 systemd[1]: DmServiceDMSERVER.service: control process exited, code=exited sta
Aug 14 19:39:17 node201 systemd[1]: Failed to start DM Instance Service(DmServiceDMSERVER)..
-- Subject: Unit DmServiceDMSERVER.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit DmServiceDMSERVER.service has failed.
--
-- The result is failed.
Aug 14 19:39:17 node201 systemd[1]: Unit DmServiceDMSERVER.service entered failed state.
Aug 14 19:39:17 node201 systemd[1]: DmServiceDMSERVER.service failed.
Aug 14 19:39:17 node201 polkitd[734]: Unregistered Authentication Agent for unix-process:20553:3512746
Aug 14 19:39:18 node201 iscsid[1061]: iscsid: connection1:0 is operational after recovery (1 attempts)
Aug 14 19:39:20 node201 kernel:  connection1:0: detected conn error (1020)
Aug 14 19:39:20 node201 iscsid[1061]: iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI
Aug 14 19:39:22 node201 iscsid[1061]: iscsid: connection1:0 is operational after recovery (1 attempts)
Aug 14 19:39:24 node201 kernel:  connection1:0: detected conn error (1020)
Aug 14 19:39:24 node201 iscsid[1061]: iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI
Aug 14 19:39:26 node201 iscsid[1061]: iscsid: connection1:0 is operational after recovery (1 attempts)
lines 1135-1161/1161 (END)

# Fix the ownership of the data directory
[root@node201 dmdbms]# ls -lhd data
drwxr-xr-x 3 root root 19 Aug 14 19:32 data

[root@node201 dmdbms]# chown -R dmdba.dmdba data
[root@node201 dmdbms]# ls -lhd data
drwxr-xr-x 3 dmdba dmdba 19 Aug 14 19:32 data

# Start the database service
[root@node201 DAMENG]# systemctl start DmServiceDMSERVER
[root@node201 DAMENG]# systemctl status DmServiceDMSERVER
● DmServiceDMSERVER.service - DM Instance Service(DmServiceDMSERVER).
   Loaded: loaded (/usr/lib/systemd/system/DmServiceDMSERVER.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2024-08-15 11:44:29 CST; 5s ago
  Process: 19669 ExecStart=/home/dmdba/dmdbms/bin/DmServiceDMSERVER start (code=exited, status=0/SUCCESS)
 Main PID: 19691 (dmserver)
    Tasks: 114
   CGroup: /system.slice/DmServiceDMSERVER.service
           └─19691 /home/dmdba/dmdbms/bin/dmserver path=/home/dmdba/dmdbms/data/DAMENG/dm.ini -noco...

Aug 15 11:44:14 node201 systemd[1]: Starting DM Instance Service(DmServiceDMSERVER)....
Aug 15 11:44:29 node201 DmServiceDMSERVER[19669]: [39B blob data]
Aug 15 11:44:29 node201 systemd[1]: Started DM Instance Service(DmServiceDMSERVER)..

# Check the database service port
[root@node201 dmdbms]# netstat -antlp |grep 5236
tcp6       0      0 :::5236                 :::*                    LISTEN      19691/dmserver

# Connect to the database
[dmdba@node201 bin]$ ./disql SYSDBA/SYSDBA
Server[LOCALHOST:5236]:mode is normal, state is open
login used time : 5.890(ms)
disql V8
SQL> help
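As a quick sanity check after logging in, the instance can be queried; a minimal example, assuming the standard V$INSTANCE dynamic view of DM8:

SQL> SELECT * FROM V$INSTANCE;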

IV. Creating the cluster

1. Create the ASM disks
As shown below, prepare the DMDCR_CFG.INI configuration file and save it under /home/dmdba/dmdsc/data/DSC01 on node201:
[root@node201 DSC01]# cat dmdcr_cfg.ini
DCR_N_GRP= 3
DCR_VTD_PATH=/dev_DSC2/VOTE
DCR_OGUID= 1071107589
[GRP]
  DCR_GRP_TYPE = CSS
  DCR_GRP_NAME = GRP_CSS
  DCR_GRP_N_EP = 2
  DCR_GRP_DSKCHK_CNT = 60

[GRP_CSS]
  DCR_EP_NAME = CSS0
  DCR_EP_HOST = 192.168.1.201
  DCR_EP_PORT = 9836

[GRP_CSS]
  DCR_EP_NAME = CSS1
  DCR_EP_HOST = 192.168.1.202
  DCR_EP_PORT = 9837

[GRP]
DCR_GRP_TYPE= ASM
DCR_GRP_NAME= GRP_ASM
DCR_GRP_N_EP= 2
DCR_GRP_DSKCHK_CNT= 60

[GRP_ASM]
DCR_EP_NAME= ASM0
DCR_EP_SHM_KEY= 64735
DCR_EP_SHM_SIZE= 512
DCR_EP_HOST= 192.168.1.201
DCR_EP_PORT= 5836
DCR_EP_ASM_LOAD_PATH= /dev_DSC2

[GRP_ASM]
DCR_EP_NAME= ASM1
DCR_EP_SHM_KEY= 54736
DCR_EP_SHM_SIZE= 512
DCR_EP_HOST= 192.168.1.202
DCR_EP_PORT= 5837
DCR_EP_ASM_LOAD_PATH= /dev_DSC2

[GRP]
DCR_GRP_TYPE= DB
DCR_GRP_NAME= GRP_DSC
DCR_GRP_N_EP= 2
DCR_GRP_DSKCHK_CNT= 60

[GRP_DSC]
DCR_EP_NAME= DSC01
DCR_EP_SEQNO= 0
DCR_EP_PORT= 6636

[GRP_DSC]
DCR_EP_NAME= DSC02
DCR_EP_SEQNO= 1
DCR_EP_PORT= 6637

2. Initialize the ASM disks

[root@node201 bin]# ./dmasmcmd
dmasmcmd V8
ASM>create dcrdisk '/dev_DSC2/DCR' 'DCR'
[TRACE]The ASM initialize dcrdisk /dev_DSC2/DCR to name DMASMDCR
Used time: 481.989(ms).
ASM>create votedisk '/dev_DSC2/VOTE' 'VOTE'
[TRACE]The ASM initialize votedisk /dev_DSC2/VOTE to name DMASMVOTE
Used time: 00:00:01.111.
ASM>create asmdisk '/dev_DSC2/DMDATA' 'DMDATA'
[TRACE]The ASM initialize asmdisk /dev_DSC2/DMDATA to name DMASMDMDATA
Used time: 110.782(ms).
ASM>create asmdisk '/dev_DSC2/DMLOG'  'DMLOG'
[TRACE]The ASM initialize asmdisk /dev_DSC2/DMLOG to name DMASMDMLOG
Used time: 945.421(ms).
ASM>init dcrdisk '/dev_DSC2/DCR' from '/home/dmdba/dmdsc/data/DSC01/dmdcr_cfg.ini' identified by 'SYSDBA'
[TRACE]DG 126 alloc extent for inode (0, 0, 1)
[TRACE]DG 126 alloc 4 extents for 0xfe000002 (0, 0, 2)->(0, 0, 5)
Used time: 334.278(ms).
ASM>init votedisk  '/dev_DSC2/VOTE' from '/home/dmdba/dmdsc/data/DSC01/dmdcr_cfg.ini'
[TRACE]DG 125 alloc extent for inode (0, 0, 1)
[TRACE]DG 125 alloc 4 extents for 0xfd000002 (0, 0, 2)->(0, 0, 5)
Used time: 167.669(ms).

3. Configure dmasvrmal.ini
As shown below, prepare the DMASM MAL configuration file DMASVRMAL.INI and save it under /home/dmdba/dmdsc/data/DSC01 on node201:

[root@node201 DSC01]# cat dmasvrmal.ini
[MAL_INST1]
MAL_INST_NAME= ASM0
MAL_HOST= 192.168.1.201
MAL_PORT= 4836
[MAL_INST2]
MAL_INST_NAME= ASM1
MAL_HOST= 192.168.1.202
MAL_PORT= 4837

As shown below, node202 stores the file under /home/dmdba/dmdsc/data/DSC02 with exactly the same content as on node201:

[root@node202 DSC02]# cat dmasvrmal.ini
[MAL_INST1]
MAL_INST_NAME= ASM0
MAL_HOST= 192.168.1.201
MAL_PORT= 4836
[MAL_INST2]
MAL_INST_NAME= ASM1
MAL_HOST= 192.168.1.202
MAL_PORT= 4837

4. Configure dmdcr.ini
As shown below, configure the dmdcr.ini file on node201:

[root@node201 DSC01]# cat dmdcr.ini
DMDCR_PATH                   = /dev_DSC2/DCR
DMDCR_MAL_PATH               = /home/dmdba/dmdsc/data/DSC01/dmasvrmal.ini
DMDCR_SEQNO                  = 0
DMDCR_ASM_RESTART_INTERVAL   = 0
DMDCR_ASM_STARTUP_CMD        = /home/dmdba/dmdsc/bin/dmasmsvr dcr_ini=/home/dmdba/dmdsc/data/DSC01/dmdcr.ini
DMDCR_DB_RESTART_INTERVAL    = 0
DMDCR_DB_STARTUP_CMD         = /home/dmdba/dmdsc/bin/dmserver path=/home/dmdba/dmdsc/data/DSC01/DSC01_conf/dm.ini dcr_ini=/home/dmdba/dmdsc/data/DSC01/dmdcr.ini
DMDCR_LINK_CHECK_IP=192.168.1.100

Since DMDCR_LINK_CHECK_IP is set, the DMSERVER and DMASMSVR binaries on node201 must be granted ping (raw socket) privileges:

[root@node201 bin]# setcap cap_net_raw,cap_net_admin=eip /home/dmdba/dmdbms/bin/dmserver
[root@node201 bin]# setcap cap_net_raw,cap_net_admin=eip /home/dmdba/dmdbms/bin/dmasmsvr
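The assigned capabilities can be verified with getcap (from libcap), which should report cap_net_admin,cap_net_raw+eip for both binaries:

[root@node201 bin]# getcap /home/dmdba/dmdbms/bin/dmserver /home/dmdba/dmdbms/bin/dmasmsvr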

As shown below, configure the dmdcr.ini file on node202:

[root@node202 DSC02]# cat dmdcr.ini
DMDCR_PATH                   = /dev_DSC2/DCR
DMDCR_MAL_PATH               = /home/dmdba/dmdsc/data/DSC02/dmasvrmal.ini
DMDCR_SEQNO                  = 1
DMDCR_ASM_RESTART_INTERVAL   = 0
DMDCR_ASM_STARTUP_CMD        = /home/dmdba/dmdsc/bin/dmasmsvr dcr_ini=/home/dmdba/dmdsc/data/DSC02/dmdcr.ini
DMDCR_DB_RESTART_INTERVAL    = 0
DMDCR_DB_STARTUP_CMD         = /home/dmdba/dmdsc/bin/dmserver path=/home/dmdba/dmdsc/data/DSC02/DSC02_conf/dm.ini dcr_ini=/home/dmdba/dmdsc/data/DSC02/dmdcr.ini
DMDCR_LINK_CHECK_IP=192.168.1.100

Since DMDCR_LINK_CHECK_IP is set, the DMSERVER and DMASMSVR binaries on node202 must also be granted ping (raw socket) privileges:

[root@node202 DSC02]# setcap cap_net_raw,cap_net_admin=eip /home/dmdba/dmdbms/bin/dmserver
[root@node202 DSC02]# setcap cap_net_raw,cap_net_admin=eip /home/dmdba/dmdbms/bin/dmasmsvr

5. Start the DMCSS and DMASMSVR programs (all nodes)

[root@node201 bin]# ./dmcss dcr_ini=/home/dmdba/dmdsc/data/DSC01/dmdcr.ini &
[1] 32553
[root@node202 bin]# ./dmcss dcr_ini=/home/dmdba/dmdsc/data/DSC02/dmdcr.ini &
[1] 27973
[root@node201 bin]# ./dmasmsvr dcr_ini=/home/dmdba/dmdsc/data/DSC01/dmdcr.ini &
[2] 1719
[root@node202 bin]# ./dmasmsvr dcr_ini=/home/dmdba/dmdsc/data/DSC02/dmdcr.ini &
[2] 28708

The cluster communication messages are shown below:

[2024-08-16 14:47:48:235] [ASM]: Set EP ASM0[0] as Control node
[2024-08-16 14:47:48:239] [ASM]: CSS set cmd START NOTIFY, dest_ep ASM0 seqno = 0, cmd_seq = 2
check css cmd: START NOTIFY, cmd_seq: 2, code: 0
[2024-08-16 14:47:49:243] [ASM]: CSS set cmd EP START, dest_ep ASM0 seqno = 0, cmd_seq = 3
check css cmd: EP START, cmd_seq: 3, code: 0
ASM Control Node EPNO:0
[WARNING]Decode asmdisk device fail, sig:1751483255, disk_id:65535, group_id:65535.
[WARNING]Decode asmdisk device fail, sig:1751483255, disk_id:65535, group_id:65535.
[2024-08-16 14:47:49:379] [ASM]: CSS set cmd NONE, dest_ep ASM0 seqno = 0, cmd_seq = 0
[2024-08-16 14:47:49:524] [ASM]: CSS set cmd EP START, dest_ep ASM1 seqno = 1, cmd_seq = 5
[2024-08-16 14:47:52:348] [ASM]: CSS set cmd NONE, dest_ep ASM1 seqno = 1, cmd_seq = 0
.......
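Here DMCSS and DMASMSVR are started in the foreground for the initial setup. For day-to-day operation they are usually registered as systemd services instead; a sketch, assuming dm_service_installer.sh in this DM8 release supports the dmcss and dmasmsvr service types (check ./dm_service_installer.sh -h; the generated service names depend on -p):

[root@node201 root]# ./dm_service_installer.sh -t dmcss -dcr_ini /home/dmdba/dmdsc/data/DSC01/dmdcr.ini -p CSS01
[root@node201 root]# ./dm_service_installer.sh -t dmasmsvr -dcr_ini /home/dmdba/dmdsc/data/DSC01/dmdcr.ini -p ASM01
# repeat on node202 with the DSC02 dmdcr.ini, then start the generated services with systemctl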

6. Create the ASM disk groups with DMASMTOOL (node201)

[root@node201 bin]# ./dmasmtool dcr_ini=/home/dmdba/dmdsc/data/DSC01/dmdcr.ini
DMASMTOOL V8
[TRACE]atsk_process_connect success, client_is_local=1

ASM>CREATE DISKGROUP DMDATA asmdisk '/dev_DSC2/DMDATA'
[TRACE]Pre-check asmdisk /dev_DSC2/DMDATA
[TRACE]asvr2_sync_disk_pre_check code:0
[TRACE]asm_disk_add: /dev_DSC2/DMDATA
[TRACE]Create diskgroup DMDATA, with asmdisk /dev_DSC2/DMDATA
[TRACE]DG 0 alloc extent for inode (0, 0, 1)
[TRACE]aptx op_type 1, log_len 1178, start seq 0
[TRACE]aptx flush op_type 1, log_len 1178, start seq 3
[TRACE]The disk metadata addr(0, 0) flush.
Used time: 00:00:04.633.

ASM>CREATE DISKGROUP DMLOG  asmdisk '/dev_DSC2/DMLOG'
[TRACE]Pre-check asmdisk /dev_DSC2/DMLOG
[TRACE]asvr2_sync_disk_pre_check code:0
[TRACE]aptx op_type 1, log_len 15, start seq 3
[TRACE]aptx flush op_type 1, log_len 15, start seq 4
[TRACE]asm_disk_add: /dev_DSC2/DMLOG
[TRACE]Create diskgroup DMLOG, with asmdisk /dev_DSC2/DMLOG
[TRACE]DG 1 alloc extent for inode (0, 0, 1)
[TRACE]aptx op_type 1, log_len 1176, start seq 0
[TRACE]aptx flush op_type 1, log_len 1176, start seq 3
[TRACE]The disk metadata addr(1, 0) flush.
Used time: 00:00:04.841.
ASM>
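The new disk groups can be listed from the same session; a sketch assuming the lsdg command of DMASMTOOL V8 is available:

ASM>lsdg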

7. Configure dminit.ini
On node201, prepare the DMINIT.INI configuration file and save it under /home/dmdba/dmdsc/data/DSC01:

[root@node201 DSC01]# cat dminit.ini
DB_NAME= dsc2
SYSTEM_PATH= +DMDATA/data
SYSTEM= +DMDATA/data/dsc2/system.dbf
SYSTEM_SIZE= 128
ROLL= +DMDATA/data/dsc2/roll.dbf
ROLL_SIZE= 128
MAIN= +DMDATA/data/dsc2/main.dbf
MAIN_SIZE= 128
CTL_PATH= +DMDATA/data/dsc2/dm.ctl
LOG_SIZE= 2048
DCR_PATH= /dev_DSC2/DCR
DCR_SEQNO= 0
AUTO_OVERWRITE= 2
PAGE_SIZE = 16
EXTENT_SIZE = 16

[DSC01]
CONFIG_PATH= /home/dmdba/dmdsc/data/DSC01/DSC01_conf
PORT_NUM = 6636
MAL_HOST= 192.168.1.201
MAL_PORT= 6536
LOG_PATH= +DMLOG/log/DSC01_log1.log
LOG_PATH= +DMLOG/log/DSC01_log2.log
[DSC02]
CONFIG_PATH= /home/dmdba/dmdsc/data/DSC02/DSC02_conf
PORT_NUM = 6637
MAL_HOST= 192.168.1.202
MAL_PORT= 6537
LOG_PATH= +DMLOG/log/DSC02_log1.log
LOG_PATH= +DMLOG/log/DSC02_log2.log

8. Initialize the database environment from one node with DMINIT
1) As shown below, the initialization fails:

[root@node201 bin]# ./dminit control=/home/dmdba/dmdsc/data/DSC01/dminit.ini
initdb V8
db version: 0x7000c
file dm.key not found, use default license!
License will expire on 2025-07-03
[TRACE]atsk_process_connect success, client_is_local=1
execute open ASM file fail, code: [-2405]
Normal of FAST
Normal of DEFAULT
Normal of RECYCLE
Normal of KEEP
Normal of ROLL
execute open ASM file fail, code: [-2405]
[TRACE]The ASM create file +DMDATA/data/dsc2 from (0, 3355445528).
[TRACE]aptx op_type 5, log_len 1359, start seq 4
[TRACE]aptx flush op_type 5, log_len 1359, start seq 7
execute open ASM file fail, code: [-2405]
[TRACE]The ASM create file +DMDATA/data/dsc2/bak from (0, 3355445528).
[TRACE]aptx op_type 5, log_len 686, start seq 7
[TRACE]aptx flush op_type 5, log_len 686, start seq 9
 log file path: +DMLOG/log/DSC01_log1.log
 log file path: +DMLOG/log/DSC01_log2.log
 log file path: +DMLOG/log/DSC02_log1.log
 log file path: +DMLOG/log/DSC02_log2.lo
write to dir [+DMDATA/data/dsc2].
execute open ASM file fail, code: [-2405]
[TRACE]DG 0 alloc 1 extents for 0x80000005 (0, 0, 2)->(0, 0, 2)
[TRACE]aptx op_type 5, log_len 1000, start seq 9
[TRACE]aptx flush op_type 5, log_len 1000, start seq 11
[TRACE]The ASM create file:[+DMDATA/data/dsc2/dm.ctl], id:[0x80000005] from (0, 3355445528), init_flag 0, file_size:2048.
[TRACE]The ASM file 0x80000005 truncate from (0, 3355445528), org_size:2048, cur_size:2048.
execute open ASM file fail, code: [-2405]
[TRACE]The ASM create file +DMLOG/log from (0, 3355445528).
[TRACE]aptx op_type 5, log_len 675, start seq 3
[TRACE]aptx flush op_type 5, log_len 675, start seq 5
[TRACE]DG 1 alloc 512 extents for 0x81000003 (0, 0, 2)->(0, 0, 513)
[TRACE]aptx op_type 5, log_len 1001, start seq 5
[TRACE]aptx flush op_type 5, log_len 1001, start seq 7
[TRACE]The ASM create file:[+DMLOG/log/DSC01_log1.log], id:[0x81000003] from (0, 3355445528), init_flag 0, file_size:2147483648.
[TRACE]The ASM create file:[+DMLOG/log/DSC01_log2.log] failed, code:[-523].
execute create ASM file fail, code: [-523]
create rlog file +DMLOG/log/DSC01_log2.log failed, code: -7013.

You may get more details in file ../log/dm_DSC01_202408.log
[TRACE]atsk_process_sess_free org_site:(0), org_sess:(0xc8000918).
[TRACE]asvr2_sess_free sess:(0xc8000918), tsk:(0x17f1fe8).
fail to init db.


Check the log:
As shown below, creating the redo log file failed because the disk group ran out of space:
[root@node201 bin]# cat ../log/dm_DSC01_202408.log
2024-08-16 14:57:41.295 [INFO] dminit P0000005706 T0000000000000005706  os_sema2_create_low, create and inc sema success, key:26278319, sem_id:4, sem_value:1!
2024-08-16 14:57:41.296 [INFO] dminit P0000005706 T0000000000000005706  os_sema2_create_low, create and inc sema success, key:294713775, sem_id:5, sem_value:1!
2024-08-16 14:57:41.296 [INFO] dminit P0000005706 T0000000000000005706  os_dir_is_empty_asm->os_asm_dir_get_first: [path: +DMDATA/data/dsc2]: [code:-2405] File or Directory [+DMDATA/data/dsc2] does not exist
2024-08-16 14:57:41.299 [INFO] dminit P0000005706 T0000000000000005706  INI parameter IO_THR_GROUPS changed, the original value 0, new value 8
2024-08-16 14:57:41.300 [INFO] dminit P0000005706 T0000000000000005706  INI parameter FAST_POOL_PAGES changed, the original value 0, new value 16
2024-08-16 14:57:41.300 [INFO] dminit P0000005706 T0000000000000005706  INI parameter BUFFER_POOLS changed, the original value 0, new value 1
2024-08-16 14:57:41.300 [INFO] dminit P0000005706 T0000000000000005706  INI parameter RECYCLE_POOLS changed, the original value 0, new value 1
2024-08-16 14:57:41.300 [INFO] dminit P0000005706 T0000000000000005706  INI parameter ROLLSEG_POOLS changed, the original value 0, new value 1
2024-08-16 14:57:41.301 [INFO] dminit P0000005706 T0000000000000005706  fil_sys_init
2024-08-16 14:57:41.301 [INFO] dminit P0000005706 T0000000000000005706  init database start at +DMDATA/data/dsc2
2024-08-16 14:57:42.855 [INFO] dminit P0000005706 T0000000000000005706  ctl_write_to_file file: +DMDATA/data/dsc2/dm.ctl
2024-08-16 14:57:43.629 [INFO] dminit P0000005706 T0000000000000005706  ctl_add_table_space_low to lst success, ts_name RLOG[id=2]
2024-08-16 14:57:45.226 [INFO] dminit P0000005706 T0000000000000005706  rfil_close_low set arch rfil[+DMLOG/log/DSC01_log1.log]'s sta to inactive, l_next_seq = 2457, g_next_seq = 2457, clsn = 0, handle = -2130706429, free=4096, len=2147483648
2024-08-16 14:57:45.229 [ERROR] dminit P0000005706 T0000000000000005706  os_file_create_with_init->os_asm_file_create: [path: +DMLOG/log/DSC01_log2.log]: [CODE:-523] Out of space
2024-08-16 14:57:45.229 [INFO] dminit P0000005706 T0000000000000005706  os_sema2_free, sema_id:4, sema_value:1!
2024-08-16 14:57:45.229 [INFO] dminit P0000005706 T0000000000000005706  os_sema2_free, sema_id:5, sema_value:1!
2024-08-16 14:57:45.229 [INFO] dminit P0000005706 T0000000000000005706  dmshm2_detach, ret = 0 shm id 8
2024-08-16 14:57:45.238 [FATAL] dminit P0000005706 T0000000000000005706  init database fail with code -2!



Modify the configuration file: with LOG_SIZE=2048 each online redo log file is 2 GB, while the DMLOG disk group is only about 2 GB, so even the second log file cannot be created; reduce LOG_SIZE to 256:
[root@node201 DSC01]# cat dminit.ini
DB_NAME= dsc2
SYSTEM_PATH= +DMDATA/data
SYSTEM= +DMDATA/data/dsc2/system.dbf
SYSTEM_SIZE= 128
ROLL= +DMDATA/data/dsc2/roll.dbf
ROLL_SIZE= 128
MAIN= +DMDATA/data/dsc2/main.dbf
MAIN_SIZE= 128
CTL_PATH= +DMDATA/data/dsc2/dm.ctl
LOG_SIZE= 256            # reduced LOG_SIZE (was 2048)
DCR_PATH= /dev_DSC2/DCR
DCR_SEQNO= 0
AUTO_OVERWRITE= 2
PAGE_SIZE = 16
EXTENT_SIZE = 16

[DSC01]
CONFIG_PATH= /home/dmdba/dmdsc/data/DSC01/DSC01_conf
PORT_NUM = 6636
MAL_HOST= 192.168.1.201
MAL_PORT= 6536
LOG_PATH= +DMLOG/log/DSC01_log1.log
LOG_PATH= +DMLOG/log/DSC01_log2.log

[DSC02]
CONFIG_PATH= /home/dmdba/dmdsc/data/DSC02/DSC02_conf
PORT_NUM = 6637
MAL_HOST= 192.168.1.202
MAL_PORT= 6537
LOG_PATH= +DMLOG/log/DSC02_log1.log
LOG_PATH= +DMLOG/log/DSC02_log2.log


Re-run the initialization:
[root@node201 bin]# ./dminit control=/home/dmdba/dmdsc/data/DSC01/dminit.ini
initdb V8
db version: 0x7000c
file dm.key not found, use default license!
License will expire on 2025-07-03
Normal of FAST
Normal of DEFAULT
Normal of RECYCLE
Normal of KEEP
Normal of ROLL

 log file path: +DMLOG/log/DSC01_log1.log
 log file path: +DMLOG/log/DSC01_log2.log
 log file path: +DMLOG/log/DSC02_log1.log
log file path: +DMLOG/log/DSC02_log2.log
FILE "/home/dmdba/dmdsc/data/DSC01/DSC01_conf/dm.ini" has already existed
FILE "/home/dmdba/dmdsc/data/DSC01/DSC01_conf/sqllog.ini" has already existed
FILE "/home/dmdba/dmdsc/data/DSC02/DSC02_conf/dm.ini" has already existed
FILE "/home/dmdba/dmdsc/data/DSC02/DSC02_conf/sqllog.ini" has already existed
FILE "+DMLOG/log/DSC01_log1.log" has already existed
FILE "+DMLOG/log/DSC01_log2.log" has already existed
write to dir [+DMDATA/data/dsc2].
create dm database success. 2024-08-16 15:07:47

After DMINIT completes, it generates the configuration files DM.INI and DMMAL.INI under the config_path directories (/home/dmdba/dmdsc/data/DSC01/DSC01_conf and /home/dmdba/dmdsc/data/DSC02/DSC02_conf).
Copy the DSC02 node configuration produced on node201 during initialization (the whole /home/dmdba/dmdsc/data/DSC02 directory) to /home/dmdba/dmdsc/data/DSC02/ on node202. The database servers can then be started.

[root@node201 DSC02]# scp  -r * node202:/home/dmdba/dmdsc/data/DSC02/
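Because of the ownership problem hit earlier with the single-instance data directory, it is worth making sure the files copied to node202 end up owned by dmdba as well; a minimal sketch:

[root@node202 ~]# chown -R dmdba:dmdba /home/dmdba/dmdsc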

9. Start the database instances (all nodes)

# node201
[root@node201 bin]# ./dmserver dcr_ini=/home/dmdba/dmdsc/data/DSC01/dmdcr.ini /home/dmdba/dmdsc/data/DSC01/DSC01_conf/dm.ini
file dm.key not found, use default license!
version info: develop
csek2_vm_t = 1408
nsql_vm_t = 328
prjt2_vm_t = 176
ltid_vm_t = 216
nins2_vm_t = 1048
nset2_vm_t = 272
ndlck_vm_t = 192
ndel2_vm_t = 776
slct2_vm_t = 208
nli2_vm_t = 192
aagr2_vm_t = 280
pscn_vm_t = 288
dist_vm_t = 896
DM Database Server 64 V8 03134284194-20240703-234060-20108 startup...
Normal of FAST
Normal of DEFAULT
Normal of RECYCLE
Normal of KEEP
Normal of ROLL
Database mode = 0, oguid = 0
License will expire on 2025-07-03

# node202
[root@node202 bin]# ./dmserver dcr_ini=/home/dmdba/dmdsc/data/DSC02/dmdcr.ini /home/dmdba/dmdsc/data/DSC02/DSC02_conf/dm.ini
file dm.key not found, use default license!
[TRACE]atsk_process_connect success, client_is_local=1
version info: develop
[TRACE]asvr2_sess_free sess:(0xa0000918), tsk:(0x295aac8).
csek2_vm_t = 1408
nsql_vm_t = 328
prjt2_vm_t = 176
ltid_vm_t = 216
nins2_vm_t = 1048
nset2_vm_t = 272
ndlck_vm_t = 192
ndel2_vm_t = 776
slct2_vm_t = 208
nli2_vm_t = 192
aagr2_vm_t = 280
pscn_vm_t = 288
dist_vm_t = 896
[TRACE]atsk_process_connect success, client_is_local=1
DM Database Server 64 V8 03134284194-20240703-234060-20108 startup...
Normal of FAST
Normal of DEFAULT
Normal of RECYCLE
Normal of KEEP
Normal of ROLL
Database mode = 0, oguid = 0
License will expire on 2025-07-03
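As with DMCSS and DMASMSVR, the two DSC instances can later be registered as systemd services instead of being run in the foreground; a sketch, assuming dm_service_installer.sh accepts a dcr_ini argument for DSC instances (verify with ./dm_service_installer.sh -h):

[root@node201 root]# ./dm_service_installer.sh -t dmserver -dm_ini /home/dmdba/dmdsc/data/DSC01/DSC01_conf/dm.ini -dcr_ini /home/dmdba/dmdsc/data/DSC01/dmdcr.ini -p DSC01
[root@node202 root]# ./dm_service_installer.sh -t dmserver -dm_ini /home/dmdba/dmdsc/data/DSC02/DSC02_conf/dm.ini -dcr_ini /home/dmdba/dmdsc/data/DSC02/dmdcr.ini -p DSC02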

10. Configure and start the DMCSSM monitor
DMCSSM can be started on any machine; as long as that machine has network connectivity to the DMDSC nodes, it can monitor the DMDSC cluster.

1) Configure dmcssm.ini

[root@node201 DSC01]# cat dmcssm.ini
# must match DCR_OGUID in DMDCR_CFG.INI
CSSM_OGUID      =       1071107589

# connection information for all CSS instances,
# matching DCR_EP_HOST and DCR_EP_PORT of the CSS entries in DMDCR_CFG.INI
CSSM_CSS_IP = 192.168.1.201:9836
CSSM_CSS_IP = 192.168.1.202:9837

CSSM_LOG_PATH   =       /home/dmdba/dmdsc/data/cssm_log # directory for the monitor log files
CSSM_LOG_FILE_SIZE              =       32              # each log file is at most 32 MB
CSSM_LOG_SPACE_LIMIT    =       0               # no limit on the total log space


2) Start the DMCSSM cluster monitor:
[root@node201 bin]#  ./dmcssm ini_path=/home/dmdba/dmdsc/data/dmcssm.ini
[monitor]         2024-08-16 15:18:31: CSS MONITOR V8
[monitor]         2024-08-16 15:18:31: CSS MONITOR SYSTEM IS READY.

[monitor]         2024-08-16 15:18:31: Wait CSS Control Node choosed...
[monitor]         2024-08-16 15:18:32: Wait CSS Control Node choosed succeed.

# After DMCSSM starts, the show command can be used in the monitor to view the cluster status
show

monitor current time:2024-08-16 15:19:04, n_group:3
=================== group[name = GRP_CSS, seq = 0, type = CSS, Control Node = 0] ========================================

[CSS0] auto check = TRUE, global info:
[ASM0] auto restart = FALSE
[DSC01] auto restart = FALSE
[CSS1] auto check = TRUE, global info:
[ASM1] auto restart = FALSE
[DSC02] auto restart = FALSE

ep:     css_time               inst_name     seqno     port    mode         inst_status        vtd_status   is_ok        active       guid              ts
        2024-08-16 15:19:03    CSS0          0         9836    Control Node OPEN               WORKING      OK           TRUE         7485468           7486602
        2024-08-16 15:19:03    CSS1          1         9837    Normal Node  OPEN               WORKING      OK           TRUE         7580635           7581649

=================== group[name = GRP_ASM, seq = 1, type = ASM, Control Node = 0] ========================================

n_ok_ep = 2
ok_ep_arr(index, seqno):
(0, 0)
(1, 1)

sta = OPEN, sub_sta = STARTUP
break ep = NULL
recover ep = NULL

crash process over flag is TRUE
ep:     css_time               inst_name     seqno     port    mode         inst_status        vtd_status   is_ok        active       guid              ts
        2024-08-16 15:19:03    ASM0          0         5836    Control Node OPEN               WORKING      OK           TRUE         7623743           7624707
        2024-08-16 15:19:03    ASM1          1         5837    Normal Node  OPEN               WORKING      OK           TRUE         7644304           7645240

=================== group[name = GRP_DSC, seq = 2, type = DB, Control Node = 0] ========================================

n_ok_ep = 2
ok_ep_arr(index, seqno):
(0, 0)
(1, 1)

sta = OPEN, sub_sta = STARTUP
break ep = NULL
recover ep = NULL

crash process over flag is TRUE
ep:     css_time               inst_name     seqno     port    mode         inst_status        vtd_status   is_ok        active       guid              ts
        2024-08-16 15:19:03    DSC01         0         6636    Control Node OPEN               WORKING      OK           TRUE         19090916          19091165
        2024-08-16 15:19:03    DSC02         1         6637    Normal Node  OPEN               WORKING      OK           TRUE         19192105          19192299

==================================================================================================================
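The cluster membership can also be checked from SQL by connecting to either node with disql; a sketch, assuming the V$DSC_EP_INFO dynamic view is available in this DM8 build:

[dmdba@node201 bin]$ ./disql SYSDBA/SYSDBA@192.168.1.201:6636
SQL> SELECT * FROM V$DSC_EP_INFO;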

--- At this point, the DMASM-based DMDSC cluster has been fully deployed.
