Postgres-XL Installation and Deployment (9.5)

Environment
OS: CentOS 7
PXL: Postgres-XL 9.5

IP address       Role                                Hostname
192.168.1.104    GTM master                          pg1
192.168.1.108    GTM proxy, coordinator, datanode    pg2
192.168.1.109    GTM proxy, coordinator, datanode    pg3

It is recommended to install postgres-xl-10r1.1 instead; the steps are the same.

 

Note: Postgres-XL currently does not support installing extensions (a demonstration is at the end of this post).

1. Download

https://www.postgres-xl.org/download/
The version downloaded here is postgres-xl-9.5r1.6.tar.bz2.

 

2. Install dependencies
Run on every node:
#yum install -y flex bison readline-devel zlib-devel openjade docbook-style-dsssl gcc

 

3. Create the postgres user
Run on every node:
#useradd postgres
#passwd postgres

 

4. Set the hostnames
On 192.168.1.104:
[root@localhost ~]#hostnamectl set-hostname pg1
On 192.168.1.108:
[root@localhost ~]#hostnamectl set-hostname pg2
On 192.168.1.109:
[root@localhost ~]#hostnamectl set-hostname pg3


5. Configure the hosts file
Run on every node:
vi /etc/hosts
192.168.1.104 pg1
192.168.1.108 pg2
192.168.1.109 pg3

6. Disable the firewall
Run on every node:
# systemctl stop firewalld.service
# systemctl disable firewalld.service


7. Configure SELinux
vi /etc/selinux/config
Set the following:
SELINUX=disabled
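To apply the SELinux change to the running system without a reboot, the following can also be run (an optional convenience, not part of the original steps):
# setenforce 0        ## switch SELinux to permissive mode immediately
# getenforce          ## verify the current mode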


8. Configure passwordless SSH
Set up passwordless SSH between the machines for the postgres user, so that pg1 can log in to pg2 and pg3 without a password (pg2 and pg3 do not need passwordless access to each other).
For reference, see:
https://www.cnblogs.com/hxlasky/p/12204180.html
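If you prefer not to follow the link, a minimal sketch of the key-based setup looks like this (run as the postgres user on pg1; assumes the standard ssh-keygen/ssh-copy-id tools on CentOS 7):
su - postgres
ssh-keygen -t rsa                 ## accept the defaults, leave the passphrase empty
ssh-copy-id postgres@pg1          ## pgxc_ctl also connects to the local node over ssh
ssh-copy-id postgres@pg2
ssh-copy-id postgres@pg3
ssh pg2 date                      ## verify that no password is requested
ssh pg3 date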

 

9. Configure environment variables
Run on every node:
su - postgres
cd ~
vi .bashrc    (editing .bash_profile did not seem to take effect)

export PGUSER=postgres
export PGHOME=/opt/pgxl
export PATH=$PGHOME/bin:$PATH:$HOME/.local/bin:$HOME/bin
export LD_LIBRARY_PATH=$PGHOME/lib:$LD_LIBRARY_PATH

The variables take effect after logging in again.
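Alternatively, apply them to the current shell without logging out:
source ~/.bashrc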

 

10. Install Postgres-XL
Run on all three servers:
su - root
[root@pg1 soft]# cd /soft
[root@pg1 soft]# mkdir /opt/pgxl
[root@pg1 soft]# tar -jxvf postgres-xl-9.5r1.6.tar.bz2
[root@pg1 soft]# cd postgres-xl-9.5r1.6
[root@pg1 postgres-xl-9.5r1.6]# ./configure --prefix=/opt/pgxl/
[root@pg1 postgres-xl-9.5r1.6]# make
The following message indicates that make succeeded and make install can be run:
All of Postgres-XL successfully made. Ready to install.
[root@pg1 postgres-xl-9.5r1.6]# make install
On success, the installation ends with the message:
Postgres-XL installation complete.

Change the ownership of the installation directory:
[root@pg1 bin]# chown -R postgres:postgres /opt/pgxl/

 

11. Configure the cluster
### The cluster is configured with the pgxc_ctl tool, whose configuration file allows the cluster to be deployed quickly. The tool is not installed by default, so it must be installed first.

11.1 Install pgxc_ctl and generate the configuration file
Run on pg1 only:
[root@pg1 contrib]# cd /soft/postgres-xl-9.5r1.6/contrib
[root@pg1 contrib]#make
[root@pg1 contrib]#make install
[root@pg1 /]# chown -R postgres:postgres /opt/pgxl/

Run pgxc_ctl once to generate a template configuration file; at the PGXC prompt, run prepare to write pgxc_ctl.conf, then exit:

[root@pg1 /]# su - postgres
[postgres@pg1 ~]$ cd /opt/pgxl/bin
[postgres@pg1 bin]$ ./pgxc_ctl
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
ERROR: File "/home/postgres/pgxc_ctl/pgxc_ctl.conf" not found or not a regular file. No such file or directory
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /home/postgres/pgxc_ctl
PGXC prepare
PGXC exit

 

11.2 Edit the pgxc_ctl.conf file (on pg1 only)
su - postgres
[postgres@pg1 pgxc_ctl]$ vi /home/postgres/pgxc_ctl/pgxc_ctl.conf

 

#!/usr/bin/env bash
#
# Postgres-XC Configuration file for pgxc_ctl utility. 
#
# Configuration file can be specified as -c option from pgxc_ctl command.   Default is
# $PGXC_CTL_HOME/pgxc_ctl.org.
#
# This is bash script so you can make any addition for your convenience to configure
# your Postgres-XC cluster.
#
# Please understand that pgxc_ctl provides only a subset of configuration which pgxc_ctl
# provide.  Here's several several assumptions/restrictions pgxc_ctl depends on.
#
# 1) All the resources of pgxc nodes has to be owned by the same user.   Same user means
#    user with the same user name.  User ID may be different from server to server.
#    This must be specified as a variable $pgxcOwner.
#
# 2) All the servers must be reacheable via ssh without password.   It is highly recommended
#    to setup key-based authentication among all the servers.
#
# 3) All the databases in coordinator/datanode has at least one same superuser.  Pgxc_ctl
#    uses this user to connect to coordinators and datanodes.   Again, no password should
#    be used to connect.  You have many options to do this, pg_hba.conf, pg_ident.conf and
#    others.  Pgxc_ctl provides a way to configure pg_hba.conf but not pg_ident.conf.   This
#    will be implemented in the later releases.
#
# 4) Gtm master and slave can have different port to listen, while coordinator and datanode
#    slave should be assigned the same port number as master.
#
# 5) Port nuber of a coordinator slave must be the same as its master.
#
# 6) Master and slave are connected using synchronous replication.  Asynchronous replication
#    have slight (almost none) chance to bring total cluster into inconsistent state.
#    This chance is very low and may be negligible.  Support of asynchronous replication
#    may be supported in the later release.
#
# 7) Each coordinator and datanode can have only one slave each.  Cascaded replication and
#    multiple slave are not supported in the current pgxc_ctl.
#
# 8) Killing nodes may end up with IPC resource leak, such as semafor and shared memory.
#    Only listening port (socket) will be cleaned with clean command.
#
# 9) Backup and restore are not supported in pgxc_ctl at present.   This is a big task and
#    may need considerable resource.
#
#========================================================================================
#
#
# pgxcInstallDir variable is needed if you invoke "deploy" command from pgxc_ctl utility.
# If don't you don't need this variable.
pgxcInstallDir=$HOME/pgxc
#---- OVERALL -----------------------------------------------------------------------------
#
pgxcOwner=$USER            # owner of the Postgres-XC databaseo cluster.  Here, we use this
                        # both as linus user and database user.  This must be
                        # the super user of each coordinator and datanode.
pgxcUser=$pgxcOwner        # OS user of Postgres-XC owner

tmpDir=/tmp                    # temporary dir used in XC servers
localTmpDir=$tmpDir            # temporary dir used here locally

configBackup=n                    # If you want config file backup, specify y to this value.
configBackupHost=pgxc-linker    # host to backup config file
configBackupDir=$HOME/pgxc        # Backup directory
configBackupFile=pgxc_ctl.bak    # Backup file name --> Need to synchronize when original changed.

#---- GTM ------------------------------------------------------------------------------------

# GTM is mandatory.  You must have at least (and only) one GTM master in your Postgres-XC cluster.
# If GTM crashes and you need to reconfigure it, you can do it by pgxc_update_gtm command to update
# GTM master with others.   Of course, we provide pgxc_remove_gtm command to remove it.  This command
# will not stop the current GTM.  It is up to the operator.


#---- GTM Master -----------------------------------------------

#---- Overall ----
gtmName=gtm
gtmMasterServer=pg1
gtmMasterPort=20001
gtmMasterDir=$HOME/pgxc/nodes/gtm

#---- Configuration ---
gtmExtraConfig=none            # Will be added gtm.conf for both Master and Slave (done at initilization only)
gtmMasterSpecificExtraConfig=none    # Will be added to Master's gtm.conf (done at initialization only)

#---- GTM Slave -----------------------------------------------

# Because GTM is a key component to maintain database consistency, you may want to configure GTM slave
# for backup.

#---- Overall ------
gtmSlave=n                    # Specify y if you configure GTM Slave.   Otherwise, GTM slave will not be configured and
                            # all the following variables will be reset.
#gtmSlaveName=gtmSlave
#gtmSlaveServer=node12        # value none means GTM slave is not available.  Give none if you don't configure GTM Slave.
#gtmSlavePort=20001            # Not used if you don't configure GTM slave.
#gtmSlaveDir=$HOME/pgxc/nodes/gtm    # Not used if you don't configure GTM slave.
# Please note that when you have GTM failover, then there will be no slave available until you configure the slave
# again. (pgxc_add_gtm_slave function will handle it)

#---- Configuration ----
gtmSlaveSpecificExtraConfig=none # Will be added to Slave's gtm.conf (done at initialization only)

#---- GTM Proxy -------------------------------------------------------------------------------------------------------
# GTM proxy will be selected based upon which server each component runs on.
# When fails over to the slave, the slave inherits its master's gtm proxy.  It should be
# reconfigured based upon the new location.
#
# To do so, slave should be restarted.   So pg_ctl promote -> (edit postgresql.conf and recovery.conf) -> pg_ctl restart
#
# You don't have to configure GTM Proxy if you dont' configure GTM slave or you are happy if every component connects
# to GTM Master directly.  If you configure GTL slave, you must configure GTM proxy too.

#---- Shortcuts ------
gtmProxyDir=$HOME/pgxc/nodes/gtm_pxy

#---- Overall -------
gtmProxy=y                # Specify y if you conifugre at least one GTM proxy.   You may not configure gtm proxies
                        # only when you dont' configure GTM slaves.
                        # If you specify this value not to y, the following parameters will be set to default empty values.
                        # If we find there're no valid Proxy server names (means, every servers are specified
                        # as none), then gtmProxy value will be set to "n" and all the entries will be set to
                        # empty values.
gtmProxyNames=(gtm_pxy1 gtm_pxy2)    # No used if it is not configured
gtmProxyServers=(pg1 pg2)            # Specify none if you dont' configure it.
gtmProxyPorts=(20081 20081)                # Not used if it is not configured.
gtmProxyDirs=($gtmProxyDir $gtmProxyDir)    # Not used if it is not configured.

#---- Configuration ----
gtmPxyExtraConfig=none        # Extra configuration parameter for gtm_proxy.  Coordinator section has an example.
gtmPxySpecificExtraConfig=(none none)

#---- Coordinators ----------------------------------------------------------------------------------------------------

#---- shortcuts ----------
coordMasterDir=$HOME/pgxc/nodes/coord
coordSlaveDir=$HOME/pgxc/nodes/coord_slave
coordArchLogDir=$HOME/pgxc/nodes/coord_archlog

#---- Overall ------------
coordNames=(coord1 coord2)        # Master and slave use the same name
coordPorts=(25432 25432)            # Master ports
poolerPorts=(20010 20010)            # Master pooler ports
coordPgHbaEntries=(192.168.1.0/24)                # Assumes that all the coordinator (master/slave) accepts
                                                # the same connection
                                                # This entry allows only $pgxcOwner to connect.
                                                # If you'd like to setup another connection, you should
                                                # supply these entries through files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust".   If you don't want
# such setups, specify the value () to this variable and suplly what you want using coordExtraPgHba
# and/or coordSpecificExtraPgHba variables.
#coordPgHbaEntries=(::1/128)    # Same as above but for IPv6 addresses

#---- Master -------------
coordMasterServers=(192.168.1.108 192.168.1.109)        # none means this master is not available
coordMasterDirs=($coordMasterDir $coordMasterDir)
coordMaxWALsernder=5    # max_wal_senders: needed to configure slave. If zero value is specified,
                        # it is expected to supply this parameter explicitly by external files
                        # specified in the following.    If you don't configure slaves, leave this value to zero.
coordMaxWALSenders=($coordMaxWALsernder $coordMaxWALsernder)
                        # max_wal_senders configuration for each coordinator.

#---- Slave -------------
coordSlave=y            # Specify y if you configure at least one coordiantor slave.  Otherwise, the following
                        # configuration parameters will be set to empty values.
                        # If no effective server names are found (that is, every servers are specified as none),
                        # then coordSlave value will be set to n and all the following values will be set to
                        # empty values.
coordSlaveSync=y        # Specify to connect with synchronized mode.
coordSlaveServers=(192.168.1.108 192.168.1.109)            # none means this slave is not available
coordSlavePorts=(35432 35432)            # Master ports
coordSlavePoolerPorts=(20020 20020)            # Master pooler ports
coordSlaveDirs=($coordSlaveDir $coordSlaveDir)
coordArchLogDirs=($coordArchLogDir $coordArchLogDir)

#---- Configuration files---
# Need these when you'd like setup specific non-default configuration 
# These files will go to corresponding files for the master.
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries 
# Or you may supply these files manually.
coordExtraConfig=coordExtraConfig    # Extra configuration file for coordinators.  
                        # This file will be added to all the coordinators'
                        # postgresql.conf
# Pleae note that the following sets up minimum parameters which you may want to change.
# You can put your postgresql.conf lines here.
cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
EOF

# Additional Configuration file for specific coordinator master.
# You can define each setting by similar means as above.
coordSpecificExtraConfig=(none none none none)
coordExtraPgHba=none    # Extra entry for pg_hba.conf.  This file will be added to all the coordinators' pg_hba.conf
coordSpecificExtraPgHba=(none none none none)

#----- Additional Slaves -----
#
# Please note that this section is just a suggestion how we extend the configuration for
# multiple and cascaded replication.   They're not used in the current version.
#
coordAdditionalSlaves=n        # Additional slave can be specified as follows: where you
coordAdditionalSlaveSet=(cad1)        # Each specifies set of slaves.   This case, two set of slaves are
                                            # configured
cad1_Sync=n                  # All the slaves at "cad1" are connected with asynchronous mode.
                            # If not, specify "y"
                            # The following lines specifies detailed configuration for each
                            # slave tag, cad1.  You can define cad2 similarly.
cad1_Servers=(node08 node09 node06 node07)    # Hosts
cad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
cad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
cad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
cad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)


#---- Datanodes -------------------------------------------------------------------------------------------------------

#---- Shortcuts --------------
datanodeMasterDir=$HOME/pgxc/nodes/dn_master
datanodeSlaveDir=$HOME/pgxc/nodes/dn_slave
datanodeArchLogDir=$HOME/pgxc/nodes/datanode_archlog

#---- Overall ---------------
#primaryDatanode=datanode1                # Primary Node.
# At present, xc has a priblem to issue ALTER NODE against the primay node.  Until it is fixed, the test will be done
# without this feature.
primaryDatanode=datanode1                # Primary Node.
datanodeNames=(datanode1 datanode2)
datanodePorts=(20008 20008)    # Master ports
datanodePoolerPorts=(20012 20012)    # Master pooler ports
datanodePgHbaEntries=(192.168.1.0/24)    # Assumes that all the coordinator (master/slave) accepts
                                        # the same connection
                                        # This list sets up pg_hba.conf for $pgxcOwner user.
                                        # If you'd like to setup other entries, supply them
                                        # through extra configuration files specified below.
# Note: The above parameter is extracted as "host all all 0.0.0.0/0 trust".   If you don't want
# such setups, specify the value () to this variable and suplly what you want using datanodeExtraPgHba
# and/or datanodeSpecificExtraPgHba variables.
#datanodePgHbaEntries=(::1/128)    # Same as above but for IPv6 addresses

#---- Master ----------------
datanodeMasterServers=(192.168.1.108 192.168.1.109)    # none means this master is not available.
                                                    # This means that there should be the master but is down.
                                                    # The cluster is not operational until the master is
                                                    # recovered and ready to run.    
datanodeMasterDirs=($datanodeMasterDir $datanodeMasterDir)
datanodeMaxWalSender=5                                # max_wal_senders: needed to configure slave. If zero value is 
                                                    # specified, it is expected this parameter is explicitly supplied
                                                    # by external configuration files.
                                                    # If you don't configure slaves, leave this value zero.
datanodeMaxWALSenders=($datanodeMaxWalSender $datanodeMaxWalSender)
                        # max_wal_senders configuration for each datanode

#---- Slave -----------------
datanodeSlave=y            # Specify y if you configure at least one coordiantor slave.  Otherwise, the following
                        # configuration parameters will be set to empty values.
                        # If no effective server names are found (that is, every servers are specified as none),
                        # then datanodeSlave value will be set to n and all the following values will be set to
                        # empty values.
datanodeSlaveServers=(192.168.1.108 192.168.1.109)    # value none means this slave is not available
datanodeSlavePorts=(20009 20009)    # value none means this slave is not available
datanodeSlavePoolerPorts=(20013 20013)    # value none means this slave is not available
datanodeSlaveSync=y        # If datanode slave is connected in synchronized mode
datanodeSlaveDirs=($datanodeSlaveDir $datanodeSlaveDir)
datanodeArchLogDirs=( $datanodeArchLogDir $datanodeArchLogDir)

# ---- Configuration files ---
# You may supply your bash script to setup extra config lines and extra pg_hba.conf entries here.
# These files will go to corresponding files for the master.
# Or you may supply these files manually.
datanodeExtraConfig=none    # Extra configuration file for datanodes.  This file will be added to all the 
                            # datanodes' postgresql.conf
datanodeSpecificExtraConfig=(none none)
datanodeExtraPgHba=none        # Extra entry for pg_hba.conf.  This file will be added to all the datanodes' postgresql.conf
datanodeSpecificExtraPgHba=(none none)

#----- Additional Slaves -----
datanodeAdditionalSlaves=n    # Additional slave can be specified as follows: where you
# datanodeAdditionalSlaveSet=(dad1 dad2)        # Each specifies set of slaves.   This case, two set of slaves are
                                            # configured
# dad1_Sync=n                  # All the slaves at "cad1" are connected with asynchronous mode.
                            # If not, specify "y"
                            # The following lines specifies detailed configuration for each
                            # slave tag, cad1.  You can define cad2 similarly.
# dad1_Servers=(node08 node09 node06 node07)    # Hosts
# dad1_dir=$HOME/pgxc/nodes/coord_slave_cad1
# dad1_Dirs=($cad1_dir $cad1_dir $cad1_dir $cad1_dir)
# dad1_ArchLogDir=$HOME/pgxc/nodes/coord_archlog_cad1
# dad1_ArchLogDirs=($cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir $cad1_ArchLogDir)

#---- WAL archives -------------------------------------------------------------------------------------------------
walArchive=n    # If you'd like to configure WAL archive, edit this section.
                # Pgxc_ctl assumes that if you configure WAL archive, you configure it
                # for all the coordinators and datanodes.
                # Default is "no".   Please specify "y" here to turn it on.
#
#        End of Configuration Section
#
#==========================================================================================================================

#========================================================================================================================
# The following is for extension.  Just demonstrate how to write such extension.  There's no code
# which takes care of them so please ignore the following lines.  They are simply ignored by pgxc_ctl.
# No side effects.
#=============<< Beginning of future extension demonistration >> ========================================================
# You can setup more than one backup set for various purposes, such as disaster recovery.
walArchiveSet=(war1 war2)
war1_source=(master)    # you can specify master, slave or ano other additional slaves as a source of WAL archive.
                    # Default is the master
wal1_source=(slave)
wal1_source=(additiona_coordinator_slave_set additional_datanode_slave_set)
war1_host=node10    # All the nodes are backed up at the same host for a given archive set
war1_backupdir=$HOME/pgxc/backup_war1
wal2_source=(master)
war2_host=node11
war2_backupdir=$HOME/pgxc/backup_war2
#=============<< End of future extension demonistration >> ========================================================

 

 

 

Modify the values highlighted above (shown in red in the original post) to match your environment: the GTM, GTM proxy, coordinator, and datanode server names, ports, directories, and the pg_hba network entries.

 

11.3 Create the directories
Every uncommented directory referenced in pgxc_ctl.conf must exist, so create the directories that match each server's role.
### Run the following on pg1 only (these directories also appear to be created automatically from the configuration file):
su - postgres
$ cd ~
$ mkdir -p /home/postgres/pgxc/nodes/gtm

### Run the following on pg2 and pg3 (these directories also appear to be created automatically from the configuration file):
$ mkdir -p /home/postgres/pgxc/nodes/gtm_pxy
$ mkdir -p /home/postgres/pgxc/nodes/coord
$ mkdir -p /home/postgres/pgxc/nodes/coord_slave
$ mkdir -p /home/postgres/pgxc/nodes/coord_archlog
$ mkdir -p /home/postgres/pgxc/nodes/dn_master
$ mkdir -p /home/postgres/pgxc/nodes/dn_slave
$ mkdir -p /home/postgres/pgxc/nodes/datanode_archlog

 

12. Initialize the Postgres-XL cluster
Run on pg1:
$ cd /opt/pgxl/bin/
$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf init all

[postgres@pg1 bin]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf init all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
ERROR: Number of elements in GTM Proxy definitions are different gtmProxyNames and gtmProxyPorts.  Check your configuration
ERROR: Found fundamental configuration error.

Cause of the error:
gtmProxyPorts=(20081 20081)
The number of port entries must match the number of GTM proxies (gtmProxyNames). The template has 4 entries by default, but this cluster has only 2 GTM proxies, so 2 entries had to be removed.

 

Running the initialization again then reported port conflicts:

[postgres@pg1 bin]$

ERROR: Conflict in port/pooler in gtmName and gtmProxyNames variable.
ERROR: Conflict in port/pooler in coordNames and coordNames variable.
ERROR: Conflict in port/pooler in coordNames and coordNames variable.
ERROR: Conflict in port/pooler in datanodeNames and datanodeNames variable.
ERROR: Conflict in port/pooler in datanodeNames and datanodeNames variable.
ERROR: Conflict in port/pooler in datanodeNames and datanodeNames variable.
ERROR: Conflict in port/pooler in datanodeNames and datanodeNames variable.
ERROR: Found conflicts among resources.  Exiting.

Cause of the error: the default port assignments conflict with each other; changing the default ports resolves it.
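For reference, the non-conflicting port assignments used in the configuration above are (every component on a given host listens on its own port):
gtmMasterPort=20001
gtmProxyPorts=(20081 20081)
coordPorts=(25432 25432)
poolerPorts=(20010 20010)
coordSlavePorts=(35432 35432)
coordSlavePoolerPorts=(20020 20020)
datanodePorts=(20008 20008)
datanodePoolerPorts=(20012 20012)
datanodeSlavePorts=(20009 20009)
datanodeSlavePoolerPorts=(20013 20013)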

 

Run the initialization again after the changes:

 

[postgres@pg1 bin]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf init all
/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /home/postgres/pgxc_ctl
Initialize GTM master
The authenticity of host 'pg1 (192.168.1.104)' can't be established.
ECDSA key fingerprint is SHA256:zTU++RgCLPdn9EQXBLJkNnadDlbBgc6kMHelTEkri7I.
ECDSA key fingerprint is MD5:41:f1:85:ed:2a:98:67:54:fd:6f:a7:8d:79:66:42:f9.
Are you sure you want to continue connecting (yes/no)? yes
The files belonging to this GTM system will be owned by user "postgres".
This user must also own the server process.


fixing permissions on existing directory /home/postgres/pgxc/nodes/gtm ... ok
creating configuration files ... ok
creating control file ... ok
... (part of the output omitted) ...
Initialize all the datanode slaves.
Initialize datanode slave datanode1
Initialize datanode slave datanode2
Starting all the datanode slaves.
Starting datanode slave datanode1.
Starting datanode slave datanode2.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory "pg_log".
Done.
ALTER NODE coord1 WITH (HOST='192.168.1.108', PORT=25432);
ALTER NODE
CREATE NODE coord2 WITH (TYPE='coordinator', HOST='192.168.1.109', PORT=25432);
CREATE NODE
CREATE NODE datanode1 WITH (TYPE='datanode', HOST='192.168.1.108', PORT=20008, PRIMARY, PREFERRED);
CREATE NODE
CREATE NODE datanode2 WITH (TYPE='datanode', HOST='192.168.1.109', PORT=20008);
CREATE NODE
SELECT pgxc_pool_reload();
 pgxc_pool_reload 
------------------
 t
(1 row)

CREATE NODE coord1 WITH (TYPE='coordinator', HOST='192.168.1.108', PORT=25432);
CREATE NODE
ALTER NODE coord2 WITH (HOST='192.168.1.109', PORT=25432);
ALTER NODE
CREATE NODE datanode1 WITH (TYPE='datanode', HOST='192.168.1.108', PORT=20008, PRIMARY);
CREATE NODE
CREATE NODE datanode2 WITH (TYPE='datanode', HOST='192.168.1.109', PORT=20008, PREFERRED);
CREATE NODE
SELECT pgxc_pool_reload();
 pgxc_pool_reload 
------------------
 t
(1 row)

Done.
EXECUTE DIRECT ON (datanode1) 'CREATE NODE coord1 WITH (TYPE=''coordinator'', HOST=''192.168.1.108'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'CREATE NODE coord2 WITH (TYPE=''coordinator'', HOST=''192.168.1.109'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'ALTER NODE datanode1 WITH (TYPE=''datanode'', HOST=''192.168.1.108'', PORT=20008, PRIMARY, PREFERRED)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'CREATE NODE datanode2 WITH (TYPE=''datanode'', HOST=''192.168.1.109'', PORT=20008, PREFERRED)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode1) 'SELECT pgxc_pool_reload()';
 pgxc_pool_reload 
------------------
 t
(1 row)

EXECUTE DIRECT ON (datanode2) 'CREATE NODE coord1 WITH (TYPE=''coordinator'', HOST=''192.168.1.108'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'CREATE NODE coord2 WITH (TYPE=''coordinator'', HOST=''192.168.1.109'', PORT=25432)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'CREATE NODE datanode1 WITH (TYPE=''datanode'', HOST=''192.168.1.108'', PORT=20008, PRIMARY, PREFERRED)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'ALTER NODE datanode2 WITH (TYPE=''datanode'', HOST=''192.168.1.109'', PORT=20008, PREFERRED)';
EXECUTE DIRECT
EXECUTE DIRECT ON (datanode2) 'SELECT pgxc_pool_reload()';
 pgxc_pool_reload 
------------------
 t
(1 row)

Done.

 

 

 

 

 

13. The installation is complete; you can now log in with psql on pg2 and pg3
On 192.168.1.108:
Log in through the coordinator:
psql -p 25432    ## connection port, from coordPorts=(25432 25432)
Check the node information:

postgres=# select * from pgxc_node;

 node_name | node_type | node_port |   node_host   | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+---------------+----------------+------------------+-------------
 coord1    | C         |     25432 | 192.168.1.108 | f              | f                |  1885696643
 coord2    | C         |     25432 | 192.168.1.109 | f              | f                | -1197102633
 datanode1 | D         |     20008 | 192.168.1.108 | t              | t                |   888802358
 datanode2 | D         |     20008 | 192.168.1.109 | f              | f                |  -905831925
(4 rows)


14. Start and stop the cluster
Run on pg1:
su - postgres
[postgres@pg1 pgxc_ctl]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf start all
[postgres@pg1 pgxc_ctl]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf stop all
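To check whether each component is running, pgxc_ctl also provides a monitor command (shown here as a convenience; verify its availability in your release):
[postgres@pg1 pgxc_ctl]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf monitor all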


15. Change the postgres account password
You only need to connect through one coordinator (on pg2 here):
[postgres@pg2 dn_master]$ psql -p 25432
psql (PGXL 9.5r1.6, based on PG 9.5.8 (Postgres-XL 9.5r1.6))
Type "help" for help.

postgres=# ALTER USER postgres WITH PASSWORD 'postgres';
ALTER ROLE

 

 

################################ Creating tables ################################
1. Create a replication table
Operate on one of the nodes; here this is done on 192.168.1.108.
Log in through the coordinator:
psql -p 25432    ## connection port, from coordPorts=(25432 25432)

 

postgres=# create database db_test;
CREATE DATABASE


postgres=# \c db_test;
You are now connected to database "db_test" as user "postgres".
db_test=# create table tb_repl (id int, value text) distribute by replication;
CREATE TABLE

Insert data:
insert into tb_repl(id, value)
select n, md5(random()::text) from generate_series(1, 1500000) n;

Check the data distribution:
db_test=# select xc_node_id, count(*) from tb_repl group by xc_node_id;
 xc_node_id |  count
------------+---------
  888802358 | 1500000
(1 row)

Check the data on the other node (192.168.1.109):
db_test=# select xc_node_id, count(*) from tb_repl group by xc_node_id;
 xc_node_id |  count
------------+---------
 -905831925 | 1500000
(1 row)

A replication table stores the same data on every node.


2. Create a distributed (sharded) table
db_test=# create table tb_dist (id int, value text) distribute by hash(id);
CREATE TABLE
db_test=# insert into tb_dist(id, value)
db_test-# select n, md5(random()::text) from generate_series(1, 1500000) n;
INSERT 0 1500000

Check the data distribution:
db_test=# select xc_node_id, count(*) from tb_dist group by xc_node_id;
 xc_node_id | count
------------+--------
  888802358 | 751007
 -905831925 | 748993
(2 rows)

A distributed table is a sharded table: rows are assigned to different datanodes according to the distribution rule, and the complete data set is queried through a coordinator.
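As a quick check, a count run through the coordinator still returns the complete table, even though the rows live on different datanodes (expected result given the insert above):
select count(*) from tb_dist;    -- returns 1500000 when run through the coordinator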

3. What distribution does a table created with the defaults use?
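The original post does not show how tb_test was created; judging from the \d+ output below, it was presumably created without a DISTRIBUTE BY clause, for example:
create table tb_test (id int, name varchar(32));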

db_test=# \d+ tb_test;
                            Table "public.tb_test"
 Column |         Type          | Modifiers | Storage  | Stats target | Description
--------+-----------------------+-----------+----------+--------------+-------------
 id     | integer               |           | plain    |              |
 name   | character varying(32) |           | extended |              |
Distribute By: HASH(id)
Location Nodes: ALL DATANODES

db_test=# create table tb_test01(name1 varchar(32),name2 varchar(32));
CREATE TABLE
db_test=# \d+ tb_test01;
                           Table "public.tb_test01"
 Column |         Type          | Modifiers | Storage  | Stats target | Description
--------+-----------------------+-----------+----------+--------------+-------------
 name1  | character varying(32) |           | extended |              |
 name2  | character varying(32) |           | extended |              |
Distribute By: HASH(name1)
Location Nodes: ALL DATANODES

As you can see, a table created with the defaults is a distributed (hash) table, sharded on the first column.
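If the first column is not a suitable distribution key, specify one explicitly using the same syntax shown earlier (tb_test02 is just an illustrative name):
create table tb_test02 (name1 varchar(32), name2 varchar(32)) distribute by hash(name2);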

 

 

####################### Changing database parameters ###########################
On node 192.168.1.108, check the current value of the parameter:
db_test=# show shared_buffers;
 shared_buffers
----------------
 128MB
(1 row)

Modify the parameter on the control machine (pg1):
su - postgres
vi /home/postgres/pgxc_ctl/pgxc_ctl.conf

cat > $coordExtraConfig <<EOF
#================================================
# Added to all the coordinator postgresql.conf
# Original: $coordExtraConfig
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 100
shared_buffers = 1024MB
EOF


Restart the cluster:
[postgres@pg1 pgxc_ctl]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf stop all
[postgres@pg1 pgxc_ctl]$ pgxc_ctl -c /home/postgres/pgxc_ctl/pgxc_ctl.conf start all

Changing the parameter this way does not take effect (the coordExtraConfig file appears to be applied only when the nodes are initialized, not on a restart).

 

Modify the parameters directly on the data node instead.

Here this is done on 192.168.1.108:

[postgres@pg2 pgxc]$ pwd
/home/postgres/pgxc
[postgres@pg2 pgxc]$ find ./ -name postgresql.conf
./nodes/coord/postgresql.conf
./nodes/coord_slave/postgresql.conf
./nodes/dn_master/postgresql.conf
./nodes/dn_slave/postgresql.conf

 

vi /home/postgres/pgxc/nodes/coord/postgresql.conf         -- editing this file takes effect

postgres=# show shared_buffers;
shared_buffers
----------------
1GB
(1 row)

postgres=# show max_connections;
max_connections
-----------------
100
(1 row)

 

vi /home/postgres/pgxc/nodes/coord_slave/postgresql.conf   -- does not take effect
vi /home/postgres/pgxc/nodes/dn_master/postgresql.conf     -- does not take effect
vi /home/postgres/pgxc/nodes/dn_slave/postgresql.conf      -- does not take effect

 

Parameters set with ALTER SYSTEM are written to postgresql.auto.conf on the coordinator master node:
postgres=# ALTER SYSTEM SET shared_buffers = '1GB';
ALTER SYSTEM

[postgres@pg2 coord]$ more postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by ALTER SYSTEM command.
shared_buffers = '1GB'

[postgres@pg2 coord]$ pwd
/home/postgres/pgxc/nodes/coord

Restore the default value (shared_buffers changes only take effect after a restart):
ALTER SYSTEM SET shared_buffers = default;

 


Notes:
1. There is no need to edit postgresql.conf or pg_hba.conf manually; clients connect through the port defined for the coordinator.
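For example, from any machine in the 192.168.1.0/24 network (permitted by the coordPgHbaEntries/datanodePgHbaEntries settings above), a client can connect directly to a coordinator:
psql -h 192.168.1.108 -p 25432 -U postgres -d db_test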

 

 

#################### Postgres-XL does not support extensions ##########################
Operate on every data node; here this is done on 192.168.1.108 and 192.168.1.109 (this test was against postgres-xl-10r1.1).
[root@pg2 contrib]# cd /soft/postgres-xl-10r1.1/contrib    ## go to the contrib directory of the source tree
[root@pg2 contrib]#make
[root@pg2 contrib]#make install postgres_fdw
[root@pg2 contrib]#make install dblink

 


[postgres@pg2 extension]$ psql -p 25432
psql (PGXL 10r1.1, based on PG 10.6 (Postgres-XL 10r1.1))
Type "help" for help.
postgres=# create extension dblink;
ERROR: Postgres-XL does not support FOREIGN DATA WRAPPER yet
DETAIL: The feature is not currently supported

 

postgres=# create extension postgres_fdw;
ERROR: Postgres-XL does not support FOREIGN DATA WRAPPER yet
DETAIL: The feature is not currently supported

 

 

-- The End --

 
