MongoDB 4.2 Sharding and Replica Set Cluster Operations Manual

1. Overview
1.1 Environment
Node 1:
Red Hat Enterprise Linux Server release 7.6 (Maipo)
192.168.161.71

mongos(30000)
config(27017)
shard1 primary (40001)
shard2 arbiter (40002)
shard3 secondary (40003)

Node 2:
Red Hat Enterprise Linux Server release 7.6 (Maipo)
192.168.161.72

mongos(30000)
config(27017)
shard1 secondary (40001)
shard2 primary (40002)
shard3 arbiter (40003)

Node 3:
Red Hat Enterprise Linux Server release 7.6 (Maipo)
192.168.161.73

mongos(30000)
config(27017)
shard1 arbiter (40001)
shard2 secondary (40002)
shard3 primary (40003)

Three servers, each running five instances: 1 mongos, 1 config server, and 3 shard servers.

2. Pre-Installation Preparation
2.1 Download the MongoDB packages
http://repo.mongodb.org/yum/redhat/7Server/mongodb-org/4.2/x86_64/RPMS/
The following packages are needed:
mongodb-org-4.2.9-1.el7.x86_64.rpm
mongodb-org-mongos-4.2.9-1.el7.x86_64.rpm
mongodb-org-server-4.2.9-1.el7.x86_64.rpm
mongodb-org-shell-4.2.9-1.el7.x86_64.rpm
mongodb-org-tools-4.2.9-1.el7.x86_64.rpm

2.2 Disable the firewall
Check the firewall status:
[root@mongodb1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-10-16 19:22:44 CST; 1h 26min ago
Docs: man:firewalld(1)
Main PID: 5034 (firewalld)
Tasks: 2
CGroup: /system.slice/firewalld.service
└─5034 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Oct 16 19:22:43 mongodb1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Oct 16 19:22:44 mongodb1 systemd[1]: Started firewalld - dynamic firewall daemon.

Stop the firewall:
[root@mongodb1 ~]# systemctl stop firewalld

Disable the firewall so that it does not start at boot:
[root@mongodb1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@mongodb1 ~]#

2.3 Edit the /etc/hosts file
Add the following entries on all three servers:
192.168.161.71 mongodb1
192.168.161.72 mongodb2
192.168.161.73 mongodb3
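
An optional sanity check (a small sketch) confirms from each server that the new hostnames resolve and the other nodes are reachable:

for h in mongodb1 mongodb2 mongodb3; do
    getent hosts $h                       # should print the IP from /etc/hosts
    ping -c 1 -W 1 $h > /dev/null && echo "$h reachable"
done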

2.4 Copy the RPM files to the other two servers with scp
[root@mongodb1 soft]# scp * root@mongodb2:/mongodb/soft/
The authenticity of host 'mongodb2 (192.168.161.72)' can't be established.
ECDSA key fingerprint is SHA256:IOuaATF06D5qQRqr4UKjreexzRZR/Znyl6EPRCVgrd0.
ECDSA key fingerprint is MD5:33:d5:c3:88:d9:18:45:ea:b6:51:d3:51:44:25:4a:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'mongodb2,192.168.161.72' (ECDSA) to the list of known hosts.
root@mongodb2's password:
mongodb-cli-1.7.0.x86_64.rpm 100% 4353KB 34.1MB/s 00:00
mongodb-org-4.2.9-1.el7.x86_64.rpm 100% 6048 5.1MB/s 00:00
mongodb-org-mongos-4.2.9-1.el7.x86_64.rpm 100% 15MB 39.3MB/s 00:00
mongodb-org-server-4.2.9-1.el7.x86_64.rpm 100% 25MB 39.5MB/s 00:00
mongodb-org-shell-4.2.9-1.el7.x86_64.rpm 100% 17MB 55.8MB/s 00:00
mongodb-org-tools-4.2.9-1.el7.x86_64.rpm 100% 62MB 35.7MB/s 00:01
[root@mongodb1 soft]# scp * root@mongodb3:/mongodb/soft/
The authenticity of host 'mongodb3 (192.168.161.73)' can't be established.
ECDSA key fingerprint is SHA256:TzG2Bxgce9xYN6B7UuSV9sqwvpb3U1A/BU2MHC960Ww.
ECDSA key fingerprint is MD5:49:f9:bc:8e:de:ba:48:9b:7b:62:66:9b:c1:a1:f2:12.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'mongodb3,192.168.161.73' (ECDSA) to the list of known hosts.
root@mongodb3's password:
mongodb-cli-1.7.0.x86_64.rpm 100% 4353KB 31.5MB/s 00:00
mongodb-org-4.2.9-1.el7.x86_64.rpm 100% 6048 497.3KB/s 00:00
mongodb-org-mongos-4.2.9-1.el7.x86_64.rpm 100% 15MB 36.3MB/s 00:00
mongodb-org-server-4.2.9-1.el7.x86_64.rpm 100% 25MB 54.6MB/s 00:00
mongodb-org-shell-4.2.9-1.el7.x86_64.rpm 100% 17MB 60.6MB/s 00:00
mongodb-org-tools-4.2.9-1.el7.x86_64.rpm 100% 62MB 34.5MB/s 00:01
[root@mongodb1 soft]# ll

3. MongoDB Installation and Configuration

3.1 Install the MongoDB RPM packages
Run the following installation steps on all three servers.
Install the packages in this order:
rpm -ivh mongodb-org-mongos-4.2.9-1.el7.x86_64.rpm
rpm -ivh mongodb-org-server-4.2.9-1.el7.x86_64.rpm
rpm -ivh mongodb-org-shell-4.2.9-1.el7.x86_64.rpm
rpm -ivh mongodb-org-tools-4.2.9-1.el7.x86_64.rpm
rpm -ivh mongodb-org-4.2.9-1.el7.x86_64.rpm

Example:
[root@mongodb1 soft]# rpm -ivh mongodb-org-mongos-4.2.9-1.el7.x86_64.rpm
warning: mongodb-org-mongos-4.2.9-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 058f8b6b: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:mongodb-org-mongos-4.2.9-1.el7 ################################# [100%]
[root@mongodb1 soft]# rpm -ivh mongodb-org-server-4.2.9-1.el7.x86_64.rpm
warning: mongodb-org-server-4.2.9-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 058f8b6b: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:mongodb-org-server-4.2.9-1.el7 ################################# [100%]
Created symlink from /etc/systemd/system/multi-user.target.wants/mongod.service to /usr/lib/systemd/system/mongod.service.
[root@mongodb1 soft]# rpm -ivh mongodb-org-shell-4.2.9-1.el7.x86_64.rpm
warning: mongodb-org-shell-4.2.9-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 058f8b6b: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:mongodb-org-shell-4.2.9-1.el7 ################################# [100%]
[root@mongodb1 soft]# rpm -ivh mongodb-org-tools-4.2.9-1.el7.x86_64.rpm
warning: mongodb-org-tools-4.2.9-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 058f8b6b: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:mongodb-org-tools-4.2.9-1.el7 ################################# [100%]
[root@mongodb1 soft]# rpm -ivh mongodb-org-4.2.9-1.el7.x86_64.rpm
warning: mongodb-org-4.2.9-1.el7.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 058f8b6b: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:mongodb-org-4.2.9-1.el7 ################################# [100%]

Verify the installed packages:
[root@mongodb1 ~]# rpm -qa |grep mongodb
mongodb-org-tools-4.2.9-1.el7.x86_64
mongodb-org-mongos-4.2.9-1.el7.x86_64
mongodb-org-shell-4.2.9-1.el7.x86_64
mongodb-org-server-4.2.9-1.el7.x86_64
mongodb-org-4.2.9-1.el7.x86_64
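
If preferred, yum can install all of the packages in one step and resolve any missing system dependencies automatically (a sketch, run from the directory holding the RPMs; the install order then no longer matters):

cd /mongodb/soft
yum -y localinstall mongodb-org-*.rpm      # add --nogpgcheck if signature checking complains, as with the NOKEY warnings above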

3.2 Create the data, log, and configuration directories
Run the following on all three servers:
mkdir -p /mongodb/{data,logs,apps}
mkdir -p /mongodb/data/shard{1,2,3}
mkdir -p /mongodb/data/config
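
After creating the directories locally on node 1, an optional loop over SSH (a sketch, assuming root SSH access to mongodb2 and mongodb3 as already used for scp) creates the same layout on the other two servers:

for h in mongodb2 mongodb3; do
    ssh root@$h "mkdir -p /mongodb/{data,logs,apps} /mongodb/data/{shard1,shard2,shard3,config}"
done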

3.3 Configure the config servers
Configuration file (identical on all three nodes):
vi /mongodb/apps/config.conf
dbpath=/mongodb/data/config
logpath=/mongodb/logs/mongodb-config.log
port=27017
fork=true
journal=true
maxConns=500
logappend=true
pidfilepath=/tmp/mongo-config.pid
directoryperdb=true
replSet=tyconfig
configsvr=true
bind_ip=0.0.0.0

Copy the file to the other two servers:
[root@mongodb1 apps]#
[root@mongodb1 apps]# scp * root@mongodb2:/mongodb/apps/
root@mongodb2's password:
config.conf 100% 233 159.6KB/s 00:00
[root@mongodb1 apps]# scp * root@mongodb3:/mongodb/apps/
root@mongodb3's password:
config.conf 100% 233 219.2KB/s 00:00
[root@mongodb1 apps]#
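
The file above uses the legacy ini-style options, which mongod 4.2 still accepts. For reference only, a roughly equivalent YAML configuration (the format used by current MongoDB documentation) could be written as below; this mapping is an illustrative sketch, and the rest of this manual keeps the ini files:

cat > /mongodb/apps/config.yaml <<'EOF'
systemLog:
  destination: file
  path: /mongodb/logs/mongodb-config.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /tmp/mongo-config.pid
net:
  port: 27017
  bindIp: 0.0.0.0
  maxIncomingConnections: 500
storage:
  dbPath: /mongodb/data/config
  directoryPerDB: true
  journal:
    enabled: true
replication:
  replSetName: tyconfig
sharding:
  clusterRole: configsvr
EOF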

3.4 Start the config server instances
Run the following on all three nodes:
mongod --config /mongodb/apps/config.conf

Node 1
[root@mongodb1 apps]# mongod --config /mongodb/apps/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24459
child process started successfully, parent exiting
[root@mongodb1 apps]#

Node 2
[root@mongodb2 apps]# mongod --config /mongodb/apps/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24450
child process started successfully, parent exiting

Node 3
[root@mongodb3 data]# mongod --config /mongodb/apps/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24399
child process started successfully, parent exiting
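
A quick sanity check on each node (a sketch) confirms the config server is listening on port 27017 and started cleanly:

ss -lntp | grep 27017                       # expect a mongod process bound to 0.0.0.0:27017
tail -n 3 /mongodb/logs/mongodb-config.log  # should end with "waiting for connections"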

3.5 Create the config server replica set
Connect to one of the instances:
mongo 192.168.161.71:27017

Define the replica set configuration:
config={_id:"tyconfig",members:[
{_id:0,host:"192.168.161.71:27017"},
{_id:1,host:"192.168.161.72:27017"},
{_id:2,host:"192.168.161.73:27017"},
]}
The name tyconfig must match the replSet value in the config server configuration file.

Initialize the replica set:
rs.initiate(config)

Check the status:
rs.status()

Example session:
[root@mongodb1 apps]# mongo 192.168.161.71:27017
MongoDB shell version v4.2.9
connecting to: mongodb://192.168.161.71:27017/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("77bb45b0-72e8-449f-829a-c93d7edbfcd6") }
MongoDB server version: 4.2.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
Server has startup warnings:
2020-10-17T21:44:25.778+0800 I CONTROL [initandlisten]
2020-10-17T21:44:25.778+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-10-17T21:44:25.778+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2020-10-17T21:44:25.778+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-10-17T21:44:25.779+0800 I CONTROL [initandlisten]
2020-10-17T21:44:25.779+0800 I CONTROL [initandlisten]
2020-10-17T21:44:25.779+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-10-17T21:44:25.779+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-10-17T21:44:25.779+0800 I CONTROL [initandlisten]

Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()

config={_id:"tyconfig",members:[
... {_id:0,host:"192.168.161.71:27017"},
... {_id:1,host:"192.168.161.72:27017"},
... {_id:2,host:"192.168.161.73:27017"},
... ]}
{
"_id" : "tyconfig",
"members" : [
{
"_id" : 0,
"host" : "192.168.161.71:27017"
},
{
"_id" : 1,
"host" : "192.168.161.72:27017"
},
{
"_id" : 2,
"host" : "192.168.161.73:27017"
}
]
}
rs.initiate(config)
{
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(1602942428, 1),
"electionId" : ObjectId("000000000000000000000000")
},
"lastCommittedOpTime" : Timestamp(0, 0),
"$clusterTime" : {
"clusterTime" : Timestamp(1602942428, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602942428, 1)
}
tyconfig:SECONDARY> rs.status()
{
"set" : "tyconfig",
"date" : ISODate("2020-10-17T13:47:23.539Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1602942439, 15),
"t" : NumberLong(1)
},
"lastCommittedWallTime" : ISODate("2020-10-17T13:47:19.691Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1602942439, 15),
"t" : NumberLong(1)
},
"readConcernMajorityWallTime" : ISODate("2020-10-17T13:47:19.691Z"),
"appliedOpTime" : {
"ts" : Timestamp(1602942440, 11),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1602942440, 11),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-10-17T13:47:20.309Z"),
"lastDurableWallTime" : ISODate("2020-10-17T13:47:20.309Z")
},
"lastStableRecoveryTimestamp" : Timestamp(1602942439, 2),
"lastStableCheckpointTimestamp" : Timestamp(1602942439, 2),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-10-17T13:47:18.973Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1602942428, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 2,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"numCatchUpOps" : NumberLong(0),
"newTermStartDate" : ISODate("2020-10-17T13:47:19.093Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-10-17T13:47:20.541Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.161.71:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 180,
"optime" : {
"ts" : Timestamp(1602942440, 11),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-10-17T13:47:20Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1602942438, 1),
"electionDate" : ISODate("2020-10-17T13:47:18Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.161.72:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 15,
"optime" : {
"ts" : Timestamp(1602942439, 9),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1602942439, 9),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-10-17T13:47:19Z"),
"optimeDurableDate" : ISODate("2020-10-17T13:47:19Z"),
"lastHeartbeat" : ISODate("2020-10-17T13:47:22.983Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T13:47:22.672Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.161.71:27017",
"syncSourceHost" : "192.168.161.71:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.161.73:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 15,
"optime" : {
"ts" : Timestamp(1602942439, 16),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1602942439, 15),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-10-17T13:47:19Z"),
"optimeDurableDate" : ISODate("2020-10-17T13:47:19Z"),
"lastHeartbeat" : ISODate("2020-10-17T13:47:23.379Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T13:47:21.605Z"),
"pingMs" : NumberLong(3),
"lastHeartbeatMessage" : "",
"syncingTo" : "192.168.161.71:27017",
"syncSourceHost" : "192.168.161.71:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(1602942428, 1),
"electionId" : ObjectId("7fffffff0000000000000001")
},
"lastCommittedOpTime" : Timestamp(1602942439, 15),
"$clusterTime" : {
"clusterTime" : Timestamp(1602942440, 11),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602942440, 11)
}
tyconfig:PRIMARY>
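
To re-check the member states later without an interactive shell, a one-liner such as the following (a sketch using mongo --eval) prints each member and its role; expect one PRIMARY and two SECONDARY entries:

mongo 192.168.161.71:27017 --quiet --eval 'rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })'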

3.6 Deploy the shard1 replica set
vi /mongodb/apps/shard1.conf

dbpath=/mongodb/data/shard1
logpath=/mongodb/logs/mongodb-shard1.log
port=40001
fork=true
journal=true
maxConns=500
logappend=true
directoryperdb=true
pidfilepath=/tmp/mongo_40001.pid
replSet=tyshard1
bind_ip=0.0.0.0
shardsvr=true
Copy this file to the other two servers:
scp shard1.conf root@mongodb2:/mongodb/apps/
scp shard1.conf root@mongodb3:/mongodb/apps/

Start the shard1 instance on all three servers:
mongod --config /mongodb/apps/shard1.conf

Connect to one of the instances:
mongo 192.168.161.71:40001

Define the replica set configuration:
use admin

config={_id:"tyshard1",members:[
{_id:0,host:"192.168.161.71:40001",priority:2},
{_id:1,host:"192.168.161.72:40001",priority:1},
{_id:2,host:"192.168.161.73:40001",arbiterOnly:true},
]}
The name tyshard1 must match the replSet value in the shard1 configuration file.

Initialize the replica set:
rs.initiate(config)

Check the status:
rs.status()

Starting the instances:
Node 1
[root@mongodb1 apps]# mongod --config /mongodb/apps/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24719
child process started successfully, parent exiting
Node 2
[root@mongodb2 apps]# mongod --config /mongodb/apps/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24650
child process started successfully, parent exiting

Node 3
[root@mongodb3 data]# mongod --config /mongodb/apps/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 24601
child process started successfully, parent exiting

Example session:
[root@mongodb1 apps]# mongo 192.168.161.71:40001
MongoDB shell version v4.2.9
connecting to: mongodb://192.168.161.71:40001/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("1ceb5a7a-6ea4-450e-8dab-5bb42f35bd50") }
MongoDB server version: 4.2.9
Server has startup warnings:
2020-10-17T21:54:02.415+0800 I CONTROL [initandlisten]
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten]
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten]
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-10-17T21:54:02.416+0800 I CONTROL [initandlisten]

Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()

use admin
switched to db admin

config={_id:"tyshard1",members:[
... {_id:0,host:"192.168.161.71:40001",priority:2},
... {_id:1,host:"192.168.161.72:40001",priority:1},
... {_id:2,host:"192.168.161.73:40001",arbiterOnly:true},
... ]}
{
"_id" : "tyshard1",
"members" : [
{
"_id" : 0,
"host" : "192.168.161.71:40001",
"priority" : 2
},
{
"_id" : 1,
"host" : "192.168.161.72:40001",
"priority" : 1
},
{
"_id" : 2,
"host" : "192.168.161.73:40001",
"arbiterOnly" : true
}
]
}
rs.initiate(config)
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1602943545, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602943545, 1)
}
tyshard1:OTHER> rs.status()
{
"set" : "tyshard1",
"date" : ISODate("2020-10-17T14:05:56.475Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastCommittedWallTime" : ISODate("1970-01-01T00:00:00Z"),
"appliedOpTime" : {
"ts" : Timestamp(1602943556, 3),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1602943556, 3),
"t" : NumberLong(1)
},
"lastAppliedWallTime" : ISODate("2020-10-17T14:05:56.397Z"),
"lastDurableWallTime" : ISODate("2020-10-17T14:05:56.397Z")
},
"lastStableRecoveryTimestamp" : Timestamp(0, 0),
"lastStableCheckpointTimestamp" : Timestamp(0, 0),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2020-10-17T14:05:56.378Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1602943545, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 2,
"priorityAtElection" : 2,
"electionTimeoutMillis" : NumberLong(10000),
"numCatchUpOps" : NumberLong(0),
"newTermStartDate" : ISODate("2020-10-17T14:05:56.397Z")
},
"members" : [
{
"_id" : 0,
"name" : "192.168.161.71:40001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 716,
"optime" : {
"ts" : Timestamp(1602943556, 3),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2020-10-17T14:05:56Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1602943556, 1),
"electionDate" : ISODate("2020-10-17T14:05:56Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "192.168.161.72:40001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 10,
"optime" : {
"ts" : Timestamp(1602943545, 1),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(1602943545, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2020-10-17T14:05:45Z"),
"optimeDurableDate" : ISODate("2020-10-17T14:05:45Z"),
"lastHeartbeat" : ISODate("2020-10-17T14:05:56.385Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T14:05:56.168Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.161.73:40001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 10,
"lastHeartbeat" : ISODate("2020-10-17T14:05:56.388Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T14:05:56.236Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1602943556, 3),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602943556, 3)
}
tyshard1:PRIMARY>

3.7 Deploy the shard2 replica set

[root@mongodb1 apps]# cat shard2.conf
dbpath=/mongodb/data/shard2
logpath=/mongodb/logs/mongodb-shard2.log
port=40002
fork=true
noauth=true
journal=true
maxConns=500
logappend=true
directoryperdb=true
pidfilepath=/tmp/mongo_40002.pid
notablescan=false
profile=0
slowms=200
quiet=true
syncdelay=60
replSet=tyshard2
bind_ip=0.0.0.0
shardsvr=true

scp shard2.conf root@mongodb1:/mongodb/apps/
scp shard2.conf root@mongodb3:/mongodb/apps/

mongod --config /mongodb/apps/shard2.conf

mongo 192.168.161.72:40002

use admin

config={_id:"tyshard2",members:[
{_id:0,host:"192.168.161.71:40002",arbiterOnly:true},
{_id:1,host:"192.168.161.72:40002",priority:2},
{_id:2,host:"192.168.161.73:40002",priority:1},
]}
The name tyshard2 must match the replSet value in the shard2 configuration file.
Initialize the replica set:

rs.initiate(config)

Check the status:
rs.status()

Node 2
[root@mongodb2 apps]# mongod --config /mongodb/apps/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25107
child process started successfully, parent exiting

Node 1
[root@mongodb1 apps]# mongod --config /mongodb/apps/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25512
child process started successfully, parent exiting

Node 3
[root@mongodb3 apps]# mongod --config /mongodb/apps/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25294
child process started successfully, parent exiting

Example session:
[root@mongodb2 apps]# mongo 192.168.161.72:40002
MongoDB shell version v4.2.9
connecting to: mongodb://192.168.161.72:40002/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("81243f1e-cf32-49a7-856f-60192b1fd6d9") }
MongoDB server version: 4.2.9
Server has startup warnings:
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten]
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten]
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten]
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-10-17T22:18:03.162+0800 I CONTROL [initandlisten]

Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()

use admin
switched to db admin
config={_id:"tyshard2",members:[
... {_id:0,host:"192.168.161.71:40002",arbiterOnly:true},
... {_id:1,host:"192.168.161.72:40002",priority:2},
... {_id:2,host:"192.168.161.73:40002",priority:1},
... ]}
{
"_id" : "tyshard2",
"members" : [
{
"_id" : 0,
"host" : "192.168.161.71:40002",
"arbiterOnly" : true
},
{
"_id" : 1,
"host" : "192.168.161.72:40002",
"priority" : 2
},
{
"_id" : 2,
"host" : "192.168.161.73:40002",
"priority" : 1
}
]
}
rs.initiate(config)
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1602944831, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602944831, 1)
}
tyshard2:SECONDARY> rs.status()
{
"set" : "tyshard2",
"date" : ISODate("2020-10-17T14:27:19.110Z"),
"myState" : 2,
"term" : NumberLong(0),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastCommittedWallTime" : ISODate("1970-01-01T00:00:00Z"),
"appliedOpTime" : {
"ts" : Timestamp(1602944831, 1),
"t" : NumberLong(-1)
},
"durableOpTime" : {
"ts" : Timestamp(1602944831, 1),
"t" : NumberLong(-1)
},
"lastAppliedWallTime" : ISODate("2020-10-17T14:27:11.710Z"),
"lastDurableWallTime" : ISODate("2020-10-17T14:27:11.710Z")
},
"lastStableRecoveryTimestamp" : Timestamp(0, 0),
"lastStableCheckpointTimestamp" : Timestamp(0, 0),
"members" : [
{
"_id" : 0,
"name" : "192.168.161.71:40002",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 7,
"lastHeartbeat" : ISODate("2020-10-17T14:27:18.823Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T14:27:18.242Z"),
"pingMs" : NumberLong(1),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 1,
"name" : "192.168.161.72:40002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 558,
"optime" : {
"ts" : Timestamp(1602944831, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2020-10-17T14:27:11Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 2,
"name" : "192.168.161.73:40002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 7,
"optime" : {
"ts" : Timestamp(1602944831, 1),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(1602944831, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2020-10-17T14:27:11Z"),
"optimeDurableDate" : ISODate("2020-10-17T14:27:11Z"),
"lastHeartbeat" : ISODate("2020-10-17T14:27:18.830Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T14:27:18.819Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1602944831, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602944831, 1)
}
tyshard2:SECONDARY> exit
bye

3.8 Deploy the shard3 replica set
[root@mongodb3 apps]# cat shard3.conf
dbpath=/mongodb/data/shard3
logpath=/mongodb/logs/mongodb-shard3.log
port=40003
fork=true
noauth=true
journal=true
maxConns=500
logappend=true
directoryperdb=true
pidfilepath=/tmp/mongo_40003.pid
notablescan=false
profile=0
slowms=200
quiet=true
syncdelay=60
replSet=tyshard3
bind_ip=0.0.0.0
shardsvr=true

scp shard3.conf root@mongodb1:/mongodb/apps/
scp shard3.conf root@mongodb2:/mongodb/apps/

Start the instances on all three nodes:
mongod --config /mongodb/apps/shard3.conf

mongo 192.168.161.73:40003

use admin

config={_id:"tyshard3",members:[
{_id:0,host:"192.168.161.71:40003",priority:1},
{_id:1,host:"192.168.161.72:40003",arbiterOnly:true},
{_id:2,host:"192.168.161.73:40003",priority:2},
]}
The name tyshard3 must match the replSet value in the shard3 configuration file.

Initialize the replica set:
rs.initiate(config)

Check the status:
rs.status()
Node 3
[root@mongodb3 apps]# mongod --config /mongodb/apps/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25550
child process started successfully, parent exiting

Node 2
[root@mongodb2 ~]# mongod --config /mongodb/apps/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25750
child process started successfully, parent exiting

Node 1
[root@mongodb1 apps]# mongod --config /mongodb/apps/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25739
child process started successfully, parent exiting

Example session:
[root@mongodb3 apps]# mongod --config /mongodb/apps/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 25550
child process started successfully, parent exiting
[root@mongodb3 apps]#
[root@mongodb3 apps]# mongo 192.168.161.73:40003
MongoDB shell version v4.2.9
connecting to: mongodb://192.168.161.73:40003/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("481db1f0-bf0c-4c04-810e-57db929c0464") }
MongoDB server version: 4.2.9
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
Server has startup warnings:
2020-10-17T22:31:19.724+0800 I CONTROL [initandlisten]
2020-10-17T22:31:19.724+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-10-17T22:31:19.724+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2020-10-17T22:31:19.724+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-10-17T22:31:19.724+0800 I CONTROL [initandlisten]
2020-10-17T22:31:19.725+0800 I CONTROL [initandlisten]
2020-10-17T22:31:19.725+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2020-10-17T22:31:19.725+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2020-10-17T22:31:19.725+0800 I CONTROL [initandlisten]

Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()

use admin
switched to db admin

config={_id:"tyshard3",members:[
... {_id:0,host:"192.168.161.71:40003",priority:1},
... {_id:1,host:"192.168.161.72:40003",arbiterOnly:true},
... {_id:2,host:"192.168.161.73:40003",priority:2},
... ]}
{
"_id" : "tyshard3",
"members" : [
{
"_id" : 0,
"host" : "192.168.161.71:40003",
"priority" : 1
},
{
"_id" : 1,
"host" : "192.168.161.72:40003",
"arbiterOnly" : true
},
{
"_id" : 2,
"host" : "192.168.161.73:40003",
"priority" : 2
}
]
}
rs.initiate(config)
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1602945139, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602945139, 1)
}
tyshard3:SECONDARY> rs.status()
{
"set" : "tyshard3",
"date" : ISODate("2020-10-17T14:32:24.767Z"),
"myState" : 2,
"term" : NumberLong(0),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastCommittedWallTime" : ISODate("1970-01-01T00:00:00Z"),
"appliedOpTime" : {
"ts" : Timestamp(1602945139, 1),
"t" : NumberLong(-1)
},
"durableOpTime" : {
"ts" : Timestamp(1602945139, 1),
"t" : NumberLong(-1)
},
"lastAppliedWallTime" : ISODate("2020-10-17T14:32:19.181Z"),
"lastDurableWallTime" : ISODate("2020-10-17T14:32:19.181Z")
},
"lastStableRecoveryTimestamp" : Timestamp(0, 0),
"lastStableCheckpointTimestamp" : Timestamp(0, 0),
"members" : [
{
"_id" : 0,
"name" : "192.168.161.71:40003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 5,
"optime" : {
"ts" : Timestamp(1602945139, 1),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(1602945139, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2020-10-17T14:32:19Z"),
"optimeDurableDate" : ISODate("2020-10-17T14:32:19Z"),
"lastHeartbeat" : ISODate("2020-10-17T14:32:24.731Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T14:32:24.684Z"),
"pingMs" : NumberLong(1),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 1,
"name" : "192.168.161.72:40003",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 5,
"lastHeartbeat" : ISODate("2020-10-17T14:32:24.719Z"),
"lastHeartbeatRecv" : ISODate("2020-10-17T14:32:23.591Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.161.73:40003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 67,
"optime" : {
"ts" : Timestamp(1602945139, 1),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("2020-10-17T14:32:19Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "could not find member to sync from",
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1602945139, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1602945139, 1)
}
tyshard3:SECONDARY>
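
With the config replica set and all three shard replica sets initialized, a small loop (a sketch) prints every member's state in one pass, so you can verify each set has exactly one PRIMARY (arbiters show up as ARBITER):

for port in 27017 40001 40002 40003; do
    mongo 192.168.161.71:$port --quiet --eval 'var s = rs.status(); s.members.forEach(function (m) { print(s.set + "  " + m.name + "  " + m.stateStr); })'
done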

3.9 Create the mongos router instances
vi /mongodb/apps/route.conf

pidfilepath=/mongodb/apps/mongo-route.pid
logpath=/mongodb/logs/mongodb-route.log
port=30000
fork=true
maxConns=5000
logappend=true
bind_ip=0.0.0.0
configdb=tyconfig/192.168.161.71:27017,192.168.161.72:27017,192.168.161.73:27017

Copy the file to the other two servers:
scp route.conf root@mongodb1:/mongodb/apps
scp route.conf root@mongodb2:/mongodb/apps

Start the router instance on each node:
mongos --config /mongodb/apps/route.conf

3.10 Enable sharding
Log in to a router node and register the shards (tyshard3 is added later, in section 3.11):
mongo 192.168.161.71:30000

use admin

sh.addShard("tyshard1/192.168.161.71:40001,192.168.161.72:40001,192.168.161.73:40001")
sh.addShard("tyshard2/192.168.161.71:40002,192.168.161.72:40002,192.168.161.73:40002")

sh.status()
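
The registered shards can also be listed non-interactively (a sketch) with the listShards admin command through any mongos:

mongo 192.168.161.71:30000/admin --quiet --eval 'printjson(db.adminCommand({ listShards: 1 }))'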

3.11 Test the sharding functionality
Connect to a mongos router and lower the chunk size to 1 MB (the default is 64 MB) so the small test data set is split into multiple chunks quickly; this is for testing only:
use config
db.settings.save({"_id":"chunksize","value":1})
Simulate a data load

Insert 60,000 documents into the tyuser collection of the tydb database in a loop:

use tydb

show collections

for(i=1;i<=60000;i++){db.tyuser.insert({"id":i,"name":"ty"+i})}
Enable sharding on the database:
sh.enableSharding("tydb")
Create an index on the shard key field:
db.tyuser.createIndex({"id":1})

Shard the collection on the id field:
sh.shardCollection("tydb.tyuser",{"id":1})

Check the sharding status:
sh.status()
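
In addition to sh.status(), getShardDistribution() on the sharded collection reports per-shard document and chunk counts for tydb.tyuser (a sketch, run through mongos):

mongo 192.168.161.71:30000/tydb --quiet --eval 'db.tyuser.getShardDistribution()'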

Manually add the third shard server

mongo 192.168.161.71:30000

use admin

sh.addShard("tyshard3/192.168.161.71:40003,192.168.161.72:40003,192.168.161.73:40003")

sh.status()
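
After tyshard3 joins, the balancer migrates existing chunks onto it in the background. A quick way to watch the progress (a sketch) is to check the balancer state and count chunks per shard in the config database:

mongo 192.168.161.71:30000 --quiet --eval '
print("balancer enabled: " + sh.getBalancerState());
print("balancer running: " + sh.isBalancerRunning());
db.getSiblingDB("config").chunks.aggregate([
    { $group: { _id: "$shard", chunks: { $sum: 1 } } }
]).forEach(printjson);'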
