MongoDB sharded cluster deployment (with keyfile authentication)
Environment:
OS: CentOS 7
MongoDB: 4.4.22
Topology:
Shard s1 (replica set):
192.168.1.104:29001 shard server 1
192.168.1.106:29001 shard server 2
192.168.1.107:29001 shard server 3
Shard s2 (replica set):
192.168.1.104:29002 shard server 1
192.168.1.106:29002 shard server 2
192.168.1.107:29002 shard server 3
Shard s3 (replica set):
192.168.1.104:29003 shard server 1
192.168.1.106:29003 shard server 2
192.168.1.107:29003 shard server 3
Config server replica set:
192.168.1.104:28001
192.168.1.106:28001
192.168.1.107:28001
Router server:
192.168.1.108:30001
Note:
From version 3.4 onward, both the shard servers and the config servers must be deployed as replica sets.
#################### Deploy the s1 shard replica set ######################
1. Download the appropriate version
https://www.mongodb.com/download-center/community
The package used here is mongodb-linux-x86_64-rhel70-4.4.22.tgz.
2. Create the following directories on each shard server
Run on every s1 shard server:
[root@test services]# mkdir -p /usr/local/services
[root@test services]# mkdir -p /home/middle/mongodb_s1/data
[root@test services]# mkdir -p /home/middle/mongodb_s1/log
[root@test services]# mkdir -p /home/middle/mongodb_s1/key
[root@test services]# mkdir -p /home/middle/mongodb_s1/conf
[root@test services]# mkdir -p /home/middle/mongodb_s1/run
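The directory layout above can be created in one loop. A minimal sketch: the ROOT and SHARD variables are illustrative defaults for testing (on the real hosts ROOT would be /home/middle and SHARD would be s1, s2, or s3):

```shell
# Sketch: create the standard directory tree for one shard instance.
# ROOT and SHARD are illustrative defaults; override them per host.
ROOT=${ROOT:-/tmp/middle}
SHARD=${SHARD:-s1}
BASE="${ROOT}/mongodb_${SHARD}"
for d in data log key conf run; do
    mkdir -p "${BASE}/${d}"   # -p also creates missing parents
done
ls "${BASE}"
```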
3. Install on each shard server
Run on every s1 shard server:
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongodb_s1
4. Generate the keyfile for authentication (skip this step for now)
Create the keyfile on one of the machines; here it is done on 192.168.1.104:29001:
[root@test key]# cd /home/middle/mongodb_s1/key
[root@test key]# openssl rand -base64 741 >>keyfile
[root@test key]# chmod 700 keyfile
Copy the keyfile to the corresponding directory on the other two nodes:
scp keyfile root@192.168.1.106:/home/middle/mongodb_s1/key/
scp keyfile root@192.168.1.107:/home/middle/mongodb_s1/key/
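The key generation and permission steps can be combined into one snippet. mongod refuses to start with a keyfile that is readable by group or others, so owner-only permissions such as 600 are required (the chmod 700 used above also satisfies that check). A sketch, with an illustrative directory in /tmp:

```shell
# Sketch: generate a cluster keyfile with owner-only permissions.
KEYDIR=${KEYDIR:-/tmp/mongodb_s1/key}   # illustrative; really /home/middle/mongodb_s1/key
mkdir -p "${KEYDIR}"
openssl rand -base64 741 > "${KEYDIR}/keyfile"
chmod 600 "${KEYDIR}/keyfile"           # mongod rejects group/world-readable keyfiles
# The same file would then be copied to every other node, e.g.:
# scp "${KEYDIR}/keyfile" root@192.168.1.106:/home/middle/mongodb_s1/key/
stat -c '%a' "${KEYDIR}/keyfile"        # prints 600
```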
5. Create the log file (it is referenced in the configuration file, so create it in advance)
Run on every s1 shard server:
[root@test key]#echo>/home/middle/mongodb_s1/log/mongodb.log
6. Create the configuration file mongo.cnf
Run on every s1 shard server, adjusting the IP and port accordingly:
vi /home/middle/mongodb_s1/conf/mongo.cnf
port=29001
fork=true
dbpath=/home/middle/mongodb_s1/data
logpath=/home/middle/mongodb_s1/log/mongodb.log
pidfilepath=/home/middle/mongodb_s1/run/29001.pid
bind_ip=192.168.1.104,127.0.0.1
logappend=true
shardsvr=true
replSet=s1
oplogSize=16384
logRotate=reopen
##keyFile=/home/middle/mongodb_s1/key/keyfile
##auth=true
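Since only the port, bind IP, and shard name differ between hosts, the file above can be rendered from a template. A sketch that writes to /tmp for illustration (the real path is /home/middle/mongodb_s1/conf/mongo.cnf; the variables are illustrative):

```shell
# Sketch: render a shard member's mongo.cnf from a few variables.
SHARD=${SHARD:-s1}
PORT=${PORT:-29001}
HOST_IP=${HOST_IP:-192.168.1.104}
CONF=${CONF:-/tmp/mongo_${SHARD}.cnf}   # illustrative output path
cat > "${CONF}" <<EOF
port=${PORT}
fork=true
dbpath=/home/middle/mongodb_${SHARD}/data
logpath=/home/middle/mongodb_${SHARD}/log/mongodb.log
pidfilepath=/home/middle/mongodb_${SHARD}/run/${PORT}.pid
bind_ip=${HOST_IP},127.0.0.1
logappend=true
shardsvr=true
replSet=${SHARD}
oplogSize=16384
logRotate=reopen
##keyFile=/home/middle/mongodb_${SHARD}/key/keyfile
##auth=true
EOF
grep '^port=' "${CONF}"
```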
7. Start each s1 shard server
Run on every s1 shard server:
/usr/local/services/mongodb_s1/bin/mongod -f /home/middle/mongodb_s1/conf/mongo.cnf
8. Initialize the s1 replica set
Run on any one member of the s1 replica set:
[root@localhost bin]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29001
use admin
config={_id:'s1',members:[{_id:0,host:'192.168.1.104:29001'},{_id:1,host:'192.168.1.106:29001'},{_id:2,host:'192.168.1.107:29001'}]}
rs.initiate(config)
9. Check the status of the s1 replica set
rs.status()
rs.conf()
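The initiation document can also be rendered to a JS file and run non-interactively with the mongo shell instead of being typed in. A sketch that only writes the file here; the output path is illustrative, and the hosts and ports are the ones from the topology above:

```shell
# Sketch: render the rs.initiate() script for one shard replica set.
SHARD=${SHARD:-s1}
PORT=${PORT:-29001}
JS="/tmp/init_${SHARD}.js"               # illustrative output path
cat > "${JS}" <<EOF
config = {_id: '${SHARD}', members: [
    {_id: 0, host: '192.168.1.104:${PORT}'},
    {_id: 1, host: '192.168.1.106:${PORT}'},
    {_id: 2, host: '192.168.1.107:${PORT}'}
]};
rs.initiate(config);
EOF
# On a real host this would then be executed as:
# /usr/local/services/mongodb_${SHARD}/bin/mongo 192.168.1.104:${PORT} "${JS}"
cat "${JS}"
```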
Each of the replica sets s1, s2, and s3 here has three members. In a test environment with limited resources you can also use a single-node replica set (not recommended for production):
[root@localhost bin]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29001
use admin
config={_id:'s1',members:[{_id:0,host:'192.168.1.104:29001'}]}
rs.initiate(config)
rs.status()
[root@localhost bin]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.106:29002
use admin
config={_id:'s2',members:[{_id:0,host:'192.168.1.106:29002'}]}
rs.initiate(config)
rs.status()
[root@localhost bin]# /usr/local/services/mongodb_s3/bin/mongo 192.168.1.107:29003
use admin
config={_id:'s3',members:[{_id:0,host:'192.168.1.107:29003'}]}
rs.initiate(config)
rs.status()
Set up the other two replica sets by repeating the steps above, replacing s1 with s2 and s3 and adjusting the IPs and ports accordingly.
Log in and check the cluster state:
/usr/local/services/mongodb_s2/bin/mongo 192.168.1.104:29002
/usr/local/services/mongodb_s3/bin/mongo 192.168.1.104:29003
#################### Deploy the config servers ######################
1. Create the following directories on each config server
Run on every config server:
[root@test services]# mkdir -p /usr/local/services
[root@test services]# mkdir -p /home/middle/mongo_config/data
[root@test services]# mkdir -p /home/middle/mongo_config/log
[root@test services]# mkdir -p /home/middle/mongo_config/key
[root@test services]# mkdir -p /home/middle/mongo_config/conf
[root@test services]# mkdir -p /home/middle/mongo_config/run
2. Extract and install
Run on every config server:
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongo_config
3. Create the log file
Run on every config server:
[root@localhost soft]#echo>/home/middle/mongo_config/log/mongodb.log
4. Create the configuration file
Create it under the conf directory on every config server, adjusting the IP address:
vi /home/middle/mongo_config/conf/mongo.cnf
[root@localhost conf]# more mongo.cnf
port=28001
fork=true
dbpath=/home/middle/mongo_config/data
logpath=/home/middle/mongo_config/log/mongodb.log
pidfilepath=/home/middle/mongo_config/run/28001.pid
bind_ip=192.168.1.104,127.0.0.1
logappend=true
oplogSize=16384
logRotate=reopen
configsvr=true
replSet=configrs
5. Start
Run on every config server:
/usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo.cnf
6. Initialize the replica set
Run on any one member of the replica set:
[root@localhost bin]# /usr/local/services/mongo_config/bin/mongo 192.168.1.104:28001
use admin
config={_id:'configrs',members:[{_id:0,host:'192.168.1.104:28001'},{_id:1,host:'192.168.1.106:28001'},{_id:2,host:'192.168.1.107:28001'}]}
rs.initiate(config)
7. Check the replica set status
rs.status()
rs.conf()
8. Databases and collections on the config server replica set
configrs:PRIMARY> show dbs
admin 0.000GB
config 0.002GB
local 0.002GB
configrs:PRIMARY> use config
switched to db config
configrs:PRIMARY> show tables
actionlog
changelog
chunks
collections
databases
image_collection
lockpings
locks
migrations
mongos
shards
system.indexBuilds
tags
transactions
version
#################### Deploy the mongos router ######################
1. Create the following directories on the router server
Run on the router server (192.168.1.108):
[root@test services]# mkdir -p /usr/local/services
[root@test services]# mkdir -p /home/middle/mongo_router/data
[root@test services]# mkdir -p /home/middle/mongo_router/log
[root@test services]# mkdir -p /home/middle/mongo_router/key
[root@test services]# mkdir -p /home/middle/mongo_router/conf
[root@test services]# mkdir -p /home/middle/mongo_router/run
2. Extract and install
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongo_router
3. Create the log file
[root@localhost soft]#echo>/home/middle/mongo_router/log/mongodb.log
4. Create the configuration file
Create mongo.cnf under the conf directory:
vi /home/middle/mongo_router/conf/mongo.cnf
[root@localhost conf]# more mongo.cnf
port=30001
fork=true
logpath=/home/middle/mongo_router/log/mongodb.log
pidfilepath=/home/middle/mongo_router/run/30001.pid
configdb=configrs/192.168.1.104:28001,192.168.1.106:28001,192.168.1.107:28001
bind_ip=192.168.1.108,127.0.0.1
5. Start
/usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo.cnf
Note that the router is started with the mongos binary, not mongod.
6. Databases and collections seen from the router
mongos> show databases
admin 0.000GB
config 0.004GB
hxl 0.010GB ## added in a later step
mongos> use config
switched to db config
mongos> show tables
actionlog
changelog
chunks
collections
databases
image_collection
lockpings
locks
migrations
mongos
shards
system.indexBuilds
tags
transactions
version
################ Add the shards #############################
1. Add the shard servers
The config servers, router, and shard servers are now all running, but an application connecting to the mongos router cannot use sharding yet; the shards still have to be registered and the sharding configuration applied for sharding to take effect.
Log in to the router (192.168.1.108):
[root@pxc01 bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos> use admin
switched to db admin
# Register the shard replica sets with the router
sh.addShard("s1/192.168.1.104:29001,192.168.1.106:29001,192.168.1.107:29001")
sh.addShard("s2/192.168.1.104:29002,192.168.1.106:29002,192.168.1.107:29002")
sh.addShard("s3/192.168.1.104:29003,192.168.1.106:29003,192.168.1.107:29003")
# Check the cluster status
sh.status()
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("64b64e0705045e89b67bb0f6")
}
shards:
{ "_id" : "s1", "host" : "s1/192.168.1.104:29001,192.168.1.106:29001,192.168.1.107:29001", "state" : 1 }
{ "_id" : "s2", "host" : "s2/192.168.1.104:29002,192.168.1.106:29002,192.168.1.107:29002", "state" : 1 }
{ "_id" : "s3", "host" : "s3/192.168.1.104:29003,192.168.1.106:29003,192.168.1.107:29003", "state" : 1 }
active mongoses:
"4.4.22" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
3. Enable sharding on the database that will store the sharded data (the hxl database does not exist yet)
[root@pxc01 bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos> use admin
mongos> db.runCommand({enablesharding:"hxl" })
{
"ok" : 1,
"operationTime" : Timestamp(1689673064, 4),
"$clusterTime" : {
"clusterTime" : Timestamp(1689673064, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> db.runCommand({"shardcollection":"hxl.person","key":{"_id":"hashed"}})
{
"collectionsharded" : "hxl.person",
"collectionUUID" : UUID("753575b3-2974-4223-b007-5bbf6f4efd88"),
"ok" : 1,
"operationTime" : Timestamp(1689673137, 14),
"$clusterTime" : {
"clusterTime" : Timestamp(1689673137, 14),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
After this command is run on the router, the hxl database and the person collection are created automatically on each shard.
Insert some test data:
mongos> use hxl
switched to db hxl
mongos> for (var i=0;i<100000;i++){db.person.insert({"name":"hxl"+i,"age":i})}
4. Verify the data
Log in to the router:
[root@pxc01 bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos> use hxl
switched to db hxl
mongos> db.person.find().count()
100000
Log in to each of s1, s2, and s3 to see how the data is distributed:
s1:
[root@localhost soft]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29001
s1:SECONDARY> use hxl
s1:PRIMARY> db.person.find().count()
33197
s2:
[root@localhost soft]#/usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29002
s2:SECONDARY> rs.secondaryOk()
s2:SECONDARY> use hxl
s2:SECONDARY> db.person.find().count()
33500
s3:
[root@localhost soft]#/usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29003
s3:SECONDARY> rs.secondaryOk()
s3:SECONDARY> use hxl
s3:SECONDARY> db.person.find().count()
33303
33197 + 33500 + 33303 = 100000, so the data is distributed evenly across the three shards, although the inserts were noticeably slow.
5. Create each shard's own administrator account
In this test environment each shard deliberately gets a different account; for easier management in production, create the same account on every shard. These accounts are independent of the ones created on the router.
s1:
/usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29001
use admin
db.createUser({user:"hxl01",pwd:"test123",roles:["root"]});
s2:
/usr/local/services/mongodb_s1/bin/mongo 192.168.1.106:29002
use admin
db.createUser({user:"hxl02",pwd:"test123",roles:["root"]});
s3:
/usr/local/services/mongodb_s1/bin/mongo 192.168.1.107:29003
use admin
db.createUser({user:"hxl03",pwd:"test123",roles:["root"]});
############################## Routine maintenance #####################################
1. Startup order:
config servers --> shard servers --> router
Start the config servers
192.168.1.104
192.168.1.106
192.168.1.107
/usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo.cnf
Start shard s1
192.168.1.104
192.168.1.106
192.168.1.107
/usr/local/services/mongodb_s1/bin/mongod -f /home/middle/mongodb_s1/conf/mongo.cnf
Start shard s2
192.168.1.104
192.168.1.106
192.168.1.107
/usr/local/services/mongodb_s2/bin/mongod -f /home/middle/mongodb_s2/conf/mongo.cnf
Start shard s3
192.168.1.104
192.168.1.106
192.168.1.107
/usr/local/services/mongodb_s3/bin/mongod -f /home/middle/mongodb_s3/conf/mongo.cnf
Start the router
192.168.1.108
/usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo.cnf
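The dependency order above can be captured in one script. A sketch in which each planned command is only echoed so the ordering is visible; on the real hosts each line would be executed remotely (e.g. via ssh):

```shell
# Sketch: emit the cluster start commands in dependency order.
# Echo only; in production each line would be run on its host, e.g. via ssh.
HOSTS="192.168.1.104 192.168.1.106 192.168.1.107"
PLAN=/tmp/startup_order.txt
{
    for h in $HOSTS; do       # 1) config server replica set first
        echo "$h: /usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo.cnf"
    done
    for s in s1 s2 s3; do     # 2) then every shard replica set
        for h in $HOSTS; do
            echo "$h: /usr/local/services/mongodb_${s}/bin/mongod -f /home/middle/mongodb_${s}/conf/mongo.cnf"
        done
    done
    # 3) the mongos router last
    echo "192.168.1.108: /usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo.cnf"
} > "${PLAN}"
cat "${PLAN}"
```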
Notes:
1. If one of the shards becomes unavailable (especially with single-node replica sets), the whole sharded cluster becomes unavailable:
mongos> show dbs
uncaught exception: Error: listDatabases failed:{
"ok" : 0,
"errmsg" : "Could not find host matching read preference { mode: \"primary\" } for set s1",
"code" : 133,
"codeName" : "FailedToSatisfyReadPreference",
"operationTime" : Timestamp(1689750344, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1689750344, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
} :
#################### Configure password authentication ######################
################# Configuration on the router ###################
1. Create an account on the router (the account is automatically created on the config servers as well)
[root@localhost bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
use admin
db.createUser({user:"test",pwd:"test123",roles:["root"]}); -- create the user
db.auth("test","test123"); -- authenticate; the password must match the one used when the user was created
show users; -- list the created users
Now check whether the account was also created on the shards:
[root@localhost ~]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:29001
s1:PRIMARY> use admin
s1:PRIMARY> show users;
The shards do not receive accounts created on the router.
Check whether the account was created on the config servers:
[root@localhost ~]#/usr/local/services/mongodb_s1/bin/mongo 192.168.1.104:28001
configrs:PRIMARY> show users;
{
"_id" : "admin.test",
"userId" : UUID("0c9ba371-bc38-48de-8697-cbab2a348daa"),
"user" : "test",
"db" : "admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
],
"mechanisms" : [
"SCRAM-SHA-1",
"SCRAM-SHA-256"
]
}
The account was indeed created on the config servers.
2. Generate the keyfile on the router
Create the keyfile on one machine; here it is generated under /home/middle/mongo_router/key on the router:
[root@test key]# cd /home/middle/mongo_router/key
[root@test key]# openssl rand -base64 741 >>keyfile
[root@test key]# chmod 700 keyfile
3. Edit the configuration file on the router
Add the following:
keyFile=/home/middle/mongo_router/key/keyfile
##auth=true ## this option is not needed on mongos
4. Restart the mongos on the router (do not restart yet; wait until the config server replica set and every shard server have keyfile authentication configured and restart cleanly)
Kill the corresponding process
Restart:
/usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo.cnf
At this point the router cannot start (the process started above just hangs); it reports an authentication failure because neither the config replica set nor the shard replica sets have authentication enabled yet:
initializing sharding state, sleeping for 2 seconds and retrying","attr":{"error":{"code":18,"codeName":"AuthenticationFailed","errmsg":"Error loading clusterID :: caused by :: Authentication failed."}}}
Key point:
The config server replica set and the shard replica sets must be configured first; only then can the router start.
##################### Configuration on the config servers #######################
1. Copy the keyfile from the router to every member of the config server replica set.
2. Edit the configuration file
Do the same on every config node:
[root@localhost conf]# cd /home/middle/mongo_config/conf
vi mongo.cnf
Add the following two options:
keyFile=/home/middle/mongo_config/key/keyfile
auth=true
3. Restart the config server replica set
Run on every machine.
Stop:
/usr/local/services/mongo_config/bin/mongo localhost:28001
use admin
db.shutdownServer()
Start:
/usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo.cnf
##################### Configuration on the shard servers (apparently no accounts need to be created on the shard servers themselves) #########################
Next, configure authentication on shards s1, s2, and s3.
1. Copy the keyfile generated for the config servers to the corresponding directory on every shard machine, so that the whole cluster uses the same keyfile:
cp keyfile /home/middle/mongodb_s1/key/
3. Edit the configuration parameters
Enable keyfile authentication; adjust the path on each node accordingly:
keyFile=/home/middle/mongodb_s1/key/keyfile
auth=true
4. Restart
Stop:
s1:
/usr/local/services/mongodb_s1/bin/mongo localhost:29001
use admin
db.shutdownServer()
s2:
/usr/local/services/mongodb_s1/bin/mongo localhost:29002
use admin
db.shutdownServer()
s3:
/usr/local/services/mongodb_s1/bin/mongo localhost:29003
use admin
db.shutdownServer()
Start:
/usr/local/services/mongodb_s1/bin/mongod -f /home/middle/mongodb_s1/conf/mongo.cnf
/usr/local/services/mongodb_s2/bin/mongod -f /home/middle/mongodb_s2/conf/mongo.cnf
/usr/local/services/mongodb_s3/bin/mongod -f /home/middle/mongodb_s3/conf/mongo.cnf
5. Log in to the router again
Once the config replica set and all shard replica sets are fully up, the router can start as well; log in and check again:
[root@localhost data]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos> use admin
switched to db admin
mongos> db.auth("test","test123");
################## Create a regular account on the router ########################
/usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos>use admin
mongos>db.auth("test","test123");
mongos>use db_pushmsg
mongos>db.createUser({user:'hxl',pwd:'hxl123',roles:[{role:'dbOwner',db:'db_pushmsg'}]})
Notes:
1. Account creation (admin or regular)
Accounts created on the router are synced to the config servers but not to the shard servers.
So when logging in to a shard server directly:
/usr/local/services/mongodb_s1/bin/mongo localhost:29001
s1:PRIMARY> use admin
switched to db admin
s1:PRIMARY> db.auth("test","test123");
Error: Authentication failed.
0
Authentication fails because the shard server does not have this account.
To log in to a shard server directly, create the shard's own account before enabling authentication (separate from the accounts created on the router).
########################### Add a new shard ##################
Machine: 192.168.1.108:29004
1. Install the same version of MongoDB
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongodb_s4
[root@test services]# mkdir -p /home/middle/mongodb_s4/data
[root@test services]# mkdir -p /home/middle/mongodb_s4/log
[root@test services]# mkdir -p /home/middle/mongodb_s4/key
[root@test services]# mkdir -p /home/middle/mongodb_s4/conf
[root@test services]# mkdir -p /home/middle/mongodb_s4/run
2. Create the configuration file
Copy a configuration file from one of the other shards and edit it:
vi /home/middle/mongodb_s4/conf/mongo.cnf
[root@localhost conf]# more mongo.cnf
port=29004
fork=true
dbpath=/home/middle/mongodb_s4/data
logpath=/home/middle/mongodb_s4/log/mongodb.log
pidfilepath=/home/middle/mongodb_s4/run/29004.pid
logappend=true
bind_ip=192.168.1.108,127.0.0.1
shardsvr=true
replSet=s4
oplogSize=16384
logRotate=reopen
##keyFile=/home/middle/mongodb_s4/key/keyfile
##auth=true
Do not enable keyfile authentication yet; enable it after the user has been created.
3. Create the mongodb.log specified in the configuration file
echo>/home/middle/mongodb_s4/log/mongodb.log
4. Start
/usr/local/services/mongodb_s4/bin/mongod -f /home/middle/mongodb_s4/conf/mongo.cnf
5. Initialize the replica set
[root@localhost bin]# /usr/local/services/mongodb_s4/bin/mongo 192.168.1.108:29004
use admin
config={_id:'s4',members:[{_id:0,host:'192.168.1.108:29004'}]}
rs.initiate(config)
rs.status()
6. Create the shard's own administrator account
/usr/local/services/mongodb_s4/bin/mongo 192.168.1.108:29004
use admin
db.createUser({user:"hxl04",pwd:"test123",roles:["root"]});
7. Edit the configuration file to enable the keyfile (copy the keyfile over from one of the other nodes)
keyFile=/home/middle/mongodb_s4/key/keyfile
auth=true
8. Restart
Kill the corresponding process
Start it again:
/usr/local/services/mongodb_s4/bin/mongod -f /home/middle/mongodb_s4/conf/mongo.cnf
Log in:
/usr/local/services/mongodb_s4/bin/mongo 192.168.1.108:29004
s4:PRIMARY> use admin
switched to db admin
s4:PRIMARY> db.auth("hxl04","test123");
9. Add the shard on the router
[root@pxc01 bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos> use admin
switched to db admin
mongos> db.auth("test","test123")
# Register the new shard replica set with the router
sh.addShard("s4/192.168.1.108:29004")
# Check the cluster status
sh.status()
If the shard was removed earlier and is being re-added, the following error is reported:
mongos> sh.addShard("s4/192.168.1.108:29004")
{
"ok" : 0,
"errmsg" : "can't add shard 's4/192.168.1.108:29004' because a local database 'db_pushmsg' exists in another s1",
"code" : 96,
"codeName" : "OperationFailed",
"operationTime" : Timestamp(1689899799, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1689899799, 1),
"signature" : {
"hash" : BinData(0,"X1jGdnFUs4c5vYnc5e0Qqs8mfgc="),
"keyId" : NumberLong("7257793727152783361")
}
}
}
Solution:
Log in to the shard and drop the local database:
s4:PRIMARY> use db_pushmsg
switched to db db_pushmsg
s4:PRIMARY> db.dropDatabase()
10. Check the data distribution on the new shard
/usr/local/services/mongodb_s4/bin/mongo 192.168.1.108:29004
s4:PRIMARY> use admin
switched to db admin
s4:PRIMARY> db.auth("hxl04","test123");
s4:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
db_pushmsg 0.000GB
local 0.000GB
The collections are migrated to the new shard automatically:
s4:PRIMARY> show tables
app_message_all
app_message_range
The data is also rebalanced onto the new shard automatically:
s4:PRIMARY> db.app_message_all.count()
498869
s4:PRIMARY> db.app_message_range.count()
126816
################ Remove a shard ####################
Log in to the router:
[root@localhost ~]# /usr/local/services/mongo_router/bin/mongo 192.168.1.108:30001
mongos> use admin
switched to db admin
mongos> db.auth("test","test123")
mongos> db.adminCommand({'removeShard': 's4'})
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "s4",
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [ ],
"ok" : 1,
"operationTime" : Timestamp(1689845772, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1689845772, 2),
"signature" : {
"hash" : BinData(0,"eos6I/YyRHZxAcm1vglV/1g8SOI="),
"keyId" : NumberLong("7257793727152783361")
}
}
}
The state changes to ongoing:
mongos> db.adminCommand({'removeShard': 's4'})
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(128),
"dbs" : NumberLong(0),
"jumboChunks" : NumberLong(0)
},
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [ ],
"ok" : 1,
"operationTime" : Timestamp(1689900478, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1689900478, 1),
"signature" : {
"hash" : BinData(0,"b+6eawFsgEhJfQE9sd8cjYeRbok="),
"keyId" : NumberLong("7257793727152783361")
}
}
}
Check the whole cluster again (sh.status()):
shards:
{ "_id" : "s1", "host" : "s1/192.168.1.105:29001", "state" : 1 }
{ "_id" : "s2", "host" : "s2/192.168.1.106:29002", "state" : 1 }
{ "_id" : "s3", "host" : "s3/192.168.1.107:29003", "state" : 1 }
{ "_id" : "s4", "host" : "s4/192.168.1.108:29004", "state" : 1, "draining" : true }
If the primary shard needs to be moved (check whether "dbsToMove" : [ ] contains entries), run the following commands.
Move the primary shard to s1:
> db.adminCommand({'movePrimary': 'db_pushmsg', to: 's1'})
> db.adminCommand({'removeShard': 's4'})
Draining takes quite a while; eventually the command reports that the migration completed successfully:
mongos> db.runCommand( { removeShard: "s4" } )
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "s4",
"ok" : 1,
"operationTime" : Timestamp(1689847004, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1689847004, 2),
"signature" : {
"hash" : BinData(0,"Pj1v/h7BGeENbv3GUN9Smonq0dM="),
"keyId" : NumberLong("7257793727152783361")
}
}
}
Check the cluster's shards again:
mongos> db.adminCommand( { listShards: 1 } )
{
"shards" : [
{
"_id" : "s1",
"host" : "s1/192.168.1.105:29001",
"state" : 1
},
{
"_id" : "s2",
"host" : "s2/192.168.1.106:29002",
"state" : 1
},
{
"_id" : "s3",
"host" : "s3/192.168.1.107:29003",
"state" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1689847023, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1689847023, 1),
"signature" : {
"hash" : BinData(0,"cDpxjEsFmPTXloNHejyedMukEUo="),
"keyId" : NumberLong("7257793727152783361")
}
}
}
s4 is gone from the list.
Verify the data; nothing was lost:
mongos> db.app_message_range.count()
1000000
mongos> db.app_message_all.count()
2000000
Notes:
1. The cluster remains available to applications throughout the shard-removal process.