MongoDB sharding on a single machine (using different port numbers)
Environment:
OS: CentOS 7
MongoDB: 4.4.22
Deployment layout:
192.168.1.109:27001  s1 shard (single-node replica set)
192.168.1.109:28001  s2 shard (single-node replica set)
192.168.1.109:29001  s3 shard (single-node replica set)
192.168.1.109:30001  config server (single-node replica set)
192.168.1.109:40001  router server (mongos)
#################### Deploy the s1 replica-set shard server ######################
1. Download the appropriate version
https://www.mongodb.com/download-center/community
I downloaded mongodb-linux-x86_64-rhel70-4.4.22.tgz.
2. Create directories
[root@test services]# mkdir -p /usr/local/services
[root@test services]# mkdir -p /home/middle/mongodb_s1/data
[root@test services]# mkdir -p /home/middle/mongodb_s1/log
[root@test services]# mkdir -p /home/middle/mongodb_s1/key
[root@test services]# mkdir -p /home/middle/mongodb_s1/conf
[root@test services]# mkdir -p /home/middle/mongodb_s1/run
3. Install
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongodb_s1
4. Generate the key file for authentication (skip for now)
[root@test key]# cd /home/middle/mongodb_s1/key
[root@test key]# openssl rand -base64 741 >>keyfile
[root@test key]# chmod 700 keyfile
The s1, s2, and s3 replica sets can share the same keyfile:
[root@localhost key]# cp /home/middle/mongodb_s1/key/keyfile /home/middle/mongodb_s2/key/
[root@localhost key]# cp /home/middle/mongodb_s1/key/keyfile /home/middle/mongodb_s3/key/
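To confirm that all three shard directories ended up with an identical keyfile, a quick checksum comparison (illustrative):
[root@localhost key]# md5sum /home/middle/mongodb_s1/key/keyfile /home/middle/mongodb_s2/key/keyfile /home/middle/mongodb_s3/key/keyfile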
5. Create the log file (it is referenced in the config file, so create it in advance)
Run this on each server of the s1 shard:
[root@test key]# echo > /home/middle/mongodb_s1/log/mongodb.log
6. Create the configuration file mongo.cnf
Run this on each server of the s1 shard; adjust the IP and port accordingly.
vi /home/middle/mongodb_s1/conf/mongo.cnf
port=27001
fork=true
dbpath=/home/middle/mongodb_s1/data
logpath=/home/middle/mongodb_s1/log/mongodb.log
pidfilepath=/home/middle/mongodb_s1/run/27001.pid
bind_ip=192.168.1.109,127.0.0.1
logappend=true
shardsvr=true
replSet=s1
oplogSize=16384
logRotate=reopen
##keyFile=/home/middle/mongodb_s1/key/keyfile
##auth=true
wiredTigerCacheSizeGB=1 ##WiredTiger cache size; the default is the larger of 50% of (RAM - 1 GB) or 256 MB
The same configuration in YAML format:
[root@localhost conf]# more mongo_yaml.cnf
net:
  bindIp: 192.168.1.109,127.0.0.1
  port: 27001
storage:
  journal:
    enabled: true
  dbPath: "/home/middle/mongodb_s1/data"
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
systemLog:
  destination: file
  path: "/home/middle/mongodb_s1/log/mongodb.log"
  logAppend: true
  logRotate: reopen
processManagement:
  fork: true
  pidFilePath: "/home/middle/mongodb_s1/run/27001.pid"
replication:
  oplogSizeMB: 16384
  replSetName: s1
##security:
##  keyFile: "/home/middle/mongodb_s1/key/keyfile"
##  authorization: "enabled"
sharding:
  clusterRole: shardsvr
7. Start the s1 shard
/usr/local/services/mongodb_s1/bin/mongod -f /home/middle/mongodb_s1/conf/mongo.cnf
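A quick sanity check that the shard process actually started (illustrative; any equivalent check works):
[root@test ~]# ps -ef | grep mongodb_s1 | grep -v grep
[root@test ~]# ss -lntp | grep 27001
[root@test ~]# tail -n 5 /home/middle/mongodb_s1/log/mongodb.log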
8. Initialize s1
Run this on any one member of the s1 replica set:
[root@localhost bin]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.109:27001
use admin
config={_id:'s1',members:[{_id:0,host:'192.168.1.109:27001'}]}
rs.initiate(config)
9. Check the status of the s1 replica set
rs.status()
rs.conf()
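Once the single member finishes its election, the shell prompt changes to s1:PRIMARY; for example:
s1:PRIMARY> rs.status().members[0].stateStr
"PRIMARY"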
Set up the s2 and s3 replica sets by following the same steps, replacing s1 with s2 and s3 and adjusting the IP and port accordingly; see the sketch below for the values that change.
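A minimal sketch of the mongo.cnf values that differ for s2 (s3 follows the same pattern with port 29001 and the mongodb_s3 paths):
port=28001
dbpath=/home/middle/mongodb_s2/data
logpath=/home/middle/mongodb_s2/log/mongodb.log
pidfilepath=/home/middle/mongodb_s2/run/28001.pid
replSet=s2
##keyFile=/home/middle/mongodb_s2/key/keyfile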
Log in to check the cluster status:
/usr/local/services/mongodb_s2/bin/mongo 192.168.1.109:28001
/usr/local/services/mongodb_s3/bin/mongo 192.168.1.109:29001
#################### Deploy the config server ######################
1. Create the following directories
[root@test services]# mkdir -p /usr/local/services
[root@test services]# mkdir -p /home/middle/mongo_config/data
[root@test services]# mkdir -p /home/middle/mongo_config/log
[root@test services]# mkdir -p /home/middle/mongo_config/key
[root@test services]# mkdir -p /home/middle/mongo_config/conf
[root@test services]# mkdir -p /home/middle/mongo_config/run
2. Extract and install
Run this on every replica set server:
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongo_config
3. Create the log file
[root@localhost soft]# echo > /home/middle/mongo_config/log/mongodb.log
4. Generate the key file for authentication (skip for now)
[root@test key]# cd /home/middle/mongo_config/key
[root@test key]# openssl rand -base64 741 >>keyfile
[root@test key]# chmod 700 keyfile
The same keyfile used by the other replica sets can be reused:
[root@localhost key]# cp /home/middle/mongodb_s1/key/keyfile /home/middle/mongo_config/key/
5. Create the configuration file
Create the config file in the conf directory:
vi /home/middle/mongo_config/conf/mongo.cnf
[root@localhost conf]# more mongo.cnf
port=30001
fork=true
dbpath=/home/middle/mongo_config/data
logpath=/home/middle/mongo_config/log/mongodb.log
pidfilepath=/home/middle/mongo_config/run/30001.pid
bind_ip=192.168.1.109,127.0.0.1
logappend=true
oplogSize=16384
logRotate=reopen
configsvr=true
replSet=configrs
The same configuration in YAML format:
[root@localhost conf]# more mongo_yaml.cnf
net:
  bindIp: 192.168.1.109,127.0.0.1
  port: 30001
storage:
  journal:
    enabled: true
  dbPath: "/home/middle/mongo_config/data"
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
systemLog:
  destination: file
  path: "/home/middle/mongo_config/log/mongodb.log"
  logAppend: true
  logRotate: reopen
processManagement:
  fork: true
  pidFilePath: "/home/middle/mongo_config/run/30001.pid"
replication:
  oplogSizeMB: 16384
  replSetName: configrs
##security:
##  keyFile: "/home/middle/mongo_config/key/keyfile"
##  authorization: "enabled"
sharding:
  clusterRole: configsvr
6. Start
Run this on every replica set server:
/usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo.cnf
7. Initialize the replica set
Run this on any one member of the replica set:
[root@localhost bin]# /usr/local/services/mongo_config/bin/mongo 192.168.1.109:30001
use admin
config={_id:'configrs',members:[{_id:0,host:'192.168.1.109:30001'}]}
rs.initiate(config)
8. Check the replica set status
rs.status()
rs.conf()
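As an extra check (illustrative), the parsed startup options confirm the node is running with the configsvr cluster role:
configrs:PRIMARY> db.serverCmdLineOpts().parsed.sharding
{ "clusterRole" : "configsvr" }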
#################### Deploy the router server (mongos) ######################
1. Create the following directories
[root@test services]# mkdir -p /usr/local/services
[root@test services]# mkdir -p /home/middle/mongo_router/data
[root@test services]# mkdir -p /home/middle/mongo_router/log
[root@test services]# mkdir -p /home/middle/mongo_router/key
[root@test services]# mkdir -p /home/middle/mongo_router/conf
[root@test services]# mkdir -p /home/middle/mongo_router/run
2. Extract and install
[root@test soft]# tar -xvf mongodb-linux-x86_64-rhel70-4.4.22.tgz
[root@test soft]# mv mongodb-linux-x86_64-rhel70-4.4.22 /usr/local/services/mongo_router
3. Create the log file
[root@localhost soft]# echo > /home/middle/mongo_router/log/mongodb.log
4. Generate the key file for authentication (skip for now)
[root@test key]# cd /home/middle/mongo_router/key
[root@test key]# openssl rand -base64 741 >>keyfile
[root@test key]# chmod 700 keyfile
The same keyfile used by the other replica sets can be reused:
[root@localhost key]# cp /home/middle/mongodb_s1/key/keyfile /home/middle/mongo_router/key/
5. Create the configuration file
Create mongo.cnf in the conf directory:
vi /home/middle/mongo_router/conf/mongo.cnf
[root@localhost conf]# more mongo.cnf
port=40001
fork=true
logpath=/home/middle/mongo_router/log/mongodb.log
pidfilepath=/home/middle/mongo_router/run/40001.pid
configdb=configrs/192.168.1.109:30001
bind_ip=192.168.1.109,127.0.0.1
The router server does not need the following option:
##auth=true
The same configuration in YAML format:
[root@localhost conf]# more mongo_yaml.cnf
net:
  bindIp: 192.168.1.109,127.0.0.1
  port: 40001
processManagement:
  fork: true
  pidFilePath: /home/middle/mongo_router/run/40001.pid
sharding:
  configDB: configrs/192.168.1.109:30001
systemLog:
  destination: file
  path: /home/middle/mongo_router/log/mongodb.log
##security:
##  keyFile: "/home/middle/mongo_router/key/keyfile"
The router server does not need the following option:
##  authorization: "enabled"
Note that we do not set the dbPath parameter here, because the router server does not store data; the routing metadata is stored on the config server.
6. Start
/usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo.cnf
Note that the router is started with the mongos binary, not mongod.
7. Log in and check
/usr/local/services/mongo_router/bin/mongo 192.168.1.109:40001
mongos> show dbs
admin 0.000GB
config 0.000GB
################ Configure the shards #############################
Log in on the router server.
1. Add the shards
So far we have set up the MongoDB config server, the router server, and the individual shard servers, but an application connecting to the mongos router cannot use sharding yet; the shards still have to be added to the cluster for sharding to take effect.
Log in to the router server (192.168.1.109:40001):
[root@pxc01 bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.109:40001
mongos> use admin
switched to db admin
sh.addShard("s1/192.168.1.109:27001")
sh.addShard("s2/192.168.1.109:28001")
sh.addShard("s3/192.168.1.109:29001")
sh.status()
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("654b21aa04789a98cfffd524")
  }
  shards:
      { "_id" : "s1", "host" : "s1/192.168.1.109:27001", "state" : 1 }
      { "_id" : "s2", "host" : "s2/192.168.1.109:28001", "state" : 1 }
      { "_id" : "s3", "host" : "s3/192.168.1.109:29001", "state" : 1 }
  active mongoses:
      "4.4.22" : 1
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "config", "primary" : "config", "partitioned" : true }
mongos>
mongos> use config
switched to db config
mongos> db.shards.find()
{ "_id" : "s1", "host" : "s1/192.168.1.109:27001", "state" : 1 }
{ "_id" : "s2", "host" : "s2/192.168.1.109:28001", "state" : 1 }
{ "_id" : "s3", "host" : "s3/192.168.1.109:29001", "state" : 1 }
###################### Pre-create the database and collection ##################################
mongos> use admin
mongos> db.runCommand({enablesharding:"db_pushmsg" })
mongos> db.runCommand({"shardcollection":"db_pushmsg.app_message_all","key":{"_id":"hashed"}})
###################### Routine maintenance #######################################
1. Start-up order:
config server --> shard servers --> router server
Config server:
/usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo.cnf
Shard servers:
/usr/local/services/mongodb_s1/bin/mongod -f /home/middle/mongodb_s1/conf/mongo.cnf
/usr/local/services/mongodb_s2/bin/mongod -f /home/middle/mongodb_s2/conf/mongo.cnf
/usr/local/services/mongodb_s3/bin/mongod -f /home/middle/mongodb_s3/conf/mongo.cnf
Router server:
/usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo.cnf
2. Log in
/usr/local/services/mongodb_s1/bin/mongo 192.168.1.109:27001
/usr/local/services/mongodb_s2/bin/mongo 192.168.1.109:28001
/usr/local/services/mongodb_s3/bin/mongo 192.168.1.109:29001
/usr/local/services/mongo_config/bin/mongo 192.168.1.109:30001
/usr/local/services/mongo_router/bin/mongo 192.168.1.109:40001
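For a clean shutdown, the commonly recommended order is roughly the reverse: stop the balancer, shut down the router, then the shards, then the config server. An illustrative sequence using db.shutdownServer() (the same command used in the auth section below):
On the mongos (192.168.1.109:40001):
mongos> sh.stopBalancer()
mongos> use admin
mongos> db.shutdownServer()
Then on each shard (27001/28001/29001), and finally on the config server (30001):
use admin
db.shutdownServer()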
################# Create each shard's own administrator account (separate from the accounts created on the router server) #######################
In this test environment each shard gets its own locally created administrator account; for easier management in production it is recommended to create the same account on every shard. These accounts are separate from the accounts created on the router server.
s1:
/usr/local/services/mongodb_s1/bin/mongo 192.168.1.109:27001
use admin
db.createUser({user:"fenpian",pwd:"fenpian123",roles:["root"]});
s2:
/usr/local/services/mongodb_s2/bin/mongo 192.168.1.109:28001
use admin
db.createUser({user:"fenpian",pwd:"fenpian123",roles:["root"]});
s3:
/usr/local/services/mongodb_s3/bin/mongo 192.168.1.109:29001
use admin
db.createUser({user:"fenpian",pwd:"fenpian123",roles:["root"]});
####################### Create accounts and enable password authentication #####################################################
1. Create accounts on the router server (accounts created here are automatically created on the config server)
Create a superuser account:
[root@localhost bin]# /usr/local/services/mongo_router/bin/mongo 192.168.1.109:40001
use admin
db.createUser({user:"test",pwd:"test123",roles:["root"]}); --创建用户
db.auth("test","test123"); --设置用户登陆权限,密码一定要和创建用户时输入的密码相同
show users; --查看创建的用户
Create a development account:
use admin
db.auth("test","test123");
db.createUser({user:'hxl',pwd:'hxl123',roles:[{role:'readWrite',db:'db_pushmsg'}]})
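This account can then be used through mongos with a connection string, for example (note authSource=admin, since the user was created in the admin database):
[root@localhost ~]# /usr/local/services/mongo_router/bin/mongo "mongodb://hxl:hxl123@192.168.1.109:40001/db_pushmsg?authSource=admin"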
2. Now check whether the accounts were also created on the shards
[root@localhost ~]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.109:27001
s1:PRIMARY> use admin
s1:PRIMARY> show users;
s1:PRIMARY>
As you can see, the shards do not receive the accounts created on the router server.
3. Check whether the accounts were created on the config server
[root@localhost ~]# /usr/local/services/mongo_config/bin/mongo 192.168.1.109:30001
configrs:PRIMARY> use admin
switched to db admin
configrs:PRIMARY> show users;
{
    "_id" : "admin.hxl",
    "userId" : UUID("c82ed077-f422-4541-99bb-4a5696ee9bcb"),
    "user" : "hxl",
    "db" : "admin",
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "db_pushmsg"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
}
{
    "_id" : "admin.test",
    "userId" : UUID("909592db-7c04-4fc1-95fd-38ed8a9ba5a4"),
    "user" : "test",
    "db" : "admin",
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
}
The accounts were indeed created on the config server.
2. Enable password authentication
2.1 Config server
##security:
##  keyFile: "/home/middle/mongo_config/key/keyfile"
##  authorization: "enabled"
Change to:
security:
  keyFile: "/home/middle/mongo_config/key/keyfile"
  authorization: "enabled"
Restart:
/usr/local/services/mongo_config/bin/mongo localhost:30001
configrs:PRIMARY> use admin
switched to db admin
configrs:PRIMARY> db.shutdownServer()
server should be down...
Start:
/usr/local/services/mongo_config/bin/mongod -f /home/middle/mongo_config/conf/mongo_yaml.cnf
2.2 Router server
In the config file, uncomment the lines that were previously commented out:
##security:
##  keyFile: "/home/middle/mongo_router/key/keyfile"
Change to:
security:
  keyFile: "/home/middle/mongo_router/key/keyfile"
Then restart the router server:
[root@localhost ~]# /usr/local/services/mongo_router/bin/mongo localhost:40001
mongos> use admin
switched to db admin
mongos> db.shutdownServer()
Start:
/usr/local/services/mongo_router/bin/mongos -f /home/middle/mongo_router/conf/mongo_yaml.cnf
2.3 Shard servers
##security:
##  keyFile: "/home/middle/mongodb_s1/key/keyfile"
##  authorization: "enabled"
Change to:
security:
  keyFile: "/home/middle/mongodb_s1/key/keyfile"
  authorization: "enabled"
Restart:
/usr/local/services/mongodb_s1/bin/mongo localhost:27001
s1:PRIMARY> use admin
switched to db admin
s1:PRIMARY> db.shutdownServer()
server should be down...
Start:
/usr/local/services/mongodb_s1/bin/mongod -f /home/middle/mongodb_s1/conf/mongo_yaml.cnf
Do the same on every shard node; see the sketch below.
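An illustrative way to restart all three shards with their YAML configs (assuming the directory layout used above):
for s in s1 s2 s3; do
  /usr/local/services/mongodb_${s}/bin/mongod -f /home/middle/mongodb_${s}/conf/mongo_yaml.cnf
done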
3. Log in to the router server with a password
[root@localhost conf]# /usr/local/services/mongo_router/bin/mongo 192.168.1.109:40001
mongos> use admin
mongos> db.auth("test","test123");
mongos> show dbs
admin 0.000GB
config 0.004GB
db_pushmsg 0.636GB
mongos> use admin
switched to db admin
mongos> db.auth("hxl","hxl123");
1
mongos> show dbs
db_pushmsg 0.636GB
mongos> use db_pushmsg
mongos> show tables
app_message_all
Notes:
1. The shard servers, config server, and router server must all use the same keyfile (different keyfiles will cause authentication failures); note that the router server has no auth/authorization parameter.
2. Once the accounts are created, the shard servers, config server, and router server must all be restarted with authentication enabled at the same time (you cannot enable it on one role and leave it off on the others).
3. Accounts created on the router server are not synced to the shard servers, so once password authentication is enabled, those accounts cannot be used to access an individual shard directly:
[root@localhost ~]# /usr/local/services/mongodb_s1/bin/mongo 192.168.1.109:27001
MongoDB shell version v4.4.22
connecting to: mongodb://192.168.1.109:27001/test?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("ce4252e0-d039-4afa-9129-d65d286f1205") }
MongoDB server version: 4.4.22
s1:PRIMARY> show dbs
s1:PRIMARY> use admin
switched to db admin
s1:PRIMARY> db.auth("test","test123");
Error: Authentication failed.
0
s1:PRIMARY> db.auth("hxl","hxl123");
Error: Authentication failed.
0
This is why the earlier step created each shard's own account in advance: at this point a shard can only be accessed with its own local account.
/usr/local/services/mongodb_s2/bin/mongo 192.168.1.109:28001
use admin
db.auth("fenpian","fenpian123");
4. After sharding, count() queries on the router server can be inaccurate. The reason is that chunk migrations between shards may be in progress, which can lead to documents being counted more than once.
The record count can instead be obtained as follows:
db.app_message_all.aggregate(
[
{ $group: { _id: null, count: { $sum: 1 } } }
]
)
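db.collection.countDocuments() is an alternative in recent versions; it also counts by running an aggregation rather than relying on sharding metadata:
mongos> db.app_message_all.countDocuments({})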
Official documentation:
https://www.mongodb.com/docs/manual/reference/method/db.collection.count/