Setting up a MongoDB sharded cluster
Note: the keyfile contents must be exactly the same on all instances of the three roles (mongos, config server, shard):
If something goes wrong while building a replica set, delete it and start over.
Config server replica set configuration file (started with mongod):
[work@xxx etc]$ cat mongodb.conf
systemLog:
  destination: file
  path: /home/work/mongodb/mongo_40000/log/mongodb.log
  logAppend: true
cloud:
  monitoring:
    free:
      state: off
#net Options
net:
  maxIncomingConnections: 10240
  port: 40000
  bindIp: 10.10.10.40,localhost
#security Options
security:
  authorization: 'enabled'
  keyFile: /home/work/mongodb/mongo_40000/etc/igoodful_config
  clusterAuthMode: "keyFile"
#storage Options
storage:
  engine: "wiredTiger"
  directoryPerDB: true
  dbPath: /home/work/mongodb/mongo_40000/data
  journal:
    enabled: true
    commitIntervalMs: 100
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
      cacheSizeGB: 4
      journalCompressor: "snappy"
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
    #wiredTigerCollectionConfigString: lsm
    #wiredTigerIndexConfigString: lsm
#replication Options
replication:
  oplogSizeMB: 81920   #80GB
  replSetName: igoodful_config
#operationProfiling Options
operationProfiling:
  slowOpThresholdMs: 100
  mode: "slowOp"
processManagement:
  fork: true
  pidFilePath: /home/work/mongodb/mongo_40000/tmp/mongo_40000.pid
sharding:
  clusterRole: configsvr
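Before mongos can point at the config servers, the config replica set has to be initiated. A minimal sketch, assuming the three nodes listed in the mongos configDB string below (10.10.10.40/41/42, port 40000) are already running; run it in a mongo shell connected to one of them:

// Connect to one config node first, e.g. mongo --host 10.10.10.40 --port 40000
// Initiate the config server replica set; configsvr: true is required when clusterRole is configsvr
rs.initiate({
  _id: "igoodful_config",          // must match replication.replSetName in the config above
  configsvr: true,                 // marks this replica set as the config server set
  members: [
    { _id: 0, host: "10.10.10.40:40000" },
    { _id: 1, host: "10.10.10.41:40000" },
    { _id: 2, host: "10.10.10.42:40000" }
  ]
})
// Verify the replica set state
rs.status()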
mongos configuration file (started with mongos):
[work@hostname etc]$ cat mongodb.conf
systemLog:
  destination: file
  path: /home/work/mongodb_dba/mongo_30000/log/mongodb_dba.log
  logAppend: true
#net Options
net:
  maxIncomingConnections: 10240
  port: 30000
  bindIp: 10.10.10.30,localhost
  serviceExecutor: adaptive
#security Options
security:
  keyFile: /home/work/mongodb_dba/mongo_30000/etc/test.mongos
  clusterAuthMode: "keyFile"
sharding:
  configDB: "igoodful_config/10.10.10.40:40000,10.10.10.41:40000,10.10.10.42:40000"
processManagement:
  fork: true
  pidFilePath: /home/work/mongodb_dba/mongo_30000/tmp/mongo_30000.pid
Shard replica set configuration (started with mongod):
Startup command: /home/work/mongodb/4.0.14/bin/mongod --config /home/work/mongodb/mongo_28001/etc/mongodb.conf

[work@xxx etc]$ cat mongodb.conf
systemLog:
  destination: file
  path: "/home/work/mongodb/mongo_28001/log/mongodb.log"
  logAppend: true
cloud:
  monitoring:
    free:
      state: off
#net Options
net:
  maxIncomingConnections: 10240
  port: 28001
  bindIp: 10.10.10.50,localhost
  serviceExecutor: adaptive
#security Options
security:
  authorization: 'enabled'
  keyFile: /home/work/mongodb/mongo_28001/etc/igoodful_shard
  clusterAuthMode: "keyFile"
#storage Options
storage:
  engine: "wiredTiger"
  directoryPerDB: true
  dbPath: /home/work/mongodb/mongo_28001/data
  journal:
    enabled: true
    commitIntervalMs: 100
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
      cacheSizeGB: 25
      journalCompressor: "snappy"
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
    #wiredTigerCollectionConfigString: lsm
    #wiredTigerIndexConfigString: lsm
#replication Options
replication:
  oplogSizeMB: 81920   #80GB
  replSetName: igoodful_shard
#operationProfiling Options
operationProfiling:
  slowOpThresholdMs: 100
  mode: "slowOp"
processManagement:
  fork: true
  pidFilePath: /home/work/mongodb/mongo_28001/tmp/mongo_28001.pid
sharding:
  clusterRole: shardsvr
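The shard replica set also needs to be initiated before it can be added to the cluster. A minimal sketch, assuming the three members that later appear in sh.status() (10.10.10.50/51/52, port 28001); run it in a mongo shell connected to one of them:

// Connect to one shard member first, e.g. mongo --host 10.10.10.50 --port 28001
rs.initiate({
  _id: "igoodful_shard",           // must match replication.replSetName in the config above
  members: [
    { _id: 0, host: "10.10.10.50:28001" },
    { _id: 1, host: "10.10.10.51:28001" },
    { _id: 2, host: "10.10.10.52:28001" }
  ]
})
// Verify the replica set state
rs.status()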
Starting mongos (note: it is started with mongos, not mongod):
Note 1: starting mongos
[work@hostname mongodb_dba]$ /home/work/mongodb_dba/4.0/bin/mongos --config /home/work/mongodb_dba/mongo_30000/etc/mongodb.conf
2020-10-27T14:49:34.383+0800 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
about to fork child process, waiting until server is ready for connections.
forked process: 30389
child process started successfully, parent exiting
Note 2: the config servers listed in configDB must use the same keyfile for authentication, i.e. authentication is enabled on the config servers.
[work@tj1-ai-stag-db-i1-01 mongodb_dba]$ /home/work/mongodb_dba/4.0/bin/mongos --config /home/work/mongodb_dba/mongo_30000/etc/mongodb.conf
2020-10-27T16:40:09.882+0800 W SHARDING [main] Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.
about to fork child process, waiting until server is ready for connections.
forked process: 300974
Log in to mongos:
/home/work/mongodb_dba/4.0/bin/mongo --host 10.38.167.100 --port 30000 -u mongo_dba -p 123456 --authenticationDatabase admin
mongos> use admin
switched to db admin
mongos> sh.addShard("test_shard1/10.10.10.10:28001");
{
    "shardAdded" : "test_shard1",
    "ok" : 1,
    "operationTime" : Timestamp(1603788736, 3),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1603788736, 3),
        "signature" : {
            "hash" : BinData(0,"32np1vC8xMXEKRFRgE5MGjZvKqM="),
            "keyId" : NumberLong("6888216464256401439")
        }
    }
}
mongos>
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5f97dc61cd2244b6a298aaf4")
  }
  shards:
        {  "_id" : "igoodful_shard",  "host" : "igoodful_shard/10.10.10.50:28001,10.10.10.51:28001,10.10.10.52:28001",  "state" : 1 }
  active mongoses:
        "4.0.17-10" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                test_shard1     1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : test_shard1 Timestamp(1, 0)
mongos>
Add a shard replica set to the sharded cluster (performed while logged in to mongos):
#######################################################
sh.addShard("igoodful_shard/10.10.10.50:28001,10.10.10.51:28001,10.10.10.52:28001");
sh.addShard("<shard replica set name>/<comma-separated host:port list of its members>")
Enable sharding on a database:
# Enable sharding on a database
sh.enableSharding("<database>")
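For example, a minimal sketch run from mongos; the database name records is borrowed from the shardCollection example below and is only an illustration:

// Run on mongos, authenticated as a user with cluster administration privileges
sh.enableSharding("records")
// Equivalent command form:
db.adminCommand( { enableSharding: "records" } )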
Enable sharding on a collection:
# Shard a collection; sharding must already be enabled on its database
db.adminCommand( { shardCollection: "records.people", key: { zipcode: 1 } } )
The full form of the shardCollection command is:
{
  shardCollection: "<database>.<collection>",
  key: { <field1>: <1|"hashed">, ... },
  unique: <boolean>,
  numInitialChunks: <integer>,
  presplitHashedZones: <boolean>,
  collation: { locale: "simple" }
}
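As a concrete illustration of the full form above, the sketch below shards a hypothetical records.logins collection on a hashed _id key; the collection name and the numInitialChunks value are assumptions, not part of the original setup:

// A hashed shard key spreads writes evenly across shards;
// numInitialChunks pre-creates chunks on an empty collection so data is distributed up front.
db.adminCommand({
  shardCollection: "records.logins",   // hypothetical collection
  key: { _id: "hashed" },
  numInitialChunks: 4
})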
# Check whether the balancer is enabled
sh.getBalancerState()
# Stop the balancer
sh.stopBalancer()
# Start the balancer
sh.startBalancer()
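Beyond stopping and starting the balancer outright, chunk migrations can be restricted to an off-peak window by updating the balancer document in the config database. A minimal sketch run from mongos; the window times are just an example:

// Limit chunk migrations to a nightly window (times are interpreted in the server's local time)
db.getSiblingDB("config").settings.update(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "01:00", stop: "05:00" } } },
  { upsert: true }
)
// Check whether a balancing round is in progress right now
sh.isBalancerRunning()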
View all shard replica sets in the current sharded cluster:
db.adminCommand( { listShards: 1 } )
sh.status()
db.printShardingStatus()
Remove a shard:
In the admin database, run the removeShard command. This starts "draining" the shard's chunks to the other shards. For example, for a shard named mongodb0, run:
use admin
db.runCommand( { removeShard: "mongodb0" } )
# This command returns immediately; the response looks like the following:
{
    "msg" : "draining started successfully",
    "state" : "started",
    "shard" : "mongodb0",
    "ok" : 1
}
Check the migration status:
To check the status of the migration, run the removeShard command against the admin database again. For example, for a shard named mongodb0:

use admin
db.runCommand( { removeShard: "mongodb0" } )

The command returns output similar to the following:

{
    "msg" : "draining ongoing",
    "state" : "ongoing",
    "remaining" : {
        "chunks" : 42,
        "dbs" : 1
    },
    "ok" : 1
}

In this output, the remaining document shows how many chunks MongoDB still has to migrate to other shards and how many databases have this shard as their primary. Keep running removeShard to check the status until remaining drops to 0. The command must be run against the admin database; from another database you can use the sh._adminCommand helper instead.
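Since draining can take a long time, it is convenient to poll removeShard in a small loop rather than re-running it by hand. A minimal mongo shell sketch; the shard name mongodb0 and the 30-second interval are just for illustration:

// Poll the drain progress every 30 seconds until removeShard reports completion.
// Note: the state stays "ongoing" until movePrimary has moved away any databases
// whose primary shard is the one being removed (see the next section).
var res;
do {
  res = db.getSiblingDB("admin").runCommand({ removeShard: "mongodb0" });
  if (res.remaining) printjson(res.remaining);   // chunks and primary databases still to move
  if (res.state != "completed") sleep(30 * 1000);
} while (res.state != "completed");
print("drain finished; removeShard returned state 'completed'");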
Migrate unsharded data:
This step only applies if the shard being removed is the primary shard of one or more databases, in which case it stores their unsharded data; otherwise, skip ahead to "Finish the migration".
In a cluster, a database that is not sharded keeps its data on a single shard, which is that database's primary shard. (Different databases can have different primary shards.)
View a database's primary shard:
sh.status()
db.printShardingStatus()

In the returned document, the databases field lists every database together with its primary shard. For example, the following databases entry shows that the products database uses mongodb0 as its primary shard:

{  "_id" : "products",  "partitioned" : true,  "primary" : "mongodb0" }

To move such a database to another shard, use the movePrimary command, which moves all of its remaining unsharded data, here from mongodb0 to mongodb1.
Move the primary shard:
db.runCommand( { movePrimary: "products", to: "mongodb1" })

Warning: this command does not return until MongoDB has finished moving all of the data, which may take a long time. The response resembles the following:

{ "primary" : "mongodb1", "ok" : 1 }
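After movePrimary returns, one way to double-check the new primary shard is to read the config database directly from mongos; a minimal sketch:

// Each document in config.databases records that database's primary shard
db.getSiblingDB("config").databases.find({ _id: "products" }).pretty()
// The result should now show "primary" : "mongodb1" for the products database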
Finish the migration:
To clean up all of the metadata and finish removing the shard, run removeShard one more time. For example, for the mongodb0 shard:

use admin
db.runCommand( { removeShard: "mongodb0" } )

When the removal is complete, it reports success:

{
    "msg" : "removeshard completed successfully",
    "state" : "completed",
    "shard" : "mongodb0",
    "ok" : 1
}

Once state has changed to "completed", it is safe to stop the mongod processes of the mongodb0 shard.
############################
igoodful@qq.com