1. Basic Concepts
A MongoDB sharded cluster has five types of instances:
- a) Config server (config): stores the cluster metadata, i.e. information about where the data lives. As a rough rule of thumb, 1 MB of config-server storage corresponds to about 200 MB of data on the shards;
- b) Shard primary (shard): stores the data and serves queries;
- c) Shard secondary (replication): stores a copy of the data; when the primary goes down, a secondary is elected as the new primary;
- d) Arbiter (arbiter): votes in the election of a new primary when a shard's primary goes down; it stores no data itself;
- e) Router (mongos): handles all external requests. It fetches the metadata from the config servers, reads the data from the appropriate shards, and returns the result to the client. It stores nothing itself and only routes requests, so its memory and resource footprint is small; it can be deployed on the application servers.
2. Environment Preparation
2.1 Disable the firewall and SELinux
| $ systemctl stop firewalld.service |
| $ systemctl disable firewalld.service |
| $ sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config |
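The sed command only edits the config file, which takes effect after a reboot; run `setenforce 0` as well if you want SELinux permissive in the current session. A quick sanity check of what the rewrite does, against a sample line:

```shell
# Show the effect of the rewrite on a sample line; the real edit targets /etc/selinux/config.
echo 'SELINUX=enforcing' | sed 's/SELINUX=enforcing/SELINUX=disabled/g'
# prints: SELINUX=disabled
```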
Changing the hostnames and configuring the hosts file is omitted here!
2.2 Install the Java environment (optional; MongoDB itself does not require Java)
2.3 Create a regular user
| $ useradd mongo |
| $ echo 123456 | passwd --stdin mongo |
2.4 Adjust resource limits
| $ vim /etc/security/limits.conf |
| mongo soft nproc 65535 |
| mongo hard nproc 65535 |
| mongo soft nofile 81920 |
| mongo hard nofile 81920 |
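To confirm the entries are in place you can parse the file; `get_limit` below is a hypothetical helper, not part of any standard tooling (the sample input mirrors the lines added above):

```shell
# Hypothetical helper: print the configured limit for a user/type/item
# from limits.conf-formatted input.
get_limit() { awk -v u="$1" -v t="$2" -v i="$3" '$1==u && $2==t && $3==i { print $4 }'; }
printf 'mongo soft nofile 81920\nmongo hard nofile 81920\n' | get_limit mongo hard nofile
# prints: 81920
```

The authoritative check is to log in as the mongo user and run `ulimit -n` / `ulimit -u`, since limits only apply to new sessions.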
2.5 Disable transparent huge pages (THP)
| $ vim /etc/rc.local |
| # append at the end of the file |
| if test -f /sys/kernel/mm/transparent_hugepage/enabled; then |
| echo never > /sys/kernel/mm/transparent_hugepage/enabled |
| fi |
| if test -f /sys/kernel/mm/transparent_hugepage/defrag; then |
| echo never > /sys/kernel/mm/transparent_hugepage/defrag |
| fi |
| $ chmod +x /etc/rc.local  # on systemd distros rc.local must be executable to run at boot |
| $ source /etc/rc.local |
| # confirm the setting took effect (the default is always) |
| $ cat /sys/kernel/mm/transparent_hugepage/enabled |
| always madvise [never] |
| $ cat /sys/kernel/mm/transparent_hugepage/defrag |
| always madvise [never] |
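The bracketed token marks the active mode. If you want to script the check, a small helper (hypothetical, for illustration) can extract it:

```shell
# Extract the active THP mode (the bracketed token) from the kernel file's format.
thp_mode() { printf '%s\n' "$1" | grep -o '\[[a-z]*\]' | tr -d '[]'; }
thp_mode "always madvise [never]"
# prints: never
```

On a live host you would feed it `"$(cat /sys/kernel/mm/transparent_hugepage/enabled)"` instead of the literal string.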
3. Installing and Deploying MongoDB
Perform all of the following steps as the mongo user!
3.1 Host role assignment
| Shard / port (role) | mongo01 | mongo02 | mongo03 |
| config | 27018 | 27018 | 27018 |
| shard1 | 27019 (primary) | 27019 (secondary) | 27019 (arbiter) |
| shard2 | 27020 (arbiter) | 27020 (primary) | 27020 (secondary) |
| shard3 | 27021 (secondary) | 27021 (arbiter) | 27021 (primary) |
| mongos | 27017 | 27017 | 27017 |
3.2 Get the software package
| $ wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.4.tgz |
| $ tar -xzvf mongodb-linux-x86_64-3.4.4.tgz |
| $ cd mongodb-linux-x86_64-3.4.4/ |
| $ mkdir {data,logs,conf} |
| $ mkdir data/{config,shard1,shard2,shard3} |
3.3 config.yml
The config servers must be started and configured before the mongos routers, and the shard (data) nodes must be started and configured before the cluster is initialized. "Initialization" here means adding each shard to the cluster once the mongos and config servers are up.
The configuration is identical on every machine; just adjust the IP and port.
| sharding: |
| clusterRole: configsvr |
| replication: |
| replSetName: lvzhenjiang |
| |
| systemLog: |
| destination: file |
| path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/config.log" |
| logAppend: true |
| logRotate: rename |
| net: |
| bindIp: 192.168.99.11 |
| port: 27018 |
| storage: |
| dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/config" |
| processManagement: |
| fork: true |
3.4 mongos.yml
The configuration is identical on every machine; just adjust the IP and port.
| sharding: |
| configDB: lvzhenjiang/192.168.99.11:27018,192.168.99.12:27018,192.168.99.13:27018 |
| # the config server replica set (hosts must match section 3.9) |
| systemLog: |
| destination: file |
| path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/mongos.log" |
| logAppend: true |
| net: |
| bindIp: 192.168.99.11,127.0.0.1 |
| port: 27017 |
| processManagement: |
| fork: true |
| |
3.5 shard1.yml (shard primary)
The primary and secondary shard configuration files are identical; just adjust replSetName, the port, the log path, and the storage path according to the role assignment.
| sharding: |
| clusterRole: shardsvr |
| replication: |
| replSetName: shard1 |
| systemLog: |
| destination: file |
| logAppend: true |
| logRotate: rename |
| path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/shard1.log" |
| processManagement: |
| fork: true |
| net: |
| bindIp: 192.168.99.11 |
| port: 27019 |
| http: |
| enabled: false |
| maxIncomingConnections: 65535 |
| operationProfiling: |
| mode: slowOp |
| slowOpThresholdMs: 100 |
| storage: |
| dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/shard1" |
| wiredTiger: |
| engineConfig: |
| cacheSizeGB: 40 |
| directoryForIndexes: true |
| indexConfig: |
| prefixCompression: true |
| directoryPerDB: true |
| setParameter: |
| replWriterThreadCount: 64 |
| |
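Note that `cacheSizeGB: 40` assumes a machine with ample RAM; with three mongod data instances per host, their caches must fit together. MongoDB's default WiredTiger cache is roughly the larger of 50% of (RAM − 1 GB) and 256 MB, which can be sketched as (`wt_cache_gb` is an illustrative helper, not a MongoDB command):

```shell
# Default WiredTiger cache size in GB: max((ram_gb - 1) / 2, 0.25).
wt_cache_gb() { awk -v r="$1" 'BEGIN { c = (r - 1) / 2; if (c < 0.25) c = 0.25; print c }'; }
wt_cache_gb 128   # prints: 63.5
wt_cache_gb 1     # prints: 0.25
```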
3.6 shard2.yml (arbiter)
The arbiter configuration is the same for every shard; just adjust the port and paths.
| sharding: |
| clusterRole: shardsvr |
| replication: |
| replSetName: shard2 |
| systemLog: |
| destination: file |
| logAppend: true |
| logRotate: rename |
| path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/shard2.log" |
| processManagement: |
| fork: true |
| net: |
| bindIp: 192.168.99.11 |
| port: 27020 |
| operationProfiling: |
| mode: slowOp |
| slowOpThresholdMs: 100 |
| storage: |
| dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/shard2" |
| |
3.7 Assign the shard roles according to the table in section 3.1.
3.8 Once the roles are assigned, start the config and shard processes:
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/config.yml |
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard1.yml |
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard2.yml |
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard3.yml |
| |
3.9 Initialize the config server replica set
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.11 --port 27018 |
| > rs.initiate( { |
| _id: "lvzhenjiang", |
| configsvr: true, |
| members: [ |
| { _id: 0, host: "192.168.99.11:27018" }, |
| { _id: 1, host: "192.168.99.12:27018" }, |
| { _id: 2, host: "192.168.99.13:27018" } |
| ] |
| }); |
| > rs.status() |
| |
3.10 Configure the shard replica sets
3.10.1 shard1
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.11 --port 27019 |
| > rs.initiate( |
| { _id:"shard1", members:[ |
| {_id:0,host:"192.168.99.11:27019"}, |
| {_id:1,host:"192.168.99.12:27019"}, |
| {_id:2,host:"192.168.99.13:27019",arbiterOnly:true} |
| ] |
| } |
| ); |
| |
| > rs.status() |
| |
3.10.2 shard2
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.12 --port 27020 |
| > rs.initiate( |
| { _id:"shard2", members:[ |
| {_id:0,host:"192.168.99.12:27020"}, |
| {_id:1,host:"192.168.99.13:27020"}, |
| {_id:2,host:"192.168.99.11:27020",arbiterOnly:true} |
| ] |
| } |
| ); |
| |
| > rs.status() |
| |
3.10.3 shard3
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.13 --port 27021 |
| > rs.initiate( |
| { _id:"shard3", members:[ |
| {_id:0,host:"192.168.99.13:27021"}, |
| {_id:1,host:"192.168.99.11:27021"}, |
| {_id:2,host:"192.168.99.12:27021",arbiterOnly:true} |
| ] |
| } |
| ); |
| |
| > rs.status() |
| |
3.10.4 Start mongos and add the shards; data can then be inserted to check the state of each shard
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongos -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/mongos.yml |
| $ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.11 --port 27017 |
| mongos> use admin |
| mongos> db.runCommand( { addshard : "shard1/192.168.99.11:27019,192.168.99.12:27019,192.168.99.13:27019"}); |
| mongos> db.runCommand( { addshard : "shard2/192.168.99.11:27020,192.168.99.12:27020,192.168.99.13:27020"}); |
| mongos> db.runCommand( { addshard : "shard3/192.168.99.11:27021,192.168.99.12:27021,192.168.99.13:27021"}); |
| mongos> sh.status() |
| |
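Adding the shards does not by itself distribute any data: sharding must be enabled per database and per collection. A minimal mongo-shell session sketch (the `testdb` database, `users` collection, and hashed `_id` shard key are illustrative choices, not part of the original setup):

```
mongos> use testdb
mongos> sh.enableSharding("testdb")
mongos> sh.shardCollection("testdb.users", { _id: "hashed" })
mongos> for (var i = 0; i < 10000; i++) { db.users.insert({ _id: i, name: "user" + i }) }
mongos> db.users.getShardDistribution()   // shows how the documents spread across shard1..shard3
```

A hashed shard key is used here because monotonically increasing keys such as a plain `_id` would otherwise concentrate all inserts on one chunk.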
Done!