Building a Distributed MongoDB Cluster (3 Primaries, 3 Secondaries, 3 Arbiters)

1. Basic Concepts

A MongoDB sharded cluster consists of five types of instances:

  • a) Config server (config): stores the cluster metadata, i.e. where each piece of data lives; roughly 1 MB of config server space corresponds to about 200 MB of data stored on the data nodes;
  • b) Primary shard node (shard): stores the data and serves queries;
  • c) Shard replica node (replication): keeps a copy of the data; when a shard's primary goes down, a replica is elected as the new primary;
  • d) Arbiter node (arbiter): takes part in electing a replica as the new primary when a shard's primary fails, but stores no data itself;
  • e) Router node (mongos): handles all external requests. It fetches metadata from the config servers, retrieves the data from the appropriate shards, and returns the result. It stores nothing itself and only routes requests; since mongos needs little memory and few resources, it can be deployed on the application servers.

2. Environment Preparation

2.1 Disable the Firewall and SELinux

$ systemctl stop firewalld.service
$ systemctl disable firewalld.service 
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
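The sed edit above only takes effect after a reboot; to stop SELinux enforcement immediately in the current session as well:

$ setenforce 0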

Changing the hostnames and configuring the hosts file are omitted here.
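For reference, a minimal /etc/hosts consistent with the role table in section 3.1 might look like this (the hostname-to-IP mapping is an assumption based on the addresses used later):

192.168.99.11 mongo01
192.168.99.12 mongo02
192.168.99.13 mongo03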

2.2 Install the Java Environment

$ yum install java -y

2.3 Create a Regular User

$ useradd mongo
$ echo 123456 | passwd --stdin mongo

2.4 Adjust the Resource Limit Configuration

$ vim /etc/security/limits.conf
mongo    soft  nproc  65535
mongo    hard  nproc  65535
mongo    soft  nofile 81920
mongo    hard  nofile 81920
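After logging back in as the mongo user, you can verify the new limits took effect:

$ ulimit -u   # max user processes, expected 65535
$ ulimit -n   # max open files, expected 81920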

2.5 Disable Transparent Huge Pages

$ vim /etc/rc.local
# append the following at the end of the file
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
$ source /etc/rc.local
# confirm the change took effect (the default is always)
$ cat /sys/kernel/mm/transparent_hugepage/enabled    
always madvise [never]
$ cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
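On CentOS 7, /etc/rc.d/rc.local is not executable by default, so the snippet above will not run at boot unless you also make it executable:

$ chmod +x /etc/rc.d/rc.local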

3. Install and Deploy MongoDB

Perform all of the following steps as the mongo user.

3.1 Host Role Assignment

Shard / port (role)   mongo01             mongo02             mongo03
config                27018               27018               27018
shard1                27019 (primary)     27019 (replica)     27019 (arbiter)
shard2                27020 (arbiter)     27020 (primary)     27020 (replica)
shard3                27021 (replica)     27021 (arbiter)     27021 (primary)
mongos                27017               27017               27017

3.2 Download the Software Package

$ wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.4.tgz
$ tar -xzvf mongodb-linux-x86_64-3.4.4.tgz
$ cd mongodb-linux-x86_64-3.4.4/
$ mkdir {data,logs,conf}
$ mkdir data/{config,shard1,shard2,shard3}
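As a quick sanity check, you can confirm the binaries run on this machine:

$ bin/mongod --version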

3.3 config.yml

The config servers must be started and configured before the mongos routers, and the shard (data) nodes must be started and configured before the cluster is initialized. "Initialization" here means adding each shard to the cluster once the mongos routers and config servers are in place.

$ vim conf/config.yml

The configuration is identical on every machine; just mind the bind IP and port for each host.

sharding:
  clusterRole: configsvr
replication:
  replSetName: lvzhenjiang
  # must be identical on all config nodes
systemLog:
  destination: file
  path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/config.log"
  logAppend: true
  logRotate: rename
net:
  bindIp: 192.168.99.11
  port: 27018
storage:
  dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/config"
processManagement:
  fork: true

3.4 mongos.yml

$ vim conf/mongos.yml

The configuration is identical on every machine; just mind the bind IP and port.

sharding:
  configDB: lvzhenjiang/192.168.99.11:27018,192.168.99.12:27018,192.168.99.13:27018
# points at the config server replica set
systemLog:
  destination: file
  path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/mongos.log"
  logAppend: true
net:
  bindIp: 192.168.99.11,127.0.0.1
  port: 27017
processManagement:
  fork: true

3.5 shard1.yml (primary shard)

The primary and replica configuration files for a shard are identical; just adjust replSetName, the port, the log path, and the data path according to the role assignment.

$ vim conf/shard1.yml

sharding:
  clusterRole: shardsvr
replication:
  replSetName: shard1
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/shard1.log"
processManagement:
  fork: true
net:
  bindIp: 192.168.99.11
  port: 27019
  http:
    enabled: false
  maxIncomingConnections: 65535
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100
storage:
  dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/shard1"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 40
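      # note: size cacheSizeGB to this host's available RAM; 40 GB assumes a large dedicated machine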
      directoryForIndexes: true
    indexConfig:
      prefixCompression: true
  directoryPerDB: true
setParameter:
  replWriterThreadCount: 64

3.6 shard2.yml (arbiter)

$ vim conf/shard2.yml

The arbiter configuration is the same for every shard; just mind the port.

sharding:
  clusterRole: shardsvr
replication:
  replSetName: shard2
systemLog:
  destination: file
  logAppend: true
  logRotate: rename
  path: "/home/mongo/mongodb-linux-x86_64-3.4.4/logs/shard2.log"
processManagement:
  fork: true
net:
  bindIp: 192.168.99.11
  port: 27020
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 100
storage:
  dbPath: "/home/mongo/mongodb-linux-x86_64-3.4.4/data/shard2"

3.7 Assign the shard roles according to the following table

Shard / port (role)   mongo01             mongo02             mongo03
config                27018               27018               27018
shard1                27019 (primary)     27019 (replica)     27019 (arbiter)
shard2                27020 (arbiter)     27020 (primary)     27020 (replica)
shard3                27021 (replica)     27021 (arbiter)     27021 (primary)
mongos                27017               27017               27017

3.8 Once the roles are assigned, start the config and shard processes:

$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/config.yml
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard1.yml
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard2.yml
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongod -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/shard3.yml
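A quick check that all four mongod processes came up (optional):

$ ps -ef | grep mongod | grep -v grep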

3.9 Initialize the Config Server Replica Set

$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --port 27018 --host 192.168.99.11
> rs.initiate({
    _id: "lvzhenjiang",
    configsvr: true,
    members: [
      { _id: 0, host: "192.168.99.11:27018" },
      { _id: 1, host: "192.168.99.12:27018" },
      { _id: 2, host: "192.168.99.13:27018" }
    ]
  });
> rs.status()
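// in the output, one member should report "stateStr" : "PRIMARY" and the other two "SECONDARY"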

3.10 Configure the Shard Roles

3.10.1 shard1
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.11 --port 27019
> rs.initiate(
    { _id:"shard1", members:[
      {_id:0,host:"192.168.99.11:27019"},
      {_id:1,host:"192.168.99.12:27019"},
      {_id:2,host:"192.168.99.13:27019",arbiterOnly:true}
    ]
  }
);

> rs.status()
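// the member added with arbiterOnly: true should report "stateStr" : "ARBITER"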

3.10.2 shard2
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.12 --port 27020
> rs.initiate(
    { _id:"shard2", members:[
      {_id:0,host:"192.168.99.12:27020"},
      {_id:1,host:"192.168.99.13:27020"},
      {_id:2,host:"192.168.99.11:27020",arbiterOnly:true}
    ]
  }
);

> rs.status()

3.10.3 shard3
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.13 --port 27021
> rs.initiate(
    { _id:"shard3", members:[
      {_id:0,host:"192.168.99.13:27021"},
      {_id:1,host:"192.168.99.11:27021"},
      {_id:2,host:"192.168.99.12:27021",arbiterOnly:true}
    ]
  }
);

> rs.status()

3.10.4 Start mongos, add the shards, and insert data to check the distribution across shards
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongos -f /home/mongo/mongodb-linux-x86_64-3.4.4/conf/mongos.yml
$ /home/mongo/mongodb-linux-x86_64-3.4.4/bin/mongo --host 192.168.99.11 --port 27017
mongos> use admin
mongos> db.runCommand( { addshard : "shard1/192.168.99.11:27019,192.168.99.12:27019,192.168.99.13:27019"});
mongos> db.runCommand( { addshard : "shard2/192.168.99.11:27020,192.168.99.12:27020,192.168.99.13:27020"});
mongos> db.runCommand( { addshard : "shard3/192.168.99.11:27021,192.168.99.12:27021,192.168.99.13:27021"});
mongos> sh.status()
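To actually see data spread across the shards, you can enable sharding on a throwaway database and insert some documents. The database and collection names below (testdb, users) are placeholders; a hashed shard key is used so writes distribute evenly:

mongos> sh.enableSharding("testdb")
mongos> sh.shardCollection("testdb.users", { _id: "hashed" })
mongos> use testdb
mongos> for (var i = 0; i < 10000; i++) { db.users.insert({ name: "user" + i, idx: i }) }
mongos> db.users.getShardDistribution()   // per-shard document and size breakdown
mongos> sh.status()                       // chunk distribution across shard1/2/3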

Done!
