Deployment Notes: MongoDB Replica Set (One Primary, Two Secondaries) with Read/Write Splitting and Failover


MongoDB is a NoSQL (non-relational) database. Non-relational databases arose to handle large data volumes with high scalability, high performance, flexible data models, and high availability. MongoDB has officially deprecated the old master-slave mode in favor of replica sets. A master-slave pair is essentially a single-copy deployment with poor scalability and fault tolerance, whereas a replica set keeps multiple copies of the data: if one copy fails, others remain, and when the primary goes down the cluster fails over automatically.

How a MongoDB replica set works
Clients connect to the replica set as a whole, without caring whether any particular node is down. The primary handles the set's reads and writes, and the secondaries replicate its data continuously. If the primary fails, the secondaries detect it through the heartbeat mechanism and hold an election within the cluster to choose a new primary automatically; the application servers do not need to intervene.

Replica sets sound impressive, so the rest of this post walks through deploying one. The official recommendation is at least 3 nodes per replica set; here I use three nodes: one primary and two secondaries, with no arbiter for now.

1. Environment Preparation

IP address         Hostname              Role
172.16.60.205      mongodb-master01      replica set primary
172.16.60.206      mongodb-slave01       replica set secondary
172.16.60.207      mongodb-slave02       replica set secondary
 
Set the hostname on each of the three nodes, and add hosts entries as follows:
[root@mongodb-master01 ~]# cat /etc/hosts
............
172.16.60.205    mongodb-master01
172.16.60.206    mongodb-slave01
172.16.60.207    mongodb-slave02
 
Disable SELinux on all three nodes; for convenience during testing, stop iptables as well:
[root@mongodb-master01 ~]# setenforce 0
[root@mongodb-master01 ~]# cat /etc/sysconfig/selinux
...........
SELINUX=disabled
 
[root@mongodb-master01 ~]# iptables -F
[root@mongodb-master01 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]

2. Installing MongoDB and Configuring the Replica Set

1) On all three nodes, create a directory to hold the replica set's data files
[root@mongodb-master01 ~]# mkdir -p /data/mongodb/data/replset/
 
2) Install MongoDB on all three nodes
Download: https://www.mongodb.org/dl/linux/x86_64-rhel62
 
[root@mongodb-master01 ~]# wget http://downloads.mongodb.org/linux/mongodb-linux-x86_64-rhel62-v3.6-latest.tgz
[root@mongodb-master01 ~]# tar -zvxf mongodb-linux-x86_64-rhel62-v3.6-latest.tgz
 
3) Start mongod on each node (pass --bind_ip explicitly; the default is 127.0.0.1, which must be changed to the node's own IP or remote connections will fail)
[root@mongodb-master01 ~]# mv mongodb-linux-x86_64-rhel62-3.6.11-rc0-2-g2151d1d219 /usr/local/mongodb
[root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
 
[root@mongodb-master01 ~]# ps -ef|grep mongodb
root      7729  6977  1 15:10 pts/1    00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset
root      7780  6977  0 15:11 pts/1    00:00:00 grep mongodb
 
[root@mongodb-master01 ~]# lsof -i:27017
COMMAND  PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
mongod  7729 root   10u  IPv4 6554476      0t0  TCP localhost:27017 (LISTEN)
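The long mongod command line above can equivalently be kept in a config file and started with something like mongod -f /etc/mongod-repset.conf. A minimal sketch, assuming the paths used in this walkthrough; the config file name and the log path are illustrative, not from the original setup:

```yaml
# Illustrative mongod config equivalent to the command-line flags above
systemLog:
  destination: file
  path: /data/mongodb/mongod.log   # assumed log location
  logAppend: true
storage:
  dbPath: /data/mongodb/data/replset
net:
  bindIp: 172.16.60.205            # each node uses its own address
  port: 27017
replication:
  replSetName: repset
```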
 
4) Initialize the replica set
Run this on any one of the three nodes (here, the 172.16.60.205 node)
 
Log in to mongodb
[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
.........
#switch to the admin database
> use admin
switched to db admin
 
#Define the replica set config variable. The _id:"repset" here must match the --replSet repset parameter used above.
> config = { _id:"repset", members:[{_id:0,host:"172.16.60.205:27017"},{_id:1,host:"172.16.60.206:27017"},{_id:2,host:"172.16.60.207:27017"}]}
{
        "_id" : "repset",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.16.60.205:27017"
                },
                {
                        "_id" : 1,
                        "host" : "172.16.60.206:27017"
                },
                {
                        "_id" : 2,
                        "host" : "172.16.60.207:27017"
                }
        ]
}
 
#Initialize the replica set with this config
> rs.initiate(config);
{
        "ok" : 1,
        "operationTime" : Timestamp(1551166191, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1551166191, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
 
#Check the status of the cluster nodes
repset:SECONDARY> rs.status();
{
        "set" : "repset",
        "date" : ISODate("2019-02-26T07:31:07.766Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1551166263, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1551166263, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1551166263, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1551166263, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.16.60.205:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 270,
                        "optime" : {
                                "ts" : Timestamp(1551166263, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1551166202, 1),
                        "electionDate" : ISODate("2019-02-26T07:30:02Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "172.16.60.206:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 76,
                        "optime" : {
                                "ts" : Timestamp(1551166263, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1551166263, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
                        "optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
                        "lastHeartbeat" : ISODate("2019-02-26T07:31:06.590Z"),
                        "lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.852Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "172.16.60.205:27017",
                        "syncSourceHost" : "172.16.60.205:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.16.60.207:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 76,
                        "optime" : {
                                "ts" : Timestamp(1551166263, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1551166263, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2019-02-26T07:31:03Z"),
                        "optimeDurableDate" : ISODate("2019-02-26T07:31:03Z"),
                        "lastHeartbeat" : ISODate("2019-02-26T07:31:06.589Z"),
                        "lastHeartbeatRecv" : ISODate("2019-02-26T07:31:06.958Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "172.16.60.205:27017",
                        "syncSourceHost" : "172.16.60.205:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1551166263, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1551166263, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
 
The output above shows:
After the replica set was configured, 172.16.60.205 became the PRIMARY and 172.16.60.206/207 became SECONDARY nodes.
health: 1 means the member is healthy, 0 means it is unreachable/unhealthy
state: 1 means PRIMARY, 2 means SECONDARY (matching the stateStr field)
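For long rs.status() dumps like the one above, it can help to condense the members array into just the fields that matter. A minimal Python sketch; the helper name and the sample dict are mine, not MongoDB APIs, and the sample mirrors the three members shown above:

```python
# Map the numeric "state" codes seen in rs.status() to their stateStr names
STATE_NAMES = {1: "PRIMARY", 2: "SECONDARY", 8: "(not reachable/healthy)"}

def summarize_members(status):
    """Return [(name, state name, healthy?)] for each replica set member."""
    return [(m["name"], STATE_NAMES.get(m["state"], "OTHER"), m["health"] == 1)
            for m in status["members"]]

# Sample shaped like the rs.status() output above
status = {
    "set": "repset",
    "members": [
        {"name": "172.16.60.205:27017", "state": 1, "health": 1},
        {"name": "172.16.60.206:27017", "state": 2, "health": 1},
        {"name": "172.16.60.207:27017", "state": 2, "health": 1},
    ],
}
print(summarize_members(status))
```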

3. Testing Replica Set Data Replication <by default MongoDB reads and writes on the primary; reads on a secondary are rejected unless explicitly enabled>

1) Open a shell on the primary, 172.16.60.205
[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
................
#create the test database
repset:PRIMARY> use test;
switched to db test
 
#insert test data into the testdb collection
repset:PRIMARY> db.testdb.insert({"test1":"testval1"})
WriteResult({ "nInserted" : 1 })
 
2) Connect to the secondaries, 172.16.60.206 and 172.16.60.207, to check whether the data has replicated.
Checking here on the 172.16.60.206 secondary
[root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
................
repset:SECONDARY> use test;
switched to db test
repset:SECONDARY> show tables;
2019-02-26T15:37:46.446+0800 E QUERY    [thread1] Error: listCollections failed: {
        "operationTime" : Timestamp(1551166663, 1),
        "ok" : 0,
        "errmsg" : "not master and slaveOk=false",
        "code" : 13435,
        "codeName" : "NotMasterNoSlaveOk",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1551166663, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:941:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:953:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:964:16
shellHelper.show@src/mongo/shell/utils.js:853:9
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
 
The command above failed!
That is because MongoDB reads and writes on the primary by default; reads on a secondary are rejected until you enable them:
repset:SECONDARY> db.getMongo().setSlaveOk();
repset:SECONDARY> db.testdb.find();
{ "_id" : ObjectId("5c74ec9267d8c3d06506449b"), "test1" : "testval1" }
repset:SECONDARY> show tables;
testdb
 
As shown above, the test data is now visible on the secondary, i.e. it has been replicated from the primary.
(Run the same commands on the other secondary, 172.16.60.207.)

4. Testing Replica Set Failover
Stop the primary, 172.16.60.205, then check the replica set status: after a round of voting, 172.16.60.206 is elected the new primary, and 172.16.60.207 now syncs its data from 172.16.60.206.

1) Kill mongod on the original primary, 172.16.60.205, to simulate a failure
[root@mongodb-master01 ~]# ps -ef|grep mongodb|grep -v grep|awk '{print $2}'|xargs kill -9
[root@mongodb-master01 ~]# lsof -i:27017
[root@mongodb-master01 ~]#
 
2) Then log in to mongodb on either of the two surviving secondaries (172.16.60.206 or 172.16.60.207) and check the replica set status
[root@mongodb-slave01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.206:27017
.................
repset:PRIMARY> rs.status();
{
        "set" : "repset",
        "date" : ISODate("2019-02-26T08:06:02.996Z"),
        "myState" : 1,
        "term" : NumberLong(2),
        "syncingTo" : "",
        "syncSourceHost" : "",
        "syncSourceId" : -1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1551168359, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1551168359, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1551168359, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1551168359, 1),
                        "t" : NumberLong(2)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.16.60.205:27017",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2019-02-26T08:06:02.917Z"),
                        "lastHeartbeatRecv" : ISODate("2019-02-26T08:03:37.492Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "Connection refused",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "configVersion" : -1
                },
                {
                        "_id" : 1,
                        "name" : "172.16.60.206:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2246,
                        "optime" : {
                                "ts" : Timestamp(1551168359, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-02-26T08:05:59Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1551168228, 1),
                        "electionDate" : ISODate("2019-02-26T08:03:48Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 2,
                        "name" : "172.16.60.207:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 2169,
                        "optime" : {
                                "ts" : Timestamp(1551168359, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1551168359, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-02-26T08:05:59Z"),
                        "optimeDurableDate" : ISODate("2019-02-26T08:05:59Z"),
                        "lastHeartbeat" : ISODate("2019-02-26T08:06:02.861Z"),
                        "lastHeartbeatRecv" : ISODate("2019-02-26T08:06:02.991Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "172.16.60.206:27017",
                        "syncSourceHost" : "172.16.60.206:27017",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1551168359, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1551168359, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
 
After the original primary 172.16.60.205 went down, an election promoted the former secondary 172.16.60.206 to be the new primary.
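The election outcome above follows from simple majority math: a replica set can elect a primary only while a strict majority of its voting members can still reach each other. A quick illustration (plain arithmetic, not driver code):

```python
# A strict majority of voting members is needed to elect a primary.
def majority(voting_members):
    return voting_members // 2 + 1

# With 3 members, majority = 2, so exactly one node may fail (as above);
# larger odd-sized sets tolerate proportionally more failures.
for n in (3, 5, 7):
    print(f"{n} members -> majority {majority(n)} -> tolerates {n - majority(n)} failure(s)")
```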
 
3) Now create test data on the new primary, 172.16.60.206
repset:PRIMARY> use kevin;
switched to db kevin
repset:PRIMARY> db.kevin.insert({"shibo":"hahaha"})
WriteResult({ "nInserted" : 1 })
 
4) Log in to mongodb on the other secondary, 172.16.60.207, and check
[root@mongodb-slave02 ~]# /usr/local/mongodb/bin/mongo 172.16.60.207:27017
................
repset:SECONDARY> use kevin;
switched to db kevin
repset:SECONDARY> db.getMongo().setSlaveOk();
repset:SECONDARY> show tables;
kevin
repset:SECONDARY> db.kevin.find();
{ "_id" : ObjectId("5c74f42bb0b339ed6eb68e9c"), "shibo" : "hahaha" }
 
The secondary 172.16.60.207 replicates data from the new primary, 172.16.60.206.
 
5) Restart mongod on the original primary, 172.16.60.205
[root@mongodb-master01 ~]# nohup /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017 &
 
[root@mongodb-master01 ~]# ps -ef|grep mongodb
root      9162  6977  4 16:14 pts/1    00:00:01 /usr/local/mongodb/bin/mongod -dbpath /data/mongodb/data/replset -replSet repset --bind_ip=172.16.60.205 --port=27017
root      9244  6977  0 16:14 pts/1    00:00:00 grep mongodb
 
Log in to mongodb on any of the three nodes again and check the replica set status
[root@mongodb-master01 ~]# /usr/local/mongodb/bin/mongo 172.16.60.205:27017
....................
repset:SECONDARY> rs.status();
{
        "set" : "repset",
        "date" : ISODate("2019-02-26T08:16:11.741Z"),
        "myState" : 2,
        "term" : NumberLong(2),
        "syncingTo" : "172.16.60.206:27017",
        "syncSourceHost" : "172.16.60.206:27017",
        "syncSourceId" : 1,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1551168969, 1),
                        "t" : NumberLong(2)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1551168969, 1),
                        "t" : NumberLong(2)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1551168969, 1),
                        "t" : NumberLong(2)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1551168969, 1),
                        "t" : NumberLong(2)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.16.60.205:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 129,
                        "optime" : {
                                "ts" : Timestamp(1551168969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
                        "syncingTo" : "172.16.60.206:27017",
                        "syncSourceHost" : "172.16.60.206:27017",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "172.16.60.206:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 127,
                        "optime" : {
                                "ts" : Timestamp(1551168969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1551168969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
                        "optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
                        "lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
                        "lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.518Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1551168228, 1),
                        "electionDate" : ISODate("2019-02-26T08:03:48Z"),
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.16.60.207:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 127,
                        "optime" : {
                                "ts" : Timestamp(1551168969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1551168969, 1),
                                "t" : NumberLong(2)
                        },
                        "optimeDate" : ISODate("2019-02-26T08:16:09Z"),
                        "optimeDurableDate" : ISODate("2019-02-26T08:16:09Z"),
                        "lastHeartbeat" : ISODate("2019-02-26T08:16:10.990Z"),
                        "lastHeartbeatRecv" : ISODate("2019-02-26T08:16:11.655Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "172.16.60.206:27017",
                        "syncSourceHost" : "172.16.60.206:27017",
                        "syncSourceId" : 1,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1551168969, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1551168969, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
 
After recovering from the failure, the original primary 172.16.60.205 rejoined as a secondary of the new primary, 172.16.60.206.

5. MongoDB Read/Write Splitting
So far, the replica set handles failover nicely. But what about excessive read/write load on the primary? The usual answer is read/write splitting.

In most workloads writes are far less frequent than reads, so in this replica set the primary handles writes while the two secondaries serve reads.
1) To enable this, first run setSlaveOk on the SECONDARY nodes.
2) Then direct reads to the secondaries in application code, for example:
import java.util.ArrayList;
import java.util.List;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.ReadPreference;
import com.mongodb.ServerAddress;

public class TestMongoDBReplSetReadSplit {
    public static void main(String[] args) {
        try {
            // Hand the driver all three members; it discovers the topology itself
            List<ServerAddress> addresses = new ArrayList<ServerAddress>();
            addresses.add(new ServerAddress("172.16.60.205", 27017));
            addresses.add(new ServerAddress("172.16.60.206", 27017));
            addresses.add(new ServerAddress("172.16.60.207", 27017));
            MongoClient client = new MongoClient(addresses);
            DB db = client.getDB("test");
            DBCollection coll = db.getCollection("testdb");
            BasicDBObject query = new BasicDBObject();
            query.append("test2", "testval2");

            // Route this read to a secondary node
            ReadPreference preference = ReadPreference.secondary();
            DBObject dbObject = coll.findOne(query, null, preference);
            System.out.println(dbObject);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Besides secondary, there are five read preference modes in total: primary, primaryPreferred, secondary, secondaryPreferred, and nearest.
primary: the default; read only from the primary node.
primaryPreferred: read from the primary, falling back to a secondary only when the primary is unavailable.
secondary: read only from secondary nodes; the caveat is that a secondary's data can be staler than the primary's.
secondaryPreferred: prefer secondaries, falling back to the primary when no secondary is available.
nearest: read from whichever node, primary or secondary, has the lowest network latency.
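As a rough mental model of these five modes, here is a toy Python sketch of which kind of node each mode will consider. It is purely illustrative and not driver code; real drivers also weigh latency and staleness when picking the actual server:

```python
# Toy model: which node categories a read preference mode considers,
# in order of preference, given whether the primary is reachable.
def eligible(mode, primary_up):
    if mode == "primary":
        return ["primary"] if primary_up else []          # read fails if primary is down
    if mode == "primaryPreferred":
        return ["primary"] if primary_up else ["secondary"]
    if mode == "secondary":
        return ["secondary"]                              # may serve stale data
    if mode == "secondaryPreferred":
        return ["secondary", "primary"] if primary_up else ["secondary"]
    if mode == "nearest":
        return (["primary"] if primary_up else []) + ["secondary"]
    raise ValueError(f"unknown mode: {mode}")

print(eligible("primaryPreferred", primary_up=False))
```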

With read/write splitting in place, traffic can be spread across nodes, easing the pressure and answering the earlier question of primary overload. But as the number of secondaries grows, the primary's replication load grows with it. Is there a fix? MongoDB's answer is the arbiter node: an arbiter stores no data and only casts votes during failover elections, so it adds no replication load. Beyond the primary, secondary, and arbiter roles there are also Secondary-Only, Hidden, Delayed, and Non-Voting members:
Secondary-Only: can never become primary, only remain a secondary; useful for keeping low-powered machines from being elected.
Hidden: invisible to clients and never electable as primary, but still votes; typically used for backups.
Delayed: replicates from the primary with a configured time lag; mainly for backups, since with real-time replication an accidental delete propagates to the secondaries immediately and cannot be undone there.
Non-Voting: a secondary with no vote in elections; a pure data backup node.

 
 
posted @ 2019-02-26 20:15  冒蓝火的加特林哒哒哒