
Mongo-Shake Installation and Configuration (2)

2023-01-28 11:53  abce

Download

https://github.com/alibaba/MongoShake/releases

Installation

# tar -zxvf mongo-shake-v2.8.2.tgz && mv mongo-shake-v2.8.2 mongoshake && mv mongoshake/ /usr/local/
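A quick sanity check that the files landed where expected (exact contents vary by release; collector.linux, collector.conf and mongoshake-stat are the ones used later in this post):

# ls /usr/local/mongoshake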

Set the environment variable

# export PATH=/usr/local/mongoshake/bin:$PATH
# vi /etc/profile
Add the following line:
export PATH=/usr/local/mongoshake/bin:$PATH
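
To apply the change to the current shell and confirm it took effect:

# source /etc/profile
# echo $PATH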

Edit the collector.conf configuration file

# Source connection string; separate different mongod instances with commas
mongo_urls = mongodb://admin:pwd456@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 

# Sync mode: all = full + incremental sync, full = full sync only, incr = incremental sync only
sync_mode = all

# Tunnel mode.
tunnel = direct

# Tunnel address, in the same format as mongo_urls: the ConnectionStringURI of the target MongoDB instance.
tunnel.address = mongodb://admin:pwd456@192.168.56.216:27017

mongo_connect_mode = secondaryPreferred  

filter.ddl_enable = true 
filter.oplog.gids = false

# MongoDB address where checkpoints are written, used to support resumable sync. Since version 2.4 this no longer needs to be set to the source config server address. If left empty, both replica sets and sharded clusters write checkpoints to the source (db=mongoshake)
checkpoint.storage.url = 
checkpoint.storage.db = mongoshake 
checkpoint.storage.collection = ckpt_default

full_sync.create_index = background
# The official recommendation is change_stream, which pulls change events from the source (supported only on MongoDB 4.0 and above).
incr_sync.mongo_fetch_method = oplog

# Databases to migrate
#filter.namespace.white = test
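
For example, to replicate only the testdb database used in the demo below, you could set the whitelist like this (a sketch; check the comments in your collector.conf for the exact namespace syntax and separator supported by your version):

filter.namespace.white = testdb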

Start MongoShake and print log output

# ./collector.linux -conf=collector.conf -verbose 1
[2023/01/17 13:50:43 CST] [INFO] log init succ. log.dir[] log.name[collector.log] log.level[info]
[2023/01/17 13:50:43 CST] [INFO] MongoDB Version Source[5.0.13] Target[5.0.13]
[2023/01/17 13:50:43 CST] [WARN] 
______________________________
\                             \           _         ______ |
 \                             \        /   \___-=O'/|O'/__|
  \  MongoShake, Here we go !!  \_______\          / | /    )
  /                             /        '/-==__ _/__|/__=-|  -GM
 /        Alibaba Cloud        /         *             \ | |
/                             /                        (o)
------------------------------

if you have any problem, please visit https://github.com/alibaba/MongoShake/wiki/FAQ

[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] Collector startup. shard_by[collection] gids[[]]
[2023/01/17 13:50:43 CST] [INFO] Collector configuration {"ConfVersion":10,"Id":"mongoshake","MasterQuorum":false,"FullSyncHTTPListenPort":9101,"IncrSyncHTTPListenPort":9100,"SystemProfilePort":9200,"LogLevel":"info","LogDirectory":"","LogFileName":"collector.log","LogFlush":false,"SyncMode":"all","MongoUrls":["mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017"],"MongoCsUrl":"","MongoSUrl":"","MongoSslRootCaFile":"","MongoSslClientCaFile":"","MongoConnectMode":"secondaryPreferred","Tunnel":"direct","TunnelAddress":["mongodb://admin:***@192.168.56.216:27017"],"TunnelMessage":"raw","TunnelKafkaPartitionNumber":1,"TunnelJsonFormat":"","TunnelMongoSslRootCaFile":"","FilterNamespaceBlack":[],"FilterNamespaceWhite":[],"FilterPassSpecialDb":[],"FilterDDLEnable":true,"FilterOplogGids":false,"CheckpointStorageUrl":"mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017","CheckpointStorageDb":"mongoshake","CheckpointStorageCollection":"ckpt_default","CheckpointStorageUrlMongoSslRootCaFile":"","CheckpointStartPosition":1,"TransformNamespace":[],"SpecialSourceDBFlag":"","FullSyncReaderCollectionParallel":6,"FullSyncReaderWriteDocumentParallel":8,"FullSyncReaderDocumentBatchSize":128,"FullSyncReaderFetchBatchSize":8192,"FullSyncReaderParallelThread":1,"FullSyncReaderParallelIndex":"_id","FullSyncCollectionDrop":true,"FullSyncCreateIndex":"none","FullSyncReaderOplogStoreDisk":false,"FullSyncReaderOplogStoreDiskMaxSize":256000,"FullSyncExecutorInsertOnDupUpdate":false,"FullSyncExecutorFilterOrphanDocument":false,"FullSyncExecutorMajorityEnable":false,"IncrSyncMongoFetchMethod":"oplog","IncrSyncChangeStreamWatchFullDocument":false,"IncrSyncReaderFetchBatchSize":8192,"IncrSyncOplogGIDS":[],"IncrSyncShardKey":"collection","IncrSyncShardByObjectIdWhiteList":[],"IncrSyncWorker":8,"IncrSyncTunnelWriteThread":8,"IncrSyncTargetDelay":0,"IncrSyncWorkerBatchQueueSize":64,"IncrSyncAdaptiveBatchingMaxSize":1024,"IncrSyncFetcherBufferCapacity":256,"IncrSyncExecutorUpsert":false,"IncrSyncExecutorInsertOnDupUpdate":false,"IncrSyncConflictWriteTo":"none","IncrSyncExecutorMajorityEnable":false,"CheckpointStorage":"database","CheckpointInterval":5000,"FullSyncExecutorDebug":false,"IncrSyncDBRef":false,"IncrSyncExecutor":1,"IncrSyncExecutorDebug":false,"IncrSyncReaderDebug":"","IncrSyncCollisionEnable":false,"IncrSyncReaderBufferTime":1,"IncrSyncWorkerOplogCompressor":"none","IncrSyncTunnelKafkaDebug":"","Version":"improve-2.8.2,78d0e913a561b651af2b6310183a8eb881555782,release,go1.15.10,2022-12-15_23:03:06","SourceDBVersion":"5.0.13","TargetDBVersion":"5.0.13","IncrSyncTunnel":"","IncrSyncTunnelAddress":null,"IncrSyncTunnelMessage":"","HTTPListenPort":0,"SystemProfile":0}
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] GetAllTimestamp biggestNew:{1673934640 1}, smallestNew:{1673934640 1}, biggestOld:{1673591766 1}, smallestOld:{1673591766 1}, MongoSource:[url[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017], name[RS0]], tsMap:map[RS0:{7188021901824884737 7189494534441533441}]
[2023/01/17 13:50:43 CST] [INFO] all node timestamp map: map[RS0:{7188021901824884737 7189494534441533441}] CheckpointStartPosition:{1 0}
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] RS0 Regenerate checkpoint but won't persist. content: {"name":"RS0","ckpt":1,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2023/01/17 13:50:43 CST] [INFO] RS0 checkpoint using mongod/replica_set: {"name":"RS0","ckpt":1,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}, ckptRemote set? [false]
[2023/01/17 13:50:43 CST] [INFO] RS0 syncModeAll[true] ts.Oldest[7188021901824884737], confTsMongoTs[4294967296]
[2023/01/17 13:50:43 CST] [INFO] start running with mode[all], fullBeginTs[7189494534441533441[1673934640, 1]]
[2023/01/17 13:50:43 CST] [INFO] run serialize document oplog
[2023/01/17 13:50:43 CST] [INFO] source is replica or mongos, no need to fetching chunk map
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] all namespace: map[{testdb testdb}:{}]
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] GetAllTimestamp biggestNew:{1673934640 1}, smallestNew:{1673934640 1}, biggestOld:{1673591772 1}, smallestOld:{1673591772 1}, MongoSource:[url[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017], name[RS0]], tsMap:map[RS0:{7188021927594688513 7189494534441533441}]
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] replication from [replica] to [replica]
[2023/01/17 13:50:43 CST] [INFO] document syncer-0 do replication for url=mongodb://admin:pwd456@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:43 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-5 sync ns {testdb testdb} to {testdb testdb} begin
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] NewDocumentSplitter db[testdb] col[testdb] res[{900000 8.64e+07 2.5280512e+07}], pieceByteSize[86400000]
[2023/01/17 13:50:43 CST] [INFO] splitter[DocumentSplitter src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] count[900000] pieceByteSize[82 MB] pieceNumber[0]] disable split or no need
[2023/01/17 13:50:43 CST] [INFO] splitter[DocumentSplitter src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] count[900000] pieceByteSize[82 MB] pieceNumber[0]] exits
[2023/01/17 13:50:43 CST] [INFO] reader[DocumentReader id[0], src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] query[map[]]] client is empty, create one
[2023/01/17 13:50:43 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:43 CST] [INFO] reader[DocumentReader id[0], src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] query[map[]] docCursorId[7003775289556676838]] generates new cursor
[2023/01/17 13:50:48 CST] [INFO] [name=RS0, stage=full, get=663552, write_success=663424, tps=137216]
[2023/01/17 13:50:50 CST] [INFO] reader[DocumentReader id[0], src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] query[map[]] docCursorId[0]] finish
[2023/01/17 13:50:50 CST] [INFO] splitter reader finishes: DocumentReader id[0], src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] query[map[]] docCursorId[0]
[2023/01/17 13:50:50 CST] [INFO] reader[DocumentReader id[0], src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] ns[{testdb testdb}] query[map[]] docCursorId[0]] close
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] all readers finish, wait all writers finish
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:50 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-5 sync ns {testdb testdb} to {testdb testdb} successful. db syncer-0 progress 100%
[2023/01/17 13:50:50 CST] [INFO] syncer[DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273]] closed
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-0 finish
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-5 finish
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-4 finish
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-3 finish
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-2 finish
[2023/01/17 13:50:50 CST] [INFO] DBSyncer id[0] source[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] target[mongodb://admin:***@192.168.56.216:27017] startTime[2023-01-17 13:50:43.86081563 +0800 CST m=+0.273674273] collExecutor-1 finish
[2023/01/17 13:50:50 CST] [INFO] metric[name[RS0] stage[full]] exit
[2023/01/17 13:50:51 CST] [INFO] try to set checkpoint with map[map[RS0:{7188021927594688513 7189494534441533441}]]
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] RS0 Regenerate checkpoint but won't persist. content: {"name":"RS0","ckpt":1,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2023/01/17 13:50:51 CST] [INFO] RS0 Record new checkpoint in MongoDB success [1673934640]
[2023/01/17 13:50:51 CST] [INFO] document syncer sync end
[2023/01/17 13:50:51 CST] [INFO] Close client with mongodb://admin:***@192.168.56.216:27017
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Close client with mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017
[2023/01/17 13:50:51 CST] [INFO] GetAllTimestamp biggestNew:{1673934651 2}, smallestNew:{1673934651 2}, biggestOld:{1673591766 1}, smallestOld:{1673591766 1}, MongoSource:[url[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017], name[RS0]], tsMap:map[RS0:{7188021901824884737 7189494581686173698}]
When the full sync completes, the log prints:
[2023/01/17 13:50:51 CST] [INFO] ------------------------full sync done!------------------------
[2023/01/17 13:50:51 CST] [INFO] oldestTs[7188021901824884737[1673591766, 1]] fullBeginTs[7189494534441533441[1673934640, 1]] fullFinishTs[7189494581686173698[1673934651, 2]]
[2023/01/17 13:50:51 CST] [INFO] finish full sync, start incr sync with timestamp: fullBeginTs[7189494534441533441[1673934640, 1]], fullFinishTs[7189494581686173698[1673934651, 2]]
Then incremental sync starts:
[2023/01/17 13:50:51 CST] [INFO] start incr replication
[2023/01/17 13:50:51 CST] [INFO] RealSourceIncrSync[0]: url[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017], name[RS0], startTimestamp[7189494534441533441]
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-0 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-1 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-2 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-3 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-4 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-5 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-6 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.216:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] Collector-worker-7 start working with jobs batch queue. buffer capacity 64
[2023/01/17 13:50:51 CST] [INFO] Syncer[RS0] poll oplog syncer start. ckpt_interval[5000ms], gid[[]], shard_key[collection]
[2023/01/17 13:50:51 CST] [INFO] Oplog sync[RS0] create checkpoint manager with url[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] table[mongoshake.ckpt_default] start-position[7189494534441533441[1673934640, 1]]
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] RS0 Load exist checkpoint. content {"name":"RS0","ckpt":7189494534441533441,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2023/01/17 13:50:51 CST] [INFO] load checkpoint value: {"name":"RS0","ckpt":7189494534441533441,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2023/01/17 13:50:51 CST] [INFO] persister replset[RS0] update fetch status to: store memory and apply
[2023/01/17 13:50:51 CST] [INFO] RS0 Load exist checkpoint. content {"name":"RS0","ckpt":7189494534441533441,"version":2,"fetch_method":"","oplog_disk_queue":"","oplog_disk_queue_apply_finish_ts":1}
[2023/01/17 13:50:51 CST] [INFO] set query timestamp: 7189494534441533441[1673934640, 1]
[2023/01/17 13:50:51 CST] [INFO] update or.query to map[ts:map[$gt:{1673934640 1}]]
[2023/01/17 13:50:51 CST] [INFO] start oplogReader[src:mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 replset:RS0] fetcher with src[mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017] replica-name[RS0] query-ts[{1673934640 1}]
[2023/01/17 13:50:51 CST] [INFO] oplogReader[src:mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 replset:RS0] ensure network
[2023/01/17 13:50:51 CST] [INFO] New session to mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 successfully
[2023/01/17 13:50:51 CST] [INFO] oplogReader[src:mongodb://admin:***@192.168.56.213:27017,192.168.56.214:27017,192.168.56.215:27017 replset:RS0] generates new cursor query[map[ts:map[$gt:{1673934640 1}]]]
[2023/01/17 13:50:56 CST] [INFO] [name=RS0, stage=incr, get=3, filter=3, write_success=0, tps=0, ckpt_times=0, lsn_ckpt={0[0, 0], 1970-01-01 08:00:00}, lsn_ack={0[0, 0], 1970-01-01 08:00:00}]]
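
Running with -verbose 1 keeps the collector in the foreground and mirrors the log to the console, which is convenient for a first run. For day-to-day operation you would normally start it in the background and follow collector.log instead (a minimal sketch; the log path depends on your log.dir setting):

# nohup ./collector.linux -conf=collector.conf >/dev/null 2>&1 &
# tail -f logs/collector.log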

You will find that a mongoshake database has been created automatically on the source:

RS0:PRIMARY> show dbs
admin       0.000GB
config      0.000GB
local       0.002GB
mongoshake  0.000GB
RS0:PRIMARY> 
# there is a checkpoint collection
RS0:PRIMARY> use mongoshake;
switched to db mongoshake
RS0:PRIMARY> show tables;
ckpt_default
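
The ckpt_default collection holds the acknowledged oplog position used for resuming incremental sync; its fields match the checkpoint content printed in the log above. You can inspect it directly:

RS0:PRIMARY> db.ckpt_default.find().pretty()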

Monitor MongoShake status

# ./mongoshake-stat --port=9100
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|        log_size_avg |        log_size_max |        logs_get/sec |       logs_repl/sec |    logs_success/sec |            lsn.time |        lsn_ack.time |       lsn_ckpt.time |            now.time |             replset |             tps/sec |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|             103.00B |             422.00B |                none |                none |                none | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 2023-01-17 13:53:55 |                 RS0 |                none |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|             103.00B |             422.00B |                   0 |                   0 |                   0 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 2023-01-17 13:53:56 |                 RS0 |                   0 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|             103.00B |             422.00B |                   0 |                   0 |                   0 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 2023-01-17 13:53:57 |                 RS0 |                   0 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
|             103.00B |             422.00B |                   0 |                   0 |                   0 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 1970-01-01 08:00:00 | 2023-01-17 13:53:58 |                 RS0 |                   0 |
|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|---------------------|
...
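
mongoshake-stat polls the incremental-sync HTTP port (9100 here, as shown in the startup configuration). You can also query the REST interface directly; the /repl endpoint below is the one described in the MongoShake wiki, so verify it against your version:

# curl -s http://127.0.0.1:9100/repl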

Now you can verify that synchronization works.

Generate test data

for(var i=1;i<=900000;i++){db.testdb.insert({x:i,name:"abce",name1:"abce",name2:"abce",name3:"abce"})}
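
After the loop finishes, a simple way to verify replication is to compare document counts on the source and the target in the mongo shell (testdb.testdb is the namespace used throughout this post):

RS0:PRIMARY> use testdb
RS0:PRIMARY> db.testdb.count()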