MongoDB Study Notes, Part 8: Replica Set in Practice

Environment

  • Ubuntu 12.04
  • MongoDB 3.0.3
  • Three machines: 192.168.236.131, 192.168.236.133, 192.168.236.134

If you are not sure how to install MongoDB, see my earlier study notes.

Step 1:

Run the following on each of the three machines (all three must run it):

root@ubuntu:/usr/local/mongodb#    mongod --dbpath /usr/local/mongodb/data --replSet rs0

Note that the --replSet parameter specifies the name of the replica set; every replica set has a unique name.
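
The same options can also be kept in a config file so that every node starts identically. A minimal sketch (the file path /usr/local/mongodb/mongod.conf is my own choice, not part of the original setup):

# /usr/local/mongodb/mongod.conf
storage:
  dbPath: /usr/local/mongodb/data
replication:
  replSetName: rs0
net:
  port: 27017

root@ubuntu:/usr/local/mongodb#    mongod -f /usr/local/mongodb/mongod.conf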

After it starts you should see output like the following:

2015-06-09T17:54:20.845-0700 I JOURNAL  [initandlisten] journal dir=/usr/local/mongodb/data/journal
2015-06-09T17:54:20.846-0700 I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
2015-06-09T17:54:20.925-0700 I JOURNAL  [durability] Durability thread started
2015-06-09T17:54:20.926-0700 I JOURNAL  [journal writer] Journal writer thread started
2015-06-09T17:54:20.931-0700 I CONTROL  [initandlisten] MongoDB starting : pid=2539 port=27017 dbpath=/usr/local/mongodb/data/ 64-bit host=ubuntu
2015-06-09T17:54:20.931-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:54:20.931-0700 I CONTROL  [initandlisten]
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten]
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten]
2015-06-09T17:54:20.932-0700 I CONTROL  [initandlisten] db version v3.0.3
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] git version: b40106b36eecd1b4407eb1ad1af6bc60593c6105
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1 14 Mar 2012
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] build info: Linux ip-10-216-207-166 3.2.0-36-virtual #57-Ubuntu SMP Tue Jan 8 22:04:49 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] allocator: tcmalloc
2015-06-09T17:54:20.933-0700 I CONTROL  [initandlisten] options: { replication: { replSet: "rs0" }, storage: { dbPath: "/usr/local/mongodb/data/" } }
2015-06-09T17:54:20.954-0700 I NETWORK  [initandlisten] waiting for connections on port 27017
2015-06-09T17:54:20.973-0700 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.236.134:27017, reason: errno:111 Connection refused
2015-06-09T17:54:20.974-0700 W NETWORK  [ReplicationExecutor] Failed to connect to 192.168.236.131:27017, reason: errno:111 Connection refused
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rs0", version: 3, members: [ { _id: 1, host: "192.168.236.133:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "192.168.236.134:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 3, host: "192.168.236.131:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] This node is 192.168.236.133:27017 in the config
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] transition to STARTUP2
2015-06-09T17:54:20.975-0700 I REPL     [ReplicationExecutor] Starting replication applier threads
2015-06-09T17:54:20.977-0700 I REPL     [ReplicationExecutor] transition to RECOVERING

The output looks roughly like this. Once all three machines have started, use the mongo client to log into one of the mongod servers; here I log into the 192.168.236.131 machine:

root@ubuntu:~# mongo

After logging in, switch to the admin database so that we can configure the replica set. The configuration goes like this:

> use admin
switched to db admin
> config = {_id:"rs0",members:[
... {_id:0,host:"192.168.236.131:27017"},
... {_id:1,host:"192.168.236.133:27017"},
... {_id:2,host:"192.168.236.134:27017"}]}
{
        "_id" : "rs0",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.236.131:27017"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.236.133:27017"
                },
                {
                        "_id" : 2,
                        "host" : "192.168.236.134:27017"
                }
        ]
}
> rs.initiate(config);
{ "ok" : 1 }

First define the config document, then initialize it with rs.initiate(config). Once these two steps are complete, the replica set configuration is initialized; the rs0 replica set defines three hosts. (Note that the _id specified in the config must match the value given to the --replSet parameter when starting mongod.)
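
Once initialized, the effective configuration can be inspected with rs.conf() and changed later with rs.reconfig(), which must also be run on the Primary. A small sketch (the priority change is only an illustration, not a step in this walkthrough):

rs0:PRIMARY> rs.conf()                     // show the current replica set configuration
rs0:PRIMARY> var cfg = rs.conf()
rs0:PRIMARY> cfg.members[0].priority = 2   // e.g. prefer 131 as primary
rs0:PRIMARY> rs.reconfig(cfg)              // run on the primary; bumps the config version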

After a moment, MongoDB elects the Primary and Secondary nodes for us. In the mongo client we can then inspect the replica set's state with rs.status():

rs0:OTHER>
rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:10:06.941Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.236.131:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 468,
                        "optime" : Timestamp(1433894773, 1),
                        "optimeDate" : ISODate("2015-06-10T00:06:13Z"),
                        "electionTime" : Timestamp(1433894777, 1),
                        "electionDate" : ISODate("2015-06-10T00:06:17Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 233,
                        "optime" : Timestamp(1433894773, 1),
                        "optimeDate" : ISODate("2015-06-10T00:06:13Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:10:06.278Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:10:06.245Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 233,
                        "optime" : Timestamp(1433894773, 1),
                        "optimeDate" : ISODate("2015-06-10T00:06:13Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:10:05.943Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:10:05.890Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}

In this output, name identifies the host; health indicates whether the host is healthy (0 or 1); and state shows whether the node is a primary, a secondary, or unreachable.
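
These fields make a quick health summary easy to script in the mongo shell. A small sketch using only fields shown in the rs.status() output above, printing one line per member:

rs0:PRIMARY> var s = rs.status()
rs0:PRIMARY> s.members.forEach(function (m) {
...     print(m.name + "  health=" + m.health + "  state=" + m.stateStr)
... })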

If this information is displayed correctly, the whole replica set cluster is up. Now let's verify whether it really replicates data automatically, recovers from failures automatically, and elects a new Primary on its own.

We run the experiment as follows:

  1. Insert data on the Primary node (the 131 machine).
  2. Query it on the two Secondary nodes (133 and 134) to verify that the data is replicated correctly.

rs0:PRIMARY> use test
switched to db test
rs0:PRIMARY> show collections
rs0:PRIMARY> db.guids.insert({"name":"replica set","author":"webinglin"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> exit
bye
root@ubuntu:~# mongo --host 192.168.236.134
MongoDB shell version: 3.0.3
connecting to: 192.168.236.134:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> show dbs
2015-06-09T17:13:49.138-0700 E QUERY    Error: listDatabases failed:{ "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" }
    at Error (<anonymous>)
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rs0:SECONDARY> use test
switched to db test
rs0:SECONDARY> db.guids.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:SECONDARY> show collections()
2015-06-09T17:14:24.219-0700 E QUERY    Error: don't know how to show [collections()]
    at Error (<anonymous>)
    at shellHelper.show (src/mongo/shell/utils.js:733:11)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/utils.js:733
rs0:SECONDARY> show collections
guids
system.indexes
rs0:SECONDARY> exit
bye
root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
local  1.078GB
test   0.078GB
rs0:SECONDARY> use test
switched to db test
rs0:SECONDARY> show collections
guids
system.indexes
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:SECONDARY> exit
bye

This verifies that our cluster deployment succeeded and that data replicates correctly. Next we verify another scenario: after the Primary (131) terminates abnormally, will the two remaining Secondaries automatically elect a new Primary? For this experiment, stop the mongod service on the 131 machine, then connect to either 133 or 134 and inspect the cluster state with rs.status().

Use ps -e | grep mongod to check whether the mongod service is running, then kill the mongod process with killall mongod or kill -15 <pid>.

root@ubuntu:~# ps -e | grep mongod
 3279 pts/0    00:00:19 mongod
root@ubuntu:~# killall mongod
root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:22:40.283Z"),
        "myState" : 2,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.236.131:27017",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:22:39.642Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:18:22.292Z"),
                        "pingMs" : 3,
                        "lastHeartbeatMessage" : "Failed attempt to connect to 192.168.236.131:27017; couldn't connect to server 192.168.236.131:27017 (192.168.236.131), connection attempt failed",
                        "configVersion" : -1
                },
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1169,
                        "optime" : Timestamp(1433895342, 1),
                        "optimeDate" : ISODate("2015-06-10T00:15:42Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 986,
                        "optime" : Timestamp(1433895342, 1),
                        "optimeDate" : ISODate("2015-06-10T00:15:42Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:22:38.952Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:22:38.951Z"),
                        "pingMs" : 6,
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
rs0:SECONDARY> exit
bye

From the output above, we can see that once the original Primary (131) is stopped, the replica set elects a new Primary (the 134 machine).
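
Killing the process is the blunt way to force an election. If you only want to hand over primaryship without taking a node down, the Primary can also step down voluntarily; a short sketch (60 is the number of seconds the node will refuse to be re-elected):

rs0:PRIMARY> rs.stepDown(60)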

To check that the newly elected Primary works properly, we verify data synchronization once more: connect to the 134 Primary, delete the earlier data, then confirm on 133 that the deletion was replicated as well.

root@ubuntu:~# mongo --192.168.236.134
Error parsing command line: unknown option 192.168.236.134
try 'mongo --help' for more information
root@ubuntu:~# mongo --host 192.168.236.134
MongoDB shell version: 3.0.3
connecting to: 192.168.236.134:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
rs0:PRIMARY> use test
switched to db test
rs0:PRIMARY> show collections
guids
system.indexes
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557781aed5ed7ed61c16abfd"), "name" : "mongodb" }
rs0:PRIMARY> db.guids.remove({name:"mongodb"})
WriteResult({ "nRemoved" : 1 })
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:PRIMARY> exit
bye
root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:SECONDARY> exit
bye

The experiment shows that the newly elected Primary also works correctly. Our MongoDB replica set testing is complete.

Dynamically adding and removing nodes

Before starting this experiment, restart the mongod on the 131 machine, then connect to 131 with the mongo client and verify that its data has been synchronized as well.

After logging into 131, we find that the data has indeed been synchronized, and 131 has become a Secondary node.

root@ubuntu:~# mongo
MongoDB shell version: 3.0.3
connecting to: test
Server has startup warnings:
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:25:02.631Z"),
        "myState" : 2,
        "syncingTo" : "192.168.236.133:27017",
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.236.131:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 14,
                        "optime" : Timestamp(1433895834, 1),
                        "optimeDate" : ISODate("2015-06-10T00:23:54Z"),
                        "syncingTo" : "192.168.236.133:27017",
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 13,
                        "optime" : Timestamp(1433895834, 1),
                        "optimeDate" : ISODate("2015-06-10T00:23:54Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:25:01.196Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:25:02.228Z"),
                        "pingMs" : 1,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 13,
                        "optime" : Timestamp(1433895834, 1),
                        "optimeDate" : ISODate("2015-06-10T00:23:54Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:25:01.235Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:25:02.446Z"),
                        "pingMs" : 10,
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
rs0:SECONDARY> exit
bye

Log into the 134 Primary. A node can be removed from the replica set with rs.remove(); here we remove 131 again. After removing it, we also insert new data on the 134 primary.

rs0:PRIMARY> rs.remove("192.168.236.131:27017")
{ "ok" : 1 }
rs0:PRIMARY> rs.status
function () { return db._adminCommand("replSetGetStatus"); }
rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:32:15.795Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1562,
                        "optime" : Timestamp(1433896329, 1),
                        "optimeDate" : ISODate("2015-06-10T00:32:09Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:32:13.909Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:32:15.633Z"),
                        "pingMs" : 1,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 2
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1729,
                        "optime" : Timestamp(1433896329, 1),
                        "optimeDate" : ISODate("2015-06-10T00:32:09Z"),
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 2,
                        "self" : true
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
rs0:PRIMARY> db.guids.insert({"name":"remove one node dync"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
rs0:PRIMARY> exit
bye

After removing the 131 node, we inserted new data on the primary. Now, without stopping the mongod service on 131, connect to it with mongo and inspect its data:

root@ubuntu:~# mongo --host 192.168.236.131
MongoDB shell version: 3.0.3
connecting to: 192.168.236.131:27017/test
Server has startup warnings:
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten]
> db.guids.find()
Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
> db.slaveOk()
2015-06-09T17:33:40.243-0700 E QUERY    TypeError: Property 'slaveOk' of object test is not a function
    at (shell):1:4
> rs.slaveOk()
> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
> exit
bye

The result shows that the document {name:"remove one node dync"} newly inserted on 134 was not synchronized to 131 (which had been removed from the replica set).

To make the result more conclusive, check whether 133 did synchronize the data:

root@ubuntu:~# mongo --host 192.168.236.133
MongoDB shell version: 3.0.3
connecting to: 192.168.236.133:27017/test
Server has startup warnings:
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.647-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:11.648-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
rs0:SECONDARY> exit
bye

The data shows that 133 synchronized the document {"name":"remove one node dync"} inserted on the 134 primary, so the experiment of dynamically removing a node from the replica set succeeded. How, then, do we dynamically add a node to the replica set?

The principle is the same; the call simply becomes rs.add("192.168.236.131:27017"):

root@ubuntu:~# mongo --host 192.168.236.134
MongoDB shell version: 3.0.3
connecting to: 192.168.236.134:27017/test
Server has startup warnings:
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:03:27.744-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:03:27.745-0700 I CONTROL  [initandlisten]
rs0:PRIMARY> rs.add("192.168.236.131:27017");
{ "ok" : 1 }
rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2015-06-10T00:34:45.974Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 1,
                        "name" : "192.168.236.133:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1712,
                        "optime" : Timestamp(1433896482, 1),
                        "optimeDate" : ISODate("2015-06-10T00:34:42Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:34:44.207Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:34:45.901Z"),
                        "pingMs" : 2,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "192.168.236.134:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1879,
                        "optime" : Timestamp(1433896482, 1),
                        "optimeDate" : ISODate("2015-06-10T00:34:42Z"),
                        "electionTime" : Timestamp(1433895503, 1),
                        "electionDate" : ISODate("2015-06-10T00:18:23Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 3,
                        "name" : "192.168.236.131:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1,
                        "optime" : Timestamp(1433896329, 1),
                        "optimeDate" : ISODate("2015-06-10T00:32:09Z"),
                        "lastHeartbeat" : ISODate("2015-06-10T00:34:44.217Z"),
                        "lastHeartbeatRecv" : ISODate("2015-06-10T00:34:44.234Z"),
                        "pingMs" : 1,
                        "syncingTo" : "192.168.236.134:27017",
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> exit
bye

The rs.status() output shows that the 131 node has successfully rejoined the replica set. After rejoining, it should in theory sync over the data that was inserted on the 134 primary: while it was removed it did not replicate, but once re-added to the set it should. Here is the result:

root@ubuntu:~# mongo
MongoDB shell version: 3.0.3
connecting to: test
Server has startup warnings:
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.146-0700 I CONTROL  [initandlisten]
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2015-06-09T17:24:49.147-0700 I CONTROL  [initandlisten]
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> db.guids.find()
{ "_id" : ObjectId("557780ebd147e9391020860d"), "name" : "replica set", "author" : "webinglin" }
{ "_id" : ObjectId("557785bcbb56172c8e069341"), "name" : "remove one node dync" }
rs0:SECONDARY> exit
bye

The result shows that dynamic addition also works correctly: after dynamically adding the 131 node back into the replica set, the data is synchronized successfully.

Note

Both rs.add("host:port") and rs.remove("host:port") must be executed on the Primary node.

The add method can also take a document, which lets you specify more options for the new Secondary, for example priority: 0, or priority: 0, hidden: true, or priority: 0, hidden: true, arbiterOnly: true. The full member document has this shape:

{
  _id: <int>,
  host: <string>,
  arbiterOnly: <boolean>,
  buildIndexes: <boolean>,
  hidden: <boolean>,
  priority: <number>,
  tags: <document>,
  slaveDelay: <int>,
  votes: <number>
}
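
For example, to add 131 back as a low-priority hidden backup rather than a normal member, pass such a document to rs.add. A sketch (the _id must be unique within the set; the arbiter host below is a hypothetical placeholder):

rs0:PRIMARY> rs.add({_id: 3, host: "192.168.236.131:27017", priority: 0, hidden: true})
rs0:PRIMARY> rs.addArb("<arbiter-host>:27017")    // shorthand for adding an arbiterOnly member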

To add access control to a replica set, refer to the security section of the master-slave replication notes: generate a keyfile with openssl, then pass the keyFile option when starting mongod.
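
A minimal sketch of that setup, reusing this article's paths (the keyfile location itself is my own choice); every member of the set must start with the same keyfile:

root@ubuntu:~# openssl rand -base64 741 > /usr/local/mongodb/keyfile
root@ubuntu:~# chmod 600 /usr/local/mongodb/keyfile
root@ubuntu:~# mongod --dbpath /usr/local/mongodb/data --replSet rs0 --keyFile /usr/local/mongodb/keyfile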

Please credit the source when reposting! Original: http://webinglin.github.io

2015-06-09