Scenario: MongoDB test cases
DBA - M202
Vagrant and VMware setup
C:\HashiCorp\Vagrant\bin
Vagrantfile location: D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0
Modified Vagrantfile inside.
cd D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0
D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0>C:\HashiCorp\Vagrant\bin\vagrant.exe init
D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0>C:\HashiCorp\Vagrant\bin\vagrant.exe up
192.168.56.1 = interface#3
### Assign static IP:
m202@m202-ubuntu1404:~$ cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
#auto lo
#iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.56.11
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
gateway 192.168.56.1
sudo /etc/init.d/networking restart
===============================
Disaster Recovery.
==============================
More complex than you think
Trade-offs:
1. More downtime means more loss.
2. 99.9999% availability = ~30 sec of downtime per year.
3. e.g. $30 million of damage to the company for 30 sec of downtime.
4. How much downtime can be tolerated.
Reference Numbers for Availability (per Year):
30 sec = ~99.9999%
1 min  = ~99.9998%
1 hr   = ~99.9886%
1 day  = ~99.7260%
3 days = can still claim > 99% availability
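A quick sanity check of the numbers above (mongo shell / JavaScript sketch):
var secondsPerYear = 365 * 24 * 3600;
var availability = function (downtimeSeconds) { return (1 - downtimeSeconds / secondsPerYear) * 100; };
availability(30);      // ~99.9999
availability(60);      // ~99.9998
availability(3600);    // ~99.9886
availability(86400);   // ~99.7260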
Tolerance for:
==========================
Data loss / Downtime / Reduced capacity
Based on 3 groups.
Group 1 - Low tolerance for data loss & no downtime, e.g. an AD server.
Design - avoid data loss.
Design - avoid downtime.
Group 2 - Low tolerance for data loss but downtime is acceptable, e.g. a learning system.
Group 3 - High tolerance for data loss, e.g. a caching application backed by MongoDB.
Backup Strategies:
==============================================
Up to ~30 min of data loss/downtime may be acceptable.
Backup considerations:
1) How quickly can I restore?
2) Test backups often - if a bug in the software corrupts data, restore the backup and run against it.
3) Coordinated backups across multiple hosts, so the backup is consistent. This ties back to point 2.
1. File-system-based backup:
Clean shutdown and copy the files: scp, cp, rsync.
e.g. fsyncLock or a clean shutdown, then a file-system copy; use a secondary to take the backup (see the sketch after this list).
2. Filesystem based snapshot:
Point in time guarantee.
Must include journal.
I/O overhead.
LVM has snapshot capability, EBS, Netapp etc.
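A minimal mongo shell sketch of the file-copy approach above, assuming the backup is taken from a secondary:
db.fsyncLock()     // flush pending writes to disk and block new writes on this member
// ... copy the dbpath files from another terminal (cp / scp / rsync, or take the snapshot) ...
db.fsyncUnlock()   // release the lock once the copy or snapshot has finished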
mongod --port 30001 --dbpath mongod-pri --replSet CorruptionTest --smallfiles --oplogSize 128 --nojournal
mongod --port 30002 --dbpath mongod-sec --replSet CorruptionTest --smallfiles --oplogSize 128 --nojournal
mongod --port 30003 --dbpath mongod-arb --replSet CorruptionTest --nojournal
24k * 8 = 192k.
100,000 on Monday
102,000 reads/sec on Tuesday,
104,040 on Wednesday,
106,121 on Thursday,
108,243 on Friday
24k * 5 = 120k total capacity across 5 servers
24k = 100% of one server's capacity
Peak threshold: 90% of 24k = 21.6k per server
21k * 5 = 105k
21.6k * 5 = 108,000 (reached on Friday)
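The read numbers above grow by 2% per day (compounding); a small mongo shell / JavaScript sketch that reproduces them and flags the day the 90% threshold (21.6k/server * 5 servers = 108,000 reads/sec) is crossed:
var perServerCapacity = 24000;                      // reads/sec a single server can sustain
var servers = 5;
var threshold = 0.9 * perServerCapacity * servers;  // 108,000 reads/sec
var load = 100000;                                  // Monday's load
["Mon", "Tue", "Wed", "Thu", "Fri"].forEach(function (day) {
    print(day + ": " + Math.round(load) + (load >= threshold ? "  <-- above 90% of capacity" : ""));
    load *= 1.02;                                   // 2% daily growth
});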
Homework: 2.4: Using oplog replay to restore
--Point in Time
http://www.codepimp.org/2014/08/replay-the-oplog-in-mongodb/
http://stackoverflow.com/questions/15444920/modify-and-replay-mongodb-oplog
196 mkdir recovery
197 mongod --dbpath=/data/MongoDB/recovery --port 3004 --fork
198 mongod --dbpath=/data/MongoDB/recovery --port 3004 --fork --logpath ./recovery_log
199 mongorestore --3004 --db backupDB backupDB
200 mongorestore --port 3004 --db backupDB backupDB
201 mkdir oplogD
202 mongodump -d local -c oplog.rs --port 30001 -o oplogD
203 ls -lrt
204 mongorestore --port 3004 --oplogReplay --oplogLimit="1398778745:1" ./oplogD
205 ls -l oplogD
206 ls -l oplogD/local/
207 mongorestore --port 3004 --oplogReplay --oplogLimit="1398778745:1" ./oplogD/local/oplog.rs.bson
208 mongo --port 3004
209 BackupTest:PRIMARY> db.oplog.rs.find({op: "c"})
210 { "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
211 mkdir oplogR
212 mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
213 bsondump oplogR/oplog.bson > oplog.read
214 cat oplog.read | grep -A 10 -B 10 "drop"
215 mongorestore --port 3004 --oplogReplay --oplogLimit="1398778745:1" ./oplogR
216 mongo --port 3004
217 ls -ld */
218 ls -l recovery
219 ps -aef | grep mongo
220 mongod -f mongod__homework_hw_m202_w3_week3_wk3_536bf5318bb48b4bb3853f31.conf --shutdown
221 cp recovery/backupDB.* backuptest/
222 mongod -f mongod__homework_hw_m202_w3_week3_wk3_536bf5318bb48b4bb3853f31.conf
223 mongo --port 30001
m202@m202-ubuntu1404:/data/MongoDB$ cat oplog.read | grep -A 10 -B 10 "drop"
{"h":{"$numberLong":"-3486134360601788719"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:30.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9991}},"v":2}
{"h":{"$numberLong":"-6091430578150945877"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:31.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9992}},"v":2}
{"h":{"$numberLong":"2638276158384822314"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:32.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9993}},"v":2}
{"h":{"$numberLong":"826924733673180424"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:33.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9994}},"v":2}
{"h":{"$numberLong":"6784254043238495315"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:34.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9995}},"v":2}
{"h":{"$numberLong":"-7899106625682405931"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:35.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9996}},"v":2}
{"h":{"$numberLong":"-1666073625494465588"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:36.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9997}},"v":2}
{"h":{"$numberLong":"7346874363465863058"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:37.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9998}},"v":2}
{"h":{"$numberLong":"9124493582125599509"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:38.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9999}},"v":2}
{"h":{"$numberLong":"1832517689805078463"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:39.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":10000}},"v":2}
{"h":{"$numberLong":"-4262957146204779874"},"ns":"backupDB.$cmd","o":{"drop":"backupColl"},"op":"c","ts":{"$timestamp":{"t":1398778745,"i":1}},"v":2}
m202@m202-ubuntu1404:/data/MongoDB$
BackupTest:PRIMARY> db.oplog.rs.find({"o":{"drop":"backupColl"}})
{ "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
BackupTest:PRIMARY>
BackupTest:PRIMARY> db.oplog.rs.find({"op":{$nin:["u","i"]}})
{ "ts" : Timestamp(1398771810, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
BackupTest:PRIMARY>
BackupTest:PRIMARY> db.oplog.rs.distinct("op")
[ "n", "i", "u", "c" ]
BackupTest:PRIMARY>
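A small mongo shell sketch of how the --oplogLimit value used above ("1398778745:1") can be derived from the drop entry; this assumes a connection to the member that still holds the oplog (port 30001) and that the Timestamp fields are readable as .t and .i:
var dropOp = db.getSiblingDB("local").oplog.rs.findOne({op: "c", "o.drop": "backupColl"});
var oplogLimit = dropOp.ts.t + ":" + dropOp.ts.i;   // "1398778745:1" - replay stops just before the drop
print(oplogLimit);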
Week 3:
===================
mongo --nodb
var rst = new ReplSetTest( { name: "rsTest", nodes: 3 } )
rst.startSet()
rs.initiate()
rs.add("Wills-MacBook-Pro.local:31001")
rs.add("Wills-MacBook-Pro.local:31002")
var rst = new ReplSetTest( { name: "rsTest", nodes: { node1: {}, node2: {}, arb: {arbiter: true} } } )
*node1:{smallfiles:"", oplogSize:40, noprealloc:null}
rs.initiate()
rs.add("Wills-MacBook-Pro.local:31001")
rs.addArb("Wills-MacBook-Pro.local:31002")
for(var i=0;i<=100000; i++) {db.testColl.insert({"a":i}); sleep(100);}
kill -SIGSTOP 3195
kill -SIGINT 6472; kill -SIGCONT 3195
MongoDB replica set member states:
0 - STARTUP
5 - STARTUP2
1 - PRIMARY
2 - SECONDARY
3 - RECOVERING
4 - FATAL
6 - UNKNOWN
7 - ARBITER
8 - DOWN
9 - ROLLBACK
10 - SHUNNED
States 1, 2, 3, 7, 9 can vote; the rest can't vote.
maxConns per mongos = (maxPrimaryConnections - (numOfSecondaries * 3) - (numberOfOthers * 3)) / numOfMongos
Example: (10000 - (2*3) - (6*3)) / 64 = 155.875; take ~90% of 155.875 = ~140
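The same calculation as a mongo shell / JavaScript sketch (numbers taken from the example above):
var maxPrimaryConnections = 10000;   // connection limit we want to protect on the primary
var numSecondaries = 2, numOthers = 6, numMongos = 64;
var perMongos = (maxPrimaryConnections - numSecondaries * 3 - numOthers * 3) / numMongos;  // 155.875
var maxConnsSetting = Math.floor(perMongos * 0.9);  // ~140, leaving ~10% headroom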
alias ins="ps -aef | grep mongod"
alias killmo="ins | grep -v grep | awk '{print \$2}' | xargs kill -9"
============Reverse case=======DDL avoid (create Index on Secondary only)===========
kill -SIGSTOP 8650
kill -SIGINT 3195; kill -SIGCONT 8650
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/1 --logpath /data/MongoDB/week3/three-member-replica-set/1/mongod.log --smallfiles --noprealloc --port 27017 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/2 --logpath /data/MongoDB/week3/three-member-replica-set/2/mongod.log --smallfiles --noprealloc --port 27018 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/3 --logpath /data/MongoDB/week3/three-member-replica-set/3/mongod.log --smallfiles --noprealloc --port 27019 --replSet threeMemberReplicaSet --fork
===================DML rollback scenario=================
for(var i=0;i<=100000; i++) {db.testColl.insert({"a":i}); sleep(100);}
kill -SIGSTOP 9748
kill -SIGINT 9738; kill -SIGCONT 9748
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/1 --logpath /data/MongoDB/week3/three-member-replica-set/1/mongod.log --smallfiles --noprealloc --port 27017 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/2 --logpath /data/MongoDB/week3/three-member-replica-set/2/mongod.log --smallfiles --noprealloc --port 27018 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/3 --logpath /data/MongoDB/week3/three-member-replica-set/3/mongod.log --smallfiles --noprealloc --port 27019 --replSet threeMemberReplicaSet --fork
######Rollback Block from log file
2016-06-11T19:29:40.741+0100 I REPL [rsBackgroundSync] replSet rollback findcommonpoint scanned : 2685
2016-06-11T19:29:40.741+0100 I REPL [rsBackgroundSync] replSet rollback 3 fixup
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 3.5
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 4 n:1992 ### n is the number of documents rolled back
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] replSet minvalid=Jun 11 19:29:40 575c5894:7
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 4.6
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 4.7
2016-06-11T19:29:40.846+0100 I REPL [rsBackgroundSync] rollback 5 d:1992 u:0
2016-06-11T19:29:40.846+0100 I REPL [rsBackgroundSync] rollback 6
2016-06-11T19:29:40.851+0100 I REPL [rsBackgroundSync] rollback done
2016-06-11T19:29:40.851+0100 I REPL [ReplicationExecutor] transition to RECOVERING
2016-06-11T19:29:40.851+0100 I REPL [rsBackgroundSync] rollback finished
2016-06-11T19:29:40.851+0100 I REPL [ReplicationExecutor] syncing from: localhost:27018
mongod --dbpath ./1 --logpath /data/MongoDB/week3/three-member-replica-set/1/mongod.log --smallfiles --noprealloc --port 27017 --replSet threeMemberReplicaSet --fork
###################
Week 4:
mongo --nodb
cluster = new ShardingTest({shards: 3, chunksize: 1, config: 3, other: {rs: true}})
##Connect mongos
mongo --port 30999
db.users.ensureIndex({email:1})
db.users.getIndexes()
mongos> sh.shardCollection("shardTest.users", {email:1})
{ "collectionsharded" : "shardTest.users", "ok" : 1 }
for ( var x=97; x<97+26; x++ ) {
for( var y=97; y<97+26; y+=6 ) {
var prefix = String.fromCharCode(x) + String.fromCharCode(y);
db.runCommand( { split : "shardTest.users", middle : { email : prefix } } );
}
}
use config
db.chunks.find().count()
mongos> first_doc = db.chunks.find().next()
{
"_id" : "myapp.users-email_MinKey",
"lastmod" : Timestamp(2, 0),
"lastmodEpoch" : ObjectId("538e27be31972172d9b3ec61"),
"ns" : "myapp.users",
"min" : {
"email" : { "$minKey" : 1 }
},
"max" : {
"email" : "aa"
},
"shard" : "test-rs1"
}
mongos> min = first_doc.min
{ "email" : { "$minKey" : 1 } }
mongos> max = first_doc.max
{ "email" : "aa" }
mongos> keyPattern = { email : 1 }
{ "email" : 1 }
mongos> ns = first_doc.ns
myapp.users
mongos> db.runCommand({dataSize: ns, keyPattern: keyPattern, min: min, max: max } )
{ "size" : 0, "numObjects" : 0, "millis" : 0, "ok" : 1 }
mongos> second_doc = db.chunks.find().skip(1).next()
{
"_id" : "myapp.users-email_\"aa\"",
"lastmod" : Timestamp(3, 0),
"lastmodEpoch" : ObjectId("538e27be31972172d9b3ec61"),
"ns" : "myapp.users",
"min" : {
"email" : "aa"
},
"max" : {
"email" : "ag"
},
"shard" : "test-rs0"
}
mongos> max2 = second_doc.max
{ "email" : "ag" }
mongos> use admin
switched to db admin
mongos> db.runCommand( { mergeChunks : ns , bounds : [ min , max2 ] } )
{ "ok" : 1 }
=====
Homework 1:
For this assignment, you will be pre-splitting chunks into ranges. We'll be working with the "m202.presplit" database/collection.
First, create 3 shards. You can use standalone servers or replica sets on each shard, whatever is most convenient.
Pre-split your collection into chunks with the following ranges, and put the chunks on the appropriate shard, and name the shards "shard0", "shard1", and "shard2". Let's make the shard key the "a" field. Don't worry if you have other ranges as well, we will only be checking for the following:
Range / Shard
0 to 7 / shard0
7 to 10 / shard0
10 to 14 / shard0
14 to 15 / shard1
15 to 20 / shard1
20 to 21 / shard1
21 to 22 / shard2
22 to 23 / shard2
23 to 24 / shard2
mongo --nodb
cluster = new ShardingTest({shards: 3, chunksize: 1, config: 3, other: {rs: false}})
cluster = new ShardingTest({shards : {shard0:{smallfiles:''}, shard1:{smallfiles:''}, shard2:{smallfiles:''}},config:3,other: {rs:false}})
##working
cluster = new ShardingTest({shards : {"shard0":{smallfiles:''}, "shard1":{smallfiles:''}, "shard2":{smallfiles:''}},config: 3})
mongos> sh.enableSharding("m202")
{ "ok" : 1 }
mongos> sh.shardCollection("m202.presplit",{a:1})
{ "collectionsharded" : "m202.presplit", "ok" : 1 }
###Shard Start
mkdir shard0 shard1 shard2 config0 config1 config2
mongod --shardsvr --port 30101 --dbpath shard0 --logpath shard0/shard0-0.log --smallfiles --oplogSize 40 --fork
mongod --shardsvr --port 30201 --dbpath shard1 --logpath shard1/shard1-0.log --smallfiles --oplogSize 40 --fork
mongod --shardsvr --port 30301 --dbpath shard2 --logpath shard2/shard2-0.log --smallfiles --oplogSize 40 --fork
## Config start
mongod --configsvr --port 30501 --dbpath config0 --logpath config0/config-0.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30502 --dbpath config1 --logpath config1/config-1.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30503 --dbpath config2 --logpath config2/config-2.log --smallfiles --oplogSize 40 --fork
mongos --port 39000 --configdb localhost:30501,localhost:30502,localhost:30503
mongo --port 39000
use admin
db.runCommand({"addShard": "localhost:30101", "name": "shard0"})
db.runCommand({"addShard": "localhost:30201", "name": "shard1"})
db.runCommand({"addShard": "localhost:30301", "name": "shard2"})
sh.enableSharding("m202")
sh.shardCollection("m202.presplit", {"a": 1})
sh.splitAt("m202.presplit", {"a": 0})
sh.splitAt("m202.presplit", {"a": 7})
sh.splitAt("m202.presplit", {"a": 10})
sh.splitAt("m202.presplit", {"a": 14})
sh.splitAt("m202.presplit", {"a": 15})
sh.splitAt("m202.presplit", {"a": 20})
sh.splitAt("m202.presplit", {"a": 21})
sh.splitAt("m202.presplit", {"a": 22})
sh.splitAt("m202.presplit", {"a": 23})
sh.splitAt("m202.presplit", {"a": 24})
sh.stopBalancer()
sh.moveChunk("m202.presplit", {"a": 0}, "shard0")
sh.moveChunk("m202.presplit", {"a": 7}, "shard0")
sh.moveChunk("m202.presplit", {"a": 10}, "shard0")
sh.moveChunk("m202.presplit", {"a": 14}, "shard1")
sh.moveChunk("m202.presplit", {"a": 15}, "shard1")
sh.moveChunk("m202.presplit", {"a": 20}, "shard1")
sh.moveChunk("m202.presplit", {"a": 21}, "shard2")
sh.moveChunk("m202.presplit", {"a": 22}, "shard2")
sh.moveChunk("m202.presplit", {"a": 23}, "shard2")
sh.startBalancer()
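To verify the chunk layout for the homework, a sketch that lists each chunk's range and owning shard (run through the mongos):
db.getSiblingDB("config").chunks.find(
    { ns: "m202.presplit" },
    { _id: 0, min: 1, max: 1, shard: 1 }
).sort({ min: 1 }).forEach(printjson)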
Week4 4.3
Week4 4.4
mongo --nodb
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" }, d2 : { smallfiles : "", noprealloc : "", nopreallocj : ""}};
cluster = new ShardingTest( { shards : config } );
mongo --port 30999
db.testColl.find({
"createdDate": {
$gte: ISODate("2014-01-01T00:00:00.000Z"),
$lt: ISODate("2014-12-31T24:59:59.999Z")
}
}).count()
db.testColl.find({
"createdDate": {
$gte: ISODate("2014-01-01T00:00:00.000Z"),
$lt: ISODate("2014-12-31T24:59:59.999Z")
}
}).sort({"createdDate":1}).limit(1)
min={ "_id" : ObjectId("5766b31795027f1b4332820a"), "createdDate" : ISODate("2014-01-01T00:00:00Z") }
max={ "_id" : ObjectId("5766b31795027f1b43328d4a"), "createdDate" : ISODate("2014-05-01T00:00:00Z") }
min=ISODate("2013-10-01T00:00:00Z")
max=ISODate("2014-05-01T00:00:00Z")
db.runCommand( { mergeChunks :"testDB.testColl" , bounds : [ min , max ] } )
db.testColl.find({
"createdDate": {
$gte: ISODate("2013-01-01T00:00:00.000Z"),
$lt: ISODate("2013-12-31T24:59:59.999Z")
}
}).sort({"createdDate":1}).limit(1)
min={ "_id" : ObjectId("5766b31795027f1b4332796a"), "createdDate" : ISODate("2013-10-01T00:00:00Z") }
chunks:
shard0000 46
shard0001 121
shard0002 47
too many chunks to print, use verbose if you want to force print
tag: LTS { "createdDate" : ISODate("2013-10-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-01-01T00:00:00Z") }
tag: STS { "createdDate" : ISODate("2014-01-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-05-01T00:00:00Z") }
Your assignment is to move all data for the month of January 2014 into LTS
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2013-01-01")}, {createdDate : ISODate("2014-02-01")}, "LTS")
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2014-02-01")}, {createdDate : ISODate("2014-05-01")}, "STS")
tag: LTS { "createdDate" : ISODate("2013-10-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-01-01T00:00:00Z") }
tag: STS { "createdDate" : ISODate("2014-01-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-05-01T00:00:00Z") }
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2013-10-01T00:00:00Z")}, {createdDate : ISODate("2014-01-01T00:00:00Z")}, "LTS")
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2014-01-01T00:00:00Z")}, {createdDate : ISODate("2014-05-01T00:00:00Z")}, "STS")
db.tags.remove({_id : { ns : "testDB.testColl", min : { createdDate : ISODate("2014-01-01T00:00:00Z")} }, tag: "STS" })
tag: LTS { "createdDate" : ISODate("2013-10-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-02-01T00:00:00Z") }
tag: STS { "createdDate" : ISODate("2014-02-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-05-01T00:00:00Z") }
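To double-check the corrected tag ranges, the config.tags collection can be inspected directly (sketch, run through the mongos):
db.getSiblingDB("config").tags.find({ ns: "testDB.testColl" }).sort({ min: 1 }).forEach(printjson)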
2.4: strict order -
Config servers first.
General order:
Secondary shard members first (can be done in parallel).
Primary (step down first, then upgrade).
Config servers.
Upgrade mongos last.
var allChunkInfo = function(ns){
var chunks = db.getSiblingDB("config").chunks.find({"ns" : ns}).sort({min:1}); //this will return all chunks for the ns ordered by min
//some counters for overall stats at the end
var totalChunks = 0;
var totalSize = 0;
var totalEmpty = 0;
print("ChunkID,Shard,ChunkSize,ObjectsInChunk"); // header row
// iterate over all the chunks, print out info for each
chunks.forEach(
function printChunkInfo(chunk) {
var db1 = db.getSiblingDB(chunk.ns.split(".")[0]); // get the database we will be running the command against later
var key = db.getSiblingDB("config").collections.findOne({_id:chunk.ns}).key; // will need this for the dataSize call
// dataSize returns the info we need on the data, but using the estimate option to use counts is less intensive
var dataSizeResult = db1.runCommand({datasize:chunk.ns, keyPattern:key, min:chunk.min, max:chunk.max, estimate:true});
// printjson(dataSizeResult); // uncomment to see how long it takes to run and status
print(chunk._id+","+chunk.shard+","+dataSizeResult.size+","+dataSizeResult.numObjects);
totalSize += dataSizeResult.size;
totalChunks++;
if (dataSizeResult.size == 0) { totalEmpty++ }; //count empty chunks for summary
}
)
print("***********Summary Chunk Information***********");
print("Total Chunks: "+totalChunks);
print("Average Chunk Size (bytes): "+(totalSize/totalChunks));
print("Empty Chunks: "+totalEmpty);
print("Average Chunk Size (non-empty): "+(totalSize/(totalChunks-totalEmpty)));
}
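Usage sketch for the function above; the namespace is just an example taken from earlier in these notes:
allChunkInfo("myapp.users")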
Week 5: M202: MongoDB Advanced Deployment and Operations
{ config: "m202-sec.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-mba.log" } }
{ config: "m202-sec.conf", net: { bindIp: "192.0.2.2,127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-mba.log" } }
{ config: "m202-sec.conf", net: { port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-mbp.log" } }
{ config: "./m202-pri.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs1" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27017.log" } }
{ config: "./m202-pri.conf", net: { bindIp: "192.0.2.3,127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs1" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27017.log" } }
{ config: "./m202-sec.conf", net: { bindIp: "127.0.0.1", port: 27018 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27018.log" } }
{ config: "./m202-sec.conf", net: { bindIp: "192.0.2.3,127.0.0.1", port: 27018 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27018.log" } }
{ net: { port: 27017 }, processManagement: { fork: true }, replication: { replSet: "threeMemberReplicaSet" }, storage: { dbPath: "/data/MongoDB/week3/three-member-replica-set/1", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { destination: "file", path: "/data/MongoDB/week3/three-member-replica-set/1/mongod.log" } }
Homework 5.1:
Ans: bindIp
Homework 5.2:
m202@m202-ubuntu1404:/data/MongoDB/week5$ mloginfo --queries --sort sum mtools_example.log | head -20
namespace operation pattern count min (ms) max (ms) mean (ms) 95%-ile (ms) sum (ms)
grilled.indigo.papaya update {"_id": 1, "l": {"$not": 1}} 4227 0 3872 482 1189.4 2038017
Ans:{"_id": 1, "l": {"$not": 1}}
Homework 5.3:
=========================
mlogfilter mtools_example.log --namespace grilled.indigo.papaya --pattern '{"_id": 1, "l": {"$not": 1}}' > homework5.3
m202@m202-ubuntu1404:/data/MongoDB/week5$ mplotqueries homework5.3 --type histogram --bucketsize 1
Ans: 60-90 ops/s
mlaunch init --replicaset --nodes 3 --name testrpl
mlaunch init --replicaset --nodes 3 --name testrpl --arbiter --sharded 2 --config 3 --mongos 3
https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-expanding-the-virtual-machine-disk/
m202@m202-ubuntu1404:~$ sudo fdisk -l | grep "Disk /dev/"
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/mapper/m202--vg-root doesn't contain a valid partition table
Disk /dev/mapper/m202--vg-swap_1 doesn't contain a valid partition table
Disk /dev/mapper/m202--vg-data doesn't contain a valid partition table
Disk /dev/sda: 8589 MB, 8589934592 bytes
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Disk /dev/sdc: 53.7 GB, 53687091200 bytes
Disk /dev/mapper/m202--vg-root: 6148 MB, 6148849664 bytes
Disk /dev/mapper/m202--vg-swap_1: 2143 MB, 2143289344 bytes
Disk /dev/mapper/m202--vg-data: 21.5 GB, 21474836480 bytes
m202@m202-ubuntu1404:~$ sudo fdisk -l /dev/sda
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004ec72
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 501758 16775167 8136705 5 Extended
/dev/sda5 501760 16775167 8136704 8e Linux LVM
=== You can see partition type 8e (Linux LVM) ===
/dev/sda5 501760 16775167 8136704 8e Linux LVM
Need to add the new space to the LVM volume below:
/dev/mapper/m202--vg-data 20G 44M 19G 1% /data
sudo lvdisplay
--- Logical volume ---
LV Path /dev/m202-vg/data
LV Name data
VG Name m202-vg
LV UUID 0ceMjJ-1Uvp-Hv2x-I0mF-MmIS-gh8F-IxY2Yx
LV Write Access read/write
LV Creation host, time m202-ubuntu1404, 2014-09-26 16:35:13 +0100
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
sudo fdisk /dev/sdc
n      (new partition, accept the defaults)
p      (primary)
t      (change the partition type)
8e     (Linux LVM)
w      (write the table and exit)
m202@m202-ubuntu1404:~$ sudo pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created
m202@m202-ubuntu1404:~$
m202@m202-ubuntu1404:~$ sudo vgextend m202-vg /dev/sdc1
Volume group "m202-vg" successfully extended
m202@m202-ubuntu1404:~$
m202@m202-ubuntu1404:~$ sudo lvextend /dev/m202-vg/data /dev/sdc1
Extending logical volume data to 70.00 GiB
Logical volume data successfully resized
m202@m202-ubuntu1404:~$ sudo resize2fs /dev/m202-vg/data
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/m202-vg/data is mounted on /data; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 5
The filesystem on /dev/m202-vg/data is now 18349056 blocks long.
m202@m202-ubuntu1404:~$ df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/m202--vg-data 69G 52M 66G 1% /data
m202@m202-ubuntu1404:~$
############Week 6 #####################
As 20 GB is not enough to run mlaunch with 3 shards and 3-member replica sets, I added a 50 GB disk and extended /data to 70 GB.
Now the sharded cluster and replica sets can be created:
m202@m202-ubuntu1404:/data/MongoDB/week6$ du -sh *
39G data
m202@m202-ubuntu1404:/data/MongoDB/week6$
mlaunch init --replicaset --sharded s1 s2 s3
Create user:
-----------------------
db.createUser({user:"dilip",pwd:"dilip" , roles:[{role:"clusterManager",db:"admin"}]})
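Quick usage sketch, assuming the user above was created in the admin database:
use admin
db.auth("dilip", "dilip")   // returns 1 on success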
Que2:
Answers:
conf=rs.conf()
testReplSet:SECONDARY> conf.members = [conf.members[0]]
[
{
"_id" : 0,
"host" : "m202-ubuntu1404:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
]
testReplSet:SECONDARY> rs.reconfig(conf, {force:true})
{ "ok" : 1 }
testReplSet:SECONDARY>
Que3:
mongos> sh.shardCollection("final_exam.m202",{_id:1})
{ "ok" : 0, "errmsg" : "already sharded" }
m202@m202-ubuntu1404:/data/db$ mongoexport -d final_exam -c m202 -o m202 --port 30999
2016-07-06T10:18:07.836+0100 connected to: localhost:30999
2016-07-06T10:18:14.576+0100 exported 200000 records
m202@m202-ubuntu1404:/data/db$ ls -lrt
m202@m202-ubuntu1404:/data/db$ mongoimport -d final_exam -c m202 --port 30999 --file=m202
2016-07-06T10:29:10.818+0100 connected to: localhost:30999
2016-07-06T10:29:13.815+0100 [###############.........] final_exam.m202 11.9 MB/19.0 MB (63.0%)
2016-07-06T10:29:15.584+0100 imported 200000 documents
mongos> db.m202.drop()
true
mongos> db.m202.drop()
false
mongos> show collections
system.indexes
mongos>
mongos> sh.shardCollection("final_exam.m202",{_id:1})
{ "collectionsharded" : "final_exam.m202", "ok" : 1 }
mongos>
Que4:
2014-06-06T18:15:28.224+0100 [initandlisten] connection accepted from 127.0.0.1:40945 #1 (1 connection now open)
2014-06-06T18:15:28.456+0100 [rsStart] trying to contact m202-ubuntu:27017
2014-06-06T18:15:28.457+0100 [rsStart] DBClientCursor::init call() failed
2014-06-06T18:15:28.457+0100 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
2014-06-06T18:15:29.458+0100 [rsStart] trying to contact m202-ubuntu:27017
2014-06-06T18:15:29.460+0100 [rsStart] DBClientCursor::init call() failed
2014-06-06T18:15:29.460+0100 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
2014-06-06T18:15:30.460+0100 [rsStart] trying to contact m202-ubuntu:27017
options: { config: "rs1.conf", processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs1" }, systemLog: { destination: "file", path: "/data/db/rs1.log" } }
options: { config: "rs2.conf", net: { port: 27018 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", path: "/data/db/rs2.log" } }
Ans: Too many connections caused this issue.
Que5:
m202@m202-ubuntu1404:/data/MongoDB/exam_m202$ grep connectio mtools_example.log | grep acce | grep -o "2014.............." | sort | uniq -c
1 2014-06-21T00:11:0
355 2014-06-21T00:11:3
129 2014-06-21T00:11:4
10 2014-06-21T00:11:5
10 2014-06-21T00:12:0
13 2014-06-21T00:12:1
35 2014-06-21T00:12:2
1 2014-06-21T00:12:3
3 2014-06-21T00:12:4
12 2014-06-21T00:12:5
1 2014 connections n
m202@m202-ubuntu1404:/data/MongoDB/exam_m202$ grep connectio mtools_example.log | grep acce | grep -o "2014..............." | sort | uniq -c | sort -nr -k1 | head
105 2014-06-21T00:11:35
78 2014-06-21T00:11:37
67 2014-06-21T00:11:38
59 2014-06-21T00:11:42
57 2014-06-21T00:11:36
48 2014-06-21T00:11:39
What is the write concern of these operations?
1) mplotqueries mtools_example.log --type connchurn -b 1
Hint: gives you connections opened per bin (shown in green).
2) Filter with that range:
mlogfilter mtools_example.log --from Jun 21 00:11:35 --to Jun 21 00:11:40 > issue.txt
3) Plot again for exact operation:
mplotqueries issue.txt --type scatter
You see the operations, all with writeConcern { w: 2 }:
2014-06-21T00:11:37.661Z [conn619] command grilled.$cmd command: update { update: "uionJBQboEga25", ordered: true, writeConcern: { w: 2 }, updates: [ { q: { _id: "5tCrmNbxxKXPRLBl1BBRXqDCkZRigUubKH" }, u: { $pull: { uptimes: { recorded: { $lte: new Date(1403308699816) } } } } } ] } keyUpdates:0 numYields:0 reslen:95 2125ms
2014-06-21T00:11:37.661Z [conn1127] command grilled.$cmd command: delete { delete: "M6s4fZ6bmqM91r", ordered: true, writeConcern: { w: 2 }, deletes: [ { q: { _id: "exrIAVX9xFRZef4iIirRqndcjmfiBF9JhV" }, limit: 0 } ] } keyUpdates:0 numYields:0 reslen:80 452ms
2014-06-21T00:11:37.661Z [conn1112] command grilled.$cmd command: update { update: "8tG8pbMubfM50z", ordered: true, writeConcern: { w: 2 }, updates: [ { q: { _id: "N05Wyp1WeYGm4vXtc7XfwRdl3dNqcviOQE" }, u: { $pull: { uptimes: { recorded: { $lte: new Date(1403308627138) } } } } } ] } keyUpdates:0 numYields:0 reslen:95 787ms
Final Question 7: Traffic Imbalance in a Sharded Environment
########
Note: Please complete this homework assignment in the provided virtual machine (VM). If you choose to use your native local machine OS, or any environment other than the provided VM, we won't be able to support you.
In this problem, you have a cluster with 2 shards, each with a similar volume of data, but all the application traffic is going to one shard. You must diagnose the query pattern that is causing this problem and figure out how to balance out the traffic.
To set up the scenario, run the following commands to set up your cluster. The config document passed to ShardingTest will eliminate the disk space issues some students have seen when using ShardingTest.
mongo --nodb
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" } };
cluster = new ShardingTest( { shards: config } );
Once the cluster is up, click "Initialize" in MongoProc one time to finish setting up the cluster's data and configuration. If you are running MongoProc on a machine other than the one running the mongos, then you must change the host of 'mongod1' in the settings. The host should be the hostname or IP of the machine on which the mongos is running. MongoProc will use port 30999 to connect to the mongos for this problem.
Once the cluster is initialized, click the "Initialize" button in MongoProc again to simulate application traffic to the cluster for 1 minute. You may click "Initialize" as many times as you like to simulate more traffic for 1 minute at a time. If you need to begin the problem again and want MongoProc to reinitialize the dataset, drop the m202 database from the cluster and click "Initialize" in MongoProc.
Use diagnostic tools (e.g., mongostat and the database profiler) to determine why all application traffic is being routed to one shard. Once you believe you have fixed the problem and traffic is balanced evenly between the two shards, test using MongoProc and then turn in if the test completes successfully.
Note: Dwight discusses the profiler in M102.
########
Ans:
mongo --nodb
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" } };
cluster = new ShardingTest( { shards: config } );
m202@m202-ubuntu1404:~$ mongo --port 30999
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:30999/test
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5781c822f7b8e6f9d5a2a7cc")
}
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
balancer:
Troubleshooting using mongostat:
m202@m202-ubuntu1404:~$ mongostat --port 30000
insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:49
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:50
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:51
1 11 5 *0 0 11|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 3k 37k 15 05:09:52
*0 *0 *0 *0 0 2|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 133b 11k 15 05:09:53
*0 2 2 2 0 5|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 900b 11k 15 05:09:54
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:55
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:56
You see only traffic on { "_id" : "shard0000", "host" : "localhost:30000" }
m202@m202-ubuntu1404:~$ mongostat --port 30001
insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:01
*0 *0 *0 *0 0 4|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 262b 22k 6 05:10:02
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:03
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:04
*0 *0 *0 *0 0 2|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 133b 11k 6 05:10:05
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:06
No traffic on { "_id" : "shard0001", "host" : "localhost:30001" }
Using profiler:
> use m202
switched to db m202
> db.getProfilingLevel()
0
> db.getProfilingStatus()
{ "was" : 0, "slowms" : 100 }
>
m202@m202-ubuntu1404:~$ ins
3059 pts/1 Sl+ 0:00 mongo --nodb
3062 pts/1 Sl+ 0:11 mongod --port 30000 --dbpath /data/db/test0 --smallfiles --noprealloc --nopreallocj --setParameter enableTestCommands=1
3079 pts/1 Sl+ 0:10 mongod --port 30001 --dbpath /data/db/test1 --smallfiles --noprealloc --nopreallocj --setParameter enableTestCommands=1
3096 pts/1 Sl+ 0:07 mongos --port 30999 --configdb localhost:30000 --chunkSize 50 --setParameter enableTestCommands=1
mongos is the issue:
## Config start
mongod --configsvr --port 30501 --dbpath ./config0 --logpath ./config0/config-0.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30502 --dbpath ./config1 --logpath ./config1/config-1.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30503 --dbpath ./config2 --logpath ./config2/config-2.log --smallfiles --oplogSize 40 --fork
mongos --port 30999 --configdb localhost:30501,localhost:30502,localhost:30503
mongodump -d config --port 30000 -o=config.dump
Restore in all shard
mongorestore -d config --port 30501 --dir=config.dump/config
mongorestore -d config --port 30502 --dir=config.dump/config
mongorestore -d config --port 30503 --dir=config.dump/config
m202@m202-ubuntu1404:/data/MongoDB/exam_m202/q7$ mongodump -d m202 --port 30000 -o=m202_dump
2016-07-10T06:10:04.115+0100 writing m202.imbalance to m202_dump/m202/imbalance.bson
2016-07-10T06:10:04.176+0100 writing m202.imbalance metadata to m202_dump/m202/imbalance.metadata.json
2016-07-10T06:10:04.177+0100 done dumping m202.imbalance (3000 documents)
2016-07-10T06:10:04.178+0100 writing m202.system.indexes to m202_dump/m202/system.indexes.bson
2016-07-10T06:10:04.179+0100 writing m202.system.profile to m202_dump/m202/system.profile.bson
2016-07-10T06:10:04.182+0100 writing m202.system.profile metadata to m202_dump/m202/system.profile.metadata.json
2016-07-10T06:10:04.182+0100 done dumping m202.system.profile (29 documents)
m202@m202-ubuntu1404:/data/MongoDB/exam_m202/q7$
> db
m202
> db.dropDatabase()
{ "dropped" : "m202", "ok" : 1 }
>
>
mongos> sh.enableSharding("m202")
{ "ok" : 1 }
mongos> sh.shardCollection("m202.imbalance", {otherID : "hashed"})
{ "collectionsharded" : "m202.imbalance", "ok" : 1 }
mongos>
m202@m202-ubuntu1404:/data/MongoDB/exam_m202/q7$ mongorestore -d m202 --port 30999 --dir=m202_dump/m202
2016-07-10T06:13:02.322+0100 building a list of collections to restore from m202_dump/m202 dir
2016-07-10T06:13:02.327+0100 reading metadata file from m202_dump/m202/imbalance.metadata.json
2016-07-10T06:13:02.332+0100 restoring m202.imbalance from file m202_dump/m202/imbalance.bson
2016-07-10T06:13:02.335+0100 reading metadata file from m202_dump/m202/system.profile.metadata.json
2016-07-10T06:13:02.502+0100 no indexes to restore
2016-07-10T06:13:02.502+0100 finished restoring m202.system.profile (0 documents)
2016-07-10T06:13:05.334+0100 [########################] m202.imbalance 90.8 KB/90.8 KB (100.0%)
2016-07-10T06:13:08.081+0100 error: no progress was made executing batch write op in m202.imbalance after 5 rounds (0 ops completed in 6 rounds total)
2016-07-10T06:13:08.081+0100 restoring indexes for collection m202.imbalance from metadata
2016-07-10T06:13:08.084+0100 finished restoring m202.imbalance (3000 documents)
2016-07-10T06:13:08.084+0100 done
mongos> db.imbalance.getShardDistribution()
Shard shard0000 at localhost:30000
data : 140KiB docs : 3000 chunks : 32
estimated data per chunk : 4KiB
estimated docs per chunk : 93
Shard shard0001 at localhost:30001
data : 150KiB docs : 3200 chunks : 31
estimated data per chunk : 4KiB
estimated docs per chunk : 103
Totals
data : 290KiB docs : 6200 chunks : 63
Shard shard0000 contains 48.38% data, 48.38% docs in cluster, avg obj size on shard : 48B
Shard shard0001 contains 51.61% data, 51.61% docs in cluster, avg obj size on shard : 48B
{ "_id" : ISODate("2014-08-16T00:00:00Z") } -->> { "_id" : ISODate("2014-08-17T00:00:00Z") } on : shard0001 Timestamp(2, 94)
{ "_id" : ISODate("2014-08-17T00:00:00Z") } -->> { "_id" : ISODate("2014-08-18T00:00:00Z") } on : shard0001 Timestamp(2, 96)
{ "_id" : ISODate("2014-08-18T00:00:00Z") } -->> { "_id" : ISODate("2014-08-19T00:00:00Z") } on : shard0001 Timestamp(2, 98)
db.system.profile.find({}, {ns: 1, ts: 1, query: 1}).sort({ts: -1}).pretty()
mongos> db.imbalance.find({_id:"wjhf"},{}).explain("executionStats")
db.imbalance.getShardDistribution()
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-16T00:00:00Z")},{"_id": ISODate("2014-07-17T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 966, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-15T00:00:00Z")},{"_id": ISODate("2014-07-16T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 1084, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-18T00:00:00Z")},{"_id": ISODate("2014-07-19T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 906, "ok" : 1
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" } };
cluster = new ShardingTest( { shards: config } );
{ "_id" : ISODate("2014-07-30T00:00:00Z") } -->> { "_id" : ISODate("2014-07-31T00:00:00Z") } on : shard0000 Timestamp(2, 62)
{ "_id" : ISODate("2014-07-31T00:00:00Z") } -->> { "_id" : ISODate("2014-08-01T00:00:00Z") } on : shard0000 Timestamp(2, 63)
{ "_id" : ISODate("2014-08-01T00:00:00Z") } -->> { "_id" : ISODate("2014-08-02T00:00:00Z") } on : shard0001 Timestamp(2, 64)
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-31T00:00:00Z")},{"_id": ISODate("2014-08-01T00:00:00Z")}],"to":"shard0001"})
mongos> db.imbalance.getShardDistribution()
Shard shard0000 at localhost:30000
data : 140KiB docs : 3000 chunks : 32
estimated data per chunk : 4KiB
estimated docs per chunk : 93
Shard shard0001 at localhost:30001
data : 150KiB docs : 3200 chunks : 31
estimated data per chunk : 4KiB
estimated docs per chunk : 103
db.system.profile.find( { op: { $ne : 'command' } } ).pretty()
db.system.profile.find().sort({"$natural":-1}).limit(5).pretty()
find({},{"ns" :1,ts:1,query:1})
db.system.profile.find({},{"ns" :1,ts:1,query:1}).sort({"$natural":-1}).limit(5).pretty()
db.system.profile.find( { ns : 'm202.imbalance' } ).pretty()
m30000| 2016-07-10T11:12:35.784+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 4ms
m30000| 2016-07-10T11:12:35.793+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 13ms
m30000| 2016-07-10T11:12:35.820+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 2ms
m30000| 2016-07-10T11:12:35.820+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 2ms
m30000| 2016-07-10T11:12:35.834+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 2ms
m30000| 2016-07-10T11:12:35.835+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 3ms
m30000| 2016-07-10T11:12:35.850+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 2ms
m30000| 2016-07-10T11:12:35.852+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 5ms
mongos> db.imbalance.find({"hot" : true}).sort({"otherID":-1})
{ "_id" : ISODate("2014-07-13T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-14T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-15T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-16T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-17T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-18T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-13T00:00:00Z") } -->> { "_id" : ISODate("2014-07-14T00:00:00Z") } on : shard0000 Timestamp(2, 28)
{ "_id" : ISODate("2014-07-14T00:00:00Z") } -->> { "_id" : ISODate("2014-07-15T00:00:00Z") } on : shard0000 Timestamp(2, 30)
{ "_id" : ISODate("2014-07-15T00:00:00Z") } -->> { "_id" : ISODate("2014-07-16T00:00:00Z") } on : shard0000 Timestamp(2, 32)
{ "_id" : ISODate("2014-07-16T00:00:00Z") } -->> { "_id" : ISODate("2014-07-17T00:00:00Z") } on : shard0000 Timestamp(2, 34)
{ "_id" : ISODate("2014-07-17T00:00:00Z") } -->> { "_id" : ISODate("2014-07-18T00:00:00Z") } on : shard0000 Timestamp(2, 36)
{ "_id" : ISODate("2014-07-18T00:00:00Z") } -->> { "_id" : ISODate("2014-07-19T00:00:00Z") } on : shard0000 Timestamp(2, 38)
db.imbalance.find({ _id: { $gte: ISODate("2014-07-13T00:00:00Z"), $lt: ISODate("2014-07-14T00:00:00Z") } }).count()
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-13T00:00:00Z")},{"_id": ISODate("2014-07-14T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 966, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-15T00:00:00Z")},{"_id": ISODate("2014-07-16T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 1084, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-18T00:00:00Z")},{"_id": ISODate("2014-07-19T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 906, "ok" : 1
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-16T00:00:00Z")},{"_id": ISODate("2014-07-17T00:00:00Z")}],"to":"shard0001"})
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-17T00:00:00Z")},{"_id": ISODate("2014-07-18T00:00:00Z")}],"to":"shard0001"})
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-18T00:00:00Z")},{"_id": ISODate("2014-07-19T00:00:00Z")}],"to":"shard0001"})
{ "_id" : ISODate("2014-07-16T00:00:00Z") } -->> { "_id" : ISODate("2014-07-17T00:00:00Z") } on : shard0001 Timestamp(3, 0)
{ "_id" : ISODate("2014-07-17T00:00:00Z") } -->> { "_id" : ISODate("2014-07-18T00:00:00Z") } on : shard0001 Timestamp(4, 0)
{ "_id" : ISODate("2014-07-18T00:00:00Z") } -->> { "_id" : ISODate("2014-07-19T00:00:00Z") } on : shard0001 Timestamp(5, 0)
{ "_id" : ISODate("2014-07-16T00:00:00Z") } -->> { "_id" : ISODate("2014-07-17T00:00:00Z") } on : shard0000 Timestamp(6, 0)
{ "_id" : ISODate("2014-07-17T00:00:00Z") } -->> { "_id" : ISODate("2014-07-18T00:00:00Z") } on : shard0000 Timestamp(7, 0)
DBA- M201
Vagrant and VMware setup
C:\HashiCorp\Vagrant\bin
Vagrantfile location: D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0
Modified vagerant file inside.
cd D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0
D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0>C:\HashiCorp\Vagrant\bin\vagrant.exe init
D:\MongoDB\M202_Vagrant_Image___3.0.5_m202_mongoproc3.0>C:\HashiCorp\Vagrant\bin\vagrant.exe up
192.168.56.1 = interface#3
### Assign static IP:
m202@m202-ubuntu1404:~$ cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
#auto lo
#iface lo inet loopbacki
auto eth0
iface eth0 inet static
address 192.168.56.11
netmask 255.255.255.0
network 192.168.56.1
broadcast 255.255.255.127
gateway 192.168.56.1
sudo /etc/init.d/networking restart
===============================
Disaster Recovery.
==============================
More complex than you think
Trade Offs:
1. More downtime more loss.
2. 99.9999% - 30 Sec down.
3. 30Million $ for 30 sec. Damage on Company.
4. Downtime tolaration.
Reference Number About Availability (per Year)
30 Sec = ~ 99.9999%
1 hr = ~ 99.9886%
1 min = ~ 99.9998%
1 days = ~ 99.7260%
> 99 %Availability.
3 Days = ~ Still claim > 99%
Tolerance For:
==========================
Data Loss Downtime Reduced Capacity
Based on 3 groups.
Group 3 - Low tolerance data loss & No downtime. E.g. AD server.
Design - Avoid data loss.
Design - Avoid Dwon time.
Group 3: Low Dataloss and high downtime e.g - Leaning system
Group 3 - High tolerance data loss. e.g. caching application where you used mongodb.
Backup Strategies:
==============================================
Till 30 min possibility,
Backup Consideration:
1) How quickely I can restore?
2) Test backup often, what happens any bug in software, restore backup and run..
3) Coordinated backup, multiple host, related with consistent backp. This implies to step 2.
1. File System based backup:
Clean shutdown and copy files. scp, cp, rsync.
e.g. fsyncock os shutdown :- file system, Use secondary to backup.
2. Filesystem based snapshot:
Point in time guarantee.
Must include journal.
I/O overhead.
LVM has snapshot capability, EBS, Netapp etc.
mongod --port 30001 --dbpath mongod-pri --replSet CorruptionTest --smallfiles --oplogSize 128 --nojournal
mongod --port 30002 --dbpath mongod-sec --replSet CorruptionTest --smallfiles --oplogSize 128 --nojournal
mongod --port 30003 --dbpath mongod-arb --replSet CorruptionTest --nojournal
24k * 8 = 192k.
100,000 on Monday
102,000 reads/sec on Tuesday,
104,040 on Wednesday,
106,121 on Thursday,
108,243 on Friday
24k * 5 = 120k
24k - 100%
Peak - 21k - Peak. - 21k - 90%
21k * 5 = 105 k.
21.6 K * 5 = 108,000
Homework: 2.4: Using oplog replay to restore
--Point in Time
http://www.codepimp.org/2014/08/replay-the-oplog-in-mongodb/
http://stackoverflow.com/questions/15444920/modify-and-replay-mongodb-oplog
https://go.bestdenki.com.sg/winners
xx196 mkdir recovery
197 mongod --dbpath=/data/MongoDB/recovery --port 3004 --fork
198 mongod --dbpath=/data/MongoDB/recovery --port 3004 --fork --logpath ./recovery_log
199 mongorestore --3004 --db backupDB backupDB
200 mongorestore --port 3004 --db backupDB backupDB
201 mkdir oplogD
202 mongodump -d local -c oplog.rs --port 30001 -o oplogD
203 ls -lrt
204 mongorestore --port 3004 --oplogReplay --oplogLimit="1398778745:1" ./oplogD
205 ls -l oplogD
206 ls -l oplogD/local/
207 mongorestore --port 3004 --oplogReplay --oplogLimit="1398778745:1" ./oplogD/local/oplog.rs.bson
208 mongo --port 3004
209 BackupTest:PRIMARY> db.oplog.rs.find({op: "c"})
210 { "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
211 mkdir oplogR
212 mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
213 bsondump oplogR/oplog.bson > oplog.read
214 cat oplog.read | grep -A 10 -B 10 "drop"
215 mongorestore --port 3004 --oplogReplay --oplogLimit="1398778745:1" ./oplogR
216 mongo --port 3004
217 ls -ld */
218 ls -l recovery
219 ps -aef | grep mongo
220 mongod -f mongod__homework_hw_m202_w3_week3_wk3_536bf5318bb48b4bb3853f31.conf --shutdown
221 cp recovery/backupDB.* backuptest/
222 mongod -f mongod__homework_hw_m202_w3_week3_wk3_536bf5318bb48b4bb3853f31.conf
223 mongo --port 30001
m202@m202-ubuntu1404:/data/MongoDB$ cat oplog.read | grep -A 10 -B 10 "drop"
{"h":{"$numberLong":"-3486134360601788719"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:30.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9991}},"v":2}
{"h":{"$numberLong":"-6091430578150945877"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:31.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9992}},"v":2}
{"h":{"$numberLong":"2638276158384822314"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:32.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9993}},"v":2}
{"h":{"$numberLong":"826924733673180424"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:33.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9994}},"v":2}
{"h":{"$numberLong":"6784254043238495315"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:34.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9995}},"v":2}
{"h":{"$numberLong":"-7899106625682405931"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:35.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9996}},"v":2}
{"h":{"$numberLong":"-1666073625494465588"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:36.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9997}},"v":2}
{"h":{"$numberLong":"7346874363465863058"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:37.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9998}},"v":2}
{"h":{"$numberLong":"9124493582125599509"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:38.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":9999}},"v":2}
{"h":{"$numberLong":"1832517689805078463"},"ns":"backupDB.backupColl","o":{"_id":{"$date":"2014-01-08T02:46:39.000Z"},"string":"testStringForPadding0000000000000000000000000000000000000000"},"op":"i","ts":{"$timestamp":{"t":1398778026,"i":10000}},"v":2}
{"h":{"$numberLong":"-4262957146204779874"},"ns":"backupDB.$cmd","o":{"drop":"backupColl"},"op":"c","ts":{"$timestamp":{"t":1398778745,"i":1}},"v":2}
m202@m202-ubuntu1404:/data/MongoDB$
BackupTest:PRIMARY> db.oplog.rs.find({"o":{"drop":"backupColl"}})
{ "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
BackupTest:PRIMARY>
BackupTest:PRIMARY> db.oplog.rs.find({"op":{$nin:["u","i"]}})
{ "ts" : Timestamp(1398771810, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
BackupTest:PRIMARY>
BackupTest:PRIMARY> db.oplog.rs.distinct("op")
[ "n", "i", "u", "c" ]
BackupTest:PRIMARY>
Week 3:
===================
mongo --nodb
var rst = new ReplSetTest( { name: 'rsTest', nodes: 3 } )
rst.startSet()
rs.initiate()
rs.add("Wills-MacBook-Pro.local:31001")
rs.add("Wills-MacBook-Pro.local:31002")
var rst = new ReplSetTest( { name: "rsTest", nodes: { node1: {}, node2: {}, arb: {arbiter: true} } } )
*node1:{smallfiles:"",oplogSize:40, noprealloc:null}
rs.initiate()
rs.add("Wills-MacBook-Pro.local:31001")
rs.addArb("Wills-MacBook-Pro.local:31002")
for(var i=0;i<=100000; i++) {db.testColl.insert({"a":i}); sleep(100);}
kill -SIGSTOP 3195
kill -SIGINT 6472; kill -SIGCONT 3195
Mongo replica set member states:
0 - Startup
5 - Startup2
1 - Primary
2 - Secondary
3 - Recovering
4 - Fatal
6 - Unknown
7 - Arbiter
8 - Down
9 - Rollback
10 - Shunned
States 1, 2, 3, 7 and 9 can vote; the rest cannot.
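A quick way to see each member's current state from the shell (a minimal sketch; any member of the set will do):
// print the state number and name for every member
rs.status().members.forEach(function (m) {
    print(m.name + " -> " + m.state + " (" + m.stateStr + ")");
});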
Max connections available per mongos:
(maxConnsOnPrimary - (numSecondaries * 3) - (numOtherProcesses * 3)) / numMongos
Example: (10000 - (2*3) - (6*3)) / 64 = 155.875
Keep ~10% headroom: 0.9 * 155.875 = ~140 connections per mongos.
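The same arithmetic as a small shell sketch (variable names are mine, for illustration only):
// connection budget per mongos, assuming ~3 connections per secondary and per other process
var maxConnsOnPrimary = 10000;
var numSecondaries = 2;
var numOtherProcs = 6;
var numMongos = 64;
var perMongos = (maxConnsOnPrimary - numSecondaries * 3 - numOtherProcs * 3) / numMongos;
print(perMongos);                     // 155.875
print(Math.floor(perMongos * 0.9));   // ~140, keeping ~10% headroom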
alias ins="ps -aef | grep mongod"
alias killmo="ins | grep -v grep | awk '{print \$2}' | xargs kill -9"
============ Reverse case ======= avoid DDL rollback (create the index on the secondary only) ===========
kill -SIGSTOP 8650
kill -SIGINT 3195; kill -SIGCONT 8650
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/1 --logpath /data/MongoDB/week3/three-member-replica-set/1/mongod.log --smallfiles --noprealloc --port 27017 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/2 --logpath /data/MongoDB/week3/three-member-replica-set/2/mongod.log --smallfiles --noprealloc --port 27018 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/3 --logpath /data/MongoDB/week3/three-member-replica-set/3/mongod.log --smallfiles --noprealloc --port 27019 --replSet threeMemberReplicaSet --fork
===================DML rollback scenario=================
for(var i=0;i<=100000; i++) {db.testColl.insert({"a":i}); sleep(100);}
kill -SIGSTOP 9748
kill -SIGINT 9738; kill -SIGCONT 9748
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/1 --logpath /data/MongoDB/week3/three-member-replica-set/1/mongod.log --smallfiles --noprealloc --port 27017 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/2 --logpath /data/MongoDB/week3/three-member-replica-set/2/mongod.log --smallfiles --noprealloc --port 27018 --replSet threeMemberReplicaSet --fork
mongod --dbpath /data/MongoDB/week3/three-member-replica-set/3 --logpath /data/MongoDB/week3/three-member-replica-set/3/mongod.log --smallfiles --noprealloc --port 27019 --replSet threeMemberReplicaSet --fork
######Rollback Block from log file
2016-06-11T19:29:40.741+0100 I REPL [rsBackgroundSync] replSet rollback findcommonpoint scanned : 2685
2016-06-11T19:29:40.741+0100 I REPL [rsBackgroundSync] replSet rollback 3 fixup
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 3.5
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 4 n:1992 ### n is the number of documents to roll back
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] replSet minvalid=Jun 11 19:29:40 575c5894:7
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 4.6
2016-06-11T19:29:40.797+0100 I REPL [rsBackgroundSync] rollback 4.7
2016-06-11T19:29:40.846+0100 I REPL [rsBackgroundSync] rollback 5 d:1992 u:0
2016-06-11T19:29:40.846+0100 I REPL [rsBackgroundSync] rollback 6
2016-06-11T19:29:40.851+0100 I REPL [rsBackgroundSync] rollback done
2016-06-11T19:29:40.851+0100 I REPL [ReplicationExecutor] transition to RECOVERING
2016-06-11T19:29:40.851+0100 I REPL [rsBackgroundSync] rollback finished
2016-06-11T19:29:40.851+0100 I REPL [ReplicationExecutor] syncing from: localhost:27018
mongod --dbpath ./1 --logpath /data/MongoDB/week3/three-member-replica-set/1/mongod.log --smallfiles --noprealloc --port 27017 --replSet threeMemberReplicaSet --fork
###################
Week 4:
mongo --nodb
cluster = new ShardingTest({shards: 3, chunksize: 1, config: 3, other: {rs: true}})
##Connect mongos
mongo --port 30999
db.users.ensureIndex({email:1})
db.users.getIndexes()
mongos> sh.shardCollection("shardTest.users", {email:1})
{ "collectionsharded" : "shardTest.users", "ok" : 1 }
for ( var x=97; x<97+26; x++ ) {
for( var y=97; y<97+26; y+=6 ) {
var prefix = String.fromCharCode(x) + String.fromCharCode(y);
db.runCommand( { split : "shardTest.users", middle : { email : prefix } } );
}
}
use config
db.chunks.find().count()
mongos> first_doc = db.chunks.find().next()
{
"_id" : "myapp.users-email_MinKey",
"lastmod" : Timestamp(2, 0),
"lastmodEpoch" : ObjectId("538e27be31972172d9b3ec61"),
"ns" : "myapp.users",
"min" : {
"email" : { "$minKey" : 1 }
},
"max" : {
"email" : "aa"
},
"shard" : "test-rs1"
}
mongos> min = first_doc.min
{ "email" : { "$minKey" : 1 } }
mongos> max = first_doc.max
{ "email" : "aa" }
mongos> keyPattern = { email : 1 }
{ "email" : 1 }
mongos> ns = first_doc.ns
myapp.users
mongos> db.runCommand({dataSize: ns, keyPattern: keyPattern, min: min, max: max } )
{ "size" : 0, "numObjects" : 0, "millis" : 0, "ok" : 1 }
mongos> second_doc = db.chunks.find().skip(1).next()
{
"_id" : "myapp.users-email_\"aa\"",
"lastmod" : Timestamp(3, 0),
"lastmodEpoch" : ObjectId("538e27be31972172d9b3ec61"),
"ns" : "myapp.users",
"min" : {
"email" : "aa"
},
"max" : {
"email" : "ag"
},
"shard" : "test-rs0"
}
mongos> max2 = second_doc.max
{ "email" : "ag" }
mongos> use admin
switched to db admin
mongos> db.runCommand( { mergeChunks : ns , bounds : [ min , max2 ] } )
{ "ok" : 1 }
=====
Homework 1:
For this assignment, you will be pre-splitting chunks into ranges. We'll be working with the "m202.presplit" database/collection.
First, create 3 shards. You can use standalone servers or replica sets on each shard, whatever is most convenient.
Pre-split your collection into chunks with the following ranges, and put the chunks on the appropriate shard, and name the shards "shard0", "shard1", and "shard2". Let's make the shard key the "a" field. Don't worry if you have other ranges as well, we will only be checking for the following:
Range / Shard
0 to 7 / shard0
7 to 10 / shard0
10 to 14 / shard0
14 to 15 / shard1
15 to 20 / shard1
20 to 21 / shard1
21 to 22 / shard2
22 to 23 / shard2
23 to 24 / shard2
mongo --nodb
cluster = new ShardingTest({shards: 3, chunksize: 1, config: 3, other: {rs: false}})
cluster = new ShardingTest({shards : {shard0:{smallfiles:''}, shard1:{smallfiles:''}, shard2:{smallfiles:''}},config:3,other: {rs:false}})
##working
cluster = new ShardingTest({shards : {"shard0":{smallfiles:''}, "shard1":{smallfiles:''}, "shard2":{smallfiles:''}},config: 3})
mongos> sh.enableSharding("m202")
{ "ok" : 1 }
mongos> sh.shardCollection("m202.presplit",{a:1})
{ "collectionsharded" : "m202.presplit", "ok" : 1 }
###Shard Start
mkdir shard0 shard1 shard2 config0 config1 config2
mongod --shardsvr --port 30101 --dbpath shard0 --logpath shard0/shard0-0.log --smallfiles --oplogSize 40 --fork
mongod --shardsvr --port 30201 --dbpath shard1 --logpath shard1/shard1-0.log --smallfiles --oplogSize 40 --fork
mongod --shardsvr --port 30301 --dbpath shard2 --logpath shard2/shard2-0.log --smallfiles --oplogSize 40 --fork
## Config start
mongod --configsvr --port 30501 --dbpath config0 --logpath config0/config-0.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30502 --dbpath config1 --logpath config1/config-1.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30503 --dbpath config2 --logpath config2/config-2.log --smallfiles --oplogSize 40 --fork
mongos --port 39000 --configdb localhost:30501,localhost:30502,localhost:30503
mongo --port 39000
use admin
db.runCommand({"addShard": "localhost:30101", "name": "shard0"})
db.runCommand({"addShard": "localhost:30201", "name": "shard1"})
db.runCommand({"addShard": "localhost:30301", "name": "shard2"})
sh.enableSharding("m202")
sh.shardCollection("m202.presplit", {"a": 1})
sh.splitAt("m202.presplit", {"a": 0})
sh.splitAt("m202.presplit", {"a": 7})
sh.splitAt("m202.presplit", {"a": 10})
sh.splitAt("m202.presplit", {"a": 14})
sh.splitAt("m202.presplit", {"a": 15})
sh.splitAt("m202.presplit", {"a": 20})
sh.splitAt("m202.presplit", {"a": 21})
sh.splitAt("m202.presplit", {"a": 22})
sh.splitAt("m202.presplit", {"a": 23})
sh.splitAt("m202.presplit", {"a": 24})
sh.stopBalancer()
sh.moveChunk("m202.presplit", {"a": 0}, "shard0")
sh.moveChunk("m202.presplit", {"a": 7}, "shard0")
sh.moveChunk("m202.presplit", {"a": 10}, "shard0")
sh.moveChunk("m202.presplit", {"a": 14}, "shard1")
sh.moveChunk("m202.presplit", {"a": 15}, "shard1")
sh.moveChunk("m202.presplit", {"a": 20}, "shard1")
sh.moveChunk("m202.presplit", {"a": 21}, "shard2")
sh.moveChunk("m202.presplit", {"a": 22}, "shard2")
sh.moveChunk("m202.presplit", {"a": 23}, "shard2")
sh.startBalancer()
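To verify that each range landed on the intended shard, a small check from the mongos shell (port 39000 here) against config.chunks:
// list every chunk of m202.presplit with its range and owning shard
db.getSiblingDB("config").chunks.find(
    { ns: "m202.presplit" },
    { _id: 0, min: 1, max: 1, shard: 1 }
).sort({ min: 1 }).forEach(printjson);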
Week4 4.3
Week4 4.4
mongo --nodb
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" }, d2 : { smallfiles : "", noprealloc : "", nopreallocj : ""}};
cluster = new ShardingTest( { shards : config } );
mongo --port 30999
db.testColl.find({
"createdDate": {
$gte: ISODate("2014-01-01T00:00:00.000Z"),
$lt: ISODate("2014-12-31T23:59:59.999Z")
}
}).count()
db.testColl.find({
"createdDate": {
$gte: ISODate("2014-01-01T00:00:00.000Z"),
$lt: ISODate("2014-12-31T23:59:59.999Z")
}
}).sort({"createdDate":1}).limit(1)
min={ "_id" : ObjectId("5766b31795027f1b4332820a"), "createdDate" : ISODate("2014-01-01T00:00:00Z") }
max={ "_id" : ObjectId("5766b31795027f1b43328d4a"), "createdDate" : ISODate("2014-05-01T00:00:00Z") }
min=ISODate("2013-10-01T00:00:00Z")
max=ISODate("2014-05-01T00:00:00Z")
db.runCommand( { mergeChunks :"testDB.testColl" , bounds : [ min , max ] } )
db.testColl.find({
"createdDate": {
$gte: ISODate("2013-01-01T00:00:00.000Z"),
$lt: ISODate("2013-12-31T23:59:59.999Z")
}
}).sort({"createdDate":1}).limit(1)
min={ "_id" : ObjectId("5766b31795027f1b4332796a"), "createdDate" : ISODate("2013-10-01T00:00:00Z") }
chunks:
shard0000 46
shard0001 121
shard0002 47
too many chunks to print, use verbose if you want to force print
tag: LTS { "createdDate" : ISODate("2013-10-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-01-01T00:00:00Z") }
tag: STS { "createdDate" : ISODate("2014-01-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-05-01T00:00:00Z") }
Your assignment is to move all data for the month of January 2014 into LTS
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2013-01-01")}, {createdDate : ISODate("2014-02-01")}, "LTS")
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2014-02-01")}, {createdDate : ISODate("2014-05-01")}, "STS")
tag: LTS { "createdDate" : ISODate("2013-10-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-01-01T00:00:00Z") }
tag: STS { "createdDate" : ISODate("2014-01-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-05-01T00:00:00Z") }
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2013-10-01T00:00:00Z")}, {createdDate : ISODate("2014-01-01T00:00:00Z")}, "LTS")
sh.addTagRange('testDB.testColl', {createdDate : ISODate("2014-01-01T00:00:00Z")}, {createdDate : ISODate("2014-05-01T00:00:00Z")}, "STS")
db.tags.remove({_id : { ns : "testDB.testColl", min : { createdDate : ISODate("2014-01-01T00:00:00Z")} }, tag: "STS" })
tag: LTS { "createdDate" : ISODate("2013-10-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-02-01T00:00:00Z") }
tag: STS { "createdDate" : ISODate("2014-02-01T00:00:00Z") } -->> { "createdDate" : ISODate("2014-05-01T00:00:00Z") }
2.4: strict upgrade order
Config servers first
General order:
Shard secondaries first (in parallel)
Shard primaries next (step down first)
Config servers
Upgrade mongos last
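For the "step down first" step, a minimal sketch of what is run on each shard's current primary before upgrading its binary (60 seconds is an arbitrary value; stop the balancer first, as with sh.stopBalancer() above):
// connected to the shard's primary; forces an election and keeps this node from re-election for 60s
rs.stepDown(60);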
var allChunkInfo = function(ns) {
    var chunks = db.getSiblingDB("config").chunks.find({"ns": ns}).sort({min: 1}); // all chunks for the ns, ordered by min
    // some counters for overall stats at the end
    var totalChunks = 0;
    var totalSize = 0;
    var totalEmpty = 0;
    print("ChunkID,Shard,ChunkSize,ObjectsInChunk"); // header row
    // iterate over all the chunks, print out info for each
    chunks.forEach(
        function printChunkInfo(chunk) {
            var db1 = db.getSiblingDB(chunk.ns.split(".")[0]); // the database we will run the command against later
            var key = db.getSiblingDB("config").collections.findOne({_id: chunk.ns}).key; // needed for the dataSize call
            // dataSize returns the info we need; the estimate option uses counts and is less intensive
            var dataSizeResult = db1.runCommand({datasize: chunk.ns, keyPattern: key, min: chunk.min, max: chunk.max, estimate: true});
            // printjson(dataSizeResult); // uncomment to see how long it takes to run and status
            print(chunk._id + "," + chunk.shard + "," + dataSizeResult.size + "," + dataSizeResult.numObjects);
            totalSize += dataSizeResult.size;
            totalChunks++;
            if (dataSizeResult.size == 0) { totalEmpty++; } // count empty chunks for summary
        }
    );
    print("***********Summary Chunk Information***********");
    print("Total Chunks: " + totalChunks);
    print("Average Chunk Size (bytes): " + (totalSize / totalChunks));
    print("Empty Chunks: " + totalEmpty);
    print("Average Chunk Size (non-empty): " + (totalSize / (totalChunks - totalEmpty)));
}
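Example of calling it from a mongos shell (the namespace here is just the collection used later in these notes):
// prints one CSV row per chunk, then the summary block
allChunkInfo("m202.imbalance");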
Week 5: M202: MongoDB Advanced Deployment and Operations
{ config: "m202-sec.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-mba.log" } }
{ config: "m202-sec.conf", net: { bindIp: "192.0.2.2,127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-mba.log" } }
{ config: "m202-sec.conf", net: { port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-mbp.log" } }
{ config: "./m202-pri.conf", net: { bindIp: "127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs1" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27017.log" } }
{ config: "./m202-pri.conf", net: { bindIp: "192.0.2.3,127.0.0.1", port: 27017 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs1" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27017.log" } }
{ config: "./m202-sec.conf", net: { bindIp: "127.0.0.1", port: 27018 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27018.log" } }
{ config: "./m202-sec.conf", net: { bindIp: "192.0.2.3,127.0.0.1", port: 27018 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", logAppend: true, path: "/data/db/m202-ubuntu-27018.log" } }
{ net: { port: 27017 }, processManagement: { fork: true }, replication: { replSet: "threeMemberReplicaSet" }, storage: { dbPath: "/data/MongoDB/week3/three-member-replica-set/1", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { destination: "file", path: "/data/MongoDB/week3/three-member-replica-set/1/mongod.log" } }
Homework 5.1
Ans: bindIp
Homework 5.2
m202@m202-ubuntu1404:/data/MongoDB/week5$ mloginfo --queries --sort sum mtools_example.log | head -20
namespace operation pattern count min (ms) max (ms) mean (ms) 95%-ile (ms) sum (ms)
grilled.indigo.papaya update {"_id": 1, "l": {"$not": 1}} 4227 0 3872 482 1189.4 2038017
Ans:{"_id": 1, "l": {"$not": 1}}
Homework 5.3:
=========================
mlogfilter mtools_example.log --namespace grilled.indigo.papaya --pattern '{"_id": 1, "l": {"$not": 1}}' > homework5.3
m202@m202-ubuntu1404:/data/MongoDB/week5$ mplotqueries homework5.3 --type histogram --bucketsize 1
Ans: 60-90 ops/s
mlaunch init --replicaset 3 --name testrpl
mlaunch init --replicaset 3 --name testrpl --arbiter 1 --sharded 2 --config 3 --mongos 3
https://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-expanding-the-virtual-machine-disk/
m202@m202-ubuntu1404:~$ sudo fdisk -l | grep "Disk /dev/"
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/mapper/m202--vg-root doesn't contain a valid partition table
Disk /dev/mapper/m202--vg-swap_1 doesn't contain a valid partition table
Disk /dev/mapper/m202--vg-data doesn't contain a valid partition table
Disk /dev/sda: 8589 MB, 8589934592 bytes
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
Disk /dev/sdc: 53.7 GB, 53687091200 bytes
Disk /dev/mapper/m202--vg-root: 6148 MB, 6148849664 bytes
Disk /dev/mapper/m202--vg-swap_1: 2143 MB, 2143289344 bytes
Disk /dev/mapper/m202--vg-data: 21.5 GB, 21474836480 bytes
m202@m202-ubuntu1404:~$ sudo fdisk -l /dev/sda
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004ec72
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 501758 16775167 8136705 5 Extended
/dev/sda5 501760 16775167 8136704 8e Linux LVM
=== Note the partition type 8e (Linux LVM) ===
/dev/sda5 501760 16775167 8136704 8e Linux LVM
This space needs to be added to the LVM volume below:
/dev/mapper/m202--vg-data 20G 44M 19G 1% /data
sudo lvdisplay
--- Logical volume ---
LV Path /dev/m202-vg/data
LV Name data
VG Name m202-vg
LV UUID 0ceMjJ-1Uvp-Hv2x-I0mF-MmIS-gh8F-IxY2Yx
LV Write Access read/write
LV Creation host, time m202-ubuntu1404, 2014-09-26 16:35:13 +0100
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2
sudo fdisk /dev/sdc
n    # new partition (accept the defaults for number, first and last sector)
p    # primary
t    # change the partition type
8e   # Linux LVM
w    # write the partition table and exit
m202@m202-ubuntu1404:~$ sudo pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created
m202@m202-ubuntu1404:~$
m202@m202-ubuntu1404:~$ sudo vgextend m202-vg /dev/sdc1
Volume group "m202-vg" successfully extended
m202@m202-ubuntu1404:~$
m202@m202-ubuntu1404:~$ sudo lvextend /dev/m202-vg/data /dev/sdc1
Extending logical volume data to 70.00 GiB
Logical volume data successfully resized
m202@m202-ubuntu1404:~$ sudo resize2fs /dev/m202-vg/data
resize2fs 1.42.9 (4-Feb-2014)
Filesystem at /dev/m202-vg/data is mounted on /data; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 5
The filesystem on /dev/m202-vg/data is now 18349056 blocks long.
m202@m202-ubuntu1404:~$ df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/m202--vg-data 69G 52M 66G 1% /data
m202@m202-ubuntu1404:~$
############Week 6 #####################
As 20 GB is not enough to run mlaunch with 3 shards, each backed by a 3-member replica set, I added a 50 GB disk and extended /data to 70 GB.
Now the sharded cluster with replica sets can be created:
m202@m202-ubuntu1404:/data/MongoDB/week6$ du -sh *
39G data
m202@m202-ubuntu1404:/data/MongoDB/week6$
mlaunch init --replicaset --sharded s1 s2 s3
Create user:
-----------------------
db.createUser({user:"dilip",pwd:"dilip" , roles:[{role:"clusterManager",db:"admin"}]})
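A quick check that the new user works (a minimal sketch; connectionStatus needs no special privileges):
// authenticate against admin, then inspect who we are and which roles apply
db.getSiblingDB("admin").auth("dilip", "dilip");              // returns 1 on success
db.getSiblingDB("admin").runCommand({ connectionStatus: 1 }); // shows the authenticated user and roles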
Question 2:
Answer:
conf=rs.conf()
testReplSet:SECONDARY> conf.members = [conf.members[0]]
[
{
"_id" : 0,
"host" : "m202-ubuntu1404:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : 0,
"votes" : 1
}
]
testReplSet:SECONDARY> rs.reconfig(conf, {force:true})
{ "ok" : 1 }
testReplSet:SECONDARY>
Question 3:
mongos> sh.shardCollection("final_exam.m202",{_id:1})
{ "ok" : 0, "errmsg" : "already sharded" }
m202@m202-ubuntu1404:/data/db$ mongoexport -d final_exam -c m202 -o m202 --port 30999
2016-07-06T10:18:07.836+0100 connected to: localhost:30999
2016-07-06T10:18:14.576+0100 exported 200000 records
m202@m202-ubuntu1404:/data/db$ ls -lrt
m202@m202-ubuntu1404:/data/db$ mongoimport -d final_exam -c m202 --port 30999 --file=m202
2016-07-06T10:29:10.818+0100 connected to: localhost:30999
2016-07-06T10:29:13.815+0100 [###############.........] final_exam.m202 11.9 MB/19.0 MB (63.0%)
2016-07-06T10:29:15.584+0100 imported 200000 documents
mongos> db.m202.drop()
true
mongos> db.m202.drop()
false
mongos> show collections
system.indexes
mongos>
mongos> sh.shardCollection("final_exam.m202",{_id:1})
{ "collectionsharded" : "final_exam.m202", "ok" : 1 }
mongos>
Question 4:
2014-06-06T18:15:28.224+0100 [initandlisten] connection accepted from 127.0.0.1:40945 #1 (1 connection now open)
2014-06-06T18:15:28.456+0100 [rsStart] trying to contact m202-ubuntu:27017
2014-06-06T18:15:28.457+0100 [rsStart] DBClientCursor::init call() failed
2014-06-06T18:15:28.457+0100 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
2014-06-06T18:15:29.458+0100 [rsStart] trying to contact m202-ubuntu:27017
2014-06-06T18:15:29.460+0100 [rsStart] DBClientCursor::init call() failed
2014-06-06T18:15:29.460+0100 [rsStart] replSet can't get local.system.replset config from self or any seed (yet)
2014-06-06T18:15:30.460+0100 [rsStart] trying to contact m202-ubuntu:27017
options: { config: "rs1.conf", processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs1" }, systemLog: { destination: "file", path: "/data/db/rs1.log" } }
options: { config: "rs2.conf", net: { port: 27018 }, processManagement: { fork: true }, replication: { oplogSizeMB: 512, replSetName: "m202-final" }, storage: { dbPath: "/data/db/rs2" }, systemLog: { destination: "file", path: "/data/db/rs2.log" } }
Ans: Too many connections caused the issue.
Question 5:
m202@m202-ubuntu1404:/data/MongoDB/exam_m202$ grep connectio mtools_example.log | grep acce | grep -o "2014.............." | sort | uniq -c
1 2014-06-21T00:11:0
355 2014-06-21T00:11:3
129 2014-06-21T00:11:4
10 2014-06-21T00:11:5
10 2014-06-21T00:12:0
13 2014-06-21T00:12:1
35 2014-06-21T00:12:2
1 2014-06-21T00:12:3
3 2014-06-21T00:12:4
12 2014-06-21T00:12:5
1 2014 connections n
m202@m202-ubuntu1404:/data/MongoDB/exam_m202$ grep connectio mtools_example.log | grep acce | grep -o "2014..............." | sort | uniq -c | sort -nr -k1 | head
105 2014-06-21T00:11:35
78 2014-06-21T00:11:37
67 2014-06-21T00:11:38
59 2014-06-21T00:11:42
57 2014-06-21T00:11:36
48 2014-06-21T00:11:39
What is the write concern of these operations?
1) mplotqueries mtools_example.log --type connchurn -b 1
Hint: this shows connections opened per bin (the green bars)
2) Filter down to that time range:
mlogfilter mtools_example.log --from Jun 21 00:11:35 --to Jun 21 00:11:40 > issue.txt
3) Plot again to see the exact operations:
mplotqueries issue.txt --type scatter
You see the operations below; each carries writeConcern: { w: 2 }, so the answer is w: 2:
2014-06-21T00:11:37.661Z [conn619] command grilled.$cmd command: update { update: "uionJBQboEga25", ordered: true, writeConcern: { w: 2 }, updates: [ { q: { _id: "5tCrmNbxxKXPRLBl1BBRXqDCkZRigUubKH" }, u: { $pull: { uptimes: { recorded: { $lte: new Date(1403308699816) } } } } } ] } keyUpdates:0 numYields:0 reslen:95 2125ms
2014-06-21T00:11:37.661Z [conn1127] command grilled.$cmd command: delete { delete: "M6s4fZ6bmqM91r", ordered: true, writeConcern: { w: 2 }, deletes: [ { q: { _id: "exrIAVX9xFRZef4iIirRqndcjmfiBF9JhV" }, limit: 0 } ] } keyUpdates:0 numYields:0 reslen:80 452ms
2014-06-21T00:11:37.661Z [conn1112] command grilled.$cmd command: update { update: "8tG8pbMubfM50z", ordered: true, writeConcern: { w: 2 }, updates: [ { q: { _id: "N05Wyp1WeYGm4vXtc7XfwRdl3dNqcviOQE" }, u: { $pull: { uptimes: { recorded: { $lte: new Date(1403308627138) } } } } } ] } keyUpdates:0 numYields:0 reslen:95 787ms
Final: Question 7: Traffic Imbalance in a Sharded Environment
########
Note: Please complete this homework assignment in the provided virtual machine (VM). If you choose to use your native local machine OS, or any environment other than the provided VM, we won't be able to support you.
In this problem, you have a cluster with 2 shards, each with a similar volume of data, but all the application traffic is going to one shard. You must diagnose the query pattern that is causing this problem and figure out how to balance out the traffic.
To set up the scenario, run the following commands to set up your cluster. The config document passed to ShardingTest will eliminate the disk space issues some students have seen when using ShardingTest.
mongo --nodb
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" } };
cluster = new ShardingTest( { shards: config } );
Once the cluster is up, click "Initialize" in MongoProc one time to finish setting up the cluster's data and configuration. If you are running MongoProc on a machine other than the one running the mongos, then you must change the host of 'mongod1' in the settings. The host should be the hostname or IP of the machine on which the mongos is running. MongoProc will use port 30999 to connect to the mongos for this problem.
Once the cluster is initialized, click the "Initialize" button in MongoProc again to simulate application traffic to the cluster for 1 minute. You may click "Initialize" as many times as you like to simulate more traffic for 1 minute at a time. If you need to begin the problem again and want MongoProc to reinitialize the dataset, drop the m202 database from the cluster and click "Initialize" in MongoProc.
Use diagnostic tools (e.g., mongostat and the database profiler) to determine why all application traffic is being routed to one shard. Once you believe you have fixed the problem and traffic is balanced evenly between the two shards, test using MongoProc and then turn in if the test completes successfully.
Note: Dwight discusses the profiler in M102.
########
Ans:
mongo --nodb
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" } };
cluster = new ShardingTest( { shards: config } );
m202@m202-ubuntu1404:~$ mongo --port 30999
MongoDB shell version: 3.0.5
connecting to: 127.0.0.1:30999/test
mongos>
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5781c822f7b8e6f9d5a2a7cc")
}
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:30001" }
balancer:
Troubleshooting using mongostat:
m202@m202-ubuntu1404:~$ mongostat --port 30000
insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:49
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:50
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:51
1 11 5 *0 0 11|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 3k 37k 15 05:09:52
*0 *0 *0 *0 0 2|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 133b 11k 15 05:09:53
*0 2 2 2 0 5|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 900b 11k 15 05:09:54
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:55
*0 *0 *0 *0 0 1|0 0 96.0M 427.0M 92.0M 0 0|0 0|0 79b 11k 15 05:09:56
You see only traffic on { "_id" : "shard0000", "host" : "localhost:30000" }
m202@m202-ubuntu1404:~$ mongostat --port 30001
insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn time
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:01
*0 *0 *0 *0 0 4|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 262b 22k 6 05:10:02
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:03
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:04
*0 *0 *0 *0 0 2|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 133b 11k 6 05:10:05
*0 *0 *0 *0 0 1|0 0 64.0M 353.0M 74.0M 0 0|0 0|0 79b 11k 6 05:10:06
No traffic on { "_id" : "shard0001", "host" : "localhost:30001" }
Using the profiler:
> use m202
switched to db m202
> db.getProfilingLevel()
0
> db.getProfilingStatus()
{ "was" : 0, "slowms" : 100 }
>
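Profiling is off (level 0) on this shard, so the hot queries are not being recorded; a minimal sketch of turning it on for the m202 database directly on the busy shard (level 2 captures all operations, acceptable for this short test):
// run on the mongod for shard0000 (port 30000), not through the mongos
use m202
db.setProfilingLevel(2)   // profile every operation
// after letting MongoProc generate traffic, inspect the captured operations:
db.system.profile.find({ op: { $ne: "command" } }).sort({ $natural: -1 }).limit(5).pretty()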
m202@m202-ubuntu1404:~$ ins
3059 pts/1 Sl+ 0:00 mongo --nodb
3062 pts/1 Sl+ 0:11 mongod --port 30000 --dbpath /data/db/test0 --smallfiles --noprealloc --nopreallocj --setParameter enableTestCommands=1
3079 pts/1 Sl+ 0:10 mongod --port 30001 --dbpath /data/db/test1 --smallfiles --noprealloc --nopreallocj --setParameter enableTestCommands=1
3096 pts/1 Sl+ 0:07 mongos --port 30999 --configdb localhost:30000 --chunkSize 50 --setParameter enableTestCommands=1
mongos is the issue:
## Config start
mongod --configsvr --port 30501 --dbpath ./config0 --logpath ./config0/config-0.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30502 --dbpath ./config1 --logpath ./config1/config-1.log --smallfiles --oplogSize 40 --fork
mongod --configsvr --port 30503 --dbpath ./config2 --logpath ./config2/config-2.log --smallfiles --oplogSize 40 --fork
mongos --port 30999 --configdb localhost:30501,localhost:30502,localhost:30503
mongodump -d config --port 30000 -o=config.dump
Restore the config database into all three config servers:
mongorestore -d config --port 30501 --dir=config.dump/config
mongorestore -d config --port 30502 --dir=config.dump/config
mongorestore -d config --port 30503 --dir=config.dump/config
m202@m202-ubuntu1404:/data/MongoDB/exam_m202/q7$ mongodump -d m202 --port 30000 -o=m202_dump
2016-07-10T06:10:04.115+0100 writing m202.imbalance to m202_dump/m202/imbalance.bson
2016-07-10T06:10:04.176+0100 writing m202.imbalance metadata to m202_dump/m202/imbalance.metadata.json
2016-07-10T06:10:04.177+0100 done dumping m202.imbalance (3000 documents)
2016-07-10T06:10:04.178+0100 writing m202.system.indexes to m202_dump/m202/system.indexes.bson
2016-07-10T06:10:04.179+0100 writing m202.system.profile to m202_dump/m202/system.profile.bson
2016-07-10T06:10:04.182+0100 writing m202.system.profile metadata to m202_dump/m202/system.profile.metadata.json
2016-07-10T06:10:04.182+0100 done dumping m202.system.profile (29 documents)
m202@m202-ubuntu1404:/data/MongoDB/exam_m202/q7$
> db
m202
> db.dropDatabase()
{ "dropped" : "m202", "ok" : 1 }
>
>
mongos> sh.enableSharding("m202")
{ "ok" : 1 }
mongos> sh.shardCollection("m202.imbalance", {otherID : "hashed"})
{ "collectionsharded" : "m202.imbalance", "ok" : 1 }
mongos>
m202@m202-ubuntu1404:/data/MongoDB/exam_m202/q7$ mongorestore -d m202 --port 30999 --dir=m202_dump/m202
2016-07-10T06:13:02.322+0100 building a list of collections to restore from m202_dump/m202 dir
2016-07-10T06:13:02.327+0100 reading metadata file from m202_dump/m202/imbalance.metadata.json
2016-07-10T06:13:02.332+0100 restoring m202.imbalance from file m202_dump/m202/imbalance.bson
2016-07-10T06:13:02.335+0100 reading metadata file from m202_dump/m202/system.profile.metadata.json
2016-07-10T06:13:02.502+0100 no indexes to restore
2016-07-10T06:13:02.502+0100 finished restoring m202.system.profile (0 documents)
2016-07-10T06:13:05.334+0100 [########################] m202.imbalance 90.8 KB/90.8 KB (100.0%)
2016-07-10T06:13:08.081+0100 error: no progress was made executing batch write op in m202.imbalance after 5 rounds (0 ops completed in 6 rounds total)
2016-07-10T06:13:08.081+0100 restoring indexes for collection m202.imbalance from metadata
2016-07-10T06:13:08.084+0100 finished restoring m202.imbalance (3000 documents)
2016-07-10T06:13:08.084+0100 done
mongos> db.imbalance.getShardDistribution()
Shard shard0000 at localhost:30000
data : 140KiB docs : 3000 chunks : 32
estimated data per chunk : 4KiB
estimated docs per chunk : 93
Shard shard0001 at localhost:30001
data : 150KiB docs : 3200 chunks : 31
estimated data per chunk : 4KiB
estimated docs per chunk : 103
Totals
data : 290KiB docs : 6200 chunks : 63
Shard shard0000 contains 48.38% data, 48.38% docs in cluster, avg obj size on shard : 48B
Shard shard0001 contains 51.61% data, 51.61% docs in cluster, avg obj size on shard : 48B
{ "_id" : ISODate("2014-08-16T00:00:00Z") } -->> { "_id" : ISODate("2014-08-17T00:00:00Z") } on : shard0001 Timestamp(2, 94)
{ "_id" : ISODate("2014-08-17T00:00:00Z") } -->> { "_id" : ISODate("2014-08-18T00:00:00Z") } on : shard0001 Timestamp(2, 96)
{ "_id" : ISODate("2014-08-18T00:00:00Z") } -->> { "_id" : ISODate("2014-08-19T00:00:00Z") } on : shard0001 Timestamp(2, 98)
db.system.profile."query".find().sort({"ts":-1}).pretty()
mongos> db.imbalance.find({_id:"wjhf"},{}).explain("executionStats")
db.imbalance.getShardDistribution()
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-16T00:00:00Z")},{"_id": ISODate("2014-07-17T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 966, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-15T00:00:00Z")},{"_id": ISODate("2014-07-16T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 1084, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-18T00:00:00Z")},{"_id": ISODate("2014-07-19T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 906, "ok" : 1
config = { d0 : { smallfiles : "", noprealloc : "", nopreallocj : ""}, d1 : { smallfiles : "", noprealloc : "", nopreallocj : "" } };
cluster = new ShardingTest( { shards: config } );
{ "_id" : ISODate("2014-07-30T00:00:00Z") } -->> { "_id" : ISODate("2014-07-31T00:00:00Z") } on : shard0000 Timestamp(2, 62)
{ "_id" : ISODate("2014-07-31T00:00:00Z") } -->> { "_id" : ISODate("2014-08-01T00:00:00Z") } on : shard0000 Timestamp(2, 63)
{ "_id" : ISODate("2014-08-01T00:00:00Z") } -->> { "_id" : ISODate("2014-08-02T00:00:00Z") } on : shard0001 Timestamp(2, 64)
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-31T00:00:00Z")},{"_id": ISODate("2014-08-01T00:00:00Z")}],"to":"shard0001"})
mongos> db.imbalance.getShardDistribution()
Shard shard0000 at localhost:30000
data : 140KiB docs : 3000 chunks : 32
estimated data per chunk : 4KiB
estimated docs per chunk : 93
Shard shard0001 at localhost:30001
data : 150KiB docs : 3200 chunks : 31
estimated data per chunk : 4KiB
estimated docs per chunk : 103
db.system.profile.find( { op: { $ne : 'command' } } ).pretty()
db.system.profile.find().sort({"$natural":-1}).limit(5).pretty()
find({},{"ns" :1,ts:1,query:1})
db.system.profile.find({},{"ns" :1,ts:1,query:1}).sort({"$natural":-1}).limit(5).pretty()
db.system.profile.find( { ns : 'm202.imbalance' } ).pretty()
numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 7ms
m30000| 2016-07-10T11:12:35.784+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 4ms
m30000| 2016-07-10T11:12:35.793+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 13ms
m30000| 2016-07-10T11:12:35.820+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 2ms
m30000| 2016-07-10T11:12:35.820+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 2ms
m30000| 2016-07-10T11:12:35.834+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 2ms
m30000| 2016-07-10T11:12:35.835+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 3ms
m30000| 2016-07-10T11:12:35.850+0100 I WRITE [conn6] update m202.imbalance query: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } } update: { $set: { hot: true } } nscanned:600 nscannedObjects:600 nMatched:600 nModified:0 keyUpdates:0 writeConflicts:0 numYields:4 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, MMAPV1Journal: { acquireCount: { w: 5 } }, Database: { acquireCount: { w: 5 } }, Collection: { acquireCount: { W: 5 } } } 2ms
m30000| 2016-07-10T11:12:35.852+0100 I COMMAND [conn6] command m202.$cmd command: update { update: "imbalance", updates: [ { q: { _id: { $gte: new Date(1405209600000), $lt: new Date(1405728000000) } }, u: { $set: { hot: true } }, multi: true, upsert: false } ], writeConcern: { w: 1 }, ordered: true, metadata: { shardName: "shard0000", shardVersion: [ Timestamp 2000|63, ObjectId('578210753fba777c8b39094c') ], session: 0 } } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, MMAPV1Journal: { acquireCount: { w: 7 } }, Database: { acquireCount: { w: 6 } }, Collection: { acquireCount: { W: 6 } } } 5ms
mongos> db.imbalance.find({"hot" : true}).sort({"otherID":-1})
{ "_id" : ISODate("2014-07-13T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-14T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-15T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-16T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-17T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-18T00:00:00.099Z"), "otherID" : 99, "hot" : true }
{ "_id" : ISODate("2014-07-13T00:00:00Z") } -->> { "_id" : ISODate("2014-07-14T00:00:00Z") } on : shard0000 Timestamp(2, 28)
{ "_id" : ISODate("2014-07-14T00:00:00Z") } -->> { "_id" : ISODate("2014-07-15T00:00:00Z") } on : shard0000 Timestamp(2, 30)
{ "_id" : ISODate("2014-07-15T00:00:00Z") } -->> { "_id" : ISODate("2014-07-16T00:00:00Z") } on : shard0000 Timestamp(2, 32)
{ "_id" : ISODate("2014-07-16T00:00:00Z") } -->> { "_id" : ISODate("2014-07-17T00:00:00Z") } on : shard0000 Timestamp(2, 34)
{ "_id" : ISODate("2014-07-17T00:00:00Z") } -->> { "_id" : ISODate("2014-07-18T00:00:00Z") } on : shard0000 Timestamp(2, 36)
{ "_id" : ISODate("2014-07-18T00:00:00Z") } -->> { "_id" : ISODate("2014-07-19T00:00:00Z") } on : shard0000 Timestamp(2, 38)
db.imbalance.find({ _id: { $gte: ISODate("2014-07-13T00:00:00Z"), $lt: ISODate("2014-07-14T00:00:00Z") } }).count()
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-13T00:00:00Z")},{"_id": ISODate("2014-07-14T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 966, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-15T00:00:00Z")},{"_id": ISODate("2014-07-16T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 1084, "ok" : 1 }
mongos> db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-18T00:00:00Z")},{"_id": ISODate("2014-07-19T00:00:00Z")}],"to":"shard0001"})
{ "millis" : 906, "ok" : 1
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-16T00:00:00Z")},{"_id": ISODate("2014-07-17T00:00:00Z")}],"to":"shard0001"})
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-17T00:00:00Z")},{"_id": ISODate("2014-07-18T00:00:00Z")}],"to":"shard0001"})
db.runCommand( { moveChunk : "m202.imbalance", bounds : [{"_id": ISODate("2014-07-18T00:00:00Z")},{"_id": ISODate("2014-07-19T00:00:00Z")}],"to":"shard0001"})
{ "_id" : ISODate("2014-07-16T00:00:00Z") } -->> { "_id" : ISODate("2014-07-17T00:00:00Z") } on : shard0001 Timestamp(3, 0)
{ "_id" : ISODate("2014-07-17T00:00:00Z") } -->> { "_id" : ISODate("2014-07-18T00:00:00Z") } on : shard0001 Timestamp(4, 0)
{ "_id" : ISODate("2014-07-18T00:00:00Z") } -->> { "_id" : ISODate("2014-07-19T00:00:00Z") } on : shard0001 Timestamp(5, 0)
{ "_id" : ISODate("2014-07-16T00:00:00Z") } -->> { "_id" : ISODate("2014-07-17T00:00:00Z") } on : shard0000 Timestamp(6, 0)
{ "_id" : ISODate("2014-07-17T00:00:00Z") } -->> { "_id" : ISODate("2014-07-18T00:00:00Z") } on : shard0000 Timestamp(7, 0)