I have a collection of more than 40K records, each of which contains a sub-document array. My aggregate query takes about 300 seconds. I have tried optimizing it using compound as well as multi-key indexes, which brings it down to about 180 seconds.
I still need to reduce the execution time further.
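For reference, the indexes I tried were roughly along these lines (field names are taken from the sample document below; my exact definitions may have differed slightly):

// multi-key index on the array field queried by $elemMatch
db.member_id_transactions.ensureIndex({ "id_url.mid": 1 })

// compound index covering the fields used in the second $match
db.member_id_transactions.ensureIndex({ "grp_id": 1, "id_url.mid": 1 })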
Here is a sample document from my collection:
{ "_id" : ObjectId("545b32cc7e9b99112e7ddd97"), "grp_id" : 654, "user_id" : 2, "mod_on" : ISODate("2014-11-06T08:35:40.857Z"), "crtd_on" : ISODate("2014-11-06T08:35:24.791Z"), "uploadTp" : 0, "tp" : 1, "status" : 3, "id_url" : [ {"mid":"xyz12793"}, {"mid":"xyz12794"}, {"mid":"xyz12795"}, {"mid":"xyz12796"} ], "incl" : 1, "total_cnt" : 25, "succ_cnt" : 25, "fail_cnt" : 0 } and following is my query
db.member_id_transactions.aggregate([
    { '$match': { id_url: { '$elemMatch': { mid: 'xyz12794' } } } },
    { '$unwind': '$id_url' },
    { '$match': { grp_id: 654, 'id_url.mid': 'xyz12794' } }
])

Has anyone faced the same issue?
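For context, the explain output below was produced by passing the explain option to aggregate, roughly like this (the exact shell invocation may vary by server version):

db.member_id_transactions.aggregate(
    [
        { '$match': { id_url: { '$elemMatch': { mid: 'xyz12794' } } } },
        { '$unwind': '$id_url' },
        { '$match': { grp_id: 654, 'id_url.mid': 'xyz12794' } }
    ],
    { explain: true }  // ask the server to report how the pipeline is executed
)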
Here is the output of the aggregate query run with the explain option:
{ "result" : [ { "_id" : ObjectId("546342467e6d1f4951b56285"), "grp_id" : 685, "user_id" : 2, "mod_on" : ISODate("2014-11-12T11:24:01.336Z"), "crtd_on" : ISODate("2014-11-12T11:19:34.682Z"), "uploadTp" : 1, "tp" : 1, "status" : 3, "id_url" : [ {"mid":"xyz12793"}, {"mid":"xyz12794"}, {"mid":"xyz12795"}, {"mid":"xyz12796"} ], "incl" : 1, "__v" : 0, "total_cnt" : 21406, "succ_cnt" : 21402, "fail_cnt" : 4 } ], "ok" : 1, "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("545c8d37ab9cc679383a1b1b") } }