Is there any better way to write this query?
{
  $facet: {
    results: [
      {
        $match: {
          $expr: {
            $and: [{ $gte: ["$var1", 1] }, { $gte: ["$var2", 200] }],
          },
        },
      },
      { $sort: { var1: 1 } },
      { $skip: 0 },
      { $limit: 25 },
      {
        $project: {
          var1: "$var1",
          var2: "$var2",
          var3: "$var3",
        },
      },
    ],
    total: [
      {
        $match: {
          $expr: {
            $and: [{ $gte: ["$var1", 1] }, { $gte: ["$var1", 200] }],
          },
        },
      },
      { $count: "count" },
    ],
  },
},
This runs the $match operation twice just to get the total count, which I think is redundant.
I tried this sequence, but it did not work:
{
  "$facet": {
    "results": [
      { "$match": searchString },
      { "$sort": sort },
      { "$count": "total" },
      { "$skip": skip },
      { "$limit": limit },
      { "$project": project }
    ]
  }
}
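The reason this sequence fails is that $count collapses the entire stream into a single document, so the later $skip/$limit/$project stages operate on that one count document instead of the matched results. A tiny pure-JavaScript simulation of those stage semantics (the helper names are illustrative only, not a MongoDB API):

```javascript
// Minimal stand-ins for three stages: each maps an array of documents to a
// new array, the way the server's pipeline stages do.
const countStage = (docs, field) => [{ [field]: docs.length }];
const skipStage = (docs, n) => docs.slice(n);
const limitStage = (docs, n) => docs.slice(0, n);

const matched = [{ var1: 1 }, { var1: 2 }, { var1: 3 }];

// Placing $count before $skip/$limit collapses the stream to one document:
const afterCount = countStage(matched, "total"); // [{ total: 3 }]
// The paging stages now page over that single count document, not the results:
const page = limitStage(skipStage(afterCount, 0), 25); // still [{ total: 3 }]
```

This is why the count has to live in its own facet branch (or its own query): once $count runs, the original documents are gone from the stream.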
I want to get the total number of documents that match the criteria and return only a few of them with projected fields. Can someone please help with this query optimization?
Each document is about 200 KB, there are 10k documents, and each projected document will be about 2 KB. Where should the $project stage be placed so that the query uses less RAM and has lower latency?
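For reference, one common shape that runs the shared $match (and $sort) only once is to place them before the $facet and keep only pagination and counting inside the facets. This is a sketch using the field names from the query above; the compound index it would benefit from is an assumption about your schema:

```javascript
// Run the shared stages once; the facets then only paginate and count the
// already-filtered, already-sorted stream.
const pipeline = [
  // Plain comparisons instead of $expr, so an index on { var1: 1, var2: 1 }
  // could be used (whether that index exists is an assumption).
  { $match: { var1: { $gte: 1 }, var2: { $gte: 200 } } },
  { $sort: { var1: 1 } },
  {
    $facet: {
      results: [
        { $skip: 0 },
        { $limit: 25 },
        { $project: { var1: 1, var2: 1, var3: 1 } },
      ],
      total: [{ $count: "count" }],
    },
  },
];
```

With this shape the expensive filter and sort run once, and the count in the total branch reflects exactly the same matched set as the page in the results branch.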
2 Answers
You can optimize your query by using a single $match operation and moving the $project stage to an earlier position, which will reduce the document size before performing other operations.
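A sketch of what this answer describes, under the document sizes given in the question (which positions the $project are placed at is the point being illustrated, not a definitive recommendation):

```javascript
const pipeline = [
  { $match: { var1: { $gte: 1 }, var2: { $gte: 200 } } },
  { $sort: { var1: 1 } },
  // Project early: with ~200 KB documents and a ~2 KB projection, trimming
  // here means $facet buffers ~2 KB per document instead of ~200 KB (its
  // output is a single document bounded by the 16 MB BSON limit).
  { $project: { var1: 1, var2: 1, var3: 1 } },
  {
    $facet: {
      results: [{ $skip: 0 }, { $limit: 25 }],
      total: [{ $count: "count" }],
    },
  },
];
```

Note that the sort key (var1) must survive the projection, which is why the $project is placed after the $sort here; dropping the sort key before sorting would break the ordering.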
Since I see an incorrect answer here (although I think my comment is clear), there are a few ways to optimize this:
- One option is to run the relevant steps before the $facet; see how it works on the playground example.
- Avoid $facet if you can, or at least use it only for the later steps, after you have already matched and sorted your documents, as $facet currently does not support indexing.
- $facet also groups your documents into one big document, and a document has a size limit, so this should concern you.
- You also do not need the $expr with $and here, and one of the two $gte conditions on var1 in the total branch is redundant ($gte 200 already implies $gte 1).
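If $facet is avoided entirely, as suggested above, the page and the count can be issued as two separate operations (with a real driver: find/sort/skip/limit plus countDocuments, both of which can use an index). The tiny in-memory collection below is a stand-in so the pattern is runnable without a server; the collection contents and the $gte-only matcher are made up for this sketch:

```javascript
// In-memory stand-in for a collection; supports only { f: { $gte: v } } filters.
function makeCollection(docs) {
  const matches = (doc, filter) =>
    Object.entries(filter).every(([field, cond]) => doc[field] >= cond.$gte);
  return {
    find(filter) {
      let out = docs.filter((d) => matches(d, filter));
      return {
        sort(spec) {
          const [field] = Object.keys(spec);
          out = [...out].sort((a, b) => (a[field] - b[field]) * spec[field]);
          return this;
        },
        skip(n) { out = out.slice(n); return this; },
        limit(n) { out = out.slice(0, n); return this; },
        toArray() { return out; },
      };
    },
    countDocuments(filter) {
      return docs.filter((d) => matches(d, filter)).length;
    },
  };
}

const items = makeCollection([
  { var1: 2, var2: 250 },
  { var1: 1, var2: 300 },
  { var1: 0, var2: 500 }, // excluded by the filter: var1 < 1
]);

// Two separate operations instead of one $facet pipeline:
const filter = { var1: { $gte: 1 }, var2: { $gte: 200 } };
const page = items.find(filter).sort({ var1: 1 }).skip(0).limit(25).toArray();
const total = items.countDocuments(filter);
// page: [{ var1: 1, var2: 300 }, { var1: 2, var2: 250 }], total: 2
```

The trade-off is two round trips instead of one, but neither operation buffers all matches into a single 16 MB-bounded document the way $facet does.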