What I want to do is group all documents that have the same Hash, and for every hash that occurs more than once, keep only the oldest record according to startDate.
My db structure is as follows:
[{
  "_id": "82bacef1915f4a75e6a18406",
  "Hash": "cdb3d507734383260b1d26bd3edcdfac",
  "duration": 12,
  "price": 999,
  "purchaseType": "Complementary",
  "startDate": {
    "$date": { "$numberLong": "1656409841000" }
  },
  "endDate": {
    "$date": { "$numberLong": "1687859441000" }
  }
}]
This is the query I was using:
db.Mydb.aggregate([
  {
    "$group": {
      "_id": { "hash": "$Hash" },
      "dups": { "$addToSet": "$_id" },
      "count": { "$sum": 1 }
    }
  },
  { "$sort": { "startDate": -1 } },
  {
    "$match": {
      "count": { "$gt": 1 }
    }
  }
]).forEach(function(doc) {
  doc.dups.shift();
  db.Mydb.deleteMany({
    "_id": { "$in": doc.dups }
  });
});
This gives a result like this:
{ _id: { hash: '1c01ef475d072f207c4485d0a6448334' },
dups:
[ '6307501ca03c94389f09b782',
'6307501ca03c94389f09b783',
'62bacef1915f4a75e6a18l06' ],
count: 3 }
The problem with this is that the _ids in the dups array come out in a random order every time I run this query, i.e. they are not sorted according to the startDate field.
What can be done here?
Any help is appreciated. Thanks!
2 Answers
Got the solution. I was using $addToSet in the $group pipeline stage; $addToSet builds a set, so it drops duplicate values and makes no guarantee about element order. Instead, I used $push, which allows duplicate elements and appends the _ids in the order the documents reach the stage, so the array stays in the order produced by an earlier $sort.
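Putting both fixes together, here is a minimal sketch of the corrected script (mongosh), assuming the collection name Mydb from the question. $sort runs first with ascending order, so the oldest record arrives at $group first, $push collects the _ids in that order, and the first element of dups is the record to keep:

db.Mydb.aggregate([
  // sort BEFORE grouping, ascending so the oldest startDate comes first
  { "$sort": { "startDate": 1 } },
  {
    "$group": {
      "_id": { "hash": "$Hash" },
      "dups": { "$push": "$_id" },     // $push preserves arrival order
      "count": { "$sum": 1 }
    }
  },
  // only hashes that occur more than once
  { "$match": { "count": { "$gt": 1 } } }
]).forEach(function(doc) {
  doc.dups.shift();                    // remove the oldest _id so it is kept
  db.Mydb.deleteMany({
    "_id": { "$in": doc.dups }         // delete the newer duplicates
  });
});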
After the $group stage, the startDate field will no longer be present in the results, so you cannot sort on it at that point. So, as stated in the comments, you should put the $sort stage first in the aggregation pipeline.
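For comparison, a hypothetical alternative sketch (not from either answer): once $sort runs first, $first can pick the oldest _id per hash directly, so there is no array to shift afterwards. The keepId field name is an illustration, not part of the original query:

db.Mydb.aggregate([
  { "$sort": { "startDate": 1 } },     // oldest first
  {
    "$group": {
      "_id": { "hash": "$Hash" },
      "keepId": { "$first": "$_id" },  // _id of the oldest record per hash
      "count": { "$sum": 1 }
    }
  },
  { "$match": { "count": { "$gt": 1 } } }
]).forEach(function(doc) {
  // delete every document with this hash except the one being kept
  db.Mydb.deleteMany({
    "Hash": doc._id.hash,
    "_id": { "$ne": doc.keepId }
  });
});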