Mongodb - group() can't handle more than 20000 unique keys

I am running a groupBy query on my mongo cluster. I used an online SQL-to-Mongo query converter tool. When I run the generated query, I get an error. The data volume is large. Is there a way to work around this?

// SQL  query
SELECT COUNT(*) AS count FROM db_name
WHERE version="v2"
GROUP BY id

// Mongo query 
db.db_name.group({
    "key": {
        "id": true
    },
    "initial": {
        "count": 0
    },
    "reduce": function(obj, prev) {
        // count each matching document
        prev.count++;
    },
    "cond": {
        "version": "v2"
    }
});

I get this error:

 E QUERY    [js] Error: group command failed: {
"operationTime" : Timestamp(1589898357, 1),
"ok" : 0,
"errmsg" : "Plan executor error during group command :: caused by :: group() can't handle more than 20000 unique keys",
"code" : 2,
"codeName" : "BadValue",
"$clusterTime" : {
    "clusterTime" : Timestamp(1589898357, 1),
    "signature" : {
        "hash" : BinData(0,"SvsjmAIsn4rGwA/aRtLt3MPenJQ="),
        "keyId" : NumberLong("6784431306852794369")
    }
}
} :

You can use db.collection.aggregate() instead:

db.db_name.aggregate([
    { $match: { version: "v2" } },
    { $group: { _id: "$id", count: { $sum: 1 } } },
    { $project: { _id: 0 } }
])

The aggregation above selects the documents whose version equals "v2", groups the matching documents by the id field, and computes COUNT(*) for each id.
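As a rough illustration (plain Node.js, with hypothetical sample documents), the $match and $group stages compute the equivalent of:

```javascript
// Hypothetical sample documents standing in for the collection.
const docs = [
  { id: "a", version: "v2" },
  { id: "a", version: "v2" },
  { id: "b", version: "v2" },
  { id: "b", version: "v1" },
];

const counts = {};
for (const doc of docs) {
  if (doc.version !== "v2") continue;          // $match: { version: "v2" }
  counts[doc.id] = (counts[doc.id] || 0) + 1;  // $group: { _id: "$id", count: { $sum: 1 } }
}

console.log(counts); // { a: 2, b: 1 }
```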

As the docs note:

The pipeline provides efficient data aggregation using native operations within MongoDB, and is the preferred method for data aggregation in MongoDB.

The aggregation pipeline can use indexes to improve its performance during some of its stages. In addition, the aggregation pipeline has an internal optimization phase.
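Since the data volume is large, the $group stage may also hit the aggregation pipeline's per-stage memory limit. As a hedge, you can pass the allowDiskUse option so that stages are allowed to spill to temporary files on disk instead of failing:

```javascript
// Same pipeline as above, but allowing stages to write to disk
// when they exceed the in-memory limit.
db.db_name.aggregate(
    [
        { $match: { version: "v2" } },
        { $group: { _id: "$id", count: { $sum: 1 } } },
        { $project: { _id: 0 } }
    ],
    { allowDiskUse: true }
)
```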