Getting error in Azure Stream Analytics with DocumentDB as sink
I am streaming events from an Event Hub to DocumentDB with Azure Stream Analytics.
I configured the input, query, and output per the documentation, tested them with sample data, and got the expected results back.
But when I started the streaming job and sent the same payload as the earlier sample data, I got this error message:
There was a problem formatting the document [id] column as per DocumentDB constraints for DocumentDB db:[my-database-name], and collection:[my-collection-name].
My sample data is an array of JSON objects:
[
{ "Sequence": 1, "Tenant": "T1", "Status": "Started" },
{ "Sequence": 2, "Tenant": "T1", "Status": "Ended" }
]
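For anyone who wants to reproduce this outside the portal's sample-data tester, here is a minimal sketch of sending the same payload with the azure-eventhub Python SDK (my sketch, not part of the original setup; the connection string placeholder is mine):

# Minimal sketch: publish the two sample events to the "events" Event Hub.
# Assumes the azure-eventhub (v5) Python package; fill in your own connection string.
import json
from azure.eventhub import EventHubProducerClient, EventData

events = [
    {"Sequence": 1, "Tenant": "T1", "Status": "Started"},
    {"Sequence": 2, "Tenant": "T1", "Status": "Ended"},
]

producer = EventHubProducerClient.from_connection_string(
    conn_str="<namespace-connection-string>", eventhub_name="events"
)
with producer:
    batch = producer.create_batch()
    for event in events:
        batch.add(EventData(json.dumps(event)))  # one JSON document per event
    producer.send_batch(batch)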
I configured the input as follows:
- Input alias: eventhubs-events
- Source Type: Data stream
- Source: Event Hub
- Subscription: the same subscription in which I created the Analytics job
- Service bus namespace: an existing Event Hub namespace
- Event hub name: events (existing event hub in the namespace)
- Event hub policy name: a policy with read access
- Event hub consumer group: blank
- Event serialization format: JSON
- Encoding: UTF-8
And the output as follows:
- Output alias: documentdb-events
- Sink: DocumentDB
- Subscription: the same subscription in which I created the Analytics job
- Account id: an existing DocumentDB account
- Database: records (an existing database in the account)
- Collection name pattern: collection (an existing collection in the database)
- Document id: id
My query is simple:
SELECT
    event.Sequence AS id,
    event.Tenant,
    event.Status
INTO [documentdb-events]
FROM [eventhubs-events] AS event
It turns out that all field names in the output are automatically lowercased.
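So for the first sample event, the document the job tries to write presumably looks something like this (my reconstruction, not captured output; DocumentDB requires the id to be a string):

{ "id": "1", "tenant": "T1", "status": "Started" }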
In my DocumentDB account I had configured the collection in Partitioned mode, with "/Tenant" as the partition key.
Since that casing no longer matched the output, the constraint failed.
Changing the partition key to "/tenant" solved the problem.
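Because a collection's partition key cannot be changed in place, "changing" it in practice meant recreating the collection. Here is a minimal sketch with the azure-cosmos Python SDK (the current SDK for what was DocumentDB; the account URL and key placeholders are mine, not from the original setup):

# Minimal sketch: recreate the collection with a lowercase "/tenant" partition key.
# Assumes the azure-cosmos (v4) Python package; fill in your own account URL and key.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-url>", credential="<account-key>")
database = client.create_database_if_not_exists(id="records")
container = database.create_container_if_not_exists(
    id="collection",
    partition_key=PartitionKey(path="/tenant"),  # matches the lowercased output fields
)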
Hopefully sharing these findings saves someone who runs into this problem some trouble.
Second option
Instead of changing the partition key to lowercase, we can now change the compatibility level of the Stream Analytics job.
1.0 version: Field names were changed to lower case when processed by the Azure Stream Analytics engine.
1.1 version: Case sensitivity is preserved for field names when they are processed by the Azure Stream Analytics engine.
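A related defensive option (my own sketch, not from the docs): explicitly alias every projected field to lowercase, so the document shape is the same under both compatibility levels and matches a lowercase "/tenant" partition key:

SELECT
    event.Sequence AS id,
    event.Tenant AS tenant,
    event.Status AS status
INTO [documentdb-events]
FROM [eventhubs-events] AS event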