Azure Data Factory - timeout on sink side

I am trying to copy some large tables to Azure SQL Server. The small ones complete, but the large ones fail with a timeout on the sink side (error attached below). Even though no timeout is configured on the SQL Server itself, the copy still fails.

The SQL database is 800 DTU.

If this is the problem, how can I increase the timeout on the sink side?

Isn't Data Factory supposed to keep the connection alive and retry on failure?

Errors:
{
    "dataRead": 1372864152,
    "dataWritten": 1372864152,
    "sourcePeakConnections": 1,
    "sinkPeakConnections": 2,
    "rowsRead": 2205634,
    "rowsCopied": 2205634,
    "copyDuration": 8010,
    "throughput": 167.377,
    "errors": [
        {
            "Code": 11000,
            "Message": "Failure happened on 'Sink' side. 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Timeouts in SQL write operation.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Execution Timeout Expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.,Source=.Net SqlClient Data Provider,SqlErrorNumber=-2,Class=11,ErrorCode=-2146232060,State=0,Errors=[{Class=11,Number=-2,State=0,Message=Execution Timeout Expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.,},],''Type=System.ComponentModel.Win32Exception,Message=The wait operation timed out,Source=,'",
            "EventType": 0,
            "Category": 5,
            "Data": {
                "FailureInitiator": "Sink"
            },
            "MsgId": null,
            "ExceptionType": null,
            "Source": null,
            "StackTrace": null,
            "InnerEventInfos": []
        }
    ],
    "effectiveIntegrationRuntime": "XXX",
    "billingReference": {
        "activityType": "DataMovement",
        "billableDuration": [
            {
                "meterType": "SelfhostedIR",
                "duration": 2.0166666666666666,
                "unit": "Hours"
            }
        ]
    },
    "usedParallelCopies": 1,
    "executionDetails": [
        {
            "source": {
                "type": "SqlServer"
            },
            "sink": {
                "type": "SqlServer"
            },
            "status": "Failed",
            "start": "2020-08-03T17:16:58.8388528Z",
            "duration": 8010,
            "usedParallelCopies": 1,
            "profile": {
                "queue": {
                    "status": "Completed",
                    "duration": 810
                },
                "preCopyScript": {
                    "status": "Completed",
                    "duration": 0
                },
                "transfer": {
                    "status": "Completed",
                    "duration": 7200,
                    "details": {
                        "readingFromSource": {
                            "type": "SqlServer",
                            "workingDuration": 7156,
                            "timeToFirstByte": 0
                        },
                        "writingToSink": {
                            "type": "SqlServer"
                        }
                    }
                }
            },
            "detailedDurations": {
                "queuingDuration": 810,
                "preCopyScriptDuration": 0,
                "timeToFirstByte": 0,
                "transferDuration": 7200
            }
        }
    ],
    "dataConsistencyVerification": {
        "VerificationResult": "NotVerified"
    },
    "durationInQueue": {
        "integrationRuntimeQueue": 810
    }
}

Please try setting the write batch timeout (`writeBatchTimeout`) on the sink side:

  1. The wait time for the batch insert operation to complete before it times out. Allowed values are of type timespan. An example is "00:30:00" (30 minutes).

Reference: Azure SQL Database as the sink
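As a sketch of where this property goes, the sink section of the copy activity's JSON definition would look roughly like the following (the `writeBatchSize` value of 10000 is an illustrative choice, not a recommendation from the docs):

```json
{
    "sink": {
        "type": "AzureSqlSink",
        "writeBatchSize": 10000,
        "writeBatchTimeout": "00:30:00"
    }
}
```

You can also set both values in the ADF authoring UI, on the copy activity's Sink tab ("Write batch size" and "Write batch timeout").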

To add my case: I hit the same error as the OP, but the exception was raised while executing a data flow.

Following the direction of the accepted answer, setting Batch size to some limit in my data flow sink helped solve my problem.