Azure BlobWriteStream: The client could not finish the operation within the specified timeout


I am trying to upload 300 GB of (streamed) data to Azure blob storage. The code I am using to perform the upload looks like this:

var stream = blob.OpenWrite();
[...]
// the buffer is filled in with 128KB chunks of data from a larger 300GB file
stream.Write(buffer, offset, count);

After roughly 8 hours of uploading, I get the following error:

at Microsoft.WindowsAzure.Storage.Core.Util.StorageAsyncResult`1.End() in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Core\Util\StorageAsyncResult.cs:line 77
at Microsoft.WindowsAzure.Storage.Blob.BlobWriteStream.EndWrite(IAsyncResult asyncResult) in c:\Program Files (x86)\Jenkins\workspace\release_dotnet_master\Lib\ClassLibraryCommon\Blob\BlobWriteStream.cs:line 211

ErrorMessage = The client could not finish the operation within specified timeout.

As a side note, my upload speed is about 2 MB/s (which may be relevant to the timeout message). Any help would be greatly appreciated.

Based on your description and the error message, I suggest you try setting BlobRequestOptions.MaximumExecutionTime to a longer value if you do not want the operation to time out too quickly.
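For example, here is a minimal sketch of passing a longer MaximumExecutionTime into OpenWrite, assuming `blob` is the CloudBlockBlob from your snippet; the 24-hour and 5-minute values are just placeholders you would tune yourself:

using System;
using Microsoft.WindowsAzure.Storage.Blob;

var options = new BlobRequestOptions
{
    // Total client-side time budget for the whole operation (placeholder value).
    MaximumExecutionTime = TimeSpan.FromHours(24),
    // Timeout the service applies to each individual request (placeholder value).
    ServerTimeout = TimeSpan.FromMinutes(5)
};

// OpenWrite overload that accepts request options; the other arguments stay null.
var stream = blob.OpenWrite(accessCondition: null, options: options, operationContext: null);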

I also suggest enabling storage diagnostics so you can review your Storage Analytics logs and metrics and determine whether the latency is server latency or end-to-end latency. For more details on monitoring, diagnosing, and troubleshooting Microsoft Azure Storage, you can refer to this article.
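If you prefer to turn this on from code rather than the portal, a rough sketch with the same client library could look like the following, where `blobClient` is a CloudBlobClient for your account and the retention values are only examples:

using Microsoft.WindowsAzure.Storage.Shared.Protocol;

// Read the current analytics settings for the Blob service.
ServiceProperties properties = blobClient.GetServiceProperties();

// Log all read/write/delete operations and keep the logs for 7 days (example value).
properties.Logging.LoggingOperations = LoggingOperations.All;
properties.Logging.RetentionDays = 7;
properties.Logging.Version = "1.0";

// Capture hourly metrics, including per-API latency, for 7 days (example value).
properties.HourMetrics.MetricsLevel = MetricsLevel.ServiceAndApi;
properties.HourMetrics.RetentionDays = 7;
properties.HourMetrics.Version = "1.0";

blobClient.SetServiceProperties(properties);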

In addition, I suggest you try the Microsoft Azure Storage Data Movement Library to upload the large file to blob storage.

It is designed for high-performance uploading, downloading, and copying of Azure Storage blobs and files. You can install it from the Visual Studio NuGet package manager.

For more detailed usage, you can refer to this article.

Here is an example:

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.DataMovement;

// Connect to the storage account and make sure the target container exists.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("connectstring");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer blobContainer = blobClient.GetContainerReference("foobar");
blobContainer.CreateIfNotExists();

string sourcePath = @"yourfilepath";
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("foobar");

// Number of blocks the library uploads in parallel.
TransferManager.Configurations.ParallelOperations = 64;

// Report progress as bytes are transferred.
SingleTransferContext context = new SingleTransferContext();
context.ProgressHandler = new Progress<TransferStatus>((progress) =>
{
    Console.WriteLine("Bytes uploaded: {0}", progress.BytesTransferred);
});

var task = TransferManager.UploadAsync(
    sourcePath, destBlob, null, context, CancellationToken.None);
task.Wait();

It automatically splits the data into blocks and sends each block to Azure Storage in parallel.

This issue was resolved in version 8.1.3 (I was previously using 8.1.1). The change is also mentioned in their changelog:
  • Blobs (Desktop) : Fixed a bug where the MaximumExecutionTime was not honored, leading to infinite wait, if due to a failure, e.g., a network failure after receiving the response headers, server stopped sending partial response.
  • All(Desktop) : Fixed a memory leak issue where the SendStream was not being disposed during retries, cancellations and Table Operations.

Basically, before 8.1.3, BlobRequestOptions.MaximumExecutionTime was not being honored.