Azure Data Factory using Python
Could someone tell me how to configure the general features of an Azure pipeline in Azure Data Factory using Python policy settings (timeout, retry, etc.)?
Please add a policy parameter in your Python code:
from azure.mgmt.datafactory.models import (BlobSink, BlobSource,
                                           CopyActivity, PipelineResource)

# Create a copy activity with an explicit policy (timeout, retry, etc.)
act_name = 'copyBlobtoBlob'
blob_source = BlobSource()
blob_sink = BlobSink()
policy = {"timeout": "1.00:00:00",          # d.hh:mm:ss, i.e. 1 day
          "concurrency": 1,
          "executionPriorityOrder": "NewestFirst",
          "style": "StartOfInterval",
          "retry": 3,
          "longRetry": 0,
          "longRetryInterval": "00:00:00"}
copy_activity = CopyActivity(name=act_name, source=blob_source, sink=blob_sink,
                             policy=policy)

# Create a pipeline with the copy activity
p_name = 'copyPipeline'
params_for_pipeline = {}
p_obj = PipelineResource(activities=[copy_activity], parameters=params_for_pipeline)
p = adf_client.pipelines.create_or_update(rg_name, df_name, p_name, p_obj)
Tested:
Hope this helps.
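A note on the string values in the policy dict: timeout and longRetryInterval use the .NET TimeSpan format d.hh:mm:ss, so "1.00:00:00" means one day. As a sanity check, here is a small stand-alone sketch (parse_timespan is a hypothetical helper, not part of the Azure SDK) that converts such a string into a Python timedelta:

```python
from datetime import timedelta

def parse_timespan(value: str) -> timedelta:
    """Parse an ADF-style timespan such as '1.00:00:00' (d.hh:mm:ss)."""
    days = 0
    # An optional day count precedes the first colon, separated by a dot.
    if "." in value.split(":")[0]:
        day_part, value = value.split(".", 1)
        days = int(day_part)
    hours, minutes, seconds = (int(p) for p in value.split(":"))
    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

print(parse_timespan("1.00:00:00"))  # 1 day, 0:00:00
```

This makes it easy to verify that a timeout like "1.00:00:00" really is the one-day default rather than one hour.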