GCP Dataproc parallel steps execution
I am creating a Dataproc cluster on GCP using a workflow template from a YAML file. Once the cluster is created, all of the steps start executing in parallel, but I want some steps to execute only after all the other steps have finished. Is there any way to achieve this?
Sample YAML used for cluster creation:
jobs:
- pigJob:
    continueOnFailure: true
    queryList:
      queries:
      - sh /ui.sh
  stepId: run-pig-ui
- pigJob:
    continueOnFailure: true
    queryList:
      queries:
      - sh /hotel.sh
  stepId: run-pig-hotel
placement:
  managedCluster:
    clusterName: cluster-abc
    labels:
      data: cluster
    config:
      configBucket: bucket-1
      initializationActions:
      - executableFile: gs://bucket-1/install_git.sh
        executionTimeout: 600s
      gceClusterConfig:
        zoneUri: asia-south1-a
        tags:
        - test
      masterConfig:
        machineTypeUri: n1-standard-8
        diskConfig:
          bootDiskSizeGb: 50
      workerConfig:
        machineTypeUri: n1-highcpu-32
        numInstances: 2
        diskConfig:
          bootDiskSizeGb: 100
      softwareConfig:
        imageVersion: 1.4-ubuntu18
        properties:
          core:io.compression.codec.lzo.class: com.hadoop.compression.lzo.LzoCodec
          core:io.compression.codecs: org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec
      secondaryWorkerConfig:
        numInstances: 2
        isPreemptible: true
Command used to create the cluster:
gcloud dataproc workflow-templates instantiate-from-file --file file_name.yaml
gcloud version: 261.0.0
You can use a prerequisiteStepIds list on the final step of your workflow to ensure it only runs after all of its prerequisite steps have finished. You can see the expected structure in the corresponding JSON API representation for OrderedJob.
jobs:
- pigJob:
    continueOnFailure: true
    queryList:
      queries:
      - sh /ui.sh
  stepId: run-pig-ui
- pigJob:
    continueOnFailure: true
    queryList:
      queries:
      - sh /hotel.sh
  stepId: run-pig-hotel
- pigJob:
    continueOnFailure: true
    queryList:
      queries:
      - sh /final.sh
  stepId: run-final-step
  prerequisiteStepIds:
  - run-pig-ui
  - run-pig-hotel
...
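For reference, the same dependency can also be declared when building a template with gcloud commands instead of a YAML file, via the --start-after flag of workflow-templates add-job. A minimal sketch, assuming an illustrative template name my-template (step IDs and queries mirror the example above; depending on your gcloud version a --region flag may also be required):

# Create an empty workflow template (my-template is an illustrative name)
gcloud dataproc workflow-templates create my-template

# These two Pig steps declare no prerequisites, so they run in parallel
gcloud dataproc workflow-templates add-job pig \
  --workflow-template=my-template \
  --step-id=run-pig-ui \
  --execute='sh /ui.sh'

gcloud dataproc workflow-templates add-job pig \
  --workflow-template=my-template \
  --step-id=run-pig-hotel \
  --execute='sh /hotel.sh'

# --start-after maps to prerequisiteStepIds, so this step is only
# scheduled once both earlier steps have completed
gcloud dataproc workflow-templates add-job pig \
  --workflow-template=my-template \
  --step-id=run-final-step \
  --start-after=run-pig-ui,run-pig-hotel \
  --execute='sh /final.sh'

Either way, the workflow treats the steps as a DAG: steps with no prerequisites start as soon as the cluster is ready, and run-final-step starts only after every step listed in its prerequisiteStepIds has finished.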