Workflow Template Args
A Workflow Template is a reusable workflow configuration. It defines a graph of jobs with information on where to run those jobs.
Example Usage
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.dataproc.WorkflowTemplate;
import com.pulumi.gcp.dataproc.WorkflowTemplateArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplateJobArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplateJobSparkJobArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplateJobPrestoJobArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigGceClusterConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigMasterConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigMasterConfigDiskConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigSecondaryWorkerConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigSoftwareConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigWorkerConfigArgs;
import com.pulumi.gcp.dataproc.inputs.WorkflowTemplatePlacementManagedClusterConfigWorkerConfigDiskConfigArgs;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var template = new WorkflowTemplate("template", WorkflowTemplateArgs.builder()
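            // Job DAG: "otherJob" lists "someJob" as a prerequisite, so it runs second.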
            .jobs(
                WorkflowTemplateJobArgs.builder()
                    .sparkJob(WorkflowTemplateJobSparkJobArgs.builder()
                        .mainClass("SomeClass")
                        .build())
                    .stepId("someJob")
                    .build(),
                WorkflowTemplateJobArgs.builder()
                    .prerequisiteStepIds("someJob")
                    .prestoJob(WorkflowTemplateJobPrestoJobArgs.builder()
                        .queryFileUri("someuri")
                        .build())
                    .stepId("otherJob")
                    .build())
            .location("us-central1")
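            // Placement: an ephemeral managed cluster that is created for each
            // workflow run and deleted when the workflow finishes.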
            .placement(WorkflowTemplatePlacementArgs.builder()
                .managedCluster(WorkflowTemplatePlacementManagedClusterArgs.builder()
                    .clusterName("my-cluster")
                    .config(WorkflowTemplatePlacementManagedClusterConfigArgs.builder()
                        .gceClusterConfig(WorkflowTemplatePlacementManagedClusterConfigGceClusterConfigArgs.builder()
                            .tags(
                                "foo",
                                "bar")
                            .zone("us-central1-a")
                            .build())
                        .masterConfig(WorkflowTemplatePlacementManagedClusterConfigMasterConfigArgs.builder()
                            .diskConfig(WorkflowTemplatePlacementManagedClusterConfigMasterConfigDiskConfigArgs.builder()
                                .bootDiskSizeGb(15)
                                .bootDiskType("pd-ssd")
                                .build())
                            .machineType("n1-standard-1")
                            .numInstances(1)
                            .build())
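                        // Secondary workers default to preemptible VMs on Dataproc.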
                        .secondaryWorkerConfig(WorkflowTemplatePlacementManagedClusterConfigSecondaryWorkerConfigArgs.builder()
                            .numInstances(2)
                            .build())
                        .softwareConfig(WorkflowTemplatePlacementManagedClusterConfigSoftwareConfigArgs.builder()
                            .imageVersion("2.0.35-debian10")
                            .build())
                        .workerConfig(WorkflowTemplatePlacementManagedClusterConfigWorkerConfigArgs.builder()
                            .diskConfig(WorkflowTemplatePlacementManagedClusterConfigWorkerConfigDiskConfigArgs.builder()
                                .bootDiskSizeGb(10)
                                .numLocalSsds(2)
                                .build())
                            .machineType("n1-standard-2")
                            .numInstances(3)
                            .build())
                        .build())
                    .build())
                .build())
            .build());
    }
}
Import
WorkflowTemplate can be imported using any of these accepted formats:
$ pulumi import gcp:dataproc/workflowTemplate:WorkflowTemplate default projects/{{project}}/locations/{{location}}/workflowTemplates/{{name}}
$ pulumi import gcp:dataproc/workflowTemplate:WorkflowTemplate default {{project}}/{{location}}/{{name}}
$ pulumi import gcp:dataproc/workflowTemplate:WorkflowTemplate default {{location}}/{{name}}
Properties
dagTimeout
(Beta only) Optional. Timeout duration for the DAG of jobs. You can use "s", "m", "h", and "d" suffixes for second, minute, hour, and day duration values, respectively. The timeout duration must be from 10 minutes ("10m") to 24 hours ("24h" or "1d"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#configuring_or_selecting_a_cluster), the cluster is deleted.
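For example, a 30-minute timeout could be set on the template from the Example Usage above (a minimal sketch; dagTimeout is the builder method that mirrors this property):
var template = new WorkflowTemplate("template", WorkflowTemplateArgs.builder()
    .dagTimeout("30m") // remaining jobs are cancelled and the managed cluster deleted after 30 minutes
    // ... jobs, location, and placement as in the Example Usage above ...
    .build());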
jobs
Required. The Directed Acyclic Graph of Jobs to submit.
labels
The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
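For instance, two labels could be attached in the builder (a sketch; labels takes a string-to-string map, so java.util.Map must be imported):
    .labels(Map.of(
        "env", "dev",
        "team", "analytics"))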
name
Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
* For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
* For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
parameters
Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
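A sketch of a single parameter, assuming the WorkflowTemplateParameterArgs input type (from com.pulumi.gcp.dataproc.inputs) and the Dataproc field-path syntax; it lets the Spark job's main class be supplied when the template is instantiated:
    .parameters(WorkflowTemplateParameterArgs.builder()
        .name("MAIN_CLASS") // hypothetical parameter name
        .fields("jobs['someJob'].sparkJob.mainClass")
        .build())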
placement
Required. WorkflowTemplate scheduling information.
version
Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.