Package-level declarations

Types

class FlexTemplateJob : KotlinCustomResource

There are many types of Dataflow jobs. Some Dataflow jobs run constantly, getting new data from (e.g.) a GCS bucket, and outputting data continuously. Some jobs process a set amount of data then terminate. All jobs can fail while running due to programming errors or other issues. In this way, Dataflow jobs are different from most other Google Cloud resources managed by this provider: the Dataflow job is considered 'existing' while it is in a nonterminal state, and if it reaches a terminal state (e.g. 'FAILED', 'COMPLETE', 'CANCELLED') it will be recreated on the next update. This is as expected for jobs which run continuously, but may surprise users who use this resource for other kinds of Dataflow jobs.

A Dataflow job which is 'destroyed' may be 'cancelled' or 'drained'. If 'cancelled', the job terminates: any data already written remains where it is, but no new data will be processed. If 'drained', no new data will enter the pipeline, but any data currently in the pipeline will finish being processed. The default is 'cancelled', but if you set onDelete to 'drain' in the configuration, you may experience a long wait for your pulumi destroy to complete.

You can potentially short-circuit the wait by setting skipWaitOnJobTermination to true, but beware that unless you take active steps to ensure that the job name changes between instances, the names will conflict and the launch of the new job will fail. One way to achieve this is with a RandomId resource, for example:
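The following is a minimal sketch of that pattern in this Kotlin SDK. It assumes a flexTemplateJob resource builder analogous to the job(...) builder listed under Functions, a randomId builder from the random provider's Kotlin SDK, and the usual Pulumi.run program skeleton; the region, bucket path, and name prefix are placeholders.

```kotlin
import com.pulumi.kotlin.Pulumi
import com.pulumi.gcp.dataflow.kotlin.flexTemplateJob // assumed builder, by analogy with job(...)
import com.pulumi.random.kotlin.randomId               // assumed Kotlin SDK for the random provider

fun main() {
    Pulumi.run {
        // Regenerated only when a "keeper" value changes, so the suffix is stable
        // between updates of the same job but differs between distinct launches.
        val jobNameSuffix = randomId("big-data-job-name-suffix") {
            args {
                byteLength(4)
                keepers(mapOf("region" to "us-central1"))
            }
        }

        flexTemplateJob("big-data-job") {
            args {
                // Embed the random suffix so a re-launched job never reuses the old name.
                name(jobNameSuffix.hex.applyValue { suffix -> "dataflow-flextemplates-job-$suffix" })
                region("us-central1")
                containerSpecGcsPath("gs://my-bucket/templates/template.json") // placeholder
                skipWaitOnJobTermination(true)
            }
        }
    }
}
```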

data class FlexTemplateJobArgs(val additionalExperiments: Output<List<String>>? = null, val autoscalingAlgorithm: Output<String>? = null, val containerSpecGcsPath: Output<String>? = null, val enableStreamingEngine: Output<Boolean>? = null, val ipConfiguration: Output<String>? = null, val kmsKeyName: Output<String>? = null, val labels: Output<Map<String, Any>>? = null, val launcherMachineType: Output<String>? = null, val machineType: Output<String>? = null, val maxWorkers: Output<Int>? = null, val name: Output<String>? = null, val network: Output<String>? = null, val numWorkers: Output<Int>? = null, val onDelete: Output<String>? = null, val parameters: Output<Map<String, Any>>? = null, val project: Output<String>? = null, val region: Output<String>? = null, val sdkContainerImage: Output<String>? = null, val serviceAccountEmail: Output<String>? = null, val skipWaitOnJobTermination: Output<Boolean>? = null, val stagingLocation: Output<String>? = null, val subnetwork: Output<String>? = null, val tempLocation: Output<String>? = null, val transformNameMapping: Output<Map<String, Any>>? = null) : ConvertibleToJava<FlexTemplateJobArgs>

There are many types of Dataflow jobs. Some Dataflow jobs run constantly, getting new data from (e.g.) a GCS bucket, and outputting data continuously. Some jobs process a set amount of data then terminate. All jobs can fail while running due to programming errors or other issues. In this way, Dataflow jobs are different from most other Google Cloud resources managed by this provider: the Dataflow job is considered 'existing' while it is in a nonterminal state, and if it reaches a terminal state (e.g. 'FAILED', 'COMPLETE', 'CANCELLED') it will be recreated on the next update. This is as expected for jobs which run continuously, but may surprise users who use this resource for other kinds of Dataflow jobs.

A Dataflow job which is 'destroyed' may be 'cancelled' or 'drained'. If 'cancelled', the job terminates: any data already written remains where it is, but no new data will be processed. If 'drained', no new data will enter the pipeline, but any data currently in the pipeline will finish being processed. The default is 'cancelled', but if you set onDelete to 'drain' in the configuration, you may experience a long wait for your pulumi destroy to complete.

You can potentially short-circuit the wait by setting skipWaitOnJobTermination to true, but beware that unless you take active steps to ensure that the job name changes between instances, the names will conflict and the launch of the new job will fail. One way to achieve this is with a RandomId resource, for example:
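As a sketch of the drain-on-destroy settings described above, the args can also be constructed directly as this data class. Only the property names come from the declaration above; the package path, bucket path, and region are placeholder assumptions, and Output.of is the plain-value helper from the core Pulumi SDK.

```kotlin
import com.pulumi.core.Output
import com.pulumi.gcp.dataflow.kotlin.FlexTemplateJobArgs

// Drain instead of cancel on destroy, and do not block `pulumi destroy`
// while the drain completes. Paths and region are placeholders.
val drainingJobArgs = FlexTemplateJobArgs(
    containerSpecGcsPath = Output.of("gs://my-bucket/templates/template.json"),
    region = Output.of("us-central1"),
    onDelete = Output.of("drain"),
    skipWaitOnJobTermination = Output.of(true),
)
```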

object FlexTemplateJobMapper : ResourceMapper<FlexTemplateJob>
class Job : KotlinCustomResource

Creates a job on Dataflow, Google Cloud's managed service for running Apache Beam pipelines on Google Compute Engine. For more information, see the official documentation for Beam and Dataflow.
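A minimal sketch using the job(...) builder listed under Functions. It assumes the usual Pulumi.run program skeleton and that the builder setters mirror the JobArgs properties below; the template path, temp location, and parameters are placeholders.

```kotlin
import com.pulumi.kotlin.Pulumi
import com.pulumi.gcp.dataflow.kotlin.job

fun main() {
    Pulumi.run {
        // Launch a classic (GCS-staged) Dataflow template; all paths are placeholders.
        job("big-data-job") {
            args {
                templateGcsPath("gs://my-bucket/templates/template_file")
                tempGcsLocation("gs://my-bucket/tmp_dir")
                parameters(mapOf("foo" to "bar", "baz" to "qux"))
            }
        }
    }
}
```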

data class JobArgs(val additionalExperiments: Output<List<String>>? = null, val enableStreamingEngine: Output<Boolean>? = null, val ipConfiguration: Output<String>? = null, val kmsKeyName: Output<String>? = null, val labels: Output<Map<String, Any>>? = null, val machineType: Output<String>? = null, val maxWorkers: Output<Int>? = null, val name: Output<String>? = null, val network: Output<String>? = null, val onDelete: Output<String>? = null, val parameters: Output<Map<String, Any>>? = null, val project: Output<String>? = null, val region: Output<String>? = null, val serviceAccountEmail: Output<String>? = null, val skipWaitOnJobTermination: Output<Boolean>? = null, val subnetwork: Output<String>? = null, val tempGcsLocation: Output<String>? = null, val templateGcsPath: Output<String>? = null, val transformNameMapping: Output<Map<String, Any>>? = null, val zone: Output<String>? = null) : ConvertibleToJava<JobArgs>

Creates a job on Dataflow, Google Cloud's managed service for running Apache Beam pipelines on Google Compute Engine. For more information, see the official documentation for Beam and Dataflow.


Builder for JobArgs.

object JobMapper : ResourceMapper<Job>

Builder for Job.

class Pipeline : KotlinCustomResource

The main pipeline entity and all the necessary metadata for launching and managing linked jobs. To get more information about Pipeline, see the official Data Pipelines documentation.
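A minimal sketch using the pipeline(...) builder listed under Functions. The Pulumi.run skeleton, the nested scheduleInfo block, and the PIPELINE_TYPE_BATCH / STATE_ACTIVE values follow the Data Pipelines API naming but are assumptions here, as is the schedule itself.

```kotlin
import com.pulumi.kotlin.Pulumi
import com.pulumi.gcp.dataflow.kotlin.pipeline

fun main() {
    Pulumi.run {
        // A batch pipeline that the Data Pipelines scheduler runs every two hours.
        pipeline("nightly-batch-pipeline") {
            args {
                displayName("nightly-batch-pipeline")
                type("PIPELINE_TYPE_BATCH")   // assumed Data Pipelines enum value
                state("STATE_ACTIVE")
                scheduleInfo {
                    schedule("0 */2 * * *")   // unix-cron expression
                    timeZone("UTC")
                }
            }
        }
    }
}
```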

data class PipelineArgs(val displayName: Output<String>? = null, val name: Output<String>? = null, val pipelineSources: Output<Map<String, String>>? = null, val project: Output<String>? = null, val region: Output<String>? = null, val scheduleInfo: Output<PipelineScheduleInfoArgs>? = null, val schedulerServiceAccountEmail: Output<String>? = null, val state: Output<String>? = null, val type: Output<String>? = null, val workload: Output<PipelineWorkloadArgs>? = null) : ConvertibleToJava<PipelineArgs>

The main pipeline entity and all the necessary metadata for launching and managing linked jobs. To get more information about Pipeline, see the official Data Pipelines documentation.

object PipelineMapper : ResourceMapper<Pipeline>

Functions

fun job(name: String): Job
suspend fun job(name: String, block: suspend JobResourceBuilder.() -> Unit): Job
suspend fun pipeline(name: String, block: suspend PipelineResourceBuilder.() -> Unit): Pipeline