TaskArgs

data class TaskArgs(val description: Output<String>? = null, val displayName: Output<String>? = null, val executionSpec: Output<TaskExecutionSpecArgs>? = null, val labels: Output<Map<String, String>>? = null, val lake: Output<String>? = null, val location: Output<String>? = null, val notebook: Output<TaskNotebookArgs>? = null, val project: Output<String>? = null, val spark: Output<TaskSparkArgs>? = null, val taskId: Output<String>? = null, val triggerSpec: Output<TaskTriggerSpecArgs>? = null) : ConvertibleToJava<TaskArgs>

A Dataplex task represents the work that you want Dataplex to do on a schedule. It encapsulates the code, the parameters, and the schedule.

Example Usage

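A minimal Kotlin sketch of constructing `TaskArgs` for a recurring task. Note that `Output` and the `*Args` data classes below are simplified stand-ins so the example compiles on its own; in a real Pulumi program you would use `Output.of(...)` from the Pulumi SDK and the generated `TaskArgs`/`TaskTriggerSpecArgs` types directly. The task id, lake name, region, and cron schedule are illustrative assumptions, not values from this page.

```kotlin
// Stand-in for Pulumi's Output<T>, used here only so the sketch is self-contained.
data class Output<T>(val value: T) {
    companion object {
        fun <T> of(v: T): Output<T> = Output(v)
    }
}

// Simplified stand-ins for the generated argument types.
data class TaskTriggerSpecArgs(
    val type: Output<String>,
    val schedule: Output<String>? = null,
)

data class TaskArgs(
    val taskId: Output<String>? = null,
    val lake: Output<String>? = null,
    val location: Output<String>? = null,
    val triggerSpec: Output<TaskTriggerSpecArgs>? = null,
)

fun main() {
    // Hypothetical values: "my-task", "my-lake", "us-central1" and the
    // cron expression are placeholders for your own configuration.
    val args = TaskArgs(
        taskId = Output.of("my-task"),
        lake = Output.of("my-lake"),
        location = Output.of("us-central1"),
        triggerSpec = Output.of(
            TaskTriggerSpecArgs(
                type = Output.of("RECURRING"),
                schedule = Output.of("0 * * * *"),
            )
        ),
    )
    println(args.taskId?.value)
}
```

All parameters are optional named arguments with `null` defaults, so you only set the fields your task needs.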

Import

Task can be imported using any of these accepted formats:

$ pulumi import gcp:dataplex/task:Task default projects/{{project}}/locations/{{location}}/lakes/{{lake}}/tasks/{{task_id}}
$ pulumi import gcp:dataplex/task:Task default {{project}}/{{location}}/{{lake}}/{{task_id}}
$ pulumi import gcp:dataplex/task:Task default {{location}}/{{lake}}/{{task_id}}

Constructors

fun TaskArgs(description: Output<String>? = null, displayName: Output<String>? = null, executionSpec: Output<TaskExecutionSpecArgs>? = null, labels: Output<Map<String, String>>? = null, lake: Output<String>? = null, location: Output<String>? = null, notebook: Output<TaskNotebookArgs>? = null, project: Output<String>? = null, spark: Output<TaskSparkArgs>? = null, taskId: Output<String>? = null, triggerSpec: Output<TaskTriggerSpecArgs>? = null)

Functions

open override fun toJava(): TaskArgs

Properties

val description: Output<String>? = null

User-provided description of the task.

val displayName: Output<String>? = null

User friendly display name.

val executionSpec: Output<TaskExecutionSpecArgs>? = null

Configuration for the cluster. Structure is documented below.

val labels: Output<Map<String, String>>? = null

User-defined labels for the task.

val lake: Output<String>? = null

The lake in which the task will be created.

val location: Output<String>? = null

The location in which the task will be created.

val notebook: Output<TaskNotebookArgs>? = null

Configuration for the notebook task. Structure is documented below. (Required) Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).

val project: Output<String>? = null

The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project. If it is not provided, the provider project is used.

val spark: Output<TaskSparkArgs>? = null

Configuration for the Spark task. Structure is documented below.

val taskId: Output<String>? = null

The ID of the task.

val triggerSpec: Output<TaskTriggerSpecArgs>? = null

Configuration for the task trigger. Structure is documented below.