JobArgs

data class JobArgs(val driverSchedulingConfig: Output<DriverSchedulingConfigArgs>? = null, val hadoopJob: Output<HadoopJobArgs>? = null, val hiveJob: Output<HiveJobArgs>? = null, val labels: Output<Map<String, String>>? = null, val pigJob: Output<PigJobArgs>? = null, val placement: Output<JobPlacementArgs>? = null, val prestoJob: Output<PrestoJobArgs>? = null, val project: Output<String>? = null, val pysparkJob: Output<PySparkJobArgs>? = null, val reference: Output<JobReferenceArgs>? = null, val region: Output<String>? = null, val requestId: Output<String>? = null, val scheduling: Output<JobSchedulingArgs>? = null, val sparkJob: Output<SparkJobArgs>? = null, val sparkRJob: Output<SparkRJobArgs>? = null, val sparkSqlJob: Output<SparkSqlJobArgs>? = null, val trinoJob: Output<TrinoJobArgs>? = null) : ConvertibleToJava<JobArgs>

Submits a job to a cluster. Auto-naming is currently not supported for this resource.

Constructors

fun JobArgs(driverSchedulingConfig: Output<DriverSchedulingConfigArgs>? = null, hadoopJob: Output<HadoopJobArgs>? = null, hiveJob: Output<HiveJobArgs>? = null, labels: Output<Map<String, String>>? = null, pigJob: Output<PigJobArgs>? = null, placement: Output<JobPlacementArgs>? = null, prestoJob: Output<PrestoJobArgs>? = null, project: Output<String>? = null, pysparkJob: Output<PySparkJobArgs>? = null, reference: Output<JobReferenceArgs>? = null, region: Output<String>? = null, requestId: Output<String>? = null, scheduling: Output<JobSchedulingArgs>? = null, sparkJob: Output<SparkJobArgs>? = null, sparkRJob: Output<SparkRJobArgs>? = null, sparkSqlJob: Output<SparkSqlJobArgs>? = null, trinoJob: Output<TrinoJobArgs>? = null)

Functions

open override fun toJava(): JobArgs

Properties

val driverSchedulingConfig: Output<DriverSchedulingConfigArgs>? = null

Optional. Driver scheduling configuration.

val hadoopJob: Output<HadoopJobArgs>? = null

Optional. Job is a Hadoop job.

val hiveJob: Output<HiveJobArgs>? = null

Optional. Job is a Hive job.

val labels: Output<Map<String, String>>? = null

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty; if present, they must contain 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
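These constraints can be checked client-side before a job is submitted. A minimal sketch, assuming a common reading of the RFC 1035 label grammar in its lowercase form; the helper names here are illustrative and not part of this API:

```kotlin
// Hypothetical client-side check for the documented label constraints:
// keys must be 1-63 chars and match the RFC 1035 label shape (a letter,
// then letters, digits, or hyphens, ending in a letter or digit);
// values may be empty, otherwise the same 1-63 char shape applies.
val rfc1035Label = Regex("[a-z]([-a-z0-9]*[a-z0-9])?")

fun isValidLabel(key: String, value: String): Boolean {
    val keyOk = key.length in 1..63 && rfc1035Label.matches(key)
    val valueOk = value.isEmpty() ||
        (value.length in 1..63 && rfc1035Label.matches(value))
    return keyOk && valueOk
}

// A job may carry at most 32 labels, each individually valid.
fun isValidLabelMap(labels: Map<String, String>): Boolean =
    labels.size <= 32 && labels.all { (k, v) -> isValidLabel(k, v) }
```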

val pigJob: Output<PigJobArgs>? = null

Optional. Job is a Pig job.

val placement: Output<JobPlacementArgs>? = null

Job information, including how, when, and where to run the job.

val prestoJob: Output<PrestoJobArgs>? = null

Optional. Job is a Presto job.

val project: Output<String>? = null
val pysparkJob: Output<PySparkJobArgs>? = null

Optional. Job is a PySpark job.

val reference: Output<JobReferenceArgs>? = null

Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

val region: Output<String>? = null
val requestId: Output<String>? = null

Optional. A unique id used to identify the request. If the server receives two SubmitJobRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) with the same id, the second request is ignored and the first Job created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
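The recommended UUID always satisfies the documented format. A minimal sketch of generating and validating such an id; the helper names are illustrative and not part of this API:

```kotlin
import java.util.UUID

// The documented requestId shape: letters, digits, underscores,
// and hyphens only, at most 40 characters.
val requestIdPattern = Regex("[a-zA-Z0-9_-]{1,40}")

fun isValidRequestId(id: String): Boolean = requestIdPattern.matches(id)

// A random UUID renders as 36 characters of hex digits and hyphens,
// so it always fits the documented constraints.
fun newRequestId(): String = UUID.randomUUID().toString()
```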

val scheduling: Output<JobSchedulingArgs>? = null

Optional. Job scheduling configuration.

val sparkJob: Output<SparkJobArgs>? = null

Optional. Job is a Spark job.

val sparkRJob: Output<SparkRJobArgs>? = null

Optional. Job is a SparkR job.

val sparkSqlJob: Output<SparkSqlJobArgs>? = null

Optional. Job is a Spark SQL job.

val trinoJob: Output<TrinoJobArgs>? = null

Optional. Job is a Trino job.