Job Args
Submits a job to a cluster. Auto-naming is currently not supported for this resource.
Properties
driver_scheduling_config: Optional. Driver scheduling configuration.
hadoop_job: Optional. Job is a Hadoop job.
hive_job: Optional. Job is a Hive job.
labels: Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters and must conform to RFC 1035. No more than 32 labels can be associated with a job.
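The label constraints above (RFC 1035 keys, optionally empty values, at most 32 entries) can be checked client-side before submission. A minimal sketch, assuming lowercase RFC 1035 labels; `validate_labels` is a hypothetical helper, not part of any SDK:

```python
import re

# RFC 1035 label as used here (assumed lowercase): starts with a letter,
# continues with letters, digits, or hyphens, ends with a letter or digit,
# and is 1 to 63 characters long.
_RFC1035 = re.compile(r"^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$")


def validate_labels(labels: dict) -> None:
    """Raise ValueError if labels violate the documented constraints."""
    if len(labels) > 32:
        raise ValueError("no more than 32 labels can be associated with a job")
    for key, value in labels.items():
        if not _RFC1035.match(key):
            raise ValueError(f"invalid label key: {key!r}")
        # Values may be empty; if present they must also conform.
        if value and not _RFC1035.match(value):
            raise ValueError(f"invalid label value: {value!r}")
```

This only mirrors the documented rules; the server remains the authority on what it accepts.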
pig_job: Optional. Job is a Pig job.
placement: Job information, including how, when, and where to run the job.
presto_job: Optional. Job is a Presto job.
pyspark_job: Optional. Job is a PySpark job.
reference: Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
request_id: Optional. A unique id used to identify the request. If the server receives two SubmitJobRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.SubmitJobRequest) with the same id, the second request is ignored, and the Job created and stored in the backend for the first request is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
scheduling: Optional. Job scheduling configuration.
spark_job: Optional. Job is a Spark job.
spark_r_job: Optional. Job is a SparkR job.
spark_sql_job: Optional. Job is a SparkSql job.
trino_job: Optional. Job is a Trino job.
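Putting the properties together, a job submission pairs exactly one job-type field with placement and any optional settings. A sketch of such a payload as a plain mapping, using the Dataproc v1 REST field names; the cluster name and file URI are hypothetical, and the exact argument names in a given SDK may differ:

```python
# Sketch of job args mirroring the properties above (assumed REST field
# names; "example-cluster" and the gs:// URI are placeholders).
job_args = {
    "placement": {"cluster_name": "example-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://bucket/job.py"},
    "labels": {"env": "dev"},
    "scheduling": {"max_failures_per_hour": 1},
}
```

Only one of the job-type fields (hadoop_job, hive_job, pig_job, presto_job, pyspark_job, spark_job, spark_r_job, spark_sql_job, trino_job) should be set per job.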