GetJobResult

data class GetJobResult(
    val done: Boolean,
    val driverControlFilesUri: String,
    val driverOutputResourceUri: String,
    val hadoopJob: HadoopJobResponse,
    val hiveJob: HiveJobResponse,
    val jobUuid: String,
    val labels: Map<String, String>,
    val pigJob: PigJobResponse,
    val placement: JobPlacementResponse,
    val prestoJob: PrestoJobResponse,
    val pysparkJob: PySparkJobResponse,
    val reference: JobReferenceResponse,
    val scheduling: JobSchedulingResponse,
    val sparkJob: SparkJobResponse,
    val sparkRJob: SparkRJobResponse,
    val sparkSqlJob: SparkSqlJobResponse,
    val status: JobStatusResponse,
    val statusHistory: List<JobStatusResponse>,
    val submittedBy: String,
    val yarnApplications: List<YarnApplicationResponse>
)

Constructors

fun GetJobResult(
    done: Boolean,
    driverControlFilesUri: String,
    driverOutputResourceUri: String,
    hadoopJob: HadoopJobResponse,
    hiveJob: HiveJobResponse,
    jobUuid: String,
    labels: Map<String, String>,
    pigJob: PigJobResponse,
    placement: JobPlacementResponse,
    prestoJob: PrestoJobResponse,
    pysparkJob: PySparkJobResponse,
    reference: JobReferenceResponse,
    scheduling: JobSchedulingResponse,
    sparkJob: SparkJobResponse,
    sparkRJob: SparkRJobResponse,
    sparkSqlJob: SparkSqlJobResponse,
    status: JobStatusResponse,
    statusHistory: List<JobStatusResponse>,
    submittedBy: String,
    yarnApplications: List<YarnApplicationResponse>
)

Types

object Companion

Properties

val done: Boolean
Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and the status.state field indicates whether it was successful, failed, or cancelled.
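As the description notes, done alone does not say whether a finished job succeeded; the terminal state lives in status.state. A minimal sketch of that check, using a hypothetical simplified JobStatusSnapshot stand-in for JobStatusResponse (illustration only, not the SDK type):

```kotlin
// Hypothetical, simplified stand-in for JobStatusResponse.
data class JobStatusSnapshot(val state: String)

// Interpret `done` together with status.state, as described above.
fun describeJob(done: Boolean, status: JobStatusSnapshot): String =
    if (!done) "Job is still in progress"
    else "Job completed with state: ${status.state}"
```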

val driverControlFilesUri: String
If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

val driverOutputResourceUri: String
A URI pointing to the location of the stdout of the job's driver program.

val hadoopJob: HadoopJobResponse
Optional. Job is a Hadoop job.

val hiveJob: HiveJobResponse
Optional. Job is a Hive job.

val jobUuid: String
A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.

val labels: Map<String, String>
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
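The label rules above (keys of 1 to 63 characters conforming to RFC 1035, values optionally empty, at most 32 labels) can be sketched as a client-side check. The helpers below are hypothetical, not part of the SDK, and the regex is only an approximation of the RFC 1035 label grammar:

```kotlin
// Approximation of an RFC 1035 label: a lowercase letter first,
// then letters, digits, or hyphens, not ending with a hyphen.
val rfc1035Label = Regex("[a-z]([-a-z0-9]*[a-z0-9])?")

fun isValidLabelKey(key: String): Boolean =
    key.length in 1..63 && rfc1035Label.matches(key)

// Values may be empty; if present they follow the same rules as keys.
fun isValidLabelValue(value: String): Boolean =
    value.isEmpty() || isValidLabelKey(value)

fun validateLabels(labels: Map<String, String>) {
    require(labels.size <= 32) { "No more than 32 labels can be associated with a job" }
    for ((k, v) in labels) {
        require(isValidLabelKey(k)) { "Invalid label key: $k" }
        require(isValidLabelValue(v)) { "Invalid label value: $v" }
    }
}
```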

val pigJob: PigJobResponse
Optional. Job is a Pig job.

val placement: JobPlacementResponse
Job information, including how, when, and where to run the job.

val prestoJob: PrestoJobResponse
Optional. Job is a Presto job.

val pysparkJob: PySparkJobResponse
Optional. Job is a PySpark job.

val reference: JobReferenceResponse
Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

val scheduling: JobSchedulingResponse
Optional. Job scheduling configuration.

val sparkJob: SparkJobResponse
Optional. Job is a Spark job.

val sparkRJob: SparkRJobResponse
Optional. Job is a SparkR job.

val sparkSqlJob: SparkSqlJobResponse
Optional. Job is a SparkSql job.
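A given job carries exactly one of the job-type payloads listed above (Hadoop, Hive, Pig, Presto, PySpark, Spark, SparkR, or SparkSql). A hypothetical dispatch over simplified nullable stand-ins, to sketch how a caller might branch on the job type (the stub classes below are assumptions, not the generated response classes):

```kotlin
// Hypothetical, simplified stand-ins for the job-type response classes.
data class HadoopJobStub(val mainClass: String?)
data class SparkJobStub(val mainClass: String?)
data class PySparkJobStub(val mainPythonFileUri: String)

// Branch on whichever job-type field is populated.
fun describeJobType(
    hadoop: HadoopJobStub?,
    spark: SparkJobStub?,
    pyspark: PySparkJobStub?
): String = when {
    hadoop != null -> "Hadoop job"
    spark != null -> "Spark job"
    pyspark != null -> "PySpark job (${pyspark.mainPythonFileUri})"
    else -> "unknown job type"
}
```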

val status: JobStatusResponse
The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.

val statusHistory: List<JobStatusResponse>
The previous job status.

val submittedBy: String
The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.

val yarnApplications: List<YarnApplicationResponse>
The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.