JobArgs
data class JobArgs(val forceDelete: Output<Boolean>? = null, val hadoopConfig: Output<JobHadoopConfigArgs>? = null, val hiveConfig: Output<JobHiveConfigArgs>? = null, val labels: Output<Map<String, String>>? = null, val pigConfig: Output<JobPigConfigArgs>? = null, val placement: Output<JobPlacementArgs>? = null, val prestoConfig: Output<JobPrestoConfigArgs>? = null, val project: Output<String>? = null, val pysparkConfig: Output<JobPysparkConfigArgs>? = null, val reference: Output<JobReferenceArgs>? = null, val region: Output<String>? = null, val scheduling: Output<JobSchedulingArgs>? = null, val sparkConfig: Output<JobSparkConfigArgs>? = null, val sparksqlConfig: Output<JobSparksqlConfigArgs>? = null) : ConvertibleToJava<JobArgs>
Manages a job resource within a Dataproc cluster within GCE. For more information see the official dataproc documentation.
Note: This resource does not support 'update'; changing any attribute will cause the resource to be recreated.
Example Usage
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.dataproc.Cluster;
import com.pulumi.gcp.dataproc.ClusterArgs;
import com.pulumi.gcp.dataproc.Job;
import com.pulumi.gcp.dataproc.JobArgs;
import com.pulumi.gcp.dataproc.inputs.JobPlacementArgs;
import com.pulumi.gcp.dataproc.inputs.JobSparkConfigArgs;
import com.pulumi.gcp.dataproc.inputs.JobSparkConfigLoggingConfigArgs;
import com.pulumi.gcp.dataproc.inputs.JobPysparkConfigArgs;
import java.util.Map;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var mycluster = new Cluster("mycluster", ClusterArgs.builder()
            .region("us-central1")
            .build());

        // Submit an example Spark job to the cluster.
        var spark = new Job("spark", JobArgs.builder()
            .region(mycluster.region())
            .forceDelete(true)
            .placement(JobPlacementArgs.builder()
                .clusterName(mycluster.name())
                .build())
            .sparkConfig(JobSparkConfigArgs.builder()
                .mainClass("org.apache.spark.examples.SparkPi")
                .jarFileUris("file:///usr/lib/spark/examples/jars/spark-examples.jar")
                .args("1000")
                .properties(Map.of("spark.logConf", "true"))
                .loggingConfig(JobSparkConfigLoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("root", "INFO"))
                    .build())
                .build())
            .build());

        // Submit an example PySpark job to the same cluster.
        var pyspark = new Job("pyspark", JobArgs.builder()
            .region(mycluster.region())
            .forceDelete(true)
            .placement(JobPlacementArgs.builder()
                .clusterName(mycluster.name())
                .build())
            .pysparkConfig(JobPysparkConfigArgs.builder()
                .mainPythonFileUri("gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py")
                .properties(Map.of("spark.logConf", "true"))
                .build())
            .build());

        // Export the current state of each job.
        ctx.export("sparkStatus", spark.statuses().applyValue(statuses -> statuses.get(0).state()));
        ctx.export("pysparkStatus", pyspark.statuses().applyValue(statuses -> statuses.get(0).state()));
    }
}
Import
This resource does not support import.
Constructors
constructor(forceDelete: Output<Boolean>? = null, hadoopConfig: Output<JobHadoopConfigArgs>? = null, hiveConfig: Output<JobHiveConfigArgs>? = null, labels: Output<Map<String, String>>? = null, pigConfig: Output<JobPigConfigArgs>? = null, placement: Output<JobPlacementArgs>? = null, prestoConfig: Output<JobPrestoConfigArgs>? = null, project: Output<String>? = null, pysparkConfig: Output<JobPysparkConfigArgs>? = null, reference: Output<JobReferenceArgs>? = null, region: Output<String>? = null, scheduling: Output<JobSchedulingArgs>? = null, sparkConfig: Output<JobSparkConfigArgs>? = null, sparksqlConfig: Output<JobSparksqlConfigArgs>? = null)
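A minimal Kotlin sketch of calling this constructor directly, assuming the Kotlin variants of the nested input types (for example JobPlacementArgs under com.pulumi.gcp.dataproc.kotlin.inputs) expose the same Output-based constructor style as JobArgs; the region and cluster name below are placeholder values:

import com.pulumi.core.Output
import com.pulumi.gcp.dataproc.kotlin.JobArgs
import com.pulumi.gcp.dataproc.kotlin.inputs.JobPlacementArgs

// Placeholder region and cluster name; in a real program these would
// typically come from a Cluster resource's outputs.
val sparkJobArgs = JobArgs(
    region = Output.of("us-central1"),
    forceDelete = Output.of(true),
    labels = Output.of(mapOf("env" to "dev")),
    placement = Output.of(JobPlacementArgs(clusterName = Output.of("my-cluster"))),
)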
Properties
forceDelete
By default, you can only delete inactive jobs within Dataproc. Setting this to true, and calling destroy, will ensure that the job is first cancelled before issuing the delete.
hadoopConfig
The config of Hadoop job.
hiveConfig
The config of Hive job.
pigConfig
The config of Pig job.
placement
The config of job placement.
prestoConfig
The config of Presto job.
pysparkConfig
The config of pySpark job.
reference
The reference of the job.
scheduling
Optional. Job scheduling configuration. A minimal sketch follows this list.
sparkConfig
The config of the Spark job.
sparksqlConfig
The config of SparkSql job.
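As a hedged illustration of the scheduling property, the sketch below assumes the Kotlin JobSchedulingArgs input type mirrors the underlying Dataproc job scheduling settings with maxFailuresPerHour and maxFailuresTotal fields and the same Output-based constructor style as JobArgs; verify the generated field names before use:

import com.pulumi.core.Output
import com.pulumi.gcp.dataproc.kotlin.JobArgs
import com.pulumi.gcp.dataproc.kotlin.inputs.JobSchedulingArgs

// Assumed field names (maxFailuresPerHour, maxFailuresTotal) based on the
// Dataproc scheduling settings; the limit values are placeholders.
val scheduledJobArgs = JobArgs(
    region = Output.of("us-central1"),
    scheduling = Output.of(
        JobSchedulingArgs(
            maxFailuresPerHour = Output.of(1),
            maxFailuresTotal = Output.of(5),
        )
    ),
)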