SynapseSparkJobDefinitionActivityArgs

data class SynapseSparkJobDefinitionActivityArgs(val arguments: Output<List<Any>>? = null, val className: Output<Any>? = null, val conf: Output<Any>? = null, val configurationType: Output<Either<String, ConfigurationType>>? = null, val dependsOn: Output<List<ActivityDependencyArgs>>? = null, val description: Output<String>? = null, val driverSize: Output<Any>? = null, val executorSize: Output<Any>? = null, val file: Output<Any>? = null, val files: Output<List<Any>>? = null, val filesV2: Output<List<Any>>? = null, val linkedServiceName: Output<LinkedServiceReferenceArgs>? = null, val name: Output<String>, val numExecutors: Output<Any>? = null, val onInactiveMarkAs: Output<Either<String, ActivityOnInactiveMarkAs>>? = null, val policy: Output<ActivityPolicyArgs>? = null, val pythonCodeReference: Output<List<Any>>? = null, val scanFolder: Output<Any>? = null, val sparkConfig: Output<Map<String, Any>>? = null, val sparkJob: Output<SynapseSparkJobReferenceArgs>, val state: Output<Either<String, ActivityState>>? = null, val targetBigDataPool: Output<BigDataPoolParametrizationReferenceArgs>? = null, val targetSparkConfiguration: Output<SparkConfigurationParametrizationReferenceArgs>? = null, val type: Output<String>, val userProperties: Output<List<UserPropertyArgs>>? = null) : ConvertibleToJava<SynapseSparkJobDefinitionActivityArgs>

Execute spark job activity.
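A minimal construction sketch, assuming the Pulumi Azure Native Kotlin SDK is on the classpath; the job and pool reference names are placeholders, and the `SynapseSparkJobReferenceArgs` field names are assumed to mirror the constructor pattern shown below:

```kotlin
// Sketch only: "mySparkJobDefinition" is a hypothetical Synapse Spark job
// definition name, not a real resource.
val activity = SynapseSparkJobDefinitionActivityArgs(
    name = Output.of("RunSparkJob"),
    // 'type' must be the literal discriminator for this activity kind.
    type = Output.of("SparkJob"),
    sparkJob = Output.of(
        SynapseSparkJobReferenceArgs(
            referenceName = Output.of("mySparkJobDefinition"),
            type = Output.of("SparkJobDefinitionReference"),
        )
    ),
    // Optional overrides of the job definition; both accept string expressions.
    numExecutors = Output.of(2),
    executorSize = Output.of("Medium"),
)
```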

Constructors

constructor(arguments: Output<List<Any>>? = null, className: Output<Any>? = null, conf: Output<Any>? = null, configurationType: Output<Either<String, ConfigurationType>>? = null, dependsOn: Output<List<ActivityDependencyArgs>>? = null, description: Output<String>? = null, driverSize: Output<Any>? = null, executorSize: Output<Any>? = null, file: Output<Any>? = null, files: Output<List<Any>>? = null, filesV2: Output<List<Any>>? = null, linkedServiceName: Output<LinkedServiceReferenceArgs>? = null, name: Output<String>, numExecutors: Output<Any>? = null, onInactiveMarkAs: Output<Either<String, ActivityOnInactiveMarkAs>>? = null, policy: Output<ActivityPolicyArgs>? = null, pythonCodeReference: Output<List<Any>>? = null, scanFolder: Output<Any>? = null, sparkConfig: Output<Map<String, Any>>? = null, sparkJob: Output<SynapseSparkJobReferenceArgs>, state: Output<Either<String, ActivityState>>? = null, targetBigDataPool: Output<BigDataPoolParametrizationReferenceArgs>? = null, targetSparkConfiguration: Output<SparkConfigurationParametrizationReferenceArgs>? = null, type: Output<String>, userProperties: Output<List<UserPropertyArgs>>? = null)

Properties

val arguments: Output<List<Any>>? = null

User specified arguments to SynapseSparkJobDefinitionActivity.

val className: Output<Any>? = null

The fully qualified identifier of the main class in the main definition file, which will override the 'className' of the spark job definition you provide. Type: string (or Expression with resultType string).

val conf: Output<Any>? = null

Spark configuration properties, which will override the 'conf' of the spark job definition you provide.

val configurationType: Output<Either<String, ConfigurationType>>? = null

The type of the spark config.

val dependsOn: Output<List<ActivityDependencyArgs>>? = null

Dependency conditions this activity depends on.

val description: Output<String>? = null

Activity description.

val driverSize: Output<Any>? = null

Number of cores and amount of memory to be used for the driver allocated in the specified Spark pool for the job, which will override 'driverCores' and 'driverMemory' of the spark job definition you provide. Type: string (or Expression with resultType string).

val executorSize: Output<Any>? = null

Number of cores and amount of memory to be used for executors allocated in the specified Spark pool for the job, which will override 'executorCores' and 'executorMemory' of the spark job definition you provide. Type: string (or Expression with resultType string).

val file: Output<Any>? = null

The main file used for the job, which will override the 'file' of the spark job definition you provide. Type: string (or Expression with resultType string).

val files: Output<List<Any>>? = null

(Deprecated. Please use pythonCodeReference and filesV2) Additional files used for reference in the main definition file, which will override the 'files' of the spark job definition you provide.

val filesV2: Output<List<Any>>? = null

Additional files used for reference in the main definition file, which will override the 'jars' and 'files' of the spark job definition you provide.

val linkedServiceName: Output<LinkedServiceReferenceArgs>? = null

Linked service reference.

val name: Output<String>

Activity name.

val numExecutors: Output<Any>? = null

Number of executors to launch for this job, which will override the 'numExecutors' of the spark job definition you provide. Type: integer (or Expression with resultType integer).

val onInactiveMarkAs: Output<Either<String, ActivityOnInactiveMarkAs>>? = null

Status result of the activity when the state is set to Inactive. This is an optional property and if not provided when the activity is inactive, the status will be Succeeded by default.

val policy: Output<ActivityPolicyArgs>? = null

Activity policy.

val pythonCodeReference: Output<List<Any>>? = null

Additional Python code files used for reference in the main definition file, which will override the 'pyFiles' of the spark job definition you provide.

val scanFolder: Output<Any>? = null

Whether to scan subfolders of the root folder of the main definition file; matching files will be added as reference files. Only the folders named 'jars', 'pyFiles', 'files' or 'archives' will be scanned, and the folder names are case sensitive. Type: boolean (or Expression with resultType boolean).

val sparkConfig: Output<Map<String, Any>>? = null

Spark configuration properties for the job.

val sparkJob: Output<SynapseSparkJobReferenceArgs>

Synapse spark job reference.

val state: Output<Either<String, ActivityState>>? = null

Activity state. This is an optional property and if not provided, the state will be Active by default.

val targetBigDataPool: Output<BigDataPoolParametrizationReferenceArgs>? = null

The name of the big data pool which will be used to execute the spark batch job, which will override the 'targetBigDataPool' of the spark job definition you provide.

val targetSparkConfiguration: Output<SparkConfigurationParametrizationReferenceArgs>? = null

The spark configuration of the spark job.

val type: Output<String>

Type of activity. Expected value is 'SparkJob'.

val userProperties: Output<List<UserPropertyArgs>>? = null

Activity user properties.

Functions

open override fun toJava(): SynapseSparkJobDefinitionActivityArgs