AzureDatabricksLinkedServiceArgs

data class AzureDatabricksLinkedServiceArgs(val accessToken: Output<Either<AzureKeyVaultSecretReferenceArgs, SecureStringArgs>>? = null, val annotations: Output<List<Any>>? = null, val authentication: Output<Any>? = null, val connectVia: Output<IntegrationRuntimeReferenceArgs>? = null, val credential: Output<CredentialReferenceArgs>? = null, val dataSecurityMode: Output<Any>? = null, val description: Output<String>? = null, val domain: Output<Any>, val encryptedCredential: Output<String>? = null, val existingClusterId: Output<Any>? = null, val instancePoolId: Output<Any>? = null, val newClusterCustomTags: Output<Map<String, Any>>? = null, val newClusterDriverNodeType: Output<Any>? = null, val newClusterEnableElasticDisk: Output<Any>? = null, val newClusterInitScripts: Output<Any>? = null, val newClusterLogDestination: Output<Any>? = null, val newClusterNodeType: Output<Any>? = null, val newClusterNumOfWorker: Output<Any>? = null, val newClusterSparkConf: Output<Map<String, Any>>? = null, val newClusterSparkEnvVars: Output<Map<String, Any>>? = null, val newClusterVersion: Output<Any>? = null, val parameters: Output<Map<String, ParameterSpecificationArgs>>? = null, val policyId: Output<Any>? = null, val type: Output<String>, val version: Output<String>? = null, val workspaceResourceId: Output<Any>? = null) : ConvertibleToJava<AzureDatabricksLinkedServiceArgs>

Azure Databricks linked service.

Constructors

constructor(accessToken: Output<Either<AzureKeyVaultSecretReferenceArgs, SecureStringArgs>>? = null, annotations: Output<List<Any>>? = null, authentication: Output<Any>? = null, connectVia: Output<IntegrationRuntimeReferenceArgs>? = null, credential: Output<CredentialReferenceArgs>? = null, dataSecurityMode: Output<Any>? = null, description: Output<String>? = null, domain: Output<Any>, encryptedCredential: Output<String>? = null, existingClusterId: Output<Any>? = null, instancePoolId: Output<Any>? = null, newClusterCustomTags: Output<Map<String, Any>>? = null, newClusterDriverNodeType: Output<Any>? = null, newClusterEnableElasticDisk: Output<Any>? = null, newClusterInitScripts: Output<Any>? = null, newClusterLogDestination: Output<Any>? = null, newClusterNodeType: Output<Any>? = null, newClusterNumOfWorker: Output<Any>? = null, newClusterSparkConf: Output<Map<String, Any>>? = null, newClusterSparkEnvVars: Output<Map<String, Any>>? = null, newClusterVersion: Output<Any>? = null, parameters: Output<Map<String, ParameterSpecificationArgs>>? = null, policyId: Output<Any>? = null, type: Output<String>, version: Output<String>? = null, workspaceResourceId: Output<Any>? = null)
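A minimal construction sketch, assuming the Pulumi Azure Native Kotlin SDK (the import paths, the `Either.ofRight` factory, and all values — domain URL, token, cluster version, node type — are illustrative, not taken from this page):

```kotlin
// Hypothetical sketch: args for an Azure Databricks linked service that
// spins up a new job cluster, authenticated with a personal access token.
import com.pulumi.core.Output
import com.pulumi.core.Either

val linkedService = AzureDatabricksLinkedServiceArgs(
    type = Output.of("AzureDatabricks"),                 // required; expected value per the docs
    domain = Output.of("https://adb-1234567890123456.7.azuredatabricks.net"), // placeholder domain
    accessToken = Output.of(
        Either.ofRight(                                  // SecureStringArgs branch of the Either
            SecureStringArgs(
                type = Output.of("SecureString"),
                value = Output.of("dapi-placeholder")    // hypothetical token value
            )
        )
    ),
    newClusterVersion = Output.of("13.3.x-scala2.12"),   // illustrative Spark runtime version
    newClusterNodeType = Output.of("Standard_DS3_v2"),   // illustrative node type
    newClusterNumOfWorker = Output.of("1:10")            // auto-scale from 1 to 10 workers
)
```

Because `type` and `domain` are the only non-nullable parameters, everything else can be omitted and defaults to `null`.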

Properties

val accessToken: Output&lt;Either&lt;AzureKeyVaultSecretReferenceArgs, SecureStringArgs&gt;&gt;? = null

Access token for databricks REST API. Refer to https://docs.azuredatabricks.net/api/latest/authentication.html. Type: string (or Expression with resultType string).

val annotations: Output<List<Any>>? = null

List of tags that can be used for describing the linked service.

val authentication: Output<Any>? = null

Required to specify MSI if using the workspace resource id for the Databricks REST API. Type: string (or Expression with resultType string).
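A sketch of the MSI path, assuming the Pulumi Azure Native Kotlin SDK (the subscription id, resource group, workspace name, and cluster id below are all hypothetical placeholders):

```kotlin
// Hypothetical sketch: managed-identity (MSI) authentication against the
// workspace resource id, reusing an existing interactive cluster instead
// of an access token.
import com.pulumi.core.Output

val msiLinkedService = AzureDatabricksLinkedServiceArgs(
    type = Output.of("AzureDatabricks"),
    domain = Output.of("https://adb-1234567890123456.7.azuredatabricks.net"), // placeholder domain
    authentication = Output.of("MSI"),                   // required when workspaceResourceId is used
    workspaceResourceId = Output.of(
        "/subscriptions/00000000-0000-0000-0000-000000000000" +
        "/resourceGroups/example-rg/providers/Microsoft.Databricks" +
        "/workspaces/example-ws"                         // hypothetical ARM resource id
    ),
    existingClusterId = Output.of("0000-000000-example") // hypothetical interactive cluster id
)
```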

val connectVia: Output&lt;IntegrationRuntimeReferenceArgs&gt;? = null

The integration runtime reference.

val credential: Output<CredentialReferenceArgs>? = null

The credential reference containing authentication information.

val dataSecurityMode: Output<Any>? = null

The data security mode for the Databricks Cluster. Type: string (or Expression with resultType string).

val description: Output<String>? = null

Linked service description.

val domain: Output<Any>

&lt;REGION&gt;.azuredatabricks.net, domain name of your Databricks deployment. Type: string (or Expression with resultType string).

val encryptedCredential: Output<String>? = null

The encrypted credential used for authentication. Credentials are encrypted using the integration runtime credential manager. Type: string.

val existingClusterId: Output<Any>? = null

The id of an existing interactive cluster that will be used for all runs of this activity. Type: string (or Expression with resultType string).

val instancePoolId: Output<Any>? = null

The id of an existing instance pool that will be used for all runs of this activity. Type: string (or Expression with resultType string).

val newClusterCustomTags: Output<Map<String, Any>>? = null

Additional tags for cluster resources. This property is ignored in instance pool configurations.

val newClusterDriverNodeType: Output<Any>? = null

The driver node type for the new job cluster. This property is ignored in instance pool configurations. Type: string (or Expression with resultType string).

val newClusterEnableElasticDisk: Output<Any>? = null

Enable the elastic disk on the new cluster. This property is now ignored, and takes the default elastic disk behavior in Databricks (elastic disks are always enabled). Type: boolean (or Expression with resultType boolean).

val newClusterInitScripts: Output<Any>? = null

User-defined initialization scripts for the new cluster. Type: array of strings (or Expression with resultType array of strings).

val newClusterLogDestination: Output<Any>? = null

Specify a location to deliver Spark driver, worker, and event logs. Type: string (or Expression with resultType string).

val newClusterNodeType: Output<Any>? = null

The node type of the new job cluster. This property is required if newClusterVersion is specified and instancePoolId is not specified. If instancePoolId is specified, this property is ignored. Type: string (or Expression with resultType string).

val newClusterNumOfWorker: Output<Any>? = null

If not using an existing interactive cluster, this specifies the number of worker nodes to use for the new job cluster or instance pool. For new job clusters, this is a string-formatted Int32: e.g. '1' means numOfWorker is 1, and '1:10' means auto-scale from 1 (min) to 10 (max). For instance pools, this is a string-formatted Int32 and can only specify a fixed number of worker nodes, such as '2'. Required if newClusterVersion is specified. Type: string (or Expression with resultType string).
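The min/max encoding can be illustrated with a small helper (not part of this SDK; purely an illustration of the string format described above):

```kotlin
// Illustrative only: parses the string format newClusterNumOfWorker accepts,
// either a fixed count like "2" or an auto-scale range like "1:10" (min:max).
fun parseNumOfWorker(spec: String): Pair<Int, Int> {
    val parts = spec.split(":")
    return when (parts.size) {
        1 -> parts[0].toInt().let { it to it }    // fixed size, e.g. "2" -> (2, 2)
        2 -> parts[0].toInt() to parts[1].toInt() // auto-scale, e.g. "1:10" -> (1, 10)
        else -> throw IllegalArgumentException("Bad worker spec: $spec")
    }
}
```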

val newClusterSparkConf: Output<Map<String, Any>>? = null

A set of optional, user-specified Spark configuration key-value pairs.

val newClusterSparkEnvVars: Output<Map<String, Any>>? = null

A set of optional, user-specified Spark environment variables key-value pairs.

val newClusterVersion: Output<Any>? = null

If not using an existing interactive cluster, this specifies the Spark version of a new job cluster or instance pool nodes created for each run of this activity. Required if instancePoolId is specified. Type: string (or Expression with resultType string).

val parameters: Output&lt;Map&lt;String, ParameterSpecificationArgs&gt;&gt;? = null

Parameters for linked service.

val policyId: Output<Any>? = null

The policy id for limiting the ability to configure clusters based on a user-defined set of rules. Type: string (or Expression with resultType string).

val type: Output<String>

Type of linked service. Expected value is 'AzureDatabricks'.

val version: Output<String>? = null

Version of the linked service.

val workspaceResourceId: Output<Any>? = null

Workspace resource id for databricks REST API. Type: string (or Expression with resultType string).

Functions

open override fun toJava(): AzureDatabricksLinkedServiceArgs