ClusterClusterConfigWorkerConfigArgs

data class ClusterClusterConfigWorkerConfigArgs(val accelerators: Output<List<ClusterClusterConfigWorkerConfigAcceleratorArgs>>? = null, val diskConfig: Output<ClusterClusterConfigWorkerConfigDiskConfigArgs>? = null, val imageUri: Output<String>? = null, val instanceNames: Output<List<String>>? = null, val machineType: Output<String>? = null, val minCpuPlatform: Output<String>? = null, val numInstances: Output<Int>? = null) : ConvertibleToJava<ClusterClusterConfigWorkerConfigArgs>

Constructors

constructor(accelerators: Output<List<ClusterClusterConfigWorkerConfigAcceleratorArgs>>? = null, diskConfig: Output<ClusterClusterConfigWorkerConfigDiskConfigArgs>? = null, imageUri: Output<String>? = null, instanceNames: Output<List<String>>? = null, machineType: Output<String>? = null, minCpuPlatform: Output<String>? = null, numInstances: Output<Int>? = null)
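As a minimal usage sketch, the args can be built directly via this constructor. The import paths below are assumptions based on the Pulumi GCP Kotlin provider's usual package layout and may vary by provider version:

```kotlin
import com.pulumi.core.Output
// Assumed package path; check your provider version's generated sources.
import com.pulumi.gcp.dataproc.kotlin.inputs.ClusterClusterConfigWorkerConfigArgs

// All parameters are optional; unspecified ones fall back to GCP defaults
// (e.g. machineType defaults to n1-standard-4, numInstances to 2).
val workerConfig = ClusterClusterConfigWorkerConfigArgs(
    machineType = Output.of("n1-standard-4"),
    numInstances = Output.of(3),
    minCpuPlatform = Output.of("Intel Skylake"),
)
```

In practice the type-safe builder DSL on the enclosing cluster resource is more idiomatic, but the data class constructor shown here matches this page's signature.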

Properties

val accelerators: Output&lt;List&lt;ClusterClusterConfigWorkerConfigAcceleratorArgs&gt;&gt;? = null

The Compute Engine accelerator configuration for these instances. Can be specified multiple times.

val diskConfig: Output&lt;ClusterClusterConfigWorkerConfigDiskConfigArgs&gt;? = null

val imageUri: Output<String>? = null

The URI for the image to use for this worker. See the guide for more information.

val instanceNames: Output<List<String>>? = null
val machineType: Output<String>? = null

The name of a Google Compute Engine machine type to create for the worker nodes. If not specified, GCP will default to a predetermined computed value (currently n1-standard-4).

val minCpuPlatform: Output<String>? = null

The name of a minimum generation of CPU family for the worker nodes. If not specified, GCP will default to a predetermined computed value for each zone. See the guide for details about which CPU families are available (and defaulted) for each zone.

val numInstances: Output<Int>? = null

Specifies the number of worker nodes to create. If not specified, GCP will default to a predetermined computed value (currently 2). There is currently a beta feature which allows you to run a Single Node Cluster; to take advantage of it, set "dataproc:dataproc.allow.zero.workers" = "true" in cluster_config.software_config.properties.
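A hedged sketch of the single-node (zero-worker) setup described above. The property key comes from this page; the sibling args type and import paths are assumptions that may differ by provider version:

```kotlin
import com.pulumi.core.Output
// Assumed package path; check your provider version's generated sources.
import com.pulumi.gcp.dataproc.kotlin.inputs.ClusterClusterConfigWorkerConfigArgs
import com.pulumi.gcp.dataproc.kotlin.inputs.ClusterClusterConfigSoftwareConfigArgs

// Enable the zero-workers beta feature via the software config properties,
// then request zero worker instances.
val softwareConfig = ClusterClusterConfigSoftwareConfigArgs(
    properties = Output.of(mapOf("dataproc:dataproc.allow.zero.workers" to "true")),
)
val workerConfig = ClusterClusterConfigWorkerConfigArgs(
    numInstances = Output.of(0),
)
```

Both args would then be passed into the enclosing cluster_config when declaring the Dataproc cluster.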

Functions

open override fun toJava(): ClusterClusterConfigWorkerConfigArgs