ClusterConfigResponse

data class ClusterConfigResponse(val autoscalingConfig: AutoscalingConfigResponse, val configBucket: String, val encryptionConfig: EncryptionConfigResponse, val endpointConfig: EndpointConfigResponse, val gceClusterConfig: GceClusterConfigResponse, val gkeClusterConfig: GkeClusterConfigResponse, val initializationActions: List<NodeInitializationActionResponse>, val lifecycleConfig: LifecycleConfigResponse, val masterConfig: InstanceGroupConfigResponse, val metastoreConfig: MetastoreConfigResponse, val secondaryWorkerConfig: InstanceGroupConfigResponse, val securityConfig: SecurityConfigResponse, val softwareConfig: SoftwareConfigResponse, val tempBucket: String, val workerConfig: InstanceGroupConfigResponse)

The cluster config.
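
A minimal usage sketch (not part of the generated API; the helper name is hypothetical): given a ClusterConfigResponse value obtained elsewhere, for example from a cluster lookup, its properties can be read directly.

// Hedged sketch: summarize a few fields of an existing ClusterConfigResponse value.
fun summarizeClusterConfig(config: ClusterConfigResponse): String =
    "staging bucket=${config.configBucket}, temp bucket=${config.tempBucket}, " +
        "init actions=${config.initializationActions.size}"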

Constructors

fun ClusterConfigResponse(autoscalingConfig: AutoscalingConfigResponse, configBucket: String, encryptionConfig: EncryptionConfigResponse, endpointConfig: EndpointConfigResponse, gceClusterConfig: GceClusterConfigResponse, gkeClusterConfig: GkeClusterConfigResponse, initializationActions: List<NodeInitializationActionResponse>, lifecycleConfig: LifecycleConfigResponse, masterConfig: InstanceGroupConfigResponse, metastoreConfig: MetastoreConfigResponse, secondaryWorkerConfig: InstanceGroupConfigResponse, securityConfig: SecurityConfigResponse, softwareConfig: SoftwareConfigResponse, tempBucket: String, workerConfig: InstanceGroupConfigResponse)

Types

object Companion

Properties

val autoscalingConfig: AutoscalingConfigResponse

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

val configBucket: String

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
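
The bucket-name requirement above can be illustrated with a small hedged check (the helper below is hypothetical and not part of this SDK):

// Illustrative only: configBucket expects a Cloud Storage bucket name, not a gs:// URI.
fun requireBucketName(value: String): String {
    require(!value.startsWith("gs://")) { "Expected a bucket name, not a URI: $value" }
    return value
}
// requireBucketName("my-staging-bucket")      // ok
// requireBucketName("gs://my-staging-bucket") // throws IllegalArgumentException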

val encryptionConfig: EncryptionConfigResponse

Optional. Encryption settings for the cluster.

val endpointConfig: EndpointConfigResponse

Optional. Port/endpoint configuration for this cluster.

val gceClusterConfig: GceClusterConfigResponse

Optional. The shared Compute Engine config settings for all instances in a cluster.

val gkeClusterConfig: GkeClusterConfigResponse

Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.

val initializationActions: List<NodeInitializationActionResponse>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role)
if [ "${ROLE}" == 'Master' ]; then
  ... master specific actions ...
else
  ... worker specific actions ...
fi
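
A hedged Kotlin sketch of inspecting the configured actions on an existing config value (the helper name is hypothetical; it only prints each element, since the fields of NodeInitializationActionResponse are not documented here):

// Hedged sketch: walk the initialization actions of a ClusterConfigResponse.
fun printInitActions(config: ClusterConfigResponse) {
    config.initializationActions.forEachIndexed { index, action ->
        println("init action #$index: $action")
    }
}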

val lifecycleConfig: LifecycleConfigResponse

Optional. The lifecycle config settings for the cluster, such as the auto-delete schedule.

val masterConfig: InstanceGroupConfigResponse

Optional. The Compute Engine config settings for the master instance in a cluster.

val metastoreConfig: MetastoreConfigResponse

Optional. Metastore configuration.

val secondaryWorkerConfig: InstanceGroupConfigResponse

Optional. The Compute Engine config settings for additional worker instances in a cluster.

val securityConfig: SecurityConfigResponse

Optional. Security related configuration.

val softwareConfig: SoftwareConfigResponse

Optional. The config settings for software inside the cluster.

val tempBucket: String

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

val workerConfig: InstanceGroupConfigResponse

Optional. The Compute Engine config settings for worker instances in a cluster.