SparkStandaloneAutoscalingConfigArgs

data class SparkStandaloneAutoscalingConfigArgs(val gracefulDecommissionTimeout: Output<String>, val scaleDownFactor: Output<Double>, val scaleDownMinWorkerFraction: Output<Double>? = null, val scaleUpFactor: Output<Double>, val scaleUpMinWorkerFraction: Output<Double>? = null) : ConvertibleToJava<SparkStandaloneAutoscalingConfigArgs>

Basic autoscaling configurations for Spark Standalone.

Constructors

constructor(gracefulDecommissionTimeout: Output<String>, scaleDownFactor: Output<Double>, scaleDownMinWorkerFraction: Output<Double>? = null, scaleUpFactor: Output<Double>, scaleUpMinWorkerFraction: Output<Double>? = null)
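A minimal construction sketch, assuming the Pulumi core Output type and that the class lives under the google-native Kotlin SDK's Dataproc inputs package (the import path is an assumption; adjust it to the provider SDK you are using):

import com.pulumi.core.Output
// Assumed package path; adjust to the provider SDK in use.
import com.pulumi.googlenative.dataproc.v1.kotlin.inputs.SparkStandaloneAutoscalingConfigArgs

fun sparkStandaloneAutoscaling() = SparkStandaloneAutoscalingConfigArgs(
    // Wait up to 10 minutes for Spark workers to finish decommissioning
    // before forcefully removing them (bounds: 0s to 1d).
    gracefulDecommissionTimeout = Output.of("600s"),
    // Remove half of the surplus executors per scaling event.
    scaleDownFactor = Output.of(0.5),
    // Only scale down when at least 10% of the cluster would be removed.
    scaleDownMinWorkerFraction = Output.of(0.1),
    // Add all required workers in one step (most aggressive scale-up).
    scaleUpFactor = Output.of(1.0),
    // Scale up on any recommended change (the default).
    scaleUpMinWorkerFraction = Output.of(0.0),
)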

Properties

val gracefulDecommissionTimeout: Output<String>

Timeout for graceful decommissioning of Spark workers. Specifies the duration to wait for Spark workers to complete Spark decommissioning tasks before forcefully removing workers. Only applicable to downscaling operations. Bounds: 0s, 1d.

val scaleDownFactor: Output<Double>

Fraction of required executors to remove from Spark Standalone clusters. A scale-down factor of 1.0 will result in scaling down so that there are no more executors for the Spark Job (more aggressive scaling). A scale-down factor closer to 0 will result in a smaller magnitude of scaling down (less aggressive scaling). Bounds: 0.0, 1.0.

val scaleDownMinWorkerFraction: Output<Double>? = null

Optional. Minimum scale-down threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-down for the cluster to scale. A threshold of 0 means the autoscaler will scale down on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.

val scaleUpFactor: Output<Double>

Fraction of required workers to add to Spark Standalone clusters. A scale-up factor of 1.0 will result in scaling up so that there are no more required workers for the Spark Job (more aggressive scaling). A scale-up factor closer to 0 will result in a smaller magnitude of scaling up (less aggressive scaling). Bounds: 0.0, 1.0.

val scaleUpMinWorkerFraction: Output<Double>? = null

Optional. Minimum scale-up threshold as a fraction of total cluster size before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1 means the autoscaler must recommend at least a 2-worker scale-up for the cluster to scale. A threshold of 0 means the autoscaler will scale up on any recommended change. Bounds: 0.0, 1.0. Default: 0.0.
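To make the interplay between a scale factor and a minimum-worker-fraction threshold concrete, here is a hypothetical helper mirroring the documented semantics for the scale-up direction; it is an illustration only, not the actual Dataproc autoscaler implementation.

import kotlin.math.ceil

// Hypothetical illustration of the documented scale-up semantics;
// the real autoscaling logic runs inside the Dataproc service.
fun recommendedScaleUp(
    currentWorkers: Int,
    requiredWorkers: Int,
    scaleUpFactor: Double,
    scaleUpMinWorkerFraction: Double,
): Int {
    val shortfall = requiredWorkers - currentWorkers
    if (shortfall <= 0) return 0
    // The factor controls how much of the shortfall is added in one step.
    val delta = ceil(shortfall * scaleUpFactor).toInt()
    // The threshold suppresses changes smaller than the given fraction of
    // cluster size, e.g. 0.1 of a 20-worker cluster means at least 2 workers.
    val minChange = ceil(currentWorkers * scaleUpMinWorkerFraction).toInt()
    return if (delta >= minChange) delta else 0
}

// Example: 20 current workers, 30 required, factor 1.0, threshold 0.1
// -> shortfall 10, delta 10, minimum change 2, so scale up by 10 workers.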

Functions

open override fun toJava(): SparkStandaloneAutoscalingConfigArgs