NodePoolArgs

data class NodePoolArgs(val autoscaling: Output<NodePoolAutoscalingArgs>? = null, val conditions: Output<List<StatusConditionArgs>>? = null, val config: Output<NodeConfigArgs>? = null, val etag: Output<String>? = null, val initialNodeCount: Output<Int>? = null, val locations: Output<List<String>>? = null, val management: Output<NodeManagementArgs>? = null, val maxPodsConstraint: Output<MaxPodsConstraintArgs>? = null, val name: Output<String>? = null, val networkConfig: Output<NodeNetworkConfigArgs>? = null, val placementPolicy: Output<PlacementPolicyArgs>? = null, val upgradeSettings: Output<UpgradeSettingsArgs>? = null, val version: Output<String>? = null) : ConvertibleToJava<NodePoolArgs>

NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e., VMs) with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down to accommodate the workload.

Upgrade settings control the level of parallelism and the level of disruption caused by an upgrade: maxUnavailable controls the number of nodes that can be simultaneously unavailable, while maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the duration of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism, i.e., how many nodes are upgraded at the same time. Note: upgrades inevitably introduce some disruption, since workloads need to be moved from old nodes to new, upgraded ones; this holds true even if maxUnavailable=0. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool of 5 nodes with maxSurge=2 and maxUnavailable=1. The upgrade process then upgrades 3 nodes simultaneously: it creates 2 additional (upgraded) nodes, then brings down 3 old (not yet upgraded) nodes at the same time, ensuring that at least 4 nodes are always available.
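The parallelism and availability arithmetic above can be sketched in plain Kotlin. Note that the `UpgradeSettings` data class below is a local stand-in for illustration, not the SDK's `UpgradeSettingsArgs` type:

```kotlin
// Stand-in for the upgrade settings described above (not the SDK type).
data class UpgradeSettings(val maxSurge: Int, val maxUnavailable: Int)

// How many nodes are upgraded at the same time: maxSurge + maxUnavailable.
fun parallelism(s: UpgradeSettings): Int = s.maxSurge + s.maxUnavailable

// Minimum number of nodes guaranteed available while the pool upgrades:
// only maxUnavailable existing nodes may be down at once.
fun minAvailable(poolSize: Int, s: UpgradeSettings): Int = poolSize - s.maxUnavailable

fun main() {
    // The worked example from the description: 5 nodes, maxSurge=2, maxUnavailable=1.
    val s = UpgradeSettings(maxSurge = 2, maxUnavailable = 1)
    println(parallelism(s))     // 3 nodes upgraded simultaneously
    println(minAvailable(5, s)) // at least 4 of the 5 nodes stay available
}
```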

Constructors

fun NodePoolArgs(autoscaling: Output<NodePoolAutoscalingArgs>? = null, conditions: Output<List<StatusConditionArgs>>? = null, config: Output<NodeConfigArgs>? = null, etag: Output<String>? = null, initialNodeCount: Output<Int>? = null, locations: Output<List<String>>? = null, management: Output<NodeManagementArgs>? = null, maxPodsConstraint: Output<MaxPodsConstraintArgs>? = null, name: Output<String>? = null, networkConfig: Output<NodeNetworkConfigArgs>? = null, placementPolicy: Output<PlacementPolicyArgs>? = null, upgradeSettings: Output<UpgradeSettingsArgs>? = null, version: Output<String>? = null)
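As a sketch, a minimal NodePoolArgs could be constructed with only a few of the optional parameters set, leaving the rest at their null defaults. This assumes the Pulumi Kotlin SDK's `Output.of` helper; the pool name and zones are illustrative:

```kotlin
import com.pulumi.core.Output

// Hypothetical construction sketch: name, initial size, and zones only.
val pool = NodePoolArgs(
    name = Output.of("example-pool"),      // illustrative name
    initialNodeCount = Output.of(3),       // requires sufficient Compute Engine quota
    locations = Output.of(listOf("us-central1-a", "us-central1-b")),
)
```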

Functions

open override fun toJava(): NodePoolArgs

Properties

val autoscaling: Output<NodePoolAutoscalingArgs>? = null

Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.

val conditions: Output<List<StatusConditionArgs>>? = null

Which conditions caused the current node pool state.

val config: Output<NodeConfigArgs>? = null

The node configuration of the pool.

val etag: Output<String>? = null

This checksum is computed by the server based on the value of node pool fields, and may be sent on update requests to ensure the client has an up-to-date value before proceeding.

val initialNodeCount: Output<Int>? = null

The initial node count for the pool. You must ensure that your Compute Engine resource quota is sufficient for this number of instances. You must also have available firewall and routes quota.

val locations: Output<List<String>>? = null

The list of Google Compute Engine zones in which the NodePool's nodes should be located. If this value is unspecified during node pool creation, the Cluster.Locations value will be used instead. Warning: changing node pool locations will result in nodes being added and/or removed.

val management: Output<NodeManagementArgs>? = null

NodeManagement configuration for this NodePool.

val maxPodsConstraint: Output<MaxPodsConstraintArgs>? = null

The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool.

val name: Output<String>? = null

The name of the node pool.

val networkConfig: Output<NodeNetworkConfigArgs>? = null

Networking configuration for this NodePool. If specified, it overrides the cluster-level defaults.

val placementPolicy: Output<PlacementPolicyArgs>? = null

Specifies the node placement policy.

val upgradeSettings: Output<UpgradeSettingsArgs>? = null

Upgrade settings control disruption and speed of the upgrade.

val version: Output<String>? = null

The version of Kubernetes running on this NodePool's nodes. If unspecified, it defaults as described in the GKE versioning documentation.