NodePoolArgs

data class NodePoolArgs(val autoscaling: Output<NodePoolAutoscalingArgs>? = null, val cluster: Output<String>? = null, val initialNodeCount: Output<Int>? = null, val location: Output<String>? = null, val management: Output<NodePoolManagementArgs>? = null, val maxPodsPerNode: Output<Int>? = null, val name: Output<String>? = null, val namePrefix: Output<String>? = null, val networkConfig: Output<NodePoolNetworkConfigArgs>? = null, val nodeConfig: Output<NodePoolNodeConfigArgs>? = null, val nodeCount: Output<Int>? = null, val nodeLocations: Output<List<String>>? = null, val placementPolicy: Output<NodePoolPlacementPolicyArgs>? = null, val project: Output<String>? = null, val upgradeSettings: Output<NodePoolUpgradeSettingsArgs>? = null, val version: Output<String>? = null) : ConvertibleToJava<NodePoolArgs>

Manages a node pool in a Google Kubernetes Engine (GKE) cluster separately from the cluster control plane. For more information, see the official documentation and the API reference.

Example Usage

Using A Separately Managed Node Pool (Recommended)

package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.serviceAccount.Account;
import com.pulumi.gcp.serviceAccount.AccountArgs;
import com.pulumi.gcp.container.Cluster;
import com.pulumi.gcp.container.ClusterArgs;
import com.pulumi.gcp.container.NodePool;
import com.pulumi.gcp.container.NodePoolArgs;
import com.pulumi.gcp.container.inputs.NodePoolNodeConfigArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var default_ = new Account("default", AccountArgs.builder()
            .accountId("service-account-id")
            .displayName("Service Account")
            .build());
        var primary = new Cluster("primary", ClusterArgs.builder()
            .location("us-central1")
            .removeDefaultNodePool(true)
            .initialNodeCount(1)
            .build());
        var primaryPreemptibleNodes = new NodePool("primaryPreemptibleNodes", NodePoolArgs.builder()
            .cluster(primary.id())
            .nodeCount(1)
            .nodeConfig(NodePoolNodeConfigArgs.builder()
                .preemptible(true)
                .machineType("e2-medium")
                .serviceAccount(default_.email())
                .oauthScopes("https://www.googleapis.com/auth/cloud-platform")
                .build())
            .build());
    }
}

2 Node Pools, 1 Separately Managed + The Default Node Pool

package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.serviceAccount.Account;
import com.pulumi.gcp.serviceAccount.AccountArgs;
import com.pulumi.gcp.container.Cluster;
import com.pulumi.gcp.container.ClusterArgs;
import com.pulumi.gcp.container.inputs.ClusterNodeConfigArgs;
import com.pulumi.gcp.container.inputs.ClusterNodeConfigGuestAcceleratorArgs;
import com.pulumi.gcp.container.NodePool;
import com.pulumi.gcp.container.NodePoolArgs;
import com.pulumi.gcp.container.inputs.NodePoolNodeConfigArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var default_ = new Account("default", AccountArgs.builder()
            .accountId("service-account-id")
            .displayName("Service Account")
            .build());
        var primary = new Cluster("primary", ClusterArgs.builder()
            .location("us-central1-a")
            .initialNodeCount(3)
            .nodeLocations("us-central1-c")
            .nodeConfig(ClusterNodeConfigArgs.builder()
                .serviceAccount(default_.email())
                .oauthScopes("https://www.googleapis.com/auth/cloud-platform")
                .guestAccelerators(ClusterNodeConfigGuestAcceleratorArgs.builder()
                    .type("nvidia-tesla-k80")
                    .count(1)
                    .build())
                .build())
            .build());
        var np = new NodePool("np", NodePoolArgs.builder()
            .cluster(primary.id())
            .nodeConfig(NodePoolNodeConfigArgs.builder()
                .machineType("e2-medium")
                .serviceAccount(default_.email())
                .oauthScopes("https://www.googleapis.com/auth/cloud-platform")
                .build())
            .build());
    }
}

Import

Node pools can be imported using the project, location, cluster, and name. If the project is omitted, the project value in the provider configuration will be used. Examples:

$ pulumi import gcp:container/nodePool:NodePool mainpool my-gcp-project/us-east1-a/my-cluster/main-pool
$ pulumi import gcp:container/nodePool:NodePool mainpool us-east1/my-cluster/main-pool

Constructors

constructor(autoscaling: Output<NodePoolAutoscalingArgs>? = null, cluster: Output<String>? = null, initialNodeCount: Output<Int>? = null, location: Output<String>? = null, management: Output<NodePoolManagementArgs>? = null, maxPodsPerNode: Output<Int>? = null, name: Output<String>? = null, namePrefix: Output<String>? = null, networkConfig: Output<NodePoolNetworkConfigArgs>? = null, nodeConfig: Output<NodePoolNodeConfigArgs>? = null, nodeCount: Output<Int>? = null, nodeLocations: Output<List<String>>? = null, placementPolicy: Output<NodePoolPlacementPolicyArgs>? = null, project: Output<String>? = null, upgradeSettings: Output<NodePoolUpgradeSettingsArgs>? = null, version: Output<String>? = null)

Properties

val autoscaling: Output<NodePoolAutoscalingArgs>? = null

Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
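The autoscaling structure is not reproduced on this page; as a rough sketch (assuming the Java SDK's com.pulumi.gcp.container.inputs.NodePoolAutoscalingArgs is imported and primary is the Cluster from the examples above), a pool that scales between one and five nodes per zone might look like:

var autoscaled = new NodePool("autoscaled", NodePoolArgs.builder()
    .cluster(primary.id())
    .autoscaling(NodePoolAutoscalingArgs.builder()
        .minNodeCount(1) // per-zone lower bound
        .maxNodeCount(5) // per-zone upper bound
        .build())
    .build());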

val cluster: Output<String>? = null

The cluster to create the node pool for. The cluster must be present in the provided location. May be specified in the format projects/{{project}}/locations/{{location}}/clusters/{{cluster}} or as just the name of the cluster.

val initialNodeCount: Output<Int>? = null

The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource. WARNING: Resizing your node pool manually may change this value in your existing cluster, which will trigger destruction and recreation on the next provider run (to rectify the discrepancy). If you don't need this value, don't set it. If you do need it, you can use the ignoreChanges resource option to ignore subsequent changes to this field.
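A minimal sketch of ignoring such changes, assuming com.pulumi.resources.CustomResourceOptions is imported and primary is the Cluster from the examples above:

var pool = new NodePool("pool", NodePoolArgs.builder()
        .cluster(primary.id())
        .initialNodeCount(3)
        .build(),
    CustomResourceOptions.builder()
        .ignoreChanges("initialNodeCount") // manual resizes won't trigger recreation
        .build());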

val location: Output<String>? = null

The location (region or zone) of the cluster.

val management: Output<NodePoolManagementArgs>? = null

Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
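As a sketch, enabling both inside a NodePoolArgs builder (assuming com.pulumi.gcp.container.inputs.NodePoolManagementArgs is imported):

.management(NodePoolManagementArgs.builder()
    .autoRepair(true)  // recreate unhealthy nodes automatically
    .autoUpgrade(true) // keep the node version in sync with the control plane
    .build())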

val maxPodsPerNode: Output<Int>? = null

The maximum number of pods per node in this node pool. Note that this does not work on node pools which are "route-based" - that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.

val name: Output<String>? = null

The name of the node pool. If left blank, the provider will auto-generate a unique name.

val namePrefix: Output<String>? = null

Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.

val networkConfig: Output<NodePoolNetworkConfigArgs>? = null

The network configuration of the pool, such as configuration for adding Pod IP address ranges to the node pool, or enabling private nodes. Structure is documented below.
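A sketch of such a fragment, assuming com.pulumi.gcp.container.inputs.NodePoolNetworkConfigArgs is imported and that these field names match your provider version (the range name and CIDR are hypothetical):

.networkConfig(NodePoolNetworkConfigArgs.builder()
    .createPodRange(true)             // create a dedicated secondary range for pods
    .podRange("my-pod-range")         // hypothetical name for that range
    .podIpv4CidrBlock("10.10.0.0/16") // hypothetical pod CIDR
    .enablePrivateNodes(true)         // nodes receive internal IPs only
    .build())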

val nodeConfig: Output<NodePoolNodeConfigArgs>? = null

Parameters used in creating the node pool. See gcp.container.Cluster for schema.

val nodeCount: Output<Int>? = null

The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling.

val nodeLocations: Output<List<String>>? = null

The list of zones in which the node pool's nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.

val placementPolicy: Output<NodePoolPlacementPolicyArgs>? = null

Specifies a custom placement policy for the nodes.
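A sketch of a compact placement fragment (assuming com.pulumi.gcp.container.inputs.NodePoolPlacementPolicyArgs is imported):

.placementPolicy(NodePoolPlacementPolicyArgs.builder()
    .type("COMPACT") // co-locate nodes to reduce intra-pool latency
    .build())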

val project: Output<String>? = null

The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.

val upgradeSettings: Output<NodePoolUpgradeSettingsArgs>? = null

Specify node upgrade settings to change how GKE upgrades nodes. The maximum number of nodes upgraded simultaneously is limited to 20. Structure is documented below.
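For example, a surge-upgrade fragment (assuming com.pulumi.gcp.container.inputs.NodePoolUpgradeSettingsArgs is imported):

.upgradeSettings(NodePoolUpgradeSettingsArgs.builder()
    .maxSurge(1)       // add at most one extra node during an upgrade
    .maxUnavailable(0) // keep all existing nodes serving until replaced
    .build())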

val version: Output<String>? = null

The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it's recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source's version_prefix field to approximate fuzzy versions in a provider-compatible way.
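A sketch of that pattern, assuming com.pulumi.gcp.container.ContainerFunctions and com.pulumi.gcp.container.inputs.GetEngineVersionsArgs are imported and primary is the Cluster from the examples above (the version prefix is hypothetical):

final var versions = ContainerFunctions.getEngineVersions(GetEngineVersionsArgs.builder()
    .location("us-central1")
    .versionPrefix("1.27.") // hypothetical prefix; pick one valid for your cluster
    .build());
var pinned = new NodePool("pinned", NodePoolArgs.builder()
    .cluster(primary.id())
    .version(versions.applyValue(v -> v.latestNodeVersion()))
    .build());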

Functions

open override fun toJava(): NodePoolArgs