Node Pool Args
Manages a node pool in a Google Kubernetes Engine (GKE) cluster separately from the cluster control plane. For more information see the official documentation and the API reference.
Example Usage
Using A Separately Managed Node Pool (Recommended)
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.serviceAccount.Account;
import com.pulumi.gcp.serviceAccount.AccountArgs;
import com.pulumi.gcp.container.Cluster;
import com.pulumi.gcp.container.ClusterArgs;
import com.pulumi.gcp.container.NodePool;
import com.pulumi.gcp.container.NodePoolArgs;
import com.pulumi.gcp.container.inputs.NodePoolNodeConfigArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var default_ = new Account("default", AccountArgs.builder()
            .accountId("service-account-id")
            .displayName("Service Account")
            .build());

        var primary = new Cluster("primary", ClusterArgs.builder()
            .location("us-central1")
            .removeDefaultNodePool(true)
            .initialNodeCount(1)
            .build());

        var primaryPreemptibleNodes = new NodePool("primaryPreemptibleNodes", NodePoolArgs.builder()
            .cluster(primary.id())
            .nodeCount(1)
            .nodeConfig(NodePoolNodeConfigArgs.builder()
                .preemptible(true)
                .machineType("e2-medium")
                .serviceAccount(default_.email())
                .oauthScopes("https://www.googleapis.com/auth/cloud-platform")
                .build())
            .build());
    }
}
2 Node Pools, 1 Separately Managed + The Default Node Pool
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.serviceAccount.Account;
import com.pulumi.gcp.serviceAccount.AccountArgs;
import com.pulumi.gcp.container.Cluster;
import com.pulumi.gcp.container.ClusterArgs;
import com.pulumi.gcp.container.inputs.ClusterNodeConfigArgs;
import com.pulumi.gcp.container.inputs.ClusterNodeConfigGuestAcceleratorArgs;
import com.pulumi.gcp.container.NodePool;
import com.pulumi.gcp.container.NodePoolArgs;
import com.pulumi.gcp.container.inputs.NodePoolNodeConfigArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var default_ = new Account("default", AccountArgs.builder()
            .accountId("service-account-id")
            .displayName("Service Account")
            .build());

        var primary = new Cluster("primary", ClusterArgs.builder()
            .location("us-central1-a")
            .initialNodeCount(3)
            .nodeLocations("us-central1-c")
            .nodeConfig(ClusterNodeConfigArgs.builder()
                .serviceAccount(default_.email())
                .oauthScopes("https://www.googleapis.com/auth/cloud-platform")
                .guestAccelerators(ClusterNodeConfigGuestAcceleratorArgs.builder()
                    .type("nvidia-tesla-k80")
                    .count(1)
                    .build())
                .build())
            .build());

        var np = new NodePool("np", NodePoolArgs.builder()
            .cluster(primary.id())
            .nodeConfig(NodePoolNodeConfigArgs.builder()
                .machineType("e2-medium")
                .serviceAccount(default_.email())
                .oauthScopes("https://www.googleapis.com/auth/cloud-platform")
                .build())
            .build());
    }
}
Import
Node pools can be imported using the project, location, cluster and name. If the project is omitted, the project value in the provider configuration will be used.
Examples
$ pulumi import gcp:container/nodePool:NodePool mainpool my-gcp-project/us-east1-a/my-cluster/main-pool
$ pulumi import gcp:container/nodePool:NodePool mainpool us-east1/my-cluster/main-pool
Constructors
Properties
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
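For illustration, a minimal sketch of the autoscaling block, assuming the NodePoolAutoscalingArgs input type and a cluster resource named primary as in the examples above:
// requires: import com.pulumi.gcp.container.inputs.NodePoolAutoscalingArgs;
var autoscaled = new NodePool("autoscaled", NodePoolArgs.builder()
    .cluster(primary.id())
    .autoscaling(NodePoolAutoscalingArgs.builder()
        .minNodeCount(1)    // lower bound of nodes per zone
        .maxNodeCount(5)    // upper bound of nodes per zone
        .build())
    .build());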
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource. WARNING: Resizing your node pool manually may change this value in your existing cluster, which will trigger destruction and recreation on the next provider run (to rectify the discrepancy). If you don't need this value, don't set it. If you do need it, you can use the ignoreChanges resource option to ignore subsequent changes to this field.
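A hedged sketch of that pattern, assuming Pulumi's CustomResourceOptions.ignoreChanges resource option (the pool name and node count here are illustrative):
// requires: import com.pulumi.resources.CustomResourceOptions;
var sized = new NodePool("sized", NodePoolArgs.builder()
    .cluster(primary.id())
    .initialNodeCount(3)
    .build(),
    CustomResourceOptions.builder()
        .ignoreChanges("initialNodeCount")    // don't recreate the pool when the live size drifts
        .build());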
Node management configuration, wherein auto-repair and auto-upgrade is configured. Structure is documented below.
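A minimal sketch of the management block, assuming the NodePoolManagementArgs input type, placed inside NodePoolArgs.builder():
// requires: import com.pulumi.gcp.container.inputs.NodePoolManagementArgs;
.management(NodePoolManagementArgs.builder()
    .autoRepair(true)     // let GKE repair unhealthy nodes
    .autoUpgrade(true)    // let GKE upgrade node versions automatically
    .build())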
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are "route-based" - that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
The network configuration of the pool, such as configuration for adding Pod IP address ranges to the node pool or enabling private nodes. Structure is documented below.
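As a sketch, assuming the NodePoolNetworkConfigArgs input type and an existing secondary range (the range name is illustrative), placed inside NodePoolArgs.builder():
// requires: import com.pulumi.gcp.container.inputs.NodePoolNetworkConfigArgs;
.networkConfig(NodePoolNetworkConfigArgs.builder()
    .createPodRange(false)
    .podRange("my-pod-secondary-range")   // name of an existing secondary range for Pod IPs
    .enablePrivateNodes(true)             // nodes receive internal IP addresses only
    .build())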
Parameters used in creating the node pool. See gcp.container.Cluster for schema.
The list of zones in which the node pool's nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster's zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
Specify node upgrade settings to change how GKE upgrades nodes. The maximum number of nodes upgraded simultaneously is limited to 20. Structure is documented below.
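A minimal sketch of a surge upgrade configuration, assuming the NodePoolUpgradeSettingsArgs input type, placed inside NodePoolArgs.builder():
// requires: import com.pulumi.gcp.container.inputs.NodePoolUpgradeSettingsArgs;
.upgradeSettings(NodePoolUpgradeSettingsArgs.builder()
    .maxSurge(1)          // extra nodes that may be created during an upgrade
    .maxUnavailable(0)    // nodes that may be unavailable during an upgrade
    .build())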
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it's recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source's version_prefix field to approximate fuzzy versions in a provider-compatible way.
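As a sketch of that approach, assuming the ContainerFunctions.getEngineVersions invoke and a cluster named primary as in the examples above (the version prefix is illustrative):
// requires: import com.pulumi.gcp.container.ContainerFunctions;
// requires: import com.pulumi.gcp.container.inputs.GetEngineVersionsArgs;
final var versions = ContainerFunctions.getEngineVersions(GetEngineVersionsArgs.builder()
    .location("us-central1")
    .versionPrefix("1.27.")    // pin the fuzzy prefix through the data source
    .build());

var pinned = new NodePool("pinned", NodePoolArgs.builder()
    .cluster(primary.id())
    .version(versions.applyValue(v -> v.latestNodeVersion()))
    .build());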