WorkerPoolResponse

data class WorkerPoolResponse(val autoscalingSettings: AutoscalingSettingsResponse, val dataDisks: List<DiskResponse>, val defaultPackageSet: String, val diskSizeGb: Int, val diskSourceImage: String, val diskType: String, val ipConfiguration: String, val kind: String, val machineType: String, val metadata: Map<String, String>, val network: String, val numThreadsPerWorker: Int, val numWorkers: Int, val onHostMaintenance: String, val packages: List<PackageResponse>, val poolArgs: Map<String, String>, val sdkHarnessContainerImages: List<SdkHarnessContainerImageResponse>, val subnetwork: String, val taskrunnerSettings: TaskRunnerSettingsResponse, val teardownPolicy: String, val workerHarnessContainerImage: String, val zone: String)

Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a workflow job may use multiple pools, in order to match the various computational requirements of the various stages of the job.

Constructors

fun WorkerPoolResponse(autoscalingSettings: AutoscalingSettingsResponse, dataDisks: List<DiskResponse>, defaultPackageSet: String, diskSizeGb: Int, diskSourceImage: String, diskType: String, ipConfiguration: String, kind: String, machineType: String, metadata: Map<String, String>, network: String, numThreadsPerWorker: Int, numWorkers: Int, onHostMaintenance: String, packages: List<PackageResponse>, poolArgs: Map<String, String>, sdkHarnessContainerImages: List<SdkHarnessContainerImageResponse>, subnetwork: String, taskrunnerSettings: TaskRunnerSettingsResponse, teardownPolicy: String, workerHarnessContainerImage: String, zone: String)

Types

object Companion

Properties

val autoscalingSettings: AutoscalingSettingsResponse

Settings for autoscaling of this WorkerPool.

val dataDisks: List<DiskResponse>

Data disks that are used by a VM in this workflow.

val defaultPackageSet: String

The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.

val diskSizeGb: Int

Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

val diskSourceImage: String

Fully qualified source image for disks.

val diskType: String

Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.

val ipConfiguration: String

Configuration for VM IPs.

val kind: String

The kind of the worker pool; currently only harness and shuffle are supported.

val machineType: String

Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.

val metadata: Map<String, String>

Metadata to set on the Google Compute Engine VMs.

val network: String

Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".

val numThreadsPerWorker: Int

The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).

val numWorkers: Int

Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.

val onHostMaintenance: String

The action to take on host maintenance, as defined by the Google Compute Engine API.

val packages: List<PackageResponse>

Packages to be installed on workers.

val poolArgs: Map<String, String>

Extra arguments for this worker pool.

val sdkHarnessContainerImages: List<SdkHarnessContainerImageResponse>

Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.

val subnetwork: String

Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".

val taskrunnerSettings: TaskRunnerSettingsResponse

Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.

val teardownPolicy: String

Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down only if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
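As a hedged sketch of the semantics above (the helper function is hypothetical, not part of this API), the three allowed teardownPolicy values can be mapped to their documented behavior:

```kotlin
// Hypothetical helper: maps the documented teardownPolicy string values
// to a human-readable summary of their behavior.
fun describeTeardown(policy: String): String = when (policy) {
    "TEARDOWN_ALWAYS" -> "workers torn down regardless of job outcome"
    "TEARDOWN_ON_SUCCESS" -> "workers torn down only if the job succeeds"
    "TEARDOWN_NEVER" -> "workers left running until terminated by the user"
    // Unknown or unspecified: the service chooses a reasonable default.
    else -> "service chooses a reasonable default"
}
```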

val workerHarnessContainerImage: String

Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path; use sdkHarnessContainerImages instead.

val zone: String

Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
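As an illustrative sketch (the function name is hypothetical, and the pool instance is assumed to come from a Dataflow job lookup), the resolved settings can be read directly from the properties documented above:

```kotlin
// Hypothetical helper: summarize a worker pool description.
// `pool` is a WorkerPoolResponse as documented above; fields left to the
// service's defaults (zero or empty) are skipped.
fun summarize(pool: WorkerPoolResponse): String = buildString {
    append("${pool.numWorkers} x ${pool.machineType} in ${pool.zone}")
    // diskSizeGb of zero means the service chose the default root disk size.
    if (pool.diskSizeGb > 0) append(", ${pool.diskSizeGb} GB root disk")
    if (pool.subnetwork.isNotEmpty()) append(", subnet ${pool.subnetwork}")
}
```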