Package-level declarations
Types
Specifies the type and number of accelerator cards attached to the instances of an instance group (see GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/)).
Autoscaling Policy config associated with the cluster.
Basic algorithm for autoscaling.
Associates members with a role.
The cluster config.
A selector that chooses a target cluster for jobs based on metadata.
The status of a cluster and its instances.
Specifies the config of disk options for a group of VM instances.
Encryption settings for the cluster.
Endpoint config for this cluster.
Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.
Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100"
Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email"
Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'"
Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)"
The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster.
The GKE config for this cluster.
A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html).
A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN.
Configuration for the size bounds of an instance group, including its proportional size to other groups.
The config settings for Compute Engine resources in an instance group, such as a master or worker group.
A reference to a Compute Engine instance.
Encapsulates the full scoping used to reference a job.
Job scheduling options.
Dataproc job status.
Specifies Kerberos related configuration.
Specifies the cluster auto-delete schedule configuration.
The runtime logging config of the job.
Cluster that is managed by the workflow.
Specifies the resources used to actively manage an instance group.
Specifies a Metastore configuration.
A full, namespace-isolated deployment target for an existing GKE cluster.
Node Group Affinity for clusters using sole-tenant node groups.
Specifies an executable to run on a fully configured node and a timeout period for executable completion.
A job executed by the workflow.
Configuration for parameter validation.
A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN.
A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster.
A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN.
A list of queries to run on a cluster.
Validation based on regular expressions.
Reservation Affinity for consuming zonal reservations.
Security-related configuration, including encryption and Kerberos settings.
Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. To specify the main method that drives the job, provide either the jar file that contains the main class or the main class name. To pass both a main jar and a main class in that jar, add the jar to CommonJob.jar_file_uris and specify the main class name in main_class.
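The two mutually exclusive ways of naming a Spark job's entry point can be sketched as REST-style request bodies. This is a minimal illustration, assuming the JSON field names of the Dataproc REST API (mainJarFileUri, mainClass, jarFileUris); the bucket URIs and class name are hypothetical.

```python
# Option 1: point at the jar whose manifest declares the main class.
spark_job_with_main_jar = {
    "sparkJob": {
        "mainJarFileUri": "gs://my-bucket/jars/spark-app.jar",  # hypothetical URI
        "args": ["--input", "gs://my-bucket/input/"],
    }
}

# Option 2: name the main class explicitly, and ship the jar that
# contains it (plus any dependencies) via jarFileUris.
spark_job_with_main_class = {
    "sparkJob": {
        "mainClass": "com.example.SparkApp",  # hypothetical class name
        "jarFileUris": ["gs://my-bucket/jars/spark-app.jar"],
        "args": ["--input", "gs://my-bucket/input/"],
    }
}

# Exactly one of mainJarFileUri / mainClass should be set per job.
assert "mainClass" not in spark_job_with_main_jar["sparkJob"]
assert "mainJarFileUri" not in spark_job_with_main_class["sparkJob"]
```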
A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN.
A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.
A configurable parameter that replaces one or more fields in the template. Parameterizable fields: Labels, File uris, Job properties, Job arguments, Script variables, Main class (in HadoopJob and SparkJob), Zone (in ClusterSelector).
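A template parameter pairs a name with the field paths it substitutes, plus optional validation. The sketch below, expressed as a REST-style body, assumes the JSON shape of the Dataproc WorkflowTemplates API (name, description, fields, validation); the step id "teragen" and the regex are illustrative, not from the source.

```python
# Hypothetical parameter that substitutes the main class of a Hadoop
# step whose step id is "teragen".
template_parameter = {
    "name": "MAIN_CLASS",
    "description": "Main class for the teragen Hadoop step.",
    "fields": [
        # Field paths address template fields by their JSON names.
        "jobs['teragen'].hadoopJob.mainClass",
    ],
    "validation": {
        # RegexValidation: the supplied value must match one of these.
        "regex": {"regexes": [r"^[A-Za-z][A-Za-z0-9_.]*$"]}
    },
}

import re
# A caller-supplied value would be checked against the regexes.
value = "com.example.TeraGen"
assert any(re.match(p, value) for p in
           template_parameter["validation"]["regex"]["regexes"])
```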
Validation based on a list of allowed values.
Specifies the workflow execution target. Either managed_cluster or cluster_selector is required.
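The two alternative placement shapes can be sketched as REST-style bodies; exactly one of the two keys is set. This assumes the JSON field names of the Dataproc API (managedCluster, clusterSelector, clusterLabels); the cluster name and label are hypothetical.

```python
# Alternative 1: the workflow creates (and later deletes) its own cluster.
placement_managed = {
    "managedCluster": {
        "clusterName": "ephemeral-cluster",  # hypothetical name
        "config": {},  # ClusterConfig for the cluster to create
    }
}

# Alternative 2: jobs run on an existing cluster selected by labels.
placement_selector = {
    "clusterSelector": {
        "clusterLabels": {"env": "staging"},  # hypothetical label
    }
}

# The two fields are mutually exclusive.
assert not ("managedCluster" in placement_selector
            or "clusterSelector" in placement_managed)
```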
A YARN application created by a job. Application information is a subset of org.apache.hadoop.yarn.proto.YarnProtos.ApplicationReportProto. Beta Feature: This report is available for testing purposes only. It may be changed before final release.