Package-level declarations
Types
AutoscalingSettingsArgs: Settings for WorkerPool autoscaling.
AutoscalingSettingsArgsBuilder: Builder for AutoscalingSettingsArgs.
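For orientation, here is a minimal sketch of the shape of these settings, using a plain Kotlin data class as a stand-in for the generated AutoscalingSettingsArgs (an assumption for illustration: the real generated class wraps each field in a Pulumi Output and is normally assembled through its builder; the field names follow the underlying Dataflow v1b3 AutoscalingSettings message):

```kotlin
// Simplified stand-in for AutoscalingSettingsArgs; field names mirror the
// Dataflow v1b3 AutoscalingSettings message. The generated class wraps each
// field in a Pulumi Output and is normally assembled via its builder.
data class AutoscalingSettingsSketch(
    val algorithm: String? = null,   // e.g. "AUTOSCALING_ALGORITHM_BASIC"
    val maxNumWorkers: Int? = null,  // cap on how far the service may scale the pool
)

val autoscaling = AutoscalingSettingsSketch(
    algorithm = "AUTOSCALING_ALGORITHM_BASIC",
    maxNumWorkers = 10,
)
```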
BigQueryIODetailsArgs: Metadata for a BigQuery connector used by the job.
BigQueryIODetailsArgsBuilder: Builder for BigQueryIODetailsArgs.
BigTableIODetailsArgs: Metadata for a Cloud Bigtable connector used by the job.
BigTableIODetailsArgsBuilder: Builder for BigTableIODetailsArgs.
ComponentSourceArgs: Description of an interstitial value between transforms in an execution stage.
ComponentSourceArgsBuilder: Builder for ComponentSourceArgs.
ComponentTransformArgs: Description of a transform executed as part of an execution stage.
ComponentTransformArgsBuilder: Builder for ComponentTransformArgs.
DatastoreIODetailsArgs: Metadata for a Datastore connector used by the job.
DatastoreIODetailsArgsBuilder: Builder for DatastoreIODetailsArgs.
DebugOptionsArgs: Describes any options that have an effect on the debugging of pipelines.
DebugOptionsArgsBuilder: Builder for DebugOptionsArgs.
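A small sketch of what these debug options look like, again as a simplified local stand-in rather than the generated class (the enableHotKeyLogging field comes from the Dataflow v1b3 DebugOptions message; anything else about the generated wrapper is an assumption):

```kotlin
// Simplified stand-in for DebugOptionsArgs; mirrors the Dataflow v1b3
// DebugOptions message.
data class DebugOptionsSketch(
    val enableHotKeyLogging: Boolean? = null, // when true, hot keys detected in the
                                              // pipeline may be logged for debugging
)

val debugOptions = DebugOptionsSketch(enableHotKeyLogging = true)
```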
DiskArgsBuilder: Builder for DiskArgs.
DisplayDataArgs: Data provided with a pipeline or transform to provide descriptive info.
DisplayDataArgsBuilder: Builder for DisplayDataArgs.
EnvironmentArgs: Describes the environment in which a Dataflow Job runs.
EnvironmentArgsBuilder: Builder for EnvironmentArgs.
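The Environment message is large; the sketch below shows only a few common fields, using a simplified stand-in for the generated EnvironmentArgs (field names follow the Dataflow v1b3 Environment message; the bucket and service account values are hypothetical):

```kotlin
// Simplified stand-in for EnvironmentArgs, showing a few common fields of the
// Dataflow v1b3 Environment message (the full message has many more).
data class EnvironmentSketch(
    val tempStoragePrefix: String? = null,   // Cloud Storage prefix for temp files
    val serviceAccountEmail: String? = null, // identity the workers run as
    val experiments: List<String>? = null,   // opt-in service features
)

val environment = EnvironmentSketch(
    tempStoragePrefix = "gs://my-bucket/temp", // hypothetical bucket
    serviceAccountEmail = "dataflow-runner@my-project.iam.gserviceaccount.com", // hypothetical
    experiments = listOf("enable_streaming_engine"),
)
```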
ExecutionStageStateArgs: A message describing the state of a particular execution stage.
ExecutionStageStateArgsBuilder: Builder for ExecutionStageStateArgs.
ExecutionStageSummaryArgs: Description of the composing transforms, names/ids, and input/outputs of a stage of execution. Some composing transforms and sources may have been generated by the Dataflow service during execution planning.
ExecutionStageSummaryArgsBuilder: Builder for ExecutionStageSummaryArgs.
FileIODetailsArgs: Metadata for a File connector used by the job.
FileIODetailsArgsBuilder: Builder for FileIODetailsArgs.
GetJobPlainArgsBuilder: Builder for GetJobPlainArgs.
GetTemplatePlainArgsBuilder: Builder for GetTemplatePlainArgs.
JobExecutionInfoArgs: Additional information about how a Cloud Dataflow job will be executed that isn't contained in the submitted job.
JobExecutionInfoArgsBuilder: Builder for JobExecutionInfoArgs.
JobMetadataArgs: Metadata available primarily for filtering jobs; it is included in the ListJob response and in the Job SUMMARY view.
JobMetadataArgsBuilder: Builder for JobMetadataArgs.
PackageArgs: The packages that must be installed in order for a worker to run the steps of the Cloud Dataflow job that will be assigned to its worker pool. This is the mechanism by which the Cloud Dataflow SDK causes code to be loaded onto the workers. For example, the Cloud Dataflow Java SDK might use this to install jars containing the user's code and all of the various dependencies (libraries, data files, etc.) required in order for that code to run.
PackageArgsBuilder: Builder for PackageArgs.
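To make the staging mechanism concrete, here is a sketch of a package list using a simplified stand-in for the generated PackageArgs (field names follow the Dataflow v1b3 Package message; the file names and bucket are hypothetical):

```kotlin
// Simplified stand-in for PackageArgs; each entry names a staged artifact and
// the Cloud Storage location workers download it from.
data class PackageSketch(
    val name: String? = null,     // file name of the staged package
    val location: String? = null, // gs:// path the workers fetch it from
)

val stagedPackages = listOf(
    PackageSketch(name = "pipeline.jar", location = "gs://my-bucket/staging/pipeline.jar"),
    PackageSketch(name = "deps.jar", location = "gs://my-bucket/staging/deps.jar"),
)
```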
PipelineDescriptionArgs: A descriptive representation of a submitted pipeline as well as its executed form. This data is provided by the Dataflow service to ease visualizing the pipeline and interpreting Dataflow-provided metrics.
PipelineDescriptionArgsBuilder: Builder for PipelineDescriptionArgs.
PubSubIODetailsArgs: Metadata for a Pub/Sub connector used by the job.
PubSubIODetailsArgsBuilder: Builder for PubSubIODetailsArgs.
RuntimeEnvironmentArgs: The environment values to set at runtime.
RuntimeEnvironmentArgsBuilder: Builder for RuntimeEnvironmentArgs.
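A sketch of a runtime environment, as a simplified stand-in for the generated RuntimeEnvironmentArgs (field names follow the Dataflow v1b3 RuntimeEnvironment message used when launching templates; the bucket is hypothetical):

```kotlin
// Simplified stand-in for RuntimeEnvironmentArgs, covering a few fields of the
// Dataflow v1b3 RuntimeEnvironment message.
data class RuntimeEnvironmentSketch(
    val tempLocation: String? = null, // gs:// path for temporary files
    val zone: String? = null,         // zone the workers run in
    val maxWorkers: Int? = null,      // autoscaling upper bound
)

val runtimeEnv = RuntimeEnvironmentSketch(
    tempLocation = "gs://my-bucket/temp", // hypothetical bucket
    zone = "us-central1-f",
    maxWorkers = 5,
)
```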
RuntimeUpdatableParamsArgs: Additional job parameters that can only be updated during runtime using the projects.jobs.update method. These fields have no effect when specified during job creation.
RuntimeUpdatableParamsArgsBuilder: Builder for RuntimeUpdatableParamsArgs.
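A sketch of these update-only parameters, as a simplified stand-in for the generated RuntimeUpdatableParamsArgs (the worker-count bounds are taken from the Dataflow v1b3 RuntimeUpdatableParams message; treat the exact field set as an assumption to check against the generated API):

```kotlin
// Simplified stand-in for RuntimeUpdatableParamsArgs; these bounds can be
// adjusted on a running job via projects.jobs.update and are ignored at
// job creation time.
data class RuntimeUpdatableParamsSketch(
    val minNumWorkers: Int? = null, // autoscaling floor while the job runs
    val maxNumWorkers: Int? = null, // autoscaling ceiling while the job runs
)

val updatableParams = RuntimeUpdatableParamsSketch(minNumWorkers = 2, maxNumWorkers = 20)
```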
SdkHarnessContainerImageArgs: Defines an SDK harness container for executing Dataflow pipelines.
SdkHarnessContainerImageArgsBuilder: Builder for SdkHarnessContainerImageArgs.
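A sketch of a harness image definition, using a simplified stand-in for the generated SdkHarnessContainerImageArgs (field names follow the Dataflow v1b3 SdkHarnessContainerImage message; the image URL is hypothetical):

```kotlin
// Simplified stand-in for SdkHarnessContainerImageArgs; mirrors the Dataflow
// v1b3 SdkHarnessContainerImage message.
data class SdkHarnessContainerImageSketch(
    val containerImage: String? = null,             // container image URL for the harness
    val useSingleCorePerContainer: Boolean? = null, // restrict each container to one core?
    val environmentId: String? = null,              // id of the environment this image serves
)

val harnessImage = SdkHarnessContainerImageSketch(
    containerImage = "gcr.io/my-project/beam-java-harness:latest", // hypothetical image
    useSingleCorePerContainer = false,
)
```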
SdkVersionArgs: The version of the SDK used to run the job.
SdkVersionArgsBuilder: Builder for SdkVersionArgs.
SpannerIODetailsArgs: Metadata for a Spanner connector used by the job.
SpannerIODetailsArgsBuilder: Builder for SpannerIODetailsArgs.
StageSourceArgs: Description of an input or output of an execution stage.
StageSourceArgsBuilder: Builder for StageSourceArgs.
StepArgs: Defines a particular step within a Cloud Dataflow job. A job consists of multiple steps, each of which performs some specific operation as part of the overall job. Data is typically passed from one step to another as part of the job. Note: The properties of this object are not stable and might change. Here's an example of a sequence of steps which together implement a Map-Reduce job:
* Read a collection of data from some source, parsing the collection's elements.
* Validate the elements.
* Apply a user-defined function to map each element to some value and extract an element-specific key value.
* Group elements with the same key into a single element with that key, transforming a multiply-keyed collection into a uniquely-keyed collection.
* Write the elements out to some data sink.
Note that the Cloud Dataflow service may be used to run many different types of jobs, not just Map-Reduce.
StepArgsBuilder: Builder for StepArgs.
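Structurally, a step is just a kind, a job-unique name, and a free-form properties map. The sketch below uses a simplified stand-in for the generated StepArgs (the kind string and properties content are illustrative, not a stable schema, per the note above):

```kotlin
// Simplified stand-in for StepArgs; mirrors the Dataflow v1b3 Step message.
// The service documents that step properties are not stable and may change.
data class StepSketch(
    val kind: String? = null,                 // the type of operation this step performs
    val name: String? = null,                 // name unique within the job
    val properties: Map<String, Any>? = null, // kind-specific configuration
)

val readStep = StepSketch(
    kind = "ParallelRead", // illustrative kind
    name = "read-input",
    properties = mapOf("source" to "gs://my-bucket/input/*.txt"), // hypothetical source
)
```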
TaskRunnerSettingsArgs: Taskrunner configuration settings.
TaskRunnerSettingsArgsBuilder: Builder for TaskRunnerSettingsArgs.
TransformSummaryArgs: Description of the type, names/ids, and input/outputs for a transform.
TransformSummaryArgsBuilder: Builder for TransformSummaryArgs.
WorkerPoolArgs: Describes one particular pool of Cloud Dataflow workers to be instantiated by the Cloud Dataflow service in order to perform the computations required by a job. Note that a job may use multiple pools to match the differing computational requirements of its various stages.
WorkerPoolArgsBuilder: Builder for WorkerPoolArgs.
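A sketch of a worker pool, combining a few common fields of the Dataflow v1b3 WorkerPool message with the AutoscalingSettingsSketch defined in the autoscaling example above (again a simplified stand-in for the generated WorkerPoolArgs; the machine type and sizes are illustrative):

```kotlin
// Simplified stand-in for WorkerPoolArgs; reuses AutoscalingSettingsSketch
// from the autoscaling example earlier on this page.
data class WorkerPoolSketch(
    val kind: String? = null,        // "harness" for pools running user code
    val machineType: String? = null, // Compute Engine machine type
    val numWorkers: Int? = null,     // initial pool size
    val diskSizeGb: Int? = null,     // per-worker disk size
    val autoscalingSettings: AutoscalingSettingsSketch? = null,
)

val workerPool = WorkerPoolSketch(
    kind = "harness",
    machineType = "n1-standard-4",
    numWorkers = 3,
    diskSizeGb = 50,
    autoscalingSettings = AutoscalingSettingsSketch(
        algorithm = "AUTOSCALING_ALGORITHM_BASIC",
        maxNumWorkers = 10,
    ),
)
```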
WorkerSettingsArgs: Provides data to pass through to the worker harness.
WorkerSettingsArgsBuilder: Builder for WorkerSettingsArgs.