DataQualityJobDefinitionDataQualityJobInputBatchTransformInputArgs

data class DataQualityJobDefinitionDataQualityJobInputBatchTransformInputArgs(val dataCapturedDestinationS3Uri: Output<String>, val datasetFormat: Output<DataQualityJobDefinitionDataQualityJobInputBatchTransformInputDatasetFormatArgs>, val localPath: Output<String>? = null, val s3DataDistributionType: Output<String>? = null, val s3InputMode: Output<String>? = null) : ConvertibleToJava<DataQualityJobDefinitionDataQualityJobInputBatchTransformInputArgs>

Constructors

constructor(dataCapturedDestinationS3Uri: Output<String>, datasetFormat: Output<DataQualityJobDefinitionDataQualityJobInputBatchTransformInputDatasetFormatArgs>, localPath: Output<String>? = null, s3DataDistributionType: Output<String>? = null, s3InputMode: Output<String>? = null)
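For orientation, here is a minimal sketch of constructing these args directly. Output.of(...) is the wrapping helper from com.pulumi.core; the S3 URI and other values are illustrative placeholders, and myDatasetFormat stands in for a DataQualityJobDefinitionDataQualityJobInputBatchTransformInputDatasetFormatArgs instance built per that class's own documentation.

import com.pulumi.core.Output

// Minimal sketch: all values below are illustrative placeholders.
val batchTransformInput = DataQualityJobDefinitionDataQualityJobInputBatchTransformInputArgs(
    dataCapturedDestinationS3Uri = Output.of("s3://example-bucket/data-capture"),
    datasetFormat = Output.of(myDatasetFormat),            // hypothetical pre-built dataset format args
    localPath = Output.of("/opt/ml/processing/input"),     // matches the documented default
    s3DataDistributionType = Output.of("FullyReplicated"), // or "ShardedByS3Key"
    s3InputMode = Output.of("File"),                       // or "Pipe"
)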

Properties

val dataCapturedDestinationS3Uri: Output&lt;String&gt;

The Amazon S3 location being used to capture the data.

val datasetFormat: Output&lt;DataQualityJobDefinitionDataQualityJobInputBatchTransformInputDatasetFormatArgs&gt;

The dataset format for your batch transform job. Fields are documented below.

val localPath: Output<String>? = null

Path to the filesystem where the batch transform data is available to the container. Defaults to /opt/ml/processing/input.

val s3DataDistributionType: Output<String>? = null

Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated. Valid values are FullyReplicated or ShardedByS3Key.

val s3InputMode: Output<String>? = null

Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets; File mode is useful for small files that fit in memory. Defaults to File. Valid values are Pipe or File.

Functions

open override fun toJava(): DataQualityJobDefinitionDataQualityJobInputBatchTransformInputArgs