Data Quality Job Definition Batch Transform Input Args
data class DataQualityJobDefinitionBatchTransformInputArgs(val dataCapturedDestinationS3Uri: Output<String>, val datasetFormat: Output<DataQualityJobDefinitionDatasetFormatArgs>, val excludeFeaturesAttribute: Output<String>? = null, val localPath: Output<String>, val s3DataDistributionType: Output<DataQualityJobDefinitionBatchTransformInputS3DataDistributionType>? = null, val s3InputMode: Output<DataQualityJobDefinitionBatchTransformInputS3InputMode>? = null) : ConvertibleToJava<DataQualityJobDefinitionBatchTransformInputArgs>
The batch transform input for a monitoring job.
Constructors
constructor(dataCapturedDestinationS3Uri: Output<String>, datasetFormat: Output<DataQualityJobDefinitionDatasetFormatArgs>, excludeFeaturesAttribute: Output<String>? = null, localPath: Output<String>, s3DataDistributionType: Output<DataQualityJobDefinitionBatchTransformInputS3DataDistributionType>? = null, s3InputMode: Output<DataQualityJobDefinitionBatchTransformInputS3InputMode>? = null)
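For illustration, a minimal construction of these args might look like the sketch below. The import paths, the nested DataQualityJobDefinitionCsvArgs type, and the S3 URI and local path values are assumptions made for the example, not taken from this page.

import com.pulumi.core.Output
import com.pulumi.awsnative.sagemaker.kotlin.inputs.DataQualityJobDefinitionBatchTransformInputArgs
import com.pulumi.awsnative.sagemaker.kotlin.inputs.DataQualityJobDefinitionCsvArgs
import com.pulumi.awsnative.sagemaker.kotlin.inputs.DataQualityJobDefinitionDatasetFormatArgs

// Required fields only; the optional distribution type and input mode fall back
// to their documented defaults (FullyReplicated / File).
val batchTransformInput = DataQualityJobDefinitionBatchTransformInputArgs(
    dataCapturedDestinationS3Uri = Output.of("s3://example-bucket/data-capture"), // placeholder URI
    datasetFormat = Output.of(
        DataQualityJobDefinitionDatasetFormatArgs(
            // Assumed nested CSV format args; adjust to the dataset actually used.
            csv = Output.of(DataQualityJobDefinitionCsvArgs(header = Output.of(true)))
        )
    ),
    localPath = Output.of("/opt/ml/processing/input") // placeholder container path
)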
Properties
val dataCapturedDestinationS3Uri: Output<String>
A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
val datasetFormat: Output<DataQualityJobDefinitionDatasetFormatArgs>
The dataset format for your batch transform job.
val excludeFeaturesAttribute: Output<String>? = null
Indexes or names of the features to be excluded from analysis.
val s3DataDistributionType: Output<DataQualityJobDefinitionBatchTransformInputS3DataDistributionType>? = null
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
val s3InputMode: Output<DataQualityJobDefinitionBatchTransformInputS3InputMode>? = null
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
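When streaming a large, sharded dataset, both optional fields might be set explicitly rather than relying on the defaults. A brief sketch, assuming the enums live in the generated enums package and define constants named ShardedByS3Key and Pipe:

import com.pulumi.core.Output
import com.pulumi.awsnative.sagemaker.kotlin.enums.DataQualityJobDefinitionBatchTransformInputS3DataDistributionType
import com.pulumi.awsnative.sagemaker.kotlin.enums.DataQualityJobDefinitionBatchTransformInputS3InputMode

// Override the documented defaults (FullyReplicated / File); these values can then
// be passed as s3DataDistributionType and s3InputMode in the constructor shown above.
val distributionType = Output.of(DataQualityJobDefinitionBatchTransformInputS3DataDistributionType.ShardedByS3Key)
val inputMode = Output.of(DataQualityJobDefinitionBatchTransformInputS3InputMode.Pipe)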