DataQualityJobDefinitionBatchTransformInput

data class DataQualityJobDefinitionBatchTransformInput(val dataCapturedDestinationS3Uri: String, val datasetFormat: DataQualityJobDefinitionDatasetFormat, val excludeFeaturesAttribute: String? = null, val localPath: String, val s3DataDistributionType: DataQualityJobDefinitionBatchTransformInputS3DataDistributionType? = null, val s3InputMode: DataQualityJobDefinitionBatchTransformInputS3InputMode? = null)

The batch transform input for a monitoring job.

Constructors

constructor(dataCapturedDestinationS3Uri: String, datasetFormat: DataQualityJobDefinitionDatasetFormat, excludeFeaturesAttribute: String? = null, localPath: String, s3DataDistributionType: DataQualityJobDefinitionBatchTransformInputS3DataDistributionType? = null, s3InputMode: DataQualityJobDefinitionBatchTransformInputS3InputMode? = null)
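Since only `dataCapturedDestinationS3Uri`, `datasetFormat`, and `localPath` are required, a minimal construction can rely on the null defaults for the optional parameters. The sketch below is self-contained: the enum and format types are hypothetical stubs standing in for the library's actual definitions, and the S3 URI and path values are illustrative only.

```kotlin
// Hypothetical stubs for the library's companion types, included only so this
// example compiles on its own; the real definitions live in the SDK.
enum class DataQualityJobDefinitionBatchTransformInputS3DataDistributionType { FullyReplicated, ShardedByS3Key }
enum class DataQualityJobDefinitionBatchTransformInputS3InputMode { Pipe, File }
data class DataQualityJobDefinitionDatasetFormat(val csv: Boolean = false)

data class DataQualityJobDefinitionBatchTransformInput(
    val dataCapturedDestinationS3Uri: String,
    val datasetFormat: DataQualityJobDefinitionDatasetFormat,
    val excludeFeaturesAttribute: String? = null,
    val localPath: String,
    val s3DataDistributionType: DataQualityJobDefinitionBatchTransformInputS3DataDistributionType? = null,
    val s3InputMode: DataQualityJobDefinitionBatchTransformInputS3InputMode? = null,
)

// Only the required parameters are supplied; the optional ones stay null,
// so the service-side defaults (FullyReplicated, File) would apply.
val example = DataQualityJobDefinitionBatchTransformInput(
    dataCapturedDestinationS3Uri = "s3://my-bucket/data-capture/",  // hypothetical bucket
    datasetFormat = DataQualityJobDefinitionDatasetFormat(csv = true),
    localPath = "/opt/ml/processing/input",
)

fun main() {
    println(example)
}
```

Named arguments are worth using here: with six parameters of which several share the type `String?`, positional calls are easy to get wrong.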

Types

object Companion

Properties

dataCapturedDestinationS3Uri

A URI that identifies the Amazon S3 storage location where the batch transform job captures data.

datasetFormat

The dataset format for your batch transform job.

excludeFeaturesAttribute

Indexes or names of the features to be excluded from analysis.

localPath

Path to the filesystem where the endpoint data is available to the container.

s3DataDistributionType

Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

s3InputMode

Whether Pipe or File mode is used to transfer data for the monitoring job. Pipe mode is recommended for large datasets; File mode is useful for small files that fit in memory. Defaults to File.