EndpointKafkaSettingsArgs

data class EndpointKafkaSettingsArgs(val broker: Output<String>, val includeControlDetails: Output<Boolean>? = null, val includeNullAndEmpty: Output<Boolean>? = null, val includePartitionValue: Output<Boolean>? = null, val includeTableAlterOperations: Output<Boolean>? = null, val includeTransactionDetails: Output<Boolean>? = null, val messageFormat: Output<String>? = null, val messageMaxBytes: Output<Int>? = null, val noHexPrefix: Output<Boolean>? = null, val partitionIncludeSchemaTable: Output<Boolean>? = null, val saslPassword: Output<String>? = null, val saslUsername: Output<String>? = null, val securityProtocol: Output<String>? = null, val sslCaCertificateArn: Output<String>? = null, val sslClientCertificateArn: Output<String>? = null, val sslClientKeyArn: Output<String>? = null, val sslClientKeyPassword: Output<String>? = null, val topic: Output<String>? = null) : ConvertibleToJava<EndpointKafkaSettingsArgs>

Constructors

constructor(broker: Output<String>, includeControlDetails: Output<Boolean>? = null, includeNullAndEmpty: Output<Boolean>? = null, includePartitionValue: Output<Boolean>? = null, includeTableAlterOperations: Output<Boolean>? = null, includeTransactionDetails: Output<Boolean>? = null, messageFormat: Output<String>? = null, messageMaxBytes: Output<Int>? = null, noHexPrefix: Output<Boolean>? = null, partitionIncludeSchemaTable: Output<Boolean>? = null, saslPassword: Output<String>? = null, saslUsername: Output<String>? = null, securityProtocol: Output<String>? = null, sslCaCertificateArn: Output<String>? = null, sslClientCertificateArn: Output<String>? = null, sslClientKeyArn: Output<String>? = null, sslClientKeyPassword: Output<String>? = null, topic: Output<String>? = null)
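
A minimal construction sketch follows. It assumes the standard Pulumi AWS Kotlin package layout (com.pulumi.aws.dms.kotlin.inputs) and com.pulumi.core.Output; the broker address, topic, and flag values are placeholders, and only broker is required.

import com.pulumi.core.Output
import com.pulumi.aws.dms.kotlin.inputs.EndpointKafkaSettingsArgs

// Placeholder values; every setting other than broker falls back to its default when omitted.
val kafkaSettings = EndpointKafkaSettingsArgs(
    broker = Output.of("b-1.example-cluster.kafka.us-east-1.amazonaws.com:9092"),
    topic = Output.of("dms-replication-topic"),
    messageFormat = Output.of("JSON_UNFORMATTED"),
    includeTransactionDetails = Output.of(true)
)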

Properties

val broker: Output<String>

Kafka broker location. Specify in the form broker-hostname-or-ip:port.

val includeControlDetails: Output<Boolean>? = null

Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. Default is false.

val includeNullAndEmpty: Output<Boolean>? = null

Include NULL and empty columns for records migrated to the endpoint. Default is false.

val includePartitionValue: Output<Boolean>? = null

Shows the partition value within the Kafka message output unless the partition type is schema-table-type. Default is false.

val includeTableAlterOperations: Output<Boolean>? = null

Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. Default is false.

val includeTransactionDetails: Output<Boolean>? = null

Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). Default is false.

val messageFormat: Output<String>? = null

Output format for the records created on the endpoint. Message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

val messageMaxBytes: Output<Int>? = null

Maximum size in bytes for records created on the endpoint. Default is 1,000,000.

val noHexPrefix: Output<Boolean>? = null

Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, AWS DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the no_hex_prefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.

val partitionIncludeSchemaTable: Output<Boolean>? = null

Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. Default is false.
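
A hedged sketch of the partitioning-related settings described above, with the same assumed imports and placeholder broker as the earlier example:

import com.pulumi.core.Output
import com.pulumi.aws.dms.kotlin.inputs.EndpointKafkaSettingsArgs

// Prefix schema and table names to partition values to spread rows across Kafka partitions.
val partitionedSettings = EndpointKafkaSettingsArgs(
    broker = Output.of("b-1.example-cluster.kafka.us-east-1.amazonaws.com:9092"),
    includePartitionValue = Output.of(true),
    partitionIncludeSchemaTable = Output.of(true)
)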

val saslPassword: Output<String>? = null

Secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

val saslUsername: Output<String>? = null

Secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

val securityProtocol: Output<String>? = null

Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires sasl_username and sasl_password.
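
For example, a sasl-ssl endpoint needs the protocol and both SASL credentials together. A hedged sketch with the same assumed imports as above; the credentials are placeholders and would normally come from a secret rather than literals:

import com.pulumi.core.Output
import com.pulumi.aws.dms.kotlin.inputs.EndpointKafkaSettingsArgs

// sasl-ssl requires saslUsername and saslPassword alongside securityProtocol.
val saslSslSettings = EndpointKafkaSettingsArgs(
    broker = Output.of("b-1.example-cluster.kafka.us-east-1.amazonaws.com:9096"),
    securityProtocol = Output.of("sasl-ssl"),
    saslUsername = Output.of("dms-msk-user"),
    saslPassword = Output.of("example-password")
)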

val sslCaCertificateArn: Output<String>? = null

ARN for the private certificate authority (CA) cert that AWS DMS uses to securely connect to your Kafka target endpoint.

val sslClientCertificateArn: Output<String>? = null

ARN of the client certificate used to securely connect to a Kafka target endpoint.

val sslClientKeyArn: Output<String>? = null

ARN for the client private key used to securely connect to a Kafka target endpoint.

val sslClientKeyPassword: Output<String>? = null

Password for the client private key used to securely connect to a Kafka target endpoint.

val topic: Output<String>? = null

Kafka topic for migration. Default is kafka-default-topic.

Functions

open override fun toJava(): EndpointKafkaSettingsArgs