Location Hdfs Args
data class LocationHdfsArgs(val agentArns: Output<List<String>>? = null, val authenticationType: Output<String>? = null, val blockSize: Output<Int>? = null, val kerberosKeytab: Output<String>? = null, val kerberosKrb5Conf: Output<String>? = null, val kerberosPrincipal: Output<String>? = null, val kmsKeyProviderUri: Output<String>? = null, val nameNodes: Output<List<LocationHdfsNameNodeArgs>>? = null, val qopConfiguration: Output<LocationHdfsQopConfigurationArgs>? = null, val replicationFactor: Output<Int>? = null, val simpleUser: Output<String>? = null, val subdirectory: Output<String>? = null, val tags: Output<Map<String, String>>? = null) : ConvertibleToJava<LocationHdfsArgs>
Manages an HDFS Location within AWS DataSync.
NOTE: The DataSync Agents must be available before creating this resource.
Example Usage
package generated_program;
import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.datasync.LocationHdfs;
import com.pulumi.aws.datasync.LocationHdfsArgs;
import com.pulumi.aws.datasync.inputs.LocationHdfsNameNodeArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // exampleAgent and exampleInstance stand in for a DataSync Agent and an
        // EC2 instance (the HDFS NameNode host) defined elsewhere in the program.
        var example = new LocationHdfs("example", LocationHdfsArgs.builder()
            .agentArns(exampleAgent.arn().applyValue(arn -> List.of(arn)))
            .authenticationType("SIMPLE")
            .simpleUser("example")
            .nameNodes(LocationHdfsNameNodeArgs.builder()
                .hostname(exampleInstance.privateDns())
                .port(80)
                .build())
            .build());
    }
}
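For the Kotlin SDK this page documents, a minimal sketch of the same configuration follows. It assumes the generated type-safe locationHdfs builder in com.pulumi.aws.datasync.kotlin (function and overload names may differ by SDK version) and uses placeholder values for the agent ARN and NameNode hostname; adjust these to your own resources.
import com.pulumi.aws.datasync.kotlin.locationHdfs
import com.pulumi.kotlin.Pulumi

fun main() {
    Pulumi.run {
        // Placeholder agent ARN and hostname; in practice these would come from
        // an existing DataSync Agent and the instance running the NameNode.
        locationHdfs("example") {
            args {
                agentArns("arn:aws:datasync:us-east-1:123456789012:agent/agent-0example")
                authenticationType("SIMPLE")
                simpleUser("example")
                nameNodes {
                    hostname("namenode.example.internal")
                    port(80)
                }
            }
        }
    }
}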
Import
Using pulumi import, import aws_datasync_location_hdfs using the Amazon Resource Name (ARN). For example:
$ pulumi import aws:datasync/locationHdfs:LocationHdfs example arn:aws:datasync:us-east-1:123456789012:location/loc-12345678901234567
Constructors
fun LocationHdfsArgs(agentArns: Output<List<String>>? = null, authenticationType: Output<String>? = null, blockSize: Output<Int>? = null, kerberosKeytab: Output<String>? = null, kerberosKrb5Conf: Output<String>? = null, kerberosPrincipal: Output<String>? = null, kmsKeyProviderUri: Output<String>? = null, nameNodes: Output<List<LocationHdfsNameNodeArgs>>? = null, qopConfiguration: Output<LocationHdfsQopConfigurationArgs>? = null, replicationFactor: Output<Int>? = null, simpleUser: Output<String>? = null, subdirectory: Output<String>? = null, tags: Output<Map<String, String>>? = null)
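As a hedged sketch (package names are assumed to mirror the Java SDK under com.pulumi.aws.datasync.kotlin), the constructor above can also be invoked directly with named arguments and Output values:
import com.pulumi.core.Output
import com.pulumi.aws.datasync.kotlin.LocationHdfsArgs
import com.pulumi.aws.datasync.kotlin.inputs.LocationHdfsNameNodeArgs

// Placeholder agent ARN and hostname; the nested name-node args type is assumed
// to take Output-wrapped fields like the class documented on this page.
val args = LocationHdfsArgs(
    agentArns = Output.of(listOf("arn:aws:datasync:us-east-1:123456789012:agent/agent-0example")),
    authenticationType = Output.of("SIMPLE"),
    simpleUser = Output.of("example"),
    nameNodes = Output.of(listOf(
        LocationHdfsNameNodeArgs(
            hostname = Output.of("namenode.example.internal"),
            port = Output.of(8020) // typical NameNode RPC port; adjust to your cluster
        )
    ))
)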
Functions
Properties
qopConfiguration
The Quality of Protection (QOP) configuration specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If qop_configuration isn't specified, rpc_protection and data_transfer_protection default to PRIVACY. If you set RpcProtection or DataTransferProtection, the other parameter assumes the same value. See configuration below.
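For illustration, a hedged sketch of an explicit QOP configuration; the property names follow the rpc_protection and data_transfer_protection attributes described above, and the Kotlin inputs package is assumed to mirror the Java SDK.
import com.pulumi.core.Output
import com.pulumi.aws.datasync.kotlin.inputs.LocationHdfsQopConfigurationArgs

// Explicit protection settings; when qopConfiguration is omitted entirely,
// both values default to PRIVACY as noted above.
val qop = LocationHdfsQopConfigurationArgs(
    rpcProtection = Output.of("PRIVACY"),
    dataTransferProtection = Output.of("PRIVACY")
)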