Location Hdfs Args
Manages an HDFS Location within AWS DataSync.
NOTE: The DataSync Agents must be available before creating this resource.
Example Usage
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.aws.datasync.LocationHdfs;
import com.pulumi.aws.datasync.LocationHdfsArgs;
import com.pulumi.aws.datasync.inputs.LocationHdfsNameNodeArgs;
import java.util.List;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // `exampleAgent` and `exampleInstance` are assumed to be existing
        // aws.datasync.Agent and aws.ec2.Instance resources in this stack.
        var example = new LocationHdfs("example", LocationHdfsArgs.builder()
            .agentArns(exampleAgent.arn().applyValue(List::of))
            .authenticationType("SIMPLE")
            .simpleUser("example")
            .nameNodes(LocationHdfsNameNodeArgs.builder()
                .hostname(exampleInstance.privateDns())
                .port(80)
                .build())
            .build());
    }
}
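For clusters that use Kerberos authentication, the location can be configured along these lines. This is a hedged sketch: the principal, the keytab and krb5.conf placeholder strings, and the `exampleAgent`/`exampleInstance` resource references are illustrative assumptions, not values from this page.

```java
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.aws.datasync.LocationHdfs;
import com.pulumi.aws.datasync.LocationHdfsArgs;
import com.pulumi.aws.datasync.inputs.LocationHdfsNameNodeArgs;
import java.util.List;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // `exampleAgent` and `exampleInstance` are assumed pre-existing resources.
        var example = new LocationHdfs("example", LocationHdfsArgs.builder()
            .agentArns(exampleAgent.arn().applyValue(List::of))
            .authenticationType("KERBEROS")
            // With KERBEROS, kerberos_principal, kerberos_keytab, and
            // kerberos_krb5_conf are required instead of simple_user.
            .kerberosPrincipal("user@EXAMPLE.COM")
            .kerberosKeytab("<contents of the keytab file>")      // placeholder
            .kerberosKrb5Conf("<contents of the krb5.conf file>") // placeholder
            .nameNodes(LocationHdfsNameNodeArgs.builder()
                .hostname(exampleInstance.privateDns())
                .port(80)
                .build())
            .build());
    }
}
```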
Import
aws_datasync_location_hdfs can be imported by using the Amazon Resource Name (ARN), e.g.,
$ pulumi import aws:datasync/locationHdfs:LocationHdfs example arn:aws:datasync:us-east-1:123456789012:location/loc-12345678901234567
Constructors
Properties
The type of authentication used to determine the identity of the user. Valid values are SIMPLE and KERBEROS.
The Kerberos key table (keytab) that contains mappings between the defined Kerberos principal and the encrypted keys. If KERBEROS is specified for authentication_type, this parameter is required.
The krb5.conf file that contains the Kerberos configuration information. If KERBEROS is specified for authentication_type, this parameter is required.
The Kerberos principal with access to the files and folders on the HDFS cluster. If KERBEROS is specified for authentication_type, this parameter is required.
The URI of the HDFS cluster's Key Management Server (KMS).
The NameNode that manages the HDFS namespace. The NameNode performs operations such as opening, closing, and renaming files and directories. The NameNode contains the information to map blocks of data to the DataNodes. You can use only one NameNode. See configuration below.
The Quality of Protection (QOP) configuration, which specifies the Remote Procedure Call (RPC) and data transfer protection settings configured on the Hadoop Distributed File System (HDFS) cluster. If qop_configuration isn't specified, rpc_protection and data_transfer_protection default to PRIVACY. If you set only one of rpc_protection and data_transfer_protection, the other assumes the same value. See configuration below.
The number of DataNodes to replicate the data to when writing to the HDFS cluster. By default, data is replicated to three DataNodes.
The user name used to identify the client on the host operating system. If SIMPLE is specified for authentication_type, this parameter is required.
A subdirectory in the HDFS cluster. This subdirectory is used to read data from or write data to the HDFS cluster. If the subdirectory isn't specified, it defaults to /.
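The qop_configuration block described above can be sketched as follows. This is a fragment, not a complete program: it assumes it sits inside a LocationHdfsArgs.builder() chain, and the DISABLED/AUTHENTICATION/INTEGRITY/PRIVACY values follow the AWS DataSync API.

```java
import com.pulumi.aws.datasync.inputs.LocationHdfsQopConfigurationArgs;

// Inside LocationHdfsArgs.builder():
.qopConfiguration(LocationHdfsQopConfigurationArgs.builder()
    // Valid values: DISABLED, AUTHENTICATION, INTEGRITY, PRIVACY.
    // If only one of the two is set, the other assumes the same value.
    .rpcProtection("PRIVACY")
    .dataTransferProtection("PRIVACY")
    .build())
```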