Lustre File System Args
Manages an FSx Lustre File System. See the FSx Lustre Guide for more information.
NOTE: auto_import_policy, export_path, import_path and imported_file_chunk_size are not supported with the PERSISTENT_2 deployment type. Use aws.fsx.DataRepositoryAssociation instead.
Example Usage
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const example = new aws.fsx.LustreFileSystem("example", {
    importPath: `s3://${exampleAwsS3Bucket.bucket}`,
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
});
import pulumi
import pulumi_aws as aws

example = aws.fsx.LustreFileSystem("example",
    import_path=f"s3://{example_aws_s3_bucket['bucket']}",
    storage_capacity=1200,
    subnet_ids=example_aws_subnet["id"])
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Aws = Pulumi.Aws;

return await Deployment.RunAsync(() =>
{
    var example = new Aws.Fsx.LustreFileSystem("example", new()
    {
        ImportPath = $"s3://{exampleAwsS3Bucket.Bucket}",
        StorageCapacity = 1200,
        SubnetIds = exampleAwsSubnet.Id,
    });
});
package main

import (
    "github.com/pulumi/pulumi-aws/sdk/v6/go/aws/fsx"
    "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
    pulumi.Run(func(ctx *pulumi.Context) error {
        _, err := fsx.NewLustreFileSystem(ctx, "example", &fsx.LustreFileSystemArgs{
            ImportPath:      pulumi.Sprintf("s3://%v", exampleAwsS3Bucket.Bucket),
            StorageCapacity: pulumi.Int(1200),
            SubnetIds:       pulumi.Any(exampleAwsSubnet.Id),
        })
        if err != nil {
            return err
        }
        return nil
    })
}
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.aws.fsx.LustreFileSystem;
import com.pulumi.aws.fsx.LustreFileSystemArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var example = new LustreFileSystem("example", LustreFileSystemArgs.builder()
            .importPath(String.format("s3://%s", exampleAwsS3Bucket.bucket()))
            .storageCapacity(1200)
            .subnetIds(exampleAwsSubnet.id())
            .build());
    }
}
resources:
  example:
    type: aws:fsx:LustreFileSystem
    properties:
      importPath: s3://${exampleAwsS3Bucket.bucket}
      storageCapacity: 1200
      subnetIds: ${exampleAwsSubnet.id}
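As the NOTE above explains, the S3 import arguments are not available on PERSISTENT_2 file systems; the link is made with a separate aws.fsx.DataRepositoryAssociation resource instead. A minimal TypeScript sketch of that pattern, reusing the bucket and subnet references from the examples above (the /data path and the resource names are illustrative):

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// PERSISTENT_2 file system with no import/export arguments.
const persistent2 = new aws.fsx.LustreFileSystem("persistent2", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    deploymentType: "PERSISTENT_2",
    perUnitStorageThroughput: 125,
});

// The S3 linkage lives on a data repository association instead.
const exampleDra = new aws.fsx.DataRepositoryAssociation("exampleDra", {
    fileSystemId: persistent2.id,
    dataRepositoryPath: pulumi.interpolate`s3://${exampleAwsS3Bucket.bucket}`,
    fileSystemPath: "/data",
});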
Import
Using pulumi import, import FSx File Systems using the id. For example:
$ pulumi import aws:fsx/lustreFileSystem:LustreFileSystem example fs-543ab12b1ca672f33
Certain resource arguments, like security_group_ids, do not have an FSx API method for reading the information after creation. If the argument is set in the Pulumi program on an imported resource, Pulumi will always show a difference. To work around this behavior, either omit the argument from the Pulumi program or use ignore_changes to hide the difference. For example:
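A minimal TypeScript sketch of the ignore_changes approach, following the example style above (the security groups stay whatever the imported file system already has):

import * as aws from "@pulumi/aws";

const example = new aws.fsx.LustreFileSystem("example", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
}, {
    // Suppress the permanent diff on the argument FSx cannot read back.
    ignoreChanges: ["securityGroupIds"],
});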
Constructors
Properties
autoImportPolicy
How Amazon FSx keeps your file and directory listings up to date as you add or modify objects in your linked S3 bucket. See Auto Import Data Repo for more details. Only supported on PERSISTENT_1 deployment types.
automaticBackupRetentionDays
The number of days to retain automatic backups. Setting this to 0 disables automatic backups. You can retain automatic backups for a maximum of 90 days. Only valid for the PERSISTENT_1 and PERSISTENT_2 deployment types.
copyTagsToBackups
A boolean flag indicating whether tags for the file system should be copied to backups. Applicable for the PERSISTENT_1 and PERSISTENT_2 deployment types. The default value is false.
dailyAutomaticBackupStartTime
A recurring daily time, in the format HH:MM. HH is the zero-padded hour of the day (0-23), and MM is the zero-padded minute of the hour. For example, 05:00 specifies 5 AM daily. Only valid for the PERSISTENT_1 and PERSISTENT_2 deployment types. Requires automatic_backup_retention_days to be set.
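A sketch combining the three backup-related arguments above, assuming the exampleAwsSubnet reference from the earlier examples (the values are illustrative):

import * as aws from "@pulumi/aws";

const backups = new aws.fsx.LustreFileSystem("backups", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    deploymentType: "PERSISTENT_1",         // backups require PERSISTENT_1 or PERSISTENT_2
    perUnitStorageThroughput: 50,
    automaticBackupRetentionDays: 30,       // keep automatic backups for 30 days
    dailyAutomaticBackupStartTime: "05:00", // daily backup window start (HH:MM)
    copyTagsToBackups: true,                // carry file system tags onto backups
});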
dataCompressionType
Sets the data compression configuration for the file system. Valid values are LZ4 and NONE. Default value is NONE. Unsetting this value reverts the compression type back to NONE.
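For instance, enabling LZ4 compression (a minimal sketch in the TypeScript style above):

import * as aws from "@pulumi/aws";

const compressed = new aws.fsx.LustreFileSystem("compressed", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    dataCompressionType: "LZ4", // valid values: LZ4 or NONE (the default)
});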
deploymentType
The filesystem deployment type. One of: SCRATCH_1, SCRATCH_2, PERSISTENT_1, PERSISTENT_2.
driveCacheType
The type of drive cache used by PERSISTENT_1 filesystems that are provisioned with HDD storage_type. Required for HDD storage_type; set to either READ or NONE.
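A sketch of an HDD-backed PERSISTENT_1 file system (the 6000 GiB capacity is illustrative; HDD capacities have their own allowed increments):

import * as aws from "@pulumi/aws";

const hdd = new aws.fsx.LustreFileSystem("hdd", {
    storageCapacity: 6000,
    subnetIds: exampleAwsSubnet.id,
    deploymentType: "PERSISTENT_1",
    storageType: "HDD",
    perUnitStorageThroughput: 12, // valid HDD values are 12 or 40
    driveCacheType: "READ",       // required with HDD: READ or NONE
});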
efaEnabled
Adds support for Elastic Fabric Adapter (EFA) and GPUDirect Storage (GDS) to Lustre. This must be set at creation. If set, it cannot be changed, and it prevents changes to per_unit_storage_throughput. This is only supported when deployment_type is set to PERSISTENT_2, metadata_configuration is used, and an EFA-enabled security group is attached.
exportPath
S3 URI (with optional prefix) where the root of your Amazon FSx file system is exported. Can only be specified with the import_path argument, and the path must use the same Amazon S3 bucket as specified in import_path. Set equal to import_path to overwrite files on export. Defaults to s3://{IMPORT BUCKET}/FSxLustre{CREATION TIMESTAMP}. Only supported on PERSISTENT_1 deployment types. A combined sketch follows the import_path description below.
fileSystemTypeVersion
Sets the Lustre version for the file system that you're creating. Valid values are 2.10 for SCRATCH_1, SCRATCH_2 and PERSISTENT_1 deployment types. Valid values for 2.12 include all deployment types.
finalBackupTags
A map of tags to apply to the file system's final backup. Note: If the filesystem uses a Scratch deployment type, final backup during delete will always be skipped and this argument will not be used even when set.
importedFileChunkSize
For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. Can only be specified with the import_path argument. Defaults to 1024. Minimum of 1 and maximum of 512000. Only supported on PERSISTENT_1 deployment types.
importPath
S3 URI (with optional prefix) that you're using as the data repository for your FSx for Lustre file system. For example, s3://example-bucket/optional-prefix/. Only supported on PERSISTENT_1 deployment types.
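The PERSISTENT_1 data repository arguments (auto_import_policy, import_path, export_path and imported_file_chunk_size) are typically set together; a sketch, assuming the exampleAwsS3Bucket and exampleAwsSubnet resources from the earlier examples:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const dataRepo = new aws.fsx.LustreFileSystem("dataRepo", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    deploymentType: "PERSISTENT_1",
    perUnitStorageThroughput: 50,
    importPath: pulumi.interpolate`s3://${exampleAwsS3Bucket.bucket}`,
    // Same bucket as importPath, so exported files overwrite the originals.
    exportPath: pulumi.interpolate`s3://${exampleAwsS3Bucket.bucket}`,
    importedFileChunkSize: 2048,     // MiB stored per disk before striping continues
    autoImportPolicy: "NEW_CHANGED", // pick up new and changed objects from S3
});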
logConfiguration
The Lustre logging configuration used when creating an Amazon FSx for Lustre file system. When logging is enabled, Lustre logs error and warning events for data repositories associated with your file system to Amazon CloudWatch Logs. See the log_configuration Block for details.
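A sketch of routing warning and error events to CloudWatch Logs; the destination ARN is illustrative, and the block arguments shown (destination, level) follow the log_configuration Block documentation:

import * as aws from "@pulumi/aws";

const logged = new aws.fsx.LustreFileSystem("logged", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    logConfiguration: {
        // Illustrative log-group ARN; use your own CloudWatch Logs destination.
        destination: "arn:aws:logs:us-east-1:123456789012:log-group:/aws/fsx/lustre",
        level: "WARN_ERROR", // log both warnings and errors
    },
});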
metadataConfiguration
The Lustre metadata configuration used when creating an Amazon FSx for Lustre file system. This can be used to specify a user provisioned metadata scale. This is only supported when deployment_type is set to PERSISTENT_2. See the metadata_configuration Block for details.
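A sketch of a user-provisioned metadata scale; the mode and iops block arguments follow the metadata_configuration Block documentation, and the IOPS figure is illustrative:

import * as aws from "@pulumi/aws";

const tuned = new aws.fsx.LustreFileSystem("tuned", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    deploymentType: "PERSISTENT_2", // metadata configuration requires PERSISTENT_2
    perUnitStorageThroughput: 125,
    metadataConfiguration: {
        mode: "USER_PROVISIONED", // rather than the AUTOMATIC default
        iops: 1500,               // illustrative metadata IOPS
    },
});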
perUnitStorageThroughput
Describes the amount of read and write throughput for each 1 tebibyte of storage, in MB/s/TiB, required for the PERSISTENT_1 and PERSISTENT_2 deployment types. Valid values for PERSISTENT_1 deployment_type and SSD storage_type are 50, 100, 200. Valid values for PERSISTENT_1 deployment_type and HDD storage_type are 12, 40. Valid values for PERSISTENT_2 deployment_type and SSD storage_type are 125, 250, 500, 1000.
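As a worked example with illustrative figures: a 2400 GiB PERSISTENT_2 file system provisioned at 250 MB/s/TiB yields roughly (2400 / 1024) TiB × 250 MB/s/TiB ≈ 586 MB/s of baseline disk throughput.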
rootSquashConfiguration
The Lustre root squash configuration used when creating an Amazon FSx for Lustre file system. When enabled, root squash restricts root-level access from clients that try to access your file system as a root user. See the root_squash_configuration Block for details.
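A sketch of enabling root squash; the rootSquash and noSquashNids block arguments follow the root_squash_configuration Block documentation, and the UID:GID and NID values are illustrative:

import * as aws from "@pulumi/aws";

const squashed = new aws.fsx.LustreFileSystem("squashed", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    rootSquashConfiguration: {
        rootSquash: "65534:65534",      // map root clients to this UID:GID
        noSquashNids: ["10.0.1.6@tcp"], // exempt this client NID from squashing
    },
});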
securityGroupIds
A list of IDs for the security groups that apply to the specified network interfaces created for file system access. These security groups will apply to all network interfaces.
skipFinalBackup
When enabled, will skip the default final backup taken when the file system is deleted. This configuration must be applied separately before attempting to delete the resource to have the desired behavior. Defaults to true. Note: If the filesystem uses a Scratch deployment type, final backup during delete will always be skipped and this argument will not be used even when set.
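A sketch of opting into a tagged final backup on delete (remember that a change to skipFinalBackup must be applied before the delete for it to take effect):

import * as aws from "@pulumi/aws";

const durable = new aws.fsx.LustreFileSystem("durable", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    deploymentType: "PERSISTENT_1", // Scratch types always skip the final backup
    perUnitStorageThroughput: 50,
    skipFinalBackup: false,         // take a final backup when the file system is deleted
    finalBackupTags: {
        Source: "durable",          // illustrative tag on the final backup
    },
});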
storageCapacity
The storage capacity (GiB) of the file system. Minimum of 1200. See more details at Allowed values for FSx storage capacity. Update is allowed only for SCRATCH_2, PERSISTENT_1 and PERSISTENT_2 deployment types; see more details at FSx Storage Capacity Update. Required when not creating the filesystem from a backup.
storageType
The filesystem storage type. Either SSD or HDD, defaults to SSD. HDD is only supported on PERSISTENT_1 deployment types.
weeklyMaintenanceStartTime
The preferred start time (in d:HH:MM format) to perform weekly maintenance, in the UTC time zone.
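For example (a sketch; FSx numbers the d field 1 through 7 starting with Monday, so 2:05:00 is Tuesday at 05:00 UTC):

import * as aws from "@pulumi/aws";

const maintained = new aws.fsx.LustreFileSystem("maintained", {
    storageCapacity: 1200,
    subnetIds: exampleAwsSubnet.id,
    weeklyMaintenanceStartTime: "2:05:00", // Tuesday 05:00 UTC
});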