
AWS Native is in preview. AWS Classic is fully supported.

AWS Native v0.97.0 published on Wednesday, Feb 21, 2024 by Pulumi

aws-native.fsx.DataRepositoryAssociation


    Creates an Amazon FSx for Lustre data repository association (DRA). A data repository association is a link between a directory on the file system and an Amazon S3 bucket or prefix. You can have a maximum of 8 data repository associations on a file system. Data repository associations are supported on all FSx for Lustre 2.12 and newer file systems, excluding the scratch_1 deployment type. Each data repository association must have a unique Amazon FSx file system directory and a unique S3 bucket or prefix associated with it. You can configure a data repository association for automatic import only, for automatic export only, or for both. To learn more about linking a data repository to your file system, see Linking your file system to an S3 bucket.

    Example Usage

    Example

    using System.Collections.Generic;
    using System.Linq;
    using Pulumi;
    using AwsNative = Pulumi.AwsNative;
    
    return await Deployment.RunAsync(() => 
    {
        var config = new Config();
        var fsId = config.Require("fsId");
        var draIdExportName = config.Require("draIdExportName");
        var fileSystemPath = config.Require("fileSystemPath");
        var importedFileChunkSize = config.RequireInt32("importedFileChunkSize");
        var testDRA = new AwsNative.FSx.DataRepositoryAssociation("testDRA", new()
        {
            FileSystemId = fsId,
            FileSystemPath = fileSystemPath,
            DataRepositoryPath = "s3://example-bucket",
            BatchImportMetaDataOnCreate = true,
            ImportedFileChunkSize = importedFileChunkSize,
            S3 = new AwsNative.FSx.Inputs.DataRepositoryAssociationS3Args
            {
                AutoImportPolicy = new AwsNative.FSx.Inputs.DataRepositoryAssociationAutoImportPolicyArgs
                {
                    Events = new[]
                    {
                        AwsNative.FSx.DataRepositoryAssociationEventType.New,
                        AwsNative.FSx.DataRepositoryAssociationEventType.Changed,
                        AwsNative.FSx.DataRepositoryAssociationEventType.Deleted,
                    },
                },
                AutoExportPolicy = new AwsNative.FSx.Inputs.DataRepositoryAssociationAutoExportPolicyArgs
                {
                    Events = new[]
                    {
                        AwsNative.FSx.DataRepositoryAssociationEventType.New,
                        AwsNative.FSx.DataRepositoryAssociationEventType.Changed,
                        AwsNative.FSx.DataRepositoryAssociationEventType.Deleted,
                    },
                },
            },
            Tags = new[]
            {
                new AwsNative.FSx.Inputs.DataRepositoryAssociationTagArgs
                {
                    Key = "Location",
                    Value = "Boston",
                },
            },
        });
    
        return new Dictionary<string, object?>
        {
            ["draId"] = testDRA.Id,
        };
    });
    
    package main
    
    import (
    	"github.com/pulumi/pulumi-aws-native/sdk/go/aws/fsx"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
    )
    
    func main() {
    	pulumi.Run(func(ctx *pulumi.Context) error {
    		cfg := config.New(ctx, "")
    		fsId := cfg.Require("fsId")
    		draIdExportName := cfg.Require("draIdExportName")
    		fileSystemPath := cfg.Require("fileSystemPath")
    		importedFileChunkSize := cfg.RequireInt("importedFileChunkSize")
    		testDRA, err := fsx.NewDataRepositoryAssociation(ctx, "testDRA", &fsx.DataRepositoryAssociationArgs{
    			FileSystemId:                pulumi.String(fsId),
    			FileSystemPath:              pulumi.String(fileSystemPath),
    			DataRepositoryPath:          pulumi.String("s3://example-bucket"),
    			BatchImportMetaDataOnCreate: pulumi.Bool(true),
    			ImportedFileChunkSize:       pulumi.Int(importedFileChunkSize),
    			S3: &fsx.DataRepositoryAssociationS3Args{
    				AutoImportPolicy: &fsx.DataRepositoryAssociationAutoImportPolicyArgs{
    					Events: fsx.DataRepositoryAssociationEventTypeArray{
    						fsx.DataRepositoryAssociationEventTypeNew,
    						fsx.DataRepositoryAssociationEventTypeChanged,
    						fsx.DataRepositoryAssociationEventTypeDeleted,
    					},
    				},
    				AutoExportPolicy: &fsx.DataRepositoryAssociationAutoExportPolicyArgs{
    					Events: fsx.DataRepositoryAssociationEventTypeArray{
    						fsx.DataRepositoryAssociationEventTypeNew,
    						fsx.DataRepositoryAssociationEventTypeChanged,
    						fsx.DataRepositoryAssociationEventTypeDeleted,
    					},
    				},
    			},
    			Tags: []fsx.DataRepositoryAssociationTagArgs{
    				{
    					Key:   pulumi.String("Location"),
    					Value: pulumi.String("Boston"),
    				},
    			},
    		})
    		if err != nil {
    			return err
    		}
    		ctx.Export("draId", testDRA.ID())
    		return nil
    	})
    }
    


    import pulumi
    import pulumi_aws_native as aws_native
    
    config = pulumi.Config()
    fs_id = config.require("fsId")
    dra_id_export_name = config.require("draIdExportName")
    file_system_path = config.require("fileSystemPath")
    imported_file_chunk_size = config.require_int("importedFileChunkSize")
    test_dra = aws_native.fsx.DataRepositoryAssociation("testDRA",
        file_system_id=fs_id,
        file_system_path=file_system_path,
        data_repository_path="s3://example-bucket",
        batch_import_meta_data_on_create=True,
        imported_file_chunk_size=imported_file_chunk_size,
        s3=aws_native.fsx.DataRepositoryAssociationS3Args(
            auto_import_policy=aws_native.fsx.DataRepositoryAssociationAutoImportPolicyArgs(
                events=[
                    aws_native.fsx.DataRepositoryAssociationEventType.NEW,
                    aws_native.fsx.DataRepositoryAssociationEventType.CHANGED,
                    aws_native.fsx.DataRepositoryAssociationEventType.DELETED,
                ],
            ),
            auto_export_policy=aws_native.fsx.DataRepositoryAssociationAutoExportPolicyArgs(
                events=[
                    aws_native.fsx.DataRepositoryAssociationEventType.NEW,
                    aws_native.fsx.DataRepositoryAssociationEventType.CHANGED,
                    aws_native.fsx.DataRepositoryAssociationEventType.DELETED,
                ],
            ),
        ),
        tags=[aws_native.fsx.DataRepositoryAssociationTagArgs(
            key="Location",
            value="Boston",
        )])
    pulumi.export("draId", test_dra.id)
    
    import * as pulumi from "@pulumi/pulumi";
    import * as aws_native from "@pulumi/aws-native";
    
    const config = new pulumi.Config();
    const fsId = config.require("fsId");
    const draIdExportName = config.require("draIdExportName");
    const fileSystemPath = config.require("fileSystemPath");
    const importedFileChunkSize = config.requireNumber("importedFileChunkSize");
    const testDRA = new aws_native.fsx.DataRepositoryAssociation("testDRA", {
        fileSystemId: fsId,
        fileSystemPath: fileSystemPath,
        dataRepositoryPath: "s3://example-bucket",
        batchImportMetaDataOnCreate: true,
        importedFileChunkSize: importedFileChunkSize,
        s3: {
            autoImportPolicy: {
                events: [
                    aws_native.fsx.DataRepositoryAssociationEventType.New,
                    aws_native.fsx.DataRepositoryAssociationEventType.Changed,
                    aws_native.fsx.DataRepositoryAssociationEventType.Deleted,
                ],
            },
            autoExportPolicy: {
                events: [
                    aws_native.fsx.DataRepositoryAssociationEventType.New,
                    aws_native.fsx.DataRepositoryAssociationEventType.Changed,
                    aws_native.fsx.DataRepositoryAssociationEventType.Deleted,
                ],
            },
        },
        tags: [{
            key: "Location",
            value: "Boston",
        }],
    });
    export const draId = testDRA.id;
    


    Create DataRepositoryAssociation Resource

    new DataRepositoryAssociation(name: string, args: DataRepositoryAssociationArgs, opts?: CustomResourceOptions);
    @overload
    def DataRepositoryAssociation(resource_name: str,
                                  opts: Optional[ResourceOptions] = None,
                                  batch_import_meta_data_on_create: Optional[bool] = None,
                                  data_repository_path: Optional[str] = None,
                                  file_system_id: Optional[str] = None,
                                  file_system_path: Optional[str] = None,
                                  imported_file_chunk_size: Optional[int] = None,
                                  s3: Optional[DataRepositoryAssociationS3Args] = None,
                                  tags: Optional[Sequence[DataRepositoryAssociationTagArgs]] = None)
    @overload
    def DataRepositoryAssociation(resource_name: str,
                                  args: DataRepositoryAssociationArgs,
                                  opts: Optional[ResourceOptions] = None)
    func NewDataRepositoryAssociation(ctx *Context, name string, args DataRepositoryAssociationArgs, opts ...ResourceOption) (*DataRepositoryAssociation, error)
    public DataRepositoryAssociation(string name, DataRepositoryAssociationArgs args, CustomResourceOptions? opts = null)
    public DataRepositoryAssociation(String name, DataRepositoryAssociationArgs args)
    public DataRepositoryAssociation(String name, DataRepositoryAssociationArgs args, CustomResourceOptions options)
    
    type: aws-native:fsx:DataRepositoryAssociation
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
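
    Filled in, the YAML schema above corresponds to a program like the following. This is a minimal sketch: the file system ID is a placeholder, and the paths and bucket mirror the examples earlier on this page.

    ```yaml
    resources:
      testDRA:
        type: aws-native:fsx:DataRepositoryAssociation
        properties:
          fileSystemId: fs-0123456789abcdef0   # placeholder file system ID
          fileSystemPath: /ns1/
          dataRepositoryPath: s3://example-bucket
          batchImportMetaDataOnCreate: true
    outputs:
      draId: ${testDRA.id}
    ```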
    
    
    name string
    The unique name of the resource.
    args DataRepositoryAssociationArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args DataRepositoryAssociationArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args DataRepositoryAssociationArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args DataRepositoryAssociationArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args DataRepositoryAssociationArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    DataRepositoryAssociation Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The DataRepositoryAssociation resource accepts the following input properties:

    DataRepositoryPath string
    The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
    FileSystemId string
    The ID of the file system on which the data repository association is configured.
    FileSystemPath string
    A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
    BatchImportMetaDataOnCreate bool
    A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
    ImportedFileChunkSize int
    For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
    S3 Pulumi.AwsNative.FSx.Inputs.DataRepositoryAssociationS3
    The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
    Tags List<Pulumi.AwsNative.FSx.Inputs.DataRepositoryAssociationTag>
    An array of key-value pairs to apply to this resource. For more information, see Tag.
    DataRepositoryPath string
    The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
    FileSystemId string
    The ID of the file system on which the data repository association is configured.
    FileSystemPath string
    A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
    BatchImportMetaDataOnCreate bool
    A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
    ImportedFileChunkSize int
    For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
    S3 DataRepositoryAssociationS3Args
    The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
    Tags []DataRepositoryAssociationTagArgs
    An array of key-value pairs to apply to this resource. For more information, see Tag.
    dataRepositoryPath String
    The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
    fileSystemId String
    The ID of the file system on which the data repository association is configured.
    fileSystemPath String
    A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
    batchImportMetaDataOnCreate Boolean
    A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
    importedFileChunkSize Integer
    For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
    s3 DataRepositoryAssociationS3
    The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
    tags List<DataRepositoryAssociationTag>
    An array of key-value pairs to apply to this resource. For more information, see Tag.
    dataRepositoryPath string
    The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
    fileSystemId string
    The ID of the file system on which the data repository association is configured.
    fileSystemPath string
    A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
    batchImportMetaDataOnCreate boolean
    A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
    importedFileChunkSize number
    For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
    s3 DataRepositoryAssociationS3
    The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
    tags DataRepositoryAssociationTag[]
    An array of key-value pairs to apply to this resource. For more information, see Tag.
    data_repository_path str
    The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
    file_system_id str
    The ID of the file system on which the data repository association is configured.
    file_system_path str
    A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
    batch_import_meta_data_on_create bool
    A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
    imported_file_chunk_size int
    For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
    s3 DataRepositoryAssociationS3Args
    The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
    tags Sequence[DataRepositoryAssociationTagArgs]
    An array of key-value pairs to apply to this resource. For more information, see Tag.
    dataRepositoryPath String
    The path to the Amazon S3 data repository that will be linked to the file system. The path can be an S3 bucket or prefix in the format s3://myBucket/myPrefix/. This path specifies where in the S3 data repository files will be imported from or exported to.
    fileSystemId String
    The ID of the file system on which the data repository association is configured.
    fileSystemPath String
    A path on the Amazon FSx for Lustre file system that points to a high-level directory (such as /ns1/) or subdirectory (such as /ns1/subdir/) that will be mapped 1-1 with DataRepositoryPath. The leading forward slash in the name is required. Two data repository associations cannot have overlapping file system paths. For example, if a data repository is associated with file system path /ns1/, then you cannot link another data repository with file system path /ns1/ns2. This path specifies where in your file system files will be exported from or imported to. This file system directory can be linked to only one Amazon S3 bucket, and no other S3 bucket can be linked to the directory. If you specify only a forward slash (/) as the file system path, you can link only one data repository to the file system. You can only specify "/" as the file system path for the first data repository associated with a file system.
    batchImportMetaDataOnCreate Boolean
    A boolean flag indicating whether an import data repository task to import metadata should run after the data repository association is created. The task runs if this flag is set to true.
    importedFileChunkSize Number
    For files imported from a data repository, this value determines the stripe count and maximum amount of data per file (in MiB) stored on a single physical disk. The maximum number of disks that a single file can be striped across is limited by the total number of disks that make up the file system or cache. The default chunk size is 1,024 MiB (1 GiB) and can go as high as 512,000 MiB (500 GiB). Amazon S3 objects have a maximum size of 5 TB.
    s3 Property Map
    The configuration for an Amazon S3 data repository linked to an Amazon FSx Lustre file system with a data repository association. The configuration defines which file events (new, changed, or deleted files or directories) are automatically imported from the linked data repository to the file system or automatically exported from the file system to the data repository.
    tags List<Property Map>
    An array of key-value pairs to apply to this resource. For more information, see Tag.
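
    The overlap rule for FileSystemPath described above can be illustrated with a small standalone check. This is a hypothetical helper for reasoning about the constraint, not part of any Pulumi SDK; FSx itself enforces the rule at creation time.

    ```python
    def paths_overlap(a: str, b: str) -> bool:
        """Return True if one FSx file system path contains the other.

        Two data repository associations may not have overlapping file system
        paths (e.g. /ns1/ and /ns1/ns2/ conflict), and linking "/" claims the
        entire file system.
        """
        # Normalize to a trailing slash so "/ns1" does not falsely match "/ns10/".
        a = a.rstrip("/") + "/"
        b = b.rstrip("/") + "/"
        return a.startswith(b) or b.startswith(a)

    print(paths_overlap("/ns1/", "/ns1/ns2/"))  # True: these cannot coexist
    print(paths_overlap("/ns1/", "/ns2/"))      # False: both can be linked
    ```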

    Outputs

    All input properties are implicitly available as output properties. Additionally, the DataRepositoryAssociation resource produces the following output properties:

    AssociationId string
    Id string
    The provider-assigned unique ID for this managed resource.
    ResourceArn string
    AssociationId string
    Id string
    The provider-assigned unique ID for this managed resource.
    ResourceArn string
    associationId String
    id String
    The provider-assigned unique ID for this managed resource.
    resourceArn String
    associationId string
    id string
    The provider-assigned unique ID for this managed resource.
    resourceArn string
    association_id str
    id str
    The provider-assigned unique ID for this managed resource.
    resource_arn str
    associationId String
    id String
    The provider-assigned unique ID for this managed resource.
    resourceArn String

    Supporting Types

    DataRepositoryAssociationAutoExportPolicy, DataRepositoryAssociationAutoExportPolicyArgs

    Events List<Pulumi.AwsNative.FSx.DataRepositoryAssociationEventType>

    The AutoExportPolicy can have the following event values:

    • NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
    • CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
    • DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.

    You can define any combination of event types for your AutoExportPolicy.

    Events []DataRepositoryAssociationEventType

    The AutoExportPolicy can have the following event values:

    • NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
    • CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
    • DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.

    You can define any combination of event types for your AutoExportPolicy.

    events List<DataRepositoryAssociationEventType>

    The AutoExportPolicy can have the following event values:

    • NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
    • CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
    • DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.

    You can define any combination of event types for your AutoExportPolicy.

    events DataRepositoryAssociationEventType[]

    The AutoExportPolicy can have the following event values:

    • NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
    • CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
    • DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.

    You can define any combination of event types for your AutoExportPolicy.

    events Sequence[DataRepositoryAssociationEventType]

    The AutoExportPolicy can have the following event values:

    • NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
    • CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
    • DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.

    You can define any combination of event types for your AutoExportPolicy.

    events List<"NEW" | "CHANGED" | "DELETED">

    The AutoExportPolicy can have the following event values:

    • NEW - New files and directories are automatically exported to the data repository as they are added to the file system.
    • CHANGED - Changes to files and directories on the file system are automatically exported to the data repository.
    • DELETED - Files and directories are automatically deleted on the data repository when they are deleted on the file system.

    You can define any combination of event types for your AutoExportPolicy.

    DataRepositoryAssociationAutoImportPolicy, DataRepositoryAssociationAutoImportPolicyArgs

    Events List<Pulumi.AwsNative.FSx.DataRepositoryAssociationEventType>

    The AutoImportPolicy can have the following event values:

    • NEW - Amazon FSx automatically imports metadata of files added to the linked S3 bucket that do not currently exist in the FSx file system.
    • CHANGED - Amazon FSx automatically updates file metadata and invalidates existing file content on the file system as files change in the data repository.
    • DELETED - Amazon FSx automatically deletes files on the file system as corresponding files are deleted in the data repository.

    You can define any combination of event types for your AutoImportPolicy.

    The same property appears in the other language SDKs with only the type signature changing:

    Events []DataRepositoryAssociationEventType (Go)
    events List<DataRepositoryAssociationEventType> (Java)
    events DataRepositoryAssociationEventType[] (Node.js)
    events Sequence[DataRepositoryAssociationEventType] (Python)
    events List<"NEW" | "CHANGED" | "DELETED"> (YAML)

    In each case the AutoImportPolicy accepts the same NEW, CHANGED, and DELETED event values, in any combination.

    DataRepositoryAssociationEventType, DataRepositoryAssociationEventTypeArgs

    New (value: NEW)
    Changed (value: CHANGED)
    Deleted (value: DELETED)

    In Go the enum members are DataRepositoryAssociationEventTypeNew, DataRepositoryAssociationEventTypeChanged, and DataRepositoryAssociationEventTypeDeleted; in Python they are NEW, CHANGED, and DELETED; in YAML use the string literals "NEW", "CHANGED", and "DELETED". All resolve to the same underlying values.

    DataRepositoryAssociationS3, DataRepositoryAssociationS3Args

    AutoExportPolicy Pulumi.AwsNative.FSx.Inputs.DataRepositoryAssociationAutoExportPolicy
    Describes a data repository association's automatic export policy. The AutoExportPolicy defines the types of updated objects on the file system that will be automatically exported to the data repository. As you create, modify, or delete files, Amazon FSx for Lustre automatically exports the defined changes asynchronously once your application finishes modifying the file. The AutoExportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
    AutoImportPolicy Pulumi.AwsNative.FSx.Inputs.DataRepositoryAssociationAutoImportPolicy
    Describes the data repository association's automatic import policy. The AutoImportPolicy defines how Amazon FSx keeps your file metadata and directory listings up to date by importing changes to your Amazon FSx for Lustre file system as you modify objects in a linked S3 bucket. The AutoImportPolicy is only supported on Amazon FSx for Lustre file systems with a data repository association.
    The same two properties appear in the other language SDKs with only the type names changing:

    AutoExportPolicy DataRepositoryAssociationAutoExportPolicy, AutoImportPolicy DataRepositoryAssociationAutoImportPolicy (Go)
    autoExportPolicy DataRepositoryAssociationAutoExportPolicy, autoImportPolicy DataRepositoryAssociationAutoImportPolicy (Java, Node.js)
    auto_export_policy DataRepositoryAssociationAutoExportPolicy, auto_import_policy DataRepositoryAssociationAutoImportPolicy (Python)
    autoExportPolicy Property Map, autoImportPolicy Property Map (YAML)

    The descriptions above apply unchanged in every language.
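    Putting the two policies together, the S3 configuration of a data repository association can be sketched in property-map (YAML) form as plain Python data; the variable name is illustrative.

    ```python
    # A data repository association's `s3` property, combining an export and an
    # import policy. Here both directions sync all three documented event types.
    s3_config = {
        "autoExportPolicy": {"events": ["NEW", "CHANGED", "DELETED"]},
        "autoImportPolicy": {"events": ["NEW", "CHANGED", "DELETED"]},
    }

    # Either policy may be omitted to configure export-only or import-only sync,
    # as described in the resource overview above.
    export_only = {"autoExportPolicy": {"events": ["NEW", "CHANGED"]}}
    ```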

    DataRepositoryAssociationTag, DataRepositoryAssociationTagArgs

    Key string
    A value that specifies the TagKey, the name of the tag. Tag keys must be unique for the resource to which they are attached.
    Value string
    A value that specifies the TagValue, the value assigned to the corresponding tag key. Tag values can be null and don't have to be unique in a tag set. For example, you can have a key-value pair in a tag set of finances : April and also of payroll : April.
    The same two properties appear in the other language SDKs with only the type names changing:

    Key string, Value string (Go)
    key String, value String (Java, YAML)
    key string, value string (Node.js)
    key str, value str (Python)

    The descriptions above apply unchanged in every language.
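    The key-uniqueness rule above can be sketched as plain data: keys must be unique on a resource, while values may repeat, as in the finances/payroll example.

    ```python
    # A tag set with two distinct keys sharing the same value "April",
    # mirroring the example in the description above.
    tags = [
        {"key": "finances", "value": "April"},
        {"key": "payroll", "value": "April"},
    ]

    # Tag keys must be unique per resource; values need not be.
    keys = [t["key"] for t in tags]
    assert len(keys) == len(set(keys))
    ```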

    Package Details

    Repository
    AWS Native pulumi/pulumi-aws-native
    License
    Apache-2.0