
We recommend new projects start with resources from the AWS provider.

Viewing docs for AWS Cloud Control v1.62.0
published on Monday, Apr 20, 2026 by Pulumi

    Resource type definition for AWS::SageMaker::Model

    Create Model Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    TypeScript:
    new Model(name: string, args?: ModelArgs, opts?: CustomResourceOptions);
    Python:
    @overload
    def Model(resource_name: str,
              args: Optional[ModelArgs] = None,
              opts: Optional[ResourceOptions] = None)
    
    @overload
    def Model(resource_name: str,
              opts: Optional[ResourceOptions] = None,
              containers: Optional[Sequence[ModelContainerDefinitionArgs]] = None,
              enable_network_isolation: Optional[bool] = None,
              execution_role_arn: Optional[str] = None,
              inference_execution_config: Optional[ModelInferenceExecutionConfigArgs] = None,
              model_name: Optional[str] = None,
              primary_container: Optional[ModelContainerDefinitionArgs] = None,
              tags: Optional[Sequence[_root_inputs.TagArgs]] = None,
              vpc_config: Optional[ModelVpcConfigArgs] = None)
    Go:
    func NewModel(ctx *Context, name string, args *ModelArgs, opts ...ResourceOption) (*Model, error)
    C#:
    public Model(string name, ModelArgs? args = null, CustomResourceOptions? opts = null)
    Java:
    public Model(String name, ModelArgs args)
    public Model(String name, ModelArgs args, CustomResourceOptions options)
    
    YAML:
    type: aws-native:sagemaker:Model
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
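    To make the YAML skeleton above concrete, here is a minimal hypothetical program. The model name, IAM role ARN, image URI, and S3 bucket are placeholder values for illustration, not values taken from this reference.

```yaml
# Hypothetical example values; substitute your own role, image, and bucket.
resources:
  myModel:
    type: aws-native:sagemaker:Model
    properties:
      modelName: my-inference-model
      executionRoleArn: arn:aws:iam::123456789012:role/SageMakerExecutionRole
      primaryContainer:
        image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest
        modelDataUrl: s3://my-bucket/output/model.tar.gz
      tags:
        - key: environment
          value: dev
```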
    
    

    Parameters

    TypeScript:
    name string
    The unique name of the resource.
    args ModelArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    Python:
    resource_name str
    The unique name of the resource.
    args ModelArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    Go:
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args ModelArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    C#:
    name string
    The unique name of the resource.
    args ModelArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    Java:
    name String
    The unique name of the resource.
    args ModelArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Model Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

    The Model resource accepts the following input properties:

    C#:
    Containers List<Pulumi.AwsNative.SageMaker.Inputs.ModelContainerDefinition>
    Specifies the containers in the inference pipeline.
    EnableNetworkIsolation bool
    Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
    ExecutionRoleArn string
    The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
    InferenceExecutionConfig Pulumi.AwsNative.SageMaker.Inputs.ModelInferenceExecutionConfig
    Specifies details of how containers in a multi-container endpoint are called.
    ModelName string
    The name of the new model.
    PrimaryContainer Pulumi.AwsNative.SageMaker.Inputs.ModelContainerDefinition
    The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
    Tags List<Pulumi.AwsNative.Inputs.Tag>
    An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
    VpcConfig Pulumi.AwsNative.SageMaker.Inputs.ModelVpcConfig
    A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
    Go:
    Containers []ModelContainerDefinitionArgs
    Specifies the containers in the inference pipeline.
    EnableNetworkIsolation bool
    Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
    ExecutionRoleArn string
    The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
    InferenceExecutionConfig ModelInferenceExecutionConfigArgs
    Specifies details of how containers in a multi-container endpoint are called.
    ModelName string
    The name of the new model.
    PrimaryContainer ModelContainerDefinitionArgs
    The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
    Tags []TagArgs
    An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
    VpcConfig ModelVpcConfigArgs
    A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
    Java:
    containers List<ModelContainerDefinition>
    Specifies the containers in the inference pipeline.
    enableNetworkIsolation Boolean
    Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
    executionRoleArn String
    The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
    inferenceExecutionConfig ModelInferenceExecutionConfig
    Specifies details of how containers in a multi-container endpoint are called.
    modelName String
    The name of the new model.
    primaryContainer ModelContainerDefinition
    The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
    tags List<Tag>
    An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
    vpcConfig ModelVpcConfig
    A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
    TypeScript:
    containers ModelContainerDefinition[]
    Specifies the containers in the inference pipeline.
    enableNetworkIsolation boolean
    Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
    executionRoleArn string
    The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
    inferenceExecutionConfig ModelInferenceExecutionConfig
    Specifies details of how containers in a multi-container endpoint are called.
    modelName string
    The name of the new model.
    primaryContainer ModelContainerDefinition
    The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
    tags Tag[]
    An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
    vpcConfig ModelVpcConfig
    A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
    Python:
    containers Sequence[ModelContainerDefinitionArgs]
    Specifies the containers in the inference pipeline.
    enable_network_isolation bool
    Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
    execution_role_arn str
    The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
    inference_execution_config ModelInferenceExecutionConfigArgs
    Specifies details of how containers in a multi-container endpoint are called.
    model_name str
    The name of the new model.
    primary_container ModelContainerDefinitionArgs
    The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
    tags Sequence[TagArgs]
    An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
    vpc_config ModelVpcConfigArgs
    A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.
    YAML:
    containers List<Property Map>
    Specifies the containers in the inference pipeline.
    enableNetworkIsolation Boolean
    Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
    executionRoleArn String
    The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
    inferenceExecutionConfig Property Map
    Specifies details of how containers in a multi-container endpoint are called.
    modelName String
    The name of the new model.
    primaryContainer Property Map
    The location of the primary docker image containing inference code, associated artifacts, and custom environment map that the inference code uses when the model is deployed for predictions.
    tags List<Property Map>
    An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging AWS Resources.
    vpcConfig Property Map
    A VpcConfig object that specifies the VPC that you want your model to connect to. Control access to and from your model container by configuring the VPC. VpcConfig is used in hosting services and in batch transform. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Data in Batch Transform Jobs by Using an Amazon Virtual Private Cloud.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Model resource produces the following output properties:

    C#:
    Id string
    The provider-assigned unique ID for this managed resource.
    ModelArn string
    The Amazon Resource Name (ARN) of the model.
    Go:
    Id string
    The provider-assigned unique ID for this managed resource.
    ModelArn string
    The Amazon Resource Name (ARN) of the model.
    Java:
    id String
    The provider-assigned unique ID for this managed resource.
    modelArn String
    The Amazon Resource Name (ARN) of the model.
    TypeScript:
    id string
    The provider-assigned unique ID for this managed resource.
    modelArn string
    The Amazon Resource Name (ARN) of the model.
    Python:
    id str
    The provider-assigned unique ID for this managed resource.
    model_arn str
    The Amazon Resource Name (ARN) of the model.
    YAML:
    id String
    The provider-assigned unique ID for this managed resource.
    modelArn String
    The Amazon Resource Name (ARN) of the model.

    Supporting Types

    ModelAccessConfig, ModelAccessConfigArgs

    The access configuration file to control access to the ML model. You can explicitly accept the model end-user license agreement (EULA) within the ModelAccessConfig.
    C#:
    AcceptEula bool
    Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
    Go:
    AcceptEula bool
    Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
    Java:
    acceptEula Boolean
    Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
    TypeScript:
    acceptEula boolean
    Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
    Python:
    accept_eula bool
    Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
    YAML:
    acceptEula Boolean
    Specifies agreement to the model end-user license agreement (EULA). The AcceptEula value must be explicitly defined as True in order to accept the EULA that this model requires. You are responsible for reviewing and complying with any applicable license terms and making sure they are acceptable for your use case before downloading or using a model.
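    Because AcceptEula must be "explicitly defined as True", a client-side guard can reject anything else before a deploy. This sketch is illustrative: the function name is made up, and treating truthy non-boolean values (such as 1 or "True") as invalid is a strict reading of the wording above, not SageMaker's documented validation.

```python
def require_eula_acceptance(accept_eula) -> None:
    """Reject anything other than an explicit boolean True, mirroring the
    documented requirement that AcceptEula be explicitly defined as True."""
    if accept_eula is not True:
        raise ValueError("AcceptEula must be explicitly set to True")
```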

    ModelContainerDefinition, ModelContainerDefinitionArgs

    Describes the container, as part of model definition.
    C#:
    ContainerHostname string

    This parameter is ignored for models that contain only a PrimaryContainer.

    When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

    Environment object

    The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

    The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.
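    The size limits above are easy to check client-side before submitting a model. The helper below is an illustrative sketch (the function name and counting bytes via UTF-8 encoding are assumptions; SageMaker performs its own validation server-side):

```python
def validate_environment(env: dict) -> None:
    """Check a container's Environment map against the documented limits:
    each key and each value at most 1024 bytes, and all keys and values
    combined at most 32 KB."""
    total = 0
    for key, value in env.items():
        for part in (key, value):
            size = len(str(part).encode("utf-8"))
            if size > 1024:
                raise ValueError(f"entry {part!r} exceeds the 1024-byte limit")
            total += size
    if total > 32 * 1024:
        raise ValueError("combined Environment size exceeds the 32 KB limit")
```

    Note that when a model has multiple containers, the 32 KB combined limit applies across all of their maps together, so a per-container check like this is necessary but not sufficient.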

    Image string
    The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
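    The two supported image path shapes can be sketched as a regular expression. This is a rough heuristic for the formats named above, not SageMaker's actual validation logic, and the digest branch assumes the common sha256 form:

```python
import re

# Rough sketch of the two documented image path shapes:
#   registry/repository[:tag]   or   registry/repository[@digest]
_IMAGE_PATH = re.compile(
    r"^[\w.\-]+(?::\d+)?"                     # registry host, optional port
    r"(?:/[\w.\-]+)+"                         # one or more repository segments
    r"(?::[\w.\-]+|@sha256:[0-9a-f]{64})?$"   # optional :tag or @digest
)

def looks_like_image_path(path: str) -> bool:
    """Return True if the string matches either documented format."""
    return _IMAGE_PATH.fullmatch(path) is not None
```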
    ImageConfig Pulumi.AwsNative.SageMaker.Inputs.ModelImageConfig

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

    The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

    InferenceSpecificationName string
    The inference specification name in the model package version.
    Mode Pulumi.AwsNative.SageMaker.ModelContainerDefinitionMode
    Whether the container hosts a single model or multiple models.
    ModelDataSource Pulumi.AwsNative.SageMaker.Inputs.ModelDataSource

    Specifies the location of ML model data to deploy.

    Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

    ModelDataUrl string

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

    If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
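    The two concrete constraints stated here (an S3 path, and a single gzip-compressed tar archive with a .tar.gz suffix) can be sanity-checked up front. The helper below is an illustrative sketch, not SageMaker's validation:

```python
def check_model_data_url(url: str) -> None:
    """Validate the documented shape of ModelDataUrl: an S3 path pointing
    to a single gzip-compressed tar archive (.tar.gz suffix)."""
    if not url.startswith("s3://"):
        raise ValueError("ModelDataUrl must be an S3 path (s3://...)")
    if not url.endswith(".tar.gz"):
        raise ValueError("ModelDataUrl must point to a .tar.gz archive")
```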

    ModelPackageName string
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
    MultiModelConfig Pulumi.AwsNative.SageMaker.Inputs.ModelMultiModelConfig
    Specifies additional configuration for multi-model endpoints.
    Go:
    ContainerHostname string

    This parameter is ignored for models that contain only a PrimaryContainer.

    When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

    Environment interface{}

    The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

    The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.

    Image string
    The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
    ImageConfig ModelImageConfig

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

    The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

    InferenceSpecificationName string
    The inference specification name in the model package version.
    Mode ModelContainerDefinitionMode
    Whether the container hosts a single model or multiple models.
    ModelDataSource ModelDataSource

    Specifies the location of ML model data to deploy.

    Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

    ModelDataUrl string

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

    If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

    ModelPackageName string
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
    MultiModelConfig ModelMultiModelConfig
    Specifies additional configuration for multi-model endpoints.
    Java:
    containerHostname String

    This parameter is ignored for models that contain only a PrimaryContainer.

    When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

    environment Object

    The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

    The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.

    image String
    The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
    imageConfig ModelImageConfig

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

    The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

    inferenceSpecificationName String
    The inference specification name in the model package version.
    mode ModelContainerDefinitionMode
    Whether the container hosts a single model or multiple models.
    modelDataSource ModelDataSource

    Specifies the location of ML model data to deploy.

    Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

    modelDataUrl String

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

    If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

    modelPackageName String
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
    multiModelConfig ModelMultiModelConfig
    Specifies additional configuration for multi-model endpoints.
    TypeScript:
    containerHostname string

    This parameter is ignored for models that contain only a PrimaryContainer.

    When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

    environment any

    The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

    The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.

    image string
    The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
    imageConfig ModelImageConfig

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

    The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

    inferenceSpecificationName string
    The inference specification name in the model package version.
    mode ModelContainerDefinitionMode
    Whether the container hosts a single model or multiple models.
    modelDataSource ModelDataSource

    Specifies the location of ML model data to deploy.

    Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

    modelDataUrl string

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

    If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

    modelPackageName string
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
    multiModelConfig ModelMultiModelConfig
    Specifies additional configuration for multi-model endpoints.
    Python:
    container_hostname str

    This parameter is ignored for models that contain only a PrimaryContainer.

    When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

    environment Any

    The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

    The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.

    image str
    The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
    image_config ModelImageConfig

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

    The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

    inference_specification_name str
    The inference specification name in the model package version.
    mode ModelContainerDefinitionMode
    Whether the container hosts a single model or multiple models.
    model_data_source ModelDataSource

    Specifies the location of ML model data to deploy.

    Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

    model_data_url str

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

    If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

    model_package_name str
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
    multi_model_config ModelMultiModelConfig
    Specifies additional configuration for multi-model endpoints.
    containerHostname String

    This parameter is ignored for models that contain only a PrimaryContainer.

    When a ContainerDefinition is part of an inference pipeline, the value of the parameter uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a ContainerDefinition that is part of an inference pipeline, a unique name is automatically assigned based on the position of the ContainerDefinition in the pipeline. If you specify a value for the ContainerHostName for any ContainerDefinition that is part of an inference pipeline, you must specify a value for the ContainerHostName parameter of every ContainerDefinition in that pipeline.

    environment Any

    The environment variables to set in the Docker container. Don't include any sensitive data in your environment variables.

    The maximum length of each key and value in the Environment map is 1024 bytes. The maximum length of all keys and values in the map, combined, is 32 KB. If you pass multiple containers to a CreateModel request, then the maximum length of all of their maps, combined, is also 32 KB.

    image String
    The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker.
    imageConfig Property Map

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers.

    The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container Registry must be in the same region as the model or endpoint you are creating.

    inferenceSpecificationName String
    The inference specification name in the model package version.
    mode "SingleModel" | "MultiModel"
    Whether the container hosts a single model or multiple models.
    modelDataSource Property Map

    Specifies the location of ML model data to deploy.

    Currently you cannot use ModelDataSource in conjunction with SageMaker batch transform, SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.

    modelDataUrl String

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms, but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.

    If you provide a value for this parameter, SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provide. AWS STS is activated in your AWS account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.

    modelPackageName String
    The name or Amazon Resource Name (ARN) of the model package to use to create the model.
    multiModelConfig Property Map
    Specifies additional configuration for multi-model endpoints.
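The Environment size limits described above (1024 bytes per key and per value, 32 KB combined) can be checked before a deployment. A minimal sketch, assuming UTF-8 encoding for the byte counts; this is an illustrative pre-flight check, not part of the Pulumi API:

```python
# Sketch: pre-validate a container Environment map against the documented
# CreateModel limits: 1024 bytes per key and per value, 32 KB combined.
# Byte counts assume UTF-8 encoding.

def environment_size_ok(env: dict[str, str]) -> bool:
    """Return True if `env` fits within the documented limits."""
    total = 0
    for key, value in env.items():
        k, v = len(key.encode("utf-8")), len(value.encode("utf-8"))
        if k > 1024 or v > 1024:
            return False
        total += k + v
    return total <= 32 * 1024
```

If a model has multiple containers, the same 32 KB ceiling applies to all of their maps combined, so you would sum the totals across containers rather than check each map in isolation.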

    ModelContainerDefinitionMode, ModelContainerDefinitionModeArgs

    Enum name (value) by language:
    C#: SingleModel (SingleModel), MultiModel (MultiModel)
    Go: ModelContainerDefinitionModeSingleModel (SingleModel), ModelContainerDefinitionModeMultiModel (MultiModel)
    Java: SingleModel (SingleModel), MultiModel (MultiModel)
    Node.js: SingleModel (SingleModel), MultiModel (MultiModel)
    Python: SINGLE_MODEL (SingleModel), MULTI_MODEL (MultiModel)
    YAML: "SingleModel" (SingleModel), "MultiModel" (MultiModel)

    ModelDataSource, ModelDataSourceArgs

    Specifies the location of ML model data to deploy. If specified, you must specify one and only one of the available data sources.
    S3DataSource Pulumi.AwsNative.SageMaker.Inputs.ModelS3DataSource
    Specifies the S3 location of ML model data to deploy.
    S3DataSource ModelS3DataSource
    Specifies the S3 location of ML model data to deploy.
    s3DataSource ModelS3DataSource
    Specifies the S3 location of ML model data to deploy.
    s3DataSource ModelS3DataSource
    Specifies the S3 location of ML model data to deploy.
    s3_data_source ModelS3DataSource
    Specifies the S3 location of ML model data to deploy.
    s3DataSource Property Map
    Specifies the S3 location of ML model data to deploy.

    ModelHubAccessConfig, ModelHubAccessConfigArgs

    Configuration information specifying which hub contents have accessible deployment options.
    HubContentArn string
    The ARN of the hub content for which deployment access is allowed.
    HubContentArn string
    The ARN of the hub content for which deployment access is allowed.
    hubContentArn String
    The ARN of the hub content for which deployment access is allowed.
    hubContentArn string
    The ARN of the hub content for which deployment access is allowed.
    hub_content_arn str
    The ARN of the hub content for which deployment access is allowed.
    hubContentArn String
    The ARN of the hub content for which deployment access is allowed.

    ModelImageConfig, ModelImageConfigArgs

    Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC).
    RepositoryAccessMode Pulumi.AwsNative.SageMaker.ModelImageConfigRepositoryAccessMode
    Set this to one of the following values: Platform - The model image is hosted in Amazon ECR. Vpc - The model image is hosted in a private Docker registry in your VPC.
    RepositoryAuthConfig Pulumi.AwsNative.SageMaker.Inputs.ModelRepositoryAuthConfig
    (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
    RepositoryAccessMode ModelImageConfigRepositoryAccessMode
    Set this to one of the following values: Platform - The model image is hosted in Amazon ECR. Vpc - The model image is hosted in a private Docker registry in your VPC.
    RepositoryAuthConfig ModelRepositoryAuthConfig
    (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
    repositoryAccessMode ModelImageConfigRepositoryAccessMode
    Set this to one of the following values: Platform - The model image is hosted in Amazon ECR. Vpc - The model image is hosted in a private Docker registry in your VPC.
    repositoryAuthConfig ModelRepositoryAuthConfig
    (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
    repositoryAccessMode ModelImageConfigRepositoryAccessMode
    Set this to one of the following values: Platform - The model image is hosted in Amazon ECR. Vpc - The model image is hosted in a private Docker registry in your VPC.
    repositoryAuthConfig ModelRepositoryAuthConfig
    (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
    repository_access_mode ModelImageConfigRepositoryAccessMode
    Set this to one of the following values: Platform - The model image is hosted in Amazon ECR. Vpc - The model image is hosted in a private Docker registry in your VPC.
    repository_auth_config ModelRepositoryAuthConfig
    (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
    repositoryAccessMode "Platform" | "Vpc"
    Set this to one of the following values: Platform - The model image is hosted in Amazon ECR. Vpc - The model image is hosted in a private Docker registry in your VPC.
    repositoryAuthConfig Property Map
    (Optional) Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field, and the private Docker registry where the model image is hosted requires authentication.
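For a model image hosted in a private Docker registry in your VPC, the imageConfig argument combines the Vpc access mode with an auth config, as described above. A sketch in plain-dict form (the shape a YAML/JSON program would pass); the Lambda ARN is a hypothetical placeholder:

```python
# Sketch of an imageConfig argument for a model image in a private Docker
# registry inside your VPC. The credentials-provider Lambda ARN below is a
# hypothetical placeholder, not a real function.
image_config = {
    "repositoryAccessMode": "Vpc",  # "Platform" would mean Amazon ECR instead
    "repositoryAuthConfig": {
        # Lambda function that returns credentials for the registry (hypothetical ARN)
        "repositoryCredentialsProviderArn": "arn:aws:lambda:us-east-1:123456789012:function:ecr-creds",
    },
}
```

repositoryAuthConfig is only meaningful when repositoryAccessMode is Vpc and the registry actually requires authentication; for ECR-hosted images the auth config is omitted.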

    ModelImageConfigRepositoryAccessMode, ModelImageConfigRepositoryAccessModeArgs

    Enum name (value) by language:
    C#: Platform (Platform), Vpc (Vpc)
    Go: ModelImageConfigRepositoryAccessModePlatform (Platform), ModelImageConfigRepositoryAccessModeVpc (Vpc)
    Java: Platform (Platform), Vpc (Vpc)
    Node.js: Platform (Platform), Vpc (Vpc)
    Python: PLATFORM (Platform), VPC (Vpc)
    YAML: "Platform" (Platform), "Vpc" (Vpc)

    ModelInferenceExecutionConfig, ModelInferenceExecutionConfigArgs

    Specifies details about how containers in a multi-container endpoint are run.
    Mode Pulumi.AwsNative.SageMaker.ModelInferenceExecutionConfigMode
    How containers in a multi-container endpoint are run.
    Mode ModelInferenceExecutionConfigMode
    How containers in a multi-container endpoint are run.
    mode ModelInferenceExecutionConfigMode
    How containers in a multi-container endpoint are run.
    mode ModelInferenceExecutionConfigMode
    How containers in a multi-container endpoint are run.
    mode ModelInferenceExecutionConfigMode
    How containers in a multi-container endpoint are run.
    mode "Serial" | "Direct"
    How containers in a multi-container endpoint are run.

    ModelInferenceExecutionConfigMode, ModelInferenceExecutionConfigModeArgs

    Enum name (value) by language:
    C#: Serial (Serial), Direct (Direct)
    Go: ModelInferenceExecutionConfigModeSerial (Serial), ModelInferenceExecutionConfigModeDirect (Direct)
    Java: Serial (Serial), Direct (Direct)
    Node.js: Serial (Serial), Direct (Direct)
    Python: SERIAL (Serial), DIRECT (Direct)
    YAML: "Serial" (Serial), "Direct" (Direct)

    ModelMultiModelConfig, ModelMultiModelConfigArgs

    Specifies additional configuration for multi-model endpoints.
    ModelCacheSetting Pulumi.AwsNative.SageMaker.ModelMultiModelConfigModelCacheSetting
    Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
    ModelCacheSetting ModelMultiModelConfigModelCacheSetting
    Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
    modelCacheSetting ModelMultiModelConfigModelCacheSetting
    Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
    modelCacheSetting ModelMultiModelConfigModelCacheSetting
    Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
    model_cache_setting ModelMultiModelConfigModelCacheSetting
    Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
    modelCacheSetting "Enabled" | "Disabled"
    Whether to cache models for a multi-model endpoint. By default, multi-model endpoints cache models so that a model does not have to be loaded into memory each time it is invoked. Some use cases do not benefit from model caching. For example, if an endpoint hosts a large number of models that are each invoked infrequently, the endpoint might perform better if you disable model caching. To disable model caching, set the value of this parameter to Disabled.
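The caching trade-off above leads to a one-field argument. A plain-dict sketch for an endpoint that hosts many infrequently invoked models, where disabling the cache can improve performance:

```python
# Sketch: disable the model cache for a multi-model endpoint that hosts
# many infrequently invoked models, per the guidance above. "Enabled" is
# the default behavior.
multi_model_config = {"modelCacheSetting": "Disabled"}
```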

    ModelMultiModelConfigModelCacheSetting, ModelMultiModelConfigModelCacheSettingArgs

    Enum name (value) by language:
    C#: Enabled (Enabled), Disabled (Disabled)
    Go: ModelMultiModelConfigModelCacheSettingEnabled (Enabled), ModelMultiModelConfigModelCacheSettingDisabled (Disabled)
    Java: Enabled (Enabled), Disabled (Disabled)
    Node.js: Enabled (Enabled), Disabled (Disabled)
    Python: ENABLED (Enabled), DISABLED (Disabled)
    YAML: "Enabled" (Enabled), "Disabled" (Disabled)

    ModelRepositoryAuthConfig, ModelRepositoryAuthConfigArgs

    Specifies an authentication configuration for the private docker registry where your model image is hosted. Specify a value for this property only if you specified Vpc as the value for the RepositoryAccessMode field of the ImageConfig object that you passed to a call to CreateModel and the private Docker registry where the model image is hosted requires authentication.
    RepositoryCredentialsProviderArn string
    The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide
    RepositoryCredentialsProviderArn string
    The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide
    repositoryCredentialsProviderArn String
    The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide
    repositoryCredentialsProviderArn string
    The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide
    repository_credentials_provider_arn str
    The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide
    repositoryCredentialsProviderArn String
    The Amazon Resource Name (ARN) of an AWS Lambda function that provides credentials to authenticate to the private Docker registry where your model image is hosted. For information about how to create an AWS Lambda function, see Create a Lambda function with the console in the AWS Lambda Developer Guide

    ModelS3DataSource, ModelS3DataSourceArgs

    Specifies the S3 location of ML model data to deploy.
    CompressionType Pulumi.AwsNative.SageMaker.ModelS3DataSourceCompressionType
    Specifies how the ML model data is prepared.
    S3DataType Pulumi.AwsNative.SageMaker.ModelS3DataSourceS3DataType
    Specifies the type of ML model data to deploy.
    S3Uri string
    Specifies the S3 path of ML model data to deploy.
    HubAccessConfig Pulumi.AwsNative.SageMaker.Inputs.ModelHubAccessConfig
    The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
    ModelAccessConfig Pulumi.AwsNative.SageMaker.Inputs.ModelAccessConfig
    CompressionType ModelS3DataSourceCompressionType
    Specifies how the ML model data is prepared.
    S3DataType ModelS3DataSourceS3DataType
    Specifies the type of ML model data to deploy.
    S3Uri string
    Specifies the S3 path of ML model data to deploy.
    HubAccessConfig ModelHubAccessConfig
    The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
    ModelAccessConfig ModelAccessConfig
    compressionType ModelS3DataSourceCompressionType
    Specifies how the ML model data is prepared.
    s3DataType ModelS3DataSourceS3DataType
    Specifies the type of ML model data to deploy.
    s3Uri String
    Specifies the S3 path of ML model data to deploy.
    hubAccessConfig ModelHubAccessConfig
    The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
    modelAccessConfig ModelAccessConfig
    compressionType ModelS3DataSourceCompressionType
    Specifies how the ML model data is prepared.
    s3DataType ModelS3DataSourceS3DataType
    Specifies the type of ML model data to deploy.
    s3Uri string
    Specifies the S3 path of ML model data to deploy.
    hubAccessConfig ModelHubAccessConfig
    The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
    modelAccessConfig ModelAccessConfig
    compression_type ModelS3DataSourceCompressionType
    Specifies how the ML model data is prepared.
    s3_data_type ModelS3DataSourceS3DataType
    Specifies the type of ML model data to deploy.
    s3_uri str
    Specifies the S3 path of ML model data to deploy.
    hub_access_config ModelHubAccessConfig
    The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
    model_access_config ModelAccessConfig
    compressionType "None" | "Gzip"
    Specifies how the ML model data is prepared.
    s3DataType "S3Prefix" | "S3Object"
    Specifies the type of ML model data to deploy.
    s3Uri String
    Specifies the S3 path of ML model data to deploy.
    hubAccessConfig Property Map
    The configuration for a private hub model reference that points to a SageMaker JumpStart public hub model.
    modelAccessConfig Property Map
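Putting the S3 data source fields together: a plain-dict sketch of a modelDataSource argument that deploys uncompressed artifacts stored under an S3 prefix. The bucket and prefix are hypothetical placeholders:

```python
# Sketch of a modelDataSource argument pointing at uncompressed model
# artifacts under an S3 prefix (bucket and prefix are hypothetical).
model_data_source = {
    "s3DataSource": {
        "s3DataType": "S3Prefix",    # deploy every object under the prefix
        "compressionType": "None",   # artifacts are stored uncompressed
        "s3Uri": "s3://my-models/llm-demo/",
    }
}
```

With s3DataType set to S3Object and compressionType set to Gzip, the s3Uri would instead point at a single archive object, analogous to modelDataUrl.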

    ModelS3DataSourceCompressionType, ModelS3DataSourceCompressionTypeArgs

    Enum name (value) by language:
    C#: None (None), Gzip (Gzip)
    Go: ModelS3DataSourceCompressionTypeNone (None), ModelS3DataSourceCompressionTypeGzip (Gzip)
    Java: None (None), Gzip (Gzip)
    Node.js: None (None), Gzip (Gzip)
    Python: NONE (None), GZIP (Gzip)
    YAML: "None" (None), "Gzip" (Gzip)

    ModelS3DataSourceS3DataType, ModelS3DataSourceS3DataTypeArgs

    Enum name (value) by language:
    C#: S3Prefix (S3Prefix), S3Object (S3Object)
    Go: ModelS3DataSourceS3DataTypeS3Prefix (S3Prefix), ModelS3DataSourceS3DataTypeS3Object (S3Object)
    Java: S3Prefix (S3Prefix), S3Object (S3Object)
    Node.js: S3Prefix (S3Prefix), S3Object (S3Object)
    Python: S3_PREFIX (S3Prefix), S3_OBJECT (S3Object)
    YAML: "S3Prefix" (S3Prefix), "S3Object" (S3Object)

    ModelVpcConfig, ModelVpcConfigArgs

    Specifies an Amazon Virtual Private Cloud (VPC) that your SageMaker jobs, hosted models, and compute resources have access to. You can control access to and from your resources by configuring a VPC. For more information, see Give SageMaker Access to Resources in your Amazon VPC.
    SecurityGroupIds List<string>
    The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
    Subnets List<string>
    The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
    SecurityGroupIds []string
    The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
    Subnets []string
    The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
    securityGroupIds List<String>
    The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
    subnets List<String>
    The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
    securityGroupIds string[]
    The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
    subnets string[]
    The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
    security_group_ids Sequence[str]
    The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
    subnets Sequence[str]
    The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
    securityGroupIds List<String>
    The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
    subnets List<String>
    The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.
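The security group and subnet fields above take lists of IDs. A plain-dict sketch of a vpcConfig argument; the IDs are hypothetical placeholders in the sg-/subnet- formats noted above:

```python
# Sketch of a vpcConfig argument. The security group and subnet IDs are
# hypothetical placeholders; real IDs come from your VPC.
vpc_config = {
    "securityGroupIds": ["sg-0123456789abcdef0"],
    "subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
}
```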

    Tag, TagArgs

    A set of tags to apply to the resource.
    Key string
    The key name of the tag
    Value string
    The value of the tag
    Key string
    The key name of the tag
    Value string
    The value of the tag
    key String
    The key name of the tag
    value String
    The value of the tag
    key string
    The key name of the tag
    value string
    The value of the tag
    key str
    The key name of the tag
    value str
    The value of the tag
    key String
    The key name of the tag
    value String
    The value of the tag
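Tying the pieces together, here is a plain-dict sketch of the args for a single-container model, combining the role, primary container, and tags described in this reference. The role ARN, image URI, bucket, and tag values are hypothetical placeholders:

```python
# Sketch of the args for a single-container model. The IAM role ARN, ECR
# image URI, S3 path, and tag values below are hypothetical placeholders.
model_args = {
    "executionRoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    "primaryContainer": {
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
        "modelDataUrl": "s3://my-models/model.tar.gz",  # single gzip-compressed tar archive
        "environment": {"LOG_LEVEL": "info"},
    },
    "tags": [{"key": "team", "value": "ml-platform"}],
}
```

In a real program these keys map onto the constructor arguments shown at the top of this page (for example, ModelArgs in TypeScript or the keyword arguments of the Python Model overload).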

    Package Details

    Repository
    AWS Native pulumi/pulumi-aws-native
    License
    Apache-2.0