We recommend new projects start with resources from the AWS provider.
aws-native.sagemaker.InferenceComponent
Resource Type definition for AWS::SageMaker::InferenceComponent
Create InferenceComponent Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new InferenceComponent(name: string, args: InferenceComponentArgs, opts?: CustomResourceOptions);
@overload
def InferenceComponent(resource_name: str,
                       args: InferenceComponentArgs,
                       opts: Optional[ResourceOptions] = None)
@overload
def InferenceComponent(resource_name: str,
                       opts: Optional[ResourceOptions] = None,
                       endpoint_name: Optional[str] = None,
                       specification: Optional[InferenceComponentSpecificationArgs] = None,
                       endpoint_arn: Optional[str] = None,
                       inference_component_name: Optional[str] = None,
                       runtime_config: Optional[InferenceComponentRuntimeConfigArgs] = None,
                       tags: Optional[Sequence[_root_inputs.TagArgs]] = None,
                       variant_name: Optional[str] = None)
func NewInferenceComponent(ctx *Context, name string, args InferenceComponentArgs, opts ...ResourceOption) (*InferenceComponent, error)
public InferenceComponent(string name, InferenceComponentArgs args, CustomResourceOptions? opts = null)
public InferenceComponent(String name, InferenceComponentArgs args)
public InferenceComponent(String name, InferenceComponentArgs args, CustomResourceOptions options)
type: aws-native:sagemaker:InferenceComponent
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args InferenceComponentArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args InferenceComponentArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args InferenceComponentArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args InferenceComponentArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args InferenceComponentArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
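As a quick orientation, here is a minimal TypeScript sketch of declaring an InferenceComponent with the aws-native provider. The endpoint, variant, and model names are placeholder assumptions for resources you would already manage elsewhere; the property shapes follow the inputs documented below.

import * as aws_native from "@pulumi/aws-native";

// Minimal sketch: deploy an existing SageMaker model as an inference component
// on an existing endpoint. All names are hypothetical placeholders.
const component = new aws_native.sagemaker.InferenceComponent("my-component", {
    endpointName: "my-endpoint",       // assumed existing endpoint
    variantName: "AllTraffic",         // assumed production variant name
    specification: {
        modelName: "my-model",         // assumed existing SageMaker model
        computeResourceRequirements: {
            minMemoryRequiredInMb: 1024,
            numberOfCpuCoresRequired: 1,
        },
    },
    runtimeConfig: {
        copyCount: 1,
    },
});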
InferenceComponent Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The InferenceComponent resource accepts the following input properties:
C#:
- EndpointName string - The name of the endpoint that hosts the inference component.
- Specification Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentSpecification
- EndpointArn string - The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- InferenceComponentName string - The name of the inference component.
- RuntimeConfig Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentRuntimeConfig
- Tags List<Pulumi.AwsNative.Inputs.Tag>
- VariantName string - The name of the production variant that hosts the inference component.

Go:
- EndpointName string - The name of the endpoint that hosts the inference component.
- Specification InferenceComponentSpecificationArgs
- EndpointArn string - The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- InferenceComponentName string - The name of the inference component.
- RuntimeConfig InferenceComponentRuntimeConfigArgs
- Tags []TagArgs
- VariantName string - The name of the production variant that hosts the inference component.

Java:
- endpointName String - The name of the endpoint that hosts the inference component.
- specification InferenceComponentSpecification
- endpointArn String - The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- inferenceComponentName String - The name of the inference component.
- runtimeConfig InferenceComponentRuntimeConfig
- tags List<Tag>
- variantName String - The name of the production variant that hosts the inference component.

TypeScript:
- endpointName string - The name of the endpoint that hosts the inference component.
- specification InferenceComponentSpecification
- endpointArn string - The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- inferenceComponentName string - The name of the inference component.
- runtimeConfig InferenceComponentRuntimeConfig
- tags Tag[]
- variantName string - The name of the production variant that hosts the inference component.

Python:
- endpoint_name str - The name of the endpoint that hosts the inference component.
- specification InferenceComponentSpecificationArgs
- endpoint_arn str - The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- inference_component_name str - The name of the inference component.
- runtime_config InferenceComponentRuntimeConfigArgs
- tags Sequence[TagArgs]
- variant_name str - The name of the production variant that hosts the inference component.

YAML:
- endpointName String - The name of the endpoint that hosts the inference component.
- specification Property Map
- endpointArn String - The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- inferenceComponentName String - The name of the inference component.
- runtimeConfig Property Map
- tags List<Property Map>
- variantName String - The name of the production variant that hosts the inference component.
Outputs
All input properties are implicitly available as output properties. Additionally, the InferenceComponent resource produces the following output properties:
C#:
- CreationTime string - The time when the inference component was created.
- FailureReason string
- Id string - The provider-assigned unique ID for this managed resource.
- InferenceComponentArn string - The Amazon Resource Name (ARN) of the inference component.
- InferenceComponentStatus Pulumi.AwsNative.SageMaker.InferenceComponentStatus - The status of the inference component.
- LastModifiedTime string - The time when the inference component was last updated.

Go:
- CreationTime string - The time when the inference component was created.
- FailureReason string
- Id string - The provider-assigned unique ID for this managed resource.
- InferenceComponentArn string - The Amazon Resource Name (ARN) of the inference component.
- InferenceComponentStatus InferenceComponentStatus - The status of the inference component.
- LastModifiedTime string - The time when the inference component was last updated.

Java:
- creationTime String - The time when the inference component was created.
- failureReason String
- id String - The provider-assigned unique ID for this managed resource.
- inferenceComponentArn String - The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentStatus InferenceComponentStatus - The status of the inference component.
- lastModifiedTime String - The time when the inference component was last updated.

TypeScript:
- creationTime string - The time when the inference component was created.
- failureReason string
- id string - The provider-assigned unique ID for this managed resource.
- inferenceComponentArn string - The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentStatus InferenceComponentStatus - The status of the inference component.
- lastModifiedTime string - The time when the inference component was last updated.

Python:
- creation_time str - The time when the inference component was created.
- failure_reason str
- id str - The provider-assigned unique ID for this managed resource.
- inference_component_arn str - The Amazon Resource Name (ARN) of the inference component.
- inference_component_status InferenceComponentStatus - The status of the inference component.
- last_modified_time str - The time when the inference component was last updated.

YAML:
- creationTime String - The time when the inference component was created.
- failureReason String
- id String - The provider-assigned unique ID for this managed resource.
- inferenceComponentArn String - The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentStatus "InService" | "Creating" | "Updating" | "Failed" | "Deleting" - The status of the inference component.
- lastModifiedTime String - The time when the inference component was last updated.
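Continuing the hypothetical TypeScript example from the constructor section, the output properties can be read off the resource once it has been created, for instance as stack exports:

// Output properties resolve after the deployment creates the resource.
export const componentArn = component.inferenceComponentArn;
export const componentStatus = component.inferenceComponentStatus;
export const componentCreatedAt = component.creationTime;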
Supporting Types
InferenceComponentComputeResourceRequirements, InferenceComponentComputeResourceRequirementsArgs
C#:
- MaxMemoryRequiredInMb int - The maximum MB of memory to allocate to run a model that you assign to an inference component.
- MinMemoryRequiredInMb int - The minimum MB of memory to allocate to run a model that you assign to an inference component.
- NumberOfAcceleratorDevicesRequired double - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- NumberOfCpuCoresRequired double - The number of CPU cores to allocate to run a model that you assign to an inference component.

Go:
- MaxMemoryRequiredInMb int - The maximum MB of memory to allocate to run a model that you assign to an inference component.
- MinMemoryRequiredInMb int - The minimum MB of memory to allocate to run a model that you assign to an inference component.
- NumberOfAcceleratorDevicesRequired float64 - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- NumberOfCpuCoresRequired float64 - The number of CPU cores to allocate to run a model that you assign to an inference component.

Java:
- maxMemoryRequiredInMb Integer - The maximum MB of memory to allocate to run a model that you assign to an inference component.
- minMemoryRequiredInMb Integer - The minimum MB of memory to allocate to run a model that you assign to an inference component.
- numberOfAcceleratorDevicesRequired Double - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- numberOfCpuCoresRequired Double - The number of CPU cores to allocate to run a model that you assign to an inference component.

TypeScript:
- maxMemoryRequiredInMb number - The maximum MB of memory to allocate to run a model that you assign to an inference component.
- minMemoryRequiredInMb number - The minimum MB of memory to allocate to run a model that you assign to an inference component.
- numberOfAcceleratorDevicesRequired number - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- numberOfCpuCoresRequired number - The number of CPU cores to allocate to run a model that you assign to an inference component.

Python:
- max_memory_required_in_mb int - The maximum MB of memory to allocate to run a model that you assign to an inference component.
- min_memory_required_in_mb int - The minimum MB of memory to allocate to run a model that you assign to an inference component.
- number_of_accelerator_devices_required float - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- number_of_cpu_cores_required float - The number of CPU cores to allocate to run a model that you assign to an inference component.

YAML:
- maxMemoryRequiredInMb Number - The maximum MB of memory to allocate to run a model that you assign to an inference component.
- minMemoryRequiredInMb Number - The minimum MB of memory to allocate to run a model that you assign to an inference component.
- numberOfAcceleratorDevicesRequired Number - The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- numberOfCpuCoresRequired Number - The number of CPU cores to allocate to run a model that you assign to an inference component.
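As a rough illustration, a TypeScript value of this shape might size a GPU-backed copy of a model and be passed as specification.computeResourceRequirements; the numbers are placeholder assumptions, not recommendations.

// Hypothetical sizing for one copy of the model; tune to the model's real needs.
const computeResourceRequirements = {
    minMemoryRequiredInMb: 8192,
    maxMemoryRequiredInMb: 16384,
    numberOfAcceleratorDevicesRequired: 1, // e.g. one GPU or AWS Inferentia device
    numberOfCpuCoresRequired: 2,
};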
InferenceComponentContainerSpecification, InferenceComponentContainerSpecificationArgs
C#:
- ArtifactUrl string - The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- DeployedImage Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentDeployedImage
- Environment Dictionary<string, string> - The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- Image string - The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

Go:
- ArtifactUrl string - The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- DeployedImage InferenceComponentDeployedImage
- Environment map[string]string - The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- Image string - The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

Java:
- artifactUrl String - The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployedImage InferenceComponentDeployedImage
- environment Map<String,String> - The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image String - The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

TypeScript:
- artifactUrl string - The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployedImage InferenceComponentDeployedImage
- environment {[key: string]: string} - The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image string - The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

Python:
- artifact_url str - The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployed_image InferenceComponentDeployedImage
- environment Mapping[str, str] - The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image str - The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

YAML:
- artifactUrl String - The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployedImage Property Map
- environment Map<String> - The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image String - The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
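A hedged TypeScript sketch of a container specification, passed as specification.container; the ECR image URI, S3 artifact path, and environment variable are invented placeholders.

// Hypothetical container definition: ECR image plus model artifacts in S3.
const container = {
    image: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
    artifactUrl: "s3://my-bucket/models/my-model.tar.gz",
    environment: {
        MODEL_SERVER_WORKERS: "2", // example variable; the map supports up to 16 entries
    },
};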
InferenceComponentDeployedImage, InferenceComponentDeployedImageArgs
C#:
- ResolutionTime string - The date and time when the image path for the model resolved to the ResolvedImage.
- ResolvedImage string - The specific digest path of the image hosted in this ProductionVariant.
- SpecifiedImage string - The image path you specified when you created the model.

Go:
- ResolutionTime string - The date and time when the image path for the model resolved to the ResolvedImage.
- ResolvedImage string - The specific digest path of the image hosted in this ProductionVariant.
- SpecifiedImage string - The image path you specified when you created the model.

Java:
- resolutionTime String - The date and time when the image path for the model resolved to the ResolvedImage.
- resolvedImage String - The specific digest path of the image hosted in this ProductionVariant.
- specifiedImage String - The image path you specified when you created the model.

TypeScript:
- resolutionTime string - The date and time when the image path for the model resolved to the ResolvedImage.
- resolvedImage string - The specific digest path of the image hosted in this ProductionVariant.
- specifiedImage string - The image path you specified when you created the model.

Python:
- resolution_time str - The date and time when the image path for the model resolved to the ResolvedImage.
- resolved_image str - The specific digest path of the image hosted in this ProductionVariant.
- specified_image str - The image path you specified when you created the model.

YAML:
- resolutionTime String - The date and time when the image path for the model resolved to the ResolvedImage.
- resolvedImage String - The specific digest path of the image hosted in this ProductionVariant.
- specifiedImage String - The image path you specified when you created the model.
InferenceComponentRuntimeConfig, InferenceComponentRuntimeConfigArgs
C#:
- CopyCount int - The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- CurrentCopyCount int
- DesiredCopyCount int

Go:
- CopyCount int - The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- CurrentCopyCount int
- DesiredCopyCount int

Java:
- copyCount Integer - The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- currentCopyCount Integer
- desiredCopyCount Integer

TypeScript:
- copyCount number - The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- currentCopyCount number
- desiredCopyCount number

Python:
- copy_count int - The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- current_copy_count int
- desired_copy_count int

YAML:
- copyCount Number - The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- currentCopyCount Number
- desiredCopyCount Number
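A small TypeScript sketch of a runtime config that requests two copies of the model container; only copyCount is set here, and the other two fields are left out.

// Two concurrent copies of the model container, passed as runtimeConfig.
const runtimeConfig = {
    copyCount: 2,
};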
InferenceComponentSpecification, InferenceComponentSpecificationArgs
C#:
- BaseInferenceComponentName string - The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- ComputeResourceRequirements Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentComputeResourceRequirements - The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- Container Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentContainerSpecification - Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- ModelName string - The name of an existing SageMaker model object in your account that you want to deploy with the inference component.
- StartupParameters Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentStartupParameters - Settings that take effect while the model container starts up.

Go:
- BaseInferenceComponentName string - The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- ComputeResourceRequirements InferenceComponentComputeResourceRequirements - The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- Container InferenceComponentContainerSpecification - Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- ModelName string - The name of an existing SageMaker model object in your account that you want to deploy with the inference component.
- StartupParameters InferenceComponentStartupParameters - Settings that take effect while the model container starts up.

Java:
- baseInferenceComponentName String - The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- computeResourceRequirements InferenceComponentComputeResourceRequirements - The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container InferenceComponentContainerSpecification - Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- modelName String - The name of an existing SageMaker model object in your account that you want to deploy with the inference component.
- startupParameters InferenceComponentStartupParameters - Settings that take effect while the model container starts up.

TypeScript:
- baseInferenceComponentName string - The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- computeResourceRequirements InferenceComponentComputeResourceRequirements - The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container InferenceComponentContainerSpecification - Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- modelName string - The name of an existing SageMaker model object in your account that you want to deploy with the inference component.
- startupParameters InferenceComponentStartupParameters - Settings that take effect while the model container starts up.

Python:
- base_inference_component_name str - The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- compute_resource_requirements InferenceComponentComputeResourceRequirements - The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container InferenceComponentContainerSpecification - Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- model_name str - The name of an existing SageMaker model object in your account that you want to deploy with the inference component.
- startup_parameters InferenceComponentStartupParameters - Settings that take effect while the model container starts up.

YAML:
- baseInferenceComponentName String - The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- computeResourceRequirements Property Map - The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container Property Map - Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- modelName String - The name of an existing SageMaker model object in your account that you want to deploy with the inference component.
- startupParameters Property Map - Settings that take effect while the model container starts up.
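The adapter workflow described above can be sketched in TypeScript roughly as follows; the endpoint, base component, and S3 path are placeholder assumptions, and computeResourceRequirements is omitted because an adapter uses the base component's compute resources.

import * as aws_native from "@pulumi/aws-native";

// Hypothetical adapter inference component layered on an existing base component.
const adapter = new aws_native.sagemaker.InferenceComponent("my-adapter", {
    endpointName: "my-endpoint",                         // assumed existing endpoint
    specification: {
        baseInferenceComponentName: "my-base-component", // assumed existing base component
        container: {
            artifactUrl: "s3://my-bucket/adapters/my-adapter.tar.gz", // adapter artifacts
        },
        // computeResourceRequirements intentionally omitted for adapters.
    },
});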
InferenceComponentStartupParameters, InferenceComponentStartupParametersArgs
C#:
- ContainerStartupHealthCheckTimeoutInSeconds int - The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- ModelDataDownloadTimeoutInSeconds int - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

Go:
- ContainerStartupHealthCheckTimeoutInSeconds int - The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- ModelDataDownloadTimeoutInSeconds int - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

Java:
- containerStartupHealthCheckTimeoutInSeconds Integer - The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- modelDataDownloadTimeoutInSeconds Integer - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

TypeScript:
- containerStartupHealthCheckTimeoutInSeconds number - The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- modelDataDownloadTimeoutInSeconds number - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

Python:
- container_startup_health_check_timeout_in_seconds int - The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- model_data_download_timeout_in_seconds int - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

YAML:
- containerStartupHealthCheckTimeoutInSeconds Number - The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- modelDataDownloadTimeoutInSeconds Number - The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
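A hedged TypeScript sketch of startup parameters for a large model, passed as specification.startupParameters; the timeout values are arbitrary placeholders.

// Allow extra time to download model data and to pass the startup health check.
const startupParameters = {
    modelDataDownloadTimeoutInSeconds: 900,
    containerStartupHealthCheckTimeoutInSeconds: 600,
};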
InferenceComponentStatus, InferenceComponentStatusArgs
C#:
- InService - InService
- Creating - Creating
- Updating - Updating
- Failed - Failed
- Deleting - Deleting

Go:
- InferenceComponentStatusInService - InService
- InferenceComponentStatusCreating - Creating
- InferenceComponentStatusUpdating - Updating
- InferenceComponentStatusFailed - Failed
- InferenceComponentStatusDeleting - Deleting

Java:
- InService - InService
- Creating - Creating
- Updating - Updating
- Failed - Failed
- Deleting - Deleting

TypeScript:
- InService - InService
- Creating - Creating
- Updating - Updating
- Failed - Failed
- Deleting - Deleting

Python:
- IN_SERVICE - InService
- CREATING - Creating
- UPDATING - Updating
- FAILED - Failed
- DELETING - Deleting

YAML:
- "InService" - InService
- "Creating" - Creating
- "Updating" - Updating
- "Failed" - Failed
- "Deleting" - Deleting
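Because the status is exposed as an output of the resource, a program can react to the resolved enum value; a small TypeScript sketch, reusing the hypothetical component from the earlier example:

import * as pulumi from "@pulumi/pulumi";

// Log a warning at deploy time if the component reports a Failed status.
component.inferenceComponentStatus.apply(status => {
    if (status === "Failed") {
        pulumi.log.warn("InferenceComponent reported status Failed");
    }
});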
Tag, TagArgs
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0