We recommend new projects start with resources from the AWS provider.
published on Monday, Apr 20, 2026 by Pulumi
The details of a daemon task definition. A daemon task definition is a template that describes the containers that form a daemon. Daemons deploy cross-cutting software agents independently across your Amazon ECS infrastructure.
Create DaemonTaskDefinition Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DaemonTaskDefinition(name: string, args?: DaemonTaskDefinitionArgs, opts?: CustomResourceOptions);

@overload
def DaemonTaskDefinition(resource_name: str,
                         args: Optional[DaemonTaskDefinitionArgs] = None,
                         opts: Optional[ResourceOptions] = None)
@overload
def DaemonTaskDefinition(resource_name: str,
                         opts: Optional[ResourceOptions] = None,
                         container_definitions: Optional[Sequence[DaemonTaskDefinitionDaemonContainerDefinitionArgs]] = None,
                         cpu: Optional[str] = None,
                         execution_role_arn: Optional[str] = None,
                         family: Optional[str] = None,
                         memory: Optional[str] = None,
                         tags: Optional[Sequence[_root_inputs.TagArgs]] = None,
                         task_role_arn: Optional[str] = None,
                         volumes: Optional[Sequence[DaemonTaskDefinitionVolumeArgs]] = None)

func NewDaemonTaskDefinition(ctx *Context, name string, args *DaemonTaskDefinitionArgs, opts ...ResourceOption) (*DaemonTaskDefinition, error)

public DaemonTaskDefinition(string name, DaemonTaskDefinitionArgs? args = null, CustomResourceOptions? opts = null)
public DaemonTaskDefinition(String name, DaemonTaskDefinitionArgs args)
public DaemonTaskDefinition(String name, DaemonTaskDefinitionArgs args, CustomResourceOptions options)
type: aws-native:ecs:DaemonTaskDefinition
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
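As a minimal sketch of the YAML form above, a Pulumi YAML program declaring this resource might look like the following. The resource name, family, image, and sizing values are illustrative assumptions, not values taken from this reference:

```yaml
# Hypothetical example: names and values are illustrative assumptions.
resources:
  logAgentDaemon:
    type: aws-native:ecs:DaemonTaskDefinition
    properties:
      family: log-agent            # assumed family name
      cpu: "256"                   # CPU units, passed as a string
      memory: "512"                # memory in MiB, passed as a string
      containerDefinitions:
        - name: log-agent          # assumed container name
          image: public.ecr.aws/example/log-agent:latest   # assumed image
          essential: true
```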
Parameters
- name string
- The unique name of the resource.
- args DaemonTaskDefinitionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DaemonTaskDefinitionArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DaemonTaskDefinitionArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DaemonTaskDefinitionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DaemonTaskDefinitionArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
DaemonTaskDefinition Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The DaemonTaskDefinition resource accepts the following input properties:
C#
- ContainerDefinitions List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionDaemonContainerDefinition>
- A list of container definitions in JSON format that describe the containers that make up the daemon task.
- Cpu string
- The number of CPU units used by the daemon task.
- ExecutionRoleArn string
- The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf.
- Family string
- The name of a family that this daemon task definition is registered to.
- Memory string
- The amount of memory (in MiB) used by the daemon task.
- Tags List<Pulumi.AwsNative.Inputs.Tag>
- TaskRoleArn string
- The short name or full Amazon Resource Name (ARN) of the IAM role that grants containers in the daemon task permission to call Amazon Web Services APIs on your behalf.
- Volumes List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionVolume>
- The list of data volume definitions for the daemon task.

Go (same properties; descriptions as above)
- ContainerDefinitions []DaemonTaskDefinitionDaemonContainerDefinitionArgs
- Cpu string
- ExecutionRoleArn string
- Family string
- Memory string
- Tags []TagArgs
- TaskRoleArn string
- Volumes []DaemonTaskDefinitionVolumeArgs

Java (same properties; descriptions as above)
- containerDefinitions List<DaemonTaskDefinitionDaemonContainerDefinition>
- cpu String
- executionRoleArn String
- family String
- memory String
- tags List<Tag>
- taskRoleArn String
- volumes List<DaemonTaskDefinitionVolume>

TypeScript (same properties; descriptions as above)
- containerDefinitions DaemonTaskDefinitionDaemonContainerDefinition[]
- cpu string
- executionRoleArn string
- family string
- memory string
- tags Tag[]
- taskRoleArn string
- volumes DaemonTaskDefinitionVolume[]

Python (same properties; descriptions as above)
- container_definitions Sequence[DaemonTaskDefinitionDaemonContainerDefinitionArgs]
- cpu str
- execution_role_arn str
- family str
- memory str
- tags Sequence[TagArgs]
- task_role_arn str
- volumes Sequence[DaemonTaskDefinitionVolumeArgs]

YAML (same properties; descriptions as above)
- containerDefinitions List<Property Map>
- cpu String
- executionRoleArn String
- family String
- memory String
- tags List<Property Map>
- taskRoleArn String
- volumes List<Property Map>
Outputs
All input properties are implicitly available as output properties. Additionally, the DaemonTaskDefinition resource produces the following output properties:
C#
- DaemonTaskDefinitionArn string
- Id string
- The provider-assigned unique ID for this managed resource.

Go (same properties; descriptions as above)
- DaemonTaskDefinitionArn string
- Id string

Java (same properties; descriptions as above)
- daemonTaskDefinitionArn String
- id String

TypeScript (same properties; descriptions as above)
- daemonTaskDefinitionArn string
- id string

Python (same properties; descriptions as above)
- daemon_task_definition_arn str
- id str

YAML (same properties; descriptions as above)
- daemonTaskDefinitionArn String
- id String
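These outputs can be referenced like any other resource property. A minimal Pulumi YAML sketch, assuming a resource named logAgentDaemon of this type is declared elsewhere in the same program (the name is an assumption for illustration):

```yaml
# Hypothetical: "logAgentDaemon" is an assumed resource name.
outputs:
  daemonTaskDefinitionArn: ${logAgentDaemon.daemonTaskDefinitionArn}
```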
Supporting Types
DaemonTaskDefinitionContainerDependency, DaemonTaskDefinitionContainerDependencyArgs
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, it is reversed for container shutdown.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks that use the Fargate launch type, the task or service requires the following platforms:
- Linux platform version 1.3.0 or later.
- Windows platform version 1.0.0 or later.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
C#
- Condition string
- The dependency condition of the container. The following are the available conditions and their behavior:
  - START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
  - COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
  - SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
  - HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
- ContainerName string
- The name of a container.

Go (same properties; descriptions as above)
- Condition string
- ContainerName string

Java (same properties; descriptions as above)
- condition String
- containerName String

TypeScript (same properties; descriptions as above)
- condition string
- containerName string

Python (same properties; descriptions as above)
- condition str
- container_name str

YAML (same properties; descriptions as above)
- condition String
- containerName String
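The startup conditions above can be sketched in a daemon container definition. A hedged Pulumi YAML fragment, where the container names, images, and health check command are illustrative assumptions: the app container waits until the log router passes its Docker health check before starting.

```yaml
# Hypothetical fragment: names, images, and the health check are assumptions.
containerDefinitions:
  - name: log-router                   # assumed sidecar name
    image: public.ecr.aws/example/router:latest
    healthCheck:
      command: ["CMD-SHELL", "curl -f http://localhost:2020/ || exit 1"]
  - name: app                          # assumed application container
    image: public.ecr.aws/example/app:latest
    dependsOn:
      - containerName: log-router
        condition: HEALTHY             # start only after the router is healthy
```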
DaemonTaskDefinitionDaemonContainerDefinition, DaemonTaskDefinitionDaemonContainerDefinitionArgs
A container definition for a daemon task. Daemon container definitions describe the containers that run as part of a daemon task on container instances managed by capacity providers.

C#
- Image string
- The image used to start the container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest.
- Name string
- The name of the container. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
- Command List<string>
- The command that's passed to the container.
- Cpu int
- The number of cpu units reserved for the container.
- DependsOn List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionContainerDependency>
- The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition.
- EntryPoint List<string>
- The entry point that's passed to the container.
- Environment List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionKeyValuePair>
- The environment variables to pass to a container.
- EnvironmentFiles List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionEnvironmentFile>
- A list of files containing the environment variables to pass to a container.
- Essential bool
- If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped.
- FirelensConfiguration Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionFirelensConfiguration
- The FireLens configuration for the container. This is used to specify and configure a log router for container logs.
- HealthCheck Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionHealthCheck
- The container health check command and associated configuration parameters for the container.
- Interactive bool
- When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated.
- LinuxParameters Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionLinuxParameters
- Linux-specific modifications that are applied to the container configuration, such as Linux kernel capabilities.
- LogConfiguration Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionLogConfiguration
- The log configuration specification for the container.
- Memory int
- The amount (in MiB) of memory to present to the container. If the container attempts to exceed the memory specified here, the container is killed.
- MemoryReservation int
- The soft limit (in MiB) of memory to reserve for the container.
- MountPoints List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionMountPoint>
- The mount points for data volumes in your container.
- Privileged bool
- When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user).
- PseudoTerminal bool
- When this parameter is true, a TTY is allocated.
- ReadonlyRootFilesystem bool
- When this parameter is true, the container is given read-only access to its root file system.
- RepositoryCredentials Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionRepositoryCredentials
- The private repository authentication credentials to use.
- RestartPolicy Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionRestartPolicy
- The restart policy for the container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task.
- Secrets List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionSecret>
- The secrets to pass to the container.
- StartTimeout int
- Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
- StopTimeout int
- Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
- SystemControls List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionSystemControl>
- A list of namespaced kernel parameters to set in the container.
- Ulimits List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionUlimit>
- A list of ulimits to set in the container.
- User string
- The user to use inside the container.
- WorkingDirectory string
- The working directory to run commands inside the container in.
Go (same properties; descriptions as above)
- Image string
- Name string
- Command []string
- Cpu int
- DependsOn []DaemonTaskDefinitionContainerDependency
- EntryPoint []string
- Environment []DaemonTaskDefinitionKeyValuePair
- EnvironmentFiles []DaemonTaskDefinitionEnvironmentFile
- Essential bool
- FirelensConfiguration DaemonTaskDefinitionFirelensConfiguration
- HealthCheck DaemonTaskDefinitionHealthCheck
- Interactive bool
- LinuxParameters DaemonTaskDefinitionLinuxParameters
- LogConfiguration DaemonTaskDefinitionLogConfiguration
- Memory int
- MemoryReservation int
- MountPoints []DaemonTaskDefinitionMountPoint
- Privileged bool
- PseudoTerminal bool
- ReadonlyRootFilesystem bool
- RepositoryCredentials DaemonTaskDefinitionRepositoryCredentials
- RestartPolicy DaemonTaskDefinitionRestartPolicy
- Secrets []DaemonTaskDefinitionSecret
- StartTimeout int
- StopTimeout int
- SystemControls []DaemonTaskDefinitionSystemControl
- Ulimits []DaemonTaskDefinitionUlimit
- User string
- WorkingDirectory string
Java (same properties; descriptions as above)
- image String
- name String
- command List<String>
- cpu Integer
- dependsOn List<DaemonTaskDefinitionContainerDependency>
- entryPoint List<String>
- environment List<DaemonTaskDefinitionKeyValuePair>
- environmentFiles List<DaemonTaskDefinitionEnvironmentFile>
- essential Boolean
- firelensConfiguration DaemonTaskDefinitionFirelensConfiguration
- healthCheck DaemonTaskDefinitionHealthCheck
- interactive Boolean
- linuxParameters DaemonTaskDefinitionLinuxParameters
- logConfiguration DaemonTaskDefinitionLogConfiguration
- memory Integer
- memoryReservation Integer
- mountPoints List<DaemonTaskDefinitionMountPoint>
- privileged Boolean
- pseudoTerminal Boolean
- readonlyRootFilesystem Boolean
- repositoryCredentials DaemonTaskDefinitionRepositoryCredentials
- restartPolicy DaemonTaskDefinitionRestartPolicy
- secrets List<DaemonTaskDefinitionSecret>
- startTimeout Integer
- stopTimeout Integer
- systemControls List<DaemonTaskDefinitionSystemControl>
- ulimits List<DaemonTaskDefinitionUlimit>
- user String
- workingDirectory String
TypeScript (same properties; descriptions as above)
- image string
- name string
- command string[]
- cpu number
- dependsOn DaemonTaskDefinitionContainerDependency[]
- entryPoint string[]
- environment DaemonTaskDefinitionKeyValuePair[]
- environmentFiles DaemonTaskDefinitionEnvironmentFile[]
- essential boolean
- firelensConfiguration DaemonTaskDefinitionFirelensConfiguration
- healthCheck DaemonTaskDefinitionHealthCheck
- interactive boolean
- linuxParameters DaemonTaskDefinitionLinuxParameters
- logConfiguration DaemonTaskDefinitionLogConfiguration
- memory number
- memoryReservation number
- mountPoints DaemonTaskDefinitionMountPoint[]
- privileged boolean
- pseudoTerminal boolean
- readonlyRootFilesystem boolean
- repositoryCredentials DaemonTaskDefinitionRepositoryCredentials
- restartPolicy DaemonTaskDefinitionRestartPolicy
- secrets DaemonTaskDefinitionSecret[]
- startTimeout number
- stopTimeout number
- systemControls DaemonTaskDefinitionSystemControl[]
- ulimits DaemonTaskDefinitionUlimit[]
- user string
- workingDirectory string
- image str - The image used to start the container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either `repository-url/image:tag` or `repository-url/image@digest`.
- name str - The name of the container. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
- command Sequence[str] - The command that's passed to the container.
- cpu int - The number of `cpu` units reserved for the container.
- depends_on Sequence[DaemonTaskDefinitionContainerDependency] - The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition.
- entry_point Sequence[str] - The entry point that's passed to the container.
- environment Sequence[DaemonTaskDefinitionKeyValuePair] - The environment variables to pass to a container.
- environment_files Sequence[DaemonTaskDefinitionEnvironmentFile] - A list of files containing the environment variables to pass to a container.
- essential bool - If the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason, all other containers that are part of the task are stopped.
- firelens_configuration DaemonTaskDefinitionFirelensConfiguration - The FireLens configuration for the container. This is used to specify and configure a log router for container logs.
- health_check DaemonTaskDefinitionHealthCheck - The container health check command and associated configuration parameters for the container.
- interactive bool - When this parameter is `true`, you can deploy containerized applications that require `stdin` or a `tty` to be allocated.
- linux_parameters DaemonTaskDefinitionLinuxParameters - Linux-specific modifications that are applied to the container configuration, such as Linux kernel capabilities.
- log_configuration DaemonTaskDefinitionLogConfiguration - The log configuration specification for the container.
- memory int - The amount (in MiB) of memory to present to the container. If the container attempts to exceed the memory specified here, the container is killed.
- memory_reservation int - The soft limit (in MiB) of memory to reserve for the container.
- mount_points Sequence[DaemonTaskDefinitionMountPoint] - The mount points for data volumes in your container.
- privileged bool - When this parameter is `true`, the container is given elevated privileges on the host container instance (similar to the `root` user).
- pseudo_terminal bool - When this parameter is `true`, a TTY is allocated.
- readonly_root_filesystem bool - When this parameter is `true`, the container is given read-only access to its root file system.
- repository_credentials DaemonTaskDefinitionRepositoryCredentials - The private repository authentication credentials to use.
- restart_policy DaemonTaskDefinitionRestartPolicy - The restart policy for the container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task.
- secrets Sequence[DaemonTaskDefinitionSecret] - The secrets to pass to the container.
- start_timeout int - Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
- stop_timeout int - Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
- system_controls Sequence[DaemonTaskDefinitionSystemControl] - A list of namespaced kernel parameters to set in the container.
- ulimits Sequence[DaemonTaskDefinitionUlimit] - A list of `ulimits` to set in the container.
- user str - The user to use inside the container.
- working_directory str - The working directory to run commands inside the container in.
- image String - The image used to start the container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either `repository-url/image:tag` or `repository-url/image@digest`.
- name String - The name of the container. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
- command List<String> - The command that's passed to the container.
- cpu Number - The number of `cpu` units reserved for the container.
- dependsOn List<Property Map> - The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition.
- entryPoint List<String> - The entry point that's passed to the container.
- environment List<Property Map> - The environment variables to pass to a container.
- environmentFiles List<Property Map> - A list of files containing the environment variables to pass to a container.
- essential Boolean - If the `essential` parameter of a container is marked as `true`, and that container fails or stops for any reason, all other containers that are part of the task are stopped.
- firelensConfiguration Property Map - The FireLens configuration for the container. This is used to specify and configure a log router for container logs.
- healthCheck Property Map - The container health check command and associated configuration parameters for the container.
- interactive Boolean - When this parameter is `true`, you can deploy containerized applications that require `stdin` or a `tty` to be allocated.
- linuxParameters Property Map - Linux-specific modifications that are applied to the container configuration, such as Linux kernel capabilities.
- logConfiguration Property Map - The log configuration specification for the container.
- memory Number - The amount (in MiB) of memory to present to the container. If the container attempts to exceed the memory specified here, the container is killed.
- memoryReservation Number - The soft limit (in MiB) of memory to reserve for the container.
- mountPoints List<Property Map> - The mount points for data volumes in your container.
- privileged Boolean - When this parameter is `true`, the container is given elevated privileges on the host container instance (similar to the `root` user).
- pseudoTerminal Boolean - When this parameter is `true`, a TTY is allocated.
- readonlyRootFilesystem Boolean - When this parameter is `true`, the container is given read-only access to its root file system.
- repositoryCredentials Property Map - The private repository authentication credentials to use.
- restartPolicy Property Map - The restart policy for the container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task.
- secrets List<Property Map> - The secrets to pass to the container.
- startTimeout Number - Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
- stopTimeout Number - Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
- systemControls List<Property Map> - A list of namespaced kernel parameters to set in the container.
- ulimits List<Property Map> - A list of `ulimits` to set in the container.
- user String - The user to use inside the container.
- workingDirectory String - The working directory to run commands inside the container in.
DaemonTaskDefinitionDevice, DaemonTaskDefinitionDeviceArgs
An object representing a container instance host device.
- ContainerPath string - The path inside the container at which to expose the host device.
- HostPath string - The path for the device on the host container instance.
- Permissions List<string> - The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
- ContainerPath string - The path inside the container at which to expose the host device.
- HostPath string - The path for the device on the host container instance.
- Permissions []string - The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
- containerPath String - The path inside the container at which to expose the host device.
- hostPath String - The path for the device on the host container instance.
- permissions List<String> - The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
- containerPath string - The path inside the container at which to expose the host device.
- hostPath string - The path for the device on the host container instance.
- permissions string[] - The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
- container_path str - The path inside the container at which to expose the host device.
- host_path str - The path for the device on the host container instance.
- permissions Sequence[str] - The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
- containerPath String - The path inside the container at which to expose the host device.
- hostPath String - The path for the device on the host container instance.
- permissions List<String> - The explicit permissions to provide to the container for the device. By default, the container has permissions for `read`, `write`, and `mknod` for the device.
DaemonTaskDefinitionEnvironmentFile, DaemonTaskDefinitionEnvironmentFileArgs
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they're processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
You must use the following platforms for the Fargate launch type:
- Linux platform version `1.4.0` or later.
- Windows platform version `1.0.0` or later.
Consider the following when using the Fargate launch type:
- The file is handled like a native Docker env-file.
- There is no support for shell escape handling.
- The container entry point interprets the `VARIABLE` values.
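The `VARIABLE=VALUE` format described above is the standard Docker env-file convention. The following sketch parses content in that format; it is an illustration of the documented rules, not ECS's actual parser, and the duplicate-key behavior (last line wins) is an assumption of this sketch.

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse .env-style content per the rules above: one VARIABLE=VALUE
    per line; lines beginning with '#' are comments and are ignored."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or comment
        if "=" not in line:
            continue  # not in VARIABLE=VALUE format; skipped in this sketch
        key, _, value = line.partition("=")
        # In this sketch a later line with the same key overwrites an
        # earlier one; see the Developer Guide for ECS's exact precedence.
        env[key] = value
    return env
```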
DaemonTaskDefinitionFirelensConfiguration, DaemonTaskDefinitionFirelensConfigurationArgs
The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
- Options Dictionary<string, string> - The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}`. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on Fargate only support the `file` configuration file type.
- Type string - The log router to use. The valid values are `fluentd` or `fluentbit`.
- Options map[string]string - The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}`. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on Fargate only support the `file` configuration file type.
- Type string - The log router to use. The valid values are `fluentd` or `fluentbit`.
- options Map<String,String> - The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}`. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on Fargate only support the `file` configuration file type.
- type String - The log router to use. The valid values are `fluentd` or `fluentbit`.
- options {[key: string]: string} - The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}`. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on Fargate only support the `file` configuration file type.
- type string - The log router to use. The valid values are `fluentd` or `fluentbit`.
- options Mapping[str, str] - The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}`. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on Fargate only support the `file` configuration file type.
- type str - The log router to use. The valid values are `fluentd` or `fluentbit`.
- options Map<String> - The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is `"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}`. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on Fargate only support the `file` configuration file type.
- type String - The log router to use. The valid values are `fluentd` or `fluentbit`.
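The `options` syntax documented above can be sketched as a plain mapping plus a small pre-flight check. The bucket ARN is a placeholder, and `validate_firelens` is a hypothetical helper encoding only the two documented constraints (valid `type` values, valid `config-file-type` values), not an official validator.

```python
# Documented shape of a FireLens configuration, as a plain dict
# (bucket ARN and config file are placeholder values).
firelens_configuration = {
    "type": "fluentbit",  # valid values: "fluentd" or "fluentbit"
    "options": {
        "enable-ecs-log-metadata": "true",
        "config-file-type": "s3",  # "s3" or "file"; Fargate supports only "file"
        "config-file-value": "arn:aws:s3:::mybucket/fluent.conf",
    },
}

def validate_firelens(config: dict) -> list[str]:
    """Hypothetical pre-flight check of the fields documented above."""
    errors = []
    if config.get("type") not in ("fluentd", "fluentbit"):
        errors.append("type must be 'fluentd' or 'fluentbit'")
    options = config.get("options", {})
    if options.get("config-file-type") not in (None, "s3", "file"):
        errors.append("config-file-type must be 's3' or 'file'")
    return errors
```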
DaemonTaskDefinitionHealthCheck, DaemonTaskDefinitionHealthCheckArgs
An object representing a container health check. Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image (such as those specified in a parent image or from the image's Dockerfile). This configuration maps to the HEALTHCHECK parameter of docker run.
The Amazon ECS container agent only monitors and reports on the health checks specified in the task definition. Amazon ECS does not monitor Docker health checks that are embedded in a container image and not specified in the container definition. Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image.
You can view the health status of both individual containers and a task with the DescribeTasks API operation or when viewing the task details in the console.
The health check is designed to make sure that your containers survive agent restarts, upgrades, or temporary unavailability.
Amazon ECS performs health checks on containers with the default that launched the container instance or the task.
The following describes the possible healthStatus values for a container:
- `HEALTHY` - The container health check has passed successfully.
- `UNHEALTHY` - The container health check has failed.
- `UNKNOWN` - The container health check is being evaluated, there's no container health check defined, or Amazon ECS doesn't have the health status of the container.
The following describes the possible healthStatus values based on the container health checker status of essential containers in the task with the following priority order (high to low):
- `UNHEALTHY` - One or more essential containers have failed their health check.
- `UNKNOWN` - Any essential container running within the task is in an `UNKNOWN` state and no other essential containers have an `UNHEALTHY` state.
- `HEALTHY` - All essential containers within the task have passed their health checks.
Consider the following task health example with 2 containers.
- If Container1 is `UNHEALTHY` and Container2 is `UNKNOWN`, the task health is `UNHEALTHY`.
- If Container1 is `UNHEALTHY` and Container2 is `HEALTHY`, the task health is `UNHEALTHY`.
- If Container1 is `HEALTHY` and Container2 is `UNKNOWN`, the task health is `UNKNOWN`.
- If Container1 is `HEALTHY` and Container2 is `HEALTHY`, the task health is `HEALTHY`.
Consider the following task health example with 3 containers.
- If Container1 is `UNHEALTHY`, Container2 is `UNKNOWN`, and Container3 is `UNKNOWN`, the task health is `UNHEALTHY`.
- If Container1 is `UNHEALTHY`, Container2 is `UNKNOWN`, and Container3 is `HEALTHY`, the task health is `UNHEALTHY`.
- If Container1 is `UNHEALTHY`, Container2 is `HEALTHY`, and Container3 is `HEALTHY`, the task health is `UNHEALTHY`.
- If Container1 is `HEALTHY`, Container2 is `UNKNOWN`, and Container3 is `HEALTHY`, the task health is `UNKNOWN`.
- If Container1 is `HEALTHY`, Container2 is `UNKNOWN`, and Container3 is `UNKNOWN`, the task health is `UNKNOWN`.
- If Container1 is `HEALTHY`, Container2 is `HEALTHY`, and Container3 is `HEALTHY`, the task health is `HEALTHY`.
If a task is run manually, and not as part of a service, the task will continue its lifecycle regardless of its health status. For tasks that are part of a service, if the task reports as unhealthy then the task will be stopped and the service scheduler will replace it. When a container health check fails for a task that is part of a service, the following process occurs:
- The task is marked as `UNHEALTHY`.
- The unhealthy task will be stopped, and during the stopping process, it will go through the following states:
  - `DEACTIVATING` - In this state, Amazon ECS performs additional steps before stopping the task. For example, for tasks that are part of services configured to use Elastic Load Balancing target groups, target groups will be deregistered in this state.
  - `STOPPING` - The task is in the process of being stopped.
  - `DEPROVISIONING` - Resources associated with the task are being cleaned up.
  - `STOPPED` - The task has been completely stopped.
- After the old task stops, a new task will be launched to ensure service operation, and the new task will go through the following lifecycle:
  - `PROVISIONING` - Resources required for the task are being provisioned.
  - `PENDING` - The task is waiting to be placed on a container instance.
  - `ACTIVATING` - In this state, Amazon ECS pulls container images, creates containers, configures task networking, registers load balancer target groups, and configures service discovery status.
  - `RUNNING` - The task is running and performing its work.
For more detailed information about task lifecycle states, see Task lifecycle in the Amazon Elastic Container Service Developer Guide. The following are notes about container health check support:
- If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won't cause a container to transition to an `UNHEALTHY` status. This is by design, to ensure that containers remain running during agent restarts or temporary unavailability. The health check status is the "last heard from" response from the Amazon ECS agent, so if the container was considered `HEALTHY` prior to the disconnect, that status will remain until the agent reconnects and another health check occurs. There are no assumptions made about the status of the container health checks.
- Container health checks require version `1.17.0` or greater of the Amazon ECS container agent. For more information, see Updating the Amazon ECS container agent.
- Container health checks are supported for Fargate tasks if you're using platform version `1.1.0` or greater. For more information, see platform versions.
- Container health checks aren't supported for tasks that are part of a service that's configured to use a Classic Load Balancer.
For an example of how to specify a task definition with multiple containers where container dependency is specified, see Container dependency in the Amazon Elastic Container Service Developer Guide.
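The priority order described above (`UNHEALTHY` > `UNKNOWN` > `HEALTHY`) can be expressed as a short aggregation function. This is an illustrative sketch of the documented rules, not the Amazon ECS implementation:

```python
def task_health(essential_container_statuses: list[str]) -> str:
    """Aggregate essential-container health statuses into a task health
    status, following the documented priority order (high to low):
    UNHEALTHY, then UNKNOWN, then HEALTHY."""
    if "UNHEALTHY" in essential_container_statuses:
        return "UNHEALTHY"
    if "UNKNOWN" in essential_container_statuses:
        return "UNKNOWN"
    return "HEALTHY"
```

Applying it to the two-container examples above reproduces the documented outcomes, e.g. `HEALTHY` plus `UNKNOWN` yields `UNKNOWN`.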
- Command List<string>
- A string array representing the command that the container runs to determine if it is healthy. The string array must start with
CMDto run the command arguments directly, orCMD-SHELLto run the command with the container's default shell. When you use the AWS Management Console JSON panel, the CLIlong, or the APIs, enclose the list of commands in double quotes and brackets.[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]You don't include the double quotes and brackets when you use the AWS Management Console.CMD-SHELL, curl -f http://localhost/ || exit 1An exit code of 0 indicates success, and non-zero exit code indicates failure. For more information, seeHealthCheckin the docker container create command. - Interval int
- The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a
command. - Retries int
- The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a
command. - Start
Period int - The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the
startPeriodis off. This value applies only when you specify acommand. If a health check succeeds within thestartPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries. - Timeout int
- The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a
command.
- Command []string
- A string array representing the command that the container runs to determine if it is healthy. The string array must start with
CMDto run the command arguments directly, orCMD-SHELLto run the command with the container's default shell. When you use the AWS Management Console JSON panel, the CLIlong, or the APIs, enclose the list of commands in double quotes and brackets.[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]You don't include the double quotes and brackets when you use the AWS Management Console.CMD-SHELL, curl -f http://localhost/ || exit 1An exit code of 0 indicates success, and non-zero exit code indicates failure. For more information, seeHealthCheckin the docker container create command. - Interval int
- The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a
command. - Retries int
- The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a
command. - Start
Period int - The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the
startPeriodis off. This value applies only when you specify acommand. If a health check succeeds within thestartPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries. - Timeout int
- The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a
command.
- command List<String>
- A string array representing the command that the container runs to determine if it is healthy. The string array must start with
CMDto run the command arguments directly, orCMD-SHELLto run the command with the container's default shell. When you use the AWS Management Console JSON panel, the CLIlong, or the APIs, enclose the list of commands in double quotes and brackets.[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]You don't include the double quotes and brackets when you use the AWS Management Console.CMD-SHELL, curl -f http://localhost/ || exit 1An exit code of 0 indicates success, and non-zero exit code indicates failure. For more information, seeHealthCheckin the docker container create command. - interval Integer
- The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a
command. - retries Integer
- The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a
command. - start
Period Integer - The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the
startPeriodis off. This value applies only when you specify acommand. If a health check succeeds within thestartPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries. - timeout Integer
- The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a
command.
- command string[]
- A string array representing the command that the container runs to determine if it is healthy. The string array must start with
CMDto run the command arguments directly, orCMD-SHELLto run the command with the container's default shell. When you use the AWS Management Console JSON panel, the CLIlong, or the APIs, enclose the list of commands in double quotes and brackets.[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]You don't include the double quotes and brackets when you use the AWS Management Console.CMD-SHELL, curl -f http://localhost/ || exit 1An exit code of 0 indicates success, and non-zero exit code indicates failure. For more information, seeHealthCheckin the docker container create command. - interval number
- The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a
command. - retries number
- The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a
command. - start
Period number - The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the
startPeriodis off. This value applies only when you specify acommand. If a health check succeeds within thestartPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries. - timeout number
- The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a
command.
- command Sequence[str]
- A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell. When you use the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in double quotes and brackets: [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]. You don't include the double quotes and brackets when you use the AWS Management Console: CMD-SHELL, curl -f http://localhost/ || exit 1. An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
- interval int
- The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
- retries int
- The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
- start_period int
- The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command. If a health check succeeds within the startPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
- timeout int
- The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
- command List<String>
- A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container's default shell. When you use the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in double quotes and brackets: [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]. You don't include the double quotes and brackets when you use the AWS Management Console: CMD-SHELL, curl -f http://localhost/ || exit 1. An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
- interval Number
- The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
- retries Number
- The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
- startPeriod Number
- The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command. If a health check succeeds within the startPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
- timeout Number
- The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
DaemonTaskDefinitionHostVolumeProperties, DaemonTaskDefinitionHostVolumePropertiesArgs
Details on a container instance bind mount host volume.
- SourcePath string
- When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the sourcePath parameter is not supported.
- SourcePath string
- When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the sourcePath parameter is not supported.
- sourcePath String
- When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the sourcePath parameter is not supported.
- sourcePath string
- When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the sourcePath parameter is not supported.
- source_path str
- When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the sourcePath parameter is not supported.
- sourcePath String
- When the host parameter is used, specify a sourcePath to declare the path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you're using the Fargate launch type, the sourcePath parameter is not supported.
DaemonTaskDefinitionKernelCapabilities, DaemonTaskDefinitionKernelCapabilitiesArgs
The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more detailed information about these Linux capabilities, see the capabilities(7) Linux manual page.
The following describes how Docker processes the Linux capabilities specified in the add and drop request parameters. For information about the latest behavior, see Docker Compose: order of cap_drop and cap_add in the Docker Community Forum.
- When the container is a privileged container, the container capabilities are all of the default Docker capabilities. The capabilities specified in the add and drop request parameters are ignored.
- When the add request parameter is set to ALL, the container capabilities are all of the default Docker capabilities, excluding those specified in the drop request parameter.
- When the drop request parameter is set to ALL, the container capabilities are the capabilities specified in the add request parameter.
- When the add request parameter and the drop request parameter are both empty, the container capabilities are all of the default Docker capabilities.
- The default is to first drop the capabilities specified in the drop request parameter, and then add the capabilities specified in the add request parameter.
- Add List<string>
- The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- Drop List<string>
- The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- Add []string
- The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- Drop []string
- The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- add List<String>
- The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- drop List<String>
- The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- add string[]
- The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- drop string[]
- The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- add Sequence[str]
- The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- drop Sequence[str]
- The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- add List<String>
- The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- drop List<String>
- The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
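The add/drop resolution rules listed above can be sketched as a small Python function. DEFAULT_CAPS stands in for Docker's default capability set (abbreviated here for illustration; the real set is longer).

```python
# Abbreviated stand-in for Docker's default capability set.
DEFAULT_CAPS = {"CHOWN", "KILL", "MKNOD", "NET_RAW", "SETGID", "SETUID"}

def effective_capabilities(add, drop, privileged=False):
    """Return the container's capability set per the documented order."""
    if privileged:
        # Privileged container: all defaults; add and drop are ignored.
        return set(DEFAULT_CAPS)
    if "ALL" in add:
        # add=ALL: all defaults, excluding those explicitly dropped.
        return DEFAULT_CAPS - set(drop)
    if "ALL" in drop:
        # drop=ALL: only the explicitly added capabilities remain.
        return set(add)
    # Default (covers both-empty too): drop first, then add.
    return (DEFAULT_CAPS - set(drop)) | set(add)
```

For example, effective_capabilities(["SYS_PTRACE"], ["NET_RAW"]) removes NET_RAW from the defaults and then adds SYS_PTRACE.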
DaemonTaskDefinitionKeyValuePair, DaemonTaskDefinitionKeyValuePairArgs
A key-value pair object.
DaemonTaskDefinitionLinuxParameters, DaemonTaskDefinitionLinuxParametersArgs
The Linux-specific options that are applied to the container, such as Linux KernelCapabilities.
- Capabilities Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionKernelCapabilities
- The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For tasks that use the Fargate launch type, capabilities is supported for all platform versions but the add parameter is only supported if using platform version 1.4.0 or later.
- Devices List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionDevice>
- Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- InitProcessEnabled bool
- Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- Tmpfs List<Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionTmpfs>
- The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
- Capabilities DaemonTaskDefinitionKernelCapabilities
- The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For tasks that use the Fargate launch type, capabilities is supported for all platform versions but the add parameter is only supported if using platform version 1.4.0 or later.
- Devices []DaemonTaskDefinitionDevice
- Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- InitProcessEnabled bool
- Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- Tmpfs []DaemonTaskDefinitionTmpfs
- The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
- capabilities DaemonTaskDefinitionKernelCapabilities
- The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For tasks that use the Fargate launch type, capabilities is supported for all platform versions but the add parameter is only supported if using platform version 1.4.0 or later.
- devices List<DaemonTaskDefinitionDevice>
- Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- initProcessEnabled Boolean
- Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- tmpfs List<DaemonTaskDefinitionTmpfs>
- The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
- capabilities DaemonTaskDefinitionKernelCapabilities
- The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For tasks that use the Fargate launch type, capabilities is supported for all platform versions but the add parameter is only supported if using platform version 1.4.0 or later.
- devices DaemonTaskDefinitionDevice[]
- Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- initProcessEnabled boolean
- Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- tmpfs DaemonTaskDefinitionTmpfs[]
- The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
- capabilities DaemonTaskDefinitionKernelCapabilities
- The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For tasks that use the Fargate launch type, capabilities is supported for all platform versions but the add parameter is only supported if using platform version 1.4.0 or later.
- devices Sequence[DaemonTaskDefinitionDevice]
- Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- init_process_enabled bool
- Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- tmpfs Sequence[DaemonTaskDefinitionTmpfs]
- The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
- capabilities Property Map
- The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For tasks that use the Fargate launch type, capabilities is supported for all platform versions but the add parameter is only supported if using platform version 1.4.0 or later.
- devices List<Property Map>
- Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- initProcessEnabled Boolean
- Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- tmpfs List<Property Map>
- The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
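As a combined sketch, a linuxParameters block using the properties above might look like the following in Python. The key names assume the camelCase input shape, and the tmpfs entry's fields (containerPath, size, mountOptions) and values are illustrative. On Fargate, devices and tmpfs are not supported, and only SYS_PTRACE may be added.

```python
# Hypothetical linuxParameters block for an EC2-hosted daemon task.
linux_parameters = {
    "capabilities": {
        "add": ["SYS_PTRACE"],   # the only capability addable on Fargate
        "drop": ["NET_RAW"],
    },
    "initProcessEnabled": True,  # maps to docker run --init
    "tmpfs": [
        {
            "containerPath": "/tmp/scratch",  # hypothetical mount point
            "size": 128,                      # MiB
            "mountOptions": ["rw", "noexec"],
        }
    ],
}
```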
DaemonTaskDefinitionLogConfiguration, DaemonTaskDefinitionLogConfigurationArgs
The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run.
By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.
Understand the following when specifying a log configuration for your containers.
- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on AWS Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
- For tasks on AWS Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
- LogDriver string
- The log driver to use for the container. For tasks on AWS Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- Options Dictionary<string, string>
- The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group. Required: No. Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region. Required: Yes. Specify the Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs so that they're all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group. Required: Yes. Make sure to specify a log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix. Required: Yes when using Fargate; optional when using EC2. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
- awslogs-datetime-format. Required: No. This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example use case for this format is parsing output such as a stack dump, which might otherwise be logged in multiple entries; the correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
- awslogs-multiline-pattern. Required: No. This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode. Required: No. Valid values: non-blocking | blocking. This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, either set the mode option in your container definition's logConfiguration to blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size. Required: No. Default value: 10m. When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available for the buffer inside Docker. The other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (the OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- SecretOptions List&lt;Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionSecret&gt; - The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
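To make the option constraints above concrete, here is a minimal sketch in plain Python (not a Pulumi program; the helper name build_awslogs_options is hypothetical) that assembles an awslogs options map and enforces the documented rules: awslogs-group and awslogs-region are required, the two multiline options are mutually exclusive, and max-buffer-size only applies in non-blocking mode.

```python
def build_awslogs_options(group, region, stream_prefix=None,
                          datetime_format=None, multiline_pattern=None,
                          mode="non-blocking", max_buffer_size="10m"):
    """Assemble an awslogs options map (illustrative helper, not part of the API)."""
    # The docs state these two options cannot both be configured.
    if datetime_format and multiline_pattern:
        raise ValueError("awslogs-datetime-format and awslogs-multiline-pattern "
                         "cannot both be configured")
    options = {
        "awslogs-group": group,    # Required: Yes
        "awslogs-region": region,  # Required: Yes
        "mode": mode,              # non-blocking is the default since June 25, 2025
    }
    if mode == "non-blocking":
        # Buffer for intermediate message storage; default value is 10m.
        options["max-buffer-size"] = max_buffer_size
    if stream_prefix:
        # Required on Fargate, optional on EC2; needed for the console Log pane.
        options["awslogs-stream-prefix"] = stream_prefix
    if datetime_format:
        options["awslogs-datetime-format"] = datetime_format
    elif multiline_pattern:
        options["awslogs-multiline-pattern"] = multiline_pattern
    return options
```

The resulting dictionary is the shape you would pass as the options map of a log configuration, regardless of which SDK language you use.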
- LogDriver string - The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- Options map[string]string
- The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the
awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group (Required: No) Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region (Required: Yes) Specify the Region that the awslogs log driver sends your Docker logs to. You can send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group (Required: Yes) Specify the log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix (Required: Yes when using Fargate; optional when using EC2) Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix, the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with only the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix. For Amazon ECS services, you can use the service name as the prefix, which lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream prefix for your logs to appear in the Log pane of the Amazon ECS console.
- awslogs-datetime-format (Required: No) This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. One example use case for this format is parsing output such as a stack dump, which might otherwise be logged across multiple entries; the correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
- awslogs-multiline-pattern (Required: No) This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode (Required: No; valid values: non-blocking | blocking) This option defines the delivery mode of log messages from the container to the log driver specified with logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code that write to the stdout and stderr streams block, and the application's logging thread blocks as a result. This may cause the application to become unresponsive and lead to container health check failures. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend this mode if you want to ensure service availability and can tolerate some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS defaults to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, either set the mode option in your container definition's logConfiguration to blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size (Required: No; default value: 10m) When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application: when the buffer fills up, further logs cannot be stored, and logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available for the buffer inside Docker. The other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (the OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- SecretOptions []DaemonTaskDefinitionSecret - The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
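The per-destination awsfirelens options described above can be sketched as plain maps (illustrative Python dicts, not SDK objects; the Name values name Fluent Bit output plugins and the concrete values are hypothetical examples):

```python
# Amazon Data Firehose destination: Region plus delivery stream name,
# with an optional cap on events buffered in memory before the log router.
firehose_options = {
    "Name": "firehose",                    # output plugin name (assumed)
    "region": "us-west-2",
    "delivery_stream": "my-stream",
    "log-driver-buffer-limit": "2097152",  # limit buffered events to avoid OOM
}

# Amazon Kinesis Data Streams destination: Region plus data stream name.
kinesis_options = {
    "Name": "kinesis",                     # output plugin name (assumed)
    "region": "us-west-2",
    "stream": "my-kinesis-stream",
}

# Amazon S3 destination: bucket plus optional upload tuning options.
s3_options = {
    "Name": "s3",                          # output plugin name (assumed)
    "bucket": "my-log-bucket",
    "region": "us-west-2",
    "total_file_size": "50M",
    "upload_timeout": "10m",
    "use_put_object": "On",
}
```

Each dict is the shape of the options map for a container whose log driver is awsfirelens; only the keys shown for your chosen destination apply.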
- logDriver String - The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- options Map&lt;String,String&gt;
- The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the
awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group (Required: No) Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region (Required: Yes) Specify the Region that the awslogs log driver sends your Docker logs to. You can send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group (Required: Yes) Specify the log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix (Required: Yes when using Fargate; optional when using EC2) Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix, the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with only the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix. For Amazon ECS services, you can use the service name as the prefix, which lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream prefix for your logs to appear in the Log pane of the Amazon ECS console.
- awslogs-datetime-format (Required: No) This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. One example use case for this format is parsing output such as a stack dump, which might otherwise be logged across multiple entries; the correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
- awslogs-multiline-pattern (Required: No) This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode (Required: No; valid values: non-blocking | blocking) This option defines the delivery mode of log messages from the container to the log driver specified with logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code that write to the stdout and stderr streams block, and the application's logging thread blocks as a result. This may cause the application to become unresponsive and lead to container health check failures. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend this mode if you want to ensure service availability and can tolerate some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS defaults to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, either set the mode option in your container definition's logConfiguration to blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size (Required: No; default value: 10m) When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application: when the buffer fills up, further logs cannot be stored, and logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available for the buffer inside Docker. The other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (the OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- secretOptions List&lt;DaemonTaskDefinitionSecret&gt; - The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
- logDriver string - The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- options {[key: string]: string}
- The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the
awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group (Required: No) Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region (Required: Yes) Specify the Region that the awslogs log driver sends your Docker logs to. You can send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group (Required: Yes) Specify the log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix (Required: Yes when using Fargate; optional when using EC2) Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix, the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with only the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix. For Amazon ECS services, you can use the service name as the prefix, which lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream prefix for your logs to appear in the Log pane of the Amazon ECS console.
- awslogs-datetime-format (Required: No) This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. One example use case for this format is parsing output such as a stack dump, which might otherwise be logged across multiple entries; the correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
- awslogs-multiline-pattern (Required: No) This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode (Required: No; valid values: non-blocking | blocking) This option defines the delivery mode of log messages from the container to the log driver specified with logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code that write to the stdout and stderr streams block, and the application's logging thread blocks as a result. This may cause the application to become unresponsive and lead to container health check failures. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend this mode if you want to ensure service availability and can tolerate some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS defaults to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, either set the mode option in your container definition's logConfiguration to blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size (Required: No; default value: 10m) When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application: when the buffer fills up, further logs cannot be stored, and logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might exhaust the memory available for the buffer inside Docker. The other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (the OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- secretOptions DaemonTaskDefinitionSecret[] - The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
- log_driver str - The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- options Mapping[str, str]
- The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the
awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group (Required: No) Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region (Required: Yes) Specify the Region that the awslogs log driver sends your Docker logs to. You can send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group (Required: Yes) Specify the log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix (Required: Yes when using Fargate; optional when using EC2) Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix, the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with only the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix. For Amazon ECS services, you can use the service name as the prefix, which lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream prefix for your logs to appear in the Log pane of the Amazon ECS console.
- awslogs-datetime-format (Required: No) This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. One example use case for this format is parsing output such as a stack dump, which might otherwise be logged across multiple entries; the correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
- awslogs-multiline-pattern (Required: No) This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern, so the matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode (Required: No; valid values: non-blocking | blocking) This option defines the delivery mode of log messages from the container to the log driver specified with logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code that write to the stdout and stderr streams block, and the application's logging thread blocks as a result. This may cause the application to become unresponsive and lead to container health check failures. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend this mode if you want to ensure service availability and can tolerate some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS defaults to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, either set the mode option in your container definition's logConfiguration to blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size (Required: No; default value: 10m) When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage.
Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.

When you use the awsfirelens log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might result in Docker running out of memory for the buffer. Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (the OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
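As a concrete illustration of the awslogs options above, the following sketch shows the shape of a logConfiguration as a plain Python dict. The log group name, Region, and stream prefix are illustrative placeholders, not required values; only the option keys come from the list above.

```python
# Sketch of a logConfiguration for the awslogs driver.
# All values are illustrative placeholders.
awslogs_options = {
    "awslogs-group": "/ecs/daemon-agent",   # must exist unless awslogs-create-group is "true"
    "awslogs-create-group": "true",         # requires the logs:CreateLogGroup permission
    "awslogs-region": "us-east-1",          # Region the logs are sent to
    "awslogs-stream-prefix": "agent",       # stream becomes agent/container-name/ecs-task-id
    "mode": "non-blocking",                 # the default delivery mode since June 25, 2025
    "max-buffer-size": "25m",               # intermediate buffer used in non-blocking mode
}

log_configuration = {
    "logDriver": "awslogs",
    "options": awslogs_options,
}
```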
- secret_
options Sequence[DaemonTaskDefinitionSecret] - The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
- log
Driver String - The log driver to use for the container.
For tasks on AWS Fargate, the supported log drivers are
awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software. - options Map<String>
- The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group Required: No. Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region Required: Yes. Specify the Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group Required: Yes. Specify the log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix Required: Yes when using Fargate; optional when using EC2. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix, which lets you trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream prefix for your logs to appear in the Log pane when using the Amazon ECS console.
- awslogs-datetime-format Required: No. This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern; the matched line is the delimiter between log messages. One example use case for this format is parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.
- awslogs-multiline-pattern Required: No. This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern; the matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages, which might have a negative impact on logging performance.

The following options apply to all supported log drivers.
- mode Required: No. Valid values: non-blocking | blocking. This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams block, and the logging thread of the application blocks as a result. This may cause the application to become unresponsive and lead to container health check failures. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and can tolerate some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS defaults to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following: set the mode option in your container definition's logConfiguration to blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size Required: No. Default value: 10m. When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored; logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.

When you use the awsfirelens log router to route logs to an AWS service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. This can help resolve potential log loss issues, because high throughput might result in Docker running out of memory for the buffer. Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (the OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.

This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
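To make the awsfirelens destination options above concrete, here is a sketch of a logConfiguration routing logs to Amazon Data Firehose. The delivery stream name is a hypothetical placeholder; only the option keys come from the description above.

```python
# Sketch of an awsfirelens logConfiguration targeting Amazon Data Firehose.
# The delivery_stream name is a hypothetical placeholder.
firelens_log_configuration = {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "firehose",                    # FireLens output destination
        "region": "us-east-1",                 # Region of the delivery stream
        "delivery_stream": "daemon-logs",      # hypothetical Firehose delivery stream name
        "log-driver-buffer-limit": "2097152",  # cap on in-memory buffered events
    },
}
```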
- secret
Options List<Property Map> - The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
DaemonTaskDefinitionMountPoint, DaemonTaskDefinitionMountPointArgs
The details for a volume mount point that's used in a container definition.- Container
Path string - The path on the container to mount the host volume at.
- Read
Only bool - If this value is
true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false. - Source
Volume string - The name of the volume to mount. Must be a volume name referenced in the
name parameter of the task definition volume.
- Container
Path string - The path on the container to mount the host volume at.
- Read
Only bool - If this value is
true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false. - Source
Volume string - The name of the volume to mount. Must be a volume name referenced in the
name parameter of the task definition volume.
- container
Path String - The path on the container to mount the host volume at.
- read
Only Boolean - If this value is
true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false. - source
Volume String - The name of the volume to mount. Must be a volume name referenced in the
name parameter of the task definition volume.
- container
Path string - The path on the container to mount the host volume at.
- read
Only boolean - If this value is
true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false. - source
Volume string - The name of the volume to mount. Must be a volume name referenced in the
name parameter of the task definition volume.
- container_
path str - The path on the container to mount the host volume at.
- read_
only bool - If this value is
true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false. - source_
volume str - The name of the volume to mount. Must be a volume name referenced in the
name parameter of the task definition volume.
- container
Path String - The path on the container to mount the host volume at.
- read
Only Boolean - If this value is
true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false. - source
Volume String - The name of the volume to mount. Must be a volume name referenced in the
name parameter of the task definition volume.
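The sourceVolume/name pairing described above can be sketched as follows. The volume name and container path are illustrative placeholders.

```python
# A task-definition volume and a container mount point that references it.
# Names and paths are illustrative placeholders.
volume = {"name": "agent-state"}

mount_point = {
    "sourceVolume": volume["name"],    # must match the volume's name parameter
    "containerPath": "/var/lib/agent", # where the volume is mounted in the container
    "readOnly": False,                 # False is the default: the container may write
}
```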
DaemonTaskDefinitionRepositoryCredentials, DaemonTaskDefinitionRepositoryCredentialsArgs
The repository credentials for private registry authentication.- Credentials
Parameter string - The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
- Credentials
Parameter string - The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
- credentials
Parameter String - The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
- credentials
Parameter string - The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
- credentials_
parameter str - The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
- credentials
Parameter String - The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you're launching then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
DaemonTaskDefinitionRestartPolicy, DaemonTaskDefinitionRestartPolicyArgs
- Enabled bool
- IgnoredExitCodes List<int>
- RestartAttemptPeriod int
- Enabled bool
- IgnoredExitCodes []int
- RestartAttemptPeriod int
- enabled Boolean
- ignoredExitCodes List<Integer>
- restartAttemptPeriod Integer
- enabled boolean
- ignoredExitCodes number[]
- restartAttemptPeriod number
- enabled bool
- ignored_exit_codes Sequence[int]
- restart_attempt_period int
- enabled Boolean
- ignoredExitCodes List<Number>
- restartAttemptPeriod Number
DaemonTaskDefinitionSecret, DaemonTaskDefinitionSecretArgs
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
- To inject sensitive data into your containers as environment variables, use the
secrets container definition parameter. - To reference sensitive information in the log configuration of a container, use the
secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
- Name string
- The name of the secret.
- Value
From string - The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- Name string
- The name of the secret.
- Value
From string - The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- name String
- The name of the secret.
- value
From String - The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- name string
- The name of the secret.
- value
From string - The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- name str
- The name of the secret.
- value_
from str - The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- name String
- The name of the secret.
- value
From String - The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter Store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
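A minimal sketch of a Secret entry, assuming a Secrets Manager ARN; the account ID and secret name below are placeholders.

```python
# Expose a Secrets Manager secret to the container as DB_PASSWORD.
# The ARN is a placeholder, not a real secret.
secret = {
    "name": "DB_PASSWORD",  # env var (or log option) name seen inside the container
    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-credentials",
}
```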
DaemonTaskDefinitionSystemControl, DaemonTaskDefinitionSystemControlArgs
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer-lived connections.
We don't recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
- For tasks that use the
awsvpc network mode, including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that's started last determines which systemControls take effect. - For tasks that use the
host network mode, the network namespace systemControls aren't supported.
If you're setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
- For tasks that use the
host IPC mode, IPC namespace systemControls aren't supported. - For tasks that use the
task IPC mode, IPC namespace systemControls values apply to all containers within a task.
This parameter is not supported for Windows containers.
This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version 1.4.0 or later (Linux). This isn't supported for Windows containers on Fargate.
- Namespace string
- The namespaced kernel parameter to set a value for.
- Value string
- The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
- Namespace string
- The namespaced kernel parameter to set a value for.
- Value string
- The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
- namespace String
- The namespaced kernel parameter to set a value for.
- value String
- The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
- namespace string
- The namespaced kernel parameter to set a value for.
- value string
- The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
- namespace str
- The namespaced kernel parameter to set a value for.
- value str
- The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
- namespace String
- The namespaced kernel parameter to set a value for.
- value String
- The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
DaemonTaskDefinitionTmpfs, DaemonTaskDefinitionTmpfsArgs
The container path, mount options, and size of the tmpfs mount.- Size int
- The maximum size (in MiB) of the tmpfs volume.
- Container
Path string - The absolute file path where the tmpfs volume is to be mounted.
- Mount
Options List<string> - The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- Size int - The maximum size (in MiB) of the tmpfs volume.
- ContainerPath string - The absolute file path where the tmpfs volume is to be mounted.
- MountOptions []string - The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- size Integer - The maximum size (in MiB) of the tmpfs volume.
- containerPath String - The absolute file path where the tmpfs volume is to be mounted.
- mountOptions List<String> - The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- size number - The maximum size (in MiB) of the tmpfs volume.
- containerPath string - The absolute file path where the tmpfs volume is to be mounted.
- mountOptions string[] - The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- size int - The maximum size (in MiB) of the tmpfs volume.
- container_path str - The absolute file path where the tmpfs volume is to be mounted.
- mount_options Sequence[str] - The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- size Number - The maximum size (in MiB) of the tmpfs volume.
- containerPath String - The absolute file path where the tmpfs volume is to be mounted.
- mountOptions List<String> - The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
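The tmpfs constraints above (a positive size in MiB, an absolute container path, and options drawn from the valid-values list) can be checked before deploying. The sketch below is illustrative only: the check_tmpfs helper and the /run/scratch path are hypothetical, not part of the Pulumi SDK. It uses the Python property names from the table (container_path, mount_options, size).

```python
# Valid tmpfs mount options, copied from the documented valid-values list.
VALID_TMPFS_OPTIONS = {
    "defaults", "ro", "rw", "suid", "nosuid", "dev", "nodev", "exec",
    "noexec", "sync", "async", "dirsync", "remount", "mand", "nomand",
    "atime", "noatime", "diratime", "nodiratime", "bind", "rbind",
    "unbindable", "runbindable", "private", "rprivate", "shared",
    "rshared", "slave", "rslave", "relatime", "norelatime",
    "strictatime", "nostrictatime", "mode", "uid", "gid",
    "nr_inodes", "nr_blocks", "mpol",
}

def check_tmpfs(tmpfs: dict) -> list:
    """Return a list of problems with a tmpfs volume dict (empty if OK)."""
    problems = []
    if tmpfs.get("size", 0) <= 0:
        problems.append("size (MiB) must be a positive integer")
    if not str(tmpfs.get("container_path", "")).startswith("/"):
        problems.append("container_path must be an absolute path")
    for opt in tmpfs.get("mount_options", []):
        if opt not in VALID_TMPFS_OPTIONS:
            problems.append(f"invalid mount option: {opt}")
    return problems

# Example tmpfs dict using the documented Python property names.
tmpfs = {"container_path": "/run/scratch", "size": 256,
         "mount_options": ["rw", "noexec", "nosuid"]}
```

Running such a check locally catches typos in mount options before a task definition is registered.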
DaemonTaskDefinitionUlimit, DaemonTaskDefinitionUlimitArgs
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on AWS Fargate use the default resource limit values set by the operating system, with the exception of the nofile resource limit parameter, which AWS Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
- HardLimit int - The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- Name string - The type of the ulimit.
- SoftLimit int - The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- HardLimit int - The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- Name string - The type of the ulimit.
- SoftLimit int - The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- hardLimit Integer - The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- name String - The type of the ulimit.
- softLimit Integer - The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- hardLimit number - The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- name string - The type of the ulimit.
- softLimit number - The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- hard_limit int - The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- name str - The type of the ulimit.
- soft_limit int - The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- hardLimit Number - The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- name String - The type of the ulimit.
- softLimit Number - The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
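A ulimit entry pairs a soft and a hard limit, and the soft limit cannot exceed the hard limit. The sketch below is illustrative only: check_ulimit is a hypothetical helper, not a Pulumi or ECS API. It uses the Python property names from the table (hard_limit, name, soft_limit) and Fargate's documented nofile defaults.

```python
def check_ulimit(ulimit: dict) -> bool:
    """True if the soft limit does not exceed the hard limit."""
    return ulimit["soft_limit"] <= ulimit["hard_limit"]

# The nofile defaults that Fargate applies: soft limit 65535, hard limit 65535.
nofile = {"name": "nofile", "soft_limit": 65535, "hard_limit": 65535}
```

A pre-deployment check like this catches inverted limits, which would otherwise surface only when the task fails to start.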
DaemonTaskDefinitionVolume, DaemonTaskDefinitionVolumeArgs
The data volume configuration for tasks launched using this task definition. Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes, but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
- Host Pulumi.AwsNative.Ecs.Inputs.DaemonTaskDefinitionHostVolumeProperties - This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
- Name string - The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
- Host DaemonTaskDefinitionHostVolumeProperties - This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
- Name string - The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
- host DaemonTaskDefinitionHostVolumeProperties - This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
- name String - The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
- host DaemonTaskDefinitionHostVolumeProperties - This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
- name string - The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
- host DaemonTaskDefinitionHostVolumeProperties - This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
- name str - The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
- host Property Map - This parameter is specified when you use bind mount host volumes. The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it's stored. If the host parameter is empty, then the Docker daemon assigns a host path for your data volume. However, the data isn't guaranteed to persist after the containers that are associated with it stop running. Windows containers can mount whole directories on the same drive as $env:ProgramData. Windows containers can't mount directories on a different drive, and mount points can't be across drives. For example, you can mount C:\my\path:C:\my\path and D:\:D:\, but not D:\my\path:C:\my\path or D:\:C:\my\path.
- name String - The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
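The volume naming rule above (up to 255 letters, numbers, underscores, and hyphens) can be expressed as a regular expression. The sketch below is illustrative only: valid_volume_name is a hypothetical helper, and the agent-state volume with a source_path-style host bind mount is an assumed example, not drawn from this page.

```python
import re

# Documented naming rule: 1-255 letters, numbers, underscores, and hyphens.
VOLUME_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,255}$")

def valid_volume_name(name: str) -> bool:
    """True if the volume name satisfies the documented character rule."""
    return bool(VOLUME_NAME_RE.fullmatch(name))

# A host (bind mount) volume; a container definition would reference it
# via the sourceVolume parameter of its mountPoints entry.
volume = {"name": "agent-state", "host": {"source_path": "/var/lib/agent"}}
```

Validating names locally avoids a registration failure when a container definition's sourceVolume later points at a volume that ECS rejected.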
Tag, TagArgs
A set of tags to apply to the resource.
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0
