
AWS Native is in preview. AWS Classic is fully supported.

AWS Native v0.112.0 published on Wednesday, Jul 24, 2024 by Pulumi

aws-native.lambda.EventSourceMapping


    The AWS::Lambda::EventSourceMapping resource creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and invokes the function. For details about each event source type, including its required and optional parameters, see the corresponding topic in the AWS Lambda documentation.
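
    Example Usage

    A minimal sketch (TypeScript) that maps an existing Amazon SQS queue to an existing Lambda function. The queue ARN and function name are placeholders for your own resources, and the import assumes the @pulumi/aws-native SDK.

    import * as aws_native from "@pulumi/aws-native";

    // Map an SQS queue to a Lambda function. Lambda polls the queue and
    // invokes the function with batches of up to 10 messages.
    const queueMapping = new aws_native.lambda.EventSourceMapping("queue-mapping", {
        functionName: "MyFunction",                                    // name, ARN, or partial ARN
        eventSourceArn: "arn:aws:sqs:us-west-2:123456789012:my-queue", // placeholder queue ARN
        batchSize: 10,                                                 // SQS default; max 10 for FIFO queues
        enabled: true,
    });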

    Create EventSourceMapping Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new EventSourceMapping(name: string, args: EventSourceMappingArgs, opts?: CustomResourceOptions);
    @overload
    def EventSourceMapping(resource_name: str,
                           args: EventSourceMappingArgs,
                           opts: Optional[ResourceOptions] = None)
    
    @overload
    def EventSourceMapping(resource_name: str,
                           opts: Optional[ResourceOptions] = None,
                           function_name: Optional[str] = None,
                           maximum_batching_window_in_seconds: Optional[int] = None,
                           destination_config: Optional[_lambda_.EventSourceMappingDestinationConfigArgs] = None,
                           maximum_retry_attempts: Optional[int] = None,
                           document_db_event_source_config: Optional[_lambda_.EventSourceMappingDocumentDbEventSourceConfigArgs] = None,
                           enabled: Optional[bool] = None,
                           event_source_arn: Optional[str] = None,
                           filter_criteria: Optional[_lambda_.EventSourceMappingFilterCriteriaArgs] = None,
                           batch_size: Optional[int] = None,
                           parallelization_factor: Optional[int] = None,
                           amazon_managed_kafka_event_source_config: Optional[_lambda_.EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs] = None,
                           tumbling_window_in_seconds: Optional[int] = None,
                           bisect_batch_on_function_error: Optional[bool] = None,
                           function_response_types: Optional[Sequence[lambda_.EventSourceMappingFunctionResponseTypesItem]] = None,
                           queues: Optional[Sequence[str]] = None,
                           scaling_config: Optional[_lambda_.EventSourceMappingScalingConfigArgs] = None,
                           self_managed_event_source: Optional[_lambda_.EventSourceMappingSelfManagedEventSourceArgs] = None,
                           self_managed_kafka_event_source_config: Optional[_lambda_.EventSourceMappingSelfManagedKafkaEventSourceConfigArgs] = None,
                           source_access_configurations: Optional[Sequence[_lambda_.EventSourceMappingSourceAccessConfigurationArgs]] = None,
                           starting_position: Optional[str] = None,
                           starting_position_timestamp: Optional[float] = None,
                           topics: Optional[Sequence[str]] = None,
                           maximum_record_age_in_seconds: Optional[int] = None)
    func NewEventSourceMapping(ctx *Context, name string, args EventSourceMappingArgs, opts ...ResourceOption) (*EventSourceMapping, error)
    public EventSourceMapping(string name, EventSourceMappingArgs args, CustomResourceOptions? opts = null)
    public EventSourceMapping(String name, EventSourceMappingArgs args)
    public EventSourceMapping(String name, EventSourceMappingArgs args, CustomResourceOptions options)
    
    type: aws-native:lambda:EventSourceMapping
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args EventSourceMappingArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args EventSourceMappingArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args EventSourceMappingArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args EventSourceMappingArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args EventSourceMappingArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    EventSourceMapping Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The EventSourceMapping resource accepts the following input properties:

    FunctionName string

    The name or ARN of the Lambda function. Name formats

    • Function name – MyFunction.
    • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    • Partial ARN – 123456789012:function:MyFunction.

    The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

    AmazonManagedKafkaEventSourceConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingAmazonManagedKafkaEventSourceConfig
    Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
    BatchSize int
    The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

    • Amazon Kinesis – Default 100. Max 10,000.
    • Amazon DynamoDB Streams – Default 100. Max 10,000.
    • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
    • Self-managed Apache Kafka – Default 100. Max 10,000.
    • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
    • DocumentDB – Default 100. Max 10,000.
    BisectBatchOnFunctionError bool
    (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
    DestinationConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingDestinationConfig
    (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
    DocumentDbEventSourceConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingDocumentDbEventSourceConfig
    Specific configuration settings for a DocumentDB event source.
    Enabled bool
    When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
    EventSourceArn string
    The Amazon Resource Name (ARN) of the event source.

    • Amazon Kinesis – The ARN of the data stream or a stream consumer.
    • Amazon DynamoDB Streams – The ARN of the stream.
    • Amazon Simple Queue Service – The ARN of the queue.
    • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
    • Amazon MQ – The ARN of the broker.
    • Amazon DocumentDB – The ARN of the DocumentDB change stream.
    FilterCriteria Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingFilterCriteria
    An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
    FunctionResponseTypes List<Pulumi.AwsNative.Lambda.EventSourceMappingFunctionResponseTypesItem>
    (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
    MaximumBatchingWindowInSeconds int
    The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
    MaximumRecordAgeInSeconds int
    (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value is 60 seconds; values less than 60 and greater than -1 fall within the parameter's absolute range but are not allowed.
    MaximumRetryAttempts int
    (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
    ParallelizationFactor int
    (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
    Queues List<string>
    (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
    ScalingConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingScalingConfig
    (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
    SelfManagedEventSource Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingSelfManagedEventSource
    The self-managed Apache Kafka cluster for your event source.
    SelfManagedKafkaEventSourceConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingSelfManagedKafkaEventSourceConfig
    Specific configuration settings for a self-managed Apache Kafka event source.
    SourceAccessConfigurations List<Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingSourceAccessConfiguration>
    An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
    StartingPosition string
    The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.

    • LATEST - Read only new records.
    • TRIM_HORIZON - Process all available records.
    • AT_TIMESTAMP - Specify a time from which to start reading records.
    StartingPositionTimestamp double
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
    Topics List<string>
    The name of the Kafka topic.
    TumblingWindowInSeconds int
    (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
    FunctionName string

    The name or ARN of the Lambda function. Name formats

    • Function name – MyFunction.
    • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    • Partial ARN – 123456789012:function:MyFunction.

    The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

    AmazonManagedKafkaEventSourceConfig EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs
    Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
    BatchSize int
    The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

    • Amazon Kinesis – Default 100. Max 10,000.
    • Amazon DynamoDB Streams – Default 100. Max 10,000.
    • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
    • Self-managed Apache Kafka – Default 100. Max 10,000.
    • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
    • DocumentDB – Default 100. Max 10,000.
    BisectBatchOnFunctionError bool
    (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
    DestinationConfig EventSourceMappingDestinationConfigArgs
    (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
    DocumentDbEventSourceConfig EventSourceMappingDocumentDbEventSourceConfigArgs
    Specific configuration settings for a DocumentDB event source.
    Enabled bool
    When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
    EventSourceArn string
    The Amazon Resource Name (ARN) of the event source.

    • Amazon Kinesis – The ARN of the data stream or a stream consumer.
    • Amazon DynamoDB Streams – The ARN of the stream.
    • Amazon Simple Queue Service – The ARN of the queue.
    • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
    • Amazon MQ – The ARN of the broker.
    • Amazon DocumentDB – The ARN of the DocumentDB change stream.
    FilterCriteria EventSourceMappingFilterCriteriaArgs
    An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
    FunctionResponseTypes []EventSourceMappingFunctionResponseTypesItem
    (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
    MaximumBatchingWindowInSeconds int
    The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
    MaximumRecordAgeInSeconds int
    (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value is 60 seconds; values less than 60 and greater than -1 fall within the parameter's absolute range but are not allowed.
    MaximumRetryAttempts int
    (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
    ParallelizationFactor int
    (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
    Queues []string
    (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
    ScalingConfig EventSourceMappingScalingConfigArgs
    (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
    SelfManagedEventSource EventSourceMappingSelfManagedEventSourceArgs
    The self-managed Apache Kafka cluster for your event source.
    SelfManagedKafkaEventSourceConfig EventSourceMappingSelfManagedKafkaEventSourceConfigArgs
    Specific configuration settings for a self-managed Apache Kafka event source.
    SourceAccessConfigurations []EventSourceMappingSourceAccessConfigurationArgs
    An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
    StartingPosition string
    The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.

    • LATEST - Read only new records.
    • TRIM_HORIZON - Process all available records.
    • AT_TIMESTAMP - Specify a time from which to start reading records.
    StartingPositionTimestamp float64
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
    Topics []string
    The name of the Kafka topic.
    TumblingWindowInSeconds int
    (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
    functionName String

    The name or ARN of the Lambda function. Name formats

    • Function name – MyFunction.
    • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    • Partial ARN – 123456789012:function:MyFunction.

    The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

    amazonManagedKafkaEventSourceConfig EventSourceMappingAmazonManagedKafkaEventSourceConfig
    Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
    batchSize Integer
    The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

    • Amazon Kinesis – Default 100. Max 10,000.
    • Amazon DynamoDB Streams – Default 100. Max 10,000.
    • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
    • Self-managed Apache Kafka – Default 100. Max 10,000.
    • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
    • DocumentDB – Default 100. Max 10,000.
    bisectBatchOnFunctionError Boolean
    (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
    destinationConfig EventSourceMappingDestinationConfig
    (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
    documentDbEventSourceConfig EventSourceMappingDocumentDbEventSourceConfig
    Specific configuration settings for a DocumentDB event source.
    enabled Boolean
    When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
    eventSourceArn String
    The Amazon Resource Name (ARN) of the event source.

    • Amazon Kinesis – The ARN of the data stream or a stream consumer.
    • Amazon DynamoDB Streams – The ARN of the stream.
    • Amazon Simple Queue Service – The ARN of the queue.
    • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
    • Amazon MQ – The ARN of the broker.
    • Amazon DocumentDB – The ARN of the DocumentDB change stream.
    filterCriteria EventSourceMappingFilterCriteria
    An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
    functionResponseTypes List<EventSourceMappingFunctionResponseTypesItem>
    (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
    maximumBatchingWindowInSeconds Integer
    The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
    maximumRecordAgeInSeconds Integer
    (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value is 60 seconds; values less than 60 and greater than -1 fall within the parameter's absolute range but are not allowed.
    maximumRetryAttempts Integer
    (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
    parallelizationFactor Integer
    (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
    queues List<String>
    (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
    scalingConfig EventSourceMappingScalingConfig
    (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
    selfManagedEventSource EventSourceMappingSelfManagedEventSource
    The self-managed Apache Kafka cluster for your event source.
    selfManagedKafkaEventSourceConfig EventSourceMappingSelfManagedKafkaEventSourceConfig
    Specific configuration settings for a self-managed Apache Kafka event source.
    sourceAccessConfigurations List<EventSourceMappingSourceAccessConfiguration>
    An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
    startingPosition String
    The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.

    • LATEST - Read only new records.
    • TRIM_HORIZON - Process all available records.
    • AT_TIMESTAMP - Specify a time from which to start reading records.
    startingPositionTimestamp Double
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
    topics List<String>
    The name of the Kafka topic.
    tumblingWindowInSeconds Integer
    (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
    functionName string

    The name or ARN of the Lambda function. Name formats

    • Function name – MyFunction.
    • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    • Partial ARN – 123456789012:function:MyFunction.

    The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

    amazonManagedKafkaEventSourceConfig EventSourceMappingAmazonManagedKafkaEventSourceConfig
    Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
    batchSize number
    The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

    • Amazon Kinesis – Default 100. Max 10,000.
    • Amazon DynamoDB Streams – Default 100. Max 10,000.
    • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
    • Self-managed Apache Kafka – Default 100. Max 10,000.
    • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
    • DocumentDB – Default 100. Max 10,000.
    bisectBatchOnFunctionError boolean
    (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
    destinationConfig EventSourceMappingDestinationConfig
    (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
    documentDbEventSourceConfig EventSourceMappingDocumentDbEventSourceConfig
    Specific configuration settings for a DocumentDB event source.
    enabled boolean
    When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
    eventSourceArn string
    The Amazon Resource Name (ARN) of the event source.

    • Amazon Kinesis – The ARN of the data stream or a stream consumer.
    • Amazon DynamoDB Streams – The ARN of the stream.
    • Amazon Simple Queue Service – The ARN of the queue.
    • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
    • Amazon MQ – The ARN of the broker.
    • Amazon DocumentDB – The ARN of the DocumentDB change stream.
    filterCriteria EventSourceMappingFilterCriteria
    An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
    functionResponseTypes EventSourceMappingFunctionResponseTypesItem[]
    (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
    maximumBatchingWindowInSeconds number
    The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
    maximumRecordAgeInSeconds number
    (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value is 60 seconds; values less than 60 and greater than -1 fall within the parameter's absolute range but are not allowed.
    maximumRetryAttempts number
    (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
    parallelizationFactor number
    (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
    queues string[]
    (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
    scalingConfig EventSourceMappingScalingConfig
    (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
    selfManagedEventSource EventSourceMappingSelfManagedEventSource
    The self-managed Apache Kafka cluster for your event source.
    selfManagedKafkaEventSourceConfig EventSourceMappingSelfManagedKafkaEventSourceConfig
    Specific configuration settings for a self-managed Apache Kafka event source.
    sourceAccessConfigurations EventSourceMappingSourceAccessConfiguration[]
    An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
    startingPosition string
    The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.

    • LATEST - Read only new records.
    • TRIM_HORIZON - Process all available records.
    • AT_TIMESTAMP - Specify a time from which to start reading records.
    startingPositionTimestamp number
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
    topics string[]
    The name of the Kafka topic.
    tumblingWindowInSeconds number
    (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
    function_name str

    The name or ARN of the Lambda function. Name formats

    • Function name – MyFunction.
    • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    • Partial ARN – 123456789012:function:MyFunction.

    The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

    amazon_managed_kafka_event_source_config lambda_.EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs
    Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
    batch_size int
    The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

    • Amazon Kinesis – Default 100. Max 10,000.
    • Amazon DynamoDB Streams – Default 100. Max 10,000.
    • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
    • Self-managed Apache Kafka – Default 100. Max 10,000.
    • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
    • DocumentDB – Default 100. Max 10,000.
    bisect_batch_on_function_error bool
    (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
    destination_config lambda_.EventSourceMappingDestinationConfigArgs
    (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
    document_db_event_source_config lambda_.EventSourceMappingDocumentDbEventSourceConfigArgs
    Specific configuration settings for a DocumentDB event source.
    enabled bool
    When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
    event_source_arn str
    The Amazon Resource Name (ARN) of the event source.

    • Amazon Kinesis – The ARN of the data stream or a stream consumer.
    • Amazon DynamoDB Streams – The ARN of the stream.
    • Amazon Simple Queue Service – The ARN of the queue.
    • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
    • Amazon MQ – The ARN of the broker.
    • Amazon DocumentDB – The ARN of the DocumentDB change stream.
    filter_criteria lambda_.EventSourceMappingFilterCriteriaArgs
    An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
    function_response_types Sequence[lambda_.EventSourceMappingFunctionResponseTypesItem]
    (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
    maximum_batching_window_in_seconds int
    The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
    maximum_record_age_in_seconds int
    (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value is 60 seconds; values less than 60 and greater than -1 fall within the parameter's absolute range but are not allowed.
    maximum_retry_attempts int
    (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
    parallelization_factor int
    (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
    queues Sequence[str]
    (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
    scaling_config lambda_.EventSourceMappingScalingConfigArgs
    (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
    self_managed_event_source lambda_.EventSourceMappingSelfManagedEventSourceArgs
    The self-managed Apache Kafka cluster for your event source.
    self_managed_kafka_event_source_config lambda_.EventSourceMappingSelfManagedKafkaEventSourceConfigArgs
    Specific configuration settings for a self-managed Apache Kafka event source.
    source_access_configurations Sequence[lambda_.EventSourceMappingSourceAccessConfigurationArgs]
    An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
    starting_position str
    The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.

    • LATEST - Read only new records.
    • TRIM_HORIZON - Process all available records.
    • AT_TIMESTAMP - Specify a time from which to start reading records.
    starting_position_timestamp float
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
    topics Sequence[str]
    The name of the Kafka topic.
    tumbling_window_in_seconds int
    (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
    functionName String

    The name or ARN of the Lambda function. Name formats

    • Function name – MyFunction.
    • Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
    • Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
    • Partial ARN – 123456789012:function:MyFunction.

    The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.

    amazonManagedKafkaEventSourceConfig Property Map
    Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
    batchSize Number
    The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).

    • Amazon Kinesis – Default 100. Max 10,000.
    • Amazon DynamoDB Streams – Default 100. Max 10,000.
    • Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
    • Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
    • Self-managed Apache Kafka – Default 100. Max 10,000.
    • Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
    • DocumentDB – Default 100. Max 10,000.
    bisectBatchOnFunctionError Boolean
    (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
    destinationConfig Property Map
    (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
    documentDbEventSourceConfig Property Map
    Specific configuration settings for a DocumentDB event source.
    enabled Boolean
    When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True
    eventSourceArn String
    The Amazon Resource Name (ARN) of the event source.

    • Amazon Kinesis – The ARN of the data stream or a stream consumer.
    • Amazon DynamoDB Streams – The ARN of the stream.
    • Amazon Simple Queue Service – The ARN of the queue.
    • Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
    • Amazon MQ – The ARN of the broker.
    • Amazon DocumentDB – The ARN of the DocumentDB change stream.
    filterCriteria Property Map
    An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
    functionResponseTypes List<"ReportBatchItemFailures">
    (Streams and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures
    maximumBatchingWindowInSeconds Number
    The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
    maximumRecordAgeInSeconds Number
    (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value is 60 seconds; values less than 60 and greater than -1 fall within the parameter's absolute range but are not allowed.
    maximumRetryAttempts Number
    (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
    parallelizationFactor Number
    (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
    queues List<String>
    (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
    scalingConfig Property Map
    (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
    selfManagedEventSource Property Map
    The self-managed Apache Kafka cluster for your event source.
    selfManagedKafkaEventSourceConfig Property Map
    Specific configuration settings for a self-managed Apache Kafka event source.
    sourceAccessConfigurations List<Property Map>
    An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
    startingPosition String
    The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.

    • LATEST - Read only new records.
    • TRIM_HORIZON - Process all available records.
    • AT_TIMESTAMP - Specify a time from which to start reading records.
    startingPositionTimestamp Number
    With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
    topics List<String>
    The name of the Kafka topic.
    tumblingWindowInSeconds Number
    (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
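
    As a sketch of the stream-oriented tuning properties above (TypeScript), the following maps a Kinesis stream with an explicit starting position, batching window, and retry limits. The stream ARN and function name are placeholders.

    import * as aws_native from "@pulumi/aws-native";

    // Kinesis mapping: read from the oldest record, gather up to 200 records
    // or 5 seconds per batch, and bound retries and record age.
    const streamMapping = new aws_native.lambda.EventSourceMapping("stream-mapping", {
        functionName: "MyFunction",
        eventSourceArn: "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
        startingPosition: "TRIM_HORIZON",     // required for Kinesis and DynamoDB
        batchSize: 200,                       // Kinesis allows up to 10,000
        maximumBatchingWindowInSeconds: 5,
        bisectBatchOnFunctionError: true,     // split and retry failed batches
        maximumRetryAttempts: 3,
        maximumRecordAgeInSeconds: 3600,      // discard records older than one hour
        parallelizationFactor: 2,             // two concurrent batches per shard
    });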

    Outputs

    All input properties are implicitly available as output properties. Additionally, the EventSourceMapping resource produces the following output properties:

    AwsId string
    The event source mapping's ID.
    Id string
    The provider-assigned unique ID for this managed resource.
    AwsId string
    The event source mapping's ID.
    Id string
    The provider-assigned unique ID for this managed resource.
    awsId String
    The event source mapping's ID.
    id String
    The provider-assigned unique ID for this managed resource.
    awsId string
    The event source mapping's ID.
    id string
    The provider-assigned unique ID for this managed resource.
    aws_id str
    The event source mapping's ID.
    id str
    The provider-assigned unique ID for this managed resource.
    awsId String
    The event source mapping's ID.
    id String
    The provider-assigned unique ID for this managed resource.

    Supporting Types

    EventSourceMappingAmazonManagedKafkaEventSourceConfig, EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs

    ConsumerGroupId string
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    ConsumerGroupId string
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumerGroupId String
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumerGroupId string
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumer_group_id str
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumerGroupId String
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.

    EventSourceMappingDestinationConfig, EventSourceMappingDestinationConfigArgs

    OnFailure Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingOnFailure
    The destination configuration for failed invocations.
    OnFailure EventSourceMappingOnFailure
    The destination configuration for failed invocations.
    onFailure EventSourceMappingOnFailure
    The destination configuration for failed invocations.
    onFailure EventSourceMappingOnFailure
    The destination configuration for failed invocations.
    on_failure lambda_.EventSourceMappingOnFailure
    The destination configuration for failed invocations.
    onFailure Property Map
    The destination configuration for failed invocations.

    EventSourceMappingDocumentDbEventSourceConfig, EventSourceMappingDocumentDbEventSourceConfigArgs

    CollectionName string
    The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
    DatabaseName string
    The name of the database to consume within the DocumentDB cluster.
    FullDocument Pulumi.AwsNative.Lambda.EventSourceMappingDocumentDbEventSourceConfigFullDocument
    Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
    CollectionName string
    The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
    DatabaseName string
    The name of the database to consume within the DocumentDB cluster.
    FullDocument EventSourceMappingDocumentDbEventSourceConfigFullDocument
    Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
    collectionName String
    The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
    databaseName String
    The name of the database to consume within the DocumentDB cluster.
    fullDocument EventSourceMappingDocumentDbEventSourceConfigFullDocument
    Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
    collectionName string
    The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
    databaseName string
    The name of the database to consume within the DocumentDB cluster.
    fullDocument EventSourceMappingDocumentDbEventSourceConfigFullDocument
    Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
    collection_name str
    The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
    database_name str
    The name of the database to consume within the DocumentDB cluster.
    full_document lambda_.EventSourceMappingDocumentDbEventSourceConfigFullDocument
    Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
    collectionName String
    The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
    databaseName String
    The name of the database to consume within the DocumentDB cluster.
    fullDocument "UpdateLookup" | "Default"
    Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
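
    A hedged sketch (TypeScript) of a DocumentDB mapping using this configuration object; the cluster ARN, database, and collection names are placeholders, and passing the enum as the string literal "UpdateLookup" is an assumption about the generated SDK types.

    import * as aws_native from "@pulumi/aws-native";

    // DocumentDB mapping: consume the "invoices" collection of the "orders"
    // database and include the full document with each change event.
    const docdbMapping = new aws_native.lambda.EventSourceMapping("docdb-mapping", {
        functionName: "MyFunction",
        eventSourceArn: "arn:aws:rds:us-west-2:123456789012:cluster:my-docdb-cluster",
        startingPosition: "LATEST",
        documentDbEventSourceConfig: {
            databaseName: "orders",
            collectionName: "invoices",   // omit to consume all collections
            fullDocument: "UpdateLookup", // send the full document, not just the delta
        },
    });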

    EventSourceMappingDocumentDbEventSourceConfigFullDocument, EventSourceMappingDocumentDbEventSourceConfigFullDocumentArgs

    UpdateLookup
    UpdateLookup
    Default
    Default
    EventSourceMappingDocumentDbEventSourceConfigFullDocumentUpdateLookup
    UpdateLookup
    EventSourceMappingDocumentDbEventSourceConfigFullDocumentDefault
    Default
    UpdateLookup
    UpdateLookup
    Default
    Default
    UpdateLookup
    UpdateLookup
    Default
    Default
    UPDATE_LOOKUP
    UpdateLookup
    DEFAULT
    Default
    "UpdateLookup"
    UpdateLookup
    "Default"
    Default

    EventSourceMappingEndpoints, EventSourceMappingEndpointsArgs

    KafkaBootstrapServers List<string>
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    KafkaBootstrapServers []string
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    kafkaBootstrapServers List<String>
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    kafkaBootstrapServers string[]
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    kafka_bootstrap_servers Sequence[str]
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    kafkaBootstrapServers List<String>
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].

    EventSourceMappingFilter, EventSourceMappingFilterArgs

    Pattern string
    A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
    Pattern string
    A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
    pattern String
    A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
    pattern string
    A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
    pattern str
    A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
    pattern String
    A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.

    EventSourceMappingFilterCriteria, EventSourceMappingFilterCriteriaArgs

    filters List<Property Map>
    A list of filters.
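
    A sketch (TypeScript) of event filtering: only SQS messages whose body reports a "critical" severity are delivered to the function. The filter pattern, queue ARN, and function name are illustrative placeholders.

    import * as aws_native from "@pulumi/aws-native";

    // Attach a filter so Lambda only invokes the function for matching records.
    const filteredMapping = new aws_native.lambda.EventSourceMapping("filtered-mapping", {
        functionName: "MyFunction",
        eventSourceArn: "arn:aws:sqs:us-west-2:123456789012:my-queue",
        filterCriteria: {
            filters: [{
                // Filter patterns are JSON strings; see Lambda event filtering.
                pattern: JSON.stringify({ body: { severity: ["critical"] } }),
            }],
        },
    });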

    EventSourceMappingFunctionResponseTypesItem, EventSourceMappingFunctionResponseTypesItemArgs

    ReportBatchItemFailures
    ReportBatchItemFailures
    EventSourceMappingFunctionResponseTypesItemReportBatchItemFailures
    ReportBatchItemFailures
    ReportBatchItemFailures
    ReportBatchItemFailures
    ReportBatchItemFailures
    ReportBatchItemFailures
    REPORT_BATCH_ITEM_FAILURES
    ReportBatchItemFailures
    "ReportBatchItemFailures"
    ReportBatchItemFailures

    EventSourceMappingOnFailure, EventSourceMappingOnFailureArgs

    Destination string
    The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
    Destination string
    The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
    destination String
    The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
    destination string
    The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
    destination str
    The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
    destination String
    The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
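
    A sketch (TypeScript) of an on-failure destination: batches that Kinesis processing ultimately discards are recorded to an SQS queue. Both ARNs and the function name are placeholders.

    import * as aws_native from "@pulumi/aws-native";

    // After two failed retries, send a record describing the failed batch to SQS.
    const mappingWithDlq = new aws_native.lambda.EventSourceMapping("mapping-with-dlq", {
        functionName: "MyFunction",
        eventSourceArn: "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",
        startingPosition: "LATEST",
        maximumRetryAttempts: 2,
        destinationConfig: {
            onFailure: {
                destination: "arn:aws:sqs:us-west-2:123456789012:failed-batches",
            },
        },
    });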

    EventSourceMappingScalingConfig, EventSourceMappingScalingConfigArgs

    MaximumConcurrency int
    Limits the number of concurrent instances that the SQS event source can invoke.
    MaximumConcurrency int
    Limits the number of concurrent instances that the SQS event source can invoke.
    maximumConcurrency Integer
    Limits the number of concurrent instances that the SQS event source can invoke.
    maximumConcurrency number
    Limits the number of concurrent instances that the SQS event source can invoke.
    maximum_concurrency int
    Limits the number of concurrent instances that the SQS event source can invoke.
    maximumConcurrency Number
    Limits the number of concurrent instances that the SQS event source can invoke.
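
    A sketch (TypeScript) of capping SQS-driven concurrency at five concurrent function instances; the queue ARN and function name are placeholders.

    import * as aws_native from "@pulumi/aws-native";

    // Limit how many concurrent invocations the SQS event source can drive.
    const throttledMapping = new aws_native.lambda.EventSourceMapping("throttled-mapping", {
        functionName: "MyFunction",
        eventSourceArn: "arn:aws:sqs:us-west-2:123456789012:my-queue",
        scalingConfig: {
            maximumConcurrency: 5,
        },
    });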

    EventSourceMappingSelfManagedEventSource, EventSourceMappingSelfManagedEventSourceArgs

    Endpoints Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingEndpoints
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    Endpoints EventSourceMappingEndpoints
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    endpoints EventSourceMappingEndpoints
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    endpoints EventSourceMappingEndpoints
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    endpoints lambda_.EventSourceMappingEndpoints
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
    endpoints Property Map
    The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
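    A self-managed Apache Kafka mapping supplies its brokers through Endpoints, as in the TypeScript sketch below. The broker hostnames, topic, and function name are placeholders; a real mapping also needs authentication or VPC details via SourceAccessConfigurations (see the example later in this section).

    import * as aws_native from "@pulumi/aws-native";

    // Minimal sketch of a self-managed Kafka event source. Broker addresses,
    // topic, and function name are placeholders.
    const kafkaMapping = new aws_native.lambda.EventSourceMapping("kafkaMapping", {
        functionName: "kafka-consumer",
        topics: ["orders"],
        startingPosition: "TRIM_HORIZON",
        selfManagedEventSource: {
            endpoints: {
                kafkaBootstrapServers: [
                    "broker-1.example.com:9092",
                    "broker-2.example.com:9092",
                ],
            },
        },
    });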

    EventSourceMappingSelfManagedKafkaEventSourceConfig, EventSourceMappingSelfManagedKafkaEventSourceConfigArgs

    ConsumerGroupId string
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    ConsumerGroupId string
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumerGroupId String
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumerGroupId string
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumer_group_id str
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
    consumerGroupId String
    The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
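    To pin a mapping such as the one sketched above to a specific consumer group, add a SelfManagedKafkaEventSourceConfig. The group ID below is a placeholder and, as noted, cannot be changed after the mapping is created.

    // Fragment to merge into the kafkaMapping arguments sketched above, e.g.
    // selfManagedKafkaEventSourceConfig: { consumerGroupId: "orders-consumer-group" }.
    const selfManagedKafkaEventSourceConfig = {
        consumerGroupId: "orders-consumer-group", // placeholder; immutable after creation
    };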

    EventSourceMappingSourceAccessConfiguration, EventSourceMappingSourceAccessConfigurationArgs

    Type Pulumi.AwsNative.Lambda.EventSourceMappingSourceAccessConfigurationType
    The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".

    • BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
    • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
    • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
    • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
    • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
    • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
    • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
    • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
    • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
    Uri string
    The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
    Type EventSourceMappingSourceAccessConfigurationType
    The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".

    • BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
    • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
    • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
    • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
    • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
    • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
    • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
    • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
    • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
    Uri string
    The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
    type EventSourceMappingSourceAccessConfigurationType
    The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".

    • BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
    • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
    • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
    • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
    • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
    • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
    • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
    • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
    • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
    uri String
    The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
    type EventSourceMappingSourceAccessConfigurationType
    The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".

    • BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
    • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
    • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
    • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
    • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
    • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
    • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
    • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
    • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
    uri string
    The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
    type lambda_.EventSourceMappingSourceAccessConfigurationType
    The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".

    • BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
    • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
    • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
    • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
    • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
    • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
    • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
    • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
    • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
    uri str
    The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
    type "BASIC_AUTH" | "VPC_SUBNET" | "VPC_SECURITY_GROUP" | "SASL_SCRAM_512_AUTH" | "SASL_SCRAM_256_AUTH" | "VIRTUAL_HOST" | "CLIENT_CERTIFICATE_TLS_AUTH" | "SERVER_ROOT_CA_CERTIFICATE"
    The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".

    • BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
    • BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
    • VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
    • VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
    • SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
    • SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
    • VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
    • CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
    • SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
    uri String
    The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
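    The sketch below authenticates to self-managed Kafka brokers with SASL/SCRAM-512 credentials stored in Secrets Manager. The broker address, secret ARN, topic, and function name are placeholders; per the TypeScript listing above, Type also accepts the enum constant SaslScram512Auth in place of the string literal.

    import * as aws_native from "@pulumi/aws-native";

    // Minimal sketch: SASL/SCRAM-512 authentication for a self-managed Kafka
    // source. All names and ARNs are placeholders.
    const authenticatedKafkaMapping = new aws_native.lambda.EventSourceMapping("authenticatedKafkaMapping", {
        functionName: "kafka-consumer",
        topics: ["orders"],
        startingPosition: "LATEST",
        selfManagedEventSource: {
            endpoints: {
                kafkaBootstrapServers: ["broker-1.example.com:9096"],
            },
        },
        sourceAccessConfigurations: [
            {
                type: "SASL_SCRAM_512_AUTH",
                uri: "arn:aws:secretsmanager:us-east-1:123456789012:secret:MyBrokerSecretName",
            },
        ],
    });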

    EventSourceMappingSourceAccessConfigurationType, EventSourceMappingSourceAccessConfigurationTypeArgs

    BasicAuth
    BASIC_AUTH
    VpcSubnet
    VPC_SUBNET
    VpcSecurityGroup
    VPC_SECURITY_GROUP
    SaslScram512Auth
    SASL_SCRAM_512_AUTH
    SaslScram256Auth
    SASL_SCRAM_256_AUTH
    VirtualHost
    VIRTUAL_HOST
    ClientCertificateTlsAuth
    CLIENT_CERTIFICATE_TLS_AUTH
    ServerRootCaCertificate
    SERVER_ROOT_CA_CERTIFICATE
    EventSourceMappingSourceAccessConfigurationTypeBasicAuth
    BASIC_AUTH
    EventSourceMappingSourceAccessConfigurationTypeVpcSubnet
    VPC_SUBNET
    EventSourceMappingSourceAccessConfigurationTypeVpcSecurityGroup
    VPC_SECURITY_GROUP
    EventSourceMappingSourceAccessConfigurationTypeSaslScram512Auth
    SASL_SCRAM_512_AUTH
    EventSourceMappingSourceAccessConfigurationTypeSaslScram256Auth
    SASL_SCRAM_256_AUTH
    EventSourceMappingSourceAccessConfigurationTypeVirtualHost
    VIRTUAL_HOST
    EventSourceMappingSourceAccessConfigurationTypeClientCertificateTlsAuth
    CLIENT_CERTIFICATE_TLS_AUTH
    EventSourceMappingSourceAccessConfigurationTypeServerRootCaCertificate
    SERVER_ROOT_CA_CERTIFICATE
    BasicAuth
    BASIC_AUTH
    VpcSubnet
    VPC_SUBNET
    VpcSecurityGroup
    VPC_SECURITY_GROUP
    SaslScram512Auth
    SASL_SCRAM_512_AUTH
    SaslScram256Auth
    SASL_SCRAM_256_AUTH
    VirtualHost
    VIRTUAL_HOST
    ClientCertificateTlsAuth
    CLIENT_CERTIFICATE_TLS_AUTH
    ServerRootCaCertificate
    SERVER_ROOT_CA_CERTIFICATE
    BasicAuth
    BASIC_AUTH
    VpcSubnet
    VPC_SUBNET
    VpcSecurityGroup
    VPC_SECURITY_GROUP
    SaslScram512Auth
    SASL_SCRAM_512_AUTH
    SaslScram256Auth
    SASL_SCRAM_256_AUTH
    VirtualHost
    VIRTUAL_HOST
    ClientCertificateTlsAuth
    CLIENT_CERTIFICATE_TLS_AUTH
    ServerRootCaCertificate
    SERVER_ROOT_CA_CERTIFICATE
    BASIC_AUTH
    BASIC_AUTH
    VPC_SUBNET
    VPC_SUBNET
    VPC_SECURITY_GROUP
    VPC_SECURITY_GROUP
    SASL_SCRAM512_AUTH
    SASL_SCRAM_512_AUTH
    SASL_SCRAM256_AUTH
    SASL_SCRAM_256_AUTH
    VIRTUAL_HOST
    VIRTUAL_HOST
    CLIENT_CERTIFICATE_TLS_AUTH
    CLIENT_CERTIFICATE_TLS_AUTH
    SERVER_ROOT_CA_CERTIFICATE
    SERVER_ROOT_CA_CERTIFICATE
    "BASIC_AUTH"
    BASIC_AUTH
    "VPC_SUBNET"
    VPC_SUBNET
    "VPC_SECURITY_GROUP"
    VPC_SECURITY_GROUP
    "SASL_SCRAM_512_AUTH"
    SASL_SCRAM_512_AUTH
    "SASL_SCRAM_256_AUTH"
    SASL_SCRAM_256_AUTH
    "VIRTUAL_HOST"
    VIRTUAL_HOST
    "CLIENT_CERTIFICATE_TLS_AUTH"
    CLIENT_CERTIFICATE_TLS_AUTH
    "SERVER_ROOT_CA_CERTIFICATE"
    SERVER_ROOT_CA_CERTIFICATE

    Package Details

    Repository
    AWS Native pulumi/pulumi-aws-native
    License
    Apache-2.0