1. Implementing CloudWatch Log Subscription Filters for real-time processing


    CloudWatch Log Subscription Filters allow you to deliver a stream of log events from Amazon CloudWatch Logs to a supported destination. One common use case is real-time processing of these logs, handled by AWS Lambda functions, Kinesis Data Streams, or Kinesis Data Firehose; custom services then typically consume the log data downstream from one of these destinations.

    In the example provided below, we will create a subscription filter that forwards log data from a CloudWatch Log Group to a Kinesis Stream. The idea is to capture log data from your applications or AWS services, process it in real time, and optionally store the processed data for analytics or monitoring purposes.

    Here's a breakdown of what we are going to do:

    1. Define a CloudWatch Log Group which will collect log streams.
    2. Define a Kinesis Stream which will serve as the real-time processing platform.
    3. Create a Log Subscription Filter to connect the CloudWatch Log Group to the Kinesis Stream.

    Please note that before running this code, you should have AWS credentials configured (for example via the AWS CLI) with the necessary permissions to create these resources.

    Now, let's dive into the Pulumi program written in TypeScript:

    ```typescript
    import * as aws from "@pulumi/aws";

    // Create a new Kinesis Stream which will receive log data from CloudWatch.
    const stream = new aws.kinesis.Stream("my-log-stream", {
        shardCount: 1, // The number of shards in the Kinesis Stream.
    });

    // Define a CloudWatch Log Group where logs will be collected.
    const logGroup = new aws.cloudwatch.LogGroup("my-log-group", {
        retentionInDays: 7, // Define how long the logs will be retained.
    });

    // The IAM role that the CloudWatch Logs service assumes when delivering
    // log events to the Kinesis Stream.
    const role = new aws.iam.Role("my-subscription-filter-role", {
        assumeRolePolicy: JSON.stringify({
            Version: "2012-10-17",
            Statement: [{
                Action: "sts:AssumeRole",
                Effect: "Allow",
                Principal: {
                    Service: "logs.amazonaws.com",
                },
            }],
        }),
    });

    // Policy that grants the necessary permissions for the log subscription to work.
    // The stream ARN is a Pulumi Output, so the policy document is built inside `apply`.
    const rolePolicy = new aws.iam.RolePolicy("my-subscription-filter-policy", {
        role: role.id,
        policy: stream.arn.apply(arn => JSON.stringify({
            Version: "2012-10-17",
            Statement: [{
                Action: [
                    "kinesis:PutRecord",
                    "kinesis:PutRecords",
                ],
                Effect: "Allow",
                Resource: arn,
            }],
        })),
    });

    // Subscription filter that sends logs to the Kinesis Stream for processing.
    const logSubscriptionFilter = new aws.cloudwatch.LogSubscriptionFilter("my-subscription-filter", {
        logGroup: logGroup.name,    // Connect to our defined Log Group.
        filterPattern: "",          // An empty pattern matches every log event.
        destinationArn: stream.arn, // Point the subscription at the Kinesis Stream.
        roleArn: role.arn,          // The role with permissions to route logs to the stream.
    }, { dependsOn: [rolePolicy] }); // Ensure the permissions exist before the filter is created.

    // Exporting the names and ARNs of the created resources.
    export const streamName = stream.name;
    export const streamArn = stream.arn;
    export const logGroupName = logGroup.name;
    export const logGroupArn = logGroup.arn;
    export const subscriptionFilterName = logSubscriptionFilter.name;
    ```

    In the program above, we start by importing the AWS package from Pulumi's suite of supported cloud providers. We then proceed to create a Kinesis Stream and a CloudWatch Log Group. The shardCount parameter of the Kinesis Stream determines the data capacity and throughput of the stream, and retentionInDays for the CloudWatch Log Group specifies how long the logs should be stored.
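
    If your log volume is higher or spiky, you can size the stream more generously or, on recent versions of the AWS provider, let it scale automatically. The snippet below is a sketch of both variants; the resource names are illustrative, and each provisioned shard accepts roughly 1 MiB/s or 1,000 records/s of writes.

    ```typescript
    // Provisioned capacity: size shardCount to your expected log throughput.
    const provisionedStream = new aws.kinesis.Stream("provisioned-log-stream", {
        shardCount: 4,
    });

    // On-demand capacity (available in recent AWS provider versions): the stream
    // scales automatically, so shardCount is omitted.
    const onDemandStream = new aws.kinesis.Stream("on-demand-log-stream", {
        streamModeDetails: { streamMode: "ON_DEMAND" },
    });
    ```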

    Next, we define an IAM role that the CloudWatch Logs service assumes, along with a policy granting it permission to put records into our Kinesis Stream. Finally, we create a LogSubscriptionFilter, which bridges the CloudWatch Log Group and the Kinesis Stream, routing logs from the former to the latter.
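
    The empty filterPattern above forwards every log event. If you only need a subset, CloudWatch Logs filter pattern syntax can narrow it down. The variant below (with an illustrative resource name) forwards only events whose message contains the term "ERROR"; note that a log group supports only a limited number of subscription filters, so in practice you would usually adjust the pattern on the existing filter rather than add another.

    ```typescript
    // A variant of the subscription filter that only forwards events containing "ERROR".
    // For structured JSON logs, a pattern such as { $.level = "error" } selects on a field instead.
    const errorOnlyFilter = new aws.cloudwatch.LogSubscriptionFilter("error-only-filter", {
        logGroup: logGroup.name,
        filterPattern: "ERROR",
        destinationArn: stream.arn,
        roleArn: role.arn,
    }, { dependsOn: [rolePolicy] });
    ```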

    At the end of the code, we export important information about the created resources for easy access and reference.

    This setup will now stream log data from the defined log group to the Kinesis Stream, from which you can process the data in real time using a consumer such as an AWS Lambda function or Kinesis Data Analytics, depending on your use case.
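
    As a rough sketch of the consuming side (not part of the Pulumi program above), a Lambda function reading from the Kinesis Stream receives the CloudWatch Logs payloads base64-encoded and gzip-compressed. The handler below shows how such a consumer might decode them; the function name is an assumption, and it would still need to be deployed with a Kinesis event source mapping.

    ```typescript
    import * as zlib from "zlib";
    import type { KinesisStreamEvent } from "aws-lambda"; // Requires the @types/aws-lambda package.

    export const handler = async (event: KinesisStreamEvent): Promise<void> => {
        for (const record of event.Records) {
            // Kinesis record data is base64-encoded; CloudWatch Logs payloads are also gzip-compressed.
            const compressed = Buffer.from(record.kinesis.data, "base64");
            const payload = JSON.parse(zlib.gunzipSync(compressed).toString("utf8"));

            // CONTROL_MESSAGE records are health checks sent by CloudWatch Logs; skip them.
            if (payload.messageType !== "DATA_MESSAGE") {
                continue;
            }

            for (const logEvent of payload.logEvents) {
                console.log(`${payload.logGroup} ${logEvent.timestamp}: ${logEvent.message}`);
            }
        }
    };
    ```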