1. Can you provide a code example demonstrating how to set up an AWS App Autoscaling policy in TypeScript?

    Certainly! AWS Application Auto Scaling lets you automatically adjust scalable resources according to conditions you define. You can set up scaling policies for a variety of resources, including Amazon ECS services, DynamoDB tables, and Aurora replicas, among others.
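
    The same Target and Policy pattern applies to those other resource types; only the resource ID, service namespace, and scalable dimension change. As a minimal sketch, the snippet below (assuming a hypothetical DynamoDB table named example-table) registers the table's read capacity as a scalable target:

    import * as aws from "@pulumi/aws";

    // Register a DynamoDB table's read capacity as a scalable target.
    // "example-table" is a placeholder; point this at your own table name.
    const tableReadTarget = new aws.appautoscaling.Target("tableReadTarget", {
        resourceId: "table/example-table",
        scalableDimension: "dynamodb:table:ReadCapacityUnits",
        serviceNamespace: "dynamodb",
        minCapacity: 5,
        maxCapacity: 100,
    });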

    We will be setting up an Application Auto Scaling policy for an ECS service. To do so, we need two main components:

    1. aws.appautoscaling.Target: This component defines the scalable target—a specific ECS service in our case. We specify parameters such as resource ID, scalable dimension (like ECS service desired count), and the minimum and maximum number of tasks the service can scale to.

    2. aws.appautoscaling.Policy: After defining the target, we define a scaling policy. This policy consists of the rules that determine when to scale your service in or out, based on CPU utilization, memory utilization, or even a custom CloudWatch metric (a custom-metric variant is sketched after the breakdown below).

    The TypeScript program below demonstrates how to set up a simple target tracking scaling policy for an ECS service; it assumes the ECS service already exists. The policy increases the number of tasks when average CPU utilization rises above the 70% target.

    import * as aws from "@pulumi/aws";

    // First, we need to define the scalable target. This should reference an existing ECS service.
    const ecsScalableTarget = new aws.appautoscaling.Target("ecsScalableTarget", {
        // Format is: service/<cluster name>/<service name>
        resourceId: "service/example-cluster/example-service",
        // This dimension allows us to scale ECS services
        scalableDimension: "ecs:service:DesiredCount",
        // The service namespace for ECS
        serviceNamespace: "ecs",
        // The minimum and maximum number of tasks that the service should maintain
        minCapacity: 1,
        maxCapacity: 10,
    });

    // Now, let's define the scaling policy based on CPU utilization
    const cpuUtilizationScalingPolicy = new aws.appautoscaling.Policy("cpuUtilizationScalingPolicy", {
        // Reference to the scalable target
        resourceId: ecsScalableTarget.resourceId,
        scalableDimension: ecsScalableTarget.scalableDimension,
        serviceNamespace: ecsScalableTarget.serviceNamespace,
        policyType: "TargetTrackingScaling", // We use target tracking for this example

        // Configure the target tracking scaling policy
        targetTrackingScalingPolicyConfiguration: {
            // The target value for average CPU utilization
            targetValue: 70.0,
            // Disable scale-in so we only scale out for the purpose of this example
            disableScaleIn: true,
            // Define the CloudWatch metric to use for scaling: Average CPU Utilization
            predefinedMetricSpecification: {
                predefinedMetricType: "ECSServiceAverageCPUUtilization",
            },
            // Cooldown periods after scaling operations
            scaleInCooldown: 300,
            scaleOutCooldown: 300,
        },
    });

    // Exporting the policy ARN so it can be referenced in other parts of the Pulumi program or stack
    export const policyArn = cpuUtilizationScalingPolicy.arn;
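
    Note that with a target tracking policy, Application Auto Scaling creates and manages the CloudWatch alarms that drive scaling on your behalf, so the program does not need to define any alarm resources. Running pulumi up on this program will create both the scalable target and the policy.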

    Breakdown of the Components:

    1. aws.appautoscaling.Target (ecsScalableTarget):

      • resourceId: This is the identifier of the ECS service to autoscale.
      • scalableDimension: This indicates what aspect we want to scale; ecs:service:DesiredCount means we will be scaling the 'desired count' parameter of the service.
      • serviceNamespace: For ECS, the namespace is 'ecs'.
      • minCapacity and maxCapacity: Define the minimum and maximum number of ECS tasks that the service is allowed to scale between.
    2. aws.appautoscaling.Policy (cpuUtilizationScalingPolicy):

      • resourceId, scalableDimension, serviceNamespace: These fields reference our scalable target to apply this policy to.
      • policyType: We are using a 'TargetTrackingScaling' policy, which adjusts the scalable target's capacity based on a target value for a specified CloudWatch metric.
      • targetTrackingScalingPolicyConfiguration: Defines the rules for the target tracking policy.
        • targetValue: The target average CPU utilization, in percent, that the policy tries to maintain; sustained utilization above this value triggers a scale-out.
        • disableScaleIn: Set to true to prevent the policy from scaling in, so this example only ever scales out.
        • predefinedMetricSpecification: Specifies using a predefined CloudWatch metric, in this case, ECS service average CPU utilization.
        • scaleInCooldown and scaleOutCooldown: The number of seconds to wait before scaling in or out after the last scaling activity.

    Finally, we export the policy ARN. This is useful if you want to reference it elsewhere in your Pulumi program or from other stacks as a stack output.
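
    As noted earlier, a target tracking policy can also follow a custom CloudWatch metric rather than a predefined one. The sketch below reuses the scalable target from the example above and swaps predefinedMetricSpecification for customizedMetricSpecification; the metric name, namespace, and dimension are hypothetical placeholders for a metric your application would actually publish:

    const customMetricScalingPolicy = new aws.appautoscaling.Policy("customMetricScalingPolicy", {
        resourceId: ecsScalableTarget.resourceId,
        scalableDimension: ecsScalableTarget.scalableDimension,
        serviceNamespace: ecsScalableTarget.serviceNamespace,
        policyType: "TargetTrackingScaling",
        targetTrackingScalingPolicyConfiguration: {
            // Aim to keep the tracked metric at roughly this value per task
            targetValue: 100,
            customizedMetricSpecification: {
                metricName: "QueueDepthPerTask",   // hypothetical application metric
                namespace: "MyApp/Scaling",        // hypothetical metric namespace
                statistic: "Average",
                dimensions: [{ name: "Service", value: "example-service" }],
            },
            scaleInCooldown: 300,
            scaleOutCooldown: 300,
        },
    });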

    Please make sure to replace placeholders like example-cluster and example-service with your actual ECS cluster and service names. Also, adjust the minCapacity and maxCapacity to suit the scaling needs of your service.
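
    If the ECS cluster and service are defined in the same Pulumi program, you can also build the resource ID from those resources instead of hard-coding their names. A minimal sketch, assuming cluster and service are existing aws.ecs.Cluster and aws.ecs.Service resources elsewhere in the program:

    import * as pulumi from "@pulumi/pulumi";

    // Derive the scalable target's resourceId from the resources themselves so it
    // stays correct if the cluster or service name changes. "cluster" and "service"
    // are assumed to be defined elsewhere in this program.
    const ecsResourceId = pulumi.interpolate`service/${cluster.name}/${service.name}`;

    const scalableTarget = new aws.appautoscaling.Target("scalableTarget", {
        resourceId: ecsResourceId,
        scalableDimension: "ecs:service:DesiredCount",
        serviceNamespace: "ecs",
        minCapacity: 1,
        maxCapacity: 10,
    });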