1. Auto-scaling EKS nodes across multiple VPC Availability Zones

    To auto-scale Amazon EKS worker nodes across multiple Availability Zones, we'll configure an EKS cluster and an EKS Node Group. The Node Group scales across the desired Availability Zones, which you specify by supplying subnets from your VPC, one in each AZ.

    Here's an overview of the steps we are going to perform:

    1. Create an IAM role for the EKS Cluster with the necessary permissions.
    2. Define an EKS cluster with the aws.eks.Cluster resource.
    3. Define an EKS Node Group using the aws.eks.NodeGroup resource with the scalingConfig property to enable auto-scaling.
    4. Associate the EKS Node Group with multiple subnets, each in a different Availability Zone, to ensure multi-AZ scaling (a subnet lookup sketch follows this list).
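
    If you'd rather discover the subnet IDs than hard-code them, here is a minimal lookup sketch using aws.ec2.getSubnets. The VPC ID and the tier tag below are assumptions for illustration; filter on whatever identifies the subnets in your VPC.

    ```typescript
    import * as aws from "@pulumi/aws";

    // Look up subnet IDs instead of hard-coding them. The vpc-id value and
    // the "tier" tag below are hypothetical; adjust the filters to your VPC.
    const privateSubnets = aws.ec2.getSubnets({
        filters: [
            { name: "vpc-id", values: ["vpc-0123456789abcdef0"] }, // hypothetical VPC ID
            { name: "tag:tier", values: ["private"] },             // hypothetical tag
        ],
    });
    export const discoveredSubnetIds = privateSubnets.then(s => s.ids);
    ```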

    The aws.eks.Cluster resource sets up the EKS cluster control plane, while the aws.eks.NodeGroup resource creates an associated group of worker nodes that will run your containerized applications. It is this Node Group that is configured to auto-scale.

    Let's proceed with the Pulumi TypeScript program:

    ```typescript
    import * as aws from "@pulumi/aws";
    import * as pulumi from "@pulumi/pulumi";

    // Create an IAM role that will be used by the EKS cluster.
    const eksRole = new aws.iam.Role("eksRole", {
        assumeRolePolicy: aws.iam.getPolicyDocument({
            statements: [{
                actions: ["sts:AssumeRole"],
                principals: [{
                    type: "Service",
                    identifiers: ["eks.amazonaws.com"],
                }],
            }],
        }).then(doc => doc.json),
    });

    // Attach the AmazonEKSClusterPolicy to the IAM role.
    const eksClusterPolicyAttachment = new aws.iam.RolePolicyAttachment("eksClusterPolicyAttachment", {
        policyArn: "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
        role: eksRole,
    });

    // Specify the subnets across multiple Availability Zones where the worker nodes will be placed.
    const subnetIds = [
        "subnet-abcde012", // Replace with your actual subnet ID in AZ 1
        "subnet-bcde012a", // Replace with your actual subnet ID in AZ 2
        "subnet-fghij034", // Replace with your actual subnet ID in AZ 3
    ];

    // Create an EKS cluster.
    const eksCluster = new aws.eks.Cluster("eksCluster", {
        roleArn: eksRole.arn,
        version: "1.30", // Specify a currently supported Kubernetes version
        vpcConfig: {
            subnetIds: subnetIds,
        },
    });

    // Create an IAM role for the EKS Node Group.
    const ngRole = new aws.iam.Role("ngRole", {
        assumeRolePolicy: aws.iam.getPolicyDocument({
            statements: [{
                actions: ["sts:AssumeRole"],
                principals: [{
                    type: "Service",
                    identifiers: ["ec2.amazonaws.com"],
                }],
            }],
        }).then(doc => doc.json),
    });

    // Attach the policies the worker nodes need to the Node Group IAM role,
    // using each policy's short name for the Pulumi resource name.
    const ngPolicies = [
        "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
        "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
        "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
    ].map(policyArn => new aws.iam.RolePolicyAttachment(`${policyArn.split("/")[1]}-attachment`, {
        policyArn,
        role: ngRole,
    }));

    // Create an EKS Node Group with auto-scaling configuration.
    const ng = new aws.eks.NodeGroup("ng", {
        clusterName: eksCluster.name,
        nodeRoleArn: ngRole.arn,
        instanceTypes: ["t3.medium"], // Specify the instance type as per your requirements
        scalingConfig: {
            desiredSize: 2, // Initial desired number of nodes
            minSize: 1,     // Minimum number of nodes
            maxSize: 4,     // Maximum number of nodes
        },
        subnetIds: subnetIds, // The subnets span multiple AZs
    }, { dependsOn: [eksClusterPolicyAttachment, ...ngPolicies] });

    // Export the cluster's kubeconfig.
    export const kubeconfig = pulumi.all([
        eksCluster.name,
        eksCluster.endpoint,
        eksCluster.certificateAuthority,
    ]).apply(([name, endpoint, certificateAuthority]) => `apiVersion: v1
    clusters:
    - cluster:
        server: ${endpoint}
        certificate-authority-data: ${certificateAuthority.data}
      name: ${name}
    contexts:
    - context:
        cluster: ${name}
        user: ${name}
      name: ${name}
    current-context: ${name}
    kind: Config
    preferences: {}
    users:
    - name: ${name}
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          args:
          - --region
          - ${aws.config.region}
          - eks
          - get-token
          - --cluster-name
          - ${name}
          command: aws
          env: null
    `);
    ```

    This program creates an EKS Node Group that spans multiple Availability Zones for high availability. The Node Group's scalingConfig property defines the minimum, maximum, and desired number of nodes; these are bounds on the node group's underlying Auto Scaling group, and something like the Kubernetes Cluster Autoscaler must run in the cluster to move the node count within them automatically. The subnetIds array defines which subnets in your VPC (across different AZs) the nodes will be placed in.
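
    As a minimal sketch of that last point, the Cluster Autoscaler can be installed through Pulumi's Kubernetes provider and the upstream Helm chart, reusing the kubeconfig exported above. This assumes the managed node group's Auto Scaling group carries the chart's auto-discovery tags, and it omits the IAM permissions the autoscaler needs (autoscaling:SetDesiredCapacity and related actions), so treat it as a starting point rather than a complete deployment:

    ```typescript
    import * as k8s from "@pulumi/kubernetes";

    // A Kubernetes provider that talks to the new EKS cluster.
    const k8sProvider = new k8s.Provider("k8sProvider", { kubeconfig });

    // The Cluster Autoscaler watches for unschedulable pods and resizes the
    // node group within its scalingConfig bounds. autoDiscovery.clusterName
    // and awsRegion are documented values of the upstream chart.
    const clusterAutoscaler = new k8s.helm.v3.Chart("cluster-autoscaler", {
        chart: "cluster-autoscaler",
        fetchOpts: { repo: "https://kubernetes.github.io/autoscaler" },
        namespace: "kube-system",
        values: {
            autoDiscovery: { clusterName: eksCluster.name },
            awsRegion: aws.config.region,
        },
    }, { provider: k8sProvider });
    ```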

    Remember to replace the placeholder subnet IDs with real subnet IDs from your VPC, and the instance type with one that matches your workload requirements.
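
    If you want to trade guaranteed capacity for cost, managed node groups also support Spot capacity and multiple instance types through the capacityType and instanceTypes properties. A variant of the node group above, with an illustrative instance type list:

    ```typescript
    // A Spot-backed node group; EKS draws from the listed instance types
    // based on availability and price. The list here is only an example.
    const spotNg = new aws.eks.NodeGroup("spotNg", {
        clusterName: eksCluster.name,
        nodeRoleArn: ngRole.arn,
        capacityType: "SPOT",
        instanceTypes: ["t3.medium", "t3a.medium"],
        scalingConfig: { desiredSize: 2, minSize: 1, maxSize: 4 },
        subnetIds: subnetIds,
    }, { dependsOn: [eksClusterPolicyAttachment, ...ngPolicies] });
    ```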

    Also, ensure your Pulumi stack is configured with the appropriate AWS region. For instance, you can set it via the command line:

    pulumi config set aws:region us-west-2
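
    Equivalently, the region can live in the stack's configuration file, Pulumi.<stack>.yaml:

    ```yaml
    config:
      aws:region: us-west-2
    ```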

    Upon deployment, this program provides a multi-AZ EKS cluster whose node group can scale within the configured bounds. The kubeconfig export at the end of the program outputs the data needed to connect to the cluster with kubectl.
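
    For example, once the stack is up, you can write the exported kubeconfig to a file and point kubectl at it (the file name is just an example):

    ```bash
    pulumi stack output kubeconfig > kubeconfig.yaml
    KUBECONFIG=./kubeconfig.yaml kubectl get nodes
    ```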

    You might also need to install the AWS CLI and configure your AWS credentials, since the generated kubeconfig shells out to the aws command for cluster authentication.
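
    A quick way to check that the CLI is in place:

    ```bash
    aws configure                 # prompts for access key, secret key, and default region
    aws sts get-caller-identity   # confirms the credentials resolve to an account
    ```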

    Please note that you should not commit sensitive data like subnet IDs and IAM role ARNs to version control systems in plain text. Consider using secret managers or Pulumi's secret configuration system for managing sensitive data.
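
    As a sketch of the latter, a value can be stored encrypted with pulumi config set --secret and read back as a secret output; the key name subnetIds is only an example:

    ```typescript
    import * as pulumi from "@pulumi/pulumi";

    // Stored beforehand with (key name is illustrative):
    //   pulumi config set --secret subnetIds '["subnet-abcde012","subnet-bcde012a"]'
    const config = new pulumi.Config();

    // requireSecretObject parses the JSON value and marks it as a secret, so it
    // stays encrypted in the stack's state and is elided from console output.
    const secretSubnetIds = config.requireSecretObject<string[]>("subnetIds");
    ```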