1. Using Kubernetes ratelimit.solo.io with distro.eks.amazonaws.com


    When you want to integrate rate limiting into a Kubernetes cluster hosted on Amazon EKS, you would typically reach for a solution like ratelimit.solo.io, Solo.io's rate limiting service that works with Envoy's rate limit filter so the Envoy sidecars in your service mesh can enforce such policies. To achieve this, you would first set up an Amazon EKS cluster, then deploy your applications along with Envoy as a sidecar, and finally apply the rate limiting configuration through Kubernetes resources.

    Below you will find a Pulumi program written in TypeScript that lays the foundation for this scenario. It creates an EKS cluster on AWS using the eks.Cluster resource from Pulumi's EKS package. This program does not include the specifics of deploying ratelimit.solo.io because that procedure typically involves Kubernetes manifests and configurations specific to your environment and application.

    However, the program demonstrates how you can use Pulumi to set up an EKS cluster, upon which you could deploy your applications and the rate limiting filter:

    import * as eks from "@pulumi/eks";

    // Create an EKS cluster with a default worker node group.
    const cluster = new eks.Cluster("my-cluster", {
        desiredCapacity: 2,          // Desired number of worker nodes.
        minSize: 1,                  // Minimum number of worker nodes.
        maxSize: 3,                  // Maximum number of worker nodes.
        endpointPublicAccess: true,  // Allow public access to the Kubernetes API server endpoint.
    });

    // Export the cluster's kubeconfig so tools like kubectl can connect to it.
    export const kubeconfig = cluster.kubeconfig;

    // Export the cluster's API server endpoint.
    export const clusterEndpoint = cluster.core.endpoint;

    In this program:

    • We're importing the eks module from the Pulumi EKS package to handle all of the EKS-related resources.
    • We create an eks.Cluster resource, which provisions an EKS cluster and a default worker node group. You can adjust desiredCapacity, minSize, and maxSize to control the size of that node group.
    • We export the kubeconfig, which you will need to interact with your Kubernetes cluster using tools like kubectl or other Kubernetes-compatible tools.
    • We set endpointPublicAccess to true for demonstration purposes, allowing public access to the Kubernetes API server endpoint. Review this setting against your own security requirements.
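
    If you plan to deploy the service mesh and rate limiting manifests with Pulumi as well, you can feed the exported kubeconfig into a Kubernetes provider. The following is a minimal sketch, assuming the @pulumi/kubernetes package and the cluster variable from the program above; the resource names are placeholders:

    import * as k8s from "@pulumi/kubernetes";

    // Build a Kubernetes provider from the EKS cluster's kubeconfig so that
    // subsequent Kubernetes resources are deployed into this cluster.
    const k8sProvider = new k8s.Provider("eks-k8s", {
        kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
    });

    // Example: a namespace to hold the rate limit service and its configuration.
    const rateLimitNamespace = new k8s.core.v1.Namespace("ratelimit-ns", {
        metadata: { name: "ratelimit" },
    }, { provider: k8sProvider });

    The eks.Cluster resource also exposes a provider property that can be passed to Kubernetes resources directly; constructing the provider from the kubeconfig is shown here only to make the wiring explicit.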

    Once the cluster is up and running, you would typically proceed with the following steps, which are not covered in the Pulumi program due to their specificity and complexity:

    1. Apply the requisite service mesh (like Istio with Envoy) onto the cluster. This involves Kubernetes manifests and potentially additional Pulumi code to handle custom resources.
    2. Deploy your applications with the Envoy sidecar attached. In service meshes like Istio, this is typically handled automatically via automatic sidecar injection.
    3. Deploy the rate limit service (ratelimit.solo.io) to your cluster and configure it. Typically this means defining a ConfigMap with the rate limit descriptors the service should enforce, and configuring Envoy to call that service when applying rate limits (see the sketch after this list).
    4. Apply the rate limit policy configuration as EnvoyFilter resources if you are using Istio, or the equivalent in other service meshes (also sketched below).
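
    As a rough illustration of steps 3 and 4, the sketch below reuses the hypothetical k8sProvider and ratelimit namespace from the earlier snippet. It assumes a rate limit service that consumes the standard Envoy rate limit descriptor format and Istio as the mesh; the domain, descriptor keys, and cluster_name are placeholders, and the exact configuration schema expected by ratelimit.solo.io should be taken from the solo.io documentation:

    // ConfigMap holding rate limit descriptors in the standard Envoy rate limit
    // service format (a domain plus a list of descriptors). All values here are
    // illustrative placeholders.
    const rateLimitConfig = new k8s.core.v1.ConfigMap("ratelimit-config", {
        metadata: { name: "ratelimit-config", namespace: "ratelimit" },
        data: {
            "config.yaml": `
    domain: my-ratelimit-domain
    descriptors:
      - key: generic_key
        value: default
        rate_limit:
          unit: minute
          requests_per_unit: 100
    `,
        },
    }, { provider: k8sProvider });

    // If you are using Istio, an EnvoyFilter can insert Envoy's HTTP rate limit
    // filter and point it at the rate limit service. The cluster_name must match
    // a cluster that resolves to the rate limit service (typically added with a
    // separate CLUSTER patch, omitted here for brevity).
    const rateLimitFilter = new k8s.apiextensions.CustomResource("ratelimit-envoyfilter", {
        apiVersion: "networking.istio.io/v1alpha3",
        kind: "EnvoyFilter",
        metadata: { name: "filter-ratelimit", namespace: "istio-system" },
        spec: {
            workloadSelector: { labels: { istio: "ingressgateway" } },
            configPatches: [{
                applyTo: "HTTP_FILTER",
                match: {
                    context: "GATEWAY",
                    listener: {
                        filterChain: {
                            filter: { name: "envoy.filters.network.http_connection_manager" },
                        },
                    },
                },
                patch: {
                    operation: "INSERT_BEFORE",
                    value: {
                        name: "envoy.filters.http.ratelimit",
                        typed_config: {
                            "@type": "type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit",
                            domain: "my-ratelimit-domain",
                            rate_limit_service: {
                                transport_api_version: "V3",
                                grpc_service: {
                                    envoy_grpc: { cluster_name: "rate_limit_cluster" },
                                },
                            },
                        },
                    },
                },
            }],
        },
    }, { provider: k8sProvider });

    In a solo.io-based installation you may instead (or additionally) work with RateLimitConfig custom resources from the ratelimit.solo.io API group; the solo.io documentation is the authoritative reference for the exact kinds and fields.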

    Remember, after setting up the cluster, you would still need to create the necessary Kubernetes manifests for your application and the rate limiting configurations, which will depend on your specific use case and environment.

    Note: To proceed with the deployment of ratelimit.solo.io, refer to the solo.io documentation and to the documentation for the service mesh you are using.