1. Using Kubernetes gateway.networking.k8s.io with http.keda.sh


    To combine Kubernetes Gateway API (gateway.networking.k8s.io) with HTTP-based autoscaling using KEDA (http.keda.sh), we'll create a Pulumi program that sets up the following:

    1. A Kubernetes Gateway resource that represents a managed gateway at the edge of the service mesh to handle incoming traffic.
    2. A KEDA HTTP scaling resource (http.keda.sh) that scales the pods of a deployment up and down based on the volume of incoming HTTP requests.

    First, you'll need to have a Kubernetes cluster and have kubectl configured to interact with it. Make sure that your Pulumi environment is set up with access to the cluster and you have the necessary packages installed.
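    If you prefer not to rely on the ambient kubeconfig, you can pin resources to an explicit Kubernetes provider. A minimal sketch, where the context name is a placeholder for your own cluster context:

    ```typescript
    import * as k8s from "@pulumi/kubernetes";

    // An explicit provider bound to a specific kubeconfig and context,
    // rather than whatever kubeconfig is active in the environment.
    // "my-cluster-context" is a placeholder; substitute your own context.
    const provider = new k8s.Provider("cluster", {
        kubeconfig: process.env.KUBECONFIG, // or a kubeconfig string, e.g. from another stack's output
        context: "my-cluster-context",
    });
    ```

    Pass `{ provider }` in the resource options of each resource in the program below to route its operations through this provider.
    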

    The following Pulumi TypeScript program assumes that KEDA and its HTTP add-on are already installed in your cluster. If they are not, you can install them with Helm through Pulumi before deploying the scaling resources.
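    As a sketch of that bootstrap step, assuming the standard kedacore Helm charts (`keda` and `keda-add-ons-http`) and a `keda` namespace:

    ```typescript
    import * as k8s from "@pulumi/kubernetes";

    // Install core KEDA from the official kedacore chart repository.
    const keda = new k8s.helm.v3.Release("keda", {
        chart: "keda",
        namespace: "keda",
        createNamespace: true,
        repositoryOpts: { repo: "https://kedacore.github.io/charts" },
    });

    // Install the KEDA HTTP add-on (the http.keda.sh API group) into the
    // same namespace, after core KEDA is in place.
    const httpAddOn = new k8s.helm.v3.Release("keda-http-add-on", {
        chart: "keda-add-ons-http",
        namespace: "keda",
        repositoryOpts: { repo: "https://kedacore.github.io/charts" },
    }, { dependsOn: [keda] });
    ```

    With these releases applied first, the HTTPScaledObject CRD and the add-on's interceptor service are available for the program below.
    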

    Let's proceed with the Pulumi program:

    ```typescript
    import * as k8s from "@pulumi/kubernetes";

    // Assumptions (adjust for your environment):
    // - The Gateway API CRDs (gateway.networking.k8s.io) are installed on the cluster.
    // - KEDA and its HTTP add-on (http.keda.sh) are installed.
    // - A GatewayClass named "nginx" exists; hostnames and service names are placeholders.
    const host = "example.com";
    const appNamespace = "default";

    // A Gateway resource that accepts HTTP traffic at the edge of the cluster.
    const gateway = new k8s.apiextensions.CustomResource("http-api-gateway", {
        apiVersion: "gateway.networking.k8s.io/v1",
        kind: "Gateway",
        metadata: {
            name: "http-api-gateway",
            namespace: appNamespace,
        },
        spec: {
            gatewayClassName: "nginx", // Must match a GatewayClass installed in your cluster.
            listeners: [{
                name: "http",
                protocol: "HTTP",
                port: 80,
                hostname: host,
            }],
        },
    });

    // An HTTPRoute that forwards matching requests to the KEDA HTTP add-on's
    // interceptor proxy, which serves the traffic and records the request
    // metrics used for scaling. Verify the service name, namespace, and port
    // in your installation; a cross-namespace backendRef also requires a
    // ReferenceGrant in the target namespace.
    const route = new k8s.apiextensions.CustomResource("http-api-route", {
        apiVersion: "gateway.networking.k8s.io/v1",
        kind: "HTTPRoute",
        metadata: {
            name: "http-api-route",
            namespace: appNamespace,
        },
        spec: {
            parentRefs: [{ name: "http-api-gateway" }],
            hostnames: [host],
            rules: [{
                backendRefs: [{
                    name: "keda-add-ons-http-interceptor-proxy",
                    namespace: "keda",
                    port: 8080,
                }],
            }],
        },
    }, { dependsOn: [gateway] });

    // An HTTPScaledObject that scales the target Deployment based on the
    // incoming HTTP traffic for the given host.
    const httpScaledObject = new k8s.apiextensions.CustomResource("http-scaled-object", {
        apiVersion: "http.keda.sh/v1alpha1",
        kind: "HTTPScaledObject",
        metadata: {
            name: "http-scaled-object",
            // Ensure this is the same namespace as your deployment and service.
            namespace: appNamespace,
        },
        spec: {
            hosts: [host],
            scaleTargetRef: {
                // Field names follow the HTTP add-on v0.8+ API; check your installed version.
                name: "your-deployment-name",
                kind: "Deployment",
                apiVersion: "apps/v1",
                service: "your-service-name",
                port: 80,
            },
            // Define the scaling bounds.
            replicas: {
                min: 1,
                max: 10,
            },
        },
    }, { dependsOn: [gateway] });

    // Export the URL at which the gateway serves the application. The hostname
    // must resolve to the gateway's address (for example, via DNS or /etc/hosts).
    export const gatewayUrl = `http://${host}`;
    ```

    In the above code:

    • We first define a Gateway resource that accepts HTTP traffic at the edge of the cluster. The exact spec depends on your environment and gateway controller; the values shown are placeholders.
    • Next, we define the KEDA HTTP scaling resource. It references a Deployment (not shown in the code) and scales it in and out based on the volume of HTTP requests it receives.
    • dependsOn ensures that the scaling resource is created only after the Gateway resource exists.
    • Finally, we export gatewayUrl, the address through which traffic reaches the gateway.

    Please replace "your-deployment-name" and the other placeholder values with ones that match your deployment. Also, adjust the minimum and maximum replica counts to suit your use case.
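    Rather than hard-coding these values, one option is to read them from Pulumi stack configuration so each stack can supply its own. The config key names below are illustrative:

    ```typescript
    import * as pulumi from "@pulumi/pulumi";

    // Read per-stack values from Pulumi config instead of hard-coding them.
    // Keys ("deploymentName", "serviceName", "maxReplicas") are this example's
    // own choices, not required names.
    const config = new pulumi.Config();
    const deploymentName = config.require("deploymentName");
    const serviceName = config.require("serviceName");
    const maxReplicas = config.getNumber("maxReplicas") ?? 10; // default to 10 if unset
    ```

    Values are then set per stack, for example with `pulumi config set deploymentName my-app`, and the constants above can be used in place of the string literals in the program.
    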

    To deploy this program:

    1. Save the code in a file, for example, main.ts.
    2. Run pulumi up from your terminal in the same directory where your main.ts file resides.
    3. Pulumi will perform the deployment, and once finished, it will output the gatewayUrl, which you can use to access your Kubernetes services through the gateway.

    Remember to verify the configurations and compatibility with your existing Kubernetes environment, and adjust the placeholder values to match your deployment details.