1. Using Kubernetes KEDA (keda.sh) with fluentbit.fluent.io

    TypeScript

    To integrate KEDA (Kubernetes Event-driven Autoscaling) with fluentbit.fluent.io (the API group used by the Fluent Operator to manage Fluent Bit) in a Kubernetes cluster, we need to create a few resources. We will use Pulumi's Kubernetes provider to manage them on the cluster.

    Below, you'll find a Pulumi TypeScript program that sets up a KEDA ScaledObject for Fluent Bit. The example assumes that you already have Fluent Bit running in your cluster as a Deployment (for example, as a log aggregator) and that you want KEDA to scale the number of Fluent Bit replicas based on some metric. Note that a ScaledObject cannot target a DaemonSet: KEDA only scales workloads that expose the scale subresource, such as Deployments and StatefulSets, so a DaemonSet-based log collector would need a different approach.

    We will start by explaining each step of the process and then provide the complete Pulumi program.

    1. Define the Fluent Bit ScaledObject: The ScaledObject resource is a custom resource defined by KEDA that tells it how to scale the specified workload. In this case, the workload will be our fluentbit Deployment.

    2. Set Up the Autoscaling Triggers: The triggers within the ScaledObject define what metrics KEDA should use to decide when to scale in and scale out. These could be based on metrics like CPU usage, memory usage, or custom metrics from an external system.

    3. Deploy the ScaledObject: Using Pulumi, you apply the ScaledObject to your cluster, which KEDA will then use to begin autoscaling the fluentbit workload based on the defined triggers.
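
    Before the ScaledObject can take effect, the KEDA operator itself must be running in the cluster. If it is not installed yet, a minimal sketch using Pulumi's Helm support could look like the following (the chart name and repository come from the official KEDA Helm charts; the release name and the keda namespace are assumptions):

    import * as kubernetes from "@pulumi/kubernetes";

    // Install the KEDA operator from its official Helm chart.
    // Pin a chart version in real deployments to keep upgrades deliberate.
    const keda = new kubernetes.helm.v3.Release("keda", {
        chart: "keda",
        namespace: "keda",
        createNamespace: true,
        repositoryOpts: {
            repo: "https://kedacore.github.io/charts",
        },
    });

    Alternatively, KEDA publishes plain YAML manifests with each release, which could be applied with kubernetes.yaml.ConfigFile instead.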

    Here is a Pulumi TypeScript program that creates a simple ScaledObject for a hypothetical Fluent Bit deployment. Please note that the specifics of the autoscaling triggers will depend on your particular use case and environment. Additionally, ensure that the KEDA operator (installed above or by another means) and Fluent Bit itself, whether deployed directly or through the Fluent Operator, are present in the cluster before applying this configuration.

    import * as kubernetes from "@pulumi/kubernetes";

    // Create a KEDA ScaledObject for Fluent Bit.
    const fluentBitScaledObject = new kubernetes.apiextensions.CustomResource("fluentbit-scaledobject", {
        apiVersion: "keda.sh/v1alpha1",
        kind: "ScaledObject",
        metadata: {
            // The name of the ScaledObject
            name: "fluentbit",
            // The namespace where your Fluent Bit and KEDA are running
            namespace: "logging",
            // Labels to link the ScaledObject to the target deployment
            labels: {
                "deploymentName": "fluentbit",
            },
        },
        spec: {
            // The scale target refers to the deployment that we want KEDA to scale
            scaleTargetRef: {
                name: "fluentbit", // should match the name of the Fluent Bit deployment
            },
            // Define one or more triggers that will cause the Fluent Bit deployment to scale
            triggers: [
                {
                    // For example, a Prometheus trigger that scales based on the result of a query
                    type: "prometheus",
                    metadata: {
                        serverAddress: "http://prometheus-server.monitoring.svc", // replace with your Prometheus server address
                        metricName: "fluentbit_input_records_total", // replace with the Fluent Bit metric you want to use for scaling
                        query: `fluentbit_input_records_total{job="kubernetes-pods",namespace="logging"}`, // your Prometheus query
                        threshold: "1000", // the threshold value for scaling up
                    },
                },
            ],
            // Minimum and maximum number of replicas to scale between
            minReplicaCount: 1,
            maxReplicaCount: 10,
            // Optional: fine-tune scaling behavior with additional policies
            advanced: { /* ... */ },
        },
    });

    // Export the name of the ScaledObject
    export const scaledObjectName = fluentBitScaledObject.metadata.name;

    Explanation:

    • We declare a new KEDA ScaledObject for the Fluent Bit deployment in the logging namespace.
    • We point the scale target at the Fluent Bit deployment by name; the deploymentName label simply mirrors the target for readability.
    • A Prometheus trigger is used as an example, and you should substitute the serverAddress, metricName, and query fields with values that correspond to your actual setup. This assumes that Prometheus is already scraping Fluent Bit's metrics (Fluent Bit can expose them through its built-in HTTP server).
    • minReplicaCount and maxReplicaCount dictate the scaling boundaries for the number of Fluent Bit pods. You should adjust these values according to your load characteristics and resource capacity.
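
    The advanced block left empty in the program above is where KEDA's HPA-related tuning goes. A sketch of what could be placed there, with placeholder values rather than recommendations (the field names follow the KEDA ScaledObject specification):

    // Example contents for the `advanced` field of the ScaledObject spec.
    // All numeric values below are illustrative placeholders.
    const advanced = {
        // false (the default) keeps the current replica count if the ScaledObject is deleted.
        restoreToOriginalReplicaCount: false,
        // Settings passed through to the HorizontalPodAutoscaler that KEDA manages.
        horizontalPodAutoscalerConfig: {
            behavior: {
                scaleDown: {
                    // Wait out short lulls in log volume before scaling back down.
                    stabilizationWindowSeconds: 300,
                    // Remove at most half of the surplus replicas per minute.
                    policies: [
                        { type: "Percent", value: 50, periodSeconds: 60 },
                    ],
                },
            },
        },
    };

    This object slots directly into spec.advanced of the ScaledObject above.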

    Once this Pulumi program is applied to your cluster, KEDA will start to monitor the specified metric for Fluent Bit and scale the number of pods within the configured range based on the observed values. You can tune the parameters and triggers to match your specific monitoring setup and scaling requirements.
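
    If Prometheus is not part of your stack, the trigger can be swapped for one of KEDA's built-in resource scalers instead. A sketch of a CPU-utilization trigger, where the 80% target is an arbitrary placeholder:

    // A CPU-based trigger that could replace the Prometheus entry in the triggers array.
    // It scales on the average CPU utilization of the Fluent Bit pods.
    const cpuTrigger = {
        type: "cpu",
        metricType: "Utilization", // or "AverageValue"
        metadata: {
            value: "80", // target average CPU utilization in percent (placeholder)
        },
    };

    Note that KEDA's cpu scaler requires CPU resource requests to be defined on the Fluent Bit containers, since utilization is calculated relative to those requests.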