Using Kubernetes KEDA (keda.sh) with OpenTelemetry (opentelemetry.io)
Kubernetes-based Event-Driven Autoscaling (KEDA) is a component that can be installed into your Kubernetes cluster. It automatically scales your Kubernetes workloads, including deployments, jobs, and any other resource that can be scaled.
OpenTelemetry, on the other hand, is an observability framework for cloud-native software and provides APIs and libraries to capture metrics, traces, and logs from your applications.
KEDA can use metrics from various sources to drive the autoscaling of workloads. If you're using OpenTelemetry, you might be collecting custom metrics that could inform KEDA when to autoscale your workloads.
Below is a Pulumi program in TypeScript that demonstrates how you could integrate KEDA with OpenTelemetry in a Kubernetes cluster. It assumes that you are familiar with Kubernetes, have a cluster available where the resources will be deployed, and have Pulumi installed and configured for use with that cluster.
Let's go over the steps we'll implement in this TypeScript program:
- Install KEDA into your Kubernetes cluster using a Helm chart.
- Deploy an application that utilizes OpenTelemetry for observability.
- Define a `ScaledObject` in KEDA for the deployment created in step 2, using a custom OpenTelemetry metric as a trigger for scaling.
Remember, to follow this walkthrough make sure you have Pulumi installed and configured to connect to your Kubernetes cluster. You will also need `kubectl` installed and configured to interact with the cluster. Here is the Pulumi TypeScript program:
```typescript
import * as kubernetes from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Assuming you have your Kubernetes cluster selected in your kubeconfig.
const provider = new kubernetes.Provider("k8s-provider", {});

// 1. Install KEDA using the Helm chart.
// Refer to the documentation:
// https://www.pulumi.com/registry/packages/kubernetes/api-docs/helm.sh/v3/chart/
const keda = new kubernetes.helm.v3.Chart("keda", {
    chart: "keda",
    version: "2.x.x", // Specify the version of KEDA you want to use
    fetchOpts: { repo: "https://kedacore.github.io/charts" },
}, { provider: provider });

// 2. Deploy an example application that uses OpenTelemetry.
const appLabels = { app: "opentelemetry-app" };
const application = new kubernetes.apps.v1.Deployment("app-deployment", {
    metadata: { labels: appLabels },
    spec: {
        replicas: 1,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "app",
                    image: "ghcr.io/open-telemetry/opentelemetry-operator-e2e-test:latest",
                    // OpenTelemetry instrumentation and configuration goes here
                }],
            },
        },
    },
}, { provider: provider });

// 3. Define a ScaledObject in KEDA for the application.
// The `ScaledObject` is the resource that KEDA uses to know when to
// scale your workload up or down based on metrics.
const scaledObject = new kubernetes.apiextensions.CustomResource("app-scaledobject", {
    apiVersion: "keda.sh/v1alpha1",
    kind: "ScaledObject",
    metadata: {
        name: "app-scaler",
        labels: appLabels,
        namespace: "default", // Consider using an application-specific namespace
    },
    spec: {
        scaleTargetRef: {
            kind: "Deployment",
            // Use the generated deployment name rather than a hard-coded
            // string, since Pulumi adds a suffix to resource names.
            name: application.metadata.name,
        },
        // Assuming that your OpenTelemetry Collector is set up
        // and is exporting metrics to a Prometheus-compatible endpoint.
        triggers: [{
            type: "prometheus",
            metadata: {
                // Point this at a Prometheus-formatted endpoint from which
                // to pull the metrics.
                serverAddress: "http://prometheus-operated.default.svc.cluster.local:9090",
                metricName: "custom_metric_name", // The name of your OpenTelemetry metric
                threshold: "100", // The threshold value for scaling
                query: "sum(rate(custom_metric_name[2m]))", // Adjust the query to select your OpenTelemetry metric
            },
        }],
    },
}, { provider: provider, dependsOn: [application] });

// Export the name of the deployment.
export const deploymentName = application.metadata.name;
```
In the above program, `@pulumi/kubernetes` is the package that contains the modules required to interact with Kubernetes through Pulumi. We create a `Provider` to specify which Kubernetes cluster Pulumi should target.

Step 1 installs KEDA using its Helm chart. Helm is a package manager for Kubernetes, and Pulumi allows you to leverage Helm charts directly within your program.
Step 2 deploys an example application which would have OpenTelemetry instrumentation in place. This application is just a placeholder and should be replaced with your actual application code/configuration that is meant to be scaled.
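For illustration, your actual application could expose a custom metric through the OpenTelemetry JavaScript SDK's Prometheus exporter, which is one common way to make OpenTelemetry metrics available at a Prometheus-scrapeable endpoint. This is a minimal sketch, assuming the `@opentelemetry/sdk-metrics` and `@opentelemetry/exporter-prometheus` packages are installed; the meter and counter names are placeholders:

```typescript
import { MeterProvider } from "@opentelemetry/sdk-metrics";
import { PrometheusExporter } from "@opentelemetry/exporter-prometheus";

// Serve metrics on the exporter's default port 9464 at /metrics,
// where Prometheus or an OpenTelemetry Collector can scrape them.
const exporter = new PrometheusExporter({ port: 9464 });

// Register the exporter as a metric reader on the meter provider.
const meterProvider = new MeterProvider({ readers: [exporter] });
const meter = meterProvider.getMeter("example-app");

// "custom_metric_name" must match the metric your ScaledObject queries.
const requestCounter = meter.createCounter("custom_metric_name", {
    description: "Number of processed requests",
});

// Increment the counter wherever your application does the work
// you want to scale on.
function handleRequest(): void {
    requestCounter.add(1, { route: "/work" });
}
```

The Prometheus `query` in the `ScaledObject` would then aggregate this counter's rate across all application pods.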
Lastly, step 3 creates a `ScaledObject` custom resource. `ScaledObject`s are the core of KEDA's functionality and define what to scale and the triggers that drive the scaling. In the example, we use a Prometheus source as the trigger. You would need to replace `custom_metric_name` with the actual metric you're collecting through OpenTelemetry and wish to use for scaling purposes.

To apply this Pulumi program, run `pulumi up` while in the directory containing the Pulumi code. If successful, Pulumi will deploy these resources into your Kubernetes cluster and return the outputs defined in the program.
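As a rough mental model of the trigger above: KEDA feeds the queried metric value to the Horizontal Pod Autoscaler, which (for average-value metrics) targets roughly the metric value divided by the trigger's `threshold`, clamped to the configured replica bounds. This is an illustrative sketch of that arithmetic, not KEDA's actual implementation; the function name and bounds are assumptions for the example:

```typescript
// Approximate the replica count an HPA would request for an
// average-value metric: ceil(metricValue / threshold), clamped
// between a minimum of 1 replica and maxReplicas.
function desiredReplicas(
    metricValue: number,
    threshold: number,
    maxReplicas: number,
): number {
    const raw = Math.ceil(metricValue / threshold);
    return Math.min(maxReplicas, Math.max(1, raw));
}

// With the example trigger's threshold of 100: a query result of 250
// yields ceil(250 / 100) = 3 replicas.
const replicas = desiredReplicas(250, 100, 10);
```

Tuning `threshold` therefore directly controls how aggressively the workload scales as the OpenTelemetry metric grows.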