Scaling Kubernetes with KEDA and Karpenter

How do I scale Kubernetes with KEDA and Karpenter?

In this guide, we will demonstrate how to scale Kubernetes workloads dynamically using KEDA (Kubernetes Event-driven Autoscaling) and Karpenter. KEDA scales pods in response to events and external metrics, while Karpenter is an open-source node autoscaler that provisions and removes nodes based on the pods waiting to be scheduled.

Key Points

  • KEDA: Enables event-driven autoscaling for Kubernetes workloads.
  • Karpenter: Provides efficient and cost-effective node autoscaling.
  • Integration: Combining KEDA and Karpenter scales both pods and nodes: KEDA adjusts pod replicas from events and metrics, and Karpenter provisions nodes when those pods cannot be scheduled on existing capacity.

Steps

  1. Install KEDA: Deploy KEDA to your Kubernetes cluster.
  2. Install Karpenter: Deploy Karpenter to your Kubernetes cluster.
  3. Configure Scaling: Set up KEDA and Karpenter to work together to scale workloads and nodes dynamically.
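
Note that the ScaledObject in the program below targets a Deployment named my-deployment in the default namespace, which the program assumes already exists. If you need one for testing, a minimal sketch like the following could be added to the same program; it reuses the k8sProvider defined below, and the image, labels, and resource requests are placeholders rather than part of the original example:

// Hypothetical target workload for the ScaledObject (placeholder image and labels).
const appLabels = { app: "my-app" };
const myDeployment = new k8s.apps.v1.Deployment("my-deployment", {
    metadata: { name: "my-deployment", namespace: "default" },
    spec: {
        replicas: 1,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    // CPU requests are required for KEDA's cpu trigger to compute utilization.
                    resources: { requests: { cpu: "100m", memory: "128Mi" } },
                }],
            },
        },
    },
}, { provider: k8sProvider });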

Below is the Pulumi program to achieve this:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Create a Kubernetes provider
const k8sProvider = new k8s.Provider("k8sProvider", {
    kubeconfig: process.env.KUBECONFIG,
});

// Install KEDA using Helm
const kedaNamespace = new k8s.core.v1.Namespace("keda-namespace", {
    metadata: { name: "keda" },
}, { provider: k8sProvider });

const keda = new k8s.helm.v3.Chart("keda", {
    chart: "keda",
    version: "2.4.0",
    namespace: kedaNamespace.metadata.name,
    fetchOpts: {
        repo: "https://kedacore.github.io/charts",
    },
}, { provider: k8sProvider });

// Install Karpenter using Helm
const karpenterNamespace = new k8s.core.v1.Namespace("karpenter-namespace", {
    metadata: { name: "karpenter" },
}, { provider: k8sProvider });

const karpenter = new k8s.helm.v3.Chart("karpenter", {
    chart: "karpenter",
    version: "0.5.0",
    namespace: karpenterNamespace.metadata.name,
    fetchOpts: {
        repo: "https://charts.karpenter.sh",
    },
    // The cluster name and endpoint are passed as Helm values in this chart version
    // (placeholders here; adjust the value names if you use a newer chart).
    values: {
        controller: {
            clusterName: "my-cluster",
            clusterEndpoint: "https://my-cluster-endpoint",
        },
    },
}, { provider: k8sProvider });

// Create a KEDA ScaledObject to scale a deployment based on CPU utilization
const scaledObject = new k8s.apiextensions.CustomResource("scaledObject", {
    apiVersion: "keda.sh/v1alpha1",
    kind: "ScaledObject",
    metadata: {
        name: "my-scaledobject",
        namespace: "default",
    },
    spec: {
        // The Deployment to scale; it must already exist in the "default" namespace
        // and its containers must declare CPU requests for the cpu trigger to work.
        scaleTargetRef: {
            kind: "Deployment",
            name: "my-deployment",
        },
        // Scale out when average CPU utilization exceeds 50%.
        triggers: [{
            type: "cpu",
            metadata: {
                type: "Utilization",
                value: "50",
            },
        }],
    },
}, { provider: k8sProvider, dependsOn: [keda] }); // dependsOn ensures the KEDA CRDs exist first

// Create a Karpenter Provisioner to manage node scaling.
// The cluster name and endpoint are supplied via the Karpenter Helm values above;
// the v1alpha5 Provisioner spec does not carry them itself.
const provisioner = new k8s.apiextensions.CustomResource("provisioner", {
    apiVersion: "karpenter.sh/v1alpha5",
    kind: "Provisioner",
    metadata: {
        name: "default",
    },
    spec: {
        // Remove empty nodes 30 seconds after their last workload pod terminates.
        ttlSecondsAfterEmpty: 30,
        // Cap the total capacity this Provisioner may launch.
        limits: {
            resources: {
                cpu: "1000",
                memory: "1000Gi",
            },
        },
        // AWS-specific settings: the instance profile must allow nodes to join the
        // cluster. Depending on your Karpenter version, subnet and security group
        // discovery settings may also be required here.
        provider: {
            instanceProfile: "KarpenterInstanceProfile",
        },
    },
}, { provider: k8sProvider, dependsOn: [karpenter] }); // dependsOn ensures the Provisioner CRD exists first
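
The ScaledObject above uses KEDA's cpu trigger, which behaves much like a standard Horizontal Pod Autoscaler. KEDA's real value is its event-driven triggers. As an illustration only (the Prometheus address, metric name, threshold, and query below are assumptions, not part of the original program), a Prometheus-backed ScaledObject could look like this:

// Hypothetical event-driven alternative: scale on a Prometheus query instead of CPU.
const promScaledObject = new k8s.apiextensions.CustomResource("promScaledObject", {
    apiVersion: "keda.sh/v1alpha1",
    kind: "ScaledObject",
    metadata: {
        name: "my-prometheus-scaledobject",
        namespace: "default",
    },
    spec: {
        scaleTargetRef: {
            kind: "Deployment",
            name: "my-deployment",
        },
        minReplicaCount: 1,
        maxReplicaCount: 20,
        triggers: [{
            type: "prometheus",
            metadata: {
                // Placeholder values; point these at your own Prometheus and metric.
                serverAddress: "http://prometheus.monitoring.svc:9090",
                metricName: "http_requests_per_second",
                query: "sum(rate(http_requests_total[2m]))",
                threshold: "100",
            },
        }],
    },
}, { provider: k8sProvider, dependsOn: [keda] });

Because only one ScaledObject should target a given workload, in practice you would either replace the cpu trigger with this one or list both triggers in a single ScaledObject rather than creating two objects for the same Deployment.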

Summary

In this guide, we have set up KEDA and Karpenter in a Kubernetes cluster using Pulumi. We installed both tools with their Helm charts, configured a KEDA ScaledObject to scale a Deployment on CPU utilization, and created a Karpenter Provisioner to manage node capacity. With this setup, KEDA adjusts pod counts in response to load while Karpenter adds and removes nodes to fit those pods, improving resource usage and cost-efficiency in your cluster.
