
How Do I Auto-Scale Kubernetes Pods Based on Dynatrace Performance Metrics?

Introduction

In this guide, we will demonstrate how to set up auto-scaling for Kubernetes pods based on performance metrics provided by Dynatrace. Using Pulumi, we will define a HorizontalPodAutoscaler (HPA) resource that dynamically adjusts the number of replicas in a deployment based on an external metric. Note that the HPA reads such metrics through the Kubernetes external metrics API (external.metrics.k8s.io), so a metrics adapter that exposes Dynatrace data through that API must already be running in the cluster. This approach helps keep your application's performance and resource utilization within the targets you define.

Key Points

  • Kubernetes Deployment: We will start by creating a Kubernetes deployment to host a sample application.
  • Horizontal Pod Autoscaler Setup: We will configure an HPA that consumes external metrics sourced from Dynatrace, enabling it to scale the pods effectively.
  • Dynamic Scaling: The HPA will automatically adjust the number of pods based on real-time performance metrics, ensuring efficient resource management.

Detailed Steps

  1. Create a Kubernetes Deployment:

    • Begin by defining a Kubernetes deployment that will run a sample application. This involves specifying the application labels, container image, and other necessary configurations.
  2. Set up the Horizontal Pod Autoscaler:

    • Configure the HPA to use a Dynatrace metric for scaling decisions. This involves specifying the target deployment, the allowed range of replica counts, and the external metric from Dynatrace that triggers scaling actions. The complete Pulumi program below implements both steps.
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Create a Kubernetes namespace
const namespace = new k8s.core.v1.Namespace("dynatrace-namespace");

// Create a Kubernetes Deployment
const appLabels = { app: "my-app" };
const deployment = new k8s.apps.v1.Deployment("my-app-deployment", {
    metadata: {
        namespace: namespace.metadata.name,
    },
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "my-app",
                    image: "nginx",
                    ports: [{ containerPort: 80 }],
                }],
            },
        },
    },
});

// Create a Horizontal Pod Autoscaler
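// NOTE: The External metric below is served through the Kubernetes external metrics
// API (external.metrics.k8s.io). This assumes a metrics adapter that exposes
// Dynatrace data through that API is already installed in the cluster; the metric
// name and label selector are placeholders and must match what your adapter publishes.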
const hpa = new k8s.autoscaling.v2.HorizontalPodAutoscaler("my-app-hpa", {
    metadata: {
        namespace: namespace.metadata.name,
    },
    spec: {
        scaleTargetRef: {
            apiVersion: "apps/v1",
            kind: "Deployment",
            name: deployment.metadata.name,
        },
        minReplicas: 1,
        maxReplicas: 10,
        metrics: [{
            type: "External",
            external: {
                metric: {
                    name: "dynatrace_custom_metric",
                    selector: {
                        matchLabels: {
                            dynatrace: "custom-metric",
                        },
                    },
                },
                target: {
                    type: "Value",
                    value: "100",
                },
            },
        }],
    },
});

// Export the namespace and deployment names
export const namespaceName = namespace.metadata.name;
export const deploymentName = deployment.metadata.name;
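
If the Dynatrace metric represents a per-pod quantity (for example, requests per second handled by each pod), an AverageValue target is usually a better fit than the Value target used above, because the HPA divides the observed metric by the number of running pods when computing the desired replica count. The following is a minimal sketch of that alternative, reusing the namespace and deployment defined above; the resource name my-app-hpa-per-pod, the metric name, and the labels are placeholders that must match what your metrics adapter exposes.

// Alternative HPA (sketch): scale on the average metric value per pod rather than
// the total. This would replace the HPA above; do not attach two HPAs to the same
// Deployment.
const hpaPerPod = new k8s.autoscaling.v2.HorizontalPodAutoscaler("my-app-hpa-per-pod", {
    metadata: {
        namespace: namespace.metadata.name,
    },
    spec: {
        scaleTargetRef: {
            apiVersion: "apps/v1",
            kind: "Deployment",
            name: deployment.metadata.name,
        },
        minReplicas: 1,
        maxReplicas: 10,
        metrics: [{
            type: "External",
            external: {
                metric: {
                    name: "dynatrace_custom_metric", // placeholder metric name
                    selector: { matchLabels: { dynatrace: "custom-metric" } },
                },
                target: {
                    type: "AverageValue",
                    averageValue: "100", // target average value per pod
                },
            },
        }],
    },
});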

Conclusion

In this guide, we successfully set up a Kubernetes deployment and configured a Horizontal Pod Autoscaler using Dynatrace performance metrics. By implementing this solution, we enable our application to dynamically adjust the number of running pods based on real-time performance data. This setup not only optimizes resource usage but also helps maintain the desired performance levels of your application, adapting to varying loads and ensuring reliability.

Deploy this code

Want to deploy this code? Sign up for a free Pulumi account to deploy in a few clicks.

Sign up
