Using Karpenter with Google Kubernetes Engine

How do I use Karpenter with Google Kubernetes Engine?

To set up Karpenter on Google Kubernetes Engine (GKE) with Pulumi, we will create a GKE cluster, install Karpenter into it, and configure it to manage the scaling of the cluster nodes. Karpenter is an open-source Kubernetes node autoscaler that provisions right-sized nodes in response to unschedulable pods, improving the efficiency and cost-effectiveness of cloud-native applications.

What We Will Do:

  1. Create a GKE cluster.
  2. Install Karpenter on the GKE cluster.
  3. Configure Karpenter to manage node scaling.

Below is the Pulumi program in TypeScript to achieve this:

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Create a GKE cluster
const cluster = new gcp.container.Cluster("gke-cluster", {
    initialNodeCount: 1,
    minMasterVersion: "1.20.10-gke.1000",
    nodeConfig: {
        machineType: "e2-medium",
        oauthScopes: [
            "https://www.googleapis.com/auth/cloud-platform",
        ],
    },
});

// Export the cluster name and endpoint
export const clusterName = cluster.name;
export const clusterEndpoint = cluster.endpoint;

// Get the GKE cluster credentials
const kubeconfig = pulumi.all([cluster.name, cluster.endpoint, cluster.masterAuth]).apply(([name, endpoint, auth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${auth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
});

// Create a Kubernetes provider using the kubeconfig
const provider = new k8s.Provider("gkeK8s", {
    kubeconfig: kubeconfig,
});

// Install Karpenter Helm chart
const karpenterNamespace = new k8s.core.v1.Namespace("karpenter", {
    metadata: {
        name: "karpenter",
    },
}, { provider });

const karpenterServiceAccount = new k8s.core.v1.ServiceAccount("karpenter-service-account", {
    metadata: {
        name: "karpenter",
        namespace: karpenterNamespace.metadata.name,
    },
}, { provider });

const karpenterHelmChart = new k8s.helm.v3.Chart("karpenter", {
    chart: "karpenter",
    version: "0.5.0",
    // Install the chart into the namespace created above so it matches the service account.
    namespace: karpenterNamespace.metadata.name,
    fetchOpts: {
        repo: "https://charts.karpenter.sh",
    },
    values: {
        serviceAccount: {
            create: false,
            name: karpenterServiceAccount.metadata.name,
        },
    },
}, { provider });

// Configure Karpenter with a Provisioner
const provisioner = new k8s.apiextensions.CustomResource("karpenter-provisioner", {
    apiVersion: "karpenter.sh/v1alpha5",
    kind: "Provisioner",
    metadata: {
        name: "default",
    },
    spec: {
        requirements: [
            { key: "node.kubernetes.io/instance-type", operator: "In", values: ["e2-medium", "e2-large"] },
            { key: "topology.kubernetes.io/zone", operator: "In", values: ["us-central1-a"] },
        ],
        provider: {
            machineType: "e2-medium",
        },
        limits: {
            resources: {
                cpu: "1000",
                memory: "4000Gi",
            },
        },
    },
}, { provider, dependsOn: karpenterHelmChart.ready }); // wait for the chart (and its CRDs) before creating the Provisioner

Key Points:

  • GKE Cluster: We create a GKE cluster with an initial node count and specific machine type.
  • Karpenter Installation: We install Karpenter using a Helm chart and configure it with a service account.
  • Karpenter Configuration: We set up a Karpenter Provisioner to manage node scaling based on the specified requirements; a sample workload to exercise it is sketched below.
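
To exercise the Provisioner, you can deploy a workload whose resource requests exceed the spare capacity of the existing nodes so that its pods go Pending. The sketch below reuses the provider and provisioner defined above; the deployment name, replica count, image, and resource requests are illustrative assumptions, and actual node provisioning depends on Karpenter being able to create instances in your GCP environment.

// A deliberately oversized workload to trigger node provisioning.
// Name, replica count, image, and requests are illustrative assumptions.
const scaleTest = new k8s.apps.v1.Deployment("karpenter-scale-test", {
    metadata: { name: "karpenter-scale-test" },
    spec: {
        replicas: 5,
        selector: { matchLabels: { app: "karpenter-scale-test" } },
        template: {
            metadata: { labels: { app: "karpenter-scale-test" } },
            spec: {
                containers: [{
                    name: "pause",
                    image: "registry.k8s.io/pause:3.9",
                    resources: {
                        requests: { cpu: "1", memory: "1Gi" },
                    },
                }],
            },
        },
    },
}, { provider, dependsOn: [provisioner] });

Once the pods sit in Pending for lack of capacity, watch the Karpenter controller logs and the node list to confirm that new nodes appear and the pending pods get scheduled onto them.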

Summary:

In this guide, we created a GKE cluster, installed Karpenter, and configured it to manage the scaling of the cluster nodes using Pulumi. This setup helps in efficiently managing Kubernetes workloads by automatically scaling the nodes based on demand.
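
If you want to inspect the cluster after pulumi up, it can help to export the kubeconfig and the Karpenter namespace as stack outputs. This is a minimal sketch that reuses the kubeconfig and karpenterNamespace values defined above; the output names are arbitrary, and the kubeconfig is wrapped in pulumi.secret so it is encrypted in the Pulumi state.

// Export the kubeconfig (as a secret) and the Karpenter namespace for use with kubectl.
export const kubeconfigOutput = pulumi.secret(kubeconfig);
export const karpenterNamespaceName = karpenterNamespace.metadata.name;

You can then run, for example, pulumi stack output kubeconfigOutput --show-secrets > kubeconfig.yaml followed by kubectl --kubeconfig kubeconfig.yaml get pods -n karpenter to confirm that the Karpenter controller is running.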
