Enabling on-premises Kubernetes cluster load balancing

How do I enable load balancing for an on-premises Kubernetes cluster?

To enable load balancing for an on-premises Kubernetes cluster, you can use Kubernetes’ built-in resources like Service and Ingress. These resources allow you to distribute traffic across multiple backend pods and expose your services to external traffic.

Here is a detailed explanation of how to set up load balancing for an on-premises Kubernetes cluster using Pulumi:

  1. Kubernetes Service: A Kubernetes Service is an abstraction that defines a logical set of pods and a policy by which to access them. Services can be of different types, such as ClusterIP, NodePort, LoadBalancer, and ExternalName. The LoadBalancer type normally relies on a cloud provider to provision an external load balancer, which an on-premises cluster does not have, so NodePort or ClusterIP is commonly used instead, with an Ingress providing external access.

  2. Kubernetes Ingress: An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can provide load balancing, SSL termination, and name-based virtual hosting.

We’ll create a Kubernetes Service of type NodePort and an Ingress resource to enable load balancing for an application running in an on-premises Kubernetes cluster.
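
Before defining these resources, you may want to configure a Pulumi Kubernetes provider that targets your on-premises cluster explicitly. The following is a minimal sketch; the kubeconfig path and context name are placeholders for your own cluster, and you can omit this entirely if your default kubeconfig already points at the cluster.

import * as k8s from "@pulumi/kubernetes";

// Minimal sketch: target the on-premises cluster explicitly.
// The kubeconfig path and context name are placeholders.
const onPremProvider = new k8s.Provider("on-prem", {
    kubeconfig: "~/.kube/config",   // path to (or contents of) your kubeconfig
    context: "my-on-prem-cluster",  // kubectl context for the on-prem cluster
});

// To use it, pass { provider: onPremProvider } as resource options, e.g.:
// new k8s.core.v1.Namespace("app-namespace", { ... }, { provider: onPremProvider });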

Steps:

  1. Create a Kubernetes Service to expose your application.
  2. Create a Kubernetes Ingress to route traffic to your service.

Pulumi Program

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Define the namespace
const namespace = new k8s.core.v1.Namespace("app-namespace", {
    metadata: {
        name: "app-namespace",
    },
});

// Define the application deployment
const appLabels = { app: "my-app" };
const deployment = new k8s.apps.v1.Deployment("app-deployment", {
    metadata: {
        namespace: namespace.metadata.name,
    },
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 3,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [
                    {
                        name: "my-app",
                        image: "nginx", // Replace with your application image
                        ports: [{ containerPort: 80 }],
                    },
                ],
            },
        },
    },
});

// Create a NodePort Service to expose the application
const service = new k8s.core.v1.Service("app-service", {
    metadata: {
        namespace: namespace.metadata.name,
        labels: appLabels,
    },
    spec: {
        type: "NodePort",
        ports: [{ port: 80, targetPort: 80, protocol: "TCP" }],
        selector: appLabels,
    },
});

// Create an Ingress to route traffic to the service
const ingress = new k8s.networking.v1.Ingress("app-ingress", {
    metadata: {
        namespace: namespace.metadata.name,
        annotations: {
            "nginx.ingress.kubernetes.io/rewrite-target": "/",
        },
    },
    spec: {
        rules: [
            {
                host: "myapp.example.com", // Replace with your domain
                http: {
                    paths: [
                        {
                            path: "/",
                            pathType: "Prefix",
                            backend: {
                                service: {
                                    name: service.metadata.name,
                                    port: {
                                        number: 80,
                                    },
                                },
                            },
                        },
                    ],
                },
            },
        ],
    },
});

// Export the address reported on the Ingress status once the ingress controller
// publishes one (on-premises this may remain empty unless the controller is
// configured to publish an address).
export const ingressUrl = ingress.status.apply(
    (status) => status?.loadBalancer?.ingress?.[0]?.hostname ?? status?.loadBalancer?.ingress?.[0]?.ip,
);

Explanation:

  1. Namespace: We create a namespace for the application to isolate resources.
  2. Deployment: We define a Deployment for the application, specifying the container image and the number of replicas.
  3. Service: We create a Service of type NodePort to expose the application on a port on every cluster node (a sketch for looking up the assigned port follows this list).
  4. Ingress: We define an Ingress resource to route HTTP traffic to the service. The Ingress uses a hostname and path to direct traffic to the backend service.

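If you want to reach the Service directly on a node's IP (for example, while testing before DNS is in place), you can also export the node port that Kubernetes assigned. A minimal sketch, assuming the service resource from the program above:

// Export the NodePort assigned to the Service; traffic sent to
// <any-node-ip>:<nodePort> is forwarded to the application pods.
export const appNodePort = service.spec.apply((spec) => spec.ports?.[0]?.nodePort);
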
This setup allows you to load balance traffic to your application running in an on-premises Kubernetes cluster. The Ingress handles external traffic and routes it to the appropriate Service, which then forwards it to the application pods. Note that an Ingress resource only takes effect if an ingress controller (such as ingress-nginx, which the annotation above targets) is running in the cluster; one way to install it with Pulumi is sketched below.
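
The following is a minimal sketch of installing the ingress-nginx controller with Pulumi's Helm support. The chart values shown are assumptions to adapt to your environment; the controller itself is exposed via NodePort because an on-premises cluster typically has no cloud load balancer.

// Sketch: install the ingress-nginx controller so the Ingress above is acted on.
// The values below are assumptions; adjust them for your cluster.
const nginxIngress = new k8s.helm.v3.Release("ingress-nginx", {
    chart: "ingress-nginx",
    repositoryOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
    namespace: "ingress-nginx",
    createNamespace: true,
    values: {
        controller: {
            // No cloud load balancer on-premises, so expose the controller via NodePort.
            service: { type: "NodePort" },
        },
    },
});

With the controller in place, point DNS for myapp.example.com (or a hosts-file entry while testing) at a node IP and send traffic to the controller's node port; the Ingress rules then route requests to the Service.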

