Assigning Taints and Tolerations for Workload Segregation
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. A taint allows a node to repel a set of pods, while tolerations are applied to pods to allow them to be scheduled onto (or "tolerate") nodes with matching taints.
In the context of a Kubernetes cluster managed with Pulumi, we can add taints to nodes and ensure that workloads have the necessary tolerations to be scheduled onto the right nodes.
We will use the `kubernetes.core/v1.Node` resource from the Pulumi Kubernetes provider to set taints on our nodes. This resource specifies the desired state of a node in our cluster. By adding taints to a node's spec, we control which pods can be scheduled on it: only pods whose tolerations match those taints are admitted.

Here's a TypeScript program that adds a taint to a node and a toleration to a pod to demonstrate how they can be used to control pod scheduling.
```typescript
import * as k8s from "@pulumi/kubernetes";

const label = "special-node";

// Tainting a node with 'special-node=true:NoSchedule' to repel pods
// unless they tolerate this taint.
const nodeTainted = new k8s.core.v1.Node("node-tainted", {
    metadata: {
        labels: {
            [label]: "true",
        },
    },
    spec: {
        taints: [
            {
                key: label,
                value: "true",
                effect: "NoSchedule",
            },
        ],
    },
});

// A pod with a toleration for our specific taint, allowing it to be
// scheduled on 'special-node'.
const podWithToleration = new k8s.core.v1.Pod("pod-with-toleration", {
    metadata: {
        name: "mypod",
    },
    spec: {
        containers: [{
            name: "nginx",
            image: "nginx",
        }],
        tolerations: [
            {
                key: label,
                value: "true",
                effect: "NoSchedule",
            },
        ],
    },
});
```
In the first section:
- We're creating a node resource. (Note that in a real-world cluster you don't usually create node resources directly; they are managed by the cloud infrastructure. Instead, you would work with node groups or use cloud-specific mechanisms to add taints. This code assumes an existing node that you can modify.)
In the second section:
- We're creating a pod resource with a `toleration` matching the taint we've added to the node. This ensures the pod can be scheduled on the tainted node.
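To make the matching rule concrete, it can be sketched as a small predicate. This is an illustrative simplification of Kubernetes' toleration semantics, not the scheduler itself; the `Taint` and `Toleration` shapes and the `tolerates` helper below are our own, not part of any library:

```typescript
// Simplified model of the Kubernetes taint/toleration matching rule.
interface Taint {
    key: string;
    value: string;
    effect: string;
}

interface Toleration {
    key: string;
    value?: string;
    effect?: string;               // an empty/omitted effect matches all effects
    operator?: "Equal" | "Exists"; // "Equal" is the default
}

// Returns true when the toleration matches the taint: keys must be equal,
// effects must line up (unless the toleration leaves effect unset), and
// with operator "Exists" the value comparison is skipped entirely.
function tolerates(toleration: Toleration, taint: Taint): boolean {
    if (toleration.key !== taint.key) return false;
    if (toleration.effect && toleration.effect !== taint.effect) return false;
    if (toleration.operator === "Exists") return true;
    return toleration.value === taint.value;
}
```

For example, the pod above tolerates the node's taint because key, value, and effect all match; a pod with no matching toleration would be repelled by the `NoSchedule` effect.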
It is important to point out that adding taints to a node this way assumes you have direct control over the node resource. In a managed Kubernetes service (such as EKS, AKS, or GKE), you usually control taints through the managed node pool configuration or by using cloud-specific Pulumi resources, such as `eks.NodeGroup`. The `tolerations` part of the pod specification, however, remains relevant for any Kubernetes setup.
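As a sketch of what the managed-node-pool route can look like, the taint configuration is expressed as plain data and passed to the node group resource. The shape below is an assumption based on the pulumi/eks provider; verify it against the API reference for your provider version (the `eks.NodeGroup` call is left commented out because it requires an existing cluster):

```typescript
// Assumed taint shape for a pulumi/eks node group: a map from taint key
// to { value, effect }. Check the pulumi/eks API docs before relying on it.
const nodeGroupTaints: Record<string, { value: string; effect: string }> = {
    "special-node": { value: "true", effect: "NoSchedule" },
};

// With @pulumi/eks, this would be supplied when creating the node group:
//
// new eks.NodeGroup("special-nodes", {
//     cluster: myCluster,        // hypothetical existing eks.Cluster
//     taints: nodeGroupTaints,
//     // ...other node group settings
// });
```

The pods that should land on this group would carry the same toleration shown earlier; everything else about the pod spec is unchanged.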