Using Kubernetes karpenter.sh with Pulumi
Instructions
In this guide, we will set up Karpenter, an open-source node provisioning and autoscaling project for Kubernetes, using Pulumi in TypeScript. Karpenter watches for unschedulable pods and automatically launches just the right compute resources to handle your cluster’s applications. The walkthrough covers setting up the necessary AWS infrastructure, deploying an Amazon EKS cluster, and installing Karpenter with Helm.
Step-by-Step Explanation
- Set up AWS infrastructure: We will create IAM roles for the EKS control plane and for the nodes Karpenter launches, along with a VPC and subnets.
- Deploy an EKS cluster: We will create an EKS cluster to run our Kubernetes workloads.
- Install Karpenter: We will deploy Karpenter into our EKS cluster using Helm.
- Configure Karpenter: We will configure Karpenter to manage the node lifecycle in our EKS cluster (a sketch of this configuration follows the code example below).
Key Points
- IAM Role: Karpenter requires an IAM role with specific EC2 permissions to launch and terminate instances (an abbreviated policy sketch follows this list).
- VPC and Subnets: Necessary networking components for the EKS cluster.
- EKS Cluster: Managed Kubernetes service to run our workloads.
- Helm: Package manager for Kubernetes to install Karpenter.
- Karpenter Configuration: Settings to define how Karpenter manages nodes.
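To make the IAM requirement concrete, here is a minimal sketch of a policy for the Karpenter controller, using the same Pulumi packages as the code example below. The action list is abbreviated and not authoritative, and the policy would normally be attached to a role that the karpenter service account assumes via IAM Roles for Service Accounts (IRSA); consult the Karpenter documentation for the full controller policy for your version.
import * as aws from "@pulumi/aws";

// Abbreviated sketch of an IAM policy for the Karpenter controller. The action list
// is not exhaustive; see the Karpenter docs for the complete policy for your version.
// In a full setup this policy is attached to an IAM role that the Karpenter service
// account assumes via IRSA (OIDC federation).
const karpenterControllerPolicy = new aws.iam.Policy("karpenterControllerPolicy", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: [
                "ec2:CreateFleet",
                "ec2:RunInstances",
                "ec2:TerminateInstances",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateTags",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeAvailabilityZones",
                "ssm:GetParameter",
                "pricing:GetProducts",
                "iam:PassRole",
            ],
            // Scope these down to specific resources and tags in production.
            Resource: "*",
        }],
    }),
});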
Conclusion
By following this guide, you will have a fully functional Karpenter setup on your EKS cluster, enabling efficient and automated node provisioning. This setup ensures that your Kubernetes workloads are always running on the optimal compute resources, improving both performance and cost-efficiency.
Code Example
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

// IAM role for the EKS control plane (assumed by the EKS service).
const clusterRole = new aws.iam.Role("eksClusterRole", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "eks.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});
const clusterPolicy = new aws.iam.RolePolicyAttachment("eksClusterPolicy", {
    role: clusterRole.name,
    policyArn: "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
});

// IAM role for the worker nodes that Karpenter launches (assumed by EC2).
// Production setups also attach AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly,
// and AmazonSSMManagedInstanceCore, and give the Karpenter controller its own IAM
// role via IRSA; those steps are omitted here for brevity.
const karpenterNodeRole = new aws.iam.Role("karpenterNodeRole", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "ec2.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});
const karpenterNodePolicy = new aws.iam.RolePolicyAttachment("karpenterNodePolicy", {
    role: karpenterNodeRole.name,
    policyArn: "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
});

// Create a VPC with DNS support enabled, as EKS requires.
const vpc = new aws.ec2.Vpc("vpc", {
    cidrBlock: "10.0.0.0/16",
    enableDnsSupport: true,
    enableDnsHostnames: true,
});

// Create two subnets in different availability zones. The karpenter.sh/discovery
// tag lets Karpenter discover which subnets it may launch nodes into.
const subnet1 = new aws.ec2.Subnet("subnet1", {
    vpcId: vpc.id,
    cidrBlock: "10.0.1.0/24",
    availabilityZone: "us-west-2a",
    tags: { "karpenter.sh/discovery": "eksCluster" },
});
const subnet2 = new aws.ec2.Subnet("subnet2", {
    vpcId: vpc.id,
    cidrBlock: "10.0.2.0/24",
    availabilityZone: "us-west-2b",
    tags: { "karpenter.sh/discovery": "eksCluster" },
});

// Create an EKS cluster using the control-plane role.
const cluster = new aws.eks.Cluster("eksCluster", {
    roleArn: clusterRole.arn,
    vpcConfig: {
        subnetIds: [subnet1.id, subnet2.id],
    },
}, { dependsOn: [clusterPolicy] });

// Build a kubeconfig for the new cluster so the Kubernetes provider (and the Helm
// release below) targets it rather than whatever the local default context is.
const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.certificateAuthority])
    .apply(([name, endpoint, ca]) => JSON.stringify({
        apiVersion: "v1",
        kind: "Config",
        clusters: [{ name, cluster: { server: endpoint, "certificate-authority-data": ca.data } }],
        contexts: [{ name, context: { cluster: name, user: name } }],
        "current-context": name,
        users: [{
            name,
            user: {
                exec: {
                    apiVersion: "client.authentication.k8s.io/v1beta1",
                    command: "aws",
                    args: ["eks", "get-token", "--cluster-name", name],
                },
            },
        }],
    }));
const k8sProvider = new k8s.Provider("k8sProvider", { kubeconfig });

// Install Karpenter into the cluster using Helm. The pinned version matches the
// legacy chart repository used here; newer Karpenter releases publish their chart
// to the OCI registry at public.ecr.aws/karpenter and use different value keys
// (e.g. settings.clusterName), so adjust both to the version you run.
const karpenterHelm = new k8s.helm.v3.Release("karpenter", {
    chart: "karpenter",
    version: "0.5.0",
    repositoryOpts: {
        repo: "https://charts.karpenter.sh",
    },
    namespace: "karpenter",
    createNamespace: true,
    values: {
        serviceAccount: {
            create: true,
            name: "karpenter",
            // In a full setup, annotate the service account with
            // eks.amazonaws.com/role-arn pointing at the controller's IRSA role.
        },
        controller: {
            clusterName: cluster.name,
            clusterEndpoint: cluster.endpoint,
        },
    },
}, { provider: k8sProvider });

export const vpcId = vpc.id;
export const subnetIds = [subnet1.id, subnet2.id];
export const clusterName = cluster.name;
export const karpenterHelmReleaseStatus = karpenterHelm.status;
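The program above covers steps 1 through 3. Step 4, configuring Karpenter, is done by creating a Provisioner resource in the cluster. Below is a minimal sketch using Pulumi's CustomResource support and the karpenter.sh/v1alpha5 API used by early Karpenter releases such as 0.5.x; newer releases replace Provisioner with NodePool and EC2NodeClass, and field names have shifted between versions, so treat the spec as illustrative. It assumes the k8sProvider and karpenterHelm objects defined above and the karpenter.sh/discovery tag applied to the subnets.
// A default Provisioner telling Karpenter what kinds of nodes it may launch.
const provisioner = new k8s.apiextensions.CustomResource("defaultProvisioner", {
    apiVersion: "karpenter.sh/v1alpha5",
    kind: "Provisioner",
    metadata: { name: "default" },
    spec: {
        // Allow both spot and on-demand capacity.
        requirements: [{
            key: "karpenter.sh/capacity-type",
            operator: "In",
            values: ["spot", "on-demand"],
        }],
        // Cap the total compute the provisioner may launch.
        limits: { resources: { cpu: "1000" } },
        // Remove empty nodes after 30 seconds.
        ttlSecondsAfterEmpty: 30,
        // Discover subnets and security groups by tag (match the discovery tag
        // used elsewhere in your stack).
        provider: {
            subnetSelector: { "karpenter.sh/discovery": "eksCluster" },
            securityGroupSelector: { "karpenter.sh/discovery": "eksCluster" },
        },
    },
}, { provider: k8sProvider, dependsOn: [karpenterHelm] });
After pulumi up completes, kubectl get pods -n karpenter should show the controller running; from then on, pods that cannot be scheduled on existing nodes will cause Karpenter to launch new EC2 instances that match the Provisioner's requirements.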