Run Managed Kubernetes

Give every team a production-ready Kubernetes cluster they can deploy workloads to in minutes, with the platform plumbing (ingress, secrets, autoscaling, and workload identity) already wired up so nobody has to stitch it together by hand.


Frequently asked questions

Do I need the Pulumi landing-zone stack first?
Yes. The blueprint consumes landing-zone outputs (network IDs, key IDs, deployer identity) through a StackReference. Deploy the landing-zone family in the same cloud account first, then point this stack at it with pulumi config set landingZoneStack <your-org>/landing-zone/dev. If you want to bring your own network, replace the StackReference block in the entrypoint with the IDs you already have.
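
For example, a minimal TypeScript sketch of that StackReference wiring; the output names vpcId and privateSubnetIds are illustrative, so use whatever your landing-zone stack actually exports:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Reads the stack name set via:
//   pulumi config set landingZoneStack <your-org>/landing-zone/dev
const config = new pulumi.Config();
const landingZone = new pulumi.StackReference(config.require("landingZoneStack"));

// Output names below are assumptions for illustration; match them to
// whatever your landing-zone stack actually exports.
const vpcId = landingZone.getOutput("vpcId");
const privateSubnetIds = landingZone.getOutput("privateSubnetIds");
```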
Which add-ons does this blueprint install?
External Secrets Operator (for syncing cloud-native secret stores into the cluster), a cloud-native Layer-7 ingress path (AWS Load Balancer Controller on EKS, Application Gateway for Containers on AKS, GKE Gateway API on GKE), and a cloud-native node autoscaler (Karpenter on EKS, Node Auto Provisioning on AKS and GKE). All are installed through pinned Helm charts or managed-cluster config, and each add-on sits behind a config flag so you can disable it before running pulumi up.
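
As a sketch, the flag pattern might look like the following in TypeScript; the flag names here are hypothetical, so check the blueprint's configuration reference for the real keys:

```typescript
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();

// Hypothetical flag names for illustration only. Defaulting each flag
// to true means a bare `pulumi up` installs every add-on.
const installExternalSecrets = config.getBoolean("installExternalSecrets") ?? true;
const installIngressController = config.getBoolean("installIngressController") ?? true;
const installNodeAutoscaler = config.getBoolean("installNodeAutoscaler") ?? true;
```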
How does workload identity work here?
On AWS the blueprint creates IRSA (IAM Roles for Service Accounts) roles scoped per service account and binds them through OIDC federation. On AKS it turns on Workload Identity + OIDC and wires FederatedIdentityCredential resources per service account. On GKE it enables Workload Identity Federation on the cluster and annotates each controller’s service account so it maps to a scoped Google service account. In every case pods call cloud APIs with short-lived tokens, never with static credentials.
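
A hedged TypeScript sketch of the IRSA pattern on EKS, assuming the OIDC provider ARN and issuer URL are supplied via config; resource and service-account names are illustrative, not the blueprint's actual names:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// OIDC details come from the cluster; read from config here for brevity.
const config = new pulumi.Config();
const oidcProviderArn = config.require("oidcProviderArn");
// Condition keys use the issuer host without the https:// scheme.
const issuer = config.require("clusterOidcIssuerUrl").replace("https://", "");

// Only the external-secrets service account can assume this role via
// sts:AssumeRoleWithWebIdentity, so pods get short-lived tokens instead
// of static credentials.
const role = new aws.iam.Role("external-secrets-irsa", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Federated: oidcProviderArn },
            Action: "sts:AssumeRoleWithWebIdentity",
            Condition: {
                StringEquals: {
                    [`${issuer}:sub`]: "system:serviceaccount:external-secrets:external-secrets",
                },
            },
        }],
    }),
});

// The Kubernetes service account is then annotated with
// eks.amazonaws.com/role-arn: <role ARN> so its pods assume this role.
export const roleArn = role.arn;
```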
How do I consume the cluster from another Pulumi project?
The stack exports kubeconfig, clusterName, clusterEndpoint, and clusterOidcIssuerUrl, plus an escEnvironment name. Downstream workload stacks either import the Pulumi ESC environment this stack attaches to, or use a StackReference to pull those outputs. The "Consume the cluster" section shows both patterns with TypeScript, Python, and Go examples.
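
A minimal TypeScript sketch of the StackReference pattern; replace my-org/managed-kubernetes/dev with your own org, project, and stack names:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Pull the kubeconfig exported by the cluster stack.
const cluster = new pulumi.StackReference("my-org/managed-kubernetes/dev");
const kubeconfig = cluster.getOutput("kubeconfig");

// A provider scoped to that cluster; pass it to every workload resource.
const provider = new k8s.Provider("cluster", { kubeconfig });

// Example workload deployed against the referenced cluster.
new k8s.core.v1.Namespace("workloads", {}, { provider });
```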
How do I upgrade Kubernetes versions later?
Bump the clusterVersion config value and run pulumi up. EKS and AKS upgrade the managed control plane in place; GKE follows the release channel you selected. Node pools refresh behind the same config value: Karpenter rolls AMIs per NodeClass, AKS Node Auto Provisioning rolls through its AKSNodeClass, and GKE Node Auto Provisioning rolls through its NodePool templates.
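
As a rough TypeScript sketch of how that config value flows into the cluster resource on EKS; the blueprint's actual resource names may differ:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";

// Set once, then bump later with: pulumi config set clusterVersion 1.31
const config = new pulumi.Config();
const clusterVersion = config.require("clusterVersion"); // e.g. "1.30"

// Re-running `pulumi up` after a bump upgrades the control plane in
// place; the node autoscaler rolls nodes behind it.
const cluster = new eks.Cluster("platform", {
    version: clusterVersion,
});
```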
What does this cost?
The control plane is a per-hour charge on every cloud, even when no workloads are running. On top of that, budget for the system node pool, any network egress through the landing-zone network, and, once you start creating Ingress / Gateway / HTTPRoute resources, the Layer-7 data-plane service. This blueprint does not deploy application workloads, so the baseline is the control plane plus the system node pool.