Deploy an AKS Cluster with Service Principal Authentication

By Pulumi Team

The Challenge

You need a managed Kubernetes cluster on Azure with proper authentication, role-based access control, and the ability to scale nodes automatically. Setting up AKS with service principals, SSH keys, and RBAC involves coordinating multiple Azure resources that need to reference each other correctly.

What You'll Build

  • AKS cluster with autoscaling node pool
  • Service principal authentication configured
  • SSH key pair generated for node access
  • RBAC enabled for fine-grained permissions
  • Kubeconfig exported for kubectl access

Try This Prompt in Pulumi Neo

Run this prompt in Neo to deploy your infrastructure, or edit it to customize.

Best For

Use this prompt when you need a production Kubernetes cluster on Azure with proper authentication and access controls. Ideal for teams running containerized applications, microservices architectures, or any workload that benefits from managed Kubernetes with Azure AD integration.

Architecture Overview

This deployment creates a fully managed AKS cluster with several layers of security and access control. At its core, AKS handles the Kubernetes control plane while you manage the worker nodes through a configurable node pool. The service principal provides an identity for the cluster to interact with other Azure resources, while RBAC ensures that users and services only have the permissions they need.

The architecture starts with a resource group that contains all related resources. An Azure AD application registration and service principal give the cluster its identity, which AKS uses to provision load balancers, manage networking, and pull container images. SSH keys generated through the TLS provider allow direct node access for debugging, though in normal operations you interact with the cluster through kubectl using the exported kubeconfig.
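The SSH key generation mentioned above can be sketched with Pulumi's TLS provider. This is a minimal illustration, assuming `@pulumi/tls` v4; the resource name is arbitrary:

```typescript
import * as tls from "@pulumi/tls";

// Generate an RSA key pair in Pulumi state. The public half is later
// passed to the cluster's linuxProfile so you can SSH to nodes.
const sshKey = new tls.PrivateKey("ssh-key", {
    algorithm: "RSA",
    rsaBits: 4096,
});

// The private key output is marked sensitive by the provider, so
// Pulumi stores it as a secret; export it for debugging access.
export const sshPrivateKey = sshKey.privateKeyPem;
```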

Autoscaling on the node pool means the cluster adjusts capacity based on workload demand. When pods cannot be scheduled due to resource constraints, the autoscaler adds nodes. When nodes sit underutilized, it consolidates workloads and removes excess capacity. This keeps costs aligned with actual usage while ensuring your applications have the resources they need.

Resource Group

Contains all AKS-related resources in a single Azure region, making it straightforward to manage permissions, billing, and lifecycle for the entire cluster.
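In a Pulumi program the resource group is a one-liner. This sketch assumes the classic Azure (azurerm-based) provider; the region is illustrative:

```typescript
import * as azure from "@pulumi/azure";

// All cluster-related resources reference this group, so deleting
// it tears down the entire deployment in one operation.
const resourceGroup = new azure.core.ResourceGroup("aks-rg", {
    location: "West US 2", // illustrative region; pick one near your users
});
```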

Service Principal

Provides the cluster with an Azure AD identity for authenticating to Azure APIs. The cluster uses this identity to manage networking, storage, and other Azure resources on your behalf.
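The application registration and service principal pair can be sketched with the Azure AD provider. This assumes `@pulumi/azuread` v5; names are illustrative:

```typescript
import * as azuread from "@pulumi/azuread";

// Application registration: the identity's definition in Azure AD.
const adApp = new azuread.Application("aks-app", {
    displayName: "aks-app",
});

// Service principal: the instance of that identity the cluster uses.
const adSp = new azuread.ServicePrincipal("aks-sp", {
    applicationId: adApp.applicationId,
});

// Client secret the cluster presents when authenticating to Azure APIs.
const adSpPassword = new azuread.ServicePrincipalPassword("aks-sp-password", {
    servicePrincipalId: adSp.id,
});
```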

Node Pool

Runs your container workloads on virtual machines managed by AKS. The system node pool handles both system components and application pods, with autoscaling configured to adjust capacity automatically.
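Wiring these pieces together, the cluster and its autoscaling system pool can be sketched as follows. This assumes the classic Azure provider (azurerm 3.x) and that `resourceGroup`, `sshKey`, `adApp`, and `adSpPassword` are the resources described above; VM size, node counts, and the DNS prefix are illustrative:

```typescript
import * as azure from "@pulumi/azure";

const cluster = new azure.containerservice.KubernetesCluster("aks", {
    resourceGroupName: resourceGroup.name,
    dnsPrefix: "aks-demo", // illustrative; must be globally unique-ish
    defaultNodePool: {
        name: "system",
        vmSize: "Standard_D2s_v3", // illustrative size
        enableAutoScaling: true,
        minCount: 1, // autoscaler floor
        maxCount: 5, // autoscaler ceiling
    },
    linuxProfile: {
        adminUsername: "aksadmin",
        sshKey: { keyData: sshKey.publicKeyOpenssh },
    },
    servicePrincipal: {
        clientId: adApp.applicationId,
        clientSecret: adSpPassword.value,
    },
    roleBasedAccessControlEnabled: true,
});
```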

RBAC Configuration

Enables Kubernetes role-based access control, allowing you to define granular permissions for users, groups, and service accounts within the cluster.
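RBAC itself is just the `roleBasedAccessControlEnabled` flag on the cluster resource; what you interact with day to day is the exported kubeconfig. A sketch, assuming `cluster` is the `KubernetesCluster` defined in the program (classic Azure provider):

```typescript
import * as pulumi from "@pulumi/pulumi";

// The raw kubeconfig contains cluster credentials, so wrap it as a
// secret before exporting it as a stack output.
export const kubeconfig = pulumi.secret(cluster.kubeConfigRaw);
```

After `pulumi up`, retrieve it with `pulumi stack output kubeconfig --show-secrets > kubeconfig.yaml` and point `KUBECONFIG` at the file for kubectl access.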

Common Customizations

  • Adjust node size and count: Specify a different VM size or set minimum and maximum node counts to match your workload requirements and budget.
  • Add user node pools: Separate system workloads from application workloads by adding dedicated user node pools with different VM sizes or taints.
  • Enable Azure CNI networking: Switch from kubenet to Azure CNI for direct VNet integration, which gives pods their own IP addresses from your subnet.
  • Configure monitoring: Add Azure Monitor and Container Insights to collect metrics and logs from the cluster and your applications.
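As one example of these customizations, a dedicated user node pool can be sketched as a separate resource attached to the cluster. This assumes the classic Azure provider and the `cluster` resource from the main program; the VM size, counts, and taint are illustrative:

```typescript
import * as azure from "@pulumi/azure";

// A user node pool keeps application workloads off the system pool.
const userPool = new azure.containerservice.KubernetesClusterNodePool("apps", {
    name: "apps", // node pool names are short lowercase alphanumerics
    kubernetesClusterId: cluster.id,
    vmSize: "Standard_D4s_v3", // illustrative; size for your workloads
    enableAutoScaling: true,
    minCount: 1,
    maxCount: 10,
    // Optional: taint the pool so only pods that tolerate it land here.
    nodeTaints: ["workload=apps:NoSchedule"],
});
```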