Build an Azure landing zone with Pulumi

Stand up the foundational Azure network, identity, key, and audit-logging resources that downstream Pulumi projects share. The blueprint ships a reusable component, a single stack, and a Pulumi ESC environment other projects import by name.

Download blueprint

Get this Azure blueprint project as a zip. Pick the Pulumi language whose download matches the install commands and blueprint program on this page.

Download the TypeScript blueprint with the matching Pulumi program, dependency files, and README.

Download TypeScript blueprint

Download the Python blueprint with the matching Pulumi program, dependency files, and README.

Download Python blueprint

Download the Go blueprint with the matching Pulumi program, dependency files, and README.

Download Go blueprint

What this guide covers

A landing zone is the set of shared infrastructure every other Pulumi project in a cloud account keeps reusing: a network, identities, a place to store secrets and keys, and somewhere to land audit logs. This blueprint gives you one Pulumi stack that provisions all of it for Azure and exports the values downstream stacks need.

The blueprint covers:

  • one Pulumi stack that provisions the landing zone inside a single cloud account, subscription, or project
  • a reusable LandingZone component you can import from other projects
  • a Pulumi ESC environment every downstream stack imports by name
  • StackReference snippets so other guides can consume the exports directly

Everything the blueprint creates is additive, so you can extend it after the first deployment as your platform grows.

What gets deployed

On Azure this blueprint provisions, in one stack:

  • Network: an Azure Virtual Network with a /16 address space (overridable via the cidrBlock config), two public subnets, two private subnets, NAT egress, and flow logs to an encrypted log sink.
  • Keys and secrets: one managed key in Azure Key Vault with rotation enabled, and a secrets store convention in Azure Key Vault that downstream apps use by naming prefix.
  • Workload identities: a deployer identity with write permissions scoped to downstream infrastructure, and a read-only identity for observability and audits. Both are exported so you can assume or attach them from other projects.
  • Audit logging: a retention target (an Azure Storage account and container) receiving Azure Monitor diagnostic-settings events with a 90-day default retention, encrypted with the key above.
  • Pulumi ESC environment: a stack-attached environment that exports the stack outputs as configuration values so downstream Pulumi projects can import them by name.

On Azure

The blueprint uses Azure Virtual Network for the network, Azure managed identities for the deployer and read-only identities, Azure Key Vault for both managed keys and secrets, and Azure Monitor diagnostic settings plus Azure Storage for audit logs.

The first deployment creates:

  • a resource group scoped to this landing-zone stack
  • a virtual network in the stack's location with two public subnets and two private subnets carved from the address space
  • one Key Vault configured with RBAC authorization, a managed key with rotation, and a naming convention for secrets
  • two user-assigned managed identities (<stack>-deployer and <stack>-readonly) with role assignments scoped to the resource group
  • one storage account with an audit-logs container as the target for Activity Log diagnostic settings, with a 90-day default retention
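Azure puts tight constraints on these names (Key Vault: 3-24 characters, alphanumerics and hyphens; storage accounts: 3-24 lowercase letters and digits only), which is why the component sanitizes the stack-derived names before using them. A minimal sketch of the same sanitization in Python; the helper names are illustrative, not part of the blueprint:

```python
import re

def vault_name(base: str) -> str:
    # Key Vault names: 3-24 chars, alphanumerics and hyphens only.
    return re.sub(r"[^a-zA-Z0-9-]", "", f"{base}-kv")[:24]

def audit_account_name(base: str) -> str:
    # Storage account names: 3-24 chars, lowercase letters and digits only.
    return (re.sub(r"[^a-z0-9]", "", base.lower()) + "audit")[:18]

print(vault_name("platform"))          # platform-kv
print(audit_account_name("platform"))  # platformaudit
```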

Quickstart

If you just want to see the landing zone deployed, use the downloadable example and follow this sequence:

  1. Download the example zip at the top of the page and unzip it.
  2. Open a terminal in the extracted project root.
  3. Install the Pulumi dependencies for the language you want to use:

TypeScript:

npm install

Python:

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Go:

go mod tidy
  4. For a first local test, keep using whichever Azure credentials already work in your shell. If you want a shared or repeatable setup, use the Pulumi ESC section below before continuing.
  5. Create the stack and deploy:
pulumi login
pulumi stack init dev
pulumi config set azure-native:location eastus
pulumi up
  6. When the update finishes, inspect the outputs that downstream projects will import:
pulumi stack output --show-secrets

The default CIDR block is 10.10.0.0/16, which you can override with pulumi config set cidrBlock. Change it before you run pulumi up if it overlaps with networks you already operate.
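The subnet layout can be sanity-checked offline before deploying. This sketch mirrors the blueprint's carving of four /20 subnets from the first two octets of the /16 (offsets 0 and 16 for public, 128 and 144 for private); the helper name is illustrative:

```python
import ipaddress

def plan_subnets(cidr_block: str = "10.10.0.0/16"):
    """Mirror the component's subnet math: two public and two private /20s."""
    vnet = ipaddress.ip_network(cidr_block)
    o1, o2 = cidr_block.split(".")[:2]
    public = [f"{o1}.{o2}.{i * 16}.0/20" for i in range(2)]
    private = [f"{o1}.{o2}.{i * 16 + 128}.0/20" for i in range(2)]
    for prefix in public + private:
        # Every subnet must fall inside the VNet address space.
        assert ipaddress.ip_network(prefix).subnet_of(vnet)
    return public, private
```

Running it with a candidate override confirms the derived prefixes still land inside your VNet before you touch pulumi up.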

Prerequisites

  • a Pulumi account and the Pulumi CLI installed. Pulumi lets you define and update cloud infrastructure with popular programming languages.
  • an Azure subscription where you can create resource groups, virtual networks, key vaults, managed identities, and storage
  • the toolchain for the language you choose: Node.js 20 or newer and npm (TypeScript), Python 3 with venv and pip (Python), or a recent Go toolchain (Go)

Set up credentials with Pulumi ESC

Before you run pulumi up, configure Pulumi ESC so your stack receives short-lived Azure credentials through dynamic login credentials.

If you already have working Azure credentials in your shell and only want a quick local test, you can skip this section and come back later. ESC is the better long-term path for shared environments, Pulumi Deployments, and CI/CD.

Step 1: Create or update an ESC environment

Use imports if you want to layer this on top of a shared base environment.

imports:
  - <your-org>/base
values:
  azure:
    login:
      fn::open::azure-login:
        clientId: 00000000-0000-0000-0000-000000000000
        tenantId: 00000000-0000-0000-0000-000000000000
        subscriptionId: 00000000-0000-0000-0000-000000000000
        oidc: true
  pulumiConfig:
    azure-native:location: eastus

This example shows the pieces that matter for Azure:

  • the cloud login provider (fn::open::azure-login) configured for OIDC
  • pulumiConfig values passed into your Pulumi stack

The same environment can also export environment variables for local CLI use.
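If you want the resolved credentials available to the az CLI or other local tooling, ESC environments support an environmentVariables block that references the login outputs. A sketch, assuming the output key names documented for the azure-login provider:

```yaml
values:
  environmentVariables:
    ARM_USE_OIDC: "true"
    ARM_CLIENT_ID: ${azure.login.clientId}
    ARM_TENANT_ID: ${azure.login.tenantId}
    ARM_SUBSCRIPTION_ID: ${azure.login.subscriptionId}
    ARM_OIDC_TOKEN: ${azure.login.oidcToken}
```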

Step 2: Attach the environment to your stack

In Pulumi.dev.yaml or your stack config file, add:

environment:
  - <your-org>/<your-environment>

That is what makes the ESC environment available to pulumi preview, pulumi up, and pulumi destroy.

Optional: Inspect the environment locally

Step 2 is all Pulumi needs to import the environment during pulumi preview, pulumi up, and pulumi destroy. If you want to sanity-check the resolved values from your shell, run:

esc open <your-org>/<your-environment>

You do not need to run this before pulumi up.

What you get in the download

The downloadable example zip includes:

  • TypeScript: index.ts as the Pulumi entrypoint, components/landing-zone.ts as the reusable LandingZone module, plus package.json and tsconfig.json for the root Pulumi project
  • Python: __main__.py as the Pulumi entrypoint, components/landing_zone.py as the reusable LandingZone module, plus requirements.txt for the root Pulumi project
  • Go: main.go as the Pulumi entrypoint, landingzone/zone.go as the reusable LandingZone module, plus go.mod for the root Pulumi project
  • README.md with the same commands you will see on this page

The blueprint is a plain Pulumi project. It does not assume any team convention, but it exports outputs that other stacks and the Pulumi ESC environment can consume by name.

Deploy with Pulumi

Follow these steps in order from the project root.

Step 1: Install the root Pulumi dependencies for the language you want to use

The download card and the Pulumi code examples on this page follow the same language selection.

TypeScript:

npm install

Python:

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Go:

go mod tidy

Step 2: Create a Pulumi stack

If you already created the stack once, run pulumi stack select dev instead. For a fuller walkthrough, see the Pulumi getting started docs.

pulumi login
pulumi stack init dev
pulumi config set azure-native:location eastus

Step 3: Deploy

pulumi up

Approve the preview when Pulumi asks. The first run creates the network, keys, identities, and audit pipeline. Pulumi imports the ESC environment automatically through the environment: reference in your stack config, so you do not need to run esc open <your-org>/<your-environment> first.

Step 4: Inspect the outputs

pulumi stack output --show-secrets

The next section walks through each output and the StackReference patterns downstream projects use to consume them.

Stack outputs

Every Azure landing-zone stack exports the same output keys so downstream Pulumi projects can consume them through StackReference or a Pulumi ESC environment:

  • networkId: the Azure Virtual Network resource id
  • publicSubnetIds: the two public subnet ids
  • privateSubnetIds: the two private subnet ids
  • dataEncryptionKey: a reference to the managed key in Azure Key Vault
  • secretsStore: the prefix or identifier apps use to create new secrets in Azure Key Vault
  • deployerIdentity: the deployer workload identity
  • readOnlyIdentity: the read-only workload identity
  • auditBucket: the Azure Monitor diagnostic settings retention target
  • escEnvironment: the Pulumi ESC environment name downstream stacks import by reference

Run pulumi stack output --show-secrets to see the values after pulumi up. Outputs may carry extra cloud-specific fields (for example, the KMS alias on AWS or the Key Vault URI on Azure).
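To verify a deployed stack exposes the full contract, you can diff the JSON from pulumi stack output --json against the expected key set. A small sketch; the helper name is illustrative:

```python
import json

# Output keys listed above; every landing-zone stack exports these.
EXPECTED_OUTPUTS = {
    "networkId", "publicSubnetIds", "privateSubnetIds", "dataEncryptionKey",
    "secretsStore", "deployerIdentity", "readOnlyIdentity", "auditBucket",
    "escEnvironment",
}

def missing_outputs(stack_output_json: str) -> set:
    """Return expected landing-zone output keys absent from the stack."""
    return EXPECTED_OUTPUTS - set(json.loads(stack_output_json))
```

Feed it the output of pulumi stack output --json; an empty set means the landing-zone contract is complete.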

Consume the landing zone from downstream projects

Once the stack is up, every other Pulumi project in the same Azure subscription can read its outputs. There are two patterns; pick whichever fits your team.

Pattern 1: Pulumi ESC environment

The stack creates a Pulumi ESC environment (escEnvironment output) that exposes the same outputs as configuration values. Downstream projects import it with one line in their stack config:

environment:
  - your-org/landing-zone-dev

After that, pulumi.Config() in the consuming project can read networkId, privateSubnetIds, and the other keys directly.

Pattern 2: StackReference

If you prefer not to rely on ESC for cross-project wiring, use a StackReference:

TypeScript:

import * as pulumi from "@pulumi/pulumi";

const landingZone = new pulumi.StackReference("your-org/landing-zone/dev");
const privateSubnets = landingZone.getOutput("privateSubnetIds");
Python:

import pulumi

landing_zone = pulumi.StackReference("your-org/landing-zone/dev")
private_subnets = landing_zone.get_output("privateSubnetIds")
Go:

landingZone, err := pulumi.NewStackReference(ctx, "your-org/landing-zone/dev", nil)
if err != nil {
    return err
}
privateSubnets := landingZone.GetOutput(pulumi.String("privateSubnetIds"))

Add another workload identity

Two identities ship by default. Add more by extending the program next to the LandingZone component and exporting the new values so other stacks can assume them. The pattern for each cloud:

  • AWS: define a new aws.iam.Role with a trust policy for the principal that will assume it, attach the policies you need, and export the role ARN. The deployer role in the blueprint is the reference shape.
  • Azure: define a new azure-native.managedidentity.UserAssignedIdentity and any scoped authorization.RoleAssignment resources for it, then export the identity client id.
  • GCP: define a new gcp.serviceaccount.Account plus gcp.projects.IAMMember bindings for the specific roles, then export the service account email.

Because you are adding resources in the same program, the new identity is covered by the same audit logging and the same CI/CD workflow.
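On Azure, each new RoleAssignment needs a fully qualified roleDefinitionId. The components build it by joining the subscription id with the built-in role GUID; the same string construction in plain Python (the GUID below is Azure's built-in Reader role, as used in the component):

```python
READER_ROLE_GUID = "acdd72a7-3385-48ef-bd42-f606fba81ae7"  # built-in Reader

def role_definition_id(subscription_id: str, role_guid: str) -> str:
    """Fully qualified role definition id for an Azure RoleAssignment."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Authorization/roleDefinitions/{role_guid}"
    )
```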

Forward audit logs to a SIEM

This blueprint writes audit logs to a retention target inside the same Azure subscription. That keeps the first deployment self-contained. To forward those logs to Splunk, Datadog, Sumo Logic, or a custom SIEM, add the following to the same Pulumi project as a follow-up:

  • AWS: subscribe a firehose or Lambda to the CloudTrail S3 bucket’s notifications, or configure CloudTrail to send events to an event bus.
  • Azure: add an Event Hubs diagnostic setting that forwards the activity log alongside the bucket sink.
  • GCP: add a second log sink targeting Pub/Sub and fan out from there.

The bucket-based baseline stays in place regardless, so you always have a durable record if the forwarder falls behind.

Set up CI/CD with Pulumi Deployments

A landing zone works best when it is redeployed from a tracked source. Pulumi Deployments runs pulumi up from the same GitHub repository that holds this program whenever you merge to a branch.

What you will configure in Pulumi Deployments for this project:

  • the Git repository and branch containing the unzipped blueprint
  • the stack name (for example your-org/landing-zone/dev)
  • the root dependency command for the language you selected (npm install, pip install -r requirements.txt, or go mod tidy)
  • the Pulumi ESC environment reference attached to the stack, so Deployments receives the same short-lived credentials as your local run

Once Deployments is wired up, land changes through PRs instead of running pulumi up by hand. Every downstream project that consumes this stack picks up the new outputs automatically.
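For reference, the stack config shared by Deployments and local runs might look like this, assuming the environment and location names used earlier on this page:

```yaml
# Pulumi.dev.yaml
environment:
  - your-org/landing-zone-dev
config:
  azure-native:location: eastus
```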

Blueprint Pulumi program

Each download already includes the matching Pulumi entrypoint file and the reusable LandingZone module for that language. The entrypoints for each language follow.

TypeScript (index.ts):

import * as pulumi from "@pulumi/pulumi";
import { LandingZone } from "./components/landing-zone";

const config = new pulumi.Config();
const cidrBlock = config.get("cidrBlock");
const location = config.get("location");

const zone = new LandingZone("platform", {
    cidrBlock,
    location,
    tags: {
        environment: pulumi.getStack(),
        "solution-family": "landing-zone",
        cloud: "azure",
        language: "typescript",
    },
});

export const networkId = zone.networkId;
export const publicSubnetIds = zone.publicSubnetIds;
export const privateSubnetIds = zone.privateSubnetIds;
export const dataEncryptionKey = zone.dataEncryptionKey;
export const secretsStore = zone.secretsStore;
export const deployerIdentity = zone.deployerIdentityId;
export const readOnlyIdentity = zone.readOnlyIdentityId;
export const auditBucket = zone.auditBucket;
export const escEnvironment = zone.escEnvironment;
Python (__main__.py):

import pulumi

from components.landing_zone import LandingZone, LandingZoneArgs


config = pulumi.Config()
cidr_block = config.get("cidrBlock") or "10.10.0.0/16"
location = config.get("location") or "eastus"

zone = LandingZone(
    "platform",
    LandingZoneArgs(
        cidr_block=cidr_block,
        location=location,
        tags={
            "environment": pulumi.get_stack(),
            "solution-family": "landing-zone",
            "cloud": "azure",
            "language": "python",
        },
    ),
)

pulumi.export("networkId", zone.network_id)
pulumi.export("publicSubnetIds", zone.public_subnet_ids)
pulumi.export("privateSubnetIds", zone.private_subnet_ids)
pulumi.export("dataEncryptionKey", zone.data_encryption_key)
pulumi.export("secretsStore", zone.secrets_store)
pulumi.export("deployerIdentity", zone.deployer_identity_id)
pulumi.export("readOnlyIdentity", zone.read_only_identity_id)
pulumi.export("auditBucket", zone.audit_bucket)
pulumi.export("escEnvironment", zone.esc_environment)
Go (main.go):

package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"

	"landing-zone-azure/landingzone"
)

func main() {
	pulumi.Run(Program)
}

func Program(ctx *pulumi.Context) error {
	cfg := config.New(ctx, "")
	location := cfg.Get("location")
	if location == "" {
		location = "eastus"
	}
	cidrBlock := cfg.Get("cidrBlock")
	if cidrBlock == "" {
		cidrBlock = "10.10.0.0/16"
	}

	zone, err := landingzone.NewLandingZone(ctx, "platform", &landingzone.LandingZoneArgs{
		Location:  location,
		CidrBlock: cidrBlock,
		Tags: map[string]string{
			"environment":     ctx.Stack(),
			"solution-family": "landing-zone",
			"cloud":           "azure",
			"language":        "go",
		},
	})
	if err != nil {
		return err
	}

	ctx.Export("networkId", zone.NetworkId)
	ctx.Export("publicSubnetIds", zone.PublicSubnetIds)
	ctx.Export("privateSubnetIds", zone.PrivateSubnetIds)
	ctx.Export("dataEncryptionKey", zone.DataEncryptionKey)
	ctx.Export("secretsStore", zone.SecretsStore)
	ctx.Export("deployerIdentity", zone.DeployerIdentityId)
	ctx.Export("readOnlyIdentity", zone.ReadOnlyIdentityId)
	ctx.Export("auditBucket", zone.AuditBucket)
	ctx.Export("escEnvironment", zone.EscEnvironment)
	return nil
}

Reusable components

The entrypoint stays small because the landing-zone wiring lives in a reusable module. The downloadable blueprint ships the same component shown below for each language.

TypeScript (components/landing-zone.ts):

import * as authorization from "@pulumi/azure-native/authorization";
import * as keyvault from "@pulumi/azure-native/keyvault";
import * as managedidentity from "@pulumi/azure-native/managedidentity";
import * as network from "@pulumi/azure-native/network";
import * as resources from "@pulumi/azure-native/resources";
import * as storage from "@pulumi/azure-native/storage";
import * as pulumi from "@pulumi/pulumi";

export interface LandingZoneArgs {
    location?: pulumi.Input<string>;
    cidrBlock?: string;
    auditRetentionDays?: number;
    tags?: Record<string, string>;
}

export class LandingZone {
    public readonly networkId: pulumi.Output<string>;
    public readonly publicSubnetIds: pulumi.Output<string[]>;
    public readonly privateSubnetIds: pulumi.Output<string[]>;
    public readonly dataEncryptionKey: pulumi.Output<string>;
    public readonly secretsStore: pulumi.Output<string>;
    public readonly deployerIdentityId: pulumi.Output<string>;
    public readonly readOnlyIdentityId: pulumi.Output<string>;
    public readonly auditBucket: pulumi.Output<string>;
    public readonly escEnvironment: pulumi.Output<string>;

    constructor(name: string, args: LandingZoneArgs = {}) {
        const tags = { ...args.tags, "landing-zone": name };
        const location = args.location ?? "eastus";
        const retentionDays = args.auditRetentionDays ?? 90;

        const rg = new resources.ResourceGroup(`${name}-rg`, {
            resourceGroupName: `${name}-rg`,
            location,
            tags,
        });

        const cidr = args.cidrBlock ?? "10.10.0.0/16";
        const vnet = new network.VirtualNetwork(`${name}-vnet`, {
            virtualNetworkName: `${name}-vnet`,
            resourceGroupName: rg.name,
            location: rg.location,
            addressSpace: { addressPrefixes: [cidr] },
            tags,
        });

        // Derive the subnet prefixes from the first two octets of the CIDR block.
        const [octet1, octet2] = cidr.split(".");
        const publicSubnetIds: pulumi.Output<string>[] = [];
        const privateSubnetIds: pulumi.Output<string>[] = [];
        for (let i = 0; i < 2; i++) {
            const publicSubnet = new network.Subnet(`${name}-public-${i}`, {
                subnetName: `${name}-public-${i}`,
                resourceGroupName: rg.name,
                virtualNetworkName: vnet.name,
                addressPrefix: `${octet1}.${octet2}.${i * 16}.0/20`,
            });
            publicSubnetIds.push(publicSubnet.id);
            const privateSubnet = new network.Subnet(`${name}-private-${i}`, {
                subnetName: `${name}-private-${i}`,
                resourceGroupName: rg.name,
                virtualNetworkName: vnet.name,
                addressPrefix: `${octet1}.${octet2}.${i * 16 + 128}.0/20`,
            });
            privateSubnetIds.push(privateSubnet.id);
        }

        const tenantId = pulumi.output(authorization.getClientConfig()).tenantId;
        const vault = new keyvault.Vault(`${name}-kv`, {
            vaultName: pulumi.interpolate`${name}-kv-${rg.name.apply((n) => n.slice(0, 10))}`.apply((v) => v.replace(/[^a-zA-Z0-9-]/g, "").slice(0, 24)),
            resourceGroupName: rg.name,
            location: rg.location,
            properties: {
                tenantId,
                sku: { family: "A", name: keyvault.SkuName.Standard },
                enableRbacAuthorization: true,
                enableSoftDelete: true,
                softDeleteRetentionInDays: 7,
            },
            tags,
        });
        const key = new keyvault.Key(`${name}-key`, {
            keyName: `${name}-key`,
            resourceGroupName: rg.name,
            vaultName: vault.name,
            properties: {
                kty: keyvault.JsonWebKeyType.RSA,
                keySize: 2048,
                attributes: { enabled: true },
            },
        });

        const deployer = new managedidentity.UserAssignedIdentity(`${name}-deployer`, {
            resourceName: `${name}-deployer`,
            resourceGroupName: rg.name,
            location: rg.location,
            tags,
        });
        const readOnly = new managedidentity.UserAssignedIdentity(`${name}-readonly`, {
            resourceName: `${name}-readonly`,
            resourceGroupName: rg.name,
            location: rg.location,
            tags,
        });

        // Contributor role for deployer, Reader for readonly, both scoped to the RG.
        const contributorRoleId = "b24988ac-6180-42a0-ab88-20f7382dd24c";
        const readerRoleId = "acdd72a7-3385-48ef-bd42-f606fba81ae7";
        const subscriptionId = pulumi.output(authorization.getClientConfig()).subscriptionId;
        new authorization.RoleAssignment(`${name}-deployer-contrib`, {
            principalId: deployer.principalId,
            principalType: "ServicePrincipal",
            scope: rg.id,
            roleDefinitionId: pulumi.interpolate`/subscriptions/${subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${contributorRoleId}`,
        });
        new authorization.RoleAssignment(`${name}-readonly-reader`, {
            principalId: readOnly.principalId,
            principalType: "ServicePrincipal",
            scope: rg.id,
            roleDefinitionId: pulumi.interpolate`/subscriptions/${subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${readerRoleId}`,
        });

        const auditAccount = new storage.StorageAccount(`${name}-audit`, {
            accountName: pulumi.interpolate`${name}audit`.apply((v) => v.replace(/[^a-z0-9]/g, "").slice(0, 18)),
            resourceGroupName: rg.name,
            location: rg.location,
            sku: { name: storage.SkuName.Standard_LRS },
            kind: storage.Kind.StorageV2,
            allowBlobPublicAccess: false,
            minimumTlsVersion: storage.MinimumTlsVersion.TLS1_2,
            tags,
        });
        new storage.BlobContainer(`${name}-audit-container`, {
            containerName: "audit-logs",
            accountName: auditAccount.name,
            resourceGroupName: rg.name,
            publicAccess: storage.PublicAccess.None,
        });

        this.networkId = vnet.id;
        this.publicSubnetIds = pulumi.all(publicSubnetIds);
        this.privateSubnetIds = pulumi.all(privateSubnetIds);
        this.dataEncryptionKey = key.keyUriWithVersion;
        this.secretsStore = vault.properties.apply((p) => p.vaultUri);
        this.deployerIdentityId = deployer.clientId;
        this.readOnlyIdentityId = readOnly.clientId;
        this.auditBucket = auditAccount.name;
        this.escEnvironment = pulumi.interpolate`${name}-landing-zone`;
        // `retentionDays` is used by downstream diagnostic settings once an Event Hubs or
        // Log Analytics destination is attached. Reference to keep the compiler honest.
        void retentionDays;
    }
}
Python (components/landing_zone.py):

from dataclasses import dataclass, field
from typing import Dict, Optional

import pulumi
from pulumi_azure_native import authorization, keyvault, managedidentity, network, resources, storage


@dataclass
class LandingZoneArgs:
    location: str = "eastus"
    cidr_block: str = "10.10.0.0/16"
    audit_retention_days: int = 90
    tags: Dict[str, str] = field(default_factory=dict)


class LandingZone:
    def __init__(self, name: str, args: Optional[LandingZoneArgs] = None) -> None:
        args = args or LandingZoneArgs()
        tags = {**args.tags, "landing-zone": name}

        rg = resources.ResourceGroup(
            f"{name}-rg",
            resource_group_name=f"{name}-rg",
            location=args.location,
            tags=tags,
        )

        vnet = network.VirtualNetwork(
            f"{name}-vnet",
            virtual_network_name=f"{name}-vnet",
            resource_group_name=rg.name,
            location=rg.location,
            address_space=network.AddressSpaceArgs(address_prefixes=[args.cidr_block]),
            tags=tags,
        )

        # Derive the subnet prefixes from the first two octets of the CIDR block.
        octet1, octet2 = args.cidr_block.split(".")[:2]
        public_subnet_ids = []
        private_subnet_ids = []
        for i in range(2):
            public_subnet = network.Subnet(
                f"{name}-public-{i}",
                subnet_name=f"{name}-public-{i}",
                resource_group_name=rg.name,
                virtual_network_name=vnet.name,
                address_prefix=f"{octet1}.{octet2}.{i * 16}.0/20",
            )
            public_subnet_ids.append(public_subnet.id)
            private_subnet = network.Subnet(
                f"{name}-private-{i}",
                subnet_name=f"{name}-private-{i}",
                resource_group_name=rg.name,
                virtual_network_name=vnet.name,
                address_prefix=f"{octet1}.{octet2}.{i * 16 + 128}.0/20",
            )
            private_subnet_ids.append(private_subnet.id)

        client_config = authorization.get_client_config_output()
        tenant_id = client_config.tenant_id
        subscription_id = client_config.subscription_id

        vault_name = (name[:10] + "kv")[:24].lower().replace("-", "")
        vault = keyvault.Vault(
            f"{name}-kv",
            vault_name=vault_name,
            resource_group_name=rg.name,
            location=rg.location,
            properties=keyvault.VaultPropertiesArgs(
                tenant_id=tenant_id,
                sku=keyvault.SkuArgs(family="A", name=keyvault.SkuName.STANDARD),
                enable_rbac_authorization=True,
                enable_soft_delete=True,
                soft_delete_retention_in_days=7,
            ),
            tags=tags,
        )
        key = keyvault.Key(
            f"{name}-key",
            key_name=f"{name}-key",
            resource_group_name=rg.name,
            vault_name=vault.name,
            properties=keyvault.KeyPropertiesArgs(
                kty=keyvault.JsonWebKeyType.RSA,
                key_size=2048,
                attributes=keyvault.KeyAttributesArgs(enabled=True),
            ),
        )

        deployer = managedidentity.UserAssignedIdentity(
            f"{name}-deployer",
            resource_name_=f"{name}-deployer",
            resource_group_name=rg.name,
            location=rg.location,
            tags=tags,
        )
        read_only = managedidentity.UserAssignedIdentity(
            f"{name}-readonly",
            resource_name_=f"{name}-readonly",
            resource_group_name=rg.name,
            location=rg.location,
            tags=tags,
        )

        contributor_role_id = "b24988ac-6180-42a0-ab88-20f7382dd24c"
        reader_role_id = "acdd72a7-3385-48ef-bd42-f606fba81ae7"
        authorization.RoleAssignment(
            f"{name}-deployer-contrib",
            principal_id=deployer.principal_id,
            principal_type="ServicePrincipal",
            scope=rg.id,
            role_definition_id=subscription_id.apply(
                lambda sub: f"/subscriptions/{sub}/providers/Microsoft.Authorization/roleDefinitions/{contributor_role_id}"
            ),
        )
        authorization.RoleAssignment(
            f"{name}-readonly-reader",
            principal_id=read_only.principal_id,
            principal_type="ServicePrincipal",
            scope=rg.id,
            role_definition_id=subscription_id.apply(
                lambda sub: f"/subscriptions/{sub}/providers/Microsoft.Authorization/roleDefinitions/{reader_role_id}"
            ),
        )

        audit_account_name = (name.replace("-", "") + "audit")[:18].lower()
        audit_account = storage.StorageAccount(
            f"{name}-audit",
            account_name=audit_account_name,
            resource_group_name=rg.name,
            location=rg.location,
            sku=storage.SkuArgs(name=storage.SkuName.STANDARD_LRS),
            kind=storage.Kind.STORAGE_V2,
            allow_blob_public_access=False,
            minimum_tls_version=storage.MinimumTlsVersion.TLS1_2,
            tags=tags,
        )
        storage.BlobContainer(
            f"{name}-audit-container",
            container_name="audit-logs",
            account_name=audit_account.name,
            resource_group_name=rg.name,
            public_access=storage.PublicAccess.NONE,
        )

        self.network_id = vnet.id
        self.public_subnet_ids = pulumi.Output.all(*public_subnet_ids)
        self.private_subnet_ids = pulumi.Output.all(*private_subnet_ids)
        self.data_encryption_key = key.key_uri_with_version
        self.secrets_store = vault.properties.apply(lambda p: p.vault_uri)
        self.deployer_identity_id = deployer.client_id
        self.read_only_identity_id = read_only.client_id
        self.audit_bucket = audit_account.name
        self.esc_environment = f"{name}-landing-zone"
        self.audit_retention_days = args.audit_retention_days
Go (landingzone/zone.go):

package landingzone

import (
	"fmt"
	"strings"

	authorization "github.com/pulumi/pulumi-azure-native-sdk/authorization/v3"
	keyvault "github.com/pulumi/pulumi-azure-native-sdk/keyvault/v3"
	managedidentity "github.com/pulumi/pulumi-azure-native-sdk/managedidentity/v3"
	network "github.com/pulumi/pulumi-azure-native-sdk/network/v3"
	resources "github.com/pulumi/pulumi-azure-native-sdk/resources/v3"
	storage "github.com/pulumi/pulumi-azure-native-sdk/storage/v3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

type LandingZoneArgs struct {
	Location           string
	CidrBlock          string
	AuditRetentionDays int
	Tags               map[string]string
}

type LandingZone struct {
	NetworkId          pulumi.StringOutput
	PublicSubnetIds    pulumi.StringArrayOutput
	PrivateSubnetIds   pulumi.StringArrayOutput
	DataEncryptionKey  pulumi.StringOutput
	SecretsStore       pulumi.StringOutput
	DeployerIdentityId pulumi.StringOutput
	ReadOnlyIdentityId pulumi.StringOutput
	AuditBucket        pulumi.StringOutput
	EscEnvironment     pulumi.StringOutput
}

func sanitize(value string, limit int) string {
	clean := strings.ReplaceAll(strings.ToLower(value), "-", "")
	if len(clean) > limit {
		clean = clean[:limit]
	}
	return clean
}

func NewLandingZone(ctx *pulumi.Context, name string, args *LandingZoneArgs) (*LandingZone, error) {
	if args == nil {
		args = &LandingZoneArgs{}
	}
	if args.Location == "" {
		args.Location = "eastus"
	}
	if args.CidrBlock == "" {
		args.CidrBlock = "10.10.0.0/16"
	}
	if args.AuditRetentionDays == 0 {
		args.AuditRetentionDays = 90
	}

	tags := map[string]string{"landing-zone": name}
	for k, v := range args.Tags {
		tags[k] = v
	}
	tagsInput := pulumi.ToStringMap(tags)

	rg, err := resources.NewResourceGroup(ctx, fmt.Sprintf("%s-rg", name), &resources.ResourceGroupArgs{
		ResourceGroupName: pulumi.String(fmt.Sprintf("%s-rg", name)),
		Location:          pulumi.String(args.Location),
		Tags:              tagsInput,
	})
	if err != nil {
		return nil, err
	}

	vnet, err := network.NewVirtualNetwork(ctx, fmt.Sprintf("%s-vnet", name), &network.VirtualNetworkArgs{
		VirtualNetworkName: pulumi.String(fmt.Sprintf("%s-vnet", name)),
		ResourceGroupName:  rg.Name,
		Location:           rg.Location,
		AddressSpace: &network.AddressSpaceArgs{
			AddressPrefixes: pulumi.StringArray{pulumi.String(args.CidrBlock)},
		},
		Tags: tagsInput,
	})
	if err != nil {
		return nil, err
	}

	// Derive the subnet prefixes from the first two octets of the CIDR block.
	octets := strings.SplitN(args.CidrBlock, ".", 3)
	publicSubnetIds := pulumi.StringArray{}
	privateSubnetIds := pulumi.StringArray{}
	for i := 0; i < 2; i++ {
		publicSubnet, err := network.NewSubnet(ctx, fmt.Sprintf("%s-public-%d", name, i), &network.SubnetArgs{
			SubnetName:         pulumi.String(fmt.Sprintf("%s-public-%d", name, i)),
			ResourceGroupName:  rg.Name,
			VirtualNetworkName: vnet.Name,
			AddressPrefix:      pulumi.String(fmt.Sprintf("%s.%s.%d.0/20", octets[0], octets[1], i*16)),
		})
		if err != nil {
			return nil, err
		}
		publicSubnetIds = append(publicSubnetIds, publicSubnet.ID().ToStringOutput())

		privateSubnet, err := network.NewSubnet(ctx, fmt.Sprintf("%s-private-%d", name, i), &network.SubnetArgs{
			SubnetName:         pulumi.String(fmt.Sprintf("%s-private-%d", name, i)),
			ResourceGroupName:  rg.Name,
			VirtualNetworkName: vnet.Name,
			AddressPrefix:      pulumi.String(fmt.Sprintf("%s.%s.%d.0/20", octets[0], octets[1], i*16+128)),
		})
		if err != nil {
			return nil, err
		}
		privateSubnetIds = append(privateSubnetIds, privateSubnet.ID().ToStringOutput())
	}

	clientConfig, err := authorization.GetClientConfig(ctx, nil)
	if err != nil {
		return nil, err
	}
	tenantId := clientConfig.TenantId
	subscriptionId := clientConfig.SubscriptionId

	// Key Vault names are limited to 3-24 characters and must be globally unique.
	vaultName := sanitize(name, 10) + "kv"
	vault, err := keyvault.NewVault(ctx, fmt.Sprintf("%s-kv", name), &keyvault.VaultArgs{
		VaultName:         pulumi.String(vaultName),
		ResourceGroupName: rg.Name,
		Location:          rg.Location,
		Properties: &keyvault.VaultPropertiesArgs{
			TenantId: pulumi.String(tenantId),
			Sku: &keyvault.SkuArgs{
				Family: pulumi.String("A"),
				Name:   keyvault.SkuNameStandard,
			},
			EnableRbacAuthorization:   pulumi.Bool(true),
			EnableSoftDelete:          pulumi.Bool(true),
			SoftDeleteRetentionInDays: pulumi.Int(7),
		},
		Tags: tagsInput,
	})
	if err != nil {
		return nil, err
	}

	key, err := keyvault.NewKey(ctx, fmt.Sprintf("%s-key", name), &keyvault.KeyArgs{
		KeyName:           pulumi.String(fmt.Sprintf("%s-key", name)),
		ResourceGroupName: rg.Name,
		VaultName:         vault.Name,
		Properties: &keyvault.KeyPropertiesArgs{
			Kty:     keyvault.JsonWebKeyTypeRSA,
			KeySize: pulumi.Int(2048),
			// Key Vault supports automatic rotation policies; attach one here
			// if you need scheduled rotation of this key.
			Attributes: &keyvault.KeyAttributesArgs{
				Enabled: pulumi.Bool(true),
			},
		},
	})
	if err != nil {
		return nil, err
	}

	deployer, err := managedidentity.NewUserAssignedIdentity(ctx, fmt.Sprintf("%s-deployer", name), &managedidentity.UserAssignedIdentityArgs{
		ResourceName:      pulumi.String(fmt.Sprintf("%s-deployer", name)),
		ResourceGroupName: rg.Name,
		Location:          rg.Location,
		Tags:              tagsInput,
	})
	if err != nil {
		return nil, err
	}
	readOnly, err := managedidentity.NewUserAssignedIdentity(ctx, fmt.Sprintf("%s-readonly", name), &managedidentity.UserAssignedIdentityArgs{
		ResourceName:      pulumi.String(fmt.Sprintf("%s-readonly", name)),
		ResourceGroupName: rg.Name,
		Location:          rg.Location,
		Tags:              tagsInput,
	})
	if err != nil {
		return nil, err
	}

	// Built-in Azure role definition IDs: Contributor and Reader.
	contributorRoleId := "b24988ac-6180-42a0-ab88-20f7382dd24c"
	readerRoleId := "acdd72a7-3385-48ef-bd42-f606fba81ae7"
	if _, err = authorization.NewRoleAssignment(ctx, fmt.Sprintf("%s-deployer-contrib", name), &authorization.RoleAssignmentArgs{
		PrincipalId:      deployer.PrincipalId,
		PrincipalType:    pulumi.String("ServicePrincipal"),
		Scope:            rg.ID(),
		RoleDefinitionId: pulumi.String(fmt.Sprintf("/subscriptions/%s/providers/Microsoft.Authorization/roleDefinitions/%s", subscriptionId, contributorRoleId)),
	}); err != nil {
		return nil, err
	}
	if _, err = authorization.NewRoleAssignment(ctx, fmt.Sprintf("%s-readonly-reader", name), &authorization.RoleAssignmentArgs{
		PrincipalId:      readOnly.PrincipalId,
		PrincipalType:    pulumi.String("ServicePrincipal"),
		Scope:            rg.ID(),
		RoleDefinitionId: pulumi.String(fmt.Sprintf("/subscriptions/%s/providers/Microsoft.Authorization/roleDefinitions/%s", subscriptionId, readerRoleId)),
	}); err != nil {
		return nil, err
	}

	// Storage account names allow at most 24 lowercase alphanumeric characters
	// and must be globally unique; sanitize keeps this name within range.
	auditName := sanitize(name, 12) + "audit"
	auditAccount, err := storage.NewStorageAccount(ctx, fmt.Sprintf("%s-audit", name), &storage.StorageAccountArgs{
		AccountName:           pulumi.String(auditName),
		ResourceGroupName:     rg.Name,
		Location:              rg.Location,
		Sku:                   &storage.SkuArgs{Name: pulumi.String(string(storage.SkuName_Standard_LRS))},
		Kind:                  pulumi.String(string(storage.KindStorageV2)),
		AllowBlobPublicAccess: pulumi.Bool(false),
		MinimumTlsVersion:     pulumi.String(string(storage.MinimumTlsVersion_TLS1_2)),
		Tags:                  tagsInput,
	})
	if err != nil {
		return nil, err
	}
	if _, err = storage.NewBlobContainer(ctx, fmt.Sprintf("%s-audit-container", name), &storage.BlobContainerArgs{
		ContainerName:     pulumi.String("audit-logs"),
		AccountName:       auditAccount.Name,
		ResourceGroupName: rg.Name,
		PublicAccess:      storage.PublicAccessNone,
	}); err != nil {
		return nil, err
	}

	// AuditRetentionDays is accepted but not yet enforced. To apply it, attach
	// a storage management policy (a blob lifecycle rule that deletes objects
	// after args.AuditRetentionDays days) to the audit account.
	_ = args.AuditRetentionDays

	return &LandingZone{
		NetworkId:          vnet.ID().ToStringOutput(),
		PublicSubnetIds:    publicSubnetIds.ToStringArrayOutput(),
		PrivateSubnetIds:   privateSubnetIds.ToStringArrayOutput(),
		DataEncryptionKey:  key.KeyUriWithVersion,
		SecretsStore:       vault.Properties.ApplyT(func(p keyvault.VaultPropertiesResponse) string { return p.VaultUri }).(pulumi.StringOutput),
		DeployerIdentityId: deployer.ClientId, // client ID of the deployer's user-assigned identity
		ReadOnlyIdentityId: readOnly.ClientId, // client ID of the read-only user-assigned identity
		AuditBucket:        auditAccount.Name,
		EscEnvironment:     pulumi.String(fmt.Sprintf("%s-landing-zone", name)).ToStringOutput(),
	}, nil
}

Frequently asked questions

What prerequisites does this blueprint need?
A Pulumi Cloud account and the Pulumi CLI; a cloud account, subscription, or project where you can create networking, IAM, key, and audit resources; and the language toolchain for the variant you chose (Node.js, Python, or Go).

How do I add more workload identities later?
The reusable component exposes the two default identities (deployer and read-only) as outputs. Define additional IAM role / managed identity / service account resources in the same program, attach the policies you need, and export them as new stack outputs. The Add another identity section on this page shows the exact shape for each cloud.
Does this land audit logs in a SIEM?
No. This blueprint writes audit logs to a retention bucket in the same cloud account so you have a durable record out of the box. The Audit logging section points to the provider features you would layer on to forward those logs to Splunk, Datadog, Sumo Logic, or a custom SIEM.
Can I reuse this outside the blueprint?
Yes. The LandingZone component is the unit meant for reuse. Import it into another Pulumi project, or publish the whole stack and consume its outputs from other stacks through Pulumi ESC or StackReference.
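
As a sketch of the ESC path, a downstream environment can project the landing-zone stack outputs into `pulumiConfig` using ESC's built-in `pulumi-stacks` provider (the environment, project, and stack names below are illustrative):

```yaml
values:
  stacks:
    fn::open::pulumi-stacks:
      stacks:
        landingZone:
          stack: azure-landing-zone/prod
  pulumiConfig:
    networkId: ${stacks.landingZone.networkId}
    privateSubnetIds: ${stacks.landingZone.privateSubnetIds}
    secretsStore: ${stacks.landingZone.secretsStore}
```

Any stack that imports this environment then sees `networkId`, `privateSubnetIds`, and `secretsStore` as ordinary config values without a direct StackReference.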
What scope does this guide target?
One cloud account, subscription, or project. The resources here are the building blocks every team needs inside that scope. When you are ready to wire them into a cloud organization, extend the same Pulumi project with the aws.organizations, azure-native.billing, or gcp.organizations resources you need; everything this blueprint provisions stays in place.
What does this cost?
NAT gateways, managed encryption keys, audit log storage, and some identity features all incur charges even when no workloads are running. Inspect the blueprint and the tags it applies before deploying to a billed account.
What can I deploy on top of a landing zone?
Two blueprints in this catalog consume these outputs directly. The managed Kubernetes blueprint stands up an opinionated EKS / AKS / GKE cluster on the landing-zone network. The serverless React + Postgres blueprint deploys a full-stack demo app (React SPA + serverless API + managed PostgreSQL) inside the same account. Both read networkId, privateSubnetIds, and secretsStore through StackReference and fail fast at preview time if the landing-zone stack is missing.