Build an AWS landing zone with Pulumi


Stand up the foundational AWS network, identity, key, and audit-logging resources that downstream Pulumi projects share. The blueprint ships a reusable component, a single stack, and a Pulumi ESC environment other projects import by name.

Download blueprint

Get this AWS blueprint project as a zip in the Pulumi language of your choice. Each download bundles the matching Pulumi program, dependency files, and README, so the commands on this page work as written.

  • Download TypeScript blueprint
  • Download Python blueprint
  • Download Go blueprint

What this guide covers

A landing zone is the set of shared infrastructure every other Pulumi project in a cloud account keeps reusing: a network, identities, a place to store secrets and keys, and somewhere to land audit logs. This blueprint gives you one Pulumi stack that provisions all of it for AWS and exports the values downstream stacks need.

The blueprint covers:

  • one Pulumi stack that provisions the landing zone inside a single AWS account
  • a reusable LandingZone component you can import from other projects
  • a Pulumi ESC environment every downstream stack imports by name
  • StackReference snippets so other guides can consume the exports directly

Everything the blueprint creates is additive, so you can extend it after the first deployment as your platform grows.

What gets deployed

On AWS this blueprint provisions, in one stack:

  • Network: Amazon VPC with a /16 address space (the cidrBlock config override), two public subnets, two private subnets, NAT egress, and flow logs to an encrypted log sink.
  • Keys and secrets: one managed key in AWS KMS with rotation enabled, and a secrets store convention in AWS Secrets Manager that downstream apps use by naming prefix.
  • Workload identities: a deployer identity with write permissions scoped to downstream infrastructure, and a read-only identity for observability and audits. Both are exported so you can assume or attach them from other projects.
  • Audit logging: a retention bucket receiving AWS CloudTrail events with a 90-day default retention, encrypted with the key above.
  • Pulumi ESC environment: a stack-attached environment that exports the stack outputs as configuration values so downstream Pulumi projects can import them by name.
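The secrets store is a convention rather than a resource: the stack exports a naming prefix (the component name plus a trailing slash), and apps create their secrets beneath it. A minimal sketch of the convention, with a hypothetical app and secret name:

```typescript
// Sketch of the Secrets Manager naming convention. The app and key names
// are hypothetical; "platform/" is the secretsStore output of this blueprint.
const secretsStore = "platform/";

function secretName(app: string, key: string): string {
    // Apps nest their secrets under the landing zone's exported prefix.
    return `${secretsStore}${app}/${key}`;
}

console.log(secretName("billing-api", "db-password")); // platform/billing-api/db-password
```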

On AWS

The blueprint uses Amazon VPC for the network, AWS IAM for the deployer and read-only identities, AWS KMS for the managed encryption key, AWS Secrets Manager for the secrets store convention, and AWS CloudTrail for audit logs.

The first deployment creates:

  • a VPC with two public subnets and two private subnets across two availability zones, a NAT gateway per AZ, and VPC flow logs
  • one customer-managed KMS key with rotation enabled, aliased as alias/platform-landing-zone (after the component name in the entrypoint)
  • two IAM roles (platform-deployer and platform-readonly) with trust policies you can edit to name your own principals
  • one CloudTrail trail wired to a dedicated S3 bucket with 90-day default retention
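Both roles are created with the same trust-policy shape; by default the trusted principal resolves to the account root, and you can point it at your own principal instead. A sketch of that policy document (the account id is a placeholder):

```typescript
// Trust-policy shape shared by the deployer and read-only roles.
// The account id below is a placeholder; the component defaults to
// the current account's root principal when none is configured.
const trustedArn = "arn:aws:iam::123456789012:root";

const assumeRolePolicy = JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
        Effect: "Allow",
        Principal: { AWS: trustedArn },
        Action: "sts:AssumeRole",
    }],
});
```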

Quickstart

If you just want to see the landing zone deployed, use the downloadable example and follow this sequence:

  1. Download the example zip at the top of the page and unzip it.
  2. Open a terminal in the extracted project root.
  3. Install the Pulumi dependencies for the language you want to use:

TypeScript:
npm install

Python:
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Go:
go mod tidy

  4. For a first local test, keep using whichever AWS credentials already work in your shell. If you want a shared or repeatable setup, use the Pulumi ESC section below before continuing.
  5. Create the stack and deploy:

pulumi login
pulumi stack init dev
pulumi config set aws:region us-west-2
pulumi up

  6. When the update finishes, inspect the outputs that downstream projects will import:

pulumi stack output --show-secrets

The default CIDR block is 10.0.0.0/16, which you can override with pulumi config set cidrBlock. Change it before you run pulumi up if it overlaps with networks you already operate. Note that the component writes its subnet blocks against the 10.0.0.0/16 range, so if you pick a different space, update the subnet CIDRs in the component to match.
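The component carves four fixed /20 subnets out of that /16, two public and two private, indexed by availability zone. A sketch of the layout arithmetic it uses:

```typescript
// Subnet layout used by the component: for each AZ index, one public and
// one private /20 carved from the VPC's 10.0.0.0/16 space.
function subnetCidrs(i: number): { public: string; private: string } {
    return {
        public: `10.0.${i * 16}.0/20`,
        private: `10.0.${i * 16 + 128}.0/20`,
    };
}

console.log(subnetCidrs(0)); // { public: '10.0.0.0/20', private: '10.0.128.0/20' }
console.log(subnetCidrs(1)); // { public: '10.0.16.0/20', private: '10.0.144.0/20' }
```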

Prerequisites

  • a Pulumi account and the Pulumi CLI installed. Pulumi lets you define and update cloud infrastructure with popular programming languages.
  • an AWS account where you can create VPC, IAM, KMS, S3, and CloudTrail resources
  • Node.js 20 or newer and npm for the TypeScript blueprint, or a recent Python or Go toolchain if you downloaded one of those variants

Set up credentials with Pulumi ESC

Before you run pulumi up, configure Pulumi ESC so your stack receives short-lived AWS credentials through ESC's OIDC-based aws-login provider.

If you already have working AWS credentials in your shell and only want a quick local test, you can skip this section and come back later. ESC is the better long-term path for shared environments, Pulumi Deployments, and CI/CD.

Step 1: Create or update an ESC environment

Use imports if you want to layer this on top of a shared base environment.

imports:
  - <your-org>/base
values:
  aws:
    login:
      fn::open::aws-login:
        oidc:
          roleArn: arn:aws:iam::123456789012:role/pulumi-esc
          sessionName: pulumi-esc
  environmentVariables:
    AWS_ACCESS_KEY_ID: ${aws.login.accessKeyId}
    AWS_SECRET_ACCESS_KEY: ${aws.login.secretAccessKey}
    AWS_SESSION_TOKEN: ${aws.login.sessionToken}
  pulumiConfig:
    aws:region: us-east-1

This example shows the pieces that matter for AWS:

  • the cloud login provider configured for OIDC
  • environment variables exported for local CLI use
  • pulumiConfig values passed into your Pulumi stack

Step 2: Attach the environment to your stack

In Pulumi.dev.yaml or your stack config file, add:

environment:
  - <your-org>/<your-environment>

That is what makes the ESC environment available to pulumi preview, pulumi up, and pulumi destroy.

Optional: Inspect the environment locally

Step 2 is all Pulumi needs to import the environment during pulumi preview, pulumi up, and pulumi destroy. If you want to sanity-check the resolved values from your shell, run:

esc open <your-org>/<your-environment>

You do not need to run this before pulumi up.

What you get in the download

The downloadable example zip includes, per language:

TypeScript:
  • index.ts as the Pulumi entrypoint
  • components/landing-zone.ts as the reusable LandingZone module
  • package.json and tsconfig.json for the root Pulumi project

Python:
  • __main__.py as the Pulumi entrypoint
  • components/landing_zone.py as the reusable LandingZone module
  • requirements.txt for the root Pulumi project

Go:
  • main.go as the Pulumi entrypoint
  • landingzone/zone.go as the reusable LandingZone module
  • go.mod for the root Pulumi project

Each download also includes a README.md with the same commands you will see on this page.

The blueprint is a plain Pulumi project. It does not assume any team convention, but it exports outputs that other stacks and the Pulumi ESC environment can consume by name.

Deploy with Pulumi

Follow these steps in order from the project root.

Step 1: Install the root Pulumi dependencies for the language you want to use

Use the commands for the language of the blueprint you downloaded.

TypeScript:
npm install

Python:
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Go:
go mod tidy

Step 2: Create a Pulumi stack

If you already created the stack once, run pulumi stack select dev instead. For a fuller walkthrough, see the Pulumi getting started docs.

pulumi login
pulumi stack init dev
pulumi config set aws:region us-west-2

Step 3: Deploy

pulumi up

Approve the preview when Pulumi asks. The first run creates the network, keys, identities, and audit pipeline. Pulumi imports the ESC environment automatically through the environment: reference in your stack config, so you do not need to run esc open <your-org>/<your-environment> first.

Step 4: Inspect the outputs

pulumi stack output --show-secrets

The next section walks through each output and the StackReference patterns downstream projects use to consume them.

Stack outputs

Every AWS landing-zone stack exports the same output keys so downstream Pulumi projects can consume them through StackReference or a Pulumi ESC environment:

  • networkId: the Amazon VPC resource id
  • publicSubnetIds: the two public subnet ids
  • privateSubnetIds: the two private subnet ids
  • dataEncryptionKeyArn and dataEncryptionKeyAlias: the managed key in AWS KMS and its alias
  • secretsStore: the naming prefix apps use to create new secrets in AWS Secrets Manager
  • deployerRoleArn: the deployer workload identity
  • readOnlyRoleArn: the read-only workload identity
  • auditBucket: the AWS CloudTrail retention bucket
  • escEnvironment: the Pulumi ESC environment name downstream stacks import by reference

Run pulumi stack output --show-secrets to see the values after pulumi up.

Consume the landing zone from downstream projects

Once the stack is up, every other Pulumi project in the same AWS account can read its outputs. There are two patterns; pick whichever fits your team.

Pattern 1: Pulumi ESC environment

The stack creates a Pulumi ESC environment (escEnvironment output) that exposes the same outputs as configuration values. Downstream projects import it with one line in their stack config:

environment:
  - your-org/landing-zone-dev

After that, pulumi.Config() in the consuming project can read networkId, privateSubnetIds, and the other keys directly.

Pattern 2: StackReference

If you prefer not to rely on ESC for cross-project wiring, use a StackReference:

TypeScript:

import * as pulumi from "@pulumi/pulumi";

const landingZone = new pulumi.StackReference("your-org/landing-zone/dev");
const privateSubnets = landingZone.getOutput("privateSubnetIds");

Python:

import pulumi

landing_zone = pulumi.StackReference("your-org/landing-zone/dev")
private_subnets = landing_zone.get_output("privateSubnetIds")

Go:

landingZone, err := pulumi.NewStackReference(ctx, "your-org/landing-zone/dev", nil)
if err != nil {
    return err
}
privateSubnets := landingZone.GetOutput(pulumi.String("privateSubnetIds"))

Add another workload identity

Two identities ship by default. Add more by extending the program next to the LandingZone component and exporting the new values so other stacks can assume them. The pattern for each cloud:

  • AWS: define a new aws.iam.Role with a trust policy for the principal that will assume it, attach the policies you need, and export the role ARN. The deployer role in the blueprint is the reference shape.
  • Azure: define a new azure-native.managedidentity.UserAssignedIdentity and any scoped authorization.RoleAssignment resources for it, then export the identity client id.
  • GCP: define a new gcp.serviceaccount.Account plus gcp.projects.IAMMember bindings for the specific roles, then export the service account email.

Because you are adding resources in the same program, the new identity is covered by the same audit logging and the same CI/CD workflow.
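As an illustration on AWS, a CI-only deployer role that trusts a GitHub OIDC provider would swap the account-root trust policy for a federated one. Everything here (account id, provider ARN, repository filter) is a placeholder to adapt:

```typescript
// Hypothetical trust policy for a CI deployer assumed via GitHub OIDC.
// The provider ARN and repository filter below are placeholders.
const oidcProviderArn =
    "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com";

const ciAssumeRolePolicy = JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
        Effect: "Allow",
        Principal: { Federated: oidcProviderArn },
        Action: "sts:AssumeRoleWithWebIdentity",
        Condition: {
            StringLike: {
                // Restrict the role to workflows from one repository.
                "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:*",
            },
        },
    }],
});
```

Pass a policy like this as the assumeRolePolicy of a new aws.iam.Role next to the component, then export its ARN like the blueprint does for the deployer role.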

Forward audit logs to a SIEM

This blueprint writes audit logs to a retention bucket inside the same AWS account. That keeps the first deployment self-contained. To forward those logs to Splunk, Datadog, Sumo Logic, or a custom SIEM, add the following to the same Pulumi project as a follow-up:

  • AWS: subscribe a Kinesis Data Firehose stream or a Lambda function to the CloudTrail S3 bucket’s notifications, or configure CloudTrail to send events to EventBridge.
  • Azure: add an Event Hubs diagnostic setting that forwards the activity log alongside the bucket sink.
  • GCP: add a second log sink targeting Pub/Sub and fan out from there.

The bucket-based baseline stays in place regardless, so you always have a durable record if the forwarder falls behind.
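As a small AWS illustration of the EventBridge route, a rule that forwards CloudTrail-recorded API calls matches on the standard detail type. The pattern below is a sketch; the rule's target (Firehose, Lambda, or your SIEM's ingest endpoint) is up to you and not shown:

```typescript
// Sketch: an EventBridge event pattern matching API calls recorded by
// CloudTrail. This string would become the eventPattern of an event rule;
// the rule's target (the forwarder) is a separate resource.
const eventPattern = JSON.stringify({
    "detail-type": ["AWS API Call via CloudTrail"],
});
```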

Set up CI/CD with Pulumi Deployments

A landing zone works best when it is redeployed from a tracked source. Pulumi Deployments runs pulumi up from the same GitHub repository that holds this program whenever you merge to a branch.

What you will configure in Pulumi Deployments for this project:

  • the Git repository and branch containing the unzipped blueprint
  • the stack name (for example your-org/landing-zone/dev)
  • the root dependency command for the language you selected (npm install, pip install -r requirements.txt, or go mod tidy)
  • the Pulumi ESC environment reference attached to the stack, so Deployments receives the same short-lived credentials as your local run

Once Deployments is wired up, land changes through PRs instead of running pulumi up by hand. Every downstream project that consumes this stack picks up the new outputs automatically.

Blueprint Pulumi program

Each download already includes the matching Pulumi entrypoint file and the reusable LandingZone module for that language. The entrypoint for each language is shown below.

TypeScript:

import * as pulumi from "@pulumi/pulumi";
import { LandingZone } from "./components/landing-zone";

const config = new pulumi.Config();
const cidrBlock = config.get("cidrBlock");
const trustedPrincipalArn = config.get("trustedPrincipalArn");

const zone = new LandingZone("platform", {
    cidrBlock,
    trustedPrincipalArn,
    tags: {
        environment: pulumi.getStack(),
        "solution-family": "landing-zone",
        cloud: "aws",
        language: "typescript",
    },
});

export const networkId = zone.networkId;
export const publicSubnetIds = zone.publicSubnetIds;
export const privateSubnetIds = zone.privateSubnetIds;
export const dataEncryptionKeyArn = zone.dataEncryptionKeyArn;
export const dataEncryptionKeyAlias = zone.dataEncryptionKeyAlias;
export const secretsStore = zone.secretsStore;
export const deployerRoleArn = zone.deployerRoleArn;
export const readOnlyRoleArn = zone.readOnlyRoleArn;
export const auditBucket = zone.auditBucket;
export const escEnvironment = zone.escEnvironment;

Python:

import pulumi

from components.landing_zone import LandingZone, LandingZoneArgs


config = pulumi.Config()
cidr_block = config.get("cidrBlock") or "10.0.0.0/16"
trusted_principal_arn = config.get("trustedPrincipalArn")

zone = LandingZone(
    "platform",
    LandingZoneArgs(
        cidr_block=cidr_block,
        trusted_principal_arn=trusted_principal_arn,
        tags={
            "environment": pulumi.get_stack(),
            "solution-family": "landing-zone",
            "cloud": "aws",
            "language": "python",
        },
    ),
)

pulumi.export("networkId", zone.network_id)
pulumi.export("publicSubnetIds", zone.public_subnet_ids)
pulumi.export("privateSubnetIds", zone.private_subnet_ids)
pulumi.export("dataEncryptionKeyArn", zone.data_encryption_key_arn)
pulumi.export("dataEncryptionKeyAlias", zone.data_encryption_key_alias)
pulumi.export("secretsStore", zone.secrets_store)
pulumi.export("deployerRoleArn", zone.deployer_role_arn)
pulumi.export("readOnlyRoleArn", zone.read_only_role_arn)
pulumi.export("auditBucket", zone.audit_bucket)
pulumi.export("escEnvironment", zone.esc_environment)

Go:

package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"

	"landing-zone-aws/landingzone"
)

func main() {
	pulumi.Run(Program)
}

func Program(ctx *pulumi.Context) error {
	cfg := config.New(ctx, "")
	cidrBlock := cfg.Get("cidrBlock")
	if cidrBlock == "" {
		cidrBlock = "10.0.0.0/16"
	}
	trustedPrincipalArn := cfg.Get("trustedPrincipalArn")

	zone, err := landingzone.NewLandingZone(ctx, "platform", &landingzone.LandingZoneArgs{
		CidrBlock:           cidrBlock,
		TrustedPrincipalArn: trustedPrincipalArn,
		Tags: map[string]string{
			"environment":     ctx.Stack(),
			"solution-family": "landing-zone",
			"cloud":           "aws",
			"language":        "go",
		},
	})
	if err != nil {
		return err
	}

	ctx.Export("networkId", zone.NetworkId)
	ctx.Export("publicSubnetIds", zone.PublicSubnetIds)
	ctx.Export("privateSubnetIds", zone.PrivateSubnetIds)
	ctx.Export("dataEncryptionKeyArn", zone.DataEncryptionKeyArn)
	ctx.Export("dataEncryptionKeyAlias", zone.DataEncryptionKeyAlias)
	ctx.Export("secretsStore", zone.SecretsStore)
	ctx.Export("deployerRoleArn", zone.DeployerRoleArn)
	ctx.Export("readOnlyRoleArn", zone.ReadOnlyRoleArn)
	ctx.Export("auditBucket", zone.AuditBucket)
	ctx.Export("escEnvironment", zone.EscEnvironment)
	return nil
}

Reusable components

The entrypoint stays small because the landing-zone wiring lives in a reusable module. The downloadable blueprint ships the same component, shown below for each language.

TypeScript:

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

export interface LandingZoneArgs {
    cidrBlock?: string;
    availabilityZones?: pulumi.Input<string[]>;
    trustedPrincipalArn?: pulumi.Input<string>;
    auditRetentionDays?: number;
    tags?: Record<string, string>;
}

export class LandingZone {
    public readonly networkId: pulumi.Output<string>;
    public readonly publicSubnetIds: pulumi.Output<string[]>;
    public readonly privateSubnetIds: pulumi.Output<string[]>;
    public readonly dataEncryptionKeyArn: pulumi.Output<string>;
    public readonly dataEncryptionKeyAlias: pulumi.Output<string>;
    public readonly secretsStore: pulumi.Output<string>;
    public readonly deployerRoleArn: pulumi.Output<string>;
    public readonly readOnlyRoleArn: pulumi.Output<string>;
    public readonly auditBucket: pulumi.Output<string>;
    public readonly escEnvironment: pulumi.Output<string>;

    constructor(name: string, args: LandingZoneArgs = {}) {
        const tags = { ...args.tags, "landing-zone": name };
        const cidrBlock = args.cidrBlock ?? "10.0.0.0/16";
        const retentionDays = args.auditRetentionDays ?? 90;

        const azs = pulumi.output(args.availabilityZones ?? aws.getAvailabilityZones({ state: "available" }).then((z) => z.names.slice(0, 2)));

        const vpc = new aws.ec2.Vpc(`${name}-vpc`, {
            cidrBlock,
            enableDnsHostnames: true,
            enableDnsSupport: true,
            tags,
        });

        const igw = new aws.ec2.InternetGateway(`${name}-igw`, {
            vpcId: vpc.id,
            tags,
        });

        const publicSubnets: aws.ec2.Subnet[] = [];
        const privateSubnets: aws.ec2.Subnet[] = [];
        const natGateways: aws.ec2.NatGateway[] = [];

        for (let i = 0; i < 2; i++) {
            const az = azs.apply((names) => names[i]);
            const publicSubnet = new aws.ec2.Subnet(`${name}-public-${i}`, {
                vpcId: vpc.id,
                availabilityZone: az,
                cidrBlock: pulumi.interpolate`10.0.${i * 16}.0/20`,
                mapPublicIpOnLaunch: true,
                tags: { ...tags, tier: "public" },
            });
            publicSubnets.push(publicSubnet);

            const eip = new aws.ec2.Eip(`${name}-nat-eip-${i}`, { domain: "vpc", tags });
            const nat = new aws.ec2.NatGateway(`${name}-nat-${i}`, {
                allocationId: eip.id,
                subnetId: publicSubnet.id,
                tags,
            }, { dependsOn: [igw] });
            natGateways.push(nat);

            const privateSubnet = new aws.ec2.Subnet(`${name}-private-${i}`, {
                vpcId: vpc.id,
                availabilityZone: az,
                cidrBlock: pulumi.interpolate`10.0.${i * 16 + 128}.0/20`,
                tags: { ...tags, tier: "private" },
            });
            privateSubnets.push(privateSubnet);
        }

        const publicRt = new aws.ec2.RouteTable(`${name}-public-rt`, {
            vpcId: vpc.id,
            routes: [{ cidrBlock: "0.0.0.0/0", gatewayId: igw.id }],
            tags,
        });
        publicSubnets.forEach((subnet, i) =>
            new aws.ec2.RouteTableAssociation(`${name}-public-rta-${i}`, {
                subnetId: subnet.id,
                routeTableId: publicRt.id,
            }),
        );
        privateSubnets.forEach((subnet, i) => {
            const rt = new aws.ec2.RouteTable(`${name}-private-rt-${i}`, {
                vpcId: vpc.id,
                routes: [{ cidrBlock: "0.0.0.0/0", natGatewayId: natGateways[i].id }],
                tags,
            });
            new aws.ec2.RouteTableAssociation(`${name}-private-rta-${i}`, {
                subnetId: subnet.id,
                routeTableId: rt.id,
            });
        });

        const key = new aws.kms.Key(`${name}-key`, {
            description: `${name} landing zone master key`,
            enableKeyRotation: true,
            tags,
        });
        const keyAlias = new aws.kms.Alias(`${name}-key-alias`, {
            name: `alias/${name}-landing-zone`,
            targetKeyId: key.keyId,
        });

        const flowLogsGroup = new aws.cloudwatch.LogGroup(`${name}-flow-logs`, {
            retentionInDays: retentionDays,
            kmsKeyId: key.arn,
            tags,
        });
        const flowLogsRole = new aws.iam.Role(`${name}-flow-logs-role`, {
            assumeRolePolicy: JSON.stringify({
                Version: "2012-10-17",
                Statement: [{
                    Effect: "Allow",
                    Principal: { Service: "vpc-flow-logs.amazonaws.com" },
                    Action: "sts:AssumeRole",
                }],
            }),
            tags,
        });
        new aws.iam.RolePolicy(`${name}-flow-logs-policy`, {
            role: flowLogsRole.id,
            policy: JSON.stringify({
                Version: "2012-10-17",
                Statement: [{
                    Effect: "Allow",
                    Action: ["logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams"],
                    Resource: "*",
                }],
            }),
        });
        new aws.ec2.FlowLog(`${name}-flow-log`, {
            vpcId: vpc.id,
            iamRoleArn: flowLogsRole.arn,
            logDestination: flowLogsGroup.arn,
            trafficType: "ALL",
            tags,
        });

        const callerIdentity = aws.getCallerIdentity({});
        const trustedArn = pulumi.output(args.trustedPrincipalArn ?? callerIdentity.then((id) => `arn:aws:iam::${id.accountId}:root`));

        const assumeRolePolicy = trustedArn.apply((arn) => JSON.stringify({
            Version: "2012-10-17",
            Statement: [{
                Effect: "Allow",
                Principal: { AWS: arn },
                Action: "sts:AssumeRole",
            }],
        }));

        const deployerRole = new aws.iam.Role(`${name}-deployer`, {
            name: `${name}-deployer`,
            assumeRolePolicy,
            description: "Workload deployer for projects rooted at this landing zone.",
            tags,
        });
        new aws.iam.RolePolicyAttachment(`${name}-deployer-attach`, {
            role: deployerRole.name,
            policyArn: "arn:aws:iam::aws:policy/PowerUserAccess",
        });

        const readOnlyRole = new aws.iam.Role(`${name}-readonly`, {
            name: `${name}-readonly`,
            assumeRolePolicy,
            description: "Read-only observability role for projects rooted at this landing zone.",
            tags,
        });
        new aws.iam.RolePolicyAttachment(`${name}-readonly-attach`, {
            role: readOnlyRole.name,
            policyArn: "arn:aws:iam::aws:policy/ReadOnlyAccess",
        });

        const auditBucket = new aws.s3.BucketV2(`${name}-audit`, {
            forceDestroy: true,
            tags,
        });
        new aws.s3.BucketServerSideEncryptionConfigurationV2(`${name}-audit-sse`, {
            bucket: auditBucket.id,
            rules: [{
                applyServerSideEncryptionByDefault: {
                    sseAlgorithm: "aws:kms",
                    kmsMasterKeyId: key.arn,
                },
            }],
        });
        new aws.s3.BucketLifecycleConfigurationV2(`${name}-audit-lifecycle`, {
            bucket: auditBucket.id,
            rules: [{
                id: "retain",
                status: "Enabled",
                expiration: { days: retentionDays },
            }],
        });

        new aws.cloudtrail.Trail(`${name}-trail`, {
            s3BucketName: auditBucket.id,
            includeGlobalServiceEvents: true,
            isMultiRegionTrail: true,
            enableLogFileValidation: true,
            kmsKeyId: key.arn,
            tags,
        });

        this.networkId = vpc.id;
        this.publicSubnetIds = pulumi.output(publicSubnets.map((s) => s.id));
        this.privateSubnetIds = pulumi.output(privateSubnets.map((s) => s.id));
        this.dataEncryptionKeyArn = key.arn;
        this.dataEncryptionKeyAlias = keyAlias.name;
        this.secretsStore = pulumi.interpolate`${name}/`;
        this.deployerRoleArn = deployerRole.arn;
        this.readOnlyRoleArn = readOnlyRole.arn;
        this.auditBucket = auditBucket.bucket;
        this.escEnvironment = pulumi.interpolate`${name}-landing-zone`;
    }
}

Python:

import json
from dataclasses import dataclass, field
from typing import Dict, List, Optional

import pulumi
import pulumi_aws as aws


@dataclass
class LandingZoneArgs:
    cidr_block: str = "10.0.0.0/16"
    availability_zones: Optional[pulumi.Input[List[str]]] = None
    trusted_principal_arn: Optional[pulumi.Input[str]] = None
    audit_retention_days: int = 90
    tags: Dict[str, str] = field(default_factory=dict)


class LandingZone:
    def __init__(self, name: str, args: Optional[LandingZoneArgs] = None) -> None:
        args = args or LandingZoneArgs()
        tags = {**args.tags, "landing-zone": name}

        azs = pulumi.Output.from_input(
            args.availability_zones
            if args.availability_zones is not None
            else aws.get_availability_zones(state="available").names[:2]
        )

        vpc = aws.ec2.Vpc(
            f"{name}-vpc",
            cidr_block=args.cidr_block,
            enable_dns_hostnames=True,
            enable_dns_support=True,
            tags=tags,
        )

        igw = aws.ec2.InternetGateway(f"{name}-igw", vpc_id=vpc.id, tags=tags)

        public_subnets = []
        private_subnets = []
        nat_gateways = []

        for i in range(2):
            az = azs.apply(lambda names, i=i: names[i])
            public_subnet = aws.ec2.Subnet(
                f"{name}-public-{i}",
                vpc_id=vpc.id,
                availability_zone=az,
                cidr_block=f"10.0.{i * 16}.0/20",
                map_public_ip_on_launch=True,
                tags={**tags, "tier": "public"},
            )
            public_subnets.append(public_subnet)

            eip = aws.ec2.Eip(f"{name}-nat-eip-{i}", domain="vpc", tags=tags)
            nat = aws.ec2.NatGateway(
                f"{name}-nat-{i}",
                allocation_id=eip.id,
                subnet_id=public_subnet.id,
                tags=tags,
                opts=pulumi.ResourceOptions(depends_on=[igw]),
            )
            nat_gateways.append(nat)

            private_subnet = aws.ec2.Subnet(
                f"{name}-private-{i}",
                vpc_id=vpc.id,
                availability_zone=az,
                cidr_block=f"10.0.{i * 16 + 128}.0/20",
                tags={**tags, "tier": "private"},
            )
            private_subnets.append(private_subnet)

        public_rt = aws.ec2.RouteTable(
            f"{name}-public-rt",
            vpc_id=vpc.id,
            routes=[aws.ec2.RouteTableRouteArgs(cidr_block="0.0.0.0/0", gateway_id=igw.id)],
            tags=tags,
        )
        for i, subnet in enumerate(public_subnets):
            aws.ec2.RouteTableAssociation(
                f"{name}-public-rta-{i}",
                subnet_id=subnet.id,
                route_table_id=public_rt.id,
            )
        for i, subnet in enumerate(private_subnets):
            rt = aws.ec2.RouteTable(
                f"{name}-private-rt-{i}",
                vpc_id=vpc.id,
                routes=[aws.ec2.RouteTableRouteArgs(cidr_block="0.0.0.0/0", nat_gateway_id=nat_gateways[i].id)],
                tags=tags,
            )
            aws.ec2.RouteTableAssociation(
                f"{name}-private-rta-{i}",
                subnet_id=subnet.id,
                route_table_id=rt.id,
            )

        key = aws.kms.Key(
            f"{name}-key",
            description=f"{name} landing zone master key",
            enable_key_rotation=True,
            tags=tags,
        )
        key_alias = aws.kms.Alias(
            f"{name}-key-alias",
            name=f"alias/{name}-landing-zone",
            target_key_id=key.key_id,
        )

        flow_logs_group = aws.cloudwatch.LogGroup(
            f"{name}-flow-logs",
            retention_in_days=args.audit_retention_days,
            kms_key_id=key.arn,
            tags=tags,
        )
        flow_logs_role = aws.iam.Role(
            f"{name}-flow-logs-role",
            assume_role_policy=json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
                    "Action": "sts:AssumeRole",
                }],
            }),
            tags=tags,
        )
        aws.iam.RolePolicy(
            f"{name}-flow-logs-policy",
            role=flow_logs_role.id,
            policy=json.dumps({
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Action": ["logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams"],
                    "Resource": "*",
                }],
            }),
        )
        aws.ec2.FlowLog(
            f"{name}-flow-log",
            vpc_id=vpc.id,
            iam_role_arn=flow_logs_role.arn,
            log_destination=flow_logs_group.arn,
            traffic_type="ALL",
            tags=tags,
        )

        caller = aws.get_caller_identity()
        trusted_arn = pulumi.Output.from_input(
            args.trusted_principal_arn
            if args.trusted_principal_arn is not None
            else f"arn:aws:iam::{caller.account_id}:root"
        )

        assume_role_policy = trusted_arn.apply(lambda arn: json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"AWS": arn},
                "Action": "sts:AssumeRole",
            }],
        }))

        deployer_role = aws.iam.Role(
            f"{name}-deployer",
            name=f"{name}-deployer",
            assume_role_policy=assume_role_policy,
            description="Workload deployer for projects rooted at this landing zone.",
            tags=tags,
        )
        aws.iam.RolePolicyAttachment(
            f"{name}-deployer-attach",
            role=deployer_role.name,
            policy_arn="arn:aws:iam::aws:policy/PowerUserAccess",
        )

        read_only_role = aws.iam.Role(
            f"{name}-readonly",
            name=f"{name}-readonly",
            assume_role_policy=assume_role_policy,
            description="Read-only observability role for projects rooted at this landing zone.",
            tags=tags,
        )
        aws.iam.RolePolicyAttachment(
            f"{name}-readonly-attach",
            role=read_only_role.name,
            policy_arn="arn:aws:iam::aws:policy/ReadOnlyAccess",
        )

        audit_bucket = aws.s3.BucketV2(f"{name}-audit", force_destroy=True, tags=tags)
        aws.s3.BucketServerSideEncryptionConfigurationV2(
            f"{name}-audit-sse",
            bucket=audit_bucket.id,
            rules=[aws.s3.BucketServerSideEncryptionConfigurationV2RuleArgs(
                apply_server_side_encryption_by_default=aws.s3.BucketServerSideEncryptionConfigurationV2RuleApplyServerSideEncryptionByDefaultArgs(
                    sse_algorithm="aws:kms",
                    kms_master_key_id=key.arn,
                ),
            )],
        )
        aws.s3.BucketLifecycleConfigurationV2(
            f"{name}-audit-lifecycle",
            bucket=audit_bucket.id,
            rules=[aws.s3.BucketLifecycleConfigurationV2RuleArgs(
                id="retain",
                status="Enabled",
                # S3 requires each lifecycle rule to carry a filter (or legacy
                # prefix); an empty filter applies the rule to every object.
                filter=aws.s3.BucketLifecycleConfigurationV2RuleFilterArgs(),
                expiration=aws.s3.BucketLifecycleConfigurationV2RuleExpirationArgs(days=args.audit_retention_days),
            )],
        )

        # CloudTrail rejects the bucket until its policy grants the service
        # s3:GetBucketAcl and s3:PutObject, so attach that policy before the trail.
        trail_bucket_policy = aws.s3.BucketPolicy(
            f"{name}-audit-policy",
            bucket=audit_bucket.id,
            policy=pulumi.Output.all(audit_bucket.arn, caller.account_id).apply(
                lambda a: json.dumps({
                    "Version": "2012-10-17",
                    "Statement": [
                        {"Effect": "Allow", "Principal": {"Service": "cloudtrail.amazonaws.com"},
                         "Action": "s3:GetBucketAcl", "Resource": a[0]},
                        {"Effect": "Allow", "Principal": {"Service": "cloudtrail.amazonaws.com"},
                         "Action": "s3:PutObject", "Resource": f"{a[0]}/AWSLogs/{a[1]}/*",
                         "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}},
                    ],
                })
            ),
        )

        # Note: kms_key_id also requires the key policy to allow
        # cloudtrail.amazonaws.com to call kms:GenerateDataKey*; the
        # default key policy does not.
        aws.cloudtrail.Trail(
            f"{name}-trail",
            s3_bucket_name=audit_bucket.id,
            include_global_service_events=True,
            is_multi_region_trail=True,
            enable_log_file_validation=True,
            kms_key_id=key.arn,
            tags=tags,
            opts=pulumi.ResourceOptions(depends_on=[trail_bucket_policy]),
        )

        self.network_id = vpc.id
        self.public_subnet_ids = pulumi.Output.all(*[s.id for s in public_subnets])
        self.private_subnet_ids = pulumi.Output.all(*[s.id for s in private_subnets])
        self.data_encryption_key_arn = key.arn
        self.data_encryption_key_alias = key_alias.name
        self.secrets_store = f"{name}/"
        self.deployer_role_arn = deployer_role.arn
        self.read_only_role_arn = read_only_role.arn
        self.audit_bucket = audit_bucket.bucket
        self.esc_environment = f"{name}-landing-zone"
package landingzone

import (
	"encoding/json"
	"fmt"

	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/cloudtrail"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/cloudwatch"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/ec2"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/iam"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/kms"
	"github.com/pulumi/pulumi-aws/sdk/v6/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

type LandingZoneArgs struct {
	CidrBlock           string
	AvailabilityZones   []string
	TrustedPrincipalArn string
	AuditRetentionDays  int
	Tags                map[string]string
}

type LandingZone struct {
	NetworkId              pulumi.StringOutput
	PublicSubnetIds        pulumi.StringArrayOutput
	PrivateSubnetIds       pulumi.StringArrayOutput
	DataEncryptionKeyArn   pulumi.StringOutput
	DataEncryptionKeyAlias pulumi.StringOutput
	SecretsStore           pulumi.StringOutput
	DeployerRoleArn        pulumi.StringOutput
	ReadOnlyRoleArn        pulumi.StringOutput
	AuditBucket            pulumi.StringOutput
	EscEnvironment         pulumi.StringOutput
}

func NewLandingZone(ctx *pulumi.Context, name string, args *LandingZoneArgs) (*LandingZone, error) {
	if args == nil {
		args = &LandingZoneArgs{}
	}
	if args.CidrBlock == "" {
		args.CidrBlock = "10.0.0.0/16"
	}
	if args.AuditRetentionDays == 0 {
		args.AuditRetentionDays = 90
	}

	tags := map[string]string{"landing-zone": name}
	for k, v := range args.Tags {
		tags[k] = v
	}
	tagsInput := pulumi.ToStringMap(tags)

	var azs []string
	if len(args.AvailabilityZones) >= 2 {
		azs = args.AvailabilityZones[:2]
	} else {
		state := "available"
		result, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{State: &state}, nil)
		if err != nil {
			return nil, err
		}
		if len(result.Names) < 2 {
			return nil, fmt.Errorf("need at least two availability zones, got %d", len(result.Names))
		}
		azs = result.Names[:2]
	}

	vpc, err := ec2.NewVpc(ctx, fmt.Sprintf("%s-vpc", name), &ec2.VpcArgs{
		CidrBlock:          pulumi.String(args.CidrBlock),
		EnableDnsHostnames: pulumi.Bool(true),
		EnableDnsSupport:   pulumi.Bool(true),
		Tags:               tagsInput,
	})
	if err != nil {
		return nil, err
	}

	igw, err := ec2.NewInternetGateway(ctx, fmt.Sprintf("%s-igw", name), &ec2.InternetGatewayArgs{
		VpcId: vpc.ID(),
		Tags:  tagsInput,
	})
	if err != nil {
		return nil, err
	}

	publicSubnetIds := pulumi.StringArray{}
	privateSubnetIds := pulumi.StringArray{}
	natGateways := make([]*ec2.NatGateway, 0, 2)

	for i := 0; i < 2; i++ {
		publicTags := map[string]string{"landing-zone": name, "tier": "public"}
		for k, v := range args.Tags {
			publicTags[k] = v
		}
		publicSubnet, err := ec2.NewSubnet(ctx, fmt.Sprintf("%s-public-%d", name, i), &ec2.SubnetArgs{
			VpcId:               vpc.ID(),
			AvailabilityZone:    pulumi.String(azs[i]),
			CidrBlock:           pulumi.String(fmt.Sprintf("10.0.%d.0/20", i*16)),
			MapPublicIpOnLaunch: pulumi.Bool(true),
			Tags:                pulumi.ToStringMap(publicTags),
		})
		if err != nil {
			return nil, err
		}
		publicSubnetIds = append(publicSubnetIds, publicSubnet.ID().ToStringOutput())

		eip, err := ec2.NewEip(ctx, fmt.Sprintf("%s-nat-eip-%d", name, i), &ec2.EipArgs{
			Domain: pulumi.String("vpc"),
			Tags:   tagsInput,
		})
		if err != nil {
			return nil, err
		}

		nat, err := ec2.NewNatGateway(ctx, fmt.Sprintf("%s-nat-%d", name, i), &ec2.NatGatewayArgs{
			AllocationId: eip.ID(),
			SubnetId:     publicSubnet.ID(),
			Tags:         tagsInput,
		}, pulumi.DependsOn([]pulumi.Resource{igw}))
		if err != nil {
			return nil, err
		}
		natGateways = append(natGateways, nat)

		privateTags := map[string]string{"landing-zone": name, "tier": "private"}
		for k, v := range args.Tags {
			privateTags[k] = v
		}
		privateSubnet, err := ec2.NewSubnet(ctx, fmt.Sprintf("%s-private-%d", name, i), &ec2.SubnetArgs{
			VpcId:            vpc.ID(),
			AvailabilityZone: pulumi.String(azs[i]),
			CidrBlock:        pulumi.String(fmt.Sprintf("10.0.%d.0/20", i*16+128)),
			Tags:             pulumi.ToStringMap(privateTags),
		})
		if err != nil {
			return nil, err
		}
		privateSubnetIds = append(privateSubnetIds, privateSubnet.ID().ToStringOutput())

		rt, err := ec2.NewRouteTable(ctx, fmt.Sprintf("%s-private-rt-%d", name, i), &ec2.RouteTableArgs{
			VpcId: vpc.ID(),
			Routes: ec2.RouteTableRouteArray{
				&ec2.RouteTableRouteArgs{
					CidrBlock:    pulumi.String("0.0.0.0/0"),
					NatGatewayId: nat.ID(),
				},
			},
			Tags: tagsInput,
		})
		if err != nil {
			return nil, err
		}
		if _, err = ec2.NewRouteTableAssociation(ctx, fmt.Sprintf("%s-private-rta-%d", name, i), &ec2.RouteTableAssociationArgs{
			SubnetId:     privateSubnet.ID(),
			RouteTableId: rt.ID(),
		}); err != nil {
			return nil, err
		}
	}

	publicRt, err := ec2.NewRouteTable(ctx, fmt.Sprintf("%s-public-rt", name), &ec2.RouteTableArgs{
		VpcId: vpc.ID(),
		Routes: ec2.RouteTableRouteArray{
			&ec2.RouteTableRouteArgs{
				CidrBlock: pulumi.String("0.0.0.0/0"),
				GatewayId: igw.ID(),
			},
		},
		Tags: tagsInput,
	})
	if err != nil {
		return nil, err
	}

	for i := 0; i < 2; i++ {
		if _, err = ec2.NewRouteTableAssociation(ctx, fmt.Sprintf("%s-public-rta-%d", name, i), &ec2.RouteTableAssociationArgs{
			SubnetId:     publicSubnetIds[i].(pulumi.StringOutput),
			RouteTableId: publicRt.ID(),
		}); err != nil {
			return nil, err
		}
	}

	key, err := kms.NewKey(ctx, fmt.Sprintf("%s-key", name), &kms.KeyArgs{
		Description:       pulumi.String(fmt.Sprintf("%s landing zone master key", name)),
		EnableKeyRotation: pulumi.Bool(true),
		Tags:              tagsInput,
	})
	if err != nil {
		return nil, err
	}
	keyAlias, err := kms.NewAlias(ctx, fmt.Sprintf("%s-key-alias", name), &kms.AliasArgs{
		Name:        pulumi.String(fmt.Sprintf("alias/%s-landing-zone", name)),
		TargetKeyId: key.KeyId,
	})
	if err != nil {
		return nil, err
	}

	flowLogsGroup, err := cloudwatch.NewLogGroup(ctx, fmt.Sprintf("%s-flow-logs", name), &cloudwatch.LogGroupArgs{
		RetentionInDays: pulumi.Int(args.AuditRetentionDays),
		KmsKeyId:        key.Arn,
		Tags:            tagsInput,
	})
	if err != nil {
		return nil, err
	}

	assumeFlowLogs, err := json.Marshal(map[string]interface{}{
		"Version": "2012-10-17",
		"Statement": []map[string]interface{}{{
			"Effect":    "Allow",
			"Principal": map[string]string{"Service": "vpc-flow-logs.amazonaws.com"},
			"Action":    "sts:AssumeRole",
		}},
	})
	if err != nil {
		return nil, err
	}

	flowLogsRole, err := iam.NewRole(ctx, fmt.Sprintf("%s-flow-logs-role", name), &iam.RoleArgs{
		AssumeRolePolicy: pulumi.String(string(assumeFlowLogs)),
		Tags:             tagsInput,
	})
	if err != nil {
		return nil, err
	}

	flowLogsPolicy, err := json.Marshal(map[string]interface{}{
		"Version": "2012-10-17",
		"Statement": []map[string]interface{}{{
			"Effect":   "Allow",
			"Action":   []string{"logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams"},
			"Resource": "*",
		}},
	})
	if err != nil {
		return nil, err
	}
	if _, err = iam.NewRolePolicy(ctx, fmt.Sprintf("%s-flow-logs-policy", name), &iam.RolePolicyArgs{
		Role:   flowLogsRole.ID(),
		Policy: pulumi.String(string(flowLogsPolicy)),
	}); err != nil {
		return nil, err
	}
	if _, err = ec2.NewFlowLog(ctx, fmt.Sprintf("%s-flow-log", name), &ec2.FlowLogArgs{
		VpcId:          vpc.ID(),
		IamRoleArn:     flowLogsRole.Arn,
		LogDestination: flowLogsGroup.Arn,
		TrafficType:    pulumi.String("ALL"),
		Tags:           tagsInput,
	}); err != nil {
		return nil, err
	}

	trustedArn := args.TrustedPrincipalArn
	if trustedArn == "" {
		caller, err := aws.GetCallerIdentity(ctx, &aws.GetCallerIdentityArgs{}, nil)
		if err != nil {
			return nil, err
		}
		trustedArn = fmt.Sprintf("arn:aws:iam::%s:root", caller.AccountId)
	}
	assumeRolePolicy, err := json.Marshal(map[string]interface{}{
		"Version": "2012-10-17",
		"Statement": []map[string]interface{}{{
			"Effect":    "Allow",
			"Principal": map[string]string{"AWS": trustedArn},
			"Action":    "sts:AssumeRole",
		}},
	})
	if err != nil {
		return nil, err
	}

	deployerRole, err := iam.NewRole(ctx, fmt.Sprintf("%s-deployer", name), &iam.RoleArgs{
		Name:             pulumi.String(fmt.Sprintf("%s-deployer", name)),
		AssumeRolePolicy: pulumi.String(string(assumeRolePolicy)),
		Description:      pulumi.String("Workload deployer for projects rooted at this landing zone."),
		Tags:             tagsInput,
	})
	if err != nil {
		return nil, err
	}
	if _, err = iam.NewRolePolicyAttachment(ctx, fmt.Sprintf("%s-deployer-attach", name), &iam.RolePolicyAttachmentArgs{
		Role:      deployerRole.Name,
		PolicyArn: pulumi.String("arn:aws:iam::aws:policy/PowerUserAccess"),
	}); err != nil {
		return nil, err
	}

	readOnlyRole, err := iam.NewRole(ctx, fmt.Sprintf("%s-readonly", name), &iam.RoleArgs{
		Name:             pulumi.String(fmt.Sprintf("%s-readonly", name)),
		AssumeRolePolicy: pulumi.String(string(assumeRolePolicy)),
		Description:      pulumi.String("Read-only observability role for projects rooted at this landing zone."),
		Tags:             tagsInput,
	})
	if err != nil {
		return nil, err
	}
	if _, err = iam.NewRolePolicyAttachment(ctx, fmt.Sprintf("%s-readonly-attach", name), &iam.RolePolicyAttachmentArgs{
		Role:      readOnlyRole.Name,
		PolicyArn: pulumi.String("arn:aws:iam::aws:policy/ReadOnlyAccess"),
	}); err != nil {
		return nil, err
	}

	auditBucket, err := s3.NewBucketV2(ctx, fmt.Sprintf("%s-audit", name), &s3.BucketV2Args{
		ForceDestroy: pulumi.Bool(true),
		Tags:         tagsInput,
	})
	if err != nil {
		return nil, err
	}
	if _, err = s3.NewBucketServerSideEncryptionConfigurationV2(ctx, fmt.Sprintf("%s-audit-sse", name), &s3.BucketServerSideEncryptionConfigurationV2Args{
		Bucket: auditBucket.ID(),
		Rules: s3.BucketServerSideEncryptionConfigurationV2RuleArray{
			&s3.BucketServerSideEncryptionConfigurationV2RuleArgs{
				ApplyServerSideEncryptionByDefault: &s3.BucketServerSideEncryptionConfigurationV2RuleApplyServerSideEncryptionByDefaultArgs{
					SseAlgorithm:   pulumi.String("aws:kms"),
					KmsMasterKeyId: key.Arn,
				},
			},
		},
	}); err != nil {
		return nil, err
	}
	if _, err = s3.NewBucketLifecycleConfigurationV2(ctx, fmt.Sprintf("%s-audit-lifecycle", name), &s3.BucketLifecycleConfigurationV2Args{
		Bucket: auditBucket.ID(),
		Rules: s3.BucketLifecycleConfigurationV2RuleArray{
			&s3.BucketLifecycleConfigurationV2RuleArgs{
				Id:     pulumi.String("retain"),
				Status: pulumi.String("Enabled"),
				// S3 requires each lifecycle rule to carry a filter (or legacy
				// prefix); an empty filter applies the rule to every object.
				Filter: &s3.BucketLifecycleConfigurationV2RuleFilterArgs{},
				Expiration: &s3.BucketLifecycleConfigurationV2RuleExpirationArgs{
					Days: pulumi.Int(args.AuditRetentionDays),
				},
			},
		},
	}); err != nil {
		return nil, err
	}

	// CloudTrail rejects the bucket until its policy grants the service
	// s3:GetBucketAcl and s3:PutObject, so attach that policy before the trail.
	trailCaller, err := aws.GetCallerIdentity(ctx, &aws.GetCallerIdentityArgs{}, nil)
	if err != nil {
		return nil, err
	}
	trailPolicyDoc := auditBucket.Arn.ApplyT(func(arn string) (string, error) {
		doc, err := json.Marshal(map[string]interface{}{
			"Version": "2012-10-17",
			"Statement": []map[string]interface{}{{
				"Effect":    "Allow",
				"Principal": map[string]string{"Service": "cloudtrail.amazonaws.com"},
				"Action":    "s3:GetBucketAcl",
				"Resource":  arn,
			}, {
				"Effect":    "Allow",
				"Principal": map[string]string{"Service": "cloudtrail.amazonaws.com"},
				"Action":    "s3:PutObject",
				"Resource":  fmt.Sprintf("%s/AWSLogs/%s/*", arn, trailCaller.AccountId),
				"Condition": map[string]interface{}{"StringEquals": map[string]string{"s3:x-amz-acl": "bucket-owner-full-control"}},
			}},
		})
		return string(doc), err
	}).(pulumi.StringOutput)
	trailBucketPolicy, err := s3.NewBucketPolicy(ctx, fmt.Sprintf("%s-audit-policy", name), &s3.BucketPolicyArgs{
		Bucket: auditBucket.ID(),
		Policy: trailPolicyDoc,
	})
	if err != nil {
		return nil, err
	}

	// Note: KmsKeyId also requires the key policy to allow
	// cloudtrail.amazonaws.com to call kms:GenerateDataKey*; the default
	// key policy does not.
	if _, err = cloudtrail.NewTrail(ctx, fmt.Sprintf("%s-trail", name), &cloudtrail.TrailArgs{
		S3BucketName:               auditBucket.ID(),
		IncludeGlobalServiceEvents: pulumi.Bool(true),
		IsMultiRegionTrail:         pulumi.Bool(true),
		EnableLogFileValidation:    pulumi.Bool(true),
		KmsKeyId:                   key.Arn,
		Tags:                       tagsInput,
	}, pulumi.DependsOn([]pulumi.Resource{trailBucketPolicy})); err != nil {
		return nil, err
	}

	return &LandingZone{
		NetworkId:              vpc.ID().ToStringOutput(),
		PublicSubnetIds:        publicSubnetIds.ToStringArrayOutput(),
		PrivateSubnetIds:       privateSubnetIds.ToStringArrayOutput(),
		DataEncryptionKeyArn:   key.Arn,
		DataEncryptionKeyAlias: keyAlias.Name,
		SecretsStore:           pulumi.String(fmt.Sprintf("%s/", name)).ToStringOutput(),
		DeployerRoleArn:        deployerRole.Arn,
		ReadOnlyRoleArn:        readOnlyRole.Arn,
		AuditBucket:            auditBucket.Bucket,
		EscEnvironment:         pulumi.String(fmt.Sprintf("%s-landing-zone", name)).ToStringOutput(),
	}, nil
}

Frequently asked questions

What prerequisites does this blueprint need?
A Pulumi account and the Pulumi CLI; a cloud account, subscription, or project where you can create networking, IAM, key, and audit resources; and the language toolchain for the variant you chose (Node.js, Python, or Go).
How do I add more workload identities later?
The reusable component exposes the two default identities (deployer and read-only) as outputs. Define additional IAM role / managed identity / service account resources in the same program, attach the policies you need, and export them as new stack outputs. The Add another identity section on this page shows the exact shape for each cloud.
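The AWS shape is small enough to sketch. The helper and role names below are illustrative, not part of the blueprint: the trust document mirrors the one the component already builds for its deployer and read-only roles, and in a real program you would pass it to `aws.iam.Role(assume_role_policy=...)` and export the new role's ARN as a stack output.

```python
import json

def assume_role_policy(trusted_arn: str) -> str:
    # Same trust document the landing zone uses for its deployer and
    # read-only roles: `trusted_arn` may call sts:AssumeRole.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": trusted_arn},
            "Action": "sts:AssumeRole",
        }],
    })

# Hypothetical extra identity: a CI role trusted by the whole account.
ci_trust = assume_role_policy("arn:aws:iam::123456789012:root")
```

Attach whatever managed policies the new identity needs, the same way the component attaches PowerUserAccess and ReadOnlyAccess to its defaults.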
Does this land audit logs in a SIEM?
No. This blueprint writes audit logs to a retention bucket in the same cloud account so you have a durable record out of the box. The Audit logging section points to the provider features you would layer on to forward those logs to Splunk, Datadog, Sumo Logic, or a custom SIEM.
Can I reuse this outside the blueprint?
Yes. The LandingZone component is the unit meant for reuse. Import it into another Pulumi project, or publish the whole stack and consume its outputs from other stacks through Pulumi ESC or StackReference.
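One convention to keep in mind when consuming the exports: `secretsStore` is a naming prefix (the component exports `f"{name}/"`), not a resource ARN. Downstream apps nest their AWS Secrets Manager secret names under that prefix. The helper below is a hypothetical illustration of the convention, not part of the blueprint:

```python
def secret_name(secrets_store: str, app: str, key: str) -> str:
    # secrets_store arrives from the landing-zone stack as "<name>/";
    # each app nests its secrets under it, one path segment per app.
    return f"{secrets_store}{app}/{key}"
```

A landing zone named `core` would therefore hold a payments database password at `core/payments/db-password`.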
What scope does this guide target?
One cloud account, subscription, or project. The resources here are the building blocks every team needs inside that scope. When you are ready to wire them into a cloud organization, extend the same Pulumi project with the aws.organizations, azure-native.billing, or gcp.organizations resources you need; everything this blueprint provisions stays in place.
What does this cost?
NAT gateways, KMS keys, CloudWatch log ingestion for the flow logs, and audit-log storage all incur charges even when no workloads are running. Inspect the blueprint and the tags it applies before deploying to a billed account.
What can I deploy on top of a landing zone?
Two blueprints in this catalog consume these outputs directly. The managed Kubernetes blueprint stands up an opinionated EKS / AKS / GKE cluster on the landing-zone network. The serverless React + Postgres blueprint deploys a full-stack demo app (React SPA + serverless API + managed PostgreSQL) inside the same account. Both read networkId, privateSubnetIds, and secretsStore through StackReference and fail fast at preview time if the landing-zone stack is missing.
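That fail-fast behavior reduces to a preview-time check: resolve the landing-zone outputs and raise before declaring any resources if a required export is absent. A stdlib sketch, under the assumption that the values arrive as plain data resolved from a `pulumi.StackReference` on the landing-zone stack; the function name is illustrative:

```python
REQUIRED_OUTPUTS = ("networkId", "privateSubnetIds", "secretsStore")

def check_landing_zone(outputs: dict) -> dict:
    # Raise before any resources are declared so `pulumi preview`
    # fails immediately when the landing-zone stack is absent or stale.
    missing = [k for k in REQUIRED_OUTPUTS if outputs.get(k) in (None, [])]
    if missing:
        raise ValueError(
            f"landing-zone stack is missing required outputs: {', '.join(missing)}"
        )
    return outputs
```

Running the check at the top of the downstream program turns a confusing mid-deployment provider error into a single clear message.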