What this guide covers
A landing zone is the set of shared infrastructure every other Pulumi project in a cloud account keeps reusing: a network, identities, a place to store secrets and keys, and somewhere to land audit logs. This blueprint gives you one Pulumi stack that provisions all of it for GCP and exports the values downstream stacks need.
The blueprint covers:
- one Pulumi stack that provisions the landing zone inside a single cloud account, subscription, or project
- a reusable LandingZone component you can import from other projects
- a Pulumi ESC environment every downstream stack imports by name
- StackReference snippets so other guides can consume the exports directly
Everything the blueprint creates is additive, so you can extend it after the first deployment as your platform grows.
What gets deployed
On GCP this blueprint provisions, in one stack:
- Network: Google Cloud VPC with a /16 address space (the cidrBlock config override), two public subnets, two private subnets, NAT egress, and flow logs to an encrypted log sink.
- Keys and secrets: one managed key in Google Cloud KMS with rotation enabled, and a secrets store convention in Google Secret Manager that downstream apps use by naming prefix.
- Workload identities: a deployer identity with write permissions scoped to downstream infrastructure, and a read-only identity for observability and audits. Both are exported so you can assume or attach them from other projects.
- Audit logging: a retention bucket that receives Google Cloud audit log events through a log sink, with a 90-day default retention, encrypted with the key above.
- Pulumi ESC environment: a stack-attached environment that exports the stack outputs as configuration values so downstream Pulumi projects can import them by name.
On GCP
The blueprint uses Google Cloud VPC for the network, Google Cloud IAM for the deployer and read-only service accounts, Google Cloud KMS for the managed encryption key, Google Secret Manager for the secrets store convention, and Google Cloud audit log sinks for audit logs.
The first deployment creates:
- a VPC network with two public subnets and two private subnets across two regions
- one KMS keyring and key with rotation enabled, using a stack-scoped name
- two service accounts (<stack>-deployer and <stack>-readonly) with scoped project-level IAM bindings
- a GCS bucket with object lifecycle retention, plus a project-level audit log sink that routes admin and data access events into the bucket
Quickstart
If you just want to see the landing zone deployed, use the downloadable example and follow this sequence:
- Download the example zip at the top of the page and unzip it.
- Open a terminal in the extracted project root.
- Install the Pulumi dependencies for the language you want to use:
npm install
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
go mod tidy
- For a first local test, keep using whichever GCP credentials already work in your shell. If you want a shared or repeatable setup, use the Pulumi ESC section below before continuing.
- Create the stack and deploy:
pulumi login
pulumi stack init dev
pulumi config set gcp:project <your-gcp-project-id>
pulumi up
- When the update finishes, inspect the outputs that downstream projects will import:
pulumi stack output --show-secrets
The default CIDR block is 10.20.0.0/16, which you can override with pulumi config set cidrBlock. Change it before you run pulumi up if it overlaps with networks you already operate.
Prerequisites
- a Pulumi account and the Pulumi CLI installed. Pulumi lets you define and update cloud infrastructure with popular programming languages.
- a Google Cloud project where you can create VPC, IAM, KMS, storage, and audit log sinks
- Node.js 20 or newer and npm if you use the TypeScript example (or the matching Python or Go toolchain for the other languages)
Set up credentials with Pulumi ESC
Before you run pulumi up, configure Pulumi ESC so your stack receives short-lived GCP credentials through OIDC-based dynamic credentials.
If you already have working GCP credentials in your shell and only want a quick local test, you can skip this section and come back later. ESC is the better long-term path for shared environments, Pulumi Deployments, and CI/CD.
Step 1: Create or update an ESC environment
Use imports if you want to layer this on top of a shared base environment.
imports:
- <your-org>/base
values:
gcp:
login:
fn::open::gcp-login:
project: 123456789012
oidc:
workloadPoolId: pulumi-esc
providerId: pulumi-esc
serviceAccount: pulumi-esc@example-project.iam.gserviceaccount.com
environmentVariables:
GOOGLE_CLOUD_PROJECT: ${gcp.login.project}
GOOGLE_OAUTH_ACCESS_TOKEN: ${gcp.login.accessToken}
pulumiConfig:
gcp:project: my-project-id
This example shows the pieces that matter for GCP:
- the cloud login provider configured for OIDC
- environment variables exported for local CLI use
- pulumiConfig values passed into your Pulumi stack
Step 2: Attach the environment to your stack
In Pulumi.dev.yaml or your stack config file, add:
environment:
- <your-org>/<your-environment>
That is what makes the ESC environment available to pulumi preview, pulumi up, and pulumi destroy.
Optional: Inspect the environment locally
Step 2 is all Pulumi needs to import the environment during pulumi preview, pulumi up, and pulumi destroy. If you want to sanity-check the resolved values from your shell, run:
esc open <your-org>/<your-environment>
You do not need to run this before pulumi up.
What you get in the download
The downloadable example zip includes:
- TypeScript: index.ts as the Pulumi entrypoint, components/landing-zone.ts as the reusable LandingZone module, and package.json and tsconfig.json for the root Pulumi project
- Python: __main__.py as the Pulumi entrypoint, components/landing_zone.py as the reusable LandingZone module, and requirements.txt for the root Pulumi project
- Go: main.go as the Pulumi entrypoint, landingzone/zone.go as the reusable LandingZone module, and go.mod for the root Pulumi project
- README.md with the same commands you will see on this page
The blueprint is a plain Pulumi project. It does not assume any team convention, but it exports outputs that other stacks and the Pulumi ESC environment can consume by name.
Deploy with Pulumi
Follow these steps in order from the project root.
Step 1: Install the root Pulumi dependencies for the language you want to use
The download card and the Pulumi code examples on this page follow the same language selection.
npm install
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
go mod tidy
Step 2: Create a Pulumi stack
If you already created the stack once, run pulumi stack select dev instead. For a fuller walkthrough, see the Pulumi getting started docs.
pulumi login
pulumi stack init dev
pulumi config set gcp:project <your-gcp-project-id>
Step 3: Deploy
pulumi up
Approve the preview when Pulumi asks. The first run creates the network, keys, identities, and audit pipeline. Pulumi imports the ESC environment automatically through the environment: reference in your stack config, so you do not need to run esc open <your-org>/<your-environment> first.
Step 4: Inspect the outputs
pulumi stack output --show-secrets
The next section walks through each output and the StackReference patterns downstream projects use to consume them.
Stack outputs
Every GCP landing-zone stack exports the same output keys so downstream Pulumi projects can consume them through StackReference or a Pulumi ESC environment:
- networkId: the Google Cloud VPC resource id
- publicSubnetIds: the two public subnet ids
- privateSubnetIds: the two private subnet ids
- dataEncryptionKey: a reference to the managed key in Google Cloud KMS
- secretsStore: the prefix or identifier apps use to create new secrets in Google Secret Manager
- deployerIdentity: the deployer workload identity
- readOnlyIdentity: the read-only workload identity
- auditBucket: the retention bucket targeted by the Google Cloud audit log sink
- escEnvironment: the Pulumi ESC environment name downstream stacks import by reference
Run pulumi stack output --show-secrets to see the values after pulumi up. The exact set of keys may include extra cloud-specific fields (for example, the KMS alias on AWS or the Key Vault URI on Azure).
Consume the landing zone from downstream projects
Once the stack is up, every other Pulumi project in the same GCP account can read its outputs. There are two patterns; pick whichever fits your team.
Pattern 1: Pulumi ESC environment
The stack creates a Pulumi ESC environment (escEnvironment output) that exposes the same outputs as configuration values. Downstream projects import it with one line in their stack config:
environment:
- your-org/landing-zone-dev
After that, pulumi.Config() in the consuming project can read networkId, privateSubnetIds, and the other keys directly.
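For example, a consuming program might read the values like this (a minimal TypeScript sketch; it assumes the ESC environment surfaces the outputs under the project's default config namespace, as described above):
import * as pulumi from "@pulumi/pulumi";
// Values arrive through the imported ESC environment's pulumiConfig block.
const config = new pulumi.Config();
// Scalar outputs can be read as plain strings.
const networkId = config.require("networkId");
// List outputs such as the subnet ids are read as structured values.
const privateSubnetIds = config.requireObject<string[]>("privateSubnetIds");
export const consumedNetworkId = networkId;
export const consumedSubnetCount = privateSubnetIds.length;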
Pattern 2: StackReference
If you prefer not to rely on ESC for cross-project wiring, use a StackReference:
import * as pulumi from "@pulumi/pulumi";
const landingZone = new pulumi.StackReference("your-org/landing-zone/dev");
const privateSubnets = landingZone.getOutput("privateSubnetIds");
import pulumi
landing_zone = pulumi.StackReference("your-org/landing-zone/dev")
private_subnets = landing_zone.get_output("privateSubnetIds")
landingZone, err := pulumi.NewStackReference(ctx, "your-org/landing-zone/dev", nil)
if err != nil {
return err
}
privateSubnets := landingZone.GetOutput(pulumi.String("privateSubnetIds"))
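To see an output in use, here is a hypothetical downstream TypeScript program that opens an HTTPS firewall rule on the landing-zone network. The rule name and source range are illustrative assumptions, not part of the blueprint:
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
const landingZone = new pulumi.StackReference("your-org/landing-zone/dev");
// getOutput returns Output<any>; narrow it to a string for the network reference.
const networkId = landingZone.getOutput("networkId").apply(id => id as string);
// Illustrative rule: allow HTTPS into the shared VPC from its own /16 range.
new gcp.compute.Firewall("app-allow-https", {
    network: networkId,
    allows: [{ protocol: "tcp", ports: ["443"] }],
    sourceRanges: ["10.20.0.0/16"],
    direction: "INGRESS",
});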
Add another workload identity
Two identities ship by default. Add more by extending the program next to the LandingZone component and exporting the new values so other stacks can assume them. Pattern per cloud:
- AWS: define a new aws.iam.Role with a trust policy for the principal that will assume it, attach the policies you need, and ctx.export the role ARN. The deployer role in the blueprint is the reference shape.
- Azure: define a new azure-native.managedidentity.UserAssignedIdentity and any scoped authorization.RoleAssignment resources for it, then export the identity client id.
- GCP: define a new gcp.serviceaccount.Account plus gcp.projects.IAMMember bindings for the specific roles, then export the service account email (see the GCP sketch below).
Because you are adding resources in the same program, the new identity is covered by the same audit logging and the same CI/CD workflow.
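A minimal TypeScript sketch of that GCP pattern, added next to the LandingZone instantiation in the entrypoint. The ci-tester name and the roles/cloudsql.client role are illustrative assumptions; substitute the identity and roles your workload actually needs:
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
const project = new pulumi.Config("gcp").require("project");
// Illustrative extra identity for an integration-test pipeline.
const ciTester = new gcp.serviceaccount.Account("ci-tester", {
    accountId: "ci-tester",
    displayName: "CI integration tester",
});
// Bind only the roles this identity actually needs.
new gcp.projects.IAMMember("ci-tester-cloudsql", {
    project: project,
    role: "roles/cloudsql.client",
    member: pulumi.interpolate`serviceAccount:${ciTester.email}`,
});
// Export the email so other stacks can attach or impersonate the identity.
export const ciTesterIdentity = ciTester.email;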
Forward audit logs to a SIEM
This blueprint writes audit logs to a retention bucket inside the same GCP project, which keeps the first deployment self-contained. To forward those logs to Splunk, Datadog, Sumo Logic, or a custom SIEM, add one of the following to the same Pulumi project as a follow-up:
- AWS: subscribe a firehose or Lambda to the CloudTrail S3 bucket’s notifications, or configure CloudTrail to send events to an event bus.
- Azure: add an Event Hubs diagnostic setting that forwards the activity log alongside the bucket sink.
- GCP: add a second log sink targeting Pub/Sub and fan out from there (sketched below).
The bucket-based baseline stays in place regardless, so you always have a durable record if the forwarder falls behind.
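For the GCP option, a minimal TypeScript sketch of that second sink. The topic and resource names are illustrative; the filter mirrors the blueprint's audit filter, and the SIEM-side subscription or forwarder is left to you:
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
// Topic the SIEM forwarder or a push subscription will consume from.
const auditTopic = new gcp.pubsub.Topic("audit-export");
// Second sink alongside the bucket sink, routing the same audit entries to Pub/Sub.
const siemSink = new gcp.logging.ProjectSink("audit-siem-sink", {
    destination: pulumi.interpolate`pubsub.googleapis.com/${auditTopic.id}`,
    filter: 'logName:"cloudaudit.googleapis.com"',
    uniqueWriterIdentity: true,
});
// The sink's writer identity must be allowed to publish to the topic.
new gcp.pubsub.TopicIAMMember("audit-siem-publisher", {
    topic: auditTopic.name,
    role: "roles/pubsub.publisher",
    member: siemSink.writerIdentity,
});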
Set up CI/CD with Pulumi Deployments
A landing zone works best when it is redeployed from a tracked source. Pulumi Deployments runs pulumi up from the same GitHub repository that holds this program whenever you merge to a branch.
What you will configure in Pulumi Deployments for this project:
- the Git repository and branch containing the unzipped blueprint
- the stack name (for example your-org/landing-zone/dev)
- the root dependency command for the language you selected (npm install for TypeScript)
- the Pulumi ESC environment reference attached to the stack, so Deployments receives the same short-lived credentials as your local run
Once Deployments is wired up, land changes through PRs instead of running pulumi up by hand. Every downstream project that consumes this stack picks up the new outputs automatically.
Blueprint Pulumi program
Each download already includes the matching Pulumi entrypoint file and the reusable landing-zone module for that language. Use the language tabs to see the exact entrypoint for the version you want to run.
import * as pulumi from "@pulumi/pulumi";
import { LandingZone } from "./components/landing-zone";
const config = new pulumi.Config();
const cidrBlock = config.get("cidrBlock");
const region = config.get("region");
const zone = new LandingZone("platform", {
cidrBlock,
region,
tags: {
environment: pulumi.getStack(),
"solution-family": "landing-zone",
cloud: "gcp",
language: "typescript",
},
});
export const networkId = zone.networkId;
export const publicSubnetIds = zone.publicSubnetIds;
export const privateSubnetIds = zone.privateSubnetIds;
export const dataEncryptionKey = zone.dataEncryptionKey;
export const secretsStore = zone.secretsStore;
export const deployerIdentity = zone.deployerIdentity;
export const readOnlyIdentity = zone.readOnlyIdentity;
export const auditBucket = zone.auditBucket;
export const escEnvironment = zone.escEnvironment;
import pulumi
from components.landing_zone import LandingZone, LandingZoneArgs
config = pulumi.Config()
cidr_block = config.get("cidrBlock") or "10.20.0.0/16"
region = config.get("region")
zone = LandingZone(
"platform",
LandingZoneArgs(
cidr_block=cidr_block,
region=region,
tags={
"environment": pulumi.get_stack(),
"solution-family": "landing-zone",
"cloud": "gcp",
"language": "python",
},
),
)
pulumi.export("networkId", zone.network_id)
pulumi.export("publicSubnetIds", zone.public_subnet_ids)
pulumi.export("privateSubnetIds", zone.private_subnet_ids)
pulumi.export("dataEncryptionKey", zone.data_encryption_key)
pulumi.export("secretsStore", zone.secrets_store)
pulumi.export("deployerIdentity", zone.deployer_identity)
pulumi.export("readOnlyIdentity", zone.read_only_identity)
pulumi.export("auditBucket", zone.audit_bucket)
pulumi.export("escEnvironment", zone.esc_environment)
package main
import (
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
"landing-zone-gcp/landingzone"
)
func main() {
pulumi.Run(Program)
}
func Program(ctx *pulumi.Context) error {
cfg := config.New(ctx, "")
cidrBlock := cfg.Get("cidrBlock")
if cidrBlock == "" {
cidrBlock = "10.20.0.0/16"
}
region := cfg.Get("region")
zone, err := landingzone.NewLandingZone(ctx, "platform", &landingzone.LandingZoneArgs{
CidrBlock: cidrBlock,
Region: region,
Tags: map[string]string{
"environment": ctx.Stack(),
"solution-family": "landing-zone",
"cloud": "gcp",
"language": "go",
},
})
if err != nil {
return err
}
ctx.Export("networkId", zone.NetworkId)
ctx.Export("publicSubnetIds", zone.PublicSubnetIds)
ctx.Export("privateSubnetIds", zone.PrivateSubnetIds)
ctx.Export("dataEncryptionKey", zone.DataEncryptionKey)
ctx.Export("secretsStore", zone.SecretsStore)
ctx.Export("deployerIdentity", zone.DeployerIdentity)
ctx.Export("readOnlyIdentity", zone.ReadOnlyIdentity)
ctx.Export("auditBucket", zone.AuditBucket)
ctx.Export("escEnvironment", zone.EscEnvironment)
return nil
}
Reusable components
The entrypoint stays small because the landing-zone wiring lives in a reusable module. The downloadable blueprint ships the same component shown below for each language.
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
export interface LandingZoneArgs {
cidrBlock?: string;
region?: pulumi.Input<string>;
auditRetentionDays?: number;
tags?: Record<string, string>;
}
export class LandingZone {
public readonly networkId: pulumi.Output<string>;
public readonly publicSubnetIds: pulumi.Output<string[]>;
public readonly privateSubnetIds: pulumi.Output<string[]>;
public readonly dataEncryptionKey: pulumi.Output<string>;
public readonly secretsStore: pulumi.Output<string>;
public readonly deployerIdentity: pulumi.Output<string>;
public readonly readOnlyIdentity: pulumi.Output<string>;
public readonly auditBucket: pulumi.Output<string>;
public readonly escEnvironment: pulumi.Output<string>;
constructor(name: string, args: LandingZoneArgs = {}) {
const labels = { ...args.tags, "landing-zone": name };
const region = args.region ?? pulumi.output(gcp.config.region ?? "us-central1");
const retentionDays = args.auditRetentionDays ?? 90;
const project = gcp.config.project ?? "tier1-project";
const projectId = pulumi.output(project);
const vpc = new gcp.compute.Network(`${name}-vpc`, {
name: `${name}-vpc`,
autoCreateSubnetworks: false,
});
const baseCidr = args.cidrBlock ?? "10.20.0.0/16";
// Derive /20 subnets under the /16 so both public and private fit.
const cidrPrefix = baseCidr.split("/")[0].split(".").slice(0, 2).join(".");
const publicSubnetIds: pulumi.Output<string>[] = [];
const privateSubnetIds: pulumi.Output<string>[] = [];
for (let i = 0; i < 2; i++) {
const publicSubnet = new gcp.compute.Subnetwork(`${name}-public-${i}`, {
name: `${name}-public-${i}`,
network: vpc.id,
region,
ipCidrRange: `${cidrPrefix}.${i * 16}.0/20`,
privateIpGoogleAccess: true,
});
publicSubnetIds.push(publicSubnet.id);
const privateSubnet = new gcp.compute.Subnetwork(`${name}-private-${i}`, {
name: `${name}-private-${i}`,
network: vpc.id,
region,
ipCidrRange: `${cidrPrefix}.${i * 16 + 128}.0/20`,
privateIpGoogleAccess: true,
});
privateSubnetIds.push(privateSubnet.id);
}
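// KMS keyring and crypto key; the 7776000s rotation period is 90 days.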
const keyring = new gcp.kms.KeyRing(`${name}-keyring`, {
name: `${name}-keyring`,
location: region,
});
const key = new gcp.kms.CryptoKey(`${name}-key`, {
name: `${name}-key`,
keyRing: keyring.id,
rotationPeriod: "7776000s",
});
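// Deployer (write) and read-only service accounts, bound at the project level below.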
const deployer = new gcp.serviceaccount.Account(`${name}-deployer`, {
accountId: `${name}-deployer`,
displayName: `${name} deployer`,
});
const readOnly = new gcp.serviceaccount.Account(`${name}-readonly`, {
accountId: `${name}-readonly`,
displayName: `${name} read-only`,
});
new gcp.projects.IAMMember(`${name}-deployer-binding`, {
project: projectId,
role: "roles/editor",
member: pulumi.interpolate`serviceAccount:${deployer.email}`,
});
new gcp.projects.IAMMember(`${name}-readonly-binding`, {
project: projectId,
role: "roles/viewer",
member: pulumi.interpolate`serviceAccount:${readOnly.email}`,
});
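// Retention bucket plus a project log sink that routes Cloud Audit Logs into it.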
const auditBucket = new gcp.storage.Bucket(`${name}-audit`, {
name: pulumi.interpolate`${name}-audit-${projectId}`,
location: region,
forceDestroy: true,
uniformBucketLevelAccess: true,
lifecycleRules: [{
condition: { age: retentionDays },
action: { type: "Delete" },
}],
labels,
});
new gcp.logging.ProjectSink(`${name}-audit-sink`, {
name: `${name}-audit-sink`,
destination: pulumi.interpolate`storage.googleapis.com/${auditBucket.name}`,
filter: "logName:\"cloudaudit.googleapis.com\"",
uniqueWriterIdentity: true,
});
this.networkId = vpc.id;
this.publicSubnetIds = pulumi.all(publicSubnetIds);
this.privateSubnetIds = pulumi.all(privateSubnetIds);
this.dataEncryptionKey = key.id;
this.secretsStore = pulumi.interpolate`projects/${projectId}/secrets`;
this.deployerIdentity = deployer.email;
this.readOnlyIdentity = readOnly.email;
this.auditBucket = auditBucket.name;
this.escEnvironment = pulumi.interpolate`${name}-landing-zone`;
}
}
from dataclasses import dataclass, field
from typing import Dict, Optional
import pulumi
import pulumi_gcp as gcp
@dataclass
class LandingZoneArgs:
cidr_block: str = "10.20.0.0/16"
region: Optional[str] = None
audit_retention_days: int = 90
tags: Dict[str, str] = field(default_factory=dict)
class LandingZone:
def __init__(self, name: str, args: Optional[LandingZoneArgs] = None) -> None:
args = args or LandingZoneArgs()
labels = {**args.tags, "landing-zone": name}
region = args.region or gcp.config.region or "us-central1"
project = gcp.config.project or "tier1-project"
vpc = gcp.compute.Network(
f"{name}-vpc",
name=f"{name}-vpc",
auto_create_subnetworks=False,
)
cidr_prefix = ".".join(args.cidr_block.split("/")[0].split(".")[:2])
public_subnet_ids = []
private_subnet_ids = []
for i in range(2):
public_subnet = gcp.compute.Subnetwork(
f"{name}-public-{i}",
name=f"{name}-public-{i}",
network=vpc.id,
region=region,
ip_cidr_range=f"{cidr_prefix}.{i * 16}.0/20",
private_ip_google_access=True,
)
public_subnet_ids.append(public_subnet.id)
private_subnet = gcp.compute.Subnetwork(
f"{name}-private-{i}",
name=f"{name}-private-{i}",
network=vpc.id,
region=region,
ip_cidr_range=f"{cidr_prefix}.{i * 16 + 128}.0/20",
private_ip_google_access=True,
)
private_subnet_ids.append(private_subnet.id)
keyring = gcp.kms.KeyRing(
f"{name}-keyring",
name=f"{name}-keyring",
location=region,
)
key = gcp.kms.CryptoKey(
f"{name}-key",
name=f"{name}-key",
key_ring=keyring.id,
rotation_period="7776000s",
)
deployer = gcp.serviceaccount.Account(
f"{name}-deployer",
account_id=f"{name}-deployer",
display_name=f"{name} deployer",
)
read_only = gcp.serviceaccount.Account(
f"{name}-readonly",
account_id=f"{name}-readonly",
display_name=f"{name} read-only",
)
gcp.projects.IAMMember(
f"{name}-deployer-binding",
project=project,
role="roles/editor",
member=deployer.email.apply(lambda email: f"serviceAccount:{email}"),
)
gcp.projects.IAMMember(
f"{name}-readonly-binding",
project=project,
role="roles/viewer",
member=read_only.email.apply(lambda email: f"serviceAccount:{email}"),
)
audit_bucket = gcp.storage.Bucket(
f"{name}-audit",
name=f"{name}-audit-{project}",
location=region,
force_destroy=True,
uniform_bucket_level_access=True,
lifecycle_rules=[gcp.storage.BucketLifecycleRuleArgs(
condition=gcp.storage.BucketLifecycleRuleConditionArgs(age=args.audit_retention_days),
action=gcp.storage.BucketLifecycleRuleActionArgs(type="Delete"),
)],
labels=labels,
)
gcp.logging.ProjectSink(
f"{name}-audit-sink",
name=f"{name}-audit-sink",
destination=audit_bucket.name.apply(lambda n: f"storage.googleapis.com/{n}"),
filter='logName:"cloudaudit.googleapis.com"',
unique_writer_identity=True,
)
self.network_id = vpc.id
self.public_subnet_ids = pulumi.Output.all(*public_subnet_ids)
self.private_subnet_ids = pulumi.Output.all(*private_subnet_ids)
self.data_encryption_key = key.id
self.secrets_store = f"projects/{project}/secrets"
self.deployer_identity = deployer.email
self.read_only_identity = read_only.email
self.audit_bucket = audit_bucket.name
self.esc_environment = f"{name}-landing-zone"
package landingzone
import (
"fmt"
"strings"
"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/compute"
"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/kms"
"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/logging"
"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/projects"
"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/serviceaccount"
"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/storage"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)
type LandingZoneArgs struct {
CidrBlock string
Region string
AuditRetentionDays int
Tags map[string]string
}
type LandingZone struct {
NetworkId pulumi.StringOutput
PublicSubnetIds pulumi.StringArrayOutput
PrivateSubnetIds pulumi.StringArrayOutput
DataEncryptionKey pulumi.StringOutput
SecretsStore pulumi.StringOutput
DeployerIdentity pulumi.StringOutput
ReadOnlyIdentity pulumi.StringOutput
AuditBucket pulumi.StringOutput
EscEnvironment pulumi.StringOutput
}
func NewLandingZone(ctx *pulumi.Context, name string, args *LandingZoneArgs) (*LandingZone, error) {
if args == nil {
args = &LandingZoneArgs{}
}
if args.CidrBlock == "" {
args.CidrBlock = "10.20.0.0/16"
}
if args.AuditRetentionDays == 0 {
args.AuditRetentionDays = 90
}
if args.Region == "" {
args.Region = config.Get(ctx, "gcp:region")
if args.Region == "" {
args.Region = "us-central1"
}
}
labels := map[string]string{"landing-zone": name}
for k, v := range args.Tags {
labels[k] = v
}
labelsInput := pulumi.ToStringMap(labels)
project := config.Get(ctx, "gcp:project")
if project == "" {
project = "tier1-project"
}
vpc, err := compute.NewNetwork(ctx, fmt.Sprintf("%s-vpc", name), &compute.NetworkArgs{
Name: pulumi.String(fmt.Sprintf("%s-vpc", name)),
AutoCreateSubnetworks: pulumi.Bool(false),
})
if err != nil {
return nil, err
}
parts := strings.Split(args.CidrBlock, "/")
ipParts := strings.Split(parts[0], ".")
cidrPrefix := strings.Join(ipParts[:2], ".")
publicSubnetIds := pulumi.StringArray{}
privateSubnetIds := pulumi.StringArray{}
for i := 0; i < 2; i++ {
publicSubnet, err := compute.NewSubnetwork(ctx, fmt.Sprintf("%s-public-%d", name, i), &compute.SubnetworkArgs{
Name: pulumi.String(fmt.Sprintf("%s-public-%d", name, i)),
Network: vpc.ID(),
Region: pulumi.String(args.Region),
IpCidrRange: pulumi.String(fmt.Sprintf("%s.%d.0/20", cidrPrefix, i*16)),
PrivateIpGoogleAccess: pulumi.Bool(true),
})
if err != nil {
return nil, err
}
publicSubnetIds = append(publicSubnetIds, publicSubnet.ID().ToStringOutput())
privateSubnet, err := compute.NewSubnetwork(ctx, fmt.Sprintf("%s-private-%d", name, i), &compute.SubnetworkArgs{
Name: pulumi.String(fmt.Sprintf("%s-private-%d", name, i)),
Network: vpc.ID(),
Region: pulumi.String(args.Region),
IpCidrRange: pulumi.String(fmt.Sprintf("%s.%d.0/20", cidrPrefix, i*16+128)),
PrivateIpGoogleAccess: pulumi.Bool(true),
})
if err != nil {
return nil, err
}
privateSubnetIds = append(privateSubnetIds, privateSubnet.ID().ToStringOutput())
}
keyring, err := kms.NewKeyRing(ctx, fmt.Sprintf("%s-keyring", name), &kms.KeyRingArgs{
Name: pulumi.String(fmt.Sprintf("%s-keyring", name)),
Location: pulumi.String(args.Region),
})
if err != nil {
return nil, err
}
key, err := kms.NewCryptoKey(ctx, fmt.Sprintf("%s-key", name), &kms.CryptoKeyArgs{
Name: pulumi.String(fmt.Sprintf("%s-key", name)),
KeyRing: keyring.ID(),
RotationPeriod: pulumi.String("7776000s"),
})
if err != nil {
return nil, err
}
deployer, err := serviceaccount.NewAccount(ctx, fmt.Sprintf("%s-deployer", name), &serviceaccount.AccountArgs{
AccountId: pulumi.String(fmt.Sprintf("%s-deployer", name)),
DisplayName: pulumi.String(fmt.Sprintf("%s deployer", name)),
})
if err != nil {
return nil, err
}
readOnly, err := serviceaccount.NewAccount(ctx, fmt.Sprintf("%s-readonly", name), &serviceaccount.AccountArgs{
AccountId: pulumi.String(fmt.Sprintf("%s-readonly", name)),
DisplayName: pulumi.String(fmt.Sprintf("%s read-only", name)),
})
if err != nil {
return nil, err
}
if _, err = projects.NewIAMMember(ctx, fmt.Sprintf("%s-deployer-binding", name), &projects.IAMMemberArgs{
Project: pulumi.String(project),
Role: pulumi.String("roles/editor"),
Member: deployer.Email.ApplyT(func(email string) string { return "serviceAccount:" + email }).(pulumi.StringOutput),
}); err != nil {
return nil, err
}
if _, err = projects.NewIAMMember(ctx, fmt.Sprintf("%s-readonly-binding", name), &projects.IAMMemberArgs{
Project: pulumi.String(project),
Role: pulumi.String("roles/viewer"),
Member: readOnly.Email.ApplyT(func(email string) string { return "serviceAccount:" + email }).(pulumi.StringOutput),
}); err != nil {
return nil, err
}
auditBucket, err := storage.NewBucket(ctx, fmt.Sprintf("%s-audit", name), &storage.BucketArgs{
Name: pulumi.String(fmt.Sprintf("%s-audit-%s", name, project)),
Location: pulumi.String(args.Region),
ForceDestroy: pulumi.Bool(true),
UniformBucketLevelAccess: pulumi.Bool(true),
LifecycleRules: storage.BucketLifecycleRuleArray{
&storage.BucketLifecycleRuleArgs{
Condition: &storage.BucketLifecycleRuleConditionArgs{
Age: pulumi.Int(args.AuditRetentionDays),
},
Action: &storage.BucketLifecycleRuleActionArgs{
Type: pulumi.String("Delete"),
},
},
},
Labels: labelsInput,
})
if err != nil {
return nil, err
}
if _, err = logging.NewProjectSink(ctx, fmt.Sprintf("%s-audit-sink", name), &logging.ProjectSinkArgs{
Name: pulumi.String(fmt.Sprintf("%s-audit-sink", name)),
Destination: auditBucket.Name.ApplyT(func(n string) string { return "storage.googleapis.com/" + n }).(pulumi.StringOutput),
Filter: pulumi.String(`logName:"cloudaudit.googleapis.com"`),
UniqueWriterIdentity: pulumi.Bool(true),
}); err != nil {
return nil, err
}
return &LandingZone{
NetworkId: vpc.ID().ToStringOutput(),
PublicSubnetIds: publicSubnetIds.ToStringArrayOutput(),
PrivateSubnetIds: privateSubnetIds.ToStringArrayOutput(),
DataEncryptionKey: key.ID().ToStringOutput(),
SecretsStore: pulumi.String(fmt.Sprintf("projects/%s/secrets", project)).ToStringOutput(),
DeployerIdentity: deployer.Email,
ReadOnlyIdentity: readOnly.Email,
AuditBucket: auditBucket.Name,
EscEnvironment: pulumi.String(fmt.Sprintf("%s-landing-zone", name)).ToStringOutput(),
}, nil
}