Configure GCP Sole-Tenant Node Groups

The gcp:compute/nodeGroup:NodeGroup resource, part of the Pulumi GCP provider, manages a group of sole-tenant nodes: their size, maintenance schedule, autoscaling behavior, and cross-project sharing. This guide focuses on three capabilities: fixed-size and autoscaling node groups, maintenance interval configuration, and cross-project node sharing.

Node groups require a NodeTemplate that defines the node type and region. Share settings reference guest projects that must exist separately. The examples are intentionally small; combine them with your own node templates and project infrastructure.

Create a fixed-size node group

Teams running workloads that require physical isolation start by provisioning a node group with a fixed number of sole-tenant nodes.

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const soletenant_tmpl = new gcp.compute.NodeTemplate("soletenant-tmpl", {
    name: "soletenant-tmpl",
    region: "us-central1",
    nodeType: "n1-node-96-624",
});
const nodes = new gcp.compute.NodeGroup("nodes", {
    name: "soletenant-group",
    zone: "us-central1-a",
    description: "example google_compute_node_group for the Google Provider",
    initialSize: 1,
    nodeTemplate: soletenant_tmpl.id,
});
import pulumi
import pulumi_gcp as gcp

soletenant_tmpl = gcp.compute.NodeTemplate("soletenant-tmpl",
    name="soletenant-tmpl",
    region="us-central1",
    node_type="n1-node-96-624")
nodes = gcp.compute.NodeGroup("nodes",
    name="soletenant-group",
    zone="us-central1-a",
    description="example google_compute_node_group for the Google Provider",
    initial_size=1,
    node_template=soletenant_tmpl.id)
package main

import (
	"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/compute"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		soletenant_tmpl, err := compute.NewNodeTemplate(ctx, "soletenant-tmpl", &compute.NodeTemplateArgs{
			Name:     pulumi.String("soletenant-tmpl"),
			Region:   pulumi.String("us-central1"),
			NodeType: pulumi.String("n1-node-96-624"),
		})
		if err != nil {
			return err
		}
		_, err = compute.NewNodeGroup(ctx, "nodes", &compute.NodeGroupArgs{
			Name:         pulumi.String("soletenant-group"),
			Zone:         pulumi.String("us-central1-a"),
			Description:  pulumi.String("example google_compute_node_group for the Google Provider"),
			InitialSize:  pulumi.Int(1),
			NodeTemplate: soletenant_tmpl.ID(),
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Gcp = Pulumi.Gcp;

return await Deployment.RunAsync(() => 
{
    var soletenant_tmpl = new Gcp.Compute.NodeTemplate("soletenant-tmpl", new()
    {
        Name = "soletenant-tmpl",
        Region = "us-central1",
        NodeType = "n1-node-96-624",
    });

    var nodes = new Gcp.Compute.NodeGroup("nodes", new()
    {
        Name = "soletenant-group",
        Zone = "us-central1-a",
        Description = "example google_compute_node_group for the Google Provider",
        InitialSize = 1,
        NodeTemplate = soletenant_tmpl.Id,
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.compute.NodeTemplate;
import com.pulumi.gcp.compute.NodeTemplateArgs;
import com.pulumi.gcp.compute.NodeGroup;
import com.pulumi.gcp.compute.NodeGroupArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var soletenant_tmpl = new NodeTemplate("soletenant-tmpl", NodeTemplateArgs.builder()
            .name("soletenant-tmpl")
            .region("us-central1")
            .nodeType("n1-node-96-624")
            .build());

        var nodes = new NodeGroup("nodes", NodeGroupArgs.builder()
            .name("soletenant-group")
            .zone("us-central1-a")
            .description("example google_compute_node_group for the Google Provider")
            .initialSize(1)
            .nodeTemplate(soletenant_tmpl.id())
            .build());

    }
}
resources:
  soletenant-tmpl:
    type: gcp:compute:NodeTemplate
    properties:
      name: soletenant-tmpl
      region: us-central1
      nodeType: n1-node-96-624
  nodes:
    type: gcp:compute:NodeGroup
    properties:
      name: soletenant-group
      zone: us-central1-a
      description: example google_compute_node_group for the Google Provider
      initialSize: 1
      nodeTemplate: ${["soletenant-tmpl"].id}

The nodeTemplate property references a NodeTemplate resource that defines the sole-tenant node type (n1-node-96-624) and region. The initialSize property sets the number of physical nodes to provision. Once created, the group provides dedicated hardware where you can schedule VMs that require physical isolation for compliance or licensing.

Control maintenance frequency with RECURRENT interval

Production workloads often need predictable maintenance windows to minimize disruption.

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const soletenant_tmpl = new gcp.compute.NodeTemplate("soletenant-tmpl", {
    name: "soletenant-tmpl",
    region: "us-central1",
    nodeType: "c2-node-60-240",
});
const nodes = new gcp.compute.NodeGroup("nodes", {
    name: "soletenant-group",
    zone: "us-central1-a",
    description: "example google_compute_node_group for Terraform Google Provider",
    initialSize: 1,
    nodeTemplate: soletenant_tmpl.id,
    maintenanceInterval: "RECURRENT",
});
import pulumi
import pulumi_gcp as gcp

soletenant_tmpl = gcp.compute.NodeTemplate("soletenant-tmpl",
    name="soletenant-tmpl",
    region="us-central1",
    node_type="c2-node-60-240")
nodes = gcp.compute.NodeGroup("nodes",
    name="soletenant-group",
    zone="us-central1-a",
    description="example google_compute_node_group for Terraform Google Provider",
    initial_size=1,
    node_template=soletenant_tmpl.id,
    maintenance_interval="RECURRENT")
package main

import (
	"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/compute"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		soletenant_tmpl, err := compute.NewNodeTemplate(ctx, "soletenant-tmpl", &compute.NodeTemplateArgs{
			Name:     pulumi.String("soletenant-tmpl"),
			Region:   pulumi.String("us-central1"),
			NodeType: pulumi.String("c2-node-60-240"),
		})
		if err != nil {
			return err
		}
		_, err = compute.NewNodeGroup(ctx, "nodes", &compute.NodeGroupArgs{
			Name:                pulumi.String("soletenant-group"),
			Zone:                pulumi.String("us-central1-a"),
			Description:         pulumi.String("example google_compute_node_group for Terraform Google Provider"),
			InitialSize:         pulumi.Int(1),
			NodeTemplate:        soletenant_tmpl.ID(),
			MaintenanceInterval: pulumi.String("RECURRENT"),
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Gcp = Pulumi.Gcp;

return await Deployment.RunAsync(() => 
{
    var soletenant_tmpl = new Gcp.Compute.NodeTemplate("soletenant-tmpl", new()
    {
        Name = "soletenant-tmpl",
        Region = "us-central1",
        NodeType = "c2-node-60-240",
    });

    var nodes = new Gcp.Compute.NodeGroup("nodes", new()
    {
        Name = "soletenant-group",
        Zone = "us-central1-a",
        Description = "example google_compute_node_group for Terraform Google Provider",
        InitialSize = 1,
        NodeTemplate = soletenant_tmpl.Id,
        MaintenanceInterval = "RECURRENT",
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.compute.NodeTemplate;
import com.pulumi.gcp.compute.NodeTemplateArgs;
import com.pulumi.gcp.compute.NodeGroup;
import com.pulumi.gcp.compute.NodeGroupArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var soletenant_tmpl = new NodeTemplate("soletenant-tmpl", NodeTemplateArgs.builder()
            .name("soletenant-tmpl")
            .region("us-central1")
            .nodeType("c2-node-60-240")
            .build());

        var nodes = new NodeGroup("nodes", NodeGroupArgs.builder()
            .name("soletenant-group")
            .zone("us-central1-a")
            .description("example google_compute_node_group for Terraform Google Provider")
            .initialSize(1)
            .nodeTemplate(soletenant_tmpl.id())
            .maintenanceInterval("RECURRENT")
            .build());

    }
}
resources:
  soletenant-tmpl:
    type: gcp:compute:NodeTemplate
    properties:
      name: soletenant-tmpl
      region: us-central1
      nodeType: c2-node-60-240
  nodes:
    type: gcp:compute:NodeGroup
    properties:
      name: soletenant-group
      zone: us-central1-a
      description: example google_compute_node_group for Terraform Google Provider
      initialSize: 1
      nodeTemplate: ${["soletenant-tmpl"].id}
      maintenanceInterval: RECURRENT

The maintenanceInterval property controls how frequently Google applies infrastructure updates. Setting it to RECURRENT batches updates into periodic windows no more frequent than every 28 days, reducing the number of live migrations and VM terminations. The alternative, AS_NEEDED, applies updates as they become available.
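To take updates as they become available instead, the same group would use the AS_NEEDED setting, a one-line variation on the YAML example above:

```yaml
      maintenanceInterval: AS_NEEDED
```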

Scale node groups automatically based on demand

Workloads with variable capacity needs can use autoscaling to add nodes as demand grows.

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const soletenant_tmpl = new gcp.compute.NodeTemplate("soletenant-tmpl", {
    name: "soletenant-tmpl",
    region: "us-central1",
    nodeType: "n1-node-96-624",
});
const nodes = new gcp.compute.NodeGroup("nodes", {
    name: "soletenant-group",
    zone: "us-central1-a",
    description: "example google_compute_node_group for Google Provider",
    maintenancePolicy: "RESTART_IN_PLACE",
    maintenanceWindow: {
        startTime: "08:00",
    },
    initialSize: 1,
    nodeTemplate: soletenant_tmpl.id,
    autoscalingPolicy: {
        mode: "ONLY_SCALE_OUT",
        minNodes: 1,
        maxNodes: 10,
    },
});
import pulumi
import pulumi_gcp as gcp

soletenant_tmpl = gcp.compute.NodeTemplate("soletenant-tmpl",
    name="soletenant-tmpl",
    region="us-central1",
    node_type="n1-node-96-624")
nodes = gcp.compute.NodeGroup("nodes",
    name="soletenant-group",
    zone="us-central1-a",
    description="example google_compute_node_group for Google Provider",
    maintenance_policy="RESTART_IN_PLACE",
    maintenance_window={
        "start_time": "08:00",
    },
    initial_size=1,
    node_template=soletenant_tmpl.id,
    autoscaling_policy={
        "mode": "ONLY_SCALE_OUT",
        "min_nodes": 1,
        "max_nodes": 10,
    })
package main

import (
	"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/compute"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		soletenant_tmpl, err := compute.NewNodeTemplate(ctx, "soletenant-tmpl", &compute.NodeTemplateArgs{
			Name:     pulumi.String("soletenant-tmpl"),
			Region:   pulumi.String("us-central1"),
			NodeType: pulumi.String("n1-node-96-624"),
		})
		if err != nil {
			return err
		}
		_, err = compute.NewNodeGroup(ctx, "nodes", &compute.NodeGroupArgs{
			Name:              pulumi.String("soletenant-group"),
			Zone:              pulumi.String("us-central1-a"),
			Description:       pulumi.String("example google_compute_node_group for Google Provider"),
			MaintenancePolicy: pulumi.String("RESTART_IN_PLACE"),
			MaintenanceWindow: &compute.NodeGroupMaintenanceWindowArgs{
				StartTime: pulumi.String("08:00"),
			},
			InitialSize:  pulumi.Int(1),
			NodeTemplate: soletenant_tmpl.ID(),
			AutoscalingPolicy: &compute.NodeGroupAutoscalingPolicyArgs{
				Mode:     pulumi.String("ONLY_SCALE_OUT"),
				MinNodes: pulumi.Int(1),
				MaxNodes: pulumi.Int(10),
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Gcp = Pulumi.Gcp;

return await Deployment.RunAsync(() => 
{
    var soletenant_tmpl = new Gcp.Compute.NodeTemplate("soletenant-tmpl", new()
    {
        Name = "soletenant-tmpl",
        Region = "us-central1",
        NodeType = "n1-node-96-624",
    });

    var nodes = new Gcp.Compute.NodeGroup("nodes", new()
    {
        Name = "soletenant-group",
        Zone = "us-central1-a",
        Description = "example google_compute_node_group for Google Provider",
        MaintenancePolicy = "RESTART_IN_PLACE",
        MaintenanceWindow = new Gcp.Compute.Inputs.NodeGroupMaintenanceWindowArgs
        {
            StartTime = "08:00",
        },
        InitialSize = 1,
        NodeTemplate = soletenant_tmpl.Id,
        AutoscalingPolicy = new Gcp.Compute.Inputs.NodeGroupAutoscalingPolicyArgs
        {
            Mode = "ONLY_SCALE_OUT",
            MinNodes = 1,
            MaxNodes = 10,
        },
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.compute.NodeTemplate;
import com.pulumi.gcp.compute.NodeTemplateArgs;
import com.pulumi.gcp.compute.NodeGroup;
import com.pulumi.gcp.compute.NodeGroupArgs;
import com.pulumi.gcp.compute.inputs.NodeGroupMaintenanceWindowArgs;
import com.pulumi.gcp.compute.inputs.NodeGroupAutoscalingPolicyArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var soletenant_tmpl = new NodeTemplate("soletenant-tmpl", NodeTemplateArgs.builder()
            .name("soletenant-tmpl")
            .region("us-central1")
            .nodeType("n1-node-96-624")
            .build());

        var nodes = new NodeGroup("nodes", NodeGroupArgs.builder()
            .name("soletenant-group")
            .zone("us-central1-a")
            .description("example google_compute_node_group for Google Provider")
            .maintenancePolicy("RESTART_IN_PLACE")
            .maintenanceWindow(NodeGroupMaintenanceWindowArgs.builder()
                .startTime("08:00")
                .build())
            .initialSize(1)
            .nodeTemplate(soletenant_tmpl.id())
            .autoscalingPolicy(NodeGroupAutoscalingPolicyArgs.builder()
                .mode("ONLY_SCALE_OUT")
                .minNodes(1)
                .maxNodes(10)
                .build())
            .build());

    }
}
resources:
  soletenant-tmpl:
    type: gcp:compute:NodeTemplate
    properties:
      name: soletenant-tmpl
      region: us-central1
      nodeType: n1-node-96-624
  nodes:
    type: gcp:compute:NodeGroup
    properties:
      name: soletenant-group
      zone: us-central1-a
      description: example google_compute_node_group for Google Provider
      maintenancePolicy: RESTART_IN_PLACE
      maintenanceWindow:
        startTime: 08:00
      initialSize: 1
      nodeTemplate: ${["soletenant-tmpl"].id}
      autoscalingPolicy:
        mode: ONLY_SCALE_OUT
        minNodes: 1
        maxNodes: 10

The autoscalingPolicy property defines the scaling bounds, while initialSize sets the starting node count. The mode property controls scaling behavior; ONLY_SCALE_OUT prevents the autoscaler from removing nodes. The minNodes and maxNodes properties set capacity limits. The maintenancePolicy and maintenanceWindow properties control how instances behave during node maintenance.
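The autoscaler also supports a bidirectional mode. As a sketch, with mode set to ON (which permits both scale-out and scale-in), the policy in the YAML example above would read:

```yaml
      autoscalingPolicy:
        mode: "ON"   # quoted so YAML parses it as a string, not a boolean
        minNodes: 1
        maxNodes: 10
```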

Share node groups across projects

Organizations with multiple projects can share sole-tenant nodes to consolidate capacity and simplify billing.

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const guestProject = new gcp.organizations.Project("guest_project", {
    projectId: "project-id",
    name: "project-name",
    orgId: "123456789",
    deletionPolicy: "DELETE",
});
const soletenant_tmpl = new gcp.compute.NodeTemplate("soletenant-tmpl", {
    name: "soletenant-tmpl",
    region: "us-central1",
    nodeType: "n1-node-96-624",
});
const nodes = new gcp.compute.NodeGroup("nodes", {
    name: "soletenant-group",
    zone: "us-central1-f",
    description: "example google_compute_node_group for Terraform Google Provider",
    initialSize: 1,
    nodeTemplate: soletenant_tmpl.id,
    shareSettings: {
        shareType: "SPECIFIC_PROJECTS",
        projectMaps: [{
            id: guestProject.projectId,
            projectId: guestProject.projectId,
        }],
    },
});
import pulumi
import pulumi_gcp as gcp

guest_project = gcp.organizations.Project("guest_project",
    project_id="project-id",
    name="project-name",
    org_id="123456789",
    deletion_policy="DELETE")
soletenant_tmpl = gcp.compute.NodeTemplate("soletenant-tmpl",
    name="soletenant-tmpl",
    region="us-central1",
    node_type="n1-node-96-624")
nodes = gcp.compute.NodeGroup("nodes",
    name="soletenant-group",
    zone="us-central1-f",
    description="example google_compute_node_group for Terraform Google Provider",
    initial_size=1,
    node_template=soletenant_tmpl.id,
    share_settings={
        "share_type": "SPECIFIC_PROJECTS",
        "project_maps": [{
            "id": guest_project.project_id,
            "project_id": guest_project.project_id,
        }],
    })
package main

import (
	"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/compute"
	"github.com/pulumi/pulumi-gcp/sdk/v9/go/gcp/organizations"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		guestProject, err := organizations.NewProject(ctx, "guest_project", &organizations.ProjectArgs{
			ProjectId:      pulumi.String("project-id"),
			Name:           pulumi.String("project-name"),
			OrgId:          pulumi.String("123456789"),
			DeletionPolicy: pulumi.String("DELETE"),
		})
		if err != nil {
			return err
		}
		soletenant_tmpl, err := compute.NewNodeTemplate(ctx, "soletenant-tmpl", &compute.NodeTemplateArgs{
			Name:     pulumi.String("soletenant-tmpl"),
			Region:   pulumi.String("us-central1"),
			NodeType: pulumi.String("n1-node-96-624"),
		})
		if err != nil {
			return err
		}
		_, err = compute.NewNodeGroup(ctx, "nodes", &compute.NodeGroupArgs{
			Name:         pulumi.String("soletenant-group"),
			Zone:         pulumi.String("us-central1-f"),
			Description:  pulumi.String("example google_compute_node_group for Terraform Google Provider"),
			InitialSize:  pulumi.Int(1),
			NodeTemplate: soletenant_tmpl.ID(),
			ShareSettings: &compute.NodeGroupShareSettingsArgs{
				ShareType: pulumi.String("SPECIFIC_PROJECTS"),
				ProjectMaps: compute.NodeGroupShareSettingsProjectMapArray{
					&compute.NodeGroupShareSettingsProjectMapArgs{
						Id:        guestProject.ProjectId,
						ProjectId: guestProject.ProjectId,
					},
				},
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using Gcp = Pulumi.Gcp;

return await Deployment.RunAsync(() => 
{
    var guestProject = new Gcp.Organizations.Project("guest_project", new()
    {
        ProjectId = "project-id",
        Name = "project-name",
        OrgId = "123456789",
        DeletionPolicy = "DELETE",
    });

    var soletenant_tmpl = new Gcp.Compute.NodeTemplate("soletenant-tmpl", new()
    {
        Name = "soletenant-tmpl",
        Region = "us-central1",
        NodeType = "n1-node-96-624",
    });

    var nodes = new Gcp.Compute.NodeGroup("nodes", new()
    {
        Name = "soletenant-group",
        Zone = "us-central1-f",
        Description = "example google_compute_node_group for Terraform Google Provider",
        InitialSize = 1,
        NodeTemplate = soletenant_tmpl.Id,
        ShareSettings = new Gcp.Compute.Inputs.NodeGroupShareSettingsArgs
        {
            ShareType = "SPECIFIC_PROJECTS",
            ProjectMaps = new[]
            {
                new Gcp.Compute.Inputs.NodeGroupShareSettingsProjectMapArgs
                {
                    Id = guestProject.ProjectId,
                    ProjectId = guestProject.ProjectId,
                },
            },
        },
    });

});
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.gcp.organizations.Project;
import com.pulumi.gcp.organizations.ProjectArgs;
import com.pulumi.gcp.compute.NodeTemplate;
import com.pulumi.gcp.compute.NodeTemplateArgs;
import com.pulumi.gcp.compute.NodeGroup;
import com.pulumi.gcp.compute.NodeGroupArgs;
import com.pulumi.gcp.compute.inputs.NodeGroupShareSettingsArgs;
import com.pulumi.gcp.compute.inputs.NodeGroupShareSettingsProjectMapArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        var guestProject = new Project("guestProject", ProjectArgs.builder()
            .projectId("project-id")
            .name("project-name")
            .orgId("123456789")
            .deletionPolicy("DELETE")
            .build());

        var soletenant_tmpl = new NodeTemplate("soletenant-tmpl", NodeTemplateArgs.builder()
            .name("soletenant-tmpl")
            .region("us-central1")
            .nodeType("n1-node-96-624")
            .build());

        var nodes = new NodeGroup("nodes", NodeGroupArgs.builder()
            .name("soletenant-group")
            .zone("us-central1-f")
            .description("example google_compute_node_group for Terraform Google Provider")
            .initialSize(1)
            .nodeTemplate(soletenant_tmpl.id())
            .shareSettings(NodeGroupShareSettingsArgs.builder()
                .shareType("SPECIFIC_PROJECTS")
                .projectMaps(NodeGroupShareSettingsProjectMapArgs.builder()
                    .id(guestProject.projectId())
                    .projectId(guestProject.projectId())
                    .build())
                .build())
            .build());

    }
}
resources:
  guestProject:
    type: gcp:organizations:Project
    name: guest_project
    properties:
      projectId: project-id
      name: project-name
      orgId: '123456789'
      deletionPolicy: DELETE
  soletenant-tmpl:
    type: gcp:compute:NodeTemplate
    properties:
      name: soletenant-tmpl
      region: us-central1
      nodeType: n1-node-96-624
  nodes:
    type: gcp:compute:NodeGroup
    properties:
      name: soletenant-group
      zone: us-central1-f
      description: example google_compute_node_group for Terraform Google Provider
      initialSize: 1
      nodeTemplate: ${["soletenant-tmpl"].id}
      shareSettings:
        shareType: SPECIFIC_PROJECTS
        projectMaps:
          - id: ${guestProject.projectId}
            projectId: ${guestProject.projectId}

The shareSettings property controls which projects can schedule VMs on the nodes. Setting shareType to SPECIFIC_PROJECTS restricts access to listed projects. The projectMaps array identifies guest projects by their project IDs. This allows multiple projects to use shared sole-tenant capacity without provisioning separate node groups.

Beyond these examples

These snippets focus on specific node group features: fixed and autoscaling capacity, maintenance scheduling, and cross-project sharing. They’re intentionally minimal rather than full compute deployments.

The examples reference pre-existing infrastructure such as NodeTemplate resources defining machine types, and guest projects for share settings. They focus on configuring the node group rather than provisioning everything around it.

To keep things focused, common node group patterns are omitted, including:

  • Maintenance policy options (DEFAULT, RESTART_IN_PLACE, MIGRATE_WITHIN_NODE_GROUP)
  • Autoscaling modes (ONLY_SCALE_OUT vs other modes)
  • Share type variations (SPECIFIC_PROJECTS vs other types)
  • Maintenance window time ranges

These omissions are intentional: the goal is to illustrate how each node group feature is wired, not provide drop-in infrastructure modules. See the NodeGroup resource reference for all available configuration options.


Frequently Asked Questions

Node Sizing & Autoscaling
Why did my node group get deleted and recreated when I changed the size?
The GCP API cannot update node group size in place. Any changes to the number of nodes (whether through Pulumi config or external changes) will cause Pulumi to delete and recreate the entire node group.
What's the difference between initialSize and autoscalingPolicy?
You must configure at least one of these at creation. Use initialSize to provision a fixed starting number of nodes, or autoscalingPolicy to let the autoscaler manage capacity between minNodes and maxNodes. They can be combined, with initialSize setting the starting count within the autoscaler's bounds.
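Sketched in the YAML dialect used above (resource names and sizes are illustrative), the two forms look like this:

```yaml
  # Fixed-size group: capacity set once at creation
  fixedNodes:
    type: gcp:compute:NodeGroup
    properties:
      zone: us-central1-a
      nodeTemplate: ${["soletenant-tmpl"].id}
      initialSize: 2

  # Autoscaled group: the autoscaler manages capacity within the bounds
  autoscaledNodes:
    type: gcp:compute:NodeGroup
    properties:
      zone: us-central1-a
      nodeTemplate: ${["soletenant-tmpl"].id}
      autoscalingPolicy:
        mode: ONLY_SCALE_OUT
        minNodes: 1
        maxNodes: 5
```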
How do I enable autoscaling for my node group?
Configure autoscalingPolicy with a scaling mode (such as ONLY_SCALE_OUT), minNodes, and maxNodes. You can still set initialSize alongside it to control the starting node count.
Maintenance Configuration
What's the difference between maintenanceInterval and maintenancePolicy?
maintenanceInterval controls when maintenance happens (AS_NEEDED for immediate updates, or RECURRENT for scheduled updates every 28+ days). maintenancePolicy controls how instances are handled during maintenance (DEFAULT, RESTART_IN_PLACE, or MIGRATE_WITHIN_NODE_GROUP).
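The two properties are independent and can be combined. A minimal YAML sketch (values are illustrative):

```yaml
  nodes:
    type: gcp:compute:NodeGroup
    properties:
      zone: us-central1-a
      nodeTemplate: ${["soletenant-tmpl"].id}
      initialSize: 1
      # When maintenance happens: batch updates into windows at least 28 days apart
      maintenanceInterval: RECURRENT
      # How VMs are handled: live-migrate them to another node in the group
      maintenancePolicy: MIGRATE_WITHIN_NODE_GROUP
```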
What does RECURRENT maintenance interval do?
RECURRENT schedules infrastructure and hypervisor updates periodically (not more than every 28 days), minimizing disruptions like live migrations and terminations on individual VMs.
What are the available maintenance policy options?
You can set maintenancePolicy to DEFAULT, RESTART_IN_PLACE, or MIGRATE_WITHIN_NODE_GROUP. The default value is DEFAULT.
Resource Sharing & Multi-Project Setup
How do I share my node group with other GCP projects?
Configure shareSettings with shareType set to SPECIFIC_PROJECTS, then provide projectMaps containing the guest project IDs you want to share with.
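Each projectMaps entry pairs an id with the guest project's ID. Sketched in YAML with hypothetical placeholder project IDs:

```yaml
      shareSettings:
        shareType: SPECIFIC_PROJECTS
        projectMaps:
          - id: guest-project-a
            projectId: guest-project-a
          - id: guest-project-b
            projectId: guest-project-b
```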
Immutability & Limitations
What properties can't I change after creating the node group?
The project property is immutable and cannot be changed after creation. Additionally, changing the node group size will cause a delete and recreate operation.
