Deploy a GKE Cluster with Nginx Deployment

By Pulumi Team

The Challenge

You need a managed Kubernetes cluster on Google Cloud and want to verify it is working correctly before deploying real workloads. Deploying a simple nginx application validates that the cluster, networking, and Kubernetes provider are all configured properly.

What You'll Build

  • GKE cluster running on Google Cloud
  • Nginx deployment to validate cluster health
  • Kubernetes provider configured for the cluster
  • Kubeconfig exported for kubectl access
  • Cluster endpoint available for API access

Try This Prompt in Pulumi Neo

Run this prompt in Neo to deploy your infrastructure, or edit it to customize.

Best For

Use this prompt when you need a managed Kubernetes cluster on GCP and want to confirm it works end-to-end. Ideal for setting up a new environment, evaluating GKE, or establishing a baseline cluster before deploying production workloads.

Architecture Overview

This deployment creates a GKE cluster and immediately validates it by deploying an nginx pod. The two-step approach (provision infrastructure, then deploy a workload) confirms that every layer of the stack works correctly: the cluster itself, the Kubernetes API, networking, and pod scheduling.

GKE manages the Kubernetes control plane, handling API server availability, etcd storage, and component upgrades. You define the worker node pool, and Google provisions and maintains the underlying VMs. Once the cluster is running, a Pulumi Kubernetes provider is configured using the cluster’s endpoint and credentials, creating a direct connection between your Pulumi program and the Kubernetes API.
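The cluster and its worker node pool can be sketched in Pulumi TypeScript. This is a minimal illustration, not the full program: the resource names, zone, node count, and machine type are assumptions you would adjust for your project.

```typescript
import * as gcp from "@pulumi/gcp";

// Illustrative cluster; location and sizing are assumptions.
const cluster = new gcp.container.Cluster("demo-cluster", {
    location: "us-central1-a",
    initialNodeCount: 1,
    // Drop the default pool so the worker pool below is the one Pulumi manages.
    removeDefaultNodePool: true,
});

// Separately managed worker pool; Google provisions and maintains these VMs.
const nodePool = new gcp.container.NodePool("demo-nodes", {
    cluster: cluster.name,
    location: cluster.location,
    nodeCount: 2,
    nodeConfig: {
        machineType: "e2-standard-2",
        oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
    },
});

// Stack outputs for the endpoint mentioned above.
export const clusterName = cluster.name;
export const clusterEndpoint = cluster.endpoint;
```

Keeping the node pool as its own resource (rather than relying on the cluster's default pool) lets you resize or replace workers without recreating the cluster.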

The nginx deployment serves as a smoke test. If the pod reaches a running state, you know the cluster can pull container images, schedule workloads, and allocate resources correctly. The exported kubeconfig lets you interact with the cluster through kubectl for ongoing management and troubleshooting.

GKE Cluster

Provides managed Kubernetes with Google handling the control plane, automatic upgrades, and integrated logging and monitoring through Google Cloud Operations.

Kubernetes Provider

Bridges Pulumi and the GKE cluster by using the cluster’s endpoint and credentials. This allows Pulumi to create Kubernetes resources directly within the cluster in the same program that provisions it.
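One common way to wire this up is to render a kubeconfig from the cluster's outputs and hand it to a `k8s.Provider`. The sketch below assumes a `gcp.container.Cluster` resource (a minimal one is declared inline for completeness) and assumes the `gke-gcloud-auth-plugin` is available wherever the kubeconfig is used.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Minimal cluster for illustration; in the full program this is the
// cluster provisioned with its node pool.
const cluster = new gcp.container.Cluster("demo-cluster", {
    location: "us-central1-a",
    initialNodeCount: 1,
});

// Build a kubeconfig from the cluster's endpoint and CA certificate.
export const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, auth]) => `apiVersion: v1
kind: Config
clusters:
- name: ${name}
  cluster:
    server: https://${endpoint}
    certificate-authority-data: ${auth.clusterCaCertificate}
contexts:
- name: ${name}
  context:
    cluster: ${name}
    user: ${name}
current-context: ${name}
users:
- name: ${name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      provideClusterInfo: true
`);

// Kubernetes resources created with this provider target the new cluster.
const gkeProvider = new k8s.Provider("gke", { kubeconfig });
```

Exporting `kubeconfig` also satisfies the kubectl access goal: `pulumi stack output kubeconfig > kubeconfig.yaml` gives you a file for `kubectl --kubeconfig`.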

Nginx Deployment

Acts as a validation workload that confirms the cluster can pull images, schedule pods, and run containers. Once verified, you can replace it with your actual application deployments.
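A smoke-test deployment might look like the following sketch. To keep the example self-contained, the kubeconfig is read from stack config here; in the full program it would come from the cluster's outputs. The image tag and resource names are assumptions.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Illustrative provider setup; see the Kubernetes Provider section for
// deriving the kubeconfig from the cluster itself.
const kubeconfig = new pulumi.Config().requireSecret("kubeconfig");
const gke = new k8s.Provider("gke", { kubeconfig });

const appLabels = { app: "nginx" };
const nginx = new k8s.apps.v1.Deployment("nginx-smoke-test", {
    spec: {
        replicas: 1,
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "nginx",
                    // Pulling a pinned public image exercises registry access.
                    image: "nginx:1.27",
                    ports: [{ containerPort: 80 }],
                }],
            },
        },
    },
}, { provider: gke });

// Handy for `kubectl rollout status deployment/<name>` after deploying.
export const deploymentName = nginx.metadata.name;
```

If the rollout completes, image pulls, scheduling, and resource allocation all work; swapping in your real workload is then a matter of replacing this Deployment.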

Common Customizations

  • Change the machine type and node count: Adjust the node pool to use larger machines or more nodes depending on your workload requirements.
  • Enable autoscaling: Add cluster autoscaler configuration to automatically adjust node count based on resource demand.
  • Use a private cluster: Restrict the cluster’s API endpoint to a private network for improved security in production environments.
  • Deploy a real application: Replace the nginx smoke test with your actual workload once the cluster is validated.
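As one example of these customizations, an autoscaling node pool replaces the fixed `nodeCount` with min/max bounds. The bounds and machine type below are assumptions; a minimal cluster is declared inline so the sketch stands on its own.

```typescript
import * as gcp from "@pulumi/gcp";

const cluster = new gcp.container.Cluster("demo-cluster", {
    location: "us-central1-a",
    initialNodeCount: 1,
    removeDefaultNodePool: true,
});

// GKE adds or removes nodes between the bounds as pod resource demand changes.
const autoscaledPool = new gcp.container.NodePool("autoscaled-nodes", {
    cluster: cluster.name,
    location: cluster.location,
    autoscaling: {
        minNodeCount: 1,
        maxNodeCount: 5,
    },
    nodeConfig: {
        machineType: "e2-standard-4", // larger machines, per the first customization
    },
});
```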