Deploy Cloud Run Services with Custom Containers

By Pulumi Team

The Challenge

You need to deploy containerized applications as serverless services with automatic scaling, HTTPS endpoints, and pay-per-use pricing. Cloud Run handles both public images and custom-built containers, but setting up the build pipeline, registry push, IAM permissions, and service configuration requires coordinating several GCP resources.

What You'll Build

  • Cloud Run services with automatic scaling
  • Public HTTPS endpoints for each service
  • Custom container image built and pushed to registry
  • Memory and concurrency limits configured
  • Pay-per-request pricing with scale-to-zero

Best For

Use this prompt when you need serverless container execution with automatic scaling. Ideal for web applications, REST APIs, microservices, or any containerized workload that needs to scale from zero to handle traffic spikes without managing infrastructure.

Architecture Overview

This deployment demonstrates two approaches to Cloud Run: deploying a pre-built public image and building a custom container from source. The first service acts as a quick validation that Cloud Run, IAM, and networking are configured correctly. The second service shows the full workflow of building a Docker image, pushing it to a container registry, and deploying it as a managed service.

Cloud Run sits between traditional serverless functions and full container orchestration. Unlike Lambda or Cloud Functions, you bring a complete container image, giving you full control over the runtime, dependencies, and language. Unlike Kubernetes, you do not manage nodes, networking, or scaling policies. Cloud Run handles all of that automatically, scaling instances from zero to thousands based on incoming request volume.

Each service gets a dedicated HTTPS endpoint with a Google-managed TLS certificate. IAM policies control who can invoke the service. For public APIs and web applications, granting invoker access to all users makes the endpoint publicly accessible. For internal services, you can restrict access to specific service accounts or authenticated users.

Hello World Service

Uses a public container image to validate the Cloud Run setup without requiring a local build. This confirms that the project, region, IAM permissions, and networking are all configured correctly before you invest time building custom images.
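A minimal Pulumi TypeScript sketch of this validation service, assuming the classic `@pulumi/gcp` provider and a region of your choosing:

```typescript
import * as gcp from "@pulumi/gcp";

// Deploy Google's public hello image; no local build required.
const hello = new gcp.cloudrun.Service("hello", {
    location: "us-central1", // assumed region; substitute your own
    template: {
        spec: {
            containers: [{ image: "gcr.io/cloudrun/hello" }],
        },
    },
});

// The auto-generated HTTPS endpoint for the service.
export const helloUrl = hello.statuses.apply(([status]) => status.url);
```

If `pulumi up` succeeds and the exported URL serves traffic, the project, region, and networking are confirmed before any custom build work begins.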

Custom Application Service

Builds a Docker image from a local directory, pushes it to Google Container Registry, and deploys it as a Cloud Run service. This is the pattern you will use for your own applications, with configurable memory limits and concurrency controls.
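A sketch of the build-push-deploy pattern, assuming `@pulumi/docker` for the image build; the `./app` context path, image name, and limits are illustrative placeholders:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as docker from "@pulumi/docker";

// Build the image from a local ./app directory and push it to GCR.
const image = new docker.Image("app-image", {
    imageName: pulumi.interpolate`gcr.io/${gcp.config.project}/my-app:latest`,
    build: { context: "./app" },
});

// Deploy the freshly pushed image with explicit resource limits.
const app = new gcp.cloudrun.Service("app", {
    location: "us-central1", // assumed region
    template: {
        spec: {
            containers: [{
                image: image.imageName,
                resources: { limits: { memory: "512Mi", cpu: "1" } },
            }],
            containerConcurrency: 80, // max concurrent requests per instance
        },
    },
});

export const appUrl = app.statuses.apply(([status]) => status.url);
```

Because the service references `image.imageName`, Pulumi orders the build and push ahead of the Cloud Run deployment automatically.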

IAM Configuration

Controls access to each service independently. Public services grant invoker permissions broadly, while internal services can restrict access to specific identities for service-to-service communication.
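A hedged sketch of the public-access case, assuming a Cloud Run service named `app` defined elsewhere in the same program:

```typescript
import * as gcp from "@pulumi/gcp";

// Assumes a Cloud Run service `app` declared elsewhere in the program.
declare const app: gcp.cloudrun.Service;

// Grant the invoker role to allUsers, making the endpoint public.
// For internal services, replace "allUsers" with a specific identity,
// e.g. a "serviceAccount:..." member string.
new gcp.cloudrun.IamMember("app-public-invoker", {
    service: app.name,
    location: app.location,
    role: "roles/run.invoker",
    member: "allUsers",
});
```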

Common Customizations

  • Adjust resource limits: Set CPU, memory, and request timeout values to match your application’s resource profile.
  • Use Artifact Registry instead of GCR: Deploy images to Artifact Registry for more granular access control and multi-region replication.
  • Add environment variables and secrets: Configure runtime settings through environment variables and mount secrets from Secret Manager.
  • Set up custom domains: Map a custom domain to your Cloud Run service instead of using the auto-generated URL.
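As one illustration of the customizations above, environment variables, Secret Manager references, and the request timeout all live on the service's template spec. A sketch assuming a Secret Manager secret named `api-key` already exists and the runtime service account can access it:

```typescript
import * as gcp from "@pulumi/gcp";

const service = new gcp.cloudrun.Service("app-with-config", {
    location: "us-central1", // assumed region
    template: {
        spec: {
            containers: [{
                image: "gcr.io/cloudrun/hello", // placeholder image
                envs: [
                    // Plain environment variable.
                    { name: "LOG_LEVEL", value: "info" },
                    // Value resolved from Secret Manager; the secret name
                    // "api-key" is a hypothetical example.
                    {
                        name: "API_KEY",
                        valueFrom: {
                            secretKeyRef: { name: "api-key", key: "latest" },
                        },
                    },
                ],
            }],
            timeoutSeconds: 60, // request timeout customization
        },
    },
});
```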