Create CI/CD Pipeline Infrastructure

By Pulumi Team

The Challenge

You need to automate the path from code commit to production deployment. Manual deployments are error-prone, slow, and do not scale as teams grow. A CI/CD pipeline ensures every change is built, tested, and deployed consistently, with notifications when something goes wrong.

What You'll Build

  • Source repository for version control
  • Automated build and test service
  • Deployment service targeting application servers
  • Pipeline orchestrating the full workflow
  • Notifications and monitoring for failures

Try This Prompt in Pulumi Neo

Run this prompt in Neo to deploy your infrastructure, or edit it to customize.

Best For

Use this prompt when you need to set up automated deployment infrastructure for an application. This pattern applies whether you are deploying to EC2 instances, containers, or serverless functions. It is especially useful for teams that currently deploy manually and want to move to a continuous delivery model.

Architecture Overview

This architecture provisions the infrastructure for a complete CI/CD pipeline on AWS. A source repository stores application code, a build service compiles the application and runs tests, a deployment service handles rolling updates to target servers, and a pipeline service orchestrates the entire workflow. When a developer pushes code, the pipeline automatically triggers a build, runs tests, and deploys the result.
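The core resources can be sketched with Pulumi's Python SDK. This is a minimal, illustrative sketch assuming the AWS CodeCommit, CodeBuild, and CodeDeploy services as the source, build, and deployment backends; the resource names, IAM role ARN, and build image are placeholders you would replace for your stack.

```python
import pulumi
import pulumi_aws as aws

# Source repository that stores application code and triggers the pipeline.
repo = aws.codecommit.Repository("app-repo", repository_name="my-app")

# Build service: pulls the code, compiles it, and runs the test suite.
build = aws.codebuild.Project(
    "app-build",
    service_role="arn:aws:iam::123456789012:role/codebuild-role",  # placeholder
    source=aws.codebuild.ProjectSourceArgs(
        type="CODECOMMIT",
        location=repo.clone_url_http,
    ),
    artifacts=aws.codebuild.ProjectArtifactsArgs(type="NO_ARTIFACTS"),
    environment=aws.codebuild.ProjectEnvironmentArgs(
        compute_type="BUILD_GENERAL1_SMALL",
        image="aws/codebuild/standard:7.0",
        type="LINUX_CONTAINER",
    ),
)

# Deployment service that will perform rolling updates to target servers.
deploy_app = aws.codedeploy.Application("app-deploy", compute_platform="Server")
```

The pipeline resource that ties these stages together is shown under Pipeline Orchestration below in the article's own terms; a real program would also define the IAM roles and artifact bucket.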

The pipeline model breaks the deployment process into discrete stages, each with its own success criteria. If the build stage fails, the pipeline stops and notifies the team before any deployment happens. If tests fail, the code never reaches production. This staged approach catches problems early and reduces the risk of deploying broken code.
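The staged model reduces to a simple control flow: run stages in order, and stop and notify as soon as one fails. A plain-Python sketch (the stage names and `notify` callback are hypothetical stand-ins, not a real pipeline API):

```python
# Each stage is a (name, fn) pair where fn returns True on success.
# A failure stops the pipeline before any later stage (e.g. deploy) runs.

def run_pipeline(stages, notify):
    """Run stages in order; stop and notify on the first failure."""
    for name, fn in stages:
        if not fn():
            notify(f"stage '{name}' failed; pipeline stopped")
            return False
    return True

alerts = []
deployed = []
result = run_pipeline(
    [
        ("build", lambda: True),
        ("test", lambda: False),               # failing tests...
        ("deploy", lambda: deployed.append("v2") or True),
    ],
    notify=alerts.append,
)
# ...halt the run: the deploy stage never executes, and the team is notified.
```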

Monitoring and notifications close the feedback loop. Build failure notifications reach the team within seconds, and deployment monitoring alerts on error rate spikes or health check failures after a release. This combination of automated testing and post-deployment monitoring gives teams confidence that their pipeline is both fast and safe.
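One way to wire up that feedback loop on AWS is an SNS topic for notifications plus a CloudWatch alarm on post-deployment errors. A hedged Pulumi sketch; the metric name and namespace below assume an Application Load Balancer in front of the fleet and would change for other targets:

```python
import pulumi_aws as aws

# SNS topic that fans out build/deployment failure notifications to the team.
alerts = aws.sns.Topic("pipeline-alerts")

# Alarm on an error-rate spike after a release; fires into the alerts topic.
error_alarm = aws.cloudwatch.MetricAlarm(
    "post-deploy-errors",
    comparison_operator="GreaterThanThreshold",
    evaluation_periods=1,
    metric_name="HTTPCode_Target_5XX_Count",  # assumed ALB error metric
    namespace="AWS/ApplicationELB",           # adjust for your target type
    period=60,
    statistic="Sum",
    threshold=10,
    alarm_actions=[alerts.arn],
)
```

Subscribing the team (email, Slack webhook, on-call tooling) to the topic is then a matter of adding `aws.sns.TopicSubscription` resources.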

Source and Build

The source repository stores application code and triggers the pipeline on new commits. The build service pulls the latest code, installs dependencies, compiles the application, and runs unit and integration tests. Build artifacts are stored for the deployment stage. The build specification defines the steps declaratively, so the build process is version-controlled alongside the application code.
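A declarative build specification might look like the following. The commands are hypothetical (a Node.js app is assumed); in practice this usually lives as `buildspec.yml` at the repository root so it is version-controlled with the application, but it can also be passed inline to the build project:

```python
# CodeBuild buildspec, expressed inline; each phase's commands run in order,
# and a non-zero exit in any command fails the build (and thus the pipeline).
buildspec = """
version: 0.2
phases:
  install:
    commands:
      - npm ci            # install dependencies from the lockfile
  build:
    commands:
      - npm run build     # compile the application
      - npm test          # run unit and integration tests
artifacts:
  files:
    - '**/*'
  base-directory: dist    # store build output for the deployment stage
"""
```

To use it inline, the string would be passed as the `buildspec` argument of `aws.codebuild.ProjectSourceArgs`.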

Deployment

The deployment service manages the process of updating running application instances. It supports rolling deployments that update servers in batches, health checks that verify each batch before proceeding, and automatic rollback if a deployment fails. This prevents a bad deployment from taking down the entire fleet at once.
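The rolling-update logic can be sketched in plain Python to show why a bad release cannot take down the whole fleet: only one batch is at risk at a time, and a failed health check rolls everything back. The `update`, `health_check`, and `rollback` hooks here are hypothetical stand-ins for what the deployment service does:

```python
def rolling_deploy(servers, batch_size, update, health_check, rollback):
    """Update servers batch by batch; roll back everything on a bad batch."""
    done = []
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for s in batch:
            update(s)
        # Verify the batch before touching the next one.
        if not all(health_check(s) for s in batch):
            for s in done + batch:
                rollback(s)  # restore the failed batch and earlier batches
            return False
        done.extend(batch)
    return True
```

With a fleet of four servers and a batch size of two, a health-check failure in the second batch leaves the first batch rolled back as well, returning the fleet to its pre-deployment state.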

Pipeline Orchestration

The pipeline connects source, build, and deployment stages into an automated workflow. Each stage transitions to the next only when the previous stage succeeds. You can add manual approval gates between stages for production deployments, requiring a team member to review and approve before the pipeline proceeds.
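In Pulumi terms, an approval gate is one stage of the pipeline resource. A sketch of just that stage, assuming AWS CodePipeline as the orchestrator; the surrounding source, build, and deploy stages, the pipeline role, and the artifact store are elided:

```python
import pulumi_aws as aws

# A manual approval gate placed between build and the production deploy
# stage: the pipeline pauses here until a team member approves or rejects.
approval_stage = aws.codepipeline.PipelineStageArgs(
    name="Approve",
    actions=[aws.codepipeline.PipelineStageActionArgs(
        name="ManualApproval",
        category="Approval",  # CodePipeline's built-in manual approval action
        owner="AWS",
        provider="Manual",
        version="1",
    )],
)
```

This stage would be inserted into the `stages` list of an `aws.codepipeline.Pipeline` just before the production deployment stage.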

Common Customizations

  • Add a staging environment: Extend the pipeline with a staging deployment stage that runs before production, giving you a pre-production validation step.
  • Use containers instead of servers: Replace the server-based deployment with an ECS or Fargate deployment for containerized applications.
  • Add integration tests: Request a test stage between build and deployment that runs integration tests against a temporary environment.
  • Connect to an external repository: Replace the managed source repository with a connection to GitHub or GitLab if your code lives elsewhere.