Deploy a Multi-Container Voting Application with Redis

By Pulumi Team

The Challenge

You need a multi-service application where frontend and backend components can be deployed and scaled independently. This architecture separates concerns and allows each tier to evolve on its own schedule.

What You'll Build

  • Redis cache service running on Fargate behind an internal load balancer
  • Web frontend service running on Fargate with a public endpoint
  • Service-to-service communication through load balancer DNS
  • Independent scaling for each service
  • Frontend URL for accessing the application

Try This Prompt in Pulumi Neo

Run this prompt in Neo to deploy your infrastructure, or edit it to customize.

Best For

Use this prompt when you need a multi-container application with separate services that communicate over the network. Ideal for microservices architectures, applications with caching layers, or when you want to deploy and scale frontend and backend components independently.

Architecture Overview

This architecture splits a voting application into two independently managed Fargate services: a Redis cache that stores vote data, and a web frontend that accepts user votes and displays results. Each service runs behind its own load balancer (an Application Load Balancer for the HTTP frontend; Redis speaks raw TCP, so it needs a Network Load Balancer), which means you can scale, update, and troubleshoot each tier without affecting the other.

The frontend service is built from your local application code, pushed to ECR, and deployed as a Fargate task. It connects to the Redis service by referencing the cache load balancer’s DNS name as an environment variable. This loose coupling through DNS means the frontend does not need to know the IP addresses or number of replicas running behind the cache tier, and the cache can be replaced or scaled without reconfiguring the frontend.

This pattern extends naturally to more complex microservices architectures. Adding additional services follows the same structure: create a Fargate service, place it behind a load balancer, and pass the DNS endpoint to any service that needs to communicate with it.
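The repeated structure described above can be captured in a small helper. This is an illustrative sketch using Pulumi's `@pulumi/awsx` Crosswalk package; the function name and parameters are assumptions, not part of the original prompt.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as awsx from "@pulumi/awsx";

// Illustrative helper capturing the recurring pattern: a Fargate service
// behind a load balancer, returning the stable DNS name that other
// services can consume as an endpoint.
function addService(
    name: string,
    clusterArn: pulumi.Input<string>,
    image: string,
    port: number,
): pulumi.Output<string> {
    const lb = new awsx.lb.ApplicationLoadBalancer(`${name}-lb`);
    new awsx.ecs.FargateService(name, {
        cluster: clusterArn,
        taskDefinitionArgs: {
            container: {
                name,
                image,
                cpu: 256,
                memory: 512,
                portMappings: [{ containerPort: port, targetGroup: lb.defaultTargetGroup }],
            },
        },
    });
    // The DNS name is what dependent services receive, not task IPs.
    return lb.loadBalancer.dnsName;
}
```

Each new tier is then one call to this helper, with its returned endpoint passed along to whichever service depends on it.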

Frontend Service

The frontend Fargate service runs your custom application container, built from a local Dockerfile and stored in ECR. It handles HTTP requests from users, submits votes to the Redis cache, and renders the current vote tally. The frontend load balancer is public-facing, providing the URL that users access.

ECS manages the desired task count and replaces unhealthy tasks automatically. Since the frontend is stateless (all state lives in Redis), you can scale the frontend independently based on traffic without worrying about session consistency.
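A minimal sketch of the frontend tier in Pulumi TypeScript, assuming the application source and Dockerfile live in a local `./frontend` directory; resource names, CPU/memory sizes, and the desired count of 2 are illustrative choices, not requirements.

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as pulumi from "@pulumi/pulumi";

const cluster = new aws.ecs.Cluster("voting-app");

// Build the local Dockerfile (assumed to live in ./frontend) and push it to ECR.
const repo = new awsx.ecr.Repository("frontend-repo", { forceDelete: true });
const image = new awsx.ecr.Image("frontend-image", {
    repositoryUrl: repo.url,
    context: "./frontend",
});

// Public-facing load balancer: its DNS name is the app's URL.
const frontendLb = new awsx.lb.ApplicationLoadBalancer("frontend-lb");

const frontend = new awsx.ecs.FargateService("frontend", {
    cluster: cluster.arn,
    desiredCount: 2, // stateless, so scale on traffic alone
    taskDefinitionArgs: {
        container: {
            name: "frontend",
            image: image.imageUri,
            cpu: 256,
            memory: 512,
            portMappings: [{ containerPort: 80, targetGroup: frontendLb.defaultTargetGroup }],
        },
    },
});

// The URL users visit.
export const frontendUrl = pulumi.interpolate`http://${frontendLb.loadBalancer.dnsName}`;
```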

Redis Cache Service

The Redis service runs as a separate Fargate task using the official Redis container image. It sits behind its own load balancer, which provides a stable DNS endpoint that the frontend uses for connections. This indirection means Redis tasks can be replaced without changing the frontend configuration.

Placing Redis behind a load balancer also makes it possible to run multiple Redis tasks, though plain Redis replication is not transparent to clients (replicas are read-only, and writes must reach the primary), so for most applications a single-task Redis service is the right starting point.
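A sketch of the cache tier, again using `@pulumi/awsx` with illustrative names. Because Redis speaks its own TCP protocol rather than HTTP, the sketch uses a Network Load Balancer with a TCP listener; `internal: true` keeps the endpoint off the public internet.

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

const cluster = new aws.ecs.Cluster("voting-app"); // or reuse the app's existing cluster

// Redis traffic is raw TCP, so it sits behind an internal Network Load Balancer.
const redisLb = new awsx.lb.NetworkLoadBalancer("redis-lb", {
    internal: true,
    defaultTargetGroup: { port: 6379, protocol: "TCP" },
    listener: { port: 6379, protocol: "TCP" },
});

const redis = new awsx.ecs.FargateService("redis", {
    cluster: cluster.arn,
    desiredCount: 1, // a single task is a sensible starting point
    taskDefinitionArgs: {
        container: {
            name: "redis",
            image: "redis:7-alpine", // official image; no custom build needed
            cpu: 256,
            memory: 512,
            portMappings: [{ containerPort: 6379, targetGroup: redisLb.defaultTargetGroup }],
        },
    },
});
```

Replacing or rescheduling the Redis task changes nothing for the frontend, because the frontend only ever holds the load balancer's DNS name.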

Service Communication

The two services communicate through their load balancer DNS names rather than direct task-to-task connections. The frontend receives the Redis endpoint as an environment variable at deployment time. This approach is simpler than service discovery mechanisms and works well for applications with a small number of services.

All traffic between services flows through the VPC’s private network. The Redis load balancer can be configured as internal (not internet-facing) so cache traffic never leaves the AWS network.
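The wiring itself is one environment entry on the frontend container. In this sketch, `REDIS_HOST` and `REDIS_PORT` are assumed variable names (use whatever your application actually reads), and the load balancer is assumed to be defined as shown for the cache tier.

```typescript
import * as awsx from "@pulumi/awsx";

// Internal load balancer in front of the Redis tasks (illustrative).
const redisLb = new awsx.lb.NetworkLoadBalancer("redis-lb", { internal: true });

// Fragment of the frontend task definition: the container learns the cache
// endpoint at deployment time, and Pulumi resolves the DNS name once the
// load balancer exists.
const frontendContainer = {
    name: "frontend",
    image: "voting-frontend:latest", // placeholder; built from local source in practice
    portMappings: [{ containerPort: 80 }],
    environment: [
        // Assumed variable names; match them to what the app code expects.
        { name: "REDIS_HOST", value: redisLb.loadBalancer.dnsName },
        { name: "REDIS_PORT", value: "6379" },
    ],
};
```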

Common Customizations

  • Use ElastiCache instead of self-managed Redis: Replace the Redis Fargate service with an Amazon ElastiCache Redis cluster for managed failover, backups, and replication.
  • Add a database tier: Introduce an RDS instance alongside Redis to persist vote data durably, using Redis purely as a performance cache.
  • Configure internal load balancers: Make the Redis load balancer internal-only so it is not accessible from the public internet, restricting access to other services within the VPC.
  • Add health check endpoints: Configure custom health check paths on each service so the load balancers can verify application-level readiness, not just port availability.
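The last customization, application-level health checks, can be sketched as a target-group override on the frontend load balancer. The `/healthz` path is an assumed endpoint; expose whatever readiness route your application provides.

```typescript
import * as awsx from "@pulumi/awsx";

// Check an application endpoint rather than bare port availability,
// so the load balancer only routes to tasks that report ready.
const frontendLb = new awsx.lb.ApplicationLoadBalancer("frontend-lb", {
    defaultTargetGroup: {
        healthCheck: {
            path: "/healthz",      // assumed readiness endpoint
            interval: 15,          // seconds between checks
            healthyThreshold: 2,   // consecutive passes before routing traffic
            unhealthyThreshold: 3, // consecutive failures before draining
            matcher: "200",        // HTTP status considered healthy
        },
    },
});
```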