The Challenge
You need a multi-service application where frontend and backend components can be deployed and scaled independently. This architecture separates concerns and allows each tier to evolve on its own schedule.
What You'll Build
- Redis cache service running on Fargate behind an internal load balancer
- Web frontend service running on Fargate with a public endpoint
- Service-to-service communication through load balancer DNS
- Independent scaling for each service
- Frontend URL for accessing the application
Try This Prompt in Pulumi Neo
Run this prompt in Neo to deploy your infrastructure, or edit it to customize.
Architecture Overview
This architecture splits a voting application into two independently managed Fargate services: a Redis cache that stores vote data, and a web frontend that accepts user votes and displays results. Each service runs behind its own load balancer (an Application Load Balancer for the HTTP frontend; the Redis cache, which speaks plain TCP, needs a Network Load Balancer), which means you can scale, update, and troubleshoot each tier without affecting the other.
The frontend service is built from your local application code, pushed to ECR, and deployed as a Fargate task. It connects to the Redis service by referencing the cache load balancer’s DNS name as an environment variable. This loose coupling through DNS means the frontend does not need to know the IP addresses or number of replicas running behind the cache tier, and the cache can be replaced or scaled without reconfiguring the frontend.
This pattern extends naturally to more complex microservices architectures. Adding additional services follows the same structure: create a Fargate service, place it behind a load balancer, and pass the DNS endpoint to any service that needs to communicate with it.
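The overall pattern can be sketched with Pulumi's `awsx` library (v1) in TypeScript. This is a minimal sketch, not the exact template behind this prompt: resource names, CPU/memory sizes, the `./frontend` build context, and the `REDIS_HOST`/`REDIS_PORT` variable names are all illustrative assumptions.

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as pulumi from "@pulumi/pulumi";

// One ECS cluster hosts both Fargate services.
const cluster = new aws.ecs.Cluster("voting-cluster");

// Cache tier: Redis speaks plain TCP, so it sits behind an internal
// Network Load Balancer rather than an ALB.
const redisLb = new awsx.lb.NetworkLoadBalancer("redis-lb", {
    internal: true,
    defaultTargetGroupPort: 6379,
});

const redisService = new awsx.ecs.FargateService("redis", {
    cluster: cluster.arn,
    desiredCount: 1,
    taskDefinitionArgs: {
        container: {
            name: "redis",
            image: "redis:7-alpine", // official image; pin the tag you want
            cpu: 256,
            memory: 512,
            portMappings: [{ containerPort: 6379, targetGroup: redisLb.defaultTargetGroup }],
        },
    },
});

// Frontend tier: built from local application code and pushed to ECR.
const repo = new awsx.ecr.Repository("frontend-repo", { forceDelete: true });
const image = new awsx.ecr.Image("frontend-image", {
    repositoryUrl: repo.url,
    context: "./frontend", // directory containing your Dockerfile (assumed)
});

const frontendLb = new awsx.lb.ApplicationLoadBalancer("frontend-lb");

const frontendService = new awsx.ecs.FargateService("frontend", {
    cluster: cluster.arn,
    desiredCount: 2, // scale this tier independently of the cache
    taskDefinitionArgs: {
        container: {
            name: "frontend",
            image: image.imageUri,
            cpu: 256,
            memory: 512,
            portMappings: [{ containerPort: 80, targetGroup: frontendLb.defaultTargetGroup }],
            environment: [
                // Loose coupling: the frontend only learns the cache LB's DNS name.
                { name: "REDIS_HOST", value: redisLb.loadBalancer.dnsName },
                { name: "REDIS_PORT", value: "6379" },
            ],
        },
    },
});

// The public URL users access.
export const frontendUrl = pulumi.interpolate`http://${frontendLb.loadBalancer.dnsName}`;
```

Adding a third service follows the same shape: another `FargateService`, another load balancer, and one more `environment` entry wherever its endpoint is needed.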
Frontend Service
The frontend Fargate service runs your custom application container, built from a local Dockerfile and stored in ECR. It handles HTTP requests from users, submits votes to the Redis cache, and renders the current vote tally. The frontend load balancer is public-facing, providing the URL that users access.
ECS manages the desired task count and replaces unhealthy tasks automatically. Since the frontend is stateless (all state lives in Redis), you can scale the frontend independently based on traffic without worrying about session consistency.
Redis Cache Service
The Redis service runs as a separate Fargate task using the official Redis container image. It sits behind its own load balancer, which provides a stable DNS endpoint that the frontend uses for connections. This indirection means Redis tasks can be replaced without changing the frontend configuration.
Placing Redis behind a load balancer leaves room to grow the cache tier later, but note that Redis replicas are read-only, so spreading connections across replicas only helps read traffic and requires care with writes. For most applications, a single-task Redis service is sufficient as a starting point.
Service Communication
The two services communicate through their load balancer DNS names rather than direct task-to-task connections. The frontend receives the Redis endpoint as an environment variable at deployment time. This approach is simpler than service discovery mechanisms and works well for applications with a small number of services.
All traffic between services flows through the VPC’s private network. The Redis load balancer can be configured as internal (not internet-facing) so cache traffic never leaves the AWS network.
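On the application side, the contract between the services is just those environment variables. A minimal sketch of how the frontend could consume them (the `REDIS_HOST`/`REDIS_PORT` names and the `redisUrl` helper are assumptions for illustration, matching whatever your task definition injects):

```typescript
// Build a connection URL from the environment the task definition injects.
// redisUrl is a hypothetical helper, not a library API.
function redisUrl(host: string, port: number = 6379): string {
    return `redis://${host}:${port}`;
}

// Fall back to localhost for local development outside Fargate.
const host = process.env.REDIS_HOST ?? "localhost";
const port = Number(process.env.REDIS_PORT ?? "6379");

// A Redis client (e.g. ioredis or node-redis) would be pointed at this URL.
console.log(redisUrl(host, port));
```

Because the frontend resolves the load balancer's DNS name on each connection, cache tasks can come and go without any redeployment of the frontend.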
Common Customizations
- Use ElastiCache instead of self-managed Redis: Replace the Redis Fargate service with an Amazon ElastiCache Redis cluster for managed failover, backups, and replication.
- Add a database tier: Introduce an RDS instance alongside Redis to persist vote data durably, using Redis purely as a performance cache.
- Configure internal load balancers: Make the Redis load balancer internal-only so it is not accessible from the public internet, restricting access to other services within the VPC.
- Add health check endpoints: Configure custom health check paths on each service so the load balancers can verify application-level readiness, not just port availability.
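For the health-check customization, a sketch of an ALB whose target group checks an application-level route, again assuming `awsx` v1 in TypeScript; the `/health` path is a hypothetical readiness endpoint your application would need to expose:

```typescript
import * as awsx from "@pulumi/awsx";

// Verify application readiness on a real route, not just TCP reachability.
const frontendLb = new awsx.lb.ApplicationLoadBalancer("frontend-lb", {
    defaultTargetGroup: {
        healthCheck: {
            path: "/health",      // hypothetical readiness endpoint
            interval: 15,         // seconds between checks
            healthyThreshold: 2,  // consecutive passes before "healthy"
            unhealthyThreshold: 3,
            matcher: "200",       // expect HTTP 200
        },
    },
});
```

With this in place, the load balancer drains and replaces tasks whose application has stopped responding, even if the container port is still open.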
Related Prompts
Deploy a Multi-Cloud Application
You need to run an application across multiple cloud providers so that a regional outage or provider-level incident does …
Deploy Containers to AWS Fargate
You need to run a containerized application in production without managing servers. Fargate provides serverless …
Deploy a Scalable Fargate Service with Multiple Replicas
You need high availability and redundancy for a containerized application. Running multiple replicas ensures your …
Deploy a Containerized Application on Fargate with Load Balancing
You need to deploy a containerized application without managing servers. Fargate provides serverless container execution …