How do I deploy a Node.js Express server on AWS Fargate with a load balancer?
In this guide, we’ll walk through how to deploy a Node.js Express server on AWS Fargate and set it up with a load balancer. AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and EKS. Fargate allows you to focus on designing and building your applications instead of managing the infrastructure that runs them.
Here’s a step-by-step outline of what we’ll achieve:
- Create an ECS Cluster to run the services.
- Define a Task Definition for the Node.js Express server container.
- Create an ECS Service to maintain the desired number of running copies of the task.
- Set up an Application Load Balancer to distribute traffic to the ECS service’s tasks.
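Before getting to the infrastructure, it helps to see roughly what the container is expected to run. Below is a minimal Express server; it is purely illustrative (this guide's task definition assumes the file is named app.js and that it listens on port 80 to match containerPort):

```typescript
// Minimal Express server (illustrative only; not part of the Pulumi program).
// The task definition below runs "node app.js", so this would be compiled or
// copied into the container image as app.js.
import express from "express";

const app = express();

app.get("/", (_req, res) => {
    res.send("Hello from Fargate!");
});

// The port must match containerPort in the ECS task definition (80 in this guide).
app.listen(80, () => {
    console.log("Express server listening on port 80");
});
```

With that context, here is the full Pulumi program that creates the cluster, task definition, service, and load balancer: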
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
const ecsCluster = new aws.ecs.Cluster("ecs_cluster", {name: "my-ecs-cluster"});
const ecsTaskExecutionRole = new aws.iam.Role("ecs_task_execution_role", {
name: "ecsTaskExecutionRole",
assumeRolePolicy: JSON.stringify({
Version: "2012-10-17",
Statement: [{
Action: "sts:AssumeRole",
Principal: {
Service: "ecs-tasks.amazonaws.com",
},
Effect: "Allow",
Sid: "",
}],
}),
managedPolicyArns: ["arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"],
});
const task = new aws.ecs.TaskDefinition("task", {
family: "my-task-family",
cpu: "256",
memory: "512",
networkMode: "awsvpc",
requiresCompatibilities: ["FARGATE"],
executionRoleArn: ecsTaskExecutionRole.arn,
containerDefinitions: JSON.stringify([{
name: "my-app",
image: "node:14",
essential: true,
portMappings: [{
containerPort: 80,
hostPort: 80,
}],
command: [
"node",
"app.js",
],
}]),
});
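// NOTE: a real deployment would usually also set a logConfiguration on the container
// (for example the awslogs driver) so stdout/stderr from the Express app ends up in
// CloudWatch Logs.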
const appTg = new aws.lb.TargetGroup("app_tg", {
    name: "app-tg",
    port: 80,
    protocol: "HTTP",
    // Fargate tasks use the awsvpc network mode and register by IP address,
    // so the target group must use the "ip" target type.
    targetType: "ip",
    vpcId: "vpc-xxxxxxxx", // replace with your VPC ID
});
const elbSecurityGroup = new aws.ec2.SecurityGroup("elb_security_group", {
    name: "elb_security_group",
    description: "ELB security group",
    vpcId: "vpc-xxxxxxxx", // must be the same VPC as the subnets and target group
ingress: [{
fromPort: 80,
toPort: 80,
protocol: "tcp",
cidrBlocks: ["0.0.0.0/0"],
}],
egress: [{
fromPort: 0,
toPort: 0,
protocol: "-1",
cidrBlocks: ["0.0.0.0/0"],
}],
});
const appLb = new aws.lb.LoadBalancer("app_lb", {
name: "app-lb",
internal: false,
loadBalancerType: "application",
securityGroups: [elbSecurityGroup.id],
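    // Replace with at least two public subnet IDs in different Availability Zones;
    // an Application Load Balancer requires subnets in at least two AZs.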
subnets: [
"subnet-xxxxxxxx",
"subnet-xxxxxxxx",
],
});
const frontendListener = new aws.lb.Listener("frontend_listener", {
loadBalancerArn: appLb.arn,
port: 80,
protocol: "HTTP",
defaultActions: [{
type: "forward",
targetGroupArn: appTg.arn,
}],
});
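// For production traffic you would typically also add an HTTPS (443) listener with an
// ACM certificate and redirect HTTP to it; plain HTTP is used here to keep the example small.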
const ecsSecurityGroup = new aws.ec2.SecurityGroup("ecs_security_group", {
    name: "ecs_security_group",
    description: "Allow inbound traffic to the Fargate tasks",
    vpcId: "vpc-xxxxxxxx", // same VPC as the load balancer
    // NOTE: for a tighter setup, allow ingress only from the ALB's security group
    // (securityGroups: [elbSecurityGroup.id]) instead of 0.0.0.0/0.
ingress: [{
fromPort: 80,
toPort: 80,
protocol: "tcp",
cidrBlocks: ["0.0.0.0/0"],
}],
egress: [{
fromPort: 0,
toPort: 0,
protocol: "-1",
cidrBlocks: ["0.0.0.0/0"],
}],
});
const ecsService = new aws.ecs.Service("ecs_service", {
    name: "my-app-service",
    cluster: ecsCluster.id,
    taskDefinition: task.arn,
    desiredCount: 1,
    launchType: "FARGATE", // without this, the service defaults to the EC2 launch type
networkConfiguration: {
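        // Replace with the subnet IDs the tasks should run in. With assignPublicIp: true,
        // these should be public subnets so the tasks can reach the internet to pull the image.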
subnets: [
"subnet-xxxxxxxx",
"subnet-xxxxxxxx",
],
securityGroups: [ecsSecurityGroup.id],
assignPublicIp: true,
},
loadBalancers: [{
targetGroupArn: appTg.arn,
containerName: "my-app",
containerPort: 80,
}],
}, {
dependsOn: [frontendListener],
});
export const ecsClusterName = ecsCluster.name;
export const ecsServiceName = ecsService.name;
export const loadBalancerDnsName = appLb.dnsName;
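Once `pulumi up` completes, the exports above become stack outputs, so you can check the deployment with, for example, `curl http://$(pulumi stack output loadBalancerDnsName)` (it can take a minute or two for the tasks to register as healthy behind the load balancer).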
In this setup:
- We created an ECS Cluster to orchestrate the containerized service.
- Defined a Task Definition that specifies the Docker container to run, along with its CPU, memory, and port settings.
- Created an ECS Service that keeps the desired number of tasks running on Fargate.
- Configured an Application Load Balancer to distribute web traffic across those tasks in a highly available manner.
Key Points:
- ECS Cluster: Central place to manage our containerized services.
- Task Definition: Details the container image and necessary configurations.
- ECS Service: Ensures the desired number of tasks are always running.
- Load Balancer: Distributes incoming traffic to available service tasks.
By the end of this, you should have a resilient and scalable Node.js Express server running on AWS Fargate with traffic managed by an Application Load Balancer.
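The desiredCount above is fixed at 1. If you want the service to scale with load, Application Auto Scaling can be attached to the ECS service. The sketch below is a minimal, illustrative extension of the program above (the CPU target and capacity bounds are assumptions, not values from this guide):

```typescript
// Hypothetical extension: scale the service between 1 and 4 tasks based on average CPU.
const scalingTarget = new aws.appautoscaling.Target("ecs_scaling_target", {
    minCapacity: 1,
    maxCapacity: 4,
    resourceId: pulumi.interpolate`service/${ecsCluster.name}/${ecsService.name}`,
    scalableDimension: "ecs:service:DesiredCount",
    serviceNamespace: "ecs",
});

const cpuScalingPolicy = new aws.appautoscaling.Policy("ecs_cpu_scaling_policy", {
    policyType: "TargetTrackingScaling",
    resourceId: scalingTarget.resourceId,
    scalableDimension: scalingTarget.scalableDimension,
    serviceNamespace: scalingTarget.serviceNamespace,
    targetTrackingScalingPolicyConfiguration: {
        predefinedMetricSpecification: {
            predefinedMetricType: "ECSServiceAverageCPUUtilization",
        },
        targetValue: 50, // aim for roughly 50% average CPU across tasks
    },
});
```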