In this guide, we'll walk through deploying a Task Manager application with Docker Swarm across multiple AWS EC2 instances. The setup demonstrates container orchestration in a production-style environment using Docker Swarm's built-in features for high availability and load balancing.
Prerequisites
AWS Account with EC2 access
Basic knowledge of Docker and AWS EC2
Docker image: slayerop15/task-manager:latest
3 EC2 instances (t2.micro or t2.small)
Step 1: Setting Up EC2 Instances
First, create three EC2 instances in AWS:
1 Manager Node
2 Worker Nodes
All instances should run Ubuntu Server and sit in the same VPC, with a security group that allows inbound traffic on the following ports (an AWS CLI sketch follows the list):
Port 22 (TCP, SSH)
Port 2377 (TCP, Swarm cluster management)
Port 7946 (TCP and UDP, node-to-node communication)
Port 4789 (UDP, overlay network traffic)
Port 5000 (TCP, application)
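If you prefer the AWS CLI over the console, the rules can be added roughly as follows. This is a sketch: the security group ID and VPC CIDR are placeholders you would replace with your own values, and you may want to restrict SSH to your own IP.
# Hypothetical values -- replace with your own security group ID and VPC CIDR
SG_ID=sg-0123456789abcdef0
VPC_CIDR=10.0.0.0/16
# SSH (tighten the CIDR to your own IP in practice)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0
# Swarm management and node communication, restricted to the VPC
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 2377 --cidr "$VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 7946 --cidr "$VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol udp --port 7946 --cidr "$VPC_CIDR"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol udp --port 4789 --cidr "$VPC_CIDR"
# Application port, open to the internet
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 5000 --cidr 0.0.0.0/0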
Step 2: Installing Docker
On all three instances, install Docker by running the following script:
#!/bin/bash
# Update system packages
sudo apt-get update
sudo apt-get upgrade -y
# Install required packages
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common \
git
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update package database with Docker packages
sudo apt-get update
# Install Docker
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# Start and enable Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Add ubuntu user to docker group
sudo usermod -aG docker ubuntu
# Install Docker Compose (v2 release assets use lowercase OS names)
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Verify Docker installation
sudo docker --version > /home/ubuntu/docker_version.txt
sudo docker-compose --version >> /home/ubuntu/docker_version.txt
# Enable Docker system service to start on boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
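One way to run this script on every instance is to copy it over with scp and execute it via ssh. The script name, key file, and IP addresses below are placeholders; substitute your own.
# Hypothetical file names and IPs -- substitute your own
for HOST in <MANAGER-IP> <WORKER1-IP> <WORKER2-IP>; do
  scp -i my-key.pem install-docker.sh ubuntu@$HOST:/home/ubuntu/
  ssh -i my-key.pem ubuntu@$HOST "bash /home/ubuntu/install-docker.sh"
done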
Verify the Docker installation on each instance:
docker --version
docker-compose --version
systemctl status docker
Step 3: Initializing the Swarm
On the manager node, initialize Docker Swarm:
sudo docker swarm init
This command outputs a join command containing a token, which looks like:
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
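If the manager instance has more than one network interface, Docker will ask you to pick an address explicitly. In that case, pass the instance's private IP when initializing:
sudo docker swarm init --advertise-addr <MANAGER-PRIVATE-IP>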
Step 4: Joining Worker Nodes
Copy the join token command from the manager node and run it on both worker nodes:
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
You should see the message: "This node joined a swarm as a worker."
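If you lose the join command, you can print it again at any time on the manager node:
docker swarm join-token worker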
Step 5: Verifying the Swarm Setup
On the manager node, verify that all nodes are connected:
docker node ls
You should see three nodes listed:
One manager (Leader)
Two workers
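For more detail on any individual node, for example to confirm its availability and role, you can inspect it from the manager:
docker node inspect --pretty <NODE-HOSTNAME-OR-ID>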
Step 6: Creating the Network and Deploying Services
On the manager node:
- Create an overlay network:
docker network create -d overlay taskmanager-network
- Deploy MongoDB service:
docker service create \
--name mongodb \
--network taskmanager-network \
--constraint 'node.role==manager' \
--mount type=volume,source=mongodb_data,target=/data/db \
mongo:latest
- Deploy the Task Manager application:
docker service create \
--name taskmanager \
--network taskmanager-network \
--replicas 3 \
--publish published=5000,target=5000 \
--env MONGO_URI=mongodb://mongodb:27017/taskmanager \
slayerop15/task-manager:latest
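If you prefer to keep the deployment in version control, the same two services can be described in a Compose file and deployed with docker stack deploy. The sketch below assumes the same image, network, volume, and environment variable used above; note that stack deployment prefixes service names with the stack name (for example taskmanager_mongodb).
# taskmanager-stack.yml
version: "3.8"

services:
  mongodb:
    image: mongo:latest
    volumes:
      - mongodb_data:/data/db
    networks:
      - taskmanager-network
    deploy:
      placement:
        constraints:
          - node.role == manager

  taskmanager:
    image: slayerop15/task-manager:latest
    ports:
      - "5000:5000"
    environment:
      - MONGO_URI=mongodb://mongodb:27017/taskmanager
    networks:
      - taskmanager-network
    deploy:
      replicas: 3

networks:
  taskmanager-network:
    driver: overlay

volumes:
  mongodb_data:

Deploy it from the manager node with:
docker stack deploy -c taskmanager-stack.yml taskmanager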
Step 7: Verifying the Deployment
Check the services:
docker service ls
You should see both services running:
mongodb (1/1 replicas)
taskmanager (3/3 replicas)
Step 8: Accessing the Application
The Task Manager application will be accessible through any of the node's public IP addresses on port 5000:
http://<NODE-PUBLIC-IP>:5000
You can reach the application through any of the three EC2 instances' public IPs, because Docker Swarm's routing mesh load-balances incoming requests across the replicas automatically.
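A quick way to confirm the routing mesh from your local machine is to request the app through each node's public IP in turn; the IPs below are placeholders. Each request should return a success status regardless of which node actually runs the container that serves it.
for IP in <MANAGER-PUBLIC-IP> <WORKER1-PUBLIC-IP> <WORKER2-PUBLIC-IP>; do
  curl -s -o /dev/null -w "%{http_code}\n" http://$IP:5000
done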
Monitoring and Management
Some useful commands for monitoring your deployment:
# Check service status
docker service ps taskmanager
# View service logs
docker service logs taskmanager
# Scale the service
docker service scale taskmanager=5
# Update the service
docker service update --image slayerop15/task-manager:latest taskmanager
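For more controlled rollouts, you can tell Swarm to update replicas in small batches with a pause between them; the parallelism and delay values below are illustrative.
docker service update \
  --update-parallelism 1 \
  --update-delay 10s \
  --image slayerop15/task-manager:latest \
  taskmanager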
Conclusion
You now have a fully functional Task Manager application running in a Docker Swarm cluster with:
High availability through multiple replicas
Load balancing across nodes
Automatic failover
Easy scaling capabilities
The deployment can be scaled or updated with rolling updates and minimal downtime. Docker Swarm handles the orchestration, keeping your application available even if some containers or nodes fail.
Troubleshooting
If you encounter any issues:
Check service logs:
docker service logs <service-name>
Verify network connectivity:
docker network inspect taskmanager-network
Ensure all nodes are healthy:
docker node ls
Check container distribution:
docker service ps taskmanager
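If nodes fail to join or tasks stay pending, the cause is often a blocked port. From a worker you can probe the manager's Swarm management port directly (netcat shown here, assuming it is installed on the instance):
nc -zv <MANAGER-IP> 2377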