From EC2 to AWS Managed Services: Migrating to AWS RDS with DMS
Introduction
Last week, I successfully deployed the MVP using a simple but effective setup: a custom AMI with everything bundled into a single EC2 instance. Building an MVP is just the beginning of a startup's journey; as the user base grows, the initial infrastructure needs to scale to meet increased demand.
This week, I undertook a major infrastructure upgrade to keep the platform fast, reliable, and cost-efficient as it scales:
✅ Migrated the database from EC2 to Amazon RDS using AWS DMS
✅ Moved image storage to S3 with CloudFront for global delivery
✅ Secured configurations using SSM Parameter Store
✅ Updated the app to leverage these AWS services seamlessly
Let’s dive into the details!
Why Change the Architecture? Preparing for Growth
The initial MVP was built for speed, but scaling requires a more robust foundation. Here’s why the old setup wouldn’t work long-term:
1. More Users, More Problems: a single EC2 instance running the app, the database, and media storage becomes a bottleneck as traffic grows.
2. Data Everywhere, Consistency Nowhere: the database and user uploads lived on the instance's disk, so replacing the instance risked losing data.
3. Security & Maintenance Headaches: secrets stored in files on the instance and manual patching don't scale safely.
4. Cost Efficiency at Scale: managed services offload undifferentiated maintenance work and let each tier scale (and be billed) independently.
The Bottom Line: The new architecture isn’t just about fixing today’s issues — it’s about enabling tomorrow’s success without reengineering everything later.
Part 1: Database Migration to Amazon RDS
Why Move from EC2 to RDS?
The initial PostgreSQL-on-EC2 setup had critical flaws: the data lived on the instance's disk and would not survive instance replacement, backups and failover were manual, and the database competed with the application for CPU and memory on the same box.
Why DMS Was the Right Choice
I used AWS Database Migration Service (DMS) because it provides a managed, repeatable way to copy data from the EC2-hosted PostgreSQL source to the RDS target, with a full-load migration that keeps downtime to a minimum.
Step-by-Step Migration with AWS DMS
1. Provisioned the RDS Instance with Terraform
# Configured RDS instance in Terraform
resource "aws_db_instance" "mvp" {
  identifier             = "mvp-database"
  engine                 = "postgres" # RDS engine name is "postgres", not "postgresql"
  engine_version         = "16.3"
  instance_class         = "db.t3.micro" # Free-tier eligible
  allocated_storage      = 20
  db_name                = "mvp"
  username               = var.db_username # Sensitive variables from TF Cloud
  password               = var.db_password
  parameter_group_name   = aws_db_parameter_group.mvp.name
  skip_final_snapshot    = true
  publicly_accessible    = false
  vpc_security_group_ids = [aws_security_group.rds.id]
  db_subnet_group_name   = aws_db_subnet_group.mvp.name
}

# Custom parameter group to disable SSL enforcement
resource "aws_db_parameter_group" "mvp" {
  name   = "mvp-pg-ssl-disabled"
  family = "postgres16"

  parameter {
    name  = "rds.force_ssl"
    value = "0" # Critical for DMS compatibility
  }
}
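The RDS resource above references a security group and a DB subnet group that aren't shown in the post. A minimal sketch of those supporting resources might look like the following; the subnet names, VPC reference, and the app security group are my assumptions, not the original configuration:

```hcl
# DB subnet group spanning two private subnets (subnet names assumed)
resource "aws_db_subnet_group" "mvp" {
  name       = "mvp-db-subnets"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}

# Allow PostgreSQL traffic only from the app's security group (name assumed)
resource "aws_security_group" "rds" {
  name   = "mvp-rds-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }
}
```

Restricting ingress to the app's security group (rather than a CIDR range) keeps the database reachable from application instances even as they are replaced.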
2. Configured AWS Database Migration Service (DMS) with Terraform
# DMS Replication Instance
resource "aws_dms_replication_instance" "mvp" {
  replication_instance_class = "dms.t3.micro"
  replication_instance_id    = "mvp-migration-instance"
  allocated_storage          = 20 # Minimum storage for Free Tier
  vpc_security_group_ids     = [aws_security_group.dms.id]

  # Required tags for cost tracking
  tags = {
    Environment = "Migration"
  }
}

# Source Endpoint (EC2 PostgreSQL)
resource "aws_dms_endpoint" "source" {
  endpoint_id   = "ec2-postgres-source"
  endpoint_type = "source"
  engine_name   = "postgres"
  server_name   = aws_instance.app.private_ip
  port          = 5432
  username      = var.db_username
  password      = var.db_password
  database_name = "mvp"
}

# Target Endpoint (RDS)
resource "aws_dms_endpoint" "target" {
  endpoint_id   = "rds-postgres-target"
  endpoint_type = "target"
  engine_name   = "postgres"
  server_name   = aws_db_instance.mvp.address
  port          = 5432
  username      = var.db_username
  password      = var.db_password
  database_name = "mvp"
}

# Replication Task
resource "aws_dms_replication_task" "migration" {
  migration_type           = "full-load"
  replication_task_id      = "mvp-full-migration"
  replication_instance_arn = aws_dms_replication_instance.mvp.replication_instance_arn
  source_endpoint_arn      = aws_dms_endpoint.source.endpoint_arn
  target_endpoint_arn      = aws_dms_endpoint.target.endpoint_arn
  table_mappings           = file("${path.module}/table-mappings.json")
}
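The replication task reads its table selection rules from table-mappings.json. The original file isn't shown in the post, but a minimal mapping that migrates every table in the public schema would follow DMS's standard rule format:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-public-schema",
      "object-locator": {
        "schema-name": "public",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```

The "%" wildcard matches all tables; narrower patterns (or additional exclude rules) can limit the migration to specific tables.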
3. Switched the Application to RDS
# settings.py -- the ${...} placeholders are not valid Python at runtime;
# they are substituted before deploy (e.g. by rendering this file as a Terraform template)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': '${aws_db_instance.mvp.address}',  # Terraform reference
        'NAME': 'mvp',
        'USER': '${var.db_username}',
        'PASSWORD': '${var.db_password}',
    }
}
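An alternative to rendering settings.py as a template is to read the connection details from environment variables at runtime. A sketch, where the variable names (DB_HOST, DB_USER, and so on) are my own convention rather than something from the original setup:

```python
import os


def database_config():
    """Build Django's DATABASES['default'] entry from environment variables.

    The variable names here are an assumed convention; the actual
    deployment may inject values differently (e.g. from SSM Parameter Store).
    """
    return {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": os.environ["DB_HOST"],             # RDS endpoint address
        "PORT": os.environ.get("DB_PORT", "5432"), # default PostgreSQL port
        "NAME": os.environ.get("DB_NAME", "mvp"),
        "USER": os.environ["DB_USER"],
        "PASSWORD": os.environ["DB_PASSWORD"],
    }
```

This keeps settings.py free of secrets and means a new RDS endpoint only requires an environment change, not a code change.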
Result: A managed, scalable database that survives instance replacements!
Part 2: Image Storage on S3 + CloudFront
Why S3 Over EBS?
Implementation Steps:
1. Created an S3 bucket for media uploads and a CloudFront distribution in front of it.
2. Enabled Origin Access Control so that only CloudFront can access the S3 bucket.
3. Updated EC2 IAM Role to grant permission for S3 operations (put, get, delete, list).
4. Copied existing images from EC2 to S3 using the AWS CLI:
aws s3 cp /opt/app/media/user_image/ s3://my-bucket/media/user_image/ --recursive
5. Verified image accessibility via CloudFront.
Result: Now, all user-uploaded images are stored securely in S3 and served efficiently through CloudFront, improving performance and reducing costs.
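On the application side, Django projects commonly point uploads at S3 via the django-storages package. A hedged sketch of the relevant settings, where the bucket name and CloudFront domain are placeholders and the exact package choice is my assumption:

```python
# settings.py additions (requires the django-storages and boto3 packages)
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-bucket"                  # placeholder bucket name
AWS_S3_CUSTOM_DOMAIN = "dxxxxxxxxxxxx.cloudfront.net"  # CloudFront distribution domain
AWS_LOCATION = "media"                                 # matches the media/ prefix used above
AWS_DEFAULT_ACL = None                                 # rely on OAC/bucket policy, not object ACLs
```

With AWS_S3_CUSTOM_DOMAIN set, generated media URLs point at CloudFront instead of S3 directly, so the Origin Access Control restriction stays effective.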
Part 3: Secure Configuration with SSM Parameter Store
Why Ditch secrets.sh?
How I Fixed It
1. Stored secrets as SecureString parameters in SSM Parameter Store.
2. Updated the EC2 instance profile to allow reading these parameters.
3. Updated GitHub Actions to remove secret-file generation.
Result: No more secrets in code, and changes happen instantly across all instances.
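At startup, the app can pull its configuration from Parameter Store and map parameter names to settings keys. A small, testable sketch of that mapping step; the /mvp/ path prefix and the fetch call mentioned in the docstring are assumptions about the setup:

```python
def params_to_config(response, prefix="/mvp/"):
    """Turn an SSM GetParametersByPath-style response into env-style keys.

    e.g. a parameter named /mvp/db_password becomes DB_PASSWORD.
    The fetch itself would be boto3.client("ssm").get_parameters_by_path(
    Path="/mvp/", WithDecryption=True); it is omitted here to keep the
    sketch self-contained and runnable without AWS credentials.
    """
    return {
        p["Name"][len(prefix):].upper(): p["Value"]
        for p in response.get("Parameters", [])
        if p["Name"].startswith(prefix)
    }
```

Because instances read parameters at startup, rotating a secret in Parameter Store takes effect on the next restart with no code deploy.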
Part 4: Deploying the New Application Version
With the new infrastructure in place, I deployed an updated version of the application.
Result: Everything worked smoothly, confirming a successful migration! 🎉
Challenges & Solutions
1. Server Error (500) During User Signup
# After updating the database configuration, re-ran migrations and restarted services
python manage.py makemigrations
python manage.py migrate
sudo systemctl restart gunicorn
sudo systemctl restart nginx
2. Critical File Location Issues: the application files had to match the layout the services expected:
/opt/app/
├── cloudtalents/
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── manage.py
├── requirements.txt
└── venv/
3. Database Migration Hurdles
# pg_hba.conf: added the DMS replication instance's IP to the allowlist
host    all    all    10.10.10.215/32    md5
sudo systemctl restart postgresql
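Besides pg_hba.conf, the source database must also listen on the interface DMS connects to; if PostgreSQL only listens on localhost, the source endpoint connection test fails. The relevant postgresql.conf line (binding to all interfaces here for simplicity; binding to just the private IP is tighter):

```ini
# postgresql.conf: accept connections beyond localhost so DMS can reach the instance
listen_addresses = '*'
```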
4. Data Migration Failures
Key Lessons Learned
Final Thoughts
By making these improvements, the MVP is now more scalable, cost-effective, and secure:
✅ Database on Amazon RDS — No more data loss or performance bottlenecks.
✅ Images stored in Amazon S3 and served via CloudFront — Faster and cheaper image delivery.
✅ Configurations managed in SSM Parameter Store — Enhanced security and scalability.
✅ Updated application deployment pipeline — More streamlined and automated.
This upgrade ensures that the MVP is well-prepared for growth while keeping costs under control. Exciting times ahead! 🚀