Written by Maxime Roth Fessler, DevOps & Backend Developer at TrackIt

Heroku is widely appreciated for its simplicity, fast deployments, and developer-friendly experience, making it an ideal platform for early-stage applications. It handles much of the operational complexity, allowing teams to focus on shipping features quickly.

However, as applications scale, the need for fine-grained infrastructure control, enhanced observability, and cost-efficiency becomes more pronounced. Heroku’s abstractions can limit flexibility, and its pricing model may become less sustainable at scale.

This guide outlines a real-world migration from Heroku to AWS using Terraform. The transition focuses on gaining control, scalability, and long-term efficiency by replatforming a monolithic Node.js application. AWS services such as Amazon RDS and ElastiCache are configured as drop-in replacements, with job queues replicated, secrets managed securely, and the application deployed through Elastic Beanstalk using Docker—all defined via infrastructure as code (IaC).

The sections below offer a step-by-step framework for teams seeking to move beyond Heroku and take greater ownership of their infrastructure, without the need to re-architect the application from the ground up.

Auditing the Existing Architecture

Before initiating the migration, a comprehensive audit of the Heroku environment was conducted. This included identifying all key components: dynos, add-ons, environment variables, and deployment workflows. The application in question was a monolithic Node.js service, running a web process and a background worker as defined in a Procfile (a text file used by Heroku to declare process types and commands). Heroku Postgres was used for relational data, and Redis (via BullMQ) managed job queues. Logging was handled through Papertrail, while SendGrid provided email delivery. Static assets were already served via Amazon S3 behind a CloudFront distribution.

All configuration details, service dependencies, and CI/CD workflows were thoroughly documented. This audit provided a clear blueprint for replicating the architecture on AWS and highlighted areas requiring special attention, such as secrets management and background job processing.

Migrating to AWS Equivalents

With a clear understanding of the existing architecture, each Heroku service was mapped to its AWS counterpart as follows:

  • Heroku Postgres was replaced with Amazon RDS for PostgreSQL, providing enhanced control over performance tuning, storage, and backup policies.
  • Heroku Redis queues were migrated to Amazon ElastiCache using a managed Redis cluster, offering improved throughput and secure private network access.
  • Heroku dynos running the Node.js application were containerized and deployed using AWS Elastic Beanstalk with a custom Dockerfile, enabling reuse of the build pipeline and greater flexibility over instance types and scaling policies.
  • Logging via Papertrail was transitioned to Amazon CloudWatch Logs, centralizing log data and enabling metric extraction and alerting.

This direct service mapping simplified the migration process and minimized the need for major application changes.

Infrastructure Setup

The AWS infrastructure was defined and provisioned using Terraform. The foundation consisted of a custom VPC with both public and private subnets, isolating sensitive services such as the PostgreSQL database and Redis cache within private subnets. Tightly scoped security rules allowed internal traffic between Elastic Beanstalk, RDS, and ElastiCache, while blocking external access to the database and Redis instances. 

The Elastic Beanstalk environment was configured to run the Docker-based application inside this VPC, ensuring secure access to backend services. For debugging and occasional access to private subnets, a lightweight Amazon EC2 instance was deployed as a bastion host in a public subnet. This instance acts as a secure jump point to reach internal resources like the database or Redis when needed, without exposing them to the public internet.
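As an illustration, the database security group followed a pattern similar to the sketch below. Resource names (db_sg, app_sg, aws_vpc.main) are placeholders rather than the exact identifiers used in the repository; the key point is that only traffic from the application's security group is allowed to reach PostgreSQL.

# Security group for the RDS instance: only the application's security group
# may reach PostgreSQL; no inbound traffic is allowed from the public internet.
resource "aws_security_group" "db_sg" {
  name   = "app-db-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description     = "PostgreSQL from the Elastic Beanstalk instances"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

An equivalent ingress rule on the ElastiCache security group restricts Redis traffic (port 6379) to the same application security group.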

Deploying the infrastructure with Terraform requires the AWS CLI to be installed and configured with appropriate credentials, along with Terraform itself. For application deployment using Elastic Beanstalk, the EB CLI should also be installed. After setting up these tools, running the terraform init command initializes the working directory and downloads the required providers and modules. Readers can consult the following GitHub repository containing the complete Terraform configuration to facilitate replication of this setup.

Data Migration Strategy

The data migration from the PostgreSQL database on Heroku to Amazon RDS was performed using AWS Database Migration Service (DMS) with a full-load-only approach. This method copies all data at once and is suitable when a short downtime is acceptable.

Change Data Capture (CDC), which allows near real-time replication of changes after the initial load, was also considered. However, enabling CDC on PostgreSQL requires specific parameters to be configured on the source database:

  • wal_level = logical: Enables logical replication, necessary for tracking changes at the SQL level rather than at the raw binary level.
  • max_replication_slots = 5: Defines how many replication slots can exist at once. Each DMS task using CDC requires a replication slot to stream changes.
  • max_wal_senders = 10: Determines the maximum number of concurrent processes that can send WAL (Write-Ahead Log) data to replicas or consumers like DMS.

These settings must be applied via ALTER SYSTEM and require a database restart, which is generally not possible on most Heroku plans due to restricted permissions.

Given these limitations, a simpler full-load-only strategy was chosen, scheduled during a low-traffic period to minimize user impact.

For migrations from self-managed PostgreSQL or AWS RDS instances where system parameters can be modified, enabling CDC offers a zero-downtime migration option. In such cases, verifying that all replication prerequisites are met before launching the DMS task is essential.
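In Terraform terms, the choice between the two approaches maps directly onto the migration_type of the aws_dms_replication_task resource. The sketch below is illustrative: the replication instance, endpoints, and table mappings are placeholders, and wiring migration_type to a dms_strategy variable (described in the next section) is an assumption about how the configuration is organized.

resource "aws_dms_replication_task" "postgres_migration" {
  replication_task_id      = "heroku-to-rds"
  replication_instance_arn = aws_dms_replication_instance.main.replication_instance_arn
  source_endpoint_arn      = aws_dms_endpoint.heroku_source.endpoint_arn
  target_endpoint_arn      = aws_dms_endpoint.rds_target.endpoint_arn

  # "full-load" copies the data once; "full-load-and-cdc" keeps the target in
  # sync after the initial copy, provided the source allows logical replication.
  migration_type = var.dms_strategy

  # Include every table in every schema.
  table_mappings = jsonencode({
    rules = [{
      "rule-type"      = "selection"
      "rule-id"        = "1"
      "rule-name"      = "include-all"
      "object-locator" = { "schema-name" = "%", "table-name" = "%" }
      "rule-action"    = "include"
    }]
  })
}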

Deploying the AWS Services

Deployment of AWS services begins with confirming that all required tools are installed. The next step involves creating a terraform.tfvars file. A sample.tfvars file is provided in the repository as a reference to help structure the configuration. Values such as VPC CIDR blocks, database credentials, and migration settings must be filled in accordingly.

Customization of the migration strategy should reflect access levels and specific project requirements. For a one-time data copy, set dms_strategy = "full-load". If the target database needs to stay synchronized with source changes after the initial load, and the required permissions are available on the source database, dms_strategy = "full-load-and-cdc" can be selected instead. The chosen strategy must align with the operations permitted on the source and with the project's consistency requirements; the Data Migration Strategy section above covers these trade-offs.
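A minimal terraform.tfvars might look like the following. Variable names such as dms_strategy, activate_bastion, create_alarm, notification_email, and beanstalk_platform_arn are taken from this guide; the remaining names and all values are illustrative placeholders.

# terraform.tfvars (illustrative values only)
vpc_cidr               = "10.0.0.0/16"
db_username            = "appuser"
db_password            = "change-me"
db_name                = "appdb"
dms_strategy           = "full-load"   # or "full-load-and-cdc"
activate_bastion       = true
create_alarm           = true
notification_email     = "ops@example.com"
beanstalk_platform_arn = "<docker-platform-arn-for-your-region>"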

To generate an execution plan and save it as plan.out, run the following command:

terraform plan -out=plan.out

This command analyzes the Terraform configuration and generates a plan outlining the necessary actions to achieve the desired state.

After reviewing and approving the plan, apply it by running:

terraform apply "plan.out"

Starting the DMS Replication

After successfully deploying the Terraform infrastructure and provisioning the new PostgreSQL database, the DMS replication task must be started manually. Within the AWS Management Console, navigate to the Database Migration Service section, locate the replication task, select Actions, and choose Start/Resume to initiate the migration. Running a premigration assessment is recommended to identify potential issues before beginning the process.
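Alternatively, if console access is inconvenient, the task can be started from the AWS CLI once its ARN is known (the ARN below is a placeholder):

aws dms start-replication-task --replication-task-arn <replication-task-arn> --start-replication-task-type start-replication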


Migrating the Cache

The caching and background job queue system was migrated from Heroku Redis to Amazon ElastiCache for Redis. ElastiCache is a fully managed, in-memory data store offering low latency and high throughput, making it well suited for use cases such as BullMQ queues or caching frequently accessed data. Unlike MemoryDB, which focuses on durability and strong consistency, ElastiCache provides a more cost-effective solution optimized for ephemeral workloads like job processing, where speed takes precedence over persistence. Deploying ElastiCache within the same VPC as the application and database ensured secure, low-latency communication between services.

With the recent introduction of Valkey—an open-source, drop-in replacement for Redis OSS now supported on ElastiCache—the choice was made to adopt it due to its Redis compatibility and cost savings of up to 33% compared to other engines. Valkey retains the Redis API and performance characteristics while benefiting from active community-driven development.
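As a sketch, the corresponding Terraform definition follows the usual ElastiCache pattern, with the engine set to Valkey. Names and instance sizes are illustrative, the subnet group and security group are assumed to exist elsewhere in the configuration, and the valkey engine value requires a recent AWS provider release.

resource "aws_elasticache_replication_group" "queue" {
  replication_group_id = "app-queue"
  description          = "BullMQ job queue and cache"
  engine               = "valkey"
  engine_version       = "7.2"
  node_type            = "cache.t4g.micro"
  num_cache_clusters   = 1
  port                 = 6379
  subnet_group_name    = aws_elasticache_subnet_group.private.name
  security_group_ids   = [aws_security_group.cache_sg.id]
}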

Migrating the Application

When migrating an application from Heroku to AWS, several deployment options are available depending on the team’s expertise, desired level of control, and operational complexity. AWS Elastic Beanstalk is a common choice, providing a Heroku-like developer experience with increased flexibility. It supports Docker containers, automates provisioning, scaling, and deployment, and integrates seamlessly with other AWS services. Alternatively, direct deployment to Amazon EC2 instances offers full control over the operating system and runtime environment but requires manual management of infrastructure, scaling, and updates.

For containerized workloads, Amazon ECS (Elastic Container Service) or EKS (Elastic Kubernetes Service) serve as robust alternatives. These options are ideal for larger projects involving multiple microservices that demand advanced orchestration, autoscaling, and service discovery. ECS, Amazon’s native container service, is simpler to implement, while EKS, based on Kubernetes, delivers greater flexibility and industry-standard tooling at the cost of a steeper learning curve and higher operational overhead. AWS App Runner is also a viable option for small to medium containerized applications that prioritize rapid deployment and minimal infrastructure management. Each solution presents trade-offs, and the choice should align with the application’s architecture and the team’s objectives.

The application deployment utilized AWS Elastic Beanstalk, striking a balance between simplicity and control suitable for teams transitioning from Heroku. This service abstracts much of the infrastructure complexity while allowing customization when required. 

Prior to deployment, the EB CLI must be installed. Afterward, eb init is run from the application root directory to configure the Elastic Beanstalk environment. The AWS region should match the one specified in the Terraform configuration, and the environment name should correspond to the Terraform-created environment (my-app-env).

It is crucial to ensure that the Terraform *.tfvars file contains the correct beanstalk_platform_arn that corresponds to the application’s runtime, such as Docker or Node.js. Additionally, all necessary environment variables must be defined within the Elastic Beanstalk environment configuration in Terraform to prevent runtime issues. Once the environment is initialized, deployment is performed with eb deploy, and logs can be reviewed using eb logs after completion. 
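For reference, both the platform and the environment variables are expressed as arguments on the aws_elastic_beanstalk_environment resource. The trimmed-down sketch below assumes resource names such as aws_db_instance.main and aws_elasticache_replication_group.queue, and omits the VPC and instance-profile settings a real environment also needs.

resource "aws_elastic_beanstalk_environment" "app" {
  name         = "my-app-env"
  application  = aws_elastic_beanstalk_application.app.name
  platform_arn = var.beanstalk_platform_arn

  # Application configuration exposed to the container as environment variables.
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "DATABASE_URL"
    value     = "postgres://${var.db_username}:${var.db_password}@${aws_db_instance.main.address}:5432/${var.db_name}"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "REDIS_URL"
    value     = "redis://${aws_elasticache_replication_group.queue.primary_endpoint_address}:6379"
  }
}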

Elastic Beanstalk launches one or more EC2 instances running the application. To open the application in a web browser, use the eb open command.

Accessing the Bastion

To enable secure access to internal resources—such as a private RDS database—a bastion host is used. This is a lightweight EC2 instance deployed in a public subnet, functioning as a jump server to reach otherwise inaccessible private subnets. It serves as a secure bridge for administrative access to internal infrastructure without exposing those services to the public internet.

In this setup, the bastion host can be toggled using the var.activate_bastion variable in Terraform. When activated, the instance is provisioned with SSH access, which should ideally be restricted to a specific IP range for enhanced security. After deployment, the bastion’s public IP is available via the Terraform output.
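A count on the instance resource is enough to implement this toggle. The sketch below assumes an allowed_ssh_cidr variable, an existing key pair, and an Amazon Linux AMI data source; all names are illustrative rather than the exact ones used in the repository.

resource "aws_security_group" "bastion_sg" {
  count  = var.activate_bastion ? 1 : 0
  name   = "bastion-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "SSH from a trusted IP range only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.allowed_ssh_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "bastion" {
  count                       = var.activate_bastion ? 1 : 0
  ami                         = data.aws_ami.amazon_linux.id
  instance_type               = "t3.micro"
  subnet_id                   = aws_subnet.public[0].id
  key_name                    = var.bastion_key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.bastion_sg[0].id]
}

output "bastion_public_ip" {
  value = var.activate_bastion ? aws_instance.bastion[0].public_ip : null
}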

To connect:

ssh youruser@<bastion-ip>

Before attempting to interact with services from the bastion host—such as testing the database connection—ensure that the necessary tools are installed, including clients like psql for PostgreSQL or any other relevant utilities.
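For example, once psql is installed on the bastion, connectivity to the private RDS instance can be verified with a command along these lines (the endpoint, user, and database name are placeholders):

psql -h <rds-endpoint> -U <db-user> -d <db-name>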

Monitoring the Infrastructure

Infrastructure monitoring is handled through Amazon CloudWatch, which collects logs from both EC2 instances and Elastic Beanstalk environments. This setup enables real-time visibility into application and system-level events.

For instance, alerts can be configured to trigger when CPU usage on an EC2 instance crosses a defined threshold. Terraform facilitates this through the create_alarm = true flag along with a notification_email, allowing alarms to be provisioned automatically with corresponding email notifications.
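Under the hood, that flag can map to resources along the lines of the sketch below, which raises an alarm when average CPU utilization exceeds 80% and notifies the configured email address through SNS. The threshold, resource names, and the choice of the bastion instance as the monitored target are illustrative.

resource "aws_sns_topic" "alarms" {
  count = var.create_alarm ? 1 : 0
  name  = "infrastructure-alarms"
}

resource "aws_sns_topic_subscription" "email" {
  count     = var.create_alarm ? 1 : 0
  topic_arn = aws_sns_topic.alarms[0].arn
  protocol  = "email"
  endpoint  = var.notification_email
}

resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  count               = var.create_alarm ? 1 : 0
  alarm_name          = "ec2-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2
  # The bastion is used here as an example; the Beanstalk Auto Scaling group
  # can be targeted the same way with an AutoScalingGroupName dimension.
  dimensions    = { InstanceId = aws_instance.bastion[0].id }
  alarm_actions = [aws_sns_topic.alarms[0].arn]
}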

Additional custom metrics and alarms can also be defined to support more tailored observability and operational requirements.

CI/CD Pipeline with GitHub Actions and Elastic Beanstalk

GitHub Actions is used to automate the deployment process. For containerized applications using Docker, the workflow can be configured to build the Docker image, push it to Amazon ECR, and then deploy it to Elastic Beanstalk. This setup allows full control over the container runtime and environment.

For simpler Node.js applications that do not require custom containers, the EB CLI can be used directly within the GitHub Actions workflow to deploy the application. This method offers a faster setup and delivers a Heroku-like developer experience while leveraging AWS infrastructure.

Conclusion

Migrating from Heroku to AWS can offer significant benefits in terms of control, scalability, and cost-efficiency. While Heroku provides an excellent developer experience for early-stage applications, its constraints around performance tuning, observability, and pricing can become limiting as workloads grow.

By adopting AWS services such as RDS, ElastiCache, Elastic Beanstalk, and CloudWatch—provisioned through Terraform—the existing Heroku architecture can be successfully replicated and enhanced without operational disruption. GitHub Actions streamlines CI/CD, and AWS Database Migration Service (DMS) provides a secure method for data transfer.

The outcome is a more flexible and cost-optimized infrastructure, offering improved visibility, security, and scalability. For teams looking to outgrow Heroku’s boundaries, this migration strategy presents a well-tested and practical path forward.

About TrackIt

TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.

We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.

Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.