Migrating a Rails app from Heroku to Hetzner
Edu Depetris
Apr 27, 2026
Tags: Ruby on Rails, DevOps, Heroku, Hetzner, Kamal, Terraform, Platform Migration, Infrastructure, Docker
This was originally a Rails 7 app that I gradually upgraded to Rails 8.2 over time. I documented that journey in two previous posts: Upgrade to Rails 8.1 and Upgrade Ruby and Rails to 8.2.
Rails 7 didn’t ship with Docker, Thruster, Kamal, and other tools that we’ll introduce along the way.
This is a high-level plan to move an app that has been running since 2021 on Heroku, with:
- A Postgres database (with vector support).
- A background job system running on the same dyno as the web app.
Also, due to Heroku costs, we had shut down the staging environment a while ago, so part of this migration is bringing it back.
Overall plan
Phase 0: Prepare the staging infrastructure
Phase 1: Dockerize the app
Phase 2: Prepare the VPS on Hetzner
Phase 3: Prepare the Rails app to run with Kamal
Phase 4: Migrate data and third-party services
Final: Switch traffic to Hetzner
Staging Infrastructure
When I started this project years ago, I decided to manage the infrastructure as code using Terraform and HCP Terraform.
It took longer at the beginning because I had to learn a lot, but now it really pays off.
I was able to clone the production infrastructure, adapt it for staging, and provision everything in minutes.
By “everything,” I mean:
- Storage: three S3 buckets (private and public) with proper access rules
- Access control: a dedicated IAM user with scoped permissions
- DNS & CDN: Cloudflare zones, redirects, and HTTPS
- Email: Google Workspace (MX, SPF, DKIM, DMARC) and Postmark
- Domain verification: TXT records for ownership
How?
All the production setup is in production/main.tf, so I copied it to a new staging/main.tf:

infra/
  main.tf
  production/main.tf
  staging/main.tf
Then:
- Removed unnecessary resources
- Renamed relevant configs
- Plugged it into the root Terraform file
# infra/main.tf

# definition
terraform {
  backend "remote" {
    ...
  }
}

# providers
provider "aws" {
  ...
}

module "production" {
  source = "./production"
}

module "staging" {
  source = "./staging"
}

After that, I run:

terraform plan
terraform apply
Voilà, staging infra is ready.
Dockerize the app
While researching how to generate a Dockerfile automatically, I discovered a gem from Fly.io that handles the entire setup for you.
However, despite trying the gem, I ultimately decided to take a different approach. Since the app is running on the latest versions of Ruby on Rails, I chose to build the Dockerfile manually, starting from the default version generated by a new Rails app using PostgreSQL. In addition to the default Dockerfile, I also copied the entrypoint script.
I made a few modifications to the original Dockerfile to suit my needs.
Custom Dependencies
The app uses Active Storage previewers for PDFs and a custom Office previewer, which requires installing extra system packages like poppler-utils and libreoffice.

Dockerfile:
# Install base packages
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y curl libjemalloc2 libvips postgresql-client poppler-utils libreoffice && \
    ...

Important: Thruster
Be aware that the default Rails Dockerfile now comes with Thruster. It is up to you whether you want to use it or not. In my case, I wanted to include it, so I had to make a few adjustments; the app was originally built on Rails 7, and Thruster was only introduced as a default in Rails 8.
If you want to learn more about whether to use it or not, read this discussion.
To use it, I added the following to my Gemfile:
gem "thruster", require: false
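I also had to wire Thruster into the container start command, since my Dockerfile predated Rails 8. A sketch of the relevant lines, mirroring what a fresh Rails 8 Dockerfile generates (in an upgraded app you may need `bundle binstubs thruster` to create bin/thrust):

```dockerfile
# Thruster listens on port 80 and proxies to the Rails server it boots.
EXPOSE 80
CMD ["./bin/thrust", "./bin/rails", "server"]
```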
Now we're ready to build the image:

docker build -t app-name .
Hetzner VPS
Since this is not a mission-critical app and I want to keep costs to a minimum, I’m going to set up a single server. By leveraging Kamal’s ability to deploy multiple apps, we can host both staging and production on the same machine.
If we need to scale this up in the future, our options include:
- Adding more servers and a load balancer.
- Increasing the server's resources (Vertical scaling).
- Separating staging and production onto different servers.
- Moving the database to its own server and potentially adding volumes.
For now, my approach is a single-server setup: one server fits all.
I provisioned a server with:
- 2 vCPUs
- 2 GB RAM
- 40 GB disk
All of that costs just $7. A massive win compared to what you would pay on Heroku for similar resources.
Finally, I set up a Firewall; for now, there isn't much else to configure.
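I created the firewall by hand, but since the rest of the infrastructure lives in Terraform, the same rules could also be expressed with the hcloud provider. A hypothetical sketch (resource name and rule values are examples, not my actual config):

```hcl
# Hypothetical hcloud firewall: allow SSH and HTTP(S) in; once attached,
# other inbound traffic is dropped by default.
resource "hcloud_firewall" "app" {
  name = "app-firewall"

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "22"
    source_ips = ["0.0.0.0/0", "::/0"]
  }

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "80"
    source_ips = ["0.0.0.0/0", "::/0"]
  }

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "443"
    source_ips = ["0.0.0.0/0", "::/0"]
  }
}
```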
Kamal and Rails
Since this app started on Rails 7, it didn't come with Kamal pre-installed, so we need to set it up from scratch.
Installing Kamal
First, add the gem to the Gemfile:
gem "kamal", require: false
Then, install the gem and run the init command:

bundle install
kamal init
This creates several files, with config/deploy.yml being the most important. In my case, I added a few extra files to manage different environments:
- config/deploy.production.yml: where the production-specific deployment is defined.
- .env.staging: environment variables for staging only.
- .env.production: environment variables for production only.
- .kamal/secrets-common: shared environment variables used by both staging and production.
- .kamal/secrets-production: required secrets for the production deployment.
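For reference, here is a hypothetical sketch of what .kamal/secrets-common can contain (the variable names are examples, not my real secrets; Kamal resolves values from the environment or from command substitution):

```shell
# .kamal/secrets-common (sketch -- names are examples)
# $VAR pulls the value from the environment (e.g. loaded via dotenv);
# $(command) runs a command and uses its output.
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD
RAILS_MASTER_KEY=$(cat config/master.key)
POSTGRES_PASSWORD=$POSTGRES_PASSWORD
```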
Using GHCR as a Registry
I hadn't tried using GitHub Container Registry (GHCR) before, so I decided to give it a shot. You can read more about it here.
Pro Tip: Always use lowercase for the repository name (the image tag). Otherwise, you will encounter the following error:

ERROR (SSHKit::Command::Failed): docker exit status: 256
docker stdout: ERROR: failed to build: invalid tag "ghcr.io/exampleUpper/app:84397": repository name must be lowercase
The Kamal registry configuration for GHCR looks like this:

image: github_organization_name_or_username/app_name

registry:
  server: ghcr.io
  username: github_username
  # This is a GitHub classic access token with read:packages, write:packages,
  # and delete:packages permissions; add it to .kamal/secrets
  password:
    - KAMAL_REGISTRY_PASSWORD

Accessories: Postgres with PGVector extension
The app uses the Postgres vector extension to store embeddings. Instead of pulling the standard Postgres image, we'll use the pgvector image, a community-maintained version of Postgres with the extension pre-installed. Check it out on Docker Hub.

accessories:
  db:
    image: pgvector/pgvector:pgXX
    host: 1.2.3.4
    env:
      clear:
        POSTGRES_DB: something
        POSTGRES_USER: something
      secret:
        - POSTGRES_PASSWORD
    files:
      - db/production.sql:/docker-entrypoint-initdb.d/setup.sql
    directories:
      - data:/var/lib/postgresql/data

For the production.sql initialization, I'm running these commands:

CREATE DATABASE app_name_production;

-- If you want to use the Solid family:
CREATE DATABASE app_name_production_cache;
CREATE DATABASE app_name_production_queue;
CREATE DATABASE app_name_production_cable;
Server Configuration
On Heroku, I was running the background workers on the same Dyno as the web server. Now that I have three times more RAM, I’ve decided to level up and run the background jobs as a separate process.
deploy.production.yml:

servers:
  web:
    - 1.2.3.4
  job:
    hosts:
      - 1.2.3.4
    cmd: bundle exec good_job start --max-threads=5

For the staging server, however, we will keep things simple and run the background worker within the same process as the web server.

deploy.yml:

servers:
  web:
    - 1.2.3.4

Deployment Time!
We are finally ready to run the initial Kamal setup:
dotenv -f .env.production kamal setup -d production
dotenv -f .env.staging kamal setup

Finished all in 76.5 seconds
The apps are alive!
Migrating out of Heroku
Now it’s time to tackle the actual data migration and update any third-party accounts.
The High-Level Plan
The high-level strategy for the migration is as follows:
- Enable Maintenance Mode on Heroku: Scale down the dynos to zero. We don't want any background jobs or users modifying the database while we are moving the data.
- Stop the App on Hetzner: Ensure the new environment is idle.
- Backup the Heroku Database: Export the current data.
- Restore the Database on Hetzner: Import the dump into the new Postgres accessory.
- Start the Hetzner Production Server: Boot the app.
- Switch Traffic: Update DNS to point to Hetzner instead of Heroku.
- DONE!
I used the Heroku and Kamal CLIs for almost everything. Below are the commands and some scripts I created with the help of Claude.
The Heroku Side
First, stop the world on Heroku:
heroku maintenance:on --app app-name                 # maintenance mode
heroku ps:scale web=0 worker=0 --app app-name        # stop web and workers
Next, capture and download the latest database backup:
heroku pg:backups:capture --app app-name
heroku pg:backups:download --app app-name --output tmp/heroku_backup.dump
The Hetzner Side
I found it safest to stop the app and boot a fresh database accessory before importing:
kamal app stop

# Optional: Remove and reboot a clean DB accessory
kamal accessory remove db
kamal accessory boot db

# Add required extensions (e.g., pgcrypto or any others your app needs)
kamal accessory exec db --reuse "psql -U db-username -d database-name -c 'CREATE EXTENSION IF NOT EXISTS pgcrypto;'"

# Restore the dump
cat tmp/heroku_backup.dump | kamal accessory exec db --reuse -i "pg_restore --username db-username --dbname database-name --no-owner --no-acl"

# Bring the app back
kamal app boot
Dealing with COLLATION Mismatches
After restoring the database, I encountered this warning:
WARNING: database “database-name” has a collation version mismatch
DETAIL: The database was created using collation version X.xx, but the operating system provides version Y.yy
HINT: Rebuild all objects in this database that use the default collation and run ALTER DATABASE ...
Postgres doesn't handle text sorting entirely on its own. When you compare strings (ORDER BY, unique indexes, etc.), it relies on the operating system’s C library (glibc on Linux) to determine the order. This logic is called collation.

The warning appears because Heroku and the Hetzner server (or the Docker image) use different versions of glibc.
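Before fixing anything, you can inspect which databases are affected. On Postgres 15 and newer, the recorded default collation version is stored in the pg_database catalog (a read-only check, safe to run any time):

```sql
-- Recorded default collation version for each database (Postgres 15+).
-- If it differs from what the OS now provides, you'll see the warning above.
SELECT datname, datcollversion FROM pg_database;
```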
The Fix: You need to reindex the database and let Postgres know that the version has been refreshed to avoid future warnings.
kamal accessory exec db --reuse "psql -U $DB_USER -d $DB_NAME -c 'REINDEX DATABASE $DB_NAME;'"
kamal accessory exec db --reuse "psql -U $DB_USER -d $DB_NAME -c 'ALTER DATABASE $DB_NAME REFRESH COLLATION VERSION;'"
Verifying the Data Migration
Once the migration is complete, it is vital to verify that no data was lost. A simple way to do this is by comparing the row counts for every table.
I used the following SQL script (counts.sql) to count rows per table:

DROP TABLE IF EXISTS _counts;
CREATE TEMP TABLE _counts (table_name text, row_count bigint);
DO $$
DECLARE r record; n bigint;
BEGIN
FOR r IN SELECT schemaname, relname FROM pg_stat_user_tables ORDER BY 1,2 LOOP
EXECUTE format('SELECT count(*) FROM %I.%I', r.schemaname, r.relname) INTO n;
INSERT INTO _counts VALUES (r.schemaname||'.'||r.relname, n);
END LOOP;
END $$;
SELECT table_name || ' = ' || row_count AS line FROM _counts ORDER BY table_name;
Run the script against both environments and save the output:
heroku pg:psql -a app-name < counts.sql > heroku_counts.txt
dotenv -f .env.production kamal dbc -d production < counts.sql > hetzner_counts.txt
You can then manually compare the files or use a diff tool to highlight discrepancies. If everything looks correct, you are ready for the final step: bringing your new app to life!
I used this script to highlight them.
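My script isn't reproduced here, but a minimal Ruby sketch of the idea, parsing the `table = count` lines that counts.sql emits and reporting mismatches, could look like this (not my exact script):

```ruby
# Parse lines like "public.users = 1234" into a { table => count } hash,
# skipping anything that doesn't match (psql headers, blank lines, etc.).
def parse_counts(text)
  text.each_line.with_object({}) do |line, acc|
    acc[$1] = $2.to_i if line =~ /\A\s*([\w.]+)\s*=\s*(\d+)\s*\z/
  end
end

# Return [table, count_a, count_b] triples for mismatched tables;
# a nil count means the table is missing on that side entirely.
def diff_counts(a, b)
  (a.keys | b.keys).sort.filter_map do |table|
    [table, a[table], b[table]] unless a[table] == b[table]
  end
end

# Inline demo; in practice you'd pass File.read("heroku_counts.txt"), etc.
heroku  = parse_counts("public.users = 10\npublic.posts = 5\n")
hetzner = parse_counts("public.users = 10\npublic.posts = 4\n")
diff_counts(heroku, hetzner).each do |table, a, b|
  puts "#{table}: heroku=#{a.inspect} hetzner=#{b.inspect}"
end
# prints: public.posts: heroku=5 hetzner=4
```

Run it after generating both count files; any output at all means a discrepancy worth investigating before switching traffic.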
Switching traffic
At this point, our DNS (managed via Cloudflare) is still directing traffic to Heroku. Now, we need to point it to the Hetzner server IP. Because I am using Terraform, the transition was straightforward. I just needed to update a few lines in my configuration and apply the changes.
diff --git a/infra/main.tf b/infra/main.tf
index d96f14a..479e9e3 100644
--- a/infra/main.tf
+++ b/infra/main.tf
-# Heroku
-variable "heroku_dns_target" {
-  default = "colorful-primate-0zshopgs91e9o95.herokudns.com"
+# Hetzner
+variable "hetzner_production_ip" {
+  default = "1.2.3.4"
 }
@@ -272,8 +272,8 @@ resource "cloudflare_record" "root" {
   zone_id = data.cloudflare_zone.app.id
   name    = "domain-of-the-app-here.com"
-  value   = var.heroku_dns_target
-  type    = "CNAME"
+  value   = var.hetzner_production_ip
+  type    = "A"
   proxied = true
 }

One key thing to notice here is the change in the record type. In my configuration, Heroku requires a CNAME to point to their DNS targets, but for our own VPS on Hetzner, we switch to an A record pointing directly to the static IP.

Finally, ensure the app is booted on the new server (kamal app boot).
Extras
Using a Subdomain for QA
From the very beginning, when I was setting up the staging infrastructure, I added a specific subdomain: qa-production.my-app.com. I used this to point my Kamal production deployment to a live URL so I could perform manual QA before the final DNS switch.

Kamal makes handling multiple hosts incredibly easy. Here is an example of how you can configure this in deploy.production.yml:

proxy:
  ssl: true
  hosts:
    - qa-production.my-app-domain.com
    - www.my-app-domain.com
    - www.my-other-app-domain.com
  ...

Git Worktrees for Parallel Development
I leveraged Git worktrees to work on the infrastructure code in parallel with the Rails app changes (Kamal setup, configuration files, etc.). This allowed me to keep my environment clean and switch between infrastructure and application logic without constantly stashing changes or switching branches.
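If you haven't used worktrees before, here is a self-contained sketch (it builds a throwaway repo; the paths and branch name are hypothetical — in real life you'd run the `git worktree` commands inside your app's repository):

```shell
# Build a throwaway repo for the demo (paths/branch names are hypothetical).
demo=$(mktemp -d) && cd "$demo"
git init -q myapp && cd myapp
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
git branch kamal-setup

# Check out the kamal-setup branch into a sibling directory:
git worktree add ../myapp-kamal kamal-setup

# Two working directories sharing one repo -- no stashing, no branch switching:
git worktree list
```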
GitHub Actions for Continuous Deployment (CD)
To automate the deployment process, I created two GitHub Actions workflows: cd.yml (for automated deployments) and manual_deploy.yml.

Here's the manual_deploy.yml code as an example.
I used GitHub Environments to manage the secrets for both staging and production, ensuring that the right credentials were used for the right environment.
I merged everything, and...
🎉 Happy platform migration!