Postgres S3 backups

A simple Node.js app to back up your PostgreSQL database to S3 on a cron schedule

Repository: railwayapp-templates/postgres-s3-backups

Deploy and Host PostgreSQL S3 Backups on Railway

PostgreSQL S3 Backups is an automated backup service that uses node-cron or Railway cron to dump PostgreSQL data and upload it to S3-compatible storage. The service is written in TypeScript and provides configurable scheduling and storage options.

About Hosting PostgreSQL S3 Backups

PostgreSQL S3 Backups runs as a Node.js application that executes pg_dump operations on a schedule and uploads compressed database dumps to S3 storage. You'll need to manage cron job reliability, monitor backup success/failure rates, and handle S3 storage costs as backup data accumulates. The service requires database connection management, S3 authentication, and error handling for network failures during uploads. Storage lifecycle policies become important for managing backup retention and costs over time.

Common Use Cases

  • Database Administrators: Automate regular PostgreSQL backups to cloud storage with configurable retention policies
  • DevOps Engineers: Implement disaster recovery procedures and maintain backup compliance for production databases

Dependencies for PostgreSQL S3 Backups Hosting

  • Node.js Runtime: Runtime for the TypeScript app's cron scheduling and backup operations
  • PostgreSQL Access: Database connection credentials and pg_dump utility availability
  • S3 Storage: AWS S3 or compatible storage service with appropriate access permissions
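
Because the service shells out to pg_dump, it is worth verifying the utility exists before the first scheduled run. A minimal sketch of such a startup check (the check itself is an assumption, not a documented part of the template):

import { execSync } from "node:child_process";

// Sketch: fail fast if pg_dump is missing from the runtime image.
try {
  execSync("pg_dump --version", { stdio: "ignore" });
} catch {
  throw new Error("pg_dump not found on PATH; install the PostgreSQL client tools");
}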

Implementation Details

Overview:

The template, written in TypeScript, uses node-cron (or Railway's native cron) to dump your PostgreSQL data to a file and then upload that file to S3.

Key Features:

  • Configurable backup schedule: By default, the cron runs at 5 AM every day but is configurable via the BACKUP_CRON_SCHEDULE environment variable
  • Support for custom buckets: The script accepts an AWS_S3_ENDPOINT environment variable, letting you target any S3-compatible storage bucket (e.g., Wasabi); see the client sketch below
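
Where a custom endpoint is configured, the S3 client might be constructed like this (a hedged sketch using the AWS SDK for JavaScript v3; the template's actual wiring may differ):

import { S3Client } from "@aws-sdk/client-s3";

// Sketch: an S3 client that honors the optional custom endpoint. The
// endpoint and path-style settings are only needed for S3-compatible
// providers (Wasabi, R2, MinIO, ...); for AWS itself they can be omitted.
const s3 = new S3Client({
  region: process.env.AWS_S3_REGION ?? "auto",
  endpoint: process.env.AWS_S3_ENDPOINT || undefined,
  forcePathStyle: process.env.AWS_S3_FORCE_PATH_STYLE === "true",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});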

Required Configuration:

# AWS/S3 Configuration
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_S3_BUCKET=your-bucket-name
AWS_S3_REGION=us-east-1

# Database Configuration
BACKUP_DATABASE_URL=postgresql://user:password@host:port/database

# Backup Schedule
BACKUP_CRON_SCHEDULE=0 5 * * *
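
A minimal startup check for these required variables might look like the following (a sketch; the template's own validation may differ):

// Sketch: verify the required variables above are present before starting.
const required = [
  "AWS_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY",
  "AWS_S3_BUCKET",
  "AWS_S3_REGION",
  "BACKUP_DATABASE_URL",
];

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}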

Environment Variables:

  • AWS_ACCESS_KEY_ID - AWS access key ID
  • AWS_SECRET_ACCESS_KEY - AWS secret access key, sometimes also called an application key
  • AWS_S3_BUCKET - The name of the bucket that the access key ID and secret access key are authorized to access
  • AWS_S3_REGION - The name of the region your bucket is located in, set to auto if unknown
  • BACKUP_DATABASE_URL - The connection string of the database to backup
  • BACKUP_CRON_SCHEDULE - The cron schedule to run the backup on. Example: 0 5 * * *
  • AWS_S3_ENDPOINT - The custom S3 endpoint you want to use. Applicable for third-party S3-compatible services such as Cloudflare R2 or Backblaze B2
  • AWS_S3_FORCE_PATH_STYLE - Use path style for the endpoint instead of the default subdomain style, useful for MinIO. Default false
  • RUN_ON_STARTUP - Run a backup on startup of this application then proceed with making backups on the set schedule
  • BACKUP_FILE_PREFIX - Add a prefix to the file name
  • BUCKET_SUBFOLDER - Define a subfolder to place the backup files in
  • SINGLE_SHOT_MODE - Run a single backup on start and exit when completed. Useful with the platform's native CRON scheduler
  • SUPPORT_OBJECT_LOCK - Enables support for buckets with object lock by providing an MD5 hash with the backup file (see the sketch after this list)
  • BACKUP_OPTIONS - Add any valid pg_dump option; supported options are listed in the pg_dump documentation. Example: --exclude-table=pattern
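
Object-lock-enabled buckets reject uploads that lack a Content-MD5 header. A sketch of how that hash could be computed and attached with the AWS SDK v3 (the uploadWithObjectLock helper is illustrative, not part of the template):

import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

// Sketch: compute the base64-encoded MD5 digest of the dump and send it
// as Content-MD5, which object-lock buckets require on uploads.
async function uploadWithObjectLock(s3: S3Client, path: string, key: string) {
  const body = await readFile(path);
  const md5 = createHash("md5").update(body).digest("base64");
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET,
      Key: key,
      Body: body,
      ContentMD5: md5,
    })
  );
}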

Backup Process:

# Basic backup workflow
1. Connect to PostgreSQL database using BACKUP_DATABASE_URL
2. Execute pg_dump with specified BACKUP_OPTIONS
3. Compress database dump file
4. Upload to S3 bucket with timestamp and optional prefix
5. Clean up local temporary files
6. Log backup success/failure status
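
A condensed sketch of that workflow in TypeScript, assuming pg_dump and gzip are available in the container and reusing the s3 client from the earlier sketch (the template's real implementation differs in detail):

import { exec } from "node:child_process";
import { promisify } from "node:util";
import { readFile, unlink } from "node:fs/promises";
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

const execAsync = promisify(exec);

// Sketch of steps 1-6: dump, compress, upload, clean up, log.
async function backup(s3: S3Client): Promise<void> {
  const file = `backup-${Date.now()}.sql.gz`;
  try {
    // Steps 1-3: pg_dump reads BACKUP_DATABASE_URL; gzip compresses the dump.
    await execAsync(
      `pg_dump ${process.env.BACKUP_OPTIONS ?? ""} "${process.env.BACKUP_DATABASE_URL}" | gzip > ${file}`
    );
    // Step 4: upload the compressed dump to the configured bucket.
    await s3.send(
      new PutObjectCommand({
        Bucket: process.env.AWS_S3_BUCKET,
        Key: file,
        Body: await readFile(file),
      })
    );
    console.log(`Backup uploaded: ${file}`); // step 6 (success path)
  } catch (err) {
    console.error("Backup failed:", err);    // step 6 (failure path)
    throw err;
  } finally {
    await unlink(file).catch(() => {});      // step 5: remove the temp file
  }
}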

Advanced Configuration Examples:

# Custom S3 endpoint (Cloudflare R2)
AWS_S3_ENDPOINT=https://account-id.r2.cloudflarestorage.com
AWS_S3_FORCE_PATH_STYLE=false

# Backup with exclusions
BACKUP_OPTIONS=--exclude-table=temp_* --exclude-table=logs

# Organized storage
BACKUP_FILE_PREFIX=prod-db-
BUCKET_SUBFOLDER=postgresql-backups/

# One-time backup
SINGLE_SHOT_MODE=true
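
As a sketch, the prefix and subfolder might compose into the final object key like this (the exact naming scheme in the template may differ):

// Sketch: compose the S3 object key from the optional subfolder and file
// prefix, e.g. "postgresql-backups/prod-db-2024-05-01T02:00:00.000Z.sql.gz".
function objectKey(): string {
  const subfolder = process.env.BUCKET_SUBFOLDER ?? ""; // "postgresql-backups/"
  const prefix = process.env.BACKUP_FILE_PREFIX ?? "";  // "prod-db-"
  return `${subfolder}${prefix}${new Date().toISOString()}.sql.gz`;
}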

Monitoring and Maintenance:

# Cron schedule examples
BACKUP_CRON_SCHEDULE=0 2 * * *      # Daily at 2 AM
BACKUP_CRON_SCHEDULE=0 2 * * 0      # Weekly on Sunday at 2 AM  
BACKUP_CRON_SCHEDULE=0 2 1 * *      # Monthly on 1st at 2 AM
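
A sketch of how the schedule and the RUN_ON_STARTUP / SINGLE_SHOT_MODE flags might be wired together with node-cron, reusing the backup helper and s3 client from the sketches above (illustrative control flow, not the template's exact code):

import cron from "node-cron";

async function main(): Promise<void> {
  if (process.env.SINGLE_SHOT_MODE === "true") {
    await backup(s3); // one backup, then exit (pairs with a platform-native cron)
    process.exit(0);
  }
  if (process.env.RUN_ON_STARTUP === "true") {
    await backup(s3); // immediate backup before the schedule takes over
  }
  // Recurring schedule, e.g. "0 5 * * *" for 5 AM daily.
  cron.schedule(process.env.BACKUP_CRON_SCHEDULE ?? "0 5 * * *", () => {
    backup(s3).catch((err) => console.error("Scheduled backup failed:", err));
  });
}

main();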

Storage Considerations:

  • Monitor S3 storage costs as backups accumulate
  • Implement lifecycle policies for backup retention (see the sketch after this list)
  • Consider backup compression and deduplication
  • Plan for disaster recovery and backup restoration procedures
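
Retention can be enforced bucket-side. A sketch of a 30-day expiration rule applied with the AWS SDK v3, reusing the s3 client from above (the rule values are illustrative, and some S3-compatible providers do not support lifecycle configuration):

import { PutBucketLifecycleConfigurationCommand } from "@aws-sdk/client-s3";

// Sketch: expire backup objects 30 days after creation. The prefix and
// retention window here are assumptions, not template defaults.
await s3.send(
  new PutBucketLifecycleConfigurationCommand({
    Bucket: process.env.AWS_S3_BUCKET,
    LifecycleConfiguration: {
      Rules: [
        {
          ID: "expire-old-backups",
          Status: "Enabled",
          Filter: { Prefix: "postgresql-backups/" },
          Expiration: { Days: 30 },
        },
      ],
    },
  })
);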

Why Deploy PostgreSQL S3 Backups on Railway?

Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.

By deploying PostgreSQL S3 Backups on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

