Deploy N8N 2.0 Pro (Queue Mode with Webhooks, Workers, Code Runners, MCP & Auto-Updates)

🪄 1-Click Deploy | High-performance N8N setup with Postgres 17 + Redis + MCP


Services in this template:

  • 📱 App – N8N main instance
  • ⚙️ Workers
  • 🗄️ Database – persistent volume at /var/lib/postgresql/data
  • 🔌 API
  • ⚡ Cache – persistent volume at /bitnami
  • 🤖 MCP

Deploy and Host N8N 2.0 Pro (Queue Mode + Workers + Webhooks + Code Runners + MCP) on Railway

N8N 2.0 Pro is a production-grade n8n deployment architecture combining distributed queue-based execution, dedicated webhook processors, sandboxed code runners, and Model Context Protocol integration. It scales horizontally with independent workers, handles high-concurrency webhooks, executes JavaScript and Python code securely in isolated environments, and enables AI agent capabilities, all containerized and cloud-ready for Railway deployment.

About Hosting n8n in Queue Mode with Workers, Webhooks, Code Runners & MCP

Hosting n8n in this configuration requires orchestrating multiple interconnected services: a main instance managing workflows and the UI, Redis for job queue management, PostgreSQL for persistent data storage, dedicated workers processing queued jobs with configurable concurrency, task runner sidecars for secure Code node execution, and optional webhook processors handling incoming triggers independently. This architecture keeps the main instance responsive for UI interactions while background workers handle the execution load.

Code runners (task runners) provide sandboxed JavaScript and Python execution, isolating user code from the main process for security and stability. MCP integration enables AI agents to access external tools and data. The setup demands proper environment variable synchronization across services, Redis dual-stack configuration, runner-to-main communication via WebSocket grants, and careful concurrency tuning based on available CPU and memory resources.
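To make the environment variable synchronization concrete, here is a minimal sketch of the core queue-mode settings shared by the main instance and workers. Variable names follow n8n's documented configuration; the hostnames and secret values are placeholders you would replace with your own Railway service references, and you should verify the exact variables against the n8n version you deploy.

```shell
# --- Shared by main instance and all workers (values must match) ---
EXECUTIONS_MODE=queue                          # route executions through the Redis-backed queue
QUEUE_BULL_REDIS_HOST=redis.railway.internal   # placeholder: your Redis service hostname
QUEUE_BULL_REDIS_PORT=6379
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres.railway.internal   # placeholder: your Postgres service hostname
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=change-me               # placeholder secret
N8N_ENCRYPTION_KEY=same-key-on-every-service   # must be identical everywhere credentials are decrypted

# --- Task runner sidecars (external mode) ---
N8N_RUNNERS_ENABLED=true
N8N_RUNNERS_MODE=external                      # runners live in their own containers
N8N_RUNNERS_AUTH_TOKEN=shared-secret           # placeholder: grants runner-to-main access
```

Because workers decrypt the same stored credentials as the main instance, a mismatched `N8N_ENCRYPTION_KEY` is the most common cause of failed queued executions.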

Common Use Cases

  • High-Volume Workflow Automation – Process thousands of daily webhook triggers and scheduled jobs without main instance degradation
  • Secure Code Execution – Run custom JavaScript and Python scripts in isolated task runners, preventing untrusted code from affecting core services
  • AI Agent Orchestration with MCP – Route external tool calls through Model Context Protocol while managing execution concurrency
  • Multi-Tenant Automation Platform – Scale horizontally by adding workers and code runners on demand, isolating execution loads and preventing resource contention
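Scaling horizontally, as in the last use case, typically means launching additional processes against the same Redis and Postgres. A sketch of the relevant n8n start commands (tune the concurrency value to your container's CPU and memory):

```shell
# Start a worker that pulls jobs from the shared Redis queue.
# --concurrency caps how many executions this worker runs in parallel.
n8n worker --concurrency=10

# Optionally run a dedicated webhook processor so incoming triggers
# are handled independently of the main instance.
n8n webhook
```

Each extra worker container increases total throughput roughly linearly until Redis, Postgres, or per-worker concurrency becomes the bottleneck.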

Dependencies

  • PostgreSQL Database – Persistent storage for workflows, credentials, executions, and audit logs
  • Redis Cache – Job queue broker managing the Bull queue for distributed worker coordination
  • Task Runners (Code Runners) – Sidecar containers executing Code node JavaScript/Python in sandboxed environments


Why Deploy on Railway?

Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while still letting you scale it vertically and horizontally.

By deploying this n8n architecture on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

