Deploy N8N 2.0 Pro (Queue Mode with Webhooks, Workers, Code Runners, MCP & Auto-Updates)
1-Click Deploy | High-performance N8N setup with Postgres 17 + Redis + MCP
Services included in this template:
- App – n8n main instance (`main`)
- Workers – `worker` and `runner`
- Database – PostgreSQL (volume: `/var/lib/postgresql/data`)
- API – `webhook`
- Cache – Redis (volume: `/bitnami`)
- MCP
Deploy and Host N8N 2.0 Pro (Queue Mode + Workers + Webhooks + Code Runners + MCP) on Railway
N8N 2.0 Pro is a production-grade n8n deployment architecture combining distributed queue-based execution, dedicated webhook processors, sandboxed code runners, and Model Context Protocol (MCP) integration. It scales horizontally with independent workers, handles high-concurrency webhooks, executes JavaScript and Python code securely in isolated environments, and enables AI agent capabilities. All components are containerized and cloud-ready for Railway deployment.
About Hosting n8n in Queue Mode with Workers, Webhooks, Code Runners & MCP
Hosting n8n in this configuration requires orchestrating multiple interconnected services: a main instance managing workflows and UI, Redis for job queue management, PostgreSQL for persistent data storage, dedicated workers processing queued jobs with configurable concurrency, task runner sidecars for secure Code node execution, and optional webhook processors handling incoming triggers independently. This architecture ensures the main instance remains responsive for UI interactions while background workers handle execution load. Code runners (task runners) provide sandboxed JavaScript and Python execution, isolating user code from the main process for security and stability. MCP integration enables AI agents to access external tools and data. The setup demands proper environment variable synchronization across services, Redis dual-stack configuration, runner-to-main communication via WebSocket grants, and careful concurrency tuning based on available CPU and memory resources.
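The environment variable synchronization described above can be sketched as follows. This is a minimal illustration, not the template's actual configuration: hostnames, passwords, and the auth token are placeholders that will differ in your Railway project, and Railway normally injects these per service rather than via a shell profile.

```shell
# Shared by the main instance, workers, and webhook processors.
# All values below are illustrative placeholders.

# Queue mode: push executions onto the Redis-backed Bull queue
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.railway.internal   # placeholder host
export QUEUE_BULL_REDIS_PORT=6379
export QUEUE_BULL_REDIS_PASSWORD=change-me

# Shared PostgreSQL storage for workflows, credentials, and executions
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=postgres.railway.internal   # placeholder host
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=change-me

# Must be identical on every instance so stored credentials decrypt everywhere
export N8N_ENCRYPTION_KEY=change-me

# External task runners (Code node sandbox) authenticate with this token
export N8N_RUNNERS_ENABLED=true
export N8N_RUNNERS_MODE=external
export N8N_RUNNERS_AUTH_TOKEN=change-me
```

The key operational point is that `N8N_ENCRYPTION_KEY` and the Redis/Postgres connection settings must match across every service, or workers will fail to decrypt credentials or pick up jobs.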
Common Use Cases
- High-Volume Workflow Automation – Process thousands of daily webhook triggers and scheduled jobs without degrading the main instance
- Secure Code Execution – Run custom JavaScript and Python scripts in isolated task runners, preventing untrusted code from affecting core services
- AI Agent Orchestration with MCP – Route external tool calls through Model Context Protocol while managing execution concurrency
- Multi-Tenant Automation Platform – Scale horizontally by adding workers and code runners on demand, isolating execution loads and preventing resource contention
Dependencies
- PostgreSQL Database – Persistent storage for workflows, credentials, executions, and audit logs
- Redis Cache – Job queue broker managing the Bull queue for distributed worker coordination
- Task Runners (Code Runners) – Sidecar containers executing Code node JavaScript/Python in sandboxed environments
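With queue mode configured, the main instance, workers, and webhook processors run as separate n8n processes. A sketch of the start commands (the concurrency value is an example and should be tuned to the CPU and memory available to each worker, as noted above):

```shell
# Main instance: serves the editor UI and enqueues executions
n8n start

# Worker: pulls jobs from the Redis queue; --concurrency is per worker process
n8n worker --concurrency=10

# Webhook processor: handles incoming webhook triggers independently
n8n webhook
```

Adding capacity means launching more `n8n worker` processes against the same Redis and PostgreSQL instances; no main-instance restart is required.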
Documentation References
- n8n Official Documentation
- n8n Queue Mode Configuration
- n8n Task Runners Documentation
- n8n v2.0 Breaking Changes
- Redis Bull Queue Documentation
Why Deploy on Railway?
Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.
By deploying this n8n architecture on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
