Deploy OpenClaw AI + n8n workflows + Tailscale security
OpenClaw AI Agent Stack — n8n, Tailscale Mesh, Modal & 500+ Integrations
Services: n8n (Primary + Worker), Postgres (volume: /var/lib/postgresql/data), Redis (volume: /bitnami), OpenClaw with Tailscale (openclaw-railway-template, volume: /data), and a Bucket (preserved-carrier).
OpenClaw AI Agent Stack on Railway — n8n Workflows, Tailscale Mesh, GPU Compute & 500+ Integrations
OpenClaw is a self-hosted AI agent gateway with multi-provider LLM routing, autonomous task execution, and 39 built-in skills. This template goes beyond bare OpenClaw — adding n8n workflow automation, Tailscale encrypted mesh networking, on-demand Modal GPU compute, and 500+ SaaS integrations out of the box, all pre-wired and one-click deployable on Railway.
Deploy and Host OpenClaw AI Agent Stack on Railway
Deploying this stack means running a production-ready AI agent infrastructure in minutes — not hours. One click provisions OpenClaw (AI gateway), n8n (workflow automation), Postgres, and Redis on Railway, with Tailscale WireGuard mesh embedded directly in the container for zero-SSH secure remote access. The browser-based /setup wizard at your Railway domain handles model provider selection, cost optimization, and workspace initialization automatically. All companion service secrets and internal URLs are auto-wired via Railway reference variables. No manual configuration. No exposed ports. No DevOps overhead.
About Hosting OpenClaw AI Agent Stack on Railway
The standard OpenClaw Railway template gets you a gateway. This template gets you a complete autonomous AI infrastructure stack. On top of OpenClaw, you get n8n (workflow automation engine with AI agent nodes), Postgres, Redis, Tailscale WireGuard mesh (embedded in the container — no SSH, no exposed ports), Langfuse LLM observability, PostHog analytics, and Composio's universal MCP server for 500+ SaaS integrations. A browser-based /setup wizard handles onboarding and applies cost-optimized defaults that reduce LLM API spend by 90%+. All cross-service secrets and internal URLs are auto-generated via Railway reference variables — nothing to wire manually.
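As a sketch of what that auto-wiring looks like, Railway reference variables let one service consume another service's values at deploy time. The service and variable names below are illustrative examples, not the template's exact wiring:

```
# Railway reference-variable syntax (service/variable names are examples)
DB_POSTGRESDB_HOST=${{Postgres.RAILWAY_PRIVATE_DOMAIN}}
QUEUE_BULL_REDIS_HOST=${{Redis.RAILWAY_PRIVATE_DOMAIN}}
N8N_API_URL=http://${{Primary.RAILWAY_PRIVATE_DOMAIN}}:5678
```

Because these resolve over Railway's private network, none of the referenced services need a public port.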
Common Use Cases
- Autonomous AI dev agents — Run persistent coding agents on Railway that execute tasks, commit PRs, and report results via Slack or Telegram — accessible securely from your local CLI over Tailscale with zero SSH configuration
- AI ↔ n8n bidirectional pipelines — Trigger n8n workflows from OpenClaw (CRM updates, scheduled reports, social scheduling via Postiz) or call the OpenClaw hooks API from inside n8n to run AI tasks at any workflow step
- Multi-provider LLM gateway with GPU burst — Route requests across Anthropic, OpenAI, DeepSeek, Grok, and Kimi from a single endpoint, with automatic model fallback chains, Langfuse cost tracking, and Modal GPU burst for ML inference and image/video generation
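As an illustration of the bidirectional pipeline above, an n8n HTTP Request node calling back into OpenClaw amounts to a POST authenticated with the gateway token. The /hooks path and payload field here are assumptions, so confirm them against your instance before wiring a workflow:

```shell
# Sketch only — endpoint path and payload schema are assumed, not confirmed
HOOK_URL="https://your-app.up.railway.app/hooks"
PAYLOAD='{"message":"Generate the weekly CRM summary"}'

# The request an n8n HTTP Request node would send (printed, not executed):
printf 'POST %s\nAuthorization: Bearer <OPENCLAW_GATEWAY_TOKEN>\n\n%s\n' \
  "$HOOK_URL" "$PAYLOAD"
```

In n8n, the same shape maps onto an HTTP Request node with the token supplied via a credential rather than inline.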
Dependencies for OpenClaw AI Agent Stack Hosting
- Railway account with a Volume mounted at /data for config and workspace persistence across deploys
- Tailscale account — free tier; generate a Reusable + Ephemeral auth key at Tailscale Admin > Keys
- LLM API key — Anthropic, OpenAI, Google, OpenRouter, DeepSeek, xAI Grok, or Kimi (set in Railway Variables or entered during the /setup wizard)
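For reference, the minimum variables to set before (or instead of) the /setup wizard might look like the fragment below. The variable names come from this template; the values are placeholders:

```
TAILSCALE_AUTHKEY=tskey-auth-...     # reusable + ephemeral key from Tailscale Admin > Keys
TAILSCALE_HOSTNAME=openclaw-railway  # optional; this is the default
DEEPSEEK_API_KEY=sk-...              # any one supported LLM provider key is enough
```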
Deployment Dependencies
- OpenClaw
- n8n — workflow automation
- Tailscale — encrypted mesh networking
- Modal — serverless GPU compute
- Composio Rube MCP — 500+ SaaS integrations
- Langfuse — LLM observability and evals
Why Deploy OpenClaw AI Agent Stack on Railway?
Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while still letting you scale it both vertically and horizontally.
By deploying OpenClaw AI Agent Stack on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
Primary
n8nio/n8n
PORT
HTTP port for n8n primary instance
NODE_OPTIONS
Node.js runtime options (e.g. --max-old-space-size)
N8N_TRUST_PROXY
Trust reverse proxy headers (default: true)
N8N_LISTEN_ADDRESS
Address n8n listens on (default: 0.0.0.0)
N8N_RUNNERS_ENABLED
Enable task runners (default: true)
QUEUE_BULL_REDIS_DUALSTACK
Enable Redis dual-stack networking
ENABLE_ALPINE_PRIVATE_NETWORKING
Enable Railway private networking
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS
Offload manual runs to worker nodes
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS
Enforce strict file permissions on settings
openclaw-railway-template
TrendpilotAI/openclaw-railway-tailscale
PORT
HTTP port for the Express wrapper (default: 8080)
CACHEBUST
Change to force a fresh Docker build
N8N_API_KEY
n8n API key for workflow automation integration
GITHUB_TOKEN
GitHub PAT for repo access from the instance
GROK_API_KEY
xAI Grok API key (optional LLM provider)
SETUP_USERNAME
Username for /setup Basic auth (default: admin)
RUNWARE_API_KEY
Runware API key for image generation
SLACK_BOT_TOKEN
Slack bot token for channel integration
TAILSCALE_SERVE
Enable Tailscale Serve HTTPS proxy (default: true)
DEEPSEEK_API_KEY
DeepSeek API key (optional LLM provider)
TAILSCALE_AUTHKEY
Tailscale auth key (reusable + ephemeral recommended)
CLOUDFLARE_API_KEY
Cloudflare API key for DNS/tunnel integration
OPENCLAW_STATE_DIR
State directory path (default: /data/.openclaw)
TAILSCALE_HOSTNAME
Tailnet hostname (default: openclaw-railway)
OPENCLAW_GATEWAY_TOKEN
Gateway auth token (auto-generated if not set)
OPENCLAW_WORKSPACE_DIR
Workspace directory path (default: /data/workspace)
CLAUDE_CODE_OAUTH_TOKEN
Claude Code OAuth token for Anthropic auth
PGDATA
PostgreSQL data directory path
PGPORT
PostgreSQL listen port
POSTGRES_DB
Database to create on first start
POSTGRES_USER
User to create on first start
SSL_CERT_DAYS
SSL certificate validity in days
RAILWAY_DEPLOYMENT_DRAINING_SECONDS
Graceful shutdown drain period in seconds
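Since OPENCLAW_GATEWAY_TOKEN is auto-generated when unset, you only need to set it yourself to pin a known value. A minimal sketch, assuming openssl is available locally:

```shell
# Pin a gateway token instead of relying on auto-generation
# (set the result as OPENCLAW_GATEWAY_TOKEN in Railway Variables)
OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)
echo "generated ${#OPENCLAW_GATEWAY_TOKEN}-character token"
```

A pinned token is convenient when external callers (e.g. n8n workflows) need to keep authenticating across redeploys.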
Redis
railwayapp/redis
REDISPORT
Redis server port
REDISUSER
Redis username
RAILWAY_RUN_UID
UID for Railway process execution
Worker
n8nio/n8nPORT
HTTP port for n8n worker
DB_TYPE
Database type (postgresdb)
NODE_OPTIONS
Node.js runtime options
EXECUTIONS_MODE
Execution mode: queue
N8N_LISTEN_ADDRESS
Address worker listens on
N8N_RUNNERS_ENABLED
Enable task runners
QUEUE_BULL_REDIS_DUALSTACK
Enable Redis dual-stack networking
ENABLE_ALPINE_PRIVATE_NETWORKING
Enable Railway private networking
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS
Offload manual runs to workers
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS
Enforce strict file permissions
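Taken together, a worker's queue-mode wiring reduces to a handful of variables. This is a hedged sketch — the Redis host reference is an assumed example, not the template's literal value:

```
EXECUTIONS_MODE=queue                 # workers pull jobs from the Bull queue
DB_TYPE=postgresdb                    # shared Postgres so all nodes see the same state
QUEUE_BULL_REDIS_HOST=${{Redis.RAILWAY_PRIVATE_DOMAIN}}
N8N_RUNNERS_ENABLED=true              # task runners, as in the variables above
```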
preserved-carrier
Bucket

