
Deploy OpenClaw + Ollama on Railway | Self-Hosted Personal AI Assistant

Self-host OpenClaw, with optional local LLM models via Ollama. Connects 20+ chat platforms.

Two services with persistent volumes: OpenClaw (mounted at /data) and Ollama (mounted at /root/.ollama).

OpenClaw logo

Deploy and Host OpenClaw

Deploy OpenClaw, the open-source personal AI assistant, on Railway with a single click. OpenClaw is a self-hosted agent runtime that connects your favorite chat apps (WhatsApp, Telegram, Discord, Slack, iMessage, and 20+ more) to AI models like Claude, GPT, Gemini, or fully local models via Ollama, letting an AI agent browse the web, manage files, run commands, and work autonomously on your behalf.

Self-host OpenClaw on Railway with this template and get a fully configured gateway, browser-based setup wizard, admin dashboard with live terminal, and persistent storage, with no CLI or SSH access needed.

🚀 Getting Started with OpenClaw on Railway

Once your Railway deploy is live, open your service URL; you'll be redirected to the /setup wizard automatically. Pick your AI provider (Anthropic, OpenAI, Gemini, Groq, OpenRouter, or Ollama for free local models), configure your connection, and optionally add messaging channels. Click Launch OpenClaw and the gateway starts within seconds.

Step 1: Initial Setup via /setup

The /setup page is a one-time configuration wizard for selecting your AI provider, pasting your API key, and wiring up messaging channels (Telegram, Discord, Slack, etc.).

Once setup is complete, /setup cannot be used again without first wiping the config from /admin. It's an open URL by design, so it only works when no config exists yet.
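The lockout behaves like a simple existence check on the stored config. An illustrative sketch in shell (the path /data/openclaw.json is an assumption based on the /data volume, not a documented location):

```shell
#!/bin/sh
# Illustrative only: /setup is served only while no config exists yet.
CONFIG="/data/openclaw.json"   # assumed location on the persistent volume

if [ -f "$CONFIG" ]; then
  echo "setup locked: config exists (wipe it from /admin to re-enable /setup)"
else
  echo "setup available"
fi
```

This is why the wizard can safely stay unauthenticated: after the first successful run it has nothing left to serve.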

OpenClaw setup wizard

Step 2: Access the Admin Dashboard at /admin

Log in with your WRAPPER_ADMIN_PASSWORD. This is your control panel for:

  • 📊 Status: real-time gateway health, uptime, and quick actions (restart/stop)
  • 📋 Live Logs: stream OpenClaw gateway logs in the browser, with filtering
  • 💻 Terminal: full PTY terminal inside the container
  • 🔗 Device Pairing: approve or reject browser pairing requests in real time
  • ⚙️ Config Editor: view and edit openclaw.json with hot-reload support

OpenClaw Admin Page Protection

OpenClaw admin dashboard

OpenClaw admin terminal

Step 3: Connect to the OpenClaw UI

Click "Open OpenClaw UI" in the admin dashboard, then:

  1. On the gateway screen, paste your OPENCLAW_GATEWAY_TOKEN and click Connect

OpenClaw gateway connection screen

  2. Go to Admin → Pairing and approve the incoming device pairing request

OpenClaw admin: approve device pairing

  3. Return to the gateway and click Connect again, and you're in

OpenClaw UI, fully connected

🦙 Using Ollama: Free Local Models, Zero API Cost

This template includes a built-in Ollama service so you can run AI models locally on Railway without paying for any API. Ollama runs as a separate service in the same Railway project, connected to OpenClaw over Railway's private network.
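In practice, that private-network wiring is a single environment variable on the OpenClaw service. A sketch using Railway's variable-reference syntax (the `ollama` service name is an assumption; adjust it to match your project):

```
OLLAMA_BASE_URL=http://${{ollama.RAILWAY_PRIVATE_DOMAIN}}:11434
```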

How it works

  1. The Ollama service boots, pulls your chosen model(s), and listens on the private network
  2. OpenClaw connects via OLLAMA_BASE_URL, pre-filled in the template via Railway's private domain reference
  3. At /setup, select Ollama (local models): the URL is auto-filled and a live model picker fetches available models from your Ollama instance

Variable                 Description
OLLAMA_DEFAULT_MODELS    Models to pull at boot (comma-separated, e.g. llama3.2:1b,qwen2.5-coder:7b)
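At boot, the Ollama service has to turn that comma-separated OLLAMA_DEFAULT_MODELS value into individual pulls. A minimal sketch of the splitting step (the echo stands in for the real ollama pull call; this loop is illustrative, not the template's actual entrypoint):

```shell
#!/bin/sh
# Split OLLAMA_DEFAULT_MODELS on commas and pull each model in turn.
OLLAMA_DEFAULT_MODELS="${OLLAMA_DEFAULT_MODELS:-llama3.2:1b,qwen2.5-coder:7b}"

IFS=','
for model in $OLLAMA_DEFAULT_MODELS; do
  # In a real entrypoint this line would be: ollama pull "$model"
  echo "pulling $model"
done
```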

⚠️ Tool calling required: OpenClaw uses tool/function calling extensively. Not all Ollama models support this. Browse compatible models at ollama.com/library.

⚠️ Railway doesn't currently support GPUs, so local models will be CPU-only and slower. For best results, use Railway's Pro plan.

Ollama RAM requirements

Model size    Minimum RAM    Notes
1B–3B         2 GB           Fast, limited capability
7B            6–8 GB         Best balance for most tasks
13B+          12 GB+         Requires Pro plan

About Hosting OpenClaw 📖

OpenClaw is a fully open-source (MIT), local-first personal AI agent. It runs as a long-lived Node.js gateway that routes messages between chat platforms and AI models.

Key features:

  • 🔌 Multi-channel messaging: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and 20+ more
  • 🤖 Multi-provider AI: Claude, GPT, Gemini, Groq, OpenRouter, Moonshot, Z.AI, MiniMax, or local models via Ollama
  • 🧠 Autonomous agent: browses the web, manages files, runs commands, schedules tasks via heartbeat daemon
  • 🎨 Live Canvas with A2UI: agent-driven visual workspace
  • 🔒 Self-hosted & private: your data stays on your machine
  • 📱 Companion apps for macOS, iOS, and Android

Why Deploy OpenClaw on Railway ✅

  • 🟢 No Docker, volume, or network setup: Railway handles it all
  • 🦙 Ollama included: run local models at zero extra API cost
  • 🔐 Managed TLS and custom domains out of the box
  • 🔄 One-click redeploys from Git with zero downtime
  • 💾 Persistent volume keeps config and conversations across deploys

Common Use Cases 💡

  • Personal AI assistant: a 24/7 agent on WhatsApp or Telegram that browses, codes, and researches autonomously
  • Automation hub: schedule recurring tasks via heartbeat, such as daily summaries, monitoring, and data pipelines
  • Privacy-first AI: run everything on local Ollama models with zero data leaving your Railway project

Dependencies for OpenClaw 📦

  • OpenClaw: npm install -g openclaw@${OPENCLAW_VERSION} (GitHub)
  • Ollama: separate Railway service for local inference (ollama.com)
  • node-pty: native PTY for the admin terminal (compiled at Docker build time)


🐳 Self-Hosting Outside Railway

git clone https://github.com/praveen-ks-2001/openclaw-railway
cd openclaw-railway
docker build -t openclaw-railway .
docker run -d \
  --name openclaw \
  -p 3000:3000 \
  -e PORT=3000 \
  -e OPENCLAW_GATEWAY_TOKEN=your-secret-token \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v ./data:/data \
  openclaw-railway

For local Ollama, run ollama serve on your host and set OLLAMA_BASE_URL=http://host.docker.internal:11434.
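If you'd rather run both services together outside Railway, the same wiring can be expressed with Docker Compose. A minimal sketch mirroring the docker run command above (service names, volume names, and the use of the official ollama/ollama image are illustrative choices, not part of this template):

```yaml
services:
  openclaw:
    build: .                                 # the openclaw-railway Dockerfile from this repo
    ports:
      - "3000:3000"
    environment:
      PORT: "3000"
      OPENCLAW_GATEWAY_TOKEN: your-secret-token
      OLLAMA_BASE_URL: http://ollama:11434   # reach Ollama by its service name
    volumes:
      - ./data:/data                         # persistent OpenClaw config and state
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama
    volumes:
      - ollama-models:/root/.ollama          # model cache, mirrors the Railway volume

volumes:
  ollama-models:
```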

💰 Pricing

OpenClaw is 100% free and open-source (MIT). No subscriptions or per-user fees; on Railway, you only pay for compute.

  • Cloud providers (Anthropic, OpenAI, etc.): ~$5–30/month in API costs depending on usage
  • Ollama (local models): zero API cost; you only pay for the Railway compute running Ollama (~$5–15/month)

🆚 OpenClaw vs Cursor vs Claude Code

Feature                  OpenClaw               Cursor           Claude Code
Open Source              ✅ MIT                 ❌               ❌
Self-Hostable            ✅                     ❌               ❌
Multi-Channel Chat       ✅ 20+ platforms       ❌ IDE only      ❌ CLI only
Local Models (Ollama)    ✅ Built-in            ❌               ❌
Autonomous Agent         ✅ Heartbeat daemon    ⚠️ Limited       ⚠️ Limited
Pricing                  Free + API costs       $20–200/month    API costs

❓ FAQ

What is OpenClaw? An open-source, self-hosted AI agent that connects 20+ messaging platforms to models like Claude, GPT, Gemini, and local Ollama models, all running on your own hardware.

Can I use cloud AI providers instead of Ollama? Yes. The setup wizard supports Anthropic, OpenAI, Gemini, Groq, OpenRouter, Moonshot, Z.AI, and MiniMax out of the box.

Is it safe to expose OpenClaw to the public internet? The template uses token-based auth (OPENCLAW_GATEWAY_TOKEN), admin password protection, and explicit device pairing approval. Review the OpenClaw security docs before deploying.

How do I update OpenClaw? Set OPENCLAW_VERSION in Railway (e.g. 2026.3.24) and redeploy. Set it to latest to always pull the newest release.

How do I add more Ollama models after initial setup? Update OLLAMA_DEFAULT_MODELS on the Ollama service and redeploy. You can also pull models manually via the Admin → Terminal panel.

The Ollama model dropdown shows nothing; what should I do? Model pulling can take a few minutes on first boot. Wait for the Ollama service to finish, then click Refresh in the setup wizard.
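To see which models Ollama has actually finished pulling, you can query its standard model-listing endpoint (GET /api/tags) from the Admin → Terminal. A small sketch that extracts model names from the JSON response (the sample response below stands in for the live call; in practice you would pipe curl output):

```shell
#!/bin/sh
# In the admin terminal you would run:
#   curl -s "$OLLAMA_BASE_URL/api/tags" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
# Below, a sample /api/tags-shaped response stands in for the live call.
response='{"models":[{"name":"llama3.2:1b"},{"name":"qwen2.5-coder:7b"}]}'

printf '%s\n' "$response" \
  | grep -o '"name":"[^"]*"' \
  | cut -d'"' -f4
```

If the list is empty, the initial pull is still in progress; if it lists your models, the setup wizard's Refresh button should pick them up.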

