Deploy OpenClaw + Ollama on Railway | Self-Hosted Personal AI Assistant
Self-host OpenClaw, optionally with local LLM models via Ollama. Connects to 20+ chat platforms.
Services in this template:
- OpenClaw (persistent volume at /data)
- Ollama (persistent volume at /root/.ollama)
Deploy and Host OpenClaw
Deploy OpenClaw, the open-source personal AI assistant, on Railway with a single click. OpenClaw is a self-hosted agent runtime that connects your favorite chat apps (WhatsApp, Telegram, Discord, Slack, iMessage, and 20+ more) to AI models like Claude, GPT, Gemini, or fully local models via Ollama, letting an AI agent browse the web, manage files, run commands, and work autonomously on your behalf.
Self-host OpenClaw on Railway with this template and get a fully configured gateway, browser-based setup wizard, admin dashboard with live terminal, and persistent storage, with no CLI or SSH access needed.
Getting Started with OpenClaw on Railway
Once your Railway deploy is live, open your service URL and you'll be redirected to the /setup wizard automatically. Pick your AI provider (Anthropic, OpenAI, Gemini, Groq, OpenRouter, or Ollama for free local models), configure your connection, and optionally add messaging channels. Click Launch OpenClaw and the gateway starts within seconds.
Step 1: Initial Setup via /setup
The /setup page is a one-time configuration wizard for selecting your AI provider, pasting your API key, and wiring up messaging channels (Telegram, Discord, Slack, etc.).
Once setup is complete, /setup cannot be used again without first wiping the config from /admin; it's an open URL by design, so it only works when no config exists yet.
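As a quick sanity check, you can probe the wizard from your own machine. This is a hypothetical snippet: `APP_URL` is a placeholder standing in for your Railway service URL, not a variable the template defines.

```shell
# Hypothetical check: APP_URL is a placeholder for your Railway service URL.
# While no config exists, /setup serves the wizard; after setup completes,
# the wizard should no longer be usable.
APP_URL="${APP_URL:-https://your-service.up.railway.app}"
curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 "$APP_URL/setup" \
  || echo "service not reachable at $APP_URL"
```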

Step 2: Access the Admin Dashboard at /admin
Log in with your WRAPPER_ADMIN_PASSWORD. This is your control panel for:
- Status: real-time gateway health, uptime, and quick actions (restart/stop)
- Live Logs: stream OpenClaw gateway logs in the browser, with filtering
- Terminal: full PTY terminal inside the container
- Device Pairing: approve or reject browser pairing requests in real time
- Config Editor: view and edit openclaw.json with hot-reload support
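The exact schema of openclaw.json depends on your OpenClaw version; the sketch below is purely illustrative (every field name in it is an assumption, not the real schema) and only shows the kind of settings the Config Editor manages. Consult the editor in /admin for the actual file.

```shell
# Illustrative sketch only: these field names are assumptions, not the real
# openclaw.json schema. Written to /tmp so nothing live is touched.
cat > /tmp/openclaw.json <<'EOF'
{
  "provider": "ollama",
  "model": "llama3.2:1b",
  "channels": ["telegram"]
}
EOF
grep -q '"provider"' /tmp/openclaw.json && echo "wrote sketch config"
```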



Step 3: Connect to the OpenClaw UI
Click "Open OpenClaw UI" in the admin dashboard, then:
- On the gateway screen, paste your OPENCLAW_GATEWAY_TOKEN and click Connect
- Go to Admin → Pairing and approve the incoming device pairing request
- Return to the gateway and click Connect again; you're in
Using Ollama: Free Local Models, Zero API Cost
This template includes a built-in Ollama service so you can run AI models locally on Railway without paying for any API. Ollama runs as a separate service in the same Railway project, connected to OpenClaw over Railway's private network.
How it works
- The Ollama service boots, pulls your chosen model(s), and listens on the private network
- OpenClaw connects via OLLAMA_BASE_URL, pre-filled in the template via Railway's private domain reference
- At /setup, select Ollama (local models); the URL is auto-filled and a live model picker fetches available models from your Ollama instance
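Because Ollama exposes a plain HTTP API, you can also inspect the service directly, for example from the admin terminal. `/api/tags` and `/api/pull` are standard Ollama endpoints; port 11434 is Ollama's default and is used below only as a fallback assumption when OLLAMA_BASE_URL is unset.

```shell
# OLLAMA_BASE_URL is pre-filled by the template; fall back to Ollama's
# default local address (an assumption) if it is unset.
OLLAMA_BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"

# List the models the instance has already pulled (standard Ollama endpoint):
curl -s --max-time 5 "$OLLAMA_BASE_URL/api/tags" || echo "Ollama not reachable"

# Pull an extra model at runtime without redeploying:
curl -s --max-time 5 "$OLLAMA_BASE_URL/api/pull" -d '{"name": "llama3.2:1b"}' \
  || echo "Ollama not reachable"
```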
| Variable | Description |
|---|---|
| OLLAMA_DEFAULT_MODELS | Models to pull at boot (comma-separated, e.g. `llama3.2:1b,qwen2.5-coder:7b`) |
⚠️ Tool calling required: OpenClaw uses tool/function calling extensively. Not all Ollama models support this. Browse compatible models at ollama.com/library.
⚠️ Railway doesn't currently support GPUs, so local models will be CPU-only and slower. For best results, use Railway's Pro plan.
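One way to verify tool-calling support before wiring a model into OpenClaw is to send Ollama's /api/chat endpoint a request containing a dummy tool definition; models without tool support typically reject such requests. The model name and the `get_time` tool below are just examples, not anything the template defines.

```shell
# Build a minimal /api/chat request with a dummy tool definition.
# "get_time" is a made-up example tool; swap in any model you have pulled.
BODY='{
  "model": "qwen2.5-coder:7b",
  "messages": [{"role": "user", "content": "What time is it in UTC?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_time",
      "description": "Return the current UTC time",
      "parameters": {"type": "object", "properties": {}}
    }
  }],
  "stream": false
}'
echo "$BODY"
# Uncomment once the Ollama service is up:
# curl -s "${OLLAMA_BASE_URL:-http://localhost:11434}/api/chat" -d "$BODY"
```

A tool-capable model answers with a `tool_calls` entry in the response message; others return an error.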
Ollama RAM requirements
| Model size | Minimum RAM | Notes |
|---|---|---|
| 1B–3B | 2 GB | Fast, limited capability |
| 7B | 6–8 GB | Best balance for most tasks |
| 13B+ | 12 GB+ | Requires Pro plan |
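The figures above can be sanity-checked with a rough rule of thumb (an approximation, not an official formula): a 4-bit-quantized model needs about 0.6 GB per billion parameters, plus roughly 1.5 GB of runtime overhead, and you should leave extra headroom on top of that.

```shell
# Rough estimate only: ~0.6 GB per billion parameters for a Q4-quantized
# model, plus ~1.5 GB of runtime overhead. Example for a 7B model:
PARAMS_B=7
awk -v p="$PARAMS_B" 'BEGIN { printf "approx RAM: %.1f GB\n", p * 0.6 + 1.5 }'
```

For a 7B model this prints roughly 5.7 GB, consistent with the table's 6–8 GB recommendation once headroom is included.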
About Hosting OpenClaw
OpenClaw is a fully open-source (MIT), local-first personal AI agent. It runs as a long-lived Node.js gateway that routes messages between chat platforms and AI models.
Key features:
- Multi-channel messaging: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and 20+ more
- Multi-provider AI: Claude, GPT, Gemini, Groq, OpenRouter, Moonshot, Z.AI, MiniMax, or local models via Ollama