Deploy and Host Open WebUI Pro Stack on Railway

Open WebUI Pro Stack is a production-ready, self-hosted AI workspace that gives you a private ChatGPT-like interface connected to any LLM provider. Powered by Open WebUI, LiteLLM, Postgres, and Redis — fully yours, zero vendor lock-in, one-click deploy.

About Hosting Open WebUI Pro Stack

This template deploys a complete private AI workspace on Railway across four services: Open WebUI as the chat frontend, LiteLLM as the multi-provider proxy, Postgres for persistent storage, and Redis for WebSocket session management. All services are pre-wired through Railway's internal private network with auto-generated secrets — no manual networking required. After deploy, log into LiteLLM's admin UI to add your API keys (OpenAI, Anthropic, Groq, Gemini, or any OpenAI-compatible provider), then open Open WebUI and start chatting. The first account created automatically becomes admin. Supports multi-user teams, RAG web search, image generation, and real-time streaming out of the box.
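To illustrate the pre-wiring, here is a minimal sketch of how the services might reference each other using Railway's reference-variable syntax (the same syntax used for Ollama later in this page). The service names and LiteLLM's default port of 4000 are assumptions — check your project's actual variables:

# Open WebUI -> LiteLLM over the private network (service name "LiteLLM" and
# LiteLLM's default port 4000 are assumptions; check your project's variables)
OPENAI_API_BASE_URL=http://${{LiteLLM.RAILWAY_PRIVATE_DOMAIN}}:4000/v1
OPENAI_API_KEY=${{LiteLLM.LITELLM_MASTER_KEY}}

# Both apps point at the same managed Postgres and Redis (service names assumed)
DATABASE_URL=${{Postgres.DATABASE_URL}}
REDIS_URL=${{Redis.REDIS_URL}}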

Common Use Cases

  • Private self-hosted ChatGPT alternative for individuals and teams, with full data ownership
  • Multi-provider AI gateway: route to OpenAI, Anthropic, Groq, Gemini, or local Ollama models from one interface
  • Internal AI workspace for companies that cannot send data to third-party SaaS platforms
  • Staging environment for testing and comparing LLM providers before production rollout
  • Developer sandbox for experimenting with RAG pipelines, agents, and model cost optimization

Dependencies for Open WebUI Pro Stack Hosting

  • Open WebUI for the chat interface, user management, and RAG features
  • LiteLLM proxy for multi-provider model routing, API key management, and usage analytics
  • PostgreSQL 14+ for persistent storage of chats, users, models, and configuration
  • Redis 6+ for WebSocket session management and real-time streaming support

Implementation Details

After deploy, the setup takes under 5 minutes:

  1. Open the LiteLLM service URL and navigate to /ui. Log in with admin and the auto-generated UI_PASSWORD from your Railway variables.
  2. Go to Models → Add Model, select your provider (OpenAI, Anthropic, Groq, etc.), enter your API key, and save. The model is immediately available. This step can also be scripted; see the sketch after this list.
  3. Open the Open WebUI service URL. Create your account — the first signup is automatically promoted to admin.
  4. Select your model from the dropdown and start chatting.
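If you prefer to script step 2 rather than click through the admin UI, LiteLLM also exposes a model-management API. A hedged sketch — the model name and provider key are placeholders, and the model only persists if STORE_MODEL_IN_DB is True:

# Hypothetical example: register a model via the API instead of the UI
curl https://<your-litellm-domain>.railway.app/model/new \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model_name": "claude-haiku-4-5",
    "litellm_params": {
      "model": "anthropic/claude-haiku-4-5",
      "api_key": "sk-ant-your-provider-key"
    }
  }'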

Environment Variables — LiteLLM

  • PORT: Port LiteLLM listens on. Open WebUI connects to this internally via Railway's private network.
  • LITELLM_MASTER_KEY: Master API key that authenticates all requests to LiteLLM. Open WebUI uses this key automatically via OPENAI_API_KEY. Keep it secret; it grants full proxy access.
  • LITELLM_SALT_KEY: Fixed encryption salt used to encrypt API keys stored in Postgres. Never change this after first deploy; doing so makes all stored keys unreadable.
  • UI_USERNAME / UI_PASSWORD: Credentials for the LiteLLM admin web UI at /ui. Used to add models, manage virtual keys, and monitor usage.
  • DATABASE_URL: Postgres connection string. LiteLLM stores model configs, virtual API keys, and usage logs here.
  • REDIS_URL: Redis connection string used by LiteLLM for request caching and rate limiting.
  • STORE_MODEL_IN_DB: When True, models added via the UI are persisted in Postgres and survive restarts. Without this, models reset on every redeploy.
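Because the master key grants full proxy access, a common pattern is to hand out scoped virtual keys instead of sharing it. A sketch using LiteLLM's key-generation endpoint (the model list and duration are illustrative):

# Mint a scoped virtual key instead of sharing LITELLM_MASTER_KEY
curl https://<your-litellm-domain>.railway.app/key/generate \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"models": ["claude-haiku-4-5"], "duration": "30d"}'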

Environment Variables — Open WebUI

  • PORT: Port Open WebUI's web server listens on. Exposed publicly via Railway's generated domain.
  • WEBUI_SECRET_KEY: Secret key used to sign user sessions and cookies. Never change after first deploy; all active sessions would be invalidated.
  • DATABASE_URL: Postgres connection string. Open WebUI stores users, chat history, documents, and settings here.
  • OPENAI_API_BASE_URL: Points Open WebUI to LiteLLM instead of OpenAI directly, using Railway's private domain. This is the core of the proxy architecture.
  • OPENAI_API_KEY: Set to LiteLLM's master key. Open WebUI uses this to authenticate against the LiteLLM proxy.
  • REDIS_URL: Redis connection string. Required for WebSocket session sharing across instances and real-time streaming stability.
  • WEBSOCKET_MANAGER: Set to redis to delegate WebSocket state to Redis. Required for reliable streaming in containerized environments.
  • WEBUI_AUTH: Enables login authentication. Set to false only for fully private single-user setups.
  • ENABLE_SIGNUP: Allows new users to self-register. Set to false once your team accounts are created to lock down access.
  • ENABLE_OLLAMA_API: Disabled by default. Open WebUI would otherwise probe localhost:11434 on startup, causing unnecessary errors. Enable and set OLLAMA_BASE_URL if you add an Ollama service.
  • ENABLE_WEBSOCKET_SUPPORT: Enables real-time streaming responses. Required for a smooth chat experience.
  • ENABLE_RAG_WEB_SEARCH: Allows the AI to search the web in context when answering questions.
  • ENABLE_IMAGE_GENERATION: Enables image generation features. Requires connecting a compatible backend (e.g. AUTOMATIC1111) via Open WebUI settings post-deploy.
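Putting the toggles together, a locked-down multi-user deployment might look like the following sketch (values are illustrative; adjust to your team's needs):

# Example hardened settings for a small team (values illustrative)
WEBUI_AUTH=true                  # require login
ENABLE_SIGNUP=false              # flip to false once all team accounts exist
ENABLE_WEBSOCKET_SUPPORT=true    # real-time streaming
WEBSOCKET_MANAGER=redis          # WebSocket state delegated to Redis
ENABLE_OLLAMA_API=false          # no Ollama service deployed
ENABLE_RAG_WEB_SEARCH=true      # allow in-context web search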

Verifying the stack end-to-end

Once LiteLLM has at least one model configured, verify the full chain with:

# 1. Check LiteLLM is healthy (replace <your-litellm-domain> with the
#    LiteLLM service's public domain from Railway)
curl https://<your-litellm-domain>.railway.app/health \
  -H "Authorization: Bearer sk-litellm-xxxxxxxxxxxx"

# 2. List available models (replace the bearer token with your
#    LITELLM_MASTER_KEY from Railway variables)
curl https://<your-litellm-domain>.railway.app/v1/models \
  -H "Authorization: Bearer sk-litellm-xxxxxxxxxxxx"

# 3. Send a test message through the full proxy chain
curl https://<your-litellm-domain>.railway.app/v1/chat/completions \
  -H "Authorization: Bearer sk-litellm-xxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-haiku-4-5",
    "messages": [
      {"role": "user", "content": "Reply with exactly: stack is working"}
    ]
  }'

A successful response confirms LiteLLM is routing correctly to your provider. Open WebUI talks to the same LiteLLM proxy over Railway's private network, so if the curl works, the chat UI will work too.

Security notes:

  • Set ENABLE_SIGNUP=false in Open WebUI once your users are created to prevent unauthorized registrations.
  • LiteLLM's /ui endpoint exposes your model and key management. Remove its public Railway domain after initial setup so it remains reachable only over the project's private network.
  • Both WEBUI_SECRET_KEY and LITELLM_SALT_KEY are generated once at deploy and must never be rotated in production: changing WEBUI_SECRET_KEY invalidates all active sessions, and changing LITELLM_SALT_KEY makes the API keys stored in Postgres unreadable.
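If you ever recreate the stack and need to supply these secrets yourself, generating them once up front and storing them safely avoids accidental rotation. A minimal sketch using openssl:

# Generate stable secrets once, then store them as Railway variables
openssl rand -hex 32   # e.g. for WEBUI_SECRET_KEY
openssl rand -hex 32   # e.g. for LITELLM_SALT_KEY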

Adding Ollama: Deploy an Ollama service in the same Railway project, then in Open WebUI set ENABLE_OLLAMA_API=true and OLLAMA_BASE_URL=http://${{Ollama.RAILWAY_PRIVATE_DOMAIN}}:11434. Models can then be pulled directly from the Open WebUI admin interface.
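Once the Ollama service is wired in, models can also be pulled against Ollama's HTTP API from another service inside the same Railway project. A sketch — the private domain is a placeholder and is not reachable from the public internet, and the model name is illustrative:

# Pull a model through Ollama's API, then confirm it is available
curl http://<ollama-private-domain>:11434/api/pull -d '{"name": "llama3.2"}'
curl http://<ollama-private-domain>:11434/api/tags    # list pulled models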

Why Deploy Open WebUI Pro Stack on Railway?

Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while letting you scale it both vertically and horizontally.

By deploying Open WebUI Pro Stack on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

