
Deploy Open WebUI

User-friendly ChatGPT UI alternative designed to operate offline.


Deploy and Host Open WebUI on Railway

Open WebUI (formerly Ollama WebUI) is an extensible, self-hosted web interface for interacting with large language models. It supports Ollama and any OpenAI-compatible API, giving you a polished ChatGPT-like experience on infrastructure you control, with the interface, user accounts, and conversation history all living in your own deployment.

About Hosting Open WebUI

Open WebUI is a single Docker container that serves a full-featured chat interface backed by a SQLite database. This Railway template deploys the container with a persistent volume for storing conversations, uploaded documents, user accounts, and settings across redeploys. On first visit, you create an admin account — the first user registered automatically receives admin privileges. Connect it to OpenAI-compatible APIs via environment variables or configure providers directly in the UI after login. Ollama is not bundled — if you want local model inference, deploy Ollama as a separate Railway service and point OLLAMA_BASE_URL at it.
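
For example, if you run Ollama as a second service in the same Railway project, you can point Open WebUI at it over Railway's private network. The service name and port below are assumptions (Ollama listens on 11434 by default); substitute the name of your own Ollama service:

OLLAMA_BASE_URL=http://ollama.railway.internal:11434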

Common Use Cases

  • Private ChatGPT alternative — run a self-hosted chat interface over OpenAI, Anthropic, or any OpenAI-compatible API (Groq, Mistral, Together, Ollama, etc.) without conversations being logged by a third-party UI provider
  • Team LLM gateway — give a team a shared, access-controlled interface to multiple AI providers and models, with per-user accounts, conversation history, and the ability to share prompt presets across members
  • RAG and document Q&A — upload PDFs, text files, and other documents to the built-in knowledge base and query them in context alongside your chosen model, without external vector database setup

Dependencies for Open WebUI Hosting

  • Open WebUI Docker image — served directly by this template
  • Persistent volume — stores the SQLite database, uploaded documents, user data, and settings; mounted at /app/backend/data
  • An OpenAI-compatible API key or a running Ollama instance — required to actually use the chat interface after deployment
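
For reference, the template is roughly equivalent to running the published image with the data directory on a volume. This is a local sketch rather than the template's exact configuration; the port mapping and volume name are illustrative:

docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main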


Implementation Details

The template exposes the following key environment variables:

OLLAMA_BASE_URL=        # URL of your Ollama instance if using local models; leave empty for API-only use
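
Depending on how you connect providers, a few other upstream Open WebUI variables are commonly set alongside it. The names below come from Open WebUI's standard configuration and may change between releases, so verify them against the docs for your version:

OPENAI_API_BASE_URL=    # base URL of an OpenAI-compatible API (defaults to https://api.openai.com/v1)
OPENAI_API_KEY=         # API key for that provider
WEBUI_SECRET_KEY=       # secret used to sign session tokens; keep it stable so logins survive redeploys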

First login — the first account created on a fresh deployment is automatically granted admin privileges. Subsequent accounts require admin approval (configurable). If you need to reset admin access, you can do so by accessing the database volume directly.
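
One way to do that, assuming the current Open WebUI schema (a user table with a role column in webui.db; this may differ in your release, and the email below is a placeholder), is to update the role directly, for example from a shell attached to the service:

sqlite3 /app/backend/data/webui.db \
  "UPDATE user SET role='admin' WHERE email='you@example.com';"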

Adding providers — after login, go to Settings → Connections to add or modify API keys and base URLs for OpenAI, Ollama, or any other OpenAI-compatible provider. Models from all configured providers appear in the model selector automatically.
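
If you are unsure whether a provider's base URL is OpenAI-compatible, a quick check is to list its models endpoint before adding it in the UI. The Groq URL below is only an illustration; substitute your provider's base URL and key:

curl -H "Authorization: Bearer $API_KEY" https://api.groq.com/openai/v1/models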

RAG setup — upload documents via the Documents section in the sidebar. Uploaded files are chunked and stored locally; no external vector database is required for basic RAG.
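
In current releases, an uploaded document can then be pulled into a conversation by typing # in the message box and choosing the file from the list that appears, so a prompt like the following (the file name is hypothetical) is answered against that document's chunks:

#quarterly-report.pdf What are the key risks discussed in this document?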

Why Deploy Open WebUI on Railway?

Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.

By deploying Open WebUI on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

