Deploy and Host OpenWebUI - Self-Hosted AI Chat Interface

Deploy your own ChatGPT-like interface with support for multiple LLM providers (OpenAI, Anthropic, Ollama, Google, and more).

About Hosting

OpenWebUI is a feature-rich, self-hosted web interface for various Large Language Models. This deployment creates a fully functional AI chat application with persistent storage for conversations, user management, and customizable settings. The application runs in a Docker container with a SQLite database for data persistence.
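
Because all state lives in that SQLite file on the persistent volume, backups are straightforward. A minimal sketch using Python's standard library; the `webui.db` filename is an assumption, so verify the actual name under `/app/backend/data` on your volume:

```python
import sqlite3
from pathlib import Path

# Assumed location of OpenWebUI's database inside the container;
# check /app/backend/data on your volume for the real filename.
DB_PATH = Path("/app/backend/data/webui.db")

def backup_sqlite(src: Path, dest: Path) -> None:
    """Safely copy a (possibly live) SQLite database via the online backup API."""
    source = sqlite3.connect(src)
    target = sqlite3.connect(dest)
    try:
        # Takes a consistent snapshot even while the app is writing.
        source.backup(target)
    finally:
        source.close()
        target.close()

# Usage (inside the container or against a mounted volume):
# backup_sqlite(DB_PATH, Path("/tmp/webui-backup.db"))
```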

Why Deploy

  • Privacy: Keep your conversations and data on your own infrastructure
  • Cost Control: Use your own API keys, avoid platform markups
  • Customization: Full control over models, prompts, and interface
  • Multi-Provider: Switch between OpenAI, Anthropic, Google, and local models
  • Team Collaboration: Built-in user management and conversation sharing
  • No Vendor Lock-in: Export your data anytime, switch providers freely

Common Use Cases

  • Personal AI Assistant: Private ChatGPT alternative with your API keys
  • Team Workspace: Shared AI chat interface for organizations
  • Development Environment: Test and compare different LLM models
  • Educational Platform: Provide students with controlled AI access
  • Customer Support: Internal tool for support teams with AI assistance
  • Content Creation Hub: Centralized platform for AI-assisted writing
  • Research Tool: Document analysis and multi-model comparison

Dependencies for OpenWebUI

Deployment Dependencies

  • Docker Image: ghcr.io/open-webui/open-webui:main
  • Port: 8080 (HTTP)
  • Persistent Volume: /app/backend/data (5GB+ recommended)
  • RAM: 512MB minimum, 1GB+ recommended
  • CPU: 0.5 vCPU minimum, 1 vCPU recommended
  • Environment Variables:
    • PORT=8080 (Railway port binding)
    • WEBUI_SECRET_KEY (JWT secret, auto-generated if not provided)
    • OPENAI_API_KEY (Optional: for OpenAI models)
    • ANTHROPIC_API_KEY (Optional: for Claude models)
    • OLLAMA_BASE_URL (Optional: for local Ollama instance)
    • WEBUI_AUTH=true (Enable authentication, recommended)
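
On Railway you set these in the service's Variables tab. For local testing, a roughly equivalent docker-compose sketch might look like the following; every value below is a placeholder, not a default shipped by the image:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"
    environment:
      PORT: "8080"
      WEBUI_AUTH: "true"
      WEBUI_SECRET_KEY: "change-me"            # JWT secret; use a strong random value
      OPENAI_API_KEY: "sk-..."                 # optional
      OLLAMA_BASE_URL: "http://ollama:11434"   # optional, if running Ollama
    volumes:
      - open-webui-data:/app/backend/data      # persists conversations and settings
volumes:
  open-webui-data:
```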

Runtime Dependencies

  • SQLite: Embedded database (included in container)
  • Python 3.11: Runtime environment (included in container)
  • Node.js: Frontend build (pre-built in container)

Optional Integrations

  • LLM Providers: At least one API key (OpenAI, Anthropic, etc.) or Ollama endpoint
  • External Storage: Can be configured for file uploads
  • OAuth Providers: Google, GitHub, etc. (optional for SSO)

Quick Start

  1. Click "Deploy on Railway"
  2. Set your preferred LLM API keys in environment variables
  3. Wait for deployment (typically 2-3 minutes)
  4. Access your OpenWebUI instance via the provided Railway URL
  5. Create your admin account on first visit
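
Once deployed, you can talk to your instance programmatically as well as through the browser. A minimal sketch against OpenWebUI's OpenAI-compatible chat endpoint, using only the standard library; the URL, API key, and model name are placeholders, and the `/api/chat/completions` path is an assumption to verify against the OpenWebUI API docs for your version:

```python
import json
import urllib.request

# Placeholders: substitute your Railway URL and an API key generated
# in OpenWebUI (Settings -> Account).
BASE_URL = "https://your-app.up.railway.app"
API_KEY = "sk-..."

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for OpenWebUI's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Usage: send with urllib.request.urlopen(build_chat_request("your-model", "Hello"))
```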

Post-Deployment

  • The first account created on first visit becomes the admin account
  • Configure models in Settings → Models
  • Add team members in Settings → Users
  • Customize interface in Settings → Interface

#Deploy #Docker #AI #ChatGPT #OpenSource #LLM #SelfHosted #Privacy

