Deploy LobeChat | Multi-Model AI/LLM Support
Self-host LobeChat: private AI chat with 40+ providers, RAG, plugins, and more.

Deploy and Host LobeChat on Railway
Deploy LobeChat on Railway to run your own private AI chat platform with multi-model support, knowledge bases, and plugin extensibility. Self-host LobeChat with full control over your data and AI provider keys — no vendor lock-in.
This Railway template deploys a production-ready LobeChat stack with PostgreSQL (via ParadeDB with pgvector), Redis for caching and sessions, and RustFS for S3-compatible file storage. All services are pre-configured with secure inter-service networking and persistent volumes.
Getting Started with LobeChat on Railway
After Railway finishes deploying all five services, open your LobeChat URL to reach the sign-in page. LobeChat 2.0 uses Better Auth — register with an email and password to create your first account. Once logged in, navigate to Settings > Model Provider and add your API key for at least one provider (OpenAI, Anthropic Claude, Google Gemini, or any of the 40+ supported providers). Send your first message to verify the connection works.
File uploads and the knowledge base work out of the box because RustFS storage is pre-configured. Upload a document in any chat to test the RAG pipeline, then explore the Plugin Store and Agent Marketplace to extend functionality beyond basic chat.
About Hosting LobeChat
LobeChat is an open-source AI chat framework with 70,000+ GitHub stars, built with Next.js. It serves as a private alternative to ChatGPT with full multi-model support.
- Multi-Provider AI: Connect 40+ providers — OpenAI, Claude, Gemini, DeepSeek, Ollama, AWS Bedrock, Azure, Mistral, Groq, and more
- Knowledge Base (RAG): Upload documents, build searchable knowledge bases with pgvector-powered retrieval
- Plugin Ecosystem: 10,000+ skills via Function Calling, MCP marketplace integration
- Multi-Modal: Vision (image recognition), text-to-speech, speech-to-text, image generation
- Better Auth: Built-in authentication with email/password, OAuth, SSO support
- Artifacts & Thinking: Advanced reasoning visualization and interactive code artifacts
Why Deploy LobeChat on Railway
- One-click deployment with PostgreSQL, Redis, and S3 storage pre-configured
- Private networking between services — database never exposed to the internet
- Persistent volumes for database and file storage survive redeploys
- Scale services independently — add more RAM to LobeChat without touching the database
- No Docker knowledge required — Railway handles containers, networking, and TLS
Common Use Cases for Self-Hosted LobeChat
- Team AI Assistant: Deploy a shared ChatGPT alternative with centralized API key management and user accounts
- RAG-Powered Knowledge Base: Upload internal docs and build a searchable AI assistant for your organization
- Multi-Model Testing: Compare responses from GPT-4, Claude, Gemini, and open-source models side-by-side
- Custom Agent Development: Build specialized AI agents with plugins, tools, and MCP integrations for domain-specific workflows
Dependencies for LobeChat on Railway
- LobeChat — lobehub/lobehub — Main application (Next.js 16, port 8080)
- PostgreSQL — paradedb/paradedb:latest-pg17 — Database with pgvector extension for RAG
- Redis — Railway managed — Session storage, caching, rate limiting
- RustFS — rustfs/rustfs:latest — S3-compatible object storage for file uploads
- RustFS Init — alpine:latest — One-shot bucket initialization using minio/mc
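The one-shot init service's job can be sketched as a small script that registers the RustFS endpoint with the MinIO client and creates the bucket. This is an assumed command sequence for illustration; the template's actual init script may differ:

```shell
# Sketch of the bucket-initialization step (assumed; uses the minio/mc client).
# S3_ENDPOINT, RUSTFS_ACCESS_KEY, and RUSTFS_SECRET_KEY come from the environment.
cat > init-bucket.sh <<'EOF'
#!/bin/sh
# Register the RustFS endpoint under the alias "rustfs"
mc alias set rustfs "$S3_ENDPOINT" "$RUSTFS_ACCESS_KEY" "$RUSTFS_SECRET_KEY"
# Create the "lobe" bucket; succeed silently if it already exists
mc mb --ignore-existing rustfs/lobe
EOF
chmod +x init-bucket.sh
```

Because `mc mb --ignore-existing` is idempotent, rerunning the init service after a redeploy is harmless.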
Environment Variables Reference for LobeChat
| Variable | Service | Description |
|---|---|---|
| DATABASE_URL | LobeChat | PostgreSQL connection string |
| REDIS_URL | LobeChat | Redis connection for sessions/cache |
| S3_ENDPOINT | LobeChat | RustFS public URL for file storage |
| S3_BUCKET | LobeChat | S3 bucket name (lobe) |
| KEY_VAULTS_SECRET | LobeChat | Encryption key for user API key vaults |
| AUTH_SECRET | LobeChat | Session token signing secret |
| APP_URL | LobeChat | Public-facing application URL |
| POSTGRES_DB | PostgreSQL | Database name |
| POSTGRES_PASSWORD | PostgreSQL | Database password |
| PGDATA | PostgreSQL | Data directory (subdirectory of volume) |
| RUSTFS_ACCESS_KEY | RustFS | S3 access key |
| RUSTFS_SECRET_KEY | RustFS | S3 secret key |
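The two secrets in the table above, KEY_VAULTS_SECRET and AUTH_SECRET, should each be a strong random value. A minimal sketch of generating them locally with openssl (Railway templates typically generate these for you, so this is only needed for manual setups):

```shell
# Generate two independent 32-byte secrets, base64-encoded
KEY_VAULTS_SECRET="$(openssl rand -base64 32)"
AUTH_SECRET="$(openssl rand -base64 32)"
echo "KEY_VAULTS_SECRET=${KEY_VAULTS_SECRET}"
echo "AUTH_SECRET=${AUTH_SECRET}"
```

Never reuse one value for both variables: KEY_VAULTS_SECRET encrypts stored provider API keys, while AUTH_SECRET signs session tokens, and rotating one should not invalidate the other.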
Deployment Dependencies
- Runtime: Node.js 22 (bundled in Docker image)
- Docker Hub: lobehub/lobehub
- GitHub: lobehub/lobe-chat (70k+ stars)
- Docs: lobehub.com/docs/self-hosting
Hardware Requirements for Self-Hosting LobeChat
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2 vCPU |
| RAM | 1 GB (app) + 512 MB (PostgreSQL) | 2 GB (app) + 1 GB (PostgreSQL) |
| Storage | 2 GB (app) + 5 GB (database) | 10 GB (database) + 20 GB (files) |
| Runtime | Docker 20+ or Node.js 20+ | Docker Compose recommended |
RAM scales with concurrent users and knowledge base size. PostgreSQL with pgvector benefits from additional memory for vector index caching.
Self-Hosting LobeChat with Docker Compose
The fastest way to self-host LobeChat is with Docker Compose. Clone the official repository and use the deploy configuration:
```bash
git clone https://github.com/lobehub/lobe-chat.git
cd lobe-chat/docker-compose/deploy
cp .env.example .env
# Edit .env — set KEY_VAULTS_SECRET, AUTH_SECRET, POSTGRES_PASSWORD
bash setup.sh   # generates secrets automatically
docker compose up -d
```
For a minimal single-service setup without persistent storage:
```bash
docker run -d \
  -p 3210:8080 \
  -e KEY_VAULTS_SECRET="$(openssl rand -base64 32)" \
  -e AUTH_SECRET="$(openssl rand -base64 32)" \
  -e DATABASE_URL="postgresql://user:pass@host:5432/lobechat" \
  lobehub/lobehub
```
How Much Does LobeChat Cost to Self-Host?
LobeChat is free and open-source under the Apache 2.0 license. There are no paid tiers for the self-hosted version — you get the full feature set. The only costs are infrastructure (server, database, storage) and AI provider API keys. On Railway, expect approximately $5–15/month for the full stack depending on usage. LobeHub also offers a managed cloud service with credit-based pricing for teams that prefer not to self-host.
LobeChat vs Open WebUI vs LibreChat
| Feature | LobeChat | Open WebUI | LibreChat |
|---|---|---|---|
| License | Apache 2.0 | MIT | MIT |
| AI Providers | 40+ (API-based) | Ollama + OpenAI | 20+ |
| Knowledge Base (RAG) | Built-in with pgvector | Built-in | Plugin-based |
| Plugin System | 10,000+ skills | Pipelines | Plugins |
| Auth | Better Auth (SSO, OAuth) | Built-in | OAuth |
| Local LLMs | Via Ollama integration | Native Ollama | Via Ollama |
| GitHub Stars | 70k+ | 80k+ | 20k+ |
LobeChat excels at multi-provider API-based workflows with the richest plugin ecosystem. Open WebUI is the stronger choice for local LLM-first deployments with Ollama.
FAQ
What is LobeChat and why self-host it? LobeChat is an open-source AI chat framework that lets you build a private ChatGPT-like interface. Self-hosting gives you full control over data privacy, API key management, and the ability to connect any AI provider without third-party intermediaries.
What does this Railway template deploy for LobeChat? This template deploys five services: the LobeChat application (lobehub/lobehub), PostgreSQL with pgvector for database and vector search, Railway-managed Redis for caching, RustFS for S3-compatible file storage, and a one-shot init service that creates the storage bucket.
Why does LobeChat need PostgreSQL with pgvector on Railway? PostgreSQL stores user accounts, chat history, settings, and plugin configurations. The pgvector extension enables the Knowledge Base feature — when you upload documents, LobeChat creates vector embeddings and stores them in PostgreSQL for semantic search (RAG).
How do I add my OpenAI or Claude API key to self-hosted LobeChat? After logging in, go to Settings > Model Provider and enter your API key for each provider. LobeChat encrypts stored keys using the KEY_VAULTS_SECRET environment variable. You can add keys for OpenAI, Anthropic Claude, Google Gemini, and 40+ other providers simultaneously.
Can I use LobeChat with local LLMs like Ollama on Railway? LobeChat supports Ollama as a model provider, but running Ollama requires GPU access which Railway does not currently provide. You can connect LobeChat on Railway to an Ollama instance running on a separate GPU server by setting the Ollama API endpoint in the model provider settings.
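Pointing a LobeChat deployment at a remote Ollama server can also be done server-side via an environment variable rather than the UI. A minimal sketch, assuming OLLAMA_PROXY_URL is the variable LobeChat reads for this (check the LobeChat self-hosting docs for your version) and using a placeholder hostname for your GPU server:

```shell
# Assumed variable name; the hostname is a placeholder, not a real endpoint
export OLLAMA_PROXY_URL="http://ollama.example.com:11434"
echo "Ollama endpoint: ${OLLAMA_PROXY_URL}"
```

On Railway you would set this in the LobeChat service's Variables tab instead of exporting it in a shell.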
How do I enable file uploads and knowledge base in self-hosted LobeChat? This template pre-configures RustFS as S3-compatible storage. File uploads and knowledge base features work out of the box — upload documents in any chat conversation or create dedicated knowledge bases from the sidebar.
Template Content
- PostgreSQL — paradedb/paradedb:latest-pg17
- RustFS Init — alpine:latest
- RustFS — rustfs/rustfs:latest
- LobeChat — lobehub/lobehub
- Redis — redis:8.2.1