Deploy LibreChat | Self-Hosted AI Chat with Multi-Provider Support
1-Click LibreChat deploy (OpenAI, Anthropic, Google, Ollama, RAG API)

Deploy and Host LibreChat on Railway
Deploy a fully self-hosted AI chat platform on Railway in one click. This template provisions LibreChat alongside MongoDB, Meilisearch, PGVector, and a RAG API — everything pre-wired with private networking, secrets auto-generated, and a public URL ready on deploy.

About Hosting LibreChat
LibreChat (github.com/danny-avila/LibreChat) is an open-source, self-hosted alternative to ChatGPT that unifies multiple AI providers — OpenAI, Anthropic, Google, Groq, Mistral, and more — in a single interface. You own the data, control the models, and pay only for the API calls you make.
Key features:
- Multi-provider AI: switch between GPT, Claude, Gemini, and custom endpoints in one UI
- RAG pipeline: chat with uploaded files using retrieval-augmented generation
- Agents & MCP support: build tool-using AI assistants without code
- Code interpreter: execute Python, JS, Go, and more in a sandboxed environment
- Full-text search across conversations via Meilisearch
- Multi-user auth with email login, OAuth2, rate limiting, and moderation
Template architecture:
| Service | Image |
|---|---|
| LibreChat | ghcr.io/danny-avila/librechat-dev:latest |
| RAG API | ghcr.io/danny-avila/librechat-rag-api-dev-lite |
| PGVector | pgvector/pgvector:pg18 |
| MongoDB | mongo |
| Meilisearch | getmeili/meilisearch:v1.11.3 |
All services communicate over Railway's private network — no public exposure of internal ports.
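As an illustration, internal connection strings follow Railway's `<service>.railway.internal` private-domain pattern. The hostnames below are hypothetical; the template wires the real values automatically at deploy time:

```shell
# Illustrative internal connection strings (hypothetical hostnames;
# Railway auto-generates the real values when the template deploys)
MONGO_URI="mongodb://mongodb.railway.internal:27017/LibreChat"
MEILI_HOST="http://meilisearch.railway.internal:7700"
echo "$MONGO_URI $MEILI_HOST"
```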
Why Deploy LibreChat on Railway
Setting up LibreChat manually means configuring five services, writing Docker Compose files, managing secrets, and handling SSL. Railway eliminates all of that:
- Private networking — MongoDB, PGVector, and Meilisearch are never publicly exposed
- Auto-generated secrets — JWT keys, credential keys, and DB passwords are created at deploy time
- Environment variable UI — update API keys without touching config files
- One-click redeploy — pull the latest librechat-dev image with a single click
- Scales with you — upgrade resources per service as your user base grows
Common Use Cases
- Team AI hub — give your engineering or research team a private ChatGPT-style interface with shared access to the latest GPT, Claude, and Gemini models, without sharing API keys
- Document Q&A — use the RAG pipeline to let users upload PDFs and query them with any connected model
- AI experimentation — benchmark responses across OpenAI, Anthropic, and Google models side-by-side in one interface
- Privacy-first deployment — run AI conversations on infrastructure you control, with no data sent to third-party chat services
Dependencies for LibreChat
- OpenAI / Anthropic / Google API keys — set whichever providers you want to enable; others can stay as user_provided
- MongoDB — included in template; stores chat history, users, and configuration
- Meilisearch — included; powers full-text conversation search
- PGVector (PostgreSQL) — included; vector store for the RAG pipeline
- RAG API — included; handles document embeddings and retrieval
Environment Variables Reference
| Variable | Description | Required |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key | For OpenAI models |
| ANTHROPIC_API_KEY | Anthropic Claude API key | For Claude models |
| GOOGLE_KEY | Google Gemini API key | For Gemini models |
| JWT_SECRET | JWT signing secret (auto-generated, 64-char) | Yes |
| CREDS_KEY | Credential encryption key (64-char) | Yes |
| CREDS_IV | Credential encryption IV (32-char) | Yes |
| MONGO_URI | MongoDB connection string (auto-wired) | Yes |
| MEILI_MASTER_KEY | Meilisearch auth key (auto-wired) | Yes |
| RAG_API_URL | RAG service URL (auto-wired via public domain) | Yes |
| ALLOW_REGISTRATION | Enable new user sign-ups | Yes |
| ENDPOINTS | Comma-separated list of enabled AI providers | Yes |
| CONFIG_PATH | URL or path to librechat.yaml config | Optional |
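Railway auto-generates the secret values, but if you ever need to rotate them yourself, the expected lengths can be produced with openssl. This is a sketch, not part of the template:

```shell
# Generate LibreChat-compatible secrets: -hex N emits 2*N hex characters
JWT_SECRET=$(openssl rand -hex 32)   # 64-char JWT signing secret
CREDS_KEY=$(openssl rand -hex 32)    # 64-char credential encryption key
CREDS_IV=$(openssl rand -hex 16)     # 32-char initialization vector
echo "${#JWT_SECRET} ${#CREDS_KEY} ${#CREDS_IV}"   # prints "64 64 32"
```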
Deployment Dependencies
- LibreChat docs: librechat.ai/docs
- GitHub repo: github.com/danny-avila/LibreChat
- RAG API repo: github.com/danny-avila/rag_api
- Minimum runtime: 1 vCPU, 1 GB RAM (2 GB recommended with RAG enabled)
Minimum Hardware Requirements for LibreChat
| Configuration | RAM | CPU |
|---|---|---|
| LibreChat only | 1 GB | 1 vCPU |
| Full stack (with RAG, search) | 2 GB | 2 vCPU |
| Multi-user production | 4 GB+ | 2+ vCPU |
The RAG API and PGVector add meaningful memory overhead. Start with 2 GB RAM if deploying the full template.
Self-Hosting LibreChat (Outside Railway)
To run LibreChat on your own VPS or machine:
```shell
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
cp .env.example .env
# Edit .env — set OPENAI_API_KEY, MONGO_URI, and other keys
docker compose up -d
```
Access it at http://localhost:3080, the default port in the compose file. MongoDB, Meilisearch, and the RAG API are all included in the default docker-compose.yml.
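A minimal .env excerpt for the edit step above might look like this (placeholder values; user_provided defers key entry to each user in the UI):

```shell
# Write an illustrative .env fragment (placeholders only; substitute real keys)
cat > env-snippet <<'EOF'
OPENAI_API_KEY=user_provided
ANTHROPIC_API_KEY=user_provided
ENDPOINTS=openAI,anthropic,google
ALLOW_REGISTRATION=true
EOF
grep -c '=' env-snippet   # 4 settings written
```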
LibreChat vs Competitors
| Feature | LibreChat | OpenWebUI | Jan |
|---|---|---|---|
| Multi-provider (cloud APIs) | ✅ | ✅ | ❌ |
| RAG / file chat | ✅ | ✅ | ✅ |
| Multi-user + auth | ✅ | ✅ | ❌ |
| Agents & MCP | ✅ | Partial | ❌ |
| Self-hosted | ✅ | ✅ | ✅ |
| One-click Railway deploy | ✅ | ✅ | ❌ |
LibreChat vs OpenWebUI: OpenWebUI focuses on local Ollama models. LibreChat is stronger for cloud API providers, multi-user teams, and enterprise auth (OAuth2, LDAP).
Getting Started with LibreChat
After Railway finishes deploying, open the public domain Railway assigns to your LibreChat service. Click Sign Up to create the first admin account — registration is open by default (ALLOW_REGISTRATION=true). Once logged in, go to Settings → API Keys and add your OpenAI, Anthropic, or Google keys. Select a model from the endpoint dropdown and start a conversation. To enable file chat, upload a document — the RAG pipeline is pre-connected and ready.

How Much Does LibreChat Cost?
LibreChat is fully open-source (MIT licence) — free to use, modify, and self-host. There is no SaaS tier or subscription. Your only costs are infrastructure (Railway usage) and the AI API calls you make to providers like OpenAI or Anthropic. On Railway, the full five-service stack typically runs within the Hobby plan depending on usage volume.
FAQ
Is LibreChat really open source?
Yes. LibreChat is MIT-licensed and actively maintained at github.com/danny-avila/LibreChat. You can fork it, modify it, and run it anywhere.
Does LibreChat support multimodal conversations?
Yes — LibreChat supports image uploads, file analysis, and vision-capable models like GPT-4o and Gemini. It also supports code execution and document Q&A via the RAG pipeline.
Can I add my own AI models or custom endpoints?
Yes. Any OpenAI-compatible API (Ollama, OpenRouter, Deepseek, Mistral, etc.) can be added as a custom endpoint via the librechat.yaml config file or the CONFIG_PATH environment variable.
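For example, a librechat.yaml fragment adding an Ollama endpoint might look like this — a sketch based on the documented custom-endpoint schema; adjust the baseURL and model names to your setup:

```yaml
version: 1.2.1
endpoints:
  custom:
    - name: "Ollama"
      apiKey: "ollama"                      # Ollama ignores the key, but the field is required
      baseURL: "http://localhost:11434/v1"  # Ollama's OpenAI-compatible API
      models:
        default: ["llama3.1"]
        fetch: true                         # fetch the model list from the endpoint
```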
How do I disable public registration after setup?
Set ALLOW_REGISTRATION=false in your LibreChat service environment variables on Railway and redeploy.
Why is the RAG API URL using the public domain instead of private?
The RAG_API_URL is set to the Railway public domain because LibreChat's frontend makes direct calls to it from the browser. Internal service-to-service calls use private domains; browser-initiated RAG requests need a public endpoint.
What's the difference between librechat-dev and the stable image?
ghcr.io/danny-avila/librechat-dev:latest tracks the main development branch with the latest features. For production stability, pin to a specific release tag from the GitHub releases page.
Template Content
- pg_vector — pgvector/pgvector:pg18
- MongoDB — mongo
- OPENAI_API_KEY — API key for embeddings provider
- Meilisearch — getmeili/meilisearch:v1.11.3