
Deploy LightRAG | Open Source Knowledge Graph RAG on Railway
Self-host LightRAG. Graph-enhanced retrieval with a Web UI and multiple LLM providers

Deploy and Host LightRAG on Railway
LightRAG is an open-source graph-enhanced RAG framework from HKU Data Science Lab with 34k+ GitHub stars, presented at EMNLP 2025. It extracts entities and relationships from your documents to build a knowledge graph, then combines graph-based retrieval with vector search to deliver more accurate, context-aware answers than traditional chunk-based RAG systems.
This template pre-configures the LightRAG Server (ghcr.io/hkuds/lightrag:latest) with OpenAI LLM and embedding bindings, JWT authentication, API key protection, and a persistent volume for local file-based storage. Upload documents, query them through the built-in Web UI, and visualize your knowledge graph -- all from a single service.
Getting Started with LightRAG on Railway
After deploying, open your Railway public URL to access the LightRAG Web UI. Sign in with the credentials from your AUTH_ACCOUNTS variable (default user admin). Add your OpenAI API key by setting LLM_BINDING_API_KEY and EMBEDDING_BINDING_API_KEY in Railway's Variables tab, then redeploy. Upload your first document through the Web UI or via the REST API:
curl -X POST https://your-lightrag-domain.up.railway.app/documents/upload \
-H "X-API-Key: your-lightrag-api-key" \
-F "[email protected]"
Query your knowledge graph with a RAG request:
curl -X POST https://your-lightrag-domain.up.railway.app/query \
-H "Content-Type: application/json" \
-H "X-API-Key: your-lightrag-api-key" \
-d '{"query": "What are the key relationships between these entities?", "mode": "hybrid"}'

About Hosting LightRAG
LightRAG (MIT license) goes beyond vector-only retrieval by building a knowledge graph from your documents. Its dual-level retrieval system handles both precise entity lookups and broad thematic queries.
- Knowledge graph extraction -- automatically identifies entities and relationships across documents
- Dual-level retrieval -- low-level (specific entities/relationships) and high-level (themes/topics) search modes, plus a hybrid mode combining both
- Incremental updates -- add new documents without rebuilding the entire index
- Web UI with graph visualization -- explore your knowledge graph with gravity layouts, node queries, and subgraph filtering
- Ollama-compatible API -- connect Open WebUI or any Ollama-compatible chat client directly to LightRAG
- 20+ LLM providers -- OpenAI, Ollama, Azure OpenAI, Gemini, Anthropic (via compatible endpoint), AWS Bedrock
- Pluggable storage -- swap from local files to PostgreSQL, MongoDB, Neo4j, Milvus, Qdrant, Redis, Faiss, or OpenSearch
- Reranking support -- Cohere, Jina, and Aliyun rerankers for improved result quality
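Reranking is likewise configured through environment variables. A minimal sketch, assuming a Cohere reranker; the variable names follow the upstream env.example in recent releases and should be confirmed for the image tag you deploy:
# Illustrative values -- confirm variable names against LightRAG's env.example
RERANK_BINDING=cohere
RERANK_MODEL=rerank-v3.5
RERANK_BINDING_API_KEY=your-cohere-key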
Why Deploy LightRAG on Railway
- Single-service deploy with persistent volume -- no external database required
- All configuration via environment variables, no config files to manage
- Swap LLM providers by changing three env vars (binding, host, key)
- Built-in Web UI and API authentication pre-configured
- MIT license with no usage restrictions
Common Use Cases for LightRAG
- Enterprise knowledge base -- ingest internal docs and query with natural language, surfacing entity relationships across departments
- Legal and academic research -- multi-hop reasoning across case law, papers, or regulatory filings where connections between entities matter
- AI-powered document Q&A -- build a chatbot grounded in your documentation with graph-enhanced retrieval for more accurate answers
- Financial analysis -- extract and query relationships between companies, executives, and filings from large document collections
Dependencies for Self-Hosting LightRAG on Railway
This template deploys one service:
- LightRAG -- ghcr.io/hkuds/lightrag:latest -- RAG server with Web UI, REST API, and knowledge graph engine
LightRAG uses local file-based storage by default (NetworkX for graphs, NanoVectorDB for vectors). For production scale, you can optionally add external storage services and point LightRAG at them via LIGHTRAG_KV_STORAGE, LIGHTRAG_VECTOR_STORAGE, and LIGHTRAG_GRAPH_STORAGE environment variables. Supported backends include PostgreSQL, MongoDB, Neo4j, Milvus, Qdrant, Redis, Faiss, Memgraph, and OpenSearch.
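For example, moving the key-value, vector, and document-status stores to PostgreSQL and the graph to Neo4j looks roughly like this. The implementation names (PGKVStorage, Neo4JStorage, etc.) follow the upstream README; verify them for your version and supply the matching connection variables:
# Storage implementation names per the LightRAG README; verify for your version
LIGHTRAG_KV_STORAGE=PGKVStorage
LIGHTRAG_VECTOR_STORAGE=PGVectorStorage
LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
LIGHTRAG_GRAPH_STORAGE=Neo4JStorage
# Plus connection settings, e.g. POSTGRES_HOST, POSTGRES_PASSWORD, NEO4J_URI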
Environment Variables Reference for LightRAG
| Variable | Description | Example |
|---|---|---|
| LLM_BINDING_API_KEY | LLM provider API key | Your OpenAI key |
| EMBEDDING_BINDING_API_KEY | Embedding provider API key | Your OpenAI key |
| AUTH_ACCOUNTS | Web UI login credentials | admin:${{secret(32)}} |
| TOKEN_SECRET | JWT signing key | ${{secret(32)}} |
| LIGHTRAG_API_KEY | REST API authentication key | ${{secret(32)}} |
| LLM_MODEL | LLM model identifier | gpt-4o-mini |
| EMBEDDING_MODEL | Embedding model identifier | text-embedding-3-large |
Deployment Dependencies
- GitHub: HKUDS/LightRAG
- Container Registry: ghcr.io/hkuds/lightrag
- Docs: LightRAG API Server
- Paper: arXiv:2410.05779
Hardware Requirements for Self-Hosting LightRAG
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2 vCPU |
| RAM | 1 GB | 4 GB |
| Storage | 1 GB | 5 GB+ |
| GPU | Not required | Not required |
LightRAG delegates inference to external LLM providers, so no GPU is needed. Memory usage scales with document volume and knowledge graph size. On Railway's 8 GB plan, it runs comfortably even with large document collections.
Self-Hosting LightRAG with Docker
Run LightRAG with Docker using the official GHCR image:
docker run -d --name lightrag -p 9621:9621 \
-e HOST=0.0.0.0 -e PORT=9621 \
-e LLM_BINDING=openai \
-e LLM_BINDING_HOST=https://api.openai.com/v1 \
-e LLM_BINDING_API_KEY=sk-your-key \
-e LLM_MODEL=gpt-4o-mini \
-e EMBEDDING_BINDING=openai \
-e EMBEDDING_BINDING_HOST=https://api.openai.com/v1 \
-e EMBEDDING_BINDING_API_KEY=sk-your-key \
-e EMBEDDING_MODEL=text-embedding-3-large \
-e EMBEDDING_DIM=3072 \
-v lightrag_data:/app/data \
ghcr.io/hkuds/lightrag:latest
Or use Docker Compose for a persistent setup:
services:
  lightrag:
    image: ghcr.io/hkuds/lightrag:latest
    ports:
      - "9621:9621"
    environment:
      HOST: "0.0.0.0"
      PORT: "9621"
      LLM_BINDING: "openai"
      LLM_BINDING_HOST: "https://api.openai.com/v1"
      LLM_BINDING_API_KEY: "sk-your-key"
      LLM_MODEL: "gpt-4o-mini"
      EMBEDDING_BINDING: "openai"
      EMBEDDING_BINDING_HOST: "https://api.openai.com/v1"
      EMBEDDING_BINDING_API_KEY: "sk-your-key"
      EMBEDDING_MODEL: "text-embedding-3-large"
      EMBEDDING_DIM: "3072"
      AUTH_ACCOUNTS: "admin:changeme"
      WORKING_DIR: "/app/data/rag_storage"
      INPUT_DIR: "/app/data/inputs"
    volumes:
      - lightrag_data:/app/data
volumes:
  lightrag_data:
Access the Web UI at http://localhost:9621.
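Once the container is up, a quick smoke test confirms the server is ready before you upload anything; the API server exposes a /health endpoint (see the LightRAG API Server docs) that reports status and the active configuration:
curl http://localhost:9621/health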
Is LightRAG Free to Self-Host?
LightRAG is fully open-source under the MIT license with no usage restrictions or paid tiers. The only cost is your LLM provider API usage (OpenAI, etc.) and infrastructure. Self-hosting on Railway means you pay only for Railway compute and storage plus your LLM API costs. Using a local model via Ollama eliminates API costs entirely.
FAQ: Self-Hosting LightRAG on Railway
What is LightRAG and how does it differ from traditional RAG?
LightRAG is a graph-enhanced RAG framework that extracts entities and relationships from documents to build a knowledge graph. Unlike chunk-based RAG that relies solely on vector similarity, LightRAG combines graph traversal with vector search for more accurate, context-aware retrieval -- especially for multi-hop queries.
What LLM providers can I use with LightRAG on Railway?
LightRAG supports OpenAI, Ollama, Azure OpenAI, Gemini, AWS Bedrock, and any OpenAI-compatible endpoint. Switch providers by changing the LLM_BINDING, LLM_BINDING_HOST, and LLM_BINDING_API_KEY variables; no code or image changes are required.
Does LightRAG on Railway need an external database?
No. This template uses local file-based storage (NetworkX for graphs, NanoVectorDB for vectors) persisted to a Railway volume. For larger deployments, you can optionally connect PostgreSQL, MongoDB, Neo4j, Milvus, or Qdrant by setting the storage backend environment variables.
How do I connect Open WebUI or other chat clients to LightRAG on Railway?
LightRAG provides an Ollama-compatible API endpoint. Point your Ollama-compatible client (Open WebUI, etc.) at your Railway LightRAG URL and it will work as a chat model with RAG capabilities built in.
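As a rough sketch of what such a client does under the hood (the model name and payload shape follow LightRAG's documented Ollama emulation; verify for your version):
curl -X POST https://your-lightrag-domain.up.railway.app/api/chat \
-H "Content-Type: application/json" \
-H "X-API-Key: your-lightrag-api-key" \
-d '{"model": "lightrag:latest", "messages": [{"role": "user", "content": "What entities appear in my documents?"}], "stream": false}'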
Can I use LightRAG with local models instead of OpenAI on Railway?
Yes. Deploy an Ollama service on Railway or elsewhere, set LLM_BINDING=ollama and EMBEDDING_BINDING=ollama, and point the host variables to your Ollama instance. This eliminates external API costs entirely.
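A minimal sketch of that configuration, assuming an Ollama instance at the host below with the named models already pulled (model names and embedding dimension are illustrative):
# Example values -- point these at your own Ollama instance
LLM_BINDING=ollama
LLM_BINDING_HOST=http://your-ollama-host:11434
LLM_MODEL=llama3.1:8b
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://your-ollama-host:11434
EMBEDDING_MODEL=bge-m3
EMBEDDING_DIM=1024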
How does LightRAG handle new documents without rebuilding the index?
LightRAG uses an incremental update algorithm. New documents are processed and their entities and relationships are merged into the existing knowledge graph without rebuilding from scratch, keeping ingestion fast even as your corpus grows.
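For example, appending a new document to a live graph is a single API call; the /documents/text route below follows the LightRAG REST API (field names per the upstream docs):
curl -X POST https://your-lightrag-domain.up.railway.app/documents/text \
-H "Content-Type: application/json" \
-H "X-API-Key: your-lightrag-api-key" \
-d '{"text": "Acme Corp acquired Beta Labs in 2024, expanding its robotics division."}'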
Template Content
LightRAG
ghcr.io/hkuds/lightrag:latest
AUTH_ACCOUNTS
Web UI login credentials. Please provide in format: 'username:password'