Deploy and Host VoidLLM on Railway
VoidLLM is a lightweight, privacy-first LLM proxy for teams. It sits between your applications and LLM providers, giving you organization-wide access control, usage tracking, and key management. One binary, sub-2ms overhead, zero knowledge of your prompts.
About Hosting VoidLLM
VoidLLM deploys as a single Docker container with SQLite for storage, so no external database is required. Two environment variables are auto-generated during deploy, and the proxy is ready as soon as it starts. Open the web UI to create your organization, add upstream LLM providers such as OpenAI, Anthropic, Azure, Ollama, or vLLM, generate scoped API keys, and start proxying. Existing OpenAI SDK code keeps working; only the base URL changes.
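For example, an application that already uses the OpenAI Python SDK only needs to point at the proxy. A minimal sketch, assuming a Railway deployment reachable at voidllm.up.railway.app and a virtual key generated in the VoidLLM UI (the hostname, key, and model name are placeholders):

```python
# Minimal sketch: route existing OpenAI SDK traffic through VoidLLM.
# The hostname, key, and model below are placeholders, not real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://voidllm.up.railway.app/v1",  # your VoidLLM deployment
    api_key="vk-example-scoped-key",               # virtual key from the VoidLLM UI
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # resolved by the proxy to an upstream provider
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(response.choices[0].message.content)
```

The rest of the application code is unchanged; VoidLLM forwards the request to whichever upstream provider is configured for that key.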
Common Use Cases
- Replace shared API keys with scoped virtual keys per team and user (see the sketch after this list)
- Track LLM usage and costs across your organization without logging prompts
- Enforce rate limits and token budgets to prevent runaway spending
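To illustrate the first two points: each team presents its own virtual key against the same proxy endpoint, so usage is attributed per key instead of flowing through one shared provider key. A sketch with hypothetical key values and a placeholder deployment URL:

```python
# Sketch: two teams share one VoidLLM endpoint but authenticate with
# separate scoped virtual keys (all values below are hypothetical).
from openai import OpenAI

PROXY_URL = "https://voidllm.up.railway.app/v1"  # placeholder deployment URL

backend_client = OpenAI(base_url=PROXY_URL, api_key="vk-team-backend")
research_client = OpenAI(base_url=PROXY_URL, api_key="vk-team-research")

# Both calls reach the same upstream provider, but the proxy can track
# usage, enforce rate limits, and apply token budgets per key.
for name, client in [("backend", backend_client), ("research", research_client)]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Ping from {name}"}],
    )
    print(name, reply.usage.total_tokens)
```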
Dependencies for VoidLLM Hosting
- No external dependencies — runs standalone with embedded SQLite
- Upstream LLM provider credentials (OpenAI, Anthropic, etc.) added after deploy via the UI
Why Deploy VoidLLM on Railway?
Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while still letting you scale it vertically and horizontally.
By deploying VoidLLM on Railway, you are one step closer to supporting a complete full-stack application with minimal overhead. Host your servers, databases, AI agents, and more on Railway.
Template Content
- ghcr.io/voidmind-io/voidllm:latest (the VoidLLM container image)