Deploy DeepSeek
[Nov '25] Self-host DeepSeek models in your private AI workspace.
Open-WebUI
open-webui/open-webui · volume mounted at /app/backend/data

deepseek
ollama/ollama · volume mounted at /root/.ollama

Deploy and Host DeepSeek on Railway
This Railway template lets you deploy DeepSeek models using Ollama and OpenWebUI without manual setup. It includes a model server, a chat interface, persistent storage, and automatic model downloads. You only choose which DeepSeek models to load, and the template handles the rest.
About Hosting DeepSeek
Hosting DeepSeek models usually requires configuring an inference backend, setting up storage, and manually downloading models. This template handles those steps for you. It boots Ollama, downloads the DeepSeek models listed in your environment variable, connects OpenWebUI, and stores everything on a persistent volume. You can change the model list anytime and redeploy.
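Conceptually, the boot sequence amounts to something like the minimal Python sketch below. This is an illustration, not the template's actual startup script: the `MODELS` variable name is a placeholder, so check the template's variables in your Railway dashboard for the real name.

```python
import os
import subprocess
import time

# "MODELS" is a hypothetical variable name used here for illustration;
# the template's actual variable may be named differently.
models = [m.strip() for m in os.environ.get("MODELS", "").split(",") if m.strip()]

# Start the Ollama server, give it a moment to come up, then pull each model.
# A real startup script would poll http://localhost:11434 instead of sleeping.
server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)

for model in models:
    subprocess.run(["ollama", "pull", model], check=True)  # e.g. "deepseek-r1:8b"

server.wait()  # keep the server running after the pulls finish
```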
How to Use This Template
- Click “Deploy Now” on Railway. Within a minute, both Ollama and OpenWebUI will be live.
- Visit the public WebUI URL shown in your Railway dashboard.
- Log in for the first time; you’ll be asked to set a username and password (this becomes your admin account).
- You’re now inside your Chat Space. Start chatting instantly with your default model or add new ones.
🔧 Adding or Modifying Models
- Go to Profile → Admin Panel → Settings → Models.
- Click Manage Models in the top right.
- You’ll see your Ollama endpoint prefilled as http://ollama.railway.internal:11434.
- Pull any model you like from the Ollama Model Library. (Make sure your instance has enough RAM to fit the model; a scripted alternative is sketched after this section.)
- Return to the WebUI homepage; your new model will appear at the top left.
✅ That’s it! You can now chat, test, and prototype AI ideas right in your browser.
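If you’d rather script this step than click through the UI, Ollama exposes the same functionality over HTTP on that endpoint. The sketch below (using Python’s requests library) pulls a model and then lists what’s installed; it assumes it runs from another service inside the same Railway project, since ollama.railway.internal is only resolvable on Railway’s private network.

```python
import requests

OLLAMA = "http://ollama.railway.internal:11434"  # endpoint prefilled in the WebUI settings

# Pull a model; Ollama streams progress updates as JSON lines.
with requests.post(f"{OLLAMA}/api/pull", json={"model": "deepseek-r1:8b"}, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line.decode())  # e.g. {"status":"pulling manifest"} ... {"status":"success"}

# List installed models to confirm the new one is present.
installed = requests.get(f"{OLLAMA}/api/tags").json()["models"]
print([m["name"] for m in installed])
```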
Common Use Cases
- Personal AI assistant or research companion
- Code generation, debugging, and reasoning tasks
- Lightweight ChatGPT-style chatbot for private use
Dependencies for DeepSeek Hosting
- Ollama as the model backend
- OpenWebUI as the chat interface
- Railway persistent volumes for storing DeepSeek model files
Deployment Dependencies
- Enough RAM for the DeepSeek model variant you select
- Enough disk space for the downloaded model weights
- A stable internet connection for model downloads
- DeepSeek models available through Ollama’s library: https://ollama.com/library/deepseek
- DeepSeek model references: https://huggingface.co/deepseek-ai
Implementation Details
- Models are stored under Ollama’s model directory (/root/.ollama) on your Railway volume
- Changing model names in the environment variables triggers downloads on redeploy
FAQ
1. How do I choose which DeepSeek models to load?
Use the environment variable that accepts a comma-separated list of model names (for example, deepseek-r1:1.5b,deepseek-r1:8b). The template will download each model on startup. Start with smaller models if you're unsure.
2. How much RAM do I need for DeepSeek models?
Smaller DeepSeek variants run in a few gigabytes of memory, while larger ones may require 8–16GB of RAM. If a model doesn't load, it usually means the instance lacks enough memory.
3. Where are downloaded models stored?
All model files are stored on your Railway volume under Ollama’s model directory. They persist across redeploys.
4. Can I switch or add models later?
Yes. Update the model list in the environment variable and redeploy. The new models will download automatically.
5. Can I use OpenAI models alongside DeepSeek?
Yes. Add your OpenAI API key in OpenWebUI (under Admin Panel → Settings → Connections), and OpenAI models will appear in the same interface.
6. What happens if I run out of disk space?
Model downloads will fail. Increase your Railway volume size before redeploying.
7. Can I connect external tools like Flowise or LangChain?
Yes. Point external tools or scripts at the internal Ollama endpoint (http://ollama.railway.internal:11434); see the sketch below.
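As a minimal sketch, here is how an external script (or a LangChain/Flowise connector pointed at the same URL) could call the chat API. The model tag is just an example; use any model you’ve pulled, and swap in a public URL if you expose one and are calling from outside the Railway project.

```python
import requests

resp = requests.post(
    "http://ollama.railway.internal:11434/api/chat",
    json={
        "model": "deepseek-r1:8b",  # example tag; use any model you've pulled
        "messages": [{"role": "user", "content": "Explain binary search in one sentence."}],
        "stream": False,  # ask for a single JSON response instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```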
8. Is this setup beginner-friendly?
Yes. The template automates backend setup, model downloading, UI connection, and storage. You mainly choose models and adjust settings if needed.
9. Can multiple models be stored at once?
Yes. You can store as many models as your disk allows. Only loaded models use RAM.
10. Does the template support model upgrades?
Yes. You can update model names anytime. The system will pull newer versions if available.
Why Deploy DeepSeek on Railway?
Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while still letting you scale it vertically and horizontally.
By deploying DeepSeek on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
Open-WebUI
ghcr.io/open-webui/open-webui
deepseek
ollama/ollama
