Deploy n8n Enterprise-Ready Stack + Ollama
🚀 Infinitely scalable n8n + workers + PostgreSQL HA + Redis + Task Runner + Ollama
Services in this template:
- postgres-18-primary (volume: /var/lib/postgresql/data)
- postgres-18-replica (volume: /var/lib/postgresql/data)
- postgres-18-proxy
- Redis-n8n
- n8n-worker
- n8n-task-runner
- Ollama (volume: /root/.ollama)
Deploy and Host Ollama for AI-Powered n8n on Railway
🤖 Local AI powered by Qwen 2.5 - Run AI workflows in n8n without external API costs!
About Hosting Ollama on Railway
Ollama allows you to run open-source large language models (LLMs) locally. By hosting it on Railway alongside n8n, you create a powerful, private, and cost-effective AI automation stack. This service is pre-configured to:
- Run the highly-capable Qwen 2.5:3b model.
- Communicate over Railway's private network for maximum security.
- Store models in a persistent volume so they only download once.
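As a sketch of what that private-network setup enables, the snippet below sends a prompt to Ollama's standard `/api/generate` endpoint at the internal URL this template uses. The helper names are illustrative, not part of the template itself:

```python
import json
import urllib.request

# Internal URL from this template; only reachable inside the Railway project.
OLLAMA_URL = "http://ollama.railway.internal:11434/api/generate"

def build_generate_payload(prompt: str, model: str = "qwen2.5:3b") -> bytes:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str) -> str:
    """Send a prompt to the private Ollama service and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because the hostname is private, this only works from another service inside the same Railway project, which is exactly the security property described above.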
Why Deploy Ollama with n8n?
- Zero API Costs: Say goodbye to per-token billing from OpenAI or Anthropic.
- Data Privacy: Your data never leaves your Railway project. Local AI means local data.
- No Latency to External APIs: Fast communication over Railway's internal network (railway.internal).
- Tool Use: Perfect for giving n8n AI agents the ability to query your PostgreSQL database directly.
Common Use Cases
- Natural Language DB Query: "Show me all users who signed up today" -> AI generates SQL -> n8n executes.
- Support Automation: Classify incoming tickets and generate draft replies.
- Data Extraction: Extract structured data from messy emails or documents.
- Smart Workflows: Use AI as a conditional logic engine to make decisions in your workflows.
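The first use case above can be sketched as a prompt-construction step. The schema and helper below are hypothetical, purely to illustrate the shape of the prompt; in practice the call to the model would go through the n8n AI Agent node:

```python
# Hypothetical table schema, used only for illustration.
SCHEMA = "users(id INTEGER, email TEXT, created_at TIMESTAMP)"

def nl_to_sql_prompt(question: str, schema: str = SCHEMA) -> str:
    """Wrap a natural-language question in a prompt that asks the
    model to reply with a single SQL statement and nothing else."""
    return (
        "You are a PostgreSQL assistant.\n"
        f"Schema: {schema}\n"
        f"Question: {question}\n"
        "Reply with one SQL statement only."
    )

prompt = nl_to_sql_prompt("Show me all users who signed up today")
```

n8n would then pass the model's SQL output to a PostgreSQL node for execution, ideally with a read-only database role as a safety net.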
Dependencies for Ollama Service
| Dependency | Version | Purpose |
|---|---|---|
| Ollama | Latest | The engine that runs the LLM |
| Qwen 2.5 | 3b | The default intelligence model |
| Docker | Latest | Containerization for Railway |
Deployment Dependencies
| Resource | Requirement | Note |
|---|---|---|
| RAM | 4GB (Min) | Required for 3b models |
| Disk | 10GB | To store the model weights |
| CPU | 2 vCPU+ | Faster CPU = Faster responses |
Default Configuration
| Setting | Value |
|---|---|
| Model | qwen2.5:3b |
| API Endpoint | http://ollama.railway.internal:11434 |
Connecting to n8n
Step 1: Add Ollama Credentials in n8n
- Open your n8n UI.
- Go to Settings -> Credentials.
- Create a new Ollama API credential.
- Set the Base URL to:
http://ollama.railway.internal:11434.
Step 2: Use in Workflows
Use the AI Agent or Ollama Chat Model nodes. Always ensure the "Model" name in the node matches qwen2.5:3b.
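To confirm the model name in the node matches what the service has actually loaded, a check against Ollama's `/api/tags` endpoint can help. This is a sketch: it returns an empty list when the service is unreachable (for example, while the model is still downloading on first boot):

```python
import json
import urllib.request

# Private-network base URL from this template.
OLLAMA_BASE = "http://ollama.railway.internal:11434"

def list_models(base: str = OLLAMA_BASE, timeout: float = 5.0) -> list[str]:
    """Return the model names reported by Ollama's /api/tags,
    or an empty list if the service cannot be reached."""
    try:
        with urllib.request.urlopen(f"{base}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except OSError:
        return []

# Expect "qwen2.5:3b" in the result once the first-boot download finishes.
```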
Resource Requirements & Cost
| RAM | Est. Cost/mo | Performance |
|---|---|---|
| 2GB | ~$10 | Struggling / High Latency |
| 4GB | ~$20 | Balanced (Recommended) |
| 8GB | ~$40 | Fast / Supports 7b models |
Troubleshooting
- ❌ Connection Timed Out: Ollama might still be downloading the model weights on first boot. Check the service logs.
- ❌ Out of Memory: Ensure the service has at least 4GB of RAM allocated in Railway.
- ❌ Slow Responses: Running on CPU is slower than GPU. Use a smaller model such as qwen2.5:1.5b if speed is critical.
Support This Project
If this template saves you time and money, consider supporting its development!
Template Content
All services are built from the icueth/n8n-PostgresHA-RedisHA repository:
- postgres-18-proxy
- n8n-task-runner
- n8n-worker
- Redis-n8n
- postgres-18-replica
- postgres-18-primary