Deploy n8n Enterprise-Ready Stack + Ollama

šŸš€ Horizontally scalable n8n + workers + PostgreSQL HA + Redis + Task Runner + Ollama


Deploy and Host Ollama for AI-Powered n8n on Railway

šŸ¤– Local AI powered by Qwen 2.5 - Run AI workflows in n8n without external API costs!


About Hosting Ollama on Railway

Ollama allows you to run open-source large language models (LLMs) locally. By hosting it on Railway alongside n8n, you create a powerful, private, and cost-effective AI automation stack. This service is pre-configured to:

  • Run the capable Qwen 2.5 3B model (qwen2.5:3b).
  • Communicate over Railway's private network for maximum security.
  • Store models in a persistent volume (mounted at /root/.ollama) so they only download once.

Why Deploy Ollama with n8n?

  1. Zero API Costs: Say goodbye to per-token billing from OpenAI or Anthropic.
  2. Data Privacy: Your data never leaves your Railway project. Local AI means local data.
  3. No Latency to External APIs: Fast communication over Railway's internal network (railway.internal).
  4. Tool Use: Perfect for giving n8n AI agents the ability to query your PostgreSQL database directly.

Common Use Cases

  • Natural Language DB Query: "Show me all users who signed up today" -> AI generates SQL -> n8n executes.
  • Support Automation: Classify incoming tickets and generate draft replies.
  • Data Extraction: Extract structured data from messy emails or documents.
  • Smart Workflows: Use AI as a conditional logic engine to make decisions in your workflows.

Dependencies for Ollama Service

| Dependency | Version | Purpose |
| --- | --- | --- |
| Ollama | Latest | The engine that runs the LLM |
| Qwen 2.5 | 3b | The default intelligence model |
| Docker | Latest | Containerization for Railway |

Deployment Dependencies

| Resource | Requirement | Note |
| --- | --- | --- |
| RAM | 4GB (min) | Required for 3b models |
| Disk | 10GB | To store the model weights |
| CPU | 2 vCPU+ | Faster CPU = faster responses |

Default Configuration

| Setting | Value |
| --- | --- |
| Model | qwen2.5:3b |
| API Endpoint | http://ollama.railway.internal:11434 |
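To sanity-check the service from any other container in the same Railway project, you can query Ollama's standard REST API (`/api/tags` lists downloaded models). This is a minimal sketch; outside Railway's private network the hostname does not resolve, so the command falls back to a note instead of failing:

```shell
# Private-network base URL for the Ollama service (only resolvable
# from services inside the same Railway project).
OLLAMA_URL="http://ollama.railway.internal:11434"

# List downloaded models; qwen2.5:3b should appear once the
# first-boot pull has finished.
MODELS=$(curl -s --max-time 3 "$OLLAMA_URL/api/tags" \
  || echo "unreachable outside the Railway private network")
echo "$MODELS"
```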

Connecting to n8n

Step 1: Add Ollama Credentials in n8n

  1. Open your n8n UI.
  2. Go to Settings -> Credentials.
  3. Create a new Ollama API credential.
  4. Set the Base URL to: http://ollama.railway.internal:11434.

Step 2: Use in Workflows

Use the AI Agent or Ollama Chat Model nodes. Always ensure the "Model" name in the node matches qwen2.5:3b.
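Under the hood, the Ollama Chat Model node talks to Ollama's `/api/chat` endpoint. A rough equivalent as a raw request (the ticket-classification prompt is just an illustration) looks like this:

```shell
OLLAMA_URL="http://ollama.railway.internal:11434"

# Chat-completion payload; the "model" value must match the node's
# Model field exactly (qwen2.5:3b).
PAYLOAD='{"model":"qwen2.5:3b","messages":[{"role":"user","content":"Classify this support ticket: My invoice total is wrong."}],"stream":false}'

# Only works from inside the Railway project; elsewhere it times out.
curl -s --max-time 3 "$OLLAMA_URL/api/chat" -d "$PAYLOAD" \
  || echo "unreachable outside the Railway private network"
```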


Resource Requirements & Cost

| RAM | Est. Cost/mo | Performance |
| --- | --- | --- |
| 2GB | ~$10 | Struggling / high latency |
| 4GB | ~$20 | Balanced (recommended) |
| 8GB | ~$40 | Fast / supports 7b models |

Troubleshooting

  • āŒ Connection Timed Out: Ollama might still be downloading the model weights on the first boot. Check logs.
  • āŒ Out of Memory: Ensure the service has at least 4GB of RAM allocated in Railway.
  • āŒ Slow Responses: Running on CPU is slower than GPU. Use smaller models like qwen2.5:1.5b if speed is critical.

Support This Project

If this template saves you time and money, consider supporting its development! ā˜•


