N8N (AI Starter Kit)

A powerful workflow automation tool for technical people

Deploy N8N (AI Starter Kit)

Services included in this template:

  • Worker: n8nio/n8n
  • Redis: bitnami/redis (volume: /bitnami)
  • Primary: n8nio/n8n
  • Ollama: ollama/ollama (volume: /root/.ollama)
  • Qdrant: qdrant/qdrant (volume: /qdrant/storage)
  • Postgres: railwayapp-templates/postgres-ssl (volume: /var/lib/postgresql/data)
  • Open WebUI: open-webui/open-webui (volume: /app/backend/data)

Deploy and Host n8n (AI Starter Kit) on Railway

This template extends the N8N (w/ workers) template by adding compatible AI products and components. This starter kit is designed to help you get started with self-hosted AI workflows.

About Hosting n8n (AI Starter Kit)

Hosting the n8n (AI Starter Kit) means running multiple AI-focused services: n8n for workflow automation, Ollama for local language models, and Qdrant for vector storage. The stack coordinates workflow execution, AI model inference, and vector database operations while keeping data processing local. A production deployment requires managing service interconnections, configuring AI model access, scaling vector storage, and coordinating workflow scheduling across the AI components. Railway simplifies this multi-service deployment by orchestrating containers for all of the AI services, managing the internal networking between components, and handling the environment variable configuration used for service discovery.

Common Use Cases

  • Self-hosted AI Workflows: Build AI-powered automation workflows using local language models and vector storage
  • Proof-of-Concept Projects: Develop and test AI workflow concepts with robust components working together
  • Custom AI Automation: Create tailored AI workflows that combine language processing, data extraction, and classification

Dependencies for n8n (AI Starter Kit) Hosting

The Railway template includes the required n8n workflow engine, Ollama language model server, and Qdrant vector database with pre-configured networking.

Deployment Dependencies

Implementation Details

Service Configuration:

  • Ollama credential URL: http://ollama:11434
  • Qdrant credential URL: http://qdrant:6333 (a quick connectivity sketch for both follows below)
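
For a quick smoke test of both credentials, the sketch below calls each service's REST API over Railway's private network. It is a minimal example, assuming it runs from a service inside the same Railway project (where the ollama and qdrant hostnames resolve) and that Python with the requests package is available; it only lists Ollama's pulled models and Qdrant's collections.

```python
# Minimal connectivity check for the kit's internal services.
# Assumes the private hostnames "ollama" and "qdrant" resolve, i.e. this runs
# from a service inside the same Railway project.
import requests

OLLAMA_URL = "http://ollama:11434"
QDRANT_URL = "http://qdrant:6333"

def check_ollama() -> None:
    # GET /api/tags lists the models currently pulled into Ollama.
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama reachable, models:", models or "none pulled yet")

def check_qdrant() -> None:
    # GET /collections lists the vector collections Qdrant is serving.
    resp = requests.get(f"{QDRANT_URL}/collections", timeout=10)
    resp.raise_for_status()
    names = [c["name"] for c in resp.json()["result"]["collections"]]
    print("Qdrant reachable, collections:", names or "none created yet")

if __name__ == "__main__":
    check_ollama()
    check_qdrant()
```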

AI Workflow Capabilities:

With your n8n instance, you'll have access to over 400 integrations and a suite of basic and advanced AI nodes such as the AI Agent, Text Classifier, and Information Extractor nodes. To keep everything local, just remember to use the Ollama node for your language model and Qdrant as your vector store.
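
As a rough illustration of the keep-it-local pattern, the sketch below embeds a snippet of text with Ollama and stores the vector in Qdrant over the same internal URLs, which is approximately what the Ollama embeddings and Qdrant vector store nodes do inside a workflow. The embedding model (nomic-embed-text), collection name, and distance metric are assumptions made for the example; use whatever you have actually pulled and configured.

```python
# Sketch: embed text locally with Ollama, then upsert the vector into Qdrant
# via their REST APIs. Model, collection name, and distance are illustrative.
import requests

OLLAMA_URL = "http://ollama:11434"
QDRANT_URL = "http://qdrant:6333"
COLLECTION = "demo_docs"          # hypothetical collection name
EMBED_MODEL = "nomic-embed-text"  # assumes this model has been pulled in Ollama

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint returns one vector per prompt.
    resp = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": EMBED_MODEL, "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

def upsert(doc_id: int, text: str) -> None:
    vector = embed(text)
    # Create the collection; errors are ignored if it already exists.
    requests.put(
        f"{QDRANT_URL}/collections/{COLLECTION}",
        json={"vectors": {"size": len(vector), "distance": "Cosine"}},
        timeout=30,
    )
    # Store the vector alongside the original text as payload.
    resp = requests.put(
        f"{QDRANT_URL}/collections/{COLLECTION}/points",
        json={"points": [{"id": doc_id, "vector": vector, "payload": {"text": text}}]},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    upsert(1, "n8n orchestrates AI workflows on self-hosted infrastructure.")
```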

Starter Kit Design:

The kit is meant to get you up and running with self-hosted AI workflows quickly. While it's not fully optimized for production environments, it combines robust components that work well together for proof-of-concept projects.

Customization:

You can customize the kit to meet your specific needs. Feel free to visit the starter kit's documentation or repository for tutorials and examples.

Component Integration:

  • n8n Workflow Engine: Orchestrates AI workflow execution with visual workflow building
  • Ollama: Provides local language model hosting for AI processing tasks
  • Qdrant: Handles vector storage and similarity search operations (a retrieval sketch follows this list)
  • Worker Architecture: Distributed workflow execution for handling AI processing loads
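
To complement the upsert sketch above, the retrieval side of that flow, which is roughly what a vector-store-backed AI Agent or retriever node performs, looks like this: embed the query locally, then run a similarity search in Qdrant. The model and collection names are the same illustrative assumptions as before.

```python
# Sketch: the retrieval half of a local RAG flow. Embed a query with Ollama,
# then search the Qdrant collection populated earlier. Names are illustrative.
import requests

OLLAMA_URL = "http://ollama:11434"
QDRANT_URL = "http://qdrant:6333"
COLLECTION = "demo_docs"
EMBED_MODEL = "nomic-embed-text"

def search(query: str, limit: int = 3) -> list[dict]:
    # Embed the query text locally with Ollama.
    emb = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": EMBED_MODEL, "prompt": query},
        timeout=60,
    )
    emb.raise_for_status()
    vector = emb.json()["embedding"]

    # Ask Qdrant for the nearest stored vectors, including their payloads.
    resp = requests.post(
        f"{QDRANT_URL}/collections/{COLLECTION}/points/search",
        json={"vector": vector, "limit": limit, "with_payload": True},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    for hit in search("How does the starter kit keep data local?"):
        print(round(hit["score"], 3), hit["payload"].get("text"))
```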

Why Deploy n8n (AI Starter Kit) on Railway?

Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.

By deploying n8n (AI Starter Kit) on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

