Deploy Perplexica | Open Source Perplexity AI Alternative
Self-host Perplexica AI. AI-powered search engine with citations.
Deploy and Host Perplexica on Railway
Perplexica is an open-source AI-powered search engine — a self-hostable Perplexity AI alternative that uses LLMs to read the web and answer your questions with citations, images, and videos. Originally launched as Perplexica and rebranded to Vane in March 2026, it ships as a single Docker image (itzcrazykns1337/vane:latest) that bundles the Next.js frontend, the API backend, and a private SearxNG metasearch engine.
This Railway template deploys Perplexica with persistent storage for chat history and provider configuration. SearxNG runs inside the same container on localhost:8080, so there is nothing extra to wire up — bring your own OpenAI / Anthropic / Gemini / Groq / Ollama key and start searching.
Getting Started with Perplexica on Railway
After the deploy goes green, click the generated *.up.railway.app domain. The first visit opens the Setup Wizard — pick a model provider (OpenAI, Anthropic, Gemini, Groq, Ollama, LM Studio, Lemonade, or local Transformers), paste the API key, and confirm. Everything is persisted to /home/vane/data/config.json inside the Railway volume so settings survive redeploys.
Type a question into the chat box on the home screen. Perplexica uses SearxNG to query Google, Bing, DuckDuckGo, Wolfram Alpha, and YouTube, reranks the results using embeddings, then streams an answer with inline citations and a sidebar of source images and videos. Switch focus modes (Academic, YouTube, Reddit, Writing Assistant, Wolfram Alpha) from the input bar to constrain what gets searched.

About Hosting Perplexica
Perplexica answers questions like ChatGPT but searches the live web first, like Perplexity AI — except you control the model, the search index, and the data. There is no vendor lock-in, no usage cap, and no telemetry leaving your container.
Key features:
- Live web search powered by SearxNG (no Bing/Brave API keys needed)
- Six focus modes: All, Academic, YouTube, Reddit, Writing, Wolfram Alpha
- Streaming answers with inline citations and image/video sidebars
- File uploads — drop a PDF/DOCX and chat with it
- Pluggable LLMs: OpenAI, Anthropic, Gemini, Groq, Ollama, LM Studio, Lemonade, Transformers
- Conversation history persisted to embedded SQLite
The template is a single-container deployment: Next.js (port 3000) + SearxNG (port 8080, internal only) + SQLite, all inside one image. No external Postgres or Redis required.
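If you run the image locally with Docker (see the self-hosting section below), you can confirm the port layout yourself. A minimal sketch, assuming a container named perplexica as in that section and that curl is available both on the host and inside the image:

  # The Next.js frontend is published on the host
  curl -I http://localhost:3000

  # SearxNG on 8080 is internal only; it answers from inside the container, not from the host
  docker exec perplexica curl -sI http://localhost:8080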
Why Deploy Perplexica on Railway
Railway is the fastest path from git push to a public AI search engine.
- One-click deploy from a Docker image — no Dockerfile to maintain
- Persistent volume mounts in seconds for chat history and config
- Free public HTTPS domain with automatic TLS termination
- Pay only for the compute and RAM you actually use
- 4 GB memory and 4 GB storage are typically enough for personal use
Common Use Cases
- Personal research assistant — get cited answers instead of doom-scrolling Google SERPs
- Internal team knowledge tool — give engineers a private Perplexity behind your VPN
- Privacy-first ChatGPT alternative — nothing about your queries is logged server-side beyond the model API call itself
- Data-grounded chat over your own files — upload PDFs, DOCX, and TXT documents and chat with them
Dependencies for Perplexica on Railway
- Vane / Perplexica — itzcrazykns1337/vane:latest (bundles the Next.js app + SearxNG)
Environment Variables Reference
| Variable | Purpose | Default |
|---|---|---|
| PORT | Next.js HTTP server port | 3000 |
| HOSTNAME | Bind address for the Next.js standalone server | 0.0.0.0 |
| OPENAI_API_KEY | Pre-fills the OpenAI provider in the setup wizard | unset |
| ANTHROPIC_API_KEY | Pre-fills the Anthropic provider | unset |
| GEMINI_API_KEY | Pre-fills the Google Gemini provider | unset |
| GROQ_API_KEY | Pre-fills the Groq provider | unset |
| OLLAMA_BASE_URL | External Ollama endpoint (e.g. http://host:11434) | unset |
| SEARXNG_API_URL | Overrides the bundled SearxNG URL | http://localhost:8080 |
| RAILWAY_RUN_UID | Forces the container to run as root for volume access | 0 |
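On Railway these go in the service's Variables tab; for a local run you can pass the same names with -e flags. A minimal sketch that pre-fills the OpenAI provider and points at an external Ollama host (the key and hostname are placeholders):

  # Pre-fill providers at startup; values shown are placeholders
  docker run -d \
    --name perplexica \
    -p 3000:3000 \
    -v perplexica-data:/home/vane/data \
    -e OPENAI_API_KEY=sk-... \
    -e OLLAMA_BASE_URL=http://my-ollama-host:11434 \
    itzcrazykns1337/vane:latest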
Deployment Dependencies
- Runtime: Node.js 24, Python 3 (uWSGI), Playwright/Chromium
- Docker image: itzcrazykns1337/vane on Docker Hub
- GitHub repo: ItzCrazyKns/Perplexica
- Self-host docs: Perplexica README
Hardware Requirements for Self-Hosting Perplexica on Railway
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2 vCPU |
| RAM | 2 GB | 4 GB |
| Storage | 2 GB | 4 GB+ (chat history grows over time) |
| Runtime | Docker (Node.js 24 + Python 3) | Docker on Linux/amd64 |
The bundled image is ~3.6 GB on disk because it ships Playwright + Chromium for headless web rendering. Allocate at least 4 GB RAM on Railway to avoid OOM during cold starts.
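You can verify the footprint before sizing the volume; a quick sketch using plain Docker:

  # Pull the image and confirm its on-disk size (~3.6 GB)
  docker pull itzcrazykns1337/vane:latest
  docker image ls itzcrazykns1337/vane:latest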
Self-Hosting Perplexica with Docker
Run the bundled image locally:
docker run -d \
  --name perplexica \
  -p 3000:3000 \
  -v perplexica-data:/home/vane/data \
  itzcrazykns1337/vane:latest
Or with docker-compose.yml:
services:
  vane:
    image: itzcrazykns1337/vane:latest
    ports:
      - '3000:3000'
    volumes:
      - data:/home/vane/data
    restart: unless-stopped
volumes:
  data:
Open http://localhost:3000, complete the setup wizard, and you have a private Perplexity clone running on your hardware.
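Because everything the setup wizard writes lands under /home/vane/data, backing up or migrating an instance is just a copy of that directory. A minimal sketch for the local run above, assuming the container is named perplexica as in the docker run example:

  # Back up provider config and the SQLite chat history
  docker cp perplexica:/home/vane/data ./perplexica-backup

  # Restore into a fresh container before first use
  docker cp ./perplexica-backup/. perplexica:/home/vane/data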
How Much Does Perplexica Cost to Self-Host?
Perplexica itself is MIT-licensed and free. There are no paid tiers, license keys, or seat caps. The only costs are infrastructure (Railway compute + volume) and the LLM API calls you make from inside the app — Groq and Gemini both have generous free tiers, and local Ollama is free if you run it elsewhere. A typical Railway deployment costs roughly $5–10/month at light personal use.
FAQ
What is Perplexica and why self-host it on Railway?
Perplexica is an open-source Perplexity AI alternative that combines an LLM with live web search via SearxNG. Self-hosting on Railway gives you a private instance with no rate limits, no telemetry, and full control over which model provider answers your queries.
What does this Railway template deploy?
A single container running itzcrazykns1337/vane:latest, which bundles the Next.js frontend, the search/answer backend, and a private SearxNG metasearch engine. A 4 GB volume at /home/vane/data persists your chat history and provider configuration.
Why is SearxNG bundled inside the same container?
Perplexica needs a SearxNG instance with the JSON results format and the Wolfram Alpha engine enabled. Bundling it eliminates the second-service wiring step and keeps the deployment to a single container with one volume.
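If you want to see the raw results the backend consumes, you can query the bundled SearxNG directly from inside the container. A sketch assuming a local Docker run named perplexica and that curl is present in the image:

  # Ask the bundled SearxNG for JSON results, the format Perplexica relies on
  docker exec perplexica \
    curl -s 'http://localhost:8080/search?q=railway+deploy&format=json'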
Do I need an OpenAI API key to use Perplexica?
No — Perplexica supports OpenAI, Anthropic, Gemini, Groq, Ollama, LM Studio, Lemonade, and local Transformers. You can also point it at any OpenAI-compatible endpoint (vLLM, LocalAI, OpenRouter) using the OPENAI_BASE_URL setting.
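As an illustration, routing the OpenAI provider through OpenRouter means setting the base URL to https://openrouter.ai/api/v1 and supplying an OpenRouter key. Passing OPENAI_BASE_URL as an environment variable at startup (sketched below) is an assumption, since the environment variables reference above only lists the API-key variables; configuring it in the setup wizard always works.

  # Hypothetical: pre-seed an OpenAI-compatible endpoint via env vars
  docker run -d \
    --name perplexica \
    -p 3000:3000 \
    -v perplexica-data:/home/vane/data \
    -e OPENAI_BASE_URL=https://openrouter.ai/api/v1 \
    -e OPENAI_API_KEY=sk-or-... \
    itzcrazykns1337/vane:latest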
Does self-hosted Perplexica have built-in authentication?
No — Perplexica ships without an auth layer. Anyone with the URL can use the instance and consume your AI API credits. For shared or public deployments, put a reverse proxy with basic auth or an SSO gateway in front of the service.
How do I upload files to chat with in Perplexica?
Click the paperclip icon next to the message input. Perplexica supports PDF, DOCX, and TXT uploads. Files are processed in the container and referenced inline by your chosen LLM during the conversation.