Deploy Perplexica
Open source alternative to Perplexity AI
Deploy and Host Perplexica on Railway
Perplexica is a privacy-focused AI answering engine that combines web search with AI intelligence. It supports both local LLMs through Ollama and cloud providers like OpenAI, Claude, and Groq, delivering accurate answers with cited sources. With built-in SearxNG search integration, specialized focus modes, and file upload capabilities, Perplexica offers a comprehensive search experience that respects your privacy.
About Hosting Perplexica
Hosting Perplexica involves deploying a full-stack application that includes both the Perplexica search interface and an integrated SearxNG instance for private web searches. The bundled Docker image contains everything needed to run independently. After deployment, you configure AI provider settings (API keys for OpenAI, Claude, etc., or a local Ollama endpoint) through the web interface. Perplexica stores search history and uploaded files persistently, so proper volume management is required. The application listens on port 8080 by default and serves both the frontend interface and the backend API through Next.js, making it straightforward to expose to the internet while maintaining privacy.
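For a quick local trial before deploying, the bundled image can be started with a single command. This is a minimal sketch assuming the defaults described above (web UI on port 8080, two persistent volumes); the volume names are arbitrary placeholders:

```shell
# Run the bundled Perplexica image locally (sketch; port and volume
# paths follow the defaults described above).
docker run -d \
  --name perplexica \
  -p 8080:8080 \
  -v perplexica-data:/home/perplexica/data \
  -v perplexica-uploads:/home/perplexica/uploads \
  itzcrazykns1337/perplexica:latest
```

Named volumes keep search history and uploads across container restarts, mirroring the persistent-volume requirement on Railway.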
Common Use Cases
- Private Research Platform: Replace traditional search engines with an AI-powered alternative that keeps all queries private and provides cited, intelligent answers
- Academic Research Tool: Leverage specialized focus modes for academic papers, Wolfram Alpha calculations, and domain-specific searches with document upload capabilities
- Team Knowledge Base: Deploy for organizational use to provide employees with a private, AI-enhanced search tool that can analyze uploaded documents and answer questions based on both web content and internal files
Dependencies for Perplexica Hosting
- Docker-compatible hosting environment with persistent volume support for data and uploads
- AI Provider Access: API keys from at least one provider (OpenAI, Anthropic Claude, Google Gemini, Groq) or a local Ollama instance
- Sufficient compute resources: Minimum 1GB RAM, 1 vCPU (more recommended for better performance with the bundled SearxNG)
Deployment Dependencies
- Perplexica GitHub Repository - Official source code and documentation
- Perplexica Docker Hub - Pre-built Docker images
- Perplexica Installation Guide - Detailed installation instructions
- Perplexica API Documentation - API reference for integration
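Once deployed, the search API can be exercised directly for integration testing. A hedged sketch: the `/api/search` path and the `focusMode`/`query` fields are taken from the Perplexica API documentation (verify against the version you deploy), and `your-domain` is a placeholder for your Railway domain:

```shell
# Sketch of a search request against a deployed instance (endpoint and
# field names per the Perplexica API docs; confirm for your version).
curl -s -X POST "http://your-domain/api/search" \
  -H "Content-Type: application/json" \
  -d '{
        "focusMode": "webSearch",
        "query": "What is Perplexica?"
      }'
```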
Key Configuration Notes:
- Port 8080 must be exposed for web access
- Two persistent volumes required:
  - /home/perplexica/data - Stores search history and application data
  - /home/perplexica/uploads - Stores uploaded files
- Post-deployment setup: Navigate to http://your-domain to configure AI provider settings, API keys, and model selections through the built-in setup screen
- No environment variables are required for basic deployment, since the bundled image includes SearxNG (unlike the slim version)
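The notes above translate directly into a Compose file for self-hosting on any Docker host. A minimal sketch, assuming the default port and the two volume paths listed above:

```yaml
# docker-compose.yml sketch for the bundled Perplexica image.
services:
  perplexica:
    image: itzcrazykns1337/perplexica:latest
    ports:
      - "8080:8080"
    volumes:
      - perplexica-data:/home/perplexica/data
      - perplexica-uploads:/home/perplexica/uploads
    restart: unless-stopped

volumes:
  perplexica-data:
  perplexica-uploads:
```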
Why Deploy Perplexica on Railway?
Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while letting you scale it both vertically and horizontally.
By deploying Perplexica on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
- Perplexica: itzcrazykns1337/perplexica:latest