Deploy and Host PageIndex on Railway

PageIndex

PageIndex is an open-source, vectorless RAG (Retrieval-Augmented Generation) framework by VectifyAI. Instead of vector databases and similarity search, it builds a hierarchical tree index from your documents and uses LLM reasoning to retrieve the most relevant sections — the way a domain expert would navigate a document, not a search engine.

No Vector DB. No Chunking. Just Reasoning.

Traditional RAG approximates relevance through embedding distance, which breaks down on financial reports, legal filings, academic papers, and other long-form professional documents where similarity ≠ relevance. Inspired by AlphaGo's tree search, PageIndex performs retrieval in two steps: (1) generate a hierarchical "Table of Contents" tree index from your document, then (2) retrieve through reasoning-based, LLM-guided tree search over that index. It achieves state-of-the-art 98.7% accuracy on FinanceBench, outperforming all vector-based RAG baselines.
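The two-step idea can be sketched in a few lines. This is an illustrative toy, not PageIndex's implementation: the tree schema is hypothetical, and where the real system asks an LLM which branch is most relevant, a trivial keyword-overlap score stands in so the sketch is runnable.

```python
# Toy sketch of reasoning-based tree search over a document's
# Table-of-Contents tree. Hypothetical schema; in the real system
# the branch choice below is made by an LLM, not a keyword score.

def score(query, node):
    # Stand-in for the LLM's relevance judgment on a node title.
    q = set(query.lower().split())
    t = set(node["title"].lower().split())
    return len(q & t)

def tree_search(query, node, path=()):
    """Descend the tree, at each level following the most relevant child."""
    path = path + (node["title"],)
    children = node.get("children", [])
    if not children:
        return path
    best = max(children, key=lambda c: score(query, c))
    return tree_search(query, best, path)

toc = {
    "title": "2023 Annual Report",
    "children": [
        {"title": "Letter to Shareholders", "children": []},
        {"title": "Financial Statements", "children": [
            {"title": "Balance Sheet", "children": []},
            {"title": "Income Statement", "children": []},
        ]},
    ],
}

print(tree_search("financial statements balance sheet", toc))
# ('2023 Annual Report', 'Financial Statements', 'Balance Sheet')
```

Because the search walks the document's own hierarchy, the returned path doubles as an explanation of why that section was retrieved.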

About Hosting PageIndex

This Railway template wraps PageIndex in a production-ready HTTP API. Bring your OpenAI (or any LiteLLM-compatible) API key, and Railway handles the rest. Upload a PDF or Markdown file, get back a structured JSON tree — ready to plug into your RAG pipeline, agentic workflow, or MCP client. No vector database setup, no embedding pipelines, no chunking configuration required.
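Once you have the JSON tree back, a simple recursive walk turns it into a section outline. The field names below (`title`, `start_page`, `children`) are illustrative assumptions, not the template's exact schema; adapt them to the response you actually receive.

```python
# Hypothetical shape of the JSON tree returned by the indexing endpoint.
# Field names are illustrative, not the template's documented schema.
sample_tree = {
    "title": "10-K Filing",
    "start_page": 1,
    "children": [
        {"title": "Risk Factors", "start_page": 12, "children": []},
        {"title": "MD&A", "start_page": 45, "children": []},
    ],
}

def flatten(node, depth=0):
    """Yield (depth, title, page) for every section in the tree."""
    yield depth, node["title"], node["start_page"]
    for child in node.get("children", []):
        yield from flatten(child, depth + 1)

for depth, title, page in flatten(sample_tree):
    print("  " * depth + f"{title} (p. {page})")
```

A flattened view like this is a convenient handoff point to the rest of a RAG pipeline, since each entry carries its page reference.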

Common Use Cases

  • Financial Document QA — Query SEC filings, earnings reports, and 10-K documents with near-perfect accuracy
  • Legal Document Analysis — Navigate contracts, regulatory filings, and compliance documents using reasoning-based search
  • Academic Research — Search across long-form papers and textbooks without losing context through chunking
  • Enterprise Knowledge Bases — Replace opaque vector search with explainable, traceable document retrieval
  • Agentic RAG Pipelines — Plug into OpenAI Agents SDK or any MCP-compatible client for end-to-end agentic workflows
  • Technical Manual Search — Retrieve precise sections from large technical documentation with full page and section references

PageIndex vs. Traditional RAG Frameworks

PageIndex vs. LlamaIndex

LlamaIndex builds vector indexes over document chunks and retrieves by embedding similarity. PageIndex skips embeddings entirely — it builds a hierarchical tree index and uses LLM reasoning to navigate it. The result is significantly higher accuracy on professional documents where the relevant section requires domain knowledge to identify, not just keyword proximity.

PageIndex vs. LangChain RAG

LangChain's RAG pipeline splits documents into chunks, embeds them, and retrieves the top-k most similar chunks. This works for general-purpose QA but degrades on long documents with complex structure. PageIndex preserves the full document hierarchy and reasons over it, which is why it achieves 98.7% on FinanceBench where standard LangChain RAG pipelines score significantly lower.

PageIndex vs. Pinecone / Chroma / Weaviate

Vector databases are retrieval infrastructure — they don't solve the accuracy problem, they just store and query embeddings faster. If your chunks and embeddings are imprecise, a faster vector DB doesn't help. PageIndex eliminates the vector DB dependency entirely, replacing approximate similarity search with explicit LLM reasoning over document structure.

Dependencies for PageIndex Hosting

  • Python 3.8+
  • LLM API key (OpenAI, Anthropic, or any LiteLLM-supported provider)
  • PDF or Markdown documents for indexing

Implementation Details

# Index a PDF — returns a hierarchical JSON tree
curl -X POST https://your-app.railway.app/index/pdf \
  -F "[email protected]"

# Interactive API docs (Swagger UI)
https://your-app.railway.app/docs

# Health check
https://your-app.railway.app/health
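The curl call above can also be made from Python without third-party packages. This is a minimal stdlib sketch assuming the `/index/pdf` endpoint accepts a multipart upload under the form field `file` (as in the curl example) and returns JSON; replace `BASE` with your Railway URL.

```python
import json
import urllib.request
import uuid

BASE = "https://your-app.railway.app"  # replace with your deployment URL

def build_multipart(field, filename, data):
    """Build a multipart/form-data body for a single file upload."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/pdf\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail, f"multipart/form-data; boundary={boundary}"

def index_pdf(path):
    """POST a PDF to /index/pdf and return the parsed JSON tree."""
    with open(path, "rb") as f:
        body, ctype = build_multipart("file", path.rsplit("/", 1)[-1], f.read())
    req = urllib.request.Request(
        f"{BASE}/index/pdf",
        data=body,
        headers={"Content-Type": ctype},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For quick experiments, the interactive docs at `/docs` let you exercise the same endpoint from the browser.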

Key features:

  • Vectorless — no Pinecone, Weaviate, or Chroma required
  • No chunking — documents organized into natural semantic sections
  • Human-like retrieval — tree search mirrors how experts navigate documents
  • Explainable & traceable — every result includes page and section references
  • Multi-LLM support — works with OpenAI, Anthropic, and more via LiteLLM
  • Agentic-ready — integrates with OpenAI Agents SDK
  • MCP & API support — connect to Claude, ChatGPT, or any MCP-compatible client

Why Deploy PageIndex on Railway?

Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, and lets you scale it both vertically and horizontally.

By deploying PageIndex on Railway, you are one step closer to running a complete full-stack application with minimal overhead. Host your servers, databases, AI agents, and more on Railway.

