Long-term AI memory system with API + Control Plane in one deploy
Deploy and Host Hindsight on Railway
About Hosting Hindsight on Railway
This template runs Hindsight from the official upstream GHCR container image on Railway, exposing the Control Plane publicly while the memory API runs alongside it in the same container on an internal port.
Tech Stack
- Hindsight v0.5.1 (official image)
- FastAPI (memory API)
- Next.js (Control Plane UI)
- Embedded PostgreSQL (pg0)
- Railway
Why Deploy Hindsight on Railway
Railway provides fast image-based deployment, built-in domains, and simple environment-variable management, making it ideal for running Hindsight with minimal operational overhead.
Common Use Cases
- Persistent memory backend for AI agents
- Personalization memory for chat assistants
- Long-term recall and reflection workflows
- Internal memory operations dashboard via Control Plane
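All of the use cases above reduce to an agent storing and retrieving memories over the internal API on port 8888. A minimal client-side sketch follows; the endpoint path (`/memories`), payload fields, and the `app.railway.internal` private hostname are illustrative assumptions, not the documented Hindsight API — check the upstream docs for the real routes:

```python
from dataclasses import dataclass
from urllib.parse import urljoin


@dataclass
class HindsightClient:
    """Tiny helper that composes requests for the memory API.

    Endpoint paths and payload shape here are assumptions for
    illustration, not Hindsight's documented contract.
    """

    base_url: str  # e.g. the Railway private hostname, API port 8888

    def memory_url(self, path: str) -> str:
        # Join base URL and endpoint path without doubling slashes.
        return urljoin(self.base_url.rstrip("/") + "/", path.lstrip("/"))

    def store_payload(self, agent_id: str, text: str) -> dict:
        # Body an agent might send when persisting a memory (hypothetical shape).
        return {"agent_id": agent_id, "content": text}


client = HindsightClient("http://app.railway.internal:8888")
print(client.memory_url("memories"))  # http://app.railway.internal:8888/memories
```

Using Railway's private networking hostname keeps agent-to-API traffic off the public domain, which only needs to expose the Control Plane.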
Deployment Notes
This template uses ghcr.io/vectorize-io/hindsight:0.5.1 (pinned upstream tag, non-slim). Public traffic is routed to port 9999 (Control Plane), while the API runs on port 8888 internally. The LLM provider is set to none by default so the service boots without external model credentials; configure a provider and API key later if you need the full Reflect and LLM features.
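Enabling an LLM provider later amounts to setting environment variables on the Railway service and redeploying. The variable names below are hypothetical placeholders, not confirmed Hindsight configuration keys — consult the upstream docs for the exact names:

```shell
# Hypothetical variable names for illustration only; the exact keys
# Hindsight reads are not confirmed here. Set these in the Railway
# service's Variables tab rather than exporting them in a shell.
export HINDSIGHT_LLM_PROVIDER="openai"        # template default is effectively "none"
export HINDSIGHT_LLM_API_KEY="sk-placeholder" # replace with your provider API key
```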
Dependencies for Hindsight on Railway
This deployment uses a single all-in-one service image; the embedded PostgreSQL (pg0) means no external database is required.
Deployment Dependencies
| Service | Image | Port | Volume |
|---|---|---|---|
| app | ghcr.io/vectorize-io/hindsight:0.5.1 | 9999 | - |
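Because the template is a single pinned image, you can sketch the same setup locally with Docker before deploying; the ports mirror the template (9999 for the Control Plane, 8888 for the API):

```shell
# Run the pinned upstream image locally to poke at both ports.
docker run --rm -p 9999:9999 -p 8888:8888 ghcr.io/vectorize-io/hindsight:0.5.1
```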
Template Content