Deploy Atomic
An AI-native knowledge graph that's yours end-to-end.
Deploy and Host Atomic on Railway
Atomic is an AI-augmented personal knowledge base. Drop in freeform markdown notes ("atoms") and an asynchronous pipeline automatically chunks them, generates embeddings, extracts hierarchical tags, and builds semantic edges between related notes. On top sit LLM-synthesized wiki articles with citations, agentic RAG chat scoped to tags, and a force-directed canvas view.
About Hosting Atomic
Atomic runs as a single Rust binary backed by SQLite (with the sqlite-vec extension for vector search), so the hosting footprint is small — one process, one data directory. This template bundles atomic-server, the React web UI, and nginx into a single container behind one port. Persistent state — registry, knowledge bases, embeddings, wiki articles, conversations — lives in a single /data volume. On first boot you claim the instance through a setup wizard using ATOMIC_SETUP_TOKEN, then configure an AI provider (OpenRouter or Ollama) from the Settings UI. No external database, queue, or object storage is required.
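The one-process-one-volume layout described above amounts to supervisord running the server binary alongside nginx. A minimal sketch of what that supervisord config could look like (program names and exact directives are illustrative assumptions, not the template's actual file; the binary path and env vars come from the Dockerfile and defaults below):

```ini
[program:atomic-server]
; Rust binary from the upstream GHCR image; state lives under the /data volume
command=/usr/local/bin/atomic-server
environment=ATOMIC_STORAGE="sqlite",ATOMIC_DATA_DIR="/data"

[program:nginx]
; nginx serves the web bundle and proxies API traffic to atomic-server
command=nginx -g 'daemon off;'
```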
Common Use Cases
- Self-hosted "second brain" / Obsidian-alternative with semantic search, auto-tagging, and inline-cited wiki summaries across your notes
- Private research workspace where chat and wiki generation are scoped to a hierarchical tag tree of your sources
- Personal or team RAG endpoint exposed over the MCP protocol so Claude Desktop and other MCP clients can search your knowledge base
Dependencies for Atomic Hosting
- A Railway Volume mounted at /data (holds registry.db plus every per-knowledge-base SQLite file)
- An AI provider key — OpenRouter (cloud, default) or a reachable Ollama server for fully local models — configured from the Settings UI after first run
Deployment Dependencies
- Atomic source repo: github.com/kenforthewin/atomic
- Upstream server image: ghcr.io/kenforthewin/atomic-server
- Upstream web image: ghcr.io/kenforthewin/atomic-web
- Atomic self-hosting docs: docs/manual/self-hosting
- OpenRouter (default cloud AI provider): openrouter.ai
Implementation Details
The image is a thin nginx+supervisord wrapper that pulls the prebuilt atomic-server binary and the compiled web bundle directly from the upstream GHCR images — no Atomic source tree is fetched or built, so cold builds finish in seconds.
FROM ghcr.io/kenforthewin/atomic-server:latest AS server-src
FROM ghcr.io/kenforthewin/atomic-web:latest AS web-src
FROM nginx:1.28-bookworm
COPY --from=server-src /usr/local/bin/atomic-server /usr/local/bin/atomic-server
COPY --from=web-src /usr/share/nginx/html /usr/share/nginx/html
# nginx terminates on $PORT and proxies /api, /ws, /mcp, /oauth/, /.well-known/, /health
# to atomic-server on 127.0.0.1:8081; supervisord runs both.
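The proxy wiring in the comment above could look roughly like the following. This is an illustrative sketch, not the template's actual nginx.conf; directive details (the regex, WebSocket headers, how $PORT is substituted) are assumptions:

```nginx
server {
    listen ${PORT};                  # substituted at container start, e.g. via envsubst
    root /usr/share/nginx/html;

    # Static SPA bundle with client-side routing fallback
    location / {
        try_files $uri /index.html;
    }

    # API-shaped paths go to atomic-server on the loopback port
    location ~ ^/(api|ws|mcp|oauth/|\.well-known/|health) {
        proxy_pass http://127.0.0.1:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;    # needed for the /ws endpoint
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```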
The image bakes opinionated defaults (ATOMIC_STORAGE=sqlite, ATOMIC_DATA_DIR=/data, internal port wiring) so only two variables need to be set on Railway:
ATOMIC_SETUP_TOKEN=
PUBLIC_URL=https://${{RAILWAY_PUBLIC_DOMAIN}}
ATOMIC_SETUP_TOKEN is consumed once by the first-run setup wizard to claim the instance and mint the first API token. PUBLIC_URL is the address atomic-server advertises at /.well-known/oauth-authorization-server, which makes remote-MCP and OAuth flows discoverable.
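Generating the token before deploying can be done with openssl, as the template variable description suggests (the variable name is the one this template expects; 24 random bytes yields a 32-character base64 string):

```shell
# Generate the one-time setup token and print it in Railway variable form
ATOMIC_SETUP_TOKEN=$(openssl rand -base64 24)
echo "ATOMIC_SETUP_TOKEN=$ATOMIC_SETUP_TOKEN"
```

Paste the value into the service's Railway variables; the setup wizard consumes it on first boot and it is not needed again afterwards.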
Why Deploy Atomic on Railway?
Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.
By deploying Atomic on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
atomic-railway
youssefsiam38/atomic-railway

ATOMIC_SETUP_TOKEN
Token used to claim the instance from the setup UI on first start. Generate with: openssl rand -base64 24
