
Deploy ClingySOCKs
Deploy your own AI agents with a relational memory engine, instantly
Deploy and Host ClingySOCKs on Railway
ClingySOCKs is a relational memory engine for AI agents. It gives LLM-powered personas persistent, structured memory through a knowledge graph, enabling agents to recall context across conversations, track emotional dynamics, and build genuine long-term relationships with users.
About Hosting ClingySOCKs
ClingySOCKs deploys as two services: a FastAPI backend (Python) that handles memory storage, conversation harvesting, and LLM orchestration, and a React frontend served via Nginx. It requires a PostgreSQL database with pgvector for relational memory and vector similarity search. The backend connects to external LLM providers (Gemini, OpenAI, Anthropic, or self-hosted via Ollama) using API keys you supply. Railway's template deploys all three components — frontend, backend, and database — in a single click with automatic networking between services.
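Since the backend picks an LLM provider based on which API keys you supply, the selection logic can be sketched in plain Python. This is an illustration only: the environment variable names and the fallback behavior below are assumptions, not ClingySOCKs' actual code.

```python
import os

# Hypothetical env var names for illustration; check the template's
# variable list for the names ClingySOCKs actually reads.
PROVIDER_KEYS = [
    ("gemini", "GEMINI_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
]

def pick_provider(env=os.environ):
    """Return the first provider whose API key is present, falling
    back to a local Ollama endpoint when no key is configured."""
    for provider, var in PROVIDER_KEYS:
        if env.get(var):
            return provider
    return "ollama"
```

On Railway you would set whichever key you have as a service variable, and the backend routes requests accordingly.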
Common Use Cases
- AI companions with long-term memory — Build agents that remember past conversations, user preferences, and emotional context across sessions
- Conversation import and knowledge extraction — Import ChatGPT exports and automatically harvest structured memories into a relational graph
- Multi-agent systems — Run multiple personas with distinct personalities, each maintaining their own memory graph and conversational style
Dependencies for ClingySOCKs Hosting
- PostgreSQL with pgvector — Relational database with vector extension for memory storage and semantic search
- At least one LLM API key — OpenRouter (the template default), Gemini, OpenAI, Anthropic, or a local model endpoint (Ollama/LM Studio)
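pgvector's role here is nearest-neighbour search over memory embeddings. The idea behind its `<=>` cosine-distance operator can be sketched in plain Python; the toy three-dimensional vectors below stand in for real embeddings:

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as computed by pgvector's <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def nearest(query, memories):
    """Return the stored memory closest to the query embedding,
    i.e. what `ORDER BY embedding <=> query LIMIT 1` does in SQL."""
    return min(memories, key=lambda m: cosine_distance(query, m["embedding"]))

memories = [
    {"text": "user likes hiking", "embedding": [1.0, 0.0, 0.0]},
    {"text": "user dislikes rain", "embedding": [0.0, 1.0, 0.0]},
]
print(nearest([0.9, 0.1, 0.0], memories)["text"])  # user likes hiking
```

In production, the embeddings come from an embedding model and Postgres does this search with an index instead of a linear scan.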
Deployment Dependencies
- pgvector/pgvector Docker image — PostgreSQL 16 with vector extension
- LiteLLM — Unified LLM proxy supporting 100+ providers
- FastAPI — Backend framework
- Vite + React — Frontend build tooling
Implementation Details
Both services include health checks at /health for Railway's deployment monitoring:
```python
# Backend (FastAPI)
@app.get("/health")
async def health():
    return {"status": "ok", "service": "clingysocks-memory-api"}
```

```nginx
# Frontend (Nginx)
location = /health {
    access_log off;
    default_type application/json;
    return 200 '{"status":"ok","service":"clingysocks-frontend"}';
}
```
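Railway polls these endpoints and treats a successful response as healthy. For debugging a deploy yourself, a minimal sketch of validating a `/health` body looks like this (the function name is illustrative, not part of the codebase):

```python
import json

def is_healthy(body: bytes) -> bool:
    """Return True when a /health body matches the shape both
    ClingySOCKs services return: {"status": "ok", "service": ...}."""
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return isinstance(payload, dict) and payload.get("status") == "ok"

print(is_healthy(b'{"status":"ok","service":"clingysocks-frontend"}'))  # True
```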
Required environment variables for the backend:
```bash
DATABASE_URL=postgresql://user:pass@host:5432/dbname
ENCRYPTION_KEY=  # openssl rand -hex 32
JWT_SECRET=      # openssl rand -hex 32
```
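A startup guard for these variables can be sketched as follows, so a misconfigured deploy fails fast instead of at first request. The function is illustrative, not part of the ClingySOCKs codebase:

```python
import os

REQUIRED_VARS = ("DATABASE_URL", "ENCRYPTION_KEY", "JWT_SECRET")

def check_env(env=os.environ):
    """Raise early if any required backend variable is unset."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"missing required env vars: {', '.join(missing)}")
```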
The frontend requires one environment variable to discover the backend:
```bash
VITE_MEMORY_API_URL=https://.up.railway.app
```
Why Deploy ClingySOCKs on Railway?
Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while still letting you scale it vertically and horizontally.
By deploying ClingySOCKs on Railway, you are one step closer to running a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
OPENROUTER_API_KEY
REQUIRED: Your OpenRouter API key. Get one at https://openrouter.ai/keys — this is the only key you need to start using the app!