
Deploy OpenHands
OpenHands: AI agent for coding and deployment
Deploy and Host OpenHands on Railway
OpenHands is an open-source AI software engineer that plans, writes, runs, and debugs code through a web UI. It uses tools, executes commands, and keeps context across sessions, helping teams ship features, fix bugs, and automate repetitive engineering work while keeping control of data and models.
About Hosting OpenHands
Hosting OpenHands means running the web UI, the backend API, and a sandbox runtime where the agent executes tools. On Railway, you deploy a container, set LLM credentials, and optionally mount a volume so ~/.openhands data persists across deploys. You can expose a public URL for callbacks and web access. Most configuration options map to environment variables, so Railway variables are the primary way to tune behavior. WebSocket streaming is supported, and this fork prefers HTTP message sends with a WebSocket fallback for proxy reliability. If you use OpenHands Secrets, custom secrets are exported as env vars in the agent runtime.
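As a minimal sketch, a first deployment might set only the variables below (placeholder values; the variable names come from the list under Implementation Details, and the domain is an illustrative Railway URL):
# Minimal illustrative configuration — all values are placeholders
LLM_API_KEY="your-provider-key"
LLM_MODEL="provider/model-name"
OH_WEB_URL="https://your-app.up.railway.app"
OH_PERSISTENCE_DIR="~/.openhands"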
Common Use Cases
- Fix bugs and generate patches in a repo
- Prototype features and iterate faster with an AI pair engineer
- Run a self-hosted coding agent with a team UI
Protect the UI with a Password
Built-in Basic Auth
Enable Basic Auth directly in OpenHands by setting these Railway variables:
OPENHANDS_BASIC_AUTH_USER="admin"
OPENHANDS_BASIC_AUTH_PASSWORD="change-me"
# Optional: skip auth for health checks or other routes
OPENHANDS_BASIC_AUTH_EXEMPT_PATHS="/health,/healthz"
Notes:
- This protects both HTTP and WebSocket endpoints (including /sockets/* and /runtime/*/sockets/*).
- If either user or password is missing, auth is disabled.
- No proxy required.
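Once enabled, clients must supply the credentials on each request. For example, using curl against a hypothetical Railway domain with the credentials set above:
# Hypothetical domain; credentials match the variables above
curl -u admin:change-me https://your-app.up.railway.app/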
Dependencies for OpenHands Hosting
- Railway project with a Dockerfile build (optional persistent volume)
- LLM provider API key (OpenAI, Anthropic, Mistral, OpenHands, etc.)
Deployment Dependencies
- https://docs.openhands.dev/openhands/usage/advanced/configuration-options
- https://docs.openhands.dev/openhands/usage/llms
Implementation Details
Fork adjustments vs upstream:
- Railway-friendly Dockerfile with a multi-stage frontend/backend build
- Improved container runtime detection for Railway deployments
- HTTP-first message sending with WebSocket fallback
Optional environment variables (not exhaustive):
# Core LLM
LLM_API_KEY="..."
LLM_MODEL="..."
LLM_BASE_URL="https://api.example.com"
LLM_NUM_RETRIES="5"
# LLM advanced options (provider dependent)
LLM_API_VERSION="..."
LLM_EMBEDDING_MODEL="..."
LLM_EMBEDDING_DEPLOYMENT_NAME="..."
LLM_DROP_PARAMS="true"
LLM_DISABLE_VISION="true"
LLM_CACHING_PROMPT="true"
# URLs and persistence
OH_WEB_URL="https://your-domain"
OH_PERSISTENCE_DIR="~/.openhands"
# Runtime and sandbox
RUNTIME="local" # docker, local, or remote
SANDBOX_VOLUMES="/host/path:/workspace"
AGENT_SERVER_IMAGE_REPOSITORY="..."
AGENT_SERVER_IMAGE_TAG="..."
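For instance, pairing persistence with a Railway volume might look like the following (the /data mount point and host paths are illustrative assumptions, not defaults):
# Illustrative sketch: persist agent data on a Railway volume mounted at /data
OH_PERSISTENCE_DIR="/data/.openhands"
# Keep the sandbox workspace on the same volume (host path is an assumption)
SANDBOX_VOLUMES="/data/workspace:/workspace"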
Why Deploy OpenHands on Railway?
Railway is a singular platform to deploy your infrastructure stack. Railway will host your infrastructure so you don't have to deal with configuration, while allowing you to vertically and horizontally scale it.
By deploying OpenHands on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
OpenHands
XavTo/OpenHands-Fork
