Deploy NextChat
An open-source ChatGPT UI alternative.
Deploy and Host NextChat on Railway
NextChat (formerly ChatGPT-Next-Web) is an open-source, self-hostable AI chat interface built with Next.js. It supports OpenAI, Azure OpenAI, Google Gemini, Anthropic Claude, and a dozen other LLM providers through a clean, fast web UI — with all conversation data stored locally in the browser by default.
About Hosting NextChat
NextChat is a stateless Next.js application — it proxies API requests to your chosen LLM provider and stores all chat history client-side, so no database is required. Hosting your own instance lets you embed a server-side API key (so users don't need their own), restrict access with a shared password via CODE, control which models are available, and point the app at a custom API endpoint or self-hosted model backend. This Railway template deploys the official Docker image and exposes NextChat on a Railway-provided HTTPS domain immediately after deployment.
Common Use Cases
- Team AI access — deploy a shared instance with a server-side API key so teammates can use GPT-4, Claude, or Gemini without each managing their own credentials
- Custom model frontend — use NextChat as a web UI for a self-hosted model backend (LocalAI, Ollama, RWKV-Runner) by pointing `BASE_URL` at your inference server
- Controlled access — restrict usage to authorised users via the `CODE` password mechanism, and lock down model selection with `CUSTOM_MODELS` and `DISABLE_GPT4`
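For the self-hosted backend case, the variables might look like this (a sketch — the hostname, port, and model names below are placeholders; substitute your own OpenAI-compatible endpoint):

```shell
# Hypothetical values — replace with your own inference server and models
BASE_URL=http://ollama.internal:11434   # OpenAI-compatible inference endpoint
CUSTOM_MODELS=-all,+llama3,+mistral     # hide provider defaults, expose local models
CODE=yourpassword                       # optional shared access password
```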
Dependencies for NextChat Hosting
- An API key for at least one supported LLM provider (OpenAI, Azure, Google Gemini, Anthropic, etc.)
Implementation Details
Set the following environment variables in your Railway service settings. Only `OPENAI_API_KEY` (or an equivalent provider key) is required to get started.
Core
```shell
OPENAI_API_KEY=sk-xxxxxxxx        # Required if using OpenAI; comma-separate multiple keys
CODE=yourpassword                 # Optional: shared access password for the instance
BASE_URL=https://api.openai.com   # Override to use an OpenAI-compatible proxy or self-hosted endpoint
```
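If you prefer the command line to the dashboard, variables can also be set with the Railway CLI. A sketch follows — flag syntax may differ between CLI versions, so check `railway variables --help` for your install:

```shell
# Assumes the Railway CLI is installed and linked to this project/service
railway variables --set "OPENAI_API_KEY=sk-xxxxxxxx" --set "CODE=yourpassword"

# List the variables currently set on the service
railway variables
```

Railway typically picks up variable changes on the next deployment of the service.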
Provider alternatives (use instead of or alongside OpenAI)
```shell
AZURE_URL=https://{resource}.openai.azure.com/openai/deployments/{deployment}
AZURE_API_KEY=azure_key_xxxxx
AZURE_API_VERSION=2024-02-01
GOOGLE_API_KEY=google_key_xxxxx
GOOGLE_URL=https://generativelanguage.googleapis.com/
```
Access & model control
```shell
HIDE_USER_API_KEY=1                             # Prevent users from entering their own API key
DISABLE_GPT4=1                                  # Hide GPT-4 models from the UI
CUSTOM_MODELS=+llama3,+mistral,-gpt-3.5-turbo   # Add (+) or remove (-) models from the list
DEFAULT_MODEL=gpt-4o                            # Set the default model on first load
```
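To make the `CUSTOM_MODELS` semantics concrete, here is an illustrative sketch of how the documented rules compose (this is not NextChat's actual source code — just the stated behaviour: `+name` adds a model, `-name` removes one, and `-all` clears the default list):

```python
def apply_custom_models(default_models, custom_models):
    """Apply a CUSTOM_MODELS-style string to a list of default models.

    Illustrative only: '+name' adds, '-name' removes, '-all' clears.
    """
    models = list(default_models)
    for entry in custom_models.split(","):
        entry = entry.strip()
        if entry == "-all":
            models = []                 # drop every provider default
        elif entry.startswith("+"):
            name = entry[1:]
            if name not in models:
                models.append(name)     # expose an extra model
        elif entry.startswith("-"):
            name = entry[1:]
            if name in models:
                models.remove(name)     # hide a default model
    return models

print(apply_custom_models(["gpt-3.5-turbo", "gpt-4o"],
                          "+llama3,+mistral,-gpt-3.5-turbo"))
# → ['gpt-4o', 'llama3', 'mistral']
```

So the example value above would hide `gpt-3.5-turbo` while adding `llama3` and `mistral` alongside the remaining defaults.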
Why Deploy NextChat on Railway?
Railway is a unified platform for deploying your infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while letting you scale it vertically and horizontally.
By deploying NextChat on Railway, you add a ready-to-use AI chat frontend to your stack with minimal operational burden. Host your servers, databases, AI agents, and more on Railway.
Template Content
NextChat
Yidadaa/ChatGPT-Next-Web