Deploy Promptfoo

An open-source tool for testing, evaluating, and red-teaming LLM apps.

Deploy and Host Promptfoo on Railway

Promptfoo is an open-source tool for testing, evaluating, and red-teaming LLM applications. The self-hosted version provides a shared web UI and API server for storing and comparing eval results — making it easy to run evals in CI/CD pipelines, aggregate results across runs, and keep sensitive prompt and output data off third-party services.

About Hosting Promptfoo

The self-hosted Promptfoo image is an Express server that serves the web UI and stores eval results in a SQLite database. This Railway template deploys the official Docker image with a persistent volume so eval history survives restarts and redeploys. It is designed for individual or small-team use — it does not support horizontal scaling, multi-team access control, or SSO. For production enterprise use, see Promptfoo Enterprise.

Common Use Cases

  • Eval sharing — run promptfoo eval --share from a local machine or CI pipeline and publish results to your private Railway instance instead of the public promptfoo.app cloud
  • CI/CD eval aggregation — integrate Promptfoo into GitHub Actions or other pipelines to run automated prompt regression tests and store results centrally for review
  • Red teaming — run adversarial prompt tests against your LLM application and review attack results and failure modes through the web UI
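As a sketch of the CI/CD case, the step that publishes results to a self-hosted instance boils down to two environment variables plus one CLI invocation. The instance URL below is a placeholder, and a `promptfooconfig.yaml` is assumed to exist in the repository:

```shell
# Point the promptfoo CLI at the self-hosted instance instead of promptfoo.app.
# (Placeholder URL: substitute your Railway service's public domain.)
export PROMPTFOO_REMOTE_API_BASE_URL="https://your-instance.railway.app"
export PROMPTFOO_SHARE_STORE_TYPE="database"

# In a CI job (e.g. a GitHub Actions step), this would be followed by:
#   npx promptfoo@latest eval --share
# which runs the eval defined in promptfooconfig.yaml and uploads the results.
```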

Dependencies for Promptfoo Hosting

  • Promptfoo Docker image (ghcr.io/promptfoo/promptfoo:latest)
  • Persistent volume mounted at /home/promptfoo/.promptfoo — required to retain the SQLite database (promptfoo.db) across deployments
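For reference, the same image-plus-volume layout can be reproduced locally with Docker. This is a sketch of the equivalent local setup, not the Railway deployment itself; the host path `./promptfoo-data` is an arbitrary example:

```shell
# Run the official image with a persistent volume at the same mount point
# Railway uses, so eval history (promptfoo.db) survives container restarts.
docker run -d \
  --name promptfoo \
  -p 3000:3000 \
  -v "$(pwd)/promptfoo-data:/home/promptfoo/.promptfoo" \
  ghcr.io/promptfoo/promptfoo:latest
```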

Implementation Details

The following environment variables are pre-configured by the template:

HOME=/home/promptfoo        # Pre-filled: sets the home directory for the promptfoo user
PORT=3000                   # Pre-filled: port the Express server listens on
OPENAI_API_KEY=             # Optional: set to allow users to run OpenAI evals from the web UI

Additional API keys for other providers can be added as environment variables in Railway's service settings:

ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
GOOGLE_API_KEY=xxxxxxxx
AZURE_OPENAI_API_KEY=xxxxxxxx

Pointing a Local Promptfoo Install at Your Railway Instance

To share eval results to your self-hosted server instead of the public cloud, set the following in your local environment:

PROMPTFOO_REMOTE_API_BASE_URL=https://your-instance.railway.app
PROMPTFOO_SHARE_STORE_TYPE=database

Then run evals normally — promptfoo eval --share will publish results to your Railway instance.
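Before sharing from CI or a laptop, it can help to confirm the instance is reachable. A minimal check against the web UI root (placeholder URL) might be:

```shell
# Expect an HTTP 200 from the Promptfoo web UI; any other status (or a
# connection failure, which -f turns into a nonzero exit) means the
# instance isn't ready yet.
curl -fsS -o /dev/null -w "%{http_code}\n" "https://your-instance.railway.app/"
```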

Why Deploy Promptfoo on Railway?

Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while letting you scale it both vertically and horizontally.

By deploying Promptfoo on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

