Deploy MLflow

Deploy and Host a production-ready MLflow instance on Railway

This template deploys four services:

  • Postgres — railwayapp-templates/postgres-ssl:17, with a volume mounted at /var/lib/postgresql/data
  • MLflow — MykalMachon/railway-mlflow-stack
  • Caddy — MykalMachon/railway-mlflow-stack
  • MinIO — MykalMachon/railway-mlflow-stack, with a volume mounted at /minio/data

Deploy and Host MLflow on Railway

MLflow is the open-source standard for managing the machine learning lifecycle. It handles experiment tracking, model packaging, and deployment with a mix of online services and local Python tooling. By standardizing workflows across tools and frameworks, MLflow makes collaboration, reproducibility, and scaling ML systems easier in both research and production environments.

About Hosting MLflow

This template provides MLflow preconfigured with:

  • Caddy for authentication and reverse proxying
  • MinIO for artifact storage
  • PostgreSQL for backend storage

all deployable in a single click on Railway!

Once the services are healthy, configure your local environment with the Railway-generated username and password, and you’re ready to build production-ready ML/AI systems.

This template is based on MLflow’s guide for Remote Experiment Tracking with MLflow Tracking Server, which outlines how to remotely access the MLflow artifact store, backend store, and shared tracking server; overall making collaboration across your ML/AI teams seamless.
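As a minimal sketch, the local client can be pointed at the deployed stack entirely through environment variables (the URL and credentials below are placeholders — substitute the public URL Railway generates for the caddy service and the values of AUTH_USERNAME / AUTH_PASSWORD):

```python
import os

# Placeholder values — replace with the URL and credentials Railway
# generated for your caddy service.
os.environ["MLFLOW_TRACKING_URI"] = "https://your-caddy-service.up.railway.app"
os.environ["MLFLOW_TRACKING_USERNAME"] = "admin"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "your-generated-password"

# Any MLflow client code run after this point (mlflow.start_run, the
# model registry, artifact logging) will talk to the remote tracking
# server and authenticate through Caddy's basic auth.
```

These three variables are read by MLflow's Python tooling, so no code changes are needed beyond setting them before your training script runs.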

Common Use Cases

This template enables a comprehensive set of MLOps use cases:

  • Track and compare experiments: Log parameters, metrics, and outputs from every training run. Quickly compare results to see which models perform best and why.
  • Centralized artifact storage: Store models, plots, logs, and other outputs in MinIO, making them easy to retrieve, share, and keep organized.
  • Manage model lifecycles: Use the model registry to version, annotate, and promote models from experimentation through staging and into production.
  • Keep data reproducible: Record dataset versions or URIs so you always know exactly which data was used to train a model—even if the raw data itself lives elsewhere.
  • Package models for reuse: Automatically capture dependencies and package models into portable formats like Docker images or Conda environments, ensuring consistent results everywhere.
  • Deploy models with ease: Serve trained models locally or via Docker containers that you can push up to a registry, and deploy back to Railway!

Dependencies for MLflow Hosting

The Railway template includes all required dependencies:

  • a Caddy HTTP gateway / reverse proxy that provides authentication and HTTP logging.
  • an MLflow tracking server that acts as the primary API and connects everything together.
  • a PostgreSQL database that acts as the MLflow Backend Store.
  • a MinIO S3-compatible file store that acts as the MLflow Artifact Store.

Deployment Dependencies

The MLflow documentation is crucial for understanding how to best utilize all of the great tooling MLflow provides.

Implementation Details

Quick start guide

  1. Click "Deploy on Railway" and optionally set a custom username on the caddy service.
  2. Wait 3-5 minutes for deployment, then access your MLflow tracking UI via the auto-generated URL on the caddy service.
  3. Login with your username and the auto-generated password.
  4. Look around in the UI and optionally create some experiments/models for use later.
  5. Setup your local development environment and start using your new MLflow setup!

To make local setup simple, I've put together a minimal example of a development environment that is tested and validated against this template.

Environment variables

The only environment variables you're likely to need to change are:

Variable         Service   Description                                 Default
AUTH_USERNAME    Caddy     Your basic authentication username          admin
AUTH_PASSWORD    Caddy     Your basic authentication password          Auto-generated
MLFLOW_VERSION   MLflow    The version of MLflow you want to deploy    N/A (defaults to v3.3.1)

Version Control

If you'd like to lock your install to a newer or older version, you can pin the version of MLflow you'd like to deploy by setting the MLFLOW_VERSION environment variable on the MLflow service. By default, the template uses v3.3.1.

Authentication via Caddy

MLflow’s built-in authentication features are still experimental and are not recommended for a production environment.

With that in mind, this template uses Caddy as a reverse proxy with basic authentication. This follows MLflow’s best practices for production deployments and is fully supported by MLflow’s Python tooling.

For an example of authenticating with this service via MLflow's Python SDK, see the example development environment.

Extending the default MLflow Dockerfile

The default MLflow Dockerfile ships with only the essentials.

This template extends that image and applies MLflow’s recommended best practices, making the deployment both 1-click ready and extensible for your future needs.

Deploying your MLflow trained models on Railway

You can deploy trained MLflow models directly to Railway:

  1. Use MLflow to build a docker image from your trained model.
  2. Push the image to your preferred container registry.
  3. Follow Railway's guide to deploy Docker images; public or private.

For more details check the MLflow Serving docs and their guide on deploying to Kubernetes.

With a little bit of custom automation code, you could use this template as the core of an MLOps deployment pipeline!

Why Deploy MLflow on Railway?

Railway is a single platform for deploying your entire infrastructure stack. Railway hosts your infrastructure so you don't have to deal with configuration, while allowing you to scale it vertically and horizontally.

By deploying MLflow on Railway, you are one step closer to supporting a complete full-stack application with minimal burden. Host your servers, databases, AI agents, and more on Railway.

This setup is designed to be extensible. You can layer in CI/CD pipelines, automated model promotion, or connect it with orchestration tools like Airflow or Prefect.

