Deploy Switchyard
Pluggable infrastructure platform. Autoscaling, job scheduling, and more.
This template deploys the following services:
- Redis (bitnami/redis:7.2.5), with a volume at /bitnami
- autoscaler (ferretcode/switchyard/autoscale)
- dashboard (ferretcode/switchyard/dashboard)
- configurator (ferretcode/switchyard/configurator)
- scheduler (ferretcode/switchyard/scheduler)
- incident (ferretcode/switchyard/incident)
- feature-flags (ferretcode/switchyard/feature-flags)
- Postgres (railwayapp-templates/postgres-ssl:16), with a volume at /var/lib/postgresql/data
- locomotive (ferretcode/locomotive:latest)
- RabbitMQ (rabbitmq:management), with a volume at /var/lib/rabbitmq
- RabbitMQ Web UI (brody192/railway-public-to-private-proxy)
Switchyard
A plug-and-play Railway template for feature flags, autoscaling, worker scheduling, and observability.
Read the whitepaper here.
Overview 📦
- Manage feature flags with simple configuration and robust, contextual rules
- Autoscale services based on custom thresholds
- Offload system work to an autoscaling worker pool with custom job handlers
- Incident detection and reporting based on analysis of logs, metrics, and service statuses
Architecture ☁️
Job Scheduling
- When your services need to offload time-consuming work, they can send requests to Switchyard to queue it up (a sketch of such a request follows this list)
- Switchyard will push work requests onto a queue for workers to process
- You define custom work handlers to process jobs, deployed as normal Railway services
- Switchyard automatically handles scaling by analyzing worker load and job requests
- NOTE: Switchyard uses the same scaling algorithm described under "A note on autoscaling" below to determine how to scale workers.
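For illustration, here is a minimal sketch of how a host app might hand work to the scheduler. The /jobs path, the internal hostname, and the exact payload shape are assumptions; only the job_id and job_context field names come from this README, so check the Switchyard repository for the real request format.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// JobRequest mirrors the job_id / job_context fields this README describes.
// The exact payload shape is an assumption for illustration only.
type JobRequest struct {
	JobID      string         `json:"job_id"`
	JobContext map[string]any `json:"job_context"`
}

// enqueueJob sends a work request to the scheduler. The "/jobs" path is
// hypothetical; substitute the scheduler's real endpoint.
func enqueueJob(schedulerURL string, job JobRequest) error {
	body, err := json.Marshal(job)
	if err != nil {
		return err
	}

	resp, err := http.Post(schedulerURL+"/jobs", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 300 {
		return fmt.Errorf("scheduler returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Hostname, port, and job fields below are placeholders.
	job := JobRequest{
		JobID:      "example-job-id",
		JobContext: map[string]any{"action": "resize-image", "object_key": "uploads/1.png"},
	}
	if err := enqueueJob("http://scheduler.railway.internal:8080", job); err != nil {
		fmt.Println("enqueue failed:", err)
	}
}
```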
A note on autoscaling
- Switchyard's job scheduler also supports general service autoscaling:
- Configure services for Switchyard to watch
- Set autoscaling thresholds for Switchyard to scale by
- Switchyard uses a robust algorithm for handling normal usage, spiked usage, and sustained high usage
- Switchyard will automatically scale your service replicas up and down in Railway
Worker Considerations
- Your workers must use manual message acknowledgement, due to how Switchyard declares queues to ensure quality of service.
- Switchyard exposes an option that lets you set how many messages each worker can process at once.
- Workers pull directly from the message bus on the jobs topic; each message contains a job_id and a job_context field.
- The job context is information provided directly by the host app that tells the worker what action to perform.
- The job ID is used by both the worker and the scheduler to ensure job idempotency.
- The worker should check the Redis cache with the provided job ID to ensure the job has not already been processed.
Find some example worker code here.
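As a rough illustration of those considerations, below is a minimal Go worker sketch: it uses manual acknowledgement, limits prefetch to one message at a time, and checks Redis with the job_id before doing any work. The connection strings, queue name, and Redis key format are assumptions; the real queue and topic declarations are handled by Switchyard.

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// Connection details are placeholders; use the values Railway injects.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	// Limit how many unacknowledged messages this worker holds at once
	// (the per-worker concurrency option mentioned above).
	if err := ch.Qos(1, 0, false); err != nil {
		log.Fatal(err)
	}

	// Switchyard declares the queues; consuming from a queue named "jobs"
	// here is an assumption based on the topic name in this README.
	// autoAck is false, so every message must be acknowledged manually.
	msgs, err := ch.Consume("jobs", "", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	for msg := range msgs {
		var job struct {
			JobID      string          `json:"job_id"`
			JobContext json.RawMessage `json:"job_context"`
		}
		if err := json.Unmarshal(msg.Body, &job); err != nil {
			msg.Nack(false, false) // drop malformed messages
			continue
		}

		// Idempotency check: skip jobs already recorded in Redis.
		// The key format is an assumption.
		if seen, err := rdb.Exists(ctx, "job:"+job.JobID).Result(); err == nil && seen > 0 {
			msg.Ack(false)
			continue
		}

		// ... perform the work described by job.JobContext ...

		rdb.Set(ctx, "job:"+job.JobID, "done", 0)
		msg.Ack(false) // manual acknowledgement, as required
	}
}
```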
Incident Reporting
- Switchyard automatically watches and processes logs from your services
- Based on metrics like resource usage, service status, error log frequency, and other configurable signals, Switchyard can report incidents to an external source
- Your services are also monitored for notable status changes
- Switchyard forwards incident reports to a custom webhook ingest URL
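If you want to receive those reports yourself, the ingest endpoint can be as small as the sketch below. The payload fields shown are placeholders, since the actual report schema is defined by Switchyard's incident service.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// IncidentReport is a placeholder shape; consult Switchyard's incident
// service for the real report schema.
type IncidentReport struct {
	Service string         `json:"service"`
	Summary string         `json:"summary"`
	Details map[string]any `json:"details"`
}

func main() {
	http.HandleFunc("/incidents", func(w http.ResponseWriter, r *http.Request) {
		var report IncidentReport
		if err := json.NewDecoder(r.Body).Decode(&report); err != nil {
			http.Error(w, "bad payload", http.StatusBadRequest)
			return
		}
		// Route the incident to your paging or alerting tool of choice here.
		log.Printf("incident for %s: %s", report.Service, report.Summary)
		w.WriteHeader(http.StatusNoContent)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```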
Feature Flags
- In the dashboard, create new feature flags
- Define custom rules based on user context to determine how the feature is enabled conditionally
- e.g., certain user groups/roles, a percentage of users, etc.
- Your services can query Switchyard with the user context to determine which path to take (see the sketch below)
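To make that concrete, here is a hedged sketch of a flag check from a Go service. The /flags/evaluate path, the request body, and the response shape are assumptions, so adapt them to the feature-flags service's actual API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// flagEnabled asks Switchyard whether a flag is enabled for the given user
// context. The "/flags/evaluate" path and the request/response shapes are
// assumptions for illustration.
func flagEnabled(switchyardURL, flag string, userCtx map[string]any) (bool, error) {
	payload, err := json.Marshal(map[string]any{
		"flag":    flag,
		"context": userCtx,
	})
	if err != nil {
		return false, err
	}

	resp, err := http.Post(switchyardURL+"/flags/evaluate", "application/json", bytes.NewReader(payload))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var result struct {
		Enabled bool `json:"enabled"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return false, err
	}
	return result.Enabled, nil
}

func main() {
	// Example: gate a new checkout flow on the user's group.
	// The hostname, flag name, and context fields are placeholders.
	on, err := flagEnabled("http://feature-flags.railway.internal:8080", "new-checkout",
		map[string]any{"user_id": "123", "group": "beta-testers"})
	if err != nil || !on {
		fmt.Println("taking the default path")
		return
	}
	fmt.Println("taking the new path")
}
```

A service would call something like flagEnabled with the user's role or group and branch on the result, falling back to the default path when the request fails.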
Platform Observability
- The template also deploys a Prometheus + Grafana instance to handle Switchyard's internal service insights and metrics
One-Click Deploy 📥
- Deploy on Railway in one click:
- Make sure you deploy in the same environment as your targeted application.
Developing Locally 🧪
- Before deploying on Railway, you can use docker-compose to run Switchyard locally, alongside your app:
git clone https://github.com/ferretcode/switchyard.git
cd switchyard
docker compose up --build
goose up # in another shell
Then:
- Access the frontend at http://localhost:3003
- Make web requests to Switchyard in your app
- Test out job scheduling
- Autoscaling is a no-op action in local development.
- Configure feature flags
- Identify failure points
Deploy and Host
Deploy this template along with your application in the target environment, and fill out the environment variables for each required service.
About Hosting
Switchyard is an infrastructure platform that supplements Railway's existing deployment features, adding autoscaling, job scheduling, feature flags, and other observability features to your application.
Dependencies for Switchyard
Several features available in Switchyard require direct API calls from your app.
Deployment Dependencies
Switchyard requires PostgreSQL, Redis, RabbitMQ, and locomotive.
Why Deploy
If you need autoscaling, observability, feature flags, or job scheduling in your app, Switchyard provides those features in an open-source, pluggable package.
Common Use Cases
High-traffic apps deployed on Railway that need horizontal autoscaling are a natural fit for Switchyard. Apps that want incident detection and custom incident handling will also find it useful.
Template Content
- Redis (bitnami/redis:7.2.5)
- autoscaler (ghcr.io/ferretcode/switchyard/autoscale)
  - RAILWAY_API_KEY: Railway API key
- configurator (ghcr.io/ferretcode/switchyard/configurator)
  - RAILWAY_API_KEY: Railway API key
- RAILWAY_API_KEY: Railway API key
- feature-flags (ghcr.io/ferretcode/switchyard/feature-flags)
- locomotive (ghcr.io/ferretcode/locomotive:latest)
  - RAILWAY_API_KEY: Railway API key
- RabbitMQ (rabbitmq:management)
- RabbitMQ Web UI (ghcr.io/brody192/railway-public-to-private-proxy)