Promptmodel
Host, engineer and manage prompt & model configurations.
Services
- PG Listener: weavel-ai/promptmodel
- Backend: weavel-ai/promptmodel
- Redis: bitnami/redis (volume: /bitnami)
- Frontend: weavel-ai/promptmodel
- Postgres: railwayapp-templates/postgres-ssl:latest (volume: /var/lib/postgresql/data)
Promptmodel is an open-source LLMOps platform for building & integrating AI into your product. Prompt engineering is the fastest way to build a customized AI feature that solves problems such as summarization, content filtering, customer support automation, and image classification.
You can easily build and refine your AI models on Promptmodel's intuitive prompt engineering dashboard. The platform offers tools for tracking prompt versions, running A/B tests, monitoring production logs, and deploying new prompts seamlessly, without code changes or redeployment.
Using this template, you can deploy Promptmodel to Railway. It automatically creates a Postgres DB to store your prompts and production logs.
This template includes:
- Frontend dashboard (Next.js)
- Backend server (FastAPI)
- Redis (for realtime updates)
- PostgreSQL database
- PG Listener (Python script for realtime updates)
You should update the following environment variables (see the example after this list):
- Frontend
  - BACKEND_PUBLIC_URL: Update this to reflect the backend's public URL.
  - FRONTEND_PUBLIC_URL: Update this to reflect the frontend's public URL.
- Backend
  - FRONTEND_PUBLIC_URL: Update this to reflect the frontend's public URL.
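As a rough sketch, assuming the backend and frontend services receive the placeholder Railway domains shown below (replace them with the domains generated for your deployment), the values would look like this:

```
# Frontend service (placeholder domains)
BACKEND_PUBLIC_URL=https://backend-production.up.railway.app
FRONTEND_PUBLIC_URL=https://frontend-production.up.railway.app

# Backend service
FRONTEND_PUBLIC_URL=https://frontend-production.up.railway.app
```

If the services keep the names "Backend" and "Frontend", Railway reference variables such as `https://${{Backend.RAILWAY_PUBLIC_DOMAIN}}` can be used instead, so the values stay correct if the generated domains change.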
If you run into errors, try restarting the containers, or join the Discord to get help: https://promptmodel.run/discord
Template Content
- PG Listener: weavel-ai/promptmodel
- Backend: weavel-ai/promptmodel (NEXTAUTH_SECRET)
- Redis: bitnami/redis
- Frontend: weavel-ai/promptmodel (NEXTAUTH_SECRET)