Personal AI assistant with memory, skills, and multi-channel support
Deploy and Host QwenPaw on Railway
About Hosting QwenPaw on Railway
QwenPaw is a personal AI assistant platform. Railway runs it as a managed Docker deployment with automatic networking, logs, and simple scaling.
Tech Stack
- QwenPaw (Python + Node.js console)
- Docker image: agentscope/qwenpaw:v1.1.0
- Railway managed runtime
- Persistent volume storage
Why Deploy QwenPaw on Railway
- Fast image-based deployment with no server maintenance
- Built-in domain, TLS, and centralized logs
- Easy environment-variable based configuration
- One volume mount for durable memory and configuration data
Common Use Cases
- Personal AI assistant with long-term memory
- Multi-channel bot hub for chat platforms
- Skill-based automation and scheduled tasks
- Secure self-hosted assistant experiments
Deployment Notes
- Service image is pinned to `agentscope/qwenpaw:v1.1.0`.
- App traffic routes through `PORT=8088`; the QwenPaw runtime port is `QWENPAW_PORT=8088`.
- Persistent data is mounted at `/app/working`; secrets are redirected to `/app/working/.secret` when `/app/working/.secret` is not separately mounted.
- After first login, configure model provider API keys in the Console settings.
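For reference, the service settings above can be sketched as an equivalent local `docker run` (a sketch only — on Railway the platform injects `PORT`, provisions the volume, and handles TLS; the local data-directory name here is an assumption, not part of the template):

```shell
# Mirror the Railway service locally: pinned image, matching port
# variables, and a host directory standing in for the volume mount.
docker run -d \
  --name qwenpaw \
  -p 8088:8088 \
  -e PORT=8088 \
  -e QWENPAW_PORT=8088 \
  -v "$(pwd)/qwenpaw-data:/app/working" \
  agentscope/qwenpaw:v1.1.0
```

The host directory plays the role of the persistent volume, so memory and configuration written under `/app/working` survive container restarts.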
Dependencies for QwenPaw on Railway
This deployment uses a single HTTP application service.
Deployment Dependencies
| Service | Image | Port | Volume |
|---|---|---|---|
| qwenpaw | agentscope/qwenpaw:v1.1.0 | 8088 | /app/working |