Deploy and Host QwenPaw on Railway

Personal AI assistant with memory, skills, and multi-channel support

About Hosting QwenPaw on Railway

QwenPaw is a personal AI assistant platform. Railway runs it as a managed Docker deployment with automatic networking, logs, and simple scaling.

Tech Stack

  • QwenPaw (Python + Node.js console)
  • Docker image: agentscope/qwenpaw:v1.1.0
  • Railway managed runtime
  • Persistent volume storage

Why Deploy QwenPaw on Railway

  • Fast image-based deployment with no server maintenance
  • Built-in domain, TLS, and centralized logs
  • Easy environment-variable based configuration
  • One volume mount for durable memory and configuration data

Common Use Cases

  • Personal AI assistant with long-term memory
  • Multi-channel bot hub for chat platforms
  • Skill-based automation and scheduled tasks
  • Secure self-hosted assistant experiments

Deployment Notes

  • Service image is pinned to agentscope/qwenpaw:v1.1.0.
  • Railway routes app traffic to PORT=8088, which matches the QwenPaw runtime port QWENPAW_PORT=8088.
  • Persistent data is mounted at /app/working; secrets are redirected to /app/working/.secret when that path is not separately mounted.
  • After first login, configure model provider API keys in Console settings.
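The notes above can be mirrored locally as a rough Docker equivalent. This is a sketch under the assumptions stated in the notes (pinned image, matching PORT/QWENPAW_PORT, one volume at /app/working); the `qwenpaw-data` volume name is illustrative, and nothing here is Railway-specific:

```shell
# Run the pinned image locally, mirroring the Railway service settings.
# PORT and QWENPAW_PORT both come from the deployment notes above;
# the named volume stands in for Railway's persistent volume mount.
docker run -d \
  --name qwenpaw \
  -e PORT=8088 \
  -e QWENPAW_PORT=8088 \
  -p 8088:8088 \
  -v qwenpaw-data:/app/working \
  agentscope/qwenpaw:v1.1.0
```

The named volume keeps memory, configuration, and the redirected secrets directory durable across container restarts, analogous to the Railway volume mount.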

Dependencies for QwenPaw on Railway

This deployment uses a single HTTP application service.

Deployment Dependencies

Service   Image                        Port   Volume
qwenpaw   agentscope/qwenpaw:v1.1.0    8088   /app/working
