
Deploy SeaweedFS | Open Source S3, MinIO Alternative
Self Host SeaweedFS. Object storage with S3 compatibility & full S3 API
Deploy and Host SeaweedFS on Railway
Deploy SeaweedFS as a self-hosted S3-compatible object storage system on Railway with one click. SeaweedFS is a distributed file system optimized for storing and serving billions of files with O(1) disk seek — making it ideal for media storage, backups, and any application that speaks the S3 protocol.
This Railway template pre-configures SeaweedFS with master, volume server, filer, and S3 gateway running as a single service. It includes persistent volume storage, S3 authentication with access/secret keys, and a public endpoint ready for S3 client connections.
Getting Started with SeaweedFS on Railway
After deployment completes, your SeaweedFS S3 endpoint is live at the generated Railway domain. Configure any S3-compatible client (AWS CLI, s3cmd, boto3, rclone) using the S3_ACCESS_KEY and S3_SECRET_KEY from your Railway environment variables.
To create your first bucket and upload a file using AWS CLI:
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws --endpoint-url https://your-seaweedfs.up.railway.app s3 mb s3://my-bucket
aws --endpoint-url https://your-seaweedfs.up.railway.app s3 cp ./file.txt s3://my-bucket/
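The same credentials also work from rclone. A matching remote in rclone.conf could look like the sketch below (the remote name `seaweedfs` and the endpoint are placeholders for your own values):

```ini
[seaweedfs]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://your-seaweedfs.up.railway.app
```

With that in place, `rclone ls seaweedfs:my-bucket` lists the bucket contents.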
The filer UI is available on port 8888 internally for browsing stored files. The master dashboard runs on port 9333 for cluster status monitoring.
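Since only the S3 gateway is exposed on the public Railway domain, the quickest health check is against the master's HTTP API from inside the service (the localhost addresses below assume the single-service layout this template uses):

```shell
# Cluster topology and volume status as JSON, from inside the container
curl -s http://localhost:9333/cluster/status
curl -s http://localhost:9333/dir/status
```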
About Hosting SeaweedFS
SeaweedFS is an open-source (Apache 2.0) distributed file system inspired by Facebook's Haystack paper. It solves the "billions of small files" problem that traditional file systems and object stores handle poorly — each file requires only 40 bytes of metadata overhead and a single O(1) disk read to serve.
Key features:
- Full Amazon S3 API compatibility (buckets, multipart uploads, versioning, lifecycle policies, object lock)
- WebDAV support for mounting as a network drive
- POSIX FUSE mount for local directory access
- Cloud tiering to AWS S3, Google Cloud Storage, Azure, and Backblaze
- Erasure coding (RS 10,4) for cost-efficient warm storage
- Filer metadata backed by LevelDB, MySQL, Postgres, Redis, or 10+ other stores
- Cross-datacenter active-active async replication
- Kubernetes CSI driver for persistent volumes
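As a sketch of the FUSE option above, a running filer can be mounted as a local directory with the weed binary (the mount path and filer address here are assumptions; this requires FUSE installed and a reachable filer on port 8888):

```shell
# Mount the filer's root as a local directory
# (unmount later with: sudo umount /mnt/seaweedfs)
sudo mkdir -p /mnt/seaweedfs
sudo weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs
```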
Why Deploy SeaweedFS on Railway
Self-host SeaweedFS on Railway for managed S3-compatible storage without AWS vendor lock-in:
- One-click deploy with persistent volume and S3 auth pre-configured
- No AWS bills — pay only for Railway infrastructure usage
- Full S3 API means any existing S3 client or SDK works unchanged
- Single container runs the entire stack (master + volume + filer + S3)
- Scale storage by adjusting Railway volume size — no cluster management
Common Use Cases for Self-Hosted SeaweedFS
- Application blob storage — Store user uploads, avatars, documents behind an S3 API that your app already knows how to use
- Media serving at scale — Serve millions of images with O(1) disk reads, no hot-spotting on popular files
- Backup destination — Use rclone, restic, or duplicati with SeaweedFS as an S3-compatible backup target
- Development S3 mock — Replace AWS S3 in local/staging environments with a real S3-compatible endpoint
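For the backup use case, restic can target the endpoint directly through its s3 backend. The bucket name `backups`, the endpoint, and the source directory below are placeholders:

```shell
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY

# Initialize a repository in an S3 bucket, then back up a directory
restic -r s3:https://your-seaweedfs.up.railway.app/backups init
restic -r s3:https://your-seaweedfs.up.railway.app/backups backup ~/documents
```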
Dependencies for Self-Hosting SeaweedFS on Railway
- SeaweedFS — chrislusf/seaweedfs:4.22 (76 MB compressed, linux/amd64 + arm64)
Environment Variables Reference
| Variable | Description | Example |
|---|---|---|
| PORT | S3 gateway listening port | 8333 |
| S3_ACCESS_KEY | S3 authentication access key | Generated hex string |
| S3_SECRET_KEY | S3 authentication secret key | Generated hex string |
| RAILWAY_RUN_UID | Run container as root for volume permissions | 0 |
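The generated credentials carry no special structure; they are random hex strings. If you ever rotate them, keys of the same shape can be minted locally (a sketch using openssl — the template itself may generate them differently):

```shell
# 32-hex-char access key and 64-hex-char secret key, similar in shape
# to the template's generated values
S3_ACCESS_KEY=$(openssl rand -hex 16)
S3_SECRET_KEY=$(openssl rand -hex 32)
echo "${#S3_ACCESS_KEY} ${#S3_SECRET_KEY}"
```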
Deployment Dependencies
- Runtime: Go binary (statically compiled, no external dependencies)
- Docker Hub: chrislusf/seaweedfs
- GitHub: seaweedfs/seaweedfs (~31,900 stars)
- Docs: SeaweedFS Wiki
Hardware Requirements for Self-Hosting SeaweedFS
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 vCPU | 2 vCPU |
| RAM | 1 GB | 2 GB |
| Storage | 1 GB (metadata only) | 10+ GB (depends on data volume) |
| Runtime | Docker or bare metal | Docker with persistent volume |
Self-Hosting SeaweedFS with Docker
Run SeaweedFS locally with S3 gateway enabled:
docker run -d \
--name seaweedfs \
-p 8333:8333 \
-p 9333:9333 \
-v seaweedfs_data:/data \
chrislusf/seaweedfs:4.22 \
server -s3 -dir=/data -ip.bind=0.0.0.0 -master.volumeSizeLimitMB=1024
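Once the container is up, the local gateway can be exercised with the same AWS CLI commands, pointed at localhost (bucket and file names below are placeholders; with no credentials configured yet, the gateway may accept anonymous requests):

```shell
# Smoke-test the local S3 gateway
aws --endpoint-url http://localhost:8333 s3 mb s3://test-bucket
echo "hello" > hello.txt
aws --endpoint-url http://localhost:8333 s3 cp hello.txt s3://test-bucket/
aws --endpoint-url http://localhost:8333 s3 ls s3://test-bucket/
```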
For a repeatable single-node setup with docker-compose:
services:
  seaweedfs:
    image: chrislusf/seaweedfs:4.22
    ports:
      - "8333:8333"
      - "9333:9333"
      - "8888:8888"
    volumes:
      - seaweedfs_data:/data
    command: "server -s3 -dir=/data -ip.bind=0.0.0.0 -master.volumeSizeLimitMB=1024"
volumes:
  seaweedfs_data:
Configure S3 credentials after startup via weed shell:
docker exec -i seaweedfs weed shell <<'EOF'
s3.configure -user=admin -access_key=YOUR_ACCESS_KEY -secret_key=YOUR_SECRET_KEY -actions=Read,Write,List,Tagging,Admin -apply
EOF
Template Content
- SeaweedFS — chrislusf/seaweedfs:4.22