
Dify

An open-source LLM app development platform

Services in this template:

- Web: langgenius/dify-web
- Sandbox: langgenius/dify-sandbox (volume: /dependencies)
- Redis: redis:6-alpine (volume: /data)
- Api: langgenius/dify-api
- Weaviate: semitechnologies/weaviate (volume: /var/lib/weaviate)
- Worker: langgenius/dify-api
- Postgres: postgres:15-alpine (volume: /var/lib/postgresql/data)
- Storage provision: minio/mc
- Storage: minio/minio (volume: /data)
⚠️ After deploying for the first time, you'll need the auto-generated INIT_PASSWORD variable in the Api service to set up the admin account. Expect possible loading delays on fresh deployments while caches warm up.
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production. Here's a list of the core features:
1. Workflow: Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
2. Comprehensive model support: Seamless integration with hundreds of proprietary and open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found here.
3. Prompt IDE: Intuitive interface for crafting prompts, comparing model performance, and adding features such as text-to-speech to a chat-based app.
4. RAG Pipeline: Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-the-box support for text extraction from PDFs, PPTs, and other common document formats.
5. Agent capabilities: You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
6. LLMOps: Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
7. Backend-as-a-Service: All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic; a minimal example follows this list.
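As a concrete illustration of the Backend-as-a-Service point, here is a minimal Python sketch that calls a published chat app through the service API. It assumes the documented /v1/chat-messages endpoint and an app-level API key; DIFY_API_URL and DIFY_APP_KEY are placeholders you set for your own deployment, so double-check the API reference in your Dify console before relying on the exact field names.

```python
# Minimal sketch: query a Dify chat app via the service API.
# DIFY_API_URL and DIFY_APP_KEY are placeholders for your deployment's Api
# service URL and an app API key generated in the Dify console.
import os

import requests

DIFY_API_URL = os.environ.get("DIFY_API_URL", "http://localhost:5001")
DIFY_APP_KEY = os.environ["DIFY_APP_KEY"]


def ask(query: str, user: str = "example-user") -> str:
    """Send one blocking chat message and return the model's answer."""
    resp = requests.post(
        f"{DIFY_API_URL}/v1/chat-messages",
        headers={"Authorization": f"Bearer {DIFY_APP_KEY}"},
        json={
            "inputs": {},
            "query": query,
            "response_mode": "blocking",  # "streaming" returns server-sent events
            "user": user,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["answer"]


if __name__ == "__main__":
    print(ask("What can this app do?"))
```

Completion and workflow app types expose their own endpoints under the same base URL; see the API panel in the Dify console for the exact paths.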
Feature comparison
| Feature | Dify.AI | LangChain | Flowise | OpenAI Assistants API |
| --- | --- | --- | --- | --- |
| Programming Approach | API + App-oriented | Python Code | App-oriented | API-oriented |
| Supported LLMs | Rich Variety | Rich Variety | Rich Variety | OpenAI-only |
| RAG Engine | ✅ | ✅ | ✅ | ✅ |
| Agent | ✅ | ✅ | ❌ | ✅ |
| Workflow | ✅ | ❌ | ✅ | ❌ |
| Observability | ✅ | ✅ | ❌ | ❌ |
| Enterprise Features (SSO/Access control) | ✅ | ❌ | ❌ | ❌ |
| Local Deployment | ✅ | ✅ | ✅ | ❌ |
Next steps
If you need to customize the configuration, refer to the comments in our docker-compose.yml file and set the environment variables manually. You can see the full list of environment variables here.
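If you keep your overrides in a .env file, a small sanity check like the sketch below can catch typos before you redeploy. The variable names used here (SECRET_KEY, INIT_PASSWORD, VECTOR_STORE, STORAGE_TYPE) are only examples of commonly overridden settings; substitute the ones listed in docker-compose.yml that you actually change.

```python
# Minimal sketch: sanity-check a .env override file before redeploying.
# The variable names below are examples, not an exhaustive or authoritative list;
# take the real names from the comments in docker-compose.yml.
REQUIRED = ["SECRET_KEY", "INIT_PASSWORD"]        # assumed must-set values
OPTIONAL = ["VECTOR_STORE", "STORAGE_TYPE"]       # examples of common overrides


def load_env_file(path: str) -> dict[str, str]:
    """Parse a simple KEY=VALUE .env file, ignoring blank lines and comments."""
    values: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values


if __name__ == "__main__":
    env = load_env_file(".env")
    for key in REQUIRED:
        print(f"{key:<15} {'ok' if env.get(key) else 'MISSING'}")
    for key in OPTIONAL:
        if key in env:
            print(f"{key:<15} override -> {env[key]}")
```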
Community & contact
- GitHub Discussions. Best for: sharing feedback and asking questions.
- GitHub Issues. Best for: bugs you encounter using Dify.AI, and feature proposals. See our Contribution Guide.
- Email. Best for: questions you have about using Dify.AI.
- Discord. Best for: sharing your applications and hanging out with the community.
- Twitter. Best for: sharing your applications and hanging out with the community.
License
This repository is available under the Dify Open Source License, which is essentially Apache 2.0 with a few additional restrictions.