Is it possible to change the dify app execution engine from Celery to another workflow engine, e.g. argo-workflow ? #6935
Comments
BTW, I just looked into current best practices for AI apps; for example, JupyterHub offers a way to integrate with Kubernetes through its Spawner concept: https://jupyterhub.readthedocs.io/en/stable/explanation/concepts.html#spawner
Hello @Colstuwjx, thank you for your insightful suggestion and interest in the Dify project. We appreciate the enthusiasm and the potential contribution you've expressed.

Regarding your proposal to update or replace the Celery worker dependency with another workflow engine such as argo-workflow, we have taken it into serious consideration. We understand the importance of features like canary deployment, traffic control, and rate limiting in a production environment, as well as the benefits of a cloud-native execution engine with Kubernetes scheduling for better tenant isolation. However, at this moment we do not have plans to update or replace our dependency on Celery in the near future. Celery serves as a robust and widely adopted execution engine that plays a significant role in the current Dify architecture, and we will continue to monitor and assess its performance to ensure it meets the needs of our users.

Nevertheless, we truly value your forward-thinking suggestions for the future development of the project. We believe that with the growing community and contributions like yours, Dify has the potential to become a leading platform in the GenAI industry, and we encourage and welcome more community members to get involved in driving its progress and evolution.

Should you have any further ideas or suggestions, or if you are willing to contribute to this feature, please do not hesitate to get in touch with @takatost. We look forward to working with you to build an even stronger and more adaptable Dify platform. Thank you once again for your contribution and support!
Hi @crazywoola, thanks for your kind reply. After diving into the source code, I found that the Dify workflow execution is actually triggered by a synchronous API and runs in a newly spawned Thread rather than in Celery workers. The entry HTTP API is workflow/run, and the API controller executes through this chain: AppGenerateService -> WorkflowAppGenerator (new Thread here) -> WorkflowAppRunner -> WorkflowEngineManager. Related discussion: ref #4489.
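For readers following along, here is a minimal sketch of the pattern described above: the synchronous API handler spawns a plain Thread per request instead of dispatching to a Celery worker. The function and event names are simplified stand-ins, not the actual Dify classes:

```python
from queue import Queue
from threading import Thread


def run_workflow(app_id: str, inputs: dict) -> Queue:
    """Simplified sketch of the sync-API-plus-thread pattern.

    The real call chain (AppGenerateService -> WorkflowAppGenerator ->
    WorkflowAppRunner -> WorkflowEngineManager) is more involved; this only
    illustrates that execution happens in a freshly spawned thread inside
    the API process, not in a Celery worker.
    """
    events: Queue = Queue()  # the HTTP layer would stream results from this queue

    def _execute() -> None:
        # Hypothetical stand-in for the workflow runner / engine manager.
        for node in ("start", "llm", "end"):
            events.put({"event": "node_finished", "node": node, "inputs": inputs})
        events.put({"event": "workflow_finished", "app_id": app_id})

    # A plain Thread is created per request; it lives and dies with the API process.
    Thread(target=_execute, daemon=True).start()
    return events
```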
I've got an idea to resolve this issue by taking advantage of a serverless engine, e.g. OpenFaaS. The details are as follows:
Any suggestions?
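As a very rough illustration of the serverless idea (not a working design), the API layer could forward a run request to an OpenFaaS function instead of spawning a local thread; the gateway would then schedule the run on Kubernetes, giving per-run isolation, scaling, and rate limiting. The gateway URL and function name below are hypothetical:

```python
import os

import requests

# Hypothetical gateway address and function name; in a real setup the workflow
# runner would have to be packaged and deployed as an OpenFaaS function first.
OPENFAAS_GATEWAY = os.getenv("OPENFAAS_GATEWAY", "http://gateway.openfaas:8080")
WORKFLOW_FUNCTION = "dify-workflow-runner"


def run_workflow_serverless(app_id: str, inputs: dict, timeout: int = 300) -> dict:
    """Sketch of offloading a workflow run to a serverless engine."""
    resp = requests.post(
        f"{OPENFAAS_GATEWAY}/function/{WORKFLOW_FUNCTION}",
        json={"app_id": app_id, "inputs": inputs},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()
```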
Hi @Colstuwjx, we also want to publish Dify workflows to Argo. Would you be interested in working together?
Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
Hi team,
Thanks for sharing this awesome project! We're evaluating whether we can use Dify in our production environment; however, it seems that we need to support some production requirements first:
Any ideas on whether point 3 could be addressed in some way?
Thanks!
2. Additional context or comments
I've experienced the last cloud-native revolution, and I strongly believe that if a project is pluggable and draws in community power, it can attract more end users and gain more contributions from the community. IMO, one of the success factors of the Kubernetes project is that it can be customized in many ways: the Resource + Reconcile pattern, CNI, CSI, and the sidecar design allow more community developers to join the game. Dify and other LLMOps platforms could potentially become the next Kubernetes of the GenAI industry.
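To make the pluggability idea a bit more concrete, here is a hypothetical sketch of what a swappable execution-engine interface might look like; nothing like this exists in Dify today, and the class names are invented for illustration:

```python
from abc import ABC, abstractmethod


class WorkflowExecutor(ABC):
    """Hypothetical plug-in point for alternative execution backends."""

    @abstractmethod
    def submit(self, app_id: str, inputs: dict) -> str:
        """Start a workflow run and return a run id."""

    @abstractmethod
    def status(self, run_id: str) -> str:
        """Return 'running', 'succeeded', or 'failed'."""


class ThreadExecutor(WorkflowExecutor):
    """Stub for today's behaviour: run the workflow in a thread inside the API process."""


class ArgoExecutor(WorkflowExecutor):
    """Stub for a possible backend: submit an Argo Workflow and poll its status."""
```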
3. Can you help us with this feature?