Starred repositories
MMGeneration is a powerful toolkit for generative models, based on PyTorch and MMCV.
Use AnimeGANv3 to create your own animated works, including turning photos or videos into anime.
A real-time sketch-to-image demo using LCM and the Gradio library.
Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models"
A multiprocess image web crawler for Google and Naver (Selenium)
A python library and collection of notebooks for making art with machine learning.
[NeurIPS 2023] ImageReward: Learning and Evaluating Human Preferences for Text-to-image Generation
Unofficial implementation of InstantID for ComfyUI
An open-source tool for extracting financial investment data, focused on crawling data from various websites and exposing it through a simple, easy-to-use API.
Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021)
COYO-700M: Large-scale Image-Text Pair Dataset
CLIP+MLP Aesthetic Score Predictor
This project uses a variety of advanced voiceprint recognition models, such as EcapaTdnn, ResNetSE, ERes2Net, and CAM++; more models may be supported in the future.
This node is designed to help AI image creators generate prompts for human portraits.
APISR: Anime Production Inspired Real-World Anime Super-Resolution (CVPR 2024)
[ECCV 2024] The official code of paper "Open-Vocabulary SAM".
[CVPR 2024] PIA: your Personalized Image Animator. Animate your images with a text prompt, combined with DreamBooth, to achieve stunning videos.
Model merge extension for the Stable Diffusion web UI
Unofficial implementation of PhotoMaker for ComfyUI
[CVPR 2024] X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model
High-accuracy segmentation for anime characters
Stable Video Diffusion Training Code and Extensions.
From ComfyUI workflow to web app, in seconds
[ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models