Stars
A latent text-to-image diffusion model
CLIP (Contrastive Language-Image Pretraining): predicts the most relevant text snippet for a given image
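At inference time, CLIP's text-image matching reduces to cosine similarity between L2-normalized embeddings. A minimal numpy sketch of that retrieval step, using made-up toy vectors in place of real encoder outputs:

```python
import numpy as np

def clip_retrieve(image_emb, text_embs):
    """Return the index of the text embedding most similar to the image.

    CLIP-style retrieval: L2-normalize both sides, then rank candidate
    text snippets by cosine similarity (dot product of unit vectors).
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # one cosine similarity per text snippet
    return int(np.argmax(sims))

# Toy example: three candidate "text" embeddings; the second is
# deliberately aligned with the "image" embedding.
image = np.array([1.0, 0.0, 0.0])
texts = np.array([
    [0.0, 1.0, 0.0],
    [0.9, 0.1, 0.0],  # closest in direction to `image`
    [0.0, 0.0, 1.0],
])
print(clip_retrieve(image, texts))  # → 1
```

The real model obtains `image_emb` and `text_embs` from its learned vision and text encoders; only the similarity ranking is shown here.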
Learn OpenCV: C++ and Python Examples
A High-Quality Real Time Upscaler for Anime Video
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
A hands-on introduction to video technology: image, video, codec (av1, vp9, h265) and more (ffmpeg encoding). Translations: 🇺🇸 🇨🇳 🇯🇵 🇮🇹 🇰🇷 🇷🇺 🇧🇷 🇪🇸
This repository contains the source code for the paper First Order Motion Model for Image Animation
Code release for NeRF (Neural Radiance Fields)
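The core of NeRF's renderer is alpha compositing of density samples along each camera ray: `alpha_i = 1 - exp(-sigma_i * delta_i)`, weighted by the accumulated transmittance. A minimal numpy sketch (the sample values below are illustrative, not from the repo):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """NeRF-style volume rendering along a single ray.

    alpha_i = 1 - exp(-sigma_i * delta_i); transmittance T_i is the
    product of (1 - alpha_j) for j < i; the rendered pixel color is
    sum_i T_i * alpha_i * c_i.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors, weights

# One nearly opaque sample (high density): the ray returns
# essentially that sample's color, and later samples are occluded.
sigmas = np.array([0.0, 50.0, 0.0])            # densities per sample
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])           # RGB per sample
deltas = np.array([1.0, 1.0, 1.0])             # distances between samples
rgb, w = composite(sigmas, colors, deltas)
print(rgb)  # ≈ [0, 1, 0]: the opaque green sample dominates
```

The full method additionally learns `sigmas` and `colors` with an MLP conditioned on position and view direction; only the compositing math is shown.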
A unified framework for 3D content generation.
The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images conditioned on an image prompt.
CoTracker is a model for tracking any point (pixel) in a video.
Segment Anything in High Quality [NeurIPS 2023]
Segment Anything in Medical Images
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2
This is the repo for our new project Highly Accurate Dichotomous Image Segmentation
subpixel: A subpixel convnet for super-resolution with TensorFlow
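The "subpixel" layer is a depth-to-space rearrangement (pixel shuffle): the network predicts `r*r` channel values per low-resolution pixel, which are rearranged into an `r`-times-larger spatial grid. A minimal numpy sketch of that rearrangement:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Subpixel (depth-to-space) rearrangement.

    Takes a feature map of shape (H, W, C*r*r) and rearranges the
    channel dimension into an (H*r, W*r, C) image — the final layer
    of a subpixel super-resolution convnet.
    """
    H, W, Crr = x.shape
    C = Crr // (r * r)
    x = x.reshape(H, W, r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)  # interleave sub-rows and sub-cols
    return x.reshape(H * r, W * r, C)

# Upscale a 2x2 map with 4 channels by r=2: the 16 channel values
# become a 4x4 single-channel spatial grid.
feat = np.arange(16, dtype=float).reshape(2, 2, 4)
out = pixel_shuffle(feat, 2)
print(out.shape)  # → (4, 4, 1)
```

In the actual network a convolution produces the `C*r*r`-channel feature map; the shuffle itself is parameter-free.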
[ICLR24] Official PyTorch Implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
[CVPR 2025 Highlight] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control
FaceScape (PAMI2023 & CVPR2020)
[3DV 2025 Best Paper] We present Object Images (Omages): An homage to the classic Geometry Images.
Stereo4D dataset and processing code
[ICLR2024] The official implementation of paper "VDT: General-purpose Video Diffusion Transformers via Mask Modeling", by Haoyu Lu, Guoxing Yang, Nanyi Fei, Yuqi Huo, Zhiwu Lu, Ping Luo, Mingyu Ding.