- Tokyo, Japan
- https://ruijieren.com/
Stars
[CVPR 2025 Highlight] Official implementation of "MangaNinja: Line Art Colorization with Precise Reference Following"
[ICLR 2025] From anything to mesh like human artists. Official impl. of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers"
Unofficial PyTorch implementation of "Palette: Image-to-Image Diffusion Models"
Curated list of papers and resources focused on 3D Gaussian Splatting, intended to keep pace with the anticipated surge of research in the coming months.
One-step image-to-image with Stable Diffusion turbo: sketch2image, day2night, and more
Geometric loss functions between point clouds, images and volumes
[ECCV 2024 Oral] The official implementation of "CAT-SAM: Conditional Tuning for Few-Shot Adaptation of Segment Anything Model".
📚 A collection of papers about Sketch Synthesis (Generation).
A code repository associated with the paper "A Benchmark for Rough Sketch Cleanup" by Chuan Yan, David Vanderhaeghe, and Yotam Gingold, from SIGGRAPH Asia 2020.
🎓 Automatically update CV papers daily using GitHub Actions
A collection of projects, papers, and source code for Meta AI's Segment Anything Model (SAM) and related studies.
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
Line thickness normalization network used in the SIGGRAPH 2018 paper "Real-Time Data-Driven Interactive Rough Sketch Inking".
Implementation of "Learning to Shadow Hand-drawn Sketches" CVPR 2020 (Oral)
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Segment Anything in High Quality [NeurIPS 2023]
Fast and flexible image augmentation library. Paper about the library: https://www.mdpi.com/2078-2489/11/2/125
Few Shot Semantic Segmentation Papers
The PyTorch implementation of our paper "RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model"
A curated list of prompt-based papers in computer vision and vision-language learning.
Collection of AWESOME vision-language models for vision tasks
Code for "Detector-Free Structure from Motion", CVPR 2024
Inpaint anything using Segment Anything and inpainting models.
Code for "Sampling Neural Radiance Fields for Refractive Objects" SA'22 TCom
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
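Several of the repositories above center on geometric losses between point clouds, images, and volumes. As a minimal, self-contained illustration of the idea (a NumPy sketch of the symmetric Chamfer distance, not any of these libraries' APIs), consider:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, D) and b (M, D)."""
    # Pairwise squared Euclidean distances, shape (N, M), via broadcasting.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in each direction, summed.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical clouds have zero distance.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(chamfer_distance(pts, pts))  # 0.0
```

Production libraries add GPU batching and differentiability (e.g. entropic regularization for Wasserstein-type losses), but the nearest-neighbor structure is the same.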