Stars
✔ (Completed) The most comprehensive deep learning notes [Tudui PyTorch] [Li Mu: Dive into Deep Learning] [Andrew Ng: Deep Learning]
Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning'
No fortress, purely open ground. OpenManus is Coming.
Sky-T1: Train your own O1 preview model within $450
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
✨ Light and Fast AI Assistant. Supports: Web | iOS | macOS | Android | Linux | Windows
Free ChatGPT & DeepSeek API keys: free access to the DeepSeek API and GPT-4 API, with support for popular top-ranked large models such as gpt | deepseek | claude | gemini | grok.
Segment-Anything + 3D. Let's lift anything to 3D.
Label Studio is a multi-type data labeling and annotation tool with standardized output format
[ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity"
Using Low-rank adaptation to quickly fine-tune diffusion models.
Based on the YOLOv5/v6 series: train_palte adds multi-head detection, and train_key adds a keypoint detection algorithm.
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
72.8% top-1 MobileNetV2 1.0 model on ImageNet and a spectrum of pre-trained MobileNetV2 models
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
This is an official implementation of our AAAI 2022 paper AdaptivePose and arXiv paper AdaptivePose++.
Research code for CVPR 2021 paper "End-to-End Human Pose and Mesh Reconstruction with Transformers"
Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes (CVPR 2023)
Whose Hands Are These? Hand Detection and Hand-Body Association in the Wild, CVPR 2022
OpenMMLab 3D Human Parametric Model Toolbox and Benchmark
🍀 PyTorch implementations of various attention mechanisms, MLPs, re-parameterization, and convolution modules, helpful for further understanding papers. ⭐⭐⭐