Stars
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model (a minimal inference sketch follows this list).
《开源大模型食用指南》 (a practical guide to open-source large models): a tutorial, tailored for Chinese users, on quickly fine-tuning (full-parameter/LoRA) and deploying open-source large language models (LLMs) and multimodal large models (MLLMs), both domestic and international, in a Linux environment.
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
PyTorch code and models for the DINOv2 self-supervised learning method.
Code release for NeRF (Neural Radiance Fields)
CoTracker is a model for tracking any point (pixel) on a video.
Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2
Benchmarking Knowledge Transfer in Lifelong Robot Learning
Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Google Robot, WidowX+Bridge) (CoRL 2024)
Experiment on combining CLIP with SAM to do open-vocabulary image segmentation.
Suite of human-collected datasets and a multi-task continuous control benchmark for open-vocabulary visuolinguomotor learning.
DINOv2 for Classification, PCA Visualization, Instance Retrieval: https://arxiv.org/abs/2304.07193
Reinforcement Learning Experiments using PyBullet
Official implementation of "Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy."
Benchmarking Repository for robosuite + SAC
[NeurIPS 2024] PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation
Contains implementations of the AdVIL, AdRIL, and DAeQuIL algorithms from the ICML '21 paper "Of Moments and Matching."
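The SAM entry at the top of this list mentions inference code and example notebooks; the sketch below shows what point-prompted inference roughly looks like with the publicly documented `segment_anything` API (`sam_model_registry`, `SamPredictor`). It is a minimal sketch, not code from the repository: the image path, prompt coordinates, and the local checkpoint filename are illustrative assumptions.

```python
# Minimal sketch of point-prompted SAM inference, assuming the standard
# `segment_anything` package and a locally downloaded ViT-H checkpoint.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Build the model from a checkpoint file (filename assumed; adjust to your download).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Load an RGB image as a HxWx3 uint8 array and embed it once;
# multiple prompts can then be queried against the cached embedding.
image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

# A single foreground point prompt (x, y); label 1 marks it as foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # boolean masks (3, H, W) with predicted IoU scores
```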