Starred repositories
Code for "MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training", Arxiv 2025.
[RA-L'25] A Reliable and Efficient Framework for Zero-Shot Object Navigation
RealSee3D: A multi-view RGB-D dataset combining real-world captures and procedurally generated scenes, with extensible annotations for diverse 3D vision research.
Open-source SDK for creating applications that leverage event-based vision hardware
The repository provides code for running inference and finetuning with the Meta Segment Anything Model 3 (SAM 3), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Isaac Gym Environments for Legged Robots
Building General-Purpose Robots Based on Embodied Foundation Model
[CVPR 2025] "A Distractor-Aware Memory for Visual Object Tracking with SAM2"
SLAM-Former: Putting SLAM into One Transformer
Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
Monocular Lane Detection Based on Deep Learning: A Survey
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
InternRobotics' open platform for building generalized navigation foundation models.
Official implementation of NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments (ICCV'25).
Embodied Instruction Following in Unknown Environments
[3DV 2026] ViSTA-SLAM: Visual SLAM with Symmetric Two-view Association
Official Repo of "SAIL-Recon: Large SfM by Augmenting Scene Regression with Localization"
Use Claude Code as the foundation for your coding infrastructure, letting you decide how to interact with the model while still receiving updates from Anthropic.
A modified VINS-Fusion adapted for ground vehicles, supporting localization, mapping, and navigation; it can generate semi-dense point cloud maps and occupancy grid maps in real time.
bonabai / VINS-Fusion-ROS2
Forked from zinuok/VINS-Fusion-ROS2: ROS2 version of VINS-Fusion
FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
Fast Incremental Euclidean Distance Fields for Online Motion Planning of Aerial Robots
🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper. Don't be shy, join here: https://discord.gg/jP8KfhDhyN
A Robust Tightly-Coupled RGBD-Inertial and Legged Odometry Fusion SLAM for Dynamic Legged Robotics
MambaOut: Do We Really Need Mamba for Vision? (CVPR 2025)