Nanyang Technological University, Singapore
Stars
Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
Node Version Manager - POSIX-compliant bash script to manage multiple active Node.js versions
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows.
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
A generative world for general-purpose robotics & embodied AI learning.
LaTeX code for making neural network diagrams
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
✨✨Latest Advances on Multimodal Large Language Models
The absolute trainer to light up AI agents.
A beautiful, simple, clean, and responsive Jekyll theme for academics
[CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer
[Lumina Embodied AI Community] A technical guide to embodied AI (Embodied-AI-Guide)
PyTorch code and models for the DINOv2 self-supervised learning method.
COLMAP - Structure-from-Motion and Multi-View Stereo
🐍 Geometric Computer Vision Library for Spatial AI
Reference PyTorch implementation and models for DINOv3
Curated list of papers and resources focused on 3D Gaussian Splatting, intended to keep pace with the anticipated surge of research in the coming months.
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
Unified framework for robot learning built on NVIDIA Isaac Sim
open Multiple View Geometry library. Basis for 3D computer vision and Structure from Motion.
A Robust and Versatile Monocular Visual-Inertial State Estimator
This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond!
Code for robust monocular depth estimation described in "Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer, TPAMI 2022"