Focus stacking code
OBS plugin for using a Kinect (all models supported) in OBS, with an optional virtual green screen based on depth and/or body detection.
Python Depth Map Renderer for 3D Bounding Boxes with C++/CUDA Backend
A visualization tool that analyzes the preprocessing results of WAI/Nerfstudio-format datasets (RGB images with corresponding depth maps) and generates interactive 3D visualizations.
This program tries to find the occluded regions in a disparity map and creates a new disparity map that displays the occluded pixels in black.
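The exact occlusion test isn't described above; a common choice is a left-right consistency check. A minimal NumPy sketch of that idea (function name and threshold are assumptions, not taken from the repo):

```python
import numpy as np

def mark_occlusions(disp_left, disp_right, threshold=1.0):
    """Left-right consistency check: pixels whose left and right disparities
    disagree by more than `threshold` are treated as occluded and set to 0."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # Where each left-image pixel lands in the right image
    x_right = np.clip((xs - disp_left).astype(int), 0, w - 1)
    disp_back = disp_right[ys, x_right]
    occluded = np.abs(disp_left - disp_back) > threshold
    out = disp_left.copy()
    out[occluded] = 0  # display occluded pixels as black
    return out
```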
A simple program in Matlab that converts a Disparity Map from grayscale to color.
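That repo is MATLAB; for reference, the equivalent operation in Python/OpenCV is essentially one call to `cv2.applyColorMap` (the JET colormap and file names below are just placeholders):

```python
import cv2

disp = cv2.imread("disparity.png", cv2.IMREAD_GRAYSCALE)        # grayscale disparity map
disp_norm = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX)  # stretch to the full 8-bit range
disp_color = cv2.applyColorMap(disp_norm.astype("uint8"), cv2.COLORMAP_JET)
cv2.imwrite("disparity_color.png", disp_color)
```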
🌊 Turn images into 3D parallax-effect videos. A free and open-source ImmersityAI alternative.
We're looking forward to models based on DINOv3. Repos include: BetterDepth, BRIDGE, BriGeS, ChronoDepth, Depth Any Video, Depth Anything, Depth Pro, DepthCrafter, Distill Any Depth, FE2E, GRIN, M2SVid, MASt3R, MegaSaM, Metric3D, Metric-Solver, MoGe, MoRE, NVDS, Pixel-Perfect Depth, SpatialTrackerV2, StereoCrafter, SVG, Uni4D, UniDepth, UniK3D, VGGT, Video Depth Anything, π^3.
Real-time ADAS using MiDaS depth estimation and YOLO object detection for collision alerts, lane departure warnings, and intuitive visual/audio feedback.
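The repo's wiring isn't shown here, but the depth half of such a pipeline typically follows the standard MiDaS torch.hub usage; a rough sketch (the input file name is a placeholder, and MiDaS_small is assumed for real-time use):

```python
import cv2
import torch

# Load MiDaS small (fast, suited to real-time) and its matching preprocessing transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    prediction = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().cpu().numpy()
# 'depth' is relative inverse depth; larger values are closer to the camera.
```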
Lightweight JavaScript/WebGL library for real-time face detection, depth estimation & 3D insertion in the browser. Bring the user’s face into your Web3D scene 🪞
It compares the disparity map generated by our algorithm against the ground-truth disparity map provided in the dataset.
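The comparison metrics aren't specified above; a typical evaluation reports RMSE and a bad-pixel ratio over valid ground-truth pixels. A small sketch, assuming those metrics:

```python
import numpy as np

def disparity_errors(disp_est, disp_gt, bad_thresh=3.0):
    """Compare an estimated disparity map to ground truth.
    Returns RMSE and the fraction of 'bad' pixels (error > bad_thresh),
    evaluated only where the ground truth is valid (> 0)."""
    valid = disp_gt > 0
    err = np.abs(disp_est[valid] - disp_gt[valid])
    rmse = float(np.sqrt(np.mean(err ** 2)))
    bad_ratio = float(np.mean(err > bad_thresh))
    return rmse, bad_ratio
```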
Multi-sensor RealSense + YOLO Top-View People Counting System, developed within the MEI (Museo Egizio Immersive) project. The solution enables real-time detection and counting of people in defined spatial areas, driving immersive scene logic, lighting systems, and audience analytics for interactive museum installations in Unreal Engine 5.
A dataset for testing next best view methods as a part of active vision systems.
3D-Image-Toolbox is a Python-based tool that transforms images and videos into immersive spatial experiences. Using the depth-anything-v2 model, it generates depth maps from standard 2D media and converts them into side-by-side 3D formats. For spatial photos (HEIC format), it uses the depth map already contained in the file.
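The depth-map-to-side-by-side step isn't detailed above; the usual trick is to shift pixels horizontally in proportion to depth to synthesize a second view. A simplified sketch without hole filling (names and `max_shift` are assumptions):

```python
import numpy as np

def side_by_side_3d(image, depth, max_shift=12):
    """Synthesize a stereo pair by shifting pixels horizontally in
    proportion to normalized depth, then concatenate the two views."""
    h, w = depth.shape
    d = (depth - depth.min()) / max(float(depth.max() - depth.min()), 1e-6)
    shift = (d * max_shift).astype(int)            # larger depth value -> larger shift
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    left = image[ys, np.clip(xs + shift, 0, w - 1)]
    right = image[ys, np.clip(xs - shift, 0, w - 1)]
    return np.concatenate([left, right], axis=1)   # side-by-side frame
```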
ComfyUI Depth Anything (v1/v2/distill-any-depth) Tensorrt Custom Node (up to 14x faster)
This project estimates the distance to objects using stereo vision and depth maps
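For reference, distance from a calibrated stereo pair follows the pinhole relation Z = f·B / d; a tiny sketch (parameter names are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation: Z = f * B / d.
    disparity_px: disparity in pixels, focal_px: focal length in pixels,
    baseline_m: distance between the two cameras in metres."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> point at infinity
    return focal_px * baseline_m / disparity_px

# e.g. a 1 m baseline rig with f = 700 px seeing 35 px of disparity -> 20 m
print(depth_from_disparity(35, 700, 1.0))
```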
A tool for post-processing depth-of-field effect using an input image and corresponding depth map.
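The repo's blur model isn't described above; a naive layered-blur approximation, where blur strength grows with distance from the focal plane, might look like this (function name and defaults are assumptions):

```python
import cv2
import numpy as np

def synthetic_dof(image, depth, focus_depth, max_kernel=21, layers=5):
    """Naive synthetic depth of field: approximate variable blur with a
    few pre-blurred layers selected per pixel by defocus amount."""
    defocus = np.abs(depth.astype(np.float32) - focus_depth)
    defocus /= max(float(defocus.max()), 1e-6)                  # 0 = in focus, 1 = max blur
    out = np.zeros_like(image, dtype=np.float32)
    for i in range(layers):
        k = 1 + 2 * round(i / (layers - 1) * (max_kernel // 2)) # odd Gaussian kernel size
        blurred = cv2.GaussianBlur(image, (k, k), 0).astype(np.float32)
        lo = i / layers
        hi = (i + 1) / layers if i < layers - 1 else 1.01       # include defocus == 1
        band = ((defocus >= lo) & (defocus < hi)).astype(np.float32)
        out += blurred * band[..., None]
    return out.astype(np.uint8)
```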
This may prove useful if you have a photometric stereo scanner or if you want to create other mappings from a given normal map.
Official code for the paper: Depth Anything At Any Condition
I'm starting this repository to explore image depth analysis using various models. The ultimate goal is to implement Visual SLAM on the robot I previously built. This space will serve as a sandbox for experimentation, where I'll test different depth estimation models and work on automating the robot's navigation in new environments.