# retinify: Real-Time AI Stereo Vision Library
Retinify is an advanced AI-powered stereo vision library designed for robotics. It enables real-time, high-precision 3D perception by leveraging GPU and NPU acceleration.
- Open Source: Fully transparent and customizable; adapt the pipeline to your workflow.
- High Precision: Delivers real-time, accurate 3D mapping and object recognition from stereo image input.
- Fast Pipeline: All necessary computations run seamlessly on the GPU, enabling real-time performance.
- Camera-Agnostic: Accepts stereo images from any camera setup, giving you the flexibility to use your own hardware.
- Cost Efficiency: Runs using just cameras, enabling depth perception with minimal hardware cost.
- Minimal Dependencies: The pipeline depends only on the CUDA Toolkit, cuDNN, and TensorRT, providing a lean and production-grade foundation.
## Rectified Stereo Images (Python)

```python
import retinify
import numpy as np
from PIL import Image
# LOAD RECTIFIED STEREO INPUT IMAGES
left = np.asarray(Image.open("path/to/left.png").convert("RGB"))
right = np.asarray(Image.open("path/to/right.png").convert("RGB"))
# CREATE STEREO MATCHING PIPELINE
pipe = retinify.Pipeline()
# INITIALIZE THE PIPELINE
pipe.initialize(image_width=left.shape[1],
image_height=left.shape[0])
# EXECUTE STEREO MATCHING
pipe.execute(left, right)
# RETRIEVE DISPARITY
disparity = pipe.retrieve_disparity()
```
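If you initialize the pipeline with calibration parameters, retinify can return depth and a point cloud directly (see the next example). With only a rectified disparity map, you can also convert it yourself using the standard stereo relation depth = focal_length × baseline / disparity. The snippet below is a minimal sketch, not part of the retinify API; `fx` (focal length of the rectified left camera, in pixels) and `baseline_m` (baseline in meters) are placeholder values you would take from your own calibration.

```python
import numpy as np

# HYPOTHETICAL CALIBRATION VALUES -- TAKE THESE FROM YOUR OWN CAMERA CALIBRATION
fx = 700.0         # FOCAL LENGTH OF THE RECTIFIED LEFT CAMERA, IN PIXELS
baseline_m = 0.12  # DISTANCE BETWEEN THE TWO CAMERA CENTERS, IN METERS

# CONVERT DISPARITY (PIXELS) TO DEPTH (METERS), SKIPPING INVALID PIXELS
valid = disparity > 0
depth = np.zeros_like(disparity, dtype=np.float32)
depth[valid] = fx * baseline_m / disparity[valid]
```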
## Non-Rectified Stereo Images (Python)

Using the calibration parameters, the pipeline performs undistortion, rectification, and 3D reprojection.

```python
import retinify
import numpy as np
from PIL import Image
# LOAD NON-RECTIFIED STEREO INPUT IMAGES
left = np.asarray(Image.open("path/to/left.png").convert("RGB"))
right = np.asarray(Image.open("path/to/right.png").convert("RGB"))
# LOAD CALIBRATION PARAMETERS
calib_params = retinify.load_calibration_parameters("path/to/calib.json")
# CREATE STEREO MATCHING PIPELINE
pipe = retinify.Pipeline()
# INITIALIZE THE PIPELINE WITH CALIBRATION PARAMETERS
pipe.initialize(image_width=left.shape[1],
image_height=left.shape[0],
pixel_format=retinify.PixelFormat.RGB8,
depth_mode=retinify.DepthMode.ACCURATE,
calibration_parameters=calib_params)
# EXECUTE STEREO MATCHING
pipe.execute(left, right)
# RETRIEVE DISPARITY
disparity = pipe.retrieve_disparity()
# RETRIEVE DEPTH
depth = pipe.retrieve_depth()
# RETRIEVE POINT CLOUD
point_cloud = pipe.retrieve_point_cloud()
```
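The retrieved point cloud can be handed off to any downstream tool. As one example, the sketch below writes it to an ASCII PLY file; it assumes `point_cloud` is an H×W×3 float array of XYZ coordinates, and the helper name `save_ply` is ours, not part of retinify. Non-finite points are dropped before writing.

```python
import numpy as np

def save_ply(path, points):
    # FLATTEN H x W x 3 POINTS AND DROP NON-FINITE ENTRIES
    pts = points.reshape(-1, 3)
    pts = pts[np.isfinite(pts).all(axis=1)]
    # WRITE A MINIMAL ASCII PLY HEADER FOLLOWED BY ONE XYZ POINT PER LINE
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(pts)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        np.savetxt(f, pts, fmt="%.6f")

save_ply("cloud.ply", point_cloud)
```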
## Rectified Stereo Images (C++)

```cpp
#include <retinify/retinify.hpp>
#include <opencv2/opencv.hpp>
// LOAD RECTIFIED STEREO INPUT IMAGES
cv::Mat leftImage = cv::imread("path/to/left.png");
cv::Mat rightImage = cv::imread("path/to/right.png");
// PREPARE OUTPUT CONTAINER
cv::Mat disparity = cv::Mat::zeros(leftImage.size(), CV_32FC1);
// CREATE STEREO MATCHING PIPELINE
retinify::Pipeline pipeline;
// INITIALIZE THE PIPELINE
pipeline.Initialize(leftImage.cols, leftImage.rows);
// EXECUTE STEREO MATCHING
pipeline.Execute(leftImage.ptr<uint8_t>(), leftImage.step[0],
rightImage.ptr<uint8_t>(), rightImage.step[0]);
// RETRIEVE DISPARITY
pipeline.RetrieveDisparity(disparity.ptr<float>(), disparity.step[0]);
```

## Non-Rectified Stereo Images (C++)
Using the calibration parameters, the pipeline performs undistortion, rectification, and 3D reprojection.
```cpp
#include <retinify/retinify.hpp>
#include <opencv2/opencv.hpp>
// LOAD NON-RECTIFIED STEREO INPUT IMAGES
cv::Mat leftImage = cv::imread("path/to/left.png");
cv::Mat rightImage = cv::imread("path/to/right.png");
// PREPARE OUTPUT CONTAINER
cv::Mat disparity = cv::Mat::zeros(leftImage.size(), CV_32FC1);
cv::Mat depth = cv::Mat::zeros(leftImage.size(), CV_32FC1);
cv::Mat pointCloud = cv::Mat::zeros(leftImage.size(), CV_32FC3);
// LOAD CALIBRATION PARAMETERS
retinify::CalibrationParameters calibParams;
retinify::LoadCalibrationParameters("path/to/calib.json", calibParams);
// CREATE STEREO MATCHING PIPELINE
retinify::Pipeline pipeline;
// INITIALIZE THE PIPELINE WITH CALIBRATION PARAMETERS
pipeline.Initialize(leftImage.cols, leftImage.rows,
retinify::PixelFormat::RGB8,
retinify::DepthMode::ACCURATE,
calibParams);
// EXECUTE STEREO MATCHING
pipeline.Execute(leftImage.ptr<uint8_t>(), leftImage.step[0],
rightImage.ptr<uint8_t>(), rightImage.step[0]);
// RETRIEVE DISPARITY
pipeline.RetrieveDisparity(disparity.ptr<float>(), disparity.step[0]);
// RETRIEVE DEPTH
pipeline.RetrieveDepth(depth.ptr<float>(), depth.step[0]);
// RETRIEVE POINT CLOUD
pipeline.RetrievePointCloud(pointCloud.ptr<float>(), pointCloud.step[0]);
```

## retinify Documentation

Developer guide and API reference:
- Installation Guide: Step-by-step guide to build and install retinify.
- Tutorials: Hands-on examples to get you started with real-world use cases.
- Python Docs: Python API documentation for retinify.
- C++ Docs: C++ API documentation for retinify.
- ROS2 Docs: ROS2 documentation for retinify.
Latency includes the time for image upload, inference, and disparity download, reported as the median over 10,000 iterations (measured with retinify::Pipeline).
These measurements were taken using each setting of retinify::DepthMode.
Note: Results may vary depending on the execution environment.
| DEVICE \ MODE | FAST | BALANCED | ACCURATE |
|---|---|---|---|
| NVIDIA RTX 3060 | 3.925ms / 254.8FPS | 4.691ms / 213.2FPS | 10.790ms / 92.7FPS |
| NVIDIA Jetson Orin Nano | 17.462ms / 57.3FPS | 19.751ms / 50.6FPS | 46.104ms / 21.7FPS |
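For a rough sanity check of latency on your own hardware, a simplified timing loop in Python might look like the sketch below, reusing `pipe`, `left`, and `right` from the Python example above. This is not the official benchmark (the numbers in the table were measured with the C++ retinify::Pipeline over 10,000 iterations); the iteration count, the warm-up phase, and timing execute() plus retrieve_disparity() as the measured region are our assumptions.

```python
import time
import statistics

# WARM-UP RUNS SO ONE-TIME SETUP COST DOES NOT SKEW THE TIMING
for _ in range(10):
    pipe.execute(left, right)
    pipe.retrieve_disparity()

# TIME EXECUTE + DISPARITY RETRIEVAL AND REPORT THE MEDIAN LATENCY
latencies_ms = []
for _ in range(1000):
    start = time.perf_counter()
    pipe.execute(left, right)
    pipe.retrieve_disparity()
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

median_ms = statistics.median(latencies_ms)
print(f"median latency: {median_ms:.3f} ms ({1000.0 / median_ms:.1f} FPS)")
```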
For a list of third-party dependencies, please refer to NOTICE.md.
For all inquiries, including support and collaboration, please contact:
contact@retinify.ai