Add ALIKED/LightGlue feature based on libtorch #3068
base: main
Conversation
…h/torch-aligked-lightglue
…h/torch-aligked-lightglue
Does this mean no need for SIFT anymore?
…h/torch-aligked-lightglue
…h/torch-aligked-lightglue
…m/colmap/colmap into user/jsch/torch-aligked-lightglue
sarlinpe left a comment
What a massive amount of work, thank you!
// Detect fixed number of top k features independent of detection score.
// Ignored if negative.
int top_k = -1;
Isn't this redundant with max_num_features? Wouldn't a bool force_extract_max_num_features = false be sufficient?
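A minimal sketch of the suggested alternative; the struct and option names here are hypothetical and not taken from the PR:

```cpp
// Hypothetical option layout illustrating the suggestion above.
struct ExtractionOptions {
  // Maximum number of features to detect, keeping the highest-scoring ones.
  int max_num_features = 4096;
  // If true, always return exactly max_num_features keypoints regardless of
  // detection score, replacing the separate top_k option.
  bool force_extract_max_num_features = false;
};
```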
src/colmap/feature/aliked.cc (outdated)
const int width = bitmap_ptr->Width();
const int height = bitmap_ptr->Height();
auto row_major_array = bitmap_ptr->ConvertToRowMajorArray();
// Clone to ensure ownership
Nit: move this comment to the line that calls clone().
const torch::Tensor torch_image =
    torch::from_blob(
        row_major_array.data(), {height, width, 3}, torch::kUInt8)
        .clone()
Is cloning needed given that casting to float creates a copy?
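To make the question concrete, a minimal illustrative sketch (not the PR's code; the function name is made up): torch::from_blob only wraps the caller's buffer, while the uint8-to-float conversion has to allocate new storage, so the result no longer aliases the original array even without an explicit clone().

```cpp
#include <cstdint>

#include <torch/torch.h>

// Wrap an HxWx3 uint8 buffer without copying, then rely on the dtype
// conversion to produce an owning copy that does not alias `data`.
torch::Tensor ImageToFloatTensor(uint8_t* data, int height, int width) {
  const torch::Tensor view =
      torch::from_blob(data, {height, width, 3}, torch::kUInt8);
  // uint8 -> float cannot be a view, so this allocates new storage.
  return view.to(torch::kFloat32);
}
```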
// Normalize to [-1, 1].
torch_keypoints.index({torch::indexing::Ellipsis, 0}) =
    (2.0f * torch_keypoints.index({torch::indexing::Ellipsis, 0}) /
     image.width) -
    1.0f;
torch_keypoints.index({torch::indexing::Ellipsis, 1}) =
    (2.0f * torch_keypoints.index({torch::indexing::Ellipsis, 1}) /
     image.height) -
    1.0f;
Isn't this already done in LightGlue::forward by passing the image size as input?
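For reference, a sketch of the caller-side mapping in question (illustrative only; it does not reproduce LightGlue::forward, and whether the matcher repeats an equivalent step internally is exactly the open question):

```cpp
#include <torch/torch.h>

// Map pixel coordinates of shape [N, 2] to [-1, 1] per axis, mirroring the
// snippet above. If the matcher already normalizes from the image size it
// receives, doing this here as well would normalize twice.
torch::Tensor NormalizeKeypoints(const torch::Tensor& keypoints,
                                 int width,
                                 int height) {
  const torch::Tensor size = torch::tensor(
      {static_cast<float>(width), static_cast<float>(height)});
  return 2.0f * keypoints / size - 1.0f;
}
```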
B1ueber2y left a comment
Thank you for the amazing efforts!
matches->reserve(num_keypoints1);
for (int i1 = 0; i1 < num_keypoints1; ++i1) {
  const int i2 = nn12_data[i1];
  if (i1 == nn21_data[i2]) {
I believe we also need to check against the similarity here, to avoid the match of i1 == 0 && i2 == 0.
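A self-contained sketch of the suggested guard; the FeatureMatch struct, min_similarity threshold, and similarity array are assumptions for illustration, and only the mutual-check logic mirrors the snippet above:

```cpp
#include <vector>

// Placeholder match type for illustration.
struct FeatureMatch {
  int point2D_idx1 = -1;
  int point2D_idx2 = -1;
};

std::vector<FeatureMatch> MutualNearestNeighborMatches(
    const std::vector<int>& nn12,      // best index in image 2 per keypoint in image 1
    const std::vector<int>& nn21,      // best index in image 1 per keypoint in image 2
    const std::vector<float>& sims12,  // similarity of each nn12 candidate
    float min_similarity) {
  std::vector<FeatureMatch> matches;
  matches.reserve(nn12.size());
  for (int i1 = 0; i1 < static_cast<int>(nn12.size()); ++i1) {
    const int i2 = nn12[i1];
    // Reject invalid or low-similarity candidates, so "empty" entries that
    // both default to index 0 cannot produce a spurious 0-0 match.
    if (i2 < 0 || sims12[i1] < min_similarity) {
      continue;
    }
    // Mutual nearest-neighbor check.
    if (i1 == nn21[i2]) {
      matches.push_back({i1, i2});
    }
  }
  return matches;
}
```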
  }
}

void MatchGuided(double max_error,
Could we add some comments in the virtual class for Match and MatchGuided? From my understanding, the current intention of MatchGuided is to make use of the two-view geometry to do guided matching and store the matches as inlier matches. Why not output a FeatureMatches instead?
On the other hand, since LightGlue does not benefit from guided matching, we should raise a warning on the call of this method.
I also did not find any usage of this MatchGuided method throughout COLMAP.
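A hedged sketch of the documentation this suggests for the virtual interface; the class and method signatures are approximations for illustration and may differ from the PR:

```cpp
// Forward declarations as placeholders for the real COLMAP types.
struct Image;
struct FeatureMatches;
struct TwoViewGeometry;

class FeatureMatcher {
 public:
  virtual ~FeatureMatcher() = default;

  // Exhaustively match the features of two images and return putative
  // correspondences.
  virtual void Match(const Image& image1,
                     const Image& image2,
                     FeatureMatches* matches) = 0;

  // Match features guided by a known two-view geometry, i.e., only accept
  // correspondences consistent with the geometry up to max_error, and store
  // them as inlier matches. Learned matchers such as LightGlue do not
  // benefit from guidance and may forward to Match() with a warning.
  virtual void MatchGuided(double max_error,
                           const Image& image1,
                           const Image& image2,
                           TwoViewGeometry* two_view_geometry) = 0;
};
```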
…h/torch-aligked-lightglue
It seems LiftFeat is approximately twice as fast as ALIKED and also achieves higher accuracy. It's also designed to handle challenging scenarios such as drastic lighting changes, low-texture regions, and repetitive patterns. Maybe consider switching from ALIKED to LiftFeat or adding both?
I'm really interested in this. Looking forward to seeing this PR merged soon!
…h/torch-aligked-lightglue
@xiemeilong Thanks for the feedback. There are still some known issues related to torch under Windows that I cannot root-cause, and I am not sure whether these hint at larger issues on other platforms as well. On which platform did you run this comparison? Unrelated to these issues, the current COLMAP implementation does not auto-rotate the images to be upright (e.g., based on EXIF image orientation information). ALIKED/LightGlue only work reliably up to roughly 45 degrees of rotation change between images. Could it be that your images are inconsistently rotated?
@ahojnnes I tested this on Linux, and all image orientations are the same. I found that LightGlue doesn't work well with aliked-n32 in my tests; hloc uses aliked-n16 by default.