This document provides a brief guide for setting up and using PromptReg, our demonstration implementation currently under active development.
Before you start, make sure your environment satisfies the following dependencies:
cuda==11.4
monai==1.3.0
nibabel==5.2.0
torch==1.12.0
torchvision==0.13.0
transformers==4.39.0
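As a rough sketch, the Python packages can be installed with pip; note that CUDA 11.4 itself comes from your system/driver installation, and the exact torch/torchvision wheels to use depend on your platform:
pip install torch==1.12.0 torchvision==0.13.0
pip install monai==1.3.0 nibabel==5.2.0 transformers==4.39.0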
We have provided a sample dataset in the .example folder to facilitate a quick start with both intra-subject and inter-subject registration tasks. Here’s how the dataset is structured:
.example/
|--- inter_subject/
| |--- prostate_mr
| | |--- image1.png
| | |--- image2.png
| | |--- image3.png
| | |--- image4.png
| | |--- label1.png // binary label for targeted ROI
| | |--- ...
| |--- abdomen_mr
| | |--- ...
| |--- abdomen_ct
| | |--- ...
|--- intra_subject/
| |--- aerial
| | |--- zh7_01_04_R.png // fixed image #1
| | |--- zh7_01_04_T.png // moving image #1
| | |--- zh7_02_01_R.png // fixed image #2
| | |--- zh7_02_01_T.png // moving image #2
| |--- histology
| | |--- ...
| |--- cardiac_mr/
| | |--- ...
| |--- lung_ct/
| | |--- ...
| |--- cell/
| | |--- ...
In the intra-subject folder, each pair of images to be aligned is denoted with '_R' for the fixed image and '_T' for the moving image.
To use your own data, simply add it to the example folder and specify the image paths when running the demo:
--fix_image ./example/intra_subject/aerial/zh7_01_04_R.png --mov_image ./example/intra_subject/aerial/zh7_01_04_T.png
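For example, assuming demo.py accepts these flags directly, a full invocation on the sample aerial pair might look like:
python demo.py --fix_image ./example/intra_subject/aerial/zh7_01_04_R.png --mov_image ./example/intra_subject/aerial/zh7_01_04_T.png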
Select a SAM variant by specifying the --sam_type argument in your command:
--sam_type sam_h
Available models include:
- sam_b: Segment Anything (SAM-B)
- sam_h: Segment Anything (SAM-H)
- medsam: Segment Anything in Medical Images (MedSAM)
- slimsam: 0.1% Data Makes Segment Anything Slim (SlimSAM)
- sam_hq: Segment Anything in High Quality (SAM_HQ)
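For instance, assuming --sam_type is passed to demo.py alongside its other flags, switching to MedSAM would look like:
python demo.py --sam_type medsam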
To incorporate pretrained weights, download the corresponding checkpoint from Hugging Face and update the model accordingly.
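As an illustrative sketch, a checkpoint can be fetched from the Hugging Face Hub with the huggingface-cli tool; the repository id and target directory below are assumptions, so substitute the checkpoint matching your chosen sam_type:
huggingface-cli download facebook/sam-vit-huge --local-dir ./weights/sam_h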
To run a quick demo of SAMReg (see the paper cited below), use the following command:
python demo.py
For interactive use in PromptReg, add a prompt point within the image dimensions:
python demo.py --prompt --prompt_point [80,113]
To perform traditional image warping with interpolation, specify the ROI type:
python demo.py --prompt --interpolate --ROI_type label_ROI
Select from label_ROI or pseudo_ROI depending on your needs.
For more precise warping, adjust the number of interpolation iterations (default: 10,000):
--ddf_max_iter 100000
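For example, a combined command (assuming demo.py accepts all of the flags above together) might be:
python demo.py --prompt --interpolate --ROI_type label_ROI --ddf_max_iter 100000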
Set a custom precision threshold for adopting paired ROIs:
--sim_criteria 0.88
Set a custom number of adopted paired ROIs:
--num_pair 5
- If the number of adopted paired ROIs (num_pair) exceeds the number of generated paired ROIs, the code will automatically switch to using the precision threshold (sim_criteria).
- The --sim_criteria argument becomes ineffective when --num_pair is set.
Set a filter for ROIs using the --v_min and --v_max arguments. ROIs with an area less than v_min or greater than v_max will be excluded.
--v_min 100 --v_max 7000
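Putting these together, a command that keeps up to 5 ROI pairs and excludes very small or very large ROIs might look like the following (the flag combination is illustrative):
python demo.py --prompt --num_pair 5 --v_min 100 --v_max 7000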
This work was supported by the International Alliance for Cancer Early Detection, a partnership between Cancer Research UK [C28070/A30912; C73666/A31378], Canary Center at Stanford University, the University of Cambridge, OHSU Knight Cancer Institute, University College London and the University of Manchester.
@article{huang2024one,
title={One registration is worth two segmentations},
author={Huang, Shiqi and Xu, Tingfa and Shen, Ziyi and Saeed, Shaheer Ullah and Yan, Wen and Barratt, Dean and Hu, Yipeng},
journal={arXiv preprint arXiv:2405.10879},
year={2024}
}