by Chuyu Zhao*, Hao Huang*, Jiashuo Guo*, Ziyu Shen*, Zhongwei Zhou, Jie Liu†, Zekuan Yu†
*These authors contributed equally to this work.
† Corresponding authors: jieliu@bjtu.edu.cn, yzk@fudan.edu.cn
We release our model and demo app on Hugging Face.
There are two ways for users to experience our RailNet CBCT tooth segmentation system:
- Clone our Hugging Face model repository and simply run gradio_app.py (a minimal local run is sketched below).
- Try our demo app without understanding any of our code!
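For the first option, a local run might look like the following. The Hugging Face repository ID below is a placeholder, so substitute the actual RailNet repository from our Hugging Face page:

```bash
# <hf-repo-id> is a placeholder -- replace it with the actual RailNet repository on Hugging Face
git clone https://huggingface.co/<hf-repo-id>
cd <cloned-repo-folder>
python gradio_app.py   # launches the Gradio demo locally
```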
Semi-supervised learning has become a compelling approach for 3D tooth segmentation from CBCT scans, where labeled data is scarce. However, existing methods still face two persistent challenges: limited corrective supervision in structurally ambiguous or mislabeled regions during supervised training, and performance degradation caused by unreliable pseudo-labels on unlabeled data. To address these problems, we propose Region-Aware Instructive Learning (RAIL), a dual-group, dual-student semi-supervised framework. Each group contains two student models guided by a shared teacher network. By alternating training between the two groups, RAIL promotes intergroup knowledge transfer and collaborative region-aware instruction while reducing overfitting to the characteristics of any single model. Specifically, RAIL introduces two instructive mechanisms. The Disagreement-Focused Supervision (DFS) Controller improves supervised learning by instructing predictions only within areas where student outputs diverge from both the ground truth and the best student, thereby concentrating supervision on structurally ambiguous or mislabeled areas. In the unsupervised phase, the Confidence-Aware Learning (CAL) Modulator reinforces agreement in regions with high model certainty while reducing the effect of low-confidence predictions during training. This prevents the model from learning unstable patterns and improves the overall reliability of pseudo-labels. Extensive experiments on four CBCT tooth segmentation datasets show that RAIL surpasses state-of-the-art methods under limited annotation.
RAIL (Region-Aware Instructive Learning) is a novel dual-group, dual-student semi-supervised framework for medical image segmentation, specifically designed to address the issue of limited labeled data in CBCT-based 3D tooth segmentation. This approach introduces two key mechanisms:
- Disagreement-Focused Supervision (DFS) Controller: Focuses on areas where model predictions diverge to improve supervision in anatomically ambiguous or mislabeled regions.
- Confidence-Aware Learning (CAL) Modulator: Enhances model stability by reinforcing high-confidence predictions and suppressing low-confidence pseudo-labels.
RAIL outperforms state-of-the-art methods under limited annotation scenarios, improving both segmentation accuracy and model reliability in medical imaging tasks.
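To make the two mechanisms concrete, here is a minimal PyTorch-style sketch of the core ideas. It is not the repository's implementation; the tensor shapes, the confidence threshold, and the weighting scheme are illustrative assumptions only:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only -- shapes, thresholds, and weights are assumptions,
# not the exact formulation used in train_RAIL.py.

def dfs_mask(student_probs, best_student_probs, labels):
    """DFS idea: supervise only voxels where a student's prediction disagrees
    with both the ground truth and the current best student."""
    student_pred = student_probs.argmax(dim=1)      # (B, D, H, W)
    best_pred = best_student_probs.argmax(dim=1)
    disagree_gt = student_pred != labels            # wrong w.r.t. ground truth
    disagree_best = student_pred != best_pred       # differs from best student
    return (disagree_gt & disagree_best).float()    # 1 = region receiving extra supervision

def cal_weighted_consistency(probs_a, probs_b, conf_threshold=0.9):
    """CAL idea: weight the unsupervised consistency loss by prediction
    confidence, down-weighting low-confidence pseudo-labels."""
    confidence = probs_b.max(dim=1).values                        # per-voxel certainty
    weight = (confidence > conf_threshold).float() * confidence   # suppress uncertain voxels
    voxel_loss = F.mse_loss(probs_a, probs_b, reduction="none").mean(dim=1)
    return (weight * voxel_loss).sum() / (weight.sum() + 1e-6)
```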
Official code for "RAIL: Region-Aware Instructive Learning for Semi-Supervised Tooth Segmentation in CBCT".
- [05/10/2025] We deployed our RailNet demo for CBCT tooth segmentation on Hugging Face Models, so users can load it with a single line of code and view our model's segmentation results directly.
- [05/7/2025] RAIL framework code and models are now available! Please check out the GitHub repository for more details.
- [04/18/2025] We provide RAIL model checkpoints trained on 3D_CBCT_Tooth_7_113, 3D_CBCT_Tooth_13_107, CTooth_7_115 and CTooth_13_109.
Download Links (two choices below):
- Baidu Cloud: https://pan.baidu.com/s/1EXFAeZLMZJLqWjyfUQQkBA?pwd=jqxg (Extraction Code: jqxg)
- Google Drive: https://drive.google.com/file/d/1uikdKR1E82H_7DtqML15u8PxRtKe21Jr/view?usp=sharing
This repository is based on Ubuntu 20.04, PyTorch 1.11.0, CUDA 11.3, and Python 3.8. All experiments in our paper were conducted on an NVIDIA RTX 4090 24GB GPU with an identical experimental setting under Linux.
Please follow these steps to create an environment and install the dependencies from requirements.txt:
conda create -n RAIL python=3.8
conda activate RAIL
git clone https://github.com/Tournesol-Saturday/RAIL.git
cd RAIL
pip install -r requirements.txt

[Optional] Download the model checkpoint and save it at:
./model/RAIL_xx_xx_xx/outputs/xx/pmt_0_iter_xxxx_best.pth
./model/RAIL_xx_xx_xx/outputs/xx/pmt_1_iter_xxxx_best.pth
./model/RAIL_xx_xx_xx/outputs/xx/pmt_2_iter_xxxx_best.pth
./model/RAIL_xx_xx_xx/outputs/xx/pmt_3_iter_xxxx_best.pth
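If you want to sanity-check a downloaded checkpoint, a quick inspection like the following should suffice; the internal structure of the .pth files (plain state_dict vs. a wrapper dict) is an assumption here, so check the printed keys:

```python
import torch

# Path pattern taken from the layout above; replace the "xx" placeholders
# with the dataset/iteration of the checkpoint you actually downloaded.
ckpt_path = "./model/RAIL_xx_xx_xx/outputs/xx/pmt_0_iter_xxxx_best.pth"

ckpt = torch.load(ckpt_path, map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    # Shows whether this is a plain state_dict or a dict wrapping one.
    print(list(ckpt.keys())[:10])
```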
We obtained two public datasets and preprocessed them. After preprocessing (data augmentation), all datasets are placed in the ./dataset directory.
To expand the training dataset, we augmented the available CBCT scans by 1) intensity normalization and 2) random patch cropping (a rough sketch of both operations is given below).
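As a rough illustration of these two operations (not the repository's exact implementation, which lives in the dataloaders augmentation scripts; the patch size below is an assumed value):

```python
import numpy as np

def normalize_intensity(volume):
    """Z-score intensity normalization of a 3D CBCT volume (illustrative)."""
    volume = volume.astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def random_crop(volume, label, patch_size=(112, 112, 80)):
    """Randomly crop a patch from the volume and its label map (illustrative).
    The patch size is an assumption, not the value used in the paper."""
    d, h, w = volume.shape
    pd, ph, pw = patch_size
    z = np.random.randint(0, d - pd + 1)
    y = np.random.randint(0, h - ph + 1)
    x = np.random.randint(0, w - pw + 1)
    return (volume[z:z + pd, y:y + ph, x:x + pw],
            label[z:z + pd, y:y + ph, x:x + pw])
```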
In this study, our CBCT data are stored in .h5 format; each file records the annotation for the corresponding scan (for an unlabeled image, the annotation is all zeros). For both the training and validation sets (but not the testing set), each scan is augmented up to 15 times.
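The .h5 files can be inspected with h5py. Note that the dataset key names used below ("image" and "label") are assumptions based on common conventions; check the printed keys against your files first:

```python
import h5py

path = "./dataset/CBCT_13_107/CBCT_data/labeled_1000889125_20171009_0/CBCT_roi.h5"
with h5py.File(path, "r") as f:
    print(list(f.keys()))      # check the actual dataset names first
    image = f["image"][:]      # assumed key for the CBCT volume
    label = f["label"][:]      # assumed key; all zeros for unlabeled scans
    print(image.shape, label.shape)
```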
In dataloaders/data_augmentation.py, you need to define the paths to the image and annotation folders of the CBCT scans. Then use the following command to augment the labeled data:

dataloaders/data_augmentation_labeled.py

and the following command to augment the unlabeled data:

dataloaders/data_augmentation_unlabeled.py

After augmentation, the datasets are organized as follows:

./dataset/CBCT_13_107/
    CBCT_data/
        labeled_1000889125_20171009_0/
            CBCT_roi.h5
        ......
        unlabeled_X2360674_14/
            CBCT_roi.h5
    Flods/
        train.list
        val.list
        test.list
./dataset/CBCT_7_113/
    CBCT_data/
        labeled_1000889125_20171009_0/
            CBCT_roi.h5
        ......
        unlabeled_X2360674_14/
            CBCT_roi.h5
    Flods/
        train.list
        val.list
        test.list
./dataset/CTooth_13_109/
    CTooth_data/
        labeled_1000889125_20171009_0/
            CBCT_roi.h5
        ......
        unlabeled_X2360674_14/
            CBCT_roi.h5
    Flods/
        train.list
        val.list
        test.list
./dataset/CTooth_7_115/
    CTooth_data/
        labeled_Teeth_0001_0000_0/
            CBCT_roi.h5
        ......
        unlabeled_Teeth_0013_0000_14/
            CBCT_roi.h5
    Flods/
        train.list
        val.list
        test.list
cd RAIL/code

To train our model, run:

python train_RAIL.py

To test our model, run:

python test_CBCT.py

If you use this project in your work, please cite the following paper:
@article{zhao2025rail,
title = {RAIL: Region-Aware Instructive Learning for Semi-Supervised Tooth Segmentation in CBCT},
author = {Chuyu Zhao and
Hao Huang and
Jiashuo Guo and
Ziyu Shen and
Zhongwei Zhou and
Jie Liu and
Zekuan Yu},
journal = {arXiv preprint arXiv:2505.03538},
keywords = {CBCT tooth segmentation, confidence-aware learning, disagreement-focused supervision, semi-supervised learning},
year = {2025}
}

If you find this project useful, consider citing or starring the repo.
Special thanks to Prof. Jie Liu and Prof. Zekuan Yu for their guidance throughout this paper.
We would like to acknowledge the contributions of the following projects:
If you have any questions, feel free to contact me at 22723077@bjtu.edu.cn.