Missing Teeth and Restoration Detection Using Dental Panoramic Radiographs Based on Transfer Learning With CNNs
ABSTRACT Common dental diseases include caries, periodontitis, missing teeth, and restorations. Dentists still judge and label lesions manually, which is time-consuming and highly repetitive. This research uses artificial intelligence combined with image judgment technology to improve the efficiency of this process. For the image cropping technology, the proposed study uses histogram equalization combined with flat-field correction for pixel value assignment, which depicts the details of the bone structure more clearly and improves the resolution under heavy noise coverage. A polynomial function is then used to connect all the interdental gap strips into a smooth curve. This curve solves the problem that the original cropping technology could not recognize a single tooth in some images, and the accuracy has been improved by around 4% through the proposed cropping technique. For the convolutional neural network (CNN) technology, a lesion area analysis model is trained to judge restorations and missing teeth in clinical panoramic (PANO) radiographs, with the aim of developing automatic diagnosis as a precision medicine technology. Among the three commonly used neural networks, namely AlexNet, GoogLeNet, and SqueezeNet, the experimental results show that the accuracy of the proposed GoogLeNet model for restorations and the SqueezeNet model for missing teeth reached 97.10% and 99.90%, respectively. This research has been approved by the Institutional Review Board (IRB) with application number 202002030B0.
INDEX TERMS Biomedical image, panoramic image, histogram equalization, flat-field correction, tooth segmentation, tooth position, CNN, transfer learning, AlexNet, GoogLeNet, SqueezeNet.
classification [4], tumor lesions [5], and vascular analysis [6]. In proteomic analysis, proteomic information has been integrated and combined with the structural deep network embedding (SDNE) framework [7], moving from large-scale disease genomes to integrated disease genome analysis that reveals the genetic basis. A CNN can automatically learn the distinguishing characteristics of each disease symptom, analyze the importance of those characteristics and the correlations between symptoms, and then obtain the best functional solution.
Panoramic (PANO) X-ray film is one of the dental X-rays most commonly used in daily dental examinations. Compared with other dental X-ray films, it has the important advantage of covering most anatomical structures and clinical findings in a single image [8]. This feature facilitates analysis by PANO experts and provides important information related to clinical diagnosis and treatment [9]. In this study, deep learning is used to classify different tooth symptoms. With deep learning applied to dental symptoms, the risks and potential outcomes of certain procedures can be analyzed in more detail. It also helps dentists show patients what to expect from corrective treatment [10], for example what effect they would see after a complete smile overhaul in the form of full-arch implants and restorations. This is a considerable revolution in dentistry, and it has not stopped there. The work in [11] focuses on a system for detecting and segmenting each tooth in panoramic X-ray images. In [12], a two-level hierarchical CNN structure is realized for tooth segmentation by labeling each mesh surface, with one level marking the gums and the other marking the interdental regions. The work in [13] proposes a novel approach based on a sparse voxel octree and 3D convolutional neural networks (CNNs) for segmenting and classifying tooth types on 3D dental models. Most of these studies stop at tooth segmentation and do not use the segmented images for further training. Therefore, this work uses the completely cut tooth images to continue model training on dental symptoms. The previous results in [14] are used to separate the teeth of a panoramic X-ray film into single-tooth samples, tooth identification beyond the wisdom teeth is performed using trough-like changes and spatial relationships [15], and the recent automatic identification of tooth position based on Mask R-CNN [16] is used to discuss the achievable accuracy. In this study, the tooth cutting technology is improved to increase accuracy. At the same time, according to the different feature values of the judged symptoms, image enhancement is used to strengthen the features of each disease, and transfer learning is applied to establish artificial intelligence models of the related symptoms and judge them.
Among the many dental diseases, this study focuses on the analysis and discussion of missing teeth and restorations, because these two types of symptoms are very common. Before deep learning training, the symptoms in the images need to be enhanced. This study mainly uses the difference in pixel values between the two symptoms to distinguish and judge them. The first step is contrast adjustment [17]: contrast adjustment is used to amplify the characteristic pixel values, and median filtering [18] is then used to eliminate the noise remaining after contrast adjustment. This preprocessing strengthens the symptom features of the image before model training. At the same time, background masking is used to cover the non-target background of each cut tooth, so that during training the model can better learn the characteristics of the disease and the single-tooth samples become more complete. After the symptom enhancement processing is completed, the models are trained. Three transfer learning models are used, namely AlexNet, GoogLeNet, and SqueezeNet [19]. Their hyperparameters and learning rates are adjusted to improve the accuracy when training each symptom, symptom training is performed for the various image processing variants, and the accuracy rates are compared to find the most suitable CNN model. The most suitable CNN models for the two symptoms are then integrated to obtain the most suitable system structure. Each model contains a different number of layers and nodes, producing a different classification methodology. The novelties of the proposed method are as follows:
1. The research uses histogram equalization combined with flat-field correction to assign pixel values. This depicts the bone structure more clearly and also improves the resolution under heavy noise coverage (an illustrative sketch of this step is given at the end of this section).
2. The research uses a polynomial function to connect all the interdental gap strips into a smooth curve. This solves the problem that the original cutting technology of [14] could not extract a single tooth in some images.
3. The proposal uses image preprocessing and masking technology to increase the final accuracy by up to 5.4% (from 91.7% to 97.1%).
4. From the results, the accuracy rates of the five models are all above 95%. Among them, the accuracy rate of GoogLeNet reached 97.1%. Compared with the reference, the accuracy rate is improved by about 7%.
The analysis method for missing teeth and restorations in dental panoramic radiographs proposed in this research can provide dentists with more accurate and objective judgment data, so as to achieve the purpose of developing automatic diagnosis and treatment planning as a technology for assisting precision medicine. The proposed method not only reduces the workload of dentists but also gives them more time for professional clinical treatment, improves the quality of medical resources, and helps achieve a harmonious doctor-patient relationship.
The introduction of this research is followed by the materials and methods for the missing-teeth and restoration analysis model based on the convolutional neural network (CNN). The third section introduces and analyzes the evaluation methods and experimental results of the model. These findings are then discussed in Section 4. Finally, the fifth section presents conclusions and future prospects.
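To make novelty 1 concrete, the following is a minimal sketch of the pixel-value assignment step (flat-field correction followed by histogram equalization), assuming OpenCV and NumPy. Estimating the flat-field image with a heavy Gaussian blur, the sigma value, and the function name are illustrative assumptions; the paper does not specify these details.

```python
import cv2
import numpy as np

def enhance_pano(img_gray):
    """Hedged sketch: flat-field correction followed by histogram equalization.

    img_gray: 8-bit grayscale panoramic radiograph (NumPy array).
    The flat-field image is approximated here by a heavy Gaussian blur of the
    input (an assumption; a measured flat-field image could be used instead).
    """
    img = img_gray.astype(np.float32)
    flat = cv2.GaussianBlur(img, (0, 0), sigmaX=51)   # estimated illumination field
    flat[flat == 0] = 1.0                             # avoid division by zero
    corrected = img / flat * flat.mean()              # classic flat-field correction
    corrected = np.clip(corrected, 0, 255).astype(np.uint8)
    return cv2.equalizeHist(corrected)                # histogram equalization

# Usage (hypothetical file name):
# pano = cv2.imread("pano.png", cv2.IMREAD_GRAYSCALE)
# enhanced = enhance_pano(pano)
```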
obtain more detail from the teeth. Maintaining a stronger contrast between the teeth and the gaps showed a significant improvement in the segmentation compared with using the original image directly.
Taking the center part of each gap line as a mark, the curve is drawn by applying the polynomial function to these marks, resulting in the marked image shown in Figure 3.
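As a rough illustration of this curve-fitting step, the sketch below fits a polynomial through the gap marks with NumPy. The marks are assumed to be available as (x, y) pixel coordinates, and the polynomial degree is an assumption; the paper does not state the order it uses.

```python
import numpy as np

def fit_gap_curve(marks, degree=4):
    """Fit a smooth polynomial curve through the interdental gap marks.

    marks  : list of (x, y) pixel coordinates, one per detected gap strip
             (hypothetical input format).
    degree : assumed polynomial order; not stated in the paper.
    """
    xs = np.array([m[0] for m in marks], dtype=float)
    ys = np.array([m[1] for m in marks], dtype=float)
    coeffs = np.polyfit(xs, ys, degree)           # least-squares polynomial fit
    poly = np.poly1d(coeffs)
    x_dense = np.arange(xs.min(), xs.max() + 1)   # sample every column along the arch
    return np.stack([x_dense, poly(x_dense)], axis=1)

# Example: five gap marks roughly following the dental arch
curve = fit_gap_curve([(40, 210), (120, 180), (200, 170), (280, 182), (360, 208)])
```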
4) TEETH SEGMENTATION
When drawing the cutting lines, different cutting methods are implemented to account for the different characteristics of the upper and lower rows of teeth in the photographs. The Greedy Algorithm [25] is a fast iterative approach that always selects the optimal solution in the current situation when solving a problem. However, it does not take the overall performance into account; in other words, it yields a local optimal solution. This method can be used to cut the upper teeth effectively.
According to the greedy concept, the method moves up one pixel per iteration from the tooth seam point and looks for the lowest value among five horizontally adjacent pixels. It uses this point as the starting point for the next iteration, repeats until half the length of the tooth is reached, and then connects the positions back to the corresponding tooth seam point to form a split line, as shown in Figure 6. This method is used for the upper part of the cut. Compared with the lower half of the teeth, the upper teeth are generally large and well separated, so there is no problem of lower front teeth that are too small with unclear edges.
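The greedy upward cut described above can be sketched as follows. This is an illustrative reading of the description (one row up per iteration, darkest of five horizontally adjacent pixels, stop at half the tooth height), not the authors' implementation; the function and argument names are hypothetical.

```python
import numpy as np

def greedy_upper_cut(img_gray, seam_x, seam_y, tooth_height):
    """Hedged sketch of the greedy vertical cut for the upper teeth.

    Starting from a tooth seam point (seam_x, seam_y), the path moves up one
    row per iteration and, within a five-pixel horizontal window, follows the
    darkest pixel, for half of the tooth height. The resulting points form
    the split line. Bounds handling is an assumption.
    """
    path = [(seam_x, seam_y)]
    x, y = seam_x, seam_y
    for _ in range(int(tooth_height // 2)):
        y -= 1                                   # move one pixel up
        if y < 0:
            break
        lo = max(x - 2, 0)
        hi = min(x + 3, img_gray.shape[1])       # five-pixel horizontal window
        window = img_gray[y, lo:hi]
        x = lo + int(np.argmin(window))          # darkest pixel = gap between teeth
        path.append((x, y))
    return path                                  # connect these points as the cut line
```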
To separate the teeth, the pixel intensities are summed along each line perpendicular to the curve [26]. Because the gap between adjacent teeth causes the value projected onto the curve to be very low, the teeth can be split at these minima. Although this is a good method, in practice teeth can overlap, so the projected minima are not necessarily actual gaps, especially in the lower half. This is mainly because the lower front teeth are small and tend to overlap, so the approach described above is not applicable to them. With that, this study uses a different method to cut the lower teeth vertically, as shown in Figure 7.

FIGURE 7. Schematic diagram of the method of cutting the lower teeth vertically.

B. DATA SET
Training a CNN requires the preparation of a large number of tagged data sets to ensure the accuracy of the CNN model. Therefore, this study collaborated with three professional dentists, and the clinical images were annotated by these dentists. All experts are employed in specialist clinics and have at least 3 years of clinical experience. The experts guide the researchers, provide symptom knowledge, teach the researchers with actual cases (describing the characteristics of missing teeth and restorations), and provide clinical data to calibrate the CNN model (eliminating other non-target symptoms).
To reduce the computational complexity of the developed algorithm, this study used single teeth to judge the results. The image library annotated by the dentists was also marked on a single-tooth basis, as described in step 2.2. A total of 108 panoramic X-rays were used to obtain a total of 3,456 single-tooth dental images. With the help of a dentist, each tooth has been marked with its signs of disease.
Since the data were provided by the hospital, there is a significant imbalance in the proportions of the images, with only 498 dentures and 358 missing teeth compared with 2,600 normal teeth. Table 1 shows the number of images for each clinical disease type, with the number of normal teeth far higher than that of the remaining two diseases. To train on limited data and avoid having CNN models that are under-represented by insufficient data, data enhancement is applied, as described in the training phase below.
3) TRAINING PHASE
The problem of unbalanced and insufficient data was mentioned in Section 2.2 (Data Set). When training on limited data, the CNN model must therefore be prevented from having its learning degraded by insufficient and uneven data.
This study randomly selects 350 images for each disease symptom from the database, of which 70% are used as the training set and the remaining 30% as the validation set. The training set is the set of samples needed to train a network, and the validation set is the set of samples used to evaluate whether the network can classify correctly after training. Data enhancement techniques are applied to the training set. These techniques include random rotation within ±20°, random zoom, vertical and horizontal flips, and vertical and horizontal translation within ±30 pixels, which increase the complexity and the number of samples of the training image set. Through this method, the training set can be expanded to 5145 tooth images, with 1715 images per judgment class for training.
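A minimal sketch of the augmentation step just described is given below, assuming OpenCV and NumPy. The rotation and translation limits follow the text (±20°, ±30 pixels); the zoom range, flip probabilities, and border handling are assumptions, since the paper lists only the transform types.

```python
import random
import cv2
import numpy as np

def augment(tooth_img):
    """Hedged sketch of the described augmentation: random rotation within
    +/-20 degrees, random zoom, random flips, and random translation within
    +/-30 pixels. Zoom range and border handling are assumed values.
    """
    h, w = tooth_img.shape[:2]
    angle = random.uniform(-20, 20)
    zoom = random.uniform(0.9, 1.1)                      # assumed zoom range
    tx, ty = random.uniform(-30, 30), random.uniform(-30, 30)

    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, zoom)
    M[0, 2] += tx                                        # add translation to the affine matrix
    M[1, 2] += ty
    out = cv2.warpAffine(tooth_img, M, (w, h), borderMode=cv2.BORDER_REPLICATE)

    if random.random() < 0.5:
        out = cv2.flip(out, 1)                           # horizontal flip
    if random.random() < 0.5:
        out = cv2.flip(out, 0)                           # vertical flip
    return out
```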
III. EXPERIMENTAL RESULTS AND ANALYSIS
This section presents the performance results of the proposed tooth segmentation algorithm and compares them with the method proposed in [14]. A comparison of the effect of the image processing applied to the data set on the results of the three CNN networks is also presented for further discussion.
The positioning accuracy for each tooth position is shown in Figure 9. It can be seen that for tooth positions 15, 14, 13, 21, 22, 27, 34, 32, and 42, the accuracy rate exceeds 95%, and for teeth 12, 11, 33, 31, and 41 it even exceeds 98%. The overall accuracy rate is 93.28%, which shows the excellent performance of the tooth cutting and tooth positioning presented in this article. Compared with the 92.78% and 92.14% accuracy rates of [14] and [15], the accuracy rate in this study shows a 0.5% improvement, as listed in Table 4. Even compared with the 79.00% of [13], the proposed method is a huge improvement.

TABLE 4. Positioning accuracy rate.
FIGURE 11. Loss function evaluation for the training process of the three models.
TABLE 6. Precision and recall of the three models using the feature-enhanced images.
TABLE 7. Precision and recall of the three models using the masked images.
TABLE 9. Accuracy of the three models compared with the literature.
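As a rough illustration of the transfer-learning setup compared in Figure 11 and Tables 6, 7, and 9, the sketch below builds the three backbones with their classifier heads replaced for three classes (normal, missing tooth, restoration). PyTorch/torchvision is an assumed framework (the paper does not state which one was used), and the authors' hyperparameters, optimizer, and fine-tuning schedule are not reproduced here.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # normal, missing tooth, restoration

def build_model(name):
    """Hedged sketch: ImageNet-pretrained backbone with a replaced head."""
    if name == "alexnet":
        m = models.alexnet(weights="DEFAULT")
        m.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    elif name == "googlenet":
        m = models.googlenet(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "squeezenet":
        m = models.squeezenet1_1(weights="DEFAULT")
        m.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
        m.num_classes = NUM_CLASSES
    else:
        raise ValueError(name)
    return m

# Compare the three architectures under the same training procedure:
# nets = {n: build_model(n) for n in ("alexnet", "googlenet", "squeezenet")}
```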
The precision and recall are listed in Table 8, where, as expected, the GoogLeNet values are higher under the same conditions. For the missing-tooth judgment, AlexNet and SqueezeNet both have precision values of 99%, while their recall is only 92.9%. The precision of GoogLeNet is 98.1%, while its recall is 97.2%. While sacrificing a small amount of recall, as mentioned earlier, is permissible, here there is a clear gap between the precision and recall values. GoogLeNet, by contrast, performs well overall, indicating that GoogLeNet has better detection of missing teeth. The advantage for restorations is even more pronounced, with GoogLeNet showing better scores.
Table 9 summarizes the best results of the different models used in this study and compares them with the current state-of-the-art [33], [34], [15]. From the results, whether for missing teeth or restorations, this study has better recognition accuracy. GoogLeNet has the best accuracy performance in this article, with an accuracy rate of 97.1%; AlexNet has 95.2%, which is relatively low. The overall accuracy of the method in this paper is above 95%, which is greatly improved compared with the methods of the current state-of-the-art [33], [34], [15].
The accuracy of tooth cutting and positioning is strongly positively correlated with the subsequent judgments: when the cutting and positioning improve, the accuracy of the subsequent judgments improves as well.

IV. CONCLUSION
This study presents an advanced image cropping method combined with CNN models for classification, designed to solve the classification problem of dental panoramic radiographs (DPR). Partial optimization of the cutting method is performed through preprocessing of the image and is based on the characteristics of human teeth. The optimization method takes the structure of the tooth into account, uses the tooth neck and the interdental gap to segment, and locates the position of the tooth. The cutting is based on the 32 teeth of a normal person. The overall accuracy after tooth cutting reached 93%, which is very promising.
The classification of dental diseases is performed by a neural network using transfer learning that classifies the most common diseases, missing teeth and prostheses, against normal teeth. The cutting method proposed in this article has some limitations. If a tooth grows in an unusual position, the method will not be able to cut that tooth, and the lack of teeth in the DPR must not be too severe: the upper and lower rows must each contain at least 8 teeth, otherwise the model will not be able to judge and execute. In [8], a dynamically constructed collaborative model is used to integrate two tooth segmentation and recognition models, and this approach has been shown to be effective and more potent. In view of this, a dynamically constructed collaborative model will also be the direction of our future efforts.
Future research will focus on improving this system. The incisor part will use the most advanced R-CNN to perform tooth numbering and incisor detection. Subsequent disease identification can also classify the symptoms in more detail and add other diseases and conditions to improve the accuracy of the method proposed in this study. Finally, the whole system is to be simplified and its running time shortened. Moreover, it is hoped that it can be applied to the clinical practice of dentists.
REFERENCES
[1] J. Chen, Y. Li, and J. Zhao, "X-ray of tire defects detection via modified faster R-CNN," in Proc. 2nd Int. Conf. Saf. Produce Informatization (IICSPI), Nov. 2019, pp. 257–260, doi: 10.1109/IICSPI48186.2019.9095873.
[2] P. Arena, S. Baglio, L. Fortuna, and G. Manganaro, "CNN processing for NMR spectra," in Proc. 3rd IEEE Int. Workshop Cellular Neural Netw. Appl. (CNNA), Dec. 1994, pp. 457–462, doi: 10.1109/CNNA.1994.381632.
[3] R. Zhu, R. Zhang, and D. Xue, "Lesion detection of endoscopy images based on convolutional neural network features," in Proc. 8th Int. Congr. Image Signal Process. (CISP), Oct. 2015, pp. 372–376, doi: 10.1109/CISP.2015.7407907.
[4] M. S. Wibawa, "A comparison study between deep learning and conventional machine learning on white blood cells classification," in Proc. Int. Conf. Orange Technol. (ICOT), Oct. 2018, pp. 1–6, doi: 10.1109/ICOT.2018.8705892.
[5] S. Somasundaram and R. Gobinath, "Current trends on deep learning models for brain tumor segmentation and detection—A review," in Proc. Int. Conf. Mach. Learn., Big Data, Cloud Parallel Comput. (COMITCon), Feb. 2019, pp. 217–221, doi: 10.1109/COMITCon.2019.8862209.
[6] C. Kromm and K. Rohr, "Inception capsule network for retinal blood vessel segmentation and centerline extraction," in Proc. IEEE 17th Int. Symp. Biomed. Imag. (ISBI), Apr. 2020, pp. 1223–1226, doi: 10.1109/ISBI45749.2020.9098538.
[7] M. Zilocchi, C. Wang, M. Babu, and J. Li, "A panoramic view of proteomics and multiomics in precision health," iScience, vol. 24, no. 8, Jul. 2021, Art. no. 102925, doi: 10.1016/j.isci.2021.102925.
[8] G. Chandrashekar, S. AlQarni, E. E. Bumann, and Y. Lee, "Collaborative deep learning model for tooth segmentation and identification using panoramic radiographs," Comput. Biol. Med., vol. 148, Sep. 2022, Art. no. 105829, doi: 10.1016/j.compbiomed.2022.105829.
[9] T. Yeshua, "Automatic detection and classification of dental restorations in panoramic radiographs," Issues Informing Sci. Inf. Technol., vol. 16, pp. 221–234, May 2019, doi: 10.28945/4306.
[10] W. Ying, T. Bao-yu, and X. Yun-ye, "The method of CAD model creation of dental shape based on 3D-CT images in orthodontics," in Proc. Int. Conf. Adv. Technol. Design Manuf. (ATDM), 2010, pp. 219–221, doi: 10.1049/cp.2010.1292.
[11] G. Jader, J. Fontineli, M. Ruiz, K. Abdalla, M. Pithon, and L. Oliveira, "Deep instance segmentation of teeth in panoramic X-ray images," in Proc. 31st SIBGRAPI Conf. Graph., Patterns Images (SIBGRAPI), Oct. 2018, pp. 400–407, doi: 10.1109/SIBGRAPI.2018.00058.
[12] X. Xu, C. Liu, and Y. Zheng, "3D tooth segmentation and labeling using deep convolutional neural networks," IEEE Trans. Vis. Comput. Graphics, vol. 25, no. 7, pp. 2336–2348, Jul. 2019, doi: 10.1109/TVCG.2018.2839685.
[13] S. Tian, N. Dai, B. Zhang, F. Yuan, Q. Yu, and X. Cheng, "Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks," IEEE Access, vol. 7, pp. 84817–84828, 2019, doi: 10.1109/ACCESS.2019.2924262.
[14] Y.-C. Huang, C.-A. Chen, T.-Y. Chen, H.-S. Chou, W.-C. Lin, T.-C. Li, J.-J. Yuan, S.-Y. Lin, C.-W. Li, S.-L. Chen, Y.-C. Mao, P. A. R. Abu, W.-Y. Chiang, and W.-S. Lo, "Tooth position determination by automatic cutting and marking of dental panoramic X-ray film in medical image processing," Appl. Sci., vol. 11, no. 24, p. 11904, Dec. 2021, doi: 10.3390/app112411904.
[15] J. Park, J. Lee, S. Moon, and K. Lee, "Deep learning based detection of missing tooth regions for dental implant planning in panoramic radiographic images," Appl. Sci., vol. 12, no. 3, p. 3, Jan. 2022, doi: 10.3390/app12031595.
[16] A. Wirtz, S. G. Mirashi, and S. Wesarg, "Automatic teeth segmentation in panoramic X-ray images using a coupled shape model in combination with a neural network," in Medical Image Computing and Computer Assisted Intervention—MICCAI. Cham, Switzerland: Springer, 2018, pp. 712–719, doi: 10.1007/978-3-030-00937-3_81.
[17] C.-C. Huang and M.-H. Nguyen, "X-ray enhancement based on component attenuation, contrast adjustment, and image fusion," IEEE Trans. Image Process., vol. 28, no. 1, pp. 127–141, Jan. 2019, doi: 10.1109/TIP.2018.2865637.
[18] C.-C. Chang, J.-Y. Hsiao, and C.-P. Hsieh, "An adaptive median filter for image denoising," in Proc. 2nd Int. Symp. Intell. Inf. Technol. Appl., Dec. 2008, pp. 346–350, doi: 10.1109/IITA.2008.259.
[19] S. A. Prajapati, R. Nagaraj, and S. Mitra, "Classification of dental diseases using CNN and transfer learning," in Proc. 5th Int. Symp. Comput. Bus. Intell. (ISCBI), Aug. 2017, pp. 70–74, doi: 10.1109/ISCBI.2017.8053547.
[20] A. Ajaz and D. Kathirvelu, "Dental biometrics: Computer aided human identification system using the dental panoramic radiographs," in Proc. Int. Conf. Commun. Signal Process., Apr. 2013, pp. 717–721, doi: 10.1109/iccsp.2013.6577149.
[21] R. Kaur, R. S. Sandhu, A. Gera, and T. Kaur, "Edge detection in digital panoramic dental radiograph using improved morphological gradient and Matlab," in Proc. Int. Conf. Smart Technol. Smart Nation (SmartTechCon), Aug. 2017, pp. 793–797, doi: 10.1109/SmartTechCon.2017.8358481.
[22] V. E. Rushton, K. Horner, and H. V. Worthington, "Factors influencing the selection of panoramic radiography in general dental practice," J. Dentistry, vol. 27, no. 8, pp. 565–571, Nov. 1999, doi: 10.1016/S0300-5712(99)00031-7.
[23] A. K. Jain and H. Chen, "Matching of dental X-ray images for human identification," Pattern Recognit., vol. 37, no. 7, pp. 1519–1532, Jul. 2004, doi: 10.1016/j.patcog.2003.12.016.
[24] R. Wanat and D. Frejlichowski, "A problem of automatic segmentation of digital dental panoramic X-ray images for forensic human identification," in Proc. 15th Central Eur. Seminar Comput. Graph. (CESCG), 2011.
[25] B. M. Patil and B. Amarapur, "Segmentation of leaf images using greedy algorithm," in Proc. Int. Conf. Energy, Commun., Data Analytics Soft Comput. (ICECDS), Aug. 2017, pp. 2137–2141, doi: 10.1109/ICECDS.2017.8389830.
[26] Y.-C. Mao, T.-Y. Chen, H.-S. Chou, S.-Y. Lin, S.-Y. Liu, Y.-A. Chen, Y.-L. Liu, C.-A. Chen, Y.-C. Huang, S.-L. Chen, C.-W. Li, P. A. R. Abu, and W.-Y. Chiang, "Caries and restoration detection using bitewing film based on transfer learning with CNNs," Sensors, vol. 21, no. 13, p. 4613, Jul. 2021, doi: 10.3390/s21134613.
[27] A. Gurses and A. B. Oktay, "Tooth restoration and dental work detection on panoramic dental images via CNN," in Proc. Med. Technol. Congr. (TIPTEKNO), Nov. 2020, pp. 1–4, doi: 10.1109/TIPTEKNO50054.2020.9299272.
[28] X. Zhang, W. Pan, and P. Xiao, "In-vivo skin capacitive image classification using AlexNet convolution neural network," in Proc. IEEE 3rd Int. Conf. Image, Vis. Comput. (ICIVC), Jun. 2018, pp. 439–443, doi: 10.1109/ICIVC.2018.8492860.
[29] Z. Zhu, J. Li, L. Zhuo, and J. Zhang, "Extreme weather recognition using a novel fine-tuning strategy and optimized GoogLeNet," in Proc. Int. Conf. Digit. Image Comput., Techn. Appl. (DICTA), Nov. 2017, pp. 1–7, doi: 10.1109/DICTA.2017.8227431.
[30] X. Qian, E. W. Patton, J. Swaney, Q. Xing, and T. Zeng, "Machine learning on cataracts classification using SqueezeNet," in Proc. 4th Int. Conf. Universal Village (UV), Oct. 2018, pp. 1–3, doi: 10.1109/UV.2018.8642133.
[31] Y. Zhao, P. Li, C. Gao, Y. Liu, Q. Chen, F. Yang, and D. Meng, "TSASNet: Tooth segmentation on dental panoramic X-ray images by two-stage attention segmentation network," Knowl.-Based Syst., vol. 206, Oct. 2020, Art. no. 106338, doi: 10.1016/j.knosys.2020.106338.
[32] K. Motoki, F. P. Mahdi, N. Yagi, M. Nii, and S. Kobashi, "Automatic teeth recognition method from dental panoramic images using faster R-CNN and prior knowledge model," in Proc. Joint 11th Int. Conf. Soft Comput. Intell. Syst., 21st Int. Symp. Adv. Intell. Syst. (SCIS-ISIS), Dec. 2020, pp. 1–5, doi: 10.1109/SCISISIS50064.2020.9322685.
[33] N.-H. Lin, T.-L. Lin, X. Wang, W. T. Kao, H. W. Tseng, S. L. Chen, Y. S. Chiou, J. F. Villaverde, and Y. F. Kuo, "Teeth detection algorithm and teeth condition classification based on convolutional neural networks for dental panoramic radiographs," J. Med. Imag. Health Inform., vol. 8, no. 3, pp. 507–515, 2018.
[34] B. Çelik and M. E. Çelik, "Automated detection of dental restorations using deep learning on panoramic radiographs," Dentomaxillofacial Radiol., vol. 51, Sep. 2022, Art. no. 20220244, doi: 10.1259/dmfr.20220244.
SHIH-LUN CHEN (Member, IEEE) received the B.S., M.S., and Ph.D. degrees in electrical engineering from National Cheng Kung University, Tainan, Taiwan, in 2002, 2004, and 2011, respectively. He was an Assistant Professor and an Associate Professor at the Department of Electronic Engineering, Chung Yuan Christian University, Taiwan, from 2011 to 2014 and from 2014 to 2017, where he has been a Professor since 2017. His current research interests include VLSI chip design, image processing, wireless body sensor networks, the Internet of Things, wearable devices, data compression, fuzzy logic control, bio-medical signal processing, and reconfigurable architecture. He was a recipient of the Outstanding Teaching Award from Chung Yuan Christian University in 2014 and 2019.

TSUNG-YI CHEN received the B.S. degree in electronic engineering from Chung Yuan Christian University, Zhongli, Taoyuan, Taiwan, in 2020, where he is currently pursuing the Ph.D. degree. His current research interests include VLSI chip design, image processing, machine learning, and bio-medical signal processing.

YEN-CHENG HUANG received the Bachelor of Dentistry degree from China Medical University, Taichung, Taiwan, in 2017. She is currently a Senior Resident with the Department of General Dentistry, Chang Gung Memorial Hospital, Taoyuan, Taiwan. Her current research interests include dental radiographic image processing and deep learning.

WEI-CHI LIN received the degree in electronic engineering from Chung Yuan Christian University, Zhongli, Taoyuan, Taiwan, in 2018. His current research interests include VLSI chip design, image processing, and machine learning.

TZU-CHIEN LI received the degree in electronic engineering from Chung Yuan Christian University, Zhongli, Taoyuan, Taiwan, in 2018. His current research interests include VLSI chip design, image processing, and machine learning.

JIA-JUN YUAN received the degree in electronic engineering from Chung Yuan Christian University, Zhongli, Taoyuan, Taiwan, in 2018. His current research interests include VLSI chip design, image processing, and machine learning.