International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 11 Issue: 11 | Nov 2024 www.irjet.net p-ISSN: 2395-0072
ARTIFICIAL INTELLIGENCE IN IMAGE PROCESSING FOR MEDICAL PHYSICS
J. P. Pramod1, Sumaiyya Fatima2 & Baddula Gayathri Yadav3
1Asst Professor, Dept of Physics
Stanley College of Engineering and Technology for Women
2&3B.Tech Student, Dept of Computer Science and Engineering
Stanley College of Engineering and Technology for Women
Abstract:
Artificial Intelligence (AI) is advancing the field of medical physics by delivering solutions that improve image quality in diagnostic modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), with an emphasis on helping doctors enhance their diagnostic capabilities. Historically, medical imaging technologies have transformed how healthcare professionals diagnose and treat patients. From early X-ray technology to advanced modalities like MRI and CT, each innovation has brought new possibilities and challenges. Today, AI is ushering in the next wave, offering sophisticated tools for image enhancement and diagnosis that redefine the boundaries of medical physics. This paper shows how, compared with conventional post-processing methods in MRI and CT imaging, AI-based image processing techniques have changed the game by addressing low resolution, noise, and artifacts. Through a closer analysis of specific AI architectures such as Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), and a discussion of their clinical applications, this research highlights how these technologies meet the growing demand for precision and accuracy in medical diagnostics.
Keywords:
Artificial Intelligence (AI), Medical Physics, Diagnostic Imaging, Magnetic Resonance Imaging (MRI), Convolutional Neural
Networks (CNNs).
Introduction:
Medical imaging has revolutionized healthcare, giving medical professionals remarkable insight into the workings of the human body, insight that was unavailable only a few decades ago. Two of the major tools of medical physics are Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), each with its own strengths. MRI uses strong magnetic fields and radio waves to image the soft tissues of the human body at very high spatial resolution, and it has been applied successfully to neurological, musculoskeletal, and cardiovascular imaging. MRI works by aligning hydrogen protons in a strong magnetic field; radiofrequency pulses knock these protons out of alignment, and as they relax back they emit radio signals that are collected and reconstructed into an image. The main obstacle for MRI is its sensitivity to patient motion, a consequence of long scan times: patients must remain motionless for 30-60 minutes, and movement of even a millimetre during the scan can blur the image enough to make it non-diagnostic. CT imaging produces cross-sectional images of the body using X-rays. It offers excellent bone contrast, and CT scanning can also detect tumors within liver tissue. CT is also used in neurological imaging, particularly in the context of stroke. However, CT is limited by its use of ionizing radiation: any amount of ionizing radiation carries a cancer risk, especially when scans must be repeated. Low-dose CT imaging attempts to alleviate this by reducing radiation exposure; however, the reduced energy increases noise and lowers image resolution, making small abnormalities harder to detect. Moreover, both modalities suffer from noise and limited resolution, which can result in the loss of key diagnostic information.
Artificial intelligence, with all its potential, may offer a solution to some of these problems, especially in image enhancement and noise reduction. Over the past decade, AI has become a transformational force in medical physics, changing the way imaging data are processed and analyzed. Sophisticated models such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been developed for quality enhancement, motion artifact correction, and noise reduction. For example, CNNs can be designed to identify and remove noise from MRI scans, producing clearer, more accurate images. In CT imaging, AI techniques have been used to optimize contrast and suppress artifacts, enabling better detection of pathologies and improved diagnostic outcomes.
© 2024, IRJET | Impact Factor value: 8.315 | ISO 9001:2008 Certified Journal | Page 9
Figure 1: The Process Flow Representation of Medical Image Diagnosis using AI.
Specialized AI Models for Enhanced MRI and CT scans:
In imaging and diagnostic medicine, new AI technologies are being developed that transform traditional scans into visual maps offering deep clinical insight. Subtle features in even the noisiest images can now be sharpened and enhanced, making them easier to interpret. These improvements go beyond image enhancement alone: several distinct AI models, each with its own strengths, are driving this change and extending what is possible in MRI and CT imaging.
Convolutional Neural Networks (CNNs) play a prominent, pioneering role in understanding, refining, and visualizing complex visual data. These networks excel at detecting and amplifying minute features such as edges, textures, and contours. In practice, a CNN can outline the boundary of a tumor in an MRI scan with great precision, or convert a low-resolution CT image into a higher-resolution one. Their ability to recognize complex patterns and structures gives them an advantage in recovering fine detail where clarity is crucial, such as early tumor detection or subtle soft-tissue abnormalities. A CNN is built from several types of layers, which allow it to extract features from images automatically. The principal layers of a CNN are as follows:
Convolutional Layers: These layers apply convolution operations to the input image using filters (kernels) that slide over it, detecting simple features such as edges and textures. Each filter activates when its feature is present in the image, enabling the network to learn from patterns.
Activation Layers: These layers introduce non-linearity into the model, usually via the Rectified Linear Unit (ReLU) activation function, enabling the network to learn complex relationships in the inputs.
Pooling Layers: These layers follow the convolutional layers and downsample the feature maps while preserving the most significant information. Max pooling and average pooling are common techniques; they reduce the dimensionality of the feature maps, cutting the computational load and helping to avoid overfitting.
Fully Connected Layers: These layers appear at the very end of the network, where the features learned by earlier layers are combined to produce the final output, such as class labels or image reconstructions. Because every neuron is connected to every neuron in the preceding layer, the decision is based on all of the extracted features.
Ultimately, as convolutions are applied at successive levels, this hierarchical structure makes CNNs proficient at tasks demanding intricate image analysis, making them indispensable in medical imaging tools such as MRI and CT scanners.
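The layer mechanics above can be sketched in plain NumPy. This is a toy illustration of the three operations, not a trained network: the edge-detecting kernel is hand-chosen (a Sobel-style filter), whereas a real CNN would learn its kernels from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as deep-learning libraries use it)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise non-linearity (activation layer)."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A hand-chosen vertical-edge detector applied to a toy "scan":
image = np.zeros((8, 8))
image[:, 4:] = 1.0                      # bright region on the right half
kernel = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])
features = max_pool(relu(conv2d(image, kernel)))
print(features)   # each row is [0., 4., 0.] -- the pooled response along the edge
```

Stacking such convolution/activation/pooling stages, with learned rather than hand-chosen kernels, is exactly what gives a CNN its hierarchical feature extraction.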
Figure 2: CNN layers, including convolutional layers, pooling layers, and fully connected layers.
Generative Adversarial Networks (GANs) take a more innovative approach to image enhancement. A GAN consists of two networks working in opposition to one another, which together enable the generation of high-quality images:
Generator: The generator takes random noise as input and converts it into a synthetic image. Its aim is to produce images that could pass as real. By training to minimize the difference between generated images and real data, the generator progressively becomes adept at producing increasingly realistic outputs.
Discriminator: The discriminator acts as a critic of authenticity. It receives input images, both real and generated, and outputs the probability that a given image is real. Its sole objective is to classify images correctly as real or fake, which continually pushes the generator to improve its output.
This adversarial game allows GANs to produce remarkably clean images from noisy or incomplete data. CT imaging benefits greatly from GANs: low-dose scans, acquired at reduced radiation levels to minimize patient exposure, tend to be grainy, and GANs can reconstruct them so that physicians retain the full diagnostic information without exposing patients to unnecessary radiation.
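The adversarial objective can be illustrated numerically. The probabilities below are hypothetical discriminator outputs, not results from a trained model; the sketch only shows how the two binary cross-entropy losses pull in opposite directions.

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy between a predicted probability p and a target label."""
    eps = 1e-12
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

# Hypothetical discriminator outputs: probability that an image is "real".
d_real = np.array([0.9, 0.8, 0.95])   # on genuine scans   -> should be near 1
d_fake = np.array([0.2, 0.1, 0.3])    # on generated scans -> should be near 0

# The discriminator is trained to label real as 1 and fake as 0 ...
d_loss = (bce(d_real, 1).mean() + bce(d_fake, 0).mean()) / 2
# ... while the generator is trained to make the discriminator call fakes "real".
g_loss = bce(d_fake, 1).mean()

print(float(d_loss), float(g_loss))   # here g_loss >> d_loss: the generator must improve
```

Training alternates updates to the two networks; as the generator improves, `d_fake` rises and the losses converge toward an equilibrium.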
Figure 3: GANs, featuring a visual representation of the generator-discriminator loop.
Recurrent Neural Networks (RNNs) add yet another capability, particularly when motion over time must be captured. While most models treat an image as an isolated entity, RNNs process images as sequences, which suits the dynamic applications of MRI. In tasks such as blood-flow monitoring or muscle-activity tracking, RNNs provide a smooth, coherent view of how these signals evolve over time, yielding a much deeper understanding of physiological processes.
Autoencoders are the detail-oriented editors of the AI world. They compress complex data and reconstruct it in a way that highlights the most critical features while removing irrelevant noise. Autoencoders are especially good at capturing latent patterns, which is where their power lies when applied to MRI or CT scans. By cleaning the data, they can surface small variations that would disappear in busy, noisy images, helping doctors detect early signs of disease that might otherwise be missed.
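As a minimal sketch of this idea, a linear autoencoder with a small bottleneck is mathematically equivalent to projecting onto the top principal components, so its optimal weights can be read off from an SVD instead of trained. The data here are synthetic stand-ins, not real scans.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scans": 100 samples that truly live on a 2-D latent subspace,
# observed in 20 dimensions with additive noise.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 20))
clean = latent @ mixing
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# A linear autoencoder with a 2-unit bottleneck reduces to projecting the
# centred data onto its top-2 principal components (obtained via SVD).
mean = noisy.mean(axis=0)
_, _, vt = np.linalg.svd(noisy - mean, full_matrices=False)
encode_decode = vt[:2].T @ vt[:2]          # 20x20 projection matrix
denoised = (noisy - mean) @ encode_decode + mean

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)   # True: the bottleneck discards most of the noise
```

Nonlinear autoencoders used on real images generalize the same compress-then-reconstruct principle with learned, nonlinear encoders and decoders.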
AI Techniques in MRI Image Processing:
Convolutional Neural Networks are central to MRI image enhancement, especially with regard to resolution and quality. In MRI scanning, resolution is often limited by time constraints and patient discomfort, resulting in poor image quality. CNNs address this through super-resolution techniques that upscale low-resolution images into high-resolution ones. Using multiple layers of convolutional filters, the models learn to extract minute anatomical details, such as cortical folds and subtle tissue boundaries, that might not be readily apparent in low-resolution scans. This ability is particularly important in neuroimaging, where structural accuracy is crucial for mapping brain regions or detecting microstructural abnormalities. By upscaling MRI images to the best possible resolution, CNNs provide radiologists with sharper images and help ensure that patients do not lose the opportunity for early detection of their conditions.
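A hypothetical training pair for super-resolution can be simulated by degrading a high-resolution image, and simple interpolation serves as the classical baseline that a super-resolution CNN is trained to beat. The arrays below are random stand-ins for scans.

```python
import numpy as np

def downsample(img, f=2):
    """Simulate a low-resolution acquisition by f x f block averaging."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upscale_nearest(img, f=2):
    """Classical baseline: nearest-neighbour upsampling (pixel repetition)."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

rng = np.random.default_rng(1)
high_res = rng.random((8, 8))            # stand-in for a detailed scan
low_res = downsample(high_res)           # (4, 4): fine detail is averaged away
restored = upscale_nearest(low_res)      # (8, 8) again, but the detail is gone

residual = np.mean((restored - high_res) ** 2)
print(residual > 0)   # True: interpolation alone cannot recover the lost detail
```

A super-resolution CNN is trained on exactly such (low_res, high_res) pairs to predict the residual detail that interpolation leaves behind.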
Noise is another critical challenge in MRI imaging; it can obscure details and complicate interpretation. CNNs excel at denoising MRI scans by learning to distinguish random noise from genuine tissue characteristics. They are trained on datasets of paired images, one clean and one noisy, so that the network learns to remove unwanted noise while preserving the relevant anatomical features. For example, in low-field MRI, where signals are weak, CNNs can increase clarity significantly and enable better analysis of structures such as the brain's white matter. Such denoising is essential in conditions like multiple sclerosis and stroke, where clear visualization of lesions is not merely advantageous but necessary for treatment.
In addition, CNNs are used for motion artifact correction, a common issue in MRI caused by patient movement. Any movement during an MRI scan can produce blurring or ghosting artifacts that compromise image quality. CNNs learn from datasets that pair artifact-affected images with corrected versions, from which they can construct a cleaner output. This application is particularly valuable in pediatric imaging, where cooperation from the child can be limited and reducing repeated scans is a high priority.
Generative Adversarial Networks (GANs) bring a further level of sophistication to MRI imaging through their role in image synthesis and contrast enhancement. Unlike CNNs, which mostly improve visibility in existing images, GANs go a step further, with the potential to generate entirely new high-quality images from noisy or incomplete data. The generator network produces synthetic images, while the discriminator network judges them, pushing the generator toward ever more realistic output. This adversarial arrangement works particularly well for completing missing details in MRI scans that suffer from incomplete data due to rapid scanning protocols or a limited field of view. The synthetic data generated by a GAN can also increase the variability of AI training sets, helping models cope with varying imaging conditions.
GANs are also extremely powerful contrast enhancers. In MRI, contrast is essential for differentiating between types of soft tissue. GANs can be trained to maximize contrast in images of varying quality, bringing structures of critical importance into sharper view. This helps in examining soft-tissue structures in the brain or the liver, for instance, where normal and pathological areas must be distinguished. Tiny lesions or subtle vascular structures can be rendered more visibly, permitting greater diagnostic confidence and a better chance of early disease detection.
Figure 4: MRI scan comparison showing before and after AI enhancement, highlighting noise reduction, motion
artifact correction, and improved resolution.
Advancements in AI-based CT Image Processing:
Convolutional Neural Networks (CNNs) play a significant role in CT imaging by addressing the problem of low-dose CT optimization. Because radiation exposure must be kept to a minimum, especially in children and in patients requiring repeated scans, low-dose CT is widely used; however, low-dose scans are invariably noisy and of low quality, often too poor to support a meaningful diagnosis. CNNs have tackled this problem with great success by learning mappings from low-dose to high-dose images, effectively reconstructing high-quality, high-dose-like scans while exposing patients to minimal radiation. The mapping is performed through successive convolutional layers that automatically learn to detect and accentuate relevant structures such as bones, blood vessels, and tumors. Because the prominent features remain salient in the reconstructed images, CNNs enable high-quality diagnosis without compromising patient safety.
Metal artifact reduction is another important application of CNNs in CT imaging. Metal implants such as dental fillings and hip replacements often cause severe streak artifacts in CT images, preventing proper visualization of the surrounding tissues. CNNs can be trained to recognize and reduce these distortions, improving image quality and interpretation. The networks are trained on pairs of images, low-quality images affected by metal artifacts and high-quality images without them, so that they learn to separate genuine anatomical structures from artifacts. Radiologists thus gain clearer images to support their decisions, even in challenging cases involving metal-induced distortion.
Generative Adversarial Networks (GANs) mark a major change in CT imaging, especially for denoising low-dose CT scans and enhancing image quality. Because CT relies on X-rays, lowering the dose increases noise, which drowns out fine details and limits diagnosis. GANs can generate high-quality images from noisy low-dose inputs: the generator produces synthetic images resembling high-dose scans, while the discriminator compares them against high-quality references. This iterative process removes noise from low-dose CT scans while upholding diagnostic integrity, which is especially important in oncology, where clear visualization of tumor edges guides treatment decisions.
Another area where GANs excel is image fusion and reconstruction. In multimodal imaging, such as PET-CT, where both anatomical and functional information must be visible, GANs allow data from different modalities to be fused into a single unified, high-quality image. This fusion improves the visibility of complex conditions, letting radiologists see both the structural and the metabolic aspects of a disease in one image. GANs achieve this by realigning and combining details from the PET and CT scans, providing information crucial for diagnosis and personalized treatment planning.
Figure 5: CT scans before and after AI enhancement, illustrating noise reduction, motion artifact correction, and
improved resolution.
Comparative Analysis: AI Models vs. Traditional Techniques:
Given the rapid evolution of medical imaging and the need to evaluate diagnostic performance, identifying the differences between AI models and traditional techniques becomes necessary. This comparative analysis brings out the key dimensions along which the two approaches perform, examining performance metrics, efficiency versus accuracy, and clinical validity.
Performance metrics:
Determining the quality of medical images is of paramount importance. Two measures computed at numerous stages of processing are the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR).
SSIM compares local pixel patterns in terms of luminance, contrast, and structure. This is especially relevant in medical imaging, where even small differences in image quality can have clinical consequences. PSNR measures the ratio of the maximum signal power to the power of the corrupting noise. PSNR remains one of the most widespread measures, but for complex medical images it does not always correlate with perceived visual quality. Together, the two metrics provide an estimate of image quality that clinicians can use to assess the relative effectiveness of AI and traditional techniques.
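Both metrics are straightforward to compute. The sketch below implements PSNR and a single-window (global) simplification of SSIM; the full SSIM index averages the same formula over local windows, and the test images here are synthetic.

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB: higher means less corrupting noise."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0):
    """Single-window SSIM; the standard index averages this over local windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cxy + c2)) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                     # stand-in "ground truth"
noisy = np.clip(ref + 0.1 * rng.normal(size=ref.shape), 0, 1)  # corrupted version

print(round(float(psnr(ref, noisy)), 1), round(float(ssim_global(ref, noisy)), 3))
```

An identical pair scores SSIM = 1 exactly, so a denoised image is judged by how far it moves both scores back toward the clean reference.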
Efficiency and Accuracy:
Comparing AI with traditional image processing techniques, one of AI's chief advantages is efficiency and speed on large data sets. Deep learning models recognize patterns and features that traditional algorithms may miss. For instance, noise reduction and artifact removal take less time when performed by AI, allowing clinical workflows to run smoothly.
On the other hand, conventional techniques retain their worth because of their interpretability and simplicity. Filter-based noise removal and histogram equalization are computationally cheap, easy to use, and predictable in their outcomes, which keeps them essential in certain clinical contexts. The goal is to strike the right balance between the sophistication of AI models and the reliability of traditional methods.
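A classical filter-based method of the kind described here can be written in a few lines. The 3x3 mean filter below is a minimal example on synthetic data: it is cheap, predictable, and fully interpretable, but it blurs edges indiscriminately, which is exactly the trade-off learned models try to avoid.

```python
import numpy as np

def mean_filter3(img):
    """Classical 3x3 mean filter with edge-replicated padding."""
    padded = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))     # smooth gradient as "tissue"
noisy = clean + 0.1 * rng.normal(size=clean.shape)
filtered = mean_filter3(noisy)

# Averaging suppresses the noise variance on this smooth region:
print(np.mean((filtered - clean) ** 2) < np.mean((noisy - clean) ** 2))   # True
```

On smooth regions the filter works well; at sharp anatomical boundaries the same averaging would smear detail, which is where learned, content-aware denoisers gain their advantage.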
Clinical Validity:
Assessing the real-world use of AI in clinical MRI and CT workflows is essential. Many studies have shown positive clinical effects of AI, with improved diagnostic capacity and workflow efficiency. One example is AI-assisted reconstruction of low-dose CT images, which improved image quality to the point that diagnostic capability was nearly equivalent to that of conventionally acquired standard-dose images, improving patient safety through reduced radiation exposure.
However, data privacy issues, the need for vast volumes of training data, and difficulties integrating with existing systems remain stumbling blocks to clinical adoption. Collaborative work between clinical technologists and academic researchers remains essential to overcoming these problems.
Evaluating performance metrics, efficiency and accuracy, and clinical validity reveals substantial differences between AI models and conventional techniques. We believe in a collaborative approach in which both sets of methodologies contribute their inherent strengths to optimize patient care and diagnostic accuracy.
Figure 6: Quantitative evaluation results. The models are assessed using PSNR and SSIM as performance metrics,
with higher values of PSNR and SSIM indicating better model performance.
Challenges and Future Directions in AI-Based Medical Imaging:
AI technologies continue to reshape the landscape of medical image science; however, it is paramount that the pressures surrounding their design and integration be well understood and addressed. These challenges are now major targets that must be met if AI is to fulfil its promise of improving accuracy, diagnosis, and patient outcomes. They range from data limitations and generalization and bias concerns to the practicalities of implementing AI in clinical practice.
Data Limitations:
One of the greatest challenges in developing robust AI models for medical imaging is the unavailability of large, high-quality datasets for training. Modern imaging produces a deluge of data;
however, gathering a broad spectrum of clinically relevant imaging datasets from one or more institutions remains cumbersome. Regulations such as HIPAA and patient-confidentiality laws, valuable as they are, impose strict constraints on data sharing among institutions. Differences in imaging protocols across hospitals and clinics further complicate efforts at data standardization.
One emerging methodology worth watching is federated learning, which offers a way out of this problem. Federated learning allows an AI model to be trained across several institutions without any need to exchange raw patient data. Instead, a local model is trained at each institution, and only the learned parameters are aggregated into a global model; no sensitive patient information enters the aggregate. The advantage is twofold: it protects patients, and it permits the use of varied datasets, which is ideal for building AI models that generalize across diverse populations and imaging settings. This lets numerous health organizations collaborate on AI solutions while maintaining data protection and complying with confidentiality mandates.
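The aggregation step at the heart of federated averaging can be sketched as follows. The hospital parameters and dataset sizes are hypothetical, and a real system would repeat this over many communication rounds after each round of local training.

```python
import numpy as np

def fed_avg(local_params, n_samples):
    """Aggregate locally trained parameter vectors, weighted by local dataset size.
    Only model parameters leave each institution -- never patient images."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, local_params))

# Hypothetical parameter vectors trained independently at three hospitals:
hospital_params = [np.array([1.0, 2.0]),
                   np.array([3.0, 4.0]),
                   np.array([5.0, 6.0])]
hospital_sizes = [100, 100, 200]      # larger sites get proportionally more influence

global_params = fed_avg(hospital_params, hospital_sizes)
print(global_params)   # [3.5 4.5]
```

The updated global parameters are then broadcast back to every site for the next round of local training, so the shared model improves without any raw data leaving an institution.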
Generalization and Bias:
Most AI models trained on narrow, homogeneous datasets struggle to adapt to different clinical settings. An illustrative case is an AI model developed on data from a specific hospital that proved useful there but could not reproduce its results for patients with different demographics or on different imaging equipment. This limited generalizability raises concerns about how far AI systems can safely and reliably support the delivery of clinical care. This is particularly important given how quickly patient demographics, imaging techniques, and procedure types change in clinical practice.
An extended example comes from AI designed to detect pneumonia on chest X-rays: such models have performed impressively on their training data, yet failed when applied to images gathered at other clinical centers. The models had latched onto idiosyncrasies of each hospital's screening routines and equipment, which left traces in the visual appearance of the images, rather than learning the appearance of pneumonia itself, so performance at one institution did not carry over to another. This drives home the point that diverse datasets must be used for training, with scrupulous attention to validation, to prove that AI models can be transplanted into diverse existing settings.
Research into bias in AI models is ongoing, and potential remedies are being sought. There is increasing emphasis on building fairer, broader datasets and on designing fair algorithms that address bias in both the data and its processing. In addition, external validation of AI across a wider range of institutions and patient populations will help ensure that performance holds up, irrespective of the population, rather than being overstated.
Integration into Clinical Practice:
Bringing AI into everyday imaging workflows comes with a set of real-world problems. Regulatory requirements, such as extensive testing and approval from bodies like the FDA, can make it difficult to move AI technology into clinics. In addition, many hospitals and medical centers lack the infrastructure to add AI tools to their current imaging systems, and getting AI platforms to work with legacy medical equipment is a substantial technical challenge.
Beyond the technical and legal hurdles, there are also questions of trust. Many doctors may hesitate to rely on AI for important diagnostic decisions if they cannot see how the algorithms work. Building that trust requires explaining how these systems operate and what they can and cannot do in real clinical situations. AI models that can show doctors why they make certain predictions help build faith in these tools.
We also need solid training programs for radiologists and other healthcare workers so that they know how to use AI tools well. Rather than expecting AI to replace human expertise, we should see it as a way to strengthen clinical decision-making, leading to more accurate and faster diagnoses.
Conclusion:
Artificial intelligence is already showing great promise in medical imaging, giving doctors better insight into the human body and increasing diagnostic accuracy through innovations in MRI and CT scanning. State-of-the-art models such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have already been used to show how
AI can reduce image noise, improve clarity, and validate low-dose scans with reliability similar to traditional methods. Yet despite these extraordinary advances, the arduous task of making AI an ordinary part of everyday clinical practice remains.
Transformational systems will require further research into even more intelligent, adaptive algorithms that can cleanly address real-world problems in medical imaging. Many current AI models require huge stores of labeled (annotated) data, which can be lengthy and expensive to procure systematically. Investigating alternatives such as unsupervised learning may open a path to robust AI systems that are less reliant on labeled data. Equally important is increasing the transparency of AI models, required not only so that clinicians can understand how decisions are reached but also to nurture trust in these systems. Looking ahead, integrating AI into routine clinical workflows will demand more than technical innovation: a collaborative effort among AI experts, health professionals, and regulatory authorities will be critical to producing safe, reliable, and user-friendly solutions. Overcoming these challenges would enable AI to bring about even greater changes in medical imaging and move patient care toward greater speed, safety, and personalization.
Ultimately, as AI continues to evolve, it is set to rewrite the script of medical imaging and, with it, healthcare, improving patient safety globally.