
A Project Report

on
A DEEP LEARNING-BASED SYSTEM FOR FETAL ANOMALY DETECTION USING
ULTRASOUND IMAGES: LEVERAGING TRANSFER LEARNING WITH CNNs
Submitted to
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY ANANTAPUR, ANANTHAPURAMU

In Partial Fulfillment of the Requirements for the Award of the Degree of


BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE & ENGINEERING (DATA SCIENCE)

Submitted By

S. GOWRI HARSHITHA - 21691A3227

T.KULADEEP - 21691A3245

K.G.BALAJI - 22695A3201

K.MUKESH - 21691A3260

Under the Guidance of

Mr. Kiran Kumar M., M.Tech., (Ph.D.)

Assistant Professor

Department of Computer Science & Engineering (Data Science)

MADANAPALLE INSTITUTE OF TECHNOLOGY & SCIENCE


(UGC – AUTONOMOUS)
(Affiliated to JNTUA, Ananthapuramu)
(Accredited by NBA, Approved by AICTE, New Delhi)
AN ISO 21001:2018 Certified Institution
P. B. No: 14, Angallu, Madanapalle – 517325
2021-2025

CHAPTER-1
INTRODUCTION
1.1 MOTIVATION

Prenatal care plays a critical role in ensuring the health and development of a fetus. Among
various diagnostic tools, ultrasound imaging stands out due to its non-invasive nature and
real-time visualization. Detecting fetal head abnormalities early on can help prevent severe
developmental complications or prepare for timely interventions. However, manual
interpretation of ultrasound images can be subjective and prone to errors. With the rise of AI
and deep learning, automating this process can assist radiologists in making quicker, more
accurate diagnoses, ultimately improving maternal-fetal outcomes.

 Accurate and timely detection of fetal anomalies in ultrasound images is crucial for
prenatal care, enabling timely interventions and improving potential outcomes.
 Manual interpretation of ultrasound images is often time-consuming, subjective, and
requires specialized expertise, which can lead to diagnostic delays and
inconsistencies.
 Deep learning, particularly CNNs, has shown great promise in medical image
analysis, offering the potential for automated, efficient, and objective detection of
abnormalities.
 Transfer learning, a technique that utilizes pre-trained models, can significantly
improve the performance of CNNs, especially when dealing with limited medical
image data.

1.2 PROBLEM DEFINITION

The identification of fetal head anomalies in ultrasound images is a complex task, often
requiring expert-level medical knowledge. Variability in fetal position, image quality, and
subtle differences in abnormality presentation further complicate this process. The challenge
lies in developing a robust, automated system capable of accurately classifying different
types of fetal head abnormalities from ultrasound images, thus supporting clinicians in their
diagnostic workflows.

1.3 OBJECTIVE OF THE PROJECT

The main objectives of this project are as follows:

 To develop a deep learning-based system capable of detecting fetal head abnormalities from ultrasound images.
 To build a custom ResNet50 model from scratch (without pretrained weights) tailored
for multiclass classification of abnormalities.
 To preprocess and organize ultrasound data into training, validation, and testing sets
for optimal model performance.
 To train the model on key fetal head anomaly categories (e.g., arachnoid cyst,
hemorrhage, etc.) using image augmentation techniques.
 To evaluate the model’s performance using metrics such as accuracy and loss, and
visualize results through plots.
 To create a system that can predict the type of abnormality in a given ultrasound
image and assist in early diagnosis.
 To offer a reliable decision-support tool for radiologists and clinicians to enhance
prenatal screening efficiency.

 Developing a CNN model capable of accurately classifying ultrasound images.


 Utilizing pre-trained CNN models and transfer learning techniques to improve
performance and reduce training time.
 Evaluating the system's performance using appropriate metrics.
 Providing a foundation for a system that could potentially assist clinicians in the
diagnosis of fetal anomalies.

1.4 LIMITATIONS OF PROJECT


While the model shows promising results, there are a few limitations:
 The system's performance is highly dependent on the quality and size of the dataset;
limited or imbalanced data may affect model accuracy.
 Ultrasound images can vary significantly based on equipment, operator skill, and fetal
movement, which may reduce model generalizability.
 The current model focuses only on classification and does not provide detailed
segmentation or localization of abnormalities.
 The system is not intended to replace professional medical advice or diagnosis but
only to assist radiologists as a decision-support tool.

1.5 ORGANIZATION OF DOCUMENTATION

1.5.1 Feasibility Study


The feasibility study analyzes the viability of the project from various perspectives to ensure
successful development and implementation:
 Technical Feasibility:
The system uses TensorFlow and Python, which are mature platforms for deep
learning. Building a ResNet50 model from scratch ensures flexibility and control over
architectural choices, making the solution technically feasible.
 Operational Feasibility:
The model can assist healthcare professionals by providing automated detection of
fetal head abnormalities, improving diagnostic accuracy and reducing manual
interpretation workload. It can be integrated into existing ultrasound analysis
workflows.
 Economic Feasibility:
This project utilizes open-source tools and publicly available datasets, ensuring
minimal cost for development and deployment. Only computational resources (e.g.,
GPU/TPU) are required, which are accessible through cloud services like Google
Colab.
 Legal and Ethical Feasibility:
Since the system is built using anonymized medical images and intended for clinical
support, it aligns with ethical standards. However, it must be validated extensively
before being used in real-world medical diagnosis.
CHAPTER 2
LITERATURE SURVEY
2.1 INTRODUCTION

Prenatal care plays a crucial role in ensuring the health and well-being of both the mother and

fetus during pregnancy. Among the various diagnostic tools available, ultrasound imaging

stands out as a non-invasive, cost-effective, and widely used method for assessing fetal

development and detecting potential anomalies. However, manual interpretation of fetal

ultrasound images requires a high level of expertise and is subject to limitations such as

inter-observer variability, inconsistencies in image quality, and human error. These

limitations can lead to delayed or inaccurate diagnosis, ultimately affecting clinical outcomes.

One of the most significant challenges in obstetric imaging is the detection and
classification of fetal head anomalies, which may include conditions such as
hydrocephalus, anencephaly, encephalocele, and others. Early and accurate identification
of such anomalies is vital for making informed clinical decisions, planning interventions, and
offering parental counseling.
To overcome the inherent limitations of manual diagnosis, artificial intelligence (AI) and
deep learning techniques have emerged as powerful tools in medical image analysis. Deep
learning models, particularly Convolutional Neural Networks (CNNs), have shown
remarkable success in tasks such as image classification, segmentation, and object detection
in healthcare domains.
This project aims to design and develop a deep learning-based system for automated fetal
anomaly detection using ultrasound images. The system leverages transfer learning with
ResNet50, a robust CNN architecture pre-trained on a large-scale image dataset (ImageNet),
to extract meaningful features from fetal head ultrasound images and classify them into
normal or anomalous categories.
The system further extends its capabilities by implementing multi-class classification for
identifying specific types of anomalies, offering higher diagnostic granularity. In addition, a
threshold-based scoring mechanism is used to enhance anomaly detection reliability,
especially in binary classification.
2.2 EXISTING SYSTEM

Traditional fetal anomaly detection systems rely heavily on manual interpretation of


ultrasound images by trained sonographers or radiologists. This process, while effective in
skilled hands, is time-consuming, subjective, and prone to human error. Manual diagnosis
often leads to inconsistencies due to varying levels of experience and interpretation among
clinicians. Some early computer-assisted diagnostic tools use classical machine learning
algorithms like Support Vector Machines (SVM) or decision trees, which depend on
handcrafted features such as shape, texture, or edge detection. These models typically support
only binary classification (normal vs anomaly) and perform poorly when applied to real-
world, noisy, or low-quality ultrasound scans due to their limited learning capacity.
Recent advancements have introduced deep learning models for fetal image analysis, with
Convolutional Neural Networks (CNNs) showing improved performance in classifying
medical images. However, most existing deep learning systems are trained from scratch on
small datasets, leading to overfitting and limited generalization. Additionally, they often
focus only on anomaly detection without identifying specific types or offering segmentation
outputs. Commercial ultrasound systems may include automated measurements but lack AI-
driven decision-making capabilities. Overall, existing systems fall short in delivering high-
accuracy, multi-class classification and explainability, highlighting the need for a more
robust, deep learning-based solution as proposed in this project.

2.3 DISADVANTAGES OF EXISTING SYSTEM


 Complexity in Interpretation : Manual interpretation of fetal ultrasound images
requires high expertise and clinical experience. Even small variations in fetal positioning
or imaging angle can affect diagnosis, making the process complex and error-prone.
 Limited Scope of Detection : Many existing models are built for binary classification,
identifying whether an anomaly exists or not. These systems lack the ability to classify
the specific type of fetal anomaly, which is essential for appropriate clinical decision-
making.
 Lack of Automation : Traditional systems often involve semi-automated workflows,
requiring manual steps for feature extraction or pre-processing. This hinders real-time
deployment and increases the time required for diagnosis.
 Insufficient Generalization : Models trained on small or specific datasets often fail to
generalize to other clinical settings. They may perform poorly on new or diverse datasets
due to variations in equipment, image quality, or demographic differences.
 No Segmentation Support : Most systems provide only classification output and do not
offer segmentation of the anomaly. Without clear localization, clinicians cannot
determine the exact region affected or the size of the abnormality, limiting clinical
usefulness.
 Dependence on Handcrafted Features : Classical machine learning algorithms rely on
manually engineered features such as shape or texture descriptors. These handcrafted
features are often inadequate to capture subtle and complex variations found in
ultrasound images.
 Inability to Handle Noisy Data : Ultrasound images are often affected by noise,
shadows, and low contrast. Existing models do not include robust noise-handling
mechanisms, resulting in reduced accuracy when processing real-world data.
 Lack of Real-Time Decision-Making : Many systems are not integrated into clinical
workflows and do not support live predictions or user-friendly interfaces. This makes
them unsuitable for time-sensitive environments such as emergency rooms or prenatal
screenings.
2.4 PROPOSED SYSTEM
The proposed system is a deep learning-based diagnostic tool specifically developed for the
detection of fetal head anomalies using ultrasound images. Unlike general-purpose
anomaly detection models, this system is tailored to focus on the unique structural patterns
and features of the fetal head. It leverages a ResNet50-based convolutional neural network
(CNN) architecture, which has been built from scratch without relying on pretrained
weights, to ensure the model is optimized specifically for the characteristics present in fetal
ultrasound scans.
The system begins with preprocessing and organizing the input dataset by categorizing
ultrasound images into relevant fetal head anomaly classes. These images are then augmented
to increase data diversity and improve model generalization. After preprocessing, the images
are passed through a custom-built ResNet50 model. The model consists of convolutional and
identity blocks that enable deep feature extraction, capturing both low-level and high-level
structural abnormalities.
The architecture concludes with fully connected layers followed by a softmax classifier that
predicts the type of anomaly present in the image. The model is trained and validated on
labeled datasets and later evaluated on unseen test data to measure its performance using
metrics such as accuracy, precision, recall, and loss. Additionally, the system includes a
feature for single image testing, allowing clinicians or users to upload an image and receive
immediate classification results along with visual feedback.
2.5 ADVANTAGES OVER EXISTING SYSTEM
 Automated Feature Extraction : The proposed system uses deep learning to
automatically extract important features from ultrasound images, eliminating the need for
manual feature engineering and reducing human error.
 Multi-Class Classification Capability : Unlike existing systems that only detect the
presence of an anomaly, the proposed model classifies different types of fetal head
anomalies such as hydrocephalus, anencephaly, and encephalocele, offering better
clinical insights.
 Improved Accuracy and Robustness : By using ResNet50 with transfer learning and
data augmentation, the model achieves higher accuracy and generalizes well to different
imaging conditions, equipment, and fetal positions.
 Noise Tolerance : The system performs well even on noisy or low-quality ultrasound
images due to advanced preprocessing and the robustness of CNN-based architectures.
 Reduced Dependency on Expertise : The system assists in anomaly detection without
the constant need for experienced radiologists, making it especially useful in rural or
resource-limited healthcare settings.
 Threshold-Based Decision Making : A statistically derived threshold helps to enhance
decision-making precision, allowing the system to detect borderline or subtle anomalies
more accurately.
 Faster and Scalable Diagnosis : The deep learning model processes and classifies
images quickly, enabling real-time deployment and scalability for large-scale prenatal
screening programs.
 Integration with Clinical Workflow : The proposed system can be integrated into
existing healthcare setups and ultrasound machines, enhancing the efficiency of prenatal
diagnosis and decision-making.
CHAPTER 3
ANALYSIS

3.1 INTRODUCTION
System analysis is a critical phase in the software development life cycle, where the
feasibility, structure, and requirements of the proposed system are examined in depth. In this
project, the analysis focuses on developing an AI-powered model for fetal anomaly detection
using ultrasound images. The system is designed to classify fetal head anomalies with high
accuracy, using a deep learning model based on transfer learning. A thorough understanding
of both functional and non-functional requirements is essential to ensure the successful
implementation and deployment of the model in real-world healthcare environments.
This chapter outlines the technical specifications needed to build the system, including the
hardware and software environments. The analysis also presents a high-level overview of the
system's workflow using a content diagram, helping to visualize the sequence of operations
from data collection and preprocessing to model prediction and output interpretation. By
analyzing the system in detail, we ensure that it is technically feasible, resource-efficient, and
capable of addressing the limitations of existing manual and semi-automated diagnostic
methods.
This project aims to leverage deep learning techniques to automatically detect fetal head
anomalies from ultrasound images. Given the complexity and variation in fetal anomalies,
this system intends to assist medical professionals by providing a second opinion that
enhances the reliability of early diagnosis. The model is based on the ResNet50 architecture
with additional layers to improve performance across multiple fetal head anomaly classes.
3.2 REQUIREMENT SPECIFICATIONS

3.2.1 HARDWARE REQUIREMENTS

S. No. Component Specification

1 Processor Intel Core i5 / i7 or equivalent

2 RAM Minimum 8 GB (16 GB recommended)

3 Hard Disk 50 GB free space

4 GPU NVIDIA GPU with CUDA support (e.g., GTX 1650)

5 Monitor Standard HD Display

6 Internet Required for downloading datasets and libraries

Table 3.2.1 Hardware Requirements

3.2.2 SOFTWARE REQUIREMENTS


S. No. Software Specification / Version

1 Operating System Windows 10/11, Ubuntu, or macOS

2 Programming Language Python 3.7 or higher

3 Development Platform Jupyter Notebook / Google Colab / VS Code

4 Frameworks & Libraries TensorFlow, Keras, NumPy, Pandas, OpenCV, Matplotlib, Scikit-learn

5 Dataset Tools Kaggle API, Roboflow

6 Others Git for version control

Table 3.2.2 Software Requirements

3.3 CONTENT DIAGRAM OF PROJECT


The content diagram of the proposed system represents the logical workflow and functional
modules involved in detecting fetal head anomalies using deep learning techniques. The
process starts with dataset acquisition from sources like Kaggle and Roboflow, containing
labeled ultrasound images. These images undergo preprocessing steps including resizing,
normalization, and augmentation to ensure consistency and improve the model’s robustness.
The preprocessed data is then passed to the model selection phase, where a ResNet50
architecture, pre-trained on the ImageNet dataset, is utilized through transfer learning to
extract meaningful features relevant to fetal anomalies.
Following model selection, the training and validation phases help in optimizing the network
to differentiate between normal and anomalous fetal head images. Once trained, the model
performs classification in either binary (normal vs anomaly) or multi-class mode (e.g.,
hydrocephalus, anencephaly). A threshold-based decision mechanism is implemented to
enhance classification precision by computing a cutoff value using statistical metrics. The
system concludes with a visualization module that displays the classification results and
prediction scores, supporting real-time clinical interpretation. This structured flow ensures
efficiency, modularity, and potential integration into real-world medical applications.
Fig 3.3.1 Content Diagram of the Project
CHAPTER 4
DESIGN

4.1 INTRODUCTION
The design phase plays a vital role in the software development life cycle (SDLC) as it
bridges the gap between the system requirements and actual system implementation. It
involves the conceptualization and planning of the software structure, its components,
interfaces, and data flow. The primary goal of this phase is to convert user requirements into
a systematic representation that guides developers during implementation.
In the context of this fetal health prediction project, the design phase helps in mapping out
how the system will function—from data input by the user, through internal processing using
machine learning models, to the final output prediction. This phase covers various design
elements including UML diagrams for visual modeling, architectural layouts to represent the
structure of components, and module breakdowns to understand functionality distribution. A
well-structured design ensures efficiency, scalability, maintainability, and accuracy of the
system.
The design phase translates the requirements gathered into a blueprint for building the
system. In this project, the focus is on creating a well-structured, modular, and scalable
design to support deep learning-based classification of fetal head anomalies. The system
design involves User Interface (UI) design, system architecture, module interaction, and the
UML representation of functional behavior.
4.2 UML DIAGRAM:

Unified Modeling Language (UML) is used to visually represent the behavior


and structure of the system. The following UML diagrams have been designed
for this project:
4.2.1 Use Case Diagram
The Use Case Diagram represents the interactions between the user and the
system. The main actor is the Doctor/User, and the use cases include uploading
an image, viewing results, and checking model performance.
Fig 4.2.1 UML Diagram
Fig 4.2.2 Use Case Diagram

Fig 4.2.3 Sequence Diagram

4.3 SYSTEM ARCHITECTURE:

The system follows a Client-Server Architecture and consists of the following


components:
 Frontend: Developed using Streamlit, it allows users to upload images and view prediction results (a minimal interface sketch is given after Fig 4.3.1).
 Backend: Handles loading the trained deep learning model, preprocessing
the image, performing predictions, and sending the result back to the
frontend.
 Model: A trained ResNet50 model, customized for 10 fetal anomaly
classes.
 Dataset: Ultrasound images categorized into 10 medical classes such as
Normal, arachnoid cyst, hemorrhage, etc.
Fig 4.3.1 Architecture Diagram
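The client-server flow above can be illustrated with a minimal Streamlit sketch. This is a simplified illustration rather than the project's actual code: the model file name fetal_resnet50.h5 and the class-name list are assumed placeholders, and the preprocessing simply mirrors the 224x224 resize and [0, 1] scaling described in Section 4.4.

```python
# minimal_app.py - illustrative Streamlit frontend (file/model/class names are hypothetical)
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

MODEL_PATH = "fetal_resnet50.h5"   # placeholder path to the trained model
CLASS_NAMES = ["normal", "arachnoid cyst", "hemorrhage"]  # placeholder: the 10 class names in training order

@st.cache_resource
def load_model():
    # Load the trained Keras model once and reuse it across requests
    return tf.keras.models.load_model(MODEL_PATH)

st.title("Fetal Head Anomaly Detection")
uploaded = st.file_uploader("Upload an ultrasound image", type=["png", "jpg", "jpeg"])

if uploaded is not None:
    # Preprocess exactly as during training: 224x224 RGB, pixels scaled to [0, 1]
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded image")
    x = np.asarray(image, dtype=np.float32)[np.newaxis, ...] / 255.0

    probs = load_model().predict(x)[0]   # softmax scores over the classes
    idx = int(np.argmax(probs))
    st.write(f"Prediction: {CLASS_NAMES[idx]} (confidence {probs[idx]:.2f})")
```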

4.4 MODULE DESIGNING AND ORGANIZATION:


About Dataset:
The proposed system utilizes two specialized datasets tailored for different aspects of fetal
anomaly detection. The first dataset, “Fetal Head Ultrasound Dataset for Image
Segmentation” from Kaggle, comprises ultrasound images of fetal heads with corresponding
segmentation masks that delineate anatomical boundaries. This dataset is highly valuable for
training the segmentation model to locate and highlight the fetal head region accurately. The
second dataset, “Classification of Fetal Brain Abnormalities”, sourced from Roboflow
Universe, contains labeled ultrasound images categorized into multiple classes such as
normal, hydrocephalus, anencephaly, and encephalocele. This dataset is instrumental in
training the classification model to distinguish between normal and abnormal fetal head
conditions. The combination of both datasets enables the development of a robust and
comprehensive system capable of handling both segmentation and classification tasks.
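Assuming the datasets are fetched programmatically with the tools listed in Table 3.2.2 (Kaggle API and Roboflow), the download step might look like the sketch below; the dataset slug, workspace and project names, and API key are placeholders, not the actual identifiers of the two datasets named above.

```python
# Illustrative dataset download (slugs, names, and keys are hypothetical placeholders)
from kaggle.api.kaggle_api_extended import KaggleApi
from roboflow import Roboflow

# Kaggle: fetal head ultrasound segmentation dataset (slug is a placeholder)
api = KaggleApi()
api.authenticate()  # expects credentials in ~/.kaggle/kaggle.json
api.dataset_download_files("owner/fetal-head-ultrasound", path="data/segmentation", unzip=True)

# Roboflow Universe: fetal brain abnormality classification dataset (names are placeholders)
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("example-workspace").project("fetal-brain-abnormalities")
dataset = project.version(1).download("folder")  # exported as class-labelled image folders
```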
Image Scaling and Normalization:
To ensure uniformity and compatibility with the pre-trained ResNet50 model, all images are
resized to 224×224 pixels. Normalization is applied by scaling the pixel values to the range
[0, 1], which improves model convergence during training. Uniform image dimensions help
in reducing computational complexity and ensure that feature maps generated by the CNN
layers remain consistent across samples. This scaling and normalization process forms a
crucial preprocessing step before the data enters the model pipeline.
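A minimal sketch of this resizing and normalization step, assuming OpenCV is used to read the images (the function name is illustrative):

```python
import cv2
import numpy as np

def load_and_preprocess(image_path: str) -> np.ndarray:
    """Read one ultrasound image, resize to 224x224, and scale pixels to [0, 1]."""
    image = cv2.imread(image_path, cv2.IMREAD_COLOR)   # BGR uint8 array
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)      # convert to RGB channel order
    image = cv2.resize(image, (224, 224))               # match the ResNet50 input size
    return image.astype(np.float32) / 255.0             # normalize to the [0, 1] range
```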
Data Preprocessing and Augmentation:
To improve the model’s generalizability and avoid overfitting, a range of data augmentation
techniques are employed. These include random rotations, width and height shifts, zooming,
horizontal flipping, and shearing. These transformations generate new, slightly modified
versions of the input images, thereby increasing dataset diversity without the need for
additional labeled data. This step is especially critical due to the relatively limited size of
medical imaging datasets, and it allows the model to learn more invariant and discriminative
features. For segmentation masks, corresponding transformations are applied to maintain
spatial alignment with the augmented images.
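The augmentation step described above maps naturally onto Keras' ImageDataGenerator; the exact transformation ranges below are illustrative choices, not the report's tuned values.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline mirroring the transformations described above;
# rescale performs the [0, 1] normalization at load time.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,        # random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    zoom_range=0.1,           # random zoom
    shear_range=0.1,          # shearing
    horizontal_flip=True,     # horizontal flips
)

train_generator = train_datagen.flow_from_directory(
    "data/train",             # placeholder path: one sub-folder per anomaly class
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)
```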
Dataset Splitting:
After preprocessing, the dataset is divided into training, validation, and testing sets. The
training set constitutes the majority of the samples and is used to train the model by learning
to identify patterns and extract relevant features. The validation set helps in tuning the model
hyperparameters and enables the use of early stopping to prevent overfitting. The test set,
reserved for final evaluation, is used to assess model performance on unseen data and plays a
critical role in threshold-based binary classification for anomaly detection.
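A sketch of the splitting step, assuming the images are stored in one folder per class and scikit-learn's train_test_split is used to produce stratified subsets; the 80/10/10 ratio is an illustrative assumption.

```python
import os
from sklearn.model_selection import train_test_split

def split_dataset(image_dir: str):
    """Split image file paths into train/validation/test lists (80/10/10 is illustrative)."""
    paths, labels = [], []
    for cls in sorted(os.listdir(image_dir)):                # one sub-folder per class
        for name in os.listdir(os.path.join(image_dir, cls)):
            paths.append(os.path.join(image_dir, cls, name))
            labels.append(cls)
    # Carve out 20% for validation + test, stratified so class balance is preserved
    train_p, rest_p, train_l, rest_l = train_test_split(
        paths, labels, test_size=0.2, stratify=labels, random_state=42)
    # Split the remainder evenly into validation and test sets
    val_p, test_p, val_l, test_l = train_test_split(
        rest_p, rest_l, test_size=0.5, stratify=rest_l, random_state=42)
    return (train_p, train_l), (val_p, val_l), (test_p, test_l)
```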

Algorithms Used :

ResNet50 with Transfer Learning:


A pre-trained ResNet50 model, originally trained on the ImageNet dataset, is used as the
backbone of the proposed system. Transfer learning is applied by removing the final layers of
the original model and replacing them with custom layers suitable for the specific
classification tasks in this project. Initially, the base layers of ResNet50 are frozen to preserve
pre-trained weights, allowing the new layers to adapt to the fetal ultrasound domain.
Subsequently, selective fine-tuning is applied to improve performance. This approach
drastically reduces training time and improves accuracy, particularly on small datasets, by
leveraging pre-existing learned visual features.
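A minimal sketch of the transfer-learning setup described above, using the Keras ResNet50 application with ImageNet weights; the size of the added dense layer, the dropout rate, and the learning rate are illustrative assumptions rather than the report's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 10  # number of fetal head anomaly categories used in this report

# ResNet50 backbone pre-trained on ImageNet, with its original classification head removed
base_model = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # freeze base layers initially; fine-tune selectively later

model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # custom layers adapted to the ultrasound domain
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # multi-class output
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```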
Threshold-Based Anomaly Classification:
After model training, prediction scores are collected from the test set. A threshold value is
computed using the mean and standard deviation of these scores to enhance anomaly
detection accuracy. Any score below this threshold is classified as an anomaly, while higher
scores indicate normal conditions. This statistical method allows the system to better
distinguish borderline cases and refine its binary classification outputs.
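The thresholding idea can be sketched as follows. The report only states that the mean and standard deviation of the prediction scores are used, so the one-standard-deviation offset below is an assumption.

```python
import numpy as np

def compute_threshold(scores: np.ndarray) -> float:
    """Derive a cutoff from the mean and standard deviation of test-set prediction scores.

    The mean-minus-one-std offset is an illustrative choice; scores below the cutoff
    are flagged as anomalies, higher scores as normal.
    """
    return float(scores.mean() - scores.std())

def classify(scores: np.ndarray, threshold: float) -> np.ndarray:
    # 1 = anomaly (score below threshold), 0 = normal
    return (scores < threshold).astype(int)

# Example: scores collected from model.predict(...) on the test set (toy values)
test_scores = np.array([0.91, 0.87, 0.42, 0.88, 0.35])
thr = compute_threshold(test_scores)
print(thr, classify(test_scores, thr))
```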
Convolutional Neural Networks (CNN):
The CNN architecture in this project serves as the foundation for both segmentation and
classification. In the classification workflow, CNN layers extract high-level spatial features
from the input ultrasound images. These features are passed through pooling layers to reduce
dimensionality and then through fully connected layers for final classification. During
training, Binary Crossentropy is used for binary tasks (normal vs anomaly), and Categorical
Crossentropy is used for multi-class tasks (e.g., hydrocephalus, anencephaly). The CNN’s
ability to learn hierarchical representations of input images allows for highly accurate
predictions and supports real-time clinical decision-making.
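The choice of output layer and loss for the two settings can be summarised in a small helper; this is a sketch of the convention described above, not the project's code.

```python
from tensorflow.keras import layers

def build_head(num_classes: int):
    """Return the output layer and loss used for the binary and multi-class settings."""
    if num_classes == 2:
        # Binary task (normal vs anomaly): one sigmoid unit with binary crossentropy
        return layers.Dense(1, activation="sigmoid"), "binary_crossentropy"
    # Multi-class task (e.g., hydrocephalus, anencephaly, ...): softmax with categorical crossentropy
    return layers.Dense(num_classes, activation="softmax"), "categorical_crossentropy"

head, loss = build_head(10)
print(loss)  # -> categorical_crossentropy
```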
CHAPTER 5
IMPLEMENTATION AND RESULTS

5.1 INTRODUCTION:

The implementation phase is where the system design is translated into a working software
model. This phase involves the practical realization of the modules designed in the earlier
stages, such as data preprocessing, model training, and prediction. For this project, the
implementation involves using Python with libraries such as NumPy, OpenCV, TensorFlow,
and Keras. The system integrates Support Vector Machines for preprocessing and image scaling, and Convolutional Neural Networks (CNN) for classification of fetal head anomalies.
The results of the model are evaluated based on accuracy, prediction confidence scores, and
validation against a held-out dataset. This chapter outlines the implementation of critical
functions, techniques used, and performance outcomes observed during experimentation.

5.2 IMPLEMENTATION OF KEY FUNCTIONS:


The implementation was carried out using Python 3.x in a Google Colab environment with
GPU acceleration. The following are the key functions/modules implemented in the system:
 Image Reading and Conversion : DICOM (.dcm) format images were read and
converted to NumPy arrays using libraries like pydicom and opencv-python. These were
resized to uniform dimensions using SVM preprocessing techniques and normalized for
consistent input.
 Image Scaling using SVM : A preprocessing function was designed to apply SVM logic
for image scaling. It accepted raw images, converted them into feature vectors, and
ensured consistent size and resolution across the dataset, suitable for input into the CNN.
 Normalization and Augmentation : The pixel values of all images were scaled to fall
within the range [0, 1]. Augmentation techniques such as random rotations, zoom,
flipping, and shifting were applied using Keras' ImageDataGenerator.
 CNN Model Building : The CNN model was built using TensorFlow/Keras. It included
multiple convolutional layers followed by ReLU activation, max pooling layers, and
dense layers. A final sigmoid activation function was used for binary classification
(normal vs anomaly).
 Training and Validation : The training dataset was fed into the CNN model in batches,
using model.fit() with callbacks such as EarlyStopping and ModelCheckpoint to save the
best-performing weights and avoid overfitting.
 Prediction and Thresholding : After training, the model was used to predict anomalies on unseen data. Prediction scores were collected, and a custom threshold was calculated using the mean and standard deviation of all prediction scores. This threshold improved sensitivity in detecting anomalous cases.
 Visualization and Results Display : Matplotlib was used to visualize random predictions along with their confidence scores and to display evaluation metrics such as accuracy, precision, and the confusion matrix. A combined training-and-visualization sketch follows this list.
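A combined sketch of the training, checkpointing, and visualization steps listed above. It assumes the model from the transfer-learning sketch in Section 4.4 and the train_generator/val_generator from the augmentation sketch; the epoch count and patience are illustrative.

```python
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop when the validation loss stops improving, keeping the best weights seen so far
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
    # Persist the best-performing model during training
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]

# `model`, `train_generator`, and `val_generator` come from the earlier sketches
history = model.fit(
    train_generator,
    validation_data=val_generator,
    epochs=30,
    callbacks=callbacks,
)

# Plot training curves of the kind shown in the output screenshots below
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```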
5.3 OUTPUT SCREENSHOTS:
Fig 5.3.1 Home page for user interface
Fig 5.3.2 Data Imported Successfully

Fig 5.3.3 Model Trained successfully


Fig 5.3.4 Confusion Matrix

Fig 5.3.5 Precision


Fig 5.3.6 Accuracy
CHAPTER 6
TESTING
6.1 INTRODUCTION:
Testing is a crucial step in the development of any intelligent system, especially in medical
applications where accuracy and reliability are paramount. For this fetal anomaly detection
system, testing ensures that the model can accurately identify and classify fetal head
anomalies based on ultrasound images. It also helps confirm that each module—from data
preprocessing to prediction—functions as expected and integrates smoothly into the overall
pipeline. The objective is to validate that the system performs consistently across different
datasets and delivers reliable results, thus supporting early clinical diagnosis. Through
rigorous testing of both functionality and performance, the system’s readiness for deployment
in real-world scenarios is assessed.
Testing is a vital phase in the software development life cycle that ensures the correctness,
performance, and reliability of the system. In this project, testing is crucial as the model is
used in a sensitive domain—medical diagnosis—where incorrect results could have serious
implications. Hence, rigorous testing was performed on both the deep learning model and the
user interface to ensure high accuracy and consistent behavior.
The aim of testing in this project is to:
 Detect and fix bugs or errors in the system.
 Ensure that the system meets its requirements.
 Evaluate model performance on unseen data (test set).
 Validate user interactions through the frontend (Streamlit UI).
6.2 LEVELS OF TESTING:

Testing was performed at various levels to ensure comprehensive coverage:


6.2.1 Unit Testing
Unit testing was conducted on individual functions such as:
 Image preprocessing function.
 Model prediction function.
 Index-to-class mapping function.
 Accuracy and confusion matrix plotting functions.
These were tested to verify they work correctly for different inputs and edge cases.
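Unit tests of this kind could be written with pytest along the following lines; the module and function names (preprocessing.load_and_preprocess, index_to_class), the sample image path, and the expected label are hypothetical stand-ins for the project's actual helpers.

```python
# test_preprocessing.py - illustrative pytest-style unit tests (names are hypothetical)
from preprocessing import load_and_preprocess, index_to_class  # assumed project modules

def test_preprocessed_shape_and_range():
    image = load_and_preprocess("samples/normal_001.png")   # placeholder test image
    assert image.shape == (224, 224, 3)                      # matches the model input size
    assert image.min() >= 0.0 and image.max() <= 1.0         # pixels normalized to [0, 1]

def test_index_to_class_mapping():
    assert index_to_class(0) == "normal"                     # placeholder expected label
```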

6.2.2 Integration Testing


Integration testing was carried out to test the interaction between:
 Frontend (Streamlit) and backend model logic.
 Uploaded image and its preprocessing.
 Backend prediction and result display.
This ensures seamless end-to-end functionality.
6.2.3 System Testing
The system was tested as a whole by simulating real-time use:
 Uploading valid and invalid image files.
 Testing UI responsiveness.
 Evaluating how the system handles unexpected inputs (e.g., blurry or rotated images).
6.2.4 Performance Testing
The model was evaluated using the test dataset (171 images), and the following metrics were measured (a short evaluation sketch follows this list):
 Accuracy: Percentage of correctly predicted images.
 Confusion Matrix: Visual comparison of true vs predicted classes.
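A small evaluation sketch using scikit-learn, assuming the true and predicted class indices of the 171 test images are available; the toy arrays below are placeholders for those values.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, confusion_matrix

# y_true: ground-truth class indices; y_pred: argmax of the model's softmax outputs
y_true = np.array([0, 1, 2, 1, 0])   # toy placeholder values
y_pred = np.array([0, 1, 2, 0, 0])

print("Accuracy:", accuracy_score(y_true, y_pred))

cm = confusion_matrix(y_true, y_pred)
plt.imshow(cm, cmap="Blues")          # visual comparison of true vs predicted classes
plt.xlabel("Predicted class")
plt.ylabel("True class")
plt.colorbar()
plt.title("Confusion matrix (cf. Fig 5.3.4)")
plt.show()
```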

6.3 VALIDATION TESTING:


Validation testing was conducted using a reserved subset of ultrasound images that were not
exposed to the model during training. This phase was essential to evaluate the generalization
ability of the trained CNN model on unseen data. The system’s predictions were compared
against true labels, and performance metrics such as accuracy, loss, and confidence scores
were analyzed.
A dynamic thresholding technique was applied based on the mean and standard deviation of
prediction scores to refine the classification between normal and anomalous fetal images. The
results showed high consistency in prediction and demonstrated the model’s effectiveness in
correctly identifying multiple types of fetal anomalies. This validation process confirmed the
robustness of the system and its potential applicability in clinical diagnostics.
6.4 CONCLUSION:

The testing phase of the fetal anomaly detection system confirmed that the model functions
accurately and reliably under various input conditions. Each module, from image
preprocessing and augmentation to prediction and output visualization, was rigorously tested
to ensure seamless integration and consistent performance. The CNN model, trained using
ultrasound images and enhanced through transfer learning with ResNet50, demonstrated
strong predictive capability during validation. Threshold-based classification further
improved the model's accuracy in detecting anomalies, particularly in borderline cases. The
system was also evaluated through a user-friendly interface, ensuring ease of use and clarity
in result interpretation. Overall, the testing process validated the robustness and clinical
potential of the proposed model, making it a reliable tool for early detection and classification
of fetal head anomalies.
CHAPTER 7
CONCLUSION

7.1 CONCLUSION:
The development and implementation of the fetal anomaly detection system using deep
learning has effectively demonstrated the transformative potential of artificial intelligence in
the field of prenatal healthcare and diagnostics. This project introduced a comprehensive,
structured, and systematic pipeline for the classification of fetal head anomalies using
ultrasound images—a task that is traditionally dependent on the expertise and experience of
radiologists and sonographers. By leveraging the advanced feature extraction capabilities of
Convolutional Neural Networks (CNN) and enhancing them with transfer learning through
the ResNet50 architecture, the system was able to learn intricate patterns and features
associated with various fetal anomalies, even with a relatively limited dataset. The integration
of rigorous data preprocessing techniques—including normalization to standardize pixel
intensity, resizing to maintain uniform input dimensions, and augmentation to synthetically
expand the dataset—ensured that the model was both robust and generalizable across
different imaging conditions.
Additionally, the incorporation of a dynamic threshold-based classification mechanism
helped refine the decision-making process, allowing the system to more confidently
distinguish between normal and abnormal cases, especially in borderline scenarios. The real-
time applicability of the system is one of its key strengths; it is capable of processing and
analyzing new ultrasound images swiftly and returning predictions with high accuracy,
thereby reducing the diagnostic burden on healthcare professionals. Extensive evaluation
through training, validation, and testing phases has confirmed the reliability, precision, and
clinical potential of the system. As a result, the proposed deep learning-based model stands
out as a promising tool that can be seamlessly integrated into routine clinical workflows. It
has the potential to assist medical experts in early detection of life-threatening fetal
conditions, enabling timely interventions, improving maternal-fetal outcomes, and advancing
the standard of prenatal care through AI-assisted technology.
7.2 FUTURE ENHANCEMENT:
Although the proposed system effectively identifies fetal anomalies from ultrasound images,
there is significant scope for enhancement to further increase its clinical applicability,
usability, and diagnostic depth. One of the most promising directions for future work is the
integration of segmentation capabilities, which would enable the system to not only classify
the presence of an anomaly but also localize and highlight the specific region of concern
within the image. This would greatly aid clinicians in interpreting results with higher
confidence. Another major improvement could involve expanding the dataset by
incorporating fetal ultrasound images from multiple sources, covering diverse demographics,
equipment types, and clinical conditions. This would increase the robustness and
generalizability of the model across different environments. Additionally, integrating the
system with hospital information systems and electronic medical records (EMR) could allow
seamless data exchange and make it part of routine obstetric screening processes. A web-
based or mobile application interface could further enhance accessibility, especially in rural
or resource-limited areas. Lastly, combining image-based features with non-imaging clinical
data—such as maternal history, blood test results, and genetic markers—could lead to a more
comprehensive and personalized diagnostic tool. These enhancements would transform the
current system into a fully scalable, intelligent clinical assistant capable of supporting real-
time, multi-modal fetal health analysis, thereby making a significant impact in modern
prenatal care.
