
DETECTION OF AIRCRAFTS WITH SATELLITE

USING R-CNN
A project report submitted to
Jawaharlal Nehru Technological University Kakinada,
in partial fulfilment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
(DATA SCIENCE)
Submitted by
BAKKAMUNTHALA MANASA 21491A4456
KANDUKURI SAI VIVEKANANDA 22495A4409
JANGA HARSHITHA 21491A4445
MEDARAMITLA PAVAN SAI CHOWDARY 21491A4413
YALAGALA VENKATA GOPINADH 21491A4426
UPPALA ROHIT NARASIMHA RAO 21491A4418

Under the esteemed guidance of


Dr. Ganesh Kumar M, M. Tech, Ph. D.,
Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE)

QIS COLLEGE OF ENGINEERING AND TECHNOLOGY


(AUTONOMOUS)
An ISO 9001:2015 Certified institution, approved by AICTE & Reaccredited by NBA, NAAC ‘A+’ Grade
(Affiliated to Jawaharlal Nehru Technological University, Kakinada)
VENGAMUKKAPALEM, ONGOLE – 523 272, A.P
2021-2025

QIS COLLEGE OF ENGINEERING AND TECHNOLOGY
(AUTONOMOUS)
An ISO 9001:2015 Certified institution, approved by AICTE & Reaccredited by NBA, NAAC ‘A+’ Grade
(Affiliated to Jawaharlal Nehru Technological University, Kakinada)
VENGAMUKKAPALEM, ONGOLE: 523272, A.P
April 2025

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE)


CERTIFICATE
This is to certify that the technical report entitled “DETECTION OF AIRCRAFTS WITH
SATELLITE USING R-CNN” is a bona fide work of the following final-year B.Tech students,
in partial fulfilment of the requirements for the award of the degree of Bachelor of Technology
in COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE) for the academic year
2024-2025.

BAKKAMUNTHALA MANASA 21491A4456


KANDUKURI SAI VIVEKANANDA 22495A4409
JANGA HARSHITHA 21491A4445
MEDARAMITLA PAVAN SAI CHOWDARY 21491A4413
YALAGALA VENKATA GOPINADH 21491A4426
UPPALA ROHIT NARASIMHA RAO 21491A4418

Signature of the guide Signature of Head of Department


Dr. Ganesh Kumar M, M. Tech, Ph. D., Dr. G. Lakshmi Vara Prasad M. Tech, Ph. D.,
Assistant Professor HOD, Associate Professor in AIMLDS

Signature of External Examiner

ACKNOWLEDGEMENT

“Task successful” makes everyone happy. But the happiness would be gold without glitter if
we did not acknowledge the people who supported us in making it a success.

We would like to place on record our deep sense of gratitude to the Hon’ble Chairman
Sri. N. SURYA KALYAN CHAKRAVARTHY GARU and the Hon’ble Executive Vice Chairman
Dr. N. SRI GAYATRI GARU, QIS Group of Institutions, Ongole, for providing the necessary
facilities to carry out the project work.

We express our gratitude to Dr. Y. V. HANUMANTHA RAO, M.Tech, Ph.D., Principal
of QIS College of Engineering & Technology, Ongole, for his valuable suggestions and advice
throughout the B.Tech course.
We express our gratitude to the Head of the Department of CSE(DS), Dr. G. L. V.
PRASAD GARU, M.Tech, Ph.D., QIS College of Engineering & Technology, Ongole for his
constant supervision, guidance and co-operation throughout the project.

We express our thankfulness to our project guide Dr. Ganesh Kumar M, M.Tech, Ph.D.,
Assistant Professor, QIS College of Engineering & Technology, Ongole, for his constant
motivation and valuable help throughout the project work.
We would like to express our thankfulness to CSCDE & DPSR for their constant
motivation and valuable help throughout the project.
Finally, we would like to thank our parents, family and friends for their co-operation to complete
this project.

Submitted by

BAKKAMUNTHALA MANASA 21491A4456


KANDUKURI SAI VIVEKANANDA 22495A4409
JANGA HARSHITHA 21491A4445
MEDARAMITLA PAVAN SAI CHOWDARY 21491A4413
YALAGALA VENKATA GOPINADH 21491A4426
UPPALA ROHIT NARASIMHA RAO 21491A4418

ABSTRACT

"Satellite-Based Aircraft Detection Using R-CNN" aims to leverage satellite imagery and deep
learning techniques to detect and classify aircraft in real-time. By using Region-based
Convolutional Neural Networks (R-CNN), this system enhances the precision of aircraft detection,
making it an invaluable tool for air traffic control, security, and disaster management. The project
involves a user-friendly interface where users can sign up, log in, and manage their profiles,
ultimately leading to aircraft detection predictions based on satellite images.
On the backend, the system gathers satellite data, preprocesses the images for model training, and
then uses an R-CNN model to detect aircraft. After the model is trained and validated, it is saved
for future use. The system builds APIs to facilitate user requests, allowing the model to respond to
aircraft detection queries in real-time. By streamlining the entire process—from data collection to
prediction output—the system offers an efficient, accurate solution for aircraft detection that can
be used across various industries, including aviation, military, and environmental monitoring.

Keywords: Satellite imagery, aircraft detection, R-CNN, deep learning, prediction, real-time
detection, machine learning, air traffic control.

TABLE OF CONTENTS

CHAPTER NO.   TITLE                                                PAGE NO.

              ABSTRACT                                             iv
              LIST OF TABLES                                       viii
              LIST OF FIGURES                                      ix
              LIST OF SYMBOLS AND ABBREVIATIONS                    x
1             INTRODUCTION                                         1-4
              1.1 Motivation                                       1
              1.2 Problem Statement                                1
              1.3 Objective of the Project                         2
              1.4 Scope                                            3
              1.5 Project Introduction                             3-4
2             LITERATURE SURVEY                                    5-14
3             SYSTEM ANALYSIS                                      15-17
              3.1 Existing System                                  15
              3.2 Disadvantages                                    15
              3.3 Proposed System                                  16
              3.4 Advantages                                       17
4             REQUIREMENT ANALYSIS                                 18-20
              4.1 Functional and Non-Functional Requirements       18
              4.2 Hardware & Software Requirements                 19
              4.3 Architecture                                     20
5             SYSTEM DESIGN                                        21-38
              5.1 Introduction of Input Design                     21
              5.2 UML Diagrams (class, use case, sequence,         22-31
                  collaboration, deployment, activity, ER,
                  and component diagrams)
              5.3 Source Code                                      32-38
6             IMPLEMENTATION AND RESULTS                           39-44
              6.1 Modules                                          39-40
              6.2 System Processing                                41-44
7             SYSTEM STUDY AND TESTING                             45-49
              7.1 Feasibility Study                                45-48
              7.2 Test Cases                                       49
8             CONCLUSION                                           50
9             FUTURE ENHANCEMENT                                   51
10            REFERENCES                                           52-54
LIST OF TABLES

TABLE NO.   TITLE             PAGE NO.

1           FUNCTIONAL TEST   47
2           TESTING CASES     49
LIST OF FIGURES

FIGURE NO.   TITLE                                             PAGE NO.

1            System Architecture                               20
2            Use Case Diagram                                  24
3            Class Diagram                                     24
4            Sequence Diagram                                  25
5            Collaboration Diagram                             26
6            Activity Diagram                                  27
7            Component Diagram                                 28
8            Deployment Diagram                                28
9            ER Diagram                                        29
10           Data Flow Diagram (Level 1)                       30
11           Data Flow Diagram (Level 2)                       31
12           User Authentication Module                        39
13           Data Collection Module                            40
14           Prediction Module                                 42
15           API Integration Module                            43
16           Results Visualization & User Interaction Module   43
17           Logging & Monitoring Module                       44
LIST OF SYMBOLS AND ABBREVIATIONS

Abbreviation   Full Form

API            Application Programming Interface
CNN            Convolutional Neural Network
DNN            Deep Neural Network
DOTA           Dataset for Object Detection in Aerial Images
ER             Entity-Relationship
FPS            Frames Per Second
GPU            Graphics Processing Unit
KNN            K-Nearest Neighbors
mAP            mean Average Precision
R-CNN          Region-based Convolutional Neural Network
RBN            Representative Batch Normalization
SAR            Synthetic Aperture Radar
SVM            Support Vector Machine
UML            Unified Modeling Language
YOLO           You Only Look Once
CHAPTER 1
INTRODUCTION

1.1. Motivation:

The increasing demand for efficient air traffic management, security, and monitoring in various
industries such as aviation, military, and disaster management highlights the need for an
accurate and real-time detection system. Traditional methods of aircraft detection often rely on
ground-based radar systems or manual identification, which can be inefficient, prone to errors,
and limited by geographical constraints. With the rapid advancements in satellite technology
and deep learning, the ability to detect aircraft from satellite imagery has become an attractive
solution for overcoming these limitations.

The motivation for this project arises from the potential to leverage satellite imagery to provide
a more accurate, scalable, and real-time solution for aircraft detection. By using deep learning
techniques, particularly Region-based Convolutional Neural Networks (R-CNN), we can
significantly improve detection accuracy in various weather conditions and across vast areas.
The use of R-CNN allows for object detection with much higher precision, making it ideal for
identifying aircraft from satellite images. This project aims to provide a more automated and
reliable alternative to existing methods, benefiting sectors like air traffic control, military
surveillance, and disaster response. It also promises to enhance operational efficiency,
minimize human error, and provide valuable insights for various industries relying on real-time
aircraft detection.

1.2 Problem Statement:

Detecting aircraft in real-time is a critical requirement for many sectors, including air traffic
control, military defense, and disaster management. Traditional aircraft detection methods,
such as radar-based systems or manual observation, face several limitations, including high
operational costs, limited coverage, and reliance on ground infrastructure. These systems are
often inefficient, especially in remote or hard-to-reach areas, and may struggle with timely
detection in adverse weather conditions or when dealing with a large volume of data. The
challenge, therefore, is to develop a robust and efficient solution that can detect and classify
aircraft in satellite imagery accurately and in real-time. Existing automated detection
techniques suffer from limitations in handling complex visual data, where aircraft may be
partially obscured or appear against cluttered backgrounds. Moreover, ensuring the system's
scalability and reliability across diverse environmental conditions is a significant hurdle.

Thus, there is a need for an innovative approach that combines the power of satellite imagery
with deep learning techniques, such as R-CNN, to improve the accuracy and efficiency of
aircraft detection. By leveraging advanced object detection methods, this system aims to
provide a scalable solution capable of identifying aircraft with high precision and low error
rates, addressing the limitations of current detection systems.

1.3 Objective of the Project

The primary objective of the "Satellite-Based Aircraft Detection Using R-CNN" project is to
develop a deep learning-based system that can detect and classify aircraft from satellite imagery
in real-time. By implementing Region-based Convolutional Neural Networks (R-CNN), the
system aims to enhance the accuracy and reliability of aircraft detection compared to traditional
methods. This approach enables the system to identify aircraft efficiently, even in challenging
conditions such as low resolution, complex backgrounds, or adverse weather.

The system's objectives can be broken down into several key components:

• Satellite Data Collection: Gather and preprocess satellite imagery to provide
high-quality input data for model training and testing.
• Aircraft Detection: Train an R-CNN model on labeled satellite images to detect and
classify aircraft accurately (a minimal inference sketch is shown after this list).
• Real-Time Prediction: Develop an API that allows users to interact with the system,
request predictions, and receive real-time aircraft detection results.
• User Interface: Design a user-friendly interface where users can manage profiles,
upload satellite images, and receive predictions efficiently.
• Model Validation: Validate the trained model's performance by evaluating its accuracy
in different scenarios and ensuring its robustness for real-world deployment.
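As a minimal illustration of the aircraft-detection objective above, the sketch below runs a
pretrained Faster R-CNN from torchvision over a single image. The COCO-pretrained weights, the
0.5 confidence threshold, and the file name satellite.jpg are assumptions for illustration, not
the project's trained model; the actual system would fine-tune such a detector on labeled
satellite imagery.

# Hedged sketch: Faster R-CNN inference with torchvision (assumed to be installed).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained weights as a stand-in; a deployed model would be fine-tuned.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("satellite.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections and print their bounding boxes.
keep = prediction["scores"] > 0.5
for box, score in zip(prediction["boxes"][keep], prediction["scores"][keep]):
    x1, y1, x2, y2 = box.tolist()
    print(f"detection at ({x1:.0f}, {y1:.0f}) .. ({x2:.0f}, {y2:.0f}), score {score:.2f}")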

1.4 Scope:

The scope of the "Satellite-Based Aircraft Detection Using R-CNN" project spans several key
areas, including data collection, model development, real-time prediction, and user interaction.
The system aims to detect and classify aircraft in satellite images with high accuracy and
efficiency. The scope of the project can be outlined in the following key areas:

• Data Collection: The project will use publicly available satellite imagery datasets for
aircraft detection. It will also focus on gathering a diverse set of data to train and
evaluate the model effectively (a dataset-split sketch is shown after this list).
• Model Development: The project will utilize Region-based Convolutional Neural
Networks (R-CNN) to detect and classify aircraft. The system will be trained on labeled
satellite imagery and tuned for real-time performance.
• Real-Time Detection: Once the model is trained, it will be deployed as a service,
allowing users to upload satellite images for aircraft detection. The system will process
requests and provide predictions in real-time.
• User Interface: A web-based interface will be developed for users to sign up, log in,
manage profiles, and interact with the system. The interface will also display prediction
results and allow users to access historical predictions.
• Applications and Use Cases: The system will be applicable in various industries,
including air traffic control, military surveillance, and disaster management. It aims to
provide an efficient, scalable, and automated solution for aircraft detection.
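To make the data-collection item above concrete, the following sketch splits a directory of
labeled satellite tiles into training and validation subsets. The directory layout, file
extension, and 80/20 ratio are illustrative assumptions, not the project's actual dataset
organization.

# Hedged sketch: an 80/20 train/validation split over a hypothetical image folder.
import random
import shutil
from pathlib import Path

random.seed(42)  # reproducible shuffle
images = sorted(Path("dataset/images").glob("*.jpg"))  # assumed layout
random.shuffle(images)

split = int(0.8 * len(images))
for subset, files in (("train", images[:split]), ("val", images[split:])):
    out_dir = Path("dataset") / subset
    out_dir.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out_dir / f.name)  # copy, keeping the originals intact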

1.5 Project Introduction:

The growing need for real-time aircraft detection in various sectors, including air traffic
control, military surveillance, and disaster management, has led to the exploration of advanced
techniques for monitoring and analyzing aircraft. Satellite imagery, with its ability to cover
large geographical areas, presents a unique opportunity for remote aircraft detection. However,
accurately identifying and classifying aircraft from satellite images poses significant challenges
due to the complexity of visual data, background clutter, and varying environmental conditions.

Traditional methods, such as radar and manual observation, are often limited by their reliance
on ground-based infrastructure, high operational costs, and inability to monitor remote areas
effectively. To address these limitations, deep learning-based object detection methods, such
as Region-based Convolutional Neural Networks (R-CNN), offer a promising solution for
aircraft detection in satellite imagery.

This project aims to develop a robust and scalable system that uses satellite imagery and
R-CNN to detect aircraft with high accuracy. The system will preprocess satellite images, train a
deep learning model, and allow real-time detection through a user-friendly interface. With its
real-time prediction capability, the system has the potential to revolutionize industries such as
aviation, defense, and emergency response, providing an automated, efficient, and accurate
solution for aircraft monitoring and detection.

CHAPTER 2
LITERATURE SURVEY

2.1 Related Work


1. Author: Ru Luo, Lifu Chen, Jin Xing, Zhihui Yuan, Siyu Tan, Xingmi Cai, and Jielan
Wang.
Date: 2021
Title: A Fast Aircraft Detection Method for SAR Images Based on Efficient Bidirectional
Path Aggregated Attention Network.
Outcome: In this paper, a new aircraft detection method is introduced for SAR images. The
authors designed EBPA2N to overcome problems common in SAR imagery, such as complicated
backgrounds, size variability, and shattered features. The framework integrates the lightweight
YOLOv5s backbone with two new modules: the Involution Enhanced Path Aggregation module for
multi-scale feature extraction and the Effective Residual Shuffle Attention module for focusing
on relevant features while reducing noise and false alarms. Experimental assessments on Gaofen-3
SAR datasets show that EBPA2N achieved a high detection rate of 93.05% with just a 4.49% false
alarm rate, thereby beating other methods such as EfficientDet-D0 and YOLOv5s. The paper also
points to the possibility of using this network in real-time geospatial analytics and its
scalability for detecting other small man-made targets in SAR images.
2. Author: Ting Wang, Changqing Cao, Xiaodong Zeng, Zhejun Feng, Jingshi Shen,
Weiming Li, Bo Wang, Yuedong Zhou, and Xu Yan.
Date: 2021
Title: An Aircraft Object Detection Algorithm Based on Small Samples in Optical
Remote Sensing Image.
Outcome: The paper presents an innovative aircraft object detection algorithm for optical
remote sensing images, focusing on cases with limited amounts of available data. The challenges
addressed include the high cost of obtaining spaceborne images and the drop in detection
performance when existing methods are trained on smaller datasets. The algorithm incorporates a
circle frequency filter and a fusion algorithm to improve the detection accuracy of weak and
small aircraft objects. Mean shift clustering is then used to fine-tune the locations of
objects, and a Support Vector Machine (SVM) classifier is used to eliminate false alarms.
Experimental results show it outperforms the Faster R-CNN model in precision, recall, and
processing speed. The method effectively detects 91.68% of the aircraft in the test set,
demonstrating its potential for cost-effective and reliable aircraft detection in remote sensing
applications.

3. Author: Qifan Wu, Daqiang Feng, Changqing Cao, Xiaodong Zeng, Zhejun Feng, Jin
Wu, and Ziqiang Huang.
Date: 2021
Title: Improved Mask R-CNN for Aircraft Detection in Remote Sensing Images.
Outcome: The paper improves Mask R-CNN for aircraft detection and segmentation in
high-resolution remote sensing images. SC Mask R-CNN introduces self-calibrated convolution
and dilated convolution, which enhance detection accuracy by improving the extraction of
object features within complex backgrounds and densely packed targets. The model improves
feature extraction while keeping training time reasonable. The study also introduces the
WFA-1400 dataset, designed specifically for remote sensing tasks. Experiments demonstrate that
SC Mask R-CNN outperforms traditional Mask R-CNN in accuracy (AP, AP50, and mIoU), showing its
effectiveness in practical applications such as civil and military aerial image analysis.

4. Author: Shun Luo, Juan Yu, Yunjiang Xi, and Xiao Liao.
Date: 2022
Title: Aircraft Target Detection in Remote Sensing Images Based on Improved YOLOv5.
Outcome: This paper introduces YOLOv5-Aircraft, an advanced version of the YOLOv5 network
aimed at aircraft detection in complex remote sensing images. The model improves on standard
YOLOv5 by using centering and scaling calibration in batch normalization to stabilize feature
distributions, a loss function based on Kullback-Leibler divergence to improve convergence, and
a CSandGlass module to enhance feature extraction while reducing information loss, eliminating
low-resolution feature layers to prevent semantic loss. Experimental results show that
YOLOv5-Aircraft is more accurate and faster than existing methods at detecting aircraft under
varying conditions such as lighting changes and cluttered backgrounds. However, the study notes
that further generalization improvements are needed to reduce the effects of environmental
factors such as light and weather.

5. Author: Arwin Datumaya Wahyudi Sumari, Dimas Eka Adinandra, Arie Rachmad
Syulistyo, and Sandra Loverencic.
Date: 2022
Title: Intelligent Military Aircraft Recognition and Identification to Support Military
Personnel on the Air Observation Operation.
Outcome: This paper introduces the development of an intelligent system for military aircraft
recognition and identification to help ground-based soldiers detect low-flying or radar-evading
hostile aircraft. Utilizing a combination of Back Propagation Neural Networks (BPNN) and
information fusion, the system processes 13 different aircraft characteristics reduced to five
primary features for efficient recognition. The system demonstrated high accuracy, with 95.33%
in training and 87% in testing, and it can also detect aircraft not in its database. The
information fusion implementation accelerated the process by 6 seconds, a critical time saving
for air defense systems. Some limitations notwithstanding, such as helicopter classification
accuracy, the system provides a strong approach to real-time surveillance and identification in
military applications. Future enhancements would expand the data to include drones and unmanned
aerial vehicles for broader applicability.

6. Author: P. Ajay Kumar Goud, G. Mohit Raj, K. Rahul, and A. Vijaya Lakshmi.
Date: 2023
Title: Military Aircraft Detection Using YOLOv5.
Outcome: The paper focuses on the detection of military aircraft using the YOLOv5 object
detection algorithm. It emphasizes the challenges in identifying military aircraft, especially
stealth types, due to their radar-resistant designs. The study uses a dataset of various
military aircraft and applies techniques such as data preprocessing, augmentation, and model
training to enhance detection accuracy. The methodology involves annotating images and splitting
the data into training, validation, and test sets; YOLOv5 is implemented using PyTorch. The
model attained noteworthy results, including 50.1% precision, 70.4% recall, and 70.4% mAP@0.5,
confirming that it can detect and classify various aircraft under hard conditions. The authors
suggest further training with a more sophisticated dataset to enhance real-world classification
performance.

7. Author: Zhiguo Liu, Yuan Gao, and Qianqian Du.


Date: 2023
Title: YOLO-Class: Detection and Classification of Aircraft Targets in Satellite Remote
Sensing Image Based on YOLO-Extract.
Outcome: The paper "YOLO-Class: Detection and Classification of Aircraft Targets in
Satellite Remote Sensing Images Based on YOLO-Extract" deals with problems of detection
and classification of aircraft in satellite imagery, including the issues of imbalanced datasets,
scale variations of the target and background, and occlusion of the target. The authors develop
the YOLO-Class model, which is better than YOLO-Extract. It optimizes feature extraction for
small, dense, and occluded targets using Representative Batch Normalization (RBN), Mish
activation function, and VariFocal loss. These enhancements address data imbalance and
improve classification accuracy. In addition, the model incorporates RepVGG modules to
strengthen the backbone network for better computational efficiency and feature extraction
without increasing complexity. Experimental results on the RarePlanes and DOTA datasets
show that YOLO-Class improves detection accuracy (mAP increased from 0.608 to 0.704) and
speed (FPS increased from 36.16 to 39.598), outperforming existing approaches under diverse
7|Page
conditions, including complex backgrounds and variable lighting. The study concludes with
suggestions for future improvements using more advanced networks and loss functions.

8. Author: Ling Lei, Yuwei She, Xiaoli Feng, Rui Xiong, Shan Liu.
Date: 2020
Title: Aircraft Detection of Remote Sensing Image Based on Faster R-CNN and Yolov3.
Outcome: This paper uses the deep learning models Faster R-CNN and YOLOv3 for the detection of
aircraft in remote sensing images. The authors used the UCAS_AOD dataset, expanding and
labeling the images to prevent overfitting before training the models. Faster R-CNN, a
two-stage detection method, reached a mean Average Precision (mAP) of 90.06%, while YOLOv3, a
one-stage method, reached 85.98%. Faster R-CNN is shown to be more accurate, while YOLOv3
performs better on speed and detailed object detection, making it preferable for real-time
applications. The authors point out the small size of the datasets and propose future work
involving larger datasets and extending detection to other target types, such as ships and oil
tanks, to broaden the application of remote sensing technology.

9. Author: Ugur Alganci, Mehmet Soydas, and Elif Sertel.


Date: 2020
Title: Comparative Research on Deep Learning Approaches for Airplane Detection from
Very High-Resolution Satellite Images.
Outcome: This paper reviews deep learning techniques for detecting airplanes in
high-resolution satellite imagery. It compares three state-of-the-art object detection models:
Faster R-CNN, SSD, and YOLO-v3. All models were tested on the DOTA dataset and also evaluated
independently on Pleiades satellite images. The research highlights Faster R-CNN as the most
accurate model, followed by YOLO-v3, which offers a good balance of accuracy and speed; SSD
showed lower detection performance but excelled in object localization. The study also
discusses challenges in satellite imagery, such as limited labeled data, complex backgrounds,
and variations in object size and illumination, which it addresses with data augmentation
techniques and transfer learning. The findings highlight the potential of CNN-based models,
especially for high detection accuracy in remote sensing applications.

10. Author: Gong Cheng, Junwei Han, and Xiaoqiang Lu.


Date: 2017
Title: Remote Sensing Image Scene Classification: Benchmark and State of the Art

Outcome: This paper reviews remote sensing image scene classification, which plays an
important role in a wide range of applications and has therefore received remarkable
attention. The authors note that a systematic review of the literature concerning datasets and
methods for scene classification was still lacking, and that almost all existing datasets have
a number of limitations, including the small scale of scene classes and image numbers.
11. Author: Yuhang Zhang, Hao Sun, Jiawei Zuo, Hongqi Wang, Guangluan Xu
and Xian Sun
Date: 2018
Title: Aircraft Type Recognition in Remote Sensing Images Based on Feature
Learning with Conditional Generative Adversarial Networks
Outcome: In this paper, the authors presented an aircraft type recognition framework based on
a conditional GAN. First, a new aircraft key-point detection method was carefully designed to
predict the eight key-point positions precisely. The key-point detection results provided
accurate mask and ROI information for the GAN and feature extraction methods. Then, a
conditional GAN with an ROI-weighted loss function was proposed to learn features from a large
dataset without type labels. Finally, an ROI feature extraction method was designed to extract
multi-scale features in the target regions and eliminate the effects of complex background
information. Experiments demonstrated that the proposed framework could effectively extract
robust and distinctive features. Based on these features, the method was able to identify
aircraft of different types and scales, and it achieved good recognition performance on a
challenging dataset.

12. Author: Marie R. G. Attard, Richard A. Phillips, Ellen Bowler, Penny J. Clarke,
Hannah Cubaynes, David W. Johnston and Peter T. Fretwell.
Date: 2024
Title: Review of Satellite Remote Sensing and Unoccupied Aircraft Systems for Counting
Wildlife on Land.
Outcome: This paper reviews the use of satellite remote sensing and unoccupied aircraft
systems (UAS) for counting wildlife on land, surveying how these remote platforms can be used
to detect and count terrestrial animals at scale.

13. Author: Jun-Wei Hsieh, Jian-Ming Chen, Chi-Hung Chuang, and Kao-Chin Fan
Date: 2023
Title: Aircraft type recognition in satellite images
Outcome: In this paper, the authors presented a hierarchical recognition method to recognize
aircraft types in satellite images. First, the method takes advantage of aircraft symmetry to
estimate an aircraft's orientation for rotation adjustment. Then, four features, including
bitmaps, wavelet coefficients, Zernike moments, and distance maps, are used to capture
different shape characteristics of an aircraft. Furthermore, a novel learning method was
presented to automatically determine a set of proper feature weights from training sets for
feature integration. Analysis showed that the best way to recognize aircraft is to use the
area feature first for rough categorization and then make detailed classifications according
to the four suggested features. The contributions of the paper can be summarized as follows:
(a) a symmetry-based method was proposed to estimate an aircraft's optimal orientation, which
achieves robust and effective rotation adjustment even when the aircraft is corrupted by
shadows or noise; (b) a hierarchical recognition scheme was proposed to recognize aircraft
types by incorporating suitable weights into each feature and classifying aircraft at
different levels.

14. Author: Berkay Yaban, Elif Sertel, Ugur Alganci


Date: 2022
Title: Aircraft Detection in Very High Resolution Satellite Images using YOLO-based
Deep Learning Methods
Outcome: This paper explores the application of deep learning techniques, specifically the
YOLO framework, to detect aircraft in very high resolution (VHR) satellite images. Aircraft
detection in satellite images is challenging due to the small size, varied orientations, and
diverse environmental conditions under which aircraft appear. YOLO, known for its real-time
object detection capabilities, is leveraged to address these challenges by offering fast and
accurate detection.

15. Author: Julie Imbert, Gohar Dashyan, Alex Goupilleau, Tugdual Ceillier, Marie-Caroline
Corbineau
Date: 2021
Title: Improving Performance of Aircraft Detection in Satellite Imagery while Limiting
the Labelling Effort: Hybrid Active Learning
Outcome: This paper explores the challenge of improving the performance of aircraft detection
in satellite imagery while minimizing the labor-intensive task of manually labeling large
datasets. It introduces a Hybrid Active Learning (HAL) approach, which combines the

strengths of active learning and traditional deep learning methods to improve model accuracy
with fewer labeled samples.

16. Author: Peder Heiselberg, Kristian A. Sorensen

Date: 2023

Title: Aircraft Detection and State Estimation in Satellite Images

Outcome: This paper explores the challenges and methodologies associated with aircraft
detection and state estimation (such as position, velocity, and orientation) in satellite images.
Aircraft detection from satellite imagery is crucial for applications like military surveillance,
air traffic control, and environmental monitoring, but it presents several difficulties due to
factors like varying object size, resolution constraints, and environmental complexity. The
paper explores advanced computer vision techniques and deep learning methods to detect
aircraft in satellite imagery and estimate their state accurately.

17. Author: Chaitanya Malladi

Date: 2017

Title: Detection of Objects in Satellite Images using Supervised and Unsupervised
Learning Methods

Outcome: This paper explores the application of both supervised and unsupervised learning
techniques for the detection of objects in satellite images. Object detection in satellite imagery
is crucial for a wide range of applications, including urban planning, disaster management,
environmental monitoring, and military surveillance. The paper compares and contrasts the
performance of supervised and unsupervised learning methods, providing insights into how
each approach can be utilized to identify objects in satellite images effectively.

18. Author: Stefanos Georganos, Tais Grippa, Sabine Vanhuysse, Moritz Lennert,
Michal Shimoni, Stamatis Kalogirou, and Eleonore Wolff

Date: 2017

Title: Less is more: optimizing classification performance through feature selection in a
very high-resolution remote sensing object-based urban application

Outcome: With a significant increase in the acquisition of very-high-resolution (VHR)
satellite data from Earth observation (EO) missions such as Pleiades, QuickBird, and
WorldView, remote sensing (RS) information is being collected at various temporal and spatial
resolutions. The adequate interpretation of RS data is of vital importance for extracting
rigorous and robust results that may drive policy making in fields such as health and
epidemiology. Feature selection (FS) techniques can be divided into three general categories:
i) filters, ii) embedded techniques, and iii) wrappers. Filter methods represent the simplest,
fastest, and most generic approaches for selecting relevant attributes. They do not require
any learning algorithm but rather rank and select features and feature subsets based on
statistical measures such as correlation, attempting to let only the most important variables
manifest. Their main disadvantage is that they do not take model prediction and feature
interaction into consideration.

19. Author: Yang Li, Kun Fu, Hao Sun and Xian Sun

Date: 2017

Title: An Aircraft Detection Framework Based on Reinforcement Learning and
Convolutional Neural Networks in Remote Sensing Images

Outcome: Aircraft detection has attracted increasing attention in the field of remote sensing
image analysis. Complex backgrounds, illumination changes, and variations in aircraft kind and
size make the task challenging. The authors propose an effective aircraft detection framework
based on reinforcement learning and a convolutional neural network (CNN) model. Aircraft in
remote sensing images can be accurately and robustly located with the help of a searching
mechanism in which the candidate region is dynamically reduced to the correct location of the
aircraft, implemented through reinforcement learning. The framework overcomes the difficulty
that current detection methods based on reinforcement learning can only detect a fixed number
of objects. Specifically, restricted Edge Boxes first generate high-quality candidate boxes
using prior aircraft knowledge. Then, an intelligent detection agent is trained through
reinforcement learning and apprenticeship learning. The agent accurately locates the aircraft
in the candidate boxes within several actions, and it even performs better than the greedy
strategy used in apprenticeship learning. Localization is treated as an action-decision
problem: a sequence of actions refines the size and position of the bounding box. Active
interaction to understand the image region, adjustment of the bounding-box aspect ratio, and
selection of the region of interest are important for determining the accurate position of
aircraft.

20. Author: Sathis Kumar T

Date: 2021

Title: Prediction of Aircraft using Deep Learning in Remote Sensing Images

Outcome: The paper presents aircraft recognition from satellite images for surveillance
applications using superpixel segmentation and a template-matching model. The tracking system
provides results with low computational complexity and better accuracy, and neural network
analysis is utilized effectively to enhance segmented regions and track target objects. The
simulated results show that better efficiency is achieved with the chosen techniques and
methodologies. The work proposes a new automatic target classifier, based on a combined
neural-network system, using ISAR image processing. The novelty is twofold: a novel automatic
classification procedure, and improved multimedia processing of ISAR images for automatic
object detection. A neural classifier, composed of a combination of back-propagation
artificial neural networks, is used to recognize aircraft targets extracted from ISAR images.
The combination of two image processing techniques recently introduced in the literature is
exploited to improve the shape and feature extraction process [20]. Superpixel descriptors
are then computed and used as input features to the combined system. Performance analysis is
carried out in comparison with conventional multimedia processing techniques as well as with
classical automatic target recognition systems. Numerical results, obtained from wide
simulation trials, evidence the efficiency of the proposed approach for automatic aircraft
target recognition.

21. Author: Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic

Date: 2014

Title: Learning and Transferring Mid-Level Image Representations using
Convolutional Neural Networks

Outcome: The paper "Learning and Transferring Mid-Level Image Representations using
Convolutional Neural Networks" presents the application of CNNs in image recognition tasks
by emphasizing their ability to learn rich, mid-level representations of images. The paper
addresses a critical limitation of CNNs: the necessity of large annotated datasets for training.
They propose a method of transfer learning that reuse CNN layers trained on large-scale
ImageNet datasets with the intent of improving performances on tasks with limited training
data such as object classification and action classification in Pascal VOC datasets. The
experiments show notable improvements over existing state-of-the-art techniques even in cases
where images statistics are different, different tasks are involved, or different viewpoints are
used in the data. The authors also give promising results for object and action localization,
emphasizing that pre-trained CNN features can transfer well to diverse visual recognition
problems.

CHAPTER 3
SYSTEM ANALYSIS

3.1 Existing System

Existing systems for aircraft detection mainly rely on ground-based technologies such as radar
systems and visual monitoring methods. These systems often require expensive infrastructure,
are limited by geographical boundaries, and lack the flexibility to monitor remote or
difficult-to-reach areas. Radar systems are costly to deploy and maintain, especially in vast regions
where aircraft movement needs to be tracked. Moreover, these systems are not always effective
under challenging weather conditions, such as low visibility, which can significantly reduce
their detection capability. Additionally, these traditional systems require human intervention
for monitoring, which can be time-consuming and prone to errors. Another limitation of
existing systems is their inability to analyze large-scale satellite imagery in real-time, which
restricts their use for continuous surveillance. Consequently, these traditional methods struggle
with scalability, real-time processing, and accuracy when detecting aircraft in satellite imagery,
especially when the objects to be detected are small and hard to identify.

3.2 Disadvantages

• Limited Coverage: Ground-based radar and monitoring systems are often confined to
specific geographical locations, leaving gaps in coverage, especially over remote or
inaccessible regions. These systems cannot monitor large areas in real-time, which is
critical for continuous aircraft surveillance.

• High Cost: Deploying and maintaining ground-based radar systems can be highly
expensive. This includes the costs associated with infrastructure, maintenance, and
human resources. These systems are also less cost-effective for large-scale surveillance
operations.

• Weather Sensitivity: Traditional radar and visual monitoring systems often struggle to
detect aircraft during adverse weather conditions such as heavy rain, fog, or snow. This
limits their ability to provide reliable data in all environmental conditions.

• Low Detection Accuracy: Existing systems may have limited accuracy in detecting
small or distant aircraft, especially those flying at high altitudes or in complex
environments. The detection algorithms used in these systems are often not equipped
to handle varying image qualities, such as satellite imagery with lower resolutions or
high noise levels.

• Manual Intervention: Many traditional systems require significant human
intervention for monitoring and analysis, leading to potential delays and increased risk
of human error. This can hinder the efficiency of aircraft detection, especially in real-
time applications.

• Slow Response Time: Ground-based systems often require substantial time to process
and analyze data. The detection of aircraft using manual methods or outdated
algorithms can be slow, preventing timely responses to potential threats or air traffic
management needs.

• Scalability Issues: Expanding traditional detection systems to cover larger areas or
handle more data requires significant investments in additional infrastructure and
resources. This is particularly problematic for large-scale or international monitoring
needs.

• Limited Flexibility: Existing systems lack the ability to adapt to different satellite
image datasets or other remote sensing technologies. As a result, they struggle to
provide a scalable and adaptable solution for varying aircraft detection tasks.

• Dependence on Ground Infrastructure: Traditional aircraft detection systems rely
heavily on ground-based infrastructure, which is vulnerable to failure due to natural
disasters, system malfunctions, or attacks, leading to gaps in coverage.

• Inability to Handle Big Data: Ground-based detection systems are not well-suited to
handle large volumes of satellite image data generated by modern high-resolution
satellites. This limitation makes it difficult for these systems to provide real-time
aircraft detection at scale.
3.3 Proposed System

The proposed system, "Satellite-Based Aircraft Detection Using R-CNN", utilizes satellite
imagery and deep learning techniques to detect and classify aircraft in real-time. The system
gathers high-resolution satellite data, preprocesses it for training, and applies Region-based
Convolutional Neural Networks (R-CNN) for aircraft detection. After training, the model is
deployed and exposed via an API, allowing users to upload satellite images for detection
queries. The user interface offers a simple and intuitive experience, enabling users to sign up,
manage profiles, and receive aircraft detection predictions. The system is scalable and
optimized for performance, utilizing cloud infrastructure and GPU acceleration to handle large
datasets. Security measures ensure the privacy and integrity of user and satellite data. By
automating aircraft detection from satellite imagery, the system offers a more efficient,
cost-effective, and accurate solution for applications in air traffic control, military surveillance, and
environmental monitoring, overcoming the limitations of traditional detection systems.
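The preprocessing step mentioned in the paragraph above can be made concrete with a short
sketch: satellite tiles are resized to a fixed input size and pixel values scaled to [0, 1]
before they reach the detector. The target size, normalization scheme, and file names are
illustrative assumptions rather than the project's fixed choices.

# Hedged sketch of image preprocessing for the detector; constants are assumptions.
import numpy as np
from PIL import Image

def preprocess(path, size=(800, 800)):
    """Load an image, resize it, and scale pixel values to [0, 1]."""
    image = Image.open(path).convert("RGB").resize(size)
    return np.asarray(image, dtype=np.float32) / 255.0

# Stack a small batch of preprocessed tiles (placeholder file names).
batch = np.stack([preprocess(p) for p in ["tile_001.jpg", "tile_002.jpg"]])
print(batch.shape)  # -> (2, 800, 800, 3)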

3.4 Advantages

• Improved Detection Accuracy: The use of R-CNN allows for high precision in
detecting aircraft from satellite imagery, making the system capable of identifying
even small and partially obscured aircraft.

• Real-Time Prediction: Unlike traditional systems that are slow, the proposed
system can detect aircraft in real-time, providing timely information for air traffic
management and security purposes.

• Scalability: The system can handle large volumes of satellite imagery, enabling
continuous monitoring of wide areas and providing scalable solutions for real-time
aircraft detection across multiple regions.

• Cost-Effective: By relying on satellite imagery and deep learning, the proposed
system reduces the need for expensive ground infrastructure like radar, lowering
operational costs significantly.

• Automation: The system automates the aircraft detection process, reducing human
intervention, and minimizing errors, ensuring consistent and accurate results.

• Adaptability: The model can be trained on diverse datasets, making it adaptable to
various aircraft types, satellite image qualities, and environmental conditions.

• User-Friendly Interface: The system is designed to be intuitive and easy to use,
allowing users of all technical backgrounds to interact with the system effortlessly.

• Security and Privacy: The system incorporates robust security protocols, ensuring
that user and satellite data is protected against unauthorized access and breaches.

• Comprehensive Data Management: The backend allows for seamless integration
of satellite image processing and aircraft detection, ensuring that results are
delivered efficiently to users.

CHAPTER 4
REQUIREMENT ANALYSIS

4.1 Functional and Non-Functional Requirements

Functional Requirements

The proposed system requires several key functional components. First, it must gather
high-resolution satellite data, preprocess it, and prepare it for deep learning model training. The
system must then implement Region-based Convolutional Neural Networks (R-CNN) for
aircraft detection and classification. It should provide an API to allow users to upload
satellite images and receive detection predictions in real-time. The system must also
support user registration, login, and profile management. Additionally, it must store
detection results, offer the option to visualize predictions, and ensure seamless interaction
between the front-end interface and the back-end model.

Examples of Functional Requirements:

a) Users must be able to upload satellite images through the web interface.
b) The system should detect aircraft and return bounding box coordinates in real
time (a hypothetical response shape is shown after this list).
c) The system must allow users to manage their profiles and view past predictions.
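As a concrete illustration of requirement (b) above, the snippet below shows one plausible
shape for a detection response. The field names and values are assumptions for illustration;
the endpoint shown in Section 5.3 currently returns only the predicted class name.

# Hypothetical JSON payload a detection endpoint might return (assumed fields).
example_response = {
    "detections": [
        # box is [x1, y1, x2, y2] in image pixel coordinates
        {"label": "aircraft", "score": 0.93, "box": [412.0, 108.5, 486.2, 171.9]},
    ],
    "image_id": "tile_001.jpg",  # placeholder identifier
}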

Non-Functional Requirements
The proposed system must meet several non-functional requirements to ensure efficiency,
scalability, and security. It should handle large-scale satellite image data, supporting
real-time processing with minimal latency. The system must be highly scalable, leveraging
cloud infrastructure to accommodate growing data volumes. It should be user-friendly,
ensuring intuitive navigation and interaction for non-experts. Additionally, security
measures should protect user data and satellite imagery, with encryption and secure
authentication mechanisms. The system must also offer high availability, ensuring
uninterrupted access to users, and provide fast response times for aircraft detection
predictions, ensuring real-time results.

Examples of Non-Functional Requirements:

• The system should return detection results within a few seconds of an image upload.
• The system must handle large volumes of satellite imagery data with minimal
latency.
• The platform must be scalable to support additional users and data.
• The system should ensure secure storage and transmission of user and satellite
data.

4.2 Hardware & Software Requirements

Hardware Requirements

• Processor - Intel Core i3 or above

• RAM - 8 GB

• Hard Disk - 1 TB

Software Requirements

• Operating System - Windows 10

• JDK - Java

• Plugin - Kotlin

• SDK - Android

• IDE - Android Studio

• Database - server-side scripts, MySQL

These requirements cover the Android client; the detection backend shown in Section 5.3
additionally uses Python with Django and the Ultralytics YOLO library.

4.3 Architecture:

FIG 1: System Architecture
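As described in the Abstract and Section 3.3, the architecture of Figure 1 chains data
collection, preprocessing, model inference, and an API response. The sketch below expresses
that flow as plain Python functions; every name is an illustrative stub rather than the
project's actual code.

# Hedged sketch of the end-to-end flow of Figure 1; all functions are illustrative stubs.
import base64
import io
from PIL import Image

def decode_image(b64_string):
    """Decode a base64 payload into an RGB image (validation elided here)."""
    return Image.open(io.BytesIO(base64.b64decode(b64_string))).convert("RGB")

def preprocess(image, size=(800, 800)):
    """Resize to a fixed detector input size (an assumed choice)."""
    return image.resize(size)

def detect(image):
    """Stand-in for the trained R-CNN; a real system would run model inference."""
    return [{"label": "aircraft", "score": 0.9, "box": [0, 0, 10, 10]}]  # dummy output

def handle_detection_request(b64_string):
    """Chain the pipeline stages shown in the architecture diagram."""
    return {"detections": detect(preprocess(decode_image(b64_string)))}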

CHAPTER 5
SYSTEM DESIGN

5.1 Introduction of Input design

INPUT DESIGN

The input design is the link between the information system and the user. It comprises the
specifications and procedures for data preparation, the steps necessary to put transaction
data into a usable form for processing. Data entry can be achieved by having the computer read
data from a written or printed document, or by having people key the data directly into the
system. The design of input focuses on controlling the amount of input required, controlling
errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is
designed to provide security and ease of use while retaining privacy. Input design considered
the following things:

➢ What data should be given as input?
➢ How should the data be arranged or coded?
➢ The dialogue to guide the operating personnel in providing input.
➢ Methods for preparing input validations, and the steps to follow when errors occur.

OBJECTIVES
1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
to show the correct direction to the management for getting correct information from the
computerized system.

2. It is achieved by creating user-friendly screens for data entry that can handle large
volumes of data. The goal of designing input is to make data entry easier and free from
errors. The data entry screen is designed in such a way that all data manipulations can be
performed. It also provides record-viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with the help of
screens, and appropriate messages are provided as and when needed, so that the user is never
left confused. Thus, the objective of input design is to create an input layout that is easy
to follow.
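Applied to this project, the validity check described above means verifying that an uploaded
payload really decodes to an image before it reaches the model. The sketch below is one hedged
way to do that; the function name and error message are assumptions for illustration.

# Hedged sketch: validate a base64-encoded upload before running inference.
import base64
import binascii
import io
from PIL import Image, UnidentifiedImageError

def decode_image_or_error(base64_string):
    """Return (image, None) on success, or (None, message) on bad input."""
    try:
        data = base64.b64decode(base64_string, validate=True)
        Image.open(io.BytesIO(data)).verify()  # raises if bytes are not an image
        return Image.open(io.BytesIO(data)).convert("RGB"), None
    except (binascii.Error, UnidentifiedImageError, OSError):
        return None, "Uploaded data is not a valid image."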

OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In output design, it is determined how the information
is to be displayed for immediate need and also as hard-copy output. It is the most important
and direct source of information to the user. Efficient and intelligent output design improves
the system's relationship with the user and helps in decision-making.

1. Designing computer output should proceed in an organized, well thought out manner; the
right output must be developed while ensuring that each output element is designed so that
people will find the system easy and effective to use. When analysts design computer
output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following
objectives.

❖ Convey information about past activities, current status, or projections of the future.
❖ Signal important events, opportunities, problems, or warnings.
❖ Trigger an action.
❖ Confirm an action.

5.2 UML Diagrams

UML stands for Unified Modelling Language. UML is a standardized general-purpose


modelling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-
oriented computer software. In its current form, UML comprises two major components:
a meta-model and a notation. In the future, some form of method or process may also be added
to, or associated with, UML.

The Unified Modelling Language is a standard language for specifying, visualizing,
constructing, and documenting the artifacts of software systems, as well as for business
modelling and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modelling of large and complex systems.
The UML is a very important part of developing object-oriented software and the
software development process. The UML uses mostly graphical notations to express the design
of software projects.

GOALS
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modelling Language so that they can
develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development process.
4. Provide a formal basis for understanding the modelling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.

USE CASE DIAGRAM


A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor. Roles of the actors in
the system can be depicted.

FIG 2: USE CASE DIAGRAM

CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modelling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.

FIG 3: CLASS DIAGRAM

SEQUENCE DIAGRAM
A sequence diagram in Unified Modelling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.

FIG 4: SEQUENCE DIAGRAM

COLLABORATION DIAGRAM

In a collaboration diagram, the method-call sequence is indicated by a numbering technique,
as shown below; the numbers indicate how the methods are called one after another. The method
calls are similar to those of a sequence diagram, but the difference is that a sequence
diagram does not describe the object organization, whereas a collaboration diagram shows the
object organization.

FIG 5: COLLABORATION DIAGRAM

ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of stepwise activities and actions
with support for choice, iteration and concurrency. In the Unified Modelling Language, activity
diagrams can be used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of control.

FIG 6: ACTIVITY DIAGRAM

COMPONENT DIAGRAM

A component diagram, also known as a UML component diagram, describes the organization
and wiring of the physical components in a system. Component diagrams are often drawn to
help model implementation details and double-check that every aspect of the system's required
functions is covered by planned development.

FIG 7: COMPONENT DIAGRAM

DEPLOYMENT DIAGRAM
Deployment diagram represents the deployment view of a system. It is related to the component
diagram. Because the components are deployed using the deployment diagrams. A deployment
diagram consists of nodes. Nodes are nothing but physical hardware’s used to deploy the
application.

FIG 8: DEPLOYMENT DIAGRAM

ER DIAGRAM

An Entity–relationship model (ER model) describes the structure of a database with the help
of a diagram, known as an Entity Relationship Diagram (ER Diagram). An ER model is
a design or blueprint of a database that can later be implemented. The main
components of the E-R model are the entity set and the relationship set.

An ER diagram shows the relationship among entity sets. An entity set is a group of similar
entities and these entities can have attributes. In terms of DBMS, an entity is a table or attribute
of a table in database, so by showing relationship among tables and their attributes, ER diagram
shows the complete logical structure of a database. Let’s have a look at a simple ER diagram
to understand this concept.

FIG 9: ER DIAGRAM

5.3 DATA FLOW DIAGRAM

A Data Flow Diagram (DFD) is a traditional way to visualize the information flows within a
system. A neat and clear DFD can depict a good amount of the system requirements
graphically. It can be manual, automated, or a combination of both. It shows how information
enters and leaves the system, what changes the information and where information is stored.
The purpose of a DFD is to show the scope and boundaries of a system as a whole. It may be
used as a communication tool between a systems analyst and any person who plays a part in
the system, and it acts as the starting point for redesigning a system.

Level 1

FIG 10: DATA FLOW DIAGRAM (Level 1)

Level 2

FIG 11: DATA FLOW DIAGRAM (LEVEL 2)

5.4 SOURCE CODE

# Views Code (Django, Python)
from django.shortcuts import render
from django.http.response import JsonResponse
import base64
import io
from PIL import Image
import numpy as np
from ultralytics import YOLO
from django.views.decorators.csrf import csrf_exempt

# Load the trained detection model once at startup
model = YOLO('app/models/best.pt')

@csrf_exempt
def Index(request):
    if request.method == "POST":
        # Decode the base64-encoded image sent by the Android client
        base64_string = request.POST.get('image', '')
        image_data = base64.b64decode(base64_string)
        image = Image.open(io.BytesIO(image_data)).convert("RGB")
        image_np = np.array(image)
        # Run inference and read the predicted class labels
        results = model(image_np)
        result = results[0]
        class_ids = result.boxes.cls.cpu().numpy().astype(int)
        prediction = ""
        for class_id in class_ids:
            prediction = result.names[class_id]
            print(f"Predicted Class: {result.names[class_id]}")
        return JsonResponse({"output": prediction})
    # Fall back to an empty prediction for non-POST requests
    return JsonResponse({"output": ""})
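For illustration, a minimal Python client for this endpoint might look like the following
sketch. It assumes the Django URL configuration routes the Index view at upload/ (matching
the @POST("upload/") annotation in the Android Api interface below) and that the server
address is the one configured in RetrofitObject; both are assumptions, not code from the
project.

# Hypothetical client for the prediction endpoint (illustration only)
import base64
import requests

def predict(image_path, url="http://192.168.239.189:5000/upload/"):
    # Encode the image exactly as the Android client does (base64 text field)
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    response = requests.post(url, data={"image": encoded})
    response.raise_for_status()
    # The view returns JSON of the form {"output": "<predicted class>"}
    return response.json()["output"]

# Example usage:
# print(predict("satellite_image.png"))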
// Signup Code (Kotlin, Android client)
package com.flight
import android.os.Bundle
import android.widget.Toast
import androidx.activity.OnBackPressedCallback
import androidx.appcompat.app.AppCompatActivity

import androidx.lifecycle.lifecycleScope
import com.flight.data.DataBase
import com.flight.data.Users
import com.flight.databinding.SignUpBinding
import kotlinx.coroutines.Dispatchers.Main
import kotlinx.coroutines.async
import kotlinx.coroutines.withContext

class SignUp : AppCompatActivity() {


private val bind by lazy {
SignUpBinding.inflate(layoutInflater)
}
private val data by lazy {
DataBase.getContext(this).dao()
}

override fun onCreate(savedInstanceState: Bundle?) {


super.onCreate(savedInstanceState)
setContentView(bind.root)
with(bind) {
create.setOnClickListener {
val name = name.text.toString().trim()
val email2 = email2.text.toString().trim()
val mobile = mobile.text.toString().trim()
val password = password.text.toString().trim()

if (name.isEmpty()) {
Toast.makeText(applicationContext, "Please enter your name",
Toast.LENGTH_SHORT)
.show()
return@setOnClickListener
}

if (email2.isEmpty() || !android.util.Patterns.EMAIL_ADDRESS.matcher(email2)
.matches()
){
Toast.makeText(
applicationContext,
"Please enter a valid email address",
Toast.LENGTH_SHORT
).show()
return@setOnClickListener
}

if (mobile.isEmpty() || mobile.length != 10 || !mobile.matches(Regex("\\d{10}"))) {


Toast.makeText(

applicationContext,
"Please enter a valid 10-digit mobile number",
Toast.LENGTH_SHORT
).show()
return@setOnClickListener
}

if (password.isEmpty() || password.length < 6) {


Toast.makeText(
applicationContext,
"Password must be at least 6 characters long",
Toast.LENGTH_SHORT
).show()
return@setOnClickListener
}

lifecycleScope.async {
val checkMail = data.checkMail(email2)
if (checkMail.isNotEmpty()) {
withContext(Main) {
Toast.makeText(
applicationContext,
"email Already Exists",
Toast.LENGTH_SHORT
).show()
}
} else {
data.createAc(
users = Users(
name = name,
email = email2,
mobile = mobile,
password = password
)
)
withContext(Main) {
finish()
}
}
}.start()
} // end of create.setOnClickListener

loginAc.setOnClickListener {
finish()
}
} // end of with(bind)

onBackPressedDispatcher.addCallback(object : OnBackPressedCallback(true) {
override fun handleOnBackPressed() {
finish()
}
})
} // end of onCreate
} // end of class SignUp
// API Source Code (Kotlin)
package com.flight.responses

import retrofit2.Response
import retrofit2.http.Field
import retrofit2.http.FormUrlEncoded
import retrofit2.http.POST

interface Api {

    @FormUrlEncoded
    @POST("upload/")
    suspend fun upload(
        @Field("image") image: String
    ): Response<Predicted>
}
// Retrofit Object Code (Kotlin)
package com.flight.responses

import retrofit2.converter.gson.GsonConverterFactory

object RetrofitObject {
    private const val BASEURL = "http://192.168.239.189:5000/"
    val api: Api by lazy {
        retrofit2.Retrofit.Builder().baseUrl(BASEURL)
            .addConverterFactory(GsonConverterFactory.create())
            .build()
            .create(Api::class.java)
    }
}
// Prediction Code (Kotlin)
package com.flight

import android.annotation.SuppressLint
import android.content.Intent
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.os.Build
import android.os.Bundle
import android.provider.MediaStore
import android.util.Base64
import android.util.Log
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.view.isVisible
import androidx.lifecycle.lifecycleScope
import com.flight.databinding.PredictionBinding
import com.flight.responses.RetrofitObject
import kotlinx.coroutines.Dispatchers.Main
import kotlinx.coroutines.async
import kotlinx.coroutines.withContext
import java.io.ByteArrayOutputStream

class Prediction : AppCompatActivity() {


private val bind by lazy {
PredictionBinding.inflate(layoutInflater)
}
private val capture =
registerForActivityResult(ActivityResultContracts.StartActivityForResult()) {
it.data?.let { intent ->
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU) {
intent.getParcelableExtra("data", Bitmap::class.java)
} else {
intent.getParcelableExtra("data")
}?.let {
bind.headImage.setImageBitmap(it)
loadModel(it)
}

}
}

private val gallery =


registerForActivityResult(ActivityResultContracts.StartActivityForResult()) {
it.data?.data?.let {
val readBytes = contentResolver.openInputStream(it)?.readBytes()
if (readBytes != null) {
val bitmap = BitmapFactory.decodeByteArray(readBytes, 0, readBytes.size)
bind.headImage.setImageBitmap(bitmap)

loadModel(bitmap)

}
}
}

@SuppressLint("SetTextI18n")
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(bind.root)
with(bind) {
bind.textView2.text = "Hi ${intent.getStringExtra("name")} !!"
capture.setOnClickListener {
this@Prediction.capture.launch(
Intent(
MediaStore.ACTION_IMAGE_CAPTURE
)
)
}
logout.setOnClickListener {
finish()
}
profile.setOnClickListener {
startActivity(Intent(applicationContext, Profile::class.java).apply {
putExtra("name", intent.getStringExtra("name"))
putExtra("mobile", intent.getStringExtra("mobile"))
putExtra("email", intent.getStringExtra("email"))
})
}
galleryView.setOnClickListener {

this@Prediction.gallery.launch(Intent(Intent.ACTION_GET_CONTENT).setType("image/*")
)
}

}
}

private val retrofit by lazy {


RetrofitObject.api
}

@SuppressLint("SetTextI18n")
private fun loadModel(it: Bitmap) {
try {

lifecycleScope.async {
val compress = ByteArrayOutputStream()
it.compress(Bitmap.CompressFormat.PNG, 100, compress)
val body =
retrofit.upload(Base64.encodeToString(compress.toByteArray(),
Base64.NO_WRAP))
body.body()?.output?.let {
withContext(Main) {
bind.resultsCard.isVisible = true
bind.resultText.text = "Predicted as $it"
}
}
}.start()
bind.resultsCard.isVisible = true

} catch (e: Exception) {


Log.i("sdkdf", "${e.message}")
bind.resultText.text = "${e.message}"

}
}
} // end of class Prediction

CHAPTER 6
IMPLEMENTATION AND RESULT

6.1 MODULES

USER AUTHENTICATION MODULE

• Functionality: This module handles user sign-up, login, and logout functionalities. It
ensures that only authorized users can access the system and make predictions.
• Components:
o Sign Up: Allows users to create an account by providing necessary details
such as username, password, and email.
o Login: Enables users to log in with their credentials.
o Logout: Ends the user's session.

FIG 12: USER AUTHENTICATION MODULE

USER PROFILE MANAGEMENT MODULE

• Functionality: Allows the user to manage their profile, update personal information,
and view previous interactions or predictions.
• Components:
o View Profile: Displays user information.
o Update Profile: Enables users to update personal information like email,
password, etc.

DATA COLLECTION MODULE

• Functionality: Collects the necessary satellite imagery data for aircraft detection.
• Components:
o Satellite Image Data: Fetches satellite imagery either from publicly available
sources or specific satellite data providers.
o Data Preprocessing: Ensures that the data is in a format suitable for model
training.

FIG 13: DATA COLLECTION MODULE

DATA PREPROCESSING AND AUGMENTATION MODULE

• Functionality: Preprocesses the satellite data by resizing images, normalizing pixel


values, and augmenting the data for model training.
• Components:
o Image Resizing: Ensures that images are resized to a consistent size (e.g.,
224x224) for model input.
o Data Normalization: Scales pixel values to a specific range.
o Data Augmentation: Applies transformations like rotation, flipping, and
zooming to increase the dataset's diversity (a minimal sketch of these steps
follows this list).
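The following is a minimal sketch of these preprocessing steps, assuming Pillow and
NumPy are available; the report does not show the exact pipeline, so the augmentation
parameters here are illustrative.

# Illustrative preprocessing and augmentation sketch (Pillow + NumPy assumed)
import random
import numpy as np
from PIL import Image

def preprocess(path, size=(224, 224), augment=True):
    image = Image.open(path).convert("RGB")
    image = image.resize(size)  # resize to a consistent model input size
    if augment:
        if random.random() < 0.5:
            image = image.transpose(Image.FLIP_LEFT_RIGHT)  # random horizontal flip
        image = image.rotate(random.uniform(-15, 15))  # small random rotation
    return np.asarray(image, dtype=np.float32) / 255.0  # normalize pixels to [0, 1]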

MODEL BUILDING AND TRAINING MODULE

• Functionality: This module is responsible for creating and training the deep learning
model (R-CNN) for aircraft detection.
• Components:
o R-CNN Architecture: Implements Region-based Convolutional Neural
Networks for detecting objects (aircraft) in images.
o Training: Trains the model on a labeled dataset of satellite images containing
aircraft.
o Model Evaluation: Evaluates the model's accuracy using validation data and
adjusts parameters to optimize performance (an illustrative training sketch
follows this list).
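The report's views code loads an Ultralytics YOLO checkpoint (app/models/best.pt), so a
training run along the following lines is plausible; the pretrained weights file and the
dataset configuration name are placeholders, and the R-CNN variant described in the text
is not shown in the report's source.

# Illustrative training sketch with the Ultralytics API used elsewhere in this report;
# "aircraft.yaml" is a placeholder dataset configuration (image paths + class names).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from a small pretrained checkpoint
model.train(data="aircraft.yaml", epochs=50, imgsz=640)  # fine-tune on labeled satellite images
metrics = model.val()  # evaluate on the validation split
print(metrics)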

MODEL SAVING AND DEPLOYMENT MODULE

• Functionality: Once the model is trained, this module saves the model and prepares it
for deployment in the prediction phase.
• Components:
o Save Model: Stores the trained model to disk.
o Model Deployment: Deploys the model to a production environment, enabling
real-time predictions (an illustrative export sketch follows this list).
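As one way to realize this module, the Ultralytics API can load and export the trained
weights; the path below is the library's default output location and the export format is
an assumption, not something the report specifies.

# Illustrative save/export sketch (Ultralytics API; path and format are assumptions)
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # default location of the trained weights
model.export(format="onnx")  # export a deployable copy, e.g. ONNX
# The report's Django app simply loads the .pt checkpoint from app/models/best.pt.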

PREDICTION MODULE

• Functionality: This module handles the prediction process by taking an uploaded


satellite image, running it through the model, and providing aircraft detection results.
• Components:
o Input Image: Allows the user to upload satellite images for prediction.
o Run Model: Executes the trained R-CNN model to detect aircraft in the image.
o Bounding Box Detection: Marks the detected aircraft in the image using
bounding boxes.
o Display Results: Displays the results to the user, highlighting detected aircraft.

FIG 14: PREDICTION MODULE

API INTEGRATION MODULE

• Functionality: Provides API endpoints for interacting with the system. This allows the
front-end to interact with the back-end, such as submitting images for prediction and
retrieving results.
• Components:
o Prediction API: Handles user requests for aircraft detection.
o Model API: Runs the model in response to incoming requests and returns
detection results to the user.

FIG 15: API INTEGRATION MODULE

RESULT VISUALIZATION AND USER INTERACTION MODULE

• Functionality: After the model makes a prediction, this module is responsible for
displaying the results in an easy-to-understand format, such as showing the bounding
boxes on the detected aircraft.
• Components:
o Bounding Box: Visualizes the detected aircraft by drawing bounding boxes
around the identified objects in the image.
o Prediction Feedback: Provides feedback on the accuracy and confidence level
of the detection (an illustrative drawing sketch follows the figure below).

FIG 16: RESULT VISUALIZATION AND USER INTERACTION MODULE
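A minimal sketch of drawing bounding boxes and confidence scores on an input image,
assuming the same Ultralytics results object produced in the views code; the file names
here are placeholders.

# Illustrative visualization sketch (Pillow + Ultralytics assumed; file names are placeholders)
from PIL import Image, ImageDraw
from ultralytics import YOLO

model = YOLO("app/models/best.pt")
image = Image.open("satellite_image.png").convert("RGB")
result = model(image)[0]  # detections for the single input image

draw = ImageDraw.Draw(image)
for box in result.boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # box corners in pixel coordinates
    label = result.names[int(box.cls[0])]  # predicted class name
    conf = float(box.conf[0])  # confidence score
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
    draw.text((x1, max(0, y1 - 12)), f"{label} {conf:.2f}", fill="red")
image.save("detections.png")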

SECURITY AND ACCESS CONTROL MODULE

• Functionality: Ensures that all user data and predictions are secure and that only
authorized users can access sensitive features of the system.
• Components:
o Authentication: Ensures that only authorized users can log in and use the
system.
o Data Encryption: Encrypts sensitive data such as user credentials and
predictions (an illustrative hashing sketch follows this list).
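The report does not specify its encryption scheme; as one standard-library possibility on
the server side, credentials could be stored as salted PBKDF2 hashes rather than plain
text. This is an assumption for illustration, not the project's code.

# Illustrative password-hashing sketch (Python standard library only; an assumption)
import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison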

LOGGING AND MONITORING MODULE

• Functionality: Monitors system activity and logs significant events like model
performance, user actions, and system errors.
• Components:
o Activity Logging: Logs user actions such as login/logout and prediction
requests.
o Model Performance Logging: Tracks the performance of the model during
training and testing.
o Error Monitoring: Records errors and issues to help with debugging (an
illustrative logging setup follows the figure below).

FIG 17: LOGGING AND MONITORING MODULE
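A minimal logging setup covering these components could look like the following sketch;
the file name, format, and messages are assumptions for illustration.

# Illustrative logging setup (Python standard library; names are placeholders)
import logging

logging.basicConfig(
    filename="detection.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("aircraft_detection")

logger.info("User %s logged in", "demo_user")  # activity logging
logger.info("Prediction request served; class=%s", "fighter")  # model activity
logger.error("Model inference failed: %s", "connection timeout")  # error monitoring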

CHAPTER 7
SYSTEM STUDY AND TESTING

7.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out to ensure that the proposed system is
not a burden to the company. For feasibility analysis, some understanding of the major
requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMIC FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMIC FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited, so the expenditures must be justified. The developed system is well
within budget, which was achieved because most of the technologies used are freely
available; only the customized products had to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed
system must have modest requirements, as only minimal or no changes are required for
implementing this system.

SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened
by the system, but must accept it as a necessity. The level of acceptance by the users depends
solely on the methods employed to educate users about the system and to make them familiar
with it. Their confidence must be raised so that they are able to offer constructive criticism,
which is welcomed, as they are the final users of the system.

SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There are various types of
tests; each test type addresses a specific testing requirement.

7.2 TYPES OF TEST & TEST CASES

UNIT TESTING

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches and
internal code flows should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at the component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results.
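As an example, a unit test for the project's prediction view could be sketched with Django's
test client as follows. The /upload/ route is an assumption based on the Android client's
@POST("upload/") annotation, and the test exercises the real model, so it would be slow.

# Illustrative Django unit test for the prediction view (route is an assumption)
import base64
import io
from django.test import Client, TestCase
from PIL import Image

class PredictionViewTest(TestCase):
    def test_post_returns_prediction_json(self):
        # Build a small in-memory PNG and base64-encode it, as the app does
        buffer = io.BytesIO()
        Image.new("RGB", (64, 64)).save(buffer, format="PNG")
        encoded = base64.b64encode(buffer.getvalue()).decode("ascii")
        response = Client().post("/upload/", {"image": encoded})
        self.assertEqual(response.status_code, 200)
        self.assertIn("output", response.json())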

INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine whether they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components
is correct and consistent. Integration testing is specifically aimed at exposing the problems
that arise from the combination of components.

FUNCTIONAL TEST
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centred on the following items:

Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

TABLE 1: FUNCTIONAL TEST

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identified business process
flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.

SYSTEM TEST
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.

WHITE BOX TESTING


White Box Testing is testing in which the software tester has knowledge of the inner
workings, structure and language of the software, or at least of its purpose. It is used
to test areas that cannot be reached from a black-box level.

BLACK BOX TESTING


Black Box Testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black box tests, like most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot
"see" into it. The test provides inputs and responds to outputs without considering how the
software works.

UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of the software
lifecycle, although it is not uncommon for coding and unit testing to be conducted as two
distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.

INTEGRATION TESTING
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g.,
components in a software system or – one step up – software applications at the company
level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

ACCEPTANCE TESTING
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.

TESTING CASES

Test case ID: #CVD001
Test Scenario: Authenticate a successful signup with user data.
Test Steps: 1) Navigate to the signup page. 2) Enter valid user data. 3) Click the signup button.
Prerequisites: User data.
Test Data: Username, password, mobile, email, location.
Expected Result: When the user submits the data, the data should be stored in the database
successfully.
Actual Result: As expected, the user data is stored in the database.
Test Status: Pass

Test case ID: #CVD002
Test Scenario: Authenticate a successful login with user data.
Test Steps: 1) Navigate to the login page. 2) Enter a valid username and password. 3) Click the
login button.
Prerequisites: Username, password.
Test Data: Username, password.
Expected Result: When the user submits the data, the user should be authenticated successfully.
Actual Result: As expected, the user is authenticated.
Test Status: Pass

TABLE 2: TESTING CASES

CHAPTER 8
CONCLUSION

The "Satellite-Based Aircraft Detection Using R-CNN" project presents a robust solution for
detecting and classifying aircraft in real-time using satellite imagery and deep learning
techniques. By utilizing Region-based Convolutional Neural Networks (R-CNN), the system
achieves high accuracy in identifying aircraft, making it a valuable tool for various sectors such
as air traffic control, security, and disaster management. The system streamlines the process,
from data collection and preprocessing to model training, validation, and deployment, making
aircraft detection more efficient and scalable. The user-friendly interface allows users to easily
interact with the system, managing their profiles and obtaining real-time predictions.
Additionally, the backend ensures seamless operation by processing satellite data, running the
model, and delivering results via an API. This innovative solution enhances operational
efficiency, reduces human error, and offers a more automated and reliable method of aircraft
detection, addressing the limitations of traditional detection systems. The system can be
effectively used across various industries, providing improved situational awareness, security,
and response capabilities.

CHAPTER 9
FUTURE ENHANCEMENT

While the "Satellite-Based Aircraft Detection Using R-CNN" project presents a robust
solution, there are several opportunities for future enhancement to further improve its
performance and applicability. One possible enhancement is the integration of multi-modal
data, such as weather patterns, temporal changes in satellite imagery, and environmental
conditions, which could help improve the accuracy of aircraft detection, especially in
challenging scenarios like adverse weather. Additionally, the system could benefit from real-
time processing capabilities, where live satellite feeds are used to detect aircraft as they appear
in the imagery, reducing the delay in detection and improving its utility for real-time air traffic
management. Another area of improvement is the expansion of the model’s training dataset to
include more diverse aircraft types, sizes, and perspectives to enhance detection accuracy in
various settings. The introduction of edge computing could also be explored to perform
predictions on the satellite data in real-time without relying on a centralized server, reducing
latency and bandwidth consumption. Furthermore, continuous model retraining and updates
based on new satellite data would help in maintaining the system's effectiveness over time.
These advancements will ensure that the system remains scalable, adaptable, and relevant to
emerging challenges in the aerospace and defense industries.

CHAPTER 10
REFERENCES

1. Faisal Azam, Akash Rizvi, Wazir Zada Khan, Mohammad Y. Aalsalem, Heejung Yu,
and Yousaf Bin Zikria (2021). Aircraft Classification Based on PCA and Feature
Fusion Techniques in Convolutional Neural Network. IEEE Access.

2. Ru Luo, Lifu Chen, Jin Xing, Zhihui Yuan, Siyu Tan, Xingmin Cai and Jielan Wang
(2021). A Fast Aircraft Detection Method for SAR Images Based on Efficient
Bidirectional Path Aggregated Attention Network. Multidisciplinary Digital
Publishing Institute (MDPI).

3. Ting Wang, Changqing Cao, Xiaodong Zeng, Zhejun Feng, Jingshi Shen, Weiming Li,
Bo Wang, Yuedong Zhou and Xu Yan (2020). An Aircraft Object Detection Algorithm
Based on Small Sample in Optical Remote Sensing Image. Multidisciplinary Digital
Publishing Institute (MDPI).

4. Qifan Wu, Daqiang Feng, Changqing Cao, Xiaodong Zeng, Zhejun Feng, Jin Wu, and
Ziqiang Huang (2021). Improved Mask R-CNN for Aircraft Detection in Remote
Sensing Images. Multidisciplinary Digital Publishing Institute (MDPI).

5. Xuzhao Jiang and Yonghong Wu (2023). Remote Sensing Object Detection Based on
Convolution and Swin Transformer. IEEE Access.

6. Shun Luo, Juan Yu, Yunjiang Xi, and Xiao Liao (2022). Aircraft Target Detection in
Remote Sensing Images Based on Improved YOLOv5. IEEE Access.

7. Jielan Wang, Hongguang Xiao, Lifu Chen, Jin Xing, Zhouhao Pan, Ru Luo and
Xingmin Cai (2021). Integrating Weighted Feature Fusion and the Spatial Attention
Module with Convolutional Neural Networks for Automatic Aircraft Detection from
SAR Images. Multidisciplinary Digital Publishing Institute (MDPI).

8. Fengcheng Ji, Dongping Ming, Beichen Zeng, Jiawei Yu, Yuanzhao Qing, Tongyao
Du and Xinyi Zhang (2021). Aircraft Detection in High Spatial Resolution Remote
Sensing Images Combining Multi-Angle Features Driven and Majority Voting CNN.
Multidisciplinary Digital Publishing Institute (MDPI).

9. Arwin Datumaya Wahyudi Sumari, Dimas Eka Adinandra, Arie Rachmad Syulistyo,
Sandra Loverencic (2022). Intelligent Military Aircraft Recognition and Identification
to Support Military Personnel on the Air Observation Operation. International Journal
on Advanced Science Engineering Information Technology.

10. P. Ajay Kumar Goud, G. Mohit Raj, K. Rahul, and A. Vijaya Lakshmi (2023). Military
Aircraft Detection Using YOLOv5. In Intelligent Communication Technologies and
Virtual Mobile Networks. Singapore: Springer Nature Singapore.

52 | P a g e
11. Zhiguo Liu, Yuan Gao, and Qianqian Du (2023). YOLO-Class: Detection and
Classification of Aircraft Targets in Satellite Remote Sensing Image Based on
YOLO-Extract. IEEE Access.

12. Ling Lei, Yuwei She, Xiaoli Feng, Rui Xiong, Shan Liu (2020). Aircraft Detection of
Remote Sensing Images Based on Faster R-CNN and YOLOv3. International
Conference on Culture-oriented Science & Technology (ICCST).

13. Chao Tao, Yihua Tan, Huajie Cai, and Jinwen Tian (2010). Airport Detection from
Large IKONOS Images Using Clustered SIFT Keypoints and Region Information.
IEEE Geoscience and Remote Sensing Letters.

14. Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic (2014). Learning and
Transferring Mid-Level Image Representations Using Convolutional Neural Networks.
Computer Vision Foundation.

15. Ugur Alganci, Mehmet Soydas, and Elif Sertel (2020). Comparative Research on
Deep Learning Approaches for Airplane Detection from Very High-Resolution
Satellite Images. ResearchGate.

16. Gong Cheng, Junwei Han, Xiaoqiang Lu (2023). Remote Sensing Image Scene
Classification: Benchmark and State of the Art. IEEE Access.

17. Stefanos Georganos, Tais Grippa, Sabine Vanhuysse, Moritz Lennert, Michal
Shimoni, Stamatis Kalogirou & Eleonore Wolff (2017). Less is more: optimizing
classification performance through feature selection in a very high-resolution remote
sensing object-based urban application. GIScience & Remote Sensing.

18. A. Nisthana Parveen, Hannah Inbarani, E. N. Sathishkumar (2012). Performance
analysis of unsupervised feature selection methods. ResearchGate.

19. Lijun Zhao, Ping Tang, Lianzhi Huo (2016). Feature significance-based multi-bag-of-
visual-words model for remote sensing image scene classification. ResearchGate.

20. Yang Li, Kun Fu, Hao Sun and Xian Sun (2017). An Aircraft Detection Framework
Based on Reinforcement Learning and Convolutional Neural Networks in Remote
Sensing Images. Remote Sensing, MDPI.

21. Yuhang Zhang, Hao Sun, Jiawei Zuo, Hongqi Wang, Guangluan Xu (2018). Aircraft
Type Recognition in Remote Sensing Images Based on Feature Learning with
Conditional Generative Adversarial Networks. MDPI.

22. Ferhat Ucar, Besir Dandil, Fikret Ata (2020). Aircraft Detection System Based on
Regions with Convolutional Neural Networks. International Journal of Intelligent
Systems and Applications in Engineering (IJISAE).

23. Guoxiong Hu, Zhong Yang, Jiaming Han, Li Huang, Jun Gong and Naixue Xiong.
Aircraft detection in remote sensing images based on saliency and convolution neural
network.

24. Sathis Kumar T, Assistant Professor, Department of CSE, Saranathan College of
Engineering, Trichy, Tamil Nadu, India (2021). Journal of Emerging Technologies
and Innovative Research (JETIR).

25. Jun-Wei Hsieh, Jian-Ming Chen, Chi-Hung Chuang, and Kao-Chin Fan (2023).
Aircraft type recognition in satellite images. ResearchGate.

26. Berkay Yaban, Elif Sertel, Ugur Alganci, Istanbul Technical University (2022).

27. Julie Imbert, Gohar Dashyan, Alex Goupilleau, Tugdual Ceillier, Marie-Caroline
Corbineau. Improving Performance of Aircraft Detection in Satellite Imagery While
Limiting the Labelling Effort: Hybrid Active Learning. International Symposium on
Geoscience and Remote Sensing (IGARSS), Jul 2021, Brussels, Belgium.

28. Attard, M.R.G.; Phillips, R.A.; Bowler, E.; Clarke, P.J.; Cubaynes, H.; Johnston,
D.W.; Fretwell, P.T. Review of Satellite Remote Sensing and Unoccupied Aircraft
Systems for Counting Wildlife on Land. Remote Sens. 2024.

29. Peder Heiselberg and Kristian A. Sorensen. Department for Geodesy and Earth
Observation, National Space Institute, Technical University of Denmark, Kongens
Lyngby, Denmark; Center for Security, National Space Institute, Technical
University of Denmark, Kongens Lyngby, Denmark.

30. Chaitanya Malladi. University advisor: Dr. Huseyin Kusetogullari, Department of
Computer Science, Faculty of Computing, Blekinge Institute of Technology, SE-371 79
Karlskrona, Sweden.

