
AI ENABLED FIRE-DETECTING DRONE USING DEEP

LEARNING MODEL

A PROJECT REPORT

Submitted by

JAFFAR NAWAS.M 820420106018


JAYAKRISHNA.M 820420106021
MOHAMED FARHAN.N M 820420106029
SURENDHAR.P 820420106308

In partial fulfilment for the award of the degree

Of

BACHELOR OF ENGINEERING
IN

ELECTRONICS AND COMMUNICATION ENGINEERING

ANJALAI AMMAL MAHALINGAM ENGINEERING COLLEGE

KOVILVENNI 614 403

ANNA UNIVERSITY: CHENNAI 600 025

MAY 2024

1
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE

Certified that this project report “AI ENABLED FIRE-DETECTING DRONE


USING DEEP LEARNING MODEL” is the bonafide work of “JAFFAR
NAWAS.M [820420106018], JAYAKRISHNA.M [820420106021], MOHAMED
FARHAN.N M [820420106029], and SURENDHAR.P [820420106308]” who
carried out the project work under my supervision.

SIGNATURE SIGNATURE

Dr. A. BHAVANI SANKAR, M.E., Ph.D., Mr. G. MURUGESAN, M.E.,

HEAD OF THE DEPARTMENT SUPERVISOR

PROFESSOR & HOD, ASSOCIATE PROFESSOR,

DEPARTMENT OF ECE, DEPARTMENT OF ECE,

Anjalai Ammal Mahalingam Anjalai Ammal Mahalingam

Engineering College, Engineering College,

Kovilvenni -614 403, Kovilvenni -614 403,

Thiruvarur (DT). Thiruvarur (DT).

EXAMINED ON: ………………………….

INTERNAL EXAMINER EXTERNAL EXAMINER

2
ACKNOWLEDGEMENT

Behind every achievement lies an unfathomable sea of gratitude to those who
actuated it; without them it would never have come into existence. To them we lay
this word of gratitude, imprinted within us.

We wish to express our sincere thanks to our respected Principal, Dr. S.N. Ramaswamy,
M.E., Ph.D., Anjalai Ammal Mahalingam Engineering College, for all the
blessings and help provided during the period of the project work.

We wish to express our sincere thanks to Dr. A. Bhavani Sankar, M.E., Ph.D.,
Head of the Department of Electronics and Communication Engineering, for the
continuous help over the period of the project work.

We wish to express our sincere thanks to our guide Mr. G. Murugesan, M.E.,
Associate Professor, Department of Electronics and Communication
Engineering, for the continuous help, creative ideas, and sincere guidance
throughout the first phase of our project work.

We would like to extend our warmest thanks to Mr. T. Rajappa, Lab Technician,
Department of ECE, for helping us in this venture. Unflinching support and
encouragement from the members of our families, friends, and faculty of Anjalai
Ammal Mahalingam Engineering College helped us a long way in completing our
project work. We must thank them all from the depths of our hearts.

3
ABSTRACT

The project aims to detect fire using computer vision based image processing
technology that alerts people through early detection of fire. Many automatic
fire alarm systems already exist, such as sensor-based methods, but these have
limitations: they are designed to sense fire through smoke and only over limited
areas. This project is proposed to reduce those limitations and take advantage
of newer technology. The project is implemented using the PyCharm IDE with a
drone camera connected as hardware. The drone camera is taken as the input
source; it captures the video feed from the surroundings and feeds it into the
system for analysis. The entire code is written in pure Python using the OpenCV
library for image processing. The theoretical part emphasizes computer vision
and covers machine learning, image processing, the colour model, and the
working algorithm of the project for detecting fire. The project gives a better
understanding of object detection through the combination of drones and deep
learning, and of the use of these technologies in different forms.

KEYWORDS: Image processing, drone camera, computer vision, machine learning,
OpenCV, object detection

i
TABLE OF CONTENTS

CHAPTER NO. TITLE PAGE NO.

ABSTRACT i
LIST OF FIGURES iv
1. INTRODUCTION
1.1 Overview 1
1.2 Scope and Objective 2
1.3 Domain Overview 2
1.3.1 Machine Learning Model 3
1.3.1.1 Gathering Data 3
1.3.2 Data Pre-processing 4
1.3.3 Supervised Learning 6
1.3.4 Regression 8
1.3.5 Unsupervised Learning 9
1.3.6 Training and Testing the Model on Data 10
1.3.7 Validation Set 11

2. LITERATURE SURVEY 14
2.1 Literary review 14
2.2 Analysis of literature 17

3. SYSTEM ANALYSIS 19
3.1 Existing System 19
3.2 Problem Statement 19
3.3 Proposed System 19
ii
3.4 System Design 23
3.5 Haar Like Features 24
3.6 Working Principle 28
3.7 Hardware Environment 29
3.8 System Requirements 49

4. APPENDIX 51
4.1 Implemented code 51
4.2 Results and discussion 54

5. CONCLUSION AND FUTURE WORK 58


5.1 Conclusion 58
5.2 Limitation and Future Work 58

6. REFERENCE PAPERS 59

iii
LIST OF FIGURES

FIGURE No. FIGURE NAME PAGE No.

1.1 Supervised Architecture 6


1.2 Unsupervised Architecture 6
1.3 RGB Colors 7
1.4 Prediction Of IQ 8
1.5 Original Clustered Data 9
1.6 Clustered Data 9
1.7 Classifications Of Machine Learning 10
1.8 Training Set 11
1.9 Validation Set 11
1.10 Confusion Matrix 12
3.1 Pictorial Overview of Proposed Methodology 20
3.2 Block Diagram 21
3.3 Flow Diagram 22
3.4 Haar Features 25
3.5 Principle of Aerodynamic Effect 31
3.6 Quadcopter Dynamics 35
3.7 Controls of Quadcopter 36
3.8 Kinematics of Quadcopter 39
3.9 Parts of Drone 42
3.10 Drone Kit 46
3.11 Applications of Drone 48
4.2.1 Drone Setup with Controller 54
4.2.2 Surveillance Period 55

iv
4.2.3 ESP 32 wifi camera 55
4.2.4 Fire detection using web cam 55
4.2.5 Real time detection of fire 55

v
CHAPTER 1

INTRODUCTION

1.1 OVERVIEW

Computer Vision based fire detection using image processing has the
potential to be useful in conditions in which conventional methods cannot be
adopted. The fire detection algorithm uses visual characteristics of fires like
brightness, color, spectral texture, spectral flicker, and edge trembling to
discriminate them from other visible stimuli. There are various fire detection
techniques such as infrared sensor, a thermal detector, smoke detector, flame
detector, and optical smoke detector. These methods are not always reliable,
as they do not detect the fire itself but one or more phenomena resulting
from fire, such as smoke, heat, infrared or ultraviolet radiation, or gas,
which could be produced in other ways; hence they produce many false alarms.
With the help of computer vision and image processing techniques, it is
possible to get better results than conventional systems because images can
provide more reliable information.

In recent times, research on detection of flame and smoke using


surveillance cameras with machine vision has gained momentum. The image
processing approach involves the extraction of the smoke-plume or flame
from the background by using frame difference technologies. In the case of
the segmentation of fire features, color processing scores over gray-scale
processing. Color processing can avoid the generation of false alarms due to
variations in the lighting conditions, e.g. natural background illumination,
better than gray-scale processing. Further, a video camera is a volume sensor,
and potentially monitors a larger area. The traditional point sensor looks at a
point in space. Since the point sensor may not be affected by smoke or flame,
fire would be undetected. However, vision-based flame and smoke detection
1
still has great technical challenges, since flame and smoke are non-rigid
objects that lack primitive image features and exhibit high variability.

1.2 SCOPE AND OBJECTIVE

The proposed project can be used to detect objects in various areas such as
parking lots, road traffic congestion, agriculture, and farming; object
detection has many uses. The proposed software product uses the Haar cascade
classifier, which is very popular for object detection in images and other
video feeds. In future, this approach can be extended to other disasters as well.

1.3 DOMAIN OVERVIEW

Introduction:
In this section, we will discuss the workflow of a machine learning project;
this includes all the steps required to build a proper machine learning project
from scratch. We will also go over data pre-processing, data cleaning, feature
exploration, and feature engineering, and show the impact they have on machine
learning model performance. We will also cover a couple of the pre-modelling
steps that can help to improve model performance.

Python libraries needed to achieve this task:

1. NumPy
2. Pandas
3. scikit-learn
4. Matplotlib

2
Understanding the machine learning workflow

We can define the machine learning workflow in five stages.

1. Gathering data

2. Data pre-processing

3. Researching the model that will be best for the type of data

4. Training and testing the model

5. Evaluation

1.3.1 Machine Learning Model

The machine learning model is nothing but a piece of code; an engineer or
data scientist makes it smart through training with data. So, if you give
garbage to the model, you will get garbage in return, i.e. the trained model
will provide false or wrong predictions.

1.3.1.1 Gathering Data:

The process of gathering data depends on the type of project we desire to make. If
we want to make an ML project that uses real-time data, we can build an IoT
system that uses data from different sensors. The data set can be collected from
various sources such as files, databases, and sensors, but the collected data
cannot be used directly for the analysis process, as there might be a lot of
missing data, extremely large values, unorganized text data, or noisy data.
Therefore, to solve this problem, data preparation is done. We can also use some
free data sets which are present on the internet. Kaggle and the UCI Machine
Learning Repository are the repositories used most often for making machine
learning models. Kaggle is one of the most visited websites that is used for
3
practicing machine learning algorithms; it also hosts competitions in which
people can participate and test their knowledge of machine learning. A short
sketch of loading such a data set follows.
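As a minimal sketch of this step, the snippet below loads a downloaded data set
with pandas and checks it for missing values; the file name "fire_dataset.csv"
is only a placeholder for whichever Kaggle or UCI file is actually used.

import pandas as pd

# Load a data set downloaded from Kaggle or the UCI repository
# ("fire_dataset.csv" is a placeholder file name).
df = pd.read_csv("fire_dataset.csv")

print(df.shape)         # number of rows and columns
print(df.head())        # first five records
print(df.isna().sum())  # missing values per column, to plan pre-processing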

1.3.2 Data pre-processing:

Data pre-processing is one of the most important steps in machine learning. It
is the step that helps most in building machine learning models accurately. In
machine learning there is an 80/20 rule: every data scientist should spend 80%
of their time on data pre-processing and 20% on actually performing the
analysis. Data pre-processing is the process of cleaning raw data, i.e. the data
collected from the real world is converted into a clean data set. In other
words, whenever data is gathered from different sources it is collected in a
raw format that is not feasible for analysis. Therefore, certain steps are
executed to convert the data into a small clean data set; this part of the
process is called data pre-processing.

As we know, data pre-processing is the process of cleaning raw data into clean
data that can be used to train the model. We therefore definitely need data
pre-processing to achieve good results from the applied model in machine
learning and deep learning projects. Most real-world data is messy; some of
these types of data are:

● Missing data: Missing data can be found when data is not created
continuously or due to technical issues in the application (IoT system).
● Noisy data: This type of data is also called outliers; it can occur
due to human errors (humans manually gathering the data) or technical
problems with the device at the time of the collection
of data.
4
● Inconsistent data: This type of data might be collected due to
human errors (mistakes with names or values) or duplication
of data.

How can data pre-processing be performed?

These are some of the basic pre-processing techniques that can be used
to convert raw data (a short pandas sketch follows this list).

● Conversion of data: As we know, machine learning models can only
handle numeric features, so categorical and ordinal data must
somehow be converted into numeric features.
● Ignoring the missing values: Whenever we encounter missing data
in the data set, we can remove the row or column of data
depending on our need. This method is known to be efficient, but
it shouldn't be performed if there are a lot of missing values
in the dataset.
● Filling the missing values: Whenever we encounter missing data in
the data set, we can fill it in manually; most commonly the mean,
median, or highest-frequency value is used.
● Machine learning: If we have some missing data, we can predict
what data should be present at the empty position by using the
existing data.
● Outliers detection: There may be some erroneous data in our data
set that deviates drastically from the other observations in the
data set. [Example: human weight = 800 kg]
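The following is a small pandas sketch of the techniques above; the column
names "temperature" and "weight" are assumed for illustration only.

import pandas as pd

df = pd.read_csv("fire_dataset.csv")  # placeholder data set

# Ignoring the missing values: drop rows that contain any NaN
df_dropped = df.dropna()

# Filling the missing values: replace NaNs in a numeric column with its mean
df["temperature"] = df["temperature"].fillna(df["temperature"].mean())

# Outliers detection: flag values that deviate drastically from the rest,
# e.g. a human weight of 800 kg
mean, std = df["weight"].mean(), df["weight"].std()
outliers = df[(df["weight"] - mean).abs() > 3 * std]
print(outliers)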
5
Searching for the suitable model:

Our main goal is to train the best-performing model possible using the
pre-processed data. Machine learning is divided into two parts, namely
supervised learning and unsupervised learning, and these methods will help
us find the best model.

Fig 1.1 Supervised Learning Fig 1.2 Unsupervised Learning

1.3.3 Supervised Learning:

In supervised learning, an AI system is presented with data which is labelled,
which means that each data point is tagged with the correct label.

Supervised learning is categorized into two subcategories, which are
"Classification" and "Regression".

Classification:

A classification problem is when the target variable is categorical (i.e. the
output can be classified into classes: it belongs to either class A or class B
or something else).

6
A classification problem is when the output variable is a category, such as
"red" or "blue", "disease" or "no disease", or "spam" or "not spam".

Fig 1.3 RGB Colors

As shown in the above representation, we have two classes plotted on the
graph, i.e. red and blue, which can be represented as 'setosa flower' and
'versicolor flower'. We can imagine the X-axis as the 'Sepal Width' and the
Y-axis as the 'Sepal Length', and we try to create the best-fit line that
separates both classes of flowers.

These are some of the most used classification algorithms (a brief scikit-learn sketch follows the list).

● K-Nearest Neighbor

● Naive Bayes

● Decision Trees/Random Forest

● Support Vector Machine

● Logistic Regression
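As a brief sketch of the flower example above, and assuming scikit-learn is
available, the snippet below trains a K-Nearest Neighbor classifier on the
Iris data set (which contains the setosa and versicolor classes).

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Iris data: sepal/petal measurements labelled setosa, versicolor, virginica
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)         # learn from the labelled examples
print(clf.score(X_test, y_test))  # accuracy on unseen data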

7
1.3.4 Regression:

Regression is a technique for investigating the relationship between


independent variables or features and a dependent variable or outcome. It's
used as a method for predictive modelling in machine learning, in which an
algorithm is used to predict continuous outcomes.

Fig 1.4 Prediction of IQ

As shown in the above representation, we can imagine that the graph’s X-axis
is the ‘Test scores’ and the Y-axis represents ‘IQ’. So we try to create the best
fit line in the given graph so that we can use that line to predict any
approximate IQ that isn’t present in the given data.

These are some of the most used regression algorithms (a brief sketch follows the list).

● Linear Regression

● Support Vector Regression

● Decision Tree/Random Forest

8
● Gaussian Processes Regression

● Ensemble Methods
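A minimal sketch of the IQ example above follows; the test scores and IQ
values are invented for illustration, not taken from any real data set.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: test scores (X) and measured IQ (y)
X = np.array([[45], [60], [72], [80], [95]])
y = np.array([92, 101, 108, 113, 124])

reg = LinearRegression().fit(X, y)  # fit the best-fit line

# Predict an approximate IQ for a score not present in the given data
print(reg.predict([[85]]))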

1.3.5 Unsupervised Learning:

Unsupervised learning is categorized into two subcategories, which are
"Clustering" and "Association".

Clustering:

A set of inputs is to be divided into groups. Unlike in classification, the
groups are not known beforehand, making this typically an unsupervised task.

Fig 1.5 Original Clustered Data Fig 1.6 Clustered Data

Methods used for clustering (a brief sketch follows the list) are:

• Gaussian mixtures

• K-Means Clustering

9
• Hierarchical Clustering

• Spectral Clustering
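A minimal K-Means sketch follows, assuming scikit-learn; the six points are
invented so that two groups emerge.

import numpy as np
from sklearn.cluster import KMeans

# Unlabelled points; the groups are not known beforehand
X = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.2],
              [8.0, 8.5], [8.3, 7.9], [7.8, 8.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assigned to each point
print(kmeans.cluster_centers_)  # centre of each discovered group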

Overview of models under categories:

Fig 1.7 Classifications of Machine Learning

1.3.6 Training and testing the model on data:

For training a model we initially split the data into three sections:
'training data', 'validation data', and 'testing data'. You train the
classifier using the 'training data set', tune the parameters using the
'validation set', and then test the performance of your classifier on the
unseen 'test data set'. An important point to note is that during training
only the training and/or validation set is available to the classifier. The
test data set must not be used during training of the classifier; it only
becomes available when testing the classifier. A sketch of this split follows.
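A minimal sketch of this three-way split follows, using scikit-learn's
train_test_split twice; the 60/20/20 proportions are an assumption for
illustration.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # any feature/label arrays work here

# Hold out the test set first; it must stay unseen during training
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Split the remainder into training and validation sets (60/20/20 overall)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))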

10
Fig 1.8 Training set

Training set:

The training set is the material through which the computer learns how to
process information. Machine learning uses algorithms to perform the
training part. The training set is the set of data used for learning, that
is, for fitting the parameters of the classifier.

1.3.7 Validation set:

Cross-validation is primarily used in applied machine learning to estimate the


skill of a machine learning model on unseen data. A set of unseen data is used
from the training data to tune the parameters of a classifier.

Fig 1.9 Validation Set

11
Once the data is divided into the 3 given segments we can start the training
process.

In a data set, a training set is implemented to build up a model, while a test


(or validation) set is to validate the model built. Data points in the training set
are excluded from the test (validation) set. Usually, a data set is divided into a
training set, a validation set (some people use ‘test set’ instead) in each
iteration, or divided into a training set, a validation set and a test set in each
iteration.

The model uses any one of the models that we had chosen in step 3.
Once the model is trained, we can use the same trained model to predict using
the testing data, i.e. the unseen data. Once this is done we can develop a
confusion matrix, which tells us how well our model is trained. A confusion
matrix has four parameters: 'true positives', 'true negatives', 'false
positives', and 'false negatives'. We prefer to get more values in the
true negatives and true positives to get a more accurate model. The size of
the confusion matrix depends entirely upon the number of classes.

Fig 1.10 Confusion Matrix

True positives: These are cases in which we predicted TRUE and our
predicted output is correct.

12
True negatives: We predicted FALSE and our predicted output is correct.

False positives: We predicted TRUE, but the actual output is FALSE.

False negatives: We predicted FALSE, but the actual output is TRUE.

We can also find out the accuracy of the model using the confusion matrix.

Accuracy = (True Positives + True Negatives) / (Total number of predictions)

i.e. for the above example:

Accuracy = (100 + 50) / 165 = 0.9090 (90.9% accuracy)
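The small sketch below reproduces this calculation; only the 100 true
negatives, 50 true positives, and total of 165 come from the example above,
while the 10/5 split of the errors is assumed for illustration.

import numpy as np

# Hypothetical 2x2 confusion matrix: rows = actual class, columns = predicted
#                     pred NO FIRE  pred FIRE
confusion = np.array([[100, 10],   # actual NO FIRE: TN, FP
                      [5,   50]])  # actual FIRE:    FN, TP

tn, fp = confusion[0]
fn, tp = confusion[1]

accuracy = (tp + tn) / confusion.sum()
print(accuracy)  # (100 + 50) / 165 = 0.909, i.e. 90.9% accuracy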

Evaluation:

Model Evaluation is an integral part of the model development process. It


helps to find the best model that represents our data and how well the chosen
model will work in the future.

To improve the model we might tune the hyper-parameters of the model to
improve the accuracy, and also look at the confusion matrix to try to
increase the number of true positives and true negatives.

Conclusion:

In this section, we have discussed the workflow of a machine learning project,
which gives us a basic idea of how the problem should be tackled.

13
CHAPTER 2
LITERATURE SURVEY

2.1 THE LITERATURE REVIEW

Towards Distributed Event Detection in Wireless Sensor


Networks(IEEE2020)

Norman Dziengel et al. describe distributed event detection in wireless sensor networks


(WSNs) as the process of observing and evaluating an event using multiple sensor
nodes without the help of a base station or other means of central coordination and
processing. Current approaches to event detection in WSNs transmit raw data to an
external entity for evaluation or rely on simplistic pattern recognition schemes. This
implies either high communication overhead or low event detection accuracy,
especially for complex events. In this paper, they present their ongoing
work on a system for distributed event detection that particularly suits the
specific characteristics of WSNs. Adapting traditional pattern recognition
algorithms to highly embedded devices, it uses the distributed sampling of sensor
nodes to optimize the accuracy of the event detection process. Four different
algorithms for distributing, classifying and fusing “fingerprints” of the raw data
sampled on each sensor are proposed and quantitatively evaluated in a
small-scale experiment.

Detecting and monitoring time-related abnormal events using a wireless sensor


network and mobile robot( IEEE, 2019)

YuanYuan Li and Lynne E. Parker presented an anomaly detection system


that is able to detect time-related anomalies by using a wireless sensor network and
a mobile robot. The sensor network uses an unsupervised fuzzy Adaptive
Resonance Theory (ART) neural network to learn and detect intruders in a
previously unknown environment. Upon the detection of an intruder, a mobile robot
travels to the position where the intruder is detected to investigate by using its

14
camera. The wireless sensor network uses a hierarchical communication/learning
structure, where the mobile robot is the root node of the tree. Our fuzzy ART
network is based on Kulakov and Davcev’s implementation [8]. However, we
enhance their work by extending the fuzzy ART neural network with a Markov
model to learn a time series and detect time-related anomalies.
Finally, a mobile robot is employed to verify whether the detected anomalies were
caused by intruders.

A Novel Mechanism for Fire Detection in Subway Transportation Systems


Based on Wireless Sensor Networks(vol. 2019)

Z.-J. Zhang, J.-S. Fu, H.-P. Chiang, and Y.-M. Huang state that fire is a common
and disastrous phenomenon in subway transportation systems because of the closed
environment and large passenger flow. Traditional methods detect and forecast fire
incidents by fusing the data collected by wireless sensor networks and comparing the
fusion result with a threshold. However, this approach has a significant shortcoming:
fusion of data decreases the amount of useful information, since the distribution of
the data is not considered. In this paper, a novel mechanism based on wireless sensor
networks is proposed, which can be used to detect fire in subway
transportation systems. The core algorithm in this mechanism is an alternative to data
fusion in wireless sensor networks. The mechanism uses the quantity information
acquired from the data and also considers the distribution information of the data in
order to predict the occurrence of fire incidents. The simulation results show that fire
incidents can be detected quickly and efficiently.

Forest Fire Smoke Detection Based on Visual Smoke Root and Diffusion
(IEEE2019)

15
The damage caused by forest fire to forestry resources and economy is quite
serious. As one of the most important characteristics of early forest fire, smoke is
widely used as a signal of forest fire. In this paper, we propose a novel forest fire
smoke detection method based on computer vision and diffusion model. Unlike the
video-based methods that usually rely on image feature extraction, we try to find
the shape of smoke that is at the generation stage. To combine vision and diffusion
model together, the basic concept of smoke root is proposed. In the frame
processing stage, no features of fire smoke are extracted (such as texture, color,
frequency information etc.), and continuous frames are only used to extract stable
points in dynamic areas as the smoke root candidate points. In the diffusion model
simulation stage, all smoke root candidate points information is adopted by the
model to generate the simulation smoke. Finally, the match algorithm based on
color, dynamic areas and simulation smoke is implemented to get the final results.
In order to reduce the complexity of computation, we ignored the simulation
process of the smoke details, such as texture and turbulence, and only retained the
contour features in two-dimensional form.

FLUTTER ANALYSIS BASED VIDEO SMOKE DETECTION.(IEEE2011)

Based on the flutter characteristic of smoke, a novel video smoke detection


algorithm is proposed. Firstly, the motion region is detected from video frames.
Then four flutter features of the motion region are extracted over a sliding time
window, including the flutter direction and three types of flutter intensities. Finally,
the neuro-fuzzy inference system is used to detect smoke, in which fuzzy rules and
membership functions are trained according to the valid sample set. Experimental
results show that the new algorithm has excellent performance on Receiver
Operating Characteristic (ROC) curve which is very important for real application.

16
AN EARLY FIRE-DETECTION METHOD BASED ON IMAGE
PROCESSING. (IEEE2004 )

The paper presents an early fire-alarm raising method based on video processing.
The basic idea of the proposed of fire-detection is to adopt a RGB (red, green, blue)
model based chromatic and disorder measurement for extracting fire-pixels and
smoke-pixels. The decision function of fire-pixels is mainly deduced by the
intensity and saturation of R component. The extracted fire-pixels will be verified if
it is a real fire by both dynamics of growth and disorder, and further smoke. Based
on iterative checking on the growing ratio of flames, a fire-alarm is given when the
alarm-raising condition is met. Experimental results show that the developed
technique can achieve fully automatic surveillance of fire accident with a lower
false alarm rate and thus is very attractive for the important military, social security,
commercial applications, and so on, at a general cost.

2.2 ANALYSIS OF THE LITERATURE

A literature survey is a major step in the software development process.
Before developing the tool it is necessary to determine the time factor, the
economy, and the company's strength. Once these things are settled, the next
step is to figure out which operating system and language can be used for
developing the tool. When developers begin building the tool, the programmers
need a lot of external support; this support can be obtained from senior
developers, from books, or from websites. Before developing the tools and the
associated designs it is necessary to determine and survey the time factor,
resource requirements, manpower, economy, and company strength. These
considerations are taken into account prior to building the proposed
17
framework. The major part of the project development sector considers and
fully surveys all the requirements necessary for developing the project. Once
these things are fulfilled and fully surveyed, the next stage is to decide on
the product details of the respective system, for example what kind of
operating system the project would require and what software is needed to
proceed with the subsequent stages, such as developing the tools and the
related activities. Here we have taken general surveys of different authors
and noted down the central points of their work. In this project the
literature survey plays a dominant part in gathering resources from different
areas and all the connected topics that are very valuable for this section.
The best aspect of this is the way it brings things together and helps us
adapt our work to the existing knowledge.

18
CHAPTER 3
SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

The methods and classifiers used in the existing system are:


● Correlation based analysis and transformation methods
● Data Mining Approach to Predict Forest Fires using Meteorological Data
● SVM classifier
● Circle-based Approximation to Forest Fires with Distributed
Wireless Sensor Networks using clustering
● Event Detection in Wireless Sensor Networks for Disaster Management

3.2 PROBLEM STATEMENT:

The main motivation for fire prediction is to provide proper resource
allocation and to help the firefighters of the fire management team in the
best possible way. The main factors of fire are meteorological conditions;
the climatic information is obtained from nearby sensors, which are fused at
the closest meteorological stations. Land with a potentially high fire risk
has many indicators that can be used to form the forecast by closely
evaluating the indications. Every year, fire destroys millions of hectares of
land. These fires have burned vast areas and generate more carbon monoxide
than total vehicle traffic. Monitoring potential danger areas and early
warning of fire can greatly reduce response time, as well as the potential
for damage and firefighting costs.

3.3 PROPOSED SYSTEM:

The objective is to deploy an efficient and cost-effective solution for early wildfire
detection by leveraging aerial surveillance capabilities.
19
The drone will autonomously scan designated areas, identify signs of fire through
real-time image analysis, and promptly notify relevant authorities and communities
via Telegram channels, enabling swift response and effective mitigation measures to
minimize the impact of wildfires on the environment and public safety.


Fig 3.1 Pictorial Overview Of Proposed Methodology

20
ADVANTAGES:

The system can be used in various areas such as parking lots, road traffic
congestion, agriculture, and farming; object detection has many uses.
In future, this approach can be extended to other disasters as well.

BLOCK DIAGRAM:

Fig 3.2 Block Diagram

21
FLOW DIAGRAM:

Fig 3.3 Flow diagram

22
3.4 SYSTEM DESIGN

Modules:

● Detection
● Haar-like Features
● Integral Image
● Training and Learning
● Adaptive Boosting (Ada Boost)
● Cascading

Detection:

The procedure for detecting fire classifies images based on the values of simple
features in an image. While detecting fire, the image is first converted into
grayscale, since grayscale is easy to work with and contains less data than RGB
color images.
The algorithm outlines a box and searches for fire in the image; the box searches
for Haar-like features.
In small steps, the box detects features such as edges and brightness levels in
the image of fire; the data collected by the boxes are then put together, which
helps determine where the fire is located.

23
3.5 Haar-like Features:

Haar features are similar to convolutional kernels; they are used to detect
the presence of particular features in a given image.
A grayscale image has regions that are black, regions that are white, and
regions that are slightly darker or lighter than one another, which helps the
machine understand the image.
The Haar features, such as edge features, line features, and center-surround
features, are rectangles that are moved around the image.
The white and black colors are decided according to the region: the feature
compares brightness and extracts information by assigning values as it moves
through and inspects each region of the image.
24

Fig 3.4 Haar Features

Formula:

Δ = dark − white = (1/n) Σ_dark I(x) − (1/n) Σ_white I(x)

For an ideal (positive) feature, Δ = 1 − 0 = 1

For a real image, Δ = 0.67 − 0.15 = 0.52

Integral Image:

The integral image plays an important role. It gives a boost to the calculation,
which makes the approach as fast as possible because there are thousands of
pixels that need to be calculated.
While applying rectangular Haar features, the sum of the pixels in the unshaded
rectangles is subtracted from the sum of the pixels in the shaded
25
rectangles.
Even for small images there are lots of features (over 160,000 for a
24x24 image).

Since the algorithm requires iterating over a large number of features, the
features must be computed efficiently. To solve this issue, the integral
image is introduced.
The sum of the pixels in rectangle D can be calculated with reference to four
array values. The value of the integral image at location 1 is the sum of the
pixels in rectangle A; the value at position 2 is A+B, the value at position 3
is A+C, and the value at position 4 is A+B+C+D. A short sketch follows.

Training and Learning:

After extracting features from an image, the machine is trained to identify
those features from the input. It is fed with lots of information so that it
can predict objects from that information.
To train the machine, it needs many pictures of fire, different fireplaces,
and conditions of fire, known as positive images. Alongside these, many
negative images are uploaded.
A negative image is a picture in which no fire is included. Negative images
are uploaded so the algorithm can differentiate between the two classes,
which helps it learn which features are likely to indicate fire and which
are not.

Adaptive Boosting (Ada Boost):

The AdaBoost algorithm is a learning algorithm used to train the classifier
and select the best subset of features. The algorithm learns from the data that is
26
given and determines the false positives and true negatives in the data.

F(x) = a1f1(x) + a2f2(x) + ...

Here F(x) is the strong classifier, and a1f1(x), a2f2(x) are weak classifiers,
where a1, a2 are the weights and f1, f2 are the features.
The strong classifier is made up of many weak classifiers, since one weak
classifier alone is not good; adding more classifiers makes the algorithm
stronger and more accurate, which is called an ensemble. A small sketch of
this weighted combination follows.
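The sketch below illustrates the weighted combination; the three threshold
tests and the weights a1, a2, a3 are invented stand-ins for the weak
classifiers and weights that AdaBoost would actually learn from data.

import numpy as np

# Each weak classifier f_i(x) returns +1 (fire-like evidence) or -1
def f1(x): return 1 if x[0] > 0.5 else -1  # e.g. a brightness test (assumed)
def f2(x): return 1 if x[1] > 0.3 else -1  # e.g. an edge-contrast test (assumed)
def f3(x): return 1 if x[2] > 0.7 else -1  # e.g. a colour test (assumed)

weights = [0.8, 0.5, 0.3]  # the weights a1, a2, a3 (assumed values)
weak_classifiers = [f1, f2, f3]

def strong_classifier(x):
    # F(x) = a1*f1(x) + a2*f2(x) + a3*f3(x); the sign gives the decision
    score = sum(a * f(x) for a, f in zip(weights, weak_classifiers))
    return 1 if score > 0 else -1

print(strong_classifier(np.array([0.9, 0.6, 0.8])))  # -> 1 (fire)
print(strong_classifier(np.array([0.1, 0.2, 0.1])))  # -> -1 (no fire)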

Cascading:

The cascade classifier is used to improve the accuracy of identification. It
is composed of several stages, each consisting of a strong classifier, so all
the features are grouped into several stages where each stage has a certain
number of features.
These stages are used to determine whether a given input sub-window has the
features of fire or not; if there are no features of fire, the sub-window is
discarded and does not pass to the other stages.
Usually the first few stages (stage 1, stage 2, and so on) contain very few
features. If the window fails a stage, it is discarded; if not, the next
stage of features is applied and the process continues. A window that passes
all the stages of fire features is detected as fire.

27
3.6 WORKING PRINCIPLE:

The first step is the training of the classifier, as mentioned earlier.
Training a highly accurate classifier needs lots of time and processing
power, so here we used only a small number of pictures.
After training the fire cascade classifier, the frame captured from the drone
cam is converted into grayscale, because the frame captured by the drone cam
is in RGB color.
Since RGB images have three color channels, converting the image into
grayscale leaves only one channel, which is easier to process.
After the conversion, the fire classifier is used to find the features and
locations in the images. Parameters such as the scale factor and
min-neighbors are passed; these are essential factors in detecting fire. The
scale factor is used for creating a scale pyramid: because the classifier is
trained on a fixed image size, the scale factor allows rescaling the input
frame to detect fire at different sizes.

The other parameter, min-neighbors, determines the quality of a detection: it
sets how many neighboring detections a candidate region needs before it is
accepted. A minimal sketch of this detection step follows.
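The OpenCV sketch below shows this step; "fire_cascade.xml" is a placeholder
for the separately trained cascade file (OpenCV does not ship a fire cascade),
and the parameter values are illustrative.

import cv2

fire_cascade = cv2.CascadeClassifier("fire_cascade.xml")  # trained fire cascade (placeholder path)

frame = cv2.imread("frame.jpg")                 # one frame captured from the drone cam
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # single channel is easier to process

# scaleFactor builds the scale pyramid; minNeighbors sets how many
# overlapping detections are required before a region is accepted
fires = fire_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in fires:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detected.jpg", frame)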

28
3.7 HARDWARE ENVIRONMENT

DRONE

Any aircraft or flying machine operated without a human pilot is called an
unmanned aerial vehicle (UAV). It can be guided autonomously or remotely by
a human operator using onboard computers and robots.

During surveillance or military operations, UAVs can be part of an
unmanned aircraft system (UAS); separate drones exist for air and water.

Drones have become increasingly popular in recent years. They are used
for a variety of purposes, including photography, videography, surveying,
inspection, and even delivery. But have you ever wondered how drones
work? In this section, we take a look at the working principle of drones.

The basic components of a drone are the frame, motors, propellers,


battery, flight controller, and sensors. Let’s take a closer look at each of
these components.

Frame

Battery

Flight controllers

Sensors

Motors and Propellers

Subjects for Drone or UAV

Understanding and development of drones depend on many subjects. The


29
design of a drone for a particular application comprises many factors, like the
aerodynamic shape of the propellers, the strength and weight of drone parts,
the electric motor, the electronic speed controller, the radio transmitter or
receiver, and the software interface on a mobile or computer for monitoring
and data analysis.

• Fluid Dynamics or Aerodynamics:

• Fluid dynamics plays an important role to decide the forces


acting on the body of a drone

• The shape, size, and speed of the propeller and drone depending
on the aerodynamics of propellers or blades

• Computational Fluid Dynamics (CFD) modeling helps for flow


dynamics of airflow over drones

• CFD modeling of turbo-machinery) is essential to decide the


amount of thrust generated by propellors

• Wind tunnel testing of the aerofoil blade of the drone is still


important for testing CFD results

• Mechanical Design

• Rigid body dynamics to study the motion and forces acting on


drones

• Strength of materials

• Low weight and rigid materials are selected for drone

• Electronics and Electrical Components:

• Electric motor with and without brush is required to drive the


propellors

• Electronic Speed Controller

• Flight controller unit and computer processors

• Radio Communication: transmitter and receiver for radio signals

30
• Battery: Low weight and high-power wattage battery is important

• Software-based interface: data collection and analysis using mobile


or computer

Working Principle of Drone and Flow Pattern

• The subject of Fluid dynamics plays a significant role in the design


and development of aircraft and drones. This subject consists of the
working principle of the aerodynamics of aircraft.

• A sufficient amount of upward force is required to lift the vehicle


against gravity which is named Lift.

• A force created to move the vehicle or body in motion is called thrust.


These forces can be studied using the kinematic laws of fluid flows

Fig 3.5 Principle of aerodynamic effect

• When air flows over an aerofoil, pressure, viscous, and drag forces
act on the profile

• Force is directly proportional to the velocity of air at the inlet


31
• The flow pattern around the cross-section of the aerofoil or propeller is
shown below. High fluid pressure at the bottom and low pressure at
the top of the propeller causes an upward force which is called a lift.
This force is responsible for lifting the weight of an aero-plane or
drone.

• The amount of lift force depends on the angle of inclination of the


aerofoil or propeller.

32
• Based on the principle of conservation of energy in fluid flow
(Bernoulli's principle), the sum of all forms of energy in a fluid is
constant along a streamline

• When air flows over an aerofoil or wing, its velocity increases at the
top portion. But the pressure of air decreases.

• In contrast, the air velocity decreases and the pressure increases at the
bottom side of the blade. The net pressure difference across the
aerofoil results in an upward force, which is called lift

• CFD modeling of flow over an aerofoil has been important in many


vehicular and aerospace industries

Types of drones based on the number of Propellors

Drones are provided with different numbers of propellers. More propellers
improve the stability of a drone and its load-carrying capacity, but such
drones need more battery power to drive the additional motors. The quadcopter
is the most popular drone.

• Bicopter (2 propellers)

• Tricopter (3 propellers)

• Quadcopter (4 propellers)

33
• Hexacopter (6 propellers)

• Octacopter (8 propellers)

Working Principle of Quadcopter

• A quadcopter has four propellors at four corners of the frame

• For each propeller, speed and direction of rotation are independently


controlled for balance and movement of the drone

• In a traditional quadrotor, all four rotors are placed at an equal


distance from each other

• To maintain the balance of the system, one pair of rotors rotates in a


clockwise direction and the other pair rotates in an anti-clockwise
direction

• To move up (hover), all rotors should run at high speed. By changing


the speed of rotors, the drone can be moved forward, backward, and
side-to-side

Quadcopter Dynamics

• The movements of a drone are classified into four types based on the
relative motion between the four propellers: 1) throttle, 2) pitch,
3) roll, and 4) yaw

• The details of quadcopter dynamics are explained in many references

34
Fig 3.6 Quadcopter Dynamics

• Throttle/ Hover: up and down movement of the drone is called


throttle

• If all four propellors run at normal speed, then the drone will
move down

• If all four propellors run at a higher speed, then the drone will
move up. This is called the hovering of a drone

• Pitch: movement of a drone about a lateral axis (either forward or


backward) is called pitching motion

• If two rear propellors run at high speed, then the drone will
move in a forwarding direction

• If two front propellors run at high speed, then the drone will
move in the backward direction

• Roll: movement of a drone about the longitudinal axis is called rolling


motion

• If two right propellors run at high speed, then the drone will
35
move in the left direction

• If two left propellors run at high speed, then the drone will
move in the right direction

• Yaw: the rotation of the head of the drone about the vertical axis
(either to the left or right) is called yawing motion

• If two propellors of a right diagonal run at high speed, then the


drone will rotate in an anti-clockwise direction

• If two propellors of a left diagonal run at high speed, then the


drone will rotate in a clockwise direction

Fig 3.7 Controls of Quadcopter

36
Forces and Moments Acting on a Drone

Major forces acting on a Drone

When a drone moves in the air, various forces act on it. The resultant
force will decide its movement. There are major forces acting on a drone

• Weight

• Due to the mass of the drone, the body mass force always acts
in the direction of gravity

• The higher the weight of the drone, the more power is required to lift
and move it

• Weight of drone = mass of drone × acceleration due to gravity

• Lift:

• The vertical force acting on the drone is called lift

• This force is due to pressure differences across the drone (in the
vertical direction). Hence, the speed, size, and shape of the
propeller blade decide the amount of lift force

• Lift is essential to lift the body against the gravity


37
• To create this force, all four propellors run at high speed to lift
the drone

• Thrust

• The force acting on the drone in the direction of motion is called


thrust. However, for drone dynamics, it is normal to the rotor
plane.

• During hovering, the thrust is purely vertical. If thrust is


inclined then the drone will tilt forward or backward.

• This force is essential to move the drone in the desired direction at
the desired speed.

• To get the desired motion, two propellers are given high
speed

• Drag

• The force acting on the drone in the opposite direction of


motion due to air resistance is called drag

• This may be because of pressure difference and viscosity of air

• To reduce the drag, the aerodynamic shape of the drone is


selected

38
Kinematic for Quad-copter

Fig 3.8 Kinematics of Quadcopter

• The thrust produced by each propeller is perpendicular to the plane of


rotation of propellors. It is directly proportional to the square of the
angular velocity of the propeller

Fi = kf × ωi²

• If L is defined as the distance between two motors or propellors for


any diagonal of the drone, then the reaction moments about the X-axis
and Y-axis

Mx = (F3 – F4) × L

My = (F1 – F2) × L

• Newton’s second law of motion

• For linear motion: Force = mass × linear acceleration

• For rotational motion: Torque = inertia × angular acceleration
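As a small numeric sketch of these relations (the thrust coefficient kf, the
arm length L, and the rotor speeds below are assumed illustrative values, not
measured drone parameters):

kf = 1.0e-5  # thrust coefficient (assumed)
L = 0.25     # distance between two opposite motors, in metres (assumed)

omega = [900.0, 900.0, 950.0, 850.0]  # rotor angular velocities (rad/s)
F = [kf * w ** 2 for w in omega]      # Fi = kf * wi^2

Mx = (F[2] - F[3]) * L  # Mx = (F3 - F4) * L, moment about the X-axis
My = (F[0] - F[1]) * L  # My = (F1 - F2) * L, moment about the Y-axis

print([round(f, 2) for f in F], round(Mx, 3), round(My, 3))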

Hovering Motion

• Equilibrium Conditions for hovering

39
mg = F1 + F2 + F3 + F4

All moments = 0

• Equation of motion

m × a = F1 + F2 + F3 + F4 − mg

a = 0 (hovering)

Rise or Fall Motion (Throttle up)

• Conditions for hovering (rise)

mg < F1 + F2 + F3 + F4

All moments = 0

• Conditions for Fall

mg > F1 + F2 + F3 + F4

All moments = 0

• Equation of motion

m × a = F1 + F2 + F3 + F4 − mg

a > 0 (rise), a < 0 (fall)

Yaw Motion

• Conditions for hovering

mg = F1 + F2 + F3 + F4

All moments ≠ 0

40
• Equation of motion

mass × linear acceleration = F1 + F2 + F3 + F4 − mg

Izz × angular acceleration about the Z-axis = M1 + M2 + M3 + M4

Pitch and Roll Motion

• Conditions for hovering

mg < F1 + F2 + F3 + F4

All moments ≠ 0

• Equation of motion

mass × linear acceleration = F1 + F2 + F3 + F4 − mg

Ixx × angular acceleration about the X-axis = (F3 − F4) × L

Rigid-body dynamics

• To calculate individual speeds and forces acting on drones, the three-


dimensional rigid-body dynamics should be modeled

• The first step is to identify the reference coordinates, the direction of


rotor speed and forces acting the drones

• For the rigid body, we have to consider aerodynamic, inertial,
gravitational, and gyroscopic effects

• Aerodynamic Forces: rotation of the propellors in air causes various


forces such as friction and drag

• Secondary aerodynamic effects: blade flapping, ground effect, and


local flow fields

• Inertial counter torques: gravitational forces acting at the center of


drone affect the rotation of propellors

41
• Gyroscopic effects: change in the orientation of drone body and plane
rotation of propellors.

• Based on Newton-Euler equations, all forces and moments acting on a


quadcopter are combined and result in a complete model of the drone
dynamics

• This physical model is useful to control the desired motion of the


quadcopter

Major Components of Drones

The following are major parts of drones.

Fig 3.9 Parts of drone


42
1. Frame:

• It should have sufficient strength to hold the propeller momentum and


additional weight for motors and cameras

• Sturdy and less aerodynamic resistance

2. Propellers:

• The speed and load lifting ability of a drone depends on shape,


size, and number of propellors

• The long propellors create huge thrust to carry heavy loads at a


low speed (RPM) and less sensitive to change the speed of
rotation

• Short propellors carry fewer loads. They change rotation speeds


quickly and require a high speed for more thrust.

3. Motor

• Both brushless and brushed motors can be used for drones

• A brushed motor is less expensive and useful for small-sized


drones

• Brushless motors are powerful and very energy-efficient, but they
need an Electronic Speed Controller (ESC) to control their speed.
These brushless motors are widely used for racing and freestyle
drones, traffic surveys, and aerial photography drones.

4. ESC (Electronic Speed Controller)

• ESC is used to connect the battery to the electric motor for the
power supply

• It converts the signal from the flight controller into revolutions
per minute (RPM) of the motor

• An ESC is provided for each motor of the drone

43
5. Flight Controller (FC)

• It is the computer processor which manages balance and


telecommunication controls using different transmitter

• Sensors are located in this unit for the accelerometer, barometer,


magnetometer, gyrometer and GPS

• The distance measurement can be carried out by an ultrasound


sensor

6. Radio Transmitter: sends the radio signal that allows the pilot to
control motor speed.

7. Radio Receiver: receives the signal from the pilot. This device is
attached to the quadcopter.

8. Battery: a high-power-capacity lithium polymer (LiPo) battery is used
for most drones. The battery can have 3S (3 cells) or 4S (4 cells).

• When the pilot or autonomous system gives the drone a command, the
flight controller sends signals to the motors to spin the propellers

• The speed and direction of the motors and propellors are adjusted to
achieve the desired movement. The sensors provide data to the flight
controller, which uses it to stabilize the drone in the air and adjust its
movement

• Drones can be controlled manually using a remote controller or


programmed to fly autonomously. Autonomous drones use sensors
and pre-programmed instructions to fly to a specific location, perform
a task like taking photos or delivering a package, and return to their
starting point.

How to operate a Drone

Operating a drone can be a fun and rewarding experience, but it's
important to know how to do so safely and legally
44
• Here are some general steps to operate a drone :

1. Read the manual: The first step is to read the drone manual
carefully, as each drone model is unique and has its own set of
instructions.

2. Register your drone: Depending on your location, you may need to


register your drone with the appropriate authorities.

3. Charge your drone battery: Make sure your drone battery is fully
charged before flying it.

4. Find a suitable location: Choose a location that is open, clear, and


away from any obstacles like trees, buildings, or power lines.

5. Check the weather: Avoid flying your drone in windy or rainy


conditions. Check the weather forecast before flying.

6. Turn on the drone: Turn on the drone and the remote control.

7. Calibrate the drone: Follow the instructions in the manual to


calibrate the drone before flying.

8. Take off: Push the throttle stick slowly and smoothly to take off the
drone.

9. Fly the drone: Use the remote control to maneuver the drone in the
air. Keep it at a safe distance from people and property.

10. Land the drone: When you are ready to land the drone, slowly bring
it down to the ground using the throttle stick.

11. Turn off the drone: After landing the drone, turn off the drone and
the remote control.

45
Fig 3.10 Drone Kit

Precautions During the Drone Use

• Drones can be a fun and useful tool, but they can also be dangerous if
not used properly. Here are some precautions to keep in mind when
using a drone.

1. Know the laws and regulations:

• Before flying a drone, make sure you know the laws and
regulations in your area

• This includes any local, state, and federal regulations, as


well as any restrictions on where you can fly your drone.

2. Always keep your drone in sight:

• It’s important to keep your drone within your line of sight


at all times

• This will help you avoid collisions with other objects or


people.

46
3. Fly in open areas

• Try to fly your drone in open areas away from people,


buildings, and other obstacles

• This will help you avoid accidents and crashes.

4. Respect people’s privacy:

• Don’t fly your drone over private property without


permission

• Also, avoid flying your drone close to people’s homes or


in areas where people have a reasonable expectation of
privacy.

5. Avoid flying in bad weather:

• Drones are not designed to handle extreme weather


conditions like strong winds, heavy rain, or snow

• Avoid flying your drone in these conditions, as they can


cause your drone to crash or become damaged.

6. Keep your drone in good condition:

• Regularly check your drone for any signs of damage or


wear and tear

• Replace any damaged parts before flying.

7. Practice safe battery use:

• Always use the manufacturer’s recommended batteries


and charger

• Avoid charging your batteries unattended and never use


damaged or swollen batteries.

8. Be prepared for emergencies: Keep a first aid kit and a fire


extinguisher nearby in case of emergencies. Also, be prepared to

47
land your drone quickly when necessary

By following these precautions, you can help ensure a safe and


enjoyable drone experience for yourself and others.

Application and Development of Drones

• Drones or UAVs have many applications in space, defense, and military
purposes, in the delivery of food items, and in spraying pesticides in farming

• Fig 3.11 below shows applications of drones in industries

Application of drone and scope of CFD modeling

Fig 3.11 Applications of Drone

48
Scope of CFD Modeling for Drone Aerodynamics

• Computational fluid dynamics (CFD) predicts the pressure gradient


required for lifting a drone

• The following parameters decide the aerodynamic design of the drone

• Payload of drone

• Number and speed of propellors

• Aerodynamics of propellors

• Spacing between the propellors

• Lift coefficient of drone


• The thrust generated by the drone
• CFD modeling will help to optimize the aerodynamics and
applications of drone

3.8 SYSTEM REQUIREMENTS:

The Project requires both software and hardware systems.

3.8.1 HARDWARE REQUIREMENTS:

System : Pentium i3 Processor

Hard Disk : 500 GB

Monitor : 15" LED

RAM : 2 GB

Input Devices : Mouse, Keyboard

Drone : Payload model with 12v power supply

49
3.8.2 SOFTWARE REQUIREMENTS:

Operating system : Windows 7

Coding Language : Python

Tool : Anaconda Navigator

Libraries : OpenCV

50
CHAPTER 4

APPENDIX

4.1 IMPLEMENTED CODING:

from telegram import Bot  # python-telegram-bot (assumes the v13 synchronous API)
from ultralytics import YOLO
import cv2
import numpy as np
import imutils
import requests
import time

# BOT_API = '6769886586:AAE1FacFVhfOfIk0lMBo7fT4i02RbvNhkKM'
TOKEN = '6572810864:AAGTixxJV_eD8C2PcAti_oLl35_SpFSSStQ'
bot = Bot(token=TOKEN)

# ESP32-CAM snapshot URL on the local network
url = "http://192.168.47.228/cam-hi.jpg"

# Chat ID where you want to send messages
# CHAT_ID = '1592421798'
CHAT_ID = '996965527'

# Fallback webcam capture (the frames below are fetched from the ESP32-CAM URL)
cap = cv2.VideoCapture(0)

# Custom YOLO weights trained on fire images
model = YOLO('fire.pt')

# Reading the classes
classnames = ['fire']

# Variables for rate-limiting the Telegram alerts
frame_count = 0
time_interval = 60  # send at most one alert every 60 detections/seconds
start_time = time.time()

while True:
    # Grab one JPEG frame from the ESP32-CAM and decode it
    img_resp = requests.get(url)
    img_arr = np.array(bytearray(img_resp.content), dtype=np.uint8)
    frame = cv2.imdecode(img_arr, -1)
    frame = imutils.resize(frame, width=600)
    result = model(frame, stream=True)

    # Getting bbox, confidence, and class name information to work with
    for info in result:
        boxes = info.boxes
        for box in boxes:
            confidence = float(box.conf[0].cpu().numpy()) * 100  # confidence tensor to float
            confidence = round(confidence, 2)                    # round to 2 decimal places
            Class = int(box.cls[0])

            if confidence > 75:
                # Draw the detection on the frame
                x1, y1, x2, y2 = map(int, box.xyxy[0])
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 5)
                cv2.putText(frame, f'{classnames[Class]} {confidence}%', (x1 + 8, y1 + 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

                frame_count += 1
                if frame_count % time_interval == 0 or time.time() - start_time > time_interval:
                    cv2.imwrite('fire_detected.jpg', frame)
                    # Send the annotated frame to the Telegram chat
                    with open('fire_detected.jpg', 'rb') as photo:
                        bot.send_photo(CHAT_ID, photo=photo, caption='Fire Detected')
                    start_time = time.time()

    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

53
4.2 RESULTS AND DISCUSSION :

Fig 4.2.1.Drone Setup with controller

54
Fig 4.2.2 Surveillance Period Fig 4.2.3 ESP 32 Wifi Camera

55
Fig 4.2.4 Fire detection using web cam

Fig 4.2.5 Real Time Detection of Fire

56
Fig 4.2.6 Telegram Alert Notification

• Since the test was performed under room lighting, the thresholds must
be adjusted to the room light to be effective.

• In case of a fire, a loud alarm is raised and the outcome is
displayed as FIRE.

• If there is no fire, the outcome is displayed as NO FIRE.
57
CHAPTER 5

CONCLUSION AND FUTURE WORK

5.1 CONCLUSION :

The project aimed to detect fire with a different approach rather than using an
existing system. As technology keeps getting better, the new system was created
to keep up with it and to minimize the limitations. By using image processing
technology for detecting fire, these limitations can be reduced, because in
this system the camera acts like a human eye: as it detects a fire, the video
is captured and the image is processed by the software to alert the user. It
can be used everywhere, e.g. hospitals, railway stations, forests, etc. The
designed prototype successfully detects fire; this report gives the review
analysis, system design, algorithm, tests, and results.
Currently, we have not used systems such as smoke detectors and sprinkler
water-discharge systems, but in future they can be included.

5.2 LIMITATION & FUTURE SCOPE:

Due to safety reasons, the effectiveness of the system could not be tested in
full, so for the prototype demo a lighter is used as the source of fire, as it
has the same features and characteristics as a real fire. There is some error
in detection when the fire is far away from the camera because of the size of
the flame. The system can be made more accurate and precise by using other
machine learning approaches such as YOLO (You Only Look Once), TensorFlow,
Keras, or neural networks such as CNN and R-CNN, which are more advanced than
the current algorithm.

58
REFERENCES:

[1] N. Dziengel, G. Wittenburg, and J. Schiller, "Towards distributed event

detection in Wireless Sensor Networks," in Adjunct Proc. of 4th


IEEEfACM IntI. Conf. on Distributed Computing in Sensor Systems
(DCOSSa:AZ'08), Santorini Island, Greece, 2020.

[2] S. Jarupadung, "Distributed event detection and semantic event

processing," in The 6th ACM International Conference on Distributed


Event- Based Systems (DEBS 2020)(Doctoral Symposium), 2020.

[3] Y. Li and L. E. Parker, "Detecting and monitoring time-related abnormal

events using a Wireless Sensor Networks and mobile robot," in Intelligent


Robots and Systems, 2019. IROS 2019. IEEEfRSJ International
Conference on. IEEE, 2019.

[4] P. Radivojac, U. Korad, K. M. Sivalingam, and Z. Obradovic,
"Learning from class-imbalanced data in wireless sensor networks," in
Vehicular Technology Conference, VTC 2020-Fall, 2020 IEEE 58th, vol. 5,
IEEE, 2020.

[5] Z.-J. Zhang, J.-S. Fu, H.-P. Chiang, and Y.-M. Huang, "A novel

mechanism for fire detection in subway transportation systems based on


Wireless Sensor Networks," International Journal of Distributed Sensor
Networks, vol. 2019, 2019.

[6] Boroujeni, N. S. (2019). Monocular vision system for unmanned

59
aerial vehicles. doi:10.22215/etd/2013-07242.

[7] Akyurek, S., Yılmaz, M. A., and Taskıran, M. (2012). "İnsansız Hava
Araçları (Muharebe Alanında ve Terörle

[8] Chen, T., Wu, P., & Chiou, Y. (2004). An early fire-detection method
based on image processing.

[9] Gao, Y., & Cheng, P. (2019). Forest fire smoke detection based on
visual smoke root and diffusion model. Fire Technology.
doi:10.1007/s10694-019-00831-x.

[10] Wang, T., Liu, Y., & Xie, Z. (2011). Flutter analysis based video
smoke detection. Journal of Electronics. doi:10.3724/sp.j.1146.20&10.00

[11] Lin et al. (2017). Smoke detection in video sequences based on dynamic
texture using volume local binary patterns. KSII Transactions on Internet
and Information Systems, (11).

60
doi:10.3837/tiis.2017.11.019.

[12] Wang, Z., Wang, Z., Zhang, H., & Guo, X. (2017). A novel fire
detection approach based on CNN-SVM using TensorFlow. Intelligent
Computing Methodologies, Lecture Notes in Computer Science, 682-693.

[13] Shaqura, M., & Shamma, J. S. (2017). An automated quadcopter CAD
based design and modeling.
61
