Final Report
LEARNING MODEL
A PROJECT REPORT
Submitted by
Of
BACHELOR OF ENGINEERING
IN
MAY 2024
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
ACKNOWLEDGEMENT
We wish to express our sincere thanks to our respected Principal, Dr. S.N. Ramaswamy,
M.E., Ph.D., Anjalai Ammal Mahalingam Engineering College, for all the
blessings and help provided during the period of the project work.
We wish to express our sincere thanks to Dr. A. Bhavani Sankar, M.E., Ph.D.,
Head of the Department of Electronics and Communication Engineering, for the
continuous help over the period of the project work.
We wish to express our sincere thanks to our guide Mr. G. Murugesan, M.E.,
Associate Professor, Department of Electronics and Communication
Engineering, for the continuous help, the creative ideas during the first phase
of our project work, and the sincere guidance given to us in good time.
We would like to extend our warmest thanks to Mr. T. Rajappa, Lab Technician,
Department of ECE, for helping us in this venture. Unflinching support and
encouragement from our family members, friends, and the faculty of Anjalai
Ammal Mahalingam Engineering College helped us a long way in completing our
project work. We must thank them all from the depths of our hearts.
ABSTRACT
The project aims to detect fire using computer-vision-based image
processing technology that alerts people through early detection of fire. Many
automatic fire alarm systems already exist, such as the sensor method, but these
have limitations: they are designed to sense fire through smoke and cover only
limited areas. This project is proposed to reduce those limitations and to take
advantage of newer technology. The project is implemented using the PyCharm IDE,
with a drone camera connected as the hardware. The drone camera is taken as the
input source; it captures a video feed of the surroundings and feeds it into the
system for analysis. The entire code is written in pure Python using the OpenCV
library and consists of the machine learning, image processing, and color-model
components, together with the working algorithm of the project for detecting fire.
The project gives a better understanding of these technologies and of their use in
different forms.
TABLE OF CONTENTS
ABSTRACT i
LIST OF FIGURES iv
1. INTRODUCTION
1.1 Overview 1
1.2 Scope and Objective 2
1.3 Domain Overview 2
1.3.1 Machine Learning Model 3
1.3.1.1 Gathering Data 3
1.3.2 Data Pre-processing 4
1.3.3 Supervised Learning 6
1.3.4 Regression 8
1.3.5 Unsupervised Learning 9
1.3.6 Training and testing the model of data 10
1.3.7 Validation Set 11
2. LITERATURE SURVEY 14
2.1 Literary review 14
2.2 Analysis of literature 17
3. SYSTEM ANALYSIS 19
3.1 Existing System 19
3.2 Problem Statement 19
3.3 Proposed System 19
3.4 System Design 23
3.5 Haar Like Features 24
3.6 Working Principle 28
3.7 Hardware Environment 29
3.8 System Requirements 49
4. APPENDIX 51
4.1 Implemented code 51
4.2 Results and discussion 54
5. CONCLUSION 58
6. REFERENCES 59
LIST OF FIGURES
4.2.3 ESP 32 wifi camera 55
4.2.4 Fire detection using web cam 55
4.2.5 Real time detection of fire 55
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
Computer Vision based fire detection using image processing has the
potential to be useful in conditions in which conventional methods cannot be
adopted. The fire detection algorithm uses visual characteristics of fires like
brightness, color, spectral texture, spectral flicker, and edge trembling to
discriminate them from other visible stimuli. There are various fire detection
techniques such as infrared sensor, a thermal detector, smoke detector, flame
detector, and optical smoke detector. These methods are not always reliable
as they do not always detect the fire itself but detect one or more phenomena
resulting from fire, such as smoke, heat, infrared, ultraviolet light radiation or
gas, which could be produced in other ways and hence produces many false
alarms. With the help of computer vision and image processing techniques, it is
possible to get better results than conventional systems because images can
provide more reliable information.
The proposed project can be used to detect objects in various areas, such as
parking lots, road traffic congestion, agriculture, and farming; there are many
uses of object detection. The proposed software product uses the Haar
Cascade classifier, which is very popular for object detection in images and
other video feeds. In the future, this approach can be extended to other
disasters as well.
Introduction:
In this section, we discuss the workflow of a machine learning project,
including all the steps required to build a proper machine learning project
from scratch. We also go over data pre-processing, data cleaning, feature
exploration, and feature engineering, and show the impact these have on
machine learning model performance. We also cover a couple of pre-modelling
steps that can help improve model performance.
Understanding the machine learning workflow
1. Gathering data
2. Data pre-processing
3. Researching the model that will be best for the type of data
4. Training and testing the model
5. Evaluation
The process of gathering data depends on the type of project we want to make. If
we want to make an ML project that uses real-time data, then we can build an IoT
system that uses data from different sensors. The data set can be collected from
various sources such as a file, a database, or a sensor, but the collected data
cannot be used directly for analysis, as there might be a lot of missing data,
extremely large values, unorganized text data, or noisy data. Therefore, data
preparation is done to solve this problem. We can also use free data sets that are
available on the internet. Kaggle and the UCI Machine Learning Repository are the
repositories used most often for building machine learning models. Kaggle is one
of the most visited websites for practicing machine learning algorithms; it also
hosts competitions in which people can participate and test their knowledge of
machine learning.
These are some of the basic pre-processing techniques that can be used
to convert raw data.
1.3.1.8 Outlier detection: There may be erroneous data points in our
data set that deviate drastically from the other observations.
[Example: human weight = 800 kg]
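As a small illustration of the idea above, the 800 kg entry can be caught with a simple z-score rule. This is a pure-Python sketch; the data and the threshold are arbitrary illustrative choices, not values from the project.

```python
# Hypothetical sketch: flag entries whose z-score exceeds a threshold.
def z_score_outliers(values, threshold=3.0):
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical: nothing deviates
    return [v for v in values if abs(v - mean) / std > threshold]

weights = [62, 70, 58, 75, 66, 800]  # 800 kg is the erroneous entry
print(z_score_outliers(weights, threshold=2.0))  # → [800]
```

Such flagged values would then be corrected or dropped during data preparation.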
Searching for a suitable model:
Our main goal is to train the best-performing model possible using the pre-
processed data. Machine learning is divided into two parts, namely
supervised learning and unsupervised learning. These methods will help
us choose the best model.
Classification:
A classification problem is one where the output variable is a category, such as
“red” or “blue”, “disease” or “no disease”, or “spam” or “not spam”.
● K-Nearest Neighbor
● Naive Bayes
● Logistic Regression
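As a toy illustration of the first classifier listed above, a minimal k-nearest-neighbour predictor can be written in plain Python. The points and labels are made up for illustration; a real project would use a library such as scikit-learn.

```python
# Minimal k-NN sketch: predict the majority label among the k closest points.
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

train = [((1, 1), "red"), ((1, 2), "red"),
         ((8, 8), "blue"), ((9, 8), "blue"), ((8, 9), "blue")]
print(knn_predict(train, (2, 1)))  # → red
print(knn_predict(train, (8, 7)))  # → blue
```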
1.3.3 Regression:
Imagine a graph whose X-axis is the test scores and whose Y-axis represents
IQ. We try to create the best-fit line through the data so that we can use that
line to predict an approximate IQ for any score that isn't present in the given
data.
● Linear Regression
● Gaussian Processes Regression
● Ensemble Methods
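The best-fit-line idea described above can be sketched with ordinary least squares in plain Python. The score/IQ pairs below are invented for illustration only.

```python
# Least-squares fit of y = m*x + b through a set of points.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

scores = [1, 2, 3, 4]
iq = [90, 100, 110, 120]
m, b = fit_line(scores, iq)
print(m, b)       # → 10.0 80.0
print(m * 5 + b)  # predicted IQ for an unseen score of 5 → 130.0
```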
Clustering:
• Gaussian mixtures
• K-Means Clustering
• Hierarchical Clustering
• Spectral Clustering
To train a model, we initially split the data into three sections:
‘training data’, ‘validation data’, and ‘testing data’. You train the
classifier using the training data set, tune the parameters using the validation
set, and then test the performance of your classifier on the unseen test data set.
An important point to note is that during training only the training
and/or validation set is available to the classifier. The test data set must not be
used while training the classifier; it becomes available only when testing the
classifier.
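The three-way split described above can be sketched as follows. The 60/20/20 proportions are an assumption for illustration, since the report does not fix exact ratios.

```python
# Shuffle the data once, then cut it into train/validation/test slices.
import random

def split_data(samples, seed=42):
    data = list(samples)
    random.Random(seed).shuffle(data)
    n = len(data)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]  # kept unseen until final evaluation
    return train, val, test

train, val, test = split_data(range(100))
print(len(train), len(val), len(test))  # → 60 20 20
```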
Fig 1.8 Training set
Training set:
The training set is the material through which the computer learns how to
process information. Machine learning uses algorithms to perform the
training part. A set of data used for learning, that is to fit the parameters of the
classifier.
Once the data is divided into the three given segments, we can start the training
process.
The model uses any one of the algorithms chosen in step 3.
Once the model is trained, we can use the same trained model to predict on
the testing data, i.e. the unseen data. Once this is done, we can develop a
confusion matrix, which tells us how well our model is trained. A confusion
matrix has four parameters: ‘True Positives’, ‘True Negatives’, ‘False
Positives’, and ‘False Negatives’. We prefer to get more values in the
true negatives and true positives to obtain a more accurate model. The size of
the confusion matrix depends entirely upon the number of classes.
True positives : These are cases in which we predicted TRUE and our
predicted output is correct.
True negatives : We predicted FALSE and the actual output is FALSE.
False positives : We predicted TRUE, but the actual output is FALSE.
False negatives : We predicted FALSE, but the actual output is TRUE.
We can also find out the accuracy of the model using the confusion matrix.
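The accuracy can be computed directly from the four confusion-matrix counts; the counts below are made up for illustration.

```python
# Accuracy = correct predictions / all predictions.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=40, tn=45, fp=5, fn=10))  # → 0.85
```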
Evaluation:
To improve the model, we might tune its hyper-parameters to improve
accuracy, looking at the confusion matrix and trying to increase the number
of true positives and true negatives.
Conclusion:
In this section, we have discussed the workflow of a machine learning project,
which gives us a basic idea of how the problem should be tackled.
CHAPTER 2
LITERATURE SURVEY
camera. The wireless sensor network uses a hierarchical communication/learning
structure, where the mobile robot is the root node of the tree. Our fuzzy ART
network is based on Kulakov and Davcev’s implementation [8]. However, we
enhance their work by extending the fuzzy ART neural network with a Markov
model to learn a time series and detect time-related anomalies.
Finally, a mobile robot is employed to verify whether the detected anomalies were
caused by intruders.
Z.-J. Zhang, J.-S. Fu, H.-P. Chiang, and Y.-M. Huang said fire is a common
and disastrous phenomenon in subway transportation systems because of the closed
environment and large passenger flow. Traditional methods detect and forecast fire
incidents by fusing the data collected by wireless sensor networks and comparing the
fusion result with a threshold. However, this approach has a significant shortcoming:
fusion of data decreases the amount of useful information, since the distribution of
the data is not considered. In this paper, a novel mechanism based on wireless sensor
networks is proposed, and we can use the mechanism to detect fire in subway
transportation systems. The core algorithm in this mechanism is an alternative to data
fusion in wireless sensor networks. The mechanism uses the quantity information
acquired from the data and also considers the distribution information of the data in
order to predict the occurrence of fire incidents. The simulation results show that fire
incidents can be detected quickly and efficiently.
Forest Fire Smoke Detection Based on Visual Smoke Root and Diffusion
(IEEE2019)
The damage caused by forest fire to forestry resources and economy is quite
serious. As one of the most important characters of early forest fire, smoke is
widely used as a signal of forest fire. In this paper, we propose a novel forest fire
smoke detection method based on computer vision and diffusion model. Unlike the
video-based methods that usually rely on image characteristic extraction, we try to find
the shape of smoke at the generation stage. To combine vision and the diffusion
model together, the basic concept of the smoke root is proposed. In the frame
processing stage, no characteristics of fire smoke are extracted (such as texture, color,
or frequency information), and continuous frames are only used to extract stable
points in dynamic areas as the smoke root candidate points. In the diffusion model
simulation stage, all smoke root candidate points information is adopted by the
model to generate the simulation smoke. Finally, the match algorithm based on
color, dynamic areas and simulation smoke is implemented to get the final results.
In order to reduce the complexity of computation, we ignored the simulation
process of the smoke details, such as texture and turbulence, and only retained the
contour features in two-dimensional form.
AN EARLY FIRE-DETECTION METHOD BASED ON IMAGE
PROCESSING. (IEEE2004 )
The paper presents an early fire-alarm raising method based on video processing.
The basic idea of the proposed fire-detection method is to adopt an RGB (red, green,
blue) model with chromatic and disorder measurements for extracting fire pixels and
smoke pixels. The decision function for fire pixels is mainly deduced from the
intensity and saturation of the R component. The extracted fire pixels are then
verified as real fire by the dynamics of growth and disorder, and further by smoke.
Based on iterative checking of the growing ratio of flames, a fire alarm is raised
when the alarm-raising condition is met. Experimental results show that the developed
technique can achieve fully automatic surveillance of fire accidents with a lower
false alarm rate at a general cost, and is thus very attractive for important military,
social security, and commercial applications.
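The chromatic idea in this paper, namely that fire pixels have a dominant red component, can be sketched roughly as a per-pixel rule. The R > G > B ordering follows the RGB model described above, while the numeric threshold here is an illustrative assumption, not a value taken from the paper.

```python
# Toy per-pixel fire test: fire tends to satisfy R > G > B with high red.
def looks_like_fire(r, g, b, r_min=180):
    return r > g > b and r >= r_min

print(looks_like_fire(220, 150, 60))  # → True  (flame-colored pixel)
print(looks_like_fire(90, 120, 200))  # → False (sky-blue pixel)
```

A full system would apply such a rule per pixel and then verify the candidate region by its growth and disorder over time, as the paper describes.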
The significant part of the project development phase is to consider and fully
review all the necessary requirements for building up the project. For every
project, the literature survey is the most important part of the software
development process. Before developing the tools and the related designs, it is
necessary to determine and assess the time factor, resource requirements,
manpower, economy, and organizational strength. Once these things are
satisfied and fully reviewed, the next step is to decide on the software
specifications for the respective system, for example what kind of operating
system the project would require and what software is needed to proceed with
the subsequent steps, such as developing the tools and the related operations.
Here we have taken the general surveys of different authors and noted down the
essential central points regarding their work. In this project, the literature survey
plays a dominant role in gathering resources from different areas and all the
connected topics that are very useful for this section. The best aspect of this is
the way things come together and help us fit our work to the existing
knowledge.
CHAPTER 3
SYSTEM ANALYSIS
The objective is to deploy an efficient and cost-effective solution for early wildfire
detection by leveraging aerial surveillance capabilities.
The drone will autonomously scan designated areas, identify signs of fire through
real-time image analysis, and promptly notify relevant authorities and communities
via Telegram channels, enabling swift response and effective mitigation measures to
minimize the impact of wildfires on the environment and public safety.
ADVANTAGES:
The system can be used in various areas such as parking lots, road traffic
congestion, agriculture, and farming; there are many uses of object detection.
In the future, this approach can be extended to other disasters as well.
BLOCK DIAGRAM:
FLOW DIAGRAM:
3.4 SYSTEM DESIGN
Modules:
● Detection
● Haar-like Features
● Integral Image
● Training and Learning
● Adaptive Boosting (Ada Boost)
● Cascading
Detection:
The procedure for detecting fire classifies images based on the values of simple
features from an image. While detecting fire, the image is first converted into
grayscale, since grayscale is easy to work with and contains less data than RGB
color images.
The algorithm outlines a box and searches for fire in the image; the box is
searching for Haar-like features.
In small steps, the box detects features such as edges and brightness levels from
the image of the fire, and the data collected by the boxes is then put together,
which helps determine where the fire is located.
3.5 Haar-like Features:
Haar features are similar to convolutional kernels and are used to detect
the presence of particular features in a given image.
A grayscale image has regions that are black, regions that are white, and
regions that are slightly darker or lighter than one another, which helps the
machine understand the image.
Haar features such as edge features, line features, and center-surround
features are rectangles that are moved around the image.
The white and black regions of each feature are decided by brightness: the
feature compares brightness and extracts information by assigning values as
it inspects each region of the image.
Formula:
Integral Image:
The integral image plays an important role. It gives a boost to the calculation,
which makes the approach as fast as possible because there are thousands of
pixels that need to be calculated.
While applying rectangular Haar features, the sum of the pixels in the unshaded
rectangles is subtracted from the sum of the pixels in the shaded
rectangles.
Even for small images there are many features (over 160,000 for a
24x24 image).
Since the algorithm requires iterating over this large number of features,
the features must be computed efficiently. The integral image is introduced
to solve this issue.
The sum of the pixels in rectangle D can be calculated with reference to four
arrays. The value of the integral image at location 1 is the sum of the pixels in
rectangle A. Respectively, the value at position 2 is A+B, the value at position 3
is A+C, and the value at position 4 is A+B+C+D.
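The four-corner lookup described above can be sketched in plain Python; the 3×3 image below is made up for illustration.

```python
# Build an integral image with one extra zero row/column, so that
# ii[y][x] holds the sum of all pixels above and to the left of (y, x).
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, top, left, bottom, right):
    # Sum over img[top:bottom][left:right] using only four lookups.
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 5+6+8+9 → 28
```

However large the rectangle, its pixel sum costs only four array lookups, which is what makes evaluating thousands of Haar features per window feasible.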
Here F(x) is the strong classifier, and a1f1(x), a2f2(x) are the weighted weak
classifiers, where a1, a2 are the weights and f1, f2 are the features:
F(x) = a1f1(x) + a2f2(x) + ...
So the strong classifier is made up of many weak classifiers, since one weak
classifier alone is not good; adding more classifiers makes the algorithm
stronger and more accurate. This is called an ensemble.
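The ensemble idea can be illustrated with a toy sketch: the strong classifier is the sign of a weighted sum of weak classifiers. The weak "stumps" and their weights below are invented for illustration; they are not the trained classifiers of this project.

```python
# Strong classifier as a weighted vote of weak classifiers.
def strong_classifier(x, weak_classifiers, weights):
    score = sum(a * f(x) for a, f in zip(weights, weak_classifiers))
    return 1 if score >= 0 else -1  # sign of the weighted sum

# Three weak "stumps", each only slightly better than chance on its own.
weak = [lambda x: 1 if x > 2 else -1,
        lambda x: 1 if x > 4 else -1,
        lambda x: 1 if x > 6 else -1]
alphas = [0.5, 0.3, 0.2]

print(strong_classifier(5, weak, alphas))  # → 1  (weighted vote is positive)
print(strong_classifier(1, weak, alphas))  # → -1 (all stumps vote negative)
```

In AdaBoost, the weights a_i are learned so that later weak classifiers focus on the samples the earlier ones got wrong.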
Cascading:
3.6 WORKING PRINCIPLE:
The first step is training the classifier, as mentioned earlier. Training a
highly accurate classifier needs a lot of time and processing power, so
here we used only a small number of pictures.
After training the fire cascade classifier, the frame captured from the drone
cam is converted into grayscale. The reason for converting the frame into
grayscale is that the frame captured by the drone cam is in RGB color.
Since RGB images have three channels of color, converting the image
into grayscale leaves only one channel, ranging from black to white, which
is easier to process.
After the conversion, the fire classifier is used to find the features and
locations in the images. Parameters such as the scale factor and min
neighbors are passed; these are essential factors in detecting fire. The
scale factor is used to create a scale pyramid: the classifier is trained on a
fixed image size, so the scale factor allows rescaling the input frame to
detect fire at different sizes.
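The scale-pyramid idea can be sketched as follows. The 24×24 base window and the 1.1 scale factor are assumed illustrative values (OpenCV's defaults are similar); they are not parameters fixed by the report.

```python
# Generate the sequence of effective window sizes the detector scans:
# each step grows the window by the scale factor until it no longer fits.
def pyramid_window_sizes(frame_size, base=24, scale_factor=1.1):
    sizes = []
    w = float(base)
    while w <= frame_size:
        sizes.append(int(w))
        w *= scale_factor
    return sizes

print(pyramid_window_sizes(100))  # → [24, 26, 29, ...]
```

A smaller scale factor scans more sizes (slower but less likely to skip over a fire region); min-neighbors then discards detections that are not confirmed by several overlapping windows.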
3.7 HARDWARE ENVIRONMENT
DRONE
Drones have become increasingly popular in recent years. They are used
for a variety of purposes, including photography, videography, surveying,
inspection, and even delivery. But have you ever wondered how drones
work? In this section, we take a look at the working principle of
drones.
Frame
Battery
Flight controllers
Sensors
• The shape, size, and speed of the propeller and drone depend
on the aerodynamics of the propellers or blades
• Mechanical design
• Strength of materials
• Battery: a low-weight, high-wattage battery is important
• When air flows over an aerofoil, pressure, viscous, and drag forces
act on the profile
• Based on the principle of conservation of energy in fluid flow
(Bernoulli's principle), the sum of all forms of energy in a fluid is
constant along a streamline
• When air flows over an aerofoil or wing, its velocity increases over the
top portion, but the pressure of the air decreases
• Bicopter (2 propellers)
• Tricopter (3 propellers)
• Quadcopter (4 propellers)
• Hexacopter (6 propellers)
• Octocopter (8 propellers)
Quadcopter Dynamics
• The movements of a drone are classified into four types based on the
relative motion between the four propellers: 1) throttle, 2) pitch, 3) roll,
and 4) yaw
Fig 3.6 Quadcopter Dynamics
• If all four propellers run at a speed where lift balances the weight,
the drone holds its height. This is called the hovering of a drone
• If all four propellers run at a higher speed, the drone moves up;
at a lower speed, it moves down
• If the two rear propellers run at high speed, the drone moves
in the forward direction
• If the two front propellers run at high speed, the drone moves
in the backward direction
• If the two right propellers run at high speed, the drone moves
in the left direction
• If the two left propellers run at high speed, the drone moves
in the right direction
• Yaw: the rotation of the head of the drone about the vertical axis
(either to the left or right) is called the yawing motion
Forces and Moments Acting on a Drone
When a drone moves through the air, various forces act on it, and the
resultant force decides its movement. The major forces acting on a drone are:
• Weight: due to the mass of the drone, this body force always acts
in the direction of gravity
• Lift: this force is due to the pressure difference across the drone (in the
vertical direction); the speed, size, and shape of the propeller
blades decide the amount of lift force
• Thrust
• Drag
Kinematics of a Quadcopter
Fi = kf × ωi²
Mx = (F3 − F4) × L
My = (F1 − F2) × L
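The rotor-force and moment relations above can be checked numerically with a small sketch; the thrust coefficient kf, the arm length L, and the rotor speeds below are arbitrary illustrative values, not measured parameters of the drone used in this project.

```python
# F_i = k_f * omega_i^2 for each rotor (omega in rad/s, kf assumed).
def rotor_forces(omegas, kf=1e-5):
    return [kf * w ** 2 for w in omegas]

# M_x = (F3 - F4) * L and M_y = (F1 - F2) * L; F is 1-indexed in the
# report, so F[0] is F1, F[2] is F3, etc.
def moments(F, L=0.25):
    Mx = (F[2] - F[3]) * L
    My = (F[0] - F[1]) * L
    return Mx, My

F = rotor_forces([500, 500, 500, 500])
print(sum(F))      # total thrust of about 10 N at these assumed values
print(moments(F))  # equal speeds → zero roll and pitch moments
```

With all four rotors at equal speed the moments vanish, which matches the hovering condition (mg = F1 + F2 + F3 + F4, all moments = 0) described next.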
Hovering Motion
mg = F1 + F2 + F3 + F4
All moments = 0
• Equation of motion: ma = F1 + F2 + F3 + F4 − mg = 0, so a = 0

Climbing Motion
mg < F1 + F2 + F3 + F4
All moments = 0
• Equation of motion: ma = F1 + F2 + F3 + F4 − mg > 0, so a > 0

Descending Motion
mg > F1 + F2 + F3 + F4
All moments = 0

Yaw Motion
mg = F1 + F2 + F3 + F4
All moments ≠ 0

Pitch/Roll Motion
mg < F1 + F2 + F3 + F4
All moments ≠ 0
Rigid-body dynamics
• Gyroscopic effects: caused by changes in the orientation of the drone
body and the plane of rotation of the propellers.
1. Propellers:
2. Motor
• Both brushless and brushed motors can be used for drones
• An ESC (electronic speed controller) is used to connect the battery to
the electric motor for the power supply
4. Flight Controller (FC)
6. Radio Receiver: Receives the signal from the pilot. This device is
attached to the quadcopter
• When the pilot or autonomous system gives the drone a command, the
flight controller sends signals to the motors to spin the propellers
• The speed and direction of the motors and propellers are adjusted to
achieve the desired movement. The sensors provide data to the flight
controller, which uses it to stabilize the drone in the air and adjust its
movement
1. Read the manual: The first step is to read the drone manual
carefully, as each drone model is unique and has its own set of
instructions.
3. Charge your drone battery: Make sure your drone battery is fully
charged before flying it.
6. Turn on the drone: Turn on the drone and the remote control.
8. Take off: Push the throttle stick slowly and smoothly to take off the
drone.
9. Fly the drone: Use the remote control to maneuver the drone in the
air. Keep it at a safe distance from people and property.
10. Land the drone: When you are ready to land the drone, slowly bring
it down to the ground using the throttle stick.
11. Turn off the drone: After landing the drone, turn off the drone and
the remote control.
Fig 3.10 Drone Kit
• Drones can be a fun and useful tool, but they can also be dangerous if
not used properly. Here are some precautions to keep in mind when
using a drone.
• Before flying a drone, make sure you know the laws and
regulations in your area
3. Fly in open areas
land your drone quickly when necessary
Scope of CFD Modeling for Drone Aerodynamics
• Payload of drone
• Aerodynamics of propellors
Monitor : 15" LED
RAM : 2 GB
3.7.2 SOFTWARE REQUIREMENTS:
Libraries : OpenCV
CHAPTER 4
APPENDIX
import time

import cv2
import imutils
import numpy as np
import requests
from telegram import Bot
from ultralytics import YOLO

#BOT_API = '6769886586:AAE1FacFVhfOfIk0lMBo7fT4i02RbvNhkKM'
TOKEN = '6572810864:AAGTixxJV_eD8C2PcAti_oLl35_SpFSSStQ'
bot = Bot(token=TOKEN)
url = "http://192.168.47.228/cam-hi.jpg"

# Chat ID where you want to send messages
#CHAT_ID = '1592421798'
CHAT_ID = '996965527'

# Running real time from webcam
cap = cv2.VideoCapture(0)
model = YOLO('fire.pt')

# Throttle how often an alert is sent
frame_count = 0
time_interval = 10
start_time = time.time()

while True:
    # Fetch the latest frame from the ESP32 camera
    img_resp = requests.get(url)
    img_arr = np.array(bytearray(img_resp.content), dtype=np.uint8)
    frame = cv2.imdecode(img_arr, -1)
    frame = imutils.resize(frame, width=600)

    # Run the YOLO fire model on the frame
    result = model(frame, stream=True)

    frame_count += 1
    if frame_count % time_interval == 0 or time.time() - start_time > time_interval:
        cv2.imwrite('fire_detected.jpg', frame)
        # Send image to Telegram or perform any other action
        bot.sendPhoto(CHAT_ID,
                      photo=open('C://Users//Jaffer//Desktop//project//fire//fire_detected.jpg', 'rb'),
                      caption='Fire Detected')
        start_time = time.time()

    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
4.2 RESULTS AND DISCUSSION :
Fig 4.2.2 Surveillance Period Fig 4.2.3 ESP 32 Wifi Camera
Fig 4.2.4 Fire detection using web cam
Fig 4.2.6 Telegram Alert Notification
• Since the test was performed under room lighting, the estimations
must be adjusted to the room light to be effective.
CHAPTER 5
5.1 CONCLUSION :
The project aimed to detect fire with a different approach rather than using an
existing system. As technology keeps getting better, the new system was created
both to keep up with it and to minimize the existing limitations. By using image
processing technology for detecting fire, these limitations can be reduced,
because in this system the camera acts like a human eye: as it detects a fire, the
video is captured, the image is processed, and the software alerts the user. It can
be used everywhere, e.g. hospitals, railway stations, forests, etc. The designed
prototype successfully detects fire. The report gives the review analysis, system
design, algorithm, tests, and results.
Currently we have not used systems like smoke detectors and sprinkler water-
discharge systems, but in the future they can be included.
Due to safety reasons, the effectiveness of the system could not be tested in
full, so for the prototype demo a lighter was used as the source of fire, as it has
the same features and characteristics as fire. There is some error in detection
when the fire is far away from the camera because of the size of the flame. The
system can be made more accurate and precise by using other machine learning
frameworks and algorithms such as YOLO (You Only Look Once), TensorFlow,
and Keras, and neural networks like CNN and R-CNN, which are more advanced
than the current algorithm.
REFERENCES:
[5] Z.-J. Zhang, J.-S. Fu, H.-P. Chiang, and Y.-M. Huang, "A novel
aerial vehicles. doi:10.22215/etd/2013-07242.
[7] Akyurek, S., Yılmaz, M.A., and Taskıran, M. (2012), “İnsansız
[8] Chen, T., Wu, P., & Chiou, Y. (2014). An early fire-detection method
[9] Gao, Y., & Cheng, P. (2019). Forest Fire Smoke Detection Based on
[11] Gao, Y., & Cheng, P. (2019). Forest Fire Smoke Detection Based on
[12] Wang, T., Liu, Y., & Xie, Z. (2011). Flutter Analysis Based Video Smoke
doi:10.3724/sp.j.1146.20&10.00
[13] Lin et al., Smoke detection in video sequences based on dynamic
doi:10.3837/tiis.2017.11.019.
[15] Lin et al., Wang, Z., Wang, Z., Zhang, H., & Guo, X. (2017). A