A
Project Report on
Deep Learning-Based Fire and Smoke Detection System with MobileNet
                                   Architecture
             In partial fulfillment of the requirement for the award of Degree of
                            Bachelor of Technology in
                               Artificial Intelligence
                                      Submitted by
                                Anurag Mahakalkar
                                 Shreyash Arghode
                                 Geyesh Barsagade
                                 Abhishek Kongare
                                Under the Guidance of
                                   Er. Shruti Kolte
                     Department of Artificial Intelligence
                       Lokmanya Tilak Jankalyan Shikshan Sanstha’s
             Priyadarshini J. L. College of Engineering, Nagpur
         An Autonomous Institute Affiliated to R.T.M. Nagpur University
                      Accredited with Grade “A” by NAAC
                        846, New Nandanvan Layout, Nagpur – 440 009.
                                 Session 2023-2024
                                     CERTIFICATE
                         This is to certify that the Project entitled
 “Deep Learning-Based Fire and Smoke Detection System with MobileNet
                            Architecture”
                                       is submitted by
                                      Anurag Mahakalkar
                                      Shreyash Arghode
                                      Geyesh Barsagade
                                      Abhishek Kongare
to Rashtrasant Tukadoji Maharaj Nagpur University, in partial fulfillment of the
requirement for the Project in B.Tech. "Artificial Intelligence" for the academic year 2023 –
2024.
This report is a record of the work carried out by them under the requisite guidance, as per
the University Curriculum.
                                    Under the Guidance of
                                       Er. Shruti Kolte
  Prof. B. S. Dhak                                            Dr. A. M. Shende
(Head of Department)                                        (Principal, PJLCE)
                            ACKNOWLEDGEMENT
The success of any work depends on the efforts of many individuals. We would like to take this
opportunity to express our deep gratitude to those who extended their support and guided us
to complete this project work.
We wish to express our sincere and deepest gratitude to our guide Er. Shruti Kolte for
her invaluable and unique guidance. We would also like to thank her for being a constant
source of help, inspiration, and encouragement in the successful completion of the project. It has
been our privilege and pleasure to work under her expert guidance.
We are sincerely thankful to Dr. A. M. Shende (Principal, PJLCE) for his invaluable assistance
and mentorship, which have played a significant role in the successful completion of this
project. His dedication to fostering academic excellence has been a constant source of
inspiration.
We would also like to thank Prof. B. S. Dhak (HOD) for providing us with the necessary information
about the topic, as well as the help and facilities we needed.
We express our thanks to all the staff members of the AI department who have directly or
indirectly extended their kind co-operation in the completion of our project report.
Date:                                                     Projectee’s Name
                                                          Anurag Mahakalkar
                                                          Shreyash Arghode
                                                          Geyesh Barsagade
                                                          Abhishek Kongare
ABSTRACT
One of the most frequent yet undesirable phenomena brought on by climate change and rising temperatures is
fire, whether as wildfires or as fires in other areas. Therefore, there is a need for advanced yet user-friendly
systems that, at the very least, enable the effective use of contemporary tools and solutions. Fire and smoke detection are crucial tasks
in ensuring the safety and security of various environments. In this project, we present a comprehensive solution
for fire and smoke detection using deep learning techniques. The project is developed in Python, utilizing the
powerful capabilities of the MobileNet architecture. The main objective of this project is to accurately identify
fire and smoke instances in different scenarios, including images, videos, and real-time webcam feeds. The high
accuracy indicates the model’s ability to effectively classify fire, smoke, and normal instances, enabling reliable
detection in various contexts. The proposed system allows for multi-purpose detection, providing real-time
analysis of images, videos, and live webcam feeds. This versatility ensures the applicability of the solution in a
wide range of scenarios, such as surveillance systems, fire alarm systems, and emergency response
management. Overall, this project contributes to the field of fire and smoke detection by leveraging deep
learning techniques and the MobileNet architecture. The developed system offers an efficient and accurate
solution for identifying fire and smoke instances in different visual media, thus enhancing safety and security
measures in various environments.
LIST OF FIGURES
  FIGURES                      FIGURE NAME
    5.1     Block Diagram of Real Time Fire Detection
    5.2     Phases of Model
    5.3     Home Page
    5.4     Login Page
    5.5     Upload The Image
    5.6     Detect the Image
    5.7     Normal Detection
    5.8     Fire Detection
INDEX
   Sr. no.                          Title
        1.   Introduction
        2.   Literature Review
        3.   Problem Definition
        4.   Tools & Platform
        5.   Methodology
                5.1 Block Diagram
                5.2 Methods
                5.3 Algorithms
                5.4 Techniques
                5.5 Screenshots of Project
        6.   Applications
        7.   Conclusion
        8.   Future Scope
             References
1. INTRODUCTION
       At present, a large number of fire accidents occur every year and cause serious economic losses. One critical
application of deep learning is in the development of fire and smoke detection systems, which are essential for
early warning and prevention of disasters in various environments, such as industrial facilities, commercial
buildings, and even residential homes. This introduction will provide an overview of a cutting-edge fire and
smoke detection system using the MobileNet architecture, highlighting the significance and benefits of this
approach. Fire and smoke incidents can lead to catastrophic consequences, including loss of life, property
damage, and environmental hazards.
       Traditional fire detection systems, often reliant on rule-based algorithms or human intervention, have
limitations in terms of accuracy and responsiveness. Deep learning-based systems have emerged as a promising
solution, leveraging the power of neural networks to detect fire and smoke with high precision and speed. Deep
learning techniques have demonstrated remarkable capabilities in image recognition and classification tasks.
Convolutional Neural Networks (CNNs) are particularly well-suited for visual data analysis, making them ideal
for fire and smoke detection.
       These systems can quickly analyze images or video streams and identify potential fire or smoke patterns,
allowing for rapid response and mitigation. MobileNet is a deep learning architecture specifically designed for
resource-constrained devices, such as smartphones and embedded systems. Its primary advantage lies in its
efficiency and speed while maintaining competitive accuracy. MobileNet employs depthwise separable
convolutions, which reduce the computational burden, making it suitable for real-time applications. These
characteristics make MobileNet a natural choice for deploying fire and smoke detection systems on mobile
devices and edge devices.
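To make the efficiency argument concrete, the following minimal sketch compares a standard convolution with the
depthwise separable convolution that MobileNet is built on. It uses the TensorFlow/Keras API; the filter counts and
input size are illustrative assumptions, not values taken from this project.

# Minimal sketch (assumed layer sizes): standard convolution vs. the
# depthwise separable convolution used as MobileNet's building block.
import tensorflow as tf
from tensorflow.keras import Input, Model, layers

inp = Input(shape=(224, 224, 3))

# Standard 3x3 convolution: mixes spatial and channel information in one step.
std = Model(inp, layers.Conv2D(64, 3, padding="same")(inp))

# Depthwise separable convolution: a per-channel 3x3 spatial filter
# followed by a 1x1 pointwise convolution across channels.
x = layers.DepthwiseConv2D(3, padding="same")(inp)
x = layers.Conv2D(64, 1)(x)
sep = Model(inp, x)

# The separable version needs far fewer weights (and multiply-adds),
# which is what makes MobileNet suitable for real-time edge deployment.
print(std.count_params(), sep.count_params())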
2. LITERATURE SURVEY
        Wangda Zhao (2020) [1] developed an image fire detection technique using the advanced object detection
CNN models Faster R-CNN, R-FCN, SSD, and YOLO v3. The proposed algorithms can automatically extract
intricate image fire attributes and find fire in a range of scenarios. The author also asserted that the CNN-based
algorithms are more accurate than conventional algorithms. The YOLO v3-based algorithm is the most accurate
of the CNN models, detecting fire with 83.7 percent accuracy at 28 frames per second.
        K. Muhammad, J. Ahmad, I. Mehmood, S. Rho & S. W. Baik (2018) [2] suggested a convolutional
neural network (CNN) architecture for video surveillance that is cost-effective in detecting fires. The proposed
model mainly concentrated on detection precision and computational complexity. The GoogLeNet design,
which has lower computational complexity than other networks with high computational costs such as AlexNet,
served as the model's primary source of inspiration. According to the authors, the suggested framework performs
better on fire datasets and is appropriate for fire detection in CCTV security systems in practical applications.
        Yuming Li, Wei Zhang, Yanyan Liu & Yao Jin (2022) [3] developed a fast and effective fire detection
model using the convolutional neural network MobileNetV3 and an anchor-free structure. The suggested strategy
performs better in two respects: the model is small enough to be readily deployed on mobile vision devices, and
its compact size also boosts network speed. The model's accuracy was tested on two publicly available fire
datasets as well as self-built datasets. The proposed framework's top speed of 29.5 f/s can meet real-time
detection requirements, making it appropriate for fire detection systems in practical applications.
        Arpit Jadon, Akshay Varshney, Mohammad Samar Ansari (2020) [4] proposed a convolutional neural
network model based on the MobileNetV2 architecture to address fire detection difficulties. In addition to a
novel MobileNetV2-based architecture that outperforms current options while remaining computationally
realistic for implementation on less powerful hardware, the authors also introduce a more transparent data
handling technique. The metrics Accuracy, Precision, Recall, and F-Measure were used to assess this model's
performance against current convolutional neural network models on two datasets. The accuracy of the
suggested model is 0.99 (99%), the highest of all the compared models.
        Myeongho Jeon, Han-Soo Choi, Junho Lee, Myungjoo Kang (2021)[5] By emphasising the different
sizes of flames in photographs, the author suggested a framework that improves the current Convolutional
neural network-based fire image classification model. In order to include feature maps of different scales in the
final prediction, the author suggested using a feature-squeeze block. The feature-squeeze block compresses the
feature maps both spatially and channel-wise, allowing for effective use of the data from the multi-scale
prediction. A false positive rate of 0.0227% and an F1-score of 97.89% were obtained from the experiment
using the provided methodology.
        Qingjie Zhang, Jiaolong Xu, Haifeng Guo (2016, January) [6] suggested a deep learning approach for
identifying forest fires. In a joint deep convolutional neural network (CNN), the authors trained a full-image
classifier as well as a fine-grained patch fire classifier. The system operates in a cascading manner to detect
fire. On the training and test datasets, the proposed fire patch detector achieves detection accuracy of 97% and
90%, respectively.
        Mohit Dua, Mandeep Kumar, Gopal Singh Charan, Parre Sagar Ravi (2020, February) [7] used a deep
CNN technique in conjunction with transfer learning to detect fire. Here, unbalanced datasets that mimic
real-world circumstances are used to evaluate and compare the various CNN models. The authors claimed that
deep CNN models outperform more conventional CNN models in terms of performance. Finally, they noted that
while MobileNet is quicker and smaller than VGGNet, its accuracy is approximately equivalent.
3. PROBLEM DEFINITION
3.1 Objective
The primary goal is to develop a system capable of detecting fire and smoke in images or videos using deep
learning techniques, specifically leveraging the MobileNet architecture.
3.2 Scope
The system should be able to detect fire and smoke in various scenarios, including indoor and outdoor
environments, different lighting conditions, and different scales of fire and smoke.
3.3 Challenges
Class Imbalance: There may be fewer images containing fire and smoke than normal images, requiring
techniques to handle class imbalance (one such technique, class weighting, is sketched after this list).
Real-time Processing: For applications requiring real-time detection, the system must be efficient enough to
process images or videos quickly.
Environmental Variability: The system should be robust to variations in environmental conditions such as
lighting, weather, and camera angles.
Accuracy and False Positives: Ensuring high accuracy in detection while minimizing false positives is crucial
for reliability.
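As referenced under the class imbalance challenge above, one common remedy is to weight the training loss by
class frequency. The sketch below shows how such weights could be passed to Keras; the class counts and the
training call are placeholders, not the project's actual dataset statistics or script.

# Sketch (assumed class counts): derive per-class weights so the rarer
# fire/smoke classes contribute more to the loss during training.
counts = {"fire": 900, "smoke": 925, "normal": 2000}   # placeholder counts
total = sum(counts.values())
class_weight = {i: total / (len(counts) * n)
                for i, n in enumerate(counts.values())}

# Hypothetical training call; `model`, `train_ds` and `val_ds` stand in for
# the compiled classifier and the prepared datasets.
# model.fit(train_ds, validation_data=val_ds, epochs=10,
#           class_weight=class_weight)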
3.4 Datasets
The system will require a dataset of images or videos labeled with fire and smoke annotations for training and
validation.
3.5 Evaluation Metrics
Performance metrics such as accuracy, precision, recall, and F1 score will be used to evaluate the effectiveness
of the detection system.
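These metrics can be computed directly from predicted and true labels. The snippet below is a minimal sketch
using scikit-learn; the label arrays and class encoding are placeholders standing in for the project's test set.

# Sketch (placeholder labels): computing the evaluation metrics.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 1, 0, 2, 1]   # 0 = normal, 1 = fire, 2 = smoke (assumed)
y_pred = [0, 1, 2, 2, 0, 2, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1 score :", f1_score(y_true, y_pred, average="macro"))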
3.6 Integration
The system should be designed to integrate into existing fire detection or monitoring systems, or as a standalone
application.
3.7 Ethical Considerations
Consideration should be given to potential ethical implications, such as privacy concerns when using video
data, and ensuring the system does not contribute to discrimination or bias.
4. TOOLS AND PLATFORMS USED
4.1 Development Platform
•  Visual Studio Code (VS Code): A free, open-source, lightweight, and extensible source code editor made by
   Microsoft, used as the development environment for this project.
4.2 Programming Languages
•  Python: The primary programming language of the project, used for the model training scripts and the core
   functionality of the detection application.
4.3 Machine Learning Libraries
•  TensorFlow: The deep learning framework employed for implementing the MobileNet-based fire and smoke
   detection model.
•  NumPy: A Python library used for working with arrays.
4.4 Anaconda
•  Anaconda: An open-source distribution of the Python and R programming languages for data science, used to
   manage packages and environments.
4.5 Design and UI/UX
•  Figma: Tool used for designing the user interface and user experience (UI/UX) of the module.
4.6 Operating System
•  Windows: The target platform for the application, used for both development and deployment.
4.7 Devices
•  Physical Devices: A physical computer and camera used for testing the application.
5. METHODOLOGY
       In this study, a system that uses convolutional neural networks to detect fire is proposed. The application of
artificial intelligence to video from CCTV footage is critically examined in this research. The dataset used in
this work includes video frames containing fire.
5.1 Block Diagram
       The project's architecture consists of several interconnected modules, including Image Capture, Image
Processing, Machine Learning, Real-time Analysis, and User Interface. These modules work collaboratively to
ensure the efficient detection of fire and smoke.
                                  Fig 5.1: Block Diagram of Real Time Fire Detection
5.2 Methods
1. Image Acquisition: This is the process of obtaining an image from a source. Hardware systems such as cameras,
encoders, and sensors can be used for this. In this method, a camera is used to transfer the image into a computer,
where it is stored in binary form. The images are captured at 8 megapixels and stored in the JPG compression
format. The data is gathered in the morning, afternoon, and evening during daylight hours.
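A minimal sketch of this acquisition step with OpenCV is shown below; the camera index and output filename are
assumptions, not values prescribed by the project.

# Sketch (assumed device index and filename): grab one frame from an
# attached camera and store it as a JPG file for the dataset.
import cv2

cap = cv2.VideoCapture(0)              # first attached camera
ok, frame = cap.read()                 # one BGR frame in memory
if ok:
    cv2.imwrite("capture.jpg", frame)  # save in the JPG compression format
cap.release()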
2. Pre-processing: Pre-processing a digital image is a phase in image enhancement and development. This
phase follows three steps:
Read image: First, the libraries are imported, then a function is constructed to load image folders into arrays
after storing the path to the image dataset in a variable.
Resize image: Next, a function is created that takes an image as its argument. A common size must be defined
for all images fed into the algorithm, since images taken by cameras and submitted to the algorithm vary in size.
Edge detection: Edge detection is the process of identifying an image's edges. The original picture contains all
nuances, including colour, texture, and backdrop, whereas the edge image shows only the image's edges.
Histogram Equalization: Histogram equalisation is an image processing technique used to boost contrast in
pictures. It achieves this by spreading out the most common intensity values, effectively extending the intensity
range of the image.
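A compact sketch of these pre-processing steps using OpenCV follows; the file path, target size, and Canny
thresholds are illustrative assumptions.

# Sketch (assumed path, size and thresholds) of the pre-processing phase:
# read, resize, edge detection and histogram equalisation.
import cv2

img = cv2.imread("capture.jpg")            # read image from disk
img = cv2.resize(img, (224, 224))          # common input size for the model

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)          # keep only the image's edges
equalised = cv2.equalizeHist(gray)         # spread out common intensities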
3. Feature Extraction: The feature extraction approach is beneficial when less processing power is required
without losing important or relevant data. Also, by minimizing the amount of redundant data, feature extraction
might assist in the analysis process. Reducing the data allows the computer to build combinations of variables
(features), which also hastens the knowledge and generalization phases of the training method.
4. Image Labeling and Dataset Distribution: Every image was independently labelled twice. In a Picture
Archiving and Communication System (PACS), the labels were first assessed using the original images and then
checked again using the compressed images that served as the reference for the learning data. The test data were
divided into two categories: an internal dataset and a temporal dataset.
                                      Fig 5.2: Phases of Model
5.3 Algorithm
1. Convolutional Neural Network Architecture: Convolutional, pooling, and fully connected layers make
up the conventional architecture of a CNN.
Convolution Layer: This layer carries out a dot product between two matrices, where one matrix is the
kernel (a collection of learnable parameters) and the other is the restricted region of the input patch.
Pooling Layer: By calculating an aggregate statistic from the surrounding outputs, the pooling layer replaces
the network's output at specific locations. This helps shrink the representation's spatial size, which
lowers the amount of computation and the number of weights needed.
Fully Connected Layer: This layer takes the flattened output of the convolutional and pooling layers and maps
it to the final class scores.
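The classifier used in this project builds such layers on top of MobileNet. The sketch below shows one way this
could be assembled with Keras transfer learning; the frozen ImageNet backbone, head sizes, and three-class output
(fire, smoke, normal) are assumptions consistent with the report, not the exact training script.

# Sketch (assumed head and hyperparameters): a fire/smoke/normal classifier
# on top of a pretrained MobileNet backbone.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # reuse the convolutional features

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # pooling layer
    layers.Dense(128, activation="relu"),   # fully connected layer
    layers.Dense(3, activation="softmax"),  # fire / smoke / normal scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])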
5.4 Techniques
•  Image Processing Techniques: These include techniques such as noise reduction, contrast enhancement, and
   sharpening to prepare the images for analysis.
•  Machine Learning Techniques: The project uses supervised learning techniques for training the machine
   learning model, ensuring that it can accurately differentiate between fire, smoke, and normal scenes.
5.5 Screenshots of Project
1. Home Page Module: This is the home page module, opened by entering the application's IP address in the browser.
                                            Fig 5.3: Home Page
2. Login Page Module: This is the module where the user can log in to the project.
                                            Fig 5.4: Login Page
3. Detecting the Image: In this module, we upload an image and the project detects whether the image contains
fire or smoke.
                                       Fig 5.5: Upload The Image
                                        Fig 5.6: Detect the Image
4. Live Detection of Fire & Smoke: In this module, the project shows live detection of fire, smoke, and normal
scenes from the webcam (a sketch of the detection loop follows Fig. 5.8).
                                      Fig 5.7: Normal Detection
                                          Fig 5.8: Fire Detection
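As noted above, a minimal sketch of such a live-detection loop is given below; the saved-model path, class-name
ordering, and preprocessing are assumptions for illustration rather than the project's exact code.

# Sketch (assumed model path, class order and preprocessing): classify
# webcam frames with the trained model and overlay the predicted label.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("fire_smoke_mobilenet.h5")  # assumed path
CLASS_NAMES = ["fire", "normal", "smoke"]                      # assumed order

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 0, 255), 2)
    cv2.imshow("Fire & Smoke Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break
cap.release()
cv2.destroyAllWindows()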
6. APPLICATIONS
•  Early Fire Detection: The system can be used in buildings, forests, or industrial settings to detect fires at an
   early stage, enabling timely intervention and reducing the risk of extensive damage or loss of life.
•  Fire Monitoring in Hazardous Environments: In environments where human presence is hazardous, such as
   nuclear facilities or chemical plants, the system can continuously monitor for fires without putting human
   operators at risk.
•  Wildfire Detection and Management: In forested areas, the system can help detect wildfires early, allowing
   authorities to respond quickly and potentially prevent the spread of the fire to populated areas.
•  Industrial Safety: In industrial settings, the system can monitor for fires or smoke that may indicate machinery
   malfunction or other hazards, enhancing overall safety measures.
•  Smart Home Fire Detection: Integrated into smart home systems, the system can provide early warning of
   fires or smoke, triggering alarms and notifications to homeowners or emergency services.
•  Public Safety and Security: In public spaces such as airports, train stations, or stadiums, the system can
   enhance security measures by detecting fires or smoke that may pose a threat to public safety.
•  Insurance and Risk Assessment: Insurance companies can use the system for risk assessment and mitigation,
   providing discounts to policyholders who have the system installed in their properties.
•  Environmental Monitoring: The system can be used for environmental monitoring, detecting wildfires or
   industrial fires that may have environmental impacts.
•  Traffic and Transportation Safety: In tunnels, bridges, or other transportation infrastructure, the system can
   monitor for fires or smoke that may affect safety and traffic flow.
•  Search and Rescue Operations: In disaster scenarios such as earthquakes or building collapses, the system
   can help identify locations where fires are burning, aiding search and rescue operations.
7. CONCLUSION
       In conclusion, our project on fire and smoke detection using the MobileNet architecture and deep
learning techniques has proven to be a significant advancement in the field of computer vision and safety
systems. Throughout the course of this project, we have achieved several key milestones and have addressed
the primary objectives set forth at the project's inception.
       Our system's core strength lies in its ability to provide highly accurate and reliable fire and smoke
detection across various media types, including static images, video streams, and real-time webcam feeds. The
system's versatility and adaptability make it applicable to a wide range of scenarios, from surveillance systems
to fire alarm systems and emergency response management.
       We trained the system on a robust dataset containing 3825 images, which has contributed to its
impressive training accuracy of 97.00% and validation accuracy of 94.00%. This high level of accuracy ensures
minimal false positives and false negatives, enhancing the system's utility and trustworthiness. The choice of
the MobileNet architecture has proven to be efficient without compromising on accuracy. Its computational
speed allows for real-time fire and smoke detection, further enhancing its suitability for critical applications.
       Overall, our project has succeeded in developing a sophisticated and capable fire and smoke detection
system. By harnessing the power of deep learning and MobileNet architecture, we have created a solution that
significantly enhances safety and security measures in diverse environments. This project's outcomes serve as
a testament to the potential of advanced technologies in addressing critical real-world challenges and underscore
the importance of continued innovation in the field of computer vision and safety systems.
8. FUTURE SCOPE
•  Improved Accuracy and Efficiency: Continued research and development can lead to improved model
   architectures, training techniques, and algorithms, resulting in higher accuracy and faster detection speeds.
•  Integration with IoT and Smart Cities: Integration with Internet of Things (IoT) devices and smart city
   infrastructure can enhance the system's capabilities, enabling real-time monitoring and response to fire and
   smoke incidents.
•  Edge Computing and Low-Power Devices: Optimization for edge computing and deployment on low-power
   devices can enable the system to be deployed in resource-constrained environments such as remote areas or
   on drones for aerial monitoring.
•  Multimodal Sensing: Integration with other sensing modalities such as thermal imaging, gas sensors, or sound
   detection can improve the system's ability to detect fires and smoke in diverse environments and conditions.
•  Adaptation to Dynamic Environments: Continued research in adaptive learning and transfer learning can
   enable the system to adapt to dynamic environmental conditions and new types of fires or smoke sources.
•  Enhanced Robustness and Resilience: Research in adversarial machine learning and robust deep learning can
   enhance the system's robustness against attacks and ensure reliable operation in challenging conditions.
•  Privacy-Preserving Techniques: Development of privacy-preserving techniques can address concerns about
   the use of surveillance cameras for fire detection, ensuring that privacy is maintained while still enabling
   effective detection.
•  Global Deployment for Disaster Management: Deployment of the system on a global scale can aid in disaster
   management and response, providing early warning and assistance in areas prone to wildfires, industrial
   accidents, or natural disasters.
•  Integration with Autonomous Systems: Integration with autonomous vehicles, drones, or robots can enable
   the system to provide real-time situational awareness and assist in firefighting or rescue operations.
•  Cross-Domain Applications: Exploration of cross-domain applications, such as medical imaging for early
   detection of diseases or monitoring of agricultural fields for crop health, can further expand the system's
   impact and utility.
REFERENCES
[1] Wangda Zhao, (2020). Image fire detection algorithms based on convolutional neural networks. Case
   Studies in Thermal Engineering, vol. 19, 100625.
[2] K. Muhammad, J. Ahmad, I. Mehmood, S. Rho., & S. W. Baik (2018). Convolutional neural network-based
   fire detection in surveillance videos. IEEE Access, vol. 6, 18174-18183
[3] Yuming Li, Wei Zhang, Yanyan Liu & Yao Jin (2022). A visualized fire detection method based on a
   convolutional neural network beyond anchor. Applied Intelligence, vol. 52, issue 11, September 2022,
   pp. 13280-13295.
[4] Arpit Jadon, Akshay Varshney, Mohammad Samar Ansari, (2020). The low-complexity high-performance
   deep learning model for real-time low-cost embedded fire detection systems. Procedia Computer Science,
   vol. 171, 418-426.
[5] Myeongho Jeon, Han-Soo Choi, Junho Lee, Myungjoo Kang (2021). Multi-scale prediction for fire
   detection using convolutional neural network. Fire Technology, 57(5), 2533-2551.
[6] Qingjie Zhang, Jiaolong Xu, Haifeng Guo (2016, January). Deep convolutional neural networks for forest
   fire detection. In Proceedings of the 2016 international forum on management, education and information
   technology application. Atlantis Press.
[7] Mohit Dua, Mandeep Kumar, Gopal Singh Charan, Parre Sagar Ravi (2020, February). An improved
   approach for fire detection using deep learning models. In 2020 International Conference on Industry 4.0
   Technology (I4Tech) (pp. 171-175). IEEE.
[8] Z. Jiao, Y. Zhang, J. Xin, L. Mu, Y. Yi, H. Liu, & D. Liu (2019, July). A deep learning-based forest fire
   detection approach using UAV and YOLOv3. In 2019 1st International conference on industrial artificial
   intelligence (IAI) (pp. 1-5). IEEE.
[9] F. Saeed, A. Paul, Karthi Kumar & A. Nayyar (2020). Convolutional neural network-based early fire
   detection. Multimedia Tools and Applications, 79(13), 9083-9099.