A MINI PROJECT REPORT
On
DROWSINESS DETECTION SYSTEM
Submitted to
OSMANIA UNIVERSITY
In partial fulfillment of the requirements for the award of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING (AI & ML)
BY
SARTHAK JENA 245521748301
CHERUKU SHASHANK 245521748012
Under the esteemed guidance of
Mr. P. NARESH KUMAR
ASSISTANT PROFESSOR
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (AI & ML)
KESHAV MEMORIAL ENGINEERING COLLEGE
(Approved by AICTE, New Delhi & Affiliated to Osmania University, Hyderabad)
D.No. 10 TC-111, Kachavanisingaram (V), Ghatkesar (M), Medchal-Malkajgiri, Telangana – 500088
(2023-2024)
KESHAV MEMORIAL ENGINEERING COLLEGE
Department of Computer Science and Engineering (AI & ML)
CERTIFICATE
This is to certify that the project report entitled “DROWSINESS DETECTION SYSTEM”, being
submitted by SARTHAK JENA (245521748301) and CHERUKU SHASHANK (245521748012)
under the guidance of Mr. P. NARESH KUMAR, in partial fulfillment of the requirements for the
award of the Degree of Bachelor of Engineering in Computer Science and Engineering (AI & ML)
to Osmania University, is a record of bonafide work carried out by them under my
guidance and supervision. The results embodied in this project report have not been submitted to
any other University or Institute for the award of any graduation degree.
Mr. P. NARESH KUMAR
Assistant Professor,
Internal Guide,
CSE (AI & ML) Dept.

Dr. B. Devender
Professor,
Head of the Department,
CSE (AI & ML) Dept.
EXTERNAL EXAMINER
Submitted for Viva Voce Examination held on _____________________________________
Vision & Mission of KMEC
Vision of KMEC:
To be a leader in producing industry-ready and globally competent engineers to make India a world
leader in software products and services.
Mission of KMEC:
1. To provide a conducive learning environment that builds problem-solving skills, professional
and ethical standards, and lifelong learning through multimodal platforms, and prepares students
to become successful professionals.
2. To forge industry-institute partnerships to expose students to the technology trends, work culture
and ethics in the industry.
3. To provide quality training to the students in the state-of-art software technologies and tools.
4. To encourage research-based projects/activities in emerging areas of technology.
5. To nurture entrepreneurial spirit among the students and faculty.
6. To induce the spirit of patriotism among the students that enables them to understand India’s
challenges and strive to develop effective solutions.
Vision & Mission of CSE (AI & ML)
Vision of the CSE (AI & ML):
To be a global center of excellence in Artificial Intelligence and Machine Learning, producing
socially responsible graduates, excelling in education, research, and innovation for transformative
societal impact.
Mission of the CSE (AI & ML):
1. To cultivate global expertise in Artificial Intelligence and Machine Learning for lifelong impact.
2. To lead in ethical innovation, training experts in cutting-edge Artificial Intelligence technologies.
3. To provide top-tier education in Artificial intelligence, fostering innovation and ethics.
4. To establish Centers of Excellence in Artificial Intelligence and Machine Learning, emphasizing
Research and Development collaboration, professional development, and community
engagement.
5. To create an open growth environment, producing Industry-ready graduates, and partnering
globally in technical education and research.
6. To be a global Center of Excellence for Artificial intelligence, promoting industry collaboration
and instilling self-learning, team work, and professional ethics.
PROGRAM OUTCOMES (POs)
1. Engineering Knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences,
and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for
public health and safety, and cultural, societal, and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools, including prediction and modeling, to complex engineering activities with
an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to professional
engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering solutions
in societal and environmental contexts and demonstrate the knowledge of, and need for sustainable
development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of
the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as being able to comprehend and write effective reports
and design documentation, make effective presentations, and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the engineering
and management principles and apply these to one's own work, as a member and leader in a team, to
manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
PROGRAM EDUCATIONAL OBJECTIVES (PEOs)
PEO-1: Graduates can apply foundational computer science knowledge adeptly to solve real-world
challenges in professional roles or advanced academic pursuits.
PEO-2: Graduates can utilize a comprehensive understanding of computer science and related
engineering disciplines for success in diverse careers through effective collaboration and trade-off
navigation.
PEO-3: Graduates can adapt quickly to evolving technological landscapes, applying ethical
considerations and actively engaging in continuous learning and professional development.
PEO-4: Graduates can demonstrate exceptional communication, collaboration, and professionalism,
leveraging specialized knowledge to contribute significantly to specific disciplines and foster societal
progress.
PROGRAM SPECIFIC OUTCOMES (PSOs)
PSO1: Apply Artificial Intelligence and Machine Learning knowledge to design automation solutions
for real-world challenges in software development.
PSO2: Demonstrate expertise in algorithmic design and contribute to the development of optimized
solutions in Artificial Intelligence, Machine Learning and Emerging technologies.
PSO3: Utilize Artificial Intelligence and Machine Learning principles to design intelligent subsystems,
address real-world business problems, and adapt to the dynamic Artificial Intelligence landscape with
ethical values.
PROJECT OUTCOMES
P1: Do a literature survey / industrial visit and identify the problem statement.
P2: Apply new technologies and design techniques (platform, database, etc.) for devising a solution
to the given problem statement.
P3: Apply project management skills (scheduling work, procuring parts, documenting expenditures,
and working within the confines of a deadline).
P4: Work with teammates, sharing due and fair credit, and collectively apply effort to make the
project successful.
P5: Communicate technical information by means of written and oral reports.
1 – LOW
2 - MEDIUM
3 - HIGH
PROJECT OUTCOMES MAPPING PROGRAM OUTCOMES:
PO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
P1 1 3 1 1 2 1 1 2 1 2 1 1
P2 3 1 1 3 1 2 2 1 3 1 3 1
P3 1 2 3 1 2 1 3 1 1 3 3 2
P4 2 2 1 2 3 1 3 2 3 1 3 1
P5 3 1 3 3 2 3 1 1 3 3 1 2
PROJECT OUTCOMES MAPPING PROGRAM SPECIFIC OUTCOMES:
PSO PSO1 PSO2 PSO3
P1 3 2 2
P2 2 1 3
P3 1 2 3
P4 1 2 3
P5 2 3 2
PROJECT OUTCOMES MAPPING PROGRAM EDUCATIONAL OUTCOMES:
PEO PEO1 PEO2 PEO3 PEO4
P1 1 1 2 3
P2 2 3 1 2
P3 3 2 2 1
P4 3 1 3 1
P5 1 1 2 3
DECLARATION
This is to certify that the mini project titled “DROWSINESS DETECTION SYSTEM” is a
bonafide work done by us in partial fulfillment of the requirements for the award of the degree of
Bachelor of Engineering in the Department of Computer Science and Engineering (AI & ML), and
submitted to the Department of CSE (AI & ML), Keshav Memorial Engineering College, Hyderabad.
We also declare that this project is a result of our own effort and has not been copied or
imitated from any source. Citations from any websites are mentioned in the bibliography. This
work was not submitted earlier at any other university for the award of any degree.
SARTHAK JENA (245521748301)
CHERUKU SHASHANK (245521748012)
ACKNOWLEDGEMENT
This is to place on record our appreciation and deep gratitude to the persons without whose support
this project would never have been this successful.
We are grateful to Mr. Neil Gogte, Founder Director, for facilitating all the amenities required
for carrying out this project.
It is with immense pleasure that we express our deep gratitude to the respected
Prof. P.V.N Prasad, Principal, Keshav Memorial Engineering College, for providing great support
and for giving us the opportunity to carry out this project.
We express our sincere gratitude to Mrs. Deepa Ganu, Director Academics, for providing an
excellent environment in the college.
We would like to take this opportunity to specially thank Dr. Birru Devender, Professor & HoD,
Department of CSE (AI & ML), Keshav Memorial Engineering College, for inspiring us all the way and for
arranging all the facilities and resources needed for our project.
We would like to take this opportunity to thank our internal guide, Mr. P. Naresh Kumar, Assistant
Professor, Department of CSE (AI & ML), Keshav Memorial Engineering College, who
guided and encouraged us at every step of the project work. His valuable moral support and
guidance throughout the project helped us to a great extent.
We would like to take this opportunity to specially thank our Project Coordinator, Mrs.
Gayathri Tippani, Assistant Professor, Department of CSE (AI & ML), Keshav Memorial
Engineering College, who guided us through the successful completion of our project.
Finally, we express our sincere gratitude to all the members of the faculty of Department of
CSE (AI & ML), our friends and our families who contributed their valuable advice and helped us to
complete the project successfully.
SARTHAK JENA (245521748301)
CHERUKU SHASHANK (245521748012)
CONTENTS
I. PROBLEM STATEMENT
II. ABSTRACT
1. INTRODUCTION
   1.1 Introduction about the Concept
   1.2 Existing System and Disadvantages
   1.3 Literature Review
   1.4 Proposed System and Advantages
2. SYSTEM ANALYSIS
   2.1 Feasibility Study
   2.2 System Requirements
       2.2.1 Hardware Requirements
       2.2.2 Software Requirements
       2.2.3 Functional Requirements
       2.2.4 Non-functional Requirements
3. SYSTEM DESIGN
   3.1 Introduction
   3.2 Modules and Description
   3.3 Block Diagram
   3.4 UML Diagrams
       3.4.1 Class Diagram
       3.4.2 Use Case Diagram
       3.4.3 Data Flow Diagram
       3.4.4 Sequence Diagram
       3.4.5 Activity Diagram
       3.4.6 State Chart Diagram
4. SYSTEM IMPLEMENTATION
   4.1 Description of Platform, Database, Technologies, Methods, and Applications
       used and involved in the development
5. SYSTEM TESTING
   5.1 Test Plan
   5.2 Scenarios
   5.3 Output Screens
6. CONCLUSION AND FUTURE SCOPE
   6.1 Conclusion
   6.2 Future Scope
7. REFERENCES
   7.1 Bibliography
   7.2 Web References
8. APPENDIX
   Annexure-1: Sample Coding
   Annexure-2: List of Figures
   Annexure-3: List of Output Screens
PROBLEM STATEMENT
Drowsiness poses a significant risk in activities such as driving, where it can lead
to accidents due to impaired reaction times. Existing solutions for drowsiness
detection often require intrusive or costly hardware. The challenge is to develop a
non-intrusive, real-time drowsiness detection system using computer vision
techniques that can accurately identify signs of drowsiness solely from facial
features, particularly eye movements, thus enhancing safety and reducing
accidents.
ABSTRACT
Drowsiness detection systems are crucial for preventing accidents caused by
tiredness or sleepiness, particularly in contexts like driving or operating
machinery. This report presents a real-time drowsiness detection system utilizing
computer vision techniques. By analyzing the eye aspect ratio (EAR) derived from
facial landmarks detected in a live video stream, the system can identify when a
person's eyes exhibit signs of drowsiness. When the EAR falls below a predefined
threshold for a sustained period, an alert is triggered to warn the individual. The
system leverages the dlib library for face detection and landmark prediction, as
well as OpenCV for image processing tasks. This approach offers the advantages
of real-time monitoring, non-intrusiveness, cost-effectiveness, and accuracy in
detecting drowsiness, thereby contributing to improved safety in various
scenarios.
1. INTRODUCTION
1.1 Introduction about the Concept
The concept revolves around utilizing computer vision algorithms to monitor
facial features, particularly the eyes, to detect signs of drowsiness. The eye aspect
ratio (EAR) is a measure of how open or closed the eyes are, which can indicate
levels of alertness or drowsiness. By continuously analyzing the EAR in real-time
video feeds, the system can raise an alert when it detects signs of drowsiness, thus
helping to prevent accidents.
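As a worked form of this measure (using the six eye landmarks p1 to p6 in the same ordering as the sample code in Annexure-1, with p1 and p4 at the eye corners and p2, p3, p5, p6 on the upper and lower eyelids), the EAR for one eye is:

EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)

The two vertical eyelid distances shrink towards zero as the eye closes while the horizontal width stays roughly constant, so a falling EAR directly signals eye closure; this is the quantity that the later chapters compare against a threshold.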
1.2 Existing System and Disadvantages
Existing systems for drowsiness detection often rely on wearable devices or
physiological sensors, which may not be practical or comfortable for users. These
systems also tend to be costly and may have limited accuracy. Additionally, they
may not offer real-time monitoring capabilities, which is crucial for preventing
accidents.
1.3 Literature Review
Previous research in drowsiness detection has explored various methods,
including machine learning algorithms, image processing techniques, and
physiological signal analysis. Many studies have focused on analyzing facial
features, such as eye movements and facial expressions, to infer drowsiness levels
accurately.
1.4 Proposed System and Advantages
The proposed system utilizes computer vision techniques to detect drowsiness in
real-time. By analyzing the eye aspect ratio derived from facial landmarks
detected using the dlib library, the system can accurately gauge levels of alertness.
The advantages of this system include:
- Real-time monitoring: The system continuously analyzes video feeds, providing
immediate feedback on the individual's alertness levels.
- Non-intrusive: Unlike wearable devices or physiological sensors, this system
does not require any additional equipment, making it more comfortable for users.
- Cost-effective: By leveraging open-source libraries and standard computer
hardware, the system can be implemented at a relatively low cost.
- Accuracy: By focusing on facial features, particularly eye movements, the
system can accurately detect signs of drowsiness, helping to prevent accidents
effectively.
2. SYSTEM ANALYSIS
2.1 Feasibility Study
The feasibility of implementing a drowsiness detection system using computer
vision techniques can be assessed in terms of technical, economic, and operational
aspects:
Technical Feasibility:
- The availability of open-source libraries such as dlib, OpenCV, and imutils
provides a solid technical foundation for implementing the system.
- The use of computer vision algorithms for facial landmark detection and eye
aspect ratio calculation demonstrates the technical viability of the approach.
Economic Feasibility:
- The system's economic feasibility is high due to the availability of open-source
libraries, eliminating the need for expensive proprietary software.
- Hardware requirements are minimal, primarily requiring a standard computer
with a webcam, making it cost-effective to deploy.
Operational Feasibility:
- The system is operationally feasible as it can be easily integrated into various
environments where drowsiness detection is critical, such as vehicles or
workstations.
- The real-time nature of the system ensures timely alerts to prevent accidents,
enhancing its operational effectiveness.
2.2 System Requirements
2.2.1 Hardware Requirements:
Processor : Intel Pentium® Dual Core Processor (Min)
Speed : 2.9 GHz (Min)
RAM : 8 GB (Min)
Hard Disk : 16 GB (Min)
2.2.2 Software Requirements:
Operating System : Windows 7 (Min)
Front End : Camera
Back End : Python/Jupyter
-Access to the pre-trained facial landmark detection model
(shape_predictor_68_face_landmarks.dat).
2.2.3 Functional Requirements:
- Real-time video capture and processing to analyze facial features.
- Detection of frontal faces using the dlib library.
- Extraction of facial landmarks to calculate the eye aspect ratio.
- Continuous monitoring of the eye aspect ratio to detect drowsiness.
- Triggering of alerts when drowsiness is detected, such as displaying warning
messages on the video feed.
2.2.4 Non-functional Requirements:
- Performance: The system should be capable of processing video feeds in real-time to provide
timely alerts.
- Accuracy: The drowsiness detection algorithm should accurately identify signs
of drowsiness to minimize false alarms.
- User Interface: A simple and intuitive user interface displaying the video feed
with overlaid alerts enhances usability.
- Reliability: The system should be robust enough to handle varying lighting
conditions and facial orientations to ensure reliable performance.
- Security: Data privacy and security considerations should be addressed,
particularly if the system is deployed in sensitive environments.
3. SYSTEM DESIGN
3.1 Introduction
The system is designed to detect drowsiness in individuals using computer vision
techniques. It employs facial landmark detection to monitor the eye aspect ratio
(EAR), which is indicative of alertness levels. When the EAR falls below a certain
threshold for a sustained period, the system triggers an alert to notify the user,
helping to prevent accidents caused by drowsiness.
3.2 Modules and Description
Facial Landmark Detection:
-Utilizes the dlib library to detect facial landmarks, particularly those associated
with the eyes.
Eye Aspect Ratio Calculation:
-Calculates the eye aspect ratio (EAR) based on the detected landmarks to
determine the level of eye openness.
Real-time Monitoring:
-Constantly analyzes video frames from a webcam feed to monitor changes in the
EAR and detect signs of drowsiness.
Alert System:
-Triggers an alert when the calculated EAR falls below a predefined threshold for
a specified number of consecutive frames.
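Taken together, the four modules above map onto the sample code in Annexure-1 roughly as sketched below (an abridged sketch only; variable names are shortened here and the eye-contour drawing is omitted):

from scipy.spatial import distance
from imutils import face_utils
import imutils
import dlib
import cv2

detector = dlib.get_frontal_face_detector()                                       # Facial Landmark Detection
predictor = dlib.shape_predictor("models/shape_predictor_68_face_landmarks.dat")
(lS, lE) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rS, rE) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]

def ear(eye):                                                                      # Eye Aspect Ratio Calculation
    return (distance.euclidean(eye[1], eye[5]) +
            distance.euclidean(eye[2], eye[4])) / (2.0 * distance.euclidean(eye[0], eye[3]))

cap, flag = cv2.VideoCapture(0), 0                                                 # Real-time Monitoring
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(imutils.resize(frame, width=450), cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        pts = face_utils.shape_to_np(predictor(gray, face))
        avg = (ear(pts[lS:lE]) + ear(pts[rS:rE])) / 2.0
        flag = flag + 1 if avg < 0.25 else 0                                       # Alert System
        if flag >= 20:
            cv2.putText(frame, "*****ALERT!*****", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()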
3.3 BLOCK DIAGRAM
Fig 3.3: Block Diagram
3.4 UML DIAGRAMS
3.4.1 CLASS DIAGRAM
Fig 3.4.1: Class Diagram
3.4.2 USE CASE DIAGRAM
Fig 3.4.2: Use Case Diagram
3.4.3 DATA FLOW DIAGRAM
Fig 3.4.3: Data Flow Diagram
3.4.4 SEQUENCE DIAGRAM
Fig 3.4.4 Sequence Diagram
3.4.5 ACTIVITY DIAGRAM
Fig 3.4.5: Activity Diagram
3.4.6 STATE CHART DIAGRAM
Fig 3.4.6: State Chart Diagram
4. SYSTEM IMPLEMENTATION
1. Eye Aspect Ratio Calculation:
- The function `eye_aspect_ratio(eye)` computes the eye aspect ratio (EAR)
using the formula:
EAR = (A + B) / (2 * C)
where A, B, and C are distances between specific landmarks of the eye
detected in the facial image.
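Here A and B are the two vertical eyelid distances and C is the horizontal eye width. The corresponding function from the sample code in Annexure-1 is (lightly condensed):

from scipy.spatial import distance

def eye_aspect_ratio(eye):
    A = distance.euclidean(eye[1], eye[5])   # vertical distance (upper to lower eyelid)
    B = distance.euclidean(eye[2], eye[4])   # second vertical distance
    C = distance.euclidean(eye[0], eye[3])   # horizontal distance (eye corner to eye corner)
    return (A + B) / (2.0 * C)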
2. Threshold and Frame Check:
- A threshold value (`thresh`) is set to determine when the EAR indicates
drowsiness.
- `frame_check` defines the number of consecutive frames where the EAR falls
below the threshold before triggering an alert.
Fig 4.0.1: Threshold and Frame Check
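For reference, the corresponding settings in the sample code of Annexure-1 are:

thresh = 0.25       # EAR value below which the eyes are treated as closed
frame_check = 20    # consecutive low-EAR frames required before the alert fires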
3. Facial Landmark Detection:
- The dlib library is utilized to detect faces in the video stream using the
`get_frontal_face_detector()` function.
- `shape_predictor` loads a pre-trained model to predict facial landmarks from
the detected faces.
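In the sample code of Annexure-1, this step is set up as:

import dlib

detect = dlib.get_frontal_face_detector()
predict = dlib.shape_predictor("models/shape_predictor_68_face_landmarks.dat")  # pre-trained 68-point model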
4. Eye Region Extraction:
- The indices for the left and right eye landmarks are obtained using
`face_utils.FACIAL_LANDMARKS_68_IDXS`.
- These indices are used to extract the coordinates of the left and right eye
regions from the detected facial landmarks.
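The matching lines from Annexure-1, where `shape` is the NumPy array of 68 landmark coordinates produced for one detected face:

from imutils import face_utils

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]
leftEye = shape[lStart:lEnd]     # left-eye landmark coordinates
rightEye = shape[rStart:rEnd]    # right-eye landmark coordinates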
5. Real-time Processing Loop:
- The system continuously captures frames from the video feed using
`cv2.VideoCapture`.
- Each frame is resized for faster processing using `imutils.resize`.
- The frame is converted to grayscale to simplify processing using
`cv2.cvtColor`.
- Facial detection is performed on the grayscale frame to locate faces.
- For each detected face, facial landmarks are predicted and converted into
NumPy arrays for ease of computation.
- The EAR is calculated for both eyes and averaged.
- Contours are drawn around the eyes to visualize the detected regions.
- If the calculated EAR falls below the threshold, a drowsiness flag is
incremented. If this flag exceeds the frame check value, an alert is triggered.
- If the EAR is above the threshold, the flag is reset to 0.
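Condensed from Annexure-1, the core of this loop looks as follows (the contour drawing and the alert overlay are reproduced under the Methods section and step 6 respectively):

cap = cv2.VideoCapture(0)
flag = 0
while True:
    ret, frame = cap.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for subject in detect(gray, 0):                      # detected face rectangles
        shape = face_utils.shape_to_np(predict(gray, subject))
        leftEAR = eye_aspect_ratio(shape[lStart:lEnd])
        rightEAR = eye_aspect_ratio(shape[rStart:rEnd])
        ear = (leftEAR + rightEAR) / 2.0                 # average the two eyes
        if ear < thresh:
            flag += 1                                    # one more consecutive low-EAR frame
        else:
            flag = 0                                     # eyes open again, reset the counter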
6. Alert Display:
- If drowsiness is detected, the alert message "*****ALERT!*****" is
displayed on the frame using `cv2.putText`.
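In Annexure-1 the alert is overlaid once the counter reaches `frame_check`:

if flag >= frame_check:
    cv2.putText(frame, "*****ALERT!*****", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)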
7. User Interaction and Termination:
- The processed frame with any overlay is displayed using `cv2.imshow`.
- The system waits for a key press and checks if it corresponds to the "q" key to
exit the loop.
- Upon receiving the termination key, all OpenCV windows are closed, and the
video capture is released.
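The corresponding display and shutdown logic, condensed from Annexure-1:

    cv2.imshow("Frame", frame)                   # show the annotated frame (inside the loop)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press "q" to quit
        break

cv2.destroyAllWindows()                          # after the loop exits
cap.release()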
This implementation provides a real-time drowsiness detection system using
computer vision techniques, allowing for timely alerts to prevent accidents caused
by drowsy driving or fatigue.
4.1 DESCRIPTION OF PLATFORM
The provided code is designed to run on a computer platform with Python
installed, along with the necessary libraries such as OpenCV, dlib, scipy, and
imutils. Let's break down the platform description:
Operating System: The code can be executed on various operating systems,
including Windows, macOS, and Linux distributions, as long as Python and the
required libraries are compatible.
Programming Language: The code is written in Python, a high-level
programming language known for its simplicity and versatility. Python provides
extensive libraries and frameworks for computer vision, making it suitable for
developing such applications.
Libraries:
- OpenCV (Open Source Computer Vision Library): OpenCV is a popular
library for real-time computer vision tasks, providing functions for image and
video processing, object detection, and more.
- dlib: Dlib is a C++ toolkit containing machine learning algorithms and tools
primarily used for computer vision tasks. In this code, it is utilized for face
detection and facial landmark detection.
- imutils: Imutils is a set of convenience functions to make basic image
processing tasks, such as resizing, rotating, and displaying images, simpler with
OpenCV.
- scipy: SciPy is a scientific computing library that provides various modules for
numerical integration, optimization, signal processing, and more. In this code, it is
used for computing the Euclidean distance.
Hardware Requirements: The code can run on standard desktop or laptop
computers with a webcam. It doesn't require specialized hardware components,
making it accessible for a wide range of users.
Dependencies: The code relies on external data files, specifically the
"shape_predictor_68_face_landmarks.dat" file, which contains the pre-trained
model for facial landmark detection. This file needs to be downloaded and placed
in the appropriate directory specified in the code.
Overall, the platform for running this code is flexible, as long as the required
software dependencies are met, and the hardware setup includes a webcam for
capturing video input.
DATASET
Dataset Description:
The dataset used in this system comprises two main components:
1. Video Feeds: Real-time video feeds captured by a webcam or a similar device.
These feeds contain footage of individuals whose drowsiness levels need to be
monitored.
2. Facial Landmark Model: This dataset includes a pre-trained facial landmark
detection model. In this implementation, the shape predictor model
"shape_predictor_68_face_landmarks.dat" is used, which is essential for detecting
facial landmarks, particularly the eyes, nose, mouth, etc.
Data Collection Process:
- Video Feeds: The video feeds are captured using a webcam or a similar camera
device. The camera is positioned to capture the facial area of individuals whose
drowsiness levels need to be monitored. These feeds are then processed in real-time by the system.
- Facial Landmark Model: The facial landmark model is pre-trained on a dataset
containing facial images annotated with key landmarks. This model is crucial for
accurately detecting facial features, such as the eyes, which are used to compute
the eye aspect ratio (EAR) for drowsiness detection.
Data Preprocessing:
- Frame Resizing: Each frame of the video feed is resized to a standard width of
450 pixels using the `imutils.resize()` function. Resizing the frames helps in faster
processing and reduces computational overhead.
- Grayscale Conversion: Before performing facial landmark detection, each frame
is converted from BGR (Blue, Green, Red) color space to grayscale using the
`cv2.cvtColor()` function. Grayscale images are easier to process and require
fewer computational resources compared to color images.
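A minimal sketch of these two preprocessing steps on a single captured frame (assuming `frame` has just been read from `cv2.VideoCapture`):

import cv2
import imutils

frame = imutils.resize(frame, width=450)          # standardize width for faster processing
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # landmark detection runs on the grayscale image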
Data Storage:
The dataset components are stored as follows:
- Video Feeds: The video feeds are captured and processed in real-time by the
system. They are not stored persistently unless explicitly saved by the user.
- Facial Landmark Model: The shape predictor model
"shape_predictor_68_face_landmarks.dat" is stored as a file on the local
filesystem. The system accesses this file during runtime to perform facial
landmark detection.
Dataset Usage:
The dataset components are utilized as follows:
- Video Feeds: The real-time video feeds are continuously processed by the
system to monitor drowsiness levels. The frames from these feeds are analyzed to
compute the eye aspect ratio (EAR) and detect signs of drowsiness.
- Facial Landmark Model: The pre-trained facial landmark model is loaded into
memory using the `dlib.shape_predictor()` function. This model is then used to
detect facial landmarks in each frame of the video feed, particularly the landmarks
corresponding to the eyes, which are crucial for drowsiness detection.
Data Dependencies:
The system has dependencies on the following data:
- Facial Landmark Model: The system requires the facial landmark model file
"shape_predictor_68_face_landmarks.dat" to be present in the specified location.
This file is essential for performing facial landmark detection using the dlib
library.
- Video Feeds: The system relies on real-time video feeds captured by a webcam
or similar device. These feeds must be accessible to the system during runtime for
drowsiness detection to occur.
Data Security and Privacy:
- Video Feeds: Privacy considerations should be taken into account when
capturing and processing video feeds. Access to sensitive or personally
identifiable information should be restricted, and appropriate security measures
should be implemented to protect the privacy of individuals appearing in the video
feeds.
- Facial Landmark Model: The facial landmark model file should be stored
securely to prevent unauthorized access or tampering. Access to the model file
should be restricted to authorized personnel only. Additionally, any data generated
or derived from the facial landmark model should be handled in accordance with
applicable privacy regulations.
TECHNOLOGIES:
dlib: Used for face detection and facial landmark detection. Dlib is a popular
library for machine learning, computer vision, and image processing tasks.
OpenCV (cv2): Utilized for capturing video frames from the camera, image
processing, and displaying the output. OpenCV is a powerful library for computer
vision tasks.
scipy: Specifically, the distance function from the scipy.spatial module is used to
calculate the Euclidean distance between facial landmarks.
imutils: This library provides convenience functions for resizing, rotating, and
displaying images, making it easier to work with OpenCV.
METHODS:
Eye Aspect Ratio (EAR): The core method used for drowsiness detection. EAR is
calculated based on the distances between specific facial landmarks detected in
the eye region. This ratio is indicative of the level of eye openness and is used to
infer drowsiness.
Convex Hull: Utilized to draw contours around the eyes. The convex hull helps in
visualizing the eye region and enhancing the accuracy of feature extraction.
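As used in Annexure-1, the hulls of the two sets of eye landmarks are drawn onto the frame as thin green contours:

leftEyeHull = cv2.convexHull(leftEye)
rightEyeHull = cv2.convexHull(rightEye)
cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)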
5. SYSTEM TESTING
5.1 TEST PLAN:
The testing of the drowsiness detection system involves evaluating its
performance in various scenarios to ensure its accuracy, reliability, and
responsiveness. The test plan includes the following steps:
1. Functionality Testing: Verify that the system correctly detects facial
landmarks, calculates the eye aspect ratio (EAR), and triggers an alert when
drowsiness is detected.
2. Performance Testing: Assess the system's real-time performance by
monitoring its responsiveness under different lighting conditions, facial
orientations, and distances from the camera.
3. Robustness Testing: Test the system's robustness against noise, occlusions,
and variations in facial appearance, ensuring that it maintains accurate detection
under challenging conditions.
4. Stability Testing: Run the system continuously for an extended period to check
for memory leaks, performance degradation, or crashes.
5. Usability Testing: Evaluate the system's usability by assessing the user
interface, ease of setup, and any potential discomfort experienced by users during
operation.
5.2 SCENARIOS:
1. Normal Conditions:
- Scenario: The user is alert and attentive.
- Expected Outcome: No alert triggered.
2. Drowsiness Detected:
- Scenario: The user starts to exhibit signs of drowsiness, such as drooping
eyelids or prolonged eye closure.
- Expected Outcome: The system detects drowsiness based on the calculated eye
aspect ratio (EAR) and triggers an alert.
3. Challenging Lighting Conditions:
- Scenario: The environment has low light or high glare.
- Expected Outcome: The system adjusts to varying lighting conditions and
maintains accurate detection.
4. Partial Occlusion:
- Scenario: Some portion of the face, particularly the eyes, is partially
obstructed.
- Expected Outcome: The system continues to detect facial landmarks and
accurately assess drowsiness despite partial occlusion.
5. Distance Variations:
- Scenario: The user moves closer or farther away from the camera.
- Expected Outcome: The system adapts to changes in distance and maintains
consistent detection performance.
5.3 OUTPUT SCREENS:
- Output screens would display the real-time video feed captured by the camera,
with overlays indicating the detected facial landmarks, eye regions, and any
triggered alerts.
Fig 5.3.1: Real-Time Video Feed Captured By The Camera
When drowsiness is detected, the system would display an alert message
prominently on the screen, such as "*****ALERT!*****", to notify the user and
prompt them to take necessary actions.
Fig 5.3.2: Display An Alert Message
- Additionally, output screens may include logging functionality to record
timestamps of detected drowsiness events for further analysis and monitoring.
6. CONCLUSION AND FUTURE SCOPE
6.1 CONCLUSION:
In conclusion, the implemented drowsiness detection system demonstrates the
effectiveness of using computer vision techniques for real-time monitoring of an
individual's alertness levels. By analyzing facial landmarks, particularly the eye
aspect ratio (EAR), the system can accurately identify signs of drowsiness,
allowing for timely alerts to be issued to prevent potential accidents.
The system's ability to continuously analyze video feeds in real-time provides a
non-intrusive and cost-effective solution compared to existing drowsiness
detection systems. Moreover, the integration of open-source libraries such as dlib
and OpenCV enables easy implementation and scalability of the system across
different platforms.
6.2 FUTURE SCOPE:
Despite the effectiveness of the current system, there is still ample room for
further improvement and expansion. Some potential avenues for future research
and development include:
1. Enhanced Accuracy: Investigating advanced machine learning algorithms to
improve the accuracy of drowsiness detection, especially in challenging lighting
conditions or with varying facial expressions.
2. Multimodal Approach: Integrating additional physiological signals such as
heart rate variability or head movement data to complement facial feature
analysis, thus enhancing the robustness and reliability of the system.
3. User Interface Improvements: Developing user-friendly interfaces and
integrating with existing automotive or workplace safety systems to provide
seamless integration and user experience.
4. Adaptive Thresholding: Implementing dynamic thresholding techniques based
on individual variability and context-specific factors to tailor the alerting
mechanism more effectively.
5. Real-world Deployment: Conducting extensive field trials and validation
studies in real-world settings, such as driving simulations or industrial
environments, to evaluate the system's performance and efficacy in preventing
accidents.
Overall, the drowsiness detection system presented here lays the groundwork for
further advancements in ensuring safety and reducing accidents caused by fatigue-induced
impairment, with promising opportunities for future research and
innovation.
7. REFERENCES
7.1. Bibliography:
- Deng, W., Hu, J., & Guo, J. (2019). Real-time drowsiness detection using facial
landmark localization and deep learning. IEEE Access, 7, 162894-162902.
- Zhu, Z., Ji, Q., & Avidan, S. (2014). Real-time eye blink detection using facial
landmarks. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition Workshops (pp. 83-90).
7.2 Web References:
- OpenCV: https://opencv.org/
- dlib: http://dlib.net/
- imutils: https://github.com/jrosebr1/imutils
8. APPENDIX
Annexure-1: Sample Coding
from scipy.spatial import distance
from imutils import face_utils
import imutils
import dlib
import cv2


def eye_aspect_ratio(eye):
    # Ratio of the two vertical eyelid distances to the horizontal eye width.
    A = distance.euclidean(eye[1], eye[5])
    B = distance.euclidean(eye[2], eye[4])
    C = distance.euclidean(eye[0], eye[3])
    ear = (A + B) / (2.0 * C)
    return ear


thresh = 0.25        # EAR threshold below which the eye is treated as closed
frame_check = 20     # consecutive frames below the threshold before alerting

detect = dlib.get_frontal_face_detector()
predict = dlib.shape_predictor("models/shape_predictor_68_face_landmarks.dat")  # the .dat file is the crux of the code

(lStart, lEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["left_eye"]
(rStart, rEnd) = face_utils.FACIAL_LANDMARKS_68_IDXS["right_eye"]

cap = cv2.VideoCapture(0)
flag = 0
while True:
    ret, frame = cap.read()
    frame = imutils.resize(frame, width=450)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    subjects = detect(gray, 0)
    for subject in subjects:
        shape = predict(gray, subject)
        shape = face_utils.shape_to_np(shape)
        leftEye = shape[lStart:lEnd]
        rightEye = shape[rStart:rEnd]
        leftEAR = eye_aspect_ratio(leftEye)
        rightEAR = eye_aspect_ratio(rightEye)
        ear = (leftEAR + rightEAR) / 2.0
        # Draw the eye contours for visual feedback.
        leftEyeHull = cv2.convexHull(leftEye)
        rightEyeHull = cv2.convexHull(rightEye)
        cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
        cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
        if ear < thresh:
            flag += 1
            print(flag)
            if flag >= frame_check:
                # Overlay the alert at the top and bottom of the frame.
                cv2.putText(frame, "*****ALERT!*****",
                            (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7,
                            (0, 0, 255), 2)
                cv2.putText(frame, "*****ALERT!*****",
                            (10, 325), cv2.FONT_HERSHEY_SIMPLEX, 0.7,
                            (0, 0, 255), 2)
                # print("Drowsy")
        else:
            flag = 0
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break
cv2.destroyAllWindows()
cap.release()
Annexure-2: List of Figures
Fig No     Name of Figure
3.3        Block Diagram
3.4.1      Class Diagram
3.4.2      Use Case Diagram
3.4.3      Data Flow Diagram
3.4.4      Sequence Diagram
3.4.5      Activity Diagram
3.4.6      State Chart Diagram
4.0.1      Threshold and Frame Check
Annexure-3: List of Output Screens
Fig No     Name of Output
5.3.1      Real-Time Video Feed Captured By The Camera
5.3.2      Display An Alert Message