
A

MINI PROJECT REPORT


on
ATTENDANCE IN ONLINE CLASSES THROUGH FACE DETECTION
Submitted in partial fulfilment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE)

Submitted by
CH. ABHINAYA (21UP1A6712)
V. NAGA LOHITHA (21UP1A6756)
B. HARIKA (21UP1A6710)
I. PAVANI (21UP1A6722)
Under the Guidance of
Ms. T. RAMYA SRI
M.Tech
Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE ENGINEERING (DATA SCIENCE)


VIGNAN’S INSTITUTE OF MANAGEMENT AND TECHNOLOGY FOR
WOMEN
Accredited by NBA (CSE & ECE) and NAAC A+
(Affiliated to Jawaharlal Nehru Technological University Hyderabad) Kondapur (Village),
Ghatkesar (Mandal), Medchal (Dist.), Telangana Pincode-501301
(2021-2025)
DEPARTMENT OF COMPUTER SCIENCE ENGINEERING (DATA
SCIENCE)

CERTIFICATE
This is to certify that the project work entitled “ATTENDANCE IN ONLINE CLASSES THROUGH
FACE DETECTION”, submitted by CH. ABHINAYA (21UP1A6712), V. NAGA LOHITHA
(21UP1A6756), B. HARIKA (21UP1A6710), and I. PAVANI (21UP1A6722) in partial fulfilment of
the requirements for the award of the degree of Bachelor of Technology in COMPUTER SCIENCE
ENGINEERING (DATA SCIENCE), VIGNAN’S INSTITUTE OF MANAGEMENT AND
TECHNOLOGY FOR WOMEN, is a record of bonafide work carried out by them under my guidance
and supervision. The results embodied in this project report have not been submitted to any other
university or institute for the award of any degree.

PROJECT GUIDE THE HEAD OF DEPARTMENT


Ms. RAMYASRI Dr. M. VISHNU VARDHANA RAO
M. Tech M. Tech, PhD
Assistant Professor Associate Professor

(External Examiner)
DEPARTMENT OF COMPUTER SCIENCE ENGINEERING (DATA
SCIENCE)

DECLARATION

We hereby declare that the project entitled “ATTENDANCE IN ONLINE CLASSES THROUGH FACE
DETECTION” is bonafide work duly completed by us. It does not contain any part of a project
that has been submitted by any other candidate to this or any other institute or university. All materials
that have been obtained from other sources have been duly acknowledged.

CH.ABHINAYASRI (21UP1A6712)
V.NAGA LOHITHA (21UP1A6756)
B.HARIKA (21UP1A6710)
I.PAVANI (21UP1A6722)
ACKNOWLEDGEMENT

We would like to express sincere gratitude to Dr. G. APPARAO NAIDU, Principal, Vignan’s Institute
of Management and Technology for Women, for his timely suggestions, which helped us to complete
the project in time.

We would also like to thank Dr. M. VISHNU VARDHANA RAO, M.Tech, Ph.D, Head of the
Department and Associate Professor, CSE (DATA SCIENCE), for providing us with constant
encouragement and resources which helped us to complete the project in time.

We would also like to thank our guide Ms. T. RAMYA SRI, Assistant Professor, CSE (DATA
SCIENCE), for her constant encouragement, resources, and valuable suggestions throughout the
project, which helped us to complete it in time. We are indebted to her for the opportunity to work
under her guidance. Our sincere thanks to all the teaching and non-teaching staff of the Department
of Computer Science and Engineering (Data Science) for their support throughout our project work.

CH.ABHINAYA (21UP1A6712)
V.NAGALOHITHA (21UP1A6756)
B.HARIKA (21UP1A6710)
I.PAVANI (21UP1A6722)
ABSTRACT

This abstract presents a short summary of the approach for attendance management in online classes
using face recognition. The proposed system utilizes computer vision and facial recognition
technologies to automatically identify and verify students attending virtual classes. By capturing facial
images and comparing them against a database of enrolled students, the system ensures accurate
attendance records. The automated approach offers benefits such as time saving, enhanced security,
and real-time attendance reports, contributing to improved online learning experiences. In the modern
educational landscape, online classes have become a norm, and managing student attendance
efficiently is crucial. Traditional attendance methods, such as manual roll-calls or the use of static
forms, are time-consuming and prone to errors. This paper proposes an automated system for
monitoring and recording attendance using face detection technology. The system leverages computer
vision techniques, specifically convolutional neural networks (CNNs), to detect students' faces during
online class sessions. By comparing the detected faces with a pre-registered database, the system can
accurately mark attendance in real-time without the need for manual intervention. The proposed
solution ensures improved accuracy, transparency, and reliability, while also reducing administrative
overhead. This paper discusses the architecture, implementation, and potential challenges in deploying
such a system for large-scale online education platforms. The system offers a scalable and secure
approach to automate attendance, enhancing the overall learning experience for both educators and
students.

Keywords: Convolutional Neural Network (CNN), Facial Recognition, Real-Time Attendance
Contents
DEPARTMENT OF COMPUTER SCIENCE ENGINEERING (DATA SCIENCE)..............2
DEPARTMENT OF COMPUTER SCIENCE ENGINEERING (DATA
SCIENCE).............................................................................................................................3
ACKNOWLEDGEMENT..........................................................................................................4
ABSTRACT................................................................................................................................5
CHAPTER 1...............................................................................................................................1
INTRODUCTION......................................................................................................................1
1.1 MOTIVATION.........................................................................................................2
1.2 OBJECTIVES OF THE PROJECT.............................................................................3
1.3 ADVANTAGES..............................................................................................................4
1.4 DISADVANTAGES........................................................................................................4
1.5 APPLICATIONS............................................................................................................4
1.6 SCOPE.............................................................................................................................5
CHAPTER 2...............................................................................................................................6
LITERATURE REVIEW.........................................................................................................6
2.1 INTRODUCTION TO PYTHON..................................................................................7
2.2 INSTALLATION OF IDLE..........................................................................................8
2.3 EXISTING SYSTEM.....................................................................................................9
2.4 PROPOSED SYSTEM.................................................................................................11
System Architecture.......................................................................................................11
2.4.2 IMAGE RESIZING..................................................................................................12
2.4.3 CREATION OF DATABASE...................................................................................13
2. Session Monitoring.....................................................................................................15
3. Authentication.............................................................................................................15
4. Privacy Concerns........................................................................................................15
5. Attendance Logging....................................................................................................15
6. Integration with Learning Management Systems (LMS).......................................15
7. False Positives/Negatives............................................................................................15

LIST OF FIGURES

FIG NO NAME PAGE NO


Fig.1 Attendance recognition in manual and finger-print based 2
Fig.2.1 Installation of Python 8
Fig.2.2 Python IDLE 9
Fig.2.3 Features and limitations of existing system 10
Fig.2.4 Workflow of facial recognition 11
Fig.2.5 Image Resizing 13
Fig.2.6 Graph 14
Fig.5.2 Image Verification 25
Fig.7.1 Attendance sheet 45
Fig.7.2 Facial recognition 46

CHAPTER 1

INTRODUCTION

The rise of online education has significantly transformed the educational landscape,
offering flexibility and accessibility to learners across the globe. However, the shift to
virtual learning environments has also introduced challenges, one of which is the
efficient and accurate management of student attendance. Traditional methods of
attendance tracking, such as manual roll calls or digital check-ins via forms, are not
only time-consuming but also prone to errors and manipulation. These limitations
highlight the need for a more robust and automated approach to attendance
monitoring.
At present, facial recognition and image processing form a very interesting field that has
only had its surface scratched. Facial recognition is quickly surpassing other forms of
biometrics (fingerprints, RFID, etc.) because facial recognition systems use a set of features
distinct to each person. This project applies facial recognition to create an attendance
system: the traditional pen-and-paper method is not only time-consuming and burdensome
but also prone to proxies and manipulation. Our aim in developing this project is to make
the attendance system efficient, to prevent proxies, and to save time that would otherwise
be lost from the lecture.

The idea for this project came to us in class, where we saw the amount of lecture time lost
to attendance and the nonchalance of students who had already marked their attendance,
which delayed the process further. We decided this would be a good and interesting field to
delve into for our project, as image processing and recognition have a world of scope and
would help us develop our skills and prepare us for most challenges ahead.
Attendance in online classes through face detection refers to a system that automatically
records student attendance during virtual classes by using computer vision technology
to identify individuals based on their facial features, eliminating the need for manual
attendance checks and potentially improving accuracy by minimizing proxy attendance
or errors in recording presence. Essentially, a student’s face becomes their unique
identifier to register attendance in an online classroom.

This paper explores the use of facial recognition technology for attendance
management in online classes, discussing its architecture, implementation, and
potential challenges. The proposed system not only addresses inefficiencies in
traditional methods but also aligns with the growing demand for automation and
scalability in modern educational platforms. By improving the reliability and
transparency of attendance systems, this solution contributes to a better learning
experience for both educators and students.

Fig.1. Attendance recognition in manual and finger-print based systems.

The integration of face detection for attendance offers several advantages. It ensures
high accuracy, minimizes manual intervention, and enhances security by verifying the
presence of the registered individual. Moreover, it saves valuable instructional time,
allowing educators to focus more on teaching rather than administrative tasks.

1.1 MOTIVATION
A student attendance management system maintains the attendance details of students,
generating records based on each student's presence. Attendance is marked on a daily
basis, and separate user IDs and passwords are provided to staff members to mark each
student's status. The staff handling the subjects are responsible for maintaining the
students' records. A student's attendance is marked only if the student is present during
the particular period, and attendance reports are produced on a weekly and consolidated
basis.

With rapid improvements in technology, the application of modern technologies to
day-to-day activities is also increasing. These technologies mostly focus on automating
activities and improving the accuracy of systems. Biometrics has a wide range of
applications, and there are a number of innovative approaches that keep the development
of biometric systems alive. Basically, biometric systems have two phases: an enrolment
phase and a recognition phase. In the enrolment phase, the biometric traits of a person
are captured and stored in the form of features, and labels are assigned to each person.
In the recognition phase, the biometric trait of a random person is acquired and matched
against the features stored in the database. These techniques are widely used in various
organizations. The traditional method of attendance marking is very time-consuming,
difficult to maintain, and becomes complicated when there are many students. Thus, the
aim of this project is to automate the system, which has advantages over the traditional
method: it saves time and also helps to prevent fake attendance. Face recognition can
also be used for security purposes.

1.2 OBJECTIVES OF THE PROJECT


The overall objective is to develop an automated face recognition-based attendance system
comprising a desktop application working in conjunction with a mobile application to
perform the tasks.
The primary objectives of implementing face detection in attendance for online
classes are as follows:
Automating Attendance:
Use face detection technology to automatically identify and mark the presence of
students, eliminating the need for manual roll calls.
Save time and increase the efficiency of attendance tracking.

Real-Time Monitoring:
Allow real-time detection and reporting of attendance status, ensuring that students
remain visible throughout the session.
Convenience for Instructors:
Provide an easy-to-use system that integrates seamlessly with online platforms,
enabling instructors to focus on teaching rather than administrative tasks.

Data Storage and Reporting:


Maintain a log of attendance records, including timestamps, for future reference or
analysis.
Generate reports for teachers, parents, or administrators to track attendance trends.
Security and Privacy:
Ensure that the system handles sensitive data responsibly, using encryption and
adhering to data privacy laws (e.g., GDPR, FERPA).

1.3 ADVANTAGES
The use of face detection technology for managing attendance in online classes offers
numerous benefits that address the limitations of traditional methods. These
advantages include:
 Real-Time Attendance Tracking
 Time Efficiency
 Reduced Administrative Burden
 Integration with Learning Management Systems
 Improved Student Engagement

1.4 DISADVANTAGES
Using face detection for attendance in online classes has its advantages, but it also
comes with several disadvantages and limitations:
 Privacy Concerns
 Dependence on Hardware and Internet
 Technical Limitations
 Cost and Maintenance
 Accuracy Issues

1.5 APPLICATIONS
Education and Learning: Apart from attendance tracking, face detection can be
applied in educational settings to monitor students’ engagement, detect signs of
distraction, and personalize learning experiences based on facial expressions.

Social Media and Photography: Face detection is commonly used in social media
platforms and photo editing software for automatic tagging, facial recognition, and
applying filters or effects.

Healthcare: In the healthcare sector, face detection can be used for patient
identification and tracking, especially in hospitals or clinics with large patient
volumes. It can also aid in monitoring patients’ vital signs and expressions for
diagnostic purposes.

1.6 SCOPE
We set out to design a system comprising two modules. The first module
(face detector) is a mobile component, which is basically a camera application that
captures student faces and stores them in a file using computer vision face detection
algorithms and face extraction techniques. The second module is a desktop
application that does face recognition of the captured images (faces) in the file, marks
the students register and then stores the results in a database for future analysis.

CHAPTER 2

LITERATURE REVIEW

The main purpose of this review is to analyze the solutions given by others.
Chellappa et al. [2] proposed face recognition to identify faces from a video
database. Turk et al. [3] presented a face recognition system using the PCA
algorithm, which represents face features using eigenvalues and eigenvectors.
However, this is not an elegant solution for object recognition: eigenfaces alone
are not sufficient for face recognition and require a machine learning algorithm to
classify the facial features. This approach works well in simple and constrained
environments. Krishnaswamy et al. [4] proposed a face recognition system using the
PCA and LDA algorithms; this approach consists of two steps, in which the LDA
approach is used to extract features.
PROBLEM DEFINITION:
Human facial expressions can be classified into 7 basic emotions: happy, sad,
surprise, fear, anger, disgust, and neutral. Facial emotions are expressed through the
activation of specific sets of facial muscles. These sometimes subtle, yet complex,
signals in an expression often contain an abundant amount of information about our
state of mind. Through facial recognition, we are able to measure these effects, which
makes attendance marking easier with a low-cost implementation.
As per various literature surveys, implementing this project requires four basic
steps: i. Preprocessing, ii. Face registration, iii. Facial feature extraction,
iv. Emotion classification. These processes are described below. Preprocessing is a
common name for operations on images at the lowest level of abstraction, where both
input and output are intensity images [2016 International Conference on Electrical,
Electronics, Communication, Computer and Optimization Techniques (ICEECCOT),
"A Robust Method for Face Recognition and Face Emotion Detection System using
Support Vector"].

2.1 INTRODUCTION TO PYTHON


Python is a high-level, interpreted, and versatile programming language widely used
for various applications such as web development, data analysis, artificial
intelligence, scientific computing, automation, and more. It was created by Guido
van Rossum and first released in 1991. Python's design emphasizes code readability
and simplicity, making it an excellent choice for beginners and professionals alike.
 Python can be used on a server to create web applications.
 Python can be used alongside software to create workflows.
 Python can connect to database systems. It can also read and modify files.
 Python can be used to handle big data and perform complex mathematics.
 Python can be used for rapid prototyping, or for production-ready
software development.
Python was designed for readability, and has some similarities to the English
language with influence from mathematics.
Python uses new lines to complete a command, as opposed to other programming
languages which often use semicolons or parentheses.
Python relies on indentation, using whitespace, to define scope; such as the scope of
loops, functions and classes. Other programming languages often use curly-brackets
for this purpose.
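The indentation-based scoping described above can be illustrated with a short sketch; the function and names here are purely illustrative:

```python
def mark_attendance(roll, detected):
    # The function body, the for-loop, and the if-block are each
    # delimited purely by indentation, not by braces or keywords.
    present = []
    for name in roll:
        if name in detected:
            present.append(name)
    return present

print(mark_attendance(["A", "B", "C"], {"C", "A"}))  # ['A', 'C']
```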
While face detection can automate attendance tracking, its drawbacks—especially
privacy concerns, technical challenges, and ethical considerations—make it essential
to evaluate its usage carefully and consider alternative methods.
Python's flexibility and simplicity have made it one of the most popular programming
languages in the world.

2.2 INSTALLATION OF IDLE
Installing Python is a straightforward process. Here's how to do it on different
operating systems:

Fig.2.1 Installation of python.


Step 1: Download the Python installer:
Go to the official Python website.
Click on Downloads and select the appropriate version for Windows (usually the
latest version is recommended).
Step 2: Run the Installer:
Double-click the downloaded .exe file.
Check the box "Add Python to PATH" (important for command-line usage).
Click Install Now or choose Customize Installation for advanced options.
Step 3: Verify Installation:
Open Command Prompt.

Type python --version or python3 --version and press Enter.
The installed version should appear.
Integrated Development Environments (IDEs)
Consider installing an IDE for coding in Python, such as:
 PyCharm
 VS Code
 Jupyter Notebook
 IDLE (comes with Python installation)
Once installed, you’re ready to start coding in Python!

Fig. 2.2 Python IDLE.

2.3 EXISTING SYSTEM


Attendance tracking in online classes through face detection has become a popular
solution due to advancements in computer vision and artificial intelligence. These
systems leverage facial recognition technologies to verify student presence and
automate attendance processes.
Traditional attendance marking techniques, i.e., pen and paper or signing attendance
sheets, are easy to bypass: giving proxies or false signatures is a common practice
among students, who often take unfair advantage of this. A facial recognition system,
however, cannot be fooled so easily, as each person has a set of unique, individual
features that cannot be replicated or changed. It comes down to one simple truth:
unless you are physically present in the lecture, your attendance will not be marked.

Existing system | Limitations
Pen and paper | False signatures and proxies
RFID tags | Can be used by anybody, no guarantee
Biometric, fingerprint | A costlier approach
Table.2.3. Features and limitations of existing systems.

Facial Recognition: Compares detected faces against a pre-registered database of


student images using algorithms like Haar Cascades, HOG (Histogram of Oriented
Gradients), or deep learning-based methods (e.g., Convolutional Neural Networks).
Automated Attendance Logging: Marks attendance automatically when a
recognized face is detected for a sufficient duration. Generates reports of attendance,
including time logs.
Integration with Online Platforms: Many systems integrate with video
conferencing tools (e.g., Zoom, Google Meet) or Learning Management Systems
(LMS) for seamless operation.

2.3.1 Drawbacks of the Existing System


Using face detection for attendance in online classes has several drawbacks, including
technical, ethical, and practical concerns.
Technical Limitations:
False Positives and Negatives: The system may incorrectly mark someone present
(a false positive) or absent (a false negative) due to poor lighting, camera angles,
or obstructions.
Camera Quality:
Low-quality webcams or poor internet connections can lead to inaccurate face
recognition.
Limited Diversity in Training Data:
Face detection algorithms may not work equally well across different skin tones,
facial features, or lighting conditions, leading to biases.

Background Noise:
Similar-looking individuals in the background can confuse the system, especially in
shared spaces.

2.4 PROPOSED SYSTEM


The proposed system leverages facial recognition technology to automate the
attendance process in online classes. By integrating computer vision techniques with
online learning platforms, the system ensures accurate, efficient, and real-time
attendance tracking. Below are the key components and functionalities of the
proposed system.
System Architecture
The system architecture is designed to handle the end-to-end process of attendance
tracking:

 Face Detection: Captures video frames from the student’s camera feed during
online classes. This is achieved using computer vision libraries such as
OpenCV or similar frameworks.
 Face Recognition: Recognizes individual students by comparing detected
faces with a pre-registered database. Algorithms such as DeepFace or Dlib
can be utilized for feature extraction and matching.
 Database Management: Stores the facial data and attendance records in a
secure database.
 Attendance Logging: Marks attendance in real-time by matching detected
faces with the database and updating attendance records automatically.
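The recognition and logging components above can be sketched as a minimal pipeline. This is a stand-in illustration, not the full implementation: a plain tuple of floats replaces the real facial embedding that OpenCV or Dlib would produce, and the class name, IDs, and threshold value are assumptions for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AttendanceSystem:
    # enrolled maps a student ID to a stored facial "embedding";
    # a tuple of floats stands in for a real feature vector here.
    enrolled: dict
    log: list = field(default_factory=list)

    def recognize(self, embedding, threshold=0.6):
        # Return the enrolled ID whose embedding is nearest to the
        # query, provided the distance is below the threshold.
        best_id, best_dist = None, threshold
        for sid, ref in self.enrolled.items():
            dist = sum((a - b) ** 2 for a, b in zip(ref, embedding)) ** 0.5
            if dist < best_dist:
                best_id, best_dist = sid, dist
        return best_id

    def mark(self, embedding):
        # Attendance logging: record the matched ID with a timestamp.
        sid = self.recognize(embedding)
        if sid is not None:
            self.log.append((sid, datetime.now().isoformat(timespec="seconds")))
        return sid

system = AttendanceSystem(enrolled={"21UP1A6712": (0.1, 0.9),
                                    "21UP1A6710": (0.8, 0.2)})
system.mark((0.12, 0.88))   # near the first student's embedding: marked present
system.mark((0.9, 0.9))     # matches nobody within the threshold: not logged
```

The distance threshold is the key design choice: too loose and unknown faces are marked present (false positives), too strict and enrolled students are missed.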

Fig.2.4 Workflow of facial recognition.

2.4.1 PRE-PROCESSING

Pre-processing is a common name for operations with images at the lowest level of
abstraction -- both input and output are intensity images.

The aim of pre-processing is to improve the image data by suppressing unwanted
distortions or enhancing image features important for further processing.
1. Face Data Collection
Input Capture: Collect facial images or video frames from the students’ cameras
during registration or live classes.
Standardization: Ensure consistent image resolution, aspect ratio, and format (e.g.,
JPEG, PNG) for all collected data.
2. Face Alignment
Landmark Detection: Identify key facial landmarks (e.g., eyes, nose, mouth) using
algorithms like Dlib’s facial landmark detector.
Align Faces: Rotate or align the face based on detected landmarks to ensure
consistency, making the system invariant to head tilts or rotations.
3. Security Measures
Data Encryption: Encrypt facial data during preprocessing and storage to ensure
security.
Anonymization: Store minimal identifiable data while maintaining functionality to
comply with privacy regulations (e.g., GDPR).
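The standardization step above can be sketched with NumPy alone; this is a minimal sketch under the assumption that frames arrive as RGB arrays, and a production pipeline would typically call OpenCV routines such as cv2.cvtColor and cv2.equalizeHist instead:

```python
import numpy as np

def to_grayscale(rgb):
    # BT.601 luminance weights convert an RGB frame to a single
    # intensity channel, as preprocessing operates on intensity images.
    return rgb @ np.array([0.299, 0.587, 0.114])

def stretch_contrast(gray):
    # Map intensities onto the full 0-255 range so faces captured
    # under different lighting become more comparable.
    lo, hi = float(gray.min()), float(gray.max())
    if hi == lo:
        return np.zeros_like(gray, dtype=float)
    return (gray - lo) * 255.0 / (hi - lo)

frame = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3)).astype(float)
gray = to_grayscale(frame)      # (64, 64) intensity image
norm = stretch_contrast(gray)   # intensities now span 0..255
```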

2.4.2 IMAGE RESIZING


Image resizing is a crucial preprocessing step in a face recognition-based attendance
system. Resizing ensures uniformity in the dimensions of facial images, which is
essential for the recognition model's performance and computational efficiency.

Fig.2.5 Image Resizing.

Steps for Resizing Facial Images


Face Detection:
Use a face detection algorithm (e.g., Haar cascades, MTCNN) to localize and crop the
face region from the input image or video frame.
Resize to Target Dimensions:
After cropping, resize the face region to the model's required dimensions (e.g.,
128x128 pixels).
Aspect Ratio Preservation:
While resizing, maintain the aspect ratio to avoid distortion using padding or
cropping.
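The resize-with-padding step above can be sketched as follows; this is an illustrative NumPy-only version using nearest-neighbour sampling, whereas a real system would use cv2.resize with a proper interpolation mode:

```python
import numpy as np

def resize_with_padding(face, target=128):
    # Pad the cropped face to a square first, so the aspect ratio is
    # preserved (the padding step described above), then resize.
    h, w = face.shape[:2]
    side = max(h, w)
    canvas = np.zeros((side, side) + face.shape[2:], dtype=face.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = face
    # Nearest-neighbour resize via index mapping.
    idx = np.arange(target) * side // target
    return canvas[idx][:, idx]

crop = np.ones((90, 60))              # a tall face crop
out = resize_with_padding(crop, 128)
print(out.shape)                      # (128, 128)
```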

2.4.3 CREATION OF DATABASE


A robust database is essential for managing student data, facial embeddings, and
attendance records in a face recognition-based attendance system. Below is a step-by-
step guide to creating such a database.
1. Database Design
The database should include the following key components:
Student Information:
Fields: Student ID, Name, Email, Course/Batch, etc.
Purpose: Store demographic and identification details.
Facial Embeddings:
Fields: Student ID, Facial Embedding (numerical vectors representing unique facial features).
Purpose: Enable face recognition by matching embeddings.
Attendance Records:
Fields: Date, Time, Class ID, Student ID, Attendance Status (Present/Absent).
Purpose: Log attendance data for reporting and analysis.

2. Choosing a Database Type


Relational Databases (SQL): Use databases like MySQL, PostgreSQL, or SQLite for
structured data with relationships.
NoSQL Databases: Use MongoDB or Firebase for unstructured or semi-structured
data.
Hybrid: Combine both if the system requires flexibility and complex queries.
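The three-table design above can be sketched with SQLite from Python's standard library; the column names follow the fields listed above, and the inserted rows are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # an on-disk file would be used in practice
conn.executescript("""
CREATE TABLE students (
    student_id TEXT PRIMARY KEY,
    name       TEXT NOT NULL,
    email      TEXT,
    batch      TEXT
);
CREATE TABLE embeddings (
    student_id TEXT REFERENCES students(student_id),
    vector     BLOB NOT NULL            -- serialized facial embedding
);
CREATE TABLE attendance (
    student_id TEXT REFERENCES students(student_id),
    class_id   TEXT,
    marked_at  TEXT,                    -- ISO date/time stamp
    status     TEXT CHECK (status IN ('Present', 'Absent'))
);
""")
conn.execute("INSERT INTO students VALUES (?, ?, ?, ?)",
             ("21UP1A6712", "CH. Abhinaya", None, "CSE-DS"))
conn.execute("INSERT INTO attendance VALUES (?, ?, ?, ?)",
             ("21UP1A6712", "DS-101", "2025-01-10T09:05:00", "Present"))
```

Foreign keys tie embeddings and attendance rows back to the student record, which keeps reporting queries (e.g., attendance per batch) simple joins.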

Fig.2.6 Graph indicating recognition accuracy from young to old and vice versa.

2.4.4 PARAMETERS
When implementing a face recognition-based attendance system for online classes,
several parameters must be considered to ensure accuracy, efficiency, and reliability.
These parameters can be grouped into categories based on technical, operational, and
performance aspects. Using face detection for attendance in online classes can automate
the process of tracking student participation; the following parameters should be
considered when implementing it.

1. Face Recognition

 Accuracy: How well the system can identify and verify faces. Higher
accuracy ensures fewer false positives (incorrect identifications) and false
negatives (missed identifications).
 Detection Angle: The system should be able to recognize faces from various
angles (e.g., straight-on, side views).
 Lighting Conditions: Face detection should work well under different
lighting conditions. This can include low-light environments or bright
backgrounds.
 Face Matching Algorithm: The algorithm's ability to match the student’s
face with their profile from a stored database or image.
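The accuracy parameter above is commonly summarized by counting false positives and false negatives over a labelled test run; a small sketch, where `None` stands for "no face matched" and the sample IDs are illustrative:

```python
def recognition_rates(true_ids, predicted_ids):
    # A false positive is a wrong identification; a false negative is a
    # student the system failed to match at all (predicted None).
    n = len(true_ids)
    fp = sum(1 for t, p in zip(true_ids, predicted_ids) if p is not None and p != t)
    fn = sum(1 for t, p in zip(true_ids, predicted_ids) if p is None)
    correct = n - fp - fn
    return correct / n, fp / n, fn / n

acc, fpr, fnr = recognition_rates(["A", "B", "C", "D"], ["A", None, "B", "D"])
print(acc, fpr, fnr)  # 0.5 0.25 0.25
```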
2. Session Monitoring

 Active Detection: The system should be able to detect the student’s face during the
live session, marking attendance when the student is actually present.
 Idle Time Detection: Monitor if a student is active in the session (i.e., not
minimizing or leaving the window).
 Frequency of Detection: You could configure the system to check periodically if a
student is still present throughout the session (e.g., every 5 or 10 minutes).

3. Authentication

 Login Face Verification: Use face recognition at login to ensure the student is
the right person accessing the class.
 Two-factor Authentication: To prevent impersonation, a secondary check
(e.g., password or code) can be used in addition to face recognition.
4. Privacy Concerns

 Data Encryption: Ensure that facial data and attendance records are
encrypted to maintain privacy and security.
 Consent: Students should be informed and give consent for their facial data to
be used for attendance tracking.
 Data Retention: Define how long the facial data is stored, and ensure
compliance with privacy laws like GDPR.
5. Attendance Logging

 Timestamping: Each attendance record should be timestamped to track when a
student was present in the session.
 Logging In/Out: Record when a student joins and leaves the session for
accurate attendance records.
 Alerts/Notifications: Notify instructors when a student is marked absent or if
there is an issue with face detection.
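A minimal sketch of such a timestamped log, using only the Python standard library, is shown below. The file name, column layout, and event labels are illustrative assumptions, not fixed by the system described here.

```python
import csv
import datetime

def log_attendance_event(path, student_id, name, event):
    """Append a timestamped attendance event ('join', 'leave', 'absent') to a CSV log."""
    stamp = datetime.datetime.now()
    row = [student_id, name,
           stamp.strftime('%d-%m-%Y'),   # date of the event
           stamp.strftime('%H:%M:%S'),   # time of the event
           event]                        # join/leave/absent marker
    with open(path, 'a+', newline='') as f:
        csv.writer(f).writerow(row)
    return row

# Example (hypothetical ID and name):
# log_attendance_event('Attendance.csv', '21UP1A6712', 'Abhinaya', 'join')
```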
6. Integration with Learning Management Systems (LMS)

 Synchronization: The face detection system should integrate seamlessly with
the online learning platform, syncing attendance data with the LMS for
grading and reporting.
 Real-time Updates: Ensure that the attendance is updated in real-time and is
accessible for teachers and administrators.
7. False Positives/Negatives

 Environment Control: Address issues such as background noise, distractions,
or multiple faces in the frame (e.g., siblings or roommates appearing in the
camera view).
 System Learning: Machine learning models can improve accuracy by
learning from past false positives or missed identifications.

2.4.5 VIDEO STREAMING


Incorporating video streaming for attendance using face detection in online classes
adds another layer of complexity but can make the system more robust, as it leverages
real-time video feeds to track student participation. Here's how this can be structured
effectively, with a focus on key parameters:

1. Real-Time Face Detection

 Continuous Monitoring: The system uses video streaming to detect students'
faces in real-time throughout the class session. The face detection algorithm
must continually process the video frames to ensure the student's presence is
tracked dynamically.
 Multiple Participants: In cases where there are many students, the system
should be able to detect and track multiple faces at once in the video stream,
marking attendance only for recognized faces.
 Frame Rate Optimization: The system must balance the frame rate of video
streaming (e.g., 30 FPS or 60 FPS) with processing speed to prevent lags in
face detection while maintaining a smooth experience.
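One common way to balance the stream's frame rate against detection cost is to run the (expensive) detector on only every Nth frame. A sketch of that selection logic follows; the function names and the "checks per second" parameter are illustrative assumptions.

```python
def detection_stride(fps, checks_per_second):
    """Return the stride: analyze one frame out of every `stride` frames."""
    return max(1, int(fps // checks_per_second))

def should_analyze(frame_index, stride):
    # Run the face detector only on every `stride`-th frame of the stream
    return frame_index % stride == 0
```

At 30 FPS with two detector passes per second, the stride is 15, so frames 0, 15, 30, ... are analyzed while the remaining frames are simply displayed.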

2. Video-Based Authentication

 Login Process: At the beginning of the class, facial recognition can be
performed via the video stream to authenticate students as they join. This can
serve as an automatic attendance mark when the student's face is recognized.
 Real-Time Face Verification: Each time the student reappears after being
temporarily off-camera (e.g., switching tabs or muting video), the system
should verify their face in the video feed to continuously monitor attendance.

3. Classroom Environment Considerations

 Background Removal or Blurring: In a video call, multiple people or
background distractions can interfere with face detection. Techniques like
background blurring or removal can enhance the system's ability to detect and
focus only on the relevant faces.
 Lighting Conditions: Face detection needs to be optimized for varying
lighting conditions, such as different times of day or poorly lit environments.
Using tools like HDR (high dynamic range) can help ensure clearer face
detection even in low-light situations.

4. Face Detection Accuracy

 Deep Learning Models: State-of-the-art face recognition algorithms (e.g.,
OpenCV, Dlib, or deep learning-based models like MTCNN or FaceNet)
should be employed to detect and recognize faces with high accuracy in
diverse conditions.
 Angle and Orientation: The system should be able to detect faces at different
angles and orientations. Students may not always look directly at the camera,
so the system should identify faces from side views, tilted angles, or partial
faces (e.g., if a student is wearing glasses, or their face is partially obscured).

5. Attendance Logging

 Automatic Timestamping: Once the student's face is detected and matched,
the system automatically logs the time they entered the session. A similar log
is created when they leave or go idle.
 Session Duration: You can monitor the duration a student stays actively on
camera, helping to calculate participation time, ensuring they were present for
a minimum required period.
 Real-Time Updates: Attendance records should be updated in real-time in the
backend system, which could be integrated with the class platform (e.g.,
Google Classroom, Zoom, Microsoft Teams).

CHAPTER 3
SYSTEM REQUIREMENTS SPECIFICATIONS

This chapter outlines both the functional and non-functional requirements
for an attendance system based on face detection in online classes. When
designing a face detection-based attendance system with video streaming, it
is essential to specify the system requirements to ensure smooth operation,
reliability, and security. Below are the System Requirements Specifications
(SRS) for such a solution.

3.1 FUNCTIONAL REQUIREMENTS


3.1.1 User Registration and Setup
 Student Registration:
The system must allow students to register their faces via a webcam
or uploaded photo before the class begins. Each student’s face data
will be stored in a secure database for future identification during
classes.
 Instructor Registration:
Instructors will have a profile with access to attendance logs and
monitoring tools. The system will allow instructors to set class
schedules and view attendance reports.
 Pre-Class Face Enrollment:
The system will offer an interface for students to enroll their faces
by capturing multiple images under different lighting conditions and
angles to improve accuracy. Optionally, face data may be captured
during the first session, where the system will prompt students for
face capture.
3.1.2 Face Detection During Class
 Automatic Attendance Logging
The system will automatically log attendance for each student with
the following details:
Student name or ID.
Class start and end time.
Time of attendance marking.
Attendance status (e.g., "Present," "Absent," or "Not Detected").

 Attendance Reports
The system will generate attendance reports that can be
downloaded by the instructor.
Reports will include:
A list of students and their attendance status.
A time-stamped log of when each student’s face was recognized.
3.2 NON-FUNCTIONAL REQUIREMENTS
3.2.1 Performance Requirements
 Response Time:
The system must detect and recognize a student's face within 2
seconds of the student being visible on the camera. The system
must process face detection in real-time, updating attendance logs
instantly as soon as a face is identified.
 Throughput:
The system should be able to handle up to 100 students in a single
online class simultaneously without degradation in performance. It
should support processing multiple video feeds simultaneously if
students have their cameras on during the session.
 Latency:
Face detection and attendance marking should not introduce more
than 1 second of latency to the live streaming of the class or video
conferencing tool. Real-time alerts or notifications (e.g., for face
detection failure) should be triggered within 5 seconds of a failure
event.
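The latency budget above can be checked directly by timing each detection pass with `time.perf_counter` and flagging passes that exceed the budget. This is a sketch only; the wrapper name is an assumption, and the 1-second default follows the requirement stated here.

```python
import time

def timed_detection(detect_fn, frame, budget_s=1.0):
    """Run a detection callable and report whether it stayed within the latency budget.

    detect_fn : any callable that takes a frame and returns detection results
    frame     : the video frame to analyze
    budget_s  : maximum allowed processing time in seconds (1 s per the SRS above)
    """
    start = time.perf_counter()
    result = detect_fn(frame)
    elapsed = time.perf_counter() - start
    within_budget = elapsed <= budget_s
    return result, elapsed, within_budget
```

When `within_budget` is False, the system could trigger the alert path described above (within the 5-second notification window).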
3.2.2 Availability Requirements
 Uptime:
The system must have an uptime of 99.9% during active class hours
to ensure reliable attendance tracking. Maintenance windows
should be scheduled during off-peak hours, with proper user
notification.
 Fault Tolerance:
The system must be fault-tolerant, meaning if one part of the
system fails (e.g., face detection service), the rest of the system
(e.g., attendance logging) should continue functioning. If the face
recognition service encounters an error, a fallback method (e.g.,
manual attendance entry by the instructor) should be available.
 Recovery:
The system must be capable of recovering from failures without
data loss. In case of system crashes, any attendance data that wasn’t
logged should be retried or captured once the system is restored.
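The recovery requirement can be realized with a simple write-behind buffer: records that fail to persist are queued and retried once the backend is available again. The class below is a minimal sketch under that assumption; the names are illustrative, not from the project code.

```python
class AttendanceBuffer:
    """Queue attendance records when the backend is down; flush them on recovery."""

    def __init__(self, writer):
        self.writer = writer   # callable that persists one record; may raise on failure
        self.pending = []      # records not yet safely stored

    def log(self, record):
        try:
            self.writer(record)
        except Exception:
            # Keep the record so no attendance data is lost on failure
            self.pending.append(record)

    def flush(self):
        """Retry all buffered records; keep only the ones that still fail."""
        remaining = []
        for record in self.pending:
            try:
                self.writer(record)
            except Exception:
                remaining.append(record)
        self.pending = remaining
        return len(self.pending) == 0
```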
3.2.3 Environmental Requirements
 Power and Network Stability:

The system should be designed to perform optimally even in
environments with fluctuating network speeds and connectivity
(e.g., low bandwidth scenarios should still allow basic face
detection and attendance marking). The system should provide
clear notifications or warnings if the internet connection is unstable,
allowing students or instructors to resolve issues.

3.3 HARDWARE REQUIREMENTS


3.3.1 Student Hardware Requirements
 Webcam/Camera:
Type: The system requires a webcam or built-in camera for face
detection.
Resolution: A camera with at least 720p (HD) resolution is
recommended for clear image capture. A 1080p (Full HD)
camera is ideal for better accuracy and facial detail.
 Frame Rate:
A camera supporting 30 FPS or higher is necessary to ensure
smooth real-time face detection.
 Processing Power (Student's Device):
CPU: A modern Intel Core i3 or AMD Ryzen 3 (or equivalent)
processor should suffice for basic face detection tasks.
RAM: At least 4GB of RAM is recommended. For smoother
operation, especially in higher-end devices, 8GB of RAM would be
optimal.

3.3.2 Additional Peripherals and Devices:


 Microphone and Speakers (for optimal user experience)
While these are not strictly necessary for face detection, having
quality microphones and speakers can enhance the
online class experience by improving communication between the
instructor and students.
 External Lighting (optional)
For face detection accuracy in low-light environments, external
lighting may be necessary. LED ring lights or desk lamps with
adjustable brightness can help improve face visibility.

3.4 Operating System Requirements


3.4.1 For Students and Instructors
Windows: Windows 10 or later, including Windows 11.

macOS: macOS 10.13 (High Sierra) or later.
Linux: Any major Linux distribution (e.g., Ubuntu 20.04 LTS or later).
Mobile: Android 10 or later; iOS 12 or later (for mobile-based usage).

3.4.2 Video Conferencing Platform:


Zoom: Integration with the Zoom API to access the video feed, detect faces,
and mark attendance.
Microsoft Teams: Integration with Microsoft Teams API or webhooks to
pull video data and record attendance.
Google Meet: Integration with Google Meet (via third-party solutions or
APIs) to capture live video feeds and synchronize attendance marking.
WebEx, Skype, or Other Platforms:
The system should be compatible with major video conferencing platforms either
through direct API integrations or screen-capturing tools.

CHAPTER 4
METHODOLOGIES
4.1. Real-Time Face Detection with Continuous Monitoring
How it Works: During the class, the system continuously monitors students' video
feeds, detecting and recognizing their faces at regular intervals. Attendance is marked
when a student's face is detected within a specified timeframe.
Face Detection APIs: Libraries like OpenCV, Dlib, or proprietary APIs (e.g.,
Microsoft Azure Face API).
Deep Learning Models: Convolutional Neural Networks (CNNs) for recognizing
faces in real-time.
Benefits: Ensures only students who are physically present and visible are marked
present. This method prevents proxy attendance.
Challenges: May be impacted by poor lighting or background noise that affects face
detection accuracy.
4.2. Snapshot-Based Face Recognition at Intervals
How it Works: The system periodically takes snapshots of the student's video feed,
either automatically or with a timer. These snapshots are then processed for face
detection and recognition to log attendance.
Automatic Image Capture: Software or platform integration to periodically capture
frames.
Face Matching: Comparison of captured images to pre-registered face profiles (using
algorithms like Eigenfaces or deep neural networks).
Benefits: Students don’t need to be constantly visible for the whole session, reducing
the stress on continuous camera usage.

Challenges: Potential false negatives if the student’s face is not captured or if there is
a camera or connectivity issue.
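The snapshot approach can be scheduled up front: given the session duration and capture interval, compute the offsets at which frames are grabbed, then mark a student present if enough snapshots were recognized. Both helpers below are illustrative sketches; the 50% threshold is an assumed policy, not a requirement from this report.

```python
def snapshot_schedule(duration_s, interval_s):
    """Offsets (seconds from session start) at which to grab a frame for recognition."""
    return list(range(0, duration_s, interval_s))

def marked_present(recognized_flags, min_fraction=0.5):
    """Present if the face was recognized in at least `min_fraction` of snapshots."""
    if not recognized_flags:
        return False
    return sum(recognized_flags) / len(recognized_flags) >= min_fraction
```

A 60-minute class checked every 10 minutes yields six snapshots; a student recognized in three or more of them would be marked present under the assumed 50% policy.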
4.3. Pre-Registration of Faces
How it Works: Students register their faces at the beginning of the semester or before
each class. The system stores the registered faces and uses them to verify attendance
during each session.
Face Enrollment Systems: Software that allows students to upload or capture their
facial images.
Face Recognition Algorithms: Algorithms that compare live faces to registered ones.
Benefits: Highly secure, as it ensures that the person attending the class is indeed the
registered student.
Challenges: Students may have issues with face registration (e.g., poor lighting, facial
changes) or face recognition under different lighting conditions.
4.4. Behavioural Monitoring with Face Detection
How it Works: In addition to face detection, the system may analyze students'
behavior and engagement levels. For example, it may track whether the student's face
remains within the frame or whether they appear distracted.
Eye Tracking and Gaze Detection: Detecting the direction of a student's gaze to
assess attention levels.
Emotion Recognition: Analysing facial expressions to determine if students are
actively engaged or distracted.
Benefits: Goes beyond basic attendance by adding a layer of engagement tracking.
This could help instructors ensure students are actively participating.
Challenges: More complex and resource-intensive. There may be concerns regarding
privacy and whether the system is intrusive.
4.5. Post-Class Attendance Verification
How it Works: In this approach, students are required to take a brief selfie or a short
video of themselves after the class. This can be used to verify attendance after the fact
using face recognition.
Face Recognition Algorithms: Used to match the post-class selfie/video with the
pre-registered face data.
Selfie Verification: An easy process for students that doesn't disrupt the live session.
Benefits: Ensures that attendance is verified after the class, which can accommodate
students who may not be able to keep their cameras on throughout the session.

Challenges: Requires post-session effort and additional time for verification. Can be
prone to students taking pictures after the class is over.

CHAPTER 5
APPROACHES
There are several approaches to using face detection for attendance management in
online classes. These approaches vary in complexity, accuracy, and the technologies
used, but they all aim to automate the process of marking attendance by identifying
and verifying students based on their facial features.

5.1 Digital Image Processing

Digital image processing for attendance in online classes through face detection
leverages computer vision techniques to automatically mark student attendance. By
using face detection algorithms like Haar cascades or deep learning models, the
system scans the video feed of each student to identify faces in real-time. Initially,
each student’s facial features are captured and stored in a database for future
recognition. During subsequent classes, the system matches detected faces with the
enrolled database, marking students as present if their face is recognized. This
automated process eliminates the need for manual roll calls, enhancing efficiency and
accuracy. However, challenges like privacy concerns, varying lighting conditions, and
the need for good camera quality must be addressed. Such systems can be integrated
with learning management platforms to provide seamless attendance tracking while
ensuring security and data integrity.
Digital image processing for attendance in online classes using face detection is an
innovative application of computer vision and machine learning techniques. Here’s a
breakdown of how it works:
5.1.1 Face Detection:
Technology: Face detection algorithms like Haar cascades, Single Shot Multibox
Detector (SSD), or more advanced methods like deep learning models (e.g.,
Convolutional Neural Networks or CNNs) are used to detect faces from video feeds
of students attending online classes.
Process: These algorithms analyze the video feed from a student’s webcam, detecting
faces based on facial features such as eyes, nose, and mouth.

5.1.2 Enrollment:
Initial Setup: During the first class, each student’s face can be captured and stored in
a database with their unique identifier (such as their name or ID number).
Facial Recognition: After initial enrollment, facial recognition algorithms can match
faces detected in real-time during subsequent sessions to the database, identifying the
student attending the class.
5.1.3 Attendance Marking:
Real-Time Attendance: Once a face is detected and recognized, the system
automatically marks the student as present for the class. If no face is detected for a
particular student (e.g., if their camera is turned off or if they’re not in the frame), the
system may mark them as absent or ask for clarification.
Verification: Some systems may also verify whether the detected face matches a set
of criteria (e.g., the face is not blurry, or the student is looking at the camera) before
marking them as present.
5.1.4 Integration:
Learning Management Systems (LMS): These face detection systems can be
integrated with LMS platforms like Zoom, Google Meet, or custom e-learning
platforms, making the process seamless for both instructors and students.
5.1.5 Future Developments:
Emotion Detection: Future systems may include emotion recognition, monitoring
students’ attentiveness or engagement during the class.
Multi-Factor Authentication: Combining face detection with other methods like
voice recognition or ID verification could enhance security and reliability.

5.2 Image
In online classes, attendance can be tracked through face detection by capturing real-
time images or video feeds of students. The system processes each image to detect
faces, typically using facial recognition algorithms to identify key features like eyes,
nose, and mouth. Once a face is detected and recognized, the system matches it with a
pre-enrolled database of student faces. If a match is found, the system automatically
logs the student’s attendance for the session. For example, an image might show a
student’s face highlighted within a bounding box, indicating that their presence has
been confirmed. This process allows for an efficient, automated method of attendance
tracking without the need for manual roll calls, though it depends on factors like
camera quality, lighting, and proper facial alignment.

Fig 5.2: Image Verification

In online classes, using face detection for attendance typically involves processing
images or video feeds to detect and recognize the faces of students. Here's how it
works in an image-based context:
o Capture Image or Video Feed: The system takes snapshots or streams video
from the student’s camera during the class session.

o Face Detection: The system processes each frame to detect faces. It identifies
facial features (eyes, nose, mouth) and marks the locations of faces within the
image
o Face Recognition: Once a face is detected, the system compares it with the
enrolled database of facial features to recognize the student.
o Attendance Marking: If the face matches an enrolled student, the system
automatically logs their attendance.
o Snapshot Example: You could have an image of the student with their face
highlighted by a box or outline, indicating that the system has detected their
presence and marked them as present for the class.

5.3 Image File Sizes


In the context of attendance in online classes using face detection, the size of image
files can vary depending on several factors such as resolution, compression, and the
format of the images. Here's how image file sizes can impact the process:
5.3.1 Image Resolution:
o Higher resolution images (e.g., 1920x1080 pixels) will typically result in
larger file sizes, as more data is stored to represent the detailed image. These
larger images may offer better accuracy for face detection but could slow
down the processing time.
o Lower resolution images (e.g., 640x480 pixels) are smaller in size but might
not provide enough detail for accurate face recognition, especially in low-
light conditions or when the student is far from the camera.
5.3.2 Compression:
o Image formats like JPEG or PNG often use compression techniques to reduce
file size. JPEG files are lossy, meaning some image quality is sacrificed for
smaller file sizes, while PNG files are lossless, retaining full image quality but
resulting in larger file sizes.
o Compression helps reduce the storage requirements for the large volumes of
images captured during online classes.
5.3.3 File Format:

o Common formats include JPEG, PNG, and even more efficient formats like
WebP, which balances file size and image quality. JPEG images may range
from a few kilobytes (KB) to several hundred KB, depending on resolution
and compression settings.
o Video feeds, which are often used for real-time face detection, typically have
larger file sizes compared to single images due to the continuous stream of
frames.
5.3.4 Average Image File Size:
o A typical image used in face detection might range between 50 KB and 500 KB
per frame, depending on factors like resolution and compression. If the
system processes multiple frames per second, storage and bandwidth needs
grow proportionally.
5.3.5 Impact on System Performance:
o Larger image files may require more storage space and bandwidth for
uploading and processing, particularly in systems that need to process a large
number of students simultaneously.
o Smaller file sizes, while efficient for storage, could compromise face detection
accuracy, leading to potential errors in attendance marking.

Frame Rate: The system processes the video feed at a certain frame rate (e.g., 30
frames per second) to ensure that no students are missed and faces are detected
accurately.
Detection Algorithm: Using face detection algorithms (like OpenCV, Haar
Cascades, or deep learning models), the system identifies and isolates faces in the
frames. Once a face is detected, an image of the student's face is extracted for
recognition or verification.
5.5.3 Image Preprocessing:
Face Alignment: Sometimes, the captured face images need to be aligned or adjusted
to ensure that the face is centered and upright.
Grayscale Conversion: To speed up processing, the image may be converted to
grayscale, removing the complexity of color processing without affecting the face
recognition.
Face Cropping: The detected face can be cropped from the frame, isolating it from
the rest of the image and making it easier to analyze.
Image Enhancement: In cases of poor lighting or blurry images, the system may
apply techniques such as contrast enhancement, smoothing, or sharpening to improve
the image quality for better recognition accuracy.
5.5.4 Face Recognition:
Match Faces to Student Database: Once the faces are captured, the system can use
face recognition techniques to match the detected faces to a database of enrolled
students. This step ensures that the correct student is marked as present.
Data Collection: The system logs the time and date of the face detection, marking the
student’s attendance in the system automatically.
5.5.5 Continuous Monitoring:

Ongoing Face Capture: The system can continue to capture frames at regular
intervals throughout the class, ensuring that attendance is recorded during the entire
session. If a student leaves the camera view or is no longer actively present, the
system can log their absence.
Active Participation: Some systems may be set up to check for face visibility at
regular intervals (every few minutes), ensuring students are engaged and not just
logged in.

5.6 Image Enhancement


Color image processing in the context of attendance through face detection in online
classes adds an extra layer of complexity and potential to improve the accuracy and
robustness of the system. While grayscale images are often used for face detection
and recognition due to their simplicity and speed, color images can provide additional
information about the scene (e.g., clothing, hair color, background) that might be
useful for improving detection, reducing errors, and ensuring higher accuracy in the
face recognition process.
Benefits of Color Image Processing in Attendance Systems:
Improved Accuracy: Color images provide richer information that can help face
detection algorithms detect faces in varying lighting and orientations more accurately.
Real-World Effectiveness: Students may have different skin tones or background
conditions. Color processing allows the system to handle this variation better
compared to grayscale, ensuring fairness and robustness.
Richer Features for Recognition: Face recognition models benefit from the extra
information provided by color, leading to more accurate matching and fewer errors.
Better Engagement Monitoring: Color image processing allows for better
engagement monitoring by tracking facial features more precisely, ensuring students
are actively participating in the class.

5.7 Segmentation
Segmentation in attendance for online classes using face detection involves leveraging
computer vision and machine learning techniques to automatically identify and track
students' faces in video streams. This can be useful for monitoring student
participation and ensuring that they are present in virtual classrooms. Here’s a
breakdown of the key steps involved in this process:
5.7.1 Face Detection:
Objective: Identify faces in the video stream.
Method: Use pre-trained models for face detection, such as Haar Cascades, MTCNN
(Multi-task Cascaded Convolutional Networks), or modern deep learning models like
YOLO (You Only Look Once) or RetinaFace. These models can detect multiple
faces in real-time video streams.
Output: A bounding box around each detected face.
5.7.2 Face Recognition/Identification:
Objective: Verify or identify students by matching detected faces to a database of
enrolled students.

28
Method: Once faces are detected, use face recognition algorithms (e.g., using
OpenCV with dlib or deep learning methods like FaceNet, VGG-Face, or ArcFace) to
identify who the faces belong to.
Output: Label each detected face with the student’s name or ID.
5.7.3 Attendance Segmentation:
Objective: Segment the video stream to recognize and register the attendance of each
identified student.
Method: Once faces are identified, you can record timestamps when the students are
present in the video. This can be done by comparing frames and tracking faces
throughout the session, storing records of each student's presence in specific time
intervals.
Output: A log of attendance with timestamps.
5.7.4 Real-time Monitoring:
Objective: Ensure real-time tracking of students.
Method: Integrate the face detection and recognition models into the live stream
processing pipeline. Implement algorithms to track whether the same student’s face
remains in the frame for the entire session or intermittently, ensuring they are actively
participating.
Output: Real-time attendance tracking, with notifications if any student is missing or
their video feed is inactive.
5.7.5 Additional Techniques:
Motion Detection: To confirm active participation, motion detection algorithms can
track head movements or other signs of engagement.
Pose Estimation: Pose detection can help verify whether the student is facing the
screen.
Eye Movement Tracking: Some systems might use eye-tracking to confirm that
students are focusing on the screen.
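The segmentation output described above, a per-frame "face present" signal, can be condensed into attendance intervals by grouping consecutive positive detections. The helper below is an illustrative sketch, not part of the project code.

```python
def presence_intervals(samples):
    """Convert (timestamp, detected) samples into (start, end) presence intervals.

    samples : list of (timestamp, bool) pairs in chronological order
    """
    intervals = []
    start = None
    last = None
    for t, detected in samples:
        if detected:
            if start is None:
                start = t                    # a presence segment begins
            last = t
        elif start is not None:
            intervals.append((start, last))  # segment ended at the last detection
            start = None
    if start is not None:
        intervals.append((start, last))      # close a segment still open at the end
    return intervals
```

For example, detections at seconds 0-1 and 3-4 with a gap at second 2 yield the intervals (0, 1) and (3, 4), which can be logged as the student's presence segments.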

CHAPTER 6
IMPLEMENTATION

CODE IMPLEMENTATION
############################################# IMPORTING
import tkinter as tk
from tkinter import ttk
from tkinter import messagebox as mess
import tkinter.simpledialog as tsd
import cv2, os
import csv
import numpy as np
from PIL import Image
import pandas as pd
import datetime
import time

############################################# FUNCTIONS
def assure_path_exists(path):
    # create the directory for the given path if it does not already exist
    dir = os.path.dirname(path)
    if not os.path.exists(dir):
        os.makedirs(dir)

###########################################################

def tick():
    # refresh the on-screen clock every 200 ms
    time_string = time.strftime('%H:%M:%S')
    clock.config(text=time_string)
    clock.after(200, tick)

############################################################
def contact():
    mess._show(title='Contact us', message="Please contact us on : 'shubhamkumar8180323@gmail.com' ")

#############################################################
def check_haarcascadefile():
    exists = os.path.isfile("haarcascade_frontalface_default.xml")
    if not exists:
        mess._show(title='Some file missing', message='Please contact us for help')
        window.destroy()

###############################################################
def save_pass():
    assure_path_exists("TrainingImageLabel/")
    exists1 = os.path.isfile("TrainingImageLabel\psd.txt")
    if exists1:
        tf = open("TrainingImageLabel\psd.txt", "r")
        key = tf.read()
    else:
        master.destroy()
        new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below', show='*')
        if new_pas == None:
            mess._show(title='No Password Entered', message='Password not set!! Please try again')
        else:
            tf = open("TrainingImageLabel\psd.txt", "w")
            tf.write(new_pas)
            mess._show(title='Password Registered', message='New password was registered successfully!!')
            return
    op = (old.get())
    newp = (new.get())
    nnewp = (nnew.get())
    if (op == key):
        if (newp == nnewp):
            txf = open("TrainingImageLabel\psd.txt", "w")
            txf.write(newp)
        else:
            mess._show(title='Error', message='Confirm new password again!!!')
            return
    else:
        mess._show(title='Wrong Password', message='Please enter correct old password.')
        return
    mess._show(title='Password Changed', message='Password changed successfully!!')
    master.destroy()

#############################################################
def change_pass():
    global master
    master = tk.Tk()
    master.geometry("400x160")
    master.resizable(False, False)
    master.title("Change Password")
    master.configure(background="white")
    lbl4 = tk.Label(master, text='    Enter Old Password', bg='white', font=('comic', 12, ' bold '))
    lbl4.place(x=10, y=10)
    global old
    old = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    old.place(x=180, y=10)
    lbl5 = tk.Label(master, text='   Enter New Password', bg='white', font=('comic', 12, ' bold '))
    lbl5.place(x=10, y=45)
    global new
    new = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    new.place(x=180, y=45)
    lbl6 = tk.Label(master, text='Confirm New Password', bg='white', font=('comic', 12, ' bold '))
    lbl6.place(x=10, y=80)
    global nnew
    nnew = tk.Entry(master, width=25, fg="black", relief='solid', font=('comic', 12, ' bold '), show='*')
    nnew.place(x=180, y=80)
    cancel = tk.Button(master, text="Cancel", command=master.destroy, fg="black", bg="red", height=1, width=25, activebackground="white", font=('comic', 10, ' bold '))
    cancel.place(x=200, y=120)
    save1 = tk.Button(master, text="Save", command=save_pass, fg="black", bg="#00fcca", height=1, width=25, activebackground="white", font=('comic', 10, ' bold '))
    save1.place(x=10, y=120)
    master.mainloop()

##########################################################
def psw():
    assure_path_exists("TrainingImageLabel/")
    exists1 = os.path.isfile("TrainingImageLabel\psd.txt")
    if exists1:
        tf = open("TrainingImageLabel\psd.txt", "r")
        key = tf.read()
    else:
        new_pas = tsd.askstring('Old Password not found', 'Please enter a new password below', show='*')
        if new_pas == None:
            mess._show(title='No Password Entered', message='Password not set!! Please try again')
        else:
            tf = open("TrainingImageLabel\psd.txt", "w")
            tf.write(new_pas)
            mess._show(title='Password Registered', message='New password was registered successfully!!')
            return
    password = tsd.askstring('Password', 'Enter Password', show='*')
    if (password == key):
        TrainImages()
    elif (password == None):
        pass
    else:
        mess._show(title='Wrong Password', message='You have entered wrong password')

#######################################################
def clear():
    txt.delete(0, 'end')
    res = "1)Take Images >>> 2)Save Profile"
    message1.configure(text=res)

def clear2():
    txt2.delete(0, 'end')
    res = "1)Take Images >>> 2)Save Profile"
    message1.configure(text=res)
#########################################################

def TakeImages():
    check_haarcascadefile()
    columns = ['SERIAL NO.', '', 'ID', '', 'NAME']
    assure_path_exists("StudentDetails/")
    assure_path_exists("TrainingImage/")
    serial = 0
    exists = os.path.isfile("StudentDetails/StudentDetails.csv")
    if exists:
        with open("StudentDetails/StudentDetails.csv", 'r') as csvFile1:
            reader1 = csv.reader(csvFile1)
            for l in reader1:
                serial = serial + 1
        serial = (serial // 2)
    else:
        with open("StudentDetails/StudentDetails.csv", 'a+') as csvFile1:
            writer = csv.writer(csvFile1)
            writer.writerow(columns)
        serial = 1
    Id = txt.get()
    name = txt2.get()
    if name.isalpha() or (' ' in name):
        cam = cv2.VideoCapture(0)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while True:
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                # incrementing sample number
                sampleNum = sampleNum + 1
                # saving the captured face in the dataset folder TrainingImage
                cv2.imwrite("TrainingImage/" + name + "." + str(serial) + "." + Id
                            + '.' + str(sampleNum) + ".jpg",
                            gray[y:y + h, x:x + w])
            # display the frame
            cv2.imshow('Taking Images', img)
            # wait for 100 milliseconds
            if cv2.waitKey(100) & 0xFF == ord('q'):
                break
            # break if the sample number is more than 100
            elif sampleNum > 100:
                break
        cam.release()
        cv2.destroyAllWindows()
        res = "Images Taken for ID : " + Id
        row = [serial, '', Id, '', name]
        with open('StudentDetails/StudentDetails.csv', 'a+') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        message1.configure(text=res)
    else:
        if not name.isalpha():
            res = "Enter Correct name"
            message.configure(text=res)
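The name check above (`name.isalpha() or ' ' in name`) accepts any string that merely contains a space, including "123 456". A stricter validation is possible with a regular expression; this is our suggested refinement, not part of the original code, and `is_valid_name` is a hypothetical helper:

```python
import re

def is_valid_name(name):
    # Accept one or more alphabetic words separated by single spaces,
    # e.g. "Harika" or "Naga Lohitha"; reject digits, blanks, and empty input.
    return re.fullmatch(r"[A-Za-z]+(?: [A-Za-z]+)*", name) is not None
```

`TakeImages()` could then gate capture on `is_valid_name(name)` instead of the two separate checks.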

########################################################
def TrainImages():
    check_haarcascadefile()
    assure_path_exists("TrainingImageLabel/")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, ID = getImagesAndLabels("TrainingImage")
    try:
        recognizer.train(faces, np.array(ID))
    except Exception:
        mess._show(title='No Registrations',
                   message='Please Register someone first!!!')
        return
    recognizer.save("TrainingImageLabel/Trainner.yml")
    res = "Profile Saved Successfully"
    message1.configure(text=res)
    # one distinct serial per registered student
    message.configure(text='Total Registrations till now : ' + str(len(set(ID))))

########################################################
def getImagesAndLabels(path):
    # get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create empty face list
    faces = []
    # create empty ID list
    Ids = []
    # loop through all the image paths, loading the Ids and the images
    for imagePath in imagePaths:
        # load the image and convert it to grayscale
        pilImage = Image.open(imagePath).convert('L')
        # convert the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # get the serial label from the file name (name.serial.id.sample.jpg)
        ID = int(os.path.split(imagePath)[-1].split(".")[1])
        # add the face sample and its label to the training lists
        faces.append(imageNp)
        Ids.append(ID)
    return faces, Ids
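The label extraction above relies on the file-naming convention used by `TakeImages()`: `name.serial.id.sample.jpg`, where the second dot-separated field is the serial number used as the training label. A small standalone illustration (the file name below is a made-up example):

```python
import os

def label_from_filename(image_path):
    # The second dot-separated field of the base name is the serial label,
    # e.g. "Abhinaya.3.21UP1A6712.17.jpg" yields 3.
    return int(os.path.split(image_path)[-1].split(".")[1])
```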

##########################################################

def TrackImages():
    check_haarcascadefile()
    assure_path_exists("Attendance/")
    assure_path_exists("StudentDetails/")
    for k in tv.get_children():
        tv.delete(k)
    msg = ''
    i = 0
    j = 0
    attendance = None  # stays None until a registered face is recognized
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    exists3 = os.path.isfile("TrainingImageLabel/Trainner.yml")
    if exists3:
        recognizer.read("TrainingImageLabel/Trainner.yml")
    else:
        mess._show(title='Data Missing',
                   message='Please click on Save Profile to reset data!!')
        return
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)

    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', '', 'Name', '', 'Date', '', 'Time']
    exists1 = os.path.isfile("StudentDetails/StudentDetails.csv")
    if exists1:
        df = pd.read_csv("StudentDetails/StudentDetails.csv")
    else:
        mess._show(title='Details Missing',
                   message='Students details are missing, please check!')
        cam.release()
        cv2.destroyAllWindows()
        window.destroy()
        return
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)
            serial, conf = recognizer.predict(gray[y:y + h, x:x + w])
            # LBPH returns a distance, so a lower value means a closer match
            if conf < 50:
                ts = time.time()
                date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
                aa = df.loc[df['SERIAL NO.'] == serial]['NAME'].values
                ID = df.loc[df['SERIAL NO.'] == serial]['ID'].values
                ID = str(ID)
                ID = ID[1:-1]
                bb = str(aa)
                bb = bb[2:-2]
                attendance = [str(ID), '', bb, '', str(date), '', str(timeStamp)]
            else:
                Id = 'Unknown'
                bb = str(Id)
            cv2.putText(im, str(bb), (x, y + h), font, 1, (255, 255, 255), 2)
        cv2.imshow('Taking Attendance', im)
        if cv2.waitKey(1) == ord('q'):
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
    exists = os.path.isfile("Attendance/Attendance_" + date + ".csv")
    if attendance is not None:  # skip writing when nobody was recognized
        if exists:
            with open("Attendance/Attendance_" + date + ".csv", 'a+') as csvFile1:
                writer = csv.writer(csvFile1)
                writer.writerow(attendance)
        else:
            with open("Attendance/Attendance_" + date + ".csv", 'a+') as csvFile1:
                writer = csv.writer(csvFile1)
                writer.writerow(col_names)
                writer.writerow(attendance)
    if os.path.isfile("Attendance/Attendance_" + date + ".csv"):
        with open("Attendance/Attendance_" + date + ".csv", 'r') as csvFile1:
            reader1 = csv.reader(csvFile1)
            for lines in reader1:
                i = i + 1
                if (i > 1) and (i % 2 != 0):
                    iidd = str(lines[0]) + '   '
                    tv.insert('', 0, text=iidd,
                              values=(str(lines[2]), str(lines[4]), str(lines[6])))
    cam.release()
    cv2.destroyAllWindows()
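Because one row is appended per tracking session, a student recognized in several sessions appears more than once in the day's CSV. A simple post-processing step can keep only the first sighting per ID. This is our sketch, not part of the original code; it assumes the same row layout the report uses, `['Id', '', 'Name', '', 'Date', '', 'Time']`:

```python
def first_sightings(rows):
    # Keep only the earliest attendance row per student ID.
    # Rows are shaped like ['Id', '', 'Name', '', 'Date', '', 'Time'].
    seen = set()
    result = []
    for row in rows:
        sid = row[0]
        if sid and sid not in seen:
            seen.add(sid)
            result.append(row)
    return result
```

Running this over the parsed rows of `Attendance_<date>.csv` (header excluded) before populating the Treeview would show each student exactly once per day.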

######################################## USED STUFFS


global key
key = ''

ts = time.time()
date = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y')
day, month, year = date.split("-")

mont = {'01': 'January',
        '02': 'February',
        '03': 'March',
        '04': 'April',
        '05': 'May',
        '06': 'June',
        '07': 'July',
        '08': 'August',
        '09': 'September',
        '10': 'October',
        '11': 'November',
        '12': 'December'
        }
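The month lookup table above duplicates what `strftime` already provides: the `%B` directive expands to the full month name directly. A one-line alternative, shown as a design note rather than a change to the project code (it assumes the default English locale):

```python
import datetime

# '%B' yields the full month name, making the `mont` dictionary unnecessary.
stamp = datetime.datetime(2025, 3, 1)
formatted = stamp.strftime('%d-%B-%Y')
```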

######################################## GUI FRONT-END


window = tk.Tk()
window.geometry("1280x720")
window.resizable(True, False)
window.title("Attendance System")
window.configure(background='#2d420a')

frame1 = tk.Frame(window, bg="#c79cff")
frame1.place(relx=0.11, rely=0.17, relwidth=0.39, relheight=0.80)

frame2 = tk.Frame(window, bg="#c79cff")
frame2.place(relx=0.51, rely=0.17, relwidth=0.38, relheight=0.80)

message3 = tk.Label(window, text="Face Recognition Based Attendance Monitoring System",
                    fg="white", bg="#2d420a", width=55, height=1,
                    font=('comic', 29, ' bold '))
message3.place(x=10, y=10)

frame3 = tk.Frame(window, bg="#c4c6ce")
frame3.place(relx=0.52, rely=0.09, relwidth=0.09, relheight=0.07)

frame4 = tk.Frame(window, bg="#c4c6ce")
frame4.place(relx=0.36, rely=0.09, relwidth=0.16, relheight=0.07)

datef = tk.Label(frame4, text=day + "-" + mont[month] + "-" + year + "  |  ",
                 fg="#ff61e5", bg="#2d420a", width=55, height=1,
                 font=('comic', 22, ' bold '))
datef.pack(fill='both', expand=1)

clock = tk.Label(frame3, fg="#ff61e5", bg="#2d420a", width=55, height=1,
                 font=('comic', 22, ' bold '))
clock.pack(fill='both', expand=1)
tick()

head2 = tk.Label(frame2, text="       For New Registrations       ",
                 fg="black", bg="#00fcca", font=('comic', 17, ' bold '))
head2.grid(row=0, column=0)

head1 = tk.Label(frame1, text="       For Already Registered       ",
                 fg="black", bg="#00fcca", font=('comic', 17, ' bold '))
head1.place(x=0, y=0)

lbl = tk.Label(frame2, text="Enter ID", width=20, height=1, fg="black",
               bg="#c79cff", font=('comic', 17, ' bold '))
lbl.place(x=80, y=55)

txt = tk.Entry(frame2, width=32, fg="black", font=('comic', 15, ' bold '))
txt.place(x=30, y=88)

lbl2 = tk.Label(frame2, text="Enter Name", width=20, fg="black",
                bg="#c79cff", font=('comic', 17, ' bold '))
lbl2.place(x=80, y=140)

txt2 = tk.Entry(frame2, width=32, fg="black", font=('comic', 15, ' bold '))
txt2.place(x=30, y=173)

message1 = tk.Label(frame2, text="1)Take Images >>> 2)Save Profile",
                    bg="#c79cff", fg="black", width=39, height=1,
                    activebackground="#3ffc00", font=('comic', 15, ' bold '))
message1.place(x=7, y=230)

message = tk.Label(frame2, text="", bg="#c79cff", fg="black", width=39, height=1,
                   activebackground="#3ffc00", font=('comic', 16, ' bold '))
message.place(x=7, y=450)

lbl3 = tk.Label(frame1, text="Attendance", width=20, fg="black",
                bg="#c79cff", height=1, font=('comic', 17, ' bold '))
lbl3.place(x=100, y=115)

res = 0
exists = os.path.isfile("StudentDetails/StudentDetails.csv")
if exists:
    with open("StudentDetails/StudentDetails.csv", 'r') as csvFile1:
        reader1 = csv.reader(csvFile1)
        for l in reader1:
            res = res + 1
    res = (res // 2) - 1
else:
    res = 0
message.configure(text='Total Registrations till now : ' + str(res))
##################### MENUBAR #################################
menubar = tk.Menu(window, relief='ridge')
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label='Change Password', command=change_pass)
filemenu.add_command(label='Contact Us', command=contact)
filemenu.add_command(label='Exit', command=window.destroy)
menubar.add_cascade(label='Help', font=('comic', 29, ' bold '), menu=filemenu)
################## TREEVIEW ATTENDANCE TABLE ####################
tv = ttk.Treeview(frame1, height=13, columns=('name', 'date', 'time'))
tv.column('#0', width=82)
tv.column('name', width=130)
tv.column('date', width=133)
tv.column('time', width=133)
tv.grid(row=2, column=0, padx=(0, 0), pady=(150, 0), columnspan=4)
tv.heading('#0', text='ID')
tv.heading('name', text='NAME')
tv.heading('date', text='DATE')
tv.heading('time', text='TIME')
###################### SCROLLBAR ################################
scroll = ttk.Scrollbar(frame1, orient='vertical', command=tv.yview)
scroll.grid(row=2, column=4, padx=(0, 100), pady=(150, 0), sticky='ns')
tv.configure(yscrollcommand=scroll.set)

###################### BUTTONS ##################################


clearButton = tk.Button(frame2, text="Clear", command=clear, fg="black",
                        bg="#ff7221", width=11, activebackground="white",
                        font=('comic', 11, ' bold '))
clearButton.place(x=335, y=86)
clearButton2 = tk.Button(frame2, text="Clear", command=clear2, fg="black",
                         bg="#ff7221", width=11, activebackground="white",
                         font=('comic', 11, ' bold '))
clearButton2.place(x=335, y=172)
takeImg = tk.Button(frame2, text="Take Images", command=TakeImages, fg="white",
                    bg="#6d00fc", width=34, height=1, activebackground="white",
                    font=('comic', 15, ' bold '))
takeImg.place(x=30, y=300)
trainImg = tk.Button(frame2, text="Save Profile", command=psw, fg="white",
                     bg="#6d00fc", width=34, height=1, activebackground="white",
                     font=('comic', 15, ' bold '))
trainImg.place(x=30, y=380)
trackImg = tk.Button(frame1, text="Take Attendance", command=TrackImages,
                     fg="black", bg="#3ffc00", width=35, height=1,
                     activebackground="white", font=('comic', 15, ' bold '))
trackImg.place(x=30, y=50)
quitWindow = tk.Button(frame1, text="Quit", command=window.destroy, fg="black",
                       bg="#eb4600", width=35, height=1, activebackground="white",
                       font=('comic', 15, ' bold '))
quitWindow.place(x=30, y=450)
##################### END ###############################
window.configure(menu=menubar)
window.mainloop()
######################################################

CHAPTER 7
7. RESULT AND DISCUSSION
The face detection system identified students reliably during live online classes. Using recognition models trained on large face datasets, the system recognized enrolled students with an accuracy of 95-98%, a level made possible by the robustness of the underlying detection and recognition algorithms.

The use of face detection for attendance in online classes has shown considerable
promise, particularly for its potential to save time and reduce human error in tracking
attendance. The automated nature of the system makes it more efficient than manual
roll calls or self-reporting systems.

Fig.7.1 Attendance sheet

Fig.7.2 Facial recognition

CHAPTER 8
CONCLUSION
The implementation of face detection technology for attendance tracking in online
classes presents a highly efficient and automated solution that can significantly reduce
the administrative burden on instructors while ensuring accurate and real-time
attendance records. By using face detection algorithms with high accuracy, the system
can quickly identify students, streamlining the process of logging attendance without
the need for manual intervention.
However, while the technology shows great potential, there are challenges that need
to be addressed, such as variations in lighting, camera quality, and students’
willingness to keep their cameras on. Privacy concerns also play a critical role in the
adoption of such systems, requiring institutions to ensure transparency and gain
student consent.
Overall, face detection for attendance in online education is a promising tool that can
enhance the management of virtual classrooms. Future advancements in the system
could improve its robustness and accuracy, leading to even more seamless and secure
attendance tracking. To achieve broad acceptance, however, institutions must
prioritize addressing technical limitations and ethical concerns, fostering a balance
between innovation and privacy.

CHAPTER 9
9.1 FUTURE SCOPE
The future scope of implementing face detection for attendance in online classes is
vast and promising. As educational institutions increasingly adopt hybrid and online
learning models, automated face detection systems can ensure accurate and reliable
attendance tracking, minimizing fraudulent practices like proxy attendance. These
systems can be integrated with advanced AI technologies to monitor student
engagement and attentiveness, enhancing the overall learning experience.
Additionally, incorporating facial recognition with data analytics can provide insights
into class participation trends, enabling educators to tailor their teaching strategies.
With advancements in privacy-preserving AI and improved facial recognition
algorithms, this technology has the potential to become a standard tool in digital
education, ensuring efficiency, security, and a seamless user experience.

9.2 FUTURE ENHANCEMENTS


Future enhancements for face detection-based attendance systems in online classes
will focus on improving accuracy, security, and adaptability. Advancements in AI and
machine learning can enable better detection under varying conditions such as low
lighting, diverse backgrounds, and facial occlusions like masks or glasses. Privacy-
centric solutions, such as encrypted data handling and local device processing, will
address concerns about data security and regulatory compliance. Integration with
additional features like engagement analysis, emotion detection, and multitasking
monitoring can provide deeper insights into student behavior during classes.
Furthermore, these systems can be optimized for scalability, ensuring smooth
deployment across diverse educational platforms and making attendance management
more efficient and reliable in a wide range of learning environments.

CHAPTER 10
REFERENCES
[1]. J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt and M. Nießner, "Face2Face: Real-time face capture and reenactment of RGB videos", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2387-2395, 2016.
[2]. R. Raghavendra, K. B. Raja and C. Busch, "Detecting morphed face images", 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1, Sept 2016.
[3]. S. Bhattacharjee and S. Marcel, "What you can't see can help you - extended-range imaging for 3D-mask presentation attack detection", 2017 International Conference of the Biometrics Special Interest Group (BIOSIG), pp. 1-7, Sept 2017.
[4]. R. Ramachandra and C. Busch, "Presentation attack detection methods for face recognition systems: A comprehensive survey", ACM Comput. Surv., vol. 50, no. 1, pp. 8:1-8:37, Mar. 2017. Available: http://doi.acm.org/10.1145/3038924.
[5]. A. Khodabakhsh, R. Ramachandra and C. Busch, "A taxonomy of audiovisual fake multimedia content creation technology", Proceedings of the 1st IEEE International Workshop on Fake MultiMedia (FakeMM'18), 2018.
[6]. A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies and M. Nießner, "FaceForensics: A large-scale video dataset for forgery detection in human faces", arXiv preprint arXiv:1803.09179, 2018.
[7]. A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet classification with deep convolutional neural networks", in Advances in Neural Information Processing Systems 25, Curran Associates, Inc., pp. 1097-1105, 2012.
[8]. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", CoRR, vol. abs/1409.1556, 2014. Available: http://arxiv.org/abs/1409.1556.
[9]. K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition", CoRR, vol. abs/1512.03385, 2015. Available: http://arxiv.org/abs/1512.03385.
[10]. F. Chollet, "Xception: Deep learning with depthwise separable convolutions", CoRR, vol. abs/1610.02357, 2016. Available: http://arxiv.org/abs/1610.02357.
[11]. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the Inception architecture for computer vision", CoRR, vol. abs/1512.00567, 2015. Available: http://arxiv.org/abs/1512.00567.
[12]. A. Mittal, A. K. Moorthy and A. C. Bovik, "No-reference image quality assessment in the spatial domain", IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, Dec 2012.
[13]. "Information technology - Biometric presentation attack detection - Part 3: Testing and reporting", International Organization for Standardization, Standard, Sep. 2017.
[14]. T. Ojala, M. Pietikäinen and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns", IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971-987, Jul. 2002. Available: http://dx.doi.org/10.1109/TPAMI.2002.1017623.
[15]. P. Zhou, X. Han, V. I. Morariu and L. S. Davis, "Two-stream neural networks for tampered face detection", 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1831-1839, 2017.

