
Face Recognition Attendance System
PROJECT REPORT
OF MAJOR PROJECT

BACHELOR OF COMPUTER
APPLICATIONS

SUBMITTED BY
Aryan Tripathi

Batch Year: 2021-22
Enrolment No.: U1946018

PROJECT SUPERVISOR: Ms. Suneeta Tripathi

Centre of Computer Education & Training
Institute of Professional Studies, University of Allahabad,
Prayagraj, Uttar Pradesh
ACKNOWLEDGEMENT

This project report is based on the 'Face Recognition Attendance System'. I have put considerable effort into this project; however, it would not have been possible without the kind support and help of many individuals and organizations. I would like to extend my sincere thanks to all of them.

I am highly indebted to Ms. Suneeta Tripathi for her guidance and constant supervision, for providing the necessary information regarding the project, and for her support in completing it. I would also like to express my gratitude towards my parents and the members of the Institute of Professional Studies (IPS), University of Allahabad, for their kind cooperation and encouragement, which helped me complete this project.

I would like to express my special gratitude and thanks to my project in-charge and coordinator, Prof. Ashish Khare, for providing the necessary guidance and giving me his attention and time.

My thanks and appreciation also go to my friends who helped in developing the project, and to everyone who willingly assisted with their abilities.

Aryan Tripathi
BCA, 6th Semester
CERTIFICATE

This is to certify that Aryan Tripathi, a student of BCA 3rd Year at the Institute of Professional Studies, University of Allahabad, has completed his project entitled 'Face Recognition Attendance System' under my guidance in the academic year 2021-22.

He has taken proper care and shown the utmost sincerity in completing this project. I certify that this project is up to my expectations and as per the guidelines.

This application package is original and has never been submitted elsewhere for the same purpose.

Ms. Suneeta Tripathi (PROJECT SUPERVISOR)


DECLARATION

I, Aryan Tripathi, solemnly declare that the project report 'Face Recognition Attendance System' is based on my own work, carried out during the course of my study under the supervision of Ms. Suneeta Tripathi. I assert that the statements made and conclusions drawn are an outcome of my research work. I further certify that:

I. The work contained in the report is original and has been done by me under the general supervision of my supervisor.

II. The work has not been submitted to any other institution for any other degree/diploma/certificate in this university or any other university in India or abroad.

III. I have followed the guidelines provided by the university in writing the report.

IV. Whenever I have used materials (data, theoretical analysis, and text) from other sources, I have given due credit to them in the text of the report and provided their details in the references.

Aryan Tripathi
BCA, 6th Semester
Table of Contents

Abstract 4
Introduction 5
  1.1. Introduction 6
  1.2. Background 6
  1.3. Problem Statement 8
  1.4. Aims and Objectives 9
  1.5. Flow Chart 10
  1.6. Scope of the Project 11
Literature Review 12
  2.1. Student Attendance System 13
  2.2. Digital Image Processing 13
  2.3. Image Representation in a Digital Computer 14
  2.4. Steps in Digital Image Processing 14
  2.5. Definition of Terms and History 15
Model Implementation & Analysis 23
  3.1. Introduction 24
  3.2. Model Implementation 25
  3.3. Design Requirements 26
    3.3.1. Software Implementation 26
    3.3.2. Hardware Implementation 27
  3.4. Experimental Results 31
Code Implementation 35
  4.1. Code Implementation 36
    4.1.1. main.py 36
    4.1.2. Dataset.py 39
    4.1.3. AutoMail.py 40
  4.2. Summary 43
Working Plan 44
  5.1. Introduction 45
  5.2. Work Breakdown Structure 45
  5.3. Gantt Chart 47
  5.4. Financial Plan 47
  5.5. Feasibility Study 47
  5.6. Summary 50
Future Work 51
  6.1. Introduction 52
  6.2. Future Scope of Work 52
  6.3. Summary 52
Result 53
  7.1. Introduction 54
  7.2. Summary 54
Abstract
The uniqueness of an individual's face is a representation of their identity. In this project, an individual's face is used to mark attendance automatically. Student attendance is very important for every college, university and school. The conventional methodology for taking attendance is to call the name or roll number of each student and record the attendance; the time consumed by this process is an important point of concern. Assume that the duration of one subject is around 60 minutes, and that recording attendance takes 5 to 10 minutes: for every tutor, this is a waste of time. To avoid these losses, this project uses an automatic process based on image processing. Both face detection and face recognition are employed: face detection is used to locate the position of the face region, and face recognition is used to mark the student's attendance. A database of all the students in the class is stored, and when the face of an individual student matches one of the faces stored in the database, the attendance is recorded.
Chapter 1
Introduction

1.1. Introduction
Attendance is of prime importance for both the teacher and the student of an educational organization, so it is very important to keep a record of attendance. The problem arises when we consider the traditional process of taking attendance in the classroom: calling the name or roll number of each student is not only time-consuming but also requires energy, so an automatic attendance system can solve these problems.

Some automatic attendance-marking systems are currently used by many institutions, such as biometric and RFID systems. Although these are automatic and a step ahead of the traditional method, they fail to meet the time constraint: students have to wait in a queue to give attendance, which takes time.

This project introduces an automatic attendance-marking system that does not interfere in any way with the normal teaching procedure. The system can also be used during exam sessions or other teaching activities where attendance is highly essential. It eliminates classical student identification, such as calling the student's name or checking identification cards, which can not only interfere with the ongoing teaching process but can also be stressful for students during examination sessions. In addition, students have to register in the database to be recognized; enrolment can be done on the spot through the user-friendly interface.

1.2. Background
Face recognition is crucial in daily life in order to identify family, friends or anyone we are familiar with. We might not perceive that several steps are actually taken in order to identify human faces. Human intelligence allows us to receive information and interpret it in the recognition process. We receive information through the image projected into our eyes, specifically onto the retina, in the form of light. Light is a form of electromagnetic wave radiated from a source onto an object and projected to human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008) mentioned that after visual processing by the human visual system, we classify the shape, size, contour and texture of the object in order to analyse the information. The analysed information is then compared to other representations of objects or faces that exist in our memory for recognition. In fact, it is a hard challenge to build an automated system with the same face-recognition capability as a human. Moreover, a large memory is needed to recognize different faces: in a university, for example, there are many students of different races and genders, and it is impossible to remember every individual face without making mistakes. To overcome these human limitations, computers with almost limitless memory, high processing speed and power are used in face recognition systems.

The human face is a unique representation of individual identity. Thus, face recognition is defined as a biometric method in which identification of an individual is performed by comparing a real-time captured image with images of that person stored in the database (Margaret Rouse, 2012).

Nowadays, face recognition systems are prevalent due to their simplicity and excellent performance. For instance, airport protection systems and the FBI use face recognition for criminal investigations, tracking suspects, missing children and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to use face recognition to access their online accounts (Reichert, C., 2017), and Apple allows users to unlock their mobile phone, the iPhone X, by face recognition (deAgonia, M., 2017).

Work on face recognition began in 1960. Woody Bledsoe, Helen Chan Wolf and Charles Bisson introduced a system which required an administrator to locate the eyes, ears, nose and mouth in images. The distances and ratios between the located features and common reference points were then calculated and compared. These studies were further enhanced by Goldstein, Harmon, and Lesk in 1970 by using additional features such as hair colour and lip thickness to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face recognition problem, and many studies on face recognition have been conducted continuously since (Ashley DuVal, 2012).

1.3. Problem Statement

The traditional student attendance-marking technique often faces a lot of trouble. The face recognition student attendance system emphasizes simplicity by eliminating classical marking techniques such as calling student names or checking respective identification cards, which not only disturb the teaching process but also distract students during exam sessions. Apart from calling names, an attendance sheet may be passed around the classroom during lecture sessions, and a lecture class, especially one with a large number of students, might find it difficult to have the attendance sheet passed around the whole class. Thus, a face recognition attendance system is proposed to replace the manual signing of attendance, which is burdensome and causes students to be distracted in order to sign. Furthermore, a face recognition based automated student attendance system is able to overcome the problem of fraudulent approaches, and lecturers do not have to count the number of students several times to ensure their presence.

The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification, one of which is the discrimination between known and unknown images. In addition, Pooja G.R. et al. (2010) found that the training process for a face recognition student attendance system is slow and time-consuming, and Priyanka Wagh et al. (2015) mentioned that varying lighting and head poses are problems that can degrade the performance of a face recognition based student attendance system.

Hence, there is a need to develop a real-time student attendance system, meaning the identification process must be completed within defined time constraints to prevent omission. The features extracted from facial images, which represent the identity of the students, have to be consistent under changes in background, illumination, pose and expression. High accuracy and fast computation time will be the evaluation points of performance.
1.4. Aims and Objectives
The objective of this project is to develop a face recognition attendance system. The expected achievements that fulfil this objective are:
• To detect the face segment from the video frame.
• To extract useful features from the detected face.
• To classify the features in order to recognize the detected face.
• To record the attendance of the identified student.
1.5. Flow Chart
1.6. Scope of the project
We are setting up to design a system comprising of two
modules. The first module (face detector) is a mobile component,
which is basically a camera application that captures student
faces and stores them in a file using computer vision face
detection algor ithms and face extraction techniques. The second
module is a desktop application that does face recognition of the
captured images (faces) in the file, marks the students register
and then stores the results in a database for future analysis.
Chapter 2
Literature Review

2.1. Student Attendance System

Arun Katara et al. (2017) mentioned the disadvantages of RFID (Radio Frequency Identification) card systems, fingerprint systems and iris recognition systems. An RFID card system is often implemented because of its simplicity; however, users tend to help their friends check in as long as they have their friend's ID card. A fingerprint system is effective but not efficient, because the verification process takes time, so users have to queue and verify one by one. For face recognition, by contrast, the human face is always exposed and contains less information than the iris; an iris recognition system, which captures more detail, might invade the privacy of the user. Voice recognition is available, but it is less accurate than the other methods. Hence, a face recognition system is suggested for implementation in the student attendance system.

System Type                Advantage    Disadvantage
RFID card system           Simple       Fraudulent usage
Fingerprint system         Accurate     Time-consuming
Voice recognition system                Less accurate compared to others
Iris recognition system    Accurate     Privacy invasion

Table 2.1: Advantages & Disadvantages of Different Biometric Systems[1]


2.2. Digital Image Processing
Digital image processing is the processing of images which are digital in nature by a digital computer[2]. Digital image processing techniques are motivated by three major applications:
• Improvement of pictorial information for human perception
• Image processing for autonomous machine applications
• Efficient storage and transmission
2.3. Image Representation in a Digital Computer
An image is a 2-dimensional light intensity function

    f(x, y) = r(x, y) × i(x, y)    (2.0)

where r(x, y) is the reflectivity of the surface at the corresponding image point and i(x, y) represents the intensity of the incident light. A digital image f(x, y) is discretized both in spatial coordinates (by grids) and in brightness (by quantization)[3]. Effectively, the image can be represented as a matrix whose row and column indices specify a point in the image and whose element value gives the gray level at that point. These elements are referred to as pixels or pels. Typical image sizes in image processing applications are 256 × 256 elements, 640 × 480 pels or 1024 × 1024 pixels. Quantization of these matrix pixels is done at 8 bits for black-and-white images and 24 bits for coloured images (because of the three colour planes Red, Green and Blue, each at 8 bits)[4].
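The matrix representation described above can be sketched with NumPy (an illustrative example, not part of the original report):

```python
import numpy as np

# A tiny 4x4 8-bit grayscale "image": each element is a pixel's gray
# level, quantized to the range 0-255 (8 bits).
gray = np.array([[  0,  64, 128, 255],
                 [ 32,  96, 160, 224],
                 [ 16,  80, 144, 208],
                 [  8,  72, 136, 200]], dtype=np.uint8)

# Row and column indices address a point; the value is its gray level.
print(gray[1, 2])    # pixel at row 1, column 2 -> 160
print(gray.shape)    # (rows, columns) -> (4, 4)

# A colour image adds a third axis for the R, G, B planes (3 x 8 = 24 bits
# per pixel), tripling the storage of the grayscale matrix.
colour = np.zeros((4, 4, 3), dtype=np.uint8)
print(colour.nbytes)  # 4 * 4 * 3 = 48 bytes
```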
2.4. Steps in Digital Image Processing

Digital image processing involves the following basic tasks:

• Image Acquisition: an imaging sensor and the capability to digitize the signal produced by the sensor.
• Preprocessing: enhances image quality; filtering, contrast enhancement, etc.
• Segmentation: partitions an input image into its constituent parts or objects.
• Description/Feature Selection: extracts a description of the image objects suitable for further computer processing.
• Recognition and Interpretation: recognition assigns a label to an object based on the information provided by its descriptor; interpretation assigns meaning to a set of labelled objects.
• Knowledge Base: helps with efficient processing as well as inter-module cooperation.

2.5. Definition of Terms and History

Face Detection
Face detection is the process of identifying and locating all the faces present in a single image or video, regardless of their position, scale, orientation, age and expression. Furthermore, the detection should be independent of extraneous illumination conditions and of the image or video content[5].

Face Recognition
Face recognition is a visual pattern recognition problem in which the face, represented as a three-dimensional object subject to varying illumination, pose and other factors, needs to be identified based on acquired images[6]. Face recognition is therefore simply the task of identifying an already detected face as a known or unknown face, and in more advanced cases, telling exactly whose face it is[7].

Difference between Face Detection and Face Recognition
Face detection answers the question "Where is the face?": it identifies an object as a face and locates it in the input image. Face recognition, on the other hand, answers the question "Who is this?" or "Whose face is it?": it decides whether the detected face is someone known or unknown, based on the database of faces it uses to validate the input image[8]. Face detection's output (the detected face) is thus the input to the face recognizer, and the face recognizer's output is the final decision: face known or face unknown.
Face Detection
A face detector has to tell whether an image of arbitrary size contains a human face, and if so, where it is. Face detection can be performed based on several cues: skin colour (for faces in colour images and videos), motion (for faces in videos), facial/head shape, facial appearance, or a combination of these parameters. Most face detection algorithms are appearance-based and do not use other cues. An input image is scanned at all possible locations and scales by a sub-window, and face detection is posed as classifying the pattern in the sub-window as either face or non-face. The face/non-face classifier is learned from face and non-face training examples using statistical learning methods[9]. Most modern algorithms are based on the Viola-Jones object detection framework, which is built on Haar cascades.
Table: Advantages & Disadvantages of Face Detection Methods[10]

Viola-Jones Algorithm
  Advantages: 1. High detection speed. 2. High accuracy.
  Disadvantages: 1. Long training time. 2. Limited head pose. 3. Not able to detect dark faces.

Local Binary Pattern Histogram
  Advantages: 1. Simple computation. 2. High tolerance against monotonic illumination changes.
  Disadvantages: 1. Only used for binary and grey images. 2. Overall performance is inaccurate compared to the Viola-Jones algorithm.

AdaBoost Algorithm
  Advantages: Does not need any prior knowledge about face structure.
  Disadvantages: The result depends highly on the training data and is affected by weak classifiers.

SMQT Features and SNoW Classifier Method
  Advantages: 1. Capable of dealing with lighting problems in object detection. 2. Efficient in computation.
  Disadvantages: Regions containing grey values very similar to a face may be misidentified as faces.

Neural Network
  Advantages: High accuracy, but only if a large set of images is trained.
  Disadvantages: 1. Detection process is slow and computation is complex. 2. Overall performance is weaker than the Viola-Jones algorithm.

Viola-Jones Algorithm
The Viola-Jones algorithm, introduced by P. Viola and M. J. Jones (2001), is the most popular algorithm for localizing the face segment in static images or video frames. The concept of the Viola-Jones algorithm consists of four parts: the first is the Haar feature; the second is the creation of the integral image; the third is the implementation of AdaBoost; and the last is the cascading process.

Fig: Haar Feature

The Viola-Jones algorithm analyses a given image using Haar features consisting of multiple rectangles (Mekha Joseph et al., 2016); the figure shows several types of Haar features. The features act as window functions mapped onto the image. A single-value result representing each feature can be computed by subtracting the sum of the white rectangle(s) from the sum of the black rectangle(s) (Mekha Joseph et al., 2016).

The value of the integral image at a specific location is the sum of the pixels to the left of and above that location. To illustrate, the value of the integral image at location 1 is the sum of the pixels in rectangle A, and the values at the remaining locations are cumulative: the value at location 2 is the sum of A and B (A + B), at location 3 it is A + C, and at location 4 it is the sum of all the regions (A + B + C + D)[11]. Therefore, the sum within region D alone can be computed with only additions and subtractions of the diagonal corner values, 4 + 1 - (2 + 3), eliminating rectangles A, B and C.
Local Binary Patterns Histogram
Local Binary Pattern (LBP) is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighbourhood of each pixel and treating the result as a binary number. It was first described in 1994 and has since been found to be a powerful feature for texture classification. It has further been determined that when LBP is combined with the histograms of oriented gradients (HOG) descriptor, detection performance improves considerably on some datasets. Using LBP combined with histograms, we can represent face images with a simple data vector.

The LBPH algorithm works in 5 steps:
1. Parameters: the LBPH uses 4 parameters:
• Radius: the radius used to build the circular local binary pattern, representing the radius around the central pixel. It is usually set to 1.
• Neighbors: the number of sample points used to build the circular local binary pattern. Keep in mind: the more sample points you include, the higher the computational cost. It is usually set to 8.
• Grid X: the number of cells in the horizontal direction. The more cells, the finer the grid and the higher the dimensionality of the resulting feature vector. It is usually set to 8.
• Grid Y: the number of cells in the vertical direction. The more cells, the finer the grid and the higher the dimensionality of the resulting feature vector. It is usually set to 8.
2. Training the Algorithm: first, we need to train the algorithm. To do so, we need a dataset with facial images of the people we want to recognize. We also need to set an ID (a number or the person's name) for each image, so the algorithm can use this information to recognize an input image and give an output. Images of the same person must have the same ID. With the training set constructed, let's look at the LBPH computational steps.

3. Applying the LBP operation: the first computational step of the LBPH is to create an intermediate image that describes the original image in a better way by highlighting the facial characteristics. To do so, the algorithm uses the concept of a sliding window, based on the parameters radius and neighbors. The image below shows this procedure:

Fig: LBP Operation

Based on the image above, let's break the procedure into several small steps so we can understand it easily:
• Suppose we have a facial image in grayscale.
• We take part of this image as a window of 3x3 pixels.
• The window can also be represented as a 3x3 matrix containing the intensity of each pixel (0~255).
• We then take the central value of the matrix to be used as the threshold.
• This value is used to define the new values of the 8 neighbours.
• For each neighbour of the central value (the threshold), we set a new binary value: 1 for values equal to or higher than the threshold, and 0 for values lower than the threshold.
• The matrix now contains only binary values (ignoring the central value). We concatenate each binary value from each position in the matrix, line by line, into a new binary value (e.g. 10001101). Note: some authors use other approaches to concatenate the binary values (e.g. clockwise direction), but the final result is the same.
• We then convert this binary value to a decimal value and set it as the central value of the matrix, which is actually a pixel of the original image.
• At the end of this procedure (the LBP procedure), we have a new image which better represents the characteristics of the original image.

For a circular neighbourhood, this can be done using bilinear interpolation: if a sample point falls between pixels, the values of the 4 nearest pixels (2x2) are used to estimate the value of the new data point.
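The threshold-and-concatenate procedure above can be sketched for a single 3x3 window (an illustrative helper, not the report's code; here the bits are read clockwise starting from the top-left neighbour):

```python
def lbp_code(window):
    """Apply the LBP operation to a 3x3 list of gray values: threshold the
    8 neighbours against the centre, read the resulting bits clockwise
    from the top-left, and return the decimal value."""
    centre = window[1][1]
    # The 8 neighbours in clockwise order around the central pixel.
    neighbours = [window[0][0], window[0][1], window[0][2],
                  window[1][2], window[2][2], window[2][1],
                  window[2][0], window[1][0]]
    bits = ['1' if v >= centre else '0' for v in neighbours]
    return int(''.join(bits), 2)

# Example: the centre is 90; neighbours >= 90 become 1, the rest 0,
# giving the binary string 01001101 = 77.
window = [[ 12, 200,  53],
          [155,  90,  60],
          [ 34,  99, 180]]
print(lbp_code(window))  # -> 77
```

Sliding this operation over every pixel of the grayscale face produces the intermediate LBP image used in the next step.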
4. Extracting the Histograms: now, using the image generated in the last step, we use the Grid X and Grid Y parameters to divide the image into multiple grids, as can be seen in the following image:

Fig: Extracting the Histogram

Based on the image above, we can extract the histogram of each region as follows:
• As the image is in grayscale, each histogram (one per grid cell) contains only 256 positions (0~255), representing the occurrences of each pixel intensity.
• We then concatenate the histograms to create a new, bigger histogram. Supposing an 8x8 grid, we have 8 x 8 x 256 = 16,384 positions in the final histogram. The final histogram represents the characteristics of the original image.
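The grid-and-concatenate step can be sketched as follows (illustrative NumPy code, assuming an 8x8 grid over a 64x64 LBP image):

```python
import numpy as np

def grid_histograms(lbp_image, grid_x=8, grid_y=8):
    """Divide the LBP image into grid_x by grid_y cells and concatenate
    the 256-bin histogram of each cell into one feature vector."""
    h, w = lbp_image.shape
    cell_h, cell_w = h // grid_y, w // grid_x
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp_image[gy * cell_h:(gy + 1) * cell_h,
                             gx * cell_w:(gx + 1) * cell_w]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)

# An 8x8 grid of 256-bin histograms gives 8 * 8 * 256 = 16,384 positions.
lbp = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
feature = grid_histograms(lbp)
print(feature.shape)  # (16384,)
```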
5. Performing the face recognition: at this step, the algorithm is already trained; each histogram created is used to represent an image from the training dataset. So, given an input image, we perform the steps again for this new image and create a histogram which represents it.
• To find the image that matches the input image, we just need to compare histograms and return the image with the closest histogram.
• Various approaches can be used to compare the histograms (i.e. to calculate the distance between two histograms), for example Euclidean distance, chi-square or absolute value. In this example, we use the well-known Euclidean distance, D = sqrt(sum over i of (hist1_i - hist2_i)^2).
• The algorithm output is the ID of the image with the closest histogram. The algorithm should also return the calculated distance, which can be used as a 'confidence' measurement. Note: don't be fooled by the name 'confidence': lower confidences are better, because they mean the distance between the two histograms is smaller.
• We can then use a threshold together with the 'confidence' to automatically estimate whether the algorithm has correctly recognized the image: we can assume recognition is successful if the confidence is lower than the defined threshold.
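The matching step can be sketched with a hypothetical training dictionary mapping student IDs to histograms; the threshold value here is an arbitrary illustration:

```python
import numpy as np

def euclidean(h1, h2):
    """Euclidean distance between two histograms; lower is a closer match."""
    return float(np.sqrt(((h1 - h2) ** 2).sum()))

def predict(input_hist, training, threshold=100.0):
    """Return (best_id, confidence, recognized): the ID of the closest
    training histogram, its distance, and whether the distance falls
    below the (illustrative) recognition threshold."""
    best_id, best_dist = None, float('inf')
    for person_id, hist in training.items():
        d = euclidean(input_hist, hist)
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id, best_dist, best_dist < threshold

# Toy 3-bin "histograms" standing in for the 16,384-bin feature vectors.
training = {"alice": np.array([10., 0., 5.]),
            "bob":   np.array([0., 10., 5.])}
query = np.array([9., 1., 5.])
print(predict(query, training))  # best match 'alice', distance ~1.41
```

Note how the returned distance acts as the 'confidence': the query is accepted as alice only because its distance is under the threshold.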
Chapter 3
Model Implementation & Analysis

3.1. Introduction
Face detection involves separating image windows into two classes: one containing faces, the other containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in age, skin colour and facial expression. The problem is further complicated by differing lighting conditions, image qualities and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, against any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value, yes or no, indicating whether any faces are present in the image. The second step is the face localization task, which takes an image as input and outputs the location of any face or faces within that image as a bounding box (x, y, width, height). After taking the picture, the system compares it with the pictures in its database and gives the most related result. We will use the Raspbian operating system and the OpenCV platform, and will do the coding in the Python language.

3.2. Model Implementation

The main component used in the implementation approach is the open source computer vision library (OpenCV). One of OpenCV's goals is to provide a simple-to-use computer vision infrastructure that helps people build fairly sophisticated vision applications quickly; the library contains over 500 functions spanning many areas of vision. OpenCV is the primary technology behind this face recognition system. The user stands in front of the camera, keeping a minimum distance of 50 cm, and his image is taken as input. The frontal face is extracted from the image, converted to grayscale and stored. The Principal Component Analysis (PCA) algorithm is performed on the images and the eigenvalues are stored in an XML file. When a user requests recognition, the frontal face is extracted from the video frame captured through the camera, the eigenvalue is re-calculated for the test face, and it is matched against the stored data for the closest neighbour.
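The PCA flow described above can be sketched in NumPy as a nearest-neighbour search in eigenface space. This is a simplified illustration with synthetic data and made-up names, not the report's actual OpenCV implementation:

```python
import numpy as np

def train_pca(faces, num_components=2):
    """faces: (n, h*w) matrix of flattened gray images. Returns the mean
    face, the top principal components, and each training face's
    projection into the reduced eigenface space."""
    X = faces.astype(float)
    mean = X.mean(axis=0)
    centred = X - mean
    # Principal components via SVD of the centred data matrix.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:num_components]
    projections = centred @ components.T
    return mean, components, projections

def recognise(face, mean, components, projections, labels):
    """Project the test face and return the label of the closest
    training projection (nearest neighbour)."""
    proj = (face.astype(float) - mean) @ components.T
    dists = np.linalg.norm(projections - proj, axis=1)
    return labels[int(dists.argmin())]

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(4, 100))   # four flattened 10x10 faces
labels = ["alice", "alice", "bob", "bob"]
mean, comps, projs = train_pca(faces)
print(recognise(faces[2], mean, comps, projs, labels))
```

In the real system the projections would be persisted (the report stores them in an XML file) and the query face would come from the camera frame rather than synthetic data.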
3.3. Design Requirements
We used several tools to build the HFR system; without their help it would not have been possible. Here we discuss the most important ones.
3.3.1. Software Implementation

1) OpenCV: We used OpenCV 3 dependency for python 3. OpenCV is


library where there are lots of image processing functions are
available. This is very useful library for image processing. Even one
can get expected outcome without writing a single code. The library is
cross -platform and free for use under the open-source BSD license.
Example of some supported functions are given bellow:

• Derivation: gradient / Laplacian computing, contour delimitation
• Hough transforms: lines, segments, circles, and geometrical shape detection
• Histograms: computing, equalization, and object localization with the
back-projection algorithm
• Segmentation: thresholding, distance transform, foreground
/ background detection, watershed segmentation
• Filtering: linear and nonlinear filters, morphological operations
• Cascade detectors: detection of faces, eyes, car plates
• Interest points: detection and matching
• Video processing: optical flow, background subtraction, CamShift
(object tracking)
• Photography: panorama realization, high-dynamic-range imaging (HDR),
image inpainting
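As one concrete example from the list above, histogram equalization spreads a low-contrast image over the full intensity range. The sketch below re-implements the idea in plain NumPy to show what cv2.equalizeHist() computes; it is an illustration of the technique, not the library routine itself.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image,
    mirroring what cv2.equalizeHist() does."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Map the cumulative distribution of the pixel values onto 0..255
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast image: values clustered in 40..60
rng = np.random.default_rng(1)
dark = rng.integers(40, 61, size=(50, 50), dtype=np.uint8)
eq = equalize_hist(dark)
print(eq.min(), eq.max())  # prints "0 255": the range is stretched to full scale
```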

So it was very important to install OpenCV, but installing OpenCV 3 is a
complex process. How we did it is given below:

We copied this script, placed it in a directory on our Raspberry Pi, and
saved it. Then, through the terminal, we made the script executable and ran it:
1. sudo chmod 755 /myfile/pi/installopencv.bash
2. sudo /myfile/pi/installopencv.bash
These are the command lines we used.
2) Python IDE: There are many IDEs for Python; some of them are
PyCharm, Thonny, Ninja, and Spyder. Ninja and Spyder are both excellent
and free, but we used Spyder as it is more feature-rich than Ninja. Spyder
is a little heavier than Ninja but still much lighter than PyCharm. You
can run them on the Pi and get the GUI on your PC through ssh -Y. We
installed Spyder through the command line below:
1. sudo apt-get install spyder

3.3.2. Hardware Implementation
1. Raspberry Pi 3:
1.4GHz 64-bit quad-core processor, dual-band wireless LAN, Bluetooth
4.2/BLE, faster Ethernet, and Power-over-Ethernet support (with a separate
PoE HAT).
Specification: The Raspberry Pi 3 Model B+ is the final revision in the
Raspberry Pi 3 range.
• Broadcom BCM2837B0, Cortex-A53 (ARMv8) 64-bit SoC @ 1.4GHz
• 1GB LPDDR2 SDRAM
• 2.4GHz and 5GHz IEEE 802.11.b/g/n/ac wireless LAN, Bluetooth 4.2, BLE
• Gigabit Ethernet over USB 2.0 (maximum throughput 300 Mbps)
• Extended 40-pin GPIO header
• Full-size HDMI
• 4 USB 2.0 ports
• CSI camera port for connecting a Raspberry Pi camera
• DSI display port for connecting a Raspberry Pi touchscreen display
• 4-pole stereo output and composite video port
• Micro SD port for loading your operating system and storing data
• 5V/2.5A DC power input
• Power-over-Ethernet (PoE) support (requires separate PoE HAT)
2. Webcam:
The ELP HD 8-megapixel USB CMOS board camera module adopts the Sony
(1/3.2") IMX179 sensor and works well with Linux equipment, or with
equipment running Windows, Linux, Android, etc.

Specification:
• 1/3.2-inch Sony IMX179 USB webcam
• 8-megapixel high-resolution JPEG USB camera
• UVC USB camera; supports Windows, Linux, and Mac with UVC, as well as
Android systems
• Compatible with Raspberry Pi, Ubuntu, OpenCV, AMCap, and many other USB
web-camera software and hardware
• USB webcam with 2.8mm lens
• 38×38 / 32×32 mm mini micro USB board camera
• USB webcam, widely used in many machines: ATMs, medical machines,
automatic vending machines, industrial machines
• Camera-module parameters are changeable (brightness, contrast,
saturation, white balance, gamma, definition, exposure…)
3. Power Source: We used a Mi 10000 mAh power bank as our power source.

4. Project Machine: Here is our prototype device.
3.4. Experimental Results
The steps of the experiment process are given below:
Face Detection:
Start capturing images through the web camera of the client side:
Begin:
• Pre-process the captured image and extract the face image.
• Calculate the eigenvalue of the captured face image and compare it with
the eigenvalues of the existing faces in the database.
• If the eigenvalue does not match any existing one, save the new face
image information to the face database (XML file).
• If the eigenvalue matches an existing one, the recognition step is done.
End
Face Recognition:
Using the PCA algorithm, the following steps are followed for face
recognition:
Begin:
• Find the information of the matched face image in the database.
• Update the log table with the corresponding face image and system time,
which completes the attendance for an individual student.
End
This section presents the results of the experiment conducted to capture
the face into a grayscale image of 50×50 pixels.
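The detection-side decision above — enrol a new face when nothing in the database matches, otherwise proceed to recognition — can be sketched as follows. The distance threshold and the in-memory dict standing in for the XML face database are assumptions made for illustration only.

```python
import numpy as np

THRESHOLD = 10.0  # assumed distance cutoff; the real system tunes this empirically

def process_face(feature, database):
    """database: dict mapping face_id -> feature vector.
    Enrol the face if it matches nothing, otherwise report the match."""
    if database:
        ids = list(database)
        dists = [np.linalg.norm(database[i] - feature) for i in ids]
        best = int(np.argmin(dists))
        if dists[best] < THRESHOLD:
            return "recognized", ids[best]
    new_id = len(database) + 1        # simplistic id assignment for the sketch
    database[new_id] = feature
    return "enrolled", new_id

db = {}
f1 = np.ones(25)
print(process_face(f1, db))            # ('enrolled', 1)   first sighting
print(process_face(f1 + 0.1, db))      # ('recognized', 1) close to a stored face
print(process_face(f1 + 100.0, db))    # ('enrolled', 2)   too far from anything
```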

Test data | Expected Result | Observed Result | Pass/Fail
OpenCAM_CB() | Connects with the installed camera and starts playing. | Camera started. | Pass
LoadHaarClassifier() | Loads the HaarClassifier cascade files for the frontal face. | Gets ready for extraction. | Pass
ExtractFace() | Initiates the Paul–Viola face-extraction framework. | Face extracted. | Pass
Learn() | Starts the PCA algorithm. | Updates facedata.xml. | Pass
Recognize() | Compares the input face with the stored faces. | Nearest face found. | Pass
Here is our data set:

Orientation | Face Detection Rate | Recognition Rate
0º (frontal face) | 98.7% | 95%
18º | 80.0% | 78%
54º | 59.2% | 58%
72º | 0.00% | 0.00%
90º (profile face) | 0.00% | 0.00%

We performed a set of experiments to demonstrate the efficiency of the
proposed method. 30 different images of 10 persons were used in the
training set. Figure 3 shows a sample binary image detected by the
ExtractFace() function using the Paul–Viola face-extraction framework.
Chapter 4
Code Implementation

4.1. Code Implementation


All our code is written in Python. First, here is our project directory
structure and its files:
HFRAS
 | Dataset
 | main.py
 | dataset.py
 | database.log
 | data_set.csv
 | data_log.ods
These are the files in the project directory:
1. Dataset: where all the face images are saved.
2. main.py: the main program file used to run the program.
3. dataset.py: captures images and works on the dataset.
4. database.log: keeps track of database events.
5. data_set.csv: saves the details of the data.
6. data_log.ods: where attendance is saved.
4.1.1. main.py
All the main work is done here: detect the faces, recognize them, and take
attendance.

import cv2
import numpy as np
import os
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import sys
import logging as log
import datetime as dt
from time import sleep

cx = 160
cy = 120

# names related to ids: example
names = ['None', 'Mouly', 'Mubin', 'Rahatul', 'Rumana', 'Hridoy', 'Rifat',
         'Raihan', 'Shajia', 'Camelia', 'Fatima', 'Farhan', '12']

# initiate id counter
id = 0

xdeg = 150
ydeg = 150
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
recognizer = cv2.face.LBPHFaceRecognizer_create()

# creating a log file
log.basicConfig(filename='database.log', level=log.INFO)
file = open("/home/pi/Testnew/data_log.csv", "a")

images = []
labels = []
# load the face images saved in the dataset folder
for filename in os.listdir('Dataset'):
    im = cv2.imread('Dataset/' + filename, 0)
    images.append(im)
    labels.append(int(filename.split('.')[0][0]))

recognizer.train(images, np.array(labels))
print('Training Done . . .')

font = cv2.FONT_HERSHEY_SIMPLEX
cap = cv2.VideoCapture(0)
lastRes = ''
count = 0
print(' Done 2 . . . ')

# write headers to the log file
log.info("Date Time , Student Name \n")
file.write(" - - - \n")
file.write(" Date:" + str(dt.datetime.now().strftime("%d-%m-%Y")) + " \n")
file.write(" - - - \n")
file.write("Time , Student Name \n")

# detect faces (possibly several) after converting the frame to grayscale
while (1):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray)
    count += 1
    # rectangle box
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

        id, confidence = recognizer.predict(gray[y:y + h, x:x + w])

        # lower confidence means a better match; "0" is a perfect match
        if (confidence < 40):
            id = names[id]
            confidence = " {0}%".format(round(100 - confidence))
            log.info(str(dt.datetime.now()) + "," + str(id) + " \n")
            file.write(str(dt.datetime.now().strftime("%H:%M:%S")) +
                       "," + str(id) + "\n")
        else:
            id = "unknown"
            confidence = " {0}%".format(round(100 - confidence))
        # after recognition, the name text is shown in white
        cv2.putText(frame, str(id), (x + 5, y - 5), font, 1, (255, 255, 255), 2)
        cv2.putText(frame, str(confidence), (x + 5, y + h - 5), font, 1, (255, 255, 0), 1)
        # cv2.putText(frame, str(lastRes), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)

    cv2.imshow('frame', frame)
    k = 0xFF & cv2.waitKey(10)
    if k == 27:   # exit on Esc
        break
cap.release()
cv2.destroyAllWindows()

4.1.2. dataset.py
The dataset implementation code, also written in Python, is given below.

import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import os
import numpy
import io

# Create a memory stream so photos don't need to be saved in a file
stream = io.BytesIO()
cam = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)

Id = input('enter your id')
sampleNum = 0
while (True):
    ret, img = cam.read()                            # camera output
    cv2.imshow('frame', img)                         # screen output
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # convert to grayscale
    faces = detector.detectMultiScale(gray, 1.3, 5)  # detect faces
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)  # framing
        # saving the captured face in the dataset folder under the given id
        cv2.imwrite("Dataset/" + Id + '_' + str(sampleNum) + ".jpg",
                    gray[y:y + h, x:x + w])
        # incrementing sample number
        sampleNum = sampleNum + 1

    cv2.imshow('frame', img)
    # wait for 100 milliseconds
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # break if the sample number is more than 30
    elif sampleNum > 30:
        break
cam.release()
cv2.destroyAllWindows()
Extra Feature:

4.1.3. AutoMail.py
In this project we added an extra feature called auto mail: it automatically
sends the attendance file to a specific mail address. The auto-mail code is
given below:

import yagmail

receiver = "mygmail@gmail.com"          # receiver email address
body = "Attendance File"                # email body
filename = "Attendance\data_log.csv"    # file to attach

# mail information
yag = yagmail.SMTP("mygmail@gmail.com", "mypassword")

# send the mail
yag.send(
    to=receiver,
    subject="Attendance Report",   # email subject
    contents=body,                 # email body
    attachments=filename,          # file attached
)
Sample Images:

4.2. Summary
In this long yet useful chapter we managed to cover the
entire structure of how the system has been developed and how it
functions to give the best outcome.

Chapter 5
Working Plan
5.1. Introduction
In this chapter, we observe the entire work structure, i.e. how scheduling
was maintained throughout the development phase. We shall also see the
financial foundation of this project; furthermore, the feasibility study is
also discussed.
5.2. Work Breakdown Structure
In order to develop this system, we gave enormous importance to
scheduling, because we believed that to provide the best quality in a given
period of time we must pay due attention to scheduling, which also helped
us achieve better results. The table below shows the weekly work we
accomplished.
Week No. | Proposed Work
Week-1 | Project proposal report and presentation
Week-1 | Study related works
Week-1 | Study Python
Week-2 | Study related works using OpenCV
Week-2 | Study related works using Bluetooth
Week-3 | Study related works using processing
Week-3 | Study image processing
Week-4 | Sketch basic structure
Week-4 | Prototype design
Week-4 | Finalize prototype design
Week-4 | Flexible box
Week-5 | Runnable with basic commands (input, output, turn on, turn off)
Week-5 | Design lookahead table
Week-6 | Create environment for image processing
Week-7 | Integrate all together
Week-7 | Start coding
Week-8 | Coding for basic instructions (compare, result, accuracy measure etc.)
Week-8 | Coding for single face detection
Week-9 | Single face detection and comparison with database
Week-9 | Multiple face detection and comparison
Week-10 | Detect multiple faces, store, and compare with database
Week-10 | Attendance collection
Week-10 | Generate file based on collected data
Week-10 | Daily file generation of attendance
5.3. Gantt Chart

5.4. Financial Plan
Money was required to build the system, as we had to buy a lot of
components. The breakdown is given below:

Item Name | Price
Raspberry Pi | 4000
HD Camera | 1000
SD Card | 500
Total | 5500
5.5. Feasibility Study
Depending on the results of the initial investigation, the survey is now
expanded into a more detailed feasibility study. A feasibility study is a
test of a system proposal according to its workability, its impact on the
organization, its ability to meet needs, and its effective use of
resources. It focuses on these major questions:
1. What are the user's demonstrable needs and how does a candidate system
meet them?
2. What resources are available for the given candidate system?
3. What are the likely impacts of the candidate system on the organization?
4. Is it worthwhile to solve the problem?
During the feasibility analysis for our project, the following primary
areas of interest were considered. Investigating and generating ideas about
a new system involves the following steps:

Steps in feasibility analysis

1. Form a project team and appoint a project leader.
2. Enumerate potential proposed systems.
3. Define and identify the characteristics of the proposed systems.
4. Determine and evaluate the performance and cost-effectiveness of each
proposed system.
5. Weigh the system performance and cost data.
6. Select the best proposed system.
7. Prepare and report the final project directive to management.
Technical feasibility
A study of the available resources that may affect the ability to achieve
an acceptable system. This evaluation determines whether the technology
needed for the proposed system is available or not.
• Can the work for the project be done with the current equipment, existing
software technology, and available personnel?
• Can the system be upgraded if developed?
• If new technology is needed, what is the likelihood that it can be developed?
This is concerned with specifying the equipment and software that will
successfully satisfy the user requirements.

Economic feasibility
Economic justification is generally the "bottom line" consideration for
most systems. It covers a broad range of concerns, including cost-benefit
analysis: we weigh the costs and the benefits associated with the candidate
system, and if it suits the basic purpose of the organization, i.e. profit
making, the project moves on to the analysis and design phase.
The financial and economic questions raised during the preliminary
investigation are verified to estimate the following:
• The cost of conducting a full system investigation.
• The cost of hardware and software for the class of application being considered.
• The benefits in the form of reduced cost.
• The proposed system will give detailed information; as a result,
performance is improved, which in turn may be expected to provide
increased profits.
• This feasibility checks whether the system can be developed with the
available funds.

Operational feasibility
It is mainly related to human organization and political aspects. The
points to be considered are:
• What changes will be brought in with the system?
• What organizational structures are disturbed?
• What new skills will be required?
• Do the existing staff members have these skills? If not, can they be
trained in due course of time?
The system is operationally feasible as it is very easy for the users to
operate.
Schedule feasibility
Time evaluation is the most important consideration in the development of
a project. The time schedule required for the development of this project
is very important, since longer development time affects machine time and
cost, and causes delays in the development of other systems.
5.6. Summary
To conclude, we discussed the scheduling processes used in developing this
system. Additionally, we have identified how feasible the system is through
the lens of various feasibility studies.
Chapter 6

Future Work

6.1. Introduction
This chapter discusses the future scope of this system. To increase the
scope of this device we can add some new features. As technology becomes
more advanced, it will some day be necessary to change the structure with
better replacements, and sometimes based on customer requirements.
6.2. Future Scope of Work
There are many future possibilities for this project. Some of them are:
• Security can be improved.
• A neural network can be used for higher accuracy.
• It can be used in a big factory or for employee attendance.
• It can be built into a fully web-based system.
6.3. Summary
This chapter has described the possible future applications of the design.
There are many possibilities for the designed device; it may need some
research for different applications, though the principle of the designed
system will remain the same.
Chapter 7

Result

7.1. Introduction
This chapter contains the results we achieved throughout the course of
using this system. From initiation through the conclusion of development,
the following results have been achieved:
• The system can be administered by a non-IT technician.
• The system is market-ready for commercial use.
• The system has the capacity to hold up to a thousand faces for recognition.
• The system can serve as many people as needed within an organization.
7.2. Summary
This chapter has covered the different types of results that we have
managed to obtain throughout the course of using this system.
