Final Report
PROJECT REPORT
OF MAJOR PROJECT
BACHELOR OF COMPUTER APPLICATIONS

SUBMITTED BY
Aryan Tripathi
Batch Year: 2021-22
Enrolment No.: U1946018
Aryan Tripathi
BCA, 6th Semester
CERTIFICATE

II.  The work has not been submitted to any other institution for any other degree/diploma/certificate, in this university or any other university in India or abroad.
III. We have followed the guidelines provided by the university in writing the report.
IV.  Whenever we have used materials (data, theoretical analysis, and text) from other sources, we have given due credit to them in the text of the report and given their details in the references.
Aryan Tripathi
BCA, 6th Semester
Table of Contents

Abstract
Introduction
   1.1. Introduction
   1.2. Background
   1.3. Problem Statement
   1.4. Aims and Objectives
   1.5. Flow Chart
   1.6. Scope of the Project
Literature Review
   2.1. Student Attendance System
   2.2. Digital Image Processing
   2.3. Image Representation in a Digital Computer
   2.4. Steps in Digital Image Processing
   2.5. Definition of Terms and History
Model Implementation & Analysis
   3.1. Introduction
   3.2. Model Implementation
   3.3. Design Requirements
        3.3.1. Software Implementation
        3.3.2. Hardware Implementation
   3.4. Experimental Results
Code Implementation
   4.1. Code Implementation
        4.1.1. main.py
        4.1.2. Dataset.py
        4.1.3. AutoMail.py
   4.2. Summary
Working Plan
   5.1. Introduction
   5.2. Work Breakdown Structure
   5.3. Gantt Chart
   5.4. Financial Plan
   5.5. Feasibility Study
   5.6. Summary
Future Work
   6.1. Introduction
   6.2. Future Scope of Work
   6.3. Summary
Result
   7.1. Introduction
   7.2. Summary
Abstract

The uniqueness or individuality of a face is the representation of one's identity. In this project, the face of an individual is used for the purpose of marking attendance automatically. Student attendance is very important for every college, university and school. The conventional method of taking attendance is to call the name or roll number of each student and record the attendance; the time consumed in this process is an important point of concern. Assume that the duration of one subject is around 60 minutes, or 1 hour, and that recording attendance takes 5 to 10 minutes: for every tutor this is a loss of time. To avoid these losses, this project uses an automatic process based on image processing.

The project uses both face detection and face recognition. Face detection is used to locate the position of the face region, and face recognition is used for marking the student's attendance. A database of all the students in the class is stored, and when the face of an individual student matches one of the faces stored in the database, the attendance is recorded.
                       Chapter 1
                      Introduction
1.1. Introduction
Attendance is of prime importance for both the teacher and the student of an educational organization, so it is very important to keep a record of attendance. The problem arises when we consider the traditional process of taking attendance in the classroom: calling the name or roll number of each student not only consumes time but also requires energy, so an automatic attendance system can solve all of the above problems.

There are some automatic attendance systems currently used by many institutions, such as biometric techniques and RFID systems. Although these are automatic and a step ahead of the traditional method, they fail to meet the time constraint: the student has to wait in a queue to give attendance, which is time-consuming.

This project introduces an automatic attendance marking system that does not interfere with the normal teaching procedure. The system can also be used during exam sessions or in other teaching activities where attendance is highly essential. It eliminates classical student identification methods, such as calling the student's name or checking identification cards, which can not only interfere with the ongoing teaching process but can also be stressful for students during examination sessions. In addition, students have to register in the database to be recognized; enrolment can be done on the spot through a user-friendly interface.
1.2. Background
Face recognition is crucial in daily li fe in order to identify family, friends or
someone we are familiar with. We might not perceive that several steps
have actually taken in order to identify human faces. Human intelligence
allows us to receive information and interpret the information in the
recognition process. We receive information through the image projected
into our eyes, by specifically retina in the form of light. Light is a form of
electromagnetic waves which are radiated from a source onto an object
and projected to human vision. Rob inson- Riegler,G., & Robinson-Riegler,
B. (2008) mentioned that after visual processing done by the human visual
system, we actually classify shape, size, contour and the texture of the
object in order to analyze the information. The analyzed information will be
compared to other representations of objects or face that exist in our
memory to recognize. In fact, it is a hard challenge to build an automated
system to have the same capability as a human to recognize faces.
However, we need large memory to recognize different faces, for example,
in the Universities, there are a lot of students with different race and
gender, it is impossible to remember every face of the individual without
making mistakes. In order to overcome human limitations, computers with
al most limitless memory, high processing speed and power are used in
face recognition systems.
The human face is a unique representation of individual identity. Thus, face recognition is defined as a biometric method in which identification of an individual is performed by comparing a real-time captured image with the stored images of that person in a database (Margaret Rouse, 2012).
Nowadays, face recognition systems are prevalent due to their simplicity and excellent performance. For instance, airport protection systems and the FBI use face recognition for criminal investigations, tracking suspects, missing children and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to use face recognition to get access to their online accounts (Reichert, C., 2017), and Apple allows users to unlock their mobile phone, the iPhone X, by using face recognition (deAgonia, M., 2017).
The work on face recognition began in 1960. Woody Bledsoe, Helen Chan Wolf and Charles Bisson introduced a system which required the administrator to locate the eyes, ears, nose and mouth in images. The distances and ratios between the located features and common reference points were then calculated and compared. The studies were further enhanced by Goldstein, Harmon, and Lesk in 1970 by using other features such as hair colour and lip thickness to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face recognition problem. Many studies on face recognition have been conducted continuously until today (Ashley DuVal, 2012).
Voice recognition system
   Disadvantages: Time-consuming; less accurate compared to others.

Iris recognition system
   Advantages: Accurate.
   Disadvantages: Privacy invasion.
Comparison of detection methods:

Viola-Jones Algorithm
   Advantages: 1. High detection speed. 2. High accuracy.
   Disadvantages: 1. Long training time. 2. Limited head pose. 3. Not able to detect dark faces.

Local Binary Pattern Histogram
   Advantages: 1. Simple computation. 2. High tolerance against monotonic illumination changes.
   Disadvantages: 1. Only usable for binary and grey images. 2. Overall performance is inaccurate compared to the Viola-Jones algorithm.

AdaBoost Algorithm
   Advantages: Does not need any prior knowledge about face structure.
   Disadvantages: The result depends highly on the training data and is affected by weak classifiers.

SMQT Features and SNOW Classifier Method
   Advantages: 1. Capable of dealing with lighting problems in object detection. 2. Efficient in computation.
   Disadvantages: Regions whose grey values are very similar to face regions will be misidentified as faces.

Neural-Network
   Advantages: High accuracy, but only if a large set of images is trained.
   Disadvantages: 1. Detection process is slow and computation is complex. 2. Overall performance is weaker than the Viola-Jones algorithm.
Viola-Jones Algorithm
The Viola-Jones algorithm, introduced by P. Viola and M. J. Jones (2001), is the most popular algorithm for localizing the face segment in static images or video frames. Basically, the concept of the Viola-Jones algorithm consists of four parts: the first part is known as Haar features, the second part is where the integral image is created, followed by the implementation of AdaBoost in the third part, and lastly the cascading process.
                          Fig: Haar Feature
The Viola-Jones algorithm analyses a given image using Haar features consisting of multiple rectangles (Mekha Joseph et al., 2016). The figure shows several types of Haar features. The features perform as window functions mapped onto the image. A single-value result, representing each feature, can be computed by subtracting the sum of the white rectangle(s) from the sum of the black rectangle(s) (Mekha Joseph et al., 2016).

The value of the integral image at a specific location is the sum of the pixels to the left of and above that location. To illustrate clearly, the value of the integral image at location 1 is the sum of the pixels in rectangle A; the values of the integral image at the remaining locations are cumulative. For instance, the value at location 2 is the summation of A and B (A + B), at location 3 it is the summation of A and C (A + C), and at location 4 it is A + B + C + D.
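
To make this concrete, here is a minimal sketch in Python with NumPy (the pixel values are made-up examples): the integral image is built cumulatively, and any rectangle sum is then recovered from at most four corner look-ups, which is what makes Haar feature values cheap to compute.

import numpy as np

# A small made-up greyscale image (values are arbitrary examples)
img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.int64)

# Integral image: entry (r, c) holds the sum of all pixels above and
# to the left of (r, c), inclusive
integral = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] via four corner look-ups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Sum of the bottom-right 2x2 rectangle: 5 + 6 + 8 + 9 = 28
print(rect_sum(integral, 1, 1, 2, 2))   # -> 28

A Haar feature value is then simply the difference between two (or three) such rectangle sums, so its cost does not depend on the feature's size.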
Based on the image above, let us break the LBP operation into several small steps so we can understand it easily:
• Suppose we have a facial image in grayscale.
• We can get part of this image as a window of 3x3 pixels.
• It can also be represented as a 3x3 matrix containing the intensity of each pixel
(0~255).
• Then, we need to take the central value of the matrix to be used as the threshold.
• This value will be used to define the new values from the 8 neighbors.
• For each neighbor of the central value (threshold), we set a new binary value. We
set 1 for values equal or higher than the threshold and 0 for values lower
than the threshold.
• Now, the matrix will contain only binary values (ignoring the central value). We need to concatenate each binary value from each position of the matrix, line by line, into a new binary value (e.g. 10001101). Note: some authors use other approaches to concatenate the binary values (e.g. clockwise direction), but the final result will be the same.
• Then, we convert this binary value to a decimal value and set it as the central value of the matrix, which is actually a pixel from the original image.
• At the end of this procedure (the LBP procedure), we have a new image which better represents the characteristics of the original image.
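
A minimal sketch of this 3x3 LBP operator in Python with NumPy (the patch values are invented purely for illustration):

import numpy as np

def lbp_3x3(window):
    """Compute the basic LBP code for one 3x3 greyscale window."""
    center = window[1, 1]   # central value used as the threshold
    # the 8 neighbours read line by line, skipping the central value
    neighbours = [window[0, 0], window[0, 1], window[0, 2],
                  window[1, 0],               window[1, 2],
                  window[2, 0], window[2, 1], window[2, 2]]
    # 1 for values equal or higher than the threshold, 0 otherwise
    bits = ''.join('1' if v >= center else '0' for v in neighbours)
    return int(bits, 2)     # binary value -> decimal value

# A made-up 3x3 patch of pixel intensities (0~255)
patch = np.array([[ 90, 120, 200],
                  [ 50, 100, 140],
                  [ 10, 100,  60]])
print(lbp_3x3(patch))       # binary 01101010 -> 106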
Estimating neighbour values that do not fall exactly on a pixel can be done by using bilinear interpolation: if a sampling point lies between pixels, the values of the 4 nearest pixels (2x2) are used to estimate the value at the new data point.
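
For reference, the standard bilinear interpolation formula is as follows: for a sampling point at fractional offsets (dx, dy), with 0 ≤ dx, dy ≤ 1, inside the square formed by the four nearest pixel values f00, f10, f01 and f11,

   f(dx, dy) = f00(1 − dx)(1 − dy) + f10 dx (1 − dy) + f01 (1 − dx) dy + f11 dx dy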
4. Extracting the Histograms: Now, using the image generated in the last
step, we can use the Grid X and Grid Y parameters to divide the image into
multiple grids, as can be seen in the following image:
Fig: Extracting the Histogram
Based on the image above, we can extract the histogram of each region as follows:
• As we have an image in grayscale, each histogram (from each grid) will contain only 256 positions (0~255) representing the occurrences of each pixel intensity.
• Then, we need to concatenate each histogram to create a new, bigger histogram. Supposing we have 8x8 grids, we will have 8x8x256 = 16,384 positions in the final histogram. The final histogram represents the characteristics of the original image.
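
A rough sketch of this step in Python with NumPy (the 8x8 grid matches the example above; the 64x64 demo image is an arbitrary stand-in for a real LBP image):

import numpy as np

def lbp_grid_histogram(lbp_img, grid_x=8, grid_y=8):
    """Split the LBP image into grid_x * grid_y cells and concatenate
    the 256-bin histogram of each cell into one feature vector."""
    h, w = lbp_img.shape
    cell_h, cell_w = h // grid_y, w // grid_x
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp_img[gy * cell_h:(gy + 1) * cell_h,
                           gx * cell_w:(gx + 1) * cell_w]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)

demo = np.random.randint(0, 256, size=(64, 64))   # made-up LBP image
print(lbp_grid_histogram(demo).shape)             # -> (16384,) = 8*8*256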
5. Performing the face recognition: In this step, the algorithm is already trained. Each histogram created is used to represent one image from the training dataset. So, given an input image, we perform the steps again for this new image and create a histogram which represents the image.
• To find the image that matches the input image, we just need to compare two histograms and return the image with the closest histogram.
• We can use various approaches to compare the histograms (i.e. calculate the distance between two histograms), for example: Euclidean distance, chi-square, absolute value, etc. In this example, we can use the Euclidean distance (which is quite well known), based on the following formula:

   D = √( Σᵢ (hist1ᵢ − hist2ᵢ)² )
• So the algorithm output is the ID of the image with the closest histogram. The algorithm should also return the calculated distance, which can be used as a 'confidence' measurement. Note: don't be fooled by the 'confidence' name; lower confidences are better, because they mean the distance between the two histograms is smaller.
• We can then use a threshold and the 'confidence' to automatically estimate whether the algorithm has correctly recognized the image. We can assume that the algorithm has successfully recognized the face if the confidence is lower than the defined threshold.
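
A minimal sketch of this matching step in Python with NumPy, assuming the concatenated histograms from the previous step are already computed; the function names and the threshold value are our own illustrative choices, not a fixed API:

import numpy as np

def euclidean_distance(h1, h2):
    """Distance between two histograms (lower means more similar)."""
    return np.sqrt(np.sum((h1.astype(float) - h2.astype(float)) ** 2))

def recognise(input_hist, training_hists, threshold=1000.0):
    """training_hists maps an ID to that training image's histogram.
    Returns (best_id, distance), or (None, distance) if no histogram
    is closer than the (arbitrary, tunable) threshold."""
    best_id, best_dist = None, float('inf')
    for face_id, hist in training_hists.items():
        d = euclidean_distance(input_hist, hist)
        if d < best_dist:
            best_id, best_dist = face_id, d
    if best_dist < threshold:
        return best_id, best_dist   # recognised; distance is the 'confidence'
    return None, best_dist          # no match close enough

In practice, OpenCV's contrib module wraps this whole LBPH pipeline: cv2.face.LBPHFaceRecognizer_create() trains on labelled images, and its predict() method returns a label together with exactly this kind of distance-style confidence.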
Chapter 3
Model Implementation & Analysis

3.1. Introduction
Face detection involves separating image windows into two classes: one containing faces (targets) and one containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin colour and facial expression. The problem is further complicated by differing lighting conditions, image qualities and geometries, as well as the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to detect the presence of any face under any set of lighting conditions, upon any background. The face detection task can be broken down into two steps. The first step is a classification task that takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether there are any faces present in the image. The second step is the face localization task, which takes an image as input and outputs the location of any face or faces within that image as a bounding box (x, y, width, height). After taking the picture, the system compares it with the pictures in its database and gives the most closely related result. We use the Raspbian operating system and the OpenCV platform, and do the coding in the Python language.
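
As an illustration of these two steps, here is a minimal OpenCV sketch in Python (the image file name 'classroom.jpg' is a made-up example; the Haar cascade file ships with OpenCV). It is a sketch only, not the project's final code, which is listed in Chapter 4:

import cv2

# Load the Haar cascade face detector bundled with OpenCV
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

img = cv2.imread('classroom.jpg')              # made-up example image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale answers both questions at once: an empty result means
# "no faces", and each hit is a bounding box (x, y, width, height)
faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(len(faces), 'face(s) found')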
We copied this script, placed it in a directory on our Raspberry Pi and saved it. Then, through the terminal, we made the script executable and ran it:
     1. sudo chmod 755 /myfile/pi/installopencv.bash
     2. sudo /myfile/pi/installopencv.bash
These are the command lines we used.
2.   Python IDE: There are lots of IDEs for Python; some of them are PyCharm, Thonny, Ninja and Spyder. Ninja and Spyder are both excellent and free, but we used Spyder as it is more feature-rich than Ninja. Spyder is a little heavier than Ninja but still much lighter than PyCharm. You can run them on the Pi and get the GUI on your PC through ssh -Y. We installed Spyder through the command line below:
     1. sudo apt-get install spyder
Specification:
• 1/3.2-inch Sony IMX179 sensor, USB webcam
• 8-megapixel high-resolution JPEG USB camera
• UVC USB camera; supports Windows, Linux and Mac with UVC, and also Android systems
• Compatible with Raspberry Pi, Ubuntu, OpenCV, AMCap and many other USB web camera software and hardware
• USB webcam with 2.8 mm lens
• 38×38 mm / 32×32 mm mini micro-USB board camera
• USB webcam, widely used in many machines: ATM machines, medical machines, automatic vending machines, industrial machines
• USB camera module with changeable parameters (Brightness, Contrast, Saturation, White Balance, Gamma, Definition, Exposure…)
3.   Power Source: We use a Mi 10000 mAh power bank as our power source.

Fig: Our prototype device.
4.1.2.      Dataset.py
The dataset implementation code, also written in Python, is given below:
import cv2
import os

# Open the default camera and load the Haar cascade face detector
cam = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

Id = input('enter your id')               # id used to label the captured faces
sampleNum = 0
os.makedirs('Dataset', exist_ok=True)     # make sure the dataset folder exists

while True:
    ret, img = cam.read()                             # cam output
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # convert to black and white
    faces = detector.detectMultiScale(gray, 1.3, 5)   # detect face
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)  # framing
        sampleNum = sampleNum + 1                     # incrementing sample number
        # saving the captured face in the Dataset folder under the given id
        cv2.imwrite('Dataset/' + Id + '_' + str(sampleNum) + '.jpg',
                    gray[y:y + h, x:x + w])
    cv2.imshow('frame', img)                          # screen output
    # wait for 100 milliseconds
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
    # break if the sample number is more than 30
    elif sampleNum > 30:
        break

cam.release()
cv2.destroyAllWindows()
Extra Feature:
4.1.3.     AutoMail.py
In this project we added an extra feature called auto mail. It can automatically send the attendance file to a specified email address. The auto mail code is given below:
import yagmail

receiver = "mygmail@gmail.com"          # receiver email address
body = "Attendance File"                # email body
filename = "Attendance/data_log.csv"    # the file to attach

# mail information
yag = yagmail.SMTP("mygmail@gmail.com", "mypassword")

# send the mail
yag.send(
    to=receiver,
    subject="Attendance Report",   # email subject
    contents=body,                 # email body
    attachments=filename,          # file attached
)
Sample Images:
4.2. Summary
In this long yet useful chapter we covered the entire structure of the system: how it was developed and how it functions to give the best outcome.
Chapter 5
Working Plan

5.1. Introduction
In this chapter, we describe the overall work structure, i.e. how scheduling was maintained throughout the development phase. We also look at the financial foundation of this project, and the feasibility study is discussed as well.
5.2. Work Breakdown Structure
In developing this system we gave enormous importance to scheduling, because we believed that if we wanted to provide the best quality in a given period of time we had to give due importance to scheduling, which also helped us achieve better results. The table below shows the work we accomplished week by week.
Week No.    Proposed Work
Week-1      Project Proposal Report and Presentation
Week-1      Study related works
Week-1      Study in Python
Week-2      Study related works using OpenCV
Week-2      Study related works using Bluetooth
Week-3      Study related works using processing
Week-3      Study image processing
Week-3      Study image processing
Week-4      Sketching basic structure
Week-4      Prototype design
Week-4      Finalize prototype design
Week-4      Flexible box
Week-5      Runnable with basic commands (Input, Output, Turn on, Turn off)
Week-5      Designing lookahead table
Week-5      Designing lookahead table
Week-6      Creating environment for image processing
Week-6      Creating environment for image processing
Week-7      Integrating all together
Week-7      Start coding
Week-8      Coding for basic instructions (Compare, Result, Accuracy measure, etc.)
Week-8      Coding for single face detection
Week-9      Single face detection and compare with database
Week-9      Multiple face detection and compare
Week-10     Detecting multiple faces, store and compare with database
Week-10     Attendance collection
Week-10     File generation based on collected data
Week-10     Daily file generation of attendance
5.3. Gantt Chart

5.4. Financial Plan
Money was required to build the system, as we had to buy a lot of components. The breakdown is given below:
5.5. Feasibility Study

Economic Feasibility
Economic justification is generally the "bottom line" consideration for most systems. It includes a broad range of concerns, including cost-benefit analysis: we weigh the costs and the benefits associated with the candidate system, and if it suits the basic purpose of the organization, i.e. profit making, the project proceeds to the analysis and design phase.
The financial and economic questions raised during the preliminary investigation are verified to estimate the following:
• The cost of conducting a full system investigation.
• The cost of hardware and software for the class of application being considered.
• The benefits in the form of reduced cost.
• The proposed system will give minute information; as a result, performance is improved, which in turn may be expected to provide increased profits.
• This feasibility check considers whether the system can be developed with the available funds.
Operational Feasibility
It is mainly related to human, organizational and political aspects. The points to be considered are:
• What changes will be brought about by the system?
Chapter 6
Future Work

6.1. Introduction
This chapter discusses the future scope of this device and its possible implementations. To increase the scope of this device we can add some new features. As technology becomes more advanced, it will be necessary to change the structure some day with better replacements, and sometimes based on customer requirements.
6.2. Future Scope of Work
There are many possible future extensions of this project. Some of them are:
• Security can be improved
• A neural network can be used for higher accuracy
• It can be used in a big factory for employee attendance
• It can be built as a fully web-based system
6.3. Summary
This chapter has described the possible future applications of the design, but there are many more possibilities for the designed device. The device may need some further research for different applications, though the principle of the designed system will remain as it is.
Chapter 7
Result

7.1. Introduction
This chapter of the report contains the results that we achieved throughout the course of using this system. From initiation through conclusion of developing this system, the following results have been achieved:
• The system can be administered by a non-IT technician.
• The system is market-ready for commercial use.
• The system has the capacity to recognize up to a thousand faces.
• The system can serve as many people as required within an organization.

7.2. Summary
This chapter has covered the different types of results that we managed to obtain throughout the course of using this system.