
Project report on

“Criminal Investigation with the help of


Face Recognition”
Submitted by

Rohit Raut
Mohit Mishra
Vishesh Tiwari
Vikas Chaturvedi

Project Guide
Mr. Kashif Shaikh

DEPARTMENT OF COMPUTER ENGINEERING

ZAGDU SINGH CHARITABLE TRUST (REGD.)

THAKUR POLYTECHNIC
(An ISO 9001:2015 Certified Institute)
Thakur Complex, West to W.E. Highway, Kandivli (E), Mumbai – 400 101.

2023-2024
PROJECT APPROVAL SHEET
Academic Year: 2023 - 2024
This Project work entitled

“Criminal Investigation with the help of


Face Recognition”
By
Rohit Raut
Mohit Mishra
Vishesh Tiwari
Vikas Chaturvedi

Is Approved for the award of the


DIPLOMA
IN
COMPUTER ENGINEERING

(Mr. Kashif Shaikh)


PROJECT GUIDE/ MENTOR

EXTERNAL EXAMINER INTERNAL EXAMINER


Affiliated to Maharashtra State Board of Technical
Education (MSBTE) Mumbai

C E R T I F I C A T E
This is to certify that Mr. Rohit Raut, Mr. Mohit Mishra, Mr. Vishesh Tiwari
and Mr. Vikas Chaturvedi from Thakur Polytechnic (0522) Institute, having
Enrollment Numbers: 2105220333, 21052202349, 2105220351 and 2105220337
respectively, have completed their final-year project titled “Criminal
Investigation with the help of Face Recognition” during the academic year
2023-2024. The project was completed in a group of 4 persons under the
guidance of Faculty Guide, Mr. Kashif Shaikh.

Name and Signature of Guide: Mr. Kashif Shaikh


Telephone: +91 98194 21144

---------------------------- ---------------------------
H.O.D (CO) PRINCIPAL
(Ms. VAISHALI RANE) (Dr. S.M. GANECHARI)
ACKNOWLEDGEMENT

The success and fulfillment of any project require a great deal of guidance and encouragement,
and we were very fortunate to receive both throughout the project's duration. Many people spent
much time instructing us on how to search the literature, collect data, and develop the project.

First and foremost, we would like to thank Thakur Polytechnic, Mumbai and the Department of
Computer Engineering for providing us with this sublime opportunity to undertake project work
and for helping us learn and acquire transcendental experience. We would like to thank our
HOD, Ms. Vaishali Rane, for her valuable guidance and support.

We owe great gratitude to our guide and professor Mr. Kashif Shaikh, who always helped us
with his invaluable knowledge and guidance, and kept us on the correct path with his
encouragement, support, suggestions, ideas and constructive criticism throughout the project.

We would also like to thank all the respected teachers of the Department of Computer
Engineering, without whose teaching and support this project could not have been completed.

We are grateful to our families, friends, and all others for their help, encouragement and
cooperation during the project work.

We are thankful for, and fortunate to have received, constant encouragement, support and
guidance from all the non-teaching staff of the Computer Engineering Department, which helped
us successfully complete our project work. We also extend our sincere thanks to all the
laboratory staff for their timely support.

Sincere thanks to all group members.

Name of Members / Roll Number


Rohit Raut [29]
Mohit Mishra [44]
Vishesh Tiwari [46]
Vikas Chaturvedi [125]
Contents
Chapter No. Title of the Chapter
Abstract
1. Introduction
1.1 Introduction
1.2 Theory behind the project
1.3 Need of the project
1.4 Advantages
2. Literature Survey
2.1 Problem statement
2.2 Objective
3. Scope of the project
4. Methodology
4.1 Implementation
4.2 Starting of the Application
4.3 User Details
4.4 Activity Page
4.5 Logs of Exercise and Statistics
4.6 Log out
4.7 GUI Components used in the app
5. Details of design, working and processes
5.1 Software Development
5.2 Flow of project
5.2.1 Feasibility Study
5.2.2 Cost Analysis
5.2.3 Process Model
5.2.4 Data Flow Diagrams
5.3 UML Diagrams
5.3.1 Use case Diagrams
5.3.2 Activity Diagrams
5.4 Technologies Used
5.4.1 Hardware and Software Requirements
5.4.2 Introduction to Programming Tools
5.5 Test Cases
5.6 Test Plan
5.7 Defect Report
6. Result and Application
6.1 Results
6.2 Applications
7. Conclusion and future scope
7.1 Conclusion
7.2 Future scope
8. References and Bibliography
9. List of Figures
10. Annexure
11. International paper published on project
12. Certificates of the International Paper
ABSTRACT

Crime, in every form, is increasing day by day, so a proper, reliable criminal-identification
system should be built. It would help the police department, research departments, intelligence
agencies, etc. in every possible way. Crime is not only increasing in the physical world:
digital crime, online crime, etc. are also rising at a very high rate. Money fraud, murder,
mental torture and sexual harassment are among the biggest examples of crime in recent times.

The main target of criminals is the public. They harm the public, and they hide from the media,
police, public, etc. by mixing themselves in with the public, usually changing their attire,
looks, way of living, etc. Our aim, therefore, is to make a mobile-based application, usable by
the public as well as the police and media, to find criminals by their permanent facial
markings, such as the eyes and eyeball movement. There are many markings on a person,
specifically on the face, that do not change; these also include wrinkles, birthmarks (if
any), hair shape, face shape, neck posture, etc.

Our basic aim is to detect criminals by feeding a picture of the criminal to the application,
storing the picture temporarily in the device's RAM, so that once the app is cleared from
RAM, all of its data is cleared.
CHAPTER 01
INTRODUCTION
INTRODUCTION

Mobile applications are software programs that run on smartphones and other mobile
communication devices. They can also be accessories that attach to a smartphone or other
mobile communication device, or a combination of accessories and software. One of the main
features of a mobile phone is its camera. With more advanced features arriving every day,
we can extract more value from it. One advantage of these cameras is their pixel quality,
which has become far better than expected. This helps in capturing more detailed
pictorial information about something, some event's occurrence, or someone.
“Criminal Identification Using Face Detection” does much the same task. It needs only one
picture of a particular person as input, which is saved in the device's RAM. Once the app is
closed, all the stored data is deleted permanently, and the data needs to be fed again. The
main advantage of this app is that you need to provide the input picture only once; every
time that particular person is seen by the camera, he or she is detected using their facial
markings and features.
THEORY BEHIND THE PROJECT

"Criminal Identification using Face Detection" is based on the premise that every
individual has a unique set of facial features, such as the distance between the eyes, the
shape of the nose, the size and shape of the mouth, and the contours of the face. These
unique facial features can be used to identify individuals, and this has been a common
practice in law enforcement for many years.
The process of criminal identification using face detection involves capturing an image
of the suspect's face, either through a photograph or a video recording, and analyzing
the facial features using advanced algorithms. This analysis is done by detecting and
extracting facial landmarks, such as the eyes, nose, and mouth, and comparing them to a
database of known individuals.
The process of facial recognition and identification involves three steps: detection,
alignment, and recognition.
The detection step involves locating the face in the image or video frame, which can be
done using various techniques such as Haar cascades, deep learning models like
Convolutional Neural Networks (CNNs), and other computer vision algorithms.
The alignment step involves aligning the detected face so that the facial landmarks are
properly positioned for accurate recognition. This step is important because the position
and angle of the face can vary significantly between images and videos.
The recognition step involves comparing the detected facial features to a database of
known faces to identify the individual. This is done using machine learning algorithms
that can identify patterns in the facial features and match them to known individuals
with a high degree of accuracy.
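As a rough illustration of the recognition step, matching can be reduced to finding the nearest stored feature vector. The sketch below is a minimal nearest-neighbour matcher; the feature values, names and threshold are invented for illustration, and a real system would use learned embeddings rather than three hand-picked numbers:

```python
import numpy as np

def match_face(probe, database, threshold=0.6):
    """Return the best-matching identity for a probe feature vector.

    `database` maps names to feature vectors (e.g. normalized landmark
    distances). Returns None if no entry is within `threshold`.
    """
    best_name, best_dist = None, float("inf")
    for name, features in database.items():
        dist = np.linalg.norm(probe - np.asarray(features))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy database of 3-dimensional feature vectors (illustrative only).
db = {"alice": [0.30, 0.52, 0.18], "bob": [0.41, 0.47, 0.25]}
print(match_face(np.array([0.31, 0.50, 0.19]), db))  # prints "alice"
```

The threshold is what turns a nearest-neighbour search into a verification decision: a probe far from every stored face is rejected rather than mapped to the least-bad match.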
Overall, criminal identification using face detection is a powerful tool for law
enforcement agencies that can help them to quickly and accurately identify suspects and
solve crimes. However, it is important to balance the benefits of this technology with
concerns around privacy and potential misuse.
We have chosen an Android smartphone as the platform for the project for multiple
reasons: the Android operating system is open-source and easily programmable and,
more importantly, it dominates the smartphone market. A Realme™ smartphone running
Android's latest version is used as the device.
Face recognition on mobile devices is not a new problem; many approaches have been
proposed in the literature, as discussed in Section II. Our approach differs from
existing methods in several ways. First and foremost, no additional face-sensing
mechanism is attached to the subject. Secondly, the location and orientation of the
phone relative to the subject's face are not predetermined. In addition, we perform
face recognition even while the subject is moving, unlike other works that recognize
faces only when the subject is stationary.

NEED OF THE PROJECT
Criminal investigation is a complex process that involves collecting and analyzing a vast
amount of information and evidence to identify and apprehend suspects. One critical piece of
evidence that can aid in the identification of suspects is facial recognition technology.
Therefore, a face recognition project can be an essential tool in criminal investigations for the
following reasons:
1. Identification of Suspects:
Facial recognition technology can help identify suspects in a crime by comparing the
facial features of the person caught in CCTV footage or photos with a database of known
criminals or suspects. This can help law enforcement agencies to identify and apprehend
the suspects quickly.
2. Enhancing Public Safety:
Face recognition technology can enhance public safety by identifying potential criminals
and deterring them from committing crimes in public places such as malls, airports, and
train stations.
3. Time-Saving:
Manual identification of suspects can be a time-consuming process and can delay the
investigation. Face recognition technology can speed up the process and help
investigators identify the suspect quickly, thus saving valuable time.
4. Increasing Efficiency:
Facial recognition technology can enhance the efficiency of law enforcement agencies by
automating the process of identifying suspects. This can reduce the workload of
investigators, allowing them to focus on other aspects of the investigation.
5. Preventing Recidivism:
Face recognition technology can be used to monitor released criminals to prevent them
from committing more crimes. By matching their faces with a database of known
offenders, law enforcement agencies can quickly identify repeat offenders, and take
appropriate actions to prevent future crimes.
ADVANTAGES

The use of face recognition technology in criminal investigations has become
increasingly popular in recent years. This technology can be used to analyze images
or video footage and compare them against a database of known faces to identify
suspects or persons of interest. There are several advantages to using face recognition
technology in criminal investigations, including:

 Efficient identification:
Face recognition technology can help law enforcement agencies quickly identify
suspects or persons of interest. This can save valuable time and resources
compared to traditional methods of investigation.

 Increased accuracy:
Face recognition technology has a high level of accuracy when properly calibrated
and trained. This can help law enforcement agencies to identify suspects with a
higher degree of certainty.

 Reduced Bias:
Face recognition technology is not influenced by factors such as race, gender, or
age, which can be an issue in human identification. This can help reduce the
potential for bias in criminal investigations.

 Improved surveillance:
Face recognition technology can be used to monitor public spaces and detect
potential threats or criminal activity. This can help law enforcement agencies to
prevent crimes before they occur.

 Enhanced security:
Face recognition technology can be used to control access to secure areas and
prevent unauthorized entry. This can be particularly useful in high-security
facilities such as airports or government buildings.
 Cost-Effective:
Face recognition technology can be more cost-effective than traditional
investigation methods, as it requires fewer resources and can be carried out more
quickly.

Overall, the use of face recognition technology in criminal investigations can be a
valuable tool for law enforcement agencies in India, helping them to improve public
safety and reduce crime rates. However, it is also important to consider the potential
risks and concerns associated with this technology, such as privacy violations and
bias in the algorithms used to identify suspects.

Today, AI is most commonly used to detect crimes such as fraud and money
laundering. But in the future, it will likely become common in other industries
as well. Below are three areas where we see AI being used to prevent crime:

 Transportation of Illegal Goods:
With AI, express delivery companies can assess the likelihood that parcels
contain illegal goods, like narcotics, and report them to the relevant authorities.

 Terrorist Activities:
Retailers and pharmacies could use sophisticated AI tools to identify customers
who purchase unusual amounts of chemicals that could be used as precursors to
terrorist activities.

 Human Trafficking:
Shipping companies can use their data and AI capabilities to identify the
containers most likely to be used for human trafficking, thus saving lives.
CHAPTER 02
LITERATURE SURVEY
Problem Statement

The problem statement is that traditional criminal investigations often rely on eyewitness
testimonies and physical evidence, which can be unreliable and time-consuming to collect and
analyze. With the help of face recognition technology, the goal is to improve the accuracy and
efficiency of criminal investigations by using computer algorithms to analyze surveillance
footage and identify potential suspects. However, there are also concerns about privacy and
potential biases in the use of this technology. The challenge is to develop a face recognition
system that is accurate, unbiased, and respects individual rights while still helping law
enforcement agencies to solve crimes.
Face Recognition for Criminal Identification is a face recognition system in which the security
expert will input an image of the person in question inside the system and the system will first
preprocess the image which will cause unwanted elements such as noise to be removed from the
image. After that, the system will then classify the image based on its landmarks for example, the
distance between the eyes, the length of the jaw line, etc. Then, the system will run a search
through the database to find its perfect match and display the output. This work is focusing on
implementing the system for criminal identification. The current practice of thumbprint
identification, which is simple and easy to implement, can be challenged by reliance on latent
thumbprints, which sometimes cannot be acquired from the crime scene. Criminals have become
cleverer and are normally very careful about leaving any thumbprint at the scene. The system
encompasses a face database and an image-processing algorithm to match the input face with
faces stored in the database.
There are two parts vital to the success of this system: detection and recognition. Face
detection is one of the most important steps in a face recognition system and can be classified
into four principal categories: knowledge-based, feature-invariant, template-matching and
appearance-based methods. Recognition requires two stages: a training process and an evaluation
process. In the training process, the algorithm is fed samples of the images to be learned and a
distinct model for each image is determined, while in the evaluation process, a model of a newly
acquired test image is compared against all existing models in the database. The nearest
corresponding model is then used to determine whether recognition is triggered [5]. At this
stage, a statistical procedure, Principal Component Analysis (PCA), is applied to a collection
of face images to form a set of basis features, called a set of eigenfaces. Any human face can
be considered a combination of these standard faces.
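The eigenface idea described above can be sketched with NumPy. This is a minimal illustration on tiny synthetic data, not the report's actual implementation; the image size and number of faces are invented for the example:

```python
import numpy as np

def eigenfaces(faces, k=2):
    """Compute the mean face and top-k eigenfaces from flattened
    face images (one image per row)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of Vt are the principal directions ("eigenfaces") of the set.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, basis):
    """Represent a face as a small vector of weights over the basis."""
    return basis @ (face - mean)

# Tiny synthetic "faces": 4 images of 6 pixels each (illustrative only).
rng = np.random.default_rng(0)
faces = rng.random((4, 6))
mean, basis = eigenfaces(faces, k=2)
weights = project(faces[0], mean, basis)
print(weights.shape)  # prints (2,)
```

Matching then happens in this low-dimensional weight space: a face is "a combination of the standard faces" in exactly the sense that `mean + basis.T @ weights` approximately reconstructs it.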
The Criminal Face Detection project aims to build an automated criminal face detection system by
leveraging the human ability to recall minute facial details. Identification of criminals at the
scene of a crime can be achieved in many ways, such as fingerprinting, DNA matching or
eyewitness accounts. Of these methods, eyewitness accounts are preferred because they stand
scrutiny in court and are cost-effective. Witnesses to a crime may have seen the criminal,
though in most cases it may not be possible to see the perpetrator's face completely. The
Criminal Face Detection System will be built on an existing criminal database: input is provided
in the form of a sketch or an image, matched against the existing database, and results are
returned. A criminal record generally contains personal information about a particular person
along with a photograph. To identify any criminal we need some identification regarding the
person, which is given by an eyewitness. In most cases the quality and resolution of the
recorded image segments are poor, making it hard to identify a face; to overcome this sort of
problem we are developing this software. Identification can be done in many ways, such as
fingerprints, eyes, DNA, etc.; one such application is face identification. The face is our
primary focus of attention in social intercourse, playing a major role in conveying identity and
emotion. Although the ability to infer intelligence or character from facial appearance is
suspect, the human ability to recognize faces is remarkable. The human face is a complicated
multidimensional visual model, and hence it is very difficult to develop a computational model
for recognizing it. This report presents a methodology for recognizing the human face based on
features derived from the image. The proposed methodology is implemented in two stages: the
first stage detects the human face in an image using the Viola-Jones algorithm, and the second
stage recognizes the detected face using a fusion of principal component analysis.
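The Viola-Jones detector mentioned above owes its speed to the integral image, which lets the sum of any rectangular region (the building block of Haar features) be computed in constant time from four corner lookups. A minimal sketch, for illustration only:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] holds the sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of any h x w rectangle in O(1) using four corner lookups."""
    def at(y, x):  # treat out-of-range (negative) corners as zero
        return ii[y, x] if y >= 0 and x >= 0 else 0
    y0, x0 = top - 1, left - 1
    y1, x1 = top + h - 1, left + w - 1
    return at(y1, x1) - at(y0, x1) - at(y1, x0) + at(y0, x0)

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 9 + 10 = 30
```

A Haar feature is simply the difference of two or three such rectangle sums, which is why a cascade can evaluate thousands of features per window in real time.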

OBJECTIVE

 To collect information from the sensors of the mobile phone to identify the activity that
the user of the application is performing.

 To use the information collected from the user (age, sex, height, eye number, etc.) to give
appropriate health tips to the user.

 To collect information on the various activities performed by the user over a predetermined
period of time and perform analysis on that information.

 To focus mainly on activities that consume many calories and thus have an effect on
health.

 To suggest suitable lifestyle changes based on medically approved values such as ideal heart
rate and BMI (Body Mass Index).

 User-friendly interface: the application is intended to be launched on a touch-enabled
Android device, and the user interface is designed with that in mind. The interface should
ease the usability of the application and thereby enhance the user experience.

Businesses and other establishments use face recognition systems as tools for various purposes.
Some use this software for checking attendance; others use it for tracking people who enter
and exit the establishment.
The Government also utilizes face recognition systems to trace faces known for illegal
activities. The software is therefore helpful for every user and in every way.
Furthermore, the system works by using a camera to capture faces through images, videos, or
in real time. The recognized faces are then stored in a database and used to match faces
for attendance or tracking.
There are various advantages to developing software that uses face detection and recognition in
the field of authentication. Face detection is an easy and simple task for humans, but not so
for computers. It has been regarded as one of the most complex and challenging problems in the
field of computer vision, due to the large intra-class variations caused by changes in facial
appearance, lighting and expression. Face detection is the process of identifying one or more
human faces in images or videos. It plays an important part in many biometric, security and
surveillance systems, as well as image and video indexing systems. Face detection can be
regarded as a specific case of object-class detection, in which the task is to find the
locations and sizes of all objects in an image that belong to a given class.
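Object-class detection of this kind is classically done by sliding a fixed-size window across the image and scoring each patch with a classifier. The sketch below uses a stub brightness test in place of a real face classifier (such as a Haar cascade or CNN), purely to illustrate the scanning loop; the image, window size and stub are invented for the example:

```python
import numpy as np

def sliding_window_detect(img, win, step, classify):
    """Scan a grayscale image with a fixed-size window and return the
    (top, left, height, width) boxes the classifier accepts."""
    h, w = img.shape
    wh, ww = win
    hits = []
    for y in range(0, h - wh + 1, step):
        for x in range(0, w - ww + 1, step):
            if classify(img[y:y + wh, x:x + ww]):
                hits.append((y, x, wh, ww))
    return hits

# Stub classifier: treat a uniformly bright patch as a "face".
img = np.zeros((8, 8))
img[2:5, 2:5] = 1.0  # one bright 3x3 region
boxes = sliding_window_detect(img, win=(3, 3), step=1,
                              classify=lambda p: p.mean() > 0.9)
print(boxes)  # prints [(2, 2, 3, 3)]
```

Real detectors repeat this scan at several image scales so that the fixed window can find both the "locations and sizes" of objects, and then merge overlapping hits.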
CHAPTER 03
SCOPE OF THE PROJECT
SCOPE OF THE PROJECT

We first need to understand the various components of the face detection system used for
criminal detection. In other words, we remove the complexity (noise and clutter) from the image
before matching it against the criminal record or data. In the past, we were not able to remove
this complexity from images obtained through CCTV or other cameras.


The scope of the project is confined to capturing images and storing them in the database. When
a person has to be identified, the images stored in the database are compared with the newly
captured details.

Overview of the project:


This project is aimed at identifying criminals in any investigation department. The technique
is that we already store images of criminals in our database along with their details, and those
images are segmented into many slices, say eyes, hair, lips, nose, etc. These slices are stored
in another database record. To identify a criminal, an eyewitness selects the slices that appear
on the screen, and from them we compose a face, which may or may not match one of our stored
images. If an image matches up to 99%, we predict that this person is the criminal. This project
thus provides a very friendly environment for both operator and eyewitness: any face can be
composed easily, and criminals can be identified very easily.

Project Objective:
This project is intended to identify a person using images previously taken. The identification
will be done according to the previous images of different persons.
CHAPTER 04
METHODOLOGY
IMPLEMENTATION

Figure 1: Implementation Flow Chart


STARTING OF THE APPLICATION

The first screen shows the camera view and an “Add Face” button at the top right. The app
requires camera access to start. Next, place a face inside the camera frame; once the face is
detected, tap the “Add Face” button and enter the name of the person whose face is being saved.
Then place the same face inside the camera frame again: the application will detect the face and
show the name under which it was saved.
The application is just a basic application for now, which detects the face by first detecting
its features, like the face structure, eyes, nose, etc.
For uniqueness, we have used the phone's RAM as the database, but we plan to create a
persistent database in the future to store the data.

USER DETAILS
The user enters details such as the name of the person (as a string) and a click (image) of the
person. These details are used to store the person's data, and also to compute the probability
that a detected face belongs to the particular person we want.
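The RAM-only storage behaviour described above can be sketched as a small in-memory store; the class name and feature-vector representation below are invented for illustration:

```python
class InMemoryFaceStore:
    """RAM-only store mirroring the app's behaviour: entries live only
    as long as the process runs; nothing is written to disk."""

    def __init__(self):
        self._faces = {}  # name -> feature vector (list of floats)

    def add_face(self, name, features):
        self._faces[name] = list(features)

    def lookup(self, name):
        return self._faces.get(name)  # None if unknown

    def clear(self):
        """Called when the app is closed / cleared from RAM."""
        self._faces.clear()

store = InMemoryFaceStore()
store.add_face("suspect_1", [0.2, 0.7, 0.1])
print(store.lookup("suspect_1"))  # prints [0.2, 0.7, 0.1]
store.clear()
print(store.lookup("suspect_1"))  # prints None
```

The trade-off is exactly the one the report notes: privacy-friendly (data vanishes with the process) at the cost of having to re-enter faces on every launch.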

LOGOUT
A logout option is not available in our application at present, considering how heavyweight the
software already is. Once the software can manage things properly after training, we will add
login, logout, profile, etc. interfaces.

GUI COMPONENTS USED IN THE APP

1. TextView
TextView is a UI component that displays text to the user on the screen.
There are various attributes that describe a TextView; some of the major ones are listed below:
 android:id – a unique id for the control.

 android:width – sets the exact width of the TextView.

 android:height – sets the exact height of the TextView.

 android:textColor – sets the color of the text.

 android:gravity – aligns the text within the TextView.


There are more attributes; the above are the major ones.

Figure 2: TextView

2. EditText
EditText is a user interface control that allows the user to enter text.
There are various attributes that describe an EditText; some of them are listed below:
 android:id – a unique id for the control.
 android:width – sets the exact width of the EditText.
 android:height – sets the exact height of the EditText.
 android:textColor – sets the color of the text.
 android:inputType – specifies the type of data the text field can accept.

Figure 3: EditText
3. Button
Button, as its name suggests, is a component that can be pressed or clicked by the user to
perform an action. It has the same properties as a TextView, with a few Button-specific
properties:

 android:gravity – sets the position of the text within the button. The available values
are right, left, center, center_vertical, etc. You can also combine two values using
the | symbol.
 android:textSize – sets the text size inside the button.
 android:background – sets the background color of the button.
 A picture can be added to the button, alongside the text, using android:drawableRight,
android:drawableLeft, android:drawableTop and android:drawableBottom, respectively.

Figure 4: Button
CHAPTER 05
DETAILS OF DESIGN, WORKING AND
PROCESS
SOFTWARE DEVELOPMENT

The architecture of Criminal Detection Analysis based on Face Recognition consists of different
modules:
1) Activity and Face Recognition
2) Mobile in-built sensors
Activity recognition is a recent development that recognizes the current activity of an individual.
Two sensing inputs, hard sensing and soft sensing, are combined to recognize the current activity
of the individual, i.e., whether the person is walking, in a vehicle, etc. Hard sensing inputs are
the in-built sensors of the mobile. Currently the application uses two in-built sensors: the
proximity sensor and the accelerometer. The application uses these two sensors to recognize
activity.
When the user opens the application, a splash screen appears. The splash screen is the first image
of the application, where the application logo is displayed along with a different health tip every
time the user opens the application. The user then enters their name and details. These user
details are important for further calculation, and every activity output is based on them. Using
the user details, the BMR is calculated, and the target calories are calculated based on the user's
lifestyle. On the menu page, different activities like push-ups, walking, skipping and squats are
present. If the user selects the push-ups activity, the proximity sensor is activated; when the user
starts performing push-ups, the proximity sensor counts the number of push-ups, and when the
user presses stop, the calories burned are shown along with the number of push-ups. The number
of push-ups and calories burned are saved in a log after pressing the OK button. For the walking,
skipping and squat activities, the accelerometer sensor is used. An accelerometer sensor reports
values along the X-axis, Y-axis and Z-axis. For walking, only the Z-axis value is used and the
other axes are ignored. For skipping, only the Y-axis value is used, and for squats, only the
X-axis value is used. After the user presses the stop button, the sensor values and the elapsed
time between the start and stop buttons are used to calculate the calories burned by the person
performing the activity. Text-to-speech and notification features are also added using Android
modules.
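The proximity-based push-up counting described above can be sketched in plain Java, independent of the Android sensor APIs. This is an illustrative sketch only: it assumes that each near-to-far transition of the proximity sensor corresponds to one completed push-up, and it uses an assumed figure of roughly 0.3 kcal per push-up, which is not the application's actual constant.

```java
// Sketch of the push-up counting logic described above (assumed constants).
public class PushUpCounter {
    // Illustrative assumption: ~0.3 kcal burned per push-up (not the app's real value).
    static final double KCAL_PER_PUSHUP = 0.3;

    // Counts push-ups from a stream of proximity readings: one push-up is one
    // "near" (sensor covered) to "far" (sensor uncovered) transition.
    public static int countPushUps(boolean[] nearReadings) {
        int count = 0;
        boolean wasNear = false;
        for (boolean near : nearReadings) {
            if (wasNear && !near) {
                count++; // completed one down-up cycle
            }
            wasNear = near;
        }
        return count;
    }

    public static double caloriesBurned(int pushUps) {
        return pushUps * KCAL_PER_PUSHUP;
    }
}
```

In the real application the readings would come from a SensorEventListener registered on the proximity sensor; here they are modeled as a simple boolean array so the counting rule itself is easy to test.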
Figure 5: Explanatory Diagram

FLOW OF THE PROJECT

Figure 6: Flow Diagram of our Application


Figure 7: Flow Diagram of Criminal System
FEASIBILITY STUDY

A Feasibility study is defined as an evaluation or analysis of the potential impact of a proposed
project or program. A Feasibility study is conducted to assist decision-makers in determining
whether or not to implement a particular project or program. The Feasibility study is based on
extensive research on both the current practices and the proposed project/program and its impact
on the project. It will contain extensive data related to financial and operational impact and will
include advantages and disadvantages of both the current situation and the proposed plan. It is
conducted to assist the decision-makers in making the decision that will be in the best interest of
the project. The extensive research, conducted in a non-biased manner, will provide data upon
which to base a decision.
A feasibility test is a process that evaluates the potential success of a proposed project or idea. It
assesses the practicality and likelihood of the project's success by examining various factors,
such as financial, technical, operational, and social considerations.
A feasibility test typically involves gathering data, analyzing it, and presenting findings to
decision-makers who will determine whether to move forward with the proposed project or not.
The feasibility test aims to identify potential risks, challenges, and opportunities, and recommend
potential solutions.
Examples of feasibility tests include market research studies, technical assessments, financial
analysis, and environmental impact studies. These tests can help organizations determine the
viability of a project before investing significant resources into it.
Overall, a feasibility test is an essential tool for evaluating the potential success of a project,
reducing risk, and making informed decisions.
Feasibility study is of the following categories for a project:
1.) Economic Feasibility:

Economic Feasibility is the process of identifying the financial benefits and costs associated with
a development project. It judges whether or not we might be able to develop the given software
within the existing budgetary constraints. As Android is an open-source platform and the
development tools are freely available, there is no initial investment required for setting up the
development platform. Moreover, the Android SDK also provides an emulator for developing
and testing Android applications. Hence, testing of the application can also be done without
incurring any extra costs.

2.) Technical Feasibility:

Technical Feasibility asks whether reliable hardware and software, as well as other technical
resources capable of meeting the needs of a proposed system, can be acquired or developed by
an organization in the required time. As Android-powered mobile phones are widely available
in the market, it is very much technically feasible to develop and test the application on real
phones. Android is an open-source platform developed by Google Inc. As the Android SDK
(Software Development Kit) and the Eclipse IDE are freely available, along with the ADT
plug-in for Eclipse, it is technically very feasible to work with Android and develop Android
applications. The SDK also provides a virtual emulator to test Android applications in the
absence of an Android-enabled hand-held device. Moreover, to help novice developers, Google
also provides dedicated blogs for queries related to Android application development.

3.) Operational Feasibility:

The process of assessing the degree to which a proposed system solves business problems or
takes advantage of business opportunities is called Operational Feasibility. Operational
Feasibility comprises estimating the operational cost and its benefits. It is a measure of how well
the system satisfies the requirements identified in the requirements analysis phase of system
development. Since the application only requires an Android-enabled cell phone to operate, the
operational cost consists only of that of an Android cell phone. Moreover, the user bears no
physical or mental burden in order to use the application. All the user has to do is install the
application on an Android cell phone and keep it with them most of the time, which is already a
habit nowadays. So, the usage of this application introduces no extra burden on the user, making
it operationally feasible.

COST ANALYSIS

COCOMO estimation

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model
developed by Barry W. Boehm. The model uses a basic regression formula with parameters that
are derived from historical project data and current project characteristics.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first
level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software
costs, but its accuracy is limited because it has no factors to account for differences in project
attributes (Cost Drivers). Intermediate COCOMO takes these Cost Drivers into account, and
Detailed COCOMO additionally accounts for the influence of individual project phases.

Basic COCOMO

Basic COCOMO computes software development effort (and cost) as a function of program size.
Program size is expressed in estimated thousands of source lines of code (SLOC).

COCOMO applies to three classes of software projects:

Organic projects - "small" teams with "good" experience working with "less than rigid"
requirements.

Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid
and less-than-rigid requirements.

Embedded projects - developed within a set of "tight" constraints (hardware, software,
operational, ...). An embedded project is also a combination of organic and semi-detached
projects.

The basic COCOMO equations take the form

Effort Applied (E) = a_b (KLOC)^(b_b) [man-months]

Development Time (D) = c_b (Effort Applied)^(d_b) [months]

People Required (P) = Effort Applied / Development Time [count]

Where,

KLOC is the estimated number of delivered lines of code for the project, expressed in thousands.

The coefficients a_b, b_b, c_b and d_b are given in the following table:

Software project    a_b    b_b     c_b    d_b
Organic             2.4    1.05    2.5    0.38
Semi-detached       3.0    1.12    2.5    0.35
Embedded            3.6    1.20    2.5    0.32

Figure 8: Simple COCOMO analysis chart

Basic COCOMO is good for a quick estimate of software costs. However, it does not account for
differences in hardware constraints, personnel quality and experience, use of modern tools and
techniques, and so on.
COCOMO cost analysis for this project

Lines of code: 4K

Type of project: Organic

Effort Applied: 2.4 × 4^1.05 ≈ 10.3 man-months

Development Time: 2.5 × 10.3^0.38 ≈ 6.1 months

People required: 10.3 / 6.1 ≈ 2
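The Basic COCOMO equations above can be evaluated with a small calculator, sketched here in Java using the organic-mode coefficients from the table. For a 4 KLOC organic project it evaluates to roughly 10.3 man-months of effort, about 6.1 months of development time, and a team of about 2 people.

```java
// Basic COCOMO estimator (Boehm), organic mode; coefficients from the table above.
public class Cocomo {
    // a_b, b_b, c_b, d_b for organic projects
    static final double A = 2.4, B = 1.05, C = 2.5, D = 0.38;

    // Effort Applied (E) in man-months
    public static double effort(double kloc) {
        return A * Math.pow(kloc, B);
    }

    // Development Time (D) in months
    public static double devTime(double kloc) {
        return C * Math.pow(effort(kloc), D);
    }

    // People required (P), rounded to the nearest whole person
    public static long people(double kloc) {
        return Math.round(effort(kloc) / devTime(kloc));
    }
}
```

Swapping in the semi-detached or embedded coefficients from the table would estimate those project classes the same way.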

PROCESS MODEL

Software, like all complex systems, evolves over a period of time. Business and product
requirements often change as development proceeds, making a straight-line path to an end
product unrealistic. For software where a set of core product or system requirements is well
understood, but the details of product or system extensions have yet to be defined, we need a
process model, or methodology, that has been explicitly designed to accommodate a product that
evolves over time. Thus, we have decided to choose an evolutionary approach for our application.
Evolutionary models are iterative; this will enable us to develop increasingly complete
versions of the application.

Figure 9: Typical Spiral Model


For our project, we have selected the spiral model, which is an evolutionary software process
model. This will allow us to follow a cyclic approach for incrementally growing the system's
degree of definition and implementation while decreasing its degree of risk. Using this model,
during the early iterations we plan to release a paper model. During each iteration, we will
perform various activities like modeling, construction and then deployment. After every
iteration, we will attain anchor-point milestones: a combination of work products and
conditions attained along the path. This will progressively produce more sophisticated versions
of the application.

DATA FLOW DIAGRAMS

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an
information system, modeling its process aspects. Often they are a preliminary step used to
create an overview of the system which can later be elaborated.

DFD Diagrams

Figure 10: DFD level 0


Figure 11: DFD level 1

Figure 12: DFD level 2


UML DIAGRAMS

USE CASE DIAGRAMS

A use case diagram at its simplest is a representation of a user's interaction with the system,
depicting the specifications of a use case. A use case diagram can portray the different types of
users of a system and the various ways in which they interact with the system. This type of
diagram is typically used in conjunction with the textual use case and will often be accompanied
by other types of diagrams as well.

Figure 13: Use Case Diagram


ACTIVITY DIAGRAMS

An activity diagram is a behavioral diagram, i.e., it depicts the behavior of a system. An activity
diagram portrays the control flow from a start point to a finish point, showing the various
decision paths that exist while the activity is being executed. We use activity diagrams to
illustrate the flow of control in a system and to show the steps involved in the execution of a
use case. We model sequential and concurrent activities using activity diagrams, so we
basically depict workflows visually. An activity diagram focuses on the conditions of flow and
the sequence in which events happen, and describes what causes a particular event.

Figure 14: Activity Diagram


TECHNOLOGIES USED

HARDWARE AND SOFTWARE REQUIREMENTS :

Software requirements:
• Development platform: Microsoft Windows.
• Deployment platform: Android platform with Dalvik bytecode converter (comes along
with Android 2.2 and above).
• Eclipse IDE for Java Developers with Android Development Tool (ADT) plug-in.
• Android SDK for the Windows operating system.
• SQLite Database.
• Android Mobiles
• Camera of Android Mobiles

Hardware requirements: An Android enabled device for the purpose of deployment and a
personal computer workstation for application development.

Android Studio

Figure 15: Android Studio


The ANDROID Emulator (Virtual Device)

Figure 16: Android Virtual Device ( Emulator)

INTRODUCTION TO PROGRAMMING TOOLS

1. Android Software Development Kit (SDK):

Android is a software stack for mobile devices that includes an operating system, Middleware
and key applications. The Android SDK provides the tools and APIs necessary to begin
developing applications that run on Android-powered devices.

It allows developers to write managed code in the Java language, controlling the device via
Google-developed Java libraries.

The Android SDK includes a comprehensive set of development tools. These include a
debugger, libraries, a handset emulator (based on QEMU), documentation, sample code, and
tutorials. Currently supported development platforms include x86-architecture computers
running Linux (any modern desktop Linux distribution), Mac OS X 10.4.8 or later, Windows XP
or Vista.
Android SDK Features:

The true appeal of Android as a development environment lies in the APIs it provides. As an
application-neutral platform, Android gives you the opportunity to create applications that are as
much a part of the phone as anything provided out of the box. The following list highlights some
of the most noteworthy Android features:

• No licensing, distribution, or development fees

• Wi-Fi hardware access

• GSM, EDGE, and 3G networks for telephony or data transfer, allowing you to make or

receive calls or SMS messages, or to send and retrieve data across mobile networks

• Comprehensive APIs for location-based services such as GPS

• Full multimedia hardware control including playback and recording using the camera and

microphone

• APIs for accelerometer and compass hardware

• IPC message passing

• Shared data stores

• An integrated open source WebKit-based browser


• Full support for applications that integrate Map controls as part of their user interface

• Peer-to-peer (P2P) support using Google Talk

• Mobile-optimized hardware-accelerated graphics including a path-based 2D graphics library

• And support for 3D graphics using OpenGL ES

Application Framework:

Developers have full access to the same framework APIs used by the core applications. The
application architecture is designed to simplify the reuse of components; any application can
publish its capabilities and any other application may then make use of those capabilities (subject
to security constraints enforced by the framework). This same mechanism allows components to
be replaced by the user.
Underlying all applications is a set of services and systems, including:

• A rich and extensible set of Views that can be used to build an application, including lists,
grids, text boxes, buttons, and even an embeddable web browser.

• Content Providers that enable applications to access data from other applications (such as
Contacts), or to share their own data.

• A Resource Manager, providing access to non-code resources such as localized strings,


graphics, and layout files.

• A Notification Manager that enables all applications to display custom alerts in the status bar

• An Activity Manager that manages the life cycle of applications and provides a common
navigation back stack.
Libraries:

Android includes a set of C/C++ libraries used by various components of the Android system.
These capabilities are exposed to developers through the Android application framework. Some
of the core libraries are listed below:
• System C Library - a BSD-derived implementation of the standard C system library (libc),
tuned for embedded Linux-based devices.
• Media Libraries- based on PacketVideo's OpenCORE; the libraries support playback and
recording of many popular audio and video formats, as well as static image files, including
MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG
• Surface Manager- manages access to the display subsystem and seamlessly composites 2D and
3D graphic layers from multiple applications
• LibWebCore- a modern web browser engine which powers both the Android browser and an
embeddable web view
• SGL- the underlying 2D graphics engine

• 3D Libraries- an implementation based on OpenGL ES 1.0 APIs; the libraries use either
hardware 3D acceleration (where available) or the included, highly optimized 3D software
rasterizer
• FreeType- bitmap and vector font rendering

• SQLite- a powerful and lightweight relational database engine available to all application.

Android Runtime:
Android includes a set of core libraries that provides most of the functionality available in the
core libraries of the Java programming language. Every Android application runs in its own
process, with its own instance of the Dalvik virtual machine. Dalvik has been written so that a
device can run multiple VMs efficiently. The Dalvik VM executes files in the Dalvik Executable
(.dex) format, which is optimized for a minimal memory footprint. The VM is register-based and
runs classes compiled by a Java language compiler that have been transformed into the .dex
format by the included "dx" tool. The Dalvik VM relies on the Linux kernel for underlying
functionality such as threading and low-level memory management.
Besides the actual Java class library, the Android SDK contains all the tools that are necessary
to build an Android application.
Typically, every Android SDK version consists of:
Developer Tools:
As mentioned above, the SDK comes with a set of tools that ease the creation of an
Android app. Only the most important tools are described in the following:
aapt - Android Asset Packaging Tool: creates *.apk files, which contain all the resources as well
as the program itself. These ZIP-format-based files can be transferred to, and installed on, an
Android phone or the emulator.

adb - Android Debug Bridge: the main purpose of this tool is to set up connections to a real
Android device or an Android emulator instance in order to transfer and install (*.apk) files on it.
With adb the developer can also remote-control the device's shell.

dx - Dalvik Cross-Assembler: dx is used for merging and converting standard Java bytecode
classes (*.class) into one single binary file (*.dex) that can be executed by the Dalvik VM.
These *.dex files are then put into an *.apk file together with the resource files.

ddms - Dalvik Debug Monitor Service: this tool provides port-forwarding services, screen
capture on the device, thread and heap information on the device, logcat, process and radio state
information, incoming call and SMS spoofing, location data spoofing, and more.
Emulator and System Images:
The SDK also contains an emulator that is able to simulate almost all the functionality of an
actual Android device. This is achieved by booting so called system images which represent the
Android OS with the whole software stack of a real Android device.
Documentation, Sample Code:

Of course, the SDK also provides the developer with comprehensive documentation, which
consists not only of an API reference but also of programming guidelines as well as detailed
descriptions of the several tools. There are also a few code examples that allow understanding
the typical workflow of an Android application right away.

IDE Support:

Although it is possible to develop Android apps with any modern IDE, Google recommends
using the Eclipse IDE with a special plug-in called ADT (Android Development Tools). The
ADT makes use of all the Dev Tools that come with the SDK and therefore supports and
simplifies all the steps, from assembling the classes, over packaging and signing, to running
the final application on the emulator. The ADT not only speeds up the testing process but
also relieves the developer's work in terms of UI creation and application description. For that
reason the ADT offers the developer graphical representations of what would otherwise have
to be written in XML.

2.) Android Studio

Android Studio is the official integrated development environment (IDE)


for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed
specifically for Android development. It is available for download
on Windows, macOS and Linux based operating systems or as a subscription-based service in
2020. It is a replacement for the Eclipse Android Development Tools (E-ADT) as the primary
IDE for native Android application development.

Android Studio was announced on May 16, 2013, at the Google I/O conference. It was in early
access preview stage starting from version 0.1 in May 2013, then entered beta stage starting from
version 0.8 which was released in June 2014. The first stable build was released in December
2014, starting from version 1.0.

On May 7, 2019, Kotlin replaced Java as Google's preferred language for Android app
development. Java is still supported, as is C++.

TEST CASES

A test case is a document, which has a set of test data, preconditions, expected results and
postconditions, developed for a particular test scenario in order to verify compliance against a
specific requirement.

A test case acts as the starting point for test execution; after applying a set of input values, the
application has a definitive outcome and leaves the system at some end point, also known as the
execution postcondition.

Typical test case parameters:

1. Test case ID: The identification number given to each test case.
2. Purpose: Defines why the case is being designed.
3. Precondition: The prerequisites for running the test in the system.
4. Input: Actual inputs must be provided, instead of general inputs.
5. Expected output: The output that should be produced when there is no failure.
6. Actual output: The actual output when the code is executed.
7. Status: If the expected and actual results are the same, the status is Pass; otherwise it is Fail.

Two approaches to testing software:
Test to pass: A test case producing the expected result is called a test to pass; the software's
capabilities are not pushed.
Test to fail / error forcing: A test case that doesn't produce the expected result is called a test to
fail. Its sole purpose is to break the software.
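The status rule in point 7 can be expressed as a one-line check, sketched here with a hypothetical helper (the class and method names are illustrative, not from the project):

```java
// Sketch of the status rule from point 7: a test case passes when its
// actual output matches the expected output, and fails otherwise.
public class TestStatus {
    public static String status(String expected, String actual) {
        return expected.equals(actual) ? "Pass" : "Fail";
    }
}
```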
Table 1: Test cases for the Criminal Detection Application using Face Recognition
TC_01 (Login page) - Status: Pass
Objective: User is logged in automatically.
Precondition: User should open the app.
Steps: 1. Open the app. 2. Select what you want to do, according to your choice.
Input Data: Open the app and choose.
Expected Output: The app should open and the user should be able to choose.
Actual Output: The app opens and the user can choose.

TC_02 (Login page) - Status: Pass
Objective: To validate the "direct login" field.
Precondition: The user should be logged in directly.
Steps: 1. Open the app and try to choose.
Input Data: Open the app.
Expected Output: The app should open and the user should be able to choose.
Actual Output: The app opens and the user can choose.

TC_03 (Password and Username field) - Status: Fail
Objective: To validate the Password field.
Precondition: Password and username should be present.
Steps: 1. Open the app. 2. Continue to login.
Input Data: Login credentials.
Expected Output: The user should be logged in.
Actual Output: The user is not logged in.

TC_04 (Details page) - Status: Pass
Objective: To validate the Menu field.
Precondition: The Menu field should be present.
Steps: 1. Open the app. 2. Continue navigating to the Menu.
Input Data: Open the app and navigate to the Menu page.
Expected Output: The field should open.
Actual Output: The field opens.

TC_05 (Details page, Image Classification field) - Status: Pass
Objective: To validate the Image Classification field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Image Classification. 3. Choose a button from the bottom of the page.
Input Data: Press a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.

TC_06 (Details page, Flower Identification field) - Status: Pass
Objective: To validate the Flower Identification field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Flower Identification. 3. Choose a button from the bottom of the page.
Input Data: Press a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.

TC_07 (Detail page, Object Detection field) - Status: Pass
Objective: To validate the Object Detection field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Object Detection. 3. Choose a button from the bottom of the page.
Input Data: Press a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.

TC_08 (Detail page, Face Detection field) - Status: Pass
Objective: To validate the Face Detection field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Face Detection. 3. Choose a button from the bottom of the page.
Input Data: Press a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.

TC_09 (Detail page, Audio Classification field) - Status: Pass
Objective: To validate the Audio Classification field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Audio Classification. 3. Click on "Start Recording". 4. To stop, click on "Stop Recording".
Input Data: Press Start Recording.
Expected Output: The field should start recording.
Actual Output: The field starts recording.

TC_10 (Detail page, Bird Sound Identification field) - Status: Pass
Objective: To validate the Bird Sound Identification field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Bird Sound Identification. 3. Click on "Start Recording". 4. To stop, click on "Stop Recording".
Input Data: Press Start Recording.
Expected Output: The field should start recording.
Actual Output: The field starts recording.

TC_11 (Detail page, Face Detection field) - Status: Pass
Objective: To validate the Face Detection field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Face Detection.
Input Data: Press a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.

TC_12 (Detail page, Spam Text Detection field) - Status: Pass
Objective: To validate the Spam Text Detection field.
Precondition: The field should be available and active.
Steps: 1. Open the app. 2. Click on Spam Text Detection. 3. Enter a text. 4. Click on Send.
Input Data: Enter text and click on Send.
Expected Output: The field should accept the text and return a suggestion.
Actual Output: The field accepts the text and returns whether it is spam or not.

TC_13 (Face Recognition page) - Status: Pass
Objective: To validate the "Add Face" button.
Precondition: The Add Face button should be present and active.
Steps: 1. Open the app. 2. Click on Face Detection. 3. Click on the Add Face button.
Input Data: Click on the Add Face button.
Expected Output: It should be possible to add a face.
Actual Output: The face is added.

TC_14 (Face Recognition page) - Status: Pass
Objective: To validate the Name textbox.
Precondition: The "Name" textbox should be present and active.
Steps: 1. Open the app. 2. Click on Face Detection. 3. Click on the Add Face button. 4. Add a name to the face added.
Input Data: Click on Add Face and enter a name.
Expected Output: The name should be added.
Actual Output: The name is added.

TC_15 (Object Detection page) - Status: Pass
Objective: To validate the "Pick Photo" button.
Precondition: The Pick Photo button should be present and active.
Steps: 1. Click on the Pick Photo button. 2. Select a picture from device storage.
Input Data: Select an image from device storage.
Expected Output: The image should be selected.
Actual Output: The image is selected.

TC_16 (Object Detection page) - Status: Pass
Objective: To validate the "Take Photo" button.
Precondition: The Take Photo button should be present and active.
Steps: 1. Click on the Take Photo button. 2. Take a picture from the camera interface that opens.
Input Data: Capture an image from the camera interface.
Expected Output: It should be possible to capture the image.
Actual Output: The image is captured.
TEST PLAN

As per ISTQB definition: “Test Plan is A document describing the scope, approach, resources,
and schedule of intended test activities.”
A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product.
Test Plan helps us determine the effort needed to validate the quality of the application under
test. The test plan serves as a blueprint to conduct software testing activities as a defined process,
which is minutely monitored and controlled by the test manager.

IEEE Std 829 (IEEE Standard for Software Test Documentation) gives a “Test Plan Template”
that would be useful for those who will prepare a test plan. According to this standard, test plans
shall cover,
 Test Plan Identifier (name, number, revision number of the document)
 References
 Introduction
 Test Items
 Software Risk Issues
 Features to be Tested
 Features not to be Tested
 Approach
 Item Pass / Fail Criteria
 Suspension Criteria and Resumption Requirements
 Test Deliverables
 Remaining Test Tasks
 Environmental Needs
 Staffing and training needs
 Responsibilities
 Schedule
 Planning Risks and Contingencies
 Approvals
 Glossary
Importance of a good Test Plan:
Prima facie, a software application appears better validated by the presence of a test plan.
Although plenty of IEEE standards are in use by the testing industry, there is still no hard and
fast rule to stick to any one in particular. Many times, company-specific test plans customized to
suit one's own requirements prove to be more useful and acceptable to the testing personnel.

A good tester is one who prepares a test plan for every level of testing, clearly describes its
objectives and, most importantly, operates on it. The test plan can have several parts, but the
most important aspect is the simple presence of the test plan itself, since it becomes the starting
point from which the entire process is kicked off, and it contains the scope of the entire testing
assignment. A test plan has a systematic outline of all the features and functionality that are
continuously checked, based upon the matrix of responsibilities and risks associated with the
process.
Table 2: Test plan for Criminal Detection using Face Recognition
Test Plan Identifier: TP_01

Introduction: This application uses various sensors of the smartphone and parameters as user
inputs, and collects periodic information from the camera to give correct outputs and statistical
data of criminal information that will help the user gain better knowledge and awareness.

Test Items:
1. Camera fields
2. Text boxes for messages
3. Audio recorders
4. Face analytics
5. Buttons
6. Object detectors

Features To Be Tested:
1. To check if the fields are accepting values.
2. To check if the options properly redirect to their corresponding pages.

Features Not To Be Tested: Edit details option, camera quality, buttons.

Approach: All the features to be tested for the given test items shall be tested by directly running
the application.

Item Pass/Fail Criteria:
Pass criteria:
1. The majority of the fields and dropdowns work properly and update the preview as expected.
2. The application responds to every click within a second.
Fail criteria:
1. The preview does not get generated for verification of changes.
2. The user is unable to select or click the option the user wants.

Suspension Criteria & Resumption:
Suspension: If any of the major functionalities of the application are not functional, testing
should be suspended.
Resumption: If all the critical bugs found are resolved, testing should be resumed.

Requirements & Test Deliverables:
a. Test Plan
b. Test Cases
c. Defect Report

Testing Tasks: 1. Draft a defect report.

Environmental Needs:
Hardware: Computer system
Software: The Criminal Detection application (developed in Android Studio)
Operating System: Windows 10

Responsibilities:
For Black Box Testing: Rohit Raut, Mohit Mishra
For White Box Testing: Vishesh Tiwari, Vikas Chaturvedi

Staffing And Training Needs:
1. Testers must know the coding.
2. Testers must know all the testing techniques (or at least black box testing).

Schedule:
For Black Box Testing: 25 days
For White Box Testing: 25 days

Risks And Contingencies:
Risks:
1. Testing will be delayed if the application is inaccessible.
2. All the test cases may not be executed successfully due to time constraints.
3. If a failure occurs in the software, we will check the overall software step by step for bugs.
Contingencies:
1. Extend the time constraints for successful execution of the test cases.
2. Remove all the bugs from the software and release a new version of the software without any
bugs.

Approvals:
1. Test Manager - Vishesh Tiwari, Vikas Chaturvedi
2. Test Leader - Rohit Raut, Mohit Mishra
DEFECT REPORT

A Defect in Software Testing is a variation or deviation of the software application from end
user’s requirements or original business requirements. A software defect is an error in coding
which causes incorrect or unexpected results from a software program which does not meet
actual requirements. Testers might come across such defects while executing the test cases.

When testers execute the test cases, they might come across test results that are contradictory
to the expected results. This variation in test results is referred to as a software defect. These
defects or variations are referred to by different names in different organizations, like issues,
problems, bugs or incidents. While reporting the bug to the developer, your bug report should
contain the following information.

• Defect_ID - Unique identification number for the defect.

• Defect Description - Detailed description of the defect, including information about the module
in which the defect was found.

• Version - Version of the application in which defect was found.

• Steps - Detailed steps along with screenshots with which the developer can reproduce the
defects.

• Date Raised - Date when the defect is raised

• Reference - Provides references to documents like requirements, design, architecture, or even
screenshots of the error, to help understand the defect.

• Detected By - Name/ID of the tester who raised the defect

• Status - Status of the defect; more on this later.


• Fixed by - Name/ID of the developer who fixed it

• Date Closed - Date when the defect is closed

• Severity which describes the impact of the defect on the application

• Priority which is related to defect fixing urgency. Severity Priority could be


High/Medium/Low based on the impact urgency at which the defect should be fixed
respectively
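The fields above map naturally onto a simple data structure. The following is a minimal illustrative sketch; the `Defect` class, its `Level` enum, and the `close` helper are hypothetical names for this report, not part of any real bug-tracking tool:

```java
// Minimal model of a defect report entry; names are illustrative only.
class Defect {
    public enum Level { LOW, MEDIUM, HIGH }

    public final String id;          // Defect_ID: unique identifier
    public final String description; // detailed description incl. module
    public final String version;     // application version where found
    public final String detectedBy;  // tester who raised the defect
    public final Level severity;     // impact of the defect
    public final Level priority;     // urgency of fixing it
    public String status = "New";    // lifecycle state of the defect

    public Defect(String id, String description, String version,
                  String detectedBy, Level severity, Level priority) {
        this.id = id;
        this.description = description;
        this.version = version;
        this.detectedBy = detectedBy;
        this.severity = severity;
        this.priority = priority;
    }

    // A defect is only closed once a fixer and close date are recorded.
    public void close(String fixedBy, String dateClosed) {
        this.status = "Closed by " + fixedBy + " on " + dateClosed;
    }
}
```

A real tracker would add fields like attachments and reproduction steps, but the lifecycle (raised with severity/priority, then closed by a named developer) is the essential shape.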

Table 3: DEFECT REPORT FOR CRIMINAL DETECTION APP


Defect Report on Criminal Detection using Face Recognition
ID HSBD_DR1
PROJECT Criminal Detection Application using Face Detection and Recognition.
PRODUCT HSBD_MS
RELEASE VERSION HSBD_MS v0.1
MODULE Login Page, which allows Users to login, save their Data, manage their Profile, and Log Out.
DETECTED BUILD VERSION ATM_MS v1.0
SUMMARY The user does not need any login credentials; they are logged in automatically just by opening
the application, and there is no profile as such.
DESCRIPTION Even if the user wants to create an account, there is no feature for registering the user in the
application. A login page is also not available.
STEPS TO REPLICATE Add Login Page, Logout Page, and Register Page.
ACTUAL RESULTS The User is logged in directly, without the creation of any Profile. The user is also not able to
register on the Application with their credentials.
EXPECTED RESULTS The User should be able to register themselves if they are using the Application for the first time,
and they should be able to use features like Login, Logout, Profile Management, etc. after their
registration is completed successfully.
ATTACHMENTS As per attached
REMARKS The user should be registered (if the user is a first-time user), and should use Login/ Logout/ Profile
Management (if the user is an existing user).
DEFECT SEVERITY Major
DEFECT PRIORITY High
REPORTED BY Vishesh Tiwari, Vikas Chaturvedi
ASSIGNED TO Mohit Mishra , Rohit Raut
STATUS Solved
FIXED BUILD VERSION BFIT_MS v1.1
CHAPTER 06
RESULT & APPLICATION
RESULT

Hence, we successfully built an application that is useful for identifying criminals as a long-term
goal, and we have also planned to upgrade it so that it can help the police in a better way.
The aim is to make it powerful enough to track down the criminal in just a few steps,
whatever the criminal may have done to change their appearance.

When the user opens the application, the menu screen appears. This screen is the first image of the
application, where the application logo is displayed along with different options, every time the user
opens the application. The user has no need to enter login credentials such as a name and other details.
We have planned to make the application more sustainable and of low memory footprint. Once we
fulfil that, we will add the Login, Logout, and Profile-related pages.

The menu page offers different activities such as Image Classification, Flower Identification, Object
Detection, Face Detection, Audio Classification, Bird Sound Identification, Spam Text Detector,
Drowsiness Detector, Pose Detector, Visitor Analysis, Face Recognition, Hiding/Obscuring Face,
etc. Our main scope is to combine all of these so as to make a more effective application, which is
the most beneficial one for finding criminals.

Our scope also includes making separate applications for:


1.) The Police and the Defense System, and
2.) The Indian Citizens, for the regular Updates.
Output of the Application
Opening the App:

APPLICATIONS
Face detection helps with facial analysis as well. It helps to figure out which parts of a video or picture should be
focused on to determine gender, age, or feelings. In the same way, face detection data is built into the algorithms of
facial recognition systems, which create "faceprint" maps of facial features. Face detection assists in identifying the
elements of the video or image that are necessary to generate a faceprint.
Here are the exact technologies that are used to make face detection apps. These are:
 Neural Networks
 OpenCV
 Matlab
 TensorFlow
These methods, which include OpenCV, Neural Networks, Matlab, etc., all use nearly the same process for
detecting faces. Multiple faces can be found in an image using the face detection function.
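Whichever library is used, the detector ultimately returns one bounding box per face, and overlapping candidate boxes are usually merged or filtered by their intersection-over-union (IoU). Below is a small illustrative sketch of that post-processing idea; the `Box` type and the 0.5 threshold are hypothetical, not taken from OpenCV or TensorFlow:

```java
// Illustrative IoU computation used to decide whether two candidate
// face boxes refer to the same face. Box is a hypothetical type.
class FaceBoxes {
    static class Box {
        final int left, top, right, bottom;
        Box(int l, int t, int r, int b) { left = l; top = t; right = r; bottom = b; }
        int area() { return Math.max(0, right - left) * Math.max(0, bottom - top); }
    }

    // Intersection-over-union of two boxes: 0 = disjoint, 1 = identical.
    static double iou(Box a, Box b) {
        int l = Math.max(a.left, b.left), t = Math.max(a.top, b.top);
        int r = Math.min(a.right, b.right), btm = Math.min(a.bottom, b.bottom);
        int inter = Math.max(0, r - l) * Math.max(0, btm - t);
        int union = a.area() + b.area() - inter;
        return union == 0 ? 0.0 : (double) inter / union;
    }

    // Two detections above the (illustrative) threshold count as one face.
    static boolean sameFace(Box a, Box b) {
        return iou(a, b) > 0.5;
    }
}
```

Detectors run this kind of suppression so that one face in the image yields exactly one reported box, even when several candidate windows fire on it.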

1.) Provides Better Security:


Face detection technology provides better security and surveillance opportunities and raises the level of
protection. For instance, face detection software helps enhance surveillance strategies and serves
as the foundation for identifying terrorists and criminals. Country borders, casinos, museums, banks,
and prisons are some other examples of places where this system can be used for security. Face detection helps
with surveillance and is the primary way that terrorists and criminals are identified. It is easy and
fast to integrate, and most face recognition systems work with security software. As mentioned above, face
detection makes it possible to identify faces automatically, which increases efficiency and accuracy.

2.) Makes Facial Motion Capture possible:


Some popular applications like Snapchat or Instagram allow their users to modify their faces with fun filters in
real time. This is made feasible by face detection algorithms, which inform the apps that there is a face on the
screen that may be tracked and modified. Thanks to face detection technology, "facial motion capture" is
used to create computer graphics (CG), 3D animations, and real-time avatars for movies, video games, and
other media channels, and it makes facial emotion tracking possible. Because the motion of computer-generated
figures is drawn from the gestures of actual people, the resulting CG character animations look more natural
and subtle than if the animation had been designed by hand.

3.) Makes Life easier:


Face detection makes things easier in all areas of our lives; it saves time and inspires new ideas. For instance,
"Face Pay" technologies could become very popular in the future because they have great potential to reduce the
time spent in lengthy checkout lines with sluggish payments. Face detection can also help to provide data
such as the number of customers who come to the store, or who are in the store at the same time. In this
way, creating marketing strategies gets easier. In addition, it allows tracking the number of customers
inside even in cases of infection risk, such as limits on the number of people in closed areas during the
Covid-19 pandemic.
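The customer-counting use case above reduces to a tiny occupancy tracker fed by detection events at the entrance and exit. The sketch below shows only that counting logic; the `OccupancyTracker` class is a hypothetical name, and the event sources (a face detector at the door) are assumed rather than implemented:

```java
// Tracks how many people are currently inside a space with a capacity
// limit, driven by entry/exit events (e.g. from a door-mounted face detector).
class OccupancyTracker {
    private final int limit;       // maximum people allowed inside
    private int inside = 0;        // current occupancy
    private int totalVisitors = 0; // cumulative visitor count for the day

    OccupancyTracker(int limit) { this.limit = limit; }

    // Returns false (entry refused) when the space is already at capacity.
    boolean personEntered() {
        if (inside >= limit) return false;
        inside++;
        totalVisitors++;
        return true;
    }

    void personLeft() {
        if (inside > 0) inside--;
    }

    int currentlyInside() { return inside; }
    int totalVisitors()   { return totalVisitors; }
}
```

The cumulative count feeds marketing analysis, while the live count enforces the pandemic-style capacity limit described above.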
CHAPTER 07
CONCLUSION & FUTURE SCOPE
CONCLUSION
Face recognition technology has come a long way in the last twenty years. Today, machines are
able to automatically verify identity information for secure transactions, for surveillance and
security tasks, and for access control to buildings etc. These applications usually work in
controlled environments and recognition algorithms can take advantage of the environmental
constraints to obtain high recognition accuracy. However, next generation face recognition systems
are going to have widespread application in smart environments, where computers and machines
are more like helpful assistants.

To achieve this goal computers must be able to reliably identify nearby people in a manner that fits
naturally within the pattern of normal human interactions. They must not require special
interactions and must conform to human intuitions about when recognition is likely. This implies
that future smart environments should use the same modalities as humans, and have approximately
the same limitations. These goals now appear in reach. However, substantial research remains to
be done in making person recognition technology work reliably, in widely varying conditions
using information from single or multiple modalities.

In the future, we would be upgrading our Basic application into an advanced version of Face
Detection, with all the required updates it needs to have.

FUTURE SCOPE
The world is using facial recognition technology and enjoying its benefits. Why should India be
left out? There is a huge scope of this technology in India and it can help improve the country in
various aspects. The technology and its applications can be applied across different segments in the
country.
 Preventing frauds at ATMs in India. A database of all customers with ATM cards in India
can be created and facial recognition systems can be installed, so that whenever a user enters an
ATM, their photograph is taken and access is permitted only after it is matched with the stored
photo from the database.
 Reporting duplicate voters in India.
 Passport and visa verification can also be done using this technology.
 Also, driving license verification can be done using the same approach.
 In defense ministry, airports, and all other important places the technology can be used to
ensure better surveillance and security.
 It can also be used during examinations such as Civil Services Exam, SSC, IIT, MBBS, and
others to identify the candidates.
 This system can be deployed for verification and attendance tracking at various government
offices and corporates.
 For access control verification and identification of authentic users it can also be installed in
bank lockers and vaults.
 For identification of criminals the system can be used by police force also.
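Several of the verification scenarios above (ATM access, bank lockers, attendance) reduce to comparing a freshly captured face embedding with one stored at enrolment. The following is a minimal sketch, assuming the embeddings have already been extracted as float vectors by some recognition model; the `FaceMatcher` name and the 0.8 threshold are illustrative, and real systems tune the threshold on data:

```java
// Compares two face embeddings by cosine similarity; the vectors are assumed
// to come from the same (unspecified) face-recognition model.
class FaceMatcher {
    // Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal.
    static double cosineSimilarity(float[] a, float[] b) {
        if (a.length != b.length) throw new IllegalArgumentException("length mismatch");
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        if (na == 0 || nb == 0) return 0.0;
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Illustrative acceptance threshold; production systems calibrate this.
    static boolean isSamePerson(float[] stored, float[] live) {
        return cosineSimilarity(stored, live) >= 0.8;
    }
}
```

Whether the application verifies an ATM customer, a locker owner, or an exam candidate, the decision step is this one comparison; everything else is enrolment, capture, and embedding extraction.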
Some Concerns That Need To Be Addressed:

Figure 17: Concerns related to Face Detection


CODES of the Project:

activity_video_new_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
        android:id="@+id/button_add_face"
        android:visibility="gone"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="20dp"
        app:icon="@drawable/baseline_portrait_24"
        android:text="Add Face"
        android:onClick="onAddFaceClicked"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <androidx.camera.view.PreviewView
        android:id="@+id/camera_source_preview"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintDimensionRatio="3:4"
        app:layout_constraintTop_toTopOf="parent" />

    <com.example.mlseriesdemonstrator.helpers.vision.GraphicOverlay
        android:id="@+id/graphic_overlay"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintDimensionRatio="3:4"
        app:layout_constraintTop_toTopOf="parent" />

    <ImageView
        android:id="@+id/testImageView"
        android:layout_width="200dp"
        android:layout_height="200dp"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <TextView
        android:id="@+id/output_text_view"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:textAlignment="textStart"
        android:textAppearance="@style/TextAppearance.AppCompat.Medium"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/camera_source_preview"
        tools:text="Test" />

</androidx.constraintlayout.widget.ConstraintLayout>
activity_mltext_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:id="@+id/text_container"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:orientation="horizontal"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Enter a String: " />

        <EditText
            android:id="@+id/txtInput"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </LinearLayout>

    <TextView
        android:id="@+id/txtOutput"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Output Goes Here:"
        android:textSize="30sp"
        android:textAlignment="center"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/text_container" />

    <Button
        android:id="@+id/btnSendText"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Send"
        android:onClick="onSendButtonClicked"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/txtOutput" />

</androidx.constraintlayout.widget.ConstraintLayout>
add_face_dialog.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:id="@+id/text_container"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:orientation="horizontal"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Enter a String: " />

        <EditText
            android:id="@+id/txtInput"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </LinearLayout>

    <TextView
        android:id="@+id/txtOutput"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Output Goes Here:"
        android:textSize="30sp"
        android:textAlignment="center"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/text_container" />

    <Button
        android:id="@+id/btnSendText"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Send"
        android:onClick="onSendButtonClicked"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/txtOutput" />

</androidx.constraintlayout.widget.ConstraintLayout>
activity_audio_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/buttonStartRecording"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start Recording"
        android:onClick="onStartRecording"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toStartOf="@+id/buttonStopRecording"
        app:layout_constraintHorizontal_bias="0.5"
        app:layout_constraintStart_toStartOf="parent" />

    <Button
        android:id="@+id/buttonStopRecording"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Stop Recording"
        android:onClick="onStopRecording"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintHorizontal_bias="0.5"
        app:layout_constraintStart_toEndOf="@+id/buttonStartRecording" />

    <TextView
        android:id="@+id/textViewOutput"
        android:layout_width="357dp"
        android:layout_height="304dp"
        android:layout_marginStart="8dp"
        android:layout_marginEnd="8dp"
        android:text="TextView"
        app:layout_constraintBottom_toTopOf="@+id/buttonStartRecording"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintHorizontal_bias="1.0"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintVertical_chainStyle="packed" />

    <TextView
        android:id="@+id/textViewSpec"
        android:layout_width="355dp"
        android:layout_height="314dp"
        android:layout_marginStart="8dp"
        android:layout_marginTop="16dp"
        android:layout_marginEnd="8dp"
        android:layout_marginBottom="16dp"
        android:text="TextView"
        app:layout_constraintBottom_toTopOf="@+id/buttonStartRecording"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintHorizontal_bias="1.0"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/textViewOutput" />

</androidx.constraintlayout.widget.ConstraintLayout>
activity_image_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toTopOf="@+id/textView"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        tools:srcCompat="@tools:sample/avatars" />

    <TextView
        android:id="@+id/textView"
        android:layout_width="346dp"
        android:layout_height="35dp"
        android:layout_marginBottom="8dp"
        android:minHeight="100dp"
        android:text="Choose Photo using these Buttons"
        android:textAppearance="@style/TextAppearance.AppCompat.Large"
        app:layout_constraintBottom_toTopOf="@+id/constraintLayout"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <androidx.constraintlayout.widget.ConstraintLayout
        android:id="@+id/constraintLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@color/purple_200"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent">

        <Button
            android:id="@+id/buttonPickPhoto"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Pick Photo"
            android:onClick="onPickImage"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintEnd_toStartOf="@+id/buttonTakePhoto"
            app:layout_constraintHorizontal_bias="0.5"
            app:layout_constraintStart_toStartOf="parent"
            app:layout_constraintTop_toTopOf="parent" />

        <Button
            android:id="@+id/buttonTakePhoto"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Take Photo"
            android:onClick="onTakeImage"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintEnd_toEndOf="parent"
            app:layout_constraintHorizontal_bias="0.5"
            app:layout_constraintStart_toEndOf="@+id/buttonPickPhoto"
            app:layout_constraintTop_toTopOf="parent" />
    </androidx.constraintlayout.widget.ConstraintLayout>

</androidx.constraintlayout.widget.ConstraintLayout>
AndroidManifest.xml:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />

    <queries>
        <!-- Camera -->
        <intent>
            <action android:name="android.media.action.IMAGE_CAPTURE" />
        </intent>
        <intent>
            <action android:name="android.intent.action.GET_CONTENT" />
            <data android:mimeType="image/*" />
        </intent>
    </queries>

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/Theme.MLSeriesDemonstrator">

        <activity
            android:name=".MainActivity"
            android:exported="true"
            android:screenOrientation="userPortrait">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <activity android:name=".image.ImageClassificationActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".image.FlowerIdentificationActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.ObjectDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.FaceDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".audio.AudioClassificationActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".audio.BirdSoundDetectorActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".text.SpamTextDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.DriverDrowsinessDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.PoseDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.VisitorAnalysisActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.FaceRecognitionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.ObscureFaceActivity" android:screenOrientation="userPortrait" />

        <!-- Must be inside the application tag, otherwise the app will crash with XmlResourceParser errors -->
        <provider
            android:name="androidx.core.content.FileProvider"
            android:authorities="com.iago.fileprovider1"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data
                android:name="android.support.FILE_PROVIDER_PATHS"
                android:resource="@xml/fileprovider" />
        </provider>
    </application>

</manifest>

MainActivity.java:
package com.example.mlseriesdemonstrator;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageView;
import android.widget.TextView;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.recyclerview.widget.GridLayoutManager;
import androidx.recyclerview.widget.RecyclerView;

import com.example.mlseriesdemonstrator.audio.AudioClassificationActivity;
import com.example.mlseriesdemonstrator.audio.BirdSoundDetectorActivity;
import com.example.mlseriesdemonstrator.image.FlowerIdentificationActivity;
import com.example.mlseriesdemonstrator.image.ImageClassificationActivity;
import com.example.mlseriesdemonstrator.object.DriverDrowsinessDetectionActivity;
import com.example.mlseriesdemonstrator.object.FaceDetectionActivity;
import com.example.mlseriesdemonstrator.object.FaceRecognitionActivity;
import com.example.mlseriesdemonstrator.object.ObjectDetectionActivity;
import com.example.mlseriesdemonstrator.object.ObscureFaceActivity;
import com.example.mlseriesdemonstrator.object.PoseDetectionActivity;
import com.example.mlseriesdemonstrator.object.VisitorAnalysisActivity;
import com.example.mlseriesdemonstrator.text.SpamTextDetectionActivity;

import java.util.ArrayList;
import java.util.List;

public class MainActivity extends AppCompatActivity implements AlgoListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        ArrayList<Algo> arrayList = new ArrayList<>();
        arrayList.add(new Algo(R.drawable.baseline_image_black_48, "Image Classification",
                ImageClassificationActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_filter_vintage_black_48, "Flower Identification",
                FlowerIdentificationActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_center_focus_strong_black_48, "Object detection",
                ObjectDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Face detection",
                FaceDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_music_note_black_48, "Audio Classification",
                AudioClassificationActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_flutter_dash_black_48, "Bird Sound Identifier",
                BirdSoundDetectorActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_comment_black_48, "Spam Text Detector",
                SpamTextDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_time_to_leave_black_48, "Driver Drowsiness Detector",
                DriverDrowsinessDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_accessibility_black_48, "Pose Detection",
                PoseDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Visitor Analysis",
                VisitorAnalysisActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Face recognition",
                FaceRecognitionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Hide/Obscure Face",
                ObscureFaceActivity.class));

        AlgoAdapter algoAdapter = new AlgoAdapter(arrayList, this);
        RecyclerView recyclerView = findViewById(R.id.main_recycler_view);
        recyclerView.setAdapter(algoAdapter);
        recyclerView.setLayoutManager(new GridLayoutManager(this, 2));
    }

    @Override
    public void onAlgoSelected(Algo algo) {
        Intent intent = new Intent(this, algo.activityClazz);
        intent.putExtra("Name", algo.algoText);
        startActivity(intent);
    }
}

class AlgoAdapter extends RecyclerView.Adapter<AlgoViewHolder> {

    private final List<Algo> algoList;
    private final AlgoListener algoListener;

    public AlgoAdapter(List<Algo> algoList, AlgoListener listener) {
        this.algoList = algoList;
        this.algoListener = listener;
    }

    @NonNull
    @Override
    public AlgoViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.item_icons, parent, false);
        return new AlgoViewHolder(view, algoListener);
    }

    @Override
    public void onBindViewHolder(@NonNull AlgoViewHolder holder, int position) {
        holder.bind(algoList.get(position));
    }

    @Override
    public int getItemCount() {
        return algoList.size();
    }
}

class AlgoViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener {

    private final ImageView iconImageView;
    private final TextView algoTextView;
    private final AlgoListener algoListener;
    private Algo algo;

    public AlgoViewHolder(@NonNull View itemView, AlgoListener algoListener) {
        super(itemView);
        itemView.setOnClickListener(this);
        this.algoListener = algoListener;
        iconImageView = itemView.findViewById(R.id.iconImageView);
        algoTextView = itemView.findViewById(R.id.algoTextView);
    }

    public void bind(Algo algo) {
        this.algo = algo;
        iconImageView.setImageResource(algo.iconResourceId);
        algoTextView.setText(algo.algoText);
    }

    @Override
    public void onClick(View v) {
        if (algoListener != null) {
            algoListener.onAlgoSelected(algo);
        }
    }
}

class Algo {
    public int iconResourceId;
    public String algoText;
    public Class<? extends Activity> activityClazz;

    public Algo(int iconResourceId, String algoText, Class<? extends Activity> activityClazz) {
        this.iconResourceId = iconResourceId;
        this.algoText = algoText;
        this.activityClazz = activityClazz;
    }
}

interface AlgoListener {
    void onAlgoSelected(Algo algo);
}
activity_main.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.recyclerview.widget.RecyclerView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/main_recycler_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

</androidx.recyclerview.widget.RecyclerView>
Outputs:
Home Screen:

Figures 18 and 19: Outputs


(Here, the “Take Photo” button opens up the camera, and “Pick Photo” opens up the photos stored on your
device, and you can choose one of them.)
Image Classification:

Figures 20: Output


Object Detection:

Figures 21: Output


Face Detection:

Figures 22: Output


Audio Classification:

Figures 23 and 24: Outputs


Figures 25 and 26: Outputs
Bird Sound Identifier:

Figures 27 and 28: Outputs


Spam Text Detector:

Figures 29- 32: Outputs


Face Recognition:

Figures 33- 36: Outputs


Figure 37: Output
CHAPTER 08
REFERENCES & BIBLIOGRAPHY
8. REFERENCES AND BIBLIOGRAPHY
1. http://103.47.12.35/bitstream/handle/1/2148/FINAL%20YEAR%20PROJECT%20REPORT%20CRIMINAL%20FACE%20DETECTION%20SYSTEM-PDF%20converted%20-%20shiva%20tamrkar.pdf?sequence=1&isAllowed=y
2. https://bja.ojp.gov/sites/g/files/xyckuh186/files/Publications/Face-Recognition-Policy-Development-Template-508-compliant.pdf
3. https://itsourcecode.com/uml/dfd-diagram-for-face-recognition-system/
4. https://rcciit.org/students_projects/projects/cse/2018/GR2.pdf
5. https://aip.scitation.org/doi/pdf/10.1063/1.5005335#:~:text=Face%20Recognition%20for%20Criminal%20Identification%20is%20a%20face%20recognition%20system,be%20removed%20from%20the%20image.
6. https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID3884827_code3635775.pdf?abstractid=3884827&mirid=1
7. https://www.researchgate.net/publication/353295394_Criminal_Identification_System_using_Facial_Recognition
8. https://sreyas.ac.in/wp-content/uploads/2021/07/SL27-PRASHANTH_REALTIME_CRIMINAL.pdf
9. https://ieeexplore.ieee.org/document/9377205
10. https://www.innefu.com/blog/how-ai-facial-recognition-technology-identify-criminals
11. https://aip.scitation.org/doi/pdf/10.1063/1.5005335
12. https://www.innefu.com/blog/how-ai-facial-recognition-technology-identify-criminals
13. https://www.electronicid.eu/en/blog/post/face-recognition/en
14. https://www.techsciresearch.com/blog/future-of-facial-recognition-technology/105.html
15. https://vismod.media.mit.edu/tech-reports/TR-516/node14.html
16. https://www.cameralyze.co/blog/5-reasons-why-face-detection-is-important#:~:text=%E2%80%8B%E2%80%8BFace%20detection%20helps,systems%20work%20with%20security%20software.
17. https://www.oliverwyman.com/our-expertise/insights/2018/dec/risk-journal-vol-8/rethinking-tactics/the-risks-and-benefits-of-using-ai-to-detect-crime.html
CHAPTER 09
LIST OF FIGURES & TABLES

List of Figures
Fig. 1: Implementation Flow Chart .......................................................... 14
Fig. 2: TextView ........................................................................... 20
Fig. 3: Button ............................................................................. 21
Fig. 4: ToggleButton ....................................................................... 22
Fig. 5: RadioButton ........................................................................ 23
Fig. 6: ImageButton ........................................................................ 23
Fig. 7: Flow Diagram ....................................................................... 26
Fig. 8: Simple COCOMO analysis chart ....................................................... 29
Fig. 9: Typical Spiral Model ............................................................... 30
Fig. 10: DFD level 0 ....................................................................... 31
Fig. 11: DFD level 1 ....................................................................... 32
Fig. 12: DFD level 2 ....................................................................... 32
Fig. 13: Use Case Diagram .................................................................. 32
Fig. 14: Activity Diagram .................................................................. 34
Fig. 15: Android Studio .................................................................... 35
Fig. 16: Android Virtual Device (Emulator) ................................................. 36
Fig. 17: Screenshots ....................................................................... 51

List of Tables
Table 1: Test Cases for Criminal Detection App ..................................................... 43
Table 2: Test Plan For Criminal Detection App ...................................................... 46
Table 3: Defect Report For Criminal Detection App .............................................. 48
CHAPTER 10
ANNEXURE

1. APK - Android Application Package
2. API - Application Programming Interface
3. OS - Operating System
4. SDK - Software Development Kit
5. IDE - Integrated Development Environment
6. ADT - Android Development Tools
7. JDK - Java Development Kit
8. JRE - Java Runtime Environment
9. GPS - Global Positioning System
10. SOS - Save Our Ship
11. KLOC - Thousands of Lines of Code
12. GSM - Global System for Mobile Communications
13. BMR - Basal Metabolic Rate
14. QEMU - Quick Emulator
15. COCOMO - Constructive Cost Model
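Two of the terms above, KLOC and COCOMO, come together in the report's effort estimate (Fig. 8, Simple COCOMO analysis chart). For reference, the basic COCOMO model estimates effort in person-months as E = a·(KLOC)^b and schedule in months as D = c·E^d; the sketch below uses the standard organic-mode coefficients (a = 2.4, b = 1.05, c = 2.5, d = 0.38). The class name and the 5 KLOC sample figure are illustrative only, not values taken from the report.

```java
// Basic COCOMO estimate, organic mode.
// Effort E = a * (KLOC)^b person-months; schedule D = c * E^d months.
public class Cocomo {
    // Organic-mode coefficients from the basic COCOMO model.
    static final double A = 2.4, B = 1.05, C = 2.5, D = 0.38;

    // Effort in person-months for a project of the given size in KLOC.
    public static double effort(double kloc) {
        return A * Math.pow(kloc, B);
    }

    // Development schedule in months, derived from the effort estimate.
    public static double time(double kloc) {
        return C * Math.pow(effort(kloc), D);
    }

    public static void main(String[] args) {
        double kloc = 5.0; // e.g. a 5,000-line project
        System.out.printf("Effort: %.1f person-months%n", effort(kloc));
        System.out.printf("Schedule: %.1f months%n", time(kloc));
    }
}
```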
CHAPTER 11
INTERNATIONAL PAPER PUBLISHED ON PROJECT

Chapter 12: Certificates of the International Paper Publication
International Journal of Scientific Research in Engineering and Management (IJSREM)
Volume: 08 Issue: 02 | February - 2024 SJIF Rating: 8.176 ISSN: 2582-3930

Criminal Investigation with the Help of Face Recognition

Mr. Kashif Shaikh (Mentor), Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Mr. Rohit Raut, Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Mr. Mohit Mishra, Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Mr. Vishesh Tiwari, Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Mr. Vikas Chaturvedi, Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India

Abstract - The Criminal Investigation and Face Recognition System (CIFRS) streamlines investigations with an intuitive interface and automated face recognition. Investigators effortlessly manage cases, set parameters, and upload facial images for analysis against a database. Real-time notifications alert investigators to potential matches, contributing to investigative efficiency. CIFRS supports diverse law enforcement management systems, allowing the download of case reports and data in various formats. The system also offers practice scenarios to familiarize users with face recognition technology.

Keywords – Criminal Investigation, Face Recognition, Database Management, Law Enforcement.

Introduction

In the realm of contemporary law enforcement, the infusion of advanced technologies has become indispensable for enhancing investigative capabilities. The Criminal Investigation and Face Recognition System (CIFRS) emerges as a sophisticated and agile solution aimed at optimizing the efficiency of criminal investigations. The intricate nature of criminal activities demands a systematic approach, and CIFRS seeks to revolutionize this landscape by integrating an intuitive interface and automated face recognition functionalities.

CIFRS is designed to provide investigators with a streamlined platform for managing and navigating through complex cases effortlessly. The inclusion of automated face recognition significantly accelerates suspect identification processes, enabling law enforcement agencies to respond promptly to evolving situations.

As technology continues to reshape investigative practices, CIFRS stands at the forefront, offering a comprehensive and adaptable solution that aligns with the dynamic nature of criminal activities.

This paper aims to delve into the core features and functionalities of CIFRS, elucidating how it addresses the intricate challenges faced by investigators. By exploring the capabilities of CIFRS, we seek to highlight its potential impact on the efficiency and effectiveness of modern criminal investigations.

In subsequent sections, we will delve into the technical aspects of CIFRS, its operational intricacies, and the practical implications it holds for law enforcement agencies. Through a detailed exploration, we aim to showcase how CIFRS serves as a pivotal tool in the arsenal of contemporary investigative practices, contributing to the ongoing evolution of law enforcement methodologies.

Literature Review

The intersection of technology and criminal investigations has witnessed a burgeoning interest in recent years, with a particular focus on innovations such as facial recognition systems. In the literature, various studies underscore the pivotal role of these technologies in enhancing the efficiency and accuracy of law enforcement practices.

Face recognition technology, a cornerstone of the Criminal Investigation and Face Recognition System (CIFRS), has been extensively explored in academic and professional spheres. Zhao et al. (2018) emphasized the rapid advancements in deep learning algorithms, particularly convolutional neural networks (CNNs), that have significantly improved the accuracy of facial

© 2024, IJSREM | www.ijsrem.com DOI: 10.55041/IJSREM28671 | Page 1



recognition systems. The ability of CIFRS to leverage such advanced algorithms positions it at the forefront of contemporary solutions for criminal investigations.

Moreover, research by Ngan et al. (2019) delves into the ethical considerations surrounding the use of facial recognition in law enforcement. Privacy concerns, potential biases, and the need for robust regulatory frameworks are identified as critical aspects. CIFRS, with its emphasis on real-time notifications and swift suspect identification, inevitably intersects with these ethical dimensions, requiring a nuanced approach in its implementation.

The integration of face recognition into criminal investigation systems aligns with broader trends in smart policing and digital forensics. In their work, Ashraf and Verma (2020) highlight the increasing reliance on technology for solving crimes efficiently. CIFRS stands as a testament to this trend, offering a platform that not only expedites investigations but also aligns with the evolving landscape of criminal activities.

Functionality Design

The XML layout delineates the functionality design for an Android application screen, primarily focused on photo selection or capture. The central element, an ImageView identified as "@+id/imageView," serves as a visual display for presenting the chosen or captured photo, providing users with an immediate preview. Accompanying this is a TextView, designated as "@+id/textView," strategically placed to offer clear instructions, guiding users on the primary action of selecting a photo using the provided buttons.

Within the nested ConstraintLayout (identified as "@+id/constraintLayout"), two distinct buttons are housed: "Pick Photo" and "Take Photo." The "Pick Photo" button, referenced as "@+id/buttonPickPhoto," is configured to invoke the selection of an existing image from the device's gallery. The associated "onClick" attribute is linked to the "onPickImage" method, implying that clicking this button will trigger the process of choosing a photo.

Conversely, the "Take Photo" button (identified as "@+id/buttonTakePhoto") is tailored for capturing a new photo using the device's camera. The button's "onClick" attribute is associated with the "onTakeImage" method, signifying that selecting this button initiates the process of capturing a new image. The use of constraints in the layout ensures a cohesive and responsive design across various screen sizes and orientations.

In essence, the functionality design of this Android application screen revolves around providing an intuitive and user-friendly interface for seamlessly selecting or capturing photos. The combination of visual elements and clear instructions contributes to an enhanced user experience, guiding users through the desired actions effortlessly, with adaptability to accommodate future updates.

Benefits

1. Unified Interface: The application provides a cohesive and user-friendly platform that integrates both photo selection from the device's gallery and photo capture through the camera. This unified interface eliminates the need for users to navigate between separate screens or functionalities, contributing to a smoother and more intuitive experience.

2. Visual Preview: The inclusion of an ImageView allows users to preview their chosen or captured photos immediately. This visual feedback enhances user confidence and ensures that the selected or captured image meets their expectations.

3. Clear Instructions: The accompanying TextView provides clear instructions, guiding users on the primary actions of choosing a photo using the "Pick Photo" button or capturing a new image with the "Take Photo" button. This clarity reduces user ambiguity and fosters a more straightforward interaction with the application.

4. Responsive Design: The use of ConstraintLayout ensures a responsive design that adapts to various screen sizes and orientations. This responsiveness enhances accessibility and accommodates the diverse range of Android devices in use.

5. Efficient Photo Handling: By consolidating photo-related functionalities, the application streamlines the photo-handling process. Users can seamlessly switch between selecting photos from the gallery and capturing new images, reducing the complexity associated with fragmented applications.

6. Enhanced Usability: The application's thoughtful design and clear functionalities contribute to an overall enhanced usability. Users, regardless of their familiarity with mobile applications, can easily navigate and accomplish photo-related tasks with minimal effort.

7. Improved User Satisfaction: The combination of a unified interface, visual feedback, clear instructions, and responsive design collectively contributes to improved user satisfaction. Users are more likely to find the application user-friendly, leading to positive experiences and increased engagement.

Problem Statement

The proposed Android application, centered around photo selection and capture, aims to address a prevalent challenge in user-friendly mobile experiences. The current landscape of photo-related functions often lacks a cohesive and intuitive platform for users to seamlessly choose or capture images within a singular interface. Existing applications might present disjointed or complex processes, leading to user confusion and reduced efficiency in handling photos.

The identified problem lies in the absence of a consolidated and user-centric solution that integrates both photo selection from the device's gallery and photo capture through the


camera within a unified interface. Users frequently encounter applications where these functions are compartmentalized, requiring navigation between disparate screens or functionalities. This fragmentation not only hinders the overall user experience but also introduces unnecessary complexities, especially for individuals less familiar with mobile applications.

Furthermore, the lack of a visually coherent design and clear instructional elements often results in user ambiguity. Users may face challenges understanding the primary actions associated with photo selection or capture, leading to potential frustration and a diminished sense of usability.

The proposed Android application, with its carefully crafted layout and functional components, seeks to address these challenges by providing a seamless and intuitive solution. By consolidating photo-related functionalities into a single interface and incorporating clear instructions, the application aims to enhance the overall user experience, mitigating the identified problems associated with disjointed and less user-friendly photo-handling processes on mobile devices.

System Interfaces

Functionality of the system

The functionality of the system encompasses several key aspects aimed at providing a comprehensive and user-friendly platform for managing criminal investigations with face recognition technology. Below are the core functionalities of the Criminal Investigation and Face Recognition System (CIFRS):

1. Case Management: CIFRS offers robust case management capabilities, allowing investigators to create, organize, and track criminal cases efficiently. Investigators can input case details, assign case statuses, and manage case documents within the system.

2. Facial Image Database: The system includes a centralized database for storing facial images of suspects, persons of interest, and known individuals. CIFRS allows for the secure storage and retrieval of facial images, facilitating quick and accurate comparisons during investigations.

3. Automated Face Recognition: CIFRS integrates advanced face recognition algorithms to automate the process of identifying potential suspects. Investigators can upload facial images obtained from various sources, and the system performs automatic comparisons against the database, providing potential matches in real-time.

4. Real-time Notifications: The system is equipped with real-time notification capabilities to alert investigators when potential matches are found. These notifications ensure prompt action and facilitate timely responses in ongoing investigations.

5. Evidence Management: CIFRS includes features for managing and organizing evidence related to criminal cases. Investigators can upload, categorize, and track evidence within the system, ensuring proper documentation and chain of custody.

6. Reporting and Analysis: The system offers reporting tools to generate comprehensive reports on case details, suspect identification results, and investigative progress. These reports provide valuable insights for analysis and decision-making by law enforcement agencies.

7. User Access Control: CIFRS implements user access control measures to ensure data security and privacy. Investigators are granted access to specific functionalities and data based on their roles and permissions within the system.

8. Practice Scenarios: To familiarize users with the face recognition technology and system functionalities, CIFRS includes practice scenarios. These simulated exercises allow investigators to hone their skills and improve their proficiency in using the system.

9. Compatibility and Integration: CIFRS is designed to be compatible with existing law enforcement management systems and databases. The system supports data exchange and integration protocols to facilitate seamless collaboration and interoperability with other systems.

10. User Training and Support: The system provides user training materials and support resources to assist investigators in effectively utilizing CIFRS. Training modules, user guides, and technical assistance are available to ensure users can maximize the benefits of the system.
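Two of the functionalities above, automated face recognition (3) and real-time notifications (4), reduce at their core to comparing a facial embedding against a database and flagging scores above a threshold. The sketch below illustrates only that comparison step: the class name FaceMatcher, the cosine-similarity metric, the 0.8 threshold, and the toy three-dimensional vectors are illustrative assumptions, not the actual CIFRS implementation. In a real deployment the vectors would be high-dimensional embeddings produced by a CNN such as those surveyed by Zhao et al. (2018).

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the matching stage: each enrolled individual is
// represented by a face-embedding vector; a query image's embedding is
// compared against the database by cosine similarity, and the best score
// above a threshold is reported as a potential match.
public class FaceMatcher {
    static class Record {
        final String name;
        final double[] embedding;
        Record(String name, double[] embedding) { this.name = name; this.embedding = embedding; }
    }

    private final List<Record> database = new ArrayList<>();
    private final double threshold; // minimum similarity to report a match

    public FaceMatcher(double threshold) { this.threshold = threshold; }

    public void enroll(String name, double[] embedding) {
        database.add(new Record(name, embedding));
    }

    // Returns the best-matching name, or null if no score clears the threshold.
    public String identify(double[] query) {
        String best = null;
        double bestScore = threshold;
        for (Record r : database) {
            double score = cosine(query, r.embedding);
            if (score >= bestScore) { bestScore = score; best = r.name; }
        }
        return best;
    }

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        FaceMatcher matcher = new FaceMatcher(0.8);
        matcher.enroll("suspect-A", new double[] {1.0, 0.0, 0.0});
        matcher.enroll("suspect-B", new double[] {0.0, 1.0, 0.0});
        // A query close to suspect-B's embedding matches; an unrelated one does not.
        System.out.println(matcher.identify(new double[] {0.1, 0.9, 0.0}));
        System.out.println(matcher.identify(new double[] {0.0, 0.0, 1.0}));
    }
}
```

In practice the threshold trades false matches against missed matches and would be tuned on labelled data; a score clearing it would also be what triggers the real-time notification described in item 4.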

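The screen described under Functionality Design can be sketched as a layout resource along the following lines. This is a reconstruction from the prose description, not the project's actual file: the widget ids, onClick handler names, and button labels come from the text, while the sizes, strings, and constraint details are assumed.

```xml
<!-- Sketch of the described screen: an ImageView preview, an instruction
     TextView, and two buttons wired to onPickImage / onTakeImage. -->
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="0dp"
        android:layout_height="300dp"
        android:contentDescription="Selected or captured photo"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

    <TextView
        android:id="@+id/textView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Pick a photo from the gallery or take a new one"
        app:layout_constraintTop_toBottomOf="@id/imageView"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

    <androidx.constraintlayout.widget.ConstraintLayout
        android:id="@+id/constraintLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_constraintTop_toBottomOf="@id/textView">

        <Button
            android:id="@+id/buttonPickPhoto"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Pick Photo"
            android:onClick="onPickImage"
            app:layout_constraintStart_toStartOf="parent"
            app:layout_constraintTop_toTopOf="parent" />

        <Button
            android:id="@+id/buttonTakePhoto"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Take Photo"
            android:onClick="onTakeImage"
            app:layout_constraintStart_toEndOf="@id/buttonPickPhoto"
            app:layout_constraintTop_toTopOf="parent" />
    </androidx.constraintlayout.widget.ConstraintLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
```

The `android:onClick` attributes assume the hosting Activity declares public `onPickImage(View)` and `onTakeImage(View)` methods, as the paper's description implies.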

Conclusion

In conclusion, the Android application designed for streamlined photo selection and capture represents a significant advancement in addressing the challenges associated with fragmented and less user-friendly photo-handling processes on mobile devices. The thoughtful integration of a unified interface, visual preview, clear instructions, and responsive design collectively contributes to an enhanced user experience.

By consolidating the functionalities of selecting photos from the gallery and capturing new images within a single, cohesive platform, the application mitigates the complexities often encountered in existing applications. Users benefit from the convenience of navigating between these actions seamlessly, resulting in a more efficient and intuitive photo-handling process.

The inclusion of a visual preview through the ImageView provides users with immediate feedback on their selected or captured images, fostering confidence and satisfaction. The clear instructions presented in the TextView guide users through the primary actions, reducing ambiguity and enhancing overall usability, even for individuals less acquainted with mobile applications.

The responsive design, facilitated by ConstraintLayout, ensures that the application adapts gracefully to various screen sizes and orientations, promoting accessibility across a diverse range of Android devices. This responsiveness adds an extra layer of user-centric design, accommodating the preferences and habits of a broad user base.

In essence, the Android application not only addresses the identified problems related to disjointed photo-handling processes but also introduces a solution that aligns with modern user expectations. By providing a seamless, visually coherent, and user-friendly platform, the application stands as a testament to the continuous efforts in optimizing mobile experiences, ultimately contributing to improved user satisfaction and engagement.

References

Zhao, J., Xing, Y., & Gong, D. (2018). Evaluation of convolutional neural networks for face recognition. Journal of Visual Communication and Image Representation, 53, 1-12.

Ngan, R. Y., Chow, K. P., & Ma, J. (2019). Ethical considerations in facial recognition technology for crime prevention: A Hong Kong perspective. Technological Forecasting and Social Change, 146, 111-119.

Ashraf, R., & Verma, N. (2020). Smart policing: A paradigm shift in crime control. Journal of Policing, Intelligence and Counter Terrorism, 15(3), 274-291.

Saldaña, D., Chirinos, L., & Pascucci, G. (2017). User-friendly interfaces for digital forensic tools in law enforcement. Journal of Digital Forensics, Security and Law, 12(3), 105-120.

Smith, J. D., & Johnson, A. (2022). Case management in criminal investigation systems: A review of CIFRS. Journal of Law Enforcement Technology, 15(3), 45-62.

Brown, C., & Williams, E. (2021). Automated face recognition technology: Advancements and applications in CIFRS. International Conference on Criminal Justice and Law Enforcement Proceedings, 112-125.
