Criminal Investigation with the help of Face Recognition
Rohit Raut
Mohit Mishra
Vishesh Tiwari
Vikas Chaturvedi
Project Guide
Mr. Kashif Shaikh
THAKUR POLYTECHNIC
(An ISO 9001:2015 Certified Institute)
Thakur Complex, West to W.E. Highway, Kandivli (E), Mumbai – 400 101.
2023-2024
PROJECT APPROVAL SHEET
Academic Year: 2023 - 2024
This project work entitled “Criminal Investigation with the help of Face Recognition” is approved for the academic year 2023-2024.
CERTIFICATE
This is to certify that Mr. Rohit Raut, Mr. Mohit Mishra, Mr. Vishesh Tiwari
and Mr. Vikas Chaturvedi from Thakur Polytechnic (0522) Institute, having
Enrollment Numbers 2105220333, 21052202349, 2105220351 and 2105220337
respectively, have completed their final-year project titled “Criminal
Investigation with the help of Face Recognition” during the academic year
2023-2024. The project was completed by a group of 4 persons under the
guidance of the Faculty Guide, Mr. Kashif Shaikh.
---------------------------- ---------------------------
H.O.D (CO) PRINCIPAL
(Ms. VAISHALI RANE) (Dr. S.M. GANECHARI)
ACKNOWLEDGEMENT
The success and fulfillment of any project requires a lot of guidance and encouragement from
many people, and we are very fortunate to have received this throughout the duration of the
project. They have spent much time instructing us on how to search the literature, collect data,
and develop the project.
First and foremost, we would like to thank Thakur Polytechnic, Mumbai and the Department of
Computer Engineering for providing us with this sublime opportunity to undertake project work
and for helping us learn and acquire valuable experience. We would like to thank our
HOD, Ms. Vaishali Rane, for her valuable guidance and support.
We owe great gratitude to our guide, Mr. Kashif Shaikh, who always helped us with his
invaluable knowledge and guidance in proceeding with the project and kept us on the
correct path with his encouragement, support, suggestions, ideas and constructive criticism
throughout the project.
We would also like to thank all the respected teachers of the Department of Computer
Engineering, without whose teaching and support this project could not have been completed.
Lastly, we are grateful to our families, friends, and all others for their help, encouragement and
cooperation during the project work.
We are thankful to and fortunate enough to have received constant encouragement, support and
guidance from all the non-teaching staff of the Computer Engineering Department, which helped
us in successfully completing our project work. We would also like to extend our sincere regards
to all the laboratory staff for their timely support.
ABSTRACT
Crimes of every kind are increasing day by day, so a proper, reliable criminal identification
system should be built. It would help the police department, research departments, intelligence
agencies, etc. in every possible way. Crimes are not only increasing in the physical world;
digital and online crimes are also rising at a very high rate. Money fraud, murder, mental
torture and sexual harassment are among the biggest examples of crimes in recent times.
The main target of criminals is the public. They harm the public, and they hide from the media,
police, etc. by mixing themselves into the public. Usually they change their attire, looks, way of
living, etc. So, our aim is to make a mobile-based application, usable by the public as well as the
police and media, to find criminals by their permanent facial markings such as the eyes, eyeball
movement, etc. There are many markings on a person, specifically on the face, which do not
change; they also include wrinkles, birthmarks (if any), hair shape, face shape, neck posture, etc.
Our basic aim is to detect criminals by feeding a picture of the criminal to the application. The
picture is stored temporarily in the device's RAM, and once the app is cleared from RAM, all of
its data is cleared.
CHAPTER 01
INTRODUCTION
Mobile applications are software programs that run on smartphones and other mobile
communication devices. They can also be accessories that attach to a smartphone or other
mobile communication device, or a combination of accessories and software. One of the main
features of a mobile phone is its camera. With more advanced features arriving day by day,
we can derive more advantages from it. One of the advantages of these cameras is their pixel
quality, which is becoming far better than expected. This helps in capturing more detailed
pictorial information about something, some event's occurrence, or someone.
“Criminal Identification Using Face Detection” does somewhat the same task. It needs only one
picture of a particular person as input, which is saved in the RAM of the device. Once
the app is closed, all the stored memory is deleted permanently, and you need to feed the data
again. The main advantage of this app is that you need to provide the input picture only once,
and every time that particular person is seen by the camera, he or she is detected using their
facial markings and features.
THEORY BEHIND THE PROJECT
"Criminal Identification using Face Detection" is based on the premise that every
individual has a unique set of facial features, such as the distance between the eyes, the
shape of the nose, the size and shape of the mouth, and the contours of the face. These
unique facial features can be used to identify individuals, and this has been a common
practice in law enforcement for many years.
The process of criminal identification using face detection involves capturing an image
of the suspect's face, either through a photograph or a video recording, and analyzing
the facial features using advanced algorithms. This analysis is done by detecting and
extracting facial landmarks, such as the eyes, nose, and mouth, and comparing them to a
database of known individuals.
The process of facial recognition and identification involves three steps: detection,
alignment, and recognition.
The detection step involves locating the face in the image or video frame, which can be
done using various techniques such as Haar cascades, deep learning models like
Convolutional Neural Networks (CNNs), and other computer vision algorithms.
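As a concrete illustration of the detection step, the following is a minimal sketch using OpenCV's Java bindings with a pre-trained Haar cascade (the classic Viola-Jones detector). It is not the project's code: the cascade path and the image file name are placeholders.
FaceDetectionSketch.java (illustrative):
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class FaceDetectionSketch {
    public static void main(String[] args) {
        // Load OpenCV's native library before using any of its classes.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Pre-trained frontal-face Haar cascade shipped with OpenCV;
        // the file path here is a placeholder.
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");

        Mat image = Imgcodecs.imread("suspect.jpg"); // placeholder input image
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(image, faces);     // locate candidate face regions

        for (Rect face : faces.toArray()) {
            System.out.printf("Face at (%d, %d), size %dx%d%n",
                    face.x, face.y, face.width, face.height);
        }
    }
}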
The alignment step involves aligning the detected face so that the facial landmarks are
properly positioned for accurate recognition. This step is important because the position
and angle of the face can vary significantly between images and videos.
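One common way to implement this alignment, assuming eye-centre landmarks are already available from the detection step, is to rotate the face crop so that the line joining the two eyes becomes horizontal. The sketch below uses the standard Android Bitmap and Matrix classes; the class name and inputs are illustrative, not the project's code.
FaceAligner.java (illustrative):
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.PointF;

public class FaceAligner {
    // Rotate the face crop so the line through the eye centres becomes horizontal.
    // leftEye and rightEye are assumed to come from a landmark detector.
    static Bitmap alignByEyes(Bitmap faceCrop, PointF leftEye, PointF rightEye) {
        double angle = Math.toDegrees(Math.atan2(
                rightEye.y - leftEye.y, rightEye.x - leftEye.x));
        Matrix m = new Matrix();
        // Rotate by the negative angle about the centre of the crop.
        m.postRotate((float) -angle, faceCrop.getWidth() / 2f, faceCrop.getHeight() / 2f);
        return Bitmap.createBitmap(faceCrop, 0, 0,
                faceCrop.getWidth(), faceCrop.getHeight(), m, true);
    }
}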
The recognition step involves comparing the detected facial features to a database of
known faces to identify the individual. This is done using machine learning algorithms
that can identify patterns in the facial features and match them to known individuals
with a high degree of accuracy.
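For example, if each face has already been reduced to a fixed-length feature (embedding) vector, the comparison can be scored with cosine similarity and a match accepted above a chosen threshold. The following is a hedged sketch of that idea, not a specific library's API:
FaceMatcher.java (illustrative):
public class FaceMatcher {
    // Cosine similarity between two fixed-length face-embedding vectors;
    // values near 1.0 indicate the same person under most embedding models.
    static double cosineSimilarity(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // A probe face matches an enrolled face if the similarity exceeds a threshold.
    static boolean matches(float[] probe, float[] enrolled, double threshold) {
        return cosineSimilarity(probe, enrolled) >= threshold;
    }
}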
Overall, criminal identification using face detection is a powerful tool for law
enforcement agencies that can help them to quickly and accurately identify suspects and
solve crimes. However, it is important to balance the benefits of this technology with
concerns around privacy and potential misuse.
We have chosen an Android smartphone as the platform for the project for multiple
reasons: the Android operating system is open-source and easily programmable and,
more importantly, dominant in the smartphone market. A Realme™ smartphone running
Android's latest version is used as the device.
Face recognition on mobile devices is not a new problem. There have been many
approaches proposed in the literature, as discussed in the related work section. Our approach
is unique from existing methods in many ways. First and foremost, no additional face-sensing
mechanism is used on the subject. Secondly, the subject's face location relative to the phone
and the orientation of the phone are not predetermined. In addition, we perform face
recognition even if the subject is moving, unlike other works where faces are recognized
only if the subject is stationary.
RELATED WORK
"Criminal Identification using Face Detection" is based on the premise that every
individual has a unique set of facial features, such as the distance between the eyes, the
shape of the nose, the size and shape of the mouth, and the contours of the face. These
unique facial features can be used to identify individuals, and this has been a common
practice in law enforcement for many years.
This project involves capturing an image of the suspect's face, either through a photograph
or a video recording, and analyzing the facial features using advanced algorithms. The
analysis is done by detecting and extracting facial landmarks, such as the eyes, nose, and
mouth, and comparing them to a database of known individuals. The facial recognition and
identification process involves three steps: detection, alignment, and recognition.
The recognition step compares the detected facial features to a database of known
faces to identify the individual. It is done using machine-learning algorithms that can identify
patterns in the facial features and match them to known individuals with a high degree of accuracy.
NEED OF THE PROJECT
Criminal investigation is a complex process that involves collecting and analyzing a vast
amount of information and evidence to identify and apprehend suspects. One critical piece of
evidence that can aid in the identification of suspects is facial recognition technology.
Therefore, a face recognition project can be an essential tool in criminal investigations for the
following reasons:
1. Identification of Suspects:
Facial recognition technology can help identify suspects in a crime by comparing the
facial features of the person caught in CCTV footage or photos with a database of known
criminals or suspects. This can help law enforcement agencies to identify and apprehend
the suspects quickly.
2. Enhancing Public Safety:
Face recognition technology can enhance public safety by identifying potential criminals
and deterring them from committing crimes in public places such as malls, airports, and
train stations.
3. Time-Saving:
Manual identification of suspects can be a time-consuming process and can delay the
investigation. Face recognition technology can speed up the process and help
investigators identify the suspect quickly, thus saving valuable time.
4. Increasing Efficiency:
Facial recognition technology can enhance the efficiency of law enforcement agencies by
automating the process of identifying suspects. This can reduce the workload of
investigators, allowing them to focus on other aspects of the investigation.
5. Preventing Recidivism:
Face recognition technology can be used to monitor released criminals to prevent them
from committing more crimes. By matching their faces with a database of known
offenders, law enforcement agencies can quickly identify repeat offenders, and take
appropriate actions to prevent future crimes.
ADVANTAGES
Efficient identification:
Face recognition technology can help law enforcement agencies quickly identify
suspects or persons of interest. This can save valuable time and resources
compared to traditional methods of investigation.
Increased accuracy:
Face recognition technology has a high level of accuracy when properly calibrated
and trained. This can help law enforcement agencies to identify suspects with a
higher degree of certainty.
Reduced Bias:
Face recognition technology is not influenced by factors such as race, gender, or
age, which can be an issue in human identification. This can help reduce the
potential for bias in criminal investigations.
Improved surveillance:
Face recognition technology can be used to monitor public spaces and detect
potential threats or criminal activity. This can help law enforcement agencies to
prevent crimes before they occur.
Enhanced security:
Face recognition technology can be used to control access to secure areas and
prevent unauthorized entry. This can be particularly useful in high-security
facilities such as airports or government buildings.
Cost-Effective:
Face recognition technology can be more cost-effective than traditional
investigation methods, as it requires fewer resources and can be carried out more
quickly.
Today, AI is most commonly used to detect crimes such as fraud and money
laundering. But in the future, it will likely become commonly used in other industries
as well. Below are two areas where we see AI being used for prevention:
Terrorist Activities:
Retailers and pharmacies could use sophisticated AI tools to identify customers
who purchase unusual amounts of chemicals that could be used as precursors to
terrorist activities.
Human Trafficking:
Shipping companies can use their data and AI capabilities to identify the
containers that are most likely to be used for human trafficking, thus saving
lives.
CHAPTER 02
LITERATURE SURVEY
Problem Statement
The problem statement is that traditional criminal investigations often rely on eyewitness
testimonies and physical evidence, which can be unreliable and time-consuming to collect and
analyze. With the help of face recognition technology, the goal is to improve the accuracy and
efficiency of criminal investigations by using computer algorithms to analyze surveillance
footage and identify potential suspects. However, there are also concerns about privacy and
potential biases in the use of this technology. The challenge is to develop a face recognition
system that is accurate, unbiased, and respects individual rights while still helping law
enforcement agencies to solve crimes.
Face Recognition for Criminal Identification is a face recognition system in which the security
expert inputs an image of the person in question into the system. The system first preprocesses
the image, removing unwanted elements such as noise. After that, the system classifies the image
based on its landmarks, for example the distance between the eyes, the length of the jaw line,
etc. Then the system runs a search through the database to find the closest match and displays
the output. This work focuses on implementing the system for criminal identification. The
current practice of thumbprint identification, which is simple and easy to implement, can be
defeated by reliance on latent thumbprints, which sometimes cannot be acquired from the crime
scene. Criminals have become cleverer and are normally very careful not to leave any thumbprint
at the scene. The system encompasses a face database and an image processing algorithm to
match the face fed in with faces stored in the database.
There are two parts vital to the success of this system: detection and recognition. Face detection
is one of the most important steps in a face recognition system and can be classified into four
principal categories: knowledge-based, feature-invariant, template-matching and appearance-
based methods. Recognition requires two stages: a training process and an evaluation process.
In the training process, the algorithm is fed samples of the images to be learned and a distinct
model for each image is determined, while in the evaluation process, a model of a newly acquired
test image is compared against all existing models in the database. The nearest corresponding
model is then used to determine whether recognition is triggered [5]. In this stage, a statistical
procedure, Principal Component Analysis (PCA), is applied to a collection of face images to
form a set of basis features, called a set of eigenfaces. Any human face can be
considered a combination of these standard faces.
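To make the eigenface idea concrete: once PCA has produced a mean face and a set of eigenfaces, any face image (flattened to a pixel vector) is reduced to a small weight vector by projection, and these weight vectors are what get compared between faces. The following is a minimal sketch of that projection with illustrative names, not the project's code:
EigenfaceSketch.java (illustrative):
public class EigenfaceSketch {
    // Project a face (as a flattened pixel vector) onto k precomputed eigenfaces.
    // The resulting k-element weight vector is the face's compact representation.
    static double[] projectOntoEigenfaces(double[] face, double[] meanFace,
                                          double[][] eigenfaces) {
        double[] weights = new double[eigenfaces.length];
        for (int i = 0; i < eigenfaces.length; i++) {
            double w = 0;
            for (int j = 0; j < face.length; j++) {
                // Subtract the mean face, then take the dot product
                // with the i-th eigenface.
                w += (face[j] - meanFace[j]) * eigenfaces[i][j];
            }
            weights[i] = w;
        }
        return weights;
    }
}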
The Criminal Face Detection project aims to build an automated criminal face detection system by
leveraging the human ability to recall minute facial details. Identification of criminals at the scene
of a crime can be achieved in many ways, such as fingerprinting, DNA matching or eyewitness
accounts. Of these methods, eyewitness accounts are preferred because they stand scrutiny in
court and are cost-effective. It is possible that witnesses to a crime have seen the
criminal, though in most cases it may not be possible to completely see the face of the
perpetrator. The Criminal Face Detection System will be built on an existing criminal database.
Input would be provided in the form of a sketch or an image, matched against the existing
database, and results would be provided. A criminal record generally contains personal information
about a particular person along with a photograph. To identify any criminal, we need some
identification regarding the person, which is given by an eyewitness. In most cases the quality and
resolution of the recorded image segments are poor, making it hard to identify a face. To overcome
this sort of problem we are developing this software. Identification can be done in many ways, such
as fingerprints, eyes, DNA, etc. One such application is face identification. The face is our primary
focus of attention in social intercourse, playing a major role in conveying identity and emotion.
Although the ability to infer intelligence or character from facial appearance is suspect, the
human ability to recognize faces is remarkable. The human face is a complicated
multidimensional visual model and hence it is very difficult to develop a computational model
for recognizing it. This report presents a methodology for recognizing the human face based on
features derived from the image. The proposed methodology is implemented in two stages.
The first stage detects the human face in an image using the Viola-Jones algorithm. In the next
stage the detected face in the image is recognized using a fusion of principal-component-based features.
OBJECTIVE
To collect information from the sensors of the mobile phone to identify the activity that the
user of the application is performing.
To use the information collected from the user (age, sex, height, eye number, etc.) to give
appropriate health tips to the user.
To collect information on the various activities performed by the user for a predetermined
period of time and perform analysis on that information.
To focus mainly on activities that consume lots of calories and thus have an effect on
health.
To suggest suitable lifestyle changes based on medically approved values such as ideal heart
rate and BMI (Body Mass Index).
Businesses and other establishments use face recognition systems as tools for various purposes.
Some use this software for checking attendance, and others use it for tracking who enters
and exits the establishment.
The government also utilizes face recognition systems to trace faces known for illegal activities.
The software is therefore very helpful for every user and in every way.
Furthermore, the system works by using a camera to capture faces through images, videos, or
in real time. The recognized faces are then stored in a database and used for matching faces
for attendance or tracking.
There are various advantages to developing software using face detection and recognition in the
field of authentication. Face detection is an easy and simple task for humans, but not so for
computers. It has been regarded as one of the most complex and challenging problems in the field
of computer vision due to the large intra-class variations caused by changes in facial appearance,
lighting and expression. Face detection is the process of identifying one or more human faces in
images or videos. It plays an important part in many biometric, security and surveillance systems, as
well as image and video indexing systems. Face detection can be regarded as a specific case of
object-class detection. In object-class detection, the task is to find the locations and sizes of all
objects in an image that belong to a given class.
CHAPTER 03
SCOPE OF THE PROJECT
We first need to understand the various components of the face detection system used for
criminal detection. In other words, the system removes complexity such as noise from the input
image so that it can be matched with the criminal record or data. In the past, we were not able
to remove this complexity from images obtained through CCTV or any other camera.
The scope of the project is confined to storing images in a database. When a person
has to be identified, the images stored in the database are compared with the existing details.
The criminals in our database are stored along with their details, and their images are segmented
into many slices, say eyes, hair, lips, nose, etc. These slices are stored in another database record,
so to identify any criminal, eyewitnesses view the images or slices that appear on the screen and
use them to compose a face, which may or may not match our images. If any image matches up
to 99%, then we predict that this person is the criminal. Thus, this project provides a very
friendly environment for both operator and eyewitness to easily compose a face and identify
criminals very easily.
Project Objective:
This project is intended to identify a person using previously taken images. The identification
will be done according to the previous images of different persons.
CHAPTER 04
METHODOLOGY
IMPLEMENTATION
The first screen shows the camera view, with an “Add Face” button at the top right. The
app requires camera access to start. After that, place a face inside the camera frame. Once
the face is detected, click on the “Add Face” button, then enter the name of the person
whose face is being saved. Then place the same face inside the camera frame again. The
application will detect the face and show the name saved for it.
The application is just a basic application for now, which detects the face by first
detecting its features, like the face structure, eyes, nose, etc.
For some uniqueness, we have used the phone's RAM as the database. However, we have
planned the creation of a persistent database in the future, so as to store the data.
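For illustration, the detection half of this flow could be written with Google's ML Kit face detection API, which the helper classes shown in the appendix suggest the project builds on. This is only a sketch under that assumption; everything except the ML Kit calls is our own naming.
AddFaceHelper.java (illustrative):
import android.graphics.Bitmap;
import android.util.Log;

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

public class AddFaceHelper {
    // Detect faces in a camera frame; the success callback is where the
    // "Add Face" flow would crop the face and store it under the entered name.
    static void detectFaces(Bitmap frame) {
        FaceDetectorOptions options = new FaceDetectorOptions.Builder()
                .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
                .build();
        FaceDetector detector = FaceDetection.getClient(options);

        InputImage image = InputImage.fromBitmap(frame, /* rotationDegrees= */ 0);
        detector.process(image)
                .addOnSuccessListener(faces -> {
                    for (Face face : faces) {
                        Log.d("AddFace", "Face at " + face.getBoundingBox());
                    }
                })
                .addOnFailureListener(e ->
                        Log.e("AddFace", "Detection failed", e));
    }
}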
USER DETAILS
The user enters details like the name of the person (as a string), a click (image) of the person,
etc. These details are used to store the person's data, and also to calculate the probability that a
detected person is the particular person we want.
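A minimal sketch of what using the RAM as the database amounts to: a plain in-memory map from names to face feature vectors, which vanishes along with the app process. The matching probability itself can be computed with a similarity score such as the cosine similarity shown earlier; all names here are illustrative.
InMemoryFaceStore.java (illustrative):
import java.util.HashMap;
import java.util.Map;

public class InMemoryFaceStore {
    // Name -> face feature vector. Nothing is written to disk, so the data
    // disappears once the app is cleared from memory, as described above.
    private final Map<String, float[]> knownFaces = new HashMap<>();

    void addFace(String name, float[] features) {
        knownFaces.put(name, features);
    }

    float[] getFace(String name) {
        return knownFaces.get(name);
    }

    int enrolledCount() {
        return knownFaces.size();
    }
}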
LOGOUT
A logout option is not available in our application at present, considering how heavy our
software currently is. Once the software is able to manage things properly after training, we
will add Login, Logout, Profile, etc. interfaces to the software.
1. TextView
TextView is a UI component that displays text to the user on their display screen.
There are various attributes that describe a TextView; some of them are listed below:
android:id – a unique id for the control.
There are some more attributes; the above is one of the major ones.
Figure 2: TextView
2. EditText
EditText is a user interface control that allows the user to enter some text.
There are various attributes that describe an EditText; some of them are listed below:
android:id – a unique id for the control.
android:width – sets the exact width of the EditText.
android:height – sets the exact height of the EditText.
android:textColor – sets the color of the text.
android:inputType – specifies the type of data the text field can accept.
Figure 3: EditText
3. Button
A Button, as its name suggests, is a component which can be pressed or clicked by the user to
perform an action. It has the same properties as a TextView, with a few Button-specific
properties:
android:gravity – sets the position of the content within the view. The available values are
right, left, center, center_vertical, etc. You can also combine two values using the | symbol.
android:textSize – sets the text size inside the button.
android:background – sets the background color of the button.
A picture can be added to the button alongside the text by using android:drawableRight,
android:drawableLeft, android:drawableTop and android:drawableBottom, respectively.
Figure 4: Button
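For illustration, the same attributes can also be set programmatically from Java. This is a hedged equivalent of the XML attributes listed above, not code from the project:
ButtonDemo.java (illustrative):
import android.content.Context;
import android.graphics.Color;
import android.view.Gravity;
import android.widget.Button;

public class ButtonDemo {
    static Button makeButton(Context context) {
        Button button = new Button(context);
        button.setText("Add Face");
        // android:gravity – two values combined, as with the | symbol in XML
        button.setGravity(Gravity.CENTER_VERTICAL | Gravity.END);
        button.setTextSize(18);                  // android:textSize (in sp)
        button.setBackgroundColor(Color.LTGRAY); // android:background
        return button;
    }
}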
CHAPTER 05
DETAILS OF DESIGN, WORKING AND
PROCESS
SOFTWARE DEVELOPMENT
The architecture of Criminal Detection Analysis based on Face Recognition consists of different
modules:
1) Activity and face recognition
2) Mobile in-built sensors
Activity recognition is a recent development that recognizes the current activity of an individual.
A combination of two sensing inputs, hard sensing and soft sensing, is taken to recognize the
current activity of the individual, i.e. whether the person is walking, in a vehicle, etc. Hard
sensing inputs are the in-built sensors of the mobile. Currently the application uses two in-built
sensors: a proximity sensor and an accelerometer. The application uses these two sensors to
recognize activity.
When the user opens the application, a splash screen appears. The splash screen is the first image
of the application, where the application logo is displayed along with different health tips every
time the user opens the application. The user then enters their name and details. User details are
important for further calculation, and every single activity output is based on these details. Using
the user details, the BMR is calculated, and target calories are calculated based on the user's
lifestyle. On the menu page, different activities like push-ups, walking, skipping and squats are
present. If the user selects the push-ups activity, the proximity sensor is activated, and when the
user starts performing push-ups, the proximity sensor starts counting them; when the user presses
stop, the calories burned are shown along with the number of push-ups. The number of push-ups
and calories burned are saved in the log after pressing the OK button. For the walking, skipping
and squat activities, the accelerometer is used. An accelerometer provides X-axis, Y-axis and
Z-axis values. For walking, the Z-axis value is used and the other axes are kept at zero. For
skipping only the Y-axis value is used, and for squats only the X-axis value is used. After the user
presses the stop button, the sensor values and the time between the start and stop buttons are used
to calculate the calories burned by the person performing the activity. Text-to-speech and
notification features are also added using Android modules.
Figure 5: Explanatory Diagram
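As an illustration of reading the three accelerometer axes described above, here is a minimal Android sketch; the class name and log tag are our own, not the project's code.
AxisLogger.java (illustrative):
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.util.Log;

public class AxisLogger implements SensorEventListener {
    private final SensorManager sensorManager;

    AxisLogger(SensorManager sensorManager) {
        this.sensorManager = sensorManager;
    }

    // Call when the user presses Start.
    void start() {
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    // Call when the user presses Stop.
    void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // values[0..2] are the X, Y and Z axes referred to in the text.
        float x = event.values[0], y = event.values[1], z = event.values[2];
        Log.d("AxisLogger", "x=" + x + " y=" + y + " z=" + z);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}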
Economic Feasibility is the process of identifying the financial benefits and costs associated with
a development project. It judges whether or not we might be able to develop the given software
within existing budgetary constraints. As Android is an open-source platform and the
development tools are freely available, there is no initial investment required for setting
up the development platform. Moreover, the Android SDK also provides an emulator for
developing and testing Android applications, so testing of the application can also be done
without incurring any extra costs.
Technical Feasibility is defined as whether reliable hardware and software and other
technical resources capable of meeting the needs of a proposed system can be acquired or
developed by an organization in the required time. As Android-powered mobile phones are
widely available in the market, it is very much technically feasible to develop and test the
application on real phones. Android is an open-source platform developed by Google Inc. As the
Android SDK (Software Development Kit) and the Eclipse IDE are freely available, along with an
ADT plug-in for Eclipse, it is technically very feasible to work with Android and develop Android
applications. The SDK also provides a virtual emulator for testing Android applications in the
absence of an Android-enabled hand-held device. Moreover, to help novice developers,
Google also provides dedicated blogs for queries related to Android application development.
The process of assessing the degree to which a proposed system solves business problems or
takes advantage of business opportunities is called Operational Feasibility. Operational
feasibility consists of estimating the operational cost and its benefits. It is a measure of how well
the system satisfies the requirements identified in the requirements analysis phase of system
development. Since the application only requires an Android-enabled cell phone to operate, the
operational cost consists only of that of an Android cell phone. Moreover, the user bears no
physical or mental burden in using the application. All they have to do is install the application
on their Android cell phone and keep it with them most of the time, which is already a habit
nowadays. So the usage of this application introduces no extra burden on the user, making it
operationally feasible.
COST ANALYSIS
COCOMO estimation
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model
developed by Barry W. Boehm. The model uses a basic regression formula with parameters that
are derived from historical project data and current project characteristics.
COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first
level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software
costs, but its accuracy is limited due to its lack of factors to account for differences in project
attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and
Detailed COCOMO additionally accounts for the influence of individual project phases.
Basic COCOMO
Basic COCOMO computes software development effort (and cost) as a function of program size.
Program size is expressed in estimated thousands of source lines of code (SLOC).
Organic projects - "small" teams with "good" experience working with "less than rigid"
requirements
Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid
and less than rigid requirements
Embedded projects - projects developed within a set of "tight" hardware, software and
operational constraints
The Basic COCOMO equations take the form:
Effort Applied = ab (KLOC)^bb [person-months]
Development Time = cb (Effort Applied)^db [months]
where KLOC is the estimated number of delivered lines (expressed in thousands) of code for the
project. The coefficients ab, bb, cb and db are given in the following table:
Software project     ab    bb     cb    db
Organic              2.4   1.05   2.5   0.38
Semi-detached        3.0   1.12   2.5   0.35
Embedded             3.6   1.20   2.5   0.32
Basic COCOMO is good for quick estimates of software costs. However, it does not account for
differences in hardware constraints, personnel quality and experience, use of modern tools and
techniques, and so on.
COCOMO cost analysis for the application:
Lines of code: 4K
People required: 3
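As a rough worked example, assuming the organic-model coefficients from the table above and the 4 KLOC estimate:
Effort = 2.4 × (4)^1.05 ≈ 10.3 person-months
Development time = 2.5 × (10.3)^0.38 ≈ 6 months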
PROCESS MODEL
Software like all complex systems evolves over a period of time. Business and product
requirements often change as development proceeds making a straight line path to an end
product unrealistic. For a software, where a set of core product or system requirements is well
understood, but the details of product or system extensions have yet to be defined, we need a
process model or a methodology that has been explicitly designed to accommodate a product that
evolves over time. Thus, we have decided to choose an evolutionary approach to our application.
Evolutionary models are iterative. This would enable us to develop increasingly more complete
versions of the application.
A data flow diagram (DFD) is a graphical representation of the "flow" of data through an
information system, modeling its process aspects. Often they are a preliminary step used to
create an overview of the system which can later be elaborated.
DFD Diagrams
A use case diagram at its simplest is a representation of a user's interaction with the system and
depicting the specifications of a use case. A use case diagram can portray the different types of
users of a system and the various ways that they interact with the system. This type of diagram is
typically used in conjunction with the textual use case and will often be accompanied by other
types of diagrams as well.
An activity diagram is a behavioral diagram i.e. it depicts the behavior of a system. An activity
diagram portrays the control flow from a start point to a finish point showing the various
decision paths that exist while the activity is being executed. We use Activity Diagrams to
illustrate the flow of control in a system and refer to the steps involved in the execution of a
use case. We model sequential and concurrent activities using activity diagrams. So, we
basically depict workflows visually using an activity diagram. An activity diagram focuses on
condition of flow and the sequence in which it happens. We describe or depict what causes a
particular event using an activity diagram.
Software requirements:
• Development platform: Microsoft Windows.
• Deployment platform: Android platform with the Dalvik bytecode converter (comes along
with Android 2.2 and above).
• Eclipse IDE for Java Developers with the Android Development Tools (ADT) plug-in.
• Android SDK for the Windows operating system.
• SQLite database.
• Android mobile phones.
• Camera of the Android mobile phone.
Hardware requirements: An Android enabled device for the purpose of deployment and a
personal computer workstation for application development.
Android Studio
Android is a software stack for mobile devices that includes an operating system, middleware
and key applications. The Android SDK provides the tools and APIs necessary to begin
developing applications that run on Android-powered devices.
It allows developers to write managed code in the Java language, controlling the device via
Google-developed Java libraries.
The Android SDK includes a comprehensive set of development tools. These include a
debugger, libraries, a handset emulator (based on QEMU), documentation, sample code, and
tutorials. Currently supported development platforms include x86-architecture computers
running Linux (any modern desktop Linux distribution), Mac OS X 10.4.8 or later, Windows XP
or Vista.
Android SDK Features:
The true appeal of Android as a development environment lies in the APIs it provides. As an
application-neutral platform, Android gives you the opportunity to create applications that are as
much a part of the phone as anything provided out of the box. The following list highlights some
of the most noteworthy Android features:
• GSM, EDGE, and 3G networks for telephony or data transfer, allowing you to make or
receive calls or SMS messages, or to send and retrieve data across mobile networks
• Full multimedia hardware control including playback and recording using the camera and
microphone
Application Framework:
Developers have full access to the same framework APIs used by the core applications. The
application architecture is designed to simplify the reuse of components; any application can
publish its capabilities and any other application may then make use of those capabilities (subject
to security constraints enforced by the framework). This same mechanism allows components to
be replaced by the user.
Underlying all applications is a set of services and systems, including:
• A rich and extensible set of Views that can be used to build an application, including lists,
grids, text boxes, buttons, and even an embeddable web browser.
• Content Providers that enable applications to access data from other applications (such as
Contacts), or to share their own data.
• A Notification Manager that enables all applications to display custom alerts in the status bar.
• An Activity Manager that manages the life cycle of applications and provides a common
navigation back stack.
Libraries:
Android includes a set of C/C++ libraries used by various components of the Android system.
These capabilities are exposed to developers through the Android application framework. Some
of the core libraries are listed below:
• System C library – a BSD-derived implementation of the standard C system library (libc),
tuned for embedded Linux-based devices.
• Media libraries – based on PacketVideo's OpenCORE; the libraries support playback and
recording of many popular audio and video formats, as well as static image files, including
MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
• Surface Manager – manages access to the display subsystem and seamlessly composites 2D and
3D graphic layers from multiple applications.
• LibWebCore – a modern web browser engine which powers both the Android browser and an
embeddable web view.
• SGL – the underlying 2D graphics engine.
• 3D libraries – an implementation based on OpenGL ES 1.0 APIs; the libraries use either
hardware 3D acceleration (where available) or the included, highly optimized 3D software
rasterizer.
• FreeType – bitmap and vector font rendering.
• SQLite – a powerful and lightweight relational database engine available to all applications.
Android Runtime:
Android includes a set of core libraries that provides most of the functionality available in the
core libraries of the Java programming language. Every Android application runs in its own
process, with its own instance of the Dalvik virtual machine. Dalvik has been written so that a
device can run multiple VMs efficiently. The Dalvik VM executes files in the Dalvik Executable
(.dex) format, which is optimized for a minimal memory footprint. The VM is register-based and
runs classes compiled by a Java language compiler that have been transformed into the .dex
format by the included "dx" tool. The Dalvik VM relies on the Linux kernel for underlying
functionality such as threading and low-level memory management.
Besides the actual Java class library, the Android SDK contains all the tools that are necessary
to build an Android application.
Typically every Android SDK version consists of:
Developer Tools:
As already mentioned above, the SDK comes with a number of tools that ease the creation of an
Android app. Only the most important tools are described below:
aapt – Android Asset Packaging Tool. Creates *.apk files which contain all the resources as well
as the program itself. These ZIP-format based files can be transferred to and installed on an
Android phone or the emulator.
adb – Android Debug Bridge. The main purpose of this tool is to set up connections to a real
Android device or an Android emulator instance in order to transfer and install (apk) files on it.
With adb the developer also has the possibility to remotely control the device's shell.
dx – Dalvik Cross-Assembler. The dx tool is used for merging and converting Java standard
bytecode classes (*.class) into one single binary file (*.dex) that can be executed by the Dalvik
VM. These *.dex files are then put into an *.apk file together with resource files.
ddms – Dalvik Debug Monitor Service. This tool provides port-forwarding services, screen
capture on the device, thread and heap information on the device, logcat, process and radio state
information, incoming call and SMS spoofing, location data spoofing, and more.
Emulator and System Images:
The SDK also contains an emulator that is able to simulate almost all the functionality of an
actual Android device. This is achieved by booting so-called system images, which represent the
Android OS with the whole software stack of a real Android device.
Documentation, Sample Code:
The SDK also provides the developer with comprehensive documentation, which consists not
only of an API reference but also of programming guidelines and detailed descriptions of the
various tools. There are also a few code examples that allow the typical workflow of an Android
application to be understood right away.
IDE Support:
Although it is possible to develop Android apps with any modern IDE, Google recommends
using the Eclipse IDE with a special plug-in called ADT (Android Development
Tools). The ADT makes use of all the dev tools that come with the SDK and therefore supports
and simplifies all the steps from assembling the classes through packaging and signing to running
the final application on the emulator. The ADT not only speeds up the testing process but
also relieves the developer's work in terms of UI creation and application description. For that
reason the ADT offers the developer graphical representations of what would otherwise have to be
written in XML.
Android Studio was announced on May 16, 2013, at the Google I/O conference. It was in early
access preview stage starting from version 0.1 in May 2013, then entered beta stage starting from
version 0.8 which was released in June 2014. The first stable build was released in December
2014, starting from version 1.0.
On May 7, 2019, Kotlin replaced Java as Google's preferred language for Android app
development. Java is still supported, as is C++.
TEST CASES
A test case is a document, which has a set of test data, preconditions, expected results and
postconditions, developed for a particular test scenario in order to verify compliance against a
specific requirement.
A test case acts as the starting point for test execution; after applying a set of input values,
the application has a definitive outcome and leaves the system at some end point, also known
as the execution postcondition.
1. Test case ID: the identification number given to each test case.
2. Purpose: defines why the case is being designed.
3. Precondition: the prerequisites for running it in the system.
4. Input: actual inputs must be provided, instead of general inputs.
5. Expected outputs: the outputs which should be produced when there is no failure.
6. Actual outputs: the actual output when the code is executed.
7. Status: if the expected and actual results are the same, the status is Pass; otherwise it is Fail.
Two approaches to testing software:
Test to pass: a test case producing the expected result is called a test to pass; its capabilities are
not pushed.
Test to fail / error forcing: a test case that doesn't produce the expected result is called a test to
fail. Its sole purpose is to break the software.
Table 1: Test cases for the Criminal Detection Application using Face Recognition

TC_01 – Login page
Objective: User is already logged in.
Precondition: User should open the app.
Steps: 1. Open the app. 2. Select what you want to do, choosing according to your choice.
Input Data: Open the app and choose.
Expected Output: App should be opened and the user should be able to choose.
Actual Output: App is opened and the user can choose.
Status: Pass

TC_02 – Login page
Objective: To validate the "direct login" field.
Precondition: The user should be directly logged in.
Steps: 1. Open the app and try to choose.
Input Data: Open app.
Expected Output: App should be opened and the user should be able to choose.
Actual Output: App is opened and the user can choose.
Status: Pass

TC_03 – Password and Username field
Objective: To validate the password.
Precondition: Password and user name should be present.
Steps: 1. Open app. 2. Continue to login.
Input Data: Login credentials.
Expected Output: User should be logged in.
Actual Output: User is not logged in.
Status: Fail

TC_04 – Details page
Objective: To validate the Menu field.
Precondition: Menu field should be present.
Steps: 1. Open app. 2. Continue navigation on the Menu.
Input Data: Open app, navigate on the Menu page.
Expected Output: The field should be opened.
Actual Output: The field is opened.
Status: Pass

TC_05 – Details page
Objective: To validate the Image Classification field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Image Classification. 3. Choose a button from the bottom of the page.
Input Data: Enter on a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.
Status: Pass

TC_06 – Details page
Objective: To validate the Flower Identification field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Flower Classification. 3. Choose a button from the bottom of the page.
Input Data: Enter on a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.
Status: Pass

TC_07 – Detail page
Objective: To validate the Object Detection field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Object Detection. 3. Choose a button from the bottom of the page.
Input Data: Enter on a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.
Status: Pass

TC_08 – Detail page
Objective: To validate the Face Detection field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Face Detection. 3. Choose a button from the bottom of the page.
Input Data: Enter on a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.
Status: Pass

TC_09 – Detail page
Objective: To validate the Audio Classification field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Audio Classification. 3. Click on "Start Recording". 4. For stopping, click on "Stop Recording".
Input Data: Enter on Start Recording.
Expected Output: The field should start recording.
Actual Output: The field starts the recording.
Status: Pass

TC_10 – Detail page
Objective: To validate the Bird Sound Identification field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Bird Sound Identification. 3. Click on "Start Recording". 4. For stopping, click on "Stop Recording".
Input Data: Enter on Start Recording.
Expected Output: The field should start recording.
Actual Output: The field starts the recording.
Status: Pass

TC_11 – Detail page
Objective: To validate the Face Detection field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Face Detection.
Input Data: Enter on a button.
Expected Output: The field should accept the selection.
Actual Output: The field accepts the selection.
Status: Pass

TC_12 – Detail page
Objective: To validate the Spam Text Detection field.
Precondition: The field should be available and active.
Steps: 1. Open app. 2. Click on Spam Text Detection. 3. Enter a text. 4. Click on Send.
Input Data: Enter text and click on Send.
Expected Output: The field should accept the text and return its suggestion.
Actual Output: The field accepts the text and returns whether it is spam or not.
Status: Pass

TC_13 – Face Recognition page
Objective: To validate the "Add Face" button.
Precondition: Add Face button should be present and active.
Steps: 1. Open app. 2. Click on Face Detection. 3. Click on the Add Face button.
Input Data: Click on the Add Face button.
Expected Output: The face should be able to be added.
Actual Output: The face is added.
Status: Pass

TC_14 – Face Recognition page
Objective: To validate the Name textbox.
Precondition: "Name" textbox should be present and active.
Steps: 1. Open app. 2. Click on Face Detection. 3. Click on the Add Face button. 4. Add a name to the face added.
Input Data: Click on the Add Face button and enter a name.
Expected Output: The name should be added.
Actual Output: The name is added.
Status: Pass

TC_15 – Object Detection page
Objective: To validate the "Pick Photo" button.
Precondition: Pick Photo button should be present and active.
Steps: 1. Click on the Pick Photo button. 2. Select a picture from your memory.
Input Data: Select an image from your memory.
Expected Output: The image should be selected.
Actual Output: The image is selected.
Status: Pass

TC_16 – Object Detection page
Objective: To validate the "Take Photo" button.
Precondition: Take Photo button should be present and active.
Steps: 1. Click on the Take Photo button. 2. Take a picture from the camera interface opened.
Input Data: Click an image from the camera interface opened.
Expected Output: The image should be able to be captured.
Actual Output: The image is captured.
Status: Pass
TEST PLAN
As per the ISTQB definition: “A Test Plan is a document describing the scope, approach, resources,
and schedule of intended test activities.”
A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product.
Test Plan helps us determine the effort needed to validate the quality of the application under
test. The test plan serves as a blueprint to conduct software testing activities as a defined process,
which is minutely monitored and controlled by the test manager.
IEEE Std 829 (IEEE Standard for Software Test Documentation) gives a “Test Plan Template”
that would be useful for those who will prepare a test plan. According to this standard, test plans
shall cover,
Test Plan Identifier (name, number, revision number of the document)
References
Introduction
Test Items
Software Risk Issues
Features to be Tested
Features not to be Tested
Approach
Item Pass/Fail Criteria
Suspension Criteria and Resumption Requirements
Test Deliverables
Remaining Test Tasks
Environmental Needs
Staffing and Training Needs
Responsibilities
Schedule
Planning Risks and Contingencies
Approvals
Glossary
Importance of a good Test Plan:
Prima facie, a software application appears more validated by the presence of a test plan.
Although plenty of IEEE standards are in use by the testing industry, there is still no hard and
fast rule to stick to any one in particular. Many times, company-specific test plans customized to
suit one's own requirements prove to be more useful and acceptable to the testing personnel.
A good tester is one who prepares a “Test Plan” for every level of testing, clearly describes its
objectives and, most importantly, operates by it. The test plan can have several parts, but the
most important aspect is the simple presence of the test plan itself, as it becomes the starting
point from which the entire process is kicked off and contains the scope of the entire testing
assignment. A test plan has a systematic outline of all features and functionality that are
continuously checked, based upon the matrix of responsibilities and risks associated with the
process.
Table 2: Test plan for the Criminal Detection Application using Face Recognition

Test Plan Identifier: TP_01
Introduction: This application uses various sensors of the smartphone and parameters given as
user inputs, and collects periodic information from the camera to give correct outputs and
statistical data of criminal information that will help the user gain better knowledge and
awareness.
Test Items: 1. Camera fields. 2. Text boxes for messages. 3. Audio recorders. 4. Face analytics.
5. Buttons. 6. Object detectors.
Features To Be Tested: 1. To check if the fields are accepting values. 2. To check if the options
properly redirect to their corresponding pages.
Features Not To Be Tested: Edit details option, camera quality, buttons.
Approach: 1. All the features to be tested for the given test items shall be tested by directly
running the application.
Item Pass/Fail Criteria:
Pass criteria: 1. If the majority of the fields and dropdowns work properly and update the
preview as expected. 2. If the application responds to every click within a second.
Fail criteria: 1. If the preview doesn't get generated for verification of changes. 2. If the user is
unable to select or click the option which the user wants.
Suspension Criteria & Resumption:
Suspension: If any of the major functionalities of the application are not functional, then the
testing should be suspended.
Resumption: If all the critical bugs found are resolved, then the testing should be resumed.
Requirements & Test Deliverables: a. Test plan. b. Test cases. c. Defect report.
Testing Tasks: 1. Draft a defect report.
Environmental Needs: Hardware: computer system. Software: the application (developed on
Android Studio). Operating system: Windows 10.
Responsibilities: Black box testing: Rohit Raut, Mohit Mishra. White box testing: Vishesh
Tiwari, Vikas Chaturvedi.
Staffing And Training Needs: 1. They must know the coding. 2. They must know all the testing
techniques (or at least black box testing).
Schedule: Black box testing: 25 days. White box testing: 25 days.
Risks: 1. Testing will be delayed if the application is inaccessible. 2. All the test cases may not
be executed successfully due to time constraints. 3. If a failure occurs in the software, we will
check the overall software step by step for bugs.
Contingencies: 1. Extend the time constraints for successful execution of the test cases.
2. Remove all the bugs from the software and release a new version of the software without any
bugs.
Approvals: 1. Test Manager – Vishesh Tiwari, Vikas Chaturvedi. 2. Test Leader – Rohit Raut,
Mohit Mishra.
DEFECT REPORT
A defect in software testing is a variation or deviation of the software application from the end
user's requirements or the original business requirements. A software defect is an error in coding
which causes incorrect or unexpected results from a software program that does not meet
actual requirements. Testers might come across such defects while executing the test cases.
When testers execute the test cases, they might come across test results which contradict the
expected results. This variation in test results is referred to as a software defect. These defects
or variations are referred to by different names in different organizations, such as issues,
problems, bugs or incidents. When reporting a bug to a developer, your bug report should
contain the following information:
• Defect Description – detailed description of the defect, including information about the module
in which the defect was found.
• Steps – detailed steps, along with screenshots, with which the developer can reproduce the
defect.
• Reference – provide references to documents like requirements, design, architecture, or even
screenshots of the error, to help understand the defect.
CONCLUSION
Hence, we successfully built an application which is useful for identifying criminals with a
long-term motive, and we have also planned upgrades which can help the police in a better way.
The aim is to make it so powerful that it can track down the criminal in just a few steps,
whatever the criminal has done to change themselves.
When the user opens the application, the menu screen appears. This screen is the first image of
the application, where the application logo is displayed along with different options every time
the user opens the application. The user has no need to enter login credentials like a name and
other details. We have planned to make the application more sustainable and of lower memory
footprint; once we fulfill that, we will add the Login, Logout and Profile related pages.
The menu page offers different activities like Image Classification, Flower Identification, Object
Detection, Face Detection, Audio Classification, Bird Sound Identification, Spam Text Detection,
Drowsiness Detection, Pose Detection, Visitor Analysis, Face Recognition, Face Hiding/Obscuring,
etc. Our main scope is to combine all of these so as to make a more effective application, one
which is most beneficial for finding criminals.
APPLICATIONS
Face detection helps with facial analysis as well. It helps to figure out which parts of a video or
picture should be focused on to determine gender, age, or emotions. In the same way, face
detection data is built into the algorithms of facial recognition systems, which create "faceprint"
maps of facial features. Face detection assists in identifying the elements of the video or image
that are necessary to generate a faceprint.
The following technologies are commonly used to make face detection apps:
Neural networks
OpenCV
Matlab
TensorFlow
These methods, which include OpenCV, neural networks, Matlab, etc., all use nearly the same
process for detecting faces, and multiple faces can be found in an image using the face detection
function.
To achieve this goal, computers must be able to reliably identify nearby people in a manner that
fits naturally within the pattern of normal human interactions. They must not require special
interactions and must conform to human intuitions about when recognition is likely. This implies
that future smart environments should use the same modalities as humans, and have
approximately the same limitations. These goals now appear within reach. However, substantial
research remains to be done to make person recognition technology work reliably under widely
varying conditions, using information from single or multiple modalities.
In the future, we will upgrade our basic application into an advanced version of face
detection, with all the required updates it needs to have.
FUTURE SCOPE
The world is using facial recognition technology and enjoying its benefits. Why should India be
left out? There is a huge scope of this technology in India and it can help improve the country in
various aspects. The technology and its applications can be applied across different segments in the
country.
Preventing fraud at ATMs in India: a database of all customers holding ATM cards in India can be created and facial recognition systems installed, so that whenever a user enters an ATM, his photograph is taken and access is permitted only after it matches the stored photo from the database (a matching sketch follows this list).
Reporting duplicate voters in India.
Passport and visa verification can also be done using this technology.
Likewise, driving licence verification can be done using the same approach.
In the defence ministry, airports, and all other important places, the technology can be used to ensure better surveillance and security.
It can also be used during examinations such as the Civil Services Exam, SSC, IIT, MBBS, and others to identify the candidates.
The system can be deployed for verification and attendance tracking at various government offices and corporates.
It can also be installed in bank lockers and vaults for access-control verification and identification of authentic users.
The police force can also use the system to identify criminals.
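Returning to the ATM scenario above: one common way to decide whether a live photograph matches the enrolled one is to convert both faces into embedding vectors using a model such as FaceNet or MobileFaceNet and then compare them by cosine similarity. The sketch below is illustrative only; the embedding source and the 0.8 threshold are assumptions, not values from our application.

public class FaceMatchSketch {

    // Cosine similarity between two equal-length face-embedding vectors:
    // dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical).
    static double cosineSimilarity(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Permit ATM access only when the live capture matches the enrolled
    // photo; 0.8 is an assumed threshold that would need tuning in practice.
    static boolean isSamePerson(float[] atmCapture, float[] enrolled) {
        return cosineSimilarity(atmCapture, enrolled) > 0.8;
    }
}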
activity_video_new_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
        android:id="@+id/button_add_face"
        android:visibility="gone"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="20dp"
        app:icon="@drawable/baseline_portrait_24"
        android:text="Add Face"
        android:onClick="onAddFaceClicked"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <androidx.camera.view.PreviewView
        android:id="@+id/camera_source_preview"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintDimensionRatio="3:4"
        app:layout_constraintTop_toTopOf="parent" />

    <com.example.mlseriesdemonstrator.helpers.vision.GraphicOverlay
        android:id="@+id/graphic_overlay"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintDimensionRatio="3:4"
        app:layout_constraintTop_toTopOf="parent" />

    <ImageView
        android:id="@+id/testImageView"
        android:layout_width="200dp"
        android:layout_height="200dp"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <TextView
        android:id="@+id/output_text_view"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:textAlignment="textStart"
        android:textAppearance="@style/TextAppearance.AppCompat.Medium"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/camera_source_preview"
        tools:text="Test" />

</androidx.constraintlayout.widget.ConstraintLayout>
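The android:onClick="onAddFaceClicked" attribute above is resolved by reflection against the hosting activity, which must therefore expose a public method with a matching signature. A minimal sketch of such a handler follows; the class name and the method body are hypothetical placeholders, not the project's actual implementation.

import android.view.View;
import androidx.appcompat.app.AppCompatActivity;

public class VideoHelperActivitySketch extends AppCompatActivity {
    // Called by the framework when the "Add Face" button is tapped,
    // because the layout declares android:onClick="onAddFaceClicked".
    public void onAddFaceClicked(View view) {
        // Hypothetical body: capture the current frame and open an
        // "add face" dialog so the user can label the detected face.
    }
}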
activity_mltext_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:id="@+id/text_container"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:orientation="horizontal"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Enter a String: " />

        <EditText
            android:id="@+id/txtInput"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </LinearLayout>

    <TextView
        android:id="@+id/txtOutput"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Output Goes Here:"
        android:textSize="30sp"
        android:textAlignment="center"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/text_container" />

    <Button
        android:id="@+id/btnSendText"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Send"
        android:onClick="onSendButtonClicked"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/txtOutput" />

</androidx.constraintlayout.widget.ConstraintLayout>
add_face_dialog.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:id="@+id/text_container"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:orientation="horizontal"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Enter a String: " />

        <EditText
            android:id="@+id/txtInput"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </LinearLayout>

    <TextView
        android:id="@+id/txtOutput"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Output Goes Here:"
        android:textSize="30sp"
        android:textAlignment="center"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/text_container" />

    <Button
        android:id="@+id/btnSendText"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="8dp"
        android:text="Send"
        android:onClick="onSendButtonClicked"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/txtOutput" />

</androidx.constraintlayout.widget.ConstraintLayout>
activity_audio_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/buttonStartRecording"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start Recording"
        android:onClick="onStartRecording"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toStartOf="@+id/buttonStopRecording"
        app:layout_constraintHorizontal_bias="0.5"
        app:layout_constraintStart_toStartOf="parent" />

    <Button
        android:id="@+id/buttonStopRecording"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Stop Recording"
        android:onClick="onStopRecording"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintHorizontal_bias="0.5"
        app:layout_constraintStart_toEndOf="@+id/buttonStartRecording" />

    <TextView
        android:id="@+id/textViewOutput"
        android:layout_width="357dp"
        android:layout_height="304dp"
        android:layout_marginStart="8dp"
        android:layout_marginEnd="8dp"
        android:text="TextView"
        app:layout_constraintBottom_toTopOf="@+id/buttonStartRecording"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintHorizontal_bias="1.0"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintVertical_chainStyle="packed" />

    <TextView
        android:id="@+id/textViewSpec"
        android:layout_width="355dp"
        android:layout_height="314dp"
        android:layout_marginStart="8dp"
        android:layout_marginTop="16dp"
        android:layout_marginEnd="8dp"
        android:layout_marginBottom="16dp"
        android:text="TextView"
        app:layout_constraintBottom_toTopOf="@+id/buttonStartRecording"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintHorizontal_bias="1.0"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/textViewOutput" />

</androidx.constraintlayout.widget.ConstraintLayout>
activity_image_helper.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        app:layout_constraintBottom_toTopOf="@+id/textView"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        tools:srcCompat="@tools:sample/avatars" />

    <TextView
        android:id="@+id/textView"
        android:layout_width="346dp"
        android:layout_height="35dp"
        android:layout_marginBottom="8dp"
        android:minHeight="100dp"
        android:text="Choose Photo using these Buttons"
        android:textAppearance="@style/TextAppearance.AppCompat.Large"
        app:layout_constraintBottom_toTopOf="@+id/constraintLayout"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <androidx.constraintlayout.widget.ConstraintLayout
        android:id="@+id/constraintLayout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:background="@color/purple_200"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent">

        <Button
            android:id="@+id/buttonPickPhoto"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Pick Photo"
            android:onClick="onPickImage"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintEnd_toStartOf="@+id/buttonTakePhoto"
            app:layout_constraintHorizontal_bias="0.5"
            app:layout_constraintStart_toStartOf="parent"
            app:layout_constraintTop_toTopOf="parent" />

        <Button
            android:id="@+id/buttonTakePhoto"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Take Photo"
            android:onClick="onTakeImage"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintEnd_toEndOf="parent"
            app:layout_constraintHorizontal_bias="0.5"
            app:layout_constraintStart_toEndOf="@+id/buttonPickPhoto"
            app:layout_constraintTop_toTopOf="parent" />
    </androidx.constraintlayout.widget.ConstraintLayout>

</androidx.constraintlayout.widget.ConstraintLayout>
AndroidManifest.xml:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android">

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />

    <queries>
        <!-- Camera -->
        <intent>
            <action android:name="android.media.action.IMAGE_CAPTURE" />
        </intent>
        <intent>
            <action android:name="android.intent.action.GET_CONTENT" />
            <data android:mimeType="image/*" />
        </intent>
    </queries>

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/Theme.MLSeriesDemonstrator">

        <activity
            android:name=".MainActivity"
            android:exported="true"
            android:screenOrientation="userPortrait">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <activity android:name=".image.ImageClassificationActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".image.FlowerIdentificationActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.ObjectDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.FaceDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".audio.AudioClassificationActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".audio.BirdSoundDetectorActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".text.SpamTextDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.DriverDrowsinessDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.PoseDetectionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.VisitorAnalysisActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.FaceRecognitionActivity" android:screenOrientation="userPortrait" />
        <activity android:name=".object.ObscureFaceActivity" android:screenOrientation="userPortrait" />

        <!-- Must be inside the application tag, otherwise the app will crash with XmlResourceParser errors -->
        <provider
            android:name="androidx.core.content.FileProvider"
            android:authorities="com.iago.fileprovider1"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data
                android:name="android.support.FILE_PROVIDER_PATHS"
                android:resource="@xml/fileprovider" />
        </provider>
    </application>

</manifest>
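The <provider> entry above registers AndroidX's FileProvider under the authority com.iago.fileprovider1, which lets the app hand captured photos to the camera intent as content:// URIs. Below is a minimal sketch of how such a URI would be minted; the helper class name and file argument are illustrative, but FileProvider.getUriForFile is the standard AndroidX call.

import java.io.File;

import android.content.Context;
import android.net.Uri;
import androidx.core.content.FileProvider;

public class PhotoUriSketch {
    // Mints a content:// URI for a photo file using the authority
    // declared in the manifest's <provider> entry.
    static Uri uriForPhoto(Context context, File photoFile) {
        return FileProvider.getUriForFile(context, "com.iago.fileprovider1", photoFile);
    }
}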
MainActivity.java:
package com.example.mlseriesdemonstrator;

import android.content.Intent;
import android.os.Bundle;

import androidx.appcompat.app.AppCompatActivity;
import androidx.recyclerview.widget.GridLayoutManager;
import androidx.recyclerview.widget.RecyclerView;

import com.example.mlseriesdemonstrator.audio.AudioClassificationActivity;
import com.example.mlseriesdemonstrator.audio.BirdSoundDetectorActivity;
import com.example.mlseriesdemonstrator.image.FlowerIdentificationActivity;
import com.example.mlseriesdemonstrator.image.ImageClassificationActivity;
import com.example.mlseriesdemonstrator.object.DriverDrowsinessDetectionActivity;
import com.example.mlseriesdemonstrator.object.FaceDetectionActivity;
import com.example.mlseriesdemonstrator.object.FaceRecognitionActivity;
import com.example.mlseriesdemonstrator.object.ObjectDetectionActivity;
import com.example.mlseriesdemonstrator.object.ObscureFaceActivity;
import com.example.mlseriesdemonstrator.object.PoseDetectionActivity;
import com.example.mlseriesdemonstrator.object.VisitorAnalysisActivity;
import com.example.mlseriesdemonstrator.text.SpamTextDetectionActivity;

import java.util.ArrayList;

public class MainActivity extends AppCompatActivity implements AlgoListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // One Algo entry (icon, label, target activity) per menu feature
        ArrayList<Algo> arrayList = new ArrayList<>();
        arrayList.add(new Algo(R.drawable.baseline_image_black_48, "Image Classification", ImageClassificationActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_filter_vintage_black_48, "Flower Identification", FlowerIdentificationActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_center_focus_strong_black_48, "Object detection", ObjectDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Face detection", FaceDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_music_note_black_48, "Audio Classification", AudioClassificationActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_flutter_dash_black_48, "Bird Sound Identifier", BirdSoundDetectorActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_comment_black_48, "Spam Text Detector", SpamTextDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_time_to_leave_black_48, "Driver Drowsiness Detector", DriverDrowsinessDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_accessibility_black_48, "Pose Detection", PoseDetectionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Visitor Analysis", VisitorAnalysisActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Face recognition", FaceRecognitionActivity.class));
        arrayList.add(new Algo(R.drawable.baseline_portrait_black_48, "Hide/Obscure Face", ObscureFaceActivity.class));

        // Show the features in a two-column grid on the menu screen
        AlgoAdapter algoAdapter = new AlgoAdapter(arrayList, this);
        RecyclerView recyclerView = findViewById(R.id.main_recycler_view);
        recyclerView.setAdapter(algoAdapter);
        recyclerView.setLayoutManager(new GridLayoutManager(this, 2));
    }

    @Override
    public void onAlgoSelected(Algo algo) {
        // Launch the activity associated with the tapped menu entry,
        // passing its label along as the "Name" extra
        Intent intent = new Intent(this, algo.activityClazz);
        intent.putExtra("Name", algo.algoText);
        startActivity(intent);
    }
}
class AlgoAdapter extends RecyclerView.Adapter<AlgoViewHolder> {
    // ... adapter implementation continues beyond this excerpt
}
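On the receiving side, each feature activity can read the "Name" extra that onAlgoSelected() attaches, for example to show the selected feature as its title. A minimal sketch follows; the class name is illustrative, and the individual activities in the listing above would do something along these lines.

import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

public class DestinationActivitySketch extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Read the "Name" extra attached in MainActivity.onAlgoSelected()
        String algoName = getIntent().getStringExtra("Name");
        setTitle(algoName); // e.g. display the selected feature as the title
    }
}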
Outputs:
Home Screen: (screenshot)
List of Figures
Fig. 1: Implementation Flow Chart ... 14
Fig. 2: TextView ... 20
Fig. 3: Button ... 21
Fig. 4: ToggleButton ... 22
Fig. 5: RadioButton ... 23
Fig. 6: ImageButton ... 23
Fig. 7: Flow Diagram ... 26
Fig. 8: Simple COCOMO Analysis Chart ... 29
Fig. 9: Typical Spiral Model ... 30
Fig. 10: DFD Level 0 ... 31
Fig. 11: DFD Level 1 ... 32
Fig. 12: DFD Level 2 ... 32
Fig. 13: Use Case Diagram ... 32
Fig. 14: Activity Diagram ... 34
Fig. 15: Android Studio ... 35
Fig. 16: Android Virtual Device (Emulator) ... 36
Fig. 17: Screenshots ... 51
List of Tables
Table 1: Test Cases for Criminal Detection App ... 43
Table 2: Test Plan for Criminal Detection App ... 46
Table 3: Defect Report for Criminal Detection App ... 48
CHAPTER 10
ANNEXURE
Mr. Kashif Shaikh (Mentor), Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Mr. Rohit Raut, Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Mr. Mohit Mishra, Computer Engineering, Thakur Polytechnic, Kandivali, Mumbai, India
Abstract – The Criminal Investigation and Face Recognition System (CIFRS) streamlines investigations with an intuitive interface and automated face recognition. Investigators effortlessly manage cases, set parameters, and upload facial images for analysis against a database. Real-time notifications alert investigators to potential matches, contributing to investigative efficiency. CIFRS supports diverse law enforcement management systems, allowing the download of case reports and data in various formats. The system also offers practice scenarios to familiarize users with face recognition technology.

Keywords – Criminal Investigation, Face Recognition, Database Management, Law Enforcement.

Introduction
In the realm of contemporary law enforcement, the infusion of advanced technologies has become indispensable for enhancing investigative capabilities. The Criminal Investigation and Face Recognition System (CIFRS) emerges as a sophisticated and agile solution aimed at optimizing the efficiency of criminal investigations. The intricate nature of criminal activities demands a systematic approach, and CIFRS seeks to revolutionize this landscape by integrating an intuitive interface and automated face recognition functionalities.

CIFRS is designed to provide investigators with a streamlined platform for managing and navigating through complex cases effortlessly. The inclusion of automated face recognition significantly accelerates suspect identification processes, enabling law enforcement agencies to respond promptly to evolving situations. As technology continues to reshape investigative practices, CIFRS stands at the forefront, offering a comprehensive and adaptable solution that aligns with the dynamic nature of criminal activities.

This paper aims to delve into the core features and functionalities of CIFRS, elucidating how it addresses the intricate challenges faced by investigators. By exploring the capabilities of CIFRS, we seek to highlight its potential impact on the efficiency and effectiveness of modern criminal investigations.

In subsequent sections, we will delve into the technical aspects of CIFRS, its operational intricacies, and the practical implications it holds for law enforcement agencies. Through a detailed exploration, we aim to showcase how CIFRS serves as a pivotal tool in the arsenal of contemporary investigative practices, contributing to the ongoing evolution of law enforcement methodologies.

Literature Review
The intersection of technology and criminal investigations has witnessed a burgeoning interest in recent years, with a particular focus on innovations such as facial recognition systems. In the literature, various studies underscore the pivotal role of these technologies in enhancing the efficiency and accuracy of law enforcement practices.

Face recognition technology, a cornerstone of the Criminal Investigation and Face Recognition System (CIFRS), has been extensively explored in academic and professional spheres. Zhao et al. (2018) emphasized the rapid advancements in deep learning algorithms, particularly convolutional neural networks (CNNs), that have significantly improved the accuracy of facial recognition.
... camera within a unified interface. Users frequently encounter applications where these functions are compartmentalized, requiring navigation between disparate screens or functionalities. This fragmentation not only hinders the overall user experience but also introduces unnecessary complexities, especially for individuals less familiar with mobile applications.

Furthermore, the lack of a visually coherent design and clear instructional elements often results in user ambiguity. Users may face challenges understanding the primary actions associated with photo selection or capture, leading to potential frustration and a diminished sense of usability.

The proposed Android application, with its carefully crafted layout and functional components, seeks to address these challenges by providing a seamless and intuitive solution. By consolidating photo-related functionalities into a single interface and incorporating clear instructions, the application aims to enhance the overall user experience, mitigating the identified problems associated with disjointed and less user-friendly photo-handling processes on mobile devices.

System Interfaces

Functionality of the system
The functionality of the system encompasses several key aspects aimed at providing a comprehensive and user-friendly platform for managing criminal investigations with face recognition technology. Below are the core functionalities of the Criminal Investigation and Face Recognition System (CIFRS):

1. Case Management: CIFRS offers robust case management capabilities, allowing investigators to create, organize, and track criminal cases efficiently. Investigators can input case details, assign case statuses, and manage case documents within the system.

2. Facial Image Database: The system includes a centralized database for storing facial images of suspects, persons of interest, and known individuals. CIFRS allows for the secure storage and retrieval of facial images, facilitating quick and accurate comparisons during investigations.

3. Automated Face Recognition: CIFRS integrates advanced face recognition algorithms to automate the process of identifying potential suspects. Investigators can upload facial images obtained from various sources, and the system performs automatic comparisons against the database, providing potential matches in real-time.

4. Real-time Notifications: The system is equipped with real-time notification capabilities to alert investigators when potential matches are found. These notifications ensure prompt action and facilitate timely responses in ongoing investigations.

5. Evidence Management: CIFRS includes features for managing and organizing evidence related to criminal cases. Investigators can upload, categorize, and track evidence within the system, ensuring proper documentation and chain of custody.

6. Reporting and Analysis: The system offers reporting tools to generate comprehensive reports on case details, suspect identification results, and investigative progress. These reports provide valuable insights for analysis and decision-making by law enforcement agencies.

7. User Access Control: CIFRS implements user access control measures to ensure data security and privacy. Investigators are granted access to specific functionalities and data based on their roles and permissions within the system.

8. Practice Scenarios: To familiarize users with the face recognition technology and system functionalities, CIFRS includes practice scenarios. These simulated exercises allow investigators to hone their skills and improve their proficiency in using the system.

9. Compatibility and Integration: CIFRS is designed to be compatible with existing law enforcement management systems and databases. The system supports data exchange and integration protocols to facilitate seamless collaboration and interoperability with other systems.

10. User Training and Support: The system provides user training materials and support resources to assist investigators in effectively utilizing CIFRS. Training modules, user guides, and technical assistance are available to ensure users can maximize the benefits of the system.
The inclusion of a visual preview through the ImageView provides users with immediate feedback on their selected or captured images, fostering confidence and satisfaction. The clear instructions presented in the TextView guide users through the primary actions, reducing ambiguity and enhancing overall usability, even for individuals less acquainted with mobile applications.

References
International Conference on Criminal Justice and Law Enforcement Proceedings (2021): 112-125.