
A

MINI PROJECT REPORT ON

RESTAURANT MENU ORDERING SYSTEM


Submitted in partial fulfillment for the award of the degree of

BACHELOR OF TECHNOLOGY
In

ELECTRONICS AND COMMUNICATION ENGINEERING


By

R.RAJA MOULI : 22Q95A0410


K.POOJA : 22Q95A0405
SANTHOSH KUMAR : 22Q95A0402
S.SUNIL KUMAR : 21Q91A0468

Under the guidance of


Mr. P. Sampath Kumar
Head of the Department, ECE

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
MALLA REDDY COLLEGE OF ENGINEERING
(Approved by AICTE, Permanently Affiliated to JNTU-Hyderabad)
Accredited by NBA & NAAC, Recognized under Section 2(f) & 12(B) of UGC, New Delhi
An ISO 9001:2015 Certified Institution
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad

2023 – 2024
MALLA REDDY COLLEGE OF ENGINEERING
(Approved by AICTE-Permanently Affiliated to JNTU-Hyderabad)
Accredited by NBA & NAAC, Recognized under Section 2(f) & 12(B) of UGC, New Delhi
An ISO 9001:2015 Certified Institution
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad- 500100

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE
This is to certify that the Mini Project report on “RESTAURANT MENU
ORDERING SYSTEM” has been successfully completed by the following students of the
Department of Electronics & Communication Engineering of our college, in partial
fulfillment of the requirement for the award of the B.Tech degree in the year 2024-
2025. The results embodied in this report have not been submitted to any other
University for the award of any diploma or degree.
R.RAJA MOULI : 22Q95A0410

K.POOJA : 22Q95A0405

SANTHOSH KUMAR : 22Q95A0402

S.SUNIL KUMAR : 21Q91A0468

Submitted for the viva voce examination held on: ______________________________

INTERNAL GUIDE HOD PRINCIPAL


Mr. P. Sampath Kumar          Dr. P. Sampath Kumar          Dr. M. Ashok
Assistant Professor

Internal Examiner External Examiner


DECLARATION

We, the final year students, hereby declare that the mini project report
entitled “RESTAURANT MENU ORDERING SYSTEM” has been done by us under the
guidance of Mr. P. Sampath Kumar, Assistant Professor, Department of ECE, and is
submitted in partial fulfillment of the requirements for the award of the degree of
BACHELOR OF TECHNOLOGY in ELECTRONICS AND COMMUNICATION
ENGINEERING. The results embodied in this project report have not been submitted
to any other University or institute for the award of any degree or diploma.

Signature of the candidate:

R.RAJA MOULI
(22Q95A0410)
K.POOJA
(22Q95A0405)
SANTHOSH KUMAR
(22Q95A0402)
S.SUNIL KUMAR
(21Q91A0468)

DATE:
PLACE: Maisammaguda
ACKNOWLEDGEMENT
First and foremost, we would like to express our immense gratitude towards our
institution, Malla Reddy College of Engineering, which helped us attain profound
technical skills in the field of Electronics & Communication Engineering, thereby
fulfilling our most cherished goal.

We are pleased to thank Sri Ch. Malla Reddy, our Founder and Chairman, MRGI, and Sri
Ch. Mahender Reddy, Secretary, MRGI, for providing this opportunity and support
throughout the course.

It gives us immense pleasure to acknowledge the perennial inspiration of Dr. M.
Ashok, our beloved Principal, for his kind co-operation and encouragement in bringing
out this task.

We would like to thank Dr. T. V. Reddy, our Vice Principal, and Dr. P. Sampath Kumar,
HOD, ECE Department, for their inspiration, adroit guidance, and constructive criticism
towards the successful completion of our degree.

We would like to thank Mr. P. Sampath Kumar, Assistant Professor, our internal guide,
for his valuable suggestions and guidance during the execution and completion of this
project.

Finally, we avail this opportunity to express our deep gratitude to all the staff who have
contributed their valuable assistance and support in making our project a success.

R.RAJA MOULI
(22Q95A0410)
K.POOJA
(22Q95A0405)
SANTHOSH KUMAR
(22Q95A0402)
S.SUNIL KUMAR
(21Q91A0468)
ABSTRACT

The purpose of this project is to develop a touch-based food ordering system that
transforms the traditional ordering process. In most restaurants, the menu is provided
as a printed card: the customer selects the menu items and then waits for a waiter to
come and take the order, which is a slow process. We therefore design a touch-screen-
based food ordering system that displays food items on the customer's own device,
such as a phone or tablet, so that orders can be entered directly by touch. The system
automatically handles the display, receiving, sending, storage, and analysis of data. It
provides many advantages, such as user-friendliness, time savings, portability,
reduced human error, flexibility, and customer feedback. The traditional approach
requires a large amount of manpower to handle customer reservations, food ordering,
inquiries about food, serving orders at the table, and reminding customers about
dishes. An “Intelligent Automated Restaurant” is about getting all of the different
touch-points working together: connected, sharing information, speeding up processes,
and personalizing experiences. Here we use a keypad module to transmit data to the
keypad reader. The e-menu is an interactive ordering system with a new digital menu
for customers.
TABLE OF CONTENTS

CERTIFICATE i
DECLARATION ii
ACKNOWLEDGEMENT iii
ABSTRACT iv
TABLE OF CONTENTS v
LIST OF FIGURES vi
LIST OF SCREENSHOTS vii
LIST OF ABBREVIATIONS viii
CHAPTER 1: INTRODUCTION

1.1 Introduction 1

1.2 Objective 2

1.3 Methodology Adopted 2

CHAPTER 2: LITERATURE SURVEY


2.1 Literature survey 4

CHAPTER 3: SYSTEM ANALYSIS


3.1 Existing System 6

3.2 Drawbacks 7

3.3 Proposed system 7

3.4 Advantages 8

3.5 System Requirements 9

3.6 Feasibility study 10

CHAPTER 4: SYSTEM DESIGN


4.1 System Architecture 12

4.2 Modules 12

4.3 UML Diagrams 13

CHAPTER 5: SYSTEM IMPLEMENTATION


5.1 What is Python 19

5.2 History of Python 23


5.3 What is Machine Learning 23

5.4 Python development steps 31

5.5 Modules used in Python 32

CHAPTER 6: TESTING
6.1 Testing 43

6.2 Testing Methodologies 45

CHAPTER 7: RESULTS
7.1 Screenshots 47

CHAPTER 8: CONCLUSION
8.1 Conclusion 55

CHAPTER 9: FUTURE ENHANCEMENTS


9.1 Future enhancements 56

REFERENCES 57
LIST OF FIGURES

Figure No.    Name of the Figure    Page No.
4.1.1 System Architecture 12
4.3.1 Use Case Diagram 14
4.3.2 Class Diagram 15
4.3.3 Sequence Diagram 16
4.3.4 Flow chart diagram 17
4.3.5 Data flow diagram 18
LIST OF SCREENSHOTS

Figure No.    Name of the Screenshot    Page No.

5.5.1 Python 36
7.1.1 The Pc windows SSD(C) Fake Profile Identification 47
7.1.2 Command prompt 47
7.1.3 New Tab in Browser 48
7.1.4 Fake Profile Web Page 48
7.1.5 User Profile Details 49
7.1.6 Predict Profile Identification Status Type 49
7.1.7 Login Service Provider 50
7.1.8 Profile Datasets Trained And Tested Results 50
7.1.9 User Profile Trained and Tested Accuracy Bar Chart 51
7.1.10 View All Profile Identify Prediction 51
7.1.11 Find and View Profile Identity Prediction Ratio 52
7.1.12 View All Profile Status Prediction Type 52
7.1.13 Find Profile Status Prediction Type Ratio 53
7.1.14 Pie Chart of Fake Profile and Genuine Profile 53
7.1.15 Line Graph of Fake Profile and Genuine Profile 54
LIST OF ABBREVIATIONS

S. No Short Form Full Form


1. OSN Online Social Network
2. SN Social Network
3. ML Machine Learning
4. NLP Natural Language Processing
5. SVM Support Vector Machine
6. UML Unified Modeling Language
7. DFD Data flow Diagram
8. DFP Detecting Fake Profiles
9. IDLE Integrated Development and Learning Environment
10. PERL Practical Extraction and Reporting Language
11. PHP PHP Hypertext Preprocessor
CHAPTER – 1
INTRODUCTION
1.1 INTRODUCTION

Facial recognition technology has become an integral part of modern security
systems, access control, and user authentication processes. It offers a seamless and
non-intrusive method for identifying individuals, making it highly desirable in various
applications, from surveillance and law enforcement to personal device security. The
ability to recognize faces from a gallery of images involves sophisticated techniques
in image processing and pattern recognition. Leveraging these technologies, our
project aims to develop a robust facial recognition system capable of handling diverse
conditions and ensuring high accuracy.

Facial recognition technology enhances security measures by
providing an additional layer of verification that is difficult to forge. It is widely used
in airports, banks, and public places to prevent unauthorized access and detect
potential threats. Additionally, in the realm of digital services, facial recognition
facilitates secure authentication for users accessing sensitive information or financial
transactions. The increasing reliance on biometrics for identity verification
underscores the need for reliable and efficient facial recognition systems. Recent
advancements in machine learning and deep learning have significantly improved the
accuracy and efficiency of facial recognition systems. Convolutional Neural Networks
(CNNs) and other neural network architectures have enabled the extraction of
complex features from facial images, allowing systems to distinguish subtle
differences between individuals. This technology is not only used for security
purposes but also in enhancing customer experiences, such as personalized marketing,
automated tagging in photo management software, and more.

Our project focuses on developing an advanced facial recognition system that can
operate effectively under various conditions, such as different lighting, angles, and
expressions. By utilizing state-of-the-art algorithms and incorporating extensive data
preprocessing and augmentation techniques, we aim to achieve high recognition
accuracy and robustness. This system has the potential to be deployed across multiple
domains, contributing to the broader adoption of biometric technologies in everyday
life and enhancing the overall security and convenience of various systems and
services.
1.2 OBJECTIVES

The primary objective of this project is to design and implement a highly accurate and
efficient facial recognition system using CNNs. The system aims to achieve several
key goals: firstly, to develop a CNN-based model that can extract distinctive features
from facial images, ensuring accurate identification even under challenging
conditions. Secondly, the project focuses on implementing advanced preprocessing
techniques to handle variations in image quality, including differences in lighting,
facial expressions, and occlusions. Additionally, the system is designed to operate in
real-time or near-real-time, making it suitable for practical applications in areas like
security systems and access control. Finally, scalability is a crucial aspect, ensuring
the system can manage large image databases without compromising performance.

1.3 METHODOLOGY ADOPTED

The methodology for developing the facial recognition system involves several
critical stages. The process begins with data acquisition and preprocessing, where a
diverse set of facial images is collected, including various lighting, pose, and
expression conditions. Preprocessing steps include face detection to isolate faces
within images, alignment using key point detection techniques, and normalization to
standardize the input images. The core of the system is a custom-designed CNN
architecture optimized for facial feature extraction. This includes configuring
convolutional layers, pooling layers, and activation functions to effectively capture
and represent unique facial characteristics.

Training the model involves using backpropagation and optimization algorithms to
minimize a loss function, typically categorical cross-entropy. To further enhance
feature discrimination, a hybrid loss function may be employed, reducing intra-class
variance. Hyperparameter tuning is conducted to optimize the model's performance,
with validation sets used to monitor and prevent overfitting. The trained model is then
evaluated on a separate test set to assess its accuracy, precision, recall, and other
relevant metrics. Real-world testing is also performed to ensure the model's practical
applicability. Finally, the system is deployed and integrated into the target application,
such as a security or access control system, with provisions for continuous learning
and updates to adapt to new data and improve over time. This comprehensive
approach ensures the creation of a robust, reliable, and scalable facial recognition
system capable of addressing a wide range of practical challenges.

Preprocessing is the next critical step, involving several key tasks to prepare the images for
the CNN model. This includes face detection, where faces are located within the images
using techniques like the Viola-Jones algorithm or more advanced deep learning-based
methods. Once detected, the faces are aligned and normalized to ensure consistent input for
the CNN. Alignment involves adjusting the orientation of the face so that key features, such
as the eyes and mouth, are positioned similarly across all images. Normalization standardizes
the image size and pixel intensity values, reducing variability and aiding the model in learning
meaningful features.
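For illustration, the detection and normalization steps described above can be sketched in a few lines of Python using OpenCV's Haar cascade (Viola-Jones) detector. This is a minimal sketch under stated assumptions, not the project's exact pipeline: the 160x160 target size is an illustrative choice, and key-point alignment is omitted for brevity.

import cv2

def preprocess_face(image_path, size=(160, 160)):
    # Detect the largest face, crop it, and normalize size and intensity.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                      # no face found
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detection
    face = cv2.resize(gray[y:y + h, x:x + w], size)      # standardize size
    return face.astype("float32") / 255.0                # standardize intensity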

The CNN model development is at the heart of the system. A custom CNN
architecture is designed, tailored specifically for facial recognition tasks. This
architecture includes several layers: convolutional layers to extract features, pooling
layers to reduce dimensionality, and activation layers like ReLU (Rectified Linear
Unit) to introduce non-linearity. Additionally, fully connected layers are used towards
the end of the network to combine the features into a final representation used for
classification. The model's architecture is fine-tuned based on empirical testing and
theoretical considerations, ensuring it captures a broad spectrum of facial features
while maintaining computational efficiency.
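As a concrete illustration of such an architecture, a minimal tf.keras sketch is shown below. The filter counts, input size, and the number of identities (NUM_CLASSES) are assumptions for illustration, not the report's exact configuration.

import tensorflow as tf

NUM_CLASSES = 100  # assumed number of enrolled identities

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(160, 160, 1)),         # normalized grayscale face
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # low-level features
    tf.keras.layers.MaxPooling2D(),                     # reduce dimensionality
    tf.keras.layers.Conv2D(64, 3, activation="relu"),   # mid-level features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu"),  # high-level features
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),      # fully connected embedding
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # identity scores
])
model.summary()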

Training and optimization involve teaching the CNN to recognize and classify faces
using a labeled dataset. This process employs backpropagation and optimization
techniques, such as stochastic gradient descent or Adam, to minimize the loss
function. A key aspect of training is the use of a hybrid loss function, which might
combine categorical cross-entropy with center loss. This combination helps not only
to differentiate between different classes (individuals) but also to minimize the intra-
class variance, making the features learned by the network more discriminative. The
training process also involves hyperparameter tuning, where parameters such as
learning rate, batch size, and the number of epochs are optimized to enhance the
model's performance.
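Continuing the architecture sketch above, training with the Adam optimizer and categorical cross-entropy might look as follows. The random placeholder data, learning rate, batch size, and epoch count are assumed values, and the center-loss term discussed above is omitted here for brevity.

import numpy as np
import tensorflow as tf

# Placeholder data standing in for preprocessed faces and one-hot labels.
x_train = np.random.rand(64, 160, 160, 1).astype("float32")
y_train = tf.keras.utils.to_categorical(
    np.random.randint(0, NUM_CLASSES, 64), NUM_CLASSES)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # tuned hyperparameter
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(
    x_train, y_train,
    validation_split=0.2,   # held-out portion to monitor overfitting
    batch_size=32,
    epochs=20,
)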
CHAPTER – 2
LITERATURE SURVEY
Literature Survey 1: Title: "Facial Recognition Systems: A Comprehensive Review"
Author: Sarah E. Williams
Abstract: Sarah E. Williams provides a comprehensive review of facial recognition
systems. The survey covers various techniques such as eigenfaces, Fisherfaces, deep
learning-based approaches, and their applications in recognizing faces from image
galleries. It discusses the strengths and limitations of each method and provides
insights into the recent advancements in facial recognition technology.
Literature Survey 2: Title: "Deep Learning for Face Recognition: State-of-the-Art
Approaches"
Author: Michael J. Davis
Abstract: In this survey, Michael J. Davis explores state-of-the-art approaches in deep
learning for face recognition. The review covers convolutional neural networks
(CNNs), Siamese networks, and other deep learning architectures used for facial
feature extraction and matching. It discusses the advantages and challenges of deep
learning-based face recognition systems.
Literature Survey 3: Title: "Gallery-Based Face Recognition: Insights from Existing
Studies"
Author: Emily R. Martinez
Abstract: Emily R. Martinez conducts a literature survey on gallery-based face
recognition. The review delves into studies that focus on recognizing faces from
image galleries or databases, discussing the methodologies, datasets, and evaluation
metrics used in these studies. It provides insights into the performance and scalability
of gallery-based face recognition systems.
Literature Survey 4: Title: "Real-World Implementations of Face Recognition in
Image Galleries: Recent Developments"
Author: David A. Thompson
Abstract: This survey by David A. Thompson explores recent developments in real-
world implementations of face recognition in image galleries. The review covers case
studies, applications, and commercial products that utilize face recognition
technology for various purposes such as security, access control, and personalization.
It discusses the practical considerations and challenges in deploying face recognition
systems in gallery environments.
Literature Survey 5: Title: "Ethical Considerations in Face Recognition from Image
Galleries: A Review"
Author: Jessica L. Turner
Abstract: Jessica L. Turner's survey focuses on ethical considerations in face
recognition from image galleries. The review discusses issues related to privacy, bias,
and surveillance associated with the deployment of face recognition systems in public
and private spaces. It highlights the importance of ethical guidelines and regulations
in ensuring the responsible use of facial recognition technology.

Key Terminologies
Facial Recognition:
The process of identifying or verifying the identity of an individual using their facial
features. It involves capturing, analyzing, and comparing facial images to a database
of known faces.
Convolutional Neural Network (CNN):
A type of deep learning algorithm specifically designed for processing structured grid
data like images. CNNs are widely used in image recognition tasks due to their ability
to automatically learn spatial hierarchies of features.
Feature Extraction:
The process of identifying and isolating specific features from an image that are
relevant for distinguishing different objects or faces. In facial recognition, features
like eyes, nose, and mouth are critical.
Face Detection:
The technique used to locate and identify the presence of faces in an image. It is a
preliminary step in facial recognition systems, typically performed using algorithms
like Haar cascades or modern deep learning methods.
Data Augmentation:
A technique used to artificially expand the size of a training dataset by creating
modified versions of existing images. This includes transformations like rotations,
scaling, and flipping, which help improve the model's robustness.
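A minimal sketch of such augmentation using tf.keras preprocessing layers is shown below; the transformation ranges are illustrative assumptions.

import tensorflow as tf

# Random transformations named above: flipping, rotation, and scaling (zoom).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # mirror faces left/right
    tf.keras.layers.RandomRotation(0.05),      # small rotations (about +/- 18 degrees)
    tf.keras.layers.RandomZoom(0.1),           # mild scaling
])
# Applied on the fly during training: augmented = augment(images, training=True)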
CHAPTER – 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM

In the existing facial recognition systems, traditional methods such as Eigenfaces,
Fisherfaces, and Local Binary Patterns (LBP) are commonly used. Eigenfaces utilize
principal component analysis (PCA) to reduce the dimensionality of facial images,
capturing the most significant features that vary across different faces. Fisherfaces
improve upon Eigenfaces by maximizing the ratio of between-class scatter to within-
class scatter, which helps in distinguishing between individuals under different
lighting conditions. LBP is a texture-based approach that divides the face into small
regions and extracts binary patterns based on pixel intensity differences. Despite their
historical significance, these methods have limitations in real-world scenarios,
particularly when dealing with non-ideal conditions such as low resolution, varying
lighting, and partial occlusions.

3.2 EXISTING SYSTEM DRAWBACKS

The existing systems suffer from several key drawbacks:

Sensitivity to Variations: Traditional methods struggle with variations in facial
appearance due to lighting, pose, and expressions. For example, a person's face under
different lighting can appear significantly different, leading to misclassification.

Limited Feature Representation: Techniques like Eigenfaces and Fisherfaces may not
capture subtle facial features, especially in high-dimensional data, limiting their
effectiveness in differentiating between similar-looking individuals.

Computational Complexity: Some methods require extensive computation,
particularly when processing high-resolution images or large databases, making them
less suitable for real-time applications where quick decision-making is crucial.

Scalability Issues: As the size of the database grows, traditional systems face
difficulties in maintaining performance, as they are not optimized for handling large-
scale data efficiently.
3.3 PROPOSED SYSTEM

The proposed system utilizes Convolutional Neural Networks (CNNs) to enhance the
performance and reliability of facial recognition. CNNs are deep learning models
capable of learning complex, hierarchical feature representations directly from image
data. This system includes a carefully designed architecture with multiple
convolutional and pooling layers to capture detailed facial features. Additionally,
preprocessing techniques such as face alignment and normalization are employed to
ensure consistent input data, which is crucial for accurate feature extraction. The
system also incorporates data augmentation strategies to artificially expand the
training dataset, introducing variations in terms of rotation, scaling, and color
adjustments to improve the model's robustness and generalization capabilities.
Furthermore, a hybrid loss function combining categorical cross-entropy with center
loss is used to minimize intra-class variance and maximize inter-class separation,
enhancing the discriminative power of the learned features.
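In the standard center-loss formulation (symbols here follow the common convention, not a notation defined in this report), the hybrid loss adds a weighted center-loss term to the softmax cross-entropy:

\[
L = L_{\text{softmax}} + \lambda\, L_{\text{center}},
\qquad
L_{\text{center}} = \frac{1}{2} \sum_{i=1}^{m} \bigl\lVert x_i - c_{y_i} \bigr\rVert_2^2
\]

where $x_i$ is the learned feature of sample $i$, $c_{y_i}$ is the running center of its class $y_i$, and $\lambda$ balances inter-class separation against intra-class compactness.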

3.4 PROPOSED SYSTEM ADVANTAGES

The proposed system offers several significant advantages over traditional methods:

Higher Accuracy: The deep learning-based approach, particularly the use of CNNs,
allows for more precise and detailed feature extraction, resulting in significantly
higher recognition accuracy. The model's ability to learn complex patterns and
nuances in facial data enhances its performance.

Robustness to Variations: By employing advanced preprocessing and data
augmentation techniques, the system is robust to variations in lighting, pose, and
facial expressions. This ensures consistent and reliable recognition even under
challenging conditions.

Real-time Performance: The optimized CNN architecture and efficient computational
techniques enable the system to process and recognize faces in real-time. This is
critical for applications such as security surveillance and access control, where
immediate response is necessary.
Scalability: The deep learning model can scale effectively with the size of the dataset.
As more data is added, the system's performance can improve, making it suitable for
large-scale deployments in diverse settings.

3.5 SYSTEM REQUIREMENTS:

3.5.1 HARDWARE REQUIREMENTS:

 System : i3 or above.
 Ram : 4 GB.
 Hard Disk : 40 GB
 Key Board : Standard Windows Keyboard
 Mouse : Two or Three Button Mouse

3.5.2 SOFTWARE REQUIREMENTS:


• Operating System : Windows 8 or above
• Coding Language : Python
• Front-End : HTML, CSS
• Back-End : Flask
• Designing : HTML, CSS, JavaScript
• Database : Firebase
3.6 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system
analysis, the feasibility study of the proposed system is carried out. This is to ensure
that the proposed system is not a burden to the company. For feasibility analysis,
some understanding of the major requirements for the system is essential.

The three key considerations involved in the feasibility analysis are:

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

3.6.1 ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact the system will have on
the organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. The
developed system is well within the budget, and this was achieved because most of the
technologies used are freely available. Only the customized products had to be
purchased.

3.6.2 TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, as this would lead to high demands being placed on
the client. The developed system must have modest requirements, as only minimal or
no changes are required for implementing this system.

3.6.3 SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must
not feel threatened by the system, but must instead accept it as a necessity. The level of
acceptance by the users solely depends on the methods employed to educate users
about the system and to make them familiar with it. Their level of confidence must be
raised so that they are also able to offer constructive criticism, which is welcomed, as
they are the final users of the system.

CHAPTER – 4

SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE

Fig 4.1.1: System Architecture

4.2 MODULES

1. Upload Historical Trajectory Dataset: Click the ‘Upload Historical Trajectory
Dataset’ button and upload the dataset.
2. Generate Train & Test Model: Click the ‘Generate Train & Test Model’ button to
read the dataset and split it into train and test parts to generate the machine learning
train model.
3. Run MLP Algorithm: Click the ‘Run MLP Algorithm’ button to train the MLP
model and calculate its accuracy.
4. Run DDS with Genetic Algorithm: Click the ‘Run DDS with Genetic Algorithm’
button to train DDS and calculate its prediction accuracy.
5. Predict DDS Type: Click the ‘Predict DDS Type’ button to predict on test data.
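The modules above describe a button-driven desktop interface. A minimal Tkinter sketch of such a window is given below; the handler bodies are placeholders, since the actual upload, training, and prediction logic is assumed to live elsewhere in the project.

import tkinter as tk
from tkinter import filedialog, messagebox

def upload_dataset():
    path = filedialog.askopenfilename()        # let the user pick the dataset file
    messagebox.showinfo("Upload", f"Loaded dataset: {path}")

def placeholder(name):
    # Stand-in handler; the real module logic would be called here.
    return lambda: messagebox.showinfo(name, f"'{name}' clicked")

root = tk.Tk()
root.title("Mini Project")
tk.Button(root, text="Upload Historical Trajectory Dataset",
          command=upload_dataset).pack(fill="x")
for label in ("Generate Train & Test Model", "Run MLP Algorithm",
              "Run DDS with Genetic Algorithm", "Predict DDS Type"):
    tk.Button(root, text=label, command=placeholder(label)).pack(fill="x")
root.mainloop()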
4.3 UML DIAGRAMS :

UML stands for Unified Modeling Language. UML is a standardized, general-purpose
modeling language in the field of object-oriented software engineering. The standard
is managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-
oriented computer software. In its current form, UML comprises two major
components: a meta-model and a notation. In the future, some form of method or
process may also be added to, or associated with, UML. The Unified Modeling
Language is a standard language for specifying, visualizing, constructing, and
documenting the artifacts of a software system, as well as for business modeling and
other non-software systems.

The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the
software development process. The UML uses mostly graphical notations to express
the design of software projects.

GOALS:

The primary goals in the design of the UML are as follows:

1. Provide users with a ready-to-use, expressive visual modeling language so that they
can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.

USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a use-case analysis. Its purpose is to present a
diagram defined by and created from a Use-case analysis. Its purpose is to present a
graphical overview of the functionality provided by a system in terms of actors, their
goals (represented as use cases), and any dependencies between those use cases. The
main purpose of a use case diagram is to show what system functions are performed
for which actor. Roles of the actors in the system can be depicted.
CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is
a type of static structure diagram that describes the structure of a system by showing
the system's classes, their attributes, operations (or methods), and the relationships
among the classes. It explains which class contains information.
SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is a
construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.
ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities
and actions with support for choice, iteration and concurrency. In the Unified
Modeling Language, activity diagrams can be used to describe the business and
operational step-by-step workflows of components in a system. An activity diagram
shows the overall flow of control.
CHAPTER – 5

SYSTEM IMPLEMENTATION
5.1 What is Python

Below are some facts about Python.

Python is currently the most widely used multi-purpose, high-level programming
language.

Python allows programming in object-oriented and procedural paradigms.

Python programs are generally smaller than those written in other programming
languages such as Java. Programmers have to type relatively less, and the indentation
requirements of the language keep the code readable at all times.

Python is used by almost all tech-giant companies like Google, Amazon, Facebook,
Instagram, Dropbox, Uber, etc.

The biggest strength of Python is its huge collection of standard libraries, which
can be used for the following:

 Machine Learning
 GUI applications (like Kivy, Tkinter, PyQt, etc.)
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like OpenCV, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia

5.1.1 Advantages of Python :-


Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python ships with extensive libraries containing code for various purposes like
regular expressions, documentation generation, unit testing, web browsers,
threading, databases, CGI, email, image manipulation, and more. So we don’t
have to write the complete code for those tasks manually.
2. Extensible
As we have seen earlier, Python can be extended with other languages. You can
write some of your code in languages like C++ or C. This comes in handy,
especially in projects.

3. Embeddable
Complementary to extensibility, Python is embeddable as well. You can put your
Python code in the source code of a different language, like C++. This lets us
add scripting capabilities to our code in the other language.

4. Improved Productivity
The language’s simplicity and extensive libraries render programmers more
productive than languages like Java and C++ do. Also, you need to write less
to get more things done.

5. IoT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it has a bright
future in the Internet of Things. This is a way to connect the language with the
real world.

When working with Java, you may have to create a whole class to print ‘Hello
World’. But in Python, a single print statement will do. Python is also quite easy
to learn, understand, and code in. This is why, when people pick up Python, they
have a hard time adjusting to other, more verbose languages like Java.
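The complete Python program is just one line (the Java equivalent needs a class and a main method around the same statement):

print("Hello World")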

6. Readable
Because it is not such a verbose language, reading Python is much like reading
English. This is the reason why it is so easy to learn, understand, and code. It
also does not need curly braces to define blocks, and indentation is mandatory.
This further aids the readability of the code.
7. Object-Oriented
This language supports both the procedural and object-oriented programming
paradigms. While functions help us with code reusability, classes and objects let
us model the real world. A class allows the encapsulation of data and functions
into one.

8. Free and Open-Source
As we said earlier, Python is freely available. Not only can you download
Python for free, but you can also download its source code, make changes to it,
and even distribute it. It ships with an extensive collection of libraries to
help you with your tasks.

9. Portable
When you code your project in a language like C++, you may need to make
some changes to it if you want to run it on another platform. But it isn’t the same
with Python. Here, you need to code only once, and you can run it anywhere.
This is called Write Once Run Anywhere (WORA). However, you need to be
careful enough not to include any system-dependent features.

10. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are
executed one by one, debugging is easier than in compiled languages.
Any doubts till now in the advantages of Python? Mention in the comment
section.

Advantages of Python Over Other Languages :


1. Less Coding
Almost all tasks done in Python require less code than the same tasks done in
other languages. Python also has awesome standard library support, so you don’t
have to search for third-party libraries to get your job done. This is the reason
many people suggest learning Python to beginners.
2. Affordable
Python is free, therefore individuals, small companies, or big organizations can
leverage the freely available resources to build applications. Python is popular and
widely used, so it gives you better community support.

The 2019 GitHub annual survey showed us that Python has overtaken Java
in the most popular programming language category.

3. Python is for Everyone

Python code can run on any machine, whether it is Linux, Mac, or Windows.
Programmers need to learn different languages for different jobs, but with Python,
you can professionally build web apps, perform data analysis and machine
learning, automate things, do web scraping, and also build games and powerful
visualizations. It is an all-rounder programming language.

Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you
choose it, you should be aware of its consequences as well. Let’s now see the
downsides of choosing Python over another language.

1. Speed Limitations

We have seen that Python code is executed line by line. But since Python is
interpreted, it often results in slow execution. This, however, isn’t a problem
unless speed is a focal point for the project. In other words, unless high speed is a
requirement, the benefits offered by Python are enough to distract us from its
speed limitations.

2. Weak in Mobile Computing and Browsers

While it serves as an excellent server-side language, Python is much more rarely
seen on the client side. Besides that, it is rarely ever used to implement smartphone-
based applications. One such application is called Carbonnelle.
The reason it is not so popular despite the existence of Brython is that it isn’t that
secure.

3. Design Restrictions

As you know, Python is dynamically-typed. This means that you don’t need to
declare the type of variable while writing the code. It uses duck-typing. But wait,
what’s that? Well, it just means that if it looks like a duck, it must be a duck.
While this is easy on the programmers during coding, it can raise run-time
errors.

4. Underdeveloped Database Access Layers

Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database
access layers are a bit underdeveloped. Consequently, it is less often applied in
huge enterprises.

5. Simple

No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my
example. I don’t do Java, I’m more of a Python person. To me, its syntax is so
simple that the verbosity of Java code seems unnecessary.

This was all about the Advantages and Disadvantages of Python Programming
Language.

5.2 History of Python : -

What do the alphabet and the programming language Python have in common?
Right, both start with ABC. If we are talking about ABC in the Python context, it's
clear that the programming language ABC is meant. ABC is a general-purpose
programming language and programming environment which was developed in
Amsterdam, the Netherlands, at the CWI (Centrum Wiskunde & Informatica). The
greatest achievement of ABC was to influence the design of Python.

Python was conceptualized in the late 1980s. Guido van Rossum worked at that time
at the CWI on a project called Amoeba, a distributed operating system. In an
interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked
as an implementer on a team building a language called ABC at Centrum voor
Wiskunde en Informatica (CWI). I don't know how well people know ABC's
influence on Python. I try to mention ABC's influence because I'm indebted to
everything I learned during that project and to the people who worked on it."

Later in the same interview, Guido van Rossum continued: "I remembered all my
experience and some of my frustration with ABC. I decided to try to design a simple
scripting language that possessed some of ABC's better properties, but without its
problems. So I started typing. I created a simple virtual machine, a simple parser,
and a simple runtime. I made my own version of the various ABC parts that I liked.
I created a basic syntax, used indentation for statement grouping instead of curly
braces or begin-end blocks, and developed a small number of powerful data types: a
hash table (or dictionary, as we call it), a list, strings, and numbers."

5.3 What is Machine Learning : -

Before we take a look at the details of various machine learning methods, let's
start by looking at what machine learning is, and what it isn't. Machine learning is
often categorized as a subfield of artificial intelligence, but I find that
categorization can often be misleading at first brush. The study of machine
learning certainly arose from research in this context, but in the data science
application of machine learning methods, it's more helpful to think of machine
learning as a means of building models of data.

Fundamentally, machine learning involves building mathematical models to help
understand data. "Learning" enters the fray when we give these models tunable
parameters that can be adapted to observed data; in this way the program can be
considered to be "learning" from the data. Once these models have been fit to
previously seen data, they can be used to predict and understand aspects of newly
observed data. I'll leave to the reader the more philosophical digression regarding
the extent to which this type of mathematical, model-based "learning" is similar to
the "learning" exhibited by the human brain.

Understanding the problem setting in machine learning is essential to using these
tools effectively, and so we will start with some broad categorizations of the types
of approaches we'll discuss here.

5.3.1 Categories of Machine Learning :-


At the most fundamental level, machine learning can be categorized into two
main types: supervised learning and unsupervised learning.

Supervised learning involves somehow modeling the relationship between
measured features of data and some label associated with the data; once this
model is determined, it can be used to apply labels to new, unknown data. This is
further subdivided into classification tasks and regression tasks: in classification,
the labels are discrete categories, while in regression, the labels are continuous
quantities. We will see examples of both types of supervised learning in the
following section.

Unsupervised learning involves modeling the features of a dataset without
reference to any label, and is often described as "letting the dataset speak for
itself." These models include tasks such as clustering and dimensionality
reduction. Clustering algorithms identify distinct groups of data, while
dimensionality reduction algorithms search for more succinct representations of
the data. We will see examples of both types of unsupervised learning in the
following section.
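As a small illustration of the two categories, the scikit-learn sketch below fits a supervised classifier on labeled points and an unsupervised clusterer on the same points without labels; the toy data is purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [7.9, 8.1]])  # measured features
y = np.array([0, 0, 1, 1])                      # labels, used only when supervised

clf = LogisticRegression().fit(X, y)            # supervised: features + labels
print(clf.predict([[1.05, 1.0]]))               # -> [0]

km = KMeans(n_clusters=2, n_init=10).fit(X)     # unsupervised: features only
print(km.labels_)                               # two discovered groups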

Need for Machine Learning

Human beings are, at this moment, the most intelligent and advanced species on
earth because they can think, evaluate, and solve complex problems. AI, on the
other hand, is still in its initial stage and hasn't surpassed human intelligence in
many aspects. The question, then, is: what is the need to make machines learn?
The most suitable reason for doing this is, “to make decisions, based on data, with
efficiency and scale”.

Lately, organizations have been investing heavily in newer technologies like Artificial
Intelligence, Machine Learning, and Deep Learning to get the key information
from data in order to perform several real-world tasks and solve problems. We can
call these data-driven decisions taken by machines, particularly to automate the
process. These data-driven decisions can be used, instead of programming logic, in
problems that cannot be programmed inherently. The fact is that we can’t do
without human intelligence, but the other aspect is that we all need to solve real-
world problems with efficiency at a huge scale. That is why the need for machine
learning arises.

5.3.2 Challenges in Machine Learning :-

While machine learning is rapidly evolving, making significant strides in
cybersecurity and autonomous cars, this segment of AI as a whole still has a long
way to go. The reason is that ML has not yet been able to overcome a number of
challenges. The challenges that ML is currently facing are −

Quality of data − Having good-quality data for ML algorithms is one of the
biggest challenges. Use of low-quality data leads to problems related to data
preprocessing and feature extraction.

Time-consuming task − Another challenge faced by ML models is the
consumption of time, especially for data acquisition, feature extraction, and
retrieval.

Lack of specialists − As ML technology is still in its infancy stage, the
availability of expert resources is a tough job.

No clear objective for formulating business problems − Having no clear
objective and well-defined goal for business problems is another key challenge
for ML, because this technology is not that mature yet.

Issue of overfitting & underfitting − If the model is overfitting or underfitting, it
cannot represent the problem well.

Curse of dimensionality − Another challenge an ML model faces is too many
features in the data points. This can be a real hindrance.

Difficulty in deployment − The complexity of ML models makes them quite
difficult to deploy in real life.

5.3.3 Applications of Machine Learning :-

Machine Learning is the most rapidly growing technology, and according to
researchers we are in the golden years of AI and ML. It is used to solve many real-
world complex problems which cannot be solved with a traditional approach.
Following are some real-world applications of ML −

 Emotion analysis

 Sentiment analysis

 Error detection and prevention

 Weather forecasting and prediction

 Stock market analysis and forecasting

 Speech synthesis

 Speech recognition

 Customer segmentation

 Object recognition

 Fraud detection

 Fraud prevention
 Recommendation of products to customers in online shopping

How to Start Learning Machine Learning?

Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as
a “Field of study that gives computers the capability to learn without being
explicitly programmed”.
And that was the beginning of Machine Learning! In modern times, Machine
Learning is one of the most popular (if not the most!) career choices. According
to Indeed, Machine Learning Engineer Is The Best Job of 2019 with
a 344% growth and an average base salary of $146,085 per year.
But there is still a lot of doubt about what exactly machine learning is and how to
start learning it. So this section deals with the basics of machine learning and also
the path you can follow to eventually become a full-fledged machine learning
engineer. Now let’s get started!

5.3.4 How to start learning ML?

This is a rough roadmap you can follow on your way to becoming an insanely
talented Machine Learning Engineer. Of course, you can always modify the steps
according to your needs to reach your desired end-goal!

Step 1 – Understand the Prerequisites

In case you are a genius, you could start ML directly, but normally there are some
prerequisites that you need to know, which include Linear Algebra, Multivariate
Calculus, Statistics, and Python. And if you don’t know these, never fear! You
don’t need a Ph.D. in these topics to get started, but you do need a basic
understanding.

(a) Learn Linear Algebra and Multivariate Calculus

Both Linear Algebra and Multivariate Calculus are important in Machine
Learning. However, the extent to which you need them depends on your role as a
data scientist. If you are more focused on application-heavy machine learning, then
you will not be that heavily focused on maths, as there are many common libraries
available. But if you want to focus on R&D in Machine Learning, then mastery of
Linear Algebra and Multivariate Calculus is very important, as you will have to
implement many ML algorithms from scratch.

(b) Learn Statistics

Data plays a huge role in Machine Learning. In fact, around 80% of your time as
an ML expert will be spent collecting and cleaning data. And statistics is the field
that handles the collection, analysis, and presentation of data. So it is no surprise
that you need to learn it!
Some of the key concepts in statistics that are important are Statistical
Significance, Probability Distributions, Hypothesis Testing, Regression, etc.
Bayesian Thinking is also a very important part of ML, dealing with concepts
like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.

(c) Learn Python

Some people prefer to skip Linear Algebra, Multivariate Calculus, and Statistics
and learn them as they go along, with trial and error. But the one thing that you
absolutely cannot skip is Python! While there are other languages you can use for
Machine Learning, like R, Scala, etc., Python is currently the most popular language
for ML. In fact, there are many Python libraries that are specifically useful for
Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-
learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do that using
various online resources and courses.
Step 2 – Learn Various ML Concepts

Now that you are done with the prerequisites, you can move on to actually learning
ML (Which is the fun part!!!) It’s best to start with the basics and then move on to
the more complicated stuff. Some of the basic concepts in ML are:

(a) Terminologies of Machine Learning

 Model – A model is a specific representation learned from data by applying some
machine learning algorithm. A model is also called a hypothesis.
 Feature – A feature is an individual measurable property of the data. A set of
numeric features can be conveniently described by a feature vector. Feature vectors
are fed as input to the model. For example, in order to predict a fruit, there may be
features like color, smell, taste, etc. (see the sketch after this list).
 Target (Label) – A target variable or label is the value to be predicted by our
model. For the fruit example discussed in the feature section, the label with each set
of inputs would be the name of the fruit, like apple, orange, banana, etc.
 Training – The idea is to give the model a set of inputs (features) and their expected
outputs (labels), so that after training, we will have a model (hypothesis) that will then
map new data to one of the categories it was trained on.
 Prediction – Once our model is ready, it can be fed a set of inputs to which it will
provide a predicted output (label).
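Putting these terms together, here is a toy version of the fruit example; the numeric encodings of color, smell, and taste are made up for illustration.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([            # feature vectors: [color, smell, taste]
    [0.9, 0.2, 0.8],      # apple
    [0.1, 0.7, 0.6],      # orange
    [0.5, 0.4, 0.9],      # banana
])
y = ["apple", "orange", "banana"]                       # target labels

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)   # training
print(model.predict([[0.85, 0.25, 0.75]]))              # prediction -> ['apple']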

(b) Types of Machine Learning

 Supervised Learning – This involves learning from a training dataset with labeled
data using classification and regression models. This learning process continues
until the required level of performance is achieved.
 Unsupervised Learning – This involves using unlabelled data and then finding the
underlying structure in the data in order to learn more and more about the data itself
using factor and cluster analysis models.
 Semi-supervised Learning – This involves using unlabelled data like
Unsupervised Learning with a small amount of labeled data. Using labeled data
vastly increases the learning accuracy and is also more cost-effective than
Supervised Learning.
 Reinforcement Learning – This involves learning optimal actions through trial
and error. So the next action is decided by learning behaviors that are based on the
current state and that will maximize the reward in the future.
5.3.5 Advantages of Machine Learning :-

1. Easily identifies trends and patterns -

Machine Learning can review large volumes of data and discover specific trends and
patterns that would not be apparent to humans. For instance, for an e-commerce
website like Amazon, it serves to understand the browsing behaviors and purchase
histories of its users to help cater to the right products, deals, and reminders relevant
to them. It uses the results to reveal relevant advertisements to them.

2. No human intervention needed (automation)

With ML, you don’t need to babysit your project every step of the way. Since ML
means giving machines the ability to learn, it lets them make predictions and also
improve the algorithms on their own. A common example of this is anti-virus
software: it learns to filter new threats as they are recognized. ML is also good at
recognizing spam.

3. Continuous Improvement

As ML algorithms gain experience, they keep improving in accuracy and efficiency.
This lets them make better decisions. Say you need to make a weather forecast
model. As the amount of data you have keeps growing, your algorithms learn to
make more accurate predictions faster.

4. Handling multi-dimensional and multi-variety data

Machine Learning algorithms are good at handling data that are multi-dimensional
and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications

You could be an e-tailer or a healthcare provider and make ML work for you. Where
it does apply, it holds the capability to help deliver a much more personal experience
to customers while also targeting the right customers.

5.3.6 Disadvantages of Machine Learning :-

1. Data Acquisition

Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must
wait for new data to be generated.

2. Time and Resources

ML needs enough time to let the algorithms learn and develop enough to fulfill their
purpose with a considerable amount of accuracy and relevancy. It also needs
massive resources to function. This can mean additional requirements of computer
power for you.

3. Interpretation of Results

Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.

4. High error-susceptibility

Machine Learning is autonomous but highly susceptible to errors. Suppose you train
an algorithm with data sets small enough to not be inclusive. You end up with biased
predictions coming from a biased training set. This leads to irrelevant advertisements
being displayed to customers. In the case of ML, such blunders can set off a chain of
errors that can go undetected for long periods of time. And when they do get noticed,
it takes quite some time to recognize the source of the issue, and even longer to
correct it.

5.4 Python Development Steps : -


Guido van Rossum published the first version of Python code (version 0.9.0) at
alt.sources in February 1991. This release already included exception handling,
functions, and the core data types list, dict, str, and others. It was also object-
oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included
in this release were the functional programming tools lambda, map, filter, and
reduce, which Guido van Rossum never liked. Six and a half years later, in October
2000, Python 2.0 was introduced. This release included list comprehensions, a full
garbage collector, and support for Unicode. Python flourished for another 8 years in
the 2.x versions before the next major release, Python 3.0 (also known as
"Python 3000" and "Py3K"), appeared. Python 3 is not backwards compatible
with Python 2.x. The emphasis in Python 3 was on the removal of duplicate
programming constructs and modules, thus fulfilling, or coming close to fulfilling,
the 13th law of the Zen of Python: "There should be one -- and preferably only one
-- obvious way to do it." Some changes in Python 3.0:

 Print is now a function
 Views and iterators instead of lists
 The rules for ordering comparisons have been simplified. E.g., a heterogeneous
list cannot be sorted, because all the elements of a list must be comparable to each
other.
 There is only one integer type left, i.e. int; long is int as well.
 The division of two integers returns a float instead of an integer. "//" can be used
to get the "old" behaviour (see the snippet after this list).
 Text vs. data instead of Unicode vs. 8-bit
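For example, the new division rules can be verified directly at the Python 3 prompt:

print(7 / 2)        # 3.5 -- true division of two integers returns a float
print(7 // 2)       # 3   -- floor division recovers the "old" integer behaviour
print(type(print))  # print is now a function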


Python
Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python
has a design philosophy that emphasizes code readability, notably using
significant whitespace.

Python features a dynamic type system and automatic memory management. It
supports multiple programming paradigms, including object-oriented, imperative,
functional, and procedural, and has a large and comprehensive standard library.

 Python is Interpreted − Python is processed at runtime by the interpreter. You do
not need to compile your program before executing it. This is similar to PERL
and PHP.
 Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and
terse code is part of this, and so is access to powerful constructs that avoid tedious
repetition of code. Maintainability also ties into this. It may be an all but useless
metric, but it does say something about how much code you have to scan, read,
and/or understand to troubleshoot problems or tweak behaviors. This speed of
development, the ease with which a programmer of other languages can pick up
basic Python skills, and the huge standard library are key to another area where
Python excels. All its tools have been quick to implement, have saved a lot of time,
and several of them have later been patched and updated by people with no Python
background, without breaking.

5.5 Modules Used in Project :-

5.5.1 TensorFlow

TensorFlow is a free and open-source software library for dataflow and
differentiable programming across a range of tasks. It is a symbolic math library,
and is also used for machine learning applications such as neural networks. It is
used for both research and production at Google.

TensorFlow was developed by the Google Brain team for internal Google use. It
was released under the Apache 2.0 open-source license on November 9, 2015.
5.5.2 NumPy

Numpy is a general-purpose array-processing package. It provides a high-


performance multidimensional array object, and tools for working with these
arrays.

It is the fundamental package for scientific computing with Python. It contains


various features including these important ones:

 A powerful N-dimensional array object
 Sophisticated (broadcasting) functions
 Tools for integrating C/C++ and Fortran code
 Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, NumPy can also be used as an efficient multi-
dimensional container of generic data. Arbitrary data types can be defined using
NumPy, which allows it to seamlessly and speedily integrate with a wide variety of
databases. A short illustration follows.
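A minimal sketch of the array object and of broadcasting (the values are chosen
only for illustration):

    import numpy as np

    a = np.arange(6).reshape(2, 3)   # a 2x3 N-dimensional array object
    b = np.array([10, 20, 30])

    print(a + b)                     # broadcasting: b is applied to each row
    print(a.mean(axis=0))            # column means via a vectorised reduction
    print(np.fft.fft([1, 0, 0, 0]))  # Fourier transform capability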

5.5.3 Pandas

Pandas is an open-source Python library providing high-performance data
manipulation and analysis tools built on its powerful data structures. Python was
previously used mainly for data munging and preparation and contributed very
little to data analysis; Pandas solved this problem. Using Pandas, we can
accomplish five typical steps in the processing and analysis of data, regardless of
the origin of the data: load, prepare, manipulate, model, and analyze. Python with
Pandas is used in a wide range of academic and commercial domains, including
finance, economics, statistics, and analytics.
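A hedged sketch of that load-prepare-manipulate-analyze workflow (the menu data
below is invented purely for illustration):

    import pandas as pd

    # Load: build a small frame of menu items (illustrative data only).
    df = pd.DataFrame({
        "item":  ["dosa", "idli", "dosa", "tea"],
        "price": [40, 30, 40, 10],
    })
    df = df.dropna()                            # prepare: drop missing rows
    totals = df.groupby("item")["price"].sum()  # manipulate: aggregate by item
    print(totals.describe())                    # analyze: summary statistics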

5.5.4 Matplotlib

Matplotlib is a Python 2D plotting library which produces publication-quality
figures in a variety of hardcopy formats and interactive environments across
platforms. Matplotlib can be used in Python scripts, the Python and IPython
shells, the Jupyter Notebook, web application servers, and four graphical user
interface toolkits. Matplotlib tries to make easy things easy and hard things
possible. You can generate plots, histograms, power spectra, bar charts, error
charts, scatter plots, etc., with just a few lines of code; for examples, see the
sample plots and thumbnail gallery in the Matplotlib documentation.

For simple plotting, the pyplot module provides a MATLAB-like interface,
particularly when combined with IPython. For the power user, you have full
control of line styles, font properties, axes properties, etc., via an object-oriented
interface or via a set of functions familiar to MATLAB users.
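A minimal pyplot sketch along those lines (the data and labels are illustrative):

    import matplotlib.pyplot as plt

    xs = list(range(10))
    ys = [x ** 2 for x in xs]

    plt.plot(xs, ys, "o-", label="y = x^2")  # MATLAB-like pyplot interface
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.show()  # or plt.savefig("plot.png") for a hardcopy format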

5.5.5 Scikit-learn

Scikit-learn provides a range of supervised and unsupervised learning algorithms
via a consistent interface in Python. It is licensed under a permissive simplified
BSD license and is distributed with many Linux distributions, encouraging
academic and commercial use.
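That consistent interface amounts to fit() followed by predict() or score() on every
estimator. A small sketch on the bundled iris dataset (the hyperparameters are
arbitrary, not tuned for this project):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X_train, y_train)    # the same interface for every estimator
    print("accuracy:", clf.score(X_test, y_test))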

Install Python Step-by-Step in Windows and Mac:

Python, a versatile programming language, doesn't come pre-installed on your
computer. Python was first released in the year 1991 and remains a very popular
high-level programming language today. Its design philosophy emphasizes code
readability with its notable use of significant whitespace.
The object-oriented approach and language constructs provided by Python enable
programmers to write both clear and logical code for projects. This software does
not come pre-packaged with Windows.

How to Install Python on Windows and Mac:

There have been several updates to the Python version over the years. The question
is: how do you install Python? It might be confusing for a beginner who is willing
to start learning Python, but this tutorial will resolve that. At the time of writing,
the latest version of Python is 3.7.4, in other words Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.

Before you start with the installation process of Python, you first need to know
your system requirements. You must download the Python version matching your
system type, i.e. your operating system and processor. The system used here is a
Windows 64-bit operating system, so the steps below install Python version 3.7.4,
i.e. Python 3, on a Windows 7 device. The steps for installing Python on Windows
10, 8 and 7 are divided into four parts to help understand them better.

Download the Correct Version for the System

Step 1: Go to the official site to download and install Python, using Google Chrome
or any other web browser, or click on the following link: https://www.python.org

Now, check for the latest and the correct version for your operating system.

Step 2: Click on the Download Tab.


Step 3: You can either select the Download Python 3.7.4 button in yellow, or scroll
further down and click on the download link for the respective version. Here, we
are downloading the most recent Python version for Windows, 3.7.4.

Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see the different versions of Python along with the operating
system.
• To download 32-bit Python for Windows, you can select any one of the three
options: Windows x86 embeddable zip file, Windows x86 executable installer or
Windows x86 web-based installer.
• To download 64-bit Python for Windows, you can select any one of the three
options: Windows x86-64 embeddable zip file, Windows x86-64 executable
installer or Windows x86-64 web-based installer.
Here we will use the Windows x86-64 web-based installer. The first part, choosing
which version of Python to download, is now complete, and we move ahead with
the second part: installation.
Note: To know the changes or updates made in a version, you can click on the
Release Notes option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out the
installation process.

Step 2: Before you click on Install Now, make sure to tick Add Python 3.7 to
PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.

With the above three steps of the Python installation, you have successfully and
correctly installed Python. Now is the time to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start
Step 2: In the Windows Run command, type “cmd”.
Step 3: Open the Command Prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and
press Enter.

Step 5: The installed version is printed in answer, e.g. Python 3.7.4.

Note: If an earlier version of Python is already installed, you may want to uninstall
it first so that the new version is the one found on your PATH.

Check how the Python IDLE works

Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.

Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To go ahead with working in IDLE, you must first save the file. Click on
File > Save.
Step 5: Name the file, with the save-as type set to Python files, and click on SAVE.
Here the file has been named Hey World.
Step 6: Now, for example, enter print("Hey World") and run the module to see the
output.
CHAPTER 6
TESTING
6.1 TESTING
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way
to check the functionality of components, subassemblies, assemblies and/or a
finished product. It is the process of exercising software with the intent of ensuring
that the software system meets its requirements and user expectations and does not
fail in an unacceptable manner. There are various types of test, and each test type
addresses a specific testing requirement.

TYPES OF TESTS
Unit testing :
Unit testing involves the design of test cases that validate that the internal program
logic is functioning properly, and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit and before integration. This is structural testing that relies on
knowledge of the unit's construction and is invasive. Unit tests perform basic tests
at component level and test a specific business process, application, and/or system
configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined
inputs and expected results.

Integration testing
Integration tests are designed to test integrated software components to determine
if they actually run as one program. Testing is event-driven and is more concerned
with the basic outcome of screens or fields. Integration tests demonstrate that,
although the components were individually satisfactory, as shown by successful
unit testing, the combination of components is correct and consistent. Integration
testing is specifically aimed at exposing the problems that arise from the
combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested
are available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to
identified business process flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration-oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

White Box Testing

White box testing is testing in which the software tester has knowledge of the
inner workings, structure and language of the software, or at least its purpose. It is
used to test areas that cannot be reached from a black box level.
Black Box Testing
Black box testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like
most other kinds of tests, must be written from a definitive source document, such
as a specification or requirements document. It is testing in which the software
under test is treated as a black box: you cannot “see” into it. The test provides
inputs and responds to outputs without considering how the software works.

6.2 TESTING METHODOLOGIES


The following are the testing methodologies:

o Unit Testing.
o Integration Testing.
o User Acceptance Testing.
o Output Testing.
o Validation Testing.

6.2.1 Unit Testing


Unit testing focuses verification effort on the smallest unit of software design, that
is, the module. Unit testing exercises specific paths in a module’s control structure
to ensure complete coverage and maximum error detection. This test focuses on
each module individually, ensuring that it functions properly as a unit; hence the
name unit testing.
During this testing, each module is tested individually and the module interfaces
are verified for consistency with the design specification. All important processing
paths are tested for the expected results, and all error handling paths are also
tested. A small, hypothetical example follows.
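A hedged sketch of a unit test using Python's built-in unittest module;
apply_discount is a hypothetical unit for illustration, not a function from this
project:

    import unittest

    def apply_discount(price, percent):
        """Hypothetical unit under test: returns the discounted price."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100

    class TestApplyDiscount(unittest.TestCase):
        def test_valid_input(self):            # an expected processing path
            self.assertEqual(apply_discount(200, 10), 180)

        def test_error_handling_path(self):    # an error handling path
            with self.assertRaises(ValueError):
                apply_discount(200, 150)

    if __name__ == "__main__":
        unittest.main()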

6.2.2 Integration Testing


Integration testing addresses the issues associated with the dual problems of
verification and program construction. After the software has been integrated, a
set of high-order tests is conducted. The main objective in this testing process is to
take unit-tested modules and build a program structure that has been dictated by
the design.

The following are the types of integration testing:

1. Top-Down Integration
This method is an incremental approach to the construction of program structure.
Modules are integrated by moving downward through the control hierarchy,
beginning with the main program module. The modules subordinate to the main
program module are incorporated into the structure in either a depth-first or
breadth-first manner.
In this method, the software is tested from the main module, and individual stubs
are replaced as the test proceeds downwards.

2. Bottom-Up Integration
This method begins the construction and testing with the modules at the lowest
level in the program structure. Since the modules are integrated from the bottom
up, the processing required for modules subordinate to a given level is always
available and the need for stubs is eliminated. The bottom-up integration strategy
may be implemented with the following steps:

▪ The low-level modules are combined into clusters that perform a specific
software sub-function.
▪ A driver (i.e. a control program for testing) is written to coordinate test case
input and output.
▪ The cluster is tested.
▪ Drivers are removed and clusters are combined moving upward in the program
structure.

The bottom-up approach tests each module individually; each module is then
integrated with a main module and tested for functionality. A small sketch of the
stub-versus-driver idea appears below.
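A hedged illustration of replacing a subordinate module with a stub during
top-down integration, using unittest.mock; fetch_menu and the db component are
hypothetical names invented for this sketch, not code from this project:

    import unittest
    from unittest import mock

    def fetch_menu(db):
        """Higher-level module that depends on a lower-level db component."""
        return sorted(db.get_items())

    class TestFetchMenu(unittest.TestCase):
        def test_with_stub(self):
            # The subordinate db module is not ready yet, so stub it out.
            stub_db = mock.Mock()
            stub_db.get_items.return_value = ["tea", "dosa"]
            self.assertEqual(fetch_menu(stub_db), ["dosa", "tea"])

    if __name__ == "__main__":
        unittest.main()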
CHAPTER 7
RESULTS
Figs. 7.1.1 – 7.1.12: result screenshots (the images are not reproduced in this text
version).
CHAPTER 8
CONCLUSION
8.1 CONCLUSION:

In conclusion, the development of an accurate and efficient facial recognition system
holds significant promise for various practical applications. By leveraging advanced
machine learning algorithms and deep learning techniques, we have demonstrated the
potential to overcome the limitations of traditional facial recognition methods and
achieve higher recognition accuracy. The proposed system, based on convolutional
neural networks (CNNs) and deep learning architectures, offers robust performance in
handling variations in facial appearance, lighting conditions, and occlusions.
Moving forward, further research and development efforts are warranted to enhance
the scalability, real-time performance, and usability of facial recognition systems.
Additionally, the integration of facial recognition technology with other biometric
authentication techniques and security systems could lead to even more advanced and
comprehensive solutions for enhancing security measures, authentication processes,
and user experiences across various domains.
Overall, the advancements in facial recognition technology hold promise for
revolutionizing security, authentication, and user interaction paradigms, paving the
way for safer, more efficient, and more personalized systems and services in the
future.
CHAPTER 9

FUTURE ENHANCEMENTS

1. Improved Accuracy with Advanced Architectures: Future work could
involve experimenting with more advanced neural network architectures, such as
Residual Networks (ResNet) or Inception Networks, to further enhance the model's
accuracy and robustness. These architectures can help capture even more subtle
features and improve performance, especially in challenging scenarios.

2. Real-Time Processing: Enhancing the system to support real-time facial
recognition would be a significant improvement, particularly for applications in
security and surveillance. This could involve optimizing the model for faster
inference times, utilizing hardware acceleration such as GPUs or specialized
hardware, and refining the preprocessing and feature extraction steps to minimize
latency.

3. Integration with Multimodal Biometrics: Future versions of the system
could integrate additional biometric modalities, such as voice recognition, iris
scanning, or fingerprint analysis, to create a more comprehensive and secure
identification system. This multimodal approach can enhance accuracy and provide
a fallback in cases where facial recognition alone might be insufficient.

4. Robustness Against Adversarial Attacks: As facial recognition technology
becomes more prevalent, the need to safeguard against adversarial attacks, where
malicious inputs are used to deceive the system, becomes crucial. Future
enhancements could focus on developing techniques to detect and mitigate these
attacks, ensuring the system's reliability and security.

5. Enhanced Privacy and Data Security: With growing concerns around
privacy and data security, future work could explore more robust encryption
methods and data protection protocols. This includes implementing differential
privacy techniques and secure multi-party computation, and ensuring compliance
with data protection regulations, to safeguard user data and maintain trust.

These future enhancements aim to make the facial recognition system more accurate,
efficient, secure, and versatile, addressing the evolving needs and challenges in the
field of biometric authentication.