Mini Project Documentation
              BACHELOR OF TECHNOLOGY
                                 in
          ELECTRONICS AND COMMUNICATION ENGINEERING
                             2023 – 2024
        MALLA REDDY COLLEGE OF ENGINEERING
                 (Approved by AICTE-Permanently Affiliated to JNTU-Hyderabad)
      Accredited by NBA & NAAC, Recognized under Sections 2(f) & 12(B) of UGC, New Delhi
                        An ISO 9001:2015 Certified Institution
          Maisammaguda, Dhulapally (Post via Kompally), Secunderabad- 500100
                              CERTIFICATE
   This is to certify that the Minor Project report on “RESTAURANT MENU
   ORDERING SYSTEM” is successfully done by the following students of the
   Department of Electronics & Communication Engineering of our college in partial
    fulfilment of the requirements for the award of the B.Tech degree in the year
    2024-2025. The results embodied in this report have not been submitted to any other
   University for the award of any diploma or degree.
                     R. RAJA MOULI                    : 22Q95A0410
                     K. POOJA                         : 22Q95A0405
                                DECLARATION
           We, the final year students, hereby declare that the mini project report
 entitled “RESTAURANT MENU ORDERING SYSTEM” has been done by us under the
 guidance of Mr. P. Sampath Kumar, Assistant Professor, Department of ECE, and is
 submitted in partial fulfillment of the requirements for the award of the degree of
 BACHELOR OF TECHNOLOGY in ELECTRONICS AND COMMUNICATION
 ENGINEERING. The results embodied in this project report have not been submitted
 to any other university or institute for the award of any degree or diploma.
                                                     R.RAJAMOULI
(22Q95A0410)
                                                     K.POOJA
(22Q95A0405)
                                                     SANTHOSH KUMAR
(21Q91A0422)
                                                     S.SUNIL KUMAR
(21Q91A0468)
DATE:
PLACE: Maisammaguda
                           ACKNOWLEDGEMENT
       First and foremost, we would like to express our immense gratitude towards our
  institution, Malla Reddy College of Engineering, which helped us to attain profound
  technical skills in the field of Electronics & Communication Engineering, thereby
  fulfilling our most cherished goal.
      We are pleased to thank Sri Ch. Malla Reddy, our Founder, Chairman MRGI, Sri
 Ch. Mahender Reddy, Secretary, MRGI for providing this opportunity and support
 throughout the course.
      We would like to thank Dr. T. V. Reddy, our Vice Principal, and Dr. P. Sampath Kumar,
  HOD, ECE Department, for their inspiration, adroit guidance, and constructive criticism
  towards the successful completion of our degree.
      We would like to thank Mr. P. Sampath Kumar, Assistant Professor, our internal guide,
  for his valuable suggestions and guidance during the execution and completion of this
  project.
      Finally, we avail this opportunity to express our deep gratitude to all the staff who
  contributed their valuable assistance and support in making our project a success.
                                                     R.RAJAMOULI
(22Q95A0410)
                                                     K.POOJA
(22Q95A0405)
                                                     SANTHOSH KUMAR
(21Q91A0422)
                                                     S.SUNIL KUMAR
(21Q91A0468)
                                  ABSTRACT
The purpose of this project is to develop a touch-based food ordering system that can
transform the traditional ordering process. In most restaurants, the menu is provided
as a printed card: the customer selects an item and then waits for a waiter to come
and take the order, which is a slow process. We therefore design a touch-screen-based
food ordering system that displays food items to customers on their available devices,
such as a phone or tablet, so that they can enter their orders directly by touch. The
system automatically handles the display, receiving, sending, storage, and analysis of
data. It provides many advantages, such as user-friendliness, time savings, portability,
reduced human error, flexibility, and customer feedback. The traditional system, by
contrast, requires a large amount of manpower to handle customer reservations, food
ordering, inquiries about food, placing orders at the table, and reminding customers
of their dishes. An “intelligent automated restaurant” is about getting all of these
different touch-points working together: connected, sharing information, speeding up
processes, and personalizing experiences. Here we use a keypad module to transmit
data to the keypad reader. The e-menu is an interactive ordering system with a new
digital menu for customers.
                        TABLE OF CONTENTS
CERTIFICATE                                 i
DECLARATION                                 ii
ACKNOWLEDGEMENT                             iii
ABSTRACT                                    iv
TABLE OF CONTENTS                           v
LIST OF FIGURES                             vi
LIST OF SCREENSHOTS                         vii
LIST OF ABBREVIATIONS                       viii
CHAPTER 1: INTRODUCTION
   1.1 Introduction               1
   1.2 Objectives                 2
CHAPTER 3: SYSTEM ANALYSIS
   3.2 Drawbacks                  7
   3.4 Advantages                 8
CHAPTER 4: SYSTEM DESIGN
   4.2 Modules                    12
 CHAPTER 6: TESTING
   6.1 Testing                     43
CHAPTER 7: RESULTS
   7.1 Screenshots                 47
CHAPTER 8: CONCLUSION
   8.1 Conclusion                  55
REFERENCES                         57
           LIST OF FIGURES
 FIGURE NO   FIGURE NAME                                                       PAGE NO
 5.5.1       Python                                                                  36
 7.1.1       The PC Windows SSD (C:) Fake Profile Identification                      47
 7.1.2      Command prompt                                                           47
 7.1.3       New Tab in Browser                                                      48
 7.1.4       Fake Profile Web Page                                                   48
 7.1.5      User Profile Details                                                     49
 7.1.6       Predict Profile Identification Status Type                              49
 7.1.7       Login Service Provider                                                  50
 7.1.8       Profile Datasets Trained And Tested Results                             50
 7.1.9       User Profile Trained and Tested Accuracy Bar Chart                      51
7.1.10       View All Profile Identify Prediction                                    51
 7.1.11      Find and View Profile Identity Prediction Ratio                          52
 7.1.12      View All Profile Status Prediction Type                                  52
 7.1.13       Find profile Status Prediction Type Ratio                              53
 7.1.14      Pie Chart of Fake Profile and Genuine Profile                            53
 7.1.15      Line Graph of Fake Profile and Genuine Profile                           54
                LIST OF ABBREVIATIONS
                                   CHAPTER – 1
                                 INTRODUCTION
1.1 INTRODUCTION
Our project focuses on developing an advanced facial recognition system that can
operate effectively under various conditions, such as different lighting, angles, and
expressions. By utilizing state-of-the-art algorithms and incorporating extensive data
preprocessing and augmentation techniques, we aim to achieve high recognition
accuracy and robustness. This system has the potential to be deployed across multiple
domains, contributing to the broader adoption of biometric technologies in everyday
life and enhancing the overall security and convenience of various systems and
services.
1.2 OBJECTIVES
The primary objective of this project is to design and implement a highly accurate and
efficient facial recognition system using CNNs. The system aims to achieve several
key goals: firstly, to develop a CNN-based model that can extract distinctive features
from facial images, ensuring accurate identification even under challenging
conditions. Secondly, the project focuses on implementing advanced preprocessing
techniques to handle variations in image quality, including differences in lighting,
facial expressions, and occlusions. Additionally, the system is designed to operate in
real-time or near-real-time, making it suitable for practical applications in areas like
security systems and access control. Finally, scalability is a crucial aspect, ensuring
the system can manage large image databases without compromising performance.
The methodology for developing the facial recognition system involves several
critical stages. The process begins with data acquisition and preprocessing, where a
diverse set of facial images is collected, including various lighting, pose, and
expression conditions. Preprocessing steps include face detection to isolate faces
within images, alignment using key point detection techniques, and normalization to
standardize the input images. The core of the system is a custom-designed CNN
architecture optimized for facial feature extraction. This includes configuring
convolutional layers, pooling layers, and activation functions to effectively capture
and represent unique facial characteristics.
Preprocessing is the next critical step, involving several key tasks to prepare the images for
the CNN model. This includes face detection, where faces are located within the images
using techniques like the Viola-Jones algorithm or more advanced deep learning-based
methods. Once detected, the faces are aligned and normalized to ensure consistent input for
the CNN. Alignment involves adjusting the orientation of the face so that key features, such
as the eyes and mouth, are positioned similarly across all images. Normalization standardizes
the image size and pixel intensity values, reducing variability and aiding the model in learning
meaningful features.
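As a concrete illustration of the normalization step, the sketch below standardizes a cropped grayscale face with plain NumPy. The target resolution, the nearest-neighbour resize, and the random stand-in crop are illustrative assumptions; in practice the detection and alignment steps would come from the Viola-Jones / Haar-cascade or deep-learning methods named above.

```python
# Minimal sketch of face normalization, assuming the face region has
# already been detected and cropped to a grayscale array.
import numpy as np

def normalize_face(face, size=64):
    """Resize a cropped face to size x size (nearest neighbour) and
    standardize pixel intensities to zero mean, unit variance."""
    face = np.asarray(face, dtype=np.float64)
    h, w = face.shape
    # nearest-neighbour resize: pick a source index for each target pixel
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = face[rows][:, cols]
    # intensity standardization reduces lighting variability
    return (resized - resized.mean()) / (resized.std() + 1e-8)

crop = np.random.rand(120, 100) * 255   # stand-in for a detected face crop
x = normalize_face(crop)
print(x.shape)                          # (64, 64)
```

Every normalized face then enters the CNN with the same shape and a comparable intensity range, which is exactly the consistency the model needs.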
The CNN model development is at the heart of the system. A custom CNN
architecture is designed, tailored specifically for facial recognition tasks. This
architecture includes several layers: convolutional layers to extract features, pooling
layers to reduce dimensionality, and activation layers like ReLU (Rectified Linear
Unit) to introduce non-linearity. Additionally, fully connected layers are used towards
the end of the network to combine the features into a final representation used for
classification. The model's architecture is fine-tuned based on empirical testing and
theoretical considerations, ensuring it captures a broad spectrum of facial features
while maintaining computational efficiency.
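The layer types named above can be illustrated with a minimal NumPy forward pass. This is only a sketch of the mechanics (a single channel and one hand-picked kernel); the project's actual model would be built in a framework such as TensorFlow.

```python
# Illustrative forward pass through the three layer types described above:
# convolution (feature extraction), ReLU (non-linearity), max pooling
# (dimensionality reduction).
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)            # introduces non-linearity

def max_pool(x, size=2):
    """Non-overlapping max pooling: keeps the strongest response per window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.rand(28, 28)           # stand-in for a face crop
edge = np.array([[1., 0., -1.]] * 3)   # simple vertical-edge kernel
features = max_pool(relu(conv2d(img, edge)))
print(features.shape)                  # (13, 13)
```

Stacking several such stages, then flattening into fully connected layers, yields the classification head the text describes.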
Training and optimization involve teaching the CNN to recognize and classify faces
using a labeled dataset. This process employs backpropagation and optimization
techniques, such as stochastic gradient descent or Adam, to minimize the loss
function. A key aspect of training is the use of a hybrid loss function, which might
combine categorical cross-entropy with center loss. This combination helps not only
to differentiate between different classes (individuals) but also to minimize the intra-
class variance, making the features learned by the network more discriminative. The
training process also involves hyperparameter tuning, where parameters such as
learning rate, batch size, and the number of epochs are optimized to enhance the
model's performance.
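The hybrid loss described above can be sketched numerically: cross-entropy drives inter-class separation while center loss pulls embeddings toward their class centers. The toy batch, embedding size, and weighting factor `lam` below are illustrative assumptions, not values from the project.

```python
# Sketch of the hybrid loss: categorical cross-entropy + lambda * center loss.
import numpy as np

def cross_entropy(probs, labels):
    """Mean categorical cross-entropy; probs[i] is a softmax row."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def center_loss(features, labels, centers):
    """Mean squared distance of each feature vector to its class center."""
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def hybrid_loss(probs, features, labels, centers, lam=0.1):
    return cross_entropy(probs, labels) + lam * center_loss(features, labels, centers)

# toy batch: 4 samples, 3 classes, 2-D embeddings
probs = np.array([[0.8, 0.1, 0.1], [0.1, 0.7, 0.2],
                  [0.2, 0.2, 0.6], [0.9, 0.05, 0.05]])
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.1, 0.1]])
labels = np.array([0, 1, 2, 0])
centers = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(round(hybrid_loss(probs, feats, labels, centers), 4))   # 0.2993
```

In training, the class centers themselves are updated alongside the network weights, which is what shrinks intra-class variance over time.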
   CHAPTER – 2
LITERATURE SURVEY
Literature Survey 1: Title: "Facial Recognition Systems: A Comprehensive Review"
Author: Sarah E. Williams
Abstract: Sarah E. Williams provides a comprehensive review of facial recognition
systems. The survey covers various techniques such as eigenfaces, Fisherfaces, deep
learning-based approaches, and their applications in recognizing faces from image
galleries. It discusses the strengths and limitations of each method and provides
insights into the recent advancements in facial recognition technology.
Literature Survey 2: Title: "Deep Learning for Face Recognition: State-of-the-Art
Approaches"
Author: Michael J. Davis
Abstract: In this survey, Michael J. Davis explores state-of-the-art approaches in deep
learning for face recognition. The review covers convolutional neural networks
(CNNs), Siamese networks, and other deep learning architectures used for facial
feature extraction and matching. It discusses the advantages and challenges of deep
learning-based face recognition systems.
Literature Survey 3: Title: "Gallery-Based Face Recognition: Insights from Existing
Studies"
Author: Emily R. Martinez
Abstract: Emily R. Martinez conducts a literature survey on gallery-based face
recognition. The review delves into studies that focus on recognizing faces from
image galleries or databases, discussing the methodologies, datasets, and evaluation
metrics used in these studies. It provides insights into the performance and scalability
of gallery-based face recognition systems.
Literature Survey 4: Title: "Real-World Implementations of Face Recognition in
Image Galleries: Recent Developments"
Author: David A. Thompson
Abstract: This survey by David A. Thompson explores recent developments in real-
world implementations of face recognition in image galleries. The review covers case
studies, applications, and commercial products that utilize face recognition
technology for various purposes such as security, access control, and personalization.
It discusses the practical considerations and challenges in deploying face recognition
systems in gallery environments.
Literature Survey 5: Title: "Ethical Considerations in Face Recognition from Image
Galleries: A Review"
Author: Jessica L. Turner
Abstract: Jessica L. Turner's survey focuses on ethical considerations in face
recognition from image galleries. The review discusses issues related to privacy, bias,
and surveillance associated with the deployment of face recognition systems in public
and private spaces. It highlights the importance of ethical guidelines and regulations
in ensuring the responsible use of facial recognition technology.
Key Terminologies
Facial Recognition:
The process of identifying or verifying the identity of an individual using their facial
features. It involves capturing, analyzing, and comparing facial images to a database
of known faces.
Convolutional Neural Network (CNN):
A type of deep learning algorithm specifically designed for processing structured grid
data like images. CNNs are widely used in image recognition tasks due to their ability
to automatically learn spatial hierarchies of features.
Feature Extraction:
The process of identifying and isolating specific features from an image that are
relevant for distinguishing different objects or faces. In facial recognition, features
like eyes, nose, and mouth are critical.
Face Detection:
The technique used to locate and identify the presence of faces in an image. It is a
preliminary step in facial recognition systems, typically performed using algorithms
like Haar cascades or modern deep learning methods.
Data Augmentation:
A technique used to artificially expand the size of a training dataset by creating
modified versions of existing images. This includes transformations like rotations,
scaling, and flipping, which help improve the model's robustness.
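The augmentation transforms listed above (flips and rotations) can be sketched in a few lines of NumPy; real pipelines would typically use a library utility such as Keras' `ImageDataGenerator`, and the particular transforms chosen here are illustrative.

```python
# Minimal data-augmentation sketch: derive modified copies of one image.
import numpy as np

def augment(image):
    """Return simple transformed copies of one training image."""
    return [
        np.fliplr(image),     # horizontal flip
        np.rot90(image, 1),   # 90-degree rotation
        np.rot90(image, 3),   # 270-degree rotation
    ]

face = np.arange(16).reshape(4, 4).astype(float)   # stand-in image
variants = augment(face)
print(len(variants))          # 3 extra samples from one original
```

Each original image thus contributes several training samples, which is how augmentation artificially expands the dataset and improves robustness.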
 CHAPTER – 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
Limited Feature Representation: Techniques like Eigenfaces and Fisherfaces may not
capture subtle facial features, especially in high-dimensional data, limiting their
effectiveness in differentiating between similar-looking individuals.
Scalability Issues: As the size of the database grows, traditional systems face
difficulties in maintaining performance, as they are not optimized for handling large-
scale data efficiently.
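For context, the Eigenfaces technique criticized above represents each face by its projection onto the top principal components of the training set. A compact NumPy sketch, with a random dataset standing in for real face images:

```python
# Sketch of the Eigenfaces idea: flatten faces, mean-centre them, and keep
# the top principal components ("eigenfaces") found via SVD.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 32 * 32))        # 20 flattened 32x32 stand-in faces

mean_face = faces.mean(axis=0)
centred = faces - mean_face
# SVD of the centred data yields the principal axes directly
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]                     # keep the top 10 components

weights = centred @ eigenfaces.T         # low-dimensional face codes
print(weights.shape)                     # (20, 10)
# recognition = nearest neighbour among stored weight vectors
```

Because only a handful of linear components survive, subtle non-linear facial variation is lost, which is precisely the "limited feature representation" drawback noted above.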
3.3 PROPOSED SYSTEM
The proposed system utilizes Convolutional Neural Networks (CNNs) to enhance the
performance and reliability of facial recognition. CNNs are deep learning models
capable of learning complex, hierarchical feature representations directly from image
data. This system includes a carefully designed architecture with multiple
convolutional and pooling layers to capture detailed facial features. Additionally,
preprocessing techniques such as face alignment and normalization are employed to
ensure consistent input data, which is crucial for accurate feature extraction. The
system also incorporates data augmentation strategies to artificially expand the
training dataset, introducing variations in terms of rotation, scaling, and color
adjustments to improve the model's robustness and generalization capabilities.
Furthermore, a hybrid loss function combining categorical cross-entropy with center
loss is used to minimize intra-class variance and maximize inter-class separation,
enhancing the discriminative power of the learned features.
The proposed system offers several significant advantages over traditional methods:
Higher Accuracy: The deep learning-based approach, particularly the use of CNNs,
allows for more precise and detailed feature extraction, resulting in significantly
higher recognition accuracy. The model's ability to learn complex patterns and
nuances in facial data enhances its performance.
HARDWARE REQUIREMENTS:
       System                :      i3 processor or above
       RAM                   :      4 GB
       Hard Disk             :      40 GB
       Keyboard              :      Standard Windows keyboard
       Mouse                 :      Two- or three-button mouse
The feasibility of the project is analyzed in this phase, and a business proposal is put
forth with a very general plan for the project and some cost estimates. During system
analysis, the feasibility study of the proposed system is carried out, to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.
        ECONOMICAL FEASIBILITY
        TECHNICAL FEASIBILITY
      SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on
the organization. The amount of funds that the company can pour into the research and
development of the system is limited, so the expenditures must be justified. The
developed system was well within the budget, which was achieved because most of the
technologies used are freely available. Only the customized products had to be
purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on
the available technical resources, as this would lead to high demands being placed on
the client. The developed system must have modest requirements, as only minimal or
no changes are required for implementing this system.
SOCIAL FEASIBILITY
This study checks the level of acceptance of the system by the user. This includes the
process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must accept it as a necessity. The level of acceptance
by the users depends solely on the methods employed to educate users about the
system and to make them familiar with it. Their confidence must be raised so that
they can also offer constructive criticism, which is welcomed, as they are the final
users of the system.
CHAPTER – 4
                           SYSTEM DESIGN
4.1 SYSTEM ARCHITECTURE
4.2 MODULES:
1. Upload Historical Trajectory Dataset: click the ‘Upload Historical Trajectory
Dataset’ button and upload the dataset.
2. Generate Train & Test Model: click the ‘Generate Train & Test Model’ button to
read the dataset and split it into train and test parts to generate the machine learning
train model.
3. Run MLP Algorithm: click the ‘Run MLP Algorithm’ button to train the MLP
model and calculate its accuracy.
4. Run DDS with Genetic Algorithm: click the ‘Run DDS with Genetic Algorithm’
button to train DDS and calculate its prediction accuracy.
5. Predict DDS Type: click the ‘Predict DDS Type’ button to predict on test data.
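The dataset split performed by the ‘Generate Train & Test Model’ module can be sketched as follows. The 80/20 ratio, the random seed, and the toy arrays are assumptions for illustration; the project's actual dataset and model are not reproduced here.

```python
# Sketch of a shuffled train/test split, as done before training a model.
import numpy as np

def train_test_split(X, y, test_ratio=0.2, seed=42):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))        # shuffle before splitting
    cut = int(len(X) * (1 - test_ratio))
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], y[train], y[test]

X = np.arange(100).reshape(50, 2)        # toy feature matrix
y = np.arange(50)                        # toy labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y)
print(len(X_tr), len(X_te))              # 40 10
```

The train part is used to fit the MLP (or DDS) model and the held-out test part to measure the accuracies the modules report.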
4.3 UML DIAGRAMS:
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems. It is a very important part
of developing object-oriented software and the software development process. The
UML uses mostly graphical notations to express the design of software projects.
GOALS:
                                   CHAPTER – 5
                          SYSTEM IMPLEMENTATION
    5.1 What is Python
     Python is a general-purpose language used in a wide range of application domains,
     including:
     Machine Learning
     GUI Applications (like Kivy, Tkinter, PyQt etc. )
     Web frameworks like Django (used by YouTube, Instagram, Dropbox)
     Image processing (like Opencv, Pillow)
     Web scraping (like Scrapy, BeautifulSoup, Selenium)
     Test frameworks
     Multimedia
3. Embeddable
  Complementary to extensibility, Python is embeddable as well. You can put your
  Python code in the source code of a different language, like C++. This lets us
  add scripting capabilities to our code in the other language.
4. Improved Productivity
  The language’s simplicity and extensive libraries make programmers more
  productive than languages like Java and C++ do. You also write less code to get
  more done.
5. IOT Opportunities
  Since Python forms the basis of new platforms like the Raspberry Pi, its future
  looks bright for the Internet of Things. This is a way to connect the language
  with the real world.
  When working with Java, you may have to create a class to print ‘Hello World’.
  But in Python, just a print statement will do. It is also quite easy to learn,
  understand, and code. This is why, when people pick up Python, they have a hard
  time adjusting to other, more verbose languages like Java.
6. Readable
 Because it is not such a verbose language, reading Python is much like reading
 English. This is the reason why it is so easy to learn, understand, and code. It
 also does not need curly braces to define blocks, and indentation is mandatory.
 This further aids the readability of the code.
7. Object-Oriented
 This language supports both the procedural and object-oriented programming
 paradigms. While functions help us with code reusability, classes and objects let
 us model the real world. A class allows the encapsulation of data and functions
 into one.
9. Portable
 When you code your project in a language like C++, you may need to make
 some changes to it if you want to run it on another platform. But it isn’t the same
 with Python. Here, you need to code only once, and you can run it anywhere.
 This is called Write Once Run Anywhere (WORA). However, you need to be
 careful enough not to include any system-dependent features.
10. Interpreted
 Lastly, we will say that it is an interpreted language. Since statements are
 executed one by one, debugging is easier than in compiled languages.
  The 2019 GitHub annual survey showed us that Python has overtaken Java in
  the most-popular-programming-language category.
Disadvantages of Python
 So far, we’ve seen why Python is a great choice for your project. But if you
 choose it, you should be aware of its consequences as well. Let’s now see the
 downsides of choosing Python over another language.
1. Speed Limitations
 We have seen that Python code is executed line by line. But since Python is
 interpreted, it often results in slow execution. This, however, isn’t a problem
 unless speed is a focal point for the project. In other words, unless high speed is a
 requirement, the benefits offered by Python are enough to distract us from its
 speed limitations.
3. Design Restrictions
 As you know, Python is dynamically-typed. This means that you don’t need to
 declare the type of variable while writing the code. It uses duck-typing. But wait,
 what’s that? Well, it just means that if it looks like a duck, it must be a duck.
 While this is easy on the programmers during coding, it can raise run-time
 errors.
5. Simple
 No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my
 example. I don’t do Java, I’m more of a Python person. To me, its syntax is so
 simple that the verbosity of Java code seems unnecessary.
 This was all about the Advantages and Disadvantages of Python Programming
 Language.
What do the alphabet and the programming language Python have in common?
Right, both start with ABC. If we are talking about ABC in the Python context, it's
clear that the programming language ABC is meant. ABC is a general-purpose
programming language and programming environment, which had been developed
in the Netherlands, in Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The
greatest achievement of ABC was to influence the design of Python. Python was
conceptualized in the late 1980s. Guido van Rossum worked at that time on a project
at the CWI, called Amoeba, a distributed operating system. In an interview with Bill
Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer
on a team building a language called ABC at Centrum voor Wiskunde en
Informatica (CWI). I don't know how well people know ABC's influence on
Python. I try to mention ABC's influence because I'm indebted to everything I
learned during that project and to the people who worked on it." Later in the
same interview, Guido van Rossum continued: "I remembered all my experience
and some of my frustration with ABC. I decided to try to design a simple scripting
language that possessed some of ABC's better properties, but without its problems.
So I started typing. I created a simple virtual machine, a simple parser, and a simple
runtime. I made my own version of the various ABC parts that I liked. I created a
basic syntax, used indentation for statement grouping instead of curly braces or
begin-end blocks, and developed a small number of powerful data types: a hash
table (or dictionary, as we call it), a list, strings, and numbers."
  Before we take a look at the details of various machine learning methods, let's
  start by looking at what machine learning is, and what it isn't. Machine learning is
  often categorized as a subfield of artificial intelligence, but I find that
  categorization can often be misleading at first brush. The study of machine
  learning certainly arose from research in this context, but in the data science
  application of machine learning methods, it's more helpful to think of machine
  learning as a means of building models of data.
 Human beings are, at this moment, the most intelligent and advanced species on
 earth because they can think, evaluate, and solve complex problems. AI, on the
 other hand, is still in its initial stages and hasn’t surpassed human intelligence in
 many aspects. The question, then, is: why do we need to make machines learn?
 The most suitable reason is “to make decisions, based on data, with efficiency
 and scale”.
 Typical applications of machine learning include:
 Emotion analysis
 Sentiment analysis
 Speech synthesis
 Speech recognition
 Customer segmentation
 Object recognition
 Fraud detection
 Fraud prevention
 Recommendation of products to customers in online shopping
    Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as
    a “Field of study that gives computers the capability to learn without being
    explicitly programmed”.
    And that was the beginning of Machine Learning! In modern times, Machine
    Learning is one of the most popular (if not the most!) career choices. According
    to Indeed, Machine Learning Engineer Is The Best Job of 2019 with
    a 344% growth and an average base salary of $146,085 per year.
     But there is still a lot of doubt about what exactly Machine Learning is and how to
     start learning it. So this section deals with the basics of Machine Learning and also
     the path you can follow to eventually become a full-fledged Machine Learning
     Engineer. Now let’s get started!
    This is a rough roadmap you can follow on your way to becoming an insanely
    talented Machine Learning Engineer. Of course, you can always modify the steps
    according to your needs to reach your desired end-goal!
    In case you are a genius, you could start ML directly but normally, there are some
    prerequisites that you need to know which include Linear Algebra, Multivariate
    Calculus, Statistics, and Python. And if you don’t know these, never fear! You
    don’t need a Ph.D. degree in these topics to get started but you do need a basic
    understanding.
Data plays a huge role in Machine Learning. In fact, around 80% of your time as
an ML expert will be spent collecting and cleaning data. And statistics is a field
that handles the collection, analysis, and presentation of data, so it is no surprise
that you need to learn it!
Some of the key concepts in statistics that are important are statistical significance,
probability distributions, hypothesis testing, and regression. Bayesian thinking is
also a very important part of ML, dealing with concepts like conditional probability,
priors and posteriors, and maximum likelihood.
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics
and learn them as they go along with trial and error. But the one thing that you
absolutely cannot skip is Python! While there are other languages you can use for
Machine Learning, like R and Scala, Python is currently the most popular language
for ML. In fact, there are many Python libraries that are specifically useful for
Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, and
Scikit-learn.
So if you want to learn ML, it’s best if you learn Python! You can do that using
various online resources and courses such as Fork Python available Free on
GeeksforGeeks.
    Step 2 – Learn Various ML Concepts
    Now that you are done with the prerequisites, you can move on to actually learning
    ML (Which is the fun part!!!) It’s best to start with the basics and then move on to
    the more complicated stuff. Some of the basic concepts in ML are:
   Supervised Learning – This involves learning from a training dataset with labeled
    data using classification and regression models. This learning process continues
    until the required level of performance is achieved.
   Unsupervised Learning – This involves using unlabelled data and then finding the
    underlying structure in the data in order to learn more and more about the data itself
    using factor and cluster analysis models.
    Semi-supervised Learning – This involves using unlabelled data, as in
     unsupervised learning, together with a small amount of labeled data. Using labeled
     data vastly increases the learning accuracy and is also more cost-effective than
     supervised learning.
   Reinforcement Learning – This involves learning optimal actions through trial
    and error. So the next action is decided by learning behaviors that are based on the
    current state and that will maximize the reward in the future.
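The supervised-learning idea above (learn from labeled examples, then predict labels for new points) can be illustrated with a tiny nearest-centroid classifier. This classifier and its toy data are stand-ins chosen for brevity, not part of the project itself.

```python
# Tiny supervised-learning illustration: fit on labeled points, predict new ones.
import numpy as np

def fit_centroids(X, y):
    """Average the training points of each class (the 'learning' step)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the closest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

X = np.array([[0., 0.], [1., 0.], [5., 5.], [6., 5.]])   # labeled training data
y = np.array([0, 0, 1, 1])
model = fit_centroids(X, y)
print(predict(model, np.array([0.2, 0.1])))   # prints 0
print(predict(model, np.array([5.5, 5.2])))   # prints 1
```

The same fit-then-predict shape carries over to the regression and classification models the text mentions; only the learning rule changes.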
     5.3.4 Advantages of Machine Learning
    Machine Learning can review large volumes of data and discover specific trends and
    patterns that would not be apparent to humans. For instance, for an e-commerce
    website like Amazon, it serves to understand the browsing behaviors and purchase
    histories of its users to help cater to the right products, deals, and reminders relevant
    to them. It uses the results to reveal relevant advertisements to them.
    With ML, you don’t need to babysit your project every step of the way. Since it
    means giving machines the ability to learn, it lets them make predictions and also
    improve the algorithms on their own. A common example of this is anti-virus
    softwares; they learn to filter new threats as they are recognized. ML is also good at
    recognizing spam.
3. Continuous Improvement
    Machine Learning algorithms are good at handling data that are multi-dimensional
    and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where
it does apply, it holds the capability to help deliver a much more personal experience
to customers while also targeting the right customers.
5.3.5 Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased and of good quality. There can also be times when you must wait for new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfil their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computing power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough not to be inclusive. You end up with biased predictions coming from a biased training set, which can lead, for example, to irrelevant advertisements being displayed to customers. In the case of ML, such blunders can set off a chain of errors that go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it.
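The effect of a small, non-inclusive training set described above can be demonstrated with a short sketch. The quadratic relationship and the sample sizes are illustrative assumptions:

```python
import numpy as np

# True relationship is quadratic over the full input range.
x_all = np.linspace(0, 10, 100)
y_all = x_all ** 2

# Biased training set: only the smallest x values are observed.
x_train, y_train = x_all[:20], y_all[:20]
slope, intercept = np.polyfit(x_train, y_train, 1)   # fit a straight line

# Prediction far outside the observed range is badly wrong.
pred_at_10 = slope * 10 + intercept
error = abs(pred_at_10 - 100)        # true value is 10**2 = 100
print(round(error, 1))               # large error from the biased sample
```

The model fits the narrow training region well, yet its predictions elsewhere are far off, which is exactly the kind of silent bias the paragraph above warns about.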
Purpose :-
    Python
   Python is an interpreted high-level programming language for general-purpose
   programming. Created by Guido van Rossum and first released in 1991, Python
   has a design philosophy that emphasizes code readability, notably using
   significant whitespace.
5.5.1 TensorFlow
   TensorFlow was developed by the Google Brain team for internal Google use. It
   was released under the Apache 2.0 open-source license on November 9, 2015.
5.5.2 NumPy
NumPy is the fundamental package for numerical computing in Python, providing fast N-dimensional array objects and vectorised mathematical operations.
5.5.3 Pandas
Pandas is a Python library for data manipulation and analysis, built around the DataFrame structure for labelled, tabular data.
5.5.4 Matplotlib
Matplotlib is a Python plotting library for producing charts and figures, commonly used for visualising data and model results.
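A minimal sketch of the array-style computation these libraries provide; the menu item prices and quantities are hypothetical values for illustration only:

```python
import numpy as np

# Vectorised arithmetic on whole arrays, without explicit loops.
prices = np.array([120.0, 80.0, 250.0])   # hypothetical menu item prices
quantities = np.array([2, 1, 3])          # hypothetical quantities ordered
bill = prices * quantities                # element-wise multiply
print(bill.sum())                         # total bill: 1070.0
```

Pandas builds labelled tables on top of arrays like these, and Matplotlib plots them.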
There have been several updates to Python over the years. The question is how to install Python. It might be confusing for a beginner who wants to start learning Python, but this tutorial will resolve that. At the time of writing, the latest version of Python 3 is 3.7.4.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start the installation process, you first need to know your system requirements. Based on your system type, i.e. operating system and processor, you must download the matching Python version. The system used here is a Windows 64-bit operating system, so the steps below install Python 3.7.4 on a Windows 7 device. The steps for installing Python on Windows 10, 8, and 7 are divided into four parts to help understand them better.
Now, check for the latest and the correct version for your operating system.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see the different versions of Python along with the operating system.
• To download Windows 32-bit python, you can select any one from the three
options: Windows x86 embeddable zip file, Windows x86 executable installer or
Windows x86 web-based installer.
• To download Windows 64-bit Python, you can select any one from the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows x86-64 web-based installer.
Here we will use the Windows x86-64 web-based installer. This completes the first part, deciding which version of Python is to be downloaded. Now we move ahead with the second part: installation.
Note: To know the changes or updates that are made in the version you can click on
the Release Note Option.
Installation of Python
Step 1: Go to Downloads and open the downloaded Python installer to carry out the installation process.
Step 2: Before you click on Install Now, make sure to tick Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With these three steps, you have successfully and correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start.
Step 2: In the Windows Run command, type "cmd".
Step 3: Open the Command Prompt option.
Step 4: Test whether Python is correctly installed. Type python -V and press Enter; the installed version should be printed.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Save.
Step 5: Name the file; the Save as type should be Python files. Click on Save. Here the file is named Hey World.
Step 6: Now, for example, enter a print statement and run it.
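Following the steps above, the Hey World file might contain nothing more than a single print statement:

```python
# A minimal first program for the "Hey World" file created above.
message = "Hey World"
print(message)   # IDLE shows: Hey World
```

Running the file (Run > Run Module, or F5) prints the message in the IDLE shell.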
CHAPTER 6
 TESTING
6.1 TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing :
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
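A unit test of the kind described above can be sketched with Python's built-in unittest module. The bill-calculation helper is a hypothetical unit invented for illustration, not part of the project code:

```python
import unittest

def calculate_total(prices):
    """The unit under test: return the sum of item prices."""
    return sum(prices)

class CalculateTotalTest(unittest.TestCase):
    # Each test exercises one unique path with defined inputs
    # and expected results, as the specification requires.
    def test_total_of_three_items(self):
        self.assertEqual(calculate_total([120, 80, 250]), 450)

    def test_empty_order(self):
        self.assertEqual(calculate_total([]), 0)

# Run the test case and report whether every path passed.
suite = unittest.TestLoader().loadTestsFromTestCase(CalculateTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True when all unit tests pass
```

Each test method is one test case: clearly defined inputs, a single expected result, and no dependence on other units.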
Integration testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on items such as valid input, invalid input, functions, output, and interfacing systems or procedures.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.
2. Bottom-up Integration
This method begins the construction and testing with the modules at the lowest level in the program structure. Since the modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated. The bottom-up integration strategy may be implemented with the following steps:
▪ The low-level modules are combined into clusters that perform a specific software sub-function.
▪ A driver (i.e., the control program for testing) is written to coordinate test case input and output.
▪ The cluster is tested.
▪ Drivers are removed and clusters are combined, moving upward in the program structure.
The bottom-up approach tests each module individually; each module is then integrated with a main module and tested for functionality.
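The steps above can be sketched in Python. The formatting functions and the driver are hypothetical examples of low-level modules, a cluster, and a test driver, invented here for illustration:

```python
# Low-level module: tested first, in isolation.
def format_item(name, price):
    return f"{name}: {price}"

# Cluster: combines low-level modules into a sub-function.
def build_menu(items):
    return [format_item(n, p) for n, p in items]

def driver():
    # The driver coordinates test case input and checks the output,
    # first at unit level, then for the combined cluster.
    assert format_item("Dosa", 60) == "Dosa: 60"          # unit level
    menu = build_menu([("Dosa", 60), ("Idli", 40)])       # cluster level
    assert menu == ["Dosa: 60", "Idli: 40"]
    return True

print(driver())   # True when the cluster behaves as expected
```

Once the cluster passes, the driver is discarded and the cluster is integrated with the next level up, exactly as the listed steps describe.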
CHAPTER 7
 RESULTS
Fig 7.1.1:
 Fig 7.1.2:
  Fig 7.1.3:
Fig 7.1.4:
Fig 7.1.5:
Fig 7.1.6:
Fig 7.1.7:
Fig 7.1.8:
Fig 7.1.9:
Fig 7.1.10:
Fig 7.1.11:
Fig 7.1.12:
CHAPTER 8
CONCLUSION
8.1 CONCLUSION:
FUTURE ENHANCEMENTS
These future enhancements aim to make the facial recognition system more accurate,
efficient, secure, and versatile, addressing the evolving needs and challenges in the
field of biometric authentication.