
A PROJECT

ON

SIGN-SPEAK

UNIVERSITY OF MUMBAI
In partial fulfilment of the degree of

BACHELOR OF SCIENCE (INFORMATION TECHNOLOGY)

By
Priya Srivastava

Under the esteemed guidance of


Dr. Meghna Bhatia
H.O.D

DEPARTMENT OF INFORMATION TECHNOLOGY

SIES (NERUL) COLLEGE OF ARTS, SCIENCE & COMMERCE
(Affiliated to University of Mumbai)
NERUL, NAVI MUMBAI – 400706
SIGN -SPEAK

A Project Submitted to the University of Mumbai in partial fulfilment of the Degree of
BACHELOR OF SCIENCE (INFORMATION TECHNOLOGY)

BY

PRIYA SRIVASTAVA

(T.22.92)

Under The Esteemed Guidance Of

MRS. Sameera Ibrahim &

DR. Meghna Bhatia

DEPARTMENT OF INFORMATION TECHNOLOGY


SIES (NERUL) COLLEGE OF ARTS, SCIENCE & COMMERCE

(Affiliated to University of Mumbai)


NERUL, NAVI MUMBAI – 400706
MAHARASHTRA
PROFORMA FOR THE APPROVAL OF PROJECT PROPOSAL

PNR No 2022016401787705 Roll no T.22.92

1. Name of the Student – PRIYA SRIVASTAVA


2. Title of the Project : SIGN SPEAK
3. Name of the Guide: MRS. Sameera Ibrahim
4. Teaching experience of the Guide
5. Is this your first submission? Yes No
Signature of the Student Signature of the Guide

Date: ..................... Date: .........................

Signature of the Coordinator    Date: .....................

2024-2025
DECLARATION

I hereby declare that the project entitled "SIGN-SPEAK", done at SIES (Nerul) College of Arts, Science and Commerce, has not in any case been duplicated or submitted to any other university for the award of any degree. To the best of my knowledge, no one other than me has submitted it to any other university.

The project is done in partial fulfilment of the requirements for the award of the degree of BACHELOR OF SCIENCE (INFORMATION TECHNOLOGY) and is submitted as the final semester project as part of our curriculum.

Name and Signature of the Student

Date: …………………

Acknowledgement
I would like to express my deepest gratitude to my project guides, Ms. Sameera Ibrahim and Dr. Meghna Bhatia, for their invaluable guidance, support, and encouragement throughout the course of this project. Their expertise and insights have been crucial in shaping the direction and execution of this work, providing me with the knowledge and confidence needed to complete it successfully.

I extend my sincere thanks to the BSC IT Department of my college, whose


resources and support have been instrumental in the development of this project. I am
particularly grateful for the learning environment provided, which has allowed me to
explore and delve deeper into the subject matter.

I would also like to thank my parents and friends for their unwavering
support and encouragement throughout this journey. Their constant motivation and
understanding have
been a driving force that helped me complete this project within the given time frame.

Lastly, my heartfelt gratitude goes to everyone who contributed directly


or indirectly to this project. Your support has been greatly appreciated, and this
project would not have been possible without your valuable inputs and
encouragement.
Thanking you

Table of Contents

Chapter 1: Introduction...........................................................7
   1.1 Objective..................................................................7
   1.2 Scope......................................................................8
   1.3 Project Modules............................................................8
   1.4 Project Requirements.......................................................8
      1.4.1 Hardware Requirement..................................................8
      1.4.2 Software Requirement..................................................8
Chapter 2: Survey of Technology...................................................9
Chapter 3: System Analysis and Design............................................21
   3.1 Comparison Table..........................................................21
   3.2 Research Gap..............................................................21
   3.3 Project Feasibility Study.................................................22
      3.3.1 Operational Feasibility..............................................22
      3.3.2 Technical Feasibility................................................22
      3.3.3 Economic Feasibility.................................................23
   3.4 Timeline Chart............................................................23
Chapter 4: Software Requirement..................................................24
   4.1 Implementation and Testing................................................25
Chapter 5: Coding................................................................28
Chapter 6: Circuit Diagram/E-R Diagram...........................................63
   6.1 Use Case Model............................................................65
   6.2 Data-Flow Diagram.........................................................66
   6.3 ER Diagram................................................................68
Chapter 7: Conclusion............................................................71
Chapter 8: References............................................................72

Abstract: SIGN-SPEAK

Sign language is one of the oldest and most natural forms of language for communication; hence we have come up with a real-time method using neural networks for fingerspelling-based American Sign Language. Automatic human gesture recognition from camera images is an interesting topic in computer vision. We propose a convolutional neural network (CNN) method to recognize hand gestures of human actions from an image captured by a camera. The purpose is to recognize hand gestures of human task activities from a camera image. The position and orientation of the hand are used to obtain the training and testing data for the CNN. The hand is first passed through a filter and, after the filter is applied, the hand is passed through a classifier which predicts the class of the hand gesture. The calibrated images are then used to train the CNN.

Data pre-processing and feature extraction:

In this approach for hand detection, we first detect the hand in the image acquired by the webcam; for detecting the hand we use the MediaPipe library, which is used for image processing. After finding the hand in the image we get the region of interest (ROI), crop that region and convert it to a grey image using the OpenCV library, after which we apply a Gaussian blur. The filter can easily be applied using the Open Computer Vision library, also known as OpenCV. We then convert the grey image to a binary image using threshold and adaptive-threshold methods.
We have collected images of different signs at different angles for the sign letters A to Z.
This method has many loopholes: the hand must be in front of a clean, plain background and in proper lighting conditions for the method to give accurate results, but in the real world we do not get a good background everywhere, nor good lighting conditions.
To overcome this situation we tried different approaches and arrived at one interesting solution: we first detect the hand in the frame using MediaPipe and get the hand landmarks of the hand present in that image, and then we draw and connect those landmarks on a plain white image.
By doing this we handle varying backgrounds and lighting conditions, because the MediaPipe library gives us landmark points against almost any background and in almost any lighting conditions.

Text-to-speech translation: The model translates recognized gestures into words. We have used the pyttsx3 library to convert the recognized words into the appropriate speech. The text-to-speech output is a simple workaround, but it is a useful feature because it simulates a real-life dialogue.

Keywords: Machine learning, hand gestures, image processing, MediaPipe, long short-term memory, convolutional neural network, real-time recognition, sign language recognition.

Chapter 1: Introduction
Sign-Language-To-Text-and-Speech-Conversion
American Sign Language is a predominant sign language. Since the only disability D&M people have is communication-related and they cannot use spoken languages, the only way for them to communicate is through sign language. Communication is the process of exchanging thoughts and messages in various ways such as speech, signals, behaviour and visuals. Deaf and dumb (D&M) people make use of their hands to express different gestures and share their ideas with other people. Gestures are nonverbally exchanged messages, and these gestures are understood with vision. This nonverbal communication of deaf and dumb people is called sign language.
In our project we focus on producing a model which can recognise fingerspelling-based hand gestures in order to form a complete word by combining each gesture. The gestures we aim to train are as given in the image below.

Fig 1.

1.1 Objective:
More than 70 million deaf people around the world use sign languages to communicate. Sign language allows them to learn, work, access services and be included in their communities.
It is hard to make everybody learn sign language in order to ensure that people with disabilities can enjoy their rights on an equal basis with others.

So, the aim is to develop a user-friendly human–computer interface (HCI) where the computer understands American Sign Language. This project will help deaf and dumb people by making their life easier. The goal is to create computer software and train a CNN model which takes an image of an American Sign Language hand gesture, shows the output of the particular sign in text format and converts it into audio format.

1.2 Scope
This system will be beneficial both for deaf/dumb people and for people who do not understand sign language. The user just needs to perform the sign language gestures, and the system will identify what he/she is trying to say; after identification it gives the output in the form of text as well as speech.

1.3 Project Modules

1.3.1 Data Acquisition
1.3.2 Data Pre-processing and Feature Extraction
1.3.3 Gesture Classification
1.3.4 Text and Speech Translation

1.4 Project Requirements


1.4.1 Hardware Requirement

 Webcam

1.4.2 Software Requirement

 Operating System: Windows 8 and Above


 IDE: PyCharm
 Programming Language: Python 3.9.5
 Python libraries: OpenCV, NumPy, Keras, MediaPipe, TensorFlow

Chapter-2
Survey of Technologies

1. Data Acquisition
The acquisition of hand gesture data is crucial for training an accurate model. The two major
approaches for data collection are:
Glove-Based Methods: These methods use electromechanical gloves to detect hand gestures
precisely. However, they are expensive and not user-friendly.
Vision-Based Methods: These methods utilize a webcam to capture hand movements. They
are cost-effective and do not require additional devices. However, challenges such as varying
skin tones, lighting conditions, and background complexities need to be addressed.
To overcome these challenges, the MediaPipe library is used to detect hand landmarks, ensuring robustness across different lighting conditions and backgrounds.
2. Data Pre-processing and Feature Extraction
In this approach for hand detection, we first detect the hand in the image acquired by the webcam; for detecting the hand we use the MediaPipe library, which is used for image processing. After finding the hand in the image we get the region of interest (ROI), crop that region and convert it to a grey image using the OpenCV library, after which we apply a Gaussian blur. The filter can easily be applied using the Open Computer Vision library, also known as OpenCV. We then convert the grey image to a binary image using threshold and adaptive-threshold methods.
We have collected images of different signs at different angles for the sign letters A to Z.
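A minimal sketch of this pre-processing step is shown below (the helper name preprocess_roi is ours for illustration; the blur kernel and threshold constants mirror the values used in data_collection_binary.py in Chapter 5):

import cv2

def preprocess_roi(roi_bgr):
    """Grey -> Gaussian blur -> adaptive threshold -> Otsu, as described above."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 2)
    th = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)
    _, binary = cv2.threshold(th, 27, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binary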

This method has many loopholes: the hand must be in front of a clean, plain background and in proper lighting conditions for the method to give accurate results, but in the real world we do not get a good background everywhere, nor good lighting conditions.
So, to overcome this situation we tried different approaches and arrived at one interesting solution: we first detect the hand in the frame using MediaPipe and get the hand landmarks of the hand present in that image, and then we draw and connect those landmarks on a plain white image.

MediaPipe Landmark System:

We take these landmark points and draw them on a plain white background using the OpenCV library.
By doing this we handle varying backgrounds and lighting conditions, because the MediaPipe library gives us landmark points against almost any background and in almost any lighting conditions.

We have collected 180 skeleton images for each alphabet from A to Z.
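A minimal sketch of this landmark-to-skeleton step, assuming the cvzone HandDetector wrapper around MediaPipe that Chapter 5 uses (the draw_skeleton helper is ours for illustration; it centres the hand on a 400x400 white canvas much as the collection scripts do):

import cv2
import numpy as np
from cvzone.HandTrackingModule import HandDetector

detector = HandDetector(maxHands=1)

def draw_skeleton(frame):
    """Return a 400x400 white image with the detected hand skeleton, or None."""
    hands = detector.findHands(frame, draw=False, flipType=True)
    if not hands:
        return None
    hand = hands[0]
    x, y, w, h = hand['bbox']
    pts = hand['lmList']                                   # 21 MediaPipe landmarks
    ox, oy = ((400 - w) // 2) - x, ((400 - h) // 2) - y    # centre the hand on the canvas
    white = np.ones((400, 400, 3), np.uint8) * 255
    # join consecutive landmarks along each finger chain, then the palm edges
    for a, b in [(0, 4), (5, 8), (9, 12), (13, 16), (17, 20)]:
        for t in range(a, b):
            cv2.line(white, (pts[t][0] + ox, pts[t][1] + oy),
                     (pts[t + 1][0] + ox, pts[t + 1][1] + oy), (0, 255, 0), 3)
    for a, b in [(5, 9), (9, 13), (13, 17), (0, 5), (0, 17)]:
        cv2.line(white, (pts[a][0] + ox, pts[a][1] + oy),
                 (pts[b][0] + ox, pts[b][1] + oy), (0, 255, 0), 3)
    for p in pts:                                          # mark each landmark point
        cv2.circle(white, (p[0] + ox, p[1] + oy), 2, (0, 0, 255), 1)
    return white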
3. Gesture Classification
Convolutional Neural Network (CNN)
A CNN is a class of neural networks that is highly useful for solving computer vision problems. CNNs are inspired by the way visual perception takes place in the visual cortex of our brain. They make use of a filter/kernel that scans through the pixel values of the image and performs computations with appropriate weights so that a specific feature can be detected. A CNN is equipped with layers such as the convolution layer, max-pooling layer, flatten layer, dense layer, dropout layer and a fully connected layer. Together these layers make a very powerful tool that can identify features in an image. The starting layers detect low-level features, and deeper layers gradually detect more complex, higher-level features.
Unlike regular neural networks, the neurons in the layers of a CNN are arranged in three dimensions: width, height and depth.
The neurons in a layer are connected only to a small region (the window size) of the layer before it, instead of to all of the neurons in a fully connected manner.
Moreover, the final output layer has dimensions equal to the number of classes, because by the end of the CNN architecture the full image is reduced to a single vector of class scores.
Convolutional Layer:
In the convolution layer a small window (typically of size 5×5) is taken that extends through the depth of the input matrix.

2.1 Literature Review
Mahesh Kumar N B, Assistant Professor (Senior Grade), Bannari Amman Institute of Technology, Sathyamangalam, Erode, India (2018):
This paper shows the recognition of 26 hand gestures of Indian Sign Language using MATLAB. The proposed system contains four modules: pre-processing and hand segmentation, feature extraction, sign recognition, and sign-to-text conversion. Segmentation is done using image processing, and the Otsu algorithm is used for the segmentation step. Features such as eigenvalues and eigenvectors are extracted and used in recognition. The Linear Discriminant Analysis (LDA) algorithm is used for gesture recognition, and recognized gestures are converted into text and voice format. The proposed system also helps with dimensionality reduction.

figure 2.

2.2 Translation of Sign Language Finger-Spelling to Text using Image Processing

In this proposed system, they intend to recognize some very basic elements of sign language and translate them to text. First, the video is captured frame by frame; the captured video is processed and the appropriate image is extracted. This retrieved image is further processed using BLOB analysis and sent to the statistical database, where the captured image is compared with the ones saved in the database, and the matched image is used to determine the performed alphabet sign. They implement only American Sign Language finger-spellings and construct words and sentences with them. With the proposed method, they found that the probability of obtaining the desired output is around 93%, which is sufficient and can be enough to make it suitable for use on a larger scale for the intended purpose.

figure 3.

2.3 Sign Language to Text and Speech Conversion

Sign language is one of the oldest and most natural forms of language for communication. Since most people do not know sign language and interpreters are very difficult to come by, they have come up with a real-time method using a Convolutional Neural Network (CNN) for fingerspelling-based American Sign Language (ASL). In their method, the hand is first passed through a filter and, after the filter is applied, the hand is passed through a classifier that predicts the class of the hand gesture. Using their approach, they were able to reach a model accuracy of 95.8%.

figure 4.

2.4 Sign Language to Text and Speech Translation in Real Time Using Convolutional Neural Network
This work creates a desktop application that uses a computer's webcam to capture a person signing gestures for American Sign Language (ASL) and translates them into corresponding text and speech in real time. The translated sign language gesture is obtained as text, which is further converted into audio. In this manner they implement a finger-spelling sign language translator. To enable the detection of gestures, they make use of a Convolutional Neural Network (CNN). A CNN is highly efficient at tackling computer vision problems and is capable of detecting the desired features with a high degree of accuracy upon sufficient training. The modules are image acquisition, hand region segmentation, hand detection and tracking, hand posture recognition, and display as text/speech. A finger-spelling sign language translator is obtained which has an accuracy of 95%.

2.5 Conversion of Sign Language to Text and Speech Using Machine Learning Techniques
Author: Victorial Adebimpe Akano (2018)

Communication with hearing-impaired (deaf/mute) people is a great challenge in our society today; this can be attributed to the fact that their means of communication (sign language, or hand gestures at a local level) requires an interpreter at every instance. The aim is to convert ASL hand gestures into text as well as speech using unsupervised feature learning, to eliminate the communication barrier with the hearing impaired and also to provide a teaching aid for sign language.

Sample images of different ASL signs were collected with the Kinect sensor using the Image Acquisition Toolbox in MATLAB. About five hundred (500) data samples (with each sign counted five to ten (5-10) times) were collected as the training data. The reason for this is to make the algorithm robust for images of the same database in order to reduce the rate of misclassification. The combination of FAST and SURF with a KNN of 10 also showed that unsupervised learning classification could determine the best-matched feature from the existing database. In turn, the best match was converted to text as well as speech. The introduced system achieved 92% accuracy with supervised feature learning and 78% with unsupervised feature learning.

2.6 An Improved Hand Gesture Recognition Algorithm Based on Image Contours to Identify the American Sign Language
This paper proposes the recognition and classification of hand gestures to identify the correct denotation with maximum accuracy for standard American Sign Language. The proposal intelligently uses information based on image contours to identify the character representation of a hand gesture. It optimizes performance overhead through the identification of 17 characters and 6 symbols of standard American Sign Language based on image contours and convexity measurement, without using complex algorithms or specialized hardware devices. Accuracy measurement was done through simulation, which shows that the proposal provides more accuracy with minimum complexity in comparison to other state-of-the-art works. The average accuracy is 86% overall.

figure 6.

The layer consists of learnable filters of a given window size. During every iteration the window is slid by the stride size (typically 1), and the dot product of the filter entries and the input values at the given position is computed.
As this process continues, a two-dimensional activation map is created that gives the response of that filter at every spatial position.
That is, the network learns filters that activate when they see some type of visual feature, such as an edge of some orientation or a blotch of some colour.
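For example, the spatial size of the activation map follows directly from this sliding-window process: for an input of width W, a filter of size F, stride S and zero-padding P, the output width is (W - F + 2P)/S + 1, so a 32×32 input convolved with a 5×5 filter at stride 1 and no padding gives a 28×28 activation map.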

Pooling Layer:
We use a pooling layer to decrease the size of the activation matrix and ultimately reduce the number of learnable parameters.
There are two types of pooling:
a. Max Pooling:
In max pooling we take a window (for example of size 2×2) and keep only the maximum of the 4 values.
We then slide this window and continue the process, so we finally get an activation matrix half of its original size.
b. Average Pooling:
In average pooling we take the average of all values in a window.
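For example, max pooling the 2×2 blocks of the 4×4 activation map
[[1, 3, 2, 4],
 [5, 6, 1, 0],
 [7, 2, 9, 4],
 [3, 1, 5, 6]]
with stride 2 keeps only the largest value in each block and yields the 2×2 map
[[6, 4],
 [7, 9]],
while average pooling of the same blocks would instead give [[3.75, 1.75], [3.25, 6.0]].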

Fully Connected Layer:
In a convolution layer neurons are connected only to a local region, while in a fully connected layer we connect all the inputs to the neurons.
Fully Connected Layer

The pre-processed 180 images per alphabet are fed to the Keras CNN model.
Because we got poor accuracy over 26 separate classes, we divided the 26 alphabets into 8 classes, where every class contains visually similar alphabets:
[y, j]
[c, o]
[g, h]
[b, d, f, i, u, v, k, r, w]
[p, q, z]
[a, e, m, n, s, t]
[l]
[x]
All the gesture labels are assigned a probability, and the label with the highest probability is treated as the predicted label.
So when the model classifies, for example, [a, e, m, n, s, t] as one single class, we use mathematical operations on the hand landmarks to classify it further into the single alphabet a, e, m, n, s or t.
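A minimal Keras sketch of such a model is shown below (the number and size of the layers are illustrative assumptions, not the exact architecture of the trained cnn8grps_rad1_model.h5; only the 400×400×3 skeleton input and the 8 grouped output classes follow the text):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def build_model(num_classes=8, input_shape=(400, 400, 3)):
    """Illustrative CNN: conv/pool blocks followed by dense layers."""
    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dropout(0.3),
        Dense(num_classes, activation='softmax'),   # one score per grouped class
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model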

Finally, we achieved 97% accuracy (with and without a clean background and proper lighting conditions) with our method, and if the background is clear and the lighting is good we got even 99% accurate results.

Class Reduction for Improved Accuracy


Initially, 26 different alphabet classes resulted in lower accuracy. To improve classification,
alphabets were grouped into 8 clusters based on their similarity, reducing misclassification
errors. A post-processing step was then applied to refine predictions within each cluster,
achieving an accuracy of 97% in varied backgrounds and up to 99% in optimal conditions.
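A condensed sketch of this refinement step is given below (the group table is taken from the comment in prediction_wo_gui.py in Chapter 5, and the C/O rule mirrors the thumb-to-middle-fingertip distance test used there; the remaining branches are only indicated, and the trailing fallback is purely illustrative):

import math

# grouping from the Chapter 5 code comment:
# [0->aemnst][1->bfdiuvwkr][2->co][3->gh][4->l][5->pqz][6->x][7->yj]
GROUPS = {0: "AEMNST", 1: "BDFIKRUVW", 2: "CO", 3: "GH",
          4: "L", 5: "PQZ", 6: "X", 7: "YJ"}

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def refine(group_id, pts):
    """Resolve a grouped prediction to a single letter from landmark geometry."""
    if group_id == 2:
        # C vs O: gap between middle fingertip (12) and thumb tip (4)
        return 'C' if distance(pts[12], pts[4]) > 42 else 'O'
    if group_id == 4:
        return 'L'
    if group_id == 6:
        return 'X'
    # ... analogous landmark rules disambiguate the remaining groups
    return GROUPS[group_id][0]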
4. Text-To-Speech Translation
Once a sign is recognized, the system converts it into text and subsequently into speech using
the pyttsx3 library. This feature enhances usability by allowing real-time spoken
communication.
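A minimal sketch of this step with pyttsx3 (the rate and voice settings mirror those used in final_pred.py in Chapter 5):

import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 100)              # slow the speech down
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[0].id)    # pick the first installed voice

def speak(text):
    """Speak the recognized sentence out loud."""
    engine.say(text)
    engine.runAndWait()

speak("HELLO WORLD")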

Chapter 3: System Analysis and Design

3.1 Comparison Table

Author name | Mahesh Kumar | Krishna Modi  | Bikash K. Yadav | Ayush Pandey | Victorial Adebimpe Akano | Rakesh Kumar
Algorithm   | LDA          | Blob Analysis | CNN             | CNN          | KNN                      | Contour measurement
Accuracy    | 80%          | 93%           | 95.8%           | 95%          | 92%                      | 86%
Year        | 2018         | 2013          | 2020            | 2020         | 2018                     | 2021

3.2 Research Gap

- In the first research paper [1] they used the LDA algorithm and converted the RGB image to a binary image, but the image processing is not good enough to obtain the more accurate features of a particular sign.

- In the second research paper [2] they recognize signs by directly comparing image pixels with those stored in their database; they also converted the RGB image to binary, and in that image processing they removed some necessary features.

- In the third and fourth research papers [3], [4] they applied the CNN algorithm for sign recognition, which is very effective, but they did not do much image processing before feeding the data to train the CNN.

- In the fifth research paper [5] they used the simplest algorithm, KNN, for sign recognition, and they also did not do much image processing; maybe that is the reason for their moderate accuracy.

- In the sixth research paper [6] they used contour and convexity measurements for image recognition, but the algorithm did not result in good accuracy.

3.3 Project Feasibility Study
3.3.1 Operational Feasibility
- The whole purpose of this system is to handle the work much more accurately and efficiently with less time consumption.

- The app is very user-friendly; users only require knowledge of American Sign Language.

- The system is operationally feasible as it is very easy for the end users to operate. It only needs basic familiarity with Windows applications.

3.3.2 Technical Feasibility

The technical needs of the system include front-end and back-end selection. An important issue in the development of a project is the selection of a suitable front-end and back-end. When we decided to develop the project, we went through an extensive study to determine the most suitable platform that suits the needs of the organization as well as helps in the development of the project. The aspects of our study included the following factors.

Front-end selection:

It must have a graphical user interface that assists users who are not from an IT background. So we have built the front-end using the Python Tkinter GUI.

Features:

1. Scalability and extensibility.

2. Flexibility.

3. Easy to debug and maintain.

Back-end selection: We have used Python as our back-end language, which has one of the widest library collections. Technical feasibility is frequently the most difficult area encountered at this stage; our app fits well within technical feasibility.

3.3.3 Economic Feasibility
The system being developed must be justified by cost and benefit, to ensure that effort is concentrated on a project which will give the best return at the earliest. One of the factors which affects the development of a new system is the cost it would require. Since the system is developed as part of project work, there is no manual cost to spend for the proposed system. Also, all the resources are already available, which gives an indication that the system is economically feasible to develop.

3.4 Timeline Chart
figure 7.

Chapter-4

Software Requirements

Name of the Component    | Specification
Operating System         | Windows 7, Windows 11
Language                 | Python
Database                 | Binary file
Interface                | GUI (webcam)
Software Development Kit | Visual Studio 2022 / PyCharm

Hardware Requirements:

Name of Component | Specification
Processor         | 11th Gen Intel(R) Core(TM) i5-1135G7
RAM               | 8 GB
System type       | 64-bit Operating System
Keyboard          | 122 keys

Implementation and Testing

Here are some snapshots where the user shows hand gestures against different backgrounds and in different lighting conditions, and the system gives the corresponding prediction.

figure 34.

figure 35.

Here the hand gesture of the sign 'K' is shown against a different background, and our model still predicts the correct letter.

figure 36.

figure 37.

After implementing the CNN algorithm, we made a GUI using Python Tkinter and also added word suggestions to make the process smoother for the user.
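A minimal sketch of how such suggestions can be generated with the pyenchant dictionary that final_pred.py imports (the suggestions helper and the limit of four are ours for illustration):

import enchant

ddd = enchant.Dict("en-US")

def suggestions(word, limit=4):
    """Return up to `limit` dictionary suggestions for the word typed so far."""
    word = word.strip()
    if not word:
        return []
    return ddd.suggest(word)[:limit]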

The sign shown below is used for giving a space between words.

figure 38.

The sign shown below is used after each predicted alphabet, to move on to the next one.

figure 39.

Chapter-5
Coding:

User Interface:

data_collection_binary.py

import cv2
from cvzone.HandTrackingModule import HandDetector
from cvzone.ClassificationModule import Classifier
import numpy as np
import os, os.path
from keras.models import load_model
import traceback

# model = load_model('C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\cnn9.h5')

capture = cv2.VideoCapture(0)

hd = HandDetector(maxHands=1)
hd2 = HandDetector(maxHands=1)

# # training data
# count = len(os.listdir("D://sign2text_dataset_2.0/Binary_imgs//A"))

# testing data
count = len(os.listdir("D://test_data_2.0//Gray_imgs//A"))

p_dir = "A"
c_dir = "a"

offset = 30
step = 1
flag = False
suv = 0

# C:\Users\devansh raval\PycharmProjects\pythonProject
white = np.ones((400, 400), np.uint8) * 255
cv2.imwrite("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg", white)

while True:
    try:
        _, frame = capture.read()
        frame = cv2.flip(frame, 1)
        hands = hd.findHands(frame, draw=False, flipType=True)
        img_final = img_final1 = img_final2 = 0

        if hands:
            hand = hands[0]
            x, y, w, h = hand['bbox']
            image = frame[y - offset:y + h + offset, x - offset:x + w + offset]
            # image1 = imgg[y - offset:y + h + offset, x - offset:x + w + offset]

            roi = image      # rgb image without drawing
            # roi1 = image1  # rgb image with drawing

            # for simple gray image without drawing
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            blur = cv2.GaussianBlur(gray, (1, 1), 2)

            # for binary image
            gray2 = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            blur2 = cv2.GaussianBlur(gray2, (5, 5), 2)
            th3 = cv2.adaptiveThreshold(blur2, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                        cv2.THRESH_BINARY_INV, 11, 2)
            ret, test_image = cv2.threshold(th3, 27, 255,
                                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

            # centre the grey image on a 400x400 canvas
            test_image1 = blur
            img_final1 = np.ones((400, 400), np.uint8) * 148
            h = test_image1.shape[0]
            w = test_image1.shape[1]
            img_final1[((400 - h) // 2):((400 - h) // 2) + h, ((400 - w) // 2):((400 - w) // 2) + w] = test_image1

            # centre the binary image on a 400x400 canvas
            img_final = np.ones((400, 400), np.uint8) * 255
            h = test_image.shape[0]
            w = test_image.shape[1]
            img_final[((400 - h) // 2):((400 - h) // 2) + h, ((400 - w) // 2):((400 - w) // 2) + w] = test_image
        hands = hd.findHands(frame, draw=False, flipType=True)

        if hands:
            # # print(" -------lmlist=", hands[1])
            hand = hands[0]
            x, y, w, h = hand['bbox']
            image = frame[y - offset:y + h + offset, x - offset:x + w + offset]
            white = cv2.imread("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg")
            # img_final = img_final1 = img_final2 = 0
            handz = hd2.findHands(image, draw=False, flipType=True)
            if handz:
                hand = handz[0]
                pts = hand['lmList']
                # x1, y1, w1, h1 = hand['bbox']

                # note: 'os' shadows the os module here; kept as in the original code
                os = ((400 - w) // 2) - 15
                os1 = ((400 - h) // 2) - 15
                # draw the five finger chains on the white canvas
                for a, b in [(0, 4), (5, 8), (9, 12), (13, 16), (17, 20)]:
                    for t in range(a, b):
                        cv2.line(white, (pts[t][0] + os, pts[t][1] + os1),
                                 (pts[t + 1][0] + os, pts[t + 1][1] + os1), (0, 255, 0), 3)
                # connect the palm outline
                for a, b in [(5, 9), (9, 13), (13, 17), (0, 5), (0, 17)]:
                    cv2.line(white, (pts[a][0] + os, pts[a][1] + os1),
                             (pts[b][0] + os, pts[b][1] + os1), (0, 255, 0), 3)
                # mark every landmark point
                for i in range(21):
                    cv2.circle(white, (pts[i][0] + os, pts[i][1] + os1), 2, (0, 0, 255), 1)

                cv2.imshow("skeleton", white)
# cv2.imshow("5",
hands = hd.findHands(white, draw=False, flipType=True)
skeleton5)
if hands:
hand = hands[0]
x, y, w, h = hand['bbox']
cv2.rectangle(white, (x - offset, y - offset), (x + w, y + h), (3, 255, 25), 3)

image1 = frame[y - offset:y + h + offset, x - offset:x + w + offset]

roi1 = image1 #rdb image with drawing

#for gray image with drawings


gray1 = cv2.cvtColor(roi1, cv2.COLOR_BGR2GRAY)
blur1 = cv2.GaussianBlur(gray1, (1, 1), 2)

test_image2= blur1
img_final2= np.ones((400, 400), np.uint8) * 148
h = test_image2.shape[0]
w = test_image2.shape[1]
img_final2[((400 - h) // 2):((400 - h) // 2) + h, ((400 - w) // 2):((400 - w) // 2) + w] =
test_image2

#cv2.imshow("aaa",white)
# cv2.imshow("gray",img_final2)
cv2.imshow("binary", img_final)
# cv2.imshow("gray w/o draw", img_final1)

# img = img_final.reshape(1, 400, 400, 1)


# # print(model.predict(img))
# prob = np.array(model.predict(img)[0], dtype='float32')
# ch1 = np.argmax(prob, axis=0)
# prob[ch1] = 0
# ch2 = np.argmax(prob, axis=0)
# prob[ch2] = 0
# ch3 = np.argmax(prob, axis=0)
# prob[ch3] = 0
# ch1 = chr(ch1 + 65)
# ch2 = chr(ch2 + 65)
# ch3 = chr(ch3 + 65)
# frame = cv2.putText(frame, "Predicted " + ch1 + " " + ch2 + " " + ch3, (x - offset -
150, y - offset - 10),
# cv2.FONT_HERSHEY_SIMPLEX,
# 1, (255, 0, 0), 1, cv2.LINE_AA)

#cv2.rectangle(frame, (x - offset, y - offset), (x + w, y + h), (3, 255, 25), 3)


# frame = cv2.putText(frame, "dir=" + c_dir + " count=" + str(count), (50,50),
# cv2.FONT_HERSHEY_SIMPLEX,
# 1, (255, 0, 0), 1, cv2.LINE_AA)
cv2.imshow("frame", frame)
interrupt = cv2.waitKey(1)
if interrupt & 0xFF == 27:

            # esc key
            break
        if interrupt & 0xFF == ord('n'):
            p_dir = chr(ord(p_dir) + 1)
            c_dir = chr(ord(c_dir) + 1)
            if ord(p_dir) == ord('Z') + 1:
                p_dir = "A"
                c_dir = "a"
            flag = False
            # # training data
            # count = len(os.listdir("D://sign2text_dataset_2.0/Binary_imgs//" + p_dir + "//"))

            # test data
            count = len(os.listdir("D://test_data_2.0/Gray_imgs//" + p_dir + "//"))

        if interrupt & 0xFF == ord('a'):
            if flag:
                flag = False
            else:
                suv = 0
                flag = True

        print("=====", flag)
        if flag == True:
            if suv == 50:
                flag = False
            if step % 2 == 0:
                # # this is for training data collection
                # cv2.imwrite("D:\\sign2text_dataset_2.0\\Binary_imgs\\" + p_dir + "\\" + c_dir + str(count) + ".jpg", img_final)
                # cv2.imwrite("D:\\sign2text_dataset_2.0\\Gray_imgs\\" + p_dir + "\\" + c_dir + str(count) + ".jpg", img_final1)
                # cv2.imwrite("D:\\sign2text_dataset_2.0\\Gray_imgs_with_drawing\\" + p_dir + "\\" + c_dir + str(count) + ".jpg", img_final2)

                # this is for testing data collection
                # cv2.imwrite("D:\\test_data_2.0\\Binary_imgs\\" + p_dir + "\\" + c_dir + str(count) + ".jpg", img_final)
                cv2.imwrite("D:\\test_data_2.0\\Gray_imgs\\" + p_dir + "\\" + c_dir + str(count) + ".jpg", img_final1)
                cv2.imwrite("D:\\test_data_2.0\\Gray_imgs_with_drawing\\" + p_dir + "\\" + c_dir + str(count) + ".jpg", img_final2)

                count += 1
                suv += 1
            step += 1
    except Exception:
        print("==", traceback.format_exc())

capture.release()
cv2.destroyAllWindows()

data_collection_final.py

import cv2
from cvzone.HandTrackingModule import HandDetector
import numpy as np
import os as oss
import traceback

capture = cv2.VideoCapture(0)
hd = HandDetector(maxHands=1)
hd2 = HandDetector(maxHands=1)

count = len(oss.listdir("D:\\sign2text_dataset_3.0\\AtoZ_3.0\\A\\"))
c_dir = 'A'

offset = 15
step = 1
flag = False
suv = 0

white = np.ones((400, 400), np.uint8) * 255
cv2.imwrite("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg", white)

while True:
    try:
        _, frame = capture.read()
        frame = cv2.flip(frame, 1)
        hands = hd.findHands(frame, draw=False, flipType=True)
        white = cv2.imread("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg")

        if hands:
            hand = hands[0]
            x, y, w, h = hand['bbox']
            image = np.array(frame[y - offset:y + h + offset, x - offset:x + w + offset])

            handz, imz = hd2.findHands(image, draw=True, flipType=True)
            if handz:
                hand = handz[0]
                pts = hand['lmList']
                # x1, y1, w1, h1 = hand['bbox']
                os = ((400 - w) // 2) - 15
                os1 = ((400 - h) // 2) - 15
                # draw the five finger chains on the white canvas
                for a, b in [(0, 4), (5, 8), (9, 12), (13, 16), (17, 20)]:
                    for t in range(a, b):
                        cv2.line(white, (pts[t][0] + os, pts[t][1] + os1),
                                 (pts[t + 1][0] + os, pts[t + 1][1] + os1), (0, 255, 0), 3)
                # connect the palm outline
                for a, b in [(5, 9), (9, 13), (13, 17), (0, 5), (0, 17)]:
                    cv2.line(white, (pts[a][0] + os, pts[a][1] + os1),
                             (pts[b][0] + os, pts[b][1] + os1), (0, 255, 0), 3)

                skeleton0 = np.array(white)
                zz = np.array(white)
                for i in range(21):
                    cv2.circle(white, (pts[i][0] + os, pts[i][1] + os1), 2, (0, 0, 255), 1)

                skeleton1 = np.array(white)

                cv2.imshow("1", skeleton1)

        frame = cv2.putText(frame, "dir=" + str(c_dir) + " count=" + str(count), (50, 50),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 1, cv2.LINE_AA)
        cv2.imshow("frame", frame)
        interrupt = cv2.waitKey(1)
        if interrupt & 0xFF == 27:
            # esc key
            break

        if interrupt & 0xFF == ord('n'):
            c_dir = chr(ord(c_dir) + 1)
            if ord(c_dir) == ord('Z') + 1:
                c_dir = 'A'
            flag = False
            count = len(oss.listdir("D:\\sign2text_dataset_3.0\\AtoZ_3.0\\" + (c_dir) + "\\"))

        if interrupt & 0xFF == ord('a'):
            if flag:
                flag = False
            else:
                suv = 0
                flag = True

        print("=====", flag)
        if flag == True:
            if suv == 180:
                flag = False
            if step % 3 == 0:
                cv2.imwrite("D:\\sign2text_dataset_3.0\\AtoZ_3.1\\" + (c_dir) + "\\" + str(count) + ".jpg",
                            skeleton1)
                count += 1
                suv += 1
            step += 1

    except Exception:
        print("==", traceback.format_exc())

capture.release()
cv2.destroyAllWindows()

prediction_wo_gui.py

import math
import cv2
from cvzone.HandTrackingModule import HandDetector
import numpy as np
from keras.models import load_model
import traceback

model = load_model('/cnn8grps_rad1_model.h5')
white = np.ones((400, 400), np.uint8) * 255
cv2.imwrite("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg", white)

capture = cv2.VideoCapture(0)

hd = HandDetector(maxHands=1)
hd2 = HandDetector(maxHands=1)

offset = 29
step = 1
flag = False
suv = 0


def distance(x, y):
    return math.sqrt(((x[0] - y[0]) ** 2) + ((x[1] - y[1]) ** 2))


def distance_3d(x, y):
    return math.sqrt(((x[0] - y[0]) ** 2) + ((x[1] - y[1]) ** 2) + ((x[2] - y[2]) ** 2))


bfh = 0
dicttt = dict()
count = 0
kok = []

while True:
    try:
        _, frame = capture.read()
        frame = cv2.flip(frame, 1)
        hands = hd.findHands(frame, draw=False, flipType=True)
        print(frame.shape)

        if hands:
            # # print(" -------lmlist=", hands[1])
            hand = hands[0]
            x, y, w, h = hand['bbox']
            image = frame[y - offset:y + h + offset, x - offset:x + w + offset]
            white = cv2.imread("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg")
            # img_final = img_final1 = img_final2 = 0
            handz = hd2.findHands(image, draw=False, flipType=True)
            if handz:
                hand = handz[0]
                pts = hand['lmList']
                # x1, y1, w1, h1 = hand['bbox']

                os = ((400 - w) // 2) - 15
                os1 = ((400 - h) // 2) - 15
                # draw the five finger chains on the white canvas
                for a, b in [(0, 4), (5, 8), (9, 12), (13, 16), (17, 20)]:
                    for t in range(a, b):
                        cv2.line(white, (pts[t][0] + os, pts[t][1] + os1),
                                 (pts[t + 1][0] + os, pts[t + 1][1] + os1), (0, 255, 0), 3)
                # connect the palm outline
                for a, b in [(5, 9), (9, 13), (13, 17), (0, 5), (0, 17)]:
                    cv2.line(white, (pts[a][0] + os, pts[a][1] + os1),
                             (pts[b][0] + os, pts[b][1] + os1), (0, 255, 0), 3)
                # mark every landmark point
                for i in range(21):
                    cv2.circle(white, (pts[i][0] + os, pts[i][1] + os1), 2, (0, 0, 255), 1)

                cv2.imshow("2", white)
                # cv2.imshow("5", skeleton5)

                # # print(model.predict(img))
                white = white.reshape(1, 400, 400, 3)
                prob = np.array(model.predict(white)[0], dtype='float32')
                ch1 = np.argmax(prob, axis=0)
                prob[ch1] = 0
ch2 = np.argmax(prob, axis=0)
prob[ch2] = 0
ch3 = np.argmax(prob, axis=0)
prob[ch3] = 0

pl = [ch1, ch2]

#condition for [Aemnst]


l=[[5,2],[5,3],[3,5],[3,6],[3,0],[3,2],[6,4],[6,1],[6,2],[6,6],[6,7],[6,0],[6,5],[4,1],[1,0],[1,1],
[6,3],[1,6],[5,6],[5,1]
,[4,5],[1,4],[1,5],[2,0],[2,6],[4,6],[1,0],[5,7],[1,6],[6,1],[7,6],[2,5],[7,1],[5,4],[7,0],[7,5],[7,2]]
if pl in l:
if (pts[6][1] < pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and
pts[18][1]
<pts[20][1]):
ch1=0
#print("00000")

#condition for [o][s]


l=[[2,2],[2,1]]
if pl in l:
if (pts[5][0] < pts[4][0] ):
ch1=0
print("++++++++++++++++++")
#print("00000")

#condition for [c0][aemnst]


l=[[0,0],[0,6],[0,2],[0,5],[0,1],[0,7],[5,2],[7,6],[7,1]]
pl=[ch1,ch2]
if pl in l:
if (pts[0][0]>pts[8][0] and pts[0][0]>pts[4][0] and pts[0][0]>pts[12][0] and
pts[0][0]>pts[16][0] and pts[0][0]>pts[20][0]) and pts[5][0] > pts[4][0]:
ch1=2
#print("22222")

# condition for [c0][aemnst]


l = [[6,0],[6,6],[6,2]]
pl = [ch1, ch2]
if pl in l:
if distance(pts[8],pts[16])<52:
ch1 = 2
#print("22222")

##print(pts[2][1]+15>pts[16][1])
# condition for [gh][bdfikruvw]
l = [[1,4],[1,5],[1,6],[1,3],[1,0]]
pl = [ch1, ch2]

if pl in l:
if pts[6][1] > pts[8][1] and pts[14][1] < pts[16][1] and pts[18][1]<pts[20][1] and
pts[0][0]<pts[8][0] and pts[0][0]<pts[12][0] and pts[0][0]<pts[16][0] and pts[0][0]<pts[20][0]:
ch1 = 3
print("33333c")

# con for [gh][l]
l = [[4, 6], [4, 1], [4, 5], [4, 3], [4, 7]]
pl = [ch1, ch2]
if pl in l:
    if pts[4][0] > pts[0][0]:
        ch1 = 3
        print("33333b")

# con for [gh][pqz]


l = [[5, 3],[5,0],[5,7], [5, 4], [5, 2],[5,1],[5,5]]
pl = [ch1, ch2]
if pl in l:
if pts[2][1]+15<pts[16][1]:
ch1 = 3
print("33333a")

# con for [l][x]


l = [[6, 4], [6, 1], [6, 2]]
pl = [ch1, ch2]
if pl in l:
if distance(pts[4],pts[11])>55:
ch1 = 4
#print("44444")

# con for [l][d]


l = [[1, 4], [1, 6],[1,1]]
pl = [ch1, ch2]
if pl in l:
if (distance(pts[4], pts[11]) > 50) and (pts[6][1] > pts[8][1] and pts[10][1] <
pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] <pts[20][1]):
ch1 = 4
#print("44444")

# con for [l][gh]
l = [[3, 6], [3, 4]]
pl = [ch1, ch2]
if pl in l:
    if (pts[4][0] < pts[0][0]):
        ch1 = 4
        # print("44444")

# con for [l][c0]
l = [[2, 2], [2, 5], [2, 4]]
pl = [ch1, ch2]
if pl in l:
    if (pts[1][0] < pts[12][0]):
        ch1 = 4
        # print("44444")

# con for [l][c0]
l = [[2, 2], [2, 5], [2, 4]]
pl = [ch1, ch2]
if pl in l:
    if (pts[1][0] < pts[12][0]):
        ch1 = 4
        # print("44444")

# con for [gh][z]
l = [[3, 6], [3, 5], [3, 4]]
pl = [ch1, ch2]
if pl in l:
    if (pts[6][1] > pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and
            pts[18][1] < pts[20][1]) and pts[4][1] > pts[10][1]:
        ch1 = 5
        print("55555b")

# con for [gh][pq]


l = [[3,2],[3,1],[3,6]]
pl = [ch1, ch2]
if pl in l:
if pts[4][1]+17>pts[8][1] and pts[4][1]+17>pts[12][1] and pts[4]
[1]+17>pts[16][1] and pts[4][1]+17>pts[20][1]:
ch1 = 5
print("55555a")

# con for [l][pqz]


l = [[4,4],[4,5],[4,2],[7,5],[7,6],[7,0]]
pl = [ch1, ch2]
if pl in l:
if pts[4][0]>pts[0][0]:
ch1 = 5
#print("55555")

# con for [pqz][aemnst]


l = [[0, 2],[0,6],[0,1],[0,5],[0,0],[0,7],[0,4],[0,3],[2,7]]
pl = [ch1, ch2]
if pl in l:
if pts[0][0]<pts[8][0] and pts[0][0]<pts[12][0] and pts[0][0]<pts[16][0] and pts[0]
[0]<pts[20][0]:
ch1 = 5
#print("55555")

# con for [pqz][yj]


l = [[5, 7],[5,2],[5,6]]
pl = [ch1, ch2]
if pl in l:
if pts[3][0]<pts[0][0]:
ch1 = 7
#print("77777")

# con for [l][yj]


l = [[4, 6],[4,2],[4,4],[4,1],[4,5],[4,7]]

pl = [ch1, ch2]
if pl in l:
    if pts[6][1] < pts[8][1]:
        ch1 = 7
        # print("77777")

# con for [x][yj]
l = [[6, 7], [0, 7], [0, 1], [0, 0], [6, 4], [6, 6], [6, 5], [6, 1]]
pl = [ch1, ch2]
if pl in l:
    if pts[18][1] > pts[20][1]:
        ch1 = 7
        # print("77777")

# condition for [x][aemnst]


l = [[0,4],[0,2],[0,3],[0,1],[0,6]]
pl = [ch1, ch2]
if pl in l:
if pts[5][0]>pts[16][0]:
ch1 = 6
#print("66666")

# condition for [yj][x]
l = [[7, 2]]
pl = [ch1, ch2]
if pl in l:
    if pts[18][1] < pts[20][1]:
        ch1 = 6
        # print("66666")

# condition for [c0][x]


l = [[2, 1],[2,2],[2,6],[2,7],[2,0]]
pl = [ch1, ch2]
if pl in l:
if distance(pts[8],pts[16])>50:
ch1 = 6
#print("66666")

# con for [l][x]

l = [[4, 6],[4,2],[4,1],[4,4]]
pl = [ch1, ch2]
if pl in l:
if distance(pts[4], pts[11]) < 60:
ch1 = 6
#print("66666")

#con for [x][d]


l = [[1,4],[1,6],[1,0],[1,2]]
pl = [ch1, ch2]
if pl in l:
    if pts[5][0] - pts[4][0] - 15 > 0:
        ch1 = 6

# con for [b][pqz]


l = [[5,0],[5,1],[5,4],[5,5],[5,6],[6,1],[7,6],[0,2],[7,1],[7,4],[6,6],[7,2],[5,0],[6,3],[6,4],[7,5],
[7,2]]
pl = [ch1, ch2]
if pl in l:
if (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] > pts[16][1]
and pts[18][1] > pts[20][1]):
ch1 = 1
print("111111")

# con for [f][pqz]


l = [[6, 1],[6,0],[0,3],[6,4],[2,2], [0,6],[6,2],[7, 6],[4,6],[4,1],[4,2], [0, 2], [7, 1], [7, 4], [6,
6], [7, 2], [7, 5], [7,
2]]
pl = [ch1, ch2]
if pl in l:
if (pts[6][1] < pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] > pts[16][1] and
pts[18][1] > pts[20][1]):
ch1 = 1
print("111112")

l = [[6, 1], [6, 0],[4,2],[4,1],[4,6],[4,4]]


pl = [ch1, ch2]
if pl in l:
if (pts[10][1] > pts[12][1] and pts[14][1] > pts[16][1] and
pts[18][1] > pts[20][1]):
ch1 = 1
print("111112")

# con for [d][pqz]


fg=19
#print(" ch1=",ch1," ch2=",ch2)
l = [[5,0],[3,4],[3,0],[3,1],[3,5],[5,5],[5,4],[5,1],[7,6]]
pl = [ch1, ch2]
if pl in l:
if ((pts[6][1] > pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and
pts[18][1] < pts[20][1]) and (pts[2][0]<pts[0][0]) and pts[4][1]>pts[14][1]):
ch1 = 1
print("111113")

l = [ [4, 1], [4, 2],[4, 4]]


pl = [ch1, ch2]
if pl in l:
if (distance(pts[4], pts[11]) < 50) and (pts[6][1] > pts[8][1] and pts[10][1] <
pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] < pts[20][1]):
ch1 = 1
print("1111993")

l = [[3, 4], [3, 0], [3, 1], [3, 5],[3,6]]


pl = [ch1, ch2]
if pl in l:

if ((pts[6][1] > pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and
pts[18][1] < pts[20][1]) and (pts[2][0] < pts[0][0]) and pts[14][1]<pts[4][1]):
ch1 = 1
print("1111mmm3")

l = [[6, 6],[6, 4], [6, 1],[6,2]]


pl = [ch1, ch2]
if pl in l:
if pts[5][0]-pts[4][0]-15<0:
ch1 = 1
print("1111140")

# con for [i][pqz]


l = [[5,4],[5,5],[5,1],[0,3],[0,7],[5,0],[0,2],[6,2],[7, 5], [7, 1], [7, 6], [7, 7]]
pl = [ch1, ch2]
if pl in l:
if ((pts[6][1] < pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and
pts[18][1] > pts[20][1])):
ch1 = 1
print("111114")

# con for [yj][bfdi]


l = [[1,5],[1,7],[1,1],[1,6],[1,3],[1,0]]
pl = [ch1, ch2]
if pl in l:
if (pts[4][0]<pts[5][0]+15) and ((pts[6][1] < pts[8][1] and pts[10][1] < pts[12][1]
and pts[14][1] < pts[16][1] and
pts[18][1] > pts[20][1])):
ch1 = 7
print("111114lll;;p")

#con for [uvr]


l = [[5,5],[5,0],[5,4],[5,1],[4,6],[4,1],[7,6],[3,0],[3,5]]
pl = [ch1, ch2]
if pl in l:
if ((pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] < pts[16][1] and
pts[18][1] < pts[20][1])) and pts[4][1]>pts[14][1]:
ch1 = 1
print("111115")

# con for [w]


fg=13
l = [[3,5],[3,0],[3,6],[5,1],[4,1],[2,0],[5,0],[5,5]]
pl = [ch1, ch2]
if pl in l:
if not(pts[0][0]+fg < pts[8][0] and pts[0][0]+fg < pts[12][0] and pts[0][0]+fg <
pts[16][0] and pts[0][0]+fg < pts[20][0]) and not(pts[0][0] > pts[8][0] and pts[0][0] >
pts[12][0] and pts[0][0] >
pts[16][0] and pts[0][0] > pts[20][0]) and distance(pts[4], pts[11]) < 50:
ch1 = 1
print("111116")

# con for [w]

l = [ [5, 0], [5, 5],[0,1]]


pl = [ch1, ch2]
if pl in l:
if pts[6][1]>pts[8][1] and pts[10][1]>pts[12][1] and pts[14][1]>pts[16][1]:
ch1 = 1
print("1117")

#-------------------------condn for 8 groups ends

#-------------------------condn for subgroups starts


if ch1 == 0:
    ch1 = 'S'
    if pts[4][0] < pts[6][0] and pts[4][0] < pts[10][0] and pts[4][0] < pts[14][0] and pts[4][0] < pts[18][0]:
        ch1 = 'A'
    if pts[4][0] > pts[6][0] and pts[4][0] < pts[10][0] and pts[4][0] < pts[14][0] and pts[4][0] < pts[18][0] and pts[4][1] < pts[14][1] and pts[4][1] < pts[18][1]:
        ch1 = 'T'
    if pts[4][1] > pts[8][1] and pts[4][1] > pts[12][1] and pts[4][1] > pts[16][1] and pts[4][1] > pts[20][1]:
        ch1 = 'E'
    if pts[4][0] > pts[6][0] and pts[4][0] > pts[10][0] and pts[4][0] > pts[14][0] and pts[4][1] < pts[18][1]:
        ch1 = 'M'
    if pts[4][0] > pts[6][0] and pts[4][0] > pts[10][0] and pts[4][1] < pts[18][1] and pts[4][1] < pts[14][1]:
        ch1 = 'N'

if ch1 == 2:
    if distance(pts[12], pts[4]) > 42:
        ch1 = 'C'
    else:
        ch1 = 'O'

if ch1 == 3:
    if (distance(pts[8], pts[12])) > 72:
        ch1 = 'G'
    else:
        ch1 = 'H'

if ch1 == 7:
    if distance(pts[8], pts[4]) > 42:
        ch1 = 'Y'
    else:
        ch1 = 'J'

if ch1 == 4:
    ch1 = 'L'

if ch1 == 6:
    ch1 = 'X'

if ch1 == 5:
    if pts[4][0] > pts[12][0] and pts[4][0] > pts[16][0] and pts[4][0] > pts[20][0]:
        if pts[8][1] < pts[5][1]:
            ch1 = 'Z'
        else:
            ch1 = 'Q'
    else:
        ch1 = 'P'

if ch1 == 1:
    if (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] > pts[16][1] and pts[18][1] > pts[20][1]):
        ch1 = 'B'
    if (pts[6][1] > pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] < pts[20][1]):
        ch1 = 'D'
    if (pts[6][1] < pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] > pts[16][1] and pts[18][1] > pts[20][1]):
        ch1 = 'F'
    if (pts[6][1] < pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] > pts[20][1]):
        ch1 = 'I'
    if (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] > pts[16][1] and pts[18][1] < pts[20][1]):
        ch1 = 'W'
    if (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] < pts[20][1]) and pts[4][1] < pts[9][1]:
        ch1 = 'K'
    if ((distance(pts[8], pts[12]) - distance(pts[6], pts[10])) < 8) and (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] < pts[20][1]):
        ch1 = 'U'
    if ((distance(pts[8], pts[12]) - distance(pts[6], pts[10])) >= 8) and (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] < pts[20][1]) and (pts[4][1] > pts[9][1]):
        ch1 = 'V'
    if (pts[8][0] > pts[12][0]) and (pts[6][1] > pts[8][1] and pts[10][1] > pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] < pts[20][1]):
        ch1 = 'R'

# the membership tests below were written as always-true 'or' chains originally; fixed here
if ch1 == 1 or ch1 in ('E', 'S', 'X', 'Y', 'B'):
    if (pts[6][1] > pts[8][1] and pts[10][1] < pts[12][1] and pts[14][1] < pts[16][1] and pts[18][1] > pts[20][1]):
        ch1 = 'Space'

if ch1 in ('E', 'Y', 'B'):
    if (pts[4][0] < pts[5][0]):
        ch1 = 'Next'

if ch1 in ('Next', 'B', 'C', 'H', 'F'):
    if ((pts[0][0] > pts[8][0] and pts[0][0] > pts[12][0] and pts[0][0] > pts[16][0] and pts[0][0] > pts[20][0])
            and pts[4][1] < pts[8][1] and pts[4][1] < pts[12][1] and pts[4][1] < pts[16][1] and pts[4][1] < pts[20][1]):
        ch1 = 'Backspace'

print("ch1=", ch1, " ch2=", ch2, " ch3=", ch3)


kok.append(ch1)

# # [0->aemnst][1->bfdiuvwkr][2->co][3->gh][4->l][5->pqz][6->x][7->yj]
if ch1 != 1:
if (ch1,ch2) in dicttt:
dicttt[(ch1,ch2)] += 1
else:
dicttt[(ch1,ch2)] = 1

frame = cv2.putText(frame, "Predicted " + str(ch1), (30, 80),


cv2.FONT_HERSHEY_SIMPLEX,
3, (0, 0, 255), 2, cv2.LINE_AA)

cv2.imshow("frame", frame)
interrupt = cv2.waitKey(1)
if interrupt & 0xFF == 27:
# esc key
break

except Exception:
print("==", traceback.format_exc())

dicttt = {key: val for key, val in sorted(dicttt.items(), key = lambda ele: ele[1], reverse
= True)} print(dicttt)
print(set(kok))
capture.release()
cv2.destroyAllWindows
()

final_pred.py

# Importing Libraries
import numpy as np
import math
import cv2
import os, sys
import traceback
import pyttsx3
from keras.models import load_model
from cvzone.HandTrackingModule import HandDetector
from string import ascii_uppercase
import enchant

ddd = enchant.Dict("en-US")
hd = HandDetector(maxHands=1)
hd2 = HandDetector(maxHands=1)
import tkinter as tk
from PIL import Image, ImageTk

offset = 29

os.environ["THEANO_FLAGS"] = "device=cuda, assert_no_cpu_op=True"

# Application :

class Application:

    def __init__(self):
        self.vs = cv2.VideoCapture(0)
        self.current_image = None
        self.model = load_model('cnn8grps_rad1_model.h5')
        self.speak_engine = pyttsx3.init()
        self.speak_engine.setProperty("rate", 100)
        voices = self.speak_engine.getProperty("voices")
        self.speak_engine.setProperty("voice", voices[0].id)

        self.ct = {}
        self.ct['blank'] = 0
        self.blank_flag = 0
        self.space_flag = False
        self.next_flag = True
        self.prev_char = ""
        self.count = -1
        self.ten_prev_char = []
        for i in range(10):
            self.ten_prev_char.append(" ")

        for i in ascii_uppercase:
            self.ct[i] = 0
        print("Loaded model from disk")

        self.root = tk.Tk()
        self.root.title("Sign Language To Text Conversion")
        self.root.protocol('WM_DELETE_WINDOW', self.destructor)
        self.root.geometry("1300x700")

        self.panel = tk.Label(self.root)
        self.panel.place(x=100, y=3, width=480, height=640)

        self.panel2 = tk.Label(self.root)  # initialize image panel
        self.panel2.place(x=700, y=115, width=400, height=400)

        self.T = tk.Label(self.root)
        self.T.place(x=60, y=5)
        self.T.config(text="Sign Language To Text Conversion", font=("Courier", 30, "bold"))

        self.panel3 = tk.Label(self.root)  # Current Symbol
        self.panel3.place(x=280, y=585)

        self.T1 = tk.Label(self.root)
        self.T1.place(x=10, y=580)
        self.T1.config(text="Character :", font=("Courier", 30, "bold"))

        self.panel5 = tk.Label(self.root)  # Sentence
        self.panel5.place(x=260, y=632)

        self.T3 = tk.Label(self.root)
        self.T3.place(x=10, y=632)
        self.T3.config(text="Sentence :", font=("Courier", 30, "bold"))

        self.T4 = tk.Label(self.root)
        self.T4.place(x=10, y=700)
        self.T4.config(text="Suggestions :", fg="red", font=("Courier", 30, "bold"))

        self.b1 = tk.Button(self.root)
        self.b1.place(x=390, y=700)

        self.b2 = tk.Button(self.root)
        self.b2.place(x=590, y=700)

        self.b3 = tk.Button(self.root)
        self.b3.place(x=790, y=700)

        self.b4 = tk.Button(self.root)
        self.b4.place(x=990, y=700)

        self.speak = tk.Button(self.root)
        self.speak.place(x=1305, y=630)
        self.speak.config(text="Speak", font=("Courier", 20), wraplength=100, command=self.speak_fun)

        self.clear = tk.Button(self.root)
        self.clear.place(x=1205, y=630)
        self.clear.config(text="Clear", font=("Courier", 20), wraplength=100, command=self.clear_fun)

        self.str = " "
        self.ccc = 0
        self.word = " "
        self.current_symbol = "C"
        self.photo = "Empty"

        self.word1 = " "
        self.word2 = " "
        self.word3 = ""
        self.word4 = ""

        self.video_loop()

    def video_loop(self):
        try:
            ok, frame = self.vs.read()
            cv2image = cv2.flip(frame, 1)
            if cv2image.any:
                hands = hd.findHands(cv2image, draw=False, flipType=True)
                cv2image_copy = np.array(cv2image)
                cv2image = cv2.cvtColor(cv2image, cv2.COLOR_BGR2RGB)
                self.current_image = Image.fromarray(cv2image)
                imgtk = ImageTk.PhotoImage(image=self.current_image)
                self.panel.imgtk = imgtk
                self.panel.config(image=imgtk)

                if hands[0]:
                    hand = hands[0]
                    map = hand[0]
                    x, y, w, h = map['bbox']
                    image = cv2image_copy[y - offset:y + h + offset, x - offset:x + w + offset]

                    white = cv2.imread("white.jpg")
                    # img_final=img_final1=img_final2=0
                    if image.all:
                        handz = hd2.findHands(image, draw=False, flipType=True)
                        self.ccc += 1
                        if handz[0]:
                            hand = handz[0]
                            handmap = hand[0]
                            self.pts = handmap['lmList']
                            # x1,y1,w1,h1=hand['bbox']

                            os = ((400 - w) // 2) - 15
                            os1 = ((400 - h) // 2) - 15
                            # draw the five finger chains of the hand skeleton on the white canvas
                            for t in range(0, 4, 1):
                                cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                         (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                            for t in range(5, 8, 1):
                                cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                         (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                            for t in range(9, 12, 1):
                                cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                         (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                            for t in range(13, 16, 1):
                                cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                         (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                            for t in range(17, 20, 1):
                                cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                         (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                            # draw the palm outline
                            cv2.line(white, (self.pts[5][0] + os, self.pts[5][1] + os1),
                                     (self.pts[9][0] + os, self.pts[9][1] + os1), (0, 255, 0), 3)
                            cv2.line(white, (self.pts[9][0] + os, self.pts[9][1] + os1),
                                     (self.pts[13][0] + os, self.pts[13][1] + os1), (0, 255, 0), 3)
                            cv2.line(white, (self.pts[13][0] + os, self.pts[13][1] + os1),
                                     (self.pts[17][0] + os, self.pts[17][1] + os1), (0, 255, 0), 3)
                            cv2.line(white, (self.pts[0][0] + os, self.pts[0][1] + os1),
                                     (self.pts[5][0] + os, self.pts[5][1] + os1), (0, 255, 0), 3)
                            cv2.line(white, (self.pts[0][0] + os, self.pts[0][1] + os1),
                                     (self.pts[17][0] + os, self.pts[17][1] + os1), (0, 255, 0), 3)

                            for i in range(21):
                                cv2.circle(white, (self.pts[i][0] + os, self.pts[i][1] + os1), 2, (0, 0, 255), 1)

                            res = white
                            self.predict(res)

                            self.current_image2 = Image.fromarray(res)
                            imgtk = ImageTk.PhotoImage(image=self.current_image2)
                            self.panel2.imgtk = imgtk
                            self.panel2.config(image=imgtk)

                            self.panel3.config(text=self.current_symbol, font=("Courier", 30))

                            # self.panel4.config(text=self.word, font=("Courier", 30))

                            self.b1.config(text=self.word1, font=("Courier", 20), wraplength=825, command=self.action1)
                            self.b2.config(text=self.word2, font=("Courier", 20), wraplength=825, command=self.action2)
                            self.b3.config(text=self.word3, font=("Courier", 20), wraplength=825, command=self.action3)
                            self.b4.config(text=self.word4, font=("Courier", 20), wraplength=825, command=self.action4)

                            self.panel5.config(text=self.str, font=("Courier", 30), wraplength=1025)
        except Exception:
            print(Exception.__traceback__)
            hands = hd.findHands(cv2image, draw=False, flipType=True)
            cv2image_copy = np.array(cv2image)
            cv2image = cv2.cvtColor(cv2image, cv2.COLOR_BGR2RGB)
            self.current_image = Image.fromarray(cv2image)
            imgtk = ImageTk.PhotoImage(image=self.current_image)
            self.panel.imgtk = imgtk
            self.panel.config(image=imgtk)

            if hands:
                # #print("--------lmlist=",hands[1])
                hand = hands[0]
                x, y, w, h = hand['bbox']
                image = cv2image_copy[y - offset:y + h + offset, x - offset:x + w + offset]

                white = cv2.imread("C:\\Users\\devansh raval\\PycharmProjects\\pythonProject\\white.jpg")
                # img_final=img_final1=img_final2=0

                handz = hd2.findHands(image, draw=False, flipType=True)
                print(" ", self.ccc)
                self.ccc += 1
                if handz:
                    hand = handz[0]
                    self.pts = hand['lmList']
                    # x1,y1,w1,h1=hand['bbox']

                    os = ((400 - w) // 2) - 15
                    os1 = ((400 - h) // 2) - 15
                    # draw the five finger chains of the hand skeleton
                    for t in range(0, 4, 1):
                        cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                 (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                    for t in range(5, 8, 1):
                        cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                 (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                    for t in range(9, 12, 1):
                        cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                 (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                    for t in range(13, 16, 1):
                        cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                 (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                    for t in range(17, 20, 1):
                        cv2.line(white, (self.pts[t][0] + os, self.pts[t][1] + os1),
                                 (self.pts[t + 1][0] + os, self.pts[t + 1][1] + os1), (0, 255, 0), 3)
                    # draw the palm outline
                    cv2.line(white, (self.pts[5][0] + os, self.pts[5][1] + os1),
                             (self.pts[9][0] + os, self.pts[9][1] + os1), (0, 255, 0), 3)
                    cv2.line(white, (self.pts[9][0] + os, self.pts[9][1] + os1),
                             (self.pts[13][0] + os, self.pts[13][1] + os1), (0, 255, 0), 3)
                    cv2.line(white, (self.pts[13][0] + os, self.pts[13][1] + os1),
                             (self.pts[17][0] + os, self.pts[17][1] + os1), (0, 255, 0), 3)
                    cv2.line(white, (self.pts[0][0] + os, self.pts[0][1] + os1),
                             (self.pts[5][0] + os, self.pts[5][1] + os1), (0, 255, 0), 3)
                    cv2.line(white, (self.pts[0][0] + os, self.pts[0][1] + os1),
                             (self.pts[17][0] + os, self.pts[17][1] + os1), (0, 255, 0), 3)

                    for i in range(21):
                        cv2.circle(white, (self.pts[i][0] + os, self.pts[i][1] + os1), 2, (0, 0, 255), 1)

                    res = white
                    self.predict(res)

                    self.current_image2 = Image.fromarray(res)
                    imgtk = ImageTk.PhotoImage(image=self.current_image2)
                    self.panel2.imgtk = imgtk
                    self.panel2.config(image=imgtk)

                    self.panel3.config(text=self.current_symbol, font=("Courier", 30))

                    # self.panel4.config(text=self.word, font=("Courier", 30))

                    self.b1.config(text=self.word1, font=("Courier", 20), wraplength=825, command=self.action1)
                    self.b2.config(text=self.word2, font=("Courier", 20), wraplength=825, command=self.action2)
                    self.b3.config(text=self.word3, font=("Courier", 20), wraplength=825, command=self.action3)
                    self.b4.config(text=self.word4, font=("Courier", 20), wraplength=825, command=self.action4)

                    self.panel5.config(text=self.str, font=("Courier", 30), wraplength=1025)
        except Exception:
            print("==", traceback.format_exc())
        finally:
            self.root.after(1, self.video_loop)

    def distance(self, x, y):
        return math.sqrt(((x[0] - y[0]) ** 2) + ((x[1] - y[1]) ** 2))

    def action1(self):
        idx_space = self.str.rfind(" ")
        idx_word = self.str.find(self.word, idx_space)
        last_idx = len(self.str)
        self.str = self.str[:idx_word]
        self.str = self.str + self.word1.upper()

    def action2(self):
        idx_space = self.str.rfind(" ")
        idx_word = self.str.find(self.word, idx_space)
        last_idx = len(self.str)
        self.str = self.str[:idx_word]
        self.str = self.str + self.word2.upper()
        # self.str[idx_word:last_idx] = self.word2

    def action3(self):
        idx_space = self.str.rfind(" ")
        idx_word = self.str.find(self.word, idx_space)
        last_idx = len(self.str)
        self.str = self.str[:idx_word]
        self.str = self.str + self.word3.upper()

    def action4(self):
        idx_space = self.str.rfind(" ")
        idx_word = self.str.find(self.word, idx_space)
        last_idx = len(self.str)
        self.str = self.str[:idx_word]
        self.str = self.str + self.word4.upper()

    def speak_fun(self):
        self.speak_engine.say(self.str)
        self.speak_engine.runAndWait()

    def clear_fun(self):
        self.str = " "
        self.word1 = " "
        self.word2 = " "
        self.word3 = " "
        self.word4 = " "

    def predict(self, test_image):
        white = test_image
        white = white.reshape(1, 400, 400, 3)
        prob = np.array(self.model.predict(white)[0], dtype='float32')
        ch1 = np.argmax(prob, axis=0)
        prob[ch1] = 0
        ch2 = np.argmax(prob, axis=0)
        prob[ch2] = 0
        ch3 = np.argmax(prob, axis=0)
        prob[ch3] = 0

        pl = [ch1, ch2]
        # condition for [Aemnst]
        l = [[5, 2], [5, 3], [3, 5], [3, 6], [3, 0], [3, 2], [6, 4], [6, 1], [6, 2], [6, 6], [6, 7], [6, 0], [6, 5],
             [4, 1], [1, 0], [1, 1], [6, 3], [1, 6], [5, 6], [5, 1], [4, 5], [1, 4], [1, 5], [2, 0], [2, 6], [4, 6],
             [1, 0], [5, 7], [1, 6], [6, 1], [7, 6], [2, 5], [7, 1], [5, 4], [7, 0], [7, 5], [7, 2]]
        if pl in l:
            if (self.pts[6][1] < self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 0

        # condition for [o][s]
        l = [[2, 2], [2, 1]]
        if pl in l:
            if (self.pts[5][0] < self.pts[4][0]):
                ch1 = 0
                print("++++++++++++++++++")
                # print("00000")

        # condition for [c0][aemnst]
        l = [[0, 0], [0, 6], [0, 2], [0, 5], [0, 1], [0, 7], [5, 2], [7, 6], [7, 1]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[0][0] > self.pts[8][0] and self.pts[0][0] > self.pts[4][0] and
                    self.pts[0][0] > self.pts[12][0] and self.pts[0][0] > self.pts[16][0] and
                    self.pts[0][0] > self.pts[20][0]) and self.pts[5][0] > self.pts[4][0]:
                ch1 = 2

        # condition for [c0][aemnst]
        l = [[6, 0], [6, 6], [6, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if self.distance(self.pts[8], self.pts[16]) < 52:
                ch1 = 2

        # condition for [gh][bdfikruvw]
        l = [[1, 4], [1, 5], [1, 6], [1, 3], [1, 0]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[6][1] > self.pts[8][1] and self.pts[14][1] < self.pts[16][1] and \
                    self.pts[18][1] < self.pts[20][1] and self.pts[0][0] < self.pts[8][0] and \
                    self.pts[0][0] < self.pts[12][0] and self.pts[0][0] < self.pts[16][0] and \
                    self.pts[0][0] < self.pts[20][0]:
                ch1 = 3

        # con for [gh][l]
        l = [[4, 6], [4, 1], [4, 5], [4, 3], [4, 7]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[4][0] > self.pts[0][0]:
                ch1 = 3

        # con for [gh][pqz]
        l = [[5, 3], [5, 0], [5, 7], [5, 4], [5, 2], [5, 1], [5, 5]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[2][1] + 15 < self.pts[16][1]:
                ch1 = 3

        # con for [l][x]
        l = [[6, 4], [6, 1], [6, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if self.distance(self.pts[4], self.pts[11]) > 55:
                ch1 = 4

        # con for [l][d]
        l = [[1, 4], [1, 6], [1, 1]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.distance(self.pts[4], self.pts[11]) > 50) and (
                    self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 4

        # con for [l][gh]
        l = [[3, 6], [3, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[4][0] < self.pts[0][0]):
                ch1 = 4

        # con for [l][c0]
        l = [[2, 2], [2, 5], [2, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[1][0] < self.pts[12][0]):
                ch1 = 4

        # con for [l][c0]
        l = [[2, 2], [2, 5], [2, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[1][0] < self.pts[12][0]):
                ch1 = 4

        # con for [gh][z]
        l = [[3, 6], [3, 5], [3, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]) and \
                    self.pts[4][1] > self.pts[10][1]:
                ch1 = 5

        # con for [gh][pq]
        l = [[3, 2], [3, 1], [3, 6]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[4][1] + 17 > self.pts[8][1] and self.pts[4][1] + 17 > self.pts[12][1] and \
                    self.pts[4][1] + 17 > self.pts[16][1] and self.pts[4][1] + 17 > self.pts[20][1]:
                ch1 = 5

        # con for [l][pqz]
        l = [[4, 4], [4, 5], [4, 2], [7, 5], [7, 6], [7, 0]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[4][0] > self.pts[0][0]:
                ch1 = 5

        # con for [pqz][aemnst]
        l = [[0, 2], [0, 6], [0, 1], [0, 5], [0, 0], [0, 7], [0, 4], [0, 3], [2, 7]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[0][0] < self.pts[8][0] and self.pts[0][0] < self.pts[12][0] and \
                    self.pts[0][0] < self.pts[16][0] and self.pts[0][0] < self.pts[20][0]:
                ch1 = 5

        # con for [pqz][yj]
        l = [[5, 7], [5, 2], [5, 6]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[3][0] < self.pts[0][0]:
                ch1 = 7

        # con for [l][yj]
        l = [[4, 6], [4, 2], [4, 4], [4, 1], [4, 5], [4, 7]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[6][1] < self.pts[8][1]:
                ch1 = 7

        # con for [x][yj]
        l = [[6, 7], [0, 7], [0, 1], [0, 0], [6, 4], [6, 6], [6, 5], [6, 1]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[18][1] > self.pts[20][1]:
                ch1 = 7

        # condition for [x][aemnst]
        l = [[0, 4], [0, 2], [0, 3], [0, 1], [0, 6]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[5][0] > self.pts[16][0]:
                ch1 = 6

        # condition for [yj][x]
        print("2222 ch1=+++++++++++++++++", ch1, ",", ch2)
        l = [[7, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[18][1] < self.pts[20][1] and self.pts[8][1] < self.pts[10][1]:
                ch1 = 6

        # condition for [c0][x]
        l = [[2, 1], [2, 2], [2, 6], [2, 7], [2, 0]]
        pl = [ch1, ch2]
        if pl in l:
            if self.distance(self.pts[8], self.pts[16]) > 50:
                ch1 = 6

        # con for [l][x]
        l = [[4, 6], [4, 2], [4, 1], [4, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if self.distance(self.pts[4], self.pts[11]) < 60:
                ch1 = 6

        # con for [x][d]
        l = [[1, 4], [1, 6], [1, 0], [1, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[5][0] - self.pts[4][0] - 15 > 0:
                ch1 = 6

        # con for [b][pqz]
        l = [[5, 0], [5, 1], [5, 4], [5, 5], [5, 6], [6, 1], [7, 6], [0, 2], [7, 1], [7, 4], [6, 6], [7, 2], [5, 0],
             [6, 3], [6, 4], [7, 5], [7, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] > self.pts[16][1] and self.pts[18][1] > self.pts[20][1]):
                ch1 = 1

        # con for [f][pqz]
        l = [[6, 1], [6, 0], [0, 3], [6, 4], [2, 2], [0, 6], [6, 2], [7, 6], [4, 6], [4, 1], [4, 2], [0, 2], [7, 1],
             [7, 4], [6, 6], [7, 2], [7, 5], [7, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[6][1] < self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] > self.pts[16][1] and self.pts[18][1] > self.pts[20][1]):
                ch1 = 1

        l = [[6, 1], [6, 0], [4, 2], [4, 1], [4, 6], [4, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[10][1] > self.pts[12][1] and self.pts[14][1] > self.pts[16][1] and
                    self.pts[18][1] > self.pts[20][1]):
                ch1 = 1

        # con for [d][pqz]
        fg = 19
        # print(" ch1=",ch1," ch2=",ch2)
        l = [[5, 0], [3, 4], [3, 0], [3, 1], [3, 5], [5, 5], [5, 4], [5, 1], [7, 6]]
        pl = [ch1, ch2]
        if pl in l:
            if ((self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                 self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]) and
                    (self.pts[2][0] < self.pts[0][0]) and self.pts[4][1] > self.pts[14][1]):
                ch1 = 1

        l = [[4, 1], [4, 2], [4, 4]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.distance(self.pts[4], self.pts[11]) < 50) and (
                    self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 1

        l = [[3, 4], [3, 0], [3, 1], [3, 5], [3, 6]]
        pl = [ch1, ch2]
        if pl in l:
            if ((self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                 self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]) and
                    (self.pts[2][0] < self.pts[0][0]) and self.pts[14][1] < self.pts[4][1]):
                ch1 = 1

        l = [[6, 6], [6, 4], [6, 1], [6, 2]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[5][0] - self.pts[4][0] - 15 < 0:
                ch1 = 1

        # con for [i][pqz]
        l = [[5, 4], [5, 5], [5, 1], [0, 3], [0, 7], [5, 0], [0, 2], [6, 2], [7, 5], [7, 1], [7, 6], [7, 7]]
        pl = [ch1, ch2]
        if pl in l:
            if ((self.pts[6][1] < self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                 self.pts[14][1] < self.pts[16][1] and self.pts[18][1] > self.pts[20][1])):
                ch1 = 1

        # con for [yj][bfdi]
        l = [[1, 5], [1, 7], [1, 1], [1, 6], [1, 3], [1, 0]]
        pl = [ch1, ch2]
        if pl in l:
            if (self.pts[4][0] < self.pts[5][0] + 15) and (
                    (self.pts[6][1] < self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                     self.pts[14][1] < self.pts[16][1] and self.pts[18][1] > self.pts[20][1])):
                ch1 = 7

        # con for [uvr]
        l = [[5, 5], [5, 0], [5, 4], [5, 1], [4, 6], [4, 1], [7, 6], [3, 0], [3, 5]]
        pl = [ch1, ch2]
        if pl in l:
            if ((self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                 self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1])) and \
                    self.pts[4][1] > self.pts[14][1]:
                ch1 = 1

        # con for [w]
        fg = 13
        l = [[3, 5], [3, 0], [3, 6], [5, 1], [4, 1], [2, 0], [5, 0], [5, 5]]
        pl = [ch1, ch2]
        if pl in l:
            if not (self.pts[0][0] + fg < self.pts[8][0] and self.pts[0][0] + fg < self.pts[12][0] and
                    self.pts[0][0] + fg < self.pts[16][0] and self.pts[0][0] + fg < self.pts[20][0]) and not (
                    self.pts[0][0] > self.pts[8][0] and self.pts[0][0] > self.pts[12][0] and
                    self.pts[0][0] > self.pts[16][0] and self.pts[0][0] > self.pts[20][0]) and \
                    self.distance(self.pts[4], self.pts[11]) < 50:
                ch1 = 1

        # con for [w]
        l = [[5, 0], [5, 5], [0, 1]]
        pl = [ch1, ch2]
        if pl in l:
            if self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and \
                    self.pts[14][1] > self.pts[16][1]:
                ch1 = 1

        # -------------------------condn for 8 groups ends

        # -------------------------condn for subgroups starts
        if ch1 == 0:
            ch1 = 'S'
            if self.pts[4][0] < self.pts[6][0] and self.pts[4][0] < self.pts[10][0] and \
                    self.pts[4][0] < self.pts[14][0] and self.pts[4][0] < self.pts[18][0]:
                ch1 = 'A'
            if self.pts[4][0] > self.pts[6][0] and self.pts[4][0] < self.pts[10][0] and \
                    self.pts[4][0] < self.pts[14][0] and self.pts[4][0] < self.pts[18][0] and \
                    self.pts[4][1] < self.pts[14][1] and self.pts[4][1] < self.pts[18][1]:
                ch1 = 'T'
            if self.pts[4][1] > self.pts[8][1] and self.pts[4][1] > self.pts[12][1] and \
                    self.pts[4][1] > self.pts[16][1] and self.pts[4][1] > self.pts[20][1]:
                ch1 = 'E'
            if self.pts[4][0] > self.pts[6][0] and self.pts[4][0] > self.pts[10][0] and \
                    self.pts[4][0] > self.pts[14][0] and self.pts[4][1] < self.pts[18][1]:
                ch1 = 'M'
            if self.pts[4][0] > self.pts[6][0] and self.pts[4][0] > self.pts[10][0] and \
                    self.pts[4][1] < self.pts[18][1] and self.pts[4][1] < self.pts[14][1]:
                ch1 = 'N'

        if ch1 == 2:
            if self.distance(self.pts[12], self.pts[4]) > 42:
                ch1 = 'C'
            else:
                ch1 = 'O'

        if ch1 == 3:
            if (self.distance(self.pts[8], self.pts[12])) > 72:
                ch1 = 'G'
            else:
                ch1 = 'H'

        if ch1 == 7:
            if self.distance(self.pts[8], self.pts[4]) > 42:
                ch1 = 'Y'
            else:
                ch1 = 'J'

        if ch1 == 4:
            ch1 = 'L'

        if ch1 == 6:
            ch1 = 'X'

        if ch1 == 5:
            if self.pts[4][0] > self.pts[12][0] and self.pts[4][0] > self.pts[16][0] and \
                    self.pts[4][0] > self.pts[20][0]:
                if self.pts[8][1] < self.pts[5][1]:
                    ch1 = 'Z'
                else:
                    ch1 = 'Q'
            else:
                ch1 = 'P'

        if ch1 == 1:
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] > self.pts[16][1] and self.pts[18][1] > self.pts[20][1]):
                ch1 = 'B'
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 'D'
            if (self.pts[6][1] < self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] > self.pts[16][1] and self.pts[18][1] > self.pts[20][1]):
                ch1 = 'F'
            if (self.pts[6][1] < self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] > self.pts[20][1]):
                ch1 = 'I'
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] > self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 'W'
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]) and \
                    self.pts[4][1] < self.pts[9][1]:
                ch1 = 'K'
            if ((self.distance(self.pts[8], self.pts[12]) - self.distance(self.pts[6], self.pts[10])) < 8) and (
                    self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 'U'
            if ((self.distance(self.pts[8], self.pts[12]) - self.distance(self.pts[6], self.pts[10])) >= 8) and (
                    self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]) and \
                    (self.pts[4][1] > self.pts[9][1]):
                ch1 = 'V'
            if (self.pts[8][0] > self.pts[12][0]) and (
                    self.pts[6][1] > self.pts[8][1] and self.pts[10][1] > self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] < self.pts[20][1]):
                ch1 = 'R'

        if ch1 == 1 or ch1 == 'E' or ch1 == 'S' or ch1 == 'X' or ch1 == 'Y' or ch1 == 'B':
            if (self.pts[6][1] > self.pts[8][1] and self.pts[10][1] < self.pts[12][1] and
                    self.pts[14][1] < self.pts[16][1] and self.pts[18][1] > self.pts[20][1]):
                ch1 = " "

        print(self.pts[4][0] < self.pts[5][0])

        if ch1 == 'E' or ch1 == 'Y' or ch1 == 'B':
            if (self.pts[4][0] < self.pts[5][0]) and (self.pts[6][1] > self.pts[8][1] and
                    self.pts[10][1] > self.pts[12][1] and self.pts[14][1] > self.pts[16][1] and
                    self.pts[18][1] > self.pts[20][1]):
                ch1 = "next"

        if ch1 == "next" or ch1 == 'B' or ch1 == 'C' or ch1 == 'H' or ch1 == 'F' or ch1 == 'X':
            if (self.pts[0][0] > self.pts[8][0] and self.pts[0][0] > self.pts[12][0] and
                    self.pts[0][0] > self.pts[16][0] and self.pts[0][0] > self.pts[20][0]) and \
                    (self.pts[4][1] < self.pts[8][1] and self.pts[4][1] < self.pts[12][1] and
                     self.pts[4][1] < self.pts[16][1] and self.pts[4][1] < self.pts[20][1]) and \
                    (self.pts[4][1] < self.pts[6][1] and self.pts[4][1] < self.pts[10][1] and
                     self.pts[4][1] < self.pts[14][1] and self.pts[4][1] < self.pts[18][1]):
                ch1 = 'Backspace'

if ch1=="next" and self.prev_char!="next":


if self.ten_prev_char[(self.count-2)%10]!="next":
if self.ten_prev_char[(self.count-
2)%10]=="Backspace": self.str=self.str[0:-1]
else:
if self.ten_prev_char[(self.count - 2) % 10] !=
"Backspace": self.str = self.str +
self.ten_prev_char[(self.count-2)%10]
else:
if self.ten_prev_char[(self.count - 0) % 10] !=
"Backspace": self.str = self.str +
self.ten_prev_char[(self.count - 0) % 10]

if ch1==" " and self.prev_char!="


": self.str = self.str + " "

self.prev_char=ch1
self.current_symbol=ch1
self.count += 1
self.ten_prev_char[self.count%10]=ch1

        if len(self.str.strip()) != 0:
            st = self.str.rfind(" ")
            ed = len(self.str)
            word = self.str[st + 1:ed]
            self.word = word
            if len(word.strip()) != 0:
                ddd.check(word)
                lenn = len(ddd.suggest(word))
                if lenn >= 4:
                    self.word4 = ddd.suggest(word)[3]
                if lenn >= 3:
                    self.word3 = ddd.suggest(word)[2]
                if lenn >= 2:
                    self.word2 = ddd.suggest(word)[1]
                if lenn >= 1:
                    self.word1 = ddd.suggest(word)[0]
            else:
                self.word1 = " "
                self.word2 = " "
                self.word3 = " "
                self.word4 = " "

    def destructor(self):
        print(self.ten_prev_char)
        self.root.destroy()
        self.vs.release()
        cv2.destroyAllWindows()


print("Starting Application...")

(Application()).root.mainloop()

Chapter-6

UML Diagram:

What is UML?
UML (Unified Modeling Language) is the successor to the wave of Object-Oriented Analysis and Design (OOA&D) methods that appeared in the late 80's. It most directly unifies the methods of Booch, Rumbaugh (OMT) and Jacobson. UML is called a modeling language, not a method: most methods consist, at least in principle, of both a modeling language and a process, and the modeling language is the notation that a method uses to express a design.
The notation is the graphical part; it is the syntax of the modeling language. For instance, the class diagram notation defines how concepts such as class, association, and multiplicity are represented. The main diagram types are:
Class Diagram: The class diagram technique has become truly central within object-oriented methods. Virtually every method includes some variation on this technique. The class diagram is also subject to the greatest range of modeling concepts: although the basic elements are needed by everyone, the advanced concepts are used less often. A class diagram describes the types of objects in the system and the various kinds of static relationships that exist among them. There are two principal kinds of static relationship:
Subtype
Association
Class diagrams also show the attributes and operations of a class and the constraints that apply to the way objects are connected.

Association: Associations represent relationships between instances of classes. From the conceptual perspective, an association represents a conceptual relation between classes. Each association has two roles, and each role is a direction on the association. A role also has a multiplicity, which is an indication of how many objects may participate in the given relationship.

Generalization: A typical example of generalization involves the personal and corporate customers of a business. They have differences but also many similarities. The similarities can be placed in a generalization, with personal customer and corporate customer as subtypes.

Aggregation: Aggregation is the part-of relationship. It is like saying a car has an engine and wheels as its parts. This sounds good, but the difficult thing is deciding what the difference is between aggregation and association.
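For illustration, the small Python sketch below shows how generalization (subtype), association and aggregation might look once a class diagram is turned into code. The class names and attributes here are hypothetical examples chosen to mirror the text above; they are not part of the SIGN-SPEAK implementation.

# Hypothetical sketch only: illustrative classes, not part of the project code.
class Customer:                        # generalization: the common super type
    def __init__(self, name):
        self.name = name

class PersonalCustomer(Customer):      # subtype of Customer
    pass

class CorporateCustomer(Customer):     # subtype of Customer
    pass

class Engine:
    pass

class Car:
    def __init__(self, owner):
        self.engine = Engine()         # aggregation: the Engine is a part of the Car
        self.owner = owner             # association: the Car is linked to one Customer

car = Car(owner=PersonalCustomer("Asha"))
print(isinstance(car.owner, Customer))   # True: a subtype can stand in for its super type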

Interaction: Interaction diagrams are models that describe how groups of objects collaborate in some behavior.

Typically, an interaction diagram captures the behavior of a single use case. The diagram shows a number of example objects and the messages that are passed between these objects within the use case. The approaches below can be illustrated with a simple use case that exhibits this behavior.

Objects can send messages to one another; each message checks a given stock item. There are two kinds of interaction diagram: the sequence diagram and the collaboration diagram.

Package Diagram: One of the oldest questions in software methods is: how do you break down a large system into smaller systems? Large systems become difficult to understand, and so do the changes we make to them.
Structured methods used functional decomposition, in which the overall system was mapped as a function broken down into sub-functions, which were further broken down into sub-functions, and so forth. The separation of process and data is gone, and functional decomposition is gone, but the old question still remains. One idea is to group classes together into a higher-level unit. This idea, applied very loosely, appears in many object-oriented methods. In UML, this grouping mechanism is the package, and the term package diagram is used for a diagram that shows packages of classes and the dependencies among them. A dependency exists between two elements if changes to the definition of one element may cause changes to the other. With classes, dependencies exist for various reasons: one class sends a message to another; one class has another as part of its data; one class mentions another as a parameter to an operation. A dependency between two packages exists if any dependency exists between any two classes in the packages.

State diagram: State diagrams are a familiar technique for describing the behavior of a system. They describe all the possible states a particular object can get into and how the object's state changes as a result of the events that reach it. In most OO techniques, state diagrams are drawn for a single class to show the lifetime behavior of a single object. There are many forms of state diagram, each with slightly different semantics; the most popular one used in OO techniques is based on David Harel's statecharts.
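As a small illustration of the idea, the sketch below models a few states that a recognized gesture could pass through. The states, events and transitions are invented purely for explanation and are not taken from the project code.

# Hypothetical state-machine sketch: states, events and transitions are illustrative only.
TRANSITIONS = {
    ("idle", "hand_detected"): "predicting",
    ("predicting", "gesture_stable"): "confirmed",
    ("confirmed", "next_sign"): "idle",
}

def step(state, event):
    """Return the next state, or stay in the current state if the event is not handled."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["hand_detected", "gesture_stable", "next_sign"]:
    state = step(state, event)
    print(event, "->", state)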

PERT CHART (Program Evaluation Review Technique):

A PERT chart is organized around events, activities or tasks. It is a scheduling device that graphically shows the order of the tasks to be performed, and it enables the calculation of the critical path. The time and cost associated with each path are calculated, and the path that requires the greatest amount of elapsed time is the critical path.

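The critical-path idea can be illustrated with a short Python sketch that enumerates every path through a small task network and picks the one with the largest total duration. The task names, durations and dependencies below are hypothetical and used only for explanation.

# Hypothetical PERT sketch: tasks, durations (in days) and dependencies are invented.
durations = {"design": 5, "coding": 10, "training": 7, "testing": 4}
successors = {"design": ["coding", "training"], "coding": ["testing"],
              "training": ["testing"], "testing": []}

def all_paths(task, path=()):
    """Enumerate every path from the given task to a task with no successors."""
    path = path + (task,)
    if not successors[task]:
        yield path
    for nxt in successors[task]:
        yield from all_paths(nxt, path)

critical = max(all_paths("design"), key=lambda p: sum(durations[t] for t in p))
print("critical path:", " -> ".join(critical),
      "| elapsed time:", sum(durations[t] for t in critical), "days")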
GANTT CHART
The Gantt chart, also known as a bar chart, is used exclusively for scheduling purposes. It is a project-controlling technique used for scheduling, budgeting and resource planning. A Gantt chart is a bar chart with each bar representing an activity; the bars are drawn against a time line, and the length of each bar shows the length of time planned for the activity. The gray parts of the Gantt chart in the figure show slack time, that is, the latest time by which a task must be finished.
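A rough feel for the chart can be given by a tiny text sketch that prints one bar per activity against a time line. The activities and week numbers below are hypothetical.

# Hypothetical Gantt sketch: one text bar per activity, drawn against a weekly time line.
schedule = [("requirements", 0, 2), ("design", 2, 4), ("coding", 4, 9), ("testing", 9, 11)]
for activity, start, end in schedule:
    print(f"{activity:<13}|{' ' * start}{'#' * (end - start)}")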

Use Case Model

Use Case Model of the Project:

The use case model for any system consists of "use cases". Use cases represent the different ways in which the system can be used by the user. A simple way to find all the use cases of a system is to ask the question "What can the user do using the system?" The use cases partition the system behavior into transactions such that each transaction performs some useful action from the user's point of view. The purpose of a use case is to define a piece of coherent behavior without revealing the internal structure of the system. A use case typically represents a sequence of interactions between the user and the system; these interactions consist of one mainline sequence that represents the normal interaction between the user and the system. The use case model is an important analysis and design artifact. Use cases can be represented by drawing a use case diagram and writing an accompanying text elaborating the drawing. In the use case diagram, each use case is represented by an ellipse with the name of the use case written inside it. All the ellipses of the system are enclosed within a rectangle which represents the system boundary; the name of the system being modeled appears inside the rectangle. The different users of the system are represented by a stick-person icon, normally referred to as an Actor. The line connecting the actor and the use cases is called the communication relationship. When a stick-person icon represents an external system, it is annotated with the stereotype <<external system>>.

Data Flow Diagram:

The data flow diagram is the starting point of the design phase; it functionally decomposes the requirements specification. A DFD consists of a series of bubbles joined by lines. The bubbles represent data transformations and the lines represent data flows in the system. A DFD describes what data flow rather than how they are processed, so it does not depend on hardware, software or data structures.
A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design). A data flow diagram is a significant modeling technique for analyzing and constructing information processes. DFD literally means an illustration that explains the course or movement of information in a process; it illustrates this flow of information based on the inputs and outputs. A DFD can also be referred to as a Process Model.
Unlike a detailed flow chart, a DFD does not supply detailed descriptions of modules; it graphically describes a system's data and how the data interact with the system. A data flow diagram uses a small number of symbols; the symbols used here are those of DeMarco.

There are seven rules for constructing a data flow diagram:

i) Squares, circles and files must bear names.
ii) Arrows should not cross each other.
iii) Decomposed data flows must be balanced.
iv) No two data flows, squares or circles can have the same name.
v) Draw all data flows around the outside of the diagram.
vi) Choose meaningful names for data flows, processes and data stores.
vii) Control information such as record counts, passwords and validation requirements is not pertinent to a data flow diagram.
Additionally, a DFD can be utilized to visualize data processing or a structured design. A basic DFD can then be decomposed into a lower-level diagram demonstrating smaller steps and exhibiting details of the system being modelled.
On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process. It is common practice to draw a context-level data flow diagram first, which shows the interaction between the system and the external agents that act as data sources and data sinks. On the context diagram (also known as the Level 0 DFD), the system's interactions with the outside world are modeled purely in terms of data flows across the system boundary. The context diagram shows the entire system as a single process, and gives no clues as to its internal organization.

DFD-Level 0

DFD-Level 1

E-R Diagram
About ER Diagram:
Entity Relationship Diagram
The E-R model is a popular high-level conceptual data model. This model and its variations are frequently used for the conceptual design of database applications, and many database design tools employ its concepts.
A database that conforms to an E-R diagram can be represented by a collection of tables. Its building blocks are:
Entities
Attributes
Relations
  o Many-to-many
  o Many-to-one
  o One-to-many
  o One-to-one
Weak entities
The entities and the relationships between them are shown using the following conventions.

• Relationships and attributes are drawn as diamonds and ovals respectively, and relationships are labeled.

A model is an abstraction process that hides superfluous details while highlighting the details relevant to the application at hand.
A data model is a mechanism that provides this abstraction for database applications. Data modeling is used for representing entities and their relationships in the database. Entities are the basic units used in modeling databases; entities can have concrete existence or constitute ideas or concepts.
• An entity type or entity set is a group of similar objects of concern to an organization for which it maintains data.
• Properties are characteristics of an entity, also called attributes. A key is a single attribute, or a combination of two or more attributes, of an entity set that is used to identify one or more instances of the set.
In the relational model we represent an entity by a relation and use tuples to represent an instance of the entity.
A relationship is used in data modeling to represent an association between entity sets.
• An association between two attributes indicates that the values of the associated attributes are interdependent.

Sequence diagram

Software Testing

Security Testing of the Project

Testing is vital for the success of any software; no system design is ever perfect. Testing is carried out in two phases. The first phase takes place during software engineering, that is, during module creation. The second phase comes after the completion of the software; this is system testing, which verifies that the whole set of programs hangs together.
White Box Testing:

In this technique, the logical parts of the software are closely examined and tested by cases that exercise specific sets of conditions or loops. All logical parts of the software are checked at least once. Errors that can be corrected using this technique include typographical errors, logical expressions that should be executed once but may be executed more than once, and errors resulting from the use of wrong controls and loops. White box testing exercises all the independent paths within a module, exercises all logical decisions on both their true and false sides, exercises all loops at and within their operational bounds, and exercises internal data structures to ensure their validity.
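As a minimal sketch of this technique, the example below exercises both sides of one logical decision taken from the predict() listing above (the distance threshold that separates 'C' from 'O'). The distance helper is restated locally so the sketch stays self-contained, and the classify_c_or_o wrapper is a hypothetical name introduced only for this test.

# White-box style sketch: drive both branches of the C/O distance decision.
import math

def distance(x, y):
    return math.sqrt(((x[0] - y[0]) ** 2) + ((x[1] - y[1]) ** 2))

def classify_c_or_o(pt12, pt4):
    # mirrors the "if ch1 == 2" decision in predict(): far apart -> 'C', otherwise 'O'
    return 'C' if distance(pt12, pt4) > 42 else 'O'

# one input per branch, so the true side and the false side are each executed once
assert classify_c_or_o((0, 0), (100, 0)) == 'C'
assert classify_c_or_o((0, 0), (10, 0)) == 'O'
print("both branches of the C/O decision were exercised")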
Black Box Testing:
This method enables the software engineer to derive sets of input conditions that fully exercise all the functional requirements of a program. Black box testing tests the inputs, the outputs and the external data. It checks whether the input data is correct and whether we are getting the desired output.

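A minimal black-box sketch, by contrast, checks only inputs against expected outputs without looking at the internal logic. Here the distance helper from final_pred.py is exercised with known coordinate pairs; the test function name is hypothetical.

# Black-box style sketch: verify outputs for chosen inputs only.
import math

def distance(x, y):
    return math.sqrt(((x[0] - y[0]) ** 2) + ((x[1] - y[1]) ** 2))

def test_distance_known_values():
    assert distance((0, 0), (3, 4)) == 5.0                       # classic 3-4-5 triangle
    assert distance((2, 2), (2, 2)) == 0.0                       # identical points
    assert abs(distance((0, 0), (1, 1)) - math.sqrt(2)) < 1e-9   # diagonal of a unit square

test_distance_known_values()
print("all black-box checks passed")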
Chapter-7

CONCLUSION:
1. Conclusion of the Project Sign-Language-To-Text-and-Speech-Conversion:
Our project is only a humble venture to satisfy the needs of managing the project work. Several user-friendly coding practices have also been adopted. This package shall prove to be a powerful package in satisfying all the requirements of the system. The objective of software planning is to provide a framework that enables the manager to make reasonable estimates within a limited time frame at the beginning of the software project; these estimates should be updated regularly as the project progresses. At the end it is concluded that we have made an effort on the following points:
• A description of the background and context of the project and its relation to work already done in the area.
• A statement of the aims and objectives of the project.
• We defined the problem on which we are working in the project.
• A description of the purpose, scope, and applicability.
• We described the requirement specifications of the system and the actions that can be performed on it.
• We understood the problem domain and produced a model of the system, which describes the operations that can be performed on the system.
• We included features and operations in detail, including screen layouts.
• We designed the user interface and addressed security issues related to the system.
• Finally, the system was implemented and tested according to the test cases.

Future Scope of the Project:

Finally, we are able to predict any alphabet [A-Z] with 97% accuracy (with and without a clean background and proper lighting conditions) through our method. If the background is clean and the lighting conditions are good, the results are even 99% accurate.
In future work we will build an Android application that implements this algorithm for gesture prediction.

References and Bibliography:
[1] ijaerv13n9_90.pdf (ripublication.com)
[2] Translation of Sign Language Finger-Spelling to Text using Image
Processing (ijcaonline.org)
[3] Sign Language to Text and Speech Conversion (ijariit.com)
[4] Sign Language to Text and Speech Translation in Real Time Using Convolutional Neural
Network – IJERT
[5](PDF) Conversion of Sign Language To Text And Speech Using Machine Learning
Techniques (researchgate.net)
[6] An Improved Hand Gesture Recognition Algorithm based on image contours to Identify
the American Sign Language - IOPscience
