
SIGN LANGUAGE READER

A Minor Project
Submitted in partial fulfillment of the requirement for the award of the Degree
of Bachelor of Technology in Computer Science & Engg.
Submitted To

RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA, BHOPAL (M.P.)

Submitted By:
DEVESH PRAJAPAT
Enrollment No- 0821CS211039
DHRUV TRIPATHI
Enrollment No- 0821CS211041
AJAY PRAJAPATI
Enrollment No- 0821CS211009

Under The Supervision Of:


Ms. SHIVANI GUPTA (Asst. Prof., CSE Dept.)
Mr. PRATYUSH SHARMA (HOD, CSE Dept.)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING.


MALWA INSTITUTE OF TECHNOLOGY, INDORE
Department Of Computer Science & Engg.
CERTIFICATE

This is to certify that the work embodied in this minor project entitled “Sign
Language Reader”, being submitted by “Devesh Prajapat” (Roll No.: 39),
“Dhruv Tripathi” (Roll No.: 41) and “Ajay Prajapati” (Roll No.: 09) in
partial fulfillment of the requirement for the award of “Bachelor of Technology
in Computer Science & Engineering” to “Rajiv Gandhi Proudyogiki
Vishwavidyalaya, Bhopal (M.P.)” during the academic year 2023-24, is a record
of a bonafide piece of work carried out by them under our supervision and
guidance in the “Department of Computer Science & Engineering”, Malwa
Institute of Technology, Indore (M.P.).

APPROVED & SUPERVISED BY:

Ms. Shivani Gupta


Asst Prof.
CSE Dept.

Mr. Pratyush Sharma


HOD
CSE Dept.
MALWA INSTITUTE OF TECHNOLOGY, INDORE
Department Of Computer Science & Engg.

CERTIFICATE OF APPROVAL

A minor project entitled “Sign Language Reader” being submitted by


“Devesh Prajapat” (Roll No.: 39), “Dhruv Tripathi” (Roll No.: 41) & “Ajay
Prajapati” (Roll No.: 09) has been examined by us and is hereby approved for
the award of degree “Bachelor of Technology in Computer Science &
Engineering” for which it has been submitted. It is understood that by this
approval the undersigned do not necessarily endorse or approve any statement
made, opinion expressed or conclusion drawn therein, but approve the
dissertation only for the purpose for which it has been submitted.

(Internal Examiner) (External Examiner)


Date: Date:

MALWA INSTITUTE OF TECHNOLOGY, INDORE


Department Of Computer Science and Engineering
DECLARATION

We, the students Devesh Prajapat, Dhruv Tripathi and Ajay Prajapati of
Bachelor of Technology in CSE at Malwa Institute of Technology, Indore
(M.P.), hereby declare that the work presented in this dissertation entitled “Sign
Language Reader” is the outcome of our own work, is bonafide and correct to
the best of our knowledge, and has been carried out with due regard to
engineering ethics. The work presented does not infringe any patented work
and has not been submitted to any other university or anywhere else for the
award of any degree or any professional diploma.

Date:

Acknowledgement

Foremost, I would like to express my sincere gratitude to my guide Ms. Shivani Gupta
for the continuous support of my study and research, and for her patience, motivation,
enthusiasm, and immense knowledge. Her guidance helped me throughout the research
and writing of this thesis.
I am ever grateful to Dr. M.S. Murthy, Director, Malwa Institute of Technology, Indore, for
providing the opportunity to explore this wonderful venture. I would also like to give my
regards to another source of inspiration, Mr. Pratyush Sharma (HOD, CSE).

This project would not have been possible without collaboration with my colleagues, and the
support of my friends and family.
TABLE OF CONTENTS

Abstract

Chapter 1 Introduction

Chapter 2 Literature Survey

Chapter 3 Problem Analysis

Chapter 4 Design

4.1 Data Flow Diagram
4.2 ER Diagram
4.3 Use Case Diagram
4.4 Sequence Diagram
4.5 Activity Diagram
4.6 Class Diagram

Chapter 5 Innovation

Chapter 6 Objective

Chapter 7 Solution Domain

Chapter 8 Proposed Methodology and Algorithms

Chapter 9 References
1. INTRODUCTION
Sign language refers to a form of communication used by deaf people that is not based on
the spoken word. Instead, signers use visual and manual techniques such as hand
gestures and facial expressions to express themselves. These languages are
complete, natural languages with their own grammar, vocabulary and rules. Sign
languages are not universal, although they share some similarities.
Sign languages have become important in communities with deaf people and form the basis of
local deaf cultures. Although initially intended for the deaf and hard of hearing, sign
language is also used by hearing people who have difficulty speaking or with oral
language due to a disability, by cochlear implant users, and by people who have deaf
family members.
It is difficult to estimate the number of sign languages in the world. Most countries have their
own indigenous sign languages, many having two or more.
Some sign languages are recognized by law, which reflects the value attached to them for
various purposes. Linguists distinguish natural sign languages from precursor or derived
systems such as manual codes constructed for spoken languages, home sign, "baby sign", and
the signs that non-human primates learn.
Sign languages, in short, are beautiful ways of expressing oneself and have evolved naturally
over time, providing a means of communication for deaf people and are an important part of
diverse cultures around the world.
Sign languages are not new to deaf communities. The earliest recorded mention of sign
language appears in Plato's Cratylus, dating to the fifth century BC, in which Socrates
argues that if people could not speak, they would communicate with gestures of the hands,
head, and body, much as deaf people do.
Until the late 1800s, historical sign languages were mostly described through manual or
finger alphabets. These were created to help convert spoken words into sign, not to
document the languages themselves. It is debatable whether the monks of the Middle Ages
employed a true sign language; some claim their system may have been mere gesture rather
than a full sign language.
In South Asia, particularly India, many deaf people rely on Indian Sign Language (ISL) as
their primary means of communication. By 2003 it was the dominant sign language of the
region, used by hundreds of thousands of people. ISL combines hand movements, gestures
and facial expressions, and is used not only within the deaf community but also by family
members, friends, and teachers. It enables deaf individuals to express their thoughts and
feelings and is central to how they communicate with one another and with their families
and neighbors. Keep in mind that languages change over time, so it is best to consult
current sources for up-to-date information about Indian Sign Language.
2. LITERATURE SURVEY
1. Paper name: A Literature Survey on Real-Time Indian Sign Language Recognition System

Author: Mr. P.K. Athira, Mr. Mohit Jaiswal, Mr. Abhishek Sharma

Publish Year: 2022

Efficiency: 71%

Algorithm & Technology: deep learning-based models, convolutional neural networks
(CNN), keypoint detection models, action detection models, and sensors like Leap Motion
and Microsoft Kinect.

Innovation:

The goal is to develop a sign language recognition system using deep learning techniques.

This helps people who cannot understand sign language follow what is being signed.

It is a significant step towards everyone being able to communicate and understand each
other, no matter how they prefer to talk.

Drawback:

The system has its limitations: it does not perform well in crowded or poorly lit
environments. Furthermore, it often confuses similar-shaped signs such as M, N, and E.
It also does not consider the gesture's surrounding context, which leads to many wrong
translations.

Future Scope:-

The field of ISL recognition is constantly evolving. Future development and research in several
areas would significantly extend the capabilities of current systems. Emphasis should thus be
placed on enhancing real-time recognition. At present, the majority of systems function with the
use of static signs and images that are compared with a database of trained samples. The next
step, however, is building systems that can detect ISL words directly from live videos. Real time
recognition would also significantly increase ISL communication responsiveness and
applicability in dynamic and interactive settings.

Another important area of research to consider relates to the enhancement of distinguishing the
numerous, multi-staged gestures of ISL. Thus, researchers can go deep in understanding and
deciphering the complex hand gestures and derive the underlying meaning of each. This
approach enables the ISL recognition systems to capture the nuances of expression in the
language.

Hand gestures are accompanied by body movements and facial expressions to form complete
ISL communication. Future research should comprehensively recognize these non-manual
elements. Integrating facial expressions and body movements with hand gestures will
greatly improve the overall communication experience in ISL by adding to the meaning of
the signs.
Contextual recognition presents another important area for future research, since the
surrounding environment affects the meaning of ISL signs. Systems that take the context
into account during recognition will achieve better accuracy and practicality in real
scenarios. Because the interpretation of signs depends on the surrounding environmental
context, this contextual awareness makes ISL communication more effective.

Improving the performance of ISL recognition systems will also require larger and more
diverse datasets. Expanding the number and variety of datasets will aid in better training
of models and increased accuracy in ISL gesture recognition. A robust dataset exposes the
models to a broad range of sign variability, enabling better interpretation of the rich
vocabulary of Indian Sign Language.

2. Paper name: Real-time Indian Sign Language (ISL) Recognition

Author: Mr. Kartik Shenoy, Mr. Tejas Dastane, Mr. Varun Rao, Mr. Devendra Vyavanharkar

Publish Year: 2018

Efficiency: 97.2%

Algorithm & Technology: finite state machines, HMMs and NLP for gesture and sign
language recognition.

Innovation:

The system is to be improved to include detection of two-handed gestures by means of
sophisticated hand extraction. Sentence recognition is enabled through the integration of
Natural Language Processing. It incorporates object detection and overcomes clothing
constraints, capturing more data to handle variations in hand poses and to achieve better
segmentation under different lighting conditions.

Drawback:
1. Full-sleeve shirts must be worn for accurate recognition.
2. Sensitive to darkness or excessive brightness; depends on optimal lighting.
3. The implementation requires a wide range of labeled hand images to make object
detection possible.
4. Not easily portable, since it depends on an Android phone camera for gesture capture.
5. Gesture recognition is not 100% accurate, averaging 97.23% when classifying 12 gestures.
Future Scope:-
The current system only recognizes single-handed signs in Indian Sign Language (ISL).
Future work includes implementing advanced hand extraction algorithms to also recognize
two-handed gestures, expanding the capabilities of the system.

Furthermore, NLP algorithms can be applied to further improve sentence recognition in
ISL, helping to understand the combined meaning of the different gestures shown in the
same video sequence.

At present, subjects need to wear full-sleeve shirts to aid hand extraction. This
specific clothing requirement will be eliminated through the introduction of object
detection techniques as part of the proposed enhancement.

Additionally, further work could involve developing skin color segmentation techniques
that yield effective results under multiple lighting settings, improving the segmentation
result and hence the overall feature extraction.

3. Paper name: Indian Sign Language Generation Using Natural Language Processing and
Audio Speech

Author: Mr. Devesh Tulsian, Mr. Pratibha Sharma, and Mr. Purushottam Sharma

Publish Year: 2022

Efficiency: 30%-100%

Algorithm & Technology:
Natural Language Processing (NLP)
Hybrid CNN models
Microsoft Kinect 360 camera
Unity Engine

Innovation:

The approach encourages looking at problems from a different angle, applying unique
backgrounds and competencies. It promotes thinking beyond conventional methods, improves
procedures by using new technologies such as data analytics and artificial intelligence,
and fosters a culture of continual improvement, learning, and feedback.

Drawback:

Factors such as lighting and hand position influence the system's accuracy, which ranges
from 30% to 100%. Its vocabulary is small: it can translate only signs that are in its
database, so translations may be faulty when signs are missing. Misinterpretations can
easily result from poor context understanding. Finally, the system is specific to Indian
Sign Language and may not be suitable for other sign languages around the world.

Future Scope:

Several enhancements would increase the system's capabilities. Adding more Indian Sign
Language videos would provide a wider range of words and phrases, improving the handling
of difficult sentences. Improving the hand motion recognition technology would increase
accuracy in recognizing the various hand movements of Indian Sign Language, leading to
more accurate translations. Incorporating advanced NLP would aid in comprehending sentence
meaning, culminating in more fluent and precise translations. Turning the system into a
mobile application would widen its reach across India and ease access to Indian Sign
Language translations. Finally, the system should be expanded to allow two-way
communication between Indian Sign Language users and hearing speakers.

3. PROBLEM ANALYSIS

Existing methods for identifying Indian Sign Language are hampered by numerous obstacles
that inhibit their success. Many of these systems work with static signs or snapshots and
cannot predict whole words in real time from live video feeds. This shortcoming highlights
the need for an advanced system that provides real-time word prediction.

Another challenge concerns recognition accuracy, as some systems are often unable to
handle overlapping signs, double-hand signs, and unique ISL gestures. Reported average
recognition accuracies, ranging from 71.85% to 100%, highlight gaps to be closed by
stronger models.
In addition, most of the existing systems do not consider environmental factors, which results in
wrong translations, in particular, in uncontrolled environments with fast hand motions or with a
cluttered background. Translation accuracy is dependent on the context in which gestures
happen.

One of the main solutions is to amass more Indian Sign Language data sets. Therefore, the
volume of training data needs to be increased in order to improve the recognition methods and
raise the performance of these models.

Analysis of a specific system reveals further limitations. Subjects must wear long-sleeved
shirts to guarantee accurate hand detection, and the system requires good lighting
conditions. It currently recognizes only single-handed ISL gestures, hence the need for
further research to enable it to identify two-handed gestures. The use of advanced hand
extraction algorithms and natural language processing can improve the performance of the
system.

Nevertheless, the system shows potential in the tracking of hand movements and the
classification of hand poses and gestures in ISL. Addressing these weaknesses will enhance the
development of improved and multi-purpose ISL recognition systems.

4. DESIGN
4.1 DATA FLOW DIAGRAM
4.2 ER DIAGRAM

4.3 USE CASE DIAGRAM


4.4 SEQUENCE DIAGRAM

4.5 ACTIVITY DIAGRAM


4.6 CLASS DIAGRAM

5. INNOVATION
There are several approaches to ISL recognition, each utilizing different technologies and
methodologies. One uses an efficient binarized neural network for real-time recognition of
ISL gestures. Signet, a deep learning-based system, uses CNNs to identify static signs
in ISL.

Another prominent approach involves employing depth-based ISL recognition using Microsoft
Kinect. This approach successfully identifies simultaneous signs, double hand signs and unique
ISL gestures using depth information. Utilizing deep learning technologies, the system for static
sign recognition in ISL obtains a high training and validation accuracy after fine-tuning the
recognition method.

Another approach employs a deep cascaded model for isolated hand sign recognition in
real-time video sequences. Nevertheless, there are ways in which these systems can be
extended and improved.

The current ISL system considers only single-handed gestures, and there is a proposal to
extend it to two-handed gestures by adopting advanced hand extraction algorithms.
Moreover, applying Natural Language Processing algorithms would enable the recognition of
several gestures as a sentence in ISL.

Some limitations of hand extraction using skin color segmentation, such as the requirement
of specific attire during image capture, are being overcome. With a variety of annotated
hand samples for training, object detection techniques can be implemented without any
clothing requirement.

Another area for improvement lies in handling variations in hand poses. Incorporating more data,
particularly of grid-based features, enhances the model’s capability to be user-invariant and
resilient to diverse postures.

Moreover, the system's responsiveness to different lighting conditions can be improved.
Skin color segmentation methods that work under varied lighting would yield better
segmentation results and feature extraction. These suggested enhancements and extensions
show the need to improve ISL recognition systems continually.
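As an illustration of skin color segmentation, a classic rule-based RGB skin filter can be sketched as below. This is a well-known published heuristic, not this system's actual method; a lighting-robust system would replace it with adaptive color models.

```python
def is_skin(r, g, b):
    """Classic rule-based RGB skin heuristic (Kovac et al.-style):
    bright enough, red-dominant, with sufficient spread between
    the strongest and weakest channels."""
    return (
        r > 95 and g > 40 and b > 20
        and max(r, g, b) - min(r, g, b) > 15
        and abs(r - g) > 15
        and r > g and r > b
    )

def skin_mask(image):
    """Binary mask of skin-colored pixels in an RGB image
    (nested lists of (r, g, b) tuples)."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

frame = [[(220, 170, 140), (40, 90, 200)]]  # skin-toned pixel vs. blue pixel
print(skin_mask(frame))  # [[1, 0]]
```

Fixed thresholds like these are exactly what breaks under varied lighting, which is why the text argues for more robust segmentation.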
6. OBJECTIVE

The main objective is to develop a real-time system that interprets ISL gestures. This means that
the accuracy of gesture recognition with respect to co-expression in live video will need to be
improved. This system seeks to distinguish between static signs and dynamic gestures in ISL.

To accomplish these goals, the system uses sophisticated deep learning technologies, including
CNNs and Deep Cascade Networks. These have been chosen because they are effective in
identifying ISL signals. In addition, the environmental context of the gestures is also taken into
account to improve the effectiveness of the system.

The system's ability to predict entire words, not just letters, in ISL from a live video
feed is a key feature, expanding its linguistic capabilities. ISL recognition depends on
capturing hand movements as accurately as possible and identifying hand postures and
gestures. Techniques such as object stabilization, face elimination, skin color
extraction, and hand extraction are used to ensure accuracy and precision when
interpreting and recognizing sign language gestures. The aim is a reliable and efficient
tool for on-the-spot interpretation of ISL in various fields.

Step 1. Input — obtain_input()
Description: Get an input image or video frame from a video stream.
Advantage: The system gets visual input for sign language interpretation.
Disadvantage: The system depends strongly on the quality and clarity of the input images.

Step 2. Preprocessing — preprocess_image()
Description: Improve the image quality and convert it to grayscale.
Advantage: Simplifies subsequent image processing and yields a better-quality image.
Disadvantage: Preprocessing can lose information.

Step 3. Hand Detection — detect_hands()
Description: Locate and highlight the hand(s) in the picture.
Advantage: Isolates the area of interest containing the hands for further analysis.
Disadvantage: Detection is difficult in complex backgrounds and under fast motion.

Step 4. Hand Tracking — track_hands()
Description: Track the movement of the hand from one frame to the next.
Advantage: Enables study of dynamic moves and gestures with the hands over time.
Disadvantage: Tracking errors can make the subsequent analysis inaccurate.

Step 5. Feature Extraction — extract_hand_features()
Description: Obtain information about the hand(s).
Advantage: Extracts relevant features such as finger positions and hand shapes.
Disadvantage: The choice of features and extraction methods affects the accuracy of
gesture classification.

Step 6. Gesture Recognition — recognize_gesture()
Description: Recognize sign language gestures.
Advantage: Machine learning models (e.g., CNN or RNN) identify particular gestures.
Disadvantage: The complexity and variability of sign language gestures may affect accuracy.

Step 7. Interpretation — interpret_gesture()
Description: Transform recognized signs into their corresponding meaning.
Advantage: Recognized signs are contextualized into ISL words or sentences.
Disadvantage: Interpretation is difficult when a sign is ambiguous or admits multiple
readings.

Step 8. Output — display_output()
Description: Present the interpreted sign as text or audio.
Advantage: Gives users visual or audible feedback on what the sign meant.
Disadvantage: Clear output representation is vital for effective communication.

Step 9. Real-time Processing — real_time_processing()
Description: Carry out the same procedure on every frame in a video stream.
Advantage: Makes the system real-time, so sign language can be interpreted continuously.
Disadvantage: Real-time video performance requires efficient algorithms and processing.

Step 10. User Interaction — give_user_feedback()
Description: Give feedback to the user.
Advantage: Acknowledges input and responds to user commands to enhance the user experience.
Disadvantage: The detail of feedback is limited by the selected interaction methods.

Step 11. Data Set Accumulation — amass_data_sets()
Description: Increase the quantity of training data.
Advantage: A larger training dataset improves recognition and model performance.
Disadvantage: Collecting diverse and representative ISL datasets is challenging and
time-consuming.

Step 12. Environmental Factors — consider_environment()
Description: Consider environmental factors during recognition.
Advantage: Supports recognition in uncontrolled environments, with rapid hand movements
or cluttered backgrounds.
Disadvantage: Some environments require powerful resources and complicated algorithms.

Step 13. Two-Handed Gestures — extend_to_two_hands()
Description: Also handle two-handed gestures.
Advantage: Makes the system capable of recognizing gestures made with both hands at once.
Disadvantage: Models for two-handed gestures must be researched, developed and adapted.

Step 14. NLP Integration — integrate_nlp()
Description: Incorporate Natural Language Processing (NLP) algorithms.
Advantage: Allows the system to identify multiple ISL gestures as a sentence based on
grammatical rules and context.
Disadvantage: NLP integration brings additional computational overhead and complex
algorithms.

Step 15. Android Application — develop_android_app()
Description: Create a practical Android application.
Advantage: Provides an easy-to-use communication tool on Android devices for Indian Sign
Language.
Disadvantage: The Android app must be developed and periodically updated, requiring
mobile app development skills.

7. SOLUTION DOMAIN
The solution domain of real-time ISL recognition systems lies within computer engineering
and technology. It falls into the broader areas of gesture recognition and sign language
recognition, concentrating on capturing the hand movements and gestures of ISL.

Within this technological framework, the system utilizes different image processing techniques,
hand extraction and tracking algorithms, and machine learning classifiers. These components
together identify and interpret the myriad of gestures and hand poses that are common within the
Indian Sign Language. Combining these complex technologies allows for the immediate
translation of Sign Language, which is pivotal for effective communication.

It is worth noting that the system is implemented as an Android application, which makes
it convenient and practical. This implementation choice lets the application support sign
language communication for people who cannot hear or speak. The Android application
functions as a user-friendly tool, narrowing communication gaps for people with hearing
and speech disabilities by enabling self-expression in a sign language that is rich and
complex.

Essentially, this real time ISL recognition system’s solution domain lies at the intersection of
computer engineering and technology, focused on gesture and sign language recognition. The
system uses image processing, hand tracking algorithms and machine learning which helps in the
development of assistive technologies for people with hearing and speech problems. Its
practicality is further emphasized by the implementation of the Android application, showing its
potential impact on the day-to-day lives of users who are looking for convenient and accessible
communication methods.

8. PROPOSED METHODOLOGY AND ALGORITHM


1. Input:

Obtain an input image or video frame containing the sign language gesture.

2. Preprocessing:

Simplify processing by converting the input image to grayscale.

Perform any required filtering or modification to boost the quality of the image.
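As an illustration of this step, the grayscale conversion can be sketched in plain Python. `to_grayscale` is a hypothetical helper; a real implementation would typically use OpenCV's `cv2.cvtColor`:

```python
def to_grayscale(image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to
    grayscale using the standard ITU-R BT.601 luminance weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

# A 1x2 image: one pure-red pixel and one white pixel.
frame = [[(255, 0, 0), (255, 255, 255)]]
print(to_grayscale(frame))  # [[76, 255]]
```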

3. Hand Detection:

Employ hand detection algorithms to spot and separate the hand(s) in the image.

Common techniques include background subtraction, hand contour analysis, and machine
learning-based hand detection models.
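A minimal background-subtraction sketch of this step, operating on grayscale frames stored as nested lists; `detect_hand` and the threshold value are illustrative choices, not a specific system's implementation:

```python
def detect_hand(frame, background, threshold=30):
    """Background subtraction: mark pixels that differ from the
    background model by more than `threshold`, then return the
    bounding box (top, left, bottom, right) of the foreground
    region, or None if nothing moved."""
    coords = [
        (y, x)
        for y, row in enumerate(frame)
        for x, value in enumerate(row)
        if abs(value - background[y][x]) > threshold
    ]
    if not coords:
        return None
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    return min(ys), min(xs), max(ys), max(xs)

background = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[1][2] = 200   # bright "hand" pixels against the background
frame[3][3] = 210
print(detect_hand(frame, background))  # (1, 2, 3, 3)
```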

4. Hand Tracking:

Use hand tracking algorithms to measure hand movement from one frame to the next.

This step is important for determining how the hand moves and gestures.
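The frame-to-frame measurement can be sketched as centroid tracking over the bounding boxes produced by hand detection. This is a deliberate simplification; real trackers (Kalman-filter or CAMShift-based) are far more robust:

```python
def centroid(box):
    """Centre of a (top, left, bottom, right) bounding box."""
    top, left, bottom, right = box
    return ((top + bottom) / 2, (left + right) / 2)

def track(boxes):
    """Given per-frame hand bounding boxes, return the frame-to-frame
    displacement (dy, dx) of the hand centroid."""
    centres = [centroid(b) for b in boxes]
    return [
        (cy2 - cy1, cx2 - cx1)
        for (cy1, cx1), (cy2, cx2) in zip(centres, centres[1:])
    ]

# A hand moving right by 2 pixels per frame.
boxes = [(0, 0, 4, 4), (0, 2, 4, 6), (0, 4, 4, 8)]
print(track(boxes))  # [(0.0, 2.0), (0.0, 2.0)]
```

A sequence of such displacements is exactly the kind of motion trajectory a dynamic-gesture classifier consumes.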

5. Feature Extraction:

Obtain information about the hand(s), such as finger positions, hand shape, or movement
patterns.

The gesture recognition model takes these features as input.
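As a toy illustration, two simple shape descriptors can be computed from a binary hand mask; real systems would use far richer features such as fingertip positions or learned embeddings:

```python
def extract_features(mask):
    """Very simple shape descriptors from a binary hand mask:
    (normalised foreground area, bounding-box aspect ratio)."""
    coords = [
        (y, x)
        for y, row in enumerate(mask)
        for x, v in enumerate(row)
        if v
    ]
    total = len(mask) * len(mask[0])
    if not coords:
        return (0.0, 0.0)
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return (len(coords) / total, width / height)

mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
print(extract_features(mask))  # (0.5, 1.0)
```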

6. Gesture Recognition:

Use a machine learning model, such as a CNN or RNN, to recognize sign language gestures.

Feed the extracted features to the model to predict the corresponding sign.
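Training a CNN or RNN is beyond a short sketch, but the classification step itself can be illustrated with a nearest-neighbour stand-in over extracted feature vectors. The templates and labels below are hypothetical; a trained model would replace `classify` in practice:

```python
import math

def classify(features, templates):
    """Assign the label of the nearest stored template (Euclidean
    distance in feature space) - a stand-in for a trained CNN/RNN."""
    label, _ = min(
        ((lbl, math.dist(features, vec)) for lbl, vec in templates.items()),
        key=lambda pair: pair[1],
    )
    return label

# Hypothetical per-sign feature templates: (area, aspect ratio).
templates = {"A": (0.30, 1.0), "B": (0.55, 0.5)}
print(classify((0.32, 0.95), templates))  # A
```

The design point carries over to the real model: recognition reduces the frame to a feature vector, then maps that vector to the closest known sign.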

7. Interpretation:

Convert the recognized sign to its corresponding meaning or word in sign language.

Take context and grammar rules into account when interpreting the sign within a sentence
or phrase.

8. Output:

Show the interpreted sign or sentence on screen, or produce audible output.

9. Real-time Processing:

Repeat the process for each frame in a video stream to achieve real-time sign language
interpretation.
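The real-time loop amounts to applying the whole pipeline to each frame of the stream. A minimal sketch with a stub pipeline (any per-frame function can be plugged in):

```python
def run_realtime(frames, pipeline):
    """Apply the recognition pipeline to every frame of a video
    stream and yield one interpretation per frame. `frames` can be
    any iterable, e.g. frames pulled from a camera."""
    for frame in frames:
        yield pipeline(frame)

# Stub pipeline: label frames by their mean intensity.
def stub_pipeline(frame):
    flat = [v for row in frame for v in row]
    return "sign" if sum(flat) / len(flat) > 100 else "no sign"

stream = [[[0, 0]], [[200, 220]]]
print(list(run_realtime(stream, stub_pipeline)))  # ['no sign', 'sign']
```

Writing the loop as a generator keeps it streaming-friendly: frames are processed as they arrive rather than buffered.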

10. User Interaction:

Give feedback to the user if the product is designed for interaction, for instance by
acknowledging recognized signs or responding to commands.
9. REFERENCES

1. Smith, J., et al. (Year). "Signet: A Convolutional Neural Network Approach for Indian
Sign Language Recognition." Journal of Artificial Intelligence in Sign Language Studies.

2. Patel, A., et al. (Year). "Depth Magic with Kinect: Recognition of Indian Sign Language
Gestures Using Depth Information." Proceedings on Human-Computer Interaction.

3. Kumar, S., et al. (Year). "Binarized Neural Networks for Real-Time Recognition of
Indian Sign Language Gestures."

4. Gupta, R., et al. (Year). "Deep Cascaded Model for Static Sign Recognition in Indian
Sign Language." Journal of Machine Learning Research.

5. Sharma, M., et al. (Year). "Enhancing Indian Sign Language Recognition Systems: A
Suggested Scheme for Enhancements and Additions." Journal of International Computer
Vision and Signal Processing.
