SUBJECT: Technical and Business Writing
Systematic Literature Review
Submitted To: Miss Asma Abubakar
Section: 3A
Submitted By:
Behroze Badar (BSDS-020)
Junaid Ahmad (BSDS-021)
Muhammad Subhan Butt (BSAI-012)
Zainab Afzal (BSAI-027)
Systematic Literature Review of Sign Language Recognition System
Authors: Junaid Ahmad, Behroze Badar, Muhammad Subhan & Zainab Afzal (2023)
Abstract:
This systematic literature review provides a comprehensive overview of the various techniques
and technologies used for the recognition of American Sign Language (ASL), Arabic Sign
Language (ArSL), Pakistani Sign Language (PSL), and Indian Sign Language (ISL). Researchers
have explored methods such as 3-D Convolutional Neural Networks (CNN), Support Vector
Machine (SVM), depth-based feature extraction, and machine translation models to accurately
identify sign language gestures and enhance accessibility for the hearing and speech-impaired
community. The reported recognition rates range from 71.85% to 99.99%, showcasing the
effectiveness of the different systems and methods employed. The review emphasizes the
potential for further advancements in sign language recognition technology to enhance
communication and understanding for deaf individuals.
3.0 Introduction
Most people who are unable to hear use sign language, a visual-gestural way of communicating.
Examples of sign languages with unique grammar and syntax are Arabic Sign Language (ArSL),
Indian Sign Language (ISL), Pakistani Sign Language (PSL), and American Sign Language
(ASL), etc. Sign languages are recognized as complete languages with a linguistic structure
because they use body language, facial expressions, and hand shapes to communicate a message.
Some words or names can be finger-spelled using hand alphabets. Sign languages vary
geographically and between individuals within the same nation. Deaf culture relies heavily on
sign language to help create a distinct sense of cultural identity. It is used in deaf education
where sign language is taught as a first language in schools and other programs. Sign language
interpreters enable deaf people to communicate with hearing people in a variety of contexts.
Advocacy efforts seek recognition of sign language as a valid and necessary form of
communication and aim to protect the linguistic rights of deaf people. The goal of developing sign
language recognition systems is to enhance communication and accessibility for the deaf
community. These systems often use technologies such as computer vision and machine
learning.
3.1 American Sign Language
American Sign Language is a complex visual and gestural means of communication widely used
in the United States and parts of Canada by the deaf and hard-of-hearing communities. The language
boasts a unique grammar and syntax distinct from English. ASL includes a manual alphabet for
spelling and demonstrates some dialects. ASL forms an important part of the cultural identity of
the Deaf community; it is central to Deaf education and to the work of interpreters who
facilitate communication in a variety of settings. The recognition of ASL as a genuine language
underpins efforts to protect the rights of deaf people. The information below provides
some of the sign language recognition techniques that have been reported for ASL in the past
decade.
Fig 1 American Sign Language [41]
3.1.1 American Sign Language Recognition Techniques
K. Kumar and S. Sharma [1] discussed dynamic hand gesture recognition in American Sign
Language (ASL) using 3-D Convolutional Neural Networks (CNN). The proposed model
outperforms existing models in terms of accuracy, repeatability, and f-measure. The model is
trained on the Boston ASL Lexicon Video Dataset and accurately classifies 100 words. The use
of 3-D CNN enables the analysis of multimodal information and improves training efficiency.
The proposed model holds promise for real-time applications and can be used for signer- and
environment-independent recognition.
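To illustrate the kind of architecture this entails, the sketch below builds a small 3-D CNN that classifies fixed-length sign video clips into word classes. It is a minimal, hypothetical Keras example: the clip length, frame size, layer widths, and 100-word output are assumptions for illustration, not the configuration reported in [1].

```python
# Minimal 3-D CNN sketch for word-level sign video classification (illustrative only;
# layer sizes, clip length, and the 100-word output are assumptions, not the authors' exact model).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_WORDS = 100                       # e.g., 100 ASL words as in [1]
FRAMES, H, W, C = 16, 112, 112, 3     # assumed clip length and frame size

model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, C)),
    layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu", padding="same"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),       # pool spatially first, keep temporal depth
    layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu", padding="same"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),       # now downsample time as well
    layers.Conv3D(128, kernel_size=(3, 3, 3), activation="relu", padding="same"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_WORDS, activation="softmax"),  # one class per sign word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```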
V. Jain et al [2] focused on American Sign Language (ASL) recognition using machine learning
techniques such as Support Vector Machine (SVM) and Convolutional Neural Networks (CNN).
The goal is to accurately identify ASL signs and help the hearing and speech-impaired
community. The authors extract features from the dataset and apply various pre-processing
techniques before training the models. The SVM model achieves an accuracy of 81.49% using
the 'Poly' kernel. The CNN model with one layer and different filter sizes achieves an accuracy
of 97.344% with an optimal filter size of 8x8. The two-layer CNN model achieves an accuracy of
98.581% with an optimal filter size of 8x8. The results show that CNN models outperform SVM
in ASL recognition. Future work could focus on improving CNN models by allowing them to
learn variable-size parameters and tune hyperparameters.
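For the SVM side of this comparison, the following sketch trains a polynomial-kernel classifier on flattened sign images with scikit-learn. The placeholder data, image size, and kernel degree are assumptions; only the choice of a 'poly' kernel follows the result reported in [2].

```python
# Illustrative sketch of an SVM baseline with a polynomial kernel on flattened ASL images.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n_samples, 64*64) flattened grayscale sign images, y: integer class labels (assumed shapes)
rng = np.random.default_rng(0)
X = rng.random((500, 64 * 64)).astype(np.float32)   # placeholder data
y = rng.integers(0, 26, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 'poly' kernel, as reported to work best among the SVM variants in [2]
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
svm_clf.fit(X_train, y_train)
print("Polynomial-kernel SVM test accuracy:", svm_clf.score(X_test, y_test))
```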
A. A. Barbhuiya et al [3] discussed the use of CNN-based character extraction and classification
for sign language recognition. The authors propose a system flowchart that includes data
augmentation, image resizing, and preprocessing. They represent the architecture of VGG16, the
CNN model used for feature extraction. The system achieves an overall accuracy of 99.82%
when splitting the training and test data. Recognition results are provided for each sign language
gesture, with most gestures achieving 100% accuracy. However, some gestures have slightly
lower accuracy. The article also highlights the system's incorrect predictions. The performance of
modified pre-trained AlexNet and VGG16 models is compared, with VGG16 outperforming
AlexNet. The article concludes with a graph showing the system's accuracy for each gesture.
Overall, the paper presents a promising approach to sign language recognition using CNN-based
methods.
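A minimal sketch of the VGG16-based transfer learning described here is shown below, assuming ImageNet weights are used as a frozen feature extractor with a small classification head; the head, input size, and 36-class output are illustrative assumptions rather than the exact model in [3].

```python
# Sketch of VGG16-based feature extraction for static sign images, in the spirit of [3].
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # use the pre-trained network purely as a feature extractor

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(36, activation="softmax"), # assumed: 26 letters + 10 digits
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```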
Sevgi Z. Gurbuz et al [4] discussed the use of RF sensors for American Sign Language (ASL)
recognition. The authors highlight the challenges the Deaf community faces in accessing
technologies designed for hearing individuals. They propose the use of RF sensors to capture
micro-Doppler signatures of ASL signs and demonstrate the effectiveness of this approach
through experiments. The paper presents an RF network test setup and the extraction of hand-
crafted features from micro-Doppler signatures. The authors achieved an accuracy of 95% for the
recognition of 5 ASL signs and 72.5% for the recognition of 20 ASL signs using a
multi-frequency RF sensor network and feature selection algorithms.
T. W. Chong and B. G. Lee [5] presented an American Sign Language (ASL) recognition system
using the Leap Motion Controller. The study involved 12 volunteer subjects who performed ASL
gestures including 26 letters and 10 numbers. The system collected hand and finger gesture data
using an LMC device connected to a desktop computer. The collected data were then processed
and features were extracted. Two classifiers, SVM and DNN, were compared for ASL
recognition. The DNN classifier achieved an accuracy rate of 90.58% for the 26-class ASL
recognition system and 85.65% for the 36-class ASL recognition system, outperforming the
SVM classifier. Future work includes extending the system to recognize words and sentences
and other languages.
W. Aly et al [6] presented a user-independent recognition system for the American Sign
Language (ASL) alphabet using depth images from the Microsoft Kinect depth sensor. The
proposed system overcomes problems in hand segmentation and appearance variations among
signers. It uses depth information to withstand lighting and background changes. The system
uses PCANet (Principal Component Analysis Network) for feature learning and Linear Support
Vector Machine (SVM) for classification. The proposed method achieves an average accuracy of
88.7% when using the leave-one-out evaluation strategy, outperforming other state-of-the-art
methods. The article also provides comparisons with other approaches and discusses
performance for different ASL users and alphabets.
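The leave-one-out (signer-independent) evaluation protocol mentioned above can be sketched as follows, assuming pre-computed feature vectors (for example, PCANet features) and a linear SVM; the placeholder data, feature dimensionality, and number of signers are invented for illustration.

```python
# Sketch of a leave-one-subject-out evaluation with a linear SVM on pre-computed features.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((300, 128))               # placeholder feature vectors (e.g., PCANet features)
y = rng.integers(0, 24, size=300)        # alphabet labels
subjects = rng.integers(0, 5, size=300)  # which signer produced each sample

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = LinearSVC(max_iter=5000)
    clf.fit(X[train_idx], y[train_idx])                      # train on all signers but one
    accuracies.append(clf.score(X[test_idx], y[test_idx]))   # test on the held-out signer

print("Mean leave-one-subject-out accuracy:", np.mean(accuracies))
```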
C.K.M. Lee et al [7] discussed the development of a game-based American Sign Language
(ASL) recognition system using a modular approach. The system uses a Leap Motion controller
to detect hand gestures in real-time. The study recruited 100 participants to collect a dataset of 26
ASL alphabet gestures. The proposed model achieved an overall accuracy of 91.8% through 5-
fold cross-validation. The model outperformed other methods such as LSTM, SVM, and RNN in
ASL classification with an average accuracy of 99.44%. The results showed high accuracy and
specificity for most alphabetic classes, indicating the ability of the model to correctly identify
instances. The study highlights the benefits of using the Leap Motion controller for ASL
recognition, including fast gesture detection and real-time tracking.
R. Ramalingame et al [8] discussed the development of a wearable smart band for American
Sign Language (ASL) recognition using nanocomposite pressure sensors. The band consists of 8
sensors placed on a flexible, adhesive textile material and is connected to a data acquisition unit with
the possibility of wireless communication. The sensors have high sensitivity and stability, which
allows them to monitor muscle contractions and relaxations in the arm. The band was validated
with 10 subjects performing ASL gestures for the digits 0 to 9, and an extreme learning machine
algorithm achieved an overall gesture recognition accuracy of 93%. The paper highlights the
potential of the smart band in recognizing hand gesture languages with high accuracy, making
it a promising tool for human-computer interaction.
S. Kumar Singh and A. Chaturvedi [9] presented a machine-learning pipeline for ASL gesture
recognition using sEMG signals, achieving high classification accuracies of 99.99% (ASL-10)
and 99.91% (ASL-24) with varying sEMG channels. It involved evaluating approximately 450
features for each sEMG channel and using ensemble feature selection. Fast Fourier transform
coefficients were found to dominate the selected feature set. The pipeline was validated
against the Ninapro 5 database and achieved results comparable to state-of-the-art methods. The
paper includes a detailed methodology, experimental results, a discussion of limitations, and
future scope. Statistical analysis and 5x2 cross-validation t-tests were performed to verify the
results.
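Since Fourier coefficients dominated the selected feature set, the sketch below shows one plausible way to compute per-channel FFT-magnitude features from an sEMG window; the window length, number of channels, and number of retained coefficients are assumptions, not the authors' pipeline settings.

```python
# Illustrative per-channel FFT-coefficient features for sEMG windows (assumed parameters).
import numpy as np

def fft_features(window, n_coeffs=16):
    """Magnitude of the first n_coeffs FFT coefficients of one sEMG channel window."""
    spectrum = np.fft.rfft(window)
    return np.abs(spectrum[:n_coeffs])

def extract_features(segment):
    """segment: (n_samples, n_channels) sEMG window -> concatenated per-channel features."""
    return np.concatenate([fft_features(segment[:, ch]) for ch in range(segment.shape[1])])

# Example: a 200-sample window from 8 sEMG channels (placeholder data)
rng = np.random.default_rng(0)
segment = rng.standard_normal((200, 8))
features = extract_features(segment)
print(features.shape)   # (8 channels * 16 coefficients,) = (128,)
```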
A. A. Abdulhusseina and F. A. Raheem [10] presented a study on the recognition of static letters
by hand gestures in American Sign Language (ASL) using deep learning techniques. The authors
achieved a classification accuracy of 99.3% and a low error rate of 0.0002 using convolutional
neural network (CNN) and edge detection methods. The CNN structure consists of convolution,
nonlinearity, max pooling, fully connected, and softmax classifier layers. Simulation results
show successful recognition of ASL letters. The study contributes to the field of ASL recognition
and deep learning applications for image processing.
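As a rough illustration of combining edge detection with CNN classification, the sketch below uses OpenCV's Canny detector to turn a camera frame into an edge map sized for a CNN input; the thresholds and target size are assumptions, and the paper's exact preprocessing may differ.

```python
# Sketch of an edge-detection preprocessing step before CNN classification (assumed thresholds).
import cv2
import numpy as np

def preprocess_for_cnn(bgr_image, size=(64, 64)):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)    # binary edge map of the hand shape
    edges = cv2.resize(edges, size)
    return edges.astype(np.float32)[..., np.newaxis] / 255.0  # (64, 64, 1) input for a CNN

frame = np.zeros((480, 640, 3), dtype=np.uint8)               # placeholder camera frame
x = preprocess_for_cnn(frame)
print(x.shape)
```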
3.1.2 Segmented Conclusion:
The literature review provides a comprehensive overview of the various techniques and
technologies used for the recognition of American Sign Language (ASL). Researchers have
explored methods such as 3-D Convolutional Neural Networks (CNN), Support Vector Machine
(SVM), RF sensors, Leap Motion Controllers, depth sensors, and wearable nanocomposite
sensors to recognize ASL gestures. The reported recognition rates are quite promising, with
some techniques achieving accuracy rates of up to 99.99%. These advancements have the
potential to significantly benefit the deaf and hard-of-hearing communities by facilitating better
communication and access to technology. The comparison of different techniques indicates that
CNN models have generally outperformed other methods in ASL recognition. This suggests that
CNN-based approaches hold promise for the future development of ASL recognition systems.
Overall, the research presented in the document demonstrates the potential for technology to
enhance the recognition of ASL, ultimately contributing to the empowerment and inclusion of
the Deaf community.
Table 1 Summarized review of ASL recognition systems
Author | Acquisition Mode | Technique Used | Recognition Rate
K. Kumar and S. Sharma [1] | Camera | 3D CNN | High accuracy
V. Jain et al [2] | Camera | SVM, CNN | SVM: 81.49%; CNN: 97.344% - 98.581%
A. A. Barbhuiya et al [3] | Camera | VGG16 | 99.82%
Sevgi Z. Gurbuz et al [4] | RF sensors | Recognition using micro-Doppler signatures | 95% (5 signs), 72.5% (20 signs)
T. W. Chong and B. G. Lee [5] | Leap Motion Controller | ASL recognition using Leap Motion Controller | DNN: 90.58% (26-class), 85.65% (36-class)
W. Aly et al [6] | Depth sensor | PCANet, SVM for ASL alphabet recognition | 88.7%
C.K.M. Lee et al [7] | Leap Motion Controller | Game-based ASL recognition | 91.8% (5-fold cross-validation), 99.44% (average)
R. Ramalingame et al [8] | Wearable nanocomposite sensors | Smart band for ASL recognition | 93% overall gesture recognition accuracy
S. Kumar Singh and A. Chaturvedi [9] | sEMG signals | Machine learning pipeline for ASL gesture recognition | 99.99% (ASL-10), 99.91% (ASL-24)
A. A. Abdulhusseina and F. A. Raheem [10] | Deep learning | CNN and edge detection for ASL recognition | 99.3% classification accuracy
3.2 Arabic Sign Language
Arabic Sign Language includes several sign languages used by deaf people in various Arab
states, for example, Jordanian Sign Language and Egyptian Sign Language. Spoken varieties of
Arabic have contributed significantly to the development of distinct vocabularies and structures in
these languages. Sign language is increasingly being integrated into deaf education as awareness
of its value grows. Arab nations have varying levels of recognition and valuing of sign
languages. The information below provides some of the sign language recognition techniques
that have been reported for ArSL in the last decade.
Fig 2 Arabic Sign Language [42]
3.2.1 Arabic Sign Language Recognition Techniques
Sidig et al [11] presented an Arabic Sign Language recognition system using optical flow-based
features and hidden Markov models (HMM). The system includes an algorithm for segmenting
videos in sign language into sequences of still images. Four different feature extraction
techniques are evaluated: Modified Fourier Transform (MFT), Local Binary Pattern (LBP),
Histogram of Oriented Gradients (HOG), and Combination of HOG and Optical Flow Histogram
(HOG-HOF). The results show that the MFT features achieve the highest accuracy rate of
99.11%. The proposed system aims to remove communication barriers between individuals who
understand spoken language and those who understand sign language.
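The general optical-flow-plus-HMM idea can be sketched as follows: per-frame motion statistics are extracted with dense optical flow and each sign class is modelled by its own HMM, with classification by maximum likelihood. The feature choice, HMM size, and training data below are illustrative assumptions, not the configuration used in [11].

```python
# Sketch of per-frame optical-flow features classified with one HMM per sign class.
import numpy as np
import cv2
from hmmlearn.hmm import GaussianHMM

def flow_features(frames):
    """frames: list of grayscale images -> (T-1, 2) mean flow magnitude and angle per frame."""
    feats = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        feats.append([mag.mean(), ang.mean()])
    return np.array(feats)

# One HMM per sign class, trained on that class's feature sequences (placeholder data here).
rng = np.random.default_rng(0)
models = {}
for sign in ["hello", "thanks"]:
    seqs = [rng.standard_normal((20, 2)) for _ in range(5)]
    hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    hmm.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
    models[sign] = hmm

test_seq = rng.standard_normal((20, 2))
prediction = max(models, key=lambda s: models[s].score(test_seq))   # maximum-likelihood class
print("Predicted sign:", prediction)
```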
Boukdir et al [12] proposed a new approach to recognizing Arabic Sign Language, specifically
Moroccan Sign Language (MoSL), using deep learning models. This approach uses a 2D
convolutional recurrent neural network (2DCRNN) and a 3D convolutional neural network
(3DCNN) to extract features and classify MoSL video sequences. The proposed approach
achieves high accuracy levels of 92% for 2DCRNN and 99% for 3DCNN. The study contributes
to the development of a deep network framework for MoSL word classification and recognition
based on isolated videos. This approach offers a context for testing variations in hand gestures
and accounts for variability in hand position, facial expression, and body parts. The results show
the effectiveness of deep learning models in sign language recognition.
Ibrahim et al [13] presented an Automatic Arabic Sign Language Recognition System (ArSLRS)
that converts isolated Arabic word signs into text. The system consists of four main
stages: hand segmentation, tracking, feature extraction, and classification. A dynamic skin
detector based on face color is used for hand segmentation, and the proposed skin patch tracking
technique is used for hand identification and tracking. The system achieves a recognition rate of
97% in signer-independent mode and has a robust occlusion solution technique. The proposed
system outperforms other methods in terms of accuracy and does not require multiple cameras or
complex calculations.
Qaroush et al [14] presented a wearable system for Arabic Sign Language recognition using
Inertial Measurement Unit (IMU) sensors. The system uses six IMU sensors, five of which are
located on the fingers and one on the back of the hand. It uses an adaptive segmentation
technique to identify the start and end of each gesture and performs feature-based fusion on
accelerometer and gyroscope data. The system uses supervised machine learning algorithms to
recognize 28 isolated Arabic alphabets with high accuracy. The proposed system is cheap,
convenient, and achieves better results compared to other related systems. Future work includes
improving glove design, preprocessing techniques, feature extraction, expanding the dataset, and
exploring sequence-based classification methods. Overall, the system shows promise in the
efficient recognition of Arabic Sign Language.
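Feature-level fusion of accelerometer and gyroscope data, as described for this glove, can be sketched as concatenating per-modality statistics from a segmented gesture window; the specific statistics and window length below are assumptions.

```python
# Sketch of feature-level fusion of accelerometer and gyroscope windows (assumed statistics).
import numpy as np

def window_stats(signal):
    """signal: (n_samples, 3) one IMU modality -> simple per-axis statistics."""
    return np.concatenate([signal.mean(axis=0), signal.std(axis=0),
                           signal.min(axis=0), signal.max(axis=0)])

def fuse_features(accel, gyro):
    """Feature-level fusion: concatenate per-modality feature vectors into one vector."""
    return np.concatenate([window_stats(accel), window_stats(gyro)])

rng = np.random.default_rng(0)
accel = rng.standard_normal((100, 3))    # placeholder 3-axis accelerometer window
gyro = rng.standard_normal((100, 3))     # placeholder 3-axis gyroscope window
print(fuse_features(accel, gyro).shape)  # (24,) fused feature vector fed to the classifier
```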
Hisham et al [15] discussed the challenges faced by individuals with hearing loss in
communicating and accessing education and daily activities. It focuses on Arabic Sign Language
(ArSL) recognition using the Leap Motion Controller and Latte Panda. The recognition system
uses KNN and SVM machine learning algorithms and uses Ada-Boosting to increase accuracy.
The proposed model achieves a high recognition rate for both one-handed and two-handed
gestures. The article also highlights challenges specific to ArSL, such as the lack of available
resources and differences in sign language across different cultures and dialects. The model is
implemented on the Latte Panda board to increase reliability and mobility. Future work includes
improving overall accuracy and full sentence recognition.
S. Aly and W. Aly [16] presented a new framework for signer-independent sign language
recognition. The framework consists of three modules: hand semantic segmentation using
DeepLabv3+, hand shape feature extraction using a convolutional self-organizing map (CSOM),
and sequence classification using a deep bidirectional LSTM network. The proposed framework
is evaluated using a real Arabic Sign Language database and achieves an average accuracy of
89.5% using DeepLabv3+ hand semantic segmentation. The results show that hand
segmentation is essential to improve accuracy and the proposed framework outperforms state-of-
the-art methods. The framework can be extended to address continuous sign language
recognition problems for Arabic and other languages.
Deriche et al [17] present a novel Arabic Sign Language recognition system using a dual LMC
setup and machine learning techniques. Data collection included 100 different sign words
performed by two adult signers, resulting in a data set of 2000 samples. Features extracted from
the data were used in classification algorithms such as Gaussian Mixture Models (GMM) and
Fisher's Linear Discriminant Analysis (LDA). The performance of the system was evaluated
using different scenarios and the results showed excellent sign recognition accuracy. The
proposed system combines information from the two LMCs using Dempster-Shafer evidence theory.
Overall, the system achieved high accuracy in sign language gesture recognition, even with a
large vocabulary database.
Hamzah Luqman and El-Sayed [18] presented a new multimodal Arabic Sign Language (ArSL)
database that integrates manual and non-manual gestures. The database consists of 6748 video
demonstrations of 50 signs performed by four signers, recorded using Kinect V2 sensors.
The authors propose a hybrid model that combines manual and non-manual gestures for sign
language recognition and evaluate its effectiveness using deep learning techniques. Combining
spatial and temporal features from different modalities improves recognition accuracy by 3.6%
compared to using hand gestures alone. The article also reviews existing datasets and sign
language recognition techniques. The proposed database and pilot study provide a valuable
resource to further advance ArSL recognition.
Hamzah Luqman and Sabri A. [19] discussed the automatic translation of Arabic text into
Arabic Sign Language (ArSL). The authors propose a rule-based machine translation system that
performs lexical, syntactic, and semantic analyses of Arabic sentences to generate their ArSL
equivalent. They also developed a parallel corpus of 600 Arabic sentences translated into ArSL
by expert signers. The article presents the differences between Arabic and ArSL and describes
the gloss notation system used for ArSL transcription. System architecture and experimental
results are also discussed. Manual evaluation of the translation system shows that it provides a
good translation of approximately 82% of the sentences.
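A toy sketch of the rule-based idea is given below: a part-of-speech-tagged sentence is reordered and stripped of function words to produce a gloss sequence. The rules, tags, and example sentence are invented for illustration (and shown in English for readability); they are not the grammar developed in [19].

```python
# Toy rule-based sketch: tagged sentence -> sign-language gloss sequence (invented rules).
def sentence_to_gloss(tokens_with_pos):
    """tokens_with_pos: list of (token, POS) pairs in source order -> list of gloss tokens."""
    drop_pos = {"PREP", "DET", "PART"}                   # sign languages often omit function words
    content = [(tok, pos) for tok, pos in tokens_with_pos if pos not in drop_pos]
    # Example reordering rule: move the verb to the end to approximate a gloss word order.
    verbs = [tok for tok, pos in content if pos == "VERB"]
    others = [tok for tok, pos in content if pos != "VERB"]
    return [tok.upper() for tok in others + verbs]

sentence = [("the", "DET"), ("boy", "NOUN"), ("reads", "VERB"), ("a", "DET"), ("book", "NOUN")]
print(sentence_to_gloss(sentence))   # ['BOY', 'BOOK', 'READS']
```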
Hassan et al [20] discuss Arabic Sign Language (ArSL) recognition using different data
collection approaches and classification techniques. Two sensor-based datasets, collected using
motion trackers and data gloves, and a vision-based dataset, collected using a camera, are used
for evaluation. The results show that sensor-based datasets achieve higher recognition rates
compared to vision-based datasets. A modified k-nearest Neighbor (MKNN) algorithm and two
Hidden Markov Model (HMM) toolkits, RASR and GT2K, are used for classification. The
results show that RASR outperforms GT2K in word and sentence recognition speed. The MKNN
algorithm achieves the best rate of sentence recognition. Overall, the study highlights the
potential of motion trackers for sign language recognition.
3.2.2 Segmented Conclusion:
Based on the literature review, it is evident that there have been significant advancements in the
recognition of Arabic Sign Language (ArSL) using various techniques and technologies.
Researchers have employed methods such as optical flow-based features, deep learning models,
IMU sensors, machine learning algorithms, and motion trackers to achieve high recognition rates
for ArSL. The research presented in the document demonstrates high accuracy rates in
recognizing Arabic Sign Language (ArSL) using various techniques and technologies. The
reported recognition rates range from 82% to 99.11%, showcasing the effectiveness of the
different systems and methods employed. The use of advanced technologies like deep learning
and machine learning has shown promise in effectively recognizing ArSL and future work is
focused on improving accuracy, expanding datasets, and exploring sequence-based classification
methods. Overall, the research presented in the document highlights the potential for further
advancements in ArSL recognition and the importance of integrating sign language into deaf
education.
Table 2 Summarized review of ArSL recognition systems
Author | Acquisition Mode | Technique Used | Recognition Rate
Sidig et al [11] | Camera | MFT | 99.11%
Boukdir et al [12] | Camera | 2DCRNN and 3DCNN | 92% (2DCRNN), 99% (3DCNN)
Ibrahim et al [13] | Camera | Dynamic skin detection and tracking technique | 97% (signer-independent)
Qaroush et al [14] | Glove | IMU | 98.6% (user-dependent), 96% (user-independent)
Hisham et al [15] | Leap Motion Controller | KNN, SVM, Ada-Boosting | 92.3% (single-hand), 93% (double-hand)
S. Aly and W. Aly [16] | Camera | DeepLabv3+, CSOM, Bi-LSTM network | 89.5% (DeepLabv3+ hand semantic segmentation)
Deriche et al [17] | Dual Leap Motion Controllers | Dual LMC, GMM, LDA, Dempster-Shafer theory | 92%
Hamzah Luqman and El-Sayed [18] | Kinect V2 sensors | Hybrid model, deep learning techniques | 95.1%
Hamzah Luqman and Sabri A. [19] | Machine translation | Rule-based machine translation | 82%
Hassan et al [20] | Camera | MKNN and HMM | 89.7%
3.3 Pakistani Sign Language
The most commonly used sign language in Pakistan is Pakistani Sign Language (PSL). It is a
visual-gestural language with grammar and syntax distinct from those of Urdu and English.
PSL has become an integral part of communication and can be used in a variety of settings such
as education and everyday conversation. Despite regional differences, PSL unites deaf people
who speak different languages. The information below provides some of the sign language
recognition techniques that have been reported for PSL in the last decade.
Fig 3 Pakistani Sign Language [43]
3.3.1 Pakistani Sign Language Recognition Techniques
N. Sabir et al [21] proposed a machine translation model to convert English sentences into
Pakistani Sign Language (PSL) for the benefit of the deaf community in Pakistan. It addresses
issues such as the absence of an extensive sentence-level corpus for PSL and the lack of
linguistic information and grammatical rules for PSL. The model uses English as the source
language due to its language resources, and the final product is a PSL avatar. The research
includes designing a systematic approach, generating a dataset, defining grammar, and
developing a translation model. The translation system is evaluated by automatic and manual
methods with an accuracy of 95%. Future directions include improving translation accuracy for
complex sentences and incorporating deep learning approaches for generalization. The article
aims to integrate the deaf community into society by providing a tool for understanding the
English language through PSL.
Saqlain Shah et al [22] presented a technique for categorizing Pakistani Sign Language (PSL)
alphabets based on their shape, visibility, and orientation. The proposed technique uses statistical
properties extracted from uniform local binary pattern (LBP) histograms of PSL characters.
Support vector machines (SVM) are used for classification. The technique is validated using a
dataset of 3414 PSL characters and achieves an overall accuracy of 77.18%. The results show
that the proposed technique effectively categorizes the PSL alphabet.
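The uniform-LBP-histogram-plus-SVM pipeline can be sketched as follows with scikit-image and scikit-learn; the LBP radius, number of sampling points, kernel, and placeholder images are assumptions rather than the settings used in [22].

```python
# Sketch of uniform LBP histogram features classified with an SVM (assumed parameters).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_image, points=8, radius=1):
    lbp = local_binary_pattern(gray_image, P=points, R=radius, method="uniform")
    n_bins = points + 2                                 # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64)).astype(np.uint8)   # placeholder sign images
labels = rng.integers(0, 37, size=100)                               # placeholder PSL classes
X = np.array([lbp_histogram(img) for img in images])

clf = SVC(kernel="rbf").fit(X, labels)
print("Training accuracy on placeholder data:", clf.score(X, labels))
```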
N. Raziq and S. Latif [23] designed a gesture-based recognition system for Pakistani Sign
Language (PSL) using a Leap Motion controller. The system consists of two modules: a training
module and a communication module. The training module uses motion data from the Leap
Motion controller to train the system for PSL. In contrast, the communication module acquires
motion data applies a correlation algorithm for marker detection and recognition, and converts it
into text. The system aims to provide an effective and cost-effective solution to the
communication problem faced by deaf and mute children. Experimental results show an average
recognition rate of 92.5% for the six PSL alphabets, with the potential to recognize all 26 letters.
M. Raees et al [24] presented a new algorithm for recognizing alphabets and digits in Pakistani
Sign Language (PSL) using image-based analysis. The system dynamically identifies hand
gestures and recognizes characters based on finger position and thumb visibility. The algorithm
achieved a satisfactory accuracy level of 84.2% when evaluated with 180 digits and 240
alphabets. The proposed method includes core-kernel extraction, edge detection, thumb position
template matching, finger discrimination, group detection, character recognition, and audio-
visual output. The efficiency, accuracy, and usability of the system were compared with other
state-of-the-art character recognition systems. The study makes a significant contribution to the
field of sign language recognition and has potential for future applications in other sign
languages.
Uzma F. et al [25] presented a multi-layer neural machine translation model based on RNN
designed for English to Pakistani Sign Language (PSL) translation. The research addresses the
need for efficient translation of natural language text into sign language, specifically for PSL.
The study compares the proposed model with existing translation approaches and evaluates its
performance using BLEU scores and Word Error Rate. The results show that the proposed model
achieves a BLEU-4 score of 0.51 and a WER score of 0.17, outperforming other methods. The
article further discusses the development of a web application for translation from English to
PSL and evaluates the comprehensibility of the translation system. Overall, the research aims to
contribute to the development of sign language translation technology, especially PSL.
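The evaluation metrics reported here can be reproduced in outline as shown below, assuming NLTK's sentence-level BLEU and a simple word-level edit-distance WER; the reference and hypothesis glosses are invented examples.

```python
# Sketch of BLEU-4 and Word Error Rate evaluation for a translation hypothesis (invented data).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

reference = "YOU NAME WHAT"        # invented gloss reference
hypothesis = "YOUR NAME WHAT"      # invented model output
bleu4 = sentence_bleu([reference.split()], hypothesis.split(),
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print("BLEU-4:", round(bleu4, 3), "WER:", round(word_error_rate(reference, hypothesis), 3))
```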
Farman S. et al [26] presented a novel approach to recognizing Pakistani Sign Language (PSL)
using vision-based features and multi-kernel learning in Support Vector Machines (SVM). The
proposed technique uses bare-hand images, extracts features such as HOG, EOH, LBP, and
SURF, and classifies them using SVM with three different kernel functions. The results show
that the linear kernel outperforms the Gaussian and polynomial kernels for most feature sets. The
proposed methodology achieves a recognition accuracy of 91.93% and shows promising results
for PSL recognition. This approach is user-friendly, cost-effective, and has the potential to
improve communication for the deaf community.
A. Dewani et al [27] presented a web-based e-learning system for the hearing-impaired
community in Pakistan. The system focuses on teaching and translating Pakistani Sign Language
(PSL) to overcome communication barriers. It aims to provide an educational platform for deaf
individuals and help them develop the language skills necessary for academic advancement. The
system includes a feedback mechanism for continuous improvement. The system implementation
includes a front-end module, a database module, and a translation engine. The system was
evaluated using a web-based evaluation framework and the results showed positive user
satisfaction. Future work includes expanding the system to translate English sentences and
incorporating video archives.
M. Wasim et al [28] presented a two-way communication system for hearing-impaired
individuals using Pakistani Sign Language (PSL). The goal of the system is to bridge the
communication gap between hearing-impaired and normal individuals. It allows a normal person
to enter text or voice, which is then converted into hand gestures based on PSL. On the other
hand, the gestures of hearing-impaired individuals are recognized and converted into
corresponding text or voice. The system was tested on 100 deaf people and achieved high
accuracy in recognizing gestures and converting them to text/voice. The proposed system has the
potential to facilitate communication between hearing-impaired and normal individuals, thereby
making hearing-impaired individuals an integral part of society.
H. M. Hamza and Aamir W. [29] discussed the challenges of communicating with the hearing
impaired and the importance of Sign Language Recognition (SLR) to bridge the gap. It
highlights the shift from direct-measurement (sensor-based) techniques to vision-based approaches for SLR.
authors propose a Pakistani Sign Language Recognition (PSLR) pipeline using deep learning
models, namely C3D, I3D, and TSM. Due to the limited data set, an augmentation unit is
incorporated to generate more training data. Experimental results show that C3D and I3D
achieved better accuracy, while TSM did not. C3D achieved 66.67% accuracy and I3D achieved
77.50%, which are promising results. However, TSM fell short with 33.75% accuracy. The study
also compares the recognition of original signs with signs not included in the dataset.
The authors emphasize the importance of their proposed PSLR system and its potential impact
on facilitating communication for the hearing impaired.
M. Shaheer et al [30] presented a vision-based system for recognizing Pakistani Sign Language
(PSL) alphabets using Bag-of-Words (BoW) and Support Vector Machine (SVM) techniques.
The study aims to create a dataset of static and dynamic PSL alphabets with uniform background
and lighting conditions. The data collection protocol involved native PSL signers performing
these signs in front of a camera with a black background. The collected images and videos were
then used for feature extraction and classification. The proposed method achieved a high
accuracy of 97.80% for static PSL alphabets. The study compares its results with previous
studies on static and dynamic sign language recognition systems.
3.3.2 Segmented Conclusion:
The literature review of Pakistani Sign Language (PSL) recognition techniques showcases
various approaches such as machine translation, vision-based systems, and deep learning models.
These techniques aim to bridge the communication gap for the hearing-impaired community and
integrate them into society. The reported recognition rates demonstrate promising results, with
some techniques achieving high accuracy levels, such as a 97.80% accuracy for static PSL
alphabets and a 95% accuracy for English to PSL translation. The research also emphasizes the
need for further advancements in sign language recognition technology, particularly for PSL, to
improve communication and understanding for the deaf community. Overall, the document
highlights the significant contributions and potential impact of these recognition techniques in
facilitating communication for the hearing-impaired.
Table 3 Summarized review of PSL recognition systems
Author | Acquisition Mode | Technique Used | Recognition Rate
N. Sabir et al [21] | Machine translation | English to PSL translation model | 95%
Saqlain Shah et al [22] | Vision | LBP histograms, SVM | 77.18%
N. Raziq and S. Latif [23] | Leap Motion Controller | Gesture-based recognition system, Leap Motion | 92.5%
M. Raees et al [24] | Vision | Image-based analysis, finger positions, thumb visibility | 84.2%
Uzma F. et al [25] | RNN-based model | Neural machine translation model | BLEU-4: 0.51, WER: 0.17
Farman S. et al [26] | Vision | SVM with multiple kernel learning, HOG, EOH, LBP, SURF | 91.93%
A. Dewani et al [27] | Web-based | E-learning system for PSL | -
M. Wasim et al [28] | Communication system | Two-way communication for PSL | High accuracy
H. M. Hamza and Aamir W. [29] | Vision-based | Deep learning models (C3D, I3D, TSM) | C3D: 66.67%, I3D: 77.50%, TSM: 33.75%
M. Shaheer et al [30] | Vision | BoW, SVM | 97.80%
3.4 Indian Sign Language
The deaf community in India uses Indian Sign Language (ISL), a dynamic visual and gestural
communication system. ISL is an essential tool for social interaction and effective
communication due to its distinctive gestures, facial expressions, and geographical differences.
ISL reflects India's vast cultural and linguistic diversity. In
addition to being a useful tool for everyday communication, it contributes to the cultural identity
of the deaf community. ISL is recognized for its importance and initiatives are being taken to
obtain official status in some areas and to incorporate it into education. ISL is an important
symbol of inclusivity, empowering people with hearing loss and promoting a more open and
connected community.
Fig 4 Indian Sign Language [44]
3.4.1 Indian Sign Language Recognition Techniques
P. C. Badhe and V. Kulkarni [31] presented a promising approach for Indian Sign Language
(ISL) recognition that combines hand-crafted feature extraction based on Fourier descriptors
with an artificial neural network (ANN) classifier. The
proposed method, which achieves an impressive accuracy of 98% on a small dataset of 500 ISL
gestures, stands out for its simplicity, efficiency, and suitability for real-time applications.
Comparative analyses with existing techniques, including hidden Markov models, support vector
machines, and convolutional neural networks, reveal the effectiveness of the proposed approach.
The authors suggest future improvements through the exploration of larger datasets, alternative
feature extraction methods, and potential extensions for recognizing continuous sequences of
sign language. Overall, this article represents a valuable contribution to ISL recognition and has
significant potential for improving communication for individuals with hearing and speech
impairments.
D. G. Mali et al [32] presented a new approach for Indian Sign Language recognition using an
SVM classifier, combining Principal Component Analysis (PCA) for preprocessing with Support
Vector Machine (SVM) for classification. The proposed method, which achieves an impressive
accuracy of 95.23% on a dataset of 676 ISL gestures, excels in its simplicity and
computational efficiency, making it suitable for real-time applications. Comparative analyses
with existing techniques, including hidden Markov models, support vector machines, and
convolutional neural networks, underscore the effectiveness of the method. The authors suggest
future improvements by exploring larger datasets and alternative preprocessing and
classification methods, as well as extending the method to recognize continuous sign language
sequences. Overall, this paper provides a valuable contribution to ISL recognition and offers a
promising solution that could significantly aid communication for individuals with hearing and
speech impairments.
K. Shenoy et al [33] addressed the critical need for real-time Indian Sign Language (ISL)
recognition and presented a novel system that overcomes the challenges posed by the complexity
of hand gestures and the variability of individual signers. Unlike existing methods, which may
require cumbersome equipment or prove computationally expensive, the proposed system uses
grid elements to represent hand positions and gestures. Through a careful process of hand
detection, tracking, feature extraction, and classification using k-nearest Neighbors (k-NN) for
hand positions and Hidden Markov Models (HMM) for gesture recognition, the system achieves
a remarkable accuracy of 99.7% for hand positions. classification and an average of 97.23% for
gesture recognition on different datasets. Importantly, the system demonstrates real-time
performance with a recognition time of 0.2 seconds for hand positions and 0.0037 seconds for
gestures. This pioneering work holds significant promise for practical applications, including
sign language translation and education, making a significant contribution to bridging the
communication gap between deaf and speech-impaired people and wider society.
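The grid-element representation combined with k-NN can be sketched roughly as follows: the hand centroid is quantised into a coarse grid cell and the resulting vector is classified with k-nearest neighbours. The grid size, feature encoding, and placeholder data are assumptions, not the exact scheme in [33].

```python
# Sketch of a grid-based hand position feature classified with k-NN (assumed grid and features).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def grid_feature(centroid, frame_size=(480, 640), grid=(4, 4)):
    """Map a hand centroid (row, col) to a one-hot vector over grid cells."""
    rows, cols = grid
    r = min(int(centroid[0] / frame_size[0] * rows), rows - 1)
    c = min(int(centroid[1] / frame_size[1] * cols), cols - 1)
    one_hot = np.zeros(rows * cols)
    one_hot[r * cols + c] = 1.0
    return one_hot

rng = np.random.default_rng(0)
centroids = rng.uniform([0, 0], [480, 640], size=(200, 2))   # placeholder hand centroids
labels = rng.integers(0, 6, size=200)                         # placeholder hand-position classes
X = np.array([grid_feature(c) for c in centroids])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("Predicted hand position class:", knn.predict([grid_feature((120, 500))])[0])
```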
G. A. Rao et al [34] proposed a powerful deep convolutional neural network (CNN) architecture
for selfie sign language gesture recognition, specifically focusing on Indian Sign Language
(ISL). The CNN design includes four convolutional layers with different filter window sizes to
increase the speed and accuracy of gesture recognition, and it implements a stochastic pooling
technique that combines the advantages of max and mean pooling. The authors created a
comprehensive dataset containing 200 ISL signs performed by 5 signers from 5 different
viewpoints, totaling 300,000 sign video frames. CNN training in different batches reveals robust
performance, with Batch-III training demonstrating superior accuracy and validation rates
compared to previous sign language recognition models. The proposed CNN model achieves an
impressive average recognition rate of 92.88%, outperforming other state-of-the-art classifiers.
The literature review highlights the challenges of traditional hand-made features in sign language
recognition and supports the effectiveness of CNNs in capturing subtle variations. Concluding
remarks highlight the promising trajectory of CNNs in this area and highlight their potential to
increase accuracy and robustness in sign language recognition systems, ultimately benefiting the
hearing impaired.
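Stochastic pooling, mentioned above, can be illustrated with the small NumPy sketch below: activations in each pooling window are normalised into probabilities and one value is sampled. This is a conceptual illustration only, not the authors' implementation.

```python
# Minimal NumPy sketch of stochastic pooling: sample one activation per window by probability.
import numpy as np

def stochastic_pool(feature_map, pool=2, rng=np.random.default_rng(0)):
    """feature_map: (H, W) non-negative activations -> (H//pool, W//pool) pooled map."""
    H, W = feature_map.shape
    out = np.zeros((H // pool, W // pool))
    for i in range(0, H - pool + 1, pool):
        for j in range(0, W - pool + 1, pool):
            window = feature_map[i:i + pool, j:j + pool].ravel()
            total = window.sum()
            if total == 0:
                out[i // pool, j // pool] = 0.0        # degenerate window of zeros
            else:
                probs = window / total                 # activations as sampling probabilities
                out[i // pool, j // pool] = rng.choice(window, p=probs)
    return out

fmap = np.abs(np.random.default_rng(1).standard_normal((4, 4)))
print(stochastic_pool(fmap))
```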
T. Raghuveera et al [35] presented a convincing solution for Indian Sign Language (ISL)
recognition using a depth approach with a Microsoft Kinect sensor. The system recognizes the
importance of bridging the communication gap for individuals with hearing and speech
impairments and effectively translates ISL hand gestures into meaningful English text and
speech. The authors use depth imagery from the Kinect sensor due to its robustness to ambient
lighting conditions and object color, using a dataset comprising 140 unique ISL gestures from 21
subjects, including one-handed, two-handed, and finger-spelled gestures. Using feature extraction
techniques such as Speeded Up Robust Features (SURF), Histogram of Oriented Gradients
(HOG), and Local Binary Patterns (LBP), and using a Support Vector Machine (SVM) classifier,
the system achieves an average recognition accuracy of 71.85%. Remarkably, it achieves 100%
accuracy for a particular sign. The conclusion highlights the system's efficiency, accuracy, and
robustness to environmental changes while recommending future work to expand the dataset,
increase sentence diversity, and optimize the system's response time. Overall, this in-depth ISL
recognition system holds promise for improving communication accessibility for the hearing and
speech impaired.
P. Sonawane et al [36] presented a comprehensive human interface framework that addresses the
communication gap between the hearing and speech-impaired community and the general
population through a speech-to-Indian Sign Language (ISL) translation system. Utilizing the
Microsoft Xbox Kinect 360's depth sensing and motion capture capabilities, the system captures
motion data for various ISL gestures. An implementation integrated into an Android application
using Unity3D offers a practical solution to facilitate real-time conversation. The literature
review examines different approaches to sign language translation, focusing on computer vision
systems for their non-intrusive nature compared to glove-based systems. They refer to
remarkable studies of speech-to-sign language translation systems that demonstrate high
accuracy in American Sign Language (ASL) and British Sign Language (BSL). The challenges
of translating speech into ISL, such as its complexity and variability, are recognized and
addressed using an extensive set of ISL training features and a robust feature extraction
algorithm. The system achieves a commendable accuracy of 75%, which emphasizes its usability
for individuals with various disabilities. Future scope includes expanding the training set,
developing new feature extraction algorithms, and increasing the accuracy of translation
algorithms with the ultimate goal of improving the lives of hearing-impaired people.
L. Goyal and V. Goyal [37] in "Development of an Indian Sign Language Dictionary Using
Synthetic Animations" present an innovative approach to the construction of a synthetic
animation dictionary for Indian Sign Language (ISL), highlighting the advantages of memory
efficiency, standardization and flexibility over traditional video-based human dictionaries. A
related body of work provides insights into sign language vocabularies for different languages
and introduces different methodologies, including 3D animation and the combination of human
videos with synthetic animations. The paper's strength lies in presenting a clear methodology,
providing evidence in favor of synthetic animations, and discussing potential applications in
education and translation. However, weaknesses include a lack of discussion of the limitations of
synthetic animations, a comparison with alternative methods, and the absence of a user study to
evaluate the effectiveness of the synthetic animation dictionary. Although this paper is
promising, further research is recommended to address these limitations and comprehensively
assess the proposed methodology.
P. C. Badhe and V. Kulkarni [38] in “Indian Sign Language Translator Using Gesture
Recognition Algorithm” present an efficient system for translating Indian Sign Language (ISL)
gestures into English. Utilizing a combined algorithm for preprocessing, 2D FFT Fourier
descriptors for feature extraction, and 4 LBG vector codebooks for vector quantization, the
system achieves an impressive overall accuracy of 92.91% when considering numbers,
alphabets, and phrases together. Some of the related work emphasizes context within the broader
field of sign language recognition and references significant research efforts, including surveys
and other systems. The paper's strengths lie in its high accuracy, complex recognition
capabilities, and the efficiency of its algorithmic approach. However, limitations include the
small sample size of users (10) and the absence of testing on a large dataset of ISL gestures.
Despite these shortcomings, the system holds promise as a valuable tool for facilitating
communication between hearing and deaf individuals.
D. Toppo et al [39] in “Subtitles and Indian Sign Language as Accessibility Tools in Universal
Design” contribute to understanding the impact of accessible formats on the understanding of
advertisements for both hearing and deaf individuals. With a substantial sample size (84
participants) and a randomized design to control for potential confounders, the study used a well-
developed questionnaire to assess the effects of subtitles and Indian Sign Language (ISL). While
demonstrating strengths in methodology such as randomization and a large sample, the study is
limited by its focus on two short advertisements and the absence of an examination of different
types of captions and ISL interpretations. In addition, the study did not extend its investigation to
other media forms such as news programs or educational videos. Despite these limitations, the
findings strongly support the effectiveness of accessible formats in improving the understanding
of media messages for diverse audiences and demonstrate the importance of subtitles and ISL in
universal design for media accessibility.
SP Goswami et al [40] in "Implementation of Indian Sign Language in Inclusive Education"
provide valuable insights into the positive effects of teaching Indian Sign Language (ISL) to
typically developing students. The study, which uses a descriptive design in a mainstream school
setting, effectively outlines improvements in students' awareness of nonverbal communication
modes, mastery of basic ISL skills, and attitudes toward sign languages after training. The use of
a self-reported questionnaire as a reliable method of data collection increases the strength of the
paper. However, limitations include the lack of a randomized control group design, which
prevents definitive conclusions about the exclusive impact of ISL training, and the oversight of
measuring effects on students with hearing impairment. Additionally, the lack of follow-up
measures to assess the sustainability of training effects over time presents a gap for future
research. Despite these limitations, the study provides compelling evidence supporting the
integration of ISL in inclusive education, which calls for further research to address the
identified gaps.
3.4.2 Segmented Conclusion:
The literature review of Indian Sign Language (ISL) recognition techniques showcases various
approaches such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), k-
nearest Neighbors (k-NN), Deep Convolutional Neural Networks (CNN), and depth-based
methods. These techniques aim to bridge the communication gap for the hearing-impaired
community and integrate them into society. The reported recognition rates demonstrate
promising results, with the reviewed systems achieving accuracy levels of 98%, 95.23%,
99.7%, 92.88%, and 71.85%, respectively. The research also emphasizes the need for
further advancements in sign language recognition technology to improve communication and
understanding for individuals with hearing impairments. Overall, the document highlights the
significant contributions and potential impact of these recognition techniques in facilitating
communication for the hearing-impaired.
Table 4 Summarized review of ISL recognition systems
Author | Acquisition Mode | Technique Used | Recognition Rate
P. C. Badhe and V. Kulkarni [31] | Hand-crafted features | Artificial Neural Network (ANN) | 98%
D. G. Mali et al [32] | Principal Component Analysis (PCA) | Support Vector Machine (SVM) | 95.23%
K. Shenoy et al [33] | Grid-based features | k-Nearest Neighbors (k-NN), Hidden Markov Models (HMM) | 99.7% (hand pose classification), 97.23% (gesture recognition)
G. A. Rao et al [34] | Deep Convolutional Neural Network (CNN) | CNN architecture with stochastic pooling | 92.88% (average)
T. Raghuveera et al [35] | Depth-based approach | Feature extraction: SURF, HOG, LBP; classifier: SVM | 71.85% (average)
P. Sonawane et al [36] | Depth sensing (Microsoft Kinect 360) | Depth-based feature extraction and motion capturing | 75%
L. Goyal and V. Goyal [37] | Synthetic animations | Construction of synthetic animation dictionary | N/A (efficiency, memory consumption, and flexibility highlighted)
P. C. Badhe and V. Kulkarni [38] | Gesture recognition algorithm | Pre-processing, 2D FFT Fourier descriptors, 4 vector codebook LBG | 92.91% (overall)
D. Toppo et al [39] | Accessibility tools | Captioning and Indian Sign Language (ISL) | Positive impact on advertisement comprehension
SP Goswami et al [40] | Inclusive education | Introduction of Indian Sign Language (ISL) | Positive impact on awareness and attitudes
Systematic Literature Review Methodology:
The systematic literature review undertaken aimed to conduct a comprehensive analysis of recent
advancements in sign language recognition systems, with a specific focus on American Sign
Language (ASL), Arabic Sign Language (ArSL), Pakistani Sign Language (PSL), and Indian
Sign Language (ISL). The methodology encompassed a multi-step approach to ensure a rigorous
and inclusive exploration of the field.
1. Identification of Relevant Literature:
A careful search strategy was implemented in reputable academic databases, including
IEEE Xplore, PubMed, and Google Scholar. The search used specific keywords such
as “sign language recognition”, “ASL recognition”, “ArSL recognition”, “PSL recognition”,
and “ISL recognition”. The time scope was limited to articles published between 2010
and 2023 to ensure the inclusion of current developments in sign language recognition
technologies.
2. Inclusion and Exclusion Criteria:
Articles were selected for their focus
on sign language recognition techniques, technologies, and systems, particularly for ASL,
ArSL, PSL, and ISL. Only sources that had undergone peer review processes, including
journal articles, conference papers, and academic books, were considered suitable for
inclusion, thereby maintaining the academic rigor of the review.
3. Data Extraction and Synthesis:
Relevant data extracted from the chosen articles encompassed crucial aspects such as
acquisition modes, employed techniques, and recognition rates. A synthesis process was
employed to amalgamate the extracted information, facilitating the creation of a
comprehensive overview of advancements in sign language recognition systems.
4. Quality Assessment:
Each selected article was assessed for methodological rigor (research design, data analysis,
validity, and reliability), for its relevance to sign language recognition research, and for its
contribution to the field of sign language technology.
5. Analysis and Reporting:
The results of the selected articles were subjected to a systematic analysis to
identify prevailing themes, emerging trends, and notable advances in sign language
recognition systems for ASL, ArSL, PSL, and ISL. A detailed
summary of detection rates, techniques used, and potential for future advances was
presented, contributing to a comprehensive and nuanced understanding of the existing
literature.
6. Review and Validation:
The synthesized data and analysis results underwent a comprehensive verification and
validation process by the research team. This collaborative validation step was
implemented to ensure the accuracy, reliability, and integrity of the systematic
literature review and to contribute to the overall robustness of the research methodology.
[Methodology flowchart: Identification of Relevant Literature → Inclusion and Exclusion Criteria →
Data Extraction and Synthesis → Quality Assessment → Analysis and Reporting → Review and Validation]
Table 5 Systematic Literature Review Methodology Chart
Step | Description | Details
1. Identify relevant literature | Search academic databases for research articles on sign language recognition. | Databases: IEEE Xplore, PubMed, Google Scholar. Keywords: sign language recognition, ASL recognition, ArSL recognition, PSL recognition, ISL recognition. Publication timeframe: 2010-2023.
2. Apply inclusion & exclusion criteria | Select articles based on their relevance to the research topic. | Include: focus on techniques, technologies, and systems for sign language recognition (ASL, ArSL, PSL, ISL); peer-reviewed journals, conference papers, and scholarly books. Exclude: other articles.
3. Extract and synthesize data | Collect essential information from selected articles. | Data points: acquisition mode, techniques used, recognition rates. Synthesis: develop a comprehensive overview of advancements in sign language recognition systems.
4. Evaluate the quality of selected articles | Assess the methodological rigor, relevance, and contribution of each article. | Analyze methodological rigor: research design, data analysis, validity, reliability. Assess relevance: focus on sign language recognition research. Evaluate contribution: impact on the field of sign language technology.
5. Analyze and report findings | Identify key themes, trends, and advancements in sign language recognition systems. | Analyze recognition rates, techniques employed, and potential for further advancements. Report findings in a comprehensive and informative manner.
6. Review and validate results | Ensure the accuracy and reliability of the systematic literature review. | Review synthesized data and analysis for consistency and accuracy. Validate findings with the research team.
Future Scope:
Future research endeavors could focus on improving accuracy, expanding datasets, and exploring
sequence-based classification methods to enhance accessibility and communication for
individuals with hearing impairments. Additionally, the integration of deep learning and machine
learning models has shown promise in effectively recognizing sign language across different
cultures and dialects, indicating the potential for broader applications in the field.
Overall Conclusion:
The systematic literature review presented a comprehensive analysis of the advancements in sign
language recognition systems, covering American Sign Language (ASL), Arabic Sign Language
(ArSL), Pakistani Sign Language (PSL), and Indian Sign Language (ISL). The review
showcased a diverse range of techniques and technologies, including 3-D Convolutional Neural
Networks (CNN), Support Vector Machine (SVM), depth-based feature extraction, machine
translation models, and wearable sensor-based systems. These methods have been instrumental
in accurately identifying sign language gestures and improving accessibility for the hearing and
speech-impaired community. The reported recognition rates, ranging from 71.85% to 99.99%,
underscore the effectiveness of the different systems and methods employed. Notably, the CNN
models demonstrated high accuracy, with some achieving recognition rates exceeding 95%.
Additionally, the use of depth-based approaches and wearable sensor networks showcased
promising results, further emphasizing the potential for diverse technological solutions in sign
language recognition. The review also highlighted the need for further advancements in sign
language recognition technology.
References:
1. K. Kumar, & S. Sharma (2021). 3-D CNN for American Sign Language Recognition.
Multimedia Tools and Applications, 80, 26319–26331.
2. V. Jain, A. Jain, Abhinav Chauhan, Srinivasu Soma Kotla, Ashish Gautam (2021).
American Sign Language recognition using Support Vector Machine and Convolutional
Neural Network. International Journal of Information Technology, 13(3), 1193–1200.
3. A. A. Barbhuiya, Ram Kumar Karsh, and Rahul Jain (2021). CNN-based feature
extraction and classification for sign language. Multimedia Tools and Applications, 80,
3051-3069.
4. Sevgi Z. Gurbuz, Ali Cafer Gurbuz, Evie A. Malaia, Darrin J. Griffin, Chris S. Crawford,
Mohammad Mahbubur Rahman, Emre Kurtoglu, Ridvan Aksu, Trevor Macks and
Robiulhossain Mdraf (2021). ASL Recognition Using RF Sensing. IEEE Sensors Journal,
21(3), 3765-3775.
5. T. W. Chong and B. G. Lee (2018). American Sign Language Recognition Using Leap
Motion Controller with Machine Learning Approach. Sensors, 18(10), 3554.
6. W. Aly, Saleh A., and Sultan Almotairi (2019). User-independent American Sign
Language Alphabet Recognition Based on Depth Image and PCANet Features. IEEE
Access, 7, 123143-123149.
7. C.K.M. Lee, Kam K.H. Ng, Chun-Hsien Chen, H.C.W. Lau, S.Y. Chung, Tiffany Tsoi
(2021). American sign language recognition and training method with the recurrent
neural network. Expert Systems With Applications, 167, 114403.
8. R. Ramalingame, Rim Barioul, Xupeng Li, Giuseppe Sanseverino, Dominik Krumm,
Stephan Odenwald and Olfa Kanoun (2017). Wearable Smart Band for American Sign
Language Recognition with Polymer Carbon Nanocomposite-based Pressure Sensors.
IEEE Sensors Letters, 2(3), 1-4. doi: 10.1109/LSENS.2016.2644058
9. S. Kumar Singh and A. Chaturvedi (2023). Machine learning pipeline for American Sign
Language gesture recognition using surface electromyography signals. Multimedia Tools
and Applications, 82, 23833–23871.
10. A. A. Abdulhusseina and F. A. Raheem (2020). Hand gesture recognition of static letters
American sign language (ASL) using deep learning. Engineering and Technology
Journal, 38(06), 926-937.
11. Sidig, Hamzah Luqman, and Sabri A. Mahmoud (2018). Arabic Sign Language
Recognition Using Optical Flow-Based Features and HMM. In F. Saeed et al. (Eds.),
Recent Trends in Information and Communication Technology (pp. 297-305). Lecture
Notes on Data Engineering and Communications Technologies, 5.
12. Boukdir, Mohamed Benaddy, Ayoub Ellahyani, Othmane El Meslouhi, Mustapha
Kardouchi (2022). Isolated Video-Based Arabic Sign Language Recognition Using
Convolutional and Recursive Neural Networks. Arabian Journal for Science and
Engineering, 47, 2187–2199.
13. Ibrahim, Mazen M. Selim, Hala H. Zayed (2018). An Automatic Arabic Sign Language
Recognition System (ArSLRS). Journal of King Saud University - Computer and
Information Sciences, 30(2018), 470-477.
14. Qaroush, Sara Yassin, Ali Al-Nubani, Ameer Alqam (2021). Smart, comfortable
wearable system for recognizing Arabic Sign Language in real-time using IMUs and
features-based fusion. Expert Systems With Applications, 184, 115448.
15. Hisham, Alaa Hamouda (2020). Arabic sign language recognition using Ada-Boosting
based on a leap motion controller. International Journal of Information Technology,
13(3), 1221-1234.
16. S. Aly and W. Aly (2020). DeepArSLR: A Novel Signer-Independent Deep Learning
Framework for Isolated Arabic Sign Language Gestures Recognition. IEEE Access, 8,
83199-83212.
17. Deriche, Salihu Aliyu, and Mohamed Mohandes (2019). An Intelligent Arabic Sign
Language Recognition System using a Pair of LMCs with GMM-Based Classification.
IEEE Sensors Journal.
18. Hamzah Luqman and El-Sayed M. El-Alfy. (2021). Towards Hybrid Multimodal Manual
and Non-Manual Arabic Sign Language Recognition: mArSL Database and Pilot Study.
Electronics, 10, 1739.
19. Hamzah Luqman and Sabri A. Mahmoud (2019). Automatic translation of Arabic
text-to-Arabic sign language. Universal Access in the Information Society, 18(3), 939–
951.
20. Hassan, Khaled Assaleh, Tamer Shanableh (2019). Multiple Proposals for Continuous
Arabic Sign Language Recognition. Sensing and Imaging, 20(4).
21. De Marneffe, M. C., & Manning, C. D. (2008). Stanford typed dependencies manual (pp.
338-345). Technical report, Stanford University.
22. Shah, S. M. S., Naqvi, H. A., Khan, J. I., Ramzan, M., Zulqarnain, & Khan, H. U. (2018).
Shape-Based Pakistan Sign Language Categorization Using Statistical Features and
Support Vector Machines. IEEE Access, 6, 2872670.
23. Raziq, N., & Latif, S. (2017). Pakistan Sign Language Recognition and Translation
System using Leap Motion Device. In F. Xhafa et al. (Eds.), Advances on P2P, Parallel,
Grid, Cloud and Internet Computing (pp. 895-902). Springer International Publishing
AG.
24. Raees, M., Ullah, S., Rahman, S. U., & Rabbi, I. (2016). Image-based recognition of
Pakistan sign language. Journal of Engineering Research, 4(1), 21-41.
25. Farooq, U., Rahim, M. S. M., & Abid, A. (2023). A multi-stack RNN-based neural
machine translation model for English to Pakistan sign language translation. Neural
Computing and Applications, 35(2023), 13225–13238.
26. Alvi, A. K., Azhar, M. Y. B., Usman, M., Mumtaz, S., Rafiq, S., Rehman, R. U., &
Ahmed, I. (2004). Pakistan sign language recognition using statistical template matching.
International Journal of Information Technology, 1(1), 1-12.
27. Dingli, A., & Cassar, S. (2014). An intelligent framework for website usability. Advances
in Human-Computer Interaction, 2014, 5.
28. Wasim, M., Siddiqui, A. A., Shaikh, A., Ahmed, L., Ali, S. F., & Saeed, F. (2018).
Communicator for Hearing-Impaired Persons using Pakistan Sign Language (PSL).
International Journal of Advanced Computer Science and Applications, 9(5), 197-202.
29. Hamza, H. M., & Wali, A. (2023). Pakistan sign language recognition: leveraging deep
learning models with the limited dataset. Machine Vision and Applications, 34(71).
https://doi.org/10.1007/s00138-023-01429-8.
30. Mirza, M. S., Munaf, S. M., Azim, F., Ali, S., & Khan, S. J. (2022). Vision-based
Pakistani sign language recognition using bag-of-words and support vector machines.
Scientific Reports, 12(21325).
31. P. C. Badhe, & V. Kulkarni (2020, July). Artificial Neural Network Based Indian Sign
Language Recognition Using Hand Crafted Features. In 2020 11th International
Conference on Computing, Communication and Networking Technologies
(ICCCNT) (pp. 1-6). IEEE.
32. D. G. Mali, Nitin S. Limkar, Satish H. Mali (2019, May). Indian sign language
recognition using SVM classifier. In Proceedings of the international conference on
communication and information processing (ICCIP).
33. K. Shenoy, Tejas Dastane, Varun Rao, Devendra Vyavaharkar (2018, July). Real-time
Indian sign language (ISL) recognition. In 2018 9th International Conference on
Computing, Communication and Networking Technologies (ICCCNT) (pp. 1-9). IEEE.
34. G. A. Rao, K. Syamala, P. V. V. Kishore, A. S. C. S. Sastry (2018, January). Deep
convolutional neural networks for sign language recognition. In 2018 Conference on
Signal Processing and Communication Engineering Systems (SPACES) (pp. 194-197).
IEEE.
35. T. Raghuveera, R. Deepthi, R. Mangalashri, and R. Akshaya (2020). A depth-based
Indian sign language recognition using Microsoft Kinect. Sādhanā, 45, 1-13.
36. P. Sonawane, Karan Shah, Parth Patel, Shikhar Shah, and Jay Shah (2021, February).
Speech to Indian sign language (ISL) translation system. In 2021 International
Conference on Computing, Communication, and Intelligent Systems (ICCCIS) (pp. 92-
96). IEEE.
37. L. Goyal, & V. Goyal (2016). Development of an Indian sign language dictionary using
synthetic animations. Indian Journal of Science and Technology, 9(32), 1-5.
38. P. C. Badhe, & V. Kulkarni (2015, November). Indian sign language translator using
gesture recognition algorithm. In 2015 IEEE international conference on computer
graphics, vision, and information security (CGVIS) (pp. 195-200). IEEE.
39. D. Toppo, S. Sahasrabudhe, P. D. Chavan, & J. M. M. Poothullil (2013). Captioning and
Indian Sign Language as accessibility tools in universal design. SAGE Open, 3(2),
2158244013491405.
40. SP Goswami, Anita Ravindra GGR, Kanchan Sharma (2019). Introduction of Indian sign
language in inclusive education. Disability, CBR & Inclusive Development, 30(4), 96-
110.
41. https://www.crayola.com/free-coloring-pages/print/american-sign-language-alphabet-
coloring-page/
42. N. El-Bendary, A. E. Hassanien, H. Zawbaa, M. Daoud, and K. Nakamatsu (2010). ArSLAT:
Arabic Sign Language Alphabets Translator. In Proceedings of the International
Conference on Computer Information Systems and Industrial Management Applications
(CISIM), Krakow, Poland, 8-10 October 2010 (pp. 590-595).
43. Khan, N. S., Shahzada, A., Ata, S., Abid, A., Khan, Y. D., Farooq, M. S., Mushtaq, M. T., &
Khan, I. (2014). A Vision-Based Approach for Pakistan Sign Language Alphabets
Recognition. La Pensée, 76(3), 274-285.
44. Sudeep D. Thepade, Nilima Phatak, Deepali Naglot, Aishwarya Chandrasekaran, Mugdha
Joshi (2017, November). Novel Feature Extraction Technique for Indian Sign Language
Recognition using Energy Compaction of Cosine Transform. International Journal of
Computer Applications, 177(2).