
AUTOMATIC PLAYER FACE DETECTION AND RECOGNITION

FOR PLAYERS IN CRICKET GAMES

A SEMINAR REPORT

submitted by

AJAY E (LKMC21CS054)

To

The APJ Abdul Kalam Technological University

in partial fulfilment of the requirements for the award of the degree


of

Bachelor of Technology
In

Computer Science and Engineering

Department of Computer Science and Engineering

KMCT College of Engineering, Kalanthode


2024-2025
DECLARATION

I, the undersigned, hereby declare that the seminar report “Automatic Player Face Detection and
Recognition for Players in Cricket Games”, submitted in partial fulfilment of the
requirements for the award of the degree of Bachelor of Technology of the APJ Abdul Kalam
Technological University, Kerala, is a bonafide work done by me under the supervision of Mrs.
Najiya Nasrin K. This submission represents my ideas in my own words, and where ideas or
words of others have been included, I have adequately and accurately cited and referenced
the original sources. I also declare that I have adhered to the ethics of academic honesty and
integrity and have not misrepresented or fabricated any data, idea, fact, or source in my
submission. I understand that any violation of the above will be a cause for disciplinary
action by the institute and/or the University and can also evoke penal action from the
sources which have thus not been properly cited or from whom proper permission has not
been obtained. This report has not previously formed the basis for the award of any
degree, diploma, or similar title of any other University.

Place:
Date: AJAY E

KMCT COLLEGE OF ENGINEERING, KALLANTHODE
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

(Affiliated to the APJ Abdul Kalam Technological University)

CERTIFICATE

This is to certify that the seminar report entitled “AUTOMATIC PLAYER FACE
DETECTION AND RECOGNITION FOR PLAYERS IN CRICKET GAMES”
submitted by AJAY E (LKMC21CS054) to the APJ Abdul Kalam Technological
University in partial fulfilment of the requirements for the award of the Degree of
Bachelor of Technology in Computer Science and Engineering is a bonafide record of
the seminar carried out by him under guidance and supervision. This report, in any form,
has not been submitted to any other University or Institute for any purpose.

Seminar Guide: Seminar Coordinator: Head of the Department:


Mrs Najiya Nasrin K Mrs Najiya Nasrin K Dr. Sreekesh Namboodiri

Asst Prof Asst Prof Asst Prof


CSE Dept CSE Dept CSE Dept

ACKNOWLEDGEMENT

I remember with grateful appreciation the encouragement and support rendered by
Dr. SABIQ P V, Principal of KMCT College of Engineering, Calicut. I express my
deepest sense of gratitude to Dr. SREEKESH NAMBOODIRI, Head of the
Department of Computer Science and Engineering, for his valuable advice and guidance.

I also express my heartfelt gratitude to the seminar coordinator, Mrs. NAJIYA NASRIN
K, Department of Computer Science and Engineering, for the timely suggestions and
encouragement given for the successful completion of this seminar. I will always be obliged
for the helping hands of all other staff members of the department and all my friends and
well-wishers, who directly or indirectly contributed to this venture.

Last but not least, I am indebted to God Almighty for being the guiding light throughout
this seminar and for helping me to complete it within the stipulated time.

AJAY E

ABSTRACT

This research presents an innovative augmented reality application that enhances
cricket broadcasting through real-time player face detection and recognition. Using
the AdaBoost algorithm and a PAL-based model, the system identifies and recognizes
players during live games, even under challenging conditions such as occlusions,
varied lighting, and facial expressions. Trained on an extensive dataset of cricket
footage, the model achieves high accuracy, providing instant access to players'
personal data and performance statistics, ultimately elevating the viewer experience.
The application showcases a promising solution for automated player recognition in
sports, with potential applications across various live broadcast sports. This paper
details the methodology, experimental results, and future possibilities for advancing
sports broadcasting with real-time face recognition technology.

CONTENTS

CHAPTER 1 : INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1

CHAPTER 2 : LITERATURE REVIEW. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3

CHAPTER 3 : METHODOLOGY. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5

CHAPTER 4 : SYSTEM ARCHITECTURE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

CHAPTER 5 : RESULT ANALYSIS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9

CHAPTER 6 : CONCLUSION AND FUTURE SCOPE. . . . . . . . . . . . . . . . . . . 12

APPENDIX A : REFERENCE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

LIST OF FIGURES

Title                                                                   Page No.

Fig. 1    System architecture diagram of the proposed system                 8
Fig. 2(a) Output of the proposed module                                     10
Fig. 2(b) Execution time of the proposed model                              10
Fig. 3(a) Accuracy in terms of each subject's usage of training             11
Fig. 3(b) Accuracy in terms of each subject's usage of training images      11

ABBREVIATION

AI - Artificial Intelligence

AR - Augmented Reality

CNN - Convolutional Neural Network

LDA - Linear Discriminant Analysis

PAL - Pose Aligned Landmarks

AdaBoost - Adaptive Boosting

PCA - Principal Component Analysis

HOG - Histogram of Oriented Gradients (used in object detection)

RNN - Recurrent Neural Network

FLE - Facial Landmark Estimation (method for aligning facial features)

CNCC - Classic Nearest Centre Classifier


CHAPTER 1
INTRODUCTION

In recent years, advancements in computer vision and machine learning have significantly
reshaped the landscape of sports broadcasting. By applying artificial intelligence (AI)
techniques to live video feeds, broadcasters can offer enriched, data-driven insights that
deepen viewer engagement and understanding. Face recognition technology has emerged as a
particularly promising tool within this domain. Initially developed with simple geometric
features, it has rapidly evolved with the advent of sophisticated deep learning models, which
can now analyze and identify unique facial characteristics across complex visual contexts. This
evolution has seen widespread application across security, social media, and entertainment
industries, capitalizing on AI's ability to distinguish subtle nuances in facial structures. In
sports, where players’ faces are often partially obscured or captured at various angles, this
technology presents unique opportunities for real-time identification and information display,
especially in high-energy and fast-paced environments like cricket.

Cricket, a sport celebrated worldwide for its complex strategies and dynamic gameplay, poses
distinct challenges for tracking player identities and statistics. A cricket team consists of
multiple players, each frequently moving around a large field. This movement makes it
difficult for viewers—whether in a stadium or watching from home—to keep track of
individual players and understand their specific contributions during the game. In large
venues, where individual players are harder to distinguish, or during televised events, where
camera angles constantly shift, identifying players on the field becomes a formidable task.
Automatic face detection and recognition technology bridges this gap by providing real-time
information about each player, thus enhancing the viewing experience. It enables fans to
seamlessly track players, view their statistics, and gain insights into their performance with
minimal interruption to the live action.


Our study introduces an augmented reality (AR) application specifically designed to
integrate with live cricket broadcasts, using player face recognition to offer viewers
immediate access to player information. By leveraging the AdaBoost algorithm for initial
player and face detection, combined with a robust PAL-based face recognition model, the
system tackles various real-world challenges encountered in live sports broadcasting. The
application has been extensively trained on cricket footage, allowing it to recognize faces
despite occlusions, varying lighting conditions, and pose changes—challenges inherent to
outdoor, dynamic sports environments. The chosen model achieves efficient, real-time
processing, providing fans with up-to-the-minute data on players, even in scenarios where
visibility might be compromised. This real-time feature adds a layer of interactivity to cricket
viewing, bringing a new dimension to fan engagement by allowing users to dive into player
stats and profiles in the context of live gameplay.

The broader implications of this technology extend far beyond cricket. As fans increasingly
demand more interactive and immersive viewing experiences, integrating machine learning
and AR into sports analytics opens new avenues for fan engagement. By allowing audiences
to access live, personalized content through their screens, this technology could set a new
standard for sports broadcasting. With further refinement, similar systems could be adapted
to other sports, such as soccer, basketball, or rugby, where tracking individual player
performance in real-time offers significant added value. This report presents a comprehensive
overview of our methodology, experimental validations, and potential future directions. We
envision a future where AR-powered, machine learning-enhanced sports broadcasts provide
fans with a rich, data-driven experience that brings them closer to the action, making sports
more accessible, engaging, and insightful.


CHAPTER 2
LITERATURE REVIEW
2.1 Evolution of Face Recognition in Computer Vision: Face recognition technology has
undergone a remarkable transformation, progressing from simple geometry-based methods to
complex, deep-learning models. Early systems relied on manually defined features, like the
distance between eyes or the width of the nose, which provided basic recognition abilities but
struggled with variations in expression, pose, and lighting. These limitations made them
unreliable in real-world conditions, where faces are rarely presented in a controlled, front-
facing manner. In the 2000s, with the advent of deep learning, Convolutional Neural
Networks (CNNs) revolutionized face recognition by learning detailed facial feature
representations from large datasets. These advancements have significantly improved the
accuracy and adaptability of face recognition, allowing it to function reliably across a wide
range of scenarios, including low lighting and diverse angles. As a result, face recognition has
gained traction in applications beyond security, finding utility in social media and
entertainment, where dynamic environments make accuracy and adaptability crucial. This
foundational progress enables face recognition to be applied in more challenging settings,
including live sports, where player faces are often partially obstructed or viewed from varying
angles.
2.2 Applications and Challenges of Face Recognition in Sports: The use of automatic face
recognition in sports broadcasting has emerged as a promising tool to enrich the viewer
experience by providing immediate player identification, game statistics, and other insights.
However, applying face recognition in sports presents unique challenges due to the
unpredictable nature of the environment. Sports like cricket and soccer involve fast-paced
action, varied lighting conditions, and frequent changes in player orientation, making it
difficult for conventional recognition systems to maintain accuracy. Early research focused on
using facial recognition alongside jersey numbers or other textual cues, with some success in
settings like soccer where facial and non-facial cues help in identifying players. Yet, these
methods have their limitations; they can be less effective in cases of partial face visibility,
occlusions from helmets or other players, and quickly shifting camera angles. In cricket, where
players often wear protective gear and change positions rapidly, robust recognition is
essential. Recent studies have suggested that adopting advanced machine learning techniques,
such as CNNs and Recurrent Neural Networks (RNNs), can enhance accuracy, but achieving
reliable, real-time performance for live broadcasts remains a significant technical hurdle.


2.3 Integration of Augmented Reality and Real-Time Recognition in Sports Broadcasting:
Augmented Reality (AR), when paired with real-time face recognition, offers a powerful
platform for delivering dynamic, context-rich information to sports viewers. AR applications
in sports broadcasting can overlay player names, statistics, and performance data onto live
video feeds, providing a seamless experience without disrupting the game flow. Some research
has explored AR enhancements in sports, such as adding visual markers to players on the field
or inserting virtual elements in broadcast streams. However, early AR implementations often
struggled with synchronization, image clarity, and processing delays, limiting their
effectiveness in fast-paced settings. Additionally, these systems lacked the ability to handle the
nuances of real-time player tracking in varied lighting and from different angles. More recent
developments have seen improvements in processing power and algorithm efficiency, but the
application of AR with real-time face recognition in cricket is still an emerging area. This
research seeks to address these gaps by developing an AR-integrated face recognition system
that is optimized for cricket. By tackling issues such as occlusions, non-uniform lighting, and
varied poses, the study aims to create a more immersive and informative viewing experience,
allowing fans to access real-time insights about players and gameplay. This project also sets
the stage for broader adoption of AR-based recognition systems in other sports, ultimately
enhancing the way viewers engage with live broadcasts across the board.


CHAPTER 3

METHODOLOGY

3.1 System Overview: The proposed system is an advanced augmented reality (AR)-
enhanced solution for cricket broadcasting that aims to recognize players in real time and
display their personal information and statistics. This system integrates several computer
vision techniques to detect, identify, and display data about each player on the field. It is
structured around four core modules: player detection, face detection, face alignment, and
face recognition, each contributing to the system’s efficiency and robustness in live
environments. When integrated with AR, this setup allows for seamless, dynamic overlays
of player details on live video feeds, enriching the viewer experience by providing instant
access to player names, performance metrics, and other stats. The system is optimized for
fast-paced sports settings, making it possible to identify players accurately even under
challenging conditions, like quick movement, varied camera angles, and diverse lighting.

3.2 Image Acquisition and Preprocessing: The system’s first step is acquiring a diverse
dataset of player images to build a reliable recognition model. This dataset includes images
of cricket players from multiple angles, in different lighting conditions, and with various
facial expressions and poses. By sourcing a wide range of images, the system becomes
resilient to common obstacles encountered in sports broadcasts, such as partial occlusions,
shadows, and fluctuating lighting. The acquired images are resized to consistent dimensions
(such as 100×140, 50×70, and 20×30 pixels) to ensure uniformity across the dataset.
Preprocessing also includes normalizing lighting, contrast, and color balance across images,
which helps in minimizing discrepancies caused by inconsistent lighting or other
environmental factors. This normalization process is essential for enhancing the model’s
accuracy in recognizing players, as it minimizes errors from visual variability inherent in
live cricket matches.
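
As a rough illustration of this stage, the sketch below (using OpenCV, with hypothetical
file and function names, since the report does not publish its code) resizes a face crop to
one of the working resolutions mentioned above and equalizes its lighting:

```python
import cv2
import numpy as np

def preprocess_face(image_bgr, size=(100, 140)):
    """Resize a face crop to a fixed resolution and normalize lighting."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # size is (width, height); 100x140 is one of the resolutions cited above.
    resized = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    # Histogram equalization reduces differences caused by stadium lighting.
    equalized = cv2.equalizeHist(resized)
    # Scale pixel values to [0, 1] for downstream classifiers.
    return equalized.astype(np.float32) / 255.0

if __name__ == "__main__":
    crop = cv2.imread("player_crop.jpg")  # hypothetical sample crop
    if crop is not None:
        print("preprocessed shape:", preprocess_face(crop).shape)
```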


3.3 Player Detection and Face Detection: For player detection, the system uses the
AdaBoost algorithm with Haar-like features, a popular choice for real-time object
detection due to its high accuracy and speed. AdaBoost is particularly suited for this task
as it combines multiple weak classifiers to create a robust model capable of detecting
players quickly and effectively. The algorithm is trained on a comprehensive cricket-
specific dataset, allowing it to recognize players in varied game scenarios, from close-up
shots to distant views. After the player’s body is detected, the face detection module focuses
on identifying facial regions within the detected player, further refining the selection by
isolating clearly visible facial features. This two-step detection process enables the system to
reliably pinpoint player faces even in complex situations where players’ faces may be
partially obscured by helmets, other players, or equipment. By effectively filtering out
occluded or low-resolution faces, this component significantly improves the reliability of
the recognition process.
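
A minimal sketch of this two-stage idea is shown below, using OpenCV's stock Haar
cascades (trained with AdaBoost) as generic stand-ins for the report's cricket-specific
classifiers:

```python
import cv2

# Generic OpenCV cascades; the actual system would use cascades trained on
# cricket footage as described above.
body_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_player_faces(frame_bgr):
    """Return face boxes (x, y, w, h) found inside detected player bodies."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces_found = []
    # Stage 1: locate player bodies in the full frame.
    for (x, y, w, h) in body_cascade.detectMultiScale(gray, 1.1, 3):
        roi = gray[y:y + h, x:x + w]
        # Stage 2: search for a face only inside each detected body region.
        for (fx, fy, fw, fh) in face_cascade.detectMultiScale(roi, 1.1, 5):
            faces_found.append((x + fx, y + fy, fw, fh))
    return faces_found
```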

3.4 Face Alignment and Recognition: Once a player’s face is detected, the face alignment
module adjusts the orientation and position of the face, ensuring that it is consistently
centered and symmetrical. This step is crucial because it minimizes variations in facial
positioning, which can lead to recognition errors, especially when players are viewed from
different angles or under diverse lighting. For alignment, the system uses a facial landmark
estimation method that marks key facial points, such as the eyes, nose, and mouth, aligning
the face for optimal processing. The face recognition module then employs a PAL-based
algorithm, specifically chosen for its robustness to low resolution, occlusions, and varying
lighting. This algorithm compares the detected face with a preloaded database of player
images and uses Adaptive Boosting (AdaBoost) with Linear Discriminant Analysis (LDA)
to improve classification accuracy. The system is thus capable of accurately identifying
players, even under challenging conditions, such as shadows or non-frontal poses, which
are common in live cricket broadcasts.
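
The sketch below illustrates the alignment and classification ideas with standard tools: a
simple eye-based rotation for alignment, and LDA followed by AdaBoost for classification.
It is only a stand-in for the PAL-based model described above, whose exact formulation is
not reproduced here; all names and parameters are illustrative.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier

def align_by_eyes(face_gray, left_eye, right_eye, size=(100, 140)):
    """Rotate so the eye line is horizontal, then resize. The (x, y) landmarks
    come from any facial landmark estimator (the FLE step mentioned above)."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(face_gray, rot,
                             (face_gray.shape[1], face_gray.shape[0]))
    return cv2.resize(rotated, size)

def train_recognizer(X_train, y_train):
    """X_train: flattened, preprocessed face crops; y_train: player labels."""
    lda = LinearDiscriminantAnalysis()
    features = lda.fit_transform(X_train, y_train)
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(features, y_train)
    return lda, clf

def recognize(lda, clf, face_vector):
    """Predict the player identity for one flattened face crop."""
    return clf.predict(lda.transform(face_vector.reshape(1, -1)))[0]
```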


3.5 Data Display and Augmented Reality Integration: Upon successful recognition, the
system retrieves detailed information about the identified player, including their name,
age, nationality, and relevant performance statistics, from a player database. This
information is displayed using an AR interface, allowing viewers to access player details
seamlessly as they watch the game. The AR overlay is designed to be compatible with
both traditional television broadcasts and mobile devices, such as smartphones or tablets,
enabling viewers to point their devices at the screen or field and instantly receive player
information. This real-time AR integration not only enhances the viewer’s engagement by
making game insights more accessible but also opens new avenues for interactive sports
broadcasting, where viewers can explore detailed player analytics as the game progresses.
This innovative approach has the potential to transform how fans interact with live
sports, providing a dynamic, data-rich environment that can adapt to the needs of
modern, tech-savvy audiences.
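
As a simple sketch of this display step, a recognized face can be boxed and annotated
directly on the broadcast frame; the player database and its fields below are hypothetical
placeholders, not the system's actual data schema.

```python
import cv2

# Hypothetical player database keyed by the recognizer's output label.
PLAYER_DB = {
    "player_07": {"name": "Player Name", "country": "IND", "batting_avg": "52.3"},
}

def overlay_player_info(frame_bgr, box, player_id):
    """Draw a box around the recognized face and overlay basic player data."""
    x, y, w, h = box
    cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
    info = PLAYER_DB.get(player_id)
    if info is not None:
        label = f"{info['name']} | {info['country']} | Avg {info['batting_avg']}"
        cv2.putText(frame_bgr, label, (x, max(y - 10, 20)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame_bgr
```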


CHAPTER 4
SYSTEM ARCHITECTURE

The architecture of the proposed system is structured into interconnected modules,
each responsible for a specific task from image capture to data display. First, image
acquisition captures frames from live feeds, and preprocessing standardizes these
images for consistency in lighting and resolution. Next, the player detection module
uses the AdaBoost algorithm to identify players on the field, and the face detection
module isolates facial areas for accuracy in recognition. Face alignment ensures each
detected face is centered, enhancing the reliability of the face recognition stage, where
a PAL-based model, supported by AdaBoost and LDA, matches the face with entries
in the player database. Finally, the augmented reality (AR) data display overlays
player information on the live video feed, creating a real-time, interactive experience
for viewers across broadcast screens and mobile devices. This architecture enables
smooth, real-time player identification and data display, enriching the viewer
experience.

Fig. 1. System architecture diagram of the proposed system.
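
A minimal loop tying these modules together on a live feed might look like the sketch
below. It reuses the illustrative helpers sketched in Chapter 3 (detect_player_faces,
recognize, overlay_player_info) and is an assumption about how the stages are chained,
not the report's actual implementation:

```python
import cv2

def run_pipeline(video_source, lda, clf):
    """Capture frames, detect and recognize player faces, and overlay data."""
    cap = cv2.VideoCapture(video_source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in detect_player_faces(frame):
            crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
            # Same preprocessing as the recognizer's training data.
            face = cv2.resize(crop, (100, 140)).astype("float32").ravel() / 255.0
            player_id = recognize(lda, clf, face)
            overlay_player_info(frame, (x, y, w, h), player_id)
        cv2.imshow("AR overlay", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```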


CHAPTER 5
RESULT ANALYSIS

5.1 Player Detection Accuracy: The system's player detection module, based on the AdaBoost
algorithm, achieved high accuracy in identifying players across varying conditions. Tests
showed that the module could detect players effectively in diverse scenarios, including
different lighting, occlusions, and image resolutions. Results indicated that AdaBoost’s
performance remained robust even with small or partially occluded players, which is essential
for live sports environments. This accuracy in detection provides a solid foundation for the
subsequent face detection and recognition stages.

5.2 Face Detection Performance: The face detection module demonstrated high effectiveness
in identifying and isolating facial regions within the detected players. It was particularly
adept at handling images with moderate pose variations and limited occlusions. However,
the system’s performance slightly declined when dealing with extreme occlusions or blurred
faces. Overall, the face detection module successfully provided clean, isolated facial images
for recognition, achieving reliable results for most standard broadcast conditions.

5.3 Face Recognition Accuracy: The face recognition component, using the PAL-based model,
showed a high rate of successful identification under varied conditions, including low
resolution, non-uniform lighting, and minor pose variations. The model achieved above-
average accuracy, even when trained with a limited dataset, and was able to match faces to
database entries consistently. Although accuracy declined slightly with extreme angle
variations or severe occlusions, the module’s recognition capability was reliable for typical
game conditions, ensuring accurate identification for a majority of players.


5.4 Real-Time Performance: Execution times were monitored across modules to assess real-time
viability. On average, the player detection and face recognition modules processed frames within a
few milliseconds, achieving near real-time performance on standard hardware. The system maintained
efficient processing for images of up to five players per frame, with only minimal lag under higher
processing loads. This real-time capability is critical for live sports broadcasting, as it allows the
system to keep up with the fast pace of the game without compromising accuracy.

Fig. 2(a) Output of the proposed module.

Fig. 2(b) Execution time of the proposed model.
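
For reference, the kind of per-frame timing summarized in Fig. 2(b) can be reproduced
with a simple wall-clock benchmark like the sketch below, where process_frame is a
hypothetical stand-in for one full detection-plus-recognition pass:

```python
import time
import cv2

def benchmark(video_source, process_frame, max_frames=200):
    """Average the wall-clock time of process_frame over a number of frames."""
    cap = cv2.VideoCapture(video_source)
    times = []
    while cap.isOpened() and len(times) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        start = time.perf_counter()
        process_frame(frame)
        times.append(time.perf_counter() - start)
    cap.release()
    if times:
        avg = sum(times) / len(times)
        print(f"avg {avg * 1000:.1f} ms/frame ({1.0 / avg:.1f} fps)")
```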


5.5 Robustness Under Challenging Conditions: The system’s robustness was tested under
extreme conditions, such as poor lighting, significant occlusions, and varied poses. Results
showed that the player detection and face recognition modules maintained high accuracy with
minor lighting changes and moderate occlusions. However, under severe occlusions or very
low lighting, accuracy slightly decreased. Despite these limitations, the system’s adaptability
to handle such conditions made it suitable for live sports, where such challenges are common.

5.6 Comparative Analysis with State-of-the-Art Models: In a comparative study with other
state-of-the-art face recognition algorithms, including CNN and Capsule Networks, the
proposed system performed competitively. Although CNN showed slightly higher accuracy in
some scenarios, the PAL-based model used here provided faster processing times, making it
more suitable for real-time applications. The system's balance between accuracy and speed,
along with its specialized design for sports broadcasting, highlights its practical advantages
over more computationally intensive models.

Fig. 3(a) Accuracy in terms of each subject's usage of training
Fig. 3(b) Accuracy in terms of each subject's usage of training images


CHAPTER 6

CONCLUSION AND FUTURE SCOPE

6.1. Conclusion

The proposed system demonstrates that automatic player face detection and recognition
can be integrated into live cricket broadcasts through an augmented reality interface. By
combining AdaBoost-based player and face detection with facial landmark alignment and a
PAL-based recognition model supported by LDA, the system identifies players reliably
despite the occlusions, varied lighting, and pose changes typical of outdoor sports footage,
and overlays their names and statistics on the live feed in near real time. The experimental
results showed high detection and recognition accuracy and execution times suited to live
broadcasting, with only a modest drop in accuracy under severe occlusions or very low
lighting. Overall, the system offers a practical way to enrich the viewing experience,
letting fans follow individual players and their performance without interrupting the flow
of the game.

6.2. Future Scope

Future work will focus on improving robustness under extreme occlusions, poor lighting,
and sharp viewing angles, and on expanding the training dataset to cover more players,
venues, and match conditions. The AR interface can be extended with richer, interactive
statistics and tighter synchronization between recognition output and broadcast graphics.
With further refinement, the same pipeline can be adapted to other team sports such as
soccer, basketball, or rugby, where real-time player identification offers similar value,
moving sports broadcasting toward more immersive, data-driven viewing experiences.
