
ABSTRACT

This research paper introduces an innovative approach to developing a gesture-controlled whiteboard system using the open-source libraries OpenCV, MediaPipe, and NumPy. The proposed system leverages computer vision techniques to accurately track and interpret finger movements captured by a camera, enabling users to write on a virtual whiteboard without the need for physical markers. This paper details the technical implementation of the system, including the integration of OpenCV for real-time video capture and processing, MediaPipe for hand detection and landmark estimation, and NumPy for efficient data processing. Writing is a mode of communication that enables us to articulate our ideas and convey information in tangible form. Today, typing and writing are the predominant means of documenting information. Writing involves forming letters or words with a pen, a pencil, or even a finger on a surface such as paper or a touch screen. In recent times, wearable devices have emerged that can detect and interpret our actions through gesture recognition, a computing process that uses mathematical algorithms to interpret human gestures. Computer vision is often employed to monitor the movement of the fingers. Beyond recording information, this technology can also be used to perform tasks such as sending emails or text messages. Moreover, it has the potential to be an immensely useful tool for the hearing-impaired community, as it provides an alternative method of communication that does not rely on sound. With the aid of this technology, individuals who are deaf or hard of hearing can communicate effectively with others, enhancing their overall quality of life.

Keywords

Gesture recognition, air writing, machine learning, MediaPipe, NumPy, OpenCV, computer vision.
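
To make the pipeline described in the abstract concrete, the sketch below shows one minimal way OpenCV, MediaPipe, and NumPy can be combined so that an index fingertip draws on a virtual whiteboard. It assumes the classic mp.solutions.hands interface and ordinary webcam capture; parameter values such as the detection confidence and stroke thickness are illustrative choices, not the paper's actual settings.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)
canvas = None       # NumPy array that serves as the virtual whiteboard
prev_point = None   # previous fingertip position, used to connect strokes

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)            # mirror the image for natural writing
    if canvas is None:
        canvas = np.zeros_like(frame)     # blank board, same size as the frame

    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[
            mp_hands.HandLandmark.INDEX_FINGER_TIP]
        h, w = frame.shape[:2]
        point = (int(tip.x * w), int(tip.y * h))   # fingertip in pixel coordinates
        if prev_point is not None:
            cv2.line(canvas, prev_point, point, (255, 255, 255), 4)
        prev_point = point
    else:
        prev_point = None                 # lift the "pen" when no hand is visible

    # Blend the live camera feed with the drawn strokes.
    cv2.imshow("Virtual whiteboard", cv2.addWeighted(frame, 0.5, canvas, 0.5, 0))
    if cv2.waitKey(1) & 0xFF == 27:       # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A practical system would also need a way to lift the pen while the hand remains in view, for example by drawing only when the index finger is the sole extended finger; the sketch above omits that for brevity.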

Literature Review

Real-time hand gesture recognition is a crucial area of research that has led to the development of techniques for many different applications. One such system, developed by Shomi Khan, M. Elieas Ali, and Sree Sourav Das, uses a skin color identification algorithm to translate American Sign Language (ASL) from real-time video into text. Identifying the hand is challenging because skin tone and hand shape vary from person to person. To overcome this, the system employs two neural networks: the Scalable Color Descriptor (SCD) network first identifies skin pixels in the image, and the Hand Gesture Recognition (HGR) network then extracts the features using two distinct algorithms, fingertip finding and pixel segmentation.

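The SCD and HGR networks described above are learned models and are not reproduced here. Purely as a generic illustration of the skin-pixel identification step, the following sketch thresholds in the YCrCb color space; the Cr/Cb bounds are common heuristic values and an assumption of this sketch, not the classifier of the cited system.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Generic skin-pixel segmentation in the YCrCb color space.
    The bounds below are widely used heuristics, not the learned
    SCD classifier described in the cited work."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 135, 85], dtype=np.uint8)
    upper = np.array([255, 180, 135], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening removes speckle noise so the hand
    # appears as one connected blob.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

The returned binary image marks candidate skin pixels in white, which can then be handed to contour extraction or a downstream gesture classifier.
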
In addition to ASL translation, some computer-vision-based systems use real-time dynamic hand gestures to perform mouse actions such as moving the cursor and left- or right-clicking. S. Belgamwar and S. Agrawal have developed a human-computer interaction (HCI) technique that integrates a camera, an accelerometer, a pair of Arduino microcontrollers, and an ultrasonic distance sensor to capture motions; gestures are recorded by measuring the distance between the hand and the distance sensor.

Another innovative technology is the LED-based hand gesture recognition system developed by Pavitra Ramasamy and Prabhu G. It enables users to form letters or type anything they wish by simply waving a finger over an LED light source. The system tracks the color of the LED to extract the movement of the finger and sketch the letter: the background is rendered black, the tracked object's color is converted to white, and several of these black-and-white frames are stitched together into an image of the drawn letter.

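As a rough illustration of this color-tracking-and-stitching idea (not the cited implementation), the sketch below thresholds each frame for a brightly colored marker and accumulates the resulting binary masks into a single black-and-white image of the drawn letter. The HSV bounds are placeholder values assumed for a bright red LED.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
trail = None   # accumulated black-and-white "drawing" frames

# Placeholder HSV bounds for a bright red LED; any distinctly
# colored marker (e.g. a ping-pong ball) works the same way.
lower = np.array([0, 120, 200])
upper = np.array([10, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)   # tracked object -> white, rest -> black
    if trail is None:
        trail = np.zeros_like(mask)
    trail = cv2.bitwise_or(trail, mask)     # "stitch" frames into one letter image
    cv2.imshow("Drawn letter", trail)
    if cv2.waitKey(1) & 0xFF == 27:         # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```
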
3D hand gesture detection is another area of research that has produced a variety of techniques. Quentin De Smedt, Hazem Wannous, and Jean-Philippe Vandeborre used a skeleton-based model to derive an effective descriptor from the linked joints of the hand skeleton provided by an Intel RealSense depth camera, and they report that this skeleton-based approach outperforms depth-based approaches. Prajakta Vidhate, Revati Khadse, and Saina Rasal developed a virtual paint application that uses ball tracking to follow the hand movement and write on the screen; a glove with a ping-pong ball attached serves as the tracked contour. Finally, Ruimin Lyu, Yuefeng Ze, Wei Chen, and Fei Chen developed an airbrush model that employs the Leap Motion Controller to track the hands and produce an immersive freehand painting experience. These technologies have great potential for a wide range of applications and can significantly improve human-computer interaction.
