
SYNOPSIS

1 Project Group Information


Group Id:

Name of Students:

2 Project Title
Gesture Recognition Based Virtual Mouse and Keyboard

3 Project Sponsorship Details


Sponsorship Company: NA

External Guide: NA

Sponsoring Company Address: NA

4 Problem Statement
• For personal use on computers and laptops we generally rely on a physical
mouse or touchpad, devices invented long ago. In this project the requirement
for such external hardware is eliminated entirely: using human-computer
interaction technology, hand movements and gestures are detected and
translated into mouse movements and mouse events, as in the sketch below.
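
As an illustration of the webcam-only setup, here is a minimal capture loop, assuming OpenCV (cv2) is installed; process() is a placeholder for the gesture detection and mouse-event logic.

import cv2

def process(frame):
    # Placeholder for gesture detection and mouse/keyboard event generation.
    pass

cap = cv2.VideoCapture(0)              # default webcam, no extra hardware
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)         # mirror the image so motion feels natural
    process(frame)
    cv2.imshow("Gesture input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
        break
cap.release()
cv2.destroyAllWindows()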

5 Area of Project
Artificial Intelligence:
Artificial intelligence is the simulation of human intelligence processes
by machines, especially computer systems. Specific applications of AI
include expert systems, natural language processing, speech recognition
and machine vision.

In Artificial Intelligence: A Modern Approach, one of the leading textbooks in
the study of AI, the authors delve into four potential goals or definitions of
AI, which differentiate computer systems on the basis of rationality and of
thinking versus acting:

Human approach:

• Systems that think like humans

• Systems that act like humans

Ideal approach:

• Systems that think rationally

• Systems that act rationally

Alan Turing’s definition would have fallen under the category of “systems that
act like humans.”

In its simplest form, artificial intelligence is a field that combines computer
science and robust datasets to enable problem-solving. It also encompasses the
subfields of machine learning and deep learning, which are frequently
mentioned in conjunction with artificial intelligence.

6 Abstract
• Computer vision has now reached a stage where a computer can identify
its owner using a simple image-processing program. People use this
capability in many aspects of day-to-day life, such as face recognition,
color detection and autonomous cars. In this project, computer vision
is used to create an optical mouse and keyboard driven by hand
gestures. The computer's camera reads the image of the different
gestures performed by a person's hand, and according to the movement
of those gestures the mouse cursor moves and can even perform right
and left clicks. Similarly, keyboard functions can be triggered with
other gestures, such as a one-finger gesture to select a letter and a
four-finger gesture to swipe left and right.
• The system acts as a virtual mouse and keyboard with no wires or external
devices. The only hardware required is a webcam, and the coding is done
in Python on the Anaconda platform. Convex hull defects are first
computed, and an algorithm built on those defect calculations maps
mouse and keyboard functions to the defects. With a set of defects
mapped to mouse and keyboard actions (a minimal code sketch of this
step is given below), the computer recognizes the gesture shown by the
user and acts accordingly.
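
The following is a minimal sketch of the convex-hull-defect step described in the abstract, assuming OpenCV (cv2) and NumPy are available; the skin-colour range and the depth threshold are illustrative assumptions, not tuned values.

import cv2
import numpy as np

def count_defects(frame):
    # Convert to HSV and apply a rough skin-colour mask (assumed range;
    # adjust for lighting and skin tone).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob is taken as the hand
    hull = cv2.convexHull(hand, returnPoints=False)  # hull as indices into the contour
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Keep only deep defects, which roughly correspond to the valleys between fingers.
    return int(np.sum(defects[:, 0, 3] > 10000))

The number of deep defects then acts as a simple gesture code (for example, one valley for a two-finger gesture) that the rest of the program can map to a mouse or keyboard action.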

7 Goals and Objectives


• Goals
• The computer's camera will read the image of the different gestures
performed by a person's hand, and according to the movement of those
gestures the mouse cursor will move and even perform right and left
clicks.

• Objectives

• The basic objective is to develop a virtual mouse and keyboard using the
concepts of hand gesture recognition and image processing, which will
ultimately move the mouse pointer according to the hand gestures (see the
sketch after this list).

• Similarly, with the help of gestures the user can invoke keyboard functions,
which will be defined as per the user's convenience, thereby reducing the
cost of hardware.
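
A minimal sketch of the pointer-moving objective, assuming the pyautogui library and a hand-tracking step (such as the defect counting above) that already gives a fingertip position in camera coordinates; the 640x480 capture size and the right-arrow binding are illustrative assumptions.

import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
CAM_W, CAM_H = 640, 480   # assumed webcam capture resolution

def move_cursor(finger_x, finger_y):
    # Scale the fingertip position from camera space to screen space.
    x = finger_x * SCREEN_W / CAM_W
    y = finger_y * SCREEN_H / CAM_H
    pyautogui.moveTo(x, y)

def swipe_right():
    # Example of a keyboard function bound to a gesture (here the right-arrow key).
    pyautogui.press("right")

Driving both the cursor and the keys through the same library keeps the gesture-to-key mapping configurable per user, as the objective requires.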

8 Relevant mathematics associated with the Project:


• What do we need to find?
• The computer's camera will read the image of the different gestures
performed by a person's hand, and according to the movement of those
gestures the mouse cursor will move and even perform right and left
clicks.
• Input: Live Camera with Gesture
• Output: Gesture recognition

• Set of functions:
• It can perform six different functions: left click, right click, left
movement, right movement, up movement and down movement (a sketch of this
mapping appears at the end of this section).
• Time complexity:
• Space complexity:

• Failure and success conditions:
Failure: the system does not recognize the gesture accurately, or the training
accuracy is too low.
Success: the system recognizes the gesture accurately.
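
A sketch of the six-function mapping listed above, assuming pyautogui performs the actions and that the recognizer outputs one of these gesture labels; the label names and the 20-pixel step are illustrative assumptions.

import pyautogui

STEP = 20  # pixels moved per recognised movement gesture

ACTIONS = {
    "left_click":  lambda: pyautogui.click(button="left"),
    "right_click": lambda: pyautogui.click(button="right"),
    "move_left":   lambda: pyautogui.moveRel(-STEP, 0),
    "move_right":  lambda: pyautogui.moveRel(STEP, 0),
    "move_up":     lambda: pyautogui.moveRel(0, -STEP),
    "move_down":   lambda: pyautogui.moveRel(0, STEP),
}

def perform(gesture):
    # Look up and run the action bound to a recognised gesture label, if any.
    action = ACTIONS.get(gesture)
    if action is not None:
        action()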
9 Names of Conferences / Journals where papers can be
published
1. IEEE Xplore (Institute of Electrical and Electronics Engineers)

2. IRJET (International Research Journal of Engineering and Technology)

3. Google Scholar

10 Review / Literature Survey of Conference/Journal Papers Supporting the Project Idea

Sr. No. 1
Paper title: Immersive Gesture Interfaces for 3D Map Navigation in HMD-Based Virtual Environments
Author name: Yeasom Lee, Wonjae Choi
Year of publication: 2018
Problem solved in this paper: Virtual reality is a technology that provides users with a virtual 3D environment created on a computer and at the same time stimulates the human senses to make them feel immersion.
Technique used: 3D geometry; various studies have been conducted to improve realism through real-time interaction with the user's actual motion in virtual space.
Future scope: By using a pose estimation algorithm we will improve the accuracy of our system.

Sr. No. 2
Paper title: Personal Gesture-Driven Virtual Walkthrough Systems
Author name: Ling-Erl Cheng, Hung-Ming Wang
Year of publication: 2007
Problem solved in this paper: New digital interactive and multimedia technologies are challenging the traditional role of museums and galleries that provide artistic information and learning in modern life. The text panel, printed catalogue, ...
Technique used: Face detection system.
Future scope: By using a pose estimation algorithm we will improve the accuracy of our system.

Sr. No. 3
Paper title: Detecting Centroid for Hand Gesture Recognition using Morphological Computations
Author name: Mrs. A. V. Dehankar (Priyadarshini College of ...)
Year of publication: 2017
Problem solved in this paper: With the evolution of computing technologies, current user-computer interaction devices like the mouse, keyboard, joysticks, pen etc. are getting replaced by touch screens and hand gestures.
Technique used: Hand gesture recognition, centroid detection, morphological computations.
Future scope: Using a machine learning technique to solve the problem.

Sr. No. 4
Paper title: A New 3D Viewer System Based on Hand Gesture Recognition for Smart Interaction
Author name: Muhammad Jehanzeb (Department of Computer Science)
Year of publication: 2020
Problem solved in this paper: The visualization of 3D models is a scorching topic in computer vision and human-computer interaction. The demand for 3D models has increased due to their heavy involvement in animated characters, virtual reality and augmented reality. Interacting with 3D models with the help of a mouse and keyboard is very hectic, less efficient and complex.
Technique used: 3D interaction, multiple views, hand gesture, Microsoft Kinect.
Future scope: By using a pose estimation algorithm we will improve the accuracy of our system.

Sr. No. 5
Paper title: Finger Gesture-Based Natural User Interface for 3D Highway Alignment Design in Virtual Environment
Author name: Lidun Long, Xinsha Fu, Honglei Zhu, Ting Ge
Year of publication: 2015
Problem solved in this paper: Current methods design the highway horizontal and vertical alignments separately in 2D CAD through keyboard and mouse, then combine the alignments into a 3D space curve to obtain the highway centerline. They totally violate the highway's nature because the highway ...
Technique used: Virtual environment; finger gesture; navigation; 3D object manipulation; highway centerline.
Future scope: Using an artificial intelligence technique.

Sr. No. 6
Paper title: Lossless Multitasking: Using 3D Hand Gestures Embedded in Mouse Devices
Author name: Franz J, Menin A, Nedel L
Year of publication: 2016
Problem solved in this paper: Desktop-based operating systems allow the use of many applications concurrently, but the frequent switching between two or more applications distracts the user.
Technique used: 3D hand gestures and 2D gestures.
Future scope: By using the Python programming language we will develop a desktop application.

Sr. No. 7
Paper title: Qualitative Analysis of a Multimodal Interface System using Speech/Gesture
Author name: Muhammad Zeeshan Baig and Manolya Kavakli
Year of publication: 2018
Problem solved in this paper: The aim of the system is to analyse designer behaviour and the quality of interaction in a virtual reality environment. The system has the basic functionality for 3D object modelling. The users performed two sets of experiments.
Technique used: Speech, gesture, MMIS, 3D modelling, CAD, object manipulation.
Future scope: By using a pose estimation algorithm we will improve the accuracy of our system.

Sr. No. 8
Paper title: VSS: The Virtual Sensor System
Author name: M. Pandit (Quicken Loans, Detroit, Michigan, USA)
Year of publication: 2013
Problem solved in this paper: The human-computer interface still remains a main challenge for successful communication in the ever-changing world of technology, where touch screen devices, tablets, smartphones ...
Technique used: Web services, middleware, HTTP, XML, sensors, gesture recognition.
Future scope: By using the Python programming language we will develop a desktop application.

Sr. No. 9
Paper title: Using Gestures to Interactively Modify Turbine Blades in a Virtual Environment
Author name: Stephan Rogge, Philipp Amtsfeld
Year of publication: 2012
Problem solved in this paper: Working in highly immersive Virtual Environments (VEs) demands special interaction techniques. If the user needs to interact three-dimensionally, traditional input devices such as the keyboard ...
Technique used: Virtual environment, interactive blade design, GPU flow simulation, Kinect, OpenNI, NITE.
Future scope: By using a pose estimation algorithm we will improve the accuracy of our system.

Sr. No. 10
Paper title: VECAR: Virtual English Classroom with Markerless Augmented Reality and Intuitive Gesture Interaction
Author name: Mau-Tsuen Yang, WanChe Liao, Ya-Chun Shih
Year of publication: 2013
Problem solved in this paper: Augmented Reality (AR) is a technology that merges virtual objects with real-world images seamlessly. The power of real-time interaction and complete immersion makes AR ideal for language learning in terms of exposure and motivation.
Technique used: English teaching; markerless augmented reality; 3D gesture interaction.
Future scope: By using a pose estimation algorithm we will improve the accuracy of our system.
11 Plan of Project Execution

NO   TASK                                                         DURATION (Days/Months)   START DATE   END DATE
1    Group Formation
2    Decide Area of Interest
3    Search Topic
4    Topic Selection
5    Sanction Topic
6    Search Related Information
7    Understanding Concept
8    Search Essential Documents (IEEE & White Paper, Software)
9    Problem Definition
10   Literature Survey
11   SRS
12   Project Planning
13   Modeling & Design
14   Technical Specification
15   PPT
