"Gesture Recognition": A Final Report On
"Gesture Recognition": A Final Report On
FINAL REPORT ON
“GESTURE RECOGNITION”
SUBMITTED BY
Sunny Dwivedi
Bhaskar Pal
Ajit Yadav
TO
DEPARTMENT OF E & TC ENGINEERING
ARMY INSTITUTE OF TECHNOLOGY
DIGHI HILLS, PUNE 411015
2009-2010
TABLE OF CONTENTS

4) Background
4.1) Object Recognition
4.1.1) Large Object Recognition
4.1.2) Shape Recognition
5) Digital Image Processing
6) Neural Network
6.1) Neuron Model
6.1.1) Simple Neuron
6.1.2) Transfer Function
6.1.3) Supervised Learning
6.1.4) Unsupervised Learning
6.2) Advantages
6.3) Limitations
6.4) Single Perceptron
6.4.1) Perceptron Convergence Algorithm
6.4.2) Linearly Separable Only
7) MATLAB
8) Approach
8.1) Image Database
8.2) Perceptron Implementation
8.2.1) Learning Rules
8.2.2) Linear Classification
9) Modules
9.1) Feature Extraction
12) Conclusion
13) Source Code
13.1) Feature Extraction
13.2) Neural Network Training
13.3) Testing
14) Application
15) References
1. ABSTRACT
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand-gesture recognition.
Using a neural network, a simple and fast algorithm will be developed to work on a workstation. It will recognize static hand gestures, namely a subset of American Sign Language (ASL). A pattern recognition system will be used, employing a transform that converts an image into a feature vector, which will then be compared with the feature vectors of a training set of gestures. The final system will be implemented with a Perceptron network.
2. PROJECT OBJECTIVE
The scope of this project is to create a method to recognize hand gestures, based on a pattern recognition technique developed by McConnell, employing histograms of local orientation. The orientation histogram will be used as a feature vector for gesture classification and interpolation.
A high priority for the system is to be simple, without making use of any special hardware. All the computation should occur on a workstation or PC; special hardware would be used only to digitize the image (scanner or digital camera).
3. INTRODUCTION
Since the introduction of the most common computer input devices, not a lot has changed. This is probably because the existing devices are adequate. It is also only now, with computers so tightly integrated into everyday life, that new applications and hardware are constantly being introduced. The means of communicating with computers at the moment are limited to keyboards, mice, light pens, trackballs, keypads, etc. These devices have grown to be familiar but inherently limit the speed and naturalness with which we interact with the computer.
The computer industry has followed Moore's Law since the mid-1960s, and powerful machines are now built with ever more peripherals. Vision-based interfaces are feasible, and at the present moment the computer is able to "see". Users are therefore allowed richer and more user-friendly man-machine interaction. This can lead to new interfaces that will allow the deployment of commands that are not possible with the current input devices; plenty of time will be saved as well. Recently, there has been a surge of interest in recognizing human hand gestures. Hand-gesture recognition has various applications such as computer games, machinery control (e.g. a crane), and complete mouse replacement. One of the most structured sets of gestures belongs to sign language, in which each gesture has an assigned meaning (or meanings).
Computer recognition of hand gestures may provide a more natural human-computer interface, allowing
people to point, or rotate a CAD model by rotating their hands. Hand gestures can be classified
in two categories: static and dynamic. A static gesture is a particular hand configuration and
pose, represented by a single image. A dynamic gesture is a moving gesture, represented by a
sequence of images. We will focus on the recognition of static images.
Interactive applications pose particular challenges. The response time should be very fast. The
user should sense no appreciable delay between when he or she makes a gesture or motion and
when the computer responds. The computer vision algorithms should be reliable and work for
different people.
There are also economic constraints: the vision-based interfaces will be replacing existing ones,
which are often very low cost. A hand-held video game controller and a television remote control
each cost about $40. Even for added functionality, consumers may not want to spend more.
When additional hardware is needed, the cost is considerably higher. Academic and industrial researchers have recently been focusing on analyzing images of people.
ASL is distinct from other national sign languages, such as the Sign Language of Sweden, and it has its own grammar that is different from English. ASL consists of approximately 6000 gestures of common words, with finger spelling used to communicate obscure words or proper nouns. Finger spelling uses one hand and 26 gestures to communicate the 26 letters of the alphabet.
Another interesting characteristic, which will be ignored by this project, is the ability that ASL offers to describe a person, place or thing and then point to a place in space to temporarily store it for later reference.
ASL uses facial expressions to distinguish between statements, questions and directives: the eyebrows are raised for a question, held normal for a statement, and furrowed for a directive. Although there has been considerable work and research on facial feature recognition, facial features will not be used to aid recognition in the task addressed here. This would be feasible in a full real-time ASL dictionary.
4. BACKGROUND
Research on hand gestures can be classified into three categories. The first category, glove based
analysis, employs sensors (mechanical or optical) attached to a glove that transduces finger
flexions into electrical signals for determining the hand posture. The relative position of the hand
is determined by an additional sensor. This sensor is normally a magnetic or an acoustic sensor
attached to the glove. For some data glove applications, look-up table software toolkits are
provided with the glove to be used for hand posture recognition. The second category, vision
based analysis, is based on the way human beings perceive information about their surroundings,
yet it is probably the most difficult to implement in a satisfactory way. Several different
approaches have been tested so far. One is to build a three-dimensional model of the human
hand. The model is matched to images of the hand by one or more cameras, and parameters
corresponding to palm orientation and joint angles are estimated. These parameters are then used
to perform gesture classification. A hand gesture analysis system based on a three-dimensional
hand skeleton model with 27 degrees of freedom was developed by Lee and Kunii. They
incorporated five major constraints based on the human hand kinematics to reduce the model
parameter space search. To simplify the model matching, specially marked gloves were used.
The third category, analysis of drawing gestures, usually involves the use of a stylus as an input
device. Analysis of drawing gestures can also lead to recognition of written text. The vast
majority of hand gesture recognition work has used mechanical sensing, most often for direct
manipulation of a virtual environment and occasionally for symbolic communication. Sensing
the hand posture mechanically has a range of problems, however, including reliability, accuracy
and electromagnetic noise. Visual sensing has the potential to make gestural interaction more
practical, but potentially embodies some of the most difficult problems in machine vision. The
hand is a non-rigid object and, even worse, self-occlusion is very common.
Full ASL recognition systems (words, phrases) incorporate data gloves. Takashi and Kishino
discuss a Data glove-based system that could recognize 34 of the 46 Japanese gestures (user
dependent) using a joint angle and hand orientation coding technique. From their paper, it seems
the test user made each of the 46 gestures 10 times to provide data for principal component and
cluster analysis. A separate test was created from five iterations of the alphabet by the user, with
each gesture well separated in time. While these systems are technically interesting, they suffer
from a lack of training. Excellent work has been done in support of machine sign language
recognition by Sperling and Parish, who have done careful studies on the bandwidth necessary
for a sign conversation using spatially and temporally sub-sampled images. Point light
experiments (where “lights” are attached to significant locations on the body and just these
points are used for recognition), have been carried out by Poizner. Most systems to date study isolated, static gestures; in most cases these are finger-spelling signs.
In some interactive applications, the computer needs to track the position or orientation of a hand
that is prominent in the image. Relevant applications might be computer games, or interactive
machine control. In such cases, a description of the overall properties of the image may be
adequate. Image moments, which are fast to compute, provide a very coarse summary of global
averages of orientation and position. If the hand is on a uniform background, this method can
distinguish hand positions and simple pointing gestures.
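To make the idea of image moments concrete, the following MATLAB sketch computes the centroid and dominant orientation of a grayscale hand image from its first and second central moments. The file name is only an illustrative assumption; the formulas are the standard moment definitions.

I = double(rgb2gray(imread('hand.jpg')));   % assumed input image
[rows, cols] = size(I);
[X, Y] = meshgrid(1:cols, 1:rows);

m00 = sum(I(:));                            % zeroth moment (total intensity)
xc  = sum(sum(X .* I)) / m00;               % centroid, x coordinate
yc  = sum(sum(Y .* I)) / m00;               % centroid, y coordinate

mu20 = sum(sum((X - xc).^2 .* I)) / m00;    % central second moments
mu02 = sum(sum((Y - yc).^2 .* I)) / m00;
mu11 = sum(sum((X - xc) .* (Y - yc) .* I)) / m00;

theta = 0.5 * atan2(2*mu11, mu20 - mu02);   % dominant orientation, in radians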
The large-object-tracking method makes use of a low-cost detector/processor to quickly calculate
moments. This is called the artificial retina chip. This chip combines image detection with some
low-level image processing (named artificial retina by analogy with those combined abilities of
the human retina). The chip can compute various functions useful in the fast algorithms for
interactive graphics applications.
Most applications, such as recognizing particular static hand signals, require a richer description of the shape of the input object than image moments provide. If the hand signals fall into a predetermined set, and the camera views a close-up of the hand, we may use an example-based approach, combined with a simple method to analyze hand signals, called orientation histograms.
These example-based applications involve two phases: training and running. In the training
phase, the user shows the system one or more examples of a specific hand shape. The computer
forms and stores the corresponding orientation histograms. In the run phase, the computer
compares the orientation histogram of the current image with each of the stored templates and
selects the category of the closest match, or interpolates between templates, as appropriate. This
method should be robust against small differences in the size of the hand but probably would be
sensitive to changes in hand orientation.
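The report does not spell out the histogram computation at this point, so the following MATLAB sketch only illustrates the idea: estimate local gradients, keep the gradient direction wherever the edge strength is significant, and accumulate those directions into a fixed number of bins. The file name, bin count and threshold are illustrative assumptions.

I  = double(rgb2gray(imread('gesture.jpg')));  % assumed input image
dx = conv2(I, [-1 0 1],  'same');              % horizontal gradient
dy = conv2(I, [-1 0 1]', 'same');              % vertical gradient
mag = sqrt(dx.^2 + dy.^2);                     % local edge strength
ang = atan2(dy, dx);                           % local orientation, -pi..pi

nbins  = 36;                                   % assumed number of histogram bins
strong = mag > 0.1 * max(mag(:));              % ignore weak gradients (assumed threshold)
edges  = linspace(-pi, pi, nbins + 1);
h = histc(ang(strong), edges);                 % orientation histogram
h = h(1:nbins) / sum(h(1:nbins));              % normalize so overall contrast does not matter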
5. NEURAL NETWORK
Neural networks are composed of simple elements operating in parallel. These elements are
inspired by biological nervous systems. As in nature, the network function is determined largely
by the connections between elements. We can train a neural network to perform a particular
function by adjusting the values of the connections (weights) between elements. Commonly
neural networks are adjusted, or trained, so that a particular input leads to a specific target
output. Such a situation is shown in fig. (2): the network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically many such input/target pairs are used, in this supervised learning (a training method studied in more detail later in this chapter), to train a network.
Neural networks have been trained to perform complex functions in various fields of application
including pattern recognition, identification, classification, speech, vision and control systems.
Today neural networks can be trained to solve problems that are difficult for conventional
computers or human beings. The supervised training methods are commonly used, but other
networks can be obtained from unsupervised training techniques or from direct design methods.
Unsupervised networks can be used, for instance, to identify groups of data. Certain kinds of
linear networks and Hopfield networks are designed directly. In summary, there are a variety of
kinds of design and learning techniques that enrich the choices that a user can make.
The field of neural networks has a history of some five decades but has found solid application
only in the past fifteen years, and the field is still developing rapidly. Thus, it is distinctly
different from the fields of control systems or optimization where the terminology, basic
mathematics, and design procedures have been firmly established and applied for many years.
A neuron with a single scalar input and no bias is shown on the left below.
Figure 3: Neuron
The scalar input p is transmitted through a connection that multiplies its strength by the scalar
weight w, to form the product wp, again a scalar. Here the weighted input wp is the only
argument of the transfer function f, which produces the scalar output a. The neuron on the right
has a scalar bias, b. You may view the bias as simply being added to the product wp as shown by
the summing junction or as shifting the function f to the left by an amount b. The bias is much
like a weight, except that it has a constant input of 1. The transfer function net input n, again a
scalar, is the sum of the weighted input wp and the bias b. This sum is the argument of the
transfer function f. Here f is a transfer function, typically a step function or a sigmoid function, that takes the argument n and produces the output a. Examples of various transfer functions are given in the next section. Note that w and b are both adjustable scalar parameters of the neuron. The central idea of neural networks is that such parameters can be adjusted so that the network exhibits some desired or interesting behavior.
Thus, we can train the network to do a particular job by adjusting the weight or bias parameters,
or perhaps the network itself will adjust these parameters to achieve some desired end. All of the
neurons in the program written in MATLAB have a bias. However, you may omit a bias in a
neuron if you wish.
The hard limit transfer function shown above limits the output of the neuron to either 0, if the net
input argument n is less than 0, or 1, if n is greater than or equal to 0. This is the function used
for the Perceptron algorithm written in MATLAB to create neurons that make a classification
decision.
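As a minimal illustration of the neuron just described, the following MATLAB lines compute the output of a single neuron with the hard-limit transfer function; the numeric values are arbitrary examples.

p = 2.5;                 % scalar input (arbitrary example)
w = 1.3;                 % scalar weight
b = -0.8;                % scalar bias, driven by a constant input of 1

n = w*p + b;             % net input to the transfer function
a = hardlim(n);          % 1 if n >= 0, otherwise 0 (same as: a = double(n >= 0))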
There are two modes of learning: Supervised and unsupervised. Below there is a brief
description of each one to determine the best one for our problem.
Supervised learning is based on the system trying to predict outcomes for known examples and is
a commonly used training method. It compares its predictions to the target answer and "learns"
from its mistakes. The data start as inputs to the input layer neurons. The neurons pass the inputs
along to the next nodes. As inputs are passed along, the weighting, or connection, is applied and
when the inputs reach the next node, the weightings are summed and either intensified or
weakened. This continues until the data reach the output layer where the model predicts an
outcome. In a supervised learning system, the predicted output is compared to the actual output
for that case. If the predicted output is equal to the actual output, no change is made to the
weights in the system. But, if the predicted output is higher or lower than the actual outcome in
the data, the error is propagated back through the system and the weights are adjusted
accordingly.
This feeding of errors backwards through the network is called "back-propagation". Both the Multilayer Perceptron and the Radial Basis Function are supervised learning techniques; the Multilayer Perceptron uses back-propagation, while the Radial Basis Function is a feed-forward approach that trains on a single pass of the data.
Pattern recognition is a powerful technique for harnessing the information in the data and
generalizing about it. Neural nets learn to recognize the patterns which exist in the data
set.
There are a few other limitations that should be understood. First, it is difficult to extract rules
from neural networks. This is sometimes important to people who have to explain their answer to
others and to people who have been involved with artificial intelligence, particularly expert
systems which are rule-based.
As with most analytical methods, you cannot just throw data at a neural net and get a good
answer. You have to spend time understanding the problem or the outcome you are trying to
predict. And, you must be sure that the data used to train the system are appropriate and are
measured in a way that reflects the behavior of the factors. If the data are not representative of
the problem, neural computing will not produce good results. This is a classic situation where
"garbage in" will certainly produce "garbage out."
Finally, it can take time to train a model on a very complex data set. Neural techniques are computationally intensive and will be slow on low-end PCs or machines without math coprocessors. It is important to remember, though, that the overall time to results can still be faster than other data-analysis approaches, even when the system takes longer to train. Processing speed alone is not the only factor in performance, and neural networks do not require the programming, debugging or assumption-testing time that other analytical approaches do.
Each of the inputs and the bias is connected to the main perceptron by a weight. A weight is
generally a real number between 0 and 1. When the input number is fed into the perceptron, it is
multiplied by the corresponding weight. After this, the weights are all summed up and fed
through a hard-limiter. Basically, a hard-limiter is a function that defines the threshold values for
'firing' the perceptron. For example, the limiter could be the hard-limit function described earlier: it returns 1 if the sum is greater than or equal to 0, and 0 otherwise. So, if the sum of the inputs multiplied by the weights is -2, the limiting function would return 0; if the sum were 3, the function would return 1.
Now, the way a perceptron learns to distinguish patterns is through modifying its weights, so the concept of a learning rule must be introduced. In the perceptron, the most common form of learning is to adjust the weights by the difference between the desired output and the actual output. Mathematically, this can be written as

w(new) = w(old) + η (d − y) x

where η is the learning rate, d the desired output, y the actual output and x the input.
Below are the variables and parameters used in the convergence algorithm:

x(n) = input vector
w(n) = weight vector (the threshold/bias is treated as an extra weight with constant input 1)
y(n) = actual response
d(n) = desired response
η = learning-rate parameter, a positive constant

Step 1: Initialization
Set w(0) = 0. Then perform the following computations for time steps n = 1, 2, ...

Step 2: Activation
At time n, activate the perceptron by applying the continuous-valued input vector x(n) and desired response d(n).

Step 3: Computation of actual response
Compute the actual response y(n) = f[wᵀ(n) x(n)], where f is the hard-limit transfer function.

Step 4: Adaptation of the weight vector
Update the weight vector: w(n+1) = w(n) + η [d(n) − y(n)] x(n).

Step 5: Continuation
Increment time n by one unit and go back to Step 2. (A MATLAB sketch of this loop follows.)
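A minimal MATLAB sketch of this procedure, assuming the training samples are stored as columns of X with the desired outputs in the row vector D; the learning rate, epoch count and example data are illustrative.

X   = [0 0 1 1; 0 1 0 1];               % example inputs, one column per sample
D   = [0 0 1 0];                        % desired outputs (here: x1 AND NOT x2)
eta = 0.5;                              % learning rate (assumed)
w   = zeros(2,1); b = 0;                % Step 1: initialization

for epoch = 1:20                        % repeat until the weights stop changing
    for n = 1:size(X,2)                 % Step 2: present each training sample
        y = double(w'*X(:,n) + b >= 0); % Step 3: actual response (hard limiter)
        w = w + eta*(D(n) - y)*X(:,n);  % Step 4: adapt the weights ...
        b = b + eta*(D(n) - y);         % ... and the bias
    end                                 % Step 5: next sample / next epoch
end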
y = x1 AND (NOT x2)

x1   x2   y
 0    0   0
 0    1   0
 1    0   1
 1    1   0
Perceptrons can only solve problems where the solutions can be divided by a line (or hyperplane)
- this is called linear separation. To explain the concept of linear separation further, let us look at
the function shown above, which reads 'x1 and (not x2)'. Let us assume that we run the function through a perceptron, and the weights converge at 0 for the bias and 2, -2 for the inputs. If we calculate the net value (the weighted sum) for each input pair we get 0, -2, 2 and 0 respectively, so only the input (1, 0) gives a positive net value and fires the perceptron. So the perceptron correctly draws a line that divides the two groups of points. Note that it doesn't have to be a line only; it can be a hyperplane dividing points in 3-D space, or beyond. This is where the power of perceptrons lies, but also where their limitations lie. For example, perceptrons cannot solve the XOR binary function.
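The weights quoted above can be checked with a few MATLAB lines. Because the separating line passes through two of the points, the output-1 class is taken here as a net value strictly greater than zero.

w = [2; -2];  b = 0;                 % converged weights quoted in the text
X = [0 0 1 1; 0 1 0 1];              % the four input pairs, one per column
net_val = w'*X + b                   % weighted sums:    0  -2   2   0
y = double(net_val > 0)              % perceptron output: 0   0   1   0  =  x1 AND (NOT x2)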
Non-linearly separable problems can be solved by a perceptron provided an appropriate set of first-layer processing elements exists. The first layer of fixed processing units computes a set of functions whose values depend on the input pattern. Even though the set of input patterns may not be separable when viewed in the space of the original input variables, it can easily be the case that the same set of patterns becomes linearly separable in the space computed by the first layer. We will later see how such a technique is used in the perceptron's development, so that the chance of the patterns becoming linearly separable after the preprocessing layer is maximized.
6. MATLAB
Typical uses of MATLAB include:
Algorithm development
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including graphical user interface building
MATLAB is an interactive system whose basic data element is an array that does not require
dimensioning. This allows you to solve many technical computing problems, especially those
with matrix and vector formulations, in a fraction of the time it would take to write a program in
a scalar non-interactive language such as C or Fortran. MATLAB has evolved over a period of
years with input from many users. In university environments, it is the standard instructional tool
for introductory and advanced courses in mathematics, engineering, and science. In industry,
MATLAB is the tool of choice for high-productivity research, development, and analysis.
The reason that I have decided to use MATLAB for the development of this project is its
toolboxes. Toolboxes allow you to learn and apply specialized technology. Toolboxes are
comprehensive collections of MATLAB functions (M-files) that extend the MATLAB
environment to solve particular classes of problems. Among others, MATLAB includes image processing and neural network toolboxes.
7. APPROACH
7.1 Image database
The starting point of the project was the creation of a database with all the images that would be
used for training and testing. The image database can have different formats: images can be hand-drawn, digitized photographs, or renderings of a 3-D hand model. Photographs were used, as they are the most realistic approach.
Images came from two main sources: various ASL databases on the Internet, and photographs I took with a digital camera. This meant that they had different sizes, different resolutions and sometimes almost completely different shooting angles. Images belonging to the last case were very few, and they were discarded, as there was no chance of classifying them correctly.
Two operations were carried out on all of the images: they were converted to grayscale and the background was made uniform. The Internet databases already had uniform backgrounds, but the ones I took with the digital camera had to be processed in Adobe Photoshop.
The database itself was constantly changing throughout the project, as it was the database that would decide the robustness of the algorithm. Therefore, it had to be built in such a way that different situations could be tested, and the thresholds above which the algorithm no longer classified correctly could be determined.
The construction of such a database is clearly dependent on the application. If the application is, for example, a crane controller operated by the same person for long periods, the algorithm doesn't have to be robust to images of different people; in this case noise and motion blur should be tolerable. The applications can take many forms, and since I wasn't developing for a specific one, I have tried to experiment with many alternatives.
We can see an example below: in the first row are the training images; in the second, the testing images.
For most of the gestures, the training set originates from a single gesture image. Those images were enhanced in Adobe Photoshop using various filters. The reason for this is that I wanted the algorithm to be very robust for images from the same database; if a misclassification were to happen, it would preferably be for unknown images.
Test Set:
The number of test images varies for each gesture. There is no reason to keep this number constant: some gestures tolerate much more variance, and images from new databases can be tested extensively, while other gestures are restricted to fewer testing images.
7.2 Perceptron implementation

We will define a learning rule as a procedure for modifying the weights and biases of a network. (This procedure may also be referred to as a training algorithm.) The learning rule is applied to train the network to perform some particular task. Learning rules in the MATLAB toolbox fall into two broad categories: supervised learning and unsupervised learning.
Those two categories were described in detail in the previous chapter. The algorithm has been developed using supervised learning. In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behavior, each consisting of an input to the network and the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets. The perceptron learning rule falls into this supervised learning category.
Adapt is another MATLAB function for training a neural network. I was using it at the first stages, when I was using a back-propagation network. The main difference is that with train only batch training (updating the weights after presenting the complete data set) can be used, while with adapt you have the choice of batch and incremental training (updating the weights after the presentation of each single training sample). Adapt supports far fewer training functions. Since I didn't have a very good reason to go for incremental training, I decided to use train, which is more flexible.
Finally, train applies the inputs to the new network, calculates the outputs, compares them to the associated targets, and calculates a mean square error. If the error goal is met, or if the maximum number of epochs is reached, the training is stopped and train returns the new network and a training record; otherwise train goes through another epoch. The LMS algorithm converges when this procedure is executed.
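To illustrate how train is typically called in the Neural Network Toolbox of that period, the sketch below creates a small feed-forward network and trains it in batch mode on the features produced by Module 1. The hidden-layer size, transfer functions, goal and epoch count are illustrative placeholders, not the project's final settings.

load total_feature_save_all                       % feature_extra and data_target from Module 1
P = feature_extra';                               % inputs: one feature vector per column
T = data_target;                                  % targets: one column per sample

net = newff(minmax(P), [10 size(T,1)], {'logsig' 'purelin'}, 'traingdx');  % assumed sizes
net.trainParam.epochs = 500;                      % placeholder stopping criteria
net.trainParam.goal   = 1e-2;

[net, tr] = train(net, P, T);                     % batch training: whole set per weight update
A = sim(net, P);                                  % network outputs after training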
8. MODULES
8.1 MODULE 1
FEATURE EXTRACTION
STEP 1:
The first thing for the program to do is to read the image database. A for loop is used to read an
entire folder of images and store them in MATLAB’s memory. The folder is selected by the user
from menus.
STEP 2:
The next step is to convert the images from RGB to grayscale. This is important because we do not need the colour information of the images.
STEP 3:
Then all of the images are resized.
STEP 4:
The next step is EDGE DETECTION. We have used the CANNY EDGE DETECTION METHOD for this purpose.
STEP 5:
Smooth the image to reduce the number of components.
STEP 6:
Now, after extracting the features of the images, store all of the features with
save total_feature_save_all feature_extra data_target
(A condensed MATLAB sketch of these steps is given below.)
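The following sketch strings these steps together; the folder name, the resize dimensions and the smoothing mask are illustrative, and the variable names are simplified compared with the full listing in the source-code section.

files = dir('gestures/*.jpg');                      % Step 1: read an image folder (assumed path)
features = [];
for k = 1:numel(files)
    RGB = imread(fullfile('gestures', files(k).name));
    I  = rgb2gray(RGB);                             % Step 2: drop the colour information
    I  = imresize(I, [80 64], 'bicubic');           % Step 3: bring every image to one size
    BW = edge(I, 'canny');                          % Step 4: Canny edge detection
    BW = conv2(double(BW), ones(3)/9, 'same');      % Step 5: smooth to reduce components
    features = [features; BW(:)'];                  % one feature row per image
end
save gesture_features features                      % Step 6: store the extracted features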
8.2. MODULE 2
NEURAL NETWORK TRAINING
STEP 1:
Load the Image Database prepared in module 1. This will load all image features extracted in
module 1.
STEP 2:
Set the input target vector and data target vector. The input, test and target vectors are saved on the hard drive in text files, with all data stored in a single column; MATLAB can tell where one vector ends and another starts because the expected size is given in the fscanf command. (A short sketch of this appears after these steps.)
STEP 3:
Find the optimal number of neurons.
STEP 4:
Initialize the neural network and set its parameters. The training record (TR) stores the following values over each epoch:
TR.epoch Epoch number
TR.perf Training performance
TR.vperf Validation performance
TR.tperf Test performance
STEP 5:
The error is calculated by subtracting the output A from target T. Then the sum-squared
error is calculated.
STEP 6:
Save the designed network.
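A brief sketch of the data loading and error computation mentioned in steps 2 and 5, assuming the vectors were written to plain text files one value per line, that the network net from step 4 already exists, and that there are 30 samples of 35 features with 6 target rows; all of these names and sizes are illustrative.

fid = fopen('input_vectors.txt', 'r');     % assumed text file, one value per line
P = fscanf(fid, '%f', [35 30]);            % fscanf fills column-wise, so the size argument
fclose(fid);                               % tells it where each vector ends

fid = fopen('targets.txt', 'r');
T = fscanf(fid, '%f', [6 30]);
fclose(fid);

A = sim(net, P);                           % network output for the training inputs
E = T - A;                                 % error: target minus output (step 5)
sse_error = sum(sum(E.^2));                % sum-squared error over all samples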
8.3 MODULE 3
TESTING
STEP 1:
Load the image database prepared in module 1 and module 2.
STEP 2 :
Set P as the input image. Define its size. Then set T as the data target.
STEP 3:
Define the testing error function.
STEP 4:
Simulate the testing set and get the result.
8.4 MODULE 4
GUI
In this module we prepared a graphical user interface to make the program user friendly. There are many software packages that can be used for this purpose, such as Visual Basic, but MATLAB also provides facilities for building graphical user interfaces.
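MATLAB GUIs can be built either with the GUIDE tool or programmatically. A minimal programmatic sketch (the function name, labels and callback behaviour are illustrative only) could look like this:

function gesture_gui                          % illustrative file name: gesture_gui.m
f = figure('Name', 'Gesture Recognition', 'NumberTitle', 'off');
uicontrol(f, 'Style', 'pushbutton', 'String', 'Load image', ...
          'Position', [20 20 100 30], 'Callback', @load_image);

    function load_image(src, evt)             % nested callback: pick and display an image
        [name, pth] = uigetfile('*.jpg');
        if isequal(name, 0), return; end
        figure(f); imshow(imread(fullfile(pth, name)));
    end
end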
9. FLOW CHARTS
9.1. Feature extraction
9.2. Training
10. RESULTS
INPUT IMAGE
OUTPUT IMAGE
HISTOGRAM OF OUTPUT IMAGE
ans = 1.1455e-004
ans = 1.2565e-004
ans = 1.3781e-004
ans = 1.5110e-004
ans = 1.6564e-004
ans = 1.8154e-004
ans = 1.9891e-004
ans = 2.1788e-004
ans = 2.3860e-004
ans = 2.6124e-004
ans = 2.8614e-004
ans = 3.1474e-004
ans = 3.5821e-004
ans = 5.2208e-004
ans = 0.0020
ans = 0.0176
ans = 6.2152e-004
ans = 0.0147
ans = 0.0072
ans = 6.8907e-004
1 a
2 b
3 c
4 d
0 e
2 b
4 d
3 c
0 e
1 a
3 c
4 d
0 e
1 a
2 b
4 d
1 a
2 b
3 c
4 d
11. CONCLUSION
The idea for the project started from McConnell's idea of orientation histograms. Many researchers have found the idea interesting and tried to use it in various applications, from hand recognition to cat recognition and geographical statistics.
My supervisor and I had the idea of trying to use this technique in conjunction with neural networks. In other pattern recognition approaches that use orientation histograms, different ways of comparing and classifying have been employed. Euclidean distance is a straightforward approach; it is efficient as long as the data sets are small and no further improvement is expected. Another advantage of using neural networks is that you can draw conclusions from the network output: if a vector is not classified correctly, we can check its output and work out a solution.
As far as the orientation algorithm is concerned, it can be further improved. The main problem is how good a differentiation one can achieve. This of course depends on the images, but it comes down to the algorithm as well. Edge detection techniques keep changing, and line detection can solve some problems. One of the ideas that I had lately is that of tangents, but I don't know whether it is feasible and there is no time to develop it. To say that I have come to robust conclusions at the end of the project is not safe; this is possible only for the first part of the project. Regardless of how many times you run that part of the program, the output vector will always be the same. This is not the case with the perceptron: apart from not being 100% stable, there are so many parameters (e.g. number of layers, number of nodes) that one can play with that finding the optimal settings is not that straightforward. As mentioned earlier, it all comes down to the application; if there is a specific noise target, for example, you can work to fit those specifications.
My major goal was speed and the avoidance of special hardware. This was achieved, although the system would be faster if written in C/C++; the complexity of the design and implementation, however, would have been much higher. MATLAB is slower but allows its users to work faster and concentrate on the results rather than on the design.
12. SOURCE CODE

12.1 Feature Extraction

clc
close all
clear all

feature_extraction = [];
feature_extra = [];
j1 = 1;
j2 = 1;
for j = 65:70                                     % gesture classes named 'A'..'F'
    for i = 1:5                                   % five images per gesture
        RGB = imread([char(j) num2str(i) '.jpg']);
        img_crop = RGB;
        I = rgb2gray(RGB);                        % convert to intensity
        I = imresize(I,[80 64],'bicubic');
        figure;
        imshow(I);
        B = edge(I,'canny');                      % assumed: Canny edge detection (Module 1, step 4)
        bw_7050 = imresize(B,[70,50]);
        for cnt = 1:7
            for cnt2 = 1:5
                Atemp = sum(bw_7050((cnt*10-9:cnt*10),(cnt2*10-9:cnt2*10)));
                lett((cnt-1)*5+cnt2) = sum(Atemp);    % edge pixels in each 10x10 block
            end
        end
        lett = ((100-lett)/100);                  % normalize the 35 block counts
        feature = [lett];
        feature_extra = [feature_extra; feature];
        data_target(j1,j2) = 1;                   % one-hot target: row = gesture, column = sample
        j2 = j2+1;
    end
    j1 = j1+1;
end
save total_feature_save_all feature_extra data_target   % store the features (Module 1, step 6)
12.2 Neural Network Training

clc
close all
clear all

% load total_feature_asl_1;
% load total_feature_save
% load total_feature_save_two_2
load total_feature_save_all;
P = feature_extra';
[m n] = size(feature_extra);
T = data_target;
N_train = length(T);
input_vectors_train = P;

% Finding the optimal number of neurons
[input_dim1 input_dim2] = size(input_vectors_train);
Training_Error = NaN * zeros(1,11);

[S2,Q] = size(T);
S1 = 4;                                           % number of hidden-layer neurons

% Declare the neural network (restored from the commented-out call in the
% original listing; the output-layer size is taken from the target matrix)
net = newff(minmax(input_vectors_train), [S1 S2], {'purelin' 'purelin'}, 'traingdx');
net.trainParam.epochs = 10000;                    % defining the number of epochs
net.trainParam.show = 9000;
net.trainParam.goal = 1e-2;
net.trainParam.min_grad = 1.0000e-018;

% Training started
[net tr] = train(net,input_vectors_train,T);      % training the NN

% Simulating the training set
Sim_train = sim(net,input_vectors_train);         % simulation using the training set
Result_train = find(compet(Sim_train));
Target_train = find(compet(T));
Number_of_errors = sum(Result_train ~= Target_train);    % calculating the number of errors
Training_Error = Number_of_errors / (3*N_train)          % normalizing the number of errors

% Saving the designed network
save network_50epochs_17apr10_th_ab net;
12.3 Testing

clc
close all
clear all

% load total_feature_asl_1;
load total_feature_save_all
load network_50epochs_17apr10_th_ab               % assumed: load the trained network saved above
P = feature_extra';
[m n] = size(feature_extra);
T = data_target;
[m N_test] = size(T);
Testing_Error = NaN * zeros(1,11);

% Simulating the testing set
Sim_test = sim(net,P);                            % simulation using the testing set
Result_test = find(compet(Sim_test));
Target_test = find(compet(T));
% Number_of_Errors = sum(Result_test ~= Target_test)     % calculating the number of errors
% Norm_Number_of_Error = Number_of_Errors / (3*N_test)   % normalizing the number of errors

nb_gesture = mod(find(compet(Sim_test)),5)        % map each winning output to a gesture index
[m n] = size(nb_gesture);
[m1 n1] = size(tab_asl1);                         % tab_asl1 / tab_asl: gesture lookup tables,
                                                  % assumed to come from the loaded .mat files
for i = 1:m
    for j = 1:m1
        if (nb_gesture(i) == tab_asl1(j))
            disp(tab_asl(j));                     % display the recognised gesture label
        end
    end
end
13. APPLICATION
Creating a proper sign language dictionary (ASL, American Sign Language, in this case) is not the desired result at this point. That would require the system to understand advanced grammar and syntax structure, which is outside the scope of this project. American Sign Language is used as the database since it is a tightly structured set, and from that point further applications can be suited. The distant (or near?) future of computer interfaces could keep the usual input devices and, in conjunction with gesture recognition, some of the user's feelings could be perceived as well. Taking ASL recognition further, a full real-time dictionary could be created with the use of video; as mentioned before, this would require some artificial intelligence for grammar and syntax purposes.
Another application is the annotation of huge databases, which is far more efficient when properly executed by a computer than by a human.
14. REFERENCES
http://classes.monterey.edu/CST/CST332-01/world/Mat/Edge.htm
http://www.generation5.org/perceptron.shtml
http://www.cs.bgu.ac.il/~omri/Perceptron/
http://www.phy.syr.edu/courses/modules/MM/Neural/neur_train.html
http://hebb.cis.uoguelph.ca/~skremer/Teaching/27642/BP/node2.html
http://www.willamette.edu/~gorr/classes/cs449/Classification/perceptron.html
http://www.bus.olemiss.edu/jjohnson/share/mis695/perceptron.html