2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT)
A Novel Neural Network Structure for Motion Control in Joints

Aparanji V M, Dept. of E & CE, Siddaganga Inst. of Technology, Tumakuru, India, vmasit@rediffmail.com
Uday V Wali, Dept. of E & CE, KLE Dr MSS CET, Belagavi, India, udaywali@gmail.com
Aparna R, Dept. of ISE, Siddaganga Inst. of Technology, Tumakuru, India, raparna@sit.ac.in
Abstract—This paper proposes a new type of artificial neural network useful for motion control of the end-point of a joint typically seen in biological systems. The network is based on a hybrid concept combining Adaptive Resonance Theory (ART) and Self Organizing Maps (SOMs). Basic concepts related to the new architecture and the algorithm necessary to implement the network are presented in the paper. General applicability of the proposed method is demonstrated using two different kinds of joints and two different types of gradient functions. The algorithms have been implemented in the R simulation language, and results of the implementation are also presented.

Index Terms—Adaptive Resonance Theory, Self Organizing Maps, Artificial Neural Networks, Motion Control, Joint control.

I. INTRODUCTION

Recent work in robotic and humanoid systems has concentrated on three aspects: sensory systems, massively parallel distributed control networks and joint design. This paper introduces a new type of simple but massively parallel control similar to that of a biological system, one that can be easily trained or can learn automatically from the environment. The proposed control structure is applied to two types of joints to demonstrate its applicability. The paper also hints at the type of physical structure suitable for joint control.

Artificial Neural Networks (ANNs) based on Adaptive Resonance Theory (ART) match the incoming input to a stored and possibly transformed value held in a nerve cell or its synapses [2]. The cell that best matches the input is considered the winner [3]. The winner strengthens its matching capacity for the given input while the others reduce theirs. If the output from the winner is not above a statistical threshold, a new cell is added to the network and is marked as the winner. The success of an ANN using ART depends on how well the new cell is in resonance with the input that created it. A novel procedure to add a new cell that will be in resonance with the input has been suggested in [11].

The main strength of ART ANNs is their ability to continuously grow and adjust to the input [5]. It is also possible to devise methods to eliminate (forget) cells from the network if the frequency of inputs matching such a cell falls below a certain statistical threshold. The capacity of an ART ANN to classify untrained data emerges from the selection of the strongest winner. However, the arrangement of cells depends on the order in which the training input was presented and is therefore highly variable. This implies that the relations between system parameters presented via the input vectors are not mapped to the spatial organization of the ART network.

Self Organizing Maps (SOMs), on the other hand, are good at mapping system parameters to the spatial organization of the network [6]. SOM networks start with a set of neurons that are initialized to random values or to a set of default values perturbed by a small random value. A SOM network may be thought of as a surface represented by the collection of cells [7]. When input is applied, all cells compare the input to their stored values and generate a matching signal proportional to the degree of match [10]. The one with the strongest signal (maximum matched value) is the winner, as in other neural networks. It strengthens its synaptic weights such that repeated occurrences of the same input reinforce the winner's response to that input. The essential feature of SOM is that the winner pulls its neighboring cells towards its own value from their existing values. This causes neighbors to respond similarly to an input. However, each of the neighboring cells will have slightly varying synaptic weights, distributed spatially in an ordered way over the surface representing the collection of cells. Over several learning iterations, cells with similar values form clusters that diffuse into each other gradually.
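The SOM update described above, in which the winner moves toward the input and pulls its neighbors along with a gradient that decays with distance, can be sketched as follows. This is an illustrative Python sketch (the paper's own implementation is in R); the grid size, learning rate `lr` and scaling factor `f` are arbitrary choices, not values from the paper.

```python
from math import exp

def som_update(grid, x, lr=0.5, f=1.0):
    """One SOM step on a 1-D row of cells: find the best-matching cell
    (smallest distance to input x), then pull every cell toward x with a
    gradient lr*exp(-n*f), where n is its distance (in cells) from the winner."""
    dists = [sum((gi - xi) ** 2 for gi, xi in zip(cell, x)) for cell in grid]
    winner = dists.index(min(dists))
    for i, cell in enumerate(grid):
        g = lr * exp(-abs(i - winner) * f)   # decays to insignificance within a few cells
        grid[i] = [gi + g * (xi - gi) for gi, xi in zip(cell, x)]
    return winner

# five cells storing 2-D values; cell 2 already matches the input best
grid = [[0.0, 0.0] for _ in range(5)]
grid[2] = [1.0, 0.0]
w = som_update(grid, [1.0, 0.0])
print(w)   # 2
```

Note how cells adjacent to the winner move noticeably while cells a few steps away barely change, which is what produces the gradual clustering described above.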
978-1-5090-4697-3/16/$31.00 ©2016 IEEE
Critical to the success of SOM networks is the way the neighbors are associated with each other. This relation is essentially represented by a gradient function that varies with the distance between the winner and its neighbors. The gradient is generally chosen such that the effect decays quickly within a few steps. The gradient function, the difference between the current value and the required value, and the distance are used to update the neighbor cells. As the distance from the winner increases, the effect of the gradient function reduces, leaving neighbors beyond a certain radius unaffected. Traditionally, an exponential function of the type d·e^(−nf) is used, where d is the difference in cell value, n is the neighbor radius and f is a scaling factor. Typically, this value becomes insignificant for neighbors at a radius greater than four or five cells. It is possible to replace the exponential function with polynomial functions, including linear functions, to reduce computational complexity. Often, the gradient function has to be biased by another input-transformation matrix. This matrix influences the SOM behavior and application.

A new type of ANN evolved by combining the features of ART and SOM networks is presented here. The approach reduces the computational overhead typically seen in a SOM. It also allows the SOM to grow with training.

II. PROPOSED ANN FOR JOINTS

In a biological control system the sensory and motor circuits are separate but connected by a neural bridge. There is a supervisory decision-making process that behaves differently than a memory system. Biological systems are often controlled by a superior goal rather than just a set of inputs. That goal is set by logical, emotional, environmental and other factors. The motor circuit may or may not be able to achieve that goal but will act in the direction of the goal.

The Back Propagation Network (BPN) is the most used ANN configuration for control system design, but alternate network designs do exist [1]. Higher levels of control can only be achieved by multi-level ANNs, generally referred to as deep-learning neural networks [4]. These have been used to solve complex problems in gaming, natural language processing, scene identification, etc. We have evolved a new type of ANN that is suitable for control of joint structures in biological systems. The essential features of this type of network are:

1) The network starts with a few cells and grows as it is trained with newer inputs. The network 'memorizes' the inputs and corresponding outputs.

2) An input transformation network generates multiple mutated copies of the input during the training phase. Each such transformed input is similar but slightly different from the original input. New cells may be added to the network corresponding to these mutated inputs.

3) An output diffusion function called the gradient function matches the mutated inputs to diffused output.

4) A joint or a set of joints has a fixed end and a free end. Intermediate joints can move the segment attached to the joint through a small set of angles from a central axis (see fig. 1). Thus the free end can move within a specific {x,y,z} space.

5) The goal of moving the joints is to reach a particular point in {x,y,z}, which can be achieved by a specific set of excitation values applied to the joint, depending on the Degrees of Freedom (DoF). Each DoF is controlled by one parameter.

6) Each neuron can store and emit a control value corresponding to a DoF of the joint.

7) Each neuron is set-associative: when a training input is given, the end-point location associated with that input is stored as a set. When a target location (or goal) is given, the corresponding excitation values to activate the motors at each DoF are emitted.

8) When a particular location can be reached by more than one possible set of joint angles, each such solution is 'memorized' by a separate cell. When such a cell is added to the network, it generates a neighborhood of cells that are similar to itself. This could be construed as a layer of solutions, as shown in fig. 2. It may be possible to use any of the solutions, but there could be other constraints on the selection of a solution.

9) A traverse (or path) can be identified by transiting over a set of points that can be reached using any one of the possible solutions. Often the problem will be to find the best solution.
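The set-associative, layered cells described in features 7) and 8) above can be sketched as follows. This is an illustrative Python sketch only; the names (`Cell`, `store`) and the numeric values are ours, not from the paper's R library.

```python
class Cell:
    """Set-associative neuron: joint angles (input), the end-point they
    produce (output), and a layer index separating alternative solutions."""
    def __init__(self, angles, endpoint, layer):
        self.angles, self.endpoint, self.layer = angles, endpoint, layer

def store(cells, angles, endpoint, tol=1e-6):
    """Memorize a solution; if the end-point is already reachable by a
    different set of angles, put the new cell on the next layer (fig. 2)."""
    close = lambda a, b: max(abs(u - v) for u, v in zip(a, b)) < tol
    same_point = [c for c in cells if close(c.endpoint, endpoint)]
    if any(close(c.angles, angles) for c in same_point):
        return None                                   # already memorized
    layer = 1 + max((c.layer for c in same_point), default=0)
    cells.append(Cell(angles, endpoint, layer))
    return cells[-1]

cells = []
store(cells, (30.0, 60.0), (1.37, 0.87))   # first solution: layer 1
store(cells, (60.0, 30.0), (1.37, 0.87))   # same point, new angles: layer 2
```

Each additional set of joint angles reaching an already-known end-point lands on the next layer, mirroring the stack of solution planes in fig. 3.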
Fig. 1. A simple planar joint configuration

Fig. 2. Set of joint angles yielding the same {x,y,z}

To illustrate the idea, for the joint shown in fig. 1, the end-point will be a location in the {x,y} plane (z is constant). However, it is possible to reach some of the points by interchanging the values of θ and φ. Additional angles may also be possible due to approximations and the required resolution of angle. This effect is shown in fig. 3, which shows a set of planes, each one indicating a layer. The results are rotated by an angle to make the layers visible. For a given point, the solution can be provided by a cell in any one of the layers, located at the same {x,y} location.

Fig. 3. Sets of solutions to the joints shown in Fig. 1 with linear gradient functions

In cases where several segments are connected, the end coordinates can be easily calculated by shifting the origin to the end-point of the previous segment (see example 3 for details). Further, this is required only for simulation; in the case of real-time systems, the end-point location is known during training and is the target during normal use.

In a real-life system, training happens by exciting the joint randomly and measuring the location of the end-point. During normal usage, the goal would be to move the end-point to a specific location (or a set of locations, in a sequence). The excitation required to be applied to the joints needs to be obtained from the control network (of artificial neural cells) and applied to the joints.

In effect, the problem of joint motion control can be stated as a search for a suitable traverse along the neighboring cells in the neural membranes to reach the target location.

III. TRAINING THE NETWORK

During simulation, the joints are excited to random angles (inputs θ and φ). These inputs are fed to the transformation functions to find the angles of the neighboring cells. For each neighboring cell, the position of the joint end is calculated and stored in neural cells or Short Term Memory (STM). These cells form clusters containing a set of cells to reach any of the possible locations within the {x,y,z} space. This neural network structure for motion control in joints is shown in fig. 4. Note that no assumption on the number of joints and segments has been made.

Fig. 4. Neural network structure for motion control in joints

IV. THE ALGORITHM

The algorithm has been described here using the planar joint shown in fig. 1. However, the algorithm is generic enough for joints with multiple DoF.

Define cell as a tuple {joint angles, output locations and layer}
Create an empty array named cell_list to contain cells.
Create another empty array named neighbor_list to contain cells.
Read input table containing excitation values {θ, φ}, for example.
for each set of the inputs {
    create a new empty cell
    store input set in cell
    calculate (or measure) {x,y,z} output set
    update output of the cell
    append cell to neighbor_list
    create neighbor_list
    insert neighbor_list into cell_list
}
function to create neighbor_list {
    repeat {
        perturb input to cell
        calculate (or measure) output
        append to neighbor_list
    } while condition is true
}
function insert(neighbor_list, cell_list) {
    for i in 1:nbr_cnt {
        output_match = 0
        input_match = 0
        lmax = 0
        for j in 1:cellCount {
            if nbr_list[i] matches cell_list[j] at output {
                output_match = output_match + 1
                if nbr_list[i] matches cell_list[j] at input {
                    smoothen output
                    input_match = input_match + 1
                }
                if lmax < cell_list[j].layer
                    lmax = cell_list[j].layer
            }
        }
        if {x,y} does not match any cell
            insert cell at layer 1
        else {    // {x,y} matches one or more cells
            if inputs do not match {
                lmax = lmax + 1
                insert cell at layer lmax
            }
        }
    }
}

V. IMPLEMENTATION AND RESULTS

The algorithm suggested in the above section has been implemented in R for several joint structures. The first implementation was the planar joint shown in fig. 1. The results are shown in fig. 3 for linear gradient functions. The outputs are calculated using the following equations:

x = r1 sin(θ) + r2 sin(θ+φ) + xOffset (1)
y = r1 cos(θ) + r2 cos(θ+φ) + yOffset (2)

Note that the angular position of the second segment is measured as a deviation from that of the first segment, so the second term in eqn. 1 and eqn. 2 contains the sum of the segment angles. This is one possible configuration of the joint.

Results shown in fig. 3 were obtained by using a simple input transformation matrix of size 9 × 9. The inputs were mutated by a simple relation like r·dθ, where r is the radial distance from the best matching cell and dθ is a small perturbation angle.

Fig. 5 shows the results for the same planar joint with an exponential gradient function. As the distance from the winner increases, the effect of the exponential gradient function reduces, so neighbors farther away from the best matching cell remain unaffected. The number of cells and the number of layers created for the planar joint with the exponential gradient are less than those for the planar joint with the linear gradient function (see table 1).

Fig. 5. Sets of solutions to the planar joints with exponential gradient function

The second type of joint configuration, for a spherical surface with a single segment, is shown in fig. 6. A graphical representation of the end-point movement is shown in fig. 7. From this figure, the end-point location can be calculated as:

x = r1 sin(θ) + r2 cos(φ) + xOffset (3)
y = r1 cos(θ) + r2 sin(φ) + yOffset (4)
z = r1 cos(θ) + r2 cos(φ) + zOffset (5)

Here the angles are measured with reference to a fixed reference coordinate frame. Hence, when more than one segment is used in the joint, the segment output can be computed by translating the output by the amount given by the output of the previous segment. Typically, such joints can be realized by using a mini-drafter type of mechanism, where the second and subsequent segments remain parallel to the reference frame irrespective of the shifted location.
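Equations (1)-(2) and the r·dθ input mutation can be sketched in a few lines. This is an illustrative Python version (the paper's implementation is in R); the segment lengths r1 = r2 = 1 and the offsets are arbitrary defaults, not values from the paper.

```python
from math import sin, cos, radians

def planar_endpoint(theta, phi, r1=1.0, r2=1.0, x_off=0.0, y_off=0.0):
    """Eqns (1)-(2): the second segment's angle is measured relative to
    the first, so its term uses the sum theta + phi."""
    t, p = radians(theta), radians(phi)
    return (r1 * sin(t) + r2 * sin(t + p) + x_off,
            r1 * cos(t) + r2 * cos(t + p) + y_off)

def mutate(theta, phi, r, dtheta=1.0):
    """Perturb a training input for a neighbor at radial distance r,
    using the simple r*dtheta relation described above."""
    return theta + r * dtheta, phi + r * dtheta

print(planar_endpoint(0, 0))   # fully extended along y: (0.0, 2.0)
```

Interchanging θ and φ in `planar_endpoint` generally yields a different point, which is why alternative solutions for the same {x,y} must be discovered by mutation and stored on separate layers.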
Fig. 6. Joint configuration in x, y, z plane

Fig. 7. Displacement of joint shown in Fig. 6 in x, y, z coordinates

The solutions for the single segment joint in the spherical plane are shown in fig. 8. As expected, movements of the end-point generate a spherical surface.

Fig. 8. Sets of solutions to single segment joint shown in Fig. 6 & 7

The third implementation is the joint configuration given in fig. 6, but with two segments connected in series. The end coordinates (of the second segment) are calculated by shifting the origin to the end-point of the first segment. The outputs for the joint configuration for n segments are calculated by adding the results for the individual segments as follows:

x = x1 + x2 + … + xn (6)
y = y1 + y2 + … + yn (7)
z = z1 + z2 + … + zn (8)

In general, the output of the nth segment is calculated by shifting the origin to the end of the (n−1)th segment. The output of a multi-segment joint is the summation of the outputs of all the individual segments.

Results for the joint with two segments are shown in fig. 9. Notice that more cells are created compared to the single segment joint. The results (fig. 9(a)) are rotated by an angle to make the inner cells visible (fig. 9(b)).

Fig. 9. Solutions to the joints in spherical plane for two segments: (a) original view, (b) rotated view

Table 1 shows the number of cells obtained for different implementations with varying dθ and dφ. As dθ and dφ increase, the number of cells also increases.
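The n-segment composition of eqns (6)-(8) — express each segment's output in a frame shifted to the end of the previous segment, then add the contributions — can be sketched as follows. This is an illustrative Python fragment; the segment outputs shown are made-up numbers, not results from the paper.

```python
def multi_segment_endpoint(segment_outputs):
    """Eqns (6)-(8): the free end of an n-segment joint is the sum of the
    individual segment outputs, each already expressed in a frame shifted
    to the end of the previous segment."""
    x = sum(s[0] for s in segment_outputs)
    y = sum(s[1] for s in segment_outputs)
    z = sum(s[2] for s in segment_outputs)
    return (x, y, z)

# two hypothetical segment outputs (x_i, y_i, z_i)
print(multi_segment_endpoint([(1.0, 0.5, 0.0), (0.5, 0.5, 0.2)]))   # (1.5, 1.0, 0.2)
```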
Table 1: Number of cells for different implementations

                           Number of cells
dθ   dφ   Planar with       Planar with           Spherical         Spherical
          linear gradient   exponential gradient  (single segment)  (two segments)
 1    1       1072               433                  2792              1207
 2    2       1436               608                  3138              1934
 3    3       1448               730                  3044              2292
 4    4       1698               833                  3348              2763
 5    5       1710              1047                  3061              3210
 1    2       1307               507                  2761              1276
 2    1       1170               540                  2600              1859
 2    4       1546               722                  3181              1956
 4    2       1615               746                  3467              3682
 1    3       1461               568                  2395              1312
 3    1       1279               607                  2824              2485

VI. CONCLUSION

We have proposed a new type of artificial neural network useful for motion control of the end-point of a joint typically seen in biological systems. The network is a hybrid concept combining Adaptive Resonance Theory and Self Organizing Maps. We have simulated the proposed method using two different kinds of joints and two different types of gradient functions. Results of the implementation are encouraging.

REFERENCES

1. Laurene Fausett, "Fundamentals of Neural Networks", Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1994, pp. 218-288.
2. Tom Mitchell, "Machine Learning", McGraw-Hill, pp. 81-127.
3. Carpenter G. A. and Grossberg S., "ART2: Self-organization of stable category recognition codes for analog input patterns", Applied Optics, 1987, vol. 26, pp. 4919-4930.
4. A. Von Lehmen, E. G. Paek, P. F. Liao, A. Marrakchi and J. S. Patel, "Factors influencing learning by backpropagation", Proceedings of the IEEE International Conference on Neural Networks, 1988, Vol. I, pp. 335-341.
5. Carpenter G. A. and Grossberg S., "ART2-A: An adaptive resonance algorithm for rapid category learning and recognition", Neural Networks, Apr. 1991, vol. 4, pp. 493-504.
6. Yin H. and Allinson N. M., "Self-organising mixture networks for probability density estimation", IEEE Trans. Neural Networks, 12, 2001, pp. 405-411.
7. N. C. Yeo, K. H. Lee, Y. V. Venkatesh and S. H. Ong, "Colour image segmentation using the self-organizing map and adaptive resonance theory", Elsevier, Image and Vision Computing 23, 2005, pp. 1060-1079.
8. Jiaoyan, Brian Funt and Lilong, "A New Type of ART2 Architecture and Application to Color Image Segmentation", Springer, ICANN 2008, Part I, LNCS 5163, pp. 89-98.
9. Jian Fan, Yang Song and MinRui Fei, "ART2 neural network interacting with environment", Elsevier, Neurocomputing, 2008, 72, pp. 170-176.
10. Teuvo Kohonen, "Essentials of the self-organizing map", Elsevier, Neural Networks 37, 2013, pp. 52-65.
11. V M Aparanji, Uday V Wali and R Aparna, "R Library For Neural Networks", IJTS, Volume IX, Issue 1, 2016, pp. 9-12.