
Hindawi Publishing Corporation

Journal of Sensors
Volume 2016, Article ID 8594096, 16 pages
http://dx.doi.org/10.1155/2016/8594096

Research Article
Vision-Based Autonomous Underwater Vehicle Navigation in
Poor Visibility Conditions Using a Model-Free Robust Control

Ricardo Pérez-Alcocer,1 L. Abril Torres-Méndez,2 Ernesto Olguín-Díaz,2 and A. Alejandro Maldonado-Ramírez2

1 CONACYT-Instituto Politécnico Nacional-CITEDI, 22435 Tijuana, BC, Mexico
2 Robotics and Advanced Manufacturing Group, CINVESTAV Campus Saltillo, 25900 Ramos Arizpe, COAH, Mexico

Correspondence should be addressed to L. Abril Torres-Méndez; abriltorresm15@gmail.com

Received 25 March 2016; Revised 5 June 2016; Accepted 6 June 2016

Academic Editor: Pablo Gil

Copyright © 2016 Ricardo Pérez-Alcocer et al. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
This paper presents a vision-based navigation system for an autonomous underwater vehicle in semistructured environments
with poor visibility. In terrestrial and aerial applications, the use of visual systems mounted on robotic platforms as sensor
feedback for control is commonplace. However, robotic vision-based tasks for underwater applications are still not widely
considered, as the images captured in this type of environment tend to be blurred and/or color-depleted. To tackle this problem,
we have adapted the lαβ color space to identify features of interest in underwater images even in extreme visibility conditions. To
guarantee the stability of the vehicle at all times, a model-free robust control is used. We have validated the performance of our
visual navigation system in real environments, showing the feasibility of our approach.

1. Introduction

The development of research in autonomous underwater vehicles (AUVs) began approximately four decades ago. Since then, a considerable amount of research has been presented. In particular, the localization and navigation problems represent a challenge in AUV development due to the unstructured and hazardous conditions of the environment and the complexity of determining the global position of the vehicle. An extensive review of the research related to this topic is presented in [1-4].

Sensor systems play a relevant role in the development of AUV navigation systems as they provide information about the system status and/or environmental conditions. There exist several sensors that provide relevant and accurate information [5-7]. However, global or local pose estimation of underwater vehicles is still an open problem, especially when a single sensor is used. Typically, underwater vehicles use multisensor systems with the intention of estimating their position and determining the location of objects in their workspace. Usually, inertial measurement units (IMUs), pressure sensors, compasses, and global positioning systems (GPS) are commonly used [8]. Note that even though GPS devices are widely used for localization, they show low performance in an underwater environment. Therefore, data fusion is needed to increase the accuracy of the pose estimation (for a review of sensor fusion techniques see [9]).

Vision-based systems are a good choice because they provide high resolution images with high speed acquisition at low cost [10]. However, in aquatic environments the color attenuation produces poor visibility when the distance increases. In contrast, at short distances the visibility may be good enough and the measurement accuracy higher than with other sensors. Therefore, tasks in which visual information is used are limited to object recognition and manipulation, vehicle docking [11], reconstruction of the ocean floor structure [12], and underwater inspection and maintenance [13]. In [14], the authors discuss how visual systems can be used in underwater vehicles, and they present a vision system which obtains depth estimations based on camera data. In [10], a visual system was introduced. This visual system, called Fugu-f, was designed to provide visual information in

submarine tasks such as navigation, surveying, and mapping. The system is robust in mechanical structure and software components. Localization has also been addressed with vision systems. In [15] a vision-based localization system for an AUV with limited sensing and computation capabilities was presented. The vehicle pose is estimated using an Extended Kalman Filter (EKF) and a visual odometer. The work in [16] presents a vision-based underwater localization technique in a structured underwater environment. Artificial landmarks are placed in the environment and a visual system is used to identify the known objects. Additionally, a Monte Carlo localization algorithm estimates the vehicle position.

Several works on visual feedback control of underwater vehicles have been developed [17-28]. In [17], the authors present a Boosting algorithm which was used to identify features based on color. This method uses, as input, images in the RGB color space, and a set of classifiers is trained offline in order to segment the target object from the background; the visual error is defined as an input signal for the PID controller. In a similar way, a color-based classification algorithm is presented in [18]. This classifier was implemented using the JBoost software package in order to identify buoys of different colors. Both methods require an offline training process, which is a disadvantage when the environment changes. In [19], an adaptive neural network image-based visual servo controller is proposed; this control scheme allows placing the underwater vehicle in the desired position with respect to a fixed target. In [20], a self-triggered position-based visual servo scheme for the motion control of an underwater vehicle was presented. The visual controller is used to keep the target in the center of the image with the premise that the target will always remain inside the camera field of view. In [21], the authors present an evolved stereo-SLAM procedure implemented in two underwater vehicles. They computed the pose of the vehicle using a stereo visual system and the navigation was performed following a dynamic graph. A visual guidance and control methodology for a docking task is presented in [22]. Only one high-power LED light was used for AUV visual guidance without distance estimation. The visual information and a PID controller were employed in order to regulate the AUV attitude. In [23], a robust visual controller for an underwater vehicle is presented. The authors implemented genetic algorithms in a stereo visual system for real-time pose estimation, which was tested in environments under air bubble disturbance. In [24], the development and testing process of a visual system for buoy detection is presented. This system used the HSV color space and the Hough transformation in the detection process. These algorithms require the internal parameters to be adjusted depending on the work environment, which is a disadvantage. In general, the visual systems used in these papers were configured for a particular environment, and when the environmental characteristics change, it is necessary to readjust some parameters. In addition, robust control schemes were not proposed for attitude regulation.

In this work, a novel navigation system for an autonomous underwater vehicle is presented. The navigation system combines a visual controller with an inertial controller in order to define the AUV behavior in a semistructured environment. The AUV dynamic model is described and a robust control scheme is experimentally validated for attitude and depth regulation tasks. An important controller feature is that it can be easily implemented in the experimental platform. The main characteristics affecting the images taken underwater are described, and an adapted version of the perceptually uniform lαβ color space is used to find the artificial marks in a poor visibility environment. The exact positions of the landmarks in the vehicle workspace are not known, but an approximate knowledge of their localization is available.

The main contributions of this work include (1) the development of a novel visual system for the detection of artificial landmarks in poor visibility conditions underwater, which does not require the adjustment of internal parameters when environmental conditions change, and (2) a new simple visual navigation approach which does not require keeping the objects of interest in the field of view of the camera at all times, considering that only their approximate localization is given. In addition, a robust controller guarantees the stability of the AUV.

The remaining part of this paper is organized as follows. In Section 2 the visual system is introduced. The visual navigation system and details of the controller are presented in Section 3. Implementation details and the experimental results are presented in Section 4. Finally, Section 5 concludes this work.

2. The Visual System

Underwater visibility is poor due to the optical properties of light propagation, namely, absorption and scattering, which are involved in the image formation process. Although a considerable amount of research has focused on using mathematical models for image enhancement and restoration [25, 26], it is clear that the main challenge is the highly dynamic environment; that is, the limited number of parameters that are typically considered cannot represent all the actual variables involved in the process. Furthermore, for effective robot navigation, the enhanced images are needed in real time, which is not always possible to achieve in all approaches. For that reason, we decided to explore directly the use of perceptually uniform color spaces, in particular the lαβ color space. In the following sections, we describe the integrated visual framework proposed for detecting artificial marks in aquatic environments, in which the lαβ color space was adapted for underwater imagery.

2.1. Color Discrimination for Underwater Images Using the lαβ Color Space. Three main problems are observed in underwater image formation [26]. The first is known as disturbing noise, which is due to suspended matter in water, such as bubbles, small particles of sand, and small fish or plants that inhabit the aquatic ecosystem. These particles block light and generate noisy images with distorted colors. The second problem is related to the refraction of light. When a camera set and objects are placed in two different

Figure 1: Photographs with multicolored objects taken underwater and in air.

environments with different refractive indices, the objects in the picture have a different distortion for each environment, and therefore the position estimation is not the same in both environments. The third problem in underwater images is light attenuation. The light intensity decreases as the distance to the objects increases; this is due to the attenuation of the light as a function of its wavelength. The effect of this is that the colors of the observed underwater objects look different from those perceived in the air. Figure 1 shows two images with the same set of different color objects taken underwater and in air. In these images, it is possible to see the characteristics of the underwater images mentioned above.

A color space is a mathematical model through which the perceptions of color are represented. The color space selection is an important decision in the development of the image processing algorithm, because it can dramatically affect the performance of the vision system. We selected the lαβ color space [27], because it has features that simplify the analysis of the data coming from the underwater images. In underwater images, the background color (sea color) is usually blue or green; these colors correspond to the limits of the α and β channels, respectively, and, therefore, identifying objects with colors that contrast with blue or green becomes much easier. A modification of the original transformation from RGB to the lαβ color space was made. The logarithm operation was removed from the transformation, reducing the processing time while keeping the color distribution. Thus, the mapping between RGB and the modified lαβ color space is expressed as a linear transformation:

[l, α, β]ᵀ = [0.3475  0.8265  0.5559; 0.2162  0.4267  −0.6411; 0.1304  −0.1033  −0.0269] [R, G, B]ᵀ,   (1)

where l ∈ [0.0, 1.73] is the achromatic channel which determines the luminance value, α ∈ [−0.6411, 0.6429] is the yellow-blue opponent channel, and β ∈ [−0.1304, 0.1304] is the red-cyan channel with a significant influence of green. The data in these channels include a wide variety of colors; however, the information in aquatic images is contained in a very narrow interval. Figure 2 shows an underwater image and the frequency histogram for each channel of the lαβ color space. In this image, the data of the objects are concentrated in a small interval.

Therefore, in order to increase the robustness of the identification method, new limits for each channel are established. These values help to increase the contrast between objects and the background in the image. The new limits are calculated using the frequency histogram of each of the channels; with this, the extreme values of the histogram with a frequency higher than a threshold value are computed. The advantage of using the frequency histogram, instead of only the minimum and maximum values, is that the first method eliminates outliers.

Finally, a data normalization procedure is performed using the new interval in each channel of the color space. After this, it is possible to obtain a clear segmentation of the objects with colors located at the end values of the channels. Figure 3 shows the result of applying the proposed algorithm to the l, α, β channels. It can be observed that some objects are significantly highlighted from the greenish background; particularly, the red circle in the β channel presents a high contrast.
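As a concrete illustration of the two steps just described, the Python sketch below applies the linear mapping of (1) and then recomputes the limits of one channel from its frequency histogram before normalizing it. The matrix entries come from (1); the bin count, the frequency threshold, and the function names are illustrative assumptions, since the paper does not specify them.

```python
import numpy as np

# Linear RGB -> (l, alpha, beta) mapping of (1), with the logarithm removed.
M = np.array([[0.3475,  0.8265,  0.5559],
              [0.2162,  0.4267, -0.6411],
              [0.1304, -0.1033, -0.0269]])

def rgb_to_lab_linear(rgb):
    """rgb: H x W x 3 float array in [0, 1]; returns the l, alpha, beta channels."""
    lab = rgb.reshape(-1, 3) @ M.T
    return lab.reshape(rgb.shape)

def adjust_channel(channel, bins=256, freq_threshold=50):
    """Recompute the channel limits from its frequency histogram and normalize.

    Histogram bins whose frequency is below freq_threshold are treated as
    outliers, so the new limits are the extreme bin centers whose count
    exceeds the threshold (threshold and bin count are illustrative values).
    """
    hist, edges = np.histogram(channel, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    valid = centers[hist > freq_threshold]
    if valid.size == 0:          # no bin passes the threshold; keep the channel as is
        return channel
    lo, hi = valid.min(), valid.max()
    # Normalize the channel to [0, 1] using the new limits.
    return np.clip((channel - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```

In this sketch, applying adjust_channel to the β channel would produce the high-contrast image that the detection stage described next operates on.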
2.2. Detection of Artificial Marks in Aquatic Environments. The localization problem for underwater vehicles requires identifying specific objects in the environment. Our navigation system relies on a robust detection of the artificial marks in the environment. Artificial red balls were selected as the known marks in the aquatic environment. Moreover, circle tags of a different color were attached to the sphere in order to determine the section of the sphere that is being observed.

Detection of circles in images is an important and frequent problem in image processing and computer vision. A wide variety of applications such as quality control, classification of manufactured products, and iris recognition use circle detection algorithms. The most popular techniques for detecting circles are based on the Circle Hough Transform (CHT) [28]. However, this method is slow, demands a considerable amount of memory, and identifies many false positives, especially in the presence of noise. Furthermore, it has many parameters that must be previously selected by the user. This last characteristic limits their use in underwater environments since ambient conditions are constantly changing.

Figure 2: Frequency histograms for each channel of the lαβ color space. (a) Input image; (b)-(d) histograms of the l, α, and β channels.

For this reason, it is desirable to have a circle detection algorithm with a fixed set of internal parameters that does not require adjustment, even if small or large circle identification is required or if the ambient light changes. The circle detection algorithm presented by Akinlar and Topal [29] provides the desired properties. We have evaluated its performance in aquatic images with good results. Specifically, we applied the algorithm to the β channel image, which is the resulting image from the procedure described in the previous section. As was mentioned, the β channel presents the highest contrast between red objects and the background color of underwater images. This enables the detection algorithm to find circular shapes in the field of view with more precision. This is an important finding: to the best of our knowledge, this is the first time that this color space model is used in underwater images for this purpose.

Figure 4 shows the obtained results. The images are organized as follows: the first column shows the original input image; the second column corresponds to the graphical representation of the β channel; and finally the third column displays the circles detected in the original image. The rows in the figure present the results obtained under different conditions. The first experiment analyzes a picture taken in a pool with clear water. Although the spheres are not close to the camera, they can be easily detected by our visual system. The second row is also a photograph taken in the pool, but in this case the visibility was poor; however, the method works appropriately and detects the circle. Finally, the last row shows the results obtained from a scene taken in the marine environment, in which visibility is poor. In this case, the presence of the red object in the image is almost imperceptible to the human eye; however, the detector identifies the circle successfully.

The navigation system proposed in this work is the integration of the visual system, described above, with a novel control scheme that defines the behavior of the vehicle based on the available visual information. The block diagram in Figure 5 shows the components of the navigation system and the interaction between them.
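A rough sketch of this detection stage is given below. Note that it uses OpenCV's Hough-based circle detector only as a stand-in for the EDCircles detector of Akinlar and Topal [29] that is actually employed, and every numeric parameter is a placeholder rather than a value from the paper.

```python
import cv2
import numpy as np

def detect_red_marks(beta_channel):
    """beta_channel: normalized float image in [0, 1] with red objects close to 1."""
    img8 = (beta_channel * 255).astype(np.uint8)
    img8 = cv2.GaussianBlur(img8, (5, 5), 0)
    # Stand-in for the EDCircles detector [29]; parameters are illustrative only.
    circles = cv2.HoughCircles(img8, cv2.HOUGH_GRADIENT, dp=1.5, minDist=30,
                               param1=100, param2=40, minRadius=5, maxRadius=120)
    if circles is None:
        return []
    # Each entry is (x_center, y_center, radius) in pixels.
    return [tuple(c) for c in np.round(circles[0]).astype(int)]
```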

Figure 3: Result of the conversion of the input image to the lαβ color space after adjusting the range of values. (a) Input image; (b)-(d) l, α, and β channels.

3. Visual Navigation System

In this section the navigation system and its implementation in our robotic system are presented. The autonomous underwater vehicle, called Mexibot (see Figure 6), is part of the Aqua robot family [30] and an evolution of the RHex platform [31]. The Aqua robots are amphibious, with the ability to work in both land and water environments. The underwater vehicle has a pair of embedded computers; one computer is used for the visual system and for other phases such as data registration; the second computer is used for the low-level control. An important feature is that both computers are connected via Ethernet, so they are able to exchange data or instructions. The control loop of the robot runs under a real-time constraint; for this reason, the QNX operating system is installed in the control computer. On the other hand, the vision computer has Ubuntu 12.04 as the operating system. On this computer, high-level applications are developed which use the Robot Operating System (ROS). In addition, the vehicle has an IMU, which provides the attitude and angular velocity of the vehicle. A pressure sensor is used to estimate the depth of the robot, and a set of three cameras are used, two in front of the robot and one in the back.

3.1. Model-Free Robust Control. The visual navigation system requires a control scheme to regulate the depth and attitude of the underwater vehicle. In this subsection, the underwater vehicle dynamics is analyzed and the controller used to achieve the navigation objective is presented. Knowing the dynamics of underwater vehicles and their interaction with the environment plays a vital role for the vehicle's performance. The underwater vehicle dynamics includes hydrodynamic parametric uncertainties, which are highly nonlinear, coupled, and time varying. The AUV is a rigid body moving in 3D space. Consequently, the AUV dynamics can be represented with respect to the inertial reference frame, denoted by I, or with respect to the body reference frame B. Figure 7 presents the AUV reference frames and their movements.

In [32], Fossen describes the method to obtain the underwater vehicle dynamics using Kirchhoff's laws. Fluid damping, gravity-buoyancy, and all external forces are also included and the following representation is obtained:

M_v ν̇ + C_v(ν)ν + D_v(ν_r)ν_r + g_v(q) = F_u + w_v(t),   (2)
ν = J_v(q) q̇,   (3)

where q ∈ R^6 is the pose of the vehicle, ν = [vᵀ, ωᵀ]ᵀ ∈ R^6 is the twist of the vehicle, v = [v_x, v_y, v_z]ᵀ ∈ R^3 is the linear velocity, and ω = [ω_x, ω_y, ω_z]ᵀ ∈ R^3 is the angular velocity expressed in the body reference frame. M_v ∈ R^{6×6} is the positive constant and symmetric inertia matrix, which includes the inertial mass

Figure 4: Example results of applying the circle detection algorithm using the lαβ color space in underwater images with different visibility conditions. (a) Input image; (b) β channel; (c) detected circles.

Figure 5: Block diagram of the proposed navigation system for the autonomous underwater vehicle (route planner, visual system, depth and attitude control, force distribution, and underwater environment).

Figure 6: Our underwater vehicle Mexibot.

Figure 7: AUV representation including the inertial and body reference frames.

and the added mass matrix. C_v(ν) ∈ R^{6×6} is the skew-symmetric Coriolis matrix and D_v(ν_r) > 0 ∈ R^{6×6} is the positive definite dissipative matrix, which depends on the magnitude of the relative fluid velocity ν_r ∈ R^6. ν_f ∈ R^6 is the fluid velocity in the inertial reference frame and g_v(q) ∈ R^6 is the potential wrench vector which includes the gravitational and buoyancy effects. F_u ∈ R^6 is the vector of external forces, expressed in the vehicle frame and produced by the vehicle thrusters, J_v(q) ∈ R^{6×6} is the operator that maps the generalized velocity q̇ to the vehicle twist ν, and w_v(⋅) is the external disturbance wrench produced by the fluid currents.

Consider the following control law [33]:

F_u = M̂_v ν̇_r + D̂_v ν_r − J_vᵀ(q) (K_d s + K_i ∫ s + γ ‖s‖² s),   (4)

where K_d, K_i, Λ, M̂_v, and D̂_v are constant positive definite matrices, γ is a positive scalar, and q̃ is the pose error:

q̃ = q − q_d,   (5)

after which the extended (tracking) error s is defined as

s = q̃̇ + Λ q̃.   (6)

Expressing this extended error as a velocity error,

s = q̇ − q̇_r,   (7)

for an artificial reference velocity q̇_r = q̇_d − Λ q̃, it raises the vehicle's twist reference as

ν_r ≜ J_v(q) q̇_r = J_v(q)(q̇_d − Λ q̃) = ν_d − J_v(q) Λ q̃.   (8)

This control scheme ensures stability for tracking tasks despite any inaccuracies in the dynamic parameters of the vehicle and the perturbations in the environment [33]. Therefore, this control law can be used to define the behavior of both the inertial and the visual servoing modes of the underwater vehicle.

It is also important to highlight that this control law can be implemented easily, because it only requires measurements of q and q̇ and rough estimates of M_v and D_v.
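To make the structure of control law (4) explicit, the following Python sketch evaluates it at one control step using the extended error (6) and the twist reference (8). The gain values, the operator J_v, the estimates M̂_v and D̂_v, and the numerical differentiation of ν_r are all assumptions of the sketch, not prescriptions of the paper.

```python
import numpy as np

class ModelFreeRobustControl:
    """Sketch of control law (4) with the extended error (5)-(8).

    Kd, Ki, Lam are positive definite 6x6 gain matrices, gamma > 0, and
    Mv_hat, Dv_hat are rough estimates of the inertia and damping matrices.
    """
    def __init__(self, Kd, Ki, Lam, gamma, Mv_hat, Dv_hat, dt):
        self.Kd, self.Ki, self.Lam, self.gamma = Kd, Ki, Lam, gamma
        self.Mv_hat, self.Dv_hat, self.dt = Mv_hat, Dv_hat, dt
        self.int_s = np.zeros(6)        # running integral of s
        self.prev_nu_r = np.zeros(6)    # previous twist reference, for its derivative

    def step(self, q, q_dot, q_d, q_d_dot, Jv):
        q_tilde = q - q_d                               # pose error (5)
        s = (q_dot - q_d_dot) + self.Lam @ q_tilde      # extended error (6)
        q_r_dot = q_d_dot - self.Lam @ q_tilde          # artificial reference velocity
        nu_r = Jv @ q_r_dot                             # twist reference (8)
        nu_r_dot = (nu_r - self.prev_nu_r) / self.dt    # numerical derivative (assumption)
        self.prev_nu_r = nu_r
        self.int_s += s * self.dt
        # Control law (4)
        Fu = (self.Mv_hat @ nu_r_dot + self.Dv_hat @ nu_r
              - Jv.T @ (self.Kd @ s + self.Ki @ self.int_s
                        + self.gamma * np.dot(s, s) * s))
        return Fu
```

Only rough estimates of M_v and D_v are needed here, which is what makes the scheme attractive for a vehicle whose hydrodynamic parameters are poorly known.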
3.1.1. Stability Analysis. Model (2)-(3) is also known as the quasi-Lagrangian formulation since the velocity vector ν defines a quasi-Lagrangian velocity vector. The Lagrangian formulation upon which the stability analysis relies is found by using (3) and its time derivative in (2) and premultiplying the resulting equation by the transpose of the velocity operator J_v(q) [34]:

H(q) q̈ + C(q, q̇) q̇ + D(q, q̇, ν_f) q̇ + g(q) = τ + τ_w(⋅),   (9)

where H(q) = J_vᵀ(q) M_v J_v(q) = Hᵀ(q) > 0 and C(q, q̇) = J_vᵀ(q) M_v J̇_v(q) + J_vᵀ(q) C_v(ν) J_v(q), which implies that xᵀ[dH(q)/dt − 2C(q, q̇)]x = 0, and all the terms are bounded, for nonnegative constants c_i ≥ 0, as follows:

‖H(q)‖ ≤ c_1 = λ_max{H(q)},
‖C(q, q̇)‖ ≤ c_2 ‖q̇‖,
‖D(q, q̇, ν_f)‖ ≤ c_3 ‖q̇‖ + c_4 ‖ν_f‖,   (10)
‖g(q)‖ ≤ c_5,
‖τ_w(t)‖ ≤ c_6 + c_7 ‖q̇‖ ‖ν_f‖ + c_8 ‖ν̇_f‖.

Then, control law (4) adopts the following shape in the Lagrangian space:

τ = J_vᵀ(q) F_u = Ĥ(q) q̈_r + Ĉ(q, q̇) q̇_r + D̂(q) q̇_r − K_d s − K_i ∫ s − γ ‖s‖² s,   (11)

with the following relationships:

Ĥ(q) ≜ J_vᵀ(q) M̂_v J_v(q) > 0,
Ĉ(q, q̇) ≜ J_vᵀ(q) M̂_v J̇_v(q),   (12)
D̂(q) ≜ J_vᵀ(q) D̂_v J_v(q) > 0,

from which it follows that dĤ/dt = Ĉ + Ĉᵀ, or equivalently the following property:

xᵀ [dĤ(q)/dt − 2 Ĉ(q, q̇)] x = 0, ∀ x ≠ 0.   (13)

Now consider that the left-hand side of the Lagrangian formulation (9) can be expressed in the following regression-like expression:

H(q) q̈ + C(q, q̇) q̇ + D(q, q̇, ν_f) q̇ + g(q) = Y(q, q̇, q̈) Θ,   (14)

where Y(q, q̇, q̈) is the regressor constructed with known nonlinear functions of the generalized coordinates and their first and second time derivatives and Θ is the vector of unknown parameters. Then, for an arbitrary smooth (at least once differentiable) signal q̇_r ∈ R^6, there should exist a modified regressor Y_r(q, q̇, q̇_r, q̈_r) such that

H(q) q̈_r + C(q, q̇) q̇_r + D(q, q̇, ν_f) q̇_r + g(q) = Y_r(q, q̇, q̇_r, q̈_r) Θ.   (15)

The difference between the estimated and the real parameters, Θ̃ = Θ − Θ̂, produces an estimate system error:

Y_r(q, q̇, q̇_r, q̈_r) Θ̃ = [H(q) − Ĥ(q)] q̈_r + [C(q, q̇) − Ĉ(q, q̇)] q̇_r + [D(q, q̇, ν_f) − D̂(q)] q̇_r,   (16)

which, after the above equivalence, is properly bounded:

‖Y_r(q, q̇, q̇_r, q̈_r) Θ̃‖ ≤ c_9 ‖q̇_r‖ + c_10 + c_11 ‖q̈_r‖.   (17)

Then the closed-loop dynamics is found using control law (11) in the open-loop Lagrangian expression (9):

Ĥ(q) ṡ + Ĉ(q, q̇) s + D̂(q) s + K_d s + K_i ∫ s = −γ ‖s‖² s − Y_r(q, q̇, q̇_r, q̈_r) Θ̃ − g(q) + τ_w(t).   (18)

Now consider the following Lyapunov candidate function:

V(s) = (1/2) sᵀ Ĥ(q) s + (1/2) ãᵀ K_i⁻¹ ã,   (19)

with ã ≜ a_0 − K_i ∫ s for some constant vector a_0 ∈ R^6. The time derivative of the Lyapunov candidate function along the trajectories of the closed-loop system (18), after property (13) and proper simplifications, becomes

V̇(s) = −sᵀ [D̂(q) + K_d] s − γ ‖s‖⁴ − sᵀ a_0 + sᵀ (τ_w(t) − Y_r(q, q̇, q̇_r, q̈_r) Θ̃ − g(q)).   (20)

Assuming that ν̇_f is bounded implies that both q̇_f and q̈_f are also bounded. Then, assuming that q̇_d and q̈_d are also bounded, it yields ‖τ_w(t)‖ + ‖Y_r(q, q̇, q̇_r, q̈_r) Θ̃‖ ≤ δ_0 + δ_1 ‖q̇‖ + δ_2 ‖q̇‖², which can be expressed in terms of the extended error as

‖τ_w(t)‖ + ‖Y_r(q, q̇, q̇_r, q̈_r) Θ̃‖ ≤ β_0 + β_1 ‖s‖ + β_2 ‖s‖².   (21)

Then the last term in (20) is bounded as follows:

‖τ_w(t) − Y_r(q, q̇, q̇_r, q̈_r) Θ̃ − g(q)‖ ≤ ‖τ_w(t)‖ + ‖Y_r(q, q̇, q̇_r, q̈_r) Θ̃‖ + ‖g(q)‖ ≤ β_0 + c_5 + β_1 ‖s‖ + β_2 ‖s‖².   (22)

Also, let a_0 ≜ (β_0 + c_5) e_6, where e_6 ∈ R^6 is a vector of ones, such that ‖a_0‖ = (β_0 + c_5) > 0. Then, after these bounding expressions, (20) is bounded as follows:

V̇(s) ≤ −λ ‖s‖² − γ ‖s‖⁴ + β_2 ‖s‖³ + β_1 ‖s‖²,   (23)

where λ is the smallest eigenvalue of the matrix D̂(q) + K_d. The conditions to satisfy V̇(s) < 0 are summarized as

λ > β_1 + β_2,
γ > β_2,   (24)

which are conditions on the choice of the control law gains. Under these conditions V̇(s) is negative definite and the extended error is asymptotically stable:

lim_{t→∞} ‖s‖ → 0.   (25)

Finally, after definition (6), whenever s = 0 it follows that q̃̇ = −Λ q̃, which means that q reaches the set point q_d. Therefore, the stability of the system is proved. A detailed explanation and analysis of the controller can be found in [33].

The implementation of the control does not require knowledge of the dynamic model parameters; hence it is a robust control with respect to the fluid disturbances and the dynamic parametric knowledge. However, it is necessary to know the relationship between the control input and the actuators.

3.2. Thruster Force Distribution. The propulsion forces of Mexibot are generated by a set of six fins which move along a sinusoidal path defined as

φ_i(t) = A_i sin((2π/T) t + δ_i) + θ_i,   (26)

where φ_i is the angular position of the fin, A_i is the amplitude of the motion, T is the period of each cycle, θ_i is the central angle of the oscillation, and δ_i is the phase offset between the different fins of the robot.
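For reference, a minimal Python sketch of the fin trajectory (26) is shown below; the function name and the default phase and central angle are assumptions.

```python
import numpy as np

def fin_angle(t, A, T, delta=0.0, theta_c=0.0):
    """Sinusoidal fin trajectory (26): amplitude A [rad], period T [s],
    phase offset delta [rad], and central oscillation angle theta_c [rad]."""
    return A * np.sin(2.0 * np.pi * t / T + delta) + theta_c
```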
Both Georgiades [35] and Plamondon [36] present models for the thrust generated by the symmetric oscillation of the fins used in the Aqua robot family. Plamondon presents a relationship between the thrust generated by the fins and the parameters describing the motion in (26). Thus, the magnitude of the force generated by each fin with the sinusoidal movement (26) is determined by the following equation:

F_i = 0.1963 ρ (l (l_1 + 2 l_2) / 3) (A_i / T)² − 0.1554,   (27)

where l, l_1, and l_2 correspond to the dimensions of the fins, ρ represents the density of water, A_i is the amplitude, and T is the period of oscillation. Thus, the magnitude of the force generated by the robot fins can be established as a function of the period and the amplitude of the fin oscillation movement at runtime. Figure 8 shows the force produced by the fins, where the angle γ_i defines the direction of the force vector expressed in the body reference frame, with components

F_{p_i} = [F_{p_i x}, 0, F_{p_i z}]ᵀ = [F_i cos γ_i, 0, F_i sin γ_i]ᵀ.   (28)

In addition, due to the kinematic characteristics of the vehicle, F_y = 0. Therefore, the vector of forces and moments generated by the actuators is defined as follows:

F_u = [F_x, 0, F_z, τ_x, τ_y, τ_z]ᵀ.   (29)

Figure 8: Diagram of the forces generated by the fin movements, where the angle γ establishes the direction of the force.

Figure 9: Fin distribution in the underwater vehicle.

Consider the fin numeration shown in Figure 9; then the following equations state the relationship between the components of F_u and the individual fin forces F_{p_i}:

F_x = F_{p_1 x} + F_{p_2 x} + F_{p_3 x} + F_{p_4 x} + F_{p_5 x} + F_{p_6 x},   (30a)
F_y = 0,   (30b)
F_z = F_{p_1 z} + F_{p_2 z} + F_{p_3 z} + F_{p_4 z} + F_{p_5 z} + F_{p_6 z},   (30c)

while the moment components (30d)-(30f) are the corresponding sums of the moments of the individual fin forces about the center of mass, τ = Σ_{i=1}^{6} p_i × F_{p_i}, with p_i = [x_{p_i}, y_{p_i}, 0]ᵀ, where x_{p_i} and y_{p_i} are the distance coordinates of the i-th fin joint with respect to the vehicle's center of mass, as shown in Figure 9. Note that the symmetry of the vehicle establishes that x_{p_1} = x_{p_3} = −x_{p_4} = −x_{p_6}, x_{p_2} = −x_{p_5}, y_{p_1} = −y_{p_3} = y_{p_4} = −y_{p_6}, and y_{p_2} = y_{p_5} = 0.

System (30a), (30b), (30c), (30d), (30e), and (30f) has five equations with twelve independent variables. Among all possible solutions, the one presented in this work arises after the imposition of the following constraints: C1: γ_1 = γ_2 = γ_3 and C2: γ_4 = γ_5 = γ_6, which assign a common direction angle to the fins on each side of the vehicle; C3, C4, and C5, which couple the force contributions of fins 1, 3, 4, and 6 and of fins 2 and 5; and C6, which sets one of the wrench components to zero so that, as discussed in Section 3.3, the depth is regulated indirectly through the pitch angle.   (31)

Then one system solution is found to be a linear mapping (32) that distributes the commanded force and moment components among the individual fin force components, with coefficients

k_1 = 1 / (2 (2 x_{p_1} + x_{p_2})),
k_2 = 1 / (4 y_{p_1}).   (33)

Now, the oscillation amplitude A_i of the i-th fin is computed from (27) using an oscillation period of 0.4 s, and the corresponding thrust is defined as

F_i = √(F_{p_i x}² + F_{p_i z}²).   (34)

Finally, the central angle of oscillation is computed as

γ_i = tan⁻¹(F_{p_i z} / F_{p_i x}).   (35)
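Assuming the reconstructions of (27), (34), and (35) above and the fin dimensions of Table 1, the sketch below shows how the runtime fin command (oscillation amplitude and central angle) could be recovered from the force components assigned to one fin; the fixed period of 0.4 s follows the text, and the function interface is an assumption.

```python
import numpy as np

RHO = 1000.0                      # water density [kg/m^3] (Table 1)
L, L1, L2 = 0.190, 0.049, 0.068   # fin length and widths [m] (Table 1)
K27 = 0.1963 * RHO * L * (L1 + 2.0 * L2) / 3.0   # constant factor of (27)

def fin_command(Fpx, Fpz, T=0.4):
    """Return (A_i, gamma_i) for one fin from its force components.

    The thrust magnitude follows (34), the central angle follows (35), and the
    amplitude is obtained by inverting the thrust model (27) for the fixed
    oscillation period T.
    """
    F = np.hypot(Fpx, Fpz)                       # thrust magnitude, (34)
    gamma = np.arctan2(Fpz, Fpx)                 # central oscillation angle, (35)
    A = T * np.sqrt(max(F + 0.1554, 0.0) / K27)  # inverse of the thrust model (27)
    return A, gamma
```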

3.3. Desired Signals Computation. In this navigation system the controller performs a set-point task. The desired values are computed based on the visual information. Due to the underactuated nature of the vehicle and sensor limitations, only the attitude and depth of the vehicle can be controlled. The desired depth value is a constant and the desired roll angle is always φ_d = 0. As constraint C6 has been considered, the depth is controlled indirectly by modifying the desired pitch angle θ_d. This desired orientation angle is calculated in terms of the depth error as

θ_d = k_θ (z_d − z),   (36)

where k_θ is a positive constant.

The visual system defines the desired yaw angle ψ_d. Images from the left camera are processed in a ROS node with the algorithms described in Section 2.1 in order to determine the presence of the sphere in the field of view. When visual information is not available, this angle remains constant with the initial value or with the last computed value. However, if a single circle with a radius bigger than a certain threshold is found, the new desired yaw angle is computed based on the visual error. This error is defined as the distance in pixels between the center of the image and the position in the x-axis of the detected circle. So, the new desired yaw angle is computed as

ψ_d = ψ + (300 / (columns × rows)) r ẽ_v,   (37)

where ψ is the actual yaw angle, ẽ_v is the visual error in the horizontal axis, rows and columns are the image dimensions, and r is the radius of the circle. This desired yaw angle is proportional to the visual error, but it also depends on the radius of the circle found. When the object is close to the camera, the sphere radius is larger, and therefore the change of ψ_d also increases. Note that the resolution of the image given by the vehicle's camera is 320 × 240 pixels; with this, the gain used to define the reference yaw angle in (37) was established as 300. This value was obtained experimentally, with a trial and error procedure, and it produces a correction of approximately 1°, with a visual error ẽ_v = 10 and a radius of the observed sphere r = 25. This update of the desired yaw angle modifies the vehicle attitude and reduces the position error of the sphere in the image. We note that the update of the desired yaw angle was necessary only when the visual error was bigger than 7 pixels; for this reason, when ẽ_v is smaller than this threshold the reference signal keeps the previous value.

Finally, when a circle inside of another circle is found, that means the underwater vehicle is close to the mark and a direction change is performed. The desired yaw angle is set to the actual yaw value plus an increment related to the location of the next sphere. This integration of the visual system and the controller results in the autonomous navigation system for the underwater vehicle, which is able to track the marks placed in a semistructured environment.
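A compact sketch of the set-point computation in this subsection is shown below; the image resolution, the gain of 300, and the 7-pixel threshold are taken from the text, while the function interfaces, the assumption that angles are expressed in degrees, and the symbol k_theta for the depth gain are illustrative.

```python
COLS, ROWS = 320, 240   # resolution of the vehicle camera
YAW_GAIN = 300.0        # gain of (37), obtained experimentally
DEAD_BAND = 7.0         # pixels; below this the yaw reference keeps its value

def desired_pitch(z_d, z, k_theta):
    """Pitch set point (36): depth is regulated indirectly through the pitch."""
    return k_theta * (z_d - z)

def desired_yaw(psi, e_v, radius, psi_d_prev):
    """Yaw set-point update (37) from the horizontal visual error e_v (pixels)
    and the radius of the detected circle (pixels). The previous reference is
    kept when no circle is detected or inside the dead band."""
    if radius is None or abs(e_v) <= DEAD_BAND:
        return psi_d_prev
    # With e_v = 10 px and radius = 25 px this yields roughly a 1 degree correction.
    return psi + YAW_GAIN * radius * e_v / (COLS * ROWS)
```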
4. Experimental Results

In order to evaluate the performance of the visual navigation system, a couple of experimental results are presented in this section. Two red spheres were placed in a pool with turbid water. An example of the type of view in this environment is shown in Figure 10. This environment is semistructured because the floor is not natural and also because of the lack of currents; however, the system is subjected to disturbances produced by the movement of swimmers which closely follow the robot. As was mentioned before, the exact position of the spheres is unknown; only the approximate angle which relates the position between the marks is available. Figure 11 shows a diagram with the artificial marks distribution. The underwater vehicle starts swimming towards one of the spheres. Although the circle detection algorithm includes functions for selecting the circle of interest when more than one is detected, for this first experiment no more than one visual mark is in front of the visual field of the camera at the same time.

The implementation of the attitude and depth control was performed in a computer with the QNX real-time operating system, and the sample time of the controller is 1 ms. This controller accesses the inertial sensors in order to regulate the depth and orientation of the vehicle. The reference signal of the yaw angle was set with the initial orientation of the robot and updated by the visual system when a sphere is detected. The visual system was implemented in a computer with Ubuntu and ROS, having an approximate sample time of 33 ms when a visual mark is present. The parameters

Table 1: Parameters of our AUV used in the experimental test.

Notation   Description                                                          Value   Units
m̂          Mexibot mass                                                         1.79    [kg]
Î_x        Inertia moment with respect to the x-axis                            0.001   [kg m^2]
Î_y        Inertia moment with respect to the y-axis                            0.001   [kg m^2]
Î_z        Inertia moment with respect to the z-axis                            0.001   [kg m^2]
x_p1       Distance between the AUV center of mass and fin 1 along the x axis   0.263   [m]
y_p1       Distance between the AUV center of mass and fin 1 along the y axis   0.149   [m]
x_p2       Distance between the AUV center of mass and fin 2 along the x axis   0.199   [m]
l          Fin length                                                           0.190   [m]
l_1        Fin width 1                                                          0.049   [m]
l_2        Fin width 2                                                          0.068   [m]
ρ          Water density                                                        1000    [kg/m^3]

Figure 10: Example of the type of view in the experimental environment.

Figure 11: Diagram to illustrate the approximate location of the visual marks (one sphere at ≈0° and a second one at ≈45°).

used in the implementation are presented in Table 1 and were obtained from the manufacturer. The water density has been used with a nominal value, assuming that the controller is able to handle the inaccuracy with respect to the real values.

The control gains in (4) and (36) were established after a trial and error process. The nonlinear and strongly coupled nature of the vehicle dynamics causes small variations in the control gains to affect considerably the performance of the controller. For the experimental validation, we first tuned the gains of the attitude controller, γ, Λ, K_d, and K_i, in sequence. Then, the parameter of the depth controller was selected. With this, the control gains were set as follows: the scalar gains were set to 0.7 and 17, the diagonal gain matrices to

diag{0.30, 0.20, 1.50},
diag{0.10, 0.07, 0.20},   (38)

and Λ = diag{0.50, 0.50, 0.50}.

In the first experiment, the navigation task considers the following scenario. A single red sphere is placed in front of the vehicle, approximately at 8 meters of distance. The time evolution of the depth coordinate z(t) and the attitude signals φ(t), θ(t), ψ(t) are shown in Figure 12, where the corresponding depth and attitude reference signals are also plotted. The first 20 seconds correspond to the start-up period of the navigation system. After that, the inertial controller ensures that the vehicle moves in the same direction until the navigation system receives visual feedback. This feedback occurs past thirty seconds and the desired value for the yaw angle ψ_d(t) starts to change in order to follow the red sphere. Notice that the reference signal for the pitch angle θ_d(t) presents continuous changes after the initialization period. This is because the depth control is performed indirectly by modifying the value of θ_d with (36). In addition, the initial value for ψ_d(t) is not relevant because

Figure 12: Navigation experiment when tracking one sphere. Desired values in red and actual values in black. (a) Depth z(t); (b) roll φ(t); (c) pitch θ(t); (d) yaw ψ(t).
this value is updated after the inertial navigation system starts. The corresponding depth and attitude error signals are depicted in Figure 13, where all of these errors have considerably small magnitude, bounded by a value around 0.2 m for the depth error and 10° for the attitude error.

The time evolution of the visual error in the horizontal axis is depicted in Figure 14. Again, the first thirty seconds do not show relevant information because no visual feedback is obtained. Later, the visual error is reduced to a value in an acceptable interval, represented by the red lines. This interval represents the values for which the desired yaw angle does not change, even when the visual system is not detecting the sphere. As mentioned before, the experiments show that when ẽ_v ≤ 7 pixels, the AUV can achieve the assigned navigation task. Finally, a disturbance, generated by nearby swimmers when they displace water, moves the vehicle and the error increases, but the visual controller acts to reduce this error.

The previous results show that the proposed controller (4) under the thruster force distribution (32) provides a good behavior in the set-point control of the underwater vehicle, with small depth and attitude error values. This performance enables the visual navigation system to track the artificial marks placed in the environment.

The navigation task assigned to the underwater vehicle in the second experiment includes the two spheres with the distribution shown in Figure 11. For this experiment the exact position of the spheres is unknown; only the approximate relative orientation and distance between them are known. The first sphere was positioned in front of the AUV at an approximate distance of 8 m. When the robot detects that the first ball is close enough, it should change the yaw angle to 45° in order to find the second sphere. Figure 16 shows the time evolution of the depth coordinate z(t), the attitude signals φ(t), θ(t), ψ(t), and the corresponding reference signals during the experiment. Similarly to the previous experiment, the actual depth, roll angle, and pitch angle are close to the desired values, even when small ambient disturbances are present. The yaw angle plot shows the different stages of the system. The desired value at the starting period is an arbitrary value that does not have any relation with the vehicle state. After the initialization period, a new desired value for the yaw angle is set and this angle remains constant as long as the visual system does not provide information. When the visual system detects a sphere, the navigation system generates a smooth desired signal allowing the underwater vehicle to track the artificial mark. When the circle inside of the other circle was detected, the change in direction of 45° was applied. This reference value was fixed until the second sphere was detected and a new desired signal was generated with small changes. Finally, the second circle inside of the sphere was detected and a new

Figure 13: Navigation tracking errors of one sphere when using controller (4). (a) Depth error z̃(t); (b) roll error φ̃(t); (c) pitch error θ̃(t); (d) yaw error ψ̃(t).

Figure 14: Navigation tracking of one sphere. Visual error (in pixels) obtained from the AUV navigation experiment.
change of 45° was performed and the desired value remained constant until the end of the experiment.

Figure 17 shows the depth and attitude error signals. Similar to the first experiment, the magnitude of this error is bounded by a value around 0.2 m for the depth error and 10° for the attitude error, except for the yaw angle, which presents higher values produced by the direction changes. Note that, in this experiment, a significant amount of the error was produced by environmental disturbances.

Finally, the graph of the time evolution of the visual error is depicted in Figure 15. It can be observed that, at the beginning, while the robot was moving forward, the error remained constant because the system was unable to determine the presence of the artificial mark in the environment. At a given time, the visual system detected the first sphere, with an estimated radius of about 30 pixels. Then, as the robot gets closer to the target, the visual error begins to decrease due to the improvement in visibility and

Figure 15: Navigation tracking of two spheres. Visual error obtained from the AUV navigation experiment.

Figure 16: Navigation tracking of two spheres. Desired values in red and actual values in black. (a) Depth z(t); (b) roll φ(t); (c) pitch θ(t); (d) yaw ψ(t).
the radius of the sphere increases. When the radius is bigger than a given threshold, a change-of-direction action is fired in order to avoid a collision and to search for the second sphere. Then, all variables are reset. Once again the error remains constant at the beginning due to the lack of visual feedback. In this experiment, when the second mark was identified, the visual error was bigger than 100 pixels, but this error rapidly decreased to the desired interval. At the end of the experiment, another change of direction was generated and the error remained constant, because no other sphere in the environment was detected.

5. Conclusion

In this paper, a visual-based controller to guide the navigation of an AUV in a semistructured environment using artificial marks was presented. The main objective of this work is to provide an aquatic robot with the capability of moving in an environment where visibility conditions are far from ideal and artificial landmarks are placed with an approximately known distribution. A robust control scheme applied under a given thruster force distribution combined with a visual servoing control was implemented. Experimental evaluations

Figure 17: Navigation error when tracking two spheres when using controller (4). (a) Depth error z̃(t); (b) roll error φ̃(t); (c) pitch error θ̃(t); (d) yaw error ψ̃(t).
for the navigation system were carried out in an aquatic environment with poor visibility. The results show that our approach was able to detect the visual marks and perform the navigation satisfactorily. Future work includes the use of natural landmarks and the relaxation of some restrictions, for example, allowing more than one visual mark to be present in the field of view of the robot.

Competing Interests
The authors declare that they have no competing interests.

Acknowledgments
The authors thank the financial support of CONACYT,
México.

References
[1] J. J. Leonard, A. A. Bennett, C. M. Smith, and H. J. S. Feder,
“Autonomous underwater vehicle navigation,” in Proceedings of
the IEEE ICRA Workshop on Navigation of Outdoor Autonomous
Vehicles, 1998.
[2] J. C. Kinsey, R. M. Eustice, and L. L. Whitcomb, “A survey of underwater vehicle navigation: recent advances and new challenges,” in Proceedings of the IFAC Conference of Manoeuvering and Control of Marine Craft, vol. 88, 2006.
[3] L. Stutters, H. Liu, C. Tiltman, and D. J. Brown, “Navigation
technologies for autonomous underwater vehicles,” IEEE Trans-
actions on Systems, Man and Cybernetics Part C: Applications
and Reviews, vol. 38, no. 4, pp. 581–589, 2008.
[4] L. Paull, S. Saeedi, M. Seto, and H. Li, “AUV navigation and
localization: a review,” IEEE Journal of Oceanic Engineering,
vol. 39, no. 1, pp. 131–149, 2014.
[5] A. Hanai, S. K. Choi, and J. Yuh, “A new approach to a laser
ranger for underwater robots,” in Proceedings of the IEEE/RSJ
International Conference on Intelligent Robots and Systems
(IROS ’03), pp. 824–829, October 2003.
[6] F. R. Dalgleish, F. M. Caimi, W. B. Britton, and C. F. Andren,
“An AUV-deployable pulsed laser line scan (PLLS) imaging sensor,”
in Proceedings of the MTS/IEEE Conference (OCEANS ’07), pp. 1–5,
Vancouver, Canada, September 2007.
[7] A. Annunziatellis, S. Graziani, S. Lombardi, C. Petrioli, and
R. Petroccia, “CO2Net: a marine monitoring system for CO2
leakage detection,” in Proceedings of the OCEANS, 2012, pp. 1–
7, IEEE, Yeosu, Republic of Korea, 2012.
[8] G. Antonelli, Underwater Robots-Motion and Force Control
of Vehicle-Manipulator System, Springer, New York, NY, USA,
2nd edition, 2006.
[9] T. Nicosevici, R. Garcia, M. Carreras, and M. Villanueva, “A
review of sensor fusion techniques for underwater vehicle nav-
igation,” in Proceedings of the MTTS/IEEE TECHNO-OCEAN
’04 (OCEANS ’04), vol. 3, pp. 1600–1605, IEEE, Kobe, Japan,
2004.

[10] F. Bonin-Font, G. Oliver, S. Wirth, M. Massot, P. L. Negre, and J.-P. Beltran, “Visual sensing for autonomous underwater exploration and intervention tasks,” Ocean Engineering, vol. 93, pp. 25–44, 2015.
[11] K. Teo, B. Goh, and O. K. Chai, “Fuzzy docking guidance using augmented navigation system on an AUV,” IEEE Journal of Oceanic Engineering, vol. 40, no. 2, pp. 349–361, 2015.
[12] R. B. Wynn, V. A. I. Huvenne, T. P. Le Bas et al., “Autonomous Underwater Vehicles (AUVs): their past, present and future contributions to the advancement of marine geoscience,” Marine Geology, vol. 352, pp. 451–468, 2014.
[13] F. Bonin-Font, M. Massot-Campos, P. L. Negre-Carrasco, G. Oliver-Codina, and J. P. Beltran, “Inertial sensor self-calibration in a visually-aided navigation approach for a micro-AUV,” Sensors, vol. 15, no. 1, pp. 1825–1860, 2015.
[14] J. Santos-Victor and J. Sentieiro, “The role of vision for underwater vehicles,” in Proceedings of the IEEE Symposium on Autonomous Underwater Vehicle Technology (AUV ’94), pp. 28–35, IEEE, Cambridge, Mass, USA, July 1994.
[15] A. Burguera, F. Bonin-Font, and G. Oliver, “Trajectory-based visual localization in underwater surveying missions,” Sensors, vol. 15, no. 1, pp. 1708–1735, 2015.
[16] D. Kim, D. Lee, H. Myung, and H.-T. Choi, “Artificial landmark-based underwater localization for AUVs using weighted template matching,” Intelligent Service Robotics, vol. 7, no. 3, pp. 175–184, 2014.
[17] J. Sattar and G. Dudek, “Robust servo-control for underwater robots using banks of visual filters,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’09), pp. 3583–3588, Kobe, Japan, May 2009.
[18] C. Barngrover, S. Belongie, and R. Kastner, “Jboost optimization of color detectors for autonomous underwater vehicle navigation,” in Computer Analysis of Images and Patterns, pp. 155–162, Springer, 2011.
[19] J. Gao, A. Proctor, and C. Bradley, “Adaptive neural network visual servo control for dynamic positioning of underwater vehicles,” Neurocomputing, vol. 167, pp. 604–613, 2015.
[20] S. Heshmati-Alamdari, A. Eqtami, G. C. Karras, D. V. Dimarogonas, and K. J. Kyriakopoulos, “A self-triggered position based visual servoing model predictive control scheme for under-actuated underwater robotic vehicles,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’14), pp. 3826–3831, Hong Kong, June 2014.
[21] P. L. N. Carrasco, F. Bonin-Font, and G. O. Codina, “Stereo graph-slam for autonomous underwater vehicles,” in Proceedings of the 13th International Conference on Intelligent Autonomous Systems, pp. 351–360, 2014.
[22] B. Li, Y. Xu, C. Liu, S. Fan, and W. Xu, “Terminal navigation and control for docking an underactuated autonomous underwater vehicle,” in Proceedings of the IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER ’15), pp. 25–30, Shenyang, China, June 2015.
[23] M. Myint, K. Yonemori, A. Yanou, S. Ishiyama, and M. Minami, “Robustness of visual-servo against air bubble disturbance of underwater vehicle system using three-dimensional marker and dual-eye cameras,” in Proceedings of the MTS/IEEE Washington (OCEANS ’15), pp. 1–8, IEEE, Washington, DC, USA, 2015.
[24] B. Sütő, R. Dóczi, J. Kalló et al., “HSV color space based buoy detection module for autonomous underwater vehicles,” in Proceedings of the 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI ’15), pp. 329–332, IEEE, Budapest, Hungary, November 2015.
[25] M. Bryson, M. Johnson-Roberson, O. Pizarro, and S. B. Williams, “True color correction of autonomous underwater vehicle imagery,” Journal of Field Robotics, 2015.
[26] A. Yamashita, M. Fujii, and T. Kaneko, “Color registration of underwater images for underwater sensing with consideration of light attenuation,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ’07), pp. 4570–4575, Roma, Italy, April 2007.
[27] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao, “Statistics of cone responses to natural images: implications for visual coding,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 15, no. 8, pp. 2036–2045, 1998.
[28] T. D’Orazio, C. Guaragnella, M. Leo, and A. Distante, “A new algorithm for ball recognition using circle hough transform and neural classifier,” Pattern Recognition, vol. 37, no. 3, pp. 393–408, 2004.
[29] C. Akinlar and C. Topal, “EDCircles: a real-time circle detector with a false detection control,” Pattern Recognition, vol. 46, no. 3, pp. 725–740, 2013.
[30] G. Dudek, P. Giguere, C. Prahacs et al., “AQUA: an amphibious autonomous robot,” Computer, vol. 40, no. 1, pp. 46–53, 2007.
[31] U. Saranli, M. Buehler, and D. E. Koditschek, “RHex: a simple and highly mobile hexapod robot,” International Journal of Robotics Research, vol. 20, no. 7, pp. 616–631, 2001.
[32] T. I. Fossen, Guidance and Control of Ocean Vehicles, John Wiley & Sons, 1994.
[33] R. Pérez-Alcocer, E. Olguín-Díaz, and L. A. Torres-Méndez, “Model-free robust control for fluid disturbed underwater vehicles,” in Intelligent Robotics and Applications, C.-Y. Su, S. Rakheja, and H. Liu, Eds., vol. 7507 of Lecture Notes in Computer Science, pp. 519–529, Springer, Berlin, Germany, 2012.
[34] E. Olguín-Díaz and V. Parra-Vega, “Tracking of constrained submarine robot arms,” in Informatics in Control, Automation and Robotics, vol. 24, pp. 207–222, Springer, Berlin, Germany, 2009.
[35] C. Georgiades, Simulation and control of an underwater hexapod robot [M.S. thesis], Department of Mechanical Engineering, McGill University, Montreal, Canada, 2005.
[36] N. Plamondon, Modeling and control of a biomimetic underwater vehicle [Ph.D. thesis], Department of Mechanical Engineering, McGill University, Montreal, Canada, 2011.