Semi–autonomous Navigation
of a Robotic Wheelchair
A.A. Argyros, P. Georgiadis, P. Trahanias, D.P. Tsakiris
Institute of Computer Science – FORTH
P.O. Box 1385, GR–711 10, Heraklion, Crete, Greece
E-mail: {argyros, georgiad, trahania, tsakiris}@ics.forth.gr
Abstract. The present work considers the development of a wheelchair for people with special needs, which is capable
of navigating semi-autonomously within its workspace. Such a system is expected to prove useful to people with impaired
mobility, who may have limited fine motor control of the upper extremities. Among the implemented behaviors of this
robotic system are the avoidance of obstacles, the motion in the middle of the free space and the following of a moving
target specified by the user (e.g. follow a person walking in front of the wheelchair). The wheelchair is equipped with
sonars, which are used for distance measurement in preselected critical directions, and with a panoramic camera (with
a 360 degree field of view), which is used for following a moving target. After suitably processing the color sequence
of the panoramic images using the color histogram of the desired target, the orientation of the target with respect to the
wheelchair is determined, while its distance is determined by the sonars. The motion control laws developed for the
system use the sensory data and take into account the nonholonomic kinematic constraints of the wheelchair, in order to
guarantee certain desired features of the closed–loop system, such as stability, while preserving their simplicity for ease of implementation. An experimental prototype has been developed at ICS–FORTH, based on a commercially–available
wheelchair, where the sensors, the computing power and the electronics needed for the implementation of the navigation
behaviors and of the user interfaces (touch screen, voice commands) were developed as add–on modules.
Keywords: Wheelchairs, robot navigation, nonholonomic mobile robots, person following, sensor–based control,
panoramic cameras.
1. Introduction
People with impaired mobility are faced with multiple challenges when moving in environments designed for people
without such problems. Existing assistive devices, such as wheelchairs, are primarily useful to people whose mobility
problems are not combined with others, like limited fine motor control of the upper extremities or reduced ability for
perception of their environment, which render control of a wheelchair problematic. Such combinations of mobility, motor
control and perception problems are not uncommon, making advances in robotic technology, primarily developed for
mobile robot navigation [3], relevant in building more effective assistive devices.
The present work considers the enhancement of a commercially–available power wheelchair (usually driven by its user
through a joystick) by the computational and sensory apparatus necessary for automating certain frequently–occurring
navigation tasks. The implemented tasks are obstacle avoidance, the motion towards a desired direction which is specified
by the user using a touch screen or voice commands, the motion in the middle of the free space defined by obstacles or
environment features and the following of a target (e.g. a moving person) specified by the user. Certain of these tasks
are carried out in cooperation with the user, hence the term semi–autonomous navigation. The difference from the usual
mode of operation of such a wheelchair is that the user is relieved of continuously controlling it during the execution of such
a task and has merely to issue some high–level commands, usually when the task is initiated (e.g. to select the person to
be followed by pointing on a touch screen, to select the direction of motion by appropriate voice commands, etc.).
The sensory modalities used are odometry, sonars and panoramic vision. The sonars measure range in preselected critical
directions around the wheelchair. The panoramic camera provides visual data from a 360° field of view, a significant
advantage over conventional cameras. In mobile robotics, the main alternative to panoramic cameras are moving cameras
mounted on pan–and–tilt platforms or hand–eye systems (cameras mounted at the end of a manipulator arm). The use
of a moving, limited field-of-view camera on a wheelchair necessitates precise control of its orientation, especially when the
wheelchair is also moving; this can be a challenging control problem [6]. Looking in a direction outside the current field of view
of the camera requires repositioning the sensor, which involves a delay that may be unacceptable when the environment
also changes. This problem becomes more severe when the direction where the camera needs to look next is not known
a priori; time-consuming exploratory actions are then necessary. In contrast to the above, panoramic cameras offer the
capability of extracting information simultaneously from all desired directions of their visual field. Neither moving parts
nor elaborate control mechanisms are required to achieve this capability.
Section 2 of the paper discusses the use of panoramic vision in this system. Section 3 discusses the system’s navigation
behaviors. Section 4 provides some details on the experimental prototype built.
2. Panoramic Vision
Fig. 1. Panoramic Image
Fig. 2. Unfolded panoramic images: person tracking sequence
A panoramic image can be “unfolded” giving rise to a cylindrical image. Different columns of the resulting cylindrical
image correspond to different viewing directions. A panoramic image can be unfolded using a polar-to-Cartesian
transformation. Fig. 1 shows an example of a panoramic image and fig. 2 shows examples of unfolded ones. The property
of the resulting image is that the full 360° field of view is mapped onto its horizontal dimension. In the remainder of this
paper, unless otherwise stated, the term panoramic image refers to an unfolded panoramic image. Let F denote a feature
of the environment and let $\theta$ be the bearing angle of feature $F$ in the panoramic image. Since we deal with panoramic
images, the bearing angle of feature $F$ can easily be computed as
$$\theta = \frac{2\pi\, x_F}{W},$$
where $x_F$ is the x-coordinate of the feature in the image, and $W$ is the width of the panoramic image (measured in pixels). Thus, recovering the orientation of an
environmental feature with respect to the panoramic camera becomes easy once the feature has been identified in the
panoramic image.
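For concreteness, the sketch below unfolds a raw panoramic image by a polar-to-Cartesian transformation and recovers a feature's bearing from its column, as in the formula above. It is a minimal illustration, not the system's actual implementation; the mirror center $(c_x, c_y)$, the annulus radii and all function names are our own assumptions.

```python
import numpy as np

def unfold_panoramic(img, cx, cy, r_min, r_max, width=720, height=120):
    """Unfold a raw (annular) panoramic image into a cylindrical one via a
    polar-to-Cartesian transformation.  (cx, cy) is the image projection of
    the mirror axis; r_min and r_max bound the useful annulus."""
    unfolded = np.zeros((height, width, img.shape[2]), dtype=img.dtype)
    for col in range(width):
        phi = 2.0 * np.pi * col / width            # viewing direction of this column
        for row in range(height):
            r = r_min + (r_max - r_min) * row / (height - 1)
            u = int(round(cx + r * np.cos(phi)))   # sample along the radial line
            v = int(round(cy + r * np.sin(phi)))
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                unfolded[height - 1 - row, col] = img[v, u]
    return unfolded

def bearing_angle(x_F, W):
    """Bearing of a feature found at column x_F of a W-pixel-wide unfolded
    panoramic image: theta = 2*pi*x_F / W."""
    return 2.0 * np.pi * x_F / W
```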
In the case of a person–following task, the goal of the processing of panoramic images is to specify the orientation of
the moving person. In order to achieve this goal, color information from the images is exploited. More specifically, a
modification of the Color Indexing Scheme [5] has been employed. This algorithm identifies an object by comparing its
color characteristics to the color characteristics of objects in a database.
In our system, first the user selects the person to be tracked. This is done through a touch screen and an appropriate
software interface. Based on this selection, the system builds an internal human body representation, consisting of three
image regions that correspond to the head, torso and legs of the person to be tracked. For each of these regions, a
normalized color histogram is built. Normalization is performed, to make the system less vulnerable to changes in the
global image illumination due to changes in lighting conditions. In subsequent image frames, a window is defined, in
which the above–mentioned regions are searched for. This is done by comparing the color histograms of the reference model
to those computed at every possible location in the search window. The locations of several of the best matches for each one of the model
regions are stored. These locations are then optimized globally, in the sense that they should obey certain topological rules (e.g.
head above torso, torso above legs, etc.). The best location for the model regions defines the best estimate for the location
of the person being tracked and, thus, its orientation with respect to the wheelchair. The regions of the model are then
tuned (both with respect to color information and to the size of the corresponding image areas), in order to better reflect
the appearance of the moving person.
This tracking method assumes that the processing of the image sequences proceeds fast enough to guarantee that the
appearance of the moving person between subsequent image frames does not change significantly. In practice, we have
been able to acquire and process frames at 3 Hz on a typical Pentium III processor, which proved sufficient in most cases.
The system fails in cases where moving persons are dressed in colors very similar to the scene background (e.g. people
dressed in white in a room with white walls). Fig. 2 shows a sequence of 7 panoramic images, where the person inside
the white boxes is being tracked, moving from the middle of the scene in the first image to its left side in the last one.
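The following sketch conveys the flavor of the histogram-matching step, using histogram intersection in the spirit of [5]. It is a simplified, single-region version of what is described above (no three-region body model, no topological reasoning, no model adaptation), and the chromaticity-based normalization shown is one common choice that we assume here, not necessarily the one used in the system.

```python
import numpy as np

def normalized_color_histogram(region, bins=8):
    """Histogram over illumination-normalized chromaticities
    (r, g) = (R, G) / (R + G + B), one common way to reduce sensitivity
    to changes in global image illumination."""
    rgb = region.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-9
    rg = (rgb / s)[:, :2]                    # two chromaticity channels suffice
    hist, _ = np.histogramdd(rg, bins=(bins, bins), range=((0, 1), (0, 1)))
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

def best_match(frame, model_hist, region_hw, search_window, step=4):
    """Scan a search window (y0, x0, y1, x1) of the unfolded panoramic
    frame and return the top-left corner whose region best matches the
    model histogram."""
    rh, rw = region_hw
    y0, x0, y1, x1 = search_window
    best_score, best_pos = -1.0, None
    for y in range(y0, y1 - rh, step):
        for x in range(x0, x1 - rw, step):
            h = normalized_color_histogram(frame[y:y + rh, x:x + rw])
            score = histogram_intersection(h, model_hist)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```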
3. Navigation Behaviors
The main navigation capabilities implemented are the motion towards a desired direction which is specified by the user
using a touch screen or voice commands, the motion in the middle of the free space defined by obstacles or environment
features and the following of a target (e.g. a moving person) specified by the user. In all cases, obstacle avoidance is also
implemented. The last two behaviors will be subsequently presented in more detail.
3.1. Motion in the Middle of Free Space
Our wheelchair is kinematically equivalent to a mobile robot of the unicycle type. We suppose that it is moving on a
planar surface inside a “corridor” formed by obstacles, which can be locally approximated by two straight parallel walls.
We further suppose that sensors able to measure range to the walls are mounted on the wheelchair (e.g. sonars, laser
range finder, panoramic camera) (fig. 3).
Fig. 3. Motion in the middle of free space. (The figure shows the corridor of width $\epsilon$, the wheelchair reference point $M(x, y)$ with heading $\theta$, the sensor point $C$ at distance $\delta$ from $M$, and the two rays of lengths $d_1$ and $d_2$ at angles $\varphi_1$ and $-\varphi_2$ from the heading direction.)
Consider an inertial coordinate system centered at a point $O$ of the plane and aligned with one of the walls, a moving
coordinate system attached to the middle $M$ of the wheelchair's wheel axis and another moving one attached
to the nodal point $C$ of the range sensor. Let $(x, y)$ be the position of the point $M$ and $\theta$ the orientation of the wheelchair
with respect to the inertial coordinate system. Let $\delta$ be the distance of the point $C$ from $M$ and $\epsilon > 0$ the width of the
corridor.
We suppose that the wheels of the mobile platform roll without slipping on the plane supporting the system. This induces
a nonholonomic constraint on the motion of the wheelchair, due to the fact that the instantaneous velocity lateral to the
heading direction of the mobile platform has to be zero. From this, we get the usual unicycle kinematic model [3] for the
mobile platform
$$\dot x = v \cos\theta, \qquad \dot y = v \sin\theta, \qquad \dot\theta = \omega, \qquad\qquad (1)$$
where $v \stackrel{\mathrm{def}}{=} \dot x \cos\theta + \dot y \sin\theta$ is the heading speed and $\omega$ is the angular velocity of the unicycle.
Consider the rays $R_1$ and $R_2$ in the forward directions $\varphi_1$ and $-\varphi_2$ with respect to the heading direction of the wheelchair
(fig. 3). We suppose that $R_1$ intersects the left wall, while $R_2$ intersects the right wall of the corridor, that $y \in (0, \epsilon)$
and $\theta \in (-\varphi_2, \varphi_1)$, with $0 < \varphi_1, \varphi_2 < \pi/2$, and we let $d_1$ and $d_2$ be the corresponding distances to the walls along
these rays. Let $\varphi_1 = \varphi_2 \stackrel{\mathrm{def}}{=} \varphi$.
The task of staying in the middle of the corridor consists in using the angular velocity $\omega$ of the system to drive the lateral
distance of the wheelchair from the walls, as well as its orientation, to desired values. This amounts to asymptotically
stabilizing the state $(y, \theta)$ of the subsystem
$$\dot y = v \sin\theta, \qquad \dot\theta = \omega \qquad\qquad (2)$$
of the unicycle kinematics (1) to $(y_\star, \theta_\star) = (\epsilon/2,\ 0)$, using only the angular velocity $\omega$ as the control of the system. The
heading speed $v(t)$ cannot be controlled, but we suppose that it is known at all times.
When reconstruction of the state $(y, \theta)$ from the sensory data is possible, a path-following control scheme, similar to the
one developed in [4], can be applied to the system.
When reconstruction of the state $(y, \theta)$ from the sensory data is not desirable, a motion control scheme based on the
scaled difference of inverse depths is possible. In the case that $v$ is time-varying but strictly positive ($v(t) \geq v_0 > 0$, $\forall t$),
the angular velocity control
$$\omega = -k\, v \sin\varphi \left( \frac{1}{d_1} - \frac{1}{d_2} \right) \qquad\qquad (3)$$
with positive gain $k$ can be shown to locally asymptotically stabilize the system (2) to $(y_\star, \theta_\star)$. An input-scaling
procedure [4] can be used to reduce the linearization of the closed-loop system around the desired equilibrium to a linear
time-invariant system. (Cf. [7] for details.)
Fig. 4. Motion in the middle of the free space (corridor following without state reconstruction, positive $v$): (a) state $(x, y, \theta)$ and heading speed $v$; (b) control $\omega$.
Proposition 1. Let the heading speed $v$ of the unicycle (1) be time-varying and assume that it is strictly positive at all
times, piecewise continuous and bounded. Let $d_1$ and $d_2$ be the distances specified in fig. 3. The angular velocity $\omega$ of
equation (3) with gain $k > 0$ locally asymptotically stabilizes the subsystem (2) of the unicycle kinematics to the equilibrium
$(y_\star, \theta_\star) = (\epsilon/2,\ 0)$.
When $v$ is negative, a similar approach, employing “backwards-looking” rays, can be used.
Fig. 4 shows MATLAB simulations of the system under the control (3), for the case where the heading speed of the
mobile robot is strictly positive and varies periodically with time. The state $(y, \theta)$ is not reconstructed in this case;
the control $\omega$ nevertheless stabilizes $(y, \theta)$ to the desired values, starting from an initial state away from the equilibrium.
In the experimental prototype developed, this behavior is implemented using sonars.
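To make the closed loop of equations (1)-(3) concrete, here is a minimal simulation sketch. The corridor width, ray angle, gain and speed profile are illustrative assumptions of ours, not the values behind fig. 4 or those of the prototype, and the ray-depth model assumes both rays always hit the proper walls.

```python
import numpy as np

# Illustrative values (not the paper's): corridor width EPS, symmetric
# forward-looking rays at angles +/-PHI, control gain K.
EPS, PHI, K = 2.0, np.pi / 4, 1.0

def ray_depths(y, theta):
    """Distances along the rays at theta + PHI (left wall, at y = EPS)
    and theta - PHI (right wall, at y = 0); both rays are assumed to
    hit the proper walls, i.e. |theta| < PHI and 0 < y < EPS."""
    d1 = (EPS - y) / np.sin(theta + PHI)
    d2 = y / np.sin(PHI - theta)
    return d1, d2

def simulate(T=30.0, dt=0.01):
    x, y, theta = 0.0, 0.4, 0.3                 # start off the centerline
    for i in range(int(T / dt)):
        v = 0.5 + 0.2 * np.sin(0.5 * i * dt)    # time-varying, strictly positive
        d1, d2 = ray_depths(y, theta)
        omega = -K * v * np.sin(PHI) * (1.0 / d1 - 1.0 / d2)   # control law (3)
        x += dt * v * np.cos(theta)             # unicycle kinematics (1)
        y += dt * v * np.sin(theta)
        theta += dt * omega
    return y, theta                             # converges toward (EPS/2, 0)

print(simulate())
```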
3.2. Person Following
In order to implement the person–following behavior, the wheelchair is equipped with a color panoramic camera and with
a ring of sonars. The camera specifies the orientation of the moving target with respect to the wheelchair, while the sonars
specify its distance from it. This section describes the design of a sensor–based control law implementing this behavior.
Fig. 5. Person following by the robotic wheelchair
Consider an inertial coordinate system centered at a point $O$ of the plane and a moving coordinate system attached to the
middle $M$ of the wheelchair's wheel axis. Let $(x, y)$ be the position of the point $M$ and $\theta$ the orientation of the wheelchair
with respect to the inertial coordinate system. Point $T$ in fig. 5 is the target of interest, moving along an (unknown)
trajectory. The target coordinates with respect to the inertial coordinate system are $(x_T, y_T, \theta_T)$.
The goal of the control system is to specify the wheelchair velocities $u \stackrel{\mathrm{def}}{=} (v, \omega)$ that will keep the target at a constant
position with respect to the wheelchair. This constant position can be represented by a virtual point $P$ (cf. fig. 5), with
constant relative coordinates $(x'_P, y'_P)$ with respect to the moving coordinate system and with coordinates $(x_P, y_P)$
with respect to the inertial one. The control goal can then be expressed as minimizing the deviation of point $P$ from
point $T$, i.e. as minimizing the tracking error
$$e \stackrel{\mathrm{def}}{=} (e_1, e_2) = (x_P - x_T,\ y_P - y_T).$$
It can easily be seen that $x_P = x + x'_P \cos\theta - y'_P \sin\theta$ and $y_P = y + x'_P \sin\theta + y'_P \cos\theta$. Since the motion of the
wheelchair is subject to the nonholonomic constraint (1), the evolution of the tracking error during the motion (error
equations) is
$$\dot e = A(\theta)\, u - \dot p_T \qquad\qquad (4)$$
with
$$A(\theta) = \begin{pmatrix} \cos\theta & -(x'_P \sin\theta + y'_P \cos\theta) \\ \sin\theta & x'_P \cos\theta - y'_P \sin\theta \end{pmatrix}, \qquad\qquad (5)$$
where $\dot p_T \stackrel{\mathrm{def}}{=} (\dot x_T, \dot y_T)$ is the translational velocity of the target. The matrix $A(\theta)$ is invertible whenever $x'_P$ is non-zero.
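As a quick check of the invertibility claim, the determinant of $A(\theta)$ can be computed symbolically; a minimal sketch, assuming SymPy is available (we write $x'_P, y'_P$ as x_P, y_P):

```python
import sympy as sp

theta, x_P, y_P = sp.symbols("theta x_P y_P", real=True)

# Matrix A(theta) of the error equations (4)-(5)
A = sp.Matrix([[sp.cos(theta), -(x_P * sp.sin(theta) + y_P * sp.cos(theta))],
               [sp.sin(theta),   x_P * sp.cos(theta) - y_P * sp.sin(theta)]])

# det A(theta) simplifies to x_P, so A is invertible iff x'_P != 0
print(sp.simplify(A.det()))
```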
Proposition 2. Let $x'_P$ be non-zero. If the target translational velocity $\dot p_T$ is uniformly bounded and sufficiently small
(but not necessarily zero when the error is zero), then the control law
$$u(e, \theta) = A^{-1}(\theta)\, K\, e, \qquad\qquad (6)$$
where $K$ is a Hurwitz matrix, will maintain the tracking error ultimately uniformly bounded (i.e. uniformly bounded after
a finite initial time).
The closed-loop system of equations (4), (6) can be seen as an exponentially stable nominal system with a nonvanishing
perturbation. Under the above conditions on $\dot p_T$, the proposition follows from known robustness results ([2]).
For simplicity, we choose $K = -k I$, where $I$ is the $2 \times 2$ unit matrix. The above state-feedback law then becomes
$u(e, \theta) = -k\, A^{-1}(\theta)\, e$. This control law depends on $\theta$ and on the tracking error $e$; the first is not known and the second
needs to be estimated from sensory data. It turns out that, while doing so, it is possible to eliminate the dependence of $u$
on $\theta$.
Let $\psi$ be the relative orientation of the target with respect to the wheelchair and $\rho$ the corresponding distance, as shown
in fig. 5. In section 2, a method for the estimation of $\psi$ from a sequence of color panoramic images was presented. The
tracking error $e$ is related to the sensory information $\psi$ and $\rho$ by
$$e_1 = x'_P \cos\theta - y'_P \sin\theta - \rho \cos(\theta + \psi), \qquad e_2 = x'_P \sin\theta + y'_P \cos\theta - \rho \sin(\theta + \psi). \qquad\qquad (7)$$
Using (7), the previous control law takes the form
$$u(\psi, \rho) = \begin{pmatrix} -k\,(x'_P - \rho \cos\psi) + y'_P\, \omega \\ -k\,(y'_P - \rho \sin\psi)\,/\,x'_P \end{pmatrix}, \qquad\qquad (8)$$
where the second component is the angular velocity $\omega$ appearing in the first. This law depends exclusively on sensory
information. The parameters $x'_P$ and $y'_P$ determine the position at which the
target will be held with respect to the wheelchair.
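A direct transcription of equation (8) into code is straightforward; the sketch below assumes illustrative parameter values of our own, with $\psi$ supplied by the panoramic tracker of section 2 and $\rho$ by the sonars.

```python
import numpy as np

def person_following_control(psi, rho, x_P=1.0, y_P=0.0, k=1.0):
    """Sensor-based control law (8): map target bearing psi (rad, from
    the panoramic camera) and range rho (m, from the sonars) to the
    heading speed v and angular velocity omega of the wheelchair.
    x_P (non-zero) and y_P set where the target is held relative to
    the wheelchair; the defaults here are illustrative assumptions."""
    omega = -k * (y_P - rho * np.sin(psi)) / x_P
    v = -k * (x_P - rho * np.cos(psi)) + y_P * omega
    return v, omega

# E.g., a target slightly to the left and beyond the desired distance
# yields positive v (speed up) and positive omega (turn left):
print(person_following_control(psi=np.radians(10), rho=2.0))
```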
Fig. 6 shows the results of MATLAB simulations for the case where we attempt to track a target moving along a
time-varying (sinusoidal) trajectory, using the sensor-based control law (8). Fig. 6.a shows the $(x, y)$ trajectories of the
wheelchair and of the target and fig. 6.b shows the tracking error $(e_1, e_2)$. We observe that the error remains bounded,
despite the fact that the target velocity was neglected in the control law design and despite the subsequent simplifications
of this control law. In experiments with the robotic wheelchair prototype, this source of error is negligible.
Fig. 6. Person following by the robotic wheelchair: (a) trajectories $(x, y)$ of target and wheelchair; (b) tracking error.
4. The Experimental Prototype
An experimental prototype of a robotic wheelchair was built at ICS–FORTH (fig. 7). It is based on a commercially–
available power wheelchair, where the sensors, the computing power and the electronics needed for the implementation of
the navigation behaviors and of the user interfaces (touch screen, voice commands) were developed as add–on modules.
The main hardware components of the robotic wheelchair (fig. 8) are:
1) The power wheelchair: A MEYRA Eurosprint wheelchair is used as the base of our system. The wheelchair is actuated
by two DC motors driving independently each rear wheel. The motion is controlled by the user, either directly through a
joystick, or indirectly through a computer, which communicates with the wheelchair through a serial port.
Fig. 7. Experimental prototype of robotic wheelchair
2) The sensors: The sensory modalities employed are odometry, sonars and panoramic vision. A ring of 6 Polaroid sonars
with a range of 6 m and a beam width of 20° is used, as well as a Neuronics panoramic camera with a paraboloid mirror
and a 360° field of view. The electronics interfacing the sensors with the computer system, as well as those necessary for
controlling the sensors and for data collection, were built in-house.
3) The computer system: It is composed of a portable computer and of a set of 5 PIC microcontrollers interconnected by a
CAN network. The portable computer processes the vision data and communicates with the user through the appropriate
software interfaces. Among the microcontrollers, one is dedicated to the communication with the portable computer and
with the wheelchair control unit through serial ports, three are dedicated to controlling the sonars and to receiving and
processing their data and one to receiving and processing odometry data.
Fig. 8. Hardware architecture of robotic wheelchair
Extensive tests have been performed with this prototype to evaluate its behavior in a variety of operating conditions.
Among its navigation capabilities, obstacle avoidance, the motion towards a specified direction and the motion in the
middle of the free space work quite robustly at this stage. The following of a moving target works reliably under good
lighting conditions. When significant variations in lighting occur during the movement of the wheelchair, the color–based
visual tracking may lose the target or confuse it with another one. Further work to enhance the independence of the
tracking scheme from lighting conditions is currently under way.
5. Conclusions
The experimental prototype of a robotic wheelchair with the capability of semi-autonomous navigation was presented.
Issues related to the processing and use of sensory information from sonars and a panoramic camera mounted on the
wheelchair, to the control of the system based on the sensory information, as well as to the hardware and software
architecture of the system were discussed. Such a system is expected to assist people with impaired mobility, who may
have limited fine motor control.
Acknowledgments
This work was partially supported by the General Secretariat for Research and Technology of the Hellenic Ministry of
Development through project DRIVER, contract No. 98AMEA 18. The contributions of M. Maniadakis, O. Bantiche and
D. Muti in this project are gratefully acknowledged.
6. References
[1] Argyros, A.A. and Bergholm, F., “Combining Central and Peripheral Vision for Reactive Robot Navigation”, Proc.
Computer Vision and Pattern Recognition Conf. (CVPR’99), Fort Collins, Colorado, USA, June 23-25, 1999.
[2] Khalil, H.K., Nonlinear Systems, Macmillan Publishing Co., 1992.
[3] Laumond, J.–P., Ed., Robot Motion Planning and Control, Lecture Notes in Control and Information Sciences, 229,
Springer–Verlag, 1998.
[4] Samson, C., “Control of Chained Systems: Application to Path Following and Time–Varying Point–Stabilization of
Mobile Robots”, IEEE Trans. on Automatic Control 40, 64-77, 1995.
[5] Swain, M.J. and Ballard, D.H., “Indexing via color histograms”, Proc. Intl. Conf. on Computer Vision, 1990.
[6] Tsakiris, D.P., Rives, P. and Samson, C., “Extending Visual Servoing Techniques to Nonholonomic Mobile Robots”,
in The Confluence of Vision and Control, Eds. Hager, G., Kriegman, D. and Morse, S., Lecture Notes in Control and
Information Systems (LNCIS 237), Springer-Verlag, 1998.
[7] Tsakiris, D.P. and Argyros, A.A., “Corridor Following by Mobile Robots Equipped with Panoramic Cameras”, Proc.
8th IEEE Medit. Conf. on Control and Automation (MED’2000), Rio, Greece, July 17-19, 2000.