
Appears in the Proceedings of the 1992 IEEE Conference on Robotics and Automation (ICRA-92), pp. 2719–2724.

SSS: A Hybrid Architecture Applied to Robot Navigation

Jonathan H. Connell
IBM T.J. Watson Research Center, Box 704, Yorktown Heights NY 10598

Abstract

This paper describes a new three layer architecture, SSS, for robot control. It combines a servo-control layer, a "subsumption" layer, and a symbolic layer in a way that allows the advantages of each technique to be fully exploited. The key to this synergy is the interface between the individual subsystems. We describe how to build situation recognizers that bridge the gap between the servo and subsumption layers, and event detectors that link the subsumption and symbolic layers. The development of such a combined system is illustrated by a fully implemented indoor navigation example. The resulting real robot, called "TJ", is able to automatically map office building environments and smoothly navigate through them at the rapid speed of 2.6 feet per second.

1: Introduction

In the "SSS" architecture (an acronym for "servo, subsumption, symbolic" systems) we have tried to combine the best features of conventional servo-systems and signal processing, with multi-agent reactive controllers and state-based symbolic AI systems.

For instance, servo-controllers have trouble with many real-world phenomena which are not understood well enough to be modelled accurately or which are non-linear. Behavior-based or subsumption systems (e.g. [2, 5]), on the other hand, do not impose as many modelling constraints on the world and are good at making rapid, radical decisions. Yet such systems often yield jerky motions due to their slow sample rate and their discrete view of the world. This shortcoming can in turn be easily rectified by adding appropriate servo-systems which are particularly good at making smooth motions.

Behavior-based systems also have problems with world modelling and persistent state. Since behavior-based systems are often implemented in a distributed fashion, there is no good place to put a world model. Indeed, many of the adherents of this school claim that this is a beneficial feature of such systems [3]. However, for some tasks, such as navigation, it is certainly convenient to have higher-level centralized representations. This is the forté of standard hierarchical symbolic programming languages. The usual stumbling block of such systems, real-time control, can be finessed by delegating tactical authority to the subsumption and servo control layers.

[Figure 1 diagram: a three-layer stack. The Symbolic layer (discrete ∆s, ∆t) is linked to the Subsumption layer by event detectors (upward) and process parametrization (downward); the Subsumption layer (discrete ∆s, continuous dt) is linked to the Servo layer by situation recognizers (upward) and setpoint selection (downward); the Servo layer (continuous ds, dt) connects to the Sensors and Actuators.]

Figure 1 - The SSS architecture combines 3 control techniques which can be characterized by their treatment of time and space. Special interfaces allow the layers of this system to cooperate effectively.

The three layers in our system come from progressively quantizing first space then time. As shown in figure 1, the servo-style system basically operates in a domain of continuous time and continuous space. That is, these systems constantly monitor the state of the world and typically represent this state as an ensemble of scalar values. Behavior-based systems also constantly check their sensors but their representations tend to be special-purpose recognizers for certain types of situations. In this way behavior-based systems discretize the possible states of the world into a small number of special task-dependent categories. Symbolic systems take this one step further and also discretize time on the basis of significant events. They commonly use terms such as "after X do Y" and "perform A until B happens". Since we create temporal events on the basis of changes in spatial situations, it does not make sense for us to discretize time before space. For the same reason, we do not include a fourth layer in which space is continuous but time is discrete.
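To make this division of time and space concrete, here is a minimal Python sketch (entirely illustrative; the class names, rates, and numeric constants are our own assumptions, not code from the paper) of how the three layers could be arranged as nested update loops: a fast servo loop over continuous scalar values, a slower behavior loop over a few discrete situations, and a symbolic layer that is consulted only when an event fires.

    # Illustrative sketch only: the three SSS layers as nested update loops.
    # All names (ServoLayer, BehaviorLayer, ...) are invented for exposition.
    class ServoLayer:                      # continuous space, continuous time
        def __init__(self):
            self.setpoint = 0.0            # e.g. a wheel speed chosen by behaviors
            self.value = 0.0
        def update(self, dt):
            # crude proportional tracking of the current setpoint
            self.value += 0.5 * (self.setpoint - self.value) * dt

    class BehaviorLayer:                   # discrete space, continuous time
        def __init__(self, servo):
            self.servo = servo
        def update(self, situation):
            # map a discrete situation onto one of a few setpoints
            self.servo.setpoint = {"clear": 0.8, "near-goal": 0.3, "blocked": 0.0}[situation]
            # report an event only when something symbolically relevant happens
            return "path-blocked" if situation == "blocked" else None

    class SymbolicLayer:                   # discrete space, discrete time
        def handle(self, event):
            print("replanning because of", event)

    servo = ServoLayer()
    behaviors = BehaviorLayer(servo)
    symbolic = SymbolicLayer()
    for step in range(20):                 # behaviors at a slow, fixed rate...
        situation = "blocked" if step == 19 else "clear"
        event = behaviors.update(situation)
        if event:
            symbolic.handle(event)         # ...symbolic layer only on events
        for _ in range(34):                # ...servo loop many times faster
            servo.update(1.0 / 256.0)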
In order to use these three fairly different technologies we must design effective interfaces between them. The first interface is the command transformation between the behavior-based layer and the underlying servos. Subsumption-style controllers typically act by adjusting the setpoints of the servo-loops, such as the wheel speed controller, to one of a few values. All relevant PID calculations and trapezoidal profile generation are then performed transparently by the underlying servo system.
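As a rough illustration of this division of labor, the following sketch (assumed code, not the robot's firmware) shows a velocity servo that accepts one of a few setpoints from above and internally limits acceleration, which is the essence of trapezoidal profile generation; the numeric limits are invented.

    # Sketch of a setpoint-driven velocity servo with acceleration limiting
    # (the "trapezoidal profile" idea). Parameter values are assumptions.
    class VelocityServo:
        def __init__(self, max_accel=20.0):     # inches/sec^2, assumed
            self.setpoint = 0.0                 # commanded speed from behaviors
            self.speed = 0.0                    # current profiled speed
            self.max_accel = max_accel

        def set_setpoint(self, speed):
            # behaviors pick one of a few discrete values, e.g. 0, 16, or 32 in/s
            self.setpoint = speed

        def update(self, dt):
            # ramp toward the setpoint no faster than max_accel allows, so the
            # speed-vs-time trace forms a trapezoid
            max_step = self.max_accel * dt
            error = self.setpoint - self.speed
            self.speed += max(-max_step, min(max_step, error))
            return self.speed

    servo = VelocityServo()
    servo.set_setpoint(32.0)                    # "go fast", chosen by a behavior
    for _ in range(10):
        servo.update(1.0 / 256.0)               # servo loop at 256 Hz
    print(round(servo.speed, 2))                # still ramping: about 0.78 in/s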
The sensory interface from a signal-processing front-end to the subsumption controller is a little more involved. A productive way to view this interpretation process is in the context of "matched filters" [16, 6]. The idea here is that, for a particular task, certain classes of sensory states are equivalent since they call for the same motor response by the robot. There are typically some key features that, for the limited range of experiences the robot is likely to encounter, adequately discriminate the relevant situations from all others. Such "matched filter" recognizers are the mechanism by which spatial parsing occurs.
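As a hedged illustration, a "matched filter" recognizer can be as simple as a boolean test over a handful of raw readings; the sensor layout, thresholds, and function names below are assumptions for exposition only.

    # Sketch of "matched filter" situation recognizers: each one collapses a
    # raw sensor vector into a task-relevant boolean. Thresholds are invented.
    def left_opening(ir_left, ir_right):
        # no infrared return on the left while the right wall is still visible
        return ir_left is None and ir_right is not None

    def wall_too_close(ir_left, ir_right, min_range=9.0):   # inches, assumed
        ranges = [r for r in (ir_left, ir_right) if r is not None]
        return any(r < min_range for r in ranges)

    def check_situations(ir_left, ir_right):
        # the subsumption layer sees only this small discrete description
        return {
            "left-opening": left_opening(ir_left, ir_right),
            "wall-too-close": wall_too_close(ir_left, ir_right),
        }

    print(check_situations(ir_left=None, ir_right=14.0))
    # {'left-opening': True, 'wall-too-close': False}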
The command interface between the symbolic and subsumption layers consists of the ability to turn each behavior on or off selectively [7], and to parameterize certain modules. These event-like commands are "latched" and continue to remain in effect without requiring constant renewal by the symbolic system.
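A minimal sketch of such a latched command interface is given below; the method names (enable, disable, set_param) are hypothetical stand-ins for the enable!/disable! and parameter-setting operations that appear later in figure 3.

    # Sketch of a latched command interface between the symbolic and
    # subsumption layers. Names and structure are assumed for illustration.
    class SubsumptionLayer:
        def __init__(self):
            self.enabled = set()        # which behaviors are currently active
            self.params = {}            # latched parameters, e.g. travel distance

        def enable(self, behavior):
            self.enabled.add(behavior)  # stays in force until explicitly changed

        def disable(self, behavior):
            self.enabled.discard(behavior)

        def set_param(self, name, value):
            self.params[name] = value   # latched: no periodic renewal needed

    layer = SubsumptionLayer()
    layer.enable("wall-follow")
    layer.enable("stay-aligned")
    layer.set_param("travel", 564)      # inches to go on this path segment
    # the symbolic system can now fall silent until the next event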
The sensor interface between the behavior-based layer and the symbolic layer is accomplished by a mechanism which looks for the first instant in which various situation recognizers are all valid. For instance, when the robot has not yet reached its goal but notices that it has not been able to make progress recently, this generates a "path-blocked" event for the symbolic layer. To help decouple the symbolic system from real-time demands we have added a structure called the "contingency table". This table allows the symbolic system to pre-compile what actions to take when certain events occur, much as baseball outfielders yell to each other "the play's to second" before a pitch. The entries in this table reflect what the symbolic system expects to occur and each embodies a one-step plan for coping with the actual outcome.

Figure 2 - TJ has a 3-wheeled omni-directional base and uses both sonar ranging and infrared proximity detection for navigation. All servo-loops, subsumption modules, and the contingency table are on-board. The robot receives path segment commands over a spread-spectrum radio link.

2: The navigation task

The task for our robot (shown in figure 2) is to map a collection of corridors and doorways and then rapidly navigate from one office in the building to another. To accomplish this we had to address two basic navigation problems. The first of these is compensating for the variability of the environment. There are often people in the halls, doors open and close by random amounts, some days there are large stacks of boxes, and the trash cans are always moving around.

We solve the variability problem by restricting ourselves to very coarse geometric maps. We record only the distance and orientations between relevant intersections. The actual details of the path between two points are never recorded, thus there are never any details to correct. However, to make use of such a coarse map the robot must be able to reliably follow the segments so described [4]. Fortunately, behavior-based systems are particularly good at this sort of local navigation.

The other basic navigation problem is knowing when the robot has arrived back in a place it has been before. The difficulty is that, using standard robot sensory systems, an individual office or particular intersection is not distinguishable from any other. One approach is to use odometry to determine the absolute position of the robot. However, it is well known that over long distances such measurements can drift quite severely due to differing surface traction and non-planar areas.
We solve the loop problem by exploiting the geometry of the environment. In most office buildings all corridors are more or less straight and meet at right angles. Therefore we measure the length of each path segment and treat this as a straight line. Similarly, when the robot switches from one path to another we force the turn to be a multiple of 90 degrees. This is essentially an odometric representation which is recalibrated in both heading and travel distance at each intersection. In this way we maintain a coarse (x, y) position estimate of the robot which can be compared to the stored coordinates of relevant places.
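The following sketch shows one way this environmentally constrained dead reckoning could work; the snapping rule, data layout, and the sample segment list are illustrative assumptions, not the robot's actual code.

    # Sketch of coarse, rectilinear odometry: headings are snapped to
    # multiples of 90 degrees and position is updated per path segment.
    def snap_to_right_angle(heading_deg):
        # corridors are assumed straight and orthogonal, so round the turn
        return (round(heading_deg / 90.0) * 90) % 360

    def advance(pose, segment_length):
        # pose is (x, y, heading); move one whole segment along the snapped heading
        x, y, heading = pose
        moves = {0: (1, 0), 90: (0, 1), 180: (-1, 0), 270: (0, -1)}
        dx, dy = moves[heading]
        return (x + dx * segment_length, y + dy * segment_length, heading)

    pose = (0.0, 0.0, 0)                    # start at the origin, facing "east"
    for length, turn in [(564, 90), (200, 90), (564, 90), (200, 90)]:
        pose = advance(pose, length)
        pose = (pose[0], pose[1], snap_to_right_angle(pose[2] + turn))
    print(pose)                             # a rectangular loop closes at (0.0, 0.0, 0)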
2.1: Tactical navigation

Tactical, or moment-to-moment control of the robot is handled by the servo and subsumption layers of our architecture. The servo layer consists of two velocity controllers, one for translation and one for rotation, on the robot base. These proportional controllers operate at 256 Hz and have acceleration-limited trapezoidal profiling.

Built on top of these servo-controllers are various reactive behaviors which run at a rate of 7.5 Hz. One of the more important of these is wall following. For this we use a carefully arranged set of side-looking infrared proximity detectors. The method for deriving specific responses for each sensory state is detailed in [4, 6] for similar systems. While some researchers have tried to fuse sensory data over time and then fit line segments to it, most "walls" that we want to follow are not really flat. There are always gaps caused by doors, and often junk in the hall that makes the walls look lumpy. This is the same reason we did not try to implement this activity as a servo-controller: it is very hard to directly extract angle and offset distance from the type of sensory information we have available. The matched filter approach lets us get away with only partial representations of the environment relative to the robot.

There are also two tactical navigation modules based on odometry. The first of these looks at the cumulative travel and slows or stops the robot when the value gets close to a specified distance. A similar setup exists based on the average heading of the robot. The average heading is computed by slowly shifting the old average heading value toward the robot's current direction of travel. If the robot is only turning in place the average heading does not change, but after the robot has travelled about 5 feet in a new direction the value will be very close to the actual heading. A special behavior steers the robot to keep this "tail" straight behind, which in turn causes the robot to remain aligned with its average heading. This is very useful for correcting the robot's direction of travel after it has veered around an obstacle. If the detour is short, the average heading will not have been affected much.
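One plausible form of this averaging is a distance-weighted blend with angle wrapping, sketched below; the gain constant and function names are assumptions rather than values from the paper.

    # Sketch of an "average heading" that drifts toward the current direction
    # of travel only as distance is covered. The gain per inch is invented.
    def angle_diff(a, b):
        # smallest signed difference between two headings, in degrees
        return (a - b + 180.0) % 360.0 - 180.0

    def update_average_heading(avg, current, distance_moved, gain=0.05):
        # turning in place (distance_moved == 0) leaves the average untouched;
        # after several feet of travel the average approaches the new heading
        blend = min(1.0, gain * distance_moved)
        return (avg + blend * angle_diff(current, avg)) % 360.0

    def steering_correction(avg, current, k=0.5):
        # a behavior can steer to keep the "tail" (average heading) straight behind
        return -k * angle_diff(current, avg)

    avg = 0.0
    for _ in range(60):                       # travel ~60 inches on a 30 degree heading
        avg = update_average_heading(avg, 30.0, distance_moved=1.0)
    print(round(avg, 1))                      # about 28.6: close to 30 after ~5 feet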
The average heading signal provides an interesting opportunity for the symbolic system to deliberately "fake out" the associated behaviors. For instance, the symbolic system can cleanly specify a new direction of travel for the robot by yanking around the robot's "tail". This method is better than commanding the robot to follow a new absolute direction, especially for cases in which the robot was not aligned precisely with the original corridor or in which the new corridor seems to bend gradually (in actuality or from odometric drift). Instead of forcing the robot to continually scrape its way along one wall or the other, the average heading will eventually adjust itself to reflect the direction along which progress has been made and thereby allow the robot to proceed smoothly.

Although they sound like servo-controllers, the two odometric behaviors were put in the subsumption layer for two reasons. First, they do not require fine-grained error signals. The alignment behavior is quiescent if the robot is "close" to the right heading, and the travel behavior only slows the robot when it is "near" the goal. Second, and more importantly, we wanted these behaviors to interact with other subsumption behaviors. For example, the alignment behavior takes precedence over the part of wall following that moves the robot closer to a surface, however it is not as important as collision avoidance. Many of these other behaviors are necessarily cast in a subsumption-style framework because of the limited quality of sensory information available. Thus, to accommodate the appropriate dominance relations, the alignment and travel limiting behaviors were also included in this layer.

2.2: Strategic navigation

The strategic part of navigation – where to go next – is handled by the symbolic layer. To provide this information, our symbolic system maintains a coarse geometric map of the robot's world. This map consists of a number of landmarks, each with a type annotation, and a number of paths between them, each with an associated length. The landmarks used in the map are the sudden appearance and disappearance of side walls. These are detected by long range IR proximity detectors on each side of the robot. Normally, in a corridor the robot would continuously perceive both walls. When it gets to an intersection, suddenly there will be no wall within range of one or both of these sensors. Similarly, when the robot is cruising down a corridor and passes an office, the IR beam will enter the room far enough so that no return is detected.
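A sketch of how such appearance and disappearance landmarks might be derived from the side-looking IR signals is shown below; the boolean encoding and event names are illustrative assumptions.

    # Sketch of landmark detection from side IR proximity sensors: an event is
    # generated when a wall appears or disappears on either side. Illustrative only.
    def wall_events(prev, curr):
        # prev and curr are dicts like {"left": True, "right": False} meaning
        # "an IR return (a wall) is currently seen on that side"
        events = []
        for side in ("left", "right"):
            if prev[side] and not curr[side]:
                events.append(("opening-appears", side))    # wall just vanished
            elif not prev[side] and curr[side]:
                events.append(("opening-ends", side))       # wall came back
        return events

    readings = [
        {"left": True,  "right": True},    # cruising down a corridor
        {"left": True,  "right": True},
        {"left": False, "right": True},    # passing an office door on the left
        {"left": True,  "right": True},    # wall resumes
    ]
    for prev, curr in zip(readings, readings[1:]):
        print(wall_events(prev, curr))
    # [] then [('opening-appears', 'left')] then [('opening-ends', 'left')]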
Once a map has been created, an efficient route can be plotted by a spreading activation algorithm [10] or some other method. To traverse this route, the symbolic system enables whatever collection of subsumption modules it deems appropriate for the first segment and parameterizes their operation in appropriate ways. The symbolic system does not need to constantly fiddle with the subsumption layer, it only has to reconfigure the layer when specific events occur. In our navigation example, this typically happens when the robot has reached the end of one path segment and needs to have its course altered to put it on the next segment. In this case, the alteration of subsumption parameters must be swift or the robot will overshoot the intersection. To relieve this real-time burden from the symbolic system we create a "contingency table" such as shown in figure 3 (in pseudo-LISP code). This structure is similar to the "event dispatch" clauses in the original subsumption architecture [2].

    (do-until-return
      (setq recognizers (check-situations))
      (cond ((and (near-distance? recognizers)
                  (aligned-okay? recognizers)
                  (left-opening? recognizers))
             (inc-heading! 90)
             (new-travel! 564)
             (return recognizers))
            ((beyond-distance? recognizers)
             (inc-heading! 180)
             (new-travel! 48)
             (return recognizers))
            ((no-progress? recognizers)
             (disable! stay-aligned)
             (enable! scan-for-escape)
             (return recognizers))
            (t nil)))

Figure 3 - The "contingency table" continuously monitors a collection of special-purpose situation recognizers. When a specified conjunction occurs, this "event" causes a new set of permissions and parameters to be passed to the subsumption system. After this, the symbolic system builds a new table.

The contingency table allows the symbolic system to specify a number of events and what response to make in each case. As suggested by the code, this contingency table module continually checks the status of a number of special-purpose situation recognizers. When one of the listed conjunctions occurs, the module performs the specified alterations to the subsumption controller then returns the triggering condition to the symbolic system. If two or more events occur simultaneously, the action for the one listed first is taken. After this one burst of activity the old contingency table is flushed and the symbolic system is free to load an entirely new table.

For the navigation application, the contingency table includes a check for an opening in the correct direction at an appropriate path displacement, along with commands to reset the orientation and travel distances for the next path segment (see figure 3). Notice that the robot also checks to make sure it is aligned with the average heading when it observes an IR opening to the left. This is to keep the robot from mistakenly triggering the next subsumption configuration just because the IR signal happened to vanish as the robot was veering around some obstacle in its path.

3: Experimental results

Our first experiment was aimed at validating the claim that environmentally constrained odometry allows us to solve the loop navigation problem. For this we provided the robot with a rough path to follow of the form: ((travel1 turn1) (travel2 turn2) ...). Travel was specified as an integral number of inches and turns were specified as an integral number of degrees. The top half of figure 4 shows the path the robot took according to its odometry. The circles with small projections indicate where the robot observed openings in the directions indicated. The circles are the same size as the robot (12 inches diameter) to provide scale. Notice that neither of the loops appears to be closed. Based on this information it is questionable whether the corridor found in the middle of the map is the same one which the robot later traversed.

The symbolic map, which appears in the lower half of figure 4, correctly matches the important intersections to each other. In this map, nodes are offset slightly to the side of the path segments. Each node's type is denoted iconically as a corner – two short lines indicating the direction of the opening and the direction of free travel. This symmetry reflects the fact that the robot is likely to perceive the same corner when coming out through the marked aperture. Corner nodes are placed at the robot position where they are sensed; no adjustment is made for the width of the corridor. This is partially compensated by a wide tolerance in matching nodes.

When a new opening is detected the robot compares it with all the other nodes in the map that have similar opening directions. If there is a match which is no more than 6 feet from the robot's current position in either x or y, the new opening is considered to be a sighting of the old node. In this case we average the positions of the robot and the node and move both to this new location. This merging operation is why the corridors do not look perfectly straight in the symbolic map. However, when the robot is instructed to follow the same path a second time and update the map, the changes are minimal.
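The sketch below illustrates this matching and merging rule; the 6-foot (72 inch) tolerance comes from the text, while the data layout and function names are assumptions.

    # Sketch of landmark matching: a newly sensed opening is merged with an
    # existing map node if it has a similar opening direction and lies within
    # 6 feet (72 inches) in both x and y. Data layout is assumed.
    TOLERANCE = 72.0    # inches

    def match_or_add(nodes, robot_xy, opening_dir):
        rx, ry = robot_xy
        for node in nodes:
            nx, ny = node["pos"]
            if (node["dir"] == opening_dir and
                    abs(nx - rx) <= TOLERANCE and abs(ny - ry) <= TOLERANCE):
                # sighting of an old node: average the two positions and move
                # both the node and the robot estimate to that point
                merged = ((nx + rx) / 2.0, (ny + ry) / 2.0)
                node["pos"] = merged
                return node, merged        # also used to recalibrate the robot
        node = {"pos": (rx, ry), "dir": opening_dir}
        nodes.append(node)                 # genuinely new landmark
        return node, (rx, ry)

    nodes = [{"pos": (564.0, 0.0), "dir": "left"}]
    node, new_robot_xy = match_or_add(nodes, robot_xy=(560.0, 40.0), opening_dir="left")
    print(node["pos"], new_robot_xy)       # both snap to (562.0, 20.0)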
Figure 4 - Geometric constraints of the environment allow the robot to build maps with loops despite poor absolute odometry. The top picture shows the path the robot thought it took. The hair-like projections indicate openings it sensed. The lower picture shows the map the robot built during this exploration.

The second experiment shows that the subsumption system is sufficiently competent at local navigation to allow the use of a coarse geometric map. In this experiment we had the symbolic system plan a path (from node 10 to node 5) using the map it generated in the first part. We then told it to configure the two lower layers of the architecture to follow this path. The result of five consecutive runs is shown in figure 5. The displayed paths are based on the robot's odometry and hence do not accurately reflect the robot's true position in space over large displacements. However, the local details are qualitatively correct (some of the angularity is due to the time lag between successive position readings). On different runs we started the robot in slightly different directions. We also altered the positions of various obstacles along the path and changed the states of some of the doors on the corridor. This led to slightly different forms for each of the runs. Despite these variations in initial heading and the configuration of the environment, the symbolic system was able to successfully navigate the robot to its goal in all 5 runs.

Figure 5 - These traces illustrate the odometrically perceived path of the robot on 5 consecutive runs along the same path. Notice that the wiggles along each segment are different since obstacles were moved and doors were altered between runs. To the symbolic system, however, all runs seemed identical.

4: Discussion

The SSS architecture is related to a number of recent projects in robot control architectures. The greatest degree of similarity is with the ATLANTIS system developed at JPL [11, 9]. In this system there is a subsumption-style "control" layer, an operating system-like "sequencing" layer, and a model-building "deliberative" layer. As in our navigation scheme, this system delegates the task of following a particular path segment to a behavior-based system while using a rough topological map to specify the turns between segments. A mobile robot developed at Bell Labs [15] also uses this same task decomposition, as does the AuRA system [1] from UMASS Amherst.
However, in ATLANTIS (and in RAPs [8], its intellectual predecessor) each of the behaviors in the subsumption controller is associated with some "goal" and can report on its progress toward achievement of this objective. A control system developed at Hughes [13] also requires a similar signalling of process "failures". The problem with this type of system is that detecting true failures at such a low level can be difficult. In our system, external occurrences, not the internal state of some subsidiary process, determine when the behavior-based system is reconfigured.

ATLANTIS also places much more emphasis on the sequencer layer than we do. In ATLANTIS the deliberative layer essentially builds a partially ordered "universal plan" [14] which it downloads to the sequencer layer for execution. Similarly, one of the control systems used on HILARE [12] generates off-line a "mission" plan which is passed to a supervisor module that slowly doles out pieces to the "surveillance manager" for execution. In such systems, the symbolic system is completely out of the control loop during the actual performance of the prescribed task. This exclusion allows for only simple fixes to plans and makes it difficult to do things such as update the traversability of some segment in the map. In contrast, the contingency table in the SSS system only decouples the symbolic system from the most rapid form of the decision making – the symbolic system must still constantly replan the strategy and monitor the execution of each step.

In summary, with our SSS system we have attempted to provide a recipe for constructing fast-response, goal-directed robot control systems. We suggest combining a linear servo-like system, a reactive rule-like system, and a discrete-time symbolic system in the same controller. This is not to say a good robot could not be built using just one of these technologies exclusively. We simply believe that certain parts of the problem are most easily handled by different technologies. To this end we have tried to explain the types of interfaces between systems that we have found to be effective. To summarize, the upward sensory links are based on the temporal concepts of situations and events, while the downward command links are based on parameter adjustment and setpoint selection.

The SSS architecture has been used for indoor navigation and proved quite satisfactory. Developing a robot which moves at an average speed of 32 inches per second (the peak speed is higher), but which can still reliably navigate to a specified goal, is a non-trivial problem. It required using a subsumption approach to competently swerve around obstacles, a symbolic map system to keep the robot on track, and a number of servo controllers to make the robot move smoothly.

We plan to extend this work to a number of different navigation problems including the traversal of open lobbies, movement within a particular room, and outdoor patrol in a parking lot. We also intend to use a similar system to acquire and manipulate objects using a larger mobile robot with an on-board arm.

References

[1] Ronald C. Arkin, "Motor Schema Based Navigation for a Mobile Robot", Proceedings of the IEEE Conference on Robotics and Automation, 264-271, 1987.
[2] Rodney Brooks, "A Layered Intelligent Control System for a Mobile Robot", IEEE Journal of Robotics and Automation, RA-2, 14-23, April 1986.
[3] Rodney Brooks, "Intelligence without Representation", Artificial Intelligence, vol. 47, 139-160, 1991.
[4] Jonathan H. Connell, "Navigation by Path Remembering", Proceedings of the 1988 SPIE Conference on Mobile Robots, 383-390.
[5] Jonathan H. Connell, Minimalist Mobile Robotics: A Colony-style Architecture for a Mobile Robot, Academic Press, Cambridge MA, 1990 (also MIT TR-1151).
[6] Jonathan H. Connell, "Controlling a Robot Using Partial Representations", Proceedings of the 1991 SPIE Conference on Mobile Robots, (to appear).
[7] Jonathan H. Connell and Paul Viola, "Cooperative Control of a Semi-autonomous Mobile Robot", Proceedings of the IEEE Conference on Robotics and Automation, Cincinnati OH, 1118-1121, May 1990.
[8] R. James Firby, "An Investigation into Reactive Planning in Complex Domains", Proceedings of AAAI-87, 202-206, 1987.
[9] Erann Gat, "Taking the Second Left: Reliable Goal-Directed Reactive Control for Real-world Autonomous Mobile Robots", Ph.D. thesis, Virginia Polytechnic Institute and State University, May 1991.
[10] Maja J. Mataric, "Environment Learning Using a Distributed Representation", Proceedings of the IEEE Conference on Robotics and Automation, 402-406, 1990.
[11] David P. Miller and Erann Gat, "Exploiting Known Topologies to Navigate with Low-Computation Sensing", Proceedings of the 1991 SPIE Conference on Sensor Fusion, 1990.
[12] Fabrice R. Noreils and Raja G. Chatila, "Control of Mobile Robot Actions", Proceedings of the IEEE Conference on Robotics and Automation, 701-707, 1989.
[13] David W. Payton, "An Architecture for Reflexive Autonomous Vehicle Control", Proceedings of the IEEE Conference on Robotics and Automation, 1838-1845, 1986.
[14] Marcel Schoppers, "Universal Plans for Reactive Robots in Unpredictable Environments", Proceedings of IJCAI-87, Milan Italy, 1039-1046, August 1987.
[15] Monnett Hanvey Soldo, "Reactive and Preplanned Control in a Mobile Robot", Proceedings of the IEEE Conference on Robotics and Automation, Cincinnati OH, 1128-1132, May 1990.
[16] Rüdiger Wehner, "Matched Filters - Neural Models of the External World", Journal of Comparative Physiology, 161:511-531.
