
BEHAVIORAL AND BRAIN SCIENCES (1992) 15, 425-492

Printed in the United States of America

Precis of Unified theories of cognition

Allen Newell
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
Electronic mail: newell@cs.cmu.edu

Abstract: The book presents the case that cognitive science should turn its attention to developing theories of human cognition that
cover the full range of human perceptual, cognitive, and action phenomena. Cognitive science has now produced a massive number
of high-quality regularities with many microtheories that reveal important mechanisms. The need for integration is pressing and will
continue to increase. Equally important, cognitive science now has the theoretical concepts and tools to support serious attempts at
unified theories. The argument is made entirely by presenting an exemplar unified theory of cognition both to show what a real
unified theory would be like and to provide convincing evidence that such theories are feasible. The exemplar is SOAR, a cognitive
architecture, which is realized as a software system. After a detailed discussion of the architecture and its properties, with its relation
to the constraints on cognition in the real world and to existing ideas in cognitive science, SOAR is used as theory for a wide range of
cognitive phenomena: immediate responses (stimulus-response compatibility and the Sternberg phenomena); discrete motor skills
(transcription typing); memory and learning (episodic memory and the acquisition of skill through practice); problem solving
(cryptarithmetic puzzles and syllogistic reasoning); language (sentence verification and taking instructions); and development
(transitions in the balance beam task). The treatments vary in depth and adequacy, but they clearly reveal a single, highly specific,
operational theory that works over the entire range of human cognition. SOAR is presented as an exemplar unified theory, not as the
sole candidate. Cognitive science is not ready yet for a single theory; there must be multiple attempts. But cognitive science must
begin to work toward such unified theories.

Keywords: artificial intelligence; chunking; cognition; cognitive science; computation; problem solving; production systems; SOAR;
symbol systems

The book begins by urging on psychology unified theories of cognition:

Psychology has arrived at the possibility of unified theories of cognition - theories that gain their power by positing a single system of mechanisms that operate together to produce the full range of human cognition. I do not say they are here, but they are within reach and we should strive to attain them.

My goal is to convince the reader that unified theories of cognition are really worth striving for - now, as we move into the nineties. This cannot be done just by talking about it. An exemplar candidate is put forth to illustrate concretely what a unified theory of cognition means and why it should be a goal for cognitive science. The candidate is a theory (and system) called SOAR (Laird et al. 1987).

The book is the written version of the William James Lectures, delivered at Harvard University in spring 1987. Its stance is personal, reflecting the author's thirty years of research in cognitive science, although this precis will be unable to convey much of this flavor.

Chapter 1: Introduction

The first chapter describes the enterprise. It grounds the concerns for how cognitive science should proceed by reflecting on a well-known earlier paper entitled "You can't play 20 questions with nature and win" (Newell 1973a), which even then fretted about the gap between the empirical and theoretical progress in cognitive psychology and called for more integrative theories. This book may be seen as a step toward answering that call.

The nature of theories. Chapter 1 discusses the notion of theory, to ground communication, building on some concrete examples: Fitts's Law, the power law of practice, and a theory of search in problem spaces. There is nothing special about a theory just because it deals with the human mind. It is important, however, that the theory make predictions, not the theorist. Theories are always approximate, often deliberately so, in order to deliver useful answers. Theories cumulate, being refined and reformulated, corrected and expanded. This view is Lakatosian, rather than Popperian: A science has investments in its theories and it is better to correct one than to discard it.

What are unified theories of cognition? Unified theories of cognition are single sets of mechanisms that cover all of cognition - problem solving, decision making, routine action, memory, learning, skill, perception, motor activity, language, motivation, emotion, imagining, dreaming, daydreaming, and so on. Cognition must be taken broadly to include perception and motor activity. No unified theory of cognition will deal with the full list above
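The two quantitative regularities named among these concrete examples have one-line forms, sketched below. The functional forms are the standard ones; the coefficient values are invented for illustration, not fitted to data.

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's Law: movement time grows linearly with the index of
    difficulty log2(2D/W); a and b are empirical coefficients."""
    return a + b * math.log2(2 * distance / width)

def practice_time(t1, n, alpha):
    """Power law of practice: time on trial n falls as a power of n,
    relative to the first-trial time t1."""
    return t1 * n ** (-alpha)

# Illustrative (not fitted) parameter values:
print(round(fitts_movement_time(a=0.1, b=0.15, distance=200, width=20), 3))
print(round(practice_time(t1=3.0, n=100, alpha=0.4), 3))
```

Fitts's Law predicts aimed-movement time from distance and target width; the power law of practice predicts how task time falls with trial number. Both exemplify the robust, parametric regularities the book wants unified theories to cover.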
© 1992 Cambridge University Press 0140-525X/92 $5.00+.00 425


Newell: Unified theories of cognition

all at once. What can be asked is a significant advance in its coverage.

As the title indicates, the book is focused on the plural, on many unified theories of cognition. This is not eclecticism, but a recognition of the state of the art. Cognitive science does not have a unified theory yet. Many candidates will arise, given the current practice of theorizing in cognitive science, where every scientist of note believes himself a major theorist. This point is important, since the book works with a single exemplar (SOAR). An exemplar is not the unified theory, and not necessarily even a candidate.

Why strive for unified theories, beyond the apple-pie desire of all sciences to be unified? The biggest reason is that a single system (the mind) produces behavior. There are other reasons, however. Cognitive theory is radically underdetermined by data, hence as many constraints as possible are needed and unification makes this possible. A unified theory is a vehicle of cumulation simply as a theoretically motivated repository. A unified theory increases identifiability and allows theoretical constructs to be amortized over a wide base of phenomena.

The human mind can be viewed as the solution to a set of multiple constraints. Exhibiting flexible behavior, exhibiting adaptive (goal-oriented) behavior, operating in real time, operating in terms of the four-dimensional environment of perceptual detail and a body with many degrees of freedom, operating in a world requiring immense knowledge to characterize, using symbols and abstractions, using language, learning from experience about the environment, acquiring abilities through development, operating autonomously but also within a social community, being self-aware with a sense of self are all essential functionalities of the mind. A system must satisfy these constraints to be mind-like. Humans also have known constraints on construction: a neural system, grown by embryological processes, and arising through evolution. How necessary these constructive processes are, so that only systems built that way can be minds, is currently an open question, but the major point is that the embodied minds we see satisfy all these constraints and any theory that ignores any appreciable number of them loses important sources of direction.

Is psychology ready for unified theories? Cognitive science is well into its fourth decade; it is no longer a young child of a science. Indeed, behaviorism reached its own peak in fewer years. Cognitive science must take itself in hand and move forward. This exhortatory point is not made to suggest that cognitive science has made little progress. The strongest reason cognitive science should attempt unified theories now is that it has accumulated a vast and elegant body of regularities, highly robust and often parametric. This is especially the product of cognitive psychology and psycholinguistics, which have developed an amazing experimental engine for discovering, exploring, and confirming new regularities. Other sciences (e.g., biochemistry) have many more regularities, but they all fit within a theory that is integrated enough so that they never pose the challenge cognitive science now faces. If we do not begin integration now, we will find ourselves with an increasingly intractable task as the years go by while the engine of regularities works ever more industriously. Though cognitive science does not yet have unified theories, there are harbingers: Many local theories make evident what cognitive mechanisms must be operating. But important attempts at unified theories have also been made. John Anderson's work on ACT* (Anderson 1983) must be taken to have pride of place among such attempts. [See also Anderson: "Is Human Cognition Adaptive?" BBS 14(3) 1991.] Other examples are the Model Human Processor (Card et al. 1983), the CAPS theory (Just & Carpenter 1987), and a collection of efforts in perceptual decisions (Ratcliff 1985).

The task of the book. The book endeavors to make the case for serious work on unified theories of cognition. It adopts a specific strategy, presenting an exemplar theory. Any other way seems to involve just talk and exhortation, guaranteed to have little effect. There are lots of risks to such a course - it will seem presumptuous and people will insist on subjecting the exemplar to a Popperian criticism to falsify it. But, on the positive side, one can hope the reader will follow a frequent plea of Warren McCulloch's, issued in similar circumstances: "Don't bite my finger, look where I'm pointing" (McCulloch 1965).

Chapter 2: Foundations of cognitive science

Chapter 2 works through some basic cognitive-science concepts to provide a foundation for the remainder of the book. This is cast as a review, although some novel points arise.

Knowledge systems. A particularly important way of describing the human is as a knowledge system. The human is viewed as having a body of knowledge and a set of goals, so that it takes actions in the environment that its knowledge indicates will attain its goals. The term knowledge is used, as it is throughout computer science and AI, as belief (it can be wrong and often is), not as the philosopher's justified true belief. Knowledge systems are one level in the hierarchy of systems that make up an intelligent agent. For current computers, this is physical devices, continuous circuits, logic circuits, register-transfer systems, symbol (or programming) systems, and knowledge-level systems, all of which are simply alternative descriptions of the same physical system. Knowledge-level systems do not give a set of mechanisms that determine behavior, the hallmark of all other descriptive levels. Rather, behavior is determined by a principle of rationality that knowledge is used in the service of the agent's goals. This is analogous to other teleological principles, such as Fermat's principle of least time for optics. Lower-level descriptions (the symbol level) describe how a knowledge-level system is realized in mechanism. The knowledge level is useful to capture the notion of a goal-oriented system and abstract away from all details of processing and representation. However, humans can only be described approximately as knowledge-level systems, and the departure can be striking.

Representation. Knowledge must be represented in order to be used. The concept of representation is captured by the representation law. In an external world, entity (X) is transformed (T) into entity (Y). A representation of X-T-




Y occurs in a medium within some system when an encoding from X to an entity in the medium (x) and an encoding of T into an internal transformation in the medium (t) produces an internal entity (y), which can be decoded to the external world to correspond to Y. Actual representations are comprised of myriad instances of the representational law to cover all of the specific representational connections that actually occur.

Obtaining a representation for a given external situation seems to require discovering an internal medium with the appropriate natural transformations - this is the essence of analog representation. But as external situations become more diverse, complex, and abstract, discovering adequate analogs becomes increasingly difficult, and at last impossible. A radically different solution exists (the great move), however, where the internal medium becomes freely manipulable with combinatorially many states and all the representational work is done by being able to compose internal transformations to satisfy representational laws. Sufficiently composable schemes of transformations allow the formation of highly general representational systems that simultaneously satisfy many of the requisite representational laws.

Computation. Computational systems are exactly those that provide composability of transformations. The prime question about computational systems is what functions they can produce. The great move to composable transformations for representations occurs precisely because most machines do not admit much variety in their selectable transformations. This leads to the familiar, but incredible, results from computer science about universal computational systems that can attain the ultimate in flexibility. They can produce, by being instructed, all the functions that can be produced by any class of machines, however diverse. Thus, systems (universal computers) exist that provide the universal composability of transformations needed to produce systems that can universally represent whatever needs to be represented. This also shows that computation does not in itself represent. It provides the wherewithal for a system to represent if the appropriate representational laws are satisfied.

Symbols. The book takes the term symbol to refer to the parts of expressions that represent, for example, the "cat" in "The cat is on the mat." Symbols provide distal access to knowledge-bearing structures that are located physically elsewhere within the system. The requirement for distal access is a constraint on computing systems that arises from action always being physically local, coupled with only a finite amount of knowledge being encodable within a finite volume of space, coupled with the human mind's containing vast amounts of knowledge. Hence encoded knowledge must be spread out in space, whence it must be continually transported from where it is stored to where processing requires it (distribution does not gainsay this constraint). Symbols are the means that accomplish the required distal access.

Symbol systems are universal computational systems with the role of symbols made manifest. Symbol systems consist of (1) a memory, containing independently modifiable structures that contain symbols; (2) symbols (patterns in the structures), providing the distal access to other structures; (3) operations, taking symbol structures as input and producing symbol structures as output; and (4) interpretation processes, taking symbol structures as input and executing operations (the structures thereby representing these operations). There must be sufficient memory and symbols, complete composability of structures by the operators, and complete interpretability (any sequence of operations can be represented).

Within this cognitive-science framework, the great philosophical puzzle of intentionality (Brentano 1874) - how symbols can be about external things - has a straightforward solution. There are knowledge-level systems. The knowledge in them is about the external world. Symbol systems implement knowledge-level systems by using symbols, symbol structures, and so on. Therefore, these internal symbol structures are about (i.e., represent) the external world. They will only approximate such representation if the symbol system cannot realize the knowledge-level system adequately. Moreover, as the amount of knowledge and the diversity of goals increases, it is not possible, even theoretically, to realize faithfully the knowledge-level description of a system. How a given system comes to have its knowledge is a matter of the system's history, including the knowledge available to the processes that created the system. This appears to be a satisfactory resolution to the vexed question of intentionality.

Architectures. Unified theories of cognition will be formulated as architectures. The architecture of the mind is a major source of commonality of behavior, both within an individual and between individuals. The architecture is the fixed structure that realizes a symbol system. In the computer hierarchy this is the description at the register-transfer level; in biological systems it is the level of neural structure that is organized to provide symbols.

The important question about the architecture concerns what functions it provides. The architecture provides the boundary that separates structure from content, but all external tasks require both structure and content for their performance. So the division of function is what in the architecture enables the content to determine task performance. An obvious part of the answer is that the architecture provides the mechanisms for realizing a symbol system, but two additional types exist. One is the mechanisms to exploit implementation technology for power, memory, and reliability - such as caches and parallelism. The other is the mechanisms to obtain autonomy of operation - interrupts, dynamic-resource allocation, and protection. What is understood about the functions of the architecture comes entirely from engineered computers. Additional functions are surely involved in natural architectures for autonomous, intelligent creatures.

Architectures exhibit an immense variety. Universal computation might seem to require highly specialized systems for its realization. On the contrary, any specific symbol system can be realized in an indefinite variety of architectures, and any specific architecture can be implemented in an indefinite variety of technologies. Any technology that can implement one architecture can implement an indefinite variety of them. All these systems must perform the key functions of symbol systems, but these can be realized in an indefinite variety of ways. This potential for variety means that strong inferences are not
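The representation law that opens this chapter's treatment can be stated compactly: decoding the internal transform of the encoded entity must yield what the external transformation yields, decode(t(encode(X))) == T(X). A toy instance, with the domain (counts encoded as unary strings) invented purely for illustration:

```python
# Sketch of the representation law: an external transformation T
# carries X to Y; the internal medium represents X-T-Y when
# decode(t(encode(X))) == T(X).

def encode(n):          # external entity X -> internal entity x
    return "|" * n

def decode(s):          # internal entity -> external entity
    return len(s)

def T(n):               # external transformation: add one item
    return n + 1

def t(s):               # internal transformation in the medium
    return s + "|"

X = 3
assert decode(t(encode(X))) == T(X)  # one instance of the law holds
```

An analog representation finds a medium whose natural transformation t already mirrors T; after the great move, freely composable internal transformations are assembled so that many such laws hold simultaneously.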




possible from the structure of engineered digital computers to how architectures are realized in the brain.

Intelligence. The concept of intelligence is crucial for cognitive science. Unfortunately, its long, variegated history produced a multiplicity of notions that bear a family resemblance but serve different masters - often designed to block any unified concept of intelligence. Still, cognitive science (and any unified theory of cognition) must conceptualize the potential of a given task to cause difficulty to a person who attempts it and the potential of a given person for solving difficult tasks. A system is intelligent to the degree that it approximates a knowledge-level system. This is what emerges from the concept in the second chapter. The distinction between knowledge and intelligence is key. If a system does not have some knowledge, failure to use it cannot be a failure of intelligence, which can work only with the knowledge the system has. If a system uses all the knowledge it has and nothing improves its performance, then there is no role left for intelligence. Thus intelligence is the ability to use the knowledge the system has in the service of the system's goals. This notion answers many requirements of a concept of intelligence, but it does not lead directly to a quantitative measure of intelligence, because knowledge per se is not quantifiable.

Search and problem spaces. What processing is required to obtain intelligent behavior? How does a system bring its knowledge to bear to attain its goals? For difficult tasks the general answer is that the system will search. Search is not just another cognitive process, occurring alongside other processes (the view prior to the cognitive revolution), but the fundamental process for attaining tasks that require intelligence. There are two fundamental reasons for this. First, a difficult task is one in which the system does not always know how to behave. But to make progress means to generate some behavior, and when an error arises and is detected, to attempt to correct it - a de facto search step. When errors occur within errors, combinatorial search emerges. Second, search provides a method of last resort. If no other methods are available to a system, it can always posit a space within which goal attainment lies, and then search that space. No matter how little it knows, it can always posit a bigger space, so this method of "generate and test" can always be formulated.

An intelligent system is always operating in a problem space, the space of the system's own creation that attempts to restrict the arena of action to what is relevant. The agent is at some current state in this space with a set of available operators. The system searches within this space to reach a desired state that represents task attainment. This search is combinatorial in character, just as all the experience in AI attests. Solving problems in problem spaces is not just an arbitrary search. Knowledge can be brought to bear to guide the search. Given enough knowledge, no search at all will occur: The appropriate operator will be selected at each state and the desired state will be reached forthwith. For general intelligent systems (and humans), life is a sequence of highly diverse tasks and the system has available a correspondingly large body of knowledge. Thus, besides the problem search in the problem space there is also at every current state a knowledge search to discover what knowledge can be brought to bear to guide the search fruitfully. Knowledge search is a major activity in general intelligent systems.

Summary. The concepts in this chapter constitute the cumulated yield of thirty years of attempting to understand the nature of computation, representation, and symbols. As cast here, all the concepts are not equally familiar. The knowledge level is still not common in theoretical treatments, although it permeates the practice of cognitive and computer sciences. The separation of representation from computation is not sufficiently appreciated. The concept of intelligence may even seem strange. Despite these traces of novelty, this chapter should be like a refresher course to the practicing cognitive scientist.

Chapter 3: Human cognitive architecture

The concepts of Chapter 2 apply to humans and computers alike, but a unified theory of human cognition will be expressed in a theory of human cognitive architecture. This chapter attempts to discover some generally applicable constraints on the human architecture. Any proposed specific theory of the architecture would take such constraints as given and not as part of its specific architectural proposal. The chapter is necessarily speculative, since general arguments are notoriously fragile.

The human is a symbol system. This chapter argues that human minds are symbol systems. The strongest argument is from the flexibility and diversity of human response functions (i.e., responses as a function of the environment) - the immense variety of ways that humans generate new response functions, from writing books to reading them, to creating recipes for cooking food, to going to school, to rapping, to dancing. Other organisms are also adaptive, and in fascinating ways, but the diversity and range of human adaptations exceeds these by all bounds, indeed it is beyond enumeration. Focusing on diversity of response functions links up directly with the defining property of symbol systems as systems that admit the extreme of flexible response functions. Any system that is sufficiently flexible in its response functions must be a symbol system (i.e., capable of universal computation). Actually, the argument holds only asymptotically: No one has the foggiest notion what a class of systems might be like that showed human-scale flexibility but weren't universal. In addition, the simplicity of the functional requirements for symbol systems makes it most unlikely that such systems exist. Thus, the human mind is taken to be a symbol system, establishing a high-level constraint on the human cognitive architecture. It must support a symbol system.

Systems levels and the time scale of human action. Intelligent systems are built up in a hierarchy of system levels. Each system level consists of a more abstract way of describing the same physical system and its behavior, where the laws of behavior are a function only of the states as described at that level. In computers, engineers work hard to make the levels perfect, so that nothing from a lower level ever disturbs the given level. Nature is not so compulsive and levels are stronger or weaker depending
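The method of last resort described above - posit a space within which goal attainment lies, then search it - can be sketched in a few lines. The toy space and operators are invented for illustration; guiding knowledge would enter by ordering or pruning the operators considered at each state.

```python
from collections import deque

def problem_space_search(initial, operators, is_desired):
    """Brute generate-and-test over a problem space: apply every
    operator to every reached state until a desired state appears."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_desired(state):
            return path                      # sequence of operator names
        for name, apply_op in operators:
            new_state = apply_op(state)
            if new_state not in seen:
                seen.add(new_state)
                frontier.append((new_state, path + [name]))
    return None

# Toy space: integers, starting at 0, trying to reach 10.
# The goal is reachable, so this blind search terminates.
operators = [("add3", lambda s: s + 3), ("add7", lambda s: s + 7)]
print(problem_space_search(0, operators, lambda s: s == 10))
```

With no knowledge the search is combinatorial; with enough knowledge to pick the right operator at each state, the desired state is reached with no search at all.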




on how complete is the sealing off from effects from lower 1983) and has been deployed mostly to deny the rele-
levels. Higher system levels are spatially larger and run vance of the algorithms developed in AI for vision and
more slowly than do lower ones, because the higher levels natural language processing because they take too long.
are composed of multiple systems at the next lower level But the constraint is much more binding than that and can
and their operation at a higher level comes from the be used to make a number of inferences about the human
operation of multiple interactive systems at the next cognitive architecture.
lower level. Increase in size and slow-down in speed are
geometric, although the factor between each level need
The cognitive band. The human cognitive architecture
not be constant. The concern in this chapter is with time,
must now be shaped to satisfy the real-time constraint. A
not space. In particular, the temporal factor for a minimal
particular style of argument is used to infer the system
system level is about a factor of 10, that is, an order of
levels of the cognitive band. Functions are allocated to
magnitude. It could be somewhat less, but for conve-
the lowest (fastest) possible system level by arguments
nience we will take X10 as the minimal factor.
that they could not be accomplished any faster, given
Ranging up the time scale of action for human systems,
other allocations (and starting at the bottom of ~10 msec).
a new systems level appears just about every factor of 10,
Whether they could be slower is undetermined. But as
that is just about as soon as possible. Starting at
they stack up, the upper limit of cognitive behavior at ~1
organelles, they operate at time scales of about 100 jjisecs.
sec is reached, clamping the system from the top, thereby
Neurons are definitely a distinct system level from
determining absolutely the location of cognitive functions
organelles, and they operate at about 1 msec, X10 slower.
at specific system levels.
Neural circuits operate at about 10 msec, yet another X10
The upshot is that the distal accessing associated with
slower. These three systems can be taken to constitute
symbols must occur at the level of neural circuits, about
the biological band. Continuing upward reaches what can
10 msec. Above this, hence at 100 msec, comes the
be called the cognitive band — the fastest deliberate acts
level of elementary deliberations, the fastest level at
(whether external or internal) take on the order of 100
which (coded) knowledge can be assembled and be
msec, genuine cognitive operations take 1 sec, and above
brought to bear on a choice between operations. This
that, at the order of 10 sec is a region with no standard
level marks the distinction in cognition between auto-
name, but consisting of the small sequences of action that
matic and controlled processing. What happens within an
humans compose to accomplish smallish tasks. Above the
act of deliberation is automatic, and the level itself per-
cognitive band lies the rational band where humans carry
mits control over action.
out long sequences of actions directed toward their goals.
A level up from elementary deliberations brings simple
In time scale this ranges from minutes to hours. No fixed
operations, composed of a sequence of deliberations with
characteristic systems level occurs here, because the
their associated microactions, hence taking of the order of
organization of human activity now depends on the task
1 sec. This brings the system up against the real-time
being attempted and not on the inner mechanisms.
constraint. It must be able to generate genuine, if ele-
Above the rational band is the social band, dominated by
mentary, cognitive activity in the external world. Simple
the distributed activities of multiple individuals. As the
operations provide this: enough composition to permit a
scale proceeds upward, the boundaries become less dis-
sequence of realizations of a situation and mental reac-
tinct, due to the flexibility of human cognition and the
tions to that realization, to produce a response adaptive to
dominance of task organization. The time scale of human action reflects both a theoretical view about minimal systems levels and an empirical fact that human activities, when ranged along such a scale, provide distinguishable system levels about every minimal factor.

The real-time constraint on cognition. That neurons are ~1 msec devices and elementary neural circuits are 10 msec devices implies that human cognition is built up from ~10 msec components. But elementary cognitive behavior patently occurs by 1 sec. Fast arcs from stimulus to response occur five times faster (~200 msec), but their simplicity and degree of preparation make them suspect as cognition. Yet creative discourse happens in about one second. These two limits create the real-time constraint on cognition: Only about 100 operation times are available to attain cognitive behavior out of neural-circuit technology. This constraint is extremely binding. It provides almost no time at all for the cognitive system to operate. The constraint may also be expressed as follows: Elementary but genuine cognition must be produced in just two system levels. Neural circuits (at ~10 msec) can be assembled into some sorts of macrocircuits (one factor of 10) and these macrocircuits must then be assembled to produce cognitive behavior (the second factor of 10). This constraint is familiar (Feldman & Ballard 1982; Fodor

the situation. Thus, the real-time constraint is met. With time, cognition can be indefinitely composed, though a processing organization is required to control it. Above the level of simple operations is the first level of composed operations, at 10 sec, characterized by its operations being decomposed into sequences of simple operations. An important bridge has been crossed with this level, namely, simple operations are a fixed repertoire of actions and now the operations themselves can be composed.

The intendedly rational band. Composition is recursive and more complex operations can exist whose processing requires many sublevels of suboperations. What prevents the cognitive band from simply climbing into the sky? Cognition begins to succeed; as the seconds grow into minutes and hours, enough time exists for cognition to extract whatever knowledge exists and bring it to bear. The system can be described increasingly in knowledge-level terms and the internal cognitive mechanism need not be specified. This becomes the band of rational - goal and knowledge driven - behavior. It is better labeled intendedly rational behavior, since the shift toward the knowledge level takes hold only gradually and can never be complete.
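The factor-of-10 arithmetic behind the real-time constraint is easy to make explicit. The following sketch is illustrative only - the level names paraphrase the text, and nothing here is code from the book:

```python
# Sketch of the systems-level time scale behind the real-time constraint:
# each level is roughly a minimal factor of 10 slower than the one below.
# Level names paraphrase the text; the code itself is only illustrative.
LEVELS = [
    "neuron (~1 msec)",
    "neural circuit (~10 msec)",
    "elementary deliberation (~100 msec)",
    "simple operation (~1 sec)",
    "first composed operations (~10 sec)",
]

def time_scale_msec(level: int) -> int:
    """Characteristic duration of systems level `level`, in msec."""
    return 10 ** level  # level 0 is the ~1 msec neuron

for i, name in enumerate(LEVELS):
    print(f"{time_scale_msec(i):>6} msec  {name}")

# Only two factors of 10 separate ~10 msec neural circuits from ~1 sec
# elementary cognition, i.e. about 100 operation times are available.
assert time_scale_msec(3) // time_scale_msec(1) == 100
```

Running it prints the five levels from ~1 msec neurons up to ~10 sec composed operations, with elementary cognition sitting just two levels above the neural-circuit technology.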

BEHAVIORAL AND BRAIN SCIENCES (1992) 15:3 429


Newell: Unified theories of cognition

Summary. This chapter has produced some general constraints on the nature of the human cognitive architecture. These must hold for all proposed architectures, becoming something an architecture satisfies rather than an architectural hypothesis per se. The gain to theorizing is substantial.

The different bands - biological, cognitive, and (intendedly) rational - correspond to different realms of law. The biological band is solidly the realm of natural law. The cognitive band, on the other hand, is the realm of representational law and computational mechanisms. The computational mechanisms are described by natural law, just as are biological mechanisms. But simultaneously, the computations are arranged to satisfy representational laws, so that the realm becomes about the external world. The rational band is the realm of reason. Causal mechanisms have disappeared and what determines behavior is goals and knowledge (within the physical constraints of the environment).

Chapter 4: Symbolic processing for intelligence

The chapter deals with the symbolic processing required for intelligence and introduces the SOAR architecture. The shift from general considerations to full details of an architecture and its performance reflects the cardinal principle that the only way a cognitive theory predicts intelligence is if the system designed according to that theory exhibits intelligent behavior. Intelligence is a functional capability.

The central architecture for performance. In SOAR all tasks, both difficult and routine, are cast as problem spaces. All long-term memory is realized as a production system in which the productions form a recognition memory, the conditions providing the access path, and the actions providing the memory contents. Unlike standard production systems, there is no conflict resolution; all satisfied productions put their contents into working memory. Thus SOAR is entirely problem-space structured, and the recognition of which productions fire constitutes the knowledge search.

Control over behavior in the problem space is exercised by the decision cycle. First, information flows freely from the long-term memory into working memory. New elements may trigger other productions to fire, adding more elements, until all the knowledge immediately available in long-term memory is retrieved. Included in this knowledge are preferences about which decisions are acceptable or better than others. Second, a decision procedure sorts through the preferences to determine the next step to take in the problem space: what operator to select, whether the task is accomplished, whether the problem space is to be abandoned, and so on. The step is taken, which initiates the next decision cycle.

The decision cycle suffices if the knowledge retrieved is sufficient to indicate what step to take next. But if not, an impasse occurs - the decision procedure cannot determine how to proceed given the preferences available to it. Impasses occur frequently, whenever knowledge cannot be found just by immediate pattern recognition. The architecture then sets up a subgoal to acquire the missing knowledge. Thus the architecture creates its own goals whenever it does not have what is needed to proceed. Within the subgoal, deciding what problem space to use and what operators to select occurs simply by continuing with decision cycles in the new context. Impasses can arise while working on a subgoal, giving rise to a hierarchy of goals and subgoals, in the manner familiar in complex intelligent systems.

Chunking. The organization of productions, problem spaces, decisions, and impasses produces performance, but it does not acquire new permanent knowledge. Chunking provides this function. This is a continuous, automatic, experience-based learning mechanism. It operates when impasses are resolved, preserving the knowledge that subgoals generated by creating productions that embody this knowledge. On later occasions this knowledge can be retrieved immediately, rather than again reaching an impasse and requiring problem solving. Chunking is a process that converts goal-based problem solving into long-term memory. Chunks are active processes, not declarative data structures to be interpreted. Chunking does not just reproduce past problem solving; it transfers to other analogous situations, and the transfer can be substantial. Chunking applies to all impasses, so learning can be of any kind whatever: what operators to select, how to implement an operator, how to create an operator, what test to use, what problem space to use, and so on. Chunking learns only what SOAR experiences (since it depends on the occurrence of impasses). Hence, what is learned depends not just on chunking but on SOAR's problem solving.

The total cognitive system. SOAR's cognitive system consists of the performance apparatus plus chunking. The total cognitive system adds to this mechanisms for perception and motor behavior. The working memory operates as a common bus and temporary store for perception, central cognition, and motor behavior. Perceptual systems generate elements in working memory, which are matched by the productions in long-term memory. Central cognition generates elements in working memory, which are interpreted as commands by the motor system. Perceptual processing occurs in two stages: the (sensory) mechanisms that deliver elements to working memory and the analysis and elaboration of these perceptual elements by encoding productions. Likewise on the motor side, decoding productions in long-term memory elaborate motor commands and produce whatever form is needed by the motor systems, followed by the motor system proper that makes movements. The sensory and motor modules are cognitively impenetrable, but the encoding and decoding processes interact with other knowledge in working memory.

SOAR as an intelligent system. Intelligence is only as intelligence does. The chapter describes the range of different tasks, types of learning, and modes of external interaction that SOAR has exhibited. Two large SOAR systems are described in some detail. One, R1-SOAR (Rosenbloom et al. 1985), does the task of R1, a classical expert system (McDermott 1982), which configures VAX systems. R1-SOAR does the same task. It shows that a single system can mix general (knowledge-lean) problem solving and specialized (knowledge-intensive) operation
as a function of what knowledge the system has available. R1-SOAR also shows that experiential learning can acquire the knowledge to move the system from knowledge-lean to knowledge-intensive operation. The second system, Designer-SOAR (Steier 1989), designs algorithms, a difficult intellectual task that contrasts with the expertise-based task of R1. Designer-SOAR starts with a specification of an algorithm and attempts to discover an algorithm in terms of general actions such as generate, test, store, and retrieve, using symbolic execution and execution on test cases. Designer-SOAR learns within the span of doing a single task (within-task transfer), and also between tasks of the same basic domain (across-task transfer), but it shows little transfer between tasks of different domains.

Mapping SOAR onto human cognition. SOAR is an architecture capable of intelligent action. Next, one must show that it is an architecture of human cognition. Given the results about the cognitive band, deriving from the real-time constraint, there is only one way to interpret SOAR as the human cognitive architecture. Moreover, since these results have established absolute, though approximate, time scales for cognitive operations, this interpretation leads to an order-of-magnitude absolute temporal identification of the operations in SOAR as a theory of cognition. SOAR productions correspond to the basic symbol access and retrieval of human long-term memory, hence they take ~10 msec. The SOAR decision cycle corresponds to the level of elementary deliberation and hence takes ~100 msec. The problem-space organization corresponds to higher organization of human cognition in terms of operations. Operators that do not reach an impasse correspond to simple operations, hence they take ~1 sec. SOAR problem spaces within which only simple (nonimpassing) operators occur correspond to the first level of composed operations. This is the first level at which goal attainments occur and the first at which learning (impasse resolution) occurs. Problem spaces of any degree of complexity of their operators are possible and this provides the hierarchy of operations that stretches up into the intendedly rational level.

Summary. This chapter has a strong AI flavor, because the emphasis is on how a system can function intelligently, which implies constructing operational systems. A prime prediction of a theory of cognition is that humans are intelligent and the only way to make that prediction is to demonstrate it operationally. The prediction is limited, however, by the degree to which the SOAR system itself is capable of intelligence. SOAR is state-of-the-art AI, but it cannot deliver more than that.

Chapter 5: Immediate behavior

The book now turns to specific regions of human behavior to explore what a unified theory must provide. The first of these is behavior that occurs in a second or two in response to some evoking situation: immediate-response behavior. This includes most of the familiar chronometric experimentation that has played such a large role in creating modern cognitive psychology.

The scientific role of immediate-response data. When you're down close to the architecture, you can see it; when you're far away you can't. The appropriate scale is temporal, and behavior that takes 200 msec to about 3 sec sits close to the architecture. Thus, immediate-response performance is not just another area of behavior to illustrate a unified theory; it is the area that can give direct experimental evidence of the mechanisms of the architecture. Furthermore, cognitive psychology has learned how to generate large numbers of regularities at this level, many of which are quantitative, parametric, and robust. Literally thousands of regularities have been discovered (the book estimates 3000). Tim Salthouse (1986) provides an illustration by his listing of 29 regularities for just the tiny area of transcription typing (this and several other such listings are given and discussed throughout the remainder of the chapter and book). All of these regularities are constraints against the nature of the architecture. They provide the diverse data against which to identify the architecture. Thus, it is appropriate to start the consideration of SOAR as a unified cognitive theory by looking at immediate behavior.

Methodological preliminaries. SOAR is a theory just like any other. It must explain and predict the regularities and relate them to each other; however, it need not necessarily produce entirely novel predictions: An important role is to incorporate what we now understand about the mechanisms of cognition, as captured in the microtheories of specific experimental paradigms. A scientist should be able to think in terms of the architecture and then explanations should flow naturally. SOAR should not be treated as a programming language. It is surely programmable - its behavior is determined by the content of its memory and stocking its memory with knowledge is required to get SOAR to behave. But SOAR is this way because humans are this way, hence programmability is central to the theory. That SOAR is not only programmable but universal in its computational capabilities does not mean it can explain anything. Important additional constraints block this popular but oversimple characterization. First, SOAR must exhibit the correct time patterns of behavior and do so against a fixed set of temporal primitives (the absolute times associated with the levels of the cognitive band). Second, it must exhibit the correct error patterns. Third, the knowledge in its memory - its program and data - must be learned. It cannot simply be placed there arbitrarily by the theorist, although as a matter of necessity it must be mostly posited by the theorist because the learning history is too obscure. Finally, SOAR as a theory is underspecified. The architecture continues to evolve, and aspects of the current architecture (SOAR 4 in the book, now SOAR 5) are known to be wrong. In this respect, a unified theory is more like a Lakatosian research programme than a Popperian theory.

Functional analysis of immediate responses. The tasks of immediate responding comprise a family with many common characteristics, especially within the experimental paradigms used by cognitive psychologists. These common properties are extremely constraining, and make it possible to specialize SOAR to a theory that applies just to this class of tasks. Immediate responses occur in the base-level problem space, where the elements generated by perception arise and where commands are given to the motor system. This base-level space is also the one that
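The performance and learning loop described in the Chapter 4 précis - productions firing in parallel until quiescence, a decision procedure sorting preferences, impasses spawning subgoals, and chunking caching subgoal results - can be caricatured in a few lines of Python. This is a toy sketch under my own simplifications (string-valued working-memory elements, a trivial lexicographic preference rule, a caller-supplied subgoal solver); it is not SOAR's actual algorithm:

```python
# Toy caricature of SOAR's decision cycle plus chunking. All names and
# data structures here are illustrative simplifications, not SOAR's own.
def decision_cycle(working_memory, productions, solve_subgoal):
    # Elaboration: every satisfied production adds its element, with no
    # conflict resolution, until quiescence (nothing new appears).
    changed = True
    while changed:
        changed = False
        for condition, element in productions:
            if condition(working_memory) and element not in working_memory:
                working_memory.add(element)
                changed = True
    # Decision procedure: sort through the retrieved preferences, if any.
    preferences = sorted(e for e in working_memory if e.startswith("prefer:"))
    if preferences:
        return preferences[0][len("prefer:"):]
    # Impasse: the preferences do not determine a next step. Solve it in
    # a subgoal, then chunk the result: a new production that recognizes
    # this situation and retrieves the answer directly next time.
    result = solve_subgoal(working_memory)
    situation = frozenset(working_memory)
    productions.append(
        (lambda wm, s=situation: s <= wm, "prefer:" + result))
    return result
```

On a second encounter with the same working-memory contents, the learned chunk fires during elaboration and supplies the preference directly, so the impasse and its subgoal never recur - the sense in which chunking converts goal-based problem solving into long-term memory.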

References

Johnson-Laird, P. (1983) Mental models. Harvard University Press. [aAN, NEW]
Johnson-Laird, P. N. & Bara, B. G. (1984) Syllogistic inference. Cognition 16:1-61. [NEW]
Johnson-Laird, P. N. & Byrne, R. M. J. (1991) Deduction. Erlbaum. [rAN]
Just, M. A. & Carpenter, P. A. (1987) The psychology of reading and language comprehension. Allyn & Bacon. [aAN]
Kaplan, S. (1973) Cognitive maps in perception and thought. In: Image and environment, ed. R. M. Downs & D. Stea. Aldine. [EC]
(1991) Beyond rationality: Clarity-based decision making. In: Environment, cognition and action: An integrative multidisciplinary approach, ed. T. Garling & G. Evans. Oxford University Press. [EC]
Kaplan, S. & Kaplan, R. (1982) Cognition and environment: Functioning in an uncertain world. Praeger (republished by Ulrich's, 1989). [EC]
Kaplan, S., Sonntag, M. & Chown, E. (1991) Tracing recurrent activity in cognitive elements (TRACE): A model of temporal dynamics in a cell assembly. Connection Science 3:179-206. [EC]
Kaplan, S., Weaver, M. & French, R. (1990) Active symbols and internal models: Towards a cognitive connectionism. Artificial Intelligence & Society 4:51-71. [EC]
Katona, G. (1940) Organizing and memorizing. Columbia University Press. [CP]
Keele, S. W. (1986) Motor control. In: Handbook of perception and human performance, vol. II, ed. K. R. Boff, L. Kaufman & J. P. Thomas. Wiley. [rAN, EH]
Kieras, D. E. & Bovair, S. (1986) A production system analysis of transfer of training. Journal of Memory and Language 25:506-24. [RWR]
Kintsch, W. (1974) The representation of meaning in memory. Erlbaum. [SL, CP]
Klapp, S. T., Marshburn, E. A. & Lester, P. T. (1983) Short-term memory does not involve the "working memory" of information processing: The demise of a common assumption. Journal of Experimental Psychology: General 112:240-64. [RAC]
Kleinsmith, L. J. & Kaplan, S. (1964) Interaction of arousal and recall interval in nonsense syllable paired-associate learning. Journal of Experimental Psychology 67:124-26. [EC]
Koch, S., ed. (1959) Epilogue. In: Psychology: A study of a science, vol. 3. McGraw-Hill. [NHF]
Kolers, P. A. & Smythe, W. E. (1984) Symbol manipulation: Alternatives to the computational view of mind. Journal of Verbal Learning and Verbal Behavior 23:328-30. [PCD]
Kornblum, S., Hasbroucq, T. & Osman, A. (1990) Dimensional overlap, a cognitive basis for stimulus-response compatibility: A model and taxonomy. Psychological Review 97:253-70. [rAN, RWP]
LaBerge, D. (1980) Unitization and automaticity in perception. In: Nebraska symposium on motivation, ed. J. H. Flowers. University of Nebraska Press. [MVL]
LaBerge, D. & Brown, V. (1989) Theory of attentional operations in shape identification. Psychological Review 96:101-24. [rAN]
Lachman, J. L. & Lachman, R. (1979) Theories of memory organization and human evolution. In: Memory organization and structure, ed. C. R. Puff. Academic Press. [EC]
Laird, J. E., Congdon, C. B., Altmann, E. & Swerdlow, K. (1990) Soar user's manual: Version 5.2. University of Michigan, Department of Electrical Engineering and Computer Science. [JAM]
Laird, J. E., Newell, A. & Rosenbloom, P. S. (1987) Soar: An architecture for general intelligence. Artificial Intelligence 33:1-64. [aAN, JAM]
Laird, J. E. & Rosenbloom, P. S. (1990) Integrating execution, planning, and learning in Soar for external environments. Proceedings of the National Conference on Artificial Intelligence, July. [rAN]
Laird, J. E., Rosenbloom, P. S. & Newell, A. (1986) Universal subgoaling and chunking. Kluwer. [RWP]
Lakatos, I. M. (1978) The methodology of scientific research programmes. Cambridge University Press. [JMC]
Lehman, J. F., Lewis, R. L. & Newell, A. (1991a) Integrating knowledge sources in language comprehension. Proceedings of the Cognitive Science Thirteenth Annual Conference, August. Cognitive Science Society. [rAN]
(1991b) Natural language comprehension in Soar: Spring 1991 (Technical Report CMU-CS-91-117). School of Computer Science, Carnegie-Mellon University. [rAN]
Lehman, J. F., Newell, A., Polk, T. A. & Lewis, R. (1992) The role of language in cognition: A computational inquiry. In: Conceptions of the human mind, ed. G. Harman. Erlbaum (in press). [rAN]
Levenick, J. R. (1991) NAPS: A connectionist implementation of cognitive maps. Connection Science 3:107-26. [EC]
Lewandowsky, S., Dunn, J. C. & Kirsner, K., eds. (1989) Implicit memory: Theoretical issues. Erlbaum. [SL]
Lewis, R. (1992) Recent developments in the NL-Soar garden path theory. Technical report, School of Computer Science, Carnegie Mellon University (forthcoming). [rAN]
Lewis, R., Huffman, S., John, B., Laird, J., Lehman, J. F., Newell, A., Rosenbloom, P., Simon, T. & Tessler, S. (1990) Soar as a unified theory of cognition: Spring 1990. Proceedings of the Cognitive Science Twelfth Annual Conference (July). Cognitive Science Society. [rAN]
Lewis, R. L., Newell, A. & Polk, T. A. (1989) Toward a Soar theory of taking instructions for immediate reasoning tasks. Proceedings of the Cognitive Science Eleventh Annual Conference (August). Cognitive Science Society. [rAN]
Lindsay, R. K. (1991) Symbol-processing theories and the Soar architecture. (Review of Unified theories of cognition.) Psychological Science 2:294-302. [rAN, RWR]
Logan, G. D. (1985) Executive control of thought and action. Acta Psychologica 60:193-210. [RAC]
Mandler, G. & Mandler, J., eds. (1964) Thinking: From association to Gestalt. Wiley. [CP]
Mark, L. S. (1987) Eyeheight-scaled information about affordances: A study of sitting and stair climbing. Journal of Experimental Psychology: Human Perception and Performance 13:361-70. [KJV]
Markovitch, S. & Scott, P. (1988) The role of forgetting in learning. Proceedings of the Fifth International Conference on Machine Learning, Ann Arbor, Michigan. [EC]
Marr, D. (1982) Vision. W. H. Freeman. [rAN]
Maslov, S. Y. (1987) Theory of deductive systems and its applications. MIT Press. [LG]
Mayr, E. (1982) The growth of biological thought: Diversity, evolution and inheritance. Harvard University Press. [rAN]
McClelland, J. L. (1988) Parallel distributed processing: Implications for cognition and development. Technical report, Department of Psychology, Carnegie Mellon University. [rAN]
McClelland, J. L. & Rumelhart, D. E. (1986) Parallel distributed processing, vol. 2. MIT Press. [CP]
McCulloch, W. S. (1965) Embodiments of mind. MIT Press. [aAN]
McDermott, J. (1982) R1: A rule-based configurer of computer systems. Artificial Intelligence 19:39-88. [aAN]
Meyer, D. E., Abrams, R. A., Kornblum, S., Wright, C. E. & Smith, J. E. K. (1988) Optimality in human motor performance: Ideal control of rapid aimed movements. Psychological Review 95:340-70. [RWP]
Miller, C. S. & Laird, J. E. (1992) A simple symbolic algorithm for incremental concept acquisition. Artificial Intelligence Laboratory, University of Michigan, January (unpublished). [rAN]
Miller, C. S. & Laird, J. E. (1991) A constraint-motivated model of concept formation. Proceedings of the Cognitive Science Society, August. [rAN]
Miller, G. A. (1986) Dismembering cognition. In: G. Stanley Hall and the Johns Hopkins Tradition, ed. S. H. Hulse & B. F. Green, Jr. Johns Hopkins University Press. [SFC]
Mitchell, T. M., Allen, J., Chalasani, P., Cheng, J., Etzioni, O., Rinquette, M. & Schlimmer, J. C. (1991) Theo: A framework for self-improving systems. In: Architectures for intelligence, ed. K. VanLehn. Erlbaum. [rAN]
Montague, R. (1974) Formal philosophy, ed. R. Thomason. Yale University Press. [PS]
Moore, E. F. (1956) Gedanken-experiments on sequential machines. Automata studies, Annals of Mathematics Studies, No. 34. Princeton University Press. [JTT]
Moray, N. (1959) Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology 11:56-60. [MVL]
(1969) Attention: Selective processes in vision and hearing. Hutchinson. [MVL]
Nagel, T. (1986) The view from nowhere. Oxford University Press (Oxford). [PCD]
Neisser, U. (1963) The multiplicity of thought. British Journal of Psychology 54:1-14. [CP]
(1987) From direct perception to conceptual structure. In: Concepts and conceptual development: Ecological and intellectual factors in categorization, ed. U. Neisser. Cambridge University Press. [KJV]
Nelson, T. O. (1976) Reinforcement and human memory. In: Handbook of learning and cognitive processes, vol. 3, ed. W. K. Estes. Erlbaum. [TRS]
Newell, A. (1969) Heuristic programming: Ill-structured problems. In: Progress in operations research, III, ed. J. Aronofsky. Wiley. [rAN]
(1973a) You can't play 20 questions with nature and win: Projective comments on the papers of this symposium. In: Visual information processing, ed. W. G. Chase. Academic Press. [arAN, SFC, SKC, JAM, JTT]
(1973b) Artificial intelligence and the concept of mind. In: Computer models of thought and language, ed. R. C. Schank & K. M. Colby. W. H. Freeman. [rAN]
(1973c) Production systems: Models of control structures. In: Visual information processing, ed. W. G. Chase. Academic Press. [rAN]
(1980) Physical symbol systems. Cognitive Science 4:135-83. [rAN]
(1983) Intellectual issues in the history of artificial intelligence. In: The study of information: Interdisciplinary messages, ed. F. Machlup & U. Mansfield. Wiley. [rAN]
(1989a) Putting it all together. In: Complex information processing: The impact of Herbert A. Simon, ed. D. Klahr & K. Kotovsky. Erlbaum. [JAM]
(1989b) How it all got put together. In: Complex information processing: The impact of Herbert A. Simon, ed. D. Klahr & K. Kotovsky. Erlbaum. [JAM]
(1992) Unified theories of cognition: The role of Soar. In: Soar: A cognitive architecture in perspective, ed. J. A. Michon & A. Akyürek. Kluwer. [rAN, JAM]
Newell, A. & Rosenbloom, P. (1981) Mechanisms of skill acquisition and the law of practice. In: Cognitive skills and their acquisition, ed. J. R. Anderson. Erlbaum. [aAN]
Newell, A., Rosenbloom, P. & Laird, J. (1989) Symbolic architectures for cognition. In: Foundations of cognitive science, ed. M. Posner. MIT/Bradford Books. [rAN]
Newell, A., Shaw, J. C. & Simon, H. A. (1958) Elements of a theory of human problem solving. Psychological Review 65:151-66. [SKC, CP]
Newell, A. & Simon, H. A. (1972) Human problem solving. Prentice-Hall. [aAN, RAC, SKC, MVL, CP]
Nilsson, L.-G. & Gardiner, J. M. (1991) Memory theory and the boundary conditions of the Tulving-Wiseman law. In: Relating theory and data: Essays on human memory in honor of Bennet B. Murdock, ed. W. E. Hockley & S. Lewandowsky. Erlbaum. [SL]
Nilsson, L.-G., Law, J. & Tulving, E. (1988) Recognition failure of recallable unique names: Evidence for an empirical law of memory and learning. Journal of Experimental Psychology: Learning, Memory and Cognition 14:266-77. [SL]
Nilsson, N. J. (1983) Artificial intelligence prepares for 2001. AI Magazine 4(1):7-14. [SAV]
Norman, D. A. (1991) Approaches to the study of intelligence. Artificial Intelligence 47:327-46. [SL, RWR]
Norman, D. A. & Draper, S. W., eds. (1986) User centered system design. Erlbaum. [JMC]
Norman, D. A. & Shallice, T. (1980) Attention and action: Willed and automatic behavior. Unpublished paper, Center for Human Information Processing, University of California, San Diego. [MVL]
Parnas, D. L. (1985) Software aspects of strategic defense systems. American Scientist 73:432-40. [PCD]
Pashler, H. & Johnston, J. C. (1989) Chronometric evidence for central postponement in temporally overlapping tasks. Quarterly Journal of Experimental Psychology 41A:19-45. [RWR]
Payne, J. W., Bettman, J. R. & Johnson, E. J. (1992) Behavioral decision research: A constructive processing perspective. Annual Review of Psychology 43:87-131. [rAN]
Peck, V. A. & John, B. E. (1992) Browser-Soar: A computational model of a highly interactive task. Proceedings of CHI'92 Conference, May. [rAN]
Penrose, R. (1989) The emperor's new mind. Oxford University Press. [PCD]
Peters, R. S. (1958) The concept of motivation. Routledge & Kegan Paul. [PCD]
(1969) Motivation, emotion, and the conceptual schemes of common sense. In: Human action, ed. T. Mischel. Academic Press. [PCD]
Piaget, J. (1967) Etudes sociologiques. Droz. [CP]
Pitts, W. H. & McCulloch, W. S. (1947) How we know universals: The perception of auditory and visual forms. Bulletin of Mathematical Biophysics 9:127-47. [PCD]
Polk, T. A. & Newell, A. (1988) Modeling human syllogistic reasoning in Soar. Proceedings of the Cognitive Science Annual Conference, 1988 (August). Cognitive Science Society. [rAN]
(in preparation) A verbal reasoning theory of categorical syllogism. [rAN]
Polk, T. A., Newell, A. & Lewis, R. L. (1989) Toward a unified theory of immediate reasoning in Soar. Proceedings of the Cognitive Science Annual Conference, 1989 (August). Cognitive Science Society. [rAN]
Polk, T. A., Newell, A. & VanLehn, K. (1991) Analysis of symbolic parameter models: A new technique for fitting models in cognitive science. Unpublished technical report, School of Computer Science, Carnegie Mellon University. [rAN]
Pomerantz, J. R., Kaplan, S. & Kaplan, R. (1969) Satiation effects in the perception of single letters. Perception & Psychophysics 6:129-32. [EC]
Premack, D. (1986) Gavagai! or the future history of the animal language controversy. MIT Press. [PCD, DQ]
Proctor, R. W. & Reeve, T. G., eds. (1990) Stimulus-response compatibility: An integrated perspective. North-Holland. [RWP]
Pylyshyn, Z. W. (1984) Computation and cognition: Towards a foundation for cognitive science. MIT Press. [JAM]
Quiatt, D. & Reynolds, V. (1992) Primate behaviour: Information, social cognition, and the evolution of culture (in press). Cambridge University Press (Cambridge). [DQ]
Quinlan, J. R. (1986) Induction of decision trees. Machine Learning 1:81-106. [rAN]
Rapaport, D., ed. (1951) Organization and pathology of thought. Columbia University Press. [CP]
Ratcliff, R. (1985) Theoretical interpretations of the speed and accuracy of positive and negative responses. Psychological Review 92:212-25. [aAN]
Reddy, R. & Newell, A. (1974) Knowledge and its representations in a speech understanding system. In: Knowledge and cognition, ed. L. W. Gregg. Erlbaum. [MVL]
Reisberg, D., Rappoport, I. & O'Shaughnessy, M. (1984) Limits of working memory: The digit span-digit. Journal of Experimental Psychology: Learning, Memory and Cognition 10:203-21. [RAC]
Rescorla, R. A. & Wagner, A. R. (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In: Classical conditioning II: Current research and theory. Appleton-Century-Crofts. [rAN]
Resnikoff, H. L. (1989) The illusion of reality. Springer-Verlag. [SKC]
Roitblat, H. L. & von Fersen, L. (1992) Comparative cognition: Representations and processes in learning and memory. Annual Review of Psychology 43:671-710. [rAN]
Rolls, E. T., Baylis, G. C. & Leonard, C. M. (1985) Role of low and high spatial frequencies in the face-selective responses of neurons in the cortex in the superior temporal sulcus in the monkey. Vision Research 25(8):1021-35. [WMS]
Rosch, E. & Lloyd, B. B., eds. (1978) Cognition and categorization. Erlbaum. [rAN]
Rosenbloom, P. S., Laird, J. E., McDermott, J., Newell, A. & Orciuch, E. (1985) R1-Soar: An experiment in knowledge-intensive programming in a problem solving architecture. Pattern Analysis and Machine Intelligence 7:561-69. [aAN]
Rosenbloom, P. S., Laird, J. E. & Newell, A. (1987) Knowledge-level learning in Soar. Proceedings of AAAI-87. Morgan Kaufman. [rAN]
(1988) The chunking of skill and knowledge. In: Working models of human perception, ed. H. Bouma & B. A. G. Elsendoorn. Academic Press. [rAN]
Rosenbloom, P. S. & Newell, A. (1986) The chunking of goal hierarchies: A generalized model of practice. In: Machine learning: An artificial intelligence approach II, ed. R. S. Michalski, J. Carbonell & T. Mitchell. Morgan Kaufman. [MVL]
Rosenbloom, P. S., Newell, A. & Laird, J. E. (1991) Toward the knowledge level in Soar: The role of the architecture in the use of knowledge. In: Architectures for intelligence, ed. K. VanLehn. Erlbaum. [SAV]
Rumelhart, D. E., McClelland, J. L. & the PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition, vol. I, Foundations. Bradford/MIT Press. [MVL]
Salthouse, T. (1986) Perceptual, cognitive, and motoric aspects of transcription typing. Psychological Bulletin 99:303-19. [aAN, SKC]
Sanders, A. (1984) Ten symposia on attention and performance: Some issues and trends. In: Attention and performance X: Control of language processes, ed. H. Bouma & D. G. Bouwhuis. Erlbaum. [SKC]
Savelsbergh, G. J. P., Whiting, H. T. A. & Bootsma, R. J. (1991) Grasping tau. Journal of Experimental Psychology: Human Perception and Performance 17:315-22. [KJV]
Schank, R. C. (1972) Conceptual dependency: A theory of natural language understanding. Cognitive Psychology 3:552-631. [CP]
(1982) Dynamic memory: A theory of reminding and learning in computers and people. Cambridge University Press (Cambridge). [RWR]
Schleidt, W. M. (1988) The eye illusion. Archives of Psychology 140(2):137-40. [WMS]
Schmidt, R. A., Zelaznik, H., Hawkins, B., Frank, J. S. & Quinn, J. T., Jr. (1979) Motor-output variability: A theory for the accuracy of rapid motor acts. Psychological Review 86:415-51. [RWP]
Schweickert, R. (1978) A critical path generalization of the additive factor method: Analysis of a Stroop task. Journal of Mathematical Psychology 18:105-39. [JTT]
Schweickert, R. & Townsend, J. T. (1989) A trichotomy: Interactions of factors prolonging sequential and concurrent mental processes in stochastic discrete mental (PERT) networks. Journal of Mathematical Psychology 33(3):328-47. [JTT]
Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3:417-57. [rAN, LG]
(1990) Consciousness, explanatory inversion, and cognitive science. Behavioral and Brain Sciences 13:585-642. [PCD]
Seibel, R. (1963) Discrimination reaction time for a 1023-alternative task. Journal of Experimental Psychology 66:215-26. [MVL]
Seifert, C. M. & Hutchins, E. L. (in press) Error as opportunity: Learning in a situated task. Journal of Human-Computer Interaction. [RWR]
Selz, O. (1922) Zur Psychologie des produktiven Denkens und des Irrtums. Bonn: Cohen. [CP]
Shiffrin, R. M. & Schneider, W. (1977) Controlled and automatic human information processing, II: Perceptual learning, automatic attending, and a general theory. Psychological Review 84:127-90. [MVL]
Schultz, T. R. & Schmidt, W. C. (1991) A Cascade-Correlation model of balance scale phenomena. Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society. Erlbaum. [rAN, TRS]
Siegler, R. S. (1976) Three aspects of cognitive development. Cognitive Psychology 8:481-520. [arAN]
(1981) Developmental sequences between and within concepts. Monographs of the Society for Research in Child Development 46: whole no. 189. [TRS]
Siegler, R. S. & Jenkins, E. (1989) How children discover new strategies. Erlbaum. [TRS]
Simon, H. A. (1991) Models of my life. Basic Books. [CP, KJV]
Simon, H. A. & Feigenbaum, E. A. (1964) An information-processing theory of some effects of similarity, familiarization, and meaningfulness in verbal learning. Journal of Verbal Learning and Verbal Behavior 3:385-96. [aAN]
Simon, H. A. & Kaplan, C. (1989) Foundations of cognitive science. In: Foundations of cognitive science, ed. M. I. Posner. MIT Press. [NHF]
Simon, H. A. & Kotovsky, K. (1963) Human acquisition of concepts for sequential patterns. Psychological Review 70:534-46. [rAN]
Simon, T., Newell, A. & Klahr, D. (1991) Q-Soar: A computational account of children's learning about number conservation. In: Computational approaches to concept formation, ed. D. Fisher & M. Pazzani. Morgan Kaufmann. [rAN]
Skinner, B. F. (1957) Verbal behavior. Appleton-Century-Crofts. [DB, CP]
Smolensky, P. (1988) On the proper treatment of connectionism. Behavioral and Brain Sciences 11:1-74. [rAN, EC, CP]
Snoddy, G. S. (1926) Learning and stability. Journal of Applied Psychology 10:1-36. [SKC]
Solomon, H. Y., Turvey, M. T. & Burton, G. (1989) Perceiving extents of rods by wielding: Haptic diagonalization and decomposition of the inertia tensor. Journal of Experimental Psychology: Human Perception and Performance 15:58-68. [KJV]
Sondheimer, N. K. (1989) The rate of progress in natural language processing. In: Theoretical issues in natural language processing, ed. Y. Wilks. Erlbaum. [SAV]
Sowa, J. F. (1991) Towards a reintegration of AI research. In: Future directions in artificial intelligence, ed. P. A. Flach & R. A. Meersman. North-Holland. [SAV]
Sperber, D. (1984) Anthropology and psychology: Towards an epidemiology of representations. Man 20:73-89. [CP]
Sperling, G. (1960) The information available in brief visual presentations. Psychological Monographs 74(11): whole no. 498. [rAN]
Steier, D. M. (1989) Automating algorithm design within an architecture for general intelligence. Ph.D. dissertation, Computer Science, Carnegie-Mellon University. [aAN]
Sternberg, S. (1966) High-speed scanning in human memory. Science 153:652-54. [MVL, SL, JTT]
(1975) Memory scanning: New findings and current controversies. Quarterly Journal of Experimental Psychology 27:1-32. [aAN]
Sternberg, R. J. & Detterman, D. K., eds. (1979) Human intelligence: Perspectives on its theory and measurement. Ablex. [rAN]
Thorndike, E. L. (1903) Educational psychology. Lemke & Buechner. [aAN]
Townsend, J. T. (1971) A note on the identifiability of parallel and serial processes. Perception & Psychophysics 10:161-63. [JTT]
(1975) The mind-body equation revisited. In: Philosophical aspects of the mind-body problem, ed. C.-Y. Cheng. [JTT]
(1990) Serial vs. parallel processing: Sometimes they look like Tweedledum and Tweedledee but they can (and should) be distinguished. Psychological Science 1:46-54. [rAN, SL, JTT]
Townsend, J. T. & Ashby, F. G. (1983) Stochastic modeling of elementary psychological processes. Cambridge University Press. [JTT]
Townsend, J. T. & Schweickert, R. (1985) Interactive effects of factors affecting processes in stochastic PERT networks. In: Cognition, information processing, and motivation, ed. G. d'Ydewalle. North-Holland. [JTT]
Treisman, A. & Gelade, G. (1980) A feature integration theory of attention. Cognitive Psychology 12:97-136. [rAN]
Treisman, A. & Gormican, S. (1988) Feature analysis in early vision: Evidence from search asymmetries. Psychological Review 95:15-48. [rAN]
Treisman, A. & Schmidt, H. (1982) Illusory conjunctions in the perception of objects. Cognitive Psychology 14:107-41. [rAN]
Tulving, E. (1983) Elements of episodic memory. Oxford University Press. [arAN]
Urmson, J. O. (1956) Philosophical analysis: Its development between the two World Wars. Clarendon Press. [PCD]
Uttal, W. R. (1990) On some two-way barriers between models and mechanisms. Perception & Psychophysics 48:188-203. [WRU]
Uttal, W. R., Bradshaw, G., Dayanand, S., Lovell, R., Shepherd, T., Kakarala, R., Skifsted, K. & Tupper, G. (1992) The swimmer: An integrated computational model of a perceptual-motor system. Erlbaum. [WRU]
VanLehn, K. (1991) Rule acquisition events in the discovery of problem-solving strategies. Cognitive Science 15:1-47. [TRS]
Vere, S. A. & Bickmore, T. W. (1990) A basic agent. Computational Intelligence 6:41-60. [SAV]
von Helmholtz, H. (1971) The origin and correct interpretation of our sense impressions. In: Selected writings of Hermann von Helmholtz, ed. R. Kahl. Wesleyan University Press. [LG]
von Neumann, J. (1966) Theory of self-reproducing automata. University of Illinois Press. [LG]
Wallechinsky, D., Wallace, I. & Wallace, A. (1977) The book of lists. William Morrow. [rAN]
Warren, W. H., Jr. & Hannon, D. J. (1988) Direction of self-motion is perceived from optical flow. Nature 336:162-63. [KJV]
Warren, W. H., Jr. & Whang, S. (1987) Visual guidance of walking through apertures: Body-scaled information about affordances. Journal of Experimental Psychology: Human Perception and Performance 13:371-83. [KJV]
Watanabe, S. (1985) Pattern recognition: Human and mechanical. Wiley. [LG]
Wertheimer, M. (1945) Productive thinking. Wiley. [CP]
Wetherick, N. E. (1989) Psychology and syllogistic reasoning. Philosophical Psychology 2:111-24. [rAN, NEW]
Wetherick, N. E. & Gilhooly, K. J. (1990) Syllogistic reasoning: Effects of premise order. In: Lines of thinking, vol. 1, ed. K. J. Gilhooly, M. T. G. Keane, R. H. Logie & G. Erdos. Wiley. [rAN, NEW]
Whorf, B. L. (1956) A linguistic consideration of thinking in primitive communities. In: Language, thought, and reality: Selected writings of Benjamin Lee Whorf, ed. J. B. Carroll. MIT Press. [WMS]
Wiesmeyer, M. D. (1992) An operator-based model of human covert visual attention. Ph.D. dissertation, University of Michigan, CSE-TR-123-92. [rAN]
Wiesmeyer, M. & Laird, J. (1990) A computer model of 2D visual attention. Proceedings of the Annual Conference of the Cognitive Science Society, July. [rAN]
Wilson, E. O. (1975) Sociobiology: The new synthesis. Harvard University Press. [rAN]
Winograd, S. (1964) Input-error-limiting automata. Journal of the Association for Computing Machinery 11:338-51. [JTT]
Witten, E. (1988) Interview with Edward Witten. In: Superstrings: A theory of everything? ed. P. C. W. Davies & J. Brown. Cambridge University Press. [LG]
Wolfe, J. M., Cave, K. R. & Franzel, S. L. (1989) Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance 15:419-33. [rAN]

