
1 The Nature of Cognition

1.1 Motivation for Studying Artificial Cognitive Systems

When we set about building a machine or writing a software ap-


plication, we usually have a clear idea of what we want it to do
and the environment in which it will operate. To achieve reliable
performance, we need to know about the operating conditions
and the user’s needs so that we can cater for them in the design.
Normally, this isn’t a problem. For example, it is straightfor-
ward to specify the software that controls a washing machine or
tells you if the ball is out in a tennis match. But what do we do
when the system we are designing has to work in conditions that
aren’t so well-defined, where we cannot guarantee that the infor-
mation about the environment is reliable, possibly because the
objects the system has to deal with might behave in an awkward
or complicated way, or simply because unexpected things can
happen?
Let’s use an example to explain what we mean. Imagine we
wanted to build a robot that could help someone do the laun-
dry: load a washing machine with clothes from a laundry basket,
match the clothes to the wash cycle, add the detergent and conditioner, start the wash, take the clothes out when the wash is finished, and hang them up to dry (see Figure 1.1). In a perfect world, the robot would also iron the clothes,1 and put them back in the wardrobe.

Footnote 1: The challenge of ironing clothes as a benchmark for robotics [1] was originally set by Maria Petrou [2]. It is a difficult task because clothes are flexible and unstructured, making them difficult to manipulate, and ironing requires careful use of a heavy tool and complex visual processing.

If someone had left a phone, a wallet, or something else in a pocket, the robot should either remove it before putting the garment in the wash or put the garment to

Figure 1.1: A cognitive robot would be able to see a dirty garment and figure out what needs to be done to wash and dry it.

one side to allow a human to deal with it later. This task is well beyond the capabilities of current robots2 but it is something that humans do routinely. Why is this? It is because we have the ability to look at a situation, figure out what's needed to achieve some goal, anticipate the outcome, and take the appropriate actions, adapting them as necessary. We can determine which clothes are white (even if they are very dirty) and which are coloured, and wash them separately. Better still, we can also learn from experience and adapt our behaviour to get better at the job. If the whites are still dirty after being washed, we can apply some extra detergent and wash them again at a higher temperature. And best of all, we usually do this all on our own, autonomously, without any outside help (except maybe the first couple of times). Most people can work out how to operate a washing machine without reading the manual, we can all hang out damp clothes to dry without being told how to do it, and (almost) everyone can anticipate what will happen if you wash your smartphone.

Footnote 2: Some progress has been made recently in developing a robot that can fold clothes. For example, see the article "Cloth grasp point detection based on multiple-view geometric cues with application to robotic towel folding" by Jeremy Maitin-Shepard et al. [3] which describes how the PR2 robot built by Willow Garage [4] tackles the problem. However, the focus in this task is not so much the ill-defined nature of the job — how do you sort clothes into different batches for washing and, in the process, anticipate, adapt, and learn — as it is on the challenge of vision-directed manipulation of flexible materials.
We often refer to this human capacity for self-reliance, for
being able to figure things out, for independent adaptive an-
ticipatory action, as cognition. What we want is the ability to
create machines and software systems with the same capacity,
i.e., artificial cognitive systems. So, how do we do it? The first
step would be to model cognition. And this first step is, un-
fortunately, where things get difficult because cognition means

different things to different people. The issue turns on two key


concerns: (a) the purpose of cognition — the role it plays in hu-
mans and other species, and by extension, the role it should play
in artificial systems — and (b) the mechanisms by which the
cognitive system fulfils that purpose and achieves its cognitive
ability. Regrettably, there’s huge scope for disagreement here
and one of the main goals of this book is to introduce you to the
different perspectives on cognition, to explain the disagreements,
and to tease out their differences. Without understanding these
issues, it isn’t possible to begin the challenging task of develop-
ing artificial cognitive systems. So, let’s get started.

1.2 Aspects of Modelling Cognitive Systems

There are four aspects which we need to consider when modelling cognitive systems:3 how much inspiration we take from natural systems, how faithful we try to be in copying them, how important we think the system's physical structure is, and how we separate the identification of cognitive capability from the way we eventually decide to implement it. Let's look at each of these in turn.

Footnote 3: For an alternative view that focusses on assessing the contributions made by particular models, especially computational and robotic models, see Anthony Morse's and Tom Ziemke's paper "On the role(s) of modelling in cognitive science" [5].

To replicate the cognitive capabilities we see in humans and some other species, we can either invent a completely new solution or draw inspiration from human psychology and neuroscience. Since the most powerful tools we have today are computers and sophisticated software, the first option will probably be some form of computational system. On the other hand, psychology and neuroscience reflect our understanding of biological life-forms and so we refer to the second option as a bio-inspired system. More often than not, we try to blend the two together. This balance of pure computation and bio-inspiration is the first aspect of modelling cognitive systems.
Unfortunately, there is an unavoidable complication with the
bio-inspired approach: we first have to understand how the bi-
ological system works. In essence, this means we must come up
with a model of the operation of the biological system and then
use this model to inspire the design of the artificial system. Since
biological systems are very complex, we need to choose the level

Figure 1.2: Attempts to build an artificial cognitive system can be positioned in a two-dimensional space, with one axis defining a spectrum running from purely computational techniques to techniques strongly inspired by biological models, and with another axis defining the level of abstraction of the biological model. The horizontal axis runs from computational to biological inspiration and the vertical axis from low to high abstraction level; example positions include a modular decomposition of a hypothetical model of mind (computational, high abstraction), a cognitive system modelled on the macroscopic organization of the brain (biological, high abstraction), a cognitive system based on statistical learning of specific domain rules (computational, low abstraction), and a cognitive system based on artificial neural networks (biological, low abstraction).

of abstraction at which we study them. For example, assuming


for the moment that the centre of cognitive function is the brain
(this might seem a very safe assumption to make but, as we’ll
see, there’s a little more to it than this), then you might attempt
to replicate cognitive capacity by emulating the brain at a very
high level of abstraction, e.g. by studying the broad functions of
different regions in the brain. Alternatively, you might opt for a
low level of abstraction by trying to model the exact electrochem-
ical way that the neurons in these regions actually operate. The
choice of abstraction level plays an important role in any attempt
to model a bio-inspired artificial cognitive system and must be
made with care. That’s the second aspect of modelling cognitive
systems.
Taking both aspects together — bio-inspiration and level of
abstraction — we can position the design of an artificial cognitive
system in a two-dimensional space spanned by a computational
/ bio-inspired axis and an abstraction-level axis; see Figure 1.2.
Most attempts today occupy a position not too far from the cen-
tre, and the trend is to move towards the biological side of the
computational / bio-inspired spectrum and to cover several lev-
els of abstraction.
In adopting a bio-inspired approach at any level of abstraction
it would be a mistake to simply replicate brain mechanisms in
complete isolation in an attempt to replicate cognition. Why? Be-
cause the brain and its associated cognitive capacity is the result

Figure 1.3: The ultimate-proximate distinction. Ultimate explanations deal with why a given behaviour exists in a system, while proximate explanations address the specific mechanisms by which these behaviours are realized. As shown here, different mechanisms could be used to achieve the same behaviour or different behaviours might be realized with the same mechanism. What's important is to understand that identifying the behaviours you want in a cognitive system and finding suitable mechanisms to realize them are two separate issues.

of evolution and the brain evolved for some purpose. Also, the brain and the body evolved together and so you can't divorce
one from the other without running the risk of missing part of
the overall picture. Furthermore, this brain-body evolution took
place in particular environmental circumstances so that the cog-
nitive capacity produced by the embodied brain supports the
biological system in a specific ecological niche. Thus, a com-
plete picture may really require you to adopt a perspective that
views the brain and body as a complete system that operates in
a specific environmental context. While the environment may
be uncertain and unknown, it almost always has some in-built
regularities which are exploited by the brain-body system through
its cognitive capacities in the context of the body’s characteris-
tics and peculiarities. In fact, the whole purpose of cognition in
a biological system is to equip it to deal with this uncertainty
and the unknown nature of the system’s environment. This,
then, is the third aspect of modelling cognitive systems: the ex-
tent to which the brain, body, and environment depend on one
another.4

Footnote 4: We return to the relationship between the brain, body, and environment in Chapter 5 on embodiment.

Finally, we must address the two concerns we raised in the opening section, i.e., the purpose of cognition and the mecha-
nisms by which the cognitive system fulfils that purpose and
achieves its cognitive ability. That is, in drawing on bio-inspiration,
we need to factor in two complementary issues: what cognition
is for and how it is achieved. Technically, this is known as the

ultimate-proximate distinction in evolutionary psychology; see Fig-


ure 1.3. Ultimate explanations deal with questions concerned
with why a given behaviour exists in a system or is selected
through evolution, while proximate explanations address the
specific mechanisms by which these behaviours are realized.
To build a complete picture of cognition, we must address both
explanations. We must also be careful not to get the two issues
mixed up, as they very often are.5 Thus, when we want to build machines which are able to work outside known operating conditions just like humans can — to replicate the cognitive characteristics of smart people — we must remember that this smartness may have arisen for reasons other than the ones in which it is being deployed in the current task-at-hand. Our brains and bodies certainly didn't evolve so that we could load and unload a washing machine with ease, but we're able to do it nonetheless. In attempting to use bio-inspired cognitive capabilities to perform utilitarian tasks, we may well be just piggy-backing on a deeper and quite possibly quite different functional capacity. The core problem then is to ensure that this system functional capacity matches the ones we need to get our job done. Understanding this, and keeping the complementary issues of the purpose and mechanisms of cognition distinct, allows us to keep to the forefront the important issue of how one can get an artificial cognitive system (and a biological one, too, for that matter) to do what we want it to do. If we are having trouble doing this, the problem may not be the operation of the specific (proximate) mechanisms of the cognitive model but the (ultimate) selection of the cognitive behaviours and their fitness for the given purpose in the context of the brain-body-mind relationship.

Footnote 5: The importance of the ultimate-proximate distinction is highlighted by Scott-Phillips et al. in a recent article [6]. This article also points out that ultimate and proximate explanations of phenomena are often confused with one another so we end up discussing proximate concerns when we really should be discussing ultimate ones. This is very often the case with artificial cognitive systems where there is a tendency to focus on the proximate issues of how cognitive mechanisms work, often neglecting the equally important issue of what purpose cognition is serving in the first place. These are two complementary views and both are needed. See [7] and [8] for more details on the ultimate-proximate distinction.
To sum up, in preparing ourselves to study artificial cognitive
systems, we must keep in mind four important aspects when
modelling cognitive systems:

1. The computational / bio-inspired spectrum;

2. The level of abstraction in the biological model;

3. The mutual dependence of brain, body, and environment;

4. The ultimate-proximate distinction (why vs. how).



Understanding the importance of these four aspects will help


us make sense of the different traditions in cognitive science,
artificial intelligence, and cybernetics (among other disciplines)
and the relative emphasis they place on the mechanisms and the
purpose of cognition. More importantly, it will ensure we are
addressing the right questions in the right context in our efforts
to design and build artificial cognitive systems.
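Before moving on, it can help to make these four aspects tangible. The following minimal Python sketch is our own illustration (the field names and numeric scales are invented, not prescribed by the text); it simply records the design decisions a modeller might note down for each aspect.

# A minimal sketch of the four modelling aspects as an explicit design record.
# All names and scales here are illustrative assumptions, not part of the text.
from dataclasses import dataclass

@dataclass
class CognitiveSystemDesign:
    bio_inspiration: float    # 0.0 = purely computational ... 1.0 = strongly bio-inspired
    abstraction_level: float  # 0.0 = low (e.g. neuron models) ... 1.0 = high (e.g. brain regions)
    embodiment_notes: str     # how brain, body, and environment are assumed to depend on one another
    ultimate_purpose: str     # why the behaviour should exist (ultimate explanation)
    proximate_mechanism: str  # how the behaviour is realized (proximate explanation)

# Example: a design leaning towards the biological side at a fairly high level of abstraction.
design = CognitiveSystemDesign(
    bio_inspiration=0.7,
    abstraction_level=0.8,
    embodiment_notes="humanoid robot acting in a domestic environment",
    ultimate_purpose="keep the agent viable while it pursues laundry-related goals",
    proximate_mechanism="macroscopic brain-region model with learned sensorimotor mappings",
)
print(design)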

1.3 So, What Is Cognition Anyway?

It should be clear from what we have said so far that in asking


“what is cognition?” we are posing a badly-framed question:
what cognition is depends on what cognition is for and how
cognition is realized in physical systems — the ultimate and
proximate aspects of cognition, respectively. In other words, the
answer to the question depends on the context — on the rela-
tionship between brain, body, and environment — and is heavily
coloured by which cognitive science tradition informs that an-
swer. We devote all of Chapter 2 to these concerns. However,
before diving into a deep discussion of these issues, we’ll spend
a little more time here setting the scene. In particular, we’ll pro-
vide a generic characterization of cognition as a preliminary
answer to the question “what is cognition?”, mainly to identify
the principal issues at stake in designing artificial cognitive sys-
tems and always mindful of the need to explain how a given
system addresses the four aspects of modelling identified above.
Now, let’s cut to the chase and answer the question.
Cognition implies an ability to make inferences about events
in the world around you. These events include those that in-
volve the cognitive agent itself, its actions, and the consequences
of those actions. To make these inferences, it helps to remem-
ber what happened in the past since knowing about past events helps to anticipate future ones.6 Cognition, then, involves predicting the future based on memories of the past, perceptions of the present, and in particular anticipation of the behaviour7 of the world around you and, especially, the effects of your actions in it. Notice we say actions, not movement or motion. Actions usually involve movement or motion but an action also involves something else. This is the goal of the action: the desired outcome, typically some change in the world. Since predictions are rarely perfect, a cognitive system must also learn by observing what does actually happen, assimilate it into its understanding, and then adapt the way it subsequently does things. This forms a continuous cycle of self-improvement in the system's ability to anticipate future events. The cycle of anticipation, assimilation, and adaptation supports — and is supported by — an on-going process of action and perception; see Figure 1.4.

Footnote 6: We discuss the forward-looking role of memory in anticipating events in Chapter 7.

Footnote 7: Inanimate objects don't behave but animate ones do, as do inanimate objects being controlled by animate ones (e.g. cars in traffic). So agency, direct or indirect, is implied by behaviour.

Figure 1.4: Cognition as a cycle of anticipation, assimilation, and adaptation: embedded in, contributing to, and benefitting from a continuous process of action and perception.

We are now ready for our preliminary definition.
Cognition is the process by which an autonomous system perceives its environment, learns from experience, anticipates the outcome of events, acts to pursue goals, and adapts to changing circumstances.8

Footnote 8: These six attributes of cognition — autonomy, perception, learning, anticipation, action, adaptation — are taken from the author's definition of cognitive systems in the Springer Encyclopedia of Computer Vision [9].

We will take this as our preliminary definition of cognition and, depending on the approach we are discussing, we will adjust it accordingly in later chapters.

While definitions are convenient, the problem with them is that they have to be continuously amended as we learn more about the thing they define.9 So, with that in mind, we won't become too attached to the definition and we'll use it as a memory aid to remind us that cognition involves at least six attributes of autonomy, perception, learning, anticipation, action, and adaptation.

Footnote 9: The Nobel laureate, Peter Medawar, has this to say about definitions: "My experience as a scientist has taught me that the comfort brought by a satisfying and well-worded definition is only short-lived, because it is certain to need modification and qualification as our experience and understanding increase; it is explanations and descriptions that are needed" [10]. Hopefully, you will find understandable explanations in the pages that follow.

For many people, cognition is really an umbrella term that covers a collection of skills and capabilities possessed by an agent.10 These include being able to do the following.

Footnote 10: We frequently use the term agent in this book. It means any system that displays a cognitive capacity, whether it's a human, or (potentially, at least) a cognitive robot, or some other artificial cognitive entity. We will use agent interchangeably with artificial cognitive system.

• Take on goals, formulate predictive strategies to achieve them, and put those strategies into effect;

• Operate with varying degrees of autonomy;

• Interact — cooperate, collaborate, communicate — with other agents;

• Read the intentions of other agents and anticipate their actions;

• Sense and interpret expected and unexpected events;

Figure 1.5: Another aspect of cognition: effective interaction. Here the robot anticipates someone's needs (see Chapter 9, Section 9.4 Instrumental Helping).

• Anticipate the need for actions and predict the outcome of its
own actions and those of others;

• Select a course of action, carry it out, and then assess the outcome;

• Adapt to changing circumstances, in real-time, by adjusting current and anticipated actions;

• Learn from experience: adjust the way actions are selected and performed in the future;

• Notice when performance is degrading, identify the reason for the degradation, and take corrective action.

These capabilities focus on what the agent should do: its functional attributes. Equally important are the effectiveness and the quality of its operation: its non-functional characteristics (or, perhaps more accurately, its meta-functional characteristics): its dependability, reliability, usability, versatility, robustness, fault-tolerance, and safety, among others.11

Footnote 11: The "non-" part of "non-functional" is misleading as it suggests a lesser value compared to functional characteristics whereas, in reality, these characteristics are equally important but complementary to functionality when designing a system. For that reason, we sometimes refer to them as meta-functional attributes; see [11] for a more extensive list and discussion of meta-functional attributes.

These meta-functional characteristics are linked to the functional attributes through system capabilities that focus not on carrying out tasks but on maintaining the integrity of the agent.12

Footnote 12: We will come back to the issue of maintaining integrity several times in this book, briefly in the next section, and more at length in the next chapter. For the moment, we will just remark that the processes by which integrity is maintained are known as autonomic processes.

Why are these capabilities relevant to artificial agents? They are relevant — and critically so — because artificial agents such as a robot that is deployed outside the carefully-configured environments typical of many factory floors have to deal with a

world that is only partially known. It has to work with incom-


plete information, uncertainty, and change. The agent can only
cope with this by exhibiting some degree of cognition. When you
factor interaction with people into the requirements, cognition
becomes even more important. Why? Because people are cogni-
tive and they behave in a cognitive manner. Consequently, any
agent that interacts with a human needs to be cognitive to some
degree for that interaction to be useful or helpful. People have
their own needs and goals and we would like our artificial agent
to be able to anticipate these (see Figure 1.5). That’s the job of
cognition.
So, in summary, cognition is not to be seen as some module
in the brain of a person or the software of a robot — a planning
module or a reasoning module, for example — but as a system-
wide process that integrates all of the capabilities of the agent to
endow it with the six attributes we mentioned in our memory-
aid definition: autonomy, perception, learning, anticipation,
action, and adaptation.
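To make this system-wide view concrete, here is a minimal, illustrative Python sketch of an agent that perceives, anticipates, acts, assimilates what actually happened, and adapts its future choices. It is our own toy example, not a model proposed in the literature above; the class names, the toy laundry world, and the learning rule are all assumptions made purely for illustration.

# Minimal, illustrative sketch (not from the book) of the six attributes as a control loop:
# an autonomous agent that perceives, anticipates, acts, learns, and adapts.
import random

class SimpleCognitiveAgent:
    def __init__(self):
        self.model = {}          # learned associations: (situation, action) -> expected outcome

    def perceive(self, world):
        return world.observe()   # perception: sample the current situation

    def anticipate(self, situation, action):
        # anticipation: predict the outcome of an action from past experience
        return self.model.get((situation, action), None)

    def select_action(self, situation, actions):
        # prefer actions whose anticipated outcome was previously good; otherwise explore
        scored = [(self.anticipate(situation, a) or 0.0, a) for a in actions]
        best_score, best_action = max(scored)
        return best_action if best_score > 0 else random.choice(actions)

    def assimilate(self, situation, action, outcome):
        # learning and adaptation: fold the observed outcome back into the model
        old = self.model.get((situation, action), 0.0)
        self.model[(situation, action)] = 0.8 * old + 0.2 * outcome

    def step(self, world, actions):
        situation = self.perceive(world)
        action = self.select_action(situation, actions)
        outcome = world.act(action)          # acting changes the world and yields a result
        self.assimilate(situation, action, outcome)
        return action, outcome

class LaundryWorld:
    """Toy environment: whites wash well at high temperature, colours at low."""
    def __init__(self):
        self.batch = "whites"
    def observe(self):
        self.batch = random.choice(["whites", "colours"])
        return self.batch
    def act(self, action):
        good = (self.batch == "whites" and action == "hot_wash") or \
               (self.batch == "colours" and action == "cold_wash")
        return 1.0 if good else -1.0

agent, world = SimpleCognitiveAgent(), LaundryWorld()
for _ in range(200):
    agent.step(world, ["hot_wash", "cold_wash"])
print(agent.model)   # after some experience the agent anticipates which wash suits which batch

After a couple of hundred steps the agent's stored expectations are enough for it to anticipate which wash cycle suits which batch, which is the anticipation, assimilation, and adaptation cycle of Figure 1.4 in miniature.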

1.3.1 Why Autonomy?

Notice that we included autonomy in our definition. We need to


be careful about this. As we will see in Chapter 4, the concept of
autonomy is a difficult one. It means different things to different
people, ranging from the fairly innocent, such as being able to
operate without too much help or assistance from others, to the
more controversial, which sees cognition as one of the central
processes by which advanced biological systems preserve their
autonomy. From this perspective, cognitive development has
two primary functions: (1) to increase the system's repertoire of effective actions, and (2) to extend the time-horizon of its ability to anticipate the need for and outcome of future actions.13

Footnote 13: The increase of action capabilities and the extension of anticipation capabilities as the primary focus of cognition is the central message conveyed in A Roadmap for Cognitive Development in Humanoid Robots [12], a multi-disciplinary book co-written by the author, Claes von Hofsten, and Luciano Fadiga.

Without wishing to preempt the discussion in Chapter 4, because there is a tight relationship between cognition and autonomy — or not, depending on who you ask — we will pause here just a while to consider autonomy a little more.

From a biological perspective, autonomy is an organizational characteristic of living creatures that enables them to use their

own capacities to manage their interactions with the world in


order to remain viable, i.e., to stay alive. To a very large extent,
autonomy is concerned with the system maintaining itself: self-
maintenance, for short.14 This means that the system is entirely self-governing and self-regulating. It is not controlled by any outside agency and this allows it to stand apart from the rest of the environment and assert an identity of its own. That's not to say that the system isn't influenced by the world around it, but rather that these influences are brought about through interactions that must not threaten the autonomous operation of the system.15

Footnote 14: The concepts of self-maintenance and recursive self-maintenance in self-organizing autonomous systems were introduced by Mark Bickhard [13]. We will discuss them in more detail in Chapter 2. The key idea is that self-maintenant systems make active contributions to their own persistence but do not contribute to the maintenance of the conditions for persistence. On the other hand, recursive self-maintenant systems do contribute actively to the conditions for persistence.

Footnote 15: When an influence on a system isn't directly controlling it but nonetheless has some impact on the behaviour of the system, we refer to it as a perturbation.

If a system is autonomous, its most important goal is to preserve its autonomy. Indeed, it must act to preserve it since the world it inhabits may not be very friendly. This is where cognition comes in. From this (biological) perspective, cognition is the process whereby an autonomous self-governing system acts effectively in the world in which it is embedded in order to maintain its autonomy.16 To act effectively, the cognitive system must sense what is going on around it. However, in biological agents, the systems responsible for sensing and interpretation of sensory data, as well as those responsible for getting the motor systems ready to act, are actually quite slow and there is often a delay between when something happens and when an autonomous biological agent comprehends what has happened. This delay is called latency and it is often too great to allow the agent to act effectively: by the time you have realized that a predator is about to attack, it may be too late to escape. This is one of the primary reasons a cognitive system must anticipate future events: so that it can prepare the actions it may need to take in advance of actually sensing that these actions are needed.

Footnote 16: The idea of cognition being concerned with effective action, i.e. action that helps preserve the system's autonomy, is due primarily to Francisco Varela and Humberto Maturana [14]. These two scientists have had a major impact on the world of cognitive science through their work on biological autonomy and the organizational principles which underpin autonomous systems. Together, they provided the foundations for a new approach to cognitive science called Enaction. We will discuss enaction and enactive systems in more detail in Chapter 2.

In addition to sensory latencies, there are also limitations imposed by the environment and the cognitive system's body. To perform an action, and specifically to accomplish the goal associated with an action, you need to have the relevant part of your body in a certain place at a certain time. It takes time to move, so, again, you need to be able to predict what might happen and prepare to act. For example, if you have to catch an object, you need to start moving your hand before the object arrives and

sometimes even before it has been thrown. Also, the world in


which the system is embedded is constantly changing and is out-
side the control of the system. Consequently, the sensory data
which is available to the cognitive system may not only be late in
arriving but critical information may also be missing. Filling in
these gaps is another of the primary functions of a cognitive sys-
tem. Paradoxically, it is also often the case that there is too much
information for the system to deal with and it has to ignore some
of it.17

Footnote 17: The problem of ignoring information is related to two problems in cognitive science: the Frame Problem and Attention. We will take up these issues again later in the book.

Now, while these capabilities derive directly from the biological autonomy-preserving view of cognition, it should be fairly clear that they would also be of great use to artificial cognitive systems, whether they are autonomous or not. However, before
moving on to the next section which elaborates a little more on
the relationship between biological and artificial cognitive sys-
tems, it is worth noting that some people consider that cognition
should involve even more than what we have discussed so far.
For example, an artificial cognitive system might also be able
to explain what it is doing and why it is doing it.18

Footnote 18: The ability not simply to act but to explain the reasons for an action was proposed by Ron Brachman in an article entitled "Systems that know what they're doing" [15].

This would enable the system to identify potential problems which could appear when carrying out a task and to know when it needed new information in order to complete it. Taking this to the next level, a cognitive system would be able to view a problem or situation in several different ways and to look at alternative ways
uation in several different ways and to look at alternative ways
of tackling it. In a sense, this is similar to the attribute we dis-
cussed above about cognition involving an ability to anticipate
the need for actions and their outcomes. The difference in this
case is that the cognitive system is considering not just one but
many possible sets of needs and outcomes. There is also a case to
be made that cognition should involve a sense of self-reflection:19 19
Self-reflection, often re-
an ability on the part of the system to think about itself and its ferred to as meta-cognition,
is emphasized by some peo-
own thoughts. We see here cognition straying into the domain of ple, e.g. Aaron Sloman [16]
consciousness. We won’t say anything more in this book on that and Ron Sun [17], as an im-
portant aspect of advanced
subject apart from remarking that computational modelling of cognition.
consciousness is an active area of research in which the study of
cognition plays an important part.
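The latency argument made earlier in this section can be made concrete with a small sketch. The following Python fragment, with invented numbers and our own function name, contrasts reacting to where a thrown object was seen with anticipating where it will be once the sensing and motor-preparation delays have elapsed.

# Illustrative sketch (not from the book): compensating for sensory and motor latency
# by predicting where a thrown object will be, rather than reacting to where it was last seen.
def predict_position(last_seen_pos, velocity, latency):
    """Constant-velocity extrapolation over the total latency (seconds)."""
    return tuple(p + v * latency for p, v in zip(last_seen_pos, velocity))

sensing_delay = 0.15    # time to sense and interpret (s) -- illustrative values
motor_delay = 0.25      # time to get the hand moving (s)

seen_at = (2.0, 1.5)          # metres: where the ball was when the image was captured
velocity = (-3.0, -0.5)       # metres/second: estimated from recent observations

# A purely reactive agent aims at seen_at and arrives too late; an anticipatory agent aims here:
aim_point = predict_position(seen_at, velocity, sensing_delay + motor_delay)
print(aim_point)   # (0.8, 1.3): start moving the hand towards where the ball will be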

1.4 Levels of Abstraction in Modelling Cognitive Systems

All systems can be viewed at different levels of abstraction, suc-


cessively removing specific details at higher levels and keeping
just the general essence of what is important for a useful model
of the system. For example, if we wanted to model a physical
structure, such as a suspension bridge, we could do so by speci-
fying each component of the bridge — the concrete foundations,
the suspension cables, the cable anchors, the road surface, and
the traffic that uses it — and the way they all fit together and
influence one another. This approach models the problem at a
very low level of abstraction, dealing directly with the materials
from which the bridge will be built, and we would really only
know after we built it whether or not the bridge will stay up. Al-
ternatively, we could describe the forces at work in each member
of the structure and analyze them to find out if they are strong
enough to bear the required loads with an acceptable level of
movement, typically as a function of different patterns of traffic
flow, wind conditions, and tidal forces. This approach models
the problem at a high level of abstraction and allows the architect
to establish whether or not his or her design is viable before
it is constructed. For this type of physical system, the idea is
usually to use an abstract model to validate the design and then
realize it as a physical system. However, deciding on the best
level of abstraction is not always straightforward. Other types
of system — biological ones for example — don’t yield easily to
this top-down approach. When it comes to modelling cognitive
systems, it will come as no surprise that there is some disagree-
ment in the scientific community about what level of abstraction one should use and how they should relate to one another. We consider here two contrasting approaches to illustrate their differences and their relative merits in the context of modelling and designing artificial cognitive systems.

Footnote 20: David Marr was a pioneer in the field of computer vision. He started out as a neuroscientist but shifted to computational modelling to try to establish a deeper understanding of the human visual system. His seminal book Vision [18] was published posthumously in 1982.

Footnote 21: Marr's three-level hierarchy is sometimes known as the Levels of Understanding framework.

As part of his influential work on modelling the human visual system, David Marr20 advocated a three-level hierarchy of abstraction;21 see Figure 1.6. At the top level, there is the computational theory. Below this, there is the level of representation and algorithm. At the bottom, there is the hardware implementation.

Figure 1.6: The three levels at which a system should be understood and modelled: the computational theory that formalizes the problem, the representational and algorithmic level that addresses the implementation of the theory, and the hardware level that physically realizes the system (after David Marr [18]). The computational theory is primary and the system should be understood and modelled first at this level of abstraction, although the representational and algorithmic level is often more intuitively accessible.

At the level of the computational theory, you need to answer questions such as "what is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it is carried out?" At the level of representation and algorithm, the
questions are different: “how can this computational theory be
applied? In particular, what is the representation for the input
and output, and what is the algorithm for the transformation?”
Finally, the question at the level of hardware implementation is
“how can the representation and algorithm be physically real-
ized?” In other words, how can we build the physical system?
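A rough software analogy may help here; it is our own illustration and not an example Marr used. The computational theory states what is computed and why; the representation and algorithm level offers one of several interchangeable ways of computing it; the hardware level is whatever physically runs it.

# Rough software analogy for Marr's three levels (an assumption for illustration,
# not an example used by Marr himself).

# Level 1 -- computational theory: WHAT is computed and WHY.
# "Given noisy position measurements of a moving object, estimate its current position
#  so that the estimate is robust to measurement noise."

# Level 2 -- representation and algorithm: one possible choice among many.
def moving_average(measurements, window=3):
    """Represent the input as a list of floats; estimate by averaging the last few samples."""
    recent = measurements[-window:]
    return sum(recent) / len(recent)

def exponential_filter(measurements, alpha=0.5):
    """A different algorithm and representation satisfying the same computational theory."""
    estimate = measurements[0]
    for m in measurements[1:]:
        estimate = alpha * m + (1 - alpha) * estimate
    return estimate

# Level 3 -- implementation: the same algorithm could be realized on a CPU, a GPU,
# a microcontroller, or (in the biological case) in neural tissue. Running this file
# with CPython on a laptop is just one physical realization.
data = [1.0, 1.2, 0.9, 1.4, 1.1]
print(moving_average(data), exponential_filter(data))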
Marr emphasized that these three levels are only loosely cou-
pled: you can — and, according to Marr, you should — think
about one level without necessarily paying any attention to those
below it. Thus, you begin modelling at the computational level,
ideally described in some mathematical formalism, moving on to
representations and algorithms once the model is complete, and finally you can decide how to implement these representations and algorithms to realize the working system. Marr's point is that, although the algorithm and representation levels are more accessible, it is the computational or theoretical level that is critically important from an information processing perspective. In essence, he states that the problem can and should first be modelled at the abstract level of the computational theory without strong reference to the lower and less abstract levels.22 Since many people believe that cognitive systems — both biological and artificial — are effectively information processors, Marr's hierarchy of abstraction is very useful.

Footnote 22: Tomaso Poggio recently proposed a revision of Marr's three-level hierarchy in which he advocates greater emphasis on the connections between the levels and an extension of the range of levels, adding Learning and Development on top of the computational theory level (specifically hierarchical learning), and Evolution on top of that [19]. Tomaso Poggio co-authored the original paper [20] on which David Marr based his more famous treatment in his 1982 book Vision [18].

Marr illustrated his argument succinctly by comparing the

problem of understanding vision (Marr’s own goal) to the prob-


lem of understanding the mechanics of flight.
“Trying to understand perception by studying only neurons is
like trying to understand bird flight by studying only feathers: it
just cannot be done. In order to understand bird flight, we have to
understand aerodynamics; only then do the structure of feathers
and the different shapes of birds’ wings make sense”

Objects with different cross-sectional profiles give rise to differ-


ent pressure patterns on the object when they move through a
fluid such as air (or when a fluid flows around an object). If you
choose the right cross-section then there is more pressure on the
bottom than on the top, resulting in a lifting force that counters
the force of gravity and allows the object to fly. It isn’t until you
know this that you can begin to understand the problem in a way that will yield a solution for your specific needs.

Of course, you eventually have to decide how to realize a computational model but this comes later. The point he was making is that you should decouple the different levels of abstraction and begin your analysis at the highest level, avoiding consideration of implementation issues until the computational or theoretical model is complete. When it is, it can then subsequently drive the decisions that need to be taken at the lower level when realizing the physical system.

Marr's dissociation of the different levels of abstraction is significant because it provides an elegant way to build a complex system by addressing it in sequential stages of decreasing abstraction. It is a very general approach and can be applied successfully to modelling, designing, and building many different systems that depend on the ability to process information. It also echoes the assumptions made by proponents of a particular paradigm of cognition — cognitivism — which we will meet in the next chapter.23

Footnote 23: The cognitivist approach to cognition proposes an abstract model of cognition which doesn't require you to consider the final realization. In other words, cognitivist models can be applied to any platform that supports the required computations and this platform could be a computer or a brain. See Chapter 2, Section 2.1, for more details.

Footnote 24: Over the last 25 years, Scott Kelso, the founder of the Center for Complex Systems and Brain Sciences at Florida Atlantic University, has developed a theory of Coordination Dynamics. This theory, grounded in the concepts of self-organization and the tools of coupled nonlinear dynamics, incorporates essential aspects of cognitive function, including anticipation, intention, attention, multimodal integration, and learning. His book, Dynamic Patterns – The Self-Organization of Brain and Behaviour [21], has influenced research in cognitive science world-wide.

Not everyone agrees with Marr's approach, mainly because they think that the physical implementation has a direct role to play in understanding the computational theory. This is particularly so in the emergent paradigm of embodied cognition which we will meet in the next chapter, the embodiment reflecting the physical implementation. Scott Kelso24 makes a case for a com-

Figure 1.7: Another three levels at which a system should be modelled: a boundary constraint level that determines the task or goal, a collective variable level that characterizes coordinated states, and a component level which forms the realized system (after Scott Kelso [21]). All three levels are equally important and should be considered together.

pletely different way of modelling systems, especially non-linear


dynamical types of systems that he believes may provide the true
basis for cognition and brain dynamics. He argues that these
types of system should be modelled at three distinct levels of ab-
straction, but at the same time. These three levels are a boundary
constraint level, a collective variables level, and a components
level. The boundary constraint level determines the goals of
the system. The collective variable25 level characterizes the behaviour of the system. The component level forms the realized physical system. Kelso's point is that the specification of these three levels of model abstraction are tightly coupled and mutually dependent. For example, the environmental context of the system often determines what behaviours are feasible and useful. At the same time, the properties of the physical system may simplify the necessary behaviour. Paraphrasing Rolf Pfeifer,26 "morphology matters": the properties of the physical shape or the forces needed for required movements may actually simplify the computational problem. In other words, the realization of the system and its particular shape or morphology cannot be ignored and should not be abstracted away when modelling the system. This idea that you cannot model the system in isolation from either the system's environmental context or the system's ultimate physical realization is linked directly to the relationship between brain, body, and environment. We will meet it again later in the book when we discuss enaction in Chapter 2 and when we consider the issue of embodiment in Chapter 5.

Footnote 25: Collective variables, also referred to as order parameters, are so called because they are responsible for the system's overall collective behaviour. In dynamical systems theory, collective variables are a small subset of the system's many degrees of freedom but they govern the transitions between the states that the system can exhibit and hence its global behaviour.

Footnote 26: Rolf Pfeifer, University of Zurich, has long been a champion of the tight relationship between a system's embodiment and its cognitive behaviour, a relationship set out in his book How the body shapes the way we think: A new view of intelligence [22], co-authored by Josh Bongard.
The mutual dependence of system realization and system

Figure 1.8: Circular causality — sometimes referred to as continuous reciprocal causation or recursive self-maintenance — refers to the situation where global system behaviour somehow influences the local behaviour of the system components and yet it is the local interaction between the components that determines the global behaviour. This phenomenon appears to be one of the pivotal mechanisms in autonomous cognitive systems.

modelling presents us with a difficulty, however. If we look carefully, we see a circularity, with everything depending on something else. It's not easy to see how you break into the modelling circle. This is one of the attractions of Marr's approach: there is a clear place to get started. This circularity crops up repeatedly in cognition and it does so in many forms. All we will say for the moment is that circular causality27 — where global system behaviour somehow influences the local behaviour of the system components and yet it is the local interaction between the components that determines the global behaviour; see Figure 1.8 — appears to be one of the key mechanisms of cognition. We will return again to this point later in the book. For the moment, we'll simply remark that the two contrasting approaches to system modelling mirror two opposing paradigms of cognitive science. It is to these that we now turn in Chapter 2 to study the foundations that underpin our understanding of natural and artificial cognitive systems.

Footnote 27: Scott Kelso uses the term "circular causality" to describe the situation in dynamical systems where the cooperation of the individual parts of the system determine the global system behaviour which, in turn, governs the behaviour of these individual parts [21]. This is related to Andy Clark's concept of continuous reciprocal causation (CRC) [23] which "occurs when some system S is both continuously affecting and simultaneously being affected by, activity in some other system O" [24]. These ideas are also echoed in Mark Bickhard's concept of recursive self-maintenance [13]. We will say more about these matters in Chapter 4.
