1992-11-12: The Intentional Stance

The Intentional Stance (1987)

by Daniel C. Dennett (1942-)

This book (which I first read in 1989) describes the stance by which we (usually unconsciously) understand and predict the behavior of humans and other creatures (and even inanimate objects, such as computers, corporations, and nations).  The name is based on a technical use of the word intentional, which philosophers use for the mental states indicated by the intentional idiom.  Roughly, it refers to mental objects such as beliefs, desires, wishes, expectations, intentions, suspicions, and thoughts.  Dennett compares it with “folk psychology” and various other theories, on the way to constructing a full-blown theory of the mind.

Dennett describes an intentional system as one which holds certain beliefs and desires, and acts rationally on the basis of its beliefs to further its desires.  Taking some system to be an intentional system is first of all a matter of pragmatism.  This stance can often be very effective and economical for predicting and understanding the behavior of a system.  Of course, Dennett takes it more seriously than this, and argues that the intentions attributed to the human (and animal) variety of intentional systems are actually present as physical structures in the brains of such systems, and capable of creating the observed behavior of those systems.

My interest (aside from a general spectator interest in cognitive science) in this subject is the relation between intentions and memes.  I suspect that the intentional strategy which humans (mostly implicitly, but occasionally explicitly) use grew out of the social environment of distant hominid forebears, as a way of creating socially acceptable behavior.  On this (unconscious) layer of mental organization, the layers of memes that make up all cultures began to be laid down.  The animals which survived in social settings were those whose brains were organized in such a way as to enable this type of mental structure.  The descendants of those social hominids, whose brains were further enhanced by evolutionary pressure, eventually were capable of high-order intentions (intentions about intentions), including politics and philosophy.  In my natural history of memes, I would like to draw a plausible sequence of developments starting from the intentional stance reaching to modern human minds and culture (a long way).

A clear description of an intentional system is given on page 49:

  1. A system’s beliefs are those it ought to have, given its perceptual capacities, its epistemic needs, and its biography…
  2. A system’s desires are those it ought to have, given its biological needs and the most practicable means of satisfying them…
  3. A system’s behavior will consist of those acts that it would be rational for an agent with those beliefs and desires to perform.

The italicized phrases are of course subjective.  When considering the behavior of a monkey or a dog, we can estimate the ought to have’s on the basis of our understanding of its biological nature.  For humans, we implicitly assume, unless we know better from experience, that they are like us.

By assuming that humans (and other mammals) are intentional systems, and got that way by a process of evolution, we can make many predictions of future behavior and understand many past actions.  In addition, I think we can make highly plausible guesses about the sequences of accretion of memes onto the pre-human character that generated the long succession of cultures that have existed throughout humankind’s existence.

On page 50, he asserts his belief that the quality of “design” attained by evolution, exemplified by the design of the eye and vision system, can be continued “all the way in”, to deliberation design and belief design and strategy-concocter design.  In other words, “nature has built us to do things right; … to believe the truth and love the good.”  I find this optimistic attitude invigorating and inspirational.

One apparently weak point is the assumption of rationality; we are not all that rational.  Nonetheless, we are rational enough that the assumption gives great predictive power to intentional systems which take others to be intentional systems.  To say that someone believes that p is to say that that person is disposed to behave in certain ways under certain conditions.

Dennett also points out (page 51) something about evolution that is often missed:

If we are designed by evolution, then we are almost certainly nothing more than a bag of tricks, patched together by a satisficing Nature …  and no better than our ancestors had to be to get by.

The final point I need from Dennett’s work is the notion of different grades of intentional systems (page 242).  A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires.  These can be expressed as

x believes that p

y wants that q

where p and q are clauses that contain no intentional idioms (see page ??).

A second-order intentional system has intentional states about other intentional states (those of others as well as its own).

x wants y to believe that x is hungry

x believes that y wants x to jump left

x fears that y will discover that x has a food cache

A third-order intentional system is capable of such states as

x wants y to believe that x believes he is alone
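These grades can be made concrete with a small sketch (my own illustration, not from Dennett): if we represent a proposition as either a plain clause or an agent holding an attitude toward another proposition, then the “order” of an intentional state is just the depth of nesting of intentional idioms.

```python
from dataclasses import dataclass
from typing import Union

# A proposition is either a plain (non-intentional) clause or an
# intentional state: an agent holding an attitude toward a proposition.
@dataclass
class Plain:
    text: str

@dataclass
class Attitude:
    agent: str       # e.g. "x"
    verb: str        # e.g. "believes", "wants", "fears"
    content: "Prop"  # what the attitude is about

Prop = Union[Plain, Attitude]

def order(p: Prop) -> int:
    """Nesting depth of intentional idioms: 0 for a plain clause."""
    if isinstance(p, Plain):
        return 0
    return 1 + order(p.content)

# "x wants y to believe that x is hungry" -- a second-order state
second = Attitude("x", "wants",
                  Attitude("y", "believes", Plain("x is hungry")))

# "x wants y to believe that x believes he is alone" -- a third-order state
third = Attitude("x", "wants",
                 Attitude("y", "believes",
                          Attitude("x", "believes", Plain("x is alone"))))

print(order(second))  # 2
print(order(third))   # 3
```

On this toy representation, Dennett’s grades correspond to the maximum order of state a system can entertain; the “five or six orders” he mentions below would be trees of depth five or six.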

He provides the following marvelous passage:

How high can we human beings go?  In principle, forever, no doubt, but in fact I suspect that you wonder whether I realize how hard it is for you to be sure that you understand whether I mean to be saying that you can recognize that I can believe you to want me to explain that most of us can keep track of only about five or six orders, under the best of circumstances.

This is certainly much higher-order than the famous sentence attributed to Henry Kissinger:  I know you think you understand what you think I said, but I am not sure you realize that what you heard is not what I meant.  (A fifth-order clause and a fourth-order clause, by my count.)

It seems likely that deliberate communication between intentional systems requires that they be capable of at least third-order intentions.  So in order to deliberately pass a belief p to another, I must believe that my behavior will make her believe that I believe p.  Of course these are not necessarily all ‘conscious’.  Dennett makes a distinction between conscious and unconscious beliefs (pp. ??), but does so in a way that suggests the distinction is not universally accepted within his community.

For my purposes, a belief is a type of association, that is, a physical structure composed of patterns of connections among neurons.  Such structures obey the laws of association calculus, and influence a creature’s selection of behavior.  For most creatures, there is no question of these being conscious.  In humans the only interesting question is: Which beliefs are or can become conscious?
