
Society of Mind

Glossary and bibliography

Because I thought this theory of the mind might interest not only specialists but everyone who thinks, I favored ordinary words over the technical language of psychology. This was rarely any sacrifice because so many psychological terms already stood for obsolete ideas. But since I also wished to speak to specialists, I tried to hide more technical ideas between the lines; I hope this second level does not show. However there still were certain points at which no ordinary words seemed satisfactory, and I had to invent new terms or assign new meanings to old ones.


Accumulation

(12.6) A type of learning based on collecting examples of an idea without attempting to describe what they have in common. Contrast with Uniframe.


Agency

(1.6) Any assembly of parts considered in terms of what it can accomplish as a unit, without regard to what each of its parts does by itself.


Agent

(1.4) Any part or process of the mind that by itself is simple enough to understand — even though the interactions among groups of such agents may produce phenomena that are much harder to understand.

Artificial Intelligence

(7.4) The field of research concerned with making machines do things that people consider to require intelligence. There is no clear boundary between psychology and Artificial Intelligence because the brain itself is a kind of machine. For an introduction to this field, I recommend Patrick Winston's textbook Artificial Intelligence, Addison-Wesley, 1984. For more connections with psychology, see Roger Schank and Kenneth Colby (eds.), Computer Models of Thought and Language, Freeman, 1973. Some influential early ideas about brains and machines can be seen in Warren McCulloch's Embodiments of Mind, MIT Press, Cambridge, Mass., 1966. See Intelligence.

Attachment Learning

(17.2) The specific theory, proposed in this book, that the presence of someone to whom we are emotionally attached has a special effect on how we learn, especially in infancy. Attachment learning tends to cause us to modify our goals — rather than merely improve our methods for achieving the goals we already have.


B-Brain

(6.4) Any part of the brain connected not to the outside world, but only to another part of the same brain. Like a manager, a B-brain can supervise an A-brain without understanding either how the A-brain works or the problems with which the A-brain is involved — for example, by recognizing patterns of activity that indicate the A-brain is confused, wasting time in repetitive activity, or focused on an unproductive level of detail.


Block-Arch Scenario

(12.1) A scenario adapted from Patrick Winston's doctoral thesis, Learning Structural Descriptions from Examples, in The Psychology of Computer Vision, P. H. Winston (ed.), McGraw-Hill, 1975. The study of the world of children's building-blocks may at first seem childishly simple, but it has been one of the most productive areas of research about Artificial Intelligence, child psychology, and modern robotics engineering.


Censor

(27.2) An agent that inhibits or suppresses the operation of other agents. Censorlike agents are involved with how we learn from our mistakes. This idea played a prominent role in Freud's theories but has been virtually ignored by modern experimental psychologists — presumably because it is hard to study what people do not think. See Freud's 1905 book Jokes and Their Relation to the Unconscious. I suspect censorlike agents may constitute the larger part of human memory. The discussion of censors and jokes in chapter 27 is based on my essay Jokes and Their Relation to the Cognitive Unconscious, published in Cognitive Constraints on Communication, Representations and Processes, L. Vaina and J. Hintikka (eds.), D. Reidel, 1981. See Suppressors.

Challenger, Professor

(4.4) A rival of mine, disguised as the treacherous archaeologist in Arthur Conan Doyle's novel The Lost World, who resembles Sherlock Holmes's nemesis, the mathematician Moriarty, except for being somewhat more honorable.

Closing the Ring

(19.10) A technique by which an agency can recall many details of a memory when given only a few cues.

Common Sense

(1.5) The mental skills that most people share. Commonsense thinking is actually more complex than many of the intellectual accomplishments that attract more attention and respect, because the mental skills we call expertise often engage large amounts of knowledge but usually employ only a few types of representations. In contrast, common sense involves many different kinds of representations and thus requires a larger range of different skills.

Computer Science

(6.8) A science still in its infancy. While other sciences study how particular types of objects interact, computer science studies how interactions work in general — that is, how societies of parts can accomplish what those parts cannot do separately. Although computer science began with the study of serial computers — that is, of machines that could do only one thing at a time — it has grown to the point of studying the sorts of interconnected networks of processes that must go on inside societies of mind. (For an introduction to the theory of single-process machines, see my book Computation: Finite and Infinite Machines, Prentice-Hall, 1967.)


Consciousness

(6.1) In this book, the word is used mainly for the myth that human minds are self-aware in the sense of perceiving what happens inside themselves. I maintain that human consciousness can never represent what is occurring at the present moment, but only a little of the recent past — partly because each agency has a limited capacity to represent what happened recently and partly because it takes time for agencies to communicate with one another. Consciousness is peculiarly hard to describe because each attempt to examine temporary memories distorts the very records it is trying to inspect. The description of consciousness in section 6.1 was adapted from my epilogue to Vernor Vinge's novel True Names, Bluejay Books, New York, 1984.


Context

(20.2) The effect upon one's state of mind of all the influences present at the time. At each moment, the context within which each agency works is determined by the activity of the nemes that reach that agency. See Neme.


Cross-Exclusion

(16.4) An arrangement in which each of several agents is connected so as to inhibit all the others — so that only one of them can remain active at a time.
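
The principle is easy to simulate. The sketch below is only a schematic illustration, not anything from the book; the activity levels and the inhibition strength are invented for the example. Each agent is weakened in proportion to the summed activity of its rivals, and repeating this leaves only the strongest agent active.

```python
def cross_exclude(activity, inhibition=0.5, steps=20):
    """Mutual inhibition: each agent loses activity in proportion to
    the total activity of all its rivals, floored at zero."""
    a = list(activity)
    for _ in range(steps):
        total = sum(a)
        a = [max(0.0, x - inhibition * (total - x)) for x in a]
    return a

# Three competing agents; any initial advantage compounds until
# one agent has silenced all the others.
winners = [i for i, x in enumerate(cross_exclude([0.6, 0.5, 0.3])) if x > 0]
```

Because every rival drags every other down, the competition is self-reinforcing: the strongest agent suffers the least inhibition and so wins outright.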

Cross-Realm Correspondence

(29.4) A structure that has useful applications in two or more different mental realms. Such correspondences sometimes enable us to transfer knowledge and skill from one domain to another — without needing to accumulate experience in that other realm. This is the basis of certain important kinds of analogies and metaphors.


Creativity

(7.10) The myth that the production of novel ideas, artistic or otherwise, comes from some distinctive form of thought. I recommend the discussion of this subject in the chapter Variations on a Theme as the Crux of Creativity, in Douglas Hofstadter's Metamagical Themas, Basic Books, 1985.

Default Assumption

(8.5, 12.12) The kind of assumption we make when we lack reasons to think otherwise. For example, we assume by default that an unfamiliar individual who belongs to a familiar class will think and act like a typical member of that class. Default assumptions are more than mere conveniences; they constitute our most productive way to make generalizations. Although such assumptions are frequently wrong, they usually do little harm because they are automatically displaced when more specific information becomes available. However, they can do incalculable harm when they are held too rigidly.


Demon

(27.1) An agent that constantly watches for a certain condition and intervenes when it occurs. Our discussion of demons is partly based on Eugene Charniak's doctoral thesis, Toward a Model of Children's Story Comprehension, MIT, 1972.


Difference-Engine

(7.8) An agency whose actions tend to make the present state of affairs more like some goal or desired state whose description is represented in that agency. This idea was developed by Allen Newell, J. C. Shaw, and Herbert A. Simon into an important theory about human problem solving. See G. Ernst and Allen Newell, GPS, A Case Study in Generality and Problem Solving, Academic Press, 1969.
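
In the spirit of Newell, Shaw, and Simon's GPS, the idea can be caricatured in a few lines of Python. This is a toy sketch, not the actual GPS program: states are sets of facts, operators are (preconditions, additions) pairs, and all the facts below are invented for the illustration.

```python
def achieve(state, goal, operators, depth=8):
    """Means-ends analysis: pick a difference between the current state
    and the goal, find an operator that removes it, and recursively
    achieve that operator's preconditions first."""
    state = set(state)
    if goal <= state:
        return state
    if depth == 0:
        return None
    missing = next(iter(goal - state))
    for preconditions, additions in operators:
        if missing in additions:
            mid = achieve(state, preconditions, operators, depth - 1)
            if mid is not None:
                return achieve(mid | additions, goal, operators, depth - 1)
    return None

# Hypothetical operators: (facts required, facts the action adds).
operators = [
    (set(), {"have_paint"}),                      # buy paint
    ({"have_paint", "have_brush"}, {"painted"}),  # paint the fence
]
final = achieve({"have_brush"}, {"painted"}, operators)
```

The engine never plans the whole sequence in advance; it simply keeps reducing whatever difference it notices next, which is what makes the scheme so simple and so general.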


Direction-Neme

(24.6) An agent associated with a particular direction or region in space. I suspect that bundles of direction-nemes are used inside our brains for representing not only spatial locations and directions, but also for representing many nonspatial concepts. Direction-nemes resemble isonomes in spatial realms but more resemble polynemes in other realms. See Interaction-Square and Frame-Array.

Distributed Memory

(20.9) A representation in which each fragment of information is stored, not by making a single, substantial change in one agent, but by making small changes in many different agents. Many theorists have been led to believe that the construction of distributed memory-systems must involve nondigital devices such as holograms; that this is not so was shown by D. J. Willshaw, O. P. Buneman, and H. C. Longuet-Higgins in Non-Holographic Associative Memory, Nature, 222, 1969. See Memorizers.
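
A Willshaw-style network can be sketched as follows. This is a toy illustration with made-up binary patterns, not the authors' actual construction; the point is only that each association is smeared across a whole matrix of connections, yet each stored value can still be recalled exactly from its key.

```python
def store(matrix, key, value):
    """Superimpose one key->value association onto the shared matrix
    by OR-ing in the outer product of the two binary patterns."""
    for i, k in enumerate(key):
        for j, v in enumerate(value):
            if k and v:
                matrix[i][j] = 1

def recall(matrix, key):
    """A value line fires only if every active key line feeding it is
    connected, i.e. its summed input reaches the key's total activity."""
    threshold = sum(key)
    width = len(matrix[0])
    return [1 if sum(key[i] * matrix[i][j] for i in range(len(key))) >= threshold
            else 0 for j in range(width)]

memory = [[0] * 4 for _ in range(6)]
store(memory, [1, 0, 1, 0, 0, 0], [0, 1, 1, 0])
store(memory, [0, 1, 0, 1, 0, 0], [1, 0, 0, 1])
```

No single connection holds either association; both live, overlapped, in the same small matrix, and each is still recovered without error.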

Duplication Problem

(23.2) The question of how a mind could compare two similar ideas without possessing two identical agencies for representing both of them at the same time. This problem was never recognized in older theories of psychology, and I suspect it will be the downfall of most holistic theories of higher-level thought. See Time Blinking.


Emotion

(16.1) A term used for too many different purposes. There is a popular view that emotions are inherently more complex and harder to understand than other aspects of human thought. I maintain that infantile emotions are comparatively simple in character and that the complexity of adult emotions results from accumulating networks of mutual exploitations. In adults, these networks eventually become indescribably complicated, but no more so than the networks of our adult intellectual structures. Beyond a certain point, to distinguish between the emotional and intellectual structures of an adult is merely to describe the same structures from different points of view. See Proto-specialist.


Exploitation

(4.5) The act of one agency making use of the activity of another agency, without understanding how it works. Exploitation is the most usual relationship among agencies because it is so difficult for them to understand one another.

Exception Principle

(12.9) The concept that it may not pay to change a well-established skill in order to accommodate an exception. The more one builds upon a certain foundation, the greater the disruption upon changing it. A system's growth will tend to cease, past the point at which the damage caused by any change outweighs the immediate gain. See Investment Principle.


Frame

(24.2) A representation based on a set of terminals to which other structures can be attached. Normally, each terminal is connected to a default assumption, which is easily displaced by more specific information. Other ideas about frames that are not discussed within this book were published in my chapter A Framework for Representing Knowledge, in Psychology of Computer Vision, P. H. Winston (ed.), McGraw-Hill, 1975. See Picture-Frames, Trans-frame.
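
A frame's terminals and their default assumptions can be mimicked with a small class. This is a hypothetical sketch; the terminal names and values are invented for the example, and real frames involve far richer machinery.

```python
class Frame:
    """A frame: named terminals, each preloaded with a default
    assumption that is displaced as soon as more specific
    information is attached."""
    def __init__(self, **defaults):
        self.defaults = defaults
        self.attached = {}

    def attach(self, terminal, value):
        self.attached[terminal] = value

    def terminal(self, name):
        # Specific information, when present, displaces the default.
        return self.attached.get(name, self.defaults[name])

bird = Frame(covering="feathers", locomotion="flies")
penguin = Frame(covering="feathers", locomotion="flies")
penguin.attach("locomotion", "swims")   # evidence displaces the default
```

Until something better is attached, every terminal already supplies a plausible answer, which is why frames make default reasoning nearly free.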


Frame-Array

(25.2) A family of frames that share the same terminals. Information attached to any terminal of a frame-array automatically becomes available to all the frames of that array. This makes it easy to change perspective, not only in regard to a physical viewpoint, but in other mental realms as well. Frame-arrays are often controlled by bundles of direction-nemes.

Functional Autonomy

(17.4) The idea that specific goals can lead to subgoals of broader character. For example, in order to please another individual, a child might develop more general goals of acquiring knowledge, power, or wealth — yet the very same subgoals might serve equally well an initial wish to injure that other individual. The term functional autonomy derives from Gordon Allport, who was one of my professors at Harvard.

Functional Definition

(12.4) Specifying something in terms of how it might be used, rather than in terms of its parts and their relationships. See Structural Definition.

Generate and Test

(7.3) Solving a problem by trial and error — that is, by proposing solutions recklessly, then rejecting those that do not work.
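
As a sketch (the puzzle here is invented), the method needs nothing more than a stream of proposals and a separate test that recognizes a solution:

```python
from itertools import product

def generate_and_test(candidates, is_solution):
    """Propose candidates blindly, one after another, and keep
    the first one that the separate test accepts."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None

# Find a pair of digits whose product is 35, by pure trial and error.
answer = generate_and_test(product(range(10), repeat=2),
                           lambda p: p[0] * p[1] == 35)
```

The generator knows nothing about the problem and the tester knows nothing about generation; that total separation is both the method's generality and the source of its recklessness.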


Genius

(7.10) An individual of prodigious mental accomplishment. Although even the most outstanding human prodigies rarely develop even twice as quickly as their peers, many people feel that their existence demands a special explanation. I suspect that the answer is to be found not in the superficial skills such people learn, but in the early accidents that lead them into learning better ways to learn.


Goal

(7.8) The representation in a difference-engine of an imagined final state of affairs. This definition of goal may at first seem too impersonal because it does not explain either the elation that comes with achieving a human goal or the frustration that accompanies failure. However, we should not expect to explain such complicated phenomena of adult psychology directly in terms of simple principles, since they also depend on many other aspects of our mental architecture. Basing our concept of goal on the difference-engine idea helps us to avoid the single-agent fallacy by permitting us to speak about a goal without needing to refer to the person who entertains that goal; a person's many agencies may each have different goals — without that person being aware of them.


Grammar-Tactic

(22.10) An operation involved with speech that corresponds to a step in a process of constructing a mental representation. Grammar-tactics are not the same as grammar rules, although these have a close relation. The difference is that grammar rules are both superficial and subjective — in the sense that they purport to describe regularities in one person's behavior as observed by someone else — while grammar-tactics are objective in the sense that they are defined to be the underlying processes that actually produce speech. Although it may be more difficult to discover just what those processes do, it is better to speculate on how language is produced and used than merely to describe its observed, external forms.


Homunculus

(5.2) Literally, a tiny person. In psychology, the unproductive and paradoxical idea that a person's behavior depends upon the behavior of another personlike entity located deeper inside that person.


Interaction-Square

(14.9) The idea of representing the interaction between two processes by linking pairs of examples to direction-nemes. We can use this same technique not only for representing spatial relationships, but for causal, temporal, and many other kinds of interactions. This makes the interaction-square idea a powerful scheme for representing cross-realm correspondences.


Interaction

(2.1) The effect of one part of a system upon another part. It is remarkable that in the history of science virtually all phenomena have eventually been explained in terms of interactions between parts taken two at a time. For example, Newton's law of gravity, which describes the mutual attraction of two particles, enables us to predict the motions of all the planets, stars, and galaxies — without any need to consider three or more objects at a time! One could conceive of a universe in which whenever three stars formed an equilateral triangle, one of them would instantly disappear — but virtually no three-part interactions have ever been observed in the physical world.


Interruption

(15.9) A term used in this book to refer to any process that can be suspended while the agency involved can do some other job — yet later return to where it left off. The ability to do this requires some sort of temporary memory. See Recursion Principle.


Intelligence

(7.1) A term frequently used to express the myth that some single entity or element is responsible for the quality of a person's ability to reason. I prefer to think of this word as representing not any particular power or phenomenon, but simply all the mental skills that, at any particular moment, we admire but don't yet understand.


Introspection

(6.5) The myth that our minds possess the ability directly to perceive or apprehend their own operations.


Intuition

(12.10) The myth that the mind possesses some immediate (and hence inexplicable) abilities to solve problems or perceive truths. This belief is based on naive views of how we get ideas. For example, we often experience a moment of excitement or exhilaration at the moment of completing a complex and prolonged but nonconscious analysis of a problem. The myth of intuition wrongly attributes the solution to what happened in that final moment. As for being able directly to apprehend what is true, we simply forget how frequently our intuitions turn out wrong.

Investment Principle

(14.6) The tendency of any well-developed skill to retard the growth of similar skills because the latter work less well in their early forms — and hence are used so infrequently that they never reach maturity. Because of this, we tend to invest most of our time and effort on elaborating a comparatively few techniques, rather than on accumulating many different ones. This can lead, at the same time, both to the formation of a coherent and effective personal style and to a decline in flexibility that may be wrongly attributed to aging. See Exception Principle.


Isonome

(22.1) A signal or pathway in the brain that has similar effects on several different agencies.


K-Line

(8.1) The theory that certain kinds of memories are based on turning on sets of agents that reactivate one's previous partial mental states. This idea was first described in my essay K-lines: A Theory of Memory, Cognitive Science, 4 (2), April 1980.
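
The core of the idea can be caricatured in a few lines of Python. This is a deliberately crude sketch, not the theory itself: agents are reduced to names in a set, and "activating" a K-line simply re-arouses the agents it recorded.

```python
class KLine:
    """Attach to the agents active at a memorable moment; activating
    the K-line later re-arouses that partial mental state."""
    def __init__(self, active_agents):
        self.agents = frozenset(active_agents)

    def activate(self, current_agents):
        # Impose the remembered partial state on top of the current one.
        return set(current_agents) | self.agents

# Record the agents that were active while building an arch...
arch_memory = KLine({"grasp", "stack", "see-blocks"})
# ...and later superimpose that partial state on a different situation.
state = arch_memory.activate({"see-blocks", "hear-music"})
```

Note that the K-line does not store any description of the arch; it stores only which agents to turn back on, which is what makes the scheme so cheap.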


Learning

(7.5) An omnibus word for all the processes that lead to long-term changes in our minds.


Level-Band

(8.5) The idea that a typical mental process tends to operate, at each moment, only within a certain range or portion of the structure of each agency. This makes it possible for one process to work on small details without disrupting other processes that are concerned with large-scale plans.

Logical Thinking

(18.1) The popular but unsound theory that much of human reasoning proceeds in accord with clear-cut rules that lead to foolproof conclusions. In my view, we employ logical reasoning only in special forms of adult thought, which are used mainly to summarize what has already been discovered. Most of our ordinary mental work — that is, our commonsense reasoning — is based more on thinking by analogy — that is, applying to our present circumstances our representations of seemingly similar previous experiences.


Memorizer

(19.5) An agent that can reset an agency into some previously useful state. See Recognizer and Distributed Memory.


Memory

(15.3) An omnibus term for a great many structures and processes that have ill-defined boundaries in both everyday and technical psychology; these include what we call re-membering, re-collecting, re-minding, and recognizing. This book suggests that what these share in common is their involvement with how we reproduce our former partial mental states.

Mental State

(8.4) The condition of activity of a group of agents at a certain moment. In this book we have assumed that every agent, at any moment, is either completely aroused or completely quiescent; in other words, we ignore the possibility of different degrees of arousal. This kind of two-state or digital assumption is characteristic of computer science and, at first, may seem too simplistic. However, experience has shown that the so-called analog theories that are alleged to be more realistic quickly become so complicated that, in the end, the simpler two-state models actually lead to deeper understandings — at least about basic principles. See Partial Mental State.


Metaphor

(29.8) The myth that there is a clear distinction between representations that are realistic and those that are merely suggestive. In their book Metaphors We Live By, University of Chicago Press, 1980, Mark Johnson and George Lakoff demonstrate that metaphor is no mere special device of literary expression but permeates virtually every aspect of human thought.


Micromemory

(15.8) The smallest components of our short-term memory-systems.


Microneme

(20.5) A neme involved with agents at a relatively low level. See Neme.


Model

(30.3) Any structure that a person can use to simulate or anticipate the behavior of something else.


Neme

(25.6) An agent whose output represents a fragment of an idea or state of mind. The context within which a typical agent works is largely determined by the activity of the nemes that reach it. I called nemes C-lines in Plain Talk About Neurodevelopmental Epistemology, in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Cambridge, Mass., 1977; the description in section 20.5 is also based on the idea of microfeature developed by David L. Waltz and Jordan Pollack in Massively Parallel Parsing, Cognitive Science, 9 (1).


Nome

(25.6) An agent whose outputs affect an agency in some predetermined manner, such as a pronome, isonome, or paranome; an agent whose effect depends more on genetic architecture than on learning from experience. The suffix -nome was chosen to suggest an atom-like, unchanging quality.

Noncompromise Principle

(3.2) The idea that when two agencies conflict it may be better to ignore them both and yield control to yet another, independent agency.

Papert's Principle

(10.4) The hypothesis that many steps in mental growth are based less on the acquisition of new skills than on building new administrative systems for managing already established abilities.


Paranome

(29.3) An agent that operates on agencies of several different mental realms at once, with similar effects on all of them.

Partial Mental State

(8.4) A description of the state of activity of some particular group of mental agents. This technical but simple idea makes it easy to understand how one can entertain and combine several ideas at the same time. See Mental State.


Perceptron

(19.7) A type of recognition machine that learns to weigh evidence. Invented by Frank Rosenblatt in the late 1950s, Perceptrons use singularly simple procedures for learning which weights to assign to various fragments of evidence. Seymour Papert and I analyzed this type of machine in the book Perceptrons, MIT Press, 1969, and showed that the simplest kinds of Perceptrons cannot do very much by themselves. However, they can do much more when arranged into societies so that some of them can then learn to recognize relations among the patterns recognized by the others. It seems quite likely that some types of brain cells use similar principles.
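
Rosenblatt's learning procedure itself fits in a few lines. The sketch below is simplified (real Perceptrons also included fixed preprocessing layers); it learns the linearly separable OR pattern by nudging each weight toward, or away from, the evidence whenever the current weighted vote is wrong.

```python
def train_perceptron(samples, epochs=20, rate=1.0):
    """Rosenblatt's rule: when the weighted vote misclassifies an
    example, shift each weight toward (or away from) that example's
    evidence, and likewise shift the bias."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:              # target is 0 or 1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical OR is linearly separable, so a single unit can learn it.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(samples)
```

The same loop never converges for XOR, which is not linearly separable; that is the kind of limitation analyzed in Perceptrons, and the kind that societies of such units escape.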


Picture-Frame

(24.7) A type of frame whose terminals are controlled by direction-nemes. Picture-frames are particularly suited to representing certain kinds of spatial information.


Polyneme

(19.5) An agent that arouses different activities, at the same time, in different agencies — as a result of learning from experience. Contrast with Isonome.


Pronome

(21.1) A type of agent associated with a particular role or aspect of a representation — corresponding, for example, to the Actor, Trajectory, or Cause of some action. Pronome agents frequently control the attachments of terminals of frames to other frames; to do this, a pronome must possess some temporary memory.


Proto-Specialist

(16.3) One of the genetically constructed subsystems responsible for some of an animal's instinctive behavior. Large portions of our minds start out as almost separate proto-specialists, and we interpret their activity as manifesting different, primitive emotions. Later, as agencies become more interconnected and learn to exploit one another, these differences grow less distinct. This conception is based on the societylike theory proposed by Niko Tinbergen in The Study of Instinct, Oxford University Press, 1951.

Puzzle Principle

(7.3) The idea that any problem can be solved by trial and error — provided one already has some way to recognize a solution when one is found. See Generate and Test.

Realm, Mental

(29.1) A division of the mind that deals with some distinct variety of concern by using distinct mechanisms and representations.


Recognizer

(19.6) An agent that becomes active in response to a particular pattern of input signals.

Recursion Principle

(15.11) The idea that no society, however large, can overcome every limitation — unless it has some way to reuse the same agents, over and over again, for different purposes. See Interruption.


Reformulation

(13.1) Replacing one representation of something by another, different type of representation.


Representation

(21.6) A structure that can be used as a substitute for something else, for a certain purpose, as one can use a map as a substitute for an actual city. See Functional Definition and Model.

Re-duplication Theory of Speech

(22.10) My conjecture about what happens when a speaker explains an idea to a listener. A difference-enginelike process tries to construct a second copy of the idea's representation inside the speaker's mind. Each mental operation used in the course of that duplication process activates a corresponding grammar-tactic in the language-agency, and these lead to a stream of speech. This will result in communication to the extent that suitably matched inverse grammar-tactics construct, inside the listener's mind, an equivalent representation.


Script

(13.5) A sequence of actions produced so automatically that it can be performed without disturbing the activities of many other agencies. The action script in section 21.7 accomplishes this by eliminating all the higher-level managers like Put and Get. A script-based skill tends to be inflexible because it lacks bureaucracy; one gains speed by removing higher-level anchor points but loses access to alternatives when things go wrong; script-based experts run the risk of becoming inarticulate. The book by Roger Schank and Robert Abelson, Scripts, Plans, Goals and Understanding, Erlbaum Associates, 1977, speculates about the human use of scripts.


Self

(4.1) In this book, when written Self, the myth that each of us contains some special part that embodies the essence of the mind. When written as self, the word has the ordinary sense of a person's individuality. See Single-Agent Fallacy.

Single-Agent Fallacy

(4.1) The idea that a person's thought, will, decisions, and actions originate in some single center of control, instead of emerging from the activity of complex societies of processes.


Simulation

(2.4) A situation in which one system mimics the behavior of another. In principle, a modern computer can be used to simulate any other kind of machine. This is important for psychology, because in the past, there was usually no way for scientists to confirm their expectations about the consequences of a complicated theory or mechanism. The theories in this book have not yet been simulated, partly because they are not specified clearly enough and partly because the older computers lacked enough capacity and speed to simulate enough agents. Such machines have recently become available; for an example, see W. Daniel Hillis's doctoral thesis, The Connection Machine, MIT Press, Cambridge, Mass., 1985.


Simulus

(16.8) An illusion that a certain thing is present, caused by a process that evokes, at higher levels of the mind, a state resembling the state of mind that would be caused by that thing's actual presence. (A new word.)


Society

(1.1) In this book, an organization of parts of a mind. I reserved the term community for referring to organizations of people because I did not want to suggest that a human mind resembles a human community in any particular way.

Society of More

(10.2) The agents used by a mind to make comparisons of quantities.

Stage of Development

(16.2) An episode in the growth of a mind. Chapter 17 offers several reasons why complicated systems tend to grow in sequences of episodes, rather than through processes of steady change.

State of Mind

(8.4) See Mental State.

Structural Definition

(12.4) Describing something in terms of the relationships among its parts. Contrast with Functional Definition.


Suppressor

(27.2) A censorlike agent that works by disrupting a mental state that has already occurred. Suppressors are easier to construct than censors, and require less memory, but are much less efficient.

Time Blinking

(23.3) Finding differences between two mental states by activating them in rapid succession and noticing which agents change their states. I suspect it is by using this method that our brains avoid the duplication problem mentioned in section 23.2. Time blinking might be one of the synchronized activities of brain cells that gives rise to brain waves.
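
With agents reduced to names in a set, the comparison amounts to a symmetric difference. This is only a schematic sketch (the two states and their agents are invented), but it shows why no second, duplicate agency is needed:

```python
def time_blink(state_a, state_b):
    """'Blink' between two mental states: the agents that flip when
    the states are activated in rapid succession mark exactly where
    the states differ."""
    return state_a ^ state_b   # symmetric difference of active-agent sets

arch = {"has-top", "two-supports", "supports-apart"}
tower = {"has-top", "two-supports"}
difference = time_blink(arch, tower)
```

The agency that notices the flicker never needs to hold both descriptions at once; it only watches which agents change between one moment and the next.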


Trajectory

(21.6) Literally, the path or route of an action or activity. However, we use this word not only for a path in space, but, by analogy, for other realms of thought. See Pronome.


Trans-Frame

(21.3) A particular type of frame that is centered around the trajectory between two situations, one for before and the other for after. The theories in this book about Trans-frames owe much to Roger Schank. See his book, Conceptual Information Processing, North-Holland, 1975.


Unconscious

(17.10) A term often used, in common-sense psychology, to refer to areas of thought that are actively barred or censored against introspection. In this book we take conscious to mean aspects of our mental activity of which we are aware. But since there are very few such processes, we must consider virtually everything done by the mind to be unconscious.


Uniframe

(12.3) A description designed to represent whichever common aspects of a group of things can be used to distinguish them from other things.

Will, Freedom of

(30.6) The myth that human volition is based upon some third alternative to either causality or chance.