BEHAVIORAL AND BRAIN SCIENCES (1992) 15, 425-492 Printed in the United States of America

Allen Newell School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 Electronic mail: newell@

Abstract: The book presents the case that cognitive science should turn its attention to developing theories of human cognition that cover the full range of human perceptual, cognitive, and action phenomena. Cognitive science has now produced a massive number of high-quality regularities with many microtheories that reveal important mechanisms. The need for integration is pressing and will continue to increase. Equally important, cognitive science now has the theoretical concepts and tools to support serious attempts at unified theories. The argument is made entirely by presenting an exemplar unified theory of cognition, both to show what a real unified theory would be like and to provide convincing evidence that such theories are feasible. The exemplar is SOAR, a cognitive architecture, which is realized as a software system. After a detailed discussion of the architecture and its properties, with its relation to the constraints on cognition in the real world and to existing ideas in cognitive science, SOAR is used as a theory for a wide range of cognitive phenomena: immediate responses (stimulus-response compatibility and the Sternberg phenomena); discrete motor skills (transcription typing); memory and learning (episodic memory and the acquisition of skill through practice); problem solving (cryptarithmetic puzzles and syllogistic reasoning); language (sentence verification and taking instructions); and development (transitions in the balance beam task). The treatments vary in depth and adequacy, but they clearly reveal a single, highly specific, operational theory that works over the entire range of human cognition. SOAR is presented as an exemplar unified theory, not as the sole candidate. Cognitive science is not ready yet for a single theory - there must be multiple attempts. But cognitive science must begin to work toward such unified theories.
Keywords: artificial intelligence; chunking; cognition; cognitive science; computation; problem solving; production systems; SOAR; symbol systems

The book begins by urging on psychology unified theories of cognition: Psychology has arrived at the possibility of unified theories of cognition - theories that gain their power by positing a single system of mechanisms that operate together to produce the full range of human cognition. I do not say they are here, but they are within reach and we should strive to attain them. My goal is to convince the reader that unified theories of cognition are really worth striving for - now, as we move into the nineties. This cannot be done just by talking about it. An exemplar candidate is put forth to illustrate concretely what a unified theory of cognition means and why it should be a goal for cognitive science. The candidate is a theory (and system) called SOAR (Laird et al. 1987). The book is the written version of the William James Lectures, delivered at Harvard University in spring 1987. Its stance is personal, reflecting the author's thirty years of research in cognitive science, although this precis will be unable to convey much of this flavor.

© 1992 Cambridge University Press

Introduction. The first chapter describes the enterprise. It grounds the concerns for how cognitive science should proceed by reflecting on a well-known earlier paper entitled "You can't play 20 questions with nature and win" (Newell 1973a), which even then fretted about the gap between the empirical and theoretical progress in cognitive psychology and called for more integrative theories. This book may be seen as a step toward answering that call.

The nature of theories. Chapter 1 discusses the notion of theory, to ground communication, building on some concrete examples: Fitts's Law, the power law of practice, and a theory of search in problem spaces. There is nothing special about a theory just because it deals with the human mind. It is important, however, that the theory make predictions, not the theorist. Theories are always approximate, often deliberately so, in order to deliver useful answers. Theories cumulate, being refined and reformulated, corrected and expanded. This view is Lakatosian, rather than Popperian: A science has investments in its theories and it is better to correct one than to discard it.
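The two example regularities named above are quantitative laws. As a sketch, with illustrative (not fitted) constants, they can be written out directly; the specific coefficient values below are hypothetical:

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's Law: movement time grows linearly with the index of
    difficulty log2(2D/W) for a target of width W at distance D."""
    return a + b * math.log2(2 * distance / width)

def power_law_of_practice(t1, alpha, n):
    """Power law of practice: time on trial n falls as a power
    function of the number of practice trials."""
    return t1 * n ** (-alpha)

# Illustrative constants only, not empirically fitted values.
mt_near = fitts_movement_time(0.1, 0.1, distance=8, width=2)   # easier target
mt_far = fitts_movement_time(0.1, 0.1, distance=32, width=2)   # harder target
t_first = power_law_of_practice(10.0, 0.4, 1)      # first trial
t_hundred = power_law_of_practice(10.0, 0.4, 100)  # after practice
```

Both laws predict behavior parametrically, which is what makes them usable pieces of a cumulative theory: a farther or narrower target takes longer, and each trial of practice speeds performance by a diminishing amount.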


What are unified theories of cognition? Unified theories of cognition are single sets of mechanisms that cover all of cognition - problem solving, decision making, routine action, memory, learning, skill, perception, motor activity, language, motivation, emotion, imagining, dreaming, daydreaming, and so on. Cognition must be taken broadly to include perception and motor activity. No unified theory of cognition will deal with the full list above all at once. What can be asked is a significant advance in its coverage. As the title indicates, the book is focused on the plural, on many unified theories of cognition. This is not eclecticism, but a recognition of the state of the art. Cognitive science does not have a unified theory yet. Many candidates will arise, given the current practice of theorizing in cognitive science, where every scientist of note believes himself a major theorist. This point is important, since the book works with a single exemplar (SOAR). An exemplar is not the unified theory, and not necessarily even a candidate. Why strive for unified theories, beyond the apple-pie desire of all sciences to be unified? The biggest reason is that a single system (the mind) produces behavior. There are other reasons, however. Cognitive theory is radically underdetermined by data, hence as many constraints as possible are needed and unification makes this possible. A unified theory is a vehicle of cumulation simply as a theoretically motivated repository. A unified theory increases identifiability and allows theoretical constructs to be amortized over a wide base of phenomena. The human mind can be viewed as the solution to a set of multiple constraints. Exhibiting flexible behavior, exhibiting adaptive (goal-oriented) behavior, operating in real time, operating in terms of the four-dimensional environment of perceptual detail and a body with many degrees of freedom, operating in a world requiring immense knowledge to characterize, using symbols and abstractions, using language, learning from experience about the environment, acquiring abilities through development, operating autonomously but also within a social community, and being self-aware with a sense of self are all essential functionalities of the mind. A system must satisfy these constraints to be mind-like.
Humans also have known constraints on construction: a neural system, grown by embryological processes, and arising through evolution. How necessary these constructive processes are, so that only systems built that way can be minds, is currently an open question, but the major point is that the embodied minds we see satisfy all these constraints and any theory that ignores any appreciable number of them loses important sources of direction. Is psychology ready for unified theories? Cognitive science is well into its fourth decade; it is no longer a young child of a science. Indeed, behaviorism reached its own peak in fewer years. Cognitive science must take itself in hand and move forward. This exhortatory point is not made to suggest that cognitive science has made little progress. The strongest reason cognitive science should attempt unified theories now is that it has accumulated a vast and elegant body of regularities, highly robust and often parametric. This is especially the product of cognitive psychology and psycholinguistics, which have developed an amazing experimental engine for discovering, exploring, and confirming new regularities. Other sciences (e.g., biochemistry) have many more regularities, but they all fit within a theory that is integrated enough so that they never pose the challenge cognitive science now faces. If we do not begin integration now, we will find ourselves with an increasingly intractable task as the years go by while the engine of regularities works ever more industriously.


Though cognitive science does not yet have unified theories, there are harbingers: Many local theories make evident what cognitive mechanisms must be operating. But important attempts at unified theories have also been made. John Anderson's work on ACT* (Anderson 1983) must be taken to have pride of place among such attempts. [See also Anderson: "Is Human Cognition Adaptive?" BBS 14(3) 1991.] Other examples are the Model Human Processor (Card et al. 1983), the CAPS theory (Just & Carpenter 1987), and a collection of efforts in perceptual decisions (Ratcliff 1985). The task of the book. The book endeavors to make the case for serious work on unified theories of cognition. It adopts a specific strategy, presenting an exemplar theory. Any other way seems to involve just talk and exhortation, guaranteed to have little effect. There are lots of risks to such a course - it will seem presumptuous and people will insist on subjecting the exemplar to a Popperian criticism to falsify it. But, on the positive side, one can hope the reader will follow a frequent plea of Warren McCulloch's, issued in similar circumstances: "Don't bite my finger, look where I'm pointing" (McCulloch 1965).

Chapter 2: Foundations of cognitive science

Chapter 2 works through some basic cognitive-science concepts to provide a foundation for the remainder of the book. This is cast as a review, although some novel points arise. Knowledge systems. A particularly important way of describing the human is as a knowledge system. The human is viewed as having a body of knowledge and a set of goals, so that it takes actions in the environment that its knowledge indicates will attain its goals. The term knowledge is used, as it is throughout computer science and AI, as belief (it can be wrong and often is), not as the philosopher's justified true belief. Knowledge systems are one level in the hierarchy of systems that make up an intelligent agent. For current computers, this is physical devices, continuous circuits, logic circuits, register-transfer systems, symbol (or programming) systems, and knowledge-level systems, all of which are simply alternative descriptions of the same physical system. Knowledge-level systems do not give a set of mechanisms that determine behavior, the hallmark of all other descriptive levels. Rather, behavior is determined by a principle of rationality: that knowledge is used in the service of the agent's goals. This is analogous to other teleological principles, such as Fermat's principle of least time for optics. Lower-level descriptions (the symbol level) describe how a knowledge-level system is realized in mechanism. The knowledge level is useful to capture the notion of a goal-oriented system and abstract away from all details of processing and representation. However, humans can only be described approximately as knowledge-level systems, and the departure can be striking. Representation. Knowledge must be represented in order to be used. The concept of representation is captured by the representation law. In an external world, entity (X) is transformed (T) into entity (Y). A representation of X-T-Y occurs in a medium within some system when an encoding from X to an entity in the medium (x) and an encoding of T into an internal transformation in the medium (t) produces an internal entity (y), which can be decoded to the external world to correspond to Y. Actual representations are composed of myriad instances of the representation law, to cover all of the specific representational connections that actually occur. Obtaining a representation for a given external situation seems to require discovering an internal medium with the appropriate natural transformations - this is the essence of analog representation. But as external situations become more diverse, complex, and abstract, discovering adequate analogs becomes increasingly difficult, and at last impossible. A radically different solution exists (the great move), however, where the internal medium becomes freely manipulable with combinatorially many states and all the representational work is done by being able to compose internal transformations to satisfy representational laws. Sufficiently composable schemes of transformations allow the formation of highly general representational systems that simultaneously satisfy many of the requisite representational laws. Computation. Computational systems are exactly those that provide composability of transformations. The prime question about computational systems is what functions they can produce. The great move to composable transformations for representations occurs precisely because most machines do not admit much variety in their selectable transformations. This leads to the familiar, but incredible, results from computer science about universal computational systems that can attain the ultimate in flexibility. They can produce, by being instructed, all the functions that can be produced by any class of machines, however diverse.
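The representation law stated above can be made checkable in a toy form. In this sketch the medium (digit strings) and the particular transformations are invented stand-ins, chosen only so that encode, internal transform, and decode can be seen to commute with the external transformation:

```python
# A minimal sketch of the representation law: an external
# transformation T is mirrored by an internal transformation t
# such that encode -> t -> decode reproduces T's result.
# The medium and transformations here are hypothetical stand-ins.

def T(x):        # external transformation of entity X into entity Y
    return 2 * x

def encode(x):   # encode external entity X into the medium (a string)
    return str(x)

def t(m):        # internal transformation composed within the medium
    return str(2 * int(m))

def decode(m):   # decode internal entity y back to the external world
    return int(m)

# The representation law: decode(t(encode(X))) must equal T(X).
for X in range(10):
    assert decode(t(encode(X))) == T(X)
```

The "great move" is visible even here: nothing about digit strings is naturally analogous to doubling; the work is done by composing an internal transformation that happens to satisfy the law.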
Thus, systems (universal computers) exist that provide the universal composability of transformations needed to produce systems that can universally represent whatever needs to be represented. This also shows that computation does not in itself represent. It provides the wherewithal for a system to represent if the appropriate representational laws are satisfied. Symbols. The book takes the term symbol to refer to the parts of expressions that represent, for example, the "cat" in "The cat is on the mat." Symbols provide distal access to knowledge-bearing structures that are located physically elsewhere within the system. The requirement for distal access is a constraint on computing systems that arises from action always being physically local, coupled with only a finite amount of knowledge being encodable within a finite volume of space, coupled with the human mind's containing vast amounts of knowledge. Hence encoded knowledge must be spread out in space, whence it must be continually transported from where it is stored to where processing requires it (distribution does not gainsay this constraint). Symbols are the means that accomplish the required distal access. Symbol systems are universal computational systems with the role of symbols made manifest. Symbol systems consist of (1) a memory, containing independently modifiable structures that contain symbols; (2) symbols (patterns in the structures), providing the distal access to other structures; (3) operations, taking symbol structures
as input and producing symbol structures as output; and (4) interpretation processes, taking symbol structures as input and executing operations (the structures thereby representing these operations). There must be sufficient memory and symbols, complete composability of structures by the operators, and complete interpretability (any sequence of operations can be represented). Within this cognitive-science framework, the great philosophical puzzle of intentionality (Brentano 1874) - how symbols can be about external things - has a straightforward solution. There are knowledge-level systems. The knowledge in them is about the external world. Symbol systems implement knowledge-level systems by using symbols, symbol structures, and so on. Therefore, these internal symbol structures are about (i.e., represent) the external world. They will only approximate such representation if the symbol system cannot realize the knowledge-level system adequately. Moreover, as the amount of knowledge and the diversity of goals increase, it is not possible, even theoretically, to realize faithfully the knowledge-level description of a system. How a given system comes to have its knowledge is a matter of the system's history, including the knowledge available to the processes that created the system. This appears to be a satisfactory resolution to the vexed question of intentionality. Architectures. Unified theories of cognition will be formulated as architectures. The architecture of the mind is a major source of commonality of behavior, both within an individual and between individuals. The architecture is the fixed structure that realizes a symbol system. In the computer hierarchy this is the description at the register-transfer level; in biological systems it is the level of neural structure that is organized to provide symbols. The important question about the architecture concerns what functions it provides.
The architecture provides the boundary that separates structure from content, but all external tasks require both structure and content for their performance. So the division of function is what in the architecture enables the content to determine task performance. An obvious part of the answer is that the architecture provides the mechanisms for realizing a symbol system, but two additional types exist. One is the mechanisms to exploit implementation technology for power, memory, and reliability - such as caches and parallelism. The other is the mechanisms to obtain autonomy of operation - interrupts, dynamic resource allocation, and protection. What is understood about the functions of the architecture comes entirely from engineered computers. Additional functions are surely involved in natural architectures for autonomous, intelligent creatures. Architectures exhibit an immense variety. Universal computation might seem to require highly specialized systems for its realization. On the contrary, any specific symbol system can be realized in an indefinite variety of architectures, and any specific architecture can be implemented in an indefinite variety of technologies. Any technology that can implement one architecture can implement an indefinite variety of them. All these systems must perform the key functions of symbol systems, but these can be realized in an indefinite variety of ways. This potential for variety means that strong inferences are not
possible from the structure of engineered digital computers to how architectures are realized in the brain. Intelligence. The concept of intelligence is crucial for cognitive science. Unfortunately, its long, variegated history produced a multiplicity of notions that bear a family resemblance but serve different masters - often designed to block any unified concept of intelligence. Still, cognitive science (and any unified theory of cognition) must conceptualize the potential of a given task to cause difficulty to a person who attempts it and the potential of a given person for solving difficult tasks. A system is intelligent to the degree that it approximates a knowledge-level system. This is what emerges from the concepts of the second chapter. The distinction between knowledge and intelligence is key. If a system does not have some knowledge, failure to use it cannot be a failure of intelligence, which can work only with the knowledge the system has. If a system uses all the knowledge it has and nothing improves its performance, then there is no role left for intelligence. Thus intelligence is the ability to use the knowledge the system has in the service of the system's goals. This notion answers many requirements of a concept of intelligence, but it does not lead directly to a quantitative measure of intelligence, because knowledge per se is not quantifiable. Search and problem spaces. What processing is required to obtain intelligent behavior? How does a system bring its knowledge to bear to attain its goals? For difficult tasks the general answer is that the system will search. Search is not just another cognitive process, occurring alongside other processes (the view prior to the cognitive revolution), but the fundamental process for attaining tasks that require intelligence. There are two fundamental reasons for this. First, a difficult task is one in which the system does not always know how to behave.
But to make progress means to generate some behavior, and when an error arises and is detected, to attempt to correct it - a de facto search step. When errors occur within errors, combinatorial search emerges. Second, search provides a method of last resort. If no other methods are available to a system, it can always posit a space within which goal attainment lies, and then search that space. No matter how little it knows, it can always posit a bigger space, so this method of "generate and test" can always be formulated. An intelligent system is always operating in a problem space, the space of the system's own creation that attempts to restrict the arena of action to what is relevant. The agent is at some current state in this space with a set of available operators. The system searches within this space to reach a desired state that represents task attainment. This search is combinatorial in character, just as all the experience in AI attests. Solving problems in problem spaces is not just an arbitrary search. Knowledge can be brought to bear to guide the search. Given enough knowledge, no search at all will occur: The appropriate operator will be selected at each state and the desired state will be reached forthwith. For general intelligent systems (and humans), life is a sequence of highly diverse tasks and the system has available a correspondingly large body of knowledge. Thus, besides the problem search in the problem space there is also at every current state a
knowledge search to discover what knowledge can be brought to bear to guide the search fruitfully. Knowledge search is a major activity in general intelligent systems. Summary. The concepts in this chapter constitute the cumulated yield of thirty years of attempting to understand the nature of computation, representation, and symbols. As cast here, all the concepts are not equally familiar. The knowledge level is still not common in theoretical treatments, although it permeates the practice of cognitive and computer sciences. The separation of representation from computation is not sufficiently appreciated. The concept of intelligence may even seem strange. Despite these traces of novelty, this chapter should be like a refresher course to the practicing cognitive scientist.

Chapter 3: Human cognitive architecture

The concepts of Chapter 2 apply to humans and computers alike, but a unified theory of human cognition will be expressed in a theory of human cognitive architecture. This chapter attempts to discover some generally applicable constraints on the human architecture. Any proposed specific theory of the architecture would take such constraints as given and not as part of its specific architectural proposal. The chapter is necessarily speculative, since general arguments are notoriously fragile. The human is a symbol system. This chapter argues that human minds are symbol systems. The strongest argument is from the flexibility and diversity of human response functions (i.e., responses as a function of the environment) - the immense variety of ways that humans generate new response functions, from writing books to reading them, to creating recipes for cooking food, to going to school, to rapping, to dancing. Other organisms are also adaptive, and in fascinating ways, but the diversity and range of human adaptations exceeds these by all bounds; indeed, it is beyond enumeration.
Focusing on diversity of response functions links up directly with the defining property of symbol systems as systems that admit the extreme of flexible response functions. Any system that is sufficiently flexible in its response functions must be a symbol system (i.e., capable of universal computation). Actually, the argument holds only asymptotically: No one has the foggiest notion what a class of systems might be like that showed human-scale flexibility but weren't universal. In addition, the simplicity of the functional requirements for symbol systems makes it most unlikely that such systems exist. Thus, the human mind is taken to be a symbol system, establishing a high-level constraint on the human cognitive architecture: It must support a symbol system. System levels and the time scale of human action. Intelligent systems are built up in a hierarchy of system levels. Each system level consists of a more abstract way of describing the same physical system and its behavior, where the laws of behavior are a function only of the states as described at that level. In computers, engineers work hard to make the levels perfect, so that nothing from a lower level ever disturbs the given level. Nature is not so compulsive and levels are stronger or weaker depending
on how complete is the sealing off from effects from lower levels. Higher system levels are spatially larger and run more slowly than do lower ones, because the higher levels are composed of multiple systems at the next lower level and their operation comes from the operation of multiple interactive systems at the next lower level. Increase in size and slow-down in speed are geometric, although the factor between each level need not be constant. The concern in this chapter is with time, not space. In particular, the temporal factor for a minimal system level is about a factor of 10, that is, an order of magnitude. It could be somewhat less, but for convenience we will take ×10 as the minimal factor. Ranging up the time scale of action for human systems, a new system level appears just about every factor of 10, that is, just about as soon as possible. Starting at organelles, they operate at time scales of about 100 μs. Neurons are definitely a distinct system level from organelles, and they operate at about 1 msec, ×10 slower. Neural circuits operate at about 10 msec, yet another ×10 slower. These three systems can be taken to constitute the biological band. Continuing upward reaches what can be called the cognitive band - the fastest deliberate acts (whether external or internal) take on the order of 100 msec, genuine cognitive operations take 1 sec, and above that, at the order of 10 sec, is a region with no standard name, but consisting of the small sequences of action that humans compose to accomplish smallish tasks. Above the cognitive band lies the rational band, where humans carry out long sequences of actions directed toward their goals. In time scale this ranges from minutes to hours. No fixed characteristic system level occurs here, because the organization of human activity now depends on the task being attempted and not on the inner mechanisms.
Above the rational band is the social band, dominated by the distributed activities of multiple individuals. As the scale proceeds upward, the boundaries become less distinct, due to the flexibility of human cognition and the dominance of task organization. The time scale of human action reflects both a theoretical view about minimal systems levels and an empirical fact that human activities, when ranged along such a scale, provide distinguishable system levels about every minimal factor.
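The time-scale hierarchy described above can be laid out explicitly. In this sketch the characteristic times are written as powers of ten (log10 seconds), following the text's ~×10 minimal factor between levels; the level names are paraphrases of those in the text:

```python
# The time scale of human action: a new system level about every
# factor of 10. Times are log10 of the characteristic scale in
# seconds, as named in the text.
LEVELS = [
    ("organelle",              -4),  # ~100 microseconds, biological band
    ("neuron",                 -3),  # ~1 msec
    ("neural circuit",         -2),  # ~10 msec
    ("fastest deliberate act", -1),  # ~100 msec, cognitive band
    ("cognitive operation",     0),  # ~1 sec
    ("composed task unit",      1),  # ~10 sec
]

# Each level sits one minimal (~x10) factor above the one below.
for (_, lower), (_, higher) in zip(LEVELS, LEVELS[1:]):
    assert higher - lower == 1
```

Above these come the rational band (minutes to hours) and the social band, where no fixed characteristic level appears, so the table stops at the cognitive band.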

The real-time constraint on cognition. That neurons are ~1 msec devices and elementary neural circuits are ~10 msec devices implies that human cognition is built up from ~10 msec components. But elementary cognitive behavior patently occurs by 1 sec. Fast arcs from stimulus to response occur five times faster (~200 msec), but their simplicity and degree of preparation make them suspect as cognition. Yet creative discourse happens in about one second. These two limits create the real-time constraint on cognition: Only about 100 operation times are available to attain cognitive behavior out of neural-circuit technology. This constraint is extremely binding. It provides almost no time at all for the cognitive system to operate. The constraint may also be expressed as follows: Elementary but genuine cognition must be produced in just two system levels. Neural circuits (at ~10 msec) can be assembled into some sorts of macrocircuits (one factor of 10) and these macrocircuits must then be assembled to produce cognitive behavior (the second factor of 10). This constraint is familiar (Feldman & Ballard 1982; Fodor 1983) and has been deployed mostly to deny the relevance of the algorithms developed in AI for vision and natural language processing because they take too long. But the constraint is much more binding than that and can be used to make a number of inferences about the human cognitive architecture.

The cognitive band. The human cognitive architecture must now be shaped to satisfy the real-time constraint. A particular style of argument is used to infer the system levels of the cognitive band. Functions are allocated to the lowest (fastest) possible system level by arguments that they could not be accomplished any faster, given other allocations (and starting at the bottom of ~10 msec). Whether they could be slower is undetermined. But as they stack up, the upper limit of cognitive behavior at ~1 sec is reached, clamping the system from the top, thereby determining absolutely the location of cognitive functions at specific system levels. The upshot is that the distal accessing associated with symbols must occur at the level of neural circuits, about 10 msec. Above this, hence at ~100 msec, comes the level of elementary deliberations, the fastest level at which (coded) knowledge can be assembled and be brought to bear on a choice between operations. This level marks the distinction in cognition between automatic and controlled processing. What happens within an act of deliberation is automatic, and the level itself permits control over action. A level up from elementary deliberations brings simple operations, composed of a sequence of deliberations with their associated microactions, hence taking of the order of 1 sec. This brings the system up against the real-time constraint. It must be able to generate genuine, if elementary, cognitive activity in the external world. Simple operations provide this: enough composition to permit a sequence of realizations of a situation and mental reactions to that realization, to produce a response adaptive to the situation. Thus, the real-time constraint is met. With time, cognition can be indefinitely composed, though a processing organization is required to control it. Above the level of simple operations is the first level of composed operations, at ~10 sec, characterized by its operations being decomposed into sequences of simple operations. An important bridge has been crossed with this level, namely, simple operations are a fixed repertoire of actions and now the operations themselves can be composed.

The intendedly rational band. Composition is recursive and more complex operations can exist whose processing requires many sublevels of suboperations. What prevents the cognitive band from simply climbing into the sky? Cognition begins to succeed; as the seconds grow into minutes and hours, enough time exists for cognition to extract whatever knowledge exists and bring it to bear. The system can be described increasingly in knowledge-level terms and the internal cognitive mechanism need not be specified. This becomes the band of rational - goal and knowledge driven - behavior. It is better labeled intendedly rational behavior, since the shift toward the knowledge level takes hold only gradually and can never be complete.
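The arithmetic behind the real-time constraint described above is simple enough to check directly (integer milliseconds are used here only to keep the check exact):

```python
# The real-time constraint: ~10 msec neural circuits must yield
# elementary cognitive behavior by ~1 sec, leaving only about
# 100 primitive operation times - two system levels (x10 each).
neural_circuit_ms = 10
cognitive_behavior_ms = 1000

operation_times = cognitive_behavior_ms // neural_circuit_ms
assert operation_times == 100      # only ~100 operation times available
assert operation_times == 10 * 10  # circuits -> macrocircuits -> behavior
```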


Summary. This chapter has produced some general constraints on the nature of the human cognitive architecture. These must hold for all proposed architectures, becoming something an architecture satisfies rather than an architectural hypothesis per se. The gain to theorizing is substantial. The different bands - biological, cognitive, and (intendedly) rational - correspond to different realms of law. The biological band is solidly the realm of natural law. The cognitive band, on the other hand, is the realm of representational law and computational mechanisms. The computational mechanisms are described by natural law, just as are biological mechanisms. But simultaneously, the computations are arranged to satisfy representational laws, so that the realm becomes about the external world. The rational band is the realm of reason. Causal mechanisms have disappeared and what determines behavior is goals and knowledge (within the physical constraints of the environment).

Chapter 4: Symbolic processing for intelligence

The chapter deals with the symbolic processing required for intelligence and introduces the SOAR architecture. The shift from general considerations to full details of an architecture and its performance reflects the cardinal principle that the only way a cognitive theory predicts intelligence is if the system designed according to that theory exhibits intelligent behavior. Intelligence is a functional capability. The central architecture for performance. In SOAR all tasks, both difficult and routine, are cast as problem spaces. All long-term memory is realized as a production system in which the productions form a recognition memory, the conditions providing the access path and the actions providing the memory contents. Unlike standard production systems, there is no conflict resolution; all satisfied productions put their contents into working memory.
Thus SOAR is entirely problem-space structured, and the recognition of which productions fire constitutes the knowledge search. Control over behavior in the problem space is exercised by the decision cycle. First, information flows freely from the long-term memory into working memory. New elements may trigger other productions to fire, adding more elements, until all the knowledge immediately available in long-term memory is retrieved. Included in this knowledge are preferences about which decisions are acceptable or better than others. Second, a decision procedure sorts through the preferences to determine the next step to take in the problem space: what operator to select, whether the task is accomplished, whether the problem space is to be abandoned, and so on. The step is taken, which initiates the next decision cycle. The decision cycle suffices if the knowledge retrieved is sufficient to indicate what step to take next. But if not, an impasse occurs - the decision procedure cannot determine how to proceed given the preferences available to it. Impasses occur frequently, whenever knowledge cannot be found just by immediate pattern recognition. The architecture then sets up a subgoal to acquire the missing knowledge. Thus the architecture creates its own goals


whenever it does not have what is needed to proceed. Within the subgoal, deciding what problem space to use and what operators to select occurs simply by continuing with decision cycles in the new context. Impasses can arise while working on a subgoal, giving rise to a hierarchy of goals and subgoals, in the manner familiar in complex intelligent systems.

Chunking. The organization of productions, problem spaces, decisions, and impasses produces performance, but it does not acquire new permanent knowledge. Chunking provides this function. This is a continuous, automatic, experience-based learning mechanism. It operates when impasses are resolved, preserving the knowledge that subgoals generated by creating productions that embody this knowledge. On later occasions this knowledge can be retrieved immediately, rather than again reaching an impasse and requiring problem solving. Chunking is a process that converts goal-based problem solving into long-term memory. Chunks are active processes, not declarative data structures to be interpreted. Chunking does not just reproduce past problem solving; it transfers to other analogous situations, and the transfer can be substantial. Chunking applies to all impasses, so learning can be of any kind whatever: what operators to select, how to implement an operator, how to create an operator, what test to use, what problem space to use, and so on. Chunking learns only what SOAR experiences (since it depends on the occurrence of impasses). Hence, what is learned depends not just on chunking but on SOAR's problem solving.

The total cognitive system. SOAR's cognitive system consists of the performance apparatus plus chunking. The total cognitive system adds to this mechanisms for perception and motor behavior. The working memory operates as a common bus and temporary store for perception, central cognition, and motor behavior.
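The decision cycle, impasse, and chunking machinery described above can be sketched as a toy loop. This is a hypothetical simplification in Python, not SOAR's actual implementation; all names and structures here are illustrative:

```python
# Toy sketch of SOAR's decision cycle: (1) elaboration fires all
# matching productions to quiescence with no conflict resolution,
# (2) a decision procedure reads preferences, (3) lack of a preference
# is an impasse, whose resolution is cached ("chunked") as a new
# production. Illustrative only.

def elaborate(working_memory, productions):
    """Fire every satisfied production until no new elements appear."""
    changed = True
    while changed:
        changed = False
        for condition, action in productions:
            if condition <= working_memory and action not in working_memory:
                working_memory.add(action)
                changed = True
    return working_memory

def decide(working_memory):
    """Decision procedure: pick an operator with an acceptable preference."""
    prefs = [w for w in working_memory
             if isinstance(w, tuple) and w[0] == "acceptable"]
    if not prefs:
        return None          # impasse: retrieved knowledge is insufficient
    return prefs[0][1]

productions = [
    (frozenset({"goal:ship"}), ("acceptable", "run-tests")),
]
wm = elaborate({"goal:ship"}, productions)
op = decide(wm)              # knowledge suffices: "run-tests" is selected

# An impasse arises when no preference is retrieved; SOAR would set up
# a subgoal, solve it, and chunk the result into a new production:
wm2 = elaborate({"goal:unknown-task"}, productions)
if decide(wm2) is None:
    # pretend subgoal problem solving discovered the answer "ask-user";
    # chunking caches it as a new recognition-retrieval unit
    productions.append(
        (frozenset({"goal:unknown-task"}), ("acceptable", "ask-user")))

wm3 = elaborate({"goal:unknown-task"}, productions)
assert decide(wm3) == "ask-user"   # no impasse on later occasions
```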
Perceptual systems generate elements in working memory, which are matched by the productions in long-term memory. Central cognition generates elements in working memory, which are interpreted as commands by the motor system. Perceptual processing occurs in two stages: the (sensory) mechanisms that deliver elements to working memory and the analysis and elaboration of these perceptual elements by encoding productions. Likewise on the motor side, decoding productions in long-term memory elaborate motor commands and produce whatever form is needed by the motor systems, followed by the motor system proper that makes movements. The sensory and motor modules are cognitively impenetrable, but the encoding and decoding processes interact with other knowledge in working memory.

SOAR is an intelligent system. Intelligence is only as intelligence does. The chapter describes the range of different tasks, types of learning, and modes of external interaction that SOAR has exhibited. Two large SOAR systems are described in some detail. One, R1-SOAR (Rosenbloom et al. 1985), does the task of R1, a classical expert system (McDermott 1982) that configures VAX systems. It shows that a single system can mix general (knowledge-lean) problem solving and specialized (knowledge-intensive) operation

as a function of what knowledge the system has available. R1-SOAR also shows that experiential learning can acquire the knowledge to move the system from knowledge-lean to knowledge-intensive operation. The second system, Designer-SOAR (Steier 1989), designs algorithms, a difficult intellectual task that contrasts with the expertise-based task of R1. Designer-SOAR starts with a specification of an algorithm and attempts to discover an algorithm in terms of general actions such as generate, test, store, and retrieve, using symbolic execution and execution on test cases. Designer-SOAR learns within the span of doing a single task (within-task transfer), and also between tasks of the same basic domain (across-task transfer), but it shows little transfer between tasks of different domains.

Mapping SOAR onto human cognition. SOAR is an architecture capable of intelligent action. Next, one must show that it is an architecture of human cognition. Given the results about the cognitive band, deriving from the real-time constraint, there is only one way to interpret SOAR as the human cognitive architecture. Moreover, since these results have established absolute, though approximate, time scales for cognitive operations, this interpretation leads to an order-of-magnitude absolute temporal identification of the operations in SOAR as a theory of cognition. SOAR productions correspond to the basic symbol access and retrieval of human long-term memory; hence they take ~10 msec. The SOAR decision cycle corresponds to the level of elementary deliberation and hence takes ~100 msec. The problem-space organization corresponds to the higher organization of human cognition in terms of operations. Operators that do not reach an impasse correspond to simple operations; hence they take ~1 sec. SOAR problem spaces within which only simple (nonimpassing) operators occur correspond to the first level of composed operations.
This is the first level at which goal attainments occur and the first at which learning (impasse resolution) occurs. Problem spaces of any degree of complexity of their operators are possible, and this provides the hierarchy of operations that stretches up into the intendedly rational level.

Summary. This chapter has a strong AI flavor, because the emphasis is on how a system can function intelligently, which implies constructing operational systems. A prime prediction of a theory of cognition is that humans are intelligent, and the only way to make that prediction is to demonstrate it operationally. The prediction is limited, however, by the degree to which the SOAR system itself is capable of intelligence. SOAR is state-of-the-art AI, but it cannot deliver more than that.

Chapter 5: Immediate behavior

The book now turns to specific regions of human behavior to explore what a unified theory must provide. The first of these is behavior that occurs in a second or two in response to some evoking situation: immediate-response behavior. This includes most of the familiar chronometric experimentation that has played such a large role in creating modern cognitive psychology.

The scientific role of immediate-response data. When you're down close to the architecture, you can see it;

when you're far away you can't. The appropriate scale is temporal: behavior that takes 200 msec to about 3 sec sits close to the architecture. Thus, immediate-response performance is not just another area of behavior to illustrate a unified theory; it is the area that can give direct experimental evidence of the mechanisms of the architecture. Furthermore, cognitive psychology has learned how to generate large numbers of regularities at this level, many of which are quantitative, parametric, and robust. Literally thousands of regularities have been discovered (the book estimates ~3000). Tim Salthouse (1986) provides an illustration by his listing of 29 regularities for just the tiny area of transcription typing (this and several other such listings are given and discussed throughout the remainder of the chapter and book). All of these regularities are constraints against the nature of the architecture. They provide the diverse data against which to identify the architecture. Thus, it is appropriate to start the consideration of SOAR as a unified cognitive theory by looking at immediate behavior.

Methodological preliminaries. SOAR is a theory just like any other. It must explain and predict the regularities and relate them to each other; however, it need not necessarily produce entirely novel predictions: An important role is to incorporate what we now understand about the mechanisms of cognition, as captured in the microtheories of specific experimental paradigms. A scientist should be able to think in terms of the architecture, and then explanations should flow naturally. SOAR should not be treated as a programming language. It is surely programmable - its behavior is determined by the content of its memory, and stocking its memory with knowledge is required to get SOAR to behave. But SOAR is this way because humans are this way; hence programmability is central to the theory.
That SOAR is not only programmable but universal in its computational capabilities does not mean it can explain anything. Important additional constraints block this popular but oversimple characterization. First, SOAR must exhibit the correct time patterns of behavior, and do so against a fixed set of temporal primitives (the absolute times associated with the levels of the cognitive band). Second, it must exhibit the correct error patterns. Third, the knowledge in its memory - its program and data - must be learned. It cannot simply be placed there arbitrarily by the theorist, although as a matter of necessity it must be mostly posited by the theorist, because the learning history is too obscure. Finally, SOAR as a theory is underspecified. The architecture continues to evolve, and aspects of the current architecture (SOAR 4 in the book, now SOAR 5) are known to be wrong. In this respect, a unified theory is more like a Lakatosian research programme than a Popperian theory.

Functional analysis of immediate responses. The tasks of immediate responding comprise a family with many common characteristics, especially within the experimental paradigms used by cognitive psychologists. These common properties are extremely constraining and make it possible to specialize SOAR to a theory that applies just to this class of tasks. Immediate responses occur in the base-level problem space, where the elements generated by perception arise and where commands are given to the motor system. This base-level space is also the one that


does not arise from impassing. Other spaces arise from it when knowledge is lacking to handle the interaction with the world. Given that time is required by both the perceptual and the motor systems, immediate-response behavior can impasse only infrequently if it is to meet the temporal demands of the task. The main structure of the system that performs the immediate-response task is the pipeline of processing that occurs in the top space: Perceive (P), Encode (E), Attend (A), Comprehend (C), Task (T), Intend (I), Decode (D), Move (M). The order is highly constrained, but not invariant. Setting the task (T) usually occurs prior to the perception of the stimulus. Some processes need not occur, for example, attention (A), which may already be focused on the appropriate object. This pipeline is a functional organization of SOAR operators given certain task constraints. The pipeline is not architectural, although some of its components, perception (P) and move (M), reflect architectural divisions. Each component can be identified with the execution of collections of productions or simple operators, and by assigning rough a priori durations to the times for productions (~10 msec) and minimum simple operators (~50 msec) it is possible to compute a priori durations for immediate-response tasks.

Immediate response tasks. The above functional scheme is applied to the classical simple reaction task (SRT) of pressing a button in response to a light and to the two-choice reaction task (2CRT) of pressing the button corresponding to whichever light goes on. In both cases the model follows directly from the pipeline, yielding a priori estimates of ~220 msec and ~380 msec respectively, which are close to typical experimental values.
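The style of a priori estimate can be sketched as a simple sum over pipeline components. The component durations below are illustrative assumptions, not the book's actual decomposition; they are chosen only to show how production (~10 msec) and operator (~50 msec) constants combine into totals near the quoted ~220 msec and ~380 msec:

```python
# A priori reaction-time estimate in the P-E-A-C-T-I-D-M style:
# sum rough durations over the pipeline components. All numbers are
# illustrative assumptions, not the chapter's actual decomposition.
srt_components = {
    "Perceive": 50,
    "Encode": 20,
    "Attend": 0,        # attention already focused on the stimulus
    "Comprehend": 50,
    "Task": 0,          # task set before the stimulus appears
    "Intend": 50,
    "Decode": 20,
    "Move": 30,
}
srt_ms = sum(srt_components.values())      # simple reaction task total

# For the two-choice task, suppose comprehension and intention need
# extra operators to discriminate between the alternatives (~160 msec):
crt_ms = srt_ms + 160

assert (srt_ms, crt_ms) == (220, 380)      # near typical SRT / 2CRT values
```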
The scheme is then applied to a more complex case, stimulus-response compatibility, in which the response depends in some nonstraightforward way on the stimulus (such as pressing the lower button to call the Up elevator). The basic empirical results are that the more involved the mapping between stimulus and response, the slower the response and the greater the likelihood of error. The predictions of SOAR about duration flow from executing a conditional sequence of simple operators to perform the mapping. The more complex the mapping, the longer the sequence. A particular example is worked through for a task of using compatible or incompatible abbreviations for computer commands, where the a priori prediction of the model is ~2140 msec and the observed experimental average is 2400 msec. There is more to this story than just close predictions and the fact that they do not require fitting any parameters to the data at hand. The work is actually that of Bonnie John (John & Newell 1987) and the theory she used is a version of the GOMS theory (Goals, Operators, Methods, and Selection rules), developed in HCI (human-computer interaction) to predict the time it takes expert users to do tasks that have become routine cognitive skills. This theory does an excellent job of predicting stimulus-response compatibility, and the book exhibits a graph of predicted versus observed performance for some 20 different experiments, including many of the classical experiments on stimulus-response compatibility, with predictions accurate to within 20% (all without fitting parameters). The GOMS theory is essentially identical to the specialization of SOAR to the P-E-A-C-T-I-D-M pipeline.


Thus, SOAR incorporates the GOMS theory and with it inherits all the predictions it makes. This is not usurpation but the proper role for a unified theory of cognition, namely, to incorporate what is known throughout cognition, both regularities and mechanisms.

Item recognition. The chapter examines another classical example of immediate behavior, the Sternberg item-recognition paradigm (Sternberg 1975), where a subject is presented a set of items and then must say (as rapidly as possible) whether a presented probe is a member of the presented set. The main phenomenon is that response time is linear in the size of the presented set (suggesting a search process), but the time per item is very fast (~40 msec), and the data confute the obvious self-terminating search, in which scanning stops after the target is found, and instead support an exhaustive search in which all items are always examined. An almost unimaginable number of variations on this paradigm have been run: A list of 32 regularities is presented, which provides another example of how rich the data from experimental cognition are for theory building. Some theories deal with half a dozen of these regularities; no theory comes close to explaining all 32. SOAR is applied to the item-recognition paradigm differently from the way it is applied to the reaction-time and stimulus-response compatibility paradigms to emphasize, in part, that a unified theory of cognition is to be applied in diverse ways depending on the situation. A qualitative theory is derived from SOAR in which essential regularities of the paradigm, such as exhaustiveness, are seen to come directly from characteristics of the architecture. Then it is shown why a fully detailed theory of the paradigm is not fruitfully constructed now: because it interacts with known places in the architecture that are uncertain and in flux.

Typing.
The final immediate-response paradigm examined is that of transcription typing, using the list of 29 regularities developed by Salthouse (1986) presented earlier in the chapter. Typing introduces several additions: continuous behavior, even though the subject must respond to new stimulus information immediately; concurrent processing, because humans overlap processing in typing; and discrete motor behavior. As in the immediate-reaction tasks, a complete model is provided. This is also the work of John (1988) and extends the GOMS model used for stimulus-response compatibility to concurrent processors for perception, cognition, and motor action. The model is highly successful (again without estimating parameters from the data), accounting for three-quarters of the regularities and not clearly contradicting any. The model is silent about error regularities, which all depend on the analog and control aspects of the motor system working within the anatomy of the hands and arms, aspects not yet covered by the model. As earlier, the extension of GOMS to concurrent processing fits into the SOAR architecture, providing (or extending) the example of incorporating this theory into SOAR. SOAR makes the same predictions as the GOMS models.

Summary. This chapter provides basic confirmation that the detailed SOAR architecture is in reasonable accord

with a range of experimental facts about immediate behavior. This is important, since it was already clear - by construction, so to speak - that gross aspects of SOAR were consonant with human data. Problem spaces, production systems, and chunking are all major theoretical constructs in cognitive psychology. Any reasonable way of composing an architecture from these elements will successfully cover large domains of behavior, especially higher-level behavior such as problem solving. But, in general, such architectural ideas have not been brought to bear on modelling the details of immediate behavior. The same architecture (and theory) that explains immediate behavior explains higher-level behavior. A theme illustrated throughout the chapter was that SOAR moves toward making predictions that do not require estimating parameters from the data being predicted - so-called no-parameter predictions. No-parameter predictions almost never occur in current cognitive theorizing, mostly because the theories do not cover the entire arc from stimulus to response, so there are always unknown perceptual and motor processes coupled with the processes of interest. One advantage of a unified theory is to provide a complete enough set of mechanisms so that no-parameter predictions can become the norm, not the exception.

Chapter 6: Memory, learning and skill

This chapter covers the SOAR theory of human memory, learning, and skill acquisition. Three fundamental assertions capture much of what the SOAR architecture implies for learning. First, according to the functional unity of long-term memory, all long-term memory consists of recognition-retrieval units at a fine grain of modularity (SOAR's productions being simply one instance).
Second, according to the chunking-learning hypothesis, all long-term acquisition occurs by chunking, which is the automatic and continuous formation of recognition-retrieval units on the basis of immediate goal-oriented experience. Third, according to the functionality of short-term memory, short-term memory arises because of functional requirements of the architecture, not because of technological limits on the architecture that limit persistence. An important aspect of the SOAR architecture is that it embodies the notion of chunking and hence provides a variety of evidence about what chunking can do. The chapter provides a thorough review of this, especially ways that chunking permits SOAR to learn from the external environment and ways chunking operates as a knowledge-transfer process from one part of SOAR to another.

The SOAR qualitative theory of learning. A definite global qualitative theory of learning and memory can be read out of the SOAR architecture, much of it familiar in terms of what we know about human learning and memory. All learning arises from goal-oriented activity. Learning occurs at a constant average rate of about a chunk every 2 secs. Transfer is essentially by identical elements (although more general than Thorndike's [1903]) and will usually be highly specific. Rehearsal can help to learn, but only if some processing is done (as in depth-of-processing conceptions). Functional fixity and Einstellung, well-attested memory biases in humans, will occur. The encoding specificity principle applies, and the classical results about chunking, such as from chess, will hold. Some of these aspects are controversial, either theoretically or empirically, so the SOAR theory takes a stand at a general level. A unified theory of cognition must yield answers to questions of interest, for instance the distinction between episodic and semantic memory (Tulving 1983). Ultimately, what must be explained are the regularities and phenomena, but it is useful to see what global answers SOAR gives. First, in SOAR, episodic and semantic information are held in the same memory, because there is only one long-term memory in SOAR. This simply moves the question to how SOAR handles episodic and semantic knowledge. Basically, chunks are episodic recordings, collecting this experience. The more specific that collection, the more episodic it is, to the point where the chunks that record the selection of operators provide in effect a reconstructable record of what operators were applied at the original time. This episodic record has a strong flavor of remembering what one did in the prior situation, rather than the objective sequence of (say) environmental states. There is some evidence of that as well, however. Chunking being episodic, semantic memory arises by abstraction. Chunks are not conditional on all the specific features of the learning situation, only on those that entered into obtaining the goal. Thus, chunking can abstract from the conditions that tie the chunk to the episode and depend only on conditions on content; chunks hence provide semantic information. An example given in the book is how SOAR learns to solve little block-stacking problems in ways that become completely independent of the problems in which the learning took place. In sum, we see that SOAR contains both semantic and episodic knowledge but that it all happens within a single memory.
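The abstraction step just described (chunks conditioned only on the features that entered into obtaining the goal) can be sketched as a minimal set computation; the feature names are invented for illustration, and this is not SOAR's implementation:

```python
# Sketch of abstraction in chunking: the learned chunk keeps only the
# working-memory elements that actually figured in achieving the goal,
# dropping the incidental (episodic) context. Illustrative only.
episode = {
    "block:A-on-B", "goal:clear-B",        # goal-relevant elements
    "room:lab", "time:tuesday",            # incidental context
}
used_in_subgoal = {"block:A-on-B", "goal:clear-B"}   # elements the solution touched

chunk_conditions = episode & used_in_subgoal         # abstracted conditions
chunk_action = "move:A-to-table"

# The chunk transfers to any later situation with the same relevant
# structure, regardless of its episodic features - it behaves as
# semantic, not episodic, knowledge:
later_situation = {"block:A-on-B", "goal:clear-B", "room:office", "time:friday"}
assert chunk_conditions <= later_situation           # the chunk fires again
```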
It is important to note that this solution followed from the SOAR architecture, independently of the episodic/semantic issue.

Data chunking. A theory must be right in its details as well as its global explanations. Chunking provides interesting and appropriate learning in many situations, but it fails when extended to certain tasks, producing what is called the data-chunking problem. A prominent example is paired-associate learning, where the nonsense pair (BAJ, GID) is presented and the subject must later say GID if presented with the cue BAJ. SOAR should obviously construct a production something like BAJ → GID. The problem is that any chunks built are conditional on both BAJ and GID, since they are both provided from the outside (indeed, the production must be conditional on GID: What if the pair (BAJ, DAK) were presented later?). There seems to be no way to get a production with only BAJ as a condition and GID as the action. This example reveals a deep difficulty, and the learning task involved (learning data) seems central. The difficulty was discovered empirically after many successful studies with chunking in SOAR. It provides a nice paradigm for what a unified theory of cognition should do. A natural reaction is to modify the architecture, but the chapter describes the successful result of staying with the architecture in the belief that it will ultimately yield a solution. The solution seems quite technical, resting on


the way problem spaces separate operator applications and the search control that guides them. Yet the solution carries with it some deep implications. For one, long-term memory for data is reconstructive; that is, SOAR has learned to reconstruct the past in response to the cues presented. For another, the theory of verbal learning that emerges is strongly EPAM-like (Simon & Feigenbaum 1964).

Skill acquisition. Skill acquisition flows naturally from chunking, which in many ways is procedural learning. In particular, there is a well-known law, the power law of practice (Newell & Rosenbloom 1981), that relates the time taken to do a task to the number of times that task has been performed. The chunking mechanisms in SOAR were actually developed, in prior work, to explain the power law of practice, which might make it unsurprising that SOAR exhibits the power law. Such a sociological reaction does not gainsay that chunking is also the mechanism that supports episodic and semantic memory, so that a single memory and learning organization supports all the different types of memory that the cognitive sciences distinguish.

Short-term memory. Structurally, SOAR seems like the perfect image of a 1960s psychological theory: a long-term memory, a short-term memory, and (perhaps) sensory buffers. But the story is not so simple. In particular, SOAR's working memory is entirely functionally defined. Its short-term time constants for replacement, access, and retention directly reflect the requirements of the operations it must perform. No limitation of retention or access arises simply from the technology in which the architecture is built - to create short-term memory problems for the system. Such technological limitations have been central to all short-term memory theories: decay, fixed-slot, interference.
Thus, SOAR cannot provide simple explanations for standard short-term memory phenomena but must establish limitations out of the need for functionality elsewhere in the system. Technological limitations are certainly possible, and SOAR could be so augmented. But strategically, this becomes a move of last resort. An interesting conjecture about short-term memory does flow from the architecture, even as it stands. The current literature reveals a virtual zoo of different short-term memories - visual, auditory, articulatory, verbal (abstract?), tactile, and motor modalities, with different time constants and interference patterns. SOAR offers a conjecture about how such a zoo arises, based on the way the perceptual and motor systems integrate with cognition through an attention operator that permits perceptual and motor systems, with their varying forms of persistence, to be exploited by central cognition as short-term memories.

Summary. A unified theory of cognition must provide a theory that covers learning and memory as well as performance. One reason the current era is right for beginning to attempt unified theories is that we have just begun (through machine learning) to develop systems that effectively combine both.
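The power law of practice invoked in this chapter can be illustrated numerically. The constants below are illustrative, not fitted to any data set:

```python
# The power law of practice: time to perform a task falls as a power
# of the number of practice trials, T(N) = T1 * N**(-alpha).
# T1 and ALPHA below are hypothetical values chosen for illustration.
T1, ALPHA = 2.0, 0.4        # first-trial time (sec) and learning rate

def trial_time(n):
    """Predicted task time on the n-th practice trial."""
    return T1 * n ** (-ALPHA)

# On log-log axes a power law is a straight line: equal ratios of N
# yield equal ratios of T - the signature used to identify the law.
r1 = trial_time(10) / trial_time(1)
r2 = trial_time(100) / trial_time(10)
assert abs(r1 - r2) < 1e-9  # constant log-log slope of -ALPHA
```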


Chapter 7: Intendedly rational behavior

This chapter treats how humans go about obtaining their goals - how they intend to be rational. As the time available for a task lengthens (relative to its demands), the human is able to bring more and more knowledge to bear and hence approaches being a knowledge-level system. In the time-scale of human action, the human passes out of the cognitive band into the rational band. This happens only gradually. If being describable purely in terms of knowledge is attaining the peaks of rationality, then this chapter deals with how humans explore the foothills of rationality, the region where problem solving is manifest. Three specific tasks are used to make some points about how unified theories of cognition deal with this region.

Cryptarithmetic. Cryptarithmetic played a key role in establishing that human problem solving could be modeled in extreme detail as search in a problem space (Newell & Simon 1972). It is a puzzle in which an arithmetic sum is encoded as words: SEND + MORE = MONEY, where each letter is a digit and distinct letters are distinct digits. The original work posited a production-system architecture and developed manual simulations of the moment-by-moment behavior of humans attempting to solve the problem, based on think-aloud protocols of the subjects while working on the problem. This chapter provides a version of SOAR that successfully simulates a particularly difficult segment of a subject from Newell and Simon (1972). Resimulating old situations, which were already more detailed than most psychologists appreciated, would seem not to have wide appeal in psychology, yet reasons exist for doing it within a unified theory of cognition. First, a unified theory must explain the entire corpus of regularities. Old data do not thereby become less valuable. An important activity in a unified theory is going back and showing that the theory covers classic results.
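The task itself is easy to state computationally. The sketch below solves SEND + MORE = MONEY by brute-force search over digit assignments; it illustrates the puzzle, not Newell and Simon's production-system model, which searches a far smaller space by exploiting constraint knowledge:

```python
# Brute-force solution of the cryptarithmetic puzzle SEND + MORE = MONEY:
# try every assignment of distinct digits to the eight distinct letters.
from itertools import permutations

def solve_send_more_money():
    letters = "SENDMORY"                       # the 8 distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:         # no leading zeros
            continue

        def word(w):
            return int("".join(str(a[c]) for c in w))

        if word("SEND") + word("MORE") == word("MONEY"):
            return word("SEND"), word("MORE"), word("MONEY")

send, more, money = solve_send_more_money()
assert send + more == money                    # 9567 + 1085 = 10652
```

A human subject, by contrast, never enumerates the ~1.8 million assignments; the protocols show knowledge-guided search through a tiny fraction of the space, which is what the SOAR simulation models.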
Often it will be important to go behind the familiar results, which are statements of regularities, to explain the original data. Second, cognitive psychology needs to predict long stretches of free behavior to show that it can obtain enough knowledge about a human to predict all the zigs and zags. Cryptarithmetic, although an experimental laboratory task, is freely generated: Individual human subjects simply work away for half an hour, doing whatever they decide will advance solving the task. It is a good candidate for this endeavor at long-duration detailed prediction.

Syllogisms. Logical reasoning is epitomized in the task of categorical syllogisms: Some bowlers are actors. All bowlers are chefs. What follows necessarily about actors and chefs? Such tasks have been studied continuously by psychology, driven by the urge to understand whether humans have the capacity for logical thought. Syllogisms can be surprisingly difficult, even though they are easy to state and it is easy to understand what is wanted. This raises the possibility that humans somehow do not have the capacity for rational thought. Additional regularities have emerged, and the chapter gives half a dozen. [See also BBS multiple book review of Johnson-Laird & Byrne's "Deduction" (forthcoming).]

A SOAR model of syllogistic reasoning is presented. A central question is whether the internal representation is in terms of propositions or models. The two types are analyzed, culminating in a definition of annotated models, the representation used in the SOAR theory of syllogisms, which is model-like but admits highly restricted types of propositions. (Mental) models versus propositions have been an important issue in cognitive science (Johnson-Laird 1983). Annotated models do not flow from the architecture; hence they require an additional assumption about how humans encode tasks (though hints exist on how to ground them in the perceptual architecture). The SOAR model presented not only exhibits all the main known regularities of syllogistic reasoning, but it also fits very well the data on the (64 distinct) individual syllogisms. For unified theories of cognition, this shows how an important cognitive concept, mental models, which is not part of the theory's conceptual structure, is handled. A mental model is simply a type of problem space, with the model being the state representation. This identification is not superficial; it represents the solution to an existing cognitive-science issue.

Sentence verification. Elementary sentence verification has been a paradigm task since the early 1970s to probe the nature of comprehension (Clark & Clark 1977). In a typical task, the subject reads a sentence, Plus is above star, then sees a picture that shows a star sign above a plus sign, and answers (correctly) No. The task takes about 3 sec, and by varying sentences and pictures it was discovered that different structural features produce different characteristic durations (e.g., using below rather than above adds 90 msec). These chronometric regularities in genuine, albeit simple, language behavior provided striking confirmation for the basic assumptions of cognitive science.
A SOAR model is presented that performs sentence verification and shows the right pattern of regularities. Theorizing in the sentence-verification paradigm has occurred primarily by means of flowcharts (which are widely used in cognitive theory). Again, the question is how an important concept outside a unified theory is to be accommodated. The answer flows directly from the SOAR theory instantiated to this task. Flowcharts are inferred by the scientist from multiple temporal behavior traces. These traces are generated dynamically by the paths in the problem spaces that deal with the task. Hence flowcharts do not exist in the human in static form, and the human does not behave by interpreting or running a flowchart. Again, this solution is not superficial; it settles an important theoretical issue in cognitive science, though one that has seldom been raised. The answer to the question of what flowcharts are raises the question of where flowcharts come from. This permits an excursion into the SOAR theory of how the human acquires the sentence-verification task. This is cast as the problem of taking instructions: Read the sentence. Then examine the picture. Press the T-button if the sentence is true of the picture. . . . In both the syllogism task (reading the syllogisms) and the sentence-verification task (reading the sentence), SOAR produces a model of the situation it believes it is dealing with (respectively, actors

and bowlers, and pictures with pluses and stars). It does the same for taking instructions, except that the situation comprises intentions for its own future behavior. As SOAR attempts to perform the task (and finds out it doesn't know how), it consults these intentions and behaves accordingly. As it does so, it builds chunks that capture that knowledge, so that on future attempts it knows how to do the task without being instructed. This answers the question of where the flowcharts come from. SOAR acquires from the instructions the problem spaces it uses to do the task, whose traces produce the data from which the psychologist infers the flowchart. It also shows how a unified theory of cognition permits one and the same theory to be moved in entirely new directions (here, instruction taking). Summary. These three tasks provide a sampling of the diverse issues that attend the foothills of rationality. They indicate that a unified theory of cognition must have the scope to cover a wide territory. A single theory should cover all these distinct tasks and phenomena. A side theme of Chapter 7 is to note that there are variations on simulation other than simply producing the full trace of behavior in a given task. In cryptarithmetic, the operators were controlled by the experimenter so as always to be correct; simulation was hence focused exclusively on the search control involved. In syllogisms, simulation was used to discover the basic form of the theory. In sentence verification, simulation was used to be sure simple processing was being accomplished exactly according to theory. The general lesson, already noted in an earlier chapter, is that unified theories are to be used in diverse ways depending on the scientific endeavor at hand.
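The chunking story just described (an impasse, deliberate interpretation of the stored instructions, then a cached rule that bypasses the impasse on later attempts) can be caricatured as memoization. This is a toy illustration under invented names, not SOAR's actual machinery.

```python
# Toy caricature of chunking as caching: an unknown step causes an
# "impasse," resolved by interpreting stored instructions; the resolution
# is saved as a chunk so later attempts skip the impasse entirely.

instructions = {"verify": "compare sentence model with picture model"}
chunks = {}     # learned rules: step -> action
impasses = []   # log of steps that required deliberate interpretation

def perform(step):
    if step in chunks:            # a learned rule fires: no impasse
        return chunks[step]
    impasses.append(step)         # impasse: no rule matches this step
    action = instructions[step]   # resolve it by consulting instructions
    chunks[step] = action         # chunking: cache the resolution as a rule
    return action

perform("verify")                 # first attempt: impasse
perform("verify")                 # second attempt: the chunk fires
print(len(impasses))              # 1
```

The essential point survives the caricature: what the "flowchart" describes is the trace of impasse-driven interpretation, and practice replaces that trace with direct rule firing.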

Chapter 8: Along the frontiers The last chapter assesses where matters stand on the case for unified theories of cognition and their promise for cognitive science. To be concrete, the book has used an exemplar unified theory (SOAR), but a point stressed throughout is that the concern is not to promote SOAR as the unified theory, or even as a candidate theory (hence the use of the term exemplar), but to get the construction of unified theories on the agenda of cognitive science. (SOAR is my candidate in the post-book period, but that is not the book.) To restate basics: A unified theory of cognition is a theory of mind, but a theory of mind is not by definition computational. A computational unified theory of cognition must be a theory of the architecture. The hallmark of a unified theory is its coverage of cognitive phenomena, taken broadly to include perception and motor behavior. The last half of the book was devoted to coverage: intelligence, immediate chronometric behavior, discrete perceptual-motor behavior, verbal learning, skill acquisition, short-term memory, problem solving, logical reasoning, elementary sentence verification, and instructions and self-organization for new tasks. Listing all these varieties seems tedious, but it emphasizes that coverage is important. Coverage per se is a surrogate for dealing with the full detail of phenomena and regularities in every region. A long list of what SOAR covers does not gainsay an



equally long list of what SOAR does not cover: A list of thirty is presented, among which are emotion and affect, imagery, perceptual decisions, and priming. A unified theory of cognition cannot deal with everything at once and in depth; however, a unified theory does not have the luxury of declaring important aspects of human cognition out of bounds. Thus, the chapter considers, even more speculatively than in the main body of the book, several areas along the frontier of its current capabilities. Language. Language is central to cognition, although it has not been central to the exemplar, because of SOAR's growth path through AI and cognitive psychology. Some general theoretical positions on language can be read out of SOAR. Language will be handled by the same architecture that handles all other cognitive activities. Language, being a form of skilled behavior, will be acquired as other skilled behaviors are, by chunking. Language is entirely functional in its role, because an architecture is a device for producing functional behavior. It is possible to go further than these general statements. The fragment of language behavior presented in sentence verification made evident a basic principle: Comprehension occurs by applying comprehension operators to each word in the incoming utterance, where each comprehension operator brings to bear all the knowledge relevant to that word in context. One reason comprehension operators are the appropriate organization comes from the way chunking can grow the comprehension operators. To emphasize this possibility, a small example of language acquisition by SOAR is presented, making the further point that a unified theory of cognition should be able to approach all of cognition. The nascent SOAR theory of language seems to be a highly specific one within the spectrum of active theories of the nature of human language.
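The comprehension-operator principle (one operator per incoming word, each updating a single evolving model of the situation) can be sketched in miniature. The vocabulary and operator bodies below are invented for illustration; they are not SOAR's.

```python
# Miniature sketch of word-by-word comprehension operators: each word's
# operator brings its knowledge to bear on one evolving situation model.

def comprehend(utterance):
    model = {"relation": None, "args": []}
    for word in utterance.split():
        if word in ("plus", "star"):
            model["args"].append(word)   # object words add referents
        elif word in ("above", "below"):
            model["relation"] = word     # relation words set the relation
        # function words such as "is" contribute nothing in this toy grammar
    return model

print(comprehend("plus is above star"))
# {'relation': 'above', 'args': ['plus', 'star']}
```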
In particular, it seems at the opposite pole from ideas of modularity (Fodor 1983), being the very epitome of using general cognitive mechanisms for language. An extended discussion of SOAR against the criteria of modularity aims to show that this view is mistaken - that SOAR is as modular or as nonmodular as various task domains make it. The essential ingredient is chunking, which is a mechanism for starting with generalized resources and creating highly specialized ones. Thus, language mechanisms grown by chunking may exhibit all the attributes of modular architectures. A general lesson from this discussion is that there are more possibilities in architectures than might be imagined, and that it is only in considering the details that the actual implications of an architecture can be discovered. Development. Like language, the development of cognitive capabilities in children is an essential aspect of cognition. While a unified theory of cognition can be excused from dealing with developmental phenomena right off, it must be possible to see how development would be approached. This is accomplished by picking a specific example, namely, the balance-beam task. Children from age 5 upward watch as a balance beam is loaded with disks on pegs situated along the beam. They then predict whether the beam will tilt right or left or remain balanced when support is withdrawn. Siegler (1976) demonstrated experimentally the progress of children of increasing age through a series of theoretically defined


processing stages. This work is a beautiful example of comparative statics, in which separate theories are defined for each stage, making possible theoretical comparisons of how processing capabilities advance. What is missing, both here and throughout developmental psychology, is any theory of the mechanisms of transition. A SOAR theory is presented of the first and second transitions (the third transition is of quite a different character). This is an operational SOAR system that actually moves up the levels as a function of its experience in doing the task (all by the occurrence of an appropriate pattern of chunking). This transition mechanism, though not extended to detailed explanations of all the regularities of the balance-beam paradigm, is still a viable suggestion for the nature of the transition mechanism - indeed, it is a singular suggestion, given the paucity of suggestions in the field. The attempt to obtain a transition mechanism is dominated by a more pervasive problem in SOAR, namely, how to recover from error. Given successful adaptation to limited cases of the balance beam, what is to be done when the next level of complexity is introduced and the old program of response is inadequate? In SOAR, chunks are neither modified nor removed; hence, as in humans, much of the past remains available. Entrenched error is accordingly a problem. As in data chunking, SOAR's solution to this involves schemes for working within the existing architecture by creating new problem spaces to hold new correct chunks and permitting old incorrect chunks to be sequestered in old problem spaces whose access can usually be avoided. Chapter 8 lays out these recovery mechanisms in some detail, not only to make clear how transitions occur, but to illustrate how the SOAR architecture poses deep problems. The example also illustrates how the focus of analysis moves up from productions to operators to problem spaces and then to characteristic patterns of problem-space activity.
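Siegler's stages can be made concrete with a toy encoding of the balance-beam task. A side of the beam is a (weight, distance) pair; a child at Siegler's earliest stage consults weight only, while the mature rule compares torques. The encoding is illustrative, not Siegler's or SOAR's.

```python
# Toy encoding of the balance-beam task. A side is (weight, distance) from
# the fulcrum. "Rule I" uses weight only; the mature rule compares torques.

def rule_one(left, right):
    wl, wr = left[0], right[0]
    return "balance" if wl == wr else ("left" if wl > wr else "right")

def torque_rule(left, right):
    tl, tr = left[0] * left[1], right[0] * right[1]
    return "balance" if tl == tr else ("left" if tl > tr else "right")

# A "conflict" problem: more weight on the left, more distance on the right.
left, right = (3, 1), (2, 4)
print(rule_one(left, right))     # left  (weight-only prediction, wrong)
print(torque_rule(left, right))  # right (3*1 = 3 < 2*4 = 8)
```

Conflict problems of this kind are exactly where the stages diverge, which is why Siegler's paradigm can diagnose which rule a child is using from the pattern of predictions alone.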
The biological band. Brief consideration is given to how the SOAR architecture relates to the neurological substrate - the frontier below. Most basically, the SOAR architecture has been pinned down in absolute temporal terms, so that, for example, productions must occur at the level of simple neural circuits and the decision procedure must be of the order of 100 msec. Thus, the cognitive band no longer floats with respect to the biological band. This is in itself a genuine advance over the decades of declaring cognition to be autonomous from the brain. Regarding connectionism, which mediates for cognitive science much of the orientation toward neural-based theory, the story is more complex. SOAR is equally committed to neural technology, and the analysis and enforcement of time scales make that commitment realistic. With respect to the commitment to constraint-satisfaction computations, SOAR has an alternative orientation, to simultaneous pattern matching as fundamental. SOAR is equally committed to continuous learning. Regarding the requirement that functional power reside in the instantaneous solutions of huge numbers of soft (graded) constraints (a view not held by all connectionists), SOAR relies on different principles of task organization (problem spaces and impasses); but SOAR also responds strongly to meeting the constraints of performing a diversity of multiple higher-level tasks, a requirement to which connectionism does not yet respond. Many basic features attributed to

connectionist systems, such as massive parallelism and continuous learning, are shared by SOAR. Any dichotomous view - with SOAR cast as the paradigm example of the physical symbol system - is simplistic. The social band. Even briefer consideration is given to the social band, the frontier above. Basically, a unified theory of cognition can provide social theories with a model of the individual cognizing agent. A theory of cognition, unified or otherwise, is not a theory of social groups or organization, nascent or otherwise. Whether an improved agent would be useful is problematic in the current state of social science. The subarea of social cognition has been involved in moving the results of cognitive psychology into social psychology for almost two decades. Much has happened; there can now be free talk of retrieval, availability, and processing. Most social theories are still not cast in terms of cognitive mechanisms, however, but posit variables with values defined only up to their order. That an essential factor in the power of all cognitive theories (including SOAR) is their being cast in terms of mechanisms that actually determine the processing is still almost completely lost on the sciences of the social band. One other possibility for helping the theories of the social band is to disentangle social from nonsocial mechanisms. An example of how such an analysis might go is presented using Festinger's social comparison theory (Festinger 1954). The interesting result from this analysis is that to get some conclusions at the social level, Festinger had to devote most of his theory to defining nonsocial factors. A cleaner standard notion of a cognitive agent, complete enough to be useful to the social scientist (i.e., unified), might actually help out. The role of applications. Applications are an important part of the frontier of any science, though not the same as the scientific frontier.
An important way unified theories of cognition contribute to applications is by permitting all aspects of an application to be dealt with. The area of Human-Computer Interaction (HCI) provides good illustrations, where many aspects of perception, cognition, and motor performance all enter into each application situation (e.g., how computer browsers should be designed so as to be most useful). SOAR in fact already covers a large proportion of the processes involved in interaction with computer interfaces, and HCI is a major area of application for SOAR. However, the most relevant aspect of applications is in the other direction - not what the unified theory does for applications but what applications do for the theory. This is of course not the direction usually emphasized, although everyone believes that application success convinces the polity to support the science. Much more is involved, however. Applications establish what accuracy is sufficient, when a regularity is worth remembering, and when a theory should not be discarded. Moreover, they provide a community that cares about relevant parts of a science and will cherish and nurture them, even when the theorist, following the will-o'-the-wisp of new exciting data, ignores them. How to move toward unified theories of cognition. The tour of the frontier is finished. What remain are not questions of science but of strategy. How are we to evolve unified theories of cognition? How do we get from where we

now are, with unified theories only in prospect, to where we want to be, with all of us working within their firm and friendly confines? Here are my suggestions. Have many unified theories: Unification cannot be forced, and exploration requires owning an architecture conceptually. Develop consortia and communities: Unified theories require many person-years of effort; they are not for the lone investigator. Be synthetic and cumulative: Incorporate, rather than replace, local theories. Modify unified theories, even in radical ways: Do not abandon them after they have received massive investments. Create data bases of results and adopt a benchmark philosophy: Each new version must predict all the benchmark tasks. Make models easy to use, easy to learn, and easy to make inferences from. Finally, acquire one or more application domains to provide support. So the book ends where it began, but with understanding by the reader. Psychology has arrived at the possibility of unified theories of cognition - theories that gain their power by positing a single system of mechanisms that operate together to produce the full range of human cognition. I do not say they are here. But they are within reach and we should strive to attain them.

Commentary submitted by the qualified professional readership of this journal will be considered for publication in a later issue as Continuing Commentary on this article. Integrative overviews and syntheses are especially encouraged. All page references are to Newell's Unified theories of cognition unless otherwise indicated.

Unified cognitive theory: You can't get there from here Derek Bickerton Department of Linguistics, University of Hawaii, Honolulu, HI 96822 Electronic mail: [email protected]

Newell's goal is legitimate and he is probably right in thinking that the time to attempt it has come, so the remarks that follow are not meant to discourage the enterprise as such. Newell himself realises that the problem is "How are we to get from here, where I have claimed [unified theories of cognition] are in prospect, to there, where we will all be working within their firm and friendly confines?" (p. 503). Well, I don't think we can - at least, not without first backtracking and rethinking some of Newell's most basic assumptions, especially those that involve the role of language. Newell admits the importance of language but seems to find it embarrassing. It comes in at No. 6 on his list of "multiple constraints that shape mind" (p. 19, Fig. 1-7), albeit unpromisingly chaperoned (a mind must be able to "use language, both natural and artificial" [emphasis added] - so don't preliterates and technophobes have minds?). But when he summarizes SOAR's overall achievement (p. 440), No. 6 appears as the first constraint that SOAR has so far failed to meet. This is hardly surprising, given Newell's views on what language is. He states that language "starts externally and ends




externally" and is consequently "a perceptual-motor phenomenon" (p. 441), which implies some kind of behavioral approach a la Skinner (1957) - mouths open and shut, ears flap, but nothing much happens in between. He says it is "patently a skill" (ibid.), which it patently isn't. This belief arises from a confusion, as pernicious as it is widespread, between language (which is first and foremost a representational system; see Bickerton 1990) and the use of language, which you could, I suppose, regard as a skill provided that you also regarded walking, breathing, and growing pubic hair as skills. [See also Bickerton: "The Language Bioprogram Hypothesis" BBS 7(2) 1984.] To criticize the eighteen pages on language and language acquisition that follow would be like shooting fish in a barrel; I'll just say that before embarking on them, Newell should at least have familiarized himself with the extensive literature on language learnability (not "child language" or "language acquisition"!), starting from Gold's classic paper (Gold 1967), which demonstrates, in a format Newell would surely appreciate, that anything working the way SOAR does could never succeed in acquiring a natural language. Fortunately there are more constructive remarks that a linguist can make. [See also Chomsky: "Rules and Representations" BBS 3(1) 1980; Lightfoot: "The Child's Trigger Experience" BBS 12(2) 1989; Pinker & Bloom: "Natural Language and Natural Selection" BBS 13(4) 1990; Greenfield: "Language, Tools and Brain" BBS 14(4) 1991.] Indeed, I may have been unfair in suggesting that Newell fails to understand language. On pp.
63-65 he gives a description of it that I couldn't hope to improve on: He points out that it must be "capable of combinatorial variety," must have "high stability" in storage but "low stability" in usage, must show a "modest" energy cost along with "completeness of possible transformations," must ensure that "the organism can do with it what the organism wants," and must use "the same medium . . . for the transformations as for the representations," so that "it is all one medium" and "there is self-application." There's just one thing wrong: Newell doesn't realise he's talking about language. He thinks he's talking about a "composable representation system." Well, that's exactly what language is. Elsewhere (pp. 113-17), Newell notes that humans, as "symbol systems" capable of "unlimited qualitative adaptation," differ totally from all other organisms, whose adaptations are sharply limited and largely if not exclusively determined by biological factors. Yet to him this seems just an accepted fact of life to be (if possible) machine-instantiated, rather than a shocking mystery that any candidate theory of human cognition ought at least to try to explain in evolutionary terms - for if we can't say how such things might come to be, we can hardly claim to be sure about what they are. Had he thought about this a little, Newell might have put together the 2 of superior human adaptivity and the 2 of what a "composable representation system" is and does in order to make the 4 of the table below:

                       language    free adaptivity
Humans                     +              +
All other animals          -              -

Parsimony alone should suggest that the second trait might be dependent on the first. Indeed, it has already been proposed by Macphail ["The Comparative Psychology of Intelligence" BBS 10(4) 1987] that the possession of language is all that differentiates human from nonhuman intelligence, while Premack ["The Codes of Man and Beasts" BBS 6(1) 1983] has shown that even minimal quasilinguistic training brings about a quantum leap in the cognitive capacities of chimpanzees. Newell is suspicious, and not without reason, of the "modular" approach to language and cognition set forth in Fodor (1983; see also BBS multiple book review of Fodor's The Modularity of Mind, BBS 8(1) 1985) and currently popular among nativist linguists. However, he does not even consider the possibility



that a unified theory of human cognition might simply stand his own theory on its head and claim that, far from language being somehow dependent upon cognition, the properties of human cognition, at least to the extent that the latter differs from animal cognition, are largely determined by, and may indeed directly derive from, the properties of language. If this possibility turns out to be correct (there are good arguments for it, and nothing in Newell's book argues explicitly or even implicitly against it), then unified theories of cognition would seem to have two profitable ways to go. The first, cautious way would be to work on a theory of nonhuman cognition, preferably instantiated in some concrete artefact that could avoid obstacles, learn routes, build cognitive maps, evade predators, recognise and perhaps even capture prey in the ways in which some specific nonhuman species does. The second, more risky way would be to design a computer program that could acquire (and subsequently use) a natural human language, and then see what (if anything) in the vast repertoire of human cognitive feats such a program could not subsequently perform. Given the state of the art, neither project will be easy. Far easier to pursue the tried-and-trusted route of getting a program to solve problems in (more or less) the way people solve problems, then keeping your fingers crossed in the hope that when that's accomplished, all the other stuff will just unravel. But it's dangerous to let the current state of technology determine your theories. All too often you end up looking for your car keys on the side of the road where the street-lamps are. If they just happened to drop on the other side, you'll be out there looking for an awful long time.

Reframing the problem of intelligent behavior Stuart K. Card Xerox Palo Alto Research Center, 3333 Coyote Hill Rd., Palo Alto, CA 94304 Electronic mail: [email protected]

For some time, Allen Newell has worried about the state of theory in psychology (e.g., Newell 1973a); his new book, Unified theories of cognition, is his latest contribution to the problem. There has long been something a bit strange about psychological theory. First, there is the problem of theory cumulation. Years of work often just set the stage for "demise" papers that disparage the foundations laid (Saunders 1984). Research seems to wander away from problems more out of exhaustion or literatures grown too big than from the problems' resolutions (Newell 1973a). Second, even when robust regularities are unearthed, they are often virtually ignored by the psychological community. For example, until very recently the power law of practice (Snoddy 1926) and Fitts's Law (Fitts 1954) were unknown to most graduate students in the discipline, as manual control theory (Baron et al. 1970) is unknown today, despite the robustness and the practical utility of these empirical regularities. Third, there is the only marginal adequacy of psychological theory as the basis of applied science and technology, despite an enormous need (Broadbent 1990; Elkind et al. 1990). Underlying this state of affairs there are probably many factors; I will suggest only three that are relevant to Newell's book. One is that psychology is largely a functional science. As Resnikoff (1989, p. 3) put it: We should . . . recognize the limitations of psychology and linguistics as experimental sciences. Although the initial conditions and externalities of the experimental arrangement can be modified by the experimenter, the structural properties of the system under investigation generally cannot be manipulated, nor can its internal state be

prepared with any degree of certitude. . . . It is here that the role of the computer in information science will be of crucial significance, for the computer scientist can prepare the internal state of the observed system as well as its initial conditions. Overcoming the limitations on psychology as an experimental science tends to lead the search for cumulative theory either to computer simulation as a technique for working out the potential mechanisms underlying behavior (as in Newell's book) or to the neurosciences (to the extent they can manipulate the internal state of an organism experimentally and associate that state with physical structure, as Resnikoff suggests). A second difficulty of psychological theory concerns the integrated nature of intelligent behavior. Psychological theories are often addressed to a single phenomenon and are not subjected to constraints apart from it, but behavior may reflect several mechanisms in concert. How the mechanisms underlying problem solving, learning, and memory interact needs to be addressed if theories of each are not to be underconstrained. A third reason is the attitude toward theory of psychologists themselves. Unlike many other sciences, psychology seems to place little value on predicting behavior approximately (but motivated by mechanism) as opposed to precise curve-fitting with the data already in hand. This works against the accumulation of theories constrained by the results of multiple phenomena while allowing the growth of microtheories shaped by the accidental artifacts of experimental paradigms. The purpose of Newell's book is to argue for unified theories of cognition as a way of overcoming these problems and cumulating theory. For his main argument it does not matter whether that theory is symbolist, connectionist, or neuroscience-based, provided it can stretch across as many of the psychological phenomena as possible.
Suppose, for example, that the connectionist arguments turned out to be correct. The problems of a functional science of integrated behavior and of attitudes about psychological theory would remain and would still have to be addressed. Newell is at pains to lay out the major empirical constraints on such a theory. His book is thus an attempt to reframe the problem for all psychological theory for the sake of cumulative progress. To develop his argument, Newell describes the SOAR theory. This is a particular theory cast in the form of a computer simulation as a means of reasoning from combinations of local mechanisms to larger effects. SOAR theory has made several substantive contributions to a unified theory of cognition, it seems to me: The first is the notion of universal subgoaling. This elegant mechanism answers the question, long a problem for cognitive theory, of where goals come from. In the SOAR theory, goals are emergent structures that arise out of difficulties ("impasses") that an intelligent organism arrives at in trying to do something. Thus novice behavior could be expected to have more subgoals than expert behavior, and goals could be expected to arise either internally (e.g., through lack of knowledge) or externally, through interaction with the environment. Universal subgoaling substantially extends the problem space idea first seen in protocols of logic problems (Newell et al. 1958) and later developed in terms of production systems (Newell & Simon 1972). A second contribution is the reflective decision mechanism. This gives the theory a mechanism for metacognition and a decision process that requires more processing in the face of difficult or contradictory information. A third contribution is the deep integration of a learning theory (chunking) with the subgoaling mechanism.
This makes it possible to address with precision questions (about within-trial learning, for example) that are only barely approachable using experimental transfer-of-learning designs. This same chunking theory is shown to be quantitatively consistent with the empirical power law of learning relation. Newell's book tries to go farther and give us a sketch of how this theory might be extended down to the architectural level -


for example, the domain of reaction time experiments, but this can only be a sketch. SOAR theory has no serious model yet of working memory or of perception or of motor action. What can be shown is that the productions and the elementary units of memory and action in the SOAR theory are consistent with various chronometric constraints and may even be used to compose explanations of immediate response. SOAR theory is least developed in just those places where the human information-processing architecture breaks through most strongly into behavior and where experimental psychology is most developed. Unfortunately, the style of discourse in psychology is to reject any theory that is deficient in a microdomain without giving credit to its unifying aspects. So a memory theorist might reject a theory that did not explain the articulation rate effect in favor of a microtheory that did, without counting against the microtheory the fact that it could not (let us suppose) be used to understand the memory-related subgoaling in problem solving, or indeed anything outside of its immediate domain. I fear that Newell's book will fall prey to this sort of criticism without a balancing appreciation of just how far he has been able to push his unification across major puzzles in rational behavior toward elementary information processing and neural constraints. We should judge a theory not just by the things it gets wrong, but by the number of things it gets right and the value of these in unifying theoretical subdomains; otherwise it is difficult to see how psychological theory can ever cumulate. That said, it seems to me there are several things that need to be done if SOAR theory is to have significant success as a unified theory of cognition, even on its own terms. To accelerate progress, more codification of the empirical base of phenomena is required.
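One established regularity of the kind such codification would catalog is the power law of practice cited earlier: time on trial N falls as T(N) = T1 * N^(-alpha), a straight line of slope -alpha on log-log axes. A minimal numerical illustration, with invented parameter values, follows.

```python
import math

# Power law of practice: T(N) = T1 * N**(-ALPHA). Parameter values here
# are invented for illustration; only the law's form is from the text.

T1, ALPHA = 10.0, 0.4    # assumed: 10 s on trial 1, learning exponent 0.4

def practice_time(n):
    return T1 * n ** (-ALPHA)

# On log-log axes the law is a straight line of slope -ALPHA:
n1, n2 = 10, 100
slope = (math.log(practice_time(n2)) - math.log(practice_time(n1))) \
        / (math.log(n2) - math.log(n1))
print(round(slope, 6))   # -0.4
```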
More lists are needed of established phenomena like Salthouse's (1986) list for transcription typing (see Card 1990 for an attempt at one for working memory). These need to be used to test and augment the theory. Although one of the principal claimed benefits of a unified theory of cognition is that it allows one to see how different sorts of phenomena interact, there need to be more demonstrations that exploit this - for example, demonstrations of how two or more different phenomena actually derive from a common set of mechanisms or how two different theories are actually special cases of the same theory. That is, there needs to be more demonstrated theoretical payoff from the integration mechanisms. A related milestone would be the prediction of phenomena not yet discovered empirically. Then there is, of course, the augmentation of the SOAR theory by components that address working memory, perception, motor behavior, attention, and emotion. A key result would be the demonstration of mechanisms for combined internal-external cognition. A unified theory of cognition, however, need not have all mechanisms in place before demonstrating success. The phenomena and theories of psychology can begin to be unified even with a partial set of mechanisms. A unified theory of cognition is going to arrive incrementally - it will cumulate. Finally, there needs to be a better way to talk about a computational theory being simulated in a computer program, as distinguished from the program itself, in the same way that computer scientists discuss scheduling theory as distinct from Unix. There is a problem in the way the SOAR theory is described that tends to focus discussion on whether SOAR is "programmed" instead of on the adequacy of the proposed mechanisms. Newell's book and the SOAR theory will have succeeded if they do nothing more than successfully reframe the problems to be solved by psychological theory.
A test will be whether critics of the present theory are induced to offer suggestions for improvements.



Commentary/Newell: Unified theories of cognition

A unified theory for psychologists? Richard A. Carlson1 and Mark Detweiler Department of Psychology, Pennsylvania State University, University Park, PA 16802-3106 Electronic mail: [email protected]

On the first page of Unified theories of cognition (hereafter Cognition), Newell states that "psychology has arrived at the possibility of unified theories of cognition" (emphasis added). Most of the following 500-plus pages then describe SOAR, "a candidate unified theory" (p. 38). Newell indicates that his intention is not to evaluate SOAR as a theory, but to argue for unified theory, with SOAR as an example. Newell's vision is impressive, and his contribution to the interdisciplinary work of cognitive science can hardly be overstated. Moreover, it is hard to disagree with his general sentiments concerning the value of global theory. However, it is reasonable to ask what different audiences in the cognitive science community are likely to find in Cognition. As cognitive psychologists, we find Newell's vision of unified theory disappointing. Our disappointments revolve around Newell's treatment of empirical psychological evidence. In particular, we expect that many psychologists will have problems with the missing sense of interplay between theory development and ongoing empirical research. One of our aims in reading Cognition was to learn how SOAR (or at least some unified theory) would advance empirical research on human cognition. Unfortunately, building unified theories, at least as seen in SOAR, does little to shape the next set of empirical questions. Newell's research agenda seems to be defined primarily in terms of learning about SOAR models rather than learning about cognition. We believe it is worthwhile to look for relatively direct evidence for particular theoretical constructs and that a unified theory should organize existing data as well as guide the collection of new data. Newell tends to give all regularities presented equal status and to provide only functional accounts of stimulus-response relations.
This strategy may be valuable for developing some kinds of theories, but it makes relatively little contact with the actual conduct of empirical cognitive research. Newell suggests that "every good experimentalist should ponder that finding one more regularity is simply adding one to this dazzling field of ~3,000" (p. 237). Having pondered, we conclude that Newell misrepresents what good experimentalists do. The best empirical research is not simply concerned with discovering and documenting regularities; rather, it seeks to understand the scope and implications of previously reported regularities - attempting to find boundary conditions and interactions that reveal much more about cognition than is suggested by Newell's lists of "regularities." Consider the case of short-term or working memory. As presented in Cognition, SOAR has, and apparently needs, an effectively unlimited short-term storage capacity. Newell may be right that short-term memory phenomena are better explained by tradeoffs among multiple functionally motivated mechanisms than by structural limitations. Furthermore, we don't see an inherent problem with theoretical assumptions that are a priori implausible. Psychology has often been too quick to translate empirical regularities directly into theoretical constructs, and short-term limits may be an example of this. Psychological plausibility has played an important role, however, in the development of cognitive theory. It is not enough that a theory or simulation produce output similar to human behavioral data - we want to know whether it accomplishes the task in the same way as humans. Given an apparently implausible mechanism (e.g., unlimited short-term storage), it seems reasonable to expect Newell to provide some argument that could serve to educate our intuition about that mechanism.



The treatment of short-term memory is especially central to our complaint about psychological plausibility because for most cognitive theorists there is a close link between working memory and consciousness. This link provides a theoretical basis for generating process observations to support the psychological plausibility of simulation models (Ericsson & Simon 1980; Newell & Simon 1972). We find the lack of discussion of process data surprising and troubling, symptomatic of Newell's general elevation of AI over psychological criteria of theory evaluation. Newell makes much of the classic distinction between short-term and long-term memory, saying that SOAR "fits the classical organization so well that if, per Crowder [1982], we have witnessed the demise of STM, presumably SOAR has suffered demise as well, just as it was about to be born" (p. 356). Well, yes - at least as a psychological theory with something to say to psychologists studying short-term and working memory phenomena. Much of the last 15 or so years' work on short-term memory (e.g., Baddeley 1986; Crowder 1982; Daneman & Carpenter 1980; Klapp et al. 1983; Reisberg et al. 1984) has been concerned with establishing limits on regularities (e.g., the typical STM span of 7 ± 2 items) and with the interactions of STM phenomena with other phenomena (e.g., reasoning, learning, and motor activity). One conclusion to be drawn from this work is that short-term remembering is often accomplished by covert (or not so covert) motor routines - for example, by covert articulation (Baddeley 1986) or by a "finger rehearsal loop" (Reisberg et al. 1984). This conclusion seems to us to cast doubt on Newell's sharp distinction between central cognition and motor processes (p. 195). A second conclusion is that short-term memory capacity, as measured by memory tasks, does not correspond to the capacity for active information processing. For example, Klapp et al.
(1983) demonstrated that when subjects are allowed time to establish a rehearsal routine, short-term retention need not interfere with active information processing. A third conclusion is that working memory capacities are recruited and coordinated by metacognitive control processes (e.g., Logan 1985). It is not that SOAR cannot handle these elaborated phenomena - indeed, Newell discusses some of them. However, in denigrating the search for the "3,001st" empirical regularity, Cognition misrepresents the interplay of (empirical) research and (local) theory that has generated these important conclusions. In our view, the empirical work on short-term and working memory has provided a great deal of information about the relations among, and boundary conditions on, empirical regularities and underlying theoretical mechanisms. We applaud Newell's pursuit of unified theories of cognition. Psychologists are likely to be disappointed, however, in both the particular exemplar and the research strategy presented here. One of Newell's central premises is that "AI provides the theoretical infrastructure for the study of human cognition" (p. 40). We finished Cognition with a renewed appreciation of how great the disciplinary gap remains between AI and cognitive psychology. And though we endorse Newell's most general arguments for unified theories, we suspect that psychology will have to find a different path to unified theory. ACKNOWLEDGMENT We would like to thank Kevin Murnane for helpful comments and discussion on this review. NOTE 1. Address correspondence to Richard A. Carlson.


Toward unified cognitive theory: The path is well worn and the trenches are deep John M. Carroll User Interface Institute, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598 Electronic mail: [email protected]

Allen Newell's Unified theories of cognition is inspiring, but also puzzling. It is inspiring because he has forcefully directed psychology toward an objective of which it must never lose sight, but which can get lost among workaday concerns: unified theory. The book is puzzling, however, in its approach to making the case that psychology is "ready" for unified theories, and in its view of how unified theories support practical applications. Newell argues that the study of cognition is ready for unified theories by developing an example theory, SOAR. The example is interesting but very incomplete. Accordingly, it cannot provide proof by demonstration; in the end, one can only wonder whether SOAR is promising or just inadequate (p. 24). The only other way to make the argument is by systematic analysis of the state of psychology and of the space of theoretical possibilities. In view of this, it is extremely odd that Newell pays so little attention to the prior work of Anderson (1976; 1983) on ACT*: This work already has 15 years of development and is far more closely linked to and motivated by manifest characteristics of human cognition and the state of contemporary psychology than SOAR is now. Newell calls ACT* a "harbinger" of unified theories (pp. 26-29), but this seems wrong. It is repeated endlessly in the book that the main point is that attention should be directed to unified theory, and that SOAR is only one possibility. This begins sounding hollow after 500 pages and no systematic analysis. It is a problem throughout this book that past efforts of Newell and his collaborators are absorbed willy-nilly into a sort of "SOAR's suburbs" (e.g., MHP is recruited as "SOAR's model of typing," p. 296). In doing this, Newell seems willing to take approximation as identity; theories that talk about problem spaces and information processing are swallowed whole (pp. 279, 282, 285) and a 19% discrepancy in their predictions is accepted without comment (p. 281).
This allows Newell to adduce much evidence and theoretical analysis in support of "SOAR," but it makes it unclear just what is at stake in SOAR and which claims and properties matter. I am sympathetic to Newell's call for an ethos of research programmes (Lakatos 1978), but the absence of a detailed comparative analysis of prior work like ACT*, GOMS, and MHP creates unclarity and the appearance of credulity. In some ways, things are worse outside the SOAR suburbs. Fodor's modularity thesis surely ought to be seen by theorists concerned with central cognition as a challenge (Carroll 1988). Yet Newell seems to preclude modularity (in his rejection of trap-state mechanisms, p. 221) and at the same time to suggest that SOAR may be able to behave as if it were modular (pp. 455-59). The modularity issue lies at the heart of the book's central concern with whether psychology is ready for unified theories, but it is raised only as a discussion question and then left open. [See also BBS multiple book review of Fodor's The Modularity of Mind, BBS 8(1) 1985.] My second difficulty with the book lies in its view of the interaction between unified theories and practical application. Newell argues that because "real tasks engage many aspects of cognition . . . unified theories of cognition are exactly what are needed to make progress in building an applied cognitive psychology" (pp. 498-99). For SOAR, the principal application domain is identified as human-computer interaction, or HCI (p. 502). The general point seems fine as far as it goes, and envisions an appealing give-and-take relation between science and application. Unfortunately, things are not as encouraging in detail.

SOAR per se has contributed little or nothing to HCI application; if one considers the SOAR suburbs of GOMS and MHP, there have been specific technical contributions, but "real world" achievements only recently and so far only in the limited and highly routinized task of timing telephone-operator scripts (Gray et al. 1992). SOAR has not provided a practical analysis of the acquisition of cognitive skill or of instruction (pp. 345-51, 423-26, 499) except under enforced rote learning conditions that free humans reject in the real world (Carroll 1990). And it certainly has never provided direct "help in design" (p. 284; cf. Carroll & Campbell 1986). Part of the problem is that SOAR has not yet addressed areas that are crucial to HCI design reasoning: concept learning, decisions under risk and uncertainty, individual differences, metaphor and analogy, imagery, play, morals and values (pp. 433-36). And on these grounds, it seems appropriate to say that applications may have to wait for more substantial developments of the theory. Another part of the problem, however, is that Newell's discussion of applications is too narrow, focusing only on the work of his students. Much current theory development in HCI (in particular, work on integrating theory and design application) takes as given the failure of limited and low-level approaches, like those inhabiting the SOAR suburbs. Looking at the HCI field more broadly, the models SOAR has digested are increasingly peripheral (e.g., Budde et al. 1991; Carroll 1991; Norman & Draper 1986). Newell candidly observes that "if a theory is doing good things in terms of applications, then, even if it seems uninteresting for other reasons, it will be kept and used" (p. 501). This may be true, but perhaps chiefly as a commentary on human gullibility.
If we take our theories seriously, if we wish to drive toward unified theories, then it will be a dangerous course to pursue utility without scientific skepticism and careful analysis of why applications work. This pitfall is evident in HCI today. For example, Gray et al. (1992), cited above, state their research objective as "validating" GOMS tout court. This is a lot less than it will take to make effective applications, and it is just irrelevant to any serious concern with unified theory. Although psychology is plainly ready for unified theory and pursuing it now, I hope that it will be wary of approaches that inadequately balance self-approbation with skepticism.

Re-membering cognition Susan F. Chipman Cognitive Science Program, Office of Naval Research, Arlington, VA 22217-5000 Electronic mail: [email protected]

Beneath the growing success of cognitive psychology as an academic and social enterprise, there has been an undercurrent of discontent. In 1980, James Jenkins asked whether we could have a fruitful cognitive psychology (Jenkins 1981). His central concern was that cognitive psychologists seemed to be generating microtheories for a haphazard assortment of arbitrary laboratory tasks. He pointed to the difficulties that the strategic and computational flexibility of the human subject presented for those who would seek truth in reductionist laboratory experiments: "The stupid theories will appear correct because the subjects can become as stupid as we require them to be." He pointed to the great importance of the phenomena that emerge when cognition is allowed to show its complexity, "No study of single tones will inform us as to melody . . . " Some years later, but quite independently, George Miller (1986) sounded the same theme when he complained that researchers were "dismembering cognition," in "an attempted analysis that violates the natural joints, that leaves its object





shattered in lumps." Miller acknowledged that analysis is central to the scientific enterprise, but he argued that we were neglecting to seek the laws of synthesis that would explain how the whole might be assembled from the supposed parts. Except for those who have been too well socialized into the cults that surround particular experimental paradigms, it is not difficult to share the roots of these discontents. Read a textbook. Textbooks of cognitive psychology appear to have been organized by sorting a set of notecards describing experiments into a convenient number of chapter heaps. The sense of a coherent total functioning system does not emerge. Perhaps the yearning for coherence is too subtle a point. Both Jenkins and Miller pointed to practical application and practical realistic problems as an important test for cognitive psychology. There is an important class of practical applications that points up the incoherence of present cognitive theory over and over again: Suppose that one wants to create a battery of tests that would do a broad, thorough survey of cognitive function. What should be in that battery? To be honest, no one today can tell you. Yet this is something we would like to be able to do - to detect the cognitive consequences of environmental contaminants, of infectious diseases, of aging or of stress. Both Jenkins and Miller cited an earlier critique by Allen Newell (1973a): "You can't play 20 questions with nature and win." We now find Newell taking the lead in a major effort to remedy this deficiency in cognitive theory, his Unified theories of cognition. In contrast to the critiques, Cognition is essentially a conciliatory statement. The vast body of accumulated experimental results, which Newell estimates as 3,000 "regularities," is treated with respect as a valuable resource. They are, as Jenkins also noted, evidence of ways that human beings can act under certain conditions. 
Trying to account for all of them together, within a single theoretical framework, poses a very different problem compared to the traditional attempts to formulate and test theories for each separately. The book is also conciliatory in that Newell, a central figure among those researchers who undertook to study the complexities of problem solving and expertise, accords a special status to the traditional, reduced reaction time studies of experimental cognitive psychology: Because of their severe time constraints, he views these experiments as a way of "getting close to the architecture." Although the words are somewhat new, I think that is what those researchers had in mind all along. In a sense, Cognition is two books. One of them carries the general message that the time has come to attempt unified theories of cognition. The other presents a specific case, the SOAR cognitive architecture, and shows how it can be used to develop explanations for a wide range of cognitive performances. SOAR is a computational system that began as an AI project with no necessary commitment to model human intelligence, but with substantial inspiration from psychology. Two of SOAR's key features - the reliance on problem spaces and the inclusion of a constantly active learning process called chunking - have clear psychological inspiration. But other less psychologically plausible features have been retained from the AI past, notably a very large, essentially unlimited working memory. The fact that it started with another purpose is a handicap for SOAR as a potential unified theory of cognition, but the handicap may be compensated for by the fact that Newell has attracted and organized a substantial community working within the SOAR framework. Still, it seems somewhat more likely that a theory setting out, from the first and without equivocation, to be a psychological theory will succeed as a psychological theory.
As Newell himself recognizes, individuals may choose to be more impressed either by what the theory has been able to account for or by what it has not accounted for. It would be unfortunate, however, if those who choose to be unimpressed by the particular exemplar of a unified theory carelessly extend that evaluation to the broader message of Cognition. The dissatisfactions with the state of cognitive psychology have been very real, shared by some of its most significant figures. The embarrassment of not being able to respond to important applied questions that are obviously about cognition is real. Newell is trying to move our arguments into a new territory in which one unified theory will be compared to another, not one microtheory to another. Miller urged reliance upon converging experiments and converging evidence from diverse disciplinary perspectives, but such evidence needs something like a broad and explicit computational theory of cognition to converge upon. As Newell's book concludes, he does not say that unified theories are here today. "But they are within reach and we should strive to attain them."

Limited storage, natural intelligence Eric Chown and Stephen Kaplan Department of Electrical Engineering and Computer Science and Department of Psychology, University of Michigan, Ann Arbor, MI 48109 Electronic mail: [email protected]

Unifying cognition: Has it all been put together?
