Introduction to Cognitive Science
Shimon Edelman,
Computing the Mind:
How the Mind Really Works¹
(New York: Oxford University Press, 2008)

Main Ideas

Last Update: 14 January 2011

Note: Page references are in parentheses.


  1. Cognition & Computation:

    1. [T]he mind can be understood in terms of the brain… "The Astonishing Hypothesis is that ‘You,’ your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules."—Francis Crick[,] The Astonishing Hypothesis (1994). (63)

    2. Nothing in cognition makes sense except in the light of computation. (343)

      1. [W]e may conclude that the brain is a dynamical system that generates the mind by implementing all manner of computations. (79)

      2. [T]his book…present[s] a complete and consistent, if simplified, panorama of cognition as it is understood in terms of computation…. (515)


  2. Simulation:

    1. In the present book, simulation is a thread that runs through many diverse topics…. …[S]imulation is a general feature of cognitive representation, permeating all perceptual processing and conceptual thinking. (330)

    2. [S]imulating computation is indistinguishable in its outcome from performing it in the first place. (206)

      1. [A] simulation of a computation and the computation itself are equivalent: try to simulate the addition of 2 and 3, and the result will be just as good as if you "actually" carried out the addition—that is the fundamental nature of numbers…. Therefore, if the mind is a computational entity, a simulation of the relevant computations would constitute its fully functional replica. (81)

        1. [I]t is conceivable that a really elaborate simulacrum of another person that lives in my brain may become conscious. (445–446)

    3. A comprehensive computational theory of general intelligence—one that would explain the behavioral findings, fit the big picture painted by genetics and neuroscience, and capitalize on the success of…functional hypotheses…—can be constructed on an existing foundation: the concept of the virtual machine. (371)

      1. A [virtual machine] is a process that offers, usually by means of considerable extra computation, a functionality that is not directly available in the environment that supports it. (338)

        1. To the outside world, [a Universal Turing Machine] TM_A [that simulates a particular Turing Machine] TM_B will…look functionally indistinguishable from TM_B—even though the latter exists only virtually (in the memory of TM_A)…. (205)
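
    [A minimal sketch, not from Edelman's text, of the equivalence claimed above: a toy tape "machine" that adds 2 and 3 in unary, simulated in Python. Its outcome is indistinguishable from performing the addition directly; the function name and the tape encoding are illustrative only.]

        def simulate_unary_addition(a, b):
            """Simulate a toy machine that adds two unary numbers written on a tape."""
            tape = list("1" * a + "+" + "1" * b)   # e.g., 2 + 3 -> 1 1 + 1 1 1
            tape[tape.index("+")] = "1"            # rule 1: turn the separator into a mark
            tape.pop()                             # rule 2: erase one trailing mark
            return tape.count("1")                 # read the answer off the tape

        assert simulate_unary_addition(2, 3) == 2 + 3   # simulated outcome = "actual" addition
        print(simulate_unary_addition(2, 3))            # -> 5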


  3. Hierarchical Abstraction:

    1. [I]f you would grant your friends and neighbors (and, potentially, other sentient beings or machines) the same cognitive capacities as you attribute to yourself, you must recognize the mind as an organizational entity. (6)

    2. The principle that makes…[the "actual behavior" of brains "amenable to study and to understanding"] is hierarchical abstraction…. Brains evolve to deal with a world of information that is characterized by statistical structure at multiple levels; their functional architecture and the computations it implements reflect this structure. (498–499)

        1. …function (that is, computation)… (370)

      1. The single most important property of the world that makes it intelligible is the pervasiveness of hierarchical abstraction as the vehicle of emergent complexity in physical systems. (238)

      2. Simplicity is what complexity must be made of, because there isn't anything else to make it out of, and hierarchical abstraction is the only way in which sufficiently interesting complex stuff can be built out of simple building blocks. (30)

      3. [H]ierarchical abstraction is…the mind's tool for the study of the world. (31)

      4. [H]ierarchical abstraction is…a natural by-product of the idea of using symbols as vehicles of representation. (31) [See unitization, below]


  4. Statistics:

    1. The world is statistically well-behaved. … Minds evolve to take advantage of that. (143)

      1. [S]tatistics is a mathematical framework not for capitulating in the face of uncertainty, but for managing it. (199)

    2. The waking brain is constantly engaged in collecting and maintaining a variety of statistical measurements of the world, including traces of past stimulation and of its own past states; even during sleep it keeps churning through the accrued data, seeking and consolidating patterns. (49–50)

      1. [Therefore, statistics is] a computational concept that is absolutely crucial for understanding cognition. (49)

      2. We can conceptualize the kind of computation that neural networks naturally support as mapping the activities of one set of neurons (one multidimensional space) into the activities of another, output set (another multidimensional space). (51)

      3. A collection of inputs…becomes a "cloud" of points [in that space], and this is where the conceptual link to statistics becomes obvious: the shape of that cloud captures the statistical relationships among the data points. (55, 92)

        1. Clouds of points in multidimensional representation spaces are the only entities connected to the external world that a cognitive system can ever get to know. (92)

        2. It's there for real. …I do not merely suggest a metaphor of a space with a cloud of points in it. …Its reality is of the same kind that you trust (sometimes with your life) when you use the fruits of…engineering. (93)
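
    [A minimal sketch, not from Edelman's text, of the "cloud of points" picture above: correlated measurements form a cloud in a two-dimensional representation space whose shape (covariance, principal axes) captures the statistical relationships among the points; a weight matrix then maps that space into another, as in the neural-network mapping quoted from p. 51. All numbers and dimensions are illustrative.]

        import numpy as np

        rng = np.random.default_rng(0)

        # A "cloud" of 500 points: correlated activity of two hypothetical units.
        cloud = rng.multivariate_normal(mean=[0.0, 0.0],
                                        cov=[[1.0, 0.8],
                                             [0.8, 1.0]],
                                        size=500)

        # The shape of the cloud summarizes the statistics of the data.
        cov = np.cov(cloud, rowvar=False)                 # spread and correlation
        axis_lengths = np.sqrt(np.linalg.eigvalsh(cov))   # principal-axis lengths
        print(cov.round(2), axis_lengths.round(2))

        # The kind of computation a neural network naturally supports: mapping one
        # activity space (2 input units) into another (3 output units).
        W = rng.normal(size=(3, 2))       # illustrative connection weights
        outputs = np.tanh(cloud @ W.T)    # each input point becomes a 3-D output point
        print(outputs.shape)              # -> (500, 3)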


  5. Learning:

    1. [T]he key methodological point behind this book: the single most important thing one can know is how to learn. (307)

      1. [L]earning is learning of regularities. (248)


  6. Perception:

    1. The visual world…is a simulation totally controlled by the brain… (451)

    2. Corollary to the Astonishing Hypothesis: The perception of a given quality or aspect of the visual stimulus consists of the activity of identifiable neurons in the cerebral cortex. (66)

    3. [T]he ineffability of perceptual experience is the rule, not an exception, in everyday life. … To understand perception (as well as its relation to memory) we must, therefore, understand what the raw, phenomenal, pre-conceptual feel of sensing the world consists of, and how the sensory stimulation gets computed into that feel. (87)

    4. [A]ll perceptual problems share the same computational structure. (99)

      1. The Universal Law of Generalization (Shepard, 1987) states that any set of stimuli can be arranged in a representation space in such a manner that the likelihood that the subject using that space will generalize the label of one stimulus to another diminishes exponentially with their distance. (115) [See the sketch at the end of this section.]

    5. [C]ategorization tasks, in which the brain learns to produce discrete labels, given the data…is classification. [A]daptation tasks, in which the brain learns to generate graded percepts or responses, given the data…is regression. In each case, the real issue is how to extend the learned discrete labeling or graded output scheme to new inputs…; this is the problem of generalization. (118)

      1. [O]ne of [memory's] strongest points: regression to a schema, a process that lets the details slip, yet retains the gist that is usually a good statistical summary of the stimulus. (214)

        1. [R]egression to a schema is…a very common characteristic of cognition. (234)

      2. [T]he basic nature of the problem of statistical learning: computationally, both classification and regression are underdetermined. (119)

        1. [P]erception of distal entities (objects or events)² can be considered veridical (truthful) with respect to categorization insofar as the natural similarities defined over those entities are reflected in the layout of the internal representation space. (123)

        2. [T]he perception of similarities among distal objects will be veridical if the mapping from the visual world to the representation space where the perceived similarities are to be judged is smooth…. (127)

    6. To see is to form a representation of what you're looking at in terms of similarities to what you've seen on other occasions. (142)

      1. [T]o see is to have access to an interpretable activation pattern of an ensemble of tuned units responding to the stimulus, each bearing information about both the shape and the location of some portion of it. (145)

    7. [T]he brain learns concepts by acquiring highly structured perceptual "snapshots" that characterize the situation at hand. (188)
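
    [A minimal sketch, not from Shepard's or Edelman's text, of the Universal Law of Generalization quoted in item 4 above: the likelihood of extending a learned label falls off exponentially with distance in the representation space, g(d) = exp(-k*d); the decay constant k is an illustrative free parameter.]

        import math

        def generalization(distance, k=1.0):
            """Likelihood of extending a learned label across a given distance
            in the representation space (exponential decay)."""
            return math.exp(-k * distance)

        for d in (0.0, 0.5, 1.0, 2.0, 4.0):
            print(f"distance {d:3.1f}  ->  generalization {generalization(d):.3f}")
        # Nearby stimuli almost always inherit the label; remote ones almost never do.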


  7. Language Learning:

    1. [T]here is a continuity on all levels between language and the rest of cognition…it has evolved like any other biological trait, operates on the same computational principles, and is implemented by the same brain mechanism as other cognitive functions…. (285–286)

    2. The process whereby the cognitive system makes objects out of features (or larger objects out of smaller ones…) is called unitization. (251) [see hierarchical abstraction, above]

    3. Along with unitization, the two computational operations…, alignment and comparison, are the key tools of grammar induction: candidate structures found by aligning and comparing strings at a certain level of representation become units that participate in structure discovery at the next higher level. (253) [See the sketch at the end of this section.]

    4. [I]nstead of postulating an infinitely productive generative grammar limited externally by independent factors such as short-term memory capacity, it is more parsimonious to assume that the grammar itself is limited…by being inherently probabilistic (and thereby joining all other cognitive faculties). (270)

    5. [L]anguage is like a subway transit system. (273–275)

        Lexicon-grammar: the set of stations.
        Grammatical utterances: possible trips.
        Learning: constructing the transit system.
        Probabilistic annotation: traffic flow.
        Structure vs. statistics: routes vs. traffic.
        Fluency and disfluency: navigation proficiency.
        Meaning: the outside world.
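
    [A minimal sketch, not from Edelman's text, of unitization, alignment, and comparison on a toy corpus of pre-aligned, equal-length strings: position-by-position comparison yields a shared frame plus a set of interchangeable words, which then serves as a new unit at the next level. The corpus and the label <N> are illustrative only, and real grammar induction must also discover the alignment itself.]

        # Toy corpus: aligning and comparing these strings reveals a shared frame
        # ("the __ ran") and a set of interchangeable units.
        corpus = ["the dog ran", "the cat ran", "the horse ran"]
        tokenized = [sentence.split() for sentence in corpus]

        frame, unit = [], None
        for position in zip(*tokenized):       # compare the corpus position by position
            if len(set(position)) == 1:        # same word everywhere: part of the frame
                frame.append(position[0])
            else:                              # variation: candidates for a new unit
                frame.append("<N>")
                unit = sorted(set(position))

        print(frame)   # -> ['the', '<N>', 'ran']
        print(unit)    # -> ['cat', 'dog', 'horse']
        # The new unit <N> = {cat, dog, horse} can now participate in structure
        # discovery at the next higher level (e.g., "the <N> chased the <N>").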


  8. Thinking and Problem Solving:

    1. What gets you through life is, in a nutshell, this: an ability to figure out what to do next, by thinking. (315)

      1. [T]he one functional need that any cognitive system must address is figuring out what to do next. (420)

    2. If intelligent behavior[,] which humans often exhibit in problem situations that require thinking[,] is to be explained without resort to miracles or a homunculus, it must be ultimately reduced to steps that are themselves devoid of intelligence—such as elementary instructions for a Turing Machine…. (321)

    3. [A]cquisition, comparison, and generalization of patterns. A system that can carry out these elementary computations in any [cognitive] domain, while exerting executive control as needed, is well-equipped to face all the tasks that require intelligence [such as perception, memory, language, and analogy]. (372–373)

      1. [The] general computational problem of executive control: balancing responsiveness to the flow of information from the senses and from memory with adherence to one's own goals and desires. (403)

    4. [C]reative ideas come from…playing variations on familiar themes (Hofstadter, 1985). (381–382)


  9. Consciousness:

    1. The unifying concept [for "a computational explanation…of elemental phenomenal experience, and of consciousness grounded in reflection and narrative"] will be that of access among representations—those maintained internally, as well as "that vast external memory, the external environment itself" (Reitman et al., 1978). (395)

      1. [A]ccess to information is the core computational issue in reflexive consciousness, just as it is in attention and in phenomenal awareness. (431)

    2. Using the…[following] tools makes short order of the hard problem of consciousness: it will take us precisely four steps to get from here to the key insight into the nature of the phenomenal Self. (418)

      1. The experienced reality is virtual.
      2. The experienced reality is a simulation of the world.
      3. The simulation is not recognized by the system as such.
      4. The part of the simulation that represents the system itself is special.
        (419)

    3. If I "gotta be…something"—and it indeed feels that way at all times when I am awake, even when I don't think—what kind of something is it? What else but a kind of computation (and thus a multiply realizable process rather than a thing)? That's what I am, and that's also what the "I" is. (418)


  10. On Free Will:

    1. [R]andomness is not a solution to the problem of reconciling free will with physics…. (465)

    2. I am compelled to choose, but what compels me…is me. (467)


  11. Summary:

    1. [O]ur integrated understanding of how the mind works… (487)

      1. Given that minds are what brains do, the cognitive states of the mind implemented by a brain are nothing else than activation patterns of the brain's neurons, each describable by a point in a multidimensional space, one dimension per neuron. … [M]inds, being instances of computation, are multiply realizable…. (487)

      2. [B]rains exude minds by dint of representing and computing reality. (498)

      3. A mind is an instance of computation. (499)

      4. [It is not merely the case that the brain works by "brain cells fir[ing] in patterns" [Pinker] and "You are nothing but a pack of neurons!" (Crick 1994), but:] You are nothing but an outbreak of computation! (500)


Notes:

  1. See also:

    1. Pinker, Steven (1997), How the Mind Works (New York: W.W. Norton).

      • Omits linguistics, because of the prior existence (1st edition, 1994) of:
        Pinker, Steven (2007), The Language Instinct, PS edition (New York: Harper Perennial Modern Classics).

      • Rapaport, William J. (2000), Review of Steven Pinker, How the Mind Works, in Minds and Machines 10(3): 381–389.

    2. Fodor, Jerry (2000), The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology (Cambridge, MA: MIT Press).


  2. "Because nothing is permanent, objects are merely slow events (Hurford, 2003)." (33) [back]



Text copyright © 2011 by William J. Rapaport (rapaport@buffalo.edu)
http://www.cse.buffalo.edu/~rapaport/575/S11/edelman-ideas.html-20110114