Last Update: 16 May 2011
Abstract: We present a partial computational theory of a cognitive agent's ability to acquire word meanings from natural-language contexts. The theory is implemented in SNePS, a knowledge-representation and reasoning system with facilities for parsing and generating fragments of English and for belief revision. We take a word's meaning, as understood by an agent, to be its relation to the rest of that agent's knowledge. However, because such knowledge is idiosyncratic and contains information not relevant to a conventional definition, we investigate means by which an agent can abstract such definitions from individual experience. We are not concerned with a prescriptively 'correct' definition of a word so much as with understanding it as it is used. We investigate which aspects of meaning are typically salient to defining common nouns and verbs, and how definitions are revised in successive encounters with a word. Previously unknown aspects of meaning may be added, and less salient information deleted as more salient information is acquired. Such deletion does not, however, imply deletion from the agent's beliefs. There are, nevertheless, occasions when an agent's understanding is erroneous or incomplete in a way that requires revision of a prior belief. If a word is used in a way that contradicts a previous meaning postulate, that postulate must be revised to render the agent's understanding of the narrative consistent with its other knowledge. We present computational methods for performing such revision. We present several demonstrations of our system learning word meanings through successive encounters while reading stories from Malory's Le Morte D'Arthur. These demonstrations are compared with protocols taken from two human readers who reasoned aloud while reading the same passages and attempted to define the target words from context. We also discuss the limitations of our current system and present ideas for its further development.
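The revision step described in this abstract can be illustrated with a minimal sketch. This is not the SNePS belief-revision machinery itself; the names (`Belief`, `revise`) and the entrenchment-based tie-breaking policy are illustrative assumptions, standing in for the idea that when a new usage of a word contradicts a stored meaning postulate, one of the two must be retracted to keep the agent's understanding of the narrative consistent:

```python
# Hypothetical sketch of contradiction-driven belief revision.
# When a new belief (from a fresh usage of a word) contradicts a
# stored meaning postulate, the less entrenched of the two is
# retracted so the belief set stays consistent.

from dataclasses import dataclass


@dataclass(frozen=True)
class Belief:
    content: str       # e.g. "a smiting always causes death"
    entrenchment: int  # higher = more strongly held


def revise(beliefs, new_belief, contradicts):
    """Add new_belief to the set; retract any contradicted belief
    that is less entrenched, otherwise reject the newcomer."""
    kept = set()
    accept = True
    for b in beliefs:
        if contradicts(b, new_belief):
            if b.entrenchment >= new_belief.entrenchment:
                accept = False   # the old postulate wins; keep it
                kept.add(b)
            # else: b is retracted (dropped from the kept set)
        else:
            kept.add(b)
    if accept:
        kept.add(new_belief)
    return kept
```

For example, if the agent had (weakly) concluded that smiting always kills, and the narrative then presents a smiting that does not, the weaker postulate is retracted and the revised one adopted.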
Potential applications of this work include education, computational lexicography, and cognitive science studies of narrative understanding.
Abstract: As part of an interdisciplinary project to develop a computational cognitive model of a reader of narrative text, we are developing a computational theory of how natural-language-understanding systems can automatically expand their vocabulary by determining from context the meaning of words that are unknown, misunderstood, or used in a new sense. `Context' includes surrounding text, grammatical information, and background knowledge, but no external sources. Our thesis is that the meaning of such a word can be determined from context, can be revised upon further encounters with the word, "converges" to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and eventually "settles down" to a "steady state" that is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS knowledge-representation and reasoning system. (The online document is a slightly modified version (containing the algorithms) of that which appears in the Proceedings.)
Abstract: As part of an interdisciplinary project to develop a computational cognitive model of a reader of narrative text, we are developing a computational theory of how natural-language-understanding systems can automatically acquire new vocabulary by determining from context the meaning of words that are unknown, misunderstood, or used in a new sense. `Context' includes surrounding text, grammatical information, and background knowledge, but no external sources. Our thesis is that the meaning of such a word can be determined from context, can be revised upon further encounters with the word, "converges" to a dictionary-like definition if enough context has been provided and there have been enough exposures to the word, and eventually "settles down" to a "steady state" that is always subject to revision upon further encounters with the word. The system is being implemented in the SNePS knowledge-representation and reasoning system.
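The thesis in the two abstracts above (a hypothesized meaning is revised on each encounter, "converges" given enough context, and "settles down" to a steady state) can be sketched as a simple fold over encounters. This is an illustrative toy, not the algorithm of the papers: real feature extraction from surrounding text, grammar, and background knowledge is assumed away, and each context is given directly as a set of semantic features.

```python
# Toy sketch of contextual vocabulary acquisition as iterated
# hypothesis revision. Each encounter with the unknown word supplies
# a set of features inferable from that context; the hypothesis
# keeps only features consistent with every context seen so far.

def refine(hypothesis, context_features):
    """Revise the current hypothesis against one encounter."""
    if hypothesis is None:              # first encounter with the word
        return set(context_features)
    return hypothesis & set(context_features)


def define_from_contexts(contexts):
    """Fold refine over successive encounters; report whether the
    definition reached a steady state (unchanged by the last one)."""
    hypothesis, steady = None, False
    for feats in contexts:
        new = refine(hypothesis, feats)
        steady = (new == hypothesis)    # no revision on this encounter
        hypothesis = new
    return hypothesis, steady
```

Note that, as in the abstracts, "steady" here never means final: a later encounter with a conflicting context would revise the hypothesis again.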
Abstract: We discuss a research project that develops and applies algorithms for computational contextual vocabulary acquisition (CVA): learning the meaning of unknown words from context. We try to unify a disparate literature on the topic of CVA from psychology, first- and second-language acquisition, and reading science, in order to help develop these algorithms. We use the knowledge gained from the computational CVA system to build an educational curriculum for enhancing students' abilities to use CVA strategies in their reading of science texts at the middle-school and college undergraduate levels. The knowledge gained from case studies of students using our CVA techniques feeds back into further development of our computational theory.
Abstract: A paraphrased version of our original document. Provides a curriculum for teaching CVA based on our computer algorithms and think-aloud protocol research, with a focus on reasoning about, or making connections between, information in the text and the reader's prior knowledge, making hypotheses about a word's meaning, weighing the evidence, and drawing conclusions.
Abstract: Natural languages differ from most formal languages in having a partial, rather than a total, semantic interpretation function; e.g., some noun phrases don't refer. The usual semantics for handling such noun phrases (e.g., Russell, Quine) require syntactic reform. The alternative presented here is semantic expansion, viz., enlarging the range of the interpretation function to make it total. A specific ontology based on Alexius Meinong's Theory of Objects, which can serve as domain of interpretation, is suggested, and related to the work of Hector-Neri Castañeda, Gottlob Frege, Jerrold J. Katz & Jerry Fodor, Terence Parsons, and Dana Scott.
Abstract: This essay considers what it means to understand natural language and whether a computer running an artificial-intelligence program designed to understand natural language does in fact do so. It is argued that a certain kind of semantics is needed to understand natural language, that this kind of semantics is mere symbol manipulation (i.e., syntax), and that, hence, it is available to AI systems. Recent arguments by Searle and Dretske to the effect that computers cannot understand natural language are discussed, and a prototype natural-language-understanding system is presented as an illustration.
Abstract: John Searle once said: "The Chinese room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." I say: "Yes". Stuart C. Shapiro has said: "Does that make any sense? Yes: Everything makes sense. The question is: What sense does it make?" This essay explores what sense it makes to say that syntax by itself is sufficient for semantics.
Abstract: What does it mean to understand language? John Searle once said: "The Chinese Room shows what we knew all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)." Elsewhere, I have argued "that (suitable) purely syntactic symbol-manipulation of a computational natural-language-understanding system's knowledge base suffices for it to understand natural language." The fundamental thesis of the present book is that understanding is recursive: "Semantic" understanding is a correspondence between two domains; a cognitive agent understands one of those domains in terms of an antecedently understood one. But how is that other domain understood? Recursively, in terms of yet another. But, since recursion needs a base case, there must be a domain that is not understood in terms of another. So, it must be understood in terms of itself. How? Syntactically! In syntactically understood domains, some elements are understood in terms of others. In the case of language, linguistic elements are understood in terms of non-linguistic ("conceptual") yet internal elements. Put briefly, bluntly, and a bit paradoxically, semantic understanding is syntactic understanding. Thus, any cognitive agent--human or computer--capable of syntax (symbol manipulation) is capable of understanding language.
The purpose of this book is to present arguments for this position, and to investigate its implications. Subsequent chapters discuss: models and semantic theories (with critical evaluations of work by Arturo Rosenblueth and Norbert Wiener, Brian Cantwell Smith, and Marx W. Wartofsky); the nature of "syntactic semantics" (including the relevance of Antonio Damasio's cognitive neuroscientific theories); conceptual-role semantics (with critical evaluations of work by Jerry Fodor and Ernest Lepore, Gilbert Harman, David Lewis, Barry Loewer, William G. Lycan, Timothy C. Potts, and Wilfrid Sellars); the role of negotiation in interpreting communicative acts (including evaluations of theories by Jerome Bruner and Patrick Henry Winston); Hilary Putnam's and Jerry Fodor's views of methodological solipsism; implementation and its relationships with such metaphysical concepts as individuation, instantiation, exemplification, reduction, and supervenience (with a study of Jaegwon Kim's theories); John Searle's Chinese-Room Argument and its relevance to understanding Helen Keller (and vice versa); and Herbert Terrace's theory of naming as a fundamental linguistic ability unique to humans.
Throughout, reference is made to an implemented computational theory of cognition: a computerized cognitive agent implemented in the SNePS knowledge-representation and reasoning system. SNePS is: symbolic (or "classical"; as opposed to connectionist), propositional (as opposed to being a taxonomic or "inheritance" hierarchy), and fully intensional (as opposed to (partly) extensional), with several types of interrelated inference and belief-revision mechanisms, sensing and effecting mechanisms, and the ability to make, reason about, and execute plans.
Abstract: What is the computational notion of "implementation"? It is not individuation, instantiation, reduction, or supervenience. It is, I suggest, semantic interpretation.
Abstract: A theory of "syntactic semantics" is advocated as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, as the study of relations between symbols and meanings, can be turned into syntax--a study of relations among symbols (including meanings)--and hence syntax can suffice for the semantical enterprise. (2) Semantics, as the process of understanding one domain modeled in terms of another, can be viewed recursively: The base case of semantic understanding--understanding a domain in terms of itself--is syntactic understanding. (3) An internal (or "narrow"), first-person point of view makes an external (or "wide"), third-person point of view otiose for purposes of understanding cognition.
Abstract: This essay continues my investigation of "syntactic semantics": the theory that, pace Searle's Chinese-Room Argument, syntax does suffice for semantics (in particular, for the semantics needed for a computational cognitive theory of natural-language understanding). Here, I argue that syntactic semantics (which is internal and first-person) is what has been called a conceptual-role semantics: The meaning of any expression is the role that it plays in the complete system of expressions. Such a "narrow", conceptual-role semantics is the appropriate sort of semantics to account (from an "internal", or first-person perspective) for how a cognitive agent understands language. Some have argued for the primacy of external, or "wide", semantics, while others have argued for a two-factor analysis. But, although two factors can be specified--one internal and first-person, the other only specifiable in an external, third-person way--only the internal, first-person one is needed for understanding how someone understands. A truth-conditional semantics can still be provided, but only from a third-person perspective.