Why is the study of language central to cognition? The answer lies in the key properties of language as they manifest themselves in the way speakers use it. The best way to get a sense of the centrality of language in understanding cognitive phenomena is through some examples. In the rest of this introduction I illustrate some features of language that display surprising regularities. Among the many ways in which an efficient communication code could be designed, natural languages seem to choose quite peculiar ones. The question is why. We consider some of the answers that modern linguistics gives to this question, which lead us into a scenic (if necessarily brief) tour of its main areas of inquiry. In particular, section 2 is devoted to language structure and its main articulations. Section 3 is devoted to language use, its interplay with language structure, and the various disciplines that deal with these matters. We then close, in section 4, with a few short remarks on the place of linguistics within cognitive science.
Languages are made of words. How many words do we know? This is something that can be estimated quite accurately (see Pinker 1994: 149 ff.). To set a baseline, consider that Shakespeare in his works uses roughly 15,000 different words. One would think that the vocabulary of, say, a high school student is considerably poorer. Instead, it turns out that a high school senior reliably understands roughly 45,000 words out of a lexicon of 88,500 unrelated words. It might be worth mentioning how one arrives at this estimate. One randomly samples the target corpus of words and performs simple comprehension tests on the sample. The results are then statistically projected to the whole corpus. Now, the size of the vocabulary of a high school senior entails that from when the child starts learning words at a few months of age until the age of eighteen, he or she must be learning roughly a word every hour and a half when awake. We are talking here of learning arbitrary associations of sound patterns with meanings. Compare this with the effort it takes to learn even a short poem by heart, or the names of a handful of basketball players. The contrast is striking. We get to understand 45,000 words with incomparably less effort, to the point of not even being aware of it. This makes no sense without the assumption that our mind must be especially equipped with something, a cognitive device of some sort, that makes us so successful at the task of learning words. This cognitive device must be quite specialized for such a task, as we are not as good at learning poems or the names of basketball players (cf. WORD MEANING, ACQUISITION OF).
The world of sounds that make up words is similarly complex. We all find the sounds of our native language easy to distinguish. For example, to a native English speaker the i-sounds in "leave" and "live" are clearly different. And unless that person is in especially unfavorable conditions, he or she will not take one for the other. To a native English speaker, the difficulty that an Italian learning English (as an adult) encounters in mastering such distinctions looks a bit mysterious. Italians take revenge when English speakers try to learn the contrast between words like "fato" 'fate' vs. "fatto" 'fact.' The only difference between them is that the t-sound in fatto sounds to the Italian speaker slightly longer or tenser, a contrast that is difficult for a speaker of English to master. These observations are quite commonplace. The important point, however, is that a child exposed to the speech sounds of any language picks them up effortlessly. The clicks of Zulu (sounds similar to the "tsk-tsk" of disapproval) or the implosive sounds of Sindhi, spoken in India and Pakistan (sounds produced by sucking in air, rather than ejecting it -- see ARTICULATION) are not harder for the child to acquire than the occlusives of English. Adults, in contrast, often fail to learn to produce sounds not in their native repertoire. Figuring out the banking laws or the foods of a different culture is generally much easier. One would like to understand why.
Behind its familiar, almost quaint everyday appearance, language hosts many remarkable regularities of the sort just illustrated. Here is yet another example, taken from a different domain, that of pronouns and ANAPHORA. Consider the following sentence:
What is common to these different aspects of language is the fact that our linguistic behavior reveals striking and complex regularities. This is true throughout the languages of the world. In fact the TYPOLOGY of the world's languages reveals significant universal tendencies. For example, the patterns of word order are quite limited. The most common basic orders of the major sentence constituents are subject-verb-object (abbreviated as SVO) and SOV. Patterns in which the object precedes the subject are quite rare. Another language universal one might mention is that all languages have ways of using clauses to modify nouns (as in "the boy that you just met," where the relative clause "that you just met" modifies the noun "boy"). Now structural properties of this sort are not only common to all known spoken languages but in fact can be found even in SIGN LANGUAGES, that is, visual-gestural languages typically in use in populations with impaired verbal abilities (e.g., the deaf). It seems plausible to maintain that universal tendencies in language are grounded in the way we are; this must be so, for speaking is a cognitive capacity, that capacity in virtue of which we say that we "know" our native language. We exercise such capacity in using language. A term often used in this connection is "linguistic competence." The way we put such competence to use in interacting with our environment and with each other is called "performance."
The need to hypothesize a linguistic competence can also be seen from another point of view. Language is a dynamic phenomenon, dynamic in many senses. It changes across time and space (cf. LANGUAGE VARIATION AND CHANGE). It varies along social and gender dimensions (cf. LANGUAGE AND GENDER; LANGUAGE AND CULTURE). It also varies in sometimes seemingly idiosyncratic ways from speaker to speaker. Another important aspect of the dynamic character of language is the fact that a speaker can produce and understand an indefinite number of sentences, while having finite cognitive resources (memory, attention span, etc.). How is this possible? We must assume that this happens by analogy with the way we, say, add two numbers we have never added before. We can do it because we have mastered a combinatorial device, an ALGORITHM. But the algorithm for adding we have learned through explicit training. The one for speaking appears to grow spontaneously in the child. Such an algorithm is constitutive of our linguistic competence.
The fact that linguistic competence does not develop through explicit training can be construed as an argument in favor of viewing it as a part of our genetic endowment (cf. INNATENESS OF LANGUAGE). This becomes all the more plausible if one considers how specialized the knowledge of a language is and how quickly it develops in the child. In a way, the child should be in a situation analogous to that of somebody trying to crack an unknown communication code. Such a code could in principle have very different features from those of a human language. It might lack a distinction between subjects and objects. Or it might lack the one between nouns and verbs. Many languages of practical use (e.g., many programming languages) are designed just that way. The range of possible communication systems is huge and highly differentiated. This is part of the reason why cracking a secret code is very hard -- as hard as learning an unfamiliar language as an adult. Yet the child does it without effort and without formal training. This seems hard to make sense of without assuming that, in some way, the child knows what to look for and knows what properties of natural speech he or she should attend to in order to figure out its grammar. This argument, based on the observation that language learning constitutes a specialized skill acquired quickly through minimal input, is known as the POVERTY OF THE STIMULUS ARGUMENT. It suggests that linguistic competence is a relatively autonomous computational device that is part of the biological endowment of humans and guides them through the acquisition of language. This is one of the planks of what has come to be known as GENERATIVE GRAMMAR, a research program started in the late 1950s by Noam Chomsky, which has proven to be quite successful and influential.
It might be useful to contrast this view with another one that a priori might be regarded as equally plausible (see CONNECTIONIST APPROACHES TO LANGUAGE). Humans seem to be endowed with a powerful all-purpose computational device that is very good at extracting regularities from the environment. Given that, one might hypothesize that language is learned the way we learn any kind of algorithm: through trial and error. All that language learning amounts to is simply applying our high-level computational apparatus to linguistic input. According to this view, the child acquires language much as she learns, say, to do division, the main difference being in the nature of the input. Learning division is of course riddled with all sorts of mistakes that the child goes through (typical ones involve keeping track of remainders, misprocessing partial results, etc.). Consider, in this connection, the pattern of pronominalization in sentences (1) through (4). If we learn languages the way we learn division, the child ought to make mistakes in figuring out what can act as the antecedent of a reflexive and what cannot. In recent years there has been extensive empirical investigation of the behavior of pronominal elements in child language (see BINDING THEORY; SYNTAX, ACQUISITION OF; SEMANTICS, ACQUISITION OF). And this is not what was found. The evidence goes in the opposite direction. As soon as reflexives and nonreflexive pronouns make their appearance in the child's speech, they appear to be used in an adult-like manner (cf. Crain and McKee 1985; Chien and Wexler 1990; Grodzinsky and Reinhart 1993).
Many of the ideas we find in generative grammar have antecedents throughout the history of thought (cf. LINGUISTICS, PHILOSOPHICAL ISSUES). One finds important debates on the "conventional" versus "natural" origins of language already among the pre-Socratic philosophers. And many ancient grammarians came up with quite sophisticated analyses of key phenomena. For example, the Indian grammarian Panini (fourth to third century B.C.) proposed an analysis of argument structure in terms of THEMATIC ROLES (like agent, patient, etc.), quite close in spirit to current proposals. The scientific study of language received a great impetus in the nineteenth century, when the historical links among the languages of the Indo-European family, at least in their general setup, were unraveled. A further fundamental development in our century was the structuralist approach, that is, the attempt to characterize in explicit terms language structure as it manifests itself in sound patterns and in distributional patterns. The structuralist movement started out in Europe, thanks to F. DE SAUSSURE and the Prague School (which included among its protagonists N. Trubetzkoy and R. JAKOBSON) and then developed, in somewhat different forms, in the United States through the work of L. BLOOMFIELD, E. SAPIR, Z. Harris (who was Chomsky's teacher), and others. Structuralism, besides leaving us with an accurate description of many important linguistic phenomena, constituted the breeding ground for a host of concepts (like "morpheme," "phoneme," etc.) that have been taken up and developed further within the generative tradition. It is against this general background that recent developments should be assessed.
Our linguistic competence is made up of several components (or "modules," see MODULARITY AND LANGUAGE) that reflect the various facets of language, going from speech sounds to meaning. In this section we will review the main ones in a necessarily highly abbreviated form. Language can be thought of as a LEXICON and a combinatorial apparatus. The lexicon is constituted by the inventory of words (or morphemes) through which sentences and phrases are built up. The combinatorial apparatus is the set of rules and principles that enable us to put words together in well-formed strings, and to pronounce and interpret such strings. What we will see, as we go through the main branches of linguistics, is how the combinatorial machinery operates throughout the various components of grammar. Meanwhile, here is a rough road map of the major modules that deal with language structure.
We already saw that the number of words we know is quite remarkable. But what do we mean by a "word"? Consider the verb "walk" and its past tense "walked." Are these two different words? And how about "walk" versus "walker"? We can clearly detect some inner regular components to words like "walked," namely the stem "walk" (which is identical to the infinitival form) and the ending "-ed," which signals "past." These components are called "morphemes"; they constitute the smallest elements with an identifiable meaning we can recognize in a word. The internal structure of words is the object of the branch of linguistics known as MORPHOLOGY. Just as sentences are formed by putting words together, so words themselves are formed by putting together morphemes. Within the word, that is, as well as between words, we see a combinatorial machinery at work. English has a fairly simple morphological structure. Languages like Chinese have even greater morphological simplicity, while languages like Turkish or Japanese have a very rich morphological structure. POLYSYNTHETIC LANGUAGES are perhaps the most extreme cases of morphological complexity. The following, for example, is a single word of Mohawk, a polysynthetic North American Indian language (Baker 1996: 22):
Another aspect of morphology is compounding, which enables one to form complex words by "glomming" existing words together. This strategy is quite productive in English, for example, blackboard, blackboard design, blackboard design school, and so on. Compounds can be distinguished from phrases on the basis of a variety of converging criteria. For example, the main stress in a compound like "blackboard" is on "black," while in the phrase "black board" it is on "board" (cf. STRESS, LINGUISTIC; METER AND POETRY). Moreover, syntax treats compounds as units that cannot be separated by syntactic rules. Through morphological derivation and compounding the structure of the lexicon becomes quite rich.
So what is a word? At one level, it is what is stored in our mental lexicon and has to be memorized as such (a listeme). This is the sense in which we know 45,000 (unrelated) words. At another, it is what enters as a unit into syntactic processes. In this second sense (but not in the first) "walk" and "walked" count as two words. Words are formed by composing together smaller meaningful units (the morphemes) through specific rules and principles.
Morphemes are, in turn, constituted by sound units. Actually, speech forms a continuum not immediately analyzable into discrete units. When exposed to an unfamiliar language, we cannot tell where, for example, the word boundaries are, and we have difficulty in identifying the sounds that are not in our native inventory. Yet speakers classify their speech sound stream into units, the phonemes. PHONETICS studies speech sounds from an acoustic and articulatory point of view. Among other things, it provides an alphabet to notate all of the sounds of the world's languages. PHONOLOGY studies how the range of speech sounds is exploited by the grammars of different languages, and the universal laws of the grammar of sounds. For example, we know from phonetics that back vowels (produced by lifting the rear of the tongue towards the palate) can be rounded (as in "hot") or unrounded (as in "but") and that this is so also for front vowels (produced by lifting the tongue toward the front of the vocal tract). The i-sound in "feet" is a high, front, unrounded vowel; the sound of the corresponding German word "Füsse" is also pronounced by raising the tongue towards the front, but is rounded. If a language has rounded front vowels it also has rounded back vowels. To illustrate, Italian has back rounded vowels, but lacks altogether unrounded back vowels. English has both rounded and unrounded back vowels. Both English and Italian lack front rounded vowels. German and French, in contrast, have them. But there is no language that has in its sound inventory front rounded vowels without also having back rounded ones. This is the form that constraints on possible systems of phonemes often take.
As noted in section 1, the type of sounds one finds in the world's languages appear to be very varied. Some languages may have relatively small sound inventories constituted by a dozen phonemes (as, for example, Polynesian); others have quite large ones with about 140 units (Khoisan). And there are of course intermediate cases. One of the most important linguistic discoveries of this century has been that all of the wide variety of phonemes we observe can be described in terms of a small universal set of DISTINCTIVE FEATURES (i.e., properties like "front," "rounded," "voiced," etc.). For example, /p/ and /b/ (bilabial stops) have the same feature composition except for the fact that the former is voiceless (produced without vibration of the vocal cords) while the latter is voiced. By the same token, the phoneme /k/, as in "bake," and the final sound of the German word "Bach" are alike, except in one feature. In the former the air flux is completely interrupted (the sound is a stop) by lifting the back of the tongue up to the rear of the palate, while in the latter a small passage is left which results in a turbulent continuous sound (a fricative, notated in the phonetic alphabet as /x/). So all phonemes can be analyzed as feature structures.
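The idea that phonemes are feature structures can be made concrete with a small sketch. The feature names and values below are illustrative simplifications (a hypothetical three-feature system), not a full phonological analysis:

```python
# A sketch of phonemes as distinctive-feature bundles.
# The feature inventory here is illustrative, not a complete analysis.
phonemes = {
    "p": {"place": "bilabial", "manner": "stop",      "voiced": False},
    "b": {"place": "bilabial", "manner": "stop",      "voiced": True},
    "k": {"place": "velar",    "manner": "stop",      "voiced": False},
    "x": {"place": "velar",    "manner": "fricative", "voiced": False},
}

def differing_features(a, b):
    """Return the features on which two phonemes disagree."""
    return {f for f in phonemes[a] if phonemes[a][f] != phonemes[b][f]}

# /p/ and /b/ differ only in voicing; /k/ and /x/ only in manner.
print(differing_features("p", "b"))  # {'voiced'}
print(differing_features("k", "x"))  # {'manner'}
```

On this view a phoneme is nothing over and above its feature bundle, so "minimal pairs" of sounds fall out as bundles that disagree in exactly one feature.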
There is also evidence that features are not just a convenient way to classify phonemes but are actually part of the implicit knowledge that speakers have of their language. One famous experiment that provides evidence of this kind has to do with English plurals. In simplified terms, plurals are formed by adding a voiced alveolar fricative /z/ after a voiced sound (e.g., fad[z]) and its voiceless counterpart /s/ after a voiceless one (e.g., fat[s]). This is a form of assimilation, a very common phonological process (see PHONOLOGICAL RULES AND PROCESSES). If a monolingual English speaker is asked to form the plural of a word ending in a phoneme that is not part of his or her native inventory and has never been encountered before, that speaker will follow the rule just described; for example, the plural of the word "Bach" will be [baxs], not [baxz]. This means that in forming the plural speakers are actually accessing the featural makeup of the phonemes and analyzing phonemes into voiced versus voiceless sets. They have not just memorized after which sounds /s/ goes and after which /z/ goes (see Akmajian et al. 1990: chapter 3 and references therein).
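The simplified rule just described is algorithmic enough to be written down directly. The sketch below encodes only the voiced/voiceless split; the voiceless set is deliberately tiny and illustrative, and the further rule for sibilant-final stems (the [iz] of "buses") is omitted, as in the text:

```python
# A minimal sketch of the simplified English plural rule from the text:
# add /z/ after a voiced final segment, /s/ after a voiceless one.
# VOICELESS is an illustrative fragment of the real voiceless set;
# /x/ stands for the final fricative of German "Bach".
VOICELESS = {"p", "t", "k", "f", "x"}

def pluralize(stem):
    """Return the stem with its plural suffix, bracketed as in the text."""
    suffix = "s" if stem[-1] in VOICELESS else "z"
    return stem + "[" + suffix + "]"

print(pluralize("fad"))  # fad[z]
print(pluralize("fat"))  # fat[s]
# The rule generalizes to a phoneme outside the native inventory:
print(pluralize("bax"))  # bax[s], matching speakers' [baxs], not [baxz]
```

The point of the experiment is precisely that speakers behave like this rule, which consults the voicing feature of the final segment, and not like a memorized list of which suffix follows which sound.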
Thus we see that even within sound units we find smaller elements, the distinctive features, combined according to certain principles. Features, organized in phonemes, are manipulated by rule systems. Phonemes are in turn structured into larger prosodic constituents (see PROSODY AND INTONATION), which constitute the domains over which stress and TONE are determined. On the whole we see that the world of speech sounds is extremely rich in structure, and its study has reached a level of remarkable theoretical sophistication (for recent important developments, see OPTIMALITY THEORY).
The area where we perhaps most clearly see the power of the combinatorial machinery that operates in language is SYNTAX, the study of how words are composed into phrases. In constructing sentences, we don't merely put words into certain sequences, we actually build up a structure. Here is a simple illustration.
English is an SVO language, whose basic word order in simple sentences is the one in (6a).
Alternative orders, such as those in (6b-c), are ungrammatical in English. They are grammatical in other languages; thus (6b'), the word-by-word Italian translation of (6b), is grammatical in Italian; and so is (6c'), the Japanese translation of (6c). A priori, the words in (6a) could be put together in a number of different ways, which can be represented by the following tree diagrams:
The structure in (7a) simply says that "Kim," "Lee," and "saw" are put together all at once and that one cannot recognize any subunit within the clause. Structure (7b) says that there is a subunit within the clause constituted by the subject plus the verb; (7c) that the phrasing actually puts together the verb plus the object. The right analysis for English turns out to be (7c), where the verb and the object form a unit, a constituent called the verb phrase (VP), whose "center," or, in technical terms, whose "head," is the verb. Interestingly, such an analysis turns out to be right also for Japanese and Italian, and, it seems, universally. In all languages, the verb and the object form a unit. There are various ways of seeing that it must be so. A simple one is the following: languages have proforms, that is, elements that lack an inherent meaning and get their semantic value from a linguistic antecedent (or, in some cases, the extralinguistic context). Personal pronouns like "he" or "him" are a typical example:
Notice that English does not have a proform that stands for the subject plus a transitive verb. There is no construction of the following sort:
A particularly interesting case is constituted by VSO languages, such as Irish, Breton, and many African languages. Here is an Irish example (Chung and McCloskey 1987: 218):
Ni olan se bainne ariamh
Neg drink-PRES. he milk ever
'He never drinks milk.'
The process through which (11) is derived is called HEAD MOVEMENT and is analogous to what one observes in English alternations of the following kind:
Summing up, there is evidence that in sentences like (6a) the verb and the object are tied together by an invisible knot. This abstract structure in constituents manifests itself in a number of phenomena, of which we have discussed one: the existence of VP proforms, in contrast with the absence of subject+verb proforms. The latter appears to be a universal property of languages and constitutes evidence in favor of the universality of the VP. Along the way, we have also seen how languages can vary and what mechanisms can be responsible for such variations (cf. X-BAR THEORY). Generally speaking, words are put together into larger phrases by a computational device that builds up structures on the basis of relatively simple principles (like: "put a head next to its complement" or "move a head to the front of the clause"). Aspects of this computational device are universal and are responsible for the general architecture that all languages share; others can vary (in a limited way) and are responsible for the final form of particular languages.
There is converging evidence that confirms the psychological reality of constituent structure, that is, the idea that speakers unconsciously assign a structure in constituents to sequences of words. A famous case that shows this is a series of experiments known as the "click" experiments (cf. Fodor, Bever, and Garrett 1974). In these experiments, subjects were presented with a sentence through a headphone. At some stage during this process a click sound was produced in the headphone, and subjects were then asked at which point of the presentation the click occurred. If the click occurred at a major constituent break (such as the one between the subject and the VP), the subjects were accurate in recalling when it occurred. If, however, the click occurred within a constituent, subjects would make systematic mistakes in recalling the event. They would overwhelmingly displace the click to the closest constituent break. This behavior would be hard to explain if constituent structure were not actually computed by subjects in processing a sentence (see Clark and Clark 1977 for further discussion).
Thus, looking at the syntax of languages we discover a rich structure that reveals fundamental properties of the computational device that the speaker must be endowed with in order to be able to speak (and understand). There are significant disagreements as to the specifics of how these computational devices are structured. Some frameworks for syntactic analysis (e.g., CATEGORIAL GRAMMAR; HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR; LEXICAL FUNCTIONAL GRAMMAR) emphasize the role of the lexicon in driving syntactic computations. Others, like MINIMALISM, put their emphasis on the economical design of the principles governing how sentences are built up (see also OPTIMALITY THEORY). Other kinds of disagreement concern the choice of primitives (e.g., RELATIONAL GRAMMAR and COGNITIVE LINGUISTICS). In spite of the liveliness of the debate and of the range of controversy, most, maybe all, of these frameworks share a great deal. For one thing, key empirical generalizations and discoveries can be translated from one framework to the next. For example, all frameworks encode a notion of constituency and ways of fleshing out the notion of "relation at a distance" (such as the one we have described above as head movement). All frameworks assign to grammar a universal structural core and dimensions along which particular languages may vary. Finally, all major modern frameworks share certain basic methodological tenets of formal explicitness, aimed at providing mathematical models of grammar (cf. FORMAL GRAMMARS).
Syntax interacts directly with all other major components of grammar. First, it draws from the lexicon the words to be put into phrases. The lexical properties of words (e.g., whether they are verbs or nouns, whether and how many complements they need, etc.) will affect the kind of syntactic structures that a particular selection of words can enter into. For example, a sentence like "John cries Bill" is ungrammatical because "cry" is intransitive and takes no complement. Second, syntax feeds into phonology. At some point in the syntactic derivation we get the words in the order in which we want to pronounce them. And third, syntax provides the input to semantic interpretation.
To illustrate these interfaces further, consider the following set of sentences:
In rough terms, in Chinese one utters the sentence in its basic form (which is semantically ambiguous -- see AMBIGUITY), then one does scoping mentally. In English, one first applies scoping (i.e., one marks what is being asked), then utters the result. This way of looking at things enables us to see question formation in languages as diverse as English and Chinese in terms of a uniform mechanism. The only difference lies in the level at which scoping applies. Scope marking takes place overtly in English (i.e., before the chosen sequence of words is pronounced). In Chinese, by contrast, it takes place covertly (i.e., after having pronounced the base form). This is why sentence (16) is ambiguous in Chinese.
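The overt/covert contrast can be caricatured in a few lines of code. Everything below is a schematic toy, under the assumption that "scoping" is simply fronting the wh-element; the words and representations are placeholders, not real grammar:

```python
# A toy illustration of overt versus covert scoping.
# "Movement" is modeled as fronting the wh-element in a list of words.

def scope(words, wh):
    """Front the wh-element: the scope-marking movement."""
    return [wh] + [w for w in words if w != wh]

base = ["you", "bought", "what"]  # schematic base form, wh-element in situ

# English-style (overt): scoping applies before pronunciation.
pronounced_english = scope(base, "what")  # ['what', 'you', 'bought']

# Chinese-style (covert): the base form is pronounced as is,
# and scoping applies afterwards, in the mental computation only.
pronounced_chinese = list(base)           # ['you', 'bought', 'what']
logical_form = scope(base, "what")        # the same scoped form in both

print(pronounced_english, pronounced_chinese, logical_form)
```

The two "languages" differ only in whether the scoped form or the base form is what gets pronounced; the scoped form that feeds interpretation is identical.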
There are other elements that need to be assigned a scope in order to be interpreted. A prime case is constituted by quantified NPs like "a student" or "every advisor" (see QUANTIFIERS). Consider (19):
Now we have just seen that natural language marks scope in questions by overt or covert movement. If we assume that this is the strategy generally made available to us by grammar, then we are led to conclude that also in cases like (19) scope must be marked via movement. That is, in order to interpret (19), we must determine the scope of the quantifiers by putting them at the beginning of the clause they operate on. For (19), this can be done in two ways:
Both (22a) and (22b) are obtained out of (19). In (22a) we move "a new student" over "every advisor." In (22b) we do the opposite. These structures correspond to the interpretations in (21a) and (21b), respectively. In a more standard logical notation, they would be expressed as follows:
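Writing R schematically for the relation expressed by the verb of (19) (the formulas abstract away from tense and the verb's lexical content), the two readings come out roughly as:

```latex
% Wide scope for "a new student" (the reading of (22a)):
\exists x\,[\mathit{new\mbox{-}student}(x) \land
            \forall y\,[\mathit{advisor}(y) \rightarrow R(x,y)]]

% Wide scope for "every advisor" (the reading of (22b)):
\forall y\,[\mathit{advisor}(y) \rightarrow
            \exists x\,[\mathit{new\mbox{-}student}(x) \land R(x,y)]]
```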
So in the interpretation of sentences with quantified NPs, we apply scoping to such NPs. Scoping of quantifiers in English is a covert movement, part of the mental computation of MEANING, much like scoping of wh-words in Chinese. The result of scoping (i.e., the structures in (22), which are isomorphic to the corresponding logical notation) is what gets semantically interpreted and is called LOGICAL FORM.
What I just sketched in very rough terms constitutes one of several views currently being pursued. Much work has been devoted to the study of scope phenomena, in several frameworks. Such study has led to a considerable body of novel empirical generalizations. Some important principles that govern the behavior of scope in natural language have been identified (though we are far from a definitive understanding). Phenomena related to scope play an important role at the SYNTAX-SEMANTICS INTERFACE. In particular, according to the hypothesis sketched previously, surface syntactic representations are mapped onto an abstract syntactic structure as a first step toward being interpreted. Such an abstract structure, logical form, provides an explicit representation of scope, anaphoric links, and the relevant lexical information. These are all key factors in determining meaning. The hypothesis of a logical form onto which syntactic structure is mapped fits well with the idea that we are endowed with a LANGUAGE OF THOUGHT, as our main medium for storing and retrieving information, reasoning, and so on. The reason why this is so is fairly apparent. Empirical features of languages lead linguists to detect the existence of a covert level of representation with the properties that the proponents of the language of thought hypothesis have argued for on the basis of independent considerations. It is highly tempting to speculate that logical form actually is the language of thought. This idea needs, of course, to be fleshed out much more. I put it forth here in this "naive" form as an illustration of the potential of interaction between linguistics and other disciplines that deal with cognition.
What is meaning? What is it to interpret a symbolic structure of some kind? This is one of the hardest questions in the whole history of thought, and it lies right at the center of the study of cognition. The particular form it takes within the picture we have so far is: How is logical form interpreted? A consideration that constrains the range of possible answers to these questions is that our knowledge of meaning enables us to interpret an indefinite number of sentences, including ones we have never encountered before. To explain this we must assume, it seems, that the interpretation procedure is compositional (see COMPOSITIONALITY). Given the syntactic structure to be interpreted, we start out by retrieving the meaning of words (or morphemes). Because the core of the lexicon is finite, we can memorize and store the meaning of the lexical entries. Then each mode of composing words together into phrases (i.e., each configuration in a syntactic analysis tree) corresponds to a mode of composing meanings. Thus, cycling through the syntactic structure we arrive eventually at the meaning of the sentence. In general, meanings of complex structures are composed by putting together word (or morpheme) meanings through a finite set of semantic operations that are systematically linked to syntactic configurations. This accounts, in principle, for our capacity to understand a potential infinity of sentences, in spite of the limits of our cognitive capacities.
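A compositional procedure of this kind can be sketched in miniature. The lexicon and the single composition rule below are invented toy assumptions (an extensional model in which a VP meaning is the set of individuals it is true of), not a serious semantic theory:

```python
# A minimal sketch of compositional interpretation.
# Toy extensional model: names denote individuals, VPs denote sets.
lexicon = {
    "Pavarotti": "pavarotti",
    "Domingo": "domingo",
    "sings": {"pavarotti", "domingo"},
    "whistles": {"domingo"},
}

def interpret(tree):
    """Interpret a tree: either a word, or an (NP, VP) pair."""
    if isinstance(tree, str):
        return lexicon[tree]  # lexical meanings are simply retrieved
    subject, predicate = tree
    # One composition rule, keyed to the [NP VP] configuration:
    # the sentence is true iff the subject's denotation is in the VP's set.
    return interpret(subject) in interpret(predicate)

print(interpret(("Pavarotti", "sings")))     # True
print(interpret(("Pavarotti", "whistles")))  # False
```

Because `interpret` recurses on structure, the same finite lexicon and rule interpret indefinitely many (toy) sentences, which is the point of compositionality.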
Figuring out what operations we use for putting together word meanings is one of the main tasks of SEMANTICS. To address it, one must say what the output of such operations is. For example, what is it that we get when we compose the meaning of the NP "Pavarotti" with the meaning of the VP "sings 'La Boheme' well"? More generally, what is the meaning of complex phrases and, in particular, what is the meaning of clauses? Although there is disagreement here (as on other important topics) about the ultimate correct answer, there is agreement on what it is that such an answer must afford us. In particular, to have the information that Pavarotti sings "La Boheme" well is to have also the following kind of information:
The relation between a sentence like "Pavarotti sings well" and "someone sings well" (or any of the sentences in ), is called "entailment". Its standard definition involves the concept of truth: A sentence A entails a sentence B if and only if whenever A is true, then B must also be true. This means that if we understand under what conditions a sentence is true, we also understand what its entailments are. Considerations such as these have led to a program of semantic analysis based on truth conditions. The task of the semantic component of grammar is viewed as that of recursively spelling out the truth conditions of sentences (via their logical form). The truth conditions of simple sentences like "Pavarotti sings" are given in terms of the reference of the words involved (cf. REFERENCE, THEORIES OF). Thus "Pavarotti sings" is true (at a certain moment t) if Pavarotti is in fact the agent of an action of singing (at t). Truth conditions of complex sentences (like "Pavarotti sings or Domingo sings") involve figuring out the contributions to truth conditions of words like "or." According to this program, giving the semantics of the logical form of natural language sentences is closely related to the way we figure out the semantics of any logical system.
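The truth-conditional definition of entailment can be checked mechanically on a toy model. In the sketch below, "situations" are simply the possible sets of individuals who sing well; the three-person domain and the two sentence meanings are illustrative assumptions.

```python
# A hedged sketch of the truth-conditional definition of entailment:
# A entails B iff B is true in every situation in which A is true.
# Situations are modeled, for illustration only, as the possible
# sets of individuals who sing well.
from itertools import chain, combinations

INDIVIDUALS = ["pavarotti", "domingo", "carreras"]

def situations():
    """Every possible assignment of the property 'sings well'."""
    return (set(c) for c in chain.from_iterable(
        combinations(INDIVIDUALS, n) for n in range(len(INDIVIDUALS) + 1)))

def entails(a, b):
    """a, b: sentence meanings, i.e., functions from situations to truth values."""
    return all(b(s) for s in situations() if a(s))

pavarotti_sings_well = lambda s: "pavarotti" in s
someone_sings_well   = lambda s: len(s) > 0

print(entails(pavarotti_sings_well, someone_sings_well))  # True
print(entails(someone_sings_well, pavarotti_sings_well))  # False
```

The asymmetry of the two checks is exactly the asymmetry of entailment: every situation verifying "Pavarotti sings well" verifies "someone sings well," but not conversely.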
Entailment, though not the only important kind of semantic relation, is certainly at the heart of a network of key phenomena. Consider for example the following pair:
The relevance of entailment for natural language is one of the main discoveries of modern semantics. I will illustrate it in what follows with one famous example, having to do with the distributional properties of words like "any" (cf. Ladusaw 1979, 1992 and references therein). A word like "any" has two main uses. The first is exemplified in (27a):
The main puzzle in the behavior of words like "any" is understanding what exactly constitutes a "negative" context. Consider for example the following set of sentences:
Surely no one explicitly taught us these facts. No one taught us that "any" is acceptable within an NP headed by "every," but not within a VP of which an "every"-headed NP is subject. Yet we come to have convergent intuitions on these matters. Again, something in our mental endowment must be responsible for such judgments. What is peculiar to the case at hand is that the overt distribution of a class of morphemes like "any" appears to be sensitive to the entailment properties of their context. In particular, it appears to be sensitive to a specific logical property, that of licensing inferences from sets to subsets, which "no," "at most n," and "every" share with sentential negation. It is worth noting that most languages have negative polarity items, and their properties tend to be the same as those of "any," with minimal variations (corresponding to degrees of "strength" of negativity). This illustrates how there are specific architectural features of grammar that cannot be accounted for without a semantic theory of entailment for natural language. And it is difficult to see how to build such a theory without resorting to a compositional assignment of truth conditions to syntactic structures (or something that enables one to derive the same effects -- cf. DYNAMIC SEMANTICS). The case of negative polarity is by no means isolated. Many other phenomena could be used to illustrate this point (e.g., FOCUS; TENSE AND ASPECT). But the illustration just given will have to suffice for our present purposes. It is an old idea that we understand each other because our language, in spite of its VAGUENESS, has a logic. Now this idea is no longer just an intriguing hypothesis. The question on the table is no longer whether this is true. The question is what the exact syntactic and semantic properties of this logic are.
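The sets-to-subsets property itself can be stated and verified mechanically. The sketch below, loosely in the spirit of Ladusaw's proposal, models determiners as relations between a restrictor set and a scope set over a small domain; the domain size and the inventory of determiner meanings are illustrative assumptions.

```python
# A sketch of the set-to-subset (downward-entailing) property that,
# on Ladusaw's account, licenses "any". Determiners are modeled as
# relations between a restrictor set (the NP) and a scope set (the VP);
# the four-element domain and the determiner inventory are assumptions
# made for this illustration.
from itertools import chain, combinations

DOMAIN = frozenset(range(4))

def subsets(s):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(s), n) for n in range(len(s) + 1))]

# Determiner meanings: det(restrictor, scope) -> truth value
no    = lambda r, s: not (r & s)
every = lambda r, s: r <= s
some  = lambda r, s: bool(r & s)

def downward_entailing_in_scope(det):
    """True iff det(R, S) guarantees det(R, S') for every subset S' of S."""
    return all(det(r, s2)
               for r in subsets(DOMAIN) for s in subsets(DOMAIN)
               if det(r, s)
               for s2 in subsets(s))

def downward_entailing_in_restrictor(det):
    """True iff det(R, S) guarantees det(R', S) for every subset R' of R."""
    return all(det(r2, s)
               for r in subsets(DOMAIN) for s in subsets(DOMAIN)
               if det(r, s)
               for r2 in subsets(r))

print(downward_entailing_in_scope(no))         # True:  "No one ate" entails "No one ate fish"
print(downward_entailing_in_scope(every))      # False: hence no "any" in every's VP
print(downward_entailing_in_scope(some))       # False
print(downward_entailing_in_restrictor(every)) # True:  hence "any" inside every's NP
```

On this toy check, "no" licenses inferences from sets to subsets in its scope, and "every" does so in its restrictor but not in its scope, mirroring the distribution of "any" described above.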
Ultimately, the goal of a theory of language is to explain how language is used in concrete communicative situations. So far we have formulated the hypothesis that at the basis of linguistic behavior there is a competence constituted by blocks of rules or systems of principles, responsible for sound structure, morphological structure, and so on. Each block constitutes a major module of our linguistic competence, which can in turn be articulated into further submodules. These rule systems are then put to use by the speakers in speech acts. In doing so, the linguistic systems interact in complex ways with other aspects of our cognitive apparatus as well as with features of the environment. We now turn to a consideration of these dimensions.
The study of the interaction of grammar with the CONTEXT of use is called PRAGMATICS. Pragmatics looks at sentences within both the extralinguistic situation and the DISCOURSE of which they are part. For example, one aspect of pragmatics is the study of INDEXICALS AND DEMONSTRATIVES (like "I," "here," "now," etc.), whose meaning is fixed by the grammar but whose reference varies with the context. Another important area is the study of PRESUPPOSITION, that is, of what is taken for granted in uttering a sentence. Consider the difference between (37a) and (37b):
Yet another aspect of pragmatics is the study of how we virtually always go beyond what is literally said. In ordinary conversational exchanges, one and the same sentence, for example, "the dog is outside," can acquire the illocutionary force of a command ("go get it"), of a request ("can you bring it in?"), of an insult ("you are a servant; do your duty"), or can assume all sorts of metaphorical or ironical colorings, and so on, depending on what the situation is, what is known to the illocutionary agents, and so on. A breakthrough in the study of these phenomena is due to the work of P. GRICE. Grice put on solid ground the commonsense distinction between literal meaning, that is, the interpretation we assign to sentences in virtue of rules of grammar and linguistic conventions, and what is conveyed or implicated, as Grice puts it, beyond the literal meaning. Grice developed a theory of IMPLICATURE based on the idea that in our use of grammar we are guided by certain general conversational norms to which we spontaneously tend to conform. Such norms instruct us to be cooperative, truthful, orderly, and relevant (cf. RELEVANCE AND RELEVANCE THEORY). These are norms that can be ignored or even flouted. By systematically exploiting both the norms and their violations, thanks to the interaction of literal meaning and mutually shared information present in the context, the speaker can put the hearer in a position to infer his communicative intentions (i.e., what is implicated). Some aspects of pragmatics (e.g., the study of deixis or presupposition) appear to involve grammar-specific rule systems; others, such as implicature, involve more general cognitive abilities. All of them appear to be rule governed.
Use of language is an important factor in language variation. Certain forms of variation tend to be a constant and relatively stable part of our behavior. We all master a number of registers and styles; often a plurality of grammatical norms are present in the same speakers, as in the case of bilinguals. Such coexisting norms affect one another in interesting ways (see CODESWITCHING). These phenomena, as well as pragmatically induced deviations from a given grammatical norm, can also result in actual changes in the prevailing grammar. Speakers' creative uses can bring about innovations that become part of the grammar. On a larger scale, languages enter into contact through a variety of historical events and social dynamics, again resulting in changes. Some such changes come about in a relatively abrupt manner and involve many aspects of grammar simultaneously. A case often quoted in this connection is the Great Vowel Shift, which radically changed the vowel space of English toward the end of the Middle English period. The important point is that the dynamics of linguistic change seem to take place within the boundaries of Universal Grammar as charted through synchronic theory (cf. LINGUISTIC UNIVERSALS AND UNIVERSAL GRAMMAR). In fact, it was precisely the discovery of the regularity of change (e.g., Grimm's law) that led to the discovery of linguistic structure.
A particularly interesting vantage point on linguistic change is provided by the study of CREOLES (Bickerton 1975, 1981). Unlike most languages, which evolve from a common ancestor (sometimes a hypothesized protolanguage, as in the case of the Indo-European family), Creoles arise in communities of speakers that do not share a native language. A typical situation is that of slaves or workers brought together by a dominating group, who develop an impoverished quasi-language (a pidgin) in order to communicate with one another. Such quasi-languages typically have a small vocabulary drawn from several sources (the language of the dominating group or the native languages of the speakers), no fixed word order, and no inflection. The process of creolization takes place when such a language acquires its own native speakers, that is, speakers born to the relevant groups who start using the quasi-language of their parents as a native language. What typically happens is that all of a sudden the characteristics of a full-blown natural language come into being (morphological markers for agreement, case endings, modals, tense, grammaticized strategies for focusing, etc.). This process, which in a few lucky cases has been documented, takes place very rapidly, perhaps even within a single generation. This has led Bickerton to formulate an extremely interesting hypothesis, that of a "bioprogram," that is, a species-specific acquisition device, part of our genetic endowment, that supplies the necessary grammatical apparatus even when such an apparatus is not present in the input. This raises the question of how such a bioprogram has evolved in our species, a topic that has been at the center of much speculation (see EVOLUTION OF LANGUAGE). A much debated issue is the extent to which language has evolved through natural selection, in the way complex organs like the eye have.
Although not much is yet known or agreed upon on this score, progress in the understanding of our cognitive abilities and of the neurological basis of language is constant and is likely to lead to a better understanding of language evolution (also through comparisons with the communication systems of other species; see ANIMAL COMMUNICATION; PRIMATE LANGUAGE).
The cognitive turn in linguistics has brought together in a particularly fruitful manner the study of grammar with the study of the psychological processes at its basis on the one hand and the study of other forms of cognition on the other. PSYCHOLINGUISTICS deals with how language is acquired (cf. LANGUAGE ACQUISITION) and processed in its everyday uses (cf. NATURAL LANGUAGE PROCESSING; SENTENCE PROCESSING). It also deals with language pathology, such as APHASIA and various kinds of developmental impairments (see LANGUAGE IMPAIRMENT, DEVELOPMENTAL).
With regard to acquisition, the available evidence points consistently in one direction. The kind of implicit knowledge at the basis of our linguistic behavior appears to be fairly specialized. Among all the possible ways to communicate and all the possible structures that a system of signs can have, those that are actualized in the languages of the world appear to be fairly specific. Languages exploit only some of the logically conceivable (and humanly possible) sound patterns, morphological markings, and syntactic and semantic devices. Here we could give just a taste of how remarkable the properties of natural languages are. And it is not obvious how such properties, so peculiar among possible semiotic systems, can be accounted for in terms of, say, pragmatic effectiveness or social conventions or cultural inventiveness (cf. SEMIOTICS AND COGNITION). In spite of this, the child masters the structures of her language without apparent effort or explicit training, and on the basis of an often very limited and impoverished input. This is strikingly so in the case of creolization, but it applies to a significant degree also to "normal" learning. An extensive literature documents this claim in all the relevant domains (see WORD MEANING, ACQUISITION OF; PHONOLOGY, ACQUISITION OF; SYNTAX, ACQUISITION OF; SEMANTICS, ACQUISITION OF). It appears that language "grows into the child," to put it in Chomsky's terms; or that the child "invents" it, to put it in Pinker's words. These considerations could not but set the debate on NATIVISM on a new and exciting footing. At the center of intense investigation is the hypothesis that a specialized form of knowledge, Universal Grammar, is part of the genetic endowment of our species, and thus constitutes the initial state for the language learner. The key to learning, then, consists in fixing what Universal Grammar leaves open (see PARAMETER-SETTING APPROACHES TO ACQUISITION, CREOLIZATION AND DIACHRONY).
On the one hand, this involves setting the parameters of variation, the "switches" made available by Universal Grammar. On the other hand, it also involves exploiting, for various purposes such as segmenting the stream of sound into words, generalized statistical abilities that we also seem to have (see Saffran, Aslin, and Newport 1996). The interesting problem is determining what device we use in what domain of LEARNING. The empirical investigation of child language proceeds in interaction with the study of the formal conditions under which acquisition is possible, which has also proven to be a useful tool in investigating these issues (cf. ACQUISITION, FORMAL THEORIES OF).
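The statistical abilities at issue can be illustrated with a toy version of segmentation by transitional probabilities, in the spirit of Saffran, Aslin, and Newport (1996). The syllable "language" below and the boundary rule (posit a word boundary at each dip in transitional probability) are illustrative assumptions, not the authors' actual procedure.

```python
# A minimal sketch of word segmentation from a continuous syllable
# stream via transitional probabilities: within a word P(next | current)
# is high; across word boundaries it dips. The toy three-word language
# and the local-minimum boundary rule are assumptions for illustration.
from collections import Counter

def transitional_probs(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pairs = list(zip(syllables, syllables[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(p[0] for p in pairs)
    return {p: pair_counts[p] / first_counts[p[0]] for p in pair_counts}

def segment(syllables):
    """Insert a word boundary at each local minimum in transitional probability."""
    tp = transitional_probs(syllables)
    probs = [tp[p] for p in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i, syl in enumerate(syllables[1:]):
        prev_p = probs[i - 1] if i > 0 else 1.0
        next_p = probs[i + 1] if i + 1 < len(probs) else 1.0
        if probs[i] < prev_p and probs[i] < next_p:  # a TP dip: boundary here
            words.append("".join(current))
            current = []
        current.append(syl)
    words.append("".join(current))
    return words

# A pauseless stream of the "words" golabu, tupiro, and bidaku in varying order:
stream = ("go la bu tu pi ro bi da ku tu pi ro "
          "go la bu bi da ku go la bu tu pi ro").split()
print(segment(stream))
# ['golabu', 'tupiro', 'bidaku', 'tupiro', 'golabu', 'bidaku', 'golabu', 'tupiro']
```

With no pauses and no lexicon, the dips in transitional probability alone recover the word boundaries, which is the kind of generalized statistical ability the text appeals to.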
Turning now to processing: planning a sentence, building it up, and uttering it require a remarkable amount of cognitive work (see LANGUAGE PRODUCTION). The same applies to going from the continuous stream of speech sounds (or, in the case of sign languages, gestures) to syntactic structure and from there to meaning (cf. PROSODY AND INTONATION, PROCESSING ISSUES; SPEECH PERCEPTION; SPOKEN WORD RECOGNITION; VISUAL WORD RECOGNITION). Some measure of the difficulty of this task can be seen in how partial our progress is in programming machines to accomplish related tasks, such as going from sounds to written words or analyzing an actual text, even on a limited scale (cf. COMPUTATIONAL LINGUISTICS; COMPUTATIONAL LEXICONS). The actual use of sentences in an integrated discourse involves an extremely complex set of phenomena. Although we are far from understanding them completely, significant discoveries have been made in the last decades, thanks also to advances in linguistic theory. I will illustrate this with one well-known issue in sentence processing.
As is well known, the recursive character of natural language syntax enables us to construct sentences of indefinite length and complexity:
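The recursive step itself can be shown in a minimal sketch: a single re-entrant rule already yields sentences of unbounded length. The two-rule fragment below is an illustrative assumption, not a claim about the grammar of English.

```python
# A toy illustration of recursion in syntax: one re-entrant rule
# (S -> NP "thinks that" S) generates sentences of any length.
# The minimal two-rule fragment is an assumption for illustration.
def sentence(depth):
    """Embed a clause under 'thinks that' `depth` times."""
    if depth == 0:
        return "Pavarotti sings"
    return "Domingo thinks that " + sentence(depth - 1)

for d in range(3):
    print(sentence(d))
# Pavarotti sings
# Domingo thinks that Pavarotti sings
# Domingo thinks that Domingo thinks that Pavarotti sings
```

Because the rule reintroduces the category it expands, there is no longest sentence; any bound on length is set by memory and processing, not by the grammar.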
Language is important for many fairly obvious and widely known reasons. It can be put to an enormous range of uses; it is the main tool through which our thought gets expressed and our modes of reasoning become manifest. Its pathologies reveal important aspects of the functioning of the brain (cf. LANGUAGE, NEURAL BASIS OF); its use in HUMAN-COMPUTER INTERACTION is ever more a necessity (cf. SPEECH RECOGNITION IN MACHINES; SPEECH SYNTHESIS). These are all well established motivations for studying it. Yet one of the most interesting things about language is in a way independent of them. What makes the study of language particularly exciting is the identification of regularities and the discovery of the laws that determine them. Often unexpectedly, we detect in our behavior, in our linguistic judgments, or through experimentation, a pattern, a regularity. Typically, such regularities present themselves as intricate; they concern exotic data hidden in remote corners of our linguistic practice. Why do we have such solid intuitions about such exotic aspects of, say, the functioning of pronouns or the distribution of negative polarity items? How can we have acquired such intuitions? With luck, we discover that at the basis of these intricacies there are some relatively simple (if fairly abstract) principles. Because speaking is a cognitive ability, whatever principles are responsible for the relevant pattern of behavior must be somehow implemented or realized in our head. Hence they must grow in us, will be subject to pathologies, and so on. The cognitive turn in linguistics, through the advent of the generative paradigm, has not thrown away traditional linguistic inquiry. Linguists still collect and classify facts about the languages of the world, but in a new spirit (with arguably fairly old roots) -- that of seeking out the mental mechanisms responsible for linguistic facts.
Hypotheses on the nature of such mechanisms in turn lead to new empirical discoveries, make us see things we had previously missed, and so on through a new cycle. In full awareness of the limits of our current knowledge and of the disputes that cross the field, it seems impossible to deny that progress over the last 40 years has been quite remarkable. For one thing, we just know more facts (facts not documented in traditional grammars) about more languages. For another, the degree of theoretical sophistication is high, I believe higher than it ever was. This is so not only because of the degree of formalization (which, in a field traditionally so prone to bad philosophizing, has its importance), but mainly because of the interesting ways in which arrays of complex properties get reduced to ultimately simple axioms. Finally, the cross-disciplinary interaction on language is also a measure of the level the field is at. Abstract modeling of linguistic structure leads quite directly to psychological experimentation and to neurophysiological study, and vice versa (see, e.g., GRAMMAR, NEURAL BASIS OF; LEXICON, NEURAL BASIS OF; BILINGUALISM AND THE BRAIN). As Chomsky puts it, language appears to be the first form of higher cognitive capacity that is beginning to yield. We have barely begun to reap the fruits of this fact for the study of cognition in general.
Akmajian, A., R. Demers, A. Farmer, and R. Harnish. (1990). Linguistics. An Introduction to Language and Communication. 4th ed. Cambridge, MA: MIT Press.
Baker, M. (1996). The Polysynthesis Parameter. Oxford: Oxford University Press.
Bickerton, D. (1975). The Dynamics of a Creole System. Cambridge: Cambridge University Press.
Bickerton, D. (1981). Roots of Language. Ann Arbor, MI: Karoma.
Chien, Y.-C., and K. Wexler. (1990). Children's knowledge of locality conditions in binding as evidence for the modularity of syntax and pragmatics. Language Acquisition 1:225-295.
Crain, S., and C. McKee. (1985). The acquisition of structural restrictions on anaphora. In S. Berman, J. Choe, and J. McDonough, Eds., Proceedings of the Eastern States Conference on Linguistics. Ithaca, NY: Cornell University Linguistic Publications.
Clark, H., and E. Clark. (1977). The Psychology of Language. New York: Harcourt Brace Jovanovich.
Cheng, L. (1991). On the Typology of Wh-Questions. Ph.D. diss., MIT. Distributed by MIT Working Papers in Linguistics.
Chung, S., and J. McCloskey. (1987). Government, barriers and small clauses in Modern Irish. Linguistic Inquiry 18:173-238.
Dayal, V. (1998). Any as inherent modal. Linguistics and Philosophy.
Fodor, J. A., T. Bever, and M. Garrett. (1974). The Psychology of Language. New York: McGraw-Hill.
Grodzinsky, Y., and T. Reinhart. (1993). The innateness of binding and coreference. Linguistic Inquiry 24:69-101.
Huang, J. (1982). Grammatical Relations in Chinese. Ph.D. diss., MIT. Distributed by MIT Working Papers in Linguistics.
Ladusaw, W. (1979). Polarity Sensitivity as Inherent Scope Relation. Ph.D. diss., University of Texas, Austin. Distributed by IULC, Bloomington, Indiana (1980).
Ladusaw, W. (1992). Expressing negation. SALT II. Ithaca, NY: Cornell Linguistic Circle.
Pinker, S. (1994). The Language Instinct. New York: William Morrow.
Saffran, J., R. Aslin, and E. Newport. (1996). Statistical learning by 8-month-old infants. Science 274:1926-1928.
Aronoff, M. (1976). Word Formation in Generative Grammar. Cambridge, MA: MIT Press.
Atkinson, M. (1992). Children's Syntax. Oxford: Blackwell.
Brent, M. R. (1997). Computational Approaches to Language Acquisition. Cambridge, MA: MIT Press.
Chierchia, G., and S. McConnell-Ginet. (1990). Meaning and Grammar. An Introduction to Semantics. Cambridge, MA: MIT Press.
Chomsky, N. (1981). Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, N. (1987). Language and Problems of Knowledge: The Managua Lectures. Cambridge, MA: MIT Press.
Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, N., and M. Halle. (1968). The Sound Pattern of English. New York: Harper and Row.
Elman, J. L., E. A. Bates, M. H. Johnson, A. Karmiloff-Smith, D. Parisi, and K. Plunkett. (1996). Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.
Gleitman, L., and B. Landau, Eds. (1994). The Acquisition of the Lexicon. Cambridge, MA: MIT Press.
Haegeman, L. (1990). An Introduction to Government and Binding Theory. 2nd ed. Oxford: Blackwell.
Hauser, M. D. (1996). The Evolution of Communication. Cambridge, MA: MIT Press.
Jusczyk, P. W. (1997). The Discovery of Spoken Language. Cambridge, MA: MIT Press.
Kenstowicz, M., and C. Kisseberth. (1979). Generative Phonology: Description and Theory. New York: Academic Press.
Ladefoged, P. (1982). A Course in Phonetics. 2nd ed. New York: Harcourt Brace Jovanovich.
Levinson, S. (1983). Pragmatics. Cambridge: Cambridge University Press.
Lightfoot, D. (1991). How to Set Parameters: Arguments from Language Change. Cambridge, MA: MIT Press.
Ludlow, P., Ed. (1997). Readings in the Philosophy of Language. Cambridge, MA: MIT Press.
Osherson, D., and H. Lasnik. (1981). Language: An Invitation to Cognitive Science. Cambridge, MA: MIT Press.
Stevens, K. N. (1998). Acoustic Phonetics. Cambridge, MA: MIT Press.
Copyright © 1999 Massachusetts Institute of Technology