ARCHIVEUM · The Architecture of Language · Artifact III of VIII

How Meaning
Works

The gap between what a sentence says and what a speaker means. How context, inference, shared knowledge, and the cooperative structure of conversation make communication possible at all.

The Iceberg of Utterance

A man walks into a room where his colleague is working and says: "It's a bit warm in here, isn't it?" The colleague gets up and opens the window. No command was issued. No request was grammatically formed. The sentence, analyzed at the level of its literal content, is a statement about the temperature of the room together with a tag question that invites confirmation. And yet the colleague understood it immediately as a request to open the window, executed that request without hesitation, and would have been puzzled if someone asked why they obeyed a command that was never given.

This is one of the simplest and most ordinary acts of communication imaginable, and it is deeply mysterious. The formal semantic content of the utterance, what the sentence literally means, does not explain how the colleague understood what was wanted. Something else is at work: a complex of contextual knowledge, social conventions, inferences about the speaker's intentions, and assumptions about what a reasonable person would be communicating in this situation. All of this happens in a fraction of a second, automatically and effortlessly, and neither speaker nor listener is typically aware that it is happening at all.

The study of meaning in language requires two distinct enterprises. Semantics describes what sentences mean independently of context. Pragmatics describes what speakers mean by uttering sentences in particular contexts. Most of what actually happens in communication falls under pragmatics.

The distinction between semantics and pragmatics has been debated and refined since the 1930s, when Charles Morris introduced it in his work on semiotics. But it was the philosopher H. P. Grice who, in a series of lectures at Harvard in 1967, gave the clearest and most influential account of the mechanism by which speakers communicate far more than they literally say. Grice's account is the fulcrum of this artifact.

The territory covered here connects directly to what Artifact II established about compositional semantics. Compositionality accounts for how sentences get their literal meanings. What it cannot account for, and what pragmatics exists to explain, is the gap between literal meaning and communicated meaning. That gap is, in most everyday communication, wider than the literal meaning itself.

The Gap Between Saying
and Meaning

The sheer magnitude of the gap between what sentences literally say and what speakers use them to communicate is easy to underestimate from inside ordinary language use, because filling the gap is so automatic. Consider a few examples that make it visible.

A friend who has just seen a film that they found deeply unimpressive says: "Well, that was certainly... interesting." The literal content is an assertion that the film was interesting. The communicated meaning is, approximately, the opposite. The pause before "interesting," the particular intonation, and the word choice of "certainly" in a context where something more positive would be expected combine to signal to the listener that the literal content is not the message. The listener does not hear an assertion that the film was interesting; they hear a restrained expression of disappointment or disdain. The gap between semantic content and communicated meaning is crossed in a single step, without conscious reflection.

A student asks a professor for a letter of recommendation for a competitive academic position. The professor writes: "Ms. Johnson's attendance was impeccable and she always submitted her assignments on time." The recipient of this letter does not conclude that Ms. Johnson is a strong candidate. They conclude, almost certainly, that she is a weak one, and that the professor was unable or unwilling to say anything substantive in her favor. The semantic content of the letter contains no negative assessments. The communicated meaning is deeply negative. The gap is crossed through inference from what was conspicuously not said.

A: Can you pass the salt?
B: [passes the salt]

The literal meaning of A's utterance is a question about B's ability to perform an action. Taken at face value, the answer is almost certainly yes: B can pass the salt. But B does not answer the literal question. B responds to the request that the utterance is used to make. No one in this exchange experiences any confusion about what was meant. The gap is crossed so automatically that it is invisible.

Why the Gap Exists

The existence of the gap is not a defect in language. It is a consequence of two properties that language use requires. First, the productivity of language means that an infinite number of sentences can be produced, but human attention and working memory are finite. If every communicative intention required its own precisely fitted literal formulation, communication would be impossibly slow and effortful. The gap allows speakers to communicate far more than they literally say, and listeners to recover far more than they literally hear, with considerable economy of expression.

Second, the gap is a resource for social negotiation. Indirect communication allows speakers to make requests without imposing, to express opinions without committing, to signal meanings while maintaining plausible deniability. "It's a bit warm in here" is a gentler way of asking someone to open a window than "Open the window." The indirectness softens the request and leaves the listener the option of ignoring it without either party having to acknowledge that a request was made and refused. This social function of indirection is universal across cultures, though the specific conventions differ.

The philosopher of language Paul Grice proposed that the mechanism underlying the gap is not arbitrary or culture-specific but is grounded in a general rational principle governing cooperative communication. Understanding his account requires starting with a simple observation: communication is a cooperative enterprise.

Grice and the
Cooperative Principle

H. P. Grice

1913 – 1988 · "Logic and Conversation," William James Lectures, Harvard, 1967 (published 1975)

Herbert Paul Grice was a British philosopher who spent most of his career at Oxford before moving to Berkeley. His 1967 William James Lectures at Harvard, published in 1975 as "Logic and Conversation" and later collected in Studies in the Way of Words (1989), represent one of the most influential twentieth-century contributions to the philosophy of language and to linguistics. Grice's account of conversational implicature gave the first systematic explanation of how speakers communicate meaning beyond what their sentences literally say, and the framework he developed continues to organize research in pragmatics, psycholinguistics, and formal semantics.

Grice's starting point was the observation that conversation, at least in its cooperative varieties, is not a random sequence of utterances. It is a rational, goal-directed activity in which both speakers and listeners are working toward some shared communicative purpose. Conversations proceed on the implicit assumption that participants are trying to communicate effectively, truthfully, relevantly, and clearly. When this assumption holds, listeners can infer the speaker's intentions from the literal content of what was said together with knowledge of the context and the assumption of cooperation.

Grice formalized this observation in the Cooperative Principle:

Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.

H. P. Grice, "Logic and Conversation," 1975

The Cooperative Principle is not a moral injunction or a claim about how people actually behave in all conversations. It is a description of the rational norm that listeners assume speakers are conforming to, and which therefore licenses inferences from what was said to what was meant. When a speaker appears to violate one of the maxims that follow from the Cooperative Principle, the listener does not conclude that the speaker has simply abandoned rationality. Instead, the listener assumes that the violation is itself communicative: a deliberate deviation that signals some additional meaning beyond the literal content.

The Mechanism of Inference

The key to Grice's account is the distinction between what is said and what is implicated. What is said is the conventional, truth-conditional content of an utterance: what the sentence literally means. What is implicated is the additional meaning that the listener infers from what was said, in the context of the Cooperative Principle. The implicated meaning is not encoded in the sentence; it is generated by the listener's inference, and it can be cancelled without contradiction (unlike the literal meaning) if the speaker makes clear that the implication was not intended.

This cancellability is Grice's test for implicature. If someone says "Some of the students passed the exam," they typically implicate that not all of them passed (otherwise, why not say "all"?). But if the speaker continues "in fact, all of them did," no contradiction results. The implication that not all passed is not part of the literal semantic content of the sentence; it is a defeasible inference that the listener was licensed to draw from the cooperative context, and that can be withdrawn without logical inconsistency. This distinguishes implicature from semantic entailment: "Some students passed" does not logically entail that not all of them did, but it implicates it in normal conversational contexts.

The Four Maxims
The Rational Structure of Conversation

Grice organized the Cooperative Principle into four categories of maxims, each describing a dimension of cooperative communication. He named them after Kant's categories, though the connection is more rhetorical than philosophical.

Maxim of Quantity

Make your contribution as informative as required for the current purposes of the exchange. Do not make your contribution more informative than is required.

If someone asks "Do you know what time it is?" and you say "Yes," you have given less information than the question was seeking. If someone asks how to get to the station and you give a complete account of the city's transport history, you have given more. Both violate Quantity.

Maxim of Quality

Do not say what you believe to be false. Do not say that for which you lack adequate evidence.

This is the maxim underlying the presumption of sincerity in conversation. When it is flouted openly, the listener recognizes the flouting and draws inferences: irony, sarcasm, and hyperbole all work by apparent violations of Quality that the listener is expected to recognize as deliberate.

Maxim of Relation

Be relevant.

The simplest to state and one of the most powerful in generating implicatures. When A asks "Where is John?" and B replies "There's a yellow VW outside Sue's house," B's response implicates that John is at Sue's house, through the assumption that B's reply is relevant. Without the assumption of relevance, the reply is gibberish.

Maxim of Manner

Avoid obscurity. Avoid ambiguity. Be brief. Be orderly.

The maxim that generates inferences from the form rather than the content of an utterance. "She got married and had a child" implicates a temporal order (first married, then child) partly through the Manner maxim: if the events occurred in a different order, a cooperative speaker would have said so. "She had a child and got married" implicates the reverse order.

Flouting, Violating, and Opting Out

Grice distinguished between several types of non-observance of the maxims. A speaker who violates a maxim does so covertly and without communicative intent: a liar violates the maxim of Quality without expecting to be detected. A speaker who flouts a maxim does so openly, in a way that the listener is expected to notice, with the intention of generating an implicature.

Irony is the paradigm case of flouting Quality. When someone looks at a torrential rainstorm and says "Lovely weather for a picnic," they say something that is obviously and blatantly false. The listener does not conclude that the speaker has made an error about the weather. They conclude that the speaker is being ironic, communicating something closer to the opposite of what was literally said. The communication works precisely because the flouting is transparent: the listener must recognize that the Cooperative Principle is still in operation, and that the apparent Quality violation is itself the vehicle of a message.

Flouting Quantity generates implicatures from what is conspicuously not said. The recommendation letter that mentions only punctuality and attendance implicates weak academic performance by conspicuously omitting mention of intellectual qualities. The listener reasons: a cooperative speaker who could say something positive about the candidate's academic work would have said so; the fact that they have not said so implicates that there is nothing positive to say.

Flouting Relation generates inferences from apparently irrelevant contributions. When a mother says to a child who is misbehaving at the dinner table, "Your father is coming home soon," she communicates a warning through an apparently non-sequitur observation about the father's schedule. The child infers the relevance: father's arrival will bring consequences for present behavior.

A speaker can also opt out of the Cooperative Principle explicitly: "I can't say any more than this" or "I'm not in a position to comment" signals that the speaker is stepping outside the cooperative framework for reasons they do not wish to explain. Witnesses who "cannot recall" events, politicians who "decline to speculate," and lawyers who advise their clients to "say nothing": all are opting out rather than violating or flouting.

Implicature
Scalar, Conversational, and Conventional

The richness of Grice's framework becomes apparent when examining the different types of implicature it generates and the sometimes surprising precision with which the mechanism operates. Three types deserve particular attention: scalar implicatures (generated by the use of items from a scale of informativeness), conversational implicatures proper (generated by the Cooperative Principle in context), and conventional implicatures (a residual category that Grice himself found troublesome).

Scalar Implicature

Many words in natural language come in informativeness scales: sets of expressions ordered by how much information they convey. The scale for numerals runs, roughly, upward: one, two, three, four, five... The scale for quantifiers runs: some, many, most, all. The scale for logical connectives runs: or, and. The scale for modal expressions runs: possible, likely, certain.

The mechanism of scalar implicature, formalized by Laurence Horn in 1972 building on Grice, is this: when a speaker uses a weaker (less informative) item from a scale, they implicate that the stronger item on the same scale does not apply, because a cooperative speaker would have used the stronger item if it were true. "Some students passed" implicates "Not all students passed" because the speaker used "some" rather than "all," and a cooperative speaker who knew all students had passed would have said so. "It's possible that she's at home" implicates something weaker than certainty, because if the speaker were certain, they would have said "She's definitely at home."
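The mechanism can be sketched in a few lines of code: find the scale containing the uttered term and negate every stronger scale-mate. The scale inventory and the strengthening rule below are illustrative simplifications, not a claim about any particular formal implementation.

```python
# A minimal sketch of scalar implicature over Horn scales.
# Scales run from weakest (least informative) to strongest.

HORN_SCALES = [
    ["some", "many", "most", "all"],    # quantifiers
    ["possible", "likely", "certain"],  # modal expressions
    ["or", "and"],                      # logical connectives
]

def scalar_implicatures(term):
    """Return the negations of every stronger scale-mate that a
    cooperative use of `term` implicates: the speaker could not
    truthfully have used the stronger item, or they would have."""
    for scale in HORN_SCALES:
        if term in scale:
            position = scale.index(term)
            return [f"not {stronger}" for stronger in scale[position + 1:]]
    return []

print(scalar_implicatures("some"))      # ['not many', 'not most', 'not all']
print(scalar_implicatures("possible"))  # ['not likely', 'not certain']
print(scalar_implicatures("all"))       # [] — the strongest item implicates nothing
```

Cancellability falls out naturally on this picture: the negations are inferences layered on top of the literal content, so withdrawing them ("in fact, all of them did") contradicts nothing the sentence itself says.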

Scalar implicatures are among the most computationally well-understood pragmatic inferences, and they have been the subject of experimental investigation in psycholinguistics. Studies by Noveck (2001) and Papafragou and Musolino (2003) found that children are less likely than adults to compute scalar implicatures, tending instead to accept the weaker term's literal reading, which suggests that the inference requires additional cognitive effort and is not automatic in the way that semantic processing is. Adults, in most contexts, compute the implication reliably, but the inference can be suspended when the context makes it clear that the stronger statement is not under consideration.

Generalized and Particularized Implicature

Grice distinguished between generalized conversational implicatures, which arise in the absence of any special context and are the default interpretation of an expression, and particularized conversational implicatures, which arise only in specific contexts.

The "some but not all" implication of "some" is generalized: it arises whenever "some" is used, without any special contextual features being required. A speaker who says "I lost some of my luggage" will, in virtually any context, be understood to mean that they did not lose all of it. If they had lost all of it, a cooperative speaker would have said so.

By contrast, the implication that John is at Sue's house when B says "There's a yellow VW outside Sue's house" in response to a question about John's location is particularized: it arises only in the context of that specific exchange, from the specific relevance that the speaker's contribution has in that context. Removed from that context, the utterance about the yellow VW implicates nothing about John's location.

Conventional Implicature

Not all implied meanings are generated by the Cooperative Principle. Some are triggered by the conventional meaning of specific words, independently of context. Grice called these conventional implicatures. The word "but" is the clearest example. "She is a philosopher but she is kind" and "She is a philosopher and she is kind" have the same truth conditions (both are true when and only when she is both a philosopher and kind), but "but" carries an additional implication of contrast or unexpectedness: it implies that being kind is somehow remarkable or surprising given that she is a philosopher. This implication is not cancellable without awkwardness; it is conventionally encoded in the word itself rather than generated by pragmatic inference.

Similarly, "even" triggers an implicature of remarkableness or unexpectedness: "Even John passed the exam" implicates that John was not expected to pass. "Still" and "yet" carry temporal implications about the expected completion of a process. "But," "yet," "even," "also," "too," "again": words like these carry meaning that is not captured by their truth-conditional contribution and is not derivable from the Cooperative Principle. They form a lexical layer of pragmatic meaning that sits between pure semantics and pure pragmatics.

Conventional implicatures posed a theoretical problem for Grice because they resisted his primary distinction between what is said and what is implicated: they appear to be conventional (encoded in the lexicon) but not part of the truth-conditional content of the sentence. This problem has been the subject of extensive subsequent work in formal pragmatics, including Kent Bach's argument that many alleged conventional implicatures belong to what is said, and, more recently, Christopher Potts's theory of expressives and supplements.

Speech Act Theory
Language as Action

While Grice was developing his account of how speakers communicate more than they say, a parallel tradition was developing an equally fundamental point: that utterances are not just descriptions of the world but actions in the world. When a judge says "I sentence you to five years," the words do not describe a sentencing; they perform one. When a couple exchange "I do" in a marriage ceremony, the words do not report their consent; they constitute it. When a referee says "I declare the game over," the game is over. These observations, initially due to the Oxford philosopher J. L. Austin, established the field of speech act theory.

J. L. Austin

1911 – 1960 · How to Do Things with Words, William James Lectures, Harvard, 1955 (published 1962)

John Langshaw Austin was an Oxford philosopher and the leading figure of ordinary language philosophy. His 1955 William James Lectures at Harvard (the same lecture series in which Grice would present his account of implicature twelve years later) were published posthumously as How to Do Things with Words. Austin began by noticing that many utterances are not descriptions of states of affairs at all, and therefore cannot be true or false in the usual sense. His development of the distinction between locutionary, illocutionary, and perlocutionary acts provided the framework that John Searle subsequently systematized into a general theory of speech acts.

Performatives and Constatives

Austin's initial distinction was between performative utterances and constative utterances. A constative utterance states something that can be true or false: "The cat is on the mat" is true when the cat is on the mat and false otherwise. A performative utterance performs an action by being said: "I promise to be there" does not describe a promise; it constitutes one. "I hereby declare war" does not describe a declaration of war; uttered by the appropriate person in the appropriate context, it is the declaration.

Austin noticed that performatives cannot be true or false but can succeed or fail in a different sense. A promise made under duress, a bet made by someone who lacks the authority to bet, a marriage performed by someone not legally authorized to officiate: these are what Austin called infelicitous speech acts. They fail not because they are false but because the conditions for their successful performance, their felicity conditions, are not met. For a speech act to be felicitous, it requires: an appropriate conventional procedure; appropriate participants and circumstances; correct and complete execution of the procedure; and, for some acts, the requisite thoughts, feelings, and intentions on the part of the participants.

Locutionary, Illocutionary, Perlocutionary

Austin's more lasting contribution was a three-part analysis of what happens when someone says something. Every utterance can be analyzed at three levels.

The locutionary act is the basic act of producing a meaningful utterance with a particular sense and reference: the act of saying something with a determinate content. When someone utters the words "Close the window," the locutionary act is the production of that sentence with its conventional meaning.

The illocutionary act is the action performed in saying something: the act done by means of the utterance. "Close the window" might be an order, a request, a suggestion, or a warning, depending on context. The illocutionary act is the type of speech act being performed: assertion, question, command, promise, threat, apology, greeting, and so on. Illocutionary force is what distinguishes "I will be there" as a promise from "I will be there" as a prediction or as a warning.

The perlocutionary act is the effect achieved by performing the illocutionary act: the consequences in the mind or behavior of the listener. Closing the window in response to a request is the perlocutionary effect of that request. Being persuaded, being frightened, being amused: these are perlocutionary effects. Unlike illocutionary acts, perlocutionary effects are not under the direct control of the speaker. A speaker can intend to persuade without succeeding; the perlocutionary effect depends on the listener's response.

Searle's Classification of Speech Acts

John R. Searle

b. 1932 · Speech Acts, 1969; Expression and Meaning, 1979

Searle was Austin's student at Oxford and subsequently a professor at Berkeley. Where Austin developed the taxonomy of speech acts in a somewhat unsystematic way, Searle gave it a rigorous theoretical foundation. His 1969 book Speech Acts provided the standard account of the felicity conditions for different types of speech acts and developed a systematic classification. He later became famous for the Chinese Room thought experiment (1980), which challenged strong artificial intelligence claims about machine understanding. His work on intentionality and social reality extends speech act theory into questions about how institutions and social facts are constituted by collective intentional behavior.

Searle's classification of illocutionary acts organizes the enormous variety of things that can be done with words into five fundamental categories.

Assertives. Commit the speaker to the truth of a proposition: the world is being described as it is. Examples: assertions, claims, descriptions, predictions, diagnoses, reminders.

Directives. Attempt to get the hearer to do something: the world is to be brought into conformity with the words. Examples: orders, requests, questions, invitations, commands, suggestions.

Commissives. Commit the speaker to some future course of action. Examples: promises, vows, bets, threats, offers, pledges.

Expressives. Express a psychological state about a state of affairs. Examples: apologies, thanks, congratulations, condolences, greetings.

Declarations. Bring about changes in the world by their very performance, given appropriate authority. Examples: declarations of war, verdicts, baptisms, excommunications, rulings, firings.
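The five categories can be recorded as a small lookup structure, annotated with the "direction of fit" each involves (Searle's own notion: assertives fit the words to the world; directives and commissives fit the world to the words; expressives have no direction of fit; declarations have both at once). The dictionary layout and field names are illustrative choices, not Searle's notation.

```python
# Searle's five illocutionary categories with their direction of fit.

SPEECH_ACT_TYPES = {
    "assertive":   {"fit": "word-to-world", "examples": ["assertion", "prediction", "diagnosis"]},
    "directive":   {"fit": "world-to-word", "examples": ["order", "request", "invitation"]},
    "commissive":  {"fit": "world-to-word", "examples": ["promise", "vow", "threat"]},
    "expressive":  {"fit": "none",          "examples": ["apology", "thanks", "congratulation"]},
    "declaration": {"fit": "both",          "examples": ["verdict", "baptism", "declaration of war"]},
}

def direction_of_fit(act_type):
    """Look up the direction of fit for a Searlean category."""
    return SPEECH_ACT_TYPES[act_type]["fit"]

print(direction_of_fit("directive"))    # world-to-word
print(direction_of_fit("declaration"))  # both
```

Direction of fit is what separates the otherwise similar directives and commissives from assertives: a false assertion is a defect in the words, while an unfulfilled order or promise is a defect in the world.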

Indirect Speech Acts
Meaning More Than You Say

Speech act theory and Gricean pragmatics converge most powerfully in the analysis of indirect speech acts: cases where one illocutionary act is performed by means of another. "Can you pass the salt?" is grammatically a question about ability: formally, an interrogative that requests information about the listener's capacity. Its primary illocutionary force, in most contexts, is a request: a directive that the salt be passed. The listener responds to the request, not to the question. The indirect speech act, in which a directive is performed through a syntactically interrogative form, is one of the most pervasive structures in everyday language.

Why Indirectness?

The prevalence of indirect speech acts is not accidental. The linguists Penelope Brown and Stephen Levinson developed an influential theory of politeness in their 1987 book Politeness: Some Universals in Language Usage, which explains indirectness in terms of face: the public self-image that every social actor has and that every social interaction negotiates.

Brown and Levinson distinguish between positive face (the desire to be liked, approved of, and recognized as a member of a community) and negative face (the desire for autonomy, freedom from imposition, and the right to act without being impeded). Many speech acts are inherently face-threatening: requests impose on the listener's negative face by restricting their autonomy; criticisms and disagreements threaten the listener's positive face. Indirect speech acts soften these threats. "Could you possibly close the window?" threatens the listener's negative face less than "Close the window" because the interrogative form formally preserves the listener's freedom to say no, even though no one expects them to do so.

Brown and Levinson's cross-linguistic research found that the mechanisms of politeness vary across cultures but the underlying concerns for positive and negative face appear universal. The specific forms that indirectness takes differ: what counts as polite in Japanese (extensive use of honorific language systems) differs from what counts as polite in English (strategic use of interrogative and conditional forms) and from what counts as polite in some cultures where directness is itself valued as a form of respect. But the negotiation of face through speech is universal.

Conventional and Non-Conventional Indirectness

Some indirect speech acts are so conventionalized that their literal meaning is nearly inert. "Can you pass the salt?" is a request in virtually any context: the literal reading (a question about ability) is so routinely bypassed that it barely registers. These are conventional indirect speech acts: forms whose indirect function is fixed by convention rather than calculated from context in each occurrence.

Other indirect acts require active inference from context. A doctor who tells a patient "Your numbers are not looking great" communicates different things depending on whether "your numbers" refers to cholesterol, blood pressure, or tumor markers. A manager who says to an employee "You seem to be having trouble meeting deadlines" may be issuing a warning, offering help, or beginning a disciplinary process. The literal content is the same; the illocutionary force is determined by context, relationship, intonation, and the listener's knowledge of what kind of interaction this is. Meaning, in these cases, is assembled collaboratively by speaker and listener from a wide range of available information, most of it not encoded in the words themselves.

A: Would you like to come to dinner on Saturday?
B: I have a lot of work to catch up on this weekend.

B has not answered A's question with a yes or a no. B has said something true, relevant, and potentially explanatory of a negative answer without explicitly giving one. A will almost certainly understand this as a polite decline. The indirectness allows B to avoid the face-threatening act of an outright refusal while leaving A some room to interpret charitably. Both parties understand what has been communicated, and both understand that neither has to acknowledge the decline explicitly.

Presupposition
What Language Takes for Granted

Every sentence takes something for granted. The sentence "John stopped smoking" presupposes that John was previously smoking. The sentence "My sister is a doctor" presupposes that I have a sister. The question "When did you stop being difficult?" presupposes that the person addressed was, at some point, being difficult. These presuppositions are not part of the asserted content of the sentences; they are background assumptions that both speaker and listener are taken to accept and that are not at issue in the main communicative exchange.

Frege's Original Observation

The concept of presupposition in this technical sense was introduced by Frege in the same 1892 paper that gave us the sense-reference distinction. Frege observed that a sentence containing a non-referring term seems to presuppose, rather than assert, that the term refers, and to lack a truth value when that presupposition fails. The classic test case, introduced by Russell, is "The present king of France is bald": there is no present king of France. On the Fregean analysis, later defended by P. F. Strawson in "On Referring" (1950), the sentence fails to have a truth value when the presupposition (that there is a present king of France) is false: it is neither true nor false, but in some sense "truth-valueless."

Bertrand Russell took a different view in his 1905 paper "On Denoting," arguing that definite descriptions like "the present king of France" are not names that refer but quantified expressions that contribute to the truth conditions of the sentence. On Russell's analysis, "The present king of France is bald" is straightforwardly false (rather than truth-valueless), because it says, roughly, "There is exactly one king of France, and that individual is bald," and the first conjunct is false. The disagreement between the Fregean and Russellian approaches to presupposition has been the subject of debate in formal semantics ever since, and a definitive resolution has not been reached.

Presupposition Triggers

Linguists have catalogued a wide variety of presupposition triggers: constructions and lexical items that systematically introduce presuppositional content. The main classes include the following.

Definite descriptions presuppose the existence and uniqueness of their referent. "The tallest mountain in Asia" presupposes that there is exactly one mountain that is tallest in Asia.

Factive verbs presuppose the truth of their complement. "She knows that the experiment failed" presupposes that the experiment failed; so does "She regrets that the experiment failed." By contrast, "She believes that the experiment failed" carries no such presupposition; her belief may be false.

Change-of-state verbs presuppose the prior existence of the state being changed. "He stopped running" presupposes he was running. "She began to recover" presupposes she was ill. "The engine started" presupposes the engine was not running.

Iteratives presuppose a previous occurrence. "She came back" presupposes she was there before. "He tried again" presupposes a previous attempt.

Cleft sentences presuppose all but their focused element. "It was the butler who killed him" presupposes that someone killed him and asserts that the butler was that someone. This is why cleft sentences are a favored rhetorical strategy for smuggling contested claims past the listener's critical attention: "It was the tax cuts that caused the recession" presupposes there was a recession and that something caused it, while asserting only that the cause was the tax cuts. The listener's critical apparatus is directed at the asserted element while the presuppositions pass without scrutiny.

Presupposition in Political and Rhetorical Language

The rhetorical significance of presupposition is difficult to overstate. Questions are among the most powerful presupposition-loading devices in political discourse. The question "Have you ever taken money from foreign donors?" presupposes the relevant category of offense while leaving the specific allegation formally in question form. "When did your administration first learn of the scandal?" presupposes a scandal and a moment of learning, rather than asserting either. Cross-examinations, press conferences, and political debates are dense with presupposition-loading that operates below the level of what is explicitly claimed and therefore below the level of what can be straightforwardly denied.

The psychologist Elizabeth Loftus demonstrated the power of leading language in a series of experiments on eyewitness memory in the 1970s. In one famous study, participants who had watched a film of a car accident were asked either "How fast were the cars going when they smashed into each other?" or "How fast were the cars going when they hit each other?" The verb "smashed" implies more violent contact than "hit," and participants who heard the "smashed" version consistently estimated higher speeds and were more likely to report seeing broken glass (which was not present in the film) than participants who heard the "hit" version. In a companion study, participants asked "Did you see the broken headlight?" (where the definite article presupposes that there was a broken headlight) were more likely to report having seen one than participants asked "Did you see a broken headlight?" Presupposition can shape not only the interpretation of an utterance but the subsequent memory of the event being described.

Context and Deixis
The Anchoring of Utterance in the World

Every utterance is produced at a particular time, in a particular place, by a particular person, addressed to a particular person or audience, and in the context of a particular conversation and social relationship. These features of the utterance situation, its context, are not merely background conditions for communication; they are essential to determining what most utterances mean. A substantial class of linguistic expressions has no meaning at all except relative to a context of utterance. These are called deictic expressions, from the Greek word for pointing.

The Three Dimensions of Deixis

Person deixis involves the first and second person pronouns: "I," "me," "my," "you," "your," and their equivalents in other languages. These expressions pick out different individuals depending entirely on who is speaking and who is being addressed. The word "I" refers to Socrates when Socrates utters it, and to Marcus Aurelius when Marcus Aurelius utters it. The meaning of "I" is not a specific individual but a rule: "I" refers to whoever is the current speaker. This rule, applied in a context, picks out a particular referent.

Place deixis involves expressions like "here," "there," "this," "that," "come," and "go." "Here" picks out the location of the speaker. "This" indicates proximity to the speaker; "that" indicates distance. "Come" involves motion toward the deictic center (the speaker's location); "go" involves motion away from it. These expressions are entirely context-dependent: the sentence "It's over there" conveys no information about what is being referred to or where it is without knowledge of who is speaking and where.

Time deixis involves expressions like "now," "then," "today," "yesterday," "tomorrow," "soon," and verb tenses. "Yesterday" picks out the day before the utterance day. "Now" picks out the moment of utterance. Tense morphology anchors the time of a described event to the time of utterance: the past tense locates an event before the moment of speaking. Without knowledge of when an utterance was produced, expressions like "now" and "yesterday" are uninterpretable.

Shared Knowledge and Common Ground

The concept of common ground grew out of the philosopher David Lewis's work on convention and common knowledge in the late 1960s and 1970s, and was developed into its modern form by Robert Stalnaker and Herbert Clark. Common ground refers to the body of knowledge that participants in a conversation are assumed to share and to know that they share. It is not enough for two people to know something; for communication to proceed smoothly, each must know that the other knows it, each must know that the other knows that each knows it, and so on.

Common ground explains why definite reference works. When a speaker says "The new restaurant on the corner is excellent," the use of the definite article presupposes that the listener can identify the restaurant being referred to. This presupposition is licensed by the assumption that the identity of the restaurant is in their common ground: both know of it, both know that the other knows, and both know that this shared knowledge licenses the definite description. When a speaker says "The project we discussed last week," the phrase "last week" anchors the reference to a shared past interaction that is presumed to be part of the common ground.

Herbert Clark's extensive experimental and theoretical work on common ground, summarized in Using Language (1996), showed that speakers and listeners actively maintain and update a model of their common ground throughout a conversation. Utterances are understood against this model, and new information presented in one exchange enters the common ground and becomes available for reference in subsequent exchanges. When common ground is misaligned, misunderstandings arise: the speaker assumes shared knowledge that the listener does not have, or the listener assumes shared knowledge that the speaker did not intend to invoke. The repair of these misalignments, through clarification requests and acknowledgment tokens, is a major part of the work of conversation.

The Indexicality of Meaning

The philosopher Charles Sanders Peirce, working at the turn of the twentieth century, distinguished three types of sign: icons (signs that resemble what they represent), symbols (signs whose relation to their object is conventional), and indices (signs that stand in a real causal or existential relation to their object). Smoke is an index of fire: not because it resembles fire or because there is a convention connecting them, but because it is causally produced by fire. A pointing finger is an index of the thing pointed at.

Deictic expressions are linguistic indices: although the words themselves are conventional, they pick out their referents through their relation to the context of utterance rather than through resemblance or convention alone. All languages have deictic expressions, because all language use is situated: speakers and listeners always occupy positions in space and time, and language must be able to make reference to those positions and to the participants in the speech event. The indexical dimension of language is the dimension that connects language use to the world in which it occurs, anchoring the abstract structure of the linguistic system to the concrete reality of particular communicative events.

Relevance Theory
A Revision of Grice

Dan Sperber and Deirdre Wilson

Sperber b. 1942, Wilson b. 1941 · Relevance: Communication and Cognition, 1986 (2nd ed. 1995)

Dan Sperber is a French cognitive scientist and anthropologist; Deirdre Wilson is a British linguist. Their 1986 book Relevance proposed a revision of Grice's framework that eliminated the four maxims and replaced them with a single cognitive principle. The book sparked immediate and sustained debate, and Relevance Theory has become one of the two or three most influential frameworks in pragmatics, generating a substantial research programme in linguistics, cognitive science, and the study of communication.

Grice's framework, as described, works as an account of the rational structure of communication. But Sperber and Wilson identified a significant problem with it: the four maxims are not all equal in status, and Relevance does most of the work. Every implicature ultimately depends on the assumption that what the speaker says is relevant to the context. The other maxims, they argued, can be derived from the relevance assumption. Why be brief? Because unnecessary words increase processing effort without increasing cognitive benefit. Why be truthful? Because false information yields no genuine cognitive benefit: it leads to false conclusions, not true ones. Why be informative? Because uninformative utterances deliver no cognitive benefit at all. If this derivation works, the four maxims reduce to one: be relevant.

The Principle of Relevance

Sperber and Wilson defined relevance technically, as a relation between an input (an utterance or a piece of information) and a cognitive context. An input is relevant to a context to the extent that it yields positive cognitive effects: true conclusions that could not have been derived from the input alone or from the context alone, but only from their combination. Such conclusions are called contextual implications. The more contextual implications an input yields, and the less processing effort it requires to yield them, the more relevant it is.

From this definition, Sperber and Wilson derived two principles. The Cognitive Principle of Relevance: human cognition tends to be geared toward the maximization of relevance, in the sense of seeking the maximum cognitive effect for the minimum processing effort. The Communicative Principle of Relevance: every act of ostensive communication communicates a presumption of its own optimal relevance, meaning that the utterance is presumed to be relevant enough to be worth the listener's effort to process, and the most relevant one compatible with the communicator's abilities and preferences.

The mechanism of relevance-guided inference works as follows. When a listener receives an utterance, they search for an interpretation that satisfies the presumption of relevance: an interpretation that yields adequate positive cognitive effects for minimal processing effort. The first interpretation that satisfies this presumption is the communicated interpretation. This eliminates the need for Grice's maxims as separate principles: the listener is not checking utterances against a list of maxims but searching for the most relevant interpretation.

Where Relevance Theory Differs from Grice

The most important difference between Grice and Sperber and Wilson concerns the nature of the inferential process. For Grice, pragmatic inference is, at least in reconstruction, a quasi-logical reasoning process: an implicature must be calculable, derivable by an explicit argument from the Cooperative Principle and the maxims, even if listeners do not consciously work through that argument. For Sperber and Wilson, pragmatic inference is a non-demonstrative, automatic, unconscious process of hypothesis formation and confirmation. Listeners do not reason their way to the most relevant interpretation; they compute it, in the way that perceptual systems compute the perception of three-dimensional objects from two-dimensional retinal images.

Relevance Theory also significantly expands the scope of pragmatic processes. Grice focused primarily on implicature: the meaning communicated beyond what is said. Sperber and Wilson argue that pragmatic processes also operate on the explicit content of utterances, in a way that Grice's framework did not systematically recognize. Determining the explicit content of "It's warm enough" requires inferring "warm enough for what": the explicit content cannot be determined without pragmatic enrichment that goes beyond the semantic content of the sentence. Similarly, pronouns require pragmatic resolution (what does "she" refer to?), and many other aspects of the literal interpretation of an utterance require context-driven inference rather than pure compositional semantics.

This expansion of pragmatic processes into the determination of explicit content has been called pragmatic intrusion by critics of Relevance Theory (including Kent Bach, who argues that much of what Sperber and Wilson treat as explicit content is actually an additional level of meaning between what is said and what is implicated). The debate has sharpened the understanding of the semantic-pragmatic interface considerably, even if a consensus has not been reached.

Metaphor and Metonymy
The Conceptual Structure of Figurative Language

The account of meaning described so far has treated language as primarily a vehicle for communicating propositional content: assertions, questions, requests, and so on. But a substantial portion of everyday language is figurative rather than literal, and the treatment of figurative language is a significant test of any theory of meaning. Metaphor in particular is not a marginal or decorative feature of language; it is, as George Lakoff and Mark Johnson argued in their influential 1980 book Metaphors We Live By, a fundamental feature of human conceptual structure.

George Lakoff and Mark Johnson

Lakoff b. 1941, Johnson b. 1949 · Metaphors We Live By, 1980

Lakoff is a cognitive linguist at Berkeley; Johnson is a philosopher at the University of Oregon. Their 1980 collaboration proposed a radical reconception of metaphor: not as a rhetorical device that poets and orators use but as the organizing principle of human conceptual structure. The book argued that the way people understand abstract domains (time, argument, emotion, causation) is systematically structured by metaphorical mappings from more concrete domains (space, physical movement, objects). This claim, initially controversial, has been substantially supported by subsequent research in cognitive linguistics and psycholinguistics.

Conceptual Metaphor

The traditional account of metaphor treats it as a figure of speech in which a word is used in a non-literal sense by virtue of some similarity between its literal referent and the intended referent. "Juliet is the sun" is metaphorical because Juliet is not literally the sun but shares certain properties with it (warmth, brilliance, centrality). On this account, metaphor is a linguistic phenomenon: it happens in language.

Lakoff and Johnson argued that this account gets things backwards. Metaphor is not primarily a feature of language but a feature of thought: a systematic mapping between two conceptual domains that allows people to understand and reason about one domain in terms of another. The linguistic expressions of a metaphor are surface manifestations of an underlying conceptual structure.

Their evidence: English speakers talk about arguments in terms of warfare. "He attacked every weak point in my argument." "She demolished his position." "I defended my thesis." "His objections were shot down." "We won the debate." These are not isolated poetic expressions; they constitute a systematic pattern in which the entire domain of argumentation is understood through the conceptual structure of warfare. Lakoff and Johnson call this the conceptual metaphor ARGUMENT IS WAR, using capitals to indicate the conceptual rather than linguistic level.

The same analysis applies across many abstract domains. TIME IS MONEY is evidenced by: "You're wasting my time," "This will save you time," "I've invested a lot of time in this," "How do you spend your evenings?" IDEAS ARE FOOD is evidenced by: "That's a half-baked idea," "I can't swallow that claim," "Let me chew on that for a while," "She devoured the book." LIFE IS A JOURNEY is evidenced by: "She's at a crossroads," "He's come a long way," "They're going nowhere fast," "I don't know where I'm headed."

Metonymy

Metonymy is a related but distinct phenomenon: using one entity to refer to another that is associated with it in a real-world relationship, rather than conceptually mapped to it by similarity. "The White House announced today" uses the physical building to refer to the institution or the administration. "I'm parked out back" uses the self to refer to one's vehicle. "She's a good read" uses the act of reading to refer to a book. "The suits on the board didn't understand the proposal" uses clothing to refer to the people who wear it.

Lakoff and Johnson treat metonymy as a conceptual phenomenon as well as a linguistic one. People do not merely substitute one expression for another; they think of one entity in terms of another that stands in a particular relation to it. The relation can be part-to-whole (PART FOR WHOLE: "wheels" for "car"), producer-to-product (PRODUCER FOR PRODUCT: "I'm listening to Beethoven"), place-to-institution (PLACE FOR INSTITUTION: "Wall Street reacted negatively"), container-to-contents (CONTAINER FOR CONTENTS: "the kettle is boiling"), or several other relations.

Metaphor, Pragmatics, and Relevance

The traditional account of how metaphor is understood, going back at least to Aristotle, is that listeners first compute the literal meaning of a metaphorical expression, recognize that it fails (Juliet is not literally the sun), and then search for a figurative meaning. Sperber and Wilson's Relevance Theory challenges this account: if listeners were always computing literal meaning first, metaphor comprehension would be systematically slower than literal interpretation, which experimental evidence does not clearly support for conventional or weakly figurative expressions.

Relevance Theory proposes instead that metaphors are interpreted through the same process as literal utterances: the listener searches for the most relevant interpretation, and if a loose or non-literal reading yields more cognitive effects for less effort than a precise literal one, it will be the interpretation that is settled on. On this account, metaphor is not a special category of language use requiring a dedicated processing mechanism; it is at one end of a continuum of loose interpretation that runs from approximation ("The room is rectangular," when it is almost but not precisely rectangular) through hyperbole ("I've told you a million times") to full metaphor ("Juliet is the sun"). The difference between these cases is one of degree of looseness, not of kind.

Meaning Between Minds
What Communication Actually Requires

Stepping back from the technical machinery, what is the picture that emerges? Communication is not the transmission of semantic content from one mind to another through the medium of a linguistic code. This transmission model, while intuitive, is inadequate to describe what actually happens when people communicate. What actually happens is considerably more complex, and considerably more impressive.

A speaker forms an intention to communicate something. That intention is not simply the intention to produce a sentence with a certain semantic content; it is the intention to perform a particular illocutionary act and thereby produce an effect in the listener's mind (an assertion to be believed, a request to be acted on, a promise to be relied on). The speaker selects an utterance that they believe will lead the listener to recognize this intention, given the context, the common ground, the conventions of the language, and the shared assumption of cooperative communication. The utterance is produced.

The listener receives the utterance. They decode its phonological, morphological, and syntactic structure to arrive at its semantic content. They enrich and specify that content through pragmatic processes: resolving ambiguities, filling in underspecified material, identifying the referents of deictic and anaphoric expressions, and anchoring the interpretation in the context. They infer the illocutionary force, the type of speech act being performed. And they calculate the implicatures: the additional meanings that the cooperative context licenses them to derive from what was said and what was not said.

All of this happens in real time, in fractions of a second, while simultaneously tracking the social dynamics of the exchange, maintaining an updated model of the common ground, monitoring the relevance of the contribution to the ongoing conversation, and formulating a response. The speed and automaticity of this process is as remarkable as the complexity of what it achieves.

The extraordinary thing about human communication is not that misunderstandings occur. Given the complexity of the processes involved, the gap between what is said and what is meant, and the degree to which both sides depend on unstated common ground, the extraordinary thing is that communication succeeds as often as it does.

The frameworks examined in this artifact, Grice's Cooperative Principle and maxims, Austin and Searle's speech act theory, the analysis of presupposition, the theory of deixis and common ground, Relevance Theory, and the cognitive linguistics of conceptual metaphor, each illuminate a different aspect of the pragmatic dimension of meaning. None of them, individually or together, provides a complete account. The semantics-pragmatics interface remains one of the most active research areas in linguistics and philosophy of language, and fundamental questions (How much of what we communicate is semantics? How much is pragmatics? What exactly is the boundary between what is said and what is implicated?) are genuinely unresolved.

What is not in doubt is that the literal meaning of an utterance, the compositional semantic content delivered by the mechanisms described in Artifact II, is only the beginning of the story of meaning. The rest of the story is pragmatics: the study of how context, knowledge, inference, social relations, and cooperative rationality transform that literal content into the rich, specific, and (usually) correctly interpreted communication that language use actually achieves.

Artifact IV will take up the question that hovers at the edge of this one: if the interpretation of an utterance depends so heavily on context and background knowledge, and if context and background knowledge are culturally and linguistically specific, does the language one speaks shape what one can think? The Sapir-Whorf question, examined next, is in part a question about where pragmatic context ends and cognitive structure begins.


Key Figures in This Artifact

Charles Morris (1901–1979): Introduced the distinction between semantics, pragmatics, and syntactics in Foundations of the Theory of Signs, 1938.
H. P. Grice (1913–1988): The Cooperative Principle; conversational maxims; conversational and conventional implicature. William James Lectures, Harvard, 1967; published as "Logic and Conversation," 1975.
Laurence Horn (b. 1945): Formalization of scalar implicature; the Q-principle and R-principle as a revision of Grice; A Natural History of Negation, 1989.
J. L. Austin (1911–1960): Performative utterances; locutionary, illocutionary, and perlocutionary acts; felicity conditions. How to Do Things with Words, 1962.
John R. Searle (b. 1932): Systematization of speech act theory; five categories of illocutionary acts; the Chinese Room argument. Speech Acts, 1969.
Penelope Brown and Stephen Levinson: Face theory; politeness universals; positive and negative face. Politeness, 1987.
Gottlob Frege (1848–1925): Presupposition and truth-value gaps; the sense-reference distinction, 1892.
Bertrand Russell (1872–1970): The theory of definite descriptions; "On Denoting," 1905.
Elizabeth Loftus (b. 1944): Presupposition and memory distortion; eyewitness testimony research, 1970s.
Charles Sanders Peirce (1839–1914): The sign triad: icon, index, symbol; indexicality and deixis.
David Lewis (1941–2001): Common ground; scorekeeping in a language game; possible worlds semantics.
Robert Stalnaker (b. 1940): Pragmatics as the study of context; common ground and assertion.
Herbert Clark (b. 1940): Common ground and grounding in communication; Using Language, 1996.
Dan Sperber and Deirdre Wilson: Relevance Theory; the cognitive and communicative principles of relevance; pragmatic enrichment of explicit content. Relevance, 1986.
George Lakoff and Mark Johnson: Conceptual metaphor theory; Metaphors We Live By, 1980.
Kent Bach (b. 1943): Impliciture; the semantics-pragmatics distinction; critique of Relevance Theory's treatment of explicit content.


Meaning Opens Into Thought

Once meaning is understood as a cooperative achievement between minds, the next question becomes unavoidable: does the language you speak shape the way you think?
