Beatriz (L), 7, from Rio de Janeiro speaks with an indigenous girl at the Kari-Oca village as part of the ‘Rio+20’ United Nations Conference on Sustainable Development. Photo by Ricardo Moraes/Reuters


Is linguistics a science?

Much of linguistic theory is so abstract and dependent on theoretical apparatus that it might be impossible to explain

by Arika Okrent

Science is a messy business, but just like everything with loose ends and ragged edges, we tend to understand it by resorting to ideal types. On the one hand, there’s the archetype of the scientific method: a means of accounting for observations, generating precise, testable predictions, and yielding new discoveries about the natural consequences of natural laws. On the other, there’s our ever-replenishing font of story archetypes: the accidental event that results in a sudden clarifying insight; the hero who pursues the truth in the face of resistance or even danger; the surprising fact that challenges the dominant theory and brings it toppling to the ground.

The interplay of these archetypes has produced a spirited, long-running controversy about the nature and origins of language. Recently, it’s been flung back into public awareness following the publication of Tom Wolfe’s book The Kingdom of Speech (2016).

In Wolfe’s breathless re-telling, the dominant scientific theory is Noam Chomsky’s concept of a ‘universal grammar’ – the idea that all languages share a deep underlying structure that’s almost certainly baked into our biology by evolution. The crucial hypothesis is that its core, essential feature is recursion, the capacity to embed phrases within phrases ad infinitum, and so express complex relations between ideas (such as ‘Tom says that Dan claims that Noam believes that…’). And the challenging fact is the discovery of an Amazonian language, Pirahã, that does not have recursion. The scientific debate plays out as a classic David-and-Goliath story, with Chomsky as a famous, ivory-tower intellectual whose grand armchair proclamations are challenged by a rugged, lowly field linguist and former Christian missionary named Daniel Everett.
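To make the notion concrete: recursion here means a rule that can apply to its own output, so a finite procedure yields unboundedly deep embeddings. A toy sketch in Python (my own illustration, not drawn from the linguistics literature):

```python
# Toy illustration of recursive embedding: one finite rule, applied to
# its own output, nests clauses inside clauses without limit.
def embed(speakers, core):
    """Wrap 'X says that ...' clauses recursively around a core claim."""
    if not speakers:
        return core
    head, rest = speakers[0], speakers[1:]
    return f"{head} says that {embed(rest, core)}"

print(embed(["Tom", "Dan", "Noam"], "language is innate"))
# → Tom says that Dan says that Noam says that language is innate
```

However many names you feed in, the same single rule does all the work – which is the property the universal-grammar hypothesis takes to be essential.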

Stories this ripe for dramatisation come along rarely in any branch of science, much less the relatively obscure field of theoretical linguistics. But the truth will always be more complicated than the idealisations we use to understand it. In this case, the details lend themselves so well to juicy, edifice-crumbling story arcs that a deeper, more consequential point tends to be overlooked. It concerns not Everett’s challenge to Chomsky’s theory, but Chomsky’s challenge to the scientific method itself.

This counter-attack takes the form of the Chomskyans’ response to Everett. They say that even if Pirahã has no recursion, it matters not one bit for the theory of universal grammar. The capacity is intrinsic, even if it’s not always exploited. As Chomsky and his colleagues put it in a co-authored paper, ‘our language faculty provides us with a toolkit for building languages, but not all languages use all the tools’. This looks suspiciously like defiance of a central feature of the scientific archetype, one first put forward by the philosopher Karl Popper: theories are not scientific unless they have the potential to be falsified. If you claim that recursion is the essential feature of language, and if the existence of a recursionless language does not debunk your claim, then what could possibly invalidate it?

In a 2007 interview, Everett said he emailed Chomsky: ‘What is a single prediction that universal grammar makes that I could falsify? How could I test it?’ According to Everett, Chomsky replied to say that universal grammar doesn’t make any predictions; it’s a field of study, like biology.

The nub of the disagreement here boils down to what exactly linguistics says about the world, and the appropriate archetypes we should apply to make it effective. So just what kinds of questions does linguistics want to answer? What counts as evidence? Is universal grammar in particular – and theoretical linguistics in general – a science at all?

Popper first proposed the criterion of falsifiability as a way of drawing a bright line between science and pseudoscience. If you go out looking for confirmation of a theory or hypothesis, he said, you are almost certain to discover it. For a pseudoscience such as astrology, the predictions are so vaguely stated that any apparent contradiction can be explained away. For Freudian psychology, there’s no type of human behaviour that could not be accounted for by unconscious drives.

By contrast, good theories or hypotheses are those that allow you to search for contrary evidence. Thus Albert Einstein’s theory of general relativity made a very specific prediction about the effect of gravity on light, which could be subsequently tested during the solar eclipse of 1919. Unlike astrology or Freudianism, relativity could be contradicted. It was possible to conceive of an observation that would conflict with one’s expectations (although the eclipse ultimately vindicated Einstein). The capacity to be disproved is what makes general relativity scientific.

To what extent is linguistic theory scientific, in this Popperian sense? Well, the first thing to say is that linguistics is hard to write about precisely because it seems like it should be easy. When it comes to general relativity, we accept that there are principles at work that are complex and that we don’t observe directly. But it’s harder for us to accept that the same is true of language – we do observe it directly, every day. The elementary particles of language are out there for the taking: words, sounds, sentences.

Linguistics therefore requires you to look beyond what you think you know, and start looking instead at what you don’t know that you know. This implicit knowledge has been the object of study in linguistics since the 1950s. Back then, Chomsky revolutionised the field when he observed that grammar is a generative system. That is, a language is not a big set of all the words and sentences people say in that language; rather, it’s a mental system of rules for generating acceptable sentences. We have the ability to create sentences we’ve never heard that conform to norms we’ve never explicitly learned. From the limited, finite exposure we get while learning our native language, we somehow acquire an unlimited, infinitely productive system of rules.
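The generative idea can be made concrete with a toy grammar – a hypothetical sketch of my own, not a serious model of English. A handful of rewrite rules generates an unbounded set of sentences, because one rule can reintroduce the start symbol inside a noun phrase:

```python
import random

# A toy generative grammar (illustrative only): a finite set of rewrite
# rules whose output is unbounded, because the NP rule can reintroduce
# the sentence symbol S on its right-hand side.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Mary"], ["Bill"], ["the", "rumour", "that", "S"]],
    "VP": [["slept"], ["believed", "NP"], ["was", "eating", "spaghetti"]],
}

def generate(symbol="S"):
    """Rewrite a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:          # a terminal word: stop rewriting
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for sym in expansion:
        words.extend(generate(sym))
    return words

print(" ".join(generate()))
```

A finite rule table, yet no longest sentence exists – a small-scale analogue of acquiring ‘an unlimited, infinitely productive system of rules’ from finite exposure.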

Trying to pinpoint those rules depends on a rather counterintuitive practice: not collecting examples of what people actually say, but carefully crafting sentences that no one would ever say. For example, this sentence is clearly violating some principle:

(1) What did Mary believe the rumour that Bill was eating?

The task for the linguist is to formulate the principle that this sentence violates by comparing it with acceptable sentences. Is the problem that the sentence has no meaning? Well, that’s hard to sustain in light of the fact that this sentence does make sense:

(2) Mary believed the rumour that Bill was eating spaghetti.

If spaghetti was drowned out by a sudden noise when someone was saying example 2, there’s no obvious reason why we couldn’t respond by asking the question in example 1. Maybe the problem is that the what in example 1 refers to the object of eating, which would appear too far away at the other end of the sentence if it weren’t a question? No, because in this example, the distance between what and the object it traces is fine:

(3) What did Mary say that John thought Bill was eating _____?

All this suggests that the principle will refer not to something we can observe directly – the order of the words and the distance between them – but to some second-level of analysis we must infer, some grouping of the words into a hierarchical structure.

Determining the nature of those structures has been the project of linguistics for decades now. The linguist forms a hypothesis about the configuration of words (Mary [believed [the rumour [that Bill was eating spaghetti]]]); formulates a rule referring to that structure, which is violated by the unacceptable sentence (you can’t move the object of a verb out to the what position if it has to cross over a noun-phrase level – ‘the rumour’ – to get there); and tests the hypothesis by coming up with more good and bad sentences.
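To see what such a rule looks like when stated over hierarchical structure rather than word order, here is a toy model (my own sketch, with invented labels, not a real linguistic formalism): trees are nested (label, children) tuples, and the hypothesised constraint forbids a gap whose path to the root passes through a noun-phrase node:

```python
# Toy structural-constraint checker (an illustrative sketch, not a real
# formalism): a sentence is a nested (label, children) tree, and
# extraction is blocked if the path from the gap to the root crosses
# an intervening NP node.
def path_labels(tree, target):
    """Labels on the path from the root down to the node labelled `target`."""
    label, children = tree
    if label == target:
        return [label]
    for child in children:
        sub = path_labels(child, target)
        if sub:
            return [label] + sub
    return []

def extraction_allowed(tree):
    """The gap's path may not cross an NP between root and gap."""
    return "NP" not in path_labels(tree, "GAP")[1:-1]

# (1) *What did Mary believe [NP the rumour that Bill was eating GAP]?
bad = ("S", [("V", []), ("NP", [("S", [("V", []), ("GAP", [])])])])
# (3)  What did Mary say [S that John thought [S Bill was eating GAP]]?
good = ("S", [("V", []), ("S", [("V", []), ("S", [("GAP", [])])])])

print(extraction_allowed(bad), extraction_allowed(good))
# → False True
```

The point of the exercise is that the constraint is stated over inferred structure – the bracketing – not over anything directly observable, such as the linear distance between the words.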

This is an incredibly counterintuitive way to think about language, which after all is a thing we intuitively know how to use. But it’s still science, an effort to discover the nature of something by forming hypotheses and testing them against evidence. Sentence 3 is evidence that the hypothesis mentioned above it – about the distance of what from the place it traces to – is incorrect. It’s just that the evidence here is not language as spoken ‘out there’ in the world, but an idealised set of consciously contrived sentences.

Consciously contrived examples are nothing unusual in science. Frictionless planes, perfect spheres and ideal gases are tremendously useful abstractions from the messy reality of ‘stuff in the world’. However, it is unusual for a science to depend as heavily as linguistics does on intentional violations and bad examples – which, when it comes down to it, are only violations to the extent that our intuitive judgment says they are. From this perspective, the object of study in linguistics is not words, sentences or human communicative behaviour, the things we can see and hear – it’s an underlying system, an abstraction. The abstraction makes predictions, not necessarily about what people will say, but about what their intuitive judgments should be.

In Chomsky’s formulation, we are not just after a set of abstract rules that account for the things we can see and hear, but one that explains why they are the way they are. In the late 1970s, Chomsky began to refer to this method of enquiry as the ‘Galilean style’ – a term coined by the German philosopher Edmund Husserl and popularised by the American physicist Steven Weinberg. For Galileo to get at the mathematical truths, he had to have the vision to abstract away from real-world effects that interfered with the expected observations. The laws governing falling bodies, for example, had to be considered apart from air resistance or friction. Air resistance is a fact of the real world, but the scientific view of the motion of falling bodies is the ‘truth’. As Weinberg put it in a 1976 paper, these are ‘abstract mathematical models of the Universe to which at least the physicists give a higher degree of reality than they accord the ordinary world of sensations’.

Chomsky’s Galilean vision was that our intuitive judgments about language stem from an innate language faculty, a universal grammar underlying the human capacity for language. His project is to determine the essential nature of that universal grammar – not the nature of language, but the nature of the human capacity for language. The distinction is a subtle one. Many linguists use the same kind of evidence (native speakers’ intuitive acceptability judgments) and the same methods (hypothesising structures and constraints that account for them), but simply want to discover the rules of particular languages, or to examine how different languages handle comparable phenomena. There are also many linguists who look at language use in the real world, and want to answer questions such as: what are the social factors that lead to the use of one linguistic formulation over another? What do children’s errors reveal about their knowledge of language? What does context contribute to the interpretation of linguistic meaning? All of this can be done without making any commitment to whether or not the descriptions are part of an innate universal grammar.

However, for Chomskyans there is a standing commitment to this idea. Universal grammar is not a hypothesis to be tested, but a foundational assumption. Plenty of people take issue with that assumption, but all types of linguists generally agree that there are indeed constraints on what a human language can be, that languages don’t do absolutely anything. They differ on where those restrictions come from. The commonalities among languages might come from more general cognitive domains, such as memory and information-processing capabilities, or from the commonalities in human social or cultural capacities. Chomsky doesn’t think so. In The Architecture of Language (2000), he said: ‘There are properties of the language faculty, which are not found elsewhere, not only in the human mind, but in other biological organisms as far as we know.’ The essential part of the human capacity for language is specific to language and to humans.

So what is that essential part? The phrase ‘universal grammar’ gives the impression that it’s going to be a list of features common to all languages, statements such as ‘all languages have nouns’ or ‘all languages mark verbs for tense’. But there are very few features shared by all known languages, possibly none. The word ‘universal’ is misleading here too. It seems like it should mean ‘found in all languages’ but in this case it means something like ‘found in all humans’ (because otherwise they would not be able to learn language as they do).

This all sounds maddeningly circular or at the very least extremely confusing. Grammar is a property of language, but universal grammar is a property of humans, and humans are able to learn the particular grammar of their language because they have the universal grammar of humans. Or something like that. So what are the relevant observations? What is the evidence?

Native-speaker intuitions about as wide a range of human languages as possible seems a good place to start. However, the more languages that the Chomskyan theory tried to incorporate into its abstract core, the messier things got. A vast body of theoretical work has been produced in the past few decades, involving structural descriptions and rules that account for the difference between good and bad sentences over many specific examples, in many different languages, through a common set of fundamental principles. But to achieve this feat, the theory grew ever more technically complex, with more and more levels and stipulations, and lots of exceptions that necessitated even more theoretical machinery to explain. The whole agenda was moving far away from the Galilean ideal of explanatory simplicity. Anyway, it was unlikely that so much specific complex machinery could possibly be instantiated in the human brain, and even more unlikely that it could have evolved.

In the 1990s, Chomsky introduced something he called the Minimalist programme. It is presented not as a theory of what universal grammar is, but as an outline of a productive way to think about things, one that prioritises simplicity, elegance, parsimony. He invoked another aspect of Galilean style, the idea that the scientist should be guided by the expectation that the deepest laws of nature will be the easiest and simplest ones. In a 1999 interview, Chomsky said that ‘it is the abstract systems you are constructing that are really the truth; the array of phenomena is some distortion of the truth because of too many factors, all sorts of things. And so, it often makes good sense to disregard phenomena and search for principles that really seem to give some deep insight into why some of them are that way, recognising that there are others that you can’t pay attention to.’ In other words, air resistance is real, but it’s just not relevant to the deeper truth about the motion of falling bodies.

You might think that a simplifying turn would be easy to explain to a lay audience, but unfortunately that’s not the case. Even linguists have trouble understanding it. Frankly, I’m not sure I really understand it myself. The drive toward minimalism here is actually a drive toward abstraction: not boiling down the facts to see what they all share in common, but abstracting away from observations until what we have is so remote that it defies appeals to concrete models, metaphors or even formal, mathematical-type statements.

We’re pretty far from the ideal archetypes of science now. What are we measuring? What are we even observing? What is the theory, what are its specific claims, and how are we testing them? Is this science? Or philosophy? And does it even matter?

What we have is a field with a big, exciting idea at its heart – that there is an innate universal feature of human language – but a major part of it is unfalsifiable and it’s so abstract and dependent on theoretical apparatus that it might be impossible to explain. Chomskyan linguistics has become like a theoretical physics version of linguistics, with universal grammar as the elusive, unifying theory of everything. Does that mean it’s not worth doing? Is there any value to working out an explanatory model for its own sake? To saying, let’s just see where this leads? The generative linguist Cedric Boeckx at the University of Barcelona stresses that minimalism is not a theory, but a programme – ‘and as such it is neither right nor wrong; it can only be fecund or sterile’, he wrote in ‘A Tale of Two Minimalisms’ (2010).

I must admit, there have been times when, upon going through some highly technical, abstract analysis of why some surface phenomena in two very different languages can be captured by a single structural principle, I get a fuzzy, shimmering glimpse in my peripheral vision of a deeper truth about language. Really, it’s not even a glimpse, but a ghost of a leading edge of something that might come into view but could just as easily not be there at all. I feel it, but I feel no impulse to pursue it. I can understand, though, why there are people who do feel that impulse.

I’ve also had the same feeling about analyses that make appeals to the idea of regularities arising from something other than a core, innate language capacity: from our affinity for patterns, our human social endowments, our historical accumulation of cultural habits. At this point, I’m no longer a practitioner of linguistics, but an observer and communicator about the discipline. I have the luxury of not having to argue for any particular point of view. To each her own facts to find interesting. To each his own impulse to follow.

Still, that’s a rather unsatisfying perch to land on. As narrative archetypes go, ‘Ah well, everybody’s got their own thing’ is not compelling at all. Shouldn’t there be something at stake in determining which way of looking at things is the right one? Isn’t the whole point to find the truth?

The Pirahã themselves would disagree. In their culture, as Everett describes it in his memoir of his time with them, Don’t Sleep, There Are Snakes (2008), ‘they have no craving for truth as a transcendental reality’. They live in the moment, according to what he terms the ‘immediacy of experience principle’. Ironically, in the 2005 article that began the whole Chomsky/Everett debate, Everett barely touched on the notion that the Pirahã’s lack of recursion might challenge the theory of universal grammar. Instead, his aim was to show that the Pirahã cultural commitment to immediate, concrete experience permeated the very structure of their language: not embedding one phrase inside another was just one of the many ways that the Pirahã prioritised the here and now. Other evidence he adduced for this priority included the simplicity of the kinship system, the lack of numbers, and the absence of fiction or creation myths.

The years-long immersion in Pirahã culture and the struggle to understand it had a profound personal effect on Everett. His encounter with their concept of truth made him rethink his belief in God and eventually become an atheist. His renunciation of universal grammar involved a similar disillusionment, since he had worked within the framework for the first 25 years of his career. Yet Everett’s study of the Pirahã falsifies neither Christianity nor universal grammar, since they are not designed for falsification in the first place. They are both a way to try to get a handle on reality. The first asks that you take a set of assumptions on faith because they are the truth. The second provides a set of assumptions for generating a line of enquiry that might at some point lead to the truth.

I’m not sure whether you can call yourself a Christian if you reject the foundational tenets of Christianity – but you can certainly reject the assumptions of universal grammar and still call yourself a linguist. In fact, a drive to debunk Chomsky’s assumptions has led to a flourishing of empirical work in the field. Even as a foil, villain or edifice to be crumbled, the theory of universal grammar offers a framework for discovery, a place to aim the magnifying glass, chisel or wrecking ball, as the case may be. Archetypes of all kinds can simplify and exaggerate, and universal grammar is no different. But whether as a structured mythology or a catalyst for conflict, it nonetheless helps us to reach a deeper understanding of the world.