Language—who can use it, and how well—has been in the news recently. If you haven’t heard, a recent AI language model was released for public use. It’s a chatbot from the company OpenAI called ChatGPT. And its capabilities are, to use a technical term, astounding.
It can draft essays at an advanced undergraduate level on just about any topic. It can write a scene for a movie script from any premise you specify. It can plan a set of meals for you this week, provide the recipes, compile a shopping list, and tell you how what you’re eating will affect your overall health and fitness goals. And in terms of grammar and sentence construction, it makes no mistakes. Literally none. This isn’t your grandmother’s chatbot.
This episode is not about how ChatGPT works; it is about our current understanding of how language works. With advances in AI allowing us to create more sophisticated programs for using language, that understanding may change in the near future. But even with all the recent advances, the underlying logic behind how these kinds of programs work, and what they can teach us about human language, goes back decades in research on cognitive science and artificial intelligence. It can seem like there’s something in ChatGPT that understands the words it’s using. The truth is, we don’t know yet. It’s too soon to tell.
What we do know is that we humans understand the words we use, and why we’re capable of doing that is one of the great and fantastic puzzles of our species. My guest today, Gary Lupyan, is one of my favorite sources of insights about that puzzle. Gary is a professor of psychology at the University of Wisconsin, Madison. He studies language, particularly semantics, from a cognitive science perspective.
This conversation is about Gary’s point of view on language, words, and how we use them both to construct an understanding of the world and to convey it to those around us. It’s not necessarily about endorsing one big sweeping theory, but about putting together some of the pieces of what we know, what we don’t know, and what we may have misunderstood about language.
For example, take the famous Sapir-Whorf hypothesis. This is the idea that language determines thought—that if you were to speak a language other than the one(s) you already do, it could potentially lead to an entirely different way of seeing the world. And really, the big picture of Sapir-Whorf has been settled. The truth, honestly, is not that exciting. Language does determine thought—but only a little, and not in any ways that can’t be worked around. As Gary describes it, language is a system of categories. The language we speak can orient us toward different ways of delineating the world into those categories. But no language prevents us from seeing or comprehending any category outright. What’s really fascinating here is not the broadest aspects of the overarching theory, but the implications for specific cases. These are versions of the question we touch on a lot throughout this conversation.
But in terms of grand theories, a general theme emerged in our conversation of describing ideas about language on a spectrum: from Chomsky to Tomasello. Noam Chomsky you’ve probably heard of. He’s one of the most prolific scholars of the second half of the twentieth century. He was a founding father of cognitive science, and to a large degree single-handedly determined the trajectory of linguistics for a period of almost thirty years. His most famous construction is "colorless green ideas sleep furiously." It’s a totally legitimate English sentence, but one that expresses an illegitimate concept. It is representative of Chomsky’s focus on structure: he didn’t care about whether or not anyone had ever used that sentence; he just cared that it was possible to do so.
Michael Tomasello, on the other hand, takes a usage-based approach to language. Mike has been a guest on this show and is another cognitive scientist who has had a big impact on my own thinking. He believes the way to make sense of language is as a tool, one that allows us to communicate with the other members of our species. Structure is important. But how language is used in real-life social settings is more important. Spoiler alert: both Gary and I are much more sympathetic to Tomasello’s characterization of language than we are to Chomsky’s. Nonetheless, both theoretical approaches offer important insights about language and the way we humans use it.
The way I approached this conversation was essentially to ask Gary the biggest questions I could come up with about language: What’s it for? How do words get their meanings? What was protolanguage like? What parts of language are determined by critical periods? And then just see where he took it from there.
Overall, this conversation was really a joy to have. We cover a lot of my favorite topics in cognitive science. Language is something I can get really worked up about, and it was fun to be able to talk about it with someone who is so much more knowledgeable than I am. For anyone who has ever used words or had words used on them, I think you’ll find something to enjoy in this conversation.
At the end of each episode, I ask my guest about three books that have most influenced their thinking. Here are Gary’s picks:
Vehicles: Experiments in Synthetic Psychology
by Valentino Braitenberg (1984)
A cult classic: the perfect book for thinking about thinking.
Consciousness Explained
by Daniel Dennett (1991)
It’s not about getting all the details right; it’s about inspiring further thinking.
4 3 2 1: A Novel
by Paul Auster (2017)
A hugely ambitious effort by a novelist at the top of his game. For students of the epic conceptual masterpiece.
Honorable mention: my favorite book on language, by Michael Tomasello, if you’re interested in the technical details of what we talked about.
(I hope you find something good for your next read. If you happen to find it through the above links, I get a referral fee. Thanks!)