Photo: Elizabeth Kay/Unsplash
Those of us who learned arithmetic using pen and paper, working with the ten digits 0-9 and place value, may take for granted that this is the way it’s always been done, or at least the way it ought to be done. But if you think of the amount of time and energy spent in the early school years just to teach place value, you’ll realise that this sort of numeracy is not preordained.
Over the past 5,500 years, more than 100 distinct ways of writing numbers have been developed and used by numerate societies, linguistic anthropologist Stephen Chrisomalis has found. Thousands more ways of speaking numbers, manipulating physical objects, and using human bodies to enumerate are known to exist, or to have existed, he writes in his new book, Reckonings: Numerals, Cognition, and History. Remarkably, each of the basic structures was invented multiple times independently of one another.
In Reckonings, Chrisomalis considers how humans past and present have used numerals, reinterpreting historical and archaeological representations of numerical notation and exploring the implications of why we write numbers with figures rather than words. Drawing on, and expanding upon, the enormous cross-cultural and comparative literatures in linguistics, cognitive anthropology and the history of science that bear on questions of numeracy, he shows that numeracy is a social practice.
Chrisomalis took time out from a busy end to the spring semester to field a few questions about his new book, his spirited defense of Roman numerals, his complicated relationship with mathematicians and his thoughts on the validity of the Sapir-Whorf hypothesis.
Philip Laughlin: We’ve worked with a number of linguists and anthropologists over the years but you are our first author to specialize in written numerical systems. What sparked your interest in this topic? Why are numerals an important area of research?
Stephen Chrisomalis: I first became interested in numerals when I wrote a paper in an undergraduate cognitive anthropology course in the mid-1990s. After moving away from the subject for a couple of years, I came back to it when I was looking for a PhD topic along with my advisor, the late Bruce Trigger at McGill. This resulted in my dissertation, which later became my first book, Numerical Notation: A Comparative History (Cambridge, 2010). It was an unorthodox project for an anthropology department – neither strictly archaeological nor ethnohistorical nor ethnographic. But that was exactly the sort of creative project that it was possible to do at McGill at that time – and that, sadly, given the exigencies of the modern job market, is almost impossible to imagine doing today.
What brought me to numerical notation as a dissertation subject is much of what still appeals to me about it now. We have evidence from over 100 different systems used across every inhabited continent over 5,000 years, including all the world’s literate traditions. Numbers are a ubiquitous domain of human existence, and written numerals are virtually everywhere that there is writing. While, of course, the historical and archaeological records are partial (which is in turn both exciting and frustrating), understanding their history and cross-cultural transmission is a tractable problem. We can tell, roughly, when and where they originate and how they relate to one another.
Also, every user of a numerical notation system is also a speaker of one or more languages, which lets us ask great questions comparing number words to numerical notation and to show how they interact. These questions can be as simple as “Do people say ‘two thousand twenty one’ or ‘twenty twenty one’?” and as big as “Were numbers first visual marks or spoken words?” As a linguist and an anthropologist, I find that very attractive. And because there is a substantial literature on numerical cognition, the comparative, historical data I bring to the table are useful for testing and expanding on our knowledge in that interdisciplinary area.
You had the cover image and title for this book in your head for years. Can you explain the significance of the watch and why you chose the title “Reckonings” in the first place? What were you trying to get across to potential readers with that evocative word?
The title ‘Reckonings’ invokes the triple meaning of the word ‘reckon’ – to calculate, to think and to judge – which parallels the three parts of the subtitle: “Numerals, Cognition, and History.” Reckoning is not mathematics, in its technical, disciplinary sense, but it reflects the everyday practices of working with and manipulating numbers. Then, in English and in other languages, we extend the verb for calculation to thinking in general – to reckon thus involves the more general cognitive questions I hope I’ve addressed. Finally, we come to reckoning as judgement – every numerical notation faces its own reckoning as users decide whether to adopt, transmit and eventually, abandon it.
As I spend a lot of time talking about the obsolescence of numeral systems, most notably but not limited to the Roman numerals, I wanted to echo this decision-making process of judgement by which users decide to abandon one notation in favor of another. “Reckonings” signals that the book might be about arithmetic – but it’s about a lot more than that.
The cover image of the book is a watch designed by the French watchmaker Jean-Antoine Lépine in 1788, now held at the British Museum (BM 1958,1201.289). Lépine was one of the first horologists to consistently use Western (commonly called Arabic) numerals instead of Roman numerals for hour markers, but in the 1780s he made a number of watches like this one, where he instead playfully mixed the two systems. The hybridity on this sort of artifact is visually striking and memorable to the viewer, both then and now. But actually, it isn’t as weird as it seems; we combine numerical representations all the time, like when we write something like “1.2 million” instead of “1,200,000”.
Unlike the Roman numerals alone, which would be visually ‘unbalanced’ on a watch, this hybrid system expresses every number from 1 through 12 in no more than two digits. To me it embodies the passage of time in material form and the replacement of the Roman numerals. By the 1780s, they had been replaced for most purposes, but watch and clock faces are one of the places where, even today, they’re pretty common. As a sort of metonym for this historical process, the Lépine watch highlights that the decline and fall of the Roman numerals was not a slow, steady, predictable replacement, but one with many disjunctures.
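The two-digit constraint described above can be sketched in Python. The selection rule below is an illustration consistent with that description, not a documented account of Lépine’s actual design: keep the Roman form of each hour when it fits in two signs, and fall back on Western digits when it does not.

```python
def to_roman(n):
    """Convert a positive integer (1-3999) to classical Roman numerals."""
    pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, sign in pairs:
        count, n = divmod(n, value)
        out.append(sign * count)
    return "".join(out)

def hybrid_hour(h):
    """Roman form if it fits in two signs, otherwise Western digits."""
    roman = to_roman(h)
    return roman if len(roman) <= 2 else str(h)

print([hybrid_hour(h) for h in range(1, 13)])
# ['I', 'II', '3', 'IV', 'V', 'VI', '7', '8', 'IX', 'X', 'XI', '12']
```

Under this rule only 3, 7, 8 and 12 – the hours whose Roman forms run to three or four signs – switch to Western digits, which keeps every marker on the dial to at most two characters.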
At the book launch, you talked a bit about the future of number systems, but with the caveat that you are not a “futurologist”. So I’ll ask you to put on a historian’s hat instead: What kind of cultural changes are necessary for a society to switch from one number system to another? It seems to me that significant changes would have to happen at least at the political and economic level for one numerical system to supersede another, right?
One of the key arguments in Reckonings is that numerical notations don’t get replaced in orderly, ‘survival of the fittest’ processes. To argue that simply raises the question: fittest for what purpose? Numerical notations are principally used for representation and communication, not computation, so it’s the context of writing and reading practices, rather than changes in mathematics, that motivate their history.
The last time such a replacement happened was with the advent of the printing press and the rise of a more general literacy among white middle-class Europeans in the 15th and 16th centuries, alongside mercantile capitalism – and not coincidentally, that was exactly when, where and by whom the Roman numerals were substantially abandoned. Strikingly, this was not accompanied by a lot of fanfare or explicit comparison of the two systems. The Western numerals had been around in Europe for 500 years, and then, once conditions were right, they supplanted Roman numerals so rapidly that some commentators could see the change within a lifetime.
Just in general, though, we should be very hesitant to assume that we are at the ‘end of history’ of anything, whether that’s a number system or a political system. Nothing lasts forever – we may simply lack the imagination to think of how and when it could be replaced. I’m confident that your average Roman or Egyptian or Aztec merchant or engineer couldn’t have imagined the Western numerals. In the same way, I think we’re subject to the same sort of cognitive closure – it’s hard to imagine from our vantage point what could ever change to lead to the abandonment of a universal notation.
While I don’t see it happening soon, the sorts of massive technological changes of the past 50 years, extended for another century or two, could be such a catalyst. But alone, technology isn’t deterministic – only in conjunction with changes in social, economic and communicative practices would that be likely. And even then, there’s no guarantee. Not only would such a system need to be invented, but some community of users would need to perceive it as being sufficiently better, for some purpose, to warrant abandoning what is, at this point, a universal notation. One of the costs of doing so is that, because numerical systems are for communicating, using a rare system means you aren’t able to communicate with as many people as with a common one. That isn’t necessarily fatal to a notation’s survival, but it is an obstacle to be overcome.
I’m curious to know how mathematicians have responded to your work. How do they feel about the historical perspective you bring to number systems and arithmetic, and does it have any bearing on their own work?
Well, I sometimes joke with my friends and collaborators who are mathematicians that mathematics is the branch of anthropology that focuses on numbers! Seriously, though, mathematics and anthropology seem to stand on the opposite ends of a spectrum about the degree to which the phenomena they study are fundamentally human. Mathematicians are mostly Platonists – or, more strictly, mathematical realists – who argue very persuasively that abstract mathematical objects like numbers are in some sense real, independent of our observation of them. Some of this sense is intuitive – anyone who has done mathematics has experienced this sensation of real discovery, even when ‘discovering’ something that had been known for centuries.
But some of it is based on the quite proper insight that even very weird mathematical entities, like π, are not subject to cultural whim. In contrast, anthropologists tend to see most things as culturally constructed and subject to variation – a human product of specific social contexts. These are caricatures, but there’s some truth to them. Anthropologists tend not to spend a lot of time on things where the human element is irrelevant.
In the late 1940s and early 1950s this led to a debate between the Platonist mathematician and popularizer Martin Gardner and the materialist evolutionary anthropologist Leslie White. The former treated variation in the form of notation as nothing more than differences in representing a deep, underlying reality, while the latter saw the construction of mathematical objects by human minds as prior to the invention of formal mathematics. And because anthropologists (as a stereotype with some validity) don’t often have much training in mathematics, it’s easy to fall into this binary.
But actually it doesn’t need to be nearly so antagonistic. Most mathematicians freely grant that humans create notations, and that different notations make some ideas easy to see and others much harder. And even a brief look at the comparative history of mathematics and numeration shows that there is much more commonality among human arithmetic than there is difference. There simply is nowhere where 1 + 1 = 3. That’s because the human mind is interacting with the real world to produce insights, and that’s true even for mathematics where the objects of analysis are quite abstract.
You spend a lot of time debunking the idea that the Roman numerals are an inferior notation. Why so much energy on defending an archaic system? What do the Roman numerals tell us about number systems in general?
Of all the number systems still in use today, the Roman numerals are the only one most readers will be familiar with that uses a structure different from the now nearly universal Western numerals 0-9. They may be archaic, but we still use them for all sorts of secondary and prestige functions, like counting things we really value, such as kings, popes and Super Bowls. (Although every year, the week before the Super Bowl, there is a news article complaining about them!) They are also still taught and learned in lots of schools. And actually, that’s part of the problem, because along with the signs IVXLCDM and the rules for combining them, we are also taught an ideology that the Roman numerals were no good for math, and that this was why they were discarded. That ideology is so pervasive that it has rarely been questioned.
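The combination rules taught alongside the signs IVXLCDM – signs add up, except that a smaller sign written before a larger one is subtracted – are compact enough to state as a short decoding sketch in Python (an illustration of the classical rules, not anything from the book):

```python
def from_roman(s):
    """Decode a classical Roman numeral: signs add up, except that a
    smaller sign written before a larger one is subtracted (IV = 4)."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50,
              "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(s):
        v = values[ch]
        if i + 1 < len(s) and values[s[i + 1]] > v:
            total -= v  # subtractive pair: IV, IX, XL, XC, CD, CM
        else:
            total += v
    return total

print(from_roman("MCMXCIV"))  # 1994
```

The whole system fits in a dozen lines – a reminder that its eventual abandonment had more to do with changing reading and writing practices than with any unmanageable complexity.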
It’s not that I think the Roman numerals are the best system available; they just aren’t nearly as awful as we’ve been conditioned to believe. They not only survived but thrived, throughout large swaths of Europe, for 2,000 years. Unless we believe that premodern people were a bunch of dupes, they can’t have been so bad. Even after the Western numerals 0-9 were introduced into Europe through their Arabic and Indian ancestors, it took hundreds of years for the Roman numerals to be supplanted. This is not some sort of archaism that just happened to avoid being snuffed out. It was a core representational practice throughout most of Europe for almost any purpose you can imagine numbers being used for.
The one thing they weren’t really used for is what we now think of as pen-and-paper arithmetic – lining up numbers in columns. That’s because in classical antiquity and indeed, until the past few hundred years, arithmetic was an embodied and material practice (e.g., on a counting board or abacus) whereas numerical notation was used for writing and recording. This division of labor is very common throughout the world’s numerical notations, most of which were never manipulated directly for arithmetic.
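That division of labor – compute with counters on a board, record the result in notation – can be caricatured with a toy model in Python. The layout below (one line of counters per power of ten) is a simplification of real counting boards, which also used intermediate five-valued positions:

```python
from itertools import zip_longest

def to_counters(n):
    """Counters per decimal line, units line first (e.g. 47 -> [7, 4])."""
    counters = []
    while n:
        n, d = divmod(n, 10)
        counters.append(d)
    return counters or [0]

def board_add(a, b):
    """Pool both numbers' counters line by line, then carry: ten counters
    on one line are exchanged for a single counter on the line above."""
    lines = [x + y for x, y in
             zip_longest(to_counters(a), to_counters(b), fillvalue=0)]
    carry, result = 0, []
    for c in lines:
        carry, d = divmod(c + carry, 10)
        result.append(d)
    if carry:
        result.append(carry)
    # only at the end is the board read back off into notation
    return sum(d * 10 ** i for i, d in enumerate(result))

print(board_add(28, 47))  # 75
```

Notice that the written numerals never participate in the computation itself; they appear only when the finished board is read off and recorded – which is exactly why a notation’s unsuitability for column arithmetic was no obstacle to its survival.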
Since you are an anthropology professor and I am a cognitive science editor at The MIT Press, I would be remiss if I did not ask you about Benjamin Lee Whorf’s Language, Thought, and Reality. This is probably the first cognitive science book MIT Press ever published and it has remained in print for over 60 years now. In 2021, what do you make of that book in particular and the Sapir-Whorf hypothesis in general? Is it an idea that still retains power?
Language, Thought, and Reality is almost certainly the most cited piece of linguistic anthropological writing ever, and it is still taught in hundreds of courses, including my own graduate seminar, where I regularly teach the chapter ‘The relation of habitual thought and behavior to language’. It is also one of the most deeply misunderstood books across multiple fields. There really is no such thing as “the Sapir-Whorf hypothesis”; rather, there is a set of ideas, some centuries old, about the effect that language has on thought.
But everyone knows that language has some effect on thought – no reasonable person denies this. What Whorf, Sapir and other linguistic relativists do is argue that variations in linguistic structure – patterns in syntax or morphology, or differences in how languages divide up the world – affect cognition. (Although I should note that neither Sapir nor Whorf used the term ‘cognition’ at all – they were writing before cognitive science had been defined as an independent subject of investigation, something MIT Press was among the first presses to do!)
What is the scope of these effects? Probably less than in some of the more programmatic statements by folks like Whorf, but significantly more than linguists would have assumed, say, 30 years ago when I first studied the discipline. Number and numeracy were among the earliest domains of human social life studied under what we would now think of as the 19th-century ancestors of linguistic relativity – but largely in a now rightly rejected way that saw systems of thought as part of ordered, evolutionary stages from simple to complex.
I recently published a paper on this very subject. In LT&R, Whorf argued that Newtonian physics was, in significant part, a product of the structuring influence of European languages in which Newton wrote, like Latin and English. That’s a big claim and hard to justify empirically. But both number words and number notations are basically categories; they’re ways of dividing up the world of quantity meaningfully, and humans do it in different ways. These notations constrain but do not determine how we solve numerical problems. A meme that floats around social media every few months asks people to describe how they solve the problem “28 + 47”. The answer is 75 – that’s not relativistic. But there are a number of strategies for getting there.
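A few such strategies, written out as arithmetic, all land on the same answer. The decompositions below are common mental-arithmetic moves, not examples taken from the book:

```python
a, b = 28, 47

# strategy 1: split both numbers into tens and units, add each part
s1 = (20 + 40) + (8 + 7)

# strategy 2: round 28 up to a friendly 30, compensating from 47
s2 = (28 + 2) + (47 - 2)

# strategy 3: start from the larger number and count on
s3 = 47 + 20 + 8

print(s1, s2, s3)  # 75 75 75
```

The answer is invariant; the route to it is where the variation – and the cognitive interest – lies.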
In a community where arithmetic is mediated through a device like an abacus, the strategies will be even more varied. The notation, or the technology, or the language matters. But not in isolation – rather, it’s by asking how notations relate to numerical practices in social contexts, and how they’re learned and employed, that we can get at their cognitive effects. That’s not anti-Whorfian; maybe it’s post-Whorfian.
Philip Laughlin acquires books for the MIT Press in the fields of cognitive science, philosophy, linguistics and bioethics. Stephen Chrisomalis is professor of anthropology at Wayne State University and author of, among other books, Reckonings: Numerals, Cognition, and History. This interview was originally published by MIT Press Reader and has been republished here with permission.