The Man Behind the Machine
A review of Benjamín Labatut's historical novel of ideas, The Maniac.
The most celebrated scientific-literary crossover event of the year is undoubtedly Benjamín Labatut’s historically grounded novel, The Maniac. In this book, the Chilean author tracks the life and mind of 20th century polymath John von Neumann through his contribution to the atomic bomb and his central role in creating modern computers. There is also the specter of artificial intelligence, another field to which von Neumann made central contributions. In the novel, Labatut draws a line from von Neumann to the AI technology of today, specifically to Google DeepMind’s AlphaGo. The book is a genuine literary achievement, an innovation in how to communicate and investigate scientific ideas, and it has been celebrated by eminent reviewers in the NYT and Washington Post. But reading it left me with a question. What did we learn about John von Neumann? For that matter, what did we learn about AI?
A brief history of ideas leading to the modern computer
The history of the modern computer begins with mathematical logic. This was the discipline, booming as it was in the mid-19th century, which first took seriously the idea that human reason could be distilled into a non-human mechanism. The flagship work in this line of inquiry was George Boole’s 1854 treatise, The Laws of Thought. Though Boole’s manuscript dealt mainly with mathematical formalisms, his main goal was, as he wrote, “to investigate the fundamental laws of those operations of the mind by which reasoning is performed.”
The idea that the processes of human reasoning could be codified in math was in the air in Boole’s day, and we remember Boole largely because he was the first logician to bring his ideas to market in a useful form. His contemporary was a man named Augustus De Morgan. Boole and De Morgan rivaled one another in much the same way as Newton and Gottfried Leibniz, or Darwin and Alfred Wallace. In the independent discovery of calculus, Newton’s and Leibniz’s work served different ends. Newton understood what calculus could do; he was essentially an applied mathematician. But Leibniz understood the primacy of notation (the use of dy/dx to denote a derivative was his, not Newton’s, invention). The work produced by Boole and De Morgan featured a similar tradeoff, though in the case of logic, it was Boole’s superior notation that won out. Their rivalry was representative of an abiding interest in mathematical logic that carried on deep into the later years of that century and the beginning of the next.
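To give a taste of what that notation fight was about, here, in modern symbols rather than either man’s own, are the kinds of laws this tradition codified: Boole’s index law and law of duality from The Laws of Thought, followed by the pair that still bears De Morgan’s name.

```latex
% Boole: a term is idempotent, and nothing both holds and fails to hold
x \cdot x = x, \qquad x(1 - x) = 0
% De Morgan: negation swaps conjunction and disjunction
\neg(A \land B) = \neg A \lor \neg B, \qquad \neg(A \lor B) = \neg A \land \neg B
```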
The culmination of this tradition was Russell and Whitehead’s Principia Mathematica. The result of ten years of diligent mathematical labor, their work was meant to present the world with a logical system to end all logical systems. Russell and Whitehead were trying to solve a specific problem, one which plagued the previous work in logic. The preceding work, such as Boole’s and De Morgan’s, was based on assumptions. If you took certain things for granted, their systems worked. But why take those things for granted? How do you know they’re really true? Answering this question was the holy grail of mathematical logic, and in their desperation to sip from the blessed chalice, Russell and Whitehead spent seven hundred pages proving that 1+1 does indeed equal 2. Their goal was to create a mathematical foundation for logic free of unjustified assumptions, an instantiation of reason in its most unassailable form, where nothing needed to be taken for granted. They failed to achieve it.
Ultimately, Russell and Whitehead’s system had elements that were either unprovable or inconsistent. If you made everything provable, inconsistencies arose. If you made everything consistent, it required assumptions. For example, you can found mathematics on naive set theory. But then you’re subject to Russell’s paradox: the set of all sets that are not members of themselves both must and must not contain itself. Russell and Whitehead never found a way to have it all.
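The paradox itself fits on a single line. Define the set of all sets that are not members of themselves, and ask whether it belongs to itself:

```latex
% Russell's paradox: R contains itself exactly when it does not
R = \{\, x \mid x \notin x \,\} \quad \Longrightarrow \quad (R \in R \iff R \notin R)
```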
Then Kurt Gödel came along, with his famous incompleteness theorem. Gödel showed that it wasn’t Russell and Whitehead’s fault. Not only was the provability-consistency tradeoff an ineradicable part of their system; it was an ineradicable part of any system powerful enough to express basic arithmetic. The kind of logical system Russell and Whitehead wanted to create wasn’t possible. That promise of completeness could never be achieved.
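The heart of the argument, in loose modern shorthand (the notation here is mine, not Gödel’s), is a sentence G that a sufficiently strong system T can construct about itself, asserting its own unprovability. If T is consistent, G is true, and yet T cannot prove it:

```latex
% G says, in effect: "G is not provable in T."
G \;\iff\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
```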
Gödel’s theorem broke the great chain of achievements in the tradition of logic stretching from Boole to Russell. As a consequence, the attention of the best and brightest mathematicians (which is what Boole and De Morgan and Russell and Gödel undoubtedly were in their day) was diverted to working on other problems.
Enter Alan Turing and Alonzo Church—yet another intellectual duopoly—who independently developed what is known as the Church-Turing thesis. I’m not a logician by trade, so some of its nuance is lost on me. But in essence it says to hell with ultimate foundations. What matters is effective procedure: the thesis holds that anything calculable by a rote, step-by-step method at all can be calculated by one precisely defined kind of formal machine. This, as Turing and Church showed, could be made mathematically exact. It turned logic from a metaphysical problem into a practical one.
The most famous iteration of this insight is what we now know as the Turing Machine. This was not actually a machine but an abstract mathematical model. Benedict Cumberbatch movies aside, an actual Turing Machine as such was never built. It had, in theory, an infinite tape on which symbols (such as ones and zeros) could be printed, read, and rewritten by the machine. A Turing Machine could be “universal” in the sense that it was capable of simulating, given enough time and space, any other Turing Machine. That is, it was a mathematical blueprint for hardware that could run programmable software. It approached Russell and Whitehead’s goal of the ultimate logical system—but from an entirely different angle. Unlike Russell and Whitehead, Turing succeeded.
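To show how little machinery the idea requires, here is a minimal sketch of a Turing machine simulator in Python. Everything in it, from the tape-as-dictionary to the toy bit-flipping program, is my own illustrative choice rather than anything drawn from Turing’s paper or Labatut’s book; the point is only that a tape, a head, a state, and a lookup table suffice.

```python
# A minimal Turing machine: a tape, a read/write head, a current state,
# and a transition table mapping (state, symbol) -> (write, move, next state).

def run_turing_machine(program, tape, state="start", max_steps=10_000):
    """Run `program` on `tape` (a dict of position -> symbol) until it halts."""
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")         # "_" stands for a blank cell
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1     # the head moves one cell per step
    return tape

# A toy program: walk right, flipping 0s and 1s, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

tape = dict(enumerate("10110"))              # cells 0..4 hold the input
result = run_turing_machine(flip_bits, tape)
print("".join(result[i] for i in range(5)))  # prints "01001"
```

A universal machine is this same loop with the program itself encoded on the tape; that is the entire conceptual distance between this toy and a programmable computer.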
The Turing Machine became the theoretical basis for the modern computer. It is the central idea behind the CPU. But Turing didn’t actually construct the machine himself. It was a Hungarian mathematician named John von Neumann who finally gave form to Turing’s function, in the stored-program design that still bears his name. Turing described a computer. Von Neumann actually built one. And it is von Neumann, as one of the 20th century’s most central intellectuals, who is the subject of an imaginative work which straddles the line between historical fiction and intellectual history: Benjamín Labatut’s latest novel, The Maniac.
Labatut’s remarkable formal innovations
Labatut’s novel is about the man behind the machine. More specifically, it is about the men and women behind that man, and how for the most part they found him exasperating. The project of Labatut’s novel is to draw a line between von Neumann and modern AI’s signature achievement, AlphaGo, specifically the victory of Google DeepMind’s system over the generationally talented South Korean Go champion, Lee Sedol. It is a how-the-sausage-is-made story, centering on the development of modern computers around the time of the Second World War.
Labatut’s innovation, like that of the great logicians of the past, is one of formalism. It is a way of constructing a novel that no one—no one whose reviews I’ve read, anyway—has quite seen before. The novel is constructed as a series of texts, dozens of them: some formal essays, some inner monologues, some journal entries. Each one is authored by a different person who was connected to von Neumann in some way, personally or professionally. The book draws on promises inherent to both fiction (no one actually said these things exactly as they are written) and non-fiction (the sentiments expressed and the experiences transcribed belonged to real people). That makes the book hard to place on the shelf with the standard divisions of genre.
This aspect of the book is dealt with in different ways by those who have reviewed it. The Guardian labels the book “semi-fictional.” Becca Rothfeld, in The Washington Post, poses the question: “Is The MANIAC a work of fiction? Or do we call it fiction because we lack a better word for its creative conquest of fact?” And Tom McCarthy, writing for the New York Times, makes the claim that although the book does “assume the guise of fiction,” it could plausibly be recategorized with a “(minor and essentially rhetorical) tweak into long-form journalism”. A big reason to read the book, and to be excited about it, is for the intriguing novelty of Labatut’s formal innovations.
So let’s take for granted that Labatut achieves structural excellence in his novel. What does that formalism reveal about the novel’s subject?
One less charitable review summarizes the book, not entirely inaccurately, as “a repetitious, linear account of von Neumann’s life from the point of view of his acquaintances.” Labatut presents us with a novel of a hundred characters (none of whom, by the way, is actually the alleged protagonist, John von Neumann, who despite his prominent place on stage was cast in a non-speaking role). However, the characters don’t recur. They enter stage right, waiting in a single-file queue just off stage, say their bit, then exit stage left never to be heard from, or often even referenced, ever again. It has the effect of keeping the story moving. But it also makes it hard to have any sort of cumulative pay-off in the course of that progress.
Labatut’s writing itself is excellent. But almost too excellent. He tasked himself with differentiating the voices of dozens of 20th century mathematicians, the vast majority of whom come from the eastern border of central Europe. The writing is accomplished, but not in a way that seems representative of what 20th century mathematicians actually sounded like. Labatut gives them an articulacy that bursts the bubble of realism.
Another issue is that the stylistic markers meant to differentiate the speakers are somewhat muddled, and a bit clumsy. For example, most characters—it felt like a large majority—write in looping, polysyndetic sentences. Others talk in clipped, staccato sentences in which the subject is habitually omitted. In the book’s first passage, the writing comes across as a work of immense virtuosity, rendering the mind of a troubled intellectual (though not von Neumann’s) in a fresh and profound way. But it loses its edge when the same idiosyncratic flourishes are used to articulate the perspectives of so many different people. (That being said, my current standard for an author’s ability to render multiple voices is Hernan Diaz’s Pulitzer-winning Trust. So it’s possible I’m being overly harsh.)
The Guardian’s reviewer, Sam Byers, poses the slightly uncomfortable question at the end of this line of criticism: “All that a brilliant novel requires, then – talent, ambition, skill, intelligence – is present in abundance. And yet, somehow, a brilliant novel is not quite what we end up with. It’s a thermodynamic conundrum. With this much creative energy invested, why does the result feel underpowered?”
Byers’s answer is: “diffusion.” The author has bitten off more than he can chew. He has given himself too many limitations, too many characters, and perhaps too few pages, to fully chew through it all. And I agree, the result is diffuse: it accomplishes very little in the way of world-building. This, after all, is the standard payoff associated with historical fiction, intellectual history, and the novel of a hundred characters when done well. It allows you privileged insight into a world you would otherwise be unable to inhabit.
And yet, I don’t think that’s the core issue. I’m not sure that world-building was what the author set out to do. I think the author’s intention was to reveal some important insight about technology, artificial intelligence, the nature of genius, or the nature of John von Neumann. My problem is that I came away uncertain about what exactly that insight was.
What exactly is this book about?
So what does Labatut’s story reveal about John von Neumann? About technology? About the past, present, or future of artificial intelligence?
Both Tom McCarthy (whose Making of Incarnation is next up on my fiction shelf) in the NYT, and Becca Rothfeld, my favorite literary influencer, in WaPo, come across as unflaggingly enthusiastic about the book. But they both seem happy to leave this question unaddressed: What exactly is this book about?
For example: over the course of the book far, far too much is made of von Neumann’s illness. At the end of his life, von Neumann goes mad from the treatment of his terminal cancer. The book’s characters are simply obsessed with this fate, seeing it uniformly as the inevitable culmination of von Neumann’s intellectual pursuits. It is a Sontagonal illness-as-metaphor taken to an uncritical extreme. I appreciate that Labatut, by his own rules, must remain within the historical facts of von Neumann’s life. But as a narrative device, it’s incredibly flimsy. Certainly, I see the tempting narrative symmetry between the heights of von Neumann’s intellectual achievements and the depths of his illness. But dwelling on this trajectory, as the characters often do, has the unfortunate effect of weakening the supporting cast’s observations about the protagonist.
All the accounts provided by the various characters agree: John von Neumann was the most brilliant man of his generation, then suffered a debilitating illness and lapsed into insanity at the end of his life. They all seem to read this demise as a verdict on his life—how else was a life of such immense genius supposed to end? Well, I mean… lots of ways! You could go like Einstein, with quiet elegance. Like Russell, you could die with the dignity of old age and vast wisdom. Like Turing, you could die in unjust tragedy. Like Marie Curie, you could be killed by that to which you had been most devoted. If the book’s novel insight is that the mad scientist is fated to topple from genius into madness, that is neither novel nor an insight.
Here are some other possible theses the author builds toward:
(1) The possibility of building technology—am I required to call it Promethean?—so advanced that it actually imperils rather than augments humanity. Particularly in the context of the atomic bomb and AI, I don’t think this theme is lost on anyone these days.
(2) The interpersonal costs of success. Von Neumann’s unparalleled intellectual acumen clearly comes at the cost of his ability to maintain normal human relationships. Yet this is a story as old as success itself.
(3) The inability of rational minds to conceive of all possible outcomes. This is a possibility. But I think the only people who will be surprised by this conclusion are the so-called rational minds in question.
(4) Technological progress as a human enterprise, and not one that is carried out by the especially normal or functional members of our species. This seems to me the most plausible thesis of the book. No one has the full story: there is no single narrator to technological progress. Omniscience is not possible. The effect of Labatut’s formalism, in this reading, would be to show that technological progress is undertaken by the proverbial blind men (afflicted by mania, in this case), feeling the different parts of the elephant and coming to separate conclusions about what kind of thing the overall beast might be.
All that said, the book’s best and most accomplished sections are its opening and its closing. These sections—one man’s rational descent into madness, and AlphaGo’s recent victory against Lee Sedol—land most effectively. They are also the only ones that have pretty much nothing to do with John von Neumann. Labatut’s book does draw a line between von Neumann and the contemporary success of AlphaGo. But it is one that is immensely squiggly and convoluted around the life of John von Neumann, then dashed between that period and the modern one, denoting that something happened in that interim, but what exactly it might be is not specified. I kinda wish it were.
The future of non-fiction: fiction
Though I’ve been critical of Labatut’s work, let me say that the criticism derives from an ardent, near-religious belief in the project he set for himself. I believe that in this book, Labatut has put his finger on the future of non-fiction: fiction.
Over the next decade, I predict we will see more projects like this: fiction dressed up as non-fiction, or non-fiction dressed up as fiction. Around twenty years ago, with the publication of Malcolm Gladwell’s The Tipping Point, there was an innovation in non-fiction: the use of story to illustrate serious academic ideas. That paradigm has now reached saturation, exhausting the medium’s capability, to the point where ChatGPT could probably write Adam Grant’s next book for him and the reading public would not know the difference. The only narrative frontier left for the exploration of ideas is fiction. We see this done in Labatut’s work. We also see it in Nathan Hill’s recent Wellness (which I plan to write about soon). The most interesting and influential books of ideas in the coming years will, like Labatut’s, test out different literary devices for how best to convey this currently enigmatic blend of fact and fiction.
On this front, it will be exciting to see Labatut’s continued contributions to this venture. He clearly has the right stuff. For me, The Maniac is in the realm of work that competes for the cutting intellectual edge—but ultimately, is slightly off. It is what Boole wrote before The Laws of Thought. It is De Morgan’s work to rival Boole’s. (Both are quite excellent company.) I look forward to seeing what he comes up with next.
Thanks for reading! If you liked this piece, you might also like my piece—call it a treatise of sorts—about how to read a novel as a theory of behavior. It’s very AI x literature crossover, which will probably be of interest to you if you made it this far.