Image: Praewthida K/Unsplash
Few will disagree that artificial intelligence (AI) is one of the most significant technological developments of this decade. After defeating reigning world champions at games like chess and Jeopardy!, AI has moved on to challenges with practical applications in research and industry. It can drive cars, manage financial portfolios, read medical scans, paint pictures, write essays and do much else besides (albeit with caveats in each case).
Over the last 75 years, many scientists, engineers and entrepreneurs have told us again and again that intelligent computers that can really think are just around the corner. Given the enormous hype the field enjoys today, one would be hard-pressed not to believe it. Some futurists insist that superintelligent machines will soon pervade the world, and that these machines will create other machines that are more intelligent still. This process, they say, will lead to an ever-accelerating spiral of technological progress culminating in the singularity: an epochal moment at which humans become irrelevant or, worse, extinct. All this is inevitable and only a matter of time.
Is it?
Computer scientist and entrepreneur Erik Larson addresses this question in his new book, The Myth of Artificial Intelligence. Larson makes no attempt to create suspense: right off the bat, he tells us the myth is utterly false, that we’re nowhere close to developing true AI, and that we don’t even have a clue how we could go about it. He writes in the opening pages that the myth he wants to debunk is not the possibility of true AI but its inevitability.
Larson spends a considerable number of pages setting up the myth he wants to tear down. Taking us on a whirlwind tour through history, he tells the stories of Alan Turing at Bletchley Park, David Hilbert’s ambitious program to formalise all of mathematics, and Kurt Gödel’s fatal blow to it. After pinning the origin of the myth on what he calls Turing’s “intelligence errors”, Larson traces its development through the decades to today’s futurists, such as Ray Kurzweil and Elon Musk, and their promises of conscious, spiritual machines.
Then, halfway into the book, Larson switches gears and opens a conversation on the problem of inference. This is where he sounds most comfortable and convincing: his primary ammunition against AI concerns the nature of inference. Classical or symbolic AI – the dominant paradigm of AI research until the 1980s – is built on deduction, deriving specific conclusions from general rules. Modern AI systems, like deep-learning models, are built on induction, generalising patterns from large numbers of examples.
Larson makes a cogent and persuasive argument that general intelligence also requires a third type of reasoning: abduction, or inference to the best explanation, which yields plausible hypotheses rather than conclusions verified to be correct – the way a doctor guesses a diagnosis from a handful of symptoms. Importantly, abduction cannot be reduced to deduction or induction; ergo, today’s AI systems can never achieve general intelligence, no matter how well they perform on narrowly defined tasks.
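To make the distinction concrete, here is a toy sketch in Python. It is an editorial illustration rather than anything from Larson’s book, and the rules, examples and function names are invented: deduction applies a known rule, induction fits a pattern to data, and abduction asks which hypothesis would best explain an observation.

```python
# A toy contrast of the three modes of inference discussed above (illustrative only).

def deduce(rules, premise):
    # Deduction: apply a general rule to a specific case; the conclusion follows with certainty.
    return rules.get(premise)

def induce(examples):
    # Induction: generalise from observations; here, fit y = a*x by averaging the observed ratios.
    return sum(y / x for x, y in examples) / len(examples)

def abduce(observation, hypotheses):
    # Abduction: collect hypotheses that would explain the observation.
    # Deciding which of them is the plausible explanation is the step Larson argues
    # cannot be reduced to rule-following (deduction) or pattern-fitting (induction).
    return [cause for cause, effect in hypotheses.items() if effect == observation]

print(deduce({"rain": "wet grass"}, "rain"))                      # wet grass
print(induce([(1, 2.0), (2, 4.1), (3, 5.9)]))                     # roughly 2.0
print(abduce("wet grass", {"rain": "wet grass", "sprinkler": "wet grass"}))
# ['rain', 'sprinkler'] -- both would explain the wet grass; choosing between them needs abduction
```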
Believing in the myth of AI has consequences for our society that go beyond merely losing sleep over the prospect of a robot uprising. The myth, Larson argues, is negatively affecting research in many fields of science.
Data science is a tool that can aid human ingenuity; but using it to supplant human ingenuity is eroding our culture of invention and creativity. The later chapters of The Myth highlight how our inflated expectations of AI are themselves becoming barriers to genuine research in the field. To progress towards general intelligence, we must first acknowledge that our current ‘best’ approaches to AI have fundamental limitations.
To this end, Larson writes, “There is nothing to be gained by indulging the myth here; it can offer no solutions to our human condition except in the manifestly negative sense of discounting human potential and limiting future human possibility.”
The text inside the front flap of the hardcover edition reads: “Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake.” It is ironic then that Larson falls into the same anthropocentric trap: assuming human intelligence to be the only kind of intelligence. To meaningfully compare AI with human intelligence, or any two systems of intelligence in general, we must first define the term ‘intelligence’.
Is intelligence a wide collection of narrow skills? Is it the general ability to learn from experience? Is it the capacity to hoard and retrieve information? Or the capability to generate data that the system has not encountered before?
As it happens, intelligence is fiendishly difficult to capture in a single, concise definition, which is why biologists, psychologists, neuroscientists, computer scientists and philosophers have defined ‘intelligence’ differently in different contexts. One would expect Larson’s book, given its choice of subject, to at least acknowledge, if not discuss, this diversity of viewpoints. But Larson doesn’t appear to want to wade into this quagmire, so he simply pretends it doesn’t exist.
Larson adopts a brash writing style that gives the reader little opportunity to weigh the facts and form independent opinions. The book often reads like a soliloquy and will disappoint those looking for a conversation. It paints everything as black or white: ideas are either correct or incorrect, and there is no room for disagreement. Those who hold contrary viewpoints are unceremoniously dismissed. Alan Turing makes a grave error, Stuart Russell misses the point, the researchers at DeepMind misunderstand the nature of inference – and that’s that.
And that is the book’s biggest weakness. Larson throws around a lot of famous names. (It would be difficult to find a page that doesn’t mention someone with a Wikipedia entry.) He then neatly sorts those names into ‘heroes’ fighting the myth and ‘villains’ perpetuating it, creating an incongruous pastiche of viewpoints. Larson has a solid central argument: that today’s AI banks on deduction and induction but not abduction, which is necessary for general intelligence. But instead of building a bridge that leads the reader to this argument, he drops it in the middle of the pastiche. And since the central argument is strong enough to stand on its own, you wonder whether the pastiche was necessary at all.
Nonetheless, the book comes at an opportune moment – when AI has crested the peak of inflated expectations and is inching down towards the trough of disillusionment. It deflates the hype surrounding the subject and offers coherent arguments against the inevitability and imminence of true machine intelligence. It will appeal to readers looking for a bold, one-track critique of AI that doesn’t mince words. Those looking for a balanced, nuanced discussion of the AI landscape should look elsewhere.
Viraj Kulkarni has a master’s degree in computer science from the University of California, Berkeley, and is currently pursuing a PhD in quantum artificial intelligence. He is on Twitter at @VirajZero.