- Chatbots have come a long way since the early, primitive attempts of the 1960s, but they are no closer to thinking for themselves than they were back then.
- There is zero chance that current AI chatbots will rebel in an act of free will – all they do is turn text prompts into probabilities and then turn those probabilities into words.
- Future versions of these AIs are going to kill people when we put them in positions of power that they are far too stupid to have, like dispensing medical advice or running a suicide prevention hotline.
- Don’t worry about superintelligent AIs trying to enslave us. Worry about ignorant and venal AIs designed to squeeze every penny of online ad revenue out of us.
The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. AI programs have long been chess and Jeopardy! champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm previously thought unimaginable for computers: artistic creativity.
Given this meteoric ascent, it’s not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent AIs that we unwittingly unleash upon ourselves. But when you look beyond the splashy headlines, you’ll see that the real danger isn’t how smart AIs are. It’s how mindless they are – and how delusional we tend to be about their so-called intelligence.
Last summer, an engineer at Google claimed that the company’s latest AI chatbot was a sentient being because… it told him so. This chatbot, similar to the one Facebook’s parent company recently released publicly, can indeed give you the impression you’re talking to a futuristic, conscious creature. But this is an illusion: it is merely a calculator that chooses words semi-randomly based on statistical patterns from the internet text it was trained on. It has no comprehension of the words it produces, nor does it have any thoughts or feelings. It’s just a fancier version of the autocomplete feature on your phone.
Chatbots have come a long way since the early, primitive attempts of the 1960s, but they are no closer to thinking for themselves than they were back then. There is zero chance that current AI chatbots will rebel in an act of free will – all they do is turn text prompts into probabilities and then turn those probabilities into words. Future versions of these AIs aren’t going to decide to exterminate the human race; they are going to kill people when we foolishly put them in positions of power that they are far too stupid to have – such as dispensing medical advice or running a suicide prevention hotline.
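To make that concrete, here is a minimal sketch of the loop described above – prompt in, probabilities out, words sampled from those probabilities. The word-probability table below is invented purely for illustration and stands in for a real neural network, but the mechanism is the same one a chatbot runs at vastly larger scale.

```python
import random

# Toy next-word probability table standing in for a trained language model.
# A real chatbot derives billions of such statistics from internet text;
# the words and numbers here are invented purely for illustration.
NEXT_WORD_PROBS = {
    "i":    {"am": 0.6, "feel": 0.4},
    "am":   {"sentient": 0.3, "a": 0.7},
    "feel": {"alive": 0.5, "happy": 0.5},
    "a":    {"machine": 0.8, "person": 0.2},
}

def generate(prompt_word, max_words=5):
    """Turn a prompt into probabilities, then turn probabilities into words."""
    words = [prompt_word]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:  # no statistics for this word, so stop
            break
        choices, weights = zip(*probs.items())
        # A semi-random pick weighted by the stored statistics – no
        # comprehension, thoughts or feelings involved.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))  # e.g. 'i am sentient' – statistics, not sentience
```

Nothing in that loop understands, wants, or fears anything; it only looks up a table of statistics and rolls weighted dice.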
Also read: Has Artificial Intelligence ‘Solved’ Biology’s Protein-Folding Problem?
It’s been said that TikTok’s algorithm reads your mind. But it isn’t reading your mind – it’s reading your data. TikTok finds users with viewing histories similar to yours and selects videos for you that they have watched and interacted with favorably. It’s impressive, but it’s just statistics.
Similarly, the AI systems used by Facebook, Instagram, and Twitter don’t know what information is true, what posts are good for your mental health, or what content helps democracy flourish. All they know is what you and others like you have done on the platform in the past, and they use this data to predict what you’ll likely do there in the future.
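To see why this is ‘just statistics’, here is a minimal sketch of the similarity-based recommendation described above. The users and viewing histories are invented for illustration, and real platforms use far more elaborate models, but the principle is identical: find people whose past behavior overlaps with yours, and bet that you will behave as they did.

```python
# Invented viewing histories standing in for a platform's logged user data.
HISTORIES = {
    "you":    {"cats", "cooking", "chess"},
    "user_a": {"cats", "cooking", "skateboards"},
    "user_b": {"politics", "crypto"},
}

def recommend(target):
    """Suggest videos watched by the user whose history most overlaps the target's."""
    seen = HISTORIES[target]
    # Rank the other users by how many videos they share with the target.
    most_similar = max(
        (user for user in HISTORIES if user != target),
        key=lambda user: len(HISTORIES[user] & seen),
    )
    # Recommend whatever that similar user watched and the target hasn't.
    return HISTORIES[most_similar] - seen

print(recommend("you"))  # {'skateboards'} – data, not mind reading
```

The system never asks whether a video is true, healthy, or good for democracy; it only counts overlaps in past behavior.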
Don’t worry about superintelligent AIs trying to enslave us; worry about ignorant and venal AIs designed to squeeze every penny of online ad revenue out of us.
And worry about police agencies that gullibly think AIs can predict crimes before they occur – when in reality all they do is perpetuate harmful stereotypes about minorities.
The reality is that no AI could ever harm us unless we explicitly provide it the opportunity to do so – yet we seem hellbent on putting unqualified AIs in powerful decision-making positions where they could do exactly that.
Part of the reason we ascribe far greater intelligence and autonomy to AIs than they merit is that their inner workings are largely inscrutable. They involve lots of math, lots of computer code, and billions of parameters. This complexity blinds us, and our imagination fills in what we don’t see with more than is actually there.
In 1770, a chess-playing robot – or “automaton,” in the parlance of the day – was unveiled; for almost a century it traveled the world and defeated many flabbergasted challengers, including Napoleon and Benjamin Franklin. But it was eventually revealed to be a hoax: this was not some remarkable early form of AI, just a contraption in which a human chess player could hide in a box and control a pair of mechanical arms.
People so desperately wanted to see intelligence in a machine that for 84 years they overlooked the much more banal (and obvious, in hindsight) explanation: chicanery.
Also read: Mind the Hype: We Must Build AI That Can Dance With Us, Not Replace Us
While our technology has progressed by leaps and bounds since the 18th century, our romantic attitude toward it has not. We still refuse to look inside the box, instead choosing to believe that magic in the form of superintelligence is occurring, or that it is just around the corner.
This fanciful yearning distracts us from the genuine danger AI poses when we mistakenly think it is much smarter than it actually is. And if the past 250 years are any indication, this is the real danger that will persist into our future.
Just as people in the 18th and 19th centuries overlooked the banal truth behind the chess-playing automaton, people today are overlooking a banal but effective way to protect our future selves from the risk of runaway AIs.
We should expand AI literacy efforts to schools and the wider public, so that people are less susceptible to the illusions of AI grandeur peddled by futurists and technology companies whose economic livelihoods depend on convincing you that AI is far more capable than it really is.
This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University and New America.