The warning consisted of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The pithy statement, published in May by the nonprofit Center for AI Safety, was signed by a number of influential people, including a sitting member of Congress, a former state supreme court justice, and an array of technology industry executives. Among the signatories were many of the very individuals who develop and deploy artificial intelligence today; hundreds of cosigners – from academia, industry, and civil society – identified themselves as “AI Scientists.”
Should we be concerned that the people who design and deploy AI are now sounding the alarm about existential risk, like a score of modern-day Oppenheimers? Yes – but not for the reasons the signatories imagine.
As a law professor who specialises in AI, I know and respect many of the people who signed the statement. I consider some to be mentors and friends. I think most of them are genuinely concerned that AI poses a risk of extinction on a level with pandemics and nuclear war. But, almost certainly, the warning statement is motivated by more than mere technical concerns – there are deeper social, societal, and (yes) market forces at play. It’s not hard to imagine how a public fixation on the risk of extinction from AI would benefit industry insiders while harming contemporary society.
How do the signers think this extinction would happen? Based on prior public remarks, it’s clear that some imagine a scenario wherein AI gains consciousness and intentionally eradicates humankind. Others envision a slightly more plausible path to catastrophe, wherein we grant AI vast control over human infrastructures, defence, and markets, and then a series of black swan events destroys civilisation.
The risk of these developments – be they Skynet or Lemony Snicket – is low. There is no obvious path between today’s machine learning models – which mimic human creativity by predicting the next word, sound, or pixel – and an AI that can form a hostile intent or circumvent our every effort to contain it.
Regardless, it is fair to ask why Dr Frankenstein is holding the pitchfork. Why is it that the people building, deploying, and profiting from AI are the ones leading the call to focus public attention on its existential risk? Well, I can see at least two possible reasons.
The first is that it requires far less sacrifice on their part to call attention to a hypothetical threat than to address the more immediate harms and costs that AI is already imposing on society. Today’s AI is plagued by error and replete with bias. It makes up facts and reproduces discriminatory heuristics. It empowers both government and consumer surveillance. AI is displacing labour and exacerbating income and wealth inequality. It also poses an escalating threat to the environment, consuming an enormous and growing amount of energy and fueling a race to extract materials from a beleaguered Earth.
These societal costs aren’t easily absorbed. Mitigating them requires a significant commitment of personnel and other resources, which doesn’t make shareholders happy – and which is why the market recently rewarded tech companies for laying off many members of their privacy, security, or ethics teams.
How much easier would life be for AI companies if the public instead fixated on speculative theories about far-off threats that may or may not actually bear out? What would action to “mitigate the risk of extinction” even look like? I submit that it would consist of vague whitepapers, a series of workshops led by speculative philosophers, and donations to computer science labs that are willing to speak the language of longtermism. This would be a pittance, compared with the effort required to reverse what AI is already doing to displace labour, exacerbate inequality, and accelerate environmental degradation.
A second reason the AI community might be motivated to cast the technology as posing an existential risk could be, ironically, to reinforce the idea that AI has enormous potential. Convincing the public that AI is so powerful that it could end human existence would be a pretty effective way for AI scientists to make the case that what they are working on is important. Doomsaying is great marketing. The long-term fear may be that AI will threaten humanity, but the near-term fear, for anyone who doesn’t incorporate AI into their business, agency, or classroom, is that they will be left behind. The same goes for national policy: If AI poses existential risks, US policymakers might say, we had better not let China beat us to it through underinvestment or overregulation. (It is telling that Sam Altman – the CEO of OpenAI and a signatory of the Center for AI Safety statement – warned the EU that his company will pull out of Europe if regulations become too burdensome.)
Some people might ask: Must it be one or the other? Why can’t we attend to both the immediate and the long-term concerns about AI? In theory, we can. In practice, money and attention are finite, and elevating speculative future risks over concrete immediate harms comes with significant opportunity costs. The Center for AI Safety statement itself seemed to acknowledge this reality with its use of the word “priority.”
To be sure, the generative AI behind this latest wave of chatbots and image or voice generators can do amazing things, leveraging a gift for classification and prediction to create original content across a range of domains. But we don’t have to imagine fantastical scenarios to appreciate the near-term threats it poses.
Addressing AI’s harms while harnessing its capacity requires all hands on deck. We must work at every level to build meaningful guardrails, protect the vulnerable, and ensure that the technology’s costs and benefits fall proportionately across society. Prioritising a speculative, dystopic risk of annihilation distracts from these goals. Pretending that AI poses an existential threat only elevates the status of the people closest to AI and suggests easier rules they’d love to play by. This is not a mistake society can afford to make.
Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law and a Professor in the UW Information School and (by courtesy) Paul G. Allen School of Computer Science & Engineering.
This article was originally published on Undark.