Mind the Hype: We Must Build AI That Can Dance With Us, Not Replace Us

Illustration: mohamed_hassan/pixabay


  • AI researchers’ primary goal was to improve the autonomy of machines. But we don’t really need AI to be autonomous. We need AI to be reliable and trustworthy.
  • Humans excel at some things and computers excel at others. We need systems that bring out the best in both – so that their combination is more effective than either alone.
  • Achieving this requires a paradigm shift in the mindsets of both the developers of AI and its eventual users, beginning with the way we measure effectiveness.
  • To develop truly collaborative AI, we will need newer performance measures that are collaborative instead of comparative.

As an AI researcher developing AI for healthcare, I am often asked whether AI will replace human doctors in the future.

The question doesn’t come as a surprise. We do, after all, live in an era where we see technology displacing people all the time. Machines have been automating activities requiring manual labour for decades.

But doctors are not manual labourers. Being a doctor requires years of training, a wealth of experience diagnosing and treating patients, vast quantities of knowledge about the human body, high levels of intelligence, and a keen sense of judgement.

Can computers attain all these qualities to perform the job of doctors?

And there, in this very question, lies a grave error. Is imitating us the ultimate goal for AI? Must we develop AI that competes with us? Can we not instead design systems that collaborate with us, augment our capabilities, and help us do things better?

It turns out that we can, but we must first change our mindset.

‘Drosophila of reasoning’

After famously losing to IBM’s chess-playing supercomputer ‘Deep Blue’ in 1997, Garry Kasparov did not turn hostile towards AI. Instead, he became a vocal advocate of collaboration between humans and machines, and pioneered what later became known as centaur chess.

In Greek mythology, a centaur is a hybrid creature that is half-human and half-horse. Each centaur chess player is likewise a composite team of humans and computers. The computers exploit their computational power to crunch through millions of moves and propose a set of the strongest candidates. The human players exercise their judgement and experience to pick the move from this set that they believe to be the most strategically sound.
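For readers who want to see this division of labour in code, here is a minimal toy sketch in Python. It assumes the open-source python-chess library and a Stockfish engine installed locally; the engine path, search depth and number of candidates are placeholder choices for illustration, not anything an actual centaur team used.

```python
import chess
import chess.engine

# Placeholder path – point this at a Stockfish binary on your system.
ENGINE_PATH = "/usr/bin/stockfish"

def centaur_move(board: chess.Board, n_candidates: int = 3) -> chess.Move:
    """The machine proposes candidate moves; the human picks one."""
    with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
        # multipv asks the engine for its top n lines, not just the single best
        infos = engine.analyse(board, chess.engine.Limit(depth=18),
                               multipv=n_candidates)
    candidates = [info["pv"][0] for info in infos]
    for i, (move, info) in enumerate(zip(candidates, infos)):
        print(f"{i}: {board.san(move)}  (engine score: {info['score'].white()})")
    # The human exercises judgement: choose among the machine's candidates.
    return candidates[int(input("Pick a candidate (0-based index): "))]

board = chess.Board()
board.push(centaur_move(board))
```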

In an early freestyle chess tournament where humans, computers and hybrid human-computer teams participated, who do you think won? Not the human grandmasters. Not the best-rated chess algorithms running on supercomputers. The team that won was a centaur consisting of two amateur chess players assisted by three ordinary computers.

Kasparov has called chess the “drosophila of reasoning” (the fruit fly, Drosophila melanogaster, is a popular model organism in biological research). For a long time, it served as the ultimate test for machine intelligence. Even in the early days of AI development, machines excelled at making short-term tactical decisions. For example, computers were better than humans at chess endgames. Expert human players, however, were far better at making long-term strategic decisions, such as deciding whether to sacrifice a piece to gain a positional advantage.

We see the same patterns in other fields. AI outperforms humans at closed-ended tasks, while humans remain vastly superior at more open-ended activities. Consider AI-enabled systems for examining radiology scans. Computers already match or even surpass expert radiologists at detecting specific conditions from medical images. But they cannot interpret scans in the context of the patient’s personal information and medical history. Doing that requires an understanding of human anatomy and disease processes.

“AI cannot completely automate reporting of radiology scans, but that doesn’t stop AI from creating value,” Amit Kharat, co-founder of DeepTek, where I work, said. “By using AI to augment the capabilities of our radiologists, we can deliver better quality reports in lower turnaround times, ultimately improving the quality and affordability of medical imaging.”

Also read: The Inconvenient Truth About Quantum Computing

AI in healthcare

AI researchers have traditionally focused on developing algorithms that can replicate human intelligence. Their primary goal was to improve the autonomy of machines. But we don’t really need AI to be autonomous. We need AI to be reliable and trustworthy.

Humans excel at some things and computers excel at other things. We need systems that bring out the best in both – so that their combination is far more effective than either of them could ever be on their own.

This is, of course, easier said than done, and requires a paradigm shift in the mindsets of both the developers of AI and its eventual users. To evaluate the performance of their algorithms, developers compare them with how humans perform at the same task. They might compare an image-recognition system with how well humans recognise images from a test dataset, or test a radiological diagnostics system against the decisions of human radiologists.

To develop truly collaborative AI, we will need newer performance measures that are collaborative instead of comparative. We need metrics that don’t measure how well AI does on its own but how well the combination of the composite human-AI team does when working together. Such metrics will go a long way in changing the competitive paradigm under which AI is being developed today.

For example, instead of evaluating a face-recognition AI system by comparing it with how good humans are at recognising faces, you could compare how well human reviewers perform when aided by the AI against how the same reviewers perform unaided.
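As a toy illustration of this shift, the Python sketch below scores the same readers with and without AI assistance on the same cases – all the labels are made up. The quantity of interest is the collaborative gain, not the AI’s standalone accuracy.

```python
# A collaborative metric: score the same human readers with and without
# AI assistance, and report the gain. All data below is invented.

def accuracy(predictions, ground_truth):
    """Fraction of cases the reader got right."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

ground_truth   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 1 = abnormal scan
reader_unaided = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]  # reader working alone
reader_aided   = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # same reader, AI-assisted

unaided = accuracy(reader_unaided, ground_truth)
aided = accuracy(reader_aided, ground_truth)
print(f"Unaided: {unaided:.0%}, AI-assisted: {aided:.0%}, "
      f"collaborative gain: {aided - unaided:+.0%}")
```

A real evaluation would involve blinded readers and proper statistical testing, but the point stands: what gets measured is the human-AI team, not the machine alone.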

Along with adopting collaborative performance measures, we also need to design interfaces that improve users’ trust in the AI guiding them. Users, especially those in disciplines that require significant knowledge and expertise, are sceptical of advice generated by AI.

In a recent study, radiologists were shown chest X-rays and their diagnoses, and were asked to evaluate the correctness of the diagnoses. All the diagnoses were generated by human experts, but some of them were falsely labelled as if they had come from an AI system.

These radiologists were later asked questions about the quality of the diagnoses they examined. They consistently rated the diagnoses as being of lower quality when they thought they were checking the verdict of an AI system instead of a human expert.

How can we develop systems that improve efficiency by incorporating AI elements while, at the same time, safeguarding human agency, participation and creativity?

Also read: Why We Must Unshackle AI From the Boundaries of Human Knowledge

Human-centred AI

“In the future, we will use AI to recognise hidden patterns not visible to the expert eye, but radiologists will arbitrate these decisions on the basis of the clinical and legal context,” Vinay Duddalwar, a professor of radiology at the University of Southern California and a well-known researcher working in the field, said.

In a January 2022 essay, Erik Brynjolfsson, director of the Stanford Digital Economy Lab, drew a sharp line between human-like AI and human-centred AI. By replicating and automating human capabilities, human-like AI will turn machines into cheaper substitutes for human workers.

Eventually, workers will lose their economic and political bargaining power and become increasingly dependent on those who control the technology. On the other hand, human-centred AI will augment human capabilities and allow people to do things they never could before.

By replacing different classes of workers one by one, human-like AI will slowly concentrate power and money in the hands of a few. By empowering workers and providing them increasingly valuable opportunities, human-centred AI will give us a chance to create a prosperous, inclusive and more equal society. Both will boost productivity – but the latter will ensure that humans remain indispensable for creating value and making decisions.

As a society, we need to make these choices consciously and together. Like all forms of technology, AI is a tool. Whether it is a boon or a bane depends on how we wield it and how we allow others to wield it.

So when people ask me if AI will replace doctors in the future, I take a leaf out of Garry Kasparov’s book. I tell them I don’t see AI replacing doctors in the future – but I do see doctors who use AI replacing doctors who don’t.

Duddalwar summed it up nicely when he paraphrased Isaac Asimov’s zeroth law of robotics: “Ultimately, an AI system must not harm humanity, or, by inaction, allow humanity to come to harm. Human-centred AI will ensure this.”

Viraj Kulkarni has a master’s degree in computer science from the University of California, Berkeley, and is currently pursuing a PhD in quantum artificial intelligence. He is also the chief data scientist at DeepTek. He is on Twitter at @VirajZero.
