Why We Must Unshackle AI From the Boundaries of Human Knowledge

Artificial intelligence (AI) has made astonishing progress in the last decade. AI can now drive cars, diagnose diseases from medical images, recommend movies and even romantic partners, make investment decisions, and create art that has sold at auction.

A lot of research today, however, focuses on teaching AI to do things the way we do them. For example, computer vision and natural language processing – two of the hottest research areas in the field – deal with building AI models that can see like humans and use language like humans. But rather than teaching computers to imitate human thought, the time has come to let them evolve on their own, so that instead of becoming like us, they have a chance to become better than us.

Supervised learning has so far been the most common approach to machine learning: algorithms learn from datasets containing pairs of samples and labels. For example, consider a dataset of enquiries (not conversions) made on an insurance website, where each record holds a person's age, occupation, city, income and so on, plus a label indicating whether the person eventually purchased the insurance. A supervised learning model trained on this dataset could estimate the probability of a new enquiry converting into a sale. Such models are very good at predicting outcomes, but they have one big drawback: their performance is capped by the quality of the system that produced the original labels. And in most cases, that system is a human being.
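To make this concrete, here is a minimal sketch of such a model in Python using scikit-learn's logistic regression. The features, the toy data and the conversion labels are all hypothetical; a real system would train on a far larger labelled dataset.

```python
# A minimal supervised-learning sketch (hypothetical features and data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one past enquiry: [age, annual_income, lives_in_metro]
X = np.array([
    [25, 6.0, 1],
    [40, 12.0, 1],
    [33, 8.5, 0],
    [52, 20.0, 1],
    [29, 5.0, 0],
    [47, 15.0, 0],
])
# Human-recorded labels: 1 = eventually purchased insurance, 0 = did not.
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Estimated probability that a new enquiry converts into a sale.
new_enquiry = np.array([[35, 10.0, 1]])
print(model.predict_proba(new_enquiry)[0, 1])
```

Notice that everything the model knows comes from the labels in y: if the humans who produced those labels were wrong or biased, the model faithfully learns their mistakes.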

This limitation, and how AI can overcome it, is best illustrated by the story of how computers learned to play chess. Through most of the 20th century, AI's critics argued that computers would never beat humans at chess because playing chess requires imagination, intuition, foresight, planning – what they called real intelligence – and not just computational ability. But in 1997, IBM's chess computer Deep Blue defeated the reigning world champion, Garry Kasparov.

Chess programs feature sophisticated algorithms that are fed thousands of recorded games played by grandmasters; the algorithms learn how to play by analysing these historical games. But because they learn from a dataset of games played by humans, their abilities are bounded by the skill of those humans, no matter how well they play.

In 2017, AlphaZero, a program developed by a Google subsidiary named DeepMind, entered the world of chess in spectacular fashion. Unlike its predecessors, AlphaZero had been taught only the basic rules of chess, and it taught itself to play by playing against itself. It quickly defeated Stockfish, the champion program at the time. Because AlphaZero did not learn from human games, it had developed a different style of play. In one game, it sacrificed four pawns in a row – something chess players might have found bizarre.
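AlphaZero's real method combines deep neural networks with Monte Carlo tree search and is far beyond a few lines of code, but the self-play idea itself can be sketched on a much smaller game. The toy below – every parameter and detail of which is hypothetical, not taken from DeepMind's paper – learns the game of Nim (remove 1 to 3 stones per turn; whoever takes the last stone wins) purely by playing against itself and feeding the result of each game back into the moves that produced it.

```python
# Toy self-play learning on Nim: given only the rules, the agent plays
# against itself and updates a value table from each game's outcome.
# This is a simple Monte Carlo scheme, not AlphaZero's actual algorithm.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)        # a player may remove 1, 2 or 3 stones
Q = defaultdict(float)     # Q[(stones_left, action)] -> learned value
ALPHA, EPSILON = 0.1, 0.2  # learning rate and exploration rate

def choose(stones):
    """Pick a legal move, mostly greedily, sometimes at random."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(50_000):                    # self-play games
    stones, moves = 21, []
    while stones > 0:
        action = choose(stones)
        moves.append((stones, action))
        stones -= action
    # The side that took the last stone wins. Propagate +1 for the
    # winner's moves and -1 for the loser's, alternating backwards.
    reward = 1.0
    for state_action in reversed(moves):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

# From 21 stones, optimal play is to take 1, leaving the opponent a
# multiple of 4 – a strategy the agent discovers rather than inherits.
print(max(ACTIONS, key=lambda a: Q[(21, a)]))
```

Like AlphaZero, the agent here starts with nothing but the rules; whatever strategy it ends up with is discovered, not inherited from human examples.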

In a technical paper, its developers described how AlphaZero discarded strategies that human grandmasters routinely used and invented new ones nobody knew existed. AlphaZero's success stems from the fact that – unlike previous algorithmic contenders – it didn't inherit the limitations of human knowledge.

In 1953, Alan Turing wrote that one cannot program a machine to play a game better than one does oneself. Today, what we need more than anything else are computers that know better than their creators. When developing AI models that can diagnose cancer, for example, we want the models to know things we don’t. For that to happen, we must allow the models to teach themselves and free them of the boundaries of our own understanding.

In 2018, researchers found that an AI recruiting tool used by Amazon discriminated against women. The company had created the tool to crawl the web, identify potential candidates and rate them. To train the algorithm to judge a candidate's suitability, its developers used a database of CVs submitted to the company over a 10-year period. Because Amazon, like most technology companies, employed fewer women than men, the algorithm inferred that the gender imbalance was part of Amazon's formula for success.

For another example, COMPAS, a program used by courts in the US to estimate the probability that a defendant would reoffend and to inform sentencing and parole decisions, was found to have inherited the judicial system's racial discrimination as well.

To blame AI for the prejudices of human beings is misguided. At the same time, it is entirely possible to develop algorithms that are better than us at being fair. Technology alone cannot fight societal biases, but we can ensure that our algorithmic offspring don't inherit our prejudices and, in fact, help us overcome our moral shortcomings.

Discrimination is as old as humankind; religious preaching, moral education, social processes and legislation may mitigate its consequences but cannot eliminate it altogether. But today, as we increasingly cede decision-making to AI algorithms, we have a unique opportunity. For the first time in history, we have a real shot at building a fair society, free of human prejudices, by building machines that are fair by design.

Of course, this is still only a pipe dream, but our first step in this direction should be to change our priorities: instead of obsessing over the performance of supervised learning models on specific problems, we must develop methods that allow AI to learn without human-created labels.

Some researchers have already recognised this and have, in the last few years, been developing new techniques for reinforcement learning and unsupervised learning, both of which give AI more autonomy and demand less human supervision. AI will soon permeate most aspects of our lives, which makes these developments all the more important for the future of humanity.
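For a flavour of what learning without labels looks like, here is a minimal unsupervised-learning sketch using k-means clustering from scikit-learn. The two-group synthetic data is hypothetical and stands in for any unlabelled dataset.

```python
# A minimal unsupervised-learning sketch: k-means finds groups in the
# data by structure alone, with no human-provided labels (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hypothetical clusters of customer behaviour, deliberately unlabelled.
data = np.vstack([
    rng.normal(loc=[20.0, 2.0], scale=1.0, size=(50, 2)),
    rng.normal(loc=[60.0, 9.0], scale=1.0, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)  # structure discovered without any labels
```

No human tells the algorithm what the groups are or what they mean; it discovers the structure on its own.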

Viraj Kulkarni has a master’s degree in computer science from UC Berkeley and is currently pursuing a PhD in artificial intelligence.
