
Is This the AI We Should Fear?


Scientists working on artificial intelligence (AI) have traditionally pursued the goal of constructing intelligent machines, with human intelligence used as a benchmark. However, philosophers like John Haugeland (author of Artificial Intelligence: The Very Idea, 1985) have contended that the real goal is to build machines with “minds of their own”. Of late, AI has also been conflated with machine learning, which is but one component of AI, as well as with data science and, more imaginatively, even the Internet of Things. But what exactly is AI?

In the middle of the last century, when AI was still in its infancy, scientists were having heated debates on the nature of intelligence and on the possibility of machine intelligence. Alan Turing, the English computer scientist, sought to bring the arguments to a close with his ‘imitation game’, known today as the Turing test. Its fundamental principle was that if a machine’s typed responses to questions were indistinguishable from a human’s, then it must be intelligent.

Similarly, in this age of machine learning, we will need to decide when a machine has acquired the ability to think for itself. To answer this question, we need to look within our own heads and ask what makes us intelligent. At first the list seems endless, because intelligence has to be multifaceted. Beating a human champion at chess or Go, diagnosing diseases from images better than humans can and responding to voice commands are all smart, but they are not by any stretch of the imagination the whole story.

An intelligent agent operates autonomously in its environment. The following diagram depicts a human agent: she senses the world around her, understands what she senses, and deliberates over what she has taken in.

Image: Deepak Khemani

The diagram depicts the three layers of information processing. The outermost layer processes signals: the light falling on our retinas, sound impinging upon our ears, etc. The middle layer processes the incoming information, recognises patterns and assigns class labels. The innermost layer is concerned with cognition: the process of thought, memory, language, contemplation and imagination.

In this architecture, the precise nature of deliberation remains elusive. We absorb information via our sensory organs – eyes, ears, nose, skin – and we retain only what we want. Our memories have complex structures that can be divided into two kinds, short-term and long-term, and are joined by a deep repository of subconscious knowledge. We digest what we learn in myriad ways and recall relevant bits when required. We let our minds wander when we’re idle and we often allow free rein to the imagination.

Imagination is the soul of our intelligence. We conjure fantastic worlds, imagine how things will work as a result of decisions we are yet to make, and fulfil our goals through deliberated action. Of course, we are not always in control; rather, we don’t always have the sense that we are. At the same time, our exploitative behaviour – as evidenced in the crimes of humanity – is a mark of intelligence. No single human is supremely intelligent, but every aspect of intelligence manifests itself in one or the other of us.

As intelligent agents, and before the advent of modern technology, we perceived the world through our senses and through oral stories. Then, the postcard, the newspaper, the telephone, the radio and then television extended the reach of our senses. Soon, through advertisements in some of these media, we began perceiving not just what we wanted to see but also what someone else with commercial or political interests wanted us to see.

Then came the internet and social media, both of which changed our lives in important ways. Until their advent, information had flowed only one way: from the world outside to us. But now, with each person individually seeking out information from the world wide web, the facilitator began to observe who was consuming what. The flow of information became two-way, and the observer became the observed. Data, as they say, is the new oil.

There are now vast amounts of data available on the internet – about our behaviour as much as about everything else – which has in turn triggered work in data science, analytics, machine learning and, nowadays, deep learning, a kind of machine learning that learns layered representations of data instead of being programmed for specific tasks.

The success of deep neural networks in pattern recognition, and image labelling in particular, has been spectacular. Algorithms trained on many diverse images in which, say, a horse is present are able to label new images containing a horse with considerable accuracy. Similarly, algorithms tracking you on the internet can likely identify what you are looking at as you browse. As a result, the modern picture of our cognitive architecture looks like this:

Image: Deepak Khemani
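The supervised learning that powers this pattern-recognition layer can be sketched in miniature. The toy below is purely illustrative (the data, network size and learning rate are all invented for the sketch; real systems use deep convolutional networks trained on millions of photographs): a tiny one-hidden-layer network learns, from labelled examples, to say whether a noisy 3×3 image contains a horizontal or a vertical bar.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(label):
    """3x3 image with one bright bar: 0 = horizontal, 1 = vertical."""
    img = rng.normal(0.0, 0.1, (3, 3))       # background noise
    i = rng.integers(3)                      # which row or column
    if label == 0:
        img[i, :] += 1.0                     # horizontal bar
    else:
        img[:, i] += 1.0                     # vertical bar
    return img.ravel()

# Labelled training set, standing in for human-annotated photographs.
y = np.array([k % 2 for k in range(400)], dtype=float)
X = np.array([make_image(int(lbl)) for lbl in y])

# One hidden layer of tanh units supplies the non-linearity needed to
# separate the two patterns (no purely linear rule can).
W1 = rng.normal(0.0, 0.5, (9, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, 32);      b2 = 0.0

for _ in range(5000):                        # plain gradient descent
    h = np.tanh(X @ W1 + b1)                 # hidden features
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2))) # predicted P(vertical)
    d2 = (p - y) / len(y)                    # cross-entropy gradient
    d1 = np.outer(d2, W2) * (1.0 - h**2)     # backpropagate to layer 1
    W2 -= 2.0 * (h.T @ d2); b2 -= 2.0 * d2.sum()
    W1 -= 2.0 * (X.T @ d1); b1 -= 2.0 * d1.sum(axis=0)

# The trained network labels images it has never seen before.
X_new = np.array([make_image(k % 2) for k in range(100)])
p_new = 1.0 / (1.0 + np.exp(-(np.tanh(X_new @ W1 + b1) @ W2 + b2)))
acc = np.mean((p_new > 0.5) == (np.arange(100) % 2 == 1))
print(f"accuracy on unseen images: {acc:.2f}")
```

The hidden layer matters: summing pixel brightness by rows and by columns yields the same total, so no single weighted sum of pixels can tell the two bar patterns apart. Learning such intermediate features is exactly what “deep” learning scales up.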

Today’s AI only scratches the surface of cognition – the core of AI – depicted as the small blue circle. While it is the most significant part of the human cognitive system, it is relatively less important for AI as we understand it today. The machine-learning layer, depicted as the bloated outer shell, is where the action is, and its impact on our lives includes both the good and the bad.

For starters, there have been some big strides in medical diagnostics. It is now possible to combine the experience of thousands of doctors by observing how they diagnose certain conditions, and to find ways for machines to do it better, such as by homing in more precisely on critical symptoms.

However, each piece of data comes from a human patient who may be worried about it falling into the wrong hands. For example, medical information of this kind could be valuable to insurance companies and potential employers. Similarly, social media platforms harvest information about your likes, dislikes, preferences, leanings, inclinations, even beliefs, with the intention of selling the data to advertisers constantly on the lookout for potential customers for their products.

Social media platforms would like you to stay addicted because the more you use a platform’s services, the more data you generate and the better marketers are able to target you. This prompted the rise of influencers: personalities who thrive on the attention of the members of their social networks and through whose accounts advertisers push their ads. In fact, we’re also tracked when we’re not actually on these platforms: through various apps on our smartphones, some of which purport to be funny camera filters but ask for access to our contacts to work.

When the freebies first began – Hotmail was perhaps the pioneer – it wasn’t clear what was in it for the company. But now, almost everyone knows that when they’re being offered something for free, whether by Google or Facebook, it’s not going to be a one-way transaction. They know that they themselves are the product, and that their data will be harvested by these companies even as they use the services for free.

So AI and machine learning in their most ubiquitous forms today are instruments of the capitalist pursuit of profit. They shouldn’t be confused with automation, another buzzword often uttered in the same breath. While some forms of automation use methods gleaned from AI research, such as self-driving cars and algorithmic trading in stock markets, most of it has little to do with intelligence. It is automation, and not AI as such, that is responsible for the loss of human jobs. Automation does make our lives more comfortable, but it also requires government regulation to ensure the wealth it generates is distributed more equitably.

Nonetheless, these technologies are also troubling for their implications for our privacy and data ownership. Harvesting and exploiting data, whether for good or ill, lies in the realm of data science, analytics and machine learning. But is it AI? Perhaps not.

We scientists want to understand intelligence as well as create machines that are intelligent. We want to create companion machines for ageing societies, machines that can teach our children math or serve as perceptive personal assistants. We’d like to build a robot that cooks an exotic meal with recipes from the internet, or even teaches you how to play bridge. We don’t just want to build machines that interact with humans in a superficial manner while pretending to be deep.

In sum, the AI we see today is only the crust of a would-be intelligent entity, but this limited version is where corporate interest lies. Indeed, this AI is only the tip of the machine-intelligence iceberg, and the corporate world does not seem interested in pushing past its limits to do more and do better. It likely won’t until doing so makes commercial sense.

Deepak Khemani is a professor in the department of computer science, IIT Madras.
