Drones That Can Spot ‘Violent Behaviour’ in Crowds to Be Tested in India

New Delhi: A month from now, a drone looking to catch “violent behaviour” will be flying over a university festival at the National Institute of Technology (NIT), Warangal. This will be the first real-world test for a new system developed by researchers from the University of Cambridge, NIT Warangal and the Indian Institute of Science Bangalore, who are attempting to use artificial intelligence (AI) and drone surveillance to identify violent behaviour in crowds.

The researchers, led by Amarjot Singh at the University of Cambridge, have described their work in a paper titled ‘Eye in the Sky’. Singh told The Verge that the idea first came to him after the Manchester Arena bombing of 2017. He thought if violent or suspicious behaviour could be detected in time, it could perhaps also be stopped before people got hurt.

How it works

According to the paper published by Singh and his colleagues, Devendra Patil from NIT Warangal and S.N. Omkar from IISc Bengaluru, what they are developing is an “improved real-time autonomous drone surveillance system to identify violent individuals in public areas.”

Two cameras on the drone capture video footage of the crowd and transmit it in real time for analysis. An algorithm then combs through the footage and checks whether any of the poses struck by people in the crowd match what the researchers have designated as ‘violent’. For now, the algorithm can detect five ‘violent poses’: strangling, punching, kicking, shooting and stabbing. Once the system is further developed, the researchers hope footage from surveillance cameras can be used to prevent violent attacks of any sort.
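The paper's pipeline is more involved than a news summary can convey, but the basic idea of matching body poses against a small set of ‘violent’ classes can be sketched simply. The snippet below is an illustrative sketch, not the authors' implementation: it assumes each detected person arrives as a set of 2D body keypoints from some off-the-shelf pose estimator, turns those into limb-orientation features, and classifies them with a support-vector machine into ‘neutral’ or one of the five poses named above. The joint indices, feature choice and training data are all placeholders.

```python
import numpy as np
from sklearn.svm import SVC

VIOLENT_POSES = ["strangling", "punching", "kicking", "shooting", "stabbing"]
CLASSES = ["neutral"] + VIOLENT_POSES

# Hypothetical joint pairs (shoulder-elbow, elbow-wrist, hip-knee, knee-ankle),
# using COCO-style keypoint indices purely for illustration.
LIMB_PAIRS = [(5, 7), (7, 9), (6, 8), (8, 10), (11, 13), (13, 15), (12, 14), (14, 16)]

def limb_angle_features(keypoints):
    """Turn a (17, 2) array of x, y keypoints for one person into a
    small vector of limb orientations (the angle of each limb segment)."""
    feats = []
    for a, b in LIMB_PAIRS:
        dx, dy = keypoints[b] - keypoints[a]
        feats.append(np.arctan2(dy, dx))
    return np.array(feats)

# Placeholder training data: in practice these would be feature vectors
# extracted from footage of actors striking labelled poses.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, len(LIMB_PAIRS)))
y_train = rng.integers(0, len(CLASSES), size=60)

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

def classify_person(keypoints):
    """Predict a pose label for one detected person."""
    feats = limb_angle_features(keypoints).reshape(1, -1)
    return CLASSES[int(clf.predict(feats)[0])]

def flag_violent(frame_keypoints):
    """Return the 'violent' labels found among all people in one frame."""
    return [label for kp in frame_keypoints
            if (label := classify_person(kp)) in VIOLENT_POSES]
```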

Does it work?

Even without getting into the ethical conundrums technology of this sort may throw up, it's not perfect. Accuracy is a concern the researchers themselves flag in their paper. While they say the technology had a 94% success rate in detecting violent poses, they also noted that this number decreases as the number of people in the frame increases. With ten people in the drone camera's frame, accuracy in detecting ‘violence’ fell to a lowly 79%.

Things will likely be different when real-world footage is used. The video is likely to be blurry, operators won't be able to control how many people are in the frame, and people's actions are unlikely to be as pronounced, making them easier to misinterpret. Singh told The Verge that the system would not misinterpret gestures like high fives, for instance, but said the team does not yet have evidence to support this.

In addition, as The Verge has pointed out, the technology has so far been tested only on volunteers, who struck exaggerated violent poses in front of the camera. Singh agreed that this may not reflect how the technology would actually work – and the upcoming experiments at NIT Warangal's Technozion and Spring Spree festivals will be the real test of that. “We have permission to fly over [Technozion], happening in a month, and are seeking permission for the other,” he told The Verge.

What if it does work?

Even if it were to work perfectly and be completely glitch-free, technology of this sort raises a host of questions about who will be responsible for the data it generates and how they will choose to use it. On this specific project, Meredith Whittaker, a researcher who studies the social implications of AI, tweeted about some of these problems. She also objected to the tests planned at NIT Warangal.

People have also raised doubts in the past about how surveillance technology could impact civil society or anyone raising their voice against those in power – doubts that would apply to this drone AI surveillance project as well. An article in DroneLife wrote:

The problem lies in defining violence and training an AI system to recognize it with accuracy. There's the slippery slope of automated surveillance. What if it moves beyond violence to spot suspicious behavior? Who defines what that is and what safeguards are in place to stop that power from being abused?

…Either way, it’s an interesting application of drone technology – and no doubt this won’t be the last we hear about automated, machine learning-driven aerial surveillance. In the right hands, it’s difficult to argue that it could help prevent crimes and bring people to justice.

But in the wrong hands, whether that’s an authoritarian government or an unchecked police force, it could just be another tool of tyranny.

Speaking to The Verge, Singh defended the project by saying it could have positive impacts, though new regulations may be needed to check its use. “Anything can be used for good. Anything can be used for bad,” he said, but did not go into what ‘bad’ would mean here or whether it is worth the substantial risk at all.
