
A police state powered by AI drones that monitor violent behavior is becoming a reality

Who holds the responsibility when unmanned drones are used to control human behavior?


What would it mean to be monitored 24/7? Your behavior, your interactions, and the way you navigate different spaces, all recorded and analyzed to determine whether you’re a passive civilian or a threat. Of course, we live in a society where digital monitoring is common, if not expected, in both public and private settings. Not to mention the Internet revels in a good FBI-agent webcam meme. And yet, when we think about surveillance, there is usually someone behind those six monitor screens sifting through the footage, someone capable of differentiating a high-five from an assault. It’s human-powered.

The line between violent behavior and a simple hand gesture is blurred.

Researchers from India and the United Kingdom are testing artificial intelligence in drones to identify violent behavior in crowds. The hope is to detect violent and suspicious behavior in real time to prevent devastating attacks.

In their paper, Eye in the Sky, the researchers explain that the drone carries two cameras that record live footage of a crowd and transmit it for analysis, where an algorithm searches for any of five poses struck by people in the crowd that match what the researchers have designated as ‘violent’. Currently, those ‘violent poses’ include strangling, punching, kicking, shooting, and stabbing. Ideally, though, the researchers aim for the AI-powered drones to catch and deter any sort of violent attack, including the type of attacks that inspired the system’s creation.
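
To picture what that per-frame pipeline involves, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in, not the researchers’ code: estimate_pose fakes the kind of body-joint estimator the paper describes, and a toy nearest-template matcher substitutes for their trained classifier.

```python
# Minimal sketch of a pose-matching pipeline of this kind; all names,
# templates, and thresholds are hypothetical stand-ins.
import numpy as np

VIOLENT_POSES = ["strangling", "punching", "kicking", "shooting", "stabbing"]

# Hypothetical reference skeletons: one 14-joint (x, y) template per pose.
# A real system would learn these from labeled footage, not random numbers.
rng = np.random.default_rng(seed=0)
POSE_TEMPLATES = {label: rng.random((14, 2)) for label in VIOLENT_POSES}

def estimate_pose(person_crop):
    """Stand-in for a pose estimator that regresses 14 body-joint
    coordinates from a cropped image of one person."""
    return rng.random((14, 2))  # placeholder keypoints

def classify_pose(keypoints, threshold=2.0):
    """Match keypoints to the nearest 'violent' template; flag the pose
    only if the match is closer than the distance threshold."""
    label, dist = min(
        ((name, float(np.linalg.norm(keypoints - tmpl)))
         for name, tmpl in POSE_TEMPLATES.items()),
        key=lambda pair: pair[1],
    )
    return label if dist < threshold else "benign"

def analyze_frame(person_crops):
    """Run the per-person pipeline over one transmitted video frame."""
    return [classify_pose(estimate_pose(crop)) for crop in person_crops]

# Two detected people in one frame; the crops are placeholders here.
print(analyze_frame(person_crops=[None, None]))
```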

Lead researcher Amarjot Singh at the University of Cambridge told The Verge that the idea was “motivated by events” similar to the Manchester Arena bombing of 2017, in which a suicide bomber killed 22 people leaving an Ariana Grande concert. “Attacks like this could be prevented in [the] future if surveillance cameras can automatically spot suspicious behavior, like someone leaving a bag unattended for a long period of time.”

The technology has yet to be used outside of a controlled setting, though the researchers plan to deploy it in its first real-world setting at two upcoming festivals in India, raising concerns about accuracy and ethics.

The system has a 94 percent success rate in staged scenarios where actors strike one of the five poses, with that number decreasing as the number of people in the frame increases. When 10 people were introduced into the drone’s frame, accuracy in detecting ‘violence’ fell to 79 percent, making the system less reassuring and reliable in an uncontrolled environment where crowd size is unpredictable and interactions like hand gestures and dance, specifically moshing, are likely to be misinterpreted in its first few real-world uses. But even if the system runs with little to no glitches, two questions remain: who is responsible for this AI drone, and how will they choose to use it?

Companies like Axon, a police technology company that makes stun guns and body cameras, have created a line of drones that are “marketed as a way to help law enforcement with search-and-rescue operations, crowd monitoring, traffic-accident reconstruction, and evidence collection,” according to Slate. Shortly after the announcement, civil rights advocates penned a letter to the company citing the social implications of placing such technology in the hands of law enforcement, who have a “documented history of racial discrimination.” The fear is that AI technology would increase the incidence of racial profiling, which has been known to lead to wrongful outcomes and injustices.

Studies have shown that AI systems like this, which learn from collected data and footage to build a comprehensive picture, absorb algorithmic bias from the racial and sexist prejudices embedded in that data, as in the algorithm used to generate risk scores for former prisoners in Florida. That system disproportionately marked black people as at risk of re-offending compared to their white counterparts, which often meant longer sentences and revoked plea deals for black people labeled high-risk.
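
To make “disproportionately marked” concrete, here is a toy audit sketch with invented numbers, not the actual Florida data: it compares each group’s false positive rate, meaning how often people who never re-offended were still labeled high-risk.

```python
# Toy risk-score audit with invented records (NOT the real Florida data):
# a "false positive" is someone labeled high-risk who did not re-offend.
from collections import defaultdict

# (group, labeled_high_risk, re_offended) triples, purely illustrative.
records = [
    ("black", True, False), ("black", True, False), ("black", True, True),
    ("black", False, False), ("white", True, True), ("white", False, False),
    ("white", False, False), ("white", True, False),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, labeled_high_risk, re_offended in records:
    if not re_offended:                 # only people who did NOT re-offend
        counts[group]["negatives"] += 1
        if labeled_high_risk:           # ...but were still flagged high-risk
            counts[group]["fp"] += 1

for group, c in counts.items():
    print(group, "false positive rate:", c["fp"] / c["negatives"])
```

With these made-up numbers the audit prints a higher false positive rate for one group than the other, which is the shape of the disparity the Florida analysis reported.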

Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should. - Jurassic Park

Researcher and Executive Director of the AI Now Institute Meredith Whittaker tweeted her dissent at a drone surveillance system set to find “violent” people, saying that it failed to consider “its grave consequences,” showing that “those who have the knowledge to make AI don’t deserve the power to determine its use.”

Dominique Stewart

Dominique Stewart is a creative with a BA in Magazine Media from Ball State University. She's interested in understanding how humans and pop culture relate to one another. In her spare time, she enjoys hoarding books, gardening, and going on deep dives into true crime podcasts. If every Hallmark murder mystery lead transformed into one person and was Afro-Latina, it would be her.
