As someone deeply concerned about their digital footprint, I take privacy issues and precautions very seriously, especially when it comes to facial recognition. Yet too often, when I urge those closest to me to do the same, I get asked, “Why does it matter?”, “What could they possibly want with my info?”, and “If it’s all online anyway, what can I do?”
I do not know which is more alarming, the indifference or the ignorance, because the two are interrelated, and both leave your digital identity exposed. The same public that marveled at deepfake videos of Obama and panicked over FaceApp’s use of its users’ faces now seems complacent about the very real prospect of mass deployment of these technologies. And as we move closer to a world where biometric security is the norm, no issue is more worry-inducing than facial recognition.
See, there are two types of uses for this technology, commercial and governmental, and both are equally concerning. Many of us use Face ID to unlock our phones and are comfortable doing so because we have voluntarily given our consent. But what about Yandex’s advanced reverse image search, which can pull up information on a person from anywhere on the web, even from private accounts, based on a single uploaded image?
On a wider scale, Clearview AI is a case of unregulated commercial use gone wrong: it scraped the open web, including social media, to assemble a database of three billion images, and then suffered a data breach earlier this year. This was particularly troubling because Clearview AI’s clientele included multiple law enforcement agencies that used its services to solve cases.
On the other end, governments around the world are either already implementing such systems or in the process of awarding contracts for them. In Australia, in an effort to tackle identity fraud, the federal government has been building a national identity-verification service of sorts that would be accessible to private companies as well as government agencies.
In France, Alicem has already been rolled out for some public services, while in the UK, London’s Metropolitan Police have deployed live facial recognition on city streets, albeit, as stated, for ‘specific instances’ only.
In India, the world’s largest democracy, implementation of a nationwide facial recognition system, one of the biggest anywhere, has been underway for a while now, causing quite a stir over exactly where and how it will be used.
And of course, China’s surveillance state has been the subject of much discussion, made even more visible in Xinjiang, home to the Uighur minority, where the technology has been an active tool for tracking down and detaining dissidents.
So, what does this all mean?
Trust me, I’m not entirely innocent either; my face is plastered all over the web under my name. Primarily because I don’t really mind my face being publicly associated with my name; that’s how it appears on all my official documents anyway. I do, however, mind my face being used for invasive surveillance tactics that undermine the essential right to privacy.
There are several legitimate concerns with facial recognition platforms, such as:
- Accuracy and inherent bias are key issues, as seen with other AI applications, and could work to the detriment of the population, especially people of color.
- While there are reports that 10,000 criminals have been arrested in China using the technology, a review of the Metropolitan Police’s deployment in the UK found an 81% inaccuracy rate in the system.
- If law enforcement and government authorities mandate facial recognition technology, false positives and misidentifications could unfairly skew the system against the more vulnerable sections of society.
- Abuse of these systems for unrestrained surveillance is another issue. Setting aside ‘tinfoil hat’ claims, the potential chilling effect on free speech is something we will definitely have to discuss.
- And as with any other data collection platform, storage is a clear issue: it is often inaccessible, obscure, or insecure. With non-consensual data collection, the risk is even greater, as Clearview’s data breach showed.
I am not naïve either. I understand that live facial recognition surveillance may already be deployed, without my consent, in the places I’ve visited. I also understand the pull of such advancements; it’s exciting to stand at the threshold of something science fiction has been heralding for years. And there are, of course, use cases where the very same technology could protect people: scanning many faces to generate a generic-looking one and transposing it onto someone’s face to hide the identity of at-risk individuals. I don’t believe in outright bans either, as San Francisco enacted; instead, I think stricter regulation is the key here.
It’s a balancing act, after all: proportionally weighing the right to privacy against security in the name of the public interest. But at the same time, you only have one face, and changing it is not as easy as changing your password. So if there has ever been a time to care about what exactly you put online and how exactly you navigate it, now would be the right time to, at the very least, start giving a shit.