Clearview AI – A controversial technology

So, a while ago, I stumbled upon something called Clearview AI, a facial recognition tool that helped identify perpetrators and victims of crime. Wasn’t that cool? But I wondered how it actually worked. So, I paid their official site a visit.

The site said, ‘Clearview AI’s technology has helped law enforcement track down hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers. It is also used to help exonerate the innocent and identify the victims of crimes including child sex abuse and financial fraud.’

[From <https://clearview.ai/>]

That sounded cool too; but how did it actually work? I rushed to YouTube, and ended up watching an interview with the company’s CEO, Hoan Ton-That, explaining how it works.

Let me explain it to you in simple terms:

Do you remember how Chatur finds the whereabouts of Ranchod through a picture of his secretary in 3 Idiots?

Clearview pretty much does the same. It helps law enforcement track down criminals by matching their faces against the various pictures available on the web. Basically, it generates leads in the investigation process.

The concept is ingenious, and its accuracy is very high, but there’s a big catch. Clearview scrapes photos from giants like Facebook, Google and Twitter to create its database, and matches these against photos of perpetrators using facial recognition tech. To put it more simply, it has every photo of you that you or anyone else has uploaded to any social media platform, unless your account was always private. If you’d kept your account open for just a minute and immediately made it private, it would still have those pictures. These pictures also stay in Clearview’s database even if you permanently delete them from social media.

Give Clearview one of your pictures, and within seconds it will browse the web and return every post, on every social media platform, that you appear in.
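Clearview’s actual pipeline is proprietary, but facial recognition search in general works roughly like this: every face is converted by a neural network into a numeric “embedding” vector, and a query face is matched against the database by vector similarity. The sketch below is purely illustrative; the URLs, the tiny 3-D vectors and the threshold are made up (real systems use 128- to 512-dimensional embeddings over billions of photos):

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(query, database, threshold=0.9):
    """Return (url, score) pairs whose face embeddings resemble the query."""
    matches = []
    for url, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score >= threshold:
            matches.append((url, score))
    # Best matches first
    return sorted(matches, key=lambda m: -m[1])

# Toy database: photo URL -> face embedding (hypothetical values)
db = {
    "social.example/post/1": np.array([0.90, 0.10, 0.20]),
    "social.example/post/2": np.array([0.10, 0.90, 0.30]),
    "social.example/post/3": np.array([0.88, 0.12, 0.21]),
}

# A new photo of the same person as posts 1 and 3
query_face = np.array([0.91, 0.09, 0.20])
print(find_matches(query_face, db))
```

Here posts 1 and 3 come back as matches while post 2 (a different face) is filtered out, which is why one uploaded photo is enough to surface every other photo of you in the database.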

So, what about our privacy and our consent?

Clearview argues that it has a First Amendment right to collect ‘public’ images available online, and that its tool is restricted to law enforcement agencies rather than public access. But isn’t giving law enforcement the ability to instantaneously identify and track individuals a dubious move? Even people who realize this and post nothing online can still end up in Clearview’s database through others’ posts, which is wrong on so many levels. And with a claimed accuracy rate of 99.6%, Clearview’s facial recognition evidently makes it easier for anyone with access to do wrong.

So, for all these reasons, Clearview has been a target of controversy ever since its inception in 2017. When Twitter found out that Clearview was scraping images from its platform, it immediately sent a ‘cease and desist’ letter to Clearview; Google and Facebook followed. Clearview faces scrutiny internationally as well. The European Union said that Clearview’s data processing violates the General Data Protection Regulation. Canada’s privacy commissioner, Daniel Therrien, also called the company’s services illegal, and said they amounted to mass surveillance that put all of society “continually in a police line-up”. He demanded that the company delete the images of all Canadians from its database.

(https://www.msn.com/en-us/news/us/clearview-ai-uses-your-online-photos-to-instantly-id-you-thats-a-problem-lawsuit-says/ar-BB1epVAn)

Recently, what everyone feared happened. Clearview suffered a security breach in which its entire customer list, the searches those customers had made, and their accounts were compromised. Clearview’s attorney assured everyone that it was a minor flaw which had been fixed. But wasn’t our privacy at stake?

(https://www.cnet.com/news/clearview-ai-had-entire-client-list-stolen-in-data-breach/)

– Vivek Jaju (FE IT), Sumeet Haldipur (SE Comps)