AI Supremacy

Clearview AI set to get US patent for its facial recognition technology

Michael Spencer
Dec 14, 2021

Clearview AI is an American facial recognition company that provides software to companies, law enforcement, universities, and individuals. It scraped most of our faces from the internet to train its facial recognition system and build its database.

Facebook, LinkedIn, Twitter and YouTube objected, but it still occurred.

Now it appears Clearview AI is closer to getting a US patent for its facial recognition technology. This is troubling from an AI ethics standpoint: we already distrust social platforms for their disregard of our privacy and human rights, and for the way their apps exploit our attention.

The company’s controversial technology fills its database with images it scrapes from social media.

We blacklist companies like SenseTime, yet companies like Clearview proliferate in our own backyard. After Clearview AI scraped billions of photos from the public web — from websites including Instagram, Venmo and LinkedIn — to create a facial recognition tool for law enforcement authorities, many concerns were raised about the company and its norm-breaking tool.

The legal and regulatory mechanisms around the internet, AI and the metaverse are very much lacking, and that gap is creating a winner-takes-all capitalism built on illegal exploitation. It will result in a metaverse that infringes on our human rights. That, it appears, is the new meta of our new normal.

I follow privacy news related to artificial intelligence, though I rarely write about individual cases. This one, however, seems significant.

Last week, Politico reported that Clearview AI is on track to receive a US patent for its facial recognition technology.

Beyond the privacy implications and legality of what Clearview AI had done, there were questions about whether the tool worked as advertised: could the company actually find one particular person’s face in a database of billions? While China is ten years ahead of the US in facial recognition, the technology will be ubiquitous globally by 2035. BigTech has a history of selling this tech to police, and police have a history of abusing their authority against racial minorities.

In an era when the U.S. is suffering a mental health crisis (partly caused, I might add, by technology apps), this is getting serious. Facebook (now Meta) bears responsibility for the spread of Covid-19 misinformation.

In December 2021, Clearview was reportedly sent a “notice of allowance” by the US Patent and Trademark Office, which means that once it pays the required administrative fees, its patent will be officially granted.

In the spirit of the surveillance capitalism pioneered by companies like Google and Facebook, Clearview AI builds its facial recognition database from images of people that it scrapes from social media (and the internet in general), a practice that has left the company steeped in controversy.

Surveillance Capitalism that’s Ad-Centric is Becoming Normalized

So how does it work? Clearview’s patent application details its use of a “web crawler” to acquire images, even noting that “online photos associated with a person’s account may help to create additional records of facial recognition data points,” which its machine learning algorithm can then use to find and identify matches.
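
To make that pipeline concrete, here is a minimal, hypothetical sketch of the “search engine for faces” idea: embed each scraped face as a vector, then match a probe image by nearest-neighbour search. The embed_face() stub, the FaceIndex class and the example URLs are my own illustrative assumptions; Clearview’s actual crawler, model and index are not public.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a trained face-embedding model (e.g. a CNN):
    # deterministically maps a face crop to a 128-dimensional unit vector.
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

class FaceIndex:
    """Toy in-memory index pairing face embeddings with the source URL of each photo."""
    def __init__(self):
        self.vectors = []   # unit-length embedding vectors
        self.sources = []   # URL / account metadata for each embedding

    def add(self, image: np.ndarray, source_url: str) -> None:
        self.vectors.append(embed_face(image))
        self.sources.append(source_url)

    def query(self, probe_image: np.ndarray, top_k: int = 3):
        # Cosine similarity reduces to a dot product for unit vectors.
        sims = np.stack(self.vectors) @ embed_face(probe_image)
        best = np.argsort(sims)[::-1][:top_k]
        return [(self.sources[i], float(sims[i])) for i in best]

if __name__ == "__main__":
    index = FaceIndex()
    rng = np.random.default_rng(0)
    # Random pixel arrays stand in for photos a web crawler might have scraped.
    for i in range(5):
        photo = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
        index.add(photo, f"https://example.com/photo/{i}")
    probe = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
    print(index.query(probe))
```

A production system of this kind would replace the stub with a learned embedding model and the plain list with an approximate nearest-neighbour index, but the basic match-by-similarity logic is what the patent application describes.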

America is blacklisting companies in China while it has startups on the same path at home. Alphabet and your mother are normalizing a brand of surveillance capitalism that will turn our cities into micro-dystopias of data streams. This is not great for your privacy or your human freedoms.

Critics argue that Clearview AI’s facial recognition technology violates privacy and may negatively impact minority communities. If everything posted online is treated as open-source material to be harvested, people are going to gradually behave quite differently online. The exodus from apps like Instagram, Facebook and LinkedIn demonstrates that. Some people on LinkedIn now use avatars (cartoon faces) as their profile pictures.

Clearview AI’s tech also shows significant bias in identifying women and people of color. Male-led technologies have a tendency to enable racial profiling, especially when one of your biggest clients is law enforcement.

According to The Verge, Clearview’s data is already being used by the Pentagon and the broader DoD sector. Last year, the company said its technology was used by over 2,400 police agencies — including the FBI and the Department of Homeland Security — to identify suspects. In the aftermath of the Capitol riots this January, Clearview AI said law enforcement use of its technology sharply increased as detectives worked to identify those associated with the incident.

This suggests tech companies are working with government and national defense agencies in ways much of the public may not be aware of. The U.S. has its own surveillance architecture it’s trying to put in place to compete with China.

According to the New York Times, Clearview AI’s app was in the hands of law enforcement agencies for years before its accuracy was tested by an impartial third party. Now, after two rounds of federal testing in October 2021, the accuracy of the tool is no longer a prime concern.

The American Civil Liberties Union sued the company last year for violating the Illinois Biometric Information Privacy Act, which led Clearview to stop selling its technology to private companies and other non-law-enforcement entities. It has faced much the same pushback in Australia. Yet American surveillance companies are not backing down. They now work in concert to shape society, as we see with the Metaverse narrative.

So in late 2021, this amounts to the U.S. government moving to award a lucrative patent for a “search engine for faces,” a technology that has members of Congress and privacy advocates up in arms. Any picture you have uploaded online is likely already in there. Maybe it wasn’t even you who uploaded it. BigTech has readily shared its data on you with its partners for years, without paying you anything.

Last year, Facebook, LinkedIn, Twitter and YouTube all sent cease-and-desist letters demanding that the company stop scraping images and videos from their platforms, as the practice violates each site’s policies. But this is mostly a PR tactic. They all violate our privacy and algorithmically seek to alter our online behavior to maximize ARPU. That’s average revenue per user.

They don’t have our consent as citizens, and it seems they won’t need it. Clearview’s software — which scrapes public images from social media to help law enforcement match images in government databases or surveillance footage — has long faced fire from privacy advocates who say it uses people’s faces without their knowledge or consent.
