Data Backup Digest


Is Clearview AI a Threat to Consumer Privacy?

Consumers are more worried about their online privacy than ever before – and for good reason. With cyber-threats and attacks more prevalent than in years past, there is genuine cause for concern. Given the scope and scale of the hacks we've seen thus far in the 21st century, how can the average consumer expect to stay safe?

If these fears weren't enough, consumers around the globe face yet another emerging threat: facial recognition. While the technology has some legitimate uses – it has successfully located and identified known or wanted criminals in the past – it is far from perfect. Not only that, but some privacy watchdog groups warn that facial recognition has vast potential for misuse – even in official and legitimate applications.

One company, Clearview AI, is at the center of much of this controversy. Although the brand has lived in relative obscurity up until now, that's all about to change thanks to its highly innovative facial recognition app.

Users of the app, which bears the Clearview AI moniker, simply use their smartphones to snap facial portraits and upload them to the site. Once there, each photo is immediately matched against other public photos of that person. In some cases, the site even provides links showing where a photo originally appeared – such as Facebook, LinkedIn, YouTube, or another source entirely.

The facial recognition functionality of Clearview AI hinges on its comprehensive database of images – some three billion strong – scraped from every corner of the Internet. Although it seems like a relatively basic app, and although it originated as a homebrew project by a little-known developer, it amounts to a facial recognition system beyond the scope of anything the U.S. government has tried thus far.

In fact, U.S. officials – including state and federal law enforcement – admit they have little understanding of the Clearview AI database or even of who developed the app: an Australian programmer who once made an app letting users affix Donald Trump's hairdo to their own digital images. But this isn't stopping them from using Clearview AI in some of their own investigations – including cases of murder and sexual exploitation.

Moreover, the use of such technology at the governmental level is extremely controversial – even if it does solve some crimes. Other software developers have refused to release similar tools in the past precisely because of this controversy.

Yet U.S. law enforcement agencies continue to use the controversial app. According to Clearview AI, more than 600 different agencies have started using the app in the past year alone – although the company wouldn't release a list of specific agencies. The app is also popular amongst private security firms.

While Clearview AI is still in its infancy, there's no telling how far this technology could go. The app already includes programming for compatibility with devices like augmented reality (AR) glasses, which could make it nearly impossible to tell when the app is in use. For more information on Clearview AI, please visit their official website at {{|}}.
