Clearview AI and Banjo CEO Damien Patton Cause Privacy Problems for People

Controversial tech company Clearview AI says it’s in talks with federal and state agencies to track COVID-19 using facial recognition.

This is the technology of China, the biggest and most advanced surveillance regime in the world. Privacy and civil liberties are what need to be protected and discussed. Not surprisingly, Clearview violated Facebook’s TOS by scraping photos from every single user to build up its facial database. Don’t ever grant Clearview AI one shred of legitimacy. Burn them to the ground.

The Far-Right Helped Create The World’s Most Powerful Facial Recognition Technology

Clearview AI, which has alarmed privacy experts, hired several far-right employees, a HuffPost investigation found.

The notion that, because we already have CCTV, people won’t mind if we add facial recognition or other types of AI image analysis is deeply concerning.

Face recognition and other types of biometric analysis can turn passive video surveillance into something active and searchable across time and space. This massively increases the potential for abuse. It is terrifying that people developing these tools don’t see the difference.

An early-1980s SCOTUS case that allowed a proto-GPS radio tracking device has now provided the legal underpinning for automated license plate readers, which didn’t exist at the time.

When The New York Times published an exposé about a shadowy facial recognition firm called Clearview AI in January, it seemed like the worst nightmare of privacy advocates had arrived.
Clearview is the most powerful form of facial recognition technology ever created, according to the Times. With more than 3 billion photos scraped surreptitiously from social media profiles and websites, its image database is almost seven times the size of the FBI’s. Its mobile app can match names to faces with a tap of a touchscreen. The technology is already being integrated into augmented reality glasses so people can identify almost anyone they look at.
Clearview has contracts with Immigration and Customs Enforcement and the U.S. Attorney’s Office for the Southern District of New York, BuzzFeed reported earlier this year, and FBI agents, members of Customs and Border Protection, and hundreds of police officers at departments nationwide are among its users.

Banjo CEO Damien Patton Was Once Tied to KKK and Neo-Nazis

A photograph from an August 23, 1992, article in The Tennessean about an effort to build “a Southern headquarters in Middle Tennessee for the Aryan Nations.” Patton is identified in the photo as “Damien Patton, left on couch, who pleaded guilty to the drive-by shooting of the West End Synagogue.”

CEO of Surveillance Firm Banjo Once Helped KKK Leader Shoot Up a Synagogue
Documents reveal Damien Patton, CEO of SoftBank-backed Banjo, admitted to being a neo-Nazi skinhead in his youth
In grand jury testimony that ultimately led to the conviction of two of his associates, Patton revealed that, as a 17-year-old, he was involved with the Dixie Knights of the Ku Klux Klan.

Utah Attorney General suspends state contract with Banjo in light of founder’s KKK past


Damien Patton, who helped launch and now leads the secretive startup Banjo, was part of the Dixie Knights of the Ku Klux Klan as a 17-year-old and joined a leader of the group in a drive-by shooting of a synagogue in a Nashville suburb, according to a report by the online outlet OneZero, citing transcripts of courtroom testimony, sworn statements and more than 1,000 pages of records produced from a federal hate crime prosecution.

Clearview AI’s software can find matches in billions of internet images.

Law enforcement is using a facial recognition app with huge privacy issues

We have to keep fighting for our right to privacy.

You may have good reason to be worried that police use of facial recognition might erode your privacy — many departments are already using software with serious privacy concerns. The New York Times has learned that over 600 law enforcement agencies in the US and Canada have signed up in the past year to use software from little-known startup Clearview AI that can match uploaded photos (even those with imperfect angles) against over three billion images reportedly scraped from the web, including Facebook and YouTube. While it has apparently helped solve some cases, it also creates massive privacy concerns — police could intimidate protesters, stalk people and otherwise abuse the system with few obstacles.

Part of the problem stems from a lack of oversight. There has been no real public input into the adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019. The company is certainly capable of looking at search data if it wants — police helping to test the software for the NYT’s story got calls asking if they’d been talking to the media.

The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse. Facebook said it was looking into the situation and would “take appropriate action” if Clearview is breaking its rules.

Company chief Hoan Ton-That tried to downplay some of the privacy concerns. He noted that surveillance cameras are “too high” to deliver truly reliable facial recognition, and that Clearview was only notified about the reporter’s inquiries because of a system built to catch “anomalous search behavior.” Customer support reps don’t look at uploaded photos, the company added. And while there’s underlying code that could theoretically be used for augmented reality glasses that could identify people on the street, Ton-That said there were no plans for such a design.