Dailymail

Policing tool so controversial authorities rarely admit to using it is linked to more than 1,000 criminal investigations

A. Davis, 2 hr ago
A controversial facial recognition company that has built a massive photographic dossier of the world's people for use by law enforcement has had its software used in more than 1,000 police investigations without authorities disclosing it.

Despite opposition from lawmakers, regulators, privacy advocates and the websites it scrapes for data, Clearview AI has continued to rack up new contracts with police departments and other government agencies.

But now an investigation by The Washington Post has revealed how hundreds of U.S. citizens have been arrested after being connected to a crime not through good old-fashioned policing, but through use of the facial recognition software.

The Post was able to sift through four years of records from police departments in 15 states that documented how the software was used.

Suspects placed under arrest were never informed how they were identified, with police officers actively obscuring the software's role through convoluted phrasing such as 'through investigative means'.

Clearview has provided access to its facial recognition software to more than 2,220 different government and law enforcement agencies around the country, including Immigration and Customs Enforcement, the US Secret Service, the Drug Enforcement Administration and more.

It pulls photos and personal data from a wide range of online sources, including social media sites like Facebook, Instagram, X, and LinkedIn, which it uses to create individual profiles of people.

Clearview's app then uses these profiles to identify individuals in photos that its clients, such as police departments, upload.

In the past the company was the subject of controversy after it was found to be scraping pictures from social media without people's consent. The pictures were used to train its facial recognition algorithm.

The collection of images, approximately 14 photos for each of the 7 billion people on the planet, scraped from social media and other sources, makes the company's surveillance system the most elaborate of its kind.

But now it is law enforcement that is under scrutiny for failing to be transparent about how the tech is being used.

Sometimes police stated that suspects were identified through a witness or that 'a police officer made the identification.'

The concern is heightened because about two dozen U.S. state or local governments have passed laws restricting facial recognition after studies found the technology was less accurate at identifying black people and had, on a number of occasions, wrongly identified suspects.

When police departments were asked by The Post to explain how the specialized software was used, most refused to give details, while others claimed facial recognition was never used on its own to establish a concrete match, only to suggest possible suspects.

In two cases, suspects were identified and arrested using the software but the accused were never told of its use.

In the town of Evansville, Indiana, police claimed a man was identified by his long hair and arm tattoos, with his apprehension spurred by previous booking photos.

In Pflugerville, Texas, a man who stole $12,500 in merchandise was arrested after investigators found the suspect's name 'by utilization of investigative databases.'

Neither the Pflugerville nor the Evansville police department has commented on the cases.

One police department in Coral Springs, Florida tells its officers not to reveal how facial recognition is being used when writing up reports.

When officers search for possible suspects they are warned: 'Please do not document this investigative lead.'

Operations deputy chief Ryan Gallagher has said that, should criminal proceedings follow, the department would reveal its source if obliged to do so.

The facial recognition system functions by analyzing an image, typically from surveillance footage, and comparing it to a large database of photos, such as those from driver's licenses or mugshots.

AI is used to identify similarities between the 'probe image' and the faces in the database.

However, there is no universal standard for determining a match, so different software providers may show varying results and degrees of resemblance.
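The consequence of that missing standard can be sketched in code. Vendors typically encode each face as a numeric embedding and rank database photos by similarity to the probe image; what counts as a 'match' depends entirely on a vendor-chosen cutoff. The toy vectors and thresholds below are illustrative assumptions, not Clearview's actual algorithm:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings, from -1 (opposite) to 1 (identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe, database, threshold):
    """Return database IDs whose similarity to the probe meets the
    vendor-chosen threshold, best match first."""
    scores = {pid: cosine_similarity(probe, emb) for pid, emb in database.items()}
    return sorted((pid for pid, s in scores.items() if s >= threshold),
                  key=lambda pid: -scores[pid])

# Toy 3-dimensional "embeddings"; real systems use hundreds of dimensions.
db = {
    "mugshot_A": np.array([0.9, 0.1, 0.0]),
    "mugshot_B": np.array([0.5, 0.5, 0.1]),
}
probe = np.array([0.8, 0.2, 0.0])  # e.g. a face cropped from surveillance footage

# The same probe yields different results under different vendor cutoffs:
print(search(probe, db, threshold=0.995))  # strict vendor: no candidates
print(search(probe, db, threshold=0.80))   # lenient vendor: two candidates
```

Because the cutoff is arbitrary, two providers running the same probe against the same database can disagree on whether anyone 'matches' at all, which is exactly why results vary between software vendors.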

Clearview AI, a tool becoming ever more widely used by law enforcement, scans a vast database of publicly available images from the internet, meaning anyone's photo online could potentially be linked to an investigation based on facial similarity.

But in some of the more bizarre matches, the system suggested a suspect was basketball legend Michael Jordan, and in another search a cartoon of a black man was returned as a match.

Opposition now appears to be steadily growing to the use of a facial recognition tool that can lead to false arrests.

The investigation revealed that seven innocent Americans, six of whom were black, were wrongly arrested, only to have their charges later dropped.

While some were told they had been identified by AI, others were simply informed casually that 'the computer found them' or that they had been a 'positive match'.

Civil rights groups and defense lawyers say people should be told if the software is used to identify them.

In some recent court cases, the reliability of the tool has been successfully questioned, with defense lawyers suggesting police and prosecutors are working intentionally to shield the technology from the scrutiny of the courts.

Police probably 'want to avoid the litigation surrounding reliability of the technology,' Cassie Granos, an assistant public defender in Minnesota, told The Washington Post.

In its contracts with individual police departments, Clearview attempts to distance itself from the reliability of its results.

The contracts state how the program is not designed 'as a single-source system for establishing the identity of an individual' and that 'search results produced by the Clearview app are not intended nor permitted to be used as admissible evidence in a court of law or any court filing.'

Currently there are no federal laws regulating facial recognition, leaving it to individual states and cities to push for greater transparency over the use of the tool.

The company, founded in 2016 by Australian CEO Hoan Ton-That, 37, is currently valued at more than $225 million. Clearview is funded in part by Peter Thiel, the conservative venture capitalist who helped found the data analytics company Palantir, which has worked with the FBI, CIA, Marine Corps, and Department of Homeland Security.

Clearview's technology is also being used by private companies, including Macy's, Walmart, Best Buy and the NBA.

European nations - including the United Kingdom, France, Italy, Greece and Austria - have all expressed disapproval of Clearview's method of extracting information from public websites, saying it violates European privacy laws.

Canadian provinces from Quebec to British Columbia have requested the company take down the images obtained without subjects' permission.


In the meantime, its growing database has helped Clearview's artificial intelligence technology learn and grow more accurate.

One of its biggest known federal contracts is with U.S. Immigration and Customs Enforcement - particularly its investigative arm, which has used the technology to track down both the victims and perpetrators of child sexual exploitation.
