In this excerpt from “Your Face Belongs to Us” (Simon & Schuster, 2023), journalist Kashmir Hill recalls the rise of Clearview AI, the facial recognition company that burst into public consciousness with artificial intelligence (AI) software that could supposedly identify virtually anyone from a single snapshot of their face.
In November 2019, I had just become a reporter at The New York Times when I received a tip that seemed too outrageous to be true: A mysterious company called Clearview AI claimed it could identify virtually anyone based on just a snapshot of their face.
I was sitting in a hotel room in Switzerland when I got the email, on the last international plane trip I would take for a while because I was six months pregnant. It was the end of a long day and I was tired, but the email shocked me. My source had unearthed a legal memo titled “Privileged & Confidential,” in which a Clearview lawyer had said the company had scraped billions of photos from the public internet, including social media sites like Facebook, Instagram and LinkedIn, to create a revolutionary app.
Give Clearview a photo of a random person on the street, and the program spits back all the places on the internet where it has seen that face, potentially revealing not only the person’s name but other personal details about their life. The company sold this superpower to police departments across the country but tried to keep its existence a secret.
Not long ago, automated facial recognition was a dystopian technology that most people associated only with science fiction novels or movies like “Minority Report.” Engineers first tried to make it a reality in the 1960s, attempting to program an early computer to match a person’s portrait against a larger database of faces. In the early 2000s, police began experimenting with searching photo databases for the faces of unknown criminal suspects. But the technology had proven largely disappointing. Performance varied by race, gender and age, and even the most modern algorithms struggled to do something as simple as matching a mugshot to a grainy ATM surveillance image.
Clearview claimed to be different, touting a “98.6% accuracy rate” and a huge collection of photos that were unlike anything police had used before.
This is huge if true, I thought, as I read and reread the Clearview memo, which was never intended to be made public. I had been covering privacy and its steady erosion for more than a decade. I often describe my beat as “the looming tech dystopia – and how we can try to avoid it,” but I had never seen such a bold attack on anonymity.
Privacy, a word that is notoriously difficult to define, was most famously described in an 1890 Harvard Law Review article as “the right to be let alone.” The two lawyers who wrote the article, Samuel D. Warren Jr. and Louis D. Brandeis, called for the right to privacy to be protected by law, alongside the already established rights to life, liberty and private property. They were inspired by a then-new technology – the portable Eastman Kodak film camera, invented in 1888, which made it possible to take a camera outside the studio for ‘instant’ photographs of everyday life – and by people like me, a meddlesome member of the press.
“Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life,” Warren and Brandeis wrote, “and numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”
This article is among the most famous legal essays ever written, and Louis Brandeis subsequently joined the Supreme Court. Yet privacy never received the kind of protection that Warren and Brandeis said it deserved. More than a century later, there is still no overarching law that guarantees Americans control over what photos are taken of them, what is written about them, or what is done with their personal information. Meanwhile, companies based in the United States – and other countries with weak privacy laws – are creating increasingly powerful and invasive technologies.
Facial recognition had been on my radar for a while. Over my career, at outlets like Forbes and Gizmodo, I had covered major new offerings from billion-dollar companies: Facebook automatically tagging your friends in photos; Apple and Google letting people unlock their phones with their faces; digital billboards from Microsoft and Intel with cameras that detected age and gender to show passersby appropriate advertisements.
I had written about how this sometimes clunky and error-prone technology excited law enforcement and industry but frightened privacy-conscious citizens. As I processed what Clearview claimed it could do, I thought back to a federal workshop I had attended years earlier in Washington, DC, where industry representatives, government officials, and privacy advocates had sat down to hammer out the rules of the road.
The one thing they all agreed on was that no one should roll out an application to identify strangers. It was too dangerous, they said. A weirdo in a bar could take your photo and within seconds know who your friends were and where you lived. It could be used to identify anti-government protesters or women walking into Planned Parenthood clinics. It would be a weapon for harassment and intimidation. Accurate facial recognition, on the scale of hundreds of millions or billions of people, was the third rail of the technology. And now Clearview, an unknown player in the field, claimed to have built it.
I was skeptical. Startups are notorious for making grandiose claims that amount to nothing. Even Steve Jobs famously faked the capabilities of the original iPhone when he first unveiled it onstage in 2007.*
We tend to believe that computers have almost magical powers, that given enough data they can solve any problem better than humans can. So investors, customers and the public can be misled by outrageous claims and digital sleight of hand from companies that are striving to do something great but aren’t quite there yet.
But in this confidential legal memo, Clearview’s high-profile attorney Paul Clement, who had served as U.S. solicitor general under President George W. Bush, claimed to have tried the product with lawyers from his firm and “found that it delivers fast and accurate search results.”
Clement wrote that more than 200 law enforcement agencies were already using the tool and that he had determined that they “do not violate the Federal Constitution or relevant existing state biometric and privacy laws when using Clearview for its intended purpose.” Not only were hundreds of police departments secretly using this technology, but the company had also hired a fancy lawyer to reassure officers that they were not committing a crime by doing so.
I returned to New York with a hard deadline approaching: the birth of my child. I had three months to get to the bottom of this story, and the deeper I dug, the weirder it got…
Concerns about facial recognition had been growing for decades, and now the vague bogeyman had finally taken shape: a small company with mysterious founders and an unfathomably large database. None of the millions of people in that database had given their permission. Clearview AI represented our worst fears, but it also offered, at long last, the opportunity to confront them.
*Steve Jobs hid the iPhone prototype’s memory problems and frequent crashes by having his engineers spend countless hours finding a “golden path”: a specific sequence of tasks the phone could perform without any problems.
Kashmir Hill’s Your Face Belongs to Us: The Secretive Startup Dismantling Your Privacy was shortlisted for the 2024 Royal Society Trivedi Science Book Prize, which celebrates the best popular science writing from around the world.