Automation and artificial intelligence (AI) are often marketed as wondrous and futuristic technologies that will help us live more convenient lives. But beyond the idealistic marketing hype, the reality is far more malignant.
“AI” isn’t just asking Siri for directions or telling Alexa to turn on your lights; it’s already being used—and has been used for decades—to decide who receives medical care, who attends which schools, and who receives housing assistance.
It can also be used to identify people and make predictions about their behavior, a type of AI known as facial recognition.
Facial recognition software is already widespread. Customs and Border Protection uses it to screen non-U.S. residents on international flights, and the TSA plans to expand this to all international travelers. The New Orleans police department, in partnership with Palantir, was using facial recognition in its predictive policing program for six years before the public knew about it. It’s estimated that half of all American adults are in some sort of facial recognition database.
And at AI Now—a symposium held October 16 about the intersection of artificial intelligence, ethics, organizing and accountability, presented by an NYU research institute of the same name—panelists warned that facial-recognition technology has troubling implications for civil rights, especially amid current debates about who has access to public space.
“We should be deeply worried about the impact of AI face recognition on civil rights,” said Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund. “So much of this implicates public space, contested public space, who can step into it, and what happens to us when we do.”
The rise of mass surveillance and biometric analysis in public space
Since 9/11, mass surveillance has become the norm for law enforcement, which uses a variety of tactics to monitor people. Body cameras have become more common and security cameras are already ubiquitous. What’s changing today isn’t that we are being watched; it’s who’s doing the watching: computers.
“Autonomous facial recognition” is a system that uses computer vision to look at images; Snapchat’s filters and Apple’s FaceID use computer vision, for example. Then an algorithm—designed by human programmers and data scientists—processes those images and uses machine learning to analyze them, cross-referencing the images against other data sources.
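To make that pipeline a little more concrete, here is a minimal sketch of how a system might match the faces in a single image against a small watchlist. It uses the open-source face_recognition Python package purely for illustration (the article doesn’t name any specific software), and the file names, identities, and threshold below are hypothetical placeholders; real deployments run on live video, far larger databases, and the cross-referencing to other records described above.

```python
# A minimal sketch of the detect -> encode -> compare pipeline, assuming the
# open-source `face_recognition` package (pip install face_recognition).
# File names and identities are hypothetical placeholders, not from the article.
import face_recognition

# "Watchlist": one reference photo per known identity, reduced to a
# 128-dimensional embedding. Assumes each photo contains exactly one face.
watchlist = {
    "person_a": face_recognition.face_encodings(
        face_recognition.load_image_file("person_a.jpg"))[0],
    "person_b": face_recognition.face_encodings(
        face_recognition.load_image_file("person_b.jpg"))[0],
}
names = list(watchlist.keys())
known_encodings = list(watchlist.values())

# A probe image, e.g. one frame pulled from a camera feed.
frame = face_recognition.load_image_file("crowd_frame.jpg")

# Detect every face in the frame, embed it, and compare against the watchlist.
for encoding in face_recognition.face_encodings(frame):
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = distances.argmin()
    # The distance threshold (0.6 is the library default) decides what counts
    # as a "match"; tuning it trades missed matches against false positives
    # like the ones described later in this piece.
    if distances[best] < 0.6:
        print("possible match:", names[best])
    else:
        print("no match")
```

Everything after that “possible match” line is where deployed systems diverge: linking the match to criminal records, social media profiles, or location history is the cross-referencing step that raises the concerns discussed below.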
The facial recognition software that technology companies are developing can identify a person in a crowd in real time, track someone’s movements, detect emotions, and predict behavior. In a highly controversial study, one AI researcher claimed he could use facial recognition to predict someone’s sexuality. IBM used NYPD footage to create software that lets police search for people by their skin color.
There are, of course, huge caveats with facial recognition technology: it’s notorious for being highly inaccurate. Amazon’s face-recognition software—which analyzes facial attributes like gender and emotions—wrongly identified 28 members of Congress as people who have been charged with a crime. Facial recognition software used by London’s Metropolitan Police had an embarrassingly high rate of false-positive identifications: 98 percent.
Why facial recognition is a civil rights issue
During the AI Now panel, Nicole Ozer, the technology and civil liberties director of the ACLU’s California chapter, called facial recognition technology a “grave threat” that “feeds off and exacerbates bias in society.”
If facial recognition systems sound a lot like souped-up versions of phrenology’s quack science, it’s because they are. Can someone look like a criminal? Look like they’re about to cause harm? Appear to be a risk? Who decides that these traits actually signify criminality or risk? Or what a neutral expression is? This isn’t much different than the stereotyping and profiling that happens today, which disproportionately affects people of color.
“It’s not just recognizing a face; it’s evaluating a person,” Ifill said of the technology.
Facial recognition technology accelerates such judgments, makes them at a larger scale, and does so with a greater level of opacity. Then “we deposit this tech into institutions that are unable to address inequality, and drop it into a period of racial profiling and Stop and Frisk,” Ifill said.
Take New York City’s Gang Database, which currently lists between 17,000 and 20,000 individuals. It’s only one percent white—and it’s not entirely clear how someone is added to the list or removed, which has potentially damaging effects if someone is labeled a gang member when they have no gang affiliations.
In December 2017 and February 2018, the NAACP Legal Defense and Educational Fund and the Center for Constitutional Rights filed Freedom of Information Law requests to understand how the NYPD builds, maintains, and audits the database, but didn’t receive the documents they requested. In August, the advocacy groups sued the NYPD for failure to disclose its practices.
Why public space is a civil rights issue
The civil-rights implications of facial recognition become particularly clear in public space. What’s troubling about this tech—and data collection in general—entering the public realm at a rapid clip is that it erodes our constitutional rights to privacy, free speech, freedom of assembly, and due process. Meanwhile, our current political climate is becoming more hostile to these rights in public space. It’s not too far of a stretch to see how facial recognition could make it worse.
The Trump Administration is actively trying to restrict access to public space that’s popular with protesters. Would fewer people attend demonstrations if they knew their image could end up in a database just by attending? Police departments are already scanning crowds and protests to find and arrest people with outstanding warrants by cross-referencing footage with social media profiles.
Facial recognition adds fuel to the fire by making it impossible to move through public space freely. As Ifill said during the panel: “The Civil Rights movement was about dignity in public space.” Facial recognition has the potential to strip that dignity away, all behind the scenes, all surreptitiously.
Ifill drew parallels to the 1958 Supreme Court case NAACP v. Alabama. In an attempt to intimidate Civil Rights activists, Alabama subpoenaed the NAACP’s membership list, which the organization declined to hand over. The Supreme Court ruled that identifying who was in the NAACP would infringe on the right to privacy and free association.
In Wales, the human rights organization Liberty sued South Wales police for using facial recognition technology and told the Guardian, “The police’s creeping rollout of facial recognition into our streets and public spaces is a poisonous cocktail—it shows a disregard for democratic scrutiny, an indifference to discrimination, and a rejection of the public’s fundamental rights to privacy and free expression.”
Public space, at its most basic, is space that’s open and accessible to all people regardless of race, age, income, sex or gender, or economic background. With facial recognition software in widespread use, you essentially lose privacy as soon as you enter the public realm. And public space ceases to be public if people can’t freely use it—including using it free of fear that they will be inaccurately profiled by humans or autonomous systems.
Powerful technology with virtually no oversight
This summer, a group of Amazon employees called on Jeff Bezos to stop selling the company’s face-recognition technology—Rekognition, part of Amazon Web Services—to law enforcement, citing concerns about violating human rights, especially considering the targeting of black activists by law enforcement and ICE’s mistreatment of migrants and refugees. Just this week, Bezos essentially washed his hands of any culpability for civil or human rights violations related to Rekognition, saying that society’s “immune response” will eventually fix biased tech.
This week, an anonymous Amazon employee wrote a Medium post discussing how law enforcement around the country—specifically in Orlando, Florida, and an unnamed Oregon sheriff’s department—is testing facial recognition software with live video feeds and mugshot databases with virtually no public oversight.
It’s not just law enforcement that’s playing into mass surveillance. Cities are using machine learning and computer vision to “optimize” their operations and redesign their systems. Sidewalk Labs is building a data-driven “smart city” in Toronto. Consumer brands are latching onto facial recognition software to target advertising in public space.
“We only know the tip of the iceberg on how the government is ramping up face surveillance and how companies are using it,” Ozer said.
More public oversight of facial recognition may be coming. This summer, the ACLU asked Congress to institute a moratorium on the use of facial recognition by government agencies. Microsoft also urged Congress to regulate the technology. New York City created an “Automated Decision Systems Task Force” to create recommendations on how AI should be regulated and monitored. This summer, 20 experts in civil rights and artificial intelligence authored a letter to the task force with preliminary suggestions about what policy could look like. In Toronto, Sidewalk Labs is proposing a “Civic Data Trust” to oversee the information it will eventually collect.
Here’s a chilling game: count how many security cameras you pass on your way to work. I tried it yesterday and passed 85 on a trip that included about six blocks of walking in Brooklyn and Manhattan and passing through two subway stations. Some were private (the ones outside my apartment building and in its hallways, the cameras in front of shops and bodegas, and security cameras in and around my office building). Thirty-seven were associated with New York City governance (the NYPD cameras on light poles and service vehicles, security cameras in LinkNYC kiosks, cameras in MTA stations).
What was troubling to me is that I don’t know, off the top of my head, what the privacy policies are for any of these companies, how long footage is stored, who has deals to share data, or how secure any of the data is. Was I being profiled? By leaving my apartment, did my image enter some sort of database? I would have no idea, just like everyone else who ventures outside. (I looked into it and learned that some of the MTA’s cameras feed into NYPD’s Domain Awareness System, which uses facial recognition. According to a LinkNYC spokesperson, the company has a provision banning the use of facial recognition technology and does not send footage to the NYPD unless it is subpoenaed.)
In her opening statement to the symposium, AI Now co-founder Kate Crawford said a few words that stuck with me: “AI isn’t tech; it’s power, politics, and culture.” Right now, our political climate is hostile to our civil liberties, it’s using its power to restrict access to public space, and we live in a culture of fear. What good is a public space if people are too anxious to use it?