A not-so-friendly four-legged friend briefly joined the New York City Police Department this year, shocking New Yorkers in viral videos when it was deployed in response to a hostage situation in the Bronx, and later in a public housing building in Manhattan.
The tool – a 70-pound robot made by the company Boston Dynamics that is capable of transmitting video, sound and two-way communication – sparked an outcry about an overreach in the kinds of police surveillance that New Yorkers are exposed to. (The device was first tested and deployed by the NYPD last year.) Police have touted the so-called “digidog” for its ability to evaluate the safety of a scene, such as a hostage situation, and make officers aware of any potential threats before going in themselves. Facing scrutiny of the tool, department officials have countered that robots have been used to assess dangerous situations and defuse bombs for decades.
But to some onlookers, the robot evoked a “Black Mirror”-like dystopian future – one that represented a particular threat to overpoliced communities and people of color. “When was the last time you saw next-generation, world class technology for education, healthcare, housing, etc consistently prioritized for underserved communities like this?” Rep. Alexandria Ocasio-Cortez tweeted in February, calling the digidog a “robotic surveillance ground drone.” The digidog has since been abandoned by the department, following persistent criticism of its use and cost.
Use of the digidog shouldn’t have been a huge surprise, thanks to a city law passed last summer called the Public Oversight of Surveillance Technology Act that requires the NYPD to disclose impact and use policies on all its surveillance technologies and open them to public comment. But you have to dig around a bit to find mention of the digidog in the disclosures made by the NYPD in February and April of this year. The disclosures don’t outline every individual tool used by the NYPD, but explain their use policies in 36 broader umbrella categories such as unmanned aircraft systems (drones) and social network analysis tools. Digidog is wrapped up in the category called “situational awareness cameras,” which also includes cameras attached to poles and handheld scope cameras. The policy published by the NYPD, however, does not describe any differences in how something like a robotic dog and a camera on a pole are allowed to be used. “Having that fall just within ‘situational awareness cameras’ is definitely skewing the intention of the POST Act to its extreme,” said Daniel Schwarz, privacy and technology strategist at the New York Civil Liberties Union.
That’s just one of the issues advocates like Schwarz have with the new city law, and what they say is continued obfuscation around the surveillance tools used by law enforcement in New York. Advocacy organizations like the New York Civil Liberties Union have criticized the POST Act disclosures – first released in February and then revised after a public comment period in April – pointing out, for example, that some of the language appeared to be copied and pasted from one category to the next. In one instance, language about body-worn cameras wasn’t replaced with relevant language in the use policy for drones, Schwarz said. A spokesperson for the NYPD did not comment on criticisms about the department’s disclosures.
Now, major advocacy organizations, along with some city and state lawmakers, are pushing for regulation of the use of surveillance technologies by law enforcement and other government agencies. Regulating these technologies – something New York hasn’t really done yet – seems like a logical next step for the police reform movement, and may even gain some momentum amid renewed calls for scaling back police powers. But given the long road that City Council Member Vanessa Gibson faced in passing the POST Act, which doesn’t even regulate but just requires transparency about surveillance tools, limiting or even banning the use of these technologies won’t be an easy feat.
Among the most ambitious proposals is the banning of facial recognition and other kinds of biometric surveillance by law enforcement or other government agencies. Earlier this year, a global campaign led by Amnesty International launched a new effort called “Ban the Scan,” backing existing state legislation sponsored by state Sen. Brad Hoylman and Assembly Member Deborah Glick that would institute an outright ban on the use of biometric surveillance by law enforcement, as well as create a task force to study possible future uses under a regulatory framework. The campaign is also calling on the New York City Council to introduce legislation to ban the use of facial recognition by any city agency, contractor or employee.
Partnering on the campaign is the New York Civil Liberties Union and local organizations focused on privacy, including the Surveillance Technology Oversight Project, as well as the Immigrant Defense Project and New York City Public Advocate Jumaane Williams’ office. While objections to the use of facial recognition include run-of-the-mill privacy concerns, there is also a particular focus on the threat the technology could pose to communities of color. Studies have found that facial analysis systems misidentify the faces of Black women at disproportionately high rates and misidentify darker-skinned faces more often in general; according to one federal study, Black and Asian people are up to 100 times more likely to be misidentified than white people. “The further you are from being a cis white male, the more likely it’ll make an error,” Williams said. “That’s just the truth.”
Jose Chapa, senior policy associate at Immigrant Defense Project, said that facial recognition could be used to help track down and deport undocumented immigrants, too. U.S. Immigration and Customs Enforcement has used the technology to run searches on driver’s license photos in Maryland, for example. “It's a combination, I would say, of not only fear, but also vigilance, in light (of the fact) that we don't know exactly how much they use this technology,” Chapa said of the group’s push for regulation.
New York has its own history with facial recognition that has come under scrutiny. The NYPD maintains a facial recognition database – including photos of teenagers, despite evidence that the technology doesn’t work as well on younger faces. And recent reports have revealed that the department has a relationship with the facial recognition company Clearview AI. BuzzFeed News reported earlier this year that more than 40 individuals in the NYPD had run over 11,000 searches with the software, despite the department telling the outlet in 2020 that it had no “institutional relationship” or formal or informal contract or agreement with the company. The New York State Police has also used the company’s tool, running more than 5,100 searches with the software, BuzzFeed News reported.
“While institutionally, the NYPD has had a narrow use of facial recognition, our previous practices did not authorize the use of services such as Clearview AI nor did they specifically prohibit it simply because Clearview AI did not exist at the time our Facial Recognition Practices were established,” a spokesperson for the department wrote in an email, pointing to a facial recognition policy introduced last year. “Technology developments are happening rapidly and law enforcement works to keep up with this technology in real time … And most importantly, a facial recognition match is merely a lead; no one has ever been arrested solely on the basis of a computer match alone.”
Now, there’s some hope that amid a renewed push for sweeping police reforms that began last summer during the George Floyd protests, calls to regulate law enforcement’s use of this kind of technology could gain momentum too. But obstacles are apparent. The POST Act, for example, which doesn’t even place limits on the use of surveillance technology but requires police to disclose the tools being used, didn’t pass until last summer. That was after several years of pushback from the NYPD to the Council legislation. “Sometimes you have to fight hard for the simplest of things,” Williams said. “In terms of dealing with police accountability and transparency, we’re probably where we needed to be maybe six or seven years ago.”
But privacy and civil liberties advocates may have some reason to be optimistic. Last year, New York passed a ban on the use of biometric technology in schools until at least July 2022. And in an executive order last year, Syracuse Mayor Ben Walsh introduced oversight policies for surveillance technology, including a ban on the use of biometric or facial recognition technologies and predictive policing algorithms by city departments. Other locales across the country, including San Francisco and Boston, have banned facial recognition.
The state legislation sponsored by Hoylman and Glick was first introduced last year, and has been reintroduced, but has amassed only a handful of co-sponsors so far. In a recent Daily News op-ed, Albert Fox Cahn of the Surveillance Technology Oversight Project and Hoylman pushed for passage of the bill. “The truth is that we have been down this road before. For years, New Yorkers accepted the lie that stop-and-frisk would protect us,” Hoylman and Cahn wrote. “Eventually, after hundreds of thousands of New Yorkers were traumatized by the tactic, we finally admitted our mistake. We can’t afford to take so long to fix another big mistake.”
As for the POST Act, New York City’s first real attempt at checking police surveillance, advocates said the pressure should continue. “As we're seeing the NYPD still be reticent to come clean about what tools and technologies they have, we should see lawmakers ask them questions about these policies,” said Michael Sisitzky, senior policy counsel at the New York Civil Liberties Union. “Ask them to justify the continued use and acquisition of these tools and the continued devotion of city monies to these purposes.”