Technology

How New York is fighting political influence campaigns

Cybersecurity experts on how bad actors on social media can be stopped.

Facebook announced in April that it would increase disclosure of political ads on its platform. JuliusKielaitis/Shutterstock

Russian interference in the 2016 presidential election has caused high drama in Washington, D.C.: a federal investigation verging on two years, a series of high-profile Senate Intelligence Committee hearings with internet giants and, as state Sen. Todd Kaminsky points out, new protections against political influence campaigns in New York.

“It took basically a hacked election in order for Facebook to agree and for New York to have a law, to put it into the legal code, that [Facebook] has to at least say who is paying for an ad,” he said.

Kaminsky was the lead sponsor of a bill (S.6896) last year that requires social media ads and other forms of political communication to carry a “paid for by” statement, adding more transparency to the practice. Some aspects of that legislation were enacted in the 2018 New York State budget, and in April 2018, Facebook announced that it would increase disclosure of political ads on its platform.

Kaminsky counts that as a victory, but he says there’s still a need to combat disinformation and “fake news,” something he has encountered in his own campaigns. On a national scale, these concerns were embodied in 2016’s “Pizzagate” conspiracy, in which a fake story about Democratic Party leaders holding children as sex slaves in a Washington pizzeria spread through YouTube, Twitter and Facebook, and led to a man firing a rifle inside the restaurant.

While fake news stories do not always lead to violence, they are prevalent and are often part of hostile foreign governments’ efforts to undermine American democracy or to elect more pliant candidates. And because social media platforms like Facebook and YouTube remove posts at their own discretion, the damage from a fake news story may be done before anyone catches it.

“Can someone put an ad on Facebook tomorrow saying that you shouldn't vote for Todd Kaminsky because two weeks ago he ran someone over in the street and left them for dead, and have a picture of me driving away somewhere?” Kaminsky asked.

“Right now there's nothing you can do about that. And if you do, it'll take three weeks for anyone to act on it. What if it's two days before an election?”

At a City & State panel discussion on privacy and cybersecurity on Thursday, experts in the private and public sectors alike spoke to the complexity of tackling the issue, given its First Amendment implications.

“It is a difficult line to determine where exactly the First Amendment rights are being broached to the point where we need to start monitoring content,” said Eun Young Choi, cybercrime coordinator for the U.S. Attorney’s Office for the Southern District of New York. “But I think a pretty clean line is, ‘Well, are you lying about who you are when you say that you're buying this ad space for X when really you are Y?’”

Still, it’s something Choi says the FBI and the U.S. Attorney’s Office are attempting to address by tracking the activities of groups known to focus on influence and disinformation campaigns, whether the bad actor is a nation-state – say, Russia – or a wealthy donor who wants to buy an ad spreading false information about a candidate she opposes. When an ad buyer misrepresents who they are, the case is more straightforward. “It seems like a novel approach, but this is just old-school fraud,” Choi said.

Prashanth Mekala, a supervisory special agent at the FBI’s New York field office, echoed Choi, adding that the bureau will get involved alongside the U.S. Attorney’s Office when attempts to influence decision-making are carried out illegally, such as by perpetrating fraud.

While Kaminsky is eager to see the question of monitoring content on digital platforms taken up in the new legislative session, he harbors no illusions about just how sensitive free speech issues can become.

“These are really difficult and thorny questions,” he admitted. “Maybe because I decide to run for office it should be okay for someone to say that I murdered someone three years ago and you can photoshop me standing over a body, but what about you? And what about a business owner?”

Platforms like Facebook do employ content moderators to identify these types of posts, and in October the company announced that it had removed 800 pages and accounts for spreading fake news. “If you ask Facebook, there are people that are looking for this, because they don't want anyone to die on a live feed and they don't want these things to happen,” Kaminsky said. “But it's the Wild West. Who is deciding where the line is? How do we figure this stuff out?”

Drawing that line will be difficult for New York legislators, but Kaminsky is confident that the fallout from election meddling is still strong enough to propel movement on the issue, arguing that the social media platforms themselves aren’t doing enough on their own. “I think the days of letting them do what they want to do and waiting for a crisis to jump in are over,” he said.