Published: Aug. 5, 2020

Banner image: A crowd tunes in for a televised debate between Hillary Clinton and Donald Trump in 2016. (Credit: LBJ Library)

In the lead-up to the 2016 presidential election, thousands of Twitter users changed their behavior after coming into contact with social media bots created by a notorious troll farm in Russia—adopting increasingly negative language in their tweets, among other shifts.

That conclusion comes from a new study led by CU Boulder and published online this week. It’s the latest research to dig into the affairs of the Russian government-backed Internet Research Agency (IRA). For more than two years, according to an investigation by the U.S. House Intelligence Committee, this organization set out to undermine the U.S. electoral process—through a campaign of posting false information and racist memes to social media sites like Facebook and Twitter.

The new CU Boulder findings, however, are some of the first to examine the behavior of a broad swath of Twitter users who had contact with the IRA.

In the study, which is currently undergoing peer review, the researchers took a deep dive into thousands of everyday Twitter accounts that were active between 2014 and 2016. The results show a clear before-and-after picture: Some Twitter users, for example, began mentioning Donald Trump and Hillary Clinton a lot more in their tweets after they encountered the phony accounts.


Top: A building in St. Petersburg, Russia, that once housed the offices of the Internet Research Agency; bottom: A fake ad posted by IRA accounts in the lead-up to the 2016 election that urged Americans to vote by text message. (Credit: CC photo via Wikimedia Commons)

The team notes that it can’t prove that the IRA was behind this shift in online behavior. But the findings point to a troubling pattern—especially as the country gears up for another presidential election, said study coauthor Richard Han.

“Given the relevance of this research to the upcoming 2020 presidential elections, we felt it was important to release these findings promptly,” said Han, a professor in the Department of Computer Science.

Study coauthor Qin (Christine) Lv agreed.

“There has been a lot of research on IRA accounts and their behavior,” said Lv, an associate professor in computer science. “But we wanted to focus on the users who were targeted by the IRA accounts.”

Safeguarding democracy

The scientists, who hail from the Colorado Research Center for Democracy and Technology, say that the research delves into a still-relevant threat to the safety of internet users everywhere.

“We think that trying to make the internet safe for democracy is a key part of cybersafety,” Han said.

Combating the efforts of the IRA definitely fits the bill. According to data from Twitter, this St. Petersburg-based troll farm created 3,841 bot accounts on the social media platform in the lead-up to the 2016 elections—with handles like @MetsTheBest and @Patriot_archive. The accounts shared diverse content, ranging from anti-immigrant conspiracy theories to internet gaming chat to official-looking ads that urged Americans to “avoid the line” and vote for president by text message.

Researchers, however, still know little about what impact those tweets may have had on legitimate Twitter users, said study coauthor Shivakant Mishra.

“We said, ‘Sure, there are all these bots, and they are spreading misinformation,’” said Mishra, a professor of computer science. “But was there an actual change in the behavior of Twitter users?”

Tagging Hillary and Donald

To get closer to the answer, Mishra and his team, including graduate students Upasana Dutta, Rhett Hanscom and Jason Zhang, did a bit of detective work. The team pored over public Twitter data to come up with a list of roughly 5,000 Twitter accounts that the IRA had contacted between 2014 and 2016 by way of a trolling trifecta—a bot had retweeted their tweets, replied to them and mentioned them.
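As a rough picture of what that filtering step might look like, here is a minimal Python sketch. The record layout and field names are assumptions made for illustration; this is not the team's actual pipeline.

```python
# Minimal sketch of the "trolling trifecta" filter described above.
# The record format (target_user, interaction) is an assumed layout,
# not the study's real data schema.
from collections import defaultdict

TRIFECTA = {"retweet", "reply", "mention"}

def trifecta_targets(interactions):
    """Return user IDs that an IRA bot retweeted, replied to AND mentioned."""
    seen = defaultdict(set)  # target user ID -> interaction types observed
    for record in interactions:
        if record["interaction"] in TRIFECTA:
            seen[record["target_user"]].add(record["interaction"])
    return {user for user, kinds in seen.items() if kinds == TRIFECTA}

# Example: only user 42 received all three interaction types.
sample = [
    {"target_user": 42, "interaction": "retweet"},
    {"target_user": 42, "interaction": "reply"},
    {"target_user": 42, "interaction": "mention"},
    {"target_user": 7, "interaction": "mention"},
]
print(trifecta_targets(sample))  # -> {42}
```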

The team started its clock after that first contact. And the results were “surprising,” Mishra said.

After landing on the radar of the IRA, social media users began, on average, to post tweets that contained language with increasingly negative sentiments. One Twitter account, for example, declared: “Only the United States is able to bomb hospitals, excavators and weddings! Not Soldiers! Jackals!” Another wrote: “You’re bashing Trump for things we don’t care about. Culture has changed & Old Guard Repubs are out; they’re losers.”
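The article doesn't specify how the team scored sentiment, but one common off-the-shelf approach is NLTK's VADER analyzer, sketched below: it maps each tweet to a compound score between -1 (most negative) and +1 (most positive), and a drop in a user's average score after first contact would correspond to the shift described above.

```python
# Sketch of tweet sentiment scoring with NLTK's VADER analyzer.
# This is one standard tool, not necessarily the study's method.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

tweets = [
    "Only the United States is able to bomb hospitals! Jackals!",
    "Great debate tonight, looking forward to the next one.",
]
for tweet in tweets:
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    print(round(sia.polarity_scores(tweet)["compound"], 2), tweet)
```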

Many users also tagged Donald Trump’s and Hillary Clinton’s official Twitter accounts more often in their tweets. The team dug deeper, too: Roughly half of the users on its list also engaged with the IRA accounts in return—through their own mentions, retweets or replies.

Those responsive users exhibited more drastic shifts in behavior. They tagged @HillaryClinton, for example, about 55% more often after contact with the IRA—compared with just a 15% increase among users who had seemingly ignored the overtures from the Russian bots.
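To make those percentages concrete, here is a tiny worked example of the before-and-after comparison. The tweet counts are invented for illustration, and the calculation is just a standard percent change, not the study's methodology.

```python
# Hypothetical illustration of the reported 55% increase in mentions.
# All counts below are made up for the example.
def pct_change(before_rate, after_rate):
    """Percent change in mentions per tweet, before vs. after contact."""
    return 100.0 * (after_rate - before_rate) / before_rate

# A responsive user who mentioned @HillaryClinton in 20 of 1,000 tweets
# before first contact and 31 of 1,000 tweets afterward:
print(round(pct_change(20 / 1000, 31 / 1000)))  # -> 55
```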

The study, Mishra notes, shows correlation, not necessarily causation: the team can’t yet say for sure who or what was behind these changes.

“The next question is what caused the shift,” Mishra said. “There could be many factors, and we plan to continue this work to determine whether the change was due to contact with the IRA.” 

Still, the study highlights the importance of cybersafety to the democratic process, said Tamara Lehman, an assistant professor in the Department of Electrical, Computer and Energy Engineering. She urges concerned social media users to do their best to escape the internet’s “echo chamber.”

“You need to be aware of all the news, opinions and facts,” Lehman said. “You can’t just look at content that affirms your own beliefs. You need to try to figure out what the other side is saying.”