

Trolling the U.S.: Q&A on Russian Interference in the 2016 Presidential Election


It’s been more than two years since the 2016 presidential election, and the United States is still piecing together Russia’s propaganda-filled interference in U.S. political conversations on social media.

According to a February 2018 poll by The University of Texas at Austin and The Texas Tribune, 40 percent of Texans believe Russian interference played a role in the outcome of the 2016 election. In their most recent poll, 41 percent disapprove of how the investigation into Russian meddling is being handled, leaving many to ask, “How did this all happen?”

“From a basic democratic perspective, it is absolutely critical for us to know whether the entire premise of the country itself has been tampered with,” says UT Austin psychology postdoctoral researcher Ryan Boyd.

He and researchers from Carnegie Mellon University and Microsoft Research analyzed Facebook ads and Twitter troll accounts run by Russia’s Internet Research Agency (IRA) to determine how people with differing political ideologies were targeted and pitted against each other through this “largely unsophisticated and low-budget” operation. To learn more about the study and its findings, we asked Boyd the following questions:

Ryan Boyd is a postdoctoral fellow in the Department of Psychology at UT Austin.

Q. Why is it important to continue studying the interference in the 2016 election?

In the U.S., it is a core principle that we have the right to make informed decisions about our own government and collective destiny. By better understanding how interference occurred, we can understand how to best protect those core tenets. Aside from this — and from a pretty basic scientific perspective — it’s generally important to understand what is actually happening in the world around us. Whether or not you have a vested interest in election interference, knowing the truth is valuable in its own right.

Q. When did the IRA begin disseminating political propaganda on social media, and were they successful from the very beginning?

According to the House of Representatives Permanent Select Committee on Intelligence, the operations date back to 2014 and continued well past the 2016 presidential election. However, the degree to which they were actually successful is another question. Especially early on in their influence operation, the IRA appears to have done a lot of trial-and-error testing, and several early attempts, such as trying to play liberals and conservatives off of each other on LGBTQ issues, made little impact. It wasn’t until later that they had moderately more success by amplifying racial tensions in the U.S.

Q. Are there any themes that the IRA focused on more than others?

Probably the longest-running theme of the IRA ads was attempting to divide people on issues of civil rights. Regardless of the specific topic (e.g., law enforcement officers, Second Amendment rights, racial issues), they seem to have been trying to get people to believe that the “other side” were the bad guys, and that people who aren’t like you are trying to hurt you or threaten you in some way. The goal appears to have been to make people afraid and hateful towards anyone who is different, regardless of what those differences are.

"They seem to have been trying to get people to believe that the “other side” were the bad guys, and that people who aren’t like you are trying to hurt you or threaten you."

Ryan Boyd

Q. Did Russian ads target one political party over another?

For the most part, ads seem to have targeted both ends of the political spectrum. Some ads were designed to make liberals feel like all conservatives are violent racists, and others were designed to make conservatives feel like all liberals are trying to take their rights away.

Of course, the reality is that neither of these is true. The IRA ads are a perfect example of “divide and conquer” tactics — subtly manipulating people into focusing on their differences instead of their common ground, resulting in even less focus on the big picture.

Q. How can you tell a real post from an IRA-manipulated post?

A lot of what we’re finding is that the IRA posts aren’t super easy to spot. If you were to approach any given social media ad or user post, it might just look like a person with a strong opinion. By using some fairly sophisticated techniques, we can get a more zoomed-out understanding of what was really going on in a way that is pretty difficult for the average person (or even an expert) to see by just looking at bits and pieces up close.
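To make that "zoomed-out" idea concrete, here is a minimal, hypothetical sketch in Python — not the study's actual pipeline — of how analysts can aggregate simple linguistic features over many posts per account, so that account-level patterns emerge that no single post reveals. The feature set, account names, and example posts are all invented for illustration.

```python
# Hypothetical sketch, not the study's actual pipeline: judging a single post
# is hard, but aggregating simple linguistic features over many posts per
# account surfaces patterns that no individual post reveals.
import re
from collections import defaultdict
from statistics import mean

def post_features(text):
    """Toy per-post features: token count and first-person-pronoun rate."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"n_words": 0, "first_person_rate": 0.0}
    first_person = {"i", "me", "my", "mine", "we", "us", "our"}
    rate = sum(w in first_person for w in words) / len(words)
    return {"n_words": len(words), "first_person_rate": rate}

def account_profile(posts):
    """Average post-level features into a single account-level profile."""
    collected = defaultdict(list)
    for post in posts:
        for name, value in post_features(post).items():
            collected[name].append(value)
    return {name: mean(values) for name, values in collected.items()}

# Invented example accounts: the unit of analysis is the account, not the post.
accounts = {
    "account_a": ["I think we should talk about this.", "My view is simple."],
    "account_b": ["They are coming for your rights!", "The other side hates you."],
}
for name, posts in accounts.items():
    print(name, account_profile(posts))
```

In practice, researchers compare such account-level profiles across thousands of accounts; any one post from either account above would look unremarkable on its own.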

Q. Was the IRA creating its own content or simply elevating existing tweets and Facebook posts?

The IRA content appears to be largely homegrown. Linguistic analyses of their influence operations suggest a distinctly non-native pattern in the language that they were using.
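As a toy illustration of the kind of signal such analyses can pick up — an assumption about the general technique, not the authors' actual method — consider that Russian has no articles, so dropped articles ("went to store") are a classic marker of Russian-native English. A crude detector might compare article rates against a native-English baseline. The sample texts below are invented.

```python
# Toy stylometric sketch, assuming one commonly cited non-native signal:
# Russian has no articles, so dropped articles are a classic marker of
# Russian-L1 English. This illustrates the general idea only; it is not
# the method used in the study.
import re

ARTICLES = {"a", "an", "the"}

def article_rate(text):
    """Fraction of tokens that are the English articles a/an/the."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in ARTICLES for w in words) / max(len(words), 1)

# Invented sample texts for illustration only.
native_sample = "The senator gave a speech about the economy and the budget."
suspect_sample = "Senator gave speech about economy and budget situation."

print(f"native sample article rate:  {article_rate(native_sample):.2f}")
print(f"suspect sample article rate: {article_rate(suspect_sample):.2f}")
```

Real analyses combine many such features (function words, punctuation habits, word order) across large samples rather than relying on any single cue.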

Q. Why do you believe the Russians failed to cover their tracks?

I’d guess that it’s a simple cost/benefit calculation. If the goal was to cause as much disruption as possible, it could be accomplished with a fairly simple approach. After all, some of these operations went on for more than four years, and we’re still sitting around trying to put the pieces together.

Q. What should future research in this area look at?

I think one of the most critical things will be to integrate all of the work being done in this area into a unified understanding of what has taken place. The work we have done is just a tiny part of the bigger picture, and the public is still a long way from having any kind of objective barometer for cyberwarfare and influence operations. It’s difficult to be absolutely sure of the motivations behind information we’re exposed to in any venue: online, in traditional media, and even in face-to-face interactions. More research in this area could help us be more accurate and informed in our vigilance against misinformation and external manipulation.