It’s not easy to spot disinformation on Twitter

Here’s what we learned from 8 political ‘astroturfing’ campaigns.

Dr. Sebastian Stier and his co-authors Franziska Keller, David Schoch, and JungHwan Yang examine so-called “astroturfing” campaigns, which aim to mislead the public by creating the false impression that there is genuine grass-roots support for, or opposition to, a particular group or policy. These campaigns do not rely solely on automated “bots” or bot accounts; it is humans, not computer programs or AI, who are behind the “troll farms.”



DOI: 10.34879/gesisblog.2020.2

Facebook recently announced it had dismantled a number of alleged disinformation campaigns, including Russian troll accounts targeting Democratic presidential candidates. Over the summer, Twitter and Facebook suspended thousands of accounts they alleged to be spreading Chinese disinformation about Hong Kong protesters.

Disinformation campaigns based in Egypt and the United Arab Emirates used fake accounts on several platforms this year to support authoritarian regimes across nearly a dozen Middle East and African countries. And the Mueller report describes in detail how Russian trolls impersonated right-wing agitators and Black Lives Matter activists to sow discord in the 2016 U.S. presidential election.

How can you distinguish real netizens from participants in a hidden influence campaign on Twitter? It’s not easy.

Hidden campaigns leave traces

We examined eight hidden propaganda campaigns worldwide, comprising over 20,000 individual accounts. We looked at Russia’s interference in the 2016 presidential election, and the South Korean secret service’s attempt to influence that country’s 2012 presidential election. And we looked at further examples associated with Russia, China, Venezuela, Catalonia and Iran.

All of these were “astroturfing” campaigns: their goal is to mislead the public by creating the false impression that there is genuine grass-roots support for, or opposition to, a particular group or policy.

We found that these disinformation campaigns don’t solely rely on automated “bots” or bot accounts — contrary to popular media stories. Only a small fraction of the 20,000 accounts we reviewed (between 0 and 18 percent, depending on the campaign) are “bot accounts” that posted more than 50 tweets per day on a regular basis — a threshold some researchers use to distinguish automated accounts from bona fide individual users.
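To make that heuristic concrete, here is a minimal sketch in Python (pandas) of how such a daily-volume threshold can be applied. It assumes a table of tweets with illustrative column names account_id and created_at; the 50-tweets-per-day cutoff follows the threshold mentioned above, while the “regular basis” criteria are assumptions made for illustration rather than the exact procedure used in the study.

```python
import pandas as pd

def flag_high_volume_accounts(tweets: pd.DataFrame,
                              daily_threshold: int = 50,
                              min_active_days: int = 5) -> list:
    """Return IDs of accounts that exceed the daily tweet threshold on most active days."""
    tweets = tweets.copy()
    tweets["day"] = pd.to_datetime(tweets["created_at"]).dt.date
    daily_counts = (tweets.groupby(["account_id", "day"])
                          .size()
                          .rename("n_tweets")
                          .reset_index())
    # Share of an account's active days on which it exceeded the threshold.
    over = daily_counts["n_tweets"] > daily_threshold
    share_over = daily_counts.assign(over=over).groupby("account_id")["over"].mean()
    active_days = daily_counts.groupby("account_id")["day"].nunique()
    # "On a regular basis": above the threshold on at least half of all active days,
    # with a minimum number of active days (both cutoffs are illustrative assumptions).
    flagged = share_over[(share_over >= 0.5) & (active_days >= min_active_days)]
    return flagged.index.tolist()
```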

This isn’t a big surprise. Insiders have long reported that humans, not computer programs or AI, are behind these “troll farms.” Court records published in the case against the South Korean National Intelligence Service (NIS) mentioned above paint a similar picture of their internal organization.

Trolls behave like you and me

That means that examining whether an account tweets a lot of spamlike content or behaves like a “robot” will detect only a fraction of the astroturfing accounts. The key to finding many of the rest lies in the fact that humans are paid to post messages in these astroturfing accounts.

By looking at digital traces left by astroturfing accounts, we found unique patterns that reflect what social scientists call the principal-agent problem. The people behind astroturfing accounts are not intrinsically motivated participants of a genuine grass-roots campaign, so they will try to minimize their workload by taking shortcuts just to finish assigned tasks. Astroturfers (the “agents”) will find all kinds of ways to make their lives easier instead of doing what their boss (the “principal”) would like them to do.

In addition, the agents’ actions may reflect the timing of instructions they got from the principal to achieve specific campaign goals at key campaign moments. True grass-roots activists, in contrast, usually react in a more organic fashion, with more variation in message contents and timing.

The secret service agents in South Korea, for instance, were instructed to cover a specific agenda at the beginning of every workday. They apparently treated their work as a 9-to-5 job: most of their tweets were posted during office hours on weekdays, while ordinary Koreans tweeted more in the evenings and on weekends.

The same is true for other astroturfing campaigns. Other research has noted, for instance, that Russian trolls tend to be more active during office hours, St. Petersburg time.
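A timing comparison of this kind can be computed directly from tweet timestamps. The following sketch is a simplified illustration rather than the study's exact procedure: it assumes a pandas table with illustrative columns created_at (UTC timestamps) and group (for example, “campaign” versus “baseline”) and aggregates the share of tweets by weekday and hour in a chosen time zone.

```python
import pandas as pd

def hourly_profile(tweets: pd.DataFrame, tz: str = "Asia/Seoul") -> pd.DataFrame:
    """Share of each group's tweets per weekday and hour in the given time zone."""
    ts = pd.to_datetime(tweets["created_at"], utc=True).dt.tz_convert(tz)
    counts = (tweets.assign(weekday=ts.dt.dayofweek, hour=ts.dt.hour)
                    .groupby(["group", "weekday", "hour"])
                    .size()
                    .rename("n_tweets")
                    .reset_index())
    # Normalize within each group so differently sized groups are comparable.
    counts["share"] = counts.groupby("group")["n_tweets"].transform(lambda x: x / x.sum())
    return counts
```

Plotted side by side, a campaign run as a 9-to-5 job concentrates its share in weekday office hours, while an organic user population peaks in the evenings and on weekends.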

Of course, not every account that tweets during office hours is a troll account. So what other hints are there? As we show in our research, message coordination is another clue. Any information campaign — even genuine grass-roots movements — will feature large numbers of accounts that post similar tweets. But because astroturfers receive centralized instructions and try to avoid having to work too hard, they are much more likely to post similar or even identical tweets within a very short time frame.

To detect and illustrate this phenomenon, we traced networks of coordinated messaging, connecting accounts that tweet or retweet the same content within a one-minute time window or that frequently retweet each other.
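The following sketch illustrates one way to build such a co-tweet network in Python, using pandas and networkx. The column names (account_id, text, created_at) and the exact-text matching rule are simplifying assumptions made for illustration; the one-minute window follows the description above.

```python
from itertools import combinations

import networkx as nx
import pandas as pd

def cotweet_network(tweets: pd.DataFrame, window_seconds: int = 60) -> nx.Graph:
    """Link accounts that post identical text within the given time window."""
    g = nx.Graph()
    tweets = tweets.assign(created_at=pd.to_datetime(tweets["created_at"]))
    # Group tweets with identical text, then connect accounts whose postings
    # of that text fall within the time window of each other.
    for _, same_text in tweets.groupby("text"):
        rows = same_text.sort_values("created_at").to_dict("records")
        for a, b in combinations(rows, 2):
            if a["account_id"] == b["account_id"]:
                continue
            if (b["created_at"] - a["created_at"]).total_seconds() <= window_seconds:
                # Edge weight counts how often the pair co-tweeted.
                u, v = a["account_id"], b["account_id"]
                if g.has_edge(u, v):
                    g[u][v]["weight"] += 1
                else:
                    g.add_edge(u, v, weight=1)
    return g
```

In the resulting graph, tightly knit clusters of heavily weighted edges are the co-tweeting groups described below; a comparable sample of ordinary users produces hardly any such edges.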

Coordinated messages are a giveaway

The differences between regular users and astroturfers are stark. In the South Korean election case, there were 153,632 instances of account pairs posting the same original tweet within one minute of each other, while this kind of co-tweeting did not occur even once in a comparison group of ordinary Korean-speaking users of similar size.

The co-tweet networks in the other seven campaigns were similarly concentrated. The co-tweet network of the Russian Internet Research Agency (IRA) campaign during the 2016 presidential election followed a familiar pattern. Researchers have shown that the IRA campaign targeted both ends of the political spectrum and therefore posted very different messages. But our research showed that left-wing trolls impersonating Black Lives Matter activists and the right-wing accounts posted 1,661 identical tweets.

Here’s why: It turns out that some of these agents found common ground during the #OscarsSoWhite debate. Even though they were supposed to impersonate very different characters, they ended up copy-pasting similar messages.

Our research framework could thus help social media platforms identify astroturfing campaigns more efficiently. It’s not clear whether all these companies systematically look for this type of coordinated messaging. Twitter, for instance, released the Iranian campaign as five different data sets at different times, even though accounts in different data sets tweeted the same messages. That seems to indicate that Twitter did not look for co-tweeting as we define it.

So how do these findings help identify Russian trolls on Twitter? Looking at accounts individually may not reveal much; the real hints appear when you examine a group of suspicious accounts together.

Are there other accounts following or retweeting the suspected troll accounts? Or accounts that tweet the exact same tweet at the same time? If there’s a group of suspect accounts, how similar are they, and do they co-tweet? These are clear signs of an astroturfing campaign.

Originally published as “It’s not easy to spot disinformation on Twitter. Here’s what we learned from 8 political ‘astroturfing’ campaigns.” in The Monkey Cage at The Washington Post on Oct 28, 2019. Reprinted with permission. 

Original article: Franziska B. Keller, David Schoch, Sebastian Stier & JungHwan Yang (2020) Political Astroturfing on Twitter: How to Coordinate a Disinformation Campaign, Political Communication, 37:2, 256-280, DOI: 10.1080/10584609.2019.1661888
