Beware the Bots

I searched SSRN, the Social Science Research Network, for scholarly articles on Brexit. There were only 25. Several of these are extremely interesting; I focus on one of them.

The Howard and Kollanyi paper looks at bots, automated scripts that generate messages. Their comments on political bots in the Brexit campaign are of interest.

“We tend to think of the Internet in general, and social networks in particular, as connecting human beings. And it’s true that the Internet permits us to connect and convene at an unprecedented scale. But the Internet is also famously mediated. We do not reach one another directly so much as through a layer of technology—an interface, a platform, a network—that someone else has designed. What this means in part is that some of the personalities we encounter in cyberspace are not who or what they purport to be. In fact, people are increasingly agreeing, arguing, and even flirting with fleeting bits of code known as bots.”

What this says to me is that communication on the major internet social platforms is polluted. Cyberspace is polluted. Perhaps internet entrepreneurs will see a profit opportunity in creating an unpolluted space for communication, one that blocks automated messages. Twitter did suspend some bots.

“It is no secret that citizens, journalists, and political leaders now make use of political bots—automated scripts that produce content and mimic real users. But it is not clear that average users can distinguish bot from human activity.”

If we can’t tell the difference between a human message and a scripted message, or if it’s costly to tell the difference, then we cannot rely on what we’re reading from these sources. The notion that social media represent democratic, public, or unbiased opinion has to be discarded as faulty.

“Political campaigns are complex exercises in the creation, transmission, and mutation of significant political symbols.[14] Increasingly, political campaigns automate their messaging and many citizens who use social media are not always able to evaluate the sources of a message or critically assess the forcefulness of an argument.”

Communication and information are essential to any form of government, including self-government or voluntary means of ordering society. If people’s use of social media for political purposes is corrupted by heavy, slanted pollution, these media lose a great deal of their supposed usefulness.

“Fake social media accounts now spread pro-governmental messages, beef up website follower numbers, and cause artificial trends. Political strategists worldwide are using bot-generated propaganda and misdirection.”

This is the kind of process used to foment color revolutions, manipulate press reports, and generate insurrections. It becomes a powerful tool in the hands of hidden groups and forces. Propaganda finds a new way to propagate. False flags find a new form.

“Research suggests that when digital media become an important part of civic engagement, social movements can generate immense amounts of content that cascade across public conversations—both across social media platforms and over international borders.[15] And increasingly, political elites have been learning and applying communication innovations by activists as tools for social control.”

The drive for power and control doesn’t evaporate because the internet and social platforms exist. It assumes new forms that utilize the new media.

“The measures of undecided voters suggest that 30 percent of UK voters will decide how to vote in the week before the election, and half of these will decide on polling day.[16] The pervasive use of bots over social media heightens the risk of massive cascades of misinformation at a time when voters will be thinking about their options and canvassing their social networks for the sentiments of friends and family.”

The prediction markets were wrong for a very good reason: there was huge uncertainty about a one-off event. There were no past precedents or patterns to go by, and the undecideds were large in number. If last-minute decisions are influenced by political bots that are biased or spread misinformation, the vote can go in unexpected directions. On a closely contested issue, a small tilt one way or the other could be caused by automated, self-generating political bots.

“Bots have been used by political actors around the world to attack opponents, choke off hashtags, and promote political platforms. During this sample period, however, we found that social media bots were used mostly for amplifying messages rather than argumentative engagement or even impression management.”

Politics, which is inherently dirty and involves dirty messaging, is not enhanced or cleaned up by social media polluted with bots.

How large is the pollution? “Robotic lobbying tactics have been deployed in many countries, including Russia, Mexico, China, Australia, the United Kingdom, the United States, Azerbaijan, Iran, Bahrain, South Korea, Turkey, Saudi Arabia, and Morocco. Indeed, experts estimate that bot traffic now makes up over 60 percent of all traffic online—up nearly 20 percent from two years prior.[17]”

“Bots operate on many sensitive political topics during close electoral contests in many advanced democracies.[3], [18], [19] Political algorithms have become a powerful means of political communication for ‘astroturfing’ movements — defined as the managed perception of grassroots support.[20] In this way bots have become a means of managing citizens.”

All citizens of democracies, advanced or not, are susceptible to being manipulated by the new technology. One must learn to filter out and avoid pollution of this kind. One must be skeptical of mass movements being engendered by these means.

“In this analysis, we find that bots generate a noticeable portion of all the traffic about the UK referendum, very little of it original.”

“We find that political bots have a small but strategic role in the referendum conversations: (1) the family of hashtags associated with the argument for leaving the EU dominates, (2) different perspectives on the issue utilize different levels of automation, and (3) less than 1 percent of sampled accounts generate almost a third of all the messages.”

A weakness of this study is that it did not examine individual messages within the hashtags to determine the direction of each message, leave or remain. It counted the messages under #Brexit and found that they were 51.8% of all messages on the issue, of which about 14.7% appear to be bot-generated; messages under #Brexit, however, can be pro or con. Messages under #StrongerIn made up only 14.6% of the total, of which about 15.1% were bot-generated. More detail is in their Table 2.
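As a rough back-of-envelope illustration of the scale involved, here is a minimal sketch that multiplies the cited hashtag shares by the cited bot fractions. The percentages are the paper’s figures; treating them as shares of one common sample of referendum messages is my assumption, not the authors’ own calculation.

```python
# Back-of-envelope arithmetic using the percentages cited above.
# Assumption: the hashtag shares and bot fractions refer to one common
# sample of referendum-related messages (cf. Howard and Kollanyi, Table 2).
hashtag_share = {"#Brexit": 0.518, "#StrongerIn": 0.146}  # share of all sampled messages
bot_fraction = {"#Brexit": 0.147, "#StrongerIn": 0.151}   # estimated bot-generated fraction

for tag in hashtag_share:
    implied = hashtag_share[tag] * bot_fraction[tag]
    print(f"{tag}: roughly {implied:.1%} of all sampled messages look bot-generated")

# Prints roughly 7.6% for #Brexit and 2.2% for #StrongerIn -- but because
# #Brexit messages can be pro- or anti-leave, these totals say nothing
# definitive about which side the bots favored.
```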

While these data do not permit a definitive conclusion about which side generated more messages, and while we do not know the effect of the messages on voting, we can see that bot pollution was a significant source of messages in the lead-up to the vote.

The Twitterization of public communication encourages short messages of 140 characters or fewer. Arguments cannot be conducted that way, but thought and opinion can be influenced. Beware the bots.


8:01 am on June 25, 2016