Stumbled across an HTML article featuring a paper by Samuel C. Woolley, who is a Ph.D. student in the Department of Communication at the University of Washington (UW).
The paper is all the more interesting to read and reflect upon given that it was written in April 2016, well before the recent US election that was marred by the very issues the paper discusses. On first impression that election may have opened a new era in politics, one dominated by digital media and the latest forms of digital communication. Digital technologies are becoming widespread and mainstream, and political interests and politicians obviously see this as something not to miss or overlook. But, as the election in the US demonstrated, the relative immaturity in the use of these technologies and the openness inherent in technological communities, coupled with the temptation to manipulate the perceptions of many uninformed voters, all conspire towards unpleasant outcomes and a culture where authenticity and true beliefs backed by true facts are relegated to the locker room…
This makes the wider readership of this paper all the more relevant. I must say that I am posting this without any partisan position whatsoever. This is a completely neutral critical review, so nothing written here should be interpreted otherwise.
Over the last several years political actors worldwide have begun harnessing the digital power of social bots — software programs designed to mimic human social media users on platforms like Facebook, Twitter, and Reddit. Increasingly, politicians, militaries, and government-contracted firms use these automated actors in online attempts to manipulate public opinion and disrupt organizational communication. Politicized social bots — here ‘political bots’ — are used to massively boost politicians’ follower levels on social media sites in attempts to generate false impressions of popularity. They are programmed to actively and automatically flood news streams with spam during political crises, elections, and conflicts in order to interrupt the efforts of activists and political dissidents who publicize and organize online. They are used by regimes to send out sophisticated computational propaganda. This paper conducts a content analysis of available media articles on political bots in order to build an event dataset of global political bot deployment that codes for usage, capability, and history. This information is then analyzed, generating a global outline of this phenomenon. This outline seeks to explain the variety of political bot-oriented strategies and presents details crucial to building understandings of these automated software actors in the humanities, social and computer sciences.
From the very first paragraph of the paper we get a sense of how much of today's online traffic is generated by automated software. I should also state here that all of my social media profiles, Facebook pages, Twitter and LinkedIn accounts are maintained entirely by me, and I have never used a bot on them. And probably never will. On the other hand, I do not want to convey the belief that all automated software for text and speech recognition is an evildoer; there may be good applications for this kind of technology. The problem always lies with humans and their ambiguities and irrepressible tendency for abuse and misbehavior.
The following paragraphs make the point clear as to what this is about:
In August 2014, Twitter filed a U.S. Securities and Exchange Commission report revealing that over 23 million active user accounts on the company’s social networking site were actually social bots — a particular type of automated software agent written to gather information, make decisions, and both interact with and imitate real users online. Security experts believe that bots generate more than 55 percent of all traffic online (Zeifman, 2014). The ubiquity of these programs on social platforms, and throughout the Web at large, is of pressing concern to academic and civil communities interested in understanding how digital automation effects particular aspects of culture, society, and — most central to this study — politics.
Social bots are distinct from more general Web bot software. The average bot is used for information gathering. These ‘spiders’ and ‘scrapers’ dominate many mundane facets of the Internet. They aid in the generation of personalized online news preferences and advertisements. They facilitate the organization of search engines and help maintain Web pages. This variety of bot doesn’t engage in discourse with human users. These bots can, however, be used for political purposes. Governments, corporations, and other actors use monitoring bots in intelligence gathering, social listening and scanning for copyright infringement (Desouza, 2001; Barford and Yegneswaran, 2007; Stinson and Mitchell, 2007). The key feature of this variety of bots is not where they live, i.e., on a particular platform, but what they do, i.e., gather and sort information.
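To make the "spider/scraper" variety of bot concrete, here is a minimal sketch of the information-gathering step such software repeats per page, using only Python's standard library (the sample page and class name are my own illustration, not from the paper):

```python
from html.parser import HTMLParser

class LinkSpider(HTMLParser):
    """Minimal 'spider': collects outbound links from a page —
    the information-gathering step a crawler repeats for each URL.
    Note it never engages in discourse with human users."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A real crawler would fetch each page over HTTP; here we parse a sample in-memory.
sample_page = '<html><body><a href="/news">News</a> <a href="/about">About</a></body></html>'
spider = LinkSpider()
spider.feed(sample_page)
print(spider.links)  # ['/news', '/about']
```

The point of the sketch is the contrast drawn in the quote: this kind of bot gathers and sorts information, whereas the social bots the paper studies imitate and interact with human users.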
Further down the author elucidates the recent trend in the use of social bots. Instead of evolving exclusively within human-machine interaction, bots are being used for data mining and for manipulating complex information in such a way as to distort beliefs and truth:
The ways in which this variety of automated social software are being deployed, and the groups behind deployment, are changing. Computer science researchers have found that social bots can be used beyond simple human-bot interaction and towards large scale mining of users’ data and actual manipulation of public opinion on sites like Facebook and Twitter (Boshmaf, et al., 2011; Hwang, et al., 2012).
Until roughly six years ago, technologically adept marketers used social bots to send blatant spam in the form of automatically proliferated social media advertising content (Chu, et al., 2010). A growing collection of recent research reveals, however, that political actors worldwide are beginning to make use of these automated software programs in subtle attempts to manipulate relationships and opinion online (Boshmaf, et al., 2011; Ratkiewicz, et al., 2011a; 2011b; Metaxas and Mustafuraaj, 2012; Alexander, 2015; Abokhodair, et al., 2015). Politicians now emulate the popular twitter tactic of purchasing massive amounts of bots to significantly boost follower numbers (Chu, et al., 2012). Militaries, state contracted firms, and elected officials use political bots to spread propaganda and flood newsfeeds with political spam (Cook, et al., 2014; Forelle, et al., 2015).
Political bots are among the latest, and most unique, technological advances situated at the intersection of politics and digital strategy. Numerous news outlets worldwide have covered government and military bot deployments, paying special attention to the rapid rise in usage of such software. Journalists, bloggers, and citizen reporters have worked to explain how governments and those vying for power have used the software in specific contexts. According to media reports political bots have been deployed in several countries: Argentina (Rueda, 2012), Australia (Peel, 2014), Azerbaijan (Pearce, 2013), Bahrain (York, 2011), China (Krebs, 2011), Iran (York, 2011), Italy (Vogt, 2012), Mexico (Orcutt, 2012), Morocco (York, 2011), Russia (Krebs, 2011), South Korea (Sang Hung, 2013), Saudi Arabia (Freedom House, 2013), Turkey (Poyrazlar, 2014), the United Kingdom (Downes, 2012), the United States (Coldeway, 2012), and Venezuela (Howard, 2014) among them. The New York Times (Urbina, 2013) and New Yorker (Dubbin, 2013) have published comprehensive articles about the rise of social bot technology, giving very mainstream exposure to the important new political tool.
A case where innovation is used not to the benefit of the economy or wider society but to its detriment:
Many computer scientists and policy makers treat bot-generated traffic as a nuisance to be detected and managed. System administrators at companies like Twitter work to simply shut down accounts that appear to be running via automatic scripts. These approaches are too simplistic and avoid focusing on the larger, and systemic, problems presented by political bot software. Political bots suppress free expression and civic innovation through the demobilization of activist groups and the suffocation of democratic free speech. They subtly work to manipulate public opinion by giving false impressions of candidate popularity, regime strength and international relations. The disruption to public life caused by political bots is enhanced by innovations in parallel computation and innovations to algorithm construction.
As such, there should be no place for unthinking reactions from regulators and moderators; there should instead be an effort to deepen our understanding of these technologies. Technological developments are often inevitable, and they enable efficiency or promote more productive activities. But their misuse must be tackled, otherwise that value is diminished or destroyed:
Political bots must, therefore, be better understood for the sake of free speech and the future of digitally mediated civic engagement. The information that exists on political bots is disjointed and often isolated to specific, country or election-oriented events. This paper helps to comparatively plot out the evolutionary trajectory of this new medium of interest in the fields of computer mediated communication, political communication, information science, science, technology, and society (STS), and computer science.
Many studies in social science map the relationship of contemporary politics and new and evolving technologies by analyzing media reports about events and tools in question (Earl, et al., 2004; Krippendorff, 2004; Edwards, 2013; Strange, et al., 2013). This paper takes up this method, conducting a content analysis of credible news articles on political bot usage in order to construct a global event dataset that codes for political bot location, proliferation, and strategy. From this information, a working description of both political bot use and state-specific tactics is presented.
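The content-analysis method described above can be sketched as a small event-coding exercise: each credible media report becomes a record coded for location and bot strategy, and the records are then tallied. The entries below are illustrative stand-ins drawn from countries and tactics mentioned in this review; the paper's actual dataset and coding scheme are richer:

```python
from collections import Counter

# Each media report is coded as one event record (location + strategy).
# Values are illustrative, in the spirit of the paper's coding scheme.
events = [
    {"country": "United Kingdom", "strategy": "follower padding"},
    {"country": "United States",  "strategy": "follower padding"},
    {"country": "Syria",          "strategy": "pro-government messaging"},
    {"country": "Venezuela",      "strategy": "opinion manipulation"},
    {"country": "Bahrain",        "strategy": "demobilization"},
]

# Aggregating coded events yields the kind of cross-country outline
# of tactics the paper presents.
strategy_counts = Counter(e["strategy"] for e in events)
print(strategy_counts.most_common(1))  # [('follower padding', 2)]
```

The value of this method is that scattered, country-specific reporting becomes a single comparable dataset.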
Findings in the paper
Financial and economic interests get in the way of all of these developments. Social media bots were (and may still be…) bought by powerful interests with the intention of using them to gather influence and public visibility. This is a classic case of influence peddling, a criminal behavior in all normal circumstances, which within a lightly regulated and poorly understood technology turns into an opportunity for unaccountable misconduct. Law enforcement is required, with officials properly qualified to deal with this:
Findings and analysis
There is a cohesive nature to how authors report on the ways political bots were used from country to country. Governments and other political actors most generally deployed political bots during elections or moments of distinct, and country-specific, political conversation or crisis. It is worth noting that some articles also spoke of instances in which political bots were used for preemptive online security purposes. The Syrian government, for example, has reportedly used bots to generate pro-regime propaganda targeted at both in state and external targets on Twitter during the ongoing revolution (Abokhodair, et al., 2015). Venezuelan political bots described focus solely on attempts to manipulate public opinion in state (Forelle, et al., 2015). Several journalists reported that politicians in Australia, Italy, the U.K., and U.S. bought fake, bot-driven, social media followers in attempts to seem more popular to constituents.
The distinct ways in which political bots have been used varies from country to country and political instance to political instance. During elections political bots have been used to demobilize an opposing party’s followers. In this case, the deployer sends out Twitter “bombs:” barrages of tweets from a multitude of bot-driven accounts. These tweets co-opt tags commonly used by supporters of the opposing party and re-tweet them thousands of times in an attempt to prevent organization amongst detractors. For instance, if a political actor notices that their opponent’s supporters consistently use the tag #freedomofspeech in organizational messages, then that actor might make an army of bots to prolifically re-tweet this specific tag. The effect of this is that the opponent’s supporters have a very difficult time searching common tags in attempts to organize and communicate with their fellows.
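Seen from the defender's side, the "Twitter bomb" tactic in the quote can be sketched as a toy burst detector: a tag becomes suspicious when many retweets of it arrive from very young accounts. The field names and threshold below are hypothetical, not from the paper or any real API:

```python
from collections import defaultdict

def flag_flooded_tags(tweets, burst_threshold=3):
    """Toy detector for hashtag flooding: a tag is flagged when
    at least `burst_threshold` retweets of it come from accounts
    created within the last week (account_age_days is a hypothetical
    field a real system would derive from platform data)."""
    per_tag = defaultdict(int)
    for t in tweets:
        if t["is_retweet"] and t["account_age_days"] < 7:
            per_tag[t["tag"]] += 1
    return {tag for tag, count in per_tag.items() if count >= burst_threshold}

tweets = [
    {"tag": "#freedomofspeech", "is_retweet": True,  "account_age_days": 1},
    {"tag": "#freedomofspeech", "is_retweet": True,  "account_age_days": 2},
    {"tag": "#freedomofspeech", "is_retweet": True,  "account_age_days": 0},
    {"tag": "#election",        "is_retweet": False, "account_age_days": 900},
]
print(flag_flooded_tags(tweets))  # {'#freedomofspeech'}
```

Real detection is of course far harder than this sketch, which is precisely why the paper argues that shutting down obviously scripted accounts is too simplistic a response.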
What we can also see here are several instances of the current state of the art in bot (automated software) and human interactions, and how they are used in complex social settings. This is undoubtedly a historical treasure trove for future analysis. I am reminded here of the classic 'positive outcomes from a negative development' seen in many other innovative technologies throughout history; but that only holds if we are able to fully understand what is going on right now:
Political bots have also been used during elections to pad politicians’ social media follower lists. In this case, Politicians buy bot followers — which mimic real human users — in attempts to look more popular or relevant. There are several prominent examples, particularly in Western states. According to Downes (2012), U.K. political candidate Lee Jasper used bots to boost the number of his Twitter followers in order “to give a false impression of the popularity of his campaign.” Coldeway (2012) details a similar bid by former U.S. presidential candidate Mitt Romney in which political bots were used for social media follower padding. According to Coldeway, “[in] over 24 hours starting July 21, the presumptive Republican nominee acquired nearly 117,000 followers — an increase of about 17 percent.” This rapid and huge rise in supporters was immediately noted by bloggers. Opponents attributed the boost to bots deployed by campaign-oriented reputation management or marketing firms. Supporters of the Romney campaign said the bot-driven inflation came from detractors in a bid to discredit the candidate.
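A quick sanity check on Coldeway's figures: if roughly 117,000 new followers amounted to a 17 percent increase, the implied starting base can be recovered directly (this is simple arithmetic on the quoted numbers, not a figure reported in the article):

```python
# If new_followers is 17% of the prior base, then base = new_followers / 0.17.
new_followers = 117_000
growth_rate = 0.17
implied_base = round(new_followers / growth_rate)
print(implied_base)  # 688235, i.e., roughly 690,000 followers before the spike
```

So the padding episode sat on top of an already large account, which is part of why a 17 percent jump in a single day stood out to observers.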
The ways political bots have been used in other instances of civil disobedience and security crises are strikingly similar to the ways they have been used during elections. York’s (2011) Guardian article notes that certain governments being protested during Arab Spring movements used political bots in combinations of the previously mentioned ways. Not only did governments in Syria, Bahrain, Iran, and Morocco use bots to prevent organization by Twitter bombing the opposition with spam, they also used them to send out masses of pro-government tweets.
“Astroturfing is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participant(s).” From “Astroturfing,” at https://en.wikipedia.org/wiki/Astroturfing, accessed 8 March 2016.
An interrogation of this table suggests that government actors in countries with a longer history of democracy — Australia, Italy, the U.S., and U.K. — are more likely to only, or exclusively, use bots for social media follower padding. Countries that polity rates as mostly democratic, such as Argentina, Mexico or South Korea, host actors that also use bots for demobilization of opposition and to spread pro-government or candidate messages. Actors in countries ranked as more authoritarian, Russia, China, and Venezuela, also engage in this type of political bot usage. Firmly authoritarian countries, Azerbaijan, Bahrain, and Saudi Arabia, tend not to use political bots for social media follower number padding. Actors within, or related to, these governments tend to use bots to send out pro-government messages and demobilize opposition.
The author of this paper concludes with some important remarks. Of most significance is the recognition of the global, international nature of this phenomenon. Other points worth mentioning are the differences in usage and, obviously, in the adequacy of the response across different countries or jurisdictions, and the numerous possible research avenues in the field, encompassing not only the technological details but also the legal, social, economic and political dimensions that future research efforts may engage. Important, timely and highly recommended reading.
This research project demonstrates how media articles frame the ways in which bots impact the social systems, and particular countries, in which they are deployed. It details how specific news accounts of this computational propaganda, proliferated by political actors using political bots, enables control globally. The ways in which particular state oriented political actors make use of political bots are explored herein.
There are many potential avenues for continued research in this arena. Plans for further study might examine how certain cases of political bot usage in one country may have affected implementation and usage in other countries. Another project could lie in the building of a prediction model of bot usage in upcoming international elections. Each year sees numerous moderately or highly contested international elections. Several of these take place in countries with authoritarian regimes and emerging democracies. It would be interesting to work towards predicting political bot usage in these upcoming elections and determine what potential impact such use has on electoral outcomes. Continued study of political bots is, undoubtedly, a rich and necessary area for continued academic research.
body text image: Got Bad Reviews Online? Don’t Even Think About Astroturfing
featured image: The Beginner’s Guide To Social Media Chat Bots