Information warfare, disinformation and electoral fraud
Information warfare is a term used to describe the collection, distribution, modification, disruption, interference with, corruption, and degradation of information in order to gain some advantage over an adversary (Marlatt, 2008; Prier, 2017). The aim is to use and communicate information in a way that alters the target's perception of an issue or event in order to achieve a desired outcome (Wagnsson and Hellman, 2018). Two tactics used in information warfare are disinformation (i.e., the deliberate spreading of false information) and fake news (i.e., propaganda and disinformation masquerading as real news). It is important to note that the latter term is not well defined and can be misused (see the box below on the Joint Declaration on Freedom of Expression and "Fake News", Disinformation and Propaganda).
Declining levels of trust have contributed to the rapid spread and consumption of fake news by the public (Morgan, 2018, p. 39). Disinformation and fake news are spread on social media platforms and in mainstream and non-mainstream media (Prier, 2017, p. 52). Social media platforms enable disinformation to spread faster and to a larger audience than other online platforms; depending on the platform, this can occur in real time (e.g., Twitter). Automated bot accounts assist in this endeavour by spreading information at a faster and more frequent rate than individual users can. For example, ISIS developed an app (the Dawn of Glad Tidings) that members and supporters would download to their mobile devices; the app, among other things, was designed to access users' Twitter accounts and tweet on their behalf (Berger, 2014). Human supporters, alongside bots, further amplify disinformation and fake news online (Prier, 2017, p. 52). Selective, repetitive, and frequent exposure to disinformation and fake news helps shape, reinforce, and confirm what is being communicated as valid. Disinformation and fake news are believed to have influenced voter behaviour and, ultimately, the outcome of elections.
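To make the frequency advantage of automated accounts concrete, the following minimal Python sketch flags an account whose posting rate exceeds a fixed threshold, one simple heuristic used in bot-detection research. The threshold, function names, and data format here are illustrative assumptions, not the detection logic of any actual platform.

```python
from datetime import datetime, timedelta

# Illustrative only: a toy frequency heuristic for bot-like posting.
# The threshold is an arbitrary assumption, not a value used by any
# real platform's detection systems.
BOT_RATE_THRESHOLD = 30.0  # posts per hour

def posting_rate(timestamps: list[datetime]) -> float:
    """Average posts per hour over the observed window."""
    if len(timestamps) < 2:
        return 0.0
    window = max(timestamps) - min(timestamps)
    hours = window.total_seconds() / 3600 or 1 / 3600  # avoid div-by-zero
    return len(timestamps) / hours

def looks_automated(timestamps: list[datetime]) -> bool:
    return posting_rate(timestamps) > BOT_RATE_THRESHOLD

# Example: 120 posts one minute apart (~60 posts/hour) gets flagged,
# a rate a human user would rarely sustain.
start = datetime(2024, 1, 1)
posts = [start + timedelta(minutes=m) for m in range(120)]
print(posting_rate(posts), looks_automated(posts))
```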
Election Infrastructure as Critical Infrastructure
Following accusations that foreign agents used ICT to influence and interfere with the 2016 US elections, the US designated election infrastructure as part of its critical infrastructure. Election infrastructure includes "storage facilities, polling places, and centralized vote tabulations locations used to support the election process, and information and communications technology to include voter registration databases, voting machines, and other systems to manage the election process and report and display results on behalf of state and local governments" (US Department of Homeland Security, 2017; Abdollah, 2017).
Electoral fraud "can be defined as any purposeful action taken to tamper with electoral activities and election-related materials in order to affect the results of an election, which may interfere with or thwart the will of the voters" (López-Pintor, 2010, p. 9). An example of electoral fraud involves gaining unauthorized access to voting machines and altering voting results. It is important to note that:
There is no widely accepted definition of election fraud because the applied understanding of fraud depends on the context: what is perceived as fraudulent manipulation of the electoral process differs over time and from country to country. Even within academia, the theoretical definitions of fraud have yet to be united across the fields of international and domestic law, comparative and American political science, and election administration in developed and developing countries (Alvarez, Hall, and Hyde, 2008, pp. 1-2).
Some countries have laws that criminalize the distribution of false information that could influence voter behaviour and election results, as well as other forms of electoral fraud (e.g., France, the United Kingdom, and various states in the United States) (Daniels, 2010; Alouane, 2018). Other countries with laws criminalizing false information and fake news have used these laws to prosecute journalists and other individuals who criticize or otherwise challenge the government (Reuters, 2018; Gathright, 2018; Priday, 2018). Despite these regulations, many political groups and actors continue to push the envelope in attempts to manipulate public opinion, often taking advantage of loopholes or omissions in the legislation. Moreover, politically motivated groups have developed mechanisms to influence public opinion by exploiting the features of various websites, such as the "like," "heart," or "upvote" functions of social media services, with the intent of popularizing certain ideologically loaded news items (a toy illustration of why this works follows below). These actions, often referred to as "astroturfing" (Zhang, Carpenter, and Ko, 2013, p. 3), do not necessarily entail publishing misleading or libellous information but rather focus on manipulating users' newsfeeds (Popham, 2018).
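Engagement manipulation is effective because many platforms rank content with time-decayed engagement scores, so a coordinated burst of early votes can durably boost an item's visibility. The Python sketch below is loosely modelled on publicly described "hot" ranking schemes; all constants are illustrative assumptions and do not reflect any real platform's algorithm.

```python
import math
from datetime import datetime, timezone

# Illustrative only: a toy time-decayed engagement score. The epoch
# and the 45000-second constant are arbitrary assumptions.
EPOCH = datetime(2020, 1, 1, tzinfo=timezone.utc)

def hot_score(upvotes: int, downvotes: int, posted_at: datetime) -> float:
    net = upvotes - downvotes
    # log10 compresses vote counts: the first 10 votes contribute as
    # much as the next 90, so a coordinated early burst of votes has
    # an outsized effect on ranking.
    magnitude = math.log10(max(abs(net), 1))
    sign = 1 if net > 0 else (-1 if net < 0 else 0)
    age_seconds = (posted_at - EPOCH).total_seconds()
    # Newer posts earn a higher base score; votes fight the clock.
    return round(sign * magnitude + age_seconds / 45000, 7)

now = datetime.now(timezone.utc)
print(hot_score(10, 2, now))    # organically voted item
print(hot_score(500, 20, now))  # astroturfed burst ranks ~1.8 points higher
```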
Joint Declaration on Freedom of Expression and "Fake News", Disinformation and Propaganda (2017)
The United Nations Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples' Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information, [...]
1. General Principles
a. States may only impose restrictions on the right to freedom of expression in accordance with the test for such restrictions under international law, namely that they be provided for by law, serve one of the legitimate interests recognised under international law, and be necessary and proportionate to protect that interest.
b. Restrictions on freedom of expression may also be imposed, as long as they are consistent with the requirements noted in paragraph 1(a), to prohibit advocacy of hatred on protected grounds that constitutes incitement to violence, discrimination or hostility (in accordance with Article 20(2) of the International Covenant on Civil and Political Rights).
c. The standards outlined in paragraphs 1(a) and (b) apply regardless of frontiers so as to limit restrictions not only within a jurisdiction but also those which affect media outlets and other communications systems operating from outside of the jurisdiction of a State as well as those reaching populations in States other than the State of origin.
d. Intermediaries should never be liable for any third party content relating to those services unless they specifically intervene in that content or refuse to obey an order adopted in accordance with due process guarantees by an independent, impartial, authoritative oversight body (such as a court) to remove it and they have the technical capacity to do that.
e. Consideration should be given to protecting individuals against liability for merely redistributing or promoting, through intermediaries, content of which they are not the author and which they have not modified.
f. State mandated blocking of entire websites, IP addresses, ports or network protocols is an extreme measure which can only be justified where it is provided by law and is necessary to protect a human right or other legitimate public interest, including in the sense that it is proportionate, there are no less intrusive alternative measures which would protect the interest and it respects minimum due process guarantees.
g. Content filtering systems which are imposed by a government and which are not end-user controlled are not justifiable as a restriction on freedom of expression.
h. The right to freedom of expression applies "regardless of frontiers" and jamming of signals from a broadcaster based in another jurisdiction, or the withdrawal of rebroadcasting rights in relation to that broadcaster's programmes, is legitimate only where the content disseminated by that broadcaster has been held by a court of law or another independent, authoritative and impartial oversight body to be in serious and persistent breach of a legitimate restriction on content (i.e. one that meets the conditions of paragraph 1(a)) and other means of addressing the problem, including by contacting the relevant authorities of the host State, have proven to be demonstrably ineffective.
2. Standards on Disinformation and Propaganda
a. General prohibitions on the dissemination of information based on vague and ambiguous ideas, including "false news" or "non-objective information", are incompatible with international standards for restrictions on freedom of expression, as set out in paragraph 1(a), and should be abolished.
b. Criminal defamation laws are unduly restrictive and should be abolished. Civil law rules on liability for false and defamatory statements are legitimate only if defendants are given a full opportunity and fail to prove the truth of those statements and also benefit from other defences, such as fair comment.
c. State actors should not make, sponsor, encourage or further disseminate statements which they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda).
d. State actors should, in accordance with their domestic and international legal obligations and their public duties, take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security and the environment.
For the full text, see OSCE.
Did you know?
Online child sexual exploitation and abuse can be countered using tools such as PhotoDNA and NetClean's Whitebox. The use of these tools does not violate the rule of law or human rights (for more information about online child sexual exploitation and abuse, see Cybercrime Module 12 on Interpersonal Cybercrime; for a discussion of legal frameworks and human rights that relate to cybercrime, see Cybercrime Module 3 on Legal Frameworks and Human Rights).
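Tools such as PhotoDNA rely on proprietary perceptual hashing: an image is reduced to a compact fingerprint that survives resizing and re-encoding, and fingerprints of uploaded images are compared against a database of hashes of known abuse material. The Python sketch below uses a much simpler "average hash" (with the Pillow imaging library) to illustrate the general idea only; the hash construction, threshold, and matching logic are illustrative assumptions and are not how PhotoDNA itself works.

```python
from PIL import Image  # pip install Pillow

# Illustrative only: an "average hash", a far simpler relative of
# proprietary perceptual hashes such as PhotoDNA. It fingerprints an
# image so that near-duplicates (resized, re-encoded) hash similarly.

def average_hash(path: str, size: int = 8) -> int:
    # Grayscale and shrink so the hash reflects coarse structure,
    # not exact pixel values.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:  # one bit per pixel: above or below the mean
        bits = (bits << 1) | (1 if px >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Matching: compare an upload's hash against a database of hashes of
# known illegal material; a small distance indicates a likely match.
MATCH_THRESHOLD = 5  # arbitrary assumption for this sketch

def is_known(upload_hash: int, known_hashes: list[int]) -> bool:
    return any(hamming(upload_hash, h) <= MATCH_THRESHOLD for h in known_hashes)
```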
Based on inoculation theory, a solution to misinformation (i.e., false or inaccurate information) and disinformation (i.e., purposely false or inaccurate information) has been proposed. This solution seeks to inoculate individuals against misinformation and disinformation by providing them with the means to build resistance to messaging and propaganda, reducing their susceptibility to these tactics, and leading them to question both the veracity of the information presented to them and the legitimacy of its source. Inoculation theory, which is empirically supported in its application to highly politicized topics such as climate change and terrorism (van der Linden et al., 2017; Cook, Lewandowsky and Ecker, 2017; Banas and Miller, 2013), has predominantly been applied to misinformation but could be applied to disinformation as well (Compton and Pfau, 2005; Roozenbeek and van der Linden, 2018). When applied to misinformation (or disinformation), the theory holds that exposing individuals to small amounts of misinformation (or disinformation) can help them build resistance to being influenced by the real thing (Cook, 2017). This resistance is built by informing individuals about the dangers of misinformation and disinformation, exposing them to the techniques used to distort facts, and providing them with the tools they need to identify false or misleading content (Cook, 2017; van der Linden et al., 2017; Roozenbeek and van der Linden, 2018).
Misinformation and disinformation can thus be countered with education, not only about the topics being communicated but also about the tactics and methods used to create and spread false content. Roozenbeek and van der Linden (2018) created a multi-player game in which players (consumers of news) were asked to play the role of a fake news producer. The study showed that because players were both introduced to small amounts of misinformation in the game and asked to think about how they could mislead people with information, by the end of the game they were better able to "recognize and resist fake news and propaganda" (Roozenbeek and van der Linden, 2018, p. 7). To fight the spread of disinformation and fake news, media literacy campaigns (i.e., campaigns fostering "the ability to access, analyze, evaluate, and communicate messages in a wide variety of forms;" Aufderheide, 1993, cited in Hobbs, 1998, p. 16) have also been created in certain countries (e.g., Sweden, Denmark and Nigeria) (Funke, 2018). What is more, dedicated units have been created to identify, collect, and review disinformation and fake news and to alert the media and the public about it (e.g., the EU East StratCom Task Force) (Morrelli and Archick, 2016). For information about the ethical obligations of media professionals and the responsibility of all individuals to practise ethical behaviour when creating and disseminating information via social media platforms, see Module 10 of the E4J University Module Series on Integrity and Ethics.
Did you know?
UNODC's Education for Justice (E4J) initiative has created educational resources for children and young people to help them develop their conflict resolution, critical thinking, teamwork and empathy skills (UNODC, n.d.). Among these is The Zorbs, an animated series for children between the ages of 6 and 12 about an imaginary planet whose inhabitants overcome a range of challenges relating to justice, cybercrime, human rights, gender and integrity by upholding core values such as acceptance, fairness, integrity, and respect. In addition to the animated series, E4J has produced an interactive game, Chuka, Break the Silence, as well as The Online Zoo (available in multiple languages), which teaches children between the ages of 6 and 12 about the value of the Internet and safe Internet practices. Game-based learning material, among other learning materials and resources, has also been created for youth above the age of 12.
Want to learn more?
National cybercrime awareness campaigns are also highly recommended as a response to these threats, especially in developing countries, where awareness of cybercrime risks is generally limited. For example, Ghana has launched a National Cybercrime Awareness Campaign to address risks that could affect elections and, consequently, national security (Business Ghana, 2018). Other ways of countering misinformation, disinformation, and fake news include: (1) fact-checking by independent parties; and (2) limiting the propagation of fake news, disinformation, and misinformation based on an online platform's community rules (a minimal sketch of the latter appears below).
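As a rough illustration of point (2), the following Python sketch fuzzy-matches a post against a small list of independently fact-checked false claims and down-ranks, rather than removes, matching content. The claim list, similarity threshold, and down-ranking factor are all hypothetical assumptions for illustration, not any platform's actual policy machinery.

```python
from difflib import SequenceMatcher

# Illustrative only: hypothetical examples of claims that independent
# fact-checkers have already debunked.
DEBUNKED_CLAIMS = [
    "voting machines in district 9 were hacked to flip votes",
    "polling stations close two hours early on election day",
]
SIMILARITY_THRESHOLD = 0.7  # arbitrary assumption

def matches_debunked(post_text: str) -> bool:
    """Fuzzy-match a post against the debunked-claim list."""
    text = post_text.lower()
    return any(
        SequenceMatcher(None, text, claim).ratio() >= SIMILARITY_THRESHOLD
        for claim in DEBUNKED_CLAIMS
    )

def distribution_weight(post_text: str) -> float:
    """Down-rank, rather than delete, content matching a debunked claim."""
    return 0.1 if matches_debunked(post_text) else 1.0

# A near-verbatim repetition of a debunked claim gets a reduced weight.
print(distribution_weight("Polling stations close two hours early on election day!"))
```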
Next: Responses to cyberinterventions as prescribed by international law