- MOOC coordinators Manuel Gértrudix Barrio & Rubén Arcos Martín
- Content written by Irena Chiru
- Multimedia design by Alejandro Carbonell Alcocer
- Visual Identity by Juan Romero Luis
Recently we have witnessed the evolution of disinformation strategies from the random sharing of conspiracy theories or made-up stories to a more complex ecosystem, in which information is manipulated through a mix of emotionality and rationality, and narratives are used to feed people emotionally charged true and false information aimed at destroying social cohesion, manipulating relationships and creating isolation, both online and offline. Complex data-driven disinformation and propaganda operations are carried out to replace aspirations with anger born of confusion and despair, to sow distrust, and to distort our representations and understanding of reality via hostile narratives, ready to be “weaponised” for specific goals:
“Manipulated information, using a mix of emotionality and rationality, has recently become so pervasive and powerful to the extent of rewriting reality, when spread in an environment where the narration of facts (true, partial or false) counts more than the facts themselves” (Flore et al., 2019).
The potential success of these operations depends heavily on the capacity to conceal identities and to mislead individual users about the nature of the communication in which they are engaging. Such fraudulent activity is mixed with active user participation through sharing and liking – a blend that creates a sense of uncontrollable anarchy for potential supervisors and legislators.
Although recent technological developments have been a significant game changer, this is not something new. For hundreds of years, states have sought to intervene in the affairs of others in a deniable manner (Cormac & Aldrich, 2018). Since the professionalization of intelligence services in the aftermath of the Second World War, this behavior has become known as covert action: commonly understood as activity to influence events in a plausibly deniable manner. Traditionally, covert action includes the activities of a government to influence political, economic, or military conditions abroad, where it is intended that the role of the initiator will not be apparent or acknowledged publicly. While covert action encompasses a broad spectrum of activities, propaganda and disinformation have always been very useful tools for covertly disseminating specific information to advance foreign policy goals.
However, due to changes in technology and the media, currently both state and non-state actors can use social media to employ time-tested propaganda techniques to yield far-reaching results. Thus, “the democratization of hybrid warfare is further blurring the lines between state and non-state actors, giving states increased deniability in the murky world of accountability and international scrutiny” (Bjola & Pamment, p. 117).
The concept of “plausible deniability” is central to understandings of covert action both academically and in practice. Covert action is multidimensional, with varying audiences and degrees of exposure. Paradoxically, in many cases, the target is aware of the perpetrator as quite often the effectiveness of covert action depends on this awareness to achieve a degree of coercion that lies somewhere between diplomacy and conventional force.
We live in an era of ambiguous warfare, which does not spell the end of covert action – on the contrary. By creating multiple levels of attribution, or a “grayscale of deniability” (Helmus et al., 2018, 11), players at the level of covert attribution produce and circulate exploitable content, add fear-mongering commentary to and amplify content produced by others, and supply exploitable content to data dump websites.
For example, the Kremlin has built a complex production and dissemination apparatus that integrates actors at varying levels of attribution to enable large-scale and complex information operations:
The challenge of attribution (Weisburd, Watts, and Berger, 2016)
Meanwhile, hackers deface websites, execute denial of service attacks, and extract secrets to feed content production (Weisburd, Watts, and Berger, 2016).
Attribution in online disinformation campaigns is complicated; it is therefore often impossible to identify the source of a disinformation campaign, how it is funded, or whether it is of domestic or international origin. Moreover, the question of attribution is one of the most problematic areas: if a state denies responsibility for a cyber activity, attributing the attack to the (suspected) state is close to impossible.