Credits

  • MOOC coordinators Manuel Gértrudix Barrio & Rubén Arcos Martín
  • Content written by Rubén Arcos Martín
  • Multimedia design by Alejandro Carbonell Alcocer
  • Visual Identity by Juan Romero Luis

Images generated by deepfake technology

Everything fake: Detecting AI-generated (GAN) forgeries. Training in analog and deepfake detection

The Internet and social media have remarkable advantages, but they also provide a channel for conducting deception activities through disinformation and digital forgeries, including audiovisual content. As noted by a team of researchers at the University of Washington, today most online disinformation is manually written, but “as progress continues in natural language generation, malicious actors will increasingly be able to controllably generate realistic-looking propaganda at scale. Thus […] we are also concerned with the inevitability of AI-generated ‘neural’ fake news” (Zellers et al. 2019: 1-2).

We know from scientific research that “the more visual the input becomes, the more likely it is to be recognized and recalled” (Medina 2014: 191), a phenomenon called the pictorial superiority effect (Stenberg 2006). Images and videos can trigger emotional responses, and thus they can be used by malicious actors in propaganda and disinformation campaigns.

“Text and oral presentations are not just less efficient than pictures for retaining certain types of information; they are far less efficient. If information is presented orally, people remember about 10 percent, tested 72 hours after exposure. That figure goes up to 65 percent if you add a picture.” (Medina 2014: 192).

When it comes to social media, we know from statistics that articles, posts, tweets, and the like are shared more often when they include images.

Audiovisual disinformation may in the future present a bigger problem than simple text-based disinformation (Lin 2018). Deepfake forgeries can trigger strong emotions in target audiences, and even if source reliability and credibility are checked afterwards, the audiovisual content is likely to be remembered, and it will be difficult to get rid of the visual information we have been exposed to (Bayer et al. 2019). Future developments in deepfake technology will likely make it very difficult or time-consuming to spot false images or videos, and it is likely that only with the help of AI will we be able to detect these forgeries. Deepfake technology is improving, and images of people produced by generative adversarial networks (GANs) (Karras et al. 2019) can already be difficult to identify as such. The website “This Person Does Not Exist” displays images of people generated by these methods: every time the browser is refreshed, the website displays an image of someone who does not exist!
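To make the idea behind GANs concrete, here is a minimal sketch of the adversarial training loop in PyTorch: a generator learns to produce fake images from random noise, while a discriminator learns to tell them apart from real ones. All sizes, layers, and data here are illustrative placeholders; production systems such as StyleGAN (Karras et al. 2019) are vastly larger and use many additional techniques.

```python
# Minimal generative adversarial network (GAN) training step in PyTorch.
# Illustrative sketch only; not the StyleGAN architecture.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes for this sketch

generator = nn.Sequential(              # maps noise -> fake "image"
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(          # maps image -> estimated P(real)
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator update: label real images 1, generated images 0.
    opt_d.zero_grad()
    loss_d = (bce(discriminator(real_images), torch.ones(batch, 1)) +
              bce(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call fakes "real".
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()

# Usage with placeholder data in [-1, 1]; a real run would feed face images.
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

As the two networks compete, the generator's outputs become progressively harder for the discriminator to distinguish from real images, which is precisely why GAN-generated faces are so convincing.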

However, the manipulation of images and the fabrication of visual “evidence” for different purposes, such as producing meanings favorable to interested parties or falsifying history, is nothing new. Photo editing software like Photoshop facilitates the editing of images for different purposes, and it falls to the ethics of individuals and professions to make good use of this software. The website Altered Images BDC, for instance, provides many examples of manipulated images, including the elimination of people from official photos.

Image forensics is required for detecting some of these manipulations, and with the proliferation of online disinformation and propaganda it can become an important, software-assisted skill. In the same way that AI and deep learning can be used maliciously to create deepfakes, they can also be used to help detect manipulations.
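On the detection side, a learned forgery detector can be framed as a simple binary classifier: given an image, predict whether it is authentic or manipulated. The sketch below assumes PyTorch; the architecture, input size, and placeholder data are hypothetical stand-ins, not any particular published detector.

```python
# Sketch of a deepfake/forgery detector framed as binary classification
# (authentic vs. manipulated). Real detectors are trained on large
# labeled datasets of genuine and forged images.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1))           # logit: > 0 means "manipulated"

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (N, 3, 64, 64); labels: (N, 1) with 1 = manipulated."""
    optimizer.zero_grad()
    loss = loss_fn(detector(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random placeholder tensors in place of a real dataset:
loss = train_step(torch.rand(8, 3, 64, 64),
                  torch.randint(0, 2, (8, 1)).float())
```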

Researchers are developing tools for the “detection of malicious manipulation with digital images” (Fridrich, Soukal, and Lukáš 2003). This is the case of Forensically, developed by Jonas Wagner. You can get familiar with this set of tools by watching the following tutorial.
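One classic technique included in tools such as Forensically is error level analysis (ELA): the image is re-saved as a JPEG at a known quality, and the difference with the original is inspected, because regions edited after the original compression often show a different error level than the rest of the picture. Below is a minimal sketch using the Pillow library; the quality setting and the amplification of the difference are illustrative choices, not the exact algorithm any particular tool uses.

```python
# Error level analysis (ELA) sketch: recompress a JPEG at a known
# quality and amplify the per-pixel difference with the original.
from PIL import Image, ImageChops
import io

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Recompress the image in memory at the chosen JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise absolute difference, scaled so faint artifacts become visible.
    diff = ImageChops.difference(original, recompressed)
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))

# Usage ("suspect.jpg" is a placeholder path):
# error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

In the resulting map, spliced or retouched regions often appear noticeably brighter or darker than their surroundings, which is a cue for closer inspection rather than proof of forgery on its own.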

Methodology and Resources

  • Bayer, Judit et al. (2019). Disinformation and propaganda – impact on the functioning of the rule of law in the EU and its Member States.
  • Fridrich, Jessica; Soukal, David and Jan Lukáš (2003). Detection of Copy-Move Forgery in Digital Images.
  • Gross, James J. and Robert W. Levenson (1995). “Emotion elicitation using films,” Cognition and Emotion 9 (1): 87-108.
  • Karras, Tero et al. (2019). Analyzing and Improving the Image Quality of StyleGAN. arXiv preprint.
  • Lin, Herb (2018). “The Danger of Deep Fakes: Responding to Bobby Chesney and Danielle Citron,” Lawfare Blog.
  • Medina, John (2014). Brain Rules (Updated and Expanded): 12 Principles for Surviving and Thriving at Work, Home, and School. Kindle version. Seattle: Pear Press.
  • Stenberg, Georg (2006). “Conceptual and perceptual factors in the picture superiority effect,” European Journal of Cognitive Psychology 18 (6): 813-847. DOI: 10.1080/09541440500412361.
  • Tucker, Joshua A. et al. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature.
  • Uhrig, M.K.; Trautmann, N.; Baumgärtner, U.; Treede, R.-D.; Henrich, F.; Hiller, W. and Marschall, S. (2016). “Emotion Elicitation: A Comparison of Pictures and Films,” Frontiers in Psychology 7:180.
  • Zellers, Rowan et al. (2019). Defending against neural fake news. arXiv preprint.
  • Zupan, Barbra and Duncan R. Babbage (2017). “Film clips and narrative text as subjective emotion elicitation techniques,” The Journal of Social Psychology 157 (2): 194-210. DOI: 10.1080/00224545.2016.1208138.