Russian Disinformation and the Psychology of Deepfakes


In May 2024, Foreign Policy and Democracy Intern Desiree Winns joined students from around the world in Athens, Greece, to present research on the use of AI-generated articles and imagery in Russian disinformation at the International Association of Political Science Students (IAPSS) World Congress. The IAPSS named Desiree Best Panelist. The following is an edited excerpt of her research.

Image caption: Ukraine's President Volodymyr Zelenskyy, shown with a reporter on the front lines of the Russian invasion, has been targeted by Russian deepfakes.

How do deepfakes affect beliefs? As the technology improves, AI-generated videos are becoming harder to distinguish from real footage. Yet even crude deepfakes can plant the subconscious idea of a person committing an act or saying something distasteful, controversial, or disturbing. Research has shown that even obvious replications can affect memory, beliefs, and decision-making, as "people's opinions can be swayed by interactions with a digital replica, even when they know it is not the real person." These false memories can compromise the target's integrity in the minds of audiences, with significant effects on how people see politicians and their policies. One 2020 study on microtargeting, the strategic distribution of content to individuals selected for their political, economic, or religious allegiances, found that it is "possible to stage a political scandal with a deepfake."

The impact of deepfakes lies not only in the release of AI-generated content, but also in the denial of real content. The liar's dividend, a concept developed by law professors Bobby Chesney and Danielle Citron, holds that the very possibility of deepfakes gives politicians and other motivated actors an opening to dismiss real content as AI-generated. In a society where truth can be reframed as "alternative facts" or "fake news," public figures are already dismissing genuine imagery as fake or altered, and the spread of deepfakes makes such claims easier to sustain. Confirmation bias also shapes how people judge the truthfulness of content. One study focusing specifically on deepfakes after the start of the 2022 Russian invasion of Ukraine found that individuals tended to use "deepfakes as a catch-all insult for information they did not like."

Disinformation is commonly associated with Russian interests in undermining American politics. But is Russia likely to use deepfakes in future disinformation campaigns? The Russian government uses fake web pages, personas, and stories to inflame tensions and manipulate public opinion, presenting false narratives that sow doubt about the competence of its political enemies' leadership. According to the U.S. Department of State, Russia uses disinformation as a "quick and fairly cheap way to destabilize societies and set the stage for potential military action." Three prominent examples show how AI-generated disinformation has already served the Kremlin's interests: video deepfakes of Ukrainian President Volodymyr Zelenskyy and Moldovan President Maia Sandu, along with an audio deepfake of Slovak politician Michal Simecka, spread across social media, attributing false positions to all three politicians.

Although the video deepfakes of Zelenskyy and Sandu are easily identifiable as fake on closer examination, the psychological effect of such imagery can still be damaging. As noted above, even crude recreations of politicians can have a psychological effect on viewers similar to that of real audio and video. The context in which deepfakes are shared is also likely to shape their impact. The Aspen Institute's AI Elections Initiative has expressed concern about the use of deepfakes in "small or local races, [where] deepfaked video or audio may go unchecked by local journalists." The same applies to distribution through unmoderated channels, such as Telegram or WhatsApp, where content shared within close circles is more likely to be believed and less likely to be debunked. "Bad Grammar," a network of accounts traced back to Russia, operated mainly on Telegram and posted AI-generated political comments targeting Ukraine, Moldova, and the Baltic states.

Ultimately, deepfakes are a potential tool for Russia to compromise the information space, but they may not yet be its primary means of sowing disinformation and confusion. Russian-affiliated campaigns still rely chiefly on fake web pages, comments, and social media profiles; deepfakes are an increasingly accessible addition to that toolkit. As digital replications challenge reality and the technology improves, the battle against disinformation demands greater awareness of the confirmation bias, self-interest, and tendency to trust online content that make us vulnerable.