A Transatlantic Perspective on Disinformation

Interview

What are disinformation campaigns? And how do they impact democratic societies? We talked to David Salvo and Bret Schafer, two leading experts in the field, to find out more.


Derived from the Russian word dezinformatsiya, disinformation is false information that is deliberately spread to influence public opinion or obscure the truth, often in a covert manner. Thrust into the global spotlight following the 2016 US presidential election, disinformation has generally been referred to as “Fake News.” However, that term is increasingly applied in a distorted way for political purposes. This politicization distracts from the real threat disinformation poses to democratic societies. No matter who carries them out, disinformation campaigns usually aim to undermine public trust in democratic institutions and processes, and in the media.

To gain a better understanding of what disinformation is, who disseminates it, how that affects democratic institutions, and how that impact can be reduced, we talked to David Salvo and Bret Schafer, two experts who track disinformation campaigns at the Alliance for Securing Democracy. 

The Alliance for Securing Democracy is a bipartisan, transatlantic initiative housed at the German Marshall Fund of the United States. They conduct research and analyses on authoritarian interference, develop policy-oriented strategies to better deter those interference efforts, and raise public awareness about authoritarian interference, including disinformation campaigns.

Listen to the full interview here, or read the transcript below to learn more.

David Salvo (middle) and Bret Schafer (right) from the Alliance for Securing Democracy

FULL TRANSCRIPT

This transcript has been edited for clarity by removing filler words. The meaning of sentences has not been changed.
 
Jonas (J): Two years after the 2016 US presidential election, details about the magnitude and methods of Russian interference efforts are still emerging. These events sparked debates and raised public awareness about foreign interference in elections, most prominently exemplified by the spread of disinformation on social media platforms. While the targeting of the United States and the exploitation of modern technologies were in many ways unprecedented, disinformation campaigns by themselves are nothing new. And they are not just used during elections.
 
My name is Jonas Heering, and to find out more about disinformation, who spreads it, its effect on democratic institutions and societies, and ways to reduce its impact, I talked to David Salvo and Bret Schafer. They’re two leading disinformation experts who work with the Alliance for Securing Democracy, here in Washington, DC.
 
The Alliance for Securing Democracy is a bipartisan, transatlantic initiative housed at the German Marshall Fund of the United States. They conduct research and analyses on authoritarian interference, develop policy-oriented strategies to better deter those interference efforts, and raise public awareness about authoritarian interference, including disinformation campaigns.
 
David is the Alliance for Securing Democracy’s Deputy Director and an expert on Russia and Eurasia. Bret is the Alliance’s Social Media Analyst and Communications Officer and an expert on computational propaganda.
 
David and Bret thanks so much for joining me today.
 
You guys actually just launched a new tool last week called the Authoritarian Interference Tracker. Could you tell me a little more about that project?
 
David (D): So, it catalogs over 400 incidents of Russian interference across the transatlantic community since 2000, when Vladimir Putin first came to power. And it shows the intersection of the various tools that the Russian government has used thus far to conduct these operations. It looks at five tools specifically: information operations, cyber attacks, malign finance, subversion of political parties and social organizations, and strategic economic coercion. And it shows how these various tools intersect and are used in conjunction with one another in order to destabilize democratic governments and societies.
 
And what’s great about the tracker is that it reinforces the point about zooming out from the elections framework. It shows you just how multi-faceted a lot of these operations are, and how they are much broader and their aims even more insidious than just targeting, say, one particular party or election.
 
J: Aside from these tools that you provide on your website and trying to feed that into journalistic coverage, what other steps have you taken to directly reach the public and convince them that this is a problem they should care about?
 
D: Part of it is through working with journalists. I think that really expands the reach and impact of our analysis. But we also do a lot of face to face engagement with Americans and Europeans. Bret and I and our two co-directors, Laura Rosenberger and Jamie Fly, have done a lot of travel across the United States, at various conferences and town hall events, meeting with the average American citizen who maybe doesn't follow these issues every day like we do and others do in Washington, to put a bipartisan face on the issue, first and foremost, to try to remove some of the politicization that inherently is attached to the issue in the [United] States because of what happened in 2016 in the presidential election. That's one objective.
 
The other is to simply provide this sort of broader context in which these operations are conducted. To make the point again, this isn't just about Donald Trump or what happened in November 2016. This is an ongoing threat and will be on our radar even long after Donald Trump leaves the stage. That's part of our objectives here. In Europe, the objectives are similar. You also are confronted by some skepticism. Maybe the motivations are different. It's not necessarily a partisan issue like it is here, but you face some criticism that we're sort of anti-Russian and this is just targeted… we’re trying to discredit the Russian government. And we're trying to explain, no, that's not our objective. Our objective is to shine a light on operations that this [Russian] government is conducting against these European countries and people as well. Similar objectives but slightly different messages.
 
Bret (B): It's also critical to engage with local media. Too often this conversation is being had with foreign policy reporters, which is important of course, but it's also important to have these conversations with local reporters in Florida who might have a more specific concern about foreign interference in an event or a movement, whatever the local context is.
 
For example, there was a concern recently about how many of these veterans’ Facebook pages are run by foreign entities. That's something that, I think, is broadly regarded as a concern by all Americans, to have veterans groups targeted by outside influence. So [it’s important] to be able to have conversations with not just the foreign policy reporters in Washington, but local reporters and local groups who have very specific concerns that are different from foreign policy concerns.
 
J: I know you mentioned you kind of want to move away from just talking about elections, but since we are talking just about a month after the midterm elections here in Washington, DC, I briefly want to touch on that. Prior to the election, there was a lot of discussion about what level of interference we would witness during the election. And there was a lot of speculation about that. And I know that part of your work is to track Kremlin-linked Twitter bots through the Hamilton 68 online dashboard. I'd just be interested in hearing your perspective. What levels of interference did you notice during the election, particularly in regard to disinformation campaigns?
 
B: Part of that question depends on how you define an election. If you're looking at interference in an election only happening the day before an election or on election day, you're probably missing what's happening. Because, of course, if you're looking at political campaigns, those campaigns don't suddenly start messaging as people are heading to the polls. This is something that happens months or years out. When you look at efforts to interfere in elections, it's more about trying to influence the electorate and not an individual election. And that takes a lot of time. Some of the hot-button issues in this election season, immigration, for example, we've seen consistent messaging on from the accounts that we look at going back six months, a year. They've been slowly chipping away at the issues that voters care about.
 
We did not see any sort of dramatic last-minute attempt to leak information online, for example. We didn't see the same kind of spike in activity that you saw in 2016. You wouldn't be able to say that we saw this massive effort to swing any individual election, but we certainly saw, over several months and several years, efforts to influence voters on issues that voters actually care about.
 
D: You also saw a little bit of activity trying to undermine the integrity of the electoral process itself. Stories that were being amplified by these networks online about disenfranchisement of voters for example, or stories about how voting infrastructure was indeed hacked by foreign actors. Anything that could cast doubt on the integrity of the process. We saw stories that were being promoted by this network as well.
 
B: Yeah, we actually saw a bigger spike after the election, when there were some of these contested votes in Florida, for example, of amplifying those real narratives and real concerns. It's not like they created these things. But we saw more efforts there to sort of poke at the legitimacy of elections that happened after the election itself, not before.
 
J: Most of the narrative here in the media and most of your work, in fact, has been focused on Russian interference. There is an alternative narrative, particularly pushed by the current administration, that we really should be focusing on China. What do you think about that? Is there merit to these concerns?
 
D: I think the administration talks about Chinese interference in a way that confuses sort of traditional nation-state influence and actual subversive interference. Now that said, the Chinese government does conduct interference operations across the world, in its near abroad and beyond. And our organization is starting to focus on the sort of longer-term China challenge because even though the administration may talk about Chinese interference in a way that sort of confuses the two issues as it relates to American elections, in the global sense, there actually is an interference challenge that the Chinese government poses. We as an organization are starting to expand our focus beyond the Russian government to look at the Chinese government and other authoritarian actors.
 
J: If I understand you correctly, China as of right now is not using the same kind of tools the Russians are using? Do you think China is going to adopt, specifically in regard to disinformation campaigns, similar tools to the Russians, or do you think they're going to use completely different [tools]?
 
B: I think it will be completely different. You're looking at a rising power versus a declining one. China has long-term strategic objectives, whereas Russia is more opportunistic. I think it's highly doubtful that we'll see covert Chinese sock puppet accounts come online and adopt personas of Black Lives Matter activists, for example. I just don't see the Chinese going down that route. So I don't think they're going to adopt the same kind of playbook that the Russians have used, of trying to amplify these local concerns and these wedge issues. That being said, who knows what's going to unfold in the future. But I think that Chinese interference will look very different from Russian interference.
 
J: I'd like to shift the discussion more towards Europe now since you are a transatlantic initiative. 2019 is probably going to be a very pivotal moment for European democracies because we have the European Parliament elections, but we also have a lot of parliamentary and presidential national elections, especially in a lot of Eastern European countries such as Estonia, Poland, Latvia, Lithuania, and of course in Ukraine, and there are likely going to be many opportunities for foreign interference in those elections.
 
How is this issue being discussed among your European colleagues? Do you think there's the same type of concern, the same sense of urgency [to address this] as might be perceived here in the US?
 
D: I think in Brussels, in both the EU and NATO, there is a sense of urgency. You've seen both institutions take steps to try to address this challenge. It’s hard and I think they sort of move in fits and starts and not all their efforts are entirely well coordinated. And the problem is magnified by the fact that in the case of elections, Member States have jurisdiction over how elections are conducted. So Brussels is not impotent but it's not able to influence capitals in the same way in order to ensure that elections are held to one standard. That also presents a challenge for the sort of integrity of the electoral process in Europe.
 
B: Also, in Central and Eastern Europe, they've been dealing with this threat for decades. So this is nothing new. Whereas a lot of the focus right now in the US is because, before 2016, nobody really thought this would happen here, or if it did happen, [people thought] it wouldn't be effective. You're dealing with a different level of focus: they'd been focused on it for so long that being prepared for this is kind of just old hat now, whereas in the US, this is all new for us.
 
J: You touched on the fact that in Europe national governments oversee the elections. How do the structural differences in European elections compared to here in the US affect the effectiveness of disinformation campaigns? For example, in a lot of European countries, you have shorter campaign cycles, or in France, they grant equal [TV] airtime to all candidates. What effect does it have on disinformation campaigns?
 
B: It’s tough to say exactly, but if you look at the French context, for example, the Macron leaks didn't work because they were leaked at a time when there was a press blackout. There are probably different contexts in each country that will determine how this will operate. Also, in Germany for example, most Germans still get their news from the traditional, legacy news sources. So there's less of an impact for some of these social media campaigns than you'd see in the US. I think in each country there's a bit of a different local context that will determine how this will look and whether or not it will be effective.
 
J: One other thing I wanted to touch on in the European context is that some European lawmakers I've heard from actually expressed concern about interference in European elections by non-state actors from the US. Have your European partners echoed this concern? What is your take on that?
 
B: Yeah, I mean I think it's definitely a concern. If you look at the Irish abortion referendum, for example, their primary concern was coming from evangelicals in the US. So I mean fair is fair here. When you look at these sorts of interference efforts, those are not just coming from Russia; they're coming from many different places. But I would also distinguish between covert and overt activity. An American evangelical who goes online and expresses their opinion about what's happening in another country, if they're doing that and saying who they are and what they believe in, I think that's one thing, as opposed to going online and saying “Hey, I am Irish like you. I'm part of this community. Here are my thoughts.”
 
That would be the clear dividing point, in the same way that the Russians are free to talk about political and social issues in the US as well. Where the line is crossed is when they come online and say “I'm a member of the Black Lives Matter movement, and have you seen this article,” when that article isn't really true or has been manipulated. That's a very important dividing line between what's done openly and what's done covertly to manipulate people.
 
J: We talked a lot about some of the threats facing Europe next year and also some of the work that you have done here in the US and in Europe. I also want to talk about other initiatives, organizations, [and] efforts to combat disinformation and interference that have been undertaken here in the US and also in Europe. Are there any initiatives in Europe or institutional efforts that you have cooperated with or that you think you could learn from?
 
D: I mean there are a number of European organizations in civil society that are working, particularly on the issue of disinformation. That's everything from StopFake in Ukraine to GLOBSEC. And of course, on the institutional level in the EU and NATO, you have dedicated task forces and centers of excellence looking at aspects of the foreign interference challenge, from cyber to information operations. To the best of our knowledge, I don't think there's any organization other than ours that's looking at the entirety of this interference toolkit. That's not necessarily a pat on the back. I just think that's our approach to this issue. We look at all these tools in conjunction with one another.
 
B: In every European country there is somebody on the ground looking into this. I've had conversations with Romanians, [and] people in Moldova. And it's important, when you're talking about disinformation, to have local organizations looking at it because they understand the context. Disinformation tends to be very localized and very specific, and if we are trying to monitor what's happening in Moldova, we might be able to pick up on broadly what accounts are talking about, but we don't understand why they're talking about it or how it will work. All of these efforts really need to go down to the very local level because otherwise you can sort of say, “Yeah, they seem to be promoting this hashtag,” but if you have no idea what that hashtag is referencing, it's not very effective to monitor it.
 
D: And that's no different at the institutional level, too. We've now expanded our staff to have people in Brussels specifically because they understand the European political landscape in a way that we can't from Washington. So I think even when we're engaging policymakers in Euro-Atlantic institutions or European institutions, we need that local context too. So it's no different from our vantage point.
 
J: On the US side, do you expect there to be more legislative action taken in regard to interference and disinformation now that we have a majority-Democratic Congress?
 
D: Hopefully. I mean, what's interesting about the American political landscape is that – I think you mentioned this earlier – there's sort of a broad bipartisan acknowledgment about what the threat is, what it isn't, and the duration of the threat. It's not over because the election is over. This is an ongoing challenge, and there have been a number of pieces of legislation introduced on a bipartisan basis. The problem is that politics has gotten in the way of that legislation passing.
 
Will a Democratic House change the calculus? Maybe. You still have a divided Congress, ultimately. I think we're hopeful that, because members of Congress on both sides of the aisle tend to see eye-to-eye at least on the nature of the threat, they will finally put politics behind them. At least early on, before the 2020 election cycle gets underway in earnest, you have this window of opportunity to pass some common-sense measures that will really close off some of the vulnerabilities that the Russian government and others can exploit.
 
B: I will say that that legislation, particularly if you're talking about computational propaganda, can only go so far because, by the time Congress understands an issue in terms of the tech behind it, it's probably an issue that's two years in the past. For example, bots: we've seen recently a lot of efforts to put forth bot legislation. Bots are kind of a 2016 problem at this point. Legislation is never going to keep up with tech because tech is always going to be emerging and developing way faster than any legislation can happen. So, even assuming that everyone in Congress understands what a bot is – which I would not assume – by the time they understand the issue, by the time it makes its way through the legislative process, it's probably not the issue they need to be focusing on. There's a limit to what we can legislate.
 
J: From what I understand you saying, legislation is probably not going to be the most effective tool to address [disinformation]. One other remedy might be increasing media literacy education. That's something you mentioned in a report you recently wrote with 10 policy proposals. You, for example, suggested creating a public-private fund for media literacy education. What does this look like in practice? At what level of education do you suggest incorporating media literacy education? Should that be mandatory in schools? Do you have any specific ideas with regard to that?
 
D: What's interesting about media literacy is that when it's talked about, I think people focus on the sort of public education aspect of it. And that's not to deemphasize the importance of that part of it. I mean, children, and teenagers, and college students should be taught ways to become critical consumers of information. But, if you look at the demographics of which part of the population is most susceptible to disinformation campaigns, it's the older generation. Media literacy, when we talk about it, shouldn't just be focused on the youth, although that obviously is important. Even in our informal awareness-raising around the country, we try to engage the adult segment of the population as well because they're the ones, more often than not, who are falling for this.
 
B: Yeah, and also when you look at some of these activist groups who are targeted: a guy named Kris Goldsmith, who is a veteran and works for a veterans group, has done a bunch of research on the number of Facebook pages, as I mentioned earlier, that are run by foreign entities trying to target veterans groups. Veterans are a target for many reasons, mainly because they tend to have an elevated voice in [political] conversations and discussions. There needs to be media literacy [education] happening within specific communities as well. He has pushed, for example, for the VA (US Department of Veterans Affairs) to start a program teaching cyber hygiene and media literacy there.
 
You need to look at the groups that are actually targeted, which, as David mentioned, are often older Americans, and often Americans who are part of activist groups or specific communities that we know are targeted directly. So, yes, it's important to have it in schools, but we need to look where the actual problem is. It's like putting a drug prevention program in place and running it only in senior citizens' homes; that's not the most effective way to do it. We need to look at the vulnerable populations first and then put in place the efforts to address those issues within those communities.
 
J: One last thing I want to touch on, which we haven't really talked about yet, is the role of the private sector in this issue. For example, following the 2016 election, a lot of the social media companies like Facebook and Twitter faced a lot of backlash for their lack of action in addressing disinformation online. How have they dealt with this issue since then? What role have they played in curbing disinformation online?
 
B: I'd say their security teams have actually done a fair amount to start to address some of these issues. Twitter has cracked down on bots. I mean it's still a problem, but they have started addressing some of the issues there. Facebook has started to look at these coordinated, inauthentic efforts to create fake pages. The security teams have actually done a lot. Where the problem has been is their PR around this, [which] for a year and a half tried to minimize the problems on their platforms. What they were saying publicly and what was being done behind the scenes, were not syncing up.
 
I think, we've kind of gotten close to where the two have caught up to a degree, where they've now at least openly kind of admitted that there's a problem. They have been more transparent when they've taken down pages. There's still a lot more that needs to be done, for sure, because it seems like each week a new revelation comes out that we didn't know before. So, on the tech side, on the security side, there have been efforts made by the companies to address some of these issues, but they have been slow. And for the first year when there was really a time when these needed to be addressed, they were basically more or less trying to fend off bad publicity as opposed to just actually tackling the problem.
 
D: That's still going on, as demonstrated by recent investigative reports that have come out in the [New York] Times and elsewhere.
 
J: Going off of that, from my understanding in Europe governments are a lot more open to the idea of kind of regulating social media content. Whereas here in the US, that's often seen as kind of a taboo because of the implication of limiting free speech. Do you think there should be more institutional oversight over social media companies here in the US? You mentioned all the scandals, especially involving Facebook, that have come out in the last couple of weeks and months. Can they be trusted to deal with this issue on their own?
 
D: I think there's an emerging consensus across the aisle that leaving this to the companies themselves to regulate is probably not the solution. That said, I have a hard time imagining the US government adopting a European approach. And I would be wary of levying a 50 million euro fine on a company for not making a snap judgment about nebulous content. Not all of this is hard facts; this isn't child pornography, right, or terrorist content necessarily. Some of it does operate in that sort of gray zone. It’s not always so cut and dried. And you mentioned First Amendment issues. I think those are reasons why we probably wouldn't go down that road here in the States.
 
But it's clear that the companies themselves have not been willing to demonstrate transparency into what some of their efforts to get at this issue have been. And Bret mentioned earlier that inauthentic behavior issue. I think if we use that as the paradigm instead of content regulation, [it’d] probably gain more traction.
 
B: Yes, absolutely. I mean, a lot of the European efforts have looked at content moderation or regulation, which I just think is probably a non-starter in the US. And even in Germany with their NetzDG law, there's a lot of concern that, because it puts the emphasis on the platforms themselves to make snap decisions about what to take down, they're erring on the side of taking things down because they don't want to be fined. I think that would get very problematic very quickly.
 
Where I think legislation could help is with data protection and data privacy. I think that is probably where we'll see things headed, too, because one of the reasons why disinformation is so effective in the US [is that] it's very easy to micro-target in ways that are more challenging in Europe because of some of the laws they have put in place there. If we're looking at where I would like to see maybe a bit of a heavier hand in terms of congressional legislation, it is on the protection of user privacy and data, as opposed to getting into content moderation and content regulation, which is going to run into First Amendment issues.
 
And [with] the amount of content that is produced, it's just not feasible for a platform to be able to deal with all of those issues. And when we talk about the platforms, we're usually talking about Google, Facebook, Twitter, that actually at least have the resources to do it. On some of these smaller platforms, they just don't have teams in place. So, I don't know how you would do it and I don't think it's a great idea to get into content moderation.