Disinformation is out of control as malicious actors seek to capitalise on the Covid-19 pandemic. To date, EU schemes to tackle disinformation have focused on self-regulation, but there are widespread concerns about how the EU is managing the crisis.
Drinking bleach will not protect you from Covid-19, and neither will burning down cellphone towers. The false claim that 5G mobile networks hasten the spread of the coronavirus is just one of the most outlandish conspiracy theories that governments in Europe and around the world have been up against in their efforts to contain a torrent of disinformation.
Outlandish or not, the online myth about the role of 5G in the epidemic has had real-life consequences all over Europe. According to the GSMA, as of 2 May 2020, there had been 67 arson attacks on 5G towers in the UK, 22 in the Netherlands, 17 in France, three in Ireland, two in Cyprus, one each in Belgium, Italy and Sweden, and one suspected arson attack in Finland.
“Let me be blunt – disinformation can kill,” the head of the European External Action Service (EEAS), Josep Borrell, said in a statement to MEPs. “Disinformation can also have a material impact”, he continued, referring to the arson cases. “We can’t let baseless claims against 5G undermine public trust in the technology Europe will ultimately need to drive its economic recovery,” warned Noelle Knox, Europe PR director at GSMA.
The bogus health claims related to the virus may seem laughable, and as yet there are no reports from European emergency services of citizens drinking bleach or disinfectant, even after US President Donald Trump amplified this online falsehood from the White House podium. Yet the “truths and counter-truths” debated online and offline have created a chaotic background noise that makes fact harder to distinguish from fiction. This is one of the most insidious ways in which disinformation works: the public can very quickly become fatigued with any sort of news and dismiss even responsible, factual, ethical reporting as “fake news.”
EU report points finger at China and Russia
The EEAS sees reason to suspect that this fatigue is not an accidental but an intended result. In a report published on 24 April 2020, the EU’s foreign service flagged several categories of health-related disinformation including “attempts to downplay the pandemic and suggest that it is a hoax, for example by saying that the mortality rate is exaggerated. These messages frequently focus on attempting to undermine trust in institutions and governments by alleging that they are using the pandemic as an excuse to exert undue power and control over their citizens.”
The report specifically warned of “coordinated campaigns” behind such intentional disinformation “about the EU and its partners, including from foreign state-controlled media and social media channels.” News stories with headlines such as “Coronavirus pandemic is exaggerated in order to turn countries into fascist hygiene dictatorships”, “The Covid-19 crisis is manufactured by media” or “It is too early to tell whether any extra people will die because of Covid-19” were published by notorious Russian state-owned or sponsored outlets such as Sputnik, South Front and RT.
The report also pointed at China, from where the epidemic first spread. “Reports indicate that there are continued efforts at deflecting blame for the outbreak of the pandemic, involving both overt and covert tactics,” it said.
Illustrating how much this topic sets nerves on edge in Brussels, EEAS chief Josep Borrell was forced to appear before the European Parliament on 30 April after it turned out that a leaked earlier draft had been more critical of China. He denied allegations that he had watered down the report in response to pressure from China. “We have not bowed to anyone,” he said. “I can assure you that no changes have been introduced to the report published last week to allay the concerns of a third party, in this case, China.”
German Green MEP Reinhard Bütikofer said he agreed with Borrell “that the report has been mischaracterised publicly when it was attacked as caving in to China or, worse still, as an example of appeasement. Borrell himself did not mince words and spoke of ‘Chinese disinformation’, which is correct,” he added.
However, Bütikofer also said: “The Chinese disinformation efforts are not very successful, as far as I can see. Their efforts to deflect criticism regarding their lack of transparency are not going to be successful, just as President Trump will not be successful either with his attempts to deflect criticism of his crisis management by going after China.”
Exposure to fake news higher in Spain and Italy
Successful or not, EEAS analysis found that “false or highly misleading content continues to go viral, even when it has been flagged by local fact-checkers. While aggregate reach figures are impossible to calculate, it is safe to say that respective content is reaching millions of users.”
A study by the international NGO Avaaz confirms this finding. “Representing only the tip of the misinformation iceberg, we found that the pieces of content we sampled and analysed were shared over 1.7 million times on Facebook and viewed an estimated 117 million times”, said the report.
Both EEAS and Avaaz point out that exposure to harmful misinformation is especially high in smaller media markets such as Spain and Italy, where internet platforms such as Facebook are slower at verifying content. In the NGO’s survey, only 29% of English-language content remained unlabelled, compared to 68% of Italian content and 70% of Spanish content.
Alexandre Alaphilippe, Executive Director of EU DisinfoLab, an independent NGO focused on researching and tackling sophisticated disinformation campaigns targeting the EU, said that one of the European peculiarities emerging during the Covid-19 pandemic is the prevalent spread of disinformation through private messaging. “It has happened before in Brazil and India for example, but this is the first time we in Europe are hit with this dimension. It’s very hard to debunk and it spreads very fast,” he explained.
“In Europe we also see a lot of cross-border disinformation. For example, the alleged story of an Italian nurse is translated into English or Spanish. So we are developing some sort of public sphere, but unfortunately it’s used for disseminating false claims!” he continued.
However, Alaphilippe believes the current situation could be a turning point in the fight against disinformation: “I think it is a turning point because I think for the first time we are seeing the harmful impact in real time. We also see that most of the tech platforms are doing things that they claimed were ‘not possible’ only two months ago. We will have to see whether this is only the first step and whether it will go further.”
Letting platforms tackle disinformation
Previous attempts by governments to work with the tech platforms on this suggest this won’t be easy. In response to the Facebook-Cambridge Analytica scandal, in which the data of 87 million Facebook users had been abused for targeted political advertising, the European Commission established the Code of Practice on Disinformation in September 2018, a voluntary scheme under which many big tech companies, including Facebook, Twitter, Google (YouTube) and Mozilla, committed to a variety of actions to tackle online disinformation.
By January 2019, the Commission was already calling on signatories to intensify their efforts, saying that although there had been “some progress” in removing fake accounts and limiting the visibility of sites that promote disinformation, additional action would be needed to ensure full transparency of political ads. Facebook agreed to set up an Ad Library where users could see who had targeted them with political advertising, but the scheme has so far proved ineffective in delivering real transparency or accountability.
In the current public health crisis, YouTube said it would reduce recommendations for videos that present 5G as the source or accelerator of the coronavirus, but it would stop short of removing the content altogether, saying it does not break “community guidelines.” According to YouTube, so-called “borderline content”, such as conspiracy theories, accounts for less than 1% of its videos. But it is worth remembering that YouTube itself determines what is and is not “borderline.” Community guidelines are no substitute for robust laws.
Neither is leaving it to the platforms to separate facts from falsehoods. A European Parliament resolution from 17 April demands the establishment of “a European information source to ensure that all citizens have access to accurate and verified information”.
The EU has had a patchy record of providing good information about bad information. So far, the most visible attempt has been a disinformation database named EUvsDisinfo. One source, speaking on condition of anonymity, described the Commission page as “a disaster” because it mixes fact-checking with self-promotion.
Pirate Party MEP Patrick Breyer thinks that independent NGOs should handle this campaign against disinformation, and that the EU could help by funding them rather than trying to spread its own view. “Especially since those that disinformation is targeted at are conspiracy theorists – why would they trust the EU institutions?”
Breyer highlighted a widespread concern over the website’s far-reaching definition of disinformation, which goes beyond content created with the malicious intent to mislead the public and includes “messages in the international information space that are identified as providing a partial, distorted, or false depiction of reality”. In other words: the definition includes value statements. “‘The EU is dead’ is something certain Russian outlets are saying, but a lot of other sources are also saying that”, cautions Breyer.
According to the MEP, EUvsDisinfo does not follow many of the principles in the International Fact-Checking Network’s code, including those on avoiding partisanship, providing biographies of the fact-checkers, and offering publications the right to respond. “There is no corrections policy and no form for objecting to entries in the database”, said Breyer.
Grey area between harmful content and opinion
European policymakers have not found a way to navigate the grey area of what should and shouldn’t be censored. “Illegal content” is low-hanging fruit. But “content that could cause harm” is much trickier to identify. When the President of the United States muses about the efficacy of ingesting bleach, Brazil’s President Jair Bolsonaro urges citizens not to comply with social distancing, and a leading Pakistani cleric claims that the “wrongdoing of women” is the cause of Covid-19, don’t news outlets have a duty to report it?
“Something is not disinformation just because it is not proven to be true,” MEP Breyer pointed out. “For example, reporting that Covid-19 might have been brought to Wuhan from outside China, or produced in a laboratory, may or may not be true. But as long as the authors don’t claim it as verified fact, you must have the right to report on possibilities or speculation – provided you clearly state that is what it is.” Or, as the Center for Democracy and Technology’s EU tech policy brief put it: “One person’s robust opinion may be another’s disinformation.”
According to EU DisinfoLab’s Alaphilippe, most narratives circulating around the coronavirus are based on assigning blame – whether they come from Chinese embassies, Russian state media, or the White House. “In suburbs in France, there are people blaming migrants for not socially distancing. Some are blaming Europe for not doing enough,” he continued. “But we have to differentiate criticism from disinformation – especially as it is fair to say that nobody was prepared for this pandemic.”