Tech companies operating some of the world's biggest online platforms, including Facebook owner Meta, Microsoft, Google, Twitter, Twitch, and TikTok, have signed up to a new EU rulebook for tackling online disinformation.
These firms and others will have to make greater efforts to halt the spread of fake news and propaganda on their platforms, as well as share more granular data on their work with EU member states. Announcing the new "Code of Practice on disinformation," the European Commission said that the guidelines had been shaped particularly by "lessons learnt from the COVID19 crisis and Russia's war of aggression in Ukraine."
"This new anti-disinformation Code comes at a time when Russia is weaponising disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly," said the Commission's vice president for values and transparency, Věra Jourová, in a press statement.
The code itself contains 44 specific "commitments" for companies, targeting an array of potential harms from disinformation. These include commitments to:
create searchable libraries for political adverts
demonetize fake news sites by removing their advertising revenue
reduce the number of bot networks and fake accounts used to spread disinformation
give users tools to flag disinformation and access "authoritative sources"
give researchers "better and wider access to platforms' data"
work closely with independent fact-checkers to verify information sources
Many US tech firms, including Facebook and Twitter, have already adopted similar initiatives under pressure from politicians and regulators, but the EU claims its new code of practice will allow for greater oversight of these operations and stronger enforcement.
Despite the scope of the anti-disinformation code, there are some notable absences from the list of signatories. Apple, for example, has not signed up, despite its burgeoning advertising business and the code's focus on demonetizing sources of disinformation by cutting off ads. Other large platforms, like Telegram, which has been a major battleground for propaganda following the Russian invasion of Ukraine, are also absent.
Although the predecessor for these guidelines, 2018's Code of Practice on Disinformation, was entirely voluntary, the EU notes that this new rulebook will be enforced by its new Digital Services Act, or DSA.
"To be credible, the new Code of Practice will be backed up by the DSA, including for heavy dissuasive sanctions," said the EU's commissioner for the internal market, Thierry Breton, in a press statement. "Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6% of their global turnover."
Although the EU is presenting the code as a strong deterrent against disinformation with clear methods of enforcement, it's worth remembering how difficult it is to even gauge the impact of disinformation, let alone curb its negative effects.
Take, for example, the code's 31st commitment, in which signatories agree to "integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents." Platforms signed up to this portion of the code will, in the future, have to share data on fact-checkers' work on their platform, giving each EU member state information including "number of fact-check articles published; reach of fact-check articles; number of content pieces reviewed by fact-checkers."
Such data will no doubt offer new insight, but it can hardly give the full picture of fact-checkers' work. Consider that Facebook has partnered with fact-checkers as far back as 2016 but has also been criticized for using partisan groups (like the Check Your Fact team, which has ties to conservative website The Daily Caller) to verify sources.