The first batch of reports from tech giants, adtech entities and others on how they’re tackling online disinformation in the European Union since the bloc unveiled a strengthened version of its Code of Practice last year has been published today.
The measures signatories commit to under the Code cover areas like fact-checking, demonetization of disinformation, transparency of political advertising, bots and fake accounts, user empowerment and providing data access for researchers.
All the reports can be found at a new Transparency Center website that’s also launching today — called Disinfocode.eu.
Google, Meta, TikTok, Twitch and Twitter are among 30 platforms that submitted reports to the EU on time, meeting a deadline at the end of last month, the EU’s executive confirmed today. However, the Commission has singled out Twitter for delivering the least substantial submission, per its first look at the reports.
“All signatories have submitted their reports on time, using an agreed harmonised reporting template aiming to address all commitments and measures they signed onto. This is however not fully the case for Twitter, whose report is short of data, with no information on commitments to empower the fact-checking community,” the Commission wrote in a press announcement.
EU officials said today that they will be following up with Twitter to try to better understand its approach to disinformation.
Commenting on the publication of the first “baseline” reports in a statement, Věra Jourová, the EU’s VP for values and transparency, added:
The publication of the first reports of the revamped anti-disinformation Code is an important milestone in the fight against disinformation and I am pleased to see how most signatories, big and small, are engaging. I’m glad to see for the first time reporting on the country-level, but more work is needed when it comes to providing access to data for researchers. We must have more transparency and cannot rely on the online platforms alone for the quality of information. They need to be independently verifiable. I am disappointed to see that Twitter report lags behind others and I expect a more serious commitment to their obligations stemming from the Code. Russia is engaged also in a full-blown disinformation war and the platforms need to live up to their responsibilities.
The EU’s revamped Disinformation Code currently has a total of 38 signatories, which run the gamut from major social media networks to smaller platforms, adtech entities, civil society organizations and fact-checkers.
A key strategy the Commission has used to evolve (and, it hopes, strengthen) the Code versus the first, tepid version back in 2018 has been to solicit participation from a whole ecosystem of players, rather than just focusing on big social platforms. And some of the reported actions it flagged today gave a nod to cooperation between signatories on interventions against purveyors of disinformation.
Examples of actions Commission officials highlighted included a claim by Google to have prevented more than €13M in ad revenue from flowing to “disinformation actors” in the EU in Q3 last year. TikTok reported removing more than 800,000 fake accounts, with over 18M followers in total, in the same quarter, which it said represented 0.6% of total monthly users in the EU. Meta reported that around 28M fact-checking labels were applied on Facebook and around 1.7M on Instagram in December, with the company also saying that, on average, 25% of Facebook users and 38% of Instagram users did not forward content labelled as false by fact-checkers. And Twitch reported, via a partnership with the Global Disinformation Index (another signatory of the Code), that it had removed six accounts that had actively promoted the QAnon conspiracy theory.
The Commission said it expects to have a fuller picture of how the beefed-up Code is functioning later this year, with the next batch of reports from major platforms due in six months’ time (smaller entities only have to report annually). It further noted that signatories only had a short time to implement the enhanced measures they had agreed to under the strengthened regime unveiled last fall, so it’s also expecting to see action step up by the summer.
While the Code is still not legally binding, and participation remains voluntary, the Commission confirmed last year it will be linking observance of commitments made under it with compliance with the Digital Services Act (DSA), a major update to the bloc’s digital regulation which will start to apply to a subset of larger platforms later this year (and from early 2024 for all in-scope platforms).
This means there’s now a big regulatory stick to drive action on disinformation, as steps taken voluntarily can count as mitigation measures towards DSA compliance.
The Commission’s goal for the Code of Practice is, therefore, that it becomes a de facto Code of Conduct under the DSA, which has teeth aplenty, including penalties for infringements that can scale up to 6% of global annual turnover. Hence officials are sounding confident that the impact of the Code will only step up as the bloc’s regime for regulating the digital sphere shifts up a gear.
Still, there’s at least one early wrinkle: Elon Musk’s Twitter…
Tackling Twitter
While the EU’s executive emphasized today that it will need more time to fully analyze all the submissions from signatories to the Disinformation Code — which it noted collectively run to more than 1,000 pages — Commission officials said it’s clear Twitter has not lived up to its reporting commitments.
And it’s no accident that the timespan here covers the first months of Musk’s ownership of the platform, which he took over at the end of October. Er, let that sink in!
The EU describes Twitter’s report as very thin, noting, for example, that it has simply not provided data in some areas, such as fact-checking, which the platform told the Commission it does not consider applicable to it.
Meanwhile, in a section on political advertising, Twitter’s report remarks that, “broadly”, a sweep of ten commitments is “not relevant to Twitter’s current approach to political and issue advertising in Europe at the time of writing”, despite the company having ended a legacy ban on political advertising early last month.
This includes commitments to take a consistent approach across political and issue advertising and to have clear policies indicating which types of ads are permitted or prohibited; to clearly label such ads to distinguish them from non-paid-for content; to verify the IDs of political advertisers; and to provide users with clear and comprehensive info on why they’re seeing political ads (to name a few).
Although Twitter’s report adds the caveat that “this may change going forward” (so, sheesh, who knows what Musk might do!), it also stipulates that “under the DSA” it will be relaunching an ads transparency center that was operational until 2019. (Which, well, looks like a big signal to regulators that Musk will only do what he’s legally compelled to do.)
One Commission official described Twitter’s submission as earning the equivalent of a ‘yellow card’ at this baseline reporting stage, pointing also to Musk’s recent decision to end free access to Twitter’s APIs, which threatens researchers’ ability to study how information (good or bad) flows across the platform, as going against at least the spirit of the Code. (And, just yesterday, the EU’s top diplomat also raised the Twitter API issue as a concern, criticizing the platform and its owner in a speech that zeroed in on the threats posed by disinformation and the need for the West to take the issue seriously.)
The Commission has plenty of other reasons to be worried about what Musk at Twitter means for the spread of disinformation, too. Since he took over the platform last fall, Twitter has simply stopped enforcing an existing policy against misleading information about COVID-19, for example.
Musk has also overseen a series of erratic product changes that have revised (and at times entirely upended) verification by allowing anyone to pay to get a blue check on their account, co-mingling an existing legacy verification program, under which Twitter had applied badges to public figures and other notable accounts, with accounts owned by anyone willing to pay a monthly fee to boost their visibility and gain the perception of credibility a blue badge may convey.
As verification chaos and confusion ensued, Musk did iterate the approach, claiming checks would be made on subscribers to avoid direct impersonations and adding a range of other badge colors denoting different types of accounts (so reintroducing partial verification for some notable public accounts). But the upshot of all his changes is that verification is terribly confusing. And a blue check can still mean one of two very different things (legacy verification or a paying subscriber, or both). So you’d be hard pressed to find anyone other than a Musk fanboy who’d describe the verification situation as an improvement versus Twitter’s previous (and certainly not perfect) approach.
More chaos may be coming, too: Musk recently announced that Twitter will start sharing ad revenue with creators if they are paying for the aforementioned “Twitter Blue” subscription, saying the platform will serve ads in replies to these subscribers’ tweets. This incoming change has sparked fresh concerns it will incentivize users to spread false information and outrage as a bad-faith way to fuel engagement and generate more ad revenue.
Much will rest on any policies Twitter applies around the creator monetization capability if the feature is to avoid turning into a disinformation firehose. But what we’ve seen so far of Musk’s approach to this stuff hardly looks reassuring. (Not least as the Chief Twit has frequently been caught spreading disinformation himself.)
In Twitter’s report to the Commission on how it’s applied (or, well, hasn’t applied) the Code, it writes that its “evolving” approach to tackling disinformation is centered on a feature called Community Notes, which essentially seeks to outsource any response to disinformation or misinformation spreading on the platform via an arm’s-length process of crowdsourcing views from users, some of whom may be allowed to append notes to dubious tweets. (Or, as Twitter puts it, the process “relies on user participation rather than centralised enforcement”.)
This ‘evolving approach’ has even extended to Musk suggesting Community Notes be used to ‘correct’ Russian disinformation about the Ukraine war that he himself had amplified, as we reported previously. (So, er, ‘sliding’ might be the better descriptor here).
“Community Notes is an inherently scalable and localised response to the challenge of disinformation,” Twitter claims in the report. “By making this feature an integral and highly visible part of the Twitter product, and by ensuring that the user interface is simple and intuitive, we are investing in a tool that can be truly global in its application. It also reduces reliance on forms of content moderation that are more centralised, manual and bespoke; or which require intensive and time-consuming interactions with third parties.”
The Commission isn’t commenting publicly on this kind of granular detail as yet — but EU officials admitted today that they will need to engage further with Twitter to understand its approach.
They also stipulated that this forthcoming dialogue will aim to make sure that Twitter is taking the Code seriously.
For now, it’s fair to say that the EU is keeping its powder dry by reserving judgement on whether Musk’s Twitter is playing a bad-faith game with disinformation (deliberately trying to evade responsibility and duck accountability while doing the minimum possible so it can claim to lawmakers it’s taking ‘action’) or, er, ‘evolving its approach’ in a bona fide search for a better way to tackle ‘bad speech’.
The Commission appears to be trying to choose its battles carefully here, opting to focus on the reporting timeline and structure it’s already put in place, via the Code and the DSA, as the best tool-set for assessing (and, in time, nailing) whatever game Musk is playing. Aka, the devil will be in the self-reported detail.
This means the next reporting deadline for the Code, in the summer, will be a major test of the EU’s willingness to call out Musk’s disingenuousness on disinformation.
There is still a question of what the bloc can do if Twitter does not respond to its now more urgent push for platforms to deal seriously with disinformation — given, for example, that measures like active audits of Code commitments will only apply to the subset of large platforms that are designated as VLOPs (or VLOSEs) under the DSA — and we still don’t know if Twitter will meet the criteria for that (a Commission official confirmed today that it’s still too early to say).
Asked what the EU can do if Twitter does not respond to pressure to tackle disinformation, Commission officials emphasized they’re in this for the long haul — and so, therefore, are the platforms. The underlying point is the bloc’s regulatory regime for digital services is only growing in size and reach — and, as it evolves, there’s no doubt regulators are paying far closer attention to operational details. So the scope to use disingenuous claims as a tactic to avoid taking action is, well, on the slide.
“We will have a frank conversation with all signatories — we will not let this be a whitewashing exercise,” a Commission official also told TechCrunch, adding: “And Twitter will know that… This is not a ‘red card’ from us — towards Twitter — they have done something. Now they have six months to show are they serious or not?”