Elon Musk and the EU are in a dispute over the Twitter owner’s plan to use more volunteers and artificial intelligence to help moderate the social media platform, as the company responds to strict new rules designed to police online content.
According to four people familiar with talks between Musk, Twitter executives and regulators in Brussels, the billionaire has been told to hire more human moderators and fact-checkers to review posts.
The demand complicates Musk’s efforts to reorganise the lossmaking business he acquired for $44bn in October. The new owner has slashed more than half of Twitter’s 7,500 staff, including the entire trust and safety teams in some offices, while seeking cheaper methods to monitor tweets.
Twitter currently uses a mix of human moderation and AI technology to detect and review harmful material, in line with other social media platforms. However, it does not employ fact-checkers, unlike larger rival Meta, which owns Facebook and Instagram.
Twitter has also been using volunteer moderators for a feature called Community Notes to tackle the deluge of misinformation on the platform.
Musk also told EU commissioner Thierry Breton last January that the company would lean further on its AI processes, according to people with direct knowledge of the talks.
Those people said that Breton advised that, while it was up to Twitter to come up with the best way to moderate the site, he expected the company to hire people to comply with the Digital Services Act.
The DSA is landmark legislation that will force Big Tech groups to police their platforms more aggressively for illegal content. Major platforms, including Twitter, will have to be fully compliant by September this year at the latest. Those in breach face fines of up to 6 per cent of global turnover. Musk told Breton that hiring would take time but that staff would be in place to comply with the DSA this year.
Following their January meeting, Musk tweeted: “Good meeting with @ThierryBreton regarding EU DSA. The goals of transparency, accountability & accuracy of information are aligned with ours. @CommunityNotes will be transformational for the latter.”
Further talks have been held between Twitter and EU regulators regarding its moderation plans in recent weeks. At those, officials acknowledged that the Community Notes model could weed out a large proportion of misleading information, much as volunteer editors achieve similar results on Wikipedia.
But concerns have been raised that the site does not have hundreds of thousands of volunteer editors, as Wikipedia does, and that Twitter already has a poor record on non-English language content moderation, an issue that plagues other social networks.
“Community Notes is not a terrible idea but Musk needs to prove that it works,” said a person with direct knowledge of the talks.
A person familiar with the Community Notes project said the feature is only one part of Twitter’s wider disinformation moderation approach.
“Platforms should be under no illusion that cutting costs risks cutting corners in an area that has taken years to develop and refine,” said Adam Hadley, director of Tech Against Terrorism, a UN-backed organisation that helps platforms police extremist content online. “We are worried about the signal Twitter’s latest move sends to the rest of the industry.”
The European Commission said: “We believe that ensuring sufficient staff is necessary for a platform to respond effectively to the challenges of content moderation, which are particularly complex in the field of hate speech. We expect platforms to ensure the appropriate resources to deliver on their commitments.”
Twitter did not immediately respond to requests for comment.