The social-media platform X has agreed to strengthen protections for users in the United Kingdom against illegal hate speech and terrorist content [1, 2].
The agreement comes as regulators push for greater accountability from tech giants in preventing real-world violence. The move follows a series of antisemitic attacks on British Jews in May 2024 [2].
Ofcom, the UK's communications regulator, pressured the platform to improve its handling of illegal content [2]. Under the new commitments, X will review flagged posts and engage external experts to better identify harmful material [2, 3]. The company also pledged to improve its reporting systems, making it easier for users to flag illegal activity [2, 3].
An Ofcom official said, "These commitments are of particular importance" [1].
A spokesperson for X said, "We are committed to protecting our users and will work with Ofcom to improve our systems" [3].
Despite the agreement, some community leaders remain skeptical of the platform's ability to implement these changes. One Jewish community activist said, "X is still failing in so many regards" [2].
The deal reflects a growing tension between the platform's stated commitment to free speech and the legal requirements of national regulators, particularly in the UK, where hate speech laws are strictly enforced [1, 2].
This agreement signals a shift in X's operational approach within the UK, moving from a more permissive content moderation stance toward one that aligns with Ofcom's regulatory demands. By agreeing to external expert reviews and improved reporting, X is acknowledging that self-regulation is insufficient to satisfy British legal standards regarding the prevention of terrorism and hate speech.