The social media platform X has pledged to crack down on illegal hate speech and terrorist content for users in the United Kingdom.
The agreement comes as the platform faces intense scrutiny over its moderation policies, and is intended to address ongoing safety concerns, including a rise in antisemitic attacks targeting users in the country.
Ofcom, the UK media regulator, announced the development on Friday, May 15, 2026 [1]. The regulator said that X has agreed to specific commitments aimed at reducing the prevalence of illegal material on the service. These measures focus specifically on content that promotes terrorism or constitutes illegal hate speech under British law.
The move follows a period of tension between the platform, owned by Elon Musk, and European regulators. While X has often championed a broad interpretation of free speech, UK law requires a more restrictive approach to content deemed illegal. The regulator said the goal is to better protect the public from the spread of terrorist material.
Ofcom did not provide the specific technical details of how X will identify and remove the content. However, the regulator said that the commitments are part of a broader effort to ensure digital platforms remain accountable for the safety of their users. The announcement was made in London, marking a formal shift in how the platform interacts with British oversight bodies.
Industry observers note that this agreement may serve as a blueprint for how X handles similar regulatory pressures in other jurisdictions. By agreeing to these terms, X avoids potential legal escalations that could disrupt its operations within the UK market.
This agreement signals a pragmatic shift for X, suggesting that the platform is willing to implement region-specific moderation to avoid regulatory penalties. By aligning with Ofcom, X acknowledges that the UK's legal definition of illegal speech overrides its global preference for minimal intervention. That concession could create a fragmented user experience, where content available in one country is blocked in another to satisfy local laws.