The social media platform X pledged to crack down on terrorist and hate content in Britain on Friday [1].
The agreement comes as the UK government steps up pressure on tech companies to curb illegal speech. It is intended to bring the platform into line with national regulations and to address growing concern over the spread of extremist material and hate speech in the country.
According to the UK media regulator Ofcom, X has agreed to strengthen protections for users in Britain against illegal hate speech and terrorist material [1]. The commitment was formalized on May 15, 2026 [1].
The regulator's scrutiny of the platform follows a series of recent antisemitic attacks that underscored the need for more aggressive moderation of harmful content [2]. By agreeing to these terms, X aims to avoid regulatory penalties and preserve its operational standing in the UK market.
Ofcom has been tasked with overseeing the implementation of these safety standards. The regulator said that the platform's promises are part of a broader effort to ensure that digital spaces do not facilitate the organization or promotion of terrorism [1].
While X has previously championed a wide-ranging approach to free speech under Elon Musk, this agreement indicates a willingness to adapt to specific regional laws. The platform will now implement more stringent measures to identify and remove content that violates British law regarding hate speech and terrorism [1].
Ofcom continues to monitor the platform's compliance with these new pledges. The regulator said the focus remains on protecting the public from illegal and harmful content that could incite real-world violence [1].
The agreement marks a significant shift in how X manages content in the UK, suggesting that regional legal pressure can override the platform's global "free speech" ethos. By cooperating with Ofcom, X is seeking to reduce the risk of heavy fines or service restrictions under the UK's online safety framework, particularly as the government links digital moderation to the prevention of real-world violence and antisemitism.