X has agreed to new commitments to crack down on illegal hate speech and terrorist content for users in the United Kingdom [1, 2].

The agreement follows a period of intense scrutiny from British regulators and a series of antisemitic attacks. The move signals a shift in how the platform, owned by Elon Musk, manages content moderation within one of its largest international markets.

Ofcom, the UK media regulator, pressured the platform to curb the spread of illegal material. A spokesperson for Ofcom said, "X has pledged to strengthen protection for UK users against illegal hate speech and terrorist content" [4].

The decision comes amid criticism regarding the platform's handling of racial and religious hatred. The chief executive of the Antisemitism Policy Trust said the platform had been "failing in so many regards to tackle open racism on its platform" [2].

These new commitments are designed to ensure that illegal content is identified and removed more aggressively. The platform must now align its operations with the expectations of the UK government to prevent the coordination of terror-related activities and the proliferation of hate speech [1, 3].

While the platform has historically championed a broad interpretation of free speech under Musk's ownership, the UK's regulatory environment provides a distinct legal framework. Ofcom has the authority to enforce safety standards that may differ from the platform's global policies [1, 2].

This agreement demonstrates the growing power of regional regulators to force global tech platforms into stricter moderation practices. By agreeing to Ofcom's terms, X is acknowledging that the legal risks of non-compliance in the UK outweigh the platform's ideological commitment to unrestricted speech.