X has pledged to act more quickly to remove illegal hate and terror content from its platform in the United Kingdom [1, 2, 3].

The agreement with the UK online safety regulator, Ofcom, comes as the government increases pressure on social media companies to curb harmful content [1, 2]. The pledge carries particular weight in the wake of recent crimes targeting Jewish communities in the UK, which underscored the need for swifter intervention on digital platforms [1, 3].

The commitments are designed to ensure the platform meets the safety standards the regulator established in 2024 [1]. The move follows a period of scrutiny over how the platform, formerly known as Twitter, handles illegal material that could incite violence or spread terrorism [2].

While X has agreed to these new measures, their effectiveness remains a point of contention: some activists say the platform is still failing in many respects despite the formal agreement with Ofcom [3].

Ofcom continues to monitor the platform's compliance with UK law. The regulator is tasked with ensuring that tech companies do not allow illegal content to proliferate, a mandate that has led to friction between the UK government and the platform's leadership [2].

X has not provided specific internal metrics on the speed of removals, but the pledge represents a formal commitment to the UK's regulatory framework [1, 2].

This agreement signals a tightening of regulatory oversight for X in the UK, moving away from a self-regulatory model toward one enforced by Ofcom. It reflects a broader global trend where governments are treating online hate speech and terror content as urgent public safety threats rather than mere moderation issues.