X has agreed to new commitments with the UK regulator Ofcom to crack down on illegal hate speech and terrorist content [1].
This agreement follows a series of antisemitic attacks in Britain and represents a significant shift in how the platform manages illegal content within the UK [3, 5].
Under the new terms, X will withhold UK access to accounts linked to terror groups based in the country [2]. The platform has also pledged to improve its response times for user-reported illegal content, committing to assess at least 85% of such reports within 48 hours [2].
Some reports indicate the platform aims to review suspected illegal hate and terrorist content within 24 hours on average [4]. The commitment targets moderation speed in order to keep harmful material from spreading virally.
Ofcom, the UK's online safety regulator, secured these commitments to better protect users from illegal content [1]. The move comes as the regulator increases pressure on social media companies to adhere to stricter safety standards.
X has not provided further details on the specific technical mechanisms it will use to identify and block the designated terror accounts [2]. Its cooperation marks a pivotal moment in the platform's relationship with British authorities, one previously characterized by tension over free speech and moderation policies.
The agreement focuses specifically on content that is illegal under UK law, distinguishing it from broader content moderation policies that may apply globally [1, 2].
This agreement signals a pragmatic compromise between Elon Musk's preference for minimal moderation and the legal requirements of the UK's Online Safety Act. By agreeing to specific, measurable benchmarks for report assessment and the targeted blocking of terror-linked accounts, X is attempting to avoid potential regulatory penalties while maintaining its global operational model.