European Union negotiators have reached a provisional agreement to ban AI systems that generate sexually explicit images without the subject's consent [1].
This move addresses the growing proliferation of non-consensual deepfakes and the potential for AI to be used for digital harassment. By establishing a legal prohibition, the EU seeks to protect individuals from the creation and distribution of synthetic sexual imagery that violates personal privacy and dignity [1, 2].
The agreement focuses specifically on AI applications designed to create explicit content. Under the new rules, systems capable of generating such imagery without the explicit permission of the person depicted will be prohibited [1, 3]. This regulatory step targets the software and platforms that facilitate the production of non-consensual sexual content, a problem whose spread has historically outpaced existing privacy laws.
According to the agreement, the ban is scheduled to take effect at the end of 2026 [1, 3]. This timeline provides a window for developers and service providers to adjust their systems or cease operations within the EU market to comply with the new standards.
The decision comes as part of a broader effort by the European Union to regulate artificial intelligence. While the EU has implemented general AI frameworks, this specific measure targets the harmful application of generative AI in the context of sexual violence and harassment [1, 2].
Negotiators said that the goal is to prevent the misuse of technology to create harmful content [1, 2]. The provisional nature of the agreement means it must still undergo formal approval processes before it becomes binding law across member states [1].
This agreement represents a shift from general AI ethics guidelines to enforceable legal prohibitions. By targeting the tools themselves rather than just the users, the EU is attempting to dismantle the infrastructure used to create non-consensual deepfakes. This may set a global precedent for how other jurisdictions handle the intersection of generative AI and digital bodily autonomy.