International efforts to establish a binding treaty regulating lethal autonomous weapons have stalled as major powers resist limiting their military capabilities [1].

The lack of a legal framework creates a governance vacuum while AI technology integrates into active combat zones. This gap increases the risk of uncontrolled escalation and challenges the traditional principles of international humanitarian law.

Discussions have taken place for several years at meetings of the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva [1]. However, CIGI analysts say that prospects for guardrails have dimmed as major powers pull back from negotiations [1]. The inability to reach consensus is compounded by the speed of AI development, which continues to outpace diplomatic efforts [2].

This regulatory void is evident in current conflicts. The war in Ukraine has lasted four years [3] and resulted in tens of thousands of casualties [3]. The conflict has become a testing ground for these technologies, with Ukraine recently opening its battlefield AI data to allies [4]. According to Military Times, this move underscores the absence of a global governance framework [4].

Critics argue that removing human oversight from lethal decisions is a fundamental ethical breach. Marie‑des‑Neiges Ruffo de Calabre has said that the use of AI-controlled lethal autonomous weapons violates the principles of a just war [2].

While the U.S. and other nations continue to develop autonomous capabilities, the international community remains divided. The current stalemate suggests that national security interests are being prioritized over the creation of a universal legal standard to govern AI in warfare [1, 2].
The failure to secure a binding treaty suggests that the international community is currently unable to constrain the AI arms race. As battlefield data is shared and integrated into military AI systems, the normative gap between technological capability and legal restriction widens, likely shifting the focus from global treaties to smaller, bilateral agreements between allied nations.