Chinese authorities have restricted the use of OpenClaw AI applications within government agencies and state-run enterprises [1].

The move signals a tightening of state control over artificial intelligence integration within critical infrastructure. By limiting the deployment of specific AI systems, the government aims to prevent potential vulnerabilities that could be exploited by external actors or lead to internal data leaks.

According to reports, the restrictions target the OpenClaw AI system specifically due to potential cybersecurity risks [1]. The directive applies to both government bodies and state-owned enterprises, extending the ban across the public sector [1].

While the directive did not detail the specific nature of the vulnerabilities, the action reflects a broader trend of cautious AI adoption in China. The government has previously emphasized that AI development must align with national security interests and state regulatory frameworks.

State agencies are now required to pivot away from OpenClaw AI to avoid creating cyber vulnerabilities [1]. This shift may lead to an increased reliance on domestically developed, state-approved AI alternatives that meet stricter security certifications.

Officials said the restrictions are necessary to protect the integrity of state data. The move comes as China continues to balance the drive for technological innovation with the requirement for absolute state oversight of information systems.

This restriction highlights the tension between China's ambition to lead in AI and its rigid security requirements. By banning OpenClaw AI in state sectors, the government is prioritizing the mitigation of cyber risks over the speed of AI adoption, likely paving the way for a closed ecosystem of state-sanctioned AI tools.