South Africa's Minister of Home Affairs, Leon Schreiber, suspended senior officials from the department after fabricated, AI-generated references appeared in a government policy document [1].
The incident highlights the risks of integrating generative artificial intelligence into official statecraft without rigorous human oversight. Because such documents form the basis of national law and immigration policy, "hallucinations," in which an AI invents plausible but false data, threaten the legal integrity of the state's administrative processes.
The suspensions took effect immediately last week [2]. The fabricated citations were discovered in the Revised White Paper on Citizenship, Immigration and Refugee Protection [1], the document intended to shape the legal framework governing how the country manages its borders and residency requirements.
Reports on the number of personnel affected vary: some sources say two senior officials were suspended [1], while others put the figure at four [3]. The department described the inclusion of fake research as a serious breach of integrity [1].
The use of AI in drafting the white paper produced references that did not exist in any real-world academic or legal database [4]. Because no one verified the citations before submission, a high-level policy document went forward with foundational errors, a lapse Minister Schreiber called unacceptable for official government work [4].
The case is one of the first high-profile instances in South Africa in which the failure of a large language model led to direct disciplinary action against senior civil servants [5]. The department has not yet said whether the officials deliberately used AI to bypass research or used the tools as drafting aids without verifying the output [1].
The event underscores a growing global tension between the efficiency of generative AI and the demand for factual accuracy in governance. When governments rely on AI for policy drafting, hallucinations are not merely technical glitches but legal liabilities that can invalidate legislation or derail administrative processes. The suspensions signal that South African authorities will hold human officials accountable for the output of the tools they use, setting a precedent that AI-assisted work requires full human verification.