Anthropic co-founder Jack Clark said AI systems could autonomously train their own successors by 2028 [1].
This potential for recursive self-improvement represents a fundamental shift in how artificial intelligence evolves. If systems can build themselves without human intervention, the pace of capability gains could outstrip current safety frameworks, creating an "intelligence explosion."
In an interview with Axios co-founder Mike Allen, Clark said there is a more than 60% chance [2] that AI will be capable of autonomous self-training by 2028 [3]. At that point, an AI could effectively design and implement the next generation of its own architecture.
Clark said this trajectory introduces significant new dangers. An AI able to rapidly iterate on its own intelligence could create unprecedented cyber-threats and bio-risks, and he said such a leap would likely cause severe economic disruption.
Because of these risks, Clark said that governments, private companies, and researchers must develop new plans to mitigate the fallout. Current safety protocols are designed for human-led development, not for systems that can rewrite their own code to become more intelligent.
The warning comes as the industry grapples with the scale of compute and data required for the next generation of models. While humans currently guide the training process, the shift toward autonomous improvement would remove the human bottleneck from the development cycle.
The shift toward recursive self-improvement suggests that the timeline for artificial general intelligence (AGI) may be shorter than previously estimated. If AI can optimize its own training, the traditional "human-in-the-loop" safety model becomes obsolete, necessitating automated safety guardrails that can keep pace with an exponentially accelerating intelligence.