Tom Friedman warned that "something bad is going to happen at some point" if artificial intelligence is not properly regulated [1].
The warning comes as AI capabilities accelerate, moving beyond simple data processing toward autonomous decision-making. This shift creates a critical window for governments to establish guardrails before the technology causes systemic harm or geopolitical instability.
Friedman, a columnist for The New York Times, voiced these concerns during an appearance on CNBC's Squawk Box [1]. He characterized the current era as a transition from an "age of information" to an "age of intelligence" [1]. This evolution represents a fundamental change in how humanity interacts with knowledge and technology.
During the discussion, Friedman connected the risks of unregulated AI to existing global volatility. He cited the standoff in the Strait of Hormuz as an example of the precarious nature of modern geopolitics [1]. He warned that the intersection of advanced AI and such high-tension regions could amplify the risk of accidental or intentional escalation.
Friedman said that the need for regulation is urgent because the speed of AI development is outpacing the ability of legal and ethical frameworks to adapt [1]. Without these boundaries, the potential for misuse grows as the tools become more powerful.
He said that the global community must coordinate a response to manage these risks. The goal is to harness the benefits of the intelligence age while mitigating the possibility of a catastrophic event [1].
“Something bad is going to happen at some point”
Friedman's warnings reflect a growing consensus among geopolitical analysts that AI is not merely a software upgrade but a strategic shift. By linking AI to the Strait of Hormuz, he highlights the danger of "algorithmic escalation," in which AI-driven decisions in military or diplomatic contexts could trigger real-world conflicts faster than humans can intervene.