During a recent appearance on CNBC’s "Squawk Box," Joe Lonsdale said U.S. government regulation of artificial intelligence should be as small and narrow as possible [1].
This position comes as the U.S. government considers implementing a national review of AI technologies before they are released to the public [1, 2]. Because AI is a primary driver of current technological advancement, the scope of these regulations could determine the pace of domestic innovation and the global competitiveness of American firms.
Lonsdale, a founding partner of the venture firm 8VC and a co-founder of Palantir Technologies, said regulatory oversight should be limited to targeted cases [1, 2]. He said a broad national review process could create bottlenecks for developers and slow the deployment of new tools [1, 2].
According to Lonsdale, overly broad regulation risks hindering economic growth [1, 2]. By focusing on specific, high-risk applications rather than a blanket review of all AI technology, the government could mitigate dangers without stifling the industry's momentum [1, 2].
Lonsdale's comments reflect a growing tension between the need for public safety and the desire for rapid technological scaling. The debate centers on whether the government can accurately predict the risks of emerging AI without having a comprehensive review process in place before public release [1, 2].
While the government weighs the benefits of a national review, venture capitalists and tech founders continue to advocate for a light-touch regulatory environment. Lonsdale said narrow targeting is the most effective way to balance security with the necessity of innovation [1, 2].
“Regulation of artificial intelligence should be as small and narrow as possible,” Lonsdale said [1].
The push for 'narrow' regulation highlights a strategic conflict between the U.S. government's desire for systemic safety and the tech industry's need for speed. If the government adopts a broad national review, it may establish a precedent for pre-market approval similar to pharmaceutical regulations, potentially slowing the AI development cycle in the U.S. compared to regions with fewer restrictions.