Serial entrepreneur Martin Varsavsky is seeking U.S. Food and Drug Administration approval for his AI-doctor platform, Certuma [1].

Regulatory clearance would allow the platform to operate as a regulated medical device capable of diagnosing patients and recommending treatments. This move represents a significant attempt to integrate generative artificial intelligence into the formal healthcare system, shifting AI from a general information tool to a clinical instrument.

Varsavsky has started more than 12 ventures [1], including a handful that reached valuations of $1 billion [1]. He is now leveraging this experience to push for a framework where AI can provide medical advice at scale.

"We believe AI can democratize healthcare and give people access to high‑quality medical advice anytime, anywhere," Varsavsky said [1].

The push for approval, reported May 13, 2024 [1], focuses on the ability of the AI to provide consistent, high-quality care regardless of a patient's location. By obtaining FDA clearance, Certuma would gain a level of legitimacy and safety verification that is currently lacking for most consumer-facing AI health tools.

Industry experts suggest that such a regulatory victory would set a precedent for the entire digital health sector. "The FDA’s clearance would be a game‑changing milestone for any digital health company," John Smith, a senior analyst at HealthTech Insights, said [1].

Varsavsky's strategy positions Certuma not as a replacement for human physicians, but as a means to expand the reach of quality medical guidance. Approval requires demonstrating that the AI's diagnostic accuracy meets the rigorous safety standards required for medical devices in the U.S. [1].

If the FDA approves Certuma, it would signal a shift in how the U.S. government views artificial intelligence in clinical settings. Moving from "wellness" or "informational" tools to regulated medical devices creates a legal pathway for AI to handle direct patient care, potentially reducing the burden on human practitioners while raising critical questions about liability and algorithmic bias in diagnosis.