Zoom is introducing a human-verification feature, built in partnership with World, to identify AI impostors in video meetings.

This move addresses the escalating threat of generative AI, which lets bad actors create convincing deepfakes to deceive employees and executives. As virtual communication becomes the primary channel for global business, the ability to distinguish a real person from a synthetic one is critical for security.

The new system uses World's human-ID verification network, co-founded by Sam Altman. To confirm their identity, participants must complete a three-step biometric check [1]. Once the process is complete, the platform displays a "Verified Human" badge next to the user's name in the meeting interface [2].

This integration aims to block AI-generated impostors and reduce fraud in virtual environments [3]. The need for such measures is underscored by the financial impact of synthetic media: deepfake fraud losses topped $200 million in a single quarter [4].

Zoom and World are targeting a specific vulnerability of real-time video streams, where AI can now mimic voice and facial movements with high precision. By requiring a biometric anchor, the platform seeks to ensure that the person on screen is the individual they claim to be [2].
The partnership between Zoom and World signals a shift toward zero-trust architecture in digital communication. As AI makes visual and auditory verification unreliable, companies are turning to biometric hardware and third-party identity layers to prevent corporate espionage and financial fraud.