Researchers at the MIT Media Lab are developing a wearable device that translates subvocal speech into text and digital commands [1].

This technology represents a shift toward unobtrusive computing, allowing users to interact with AI and digital systems without speaking aloud or using a screen. By capturing internal speech, the system aims to augment human cognitive abilities and improve the delivery of health-related tools [2].

Dr. Pattie Maes, head of the Fluid Interfaces Group at the MIT Media Lab, is leading the effort to advance these wearable systems [1]. The project focuses on the AlterEgo wearable, a device designed to enable a form of silent communication [2]. The system works by detecting the neuromuscular signals associated with subvocal speech—the internal dialogue people have without moving their vocal cords—and converting those signals into actionable data [2].
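This description implies a three-stage pipeline: capture the neuromuscular signal, extract features from it, and classify those features into words or commands. The sketch below illustrates that general pattern in Python on synthetic data; the four-word vocabulary, four-channel electrode layout, feature set, and SVM classifier are illustrative assumptions, not the published AlterEgo implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical vocabulary for a small silent-command set; the real
# AlterEgo vocabulary, electrode layout, and model are not detailed
# in the article, so everything below is an illustrative stand-in.
COMMANDS = ["yes", "no", "next", "back"]

rng = np.random.default_rng(seed=0)

def extract_features(emg_window: np.ndarray) -> np.ndarray:
    """Reduce a window of raw neuromuscular samples (rows = time,
    columns = electrode channels) to simple per-channel statistics."""
    return np.concatenate([
        emg_window.mean(axis=0),                           # mean activation
        emg_window.std(axis=0),                            # variability
        np.abs(np.diff(emg_window, axis=0)).mean(axis=0),  # roughness
    ])

def synthetic_window(label_idx: int) -> np.ndarray:
    """Fake a 100-sample, 4-channel recording whose activation pattern
    depends on which command was silently articulated."""
    window = rng.normal(0.0, 1.0, size=(100, 4))
    window[:, label_idx % 4] += 2.0  # crude class-dependent signature
    return window

# Build a labeled training set of feature vectors.
X, y = [], []
for idx, command in enumerate(COMMANDS):
    for _ in range(30):
        X.append(extract_features(synthetic_window(idx)))
        y.append(command)

classifier = SVC().fit(np.array(X), y)

# Classify a new, unseen window into a silent command.
features = extract_features(synthetic_window(2))
print(classifier.predict([features])[0])  # expected: "next"
```

A production system would replace the synthetic windows with real electrode recordings and a far richer vocabulary, but the overall capture-extract-classify structure would be similar.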

The development takes place in Cambridge, Massachusetts, where the team is exploring how seamless technology can help users become better versions of themselves [1]. The AlterEgo system is intended to provide a discreet way to access information or control devices, potentially removing the friction associated with traditional voice assistants or handheld keyboards [2].

Maes said the goal is to develop tools that integrate into the user's life without being disruptive [1]. The research emphasizes the intersection of human biology and computer interfaces, seeking to create a symbiotic relationship between the wearer and the machine [2].

The transition from external voice commands to subvocal recognition could fundamentally change how humans interact with artificial intelligence. By removing the need for audible speech, this technology could enable private, high-speed data exchange in public spaces and offer a potential communication pathway for individuals with speech impairments.