AI-powered smart glasses are helping visually impaired individuals navigate their environments through real-time audio guidance and obstacle detection.

This technology represents a shift toward greater autonomy for people with sight loss. By converting visual data into spoken instructions, these devices reduce the reliance on human guides or traditional canes in complex urban settings.

The glasses, developed by the UK-based company Mavis Tech, integrate cameras, distance sensors, microphones, and speakers [2]. The system provides users with turn-by-turn guidance and text-to-speech capabilities, allowing them to identify objects and read signs while moving [2].

In India, the All India Institute of Medical Sciences (AIIMS) in New Delhi distributed these AI-enabled glasses to 40 visually impaired people in early 2024 [1]. This initiative aimed to provide the recipients with a tool to increase their safety and independence during daily activities [1].

Individual users have already applied the technology to high-endurance activities. Tilly Dowler, a runner who has about 10% useful vision due to Stargardt disease, used the glasses to assist with training for the London Marathon in 2023 [3]. The device allowed her to navigate training routes more effectively by providing audio alerts about her surroundings [3].

The hardware functions as a wearable computer that continuously scans the environment. When the sensors detect a potential hazard or a specific landmark, the AI processes the image and delivers a spoken alert through the integrated speakers [2]. This immediate feedback loop is designed to prevent collisions and give users confidence in unfamiliar territory [2].


The integration of AI-driven computer vision into wearable hardware transforms accessibility from passive assistance to active navigation. By deploying these tools in both clinical settings like AIIMS and athletic contexts like the London Marathon, developers are demonstrating that assistive AI can scale across different levels of visual impairment and activity intensity.