UK-based Mavis Tech is developing AI-powered smart glasses to help blind and partially sighted people navigate environments with greater independence [1].

This technology provides a critical safety layer for people with little or no sight, potentially reducing their reliance on human guides or traditional aids during high-risk activities such as urban running.

The wearable devices integrate cameras, distance sensors, microphones, and speakers to assist the user [1]. These components allow the glasses to read text aloud and detect obstacles in the user's path [2]. The system also provides turn-by-turn audio guidance to help users move through everyday settings [2].

Recent applications of the technology include field testing with runners preparing for the London Marathon [3]. One such athlete, Tilly Dowler, has approximately 10% useful vision [4]. The glasses allow runners to maintain their pace while receiving real-time information about their surroundings, a necessity for safety in a crowded city environment [3].

Mavis Tech is not the only firm entering this space. Other manufacturers, including Meta and Oakley, are also developing smart glasses with similar capabilities [1]. These devices aim to solve the fundamental challenge of accessibility by converting visual data into audible cues [2].

Despite its utility, the technology faces scrutiny. Experts have warned that devices such as Meta's smart glasses raise significant privacy concerns [5]. These concerns often center on the use of cameras in public spaces, though proponents argue that the independence the glasses give visually impaired users outweighs these risks [2].

The shift toward AI-integrated wearables marks a transition from passive assistive tools, such as white canes, to active environmental interpretation. By applying real-time computer vision, these devices move beyond simple obstacle detection to provide contextual understanding of a user's surroundings. This development suggests a future in which augmented reality serves not as entertainment, but as a primary accessibility interface bridging gaps in sensory perception.