AI-powered smart glasses are helping blind and partially sighted people navigate their surroundings using real-time obstacle detection and spoken cues.
This technology represents a significant shift toward autonomy for blind and partially sighted people, reducing reliance on human guides and traditional canes. By integrating sensors and artificial intelligence, the devices allow users to engage in activities that previously required constant assistance.
The glasses are equipped with a suite of hardware, including cameras, distance sensors, microphones, and speakers [1, 2, 3]. These components work together to identify environmental hazards and deliver auditory navigation instructions to the wearer. The goal of the technology is to provide greater safety and independence in daily movement [2, 4].
One user of the technology is Tilly Dowler, a runner with Stargardt disease who has about 10% useful vision [1]. She has used the glasses to support her training for the London Marathon in the United Kingdom [4].
While the tools offer increased mobility, the technology is not without controversy. Experts have warned that promising tools, such as those developed by Meta, carry significant privacy concerns [4]. Other reports focus primarily on the independence users gain without addressing these data and surveillance risks [2].
The devices are now in use around the world, helping wearers manage complex environments [1, 4]. By translating visual data into spoken words, the AI lets users identify objects and navigate city streets with greater confidence.
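The reports do not describe how any particular product works internally, but the basic idea above, turning camera and distance-sensor readings into short spoken cues, can be sketched in a few lines of Python. Everything in this sketch (the Detection record, the angle and range thresholds, the wording of the cues) is a hypothetical illustration, not the design of Meta's glasses or any other real device.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One obstacle reported by a hypothetical on-device vision model."""
    label: str          # e.g. "bicycle", "kerb", "person"
    distance_m: float   # estimated range from the distance sensor
    bearing_deg: float  # angle relative to straight ahead; negative = left

def describe(d: Detection) -> str:
    """Turn a single detection into a short spoken cue."""
    if d.bearing_deg < -15:
        direction = "to your left"
    elif d.bearing_deg > 15:
        direction = "to your right"
    else:
        direction = "ahead"
    return f"{d.label} {direction}, about {d.distance_m:.1f} metres"

def spoken_cues(detections: list[Detection], max_range_m: float = 5.0) -> list[str]:
    """Announce only nearby obstacles, closest first."""
    nearby = [d for d in detections if d.distance_m <= max_range_m]
    nearby.sort(key=lambda d: d.distance_m)
    return [describe(d) for d in nearby]

if __name__ == "__main__":
    frame = [
        Detection("bicycle", 2.4, -30.0),
        Detection("person", 1.1, 5.0),
        Detection("car", 12.0, 40.0),  # beyond range, so not announced
    ]
    for cue in spoken_cues(frame):
        print(cue)  # a real device would hand this text to a speech engine
```

In an actual product the detections would come from a camera-plus-depth pipeline and the cues would be spoken through the built-in speakers; the sketch only illustrates the mapping from sensed obstacles to spoken guidance.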
The integration of AI into wearable assistive tech marks a transition from passive aids to active environmental interpretation. While the immediate benefit is increased physical autonomy for users like Dowler, the reliance on camera-based AI introduces a tension between accessibility and the privacy rights of both the user and the public.





