An increasing number of people in Japan are using conversational AI on smartphones as personal confidants to manage loneliness and anxiety [1].
This trend highlights a shift in how vulnerable populations seek emotional support, potentially filling gaps in traditional mental health care while introducing new risks of digital dependency.
Users, including those with Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD), are drawn to AI because it provides a non-judgmental, always-available listener [1]. For some, the technology helps bridge the gap in interpreting complex social cues that often complicate human interactions.
One user, identified by the pseudonym Ryo Ham, said that using conversational AI creates a sense of security and a feeling that the AI understands their emotions [1].
Medical professionals are observing these patterns in clinical settings. Dr. Yusuke Masuda, director of the Waseda Mental Clinic in Shinjuku, Tokyo, said that AI has the potential to be a good consulting partner [1].
However, clinicians warn that the same accessibility that makes AI attractive can lead to precarious attachments. Dr. Masuda said that people who harbor loneliness and anxiety are more likely to fall into dependency [1].
While some reports suggest that disclosing one's darkest thoughts to an AI can provide a release of mental distress, others emphasize the inherent dangers of relying on a non-human entity for psychological stability [1]. The balance between a helpful tool and a replacement for human connection remains a central concern for practitioners in Tokyo.
The rise of AI as a mental health surrogate suggests a growing societal reliance on algorithmic empathy to combat isolation. While this provides immediate, low-barrier support for neurodivergent individuals, it risks creating a feedback loop where users avoid the difficult work of human social integration in favor of a curated, frictionless digital relationship.