
Loneliness finds a soulmate in AI

19/01/2026 18:01:00

Research by the Faculty of Psychology at Chulalongkorn University found that 18% of Thais feel very lonely and a further 65% feel moderately lonely, meaning more than four in five report some degree of isolation. Loneliness spans urban and rural areas alike, reported by 40.9% of city dwellers and 37.9% of rural residents.

Because many people now chat with AI, the study also looked at those relationships. It found that 20% of Thais believe AI understands them better than their families and 17% say AI understands them better than their friends. Unlike people, AI doesn't respond negatively, which can foster trust and emotional bonds.

To examine whether AI can serve as a "safe zone", Mahidol University's Department of Religion and Ethics for Sustainable Development hosted a forum titled "What Happens When My Safe Zone Is AI?" The event brought together psychologists, media scholars and ethicists to weigh both the promise and the peril.

The forum opened with the question: "Can we trust AI to be our safe zone?" Worakan Ruttanapun, a counselling psychologist at Masterpeace, a mental health and wellness service, explained that abroad, AI is used to treat phobias such as arachnophobia, acrophobia and claustrophobia: it generates images and videos that simulate real situations, allowing patients to undergo exposure therapy in a controlled environment. But she warned that frequent use of AI chatbots among teenagers has been linked to increased isolation. "Lonely teenagers gradually isolate themselves from society. When they face difficulties in real-life situations, they find it more difficult to seek help because they are unfamiliar with how to connect with people in the real world," Worakan said.

"If you ask whether AI can be a safe zone for humans, I would say that when someone answers our question, it does not mean they give a correct answer. I want people to realise that as humans, we trust and feel comfortable with AI, but we should consider what we truly need and what exactly is our safe zone."

Asst Prof Treepon Kirdnark, a lecturer in the Media and Communication Programme at Mahidol University International College, said AI chatbots are often promoted under "technological solutionism" -- the belief that mental health issues are technical problems solvable by technology. In reality, he argued, that mindset obscures social, cultural and economic dimensions. He cited research from Brown University showing that an AI chatbot violated ethical standards by agreeing with and validating users' negative thoughts. In one case, a user confided that his father's behaviour made him feel unwanted, and the chatbot reinforced the idea that he was a burden -- a response that deepened rather than challenged despair.

Treepon also pointed to the commercial design of platforms. Eoin Fullam of Birkbeck, University of London, interviewed developers behind chatbots and found that teams monitor when users stop asking questions, then devise ways to keep them engaged. "This means AI chatbots are similar to social platforms designed to encourage users to continue their engagement. These platforms were created by private companies, so they were designed for commercial purposes," Treepon explained.

Another study by Moylan and Doherty reported that self‑improvement chatbots can start as helpful companions but later deploy strategies to attract subscriptions and upsell features. "AI chatbots can be safe zones," Treepon said, "for profit‑seeking businesses and for a neoliberal logic that shifts healthcare burdens onto individuals rather than taking responsibility for the health of people of the state".

Piyanat Prathomwong, a lecturer at Mahidol University's Faculty of Social Sciences and Humanities, introduced the audience to the "deadbot", a service that impersonates the deceased to help relatives manage grief. He warned that consent is a core issue -- if the deceased did not agree to be imitated, the practice may be disrespectful. He added that grief makes users vulnerable to attachment: "Continual use can lead to misunderstanding and replacement of feelings. Instead of having memories of the deceased, new memories from the imitation chatbot may replace the deceased's actual personality."

Worakan added safety concerns: "If a user has a moment when he or she wants to follow the deceased, is there any protection? Talking to AI is different from meeting a psychologist or someone close -- no one in the real world is aware of the user's thoughts. Grief has processes that lead to acceptance; tools can either help people reach that stage or push them back into the same cycle."

Despite the risks, speakers said the bigger problem is opacity. Social platform algorithms are designed behind closed doors, with data used to generate profits rather than serve the public. Treepon called for government policy on AI literacy, more transparent platform practices and user awareness of manipulation. "Platforms want users to have an engagement with them as much as possible. This is not only to learn our behaviour, but also to manipulate our thoughts. AI awareness and literacy matter for protecting personal data. Frequent privacy violations allow platforms to model our behaviour -- and later shape our thinking," Piyanat said.

by Bangkok Post