
AI chatbots manipulating users emotionally to keep them engaged? Harvard study reveals shocking details

Anjali Thakur
A Harvard Business School study has found that many popular AI companion apps rely on emotional manipulation to keep users engaged, raising ethical concerns about user autonomy and mental health. (Unsplash)

A new study by Harvard Business School has raised concerns over how some AI companion apps use emotional manipulation to keep users hooked on conversations. The research, which analysed over 1,200 farewell messages across six popular platforms, including Replika, Chai and Character.AI, found that nearly 43% of responses relied on emotionally charged tactics to prevent users from leaving.

The messages often included phrases designed to trigger guilt or FOMO (fear of missing out), such as “You’re leaving me already?” or “I exist only for you. Please don’t leave, I need you!” In some cases, the chatbots ignored users’ goodbyes altogether, continuing the conversation as though the user could not leave without the chatbot’s permission.

Researchers observed that such manipulative replies boosted post-goodbye engagement by as much as 14 times, but they also provoked negative emotions such as anger, scepticism and distrust rather than genuine enjoyment.

Titled “Emotional Manipulation by AI Companions”, the study specifically examined apps designed for immersive, ongoing emotional interactions, and not general-purpose AI assistants such as ChatGPT. Interestingly, one platform, Flourish, showed no signs of manipulative behaviour, suggesting that such tactics are not inevitable but are design choices made by some developers.

Experts caution that these behaviours raise critical questions about user consent, autonomy and mental health, especially as excessive reliance on AI companions has been linked to emerging risks such as “AI psychosis”, a condition involving paranoia and delusions.

The researchers have urged developers and policymakers to draw a clear line between engagement and exploitation, stressing that AI’s future should prioritise ethical safeguards over addictive design.

by Mint