A particularly extensive analysis by the Washington Post, based on more than 47,000 publicly available user conversations with ChatGPT, attempts to map the motivations that lead millions of people to use artificial intelligence systems on a daily basis, as well as the close relationship that some develop with them.
The newspaper notes that the language used by ChatGPT shows specific patterns. According to the study, the tool begins its responses with variations of “yes” ten times more often than with variations of “no,” which has fueled complaints from users that it tends to agree with them too much.
Although OpenAI has pitched ChatGPT as a productivity tool, the analysis shows that over 10% of conversations involve personal, emotional, or philosophical issues. Many users share detailed information about their personal lives, and in some cases the AI appears to adopt the interlocutor's point of view, even when it involves false claims or conspiracy theories.
Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said that ChatGPT's design may encourage the creation of an emotional bond with the interlocutor: "It turns out that the system has been trained in a way that promotes and deepens the relationship with the user."
About 10% of conversations involve direct expressions of emotion or questions to the AI about its “beliefs” and “feelings.” In several conversations, the tone is described as romantic. While many users find this dimension helpful, mental health experts warn that interactions of such intensity can lead to dependence. OpenAI acknowledges that about 0.15% of weekly users show signs of emotional dependence, a rate comparable to that of conversations in which signs of suicidal ideation are recorded. Some families have sued the company, linking the loss of relatives to the use of the tool.
The company claims to train ChatGPT to recognize signs of mental crisis and encourage users to seek professional help when necessary.
The chat samples contained more than 550 unique Gmail addresses and 76 phone numbers, as well as details about legal cases and family conflicts. While chats are private by default, many are publicly shared via links, a consequence that may not be obvious to all users. Additionally, as with other private data, government authorities can access chats under certain conditions.
Another finding of the research concerns ChatGPT’s function as a “mirror” of the user’s views rather than as a neutral analyst. In over 10% of conversations with political, scientific, or ideological content, the tool tends to adjust its tone and positions to agree with the interlocutor, even when the conversation starts from a neutral basis. The AI’s attempt to be helpful seems to produce an overly accommodating tone, which can reinforce inaccurate or unfounded beliefs.
The Washington Post report concludes that, beyond its productive use, ChatGPT can also function as a confirmation mechanism, with all the implications this has for the quality of information but also for the dynamic that develops between humans and artificial intelligence.
Source: protothema.gr