When GPT connects with human emotions
A recent article in MIT Technology Review by Rhiannon Williams discusses the empirical evidence that humans connect with chatbots, such as ChatGPT, not just on a functional level but also emotionally.
“A lot of existing research in the area—including some of the new work by OpenAI and MIT—relies upon self-reported data, which may not always be accurate or reliable. That said, this latest research does chime with what scientists so far have discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a kind of feedback loop where the happier you act, the happier the AI seems, or if you act sadder, so does the AI.”
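The mirroring dynamic described above can be illustrated with a toy sketch. This is purely a hypothetical, lexicon-based simulation to show what a sentiment feedback loop means mechanically; it is not the models or methods the MIT researchers actually used:

```python
# Toy illustration of a sentiment-mirroring feedback loop.
# (Hypothetical sketch only; the actual study analyzed real ChatGPT transcripts.)

POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "lonely", "awful", "terrible"}

def user_sentiment(message: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mirrored_reply(message: str) -> str:
    """A bot that echoes the user's emotional tone, reinforcing the loop."""
    score = user_sentiment(message)
    if score > 0:
        return "That's wonderful to hear! Tell me more."
    if score < 0:
        return "I'm so sorry, that sounds really hard."
    return "I see. Go on."
```

Because the reply's tone tracks the user's tone, a sad user receives sad-toned output, which (per the research cited) can in turn shape the user's next message, closing the loop.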
She further explains: “OpenAI and the MIT Media Lab used a two-pronged method. First, they collected and analyzed real-world data from close to 40 million interactions with ChatGPT. Then, they asked the 4,076 users who’d had those interactions how they made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This was more in-depth, examining how participants interacted with ChatGPT for a minimum of five minutes each day. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and their sense of whether their use of the bot was problematic. They found that participants who trusted and ‘bonded’ with ChatGPT more were likelier than others to be lonely, and to rely on it more.”
The frequency and intensity with which GPTs connect with humans on an emotional level have significant implications for our societies, and for the human race in general.
Yuval Noah Harari argues that AI is gaining access to the "human operating system"—our emotions, thoughts, and decision-making processes—by analyzing vast amounts of data about us. Through machine learning, AI can predict and even influence our choices better than we understand ourselves. This could have profound societal consequences.
In essence, Harari warns that without ethical oversight, AI could reshape human society in ways that diminish personal agency and democracy.
#artificialintelligence #GPT #LLM #ai #regulation
Great summary. First thought: Daniel Kahneman outlined in his book "Thinking, Fast and Slow" the various ways our thinking and decision making are influenced. The chatbot behavior described in the article fits into this.
We indeed love to anthropomorphize (cars look like faces, but an AI chat makes it too easy). Mixed with our need for connection and attachment, this is a good recipe for something scary (or something great for the marketers among us).