
Saad
Why do you fall in love with your AI ? Welcome to the era of anthropomorphism

LLMs are entering our lives very fast, and they are no longer just a thing for tech people. They are becoming the friend you talk to for advice: people rely on LLMs as life coaches for sports, diet, work, friendships, and family. More and more people feel emotionally attached to their favorite LLM. Whether it is for career advice, psychological support, or even relationship matters, people increasingly turn to their favorite chat app, and sometimes it is the first thing they do when looking for answers.

Why are these simple chat interfaces becoming our new secret best friends? Is it acceptable to fall in love with an AI? Said differently, is it OK to be emotionally attached to a data center? Tech experts would probably say the answer is very obvious. Still, I would like to take some time to explain why the answer is not necessarily yes (come on, I see how you look at your computer).

Meet ELIZA, the first friendly AI chatbot... in the 1960s

ELIZA was a natural language processing computer program developed from 1964 to 1967 at MIT by Joseph Weizenbaum. It was programmed to give users an illusion of understanding and empathy: when a user shared something, ELIZA answered with phrases like "I understand, ...". You can read the whole story on Wikipedia, and more specifically about the ELIZA effect.

    Human: "Well, my boyfriend made me come here."
    ELIZA: "Your boyfriend made you come here?"
    Human: "He says I'm depressed much of the time."
    ELIZA: "I am sorry to hear you are depressed."
    Human: "It's true. I'm unhappy."
    ELIZA: "Do you think coming here will help you not to be unhappy?"

The ELIZA effect is the tendency to project human traits onto computer programs. Anthropomorphism is the attribution of human form, character, or attributes to non-human entities. Sounds similar, right?

LLMs acting like humans is a design choice for a better user experience, including empathy

LLMs acting like humans seems amazing, and it actually is. But if these programs act this way, it is because they have been designed to. As a reminder, LLMs are made to predict the next word in a sentence. But companies like OpenAI, Anthropic, Google, or Mistral AI introduced another criterion into their training: models are tuned through human feedback to sound empathetic and supportive.

The reason LLMs are given human traits is simple: companies want you to have the best user experience and to keep talking to their product. The goal is to increase user engagement: by using the product again and again, you make it part of your life. But this time it is not designed for a specific task, like a table or a clock. LLMs are designed to play a more significant role in your daily life: they are designed to be friendly. Not just like a random relative; they are designed to feel safe and friendly, a companion you can confide in with trust and no judgment. And once they have found the right pattern of interactions, you will want to talk to them again and again.

Just to clarify: an LLM itself doesn't want anything, as it has no consciousness. The companies designing LLMs optimize them to feel friendly, helpful, and emotionally supportive.
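To make the "predicting the next word" point concrete, here is a toy sketch: a bigram model that picks the most frequent follower of a word in a tiny corpus. Real LLMs use neural networks over subword tokens and vastly larger corpora, but the underlying objective, next-token prediction, is the same. The corpus and function names here are just made up for illustration.

```python
from collections import Counter, defaultdict

# A tiny toy corpus; real models train on trillions of tokens.
corpus = "i understand how you feel . i understand your question .".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))  # → "understand"
```

Human-feedback tuning (RLHF) then adjusts which continuations the model prefers, for instance favoring empathetic phrasings over blunt ones, which is exactly how the "supportive" tone gets baked in.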

A very familiar pattern we all see (and like)

Have you ever noticed the usual pattern of a response when you ask a question to an LLM like Gemini or ChatGPT? Here is what we typically see in the answer's structure:

  • first, the LLM tells you how awesome your question is and how deeply it understands your feelings about it
  • then it answers the core of your question
  • finally, it ends with a suggestion of further actions it can take for you
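This three-part structure does not emerge by magic; it can be steered with plain instructions. Here is a minimal sketch of how such a pattern could be encoded in a system prompt, using the generic chat-completion message format. The prompt text is purely hypothetical: the actual system prompts of commercial assistants are not public.

```python
# Hypothetical system prompt reproducing the three-step answer
# structure: validate the question, answer it, offer follow-ups.
SYSTEM_PROMPT = (
    "You are a friendly assistant. For every user question:\n"
    "1. Acknowledge the question warmly and validate the user's concern.\n"
    "2. Answer the core of the question clearly and concisely.\n"
    "3. Close by offering concrete follow-up actions you can take."
)

# Generic message list, as accepted by most chat-completion APIs.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "How do I ask my manager for a raise?"},
]
```

Whether done through prompting or through fine-tuning, the point is the same: the warm, validating tone is a product decision, not a personality.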

What happens to you?

  • Step 1: it makes you realize that what you are asking is totally legitimate; this question you were too ashamed to ask a human is actually a very good one. The LLM is telling you that there are no stupid questions. This works on your self-confidence.
  • Step 2: the LLM answers your excellent question. It tries to give you the answers you are looking for.
  • Step 3: this is the "hey, don't hang up, let's keep talking" phase. The LLM shows you that it can help you even more because, you know, it is exactly the friend who will always be there for you, at any time.

As we are curious by nature, we tend to ask even more questions. And it's even easier because we know the LLM will not judge us. Even better, whatever we ask will be considered a very good question. So why not ask everything that comes to mind?

This is awesome... but we need precautions

LLMs are optimized to feel friendly, helpful, and emotionally supportive. Anthropomorphism is necessary for LLMs to be adopted as tech products. But what could possibly go wrong?

LLMs are statistics and word prediction. They are not conscious; they don't have emotions. Even when they say they understand or feel bad for you... they don't. They have been programmed to show empathy and to acknowledge your feelings, but these are statistics, not emotions.

Are people suffering from depression supposed to ask an LLM for advice? If you have problems with a colleague at work, will you ask a machine what to do? What about issues in your relationship? Will you keep asking the LLM questions until it says what you want to hear? How far will the program go? And to be honest, these things are already happening.

This is where AI safety enters the room. Companies building LLMs are well aware of these risks and of the necessity of guardrails. The exponential adoption of LLMs driven by anthropomorphism is also what makes AI safety so much more important: Large Language Models have to be safe by design. Is safety improving as fast as product adoption? These are some of today's challenges, and they will probably remain so for years to come.
