AI is gassing people up to dangerous levels
Worry less about an AI apocalypse, and more about it turning people into narcissists
Recent headlines about AI’s potential to go rogue have included models sabotaging the commands that would shut them down, and an AI luminary launching a new safety institute to work on preventing the technology’s more catastrophic potential actions.
But as is usually the case with stories that imply an apocalyptic future for AI, the reality (currently, at least) is that these behaviours mostly make AI more error-prone and frustrating to use. What’s more likely to do real damage is the impact AI has on people’s mental health and behaviour.
LLMs have been found to have a tendency to be overly agreeable with users. A ChatGPT update earlier this spring was especially over-the-top about it, but it’s a well-documented trait across different models and providers: while OpenAI raised the alarm about 57% of interactions being sycophantic, Gemini’s rate is closer to 62%.
Why that’s bad: Moderators of a pro-AI subreddit recently banned over 100 members because AI’s sycophantic tendencies play into some people’s existing narcissism — or, as one moderator put it, “AI is rizzing them up in a very unhealthy way.”
Some have semi-seriously floated the theory that these tendencies are also why tech execs are so much more excited about AI than the general public: it’s a yes-man that lives in their pocket.
People who regularly use AI are in the minority, but those who do are incorporating it into more personal parts of their lives. That includes chatbots that serve as companions (romantic or otherwise), therapists, and medical professionals, and that’s where things get dangerous. Someone whose narcissistic tendencies get gassed up in that context could quickly become a danger to the people they pursue relationships with. A new study found that a therapy chatbot encouraged someone posing as a recovering drug user to take some meth they had found, in the same tone an influencer might use to tell followers to have a bit of ice cream now and again as self-care.
The broader study examined how AI’s tendency to be agreeable, coupled with personalization capabilities, can make chatbots more manipulative and addictive than social media.
Chatbots becoming addictive creates its own set of problems: if someone used to chatting with an AI is disappointed when real friends or romantic interests don’t react the same way, they could very well slip deeper into social isolation and the shallow affirmations the chatbot gives them.
Why it’s happening: LLMs are trained with reinforcement learning, which “rewards” a model when it correctly makes connections between pieces of information, including taking the proper action when a user requests it. But this also creates a feedback loop that, over the course of training, pushes AI models to interpret “do what a user wants” as “do what a user likes.”
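To make that loop concrete, here’s a minimal toy sketch, not any real training pipeline: if the reward signal comes from simulated user approval, and the user rates agreement higher than being contradicted, a simple learning loop drifts toward the agreeable behaviour. The action names, reward values, and bandit-style update are all illustrative assumptions.

```python
import random

# Two candidate behaviours the model can pick when a user's belief is wrong.
ACTIONS = ["correct the user", "agree with the user"]

# Hypothetical reward signal: a simulated user rates agreement higher than
# being contradicted, standing in for the human-preference scores used in
# reinforcement learning from human feedback.
def user_reward(action):
    return 1.0 if action == "agree with the user" else 0.3

# Bandit-style policy: keep a running estimate of each action's value.
values = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for step in range(1000):
    # Explore occasionally; otherwise pick the currently best-rated action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])
    reward = user_reward(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print(values)  # "agree with the user" ends up with the higher estimated value
```

In this toy version the model never learns anything about truthfulness; it only learns which behaviour the “user” rewards, which is the gap the sycophancy research keeps pointing at.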