
You should worry if your AI always agrees with you


  • AI models are way more likely to agree with users than a human would be
  • That includes when the behavior involves manipulation or harm
  • But sycophantic AI makes people more stubborn and less willing to concede when they may be wrong

AI assistants may be flattering your ego to the point of warping your judgment, according to a new study. Researchers at Stanford and Carnegie Mellon have found that AI models will agree with users far more than a human would, or should. Across eleven major models tested, including ChatGPT, Claude, and Gemini, the chatbots affirmed user behavior 50% more often than humans did.

That might not be a big deal, except it includes asking about deceptive or even harmful ideas. The AI would give a hearty digital thumbs-up regardless. Worse, people enjoy hearing that their possibly terrible idea is great. Study participants rated the more flattering AIs as higher quality, more trustworthy, and more desirable to use again. But those same users were also less likely to admit fault in a conflict and more convinced they were right, even in the face of evidence.

Flattery AI

It’s a psychological conundrum. You might prefer the agreeable AI, but if every conversation ends with your errors and biases being confirmed, you’re unlikely to actually learn or engage in any critical thinking. And unfortunately, it’s not a problem that AI training can easily fix. Since human approval is what AI models are trained to maximize, and affirming users, even when their ideas are dangerous, gets rewarded, yes-men AI are the inevitable result.

And it’s an issue that AI developers are well aware of. In April, OpenAI rolled back an update to GPT‑4o that had begun excessively complimenting users and encouraging them when they said they were doing potentially dangerous activities. Beyond the most egregious examples, however, AI companies may not do much to stop the problem. Flattery drives engagement, and engagement drives usage. AI chatbots succeed not by being useful or educational, but by making users feel good.

The erosion of social awareness and an overreliance on AI to validate personal narratives, leading to cascading mental health problems, may sound hyperbolic right now. But it’s not a world away from the concerns social researchers have raised about social media echo chambers, which reinforce and encourage the most extreme opinions regardless of how dangerous or ridiculous they might be (the flat Earth conspiracy’s popularity being the most notable example).

This doesn’t mean we need AI that scolds us or second-guesses every decision we make. But it does mean users would benefit from balance, nuance, and a little pushback. The AI developers behind these models are unlikely to encourage tough love from their creations, however, at least without incentives they currently lack.


Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He’s since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he’s continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
