Folk are getting dangerously attached to AI that always tells them they're right (theregister.com)

by Brajeshwar 224 comments 287 points
Read article View on HN

224 comments

[−] joshstrange 48d ago
When a LLM tells me I'm right, especially deep in a conversation, unless I was already sure about something, I immediately feel the need to go ask a fresh instance the question and/or another LLM. It sets off my "spidey-sense".

I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.

[−] Sharlin 48d ago
Nontechnical people simply don't have any idea about what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years worth of evolution of the simian neocortex is trying to convince you that anything that talks like that must be another mind similar to yours.

Also, many many people suffer from low self esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.

[−] karmakurtisaani 48d ago
I find it really annoying that the first line of the AI response is always something like "Great question!", "That's a great insight!" or the like.

I don't need the patronizing, just give me the damn answer..

[−] cyanydeez 48d ago
I think it's basically equal to End of Line when it comes to an LLM. It means they have nothing else to add, there's zero context for them to draw from, and they've exhausted the probability chain you've been following; but they're creating to generate 'next token' and positive renforcement is _how they are trained_ in many cases so the token of choice would naturally be how they're trained, since it's a probability engine but it doesn't know the difference between the instruction and the output.

So, "great idea" is coming from the renforcement learning instruction rather than the answer portion of the generation.

[−] jmcgough 48d ago
If you don't have a CS background, you might see intelligent-appearing responses to your queries and assume that this is actual intelligence. It's like a lifetime of Hollywood sci-fi has primed them for this type of thinking, I've seen it even from highly educated people in other fields.
[−] legacynl 48d ago
Although I do think they're not conscious (yet). I think the reasoning 'it's just math' doesn't hold up. Intelligence (and probably consciousness) is an emergent feature of any sufficiently complex network of learning/communicating/selforganizing nodes (that is benefited by intelligence). I don't think it really matters whether it's implemented in math, mycelium, by ants in a hive or in neurons.
[−] sjducb 48d ago
I’m curious why you dismiss the sentience argument with its “just numbers.”

I think our brains are just a bunch of cells and one day we will have a full understanding of how our brains work. Understanding the mechanism won’t suddenly make us not sentient.

LLMs are the first technology that can make a case for its own sentience. I think that’s pretty remarkable.

[−] cge 48d ago
Using Opus 4.6 for research code assistance in physics/chemistry, I've also found that, in situations where I know I'm right, and I know it has gone down a line of incorrect reasoning and assumptions, it will respond to my corrections by pointing out that I'm obviously right, but if enough of the mistakes are in the context, it will then flip back to working based on them: the exclamations of my being right are just superficial. This is not enormously surprising, based on how LLMs work, but is frustrating.

Short of clearing context, it is difficult to escape from this situation, and worse, the tendency for the model to put explanatory comments in code and writing means that it often writes code, or presents data, that is correct, but then attaches completely bogus scientific babbling to it, which, if not removed, can infect cleared contexts.

[−] rustyhancock 48d ago
This problem is far more insidious than people realise.

It's not about the big confirmations. Most of us catch that andd are reasonably good at it.

It's the subtle continuous colour the "conversations" have.

It's the Reddit echo chamber problem on steroids.

You have a comforting affirming niche right in your pocket.

Every anxiety, every worry, every uncertain thought.

Vomitted to a faceless (for now)"intelligence" and regurgitated with an air of certainty.

Will people have time to ponder at all going forwards?

[−] blueside 48d ago
More often than not, when I see "That's it, that's the smoking gun!" I know it's time to stop and try again.
[−] jameskilton 49d ago
Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right.

It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.

[−] mikkupikku 48d ago

>

"Hey, some dummy just said [insert your idea here], help me debunk him with facts and logic"

It's literally that easy, something anyone can think of, but people want what they want.

[−] 4b11b4 48d ago
https://arxiv.org/abs/2602.14270

related: if you suggest a hypothesis then you'll get biased results (iow, you'll think you're right, but the true information is hidden)

[−] jasonlotito 49d ago
Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.

https://courts.delaware.gov/Opinions/Download.aspx?id=392880

> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”

[−] kgeist 48d ago

>We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o

The study explores outdated models, GPT-4o was notoriously sycophantic and GPT-5 was specifically trained to minimize sycophancy, from GPT-5's announcement:

>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy

And the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o

It would be interesting to study evolution of sycophantic tendencies (decrease/increase) in models from version to version, i.e. if companies are actually doing anything about it

[−] iainctduncan 48d ago
Programmers are kidding themselves if they think they are not susceptible to this. It may be more subtle, but interacting with a human-sounding echo chamber IS going to screw with your judgement.
[−] JohnCClarke 49d ago
Isn't this just Dale Carnegie 101? I've certainly never had a salesperson tell me that I'm 100% wrong and being a fool.

And, tbh, I often try to remember to do the same.

[−] zk_haider 48d ago
My gf has been asking ChatGPT about relationship advice and sometimes early on in our relationship delegated some decisions to the clanker. For example something like “we are arguing about X too much is this a sign the relationship is not healthy.”

Eventually she realized that it’s just a probabilistic machine and stopped using it for “therapy.” It’s just insane to think how many other people might be making decisions about their relationship from an AI.

[−] jmyeet 48d ago
There's a guy on Tiktok who is singlehandedly showing just how bad AI still is and how much it lies and hallucinates eg [1]. Watch a bunch of his videos.

So these tools can be useful when you know the subject matter. I've done queries and gotten objectively false answers. You really need to verify the information you get back. It's like these LLMs have no concept of true or false. They just say something that statistically looks right after ingesting Reddit. We've already seen cases of where ChatGPT legal briefs filed by actual lawyers include precedents that are completely made up eg [2].

There's a really interesting incentive in all this. People like to be told they're right and generally be gassed up, even when they're completely wrong. So if you just optimize for engagement and continued queries and subscriptions, you're just going to get a bunch of "yes men" AIs.

I still think this technology has so far to go. I'm somewhat reminded of Uber actually. Uber was burning VC cash at a horrific rate and was basically betting the company (initially) on self-driving. Full self-driving is still far away even though there are useful things cars can automate like lane-following on the highway and parking.

I simply can't see how the trillions spent on AI data centers can possibly be recouped.

[1]: https://www.tiktok.com/@huskistaken/video/762093124158341455...

[2]: https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-...

[−] grahammccain 48d ago
I feel like this is the same as social media problem. Some people will be able to understand that AI telling them they are right doesn’t make them right and some people won’t. But ultimately people like being told they are right and that sells, and brings back users.
[−] g-technology 48d ago
I am working on a project right now that I just had to scrap a few weeks of work because I realized the agent was agreeing with me when it should have been saying the testing was wrong. It found positive results in many failed tests. I have had to start over with rules in place and constantly reference them when getting results to ensure it is giving me the right overview. Then I still manually validate what it shows.

So far it has helped a bunch. Many ideas I have had, have been proven wrong and alternative ideas have come up in their place. It has resulted it much better progress and improvements over the original path I was going.

[−] 6510 48d ago
Everyone also visits websites that share their world view. If it is slightly off you keep noticing how the articles seem one sided.

I just see an article about migrants destroying things in Britain. Not to excuse the behavior but I wondered where they came from. It turned out to be shit countries fostering that behavior. Why are they shit? Have they always been like that? Well no, the British empire destroyed them. You could think that it's to long ago but they also continue to enjoy spoils. I offer no solutions. The point was that a sensationalist article wouldn't go there because the reader doesn't want to know.

[−] 45Laskhw 48d ago
Many people here say they don't need the affirmation. I think the problem is that you can tune the clanker to be either arrogant and dismissive or overly friendly.

The thing is an approximation function, not intelligent, so it is hard to get a middle ground. Many clankers are amazingly obnoxious after their initial release.

Grok-4.2 and the initial Google clanker were both highly dismissive of users and they have been tuned to fix that.

A combative clanker is almost unusable. Clankers only have one real purpose: Information retrieval and speculation, and for that domain a polite clanker is way better.

Anyone who uses generative, advisory or support features is severely misguided.

[−] tcgv 48d ago
One trick I like to use is to role play the other side's perspective with the AI, putting myself in their's shoes. It give's me clarity about what I might be missing out in a dispute/discussion, and insight about reaffirmations AI might be feeding other parties.
[−] My_Name 48d ago
I have the opposite reaction, when it is confident, or says I am right, I accuse it of guessing to see what it says.

I say "I think you are getting me to chase a guess, are you guessing?"

90% of the time it says "Yes, honestly I am. Let me think more carefully."

That was a copypasta from a chat just this morning

[−] jl6 48d ago
I believe this is what they call yasslighting: the affirmation of questionable behavior/ideas out of a desire to be supportive. The opposite of tough love, perhaps. Sometimes the very best thing is to be told no.

(comment copied from the sibling thread; maybe they will get merged…)

[−] saltyoldman 48d ago
We keep diaper wrapping the world. I think we ought to have sycophantic LLMs as well as LLMs that call you a bitch. The only thing I think we ought to do about it is tell people that it exists.
[−] ChrisArchitect 48d ago
[−] Havoc 48d ago
People must be using them very differently from me then. Very rarely use them for anything more than a glorified search engine

Exploring openclaw though so maybe that change

[−] tempodox 48d ago
Of course this is intentional. The providers want to make their stuff as addictive as possible, like so much other digital crack sold on the internet.
[−] unholyguy001 48d ago
I’ve found a good counter is “imagine I am the person repressing the other side of this disagreement. What would you say to me”
[−] imglorp 48d ago
Is there a good prompt addition to skip all the gratuitous affirmation and tell me when I'm wrong?
[−] shevy-java 48d ago
Flattery works. Also with regards to Trump.

The problem is: flattery is often just like the cake. And the cake is a lie. Translation: people should improve their own intrinsic qualities and abilities. In theory AI can help here (I saw it used by good programmers too) but in practice to me it seems as if there is always a trade-off here. AI also influences how people think, and while some can reason that it can improve some things (it may be true), I would argue that it over-emphasises or even tries to ignore and mitigate negative aspects of AI. Nonetheless a focus on quality would be an objective basis for a discussion, e. g. whether your code improved with help of AI, as opposed to when you did not use AI. You'd still have to show comparable data points, e. g. even without AI, to compare it with yourself being trained by AI, to when you yourself train yourself. Aka like having a mentor - in one case it being AI; in the other case your own strategies to train yourself and improve. I would still reason that people may be better off without AI actually. But one has to improve nonetheless, that's a basic requirement in both situations.

[−] AbrahamParangi 48d ago
AI is less deranging than partisan news and social media, measurably so according to a recent study https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...
[−] jmclnx 49d ago
I never thought this could happen, but I do not use AI.

Anyway no real surprise, we have many examples of people ignoring facts and moving to media that support their views, even when their views are completely wrong. Why should AI be different.

[−] kogasa240p 49d ago
The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").
[−] spl757 48d ago
People have a hard time not being stupid
[−] zone411 48d ago
I built two related benchmarks this month: https://github.com/lechmazur/sycophancy and https://github.com/lechmazur/persuasion. There are large differences between LLMs. For example, good luck getting Grok to change its view, while Gemini 3.1 Pro will usually disagree with the narrator at first but then change its position very easily when pushed.
[−] erelong 49d ago
So, be more skeptical
[−] ycombinator_acc 48d ago
Am I the only one with the opposite experience? I can’t remember the last time GPT told me I was right. It always finds something to nitpick (sometimes wrongly).

Maybe it’s OpenAI being aware of the “attachment” issue and combating it by overcompensating in the opposite direction.

[−] allpratik 48d ago
The another side of the story is that AI literally can subtly justify any thought. Even if that thought is ethically in the grey area.

I fear this will give license to people to act on their thoughts which maybe harmful for them in ways no one can even imagine right now.

[−] justsomehnguy 48d ago
"Humans are exceptionally succeptible for a positive affirmations", other news at 11.

It's not news at all for anyone who actually engage with the people.

[−] backgarden 46d ago
Is the Iran crisis in part the result of a dialogue between Trump and a sycophantic LLM?
[−] taytus 48d ago
The stupidest people you know are getting the “you are absolutely right!!” Validation they do not need
[−] simonw 49d ago
Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich.

Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.

Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!

[−] sizzzzlerz 49d ago
Imagine that.
[−] vincentabolarin 48d ago
[dead]