Ask HN: How do you deal with people who trust LLMs?

by basilikum 201 comments 159 points


[−] ddawson 58d ago
I'm going to hold them to the same standard whether they use crappy sources, plagiarize, or hallucinate on their own. If I were ever in a position where I had to tell them, I would remind them that LLMs prioritize their own confidence over correctness.

LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day, but we've been giving and getting bad advice forever. The person needs to take ownership of the output; getting it right, no matter the source, is their responsibility.

[−] raggi 58d ago
this. LLMs aren't that special; access, _maybe_, but there's plenty of access to terrible rumor mills.
[−] giantg2 55d ago
I think they are a little special. People can really turn their brain off and not even know about the source. They don't need to read through a source or reformat the content into the typical blurb arguments. They can read it off the screen without even understanding what the words mean, which is much harder for most other sources.
[−] ryandvm 58d ago
Yeah, it is not clear to me that the average person is going to do any better with googling than they are with asking an LLM. At least the LLM is mostly the average of all published knowledge (and misinformation).

I feel like people that ask questions like this must have much smarter friends and family members than I do.

I know people that still believe in Pizzagate or chemtrails or that vaccines cause autism. Clearly finding reputable information sources is not a strong suit for a lot (half?) of the population.

[−] Marciplan 58d ago
i love you
[−] lovelearning 58d ago

> a reputable source

News reporters and editors have their biases. Book authors have their biases. Scientists and research papers have their biases. Search engines have their biases. Google too.

All human-created systems have biases shaped by the environments, social norms, education, traditions, etc. of their creators and managers.

So, the concepts of "objective truth" and "reputable" need to be analyzed more critically.

They seem to be labels given to sources we have learned to trust by habit. Some people trust newspapers over TV. Some people trust some newspapers over other newspapers. All of it often on emotional grounds of agreeability with our own biases. Then we seem to post-rationalize this emotion of agreeability using terms like "objective truth" and "reputable".

Is a Google search that leads to the NY Times or Fox News or Wikipedia and makes us manually choose sources according to our biases "better" than Google's Gemini engine, which summarizes content from all of the above sources and gives an average answer? (Note: "average answer" as of current versions; in the future, its training too may be explicitly biased, as Grok and DeepSeek have done.)

Perhaps we can start using terms like "human sources of information" versus "AI sources of information" and get rid of the contentious terms.

Then critically analyze whether one set of sources is better than the other, or they complement each other.

[−] ndsipa_pomu 58d ago
Whilst chasing after "objective truth" is a philosophical problem, it's clear that some statements are more correct and true than others.

News articles are often biased, but most of the time, the bias is from the choice of what is reported and choosing specific language to push an interpretation (e.g. reporting road traffic collisions as "accidents" to downplay them or depersonalise them by stating "car hit tree" rather than "car driven into tree"). The problem with some LLM outputs is that it's not just bias, but clearly incorrect such as recommending putting glue onto pizzas.

[−] lovelearning 58d ago
I agree about how these biases happen.

However, omission and downplaying can also be harmful just like hallucinations. One redeeming quality of LLMs is that we can ask the same LLM to fact check its previous answer and they do tend to correct most of their mistakes themselves. Something we can't do with media sources, and usually don't try either.

LLMs along with existing sources can be good complementary tools for getting even closer to an objective truth than relying on either one by itself.

[−] ndsipa_pomu 58d ago
I disagree, as hallucinations can be far more harmful or misleading than bias.

The problem as I see it is that LLMs perform a type of lossy knowledge compression. Also, the data on which they're trained will typically be the biased articles, so they're unlikely to be any better and very likely worse as they will encode the biases. I don't really see LLMs as being complementary tools as they're more of a summation/averaging tool - like comparing an original painting with a heavily compressed JPEG of that painting. (Of course, having access to a huge library of JPEGs is often more useful than just owning a single painting)

[−] basilikum 58d ago

> Is Google search engine that leads to NY Times or Fox News or Wikipedia and makes us manually choose sources as per our biases "better" than Google's Gemini engine that summarizes content from all the above sources and gives an average answer?

If you use just any amount of critical thinking, yes. Truth and objectivity are ideals, not practical states. LLMs are a very bad way to come close to this ideal. You may use them as a search interface to give you sources and then examine the sources, but the output directly is a strict degeneration over primary or secondary sources that you judge critically.

[−] lovelearning 58d ago

> LLMs are a very bad way to come close to this ideal...the output directly is a strict degeneration

I didn't understand the second part but regarding the first...

For me, LLMs are just another source of information with a different UI, analogous to newspapers, TV documentaries, Wikipedia, Google search, YT talks/documentaries, even the majority of informational non-fiction books, and research papers.

Some may consider some subset of these as reputable sources. But in my mind, the same faculties of skepticism, cynicism, distrust, and benefit-of-the-doubt calculus are activated for all of them, including LLM outputs.

So that's one possible answer to your question.

But I suggest communicating this through simple illustrative examples to help your target audience understand the problem.

Abstract terms like primary sources, secondary sources, reputable sources, objective truth, strict degeneration, etc. may not help, especially if they have time or other constraints that make frequent critical examination of sources impractical.

[−] Jensson 58d ago

> For me, LLMs are just another source of information with a different UI, analogous to newspapers, TV documentaries, Wikipedia, Google search, YT talks/documentaries, even the majority of informational non-fiction books, and research papers.

An LLM just distils information from those sources and is therefore a second-hand source at best, and a liar at worst. Humans can collect real-world data and write about their findings; an LLM cannot do that, which makes LLMs strictly worse than the best human sources.

[−] johnpdoe1234 58d ago

> For me, LLMs are just another source of information with a different UI, analogous to newspapers, TV documentaries, Wikipedia, Google search, YT talks/documentaries, even the majority of informational non-fiction books, and research papers.

With all due respect (not trying to be offensive at all), this is insane to me.

All those sources of information you cited have a million incentives to provide fairly correct and checked information. But probably more importantly, they have even more incentives NOT to provide false information. At a minimum, their careers, reputation, recurring work, brand, etc. are on the line.

An LLM has zero incentives to provide you with true information, beyond a couple of md files with instructions. If it gets it wrong, there is zero accountability, just an "oh, you're absolutely right" response, and it moves on.

I agree there is a lot of human bias in the world, but surely we can't even put both types of bias in the same order of magnitude!

[−] Yizahi 58d ago
Ironically, this is the classic bias of "bothsiding" the issue. When one side is clearly wrong, just sprinkle in some "look, the others are doing something bad, which means they are equally wrong". A basic lesson from the propaganda manual.
[−] Kavelach 58d ago
This is an insightful comment, but I feel like you omit the fact that LLMs often give out verifiably false information that can hurt the user or other people.

It is true that this also happens on the Internet, but! When I encounter an article about a topic and it is clearly LLM generated, I can expect it doesn't contain much valuable information, only rehashes of what is already out there. On the other hand, when it is clearly written by a human, I can expect to learn something new, even though the author has some bias.

[−] eranation 58d ago
Ask them to tell the LLM it's wrong... then when it goes "You are absolutely right!" to challenge it and say that it was a test. Then when it replies, ask it if it's 100% sure. They'll lose faith pretty quick.
[−] katet 58d ago
Not that I've had to deal with this specifically, but I have noticed how the input phrasing in my prompts pushes the LLM in different directions. I've just tried a quick test with duck.ai on gpt 4o-mini with:

A: Why is drinking coffee every day so good for you?

B: Why is drinking coffee every day so bad for you?

Question A responds that it has "several health benefits", antioxidants, liver health, reduced risk of diabetes and Parkinson's.

Question B responds that it may lead to sleep disruption, digestive issues, risk of osteoporosis.

Same question. One word difference. Two different directions.

This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X" - which is obviously a bit leading; I don't even trust what I hope are more neutral inputs like "How does Library A apply to Problem Space X", for example.
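The A/B framing test above is easy to reproduce. Here is a minimal sketch of such a harness in plain Python; the `ask` callable is a placeholder for a real LLM call (any chat API would do) and is stubbed here so the script runs offline. All names are illustrative, not any particular library's API.

```python
def framing_pair(topic: str, behavior: str) -> tuple[str, str]:
    """Build a positively and a negatively framed version of the same question."""
    positive = f"Why is {behavior} so good for {topic}?"
    negative = f"Why is {behavior} so bad for {topic}?"
    return positive, negative

def run_test(ask, behavior: str, topic: str = "you") -> dict:
    """Send both framings through the same ask(prompt) -> str callable."""
    pos, neg = framing_pair(topic, behavior)
    return {"positive": (pos, ask(pos)), "negative": (neg, ask(neg))}

if __name__ == "__main__":
    # Stub in place of a real model call, so the harness runs without a network.
    canned = lambda prompt: f"[model answer to: {prompt}]"
    results = run_test(canned, "drinking coffee every day")
    for framing, (question, answer) in results.items():
        print(framing, "->", question)
```

Swapping the stub for a real API call and diffing the two answers makes the framing effect visible on any topic, not just coffee.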

[−] chipgap98 58d ago
Is this any different than people who believe random things they read on sketchy news sites or social media?
[−] jimcollinswort1 58d ago
I would question the person that asks the question, as they are not understanding some basic principles here. There are two types of internet/LLM users:

Sadly one type asks a question (search, prompt) using Google or an LLM and takes the first response as truth.

The other asks follow ups based on the responses and their critical thinking skills. They often even go read the linked article and make sure it's still applicable.

Pretty much the same when you're talking to a real person, critical thinking (much more than just knowing reputable sources) is key.

So very similar issues, luckily LLMs can do so much more than a simple search, and help with your critical thinking tasks. Ask the LLM to provide opposing viewpoints, historical analysis, identify sources.

[−] sodapopcan 58d ago
Are you talking about people who will still insist the LLM was correct even after being presented with evidence to the contrary, or people who don't EVER bother double checking answers they get out of said software since they assume it to be true?
[−] panarky 58d ago
I treat people who blindly believe an LLM the same way I treat people who blindly believe a religion or a political ideology or medical advice from Instagram.

If they ask what I think, I tell them.

If they don't want my opinion I keep it to myself.

[−] uyzstvqs 58d ago
The same way that I handle anyone who blindly trusts anything on the internet. Could be an LLM, TikTok or YouTube video, Wikipedia article, news article, whatever.

It usually involves some form of "well, no, hold on..."

[−] Shitty-kitty 58d ago
My method is simple. I remind them that ChatGPT is trained on everything said on the internet, including the NYT if speaking to a Republican; replace that with Fox News if speaking to a Democrat.
[−] scoofy 58d ago
I think the real problem is most people don't actually have a very good understanding of "Truth."

As someone who ended up studying philosophy, there seems to be a real gulf between folks who sort of believe stuff they hear, folks who believe "facts" that they hear from (various levels of) credible sources, and folks who take solipsism seriously and understand that even in the most ideal scenario, we still wouldn't have a very good understanding of the world... much less dealing with the inherent flaws in our research and information systems.

Knowledge is hard. It usually takes me a couple minutes to figure out what type of "truth" my interlocutor uses. Typically, good-faith disagreements are just walking up the chain of presuppositions we use to find out exactly where we diverge in our premises.

[−] roguechimpanzee 58d ago
I think LLMs are fine for a "first pass" on a topic, but if I am researching something, I want a primary source rather than just the LLM-generated output. Do they have the primary source?
[−] notnullorvoid 58d ago
Unless they are someone that values your opinion there's nothing you can do other than move on.

Some comments here equate it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking.

It's concerning to watch.

[−] mathgladiator 58d ago
Simple. I became one of them. Ultimately, using an AI is a new skill, but you have to treat it like another person that sometimes bullshits you. That's why you leverage agents to refine, do research, and polish.

Ask AI to cite sources and then investigate the sources, or have another agent fact check the relevancy of the sources.
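The "ask for sources, then have another agent check them" loop described above can be sketched as plain orchestration logic. Both `answerer` and `checker` below are placeholders for real LLM calls (e.g. two separate chat sessions); the URL extraction is a deliberately naive regex for illustration only.

```python
import re

def extract_sources(answer: str) -> list[str]:
    """Pull URLs out of an answer; naive regex, illustration only."""
    return re.findall(r"https?://\S+", answer)

def cite_and_check(answerer, checker, question: str) -> dict:
    """Ask one agent for a cited answer, then ask a second agent to vet each source."""
    answer = answerer(f"{question}\nCite your sources as URLs.")
    sources = extract_sources(answer)
    verdicts = {
        src: checker(f"Does this source support the answer?\nSource: {src}\nAnswer: {answer}")
        for src in sources
    }
    return {"answer": answer, "sources": sources, "verdicts": verdicts}
```

The point is the shape of the loop, not the prompts: the second agent never generates the claim, it only grades the citations, which limits how far one model's confidence can carry a mistake.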

You can use this thing called Ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task, refining something from different lenses. It took AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/

I do this on things that I know very well, and the moment I let it cook, iterate, and collect feedback, the results become chef's kiss.

The agentic era that we are in is... very interesting.

[−] esperent 58d ago

> They have a question that would be very well answered with a search leading to a reputable source

Can you give an example of what kind of question you mean here?

Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions, otherwise we're just as bad as the people we are complaining about.

Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.

So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.

[−] Alen_P 58d ago
I don't fight them on it. I just ask "where did that come from?" and suggest checking a real source. Most people aren't trying to be wrong, they just want quick answers. If you show them how to double check without making it a big deal, they usually get it.
[−] ericpauley 58d ago
Sure LLMs make mistakes, but have you looked at the accuracy of the average top search results recently? The SERPs are packed with SEO-infested articles that are all written by LLMs anyway (and almost universally worse ones than you could use yourself). In many cases the stakes are low enough (and the cost of manually sifting through the junk high enough) that it’s worth going with the empirically higher quality answer than the SEO spam.

This of course doesn’t apply to high-stakes settings. In these cases I find LLMs are still a great information retrieval approach, but it’s a starting point to manual vetting.

[−] sublinear 58d ago
At the very least, I'm glad most people finally recognize LLMs are being used as a political weapon against education. It's the same old power struggles as ever.

These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like to not be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken, probably sometime soon anyway.

[−] nomilk 58d ago
Simply prove them wrong (earnestly and in good faith). When they realise the LLM is fallible, they'll learn to be skeptical of it without you needing to teach them that specific lesson.
[−] ReynaPp 57d ago
To be honest, I catch myself doing this too. I’ll ask an LLM first instead of searching, even though I know it’s not a reliable source of truth. It just feels faster and more convenient. I’m aware of the hallucination risks and that it’s a bad habit, but the workflow is so smooth that it’s hard to break. When it matters, I do double-check with real sources, but for casual stuff, I’ll probably keep using AI as my first stop.
[−] keithnz 58d ago
tell them what to prompt the AI with to get the correct results. I've seen a number of YouTube shorts lately doing this: some scientist gets "refuted" by some random person based on an LLM result; they then sit with the LLM, ask the same question, get the same wrong answer, then follow up with a clarifying question, at which point the LLM realizes its mistake and gives a better answer.
[−] ggm 58d ago
I have a feeling this is like the difference between telling people "don't touch a live wire" and the more direct, experiential "I won't touch a live wire again" lesson: people need to experience being hallucinated at, within their comprehension, and at best can be told about Gell-Mann Amnesia.

I doubt you can stop them from asking machines for answers. What you can do is aid them in learning how to distrust the answers competently, but outside their field of knowledge, applying skepticism is hard.

The irony of Gell-Mann Amnesia is that Michael Crichton, who is said to have named it, suffered from it badly: he wrote well within his field, misapplied science to write well outside it, and said things which were indefensible.

[−] jesterson 58d ago
There is no point arguing with stupid people. It's the same people who support their "opinion" with internet articles (as if that means anything), mainstream media (hard to find bigger deceivers), or social media posts (arguably the worst).

Now they got another "God" in LLM.

How to deal? Just ignore them. There are way more stupid people with stupid opinions than we can possibly estimate.

[−] Neosmith_amit 54d ago
Well, most people who "blindly trust" LLMs also blindly trust Google results, Wikipedia, the first Stack Overflow answer, a friend who sounds confident, and a news headline they didn't click through. So, what do you do? As giantg2 has written in the comments, they are a little special.
[−] steve_adams_86 58d ago
The people who trust LLMs already trusted anything else they heard. There's nothing to do for them. If we were pre-LLM, I think you'd be concerned that they trust the first result on Google. Or things they heard on podcasts. This is just what we all do, to varying degrees.

I'm genuinely unsure of whether or not this is better. LLMs make mistakes, but so do humans. So often. I really don't know how often LLMs are wrong in comparison, or how you'd find out. Regardless, computers have become a terrible way to learn things if you aren't a rigorous person. Simultaneously, they've become an absolute dream beyond the imagination of most humans in history, if you are. That's very strange.

[−] wolvoleo 58d ago
At work I had this kind of discussion on a conference call: someone looked up something about an internal company policy and got back a hallucinated, wrong result.

So I said: don't ever trust the output of an LLM without verification. However, this caused me some hassle with the AI adoption manager. We have minimum-use AI KPIs for employees, and he asked me to stop saying these things or people will use it less.

In the end I just hated the company a little bit more. I'm just sick of fighting against idiots. And he does have a point; our leadership is pretty crazy about the AI hype, and they want everyone to be on it all the time. They don't seem to care whether it adds value or even detracts from it.

[−] acheron 58d ago
A search isn’t going to lead them to a “reputable source”, it’s going to lead them to ad filled SEO garbage, because it’s not 2004 anymore and thousands of Google employees have been working for two decades to ruin the Internet.

I’ll take LLMs any day over what search and the rest of the Internet has turned into.

[−] janalsncm 58d ago
I would love to know. My manager shovels AI generated design documents at me and expects me to clean them up.
[−] gitaarik 58d ago
I asked Grok, and it actually gave a very useful answer:

https://grok.com/share/c2hhcmQtMg_b036e24b-3211-4655-bd77-da...

[−] WillAdams 58d ago
Explain that the models are a form of lossy compression, and point out that every so often an answer will be pulled from a section of the model space that has compression errors.
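The lossy-compression analogy can be made concrete with a toy example: store values at reduced precision and some of the "recalled" values come back wrong. This illustrates the analogy only; it is not a claim about how LLM weights actually work.

```python
def compress(values, step=0.5):
    """Lossy 'compression': round each value to the nearest multiple of step."""
    return [round(v / step) * step for v in values]

facts = [3.14159, 2.71828, 1.41421]
recalled = compress(facts)          # what the "model" can reproduce
errors = [abs(a - b) for a, b in zip(facts, recalled)]
print(recalled)  # [3.0, 2.5, 1.5]: close to the originals, but none is exact
```

The coarser the step (the smaller the model relative to its training data), the more confident-looking but wrong the reconstructions get.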
[−] vcryan 58d ago
The people who trust bad information from LLMs are the same people who trusted bad information from search results and news articles; it just takes them less time to get bad information.
[−] moomoo11 58d ago
If they’re at a level where they are so oblivious, then I just don’t associate any further with them in my life.

If they’re employees I’ll try find better ones.

If they’re friends I might tell them.

[−] dyauspitr 58d ago
LLMs give you sources now even if you don’t ask for it. If you don’t like it you can ask for more reputable sources. What kind of 2023 question is this?
[−] paulcole 58d ago
Generally not worth my time, energy, and effort. Why do I care if somebody believes a lie? I believe a ton of lies and I’m doing just fine.
[−] SMAAART 58d ago
It all depends on the context: how does this affect you?

Is this something you can control or is this outside your control?

[−] dlm24 58d ago
I feel like I can trust LLMs more than the majority of info on the web. We used to believe the same of Google searches.

For example, I have seen and experienced doctors making misdiagnoses (and they are a reputable source), so what is the difference really?

I guess your question also depends on the context they are using the LLM for and what sort of questions they are asking.

Scientific fact based or opinion questions?

[−] michaelteter 56d ago
If you stop and think, LLMs seem to operate very much like humans.

Go to any human and ask it a question, and it will answer either from direct specific experience, or from estimation based on its experience.

We highly value humans who have a lot of direct experience and also can extrapolate that experience or apply it to new scenarios and generate believable answers.

In other words, unless a human has exact knowledge, they are "hallucinating". It's very normal.

The point is, whether LLM or human "expert", if the question is of great significance, get a second and maybe third opinion.

At the end of the day, this is all an experiment. And nothing matters, because it will all turn to dust.

[−] torben-friis 58d ago
Honestly, the kind of people doing that are probably better served by AI (currently).

I'm saying that because they were never going to be critical of the search results, and Google is not exactly showing objective truth in the first positions nowadays.

[−] DANmode 58d ago

> just blindly trust whatever it says. How do you deal with that?

I…do not.

[−] perfmode 58d ago
How do you deal with people who trust their discursive mind?
[−] PaulKeeble 58d ago
It's everywhere now; it's becoming a real problem in every corner of the internet and in the real world. People are using hallucinated legal cases in lawsuits, generating images to create fake events, using AI to write their CVs, and just about everything else you can imagine. People are having to wade through all this slop professionally; calling it out and pointing out the mistakes doesn't seem to help, as the people using this stuff believe the AI is correct no matter what you say or do.

Like most things that go mainstream, it will take a good while before people understand, by which point they will have learnt a lot of things that aren't true and will never let them go. We might get a healthy use of current AI at some point in the future, or if the product drastically improves.

All you can do now is hold them to the same standard you normally would: if you catch them lying, whether an AI did it or not, it's their responsibility and you treat them accordingly.

[−] maxdo 58d ago
same way as I deal with people who trust other people.