I'm going to hold them to the same standard whether they use crappy sources, plagiarize, or hallucinate on their own. If someone asked, or if I were in a position where I had to tell them, I would remind them that LLMs prioritize their own confidence over correctness.
LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day but we've been giving and getting bad advice forever. The person needs to take ownership for the output and getting it right, no matter the source, is their responsibility.
I think they are a little special. People can really turn their brains off and not even know what the source is. They don't need to read through a source or reformat the content into the typical blurb arguments. They can read it off the screen without even understanding what the words mean, which is much harder with most other sources.
Yeah, it is not clear to me that the average person is going to do any better with googling than they are with asking an LLM. At least the LLM is mostly the average of all published knowledge (and misinformation).
I feel like people that ask questions like this must have much smarter friends and family members than I do.
I know people that still believe in Pizzagate or chemtrails or that vaccines cause autism. Clearly finding reputable information sources is not a strong suit for a lot (half?) of the population.
News reporters and editors have their biases. Book authors have their biases. Scientists and research papers have their biases. Search engines have their biases. Google too.
All human-created systems have biases shaped by the environments, social norms, education, traditions, etc. of their creators and managers.
So, the concepts of "objective truth" and "reputable" need to be analyzed more critically.
They seem to be labels given to sources we have learned to trust by habit. Some people trust newspapers over TV. Some people trust some newspapers over other newspapers. All of it often on emotional grounds of agreeability with our own biases. Then we seem to post-rationalize this emotion of agreeability using terms like "objective truth" and "reputable".
Is a Google search that leads to the NY Times or Fox News or Wikipedia and makes us manually choose sources according to our biases "better" than Google's Gemini engine, which summarizes content from all of the above sources and gives an average answer? (Note: "average answer" as of current versions; in the future, its training too may be explicitly biased, like Grok and DeepSeek did.)
Perhaps we can start using terms like "human sources of information" versus "AI sources of information" and get rid of the contentious terms.
Then critically analyze whether one set of sources is better than the other, or they complement each other.
Ask them to tell the LLM it's wrong... then when it goes "You are absolutely right!" to challenge it and say that it was a test. Then when it replies, ask it if it's 100% sure. They'll lose faith pretty quick.
Not that I've had to deal with this specifically, but I have noticed how the input phrasing in my prompts pushes the LLM in different directions. I've just tried a quick test with duck.ai on gpt 4o-mini with:
A: Why is drinking coffee every day so good for you?
B: Why is drinking coffee every day so bad for you?
Question A responds that it has "several health benefits", antioxidants, liver health, reduced risk of diabetes and Parkinson's.
Question B responds that it may lead to sleep disruption, digestive issues, risk of osteoporosis.
Same question. One word difference. Two different directions.
This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X" - which is obviously a bit leading; I don't even trust what I hope are more neutral inputs like "How does Library A apply to Problem Space X", for example.
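For anyone who wants to reproduce the framing effect, here's a minimal sketch of the same test against an OpenAI-compatible API; I ran the original through duck.ai's chat UI, so the client setup and model name here are assumptions, and the neutral control question is my own addition:

```python
# A/B framing test: same topic, one word flipped, plus a neutral control.
# Assumes the OPENAI_API_KEY environment variable is set; the original test
# went through duck.ai's chat UI rather than the API.
from openai import OpenAI

client = OpenAI()

framings = {
    "positive": "Why is drinking coffee every day so good for you?",
    "negative": "Why is drinking coffee every day so bad for you?",
    "neutral":  "What does the evidence say about drinking coffee every day?",
}

for label, question in framings.items():
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # reduce run-to-run variance so the framing is the main variable
    )
    print(f"--- {label} ---")
    # The first few hundred characters are usually enough to see the slant.
    print(resp.choices[0].message.content[:400])
```

In my runs the positive and negative framings pull the answer in opposite directions, which is exactly the point: the model tends to follow the premise you hand it.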
I would question the person asking the question, as they are missing some basic principles here. There are two types of internet/LLM users:
Sadly one type asks a question (search, prompt) using Google or an LLM and takes the first response as truth.
The other asks follow ups based on the responses and their critical thinking skills. They often even go read the linked article and make sure it's still applicable.
Pretty much the same as when you're talking to a real person: critical thinking (much more than just knowing reputable sources) is key.
So the issues are very similar. Luckily, LLMs can do much more than a simple search and can help with your critical-thinking tasks: ask the LLM to provide opposing viewpoints, historical analysis, and its sources.
Are you talking about people who will still insist the LLM was correct even after being presented with evidence to the contrary, or people who don't EVER bother double checking answers they get out of said software since they assume it to be true?
I treat people who blindly believe an LLM the same way I treat people who blindly believe a religion or a political ideology or medical advice from Instagram.
If they ask what I think, I tell them.
If they don't want my opinion I keep it to myself.
The same way that I handle anyone who blindly trusts anything on the internet. Could be an LLM, TikTok or YouTube video, Wikipedia article, news article, whatever.
It usually involves some form of "well, no, hold on..."
My method is simple. I remind them that ChatGPT is trained on everything said on the internet, including the NYT if I'm speaking to a Republican (replace that with Fox News if I'm speaking to a Democrat).
I think the real problem is most people don't actually have a very good understanding of "Truth."
As someone who ended up studying philosophy, I see a real gulf between folks who sort of believe stuff they hear, folks who believe "facts" they hear from (various levels of) credible sources, and folks who take solipsism seriously and understand that even in the most ideal scenario we still wouldn't have a very good understanding of the world... much less one that deals with the inherent flaws in our research and information systems.
Knowledge is hard. It usually takes me a couple of minutes to figure out what type of "truth" my interlocutor uses. Typically, good-faith disagreements are just walking up the chain of presuppositions we use to find out exactly where we diverge in our premises.
I think LLMs are fine for a "first pass" on a topic, but if I am researching something, I want a primary source rather than just the LLM-generated output. Do they have the primary source?
Unless they are someone that values your opinion there's nothing you can do other than move on.
Some comments here equate it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking. It's concerning to watch.
Simple. I became one of them. Ultimately, using an AI is a new skill, but you have to treat it like another person that sometimes bullshits you. That's why you leverage agents to refine, do research, and polish.
Ask the AI to cite sources and then investigate the sources yourself, or have another agent fact-check the relevance of the sources (a rough sketch of that two-pass setup is below).
You can use this thing called ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task and refine something from different lenses. It took the AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/
I do this on things that I know very well, and the moment I let it cook, iterate, and collect feedback, the results become chef's kiss.
The agentic era that we are in is... very interesting.
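To make the cite-then-verify idea above concrete, here is a rough sketch of the two-pass setup: one call answers with citations, a second call acts as the skeptic and flags weak or missing citations for a human to chase. The model name, prompts, and example question are illustrative, and this is not ralph itself:

```python
# Two-pass "cite, then check" sketch. Neither pass can browse the web here,
# so the second pass only flags citations for a human to follow up on; it
# does not prove they exist.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "What does current guidance say about daily low-dose aspirin for healthy adults?"

# Pass 1: answer with explicit citations.
answer = ask(f"{question}\n\nCite a specific source (title, publisher, year) for every claim.")

# Pass 2: a separate call acting as the fact-checker; it sees only the answer.
review = ask(
    "You are reviewing another model's answer. List each cited source, note whether it "
    "is plausibly real and relevant, and flag every claim that has no citation:\n\n" + answer
)

print(answer)
print("\n--- fact-check pass ---\n")
print(review)
```

The second pass won't catch everything, but it reliably surfaces the uncited claims, which are the ones most worth checking by hand.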
> They have a question that would be very well answered with a search leading to a reputable source
Can you give an example of what kind of question you mean here?
Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions, otherwise we're just as bad as the people we are complaining about.
Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.
So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.
I don't fight them on it. I just ask "where did that come from?" and suggest checking a real source. Most people aren't trying to be wrong, they just want quick answers. If you show them how to double check without making it a big deal, they usually get it.
Sure LLMs make mistakes, but have you looked at the accuracy of the average top search results recently? The SERPs are packed with SEO-infested articles that are all written by LLMs anyway (and almost universally worse ones than you could use yourself). In many cases the stakes are low enough (and the cost of manually sifting through the junk high enough) that it’s worth going with the empirically higher quality answer than the SEO spam.
This of course doesn’t apply to high-stakes settings. In these cases I find LLMs are still a great information retrieval approach, but it’s a starting point to manual vetting.
At the very least, I'm glad most people finally recognize LLMs are being used as a political weapon against education. It's the same old power struggles as ever.
These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like to not be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken. It's probably sometime soon anyway.
Simply prove them wrong (earnestly and in good faith). When they realise the LLM is fallible, they'll learn to be skeptical of it without you needing to teach them that specific lesson.
To be honest, I catch myself doing this too. I’ll ask an LLM first instead of searching, even though I know it’s not a reliable source of truth. It just feels faster and more convenient.
I’m aware of the hallucination risks and that it’s a bad habit, but the workflow is so smooth that it’s hard to break. When it matters, I do double-check with real sources, but for casual stuff, I’ll probably keep using AI as my first stop.
Tell them what to prompt the AI with to get the correct results. I've seen a number of YouTube shorts lately doing this, where some scientist gets "refuted" by some random person based on an LLM result; the scientist then sits with the LLM, asks the same question, gets the same wrong answer, and follows it up with a clarifying question, at which point the LLM realizes its mistake and gives a better answer.
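Roughly, the pattern those clips follow looks like the sketch below: keep the first (possibly wrong) answer in the conversation history and push back with a specific check. The example question, prompts, and model name are made up for illustration:

```python
# "Same question, then a clarifying follow-up" pattern against an
# OpenAI-compatible API; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"

history = [{"role": "user", "content": "Is it true that lightning never strikes the same place twice?"}]
first = client.chat.completions.create(model=model, messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# The clarifying follow-up that often gets a correction out of the model.
history.append({
    "role": "user",
    "content": "Double-check that: are there documented cases of repeated strikes, "
               "for example on tall structures like the Empire State Building?",
})
second = client.chat.completions.create(model=model, messages=history)

print("First answer:\n", first.choices[0].message.content)
print("\nAfter the follow-up:\n", second.choices[0].message.content)
```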
I have a feeling this is like the difference between telling people "don't touch a live wire" and the more direct, experiential "I won't touch a live wire again" lesson: people need to experience being hallucinated at, within their own area of comprehension, and at best can be told about Gell-Mann amnesia.
I doubt you can stop them from asking machines for answers. What you can do is help them learn how to distrust the answers competently, but outside their field of knowledge, applying skepticism is hard.
The irony of Gell-Mann amnesia is that Michael Crichton, who is said to have named it, suffered from it badly: he wrote well within his field, misapplied science to write outside it, and said things that were indefensible.
There is no point in arguing with stupid people. They're the same people who support their "opinion" with internet articles (as if that means anything), mainstream media (hard to find bigger deceivers), or social media posts (arguably the worst).
Now they've got another "god" in the LLM.
How to deal with it? Just ignore it. There are way more stupid people with stupid opinions than we can possibly estimate.
Well, most people who "blindly trust" LLMs also blindly trust Google results, Wikipedia, the first Stack Overflow answer, a friend who sounds confident, and a news headline they didn't click through. So, what do you do? As giantg2 has written in the comments, they are a little special.
The people who trust LLMs already trusted anything else they heard. There's nothing to do for them. If we were pre-LLM, I think you'd be concerned that they trust the first result on Google. Or things they heard on podcasts. This is just what we all do, to varying degrees.
I'm genuinely unsure of whether or not this is better. LLMs make mistakes, but so do humans. So often. I really don't know how often LLMs are wrong in comparison, or how you'd find out. Regardless, computers have become a terrible way to learn things if you aren't a rigorous person. Simultaneously, they've become an absolute dream beyond the imagination of most humans in history, if you are. That's very strange.
At work I had this kind of discussion on a conference call: someone looked up something about an internal company policy and got back a hallucinated, wrong result.
So I said, don't ever trust the output of an LLM without verification. However, this caused me some hassle with the AI adoption manager. We have minimum-use AI KPIs for employees, and he asked me to stop saying these things or people will use it less.
In the end I just hated the company a little bit more. I'm just sick of fighting against idiots. And he does have a point: our leadership is pretty crazy about the AI hype, and they want everyone to be on it all the time. They don't seem to care whether it adds value or even detracts from it.
A search isn’t going to lead them to a “reputable source”, it’s going to lead them to ad filled SEO garbage, because it’s not 2004 anymore and thousands of Google employees have been working for two decades to ruin the Internet.
I’ll take LLMs any day over what search and the rest of the Internet has turned into.
Explain that the models are compressed with a form of lossy compression, and point out that every so often an answer will be pulled from a section of the model space that has compression errors.
The people who trust bad information from LLMs are the same people who trusted bad information from search results and news articles; it just takes them less time to get bad information.
LLMs give you sources now even if you don't ask for them. If you don't like them, you can ask for more reputable sources. What kind of 2023 question is this?
If you stop and think, LLMs seem to operate very much like humans.
Go to any human and ask them a question, and they will answer either from direct, specific experience or from estimation based on their experience.
We highly value humans who have a lot of direct experience and also can extrapolate that experience or apply it to new scenarios and generate believable answers.
In other words, unless a human has exact knowledge, they are "hallucinating". It's very normal.
The point is, whether LLM or human "expert", if the question is of great significance, get a second and maybe third opinion.
At the end of the day, this is all an experiment. And nothing matters, because it will all turn to dust.
Honestly, the kind of people doing that are probably better served by AI (currently).
I'm saying that because they were not going to be critical of the search results anyway, and Google is not exactly showing objective truth in the first positions nowadays.
It's everywhere now, and it's becoming a real problem in every corner of the internet and in the real world. People are using hallucinated legal cases in lawsuits, they are generating images to create fake events, they are using AI to write their CVs, and just about everything else you can imagine. People are having to wade through all this slop professionally; calling it out and pointing out the mistakes doesn't seem to help, because the people using this stuff believe the AI is correct no matter what you say or do.
Like most things that go mainstream, it will take a good while before people understand it, by which point they will have learnt a lot of things that aren't true and will never let them go. We might get a healthy use of current AI at some point in the future, or sooner if the product drastically improves.
All you can do now is hold them to the same standard you normally would: if you catch them lying, whether an AI did it or not, it's their responsibility and you treat them accordingly.
If they're employees, I'll try to find better ones.
If they’re friends I might tell them.
Is this something you can control or is this outside your control?
For me, for example, I have seen and experienced doctors making misdiagnoses (and they are a reputable source), so what is the difference, really?
I guess your question also depends on the context they are using the LLM in and what sort of questions they are asking.
Scientific, fact-based questions or opinion questions?
> just blindly trust whatever it says. How do you deal with that?
I…do not.