AI-assisted cognition endangers human development? (heidenstedt.org)

by i5heu 190 comments 230 points

[−] svnt 29d ago
It is a quirky article, but the author, instead of engaging with information sources to understand what important thoughts people have already had about these topics, feels the best thing to do is coin new terms for concepts that already have names. This is basically just inductive bias plus the AI-homogenization idea producing a distribution shift.

This is what happens in thought-isolation. It isn’t better than educating yourself, whether that education involves AI or not.

Philip Kitcher is known for epistemic monoculture; Dawkins and then Henrich popularized collective intelligence and cultural evolution.

The thing about these fear pieces is that concepts like "the hollowed mind" are reductive, and that reductionism is based on a reductive view of (usually other) people.

But what actually happens is we have formalized processes and can externalize them. This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.

[−] antonvs 29d ago
When I read pieces like this all I think is, resistance to change is a helluva drug.

I've been working on a project and using LLMs heavily to inform my design decisions. There's already a long list of cases where it has taught me things I wasn't familiar with, alerted me to possibilities I didn't consider, and shown me how to do things I was struggling with. In those cases I ask for references, and it delivers.

This is not "endangering human development". If anything, it's the exact opposite - allowing human knowledge to be transmitted to other humans in an accessible way that otherwise usually would simply not have happened.

Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.

I'm not saying there isn't a moral dimension to all this, and areas of serious concern. But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn. The former will be better for human development.

One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.

[−] nathan_compton 29d ago
I think it depends on the person. As a teacher, I see this. Some kids (the gifted ones) use AI to multiply their efforts. Most kids use it just to get by and actually come out of the class with less knowledge than they would have without it.
[−] latexr 29d ago

> When I read pieces like this all I think is, resistance to change is a helluva drug.

When I read comments like yours, I'm reminded of the cryptocurrency shills (though I'm not comparing you to them; I believe you are arguing in good faith) who say anyone against cryptocurrencies is just jealous they didn't get in on the gold rush; they're incapable of imagining or accepting that other people have their own reasons beyond what they themselves can conceptualise.

When people criticise cryptocurrencies, NFTs, the Metaverse, or LLMs, they're not just stubbornly "resisting change". Those technologies have important issues and repercussions which should be addressed; we shouldn't just accept change unquestioningly.

> Of course, this all depends on using AI to enhance cognition and access to knowledge, as opposed to just letting a machine write all your code for you without review, Yegge-style.

And the latter is exactly what is going to happen and is already happening in large enough quantity that it’s going to be a serious problem.

> But the one about "endangering human development" is wholly in our individual hands. You can use AI to help you learn, or to replace the need to learn.

That completely ignores the loss of skill that happens without you realising, as you lean more on a tool.

https://www.thelancet.com/journals/langas/article/PIIS2468-1...

https://arxiv.org/abs/2506.08872

This is nothing new. We already know that e.g. heavy GPS use makes us weaker at navigating on our own.

https://www.nature.com/articles/s41598-020-62877-0

> One real lesson from this is perhaps that we need to teach people how to use AI in ways that benefit their development, not just their output.

Yes, that is a good goal. But good luck achieving it.

[−] array_key_first 29d ago
Usually when things are "in our individual hands" it ends very poorly.

This is because humans are actually extremely easy to exploit. Our biology is very stupid and also dumb, so even basic attacks can cause us to self-destruct.

And that's how we get obesity, smoking, war, I mean... you name it.

LLMs are, in this sense, basically the perfect exploit. While I'm sure some people, somewhere, can theoretically resist attacks from LLMs, on the whole I'm not sure that will be the case.

[−] Forgeties79 29d ago
As I see it, LLMs require far more self-discipline and introspection than people expect or generally engage in.

It's a corner-cutting machine that allows people to shift the burden of their work onto others, either in the form of more slop we have to wade through OR more work we have to correct because they couldn't be bothered to vet the results.

It's like writing a paper, running spellcheck, then sending it to someone else to look over for you without ever taking a pass yourself. It's selfish.

[−] imoverclocked 29d ago

> But what actually happens is we have formalized processes and can externalize them.

Even if I believe that is what happens in 10% of uses of AI, it doesn't excuse what happens with the rest.

Many people cannot do mental math anymore, and still more question why we need to learn math at all when we have simple calculators. "When will I ever use XYZ?" is a common refrain.

AI is currently developed and owned by billionaires who also happen to own news sources. If that correlation doesn't spark questions about why we shouldn't externalize processes to AI, you have likely been using AI too much already.

[−] Yokohiii 29d ago
It's possible I'm missing something, but are you saying that the author should relax and leave this to smarter people?
[−] superxpro12 29d ago
I think this analysis ignores the question of whether these "AI" products will remain truly unbiased and free from external (corporate) influences.

When AI gains true marketshare in the "think-space", I have zero trust that the corporate overlords controlling these machines will use them in the fairest interests of humanity.

[−] Forgeties79 29d ago

>This is a benefit if you can use your newfound capacity and free time for something better, which I think most people ultimately will.

I think for a lot of us the problem is that this is not a given. It's often promised and rarely occurs, especially in the modern era. Increased productivity usually just means increased demands in the workplace.

[−] jbethune 29d ago
This was a bit word-salad-y but I share the same basic concern. I think I worry more about the tendency toward greater and greater cognitive off-loading to LLMs. My sister told me a story the other day about how she caught her plumber using chatgpt on his phone to fix an issue with her bathroom. I just think it's good for humans to know how to do stuff.
[−] bomewish 29d ago
Doh. I went in expecting a really cool thesis, because the idea seems somehow intuitive, or at least really intriguing. But I have no clue what I read. Just totally odd and unconvincing. Greenland? Dialectical substrate? The idea is still super intriguing to me though!
[−] thepasch 29d ago
AI-assisted, I can see. I believe it doesn't have to be that way, though. If you use AI as a grounding tool - essentially something that can take your stream of consciousness and parse it into a series of concrete and pointed search terms to do real-time research with, instead of falling back on what's in the weights - then it's honestly hard to think of a technology with the potential to be more useful in the history of the species: it gives you much more direct access to both your unknown unknowns and your unknown knowns.

That is, of course, provided that you pay attention to whether it actually does the research. In their current state, LLMs are practically useless for this purpose for the vast majority of users, as no one knows how they work, what to watch out for, what the failure modes look like, and how to keep nonsense apart from facts when both are presented with an equal amount of conviction. That's not a user problem, it's an education problem.
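Roughly, the "grounding tool" pattern looks something like this (a minimal sketch in Python; the model name, prompts, and the search_web helper are my assumptions for illustration, not anything a vendor ships as-is):

    # Sketch of the "grounding tool" pattern: turn free-form notes into
    # pointed search queries, then answer only from the fetched sources
    # instead of from whatever is baked into the weights.
    # Assumes a search_web(query) -> str helper that returns text snippets.
    import json
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # assumption: any chat-completions model works here

    def ground(notes: str, search_web) -> str:
        # 1. Extract concrete search queries from the stream of consciousness.
        q = client.chat.completions.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": 'Return JSON {"queries": [...]} with three precise '
                           "web search queries extracted from these notes:\n" + notes,
            }],
            response_format={"type": "json_object"},
        )
        queries = json.loads(q.choices[0].message.content)["queries"]

        # 2. Do the real-time research.
        snippets = "\n\n".join(search_web(query) for query in queries)

        # 3. Answer strictly from the collected sources, with citations.
        a = client.chat.completions.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": "Using ONLY the sources below, answer the notes and "
                           "cite which source supports each claim.\n\nSources:\n"
                           + snippets + "\n\nNotes:\n" + notes,
            }],
        )
        return a.choices[0].message.content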

[−] drusepth 29d ago
This is absolutely something to potentially be worried about, but one thing I never see highlighted in critiques of AI-assisted cognition is that some elements of physiology may not actually be biologically necessary if they can be fully supplanted by some replacement (in this case, new tools). I can't traverse as much land on foot as my ancestors did (my muscles are weaker, my endurance is less, etc), but I can travel even further than they could by car/plane/etc.

Nothing about the nature of evolution implies our current cognitive processing is ideal/sacred and shouldn't ever change.

[−] dcre 29d ago
I've never seen an argument like this that, if true, wouldn't also apply to the cognitive offloading we do by relying on culture, by working with others, or working with the artifacts built by others.
[−] YackerLose 29d ago
A real artificial intelligence would be capable of independent and original thought. What we have today are mere plagiarism factories. They need to be called out for what they are.
[−] gobdovan 29d ago
By the logic that knowing today's news is fundamental, there really is no point in reading books more than six months old. If Einstein woke up from a coma, he'd be useless, as he doesn't even know who won the World Cup. For real now: if an AI can help you solve a problem using 2,000 years of human logic, does it really matter if it's "skewed" away from a political shift that happened three weeks ago?

I also don't believe that everybody I know is idiosyncratic in the way they view the world. And even if they were, I'd probably just pay attention to the things that are directly relevant to me. So probably I'll misunderstand most of what they say anyway.

[−] contingencies 29d ago
Strong disagree. The "AI-Assisted Cognition" phrase is loaded.

Would you attempt to, for example, simultaneously modify for available ingredients, number of diners, and time-optimize the prep method for a recipe you've never cooked before if you were following an old-school cookbook? No. You'd have to be a pretty solid chef to try all that on at once.

Using AI, you might branch out confidently into new areas, executing all of these modifications simultaneously, and even adapting the output for a specific audience or language.

This toy example shows an important property of AI as a decision support system, a class of systems well studied in the military domain: using them, we build the confidence to act in unfamiliar domains, thereby extending our reach. From this experience we can learn more. The fact that the learning may then occur through the experience, i.e. during or after it, rather than beforehand, is secondary. It's still there. The fact we didn't know the language the AI translated into for our chef is totally irrelevant.

Sitting comfortably at the effective apex of millions of years of human cognitive and technology development with the entire world's knowledge at our fingertips, every day we can extend confidence in novel domains through AI, and enjoy it. We should be feeling pretty damn "developed".

Rote formalism and fixed paths in pedagogy are gone: good riddance. This is the hacker age.

[−] Manuel_D 29d ago

> In early 2026, the USA prepared to invade Greenland and, therefore, the EU. Only a few months prior to that it was completely unthinkable that the USA would even think about threatening an invasion of Greenland. As AI base models are stuck in the past, they do not easily accept these events as real and often label them as "hypothetical", "fake news", or "impossible". This also affects new models like Gemini 3 Pro, GLM-5 or GPT-5.3-codex.

Isn't this just inherent to any system that takes some time to update? E.g. if a country moves its capital to a different city, then textbooks, maps, etc. are going to contain incorrect information for a while until updated editions are published.

A lot of the complaints about AI are really about the drawbacks of information systems more generally, and the failure modes pointed out are rarely novel. The "Cognitive Inbreeding" effect attributed to AI would also have occurred with Google search would it not? Lots of people type the same question into google and read the top results, instead of searching a more diverse set of information sources. It's interesting that the author mentions web search as a way to ameliorate this, when it seems to me that web search is just as capable of causing cognitive inbreeding.

[−] adamtaylor_13 29d ago
One thing that's always been true of human communication, and that is becoming increasingly obvious to me through my interactions with LLMs, is the art of asking a good question.

The framing of questions massively affects the results you get from discussion with humans, and I'd argue it's even more pronounced with LLMs.

[−] demorro 29d ago
This Dynamic Dialectical Substrate sounds a lot like Pirsig's Metaphysics of Quality to me, which I think is neat.
[−] MillionOClock 29d ago
Say someone uses AI, treating it as if it were a developer (probably not recommended today due to the risk of errors), and working and speaking with it as if they were some kind of product manager or senior engineer who only makes architectural decisions. I wonder what kind of difference it would really make. Sure, the person might not be as good a developer anymore, but how is that different from being an ordinary product manager, once AI truly is good enough for a developer role? I'm not saying I know the answer to this question, but it's something I genuinely wonder about, and I think the same kind of questioning applies to broader domains.
[−] giancarlostoro 29d ago
I think the best way I can put it is this: if you just cheat off someone else in school, you aren't learning much, are you? AI is the same thing. Don't just cheat; use it to learn instead.
[−] steve_adams_86 29d ago
"Cognitive inbreeding" is an interesting (though maybe not entirely accurate) term for something I dislike a lot about LLMs. It really is a thing. You're recycling the same biases over and over, and it can be very difficult to tell if you don't review and distill the contents of your discourse with LLMs. Especially true if you're only using one.

I do think there's a solution to this, kind of, which dramatically reduces the probability of letting broad inductive biases take over. And that's to ask questions with narrower scopes, and to ensure you're the one driving the conversation.

It's true with programming as well. When you clearly define what you need and how things should be done, the biases are less evident. When you ask broad questions and only define desired outcomes in ambiguous terms, biases will be more likely to take over.

When people ask LLMs to build the world, they will do it in extremely biased ways. This makes sense. When you ask it specifics about narrow topics, this is still a problem, but greatly mitigated.

I suppose what's happening is an inversion of cognitive load, so the human is taking on more and selecting bias such that the LLM is less free to do so. This is roughly in line with the article's premise (maybe not the entire article, though), which is fine; I think I generally agree that these are cognitive muscles that need exercising, and allowing an LLM to do it all for you is potentially harmful. But I don't think we're trapped with the outcome, we do have agency, and with care it's a technology that can be quite beneficial.

[−] mayankd 29d ago
The cognitive effects are going to be so divergent. While avid learners will pick up knowledge and skills on the fly exponentially faster, the populace offloading thinking to AI models will see unprecedented cognitive decline. This is similar to the effect the internet had on knowledge retention, but this time on critical thinking.
[−] chunky1994 29d ago
Does anyone use LLMs in such a manner that they believe they always have the most up-to-date information (without web search tools)?

Isn't this whole thesis negated by the fact that tool-calling web search exists? This just feels like a whole lot of words to say: don't treat an LLM as an always-up-to-date, infallible statistical predictor.
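For what it's worth, the shape of that tool-calling loop is roughly the following (a minimal sketch; the model name, tool schema, and search_web implementation are my assumptions, not any vendor's built-in search):

    # Sketch of tool-calling web search: the model is stuck at its training
    # cutoff, but it can request a search whenever a question needs fresh
    # information. Assumes a search_web(query) -> str helper.
    import json
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # assumption

    tools = [{
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web for up-to-date information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    def ask(question: str, search_web) -> str:
        messages = [{"role": "user", "content": question}]
        resp = client.chat.completions.create(
            model=MODEL, messages=messages, tools=tools
        )
        msg = resp.choices[0].message
        # If the model decided it needs fresh data, run the search and retry.
        if msg.tool_calls:
            messages.append(msg)
            for call in msg.tool_calls:
                query = json.loads(call.function.arguments)["query"]
                messages.append({
                    "role": "tool",
                    "tool_call_id": call.id,
                    "content": search_web(query),
                })
            resp = client.chat.completions.create(
                model=MODEL, messages=messages, tools=tools
            )
        return resp.choices[0].message.content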

[−] anigbrowl 29d ago
So does talking to uninformed people. The size of the group is inversely correlated with deviation from the mean (of IQ, productivity, or whatever proxy for cognitive capability you care to specify).

I'm not sure why this is at the top of the page; it's not that it's wrong, it's just a sequence of truisms.

[−] darepublic 29d ago
The original "person who most of humanity talked to" was, I reckon, google dot com
[−] cyanydeez 29d ago
Do we think AI is similar to being rich, but without all that cash? I mean, they can basically offload most things to other people to think about.
[−] blackqueeriroh 29d ago
This is bad science. Horrifically bad science.
[−] 2OEH8eoCRo0 29d ago
Economic incentives are forcing us to use tools that make us dumber. What does the future hold?
[−] cowlby 29d ago
Sometimes it feels like as developers we live in a bubble. Don't most jobs endanger human development? I can't help but think about all the billions of factory, food service, assembly-line type jobs. Do these not threaten "human development"? My cynical take would be all AI endangers is "white collar" work.
[−] measurablefunc 29d ago
Calculators endanger the development of mental arithmetic skills as well.
[−] kazinator 29d ago

> Speaking and discussing with other humans [who aren't incessantly blathering about AI] is obviously the most effective way to mitigate these problems.

Slightly FTFY.

[−] SegfaultSeagull 29d ago
It’s a bit ironic that the author includes an AI generated audio version of the article, you know, so we don’t have to read it.
[−] LetsGetTechnicl 29d ago
Well no shit
[−] zozbot234 29d ago
At the Egyptian city of Naucratis, there was a famous old god, whose name was Theuth; the bird which is called the Ibis is sacred to him, and he was the inventor of many arts, such as arithmetic and calculation and geometry and astronomy and draughts and dice, but his great discovery was the use of letters. Now in those days the god Thamus was the king of the whole country of Egypt; and he dwelt in that great city of Upper Egypt which the Hellenes call Egyptian Thebes, and the god himself is called by them Ammon. To him came Theuth and showed his inventions, desiring that the other Egyptians might be allowed to have the benefit of them; he enumerated them, and Thamus enquired about their several uses, and praised some of them and censured others, as he approved or disapproved of them. It would take a long time to repeat all that Thamus said to Theuth in praise or blame of the various arts. But when they came to letters, This, said Theuth, will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. Thamus replied: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
[−] drivebyhooting 29d ago
If computers are bicycles for the mind and AI are cars, I wonder what the analogue for the obesity epidemic is.