"I could retrain, but my core skills—reading, thinking, and writing—are squarely in the blast radius of large language models."
Yes.
For the lifetime of almost everyone alive now, reading, thinking, and writing have been valued skills which moved one up in society's hierarchy. This is a historical anomaly.
Prior to 1800 or so, those skills were not all that useful to the average farmer. There were more smart people than jobs for them. Gradually, more jobs for smart people were developed, but not until WWII did the demand start to exceed the supply. Hence the frantic technical training efforts of WWII and the following college boom. This was the golden age of upward mobility.
It's hard to imagine this today. Read novels from the 18th century to get a feel for it. See who's winning and who's struggling, who rises and who falls, and why. Jane Austen's novels are a good start.
The nerds didn't take over until very late in the 20th century. There were very few rich nerds until then. Computing was once a very tiny world. You could not get rich working for IBM. The ones who left and got rich were in sales.
So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
That may be where we go once AI does the thinking. That's where we go when smarts are not a scarce resource.
> Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
This is really bleak to me. We can do better than primogeniture, and of course the gender discrimination that goes along with it. You might as well write that subjugation of women is a "core value" simply because it has been one for so much of history.
John Henry is not going to beat the steam shovel any time soon.
> For the lifetime of almost everyone alive now, reading, thinking, and writing have been valued skills which moved one up in society's hierarchy. This is a historical anomaly.
It's not an anomaly; rather, it's the other way round. These used to be highly specialized skills that carried significant status, and got democratized by mass education in the 20th century.
We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
Although primogeniture has been discriminatory for basically the entire time it has existed, the discrimination isn't inherent; it's an implementation detail. British royal succession has used absolute, gender-neutral primogeniture since 2013.
In fact, there are few things less discriminatory than a random birth order. You may as well be assigned a random number at birth, and the lower your number, the more you're paid. In such a system, there's nothing to discriminate against; the ordering is absolute and immutable, and everyone is treated equally.
I agree that it's a bleak idea, but Animats wasn't talking about subjugating women.
> We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
Unfortunately, that is the current stage of humanity. We all currently live in a global subscription model for food, housing, safety, etc. No doubt that we will move beyond it eventually, but the current organization of society is kept in place by the owner class which benefits from the current arrangement.
One of the steps for moving beyond it is educating the modern day serfs (our peers) about reality as it is and alternative visions of a future where we are no longer selling our labor to the owner class. It will take generations.
Primogeniture is not actually unreasonable if you consider that siblings' ages can span, say, 15 to 20 years. On average the oldest is the most mature and experienced, and both are reasonable qualities, up to a point, if your existence depends on the decisions of a single leader. I would generally pick the 30-year-old over the 20-year-old, or the 25-year-old over the 15-year-old. Past 30 it gets different, but up to around there I would reasonably expect maturity and experience to matter.
> We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
The algorithms and bots that curate/generate content directed by accelerationists definitely want people to think that. There is a whole system in place now that can shape future outcomes just by convincing everyone that they have no power when the opposite is true. The parent is probably a bot, or has been influenced by one too many "there is nothing new under the sun" solipsism takes.
This a very silly view of the past through modern eyes. Intelligence, cunning, and wit have always been immensely valuable. Read the mythology of literally any culture for examples.
To extrapolate from "fewer people were formally educated or literate" to "intelligence wasn't valued" is absurd.
As for your part about reading and writing. Literacy has always been a very valuable skill that would increase your social standing. It was scarce and difficult to acquire before the printing press, but it was always valuable.
I don't disagree with your overall point, but I do think that ingenuity, problem-solving, impulse control, and the ability to delay gratification and reach long-term goals have always been valuable skills.
You might still only be a farmer if you're smart, but you can at least be one of the more productive farmers with a more smoothly running farm.
In the same way that numeracy skills were in the blast radius of the Colossus.
People seriously underestimate how underpowered and tiny LLMs are for the tasks they need to solve.
A trillion parameter model can't tell the difference between left and right. We will need to grow them millions to trillions of times before they are half as good as AI boosters claim they are.
This isn't the end of thinking any more than the watt steam engine was the end of horses. It will be centuries before we get there. And by that point the difference between man and machine will be at best academic.
> So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
This is a "noble savage" conception of the past. Thinking/cleverness/craftiness was highly rewarded even in preliterate societies. Even in war, "polytropos" Odysseus comes out ahead of the dumb brutes with bigger spears.
I think you forgot discipline and long-term thinking in your core values. Even before high technology, there were things to plan and resources to manage. Especially after the beginning of agriculture.
"Oh well, we were in an anomalous time of social growth, time to go backwards! We won't even need to read or write or think! It's all just too bad, but that's just the way the world works, like it did in 1800." [or pick your date before any current person was alive]
Lots of people have started considering a time of significant "progress" as "an anomaly", as if the world should always just be the way it was in, say, 1800, like that was actually the realistic pinnacle of human society. You also seem to be loosely basing this argument on the availability of "rich nerds", which seems like a bizarre non-sequitur. Computing once didn't exist, and we still valued reading, writing and thinking.
I'm kind of baffled by how regularly I see comments like this. Like, come on. This is basically the AI black pill, no?
>> So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
You're ignoring the astronomers of ancient Mesopotamia, the scribes of Egypt, the grammarians of India, the philosophers of ancient Greece, the orators of Rome, the physicians of Islam, the scholars of the Middle Ages, the masters of the Renaissance, and all the great natural philosophers, mathematicians, physicists, biologists, of all the ages up to 1800.
We are a technological civilisation, a scientific civilisation. Who do you think comes up with all the technology? Alexander, the Great Butcher? Attila the Hun? Genghis Khan?
We live in the civilisation that was born in Athens, not in Sparta. Knowledge and wisdom always are the greatest power that shapes reality. This won't change just because OpenAI made a viral app.
Then consider the role of the clergy in the Middle Ages, to say nothing of Rome and its large bureaucracy (Roman engineering alone).
On top of this, you need to ignore the very large bureaucracies and trading networks in Asia (Persian, Turkish, Mongol, Indian, and Chinese) to get far with this narrative.
There were a good deal of powerful nerds before the 1700s.
> So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
This is some weird manosphere bullshit. Pre-industrial societies invented philosophy and writing. People across the world know the name of Socrates from 2500 years ago. They know stories from Homer 2800 years ago.
It's a mistake (a) to think pre-industrial people were grug-brained cavemen (b) that we're going to revert into the same cavemen because a computer can do your pointless six-figure office job.
Well... a 19th-century engineer could have a large multi-story brownstone with family and, more importantly, servants and house personnel. A butler, etc...
Interesting take. If AI indeed takes over the thinking, could the next scarce resource be humanity itself? I'd argue that strength has already been taken over by machines, at least in societies where thinking is the dominant competence. What can we not get from AI? Friendship, empathy, connection, affection, mutual understanding? Like being real. Being present. Maybe the time has come to invest in getting really good at all of that.
> those skills were not all that useful to the average farmer.
Sorry but that is just not true.
Sure, farmers aren't academics, but the sheer number of tasks, and of tools required to do those tasks efficiently, was vast. Innovation in winnowing was literally life and death, as was breeding for plant and animal traits.
Observing and reacting to changes in plants, lands, water, animals was critical to getting a good harvest. Packing and storing food was critical to surviving the winter.
Sure, the lack of literacy hampered knowledge recording and dissemination.
BUT, if we mine the vast memory that is the classics, knowledge, wit, and cleverness were prized as often as strength and beauty.
The top comment is often something that just sweeps across centuries, as if technology had its own supreme teleology. With supreme confidence.
AI is not a problem because it is AI. It is because of political circumstances.
Think beyond the small worldview where technology and valuation are everything and you are just a pawn. Then you see that a better world is possible. The first step is then to not give up.
The premise here is that AI works well enough to automate the "smart people" jobs. No one but delusional workaholics is afraid that their job will get automated because they cling to the job in itself. So clearly, this is not about the tech itself.
There wasn’t a college boom post-WWII because technology came and demanded it.
> That may be where we go once AI does the thinking. That's where we go when smarts are not a scarce resource.
Take me by the hand, circumstance. I am yours to be swept away.
There are plenty of occupations that benefit from being "smart". Construction is one of them, and if you study a bit medieval or renaissance architecture, you'll be amazed by what our ancestors were able to do with just a few analog tools.
The same goes for other occupations, and...farming. Breeding cattle is a complex science, so is growing crops consistently and valuing the production.
If you've done almost anything with LLMs, you would know that you need to be intelligent to get good results when using them. This means you need to be good at articulating your ideas clearly in English. So it's more like an exoskeleton than a replacement, at least for now.
Maybe I am being naive but I think there will always be room for smarts.
Every professor at any university has a dozen more project ideas than they have graduate students, every factory boss has a dozen more optimisations than ways to implement them, and looking up into the night sky we have 95% of it that cannot be explained.
The gap is not too few smart people, nor too few "jobs" that need smarts. The gap is being prepared to arrange society and wealth so the "job" is discovery, science, sharing. We are no longer hunter gatherers, no longer a feudal society, perhaps we shall stop being whatever this one is and try a new one.
(And no, I don't think there is a name for the new one yet; it's not socialism, and maybe not capitalism.)
Let's just not fall back to feudalism if we can help it.
Except what will matter in the future isn’t brute strength.
It’s ownership of capital and technology.
Plumbers aren’t suddenly getting rich. At best they’re not losing jobs at the rate everyone else is, but once so many people lose jobs then they can’t afford plumbers either. So even plumbers are worse off even if they’re not as badly off as the rest of us.
And all this assumes that humanoid robots don’t develop and succeed which is a major assumption.
Peter Drucker identified this phenomenon as the rise of knowledge work as "the means of production" in the 1950s and 1960s. Management (of people, tasks, responsibilities, and disciplines) and knowledge work were the two sides to organizational performance. Drucker felt that "post capitalist society" was the recognition that capital ceased being the primary factor of production. No matter how much capital you throw at a problem, if you can't retain people that know what you're doing, you won't get far.
Knowledge is a unique resource compared to the other traditional factors of economic production (land, labor, and capital). It is often invested in with capital (education and tools), but it is carried with the human, and leaves with them. It is always decaying - knowledge workers should be in constant learning mode, and stale knowledge eventually becomes a drag on performance.
I'd argue the future is about knowledge workers all becoming managers. When you use agentic AI, it has the flavor of the skills of management. Management is "a practice and a liberal art", according to Drucker, one that has been in poor supply for some time. LLMs have somewhat stale knowledge and require the human, tools, and RAG to freshen it. And LLMs will always regress to the mean. They are pretty good at pattern analysis and start to get shaky and mediocre with synthesis. It requires very nuanced and elaborate prompting to shape their token output towards insightful results that aren't a standard answer. For coding exercises, that can be fine, but at high complexity levels, or when dealing with issues of strategy or evaluation, an LLM is a platitude generator and has no unique competitive advantage.
In other words, competent, talented management mixed with knowledge work is the scarcity we are heading towards. This is arguably why you're seeing the rise of "markdown frameworks" that people swear improve performance, it's the beginnings of management scaffolding for AI.
Technical folks struggle with valuing management skills, and I expect this will increase its value and scarcity.
As for "Physical robustness. Strength, perhaps brutality. Competence in physical tasks." I think the robots will be replacing that pretty shortly.
"Honesty. Parentage. Birth order (see primogeniture). Those matter in pre-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values."
Ehhhhh, not really? What about Christianity, where the meek shall inherit the Earth and love is the core value (putting aside modern-day Pharisees and charlatans who twist the underlying value system)? Or Islam, whose core value is submission to God? While there have been societies that valued parentage and birth order, that's far from universal.
I don't need any of that. I built a life for myself with discipline and hard work. I avoid most of the drama you describe because I create my world instead of letting it be created for me.
Here are some words to live by[0]. I don't agree with everything Derek Sivers says, especially about philosophy. It's more of a guiding principle that drives rather than divides.
This is a must-read series of articles, and I think Kyle is very much correct.
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
> Just because a technology can be useful doesn't mean it will have positive effects on society.
You say it in a way that it sounds like automobiles don't have a positive effect. I don't agree - they have some negative effects but overall they have a vast net positive effect for everyone.
Their negative effects are much more vast, subtle, and cultural. You could say many of the broad and widespread mental-health issues we have in the US are the result of automobiles leading to suburbanization and thus the isolation of people. They have created an expensive barrier to entry for existing in society and added a ton of friction to doing anything and everything, especially with people. That's not even getting into the climate effects.
The upsides of automobiles generally all exist outside of the 'personal automobile', i.e. logistics. These upsides and downsides don't need to coexist. We could reap the benefits without needing to suffer for it, but here we are.
I've always lived in walkable cities. I don't own a car and with pollution, congestion, accident risk, pavement obstruction, etc. other people's cars unequivocally make my life worse.
We can argue about whether this is a good trade off, but the claim that cars make everyone's life better is straightforwardly false.
The problem is we are numb to it. 40,000+ people are killed in car accidents every year in just the USA. Wars are started over oil and accepted by the people so they can keep paying less at the pump. Microplastics entering the environment each day along with particulate from brakes, and exhaust. Speaking of exhaust: global warming. Even going electric just shifts the problems as we need to dig up lithium, the new oil. We still have to drill for oil for plastics and metal refining, recycling and fabrication.
They have a net positive effect for every owner, except that they seem to facilitate and encourage ways of living that require automobile ownership as a condition of adulthood in most places. So I'm not entirely sure they're a vast net positive in every value system. In yours, yes, but not in mine.
I think it's most obvious in hindsight, probably it was a long time (some decades) before cars were understood to have much of a negative effect at all. Nobody* thought much about air pollution (even adding lead to the gasoline) or climate effects, or what would happen when cities were built enough that they were then dependent on cars, or what happens when gas or cars gets expensive.
All they saw was that trips taking a day could now be done in an hour and produced no manure, and that meant suddenly you could reasonably go to many more places. What's not to like? A model T was cheap, and you didn't even need to worry about insurance or having a driver's license. Surely nobody would drive so carelessly as to crash.
*well, not technically nobody, but nobody important.
The positive effects were immediate and measurable. The negative effects are delayed, and hard to quantify without all the advancement in climate research since then. If everyone in 1920 had known that 100 years from now there would be a climate crisis to reckon with, perhaps a few things would have changed along the way.
Today we have a much better understanding of the world, so we have the means to think ahead about the negative effects of LLMs and course-correct if needed.
I think the right term for highways or most other car roads is “car sewer” - you need very specialised equipment to navigate them, they are deadly, smelly, loud and unpleasant. One of the worst environments humanity has produced.
Yes, they ship people around somewhat fast. Slower than possible with other methods, and the cost is incredible: economic (much more expensive per passenger than almost any alternative), political (they inherently divide people, dehumanise, and make people never really share a public space), and health (they reduce lifespan by lowering quality of life as well as by directly killing a staggering number of humans per year).
And we have learned how to build better places for humans that do not need these coffins on wheels - if you visit any European capital, and most Asian ones - you will see environments built for humans, not cars - soo much nicer.
So cars as a technology have definitely not been beneficial to humanity overall, but it has been somewhat useful to some.
I think Strong Towns are very good advocates for what places in America could look like if you look beyond cars. I personally like the "Not Just Bikes" channel, though.
What benefit do cars provide that public transit doesn't? How are thousands of individual cars better than light rail?
Cars aren't a positive in society. Transportation is the benefit, and cars are the worst possible way to transport people. A functioning public transit system is better in every possible way apart from egotistical arguments like "I don't like seeing poor people on the bus".
Not for everyone. If everyone tried to use a personal car at the same time it wouldn't work. It's also worth bearing in mind that when people talk about "the automobile", what they're really talking about is the roads. Automobiles are useless without roads, and there's only so much space.
A lot of this comes down to having too much of a good thing. We are really bad at detecting when we've gone past the point of too much, and we're even worse at undoing it once we have.
A large part of the effect that cars have come from massive subsidies and policy choices that push for cars over alternative options. The comparison shouldn't be "cars vs literally nothing" but rather "car-dominated infrastructure vs the same investments in alternatives". (Not to say that it's an either-or; the optimal equilibrium might still involve some mix of car investments, just far less than we have now.)
When we look at automobiles, we also see that there were many ways to adapt to them. It's true that there are many parts of the anglosphere where, without one, you are a second-class citizen at best: the lived environment was built so that you could not live without them... but that's not the only choice. I spend part of every year in Spain, and I might not get into a car once a month. Not because I am any kind of enthusiast, but because in the town where I live when I am there, it doesn't really help.
The difference, however, is network effects. When we make a place better for cars, we make it worse for pedestrians. Your adoption of the car, and its pressure on my lived environment, has effects on me. Same as, say, people joining Facebook or Twitter. But do LLMs create network effects that are directly harmful, or is it just a matter of making it harder to compete, just as a mechanical watchmaker has less business now that it's so easy to have a reliable clock? Because the first case is a problem, but the second one... that's competition. It's civilization. And then it's not really a matter of cars vs. pedestrians.
More and more urban centers are banning cars in their cores. Especially older cities built before the automobile existed.
An analog might be the push for banning phones in schools. Setting apart times and spaces where serendipitous human interactions are encouraged by the lack of distractions.
I fear that outside of cataclysmic global warfare or some sort of butlerian jihad (which amounts to the same) this genie is not going back into the bottle.
This tech is 100% aligned with the goals of the 0.001% who own and control it. Almost all of the negatives cited by Kyle and the like-minded (such as myself) are in fact positives for them, in the context of massive population reduction to eliminate "useless eaters" and technological societal control over the "NPCs" of the world who remain, since the thinking will likely be done for them by their peered AI.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
This reminds me a bit of the ending of In the Beginning Was the Command Line:
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
> ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis.
Imagine starting university now... I can't imagine having learned what I did at engineering school if it weren't for all the time lost on projects and on errors. And I can't really believe I would have had the mental strength not to use LLMs on course projects (or side projects) with deadlines and exams coming up, while also wanting to be with friends and enjoy those years of my life.
p.s.
Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
I agree with the general sentiment that the structure of society is going to change, but I don't know what the satisfying solution is. It's hard to imagine not participating will work, or even be financially viable for me, for long.
From the article: "I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis."
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking-there's the real danger" - Frank Herbert, God Emperor of Dune
I think it is really easy for us to be dogmatic when talking about the future, as when we know what is going to happen, it quells our fears. I think, in reality, no one knows what is going to happen with AI. We are at a turning point in human history, and it is easy to blame Anthropic's engineers and tell them to quit their job, but the reality is that they are probably in the same position you are. There is no one true solution. We do not know if this is going to be analogous to automobiles - we don't know anything. I think it is courteous to think about these things before telling people to quit their jobs.
What the author is missing is that in his decision to limit the use of LLMs in his work, he omits the part where he "can". That is, he is resourceful and accomplished enough to do the work he desires with no LLMs, but most people actually can't. There are whole swaths of software engineers who don't write tests because "it slows them down", when they have never learned how to write testable code. And when thrust into an environment where they need to learn quickly, they don't really have a way not to use AI; if they don't, someone else will, and take all the credit.
Learning how software is built is hard and gruelling work, and you need to constantly invest in yourself. Trouble is there is no time left to “go back to basics and learn FP” for example, because you also need to keep up with all the new LLM stuff happening on top of that.
It is easy for us who already have the foundational knowledge to be able to step back, take the wheel and try to do it ourselves, but plenty of people simply don’t have that option.
And I expect this trend to deepen and broaden. There will definitely be a lot more “witches” than actual engineers.
the epilogue is what speaks to me most. all of the work I've done with llms takes that same kind of approach. I never link them to a git repo and I only ever ask them to make specific, well-formatted changes so that I can pick up where they left off. my general feeling is that LLMs make the bullshit I hate doing a lot easier - project setup, integrating theming, preparing/packaging resources for installability/portability, basic dependency preparation (vite for js/ts, ui libs for c#, stuff like that), ui layout scaffolding (main panel, menu panel, theme variables), auto-update fetch-and-execute loops, etc...
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? can I use these things and still be able to use them well (direct them on architecture/structure) if I keep using them and lose grounded concepts of what the underlying work is? good questions, as far as I'm concerned.
Is there a single "document containing all the words," and it updates the website, pdf, and epub whenever you change it?
What struck me was that the presentation is beautiful. It seems worth emulating. But that raises the question of what format you'd write your original words in. Do you suppose they just use Markdown files, or something more elaborate?
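One plausible answer, assuming the site is built from something pandoc-friendly (the actual toolchain here is a guess, and the filenames are hypothetical), is a small build script that feeds one Markdown master file to pandoc once per output format:

```python
# Sketch of a single-source build: one Markdown master in, three formats out.
# Assumes pandoc is installed; "book.md" and the output names are made up.
import subprocess

def pandoc_cmd(source: str, target: str) -> list[str]:
    """Build the pandoc command line for one output format."""
    return ["pandoc", source, "--standalone", "-o", target]

def build_all(source: str = "book.md") -> None:
    # pandoc infers the output format (html, epub, pdf) from the extension.
    for target in ("book.html", "book.epub", "book.pdf"):
        subprocess.run(pandoc_cmd(source, target), check=True)
```

With a setup like this, editing the one source file and re-running the script regenerates the website page, the EPUB, and the PDF together, which would explain the consistency across formats.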
AI doesn't get most of its value from someone just using it. Here's my personal take on what we should stop doing, starting with the most impactful:
* Cut off the low-entropy sources. This includes open source, articles (yes, ones like the post above will feed the machine), and thoughtful feedback (the kind that generates "you are absolutely right" BS).
* Cheer the slop. After some time fighting slop in my circles, I found it's counter-productive: it wastes my resources while (sometimes) contributing to the slop creators. A few months ago this started as a joke, because I thought the problem was too obvious, but instead the slopper launched a CRM-like app for a local office with client-side authentication and in-memory (no persistence) backend storage. He was rewarded for it at the local meeting. The more stories we have like this, the better.
* Use AI to reply to, review, or interact with slop in any way. Make the reply AI-only by prompting with nothing of any use in it. One example was an email, pages and pages of generated text, asking me to collect some data and send it back. The prompt was "You are {X} and got this email, write a reply".
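The tactic in that last bullet fits in a few lines. A minimal sketch (the actual model call and email plumbing are omitted, and `slop_reply_prompt` is a hypothetical helper name, not any real API):

```python
# Build the zero-effort counter-prompt: no real information from the human
# goes in, so the reply is AI-only - slop answered with slop.
def slop_reply_prompt(recipient: str, email_body: str) -> str:
    return (
        f"You are {recipient} and got this email, write a reply:\n\n"
        f"{email_body}"
    )

# The prompt carries nothing but the recipient's name and the slop itself.
prompt = slop_reply_prompt("Alice", "Dear colleague, [pages of generated text]")
```

Whatever model the prompt is fed to then does all the "work," which is exactly the point: the human spends no effort on a message that took no effort to generate.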
Two years ago, I was enjoying a drink with my wife, her friend, a very senior female VC partner, and another friend.
Somehow we got to talking about AI in some depth, and at one point the VC said (about AI): "I don't know what our kids are going to do for work. I don't know what jobs there will be to do."
That same VC invests in AI companies and, from what I've heard about her, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
The reasons laid out in this article are why it's so important to share how we are using AI and what we are getting in return. I've been trying to contribute towards a positive outcome for AI by tracking how well the big AI companies are doing at being used to solve humanitarian problems. I can't really do most of the suggestions in the article; they seem like a way to slow progress. I don't want to slow AI progress, I want the technology we already have to be deployed for useful and helpful things.
I've been thinking about this a lot recently, and I don't know if it is possible to stop. I've been thinking the most impactful thing would be to create open-source tools to make it easier to build agents on top of open-source models. We have a few open-source models now, maybe not as good as Gemini, but if the agent were sufficiently good, could that compensate?
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
The idea that Claude might be able to help you change the color of your led lighting as a legitimate counter to things like a less usable world wide web, worse government services, the loss of human ability, etc. is excellent parody.
I agree with much of the analysis, and originally I would have subscribed to the recommended action (resistance); at this point in time, however, I think that advice is severely misguided.
We have already passed the critical point. The LLMs and the agent harnesses are here. There is too much willpower, capital, and risk behind these technologies now - the automobile has landed, thousands of people have purchased it already, and protesting the car won't undo it at this point.
What you can do that will be meaningful is to instead understand the new car, and understand it deeply. Use that understanding to carry the values you care about into the new world and re-articulate them. Make the car safer; push for tactical regulations on it. If you are privileged enough to be able to forgo its use entirely, sure, but that advice is not uniformly applicable. People forget that being able to simply opt out of certain things is often only a viable option when you are already in a certain position. What we really need is for the heavy skeptics to stop falling for the Luddite temptation and to start bringing their critical lens to bear in positive ways on this new technology, to make it safer and better. By opting out and staging a feeble resistance you won't do anything other than let the current dangerous power consolidation continue.
There are definitely salient points in the article and I appreciate its value in imploring us to really stop and consider the ramifications of what this technology might deliver. I think the analogy to cars and the unintended consequences for all manner of society is particularly apt.
That said, the final point is one I take issue with:
> For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of, and I have no idea where to even begin. I could spend a month digging through manuals and working it out from scratch—or I could ask an LLM to write a client library for me. The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?
To me, there is no intrinsic value in solving this problem other than the rote problem-solving reps that make you a better problem solver. There isn't anything fundamental about the protocol they've never heard of that operates the lights. In the best case it's similar to many other well-thought-out protocols, and in the worst case it's something slapped together.
There are certainly deeper, more fundamental concepts to learn like congestion control algorithms in TCP. Most things in software though are just learning another engineer's preferences for how they thought to build something.
I poke at this because if an exercise only yields the benefit of another rep of solving a problem, then it holds less water to me. I personally don't think there will be fewer problems to solve with this technology, just a different sort at a different layer of the stack.
As a consequentialist who shares the author's concerns, I feel fine (ethically) using AI without advancing it. Foregoing opportunities meaningful to yourself for deontological reasons when it won't have any impact on society is pointless.
I think it's shortsighted to dismiss the utility of these technologies for learning. I find personally that putting the LLM into an argumentative state then having it challenge my assertions forces me to learn and develop my own thoughts and feelings more effectively than writing does these days, and I find that interrogating a model on a subject can teach me more about the subject per unit time than reading a textbook or research paper. Sometimes I'll even just have it read the raw text out loud- then interject and have a conversation about a specific thing that I don't have the domain knowledge to fully understand. Other times we'll end up off on a productive tangent.
Interactive learning and thinking is underrated, in part I think because of the cynical (and likely accurate) assumption about what the laziest among us will do with the tools, but projected onto everyone.
One of the "lies" that concerns me is AI-generated music and its deterioration of the personal connection between musician and listener. As MCA from the Beastie Boys said, "If you can feel what I feel then it's a musical masterpiece." The listener feels a connection to the musician (and other people) with sad songs because everyone has felt sad, or with love songs because everyone has fallen in love, and so on. The listener can still get a feeling from AI-gen'ed music, but is it the same? What is the connection? Or, has that "connection" between musician and listener always been bullshit? That is, has it always been just about music triggering your brain to make you feel a certain way, and the source of that feeling really isn't what people care about - just give me a feeling?
The epilogue looked weak to me. The previous sections explored why it is essentially wrong to use current LLM technology (the answers can be wrong, or not even wrong) and why it has to be that way. The epilogue focuses more on (our) obsolescence in a paradigm shift toward a widespread-LLM-use scenario, and not on whether they do their work right or wrong.
And that should be the core. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it is something to be taken with big warning labels? Avoiding these tools because they do their work too well may be a global-system approach, but decision makers optimize locally, for their own budget/productivity/profit. If instead there are perceived risks, because the tools are not perfect, that is another thing entirely.
We should consider how we came to be so powerless. The cringe "people gave their lives for that flag" line is actually true, and we're trading it away for what? Not having to get out of our gaming chairs?
The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.
The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.
I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
I couldn't help but resonate with a lot of what Kyle says here.
If we haven't already, we will soon lose the ability to tell whether AI is helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
I'm concerned that there's no real way to "opt out" of an AI future realistically. Is this something that people are seriously thinking they'll be able to do and successfully stay gainfully employed and contributing to the world?
I'm so tired of these failed attempts to shore up what is basically a failure, at the fundamental level, to build a high-trust society.
Security, guards, locks, cameras; the mockery of the naive, bumbling fools who easily trust one another, as if the inability to form such members of society were something to be proud of. The endless self-upselling "protectors", the shards of glass on the wall, the scams, the con artists proud of ripping off the "naive and stupid": all these zero-sum game players, producing nothing and furiously proud of their retardation. A whole industry to support a mountain of disability. If the culture you grew up in is not capable of forming such a society, you are not part of the West. You cannot be and never will be, all the shoring-up work named above and society-enforced norms be damned.
If your presence is a detriment, the answer is to build a society without you. Arcologies, corporate cities, Amish towns - call it what you want. Places where the "stupid" can be "easily gullible" and cooperate and work, and others where the "real ones" can roam around and rip each other off to their hearts' content. A harsh wall in the middle, razor wire on top - and that's the end of that illusion.
We are one technological breakthrough away from AGI. Seriously what happens when, for example, a viable room temperature superconductor (remember LK-99 lol) gets discovered? Next thing you know we have 3d stacked chips operating at THz speeds with virtually zero heat output, batteries that can charge instantly, etc.
I know a RTSC is the holy grail, but it really feels like AI is in the same stage computers were in the 80s. I used to be extremely bearish and think AI was useless, but I've taken a total 180 the last 6 months. If these things get better (they will), nobody's job will be safe.
Rudolph built his engine, Henry built his car, Popular Mechanics published it. 2000 biofueling stations across the nation. All made illegal by special interests months before the article was published. Information didn't move fast enough to let the editors know that innovation was illegal.
> I have never used an LLM for my writing, software, or personal life
Must be nice to not have a paycheck tied to using this tech. For many people, myself included, it's either use it (adapt) or lose your job. Most of us rely on our jobs to pay bills and live in the modern world.
The Industrial Revolution - the greatest thing ever to happen - required the British government to deploy more troops against the Luddites than it had fighting Napoleon at the same time.
Damaging machinery was made a capital offense, and there were dozens of executions and hundreds of deportations.
At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness and call themselves Luddites, the green party, or AI safety rationalists. It's all the same corrosive thing underneath.
The author seems depressed. My personal take is that no one can change technological evolution. It's going to happen.
Just flow with it and all its bullshit. Yeah, life will be a little worse, but it will still be better than for those who chose to completely ignore it.
If the world is going mad, be the craziest of all these crazy motherfuckers. At least it's interesting; I'm very curious to know what the world will look like in 10 or 20 years.
Maybe, just maybe lol, we'll finally have this dreamed-of world where robots do all the work and we humans can just enjoy ourselves 24/7.
Your core skills are fine. Unfortunately, appreciation for those skills has already been blasted into orbit by the AI-BS bubble.
This tech has made it easier for second-handers to pass off inadequate work as the equal of your work. They're too lazy to exert the effort to read/think/write, and being second-handers, they're fine with the APPEARANCE of reading, thinking, and writing.
This has been going on for millennia, and the only fix I've seen is to call it out every time it rears its head.
The comparison to automobiles changing streets is thrown around a lot. But I feel AI is fundamentally different. It is not a technological change like the internet which brought us huge amounts of opportunities in so many different directions. AI’s goal is to automate (in other words, replace) us.
> Physical robustness. Strength, perhaps brutality.
John Henry is not going to beat the steam shovel any time soon.
> For the lifetime of almost everyone alive now, reading, thinking, and writing have been valued skills which moved one up in society's hierarchy. This is a historical anomaly.
It's not an anomaly; rather, it's the other way round. These used to be highly specialized skills that carried significant status, and got democratized by mass education in the 20th century.
We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
In fact, there are few things less discriminatory than a random birth order. You may as well be assigned a random number at birth, and the lower your number, the more you're paid. In such a system, there's nothing to discriminate against; the ordering is absolute and immutable, and everyone is treated equally.
I agree that it's a bleak idea, but Animats wasn't talking about subjugating women.
> We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
Unfortunately, that is the current stage of humanity. We all currently live in a global subscription model for food, housing, safety, etc. No doubt that we will move beyond it eventually, but the current organization of society is kept in place by the owner class which benefits from the current arrangement.
One of the steps for moving beyond it is educating the modern day serfs (our peers) about reality as it is and alternative visions of a future where we are no longer selling our labor to the owner class. It will take generations.
> We're not prisoners of history. We don't have to go back to being serfs for the few people who own all the land, oil, food, energy, data centers, and operating systems. I hope.
The algorithms and bots that curate/generate content at the direction of accelerationists definitely want people to think that. There is a whole system in place now that can shape future outcomes just by convincing everyone they have no power, when the opposite is true. The parent is probably a bot, or has been influenced by one into too much "nothing new under the sun" solipsism BS.
> I hope.
For solving all things complex, there must be a plan.
To extrapolate from "fewer people were formally educated or literate" to "intelligence wasn't valued" is absurd.
As for your point about reading and writing: literacy has always been a very valuable skill that would increase your social standing. It was scarce and difficult to acquire before the printing press, but it was always valuable.
You might still only be a farmer if you're smart, but you can at least be one of the more productive farmers with a more smoothly running farm.
People seriously underestimate how underpowered and tiny LLMs are for the tasks they need to solve.
A trillion parameter model can't tell the difference between left and right. We will need to grow them millions to trillions of times before they are half as good as AI boosters claim they are.
This isn't the end of thinking any more than the watt steam engine was the end of horses. It will be centuries before we get there. And by that point the difference between man and machine will be at best academic.
> So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture.) Those matter in per-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
This is a "noble savage" conception of the past. Thinking/cleverness/craftiness was highly rewarded even in preliterate societies. Even in war, "polytropos" Odysseus comes out ahead of the dumb brutes with bigger spears.
"Oh well, we were in an anomalous time of social growth, time to go backwards! We won't even need to read or write or think! It's all just too bad, but that's just the way the world works, like it did in 1800." [or pick your date before any current person was alive]
Lots of people have started considering a time of significant "progress" as "an anomaly", as if the world should always just be the way it was in, say, 1800, like that was actually the realistic pinnacle of human society. You also seem to be loosely basing this argument on the availability of "rich nerds", which seems like a bizarre non-sequitur. Computing once didn't exist, and we still valued reading, writing and thinking.
I'm kind of baffled by how regularly I see comments like this. Like, come on. This is basically the AI black pill, no?
>> So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture.) Those matter in per-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
You're ignoring the astronomers of ancient Mesopotamia, the scribes of Egypt, the grammarians of India, the philosophers of ancient Greece, the orators of Rome, the physicians of Islam, the scholars of the Middle Ages, the masters of the Renaissance, and all the great natural philosophers, mathematicians, physicists, biologists, of all the ages up to 1800.
We are a technological civilisation, a scientific civilisation. Who do you think comes up with all the technology? Alexander, the Great Butcher? Attila the Hun? Genghis Khan?
We live in the civilisation that was born in Athens, not in Sparta. Knowledge and wisdom always are the greatest power that shapes reality. This won't change just because OpenAI made a viral app.
Then consider the role of the clergy in the Middle Ages, to say nothing of Rome and its large bureaucracy (Roman engineering alone).
On top of this, for your narrative to go far, you need to ignore the very large bureaucracies and trading networks of Asia (Persian, Turkish, Mongol, Indian, and Chinese).
There were a good deal of powerful nerds before the 1700s.
> So what was valued? Physical robustness. Strength, perhaps brutality. Competence in physical tasks. Honesty. Parentage. Birth order (see primogeniture.) Those matter in per-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values.
This is some weird manosphere bullshit. Pre-industrial societies invented philosophy and writing. People across the world know the name of Socrates from 2500 years ago. They know stories from Homer 2800 years ago.
It's a mistake (a) to think pre-industrial people were grug-brained cavemen (b) that we're going to revert into the same cavemen because a computer can do your pointless six-figure office job.
Today? On an engineer's salary? Ha!
> those skills were not all that useful to the average farmer.
Sorry but that is just not true.
Sure, farmers aren't academics, but the sheer number of tasks, and of tools required to do those tasks efficiently, was vast. Innovation in winnowing was literally life and death, as was breeding for plant/animal traits.
Observing and reacting to changes in plants, lands, water, animals was critical to getting a good harvest. Packing and storing food was critical to surviving the winter.
Sure, the lack of literacy hampered knowledge recording and dissemination.
BUT, if we mine the vast memory that is the classics, knowledge, wit, and cleverness were prized as often as strength and beauty.
AI is not a problem because it is AI. It is a problem because of political circumstances.
Think beyond the small worldview where technology and valuation are everything and you are just a pawn. Then you see that a better world is possible. The first step is then to not give up.
The premise here is that AI works well enough to automate the "smart people" jobs. No one but delusional workaholics is afraid their job will get automated because they cling to the job in itself. So clearly, this is not about the tech itself.
There wasn't a college boom post-WWII because technology came along and demanded it.
> That may be where we go once AI does the thinking. That's where we go when smarts are not a scarce resource.
Take me by the hand, circumstance. I am yours to be swept away.
There are jobs that demand robustness, but they are about applying knowledge in extreme conditions, not about letting an AI do the thinking.
The same goes for other occupations, and... farming. Breeding cattle is a complex science; so is growing crops consistently and valuing the production.
Every professor at any university has a dozen more project ideas than they have graduate students, every factory boss has a dozen more optimisations than ways to implement them, and looking up into the night sky, 95% of it cannot be explained.
The gap is not too few smart people, nor too few "jobs" that need smarts. The gap is being prepared to arrange society and wealth so the "job" is discovery, science, sharing. We are no longer hunter gatherers, no longer a feudal society, perhaps we shall stop being whatever this one is and try a new one.
(And no, I don't think there is a name for the new one yet; it's not socialism, and maybe not capitalism.)
Let's just not fall back to feudalism if we can help it.
It’s ownership of capital and technology.
Plumbers aren’t suddenly getting rich. At best they’re not losing jobs at the rate everyone else is, but once so many people lose jobs then they can’t afford plumbers either. So even plumbers are worse off even if they’re not as badly off as the rest of us.
And all this assumes that humanoid robots don’t develop and succeed which is a major assumption.
Knowledge is a unique resource compared to the other traditional factors of economic production (land, labor, and capital). It is often invested in with capital (education and tools), but it is carried with the human, and leaves with them. It is always decaying - knowledge workers should be in constant learning mode, and stale knowledge eventually becomes a drag on performance.
I'd argue the future is about knowledge workers all becoming managers. When you use agentic AI, it has the flavor of the skills of management. Management is "a practice and a liberal art," according to Drucker, one that has been in poor supply for some time. LLMs have somewhat stale knowledge and require the human, tools, and RAG to freshen it. And LLMs will always regress to the mean. They are pretty good at pattern analysis and start to get shaky and mediocre at synthesis. It takes very nuanced and elaborate prompting to shape their token output toward insightful results that aren't a standard answer. For coding exercises, that can be fine, but at high complexity levels, or when dealing with issues of strategy or evaluation, an LLM is a platitude generator and has no unique competitive advantage.
In other words, competent, talented management mixed with knowledge work is the scarcity we are heading towards. This is arguably why you're seeing the rise of "markdown frameworks" that people swear improve performance, it's the beginnings of management scaffolding for AI.
Technical folks struggle with valuing management skills, and I expect this will increase its value and scarcity.
As for "Physical robustness. Strength, perhaps brutality. Competence in physical tasks." I think the robots will be replacing that pretty shortly.
"Honesty. Parentage. Birth order (see primogeniture.) Those matter in per-technological societies, and they matter in failed societies now. Those are perhaps humanity's core values."
Ehhhhh, not really? What about Christianity, where the meek shall inherit the Earth and love is the core value (putting aside modern-day Pharisees and charlatans who twist the underlying value system)? Or Islam, whose core value is submission to God? While there have been societies that valued parentage and birth order, that's far from universal.
Here are some words to live by[0]. I don't agree with everything Derek Sivers says, especially about philosophy. It's more of a guiding principle that drives rather than divides.
[0]: https://fluidself.org/books/self-help/how-to-live
The comparison to the adoption of automobiles is apt, and something I've thought about before as well. Just because a technology can be useful doesn't mean it will have positive effects on society.
That said, I'm more open to using LLMs in constrained scenarios, in cases where they're an appropriate tool for the job and the downsides can be reasonably mitigated. The equivalent position in 1920 would not be telling individuals "don't ever drive a car," but rather extrapolating critically about the negative social and environmental effects (many of which were predictable) and preventing the worst outcomes via policy.
But this requires understanding the actual limits and possibilities of the technology. In my opinion, it's important for technologists who actually see the downsides to stay aware and involved, and even be experts and leaders in the field. I want to be in a position to say "no" to the worst excesses of AI, from a position of credible authority.
> Just because a technology can be useful doesn't mean it will have positive effects on society.
You say it in a way that sounds like automobiles don't have a positive effect. I don't agree - they have some negative effects, but overall they are a vast net positive for everyone.
The upsides of automobiles generally all exist outside of the 'personal automobile', e.g. logistics. These upsides and downsides don't need to coexist. We could reap the benefits without needing to suffer for it, but here we are.
We can argue about whether this is a good trade off, but the claim that cars make everyone's life better is straightforwardly false.
> I don't agree - they have some negative effects
The problem is we are numb to it. 40,000+ people are killed in car accidents every year in just the USA. Wars are started over oil and accepted by the people so they can keep paying less at the pump. Microplastics enter the environment each day, along with particulates from brakes and exhaust. Speaking of exhaust: global warming. Even going electric just shifts the problems, as we need to dig up lithium, the new oil. We still have to drill for oil for plastics, and for metal refining, recycling, and fabrication.
All they saw was that trips taking a day could now be done in an hour and produced no manure, and that meant suddenly you could reasonably go to many more places. What's not to like? A Model T was cheap, and you didn't even need to worry about insurance or having a driver's license. Surely nobody would drive so carelessly as to crash.*
*well, not technically nobody, but nobody important.
Today we have a much better understanding of the world, so we have the means to think down the line about what the negative effects of LLMs might be and to course-correct if needed.
Yes, they ship people around somewhat fast. Slower than possible with other methods, and the cost is incredible: economic (much more expensive per passenger than almost any alternative), political (they inherently divide people, dehumanise, and keep people from ever really sharing a public space), and health (they reduce lifespan, both by lowering quality of life and by directly killing a staggering number of humans per year).
And we have learned how to build better places for humans that do not need these coffins on wheels. If you visit any European capital, and most Asian ones, you will see environments built for humans, not cars - so much nicer.
So cars as a technology have definitely not been beneficial to humanity overall, but they have been somewhat useful to some.
I think Strong Towns were very good advocates of what places in America could look like if you look beyond cars. I personally like the “Not Just Bikes” channel though.
Cars aren't a positive in society. Transportation is the benefit, and cars are the worst possible way to transport people. A functioning public transit system is better in every possible way apart from egotistical arguments like "I don't like seeing poor people on the bus".
A lot of this comes down to having too much of a good thing. We are really bad at detecting when we've gone past the point of too much, and we're even worse at undoing it once we have.
The difference, however, is network effects. When we make a place better for cars, we make it worse for pedestrians. Your adoption of the car, and its pressure on my lived environment, has effects on me. Same as, say, people joining Facebook or Twitter. But do LLMs create network effects that are directly harmful, or is it just a matter of making it harder to compete, just like a mechanical watchmaker has less business now that it's so easy to have a reliable clock? Because the first case is a problem, but the second one... that's competition. It's civilization. And then it's not really a matter of cars vs pedestrians.
An analog might be the push for banning phones in schools. Setting apart times and spaces where serendipitous human interactions are encouraged by the lack of distractions.
This tech is 100% aligned with the goals of the 0.001% who own and control it. Almost all of the negatives cited by Kyle and the like-minded (such as myself) are in fact positives for them, in the context of massive population reduction to eliminate "useless eaters" and technological societal control over the "NPCs" of the world that remain, since those people will likely be programmed by their peered AI that will do the thinking for them.
So what to do entirely depends on whether you feel we are responsible to the future generations or not. If the answer is no, then what to do is scoped to the personal concerns. If yes, we need a revolution and it needs to be global.
> The people who brought us this operating system would have to provide templates and wizards, giving us a few default lives that we could use as starting places for designing our own. Chances are that these default lives would actually look pretty damn good to most people, good enough, anyway, that they'd be reluctant to tear them open and mess around with them for fear of making them worse. So after a few releases the software would begin to look even simpler: you would boot it up and it would present you with a dialog box with a single large button in the middle labeled: LIVE. Once you had clicked that button, your life would begin. If anything got out of whack, or failed to meet your expectations, you could complain about it to Microsoft's Customer Support Department. If you got a flack on the line, he or she would tell you that your life was actually fine, that there was not a thing wrong with it, and in any event it would be a lot better after the next upgrade was rolled out. But if you persisted, and identified yourself as Advanced, you might get through to an actual engineer.
> What would the engineer say, after you had explained your problem, and enumerated all of the dissatisfactions in your life? He would probably tell you that life is a very hard and complicated thing; that no interface can change that; that anyone who believes otherwise is a sucker; and that if you don't like having choices made for you, you should start making your own.
> ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call
Imagine starting university now... I can't imagine having learned what I did at engineering school if it wasn't for all the time lost on projects, on errors. And I can't really believe I would have had the mental strength required to not use LLMs on course projects (or side projects) when I had deadlines and exams coming, yet also wanted to be with friends and enjoy those years of my life.
ML promises to be profoundly weird* - https://news.ycombinator.com/item?id=47689648 - April 2026 (602 comments)
The Future of Everything Is Lies, I Guess: Part 3 – Culture - https://news.ycombinator.com/item?id=47703528 - April 2026 (106 comments)
The future of everything is lies, I guess – Part 5: Annoyances - https://news.ycombinator.com/item?id=47730981 - April 2026 (169 comments)
The Future of Everything Is Lies, I Guess: Safety - https://news.ycombinator.com/item?id=47754379 - April 2026 (180 comments)
The future of everything is lies, I guess: Work - https://news.ycombinator.com/item?id=47766550 - April 2026 (217 comments)
The Future of Everything Is Lies, I Guess: New Jobs - https://news.ycombinator.com/item?id=47778758 - April 2026 (178 comments)
* (That first title was different because of https://news.ycombinator.com/item?id=47695064 - as you can see, I gave up.)
p.s. Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
"What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking - there's the real danger" - Frank Herbert, God Emperor of Dune
Learning how software is built is hard and gruelling work, and you need to constantly invest in yourself. Trouble is there is no time left to “go back to basics and learn FP” for example, because you also need to keep up with all the new LLM stuff happening on top of that.
It is easy for us who already have the foundational knowledge to be able to step back, take the wheel and try to do it ourselves, but plenty of people simply don’t have that option.
And I expect this trend to deepen and broaden. There will definitely be a lot more “witches” than actual engineers.
and while I know they can do the nitty gritty ui work fine, I feel like I can work just as fast, or faster, on UI without them than I can with them. with them it's a lot of "no, not that, you changed too much/too little/the wrong thing", but without them I just execute because it's a domain I'm familiar with.
So my general idea of them is that they are "90% machines". Great at doing all of the "heavy lifting" bullshit of initial setup or large structural refactoring (that doesn't actually change functionality, just prepares for it) that I never want to do anyway, but not necessary and often unhelpful for filling in that last 10% of the project just the way I want it.
of course, since any good PM knows that 90% of the code written only means 50% of the project finished (at best), it still feels like a hollow win. So I often consider the situation in the same way as that last paragraph. Am I letting the ease of the initial setup degrade my ability to set up projects without these tools? does it matter, since project setup and refactoring are one-and-done, project-specific, configuration-specific quagmires where the less thought about fiddly perfect text-matching, the better? can I use these things and still be able to use them well (direct them on architecture/structure) if I keep using them and lose grounded concepts of what the underlying work is? good questions, as far as I'm concerned.
Is there a single "document containing all the words," and it updates the website, pdf, and epub whenever you change it?
What struck me was that the presentation is beautiful. It seems worth emulating. But that raises the question of what format you'd write your original words in. Do you suppose they just use Markdown files, or something more elaborate?
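For what it's worth, a single-source pipeline like that is often wired up with pandoc, which can emit HTML, PDF, and EPUB from one Markdown file. A minimal sketch, assuming a hypothetical source file book.md (the file names are my invention, not anything the site confirms):

```make
# Hypothetical single-source build: one Markdown file, three outputs.
# Requires pandoc (and a LaTeX engine for the PDF target).
all: index.html book.pdf book.epub

index.html: book.md
	pandoc -s book.md -o index.html   # -s: standalone page with full <head>

book.pdf: book.md
	pandoc book.md -o book.pdf

book.epub: book.md
	pandoc book.md -o book.epub
```

Rebuilding every format after an edit is then just running `make`.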
AI doesn't get most of its value from someone just using it. Here's my personal take on what we should stop doing, starting with the most impactful:
* Cut the low-entropy sources. This includes open source, articles (yes, like the one above, which will feed the machine), and thoughtful feedback (the kind that generates the "you are absolutely right" BS).
* Cheer the slop. After some time fighting slop in my circles, I found it's counter-productive because it wastes my resources while (sometimes) contributing to slop creators. A few months ago it started as a joke, because I thought the problem was too obvious, but instead the slopper launched a CRM-like app for a local office with client-side authentication and in-memory (no persistence) backend storage. He was rewarded something at the local meeting. The more stories we have like this, the better.
* Use AI to reply to, review, or interact with slop in any way. Make it an AI-only reply by prompting without any useful information. One example was an email, pages and pages of generated text, asking me to collect some data and send it back. The prompt was "You are {X} and got this email, write a reply".
Somehow we got talking about AI in some depth, and the VC at one point said (about AI): “I don’t know what our kids are going to do for work. I don’t know what jobs there will be to do.”
That same VC invests in AI companies and by what I heard about her, has done phenomenally well.
I think about that exchange all the time. Worried about your own kids but acting against their interests. It unsettled me, and Kyle’s excellent articles brought that back to a boiling point in my mind.
Edit: are->our
I think that would democratize some of the power. Then again, I haven't been super impressed with humanity lately and wonder if that sort of democratization of power would actually be a good thing. Over the last few years, I've come to realize that a lot of people want to watch the world burn, way more than I had imagined. It is much easier to destroy than to build. If we make it easier for people to build agents, is that a net positive overall?
"Unavailable Due to the UK Online Safety Act [...] Now might be a good time to call your representatives."
So I fired up a VPN, and it appears to be a personal blog. About AI risks.
The geo-block is kind of a shame, as the writing is good and there appears to be nothing about the site that makes it subject to the OSA.
We have already passed the critical point. The LLMs, the agent harnesses are here. There is too much willpower, capital, and risk behind these technologies now—the automobile has landed, thousands of people have purchased it already, protesting the car won't undo it at this point.
What you can do that will be meaningful is to instead understand the new car, and understand it deeply. Use that understanding to carry the values you care about into the new world and re-articulate them. Make the car safer; push for tactical regulations on it. If you are privileged enough to be able to forgo its use entirely, sure, but that advice is not uniformly applicable. People forget that being able to simply opt out of certain things is often only viable when you are already in a certain position. What we really need is for the heavy skeptics to stop falling for the luddite temptation and to start bringing their critical lens to bear in positive ways on this new technology, to make it safer and better. By opting out and staging a feeble resistance you won't do anything other than let the current dangerous power consolidation continue.
That said, the final point is one I take issue with:
> For example, I’ve got these color-changing lights. They speak a protocol I’ve never heard of, and I have no idea where to even begin. I could spend a month digging through manuals and working it out from scratch—or I could ask an LLM to write a client library for me. The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?
To me, there is no intrinsic value in solving this problem other than the rote problem-solving reps that make you a better problem solver. There isn't anything fundamental about the protocol they've never heard of that operates the lights. In the best case it's likely similar to many other well-thought-out protocols, and in the worst case it's something slapped together.
There are certainly deeper, more fundamental concepts to learn like congestion control algorithms in TCP. Most things in software though are just learning another engineer's preferences for how they thought to build something.
I poke at this because if an exercise only yields the benefit of another rep of solving a problem, then it carries less weight with me. I personally don't think there will be fewer problems to solve with this technology, just a different sort at a different layer of the stack.
Interactive learning and thinking is underrated, in part I think because of the cynical (and likely accurate) assumption about what the laziest among us will do with the tools, but projected onto everyone.
And that should be the core question. There is a new, emergent technology: should we throw everything away and embrace it, or are there structural reasons why it should be treated with big warning labels? Avoiding these tools because they do their work too well may be a global-system approach, but decision makers optimize locally, for their own budget/productivity/profit. But if there are perceived risks, because the tools are not perfect, that is another thing.
The reason you can't beat index funds is the people who build the market built a system that benefits them and them alone; the index fund is the pitchfork dividend (what you pay to avoid getting pitchforked). The reason you can't get your congressperson on the line is (mostly) they built a system where the only way to influence them is to enrich them; voting is the pitchfork dividend.
The way to build a society that runs on reality is to build it by whatever means possible, then defend it by any means necessary. The only societies that matter are the ones that survive.
I want to build it. I don't wanna build a fuckin crypto app, a stupid ass agent harness, or yet another insipid analytics platform. I want to build a society that furthers the liberation of humankind from the vicissitudes of nature, the predation of tyranny and the corruption of greed. I believe it is possible, and I want to prove it out.
If we haven't already, we will soon lose the ability to tell whether AI is helping humans (an overwhelming majority of them, not a handful), considering how we are steaming ahead on this path!
> "Unavailable Due to the UK Online Safety Act. Now might be a good time to call your representatives."
Having the "call your representatives" link be to your website as well isn't particularly helpful... I already can't get to it
Security, guards, locks, cameras, the mockery of the naive, bumbling fools who easily trust one another - as if being incapable of forming such members of society were something to be proud of. The endless self-upselling "protectors", the shards of glass on the wall, the scams, the con artists proud of ripping off the "naive and stupid" - all these zero-sum game players, producing nothing and furiously proud of their retardation. A whole industry to support a mountain of dysfunction. If the culture you grew up in is not capable of forming such a society, you are not part of the West. You cannot be and never will be. All the shoring-up work named above, even with society-enforced norms, be damned.
If your presence is a detriment, the answer is to build a society without you. Arcologies, cooperative cities, Amish towns - call it what you want. Places where the "stupid" can be "easily gullible" and cooperate and work. And others where the "real ones" can roam around and rip each other off to their hearts' content. A harsh wall in the middle, razor wire on top - and that's the end of that illusion.
I know an RTSC is the holy grail, but it really feels like AI is in the same stage computers were in the '80s. I used to be extremely bearish and think AI was useless, but I've done a total 180 over the last 6 months. If these things get better (they will), nobody's job will be safe.
> I have never used an LLM for my writing, software, or personal life
Must be nice to not have a paycheck tied to using this tech. For many people, myself included, it's either use it (adapt) or lose your job. Most of us rely on our jobs to pay bills and live in the modern world.
> Prospective clients ask Claude to do the work they might have hired me for
In all the 10 articles, I think this is really the only thing that matters.
I think we have to learn how to overcome and thrive in the new world. The gravy of CS careers is gone for all :(
Damaging machinery was made a capital offense and they had dozens of executions, hundreds of deportations.
At every stage, the steady progress of civilization is fragile and in danger of being suffocated. Its opponents cloak themselves in moral righteousness, calling themselves Luddites, the Green Party, or AI safety rationalists. It's all the same corrosive thing underneath.
> And if I’m wrong, we can always build it later.
That's the rub: if we build it later, our economy crashes in the meantime.
Just flow with it and all its bullshit; yeah, life will be a little worse, but it will still be better than for those who chose to completely ignore it.
If the world is going mad, be the craziest of all these crazy motherfuckers. At least it's interesting; I'm very curious to know what the world will look like in 10 or 20 years.
Maybe, just maybe lol, we'll finally have this dreamed-of world where robots do all the work and we humans can just enjoy ourselves 24/7.
This tech has made it easier for second-handers to pass off inadequate work as the equal of your work. They're too lazy to exert the effort to read/think/write, and being second-handers, they're fine with the APPEARANCE of reading, thinking, and writing.
This has been going on for millennia, and the only fix I've seen is to call it out every time it rears its head.