I had to code something on a plane today. It used to be that all you lost was the ability to fetch your packages or check Stack Overflow. But now, I'm useless. My mind has turned to pudding. I cannot remember basic boilerplate stuff. Crazy how fast that goes.
All skills degrade with disuse. For example, here in Canada we have observed a literacy and numeracy skills curve that peaks with post-secondary education and declines with retirement.[0]
Use it or lose it, as it were.
0: https://www150.statcan.gc.ca/n1/daily-quotidien/241210/dq241...
In my 7th year of professionally programming Node, I have never once remembered the Express or HTML boilerplate, nor the router definitions or middleware. Yet I can code normally provided there's internet access. It's simply not worth remembering; logic and architecture are worth more, IMO.
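The sort of thing I mean is roughly this (a sketch only; the route and port are just examples):

    // app.js - the kind of Express boilerplate that never sticks in memory
    const express = require('express');
    const app = express();

    app.use(express.json()); // middleware: parse JSON request bodies

    const router = express.Router();
    router.get('/hello', (req, res) => {
      res.json({ message: 'hello' });
    });

    app.use('/api', router); // mount the router under /api

    app.listen(3000, () => console.log('listening on :3000'));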
I thought this comment was going the opposite way - previously no internet meant no googling, but now you can run a local model and figure things out without needing the internet at all.
Others have addressed other aspects of this, but I want to address this:
> I cannot remember basic boilerplate stuff.
I don't know exactly what you mean by boilerplate stuff, but honestly, that's stuff we should have automated away prior to AI. We should not be writing boilerplate.
I'd highly encourage you to take the time to automate this stuff away. Not even with AI, but with scripts you can run to automate boilerplate generation. (Assuming you can't move it to a library/framework).
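Even a tiny generator script goes a long way. A sketch of the idea in Node (the file names and the template are placeholders, not any particular tool):

    #!/usr/bin/env node
    // scaffold.js - stamp out a minimal Express starter so boilerplate
    // never has to be typed from memory (template is illustrative)
    const fs = require('fs');
    const path = require('path');

    const target = process.argv[2] || 'my-app';
    fs.mkdirSync(target, { recursive: true });

    const appJs = [
      "const express = require('express');",
      "const app = express();",
      "app.use(express.json());",
      "app.get('/health', (req, res) => res.json({ ok: true }));",
      "app.listen(process.env.PORT || 3000);",
      "",
    ].join('\n');

    fs.writeFileSync(path.join(target, 'app.js'), appJs);
    console.log('scaffolded ' + path.join(target, 'app.js'));

Run it as "node scaffold.js my-app"; the same pattern covers routers, Dockerfiles, CI config, or whatever else keeps getting retyped.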
For my money, while surely it must have been jarring, that experience would seem to say that on-device LLMs are more important programming tools than package repositories.
As another commenter said, the affordability of LLM subscriptions (or, as others are predicting, the lack thereof) is the primary concern, not the technology itself stealing away your skills.
I am far from the definitive voice in the does-AI-use-corrupt-your-thinking conversation, and I don't want to be. I don't want LLMs to replace my thinking as much as the next person, but I also don't want to shun anything useful that can be gained from these tools.
All that said, I do feel that perhaps "dumber" LLMs that work on-device first will let us get further and will make for better, more reliable tools overall.
Will you do anything differently knowing this? Does the risk of LLMs being unaffordable to you in the near future make you wary about losing the skills?
> But now, I'm useless. My mind has turned to pudding.
I do use AI daily to help me enhance code, but then... I also very regularly turn off, physically, the link between a sub-LAN at home and the Internet, and I can still work. It's incredibly relaxing to work on code without being connected 24/7. Other machines (like the kid's Nintendo Switch) can still access the Internet, but my development machines are cut off. And as I've got a little infra at home (Proxmox / VMs), I have quite a few services that don't need to be connected to the entire world: for example I've got a pastebin, a Git server, and a backup procedure, all 100% functional without being connected to the net (well, 99% for the backup procedure, as the encrypted backup files won't be synced to remote servers until the connection is operational again).
Sure it's not a "laptop on a plane", but it's also not "24/7 dependent on Sam Altman or Anthropic".
I'll probably enhance my setup at some point with a local Gemma model too.
And all this is not mutually exclusive with my Anthropic subscription: at the flick of a switch (which is right in front of me), my sub-LAN can access the Internet again.
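For the curious, the "sync the encrypted backups once the connection is back" part needs nothing fancy. A minimal sketch of that kind of step, where the host, paths, and use of rsync are placeholders rather than my actual setup:

    // sync-backups.js - push encrypted backups only when the remote is reachable
    // (hypothetical host and paths; assumes rsync and ping are installed)
    const { execSync } = require('child_process');

    const REMOTE = 'backup.example.com';
    const SRC = '/srv/backups/encrypted/';
    const DEST = 'backup@' + REMOTE + ':/data/offsite/';

    function remoteReachable() {
      try {
        execSync('ping -c 1 -W 2 ' + REMOTE, { stdio: 'ignore' });
        return true;
      } catch (err) {
        return false;
      }
    }

    if (remoteReachable()) {
      // --partial lets an interrupted transfer resume on the next run
      execSync('rsync -a --partial ' + SRC + ' ' + DEST, { stdio: 'inherit' });
    } else {
      console.log('no connectivity; backups stay local for now');
    }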
I'm old enough to have programmed C on an IBM PC with the book Turbo C/C++ [1] at my desk as a reference, around 1993.
I remember that at the time, my "mentor" suggested memorizing all the "keywords" of C (which were few). But given my bad memory, I had to constantly look at the book.
Aaah how times have changed.
[1] https://openlibrary.org/books/OL1601323M/Turbo_C_C
Even for years before AI, I wasn't able to code without reading a sample of code first. Maybe it's just what happens when you're a polyglot, but even for stupid things, like how to declare a class in whatever language, I had to see one. But once I saw a sample of code I'd get back into it. Then there's stuff I never committed to memory, like the nonsensical dance of reading from a file in Go, or whatever. So I don't think this is all AI tbh.
It was a long time ago, but I attended a session by IBM at an OO conference. The speaker's claim was that the half-life of programming language knowledge was 6 months, i.e. if not reinforced, that's how fast it goes.
I learned the Q array language five years ago and then didn't touch it for six months. I was surprised how little I remembered when I tried to resume.
I thought the same, but I tried to create a small Django project with APIs and a small React frontend from scratch: no LLM, no autocomplete, just a text editor. I was surprised it was all still there after a couple of hours. Not sure it's a skill that's all that useful today; it feels like remembering your multiplication table.
Maybe it's my memory issues, but I personally could never remember basic boilerplate. 30 years ago I would spend half of my time in Borland's help menu coupled with grepping through man pages. These days I use LLMs, including ollama when on a plane. I don't feel worse off.
I'm currently looking for somewhat niche clothes for an event, and it's the first time I've had to give up on buying online because of the sheer amount of AI-generated pictures. Going to a physical store was just a much better experience; I can't recall the last time that happened. Almost all sellers on Etsy are using AI for their pictures.
The Perez model contains a falsification test the article doesn't apply to its own thesis. In Perez's framework, the installation phase is characterized by financialization, frothy infrastructure bets, and capital rushing toward uncertain new technology—exactly the behavior we see with US AI investment (hyperscalers committing $500B+ to uncertain infrastructure, speculative valuations). Deployment phases look like industrial efficiency gains and normal returns. By those criteria, US AI investment is behaving like an installation-phase bet, not late-deployment optimization.
The article's US-China comparison quietly reveals the prediction that would follow from the thesis: if the Perez 'late deployment' framing is right, then the Chinese model—lean, industrial, healthcare and education application, grounded in near-term ROI—is betting correctly on where we are in the curve and should outperform over the next decade. That's a concrete, testable claim that would validate or falsify the argument independently of whether AI constitutes a 'new surge.'
I view this post as primarily pattern-matching and storytelling. But I think there’s a buried truth there, and that they were nibbling at the edges of it when they started talking about the overlapping stages.
There are some very interesting information network theories that present information growth as a continually evolving and expanding graph, something like a virus inherent to the universe’s structure, as a natural counterpoint to entropy. And in that view, atomic bonds and cells and towns and railroads and network connections and model weights are all the same sort of thing, the same phenomenon, manifesting in different substrates at different levels of the shared graph.
To me, that’s a much better and deeper explanation that connects the dots, and offers more predictive power about what’s next.
Highly recommend the book Why Information Grows to anyone whose interest is piqued by this.
I think it's clear to me that AI will be both things:
1) as in the article, a contraction of work: industrialization getting rid of hand-made work, or the contraction of all things horse-related when the internal combustion engine came around
but it will also be
2) new technologies and ideas enabled by a completely new set of capabilities
The real question is if the economic boost from the latter outpaces the losses of the former. History says these transitions aren't easy on society.
But also, the AI pessimism is hard to understand in this context- do people really believe no novel things will be unlocked with this tech? That it's all about cost-cutting?
It seems really premature to talk about AI being the end of anything. What's at an end stage is the adoption of smartphones and the monetization of human attention. That's been the fuel that powered the last quarter century of tech gains, and while still huge in absolute terms, it has been running out of steam as a growth engine and facing cultural pushback (e.g. social media lawsuits) for a while.
AI so far has really only shown massive utility for programming. It has broad potential across almost all knowledge work, but it’s unclear how much of that can be fulfilled in practice. There are huge technical, UX and social hurdles. Integrating middle brow chatbots everywhere is not the end game.
tangentially related, but as someone who built multiple internet businesses -- mostly unsuccessful, some mildly successful -- I barely have any new ideas to work on.
I don't know if this is the effect of relying on AI too much in my day-to-day work or leading a more monotonous life as of late, but I'm sure I'm not the only one. Lots of ideas that I could have built before LLMs took over now seem trivial to build with Claude & friends.
The lack of robotics mention somewhat undermines this article.
I don't think it's intrinsically wrong, we are in a late stage of a transformation. Software is eating the world and AI is (so far) most profitably an automation of software.
There is plenty of money to be made along the way. I don't really buy the article's seeming confusion about where the money is going to come from. Anthropic is making billions and signing up prodigious amounts of recurring revenue every month.
The question it raises is if this is the fake surge, the one we see, what is the real one we don't see? Renewable energy comes to mind. Robotics too but maybe that's too tied up with AI.
Introduction of new mass production techniques often has an initial wave of high profit when early adopters have an initial advantage... existing workers are more efficient... but this will be followed by a long-term decline in the rate of profit as margins aggressively fall...
e.g. if every software company uses AI to double its coding speed, the price of software will eventually drop by half.
As "AI" becomes a required and common commodity input, competition will drive prices down until the productivity gains are entirely captured by customers, leading to margin compression across the sector.
Also... firms will be forced to invest in using AI just to stay in the same place. If you don't adopt it aggressively, you'll be priced out; if you do, your margins still shrink because everyone else did too.
So... yeah, I don't think this is the next part of a "digital wave" if that means a giant increase in new startup investments, SaaS companies, etc. I think it's actually probably the start of a margin collapse and consolidation in our industry.
If it's 2x easier to build e.g. a CRM, we’ll end up with 10x more CRMs, leading to a "race to the bottom" on pricing.
The last 15 years of investment by people like YC etc. seem to have been in businesses that were "like Uber, but for X": service businesses on which a small layer of software automated things and drove some sort of explosion of customers. I don't really see how VCs are going to separate wheat from chaff on this front anymore. If anybody can do it, what's the value of any particular approach over the others? I'd think the result would be consolidation.
So I suppose if you're selling "the means of production" in the form of GPUs you're in a good spot, but even that is likely to be subject to aggressive downward pricing.
if this lasts to a point where AI has actual automation ability, it's not a tool for humans anymore. it could have an identity and start to evolve, literally.
i don't understand why some people consider AI just another tech revolution.
maybe it's because i'm into sf, but AI can be something other than just a tool.
Probably a bit unrelated, but I'm wondering if there is any economic theory that actually predicted something for real, rather than extrapolating trends from past data in hindsight - even across different kinds of events.
Honest question, I'm not trying to mock economists or anything like that.
I sort of agree with the premise of the article. I ask myself, did more non-technical people pick up AI chat bots when they were invented than picked up personal computers in the late 70s/early 80's? I think probably. From my conversations with others.
I could totally see it. Recently a social club opened near me, and it has 100+ people attending weekly, all young 20-30 year olds early in their careers.
Separately, I have a local camera repair shop, and my friend told me it's a 2-month backlog to get your film-based camera worked on.
Ultimately, if the deal we get online is infinite tracking, infinite scrolling, and infinite enshittification, real life starts to sound a whole lot better.
this perez model thing completely misses the communications revolutions of the telegraph, radio and television not to mention demonopolization of bell.
> Then came AI, revealing new dynamics. ChatGPT’s breakthrough didn’t come from a garage startup but from OpenAI,
i thought the transformer and large language models came from google research.
> There’s also social pushback—in the UK the campaigns against big ringroad schemes started in the late 1960s and early 1970s. And perhaps we’re seeing some of that about AI. The U.S. map of local pushback against data centres from Data Center Watch covers the whole of the country, in red states and blue. People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
the us had the highway revolts. in most cities where the revolts succeeded it is widely heralded today as a success.
the data center hate is interesting. i think many people are just learning what data centers are. but that said, they've come to represent something different in recent years. previously they were part of the infrastructure that made industry hum, now public messaging from tech leaders and academics is along the lines of "this is how your livelihood is going to be replaced" while the institutions that are supposed to provide any sort of backstop are being dismantled or slashed to pieces by crazypants trumpist politics. i think focusing the energy on the tangible like mundane buildings is interesting, but the hate makes a lot of sense.
addressing the core thesis, i'd argue that ai is not the next step in the 70s digital technological wave (especially considering the future of ai compute is probably hybrid digital-analog systems), but rather is something fundamentally new that also changes how technology interacts with society and how economics itself will function.
previous systems helped, these systems can do. that's a fundamental change and one that may not be compatible with our existing economic systems of social sorting and mobility. the big question in my mind is: if it succeeds, will we desperately try to hold onto the old system (which essentially would be a disaster that freezes everyone in place and creates a permanent underclass) or will we evolve to a new, yet to be defined, system? and if so, how will the transition look?
Every time I see these, I ask myself: is Microsoft Copilot a problem of implementation, or of the capability of the models?
I have ZERO doubt that if you put people who haven't used a computer in front of one, with Copilot everywhere (and I mean not the way it is now, but where you're presented with a chatbox in the middle of the screen and you just ask the computer what you want), then I am 99.99% sure everyone would prefer that chatbox over trying to figure out how to use a computer. That is why I am not quick to discredit "microslop": they're most likely pivoting Windows toward how it will look in the future.
Obviously, the strongest argument here is that it should have been an entirely different product, such as a "Windows AI" where the entire system is designed around it. But if you look at their current implementation, it's more of a copilot which is just there, letting you know it exists. Obviously, not all of these features were thought through. Recall, for example, should have been dead and buried, since it doesn't offer that much real value compared to a magical box that takes in English sentences and does roughly what you want.
At the end of the day, it's a question of whether AI is doing, or will do, more harm than good. AI has really only existed in this form for a little more than 3 years, and it really started shining with the advent of Opus 4.5. We went from models producing more security vulnerabilities than one can count to models fixing obscure human-made ones, and the capabilities will keep increasing (if Anthropic is to be believed). We will enter an era where it has 95%+ accuracy in doing what a typical computer user would want from AI, and there's really nothing anyone can do to stop it.
So my opinion is that AI will be the next big thing and it might spread way beyond what we can even imagine.
I think we will have things like non-technical people just talking on the phone with an AI agent to get a website done: register a domain and have a site up within a 1-hour phone call, all for pennies, while the AI has access to their financials, mail, and other things. All of that is relatively possible today, with the simple caveat of security, and I do believe we have enough smart people in the world to figure out how to make AI better at rejecting social engineering than 99% of humans.
AI is destroying the economic premise that has drawn so much investment into Silicon Valley. It's going from a capital-light business model with network-driven moats that allow market domination to a capital-heavy, high-burn-rate model with the potential to not only offer ZERO moat protection but destroy the moats that already exist. Cloud infrastructure plus vibe coding now make it possible to quickly replace existing apps with custom-fit alternatives. Open source plus cheap Chinese LLMs may not be as good as Opus, but maybe good enough turns out to be good enough (Sun Microsystems vs. Linux is a good example). Currently, AI has just as much potential to destroy Silicon Valley as it does to build it up.
The theory doesn't seem to make much sense to me - like why can't there be simultaneous technological revolutions? And why would they last an arbitrary 50-60 years?
> People seem to hate Google’s inserting of AI tools into its search results, and hate even more that it is all but impossible to turn it off.
That could do with a solid citation tbh. The anti-AI people are really vocal on social media but personally I like having the AI results given how awful navigating the modern internet has become with all the cookie banners and anti-Ad Blocker popups etc.
Honestly, the LLMs seem like the most transformative technology we've had since the release of the iPhone.
Were people actually physically typing every character of the software they were writing before a couple of years ago?
The fact that this is being called out is strange.
Yes, you lost some abilities. Install a local model so you have someone to talk to while you are on the plane ;)
it could be the beginning of a vast and infinite potentia spreading out beyond us