These aren't related in the way you think they are. Stock price reacts quickly to broader market trends, but more slowly to company-specific trends where revenue is likely stable. The impact of AI on engineering work will take months to show up in the product, and probably a year after that for customers and the market to take notice. An AI product is a different thing entirely.
Interesting data point from the other direction: while CEOs see no productivity impact, engineers are feeling the pressure of AI regardless of actual output gains. In our burnout survey, pressure to do more with AI has emerged as a top-four burnout driver in 2026. That finding didn't exist two years ago.
So even if AI isn't delivering the productivity CEOs expect, it's still extracting a cost from engineers in the form of heightened expectations and stress.
If anyone wants to add their data:
https://docs.google.com/forms/d/e/1FAIpQLSdu-1Sa6oPvhDtFtBuK...
Live results:
rechargedaily.co/state-of-burnout-2026
It seems like that because economic bubbles can last a lot longer than just 3 years. We are also in one of the longest credit cycles ever (2009–present), which has exacerbated this behavior.
I believe the lack of quick, evident profit increases is partly a failure of imagination, or a failure to understand that AI agents are different from people: more impressive or faster in some ways, but much, much less reliable in others.
The evolution of harnesses like Claude Code or opencode, and meta-harnesses like Ralph loops, Gas Town, claws, etc., will progressively allow for gradually better results and abilities even if models stopped evolving. And if the Mythos eval numbers are to be believed, there is still no hard ceiling to be felt yet.
At the same time, small models like Qwen that can run in a PC's VRAM or unified RAM are becoming more useful.
I predict that having more and more loops within loops within loops, and layers of cloud/local models of different capabilities, will solve a great many limitations of LLMs today... at the cost of speed and token count.
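A minimal sketch of what one such loop might look like, in Python. The three helper functions are hypothetical stubs standing in for real local/cloud models and validators, not any particular API:

    def local_generate(task, previous=None):
        return f"local draft for {task!r}"      # stub: small on-device model

    def cloud_generate(task, previous=None):
        return f"cloud draft for {task!r}"      # stub: larger hosted model

    def passes_checks(draft):
        return "cloud" in draft                 # stub: tests, linters, a judge model

    def solve(task, max_rounds=3):
        draft = local_generate(task)            # fast, cheap, less reliable
        for _ in range(max_rounds):
            if passes_checks(draft):
                return draft
            # escalate: hand the failing draft to a stronger model
            draft = cloud_generate(task, previous=draft)
        return draft                            # best effort once the budget is spent

    print(solve("fix the failing unit test"))

The trade-off is visible even in the toy version: each extra layer of checking and escalation buys reliability with more calls and more tokens.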
We've never had a tool that is at the same time so unreliable and complicated as GenAI before. It will take us a minute to figure out how to use it properly.
Unlikely. There’s no change in operating-profit-per-employee trends for major software companies like Alphabet since GenAI became a thing. But MS employees are now making 3 times more profit than they were before Nadella took over. Clearly leadership can make a difference, but there is no visible impact after several years of the technology being available. I can’t imagine a technology that shows no economic impact at all while we figure it out. There ought to be _something_. Yes, big companies have inertia, but Nadella showed clear results in a year.
Actually I think the opposite: we will learn that the most important thing is the ability to manage context and steer these models, instead of using a Rube Goldberg machine. Some of the top-performing agent harnesses on Terminal Bench provide literally one tool, tmux, and outperform Claude Code et al. To me, the most important thing by far when getting reasonable output from these machines is what you put into them.
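For a concrete picture, a one-tool harness is essentially this loop. call_model is a hypothetical stand-in for the LLM call (a canned stub here so the sketch runs), and the only "tool" is executing a command and feeding the output back:

    import subprocess

    def call_model(transcript):
        # hypothetical stand-in for the LLM; here it runs one command and stops
        return "ls" if "$ ls" not in transcript else "DONE"

    def run_agent(task, max_steps=20):
        transcript = f"TASK: {task}"
        for _ in range(max_steps):
            command = call_model(transcript)
            if command.strip() == "DONE":
                break
            result = subprocess.run(command, shell=True, capture_output=True,
                                    text=True, timeout=120)
            # everything the model "sees" is terminal output appended to context
            transcript += f"\n$ {command}\n{result.stdout}{result.stderr}"
        return transcript

    print(run_agent("list the repo"))

Everything interesting lives in what goes into the transcript, which is the point: context management, not tool plumbing.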
I think it's too hard to separate the noise from the signal unless there is some huge differential, like 2x profits immediately following AI adoption, or a real, deep longitudinal study (which no one pushing this at companies seems to want to do).
Has anyone studied the converse? Not using AI leading to loss of productivity? I feel like AI is no longer a "gain" but rather simply a requirement to compete.
Productivity gain or loss is in comparison to something else. In the article, “using AI” is compared to “not using AI.” So the question is, what converse do you want to study? “Not using AI” compared to what?
Productivity per dollar doesn't increase because, at maturity levels 1 and 2, the costs of inference and the extra team load (PR quantity and size) eat up all the gains. Only at level 3 can one see actual productivity impact. Most companies are between levels 1 and 2, where only the costs are rising.
Levels: 0 = no AI; 1 = AI enabled (copilots); 2 = AI assisted (autonomous agent pipelines, not on your PC); 3 = AI measured.
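As a sketch of what level 3 ("AI measured") could mean in practice, with entirely invented numbers; the point is that inference spend and review load sit on the cost side of the ledger:

    # Illustrative arithmetic only -- every figure below is made up.
    hours_saved_per_dev   = 4      # per week, from AI assistance
    review_overhead_hours = 3      # extra PR volume/size to review, per week
    hourly_cost           = 100    # fully loaded developer cost, $/h
    inference_spend       = 60     # AI tooling cost per dev per week, $

    net = (hours_saved_per_dev - review_overhead_hours) * hourly_cost - inference_spend
    print(f"net weekly impact per dev: ${net}")   # $40 here; easily negative at levels 1-2

At levels 1 and 2, nobody computes this number, which is exactly the problem.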
AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon. A bit like the claims a few years back that we’d all have self-driving cars by now.
The most likely outcome is an AI bubble correction that will be somewhat painful and wipe out many/most AI startups, followed by AI settling into day to day in a way that’s useful and found in many places, but not world-as-we-know-it-ending like the AI bros predict.
If AI just means automation, then sure. We absolutely need more automation, and if LLMs are not the mechanism then something else had better be. More automation is the lifeblood of our industry. But are LLMs a game changer or today's fuzzy logic? [1] Time will tell...
P.S. I'm not saying fuzzy logic doesn't have applications, I know rice cookers are a thing, but I think it's safe to say we have other options for controlling non-linear systems these days.
[1] https://www.electronicdesign.com/technologies/embedded/digit...
> the much promised impacts aren’t there and aren’t coming anytime soon
At least according to industry analysts, the thesis at the moment is that reasoning models (which loop over their own output and backtrack if necessary) will bring fidelity close to 100% and find novel solutions not present in the training dataset. But they consume more tokens and require more compute, and the infrastructure for them is still being built, so the outlook for those impacts is ~2030.
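The looping-and-backtracking mechanism can be sketched at toy scale. The two stubs below are hypothetical stand-ins for the model proposing steps and critiquing its own chain, but they show why the fidelity comes at the price of tokens:

    def extend(partial):
        # stub: the model proposes candidate next steps
        return [partial + [step] for step in "abc"]

    def looks_wrong(partial):
        # stub: self-critique; here any chain containing "c" is flagged
        return "c" in partial

    def reason(goal_len=4, token_budget=100):
        # depth-first sketch of generate -> check -> backtrack
        stack, spent = [[]], 0
        while stack and spent < token_budget:
            partial = stack.pop()
            spent += 1                    # every step burns tokens
            if looks_wrong(partial):
                continue                  # backtrack: abandon this branch
            if len(partial) == goal_len:
                return partial, spent     # a complete, self-checked chain
            stack.extend(extend(partial))
        return None, spent

    print(reason())                       # (['b', 'b', 'b', 'b'], 9)

The backtracking is what pushes reliability up, and the step counter is what pushes the bill up.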
What does "fidelity" mean here? Creating perfectly lifelike images and video, or code that is "perfect" even with imperfect inputs? Or something else?
We do have self-driving cars, with Waymo data showing it is clearly better than human drivers in certain markets like Phoenix. It is regulations, laws, and general societal unease that are preventing a rapid, total change. In fact, a robotaxi-only urban area that is continuously mapped might be feasible today, and could probably even reduce the number of cars needed for the population, making them accessible to many more people.
Phoenix is probably about as good a location as you could get for a self-driving car. It’s not yet clear how wide their success will be outside of that niche.
No, it’s actually the same issue with AI in a lot of cases. In perfect conditions it can work reliably, but outside of that it falls apart in a way humans don’t.
This has not been my experience with Waymo. I spent a total of ~3.5 hours riding in Waymos in LA when I was visiting, and their robustness to very unusual situations absolutely floored me.
I am sure you can find truly out-of-distribution cases where the car will make a mistake, but the data shows that this is more rare than a human driver making a mistake.
AI has the same problem. It’s not that it doesn’t work, but that folks just aren’t all that interested in adopting it at scale. Tech makes this “build it and they will come” error a lot. The tech is quite good, but it’s all the non tech aspects of this that are why it’s not getting impact at scale.
The tech is good but not as good as advertised: note how Microsoft simultaneously runs ads saying Copilot can run your business and claims in the EULA that it’s only for entertainment purposes. Self-driving vehicles have a similar struggle: the manufacturers talk about the capabilities but aren’t willing to sign a legal agreement accepting liability for errors except in the easiest situations (and in the case of Waymo, only with pliable governments and enough control that they could immediately halt operations in the event of a major problem).
That’s more “build part of it, say you built all of it, and wonder why they don’t come”.
You’re generalizing too much here. One of the biggest problems with LLMs today is in fact that they are not at the level being advertised. This is not solely a case of regulation standing in the way of a «revolution».
Depends on whether, post-correction, it is worth anyone's money to keep training new frontier models. It could be that it isn't, so we are left with models that were trained in the bubble but are now increasingly out of date, or (open?) models that are trained much more cheaply somehow, with a consequent lack of utility.
Did the hype cycle not have an impact on employment, with the various layoffs? Or is this an admission that the layoffs were for other reasons and were just attributed to AI?
I’m not surprised about productivity though. Efficiency gains are limited by the actual bottlenecks. And truthfully, I think people are deluding themselves a bit about how effective vibe coding is and how much faster they are actually moving when you consider developers still need to form an understanding of the codebase and its systems.
Outside of coding, is there really a use case for LLMs that has the potential to make big efficiency gains? Idk.
I've found the best way for me to wield it is as a tool to build tools. I would never in a million years have been able to code. But I've used it to replace things I was paying hefty monthly subscriptions for...
So I'm not actually being more productive, but I've cut my costs significantly to do the same things I could do before.
On one side, there is the usual process of figuring out how to properly use this new tech. It is to be expected that some experimentation is necessary to figure out what applications AI boosts productivity for and what applications it doesn't. There is unusually strong evangelism pushing AI into everything, so the negatives are going to be salient and may make it hard to spot some of the successes.
On the other side is something a little bit new: Deliberate enshittification. OpenAI and others no doubt saw the power crunch coming years in advance, yet it's still happening and is, ostensibly, the reason why prices are starting to go up. This was not unexpected. It's the business model. Build to the capacity that is cheaply available while offering your customers a sweetheart deal to get them addicted, and then jack up the prices when the competition has no cheap power to build upon. The result is locked in customers and locked out competition.
On one side, you have people learning when AI is appropriate and how to use it efficiently. On the other side, you have a small number of AI companies trying to extract every last bit of value so that any productivity gains wind up in their owners' pockets. Will the gains of more appropriately applying AI be entirely wiped out by enshittification?
This article underlines the stark contrast between the viewpoints of “AI Enthusiasts” and everyone else.
Don’t get me wrong, I use these tools daily. That being said I’m having a very hard time finding where the productivity gains are.
I imagine I’m far from alone in that search and when you pair that with the constant marketing and glowing “analysis” from some of the enthusiasts about how this technology is “solving coding” or “changing the face of security” or even leading to AGI it starts to tickle that part of my brain where I keep blockchain, NFTs and copper bracelets.
So, TL;DR: the tech is good, but the hype-slaves and their masters are killing it with overpromising and under-delivering.
A lot of organizations live in some game theoretic equilibrium that prevents cost improvements from being metabolized by the org without burning the cost elsewhere.
For example, consider a commodity business for software product X. All vendors of this product had their costs of developing new product reduced by a factor of 100 overnight. They could increase their profits, lower their price, or re-invest the dividend. In software, the buyer usually buys on quality, so they all re-invest.
Now they are spending the same amount on product development, for the same price tag, and earning the same profit - but they might be shipping much faster.
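In toy numbers (all invented), that equilibrium looks like this:

    # Every vendor reinvests the windfall, so spend, price, and profit hold still.
    dev_budget = 1_000_000                      # annual product-development spend, $
    cost_per_feature_before = 10_000
    cost_per_feature_after  = cost_per_feature_before / 100

    features_before = dev_budget / cost_per_feature_before   # 100 features/yr
    features_after  = dev_budget / cost_per_feature_after    # 10,000 features/yr

    print(features_before, features_after)     # output moved; the ledger didn't

The 100x cost reduction shows up as output, not margin, because any vendor that pocketed it instead would lose on quality.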
Our stock price has also gone down 70% in the last few months
Naturally, we're pivoting our platform to put AI front and center
So we are all in this "scheme".
Some related discussions recently and months ago:
90% of CEOs Say AI Changed Nothing. The Other 10% Have a PR Team
https://news.ycombinator.com/item?id=47766164
Majority of CEOs report zero payoff from AI splurge
https://news.ycombinator.com/item?id=46696636
https://www.thecity.nyc/2026/04/06/waymo-driverless-cars-tes...
> certain markets like Phoenix
So, basically the easiest robotaxi market on the planet? Call me when it works in Bucharest, Mumbai, Istanbul, Cairo, etc.
For software, the 80% of effort needed to finish the remaining 20% of items is the hardest part, and hardware is even harder.
> AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon.
Even if it doesn't result in increased productivity, AI can still take the fun out of the job (goodbye coding, hello code reviews all day).