CEOs admit AI had no impact on employment or productivity (fortune.com)

by tcp_handshaker 82 comments 92 points


[−] prh8 25d ago
My company has pushed engineering all-in for AI in the last few months

Our stock price has also gone down 70% in the last few months

Naturally, we're pivoting our platform to put AI front and center

[−] dehrmann 25d ago
These aren't related in the way you think they are. Stock price reacts quickly to broader market trends, but more slowly for company-specific trends where revenue is likely stable. The impact of AI in engineering work will take months to show up in the product, probably a year after that for customers and the market to take notice. An AI product is a different thing entirely.
[−] Eddy_Viscosity2 25d ago
Did they try to be a blockchain-first company when that was the rage? Making NFTs and whatnot. Is your CEO just a trend-follower?
[−] Grimblewald 25d ago
The beatings will continue until moral improves
[−] salawat 25d ago
Don't you mean morale? Businesses are basically amoral by desi....ooooooooh. I see what you did there.
[−] rechargedaily 24d ago
Interesting data point from the other direction — while CEOs see no productivity impact, engineers are feeling the pressure of AI regardless of actual output gains. In our burnout survey, AI pressure to do more has emerged as a top 4 burnout driver in 2026. That finding didn't exist two years ago. So even if AI isn't delivering the productivity CEOs expect, it's still extracting a cost from engineers in the form of heightened expectations and stress. If anyone wants to add their data: https://docs.google.com/forms/d/e/1FAIpQLSdu-1Sa6oPvhDtFtBuK...

Live results: rechargedaily.co/state-of-burnout-2026

[−] davebren 25d ago
Are businesses all running on sunk cost fallacy now? These findings have been coming out for a while but it doesn't seem to change anything.
[−] flextheruler 25d ago
It seems like that because economic bubbles can last a lot longer than just 3 years. We are also in one of the longest credit cycles ever (2009 to present), which has exacerbated this behavior.
[−] grebc 25d ago
They’ll say no but really… you know.
[−] gozucito 25d ago
I believe the lack of quick, evident profit increases is partly a failure of imagination, or a failure to understand that AI agents are different from people: more impressive or faster in some ways, but much, much less reliable in others.

The evolution of harnesses like Claude Code or OpenCode, and meta-harnesses like Ralph loops, Gas Town, claws, etc., will progressively allow for better results and capabilities even if models stop evolving, and if the Mythos eval numbers are to be believed, there is still no hard ceiling to be felt yet.

At the same time, small models like Qwen that can run in a PC's VRAM or unified RAM are becoming more useful.

I predict that having more and more loops within loops within loops, and layers of cloud/local models of different capabilities, will solve a great many limitations of LLMs today... at the cost of speed and token count.

We've never had a tool that is at once as unreliable and as complicated as GenAI. It will take us a minute to figure out how to use it properly.

[−] belZaah 25d ago
Unlikely. There’s been no change in operating-profit-per-employee trends for major software companies like Alphabet since GenAI became a thing. But MS employees are now generating three times more profit than they were before Nadella took over. Clearly leadership can make a difference, but there is no visible impact after several years of the technology being available. I can’t imagine a technology that shows no economic impact at all while we figure it out. There ought to be _something_. Yes, big companies have inertia, but Nadella showed clear results in a year.
[−] EPWN3D 24d ago
That's not apples to apples due to Microsoft's massive force reductions and Azure's massive growth.
[−] slopinthebag 25d ago
Actually, I think the opposite: we will learn that the most important thing is the ability to manage context and steer these models, instead of using a Rube Goldberg machine. Some of the top-performing agent harnesses on Terminal Bench provide literally one tool, tmux, and outperform Claude Code et al. To me, the most important factor by far in getting reasonable output from these machines is what you put into them.
[−] Avicebron 25d ago
I wish anytime someone used the word "productivity" there was an accompanying definition.
[−] andrekandre 23d ago
it's because it's all vibes

i think it's too hard to separate the noise from the signal unless there is some huge differential, like 2x profits immediately following AI adoption, or a real, deep longitudinal study (which no one pushing this at companies seems to want to do)

[−] ritcgab 25d ago
They all know that, and we all know that.

So we are all in this "scheme".

[−] charlie90 25d ago
Has anyone studied the converse? Not using AI leading to loss of productivity? I feel like AI is no longer a "gain" but rather simply a requirement to compete.
[−] jdlshore 25d ago
Productivity gain or loss is in comparison to something else. In the article, “using AI” is compared to “not using AI.” So the question is, what converse do you want to study? “Not using AI” compared to what?
[−] antisthenes 25d ago
Ah yes, first the return to office, now being forced to use AI in 50%+ of projects. Will the ingenuity of modern executives never cease?
[−] ChrisArchitect 25d ago
Repost from February; many referencing the same NBER report.

Some related discussions recently and months ago:

90% of CEOs Say AI Changed Nothing. The Other 10% Have a PR Team

https://news.ycombinator.com/item?id=47766164

Majority of CEOs report zero payoff from AI splurge

https://news.ycombinator.com/item?id=46696636

[−] zihotki 25d ago
Productivity per dollar doesn't increase because at maturity levels 1 and 2 the costs of inference and the extra team load (PR quantity and size) eat up all the gains. Only at level 3 can one see actual productivity impact. Most companies are between levels 1 and 2, and that's where only the costs are rising.

Levels: 0 - no AI, 1 - AI enabled (copilots), 2 - AI assisted (autonomous agent pipelines not on your PC), 3 - AI measured.

[−] cmiles8 25d ago
AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon. A bit like the claims a few years back that we’d all have self driving cars by now.

The most likely outcome is an AI bubble correction that will be somewhat painful and wipe out many/most AI startups, followed by AI settling into day to day in a way that’s useful and found in many places, but not world-as-we-know-it-ending like the AI bros predict.

[−] ua709 25d ago
If AI just means automation, then sure. We absolutely need more automation and if LLMs are not the mechanism then something else better be. More automation is the life blood of our industry. But are LLMs a game changer or today's fuzzy logic? [1] Time will tell...

[1] https://www.electronicdesign.com/technologies/embedded/digit...

P.S. I'm not saying fuzzy logic doesn't have applications, I know rice cookers are a thing, but I think it's safe to say we have other options for controlling non-linear systems these days.

[−] negura 25d ago

> the much promised impacts aren’t there and aren’t coming anytime soon

at least according to industry analysts, the thesis at the moment is that reasoning models (which loop over their own output and backtrack if necessary) will bring fidelity close to 100% and find novel solutions not present in the training dataset. but they consume more tokens, they require more computing and the infra for it is still being built. so the outlook for those impacts is ~2030

[−] andrekandre 23d ago

> fidelity close to 100%

what does fidelity mean here? creating perfectly lifelike images and video, or code that is "perfect" even with imperfect inputs? or something else...?
[−] newyankee 25d ago
We do have self-driving cars, with Waymo data showing they are clearly better than human drivers in certain markets like Phoenix. It is human regulations, laws, and general societal unease that are preventing a total rapid change. In fact, a robotaxi-only urban area that is continuously mapped might be feasible today, and could probably even reduce the number of cars needed for the population, making it accessible to many more.
[−] afavour 25d ago
As a counterpoint, Waymo conducted a pilot in NYC then abandoned the permit for it:

https://www.thecity.nyc/2026/04/06/waymo-driverless-cars-tes...

Phoenix is probably about as good a location as you could get for a self driving car. It’s not yet clear how wide their success will be outside of that niche.

[−] oblio 25d ago

> certain markets like Phoenix

So, basically the easiest robotaxi market on the planet? Call me when it works in Bucharest, Mumbai, Istanbul, Cairo, etc.

For software the last 80% of effort needed to finish the 20% remaining items is the hardest and hardware is even harder.

[−] nothinkjustai 25d ago
No, it’s actually the same issue with AI in a lot of cases. In perfect conditions it can work reliably, but outside of that it falls apart in a way humans don’t.
[−] namr2000 25d ago
This has not been my experience with Waymo. I drove a total of about ~3.5 hours in Waymos in LA when I was visiting and their robustness to very unusual situations absolutely floored me.

I am sure you can find truly out-of-distribution cases where the car will make a mistake, but the data shows that this is more rare than a human driver making a mistake.

[−] acdha 25d ago
How many times did they need remote assistance? Those teams aren’t driving remotely but Waymo doesn’t pay for entire groups to exist without need.
[−] cmiles8 25d ago
AI has the same problem. It’s not that it doesn’t work, but that folks just aren’t all that interested in adopting it at scale. Tech makes this “build it and they will come” error a lot. The tech is quite good, but it’s all the non tech aspects of this that are why it’s not getting impact at scale.
[−] acdha 25d ago
The tech is good but not as good as advertised: note how Microsoft is simultaneously running ads saying Copilot can run your business and claiming it’s only for entertainment purposes in the EULA? Self-driving vehicles have a similar struggle where the manufacturers talk about the capabilities but aren’t willing to sign a legal agreement accepting liability for errors except in the easiest situations (and in the case of Waymo, only with pliable governments and control so they could immediately halt operations in the event of a major problem).

That’s more “build part of it, say you built all of it, and wonder why they don’t come”.

[−] civvv 25d ago
You’re generalizing too much here. One of the biggest problems with LLMs today is in fact that they are not at the level being advertised. This is not solely a case of regulation standing in the way of a «revolution».
[−] grebc 25d ago
Ever driven in Bali?
[−] palmotea 25d ago

> AI isn’t going away, but it’s also clear the much promised impacts aren’t there and aren’t coming anytime soon.

Even if it doesn't result in increased productivity, AI can still take the fun out of the job (goodbye coding, hello code reviews all day).

[−] somewhereoutth 25d ago
depends if post-correction it is worth anyone's money to keep training new frontier models. It could be that it isn't, so we are left with models that were trained in the bubble, but are now increasingly out of date, or (open?) models that are trained much more cheaply somehow with consequent lack of utility.
[−] cmiles8 25d ago
Good point. At some point there will be a reality check for the giant pile of burning cash that is new model training.
[−] hsuduebc2 25d ago
Was there any recent technology that really delivered what was the general promise?
[−] grebc 25d ago
Starlink is pretty darn good.
[−] nothinkjustai 25d ago
Did the hype cycle not have an impact on employment with the various layoffs? Or is this an admission that the layoffs were for other reasons and were just attributed to AI?

I’m not surprised about productivity though. Efficiency gains are limited by the actual bottlenecks. And truthfully, I think people are deluding themselves a bit about how effective vibe coding is and how much faster they are actually moving when you consider developers still need to form an understanding of the codebase and its systems.

Outside of coding, is there really a use case for LLMs that has the potential to make big efficiency gains? Idk.

[−] smalltorch 25d ago
I've found the best way for me to wield it is the tool to build tools. I would have never in a million years been able to code. But I've used it to replace things I was paying hefty monthly subscriptions for....

So I'm not actually being more productive, but I've cut my costs significantly to do the same things I could do before.

[−] beloch 25d ago
There's an interesting race happening here.

On one side, there is the usual process of figuring out how to properly use this new tech. It is to be expected that some experimentation is necessary to figure out what applications AI boosts productivity for and what applications it doesn't. There is unusually strong evangelism pushing AI into everything, so the negatives are going to be salient and may make it hard to spot some of the successes.

On the other side is something a little bit new: Deliberate enshittification. OpenAI and others no doubt saw the power crunch coming years in advance, yet it's still happening and is, ostensibly, the reason why prices are starting to go up. This was not unexpected. It's the business model. Build to the capacity that is cheaply available while offering your customers a sweetheart deal to get them addicted, and then jack up the prices when the competition has no cheap power to build upon. The result is locked in customers and locked out competition.

On one side, you have people learning when AI is appropriate and how to use it efficiently. On the other side, you have a small number of AI companies trying to extract every last bit of value so that any productivity gains wind up in their owners' pockets. Will the gains of more appropriately applying AI be entirely wiped out by enshittification?

[−] Simulacra 25d ago
Then why the layoffs???
[−] expedition32 25d ago
Dutch AI would just demand a 3 day workweek.
[−] throwuxiytayq 25d ago
they’re holding it wrong.
[−] ofjcihen 25d ago
This article is underlining the stark contrast between the viewpoints of “AI Enthusiasts” and everyone else.

Don’t get me wrong, I use these tools daily. That being said I’m having a very hard time finding where the productivity gains are.

I imagine I’m far from alone in that search and when you pair that with the constant marketing and glowing “analysis” from some of the enthusiasts about how this technology is “solving coding” or “changing the face of security” or even leading to AGI it starts to tickle that part of my brain where I keep blockchain, NFTs and copper bracelets.

So TLDR the tech is good but the hype-slaves and their masters are killing it with overpromising and under delivering.

[−] 10sunbee 25d ago
[dead]
[−] lumost 25d ago
A lot of organizations live in some game theoretic equilibrium that prevents cost improvements from being metabolized by the org without burning the cost elsewhere.

For example, consider a commodity business for software product X. All vendors of this product had their costs of developing new product reduced by a factor of 100 overnight. They could increase their profits, lower their price, or re-invest the dividend. In software, the buyer usually buys on quality - so they all re-invest.

Now they are spending the same amount on product development, for the same price tag, and earning the same profit - but they might be shipping much faster.