Google has the same AI adoption curve as John Deere (twitter.com)

by andsoitis 26 comments 28 points


[−] aleksiy123 31d ago
Is there a name for the thing where "plausible" sounding, easily digestible narratives with nothing to back them up are used to explain complex interactions that no one really fully understands?

While Google definitely has issues, this ain't the root cause, even if there were only one root cause.

[−] aleksiy123 31d ago
Also, rereading the tweet, the central point seems to be why the adoption curve of AI by (individual?) developers within Google seems to be the same as at other companies.

I think this raises the question: why should we expect the AI adoption curve of Google devs to be any different from any other company's?

Adoption has been pretty rapid everywhere from what I can see; the tooling itself is still fairly unstable as everyone is rapidly iterating.

The productivity gains are there, but they seem much more modest than people like to admit (10-20%).

[−] Spooky23 31d ago
Shitposting.

Yeah Google has a fundamentally broken engineering organization because some influencer dude thinks they should vibe code, with technology those presumably incompetent engineers substantially invented.

[−] mtklein 31d ago
The closest term I know is "just-so story".
[−] solarkraft 31d ago
This is so doomy, like not being all in is the biggest mistake.

> My Google friend and I had this conversation over a month ago. I didn't share it because I wanted to look around a bit, and see if it's really as bad as all that. I've been talking to people from dozens of companies since then. And yeah. It's as bad as all that.

> They may have moats and high walls, but the horde is coming for them all the same.

Can somebody explain how engineering getting a bit cheaper justifies this hysteria?

From the little I know, it seems that there is plenty more to running a software business than engineering, especially if you don’t include project management or product vision in that.

Maybe I’m an AI laggard or naive, but I see plenty of things that can’t easily be automated because I tried.

Maybe I’ll be automated away tomorrow by somebody who believes harder ...

[−] headius 31d ago
A simple rule to follow right now: never trust someone who's selling an AI coding product to tell you how great AI is at coding.

It sounds like my experience has been similar to yours. I have found a few places where agentic coding produces pretty good results... generally very small patches that could have been written by anyone. I give the tool credit for finding small bugs that nobody noticed before.

On the larger or novel tasks I've thrown at these models, including some of the top tier options, the tools have either produced incorrect solutions, solutions written in a very inefficient way, or solutions that actually introduced more problems. I've taken some of these same challenges to other AI experts who couldn't believe the tool failed. None of them were able to get good results either.

Everybody is desperate to carve out their slice of the AI Gold Rush right now before it all condenses down and developers realize they can't give up all agency to coding tools trained on the great mass of garbage that's out there. If at some point these tools truly do make developers 10x more efficient, they'll naturally get adopted. Hype chasers and product marketers are not the ones to listen to right now.

[−] pensatoio 31d ago
This is a very smug post with very little substance.
[−] 827a 31d ago

> Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool.

The 20% in the agentic power user camp broadly refuse to do any external educational communication. They're only interested in pulling the ladder up behind them; that is, if they are climbing the ladder at all, and it's reasonable to have doubts about this for the reason that they broadly allow very little observability into their processes.

[−] code51 31d ago
Anything from Steve Yegge must come with an AI bias disclaimer now.
[−] firefoxd 31d ago
I'm not sure what the expectation is supposed to be. Is Google supposed to have a 100% adoption rate? Even OpenAI and Anthropic don't have that.

I have managers asking similar things at work: how can we increase AI adoption in dev teams? Why though? How does it benefit the manager? How can we increase the vim adoption rate for dev teams?

Google has thousands of mature products. You don't just throw a single solution (AI) to all problems.

[−] pingou 31d ago
What does it have to do with a hiring freeze? They say 20% are agentic power users at Google. Can't they teach the others? Why would someone from outside be somehow more likely to use AI, or more likely to influence their colleagues?
[−] riskassessment 31d ago
Can someone explain to me how and in what way Claude Code is considered "agentic" and Cursor/Gemini CLI/Antigravity are not?
[−] stanfordkid 31d ago
They basically wrote the equivalent to Claude Code and launched it as a product... how does their adoption curve lag behind John Deere?
[−] skizm 31d ago

> How is it that a handful of companies are taking off like a spaceship, and the rest, including Google, are mired in inaction?

Which companies? Not counting companies directly benefiting from selling AI.

[−] krackers 31d ago

> just cancelled IntelliJ for a thousand engineers

IntelliJ can't cost more than the AI provider subscriptions, and it will actually handle large refactors without breaking your codebase.

[−] eigen-vector 31d ago
Yegge is basically posting straight-up slop these days. He's making some extremely confident claims about Google after talking to one person. He didn't stop to think whether that person is the most informed on this.

Somehow Yegge also has problems with the adoption curve being consistent with other companies. Whatever that means.

[−] awkward 31d ago
Why would we expect John Deere to have a low AI adoption curve? Its main product line is just as computerized as Tesla's.
[−] dieortin 31d ago
It’s interesting that your code not being 100% AI slop will get your engineering org called “utterly mediocre” nowadays
[−] danielovichdk 31d ago
20 years at Google and this is the shit he's relaying. Fire his ass.
[−] bayarearefugee 31d ago

> How is it that a handful of companies are taking off like a spaceship, and the rest, including Google, are mired in inaction?

All these companies are "taking off like a spaceship", so... where is all the (quality, non-slop) space traffic?

I use LLMs. I believe LLMs (especially combined with agentic coding) increase coding productivity and, in the right hands, can produce non-slop. But by and large, on a macro level, everything still feels pretty much the same industry-wide as it did last year, and five years ago, and ten years ago.