> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think onboarding might be challenging for some people in the tech's current state.
It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, and onboarding will get easier. I think it will also become easier to choose a particular suite of products. Waiting is not a bad idea.
What annoys me a bit is companies forcing AI tools, collecting usage metrics, and actively hunting the engineers who don't use the tool "enough". I've never seen anything like it for a technically optional tool. Even in the past, aside from technical limitations, you were never required to use a tool some minimum amount.
It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.
> I've never seen anything like it for a technically optional tool
Cloud had a very similar vibe when it was really running advertising to CIO/CTOs hard. Everything had to be jammed into the cloud, even if it made absolutely no sense for it to be run there.
This seems to come pretty frequently from visionless tech execs. They need to justify their existence to their boss, and thus try to show how innovative and/or cost cutting they can be.
It's using a bad tool to aim at something reasonable-ish: developers not taking advantage of the tools in places where it's very easy to get use out of them. I have coworkers like that: one spent 3 days researching a bug and still hadn't found it, while Claude nailed it in one shot, in 10 minutes, after being pointed at the codebase and the logs from the time window.
But is this something that is best done top to bottom, with a big report, counting tokens? Hell no. This is something that is better found, and tackled, at the team level. But execs in many places like easy, visible metrics, whether they are actually helping or not. And that's how you find people playing JIRA games and such. My worst example was a VP who decided that looking at the burndown charts from each team under them, and using their shape as a metric, was a good idea.
These are all natural signs of a total lack of trust, and of thinking you can solve all of this from the top.
> It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.
This is exactly what's happening. The top 5 or 6 companies in the S&P 500 are running a very sophisticated marketing/pressure campaign to convince every C-suite downstream that they need to force AI on their entire organization or die. It's working great. CEOs don't get fired for following the herd.
> I've never seen anything like it for a technically optional tool
If you broaden the comparison (only a little bit) it looks suspiciously like employees being forced to train their own replacement (be that other employees, or factory automation), a regular occurrence.
Tech directors, CEOs, managers, etc. tend to be people with a certain personality and learned behaviors/thinking, just like "technical people".
Yes, they tend to be incredibly gullible to certain things, over-simplistic and over-confident, but also very "agile" when it comes to sweeping their failures under the rug and moving on to keep their own neck in one piece. At this point in time, even the median CEO knows AI has been way overhyped and that they over-invested to a point of absolute financial insanity.
The first line of defense against the pressure to deliver is to mandate that their minions use it as much as possible.
We spent a fortune on this over-rated Michelin star reservation, and now you kids are going to absolutely enjoy it, like it or not goddammit!
I'm just using Copilot CLI for mindless stuff and set it to the premium models to meet the quota. As long as they can't see the prompts, I think I should be fine.
It's really insane what is happening. My wife manages 70 software developers. Her boss mandated that managers replace 50% of the staff with AI within a year. And she's scrambling, trying to figure out if any of the tools actually work, and annoying her team because she keeps pushing AI on them. Unsurprisingly, it's only slowed things down and put her in a terrible position.
> I've never seen anything like it for a technically optional tool.
It has often been the case for technologies though, like “now we’re doing everything in $language and $technology”. If you see LLM coding as a technology in that vein, it’s not a completely new phenomenon, although it does affect developers differently.
It also seems like skill with particular tech (prompt engineering, harnesses, mixture-of-experts setups) doesn't necessarily pay off when there's a sea change. Hard to predict what you'll want in a few years anyway, right?
The early adopters started years ago, and they've seen improvements over time that they started attributing to their own skill. They tell you that if you didn't spend years prompting the AI, it will be difficult to catch up.
However, the exact opposite is happening. As the models get better, the need for the perfect prompt starts waning. Prompt engineering is a skill that is becoming obsolete faster than handwriting code.
I personally started using Codex in March, and honestly, the hardest part was finding and setting up the sandbox (I use limactl with QEMU and KVM). Meanwhile, the agentic coding part just works.
There really isn't anything special about using AI anyway; it's not rocket science. Sometimes I will use AI to write me some Tailwind tags, sometimes I will use AI to write me a static site for a custom report.
Most of my AI usage comes from doing things I don't enjoy, like making a series of small tweaks to a function or block of code. Honestly, I just levelled the playing field with vim users, and it's nothing to write home about.
I almost entirely agree with the author's assessment of new technology. Yet that statement rubbed me the wrong way.
Sometimes it is better to get into things early because it will grow more complex as time goes on, so it will be easier to pick up early in its development. Consider the Web. In the early days, it was just HTML. That was easy to learn. From there on, it was simply a matter of picking up new skills as the environment changed. I'm not sure how I would deal with picking up web development if I started today.
Counterpoint: it's always advantageous to learn and grow as things evolve. This way you have an active role and maybe a say in how things will evolve. And maybe you could contribute towards that evolution (despite poor execution, OpenClaw showed what LLMs could be doing).
> There are a 16,000 new lives being born every hour. They're all starting with a fairly blank slate.
Not long ago we were ridiculing Gen Z for not knowing why the save icon looks like a floppy disk.
Do you want to feel like that in the next 5-10 years?
But it's so easy to try something like Claude Code. It's not like you need to get up to speed. There is no learning curve*; that's the nature of AI. Just start using it and you'll see why it has attracted so much hype.
*I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense.
I've let tech pass me by many times, and then the tech that passed me, which I was never in a position to use, got replaced by the next big tech innovation. I've found that you can climb aboard the train at any time, since everything new is a lot easier to get started on than learning C and having to manually allocate memory.
Mistakes are less costly in the beginning and the knowledge gained from them is more valuable.
Over-sharing on social media. Secret / IP leaks with LLMs. That kind of thing.
I agree:
FOMO is an all-in mindset. Author admits to dabbling out of curiosity and realizing the time is not right for him personally. I think that's a strong call.
We've seen multiple ideas/products get quickly absorbed into frontier models, OSS, or well-funded startups. The cycle from "interesting idea" to "commoditized feature" is getting very short. Personally, I've seen three of these in the last year.
And even if your product is genuinely great, distribution is becoming the real bottleneck. Discovery via prompting or search is limited, and paid acquisition is increasingly expensive.
One alternative is to loop between build and kill, letting usage emerge organically rather than trying to force distribution.
Some might be getting into AI in order to sell AI. As OpenClaw has shown, there is opportunity in this space to be a trailblazer. There are no doubt companies that are not tech-aligned that someone could help set up local LLMs for…
For me though, I'm dabbling in AI because it fascinates me. Bitcoin was like, I don't know, Herbalife? —never interesting to me at all.
I think it was challenging 2 or 3 years ago. I plunged in a year ago and it was already quite easy to use mainstream tools. I could run some local models with Ollama just by installing it. I could use coding assistance in VS Code. Connecting over the HTTP API to use AI within applications you build was also easy, for local models or cloud.
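To illustrate how little ceremony the HTTP route takes, here's a minimal sketch in Python against Ollama's local /api/generate endpoint, assuming you've already pulled a model (the "llama3" name is just an example):

    import requests

    # Ollama listens on localhost:11434 by default. "llama3" is an example
    # model name; substitute whatever you've pulled locally.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",
            "prompt": "Explain burndown charts in one sentence.",
            "stream": False,  # return a single JSON object, not a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

Swapping in a cloud provider is mostly a matter of changing the URL and adding an auth key, though each API has its own JSON shape.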
There are loads of BS tools out there of course but I don’t use that many tools.
Broadly speaking I agree. But the reality for many SWEs is that if they don't learn new AI tools they'll get let go. It's use AI or be replaced by AI (or, more accurately, be replaced by someone using AI) for many folks.
I think it's a luxury to be able to ignore a trend like AI. Crypto was fine to ignore because it didn't really replace anyone, but AI is a different beast.
One area where it may end up leaving you behind is if you’re looking for a job right now. There are a lot of companies putting vibe coding in their job requirements. The more companies that do this the harder it will be to find employment if you’re not adopting this tool/workflow.
Even if it reaches the end state of AGI, i.e. AI that's smarter and more capable than 90% of humans, there'll still be a huge learning curve to using it well, as anyone who's tried managing very smart humans can attest.
This is the central thing that changes in a person with age. When you are born, the only thing you do is pick up new things. Literally nothing else. When you're young, picking up new things is how you improve your social position. It's what you do to even be talked to in the first place. It's what you do to get a girl/boyfriend, or be the best student in class, or to be the best (or worst even) employee at your first job ...
Once you have a good social position, or at least one you're happy with, you stop doing this, and you grow ever more irritated at others doing it ... because it's your social position that they're coming after. And they're younger, more motivated and hungrier. More than that, a decent chunk of these people want a better social position, even if that means taking yours.
The thing is, this post is attacking a straw man. "ngmi" ("not gonna make it") culture was deeply toxic and pervasive in crypto. I think the people who are really into LLMs are having a blast.
Ok, here is the risk of being left behind - if we have moderately fast take-off, the 1-2 years required to upskill in AI might mean you find yourself unemployable when your role gets axed.
I don’t think folks are taking seriously the possible worlds at the P(0.25) tail of likelihood.
You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years.
I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different.
I thought so too. But now we are onboarding project managers in non-tech fields to Claude Code and they are crushing it. On a terminal. VS Code. The first thirty minutes is the hard part; after that the feedback loop kicks in. They ask for what they really want, and they get it.
I don't know, I kind of wonder if this applies to all technologies equally.
For example (dodging the whole full-self-driving controversy), Tesla cars have had advanced safety features like traffic-aware cruise control and Autosteer for over a decade.
So, buying into safety early...
For other technologies, there's sort of a rugpull effect: the people who got in early enjoyed something with little drama vs the late adopters. Ask people who bought into Sonos early vs late; there are probably more examples of this.
So: getting the technology the founders envisioned, vs later enshittified versions.
There's value in being early - in the right thing.
- If you'd invested in Bitcoin in 2016, you'd have made a 200x return
- If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now
- If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
Of course, you could just as well have
- become an ActionScript specialist as it was clearly the future of interactive web design
- specialized in Blackberry app development as one of the first mobile computing platforms
- made major investments in NFTs (any time, really...)
Bottom line - if you want to have a chance at outsized returns, but are also willing to accept the risks of dead ends, be early. If you want a smooth, mid-level return, wait it out...
Feels like a false equivalency. It's just my experience, but I've completely ignored crypto and the metaverse, and I don't get the sense I'm missing out on much.
In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. Transformative for the better? Time will tell I suppose, but I'm really enjoying it so far.
It's a horrifying feeling facing the possibility that the career I spent so much time and money to get into is fading away. Sure, LLMs are not there yet, and they might not ever quite get there. But will companies start hiring again? If productivity has gone up, and it seems like it has, then no.
So, a decade of hanging by a thread, getting by and doubling down on CS, hoping that the job market sees an uptick? Or trying to switch careers?
I went to get a flat tire fixed yesterday and the whole time I was envious of the cheerful guy working on my car. A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work. If I had no debt and a little bit saved up I might just content myself with a humble moat like that.
I actually think the opposite approach might be the optimal one, at least from a monetary perspective. That is, be on the cutting edge of something, but be willing to bail out the moment its future starts seeming questionable. Or, even more specifically, maximize your foothold in it while minimizing your downside.
Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now. Even if you sold it 5 years ago, you would have made a ton of money. But if you quit your job and started a cryptocurrency company circa 2020, because you thought crypto would eat the entire economic system, you probably wasted a lot of time and opportunities. Too much invested, too much risked.
AI is another one. If you were using AI to create content in the months/years before it really blew up, you had a competitive advantage, and it might have really grown your business/website/etc. But if you're now starting an AI company that helps people generate content about something, you're a bit late. The cat is out of the bag, and people know what AI-speak is. The early-adopter advantage isn't there anymore.
But IMO the most fruitful thing for an engineering org to do RIGHT NOW is learn the tools well enough to see where they can be best applied.
Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!
I am increasingly feeling okay with the idea of being left out. The worst parts of working professionally in a software development team have been amplified by LLMs. Ridiculously large PRs, strong opinions doubled down due to being LLM-"confirmed", bigger expectations coming from above, exceptionally unwarranted confidence in the change or approach the LLM has come up with.
I am dying inside when I make a comment and receive a response that has clearly been prompted toward my comment and possibly filtered in the voice of the responder if not copied and pasted directly. Particularly when it's wrong. And it often is wrong because the human using them doesn't know how to ask the right questions.
Fortunately, most of the fundamental technological infrastructure is well in place at this point (networking, operating systems, ...). Low skilled engineers vibe coding features for some fundamentally pointless SaaS is OK with me.
Agree with the message. Coding since 1986, I have learned not to suffer from FOMO and to wait for the dust to settle.
Ironically, one might even get projects to fix the mess left behind, as the magpies focus their attention on something else.
In the case of AI, the fallacy is thinking that even while riding the wave, everyone gets to stay around, now that the team can deliver more with fewer people.
Maybe rushing out to the AI frontline won't bring in the interests that one is hoping for.
EDIT: To make the point even clearer: with SaaS and iPaaS products, serverless, and managed clouds, many projects now require a rather small team, versus having to develop everything from scratch on-prem. AI-based development reduces the team size even further.
On the other hand, when cloud computing started to come in, I knew a bunch of sysadmins. Some were in the "it'll never take off" camp, and no doubt they know it now, kicking and screaming.
But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
Similarly with mobile dev. As a Java dev at the time Android came along, I didn't keep abreast of it ("I can always get into it later"). Suddenly the job ads were "Android Dev. Must have 3 years experience".
Sometimes, even just from self-interest, it's easier to get in on the ground floor when the surface area of things to learn is smaller than it is to wait too long before checking something out.
For me, it's beyond doubt these tools are an essential skill in any SWE's toolkit. By which I mean, knowing their capabilities, how they're valuable and when to use them (and when not to).
As with any other skill, if you can't do something, it can be frustrating to peers. I don't want colleagues wasting time doing things that are automatable.
I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like sent one in an agentic loop to produce a minimal reprex of a bug, or pin down a performance regression by testing code on different branches, then you could potentially be hampering the productivity of the team. These are examples of things where I now have a higher expectation of precision because it's so much easier to do more thorough analysis automatically.
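For something like the performance-regression case, the loop you hand the agent is nothing exotic. Here's a rough sketch of the underlying idea in Python; the branch names and bench.py script are hypothetical stand-ins for whatever your project uses:

    import subprocess
    import time

    # Hypothetical branch names and benchmark command; substitute your own.
    branches = ["main", "feature/new-cache"]
    bench_cmd = ["python", "bench.py"]

    for branch in branches:
        # Check out each branch and time the same benchmark on it.
        subprocess.run(["git", "checkout", branch], check=True)
        start = time.perf_counter()
        subprocess.run(bench_cmd, check=True, capture_output=True)
        print(f"{branch}: {time.perf_counter() - start:.2f}s")

The value of the agent isn't the loop itself; it's that it will write the throwaway harness, run it, and interpret the numbers without you spending an afternoon on it.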
There's always caveats, but I think the point stands that people generally like working with other people who are working as productively as possible.
That's a reasonable strategy. I don't think spreading FOMO is good. But pragmatically, I enjoy working with the latest crop of AI models regarding all sorts of computer tasks, including coding but many other sysadmin stuff and knowledge organization.
I didn't pick them up until last November and I don't think I missed out on much. Earlier models needed tricks and scaffolding that are no longer needed. All those prompting techniques are pretty obsolete. In these 3-4 months I got up to speed very well, I don't think 2 years of additional experience with dumber AI would have given me much.
For now, I see value in figuring out how to work with the current AI. But next year even this experience may be useless. It's like, by the time you figure out the workarounds, the new model doesn't need those workarounds.
Just as in image generation, where maybe a year ago you needed five LoRAs, ControlNet, negative prompts, etc. to not get weird hands, today you just no longer get weird hands with the best models.
Long term, the only skill we will need is to communicate our wants and requirements succinctly and to provide enough informational context. But over time we have to ask why this role would remain robust. Where do these requirements come from; do they simply form in our heads? Or are they deduced from other information, such that the AI could also deduce them from there?
> I didn't use Git when it first came out.
This really hinges on what you mean by "didn't use Git".
If you were using bzr or svn, that's one thing.
If you were saving multiple copies of files ("foo.old.didntwork" and the like), then I'd submit that you're making the point for the AI supporters. I consulted with a couple developers at the local university as recently as a couple years ago who were still doing the copy files method and were struggling, when git was right there ready to help.
I don't understand the rush to be "the first". Facebook isn't the first social media site, Google isn't the first search engine, the iPhone is not the first smartphone, Windows is not the first OS; the list goes on.
Clearly there's an advantage for being an early adopter, but the advantage is often overblown, and the cost to get it is often underestimated.
> There are a 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Are you genuinely saying that they'll all be left behind because they didn't learn your technology in utero?
> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
In contrast to the current top comment [1], I don't think this is a wise assessment. I'm already seeing companies in my network stall hiring, and in fact start firing. I think if you're not trying to take advantage of this technology today then there may not be a place for you tomorrow.
I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me. I trust their experience, but in my own experience, it has made things possible in a matter of hours that I would never have even bothered to try.
Besides the individual contributor angle, where AI can make you code at Nx the rate of before (where N is say... between 0.5 and 10), I think the ownership class are really starting to see it differently from ICs. I initially thought: "wow, this tool makes me twice as productive, that's great". But that extra value doesn't accrue to individuals, it accrues to business owners. And the business owners I'm observing are thinking: "wow, this tool is a new paradigm making many people twice as productive. How far can we push this?"
The business owners I know who have been successful historically are seeing a 2x improvement and are completely unsatisfied. It's shattered their perspective on what is possible, and they're rebuilding their understanding of business from first principles with the new information. I think this is what the people who emerge as winners tomorrow are doing today. The game has changed.
Speaking as an IC who is more productive than last year, but simultaneously more worried.
[1] https://news.ycombinator.com/item?id=47454614
In general, a good strategy is just staying a little bit behind. Let the new fads play themselves out. Some have staying power. Bitcoin never did turn into a usable currency, just another speculator's toy. Luckily I am - so far - in a position where I can watch the AI thing from the sidelines to see how it plays out.
I'm healthily skeptical of new technology. Meaning I'm not the early adopter. But I've also found over the years I don't get left behind. I become curious at the time things are stabilising. Maybe on the cusp where there's still a lot of pushback but there's also clear value. Crypto in 2014-2017. AI in 2023-2024. You don't have to feel FOMO but if you're a technologist, if you have a healthy desire to evolve, change and learn then you'll naturally pick things up. I went from total crypto skepticism in 2014 to investing most of what I had. I went from total AI skepticism to doing RAG for the Quran and agentic tech for the small web. I think there's value in staying true to who you are but also naturally discovering and learning on your own timeline.
I waited until it seemed good enough to use without having to spend most of my time keeping up with the latest magical incantations.
Now I have multiple Claude instances running and producing almost all of my commits at work.
Yes, with a lot of time spent planning and validating.
Nothing is happening. And if it is, it's just hype.
And if it isn't, it only works on toy problems. And if it doesn't, I'll learn it when it stabilizes.
And if I can't, the gains all go to owners anyway. And if they don't, it's just managers chasing metrics.
And if it isn't, well I'm a real programmer. And if I'm not, then neither are you.
This is a great framing.