The future of everything is lies, I guess: Work (aphyr.com)

by aphyr 219 comments 290 points

[−] jerf 31d ago
The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.

If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying AI will continue, but we'll also develop a clearer understanding of what current AI can't do.

If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which, I would remind people, in its original (and generally better) formulation is simply the observation that there comes a point past which you can't predict at all. ("Rapture of the Nerds" is one very particular possible instance of the unpredictable future; it is not the concept of the "singularity" itself.) Who knows what will happen.

[−] peterbell_nyc 31d ago
I model this as "stacked sigmoid curves". I have no reason to believe that any specific technological implementation will be exponential in impact vs sigmoidal.

However, if we throw enough money and smart people at the problems and get enough value from the early sigmoid curves, the effective impact of a large number of stacked sigmoids could theoretically average out to a linear impact. But if the sigmoids stay of a similar magnitude (on average) and appear at a higher velocity over time, you end up with an exponential made up of sigmoids (rough sketch below)*

* To be fair, it has been so long since I have done math that this may be completely incorrect mathematically - I'm not sure how to model it. However I think in practice more and more sigmoids coming faster and faster with a similar median amplitude is gonna feel very fast to humans very soon - whether or not it's a true exponential.
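
As a rough sketch of the shape I mean (all numbers hypothetical, nothing fitted to real data):

    import numpy as np

    def sigmoid(t, onset, width=1.0, height=1.0):
        # One "technology": a logistic ramp from 0 to `height` around `onset`.
        return height / (1.0 + np.exp(-(t - onset) / width))

    t = np.linspace(0, 30, 3001)

    # Assumption: similar-magnitude sigmoids arriving faster and faster,
    # i.e. the gap between successive onsets shrinks geometrically.
    onsets = []
    onset, gap = 5.0, 8.0
    while onset < t[-1]:
        onsets.append(onset)
        onset += gap
        gap *= 0.7

    total = sum(sigmoid(t, o) for o in onsets)

    # Each piece saturates, but because the gaps shrink, the number of
    # curves already ramping grows super-linearly in t, so the stacked
    # total keeps bending upward over this window.
    print(np.round(total[::500], 2))

Eyeballing (or plotting) the output shows the stacked total climbing ever faster across the window even though every individual component saturates, which is the "feels very fast to humans" effect regardless of whether it's a true exponential.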

I'm honestly having a very hard time thinking through the likely implications of what's currently happening over the next 2-10 years. Anyone who has the answers, please do share. I'm assuming from Cynefin that it's a perturbed complex adaptive system, so I can just OODA, or experiment, sense, and respond to what happens - not what I think might happen.

[−] juped 31d ago
Neither! A logistic curve is just an exponential with a carrying capacity - it is still an exponential! There is no reason to believe that AI capability, which grows logarithmically with the hand-waved resources used on it (roughly, compute and training data), grows, has grown, or is growing exponentially!

I know this sounds like "the moderate position" to people but you are accepting that something logarithmic is somehow in fact exponential (these are inverse functions of one another) based on no evidence or argument.

Here is Sam Altman, the one man in the world with the most incentive to overstate AI capability, accepting the extremely-well-known logarithmic growth: https://blog.samaltman.com/three-observations

What we see in reality is a basically-linear growth pattern due to pushing exponentially more resources into this logarithm.
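
To put rough numbers on that last point (illustrative constants, nothing fitted): if capability grows logarithmically in resources while resources grow exponentially in time,

    C(t) = \alpha \log R(t), \qquad R(t) = R_0 e^{\beta t}
    \implies C(t) = \alpha \log R_0 + \alpha \beta \, t

then capability over wall-clock time is a straight line: exponential spend buys linear progress.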

[−] fny 31d ago
Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We easily have enough advancement to change the economy dramatically as is. The adoption isn't there yet.
[−] jerf 30d ago
Even after I explained the exact usage I was invoking, the attractive nuisance of all the science fiction that has gotten attached to the term still prevented you and Quarrelsome from reading my post as written.

I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.

All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.

[−] tim333 30d ago

>the originator of the term ... rather good science fiction

I guess you are thinking of Vernor Vinge, but the term goes back to John von Neumann in the 1950s:

>...on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue

[−] Folcon 30d ago
The most interesting factor of the dynamics around something like a near-singularity is the things that I feel are coupled to it.

Basically, the ability to reason about first- and second-order effects.

I.e., before the cellphone was invented you could have predicted it; things like Star Trek envisaged a world of portable communication.

Some of the cellphone's impact was predictable to some people: increased convenience of communication, and the end of making a call and wondering who was going to pick up, which was a definite consideration pre-mobile, when you called a place and not a person. Now we just assume that when we call someone we'll get them and not their family.

The second-order effects were less obvious. Ease of access to someone meant being always accessible, so now everyone could be contacted whenever someone wanted them; it changed the dynamics of life for many. Not to mention the effects of different technologies combining: the personal computer and the mobile phone merging into the smartphone gave everyone a computer in their pocket, let alone adding the internet into the mix.

Each of these changes was completely unpredictable to people pre-cellphone. Once again, compare modern-day Trek and the originals.

I still vividly remember the moment one of the characters in Discovery asked the computer to give her a mirror: the same behaviour as countless people now using the fact that their selfie camera functionally gives them a portable mirror in the form of their phone. That was unpredictable.

So that's one form of being unable to predict the future.

But there's another interesting dynamic, I think: every direction of technical development is accelerating, which means we may soon hit the point where only a subject-matter expert can predict, or perhaps even be aware of, what happens in any particular field. So before we reach the point where we can't predict the future at all, we may get a strange middle ground where we're constantly surprised by the developments we see around us, and when we look into them, find the new discovery has been around for months or years.

I certainly have experienced that once or twice; I'm wondering if it may become the new normal.

[−] lamasery 30d ago

> The adoption isn't there yet.

It's worth noting that after ~50 years [edit: to preempt nitpicking, yes, I know we've been using computers productively quite a bit longer than that, but that's roughly when the computerized office started to really gain traction across the whole economy in developed countries], we've only extracted a tiny proportion of the hypothetical value of computers, period, as far as benefits to the economy and potential for automation go.

I actually think a lot of the real value of LLMs is "just" going to be making accessing a little (only a little!) more of that existing unrealized benefit feasible for the median worker.

My expectation is that we'll also harness only a tiny proportion of the hypothetical value of LLMs. We're just not good enough at organizing work to approach the level of benefit folks think of when they speculate about how transformational these things will be. A big deal? Yes. As big a deal as some suppose? Probably not.

[edit: in positive ways, I mean. I think we're going to see huge boosts in productivity for anti-social enterprises. I'd not want to bet on whether the development of LLMs is going to be net-positive or net-harmful to humanity, not due to the "singularity" or "alignment" or whatever, but because of the sorts of things they're most useful for]

[−] arbitrary_name 29d ago
It's an interesting question: how much more productive would we all be if we were all as savvy/literate/productive with computers as some hypothetical comparator (I'm not sure programmers are the right comparison to make)?

For example, I am in operations and strategy, but have always wanted to be more technical because I could see the value for many, many tasks. However, the learning curve was steep, and so learning and doing other things drove better returns for me.

Now, LLMs make learning basic concepts and executing simple tasks extremely easy, and I am realizing a higher level of productivity than previously; I used Codex to do a test data migration and then evaluate the data quality. I simply could not have done this before, and it is a meaningful change for me that I can execute on this.

There is no maintenance burden: I don't have to keep the code alive. It simply sped up an otherwise manual and non-repeated task.

I think that's what's so interesting and concerning about this technology: I think power and productivity will flow more broadly across the workforce. This will result in relative winners and losers, and some who will experience no real change at all.

Similarly to the costs and benefits of mobile devices diffusing technology access: it changed some things, it created winners and losers, and yet our daily lives are recognizable to someone from 50 or even more years ago.

[−] balamatom 30d ago

>Why is everyone so damn obsessed with the singularity?

Because they are captives (to a system of incentives that is already "superintelligent" in comparison to any individual) who are hoping for salvation (something to make them free against their will, since it is their will which is captured).

Singularity, then, is the point at which the system itself "finally becomes able to imagine what it is like to be a person", and decides to stop torturing people. IMO, this is unlikely to work out like that.

[−] gilfaethwy 30d ago
We've had enough advancement to change the economy for many decades, but the powers that be have insisted that, despite the lack of need, we continue to toil doing completely unnecessary work, because that's what's required to extend their fiefdoms.

Not that the singularity has any relevance here, either - except maybe that the robots take over, and the billionaires have missed the boat? I don't know.

[−] Quarrelsome 31d ago
Moreover, the singularity makes this crass assumption that a single player takes all. It seems to ignore a future of many, many AI players, or many, many human + AI players instead.

Furthermore, regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans, who as a race are cognitively extremely diverse and adaptive.

[−] kaibee 30d ago

> regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans,

AI isn't one thing though. Really it's kind of a natural evolution of "higher-order life". I think that something like an "organization" (corps, governments, etc.), once large enough, is at least as alive as a tardigrade. And for the people who are its cells, it is as comprehensible as the tardigrade is to any of its individual cells. So why wouldn't organizations, over all of human history, eventually "evolve" a better information-processing system than humans making mouth sounds at each other? (Writing was really the first step on this.) Really, if you look at the last 12,000 years of human society as actually being the first 12,000 years of the evolutionary history of "organizations", it kinda makes a lot of sense. And so much of it was exploring the environment, trying replication strategies, etc. And we have a lot of different organizations now, like an evolutionary explosion, where life finds various niches to exploit.

/schizoposting

[−] Quarrelsome 30d ago

> AI isn't one thing though.

What's the single in "singularity" doing then?

My issue is I feel like some people treat intelligence as an integer value and make the crass assumption that "perfect intelligence" beats all other intelligences; I just think that's quite a thick way to think about it. A fool can beat an expert over the course of near-infinite hands because they happen to do something unexpected. Everything is a trade-off, and there's no such thing as perfect; every player has to take risk.

[−] fzzzy 30d ago
The singularity does no such thing.
[−] ikrenji 30d ago
That's kind of optimistic. For example, a misaligned super-AI might engineer a virus that wipes out most of the 7 billion humans. That would put a damper on the adaptability of the human race...
[−] tim333 30d ago

>Why is everyone so damn obsessed with the singularity?

I don't think most are - it tends to be regarded as rather cranky stuff, and a lot of people who use the term are a bit cranky.

Even so, AI maybe overtaking human intelligence is an interesting thing in human history.

[−] CamperBob2 30d ago
> Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity.

And at the same time, we don't take advantage of the intelligence we already have.

[−] guelo 30d ago
Because it's happening no matter how much you'd rather ignore it or scoff at it.
[−] HappMacDonald 30d ago
I don't think the kind of exponential you are looking for (and especially not "the singularity") can manifest until the product (AI) is at a point where it can meaningfully take over the task of improving itself directly.

I would say we have certainly seen a bottleneck in the ability of LLMs to handle any kind of broad abstraction or master the architecture of coding. That is the hinge of why "vibe coding" is as trashy an approach as it is: the LLM can't cut the mustard on any actual software design.

So they have nothing close to the deep understanding required to improve their own substrate.

They can be exceptionally good at understanding what humans mean when they say things, far better than poking keywords into a Google search for example, especially when said keywords are noisy and overloaded. And they can be a very good encyclopedic store of concepts (the more general the idea, the less likely they hallucinate it, while details and citations are far more frequently made up on the spot). But they suck at volition, and at state representation (thanks to those limited context windows), which cuts them off at the knees if they ever have to tenaciously search for anything, including performing creative problem-solving.

We do have AI models which can get somewhere on theorem proving or protein folding or high level competitive game playing, but those only sometimes even glancingly involve LLMs, and are primarily custom-built amalgams of different kinds of neural networks each trained on specific tasks in their fields.

None of that can directly move the needle on actual AI research yet.

[−] keeda 30d ago
I've said it before, but it would be a mistake to just focus on the models, and ignore everything else that is changing in the ecosystem -- tools, harnesses, agents, skills, availability of compute, etc. -- things are changing very quickly overall.

The thing that is changing most rapidly, however, is the understanding of how to harness this insanely powerful, versatile, and unpredictable new technology.

Like, those who experimented deeply with LLMs could tell that even if all model development had completely frozen in 2024, humanity had decades' worth of unrealized applications and optimizations to explore, even with AI recursively accelerating that exploration. As a trivial example, way back in 2023, anyone who got broken code from ChatGPT, fed it the error message, and got back working code knew agents were going to wreck things up very quickly. It wasn't clear that this would look like MD files, Claude Code, skills, GasTown, and YOLO vibe-coding, but those were "mere implementation details."

I'm half-convinced that an ulterior goal of these AI companies in giving away so many cheap tokens (other than the lack of a better business model) is to encourage experimentation and overcome this "capability overhang."

Given all this, it's very hard to judge where we are on the curve, because there isn't just one curve; there are multiple interacting curves.

[−] echelon 31d ago

> The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.

Even using the models we have today, we have revolutionized VFX, video production, and graphic design.

Similarly, many senior software engineers are reporting 2-10x productivity increases.

These tools are some of the most useful tools of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces that let experts leverage and harness the speedup, and shape and control the outcomes, we're going to be in a very good spot.

These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we have already.

We don't even need new models.

[−] nostrademons 31d ago
Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."

I think we're at a similar point with LLMs. The technical stuff is largely "done": LLMs have closer to 10% than 10x headroom left to improve technologically; we'll find ways to make them more efficient and burn fewer GPU cycles, and the cost will come down as more entrants mature.

But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state, mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.

[−] forgetfreeman 31d ago
"given 10-ish years at least to adapt, we probably can"

Social media would like a word...

[−] joquarky 30d ago
Anyone who believes in materialism should recognize that there is still a lot of room to improve.
[−] moritzwarhier 30d ago
How would you label the y axis?
[−] faangguyindia 31d ago
We are at the bottom. It's just the start.

In AI terms, we are in the pre-Pentium 4 era.

[−] MagicMoonlight 31d ago
We aren’t anywhere near AGI. They’ve consumed the entirety of human knowledge and poisoned the well, and it still can’t help but tell you to walk to the car wash.

A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.

[−] itissid 31d ago
For anyone who has not read the cockpit voice recording of Air France 447, I would encourage them to[1]. It is simply a jaw-dropping study in how things go wrong so fast — a risk with AI we have barely begun to acknowledge, let alone regulate, as a community.

[1](https://tailstrike.com/database/01-june-2009-air-france-447/)

[−] mannanj 31d ago

> This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn’t want to be a nonprofit any more. There is no reason to believe that “AI” companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.

> If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, the top earner pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.

I think we are in general a highly naive, gullible class of people: we were conditioned, programmed, and put into environments where being this way was the norm and was rewarded. The leaders and those extracting resources, whom we gullibly allow to trample over our dignity and our rights, take advantage of this and reinforce it through lobbying and influence over mainstream culture and media campaigns around us. Further, if social media becomes a threat to their status, they have been shown to exert their influence there too, through censorship and more. We may therefore be best served by learning how not to be gullible, and growing some balls.

[−] omega3 31d ago
The answer has always been the same: self-regulated professions and trade unions. Instead, the ever-efficient software engineers have efficiently dug their own grave. The regulated professions aren't going to be affected by AI because their members understand that preservation of job security[0], their pay, and their QOL is more important than automating themselves out of existence.

[0] https://www.bma.org.uk/news-and-opinion/medical-degree-appre...

[−] greatpost 31d ago
Thank you for this aphyr.

My one ask: people seem to put "CEOs" on a pedestal any time things come up, like they're an alien life form and oh no, they're going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.

[−] itissid 31d ago
Every day I sit down to build a product for my clients. I am a one-man shop _now_; before, I had people helping me. My mental state is not good. A very odd thing happens when Claude or Codex completes code fast: I begin to think of all the other things that are needed to make the AI agent work better. I begin to worry about problems that other people used to help me with and think "Can I do those too?" Problems like product design, devops work, etc. In a bid to try, I get nerd-sniped by the velocity people seem to have — and these are respected devs, not just Twitter claims. And because I am so bad at "doing it all", it's causing my mental health to suffer because of the long hours I have to put in. I miss the friends and colleagues I worked with.

I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should have been happy that I did not have to grind on code — and some days I truly am — but that it would yield such poor quality of life at such a high cost was not what I expected...

[−] elcapitan 31d ago
I really appreciate this series of posts, as it serves as a good summary of key points of the discourse around AI, with links to the relevant articles etc. I find following all those discussions myself exhausting, so being able to find it all in one place, nicely grouped, is very helpful.
[−] _doctor_love 31d ago
Another interesting one from 'aphyr -- I think the points around the Ironies of Automation deserve deeper focus, possibly even a separate follow-up post.

I would encourage folks to look at the following industries: nuclear safety, commercial aviation, remote surgery. These industries have dealt with the issues of automation for much longer than we have as programmers.

In the research I've done, these industries went through a journey in the 20th century similar to ours now: once something becomes automated enough, the old way simply won't work. You have to evolve new frameworks and procedures to deal with it.

So in the case of aviation, they developed CRM and SRM (crew resource management and single-pilot resource management): how to manage the airplane as a crew and how to manage it as a solo operator. Remember that modern airplanes are highly automated! The human pilot is not typically hands-on-wheel for most of the flight.

In the case of surgeons, they found that de-skilling without regular practice can occur in as little as four weeks! To combat that, some surgeons are now required to practice in simulated environments to keep their skills sharp.

My feeling is that 'aphyr is right in the short-to-medium term. Current market forces and US regulatory posture (or lack thereof) mean there are fewer rules and less enforcement. IMHO the results are depressingly predictable, but the train has left the station with enough momentum that there's no stopping it. If we survive long enough to make it past the medium term, things will change.

[−] asdfman123 31d ago

> I can imagine a future in which some or even most software is developed by witches, who construct elaborate summoning environments, repeat special incantations (“ALWAYS run the tests!”), and invoke LLM daemons who write software on their behalf.

This sort of prompting is only necessary now because LLMs are janky and new. I might have written this in 2025, but now LLMs are capable of saying "wait, that approach clearly isn't working, let's try something else," running the code again, and revising their results.

There's still a little jankiness but I have confidence LLMs will just get better and better at metacognitive tasks.

UPDATE: At this very moment, I'm using a coding agent at work and reading its output. It's saying things like:

> Ah! The command in README.md has specific flags! I ran: . Without these flags! I missed that. I should have checked README.md again or remembered it better. The user just viewed it, maybe to remind me or themselves. But let's first see what the background task reported. Maybe it failed because I missed the flags, or passed because the user got access and defaults worked.

AI is already developing better metacognition.
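
The loop underneath this is conceptually simple. A minimal sketch (call_llm here is a hypothetical stand-in for whatever model API the harness actually uses):

    import subprocess

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real model call (OpenAI, Anthropic,
        # a local model, etc.); returns Python source code as a string.
        raise NotImplementedError

    def run_until_green(task: str, max_attempts: int = 5) -> str:
        code = call_llm(f"Write a Python script that does: {task}")
        for _ in range(max_attempts):
            result = subprocess.run(
                ["python", "-c", code],
                capture_output=True, text=True, timeout=60,
            )
            if result.returncode == 0:
                return code  # the script ran cleanly; stop here
            # The "metacognitive" step: show the model its own failure and
            # ask it to reconsider the approach, not just patch one line.
            code = call_llm(
                "This code failed:\n" + code +
                "\nError:\n" + result.stderr +
                "\nDecide whether the approach itself is wrong, revise, "
                "and return the full corrected script."
            )
        raise RuntimeError("no working version within the attempt budget")

Everything the current tools add (plans, skills, README checks like the one above) is elaboration on that basic observe-fail-revise cycle.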

[−] hliyan 31d ago

> One of her key lessons is that automation tends to de-skill operators

I recently discovered an example of this phenomenon in a completely unrelated area: navigation. About a week ago, I realized that I couldn't remember the exact turns to reach a place I had recently started driving to, even after having driven there 3-4 times over a period of a month. Each time I had used Google Maps. When I used to drive pre-Google-Maps, I would typically develop a good spatial model of a route by my third drive. This skill seems to have atrophied now. Even when I explicitly decide to drive without Google Maps and make mental notes of the turns, my retention of new routes is much weaker than it used to be. Thankfully, routes I retained before becoming Google-Maps-dependent are still there.

[−] GistNoesis 31d ago
Programming is indeed becoming witchcraft; with LLMs it is of the utmost importance that you choose the right database administrator.

For example, I'm now relying on Soteria, the Greek goddess of safety, salvation, and preservation from harm, to act as my database administrator.

[−] wslh 31d ago
I wonder if vibe coding is partly what happens when software engineering fails to converge on reusable abstractions. Instead, we got fragmented tools and endless reinvention of the same components, and LLMs arrived as an ad hoc abstraction layer on top.
[−] sambuccid 30d ago
Great article. Near the end it talks about where the money goes and whether there will be universal basic income. I think those paragraphs had an assumption that if models get very smart, all the money will go to big tech.

But, thanks to all the companies working on open-weight models, I'm starting to think this might no longer happen. Currently, open-weight models are said to be just months behind the top players (and I think we should really try to do what we can to keep it that way).

I'm wondering what the predictions would be in the case where AI becomes very powerful, but models are also generally available.

Two possibilities come to mind. In the first, all the money no longer spent on employment would go towards hardware. New hardware manufacturers or innovators could jump in and create a bit more employment, but eventually it would probably all converge on the one finite resource in the chain: the materials/minerals needed for the hardware. Those materials might become the new "petrol". It's possible that eventually we would have built enough chips to power all the AI we need without needing more extraction, but I wouldn't underestimate our ability to waste resources when they feel abundant.

In the second possibility, alongside a very powerful open-weight LLM, there could be big performance advancements, which would make hardware no longer the bottleneck. But I'm struggling to imagine this scenario. Maybe we would all be better off? Maybe we would all just be depressed because most people won't feel "useful" to society or their peers anymore?

[−] groby_b 31d ago
I really wish we'd stop arguing about AI with a "some automation failed, so all automation is bad" approach.

Yes, AF447 crashed due to lack of training for a specific situation. And yet, air travel is safer than ever.

Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.

Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.

Yes, AI occasionally makes very stupid mistakes - but ones any competent engineer would have guardrails in place against.

And so a lot of the piece spends time arguing against strawmen propped up by anecdotes. That detracts from the deeply necessary discussion kicked off in the second part, on labor shock, capital concentration, and fever dreams of AI.

The problem of AI isn't that it's useless and will disrupt the world. It's that it's already extremely useful - and that's the thing that'll lead to disrupting the world.

[−] buildbot 31d ago
I love the analogy of AI coding as witchcraft! It's very accurate to how working with these tools feels - at one point I was forced to invoke a "litany against stubbing" in a loop to make Claude Code actually implement a Renode setup for some firmware. That worked really well.

It feels like "Hexing the Technical Interview" come to real life ;)

[−] lrvick 30d ago

> Machine learning seems likely to further consolidate wealth and power in the hands of large tech companies

Only if you let it. You can own the means of production: I self-host my daily-driver LLMs on hardware in my garage.

Never given money to an LLM provider and never will. I only do work with tools I own.
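
For anyone curious what that looks like in practice, a minimal sketch against an Ollama-style local endpoint (the model name and port are whatever your own setup serves, not a recommendation):

    import json
    import urllib.request

    # Assumes an Ollama-style server on localhost; adjust the model name
    # and port to match your own setup. Nothing leaves the machine.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",  # whichever weights you pulled locally
            "prompt": "Summarize the ironies of automation in two sentences.",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

Point your editor or agent harness at that endpoint instead of a hosted API and the whole workflow stays on hardware you own.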

[−] yubblegum 30d ago
Always a pleasure to read the thoughts of Kyle.

I wonder if he self-censored here regarding potential futures, as the concentration of stupendous wealth in generational hands, and obscene wealth disparity coupled with machines that can do what "bodies" can, naturally points to depopulation as a goal for the elite and their (future) spawn who are not on the chopping block.

[−] hoppp 31d ago
Unavailable Due to the UK Online Safety Act
[−] drivebyhooting 31d ago
In the case of UBI, how would we differentiate between a previously highly paid professional (SWE, lawyer, author) and a pauper (janitor, car washer, unemployed)?

It’s only fair that they would receive the same amount. But then how can the former category continue to fulfill their obligations?