People tend to forget how fast technology advances. Even if you lived through it, you forget how recently the world looked very different.
- before 2012 there was no smartphone
- before 2001 there was no Wikipedia
- before 1995, fewer than 10 percent of home users in rich countries had internet access
- before 2023 there was no AI available to home users.
Hardware has been getting faster by a factor of ~100 every 10 years, and ~10,000 every 20. AI is currently developing even faster because of a combination of software and hardware improvements. Even if the best current system is right only 1 time in 100 today, it will likely be nearly always accurate in 10 years.
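A quick sanity check of the compounding here (a minimal Python sketch; the 100x-per-decade factor is the assumption from above, not measured data):

    # 100x per decade compounds to ~1.58x per year and ~10,000x over 20 years.
    decade_factor = 100
    yearly = decade_factor ** (1 / 10)             # ~1.585x per year
    print(f"per year: {yearly:.3f}x")
    print(f"over 20 years: {yearly ** 20:,.0f}x")  # 100^2 = 10,000x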
I also like to remind people that the phone I am writing this on (an iPhone 12) has the same computing power as the Earth Simulator in 2003, which was then the fastest computer on Earth.
Imagine this development continuing and think about what changes might come.
Those TFLOPS numbers are quite useless as they are "marketing peak TFLOPS". There's usually a 10-100× difference between that and actual computational capabilities in meaningful general workloads.
It only makes sense to compare specific, well-calibrated benchmarks, such as Linpack, which is what I did.
But we have also hit serious diminishing returns: over the last decade for hardware, and over the last 2-3 years for LLMs.
Just because things have advanced in leaps and bounds for X amount of time does not mean they will continue to do so at the same (or increasing) rates indefinitely.
The problem is that exactly what will end up advancing dramatically in the next few years is often very unpredictable, especially if one is looking only at surface-level trends over the last few years.
To those of us who understand them better than "wow, look how far they've come since 2018", LLMs seem unlikely to advance by another 100x over the next several years. Frankly, another 100% seems unlikely. (Though of course, quantifying LLM performance is a tricky proposition to begin with; it is, to a large extent, a qualitative endeavor.)
Technology advances quickly, but the apocalypse narrative is still bullshit. The reality is much closer to what it has always been: technological advances are a way to enhance what humans can do, not replace them. Being adaptable and adopting a spirit of learning and growth is (as it always has been) a key factor in a successful career trajectory.
Through a sufficiently narrow lens, any technological advancement can be perceived as a threat. If your job was to perform calculations for your company using a microscope and calculator ("computer" was the job title), then the invention of the computer (the machine) was absolutely a threat to your job security. That's not to say there aren't challenges to adapting, or considerations for how to do it well, but it has always been the case that the old way is a casualty of the new way.
I am neither anti-AI nor an AI evangelist, but I think a more productive viewpoint is to consider how these advancements could open the door to new opportunities, for example the democratization of learning. It has never been easier for anyone in the world with an Internet connection and a computing device to have access to a personal math tutor or nutrition coach.
Why is it an apocalypse if people don't have jobs anymore? Why is it an apocalypse if the most intelligent being in the world isn't a human anymore?
It will be a major change, but only people who see it as a threat to their existence, to not work anymore, are in real danger. There are tons of people in our society who are fine without working. We could tend to our children, we could tend to our elders. We could make art and improve our world instead of competing and trying to be better than others.
Every real-world example of material needs being met without economic competition (dorms, retirement communities, high school) produces vicious social hierarchies, not enlightenment. But that's a full blog post for someday, not an HN comment. For now, just realize that Star Trek is probably actually high school hell if you think about it for five minutes.
Because all signs right now point to new locked-in systems of control instead of shared prosperity. These companies were supposedly non-profits thinking deeply about improving things, but they can't even articulate a basic narrative or philosophy of how things will improve, and they have instead pivoted to for-profit.
Our systems of power are locked in caveman-style thought and don't seem capable of creating something new, just applying new tech to very old, very coercive systems of power. The techno-optimist days are gone, replaced by tech companies and enshittification: they explicitly state that you will live worse so that they can have more profit, and that if there is nothing you can do to stop them, they will cater to their worst instincts.
Right. Technology is more distributed and empowering for the individual; however, the power that wields it is stuck in a feudalistic mindset, so we get this weird state where technological advancements seem dystopian.
Okay. So you talk to your personal math tutor and you learn the math that the AI already knows. Are you going to get paid to use that? I don't understand "there is a machine which will teach you everything, so you'll be fine" when "there is a machine that will teach you everything and will work cheaper than you" is also true. Please explain?
Every morning the turkey rejoiced and said to himself "oh joy, I'm such a lucky turkey, I don't have to do anything, the food is plentiful, I just eat and shoot the breeze the whole day long, what an awesome life!". Until one morning, the day before Thanksgiving, the turkey rejoiced about the awesome day he was about to have... only to be picked up 5 minutes later and dragged to the slaughterhouse.
It’s a compelling story; but what you’re describing is, to the turkey, a black swan event, rather than an obvious inevitability that all the other turkeys keep telling the turkey is going to happen.
Years ago, when you went into computers, didn't you have normies warning you that one day computers would program themselves? 20 years ago nobody could tell you whether this would happen in 20 years or 200, but I do believe there has been a general sense of this sort of thing happening eventually.
The good thing with stories is you can make them say anything you like.
Every morning the dog rejoiced and said to himself "oh joy, I'm such a lucky dog, I don't have to do anything, the food is plentiful, I just eat and shoot the breeze the whole day long, what an awesome life!". And the dog went on to have an awesome life of plentiful food, breeze-shooting and leisure. The end.
No, you don't get the hedonistic life. The turkey gets a hedonistic life because its value is in its consumption. It's worth more to its owner fattened up as much as possible.
You on the other hand, are not bred to be consumed. And in fact the fatter you are, the more expensive and less useful you become.
So what you get is more likely starvation, if you aren’t culled to free resources.
Let's suppose you are a medium sized business. You've always wanted to provide top quality customer service but couldn't do it before because you'd need to hire 5 people to do it right. Instead, you strategically decided to not provide quality customer service and sell the product at a lower price than competitors. So you have no customer service person in the company. Service is bad. It limits growth. But it was strategic to not provide good service in order to gain an advantage somewhere else in the business.
But now, you can hire 1 customer service person, who could then use AI agents to provide the top quality customer service. Previously, you needed to hire 5 people, which wasn't worth it.
So you went from no customer service employee to 1.
I suspect that this is what will happen. Many companies will hire their first customer service person, or more. Many big companies will lay off most of their customer service people. The net effect might actually increase total customer service employment.
I suspect that job openings for customer service employees will actually be higher than now but companies won't be able to find enough AI-skilled people to fill the job. We're going to read about how there are more job openings than ever but companies can't find the AI skillset they need. This is why I think people who adopt AI now, learn it, understand it, get good at it, will be in high demand.
Don't get lost in the customer service example. Focus instead on the idea, which can be applied to many other professions.
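A toy calculation of that net effect (all numbers here are made up purely for illustration; the point is only that the sign depends on how many small firms hire their first person):

    # Hypothetical: many small firms go 0 -> 1 CS employee, while big
    # firms cut deeply. The net can still come out positive.
    small_firms = 50_000            # each hires its first CS person
    big_firms = 1_000
    cut_per_big_firm = 50 - 10      # from 50 CS staff down to 10

    net = small_firms * 1 - big_firms * cut_per_big_firm
    print(f"net change in CS jobs: {net:+,}")   # +10,000 in this toy case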
People thought AI being better than a human at reading medical images would put radiologists out of a job. Instead, radiologists were in more demand than ever, because AI made getting a scan more affordable and more accurate, which led to more patient demand.
Same can happen for customer service. AI makes customer service cheaper, better, faster. More companies offer good customer service in order to stay competitive. More customers demand customer service because it's better now and they expect it since all companies big or small can afford quality customer service.
I run a small SaaS business that both sells a super niche AI product to a non-technical audience of small businesses, and also is built and run by AIs, managed by me. I'm the only human in the mix.
I do all the sales and customer service myself, because it's a genuine selling point for my customers that they can talk to the owner if they have issues, and because these customers are the lifeblood of my company, and I want to stay as close to them and their needs as I can.
But it's still time-consuming.
On the customer service side, my next crack at automation will be having an agent triage inbound requests, queuing up the actions that need to be taken in response (cancel account, upgrade, split team, whatever), and then giving me the whole thing for approval before replying to the customer. That alone should easily cut my time spent on CS by 80% while keeping a more personal touch. I should also note that some of the customer support burden will be lifted by having more self-serve options, better docs, etc. But given that my customers are non-technical, there will always be some who just want to dash off a text or email because they hate tech and don't want to hassle with it.
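The loop I have in mind looks roughly like this (a Python sketch; the classifier is stubbed out, and every name here is hypothetical, not a real API):

    from dataclasses import dataclass

    ACTIONS = ("cancel_account", "upgrade", "split_team", "other")

    @dataclass
    class Ticket:
        customer: str
        body: str
        action: str = "other"    # proposed by the agent, never executed directly
        draft_reply: str = ""

    def classify(ticket: Ticket) -> Ticket:
        """Stub: in practice an LLM would pick one of ACTIONS and draft a reply."""
        ticket.action = "other"
        ticket.draft_reply = f"Hi {ticket.customer}, thanks for reaching out..."
        return ticket

    def triage(inbox: list[Ticket]) -> list[Ticket]:
        # Everything is queued for my approval; nothing is sent automatically.
        out = [classify(t) for t in inbox]
        assert all(t.action in ACTIONS for t in out)
        return out

    for t in triage([Ticket("Ada", "Please cancel my account")]):
        print(t.action, "->", t.draft_reply)   # I approve, edit, or reject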
On the sales side, I've thus far been 100% sales driven, but I'd like to introduce a self-serve signup flow that targets the 80% of customers who have simpler needs and could probably sign up on their own, and save the sales calls for bigger or more complicated deals.
AI needn't respond; it can instead be used to sort the meaningless noise from the actionable complaints, where previously all would have been ignored. Raise to the human only the issues that matter and can be addressed.
This I can somewhat agree with. But how much time is saved versus an agent just skimming a support request? Or just having keyword filters like "sales" / "purchase" to increase priority?
You could certainly let the agent try the task and supervise it before sending replies to customers, similar to how I still check the code produced by a coding agent.
If you remove the human from the loop in customer service, you won't gain a thing.
Spot on! I hate being sucked into an "accountability sink", where delays, bad treatment, and tangential answers are OK (somehow acceptable) and justified because it's not personal: "it's just the process".
In some ways AI sounds almost utopian. In theory it could redistribute manpower more evenly between small and large businesses, allowing them to compete more fairly and improving the efficiency of capitalism (the idealistic model, not the real-world state). However, then you remember that AI tech is currently almost fully controlled by big tech (and its next generation), and you have to ask whether they'll sabotage that improvement, because liberating the market is not beneficial to them. Let's hope that despite all odds and current trends we actually reach a state where AI can be run on-prem/locally and there are still SOTA models at least as open as they are today.
Strongly dispute this. Compute depreciates very rapidly. Inference is cheaper than training. DeepSeek was the warning shot across their bow, but the big AI firms can't afford to change course without jeopardizing their "Wile E. Coyote off the cliff" economics.
Disclaimer: I'm an AI compute investor. LLM performance is already plateauing; models will get more efficient. Good-enough models will be deployed on chips, the same way H.264 is a good-enough video codec but used ubiquitously.
More than your points, I'm very curious how these AI companies are going to turn a profit without making AI insanely expensive to use. Not long ago, each prompt was heavily subsidized; I doubt the picture has changed much.
Edit: maybe the model efficiency you mentioned is the key, we'll see.
I suspect they just won't. First-mover disadvantage is real for many markets. Everyone knows Amazon, but how many remember Kozmo.com?
My assumption is that OpenAI, Anthropic, etc. will go bankrupt and eventually be subsumed into Microsoft/Google/ByteDance & friends. New entrants will take their pioneering work and sell inference for pennies on the dollar without the massive R&D spend.
Supposedly they could make money if they didn't have to burn so much on research. There was an interview with Dario where he stated this and hinted that a monopoly would not have the research problem and thus could start making money.
> who could then use AI agents to provide the top quality customer service
I'm actually ROFL. Are you brain damaged? Or have you simply been in a coma over the past decade while businesses have outsourced their support to automation?
Hint: the result nearly universally has been closer to bottom quality support, if it even exists.
The current deployments of chatbots are not the bar to compare with. There’s an incoming wave of extremely capable agents and process reimagining that is going to be highly disruptive.
Been in this space over a decade, and this time really is different. It's hard for humans to perceive the exponential: it will be slow, then sudden.
I think the argument here is a bit of a strawman, though there is a good point in there as well. AI will not automate all customer support, but it has the potential to automate a large fraction of it.
The anecdote in there is about complex B2B enterprise software. That's not the majority of customer support, and is very heavy on escalating to actual experts.
You don't have to remove 100% of the jobs to have huge effects. Automating large parts of a few sectors would already create significant disruptions.
To be perfectly honest, the majority of work is going to see a restructure soon anyway.
"Triaging by LLM before sending task to any human" can work for almost anything, not just support calls. On another story I saw someone mention that they'd like something like an ad-blocker, but for content - a "content-blocker". Not too hard running even a local model that, via a browser extension, scnas the current page and places it into one of several bins: Read verbatim, summarise with ChatAI, Ignore completely, Read and mark for re-reading.
Software dev? Bin a ticket into "complex", "simple", "talk to lead dev".
Software proposal? Bin the proposal into "COTS available", "FOSS available", "Quick dev", "Too costly to proceed".
Bookkeeping? Accounting? They all have tasks that can be binned.
What does this all mean, I hear you ask? Well, you no longer need as many employees if some of the bins are "ChatAI and/or agent can complete this" with human review.
So, yeah, a lot of people are going to be out of work if this works like they say it does.
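To make the binning idea concrete, a minimal routing sketch (Python; the classifier is a stub standing in for an LLM call, and the bin names are just illustrative labels drawn from the examples above):

    BINS = ("ai_can_complete", "simple", "complex", "talk_to_lead_dev")

    def bin_item(text: str) -> str:
        """Stub for an LLM classifier constrained to return one of BINS."""
        return "simple" if len(text) < 80 else "complex"   # placeholder logic

    def route(text: str) -> str:
        b = bin_item(text)
        if b == "ai_can_complete":
            return "agent completes it, human reviews the result"
        return f"queued for a human ({b})"

    print(route("Button is misaligned on the settings page"))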
> Because the remaining 10% is what required most of the CS team’s time. They built an FAQ you can talk to.
These days it's hard to get people to read an email longer than 5 lines - yet people are super excited about abundant masses of text generated by LLMs. It does not compute....
Of course it is; it's just a scapegoat to lower wages, another power-dynamic trick pulled on employees. I have noticed a lot of managers going with a co-op+AI combo or outsourcing+AI, thinking it's the ultimate goldmine to minimize expenses and maximize profits, and they soon hit a reality check. And when they do, unfortunately, instead of resolving the root-cause issue, they go and hire only one senior for the team and overload and overwork them, while praising the AI for how it increased productivity and all.
Managed decline policies of western governments are much more threatening to white-collar workers and everyone else than AI will ever be.
AI will enable significantly faster economic growth, which is something the EU has been making impossible with legislation designed to destroy Europe's economic advantage.
Bifurcation is the right model and it’s already happening:
For things where the end customer doesn’t care if they’re interacting with an AI, reading content by an AI, etc. – or if the company doesn’t care what the customer thinks (see: automated phone customer support lines for the last twenty years) – the work will be replaced by AI work. Examples are any kind of rote documentation, generic digital asset creation like blog images, low level customer support, and most things where the company doesn’t really care about the customer, because the company is getting paid regardless.
If it does matter what the end customer thinks, the role will become increasingly humanistic in nature. Examples are high-end enterprise sales, personality and expertise-driven media and content, and anything where being “revealed” as an AI is perceived negatively.
My biggest worry currently isn't even job-related, it's that corporations and authorities will use AI for customer/client relations but that this AI will not be allowed to make any significant changes and is therefore an utter waste of time. In many places, this could turn an already dire situation into an absolute nightmare. What might make it even worse is that authorities - and probably also corporations - will likely ban or block user AI agents, so you cannot even use your own AI to negotiate with their AI.
That's something that needs to be addressed by lawmakers ASAP. There needs to be a right to speak to a human, or (the perhaps overly tech optimistic route) a prohibition of AI that doesn't have adequate decision-making power.
That's still almost three orders of magnitude from the iPhone 12 (0.02 Linpack TFLOPS, 4GB RAM, 256GB storage).
Edit: you are right, this source is wrong, but we are getting closer fast.
The A19 seems to get 2.3 TFLOPS (still under 10%, but a whole floor of computers vs. a smartphone is crazy!).
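Putting the numbers from this subthread side by side (the Earth Simulator's ~35.86 Linpack TFLOPS is the commonly cited 2002 figure; the phone numbers are the estimates above):

    earth_simulator = 35.86   # Linpack TFLOPS (Rmax), 2002
    iphone_12 = 0.02          # Linpack TFLOPS, per the comment above
    a19 = 2.3                 # TFLOPS, per the comment above

    print(f"Earth Simulator vs iPhone 12: {earth_simulator / iphone_12:,.0f}x")  # ~1,800x
    print(f"A19 as share of Earth Simulator: {a19 / earth_simulator:.0%}")       # ~6%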
> - before 2012 there was no smartphone
- by 2011, 35% of US adults had smartphones; the iPhone was released in June 2007
> Why is it an apocalypse if people don't have jobs anymore?
Because I need to pay rent? Eat? Not die naked in a ditch when I'm old?
It's not a certainty, but if you're not prepared for an extreme event, it can hurt badly.
In practice, customers contact support because:
* The customer wants the human touch.
* The company's systems were broken, and the customer wouldn't have called at all if they could quickly and easily do what they wanted online.
* Customers are routinely furious and want to complain and/or understand, and the company wants to brush them off.
AI doesn't help the first two; it only helps with deflection (what they call the last one).
You provide customer support but think it's just a cost center, so you happily reduce the workforce by 90% and count on AI to cover the workload.
The non-trivial cases end up with the remaining workforce. They get burned out, but luckily there are plenty of people looking for any job.
> Whoever has compute, will have the power.
Nonsense. There's a temporary shortage, but even with the shortage it's still a commodity.
Get prepared. Something is coming *soon*
And any even slightly skeptical comment gets downvoted to hell. One may start thinking there are bots promoting the narrative.
In fact, I go and implement dumb AI models in many companies, and executives immediately want to see "how many people they can fire with this advancement".