This has mirrored what I've seen in my company. People in the data science/ML part of the company are super excited about AI and are always giving presentations on it and evangelizing it. Most engineers in other areas, though, are generally underwhelmed every time they try using it. It's being heavily pushed by AI "experts" and senior leaders, but the enthusiasm on the ground is lacking as results rarely live up to the extremely rosy promises that the "experts" keep making. Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle. You can only fool people for so long.
> Meanwhile, everyone can read the news about layoffs attributed to AI and can see that hiring (especially of junior engineers) has slowed to a trickle.
According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again. What data source are you looking at?
Flat at 60% of pre-COVID hiring while the number of graduates continues to increase and there's still a backlog of people who were laid off. That's not a particularly optimism-inducing hiring market.
Do not with a straight face act like pre-COVID hiring levels were a Good Thing. They weren’t. They were a symptom of a broken economy that you personally happened to pretty directly benefit from.
Thing is, the companies doing these layoffs rarely actually end up losing money from overhiring. They’re still profitable. Just not profitable enough for the people on top.
That’s a bit perverse. In democracies, corporations ultimately exist to serve society, not shareholders.
The plutocracy is forgetting that a working and productive populace - with fair wages and representation - is their end of the deal for disproportionately benefiting from the fruits of others' labor; it is also what directly prevents violence against the status quo. See: the top articles in the last 3 days.
Sure, but all they have to do is not hold up their end of the bargain. Who enforces that? These are just norms from 60 years ago that the rich decided they no longer have to follow.
They’ve started treating incorporation like a modern day papal indulgence, something that absolves whatever they do in the name of profit. It doesn’t. Limited liability buys you forgiveness in court but it doesn’t buy you forgiveness in the court of public opinion. Doing harm for a company is still doing harm.
I think you are correct in asserting the merciless discipline of market forces.
I also think that counterpoints about the inhumanity of firms miss that economies are an objective way to structure incentives to achieve subjective ends.
If you want more money to travel to other parts of the pyramid, or you want to disincentivize certain behavior, then economic incentives can be set up to achieve those goals.
Expecting firms to do charity is pointless. Expecting firms to optimize under constraints is not.
At societal scale hiring people is self-interest, not charity. Otherwise you'll get to exactly where the US is heading now: large parts of the consumer market are mostly dead because people have no discretionary spending power left, and the only way to make money as a business is to become a monopolist.
There have been a lot of headlines the past couple years about companies stating they are doing layoffs or slowing hiring because of AI. I would bet the average adult pays way more attention to news headlines than FRED reports.
I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions, regardless of what some statistics say.
The companies doing the layoffs are themselves stating AI as a reason; that’s the news people are responding to. The parent didn’t claim that it’s based on reality, but it informs public opinion.
Whether or not the CEOs' statements are true, they affect public opinion.
You have CEOs claiming that AI is driving layoffs alongside CEOs of Anthropic and OpenAI talking about the end of white collar work. All this is then amplified by tech journalists like Casey Newton and Kevin Roose. The biggest public proponents of AI keep telling people that it will take their jobs.
What comes after the end of jobs? Who knows. Sam Altman occasionally making vague statements about curing cancer. There are vague hand-waving notions of a Star Trek utopia.
But to be honest it feels more like a Cyberpunk future, where the Altmans and Musks get to live cancer-free and the rest of us eke out an existence without jobs or any prospect of a better life. Or maybe it looks more like Star Trek, but we're all red shirts.
Anything Musk or Altman says is just about raising money. Nothing they say can be taken at face value. There’s a funny interview with Marc Andreessen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
The better question to ask is what happens after the end of OpenAI/Tesla/etc? AI may take your job away, but not because of robots replicating your labor, just good old-fashioned economic collapse.
Blame them, then. Simple as that. Lying to "just raise money" is one of the most harmful kinds of lying. It distorts the whole economy.
> There’s a funny interview with Marc Andreessen, where he talks about how he never looks backwards and doesn’t have any sense of introspection and then gets into a rambling and completely wrong history lesson. That’s what these guys do.
Yes, we know they are psychopaths and assholes. The blame is on them.
>According to FRED/Indeed[1], software job openings have been roughly flat for 2-3 years, and they've actually been slightly increasing again.
None of this contradicts OP's claim, because at least anecdotally, juniors/interns are getting disproportionately squeezed by AI. Why hire an intern to write random scripts/tests for you when Claude Code does the same thing? So overall job postings could be flat or slightly rising, but only because everyone is rushing to hire senior/principal staff to wrangle all the AI agents, offsetting the junior losses.
They are increasing, but the level is still lower than it's been since Oct 2020. In my experience at two different companies since 2020, hiring more or less stopped sometime in 2022 to early 2023. In early 2025, some hiring started again but it's still a very low rate compared to pre-COVID, particularly for new college grads. While I don't believe that AI has actually taken any significant number of jobs in the software field, I do think it's being used as a convenient excuse by executives to lay people off. Regardless of the actual numbers though, the general perception in tech is "lots of layoffs are happening with not so much hiring" and "AI has something to do with it (either directly or as an excuse)."
Software job openings are mostly bullshit. Companies post ghost jobs en masse, while refusing to hire people. You can ask anyone that's had to look for a job recently and see how bad the market is.
Is that data useful at all? Indeed postings are a poor proxy for how many people actually get hired. One of the major problems we have is that employment statistics are largely just estimates, and don’t reflect reality on the ground. Factor in the Trump admin firing most of the BLS and other agencies for not giving him the numbers he wants, and there really is no reliable data.
I feel like the junior problem contributes more heavily than people might think. The people on top see juniors as replaceable since they view them as cheap menial labor, whereas most seniors at least acknowledge the human element as part of the benefit.
> but the enthusiasm on the ground is lacking
Using Claude and friends takes all the fun out of the job, so I'm not surprised engineers are not enthusiastic. It's cool for a month, then you realize we went from solving problems, implementing algos, optimizing slow code, fixing security issues, and other fun stuff, to writing prompts all day long.
I think people are really underestimating how poorly today's tweens think of AI. "That looks like chatgpt" is an insult. Kids avoid things because they heard somewhere that AI might have been involved and have a sense that means it is bad or immoral or illegal or cheating in some nebulous way, and it's reinforced by their teachers telling them that using AI for homework is cheating.
I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
[X] Tweets and instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics being used to support the title with little to no regard for continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => the percentage was 52% in 2023, 50% in 2024, and 52% in 2025 - mostly flat to me - with the real jump being 2022-2023, from 39%.
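For what it's worth, the year-over-year deltas in the quoted figures make that last point directly. A quick sketch using only the percentages cited above:

```python
# Share of respondents saying AI makes them "nervous", per the figures
# quoted above (2022-2025). Year-over-year deltas show where the jump was.
nervous = {2022: 39, 2023: 52, 2024: 50, 2025: 52}
years = sorted(nervous)
deltas = {y: nervous[y] - nervous[y - 1] for y in years[1:]}
print(deltas)  # {2023: 13, 2024: -2, 2025: 2} - the only real jump is 2022->2023
```

Everything after 2023 is within a couple of points of flat; the headline's "50% to 52%" cherry-picks the smallest slice of that series.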
I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.
a person can have full faith in the potential value of ai science and simultaneously have zero faith in the current crop of business stewards of that science.
no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.
AI continues to be a stupidly vague term, and the example I keep going back to is present in this article
Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything
That said, I also continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which - as many in the comments already point out - is what the vast majority of people are actually mad about, and rightly so
I think it's not that difficult to see why a technology that will likely trigger widespread unemployment during a cost of living crisis, an arms race with China, along with all the alignment concerns, might not be hugely popular with the public.
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
It is worth pointing out that we got here despite all of the “alignment” research and safetyism surrounding the models. As it turns out, the models don’t wake up and start destroying things. We knew this all along, but every time a new article came along and anthropomorphized and exaggerated another experiment it fed the clickbait machine.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
My wife has a very serious health issue that has caused more suffering than words can describe. o1-preview was the first AI that actually proved useful. From there on, each improvement in AI brought an incremental improvement in her situation. Even recently, we were able to pinpoint exactly what was causing her flare and solve the situation the same day, just by prompting a Claude Opus conversation where I'd shared all her health notes. But if I weren't a data freak and hadn't been collecting data about her issues (what she does/takes and how she feels) for so long, I don't think we would have been able to get this far. So I think AI appeals to people with problems that can be solved by finding patterns in data. People who say AI makes mistakes don't understand that the power is in finding patterns, not in finding THE right answer. You need to prompt from that perspective
In case you're wondering who they mean by "AI experts", I checked the Pew poll:
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
I don't know how many times I've seen some Google AI summary or ChatGPT response with references that, when I checked, did not say what the AI summary said. If a high school student falsified references in a paper like this, they would get a bad or failing grade. This is bad, not acceptable, the teacher would say.
But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
I work with LLMs extensively and daily and they are very useful. BUT dear god, absolutely nothing about them is intelligent.
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops, pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times, but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
I think it's pretty clear that the problems with AI are:
1. It's overhyped. Try writing a blog post about it that doesn't sound like hype. Everyone is sick of reading it now.
2. It's affecting the wrong people. It used to be that the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer too.
3. It severely damages the "work hard" way out. Competition becomes brutal when there's almost no barrier to entry. This will drive down profit, affect hiring, and turn the industry into a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works, which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
People are anti AI for obvious and valid reasons, but I think we should focus on where the profit goes and not on hating the technology itself.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if AI did all the jobs so people could focus on doing what they want - not that we'd go back to the pre-AI era.
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
Been saying this for a bit but the things I’ve seen associated with AI seem to be the things that it’s pretty mid at. Coding, automated actions etc. I wholeheartedly believe adoption and perception would be better if the things it was amazing at were pushed more.
Take log review, for example. Whether it's admin or security, LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. They can turn an hour-long log review into a 10-minute one.
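As a sketch of the kind of pre-processing that makes this workflow cheap (a hypothetical helper, not any particular tool): before handing mixed-format logs to an LLM, a dumb regex pre-filter can shrink the haystack so the model only reads the lines that plausibly matter.

```python
import re

def review_worthy(lines):
    # Hypothetical pre-filter: keep lines that look like errors/warnings,
    # regardless of log format (ISO timestamps, syslog, JSON). The survivors,
    # plus some surrounding context, are what you'd paste into the LLM prompt.
    pattern = re.compile(
        r"\b(error|fail(?:ed|ure)?|warn(?:ing)?|exception|denied)\b", re.I
    )
    return [ln for ln in lines if pattern.search(ln)]

logs = [
    "2024-01-01T00:00:01Z INFO svc=auth login ok",
    "Jan  1 00:00:02 host kernel: WARNING: disk pressure",
    '{"ts": 3, "level": "error", "msg": "timeout talking to db"}',
    "2024-01-01T00:00:04Z INFO svc=auth login ok",
]
print(review_worthy(logs))  # only the WARNING and error lines survive
```

The point isn't the regex; it's that the LLM is good at the remaining step (making sense of whatever heterogeneous formats survive the filter), which is the part that used to eat the hour.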
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration and are using the fact that AI investment is intrinsically connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends.
> Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just a U.S. problem (spilling over to neighbouring Canada too).
So, no Luddites in sight, again. It's just public perception of a polemical topic being leveraged for ideological reasons, sinking AI sentiment in the US only.
My experience has been that the disconnect is between the Bay Area and everywhere else. The engineers at my company are split 50% in the Bay Area and 50% elsewhere. The engineers in the Bay treat it as a borderline religion. They evangelize it, and do not allow any form of criticism. It reminds me of the hippie movement: idealistic and not grounded in reality.
The lack of federal permitting standards for AI data centers is really going to bite the industry in the ass. We also probably need something akin to the WARN Act for AI-related layoffs. (Possibly with multi-year benefits for large companies.)
This AI rollout has been fundamentally rushed and fucked from the very beginning and I think the people who are responsible for doing it this way have done more irreparable damage to society than any single group of humans in the entire history of the species, and I mean it.
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will bring ruin to the job market and someone should probably do something about that sometime soon. Zero effort to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as a thinking enhancement rather than a replacement, how to keep in mind that it’s trained to please and will literally generate anything to get users to click the thumbs-up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
Giant leaps in innovation almost always have a reaction like this.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, irrational reactions to something new aren't exactly unprecedented.
I don't think the disconnect is very surprising to the "insiders".
Your Darios and Sams know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
Well, we can easily see that the "abundance" people are wrong (for example, everyone can't have a penthouse apartment overlooking Central Park, no matter how capable the robots become).
An alternative possibility is that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
My own anecdotal experience is yes, there is a real visceral hatred of AI among Gen-Z. You have to look at it through a lens where they already feel like there's been a massive amount of intergenerational theft against them - particularly with the housing market putting owning a home out of reach, along with the evaporation of the concept of a stable career. Now they are going through education learning skills that they are incessantly hearing will have no purpose and there will not be jobs for them.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
What the tech elite fail to understand is that we are at historic levels of wealth and income inequality. Access to healthcare is determined by one’s employment which makes what I’m about to explain a matter of life and death.
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
Regardless, I think we are going to see an acceleration of AI research.
I just wish my wife were more serious about camping and learning survival skills. I think shit is going to hit the fan in the next 5-10 years, but she thinks that's crazy. Oh well, maybe I am crazy.
One of the most hilarious AI-vangelical posts I've seen recently is from Steve Yegge via Simon Willison [0]....
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of its employees? Or, given this is a consistent curve across the industry (even at Google)... maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
The tone-deafness of the tech community is so unbearable. Either too on the spectrum, too ambitious (the world is fine because I’m getting mine), or too isolated from non-tech people to realise most people despise what they’re creating.
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and all of its original founding researchers had their dramatic fallout and began saying crazy-person things in public like "the end is coming."
Why did they insist on force-launching ChatGPT? Google at the time refused to launch its own LLM-based chat (it was Google's own research that gave birth to LLMs) because it knew the unreliability and negative outcomes added up to a poor product experience.
Instead of launching quietly like DALL-E and keeping it fun and experimental, nope, they threw it up online and moved full steam ahead.
"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.
The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find use cases. Natural-language programming has been tried for half a century, and it has finally arrived.
Yet it's so tainted by all the crazy-person talk and doomsday messaging, and it's been thrown out there in such a haphazard way and burned so many bridges, that this technology is truly toxic. The fact that Gen-A and Gen-Z now have to waste brainpower speculating whether something is AI-generated is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
I have seen this shift myself. A year ago everyone was super excited by AI. Now, if you exit the tech ecosystem, most people have become decidedly “meh” about the tech.
“Is that some nonsense ChatGPT told you?” Has turned into an almost cynical mocking in response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there are serious storm clouds and headwinds on the horizon.
Paraphrasing the classic, it's not AI that people are unhappy with, it's their life around AI. The world generally appears to have become a harsher and more dangerous place - even though it hasn't. But people and especially tabloid press like finding scapegoats and participating in mass hysteria. The anti-AI hysteria is going to go away soon while AI isn't. It's just another tool, like cars or factories. Granted, it brings some danger, but at the same time it brings overwhelmingly more good.
If "AI" was just free local and open models running on consumer hardware, fewer people would have an issue with it. Which highlights that the issue is with the hyper scalers, the rhetoric, the corporations, the marketing, etc etc.
We are ever so close to nearing the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.
[1] https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
There will always be steep corrections when they overhire driven by economic cycles or otherwise (and we're living through an otherwise).
> I also don't see why everyone would dismiss the statements of large company CEOs about why they are making hiring/firing decisions
Dismissing their words just brings us back to the issue of what is really going on.
And lest it be forgotten - AI is a huge part of the US economy at this point, and that economy is highly dependent on firms spending on tokens.
Saying it's just CEO market-speak means we have an AI bubble that is even more worrisome.
> The biggest public proponents of AI keep telling people that it will take their jobs.
Can you blame people for hating this?
I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
[X] Tweets and instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics being used to support the title with little to no regards to continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => percentage was 52% in 2023, 50% in 2024 and 52% in 2025, seems mostly flat to me, with the real jump being in 2022-2023 with 39%.
no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.
Meaningful advances in medical diagnosis are not coming from chatbot companies; some are coming from machine-learning methods. Perhaps measuring public sentiment about such a vagary is not a very productive way to quantify anything.
That said, I also continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world, and what they do with that power, which, as many in the comments already point out, is what the vast majority of people are actually mad about, and rightly so.
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company; enabling scams isn’t allowed, for example, because of the reputational damage.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
Imagine choosing to be an expert in something that you think is a coin flip away from making the world worse.
It looks like:
1. They take billions in investment
2. They spend trillions
3. They and their investors profit in the quadrillions from all the "labor saving"
4. ???
5. Everyone's needs are met.
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
But we have been sold on using these routinely wrong AI summaries as the go-to source of "truth" at all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
>... with Gen Z reportedly leading the way...
The kids are alright.
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.
2. Affecting the wrong people. It used to be that the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer too.
3. Severely damages the work hard way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profit, affect hiring and will become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if AI did all the jobs so people could focus on doing what they want, not that we'd go back to the pre-AI era.
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
Take log review, for example. Whether it’s admin or security, LLMs are incredible at reading awfully formatted logs and even using them to pull meaning from other logs as well, turning an hour-long log review into a ten-minute one.
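For what it's worth, half the speedup in that workflow usually comes from compacting the noise before the model ever sees it. Here's a minimal sketch of that pre-step; `compact_logs` and the masking patterns are illustrative, not any particular tool's API:

```python
import re
from collections import Counter

def compact_logs(raw: str, top_n: int = 20) -> str:
    """Collapse noisy, awkwardly formatted log lines into a short digest
    suitable for pasting into an LLM prompt (illustrative sketch)."""
    counts = Counter()
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        # Strip leading ISO-ish timestamps so repeated messages collapse.
        line = re.sub(r"^\[?\d{4}-\d{2}-\d{2}[T ][\d:.,Z+-]+\]?\s*", "", line)
        # Mask variable parts (hex addresses, numbers) so similar lines dedupe.
        line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<addr>", line)
        line = re.sub(r"\b\d+\b", "<n>", line)
        counts[line] += 1
    # Most frequent message shapes first, with occurrence counts.
    digest = [f"{n:>6}x  {msg}" for msg, n in counts.most_common(top_n)]
    return "\n".join(digest)
```

A thousand near-identical connection errors become one counted line, so far more of the actually interesting entries fit in the model's context window.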
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration, and are using the fact that AI investment is intrinsically connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends. Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just a U.S. problem (spilling over to neighbouring Canada too).
So, no Luddites in sight, again. It's just public perception of a polarizing topic being leveraged for ideological reasons, sinking AI sentiment in the US alone.
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero effort to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, irrational reactions to something new aren't exactly unprecedented.
Your Darios and Sams know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
An alternative possibility is that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
The current state is: Nearly all productivity growth since 1980 has gone to shareholders, not workers: https://www.epi.org/productivity-pay-gap/
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
I just wish my wife were more serious about camping and learning survival skills. I think shit is going to hit the fan in the next 5-10 years, but she thinks that's crazy. Oh well, maybe I am crazy.
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of their employees. Or, given this is a consistent curve across the industry (even at Google)... Maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
[0] https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-ever...
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and its original founding researchers had their dramatic falling-out and began publicly saying crazy-sounding things like "the end is coming."
Why did they insist on force-launching ChatGPT? Google at the time refused to launch their own LLM-based chat (it was their own research that gave birth to LLMs) because they knew all of the negative outcomes and unreliability made for a poor product experience.
Instead of launching it quietly like DALL-E and keeping it fun and experimental, nope, they threw it up online and moved full steam ahead.
"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.
The disconnect exists because these specific men, making those specific bold crazy-person claims, with zealous cult-following employees (including many of us here in this forum), kept marching ahead. Not only that, but no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find use cases. Natural-language programming has been tried for half a century, and it has finally arrived.
Yet it's so tainted by all the crazy-person speak and doomsday messaging, and it was thrown out there in such a haphazard way that it has burned so many bridges, that this technology is now truly toxic. The fact that Gen-A and Gen-Z now have to waste brain power speculating whether something is AI generated is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
“Is that some nonsense ChatGPT told you?” Has turned into an almost cynical mocking in response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there are serious storm clouds and headwinds on the horizon.
We are ever so close to nearing the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.