AI Will Be Met with Violence, and Nothing Good Will Come of It (thealgorithmicbridge.com)

by gHeadphone 630 comments 348 points


[−] Avicebron 33d ago
I feel like if people keep using AI as a blanket term for "inequality" and "inequality accelerants" then yeah, it's "AI"'s fault — when in reality the two need to be decoupled.

"Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.

[−] DavidPiper 33d ago
I wholeheartedly agree with and encourage this kind of academic distinction. However...

Until people with billions of dollars behind them do something with that money to offset the financial hardship that they're knowingly - and gleefully - bringing to others... The distinction has no practical use.

(And before someone says "that's the government's job!", consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all. None, naturally.)

[−] pxc 33d ago
We often look back on earlier stages in world history like we're somehow more advanced, or inherently smarter, than past societies. But one of the things made clear by the way this problem lines up perfectly with conflict during the industrial revolution (including the innovators flagrantly violating the law in order to win their advantage) is that for all our technological sophistication, we haven't really gotten better at the hard, human things: social coordination, planning, democracy. (Perhaps that's because we're still living under the same system that the industrial revolution finally birthed.)
[−] rayiner 33d ago
How much actual money do you think the “people with billions of dollars” have in comparison to the needs of the population as a whole? I think you’re very confused about where the actual income in the economy goes.
[−] integralid 33d ago

>consider how much lobbying money is coming from CEOs and companies

Make lobbying illegal; I don't understand why it's normalized.

[−] juleiie 33d ago
America isn’t the only country on earth; it’s just one of hundreds. That alone makes me confident that the future won’t be even 1/10 as gloomy as some people think.

We have a lattice of diverse legal and economic systems in the world and it takes just a single one to figure out the solution for others to learn from.

[−] giaour 33d ago

> consider how much lobbying money is coming from CEOs and companies who know the domain best and are agitating for better financial and social safeguards for all.

To hear Marc Andreessen tell it, the US tech industry's rightward turn in the 2024 campaign was specifically intended to head off any attempt to regulate AI [0]. So the blame rebounds to tech CEOs even if you believe that only the government should take a holistic view of a given technology's impact.

[0]: https://www.bloomberg.com/news/features/2025-06-11/marc-andr...

[−] 0xpiguy 33d ago
I do think there is a good chance that, in the not-so-distant future, universal basic income will become the norm. In previous industrial revolutions, large numbers of jobs were created to offset those that were lost. But there are very few things AI cannot perform faster and cheaper. Best case scenario, we will be in a world with both high productivity and high unemployment. Governments may have no choice but to provide universal income to everyone.
[−] username223 33d ago

> Until people with billions of dollars behind them do something with that money…

Or until actual people take the billions of dollars sitting behind those weak man-children. The US has fewer than 1000 billionaires now, and more than 300,000,000 people. That seems like a solvable problem.

[−] armchairhacker 33d ago
The distinction matters.

For example, the people fighting inequality can use AI to their advantage, and focus criticism on billionaires (and general bad AI usage, like slop PRs) instead of ordinary AI users.

[−] ethin 33d ago
This distinction is good in academic circles and similar (like on here). But the public — ordinary people who don't regularly visit Hacker News, or even know that it exists — doesn't care. To them, AI == inequality and inequality accelerants, because it is funded and run by the richest, most powerful people on Earth. And those very people are making everything worse for everyone but themselves, not better. Nobody is going to care about academic distinctions in such circumstances.
[−] tedivm 33d ago
How do you decouple it when the people who own it and are building it seem to be driven on increasing inequality?
[−] SoftTalker 33d ago
They haven't learned from Risky Business.

"Joel, you look like a smart kid. I'm going to tell you something I'm sure you'll understand. You're having fun now, right? Right, Joel? The time of your life. In a sluggish economy, never ever fuck with another man's livelihood."

[−] moron4hire 33d ago
My question to you is, are you willing to give up the tools of the oppressor in that pursuit of combatting the true villain of "gleefully taking away people's livelihoods"? What I mean is, yes, you are right, technically AI itself is not the problem. But it is the tool by which the oppressors are working their oppression.

Do you make this distinction that it's not the AI that is doing this to us so that you can be more clear in where to target your ire, or are you making the distinction so you can continue to use LLMs with a clear conscience?

[−] MontyCarloHall 33d ago
People currently assume AI will be an accelerant of inequality because all currently useful models (i.e. those potentially capable of mass labor disruption) are only able to run in multibillion dollar datacenters, with all returns accruing disproportionately to the oligarchs who own said datacenters.

I'm not sure this moat is inevitably perpetual. It's likely computing technology evolves to the point of being able to run frontier-level models on our phones and laptops. It's also likely that with diminishing marginal returns, future datacenter-level models will not be dramatically more capable than future local models. In that case, the power of AI would be (almost) fully democratized, obviating any oligarchic concentration of power. Everyone would have equal access to the ultimate means of production.

[−] saidnooneever 33d ago
It's always people's fault. Blaming technology is the most shortsighted response. People make it, and wittingly use it in disagreeable ways, because it earns them money.

There is something else that needs to change, which everyone is reluctant to admit, or is struggling with internally.

That's OK; it's called conscious evolution. It hurts, but it will be OK someday. It's generational, so progress is always slower than one would hope. Just know that every step in the right direction counts. Even if the entire world seems to disagree, keep pushing for what you believe is right, and hopefully that's something which doesn't infringe on other people's capacity to live a happy life.

[−] themenomen 33d ago

> "Gleefully taking away people's livelihoods will be met with violence, and nothing good will come of it." - fixed.

This statement is not decoupled; if anything, it is a more generalized one, as it does not point at any cause or causes for livelihoods to be taken.

[−] api 33d ago
I feel like the entire discourse is a proxy for what should be direct discourse about inequality and the regressive (rob from the poor, give to the rich) nature of our system.

Eliminate the AI variable entirely and the problem remains, therefore AI is not the problem.

[−] bdangubic 33d ago
Amazon did, Uber did, Walmart did… not seeing anyone throwing Molotov cocktails at their CEOs' houses…
[−] trymas 32d ago
How can you decouple this when there's a flood of headlines saying things like "Company X lays off Y% of workforce, betting on AI productivity"?

For that Y% of people (and their dependents, like kids, spouses, and aging parents), AI[1] is the direct "inequality accelerant".

-----

[1] Let's not discuss whether AI is actually the true reason in these layoff announcements or not.

[−] lumost 33d ago
So far, AI is a "unique" technology in that the main use case is "work replacement." Consumer applications have only existed to "destroy human creative media with low quality slop".

The vast majority of individuals derive no value from AI, they are instead told to do their jobs faster and own the mistakes of the AI for flat/declining pay. It's a bad deal for most people.

[−] slashdave 33d ago
Exhibit one: MAGA
[−] jimbo808 33d ago
AI is massively asymmetric in its benefits, which are overwhelmingly concentrated among those with extreme capital, and the authoritarians they're aligned with.

The benefits for them include:

- replacing workers with lower quality (but good enough) AI solutions, which degrade the quality of nearly every product or service for the consumer, but not by enough to offset the labor cost savings

- mass surveillance at low cost, a way to take the absurd amounts of data humanity now produces, and use to subjugate them

- propaganda/deception/misinformation, a new vector for propaganda which people are naively inclined to trust. Bonus points for the "flooding the zone" strategy, which AI makes easier

Benefits to the worker:

- lower cost of goods and services (but not for you, silly - they'll still be taxing you via inflation to fund their wars of conquest)

- you won't have to work anymore

- you won't have to eat anymore

[−] only-one1701 33d ago
The terms are defined by the AI dealers!
[−] dfxm12 33d ago
You have it backwards. People are using billionaire-owned AI, billionaire lobbying efforts gaming the system, and billionaire-owned media serving as a propaganda arm for AI as specific examples of the larger general idea.
[−] AtlasBarfed 33d ago
The PC revolution in the 1990s is one of the core drivers of inequality: the rich took almost all of the dividends from the vast productivity gains of personal computers as Moore's law rocketed clock speeds from 66 MHz into the gigahertz range.

Judging by the gleeful texts of CEOs, collapsed hiring, internal policy changes and pushes, and the additional decades of centralized political control, it's clear this is going to be even worse.

[−] stego-tech 33d ago
Maybe it’s my own lived experience coloring my perspective, but man the author feels like a centrist sitting upon an imagined moral high ground. “Violence is bad but inevitable” is the kind of milquetoast non-committal position one takes when they have nothing else to contribute to the discussion at hand.

My own take goes that one step further, as I said in a prior comment rebutting Altman’s whinging blog post:

> Your staunch refusal to heed the critiques of those you harm means that these outcomes were inevitable; not acceptable, not justifiable, but inevitable nonetheless. In a society where two full-time working adults still cannot afford a home, or children, or healthcare, or education, your insistence upon robbing them of their ability to survive at all is tantamount to a direct threat of violence against them. Your insistence that the pain is necessary, that others must clean up the messes that you and your peers are willfully creating, is the sort of behavior expected from toddlers rather than statesmen.

The problem does not lie with technological innovation itself, so much as the powerful humans behind it leveraging it for selfish ends without the consent of the governed. Violence becomes inevitable when people see no alternative, and necessary when the stakes are kill or be killed, as AI is currently steered towards. That’s not to condone the actions of the alleged perpetrators so much as it’s highlighting the litany of historical examples around such transformations and the effects violence has in forcing a peaceful compromise in most (but not all) cases. The New Deal couldn’t have happened without the decades of preceding strikes, protests, and government-sanctioned violence against workers; the violence made it impossible to ignore or delay any further, and the result was outing corporate entities who had been stockpiling chemical weapons and machine guns, so fierce was their opposition to sharing the products of labor with the workforce. AI already has the weapons, it has the surveillance apparatus, the government backing; violence is presently the sole recourse left to a growing number of people, because they know they’re an obstacle to the powers that be - and will be destroyed, lest they strike first.

That’s the real story, here, and those who haven’t lived in the gutters of society cannot possibly understand the desperation of those victimized by it in the name of greed.

[−] softwaredoug 33d ago
Highly recommend people learn the history of the Industrial Revolution. I recently discovered the Industrial Revolutions Podcast[1] and have been enjoying it. What's happening today isn't unprecedented. The pace of change that's happening IS similar to periods of the industrial revolution.

For example, the spinning jenny, overnight, basically put an entire craft industry of hand spinning into question. Probably more dramatically than anything Claude Code ever did.

It took A LOT, including several world wars, to reach the brief period of normalcy post-WW2 — which was probably the exception, not the rule.

1 - https://industrialrevolutionspod.com/

[−] ben8bit 33d ago
A lot of the magic of LLMs, I think, has been tarnished by these CEOs and other FAANG companies. It might have been a far more interesting world if they didn't bring "AI" or "AGI" into the conversation in such a politicized way.
[−] zkmon 33d ago
History has shown that an alien invasion can only happen because of the internal competition and in-fighting of the natives. Colonial empires proved it only a few centuries back. The invading alien powers are fuelled by the inviting natives.

AI (and computing technology in general) is an alien, as it defies all worldly norms. It can have exact identical copies, can replicate, can exist everywhere, can communicate across huge distances without time lapse, can do huge amounts of work without time lapse, has no physical mass of its own, no respect for time, distance, mass, or thinking work; it is not a living thing but can think... Just the perfect alien creature qualities.

Why are they allowed to invade Earth? The business goals, of course. To get a temporary edge over the competitors, until they acquire the same. But once everyone has the same AI, there is no going back. AI has established itself through the weak channels that are filled with greed, that can be bribed by giving toys (a business edge) in return for the keys to the dominance of the human race.

[−] ahjustacommente 33d ago
I think a lot of HN readers and a lot of first-world, law-abiding dwellers in this and recent threads forget to think this through.

Violence is not a panacea, but it is often the outlet.

Yes, we all (the majority of sane people) know that violence is not the answer, yada yada yada. Doesn't matter. It will happen anyway. Saying "it shouldn't happen, it does not solve X" will not stop it from becoming an outlet for frustrated people.

[−] tokioyoyo 33d ago
A bit of a tangent, but is there anyone working on something for a "what if AI pans out?" world? I'm not sure how to explain it, but if a lot of jobs get displaced by AI in the next 5 years, we'll obviously have big problems. Is there anyone working on analysis, outcomes, strategies, etc.? I think about it a lot, and it would be cool to help and contribute.
[−] dwroberts 33d ago

> But this is not the way. This is how things devolve into chaos.

Meanwhile

https://www.reuters.com/world/middle-east/how-many-people-ha...

> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.

(Mentioning this specifically because we know the DoD is using AI)

[−] conartist6 33d ago
I have said repeatedly that when AI eliminates the need for human creativity and work, the only thing left as the natural domain of humans will be bloodshed.

The fact that we're using AI killer robots to wipe each other out in droves doesn't bode well for that future does it...

[−] Hamuko 33d ago
One thing I'm kinda worried about is what happens to social trust in society once we have more and more LLMs flooding the Internet. Division in society, particularly in the United States, already seemed to be increasing at a rapid pace as social media became more and more relevant, and I'm afraid that LLMs are just going to add more fuel to the already-started fire.

I'm less concerned about AI becoming Skynet and killing humans, and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.

[−] nacozarina 33d ago
Humans have been successfully using violence for conflict-resolution for tens of thousands of years. We’ll be fine, it’s not our first rodeo.
[−] paulorlando 33d ago
I've been researching Luddite movements around the world. Agreed that the topic is timely.

A closer comparison to Sam Altman might be Edmund Cartwright (inventor of the power loom that automated weaving). The Horsfall and Altman situations differ in that Horsfall was a factory owner but didn't create or organize the teams that built the stocking frames. There was also an attempt on Cartwright's life as he was out riding. But like Altman and unlike Horsfall, he wasn't killed.

[−] taffydavid 33d ago

> It hit Horsfall in the groin, who, nominative-deterministically, fell from his horse.

Lovely writing. I once knew someone whose surname was HorsFELL, and now I wonder if they were related.

[−] markus_zhang 33d ago
There is nothing new about this. I just hope that when people scream “unions” they actually expect to do the things that early unions did, not just be armchair unionists.

But individuals can’t fight the trend. Might as well reduce costs/debts and prepare to go into the mountains for a few weeks once SHTF.

[−] ares623 33d ago
All this, so people like us can have an easier time doing a job that wasn’t that hard in the first place, and in reality was actually quite comfortable, for employers who are promising to lay us off, for productivity gains that aren’t even measurable.
[−] AvAn12 33d ago
I’m not sure anyone needs to break anything. I’m not sure this is a commercially viable business once all of the VC and foreign funding scaffolding goes away.
[−] MrOrelliOReilly 33d ago

> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.

Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe it misses a broader point that people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.

Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job will I be compensated, will we protect ourselves against x-risk, etc).

[−] spaceman_2020 33d ago
The worst part is that AI's first casualties are jobs that no one really asked it to kill.

AI is killing writing, music, art, and coding. I've done all of these voluntarily because I simply enjoyed them.

Meanwhile, the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI.

Look at Suno. Fantastic tool, but where was the need to deploy capital to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?

Seems like a complete misallocation of capital, if I'm perfectly honest.

[−] titanomachy 33d ago
I’m curious what the threshold is where violence IS considered acceptable. A lot of people seem to think that it was acceptable (or even necessary) for America to fight a war for its independence, or for the French commoners to launch the attack on the Tuileries that extinguished a thousand lives and led to the French Republic.

I don’t think that we can know in advance whether history will judge a particular violent act to be “acceptable”, but the rule seems to be more complicated than “violence is never acceptable”.

[−] bluegatty 33d ago
'Rogue superintelligence' is the most ridiculous sci-fi nonsense of the anti-AI hype, worse than the pro-AI hype.

AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk:

- giving it too much trust, being lazy, improper guards, and accidents

- leveraging it for negative things (black hats, military targeting)

- states and governments using it as an instrument of control, etc.

That's it.

Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.

Democracy, vigilance, laws, and responsibility are what we need, in all things.

[−] mrweasel 33d ago
I really should have gone into sewage work.
[−] mystraline 33d ago
I'm urging AI literacy and also self-hosting.

These are the means of production. Probabilistic, sure. Sycophantic? Yep.

But speeding up the boring parts is where LLMs excel.

[−] tsunamifury 33d ago
We are in an inverse innovator's dilemma.

Automater's dilemma: the labor that is removed from production due to automation can no longer sustain the markets that the automater was trying to make more efficient.

By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.