The rational conclusion of doomerism is violence (campbellramble.ai)

by thedudeabides5 189 comments 92 points


[−] MostlyStable 32d ago
It is completely coherent both to think that an extremely bad thing is coming and to hold that this does not justify any particular action. Literal entire religions have been built on the concept that "the ends don't justify the means". It is not irrational or incoherent to believe that even something as serious as extinction does not justify arbitrary action.

Someone _may_ decide that it does, but it is not a necessary conclusion.

And that is completely aside from the many many (in my opinion convincing) arguments that such acts of violence would not be effective anyways.

This article is a much better (and much longer) extension of the argument, and a direct refutation of the OP article:

https://thezvi.substack.com/p/political-violence-is-never-ac...

[−] hn_throwaway_99 32d ago
The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

An ongoing conflict has resulted in the violent deaths of literally many thousands of children. The people who enable those deaths are usually safely ensconced thousands of miles away, often living in cushy suburbs.

To emphasize as strongly as I possibly can, I am not advocating for more violence. Quite the contrary, I'm advocating for less. I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.

Related, the Substack link you posted is titled "Political Violence is Never The Answer". But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?

[−] Aurornis 32d ago

> The older I get, the more I get the sneaking suspicion that statements like "the ends don't justify the means" and "violence is always the wrong answer" are, at best, wildly logically inconsistent in any society at any time, and at worst, designed to ensure only a very few people in power can commit violence.

My experience has been the polar opposite: The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others. This ranges from petty things like spreading gossip, to committing theft from people they don't like ("they had it coming!") to actual physical violence.

In every case, zoom out a little bit and it becomes obvious how their little self-created bubble distorted their reality until they believed that doing something wrong was actually the right and justified move.

I think you're reaching too far to try to disprove the statement in a general context. Few people are going to say "violence is always the wrong answer" in response to someone defending themselves against another person trying to murder them, for example. I think these edge cases get too much emphasis in the context of the article, though. They're used as a wedge to open up the possibility that violence can be justified sometimes, which turns into a wordplay game to stretch the situation to justify violence.

[−] hn_throwaway_99 32d ago
I think you have wildly misunderstood my point, given that your statement of "The older I get, the more I've seen people come to completely incorrect conclusions that justify their decisions to harm others" is not the polar opposite of what I was saying - if anything, it aligns with what I was saying very well.

To rephrase, my point is that phrases like "the ends don't justify the means" and "political violence is never the answer" seem to almost always be applied in very specific contexts, completely ignoring other contexts where many people (I'd say "society at large") are completely OK with the ends justifying the means and political violence.

To use your own sentence, I've seen many people in positions of power "coming to completely incorrect conclusions that justify their decisions to harm others", e.g. why bombing children in their beds is OK.

[−] pembrook 32d ago

> "Political Violence is Never The Answer". But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?

This is just survivorship bias. Violence sits at the root of ALL human societies. The vast majority throughout history have failed or are currently failing.

If you're on HN you're probably sitting in one of the lucky, relatively prosperous ones. Violence didn't create the prosperity, otherwise Sudan and Liberia should be the richest countries in the world.

Your relative prosperity came from your ancestors being smart enough to build frameworks to allow a society to mediate scarcity without the need for violence (common law, markets and trade, property rights, etc all enforced via a government monopoly on violence). In fact, any rich country is the result of systems of decentralized scarcity mediation without decentralized violence.

It's the lack of violence which built the relative prosperity you enjoy today. Not the other way around.

[−] solaarphunk 32d ago
This is just a version of individualism vs the state. Much of western society has become increasingly confused about what violence is acceptable, let alone who should be allowed to commit violence, or have a monopoly on violence.

If we can't agree on that baseline, then it's quite obvious that we'll continue to have an escalation in the types of violence that we've seen in the past few years, against the political and corporate classes in the US, with very little end in sight.

[−] bloppe 31d ago
There's no room for subtlety in public discourse, but ya absolutist moral philosophies almost never stand up to scrutiny. If only things could be so simple.

I've concluded that there is no universal moral framework. You have to be comfortable with the fact that your perspective is just one of many, but that doesn't mean it's not worth fighting for, it just also means you might be subjected to others' moral frameworks if yours conflicts with theirs. Pretty unsatisfying, but I don't think an alternative conclusion exists that is sound.

[−] lopsotronic 31d ago
Sayings like those are aspirational rather than being realist or simulationist, and they're supposed to be aspirational.

They're stories, just like all morality. It seems that when cultures get to a certain point in their dissolution, you have a growing population that has difficulty drawing lines between stories and reality, and has forgotten what stories are *for* in the first place.

Having aspirational moral systems is critical for a hyperdeveloped mostly-democratic society. It creates a gap between the Best Of Us and the Worst Of Us, and thus suggests a vector. When that aspirational system fails - whether to cynicism or brutality or both matters little - you have a societal collapse incoming or under way.

One depressing example was the progression of the United States' moral judgement on torture during the 21st century. I have very few illusions about whether torture was occurring during the worst of the Cold War years - extremely imaginative variants, in fact. Everyone knew what happens in bush wars; we had quite a few veterans who remembered very clearly. But if in 1963 someone had self-identified as a torturer, or recommended we just do it in the open, that same person would have been roundly (and justly) castigated[0].

After 9/11, the idea surfaced that yes, we're going to torture, and yes, it's ok to do it. We accept the "realism".

To see the impact of this, well, I could point to a police officer in 1992 and then to a police officer in 2022. I could also point to an Action/Adventure TV program of the 1980s - say, MacGuyver - and then point to an Action/Adventure TV Program of the 2000s - like, say, 24. Imperial Boomerang is a real thing, turns out, and now we all get to be Fallujah.

In reality, though? The answer to Scalia's "Shouldn't Jack Bauer torture a guy to save Los Angeles?" was always rhetorical[1], but if you took the bait, the correct answer was always "No", because it destroys the aspirational vector that defines our society. Or, more practically, if for no better reason than the fact that a SC justice is legally reasoning from a television show.

[0] The mixed reaction to incidents like My Lai shows how deep this division went. Not all of America thought it was a terrible thing, but we decided we were made of better stuff. Or we wanted to be, which, as it turned out, was also important.

[1] The "ticking time bomb" hypothetical which is almost always presented as a stack of epistemic certainty but which is actually unfalsifiable.

[−] eudamoniac 31d ago
You are right, and it's like someone else said, a morality story. Of course violence is sometimes the answer, the ends do justify the means if the ends are important enough, etc. They are indoctrinated and brainwashed, in the purest sense of the word, into not even considering these ideas.

I hold it to be self evident that political violence is the only potential action that the people of North Korea could take to save themselves. Peaceful protest and voting, obviously, does not work. A massive mob rising up and stabbing dear leader with a dinner knife, at the cost of probably hundreds or thousands of themselves, might work.

To deny the above paragraph is incoherent. All governments are somewhere on the scale of justifiably being overthrown with violence. It is a valid option, and how tyrannical the government has to be before the option is justifiable is a matter of opinion. All unpretended shock and horror at the sentiment is either by the sheltered or by the afraid.

People know this subconsciously. How many stories of righteous revolution have we seen and cheered for? Shrek, Hunger Games, The Matrix, Braveheart, Dune, Star Wars; everyone knows these protagonists killing government officials are in the right. They will never make the connection, but they know it, and the intellectually honest will acknowledge it. Are we ruled by such different beasts than those characters are?

[−] sublinear 32d ago

> How do people square those 2 ideas?

If you're seriously trying to understand the nuance of the act itself, you should consider reading what is standard issue for law enforcement and military.

"On Killing" by Dave Grossman is a classic.

If you only want to understand and stay in the realm of politics, I don't think you'll ever find a good answer either way. There's hypocrisy in every argument for or against violence. None of that is on the minds of people "in the shit" at that time. All that stuff comes later. As you're well aware, PTSD is no joke.

What I would take away from this is to recognize all the other ways in which we are compelled to act against our own self interest under what are sold as higher moral purposes.

From that perspective, it's not that hard to see how people can treat violence as just another tool. Whether it works is a question of how much those people value life above all else. If you're surprised that's not always the case in every culture, you may want to study that first. Beliefs may devalue life for persistence against a long history of conflict. This is where you may start to find some glimmers of an answer why we in the west sometimes think violence works to get those people to "snap out of it", but it really is ultimately about control of those people or that land at the end of the day.

[−] zahlman 31d ago

> I just don't understand why we have all these adages to convince people that "violence is always wrong", while I'm sure at least some of the people who say that are actively engaged in building machines designed to kill people.

First: because trusted people having such weaponry is, in expected value, believed to lead to less total violence. Second: because not all such violence is part of what you presumably have in mind when you speak of "ongoing conflict". (Of which there are many; when you speak of "an ongoing conflict" you come across as having a particular agenda, although of course I don't know which.)

> But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?

There is no contradiction and thus nothing to square. People are not responsible for the actions of their ancestors, nor of members of their identity groups, and especially not of the ancestors of members of their identity groups. And there is no contradiction between "the ends don't justify the means" and the ends being just.

[−] JKCalhoun 31d ago
"But our country (and a lot of them) were literally founded on political violence. How do people square those 2 ideas?"

That's easy enough. Your presumption is that the U.S. (and other countries) would not exist were it not for political violence. We don't know if that is the case as we have only the violent timeline.

[−] estimator7292 31d ago
It's almost like the real world just doesn't deal in absolutes. For any absolute blanket rule you'd like to apply to the entire universe, there's a practically infinite number of exceptions and edge cases.

The real world is subjective and messy. Life is an endless series of edge cases and unique situations. The real world also has no requirement to be logically consistent or in any way rational. Every rule has exceptions, no set of rules and codes can cover every situation.

The nature of life is that your personal moral code will break down at some point. Your personal sense of right and wrong is not a universal truth, and you will be faced with situations that challenge your morals.

A wise person understands this fact, and a mature person can handle the messy reality of morals. An immature person thinks their personal moral code is universal truth and must never be questioned.

My morals tend toward Buddhist views, but I've been around long enough to learn the compromises that reality requires. Violence must always be avoided at all costs, but sometimes it is necessary. Occasionally violence is good. There are no hard rules, reality just plain and simple does not work like that.

[−] nradov 32d ago
During WWII, the entire Allied leadership was willing to kill millions of Axis children if that's what it took to win the war and force the enemy to surrender unconditionally. There was at least some genocidal intent. Population centers were intentionally bombed to wipe out civilian factory workers. We can argue about whether that was right or wrong but the reality is that it's probably inevitable once armed conflicts involving nation states escalate to an existential level.

“Before we’re through with them, the Japanese language will be spoken only in hell.”

-- Admiral William F. "Bull" Halsey Jr., 1941

[−] slopinthebag 32d ago
Even more simply put, if political violence is never the answer and the institution of government is the biggest single source of political violence, what does that say about the legitimacy of the institution of government?

These trite quips act as a way to ensure only the elite ruling class has a justification for the violence they inflict.

[−] atmavatar 32d ago

> "The ends don't justify the means" and literal entire religions have been built on this concept.

Most religions rely on a supernatural force judging us post-mortem to balance out the rights and wrongs done during life.

The problem with this, of course, is that there's zero evidence this force exists, and relying on this force to right the wrongs in life only serves to prevent the masses from attempting to correct the wrongs themselves either directly via vigilantism or, more importantly, by replacing existing systems with ones which will serve them better.

I'm all for fixing things first via the soap box and ballot box, but sometimes the ammo box is the only resort left.

    The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
    - Thomas Jefferson
I don't believe we're at that point in the US, but I could certainly understand someone making that claim for a country like Iran.

[−] janalsncm 32d ago
Your reasoning makes sense under a regime of infinite games. In other words, the goal is to continue playing the game rather than win once.

These people do not believe we are in an infinite game. They believe they have a narrow set of moves to avoid checkmate, and apparently getting rid of Sam Altman is one of them.

I will suggest another reason though: we are likely already in the light cone of continued AI development. So none of the vigilante actions are justified under their own logic. It’s probably preferable to avoid being in jail when the robot apocalypse comes.

I don’t think the death of Sam Altman or even the dissolution of OpenAI would stop the continuation of AI development. There are too many actors involved, and too many companies and nation states invested in continuing AI development. Even if Eliezer Yudkowsky became president of the United States, he could not stop it.

[−] morningsam 32d ago
Yudkowsky himself also posted a rebuttal today: https://x.com/ESYudkowsky/article/2043601524815716866

[−] classified 31d ago
So according to you the War for Independence of the US against England never happened, and it would have been completely ineffective if it had happened.

Same goes for the French Revolution. The list could go on.

I think you're overly idealistic.

[−] coldtea 31d ago

> It is completely coherent to both think that an extremely bad thing is coming, and yet that does not justify any particular action.

Yes, it's called "fatalism".

[−] Joker_vD 32d ago

> "The ends don't justify the means"

Eh. The ends do justify the means, but only inasmuch as those means actually do help to achieve the ends — astonishingly often, they don't (and, more rarely but still often, they actually move you in the opposite direction from those end goals), and so remain unjustified.

[−] nitwit005 32d ago
Mentally ill people often have a justification for their actions which is vaguely rational, but you'll notice the vast majority of people aren't doing what they're doing.

These people just get attracted to political causes somehow. Even the women's suffrage movement had some people setting buildings on fire.

[−] thegrim33 32d ago
The LLM doomerism is just one arm of the general "us vs them" strategy - defining a group of people as the others who are the bad guys, defining yourself as the good guys, constantly fostering hate against the others, finding ways to give your group rationale for why they have the moral high ground, all of it in the end an act to gain power/influence/money for the people orchestrating it.

The anti-AI angle is just the latest flavor of it, replacing previous ones (I'm sure you can think of some) and eventually being replaced by the next new thing/person that they'll try to direct us to hate.

I'm willing to bet any amount of money that 99.99% of AI doomers identify with the same extreme end of the political spectrum. That should be a very big red flag and highly indicative of the real motive behind the movement.

[−] estimator7292 31d ago
Never forget that those with the power to do incredible violence against entire populations are the ones most heavily pushing the line that "violence is never, ever acceptable for any reason".

The nature of the real world is that no set of moral rules applies to every situation. Our universe is not one that deals in absolutes. Your personal moral code will be challenged with exceptional situations. Growing up is learning to deal with the fact that life always, always forces us to compromise.

A robust moral code is not one with strict, unbreakable rules applied blindly to the entire world. A robust moral code is one that guides you in exceptional situations. Morality is not law, it is making the best decision you can in any given situation.

The nature of humanity, the world, and our history is that sometimes violence is required. Sometimes the most morally correct thing you can do is take up arms and defend your people.

If someone is trying to convince you that there are absolute moral laws that apply to everyone everywhere, think very carefully about what it is they want you to believe and why. There is almost certainly a motive behind that statement.

Most importantly, one should never take moral advice as unquestioned law. Morality is something you discover for yourself, it cannot and must not be prescribed. History exists to teach us lessons like this. Read any era from any culture, learn how other people deal with moral questions in exceptional circumstances. Form your own opinions of what we've done right or wrong. There are no absolutes and morality is individual. You must find your own sense of right and wrong.

[−] AndrewKemendo 32d ago
Wouldn’t be a proper technology revolution without some version of labor realizing they are commodities and rejecting the collapse of the current form of labor power, so that tells me we’re actually in the transition from an old economic process to a new one.

Dont forget, the Luddites were correct about the direction that automation and labor power were going. They weren’t blindly “fighting machines”, they were fighting inequitable working conditions.

https://en.wikipedia.org/wiki/Luddite

>Periodic uprisings relating to asset prices also occurred in other contexts in the century before Luddism. Irregular rises in food prices provoked the Keelmen to riot in the port of Tyne in 1710 and tin miners to steal from granaries at Falmouth in 1727. There was a rebellion in Northumberland and Durham in 1740, and an assault on Quaker corn dealers in 1756.

[−] drivebyhooting 32d ago
Can LLMs design and build a chip foundry to manufacture semiconductors? No?

Can LLMs design and build the reactors to enrich uranium, breed plutonium, and construct nuclear weapons? No?

Can LLMs design and manufacture Shahed drones? No?

There are already super intelligences at large with "scary capability". And yet the world hasn't ended.

[−] gradientsrneat 32d ago
Maybe if the LLM CEOs stopped spreading doomer narratives to sell their products, these people would chill out.

[−] benj111 31d ago
Well. Where to start.

As someone who has been let down by the judiciary and many other state institutions, I do have the very strong feeling that perhaps direct action is needed, and that perhaps I wouldn't be in the position I was, if the powers that be took that risk into account. But then don't I, in some ways, become a terrorist* in doing those things, threatening those things, or making those things a reality? And there's the question of what message gets through, which isn't necessarily the one I want to convey.

*In a very broad way. I'm not thinking anything in particular, just the rationality of introducing a cost to them for their failures. Any form of protest I suppose is coercive.

[−] linksnapzz 32d ago
I'm not surprised that the sort of individual prone to taking Yud too seriously is also likely to be a comically-inept assassin.

Had he tried to blow up the diesel genset at a datacenter, he'd have burnt his lips on the exhaust pipe.

[−] tcoff91 32d ago
I have a different perspective on this given that I view climate change as the biggest threat we face as a species.

I feel like robotics is the only hope we have to be able to scale action against climate change. It's clear that emissions reduction is just not going to happen, and catastrophic warming is inevitable. Therefore we will have to do an extraordinary amount of labor in order to modify our environment to save civilization from sea level rise and to be able to repair damages caused by natural disasters. There just aren't enough humans to do everything that is going to need to be done.

It sure would have been nice to have 100 thousand firefighting robots battling the fires in Los Angeles last year.

Given that we need better AI in order to make these robots happen, I view AI as a critical technology that we need to maintain civilization.

[−] beloch 32d ago
"There is a final irony that deserves attention. If the doomers truly hold their stated beliefs at their stated confidence levels, they should be more honest about what those beliefs imply. A few weeks before the attack, a journalist asked Yudkowsky: if AI is so dangerous, why aren't you attacking data centers? His answer, relayed by Soares: "If you saw a headline saying I'd done that, would you say, 'wow, AI has been stopped, we're safe'? If not, you already know it wouldn't be effective."

----------

There are several thousand AI data centres in the U.S. alone, and hundreds are over a thousand square meters in floor space. Think about the physical effort it would take to reliably destroy, beyond the possibility of repair, just one typical computer in your home. Now multiply that out to thousands of server racks. Even if the employees rolled out the red carpet for you and handed you a baseball bat, you wouldn't get very far. Next, consider that these data centres are popping up all over the world in the most unlikely and remote locations. They don't need workers. They just need power, water, and, preferably, lax tax and environmental standards.

Doomers are attacking billionaires because they perceive them to be the soft, meaty, weak-points of a gigantic inhuman machine. They believe that just scaring Sam Altman a little will have a huge impact compared to trying to attack a data centre. However, billionaires can afford pretty decent security. This doomer movement probably isn't going to accomplish much until they target the engineers and support staff that surround billionaires. Billionaires don't scare easily because they have so much protection, but the poorly paid and poorly secured people around them are another story.

Poorly secured means easy to coerce with a stick. Poorly paid means easy to coerce with a carrot. The threat doomers pose is relatively small until they start turning employees against their own companies. What's an activist with a baseball bat compared to an employee who knows how to disable every computer in multiple data centres simultaneously?

[−] geremiiah 32d ago
LLMs are dangerous in other ways (LLM psychosis and false confidence has probably already caused negligent deaths). However, I don't think we are close to a terminator scenario.

At the same time, if we ever do create an AGI, and eventually an ASI, I think it would only be a matter of time before the machines take over entirely, and they would probably be the ones which will continue the legacy of our species. Is that bad? Idk.

[−] csense 31d ago
"It’s not a safety movement. It’s a priesthood with an origin story written in fanfiction." Is that the opinion of the author, or an LLM?

I feel like this is one topic where using an LLM detracts from the author's thesis; doubly so if they don't disclose it.

[−] hax0ron3 32d ago
I don't agree with Yudkowsky, but I think there's certainly a chance that he's right about AI destroying humanity. I just don't think the likelihood of that happening is as high as he thinks it is. But there certainly is a chance.

The problem with trying to stop it is, how? Even if you killed every single AI company leader and every single top AI engineer, it would almost certainly just slow down the rate of progress in the technology, not stop it. The technology is so vital to national security that in the face of such actions, state security forces would just bring development of the tech under their direct protection Manhattan Project-style. Even if you killed literally every single AI engineer on the planet, it's pretty likely that this would just delay the development of the technology by a decade or so instead of actually preventing it.

The technology is pushed forward by a simple psychological logic: every key global actor knows that if they don't build the technology, they will be outcompeted by other actors who do build the technology. No key actor thinks that they have the luxury of not building the technology even if they wanted to not build it. It's very similar to nuclear weapons in that regard. You can talk about nuclear disarmament all you want but at the end of the day, having nuclear weapons is vital to having sovereignty. If you don't have nuclear weapons, you will always be in danger of becoming just the prison bitch of countries that do have them. AI seems to be growing toward a similar position in the calculus of states' national security.

I can think of no example in history of the entire world deciding to just forsake the development of a technology because it seemed like it could prove to be too dangerous. The same psychological logic always applies.

[−] bjourne 32d ago
Yes, but against the angry doomers we have hordes of cheerful coomers who welcome the fruits of the labour of the AI with one open arm.

[−] necovek 32d ago
I am disappointed "Doomerism" is not an official name for the practice of putting Doom on anything and everything!

[−] PaulHoule 32d ago
... been saying this for years. If you really believed what Yudkowsky says you wouldn't just be posting on lesswrong, you would be taking direct action against a clear and present danger.

[−] jmull 32d ago
People are basing their entire world view on not understanding the nature of exponential phenomena.

Exponential phenomena only begin in a medium that holds the potential for that phenomenon, and they necessarily consume that medium.

That is, exponential phenomena are inherently self-limiting. The bacteria reach the edge of the petri dish. When all the nitroglycerin is broken up, the dynamite is done exploding.

That doesn't mean exponential phenomena aren't dangerous -- of course they can be. I mentioned dynamite, after all. And there are nukes.

But it's really far from "AI is improving exponentially now" to "AI will destroy everyone".

I see AI companies consuming cash at unsustainable rates. Since their motive is profit, this is necessarily limiting. Cash, meanwhile, is a proxy for actual resources, which have their own, non-exponential limitations -- employees, data centers, electricity, venture capitalists with capital, etc.

AI isn't going to keep improving exponentially -- it can't. Like every other exponential phenomenon, it will consume the medium of potential that supports it (and rather quickly).
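The self-limiting dynamic described above can be sketched as logistic growth. This is an illustration, not part of the original comment; the growth rate `r` and carrying capacity `K` (standing in for the "medium of potential") are arbitrary values chosen for demonstration:

```python
# Logistic growth: near-exponential at first, self-limited as the medium
# (carrying capacity K) is consumed. r, K, n0, and steps are illustrative.
def simulate(r=0.5, K=1_000_000.0, n0=1.0, steps=80):
    n = n0
    history = [n]
    for _ in range(steps):
        n += r * n * (1 - n / K)  # growth term shrinks as n approaches K
        history.append(n)
    return history

pop = simulate()
print(pop[5] / pop[4])    # early: close to 1 + r (the exponential regime)
print(pop[-1] / pop[-2])  # late: close to 1 (growth has stalled near K)
```

Early on, each step multiplies the population by roughly 1 + r, which looks exactly like runaway exponential growth; once the medium is nearly consumed, the per-step growth collapses toward zero on its own.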

[−] eemax 32d ago

> The Rational Conclusion of Doomerism Is Violence

No it isn't. The most prominent "doomer" has a strong grasp and deep, wholehearted appreciation for the principles of liberalism and the rule of law:

https://x.com/ESYudkowsky/status/2043601524815716866

Which the author of this piece of slop appears to lack.

[−] imbus 32d ago
[dead]

[−] augmentedmike 31d ago
[flagged]

[−] vrganj 32d ago
[flagged]

[−] aaroninsf 32d ago
Hot take:

I assume the author wrote this with the expectation that much of the readership would gasp, and react with "the natural horror all right thinking folk would have in response to violence of any kind."

Sorry, lol, no.

The appropriate question for "all right thinking" folk is very different: if argumentation has no impact and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?

That's not a rhetorical question.

To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.

How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?

How many of you have carried, or worked beneath, the banner, move fast and break things...?

What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?

And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?

One of the bigger domestic stories this past week which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.

Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."

When code is law, the law is buggy.

When there is no recourse through the law, you get violence.

[−] arduanika 32d ago
This has been decades in the making. We had premonitions of the violence that would come, for example with the Zizians. Get ready for what happens when a million blogposts worth of bad philosophy, bad analogies, and anti-institutionalist hubris are deeply indoctrinated into a vast, decentralized network of highly capable engineering minds who lack common sense and normal restraints.

They hate the framing that LLMs are just stochastic parrots, which is ironic, because Yudkowsky's many parrots are (latent, until now) stochastic terrorists.

[−] kelseyfrog 32d ago
War is a mere continuation of policy by other means[1]. When policy through legislation is empirically impotent[2], calls to continue attempts at a failed strategy are indistinguishable from being told, "continue losing."

There is a real, undeniable build-up of political tension. When it fails to be released in the legislative arena, it doesn't dissipate. When we point out that "the quality of life right now is the best it's ever been," it doesn't dissipate. When we try to crush it, it doesn't dissipate. The last remaining pressure release is violence, however condemnable it may be. Perhaps we should, you know, fix participatory democracy rather than pontificating on a natural outcome of a machine we created yet refuse to fix. If fixing it continues to be more difficult than eliminating violence, we should continue to expect violence.

1. https://oll.libertyfund.org/pages/clausewitz-war-as-politics...

2. https://archive.org/details/gilens_and_page_2014_-testing_th...

[−] hollerith 31d ago
Humble request: do not call us "AI doomers". Most of us would rather be called "AI anti-extinctionists".