Why do we tell ourselves scary stories about AI? (quantamagazine.org)

by lschueller 126 comments 59 points

[−] ACCount37 35d ago
It's simple. It's because AI is the scariest technology ever made.

Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.

By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.

[−] jaredcwhite 35d ago
AI, the way you are describing it, has not been invented yet. It is a fiction.

What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary" but only because the humans who might decide to use them in fits of insanity are scary.

I'm not in the slightest bit uneasy about "AI" itself right now, because as I said, the AI of Sci-Fi has not yet been invented… and seems unlikely to be in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)

[−] ACCount37 35d ago
"It's just marketing" is just the "denial" stage wearing a flimsy disguise.

Even LLMs of today routinely do the kind of tasks that would have "required human intelligence" a few years prior. The gap between "what humans can do" and "what frontier AIs can do" is shrinking every month.

What makes you think that what remains of that gap can't be closed in a series of incremental upgrades? Just 4 years have passed since the first ChatGPT. There are a lot of incremental upgrades left in "any of our lifetimes".

[−] jaredcwhite 35d ago
You don't seem to be engaging seriously with respected experts in this field who have been reporting for years at this point that merely scaling LLMs and so-called "agentic systems" doesn't get us anywhere close to true AGI.

Also computers in the 1980s could perform many tasks that previously would have "required human intelligence". So? Are you saying computers in the 1980s were somehow intelligent?

[−] ACCount37 35d ago
And you don't seem to be engaging seriously with respected experts in this field who say "scaling still works, and will work for a good while longer".

If your only reference points are LeCun, or, worse, some living fossils from the "symbolic AI" era, then you'll be showered with "LLMs can't progress". Often backed by "insights" that are straight-up wrong, and were proven wrong empirically some time in 2023.

If you track LLM capabilities over time, however, it's blindingly obvious that the potential of LLMs is yet to be exhausted. And whether it will be any time soon is unclear. No signs of that as of yet. If there's a wall, we are yet to hit it.

[−] dsa3a 34d ago
That aside.

Let's look at the facts.

Are LLMs displacing labour? In the aggregate, not from what one can see. The aggregate statistics tell a different story; e.g., the hiring of software engineers is still growing year over year.

The limits of LLMs will be set by financial constraints. People like you seem to think there's an infinite stream of money to fund this stuff. Not really. It's the same reason Anthropic and OAI are now shifting focus to generating revenue and cash flow: they will not receive external funding forever.

[−] selfhoster11 34d ago
LLMs are indeed displacing labour. Junior IT roles are drying up in places. Translation and art are also becoming harder to earn from.
[−] robkop 34d ago
I can't speak for the States, but in AU I clearly see a massive displacement of undergrad and junior roles (though only in AI-exposed domains).

I say this both as someone who works with many execs and hears their musings, and as someone who can no longer justify hiring for junior roles myself.

Irrespective of that: if we take the strategy of only acting once the problem is visible to the layman, our scope of available actions will be invariably and significantly diminished.

Even if you are not convinced it is guaranteed, and do not believe what I and others see, I would ask you: is your probability of it happening really that close to 0? If not, would it not be prudent to take the risk seriously?

[−] jurgenburgen 34d ago

> If not, would it not be prudent to take the risk seriously?

What does taking the risk seriously look like?

[−] bigbadfeline 34d ago

> What does taking the risk seriously look like?

Politics - proper guardrails, adapting the legal framework to accommodate AI, and making sure it doesn't benefit only a preselected few.

Something that can and should be done yesterday is stopping the capital drain out of the economy and into accelerated, war-motivated AI development. There's no need for war-AI per se, but it's clearly the most likely reason for the capital drain and rush.

Once the rush and the wars stop, and some capital is made available for the rest of the economy, the latter can adapt to the introduction of AI at a normal pace. That adaptation should include legislative safeguards to support competition and prevent monopolization of AI and information sources.

[−] potsandpans 33d ago
Oh, you again. In every thread. Are you a respected expert in the field of AI? What are your qualifications?
[−] Xmd5a 34d ago
I'm not interested in reading the same arguments over and over again. AI is not scary anymore, it's fucking boring. Exits thread
[−] andrewmutz 35d ago
Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.

Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.

[−] AlecSchueler 35d ago

> Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.

It's not an either/or thing though. Compare it to something like combustion: sure, it definitely improved productivity, but it also led to countless violent deaths.

[−] afavour 35d ago
I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction", and everyone recognized that it was so dangerous to use them that they've only ever been used in one conflict.

I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.

[−] ACCount37 35d ago
I remind you of why nuclear weapons exist.

They exist because human minds conceived them, and human hands made them.

One of the major dangers of advanced AI is the ability to run something not unlike the Manhattan Project, staffed entirely by synthetic intelligence, inside a single datacenter.

[−] nextaccountic 34d ago
Yeah, the problem with AI is that it can become too good at performing general tasks, ranging from designing cancer treatments to designing bioweapons, and everything in between.
[−] deterministic 32d ago
You can't create and enrich nuclear materials inside a datacenter.
[−] SpicyLemonZest 35d ago
Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time, and even into the '50s, it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regard to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say there are scary new capabilities?
[−] afavour 35d ago

> Everyone recognized that it was so dangerous to use them after the first two mass casualty events

I really don’t think that’s true. Those who actually knew about the nuclear weapons knew very well how dangerous they were. Truman was deeply conflicted about using them.

[−] Hikikomori 34d ago
Truman changed after learning the real civilian death tolls they caused. The military leaders absolutely knew the impact beforehand, and kept advocating for their use in later wars.
[−] lopsotronic 35d ago
By any quantifiable measure, yes, and not by small numbers either.

Until someone can demonstrate a quantitative measure of intelligence - with the same stability of measurement as "meters" or "joules" - any discussion of "Super-AI" as "the most dangerous X" is at best qualitative/speculative risk narratology, at worst a discursive distraction. The architecture of the "social web" amplifies discursion to a harmful degree in an open population of agents, something I think we could probably prove mathematically. I am more suspicious of this social principle than I am scared of Weakly Godlike Intelligence at this moment in history; I am more scared of nuclear weapons than literally anything else.

People think we are out of the woods with nuclear weapons, but I don't think we've even seen the forest yet. We are Homo Erectus, puffing on a flame left by a lightning strike, carrying this magic fire back to our cave.

[−] webdoodle 35d ago
Nuclear weapons have rarely been used kinetically. Their real force multiplier is the fear.

A.I. is being used by so many people for so many diabolical, hidden, unknown things that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.

The expression "drinking the Kool-Aid" comes from the Jonestown mass suicide. That was an information hazard at work: a cult whose ideas created the end result of 900 people drinking poisoned Flavor-Aid. And that's just one example of a human-caused information hazard. What happens when someone with similar thinking applies it to A.I.? Will we even be able to sleuth out who did it?

[−] fontain 35d ago
The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?

And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.

[−] otabdeveloper4 35d ago

> AI is the scariest technology ever made

Well, it's a good thing that all we've managed so far is a large language model instead.

[−] saHqtr 35d ago
Most humans can do more than plagiarize text. But let's hype up the clankers before the IPOs.
[−] mastermage 34d ago
I think the nuclear bomb is still scarier. But AI is scary not for its destructive potential but for its potential to disrupt our society fundamentally, and not just in a good way.
[−] psychoslave 35d ago
Machines still need planetary-scale production pipelines, with human operators everywhere, to achieve reproduction at scale. Even taking the paperclip-plant-optimizer overlord as a serious scenario, it's still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and engage in genocides, to say nothing of destroying vast swathes of the biosphere that supports humanity's very possibility of existence.

That is, alien invasion and a giant meteor are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing, advanced operations" than on "not excluded by currently known, scientifically realistic what-ifs".

[−] ThrowawayR2 35d ago
There was a lot of FUD in the mainframe era, when computers were called "electronic brains" and feared for taking people's jobs, because the ignorant public mistook their lightning-fast arithmetic skills for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, and assembly-line robotics became cost-effective, but at no time did the "electronic brain" become intelligent.

There's a lot of FUD today about LLMs being sapient because the ignorant public mistakes their complex token-prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.

[−] causal 35d ago
"Why be afraid of nukes it's not like they WANT to blow up"
[−] deterministic 32d ago
Hmmm I would personally pick nuclear weapons as the #1 scary tech.

And a close (non-tech) second is the ruthlessness of sociopaths seeking power.

[−] sublinear 35d ago
This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.

Many have built their careers on that kind of work in the past, and yes, they are threatened, but that kind of work is inherently not collaborative and more vocational.

[−] Zigurd 35d ago
If I can plausibly say I'm making something super dangerous, the government is likely to want to be the first government to have it. If the check clears before they figure out if I'm BSing them or not, it's a win.
[−] everdrive 35d ago
One thing that strikes me, which I never really see anyone discuss, is that we've been afraid of conscious computers for a _long_ time. Back in the 50s and before, people were quite afraid that we'd build conscious computers. This was long before there was any sense that we could actually accomplish the task. I think that, similarly to seeing faces in the clouds, we imagine a consciousness where none exists (e.g. a rain god rather than a complex system of physics and chemistry).

Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness.

So you have a historical and persistent fear of consciousness in a powerful entity where no consciousness actually exists, combined with the fact that we have created things which definitely seem conscious (not to mention that consciousness could genuinely be on its way soon).

[−] SpicyLemonZest 35d ago
The actual contents of this article are making reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomaniacal goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out a HN comment does nothing to help achieve them - I suspect most people reading this comment are in the same boat.)

It's very frustrating that the magazine wrote such a dumb headline, which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.

[−] ramon156 35d ago
I feel like this article is written more for non-techies. A decent number of programmers have touched coding agents and know they "kind of" do the job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's no more broken than it already was. Win-win.

However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:

- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with and give me the facts it found, and even then I end up looking at the source myself

[−] 5asaKI 35d ago
Indeed. Apart from the obvious prompt research frauds mentioned in the article, the model learned all deceptive behaviors from hundreds of Yudkowsky scenarios that are easily available.

It literally plagiarizes its supposed free will like a good IP laundromat.

[−] Rzor 35d ago
For regulatory capture, of course. They are not fooling me. There may be other motives, and the more doom-minded crowd can find something in it for themselves as well, but you don't have to dig any deeper if you are looking for an explanation of the perspective of the people actually building it.

The Chinese tech sector popularizing cheap, open-source models sure did a number on that narrative, too. As did the Llama models a while ago.

[−] nalekberov 35d ago

> “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”

Why Harari feels an obligation to comment on everything is of course beyond me, but describing 'AI' as if it makes independent decisions to lie, renders moral judgements, etc. demonstrates either that he has zero clue how 'AI' is trained or that he chooses to mislead the audience.

[−] ggambetta 35d ago
We tell ourselves scary stories about everything new. Advances in electricity + medicine == FRANKENSTEIN!
[−] yanis_t 35d ago
I wish we didn't call this AI, as the term is crazily overloaded.

Those are programs. The only difference is how we write them: not with "if"s and "for"s. We take a bunch of bits that do nothing, then we organize them so that they output whatever it is we want.
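
To make that concrete, here is a minimal toy sketch (hypothetical, not from any real system): instead of writing explicit logic, we start with a parameter that does nothing and nudge it by gradient descent until the program's output matches what we want.

```python
# Toy illustration of "organizing the bits": the behavior (y = 2x) is never
# written as explicit logic; it emerges by nudging a numeric parameter.

def model(w, x):
    return w * x  # the entire "program" is one adjustable number

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # desired input/output pairs
w = 0.0    # initially the program outputs nothing useful
lr = 0.02  # learning rate: how hard each nudge pushes

for _ in range(500):
    for x, y in data:
        error = model(w, x) - y
        w -= lr * error * x  # gradient step: adjust w to shrink the error

print(f"learned w = {w:.3f}")  # converges to ~2.0
```

Real models do the same thing with billions of parameters instead of one, which is why the result feels nothing like code written with "if"s and "for"s.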

[−] qgin 32d ago
We’ve seen an incredibly powerful technology follow multiple exponential curves in its capability, yet we’re supposed to ask why we’re telling ourselves “stories” when we think about what will happen if that technology keeps following those curves without any sign of hitting a wall?

Is AGI certain? No. But there’s currently no specific reason to believe it isn’t coming in the next few years.

[−] bharat1010 35d ago
The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.
[−] tim333 33d ago

>After talking to experts, I was convinced there’s no reason to fear AIs developing a will to live, and then tricking or destroying us to avoid shutdown and take over the world. Unless, of course, we tell them to.

Once we have superintelligent AIs, I give it a day or two until someone tries that, prompting something like "take over the world for me".

[−] xemdetia 34d ago
The only thing I find scary about current AI is how openly so many AI companies have become untethered from basic ethical controls. There is no facade of decency, and it seems like a lot of them are running from their own shadow. I feel like there was a middle ground that could have been taken, bringing content experts in instead of looting the web the world has built.
[−] zaps 35d ago
Why do we tell ourselves scary stories about anything?
[−] dclowd9901 35d ago
My favorite part of this article was this bit, and naturally so, since I love the author:

> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”

I didn't realize it until I read it here, but yes, my fear isn't really about the machine, it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as an expendable thing and are willing to burn it down for profit. We should focus on getting rid of that problem first.

[−] latentsea 34d ago
Humans are the existence proof we seem to be chasing, and the goal appears to be to hit a superset of humanity's collective capabilities. I don't know if you've seen what we do to each other, but... I'm afraid of a decent chunk of us too. If what we build is a superset of our capabilities, then I am afraid of it, because I am already afraid of us.