It's simple. It's because AI is the scariest technology ever made.
Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
AI, the way you are describing it, has not been invented yet. It is a fiction.
What is called "AI" today is an extremely vague marketing term being applied to various software technologies which are only dangerous because humans are dangerous. Nuclear & chemical weapons are also "very scary" but only because the humans who might decide to use them in fits of insanity are scary.
I'm not in the slightest bit uneasy about "AI" itself right now, because as I said, the AI of Sci-Fi has not yet been invented…and seems unlikely to in any of our lifetimes. (Not throwing shade on clever researchers. We also don't have working FTL travel, though plenty of scientists speculate on how such an engine might be built.)
"It's just marketing" is just the "denial" stage wearing a flimsy disguise.
Even today's LLMs routinely do the kinds of tasks that would have "required human intelligence" a few years ago. The gap between "what humans can do" and "what frontier AIs can do" is shrinking every month.
What makes you think that what remains of that gap can't be closed in a series of incremental upgrades? Just four years have passed since ChatGPT first launched. There are a lot of incremental upgrades left in "any of our lifetimes".
You don't seem to be engaging seriously with respected experts in this field who have been reporting for years at this point that merely scaling LLMs and so-called "agentic systems" doesn't get us anywhere close to true AGI.
Also computers in the 1980s could perform many tasks that previously would have "required human intelligence". So? Are you saying computers in the 1980s were somehow intelligent?
And you don't seem to be engaging seriously with respected experts in this field who say "scaling still works, and will work for a good while longer".
If your only reference points are LeCun, or, worse, some living fossils from the "symbolic AI" era, then you'll be showered in "LLMs can't progress". Often backed by "insights" that are straight up wrong, and were proven wrong empirically some time in 2023.
If you track LLM capabilities over time, however, it's blindingly obvious that the potential of LLMs is yet to be exhausted. And whether it will be any time soon is unclear. No signs of that as of yet. If there's a wall, we are yet to hit it.
Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.
Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.
I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction", and everyone recognized it was so dangerous to use them that they've only ever been used in one war.
I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.
The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?
And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.
I think the nuclear bomb is still scarier. But AI is scary not for its destructive potential but for its potential to disrupt our society fundamentally, and not only in good ways.
Machines still need planet-spanning production pipelines, with human operators everywhere, to reproduce at scale. Even taking the paperclip-plant-optimizer overlord as a serious scenario, it's still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and commit genocides, to say nothing of destroying vast swaths of the biosphere that makes human existence possible.
That is, alien invasion and a giant meteor are plausible scenarios too, but at some point one has to rank threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing, well-advanced operation" than on "not excluded by some currently known, scientifically realistic what-if".
There was a lot of FUD in the mainframe era, when computers were called "electronic brains" and people feared they would take their jobs, because the ignorant public mistook lightning-fast arithmetic for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, and assembly-line robotics became cost-effective, but at no time did the "electronic brain" become intelligent.
There's a lot of FUD today about LLMs being sapient because the ignorant public mistakes their complex token-prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.
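For anyone in this thread who hasn't looked under the hood, here is roughly what "token prediction" means mechanically. A toy sketch in Python; the vocabulary, probabilities, and the lookup table standing in for the neural network are all invented for illustration:

```python
import random

# Toy next-token predictor. A real LLM replaces this lookup table
# with a neural network over a vocabulary of ~100k tokens, but the
# generation loop is the same idea: predict a distribution over the
# next token, sample one, append it, repeat.
# (All tokens and probabilities here are made up for illustration.)
MODEL = {
    "the": {"cat": 0.5, "dog": 0.4, "<end>": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "dog": {"ran": 0.6, "sat": 0.3, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = MODEL.get(tokens[-1], {"<end>": 1.0})
        # Sample the next token in proportion to its predicted probability.
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat"
```

Whether stacking enough of this adds up to "intelligence" is exactly what the thread is arguing about, but at least the mechanism itself isn't mysterious.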
This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.
Many have built their careers from that kind of work in the past and yes they are threatened, but that kind of work is inherently not collaborative and more vocational.
If I can plausibly say I'm making something super dangerous, the government is likely to want to be the first government to have it. If the check clears before they figure out if I'm BSing them or not, it's a win.
One thing that strikes me that I never really see anyone discuss is that we've been afraid of conscious computers for a _long_ time. Back in the 50s and before, people were quite afraid that we'd build conscious computers. This was long before there was any sense that we could actually accomplish the task. I think that, similarly to seeing faces in the clouds, we imagine a consciousness where none exists. (e.g., a rain god rather than a complex system of physics and chemistry)
Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness.
So you have a historical, permanent fear of consciousness in a powerful entity where no consciousness actually exists, combined with the fact that we have created things which definitely seem conscious. (Not to mention that consciousness could genuinely be on its way soon.)
The actual contents of this article are making reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomaniacal goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out an HN comment does not at all help achieve them. I suspect most people reading this comment are in the same boat.)
It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.
I feel like this article is written more for non-techies. A decent number of programmers have touched coding agents and know it "kind of" does its job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's no more broken than it already was. Win win.
However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:
- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with, and give me the facts it has found; even then I end up looking at the source myself
Indeed. Apart from the obvious prompt research frauds mentioned in the article, the model learned all deceptive behaviors from hundreds of Yudkowsky scenarios that are easily available.
It literally plagiarizes its supposed free will like a good IP laundromat.
For regulatory capture, of course. They are not fooling me. There may be other motives, and the more doom-minded crowd can find something in it for themselves as well, but you don't have to dig any deeper if you are looking for an explanation of the perspective of the people actually building it.
The Chinese tech sector popularizing cheap, open-source models sure did a number on that narrative, too. As did the Llama models a while back.
> “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”
Why Harari feels an obligation to comment on everything is of course beyond me, but describing 'AI' as if it independently decides to lie, makes moral judgements, etc. demonstrates either that he has zero clue how 'AI' is trained or that he chooses to mislead his audience.
I wish we didn't call this AI as the term is crazily overloaded.
Those are programs. The only difference is how we write them. Not with "if"s and "for"s. We take a bunch of bits that do nothing. Then we organize them in a way so that they output whatever it is we want.
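For what it's worth, that "organize the bits" description can be made concrete with a toy sketch. Everything here is made up for illustration (a two-parameter "model" and a made-up target function, nothing resembling real training infrastructure): instead of writing the rule with ifs and fors, you start from parameters that do nothing and nudge them until the output matches what you want.

```python
# Toy illustration of "programming by optimization" rather than by ifs/fors.
# Goal: behave like f(x) = 2x + 1, without ever writing that rule down.
examples = [(x, 2 * x + 1) for x in range(10)]  # desired input/output pairs

w, b = 0.0, 0.0   # "a bunch of bits that do nothing" yet
lr = 0.01         # learning rate: how hard each nudge pushes

for _ in range(2000):
    for x, target in examples:
        pred = w * x + b
        err = pred - target
        # Gradient descent on squared error: nudge w and b toward the goal.
        w -= lr * err * x
        b -= lr * err

# Prints roughly 2.0 and 1.0: the behavior was found, not written.
print(round(w, 3), round(b, 3))
```

The inversion is the whole point: the behavior is discovered by optimization over examples, not spelled out by a programmer, which is also why nobody can point at the "if" that made a model do any particular thing.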
We’ve seen an incredibly powerful technology follow multiple exponential curves in its capability, but we’re supposed to ask why we’re telling ourselves “stories” if we think about what will happen if that technology continues to follow the curves it has been following without sign of hitting any walls?
Is AGI certain? No. But there’s currently no specific reason to believe it isn’t coming in the next few years.
The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.
> After talking to experts, I was convinced there’s no reason to fear AIs developing a will to live, and then tricking or destroying us to avoid shutdown and take over the world. Unless, of course, we tell them to.
Once we have superintelligent AIs, I give it a day or two till someone tries exactly that, prompting something like "take over the world for me".
The only thing I find scary about current AI is how openly many AI companies have become untethered from basic ethical controls. There is no facade of decency, and it seems like a lot of them are running from their own shadow. I feel like there is a middle ground that could have been taken: bringing content experts in instead of looting the web the world has built.
My favorite part of this article was this bit, and naturally so, since I love the author:
> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
I didn't realize it till I read it here, but yes, my fear isn't really about the machine, it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as an expendable thing and are willing to burn it down for profit. We should focus on getting rid of that problem first.
Humans are the existence proof we seem to be chasing, and the goal appears to be a superset of humanity's collective capabilities. I don't know if you've seen what we do to each other, but... I'm afraid of a decent chunk of us too. If what we build is a superset of our capabilities, then I am afraid of it, because I am already afraid of us.
> AI is the scariest technology ever made
Well, it's a good thing that all we managed so far is a large language model instead.
And a close (non-tech) second is the ruthlessness of sociopaths seeking power.