> And maybe we don't want to build machines that are conscious in this sense. The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.
This is where LLMs are currently going. Not really AGI, since they can't think like humans, but they can do a lot of things, and humans can train them on novel things.
Then human work shifts to figuring out new things while the AI solves all the old ones, which seems much more fun than most white-collar work today.
That's an expression of class thinking IMO. People think of themselves as thinkers and creators, while those who do the labour they rely on (without getting too much into the details of it) are merely doers who can ideally be replaced. But it's really thinking and creativity all the way down if you try to learn to do things well.
You must have had limited exposure to uncreative types. You might be shocked to find there are people who can do nothing more than follow checklists.
Sometimes it's a lack of capacity for novel thinking. Sometimes it's fear caused by past trauma. Or it can be age, or an inability to overcome habits. The list goes on, but the point is that I've had to work with or supervise employees (even in IT!) who didn't have a creative bone in their body. It wasn't a lack of motivation; it was usually something on the list above.
These people absolutely deserved the feeling of being useful, and they are the people I'm most concerned for in this new post-LLM world. The creative types will most likely be fine, but the very fact that we have words for creativity is an acknowledgement that there can be an absence of it.
You are only thinking about people and creativity in the workplace. Creativity can be applied anywhere: cooking, taking a new route on your way somewhere, reading some random paragraphs in a book that spawn new thoughts, inventing a new game with a child, optimizing the way you paint the walls of your house, choosing the plants in your garden (and how you'll water them), doing a doodle, trying or buying a new outfit, typing this paragraph in response to your message (kinda LLM-y, maybe).
I think this is what makes me uneasy about the whole LLM/"consciousness" debate. I may be wrong, but as far as I know, we still don't really understand how a bunch of feedforward networks and attention modules results in the kind of crazy semantic understanding and planning-in-human-language behavior we observe in LLMs. Neither do we know how the billions of neurons in a human brain do it.
The debate about how similar or dissimilar LLMs are to brains wasn't settled by any scientific finding; it feels like we just sort of decided at some point that they'd have to be fundamentally different, because everything else would be highly problematic.
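For concreteness, here is roughly what one of those "feedforward networks and attention modules" computes. This is a minimal single-head sketch in NumPy (made-up dimensions, random weights, no layer norm), not any real model's code:

```python
# Minimal sketch of one transformer block: a single attention head
# followed by a feedforward network. Purely illustrative.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(x, Wq, Wk, Wv, W1, W2):
    """One pass through a single-head block. x: (seq_len, d_model)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv             # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # how relevant is token j to token i?
    causal = np.triu(np.ones_like(scores), k=1)  # mask: tokens can't attend to the future
    scores = np.where(causal == 1, -1e9, scores)
    x = x + softmax(scores) @ v                  # weighted mix of values, plus residual
    return x + np.maximum(0, x @ W1) @ W2        # ReLU feedforward, plus residual

rng = np.random.default_rng(0)
d = 16
Wq, Wk, Wv = (rng.normal(0, 0.1, (d, d)) for _ in range(3))
W1, W2 = rng.normal(0, 0.1, (d, 4 * d)), rng.normal(0, 0.1, (4 * d, d))
tokens = rng.normal(size=(8, d))  # embeddings for 8 tokens
print(attention_block(tokens, Wq, Wk, Wv, W1, W2).shape)  # (8, 16)
```

Stack dozens of these blocks (with multiple heads and layer norms) and you have, at a high level, the architecture whose emergent behavior nobody can fully explain.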
> But it's really thinking and creativity all the way down if you try to learn to do things well
Yes, everyone starts out creative.
But we can all tell the difference between a worker who is still creative and learning and a worker who gave up creativity and is just doing his job. The first will still be useful in this AI age; the second will be replaced by an AI learning what he already knows.
> Then human work shifts to figuring out new things while the AI solves all the old ones, which seems much more fun than most white-collar work today.
But it's not fun to be figuring out new things all the time. Some amount of routine work is necessary to 1) exercise mastery (it feels good), and 2) recover energy. This is why a lot of people find agentic coding exhausting and less fun: you're basically always having to be creative (what's the next feature?) or solve the hardest 5% of issues the LLM can't handle.
I see a lot less thinking as a result of using LLMs as they are today, and I don't see the providers building tools that promote a better way to use them. They are still way too sycophantic.
To anyone who is a domain expert in the thing they are supposed to be doing, LLMs are shit at doing stuff. They are trained on a huge corpus of average material, so they produce average-to-crappy solutions quickly. The technology industry bubble is trained to accept that as good enough, which is why everyone is excited. Elsewhere it's a complete and utter joke.
And on top of that, a huge chunk of doing either requires humans to physically do something or is better done with absolute determinism, and an LLM is capable of neither.
None of it makes sense.
Edit: actually, the technology industry moves the goalposts to match the claims. That is the dishonest bit. I've not seen any evidence of novel capability that isn't corrupted by some dishonest measurement approach.
Came here to quote the same sentence, but to say the exact opposite: it seems to me that today's LLMs are progressing far faster on the "thinking" front than the "doing".
I suppose it depends on your definition of “doing” - if it’s “writing code”, then sure. But there’s a whole world of actual, physical “doing” that AI is nowhere close to matching humans at, and it’s much easier for me to envision a world where AI replaces the management / “thinking” layer of society than the physical labor. Which is scary, because it’s the opposite of his (and I would assume most people’s) ideal.
> The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking. If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine.
Man will do nothing and machine will do everything. That's a bleak world no one is preparing for.
How is that universal basic income scheme coming along?
I found Sam's early 2015 posts on machine superintelligence and regulation [1] [2] to be even more interesting in hindsight, given OpenAI's accelerationist bent of late, OpenAI president Greg Brockman's lobbying efforts against AI regulation, and frequent accusations of attempted regulatory capture.
Sam's recommendations at the time include:
1) Provide a framework to observe progress…
2) Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case. For example, beyond a certain checkpoint, we could require development happen only on airgapped computers…, require that certain parts of the software be subject to third-party code reviews, etc.
3) Require that the first SMI developed have as part of its operating rules that a) it can’t cause any direct or indirect harm to humanity (i.e. Asimov’s zeroeth law), b) it should detect other SMI being developed but take no action beyond detection, c) other than required for part b, have no effect on the world.
…
4) Provide lots of funding for R+D for groups that comply with all of this, especially for groups doing safety research.
5) Provide a longer-term framework for how we figure out a safe and happy future for coexisting with SMI…
Also, in his acknowledgments he gives the greatest thanks to onetime partner, now rival, Dario Amodei.
"If we never figure out how to make computers creative, then there will be a very natural division of labor between man and machine."
Even if AI can't (yet) reach that level of creativity, it performs well while trying, at least for now. Who knows about the near future? So far, the direction is clear.
The AI push is causing major layoffs in the tech and crypto industries nowadays. But we have long been getting the message: "adapt or pay the consequences." Right now, even management positions are being replaced by software. It may sound harsh, but it's also part of human nature and evolution. We created these machines, and now we have to deal with them.
On the other hand, it may seem strange at this stage, but we (regular human beings) barely know how the brain really works. Meanwhile, AI has demonstrated that it can work very well in some roles (mostly operational, of course), and it is becoming indispensable. Even governments, like Abu Dhabi's, are pushing to run the emirate fully on AI.
So yeah, even if we don't like it, AI is silently replacing humans. The best you can do is learn how to leverage it and not be left behind.
> (I originally was going to say a computer that plays chess, but computers play chess with no intuition or instinct--they just search a gigantic solution space very quickly.)
Isn't that how LLMs are trained right now? Trying to predict the next word within a "gigantic solution space". Interesting.
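A toy version of that loop, for illustration: generation is just repeatedly sampling the next word from a probability distribution. The bigram table below is invented; a real LLM computes these probabilities with a neural network conditioned on the entire context, over a vocabulary of tens of thousands of tokens.

```python
# Toy sketch: next-word generation as repeated sampling from a distribution.
# The probabilities here are made up; a real LLM computes them on the fly.
import random

bigram = {
    "the": {"cat": 0.5, "dog": 0.3, "machine": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "machine": {"thinks": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(token, steps=3):
    out = [token]
    for _ in range(steps):
        nxt = bigram.get(token)
        if not nxt:
            break
        # sample the next word in proportion to its probability
        token = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(token)
    return " ".join(out)

random.seed(1)
print(generate("the"))  # e.g. "the cat sat down"
```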
I model LLMs as searchers. Give them an input query and they match an output. The sheer scale of the parameters and training data lets them map data in a way that makes searching look like human thinking. They can also permute a little and still stay in a space that overlaps with reality.
The human brain may be doing a very similar thing, though: search and permutation via learned rules. It may just be doing it in a more functional way, with a greater ability to search over massive data that may have holes but gets filled in with synthetic data by mental subprocesses running on learned rules.
I think machines can eventually get there, especially if we can figure out how to harness continuous models instead of discrete ones. And I have a feeling that functional analysis may be the key.
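Taking the searcher analogy literally, a toy sketch: match an input against stored examples by vector similarity, with a bit of noise as the "permutation". Everything here is invented for illustration; real LLMs generate outputs rather than retrieving stored ones.

```python
# Toy "LLM as searcher": nearest-neighbour lookup over stored pairs,
# with noise standing in for "permutation". An analogy, not how LLMs work.
import numpy as np

rng = np.random.default_rng(0)
memory_keys = rng.normal(size=(1000, 64))            # stand-in for "training data"
memory_vals = [f"answer #{i}" for i in range(1000)]

def search(query, jitter=0.1):
    q = query + jitter * rng.normal(size=query.shape)  # permute a little
    sims = memory_keys @ q / (np.linalg.norm(memory_keys, axis=1) * np.linalg.norm(q))
    return memory_vals[int(np.argmax(sims))]           # best match wins

print(search(rng.normal(size=64)))
```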
> The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking
Steve Yegge said on some podcast recently that AI is going to have to come up with a more visual medium for communicating, because people don't want to read several paragraphs. He shared this uncritically, seemingly without judgement or disappointment. Yegge is a former Googler and by all accounts was an impressive person at one point; he is now best known as the person who vibe-birthed the inanity that is GasTown.
At work I'm seeing colleagues I once considered formidable completely turning off their brains and letting the bot drive, and wholly missing the mark on work quality. It's like a sickness, like COVID brain fog people don't even notice they have.
I see humans getting worse at reading, worse at writing, and worse at programming by themselves. It makes me angry and sad.
We are getting dumber, people, and I fully believe Altman and friends are lying when they say they want it otherwise.
Nailed it 12 years ago... damn it. So after all, Sam is not just talk and money.
I just got humbled.
This makes me reconsider my whole POV on Sam Altman.
Steve Jobs
Now, what doers are in the age of LLMs is another question.
> Well was Jobs a "doer"?
Jobs' gift was that he was an incredibly talented salesman.
> Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent
A lot of the gadgets that Steve Jobs claimed were envisioned by Apple (or rather, by him) - as I wrote, Steve Jobs was an exceptional salesman - already existed before, just with a few more rough edges. They did not sell as well, because those companies did not have a marketing department that made people believe that what they were selling was the next big thing.
Jobs envisioned the iPad and iPhone. Did he do the physical work? No. But he created direction.
Everyone around him at that time has commented on this. Are you going to claim they’re all lying?
> > The most positive outcome I can think of is one where computers get really good at doing, and humans get really good at thinking.
> This is where LLMs are currently going.
This is not where LLMs are currently going. They are trained and benchmarked explicitly in all the areas where humans produce economically and cognitively valuable work: STEM fields, computer use, robotics, etc.
Systems are already emerging in which AI agents autonomously orchestrate subagents, which in turn all work towards a goal autonomously and only communicate with you from time to time to give you status updates.
Thinking that you, a slow human, will be needed for much longer to fill some crucial role in this AI system that it cannot fill by itself, or to bring some crucial skill of creativity or thinking to the table that it cannot generate itself, is just wishful thinking. And to me personally, telling an AI to "do cool thing X" without having made any contribution beyond the initial prompt feels very depressing, and seems like much less fun than actually feeling valued in what I do. I'm sorry for sounding harsh.
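Concretely, the pattern being described looks something like this sketch, where the orchestrator and subagents are hypothetical stand-ins (a real system would replace `subagent` with LLM calls and tool use):

```python
# Hypothetical sketch of an orchestrator delegating to subagents and
# surfacing occasional status updates. All names here are invented.
from concurrent.futures import ThreadPoolExecutor, as_completed

def subagent(task):
    # Stand-in for an autonomous agent; a real one would loop an LLM
    # with tools until it judges the task complete.
    return f"result of {task!r}"

def orchestrator(goal, tasks):
    results = []
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(subagent, t): t for t in tasks}
        for done in as_completed(futures):
            # the only human touchpoint: periodic status updates
            print(f"[status] finished {futures[done]!r} for goal {goal!r}")
            results.append(done.result())
    return results

orchestrator("ship feature X", ["write code", "write tests", "update docs"])
```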
So you just lose your job.
[1] https://blog.samaltman.com/machine-intelligence-part-1
[2] https://blog.samaltman.com/machine-intelligence-part-2