Indeed. Today I asked Gemini the question "Should I shower after a nuke" and got the following response:
"Yes, you should shower or wash thoroughly as soon as possible after being outside during or after a nuclear detonation to remove radioactive fallout material from your skin and hair."
His answer applied to a very specific framing of AGI, really just to that question. E.g.: AI can create the things that humans would create, as well as they can, in the field of software.
I think in most cases, people would understand AGI as the completely unguided ability to work through bodies of research and reach new conclusions.
I've been unpleasantly surprised at how credulous journalists are when quoting AI CEOs about AI's capabilities. Jensen Huang has a multi-trillion dollar incentive to claim that AGI has been reached and is possibly the least-trustworthy person on the topic except for Sam Altman.
Paywalled, do we have a way around? I'm trying to avoid archive.ph / archive.today / etc because of the bad behavior, but not sure what the alternatives are.
In any case, it's crazy to claim we've achieved AGI lol, we must have different ideas of what that means. If you give Claude a sufficiently large codebase, it will just start forgetting that pieces of it exist, and redoing already completed work. I know this is because of compaction / context, but to me, being-able-to-remember-things is an important aspect of a teammate. A couple weeks ago, I was working on some price testing and Claude recommended using Student's t-test, even though purchasing data is non-Gaussian, and normality is an assumption of Student's t-test. Sure, it's better than most random people, and it's cool that it knows about Student's t-test, but it's also not going to replace a competent human.
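To illustrate the point: a minimal sketch of what a careful analyst might do instead, assuming SciPy and simulated purchase data (the log-normal distribution here is a stand-in, chosen only because purchase amounts are typically right-skewed). A rank-based test like Mann-Whitney U is a common alternative that doesn't assume normality:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated purchase amounts: log-normal, i.e. heavily right-skewed,
# which is a hypothetical stand-in for real purchasing data.
control = rng.lognormal(mean=3.0, sigma=1.0, size=500)
variant = rng.lognormal(mean=3.1, sigma=1.0, size=500)

# Student's/Welch's t-test assumes approximately normal group means
t_stat, t_p = stats.ttest_ind(control, variant, equal_var=False)

# Mann-Whitney U is rank-based and makes no normality assumption
u_stat, u_p = stats.mannwhitneyu(control, variant)

print(f"Welch t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")
```

(With large samples the t-test can still be serviceable via the central limit theorem, but the non-parametric test sidesteps the question entirely.)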
> Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”
> But Huang then seemed to slightly walk back his earlier claims, saying, “A lot of people use it for a couple of months and it kind of dies away. Now, the odds of 100,000 of those agents building Nvidia is zero percent.”
Around 2009, Geoffrey Hinton asked Nvidia if they could donate a GPU, after he had just told a thousand machine learning researchers at a conference that they should go buy Nvidia cards, as they were the ideal platform for training neural nets. They hung up the phone on him...
Jensen Huang did not recognize AI when it hit him in the head, and for sure won't recognize it when it leaves him behind and passes him by.
Just another lucky guy in the right place at the right time.
The "I" in AGI stands for IPO.
Makes me feel much safer. AGI is here folks!
So a lot of podcast banter nonsense basically :-/