No, it isn't, and saying it is here by using vague goalposts does not make AGI show up. I will agree we have been unable to define what it is, but we can't even figure out why humans have consciousness right now.
The interview is in French, so in my own words:
1) Still unreliable at logic and general inference: try and try again seems to be SoTA...
2) Comically bad at proactivity and taking the right initiative: e.g. "You're right to be upset."
3) Most likely already reaching the end of the line in terms of available good training data: looking at the posted article here, I would tend to agree...
The problem is that LeCun has obviously been wrong on LLMs before. You have to take what he says with the caveat that he is probably talking about these in a purist (academic) way. Most of the "downsides" and "failures" are not really happening in the real world, or if they happen, they're eventually fixed / improved.
~2 years ago he made 3 statements that he considered failures at the time, and he was quite adamant that they were real problems:
1. LLMs can't do math
2. LLMs can't plan
3. (autoregressive) LLMs can't maintain a long session because errors compound as you generate more tokens.
ALL of these were obviously overcome by the industry. Today we have experts in their fields using them for heavy, hard math (Tao, Knuth, etc.); anyone who's used a coding agent can tell you that they can indeed plan, follow that plan, edit it, and generally complete it; and the long-session point is just as obviously refuted (agentic systems often remain useful at >100k context length).
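To make claim 3 concrete: the compounding argument is essentially a back-of-envelope independence assumption (this is my toy version, not LeCun's exact math). If each token independently "goes wrong" with probability p and one bad token spoils the output, sequence-level correctness decays exponentially:

```python
# Toy model of the "errors compound" claim. Assumption (mine): token
# errors are independent and uncorrectable, so
# P(clean n-token sequence) = (1 - p)^n.

def p_clean(p_token_error: float, n_tokens: int) -> float:
    """Probability that an n-token generation contains no errors."""
    return (1.0 - p_token_error) ** n_tokens

for n in (100, 1_000, 10_000, 100_000):
    print(f"n={n:>7}: P(clean) = {p_clean(0.0001, n):.3f}")
# n=    100: P(clean) = 0.990
# n=   1000: P(clean) = 0.905
# n=  10000: P(clean) = 0.368
# n= 100000: P(clean) = 0.000
```

The independence assumption is exactly what practice violates: agents re-read their own output, run tests, and retry, so errors get corrected instead of compounding, which is why long sessions stay useful.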
So yeah, I really hope one of Yann, Ilya, or Fei-Fei can come up with something better than transformers, but take anything they say with a grain of salt until they do. They often speak to more abstract, academic downsides, not necessarily what we see in practice. And don't dismiss the amount of money and brainpower going into making LLMs useful, even if from an academic POV it seems like we're bashing a square peg into a round hole. If it fits, it fits...
> we can’t even figure out why humans have consciousness right now.
My uneducated guess is that it just means we save/remember (in a lossy way) inputs from our senses and then constantly decide what to do right now based on current and historical inputs, as well as contemplated future events.
I think the rest of our body greatly influences all of that as well. For example: we know running is healthy and we should do it, but we also decide not to run if we are busy, feel tired, or are in pain, etc.
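If I had to sketch that loop in code, it would look something like this cartoon (every name here is made up for illustration; this is not a claim about how brains actually work):

```python
import random

# Cartoon of the loop described above: lossily store sensory input,
# then repeatedly decide based on the current input, (lossy) history,
# and body state. All names are invented for this sketch.

def lossy_store(memory: list, observation: str, keep_prob: float = 0.5):
    """Remember some inputs and silently drop the rest."""
    if random.random() < keep_prob:
        memory.append(observation)

def decide(current: str, memory: list, body_state: dict) -> str:
    """Weigh current input, history, and body signals."""
    if body_state["tired"] or body_state["in_pain"]:
        return "rest"  # the body vetoes the 'healthy' choice
    if "running felt good" in memory:
        return "go running"
    return "react to: " + current

memory: list = []
for observation in ["alarm rings", "running felt good", "rain outside"]:
    lossy_store(memory, observation)
    action = decide(observation, memory, {"tired": False, "in_pain": False})
    print(observation, "->", action)
```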
"the hard problem of consciousness" is not about "what are we conscious of", but rather: how is it possible to be conscious (i.e. experience qualia) at all?
I think we agree: we have arbitrary goalposts regarding AGI, and they have been met. We don't know what we consider to be "the big changing moment", and that moment is hard to define because we don't have a good definition of it even when we talk about ourselves.
So the convo becomes - what is that "thing" and do we need to draw similarities between "it" and our own intelligence.
The singularity doesn't mean that we get an AGI Day with a big announcement from The People In Charge that intelligence has been solved once and for all. It simply means that the frequency of "this changes everything" and "rumored model X at lab Y passes benchmark Z at 110% in less-than-zero-shot" style posts increases monotonically forever.
Usefulness is here, and has been for a while. I have been consistent in my stance that AGI will be achieved in the demonstration of iterative, stable self-improvement. This can be demonstrated in knowledge creation or skill acquisition.
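To be concrete about what I'd accept as a demonstration, something shaped like this hypothetical harness (every name here is invented for illustration; `agent`, `propose_revision`, and `evaluate` are stand-ins, not any real API):

```python
# Hypothetical test for "iterative, stable self-improvement": the system
# proposes a modification to itself, and we only keep the modification
# if held-out task performance actually improves.

def self_improvement_run(agent, evaluate, rounds: int = 10) -> list:
    """Return the score trajectory across self-revision rounds."""
    scores = [evaluate(agent)]
    for _ in range(rounds):
        candidate = agent.propose_revision()  # e.g. new skill or tool
        if evaluate(candidate) > scores[-1]:  # keep only real gains
            agent = candidate
        scores.append(evaluate(agent))
    # "Stable" = scores never regress across rounds; "iterative" =
    # gains keep arriving rather than one lucky jump.
    return scores
```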
Repeating that we don't have a definition isn't helping anything except giving vapid blog posts another thing to debate. I'll give one that I believe: the practical ability to use AI for most things that humans do, at human levels of competence, without it being specifically trained for each. There is no requirement that AGI actually think/reason beyond practical measures.
I'm not on either side of the argument, but one popular definition is missing, which is "can automate most knowledge work".
Not that this is my definition or anything, just pointing out that this is the one people actually care about, even if the acronym doesn’t say anything about economics or social change.
At one point in time, "AGI" included being able to learn skills that involved manipulating the physical world. While LLMs and their ilk may contribute to this, we are still (AFAICT) far, far from this at this time.
Intelligence is about being able to use information to make deductions, inferences, or hypotheses, and presumably to use those to inform action.
Consciousness is about having an internal experience. I would regard many living things as having consciousness but not a general intelligence.