Predictions from the METR AI scaling graph are based on a flawed premise

(garymarcus.substack.com)

49 points | by nsoonhui 3 days ago

27 comments

  • hatefulmoron 3 days ago

    I had assumed that the Y axis corresponded to some measurement of the LLM's ability to actually work/mull over a task in a loop while making progress. In other words, I thought it meant something like "you can leave Sonnet 3.7 for a whole hour and it will meaningfully progress on a problem", but the reality is less impressive. Serves me right for not looking at the fine print.

  • aoeusnth1 2 days ago

    This post is a very weak and incoherent criticism of a well-formulated benchmark: the task-length bucket for which a model succeeds 50% of the time.

    Gary says: this is just the task length that the models were able to solve in THIS dataset. What about other tasks?

    Yeah, obviously. The point is that models are improving on these tasks in a predictable fashion. If you care about software, you should care how good AI is at software.

    Gary says: task length is a bad metric. What about a bunch of other factors of difficulty which might not factor into task length?

    Task length is a pretty good proxy for difficulty; that's why people estimate bugs in days. Of course many factors contribute to such an estimate, but averaged over many tasks, time is a great metric for difficulty.

    Finally, Gary just ignores that, despite his view that the metric makes no sense and is meaningless, it has extremely strong predictive value. This should give you pause: how can an arbitrary metric with no connection to the true difficulty of a task, and no real way of comparing its validity across tasks or across task-takers, produce such a retrospectively smooth curve and so closely predict the recent data points from Sonnet and o3? Something IS going on there that cannot fit into Gary's ~spin~ narrative that nothing ever happens.
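
    As a minimal sketch of how such a 50%-success time horizon can be estimated (assuming a logistic fit of success against log task length, which may not be METR's exact method; the data below is made up for illustration):

      # Fit P(success) as a logistic function of log task length, then solve
      # for the task length at which the fitted probability crosses 50%.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical results: human task length in minutes, and whether the model solved it.
      task_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
      solved       = np.array([1, 1, 1, 1,  1,  0,  1,   0,   0,   0])

      X = np.log(task_minutes).reshape(-1, 1)
      clf = LogisticRegression().fit(X, solved)

      # The fitted logit is w*log(t) + b; it crosses zero (P = 0.5) at t = exp(-b/w).
      w, b = clf.coef_[0][0], clf.intercept_[0]
      print(f"Estimated 50% time horizon: {np.exp(-b / w):.1f} minutes")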

  • yorwba 3 days ago

    > you could probably put together one reasonable collection of word counting and question answering tasks with average human time of 30 seconds and another collection with an average human time of 20 minutes where GPT-4 would hit 50% accuracy on each.

    So do this and pick the one where humans do best. I doubt that doing so would show all progress to be illusory.

    But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.

    • K0balt 2 days ago

      The problem, really, is human cognitive dissonance. We draw the false conclusion that competence at some tasks implies competence at others. It's not a universal human problem: we intuit that a front-end loader, just because it can dig really well, is not therefore good at all other tasks. But when it comes to cognition, our models break down quickly.

      I suspect this is because our proxies are predicated on a task set that inherently includes the physical world, which at some level connects all tasks and creates links between capabilities that generally pervade our environment. LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.

      This will probably gradually change with robotics, as the competencies required to exist and function in the physical world will (I postulate) generalize to other tasks in such a way that it more closely matches the pattern that our assumptions are based on.

      Of course, if we segregate intelligence into isolated modules for motility and cognition, this will not be the case, as we will not be taking advantage of that generalization. I think that would be a big mistake, especially in light of the hypothesis that the massive leap in capabilities of LLMs came more from training on things we weren't specifically trying to achieve: the bulk of seemingly irrelevant data that unlocked simple language processing into reasoning and world modeling.

      • the8472 2 days ago

        > LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.

        Perhaps not the mainstream models, but DeepMind has been working on robotics models with simulated and physical RL for years: https://deepmind.google/discover/blog/rt-2-new-model-transla...

      • mentalgear 2 days ago

        What you are describing are world models and physical AI, which have become much more mainstream after the recent Nvidia GTC.

    • AIPedant 2 days ago

      Dogs can pass a dog-appropriate variant of this test: https://xcancel.com/SpencerKSchiff/status/191010636820533676... (the dog test uses a treat on one string and junk on the other; the dog has to pull the correct string to get the treat)

      This was before o3, but another tweet I saw (don't have the link) suggests o3 is also completely incapable of getting it.

    • xg15 3 days ago

      > But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.

      Still "Count the R's" apparently.

  • sandspar 2 days ago

    Gary Marcus could save himself lots of time. He just has to write a post called "Here's today's opinion." Because he's so predictable, he could just leave the body text blank. Everyone knows his conclusions anyway. This way he could save himself and his readers lots of time.

  • Nivge 3 days ago

    TL;DR: the benchmark depends on its specific dataset, and it isn't a perfect way to evaluate AI progress. That doesn't mean it doesn't make sense or doesn't have value.

  • ReptileMan 3 days ago

    [flagged]

    • tomhow 2 days ago

      Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

      Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

      Please don't fulminate. Please don't sneer...

      Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

      Eschew flamebait. Avoid generic tangents. Omit internet tropes.

      https://news.ycombinator.com/newsguidelines.html

    • ben_w 3 days ago

      AI is software.

      As software gets more reliable, people come to trust it.

      Software still has bugs, and that trust means those bugs still get people killed.

      That was true of things we wouldn't call AI any more, and it's still true of things we do.

      The AI doesn't need to take over or anything when humans are literally asleep at the wheel because they mistakenly think it can drive the car for them.

      Heck, even building codes and health & safety rules are written in blood. Why would AI be the exception?

      • clauderoux 3 days ago

        As Linus Torvalds said in an interview recently, humans don't need AI to make bugs.

    • okthrowman283 3 days ago

      To be fair, though, the author of AI 2027 has been prescient in his previous predictions.

    • dist-epoch 3 days ago

      Turkey fallacy.

      The apocalypse will only happen once. Just like global nuclear war.

      The fact that there hasn't been a global nuclear war so far doesn't mean that all those fearing nuclear war are crazy, irrational people.

      • pvg 2 days ago

        Entire cities have been destroyed by nuclear bombs, and the effects of fallout from nuclear weapons testing are measurable in everything around us. The risks are not even qualitatively comparable.

        • dwaltrip 2 days ago

          In 1940, a lot of people said it was impossible to build a nuclear bomb.

          • pvg 2 days ago

            Not really, no. Also, they were perfectly aware of the destructive potential of really big bombs. The risk of increasingly big weapons that could destroy civilization was completely obvious, regardless of their mechanism of operation.

            Your analogy doesn't hold up because weapons were real, and mass-mobilization conflict between industrial societies was real. Plus, you've now switched it around: first it was people who warned against the risks of nuclear war (effectively everyone), now it's people who didn't believe nuclear weapons were possible in 1940 (effectively nobody), etc. There has to be more to an argument than a mention of turkeys and rhetorical swerves.

      • ReptileMan 3 days ago

        No. It just means they are stupid in the way only extremely intelligent people could be.

        • Sharlin 3 days ago

          People being afraid of a nuclear war are stupid in a way only extremely intelligent people can be? Was that just something that sounded witty in your mind?

  • Sharlin 3 days ago

    > Unfortunately, literally none of the tweets we saw even considered the possibility that a problematic graph specific to software tasks might not generalize to literally all other aspects of cognition.

    Why am I not surprised?

  • dist-epoch 3 days ago

    > Abject failure on a task that many adults could solve in a minute

    Maybe the author should check, before pressing "Publish", whether the info in the post is already outdated.

    ChatGPT passed the image generation test mentioned: https://chatgpt.com/share/68171e2a-5334-8006-8d6e-dd693f2cec...

    • frotaur 3 days ago

      Even setting aside the fact that this image is simply illustrative, and really not the main point of the article: in the chat you posted, ChatGPT actually failed again, because the r's are not circled.

      • comex 2 days ago

        That's true, but it illustrates a point about 'jagged intelligence'. Just like there's a tendency to cherry-pick the tasks AI is best at and equate it with general intelligence, there's a counter-tendency to cherry-pick the tasks AI is worst at and equate it with a general lack of intelligence.

        This case is especially egregious because there were probably two different models involved. I assume Marcus' images came from some AI service that followed what until very recently was the standard pattern: you ask an LLM to generate an image; the LLM goes and fluffs out your text, then passes it to a completely separate diffusion-based image generation model, which has only a rudimentary understanding of English grammar. So of course his request for "words and nothing else" was ignored. This is a real limitation of the image generation model, but that has no relevance to the strengths and weaknesses of the LLM itself. And 'AI will replace humans' scenarios typically focus on text-based tasks that use the LLM itself.
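
        As a conceptual sketch of that two-stage pipeline (all names here are hypothetical, not any real service's API): the 'words and nothing else' constraint survives the first stage as text but is simply dropped by the second.

          def llm_rewrite(user_prompt: str) -> str:
              """Stage 1: an LLM 'fluffs out' the request into a richer caption.
              It understands grammar and negation, but its output is still just text."""
              return f"A detailed, high-quality illustration of: {user_prompt}"

          def diffusion_generate(caption: str) -> bytes:
              """Stage 2: a separate diffusion model turns the caption into pixels.
              It keys on salient nouns like 'fruit' and has no reliable way to honor
              instructions such as 'words and nothing else'."""
              ...  # placeholder for the actual image-model call

          def generate_image(user_prompt: str) -> bytes:
              # The constraint is preserved in the rewritten caption, but the
              # image model that actually renders the pixels ignores it.
              return diffusion_generate(llm_rewrite(user_prompt))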

        Arguably AI services are responsible for encouraging users to think of what are really two separate models (LLM and image generation) as a single 'AI'. But Marcus should know better.

        And so it's not surprising that ChatGPT was able to produce dramatically better results now that it has "native" image generation, which supposedly uses the native multimodal capabilities of the LLM (though rumors are that that description is an oversimplification). The results are still not correct. But it's a major advancement that the model now respects grammar; it no longer just spots the word "fruit" and generates a picture of fruit. Illustration or no, Marcus is misrepresenting the state of the art by not including this advancement.

        If Marcus had used a recent ChatGPT output instead, the comparison would be more fair, but still somewhat misleading. Even with native capabilities, LLMs are simply worse at both understanding and generating images than they are at understanding and generating text. But again, text capability matters much more. And you can't just assume that a model's poor performance on images will correlate with poor performance on text.

        The thing is, I tend to agree with the substance of Marcus's post, including the part where portrayals of current AI capabilities are suspect because they don't pass the 'sniff test', or in other words, because they don't take into account how LLMs continue to fall down on some very basic tasks. I just think the proper tasks for this evaluation should be text-based. I'd say the original "count the number of 'r's in strawberry" task is a decent example, even if it's been patched, because it really showcases the 'confidently wrong' issue that continues to plague LLMs.
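
        For reference, the ground truth for that task is trivially checkable, which is part of what makes the 'confidently wrong' answers so striking:

          # Ground truth for the "count the r's in strawberry" task.
          word = "strawberry"
          print(sum(1 for ch in word.lower() if ch == "r"))  # -> 3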

    • croes 2 days ago

      So OpenAI fixed that, but the next simple task on which AI fails is just around the corner.

      The problem is that AI doesn't think, and if a task is totally new it doesn't produce the correct answer.

      https://news.ycombinator.com/item?id=43800686