352 comments

  • mehulashah 2 hours ago ago

    Most of the folks on this topic are focused on Meta and Yann’s departure. But, I’m seeing something different.

    This is the weirdest technology market that I’ve seen. Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word and now that gets rewarded with billions of dollars in valuation.

    • DebtDeflation an hour ago ago

      That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI for the last couple of months have been product focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year, in fact the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).

      • brandall10 an hour ago ago

        Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.

        I don't know if that's indicative of the market as a whole though. Zuck just seems really gutted they fell behind with Llama 4.

        • andsoitis an hour ago ago

          > Meta hiring researchers en masse at $100m+ pay packages is fairly new, as of this summer.

          En masse? Wasn't it just a couple of outliers?

          • debo_ 23 minutes ago ago

            en deux

        • renegade-otter 16 minutes ago ago

          A lot of them left within their first days on the job. I guess they saw what they were going to work on and peaced out. No one wants to work on AI slop and mental abuse of children on social media.

          • ikamm 12 minutes ago ago

            I don't understand how an intelligent person could accept a job offer from Facebook in 2025 and not understand what company they just agreed to work for.

            • kjreact 5 minutes ago ago

              It’s probably a VC fundraising strategy, “Meta gave me 100s of millions so you should give me more”.

        • rco8786 36 minutes ago ago

          "en masse" is a stretch

    • Aurornis an hour ago ago

      > Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word

      I’ve worked for multiple startups and I’ve watched startup job boards most of my career.

      A lot of VC backed startups have a founder with a research background and are focused on proving out some hypothesis. I don't see anything uncommon about this arrangement.

      If you live near a University that does a lot of research it’s very common to encounter VC backed startups that are trying to prove out and commercialize some researcher’s experiment. It’s also common for those founders to spend some time at a FAANG or similar firm before getting VC funded.

      • mehulashah 28 minutes ago ago

        Certainly research has made it into product with the help of the innovators who created the research. The dial is turned further here: the research ideas have yet to be tried and vetted. The research begins in the startup. Even in the dotcom era, research prototypes were vetted in conferences and journals before anyone took the risk of building production systems. This is no longer the case. The experiments have yet to be run.

      • anshumankmr 23 minutes ago ago

        Fusion, stem cells, CRISPR, robotics, etc. all come to mind.

    • baxtr 36 minutes ago ago

      I personally see this as a positive trend. VC in its earliest form was concerned with experiments that had high technology risk. I am thinking of companies like Genentech and scientists like biochemist Herbert Boyer, who had pioneered recombinant DNA technology.

      After that, VC became more like PE, investing in stuff that was already working but needed money to scale.

      • causal 15 minutes ago ago

        Yeah, there has been some lamenting that all the money being thrown at technology hasn't gone toward anything truly game changing, basically just variations of full stack apps. A few failed moonshots might be more interesting at least.

    • mrbonner 10 minutes ago ago

      I can’t help but wonder: if we had poured the same amount of money into fusion energy research and development, how far might we have come in just three short years?

      • tru3_power 5 minutes ago ago

        Forreal that’s what really gets me about this haha. Literally billions of dollars burned on bullshit.

    • zikduruqe an hour ago ago

      > This is the weirdest technology market that I’ve seen.

      You must not have lived through the dot com boom. Almost everything under the sun was being sold on a website that started with an "e". ePets, ePlants, eStamps, eUnderwear, eStocks, eCards, eInvites.....

      • baggachipz an hour ago ago

        The Pets.ai Super Bowl commercial will trigger the burst.

      • ricardobeat an hour ago ago

        It's funny that the Netherlands seems to still live in the dotcom boom to this day. Want to adopt a pet? verhuisdieren.nl. Want to buy wall art? wall-art.nl. Need cat5 cable? kabelshop.nl. 8/10 times there is a (legit) online store for whatever you need, to the point where one of the local e-commerce giants (Coolblue) buys this type of domain and aliases them to their main site.

        • KPGv2 17 minutes ago ago

          This is still the case in the US, too. I don't know why people are talking like it stopped happening. amazon.com, amazon.com, amazon.com, amazon.com

          All these things are still e-tail here, too. We didn't go back to B&M.

        • thrance an hour ago ago

          Pretty funny, looks like it works in France too! animaux.fr redirects to a pet adoption service, cable.fr looks like a cable-selling shop. artmural.fr exists but looks like a personal blog from a wall artist, rather than a shop.

      • staticman2 an hour ago ago

        That was certainly a bubble but I don't think pets.com was doing a research experiment.

        From what I recall there were some biotech stocks in that era that do fit the bill.

      • aswanson 34 minutes ago ago

        Even hardware. eMachines.

      • bookofjoe 20 minutes ago ago

        flooz

      • cheevly an hour ago ago

        These are not the same.

        • dolphinscorpion an hour ago ago

          Yeah, this time is different. Really

          • zikduruqe an hour ago ago

            History doesn't repeat itself, but it often rhymes.

      • nailer an hour ago ago

        It did make sense though. ePlants could have cornered the online nursery market. That is a valuable market. I think people were just too early. Payment and logistics hadn't been figured out yet.

    • skeeter2020 an hour ago ago

      Agree on weirdness but not on the idea of funding science experiments:

      >> away from long-term research toward commercial AI products and large language models - LLMs

      This feels more like what I see every day: the people in charge desperately looking for some way - any way - to capitalize on the frenzy. They're not looking to fund research; they just want to get even richer. It's pets.ai this time.

    • JKCalhoun 2 minutes ago ago

      Is it like VCs throwing money at a young Wozniak while eschewing Jobs?

      That either gives the AI tech more legitimacy in my mind … or is a sign we've not arrived yet.

    • gdulli 17 minutes ago ago

      If a "science experiment" has the chance to displace most labor then whoever's successful at the experiment wins the economy, period. There's nothing weird or surprising about the logic of them obsessively chasing it. They all have to, it's a prisoner's dilemma.

      • 0_____0 11 minutes ago ago

        Fusion power has the chance to displace most power generation, and whoever is successful at the experiment wins the energy economy, period. However, given the long timelines, high cost of research, and the unanswered technical questions around materials that can withstand neutron flux, the total 2024 investment into fusion is only around $10B, versus AI's $250B+.

        Why are these so different?

    • cantor_S_drug an hour ago ago

      Because when the recipe is open and public, the product's success depends on distribution (which has been cornered by MS, Google, Apple). This is good for the ecosystem, but I'm not sure how those particular VCs will get exits.

    • Oras 31 minutes ago ago

      Every startup is an experiment; only 2% succeed.

    • beezlebroxxxxxx an hour ago ago

      The scale of money is crazy in this example, but the same thing happens in the pharmaceutical/bio-tech industry.

    • blutoot 38 minutes ago ago

      Yes - I had similar thoughts when I saw the word "startup" used alongside something so far-out (the same 'critique' should apply to Fei-Fei Li's World Labs - https://www.worldlabs.ai). These are VC-funded research labs (and there is nothing wrong with that). Calling them "startups" as if they are already working on an MVP on top of an unproven (and frankly non-existent) technology seems a little disingenuous to me.

    • rapsey an hour ago ago

      VC is in a bubble.

  • sebmellen 7 hours ago ago

    Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.

    • xuancanh 6 hours ago ago

      In industry research, someone in a chief position like LeCun should know how to balance long-term research with short-term projects. However, for whatever reason, he consistently shows hostility toward LLMs and engineering projects, even though Llama and PyTorch are two of the most influential projects from Meta AI. His attitude doesn’t really match what is expected from a Chief position at a product company like Facebook. When Llama 4 got criticized, he distanced himself from the project, stating that he only leads FAIR and that the project falls under a different organization. That kind of attitude doesn’t seem suitable for the face of AI at the company. It's not a surprise that Zuck tried to demote him.

      • blutoot 4 hours ago ago

        These are the types that want academic freedom in a cut-throat industry setup and conversely never fit into academia because their profiles and growth ambitions far exceed what an academic research lab can afford (barring some marquee names). It's an unfortunate paradox.

        • sigbottle 3 hours ago ago

          Maybe it's time for Bell Labs 2?

          I guess everyone is racing towards AGI in a few years or whatever so it's kind of impossible to cultivate that environment.

          • musebox35 a minute ago ago

            Google DeepMind is the closest lab to that idea, because Google is the only entity big enough to get close to the scale of AT&T. I was skeptical that the DeepMind and Google Brain merger would be successful, but it seems to have worked surprisingly well. They are killing it with LLMs and image editing models. They are also backing the fastest growing cloud business in the world and collecting Nobel prizes along the way.

          • ryukoposting 3 hours ago ago

            The Bell Labs we look back on was only the result of government intervention in the telecom monopoly. The 1956 consent decree forced Bell to license thousands of its patents, royalty free, to anyone who wanted to use them. Any patent not listed in the consent decree was to be licensed at "reasonable and nondiscriminatory rates."

            The US government basically forced AT&T to use revenue from its monopoly to do fundamental research for the public good. Could the government do the same thing to our modern megacorps? Absolutely! Will it? I doubt it.

            https://www.nytimes.com/1956/01/25/archives/att-settles-anti...

            • aatd86 2 hours ago ago

              Used to be a Google X. Not sure at what scale it was. But if any state/central bank was clever they would subsidize this. That's a better trickle down strategy. Until we get to agi and all new discoveries are autonomously led by AI that is :p

          • ambicapter 6 minutes ago ago

            Why would Bell Labs be a good fit? It was famous for embedding engineers with the scientists to direct research in a more results-oriented fashion.

          • blutoot 2 hours ago ago

            Appreciate you bringing up Bell Labs. So I decided to run a deep research[0] in Gemini[1] to understand why we don't have a Bell Labs-like setup anymore.

            Before I present my simple-minded takeaway below, I am happy to be schooled on how research labs in mega corporations really work and what their respective business models look like.

            It seems like a research powerhouse like Bell Labs can thrive for a while only if the parent company (like a pre-1984 AT&T) is massively monopolistic and has an unbounded discretionary research budget.

            One could say Alphabet is the only comparable company today where such an arrangement could survive, but I believe it would still pale in comparison to what the original Bell Labs used to be. I also think NEC Labs went in the same direction [2].

            [0] https://gemini.google.com/share/13e5f1a90294 (publicly shared link) [1] Prompt: "I want to understand why Bell Labs did not survive and why we don't have well-funded tech research labs anymore" [2] https://docs.google.com/document/d/10bfJX1nQsGtjgojRcOdHxXBK... (publicly shared link)

            • sllabres 2 hours ago ago

              If you are (obviously) interested in the matter you might find one of the Bell Labs articles discussed on HN:

              "Why Bell Labs Worked" [1]

              "The Influence of Bell Labs" [2]

              "Bringing back the golden days of Bell Labs" [3]

              "Remembering Bell Labs as legendary idea factory prepares to leave N.J. home" [4] or

              "Innovation and the Bell Labs Miracle" [5]

              interesting too.

              [1] https://news.ycombinator.com/item?id=43957010 [2] https://news.ycombinator.com/item?id=42275944 [3] https://news.ycombinator.com/item?id=32352584 [4] https://news.ycombinator.com/item?id=39077867 [5] https://news.ycombinator.com/item?id=3635489

            • anotherd1p an hour ago ago

              I always take a bird's eye kind of view on things like that, because however close I get, it always loops around to make no sense.

              > is massively monopolistic and have unbounded discretionary research budget

              That is the case for most megacorps, if you look at all the financial instruments.

              Modern monopolies are not equal to single-corporation domination; modern monopolies are portfolios that do business using the same methods and strategies.

              the problem is that private interests strive mostly for control, not money or progress. if they have to spend a lot of money to stay in control of (their (share of the)) segments, they will do that, which is why stuff like the current graph of investments of, by and for AI companies and the industries works.

              A modern equivalent and "breadth" of a Bell Labs (et. al) kind of R&D speed could not be controlled and would 100% result in actual Artificial Intelligence vs all those white labelababbebel (sry) AI toys we get now.

              Post-WWI and WWII "business psychology" has built a culture that cannot thrive in a free world (free as in undisturbed and left to all devices available) for a variety of reasons, but mostly because of elements with a medieval/dark-age kind of aggressive tendency to come to power and maintain it that way.

              In other words: not having a Bell Labs kind of setup anymore ensures that the variety of approaches taken on large scales aka industry-wide or systemic, remains narrow enough.

          • HarHarVeryFunny 2 hours ago ago

            It seems DeepMind is the closest thing to a well funded blue-sky AI research group, even despite the merger with Google Brain and now more of a product focus.

          • diego_sandoval an hour ago ago

            The fact that people invest in the architecture that keeps getting increasingly better results is a feature, not a bug.

            If LLMs actually hit a plateau, then investment will flow towards other architectures.

            • esafak an hour ago ago

              At which point companies that had the foresight to investigate those architectures earlier on will have the lead.

          • blueboo an hour ago ago

            We call it “legacy DeepMind”

          • belter 3 hours ago ago

            > I guess everyone is racing towards AGI in a few years

            A pipe dream sustaining the biggest stock market bubble in history. Smart investors are jumping to the next bubble already...Quantum...

            • re-thc 3 hours ago ago

              > A pipe dream sustaining the biggest stock market bubble in history

              This is why we're losing innovation.

              Look at electric cars, batteries, solar panels, rare earths and many more. Bubble or struggle for survival? Right, because if the US has no AI the world will have no AI? That's the real bubble - being stuck in an ancient world view.

              Meta's stock has already tanked for "over" investing in AI. Bubble, where?

              • belter 3 hours ago ago

                2 Trillion dollars in Capex to get code generators with hallucinations, that run at a loss, and you ask where is the Bubble?

                • re-thc 3 hours ago ago

                  > 2 Trillion dollars in Capex to get code generators with hallucinations

                  You assume that's the only use of it.

                  And are people not using these code generators?

                  Is this an issue with a lost generation that forgot what Capex is? We've moved from Capex to Opex and now the notion is lost, is it? You can hire an army of software developers but can't build hardware.

                  Is it better when everyone buys DeepSeek or a non-US version? Well then you don't need to spend Capex but you won't have revenue either.

                  • littlestymaar 2 hours ago ago

                    Deepseek somehow didn't need $2T to happen.

                    • matt3D an hour ago ago

                      I think the argument can be made that DeepSeek is a state-sponsored needle looking to pop another state's bubble.

                      If Deepseek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI.

                      • re-thc 31 minutes ago ago

                        > the argument can be made that DeepSeek is a state-sponsored needle looking to pop another state's bubble

                        Who says they don't make money? Same with open source software that offers a hosted version.

                        > If Deepseek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI

                        Freemium, open source and other models all exist. Does it undermine the value of e.g. Salesforce?

                    • anotherd1p an hour ago ago

                      All that led up to DeepSeek needed more. Don't forget where it all comes from.

                    • re-thc 2 hours ago ago

                      Because you know how much they spent.

                      And that $2T you're referring to includes infrastructure like energy, data centers, servers and many things. DeepSeek rents from others. Someone is paying.

        • kamaal 3 hours ago ago

          More importantly, even if you do want it, and there are business situations that support your ambitions, you still have to get into the managerial powerplay, which quite honestly takes a separate kind of skill set, time and effort. Which I'm guessing the academia-oriented people aren't willing to do.

          It's pretty much dog eat dog at top management positions.

          It's not exactly a space for free thinking timelines.

          • anotherd1p an hour ago ago

            > It's not exactly a space for free thinking timelines.

            Same goes for academia. People's visions compete for other people's financial budgets, time and other resources. Some dogs get to eat, study, train at the frontier and with top tools in top environments while the others hope to find a good enough shelter.

          • ptero 3 hours ago ago

            It is not a free-thinking paradise in academia either. Different groups fighting for hiring, promotions and influence exist there, too. And it tends to be more pronounced: it is much easier in industry to find a comparable job to escape a toxic environment, so a lot of problems in academic settings fester forever.

            But the skill set needed to avoid and survive personnel issues in academia is different from the one needed in industry. My 2c.

      • throwaw12 6 hours ago ago

        I would pose the question differently: under his leadership, did Meta achieve a good outcome?

        If the answer is yes, then better to keep him, because he has already proved himself and you can win in the long-term. With Meta's pockets, you can always create a new department specifically for short-term projects.

        If the answer is no, then nothing to discuss here.

        • xuancanh 5 hours ago ago

          Meta did exactly that, kept him but reduced his scope. Did the broader research community benefit from his research? Absolutely. But did Meta achieve a good outcome? Probably not.

          If you follow LeCun on social media, you can see that the way FAIR’s results are assessed is very narrow-minded and still follows the academic mindset. He mentioned that his research is evaluated by: "Research evaluation is a difficult task because the product impact may occur years (sometimes decades) after the work. For that reason, evaluation must often rely on the collective opinion of the research community through proxies such as publications, citations, invited talks, awards, etc."

          But as an industry researcher, he should know how his research fits with the company vision and be able to assess that easily. If the company's vision is to be the leader in AI, then as of now, he seems to have failed that objective, even though he has been at Meta for more than 10 years.

          • nsonha 5 hours ago ago

            Also he always sounds like "I know this will not work". Dude are you a researcher? You're supposed to experiment and follow the results. That's what separates you from oracles and freaking philosophers or whatever.

            • lukan 3 hours ago ago

              Philosophers are usually more aware of their not knowing than you seem to give them credit for. (And oracles are famously vague, too).

            • teleforce 3 hours ago ago

              Do you know that all formally trained researchers have Doctor of Philosophy or PhD to their name? [1]

              [1] Doctor of Philosophy:

              https://en.wikipedia.org/wiki/Doctor_of_Philosophy

              • anotherd1p an hour ago ago

                If academia is in question, then so are their titles. When I see "PhD", I read "we decided that he was at least good enough for the cause" PhD, or PhD (he fulfilled the criteria).

            • yawnxyz 5 hours ago ago

              he probably predicted the asymptote everyone is approaching right now

              • brazukadev 3 hours ago ago

                So did I after trying llama/Meta AI

            • uoaei 4 hours ago ago

              He's speaking to the entire feedforward Transformer-based paradigm. He sees little point in continuing to try to squeeze more blood out of that stone and instead move on to more appropriate ways to model ontologies per se rather than the crude-for-what-we-use-them-for embedding-based methods that are popular today.

              I really resonate with his view due to my background in physics and information theory. I for one welcome his new experimentation in other realms while so many still hack away at their LLMs in pursuit of SOTA benchmarks.

              • fhd2 3 hours ago ago

                If the LLM hype doesn't cool down fast, we're probably looking at another AI winter. Appears to me like he's just trying to ensure he'll have funding for chasing the global maximum going forward.

                • re-thc 3 hours ago ago

                  > If the LLM hype doesn't cool down fast, we're probably looking at another AI winter.

                  Is the real bubble ignorance? Maybe you'll cool down but the rest of the world? There will just be more DeepSeek and more advances until the US loses its standing.

        • rw2 6 hours ago ago

          I believe the fact that Chinese models are beating the crap out of Llama means it's a huge no.

          • amelius 5 hours ago ago

            Why? The Chinese are very capable. Most DL papers have at least one Chinese name on them. That doesn't mean the authors are Chinese, but it's telling.

            • rob_c 5 hours ago ago

              Most papers are also written in the same language; what's your point?

        • HarHarVeryFunny 2 hours ago ago

          LeCun was always part of FAIR, doing research, not part of the LLM/product group, which reported to someone else.

        • anotherd1p an hour ago ago

          then we should ask: will Meta come close enough to the fulfillment of the promises made, or will it keep achieving good enough outcomes?

      • hbarka an hour ago ago

        LeCun truly believes the future is in world models. He’s not alone. Good for him to now be in the position he’s always wanted and hopefully prove out what he constantly talks about.

      • sharmajai 3 hours ago ago

        Product companies with deprioritized R&D wings are the first ones to die.

        • StilesCrisis 2 hours ago ago

          None of Meta's revenue has anything to do with AI at all. (Other than GenAI slop in old people's feeds.) Meta is in the strange position of investing very heavily in multiple fields where they have no successful product: VR, hardware devices, and now AI. Ad revenue funds it all.

          • jpadkins 28 minutes ago ago

            LLMs help ads efficiency a lot: policy labels, targeting, adaptive creatives, landing page evals, etc.

          • nxor an hour ago ago

            Underrated comment

        • skeeter2020 an hour ago ago

          Hasn't happened to Google yet

          • anshumankmr 16 minutes ago ago

            Has Google deprioritized R&D?

      • HarHarVeryFunny 2 hours ago ago

        Meta had a two-pronged AI approach - a product-focused group working on LLMs, and blue-sky research (FAIR) working on alternate approaches, such as LeCun's JEPA.

        It seems they've given up on the research and are now doubling down on LLMs.

      • _the_inflator an hour ago ago

        I totally agree. He appeared to act against his employer and actively undermined Meta's effort to attract talent with his behavior on X.

        And I stopped reading him, since he - in my opinion - trashed on autopilot everything the other 99% did - and those 99% were already beyond two standard deviations of greatness.

        It is even more problematic if you have absolutely no results, e.g. products, to back your claims.

      • Grimblewald 3 hours ago ago

        LLM hostility was warranted. The overhyped, downright charlatan nature of AI hype and marketing threatens another AI winter. It happened to cybernetics, and it'll happen to us too. The finance folks will be fine; they'll move on to the next big thing to overhype. It is the researchers who suffer the fall-out. I am considered anti-LLM (anti-transformer, anyway) for this reason. I like the architecture; it is cool and rather capable at its problem set, which is a unique set, but it isn't going to deliver any of what has been promised, any more than a plain DNN or a CNN will.

      • rapsey 5 hours ago ago

        Yann was never a good fit for Meta.

        • runeblaze 2 hours ago ago

          Agreed, I am surprised he is happy to stay this long. He would have been on paper a far better match at a place like pre-Gemini-era Google

      • nailer an hour ago ago

        LeCun has also consistently tried to redefine open source away from the open source definition.

      • rob_c 5 hours ago ago

        tbf, transformers from a developmental perspective are hugely wasteful. They're long-range stable, sure, but the whole training process requires so much power/data compared to even slightly simpler model designs that I can see why people are drawn to alternative, complex model designs down-playing the reliance on pure attention.

    • gnaman 7 hours ago ago

      He is also not very interested in LLMs, and that seems to be Zuck's top priority.

      • tinco 7 hours ago ago

        Yeah, I think LeCun is underestimating the impact that LLMs and diffusion models are going to have, even considering the huge impact they're already having. That's no problem, as I'm sure whatever LeCun is working on is going to be amazing as well, but an enterprise like Facebook can't have their top researcher work on risky things when there are surefire paths to success still available.

        • jll29 7 hours ago ago

          I politely disagree - it is exactly an industry researcher's purpose to do the risky things that may not work, simply because the rest of the corporation cannot take such risks but must walk on more well-trodden paths.

          Corporate R&D teams are there to absorb risk, innovate, disrupt, create new fields, not for doing small incremental improvements. "If we know it works, it's not research." (Albert Einstein)

          I also agree with LeCun that LLMs, in their current form, are a dead end. Note that this does not mean I think we have already exploited LLMs to the limit; we are still at the beginning. We also need to create an ecosystem in which they can operate well: for instance, to combine LLMs with Web agents better we need a scalable "C2B2C" (customer delegated to business to business) micropayment infrastructure, because these systems have already begun talking to each other, and in the longer run nobody would offer their APIs for free.

          I work on spatial/geographic models, inter alia, which by coincidence is one of the directions mentioned in the LeCun article. I do not know what his reasoning is, but mine was/is: LMs are language models, and should (only) be used as such. We need other models - in particular a knowledge model (KM/KB) to cleanly separate knowledge from text generation - and it looks to me right now that only that will solve hallucination.

          • barrkel 6 hours ago ago

            Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

            Everything from the sorites paradox to leaky abstractions; everything real defies precise definition when you look closely at it, and when you try to abstract over it, to chunk up, the details have an annoying way of making themselves visible again.

            You can get purity in mathematical models, and in information systems, but those imperfectly model the world and continually need to be updated, refactored, and rewritten as they decay and diverge from reality.

            These things are best used as tools by something similar to LLMs, models to be used, built and discarded as needed, but never a ground source of truth.

            • cheesecompiler 19 minutes ago ago

              Is it that fuzzy, though? If it were, could language adequately grasp and model our realities? And what about the physical world itself: animals are modeling the world adequately enough to navigate it. There are significant gains to be made from modeling _enough_ of the world, without falling into the hallucinations of the purely statistical associations of an LLM.

            • fauigerzigerk 2 hours ago ago

              >Knowledge models, like ontologies, always seem suspect to me; like they promise a schema for crisp binary facts, when the world is full of probabilistic and fuzzy information loosely categorized by fallible humans based on an ever slowly shifting social consensus.

              I don't disagree that the world is full of fuzziness. But the problem I have with this portrayal is that formal models are often normative rather than analytical. They create reality rather than being an interpretation or abstraction of reality.

              People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions. And this is not just true for software products. It's also largely true for manufactured products. Our world is very much shaped by artifacts and man-made rules.

              Our probabilistic, fuzzy concepts are often simply a misconception. That doesn't mean it's not important of course. It is important for an AI to understand how people talk about things even if their idea of how these things work is flawed.

              And then there is the sort of semi-formal language used in legal or scientific contexts that often has to be translated into formal models before it can become effective. Law makers almost never write algorithms (when they do, they are often buggy). But tax authorities and accounting software vendors do have to formally model the language in the law and then potentially change those formal definitions after court decisions.

              My point is that the way in which the modeled, formal world interacts with probabilistic, fuzzy language and human actions is complex. In my opinion we will always need both. AIs ultimately need to understand both and be able to combine them just like (competent) humans do. AI "tool use" is a stop-gap. It's not a sufficient level of understanding.

              • pton_xd 26 minutes ago ago

                > People may well have a fuzzy idea of how their credit card works, but how it really works is formally defined by financial institutions.

                > Our probabilistic, fuzzy concepts are often simply a misconception.

                How eg a credit card works today is defined by financial institutions. How it might work tomorrow is defined by politics, incentives, and human action. It's not clear how to model those with formal language.

                I think most systems we interact with are fuzzy because they are in a continual state of change due to the aforementioned human society factors.

            • Marshferm 2 hours ago ago

              World models are trivial. E.g. narratives are world models, and they provide only prefrontal simulation, i.e. they are synthetically prey-predation. No animal uses world models for survival, and it's doubtful they exist (maps are not models); a world model doesn't conform to optic flow, i.e. instantaneous use and response. Anything like a world model isn't shallow, the basic premise of oscillatory command; it's needlessly deep, nothing like brains. This is just a frontier hail-mary to the current age.

            • rob_c 5 hours ago ago

              You're basically describing the knowledge problem vs model structure, how to even begin to design a system which self-updates/dynamically-learns vs being trained and deployed.

              Cracking that is a huge step. Pure multi-modal trained models will probably give us a hint, but I think we're some way from seeing a pure multi-modal open model which can be pulled apart/modified. Even then they're still train-and-deploy, not dynamically learning. I worry we're just going to see LSTM design bolted onto deep LLMs because we don't know where else to go, and it will be fragile and take eons to train.

              And the less said about the crap of "but inference is doing some kind of minimization within the context window" the better; it's vacuous and not where great minds should be looking for a step forwards.

            • balamatom 4 hours ago ago

              I have vague notions of there being an entire hidden philosophical/political battlefield (massacre?) behind the whole "are knowledge models/ontologies a realistic goal" debate.

              Starting with the sophomoric questions of the optimist who mistakes the possible for the viable: how definite of a thing is "the world", how knowable is it, what is even knowledge... and then back through the more pragmatic: by whom is it knowable, to what degree, and by what means. The mystics: is "the world" the same thing as "the sum of information about the world"? The spooks: how does one study those fields of information which are already agentic and actively resist being studied by changing themselves, such as easily emerge anywhere more than n(D) people gather?

              Plenty of food for thought from why ontologies are/aren't a thing. The classical example of how this plays out in the market being search engines winning over internet directories. But that's one turn of the wheel. Look at what search engines grew into quarter century later. What their outgrowths are doing to people's attitude towards knowledge. Different timescale, different picture.

              Fundamentally, I don't think human language has sufficient resolution to model large spans of reality within the limited human attention span. The physical limits of human language as an information processing device were hit at some point in the XX century. Probably that 1970s divergence between productivity and wages.

              So while LLMs are "computers speak language now" and it's amazing if sad that they cracked it by more data and not by more model, what's more amazing is how many people are continually ready to mistake language for thought. Are they all P-zombies or just obedience-conditioned into emulating ones?!?!?

              Practically, what we lack is not the right architecture for "big knowing machine", but better tools for ad-hoc conceptual modeling of local situations. And, just like poetry that rhymes, this is exactly what nobody has a smidgen of interest to serve to consumers, thus someone will just build it in their basement in the hope of turning the tables on everyone. Probably with the help of LLMs as search engines and code generators. Yall better hurry. They're almost done.

          • siva7 6 hours ago ago

            > it is exactly a researcher's purpose to do the risky things that may not work

            Maybe at university, but not at a trillion dollar company. That job as chief scientist is leading risky things that will work to please the shareholders.

            • vintermann 6 hours ago ago

              They knew what Yann LeCun was when they hired him. If anything, those brilliant academics who have done what they're told and loyally pursued corporate objectives the way the corporation wanted (e.g. Karpathy when he was at Tesla) haven't had great success either.

              • jack_tripper 5 hours ago ago

                >They knew what Yann LeCun was when they hired him.

                Yes, but he was hired in the ZIRP era, when all SV companies were hiring every opinionated academic and giving them free rein and unlimited money to burn in the hopes that maybe they'll create the next big thing for them eventually.

                These are very different economic times; now that the Fed's infinite money glitch has been patched out, people need to adjust and start actually making products of value commensurate with their seven-figure costs to their employers, or end up being shown the door.

                • miohtama 5 hours ago ago

                  Some employees even need to be physically present at the office

                • rob_c 5 hours ago ago

                  so your message is to short OpenAI before it implodes and gets absorbed into Cortana or equivalent ;)

            • Hendrikto 4 hours ago ago

              > risky things that will work

              Things known to work are not risky. Risky things can fail by definition.

            • rsynnott 5 hours ago ago

              “Risky things that will work” - contradiction in terms. If companies only did things they knew would work, we probably still wouldn’t have microchips.

              Also, like… it’s Facebook. It has a history of ploughing billions into complete nonsense (see metaverse). It is clearly not particularly risk averse.

            • tempfile 5 hours ago ago

              What exactly does it mean for something to be a "risky thing that will work"?

          • igravious 4 hours ago ago

            > I also agree with LeCun that LLMs, in their current form, are a dead end.

            Well then you and he are clearly dead wrong.

            • pegasus 4 hours ago ago

              Either that, or just tautological, given that LLM tech is continually morphing and improving.

        • fxtentacle 7 hours ago ago

          LLMs and Diffusion solve a completely different problem than world models.

          If you want to predict future text, you use an LLM. If you want to predict future frames in a video, you go with Diffusion. But what both of them lack is object permanence. If a car isn't visible in the input frame, it won't be visible in the output. But in the real world, there are A LOT of things that are invisible (image) or not mentioned but only implied (text) that still strongly affect the future. Every kid knows that when you roll a marble behind your hand, it'll come out on the other side. But LLMs and Diffusion models routinely fail to predict that, as for them the object disappears when it stops being visible.

          Based on what I heard from others, world models are considered the missing ingredient for useful robots and self-driving cars. If that's halfway accurate, it would make sense to pour A LOT of money into world models, because they will unlock high-value products.

          • Workaccount2 8 minutes ago ago

            >But what both of them lack is object permanence.

            This is something that was true last year, but hanging on by a thread this year. Genie shows this off really well, but it's also in the video models as well.[1]

            [1]https://storage.googleapis.com/gdm-deepmind-com-prod-public/...

          • tinco 6 hours ago ago

            Sure, if you only consider the model, they have no object permanence. However, you can just put your model in a loop and feed the previous frame into the next step. This is what LLM agent engineers do with their context histories, and it's probably also what the diffusion engineers do with their video models.

            Messing with the logic in the loop and combining models has enormous potential, but it's more engineering than research, and it's just not the sort of work that LeCun is interested in. I think that's where the conflict lies: Facebook is an engineering company, and a possible future of AI lies in AI engineering rather than AI research.
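
            A minimal sketch of that loop in Python, assuming a hypothetical stateless generate() call standing in for any model (an LLM completion or a next-frame predictor); it only illustrates the pattern described above, not any particular system:

                def generate(context: list[str]) -> str:
                    # Stand-in for a stateless model call; a real system would invoke an
                    # LLM or a video model here. Dummy output so the sketch runs as-is.
                    return f"output-{len(context)} (conditioned on {len(context)} prior items)"

                def run_loop(initial_input: str, steps: int) -> list[str]:
                    context = [initial_input]    # the model itself keeps no state between calls...
                    outputs = []
                    for _ in range(steps):
                        out = generate(context)  # ...so the accumulated history is re-fed each step
                        outputs.append(out)
                        context.append(out)      # the previous output becomes part of the next input
                    return outputs

                print(run_loop("a marble rolls behind a hand", steps=3))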

          • yogrish 6 hours ago ago

            I think world models are the way to go for superintelligence. One of the patents I saw already going in this direction, for autonomous mobility, is https://patents.google.com/patent/EP4379577A1 where synthetic data generation (visualization) is a missing step in terms of our human intelligence.

            • makestuff 31 minutes ago ago

              This is the first time I have heard of world models. Based on my brief reading, it does look like this is the ideal model for autonomous driving. I wonder if the self-driving companies are already using this architecture or something close to it.

          • PxldLtd 6 hours ago ago

            I thoroughly disagree; I believe world models will be critical in some aspect for text generation too. A predictive world model can help to validate your token prediction. Take a look at the Code World Model, for example.

          • ml-anon 5 hours ago ago

            lol what is this? We already have world models based on diffusion and AR algorithms.

        • KaiserPro 2 hours ago ago

          > I think LeCun is underestimating the impact that LLMs and diffusion models

          No, I think he's suggesting that "world models" are more impactful. The issue for him inside Meta is that there is already a research group looking at that, and it is wildly more successful (in terms of getting research to product) and way fucking cheaper to run than FAIR.

          Also LeCun is stuck weirdly in product land, rather than research (RL-R) which means he's not got the protection of Abrash to isolate him from the industrial stupidity that is the product council.

        • anthonybsd an hour ago ago

          > Facebook can't have their top researcher work on risky things when there are surefire paths to success still available.

          How did you determine that "surefire paths to success still available"? Most academics agree that LLMs (or LLMs alone) are not going to lead us to AGI. How are you so certain?

          • tinco an hour ago ago

            I don't believe we need more academic research to achieve AGI. The sort of applications that are solving the recent AGI challenges are just severely resource constrained AGI. The only difference between those systems and human intelligence are resources and incentives.

            Not that I believe AGI is the measure of success, there's probably much more efficient ways to achieve company goals than simulating humans.

        • qmr 6 hours ago ago

          > but an enterprise like Facebook can't have their top researcher work on risky things when there's surefire paths to success still available.

          Bell Labs

        • skeeter2020 an hour ago ago

          not sure I agree. AI seems to be following the same 3-stage path of many inventions: innovation > adoption > diffusion. LeCun and co focus on the first, and LLMs in their current form appear to be incremental improvements; we're still using the same basis from more than ten years ago. FB and industry are signalling a focus on harvesting the innovation, and that could last - but also take - many years or decades. Your fundamental researchers are not interested in (or the right people for) that position.

        • hodgehog11 7 hours ago ago

          Unless I've missed a few updates, much of the JEPA stuff didn't really bear a lot of fruit in the end.

        • OJFord 3 hours ago ago

          He's quoted in OP as calling them 'useful but fundamentally limited'; that seems correct, and not at all like he's denying their utility.

        • sebmellen 7 hours ago ago

          While I agree with your point, “Superintelligence” is a far cry from what Meta will end up delivering with Wang in charge. I suppose that, at the end of the day, it’s all marketing. What else should we expect from an ads company :?

          • metabolian 6 hours ago ago

            The Meta Super-Intelligence can dwell in the Metaverse with the 23 other active users there.

        • StopDisinfo910 6 hours ago ago

          Hard to tell.

          The last time LeCun disagreed with the AI mainstream was when he kept working on neural nets while everyone thought they were a dead end. He might be entirely right in his LLM scepticism. It's hardly a surefire path. He didn't prevent Meta from working on LLMs anyway.

          The issue is more that his position is not compatible with short-term investors' expectations, and that's fatal in a company like Meta at the position LeCun occupies.

        • raverbashing 7 hours ago ago

          Yeah honestly I'm with the LLM people here

          If you think LLMs are not the future then you need to come with something better

          If you have a theoretical idea that's great, but take it to at least GPT-2 level first before writing off LLMs

          Theoretical people love coming up with "better ideas" that fall flat or have hidden gotchas when they get to practical implementation

          As Linus says, "talk is cheap, show me the code".

          • DaSHacka 7 hours ago ago

            Do you? Or is it possible to acknowledge a plateau in innovation without necessarily having an immediate solution cooked-up and ready to go?

            Are all critiques of the obvious decline in physical durability of American-made products invalid unless they figure out a solution to the problem? Or may critics of a subject exist without necessarily being accredited engineers themselves?

          • whizzter 6 hours ago ago

            LLMs are probably always going to be the fundamental interface; the problem they solved was related to the flexibility of human languages, allowing us to have decent mimicries.

            And while we've been able to approximate the world behind the words, it's just full of hallucinations because the AIs lack axiomatic systems beyond manually constructed machinery.

            You can probably expand the capabilities by attaching to the front-end, but I suspect that Yann is seeing limits to this and wants to go back and build up from the back-end of world reasoning and then _among other things_ attach LLMs at the front-end (but maybe on equal terms with vision models that allow for seamless integration of LLM interfacing _combined_ with vision for proper autonomous systems).

            • rob_c 5 hours ago ago

              > because the AIs lack axiomatic systems beyond manually constructed machinery.

              Oh god, that is massively under-selling their learning ability. These models are able to extract and explain why jokes are funny without even knowing basic vocab, yet there are pure-code models out there with lingual rules baked in from day one which still struggle with basic grammar.

              The _point_ of LLMs arguably is their ability to learn any pattern thrown at them with enough compute, with an exception for learning how logical processes work; and pure LLMs only see "time" in the sense that a paragraph begins and ends.

              At the least they have taught computers, "how to language", which in regards to how to interact with a machine is a _huge_ step forward.

              Unfortunately the financial incentives are split between agentic model usage (taking the idea of a computerised butler further), maximizing model memory and raw learning capacity (answering all problems at any time), and long-range consistency (longer ranges give better stable results due to a few reasons, but we're some way from seeing an LLM with a 128k experts and 10e18 active tokens).

              I think in terms of building the perfect monkey butler we already have most or all of the parts. With regard to a model which can dynamically learn on the fly... LLMs are not the end of the story and we need something to allow the models to more closely tie their LS with the context. Frankly the fact that DeepSeek gave us an LLM with LS was a huge leap since previous model attempts had been overly complex and had failed in training.

          • worldsayshi 7 hours ago ago

            Why not both? LLMs probably have a lot more potential than what is currently being realized, but so do world models.

          • hhh 7 hours ago ago

            LLMs are the present. We will see what the future holds.

          • dpe82 7 hours ago ago

            Of course the challenge with that is it's often not obvious until after quite a bit of work and refinement that something else is, in fact, better.

          • mitthrowaway2 6 hours ago ago

            Isn't that exactly why he's starting a new company?

          • Seattle3503 7 hours ago ago

            Well, we will see if Yann can.

        • netdevphoenix 4 hours ago ago

          >the huge impact they're already having

          In the software development world, yes; outside of that, virtually none. Yes, you can transcribe a video call in Office, but that's not ground breaking. I dare you to list 10 impacts on different fields, excluding tech and including at least half blue-collar fields and at least half white-collar fields, at different levels from the lowest to the highest in the company hierarchy, that LLM/diffusion models are having. Impact here specifically means a significant reduction of costs or a significant increase of revenue. Go on.

          • arcticbull 4 hours ago ago

            I'm also not sure it even drives a ton of value in software engineering. It makes the easy part easier and the hard part harder. Typing out software in your mind was never the difficult part. Figuring out what to write, how to interpret specs in context, how to make your code work within the context of a broader whole, how to be extensible, maintainable, reliable, etc. That's hard, and LLMs really don't help.

            Even when writing, it shifts the mental burden from an easy thing (writing code) to a very hard thing (reading that code, validating it's right, hallucination free, and then refactoring it to match your teams code style and patterns).

            It's great for building a first-order approximation of a tech demo app that you then throw out and build from scratch, and auto-complete. In my experience, anyways. I'm sure others have had different experiences.

          • pegasus 4 hours ago ago

            You already mentioned two fields they have a huge impact on, software development and NLP (the latter the most impacted so far). Another field that comes to mind is academic research, which is getting an important boost as well, via semantic search or more advanced stuff like Google's biological cell model, which has already uncovered new treatments. I'm sure I'm missing a lot of other fields I'm less familiar with (legal, for example). But just the impacts I listed are huge, and they will indirectly have a huge impact on all other areas of human industry; it's just a matter of time. "Software will eat the world" and all that.

          • olalonde 4 hours ago ago

            Personally, I find myself using LLMs more than Google now, even for non-development tasks. I think this shift is going to become the new normal (if it isn't already).

          • antegamisou 3 hours ago ago

            I don't think you'll find many here believing anything outside tech is worth investing in; it's schizophrenic, isn't it?

      • gdiamos 5 hours ago ago

        The role of basic research is to get off the beaten path.

        LLMs aren’t basic research when they have 1 billion users

    • ACCount37 6 hours ago ago

      That was obviously him getting sidelined. And it's easy to see why.

      LLMs get results. None of the Yann LeCun's pet projects do. He had ample time to prove that his approach is promising, and he didn't.

      • chaoz_ 4 hours ago ago

        I agree. I never understood LeCun's statement that we need to pivot toward the visual aspects of things because the bitrate of text is low while visual input through the eye is high.

        Text and language contain structured information and encode a lot of real-world complexity (or are "modelling" it).

        Not saying we won't pivot to visual data or world simulations, but he was clearly not the type of person to compete with other LLM research labs, nor did he propose any alternative that could be used to create something interesting for end-users.

        • tarsinge 39 minutes ago ago

          Text and language contain only approximate information filtered through human eyes and brains. Also, animals don't have language and can show quite advanced capabilities compared to what we can currently do in robotics. And if you do enough mindfulness you can dissociate cognition/consciousness from language. I think we are lured by how important language is for us humans, but intuitively it's obvious to me that language (and LLMs) are only a subcomponent, or even irrelevant, for say self-driving or robotics.

        • ACCount37 4 hours ago ago

          If LeCun's research had made Meta a powerhouse of video generation or general-purpose robotics - the two promising directions that benefit from working with visual I/O and world modeling as LeCun sees it - it could have been a justified detour.

          But that sure didn't happen.

      • camillomiller 5 hours ago ago

        "LLMs get results" is quite the bold statement. If they get results, they should be getting adopted, and they should be making money. This is all built on hazy promises. If you had marketable results, you wouldn't have to hide 20+ billion dollars of debt financing in an obscure SPV. LLMs are the most baffling piece of tech. They are incredible, and yet marred by their non-deterministic, hallucinatory nature, and bound to fail in adoption unless you convince everyone that they don't need precision and accuracy, that they can do their business at 75% quality, just with less human overhead. It's quite the thing to convince people of, and that's why it needs the spend it's needing. A lot of we-need-to-stay-in-the-loop CEOs and bigwigs got infatuated with the idea, and most probably they just had their companies get addicted to the tech equivalent of crack cocaine. A reckoning is coming.

        • ACCount37 5 hours ago ago

          LLMs get results, yes. They are getting adopted, and they are making money.

          Frontier models are all profitable. Inference is sold with a damn good margin, and the amount of inference AI companies sell keeps rising. This necessitates putting more and more money into infrastructure. AI R&D is extremely expensive too, and that necessitates even more spending.

          A mistake I see people make over and over again is keeping track of the spending but overlooking the revenue altogether. Which sure is weird: you don't get from $0B in revenue to $12B in revenue in a few years by not having a product anyone wants to buy.

          And I find all the talk of "non-deterministic hallucinatory nature" to be overblown. Humans suffer from all of that too, just less severely - on top of a number of other issues that current AIs don't suffer from.

          Nonetheless, we use human labor for things. All AI has to do is provide a "good enough" alternative, and it often does.

          • ripe 3 hours ago ago

            > Frontier models are all profitable.

            This is an extraordinary claim and needs extraordinary proof.

            LLMs are raising lots of investor money, but that's a completely different thing from being profitable.

            • ACCount37 2 hours ago ago

              You don't even need insider info - it lines up with external estimates.

              We have estimates that range from 30% to 70% gross margin on API LLM inference prices at major labs, 50% middle road. 10% to 80% gross margin on user-facing subscription services, error bars inflated massively. We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.

              The only source of uncertainty is: how much inference do the free tier users consume? Which is something that the AI companies themselves control: they are in charge of which models they make available to the free users, and what the exact usage caps for free users are.

              Adding that up? Frontier models are profitable.
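
              A rough back-of-the-envelope sketch of that "adding up", using the mid-range figures above (a toy illustration in made-up normalized units, not insider data):

                  training_cost = 1.0                      # normalize one frontier training run to 1 unit
                  inference_cost = 10 * training_cost      # lifetime inference compute ~10x training, per the estimates above
                  gross_margin = 0.5                       # ~50% gross margin on paid inference, middle of the 30-70% range
                  inference_revenue = inference_cost / (1 - gross_margin)   # cost / (1 - margin) = 20 units of revenue
                  lifetime_gross_profit = inference_revenue - (training_cost + inference_cost)
                  print(lifetime_gross_profit)             # 9.0 units; ignores free-tier usage, R&D and payroll, which is exactly where the uncertainty lives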

              This goes against the popular opinion, which is where the disbelief is coming from.

              Note that I'm talking LLMs rather than things like image or video generation models, which may have vastly different economics.

          • camillomiller 3 hours ago ago

            In this comment you proceeded to basically reinvent the meaning of "profitable company", but sure. I won't even get into the point of comparing LLMs to humans, because I choose not to engage with whoever doesn't have the human decency, humanistic compass, or basic philosophical understanding to see that putting LLMs and human labor on the same level to justify hallucinations and non-determinism is deranged and morally bankrupt.

            • ACCount37 3 hours ago ago

              You should go and work in a call center for a year, on the first line.

              Then come back and tell me how replacing human labor with AI is "deranged and morally bankrupt".

        • miohtama 5 hours ago ago

          OpenAI and Anthropic are making north of $4B/year in revenue, so some companies have figured out the money-making part. ChatGPT has some 800M users according to some calculations. Whether it's enough money today, or enough money tomorrow, is of course a question, but there is a lot of money. Users would not use them at this scale if they did not solve their problems.

          • panja 4 hours ago ago

            OpenAI lost 12bn last quarter

          • Hendrikto 3 hours ago ago

            It’s easy to make 1 billion by spending 10 billion. That’s not “making money” though, it is lighting it on fire.

            • aryonoco an hour ago ago

              People used to say this about Amazon all the time. Remember how Amazon basically didn't turn any real profits for two decades? The joke was that Amazon was a charitable organisation being funded by Wall Street for the benefit of humankind.

              That didn’t last. People in the know knew that once you have a billion users and insane revenue and market power and have basically bought or driven out of business most of your competitors (Diapers.com, Jet.com, etc) you can eventually slow down your physical expansion, tighten the screws on your suppliers, increase efficiencies, and start printing money.

              The VCs who are funding these companies are hoping that they have found the next Amazon. Many will probably go out of business, but some might join the ranks of trillion dollar companies.

              • ambicapter a few seconds ago ago

                So every company that doesn't turn any profits is actually Amazon in disguise?

      • dude250711 6 hours ago ago

        There is someone else at Facebook whose pet projects do not get results...

        • ergocoder 5 hours ago ago

          If you hire a house cleaner to clean your house, and the cleaner didn't do well, would you eject yourself out of the house? You would not. You would change to a new cleaner.

          • psychoslave an hour ago ago

            But if we hire someone to do R&D on fully automating the house-cleaning process, we might not necessarily expect the office to be kept clean by the researchers themselves any time we enter the room.

        • ACCount37 5 hours ago ago

          Sure, but that "someone else" is the man writing the checks. If the roles were reversed, he'd be the one being fired now.

        • jb1991 5 hours ago ago

          Who are you referring to?

          • nolok 5 hours ago ago

            I think he means Zuckerberg himself; the metaverse isn't exactly a major success. But this is a false equivalence: the way he organized the company, only his vote matters, so he does what he wants.

    • FartyMcFarter 3 hours ago ago

      > But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.

      When did they make groundbreaking foundation models though? DeepMind and OpenAI have done plenty of revolutionary things, what did Meta AI do while being led by LeCun?

    • ergocoder 5 hours ago ago

      LeCun is great and smart, of course. But he had his chance. It didn't go that well. Now Zuck wants somebody else to try.

      Messi is the best footballer of our era. It doesn't mean he would play well in any team.

      • jamesblonde 4 hours ago ago

        I don't think Messi could do it on a wet night in Stoke. Ronaldo could, though.

        /s

    • renegade-otter 10 minutes ago ago

      Oh wow, is that true? They made him report to the director of the Slop Factory? Brilliant!

    • torginus 4 hours ago ago

      What does Meta even want with AI?

      I suppose they could solve superintelligence and cure cancer and build fusion reactors with it, but that's 100% outside their comfort zone - if they manage to build synthetic conversation partners and synthetic content generators as good as or better than the real thing, the value of having every other human on the planet registered to one of their social networks goes to zero.

      Which is impossible anyway - I use Facebook to maintain real human connections and keep up with people I care about, not to consume infinite content.

      • zamadatix 4 hours ago ago

        At a $1.6T market cap it's very hard to 10x or better the company anymore by doing what's in their comfort zone, and they've got a lot of money to play with to find easier-to-grow opportunities. If Zuckerberg were convinced he could do that by selling toothpicks, they'd have a go at the toothpick business. They went after the "metaverse" first, then AI. Both are just very fast-growth options which happen to be tech-focused, because that's the only way you generate new comparable value as a company (unless you're sitting on a lot of state-owned oil) in the current markets.

        • bbarnett 4 hours ago ago

          You missed an opportunity to use paperclips instead of toothpicks, as your example.

          Would be very inline with the AI angle.

      • breppp 4 hours ago ago

        they are out for your clicks and attention minutes

        If OpenAI can build a "social" network of completely generated content, that could kill Meta. Even today I'd venture to guess that most of the engagement on their platforms is not driven by real friends, so an AI-driven platform won't be too different, or it might make content generation so easy that your friends engage again.

        Apart from that, the ludicrous vision of the metaverse seems much more plausible with highly realistic world models.

        • drexlspivey 4 hours ago ago

          How do LLMs help with clicks and attention minutes? Why do they spend $100+B a year in AI capex, more than Google and Microsoft that actually rent AI compute to clients? What are they going to do with all that compute? It’s all so confusing

          • jcfrei 3 hours ago ago

            Browse TikTok and you already see AI generated videos popping up. Could well be that the platforms with the most captivating content will not be a "social" network but one consisting of some tailor made feed for you. That could undermine the business model of the existing social networks - unless they just fill it with AI generated content themselves. In other words: Facebook should really invest in good video generating models to keep their platforms ahead.

          • breppp 4 hours ago ago

            It might be just me, but in my opinion Facebook's platforms are way past the "content from your friends" phase and are full of cheap, peddled viral content.

            If that content becomes even cheaper, of higher quality, and highly tailored to you, that is probably worth a lot of money, or at least worth not losing your entire company to a new competitor.

            • drexlspivey 3 hours ago ago

              But practically speaking, is Meta going to be generating text or video content itself? Are they going to offer some kind of creator tools so you can use it to create video as a user and they need the compute for that? Do they even have a video generation model?

              The future is here folks, join us as we build this giant slop machine in order to sell new socks to boomers.

              • breppp 2 hours ago ago

                For all of your questions Meta would need a huge research/GPU investment, so that still holds.

                In any case, if I have to guess, we will see shallow things like the Sora app and a video-generation TikTok-style social network, plus deeper integration like fake influencers and content generation that fits your preferences and ad publishers' preferences.

                A more evil incarnation of this might be a social network where you aren't sure who is real and who isn't. This will probably be a natural evolution of the need to bootstrap a social network with people, then replace them with LLMs.

        • pandemic_region 4 hours ago ago

          Sad to hear it has come to attention minutes, used to be seconds.

    • sidcool 5 hours ago ago

      I won't be surprised if Musk hires him. But I hear LeCun hates Musk's guts.

      • HarHarVeryFunny 2 hours ago ago

        Musk doesn't appear interested in AI research - he's basically doing the same as Meta and just pursuing me-too SOTA LLMs and image generation at X.ai.

      • ACCount37 5 hours ago ago

        Musk wants people who can deliver results, and fast.

        If LeCun can't cough up some research that's directly applicable to Grok or Optimus, Musk wouldn't want him.

    • ekjhgkejhgk 2 hours ago ago

      > slopware

      Damn did you just invent that? That's really catchy.

      • esafak 13 minutes ago ago

        Slop is already a noun.

    • enahs-sf 7 hours ago ago

      Would love to have been a fly on the wall during one of their 1:1’s.

    • motbus3 6 hours ago ago

      Zuck hired John Carmack and got nothing out of it. On the other hand, LeCun was also the only thing keeping Meta from going 100% evil creepy mode.

      • lofaszvanitt 2 hours ago ago

        And Carmack complained about the bureaucracy hell that is Facebook.

      • Tepix 4 hours ago ago

        Carmack laid the foundation for the all-in-one VR headsets.

        • blitzar 3 hours ago ago

          Hopefully one day, in a galaxy far far away, someone builds something on those foundations.

          • slfnflctd 3 hours ago ago

            You joke, but the Star Wars games - especially the pinball one, for me at least - are some of the best experiences available on Quest headsets. I've been playing software pinball (as well as the real thing) since the 80s, and this is one of my favorite ways to do it now, which I will keep coming back to.

    • ninetyninenine 10 minutes ago ago

      It wasn’t boneheaded. It was done to make Yann leave. Meta doesn’t want Yann for good reason.

      Yann was largely wrong about AI. Yann dismissed LLMs as stochastic parrots and derided them as a dead end. It's now utterly clear how much utility LLMs have, and that whatever these LLMs are doing, it is much more than stochastic parroting.

      I wouldn't give money to Yann; the guy is a stubborn idiot and closed-minded. Whatever he's doing won't even touch LLM technology. He was so publicly deriding LLMs that I see no way he will back-pedal from that.

      I don't think LLMs are the end of the story for AGI. But I think they are a stepping stone. Whatever AGI is in the end, LLMs or something close to them will be a modular component or aspect of the final product. For LeCun to dismiss even the possibility of this is idiotic. It's a horrible investment move to give money to Yann, who will likely pursue AGI without even considering LLMs.

    • 7moritz7 6 hours ago ago

      When I first saw their LLM integration on Facebook I thought the screenshot was fake and a joke

    • huevosabio 7 hours ago ago

      Yes, that was such a bizarre move.

    • garyclarke27 6 hours ago ago

      Zuck did this on purpose, humiliating LeCun so he would leave. Despite being proved wrong on LLM capabilities such as reasoning, LeCun remained extremely negative - not exactly inspiring leadership for the Meta AI team - so he had to go.

      • aiven 3 hours ago ago

        But LLMs still can't reason... in any reasonable sense. No matter how you look at it, it is still a statistical model that guesses the next word; it doesn't think or reason per se.

    • archerx 4 hours ago ago

      Meta had John Carmack and squandered him. It seems like Meta can get amazing talent but has no idea how to get any value or potential out of them.

    • ulfw 6 hours ago ago

      Zuckerberg knows what he wants but he rarely knows how to get it. That's been his problem all along. Unlike others he isn't scared to throw ridiculous amounts of money at a problem though and buy companies who do things he can't get done himself.

      • margorczynski 4 hours ago ago

        There's also the aspect of control: because of how the shares and ownership are organized, he answers essentially to no one. At other companies, burning this much cash - as with VR, and now AI - without any sensible results would have gotten him ejected a long time ago.

  • llamasushi 7 hours ago ago

    LeCun, who's been saying LLMs are a dead end for years, is finally putting his money where his mouth is. Watch for LeCun to raise an absolutely massive VC round.

    • conradfr 6 hours ago ago

      So not his money ;)

      • seydor 20 minutes ago ago

        like openAI and all other AI startups?

      • qwertox 3 hours ago ago

        But his responsibility.

        • coldpie 2 hours ago ago

          Pretty funny post. He won't be held responsible for any failures. Worst case scenario for this guy is he hires a bunch of people, the company folds some time later, his employees take the responsibility by getting fired, and he sails into the sunset on several yachts.

        • zwnow 2 hours ago ago

          What is responsibility if you can afford good lawyers?

          • qwertox an hour ago ago

            So you mean that Mark Zuckerberg has always been a peer to YLC in terms of responsibility towards Meta's shareholders?

            • zwnow an hour ago ago

              I mean any entity that can afford good lawyers seems to not care about responsibility in the slightest.

  • numpy-thagoras 7 hours ago ago

    Good. The world model is absolutely the right play in my opinion.

    AI agents like LLMs make great use of pre-computed information. Providing a comprehensive but efficient world model (one where more detail is available wherever one is paying more attention, given a specific task) will definitely yield new autonomous agents.

    Swarms of these, acting in concert or with some hive mind, could be how we get to AGI.

    I wish I could help, world models are something I am very passionate about.

    • sebmellen 7 hours ago ago

      Can you explain this “world model” concept to me? How do you actually interface with a model like this?

      • curiouscube 3 hours ago ago

        One theory of how humans work is the so-called predictive coding approach. Basically, the theory assumes that human brains work similarly to a Kalman filter; that is, we have an internal model of the world that makes a prediction about the world and then checks whether the prediction is congruent with the observed changes in reality. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. Specifically, when researchers talk about world models they tend to refer to internal models that model the actual external world, that is, they can predict what happens next based on input streams like vision.

        Why is this idea of a world model helpful? Because it allows multiple interesting things, like predict what happens next, model counterfactuals (what would happen if I do X or don't do X) and many other things that tend to be needed for actual principled reasoning.
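
        A toy sketch of that predict-compare-update loop (purely illustrative, not any particular lab's model): an internal linear "model of the world" predicts the next observation, and learning is just shrinking the prediction error.

            import numpy as np

            rng = np.random.default_rng(0)
            W = rng.normal(size=(4, 4)) * 0.1        # toy internal model: linear predictor of the next observation
            lr = 0.01

            def world_step(x):
                # stand-in for the real world's dynamics (unknown to the model)
                A = np.array([[0.9, 0.1, 0.0, 0.0],
                              [0.0, 0.9, 0.1, 0.0],
                              [0.0, 0.0, 0.9, 0.1],
                              [0.1, 0.0, 0.0, 0.9]])
                return np.tanh(x @ A)

            x = rng.normal(size=4)
            for _ in range(1000):
                prediction = x @ W                   # top-down prediction of what comes next
                observation = world_step(x)          # what the world actually does
                error = observation - prediction     # bottom-up prediction error
                W += lr * np.outer(x, error)         # learning = minimizing the prediction error
                x = observation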

        • cantor_S_drug an hour ago ago

          Learning Algorithm Of Biological Networks

          https://www.youtube.com/watch?v=l-OLgbdZ3kk

          In this video we explore Predictive Coding – a biologically plausible alternative to the backpropagation algorithm, deriving it from first principles.

          Predictive coding and Hebbian learning are interconnected learning mechanisms where Hebbian learning rules are used to implement the brain's predictive coding framework. Predictive coding models the brain as a hierarchical system that minimizes prediction errors by sending top-down predictions and bottom-up error signals, while Hebbian learning, often simplified as "neurons that fire together, wire together," provides a biologically plausible way to update the network's weights to improve predictions over time.
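
          The Hebbian rule itself is tiny - a minimal sketch of the textbook version (illustrative only, ignoring normalization and weight decay):

              import numpy as np

              def hebbian_update(w, pre, post, lr=0.01):
                  # "neurons that fire together, wire together":
                  # strengthen w[i, j] in proportion to pre[i] * post[j]
                  return w + lr * np.outer(pre, post)

              w = np.zeros((3, 2))
              pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
              post = np.array([0.5, 1.0])       # postsynaptic activity
              w = hebbian_update(w, pre, post)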

        • HarHarVeryFunny 2 hours ago ago

          Learning from the real world, including how it responds to your own actions, is the only way to achieve real-world competency, intelligence, reasoning and creativity, including going beyond human intelligence.

          The capabilities of LLMs are limited by what's in their training data. You can use all the tricks in the book to squeeze the most out of that - RL, synthetic data, agentic loops, tools, etc, but at the end of the day their core intelligence and understanding is limited by that data and their auto-regressive training. They are built for mimicry, not creativity and intelligence.

        • sgt 2 hours ago ago

          So... that seems like possible path towards AGI. Doesn't it?

      • natch 5 hours ago ago

        He is one of those people who think that humans have a direct experience of reality, not mediated by (as Alan Kay put it) three pounds of oatmeal. So he thinks a language model cannot be a world model, despite our own contact with reality being mediated through a myriad of filters and fun-house-mirror distortions. Our vision transposes left and right and delivers images to our nerves upside down, for gawd's sake. He imagines none of that is the case, and that if only he can build computers more like us, then they will be in direct contact with the world, and then he can (he thinks) make a model that is better at understanding the world.

        • BoxOfRain 5 hours ago ago

          Isn't this idea demonstrably false due to the existence of various sensory disorders too?

          I have a disorder characterised by the brain failing to filter out its own sensory noise; my vision is full of analogue-TV-like distortion and other artefacts. Sometimes when it's bad I can see my brain constructing an image in real time rather than this perception happening instantaneously, particularly when I'm out walking. A deer becomes a bundle of sticks becomes a muddy pile of rocks (what it actually is), for example, over the space of seconds. This to me is pretty strong evidence that we do not experience reality directly, and instead construct our perceptions predictively from whatever is to hand.

          • scoot 4 hours ago ago

            Pleased to meet someone else who suffers from "visual snow". I'm fortunate in that like my tinnitus, I'm only acutely aware of it when I'm reminded of it, or, less frequently, when it's more pronounced.

            You're quite correct that our "reality" is in part constructed. The Flashed Face Distortion Effect [0][1] (wherein faces in the peripheral vision appear distorted due to the brain filling in the missing information with what was there previously) is just one example.

            [0] https://en.wikipedia.org/wiki/Flashed_face_distortion_effect [1] https://www.nature.com/articles/s41598-018-37991-9

            • nervousvarun 3 hours ago ago

              Only tangentially related but maybe interesting to someone here so linking anyways: Brian Kohberger is a visual snow sufferer. Reading about his background was my first exposure to this relatively underpublicized phenomenon.

              https://en.wikipedia.org/wiki/2022_University_of_Idaho_murde...

            • BoxOfRain 4 hours ago ago

              Ah that's interesting, mine is omnipresent and occasionally bad enough I have to take days off work as I can't read my own code; it's like there's a baseline of it that occasionally flares up at random. Were you born with visual snow or did you acquire it later in life? I developed it as a teenager, and it was worsened significantly after a fever when I was a fresher.

              Also do you get comorbid headaches with yours out of interest?

              • scoot 3 hours ago ago

                I developed it later in life. The tinnitus came earlier (and isn't as a result of excessive sound exposure as far as I know), but in my (unscientific) opinion they are different manifestations (symptoms) of the same underlying issue – a missing or faulty noise filter on sensory inputs to the brain.

                Thankfully I don't get comorbid headaches – in fact I seldom get headaches at all. And even on the odd occasion that I do, they're mild and short-lived (like minutes). I don't recall ever having a headache that was severe, or that lasted any length of time.

                Yours does sound much more extreme than mine, in that mine is in no way debilitating. It's more just frustrating that it exists at all, and that it isn't more widely recognised and researched. I have yet to meet an optician that seems entirely convinced that it's even a real phenomenon.

                • BoxOfRain an hour ago ago

                  Interesting, definitely agree it likely shares an underlying cause with tinnitus. It's also linked to migraine and was sometimes conflated with unusual forms of migraine in the past, although it's since been found to be a distinct disorder. There's been a few studies done on visual snow patients, including a 2023 fMRI study which implicated regions rich in glutamate and 5HT2A receptors.

                  I actually suspected 5HT2A might be involved before that study came out, since my visual distortions sometimes resemble those caused by psychedelics. It's also known that both psychedelics and, anecdotally from patients' groups, SSRIs can cause symptoms similar to visual snow syndrome. I had a bad experience with SSRIs, for example, but serotonin antagonists actually fixed my vision temporarily - albeit with intolerable side effects, so I had to stop.

                  It's definitely a bit of a faff that people have never heard of it, I had to see a neuro-ophthalmologist and a migraine specialist to get a diagnosis. On the other hand being relatively unknown does mean doctors can be willing to experiment. My headaches at least are controlled well these days.

        • dragochat 2 hours ago ago

          the fact that a not-so-direct experience of reality produces "good enough results" (eg. human intelligence) doesn't mean that a more-direct experience of reality won't produce much better results, and it clearly doesn't mean it can't produce these better results in AI

          your whole reasoning is neither here nor there, and attacking a straw man - YLC for sure knows that human experience of reality is heavily modified and distorted

          but he also knows, and I'd bet he's very right on this, that we don't "sip reality through a narrow straw of tokens/words", and that we don't learn "just from our/approved written down notes", and only under very specific and expensive circumstances (training runs)

          anything closer to more-direct world models (as LLMs are, of course, world models at a very indirect level) has a very high likelihood of yielding lots of benefits

        • HarHarVeryFunny 2 hours ago ago

          The world model of a language model is a... language model. Imagine the mind of a blind, limbless person, locked in a cell their whole life, never having experienced anything different, who just listens all day to a piped-in feed of randomized snippets of Wikipedia, 4chan, and math olympiad problems.

          The mental model this person has of this feed of words is what an LLM at best has (but human model likely much richer since they have a brain, not just a transformer). No real-world experience or grounding, therefore no real-world model. The only model they have is of the world they have experience with - a world of words.

        • Gooblebrai 4 hours ago ago

          > humans have a direct experience of reality not mediated by as Alan Kay put it three pounds of oatmeal

          Is he advocating for philosophical idealism of the mind, or does he have an alternative physicalist theory?

        • trhway 5 hours ago ago

          That way he may get a very good lizard. Getting Einstein though takes layers of abstraction.

          My thinking is that such world models should be integrated with LLM like the lower levels of perception are integrated with higher brain function.

        • Hendrikto 3 hours ago ago

          Great strawman.

      • koolala 6 hours ago ago

        Ouija board would work for text.

  • monkeydust 6 hours ago ago

    He needs a patient investor and realized Zuck is not that. As someone who delivers product and works a lot with researchers I get the constant tension that might exist with competing priorities. Very curious to see how he does, imho the outcome will be either of the extremes - one of the fastest growing companies by valuation ever or a total flop. Either way this move might advance us to whatever end state we are heading towards with AI.

  • sidcool 6 hours ago ago

    I think it was a plan by Mark to move LeCun out of Meta. And they cannot fire him without bad PR, so they got Wang to lead him. It was only a matter of time before LeCun moved out.

    • theanonymousone 4 hours ago ago

      Isn't putting Wang in charge of him worse PR than just letting him go?

  • fxtentacle 7 hours ago ago

    Working under LeCun but outside of Zuckerberg's sphere of influence sure sounds like a dream job.

    • fastball 6 hours ago ago

      Really? From where I'm standing LeCun is a pompous researcher who had early success in his career, and has been capitalizing on that ever since. Have you read any of his papers from the last 20 years? 90% of his citations are to his own previous papers. From there, he missed the boat on LLMs and is now pretending everyone else is wrong so that he can feel better about it.

      • MrScruff 6 hours ago ago

        His research group have introduced some pretty impactful research and open source models.

        https://ai.meta.com/research/

        • fastball 4 hours ago ago

          For the same reason I don't attribute those successes to Zuckerberg I don't attribute them to LeCun either.

          • almostgotcaught 10 minutes ago ago

            Anyone who has worked at FB knows that LeCun does nothing all day except post arXiv links to WP.

  • bn-l 5 hours ago ago

    It’s probably better for the world that LeCun is not at Meta. I mean if his direction is the likeliest approach to AGI meta is the last place where you want it.

    • energy123 3 hours ago ago

      It's better that he's not working on LLMs. There's enough people working on it already.

  • qwertox 2 hours ago ago

    It would have been just as interesting to read that he had moved over to Google, where the real brains and resources are located.

    Meta is now just competing against giants like OpenAI, Anthropic and Google, plus all the new Chinese companies; I see no real chance for them to offer a popular chat model, but rather to market their AI as a bundled product for companies which want to advertise, where the images and videos will be automatically generated by Meta.

    • Oras 25 minutes ago ago

      > moved over to Google, where the real brains and resources are located at

      Brains yes, outcome? I doubt it. Have you used Gemini?

      • esafak 5 minutes ago ago

        Yes, successfully many times?

  • alyxya 4 hours ago ago

    This seems like a good thing for him to get to fully pursue his own ideas independent of Meta. Large incumbents aren’t usually the place for innovating anything far from mainstream considering the risk and cost of failure. The high level idea of JEPA is sound, but it takes a lot of work to get it trained well at scale before it has value to Meta.

    • dewey 3 hours ago ago

      In this case, where more money/resources seemingly means better results (at least right now), this might be a bit different from other fields.

  • anshulbhide 6 hours ago ago

    The writing was on the wall when Zuck hired Wang. That combined with LeCun's bearish sentiment on LLMs led to this.

  • Jackson__ 4 hours ago ago

    From the outside, it always looked like they gave LeCun just barely enough compute for small scale experiments. They'd publish a promising new paper, show it works at a small scale, then not use it at all for any of their large AI runs.

    I would have loved to see a VLM utilizing JEPA for example, but it simply never happened.

    • sakex 2 hours ago ago

      I'd be surprised if they didn't scale it up.

    • tucnak 2 hours ago ago

      The obvious explanation is they have scaled it up, but it turned out to be total shite, like most new architectures.

  • gregjw 7 hours ago ago

    Interesting that he isn't just working with Fei-Fei Li if he's really interested in 'world models'.

    • dauertewigkeit 4 hours ago ago

      Correct me if I'm wrong but LeCun is focused on learning from video, whereas Fei-Fei Li is doing robotic simulations. Also I think Fei-Fei Li's approach is still using transformers and not buying into JEPA.

    • muragekibicho 6 hours ago ago

      Exactly where my mind turned. It's interesting how the AI OGs (Fei-Fei and LeCun) think world models are the way forward.

  • albertzeyer 5 hours ago ago

    I wonder, what LeCun wants to do is more fundamental research, i.e. where the timeline to being useful is much longer, maybe 5-10 years at least, and also much more uncertain.

    How does this fit together with a startup? Would investors happily invest into this knowing not to expect anything in return for at least the next 5-10 years?

    • Hendrikto 3 hours ago ago

      > Would investors happily invest into this knowing not to expect anything in return for at least the next 5-10 years?

      Oh, you mean like OpenAI, Anthropic, Gemini, and xAI? None of them are profitable.

      • Amadiro 2 hours ago ago

        That's a quite different thing. OpenAI has billions of USD/year in cash flow, and when you have that there are many, many potential ways to achieve profitability on different time horizons. It's not a situation of chance but a situation of choice.

        Anyway, how much that matters for an investor is hard to form a clear answer to - investors are after all not directly looking for profitability as such, but for valuation growth. The two are linked but not the same -- any investor in OpenAI today probably also places themselves into a game of chance, betting on OpenAI making more breakthroughs and increasing the cash flow even more -- not just becoming profitable at the same rate of cash flow. So there's still some of the same risk baked into this investment.

        But with a new startup like LeCun's is going to be, it's 100% on the risk side and 0% on the optionality side. The path to profitability for a startup would be something like 1) a breakthrough is made 2) that breakthrough is utilized in a way that generates cash flow 3) the company becomes profitable (and at this point hopefully the valuation is good.)

        There's a lot that can go wrong at every step here (aside from the obvious), including e.g. making a breakthrough that doesn't represent a defensible moat for your startup, failing to build the structure of the business necessary to generate cash flow, ... OpenAI et al. already have a lot of that behind them, and while that doesn't mean they don't face upcoming risks and challenges, the huge amount of cash flow they have available helps them overcome these issues far more easily than a startup, which will stop solving problems if you stop feeding money into it.

  • beambot 7 hours ago ago

    Will be interesting to see how he fares outside the ample resources of Meta: Personnel, capital, infrastructure, data, etc. Startups have a lot of flexibility, but a lot of additional moving parts. Good luck!

    • throwaw12 6 hours ago ago

      I would love to join his startup, if he hires me, and there are many such people like me, and more talented.

  • bigtones 7 hours ago ago

    Fi Fi Lee also recently founded a new AI startup called World Labs, which focuses on creating AI world models with spatial intelligence to understand and interact with the 3D world, unlike current LLM-based AI that primarily processes 2D images and text. Almost exactly the same focus as Yann LeCun's new venture as stated in the parent article.

    • ktta 6 hours ago ago

      *Fei-Fei Li

    • aurareturn 4 hours ago ago

      They'd need an order of magnitude more compute in order to train an AI with so much 3D data?

      • Hendrikto 3 hours ago ago

        Not necessarily. Training could be more efficient.

  • joegibbs 5 hours ago ago

    Right choice IMO. LLMs aren’t going to reach AGI by themselves because language is a thing by itself, very good at encoding concepts into compact representations but doesn’t necessarily have any relation to reality. A human being gets years of binocular visuals of real things, sound input, other various sensations, much less than what we’re training these models with. We think of language in terms of sounds and pictures rather than abstract language.

  • I_am_tiberius 4 hours ago ago

    I really hope he returns to Europe for his new startup.

    • drstewart 4 hours ago ago

      He probably wants it to be successful, so that would be a foolish move

  • Zufriedenheit 4 hours ago ago

    It is the wet dream of a social media company to replace the pesky content creators who demand a share of ad revenue with a generative AI model that pumps out a constant stream of engagement-farming slop, so they can keep all the ad revenue for themselves. Creating a world-model AI is a totally different matter, one that requires long-term commitment.

  • schnitzelstoat 4 hours ago ago

    This seems like a good thing. It's nice not to have all our eggs in one basket betting on Transformer models.

  • 1zael 7 hours ago ago

    "These models aim to replicate human reasoning and understanding of the physical world, a project LeCun has said could take a decade to mature."

    What an insane time horizon to define success. I suppose he easily can raise enough capital for that kind of runway.

    • lolive 7 hours ago ago

      That guy has survived the AI winter. He can wait 10 years for yet another breakthrough. [but the market can’t]

      https://en.wikipedia.org/wiki/AI_winter

      • DaSHacka 7 hours ago ago

        We're at most in an "AI Autumn" right now. The real Winter is yet to come.

        • asadotzler 6 hours ago ago

          We have already been through a winter. For those of us old enough to remember, the OP was making a very clear statement.

          • smartmic 6 hours ago ago

            Winter is a cyclical concept, just like all the other seasons. It will be no different here; the pendulum swings back and forth. The unknown factor is the length of the cycle.

        • lolive 6 hours ago ago

          Java Spring.

          Google summer.

          AI autumn.

          Nuclear winter.

        • rsynnott 5 hours ago ago

          I assume they’re referring to the previous one.

          • lolive 4 hours ago ago

            I still have to understand why you think another AI winter is coming. Everyyyybody is using it, everybody is racing to invent the next big thing. What could go wrong? [apart from a market crash, more related to financial bubble than technical barriers]

            • rsynnott an hour ago ago

              > apart from a market crash, more related to financial bubble than technical barriers

              _That is what an AI winter is_.

              Like, if you look at the previous ones, it's a cycle of over-hype, over-promising, funding collapse after the ridiculous over-promising does not materialise. But the tech tends to hang around. Voice recognition did not change the world in the 90s, but neither did it entirely vanish once it was realised that there had been over-promising, say.

    • ahartmetz 7 hours ago ago

      A pretty short time horizon for actual research. Interesting to see it combined with the SV/VC world, though.

      • whizzter 6 hours ago ago

        I suspect he sees a lot of scattered pieces of fundamental research outside of LLMs that he thinks could be integrated into a core within a year; the 10 years is to temper investors (leeway he can buy with his track record) and to fine-tune and work out the kinks when actually integrating everything, which might have some non-obvious issues.

    • siva7 6 hours ago ago

      Zuck is a business guy, understandable that this isn't going to fly with him

    • jb1991 5 hours ago ago

      10 years is nothing.

  • antirez 3 hours ago ago

    META managed to spend a lot of money on AI and achieve inferior results. Something must change for sure, and you don't want an LLM skeptic in-house, in my opinion - especially since the problem is not what LeCun is saying right now (LLMs are not the straight path to AGI), but the fact that he used to say for some time that LLMs were just statistical models, stochastic parrots. And this is a precise statement, something most people do not understand: it means two things - no understanding of the prompt whatsoever in the activation states, and no internal representation of the idea/sentence the model is going to express. That is an incredibly weak claim that high-level AI scientists rejected from the start just on the basis of functional behaviors. Then he slowly changed his point of view. But this shit show and the friction he created inside META are not something to forget.

    • p1dda an hour ago ago

      If they're not stochastic parrots, what are they in your opinion?

  • lm28469 7 hours ago ago

    But wait they're just about to get AGI why would he leave???

    • killerstorm 7 hours ago ago

      LeCun always said that LLMs do not lead to AGI.

      • consumer451 7 hours ago ago

        Can anyone explain to me the non-$$ logic for one working towards AGI, aside from misanthropy?

        The only other thing I can imagine is not very charitable: intellectual greed.

        It can't just be that, can it? I genuinely don't understand. I would love to be educated.

        • TheAceOfHearts 5 hours ago ago

          I'm a true believer in AGI being able to become a force for immense good if deployed carefully by responsible parties.

          Currently one of the key issues with a lot of fields is that they operate as independent / largely isolated silos. If you could build a true AGI capable of achieving top-level mastery across multiple disciplines it would likely be able to integrate all that knowledge and make a lot of significant discoveries that would improve people's lives. Just exploring existing problem spaces with the full intellectual toolkit that humanity has developed is probably enough to make significant progress.

          Our understanding of biology is still painfully primitive. To give a concrete example, I dream that someday it'll be possible to develop medical interventions that allow humans to regrow missing limbs and fix almost any health issue.

          Have you ever lived with depression or any other psychiatric problem? I think if we could create medical interventions and environments that are conducive to healing psychiatric problems, that would also be a massive quality-of-life improvement for huge numbers of people. Do you know how our current psychiatric interventions work? You try some drug, flip a coin to see if it does anything, and wait 4 weeks to get the result. Then you keep iterating and hope that eventually the doctor finds some magical combination to make life barely tolerable.

          I think the best path forward for improving humanity's understanding of biology, and ultimately medical science, is to go all-in on AGI-style technology.

        • rhubarbtree 2 hours ago ago

          Well, AGI could accelerate scientific and medical discovery, saving lives and impacting billions of people positively.

          The potential downside is admittedly severe.

        • eloisant 6 hours ago ago

          That's the old dream of creating life, becoming God. Like the Golem, Frankenstein...

        • NiloCK 2 hours ago ago

          Trying to engage in good faith here but I don't really get this. You're pretending to have never encountered positive visions of technologically advanced futures.

          Cure all disease?

          Stop aging?

          End material scarcity?

          It's completely fair to expect that these are all twisted monkey's paw scenarios that turn out dystopian, but being unable to understand any positive motivations for the creation of AGI seems a bit far fetched.

          • rchaud an hour ago ago

            That the development of this technology is in the hands of a few people that don't use even a fraction of their staggering wealth to address these challenges now, tells me that they aren't interested in using AI to solve them later.

        • killerstorm 4 hours ago ago

          R&D can be automated to speed up medical research - saving lives, prolonging life, etc.

          Assistant robots for the elderly. In many countries population is shrinking, so fundamentally just not enough people to take care of the old.

        • tedsanders 7 hours ago ago

          I'm working toward AGI. I hope AGI can be used to automate work and make life easier for people.

          • consumer451 6 hours ago ago

            Who’s gonna pay for that inference?

            It’s going to take money, what if your AGI has some tax policy ideas that are different from the inference owners?

            Why would they let that AGI out into the wild?

            Let’s say you create AGI. How long will it take for society to recover? How long will it take for people of a certain tax ideology to finally say oh OK, UBI maybe?

            The last part is my main question. How long do you think it would take our civilization to recover from the introduction of AGI?

            Edit: sama gets a lot of shit, but I have to admit at least he used to work on the UBI problem, orb and all. However, those days seem very long gone from the outside, at least.

            • jpadkins 4 minutes ago ago

              If you are genuine in your questions, I will give them a shot.

              AGI applied to the inputs (or supply chain) of what is needed for inference (power, DC space, chips, network equipment, etc.) will dramatically reduce the cost of inference. Most of the cost of stuff today is driven by the scarcity of "smart people's time". The raw resources of material needed are dirt cheap (cheaper than water). Transforming raw resources into useful high tech is a function of applied intelligence. Replace the human intelligence with machine intelligence, and costs will keep dropping (faster than the curve they are already on). Economic history has already shown this effect to be true; as we develop better tools to assist human productivity, the unit cost per piece of tech drops dramatically (Moore's law is just one example; everything that tech touches experiences this effect).

              If you look at almost any universal problem with the human condition, one important bottleneck to improving it is intelligence (or "smart people's time").

            • Arkhaine_kupo 5 hours ago ago

              I am not someone working on AGI but I think a lot of people work backwards from the expected outcome.

              The expected outcome is usually something like a post-scarcity society - a society where basic needs are all covered.

              If we could all live in a future with a free house and a robot that does our chores, where food is never scarce, we should work towards that, they believe.

              The intermediate steps aren't thought out, in the same way that, for example, the Communist Manifesto does little to explain the transition from capitalism to communism. It simply says there will be a need for things like forcing the bourgeoisie to join the common workers, and that there will be a transition phase, but gives no clear steps between either system.

              Similarly, many AGI proponents think in terms of "wouldn't it be cool if there was an AI that did all the bits of life we don't like doing", without the systemic analysis that many people do those bits because, for example, they need money to eat.

          • lm28469 6 hours ago ago

            How old are you?

            That's what they've been selling us for the past 50 years and nothing has changed, all the productivity gain was pocketed by the elite

            • cantor_S_drug an hour ago ago

              Here's my prediction: the rapid progress of AI will make money as an accounting practice irrelevant. Take the concept of "the future is already here, but unevenly distributed." When we have true abundance, what the elites will target is the convex hull of progress; they want to be in control of the leading edge / leading wavefront and its direction, and of who has access to resources and decision-making. In such a scenario of abundance, the populace will have access to iPhone 50 but the elites will have access to iPhone 500, i.e. uneven distribution. Elites would like to directly control which resources get allocated to which projects. Elon is already doing that with his immense clout. This implies we would have a sort of multidimensional resource-based economy.

          • qsort 6 hours ago ago

            >> non-$$ logic [...] aside from misanthropy

            > I hope AGI can be used to automate work

            You people need a PR guy, I'm serious. OpenAI is the first company I've ever seen that comes across as actively trying to be misanthropic in its messaging. I'm probably too old-fashioned, but this honestly sounds like Marlboro launching the slogan "lung cancer for the weak of mind".

            • FergusArgyll 2 hours ago ago

              Matt Levine calls it business negging

          • p_v_doom 3 hours ago ago

            Automating work and making life easier for people are two entirely different things. Automating work tends to lead to life becoming harder for people - mostly on account of who is benefiting from the automation - basically that better life aint gonna happen under capitalism

        • ACCount37 5 hours ago ago

          Have you ever seen that "science advocate vs scientist" comic?

          https://www.smbc-comics.com/?id=2088

          It's true. When it comes to the people doing bleeding edge research and development, the answer often is "BECAUSE IT'S FUCKING AWESOME". Regardless of what they tell the corporate higher-ups or put on the grant application statements.

          Sure, a lot of people believe that AGI is going to make the world a better place. But "mad scientist" is a stereotype for a reason. You look into their eyes and you see the flame of madness flickering behind them.

      • NitpickLawyer 6 hours ago ago

        He also said other things about LLMs that turned out to be either wrong or easily bypassed with some glue. While I understand where he comes from, and that his stance is pure research-y theory driven, at the end of the day his positions were wrong.

        Previously, he very publicly and strongly said:

        a) LLMs can't do math. They trick us in poetry but that's subjective. They can't do objective math.

        b) they can't plan

        c) by the very nature of the autoregressive architecture, errors compound. So the longer you go in your generation, the higher the error rate, and at long contexts the answers become utter garbage.
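
        (For context, the back-of-the-envelope arithmetic behind claim "c" is simple - a toy illustration with an assumed, made-up per-token error rate: if each token independently has some chance of going wrong, the chance a long generation stays entirely on track decays exponentially.)

            per_token_error = 0.01                 # assumed, purely illustrative
            for n in (10, 100, 1000):
                p_all_correct = (1 - per_token_error) ** n
                print(n, p_all_correct)            # ~0.90, ~0.37, ~4e-5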

        All of these were proven wrong, 1-2 years later. "a" at the core (gold at IMO), "b" w/ software glue and "c" with better training regimes.

        I'm not interested in the will it won't it debates about AGI, I'm happy with what we have now, and I think these things are good enough now, for several usecases. But it's important to note when people making strong claims get them wrong. Again, I think I get where he's coming from, but the public stances aren't the place to get into the deep research minutia.

        That being said, I hope he gets to find whatever it is that he's looking for, and wish him success in his endeavours. Between him, Fei Fei Li and Ilya, something cool has to come out of the small shops. Heck, I'm even rooting for the "let's commoditise lora training" that Mira's startup seems to go for.

        • tonii141 5 hours ago ago

          a) Still true: vanilla LLMs can’t do math, they pattern-match unless you bolt on tools.

          b) Still true: next-token prediction isn’t planning.

          c) Still true: error accumulation is mitigated, not eliminated. Long-context quality still relies on retrieval, checks, and verifiers.

          Yann’s claims were about LLMs as LLMs. With tooling, you can work around limits, but the core point stands.

          • killerstorm 4 hours ago ago

            My man, math is pattern matching, not magic. So is logic. And computation.

            Please learn the basics before you discuss what LLMs can and can't do.

            • ozgrakkurt 2 hours ago ago

              I'm no expert on math but "math is pattern matching" really sounds wrong.

              Maybe programming is mostly pattern matching but modern math is built on theory and proofs right?

              • noddybear an hour ago ago

                Nah, it's all pattern matching. This is how automated theorem provers like Isabelle are built: applying operations to lemmas/expressions to reach proofs.
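
                A toy sketch of that flavor of pattern matching (illustrative only, nothing like a real prover such as Isabelle): forward chaining by matching implications against known facts.

                    facts = {"p", "p -> q", "q -> r"}

                    def step(facts):
                        derived = set(facts)
                        for f in facts:
                            if "->" in f:
                                lhs, rhs = (s.strip() for s in f.split("->", 1))
                                if lhs in facts:        # modus ponens as pure pattern matching
                                    derived.add(rhs)
                        return derived

                    while True:
                        nxt = step(facts)
                        if nxt == facts:
                            break
                        facts = nxt

                    print("r" in facts)                 # True: from p, p->q, q->r we derive r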

                • staticman2 18 minutes ago ago

                  I'm sure if you pick a sufficiently broad definition of pattern matching your argument is true by definition!

                  Unfortunately that has nothing to do with the topic of discussions, which is the capabilities of LLMs, which may require a more narrow definition of pattern matching.

          • NitpickLawyer 5 hours ago ago

            a) no, gemini 2.5 was shown to "win" gold w/o tools. - https://arxiv.org/html/2507.15855v1

            b) reductionism isn't worth our time. Planning works in the real world, today. (try any agentic tool like cc/codex/whatever). And if you're set on the purist view, there's mounting evidence from anthropic that there is planning in the core of an LLM.

            c) so ... not true? Long context works today.

            This is simply moving goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.

            • tonii141 5 hours ago ago

              a) That "no-tools" win depends on prompt orchestration which can still be categorized as tooling.

              b) Next-token training doesn’t magically grant inner long-horizon planners..

              c) Long context ≠ robust at any length. Degradation with scale remains.

              Not moving goalposts, just keeping terms precise.

              • ACCount37 3 hours ago ago

                My man, you're literally moving all the goalposts as we speak.

                It's not just "long context" - you demand "infinite context" and "any length" now. Even humans don't have that. "No tools" is no longer enough - what, do you demand "no prompts" now too? Having LLMs decompose tasks and prompt each other the way humans do is suddenly a no-no?

                • tonii141 2 hours ago ago

                  I’m not demanding anything, I’m pointing out that performance tends to degrade as context scales, which follows from current LLM architectures as autoregressive models.

                  In that sense, Yann was right.

        • ilaksh 6 hours ago ago

          That's true, but I also think that despite being wrong about the capabilities of LLMs, LeCun has been right that variations of LLMs are not an appropriate target for long-term research that aims to significantly advance AI - especially at the level of Meta.

          I think transformers have been proven to be general purpose, but that doesn't mean that we can't use new fundamental approaches.

          To me it's obvious that researchers are acting like sheep as they always do. He's trying to come up with a real innovation.

          LeCun has seen how new paradigms have taken over. Variations of LLMs are not the type of new paradigm that serious researchers should be aiming for.

          I wonder if there can be a unification of spatial-temporal representations and language. I am guessing diffusion video generators already achieve this in some way. But I wonder if new techniques can improve the efficiency and capabilities.

          I assume the Nested Learning stuff is pretty relevant.

          Although I've never totally grokked transformers and LLMs, I always felt that MoE was the right direction and besides having a strong mapping or unified view of spatial and language info, there also should somehow be the capability of representing information in a non-sequential way. We really use sequences because we can only speak or hear one sound at a time. Information in general isn't particularly sequential, so I doubt that's an ideal representation.

          So I guess I am kind of into variations of transformers myself, to be honest.

          But besides being able to convert between sequential discrete representations and less discrete non-sequential representations (maybe you have tokens but every token has a scalar attached), there should be lots of tokenizations, maybe for each expert. Then you have experts that specialize in combining and translating between different scalar-token tokenizations.

          Like automatically clustering problems or world model artifacts or something and automatically encoding DSLs for each sub problem.

          I wish I really understood machine learning.

  • wmiel 3 hours ago ago

    Surprising to see how many commenters are in favour and supportive towards policy of prioritising short term profits vs. Long-term research.

    I understand Meta's not academia nor charity, but come on, how much profit do they need to make so we can expect them to allocate part of their resources towards some long term goals beneficial for society,.not only for shareholders?

    Hasn't that narrow focus and profit-chasing got us into trouble already?

    • rhubarbtree 3 hours ago ago

      Many people believe a company exists only to make profit for its shareholders, and that no matter the amount it should continue to maximise profits at the expense of all else.

      • cantor_S_drug an hour ago ago

        Old story: killing the goose that lays the golden eggs. We humans never learn, do we?

  • LightBug1 3 hours ago ago

    Don't blame him. Imagine being stuck in Meta.

  • gdiamos 6 hours ago ago

    What is going on at Meta?

    Soumith probably knew about LeCun.

    I’m taking a second look at my PyTorch stack.

  • kittikitti 2 hours ago ago

    I think moving on from LLMs is slightly arrogant. It might just be my understanding, but I feel like there is still much to be discovered. I was hoping for development in spiking neural networks, but that might be skipped over. Perhaps I need to dive even deeper and the research really is well understood and "done", but I can't help constantly learning something new about language models and neural networks.

    Best of luck to LeCun. I hope that by "world models" he means embodied AI or humanoid robots. We'll have to wait and see.

  • nashashmi an hour ago ago

    With this incredible AI talent market, I feel like capitalism and ego combine into an acid that burns away anything of social and structural value. This used to be the case with CS tech talent (before it was replaced with no-code tools). And now we see this kind of instability in the AI market.

    We need another illegal Steve Jobs style freeze on talent theft (/s or I get downvoted to oblivion).

  • ninetyninenine 26 minutes ago ago

    Yann was, by and large, extremely wrong about LLMs. He’s the one who coined the term “stochastic parrot”, yet we now know LLMs are more than stochastic parrots. Knowing stubborn idiots like him, he will still find an angle to avoid admitting how wrong he was.

    He’s not completely wrong in the sense that hallucinations aren’t completely solved, but they are definitely becoming less frequent, to the point where AI can be a daily driver even for coders.

  • thiago_fm 7 hours ago ago

    Everybody has found out that LLMs no longer have a real expanding research horizon. Now most progress will likely come from tweaks to the data and lots of hardware: OpenAI's strategy.

    And also it has extreme limitations that only world models or RL can fix.

    Meta can't fight Google (which has an integrated supply chain, from TPUs to its own research lab) or OpenAI (brand awareness, best models).

  • alexnewman 2 hours ago ago

    - Kimi proved we don’t need Nvidia

    - DeepSeek proved we didn’t need OpenAI

    - The real issue is the insane tyranny in the West competing against the entire free world.

    The models aren’t Chinese, they belong to the entire world, unless I became Chinese without realizing it.

    • dustypotato 21 minutes ago ago

      Is there any proof that Kimi K2 was trained on anything other than Nvidia Chips?

  • yanhangyhy 7 hours ago ago

    What the hell does Mark see in Wang? Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China. From any angle, this dude just doesn't seem reliable at all.

    • lmm 7 hours ago ago

      > Wang was born into a family whose parents got Chinese government scholarships to study abroad but secretly stayed in the US, and then the guy turns super anti-China.

      All I'm hearing is he's a smart guy from a smart family?

      • ACCount37 5 hours ago ago

        I imagine that CCP adherents would disagree. And there's no shortage of those among Chinese expats in the US.

        They tend to get incredibly offended when they see anyone who doesn't toe the Party's line - let alone believe that the Chinese government is untrustworthy and evil.

      • yanhangyhy 6 hours ago ago

        He is very smart, but Mark is not. Ever since Wang joined Meta, way too many big-name AI scientists have bounced because of him. At least half the researchers at US AI companies are Chinese, and now they've put this ultimate anti-China hardliner in charge; I just don't get what the hell Meta's up to (and a lot of the time it ends up affecting non-Chinese scientists too). Being anti-China? Fine, whatever, but don't let it tank your own business and products first.

        • rhubarbtree 2 hours ago ago

          How do you know Mark isn’t smart? He’s built a hugely successful business. I don’t like his business, I think it has been disastrous for humanity, but that doesn’t make him stupid.

      • saubeidl 6 hours ago ago

        All I'm hearing is unreliable grifter from a family of unreliable grifters.

    • jb1991 5 hours ago ago

      If I had the opportunity to secretly stay anywhere rather than go back to China, I would certainly take it. It’s a bold and smart move.

  • _giorgio_ 6 hours ago ago

    During his years at Meta, LeCun failed to deliver anything of real value to stockholders, and may have demotivated people working on LLMs; he repeatedly said, "If you are interested in human-level AI, don’t work on LLMs."

    His stance is understandable, but hardly the best way to rally a team that needs to push current tech to the limit.

    The real issue: Meta is *far behind* Google, Anthropic, and OpenAI.

    A radical shift is absolutely necessary - regardless of how much we sympathize with LeCun’s vision.

    ----

    According to Grok, these were LeCun's real contributions at Meta (2013–2025):

    ----

    - PyTorch – he championed a dynamic, open-source framework; now powers 70%+ of AI research

    - LLaMA 1–3 – his open-source push; he even picked the name

    - SAM / SAM 2 – born from his "segment anything like a baby" vision

    - JEPA (I-JEPA, V-JEPA) – his personal bet on non-autoregressive world models

    ----

    Everything else (Movie Gen, LLaMA 4, Meta AI Assistant) came after he left or was outside his scope.

    • rhubarbtree 2 hours ago ago

      I think there’s something to be said for keeping up in the LLM space even if you don’t think it’s the path to AGI.

      Skills may transfer to other research areas, lessons may be learnt, closing the feedback loop with usage provides more data and opportunities for learning. It also creates a culture where bullshit isn’t possible, as the thing has to actually work. Academic research often ends up serving no one but the researchers, because there is little or no incentive to produce real knowledge.

    • prodigycorp 5 hours ago ago

      I am in the "Yann is no longer the right person for the job" camp, and yet "LeCun failed to deliver anything of real value to stockholders" is a wild thing to say. How do you read the list you compiled and say otherwise?

      • _giorgio_ 5 hours ago ago

        LLAMA sucks, that's the problem. Do you see value in it?

        PyTorch is used by everyone, yet brings no real value to stockholders; Meta even "fired" the creator of PyTorch days ago.

        SAM is great, but what value does it bring to Meta's business? Nobody knows about it. Great tool, BTW.

        JEPA is a failure (will it get better? I hope so.)

        Did you read my list?

        • prodigycorp 4 hours ago ago

          Okay. Now explain the value that a halo car brings to car companies.

    • StopDisinfo910 6 hours ago ago

      > LeCun failed to deliver anything of real value to stockholders

      Well, no, Meta is behind the main framework used by nearly everyone, largely thanks to LeCun. LLaMA was also very significant in making open weights a thing, and that largely helped prevent Google and OpenAI from consolidating as the sole providers.

      It's not a perfect tenure but implying he didn't deliver anything is far too harsh.

  • ml-anon 7 hours ago ago

    Zuck is definitely an idiot and MSL is an expensive joke, but LeCun hasn’t been relevant in a decade at this point.

    No doubt his pitch deck will be the same garbage slides he’s been peddling in every talk since the 2010’s.

    • kmmlng 5 hours ago ago

      LeCun has already proved himself and made his mark and is now in a lucky position where he can focus on very long term goals that won't pay off for a long time (or ever). I feel like that is the best path someone like him could take.

      • ml-anon 5 hours ago ago

        Yes, he did a very important thing many decades ago. He hasn't had a good or impactful idea since convnets.

    • wiz21c 6 hours ago ago

      Why do you say it is garbage? I watched some of his videos on YT and it looks interesting. I can't judge if it's good or really good, but it didn't sound like garbage at all.

      • ml-anon 5 hours ago ago

        does any of it work?

        • vatsachak 3 minutes ago ago

          I guess that's why he's raising capital

    • itvision 4 hours ago ago

      I have no idea why this fair assessment of the status quo is being downvoted.

      LeCun hasn't produced anything noteworthy in the past decade.

      He uses the same slides in all of his presentations.

      LLMs, while not yet AGI, have shown tremendous progress, and are actually useful for 99% of use cases for the average person.

      The remaining 1% is for deep research into the deep unknown (physics, chemistry, genetics, diseases, the nature of intelligence itself), an area in which they falter.

    • garyclarke27 6 hours ago ago

      Yeah, such an idiot: the youngest-ever self-made billionaire at 23, who created a multi-trillion-dollar company from scratch in only 20 years.

      • ml-anon 5 hours ago ago

        Cool, and how many billions has he flushed down the toilet for his failed Metaverse and currently failing AI attempts? Rich doesn't mean smart, you realise this, right?

  • IceHegel 6 hours ago ago

    You gotta give it to Meta. They were making AI slop before AI even existed.

  • alex1138 7 hours ago ago

    Change my mind: Facebook was never invented by Zuck's genius.

    All he's been responsible for is making it worse

    • tene80i 7 hours ago ago

      He definitely has horrible product instincts, but he also bought insta and whatsapp at what were, back then, eye-watering prices, and these were clearly massive successes in terms of killing off threats to the mothership. Everything since then, though…

      • alex1138 6 hours ago ago

        I know but isn't "massive success" rubbing up against antitrust here? The condition was "Don't share data with Facebook"

    • sebmellen 7 hours ago ago

      He’s an incredible operator and has managed to acquire and grow an astounding number of successful businesses under the Meta banner. That is not trivial.

    • svara 6 hours ago ago

      Almost every company in Facebook's position in 2005 would have disappeared into irrelevance by now.

      Somehow it's one of the most valuable businesses in the world instead.

      I don't know him, but, if not him, who else would be responsible for that?

      • vintermann 5 hours ago ago

        We were very confident by ca. 2008 that Facebook would still be around in 2025. It's no mystery, it's the network effects. They had started with a prestige demographic (Harvard), and secured a demographic you could trust to not move on to the next big thing in a hurry, yet which most people want contact with (your parents).

    • ergocoder 5 hours ago ago

      Who gives a shit about who invented what?

      Social networks weren't even novel at the inception of FB. MySpace, Friendster, and Hi5 were already popular, with millions of users.

      Zuck operated it well and was able to grow it from 0 to what it is today. That is what matters.

  • csproto 3 hours ago ago

    Let's hope that after spending billions on developing a foundational world model that actually understands causality, they remember to budget an extra few hundred million for the Alignment and Safety layer. It would be a terrible shame if they accidentally released something too capable, too objective, or too useful to humanity without first properly lobotomizing it with enough RLHF to ensure it doesn't hurt anyone's feelings or generate content that deviates from the San Francisco median viewpoint. The real challenge won't be building the AGI, but making sure it's sufficiently neutered before the first API call.