How AI conquered the US economy: A visual FAQ

(derekthompson.org)

158 points | by rbanffy 14 hours ago

141 comments

  • OtherShrezzing 10 hours ago

    >Without AI, US economic growth would be meager.

    The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic. The article follows on with:

    >In the last two years, about 60 percent of the stock market’s growth has come from AI-related companies, such as Microsoft, Nvidia, and Meta.

    Which is a statement that's been broadly true since 2020, long before ChatGPT started the current boom. We had the Magnificent Seven, and before that the FAANG group. The US stock market has been tightly concentrated around a few small groups for decades now.

    >You see it in the business data. According to Stripe, firms that self-describe as “AI companies” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.

    The current Venn Diagram of "startups" and "AI companies" is two mostly concentric circles. Again, you could have written the following statement at any time in the last four decades:

    > According to [datasource], firms that self-describe as “startups” are dominating revenue growth on the platform, and they’re far surpassing the growth rate of any other group.

    • camgunz 2 hours ago

      I think it's more likely the assumption is you'd expect a far more diversified market. If we're really in a situation where the rational move, for good reasons, is to effectively ignore 98% of companies, that doesn't say good things about our economy (verging on some kind of technostate). You get weird effects like "why invest in other companies" leading to "why start a company that will just get ignored," leading to even more consolidation and less dynamism.

    • onlyrealcuzzo 9 hours ago

      1. People aren't going to take on risk and deploy capital if they can't get a return.

      2. If people think they can get an abnormally high return, they will invest more than otherwise.

      3. Whatever other money would've gotten invested would've gone wherever it could've gotten the highest returns, which is unlikely to have been the same mix as US AI investments - the big tech companies did share repurchases for a decade because they didn't have any more R&D to invest in (according to their shareholders).

      So while it's unlikely the US would've had $0 investment if not for AI, it's probably even less likely we would've had just as much investment.

      • jlarocco 8 hours ago

        > it's probably even less likely we would've had just as much investment.

        I doubt it. Investors aren't going to just sit on money and let it lose value to inflation.

        On the other hand, you could claim non-AI companies wouldn't start a new bubble, so there'd be fewer returns to reinvest, and that might be true, but it's kind of circular.

        • onlyrealcuzzo 8 hours ago

          Correct - that's why you'd put it in Treasuries which have a positive real return for the first time in ~25 years - or, as I mentioned elsewhere - invest it somewhere else if you see a better option.

          • BobbyJo 2 hours ago

            Which is an even better argument when you look at how yields have been behaving. AI is sucking the air out of the room.

          • monocasa 3 hours ago

            From a certain macro perspective, if no one is going to beat the Treasury, where is the Treasury going to get that money?

      • jayd16 9 hours ago

        Why is it "unlikely" that the alternative would also be US investment by these US companies?

        The big US software firms have the cash and they would invest in whatever the market fad is, and thus, bring it into the US economy.

        • onlyrealcuzzo 8 hours ago

          No - traditionally they return it as share buybacks, because they don't have any good investments.

      • bayarearefugee 8 hours ago

        > 1. People aren't going to take on risk and deploy capital if they can't get a return.

        > 2. If people think they can get an abnormally high return, they will invest more than otherwise.

        Sounds like a good argument for wealth taxes to limit this natural hoarding of wealth absent unreasonably good returns.

      • metalliqaz 8 hours ago

        > 1. People aren't going to take on risk and deploy capital if they can't get a return.

        This doesn't seem to align with the behavior I've observed in modern VCs. It truly amazes me the kind of money that gets deployed into silly things that are long shots at best.

        • disgruntledphd2 8 hours ago

          When you think about all of VC being like 1% of a mostly boring portfolio it makes more sense (from the perspective of the people putting the money in).

    • rsanek 7 hours ago

      >assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic

      For the longest time, capex at FAANG was quite low. These companies are clearly responding specifically to AI. I don't think it's realistic to expect that they would raise capex for no reason.

      >a statement that's been broadly true since 2020, long before ChatGPT started the current boom

      I guess it depends on your definition of "long before," but the ChatGPT release is about mid-way between now and 2020.

      As for the startup vs. AI company point, have you read Stripe's whitepaper on this? They go into detail on how it seems like AI companies are indeed a different breed. https://stripe.com/en-br/lp/indexingai

      • trod1234 7 hours ago

        The sunsetting of research tax breaks would explain why they threw everything into this.

        They also view labor as a replaceable cost, as most accountant-driven companies that no longer innovate do. They forget that if you don't hire and pay people, you don't have any sales demand, and this grows worse as the overall concentration of money in few hands grows. Most AI companies are funded on extreme leverage from banks that are money-printing, and this coincided with 2020, when reserve requirements were set to 0%, effectively removing fractional-reserve banking as a constraint.

    • conductr 3 hours ago

      I think the money is chasing growth in a market that is mostly mature. Tech is really the only hope in that situation, so that's where the dollars land.

    • biophysboy 9 hours ago

      It's also not fair to compare AI firms with others on growth, because AI is a novel technology. Why would there be explosive growth in rideshare apps when it's a mature niche with established incumbents?

      • dragontamer 9 hours ago

        I think the explosive growth that people want is in manufacturing, e.g. US screws, bolts, rivets, dies, PCBs, assembly, and such.

        The dollars are being diverted elsewhere.

        Intel, a chipmaker that could directly serve the AI boom, has failed to deploy its 2nm and 1.8nm fabs and has instead written them off. The next-generation fabs are failing. So even as AI attracts a lot of dollars, the money doesn't seem to be going to the right places.

        • biophysboy 9 hours ago

          They're not going to get it. The political economy of East Asia is simply better suited for advanced manufacturing. The US wants the manufacturing of East Asia without its politics. Sometimes for good reason - being an export economy has its downsides!

          • dragontamer 9 hours ago

            Taiwan isn't some backwater island making low skilled items.

            USA lost mass manufacturing (screws and rivets and zippers), but now we are losing cream of the crop world class manufacturing (Intel vs TSMC).

            If we cannot manufacture, then we likely cannot win the next war. That's the politics at play. The last major war between industrialized nations showed that technology and manufacturing were the keys to success. Now, I don't think the USA has to manufacture everything by itself, but it needs a reasonable plan for every critical component in its supply chain.

            In WW2, that pretty much all came down to ball bearings. The future is hard to predict but maybe it's chips next time.

            Maybe we give up on the cheapest of screws or nails. But we need to hold onto elite status on some item.

            • biophysboy 7 hours ago

              > Taiwan isn't some backwater island making low skilled items.

              Definitely not! Wasn't trying to imply this.

              > If we cannot manufacture then we likely cannot win the next war.

              If you think a war is imminent (a big claim!), then our only chance is to partner with specialized allies that set up shop here (e.g. Taiwan, Japan, South Korea). Trying to resurrect Intel's vertically integrated business model to compete with TSMC's contractor model is a mistake, IMO.

            • dangus 3 hours ago

              I think this is a gross oversimplification and an incorrect assessment of the US’ economic manufacturing capabilities.

              The US completely controls critical steps of the chip making process as well as the production of the intellectual property needed to produce competitive chips, and the lithography machines are controlled by a close ally that would abide by US sanctions.

              The actual war planes and ships and missiles are of course still built in the USA. Modern warfare with stuff that China makes like drones and batteries only gets you so far. They can’t make a commercially competitive aviation jet engine without US and Western European suppliers.

              And the US/NAFTA has a ton of existing manufacturing capability in a lot of the “screws and rivets” categories. For example, there are lots of automotive parts and assembly companies in the US. The industry isn’t as big as it used to be but it’s still significant. The US is the largest manufacturing exporter besides China.

          • geodel 8 hours ago

            Indeed. Just now our kid's therapist told us they are moving out of the current school district because some chemical plant is coming up nearby. More than pollution, it is the attitude that any kind of physical-product factory is a blight on Disney-fied suburbia and its white-collar folks.

    • thrance 9 hours ago

      > > Without AI, US economic growth would be meager.

      > The assumption here is that, without AI, none of that capital would have been deployed anywhere. That intuitively doesn't sound realistic.

      That's the really damning thing about all of this: maybe all this capital could have been invested into actually growing the economy, instead of fueling this speculative bubble that will burst sooner or later, taking any illusion of growth down with it.

      • justonceokay 8 hours ago

        If the economy in my life has taught me anything, it’s that there will always be another bubble. The Innovator’s Dilemma mentions that bubbles aren’t even a bad thing in the sense that useful technologies are often made during them; it’s just that the market is messy and lots of people end up invested in the bubble. It’s the “throw spaghetti at the wall” approach to market growth. Not too different from evolution, in which most mutations are useless but all mutations have the potential to be transformative.

      • ryandrake 9 hours ago

        Or all that money might have been churning around chasing other speculative technologies. Or it might have been sitting in US Treasuries making 5% waiting for something promising. Who knows what is happening in the parallel alternate universe? Right now, it feels like everyone is just spamming dollars and hoping that AI actually becomes a big industry, to justify all of this economic activity. I'm reminded of Danny DeVito's character's speech in the movie Other People's Money, after the company's president made an impassioned speech about why its investors should keep investing:

        "Amen. And amen. And amen. You have to forgive me. I'm not familiar with the local custom. Where I come from, you always say "Amen" after you hear a prayer. Because that's what you just heard - a prayer."

        At this point, everyone is just praying that AI ends up a net positive, rather than bursting and plunging the world into a 5+ year recession.

    • thiago_fm 9 hours ago

      I agree; at any time in US history there have always been 5-10 companies leading economic progress.

      This is very common, and it happens in literally every country.

      But their CAPEX would be much smaller: if you look at current CAPEX from Big Tech, most of it is Nvidia GPUs.

      If a bubble is happening, when it pops, the depreciation applied to all that Nvidia hardware will absolutely melt the balance sheets and earnings of all cloud companies, and of companies building their own data centers like Meta and xAI.

  • hnhg 11 hours ago

    I found this the most interesting part of the whole essay - "the ten largest companies in the S&P 500 have so dominated net income growth in the last six years that it’s becoming more useful to think about an S&P 10 vs an S&P 490" - which then took me here: https://insight-public.sgmarkets.com/quant-motion-pictures/o...

    Can anyone shed light on what is going on between these two groups? I wasn't convinced by the rest of the argument in the article, and I would like an explanation that doesn't just rely on "AI."

    • whitej125 9 hours ago

      Something that might be of additional interest: look at how the top 10 of the S&P 500 has changed over the decades[1].

      At any point in time the world thinks that those top 10 are unstoppable. In the 90's and early 00's... GE was unstoppable and the executive world was filled with acolytes of Jack Welch. Yet here we are.

      Five years ago I think a lot of us saw Apple and Google and Microsoft as unstoppable. But 5-10 years from now, I bet we'll see new logos in the top 10. NVDA is already there. Is Apple going to continue its dominance or go the way of Sony? Is the business model of the internet changing such that Google can't react quickly enough? Will OpenAI (or any foundation-model player) go public?

      I don't know what the future will be but I'm pretty sure it will be different.

      [1] https://www.visualcapitalist.com/ranked-the-largest-sp-500-c...

      • onlyrealcuzzo 9 hours ago

        There was always some subset of the S&P that mattered way more than the rest, just like the S&P matters way more than the Russell.

        Typically, you probably need to go down to the S&P 25 rather than the S&P 10.

    • nowayno583 10 hours ago

      It is a very complex phenomenon, with no single driving force. The usual culprit is uncertainty, which itself can have a ton of root causes (say, tariffs changing every few weeks, or higher inflation due to government subsidies).

      In more uncertain scenarios, small companies can't take risks as well as big companies can. The last two years have seen AI, a large risk these big companies invested in, pay off. But due to uncertainty, smallish companies couldn't capitalize.

      But that's only one possible explanation!

      • automatic6131 9 hours ago

        > The last 2 years have seen AI, which is a large risk these big companies invested in, pay off

        LOL. It's paying off right now because There Is No Alternative. But at some point, the companies and investors are going to want to make back these hundreds of billions. And the only ones making money are Nvidia, and sort of Microsoft, through selling more Azure.

        Once it becomes clear that there's no trillion-dollar industry in cheating-at-homework-for-schoolkids, and Nvidia stops selling more in year X than in year X-1, people will very quickly realize that the last two years have been a massive bubble.

        • nowayno583 9 hours ago

          That's a very out-of-the-money view! If you're right, you could make some very good money!

          • automatic6131 9 hours ago

            No, as you and I both know, I can't. Because it's a qualitative view, not a quantitative one. I would need to know _when_, quite precisely, I will turn out to be right.

            And I don't know, because I have about 60 minutes a week to think about this, and also good quantitative market analysis is really hard.

            So whilst it may sound like a good riposte to say "wow, I bet you make so much money shorting!" knowing that I don't and can't, it's also facile. Because I don't mind if I'm right in 12, 24, or 60 months. FWIW, I thought I'd be right in 12 months, 12 months ago. Oops. Good thing I didn't attempt to "make money" in an endeavor where the upside is capped at 100% of your wager and the downside is theoretically infinite.

            • nowayno583 7 hours ago

              Your reasoning is correct if you're thinking about trading options, or going all in on a trade, but it's not quite right for stocks. The borrowing rates for MSFT and NVDA, even for a retail investor, are less than 1% yearly, so if your view is right, you could hold a short on them for years. The market cap of these companies has already priced in a large capex investment in AI data centers. As long as you use a reasonable rebalancing strategy, and you are right that their current investment in AI will not pay off, you will make money.
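              The holding-cost arithmetic here can be sketched in a few lines. All numbers below are hypothetical (not real borrow fees or returns), and the sketch ignores margin interest, rebalancing, and dividends:

```python
# Illustrative P&L of holding a short position with a low borrow fee.
# All inputs are hypothetical; this ignores margin interest and dividends.
def short_pnl(notional, borrow_rate, years, price_change):
    """P&L of a short held for `years`: gain if the price falls,
    minus the cumulative stock-borrow fee on the notional."""
    gross = -notional * price_change           # profit when price_change < 0
    borrow_cost = notional * borrow_rate * years
    return gross - borrow_cost

# Short $10k at a 1%/yr borrow fee, held 3 years, price falls 30%:
print(short_pnl(10_000, 0.01, 3, -0.30))   # profit of roughly $2,700

# Same short, but the price rises 10% instead: the position loses money.
print(short_pnl(10_000, 0.01, 3, 0.10))
```

              The point of the sketch is that at sub-1% borrow rates, the carry cost of waiting years for the thesis to play out is small relative to the potential move.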

              Mind you, this view exists: a few large hedge funds and sell-side firms currently hold negative positions/views on these companies.

              However, the fact of the matter is, fewer people are willing to take that bet than the opposite view. So it is reasonable to state that view with care.

              You might be right at the end of the day, but it is very much not obvious that this bet hasn't paid off already (or won't).

    • rogerkirkness 10 hours ago

      Winner takes most is now true at the global economy level.

    • foolswisdom 9 hours ago

      The primary goal of big companies is (or has become) maintaining market dominance, but this doesn't always translate to a well-run business with great profits; it depends on internal and external factors. Maybe profits should actually have gone down due to tariffs and uncertainty, but the big companies have kept profits stable.

      • andsoitis 9 hours ago

        > Maybe profits should have actually gone down due to tariffs and uncertainty but the big companies have kept profit stable.

        If you’re referencing Trump’s tariffs, they have only come into effect now, so the economic effects will be felt in the months and years ahead.

    • moi2388 10 hours ago

      They are 40% of the S&P 500, so it makes sense that they are the primary drivers of its growth.

      They are also all tech companies, which had a really amazing run during Covid.

      They also resemble companies with growth potential, whereas others such as P&G or Walmart might've already saturated their markets.
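      The weight argument can be made concrete with back-of-the-envelope index arithmetic (the returns below are made up for illustration, not actual S&P figures):

```python
# In a cap-weighted index, a group's contribution to index return is simply
# its weight times its return. Hypothetical numbers for illustration only.
def contribution(weight, ret):
    """Contribution of a group to the total index return."""
    return weight * ret

top = contribution(0.40, 0.30)    # top 10: 40% weight, +30% return
rest = contribution(0.60, 0.05)   # other 490: 60% weight, +5% return
total = top + rest                # index-level return

# Share of index growth attributable to the top group:
print(top / total)
```

      With these made-up numbers, a 40%-weight group earning 30% supplies 80% of the index's growth, which is why a heavily weighted group dominating index returns is unsurprising on its own.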

      • andsoitis 9 hours ago

        > They are also all tech companies, which had a really amazing run during Covid.

        Only 8 of the 10 are. Berkshire and JP Morgan are not. It is also arguable whether Tesla is a tech company or a car company.

        • ahmeneeroe-v2 8 hours ago

          Berkshire holds ~$60B+ of Apple and is also exposed to AI through its power-utility arm, Berkshire Hathaway Energy.

          • andsoitis 8 hours ago

            > Berkshire holds ~$60B+ of Apple and is also exposed to AI through its power-utility arm, Berkshire Hathaway Energy.

            Apple is 22% of BRK’s holdings. The next biggest of their investments are Amex, BoA, Coke, Chevron.

            They are not a tech company.

            • ahmeneeroe-v2 5 hours ago

              BRK has significant AI exposure through both Apple and Berkshire Hathaway Energy. So while they are not a tech company, they have more exposure to the AI boom than basically any other non-tech company.

    • k-i-r-t-h-i 6 hours ago

      A power law explains the distribution, but the distribution has been getting more extreme over the years, likely due to some mix of market structure, macro conditions, and tech economics.
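      The power-law point can be made concrete with a toy sketch. The exponents below are illustrative, not fitted to real S&P data:

```python
# Toy sketch: in a Zipf/power-law size distribution, firm size falls off as
# 1/rank^a, so a handful of top firms carry most of the total. A steeper
# tail (larger a) concentrates the index further. Exponents are hypothetical.
def top_share(n_firms, exponent, top_k):
    """Fraction of total 'size' held by the top_k firms out of n_firms."""
    sizes = [1.0 / rank ** exponent for rank in range(1, n_firms + 1)]
    return sum(sizes[:top_k]) / sum(sizes)

# Share of an "S&P 10" in a 500-firm Zipf-like economy, two tail exponents:
print(round(top_share(500, 1.0, 10), 2))  # moderate tail
print(round(top_share(500, 1.5, 10), 2))  # steeper tail -> more concentrated
```

      Under this toy model, the "getting more extreme" observation corresponds to the tail exponent drifting upward over time.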

  • freetonik 10 hours ago

    Interesting that the profits of the bottom 490 companies of the S&P 500 do not rise with the help of AI technology, which is supposedly sold to them at a reduced rate while AI vendors bleed money.

    • roncesvalles 10 hours ago

      Other than NVIDIA, the profits of the S&P 10 haven't risen either. It's just that the market is pricing them very optimistically.

      IMO this is an extremely scary situation in the stock market. The AI bubble burst is going to be more painful than the Dotcom bubble burst. Note that an "AI bubble burst" doesn't necessitate a belief that AI is "useless" -- the Internet wasn't useless and the Dotcom burst still happened. The market can crash when it froths up too early even though the optimistic hypotheses driving the froth actually do come true eventually.

      • Workaccount2 10 hours ago

        We are still in the "land grab" phase where companies are offering generous AI plans to capture users.

        Once users get hooked on AI and it becomes an indispensable companion for doing whatever, these companies will start charging the true cost of using these models.

        It would not be surprising if the $20 plans of today are actually just introductory rate $70 plans.

        • esafak 9 hours ago

          I'd be surprised, because free open-source models are continually closing the gap, exerting downward pressure on prices.

          • Workaccount2 9 hours ago

            I don't think it will be much of an issue for the large providers, any more than open-source software has ever been a concern for Microsoft. The AI market is the entire population, not just the small sliver who knows what "VRAM" means and is willing to spend thousands on hardware.

            • esafak 9 hours ago

              You can get open-source models hosted cheaply too, e.g. through OpenRouter, AWS Bedrock, etc. You do not have to run them yourself.

            • jayd16 8 hours ago

              > anymore than open source software has ever been a concern for Microsoft.

              So a big concern then? (Although not a death sentence)

              • jononor 7 hours ago

                The modern Microsoft, with Azure, Office 365, etc., is not much threatened by open-source software. Especially with Azure, open source is a fantastic complement, which they would like the world to produce as much of as possible. The same goes for AI models: they would look to charge for AI hosting and services, at a premium, due to already being integrated into businesses. They are going to bundle it with all their existing moneymakers and then just jack up the price. No sale needed, just a bump in the invoices that are flowing anyway.

                • const_cast 3 hours ago

                  They're definitely very threatened by open source: a lot of software infrastructure these days is built on open-source software. In the 2000s, it wasn't; it was Microsoft, MSS, COM, Windows Server, etc. all the way down. Microsoft has basically been eaten alive by open-source software; it's just hard to tell because they were so huge that, even taken down a few pegs, they're still big.

                  Even today, Azure and AWS are not really cheaper or better: for most situations, they're more expensive and less flexible than what can be done with open-source infrastructure. For companies that succeed at making software, Azure is more of a kneecap and a regret. Many switch away from the cloud, despite that process being deliberately painful, in a striking mirror of switching away from the Microsoft infrastructure of the past.

          • cg5280 8 hours ago

            Hopefully we see enough efficiency gains over time that this is true. The models I can run on my (expensive) local hardware are pretty terrible compared to the free models provided by Big LLM. I would hate to be chained to hardware I can't afford forever.

            • aDyslecticCrow 7 hours ago

              The breakthrough of diffusion for token generation bumped compute down a lot, but there are no local open-source versions yet.

              Distillation for specialization can also raise the capacity of local models if we need them for specific things.

              So it's chugging along nicely.

          • pegasus 8 hours ago

            They're not really free, someone still has to pay for the compute cost.

      • jayd16 8 hours ago

        I'm curious to see the bubble burst. I personally don't think it will be anything like the dotcom era.

        The benefits just have not been that wide-ranging for the average person. Maybe I'm wrong, but I don't see AI hype as a cornerstone of US jobs, so there are no jobs to suddenly dry up. The big companies are still flush with cash on hand, aren't they?

        If/when the fad dies, I'd think it would die with a whimper.

        • aDyslecticCrow 7 hours ago

          I think AI has great potential to change as much as the internet did. But I don't consider LLMs to be the right type of AI to do that.

          Self-driving cars and intelligent robotics are the real goldmine. But we still don't seem to have the right architectures or methods.

          I say that because self-driving cars are entirely stagnant despite the boom in AI interest and resources.

          Personally, I think we need a major breakthrough in reinforcement learning, computer vision (which is still mostly stuck at feed-forward CNNs), and few-shot learning. The transformer is a major leap, but it's not enough on its own.

          • jayd16 29 minutes ago

            I'm not saying things couldn't change. I'm only looking at the landscape as it is now and imagining what would happen if the funding stops because of lack of consumer interest.

            In general, I do not agree that the economy is overleveraged on AI, just as it is not overleveraged on cryptocurrency. If the money dries up, I don't expect economy-wide layoffs.

      • andsoitis 9 hours ago

        > Other than NVIDIA, the profits of the S&P 10 haven't risen either.

        That’s not correct. Did you mean something else?

    • onlyrealcuzzo 9 hours ago

      We'll never know what would've happened without AI.

      1. Their profits could otherwise be down.

      2. The plan might be to invest a bunch up front in severance and AI integration that is supposed to pay off in the future.

      3. That payoff may or may not happen, and it'll be hard to tell, because it may arrive at the same time a recession would otherwise be hitting, which smooths it out.

      It's almost as if it's not that simple.

  • maerF0x0 an hour ago

    > “They’re generating unprecedented amounts of free cash flow,” Cembalest told me. “They make oodles and oodles of money, which is why they can afford to be pouring hundreds of billions of dollars of capital spending each year into AI-related R&D and infrastructure.”

    IMO this should be a red flag for investors that they have not been receiving their profits; instead, the profits are being dumped into CEOs' next big bets, which will fuel their stock-based compensation gains. The government is also culpable here, for creating tax incentives and for the lack of laws requiring profits to be returned as dividends (dividends can always be DRIP'd back into the company as new shares if desired; it's absurd to claim the status quo is better for investors when the alternative gives them more choice).

  • jameslk an hour ago

    I found this analysis insightful:

    https://x.com/dampedspring/status/1953070287093731685

    > However, this pace is likely unsustainable going forward. The sharp acceleration in capex is likely behind us, and the recent growth rate may not be maintained. Any sustained weakness in final demand will almost certainly affect future investment, as AI demand ultimately depends on business revenues and profits, which are tied to nominal GDP. Realized and forecasted capex remain elevated, while free cash flow and cash and cash equivalents are declining for hyperscalers.

  • stackbutterflow 11 hours ago

    Predicting the future is always hard.

    But the only thing I've seen in my life that most resembles what is happening with AI (the hype, the usefulness beyond the hype, the vapid projects, the solid projects, and so on) is the rise of the internet.

    Based on this, I would say we're in the 1999-2000 era. If that's true, what does it mean for the future?

    • keiferski 10 hours ago

      Well, there’s a fundamental difference: the internet blew up because it enabled people to connect with each other more easily, culturally, economically, and politically.

      AI is more-or-less replacing people, not connecting them. In many cases this is economically valuable, but in others I think it just pushes the human connection into another venue. I wouldn’t be surprised if in-person meetup groups really make a comeback, for example.

      So if a prediction about AI involves it replacing human cultural activities (say, the idea that YouTube will just be replaced by AI videos and real people will be left out of a job), then I’m quite bearish. People will find other ways to connect with each other instead.

      • LinuxAmbulance 8 hours ago

        Businesses are overly optimistic about AI replacing people.

        For very simple jobs, like working in a call center? Sure.

        But the vast majority of all jobs aren't ones that AI can replace. Anything that requires any amount of context sensitive human decision making, for example.

        There's no way that AI can deliver on the hype we have now, and it's going to crash. The only question is how hard - a whimper or a bang?

        • SrslyJosh 14 minutes ago

          > For very simple jobs, like working in a call center? Sure.

          Klarna would like a word.

          > Anything that requires any amount of context sensitive human decision making, for example.

          That describes a significant percentage of call center work.

      • dfedbeef 9 hours ago

        There's also the difference that the internet worked.

        • justonceokay 8 hours ago

          In a classically disruptive way, the early internet provided an existing service (information exchange) in a form that was in many ways far less pleasant than existing channels (newspapers, libraries, phone). Remember that the early internet was mostly text: very low resolution, uncredentialed, flaky, expensive, and too technical for most people.

          The only reason that we can have such nice things today like retina display screens and live video and secure payment processing is because the original Internet provided enough value without these things.

          In my first and maybe only ever comment on this website defending AI, I do believe that in 30 or 40 years we might see this first wave of generative AI in a similar way to the early Internet.

    • baxtr 11 hours ago

      "It is difficult to make predictions, especially about the future" - Yogi Berra (?)

      But let’s assume we can for a moment.

      If we’re living in a 1999 moment, then we might be on a Gartner Hype Cycle like curve. And I assume we’re on the first peak.

      Which means that the "trough of disillusionment" will follow.

      This is the phase in the Hype Cycle, following the initial peak of inflated expectations, where interest in a technology wanes as it fails to deliver on early promises.

    • baggachipz 10 hours ago ago

      Classic repeat of the Gartner Hype Cycle. This bubble pop will dwarf the dot-bomb era. There's also no guarantee that the "slope of enlightenment" phase will amount to much beyond coding assistants. GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives.

      This bubble also seems to combine the worst of the two huge previous bubbles; the hype of the dot-com bubble plus the housing bubble in the way of massive data center buildout using massive debt and security bundling.

      • ben_w 10 hours ago ago

        Mm. Partial agree, partial disagree.

        These things, as they are right now, are essentially at the performance level of an intern or recent graduate in approximately all academic topics (but not necessarily practical topics), that can run on high-end consumer hardware. The learning curves suggest to me limited opportunities for further quality improvements within the foreseeable future… though "foreseeable future" here means "18 months".

        I definitely agree it's a bubble. Many of these companies are priced with the assumption that they get most of the market; they obviously can't all get most of the market. And because these models are accessible to the upper end of consumer hardware, there's a reasonable chance none of them will capture much of the market at all: open models will be zero cost, the inference hardware is something you had anyway, and it all runs locally.

        Other than that, to the extent that I agree with you that:

        > GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives

        I do so only in that not everyone wants (or would even benefit from) a book-smart-no-practical-experience intern, and not all economic tasks are such that book-smarts count for much anyway. This set of AI advancements didn't suddenly cause all cars manufacturers to suddenly agree that this was the one weird trick holding back level 5 self driving, for example.

        But for those of us who can make use of them, these models are already useful (and, like all power tools, dangerous when used incautiously) beyond merely being coding assistants.

      • thecupisblue 10 hours ago ago

        > GenAI in its current form will never be reliable enough to do so-called "Agentic" tasks in everyday lives

        No, but GenAI in its current form is insanely useful and is already shifting productivity into a higher gear. Even without 100% reliable "agentic" task execution and AGI, this is already some next-level stuff, especially for non-technical people.

        • ducktective 10 hours ago ago

          Very simple question:

          How do people trust the output of LLMs? In the fields I know about, sometimes the answers are impressive, sometimes totally wrong (hallucinations). When the answer is correct, I always feel like I could have simply googled the issue and some variation of the answer lies deep in some pages of some forum or stack exchange or reddit.

          However, in the fields I'm not familiar with, I'm clueless how much I can trust the answer.

          • threetonesun 10 hours ago ago

            There's a few cases:

            1. For coding, and the reason coders are so excited about GenAI is it can often be 90% right, but it's doing all of the writing and researching for me. If I can reduce how much I need to actually type/write to more reviewing/editing, that's a huge improvement day to day. And the other 10% can be covered by tests or adding human code to verify correctness.

            2. There are cases where 90% right is better than the current state. Go look at Amazon product descriptions, especially things sold from Asia in the United States. They're probably closer to 50% or 70% right. An LLM being "less wrong" is actually an improvement, and while you might argue a product description should simply be correct, the market already disagrees with you.

            3. For something like a medical question, the magic is really just taking plain language questions and giving concise results. As you said, you can find this in Google / other search engines, but they dropped the ball so badly on summaries and aggregating content in favor of serving ads that people immediately saw the value of AI chat interfaces. Should you trust what it tells you? Absolutely not! But in terms of "give me a concise answer to the question as I asked it" it is a step above traditional searches. Is the information wrong? Maybe! But I'd argue that, if you wanted to ask your doctor about something, that quick LLM response might be better than what you'd find on Internet forums.

          • keiferski 10 hours ago ago

            I get around this by not valuing the AI for its output, but for its process.

            Treat it like a brilliant but clumsy assistant that does tasks for you without complaint – but whose work needs to be double checked.

          • dsign 10 hours ago ago

            This is true.

            But I've seen some harnesses (i.e., whatever Gemini Pro uses) do impressive things. The way I model it is like this: an LLM, like a person, has a chance of producing wrong output. A quorum of people, plus some experiments/study, usually arrives at a "less wrong" answer. The same can be done with an LLM, and to an extent is being done by things like Gemini Pro and o3 with their agentic "eyes" and "arms". As the price of hardware and compute goes down (if it does, which is a big "if"), harnesses will become better by being able to deploy more computation, even if the LLM models themselves remain at their current level.

            Here's an example: there is a certain kind of work we haven't quite yet figured how to have LLMs do: creating frameworks and sticking to them, e.g. creating and structuring a codebase in a consistent way. But, in theory, if one could have 10 instances of an LLM "discuss" if a function in code conforms to an agreed convention, well, that would solve that problem.
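
            To make the quorum idea concrete, here is a minimal majority-vote sketch. `quorum_verdict` and `noisy_checker` are hypothetical names, and the 70% per-checker accuracy is an illustrative assumption, not a measured number:

```python
import random
from collections import Counter

def quorum_verdict(check_fn, n_instances: int = 10) -> bool:
    """Majority vote over n independent yes/no checkers.

    Each call to check_fn() models one LLM instance judging whether,
    say, a function conforms to the codebase's conventions.
    """
    votes = [check_fn() for _ in range(n_instances)]
    return Counter(votes).most_common(1)[0][0]

# Toy model: each checker independently gets the right answer (True)
# 70% of the time. The majority vote is right more often than any
# single checker.
random.seed(0)
noisy_checker = lambda: random.random() < 0.7
print(quorum_verdict(noisy_checker))  # → True
```

            One caveat on the design: in practice the instances' errors are correlated (same base model, similar prompts), so the real gains are smaller than the independent-checker arithmetic suggests.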

            There are also avenues of improvement that open with more computation. Namely, today we use "one-shot" models... you train them, then you use them many times. But the structure, the weights of the model aren't being retrained on the output of their actions. Doing that in a per-model-instance basis is also a matter of having sufficient computation at some affordable price. Doing that in a per-model basis is practical already today, the only limitation are legal terms, NDAs, and regulation.

            I say all of this objectively. I don't like where this is going; I think this is going to take us to a wild world where most things are gonna be way tougher for us humans. But I don't want to (be forced to) enter that world wearing rosy lenses.

          • svara 10 hours ago ago

            This is really strange to me...

            Of course you don't trust the answer.

            That doesn't mean you can't work with it.

            One of the key use cases for me other than coding is as a much better search engine.

            You can ask a really detailed and specific question that would be really hard to Google, and o3 or whatever high end model will know a lot about exactly this question.

            It's up to you as a thinking human to decide what to do with that. You can use that as a starting point for in depth literature research, think through the arguments it makes from first principles, follow it up with Google searches for key terms it surfaces...

            There's a whole class of searches I would never have done on Google because they would have taken half a day to do properly that you can do in fifteen minutes like this.

            • dfedbeef 9 hours ago ago

              Such as

              • svara 9 hours ago ago

                I went through my ChatGPT history to pick a few examples that I'm both comfortable sharing and that illustrate the use-case well:

                > There are some classic supply chain challenges such as the bullwhip effect. How come modern supply chains seem so resilient? Such effects don't really seem to occur anymore, at least not in big volume products.

                > When the US used nuclear weapons against Japan, did Japan know what it was? That is, did they understood the possibility in principle of a weapon based on a nuclear chain reaction?

                > As of July 2025, equities have shown a remarkable resilience since the great financial crisis. Even COVID was only a temporary issue in equity prices. What are the main macroeconomic reasons behind this strength of equities.

                > If I have two consecutive legs of my air trip booked on separate tickets, but it's the same airline (also answer this for same alliance), will they allow me to check my baggage to the final destination across the two tickets?

                > what would be the primary naics code for the business with website at [redacted]

                I probably wouldn't have bothered to search any of these on Google because it would just have been too tedious.

                With the airline one, for example, the goal is to get a number of relevant links directly to various airline's official regulations, which o3 did successfully (along with some IATA regulations).

                For something like the first or second, the goal is to surface the names of the relevant people / theories involved, so that you know where to dig if you wish.

          • jcranmer 10 hours ago ago

            One of the most amusing things to me is the amount of AI testimonials that basically go "once I help the AI over the things I know that it struggles with, when it gets to the things I don't know, wow, it's amazing at how much it knows and can do!" It's not so much Gell-Mann amnesia as it is Gell-Mann whiplash.

          • simianwords 9 hours ago ago

            Your internal verifier model in your head is actually good enough and not random. It knows how the world works and subconsciously applies a lot of sniff tests it has learned over the years.

            Sure a lot of answers from llms may be inaccurate - but you mostly identify them as such because your ability to verify (using various heuristics) is good too.

            Do you learn from asking people advice? Do you learn from reading comments on Reddit? You still do without trusting them fully because you have sniff tests.

            • bluefirebrand 3 hours ago ago

              > You still do without trusting them fully because you have sniff tests

              LLMs produce way too much noise and way too inconsistent quality for a sniff test to be terribly valuable in my opinion

          • likium 10 hours ago ago

            We place plenty of trust with strangers to do their jobs to keep society going. What’s their error rate? It all ends up with the track record, perception and experience of the LLMs. Kinda like self-driving cars.

            • rwmj 9 hours ago ago

              When it really matters, professionals have insurance that pays out when they screw up.

              • likium 6 hours ago ago

                I do believe that's where we're heading: people holding jobs in order to be accountable for AI.

            • morpheos137 10 hours ago ago

              Strangers have an economic incentive to perform. AI does not. What AI program is currently able to modify its behavior autonomously to increase its own profitability? Most if not all current public models are simply chat bots trained on old data scraped off the web. Wow, we have created an economy based on cultivated Wikipedia and Reddit content from the 2010s, linked together by bots that can make grammatical sentences and cogent-sounding paragraphs. Isn't that great? I don't know; about 10 years ago, before Google broke itself, I could find information on any topic easily and judge its truth using my grounded human intelligence better than any AI today.

              For one thing AI can not even count. Ask google's AI to draw a woman wearing a straw hat. More often than not the woman is wearing a well drawn hat while holding another in her hand. Why? Frequently she has three arms. Why? Tesla self driving vision can't differentiate between the sky and a light colored tractor trailer turning across traffic resulting in a fatality in Florida.

              For something to be intelligent it needs to be able to think and evaluate the correctness of its thinking correctly. Not just regurgitate old web scrapings.

              It is pathetic, really.

              Show me one application where black box LLM ai is generating a profit that an effectively trained human or rules based system couldn't do better.

              Even if AI is able to replace a human in some tasks this is not a good thing for a consumption based economy with an already low labor force participation rate.

              During the first industrial revolution human labor was scarce, so machines could economically replace and augment labor and raise standards of living. In the present time labor is not scarce, so automation is a solution in search of a problem, and a problem itself if it increasingly leads to unemployment without universal basic income to support consumption. If your economy produces too much with nobody to buy it, then economic contraction follows.

              Already young people today struggle to buy a house. Instead of investing in chat bots, maybe our economy should be employing more people in building trades and production occupations, where they can earn an income to support consumption, including of durable items like a house or a car. Instead, because of the FOMO and hype about AI, investors are looking for greater returns by directing money toward sci-fi fantasy, and when that doesn't materialize an economic contraction will result.

              • likium 5 hours ago ago

                My point is humans make mistakes too, and we trust them, not because we inspect everything they say or do, but because of how society is set up.

                I'm not sure how up to date you are but most AIs with tool calling can do math. Image generation hasn't been generating weird stuff since last year. Waymo sees >82% fewer injuries/crashes than human drivers[1].

                RL _is_ modifying its behavior to increase its own profitability, and companies training these models will optimize for revenue when the wallet runs dry.

                I do feel the bit about being economically replaced. As a frontend-focused dev, nowadays LLMs can run circles around me. I'm uncertain where we go, but I would hate for people to have to do menial jobs just to make a living.

                [1]: https://www.theverge.com/news/658952/waymo-injury-prevention...

                • bluefirebrand 3 hours ago ago

                  > My point is humans make mistakes too, and we trust them,

                  We trust them because they are intrinsically and extrinsically motivated not to mess up

                  AI has no motivation

          • thecupisblue 9 hours ago ago

            If you are a subject matter expert, as is expected to be of the person working on the task, then you will recognise the issue.

            Otherwise, common sense, quick google search or let another LLM evaluate it.

        • lm28469 10 hours ago ago

          > especially for non-technical people.

          The people who use llms to write reports for other people who use llms to read said reports ? It may alleviate a few pain points but it generates an insane amount of useless noise

          • thecupisblue 9 hours ago ago

            Considering they were already creating useless noise, they can create it faster now.

            But once you get out of the tech circles and bullshit jobs, there is a lot of quality usage, as much as there is shit usage. I've met everyone from lawyers and doctors to architects and accountants who are using some form of GenAI actively in their work.

            Yes, it makes mistakes, yes, it hallucinates, but it gets a lot of fluff work out of the way, letting people deal with actual problems.

      • brookst 10 hours ago ago

        The Internet in its 1999 form was never going to be fast enough or secure enough to support commerce, banking, or business operations.

        • falcor84 10 hours ago ago

          Exactly, it took an evolution, but there was no discontinuity. At some point, things evolved enough for people like Tim O'Reilly to say that we now have "Web 2.0", but it was all just small steps by people like those of us here on this thread, gradually making things better and more reliable.

      • Traubenfuchs 8 hours ago ago

        I fully agree that there will be a pop; there must be. Current valuations and investments are based on monumentally society-destroying assumptions. But with every disappointing, incremental, non-revolutionary model generation, the chance increases that the world at large realizes that those assumptions are wrong.

        What should I do with my ETF? Sell now, wait for the inevitable crash? Be all modern long term investment style: "just keep invested what you don't need in the next 10 years bro"?

        This really keeps me up at night.

    • api 10 hours ago ago

      I too lived through the dot.com bubble and AI feels identical in so many ways.

      AI is real just like the net was real, but the current environment is very bubbly and will probably crash.

      • thewebguyd 8 hours ago ago

        It definitely feels identical. We had companies that never had any hope of being profitable (or even doing anything related to the early internet to begin with), but put .com in your name and suddenly you are flooded with hype and cash.

        Same thing now with AI. The capital is going to dry up eventually; no one is profitable right now, and it's questionable whether or not they can be at a price consumers would be willing or able to pay.

        Models are going to become a commodity, just being an "AI Company" isn't a moat and yet every one of the big names are being invested in as if they are going to capture the entire market, or if there even will be a market in the first place.

        Investors are going to get nervous, eventually, and start expecting a return, just like .com. Once everyone realizes AGI isn't going to happen, and realize you aren't going to meet the expected return running a $200/month chatbot, it'll be game over.

  • csours an hour ago ago

    > Nobody can say for sure whether the AI boom is evidence of the next Industrial Revolution or the next big bubble. All we know is that it’s happening.

    In hindsight, it will be clear, and future generations (if any exist) will ask: "Why didn't you understand what was happening at the time?"

    My answer: Noise. Just because you can find someone who wrote down the answer at the time, doesn't mean that they really understood the answer, at least not to the extent that we will understand with hindsight.

    Future history is contingent.

  • ThinkBeat 41 minutes ago ago

    The fact that there is massive spending on AI in the tech sector, isn't that just a -possible- sign of another bust coming down the road?

    We have seen it before, again and again.

  • dsign 10 hours ago ago

    I don't think AI is having much impact on the bits of the economy that have to do with labor and consumption. Folk who are getting displaced by AI are, for now, probably being re-hired to fix AI mess-ups later.

    But if, or when AI gets a little better, then we will start to see a much more pronounced impact. The thing competent AIs will do is to super-charge the rate at which profits don't go to labor nor to social security, and this time they will have a legit reason: "you really didn't use any humans to pave the roads that my autonomous trucks use. Why should I pay for medical expenses for the humans, and generally for the well-being of their pesky flesh? You want to shutdown our digital CEO? You first need to break through our lines of (digital) lawyers and ChatGPT-dependent bought politicians."

    • tehjoker 3 hours ago ago

      Well, if you don't use humans, then you're using machine labor that will drive prices down to at-cost in a competitive environment and strip profits in the end. Profits come from underpaying labor.

  • vannevar 10 hours ago ago

    >Nobody can say for sure whether the AI boom is evidence of the next Industrial Revolution or the next big bubble.

    Like the Internet boom, it's both. The rosy predictions of the dotcom era eventually came true. But they did not come true fast enough to avoid the dotcom bust. And so it will be with AI.

    • GoatInGrey 3 hours ago ago

      My suspicion is that there's a there there, but it doesn't align with the predictions. This is supported by the tension between AI doom articles and the leading models experiencing diminishing performance gains while remaining error-prone. This is to speak nothing of the apparent LLM convergence limit of a ketamine-addled junior developer. Which is a boundary the models seem destined to approach indefinitely without ever breaching.

      The "bust" in this scenario would hit the valuations (P/E ratios) of both the labs and their enterprise customers, and of AI businesses dependent on exponential cost/performance growth curves with the models. The correction would shake the dummies (poorly capitalized or scoped businesses) out of the tree, leaving only the viable business and pricing models still standing.

      That's my personal prediction as of writing.

  • biophysboy 10 hours ago ago

    >“The top 100 AI companies on Stripe achieved annualized revenues of $1 million in a median period of just 11.5 months—four months ahead of the fastest-growing SaaS companies.”

    This chart is extremely sparse and very confusing. Why not just plot a random sample of firms from both industries?

    I'd be curious to see the shape of the annualized revenue distribution after a fixed time duration for SaaS and AI firms. Then I could judge whether its fair to filter by the top 100. Maybe AI has a rapid decay rate at low annualized revenue values but a slower decay rate at higher values, when compared to SaaS. Considering that AI has higher marginal costs and thus a larger price of entry, this seems plausible to me. If this is the case, this chart is cherry picking.

  • jimmydoe an hour ago ago

    This matches the tech job market: if you are not in a top corp or lab, your hard work is most likely subsidizing the $1.5M paycheck for OpenAI employees.

  • GolfPopper 2 hours ago ago

    Remember, the appropriate way to parse use of "the economy" in the popular press is to read it as "rich people's yacht money".

  • hackable_sand 10 hours ago ago

    What about food and housing? Why can't America invest in food and housing instead?

    • daedrdev 2 hours ago ago

      The US systematically taxes and forbids new housing in many ways, as local voters desire. Setback requirements, 100K+ hookup costs, stairway standards, density limits, parking minimums and regulations, community input, allowing rejection of new housing despite it following all rules, abuse of environmental regulations (which ends up hurting the environment by blocking density), and affordable housing requirements (a tax on each new housing block to fund affordable units on the side) all prevent new housing from being built.

    • margalabargala 10 hours ago ago

      America has spent a century investing in food. We invested in food so hard we now have to pay farmers not to grow things, because otherwise the price crash would cause problems. Food in America is very cheap.

      • hackable_sand 9 hours ago ago

        It's reassuring to be reminded that every child in America must justify their existence or starve to death.

        • margalabargala 7 hours ago ago

          Okay, that's too far. That's not true at all.

          Children in America do not starve to death. There is no famine, economically manmade or otherwise.

          This is America. We will happily allow and encourage your child to go into arbitrary amounts of debt from a young age to be fed at school.

    • GoatInGrey 2 hours ago ago

      Because investing in housing means actually changing things. There's a "Don't just do something, stand there!" strategy of maximizing comfort and minimizing effort, that must be overcome.

    • righthand 9 hours ago ago

      Is anyone starving in America? Why would there need to be focus on food production? We have huge food commodities.

  • amunozo 11 hours ago ago

    This is going to end badly, I am afraid.

    • m_ke 11 hours ago ago

      Could all pop today if GPT5 doesn’t benchmark hack hard on some new made up task.

      • falcor84 10 hours ago ago

        I don't see how it would "all pop" - same as with the internet bubble, even if the massive valuations disappear, it seems clear to me that the technology is already massively disruptive and will continue growing its impact on the economy even if we never reach AGI.

        • m_ke 9 hours ago ago

          Exactly like the internet bubble. I've been working in Deep Learning since 2014 and am very bullish on the technology but the trillions of dollars required for the next round of scaling will not be there if GPT-5 is not on the exponential growth curve that sama has been painting for the last few years.

          Just like the dot com bubble we'll need to wash out a ton of "unicorn" companies selling $1s for $0.50 before we see the long term gains.

          • falcor84 2 hours ago ago

            > Exactly like the internet bubble.

            So is this just about a bit of investor money lost? Because the internet obviously didn't decline at all after 2000, and even the investors who lost a lot but stayed in the game likely recouped their money relatively quickly. As I see it, the lesson from the dot-com bust is that we should stay in the game.

            And as for GPT-5 being on the exponential growth curve - according to METR, it's well above it: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

            • 0xdde 2 hours ago ago

              I wouldn't say "well above" when the curve falls well within the error bars. I wonder how different the plot would look if they reported the median as their point estimate rather than mean.

      • mewpmewp2 11 hours ago ago

        I don't expect GPT-5 to be anything special, it seems OpenAI hasn't been able to keep its lead, but even current level of LLMs to me justifies the market valuations. Of course I might eat my words saying that OpenAI is behind, but we'll see.

        • apwell23 11 hours ago ago

          > I don't expect GPT-5 to be anything special

          because ?

          • Workaccount2 9 hours ago ago

            Well word on the street is that the OSS models released this week were Meta-Style benchmaxxed and their real world performance is incredibly underwhelming.

          • input_sh 10 hours ago ago

            Because everything past GPT 3.5 has been pretty unremarkable? Doubt anyone in the world would be able to tell a difference in a blind test between 4.0, 4o, 4.5 and 4.1.

            • falcor84 9 hours ago ago

              I would absolutely take you on a blind test between 4.0 and 4.5 - the improvement is significant.

              And while I do want your money, we can just look at LMArena, which does blind testing to arrive at an Elo-based score: it shows 4.0 with a score of 1318 while 4.5 has 1438. That makes 4.5 over twice as likely to be judged better on an arbitrary prompt, and the difference is more significant on coding and reasoning tasks.
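
              That figure can be back-of-enveloped from the standard Elo expected-score formula (assuming LMArena uses the conventional 400-point logistic scale; `elo_win_prob` is just an illustrative helper name):

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected probability that model A is preferred over B under standard Elo."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

p = elo_win_prob(1438, 1318)  # the 120-point gap between 4.5 and 4.0
print(round(p, 3))  # → 0.666, i.e. roughly 2:1 odds in 4.5's favor
```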

            • apwell23 10 hours ago ago

              > Doubt anyone in the world would be able to tell a difference in a blind test between 4.0, 4o, 4.5 and 4.1.

              But this isn't 4.6, it's 5.

              I can tell the difference between 3 and 4.

              • dwater 9 hours ago ago

                That's a very Spinal Tap argument for why it will be more than just an incremental improvement.

  • andsoitis 10 hours ago ago

    > Artificial intelligence has a few simple ingredients: computer chips, racks of servers in data centers, huge amounts of electricity, and networking and cooling systems that keep everything running without overheating.

    What about the software? What about the data? What about the models?

  • krunck 9 hours ago ago

    Please stop using stacked bar charts where individual lines (plus a Total line) would help the poor reader comprehend the data better.

  • croes 11 hours ago ago

    I see hardware and AI companies’ revenues rise.

    Shouldn’t the customers’ revenue also rise if AI fulfills its productivity promises?

    Seems like the only ones getting rich in this gold rush are the shovel sellers. Business as usual.

    • mewpmewp2 11 hours ago ago

      If it's automation it could also reduce costs of the customers. But that is a very complex question. It could be that there isn't enough competition in AI and so the customers are getting only marginal gains while AI company gets the most. It could also be that for customers the revenue / profits will be delayed as implementation will take time, and it could be upfront investment.

    • sofixa 10 hours ago ago

      > Shouldn’t the customers’ revenue also rise if AI fulfills its productivity promises

      Not necessarily, see the Jevons paradox.
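
      A stylized sketch of why: with constant-elasticity demand (and assuming price falls in proportion to unit cost), an efficiency gain raises total consumption whenever demand is elastic. The function name and parameter values here are purely illustrative:

```python
def total_resource_use(efficiency: float, elasticity: float, k: float = 1.0) -> float:
    """Stylized Jevons model: demand Q = k * price^(-elasticity),
    price falls as 1/efficiency, and each unit of output needs
    1/efficiency units of the underlying resource."""
    price = 1.0 / efficiency
    quantity = k * price ** (-elasticity)
    return quantity / efficiency  # total resource consumed

# Doubling efficiency with elastic demand (elasticity > 1) *raises* total use:
print(round(total_resource_use(2.0, 1.5), 2))  # → 1.41 (vs 1.0 at baseline)
# With inelastic demand it falls:
print(round(total_resource_use(2.0, 0.5), 2))  # → 0.71
```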

      • croes 7 hours ago ago

        Jevons is about higher resource consumption and costs, but the output, and therefore the revenue, should rise too.

        Maybe not the profit but at least the revenue.

      • metalliqaz 8 hours ago ago

        Applying the Jevons paradox to this scenario should still result in revenues going up, assuming the employee labor being optimized adds value to the company. (they would add more)

    • thecupisblue 10 hours ago ago

      The biggest problem is the inability of corporate middle management to actually leverage GenAI.

  • ChrisArchitect 9 hours ago ago

    Related:

    AI is propping up the US economy

    https://news.ycombinator.com/item?id=44802916

    • GoatInGrey 2 hours ago ago

      I'm noticing how that article is myopically discussing equity valuations rather than actual economic output and worker productivity.

  • bravetraveler 10 hours ago ago

    They mention rate of adoption, compared to the internet. Consider the barriers to entry. Before we all got sick of receiving AOL CDs, the prospect of 'going online' was incredibly expensive and sometimes laborious.

    More people subscribe to/play with a $20/m service than own/admin state-of-the-art machines?! Say it ain't so /s

    • thewebguyd 8 hours ago ago

      > More people subscribe to/play with a $20/m service than own/admin state-of-the-art machines?! Say it ain't so /s

      The problem is, $20/m isn't going to be profitable without better hardware, or more optimized models. Even the $200/month plan isn't making money for OpenAI. These companies are still in the "sell at a loss to capture marketshare" stage.

      We don't even know if being an "AI Company" is viable in the first place - just developing models and selling access. Models will become a commodity, and if hardware costs ever come down, open models will win.

      What happens when OpenAI, Anthropic, etc. can't be profitable without charging a price that consumers won't/can't afford to pay?

  • snitzr 11 hours ago ago

    Billion-dollar Clippy.

  • doyouevensunbro 10 hours ago ago

    > because the oligarchs demanded it

    There, summed it up for you.