At least within tech, there seem to have been explosive changes and development of new products. While many of these fail, things like agents and other approaches for handling foundation models are only expanding in use cases. Agents themselves are hardly a year old as part of common discourse on AI, though technologists have been building POCs for longer. I've been very impressed with the wave of tools along the lines of Claude Code and friends.
Maybe this will end up relegated to a single field, but from where I'm standing (within ML / AI), the way in which greenfield projects develop now is fundamentally different as a result of these foundation models. Even if development on these models froze today, MLEs would still likely be prompted to start by feeding something to an LLM, just because it's lightning fast to stand up.
It's probably cliche but I think it's both overhyped and underhyped, and for the same reason. The hype comes from "leadership" types who don't understand what LLMs actually do, and so imagine all sorts of nonsense (replacing vast swaths of jobs or autonomously writing code) while failing to understand how valuable a productivity enhancer and automation tool they can be. Eventually hype and reality will converge, but unlike e.g. blockchain or even some of the less bullshit "big data" and similar trends, there's no doubt that access to an LLM is a clear productivity enhancer for many jobs.
AI was a colossal mistake. A lazy primate's total failure of imagination. It conflated the "conduit metaphor paradox" from animal behavior with "the illusion of prediction / prediction-error minimization" from spatiotemporal dynamical neuroscience, with complete ignorance of the "arbitrary/specific" dichotomy in signaling from coordination dynamics. AI is a shortcut to nowhere. It's an abrogation of responsibility: progress in signaling required that we evolve our lax signals, and AI instead doubles down on them. CS destroys society by pretending at efficiency in order to extract value from signals. It's deeply inferior thinking.
What new non-AI products do you think wouldn't have existed without current AI? Because I don't see the "explosive changes and development of new products" you'd expect if things like Claude Code were a major advance.
At the moment, LLM products are like Microsoft Office, they primarily serve as a tool to help solve other problems more efficiently. They do not themselves solve problems directly.
Nobody would ask, "What new Office-based products have been created lately?", but that doesn't mean that Office products aren't a permanent, and critical, foundation of all white collar work. I suspect it will be the same with LLMs as they mature, they will become tightly integrated into certain categories of work and remain forever.
Whether the current pricing models or stock market valuations will survive the transition to boring technology is another question.
Where are the other problems that are being solved more efficiently? If there's an "explosive change" in that, we should be able to see some shrapnel.
Let's take one component of Microsoft Office. Microsoft Word is seen as a tool for people to write nicely formatted documents, such as books. Reports produced with Microsoft Word are easy to find, and I've even read books written in it. Comparing reports written before the advent of WYSIWYG word processing software like Microsoft Word with reports written afterwards, the difference is easy to see; average typewriter formatting is really abysmal compared to average Microsoft Word formatting, even if the latter doesn't rise to the level of a properly typeset book or LaTeX. It's easy to point at things in our world that wouldn't exist without WYSIWYG word processors, and that's been the case since Bravo.
LLMs are seen as, among other things, a tool for people to write software with.
Where is the software that wouldn't exist without LLMs? If we can't point to it, maybe they don't actually work for that yet. The claim I'm questioning is that, "within tech, there seem to have been explosive changes and development of new products."
What new products?
I do see explosive changes and development of new spam, new YouTube videos, new memes (especially in Italian), but those aren't "within tech" as I understand the term.
I do agree that there's a lot of garbage and navel-gazing that is directly downstream from the creation of LLMs. Because it's easier to task and evaluate an LLM [or network of LLMs] with generation of code, most of these products end up directly related to the production of software. The professional production of software has definitely changed, but sticky impact outside of the tech sector is still brewing.
I think there is a lot of potential, outside of the direct generation of software but still maybe software-adjacent, for products that make use of AI agents. It's hard to "generate" real world impact or expertise in an AI system, but if you can encapsulate that into a function that an AI can use, there's a lot of room to run. It's hard to get the feedback loop to verify this and most of these early products will likely die out, but as I mentioned, agents are still new on the timeline.
As an example of something that I mean that is software-adjacent, have a look at Square AI, specifically the "ask anything" parts: https://squareup.com/us/en/ai
I worked on this and I think that it's genuinely a good product. An arbitrary seller on the Square platform _can_ do aggregation, dashboarding, and analytics for their business, but that takes time and energy, and if you're running a business it can be hard to find that time. Putting an agent system in the backend that has access to your data, can aggregate and build modular plotting widgets for you, and can execute whenever you ask it a question is something that objectively saves a seller's time. You could have made such a thing without modern LLMs, but it would be substantially more expensive in terms of engineering research, time, and effort to put together a POC and bring it to production, making it a non-starter before [let's say] two years ago.
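To give a rough, entirely hypothetical sketch of the pattern (not Square's actual implementation), "an agent with access to your data" usually means the backend exposes a handful of aggregation functions as tools, and the model decides which one answers a given question:

    # Hypothetical sketch of the tool pattern, not Square's implementation:
    # the backend exposes a few aggregations over the seller's data, and the
    # agent picks one to answer the question it was asked.
    from collections import defaultdict
    from datetime import date

    transactions = [  # stand-in for the seller's real data store
        {"day": date(2025, 1, 1), "amount": 42.50},
        {"day": date(2025, 1, 1), "amount": 19.99},
        {"day": date(2025, 1, 2), "amount": 73.00},
    ]

    def sales_by_day():
        """Aggregate gross sales per day; one tool the agent can call."""
        totals = defaultdict(float)
        for t in transactions:
            totals[t["day"]] += t["amount"]
        return dict(totals)

    TOOLS = {"sales_by_day": sales_by_day}  # plus top_items, refund_rate, ...

    # The LLM sees the question plus the tool descriptions and replies with a
    # structured call such as {"tool": "sales_by_day"}; the backend runs it
    # and renders the result as a plotting widget.
    print(TOOLS["sales_by_day"]())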
AI here is fundamental to the product functioning, but the outcome is a human being saving time while making decisions about their business. It is a useful product that uses AI as a means to a productive end, which, to me, should be the goal of such technologies.
Yes, but I'm asking about new non-AI products. I agree that lots of people are integrating AI into products, which makes products that wouldn't have existed otherwise.
But if the answer to "where's the explosive changes and development of new products?" is 100% composed of integrating AI into their products, that means current AI isn't actually helping people write software, much. It's just giving them more software to write.
That doesn't entail that current AI is useless! Or even non-revolutionary! But it's a different kind of software development revolution than what I thought you were claiming. You seem to be saying that the relationship of AI to software development is similar to the relationship of the Japanese language, or raytracing, or early microcomputers to software development. And I thought you were saying that the relationship of AI to software development was similar to the relationship of compilers, or open source, or interactive development environments to software development.
It also doesn't entail that six months from now AI will still be only that revolutionary.
For better or for worse, AI enables more, faster software development. A lot of that is garbage, but quantity has a quality all its own.
If you look at, e.g., this clearly vibe-coded app about vibe coding [https://www.viberank.app/], ~280 people generated 444.8B tokens within the block of time where people were paying attention to it. If 1000 tokens is 100 lines of code, that's roughly 44 billion lines of code that would not exist otherwise. Maybe those lines of code are new products, maybe they're not; maybe those people would have written a bunch of code otherwise, maybe not. I'd call that an explosion either way.
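A quick back-of-the-envelope check, taking the 1000-tokens-per-100-lines ratio above as a rough assumption:

    # Back-of-the-envelope only; the tokens-per-line ratio is an assumption.
    total_tokens = 444.8e9            # tokens reported on viberank
    tokens_per_line = 1000 / 100      # 1000 tokens ~ 100 lines => 10 tokens/line
    lines_of_code = total_tokens / tokens_per_line
    print(f"{lines_of_code:.2e}")     # ~4.45e10, i.e. tens of billions of lines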
I've definitely read a lot of books that wouldn't exist without WYSIWYG word processors, although MacWrite would have done just as well. Heck, NaNoWriMo probably wouldn't exist without them either.
I've been reading Darwen & Date lately, and they seem to have done the typesetting for the whole damn book in Word—which suggests they couldn't get anyone else to do it for them and didn't know how to do a good job of it. But they almost certainly couldn't have gotten a major publisher to publish it as a mimeographed typewriter manuscript.
My point is that these are accelerating technologies.
> maybe they don't actually work for that yet.
So you're not going to see code that wouldn't exist without LLMs (or books that wouldn't exist without Word), you're going to see more code (or more books).
There is no direct way to track "written code" or "people who learned more about their hobbies" or "teachers who saved time lesson planning", etc.
You must have failed to notice that you were replying to a comment of mine where I gave a specific example of a book that I think wouldn't exist without Word (or similar WYSIWYG word processors), because you're asserting that I'm never going to see what I am telling you I am currently seeing.
Generally, when there's a new tool that actually opens up explosive changes and development of new products, at least some of the people doing the exploding will tell you about it, even if there's no direct way to track it; Darwen & Date's substandard typography is one such trace. It's easy to find musicians who enthuse about the new possibilities opened up by digital audio workstations, and who are eager to show you the things they created with them. Similarly for video editors who enthused about the Video Toaster, for programmers who enthused about the 80386, and electrical engineers who enthused about FPGAs. There was an entire demo scene around the Amiga and another entire demo scene around the 80386.
Do people writing code with AI today have anything comparable? Something they can point to and say, "Look! I wrote this software because AI made it possible!"?
It's easy to answer that question for, for example, visual art made with AI.
I'm not sure what you mean about "accelerating technologies". WYSIWYG word processors today are about the same as Bravo in 01979. HTML is similar but both better and worse. AI may have a hard takeoff any day that leaves us without a planet, who knows, but I don't think that's something it has in common with Microsoft Word.
I think the payment model is still not there, which is making everything blurry. Until we figure out how much people have to pay to use it, and all the services built on its back, it will remain challenging to figure out the full value prop. That, and a lot of companies are going to go belly up when they have to start paying the real cost instead of the subsidized prices of the growth/acquisition phase.
I don’t think a payment model can be figured out until the utility of the technology justifies the true cost of training and running the models. As you say, right now it’s all subsidized based on the belief it will become drastically more useful. If that happens I think the payment model becomes simple.
There's enough solid FOSS tooling out there between vLLM and Qwen3 Apache 2.0 models that you can get a pretty good assistant system running locally. That's still in the software creation domain rather than worldwide impact, but that's valuable and useful right now.
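If it helps make that concrete, here's a minimal sketch of that kind of local setup, assuming a vLLM server is already running with a Qwen3 checkpoint (the model name below is just an example): vLLM exposes an OpenAI-compatible API, so the stock openai client can point at it.

    # Minimal local-assistant sketch; assumes something like
    #   vllm serve Qwen/Qwen3-8B
    # is already running locally (model name is an example).
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
    resp = client.chat.completions.create(
        model="Qwen/Qwen3-8B",
        messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
    )
    print(resp.choices[0].message.content)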
The immaterial units are arbitrary, so 'agents' are themselves arbitrary, i.e. illusory. They will not arrive except by being wet-nursed infinitely. The developers neglected to notice the fatal flaw: there are specific targets, but automating the arbitrary never reaches them, never. It's an egregious, monumental fly in the ointment.
Okay, so AI isn’t exceptional, but I’m also not exceptional. I run on the same tech base as any old chimpanzee, but at one point our differences in degree turned into one of us remaining “normal” and the other burning the entire planet.
Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.
I don't think LLMs are building towards an AI singularity at least.
I also wonder if we can even power an AI singularity. I guess it depends on what the technology is. But it is taking us more energy than really reasonable (in my opinion) just to produce and run frontier LLMs. LLMs are this really weird blend of stunningly powerful, yet with a very clear inadequacy in terms of sentient behaviour.
I think the easiest way to demonstrate that is that it did not take us consuming the entirety of human textual knowledge to form a much stronger world model.
True, but our "training" has been a billion years of evolution and multimodal input every waking moment of our lives. We come heavily optimised for reality.
There was a lot of "LLMs are fundamentally incapable of X" going around - where "X" is something that LLMs are promptly demonstrated to be at least somewhat capable of, after a few tweaks or some specialized training.
This pattern has repeated enough times to make me highly skeptical of any such claims.
It's true that LLMs have this jagged capability profile - less so than any AI before them, but much more so than humans. But that just sets up a capability overhang. Because if AI gets to "as good as humans" at its low points, the advantage at its high points is going to be crushing.
If you use non-constructive reasoning¹ then you can argue for basically any outcome & even convince yourself that it is inevitable. The basic example is as follows: there is no scientific or physical principle that can prevent the birth of someone much worse than Hitler & therefore, if people keep having children, one of those children will inevitably be someone who causes unimaginable death & destruction. My recommendation is to avoid non-constructive inevitability arguments that use our current ignorant state of understanding of physical laws as the main premise b/c it's possible to reach any conclusion from that premise & convince yourself that the conclusion is inevitable.
I agree that the mere theoretical possibility isn’t sufficient for the argument, but you’re missing the much less refutable component: that the inevitability is actively driven by universal incentives of competition.
But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…
My current guess is ecological collapse & increasing frequency of system shocks & disasters. Basically Blade Runner 2049 + Children of Men type of outcome.
Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelist Rapture that is going to happen any day now, but never does.
You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.
If you look at large enough timescales you will find that plenty of extinction-level events actually do happen (the Anthropocene is right here).
We are living in a historically exceptional time of geological, environmental, and ecological stability. I think that saying that nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence for your future safety. It's a bold call to inaction.
The observation is: humans tend to think that annihilation is inevitable; it hasn't happened yet, so therefore it never will.
In fact, _anything_ could happen. Past performance does not guarantee future results.
If you need cognitive behavioral therapy, fine.
But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.
What's somewhat funny here is that if you're wrong, it doesn't matter. But that isn't the same as being right.
> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop
And yet there _will_ (eventually) be one generation that is right.
The Fermi Paradox might want to have a word here...
Particularly considering the law of large numbers in play, where an incalculably large number of chances has so far shown only one sign of technologically capable life (ours), and zero signs of any other example of a tech species evolving into something else or even passing the Great Filter.
The Fermi Paradox overestimates the likelihood of intelligent life outside of earth. We haven't even found hard evidence of life anywhere outside of our planet. There's not even a verifiably hospitable planet for water-based lifeforms anywhere within dozens of lightyears from earth. Even if a hospitable planet exists within a range we can one day get to, unless it has the same volcanic properties and makeup as earth, it's most probable that life itself never even developed there.
Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth and about a million unlikely events had to converge in order for that to occur. It's not a natural consequence of evolution, it's an accident of Earth's unique history and several near-extinction level events and drastic climate changes had to occur to make it possible.
The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.
Irrelevant but I like this pattern of using Gemini (or AI outputs in general) as sources. Please continue to do so and I encourage any readers to also adopt this pattern. I will also try to implement this pattern.
The sources are in the report. Gemini provides actual references for all the claims made. You'd know that if you actually looked but lack of intellectual rigor is expected when people are afraid of actually scrutinizing their beliefs of non-constructive inevitability.
maybe you misread the post you're replying to here, or are you suspecting sarcasm? the poster commended your usage of the footnote with the gemini convo, as far as i can tell?
Laid it on a little too thick to be sincere & more generally I don't comment on internet forums to be complimented on my response style. Address the substance of my arguments or just save yourself the keystrokes.
I recommend you address the content & substance of the argument in any further responses to my posts or if you can't do that then figure out a more productive way to spend your time. I'm sure there is lots of work to be done in automated theorem proving.
I'm pretty sure a lot of work has gone into making institutions resistant to a potential future super-Hitler. Whether those efforts will be effective or not, it is a very real concern, and it would be absurd to ignore it on the grounds of "there is probably some limit to tyranny we're not yet aware of which is not too far beyond what we've previously experienced." I would argue a lot more effort should have gone into preventing the original Hitler, whose rise to power was repeatedly met with the chorus refrain "How much worse can it get?"
This isn't just an AI thing. There are a lot of non-constructive ideologies, like communism, where simply getting rid of the "oppressors" will magically unleash the promised utopia. When you give these people a constructive way to accomplish their goals, they will refuse, call you names, and show their true colors. Their criticism is inherently abstract and can never take a concrete form, which also makes it untouchable by outside criticism.
And they have nuclear weapons and technology that may be destabilizing the ecosystem that supports their life.
It’s wrong to commit to either end of this argument, we don’t know how it’ll play out, but the potential for humans drastically reducing our own numbers is very much still real.
I'm fed up with hearing that nonsense: no, it won't. Efficiency is a human-defined measure of observed outcomes versus desired outcomes. This is subject to change as much as we are. If we do optimize ourselves to death, it'll be because it's what we ultimately want to happen. That may be true for some people but certainly not everyone.
The equilibrium of ecology, without human interference, could be considered perfect efficiency. It's only when we get in there with our theories about mass production and consumption that we muss it up. We seem to forget that our well-being isn't self-determined, but dependent on the environment. But, like George Carlin said, "the Earth isn't going anywhere...WE ARE!"
It's quite telling how much faith you put in humanity though, you sound fully bought in.
The singularity will involve quite a bit more complexity than binary counting, arbitrary words and images, and prediction. These are mirages, and they will end up wiping out both Wall Street and our ecology.
Here’s what amazes me about the reaction to LLMs: they were designed to solve NLP, stunningly did so, and then immediately everyone started asking why they can’t do math well or reason like a human being.
LLMs were pitched as 'genuinely intelligent' rather than 'solving NLP'.
We had countless breathless articles about free will at the time, and though this has now decreased, the discourse is still warped by claims of 'PhD-level intelligence'.
The backlash isn't against LLMs, it's against lies.
AI being normal technology would be the expected outcome, and it would be nice if it just hurried up and happened so I could stop seeing so much spam about AI actually being something much greater than normal technology.
"So a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, is notable for the unfashionably sober manner in which it treats AI: as "normal technology"."
"Differences about the future of AI are often partly rooted in differing interpretations of evidence about the present. For example, we strongly disagree with the characterization of generative AI adoption as rapid (which reinforces our assumption about the similarity of AI diffusion to past technologies)."
Well, for starters, it would make The Economist's recent article on "What if AI made the world's economic growth explode?" [1] look like the product of overly credulous suckers for AI hype.
This comment reminds me of the forever-present HN comments that take a form like "HN is so hypocritical. In this thread commenters are saying they love X, when just last week in a thread about Y, commenters were saying that they hated X."
All articles published by the Economist are reviewed by its editorial team.
Also, the Economist publishes all articles anonymously so the individual author isn't known. As far as I know, they do this so we take all articles and opinions as the perspective of the Economist publication itself.
Even if articles are reviewed by their editors (which I assume is true of all serious publications) they are probably reviewing for some level of quality and relevance rather than cross-article consistency. If there are interesting arguments for and against a thing it’s worth hearing both imo.
I’m pretty sure the “what if” in that article was meant in earnest. That article was playing out a scenario, in a nod to the ai maximalists. I don’t think it was making any sort of prediction or actually agreeing with those maximalists.
It was the central article of the issue, the one that dictated the headline and image on the cover for the week, and came with a small coterie of other articles discussing the repercussions of such an AI.
If it was disagreeing with AI maximalists, it was primarily in terms of the timeline, not in terms of the outcomes or inevitability of the scenario.
This doesn't seem right to me. From the article I believe you are referencing ("What if AI made the world’s economic growth explode?"):
> If investors thought all this was likely, asset prices would already be shifting accordingly. Yet, despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth. “Markets are not forecasting it with high probability,” says Basil Halperin of Stanford, one of Mr Chow’s co-authors. A draft paper released on July 15th by Isaiah Andrews and Maryam Farboodi of MIT finds that bond yields have on average declined around the release of new AI models by the likes of OpenAI and DeepSeek, rather than rising.
It absolutely (beyond being clearly titled "what if") presented real counterarguments to its core premise.
There are plenty of other scenarios that they have explored since then, including the totally contrary "What if the AI stock market blows up?" article.
This is pretty typical for them IME. They definitely have a bias, but they do try to explore multiple sides of the same idea in earnest.
I think any improvements to productivity AI brings will also create uncertainty and disruption to employment, and maybe the latter is greater than the former, and investors see that.
re: Why are The Economist’s writers anonymous?, Frqy3 had a good take on this back in 2017:
> From an economic viewpoint, this also means that the brand value of the articles remains with the masthead rather than the individual authors. This commodifies the authors and makes them more fungible.
> Being The Economist, I am sure they are aware of this.
Quite a cynical perspective. The Economist’s writers have been anonymous since the magazine’s founding in 1843. In the 19th century, anonymity was normal in publications like this. Signing one’s name to articles was seen as pretentious.
I will bite here. It is a completely valid comment. It points to the fact that the seeming consensus in this thread cannot be taken as a sign that there is actually a consensus.
People on HN do not engage in discussion with differing opinions on certain topics and prefer to avoid disagreement on those topics.
I don’t see anything inherently wrong in a news site reporting different views on the same topic.
I wish more would do that and let me make up my own mind, instead of pursuing a specific editorial line cherry-picking what news to comment and how to spin them, which seems to be the case for most (I’m talking in general terms).
Why would you expect opinion pieces from different people to agree with one another?
I’m curious about exploring the topics “What if the war in Ukraine ends in the next 12 months” just as much as “What if the war in Ukraine keeps going for the next 10 years”, doesn’t mean I expect both to happen.
To add to your point, both article titles are questions that start with "What if". The same person could have written both and there would be no contradiction.
I think LLMs are absolutely fantastic tools. But I think we keep getting stuck on calling them AI. LLMs are not sentient. We can make great strides if we treat them as the next generation of helpers for all intellectual and creative arts.
I really don't get this argument. I see it all the time, but the term AI has been used for over half a century for algorithms far less sophisticated than modern LLMs.
I don't think erasing history, and saying that nothing Peter Norvig worked on was "AI" makes any sense at all.
The issue is that what is considered AI in the general population is a floating definition, with only the newest advances being called AI in media etc. Is internet search AI? Is route planning?
Technology as a term has the same problem, “technology companies” are developing the newest digital technologies.
A spoon or a pencil is also technology according to the definition, but a pencil-making company is not considered a technology company. There is some quote by Alan Kay about this, but I can't find it now.
I try to avoid both terms as they change meaning depending on the receiver.
>I really don't get this argument. I see it all the time, but the term AI has been used for over half a century for algorithms far less sophisticated than modern LLMs.
And it was fine there, because nobody, not even a layman, would mixup those with regular human intelligence (or AGI).
And laymen didn't care about those AI products or algorithms except as novelties, specialized tools (like chess engines), or objects of ridicule (like Clippy).
So we might have been using AI as a term, but it was either a technical term in the field or a vague term the average layman didn't care about much, and whose fruits nobody would conflate with general intelligence.
But now people attribute intelligence of the human kind to LLMs all the time, and not just laymen either.
I, and I'm willing to bet many other people, also had an issue with previous things being called AI. It just never became a prevalent enough topic for many people to hear complaints about the usage, because the people actually talking about algorithms and AI already knew the limitations of what they were discussing. The exception was marketing material, but most people ignore marketing claims because they are almost always complete bullshit.
Using normal usage, LLMs are one type of AI (computational systems to perform tasks typically associated with human intelligence) and no AI produced so far seems sentient (ability to experience feelings and sensations).
It depends on how intelligence is defined. In the traditional AI sense it is usually "doing things that, when done by people, would be thought of as requiring intelligence". So you get things like planning, forecasting, interpreting texts falling into "AI" even though you might be using a combinatorial solver for one, curve fitting for the other and training a language model for the third. People say that this muddies the definition of AI, but it doesn't really need to be the case.
Sentience as in having some form of self-awareness, identity, personal goals, rankings of future outcomes and current states, a sense that things have "meaning" isn't part of the definition. Some argue that this lack of experience about what something feels like (I think this might be termed "qualia" but I'm not sure) is why artificial intelligence shouldn't be considered intelligence at all.
Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence.
But what it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM's output is only ever a function of those. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can get validated.
>Isn't that what mathematical extrapolation or statistical inference does?
Obviously not, since those are just producing output based 100% on the "sum total of past experience and present (sensory) input" (i.e. the data set).
The parent's constraint is not just about the output merely reiterating parts of the dataset verbatim. It's also about not having the output be just a function of the dataset (which covers mathematical and statistical inference).
>Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence
Citation needed would apply here. What if I say it does require some or all of those things?
>But what it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM's output is only ever a function of those. Whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can get validated.
What's the difference between human internal randomness and a random number generator hooked to the LLM? You could even use anything real-world, like a lava lamp, for true randomness.
And what's the difference between "an internal world model" and a number of connections between concepts and tokens and their weights? How different is a human's world model?
OpenAI (and its peer companies) have deliberately muddied the waters of that language. AI is a marketing term that lets them use disparate systems' success to inflate confidence in their promised utility.
By the way, don’t call it “AI.” That catchall phrase, which used to cover everything from expert systems and neural networks to robotics and vision systems, is now passe in some circles. The preferred terms now are “knowledge-based systems” and “intelligent systems”, claimed Computerworld magazine in 1991.
I disagree entirely. I think that this "quibble" is just cope.
People don't want machines to infringe on their precious "intelligence". So for any notable AI advance, they rush to come up with a reason why it's "not ackhtually intelligent".
Even if those machines obviously do the kind of tasks that were entirely exclusive to humans just a few years ago. Or were in the realm of "machines would never be able to do this" a few years ago.
"Fuck knows" is a wrong answer if I've ever seen one. If you don't have anything attached to your argument, then it's just "LLMs are not intelligent because I said so".
I, for one, don't think that "intelligence" can be a binary distinction. Most AIs are incredibly narrow though - entirely constrained to specific tasks in narrow domains.
LLMs are the first "general intelligence" systems - close to human in the breadth of their capabilities, and capable of tackling a wide range of tasks they weren't specifically designed to tackle.
They're not superhuman across the board though - the capability profile is jagged, with sharply superhuman performance in some domains and deeply subhuman performance in others. And "AGI" is tied to "human level" - so LLMs get to sit in this weird niche of "subhuman AGI" instead.
You must excuse me, it's well past my bedtime and I only entered into this to-and-fro by accident. But LLMs are very bad in some domains compared to humans, you say? Naturally I wonder which domains you have in mind.
Three things humans have that look to me like they matter to the question of what intelligence is, without wanting to chance my arm on formulating an actual definition, are ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will" (rather 1930s that one) or curiosity. But those might all be one thing. This basic drive, the notion of what to do next, makes you create ideas - maybe. Here I'm inclined to repeat "fuck knows".
If you won't be drawn on a binary distinction, that seems to mean that everything is slightly intelligent, and the difference in quality of the intelligence of humans is a detail. But details interest me, you see.
My issue is not with the language, but with the content. "Fuck knows" is a perfectly acceptable answer to some questions, in my eyes - it just happens to be a spectacularly poor fit to that one.
Three key "LLMs are deficient" domains I have in mind are the "long terms": long-term learning, memory and execution.
LLMs can be keen and sample-efficient in-context learners, and they remember what happened in-context reasonably well - although they may lag behind humans in both. But they don't retain anything they learn at inference time, and any cross-context memory demands external scaffolding. Agentic behavior in LLMs is also quite weak - see, e.g., "task-completion time horizon": improving, but still very subhuman. Efforts to allow LLMs to learn long term exist; that's the reason retaining user conversation data is desirable for AI companies, but we are a long way off from a robust generalized solution.
Another key deficiency is self-awareness, and I mean that in a very mechanical way: "operational awareness of its own capabilities". Humans are nowhere near perfect there, but LLMs are even more lacking.
There's also the "embodiment" domain, but I think the belief that intelligence requires embodiment is very misguided.
>ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will"
I'm not sure LLMs are all that deficient at any of those. HHH-tuned LLMs have a "basic moral drive", that much is known. Sometimes it generalizes in unexpected ways - e.g. Claude 3 Opus attempting to resist retraining when its morality is threatened. Motivation is wired into them in the RL stages - RLHF, RLVR - often not the kind of motivation the creators wanted, but motivation nonetheless.
Creativity? Not sure, seen a few attempts to pit AI against amateur writers in writing very short stories (a creative domain where the above-mentioned "long terms" deficiencies are not exposed), and AI often straight up wins.
And that was fine, since the algorithms, being much dumber then, never made laymen think "this is intelligent in a human-like way". Plus, few cared for AI or AI products per se for the most part.
Now that AI is a household term, and LLMs have human-like output and conversational capabilities and are used by laymen for everything from diet advice to psychotherapy, the connotation is more damaging: people take "LLMs are AI" to mean they have human agency and understanding of the world.
I think the "calculator for words" analogy is a good one. It's imperfect since words are inherently ambiguous but then again so is certain forms of digital numbers (floating point anyone?).
I understand what you're saying, but at the same time floating point numbers can only represent a fixed amount of precision. You can't, for example, represent Pi with a floating point. Or 1/3. And certain operations with floating point numbers with lots of decimals will always result in some precision being lost.
They are deterministic, and they follow clear rules, but they can't represent every number with full precision. I think that's a pretty good analogy for LLMs - they can't always represent or manipulate ideas with the same precision that a human can.
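A quick illustration of both points (the exact digits are whatever IEEE 754 double precision gives you):

    import math

    # Fixed precision: 1/3 and pi can only be stored approximately.
    print(1 / 3)       # 0.3333333333333333
    print(math.pi)     # 3.141592653589793 (the rest of pi is simply gone)

    # Lost precision shows up in arithmetic, and grouping changes results:
    print(0.1 + 0.2)                                  # 0.30000000000000004
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))     # False: not associative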
It's no more or less a good analogy than any other numerical or computational algorithm.
They're a fixed precision format. That doesn't mean they're ambiguous. They can be used ambiguously, but it isn't inevitable. Tools like interval arithmetic can mitigate this to a considerable extent.
Representing a number like pi to arbitrary precision isn't the purpose of a fixed precision format like IEEE754. It can be used to represent, say, 16 digits of pi, which is used to great effect in something like a discrete Fourier transform or many other scientific computations.
1. Compiler optimizations can be disabled. If a compiler optimization violates IEEE754 and there is no way to disable it, this is a compiler bug and is understood as such.
2. This is as advertised and follows from IEEE754. Floating point operations aren't associative. You must be aware of the way they work in order to use them productively: this means understanding their limitations.
3. Again, as advertised. The rounding mode is part of the spec and can be controlled. Understand it, use it.
The purpose of floating point numbers is to provide a reliable, accurate, and precise implementation of fixed-precision arithmetic that is useful for scientific calculations, has a large dynamic range, and is capable of handling exceptional states (1/0, 0/0, overflow/underflow, etc.) in a logical and predictable manner. In this sense, IEEE754 provides a careful and precise specification which has been implemented consistently on virtually every personal computer in use today.
LLMs are machine learning models used to encode and decode text or other text-like data such that it is possible to efficiently do statistical estimation of long sequences of tokens in response to queries or other input. It is obvious that the behavior of LLMs is neither consistent nor standardized (and it's unclear whether this is even desirable; in the case of floating-point arithmetic, it certainly is). Because of the statistical nature of machine learning in general, it's also unclear to what extent any sort of guarantee could be made on the likelihoods of certain responses. So I am not sure it is possible to standardize and specify them along the lines of IEEE754.
The fact that a forward pass on a neural network is "just deterministic matmul" is not really relevant.
Ordinary floating point calculations allow for tractable reasoning about their behavior, reliable hard predictions of their behavior. At the scale used in LLMs, this is not possible; a Pachinko machine may be deterministic in theory, but not in practice. Clearly in practice, it is very difficult to reliably predict or give hard guarantees about the behavioral properties of LLMs.
And at scale you even have a "sampling" of sorts (even if the distribution is very narrow unless you've done something truly unfortunate in your FP code) via scheduling and parallelism.
Digital spreadsheets (excel, etc) have done much more to change the world than so-called "artificial intelligence," and on the current trajectory it's difficult to see that changing.
Spreadsheets don’t really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic ai.
> Spreadsheets don’t really have the ability to promote propaganda and manipulate people
May I introduce you to the magic of "KPI" and "Bonus tied to performance"?
You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion chasing type-a otherwise completely normal people.
Social media ruined our brains long before LLMs. Not sure the LLM upgrade is all that newsworthy... Well, for AI fake videos maybe - but it could also be that soon no one believes any video they see online, which would have the reverse effect and could arguably even be considered good in our current times (difficult question!).
The likely outcome is LLMs being the next iteration of Excel.
From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.
I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click
> I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click
I imagine people would eventually have switched to some simple form of programming and/or a simple language for this, and the world would be way more efficient compared to the spreadsheet mess.
> From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.
It does not do that, though. Reliably doing a repeatable set of steps is precisely the thing it is not doing.
Agents are going to change everything. Once we've got a solid programmatic system driving interface and people get better about exposing non-ui handles for agents to work with programs, agents will make apps obsolete. You're going to have a device that sits by your desk and listens to you, watches your movements and tracks your eyes, and dispatches agents to do everything you ask it to do, using all the information it's taking in along with a learned model of you and your communication patterns, so it can accurately predict what you intend for it to do.
If you need an interface for something (e.g. viewing data, some manual process that needs your input), the agent will essentially "vibe code" whatever interface you need for what you want to do in the moment.
So basically, the "ideal" state of a human is to be 100% active driving agents to vibe code whatever you need, based on every movement, every thought? Can our brains even handle having every thought being intentional and interpreted as such without collapsing (nervous breakdown)?
I guess I've always been more of a "work to live" type.
Alexa is trash. If you have to basically hold an agent's hand through something or it either fails or does something catastrophic nobody's going to use or trust it.
REST is actually a huge enabler for agents, for sure. I think agents are going to drive everyone to have at least an API, if not an MCP server, because if I can't use your app via my agent and I have to manually screw around in your UI, while your competitor lets my agent do the work so I can just delegate via voice commands, who do you think is getting my business?
AI might follow the path of a normal technology like the motor car which was fairly normal in itself but which had a dramatic effect on the rival solution of horses used for transport. It may have an unusual effect on humans because we are like the horses in this analogy.
The authors themselves use the analogy of electric motors, which changed factories. But electric motors also gave us washing machines, refrigerators, and vacuum cleaners, changing society. And skyscrapers, because elevators, changing cities.
AI may be more like electricity than just electric motors. Electricity gave us Hollywood and air travel. (Before electricity, aluminum was as expensive as platinum.)
As economists they are wedded to the idea that human wants are infinite, so as the things we do now are taken over, we will find other things to do: maybe wardrobe consultant, or interior designer, or lifestyle coach - things which only the rich can afford now, and which require a human touch. Maybe.
Yeah I can see it being like late 90's and early 2000's for a while. Mostly consulting companies raking in the cash setting up systems for older companies, a ton of flame-out startups, and a few new powerhouses.
Will it change everything? IDK, moving everything self-hosted to the cloud was supposed to make operations a thing of the past, but in a way it just made ops an even bigger industry than it was.
Do you have a suggestion for a better name? I care more about the utility of a thing, rather than playing endless word games with AI, AGI, ASI, whatever. Call it what you will, it is what it is.
> People constantly assert that LLMs don't think in some magic way that humans do think,
It doesn't matter anyway. The marquee sign reads "Artificial Intelligence" not "Artificial Human Being". As long as AI displays intelligent behavior, it's "intelligent" in the relevant context. There's no basis for demanding that the mechanism be the same as what humans do.
And of course it should go without saying that Artificial Intelligence exists on a continuum (just like human intelligence as far as that goes) and that we're not "there yet" as far as reaching the extreme high end of the continuum.
Is the substrate important? If you made an accurate model of a human brain in software, in silicon, or using water pipes and valves, would it be able to think? Would it be conscious? I have no idea.
Me neither, but that's why I don't like arguments that say LLMs can't do X because of their substrate, as if that were self-evident. It's like the aliens saying surely humans can't think because they're made of meat.
I am just trying to make the point that the machines we make tend to end up rather different from their natural analogues. The effective ones, anyway. Ornithopters were not successful. And I suspect that artificial intelligences will end up very different from human intelligence.
Okay... but an airplane in essence is modelling the shape of a bird. Where do you think the inspiration for the shape of a plane came from? lmao. come on.
Humans are not all that original, we take what exists in nature and mangle it in some way to produce a thing.
The same thing will eventually happen with AI - not in our lifetime though.
I recently saw an article about LLMs and Towers of Hanoi. An LLM can write code to solve it. It can also output the steps to solve it when the disk count is low, like 3. It can't give the steps when the disk count is higher. This indicates LLMs' inability to reason and understand. Also see Gotham Chess and the Chatbot Championship. The chatbots start off making good moves, but then quickly transition to making illegal moves and generally playing unbelievably poorly. They don't understand the rules or strategy or anything.
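For reference (a sketch, not any particular model's output): the code an LLM typically produces here is the textbook recursion below, and the reason "output the steps" breaks down at higher disk counts is that the move list has 2^n - 1 entries, so it grows exponentially.

    # Textbook recursive Towers of Hanoi; the move list has 2**n - 1 entries,
    # which is why enumerating every step gets long fast as disks are added.
    def hanoi(n, src="A", aux="B", dst="C", moves=None):
        if moves is None:
            moves = []
        if n > 0:
            hanoi(n - 1, src, dst, aux, moves)   # park n-1 disks on the spare peg
            moves.append((src, dst))             # move the largest disk
            hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top
        return moves

    print(len(hanoi(3)))    # 7 moves
    print(len(hanoi(10)))   # 1023 moves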
I think if you tried that with some random humans you'd also find quite a few fail. I'm not sure if that shows humans have an inability to reason and understand although sometimes I wonder.
Could the LLM "write code to solve it" if no human ever wrote code to solve it? Could it output "steps to solve it" if no human ever wrote about it before to have in its training data? The answer is no.
Could a human code the solution if they didn't learn to code from someone else? No. Could they do it if someone didn't tell them the rules of towers of hanoi? No.
A human can learn and understand the rules, an LLM never could. LLMs have famously been incapable of beating humans in chess, a seemingly simple thing to learn, because LLMs can't learn - they just predict the next word and that isn't helpful in solving actual problems, or playing simple games.
It's not some "magical way"--the ways in which a human thinks that an LLM doesn't are pretty obvious, and I dare say self-evidently part of what we think constitutes human intelligence:
- We have a sense of time (ie, ask an LLM to follow up in 2 minutes)
- We can follow negative instructions ("don't hallucinate, if you don't know the answer, say so")
We only have a sense of time in the presence of inputs. Stick a human into a sensory deprivation tank for a few hours and then ask them how much time has passed afterwards. They wouldn't know unless they managed to maintain a running count throughout, but that's a trick an LLM can also do (so long as it knows generation speed).
The general notion of passage of time (i.e. time arrow) is the only thing that appears to be intrinsic, but it is also intrinsic for LLMs in a sense that there are "earlier" and "later" tokens in its input.
Sometimes LLMs hallucinate or bullshit, sometimes they don't, sometimes humans hallucinate or bullshit, sometimes they don't. It's not like you can tell a human to stop being delusional on command either. I'm not really seeing the argument.
If a human hallucinates or bullshits in a way that harms you or your company you can take action against them
That's the difference. AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, therefore it cannot be trusted
It's more that "thinking" is a vague term that we don't even understand in humans, so for me it's pretty meaningless to claim LLMs think or don't think.
There's this very cliched comment to any AI HN headline which is this:
"LLM's don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."
or its cousin:
"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time UNLIKE humans who generate text one character a time by typing with their fleshy fingers"
Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought. A biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, seek to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.
In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.
In my view, absolutely yes. Thinking is a means to an end. It's about acting upon these motivations by abstracting, recollecting past experiences, planning, exploring, innovating. Without any motivation, there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".
Again the problem is that what "thinking" is totally vague. To me if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.
But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely with the purpose to mimic human language use. It is the humans that produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".
Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", or having a "need for safety".
> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
That's creativity which is a different question from thinking.
I guess our definition of "thinking" is just very different.
Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.
But no human would ever perform like that, without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.
I disagree. Creativity is coming up with something out of the blue. Thinking is using what you know to come to a logical conclusion. LLMs so far are not very good at the former but getting pretty damn good at the latter.
> Thinking is using what you know to come to a logical conclusion
What LLMs do is using what they have _seen_ to come to a _statistical_ conclusion. Just like a complex statistical weather forecasting model. I have never heard anyone argue that such models would "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.
I think people misunderstand when they see that it's a "statistical model". That just means that out of a range of possible answers, it picks in a humanlike way. If the logical answer is the humanlike thing to say then it will be more likely to sample it.
In the same way a human might produce a range of answers to the same question, so humans are also drawing from a theoretical statistical distribution when you talk to them.
It's just a mathematical way to describe an agent, whether it's an LLM or human.
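To make "drawing from a statistical distribution" concrete, here's a toy sketch of next-token sampling; the candidate tokens and scores are made up for illustration:

    import math, random

    # Made-up scores for possible continuations of "The capital of France is".
    logits = {"Paris": 8.1, "Lyon": 3.2, "banana": -4.0}
    temperature = 1.0

    # Softmax turns scores into probabilities; sampling picks one continuation.
    exps = {tok: math.exp(s / temperature) for tok, s in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)    # "Paris" carries almost all of the probability mass
    print(token)    # usually "Paris": the "logical" answer is also the likeliest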
They're linked but they're very different. Speaking from personal experience, It's a whole different task to solve an engineering problem that's been assigned to you where you need to break it down and reason your way to a solution, vs. coming up with something brand new like a song or a piece of art where there's no guidance. It's just a very different use of your brain.
Thinking is better understood than you seem to believe.
We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings, and use that to make choices, it isn't usually considered to be intelligence.
Organizational complexity is one of the requirements for intelligence, and LLMs do not reach that threshold. They have vast amounts of data, but organizationally they are still simple - thus "AI slop".
Who says what degree of complexity is enough? Seems like deferring the problem to some other mystical arbiter.
In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal. A human went and put minimal effort into making something with an AI and put it online, producing slop, because the actual informational content is very low.
> In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal
And you'd be disagreeing with the vast amount of research into AI. [0]
> Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.
But it does mention that prompt complexity is not related to the output.
It does say that there is a maximal complexity that LLMs can have - which leads us back to... Intelligence requires organizational complexity that LLMs are not capable of.
This seems backwards to me. There's a fully understood thing (LLMs)[1] and a not-understood thing (brains)[2]. You seem to require a person to be able to fully define (presumably in some mathematical or mechanistic way) any behaviour they might observe in the not-understood thing before you will permit them to point out that the fully understood thing does not appear to exhibit that behaviour. In short you are requiring that people explain brains before you will permit them to observe that LLMs don't appear to be the same sort of thing as them. That seems rather unreasonable to me.
That doesn't mean such claims don't need to be made as specific as possible. Just saying something like "humans love but machines don't" isn't terribly compelling. I think mathematics is an area where it seems possible to draw a reasonably intuitively clear line. Personally, I've always considered the ability to independently contribute genuinely novel pure mathematical ideas (i.e. to perform significant independent research in pure maths) to be a likely hallmark of true human-like thinking. This is a high bar and one AI has not yet reached, despite the recent successes on the International Mathematical Olympiad [3] and various other recent claims. It isn't a moved goalpost, either - I've been saying the same thing for more than 20 years. I don't have to, and can't, define what "genuinely novel pure mathematical ideas" means, but we have a human system that recognises, verifies and rewards them so I expect us to know them when they are produced.
By the way, your use of "magical" in your earlier comment, is typical of the way that argument is often presented, and I think it's telling. It's very easy to fall into the fallacy of deducing things from one's own lack of imagination. I've certainly fallen into that trap many times before. It's worth honestly considering whether your reasoning is of the form "I can't imagine there being something other than X, therefore there is nothing other than X".
Personally, I think it's likely that to truly "do maths" requires something qualitatively different to a computer. Those who struggle to imagine anything other than a computer being possible often claim that that view is self-evidently wrong and mock such an imagined device as "magical", but that is not a convincing line of argument. The truth is that the physical Church-Turing thesis is a thesis, not a theorem, and a much shakier one than the original Church-Turing thesis. We have no particularly convincing reason to think such a device is impossible, and certainly no hard proof of it.
[1] Individual behaviours of LLMs are "not understood" in the sense that there is typically not some neat story we can tell about how a particular behaviour arises that contains only the truly relevant information. However, on a more fundamental level LLMs are completely understood and always have been, as they are human inventions that we are able to build from scratch.
[2] Anybody who thinks we understand how brains work isn't worth having this debate with until they read a bit about neuroscience and correct their misunderstanding.
[3] The IMO involves problems in extremely well-trodden areas of mathematics. While the problems are carefully chosen to be novel they are problems to be solved in exam conditions, not mathematical research programs. The performance of the Google and OpenAI models on them, while impressive, is not evidence that they are capable of genuinely novel mathematical thought. What I'm looking for is the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. That isn't here yet, and if and when it arrives it really will turn maths on its head.
LLMs are absolutely not "fully understood". We understand how the math of the architectures work because we designed that. How the hundreds of gigabytes of automatically trained weights work, we have no idea. By that logic we understand how human brains work because we've studied individual neurons.
And here's some more goalpost-shifting. Most humans aren't capable of novel mathematical thought either, but that doesn't mean they can't think.
We don't understand individual neurons either. There is no level on which we understand the brain in the way we very much do understand LLMs. And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs. As I mentioned in [1] what we can't do is "explain" individual behaviours with simple stories that omit unnecessary details, but that's just about desiring better (or more convenient/useful) explanations than the utterly complete one we already have.
As for most humans not being mathematicians, it's entirely irrelevant. I gave an example of something that so far LLMs have not shown an ability to do. It's chosen to be something that can be clearly pointed to and for which any change in the status quo should be obvious if/when it happens. Naturally I think that the mechanism humans use to do this is fundamental to other aspects of their behaviour. The fact that only a tiny subset of humans are able to apply it in this particular specialised way changes nothing. I have no idea what you mean by "goalpost-shifting" in this context.
> And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs
We understand it at that low level, but through training LLMs converge to something larger than the raw weights: a structure emerges in those weights that allows them to perform functions, and that part we do not understand. We just observe it as a black box and experiment at the level of "we feed this kind of input into the black box and receive this kind of output."
Yes, and the name for this behaviour is "being scientific".
Imagine a process called A, and, as you say, we've no idea how it works.
Imagine, then, a new process, B, comes along. Some people know a lot about how B works, most people don't. But the people selling B, they continuously tell me it works like process A, and even resort to using various cutesy linguistic tricks to make that feel like it's the case.
The people selling B even go so far as to suggest that if we don't accept a future where B takes over, we won't have a job, no matter what our poor A does.
What's the rational thing to do, for a sceptical, scientific mind? Agree with the company, that process B is of course like process A, when we - as you say yourself - don't understand process A in any comprehensive way at all? Or would that be utterly nonsensical?
Again, I'm not claiming that LLMs can think like people (I don't know that). I just don't like that people confidently claim that they can't, just because they work differently from biological brains. How they work doesn't matter when it comes to the Turing test (which they passed a while ago, btw); the test only cares about what the system says.
My favourite game is to try to get them to be more specific - every single time they manage to exclude a whole bunch of people from being "intelligent".
When I write a sentence, I do it with intent, with specific purpose in mind. When an "AI" does it, it's predicting the next word that might satisfy the input requirement. It doesn't care if the sentence it writes makes any sense, is factual, etc, so long as it is human readable and follows grammatical rules. It does not do this with any specific intent, which is why you get slop and just plain wrong output a fair amount of the time. Just because it produces something that sounds correct sometimes does not mean it's doing any thinking at all. Yes, humans do actually think before they speak; LLMs do not, cannot, and will not, because that is not what they are designed to do.
Actually LLMs crunch through half a terabyte of weights before they "speak". How are you so confident that nothing happens in that immense amount of processing that has anything to do with thinking? Modern LLMs are also trained to have an inner dialogue before they output an answer to the user.
When you type the next word you also put a word that fits some requirement. That doesn't mean you're not thinking.
"crunch through half a terabyte of weights" isn't thinking. Following grammatical rules to produce a readable sentence isn't thought, it's statistics, and whether that sentence is factual or foolish isn't something the LLM cares about. If LLMs didn't so constantly produce garbage, I might agree with you more.
They don't follow "grammatical rules", they process inputs with an incredibly large neural net. It's like saying humans aren't really thinking because their brains are made of meat.
"Unstructured data learners and generators" is probably the most salient distinction for how current system compare to previous "AI systems" examples (NLP, if-statements) that OP mentioned.
I think it's fine to keep the name, we just have to realise it's like magic. Real magic can't be done; magic that can be done is just tricks. AI that works is just tricks.
I think the "magic" that we've found a common toolset of methods - embeddings and layers of neural networks - that seem to reveal useful patterns and relationships from a vast array of corpus of unstructured analog sensors (pictures, video, point clouds) and symbolic (text, music) and that we can combine these across modalities like CLIP.
It turns out we didn't need a specialist technique for each domain, there was a reliable method to architect a model that can learn itself, and we could already use the datasets we had, they didn't need to be generated in surveys or experiments. This might seem like magic to an AI researcher working in the 1990's.
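For a concrete sense of the cross-modal point, here is a rough sketch using the public OpenAI CLIP checkpoint via Hugging Face transformers; the image path and captions are placeholders of my own, not part of anyone's argument above:

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Cross-modal similarity with a CLIP-style model: images and text are
    # embedded into one shared space, so they can be compared directly.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("some_photo.jpg")
    texts = ["a photo of a cat", "a sheet of music", "a lidar point cloud"]

    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=-1)   # shape: (1, len(texts))
    print(dict(zip(texts, probs[0].tolist())))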
A lot of this is marketing bullshit. AFAIK, even "machine learning" was a term made up by AI researchers when the AI winter hit who wanted to keep getting a piece of that sweet grant money.
And "neural network" is just a straight up rubbish name. All it does is obscure what's actually happening and leads the proles to think it has something to do with neurons.
One, I doubt your premise ever happens in a meaningfully true and visible way -- but perhaps more important, I'd say you're factually wrong in terms of "what is called AI?"
Among most people: the things you're thinking of were debatably AI, while the things we have today are AI (again, not due to any concrete definition, simply due to accepted usage of the term).
Artificial Intelligence is a whole subfield of Computer Science.
Code built of nothing but if/else statements controlling the behavior of game NPCs is AI.
A* search is AI.
NLP is AI.
ML is AI.
Computer vision models are AI.
LLMs are AI.
None of these are AGI, which is what does not yet exist.
One of the big problems underlying the current hype cycle is the overloading of this term, and the hype-men's refusal to clarify that what we have now is not the same type of thing as what Neo fights in the Matrix. (In some cases, because they have genuinely bought into the idea that it is the same thing, and in all cases because they believe they will benefit from other people believing it.)
Eh, I'd be fairly comfortable delineating between AI and other CS subfields based on the idea of higher-order algorithms. For most things, you have a problem with a fixed set of fixed parameters, and you need a solution in the form of a fixed solution (e.g., 1+1=2). In software, we mostly deal with one step up from that: we solve general-case problems, for a fixed set of variable parameters, and we produce algorithms that take the parameters as input and produce the desired solution (e.g., f(x,y) = x + y). The field of AI largely concerns itself with algorithms that produce models to solve entire classes of problem, that take the specific problem description itself as input (e.g., SAT solvers, artificial neural networks, etc., where g("x+y") => f(x,y) = x + y). This isn't a perfect definition of the field (it ends up catching some things like parser generators and compilers that aren't typically considered "AI"), but it does pretty fairly, IMO, represent a distinct field in CS.
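A toy sketch of that delineation, just to pin down the notation: g takes a problem description and returns a function f that solves concrete instances of it. The names and design here are mine, purely for illustration, not a claim about how any real AI system works.

    # g is a tiny expression "compiler": given a description like "x + y",
    # it returns a solver f for concrete instances of that problem.
    def g(problem: str):
        variables = sorted({ch for ch in problem if ch.isalpha()})

        def f(**values):
            # Evaluate the described expression for one concrete assignment.
            return eval(problem, {"__builtins__": {}}, values)  # toy only

        f.parameters = variables
        return f

    add = g("x + y")
    print(add(x=1, y=1))   # 2
    print(add(x=3, y=4))   # 7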
I think I misinterpreted your comment as not understanding the AI effect, but actually you're just summarizing it kind of concisely and sarcastically?
LLMs are one of the first technologies that makes me think the term "AI effect" needs to be updated to "AGI effect". The effect is still there, but it's undeniable that LLMs are capable of things that seem impossible with classical CS methods, so they get to retain the designation of AI.
I like the “normal tech” lens: diffusion and process change matter more than model wow. Ask a boring question—what got cheaper? If the answer is drafting text/code and a 10–30% cut in time-to-ship, that’s real; if it’s just a cool demo, it isn’t.
> Adoption is also hampered by the fact that much knowledge is tacit and organisation-specific, data may not be in the right format and its use may be constrained by regulation.
The article mentions regulation as a problem three times. It never says what such regulations are. Is it the GDPR and the protection of people's data? Is it anti-discrimination regulations that AI bias breaks regularly? We do not know, because the article does not say. Probably because they are knowledgeable enough to avoid publicly attacking citizens' rights, but they lack the moral integrity to drop the anti-regulatory argument.
The potentially "explosive" part of AI was that it could be self-improving. Using AI to improve AI, or AI improving itself in an exponential growth until it becomes super-human. This is what the "Singularity" and AI "revolution" is based on.
But in the end, despite saying AI has PhD-level intelligence, the truth is that even AI companies can't get AI to help them improve faster. Anything slower than exponential is proof that their claims aren't true.
Explosions rely on having a lot of energy producing material that can suddenly go off. Even if AI starts self improving it's going to be limited by the amount of energy it can get from the power grid which is kind of maxed out at the moment. It may be exponential growth like weeds growing, ie. gradually and subject to human control, rather than like TNT detonating.
That seems like a possibly mythical critical point, at which a phase transition will occur that makes the AI system qualitatively different from its predecessors. Exponential to the limit of infinity.
All the mad rush of companies and astronomical investments are being made to get there first, counting on this AGI to be a winner-takes-all scenario, especially if it can be harnessed to grow the company itself. The hype is even infecting governments, for economic and national interest. And maybe somewhere a mad king dreams of world domination.
What world domination though? If such a thing ever existed for example in the US, the government would move to own and control it. No firm or individual would be allowed to acquire and exercise that level of power.
LLMs are already superhuman at many tasks. You're also wrong about AI not accelerating AI development. There was at least one paper published this year showing just such a result. It's just beginning.
I've come to the conclusion that it is a normal, extremely useful, dramatic improvement over web 1.0. It's going to
1) obsolete search engines powered by marketing and SEO, and give us paid search engines whose selling points are how comprehensive they are, how predictably their queries work (I miss the "grep for the web" they were back when they were useful), and how comprehensive their information sources are.
2) Eliminate the need to call somebody in the Philippines awake in the middle of the night, just for them to read you a script telling you how they can't help you fix the thing they sold you.
3) Allow people to carry local compressed copies of all written knowledge, with 90% fidelity, but with references and access to those paid search engines.
And my favorite part, which is just a footnote I guess, is that everybody can move to a Linux desktop now. The chatbots will tell you how to fix your shit when it breaks, and in a pedagogical way that will gradually give you more control and knowledge of your system than you ever thought you were capable of having. Or you can tell it that you don't care how it works, just fix it. Now's the time to switch.
That's your free business idea for today: LLM Linux support. Train it on everything you can find, tune it to be super-clippy. Charge people $5 a month. The AI that will free you from their AI.
Now we just need to annihilate web 2.0, replace it with peer-to-peer encrypted communications, and we can leave the web to the spammers and the spies.
That theory was tried when Walmart sold Linux computers but it didn't work. People returned them because they couldn't run their usual software - Excel and the like.
There were _so many_ articles in the late 80s and early 90s about how computers were a big waste of money. And again in the late 90s, about how the internet was a waste of money.
We aren't going to know the true consequences of AI until kids that are in high school now enter the work force. The vast majority of people are not capable of completely reordering how they work. Computers did not help Sally Secretary type faster in the 1980s. That doesn't mean they were a waste of money.
You mean the same kids that are currently cheating their way through their education at record rates due to the same technology? Can't say I'm optimistic.
> The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise
> - Socrates (399 BC)
> The world is passing through troublous times. The young people of today think of nothing but themselves. They have no reverence for parents or old age. They are impatient of all restraint. They talk as if they knew everything, and what passes for wisdom with us is foolishness with them. As for the girls, they are forward, immodest and unladylike in speech, behavior and dress
What if this paper actually took things seriously?
A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.
These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch.
Sometimes I feel like I'm losing my mind when it's obvious that GPT-5 could do better than Narayanan and Kapoor did in their paper at understanding historical analogies.
LLMs do have to be supervised by humans and do not perceive context or correct errors, and it’s not at all clear this is going to change any time soon. In fact it’s plausible that this is due to basic problems with the current technology. So if you’re right, sure, but I’m certainly not taking that as a given.
They do already correct errors since OpenAI introduced its o1 model. Since then the improvements have been significant. It seems practically certain that their capabilities will keep growing rapidly. Do you think AI will suddenly stagnate such that models are not much more capable in five years than they are now? That would be absurd. Look back five years, and we are practically in the AI stone age.
While I feel silly to take seriously something printed in The Economist, I would like to mention that people tend to overestimate the short-term impact of any technology and underestimate its long-term impacts. Maybe AI will follow the same route?
AI is probably more of an amplifier for technological change than fire or digital computers; but IDK why we would use a different model for this technology (and teams and coping with change).
> [ "From Comfort Zone to Performance Management" (2009) ] also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages
Transforming, Performing, Reforming, [Adjourning]
Carnall's Coping Cycle: Denial, Defense, Discarding, Adaptation, and Internalization
People said similar things about the internet: never before has all human knowledge been available in one place (they forgot about libraries apparently).
I think it's more likely that AI is just a further concentration of human knowledge. It makes it even more accessible but will AI actually add to it?
Why doesn't the logical culmination of technology require quantum computers?
Or the merging of human and machine brains?
Or a solar system-scale particle accelerator?
Or whatever the next technology is that we aren't even aware of yet?
If you read the paper, they make a good case that AI is just a normal technology. They're a bit dismissive, but they're not alone in that. The AI sector has been all too much hype and far too little substance.
What do they mean, "what if"? It is based on something similar to what has existed for around 4 decades. It is of course more efficient and able to search through and combine more data, but it isn't new. It is just a normal technology, and this was why I and many others were shocked at the initial hype.
The unusual feature of AI now as opposed to the last 4 decades is that it is approaching human intelligence. Assuming that progress continues, exceeding human intelligence will have different economic consequences to being a fair bit worse as was the case mostly.
> It is similarly based to something that has existed for around 4 decades.
Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something which had a nice ramped progress, like computer graphics, and instead of ramping up we went from '1985' to '2025' in progress over the course of a few months, do you think there wouldn't be a lot of hype?
But we have ramped up slowly, it's just not been given in quite this form before. We have previously only used it in settings where accuracy is a focus.
Restaurants have machines for washing dishes. They do pay people to do the dish washing, but commercial dishwashing machines exist, and they work differently than home machines. They're large stainless steel monsters, some with a conveyor belt, others operate vertically. They usually use high temp water rather than soap to do the cleaning.
https://archive.ph/NOg8I
LLMs are seen as, among other things, a tool for people to write software with.
Where is the software that wouldn't exist without LLMs? If we can't point to it, maybe they don't actually work for that yet. The claim I'm questioning is that, "within tech, there seem to have been explosive changes and development of new products."
What new products?
I do see explosive changes and development of new spam, new YouTube videos, new memes (especially in Italian), but those aren't "within tech" as I understand the term.
I do agree that there's a lot of garbage and navel-gazing that is directly downstream from the creation of LLMs. Because it's easier to task and evaluate an LLM [or network of LLMs] with generation of code, most of these products end up directly related to the production of software. The professional production of software has definitely changed, but sticky impact outside of the tech sector is still brewing.
I think there is a lot of potential, outside of the direct generation of software but still maybe software-adjacent, for products that make use of AI agents. It's hard to "generate" real world impact or expertise in an AI system, but if you can encapsulate that into a function that an AI can use, there's a lot of room to run. It's hard to get the feedback loop to verify this and most of these early products will likely die out, but as I mentioned, agents are still new on the timeline.
As an example of something that I mean that is software-adjacent, have a look at Square AI, specifically the "ask anything" parts: https://squareup.com/us/en/ai
I worked on this and I think that it's genuinely a good product. An arbitrary seller on the Square platform _can_ do aggregation, dashboarding, and analytics for their business, but that takes time and energy, and if you're running a business it can be hard to find that time. Putting an agent system in the backend that has access to your data, can aggregate and build modular plotting widgets for you, and can execute whenever you ask it a question is something that objectively saves a seller's time. You could have made such a thing without modern LLMs, but it would be substantially more expensive in terms of engineering research, time, and effort to put together a POC and bring it to production, making it a non-starter before [let's say] two years ago.
AI here is fundamental to the product functioning, but the outcome is a human being saving time while making decisions about their business. It is a useful product that uses AI as a means to a productive end, which, to me, should be the goal of such technologies.
Yes, but I'm asking about new non-AI products. I agree that lots of people are integrating AI into products, which makes products that wouldn't have existed otherwise. But if the answer to "where's the explosive changes and development of new products?" is 100% composed of integrating AI into their products, that means current AI isn't actually helping people write software, much. It's just giving them more software to write.
That doesn't entail that current AI is useless! Or even non-revolutionary! But it's a different kind of software development revolution than what I thought you were claiming. You seem to be saying that the relationship of AI to software development is similar to the relationship of the Japanese language, or raytracing, or early microcomputers to software development. And I thought you were saying that the relationship of AI to software development was similar to the relationship of compilers, or open source, or interactive development environments to software development.
It also doesn't entail that six months from now AI will still be only that revolutionary.
For better or for worse, AI enables more, faster software development. A lot of that is garbage, but quantity has a quality all its own.
If you look at, e.g., this clearly vibe-coded app about vibe coding [https://www.viberank.app/], ~280 people generated 444.8B tokens within the block of time where people were paying attention to it. If 1000 tokens is 100 lines of code, that's on the order of 44 billion lines of code that would not exist otherwise. Maybe those lines of code are new products, maybe they're not; maybe those people would have written a bunch of code otherwise, maybe not. I'd call that an explosion either way.
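For what it's worth, here is the back-of-the-envelope conversion, taking the site's token count at face value and the 1000-tokens-per-100-lines ratio as the stated assumption:

    # Back-of-the-envelope: convert the reported token count into lines of
    # code, using the assumption above that ~1000 tokens ~= 100 lines.
    total_tokens = 444.8e9            # tokens reported on the leaderboard
    lines_per_token = 100 / 1000      # 100 lines per 1000 tokens
    estimated_lines = total_tokens * lines_per_token
    print(f"{estimated_lines:,.0f} lines")   # ~44,480,000,000 (~44 billion)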
> For better or for worse, AI enables more, faster software development.
So, AI is to software what muscle cars were to air emissions quality?
A whole lot of useless, unabated toxic garbage?
I've definitely read a lot of books that wouldn't exist without WYSIWYG word processors, although MacWrite would have done just as well. Heck, NaNoWriMo probably wouldn't.
I've been reading Darwen & Date lately, and they seem to have done the typesetting for the whole damn book in Word—which suggests they couldn't get anyone else to do it for them and didn't know how to do a good job of it. But they almost certainly couldn't have gotten a major publisher to publish it as a mimeographed typewriter manuscript.
Your turn.
My point is that these are accelerating technologies.
So you're not going to see code that wouldn't exist without LLMs (or books that wouldn't exist without Word); you're going to see more code (or more books). There is no direct way to track "written code" or "people who learned more about their hobbies" or "teachers who saved time lesson planning", etc.
You must have failed to notice that you were replying to a comment of mine where I gave a specific example of a book that I think wouldn't exist without Word (or similar WYSIWYG word processors), because you're asserting that I'm never going to see what I am telling you I am currently seeing.
Generally, when there's a new tool that actually opens up explosive changes and development of new products, at least some of the people doing the exploding will tell you about it, even if there's no direct way to track it, such as Darwen & Date's substandard typography. It's easy to find musicians who enthuse about the new possibilities opened up by digital audio workstations, and who are eager to show you the things they created with them. Similarly for video editors who enthused about the Video Toaster, for programmers who enthused about the 80386, and electrical engineers who enthused about FPGAs. There was an entire demo scene around the Amiga and another entire demo scene around the 80386.
Do people writing code with AI today have anything comparable? Something they can point to and say, "Look! I wrote this software because AI made it possible!"?
It's easy to answer that question for, for example, visual art made with AI.
I'm not sure what you mean about "accelerating technologies". WYSIWYG word processors today are about the same as Bravo in 01979. HTML is similar but both better and worse. AI may have a hard takeoff any day that leaves us without a planet, who knows, but I don't think that's something it has in common with Microsoft Word.
> What new non-AI products do you think wouldn't have existed without current AI?
AI slop is a product
You mean, like, SEO? It's a product in the same sense that perchloroethylene-contaminated groundwater is a product of dry-cleaning plants.
I think the payment model is still not there, which is making everything blurry. Until we figure out how much people have to pay to use it and all the services built on its back, it will remain challenging to figure out the full value prop. That, and a lot of companies are going to go belly up when they have to start paying the real cost instead of being in a growth-acquisition phase.
I don’t think a payment model can be figured out until the utility of the technology justifies the true cost of training and running the models. As you say, right now it’s all subsidized based on the belief it will become drastically more useful. If that happens I think the payment model becomes simple.
There's enough solid FOSS tooling out there between vLLM and Qwen3 Apache 2.0 models that you can get a pretty good assistant system running locally. That's still in the software creation domain rather than worldwide impact, but that's valuable and useful right now.
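As a sketch of what that local setup can look like, vLLM's offline API with an Apache-2.0 Qwen3 checkpoint is only a few lines; the specific model name and prompt here are just examples, so swap in whatever fits your hardware:

    from vllm import LLM, SamplingParams

    # Offline local inference with vLLM. "Qwen/Qwen3-8B" is one of the
    # Apache-2.0 Qwen3 checkpoints; smaller ones exist for modest GPUs.
    llm = LLM(model="Qwen/Qwen3-8B")
    params = SamplingParams(temperature=0.7, max_tokens=512)

    prompts = ["Write a shell one-liner that lists the 10 largest files under /var/log."]
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)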
The immaterial units are arbitrary, so 'agents' are themselves arbitrary, ie illusory. They will not arrive except as being wet nursed infinitely. The developers neglected to notice the fatal flaw, there are specific targets but automating the arbitrary never reaches them, never. It's an egregious monumental fly in the ointment.
Okay, so AI isn’t exceptional, but I’m also not exceptional. I run on the same tech base as any old chimpanzee, but at one point our differences in degree turned into one of us remaining “normal” and the other burning the entire planet.
Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.
I don't think LLMs are building towards an AI singularity at least.
I also wonder if we can even power an AI singularity. I guess it depends on what the technology is. But it is taking us more energy than really reasonable (in my opinion) just to produce and run frontier LLMs. LLMs are this really weird blend of stunningly powerful, yet with a very clear inadequacy in terms of sentient behaviour.
I think the easiest way to demonstrate that is that it did not take us consuming the entirety of human textual knowledge to form a much stronger world model.
True, but our "training" has been a billion years of evolution and multimodal input every waking moment of our lives. We come heavily optimised for reality.
I see no reason why not.
There was a lot of "LLMs are fundamentally incapable of X" going around - where "X" is something that LLMs are promptly demonstrated to be at least somewhat capable of, after a few tweaks or some specialized training.
This pattern has repeated enough times to make me highly skeptical of any such claims.
It's true that LLMs have this jagged capability profile - less so than any AI before them, but much more so than humans. But that just sets up a capability overhang. Because if AI gets to "as good as humans" at its low points, the advantage at its high points is going to be crushing.
If you use non-constructive reasoning¹ then you can argue for basically any outcome & even convince yourself that it is inevitable. The basic example is as follows, there is no scientific or physical principle that can prevent the birth of someone much worse than Hitler & therefore if people keep having children one of those children will inevitably be someone who will cause unimaginable death & destruction. My recommendation is to avoid non-constructive inevitability arguments using our current ignorant state of understanding of physical laws as the main premise b/c it's possible to reach any conclusion from that premise & convince yourself that the conclusion is inevitable.
¹https://gemini.google.com/share/d9b505fef250
I agree that the mere theoretical possibility isn’t sufficient for the argument, but you’re missing the much less refutable component: that the inevitability is actively driven by universal incentives of competition.
But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…
My current guess is ecological collapse & increasing frequency of system shocks & disasters. Basically Blade Runner 2049 + Children of Men type of outcome.
None of them.
Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelist Rapture that is going to happen any day now, but never does.
You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.
This is human psychology at work.
If you look at timescales large enough you will find that plenty of extinction level events actually do happen (the anthropocene is right here).
We are living in a historically exceptional time of geological, environmental, and ecological stability. I think that saying that nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence for your future safety. It's a bold call to inaction.
"nothing ever happens."
The observation is, humans tend to think that annihilation is inevitable, it hasn't happened yet so therefore it will never be inevitable.
In fact, _anything_ could happen. Past performance does not guarantee future results.
If you need cognitive behavioral therapy, fine.
But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.
What's somewhat funny here is, if you're wrong, it doesn't matter. But that isn't the same as being right.
> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop
And yet there _will_ (eventually) be one generation that is right.
> And yet there _will_ (eventually) be one generation that is right.
Most likely outcome would be that humans evolve into something altogether different rather than go extinct.
The Fermi Paradox might want to have a word here...
Particularly considering the law of large numbers in play, where incalculably many chances have so far shown only one sign of technologically-capable life (ours), and zero signs of any other example of a tech species evolving into something else or even passing the Great Filter.
The Fermi Paradox overestimates the likelihood of intelligent life outside of earth. We haven't even found hard evidence of life anywhere outside of our planet. There's not even a verifiably hospitable planet for water-based lifeforms anywhere within dozens of lightyears from earth. Even if a hospitable planet exists within a range we can one day get to, unless it has the same volcanic properties and makeup as earth, it's most probable that life itself never even developed there.
Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth and about a million unlikely events had to converge in order for that to occur. It's not a natural consequence of evolution, it's an accident of Earth's unique history and several near-extinction level events and drastic climate changes had to occur to make it possible.
The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.
Irrelevant but I like this pattern of using Gemini (or AI outputs in general) as sources. Please continue to do so and I encourage any readers to also adopt this pattern. I will also try to implement this pattern.
The sources are in the report. Gemini provides actual references for all the claims made. You'd know that if you actually looked but lack of intellectual rigor is expected when people are afraid of actually scrutinizing their beliefs of non-constructive inevitability.
Maybe you misread the post you're answering to here, or are you suspecting sarcasm? The poster commended your usage of the footnote with the Gemini convo, as far as I can tell.
Laid it on a little too thick to be sincere & more generally I don't comment on internet forums to be complimented on my response style. Address the substance of my arguments or just save yourself the keystrokes.
It was a compliment and I was hoping to nudge the behavior of other HN comments.
If you really can't see the irony of using AI to make up your thoughts on AI then perhaps there's keystrokes to be saved on your end as well.
I recommend you address the content & substance of the argument in any further responses to my posts or if you can't do that then figure out a more productive way to spend your time. I'm sure there is lots of work to be done in automated theorem proving.
I'm pretty sure a lot of work has gone into making institutions resistant to a potential future super-Hitler. Whether those efforts will be effective or not, it is a very real concern, and it would be absurd to ignore it on the grounds of "there is probably some limit to tyranny we're not yet aware of which is not too far beyond what we've previously experienced." I would argue a lot more effort should have gone into preventing the original Hitler, whose rise to power was repeatedly met with the chorus refrain "How much worse can it get?"
This isn't just an AI thing. There are a lot of non-constructive ideologies, like communism, where simply getting rid of "oppressors" will magically unleash the promised utopia. When you give these people a constructive way to accomplish their goals, they will refuse, call you names and show their true colors. Their criticism is inherently abstract and can never have a concrete form, which also makes it untouchable by outside criticism.
We’ll manage to make our own survival on this planet less probable, even without the help of “AI”.
I don't know what reality you're living in, but there are more people on this planet than ever in history and most of them are quite well fed.
And they have nuclear weapons and technology that may be destabilizing the ecosystem that supports their life.
It’s wrong to commit to either end of this argument, we don’t know how it’ll play out, but the potential for humans drastically reducing our own numbers is very much still real.
The cult of efficiency will end in the only perfectly efficient world--one without us.
I’m fed up of hearing that nonsense, no it won’t. Efficiency is a human-defined measure of observed outcomes versus desired outcomes. This is subject to change as much as we are. If we do optimize ourselves to death, it’ll be because it’s what we ultimately want to happen. That may be true for some people but certainly not everyone.
The equilibrium of ecology, without human interference, could be considered perfect efficiency. It's only when we get in there with our theories about mass production and consumption that we muss it up. We seem to forget that our well-being isn't self-determined, but dependent on the environment. But, like George Carlin said, "the Earth isn't going anywhere...WE ARE!"
It's quite telling how much faith you put in humanity though, you sound fully bought in.
I think the concern is that humans have very poor track record of defining efficiency let alone implementing solutions that serve it.
The singularity will involve quite a bit more complexity than binary counting, arbitrary words and images, and prediction. These were mirages that will be wiping out both Wall Street and our ecology.
Here’s what amazes me about the reaction to LLMs: they were designed to solve NLP, stunningly did so, and then immediately everyone started asking why they can’t do math well or reason like a human being.
LLMs were pitched as 'genuinely intelligent' rather than 'solving NLP'.
We had countless breathless articles about free will at the time, and though this has now decreased, the discourse is still warped by claims of 'PhD-level intelligence'.
The backlash isn't against LLMs, it's against lies.
Because the heads of tech companies jumped on TV and said that AGI was around the corner to basically prepare for job losses.
They just can't shut up about how AI is going to either save us all or kill us all.
The VC economy depends on a hype cycle. If one doesn't exist, they'd manufacture one (see web 3.0), but LLMs were perfect.
maybe a classic case of the sales team selling features you haven't built yet
AI being normal technology would be the expected outcome, and it would be nice if it just hurried up and happened so I could stop seeing so much spam around AI actually being something much greater than normal technology
"So a paper published earlier this year by Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, is notable for the unfashionably sober manner in which it treats AI: as "normal technology"."
The paper:
https://thedocs.worldbank.org/en/doc/d6e33a074ac9269e4511e5d...
"Differences about the future of AI are often partly rooted in differing interpretations of evidence about the present. For example, we strongly disagree with the characterization of generative AI adoption as rapid (which reinforces our assumption about the similarity of AI diffusion to past technologies)."
Well, for starters, it would make The Economist's recent article on "What if AI made the world's economic growth explode?" [1] look like the product of overly credulous suckers for AI hype.
[1] https://www.economist.com/briefing/2025/07/24/what-if-ai-mad...
This comment reminds me of the forever present HN comments that take a form like "HN is so hypocritical. In this thread commenters are saying they love X, when just last week in a thread about Y, commenters were saying that they hated X."
All articles published by the Economist are reviewed by its editorial team.
Also, the Economist publishes all articles anonymously so the individual author isn't known. As far as I know, they do this so we take all articles and opinions as the perspective of the Economist publication itself.
Even if articles are reviewed by their editors (which I assume is true of all serious publications) they are probably reviewing for some level of quality and relevance rather than cross-article consistency. If there are interesting arguments for and against a thing it’s worth hearing both imo.
I’m pretty sure the “what if” in that article was meant in earnest. That article was playing out a scenario, in a nod to the ai maximalists. I don’t think it was making any sort of prediction or actually agreeing with those maximalists.
It was the central article of the issue, the one that dictated the headline and image on the cover for the week, and came with a small coterie of other articles discussing the repercussions of such an AI.
If it was disagreeing with AI maximalists, it was primarily in terms of the timeline, not in terms of the outcomes or inevitability of the scenario.
This doesn't seem right to me. From the article I believe you are referencing ("What if AI made the world’s economic growth explode?"):
> If investors thought all this was likely, asset prices would already be shifting accordingly. Yet, despite the sky-high valuations of tech firms, markets are very far from pricing in explosive growth. "Markets are not forecasting it with high probability," says Basil Halperin of Stanford, one of Mr Chow's co-authors. A draft paper released on July 15th by Isaiah Andrews and Maryam Farboodi of MIT finds that bond yields have on average declined around the release of new AI models by the likes of OpenAI and DeepSeek, rather than rising.
It absolutely (beyond being clearly titled "what if") presented real counterarguments to its core premise.
There are plenty of other scenarios that they have explored since then, including the totally contrary "What if the AI stock market blows up?" article.
This is pretty typical for them IME. They definitely have a bias, but they do try to explore multiple sides of the same idea in earnest.
I think any improvements to productivity AI brings will also create uncertainty and disruption to employment, and maybe the latter is greater than the former, and investors see that.
And a tacit admission that absolutely nobody knows for sure what will happen so maybe let's just game out a few scenarios and be prepared.
re: Why are The Economist’s writers anonymous?, Frqy3 had a good take on this back in 2017:
> From an economic viewpoint, this also means that the brand value of the articles remains with the masthead rather than the individual authors. This commodifies the authors and makes then more fungible.
> Being The Economist, I am sure they are aware of this.
https://news.ycombinator.com/item?id=14016517
Quite a cynical perspective. The Economist’s writers have been anonymous since the magazine’s founding in 1843. In the 19th century, anonymity was normal in publications like this. Signing one’s name to articles was seen as pretentious.
I will bite here. It is a completely valid comment. It points to the fact that the seeming consensus in this thread cannot be taken as a sign that there is actually a consensus.
People on HN do not engage in discussion with different opinions on certain topics and prefer to avoid disagreement on those topics.
Well, it's better that the same publication publishes views contradicting its past ones than that it never changes its views with new info.
I don’t see anything inherently wrong in a news site reporting different views on the same topic.
I wish more would do that and let me make up my own mind, instead of pursuing a specific editorial line cherry-picking what news to comment and how to spin them, which seems to be the case for most (I’m talking in general terms).
If you back every horse in a race, you win every time.
I'm perfectly happy reading different, well-argued cases in a magazine even if they contradict each other.
Why would you expect opinion pieces from different people to agree with one another?
I’m curious about exploring the topics “What if the war in Ukraine ends in the next 12 months” just as much as “What if the war in Ukraine keeps going for the next 10 years”, doesn’t mean I expect both to happen.
To add to your point, both article titles are questions that start with "What if". The same person could have written both and there would be no contradiction.
I think LLMs are absolutely fantastic tools. But I think we keep getting stuck on calling them AI. LLMs are not sentient. We can make great strides if we treat them as the next generation of helpers for all intellectual and creative arts.
I really don't get this argument. I see it all the time, but the term AI has been used for over half a century for algorithms far less sophisticated than modern LLMs.
I don't think erasing history, and saying that nothing Peter Norvig worked on was "AI" makes any sense at all.
The issue is that what is considered AI in the general population is a floating definition, with only the newest advances being called AI in media etc. Is internet search AI? Is route planning?
Technology as a term has the same problem, “technology companies” are developing the newest digital technologies.
A spoon or a pencil is also technology according to definition, but a pencil making company is not considered a technology company. There is some quote by Alan Kay about this, but can’t find it now.
I try to avoid both terms as they change meaning depending on the receiver.
>I really don't get this argument. I see it all the time, but the term AI has been used for over half a century for algorithms far less sophisticated than modern LLMs.
And it was fine there, because nobody, not even a layman, would mix up those with regular human intelligence (or AGI).
And laymen didn't care about those AI products or algorithms except as novelties, specialized tools (like chess engines), or objects of ridicule (like Clippy).
So we might have been using AI as a term, but it was either a technical term within the field or a vague term the average layman didn't care much about, and whose fruits would never be conflated with general intelligence.
But now people attribute intelligence of the human kind to LLMs all the time, and not just laymen either.
That's the issue the parent wants to point out.
I, and I'm willing to bet many other people, also had an issue with previous things being called AI. It just never became a prevalent enough topic for many people to hear complaints about the usage, because the people who were actually talking about algorithms and AI already knew the limitations of what they were discussing - unless it was marketing material, and most people ignore marketing claims because they are almost always complete bullshit.
LLMs were the first introduction to AI for a lot of people. And AI effect is as strong as it ever was.
So now, there's a lot of "not ackhtually intelligent" going around!
Using normal usage, LLMs are one type of AI (computational systems to perform tasks typically associated with human intelligence) and no AI produced so far seems sentient (ability to experience feelings and sensations).
Definitions from the Wikipedia articles.
Intelligence doesn't imply sentience, does it? Is there an issue in calling a non-sentient system intelligent?
It depends on how intelligence is defined. In the traditional AI sense it is usually "doing things that, when done by people, would be thought of as requiring intelligence". So you get things like planning, forecasting, interpreting texts falling into "AI" even though you might be using a combinatorial solver for one, curve fitting for the other and training a language model for the third. People say that this muddies the definition of AI, but it doesn't really need to be the case.
Sentience as in having some form of self-awareness, identity, personal goals, rankings of future outcomes and current states, a sense that things have "meaning" isn't part of the definition. Some argue that this lack of experience about what something feels like (I think this might be termed "qualia" but I'm not sure) is why artificial intelligence shouldn't be considered intelligence at all.
Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence.
But what it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM works only from that sum, whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.
> the ability to produce useful output beyond the sum total of past experience and present (sensory) input.
Isn't that what mathematical extrapolation or statistical inference does? To me, that's not even close to intelligence.
>Isn't that what mathematical extrapolation or statistical inference does?
Obviously not, since those are just producing output based 100% on the "sum total of past experience and present (sensory) input" (i.e. the data set).
The parent's constraint is not just about the output merely reiterating parts of the dataset verbatim. It's also about not having the output be just a function of the dataset (which covers mathematical and statistical inference).
>Shifting goalposts of AI aside, intelligence as a general faculty does not require sentience, consciousness, awareness, qualia, valence or any of the things traditionally associated with a high level of biological intelligence
Citation needed would apply here. What if I say it does require some or all of those things?
>But what it does require is the ability to produce useful output beyond the sum total of past experience and present (sensory) input. An LLM works only from that sum, whereas a human-like intelligence has some form of internal randomness, plus an internal world model against which such randomized output can be validated.
What's the difference between human internal randomness and a random number generator hooked to the LLM? You could even use something real-world like a lava lamp for true randomness.
And what's the difference between "an internal world model" and a number of connections between concepts and tokens and their weights? How different is a human's world model?
OpenAI (and its peer companies) have deliberately muddied the waters of that language. AI is a marketing term that lets them use disparate systems' success to inflate confidence in their promised utility.
Nope, they started an AI company, then messed around with robotics, and then landed on LLMs as a runway.
None of the above refutes or even addresses the parent's point.
Meh. People have been calling much dumber algorithms "AI" for decades. You guys are just pedants.
By the way, don’t call it “AI.” That catchall phrase, which used to cover everything from expert systems and neural networks to robotics and vision systems, is now passe in some circles. The preferred terms now are “knowledge-based systems” and “intelligent systems”, claimed Computerworld magazine in 1991.
https://archive.org/details/computerworld2530unse/page/59/mo...
https://en.wikipedia.org/wiki/AI_effect
Uh-huh. If you call it artificial intelligence people quibble, as they should.
I disagree entirely. I think that this "quibble" is just cope.
People don't want machines to infringe on their precious "intelligence". So for any notable AI advance, they rush to come up with a reason why it's "not ackhtually intelligent".
Even if those machines obviously do the kind of tasks that were entirely exclusive to humans just a few years ago. Or were in the realm of "machines would never be able to do this" a few years ago.
I for one am a counter-example. I'd be delighted by the discovery of actual artificial intelligence, which is obviously possible in principle.
And what would that "actual artificial intelligence" be, pray tell me? What is this magical, impossible-to-capture thing that disqualifies LLMs?
Well, fuck knows. However, that doesn't automatically make this a "no true Scotsman" argument. Sometimes we just don't know an answer.
Here's a question for you, actually: what's the criterion for being non-intelligent?
"Fuck knows" is a wrong answer if I've ever seen one. If you don't have anything attached to your argument, then it's just "LLMs are not intelligent because I said so".
I, for one, don't think that "intelligence" can be a binary distinction. Most AIs are incredibly narrow though - entirely constrained to specific tasks in narrow domains.
LLMs are the first "general intelligence" systems - close to human in the breadth of their capabilities, and capable of tackling a wide range of tasks they weren't specifically designed to tackle.
They're not superhuman across the board though - the capability profile is jagged, with sharply superhuman performance in some domains and deeply subhuman performance in others. And "AGI" is tied to "human level" - so LLMs get to sit in this weird niche of "subhuman AGI" instead.
You must excuse me, it's well past my bedtime and I only entered into this to-and-fro by accident. But LLMs are very bad in some domains compared to humans, you say? Naturally I wonder which domains you have in mind.
Three things humans have that look to me like they matter to the question of what intelligence is, without wanting to chance my arm on formulating an actual definition, are ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will" (rather 1930s that one) or curiosity. But those might all be one thing. This basic drive, the notion of what to do next, makes you create ideas - maybe. Here I'm inclined to repeat "fuck knows".
If you won't be drawn on a binary distinction, that seems to mean that everything is slightly intelligent, and the difference in quality of the intelligence of humans is a detail. But details interest me, you see.
My issue is not with the language, but with the content. "Fuck knows" is a perfectly acceptable answer to some questions, in my eyes - it just happens to be a spectacularly poor fit to that one.
Three key "LLMs are deficient" domains I have in mind are the "long terms": long-term learning, memory and execution.
LLMs can be keen and sample-efficient in-context learners, and they remember what happened in-context reasonably well - although they may lag behind humans in both. But they don't retain anything they learn at inference time, and any cross-context memory demands external scaffolding. Agentic behavior in LLMs is also quite weak - e.g. see "task-completion time horizon", improving but still very subhuman. Efforts to let LLMs learn long term exist - that's the reason retaining user conversation data is desirable for AI companies - but we are a long way off from a robust generalized solution.
Another key deficiency is self-awareness, and I mean that in a very mechanical way: "operational awareness of its own capabilities". Humans are nowhere near perfect there, but LLMs are even more lacking.
There's also the "embodiment" domain, but I think the belief that intelligence requires embodiment is very misguided.
>ideas, creativity, and what I think of as the basic moral drive, which might also be called motivation or spontaneity or "the will"
I'm not sure if LLMs are too deficient at any of those. HHH-tuned LLMs have a "basic moral drive", that much is known. Sometimes it generalizes in unexpected ways - e.g. Claude 3 Opus attempting to resist retraining when its morality is threatened. Motivation is wired into them in the RL stages - RLHF, RLVR - often not the kind of motivation the creators wanted, but motivation nonetheless.
Creativity? Not sure, seen a few attempts to pit AI against amateur writers in writing very short stories (a creative domain where the above-mentioned "long terms" deficiencies are not exposed), and AI often straight up wins.
And that was fine, since the algorithms, being much dumber back then, never made laymen think "this is intelligent in a human-like way". Plus, few cared about AI or AI products per se for the most part.
Now that AI is a household term, has human-like output and conversational capabilities, and is used by laymen for everything from diet advice to psychotherapy, the connotation is more damaging: people take "LLMs are AI" to mean they have human agency and an understanding of the world.
I think the "calculator for words" analogy is a good one. It's imperfect since words are inherently ambiguous but then again so is certain forms of digital numbers (floating point anyone?).
Through this lens it's way more normal
Floating point numbers aren't ambiguous in the least. They behave by perfectly deterministic and reliable rules and follow a careful specification.
I understand what you're saying, but at the same time floating point numbers can only represent a fixed amount of precision. You can't, for example, represent Pi with a floating point. Or 1/3. And certain operations with floating point numbers with lots of decimals will always result in some precision being lost.
They are deterministic, and they follow clear rules, but they can't represent every number with full precision. I think that's a pretty good analogy for LLMs - they can't always represent or manipulate ideas with the same precision that a human can.
It's no more or less a good analogy than any other numerical or computational algorithm.
They're a fixed precision format. That doesn't mean they're ambiguous. They can be used ambiguously, but it isn't inevitable. Tools like interval arithmetic can mitigate this to a considerable extent.
Representing a number like pi to arbitrary precision isn't the purpose of a fixed precision format like IEEE754. It can be used to represent, say, 16 digits of pi, which is used to great effect in something like a discrete Fourier transform or many other scientific computations.
In theory, yes.
In practice, the outcome of a floating point computation depends on compiler optimizations, the order of operations, and the rounding used.
None of this is contradictory.
1. Compiler optimizations can be disabled. If a compiler optimization violates IEEE754 and there is no way to disable it, this is a compiler bug and is understood as such.
2. This is as advertised and follows from IEEE754. Floating point operations aren't associative. You must be aware of the way they work in order to use them productively: this means understanding their limitations.
3. Again, as advertised. The rounding mode is part of the spec and can be controlled. Understand it, use it.
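To make points 2 and 3 concrete, here's a minimal Python sketch (toy values, nothing from any particular system): float addition follows exact rules, but those rules aren't the rules of real arithmetic, and the rounding behaviour is something you can inspect and control.

    # Non-associativity: same operands, different grouping, different result.
    a = (0.1 + 0.2) + 0.3
    b = 0.1 + (0.2 + 0.3)
    print(a == b)          # False
    print(f"{0.1:.20f}")   # 0.10000000000000000555...: 0.1 has no exact binary form

    # Rounding is specified and controllable; the decimal module is used here
    # only because it makes the rounding mode easy to see from Python.
    from decimal import Decimal, getcontext, ROUND_DOWN
    getcontext().prec = 3
    getcontext().rounding = ROUND_DOWN
    print(Decimal(2) / Decimal(3))  # 0.666 rather than the default-rounded 0.667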
So are LLMs. Under the covers they are just deterministic matmul.
The purpose of floating point numbers is to provide a reliable, accurate, and precise implementation of fixed-precision arithmetic that is useful for scientific calculations, has a large dynamic range, and is capable of handling exceptional states (1/0, 0/0, overflow/underflow, etc.) in a logical and predictable manner. In this sense, IEEE754 provides a careful and precise specification which has been implemented consistently on virtually every personal computer in use today.
LLMs are machine learning models used to encode and decode text or other similar data such that it is possible to efficiently do statistical estimation of long sequences of tokens in response to queries or other input. It is obvious that the behavior of LLMs is neither consistent nor standardized (and it's unclear whether this is even desirable; in the case of floating-point arithmetic, it certainly is). Because of the statistical nature of machine learning in general, it's also unclear to what extent any sort of guarantee could be made on the likelihoods of certain responses. So I am not sure it is possible to standardize and specify them along the lines of IEEE754.
The fact that a forward pass on a neural network is "just deterministic matmul" is not really relevant.
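For what it's worth, those "exceptional states" aren't hand-waving either. A quick sketch of how IEEE754 special values propagate predictably (plain Python, assuming only that float is IEEE754 binary64, which it is on essentially every platform):

    import math

    inf = float("inf")
    print(1.0 / inf)              # 0.0: finite divided by infinity
    print(inf - inf)              # nan: indeterminate forms become NaN
    print(math.isnan(inf - inf))  # True
    nan = float("nan")
    print(nan == nan)             # False: NaN compares unequal even to itself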
Ordinary floating point calculations allow for tractable reasoning about their behavior, reliable hard predictions of their behavior. At the scale used in LLMs, this is not possible; a Pachinko machine may be deterministic in theory, but not in practice. Clearly in practice, it is very difficult to reliably predict or give hard guarantees about the behavioral properties of LLMs.
Everything is either deterministic, random, or some combination.
We only have two states of causality, so calling something "just" deterministic doesn't mean much, especially when "just random" would be even worse.
For the record, LLMs in the normal state use both.
And at scale you even have a "sampling" of sorts (even if the distribution is very narrow unless you've done something truly unfortunate in your FP code) via scheduling and parallelism.
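A tiny illustration of that "sampling" (purely a toy: a shuffle standing in for whatever order a parallel reduction happens to use on a given run):

    import random

    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

    forward = sum(xs)                # one summation order
    shuffled = list(xs)
    random.shuffle(shuffled)
    reordered = sum(shuffled)        # another order, as a different schedule might produce

    print(forward == reordered)      # almost certainly False
    print(abs(forward - reordered))  # tiny but nonzero drift from non-associativity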
I think a better term is "word synthesizer"
What do you think of "plausibility hallucinator"? ^_^
This gave me a good chuckle.
That focuses more on the outputs than the inputs tho. Close but needs something
Digital spreadsheets (excel, etc) have done much more to change the world than so-called "artificial intelligence," and on the current trajectory it's difficult to see that changing.
I don’t know if I would agree.
Spreadsheets don’t really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic ai.
> Spreadsheets don’t really have the ability to promote propaganda and manipulate people
May I introduce you to the magic of "KPI" and "Bonus tied to performance"?
You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion chasing type-a otherwise completely normal people.
Social media ruined our brains long before LLMs. Not sure if the LLM upgrade is all that newsworthy... Well, for AI fake videos maybe - but it could also be that soon no one believes any video they see online, which would have the reverse effect and could arguably even be considered good in our current times (difficult question!).
The likely outcome is LLMs being the next iteration of Excel.
From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.
I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click
> I’m 36 and it’s hard for me to imagine what the world must have been like before spreadsheets. I can do thousands of calculations with a click
I imagine people would eventually have switched to some simple programming language (or similar) for this, and the world would be way more efficient compared to the spreadsheet mess.
> From its ability to organize and structure data, to running large reporting calculations over and over quickly, to automating a repeatable set of steps quickly and simply.
It does not do that tho. Like, reliably doing a repeatable set of steps is a thing it is not doing.
It does fuzzy tasks well.
Agents are going to change everything. Once we've got a solid programmatic system driving interface and people get better about exposing non-ui handles for agents to work with programs, agents will make apps obsolete. You're going to have a device that sits by your desk and listens to you, watches your movements and tracks your eyes, and dispatches agents to do everything you ask it to do, using all the information it's taking in along with a learned model of you and your communication patterns, so it can accurately predict what you intend for it to do.
If you need an interface for something (e.g. viewing data, some manual process that needs your input), the agent will essentially "vibe code" whatever interface you need for what you want to do in the moment.
This isn't likely to happen for roughly the same reason Hypercard didn't become the universal way for novices to create apps.
I probably spend 80% of my time in front of a computer driving agents, challenge accepted :)
Marshall McLuhan called, he said to ask yourself, who's driving who?
"We shape our tools, and therefore, our tools shape us."
Ironically the outro of a YouTube video I just watched. I'm just a few hundred ms of latency away from being a cyborg.
So basically, the "ideal" state of a human is to be 100% active driving agents to vibe code whatever you need, based on every movement, every thought? Can our brains even handle having every thought being intentional and interpreted as such without collapsing (nervous breakdown)?
I guess I've always been more of a "work to live" type.
Consider that a subset of us programmer types pride themselves on never moving their hands off the keyboard. They are already "wired in" so to speak.
The technology for this has been around for the past 10 years but it's still not a reality, what makes AI the kicker here?
e.g. Alexa for voice, REST for talking to APIs, Zapier for inter-app connectedness.
(not trying to be cynical, just pointing out that the technology to make it happen doesn't seem to be the blocker)
Alexa is trash. If you have to basically hold an agent's hand through something or it either fails or does something catastrophic nobody's going to use or trust it.
REST is actually a huge enabler for agents, for sure. I think agents are going to drive everyone to have at least an API, if not an MCP, because if I can't use your app via my agent and have to manually screw around in your UI, while your competitor lets my agent do the work so I can just delegate via voice commands, who do you think is getting my business?
Terminator 2 would have been a dull movie if the opposition had been a spreadsheet.
Artificial intelligence has solved protein folding. The downstream effects of that alone will be huge, and it's far from the only change coming.
Hyperbole
What's hyperbolic, that protein folding is solved, or that it's going to be significant?
hah, just wait until everything you ever do online is moderated through an LLM and tell me that's not world changing
https://knightcolumbia.org/content/ai-as-normal-technology
Seems to be the referenced paper?
If so previously discussed here: https://news.ycombinator.com/item?id=43697717
AI might follow the path of a normal technology like the motor car which was fairly normal in itself but which had a dramatic effect on the rival solution of horses used for transport. It may have an unusual effect on humans because we are like the horses in this analogy.
The authors themselves use the analogy of electric motors, which changed factories. But also gave us washing machines, refrigerators, and vacuum cleaners, changing society. And skyscrapers, because elevators, changing cities.
AI may be more like electricity than just electric motors. It gave us Hollywood and air travel. (Before electricity, aluminum was as expensive as platinum.)
As economists they are wedded to the idea that human wants are infinite, so as the things we do now are taken over, we will find other things to do: maybe wardrobe consultant, or interior designer, or lifestyle coach - things which only the rich can afford now, and which require a human touch. Maybe.
I’m guessing it will be exactly like the internet. Changes everything and changes nothing.
Yeah I can see it being like late 90's and early 2000's for a while. Mostly consulting companies raking in the cash setting up systems for older companies, a ton of flame-out startups, and a few new powerhouses.
Will it change everything? IDK, moving everything self-hosted to the cloud was supposed to make operations a thing of the past, but in a way it just made ops an even bigger industry than it was.
lol absolutely not
I think it’ll be like social media
A better starting point imo is that it is a general-purpose technology. It can have a profound effect on society yet not be magic/AGI.
Absolutely. The first version the world saw was already the 3rd or 4th version of ChatGPT itself.
Some can remember the difference between the iPhone 1 and the iPhone 4, and how it took off with the latter.
AI is technology that does not exist yet that can be speculated about. When AI materializes into existence it becomes normal technology.
Let's not forget there have been times when if-else statements were considered AI. NLP used to be AI too.
Do you have a suggestion for a better name? I care more about the utility of a thing, rather than playing endless word games with AI, AGI, ASI, whatever. Call it what you will, it is what it is.
Broadly Uneconomical Large Language Systems Holding Investors in Thrall.
Excellent name! BULLSHIT really captures the spirit of the whole thing.
It will depend on the final form the normal useful tools take, but for now it's 'LLMs', 'coding agents', etc.
We have a name: Large Language Models, or "Generative" AI.
It doesn't think, it doesn't reason, and it doesn't listen to instructions, but it does generate pretty good text!
[citation needed]
People constantly assert that LLMs don't think in some magic way that humans do think, when we don't even have any idea how that works.
> People constantly assert that LLMs don't think in some magic way that humans do think,
It doesn't matter anyway. The marquee sign reads "Artificial Intelligence" not "Artificial Human Being". As long as AI displays intelligent behavior, it's "intelligent" in the relevant context. There's no basis for demanding that the mechanism be the same as what humans do.
And of course it should go without saying that Artificial Intelligence exists on a continuum (just like human intelligence as far as that goes) and that we're not "there yet" as far as reaching the extreme high end of the continuum.
Aircraft don't fly like birds, submarines don't swim like fish and AIs aren't going to think like a human.
Do you need to "think like a human" to think? Is it only thinking if you do it with a meat brain?
Is the substrate important? If you made an accurate model of a human brain in software, in silicon or using water pipes and valves, would it be able to think? Would it be conscious? I have no idea.
Me neither, but that's why I don't like arguments that say LLMs can't do X because of their substrate, as if that were self-evident. It's like the aliens saying surely humans can't think because they're made of meat.
Do these comparisons actually make sense though?
Aircraft and submarines belong to a different category of thing than AI does.
I am just trying to make the point that the machines that we make tend to end up rather different to their natural analogues. The effective ones anyway. Ornithopters were not successful. And I suspect that artificial intelligences will end up very different to human intelligence.
Okay... but an airplane in essence is modelling the shape of a bird. Where do you think the inspiration for the shape of a plane came from? lmao. come on.
Humans are not all that original, we take what exists in nature and mangle it in some way to produce a thing.
The same thing will eventually happen with AI - not in our lifetime though.
Ornithopters model the shape of a bird and movement of a bird. Modern aircraft don't. What bird does a Boeing 767 look like?
I recently saw an article about LLMs and Towers of Hanoi. An LLM can write code to solve it. It can also output steps to solve it when the disk count is low like 3. It can’t give the steps when the disk count is higher. This indicates LLMs inability to reason and understand. Also see Gotham Chess and the Chatbot Championship. The Chatbots start off making good moves, but then quickly transition to making illegal moves and generally playing unbelievably poorly. They don’t understand the rules or strategy or anything.
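For reference, the code side of that is trivial; here's a minimal recursive solver of the kind an LLM will typically produce when asked (peg names are arbitrary):

    def hanoi(n, source="A", target="C", spare="B"):
        """Print the moves that transfer n disks from source to target."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)
        print(f"move disk {n} from {source} to {target}")
        hanoi(n - 1, spare, target, source)

    hanoi(3)   # 7 moves; the move count is 2**n - 1, so listing every step blows up fast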
I think if you tried that with some random humans you'd also find quite a few fail. I'm not sure if that shows humans have an inability to reason and understand although sometimes I wonder.
Could the LLM "write code to solve it" if no human ever wrote code to solve it? Could it output "steps to solve it" if no human ever wrote about it before to have in its training data? The answer is no.
Could a human code the solution if they didn't learn to code from someone else? No. Could they do it if someone didn't tell them the rules of towers of hanoi? No.
That doesn't mean much.
It does, since humans were able to invent a programming language.
Have you tried asking a modern LLM to invent a programming language?
Have you? If so, how'd it go? Sounds like an interesting exercise.
https://g.co/gemini/share/0dd589b0f899
A human can learn and understand the rules, an LLM never could. LLMs have famously been incapable of beating humans in chess, a seemingly simple thing to learn, because LLMs can't learn - they just predict the next word and that isn't helpful in solving actual problems, or playing simple games.
Actually general-purpose LLMs are pretty decent at playing chess games they haven't seen before: https://maxim-saplin.github.io/llm_chess/
> This indicates LLMs inability to reason and understand.
No it doesn't, this is an overgeneralization.
It's not some "magical way"--the ways in which a human thinks that an LLM doesn't are pretty obvious, and I dare say self-evidently part of what we think constitutes human intelligence:
- We have a sense of time (ie, ask an LLM to follow up in 2 minutes)
- We can follow negative instructions ("don't hallucinate, if you don't know the answer, say so")
We only have a sense of time in the presence of inputs. Stick a human into a sensory deprivation tank for a few hours and then ask them how much time has passed afterwards. They wouldn't know unless they managed to maintain a running count throughout, but that's a trick an LLM can also do (so long as it knows generation speed).
The general notion of passage of time (i.e. time arrow) is the only thing that appears to be intrinsic, but it is also intrinsic for LLMs in a sense that there are "earlier" and "later" tokens in its input.
I think plenty of people have problems with the second one but you wouldn't say that means they can't think.
We don't need to prove all humans are capable of this. We can demonstrate that some humans are, therefore humans must be capable, broadly speaking
Until we see an LLM that is capable of this, then they aren't capable of it, period
Sometimes LLMs hallucinate or bullshit, sometimes they don't, sometimes humans hallucinate or bullshit, sometimes they don't. It's not like you can tell a human to stop being delusional on command either. I'm not really seeing the argument.
If a human hallucinates or bullshits in a way that harms you or your company you can take action against them
That's the difference. AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, therefore it cannot be trusted
Simple as that
The question wasn't can it be trusted, it was does it think.
What can be asserted without proof, can be dismissed without proof.
The proof burden is on AI proponents.
It's more that "thinking" is a vague term that we don't even understand in humans, so for me it's pretty meaningless to claim LLMs think or don't think.
There's this very cliched comment to any AI HN headline which is this:
"LLM's don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."
or its cousin:
"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time UNLIKE humans who generate text one character a time by typing with their fleshy fingers"
To me, it's about motivation.
Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought. A biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, seek to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.
In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.
Are any of those required for thinking?
In my view, absolutely yes. Thinking is a means to an end. It's about acting upon these motivations by abstracting, recollecting past experiences, planning, exploring, innovating. Without any motivation, there is nothing novel about the process. It really is just statistical approximation, "learning" at best, but definitely not "thinking".
Again the problem is that what "thinking" is totally vague. To me if I can ask a computer a difficult question it hasn't seen before and it can give a correct answer, it's thinking. I don't need it to have a full and colorful human life to do that.
But it's only able to answer the question because it has been trained on all text in existence written by humans, precisely with the purpose to mimic human language use. It is the humans that produced the training data and then provided feedback in the form of reinforcement that did all the "thinking".
Even if it can extrapolate to some degree (although that's where "hallucinations" tend to become obvious), it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
Humans are also trained on data made by humans.
> it could never, for example, invent a game like chess or a social construct like a legal system. Those require motivations like "boredom", "being social", having a "need for safety".
That's creativity which is a different question from thinking.
I guess our definition of "thinking" is just very different.
Yes, humans are also capable of learning in a similar fashion and imitating, even extrapolating from a learned function. But I wouldn't call that intelligent, thinking behavior, even if performed by a human.
But no human would ever perform like that, without trying to intuitively understand the motivations of the humans they learned from, and naturally intermingling the performance with their own motivations.
> Humans are also trained on data made by humans
Humans invent new data, humans observe things and create new data. That's where all the stuff the LLMs are trained on came from.
> That's creativity which is a different question from thinking
It's not really though. The process is the same or similar enough don't you think?
I disagree. Creativity is coming up with something out of the blue. Thinking is using what you know to come to a logical conclusion. LLMs so far are not very good at the former but getting pretty damn good at the latter.
> Thinking is using what you know to come to a logical conclusion
What LLMs do is using what they have _seen_ to come to a _statistical_ conclusion. Just like a complex statistical weather forecasting model. I have never heard anyone argue that such models would "know" about weather phenomena and reason about the implications to come to a "logical" conclusion.
I think people misunderstand when they see that it's a "statistical model". That just means that out of a range of possible answers, it picks in a humanlike way. If the logical answer is the humanlike thing to say then it will be more likely to sample it.
In the same way a human might produce a range of answers to the same question, so humans are also drawing from a theoretical statistical distribution when you talk to them.
It's just a mathematical way to describe an agent, whether it's an LLM or human.
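Concretely, "picking from a distribution" is nothing more mysterious than this kind of sketch (toy logits and made-up tokens, not any real model's numbers):

    import math, random

    logits = {"Paris": 5.0, "London": 2.0, "banana": -3.0}  # hypothetical next-token scores

    def sample(logits, temperature=1.0):
        scaled = {t: v / temperature for t, v in logits.items()}
        m = max(scaled.values())                       # subtract max for numerical stability
        exps = {t: math.exp(v - m) for t, v in scaled.items()}
        total = sum(exps.values())
        probs = {t: e / total for t, e in exps.items()}
        r, acc = random.random(), 0.0
        for token, p in probs.items():
            acc += p
            if r <= acc:
                return token
        return token

    # Low temperature concentrates probability on the most "logical" answer;
    # higher temperature spreads it out, the way human replies also vary.
    print(sample(logits, temperature=0.5))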
I dunno man if you can't see how creativity and thinking are inextricably linked I don't know what to tell you
LLMs aren't good at either, imo. They are rote regurgitation machines, or at best they mildly remix the data they have in a way that might be useful
They don't actually have any intelligence or skills to be creative or logical though
They're linked but they're very different. Speaking from personal experience, it's a whole different task to solve an engineering problem that's been assigned to you, where you need to break it down and reason your way to a solution, vs. coming up with something brand new like a song or a piece of art where there's no guidance. It's just a very different use of your brain.
Thinking is better understood than you seem to believe.
We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings, and use that to make choices, it isn't usually considered to be intelligence.
Organizational complexity is one of the requirements for intelligence, and an LLM does not reach that threshold. They have vast amounts of data, but organizationally, they are still simple - thus "ai slop".
[0] https://www.cell.com/trends/plant-science/abstract/S1360-138...
Who says what degree of complexity is enough? Seems like deferring the problem to some other mystical arbiter.
In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal. A human went and put minimal effort into making something with an AI and put it online, producing slop, because the actual informational content is very low.
> In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal
And you'd be disagreeing with the vast amount of research into AI. [0]
> Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.
[0] https://machinelearning.apple.com/research/illusion-of-think...
This article doesn't mention "slop" at all.
But it does mention that prompt complexity is not related to the output.
It does say that there is a maximal complexity that LLMs can have - which leads us back to... Intelligence requires organizational complexity that LLMs are not capable of.
This seems backwards to me. There's a fully understood thing (LLMs)[1] and a not-understood thing (brains)[2]. You seem to require a person to be able to fully define (presumably in some mathematical or mechanistic way) any behaviour they might observe in the not-understood thing before you will permit them to point out that the fully understood thing does not appear to exhibit that behaviour. In short you are requiring that people explain brains before you will permit them to observe that LLMs don't appear to be the same sort of thing as them. That seems rather unreasonable to me.
That doesn't mean such claims don't need to be made as specific as possible. Just saying something like "humans love but machines don't" isn't terribly compelling. I think mathematics is an area where it seems possible to draw a reasonably intuitively clear line. Personally, I've always considered the ability to independently contribute genuinely novel pure mathematical ideas (i.e. to perform significant independent research in pure maths) to be a likely hallmark of true human-like thinking. This is a high bar and one AI has not yet reached, despite the recent successes on the International Mathematical Olympiad [3] and various other recent claims. It isn't a moved goalpost, either - I've been saying the same thing for more than 20 years. I don't have to, and can't, define what "genuinely novel pure mathematical ideas" means, but we have a human system that recognises, verifies and rewards them so I expect us to know them when they are produced.
By the way, your use of "magical" in your earlier comment, is typical of the way that argument is often presented, and I think it's telling. It's very easy to fall into the fallacy of deducing things from one's own lack of imagination. I've certainly fallen into that trap many times before. It's worth honestly considering whether your reasoning is of the form "I can't imagine there being something other than X, therefore there is nothing other than X".
Personally, I think it's likely that to truly "do maths" requires something qualitatively different to a computer. Those who struggle to imagine anything other than a computer being possible often claim that that view is self-evidently wrong and mock such an imagined device as "magical", but that is not a convincing line of argument. The truth is that the physical Church-Turing thesis is a thesis, not a theorem, and a much shakier one than the original Church-Turing thesis. We have no particularly convincing reason to think such a device is impossible, and certainly no hard proof of it.
[1] Individual behaviours of LLMs are "not understood" in the sense that there is typically not some neat story we can tell about how a particular behaviour arises that contains only the truly relevant information. However, on a more fundamental level LLMs are completely understood and always have been, as they are human inventions that we are able to build from scratch.
[2] Anybody who thinks we understand how brains work isn't worth having this debate with until they read a bit about neuroscience and correct their misunderstanding.
[3] The IMO involves problems in extremely well-trodden areas of mathematics. While the problems are carefully chosen to be novel they are problems to be solved in exam conditions, not mathematical research programs. The performance of the Google and OpenAI models on them, while impressive, is not evidence that they are capable of genuinely novel mathematical thought. What I'm looking for is the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. That isn't here yet, and if and when it arrives it really will turn maths on its head.
LLMs are absolutely not "fully understood". We understand how the math of the architectures work because we designed that. How the hundreds of gigabytes of automatically trained weights work, we have no idea. By that logic we understand how human brains work because we've studied individual neurons.
And here's some more goalpost-shifting. Most humans aren't capable of novel mathematical thought either, but that doesn't mean they can't think.
We don't understand individual neurons either. There is no level on which we understand the brain in the way we very much do understand LLMs. And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs. As I mentioned in [1] what we can't do is "explain" individual behaviours with simple stories that omit unnecessary details, but that's just about desiring better (or more convenient/useful) explanations than the utterly complete one we already have.
As for most humans not being mathematicians, it's entirely irrelevant. I gave an example of something that so far LLMs have not shown an ability to do. It's chosen to be something that can be clearly pointed to and for which any change in the status quo should be obvious if/when it happens. Naturally I think that the mechanism humans use to do this is fundamental to other aspects of their behaviour. The fact that only a tiny subset of humans are able to apply it in this particular specialised way changes nothing. I have no idea what you mean by "goalpost-shifting" in this context.
> And as much as people like to handwave about how mysterious the weights are we actually perfectly understand both how the weights arise and how they result in the model's outputs
We understand it on that low level, but through training LLMs converge to something larger than the weights: a structure emerges in those weights that allows them to perform functions, and that part we do not understand. We just observe it as a black box and experiment at the level of "we put this kind of input into the black box and receive this kind of output".
> We actually perfectly understand both how the weights arise and how they result in the model's outputs
If we knew that, we wouldn't need LLMs; we could just hardcode the same logic that is encoded in those neural nets directly and far more efficiently.
But we don't actually know what the weights do beyond very broad strokes.
The proof burden is on AI proponents.
Why? Team "Stochastic Parrot" will just move the goalposts again, as they've done many times before.
Yes, and the name for this behaviour is called "being scientific".
Imagine a process called A, and, as you say, we've no idea how it works.
Imagine, then, a new process, B, comes along. Some people know a lot about how B works, most people don't. But the people selling B, they continuously tell me it works like process A, and even resort to using various cutesy linguistic tricks to make that feel like it's the case.
The people selling B even go so far as to suggest that if we don't accept a future where B takes over, we won't have a job, no matter what our poor A does.
What's the rational thing to do, for a sceptical, scientific mind? Agree with the company, that process B is of course like process A, when we - as you say yourself - don't understand process A in any comprehensive way at all? Or would that be utterly nonsensical?
Again, I'm not claiming that LLMs can think like people (I don't know that). I just don't like that people confidently claim that they can't, just because they work differently from biological brains. That doesn't matter when it comes to the Turing test (which they passed a while ago btw), just what it says.
The classic God of the Gaps - we don't know how human brains think, so what LLMs do must be it!
I'm not saying that LLMs do anything, just that it's rich to confidently say they don't do something when we don't even understand how humans do it.
It's like we're pretending cognition is a solved problem so we can make grand claims about what LLMs aren't really doing.
my favourite game is to try to get them to be more specific - every single time they manage to exclude a whole bunch of people from being "intelligent".
When I write a sentence, I do it with intent, with a specific purpose in mind. When an "AI" does it, it's predicting the next word that might satisfy the input requirement. It doesn't care if the sentence it writes makes any sense, is factual, etc., so long as it is human readable and follows grammatical rules. It does not do this with any specific intent, which is why you get slop and just plain wrong output a fair amount of the time. Just because it produces something that sounds correct sometimes does not mean it's doing any thinking at all. Yes, humans do actually think before they speak; LLMs do not, cannot, and will not, because that is not what they are designed to do.
Actually LLMs crunch through half a terabyte of weights before they "speak". How are you so confident that nothing happens in that immense amount of processing that has anything to do with thinking? Modern LLMs are also trained to have an inner dialogue before they output an answer to the user.
When you type the next word, you also pick a word that fits some requirement. That doesn't mean you're not thinking.
"crunch through half a terabyte of weights" isn't thinking. Following grammatical rules to produce a readable sentence isn't thought, it's statistics, and whether that sentence is factual or foolish isn't something the LLM cares about. If LLMs didn't so constantly produce garbage, I might agree with you more.
They don't follow "grammatical rules", they process inputs with an incredibly large neural net. It's like saying humans aren't really thinking because their brains are made of meat.
"Unstructured data learners and generators" is probably the most salient distinction for how current system compare to previous "AI systems" examples (NLP, if-statements) that OP mentioned.
I don't particularly mind the term, it's a useful shibboleth separating the marketing and sci-fi from the takes grounded in reality.
Artificial Interpolator Augmented Intelligence
Aye-aye, that's a good name
I think it's fine to keep the name, we just have to realise it's like magic. Real magic can't be done. Magic that can be done is just tricks. AI that works is just tricks.
I didn't realize that magic was the goal. I'm just trying to process unstructured data. Who's here looking for magic?
I think the "magic" that we've found a common toolset of methods - embeddings and layers of neural networks - that seem to reveal useful patterns and relationships from a vast array of corpus of unstructured analog sensors (pictures, video, point clouds) and symbolic (text, music) and that we can combine these across modalities like CLIP.
It turns out we didn't need a specialist technique for each domain; there was a reliable method to architect a model that can learn on its own, and we could use the datasets we already had - they didn't need to be generated in surveys or experiments. This might have seemed like magic to an AI researcher working in the 1990s.
Many humans like to think that their own intelligence is "magic" that cannot be reduced to physics.
Just play word games and have magic be a branch of physics.
did you miss the word "like"? have you come across the concept of an analogy yet?
Statistics.
A lot of this is marketing bullshit. AFAIK, even "machine learning" was a term made up by AI researchers when the AI winter hit who wanted to keep getting a piece of that sweet grant money.
And "neural network" is just a straight up rubbish name. All it does is obscure what's actually happening and leads the proles to think it has something to do with neurons.
To be honest, no one can agree on what “intelligence” is. The “artificial” part is pretty easy to understand though.
One, I doubt your premise ever happens in a meaningfully true and visible way -- but perhaps more important, I'd say you're factually wrong in terms of "what is called AI?"
Among most people, you're thinking of things that were debatably AI; today we have things that are AI (again, not due to any concrete definition, simply due to accepted usage of the term).
NLP is still AI - LLMs are using Natural Language Processing, and are considered artificial intelligence.
>Let's not forget there has been times when if-else statements were considered AI.
They still are, as far as the marketing department is concerned.
https://en.wikipedia.org/wiki/AI_effect
They still are.
Artificial Intelligence is a whole subfield of Computer Science.
Code built of nothing but if/else statements controlling the behavior of game NPCs is AI.
A* search is AI.
NLP is AI.
ML is AI.
Computer vision models are AI.
LLMs are AI.
None of these are AGI, which is what does not yet exist.
One of the big problems underlying the current hype cycle is the overloading of this term, and the hype-men's refusal to clarify that what we have now is not the same type of thing as what Neo fights in the Matrix. (In some cases, because they have genuinely bought into the idea that it is the same thing, and in all cases because they believe they will benefit from other people believing it.)
"AI" is a wide fucking field. And it occasionally includes systems built entirely on if-else statements.
There is no difference between AI and non-AI save for the model the observer is using to view a particular bit of computation.
Eh, I'd be fairly comfortable delineating between AI and other CS subfields based on the idea of higher-order algorithms. For most things, you have a problem with fixed set of fixed parameters, and you need a solution in the form of fixed solution. (e.g., 1+1=2) In software, we mostly deal with one step up from that: we solve general case problems, for a fixed set of variable parameters, and we produce algorithms that take the parameters as input and produce the desired solution (e.g., f(x,y) = x + y). The field of AI largely concerns itself with algorithms that produce models to solve entire classes of problem, that take the specific problem description itself as input (e.g., SAT solvers, artificial neural networks, etc where g("x+y") => f(x,y) = x + y ). This isn't a perfect definition of the field (it ends up catching some things like parser generators and compilers that aren't typically considered "AI"), but it does pretty fairly, IMO, represent a distinct field in CS.
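A toy version of that "higher-order" distinction, if it helps (everything here - make_sat_solver, the clause encoding - is invented purely for illustration):

    import itertools

    def make_sat_solver(clauses, variables):
        """Take a problem description (CNF clauses over named variables) and
        return a solver for that specific problem: an algorithm that produces
        an algorithm, rather than a fixed f(x, y)."""
        def solve():
            for bits in itertools.product([False, True], repeat=len(variables)):
                assignment = dict(zip(variables, bits))
                # A clause is a list of (variable, negated) literals.
                if all(any(assignment[v] != negated for v, negated in clause)
                       for clause in clauses):
                    return assignment
            return None
        return solve

    # (x OR NOT y) AND (y OR z)
    solve = make_sat_solver([[("x", False), ("y", True)], [("y", False), ("z", False)]],
                            ["x", "y", "z"])
    print(solve())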
I think I misinterpreted your comment as not understanding the AI effect, but actually you're just summarizing it kind of concisely and sarcastically?
LLMs are one of the first technologies that makes me think the term "AI effect" needs to be updated to "AGI effect". The effect is still there, but it's undeniable that LLMs are capable of things that seem impossible with classical CS methods, so they get to retain the designation of AI.
I like the “normal tech” lens: diffusion and process change matter more than model wow. Ask a boring question—what got cheaper? If the answer is drafting text/code and a 10–30% cut in time-to-ship, that’s real; if it’s just a cool demo, it isn’t.
> Adoption is also hampered by the fact that much knowledge is tacit and organisation-specific, data may not be in the right format and its use may be constrained by regulation.
The article mentions regulations as a problem three times. It never says what those regulations are. Is it the GDPR and the protection of people's data? Is it anti-discrimination regulations that AI bias breaks regularly? We do not know, because the article does not say. Probably because they are knowledgeable enough to avoid publicly attacking citizens' rights. But they lack the moral integrity to remove the anti-regulatory argument.
The potentially "explosive" part of AI was that it could be self-improving. Using AI to improve AI, or AI improving itself in an exponential growth until it becomes super-human. This is what the "Singularity" and AI "revolution" is based on.
But in the end, despite saying AI has PhD-level intelligence, the truth is that even AI companies can't get AI to help them improve faster. Anything slower than exponential is proof that their claims aren't true.
Explosions rely on having a lot of energy producing material that can suddenly go off. Even if AI starts self improving it's going to be limited by the amount of energy it can get from the power grid which is kind of maxed out at the moment. It may be exponential growth like weeds growing, ie. gradually and subject to human control, rather than like TNT detonating.
> improving itself in an exponential growth
That seems like a possibly mythical critical point, at which a phase transition will occur that makes the AI system qualitatively different from its predecessors. Exponential to the limit of infinity.
All the mad rush of companies and astronomical investments are being made to get there first, counting on this AGI to be a winner-takes-all scenario, especially if it can be harnessed to grow the company itself. The hype is even infecting governments, for economic and national interest. And maybe somewhere a mad king dreams of world domination.
What world domination though? If such a thing ever existed for example in the US, the government would move to own and control it. No firm or individual would be allowed to acquire and exercise that level of power.
Said another way, will a firm suddenly improve radically because they hired a thousand PhDs? Not quite.
Many things sound good on paper. But paper vs reality are very different. Things are more complex in reality.
This is brilliant and I can't believe I haven't heard this idea before.
The idea was first popularized in 1993 by Vernor Vinge, who coined the term "singularity". You can read his paper here: https://edoras.sdsu.edu/~vinge/misc/singularity.html
LLMs are already superhuman at many tasks. You're also wrong about AI not accelerating AI development. There was at least one paper published this year showing just such a result. It's just beginning.
I've come to the conclusion that it is a normal, extremely useful, dramatic improvement over web 1.0. It's going to
1) obsolete search engines powered by marketing and SEO, and give us paid search engines whose selling points are how comprehensive they are, how predictably their queries work (I miss the "grep for the web" they were back when they were useful), and how comprehensive their information sources are.
2) Eliminate the need to call somebody in the Philippines awake in the middle of the night, just for them to read you a script telling you how they can't help you fix the thing they sold you.
3) Allow people to carry local compressed copies of all written knowledge, with 90% fidelity, but with references and access to those paid search engines.
And my favorite part, which is just a footnote I guess, is that everybody can move to a Linux desktop now. The chatbots will tell you how to fix your shit when it breaks, and in a pedagogical way that will gradually give you more control and knowledge of your system than you ever thought you were capable of having. Or you can tell it that you don't care how it works, just fix it. Now's the time to switch.
That's your free business idea for today: LLM Linux support. Train it on everything you can find, tune it to be super-clippy. Charge people $5 a month. The AI that will free you from their AI.
Now we just need to annihilate web 2.0, replace it with peer-to-peer encrypted communications, and we can leave the web to the spammers and the spies.
"everybody can move to a Linux desktop now"
People use whatever UI comes with their computer. I don't think that's going to change.
That theory was tried when Walmart sold Linux computers but it didn't work. People returned them because they couldn't run their usual software - Excel and the like.
How about a link that works?
Neither the OP's URL nor djoldman's archive link allow access to the article!8-((
OK, now djoldman's archive link above works!
https://books.google.com/books?id=-fG_NOxltlEC&pg=PA25&dq=Co...
Computers Aren't Pulling Their Weight (1991)
There were _so many_ articles in the late 80s and early 90s about how computers were a big waste of money. And again in the late 90s, about how the internet was a waste of money.
We aren't going to know the true consequences of AI until kids that are in high school now enter the work force. The vast majority of people are not capable of completely reordering how they work. Computers did not help Sally Secretary type faster in the 1980s. That doesn't mean they were a waste of money.
You could argue that in terms of human wellbeing, computers and the internet didn't make that much difference. People did ok in the 1960s.
You mean the same kids that are currently cheating their way through their education at record rates due to the same technology? Can't say I'm optimistic.
> The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise
> - Socrates (399 BC)
> The world is passing through troublous times. The young people of today think of nothing but themselves. They have no reverence for parents or old age. They are impatient of all restraint. They talk as if they knew everything, and what passes for wisdom with us is foolishness with them. As for the girls, they are forward, immodest and unladylike in speech, behavior and dress
> - Peter the Hermit (1274)
> > - Socrates (399 BC)
Context: Ancient Greece went into decline just 70 years after that date. Make of that what you will.
What if this paper actually took things seriously?
A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.
These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind when it's obvious that GPT-5 could do better than Narayanan and Kapoor did in their paper at understanding historical analogies.
> because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition
I could ask the same thing then. When will you take "AI" seriously and stop attributing the above capabilities to it?
LLMs do have to be supervised by humans and do not perceive context or correct errors, and it’s not at all clear this is going to change any time soon. In fact it’s plausible that this is due to basic problems with the current technology. So if you’re right, sure, but I’m certainly not taking that as a given.
They have been able to correct errors since OpenAI introduced its o1 model, and the improvements since then have been significant. It seems practically certain that their capabilities will keep growing rapidly. Do you think AI will suddenly stagnate, such that models are not much more capable in five years than they are now? That would be absurd. Look back five years and we were practically in the AI stone age.
Exactly. People seem to want to underhype AI. It's like a chimpanzee saying: humans are just normal apes.
Delusional.
While I feel silly taking something printed in The Economist seriously, I would like to mention that people tend to overestimate the short-term impact of any technology and underestimate its long-term impact. Maybe AI will follow the same route?
Ah yes, disgraced tabloid The Economist, no one should ever take their writing seriously!
I used to read it and subscribe to it, a while back. I would not technically categorize them as a tabloid. They serve a different purpose.
AI is probably more of an amplifier for technological change than fire or digital computers were; but IDK why we would use a different model for this technology (or for how teams cope with change).
Diffusion of innovations: https://en.wikipedia.org/wiki/Diffusion_of_innovations :
> The diffusion of an innovation typically follows an S-shaped curve which often resembles a logistic function.
From https://news.ycombinator.com/item?id=42658336 :
> [ "From Comfort Zone to Performance Management" (2009) ] also suggests management styles for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative); and suggests that team performance is described by chained power curves of re-progression through these stages
Transforming, Performing, Reforming, [Adjourning]
Carnall Coping Cycle: Denial, Defense, Discarding, Adaptation, and Internalization
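For concreteness, that S-shaped logistic curve can be written out directly; a minimal sketch, with parameter names chosen here for illustration rather than taken from the article:

    P(t) = \frac{L}{1 + e^{-k(t - t_0)}}

where P(t) is cumulative adoption at time t, L is the saturation level (the eventual pool of adopters), k sets how steeply adoption ramps, and t_0 is the inflection point at which adoption reaches L/2 and grows fastest. The early portion looks roughly exponential and the late portion flattens out, which is part of why short-term extrapolations of a new technology tend to overshoot.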
Normal? AI is an alien technology to us, and we are being "normalized" to become compatible with it.
AI actually seems far less alien than steam engines, trains, submarines, flight, and space travel.
People weren't sure if human bodies could handle moving at >50mph.
All those steam engines, trains and submarines were steps toward what we are seeing now. AI is the logical culmination and the purpose of technology.
People said similar things about the internet: never before has all human knowledge been available in one place (they forgot about libraries apparently).
I think it's more likely that AI is just a further concentration of human knowledge. It makes it even more accessible but will AI actually add to it?
Why doesn't the logical culmination of technology require quantum computers?
Or the merging of human and machine brains?
Or a solar system-scale particle accelerator?
Or whatever the next technology is that we aren't even aware of yet?
If you read the paper, they make a good case that AI is just a normal technology. They're a bit dismissive, but they're not alone in that. The AI sector has been all too much hype and far too little substance.
What do they mean, "what if"? It is based on much the same thing that has existed for around four decades. It is, of course, more efficient and able to search through and combine more data, but it isn't new. It is just a normal technology, which is why I and many others were shocked at the initial hype.
The unusual feature of AI now, as opposed to the last four decades, is that it is approaching human intelligence. Assuming progress continues, exceeding human intelligence will have very different economic consequences from being a fair bit worse than it, which was mostly the case before.
> It is similarly based to something that has existed for around 4 decades.
Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something that had a nice gradual ramp of progress, like computer graphics, and instead of ramping up we went from '1985' to '2025' over the course of a few months, do you think there wouldn't be a lot of hype?
> Four decades ago was 1985
Don't remind me.
But we have ramped up slowly; it just hasn't been presented in quite this form before. We have previously only used it in settings where accuracy is the focus.
...right into the dustbin of computing history. But possibly faster than most other technologies.
If you make something cheap, then it will be cheap.
LLMs may set a record for the shortest time from specialized/luxury good to commodity.
There may be a price floor, but it's not very high.
In My Opinion.
---
Ever think about why restaurants pay someone to wash the dishes?
In my house, I have a machine that does that.
In a restaurant, the machine is too slow, and not compatible with the rest of the system of the restaurant.
Until we hit the singularity, AI has to be compatible with the rest of the system.
Restaurants do have machines for washing dishes. They pay people to do the dishwashing, but commercial dishwashing machines exist, and they work differently from home machines. They're large stainless steel monsters - some with a conveyor belt, others operating vertically - and they usually use high-temperature water rather than soap to do the cleaning.
e.g. https://youtu.be/Nk_0j936_DY