(sound of a rubber balloon straining under high pressure)
Could it not deflate slightly and just be a sizable but limp sort of a balloon?
Possibly could, but investors chase trends so when the sell off starts it very often becomes an avalanche
wet flapping sound of a balloon emptying quickly but not too toooo quickly?
https://archive.ph/wip/0VZUI
The distance between OpenAI and its competitors has been shrinking for years (and by some metrics is now negative), yet the price keeps going up. And I cannot figure out Thrive. Weren't they an early investor? Yet they keep chasing the price higher.
It's not the tech that matters, it's the number of users. See Snapchat vs Instagram.
OpenAI is definitely a bubble, AI as a technology is not.
AI isn't, but we don't have AI. We have LLMs and ML.
Surely we can say that we have artificial intelligence. Even now, reasoning models are able to pattern-match well enough to solve IQ tests that have been turned into text for them.
But we can certainly say that we don't have artificial intelligences. There's nothing with coherent, total beliefs, nothing able to have actual knowledge (a pet example of mine: ask an LLM about a situation in the abstract and it might respond correctly, yet in another context it fails to use what it 'knows'). I actually think much can be done about this, but we don't have it.
No, because reasoning models don't actually reason.
They generate non-output tokens that help correct generation. It is meaningful to call that reasoning.
After all, it can, with whatever secret tricks Google and OpenAI have, be used to solve IMO-level maths problems.
If solving IMO problems can be done without reasoning, then what would be reasoning?
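To make the "non-output tokens" claim above concrete, here is a minimal toy sketch in Python. It is hand-wired for one prompt and is not any vendor's actual implementation: the point is only the structure, where hidden "thinking" tokens are generated first and the visible answer is read off from them.

```python
def answer_with_scratchpad(prompt: str) -> str:
    """Toy sketch: emit hidden 'thinking' tokens, then a visible answer.

    Hypothetical and hand-wired for one prompt; a real model decodes the
    scratchpad token by token from the same network that writes the answer.
    """
    a, b = 13, 7                       # assume these were parsed from prompt
    hidden = [f"compute {a} * {b}",    # non-output tokens: never shown to the
              f"{a} * {b} = {a * b}"]  # user, but the answer conditions on them
    # The visible answer simply "reads off" the last hidden step.
    return hidden[-1].split("= ")[-1]

print(answer_with_scratchpad("What is 13 * 7?"))  # prints 91
```

Whether conditioning on such intermediate tokens deserves the word "reasoning" is exactly the dispute in this thread; the sketch only shows the mechanism being argued about.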
>It is meaningful to call that reasoning.
This take on the nature of sentience and consciousness, things we humans have, know we have, and which are quite distinct from unaware pattern-matching, is becoming foolish and tedious.
No, it's not meaningful to call that reasoning. Indeed it's wrong, because it's not reasoning. That would require sentience, and absolutely zero evidence indicates its presence in any LLM. Are some people so dazzled by these algorithmic tricks with communication that they simply fall into near-religious belief that a conversational LLM is an example of awareness?
Computers have been solving mathematical problems for decades. Would you thus argue that those machines were also reasoning?
But consider it like this: the model lives in a reward environment where it's tasked with outputting prescribed text or the answers to certain questions.
Instead of just outputting the answer, it generates non-output tokens that increase the probability of the answers that earned it rewards before.
Is this not a sort of reasoning? It looks ahead at imagined things and tries to gauge what will get it the reward.
If AI as a technology is not a bubble, why would the by-far-far-far most popular consumer technology leveraging that technology be a bubble?
Because people think the progress shown with GPT-5 is unimpressive. Meanwhile Claude is very successful, and Grok has come out of nowhere and, according to some benchmarks, matches or slightly exceeds GPT-5. Meaning OpenAI might not be THE horse to bet on. That doesn't mean there isn't a race going on with the potential for a big prize at the end, even at current valuations. Only time will tell! As per usual!
But that's just... not what a bubble is. A market leader having viable competitors doesn't make them any less of a market leader, and doesn't make them "a bubble".
Do the economics of AI hold up?