GPT-5.2 and GPT-5.2-Codex are now 40% faster

(twitter.com)

43 points | by davidbarker 3 hours ago

21 comments

  • thadk 22 minutes ago

    It probably came from the other day, when roon realized that normal people get slower responses than staff.

    From that, they realized they could just run API calls more like staff calls: fast, not at capacity.

    Then they leave the billion other people's calls to the remaining capacity.

    https://thezvi.substack.com/i/185423735/choose-your-fighter

    > Ohqay: Do you get faster speeds on your work account?

    > roon: yea it’s super fast bc im sure we’re not running internal deployment at full load

  • prodigycorp 2 hours ago

    This is great.

    In the past month, OpenAI has released for codex users:

    - subagents support

    - a better multi-agent interface (Codex app)

    - 40% faster inference

    No joke, with the first two my productivity is already up like 3x. I am so stoked to try this out.

    • wahnfrieden 2 hours ago

      This is for the API only.

    • brianwawok 2 hours ago

      Try Claude and you can get x^2 performance. OpenAI is sweating.

      • viraptor an hour ago

        It may be a bit different depending on what kind of work you're doing, but for me 5.2-codex has finally reached a higher level than Opus.

      • klipklop 2 hours ago

        5.2-codex is pretty solid and you get dramatically higher usage rates with cheap plans. I would assume API use is much cheaper as well.

        • jerkstate 7 minutes ago

          People are sleeping on OpenAI right now, but Codex 5.2 xhigh is at least as good as Opus, and you get a TON more usage out of the OpenAI $20/mo plan than Claude's $20/mo plan. I'm always hitting the 5-hour quota with Opus but never have with Codex. The Codex tool itself is not quite as good, but close.

  • simianwords 3 hours ago

    It’s interesting that they kept the price the same, given that doing inference on Cerebras is much more expensive.

    • diwank 2 hours ago

      I don't think this is Cerebras. Running on Cerebras would change model behavior a bit, could potentially get a ~10x speedup, and would be more expensive. So most likely this is them writing new, more optimized kernels for the Blackwell series, maybe?

      • simianwords 2 hours ago

        Fair point, but it remains to be answered: why is this speedup available only in the API and not in ChatGPT?

    • chillee 2 hours ago

      This is almost certainly not being done on Cerebras.

  • OutOfHere 2 hours ago

    OpenAI, in my estimation, has a habit of dropping a model's quality after its introduction. I definitely recall the web ChatGPT 5.2 being a lot better when it was introduced; a week or two later, its quality suddenly dropped. The initial high looked designed to throw off journalists and benchmarks. As such, nothing OpenAI says about model speed can be trusted: all they have to do is lower the average reasoning effort, and boom, it becomes 40% faster. I hope I am wrong, because if I am right, it's a con game.

    • tedsanders 2 hours ago

      It's good to be skeptical, but I'm happy to share that we don't pull shenanigans like this. We actually take quite a bit of care to report evals fairly, keep API model behavior constant, and track down reports of degraded performance in case we've accidentally introduced bugs. If we were degrading model behavior, it would be pretty easy to catch us with evals against our API.

      In this particular case, I'm happy to report that the speedup is in time per token, so it's not a gimmick of outputting fewer tokens at a lower reasoning effort. Model weights and quality remain the same.
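
      For anyone who wants to check this independently, here is a rough sketch that measures decode speed (output tokens per second) rather than total latency, so a shorter answer can't masquerade as a faster one. It assumes the openai Python SDK with OPENAI_API_KEY set; the "gpt-5.2" model id is taken from the announcement and may differ from what your account exposes.

        # Measure output tokens per second at a pinned reasoning effort.
        # Wall-clock time includes queueing and prefill, so treat this as
        # a coarse lower bound on decode speed, not an exact figure.
        import time
        from openai import OpenAI

        client = OpenAI()

        def tokens_per_second(prompt: str, effort: str = "medium") -> float:
            start = time.monotonic()
            resp = client.responses.create(
                model="gpt-5.2",               # assumed model id
                input=prompt,
                reasoning={"effort": effort},  # pin effort so token counts stay comparable
            )
            elapsed = time.monotonic() - start
            return resp.usage.output_tokens / elapsed  # includes reasoning tokens

        print(tokens_per_second("Summarize the plot of Hamlet in 200 words."))

      Run the same prompt a few times before and after a release: a genuine time-per-token improvement shows up here even when the output length is unchanged.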

      • deaux 27 minutes ago

        It looks like you do pull shenanigans like these [0]. The person you're replying to even mentioned "ChatGPT 5.2", but you're specifically talking only about the API while making it sound like it applies across the board. Also, appreciate the attempt to further hide from users this degradation of the product they paid for by blocking the prompt used to figure it out.

        Happy to retract if you can state [0] is false.

        [0] https://x.com/btibor91/status/2018754586123890717

      • 8note 9 minutes ago

        So what actually happens if it isn't shenanigans?

        It's worth doing some analysis on your end of why customers are getting worse results a week or two later, and putting out some guidelines about what kinds of context are poisonous and the like.

      • OutOfHere 10 minutes ago

        Starting ChatGPT Plus web users off with the Pro model, then later swapping it for the Standard model, would meet your claim of model behavior consistency while still qualifying as shenanigans.

      • zamadatix an hour ago

        Hey Ted, can you confirm whether this 40% improvement is specific to API customers or if that's just a wording thing because this is the OpenAI Developers account posting?

      • wahnfrieden an hour ago

        You're confirming you don't alter "juice" levels...?

    • bethekidyouwant 2 hours ago

      I mean you can just run the benchmark again
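
      A scheduled re-run of a fixed probe set would catch a silent model swap. A minimal sketch, assuming the openai Python SDK; the probe set, the exact-match grading, and the "gpt-5.2" model id are all stand-ins for a real eval:

        # Re-run a fixed probe set and log the score; a sudden drop between
        # runs is the degradation people report, measured instead of guessed.
        import json
        import time
        from openai import OpenAI

        client = OpenAI()

        PROBES = [
            {"q": "What is 17 * 24?", "expect": "408"},
            {"q": "Name the capital of Australia.", "expect": "Canberra"},
        ]

        def run_probes(model: str = "gpt-5.2") -> float:  # assumed model id
            hits = 0
            for p in PROBES:
                resp = client.responses.create(model=model, input=p["q"])
                hits += p["expect"].lower() in resp.output_text.lower()
            return hits / len(PROBES)

        # Append one score per run, then diff across days or weeks.
        with open("probe_scores.jsonl", "a") as f:
            f.write(json.dumps({"ts": time.time(), "score": run_probes()}) + "\n")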

  • riku_iki 25 minutes ago

    There are tons of posts on Reddit saying they also significantly dropped quality.