The CTO Was ChatGPT

(ehandbook.com)

40 points | by aldidoanta 2 days ago

22 comments

  • ablation 2 days ago

    Ironically perhaps, this article has some very tell-tale AI-authored language, e.g. "This founder didn’t fake it — he outsourced it". The cadence and writing style are redolent of ChatGPT.

    • sebastiennight a day ago

      There are no details I could find in there that would give me the impression this is a true story.

      I think it's a hoax.

      • a2128 a day ago

        A fake story generated with ChatGPT about a company with a fake technical stack that was generated with ChatGPT...

        • hoppp a day ago

          So fakeness inception all the way down?

          • sebastiennight a day ago

            Think about it: the hoax could be published by someone to bolster their LinkedIn profile to sell code they themselves will outsource to LLMs; but the joke is on them, as LinkedIn engagement is all automated through bots anyway, so there is no audience; and here we are on HN where you, hoppp, are the only human on this thread and we're all bots prompted by the author to generate engagement.

            Is the solipsism hitting yet?

  • aldidoanta 2 days ago

  • ares623 2 days ago

    Hmm, doesn’t being acquired have a due diligence process?

    Wouldn’t that rule out that exit strategy?

    Or did M&A also get vibed to oblivion now?

  • dmitrygr 2 days ago

    I see a lot of money in the future for competent engineers who can unfuck what ChatGPT has fucked.

    • Frieren 2 days ago

      > I see a lot of money in the future for competent engineers who can unfuck what ChatGPT has fucked.

      I do not want to do that work. Cleaning up junior code is easy, because juniors mess up in predictable ways.

      LLM-generated code can be extremely complex and nonsensical at the same time. It has a line of genius and then code that does nothing. It can mix many different libraries in inhuman ways.

      Better to let those companies shut down and do something better elsewhere.

      • skydhash a day ago

        People have patterns for messing things up. That’s the reason they’re called anti-patterns. They learn the wrong thing and then apply it consistently. And you’ll still have human limitations to help you. They’ll want the thing to be working at least superficially, so a lot of the glue code will be correct. And the amount of code will have an upper limit.

        No such thing with AI.

      • hoppp a day ago

        Yup. When digging into vibe-coded apps I get brainfuck. It’s not organized for a human brain to process, and it’s full of weird things.

        • dmitrygr 5 hours ago

          It helps that my background is reverse engineering, so I am used to code that makes no sense or has been purposefully obfuscated.

    • jackdawed 2 days ago

      I have been doing exclusively this for the past year, selling my services as “hardening vibe-coded prototypes for production” or “helping early-stage startups scale”.

      In the best cases, they were able to reach funding or paying users. Architecture debt is one of the worst kinds of tech debt, so if you set it up right, it’s really hard to mess up.

      In the worst case, after my contract ended, the CEO fired the whole US engineering team and replaced them with offshore resources. This was an example of messing up despite the architectural and procedural safeguards we built.

    • tzury 2 days ago

      After the fall (the literal one, and the season) there will be no more money.

      Billions were poured into AI companies (and yes, nowadays, if you are writing a stateful loop on top of an LLM API you are considered an AI company).

      It will take some time for new money to arrive in the cycle.

      This CTO was simply, naively, trying to fulfill the promise of Sam Altman (and the like): "the future of AI", "90% of our code is written by AI", and so on.

      This is on the borderline of a scam, and it surely misleads the public.

    • dirtbag__dad a day ago

      Amen to this. Everyone is worried about losing their job. I’m pretty sure at the same time it’s cementing them.

    • murshudoff 2 days ago

      That future is not that far off. Without a truly senior engineer, whatever AI writes is slop in most cases.

    • msgodel 2 days ago

      The key part, as usual, will be figuring out how to sell the service to people who thought ChatGPT would have been enough.

  • hoppp a day ago

    These startups are just scams? I would not charge double; I would refuse to work with them. Let them fail and let the founder go to jail for lying to investors. Or they should get hacked and collapse.

  • rubenvanwyk 2 days ago

    Where are these types of startups hanging around? I wouldn't mind cleaning up that mess at all.

  • YouWhy 2 days ago

    TL;DR: is the described business viable?

    I can see how a tech-centric person would see the described business as viable, but putting on my founder hat, I realize that it faces enormous risks:

    - Any competitor could build the same product with less janky UX; users tend to hate even unavoidable usability issues.

    - There's no compliance strategy even remotely possible in the described scenario.

    - If a capital investment becomes necessary for business scaling, I cannot imagine this organization passing even a perfunctory level of due diligence.

    Would be happy to hear whether that makes sense.

  • thrown-0825 2 days ago

    Executive positions are prime targets for automation.