Software Development in the Time of New Angels

(davegriffith.substack.com)

25 points | by calosa 9 days ago

17 comments

  • scuff3d 2 hours ago

    The core of the entire argument is that the $150/hour is based on a developer's ability to physically write code, which is not true. Having something that can generate code reliably (which these things can barely do even with an expert at the wheel) doesn't address any of the actual hard problems we deal with on a daily basis.

    Plus running AI tools is going to get much more expensive. The current prices aren't sustainable long term and they don't have any viable path to reducing costs. If anything, the costs of operations for the big companies are going to get worse. They're in the "get 'em hooked" stage of the drug deal.

    • bigiain 10 minutes ago

      > Having something that can generate code reliably (which these things can barely do even with an expert at the wheel) doesn't address any of the actual hard problems we deal with on a daily basis.

      Not understanding that is something I've seen management do repeatedly for decades.

      This article reads like all the things I discovered, and the mistakes the company I worked for made, learning how to outsource software development back in the late 90s and early 2000s. The only difference is this is using AI to generate the code instead of lower paid developers from developing nations. And, just like software outsourcing as an industry created practices and working styles to maximise profit to outsourcing companies, anyone who builds their business relying on OpenAI/Anthropic/Google/Meta/whoever is going to need to address the risk of their chosen AI tool vendor ramping up the costs of using the tools to extract all the value of the apparent cost savings.

      This bit matches exactly with my experience:

      "The trouble comes in that most people don't know what code needs to be created to solve their problem, for any but the most trivial problems. Who does know what code would be needed to solve complex problems? Currently that's only known by software developers, development managers and product managers, three job classifications that are going to be merging rapidly."

      We found it was wrong to assume the people we employed as "developers" weren't also doing the dev management and product management roles. At least for our business, where there were 6 or 8 devs who all understood the business goals and the existing codebase and technology. We eventually got successful outsourced development working only after we realised that writing code from lists of tasks/requirements was way less than 50% of what our in-house development team had been doing for years. We ended up saving a lot of money on that 30 or 40% of the work, but the 60 or 70% of higher level _understanding the business and tech stack_ work still needed to be done by people who understood the whole business and had a vested interest in the business succeeding.

    • simonw an hour ago

      Completely agree on your first point: software development is so much more than writing code. LLMs are a threat to programmers for whom the job is 8 hours a day of writing code to detailed specifications provided by other people. I can't remember any point in my own career where I worked with people who got to do that.

      There's a great example of that in the linked post itself:

      > Let's build a property-based testing suite. It should create Java classes at random using the entire range of available Java features. These random classes should be checked to see whether they produce valid parse trees, satisfying a variety of invariants.

      Knowing what that means is worth $150/hour even if you don't type a single line of code to implement it yourself!
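
      To make "property-based testing" concrete, here's a minimal sketch of the pattern. It's not the post's actual suite (which targets Java and isn't shown); the same idea is sketched against Python's own parser using the hypothesis library: generate small random class sources, then check they produce valid parse trees satisfying a few invariants.

        # Minimal property-based testing sketch using hypothesis.
        # The post's suite generates Java classes; this illustration
        # generates Python classes so it stays self-contained.
        import ast
        from hypothesis import given, strategies as st

        # Random legal class names; the "C" prefix avoids keywords.
        class_names = st.from_regex(r"C[a-zA-Z0-9]{0,7}", fullmatch=True)

        @given(name=class_names, n_fields=st.integers(min_value=0, max_value=5))
        def test_random_classes_parse(name, n_fields):
            fields = "\n".join(f"    field{i} = {i}" for i in range(n_fields))
            source = f"class {name}:\n{fields or '    pass'}\n"

            tree = ast.parse(source)             # invariant 1: valid parse tree
            (cls,) = tree.body
            assert isinstance(cls, ast.ClassDef)
            assert cls.name == name              # invariant 2: name preserved
            assert len(cls.body) == max(n_fields, 1)  # invariant 3: body size

        test_random_classes_parse()  # hypothesis runs many random cases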

      And to be fair, the author makes that point themselves later on:

      > Agentic AI means that anything you know to code can be coded very rapidly. Read that sentence carefully. If you know just what code needs to be created to solve an issue you want, the angels will grant you that code at the cost of a prompt or two. The trouble comes in that most people don't know what code needs to be created to solve their problem, for any but the most trivial problems.

      On your second point: I wouldn't recommend betting against costs continuing to fall. The cost reduction trend has been reliable over the past three years.

      In 2022 the best available model was GPT-3 text-davinci-003 at $60/million input tokens.

      GPT-5 today is $1.25/million input tokens - 48x cheaper for a massively more capable model.

      ... and we already know it can be even cheaper. Kimi K2 came out two weeks ago benchmarking close to (possibly even above) GPT-5 and can be run at an even lower cost.

      I'm willing to bet there are still significantly more optimizations to be discovered, and prices will continue to drop - at least on a per-token basis.

      We're beginning to find more expensive ways to use the models though. Coding Agents like Claude Code and Codex CLI can churn through tokens.

      • bigiain a minute ago

        > In 2022 the best available model was GPT-3 text-davinci-003 at $60/million input tokens.

        > GPT-5 today is $1.25/million input tokens - 48x cheaper for a massively more capable model.

        Yes - but.

        GPT-5 and all the other modern "reasoning models" and tools burn through way more tokens to answer the same prompts.

        As you said:

        > We're beginning to find more expensive ways to use the models though. Coding Agents like Claude Code and Codex CLI can churn through tokens.

        Right now, it feels like the cost of using "frontier models" has stayed about the same for the entire ~5 year history of the current LLM/AI industry. But older models these days are, by comparison, effectively free.

        I'm wondering when/if there'll be an asymptotic flattening, where new frontier models are insignificantly better than older ones, and running some model off Huggingface on a reasonably specced-up Mac Mini or gaming PC will provide AI coding assistance at basically electricity and hardware depreciation prices?
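
        (The mechanics are already trivial; it's only the quality gap that's in question. A rough sketch of "running a model off Huggingface" locally, using the transformers pipeline API; the model id below is a placeholder, not a recommendation:)

          from transformers import pipeline

          generator = pipeline(
              "text-generation",
              model="some-org/small-coding-model",  # placeholder model id
              device_map="auto",  # use whatever local hardware is available
          )
          print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])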

      • scuff3d 17 minutes ago

        I get your point, but I don't think the pricing is viable long term. We're in the "burn everything to the ground to earn market share" phase. Once things start to stabilize and there is no more user growth, they'll start putting the screws to the users.

        I said the same thing about Netflix in 2015 and Gamepass in 2020. It might have taken a while but eventually it happened. And they're gonna have to raise prices higher and faster at some point.

  • nateb2022 9 minutes ago

    > Coding, the backbone and justification for the entire economic model of software development, went from something that could only be done slowly by an expensive few to something anyone could turn on like tap water.

    The multitude of freely self-taught programmers would suggest otherwise.

  • datadrivenangel 2 hours ago

    This is a very insightful article:

    "You might be expecting that here is where I would start proclaiming the death of software development. That I would start on how the strange new angels of agentic AI are simply going to replace us wholesale in order to feast on that $150/hour, and that it's time to consider alternative careers. I'm not going to do that, because I absolutely don't believe it. Agentic AI means that anything you know to code can be coded very rapidly. Read that sentence carefully. If you know just what code needs to be created to solve an issue you want, the angels will grant you that code at the cost of a prompt or two. The trouble comes in that most people don't know what code needs to be created to solve their problem, for any but the most trivial problems. Who does know what code would be needed to solve complex problems? Currently that's only known by software developers, development managers and product managers, three job classifications that are going to be merging rapidly."

    • amflare 35 minutes ago

      This. AI is not replacing us, it is pulling the ladder up behind us.

  • datadrivenangel 2 hours ago

    The bit about Knight Capital implies that the software engineers were bad, which is notably untrue.

    "A bad [software engineer] can easily destroy that much value even faster (A developer at Knight Capital destroyed $440 million in 45 minutes with a deployment error and some bad configuration logic, instantly bankrupting the firm by reusing a flag variable). "

    • charlieflowers 40 minutes ago

      Both the article's examples there are bogus -- yet in both cases the underlying points are true.

      Google generates a lot of revenue per employee not because the employees are good (though many of them are of course), but because they own the front door to the web. And the Knight Capital story has many nuances left out by that summary.

      In both cases the author needed a hard hitting but terse example. But as I said, both the claims are true, so in the voice of the courtroom judge, "I'll allow it."

    • hinkley 2 hours ago

      There were decidedly shitty engineering decisions behind that dumpster fire.

      The biggest was that the only safe way to recycle feature flag names is to put ample time separation between the last use of the flag's previous meaning and the first application of its new one. They did not. If they had, they would have noticed that one server was not getting redeployed properly in the time gap between the two uses.

      They also did not do a full rollback. They rolled back the code but not the toggles, which ignited the fire.

      These are rookie mistakes. If you want to argue they are journeyman mistakes, I won’t fight you too much, but they absolutely demonstrate a lack of mastery of the problem domain. And when millions of dollars change hands per minute you’d better not be Faking it Til You Make It.
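
      To spell out the hazard (a hypothetical sketch, not Knight's actual code; all names here are invented): recycle a flag name without a time gap, and any server still running the old build gives the flag its old meaning.

        # Hypothetical illustration of the flag-reuse hazard; not
        # Knight Capital's actual code.
        def route_old_build(order, flags):
            # Old build, still live on the one server the deploy missed.
            if flags.get("POWER_PEG"):  # flag's OLD meaning
                return f"retired test algorithm fires for {order}"
            return f"normal routing for {order}"

        def route_new_build(order, flags):
            # New build on the other seven servers, reusing the same
            # flag name for entirely new routing logic.
            if flags.get("POWER_PEG"):  # flag's NEW meaning
                return f"smart order routing for {order}"
            return f"normal routing for {order}"

        flags = {"POWER_PEG": True}  # the toggle is flipped on for release
        print(route_new_build("order-1", flags))  # 7 servers: new behaviour
        print(route_old_build("order-2", flags))  # 8th server: old behaviour
        # Rolling back the code but leaving the toggle on (as described
        # above) puts the old meaning live on every server.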

  • simonw 43 minutes ago

    It took me a while to get into it, but this is really good. You need to make it past the anecdote about building a property-based testing suite with Claude Code though, the real meat is in the second half.

  • hdivider 2 hours ago

    Here's what I don't understand.

    Developers who get excited by agentic development put out posts like this. (I get excited too.)

    Other developers tend to point out objections in terms of maintainability, scalability, overly complicated solutions, and so on, all of which are valid.

    However, this part of AI evolves very quickly. So given these are known problems, why shouldn't we expect rapid improvements in agentic AI systems for software development, to the point where software developers who stick with the old paradigm will indeed be eroded in time? I'm genuinely curious because clearly the speed of advancement is significant.

    • vages 2 hours ago

      Anecdotally, I find early mover advantage to be overrated (ask anyone who bought Betamax or HD-DVD players). It is significantly cheaper – on average – to exploit what you already know and learn from the mistakes of other, earlier movers.

  • williamstein 2 hours ago

    If I had hired a software developer a few years ago, I might have expected them to do roughly what Claude Code does today on some tasks (?). If I hired a dev today I would expect much more from them than what Claude Code can currently do.

  • facundo_olano an hour ago

    Willing to accept AI agents can replace programmers

    Not willing to accept ex-US devs can do a comparable job at half the price

  • aetherspawn 2 hours ago

    Ah... you got me. Put (AI) in the title or something.