Don't fall into the anti-AI hype

(antirez.com)

381 points | by todsacerdoti 10 hours ago ago

546 comments

  • embedding-shape 9 hours ago ago

    > But what was the fire inside you, when you coded till night to see your project working? It was building.

    I feel like this is not the same for everyone. For some people, the "fire" is literally about "I control a computer", for others "I'm solving a problem for others", and yet for others "I made something that made others smile/cry/feel emotions" and so on.

    I think there is a section of programmers who actually do like the actual typing of letters, numbers and special characters into a computer, and for them, I understand LLMs remove the fun part. For me, I initially got into programming because I wanted to ruin other people's websites, then I figured out I needed to know how to build websites first, then I found it more fun to create and share what I've done with others and have them tell me what they think of it. That's my "fire". But I've met so many people who don't care an iota about sharing what they built with others; it means nothing to them.

    I guess the conclusion is that not all programmers program for the same reason. For some of us, LLMs help a lot and make things even more fun. For others, LLMs remove the core part of what makes programming fun for them. Hence we get this constant back and forth of "Can't believe others can work like this!" vs "I can't believe others aren't working like this!", but both sides seem to completely miss the other side.

    • zeroonetwothree 5 hours ago ago

      You’re right of course. For me there’s no flow state possible with LLM “coding”. That makes it feel miserable instead of joyous. Sitting around waiting while it spits out tokens that I then have to carefully look over and tweak feels like very hard work. Compared to entering flow and churning out those tokens myself, which feels effortless once I get going.

      Probably other people feel differently.

      • wpm 4 hours ago ago

        I'm the same way. LLMs are still somewhat useful as a way to start a greenfield project, or as a very hyper-custom Google search that explains something to me exactly how I'd like it explained, or generates examples hyper-tuned for the problem at hand, but that's hardly as transformative or revolutionary as everyone is making Claude Code out to be. I loathe the tone these things take with me and hate how much extra bullshit they always add to the output that I didn't ask for.

        When I do have it one-shot a complete problem, I never copy paste from it. I type it all out myself. I didn't pay hundreds of dollars for a mechanical keyboard, tuned to make every keypress a joy, to push code around with a fucking mouse.

        • mirror_neuron 4 hours ago ago

          I’m a “LLM believer” in a sense, and not someone who derives joy from actually typing out the tokens in my code, but I also agree with you about the hype surrounding Claude Code and “agentic” systems in general. I have found the three positive use cases you mentioned to be transformative to my workflow on its own. I’m grateful that they exist even if they never get better than they are today.

      • biophysboy 4 hours ago ago

        I feel differently! My background isn't programming, so I frequently feel inhibited by coding. I've used it for over a decade but always as a secondary tool. It's fun for me to have a line of reasoning, and to be able to toy with and analyze a series of questions faster than I used to be able to.

    • loubbrad 9 hours ago ago

      > I think there is a section of programmers who actually do like the actual typing of letters, numbers and special characters into a computer...

      Reminds me of this excerpt from Richard Hamming's book:

      > Finally, a more complete, and more useful, Symbolic Assembly Program (SAP) was devised—after more years than you are apt to believe during which most programmers continued their heroic absolute binary programming. At the time SAP first appeared I would guess about 1% of the older programmers were interested in it—using SAP was “sissy stuff”, and a real programmer would not stoop to wasting machine capacity to do the assembly. Yes! Programmers wanted no part of it, though when pressed they had to admit their old methods used more machine time in locating and fixing up errors than the SAP program ever used. One of the main complaints was when using a symbolic system you do not know where anything was in storage—though in the early days we supplied a mapping of symbolic to actual storage, and believe it or not they later lovingly pored over such sheets rather than realize they did not need to know that information if they stuck to operating within the system—no! When correcting errors they preferred to do it in absolute binary addresses.

      • layer8 6 hours ago ago

        I think this is beside the point, because the crucial change with LLMs is that you don’t use a formal language anymore to specify what you want, and get a deterministic output from that. You can’t reason with precision anymore about how what you specify maps to the result. That is the modal shift that removes the “fun” for a substantial portion of the developer workforce.

        • hackable_sand 4 hours ago ago

          That's not it for me, personally.

          I do all of my programming on paper, so keystrokes and formal languages are the fast part. LLMs are just too slow.

        • convolvatron 5 hours ago ago

          It's not about fun. When I'm going through the actual process of writing a function, I think about design issues: how things are named, how the errors from this function flow up, how scheduling is happening, how memory is managed. I compare the code to my ideal, and this is the time when I realize that my ideal is flawed or incomplete.

          I think a lot of us don't get everything specced out up front; we see how things fit, and adjust accordingly. Most of the really good ideas I've had were not formulated in the abstract, but were realizations had in the process of spelling things out.

          I have a process, and it works for me. Different people certainly have other ones, and other goals. But maybe stop telling me that, instead of interacting with the compiler directly, it's absolutely necessary that I describe what I want to a well-meaning idiot, and patiently correct them, even though they are going to forget everything I just said in a moment.

      • zahlman 5 hours ago ago

        I don't know what book you're talking about, but it seems that you intend to compare the switch to an AI-based workflow to using a higher-level language. I don't think that's valid at all. Nobody using Python for any ordinary purpose feels compelled to examine the resulting bytecode, for example, but a responsible programmer needs to keep tabs on what Claude comes up with, configure a dev environment that organizes the changes into a separate branch (as if Claude were a separate human member of a team) etc. Communication in natural language is fundamentally different from writing code; if it weren't, we'd be in a world with far more abundant documentation. (After all, that should be easier to write than a prompt, since you already have seen the system that the text will describe.)

        • immibis 4 hours ago ago

          > Nobody using Python for any ordinary purpose feels compelled to examine the resulting bytecode, for example,

          The first people using higher level languages did feel compelled to. That's what the quote from the book is saying. The first HLL users felt compelled to check the output just like the first LLM users.

          • zahlman 4 hours ago ago

            Yes, and now they don't.

            But there is no reason to suppose that responsible SWEs would ever be able to stop doing so for an LLM, given the reliance on nondeterminism and a fundamentally imprecise communication mechanism.

            That's the point. It's not the same kind of shift at all.

          • le-mark 4 hours ago ago

            Hamming was talking about assembler, not a high level language.

            • sanderjd 3 hours ago ago

              The same pattern held through the early days of "high level" languages that were compiled to assembly, and then the early days of higher level languages that were interpreted.

              I think it's a very apt comparison.

              • ThrowawayR2 3 hours ago ago

                If the same pattern held, then it ought to be easy to find quotes to prove it. Other than the one above from Hamming, we've been shown none.

      • quesera 3 hours ago ago

        Contra your other replies, I think this is exactly the point.

        I had an inkling that the feeling existed back then, but I had no idea it was documented so explicitly. Is this quote from The Art of Doing Science and Engineering?

    • phicoh 9 hours ago ago

      The problem I see is not so much in how you generate the code. It is about how to maintain the code. If you check in the AI-generated code unchanged, do you then start changing that code by hand later? Do you trust that in the future AI can fix bugs in your code? Or do you clean up the AI-generated code first?

      • jt2190 6 hours ago ago

        LLMs remove the familiarity of “I wrote this and deeply understand this”. In other words, everything is “legacy code” now ;-)

        For those who are less experienced with the constant surprises that legacy code bases can provide, LLMs are deeply unsettling.

        • chrsw 4 hours ago ago

          This is the key point for me in all this.

          I've never worked in web development, where it seems to me the majority of LLM coding assistants are deployed.

          I work on safety critical and life sustaining software and hardware. That's the perspective I have on the world. One question that comes up is "why does it take so long to design and build these systems?" For me, the answer is: that's how long it takes humans to reach a sufficient level of understanding of what they're doing. That's when we ship: when we can provide objective evidence that the systems we've built are safe and effective. These systems we build, which are complex, have to interact with the real world, which is messy and far more complicated.

          Writing more code means that's more complexity for humans (note the plurality) to understand. Hiring more people means that's more people who need to understand how the systems work. Want to pull in the schedule? That means humans have to understand in less time. Want to use Agile or this coding tool or that editor or this framework? Fine, these tools might make certain tasks a little easier, but none of that is going to remove the requirement that humans need to understand complex systems before they will work in the real world.

          So then we come to LLMs. It's another episode of "finally, we can get these pesky engineers and their time wasting out of the loop". Maybe one day. But we are far from that today. What matters today is still how well do human engineers understand what they're doing. Are you using LLMs to help engineers better understand what they are building? Good. If that's the case you'll probably build more robust systems, and you _might_ even ship faster.

          Are you trying to use LLMs to fool yourself into thinking this still isn't the game of humans needing to understand what's going on? "Let's offload some of the understanding of how these systems work onto the AI so we can save time and money". Then I think we're in trouble.

          • discreteevent an hour ago ago

            " They make it easier to explore ideas, to set things up, to translate intent into code across many specialized languages. But the real capability—our ability to respond to change—comes not from how fast we can produce code, but from how deeply we understand the system we are shaping. Tools keep getting smarter. The nature of learning loop stays the same."

            https://martinfowler.com/articles/llm-learning-loop.html

          • dpark 2 hours ago ago

            > Are you trying to use LLMs to fool yourself into thinking this still isn't the game of humans needing to understand what's going on?

            This is a key question. If you look at all the anti-AI stuff around software engineering, the pervading sentiment is “this will never be a senior engineer”. Setting aside the possibility of future models actually bridging this gap (this would be AGI), let’s accept this as true.

            You don’t need an LLM to be a senior engineer to be an effective tool, though. If an LLM can turn your design into concrete code more quickly than you could, that gives you more time to reason over the design, the potential side effects, etc. If you use the LLM well, it allows you to give more time to the things the LLM can’t do well.

        • dpark 4 hours ago ago

          I suspect that we are going to have a wave of gurus who show up soon to teach us how to code with LLMs. There’s so much doom and gloom in these sorts of threads about the death of quality code that someone is going to make money telling people how to avoid that problem.

          The scenario you describe is a legitimate concern if you’re checking in AI generated code with minimal oversight. In fact I’d say it’s inevitable if you don’t maintain strict quality control. But that’s always the case, which is why code review is a thing. Likewise you can use LLMs without just checking in garbage.

          The way I’ve used LLMs for coding so far is to give instructions and then iterate on the result (manually or with further instructions) until it meets my quality standards. It’s definitely slower than just checking in the first working thing the LLM churns out, but it’s still been faster than doing it myself, and I understand it exactly as well, because I have to in order to give instructions (design) and iterate.

          My favorite definition of “legacy code” is “code that is not tested” because no matter who writes code, it turns into a minefield quickly if it doesn’t have tests.

          • d0liver 4 hours ago ago

            How do you know that it's actually faster than if you'd just written it yourself? I think the review and iteration part _is_ the work, and the fact that you started from something generated by an LLM doesn't actually speed things up. The research that I've seen also generally backs this idea up -- LLMs _feel_ very fast because code is being generated quickly, but they haven't actually done any of the work.

            • dpark 3 hours ago ago

              Because I’ve been a software engineer for over 20 years. If I look at a feature and feel like it will take me a day and an LLM churns it out in an hour including the iterating, I’m confident that using the LLM was meaningfully faster. Especially since engineers (including me) are notoriously bad at accurate estimation and things usually take at least twice as long as they estimate.

              I have tested throwing several features at an LLM lately and I have no doubt that I’m significantly faster when using an LLM. My experience matches what Antirez describes. This doesn’t make me 10x faster, mostly because so much of my job is not coding. But in terms of raw coding, I can believe it’s close to 10x.

      • seanmcdirmid 5 hours ago ago

        Are you just generating code with the LLM? Ya, you are screwed. Are you generating documentation and tests and everything else that helps the code live? Your options for maintenance go up. Now just replace "generate" with "maintain" and you are basically asking the AI to make changes to a description at the top that then percolate down into multiple artifacts being updated, only one of which happens to be the code itself, with the code updating multiple times as the AI checks tests and so on.

      • victorbjorklund 6 hours ago ago

        Is it really much different from maintaining code that other people wrote and that you merged?

        • zjzkshz 6 hours ago ago

          Yes, this is (partly) why developer salaries are so high. I can trust my coworkers in ways not possible with AI.

          There is no process solution for low performers (as of today).

          • dpark 4 hours ago ago

            The solution for low performers is very close oversight. If you imagine an LLM as a very junior engineer who needs an inordinate amount of hand holding (but who can also read and write about 1000x faster than you and who gets paid approximately nothing), you can get a lot of useful work out of it.

            A lot of the criticisms of AI coding seem to come from people who think that the only way to use AI is to treat it as a peer. “Code this up and commit to main” is probably a workable model for throwaway projects. It’s not workable for long term projects, at least not currently.

            • nmehner 4 hours ago ago

              A Junior programmer is a total waste of time if they don't learn. I don't help Juniors because it is an effective use of my time, but because there is hope that they'll learn and become Seniors. It is a long term investment. LLMs are not.

              • dpark 4 hours ago ago

                It’s a metaphor. With enough oversight, a qualified engineer can get good results out of an underperforming (or extremely junior) engineer. With a junior engineer, you give the oversight to help them grow. With an underperforming engineer you hope they grow quickly or you eventually terminate their employment because it’s a poor time trade off.

                The trade off with an LLM is different. It’s not actually a junior or underperforming engineer. It’s far faster at churning out code than even the best engineers. It can read code far faster. It writes tests more consistently than most engineers (in my experience). It is surprisingly good at catching edge cases. With a junior engineer, you drag down your own performance to improve theirs and you’re often trading off short term benefits vs long term. With an LLM, your net performance goes up because it’s augmenting you with its own strengths.

                As an engineer, it will never reach senior level (though future models might). But as a tool, it can enable you to do more.

                • 12_throw_away 3 hours ago ago

                  > It’s far faster at churning out code than even the best engineers.

                  I'm not sure I can think of a more damning indictment than this tbh

                  • dpark 2 hours ago ago

                    Can you explain why that’s damning?

                    • nmehner 44 minutes ago ago

                      I guess everyone dealing with legacy software sees code as a cost factor. Being able to delete code is harder, but often more important than writing code.

                      Owning code requires you to maintain it. Finding out which parts of the code actually implement features and which parts are not needed anymore (or were never needed in the first place) is really hard, since most of the time the requirements have never been documented and the authors have left or cannot remember. But not understanding what the code does removes any possibility of improving or modifying it. This is how software dies.

                      Churning out code fast is a huge future liability. Management wants solutions fast and doesn't understand these long term costs. It is the same with all code generators: Short term gains, but long term maintainability issues.

                    • 12_throw_away 34 minutes ago ago
                      • dpark 5 minutes ago ago

                        I feel like this is a forest for the trees kind of thing.

                        It is implied that the code being created is for “capabilities”. If your AI is churning out needless code, then sure, that’s a bad thing. Why would you be asking the AI for code you don’t need, though? You should be asking it for critical features, bug fixes, the things you would be coding up regardless.

                        You can use a hammer to break your own toes or you can use it to put a roof on your house. Using a tool poorly reflects on the craftsman, not the tool.

                • fzeroracer an hour ago ago

                  > It writes tests more consistently than most engineers (in my experience)

                  I'm going to nit on this specifically. I firmly believe anyone who genuinely believes this either never writes tests that actually matter, or doesn't review the tests that an LLM throws out there. I've seen so many cases of people saying 'look at all these valid tests our LLM of choice wrote', only for half of them to do nothing and the other half to be misleading as to what they actually test.

              • embedding-shape 4 hours ago ago

                Just like LLMs are a total waste of time if you never update the system/developer prompts with additional information as you learn what's important to communicate vs not.

                • nmehner 4 hours ago ago

                  That is a completely different level. I expect a Junior Developer to be able to completely replace me long term and to be able to decide when existing rules are outdated and when they should be replaced. To challenge my decisions without me asking for it. To be able to adapt what they have learned to new types of projects or new programming languages. Being Senior is setting the rules.

                  An LLM only follows rules/prompts. They can never become Senior.

        • YetAnotherNick 6 hours ago ago

          Yes. Firstly, AI forgets why it wrote certain code, whereas with humans at least you can ask them when reviewing. Secondly, current-gen AI (at least Claude) kind of wants to finish the thing instead of thinking of the bigger picture. Human programmers code a little differently, in that they hate a single-line fix in a random file to patch something in a different part of the code.

          I think the second is a result of RL training that optimizes for self-contained tasks like SWE-bench.

          • seanmcdirmid 5 hours ago ago

            So you live in a world where code history must only be maintained orally? Have you ever thought to ask the AI to write documentation on the what and the why, and not just write the code? Asking it to document as well as code works well when the AI needs to go back and change either.

            • nemomarx 5 hours ago ago

              I don't see how asking AI to write some description of why it wrote this or that code would actually result in an explanation of why it wrote that code? It's not like it's thinking about it in that way, it's just generating both things. I guess they'd be in the same context so it might be somewhat correct.

              • seanmcdirmid 5 hours ago ago

                If you ask it to document why it did something, then when it goes back later to update the code it has the why in its context. Otherwise, the AI just sees some code later and has no idea why it was written or what it does without reverse engineering it at the moment.

                • immibis 4 hours ago ago

                  I'm not sure you understood the GP comment. LLMs don't know and can't tell you why they write certain things. You can't fix that by editing your prompt so it writes it on a comment instead of telling you. It will not put the "why" in the comment, and therefore the "why" won't be in the future LLM's context, because there is no way to make it output the "why".

                  It can output something that looks like the "why" and that's probably good enough in a large percentage of cases.

                  • seanmcdirmid 4 hours ago ago

                    LLMs know why they are writing things in the moment, and they can justify decisions. Asking it to write those things down when it writes code works, or even asking them to design the code first and then generate/update code from the design also works. But yes, if things aren’t written down, “the LLM don’t know and can’t tell.” Don’t do that.

                    • Avicebron 3 hours ago ago

                      I'm going to second seanmcdirmid here, a quick trick is to have Claude write a "remaining.md" if you know you have to do something that will end the session.

                      Example from this morning, I have to recreate the EFI disk of one of my dev vm's, it means killing the session and rebooting the vm. I had Claude write itself a remaining.md to complement the overall build_guide.vm I'm using so I can pick up where I left off. It's surprisingly effective.

                    • YetAnotherNick an hour ago ago

                      No, humans probably have tens of millions of tokens of memory per PR. It includes not only what's in the code, but everything they searched, what they tested and in which way, the order they worked in, the edge cases they faced, etc. Claude just can't document all of this, else it would run out of its working context pretty soon.

                  • dpark 2 hours ago ago

                    > It can output something that looks like the "why"

                    This feels like a distinction without difference. This is an extension of the common refrain that LLMs cannot “think”.

                    Rather than get overly philosophical, I would ask what the difference is in practical terms. If an LLM can write out a “why” and it is sufficient explanation for a human or a future LLM, how is that not a “why“?

              • dpark 4 hours ago ago

                Have you tried it? LLMs are quite good at summarizing. Not perfect, but then neither are humans.

            • zeroonetwothree 5 hours ago ago

              Have you never had a situation where a question arose a year (or several) later that wasn’t addressed in the original documentation?

              In particular IME the LLM generates a lot of documentation that explains what and not a lot of the why (or at least if it does it’s not reflecting underlying business decisions that prompted the change).

              • seanmcdirmid 5 hours ago ago

                You can ask it to generate the why, even if the agent isn’t doing that by default. At least you can ask it to encode how it is mapping your request to code, and to make sure that the original request is documented, so you can record why it did something, even if it can’t have insight into why you made the request in the first place. The same applies to successive changes.

      • embedding-shape 9 hours ago ago

        Depends on what you do. When I'm using LLMs to generate code for projects I need to maintain (basically, everything non-throw-away-once-used), I treat it as any other code I'd write, tightly controlled with a focus on simplicity and well-thought out abstractions, and automated testing that verify what needs to be working. Nothing gets "merged" into the code without extensive review, and me understanding the full scope of the change.

        So with that, I can change the code by hand afterwards or continue with LLMs, it makes no difference, because it's essentially the same process as if I had someone follow the ideas I describe, and then later they come back with a PR. I think probably this comes naturally to senior programmers and those who had a taste of management and similar positions, but if you haven't reviewed other's code before, I'm not sure how well this process can actually work.

        At least for me, I manage to produce code I can maintain, and seemingly others do too, and the codebases don't devolve into hairballs/spaghetti. But again, it requires reviewing absolutely every line and constantly editing/improving.

        • phicoh 8 hours ago ago

          We recently got a PR from somebody adding a new feature and the person said he doesn't know $LANG but used AI.

          The problem is, that code would require a massive amount of cleanup. I took a brief look and some code was in the wrong place. There were coding style issues, etc.

          In my experience, the easy part is getting something that works for 99%. The hard part is getting the architecture right, all of the interfaces and making sure there are no corner cases that get the wrong results.

          I'm sure AI can easily get to the 99%, but does it help with the rest?

          • embedding-shape 7 hours ago ago

            Yeah, so what I'm mostly doing, and advocate for others to do, is basically the pure opposite of that.

            Focus on architecture, interfaces, corner-cases, edge-cases and tradeoffs first, and then the details within that won't matter so much anymore. The design/architecture is the hard part, so focus on that first and foremost, and review + throw away bad ideas mercilessly.

          • simonw 8 hours ago ago

            Yes it does... but only in the hands of an expert who knows what they are doing.

            I'd treat PRs like that as proofs of concept that the thing can be done, but I'd be surprised if they often produced code that should be directly landed.

            • teeeew 8 hours ago ago

              In the hands of an expert… right. So is it not incredibly irresponsible to release these tools into the wild, and expose them to those who are not experts? They will actually become incredibly worse off. Ironically this does not ‘democratise’ intelligence at all - the gap widens between experts and the rest.

              • phicoh 6 hours ago ago

                I'm curious about the economic aspects of this. If only experts can use such tools effectively, how big will the total market be and does that warrant the investments?

                For companies, if these tools make experts even more special, then experts may get more power certainly when it comes to salary.

                So the productivity benefits of AI have to be pretty high to overcome this. Does AI make an expert twice as productive?

                • paodealho 5 hours ago ago

                  I have been thinking about this in the last few weeks. First time I see someone commenting about it here.

                  - If the number of programmers will be drastically reduced, how big of a price increase would companies like Anthropic need to be profitable?

                  - If you are a manager, you now have a much bigger bus-factor problem to deal with. One person leaving means a greater blow to the team's knowledge.

                  - If the number of programmers will be drastically reduced, the need for managers and middle managers will also decline, no? Hmm...

              • simonw 7 hours ago ago

                I sometimes wonder what would have happened if OpenAI had built GPT-3 and then GPT-4 and NOT released them to the world, on the basis that they were too dangerous for regular people to use.

                That nearly happened - it's why OpenAI didn't release open-weight models past GPT-2, and it's why Google didn't release anything useful built on Transformers despite having invented the architecture.

                If that were the world we lived in today, LLMs would be available only to a small, elite and impossibly well funded class of people. Google and OpenAI would solely get to decide who could explore this new world with them.

                I think that would suck.

                • teeeew 7 hours ago ago

                  So… what?

                  With all due respect I don’t care about an acceleration in writing code - I’m more interested in incremental positive economic impact. To date I haven’t seen anything convince me that this technology will yield this.

                  Producing more code doesn’t overcome the lack of imagination, creativity and so on to figure out what projects resources should be invested in. This has always been an issue that will compound at firms like Google who have an expansive graveyard of projects laid to rest.

                  In fact, in a perverse way, all this ‘intelligence’ can exist while, at the same time, humans get worse in their ability to make judgments on investment decisions.

                  So broadly where is the net benefit here?

                  • simonw 7 hours ago ago

                    You mean the net benefit in widespread access to LLMs?

                    I get the impression there's no answer here that would satisfy you, but personally I'm excited about regular people being able to automate tedious things in their lives without having to spend 6+ months learning to program first.

                    And being able to enrich their lives with access to as much world knowledge as possible via a system that can translate that knowledge into whatever language and terminology makes the most sense to them.

                    • teeeew 7 hours ago ago

                      “I'm excited about regular people being able to automate tedious things in their lives without having to spend 6+ months learning to program first.”

                      Bring the implicit and explicit costs to date into your analysis and you should quickly realise none of this makes sense from a societal standpoint.

                      Also you seem to be living in a bubble - the average person doesn’t care about automating anything!

                      • bathtub365 7 hours ago ago

                        The average person already automates a lot of things in their day to day lives. They spend far less time doing the dishes, laundry, and cleaning because parts of those tasks have been mechanized and automated. I think LLMs probably automate the wrong thing for the average person (i.e., I still have to load the laundry machine and fold the laundry after) but automation has saved the average person a lot of time

                        • zeroonetwothree 5 hours ago ago

                          For example, my friend doesn’t know programming but his job involves some tedious spreadsheet operations. He was able to use an LLM to generate a Python script to automate part of this work. Saving about 30 min/day. He didn’t review the code at all, but he did review the output to the spreadsheet and that’s all that matters.

                          His workplace has no one with programming skills, so this is automation that would never have happened otherwise. Of course it’s not exactly replacing a human or anything. I suppose he could have hired someone to write the script, but he never really thought to do that.
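
                          To give a sense of scale: the kind of script being described is often only a dozen or so lines. A minimal sketch in Python with pandas (the file name, column names, and the "tedious operation" are all invented for illustration, and it assumes pandas and openpyxl are installed):

                            # Hypothetical example of the sort of spreadsheet chore an LLM can script:
                            # read a daily export, total the amounts per customer, write a summary sheet.
                            import pandas as pd

                            def summarize(input_path: str, output_path: str) -> None:
                                df = pd.read_excel(input_path)  # load the raw export
                                summary = (
                                    df.groupby("customer", as_index=False)["amount"]
                                    .sum()
                                    .sort_values("amount", ascending=False)
                                )
                                summary.to_excel(output_path, index=False)  # write a workbook anyone can open

                            if __name__ == "__main__":
                                summarize("daily_export.xlsx", "summary.xlsx")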

                        • zahlman 5 hours ago ago

                          What sorts of things will the average, non-technical person think of automating on a computer that are actually quality-of-life-improving?

                      • simonw 7 hours ago ago

                        > Also you seem to be living in a bubble - the average person doesn’t care about automating anything!

                        One of my life goals is to help bring as many people into my "technology can automate things for you" bubble as I possibly can.

              • closewith 7 hours ago ago

                You can apply the same logic to all technologies, including programming languages, HTTP, cryptography, cameras, etc. Who should decide what's a responsible use?

          • bitwize 3 hours ago ago

            > We recently got a PR from somebody adding a new feature and the person said he doesn't know $LANG but used AI.

            "Oh, and check it out: I'm a bloody genius now! Estás usando este software de traducción in forma incorrecta. Por favor, consulta el manual. I don't even know what I just said, but I can find out!"

        • zahlman 5 hours ago ago

          ... And with this level of quality control, is it still faster than writing it yourself?

      • curt15 4 hours ago ago

        There is a related issue of ownership. When human programmers make errors that cost revenue or worse, there is (in theory) a clear chain of accountability. Who do you blame if errors generated by LLMs end up in mission critical software?

        • embedding-shape 4 hours ago ago

          > Who do you blame if errors generated by LLMs end up in mission critical software?

          I don't think many companies/codebases allow LLMs to autonomously edit code and deploy it, there is still a human in the loop that "prompt > generates > reviews > commits", so it really isn't hard to find someone to blame for those errors, if you happen to work in that kind of blame-filled environment.

          The same goes for contractors, I suppose: if you end up outsourcing work to a contractor and they do a shitty job but it gets shipped anyway, who do you blame? Replace "contractor" with "LLM" and I think the answer remains the same.

      • chii 6 hours ago ago

        Would it not be a new paradigm, where the generated code from AI is segregated and treated like a binary blob? You don't change it (beyond perhaps some cosmetic, or superficial changes that the AI missed). You keep the prompt(s), and maintain that instead. And for new changes you want added, the prompts are either modified, or appended to.

        • fireflash38 5 hours ago ago

          Sounds like a nondeterministic nightmare

      • hxugufjfjf 5 hours ago ago

        I have AI agents write, perform code review, improve and iterate upon the code. I trust that an agent with capabilities to write working code can also improve it. I use Claude skills for this and keep improving the skills based on both AI and human code reviews for the same type of code.

    • irthomasthomas 3 hours ago ago

      In my feed, 'AI hype' outnumbers 'anti-AI hype' 5-1. And anti-hype moderates like antirez and simonw are rare. To be a radical in AI is to believe that AI tools offer a modest but growing net positive utility to a modest but growing subset of hackers and professionals.

      • kaffekaka an hour ago ago

        Well put.

        AI obviously brings big benefits into the profession. We just have not seen exactly what they are yet, or how it will unfold.

        But personally I feel that a future of not having to churn out yet another CRUD app is attractive.

    • frizlab 9 hours ago ago

      > I think there is a section of programmers who actually do like the actual typing of letters, numbers and special characters into a computer, and for them, I understand LLMs remove the fun part.

      Exactly me.

      • tarsinge 7 hours ago ago

        Conversely, I have very little interest in the process of programming by itself; for me, all the magic is in the end result and the business value (which fortunately has served me quite well professionally). From as young as I can remember, I was fascinated with the GUI DBMSs (4th Dimension/FileMaker/MS Access/…) my dad used to improve his small business. I only got into programming to not be limited by graphical tools. So LLMs for me are just a nice addition to my toolbox, like a power tool is to a manual one. It doesn't philosophically change anything.

      • judahmeek 8 hours ago ago

        That's because physical programming is a ritual.

        I'm not entirely sure what that means myself, so please speak up if my statement resonates with you.

        • kaffekaka an hour ago ago

          It resonates. But as I see it, that kind of ritual is something I'd rather devote myself to at home. At work, the more efficiently and rapidly we can get stuff done, the better.

          Drawing and painting is a ritual to me as well. No one pays me for it and I am happy about that.

        • hackable_sand 4 hours ago ago

          Corporations trying to "invent" AGI is like that boss in Bloodborne.

      • amelius 9 hours ago ago

        Same. However, for me the fun in programming was always a kind of trap that kept me from doing more challenging things.

        Now the fun is gone, maybe I can do more important work.

        • 12_throw_away 3 hours ago ago

          > Now the fun is gone, maybe I can do more important work.

          This is a very sad, bleak, and utilitarian view of "work." It is also simply not how humans operate. Even if you only care about the product, humans that enjoy and take pride in what they're doing almost invariably produce better products that their customers like more.

        • DrewADesign 8 hours ago ago

          You might be surprised to find out how much of your motivation to do any of it at all was tied to your enjoyment, and that’s much more difficult to overcome than people realize.

        • hxugufjfjf 5 hours ago ago

          My problem was the exact opposite. I wanted to deliver but the dislike of the actual programming / typing code prevented me from doing so. AI has solved this for me.

    • jcheng 3 hours ago ago

      > For others, LLMs remove the core part of what makes programming fun for them.

      Anecdotally, I’ve had a few coworkers go from putting themselves firmly in this category to saying “this is the most fun I’ve ever had in my career” in the last two months. The recent improvement in models and coding agents (Claude Code with Opus 4.5 in our case) is changing a lot of minds.

      • senordevnyc 2 hours ago ago

        Yeah, I'd put myself in this camp. My trust is slowly going up, and coupled with improved guardrails (more tests, static analysis, refactoring to make reviewing easier), that increasing trust is giving me more and more speed at going from thought ("hmm, I should change how this feature works to be like X") to deployment into the hands of my customers.

    • skybrian 8 hours ago ago

      I think it’s true that people get enjoyment from different things. Also, I wonder if people have fixed ideas about how coding agents can be used? For example, if you care about what the code looks like and want to work on readability, test coverage, and other “code health” tasks with a coding agent, you can do that. It’s up to you whether you ask it to do cleanup tasks or implement new features.

      Maybe there are people who care about literally typing the code, but I get satisfaction from making the codebase nice and neat, and now I have power tools. I am just working on small personal projects, but so far, Claude Opus 4.5 can do any refactoring I can describe.

    • jt2190 6 hours ago ago

      > … not all programmers program for the same reason. For some of us, LLMs help a lot and make things even more fun. For others, LLMs remove the core part of what makes programming fun for them. Hence we get this constant back and forth of "Can't believe others can work like this!" vs "I can't believe others aren't working like this!", but both sides seem to completely miss the other side.

      Unfortunately the job market does not demand both types of programmer equally: Those who drive LLMs to deliver more/better/faster/cheaper are in far greater demand right now. (My observation is that a decade of ZIRP-driven easy hiring paused the natural business cycle of trying to do more with fewer employees, and we’ve been seeing an outsized correction for the past few years, accelerated by LLM uptake.)

      • aleph_minus_one 5 hours ago ago

        > Unfortunately the job market does not demand both types of programmer equally: Those who drive LLMs to deliver more/better/faster/cheaper are in far greater demand right now.

        I doubt that the LLM drivers deliver something better; quite the opposite. But I guess managers will only realize this when it's too late: and of course they won't take any responsibility for this.

        • jt2190 5 hours ago ago

          > I doubt that the LLM drivers deliver something better…

          That is your definition of “better”. If we’re going to trade our expertise for coin, we must ask ourselves if the cost of “better” is worth it to the buyer. Can they see the difference? Do they care?

          • aleph_minus_one 5 hours ago ago

            > if the cost of “better” is worth it to the buyer. Can they see the difference? Do they care?

            This is exactly the phenomenon of markets for "lemons":

            > https://en.wikipedia.org/wiki/The_Market_for_Lemons

            (for the HN readers: a related concept is "information asymmetry in markets").

            George Akerlof (the author of this paper), Michael Spence and Joseph Stiglitz got a Nobel Memorial Prize in Economic Sciences in 2001 for their analyses of markets with asymmetric information.

          • ThrowawayR2 2 hours ago ago

            HN: "Why should we craft our software well? Our employers don't care or reward us for it."

            Also HN: "Why does all commercial software seem to suck more and more as time goes on?"

    • zahlman 9 hours ago ago

      Indeed. My response was: actually, no, if I think about it I really don't think it was "building" at all. I would have started fewer things, and seen them through more consistently, if it were about "building". I think it has far more to do with personal expression.

      ("Solving a problem for others" also resonates, but I think I implement that more by tutoring and mentoring.)

    • Wowfunhappy 3 hours ago ago

      > I think there is a section of programmers who actually do like the actual typing of letters, numbers and special characters into a computer, and for them, I understand LLMs remove the fun part.

      I've "vibe coded" a ton of stuff and so I'm pretty bullish on LLMs, but I don't see a world where "coding by hand" isn't still required for at least some subset of software. I don't know what that subset will be, but I'm convinced it will exist, and so there will be ample opportunities for programmers who like that sort of thing.

      ---

      Why am I convinced hand-coding won't go away? Well, technically I lied, I have no idea what the future holds. However, it seems to me that an AI which could code literally anything under the sun would almost by definition be that mythical AGI. It would need to have an almost perfect understanding of human language and the larger world.

      An AI like that wouldn't just be great at coding, it would be great at everything! It would be the end of the economy, and scarcity. In which case, you could still program by hand all you wanted because you wouldn't need to work for a living, so do whatever brings you joy.

      So even without making predictions about what the limitations of AI will ultimately be, it seems to me you'll be able to keep programming by hand regardless.

    • zjzkshz 5 hours ago ago

      > I think there is a section of programmers who actually do like the actual typing of letters

      Do people actually spend a significant time typing? After I moved beyond the novice stage it’s been an inconsequential amount of time. What it still serves is a thorough review of every single line in a way that is essentially equivalent to what a good PR review looks like.

      • zeroonetwothree 5 hours ago ago

        Yes, for the type of work LLMs are good at (greenfield projects or lots of boilerplate).

    • a022311 4 hours ago ago

      I think both of you are correct.

      LLMs do empower you (and by "you" I mean the reader or any other person from now on) to actually complete projects you need in the very limited free time you have available. Manually coding the same could take months (I'm speaking from experience developing a personal project for about 3 hours every Friday, and there's still much to be done). In a professional context, you're being paid to ship, and AI can help you grow an idea into an MVP and then into a full implementation in record-breaking time. At the end of the day, you're satisfied because you built something useful and helped your company. You probably also used your problem solving skills.

      Programming is also a hobby though. The whole process matters too. I'm one of the people who feels incredible joy when achieving a goal, knowing that I completed every step in the process with my own knowledge and skills. I know that I went from an idea to a complete design based on everything I know and probably learned a few new things too. I typed the variable names, I worked hard on the project for a long time and I'm finally seeing the fruits of my effort. I proudly share it with other people who may need the same and can attest its high quality (or low quality if it was a stupid script I hastily threw together, but anyway sharing is caring —the point is that I actually know what I've written).

      The experience of writing that same code with an LLM will leave you feeling a bit empty. You're happy with the result: it does everything you wanted and you can easily extend it when you feel like it. But you didn't write the code, someone else did. You just reviewed an intern's work and gave feedback. Sometimes that's indeed what you want. You may need a tool for your job or your daily life, but you aren't too interested in the internals. AI is truly great for that.

      I can't reach a better conclusion than the parent comment, everyone is unique and enjoys coding in a different way. You should always find a chance to code the way you want, it'll help maintain your self-esteem and make your life interesting. Don't be afraid of new technologies where they can help you though.

    • martin-t 9 hours ago ago

      > programmers who actually do like the actual typing

      It's not about the typing, it's about the understanding.

      LLM coding is like reading a math textbook without trying to solve any of the problems. You get an overview, you get a sense of what it's about and most importantly you get a false sense of understanding.

      But if you try to actually solve the problems, you engage completely different parts of your brain. It's about the self-improvement.

      • embedding-shape 9 hours ago ago

        > It's not about the typing, it's about the understanding.

        Well, it's both, for different people, seemingly :)

        I also like the understanding and solving something difficult; that rewards a really strong part of my brain. But I don't always like to spend 5 hours doing so, especially when I'm doing it because of some other problem I want to solve. Then, ideally, I just want it solved.

        But then other days I engage in problems that are hard because they are hard, and because I want to spend 5 hours thinking about, designing the perfect solution for it and so on.

        Different moments call for different methods, and particularly people seem to widely favor different methods too, which makes sense.

      • ben_w 8 hours ago ago

        > LLM coding is like reading a math textbook without trying to solve any of the problems. You get an overview, you get a sense of what it's about and most importantly you get a false sense of understanding.

        Can be, but… well, the analogy can go wrong both ways.

        This is what Brilliant.org and Duolingo sell themselves on: solve problems to learn.

        Before I moved to Berlin in 2018, I had turned the whole Duolingo German tree gold more than once; when I arrived I was essentially tourist-level.

        Brilliant.org, I did as much as I could before the questions got too hard (latter half of group theory, relativity, vector calculus, that kind of thing); I've looked at it again since then, and get the impression the new questions they added were the same kind of thing that ultimately turned me off Duolingo: easier questions that teach little, padding out a progression system that can only be worked through fast enough to learn anything if you pay a lot.

        Code… even before LLMs, I've seen and I've worked with confident people with a false sense of understanding about the code they wrote. (Unfortunately for me, one of my weaknesses is the politics of navigating such people).

        • habinero 5 hours ago ago

          Yeah, there's a big difference between edutainment like Brilliant and Duolingo and actually studying a topic.

          I'm not trying to be snobbish here, it's completely fine to enjoy those sorts of products (I consume a lot of pop science, which I put in the same category) but you gotta actually get your hands dirty and do the work.

          It's also fine to not want to do that -- I love to doodle and have a reasonable eye for drawing, but to get really good at it, I'd have to practice a lot and develop better technique and skills and make a lot of shitty art and ehhhh. I don't want it badly enough.

      • jebarker 6 hours ago ago

        > LLM coding is like reading a math textbook without trying to solve any of the problems.

        Most math textbooks provide the solutions too. So you could choose to just read those and move on and you’d have achieved much less. The same is true with coding. Just because LLMs are available doesn’t mean you have to use them for all coding, especially when the goal is to learn foundational knowledge. I still believe there’s a need for humans to learn much of the same foundational knowledge as before LLMs otherwise we’ll end up with a world of technology that is totally inscrutable. Those who choose to just vibe code everything will make themselves irrelevant quickly.

        • dehsge 5 hours ago ago

          Most math books do not provide solutions. Outside of calculus, advanced mathematics solutions are left as an exercise for the reader.

          • jebarker 5 hours ago ago

            The ones I used for the first couple of years of my math PhD had solutions. That's a sufficient level of "advanced" to be applicable in this analogy. It doesn't really matter though - the point still stands that _if_ solutions are available you don't have to use them and doing so will hurt your learning of foundational knowledge.

        • gosub100 5 hours ago ago

          I haven't used AI yet but I definitely would love a tool that could do the drudgery for me for designs that I already understand. For instance, if I want to store my own structures in an RDBMS, I want to lay the groundwork and say "Hey Jeeves, give me the C++ syntax to commit this structure to a MySQL table using commit/rollback". I believe once I know what I want, futzing over the exact syntax for how to do it is a waste of time. I heard C++ isn't well supported, but eventually I'll give it a try.
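
          The commit/rollback groundwork being described is pretty small. A minimal sketch of that pattern, in Python with the mysql-connector-python package rather than C++ (the table, columns, and connection details are all made up for illustration):

            # Hypothetical sketch: persist one record inside an explicit transaction,
            # committing on success and rolling back on any error.
            import mysql.connector

            def save_reading(sensor_id: int, value: float) -> None:
                conn = mysql.connector.connect(
                    host="localhost", user="app", password="secret", database="telemetry"
                )
                try:
                    cur = conn.cursor()
                    cur.execute(
                        "INSERT INTO readings (sensor_id, value) VALUES (%s, %s)",
                        (sensor_id, value),
                    )
                    conn.commit()    # make the insert durable
                except mysql.connector.Error:
                    conn.rollback()  # undo partial work on failure
                    raise
                finally:
                    conn.close()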

      • williamcotton 9 hours ago ago

        Lately I've been writing DSLs with the help of these LLM assistants. It is definitely not vibe coding as I'm paying a lot of attention to the overall architecture. But most importantly my focus is on the expressiveness and usefulness of the DSLs themselves. I am indeed solving problems and I am very engaged but it is a very different focus. "How can the LSP help orient the developer?" "Do we want to encourage a functional-looking pipeline in this context"? "How should the step debugger operate under these conditions"? etc.

          GET /svg/weather
            |> jq: weatherData
            |> jq: `
              .hourly as $h |
              [$h.time, $h.temperature_2m] | transpose | map({time: .[0], temp: .[1]})
            `
            |> gg({ "type": "svg", "width": 800, "height": 400 }): `
              aes(x: time, y: temp) 
                | line() 
                | point()
            `
        
        I've even started embedding my DSLs inside my other DSLs!
      • svara 9 hours ago ago

        We've been hearing this a lot, but I don't really get it. A lot of code, most probably, isn't even close to being as challenging as a maths textbook.

        It obviously depends a lot on what exactly you're building, but in many projects programming entails a lot of low intellectual effort, repetitive work.

        It's the same things over and over with slight variations and little intellectual challenge once you've learnt the basic concepts.

        Many projects do have a kernel of non-obvious innovation, some have a lot of it, and by all means, do think deeply about these parts. That's your job.

        But if an LLM can do the clerical work for you? What's not to celebrate about that?

        To make it concrete with an example: the other day I had Claude make a TUI for a data processing library I made. It's a bunch of rather tedious boilerplate.

        I really have no intellectual interest in TUI coding and I would consider doing that myself a terrible use of my time considering all the other things I could be doing.

        The alternative wasn't to have a much better TUI, but to not have any.

        • zahlman 9 hours ago ago

          > It obviously depends a lot on what exactly you're building, but in many projects programming entails a lot of low intellectual effort, repetitive work.

          I think I can reasonably describe myself as one of the people telling you the thing you don't really get.

          And from my perspective: we hate those projects and only do them if/because they pay well.

          > the other day I had Claude make a TUI for a data processing library I made. It's a bunch of rather tedious boilerplate. I really have no intellectual interest in TUI coding...

          From my perspective, the core concepts in a TUI event loop are cool, and making one only involves boilerplate insofar as the support libraries you use expect it. And when I encounter that, I naturally add "design a better API for this" to my project list.

          Historically, a large part of avoiding the tedium has been making a clearer separation between the expressive code-like things and the repetitive data-like things, to the point where the data-like things can be purely automated or outsourced. AI feels weird because it blurs the line of what can or cannot be automated, at the expense of determinism.

        • nkrisc 9 hours ago ago

          And so in the future if you want to add a feature, either the LLM can do it correctly or the feature doesn’t get added? How long will that work as the TUI code base grows?

          • simonw 8 hours ago ago

            At that point you change your attitude to the project and start treating it like something you care about, take control of the architecture, rewrite bits that don't make sense, etc.

            Plus the size of project that an LLM can help maintain keeps growing. I actually think that size may no longer have any realistic limits at all now: the tricks Claude Code uses today with grep and sub-agents mean there's no longer a realistic upper limit to how much code it can help manage, even with Opus's relatively small (by today's standards) 200,000 token limit.

            • zahlman 5 hours ago ago

              The problem I'm anticipating isn't so much "the codebase grows beyond the agent-system's comprehension" so much as "the agent-system doesn't care about good architecture" (at least unless it's explicitly directed to). So the codebase grows beyond the codebase's natural size when things are redundantly rewritten and stuffed into inappropriate places, or ill-fitting architectural patterns are aped.

              • svara 3 hours ago ago

                Don't "vibe code". If you don't know what architecture the LLM is producing, you will produce slop.

        • martin-t 9 hours ago ago

          I've been hearing variations of your comment a lot too, and correct me if I am wrong, but I think they always implicitly assume that LLMs are more useful for the low-intellectual stuff than for solving the high-intellectual core of the problem.

          The thing is:

          1) A lot of the low-intellectual stuff is not necessarily repetitive; it involves business logic which is the culmination of knowing the process behind what the user needs. When you write a prompt, the model makes assumptions which are not necessarily correct for the particular situation. Writing the code yourself forces you to notice the decision points and make more informed choices.

          I understand your TUI example and it's better than having none now, but as a result anybody who wants to write "a much better TUI" now faces a higher barrier to entry since a) it's harder to justify an incremental improvement which takes a lot of work b) users will already have processes around the current system c) anybody who wrote a similar library with a better TUI is now competing with you and quality is a much smaller factor than hype/awareness/advertisement.

          We'll basically have more but lower quality SW and I am not sure that's an improvement long term.

          2) A lot of the high-intellectual stuff ironically can be solved by LLMs because a similar problem is already in the training data, maybe in another language, maybe with slight differences which can be pattern matched by the LLM. It's laundering other people's work and you don't even get to focus on the interesting parts.

          • svara 8 hours ago ago

            > but I think they always implicitly assume that LLMs are more useful for the low-intellectual stuff than solving the high-intellectual core of the problem.

            Yes, this follows from the point the GP was making.

            The LLM can produce code for complex problems, but that doesn't save you as much time, because in those cases typing it out isn't the bottleneck, understanding it in detail is.

    • threethirtytwo 4 hours ago ago

      This article is not about whether programming is fun, elegant, creative, or personally fulfilling.

      It is about business value.

      Programming exists, at scale, because it produces economic value. That value translates into revenue, leverage, competitive advantage, and ultimately money. For decades, a large portion of that value could only be produced by human labor. Now, increasingly, it cannot be assumed that this will remain true.

      Because programming is a direct generator of business value, it has also become the backbone of many people’s livelihoods. Mortgages, families, social status, and long term security are tied to it. When a skill reliably converts into income, it stops being just a skill. It becomes a profession. And professions tend to become identities.

      People do not merely say “I write code.” They say “I am a software engineer,” in the same way someone says “I am a pilot” or “I am a police officer.” The identity is not accidental. Programming is culturally associated with intelligence, problem solving, and exclusivity. It has historically rewarded those who mastered it with both money and prestige. That combination makes identity attachment not just likely but inevitable.

      Once identity is involved, objectivity collapses.

      The core of the anti AI movement is not technical skepticism. It is not concern about correctness, safety, or limitations. Those arguments are surface rationalizations. The real driver is identity threat.

      LLMs are not merely automating tasks. They are encroaching on the very thing many people have used to define their worth. A machine that can write code, reason about systems, and generate solutions challenges the implicit belief that “this thing makes me special, irreplaceable, and valuable.” That is an existential threat, not a technical one.

      When identity is threatened, people do not reason. They defend. They minimize. They selectively focus on flaws. They move goalposts. They cling to outdated benchmarks and demand perfection where none was previously required. This is not unique to programmers. It is a universal human response to displacement.

      The loudest opponents of AI are not the weakest programmers. They are often the ones most deeply invested in the idea of being a programmer. The ones whose self concept, status, and narrative of personal merit are tightly coupled to the belief that what they do cannot be replicated by a machine.

      That is why the discourse feels so dishonest. It is not actually about whether LLMs are good at programming today. It is about resisting a trend line that points toward a future where the economic value of programming is increasingly detached from human identity.

      This is not a moral failing. It is a psychological one. But pretending it is something else only delays adaptation.

      AI is not attacking programming. It is attacking the assumption that a lucrative skill entitles its holder to permanence. The resistance is not to the technology itself, but to the loss of a story people tell themselves about who they are and why they matter.

      That is the real conflict. HN is littered with people facing this conflict.

      • lins1909 29 minutes ago ago

        Why do you say this subjective thing so confidently? Does believing what you just wrote make you feel better?

        Have you considered that there are people who actually just enjoy programming by themselves?

      • kaffekaka 23 minutes ago ago

        Very good comment!

    • AndrewKemendo 5 hours ago ago

      Dead on and well said

      Almost more important: the people who pay you to build software don't care if you type it out or enjoy it; they pay you for an output of working software

      Literally nothing is stopping people from writing assembly in their free time for fun

      But the number of people who are getting paid to write assembly is probably less than 1000

    • omnicognate 8 hours ago ago

      > do like the actual typing of letters, numbers and special characters into a computer

      and from the first line of the article:

      > I love writing software, line by line.

      I've said it before and I'll say it again: I don't write programs "line by line" and typing isn't programming. I work out code in the abstract away from the keyboard before typing it out, and it's not the typing part that is the bottleneck.

      Last time I commented this on HN, I said something like "if an AI could pluck these abstract ideas from my head and turn them into code, eliminating the typing part, I'd be an enthusiastic adopter", to which someone predictably said something like "but that's exactly what it does!". It absolutely is not, though.

      When I "program" away from the keyboard I form something like a mental image of the code, not of the text but of the abstract structure. I struggle to conjure actual visual imagery in my head (I "have aphantasia" as it's fashionable to say lately), which I suspect is because much of my visual cortex processes these abstract "images" of linguistic and logical structures instead.

      The mental "image" I form isn't some vague, underspecified thing. It corresponds directly to the exact code I will write, and the abstractions I use to compartmentalise and navigate it in my mind are the same ones that are used in the code. I typically evaluate and compare many alternative possible "images" of different approaches in my head, thinking through how they will behave at runtime, in what ways they might fail, how they will look to a person new to the codebase, how the code will evolve as people make likely future changes, how I could explain them to a colleague, etc. I "look" at this mental model of the code from many different angles and I've learned only to actually start writing it down when I get the particular feeling you get when it "looks" right from all of those angles, which is a deeply satisfying feeling that I actively seek out in my life independently of being paid for it.

      Then I type it out, which doesn't usually take very long.

      When I get to the point of "typing" my code "line by line", I don't want something that I can give a natural language description to. I have a mental image of the exact piece of logic I want, down to the details. Any departure from that is a departure from the thing that I've scrutinised from many angles and rejected many alternatives to. I want the exact piece of code that is in my head. The only way I can get that is to type it out, and that's fine.

      What AI provides, and it is wildly impressive, is the ability to specify what's needed in natural language and have some code generated that corresponds to it. I've used it and it really is very, very good, but it isn't what I need because it can't take that fully-specified image from my head and translate it to the exact corresponding code. Instead I have to convert that image to vague natural language, have some code generated and then carefully review it to find and fix (or have the AI fix) the many ways it inevitably departs from what I wanted. That's strictly worse than just typing out the code, and the typing doesn't even take that long anyway.

      I hope this helps to understand why, for me and people like me, AI coding doesn't take away the "line-by-line part" or the "typing". We can't slot it into our development process at the typing stage. To use it the way you are using it we would instead have to allow it to replace the part that happens (or can happen) away from the keyboard: the mental processing of the code. And many of us don't want to do that, for a wide variety of reasons that would take a whole other lengthy comment to get into.

      • ryandrake 5 hours ago ago

        > I've used it and it really is very, very good, but it isn't what I need because it can't take that fully-specified image from my head and translate it to the exact corresponding code. Instead I have to convert that image to vague natural language, have some code generated and then carefully review it to find and fix (or have the AI fix) the many ways it inevitably departs from what I wanted.

        I agree with this. The hard part of software development happens when you're formulating the idea in your head, planning the data structures and algorithms, deciding what abstractions to use, deciding what interfaces look like--the actual intellectual work. Once that is done, there is the unpleasant, slow, error-prone part: translating that big bundle of ideas into code while outputting it via your fingers. While LLMs might make this part a little faster, you're still doing a slow, potentially-lossy translation into English first. And if you care about things other than "does it work," you still have a lot of work to do post-LLM to clean things up and make it beautiful.

        I think it still remains to be seen whether idea -> natural language -> code is actually going to be faster or better than idea -> code. For unskilled programmers it probably already is. For experts? The jury may still be out.

      • teeeew 7 hours ago ago

        That’s because you’re part of a subset of software engineers who know what they’re doing and care about rigour and so on.

        There are many whose thinking is not as deep or sharp as yours - LLMs are welcomed by them, but come at a tremendous cost to their cognition and to the future well-being of the firm's code base. Because this cost is implicit and not explicit, it doesn't occur to them.

        • closewith 7 hours ago ago

          Companies don't care about you or any other developer. You shouldn't care about them or their future well-being.

          > Because this cost is implicit and not explicit it doesn’t occur to them.

          Your arrogance and naiveté blind you to the fact that it does occur to them, but because they have a better understanding of the world and their position in it, they don't care. That's a rational and reasonable position.

          • jofla_net 2 hours ago ago

            >they have a better understanding of the world and their position in it.

            Try not to use better/worse when advocating so vociferously. As described by the parent, they are short-term pragmatic, that is all. This discussion can open up into a much larger worldview debate, where different groups have strengths and weaknesses along this pragmatic/idealistic axis.

            "Companies" are not a monolith, both laterally between other companies, and what they are composed of as well. I'd wager the larger management groups can be pragmatic, where the (longer lasting) R&D manager will probably be the most idealistic of the firm, mainly because of seeing the trends of punching the gas without looking at long-term consequences.

          • habinero 5 hours ago ago

            No, they just have a different job than I do and they (and you, I suspect) don't understand the difference.

            Software engineers are not paid to write code, we're paid to solve problems. Writing code is a byproduct.

            Like, my job is "make sure our customers accounts are secure". Sometimes that involves writing code, sometimes it involves drafting policy, sometimes it involves presentations or hashing out ideas. It's on me to figure it out.

            Writing the code is the easy part.

      • zahlman 4 hours ago ago

        > I work out code in the abstract away from the keyboard before typing it out, and it's not the typing part that is the bottleneck.

        Funny thing. I tend to agree, but I think it wouldn't look that way to an outside observer. When I'm typing in code, it's typically at a pretty low fraction of my general typing speed — because I'm constantly micro-interrupting myself to doubt the away-from-keyboard work, and refine it in context (when I was "working in the abstract", I didn't exactly envision all the variable names, for example).

      • barrkel 6 hours ago ago

        I'm like you. I get on famously with Claude Code with Opus 4.5 2025.11 update.

        Give it a first pass from a spec. Since you know how it should be shaped you can give an initial steer, but focus on features first, and build with testability.

        Then refactor, with examples in prompts, until it lines up. You already have the tests, the AI can ensure it doesn't break anything.

        Beat it up more and you're done.

        • omnicognate 5 hours ago ago

          > focus on features first, and build with testability.

          This is just telling me to do this:

          > To use it the way you are using it we would instead have to allow it to replace the part that happens (or can happen) away from the keyboard: the mental processing of the code.

          I don't want to do that.

          • saltcured an hour ago ago

            I feel like some of these proponents act as if a poet's goal were simply to produce an anthology of poems, and that the poet should be happy to act as publisher and editor, sifting through the outputs of some LLM stanza generator.

            The entire idea of using natural language for composite or atomic command units is deeply unsettling to me. I see language as an unreliable abstraction even with human partners I know well. It takes a lot of work to communicate anything nuanced, even with vast amounts of shared context. That's the last thing I want to add between me and the machine.

            What you wrote further up resonates a lot with me, right down to the aphantasia bit. I also lack an internal monologue. Perhaps because of these, I never want to "talk" to a device as a command input. Regardless of whether it is my compiler, smartphone, navigation system, alarm clock, toaster, or light switch, issuing such commands is never going to be what I want. It means engaging an extra cognitive task to convert my cognition back into words. I'd much rather have a more machine-oriented control interface where I can be aware of a design's abstraction and directly influence its parameters and operations. I crave the determinism that lets me anticipate the composition of things and nearly "feel" transitive properties of a system. Natural language doesn't work that way.

            Note, I'm not against textual interfaces. I actually prefer the shell prompt to the GUI for many recurring control tasks. But typing works for me and speaking would not. I need editing to construct and proof-read commands, which may not come out of my mind and hands with the linearity the command buffer assumes. I prefer symbolic input languages where I can more directly map my intent into the unambiguous, structured semantics of the chosen tool. I also want conventional programming syntax, with unambiguous control flow and computed expressions for composing command flows. I do not want the vagaries of natural language interfering here.

    • globalnode 8 hours ago ago

      yep, there's all types of people. i get hung up on the structure and shape of a source file, like it's a piece of art. if it looks ugly, even if it works, i don't like it. i've seen some llm code that i like the shape of, but i wouldn't use it verbatim since i didn't create it.

    • dist-epoch 9 hours ago ago

      It's just a reiteration of the age-old conflict in arts:

      - making art as you think it should be, but at the risk of it being non-commercial

      - getting paid for doing commercial/trendy art

      choose one

      • smikhanov 9 hours ago ago

        People who love thinking in false dichotomies like this one have absolutely no idea how much harder it is to “get paid for doing commercial/trendy art”.

        It’s so easy to be a starving artist; in the world of commercial art it’s a bloody dog-eat-dog jungle, not made for faint-hearted sissies.

      • smokel 9 hours ago ago

        I've given this quite some thought and came to the conclusion that there is actually no choice, and all parties fall into the first category. It's just that some people intrinsically like working on commercial themes, or happen to be trendy.

        Of course there are some artists who sit comfortably in the grey area between the two oppositions, and for these a little nudging towards either might influence things. But for most artists, their ideas or techniques are simply not relevant to a larger audience.

        • embedding-shape 9 hours ago ago

          > and all parties fall into the first category [...] Of course there are some artists who sit comfortably in the grey area between the two oppositions

          I'm not sure what your background is, but there are definitely artists out there drawing, painting and creating art they have absolutely zero care for, or are even actively against or dislike, but they do it anyway because it's easier to actually get paid doing those things than others.

          Take a look at the current internet art community and ask how many artists actually like that most of their art commissions are "furry lewd art", vs how many commissions they get for that specific niche, as just one example.

          History has lots of other examples, where artists typically have a day-job of "Art I do but do not care for" and then like the programmer, hack on what they actually care about outside of "work".

          • smokel 9 hours ago ago

            Agreed, but I'd say these would be artists in the "grey area". They are capable of drawing furry art, for example, and have the choice to monetize that, even though they might have become bored with it.

            I was mostly considering contemporary artists that you see in museums, and not illustrators. Most of these have moved on to different media, and typically don't draw or paint. They would therefore also not be able to draw commission pieces. And most of the time their work does not sell well.

            (Source: am professionally trained artist, tried to sell work, met quite a few artists, thought about this a lot. That's not to say that I may still be completely wrong though, so I liked reading your comment!)

            Edit: and of course things get way more complicated and nuanced when you consider gallerists pushing existing artists to become trendy, and artists who are only "discovered" after their deaths, etc. etc.

      • embedding-shape 9 hours ago ago

        Yeah, but I guess it's wider. It's like the discussion turning into "Don't use oil colors, then you don't get to do the fun process of mixing water and color together to get it just perfect", while maybe some artists don't think that's the fun part, and so on across all the other categories, all mixed together, with everyone thinking their reason for doing it is the reason most people do it.

      • martin-t 9 hours ago ago

        With LLMs, if you did the first in the past, then no matter what license you chose, your work is now in the second category, except you don't get a dime.

      • FergusArgyll 9 hours ago ago

        It's not.

        It's:

        - Making art because you enjoy working with paint

        - Making art because you enjoy looking at the painting afterward

    • BananaaRepublik 4 hours ago ago

      > I think there is a section of programmer who actually do like the actual typing of letters, numbers and special characters into a computer...

      This sounds like an alien trying and failing to describe why people like creating things. No, the typing of characters in a keyboard has no special meaning, neither does dragging a brush across a canvas or pulling thread through fabric. It's the primitive desire to create something by your own hands. Have people using AI magically lost all understanding of creativity or creation, everything has to be utilitarian and business?

      • embedding-shape 4 hours ago ago

        My entire point is that people are different. For some people (read through the other comments), it's quite literally about the typing of characters, or dragging a brush across the canvas. Sure, that might not be the point for you, but the entire point of my comment is that just because it's "obviously because of X" for you, that doesn't mean it's like that for others.

        Sometimes I like to make music because I have an idea of the final result, and I wanna hear it like that. Other times, I make music because I like the feeling of turning a knob and striking keys at just the right moment, and it gives me a feeling of satisfaction. For others, it's about sharing an emotion via music. Does this mean some of us are "making music for the wrong reasons"? I'd claim no.

        • Izkata an hour ago ago

          No, they're right. Your description is what you get from outsiders who don't understand what they're seeing.

          In a creative process, when you really know your tools, you start being able to go from thought to result without really having to think about the tools. The most common example when it comes to computers would be touch-typing - when your muscle memory gets so good you don't think about the keyboard at all anymore, your hands "know" what to do to get your thoughts down. But for those of us with enough experience in the programming languages and editor/IDE we use, the same thing can happen - going from thought to code is nearly effortless, as is reading code, because we don't need to think about the layers in between anymore.

          But this only works when those tools are reliable, when we know they'll do exactly what we expect. AI tooling isn't reliable: It introduces two lossy translation layers (thought -> English and English -> code) and a bunch of waiting in the middle that breaks any flow. With faster computers maybe we can eliminate the waiting, but the reliability just isn't there.

          This applies to music, painting, all sorts of creative things. Sure there's prep time beforehand with physical creation like painting, but when someone really gets into the flow it's the same: they're not having to think about the tools so much as getting their thoughts into the end result. The tools "disappear".

          > Other times, I make music because I like the feeling of turning a knob, and striking keys at just the right moment, and it gives me a feeling of satisfaction.

          But I'll bet you're not thinking about "I like turning this knob" at the moment you're doing it, I'll bet you're thinking "Increase the foo" and the knob's immediate visceral feedback is where the satisfaction comes from because you're increasing the foo without having to think about how to do it - in part because of how reliable it is.

        • card_zero 3 hours ago ago

          I bet you also sometimes like to make music because the final result emerges from your intimate involvement with striking keys, no? That's the suggestion.

      • aspenmartin 4 hours ago ago

        I don't think these characterizations in either direction are very helpful; I understand they come from someone trying to make sense of why their ingrained notion of what creativity means, and of the "right" way to build software, is not shared by other people.

        I use CC for both business and personal projects. In both cases: I want to achieve something cool. If I do it by hand, it is slow, I will need to learn something new which takes too much time, and often the things I need to learn are not interesting to me (at the time). Additionally, I am slow and perpetually unhappy with the abstractions and design choices I make despite trying very hard to think through them. With CC: it can handle parts of the project I don't want to deal with, it can help me learn the things I want to learn, and it can execute quickly so I can try more things and fail fast.

        What's lamentable is the conclusion of "if you use AI it is not truly creative" ("have people using AI lost all understanding of creativity or creation?" is a bit condescending).

        In other threads, the grievance from the AI-skeptic crowd is more or less that AI enthusiasts "threaten or bully" people who are not enthusiastic, warning that they will get "punished" or fall behind. Yet at the same time, AI-skeptics seem to routinely make passive-aggressive implications that they are the ones truly Creating Art and are the true Craftsmen; as if this venture is some elitist art form that should be gatekept by all of you True Programmers (TM).

        I find these takes (1) condescending, (2) wrong, and also betraying a lack of imagination about what others may find genuinely enjoyable and inspiring, (3) just as much of a straw man as their gripes about others "bullying" them into using AI.

  • adityaathalye 10 hours ago ago

    Don't fall into the "Look ma, no hands" hype.

    Antirez + LLM + CFO = Billion Dollar Redis company, quite plausibly.

    /However/ ...

    As for the delta provided by an LLM to Antirez, outside of Redis (and outside of any problem space he is already intimately familiar with), an apples-to-apples comparison would be him trying this on an equally complex codebase he has no idea about. I'll bet... what Antirez can do with Redis and LLMs (certainly useful, a huge quality-of-life improvement for Antirez), he cannot even begin to do with (say) Postgres.

    The only way to get there with (say) Postgres, would be to /know/ Postgres. And pretty much everyone, no matter how good, cannot get there with code-reading alone. With software at least, we need to develop a mental model of the thing by futzing about with the thing in deeply meaningful ways.

    And most of us day-job grunts are in the latter spot... working in some grimy legacy multi-hundred-thousand-line code-mine, full of NPM vulns, schlepping code over the wall to QA (assuming there is even a QA), and basically developing against live customers --- "learn by shipping", as they say.

    I do think LLMs are wildly interesting technology, however they are poor utility for non-domain-experts. If organisations want to profit from the fully-loaded cost of LLM technology, they better also invest heavily in staff training and development.

    • roncesvalles 9 hours ago ago

      Exactly. AI is minimally useful for coding something that you couldn't have coded yourself, given enough time, without explicitly investing time in generic learning not specific to that codebase or particular task.

      Although calling AI "just autocomplete" is almost a slur now, it really is just that in the sense that you need to A) have a decent mental picture of what you want, and, B) recognize a correct output when you see it.

      On a tangent, the inability to identify correct output is also why I don't recommend using LLMs to teach you anything serious. When we use a search engine to learn something, we know when we've stumbled upon a really good piece of pedagogy through various signals like information density, logical consistency, structuredness/clarity of thought, consensus, reviews, author's credentials etc. But with LLMs we lose these critical analysis signals.

      • avbanks 4 hours ago ago

        I've been trying to articulate this exact point. The problem with LLMs is that at times they are very capable but always unreliable.

      • teeeew 8 hours ago ago

        Absolutely spot on.

        You are calling out the subtle nuance that many don’t get…

      • deadbabe 5 hours ago ago

        You could have another LLM tell you which is the correct output.

        • s1mplicissimus 4 hours ago ago

          ... and then a third one to check whether the second one was right. then a fourth one to... o wait

    • keeda 44 minutes ago ago

      What "domain expert" means is also changing however.

      As I've mentioned often, I'm solving problems in a domain I had minimal background in before. However, that domain is computer vision. So I can literally "see" if the code works or not!

      To expand, I've set up tests, benchmarks and tools that generate results as images. I chat with the LLM about a specific problem at hand, it presents various solutions, I pick a promising approach, it writes the code, I run the tests which almost always pass, but if they don't, I can hone in on the problem quickly with a visual check of the relevant images.
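
      A minimal sketch of the kind of check this relies on (greyscale buffers compared against a stored golden image, dumping a PGM diff mask on failure; the names here are illustrative, not the actual harness):

        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>
        #include <vector>

        // Compare a freshly rendered greyscale buffer against a stored reference.
        // On mismatch, dump a binary PGM mask of the offending pixels so a human
        // (or a vision-capable model) can literally look at where it went wrong.
        bool matchesGolden(const std::vector<std::uint8_t>& result,
                           const std::vector<std::uint8_t>& golden,
                           int width, int height, int tolerance,
                           const char* diffPath) {
            if (result.size() != golden.size()) return false;
            std::vector<std::uint8_t> diff(result.size(), 0);
            std::size_t bad = 0;
            for (std::size_t i = 0; i < result.size(); ++i) {
                int d = std::abs(int(result[i]) - int(golden[i]));
                if (d > tolerance) { diff[i] = 255; ++bad; }      // flag the pixel
            }
            if (bad == 0) return true;
            if (std::FILE* f = std::fopen(diffPath, "wb")) {       // leave a visual artifact behind
                std::fprintf(f, "P5\n%d %d\n255\n", width, height);
                std::fwrite(diff.data(), 1, diff.size(), f);
                std::fclose(f);
            }
            return false;
        }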

      This has allowed me to make progress despite my lack of background. Interestingly, I've now built up some domain knowledge through learning by doing and experimenting (and soon, shipping)!

      These days I think an agent could execute this whole loop by itself by "looking" at the test and result images. I've uploaded test images to the LLM and we had technical conversations about them as if it "saw" them like a human. However, there are a ton of images and I don't want to burn the tokens at this point.

      The upshot is, if you can set up a way of reliably testing and validating the LLM's output, you could still achieve things in an unfamiliar domain without prior expertise.

      Taking your Postgres example, it's a heavily tested and benchmarked project. I would bet someone like Antirez would be able to jump in and do original, valid work using AI very quickly, because even if he hasn't futzed with Postgres code, he HAS futzed with a LOT of other code and hence has a deep intuition about software architecture in general.

      So this is what I meant by the meaning of "domain expert" changing. The required skills have become a lot more fundamental. Maybe the only required skills are intuition about software engineering, critical thinking, and basic knowledge of statistics and the scientific method.

    • thunky 7 hours ago ago

      > And pretty much everyone, no matter how good, cannot get there with code-reading alone. With software at least, we need to develop a mental model of the thing by futzing about with the thing in deeply meaningful ways

      LLMs help with that part too. As Antirez says:

      Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too).

      • adityaathalye 6 hours ago ago

        How to "understand" what to do?

        How to know the "how to do it" is sensible? (sensible = the product will produce the expected outcome within the expected (or tolerable) error bars?)

        • thunky 6 hours ago ago

          > How to "understand" what to do?

          How did you ever know? It's not like everyone always wrote perfect code up until now.

          Nothing has changed, except now you have a "partner" to help you along with your understanding.

          • adityaathalye 5 hours ago ago

            Well, I have a whole blog post of an answer for you: https://www.evalapply.org/posts/tools-for-thought/

            Who "knows"?

            It's who has a world-model. It's who can evaluate input signal against said world-model. Which requires an ability to generate questions, probe the nature of reality, and do experiments to figure out what's what. And it's who can alter their world-model using experiences collected from the back-and-forth.

    • bodegajed 7 hours ago ago

      Yes, most C-level executives (who often have to report to a board) have a tendency to predict the future after using Claude Code. It didn't happen in 2025, yet they still insist, while their senior engineers are still working on the production code.

    • falloutx 9 hours ago ago

      If you are very high up the chain like Linus, I think vibe coding gives you more feedback than it gives the average dev. So they get a positive feedback loop.

      For most of us, vibe coding gives zero advantage. Our software will just sit there and get no views, and producing it faster means nothing. In fact, it just scares us that some exec is gonna look at this and write us up for low performance because they saw someone do the same thing we are doing in 2 days instead of 4.

      • conorcleary 9 hours ago ago

        Less a 'chain' or hierarchy than a lecture hall with cliques. Many of the 'influencers', media personalities, infamous, famous, anyone with a recognizable name - for the most part - were introduced to the tsunami wave of [new tech] at the same time. They may come with advantages, but it's how they get back to the 'top' (for your chain) vs. staying up there.

        • conorcleary 9 hours ago ago

          For a while now I've felt an apathy setting in: there's more content being created than consumed.

          • falloutx 9 hours ago ago

            this is true, like 90% of projects submitted on product hunt have 1 vote or less.

            • conorcleary 7 hours ago ago

              I've set the bar so low that getting a reply to that was already unexpected.

              • rightbyte 4 hours ago ago

                There is a lot of "attention" to go around for small group interactions like this subthread. Like a bar chat I guess.

              • falloutx 4 hours ago ago

                Lmao, me too, the internet has become a single player game at this point. I usually just type and forget.

      • crote 9 hours ago ago

        Except that Linus does basically zero programming these days. He's a manager, combining code from the subsystem maintainers below him into a final release.

        • SirensOfTitan 6 hours ago ago

          Right, but Linus also has an extremely refined mental model of the project he maintains, and has built up a lot of skills reading code.

          Most engineers in my experience are much less skillful at reading code than writing code. What I’ve seen so far with use of LLM tools is a bunch of minimally edited LLM produced content that was not properly critiqued.

          • simonw 6 hours ago ago

            Here's some of the code antirez described in the OP, if you want to see what expert usage of Claude Code looks like: https://github.com/antirez/linenoise/commit/c12b66d25508bd70... and https://github.com/antirez/linenoise/commit/a7b86c17444227aa...

            • yobbo 5 hours ago ago

              This looks more worrying than impressive. It's long files of code with if-statements and flag-checking unicode bit patterns, with an enormous number of potential test-cases.

              It's not conceptually challenging to understand, but time consuming to write, test, and trust. Having an LLM write these types of things can save time, but please don't trust it blindly.

            • falloutx 5 hours ago ago

              Dividing the tests and the code into two different changes is pretty nice. In fact, I have been using a double-agent setup where one agent writes the tests and the other writes the code, which also helps with the attention issue. The code itself does look harder to read, but that is probably more on me than Claude.

    • UncleEntity 4 hours ago ago

      >> ...however they are poor utility for non-domain-experts.

      IDK, just two days ago I had a bug report/fix accepted by a project which I would have never dreamt of digging into, as what it does is way outside my knowledge base. But Claude got right on in there and found the problem after a few rounds of printf debugging, which led to an assertion we would have hit with a debug build, which led to the solution. Easy peasy, and I still have no idea how the other library does its thing at all, as Claude was using it to do this other thing.

    • CraftingLinks 9 hours ago ago

      Keep believing. To the bitter end. For such human slop codebases AI slop additions will do equally fine. Add good testing and the code might even improve over the garbage that came before.

      • ruszki 9 hours ago ago

        Also generating the tests happens a little bit too often for any kind of improvement. simonw posted a generated "something" here the other day, which he didn't know whether it really worked or not, but he was happy that his generated, completely unchecked tests were green, and yet some other root commenter here praises him.

        It takes a lot of work not to be skeptical when, whenever I try it, it generates shit, especially when I want something completely new that doesn't exist anywhere, and when these people show how they work with it, it always turns out to be on a scale of terrible to bad.

        I also use AI, but I don’t allow it to touch my code, because I’m disgusted by its code quality. I ask it, and sometimes it delivers, but mostly not.

        • simonw 9 hours ago ago

          Which thing was that?

          (If you need help finding it try visiting https://tools.simonwillison.net/hn-comments-for-user and searching for simonw - you can then search my 1,000 most recent comments in one place.)

          If my tests are green then it tells me a LOT about what the software is capable of, even if I haven't reviewed every line of the implementation.

          The next step is to actually start using it for real problems. That should very quickly shake out any significant or minor issues that sneaked past the automated tests.

          I've started thinking about this by comparing it to work I've done within larger companies. My team would make use of code written by other teams without reviewing everything those other teams had written. If their tests passed we would build against their stuff, and if their stuff turned out not to work we would let them know or help debug and fix it ourselves.

  • dom96 10 hours ago ago

    > As a programmer, I want to write more open source than ever, now.

    I want to write less. Just knowing that LLMs are going to be trained on my code makes me feel more strongly than ever that my open source contributions will simply be stolen.

    Am I wrong to feel this? Is anyone else concerned about this? We've already seen some pretty strong evidence of this with Tailwind.

    • RadiozRadioz 10 hours ago ago

      I feel similarly for a different reason. I put my code out there, licensed under the GPL. It is now, through a layer of indirection, being used to construct products that are not under the GPL. That's not what I signed up for.

      I know the GPL didn't have a specific clause for AI, and the jury is still out on this specific case (how similar is it to a human doing the same thing?), but I like to imagine, had it been made today, there probably would be a clause covering this usage. Personally I think it's a violation of the spirit of the license.

      • wmwragg 10 hours ago ago

        Yep, this is my take as well. It's not that open source is being stolen as such (if you abide by an open source license you aren't stealing anything); it's that the licenses are being completely ignored for the profit of a few massive corporations.

        • dom96 9 hours ago ago

          Yeah, that's what I meant by "stolen", I should have been clearer. But indeed, this is the crux of the problem, I have no faith that licenses are being abided by.

        • leonidasv 5 hours ago ago

          What profit? All labs are taking massive losses and there's no clear path to profit for most of them yet.

          • rurp 3 hours ago ago

            The wealthiest people in tech aren't spending 10s of billions on this without the expectation of future profits. There's risk, but they absolutely expect the bets to be +EV overall.

          • karmakurtisaani 3 hours ago ago

            Expected profit.

      • luke5441 10 hours ago ago

        GPL works via copyright. Since AI companies claim fair use no copyright applies. There is no fixing this. The only option is not to publish.

        There are non-US jurisdictions where you have some options, but since most of them are trained in the US that won't help much.

        • ThunderSizzle 10 hours ago ago

          > Since AI companies claim fair use no copyright applies. There is no fixing this.

          They can claim whatever they want. You can still try to stop it via lawsuits and make them claim it in court. Granted, I believe there's already been some jurisdictions that have sided with fair use in those particular cases.

          • zarzavat 9 hours ago ago

            Laws can be changed. This is right now a trillion dollar industry, perhaps later it could even become a billion dollar industry. Either way, it's very important.

            Strict copyright enforcement is a competitive disadvantage. Western countries lobbied for copyright enforcement in the 20th century because it was beneficial. Now the tables have turned, don't hold your breath for copyright enforcement against the wishes of the markets. We are all China now.

            • luke5441 9 hours ago ago

              Yes, I think Japan added an AI friendly copyright law. If there were problems in the US, they'd just move training there.

              • martin-t 9 hours ago ago

                Moving training won't help them if their paying customers are in jurisdictions which do respect copyright as written and intended.

                • luke5441 8 hours ago ago

                  OP's idea is about having a new GPL-like license with a "may not be used for LLM training" clause.

                  That the LLM itself is not allowed to produce copyrighted work (e.g. verbatim copies, or output that is too structurally similar) without a license for that work is probably already the law. They are working around this via content filters. They probably also have checks during/after training that the model does not reproduce work that is too similar. There are lawsuits about this pending, if I remember correctly, e.g. with the New York Times.

                  • martin-t 8 hours ago ago

                    The issue is that everyone is focusing on verbatim (or "too similar") reproduction.

                    LLMs themselves are compressed models of the training data. The trick is that the compression is highly lossy, by detecting higher-order patterns instead of focusing on the first-order input tokens (or bytes). If you look at how, for example, any of the Lempel-Ziv algorithms work, they also contain patterns from the input and they also predict the next token (usually a byte in their case), except they do it with 100% probability because they are lossless.
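
                    To make the comparison concrete, here is a toy sketch of that LZ idea (a naive greedy LZ77, nothing like a production codec): every token is a pattern lifted verbatim from earlier input plus the literal "next byte" it predicts, and because each prediction is made with certainty, the round trip is lossless.

                      #include <cstddef>
                      #include <iostream>
                      #include <string>
                      #include <vector>

                      // One LZ77-style token: copy `len` bytes from `dist` bytes back
                      // in the already-decoded output, then append the literal `next`.
                      struct Token { std::size_t dist; std::size_t len; char next; };

                      // Greedy encoder: reuse the longest earlier pattern at each step.
                      std::vector<Token> compress(const std::string& in) {
                          std::vector<Token> out;
                          for (std::size_t i = 0; i < in.size(); ) {
                              std::size_t bestLen = 0, bestDist = 0;
                              for (std::size_t j = 0; j < i; ++j) {
                                  std::size_t len = 0;
                                  while (j + len < i && i + len + 1 < in.size() &&
                                         in[j + len] == in[i + len]) ++len;
                                  if (len > bestLen) { bestLen = len; bestDist = i - j; }
                              }
                              out.push_back({bestDist, bestLen, in[i + bestLen]});
                              i += bestLen + 1;
                          }
                          return out;
                      }

                      // Decoder: every "next byte" prediction is certain, so nothing is lost.
                      std::string decompress(const std::vector<Token>& toks) {
                          std::string s;
                          for (const Token& t : toks) {
                              std::size_t start = s.size() - t.dist;
                              for (std::size_t k = 0; k < t.len; ++k) s.push_back(s[start + k]);
                              s.push_back(t.next);
                          }
                          return s;
                      }

                      int main() {
                          std::string text = "abracadabra abracadabra";
                          std::cout << (decompress(compress(text)) == text ? "lossless" : "bug") << "\n";
                      }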

                    So copyright should absolutely apply to the models themselves and if trained on AGPL code, the models have to follow the AGPL license and I have the right to see their "source" by just being their user.

                    And if you decompress a file from a copyrighted archive, the file is obviously copyrighted. Even if you decompress only a part. What LLMs do is another trick - by being lossy, they decompress probabilistically based on all the training inputs - without seeing the internals, nobody can prove how much their particular work contributed to the particular output.

                    But it is all mechanical transformation of input data, just like synonym replacement, just more sophisticated, and the same rules regarding plagiarism and copyright infringement should apply.

                    ---

                    Back to what you said - the LLM companies use fancy language like "artificial intelligence" to distract from this, so they can then use more fancy language to claim copyright does not apply. And in that case, no license would help, because any such license fundamentally depends on copyright law, which as they claim does not apply.

                    That's the issue with LLMs - if they get their way, there's no way to opt out. If there was, AGPL would already be sufficient.

                    • luke5441 8 hours ago ago

                      I agree with your view. One just has to go into courts and somehow get the judges to agree as well.

                      An open question would be if there is some degree of "loss" where copyright no longer applies. There is probably case law about this in different jurisdictions w.r.t. image previews or something.

                      • martin-t an hour ago ago

                        I don't think copyright should be binary or should work the way it does now. It's just the only tool we have.

                        There should be a system which protects all work (intellectual and physical) and makes sure the people doing it get rewarded according to the amount of work and skill level. This is a radical idea and not fully compatible with capitalism as implemented today. I have a lot on my to-read list and I don't think I am the first to come up with this but I haven't found anyone else describing it, yet.

                        And maybe it's broken by some degenerate case and goes tits up like communism always did. But AFAICT, it's a third option somewhere in between, taking the good parts of each.

                        For now, I just wanna find ways to stop people already much richer than me from profiting from my work without any kind of compensation for me. I want inequality to stop worsening, but OTOH, in the past, large social change usually happened when things got so bad that people rejected the status quo and went to the streets, whether with empty hands or not. And that feels like where we're headed, and I don't know whether I should be excited or worried.

        • martin-t 9 hours ago ago

          I recall a basics-of-law class saying that in some countries (e.g. the Czech Republic), open source contributors have the right to a small compensation if their work is used for large financial benefit.

          At some point, I'll have to look it up because if that's right, the billionaires and wannabe-trillionaires owe me a shitton of money.

      • ndsipa_pomu 6 hours ago ago

        One work-around would be to legislate that code produced by an LLM trained on GPL code would also be GPL.

        • layer8 6 hours ago ago

          There are licenses that are incompatible with each other, which implies that one wouldn’t be allowed to train LLMs on code based on multiple such licenses.

          • ndsipa_pomu 3 hours ago ago

            Sounds reasonable to me - much the same way that building a project from multiple incompatible licenses wouldn't be allowed. The alternative is that using an LLM could just be an end-run around the choice of license that a developer used.

            • layer8 3 hours ago ago

              Copyright normally only applies when you’re plagiarizing. LLM output typically isn’t that. It’s more like someone having studied multiple open source projects with incompatible licenses and coding up their own version of them, which is perfectly fine. So your “workaround” is overshooting things by far, IMO.

              • ndsipa_pomu 2 hours ago ago

                My understanding is that LLMs are plagiarising openly available code - it's not like the code is used to inspire a person as that involves creative thinking. I'm thinking that taking a piece of code and applying a transformation to it to make it look different (e.g. changing variable/function names) would be still considered plagiarism. In the case of the GPL, I think it would be entirely appropriate for a GPL trained LLM to be required to license its code output as GPL.

                I suppose the question is when does a machine applied transformation become a new work?

      • delusional 10 hours ago ago

        The argument of the AI megacorps is that generated work is not "derivative" and therefore doesn't interact with the original author's copyright. They have invented a machine that takes in copyrighted works and, from a legal standpoint, produces "entirely original" code. No license, be that GPL or otherwise, can do anything about that, because licenses ultimately rely on the author's copyright to require the licensee to observe them.

        They cannot violate the license, because in their view they have not licensed anything from you.

        I think that's horse shit, and a clear violation of the intellectual property rights that are supposed to protect creatives from the business boys, but apparently the stock market must grow.

        • Ekaros 9 hours ago ago

          What makes this whole thing even weirder for me is the similar fact that any output from AI might not enjoy copyright protections. So basically if you can steal software made with AI you can freely resell it.

          • martin-t 9 hours ago ago

            During the gold rush, it is said, the only people who made money were the ones selling the pickaxes. A"I" companies are ~selling~ renting the pickaxes of today.

            (I didn't come up with this quote but I can't find the source now. If anything good comes out of LLMs, it's making me appreciate other people's work more and try to give credit where it's due.)

            • netsharc 5 hours ago ago

              Wasn't it shovels?

              NVidia is a shovel-maker worth a few trillion dollars...

            • kapsi 8 hours ago ago

              What about the people who sold gold? Didn't they make money?

              • martin-t 8 hours ago ago

                To be honest, I haven't looked at any statistics, but I imagine a tiny few of those looking for gold found any and got rich; most either didn't find anything, died of illness or exposure, or got robbed. I just like the quote as a comparison. Updated the original comment to reflect that I haven't checked if it's correct.

      • DrewADesign 8 hours ago ago

        Now imagine how much more that sucks for artists and designers that were putting artwork out there to advertise themselves only to have some douchebag ingest it in order to sell cheap simulacra.

      • martin-t 9 hours ago ago

        If you want, I made a coherent argument about how the mechanics of LLMs mean both their training and their inference are plagiarism and should be copyright infringement.[0] TL;DR: it's about reproducing higher-order patterns instead of word-for-word text.

        I haven't seen this argument made elsewhere, it would be interesting to get it into the courtrooms - I am told cases are being fought right now but I don't have the energy to follow them.

        Plus, as somebody else put it eloquently, it's labor theft - we, working programmers, exchanged our limited lifetime for money (already exploitative) in a world with certain rules. Now the rules have changed, our past work has much more value, and we don't get compensated.

        [0]: https://news.ycombinator.com/item?id=46187330

        • ThrowawayR2 3 hours ago ago

          There was a legal analysis of the copyright implications of Copilot among a set of white papers commissioned by the Free Software Foundation: https://www.fsf.org/licensing/copilot/copyright-implications...

        • williamcotton 8 hours ago ago

          The first thing you need to do is brush up on some IP law around software in the United States. Start here:

          https://en.wikipedia.org/wiki/Idea–expression_distinction

          https://en.wikipedia.org/wiki/Structure,_sequence_and_organi...

          https://en.wikipedia.org/wiki/Abstraction-Filtration-Compari...

          In a court of law you're going to have to argue that something is an expression instead of an idea. Most of what LLMs pump out are almost definitionally on the idea side of the spectrum. You'd basically have to show verbatim code or class structure at the expressive level to the courts.

          • martin-t 5 hours ago ago

            Thanks for the links, I'll read them in more detail later.

            There's a couple issues I see:

            1) All of the concepts were developed with the idea that only humans are capable of certain kinds of work needed for producing IP. A human would not engage in highly repetitive and menial transformation of other people's material to avoid infringement if he could get the same or better result by working from scratch. This placed, throughout history, an upper limit on how protective copyright had to be.

            Say, 100 years ago, synonym replacement and paraphrasing of sentences were SOTA methods to make copies of a book which don't look like copies without putting in more work than the original. Say, 50 years ago, computers could do synonym replacement automatically so it freed up some time for more elaborate restructuring of the original work and the level of protection should have shifted. Say, 10 years ago, one could use automatic replacement of phrases or translation to another language and back, freeing up yet more time.

            The law should have adapted with each technological step up and according to your links it has - given the cases cited. It's been 30 years and we have a massive step up in automatic copying capabilities - the law should change again to protect the people who make this advancement possible.

            Now with a sufficiently advanced LLM trained on all public and private code, you can prompt them to create a 3D viewer for Quake map files and I am sure it'll most of the time produce a working program which doesn't look like any of the training inputs but does feel vaguely familiar in structure. Then you can prompt it to add a keyboard-controlled character with Quake-like physics and it'll produce something which has the same quirks as Quake movement. Where did bunny hopping, wallrunning, strafing, circlejumps, etc. come from if it did not copy the original and the various forks?

            Somebody had to put in creative work to try out various physics systems and figure out what feels good and what leads to interesting gameplay.

            Now we have algorithms which can imitate the results but which can only be created by using the product of human work without consent. I think that's an exploitative practice.

            2) It's illegal to own humans but legal to own other animals. The USA law uses terms such as "a member of the species Homo sapiens" (e.g. [0]) in these cases.

            If the legality of the tech in question was not LLMs but remixing of genes (only using a tiny fraction of human DNA) to produce animals which are as smart as humans with chimpanzee bodies, which can be incubated in chimpanzee females but are otherwise as sentient as humans, would (and should) it be legal to own them as slaves and use them for work? It would probably be legal by the current letter of the law, but I assure you the law would quickly change because people would not be OK with such overt exploitation.

            The difference is that the exploitation by LLM companies is not as overt - in fact, many people refer to LLMs as AIs and use pronouns such as "he" or "she", indicating they believe them to be standalone thinking entities instead of highly compressed lossy archives of other people's work.

            3) The goal of copyright is progress, not protection of people who put in work to make that progress possible. I think that's wrong.

            I am aware of the "is" vs "should" distinction, but since laws are compromises between the monopoly on violence and the people's willingness to revolt, rather than an (attempted) codification of a consistent moral system, the best we can do is try to use the current laws (what is) to achieve what is right (what should be).

            [0]: https://en.wikipedia.org/wiki/Unborn_Victims_of_Violence_Act

            • williamcotton 4 hours ago ago

              But "vaguely familiar in structure" could be argued to be the only reasonable way to do something, depending on the context. This is part of the filtration step in AFC.

              The idea of wallrunning should not be protected by copyright.

              • martin-t 2 hours ago ago

                The thing is, a model trained on the same input as current models, except Quake and Quake derivatives, would not generate such code. (You'd have to prompt it with descriptions of Quake physics since it wouldn't know what you mean, depending on whether only the code or all mentions were excluded.)

                The Quake special behaviors are essentially the result of bugs which were kept because they led to fun gameplay. The model would almost certainly generate explicit handling for these behaviors, because the original Quake code is very obviously not the only reasonable way to do it. And in that case the model and its output are derivative works of the training input.
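
                To make the "bugs kept because they were fun" point concrete, here's a rough sketch of the kind of air-acceleration rule involved - purely my own illustrative Python with made-up names and numbers, not the original code. Speed is only limited along the direction the player is steering toward, so strafing while turning in the air keeps adding speed, which is where bunny hopping comes from:

                  # Illustrative sketch (not id Software's code): the cap applies only to the
                  # velocity component along wish_dir, not to total speed, so air-strafing
                  # while turning keeps gaining speed - the bunny-hop quirk.
                  def air_accelerate(vel, wish_dir, wish_speed, accel, dt):
                      wish_speed = min(wish_speed, 30.0)  # hypothetical per-tick air cap
                      current = vel[0] * wish_dir[0] + vel[1] * wish_dir[1]  # speed along wish_dir only
                      add = wish_speed - current
                      if add <= 0:
                          return vel
                      add = min(add, accel * wish_speed * dt)
                      return (vel[0] + add * wish_dir[0], vel[1] + add * wish_dir[1])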

                The issue is such an experiment (training a model with specific content excluded) would cost (tens/hundreds of?) millions of dollars and the only companies able to do it are not exactly incentivized to try.

                ---

                And then there's the thing that current LLMs are fundamentally impossible to create without such large amounts of code as training data. I honestly don't care what the letter of the law is; to any reasonable person, that makes them derivative works of the training input, and claiming otherwise is a scam and theft.

                I always wonder if people arguing otherwise think they're gonna get something out of it when the dust settles, or if they genuinely think society should take stuff from a subgroup of people against their will, when it can, to enrich itself.

                • williamcotton an hour ago ago

                  “Exploitative” is not a legal category in copyright. If the concern is labor compensation or market power, that’s a question for labor law, contract law, or antitrust, not idea-expression analysis and questions of derivative works.

        • martin-t 9 hours ago ago

          And HN does its thing again - at least 3 downvotes, 0 replies. If you disagree, say why; otherwise I have to assume my argument is correct and nobody has any counterarguments, but people who profit from this hate it being seen.

          • dahart 5 hours ago ago

            I agree that training on copyrighted material is violating the law, but not for the reasons you stated.

            That said, this comment is funny to me because I’ve done the same thing too, take some signal of disagreement, and assume the signal means I’m right and there’s a low-key conspiracy to hold me down, when it was far more likely that either I was at least a bit wrong, or said something in an off-putting way. In this case, I tend to agree with the general spirit of the sibling comment by @williamcotton in that it seems like you’re inventing some criteria that are not covered by copyright law. Copyrights cover the “fixation” of a work, meaning they protect only its exact presentation. Copyrights do not cover the Madlibs or Cliff Notes scenarios you proposed. (Do think about Cliff Notes in particular and what it implies about AI - Cliff Notes are explicitly legal.)

            Personally, I’ve had a lot of personal forward progress on HN when I assume that downvotes mean I said something wrong, and work through where my own assumptions are bad, and try to update them. This is an important step especially when I think I’m right.

            I’m often tempted to ask for downvote explanations too, but FWIW, it never helps, and aside from HN guidelines asking people to avoid complaining about downvotes, I find it also helps to think of downvotes as symmetric to upvotes. We don’t comment on or demand an explanation for an upvote, and an upvote can be given for many reasons - it’s not only used for agreement, it can be given for style, humor, weight, engagement, pity, and many other reasons. Realizing downvotes are similar and don’t only mean disagreement helps me not feel personally attacked, and that can help me stay more open to reflecting on what I did that is earning the downvotes. They don’t always make sense, but over time I can see more places I went wrong.

            • martin-t an hour ago ago

              > or said something in an off-putting way

              It shouldn't matter.

              Currently, a downvote means "I want this to be ranked lower". There really should be two options, "factually incorrect" and "disagree". For people who think it should matter, there should be a third option, "rude", which others can ignore.

              I've actually emailed about this with a mod and it seems he conflated talking about downvotes with having to explain a reason. He also told me (essentially) that people should not have the right to defend themselves against incorrect moderator decisions, and I honestly didn't know what to say to that. I'll probably message him again to confirm this is what he meant, but I don't have high hopes after having similar interactions with mods on several different sites.

              > FWIW, it never helps

              The way I see it, it helped since I got 2 replies with more stuff to read about. Did you mean it doesn't work for you?

              > downvotes as symmetric to upvotes

              Yes, and we should have more upvote options too. I am not sure the explanation should be symmetric though.

              Imagine a group conversation in which somebody lies (the "factually incorrect" case here). Depending on your social status within the group and group politics, you might call out the lie in public, in private with a subset, or not at all. But if you do, you will almost certainly be expected to provide reasoning or evidence.

              Now imagine he says something which is factually correct. If you say you agree, are you expected to provide references why? I don't think so.

              ---

              BTW, on a site which is a more technical alternative to HN, there was recently a post about strange behavior of HN votes. Other people posted their experience with downvotes here and it mirrored mine - organic-looking (i.e. gradual) upvotes, then several downvotes within minutes of each other. It could be coincidence, but I and others suspect voting rings evading detection.

              I also posted a link to my previous comment as an experiment - if people disagree, they are more likely to also downvote that one. But I did not see any change there, so I suspect it might be bots (which are unlikely to be instructed to also click through and downvote there). Note the sample size is 1 here, for now.

          • ThrowawayR2 3 hours ago ago

            Maybe if you constructed your argument in terms of the relevant statutes for your jurisdiction, like an actual copyright attorney does, HN might be more receptive to it?

            • martin-t 2 hours ago ago

              I argue primarily about morality (right and wrong), not legality. The argument is valid morally; if LLM companies found a loophole in the law, it should be closed.

              • ThrowawayR2 an hour ago ago

                You literally wrote "it would be interesting to get it into the courtrooms". A court won't give a hoot about your opinions on morality.

    • embedding-shape 10 hours ago ago

      > I want to write less, just knowing that LLM models are going to be trained on my code is making me feel more strongly than ever that my open source contributions will simply be stolen. Am I wrong to feel this? Is anyone else concerned about this?

      I don't think it's wrong, but misdirected maybe. What do you mean when you say someone can "steal" your open source contributions? I've always released most of my code as "open source", and not once has someone "stolen" it; it still sits on the same webpage where I initially published it, decades ago. Sure, it was guaranteed to be ingested into LLMs a long time ago, but that's hardly "stealing" when the thing is still there and given away for free.

      I'm not sure how anyone can feel like their open source code was "stolen", wasn't the intention in the first place that anyone can use it for any purpose? That's at least why I release code as open source.

      • krior 10 hours ago ago

        "Open Source" does not equal "No terms on how to share and use the code". Granted, there are such licenses but afaik the majority requires attribution at the minimum.

        • embedding-shape 9 hours ago ago

          Then I'd say they're "breaking the license", not "stealing your project", but maybe I'm too anal about the meaning of words.

          • dom96 9 hours ago ago

            Yeah, fair, I could have been clearer. But yes, that is what I meant: breaking the license.

            • otterley 4 hours ago ago

              I’m unaware of any mainstream Open Source licenses that forbid training an AI model on the work. Are you using one?

      • gus_massa 9 hours ago ago

        [A]GPL is viral, so the derived code must use the same license. People that like that license care a lot about that.

        On the other side, BSD0 is just a polite version of the WTFPL, and people that like it don't care about what you do with the code.

        • embedding-shape 9 hours ago ago

          And I mostly use MIT, which requires attribution. Does that mean when people use my code without attributing me, they're "stealing my code"? I would never call it that; I'd say they're "breaking the license", or similar.

          • otterley 4 hours ago ago

            The MIT license doesn’t require attribution for “using...code.” It reads as follows:

            > Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

            > The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

            > THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

            The operative language here is "all copies or substantial portions of the Software." LLMs, with rare exceptions, don't retain copies or substantial portions of the software they were trained on. They're not libraries or archives. So it's unclear to me how training an AI model with an MIT-licensed project could violate the license.

            (IAAL and this is my personal analysis, not legal advice.)

            • gus_massa 2 hours ago ago

              I think the GP said "use" in the programmer sense, i.e. Ctrl-C & Ctrl-V into your program, not in the normal sense, i.e. double-click on the icon. So I guess we all agree.

    • serf 10 hours ago ago

      I don't understand the mindset because I began my foray into open source exactly because I wanted to distribute and share my code.

      In other words, I've never been in the position of feeling that my charitable givings anywhere were ever stolen.

      Some people write code and put it out there without caveats. Some people jump into open source to be license warriors. Not me. I just write code and share it. If you're a person, great. If you're a machine, then I suppose that's okay too -- I don't want to play musical chairs with licenses all day just to throw some code out there, and I don't particularly care if someone more clever than myself uses it to generate a profit.

      • ChrisMarshallNY 10 hours ago ago

        Me too.

        I’ve never been a fan of coercive licensing. I don’t consider that “open.” It’s “strings-attached.”

        I make mine MIT-licensed. If someone takes my stuff, and gets rich (highly unlikely), then that’s fine. I just don’t want some asshole suing me, because they used it inappropriately, or a bug caused them problems. I don’t even care about attribution.

        I mainly do it, because it forces me to take better care, when I code.

      • matthewmacleod 9 hours ago ago

        Do you really struggle to understand the mindset?

        Some people are happy to release code openly and have it used for anything, commercial or otherwise. Totally understandable and a valid choice to make.

        Other people are happy to release code openly so long as people who incorporate it into their projects also release it in the same way. Again, totally understandable and valid.

        None of this is hard to understand or confusing or even slightly weird.

    • babarock 10 hours ago ago

      I don't know if you're "wrong", but I do feel differently about this.

      I've written a ton of open source code and I never cared what people do with it, both "good" or "bad". I only want my code to be "useful". Not just to the people I agree with, but to anyone who needs to use a computer.

      Of course, I'd rather people use my code to feed the poor than build weapons, but it's just a preference. My conviction is that my code is _freed_ from me and my individual preferences and shared for everyone to use.

      I don't think my code is "stolen", if someone uses it to make themselves rich.

      • auggierose 9 hours ago ago

        And in that case, use MIT license or something like that for your code, and all is good. If I use AGPL, on the other hand, AI companies should not be allowed to train on that and then use the result of that training while ignoring the license.

      • martin-t 9 hours ago ago

        > Not just to the people I agree with, but to anyone who needs to use a computer.

        Why not say "... but to the people I disagree with"?

        Would you be OK knowing your code is used to cause more harm than good? Would you still continue working on a hypothetical OSS project which had no users other than, say, a totalitarian government in the Middle East which executes homosexuals? Would you be OK with your software being a critical, directly involved piece of code for, say, tracking, de-anonymizing and profiling them?

        Where is the line for you?

        • stravant 5 hours ago ago

          As for me, that's a risk I'm willing to accept in return for the freedom of the code.

          I'm not going to deliberately write code that's LIKELY to do more harm than good, but crippling the potential positive impact just because of some largely hypothetical risk? That feels almost selfish: what would I really be trying to avoid - personally running into a feel-bad outcome?

          • martin-t 3 hours ago ago

            I think it would be most interesting to find ways to restrict bad usage without crippling the positive impact.

            Douglas Crockford[0] tried this with JSON. Now, strictly speaking, this does not satisfy the definition of Open Source (it merely is open source, lowercase). But after 10 years of working on Open Source, I came to the conclusion that Open Source is not the absolute social good we delude ourselves into thinking it is.

            Sure, it's usually better than closed source because the freedoms mean people tend to have more control and it's harder for anyone (including large corporations) to restrict those freedoms. But I think it's a local optimum and we should start looking into better alternatives.

            Android, for example, is nominally Open Source, but in reality the source is only published by Google periodically[1], making any true cooperation between the paid devs and the community difficult. And good luck getting this to actually run on a physical device without giving up things like Google Play or banking apps or your warranty.

            There are always ways to fuck people over, and there always will be, but we should look into further ways to limit and reduce them.

            [0]: https://en.wikipedia.org/wiki/Douglas_Crockford

            [1]: https://www.androidauthority.com/aosp-source-code-schedule-3...

        • layer8 5 hours ago ago

          I agree with the GP. While I wouldn’t be happy about such uses, I see the use as detached from the software as-is, given (assuming) that it isn’t purpose-built for the bad uses. If the software is only being used for nefarious purposes, then clearly you have built the wrong thing, not applied the wrong license. The totalitarian government wouldn’t care about your license anyway.

          The one thing I do care about is attribution — though maybe actually not in the nefarious cases.

          • martin-t 3 hours ago ago

            > The totalitarian government wouldn’t care about your license anyway.

            I see this a lot and while being technically correct, I think it ignores the costs for them.

            In practice such a government doesn't need to have laws and courts either, but usually does, for the appearance of justice.

            Breaking international laws such as copyright also has costs for them. Nobody will probably care about one small project but large scale violations could (or at least should) lead to sanctions.

            Similarly, if they want to offer their product in other countries, now they run the risk of having to pay fines.

            Finally, see my sibling comment but a lot of people act like Open Source is an absolute good just because it's Open Source. By being explicit about our views about right and wrong, we draw attention to this delusion.

            • layer8 3 hours ago ago

              It’s fine to use whatever license you think is right. That includes the choice of using a permissive license. Restrictions are generally an impediment for adoption, due to their legal risk, even for morally immaculate users. I think that not placing usage restrictions on open source is just as natural as not placing usage restrictions on published research papers.

              • martin-t 2 hours ago ago

                Tragedy of the commons. If all software had (compatible) clauses about permitted usage, then the choice would be to rewrite it in-house or accept the restrictions. When there are alternatives (copyleft or permissive) which are not significantly worse, those will get used instead, even if, taken in isolation, the restricted software was a bigger social good.

    • uyzstvqs 9 hours ago ago

      Then why open source something in the first place? The entire point is to make it public, for anyone to use however is useful to him or her, and often to publicly collaborate on a project together.

      If I made something open source, you can train your LLM on it as much as you want. I'm glad my open source work is useful to you.

      • jeroenhd 8 hours ago ago

        Plenty of people will gladly give you their hard work for free if you promise you'll return the favor. Or if you promise not to take their work and make others pay for it when they could just get it for free. Basically, help the people that want to embrace the freedoms of open source, but not the ones that are just in it for the free labour. Or at the very, very least, include a little "thank you" note.

        AI doesn't hold up its end of the bargain, so if you're in that mindset you now have to decide between going full hands-off like you or not doing any open source work at all.

        • simonw 8 hours ago ago

          Given the amount of value I get from having AI models help me write code I would say that AI is paying me back for my (not insignificant) open source contributions a thousand times over.

          • jeroenhd 7 hours ago ago

            Good for you, I guess? That doesn't really change the situation much for the people who do care and/or don't use AI.

            I consider the payment I and my employer make to these AI companies to be what the LLM is paying me back for. Even the free ones get paid for my usage somehow. This stuff isn't charity.

          • hexbin010 8 hours ago ago

            You're quite vigorously replying to anyone disagreeing with the post (and haven't contributed to the top level as far as I can tell).

            It comes across as really trying too hard and a bit aggressive.

            You could just write one top level comment and chill a bit. Same advice for any future threads too...

      • tw04 9 hours ago ago

        > The entire point is to make it public, for anyone to use however is useful to him or her

        The entire point isn’t to allow a large corporation to make private projects out of your open source project for many open source licenses. It’s to ensure the works that leverage your code are open source as well. Something AI is completely ignoring using various excuses as to why their specific type of theft is ok.

    • JacobAsmuth an hour ago ago

      This is why I never got into open source in the first place. I was worried that new programmers might read my code, learn how to program, and then start independently contributing to the projects I know and love - significantly devaluing my contributions.

    • Freak_NL 10 hours ago ago

      I don't worry about that too much. I still contribute to FOSS projects, and I use FOSS projects. Whenever I contribute, I usually fix something that affects me (or maybe just something I encountered), and fixing it has a positive effect on the users of that software, including me.

    • oxag3n 3 hours ago ago

      This is a dilemma for me that gets more and more critical as I finalize my thesis. My default mental model was to open source for the sake of contributing back to the community, enhance my ideas and discuss them with whoever finds it interesting.

      To my surprise, my doctoral advisor told me to keep the code closed. She told me that not only will LLMs steal it and benefit from it, but there's also a risk of my code becoming a target after it's stolen by companies with fat attorney budgets, and there's no way I could defend and prove anything.

    • prodigycorp 10 hours ago ago

      I don't understand the invocation of Tailwind here. It doesn't make sense. Tailwind's LLM struggles had nothing to do with open source; they had to do with the fact that they had the same business model as a publisher, with ads pointing to their only product.

      • aspaviento 9 hours ago ago

        Exactly, their issue was about a drop in visits to their documentation site where they promote their paid products. If they were making money from usage, their business could really thrive with LLMs recommending Tailwind by default.

        • dom96 9 hours ago ago

          AFAIK their issue is that LLMs have been trained on their paid product (Tailwind UI, etc.) and so can reproduce them very easily for free. Which means devs no longer pay for the product.

          In other words, the open source model of "open core with paid additional features" may be dead thanks to LLMs. Perhaps less so for some types of applications, but for frameworks like Tailwind very much so.

          • prodigycorp 9 hours ago ago

            That's not what Adam said. He said it was a traffic issue.

    • fabianholzer 7 hours ago ago

      > Am I wrong to feel this?

      Why would a feeling be invalid? You have one life, you are under no obligation to produce clean training material, much less feel bad about this.

    • jillesvangurp 8 hours ago ago

      A common intention with open source is to allow people, and the AI tools they use, to reuse, recombine, etc. OSS code in any way they see fit. If that's not what you want, don't open source your work. It's not stealing if you gave it away and effectively told people "do whatever you want", which is one way licenses such as the MIT license are often characterized.

      It's very hard to prevent specific types of usage (like feeding code to an LLM) without throwing out the baby with the bathwater and also preventing all sorts of other valid usages. AGPLv3, which is what antirez and Redis use, goes too far IMHO and still doesn't quite get the job done. It doesn't forbid people (or tools) to "look" at the code, which is what AI training might be characterized as. That license creates lots of headaches for corporate legal departments. I switched to Valkey for that reason.

      I actually prefer using MIT style licenses for my own contributions precisely because I don't want to constrain people or AI usage. Go for it. More power to you if you find my work useful. That's why I provide it for free. I think this is consistent with the original goals of open source developers. They wanted others to be able to use their stuff without having to worry about lawyers.

      Anyway, AI progress won't stop because of any of this. As antirez says, that stuff is now part of our lives and it is a huge enabler if you are still interested in solving interesting problems. Which apparently he is. I can echo much of what he says. I've been able to solve larger and larger problems with AI tools. The last year has seen quite a bit of evolution in what is possible.

      > Am I wrong to feel this?

      I think your feelings are yours. But you might at least examine your own reasoning a bit more critically. Words like theft and stealing are big words, and I think your case for them is just very weak. And when you are coding yourself, are you not standing on the shoulders of giants? Is that not theft?

    • tiborsaas 4 hours ago ago

      Yes. If you didn't care before when contributing to open source who uses your code then it shouldn't matter now that a company picks up your code. You are also contributing this way too.

      Tailwind is a business and they picked a business model that wasn't resilient enough.

    • chrishare 10 hours ago ago

      I think the Tailwind case is more complicated than this, but yes - I think it's reasonable to want to contribute something to the common good but fear that the value will disproportionately go to AI companies and shareholders.

    • bromuro 10 hours ago ago

      I do open source exactly because I'm fine with my work being "stolen".

      • arter45 9 hours ago ago

        GPL requires attribution. Some people are fine with their code being used by others for free while still expecting their work to be acknowledged. Code posted on Stackoverflow is apparently CC-BY-SA licensed, which means attribution is still required.

      • m4rtink 10 hours ago ago

        Stolen means no attribution and not following the rules of the GPL, instead producing un-attributed AI-washed closed source code owned by companies.

    • samwillis 10 hours ago ago

      I'm convinced that LLMs will result in all software needing to be open source (or at the very least source available).

      In the future everyone will expect to be able to customise an application; if the source is not available, they will not choose your application as a base. It's that simple.

      The future is highly customisable software, and that is best built on open source. How this looks from a business perspective I think we will have to find out, but it's going to be fun!

      • charcircuit 9 hours ago ago

        Why do you think customization can only viably be done by changing the code of the application itself?

        I think there is room for closed source platforms that are built on top of using LLMs via some sort of API that it exposes. For example, iOS can be closed source and LLMs can develop apps for it to expand the capabilities of one's phone.

        Allowing total customization by a business can allow them to mess up the app itself or make other mistakes. I don't think it's the best interface for allowing others to extend the app.

      • dom96 9 hours ago ago

        I'm convinced of the opposite. I think a lot more software will be closed source so that an LLM cannot reproduce it from its training data for free.

      • MaxBarraclough 8 hours ago ago

        > In future everyone will expect to be able to customise an application, if the source is not available they will not chose your application as a base. It's that simple.

        This seems unlikely. It's not the norm today for closed-source software. Why would it be different tomorrow?

        • simonw 8 hours ago ago

          Because we now have LLMs that can read the code for us.

          I'm feeling this already.

          Just the other day I was messing around with Fly's new Sprites.dev system and I found myself confused as to how one of the "sprite" CLI features worked.

          So I went to clone the git repo and have Claude Code figure out the answer... and was surprised to find that the "sprite" CLI tool itself (unlike Fly's flycli tool, which I answer questions about like this pretty often) wasn't open source!

          That was a genuine blocker for me because it prevented me from answering my question.

          It reminded me that the most frustrating thing about using macOS these days is that so much of it is closed source.

          I'd love to have Claude write me proper documentation for the sandbox-exec command for example, but that thing is pretty much a black hole.

          • MaxBarraclough 7 hours ago ago

            I'm not convinced that lowering the barrier to entry to software changes will result in this kind of change of norms. The reasons for closed-source commercial software not supporting customisation largely remain the same. Here are the ones that spring to mind:

            • Increased upfront software complexity

            • Increased maintenance burden (to not break officially supported plugins/customizations)

            • Increased support burden

            • Possible security/regulatory/liability issues

            • The company may want to deliberately block functionality that users want (e.g. data migration, integration with competing services, or removing ads and content recommendations)

            > That was a genuine blocker for me because it prevented me from answering my question.

            It's always been this way. From the user's point of view there has always been value in having access to the source, especially under the terms of a proper Free and Open Source licence.

    • qsera 9 hours ago ago

      Unless I am missing something, it seems that you only need to use something like the following (obtained via a quick search, haven't tried it):

      https://archclx.medium.com/enforcing-gpg-encryption-in-githu...

      My opinion on the matter is that AI models stealing open source code would be OK IF the models are also open and remain so, and services like ChatGPT remain free of cost (at least a free tier) and free of ads.

      But we all know how it is going to go.

    • CraftingLinks 9 hours ago ago

      Not wrong. But I don't share your concerns at all. I like sharing code, and if people, and who knows, machines, can make use of it and provide some value, however minute, that makes me content.

    • zahlman 9 hours ago ago

      > But, in general, it is now clear that for most projects, writing the code yourself is no longer sensible, if not to have fun.

      I want to write code to defy this logic and express my humanity. "To have fun", yes. But also to showcase what it means when a human engages in the act of programming. Writing code may increasingly not be "needed", but it increasingly is art.

    • burnermore 10 hours ago ago

      This is an absolutely valid concern. We either need strong governmental interventions against models that don't comply with OSS licenses.

      Or accept that there definitely won't be open model businesses. Make them proprietary and accept the fact that even permissive licenses such as MIT and 2/3-clause BSD won't be followed by anyone while writing OSS.

      And as for Tailwind, I dunno if it is because of AI.

    • rolisz 10 hours ago ago

      With Tailwind, wasn't the problem that much fewer people visited the documentation, which showed ads? The LLMs still used Tailwind.

    • ben_w 9 hours ago ago

      > Am I wrong to feel this?

      There's no such thing as a wrong feeling.

      And I say this as one of those with the view that AI training is "learning" rather than "stealing", or at least that this is the goal, because AI is the dumbest, the most error-prone, and also the most expensive way to try to make a copy of something.

      My fears about setting things loose for public consumption are more about how I will be judged for them than about being ripped off, which is kinda why that book I started writing a decade ago and have not meaningfully touched in the last 12 months is neither published properly nor sent to some online archive.

      When it comes to licensing source code, I mostly choose MIT, because I don't care what anyone does with the code once it's out there.

      But there's no such thing as a wrong feeling, anyone who dismisses your response is blinding themselves to a common human response that also led to various previous violent uprisings against the owners of expensive tools of automation that destroyed the careers of respectable workers.

    • oncallthrow 10 hours ago ago

      I want to write less, because quite frankly I get zero satisfaction from having an LLM churn out code for me, in the same way that Vincent van Gogh would likely derive no joy from using Nano Banana to create a painting.

      And sure, I could stubbornly refuse to use an LLM and write the code myself. But after getting used to LLM-assisted coding, particularly recent models, writing code by hand feels extremely tedious now.

    • williamcotton 8 hours ago ago

      I've been writing a bunch of DSLs lately and I would love to have LLMs train on this data.

    • tmplostpwd 10 hours ago ago

      If you don't want people "stealing" your code, you don't want open source. You want source available.

      • pferde 10 hours ago ago

        You're confusing open source with public domain.

    • andrewstuart 9 hours ago ago

      If you give and expect something in return, then you are not giving; that is a transaction.

    • zsoltkacsandi 9 hours ago ago

      > As a programmer, I want to write more open source than ever, now.

      I believe open source will become a bit less relevant in its current form, as solution/project-tailored libraries/frameworks can be generated in a few hours with LLMs.

    • andrewstuart 9 hours ago ago

      I’ve written plenty of open source and I’m glad it’s going into the great training models that help everyone out.

      I love AI and pay for four services and will never program without AI again.

      It pleases me that my projects might be helping out.

    • risyachka 10 hours ago ago

      Also open source without support has zero value. And you can support only 1-2 projects.

      Meaning 99% of all OSS released now is de facto abandonware.

    • noosphr 10 hours ago ago

      Use a license that doesn't allow it then.

      Not everything needs to be mit or gnu.

      • bakugo 10 hours ago ago

        LLMs don't care about licenses. And even if they did, the people who use them to generate code don't care about licenses.

        • noosphr 9 hours ago ago

          Thieves don't care about locks, so doors are pointless.

          • bakugo 7 hours ago ago

            Thieves very much do care about doors and locks, because they are a physical barrier that must be bypassed, and doing so is illegal.

            Software licenses aren't; AI companies can just take your GPL code and spit it back out into non-GPL codebases, and there's no way for you to even find out it happened, much less do anything about it, and the law won't help you either.

    • 63stack 9 hours ago ago

      Also why would I use your open source project, when I can just prompt the AI to generate one for me, gracefully stripping the license as a bonus?

    • martin-t 9 hours ago ago

      No, you're absolutely right.

      LLMs are labor theft on an industrial scale.

      I spent 10 years writing open source, I haven't touched it in the last 2. I wrote for multiple reasons none of which any longer apply:

      - I believe every software project should have an open source alternative. But writing open source now means useful patterns can be extracted and incorporated into closed source versions _mechanically_ and with plausible deniability. It's ironically worse if you write useful comments.

      - I enjoyed the community aspect of building something bigger than one person can accomplish. But LLMs are trained on the whole history and potentially forum posts / chat logs / emails which went into designing the SW too. With sufficiently advanced models, they effectively use my work to create a simulation of myself and other devs.

      - I believe people (not just devs) should own the product they build (an even stronger protection of workers against exploitation than copyright). Now our past work is being used to replace us in the future without any compensation.

      - I did it to get credit. Even though it was a small motivation compared to the rest, I enjoyed everyone knowing what I accomplished and I used it during job interviews. If somebody used my work, my name was attached to it. With LLMs, anyone can launder it and nobody knows how useful my work was.

      - (not solely LLM related) I believed better technology improves the world and quality of life around me. Now I see it as a tool - neutral - to be used by anyone for both good and bad purposes.

      Here's[0] a comment where I described why it's theft, based on how LLMs work. I call it higher-order plagiarism. I haven't seen this argument made by other people; it might be useful for arguing with those who want to legalize this.

      In fact, I wonder if this argument has been made in court and whether the lawyers understand LLMs enough to make it.

      [0]: https://news.ycombinator.com/item?id=46187330

    • poszlem 10 hours ago ago

      You are not wrong to feel this, because you cannot control what you feel. But it might be worth investigating why you feel this, and why you were writing open source in the first place.

      • DrewADesign 10 hours ago ago

        Job insecurity, while a bunch of companies claim LLM coding agents are letting them decimate their workforces, is a pretty solid reason to feel like your code is being stolen. Many, if not most, tech workers have been very sheltered from the harsher economic realities most people face, and many are realizing that labor demand, rather than them being special, is why. A core goal of AI products is increasing the supply of what developer labor produces, which reduces demand for that labor. So yeah - feeling robbed when your donated code is used to train models is pretty rational.

      • supriyo-biswas 10 hours ago ago

        Ultimately, most things in life and society where one freely gives (and open source could be said to be one such activity) are balanced by asking everyone participating in the "system" to also reciprocate, without which it becomes an exploitative relationship. Examples of such sayings can be found in most major world religions, but a non-religious explanation of the dynamics at hand follows below.

        If running an open source model means that I have only given out without receiving anything, there remains the possibility of being exploited. This dynamic has always existed, such as companies using a project and sending in vulnerability reports and the like but not offering to help, and instead demanding, often quite rudely.

        In the past working with such extractive contributors may have been balanced with other benefits such as growing exposure leading to professional opportunities, or being able to sell hosted versions, consulting services and paid features, which would have helped the maintainer of the open source project pay off their bills and get ahead in life.

        However, the rise of LLMs both facilitates usage of the open source tools without giving the maintainer a chance to direct users' attention towards these paid services, and prevents the maintainer from having direct exposure to their contributors. It also indirectly violates the spirit of said open source licenses, as LLMs can spit out the knowledge contained in these codebases at a scale that humans cannot, thus allowing people to bypass the license and create their own versions of the tools, which are themselves not open source despite deriving their knowledge from such data.

        Ultimately we don't need to debate this; if open source remains a viable model in the age of LLMs, people will continue to do it regardless of whether we agree or disagree on topics such as this. On the other hand, if people are not rewarded in any way, we will only be left with LLM-generated codebases that anyone could have produced, leaving all the interesting software development to happen behind closed doors in companies.

      • abc123abc123 10 hours ago ago

        It is actually very simple to control what you feel, and very much possible. This deterministic idea about our feelings must die quickly. Pro-tip: call the psychology department at your local university and they will happily teach you how to control your feelings.

  • systemf_omega 10 hours ago ago

    What I don't understand about this whole "get on board the AI train or get left behind" narrative is this: what advantage does an early adopter have with AI tools?

    The way I see it, I can just start using AI once they get good enough for my type of work. Until then I'm continuing to learn instead of letting my brain atrophy.

    • simonw 9 hours ago ago

      This is a pretty common position: "I don't worry about getting left behind - it will only take a few weeks to catch up again".

      I don't think that's true.

      I'm really good at getting great results out of coding agents and LLMs. I've also been using LLMs for code on an almost daily basis since ChatGPT's release on November 30th 2022. That's more than three years ago now.

      Meanwhile I see a constant flow of complaints from other developers who can't get anything useful out of these machines, or find that the gains they get are minimal at best.

      Using this stuff well is a deep topic. These things can be applied in so many different ways, and to so many different projects. The best asset you can develop is an intuition for what works and what doesn't, and getting that intuition requires months if not years of personal experimentation.

      I don't think you can just catch up in a few weeks, and I do think that the risk of falling behind isn't being taken seriously enough by much of the developer population.

      I'm glad to see people like antirez ringing the alarm bell about this - it's not going to be a popular position but it needs to be said!

      • systemf_omega 9 hours ago ago

        > Using this stuff well is a deep topic.

        Just like the stuff LLMs are being used for today. Why wouldn't "using LLMs well" be just one of the many things LLMs will simplify too?

        Or do you believe your type of knowledge is somehow special and is resistant to being vastly simplified or even made obsolete by AI?

        • simonw 6 hours ago ago

          An interesting trend over the past year is that LLMs have learned how to prompt each other.

          Back in ~2024 a lot of people were excited about having "LLMs write the prompt!" but I found the results to be really disappointing - they were full of things like "You are the world's best expert in marketing" which was superstitious junk.

          As of 2025 I'm finding they actually do know how to prompt, which makes sense because there's a ton more information about good prompting approaches in the training data as opposed to a couple of years ago. This has unlocked some very interesting patterns, such as Claude Code prompting sub-agents to help it explore codebases without polluting the top level token window.

          But learning to prompt is not the key skill in getting good results out of LLMs. The thing that matters most is having a robust model of what they can and cannot do. Asking an LLM "can you do X" is still the kind of thing I wouldn't trust them to answer in a useful way, because they're always constrained by training data that was only aware of their predecessors.

        • leonidasv 5 hours ago ago

          Unless we figure out how to make 1 billion+ tokens multimodal context windows (in a commercially viable way) and connect them to Google Docs/Slack/Notion/Zoom meetings/etc, I don't think it will simplify that much. Most of the work is adjusting your mental model to the fact that the agent is a stateless machine that starts from scratch every single time and has little-to-no knowledge besides what's in the code, so you have to be very specific about the context of the task in some ways.

          It's different from assigning a task to a co-worker who already knows the business rules and cross-implications of the code in the real world. The agent can't see the broader picture of the stuff it's making, it can go from ignoring obvious (to a human that was present in the last planning meeting) edge cases to coding defensively against hundreds of edge cases that will never occur, if you don't add that to your prompt/context material.

      • coffeemug 5 hours ago ago

        Strongly disagree. Claude Code is the most intuitive technology I've ever used -- way easier than learning to use even VS Code, for example. It doesn't even take weeks. Maybe a day or two to get the hang of it and you're off to the races.

        • simonw 4 hours ago ago

          Don't underestimate the number of developers who aren't comfortable with tools that live in the terminal.

          • HDThoreaun 9 minutes ago ago

            Well these people are left behind either way. Competent devs can easily learn to use coding assistants in a day or two

      • furyofantares 4 hours ago ago

        I think I'm also very good at getting great results out of coding agents and LLMs, and I disagree pretty heavily with you.

        It is just way easier for someone to get up to speed today than it was a year ago. Partly because capabilities have gotten better and much of what was learned 6+ months ago no longer needs to be learned. But also partly because there is just much more information out there about how to get good results, you might have coworkers or friends you can talk to who have gotten good results, you can read comments on HN or blog posts from people who have gotten good results, etc.

        I mean, ok, I don't think someone can fully catch up in a few weeks. I'll grant that for sure. But I think they can get up to speed much faster than they could have a year ago.

        Of course, they will have to put in the effort at that time. And people who have been putting it off may be less likely to ever do that. So I think people will get left behind. But I think the alarm to raise is more, "hey, it's a deep topic and you're going to have to put in the effort" rather than "you better start now or else it's gonna be too late".

      • mmcnl 7 hours ago ago

        Why can't both be true at the same time? Maybe their problems are more complex than yours. Why do you assume it's a skill issue and ignore the contextual variables?

        • simonw 6 hours ago ago

          On the rare occasions that I can convince them to share the details of the problems they are tackling and the exact prompts they are using it becomes very clear that they haven't learned how to use the tools yet.

          • UncleEntity 3 hours ago ago

            I'm kind of curious about the things you're seeing since I find the best way is to have them come up with a plan for the work they're about to do and then make sure they actually finish it because they like to skip stuff if it requires too much effort.

            I mean, I just think of them like a dog that'll get distracted and go off doing some other random thing if you don't supervise them enough and you certainly don't want to trust them to guard your sandwich.

      • jeroenhd 8 hours ago ago

        So far every new AI product and even model update has required me to relearn how to get decent results out of them. I'm honestly kind of sick of having to adjust my work flow every time.

        The intuition just doesn't hold. The LLM gets trained and retrained by other LLM users so what works for me suddenly changes when the LLM models refresh.

        LLMs have only gotten easier to learn and catch up on over the years. In fact, most LLM companies seem to optimise for getting started quickly over getting good results consistently. There may come a moment when the foundations solidify and not bothering with LLMs may put you behind the curve, but we're not there yet, and with the literally impossible funding and resources OpenAI is claiming they need, it may never come.

      • rubslopes 5 hours ago ago

        I don't disagree, knowing how to use the tools is important. But I wanted to add that great prompting skills are far, far less necessary for top-tier models nowadays than they were years ago. If I'm clear about what I want and how I want it to behave, Claude Opus 4.5 almost always nails it the first time. The "extra" that I do often, that maybe newcomers don't, is to set up a system where the LLM can easily check the results of its changes (verbose logs in the terminal and, on the web, verbose logs in the console plus Playwright).

      • biophysboy 4 hours ago ago

        What are your tips? Any resources you would recommend? I use Claude code and all the chat bots, but my background isn't programming, so I sometimes feel like I'm just swimming around.

      • Mawr 8 hours ago ago

        I don't see how your position is compatible with the constant hype about the ever-growing capabilities of LLMs. Either they are improving rapidly, and your intuition keeps getting less and less valuable, or they aren't improving.

        • simonw 8 hours ago ago

          They're improving rapidly, which means your intuition needs to be constantly updated.

          Things that they couldn't do six months ago might now be things that they can do - and knowing they couldn't do X six months ago is useful because it helps systematize your explorations.

          A key skill here is to know what they can do, what they can't do and what the current incantations are that unlock interesting capabilities.

          A couple I've learned in the past week:

          1. Don't give Claude Code a URL to some code and tell it to use that, because by default it will use its WebFetch tool but that runs an extra summarization layer (as a prompt injection defense) which loses details. Telling it to use curl sometimes works but a guaranteed trick is to have it git clone the relevant repo to /tmp and look at the code there instead.

          2. Telling Claude Code "use red/green TDD" is a quick-to-type shortcut that will cause it to write tests first, run them and watch them fail, then implement the feature and run the tests again. This is a wildly effective technique for getting code that works properly while avoiding untested junk code that isn't needed. (A minimal sketch of the loop follows below.)
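
          To make that second tip concrete, here's a tiny sketch of the red/green loop the agent is being pushed toward - my own illustration with a hypothetical slugify function, not anything generated by or built into Claude Code:

              import re
              # Red: the test comes first; running pytest at this point should fail,
              # because slugify doesn't exist (or doesn't work) yet.
              def test_slugify():
                  assert slugify("Hello, World!") == "hello-world"
              # Green: implement just enough to make the test pass, then re-run the test.
              def slugify(text: str) -> str:
                  text = re.sub(r"[^a-z0-9]+", "-", text.lower())  # collapse non-alphanumeric runs to dashes
                  return text.strip("-")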

          Now multiply those learnings by three years. Sure, the stuff I figured out in 2023 mostly doesn't apply today - but the skills I developed in learning how to test and iterate on my intuitions from then still count and still keep compounding.

          The idea that you don't need to learn these things because they'll get better to the point that they can just perfectly figure out what you need is AGI science fiction. I think it's safe to ignore.

          • crakhamster01 7 hours ago ago

            I feel like both of these examples are insights that won't be relevant in a year.

            I agree that CC becoming omniscient is science fiction, but the goal of these interfaces is to make LLM-based coding more accessible. Any strategies we adopt to mitigate bad outcomes are destined to become part of the platform, no?

            I've been coding with LLMs for maybe 3 years now. Obviously a dev who's experienced with the tools will be more adept than one who's not, but if someone started using CC today, I don't think it would take them anywhere near that time to get to a similar level of competency.

            • simonw 6 hours ago ago

              I base part of my skepticism about that on the huge number of people who seem to be unable to get good results out of LLMs for code, and who appear to think that's a commentary on the quality of the LLMs themselves as opposed to their own abilities to use them.

              • svara 6 hours ago ago

                I suspect that's neither a skill issue nor a technical issue.

                Being "a person who can code" carries some prestige and signals intelligence. For some, it has become an important part of their identity.

                The fact that this can now be said of a machine is a grave insult if you feel that way.

                It's quite sad in a way, since the tech really makes your skills even more valuable.

          • mmcnl 7 hours ago ago

            Personally I think this is an extreme waste of time. Every week you're learning something new that is already outdated the next week. You're telling me AI can write complex code but isn't able to figure out how to properly guide the user into writing usable prompts?

            A somewhat intelligent junior will dive deep for one week and reach the knowledge level it took you roughly 3 years to build.

            • simonw 6 hours ago ago

              No matter how good AI gets we will never be in a situation where a person with poor communication skills will be able to use it as effectively as someone whose communication skills are razor sharp.

              • q3k 6 hours ago ago

                But the examples you've posted have nothing to do with communication skills, they're just hacks to get particular tools to work better for you, and those will change whenever the next model/service decides to do things differently.

                • simonw 6 hours ago ago

                  I'm going to resist the temptation to spend more time coming up with more examples. I'm sorry those weren't to your liking!

                • zahlman 4 hours ago ago

                  I'm generally skeptical of Simon's specific line of argument here, but I'm inclined to agree with the point about communication skill.

                  In particular, the idea of saying something like "use red/green TDD" is an expression of communication skill (and also, of course, awareness of software methodology jargon).

                  • habinero 4 hours ago ago

                    Ehhh, I don't know. "Communication" is for sapients. I'd call that "knowing the right keywords".

                    And if the hype is right, why would you need to know any of them? I've seen people unironically suggest telling the LLM to "write good code", which seems even easier.

                    • zahlman 3 hours ago ago

                      I sympathize with your view on a philosophical level, but the consequence is really a meaningless semantic argument. The point is that prompting the AI with words that you'd actually use when asking a human to perform the task, generally works better than trying to "guess the password" that will magically get optimum performance out of the AI.

                      Telling an intern to care about code quality might actually cause an intern who hasn't been caring about code quality to care a little bit more. But it isn't going to help the intern understand the intended purpose of the software.

            • habinero 4 hours ago ago

              Right? I kinda roll my eyes at the endless urgent FOMO that somehow always links back to their podcast or blog.

              Fuck you, I learned k8s and general relativity, I can learn how to yell at a computer lol

              Y'all go on and waste your time learning and relearning and chanting at websites. I'll just copy and paste your shit if and when I need to.

    • quitit 9 hours ago ago

      You're right, it's difficult to get "left behind" when the tools and workflows are being constantly reinvented.

      You'd be wise to just keep a high-level view until workflows become stable and aren't advancing every few months.

      The time to consider mastering a workflow is when a casual user of the "next release" wouldn't trivially supersede your capabilities.

      Similarly we're still in the race to produce a "good enough" GenAI, so there isn't value in mastering anything right now unless you've already got a commercial need for it.

      This all reminds me of a time when people were putting in serious effort to learn Palm Pilot's Graffiti handwriting recognition, only for the skill to be made redundant even before they were proficient at it.

    • antirez 10 hours ago ago

      I think that those who say you need to be accustomed to the current "tools" around AI agents are suffering from a horizon effect: this stuff will change continuously for some time, and the more it evolves, the less you need to fiddle with the details. However, the skill you do need is communication skills. You need to be able to express yourself, and what matters for your project, fast and well. Many programmers are not great at communication. In part this is a gift, something you develop at a young age, and this will, I believe, kinda change who is good at programming: good communicators / explorers may now have an edge over very strong coders who are bad at explaining themselves. But a lot of it is attitude, IMHO. And practice.

      • embedding-shape 10 hours ago ago

        > Many programmers are not great at communication.

        This is true, but still shocking. Professional (working with others at least) developers basically live or die by their ability to communicate. If you're bad at communication, your entire team (and yourself) suffer, yet it seems like the "lone ranger" type of programmer is still somewhat praised and idealized. When trying to help some programmer friends with how they use LLMs, it becomes really clear how little they actually can communicate, and for some of them I'm slightly surprised they've been able to work with others at all.

        An example from the other day: a friend complained that the LLM they worked with was using the wrong library, and using the wrong color for some element, and was surprised that the LLM didn't know this from the get go. Reading through the prompt, they never mentioned it once, and when asked about it, they thought "it should have been obvious", which, yeah, to someone like you who has worked on this project for 2 years it might be obvious, but to something with zero history and zero context about what you do? How do you expect it to know this? Baffling sometimes.

        • prodigycorp 9 hours ago ago

          Yup. I'd wager that most complaints, even from people who have used LLMs for a long time, can be resolved by "describe your thing in detail". LLMs are such a relief on my wrists that I often get tempted to write short prompts and pray that the LLM divines my thoughts. I always get much better results, much faster, when I just turn on the mic and have Whisper transcribe a couple minutes of my speaking though.

      • menaerus 9 hours ago ago

        I am using Google Antigravity for the same type of work you mention: the many things and ideas I had over the years but couldn't justify the time I needed to invest in them. Pretty non-trivial ideas, and yet with a good problem definition and communication skills I am getting unbelievable results. I am even intentionally being a bit vague in my problem definition sometimes, to avoid introducing bias to the model, and the ride has been quite crazy so far. In 2 days I've implemented several substantial improvements that I had in my head for years.

        The world changed for good and we will need to adapt. The bigger and more important question at this point isn't whether LLMs are good enough - for the ones who want to see, that's settled - but, as you mention in your article, what will happen to the people who end up unemployed. There's a reality check for all of us.

    • nikcub 10 hours ago ago

      I've used both Cursor and Claude Code daily[0] since within a month of their releases - I'm learning something new about how to work with and apply the tools almost every day.

      I don't think it's a coincidence that some of the best developers[1] are using these tools, with some openly advocating for them, because it still requires core skills to get the most out of them.

      I can honestly say that building end-to-end products with claude code has made me a better developer, product designer, tester, code reviewer, systems architect, project manager, sysadmin etc. I've learned more in the past ~year than I ever have in my career.

      [0] abandoned cursor late last year

      [1] see Linus using antigravity, antirez in OP, Jared at bun, Charlie at uv/ruff, mitsuhiko, simonw et al

      • dkdcio 9 hours ago ago

        I started heavy usage in April 2025 (Codex CLI -> some Claude Code and trying other CLIs + a bit of Cursor -> Warp.dev -> Claude Code) and I’m still learning as well (and constantly trying to get more efficient)

        (I had been using GitHub Copilot for 5+ years already, started as an early beta tester, but I don’t really consider that the same)

        I like to say it’s like learning a programming language. it takes time, but you start pattern matching and knowing what works. it took me multiple attempts and a good amount of time to learn Rust, learning effective use of these tools is similar

        I’ve also learned a ton across domains I otherwise wouldn’t have touched

    • oncallthrow 10 hours ago ago

      My take: learning how to do LLM-assisted coding at a basic level gets you 80% of the returns, and takes about 30 minutes. It's a complete no-brainer.

      Learning all of the advanced multi-agent workflows etc. etc... Maybe that gets you an extra 20%, but it costs a lot more time, and is more likely to change over time anyway. So maybe not very good ROI.

    • zahlman 9 hours ago ago

      The idea, I think, is to gain experience with the loop of communicating ideas in natural language rather than code, and then reading the generated code and taking it as feedback.

      It's not that different overall, I suppose, from the loop of thinking of an idea and then implementing it and running tests; but potentially very disorienting for some.

    • CuriouslyC 10 hours ago ago

      AI development is about planning, orchestration and high throughput validation. Those skills won't go away, the quality floor of model output will just rise over time.

    • edg5000 10 hours ago ago

      It took me a few months of working with the agents to get really productive with it. The gains are significant. I write highly detailed specs (equiv multiple A4 pages) in markdown and dictate the agent hierarchy (which agent does what, who reports to who).

      I've learned a lot of new things this year thanks to AI. It's true that the low-level skills will atrophy. The high-level skills will grow though; my learning rate is the same, just at a much higher abstraction level, thus covering more subjects.

      The main concern is the centralisation. The value I can get out of this thing currently well exceeds my income. AI companies are buying up all the chips. I worry we'll get something like the housing market, where AI will eat up about 50% of our income.

      We have to fight this centralisation at all costs!

      • wmwragg 10 hours ago ago

        This is something I think a lot of people don't seem to notice, or worry about: the shift of programming from a local task to one that is controlled by big corporations, essentially turning programming into a subscription model, just like everything else. If you don't pay the subscription you will no longer be able to code, i.e. PaaS (Programming as a Service). Obviously at the moment most programmers can still code without LLMs, but when autocomplete IDEs became mainstream, it didn't take long before a large proportion of programmers couldn't program without an autocomplete IDE. I expect most new programmers coming in won't be able to "program" without a remote LLM.

        • Lio 9 hours ago ago

          That ignores the possibility that local inference gets good enough to run without a subscription on reasonably priced hardware.

          I don't think that's too far away. Anthropic, OpenAI, etc. are pushing the idea that you need a subscription, but if open source tools get good enough they could easily become an expensive irrelevance.

          • wmwragg 9 hours ago ago

            There is that, but the way this usually works is that there is always a better closed service you have to pay for, and we see that with LLMs as well. Plus there is the fact that you currently need a very powerful machine to run these models at anywhere near the speed of the PaaS systems, and I'm not convinced we'll be able to make the Moore's law style jumps required to get that level of performance locally, not to mention the massive energy requirements; you can only go so small, and we are getting pretty close to the limit. Perhaps I'm wrong, but we don't see the jumps in processing power we used to see in the 80s and 90s from clock speed increases - the clock speed of most CPUs has stayed pretty much the same for a long time. As LLMs are essentially probabilistic in nature, this does open up options not available to current deterministic CPU designs, so that might be an avenue which gets exploited to bring this to local development.

          • flyinglizard 9 hours ago ago

            My concern is that inference hardware is becoming more and more specialized and datacenter-only. It won’t be possible any longer to just throw in a beefy GPU (in fact we’re already past that point).

            • wmwragg 5 hours ago ago

              Yep, good point. If they don't make the hardware available for personal use, then we wouldn't be able to buy it even if it could be used in a personal system.

        • smallerfish 9 hours ago ago

          This is the most valid criticism. Theoretically in several years we may be able to run Opus quality coding models locally. If that doesn't happen then yes, it becomes a pay to play profession - which is not great.

      • nebula8804 9 hours ago ago

        The hardware needs to catch up I think. I asked ChatGPT (lol) how much it would cost to build a Deepseek server that runs at a reasonable speed and it quoted ~$400k-800k (8-16 H100s + the rest of the server).

        Guess we are still in the 1970s era of AI computing. We need to hope for a few more step changes or some breakthrough on model size.

        • cyber_kinetist 9 hours ago ago

          The problem is that Moore's law is dead, silicon isn't advancing as fast as what we envisioned in the past, we're experiencing all sorts of quantum tunneling effects in order to cram as much microstructure as possible into silicon, and R&D costs for manufacturing these chips are climbing at a rapid rate. There's a limit to how far we can fight against physics, and unless we discover a totally new paradigm to alleviate these issues (ex. optical computing?) we're going to experience diminishing returns at the end of the sigmoid-like tech advancement cycle.

        • NitpickLawyer 9 hours ago ago

          You can run most open models (excluding kimi-k2) on hardware that costs anywhere from 45 - 85k (tbf, specced before the vram wars of late 2025 so +10k maybe?). 4-8 PRO6000s + all the other bits and pieces gives you a machine that you can host locally and run very capable models, at several quants (glm4.7, minimax2.1, devstral, dsv3, gpt-oss-120b, qwens, etc.), with enough speed and parallel sessions for a small team (of agents or humans).

      • iLoveOncall 10 hours ago ago

        > I write highly detailed specs (equiv multiple A4 pages) in markdown and dictate the agent hierarchy (which agent does what, who reports to who).

        That sounds incredibly inefficient.

        Are you all AI bros calculating productivity gains on how fast the code was output and nothing else?

        • isoprophlex 9 hours ago ago

          Well, if you're programming without AI you need to understand what you're building too, lest you program yourself into a corner. Taking 3-5 minutes to speech-to-text an overview of what exactly you want to build and why, using which general philosophies/tools, seems like it should cost you almost zero extra time and brainpower.

    • bsaul 10 hours ago ago

      An ecosystem is being built around AI: Best prompting practices, MCPs, skills, IDE integration, how to build a feedback loop so the LLM can test its output alone, plugging into the outside world with browser extensions, etc...

      For now i think people can still catch up quickly, but at the end of 2026 it's probably going to be a different story.

      • Avshalom 9 hours ago ago

        Okay, end of 2026 then what? No one ever learns how to use the tools after that? No one gets a job until the pre-2026 generation dies?

        • hackable_sand 3 hours ago ago

          For now i think people can still catch up quickly, but at the end of 2027 it's probably going to be a different story.

      • edg5000 10 hours ago ago

        > probably going to be a different story

        Can you elaborate? Skill in AI use will be a differentiator?

      • rvz 10 hours ago ago

        > Best prompting practices, MCPs, skills, IDE integration, how to build a feedback loop so the LLM can test its output alone, plugging into the outside world with browser extensions, etc...

        Ah yes, an ecosystem that is fundamentally built on probabilistic quicksand, where even with the "best prompting practices" you still get agents violating the basics of security and committing API keys when they were told not to. [0]

        [0] https://xcancel.com/valigo/status/2009764793251664279

        • simonw 8 hours ago ago

          One of the skills needed to effectively use AI for code is to know that telling AI "don't commit secrets" is not a reliable strategy.

          Design your secrets to include a common prefix, then use deterministic scanning tools like git hooks to prevent them from being checked in.

          Or have a git hook that knows which environment variables have secrets in and checks for those.
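
          A minimal sketch of that first approach (assumptions: your secrets share a made-up "acme_sk_" prefix and this runs as a .git/hooks/pre-commit script - adjust both for your own setup):

              #!/usr/bin/env python3
              # pre-commit hook: refuse the commit if any staged file contains the secret prefix
              import subprocess, sys

              SECRET_PREFIX = "acme_sk_"  # illustrative; use whatever prefix your secrets share

              # Files staged for this commit (added / copied / modified)
              staged = subprocess.run(
                  ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
                  capture_output=True, text=True, check=True,
              ).stdout.splitlines()

              for path in staged:
                  # Inspect the staged blob rather than the working-tree copy
                  blob = subprocess.run(["git", "show", f":{path}"],
                                        capture_output=True, text=True, errors="replace").stdout
                  if SECRET_PREFIX in blob:
                      print(f"Refusing to commit {path}: found '{SECRET_PREFIX}'")
                      sys.exit(1)

          Because the check is deterministic it doesn't care whether the commit came from you or from an agent running unattended, which is the whole point.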

          • jeroenhd 8 hours ago ago

            That's such an incredibly basic concept, surely AIs have evolved to the point where you don't need to explicitly state those requirements anywhere?

            • simonw 6 hours ago ago

              They can still make mistakes.

              For example, what if your code (that the LLM hasn't reviewed yet) has a dumb feature where it dumps environment variables to log output, and the LLM runs "./server --log debug-issue-144.log" and commits that log file as part of a larger piece of work you ask it to perform?

              If you don't want a bad thing to happen, adding a deterministic check that prevents the bad thing from happening is a better strategy than prompting models or hoping that they'll get "smarter" in the future.

            • thunky 6 hours ago ago

              Doesn't seem to work for humans all the time either.

              Some of this negativity I think is due to unrealistic expectations of perfection.

              Use the same guardrails you should be using already for human generated code and you should be fine.

        • dkdcio 9 hours ago ago

          I have tons of examples of AI not committing secrets. this is one screenshot from twitter? I don’t think it makes your point

          CPUs are billions of transistors. sometimes one fails and things still work. “probabilistic quicksand” isn’t the dig you think it is to people who know how this stuff works

          • Mawr 8 hours ago ago

            I have tons of examples of drivers not running into objects.

            • dkdcio 8 hours ago ago

              like my other comment, my point is one screenshot from twitter vs one anecdote. neither proves anything. cool snarky response though!

          • rvz 9 hours ago ago

            > I have tons of examples of AI not committing secrets.

            "Trust only me bro".

            It takes 10 seconds to see the many examples of API keys + prompts on GitHub to verify that tweet. The issue with AI isn't limited to that tweet, which demonstrates its probabilistic nature; otherwise why do we need a sandbox to run the agent in the first place?

            Nevermind, we know why: Many [0] such [1] cases [2]

            > CPUs are billions of transistors. sometimes one fails and things still work. “probabilistic quicksand” isn’t the dig you think it is to people who know how this stuff works

            Except you just made a false equivalence. CPUs can be tested / verified transparently, and even if something does go wrong, we know exactly why. Whereas you can't explain why the LLM hallucinated or decided to delete your home folder, because the way it predicts its output is fundamentally stochastic.

            [0] https://old.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cl...

            [1] https://old.reddit.com/r/ClaudeAI/comments/1jfidvb/claude_tr...

            [2] https://www.google.com/search?q=ai+deleted+files+site%3Anews...

            • dkdcio 8 hours ago ago

              you could find tons of API keys on GitHub before these “agentic” tools too. that was my point, one screenshot from twitter vs one anecdote from me. I don’t think either proves the point, but posting a screenshot from twitter like it’s proof of some widespread problem is what I was responding to (N=2, 1 vs 1)

              my point is more “skill issue” than “trust me this never happens”

              my point on CPUs is people who don’t understand LLMs talk like “hallucinations” are a real thing — LLMs are “deciding” to make stuff up rather than just predicting the next token. yes it’s probabilistic, so is practically everything else at scale. yet it works and here we are. can you really explain in detail how everything you use works? I’m guessing I can explain failure modes of agentic systems (and how to avoid them so you don’t look silly on twitter/github) and how neural networks work better than most people can explain the technology they use every day

              • rvz 7 hours ago ago

                > you could find tons of API keys on GitHub before these “agentic” tools too. that was my point, one screenshot from twitter vs one anecdote from me. I don’t think either proves the point, but posting a screenshot from twitter like it’s proof of some widespread problem is what I was responding to (N=2, 1 vs 1)

                That doesn't refute the probabilistic nature of LLMs despite best prompting practices. In fact it emphasises it. More like your 1 anecdotal example vs my 20+ examples on GitHub.

                My point tells you that not only it indeed does happen, but a previous old issue is now made even worse and more widespread, since we now have vibe-coders without security best practices assuming the agent should know better (when it doesn't).

                > my point is more “skill issue” than “trust me this never happens”

                So those that have this "skill issue" are also those who are prompting the AI differently then? Either way, this just inadvertently proves my whole point.

                > yes it’s probabilistic, so is practically everything else at scale. yet it works and here we are.

                The additional problem is: can you explain why it went wrong as you scale the technology? CPU circuit designs go through formal verification, and if a fault happens, we know exactly why; they are deterministic by design, which makes them reliable.

                LLMs are not and don't have this. Which is why OpenAI had to describe ChatGPT's misaligned behaviour as "sycophancy", but could not explain why it happened other than tweaking the hyper-parameters which got them that result.

                LLMs being fundamentally probabilistic, and hence harder to explain, is the reason you get screenshots of vibe-coders who somehow prompted it wrong and had the agent commit their keys.

                Maybe that would never have happened to you, but it won't be the last time we see more of this happening on GitHub.

                • dkdcio 7 hours ago ago

                  I was pointing out one screenshot from twitter isn’t proof of anything just to be clear; it’s a silly way to make a point.

                  yes AI makes leaking keys on GH more prevalent, but so what? it’s the same problem as before with roughly the same solution

                  I’m saying neural networks being probabilistic doesn’t matter — everything is probabilistic. you can still practically use the tools to great effect, just like we use everything else that has underlying probabilities

                  OpenAI did not have to describe it as sycophancy, they chose to, and I’d contend it was a stupid choice

                  and yes, you can explain what went wrong just like you can with CPUs. we don’t (usually) talk about quantum-level physics when discussing CPUs; talking about neurons in LLMs is the wrong level of abstraction

                  • rvz 6 hours ago ago

                    > I was pointing out one screenshot from twitter isn’t proof of anything just to be clear; it’s a silly way to make a point.

                    Versus your anecdote being proof of what? Skill issue for vibe coders? Someone else prompting it wrong?

                    You do realize you are proving my entire point?

                    > yes AI makes leaking keys on GH more prevalent, but so what? it’s the same problem as before with roughly the same solution

                    Again, it exacerbates my point such that it makes the existing issue even worse. Additionally, that wasn't even the only point I made on the subject.

                    > I’m saying neural networks being probabilistic doesn’t matter — everything is probabilistic.

                    When you scale neural networks to become say, production-grade LLMs, then it does matter. Just like it does matter for CPUs to be reliable when you scale them in production-grade data centers.

                    But your earlier (fallacious) comparison ignores the reliability differences between CPUs and LLMs, and determinism is a hard requirement for that reliability; the latter, LLMs, are not deterministic.

                    > OpenAI did not have to describe it as sycophancy, they chose to, and I’d contend it was a stupid choice

                    For the press, they had to, but no-one knows the real reason, because it is unexplainable; going back to my other point on reliability.

                    > and yes, you can explain what went wrong just like you can with CPUs. we don’t (usually) talk about quantum-level physics when discussing CPUs; talking about neurons in LLMs is the wrong level of abstraction

                    It is indeed the wrong level of abstraction for LLMs, because not even the researchers can practically explain why a given neuron (for every neuron in the network) takes different values on every fine-tune or training run. Even if the model is "good enough", it can still go wrong at inference time for reasons that remain unexplainable beyond "it overfitted".

                    CPUs on the other hand, have formal verification methods which verify that the CPU conforms to its specification and we can trust that it works as intended and can diagnose the problem accurately without going into atomic-level details.

                    • dkdcio 5 hours ago ago

                      …what is your point exactly (and concisely)? I’m saying it doesn’t matter it’s probabilistic, everything is, the tech is still useful

                      • rvz 5 hours ago ago

                        No one is arguing that it isn't useful. The problem is this:

                        > I’m saying it doesn’t matter it’s probabilistic, everything is,

                        Maybe it doesn't matter for you, but it generally does matter.

                        The risk level of a technology failing is far higher if it is more random and unexplainable than if it is expected, verified and explainable. The former eliminates many serious use-cases.

                        This is why your CPU, or GPU works.

                        LLMs are neither deterministic, no formal verification exists and are fundamentally black-boxes.

                        That is why many vibe-coders reported many "AI deleted their entire home folder" issues even when they told it to move a file / folder to another location.

                        If it did not matter, why do you need sandboxes for the agents in the first place?

                        • dkdcio 4 hours ago ago

                          I think we agree then? the tech is useful; you need systems around them (like sandboxes and commit hooks that prevent leaking secrets) to use them effectively (along with learned skills)

                          very little software (or hardware) used in production is formally verified. tons of non-deterministic software (including neural networks) are operating in production just fine, including in heavily regulated sectors (banking, health care)

    • Ekaros 10 hours ago ago

      By their promises it should get so good that basically you do not need to learn it. So it is reasonable to wait until that point.

      • simonw 9 hours ago ago

        If you listen to promises like that you're going to get burned.

        One of the key skills needed in working with LLMs is learning to ignore the hype and marketing and figure out what these things are actually capable of, as opposed to LinkedIn bluster and claims from CEOs whose net worth is tied to investor sentiment in their companies.

        If someone spends more time talking about "AGI" than about what they're actually building, filter that person out.

        • pydry 7 hours ago ago

          >One of the key skills needed in working with LLMs is learning to ignore the hype and marketing and figure out what these things are actually capable of

          This is precisely what led me to realize that while they have some use for code review and analyzing docs, for coding purposes they are fairly useless.

          The hypesters' responses to this assertion fall exclusively into 5 categories. I've never heard a 6th.

      • dkdcio 10 hours ago ago

        this is a straw man, nobody serious is promising that. it is a skill like any other that requires learning

        • fabianholzer 7 hours ago ago

          > nobody serious is promising that

          There is a staggering number of unserious folks in the ears of people with corporate purchasing power.

        • robot-wrangler 8 hours ago ago

          I agree about skills actually, but it's also obvious that parent is making a very real point that you cannot just dismiss. For several years now and far short of wild AGI promises, the answer to literally every issue with casual or production AI has been something like "but the rate of model improvement.." or "but the tools and ecosystem will evolve.."

          If you believe that uncritically about everything else, then you have to answer why agentic workflows or MCP or whatever is the one thing that it can't evolve to do for us. There's a logical contradiction here where you really can't have it both ways.

          • dkdcio 8 hours ago ago

            I’m not understanding your point… (and would be genuinely curious to)? the models and systems around them have evolved and gotten better (over the past few years for LLMs and decades for “AI” more broadly)

            oh I think I do get your point now after a few rereads (correct if wrong but you’re saying it should keep getting better until there’s nothing for us to do). “AI”, and computer systems more broadly, are not and cannot be viable systems. they don’t have agency (ironically) to affect change in their environment (without humans in the loop). computer systems don’t exist/survive without people. all the human concerns around what/why remain, AI is just another tool in a long line of computer systems that make our lives easier/more efficient

            • robot-wrangler 6 hours ago ago

              AI Engineer to Software Engineer: Humans writing code is a waste of time, you can only hope to add value by designing agentic workflows

              Prompt Engineer to AI Engineer: Designing agentic workflows is a waste of time, just pre/postfix whatever input you'd normally give to the agentic system with the request to "build or simulate an appropriate agentic workflow for this problem"

        • Ekaros 9 hours ago ago

          OpenAI is going to get to AGI. And AGI should in minutes build a system that takes vague input and produces fully functioning product out of it. Isn't singularity being promised by them?

          • dkdcio 9 hours ago ago

            you’re just repeating the straw man. if you can’t think critically and just regurgitate every dumb thing you hear idk what to tell you. nobody serious thinks a “singularity” is coming. there’s not even a proper definition of “AGI”

            your argument amounts to “some people said stupid shit one time and I took it seriously”

    • nicce 4 hours ago ago

      > What I don't understand about this whole "get on board the AI train or get left behind" narrative, what advantage does an early adopter have for AI tools?

      Replace that with anything and you will notice that people who are building startups in that area will want to push a narrative like that, since it usually increases the value of their companies greatly. When the narrative gets big enough, big companies must follow - or they look like they're "lagging behind" - whether the current thing brings value or not. It is a fire that keeps feeding itself. In the end, when it gets big enough, we call it a bubble. A bubble that may pop. Or not.

      Whether the end user gets actual value or not is just a side effect. But everyone wants to believe that it brings value - otherwise they were foolish to jump on the train.

    • rvz 10 hours ago ago

      > What I don't understand about this whole "get on board the AI train or get left behind" narrative, what advantage does an early adopter have for AI tools?

      The ones pushing this narrative usually fall into one of the following groups:

      * Invested in AI companies (which they will never disclose until those companies IPO / get acquired)

      * Employees at AI companies with stock options, who are effectively paid boosters of the AGI nonsense

      * Mid-life crisis / paranoia that their identity as a programmer is being eroded and have to pivot to AI.

      It is no different to the crypto web3 bubble of 2021. This time, it is even more obvious and now the grifters from crypto / tech are already "pivoting to ai". [0]

      [0] https://pivot-to-ai.com/

      • KaiserPro 10 hours ago ago

        I'm not an AI booster, but I can't argue with Opus doing lots of legwork

        > It is no different to the crypto web3 bubble of 2021

        web3 didn't produce anything useful, just noise. I couldn't take a web3 stack and make an arbitrary app. With the PISS machine I can.

        Do I worry about the future, fuck yeah I do. I think I'm up shit creek. I am lucky that I am good at describing in plain English what I want.

        • jeroenhd 8 hours ago ago

          Web3 generated plenty of use if you're in on it. Pension funds, private investors, public companies, governments, gambling addicts, teenagers with more pocket money than sense, they've all moved billions into the pockets of Web3 grifters. You follow a tutorial on YouTube, spam the right places, maybe buy a few illegal ads, do a quick rugpull, and if you did your opsec right, you're now a millionaire. The major money sources have started to dry up (although the current American regime has been paid off by crypto companies so a Web3 revival might just happen).

          With AI companies still selling services far below cost, it's only a matter of time before the money runs out and the true value of these tools will be tested.

          • KaiserPro an hour ago ago

            > Pension funds, private investors, public companies

            As someone who was at a large company that was dabbling in NFTs, there was no value apart from pure gambling. At the time that we were doing it, it was also too late, so it was just a jinormous

            My issue with GenAI is the rampant copyright violation, and the effect it will have on the economy. It's also replacing all of the fun bits of the world that I inhabit.

            At least with web3 it was mostly contained within the BO-infested basements that crypto bros inhabit. The AI bollocks has infected half the world.

      • menaerus 7 hours ago ago

        Comparing the crypto and web3 scam with AI advancements is disingenuous at best. I am a long-time C and C++ systems programming engineer oriented toward (sometimes novel) algorithmic design and high-performance large-scale systems operating at the scale of the internet. I specialize in low-level details that generally a very small number of engineers around the globe are familiar with. We can talk at the level of CPU microarchitectural details or memory bank conflicts or OS internals, and all the way up to the line of code we are writing. AI is the most transformative technology ever designed. I'd go so far as to say that not even the industrial revolution will be comparable to it. I have no stakes in AI.

  • cmiles8 10 hours ago ago

    The “anti-AI hype” phrase oversimplifies what’s playing out at the moment. On the tech side, while things are still a bit rough around the edges, the tech is very useful and isn’t going away. I honestly don’t see much disagreement there.

    The concern mostly comes from the business side… that for all the usefulness on the tech there is no clearly viable path that financially supports everything that’s going on. It’s a nice set of useful features but without products with sufficient revenue flowing in to pay for it all.

    That paints a picture of the tech sticking around but a general implosion of the startups and business models betting on making all this work.

    The latter isn’t really “anti-AI hype” but more folks just calling out the reality that there’s not a lot of evidence and data to support the amount of money invested and committed. And if you’ve been around the tech and business scene a while you’ve seen that movie before and know what comes next.

    In 5 years time I expect to be using AI more than I do now. I also expect most of the AI companies and startups won’t exist anymore.

    • nielsole 10 hours ago ago

      In the late 2000s i remember that "nobody is willing to pay for things on the Internet" was a common trope. I think it'll culturally take a while before businesses and people understand what they are willing to pay for. For example if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

      • embedding-shape 9 hours ago ago

        > For example if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

        One is the time of a human (irreplaceable) and the other is a tool for some human to use, seems proportional to me.

        • thunky 6 hours ago ago

          > human (irreplaceable)

          Everyone is replaceable. Software devs aren't special.

          • embedding-shape 6 hours ago ago

            Yes, with another human. I meant more that you cannot replace a human with a non-human, at least not yet and if you care about quality.

      • qcnguy 3 hours ago ago

        Late 1990s maybe. Not late 2000s.

    • antirez 10 hours ago ago

      The blog post title is a joke about the AI hype.

      • iLoveOncall 10 hours ago ago

        Well it completely misses the mark, because your whole article IS hyping up AI, and probably more than anything I've seen before honestly.

        If it's all meant to be ironical, it's a huge failure and people will use it to support their AI hype.

        • antirez 9 hours ago ago

          I was not clear enough. I wanted to write a PRO-AI blog post. The people against AI always say negative things, using as their central argument that "AI is hyped and overhyped". So I, for fun, treat the anti-AI movement itself as a form of hype. It's a joke, but not in the sense that it doesn't mean what it says.

        • danielbln 9 hours ago ago

          There are too many people who see the absurd AI hype (especially absurd in terms of investment) and construct from it a counter-argument that AI is useless, overblown and just generally not good. And that's a fallacy. Two things can be true at the same time. Coding agents are a step change and immensely useful, and the valuations and breathless AGI evangelizing are a smoke screen and pure hype.

          Don't let hype deter you from getting your own hands dirty and trying shit.

    • senordevnyc 2 hours ago ago

      > On the tech side, while things are still a bit rough around the edges, the tech is very useful and isn’t going away. I honestly don’t see much disagreement there.

      What? HN is absolutely packed with people complaining that LLMs are nothing more than net-useless creators of slop.

      Granted, fewer than there were six months ago, which should tell people something...

    • dist-epoch 10 hours ago ago

      People said the exact same thing about (numbers from memory, might be off):

      - when Google paid $1 bil for YouTube

      - when Facebook paid $1 bil for Instagram

      - when Facebook paid $1 bil for WhatsApp

      The same thing - that these 3 companies made no money, had no path to making money, and that the price paid was crazy and decoupled from any economics.

      Yet now, in hindsight, they look like brilliant business decisions.

      • qcnguy 3 hours ago ago

        We don't really know how much money Google sunk into YouTube before it became (presumably) profitable. It might have actually not been strongly coupled to economics.

        • Izkata 41 minutes ago ago

          Also they attempted their own competitor before buying YouTube, called Google Video. It never got very popular.

      • ThrowawayR2 3 hours ago ago

        You listed only acquisitions that paid off and not the many, many more that didn't though.

      • cmiles8 9 hours ago ago

        There’s no comparison to what’s going on now vs those examples. Not even remotely similar.

        • dist-epoch 9 hours ago ago

          > that for all the usefulness on the tech there is no clearly viable path that financially supports everything that’s going on

          you lack imagination, human workers are paid over $10 trillion globally.

  • edg5000 9 hours ago ago

    > state of the art LLMs are able to complete large subtasks or medium size projects alone, almost unassisted, given a good set of hints about what the end result should be

    No. I agree with the author, but it's hyperbolic of him to phrase it like this. If you have solid domain knowledge, you'll steer the model with detailed specs. It will carry those out competently and multiply your productivity. However, the quality of the output still reflects your state of knowledge. It just provides leverage. Given the best tractors, a good farmer will have much better yields than a shit one. Without good direction, even Opus 4.5 tends to create massive code repetition. Easy to avoid if you know what you are doing, albeit in a refactor pass.

    • biophysboy 4 hours ago ago

      I feel like a lot of the disagreement over this "large project" capability is that "large project" can mean anything. It can mean something that has a trillion github repos to work with, or it can mean something that is basically uncharted territory.

    • falloutx 9 hours ago ago

      If this only works for people with like 10+ years of domain experience, doesn't that make this an anti-AI article? The whole vibe coding pitch sells on the promise that it works, and works for every Tom and their mom.

      • gherkinnn 5 hours ago ago

        This conflates two things.

        One is LLMs writing code. Not everything and not for everyone. But they are useful for most of the code being written. It is useful.

        What it does not do (yet, if ever) is bridge the gap from "idea" to a working solution. This is precisely where all the low-code ideas of the past decades fell apart. Translating an idea into formal rules is very, very hard.

        Think of all of the "just add a button there"-type comments we've all suffered.

    • artdigital 9 hours ago ago

      Yes that’s how I see it too. It’s a productivity multiplier, but depends on what you put in.

      Sure, Opus can work fully on its own by just telling it “add a button that does X”, but do that 20 times and the code turns into mush. Steer the model with detailed tech specs, on the other hand, and the output becomes magical.

  • xg15 an hour ago ago

    > However, this technology is far too important to be in the hands of a few companies.

    I worry less about the model access and more about the hardware required to run those models (i.e. do inference).

    If a) the only way to compete in software development in the future is to outsource the entire implementation process to one of a few frontier models (Chinese, US or otherwise)

    and b) only a few companies worldwide have the GPU power to run inference with those models in a reasonable time

    then don't we already have a massive amount of centralization?

    That is also something I keep wondering about with agentic coding - being able to realize, in a couple of afternoons, the epic fantasy hobby project you've been thinking about on and off for years is absolutely amazing. But if you do the same with work projects, how do you solve the data protection issues? Will we all now just hand our entire production codebases to OpenAI or Anthropic etc and hope their pinky promises hold?

    Or will there be a race for medium-sized companies to have their own GPU datacenters, not for production but solely for internal development and code generation?

  • NitpickLawyer 10 hours ago ago

    > Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs.

    This is the advice I've been giving my friends and coworkers as well for a while now. Forget the hype, just take time to test them from time to time. See where it's at. And "prepare" for what's to come, as best you can.

    Another thing to consider. If you casually look into it by just reading about it, be aware that almost everything you read in "mainstream" places has been wrong in 2025. The people covering this, writing about this, producing content on this have different goals in this era. They need hits, likes, shares and reach. They don't get that with accurate reporting. And, sadly, negativity sells. It is what it is.

    The only way to get an accurate picture is to try them yourself. The earlier you do that, the better off you'll be. And a note on signals: right now, a "positive" signal is more valuable to you than many "negative" ones. Read those and try to understand the what, if not the how. "I did this with cc" is much more valuable today than "x still doesn't do y reliably".

  • bluGill 10 hours ago ago

    I'm trying not to fall for it, but when I try to get AI to write code it fails more often than not - at least for me. Some people claim it does everything, but I keep finding major problems. Even when it writes something that works, often I can't explain to it that in 2026 we should be using smart pointers (C++) or whatever the modern thing is.

    • criddell 9 hours ago ago

      Same here. I’ve had limited success getting AIs to do very simple stuff. Every one I’ve tried invents APIs that don’t exist and eventually gets stuck in a circle where it tells me to try A. When that doesn’t work, try B. No luck? Try C. Hmmm my apologies, try A. Lather, rinse, repeat.

      • simonw 9 hours ago ago

        Are you using a coding agent running in auto-approve mode?

        If so then none of this matters, because it will run through that lather-rinse-repeat loop itself in less than a minute.

        • criddell 4 hours ago ago

          No, I haven’t tried that yet. I don’t really want to turn on auto mode when it’s iterating on my credit card and it looks like it’s in an infinite loop… Is that a silly thing to be worried about?

          I work mostly in C++ (MFC applications on Windows) and assembly language (analyzing crash reports).

          For the C++ work, the AIs do all kinds of unsafe things like casting away constness or doing hacks to expose private class internals. What they give me is sometimes enough to get unstuck though which is nice.

          For crash reports (a disassembly around the crash site and a stack trace) they are pretty useless and that’s coming from someone who considers himself to be a total novice at assembly. (Looking to up my x64 / WinDbg game and any pointers to resources would be appreciated!)

          I do prototyping in Python and Claude is excellent at that.

  • phtrivier 10 hours ago ago

    How would we measure the effects of AI coding tools taking over manual coding? Would we see an increase in the number of GitHub projects? In the number of stars (given the AI is so good)? In the number of startup IPOs (surely if all your engineers are 1000x engineers thanks to Claude Code, we'll have plenty of Googles and Amazons to invest in)? In the price of software (if I can just vibe code everything, then a $10 fully compatible replacement for MS Windows is just a few months away, right?)? In the number of apps published in the stores?

    • CuriouslyC 10 hours ago ago

      Plot twist: the bottleneck when you have a development force multiplier is __MARKETING__. If you develop at 10X the rate, you still have to grind/growth marketing. Unmarketed products might as well not exist, even if they're fantastic.

      Github stars? That's 100% marketing. Shit that clears a low quality bar can rack up stars like crazy just by being well marketed.

      Number of startups? That's 100% marketing. Investors put money into products that have traction, or founders that look impressive, and both of those are mostly marketing.

      People actually are vibe coding stuff rather than using SaaS though, that one's for real. Your example is hyperbolic, but the Tailwind scenario is just one example of AI putting pressure on products.

      • falloutx 9 hours ago ago

        You can't vibe code users or traction. "If you make it, they will come" is not a strategy for 2026. In fact, the amount of money needed for marketing will wipe out any savings from not having a software dev.

        • CuriouslyC 9 hours ago ago

          "If you make it, they will come" has never been a valid strategy. And marketing is fucking miserable now because of the proliferation of low quality software people are trying to turn into SaaS.

          If you don't have a halo already, you need to be blessed or you're just going to suffer. Getting a good mention by someone like Theo or SimonW >> 1000 well written articles.

      • Ekaros 9 hours ago ago

        Someone should really set AI to these tasks. Let the agents run wild. Let them astroturf every possible platform in existence. Especially ones like this here HN. Insert marketing messages into every post and every thread.

        There is no such thing as bad publicity. The more you spam, the more you will be noticed. Human attention is limited, so grab as much as you can. And this also helps your product name get into the training data and thus later into LLM outputs.

        Even more ideas. When you find an email address. Spam that too. Get your message out multiple times to each address.

        • CuriouslyC 9 hours ago ago

          HN has been astroturfed for a while. Ever notice low quality linkedin blogspam that hits the front page before people would even have had time to finish reading it?

          It's hard to disambiguate this from people who have a "fanbase." People will upvote stuff from people like simonw sight unseen without reading. I'd like to do a study on HN where you hide the author, to see how upvote patterns change, in order to demonstrate the "halo" benefit.

      • FergusArgyll 9 hours ago ago

        I get annoyed that no one mentions software for just the user. Part of the joy of programming is making stuff you want not just to sell or to get famous. I vibe coded so many chrome extensions I lost count. Most apply just to one site, they save me one click or something. It's fun!

        • hxugufjfjf 3 hours ago ago

          Wouldn't it be easier and/or faster to create a userscript? I've "vibe coded" tens of them myself, but never really saw the use case for making a full extension out of any of them. Genuinely curious what you made.

    • falloutx 9 hours ago ago

      I was looking at my homebrewed Product Hunt data and this week we had 5000 projects submitted, in 5 days. That's more than an entire month in 2018.

    • yobbo 5 hours ago ago

      > How would we measure the effects of AI coding tool taking over manual coding ?

      Falling salaries?

      • zeroonetwothree 4 hours ago ago

        All the other tools that made programming more efficient resulted in rising salaries. I imagine salaries would only fall if AI can 100% replace a human, which currently it cannot. It remains to be seen what happens in the future of course.

        Remember that an average software engineer only spends around 25% of their time coding.

    • robot-wrangler 7 hours ago ago

      > How would we measure the effects of AI coding tool taking over manual coding ?

      Instead of asking "where are the AI-generated projects" we could ask about the easier problem of "where are the AI-generated ports". Why is it still hard to take an existing fully concrete specification, and an existing test suite, and dump out a working feature-complete port of huge, old, and popular projects? Lots of stuff like this will even be in the training set, so the fact that this isn't easy yet must mean something.

      According to Claude, WordPress is still 43% of all the websites on the internet, and PHP has been despised by many people for many years and many reasons. Why no Python or Ruby port? Harder but similar: throw in Drupal, MediaWiki, and wonder when we can automatically port the Linux kernel to Rust, etc.

      • simonw 7 hours ago ago

        > Why is it still hard to take an existing fully concrete specification, and an existing test suite, and dump out a working feature-complete port of huge, old, and popular projects? Lots of stuff like this will even be in the training

        We have a smaller version of that ability already:

        - https://simonwillison.net/2025/Dec/15/porting-justhtml/

        See also https://www.dbreunig.com/2026/01/08/a-software-library-with-...

        I need to write these up properly, but I pulled a similar trick with an existing JavaScript test suite for https://github.com/simonw/micro-javascript and the official WebAssembly test suite for https://github.com/simonw/pwasm

        • robot-wrangler 7 hours ago ago

          So extrapolating from here and assuming applications are as easy as libraries, operating systems are as easy as applications.. at this rate with a few people in a weekend you can convert anything to anything else, and the differences between different programming languages are very nearly effectively erased. Nice!

          And yet it doesn't feel true yet, otherwise we'd see it. Why do you think that is?

          • simonw 7 hours ago ago

            Because it's not true yet. You can't convert anything to anything else, but you CAN get good results for problems that can be reduced to a robust conformance suite.

            (This capability is also brand new: prior to Claude Opus 4.5 in November I wasn't getting results from coding agents that convinced me they could do this.)

            It turns out there are some pretty big problems that this works for, like HTML5 parsers and WebAssembly runtimes and reduced-scope JavaScript language interpreters. You have to be selective though. This won't work for Linux.

            I thought it wouldn't work for web browsers either - one of my 2026 predictions was "by 2029 someone will build a new web browser using mostly LLM-code"[1] - but then I saw this thread on Reddit https://www.reddit.com/r/Anthropic/comments/1q4xfm0/over_chr... "Over christmas break I wrote a fully functional browser with Claude Code in Rust" and took a look at the code and it's surprisingly deep: https://github.com/hiwavebrowser/hiwave

            [1] https://simonwillison.net/2026/Jan/8/llm-predictions-for-202...

            • robot-wrangler 4 hours ago ago

              > you CAN get good results for problems that can be reduced to a robust conformance suite.

              If that's what is shown then why doesn't it work on anything that has a sufficiently large test-suite, presumably scaling linearly in time with size? Why should we be selective, and based on what?

              • simonw 3 hours ago ago

                It probably does. This only became possible over the last six weeks, and most people haven't yet figured out the pattern.

  • wasmainiac 9 hours ago ago

    These personal blogs are starting to feel like LinkedIn Lunatic posts, kinda similar to the optimised floor-sweeping blog: “I am excited to provide shareholder value, at minimum wage”

    • simonw 9 hours ago ago

      What does it tell you that programmers with the credibility of antirez - and who do not have an AI product to sell you - are writing things like this even when they know a lot of people aren't going to like reading them?

      • kibwen 4 hours ago ago

        What it tells me is that humans are fallible, and that being a competent programmer has no correlation with having strong mental defenses against the brainrot that typifies the modern terminally-online internet user.

        I leverage LLMs where it makes sense for me to do so, but let's dispense with this FOMO silliness. People who choose not to aren't missing out on anything, any more than people who choose to use stock Vim rather than VSCode are missing out on anything.

      • atmavatar 4 hours ago ago

        Just because he doesn't have an AI product to sell doesn't mean he doesn't have a bias. For all we know, he's heavily invested in AI companies.

        We have to abandon the appeal to authority and take the argument on its merits, which honestly, we should be doing regardless.

      • falloutx 9 hours ago ago

        People higher up the ladder aren't selling anything, but they also don't have to worry about losing their jobs. We are worried that execs are going to see the advances and quickly clear the benches; it might not be true, but every programmer believing they have become a 10x programmer pushes us further into that reality.

      • wasmainiac 9 hours ago ago

        Nothing at all, it just sounds like a desperate post on LinkedIn riding the slight glimmer of hope it will help them land their next position.

      • fabianholzer 7 hours ago ago

        That is an argument from authority. There is a large enough segment of folks who like to have their beliefs confirmed, in either direction. It doesn't make the argument itself correct or incorrect. Time will tell, though.

      • ThrowawayR2 3 hours ago ago

        Being famous doesn't mean that they're right about everything, e.g. Einstein and "God does not play dice with the universe".

        That LLM advocates are resorting to the appeal-to-authority fallacy isn't a good look for them either.

  • chrz 10 hours ago ago

    > How do I feel, about all the code I wrote that was ingested by LLMs? I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge. LLMs are going to help us to write better software, faster, and will allow small teams to have a chance to compete with bigger companies.

    You might feel great, that's fine, but I don't. And software quality is going down; I wouldn't agree that LLMs will help us write better software.

  • ironman1478 4 hours ago ago

    I'm not sure what to make of these technologies. I read about people doing all these things with them and it sounds impressive. Then when I use them, it feels like the tool produces junior-level code unless I babysit it; when I do, it really can produce what I want.

    If I have to do all this babysitting, is it really saving me anything other than typing the code? It hasn't felt like it yet and if anything it's scary because I need to always read the code to make sure it's valid, and reading code is harder than writing it.

    • casid 28 minutes ago ago

      I'm always puzzled by these claims. I usually know exactly what I want my code to look like. Writing a prompt instead and waiting for the result to return takes me right out of the flow. Sure, I can try to prompt and ask for larger chunks, but then I have to review and understand the generated output first. If this makes people 10x faster, they must have worked really slowly before.

  • kiriakosv 9 hours ago ago

    AI tools, in their current form or another, will definitely change software engineering; I personally think for the better.

    However I can’t help but notice some things that look weird/amusing:

    - The exact time at which many programmers were enlightened about AI's capabilities, and the frequency of their posts.

    - The uniform language they use in these posts. Grandiose adjectives, standard phrases like ‘it seems to me’

    - And more importantly, the sense of urgency and FOMO they emit. This is particularly weird for two reasons. First, if the past has shown anything about technology, it's that open source always catches up, but that has not happened here yet. Second, if the premise is that we're just at the beginning, all these ceremonial workflows will soon be obsolete.

    Do not get me wrong: as of today these are all valid ways to work with AI, and in many domains they increase productivity. But I really don’t get the sense of urgency.

  • bob1029 10 hours ago ago

    > Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

    I've been taking a proper whack at the tree every 6 months or so. This time it seems like it might actually fall over. Every prior attempt I could barely justify spending $10-20 in API credits before it was obvious I was wasting my time. I spent $80 on tokens last night and I'm still not convinced it won't work.

    Whether or not AI is morally acceptable is a debate I wish I had the luxury of engaging in. I don't think rejecting it would allow me to serve any good other than in my own mind. It's really easy to have certain views when you can afford to. Most of us don't have the privilege of rejecting the potential that this technology affords. We can complain about it but it won't change what our employers decide to do.

    Walk the game theory for 5 minutes. This is a game of musical chairs. We really wish it weren't. But it is. And we need to consider the implications of that. It might be better to join the "bad guys" if you actually want to help those around you. Perhaps even become the worst bad guy and beat the rest of them to a functional Death Star. Being unemployed is not a great position to be in if you wish to assist your allies. Big picture, you could fight AI downstream by capitalizing on it near term. No one is keeping score. You might be in your own head, but you are allowed to change that whenever you want.

    • falloutx 9 hours ago ago

      Wouldn't a lot of us become unemployed anyway if there are 75% fewer jobs? I don't see how I can use AI better than other people. People who keep their jobs are also not in for a fun time when they become responsible for 4x the surface. And if you are not in the top 7 companies, your company might not fire you but could go bankrupt in a couple of years because all the investment is hogged by the top 7. This is more of a lose-lose situation.

    • akomtu 5 hours ago ago

      > Big picture, you could fight AI downstream by capitalizing on it near term.

      Trying to beat a demon long term by making a contract with it short term?

  • golly_ned 4 hours ago ago

    As long as I'm not reviewing PRs with thousands of net-new lines that weren't even read by the PR submitter, I'm fine with anything. The software design I've seen from peers using AI coding agents has been dreadful.

    I think for some who are excited about AI programming, they're happy they can build a lot more things. I think for others, they're excited they can build the same amount of things, but with a lot less thinking. The agent and their code reviewers can do the thinking for them.

  • keyle 8 hours ago ago

    > The fun is still there, untouched.

    Well, that's a way to put it. But not everyone enjoys the art only for the results.

    I personally love learning, and by letting AI drive forward and me following, I don't learn. To learn is to be human.

    So saying the fun is untouched is one-sided. Not everyone is in it for the same reasons.

  • agoodusername63 5 hours ago ago

    I never stop being amused that LLMs have made HN realize that many programmers are programmers for paychecks, not for passion.

  • eeixlk 9 hours ago ago

    If you don't call it AI and see it as a natural-language search engine result merger, it's a bit easier to understand. Like a search engine, it's clunky, so you have to know how to use it to get any useful results. Sometimes it appears magical or clever, but it's just analyzing billions of text patterns. You can use this search merger to generate text in various forms quickly, and request new generated text. But it doesn't have taste, comprehension, problem solving, vision, or wisdom. However, it can steal your data and your work and include it in its search engine.

  • kace91 10 hours ago ago

    >Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.

    I wonder if I’m the odd one out or if this is a common sentiment: I don’t give a shit about building, frankly.

    I like programming as a puzzle and the ability to understand a complex system. “Look at all the things I created in a weekend” sounds to me like “look at all the weight I moved by bringing a forklift to the gym!”. Even ignoring the part that there is barely a “you” in this success, there is not really any interest at all for me in the output itself.

    This point is completely orthogonal to the fact that we still need to get paid to live, and in that regard I'll do what pays the bills, but I’m surprised by the number of programmers who are completely happy with doing away with the programming part.

    • simonw 9 hours ago ago

      Interestingly, I read "I like programming as a puzzle and the ability to understand a complex system." and thought that you were about to argue in favor of AI-assisted programming!

      I enjoy those things about programming too, which is why I'm having so much fun using LLMs. They introduce new layers of complex system understanding and problem solving (at that AI meta-layer), and let me dig into and solve harder and more time-consuming problems than I was able to without them.

      • kace91 8 hours ago ago

        >They introduce new layers of complex system understanding and problem solving (at that AI meta-layer), and let me dig into and solve harder and more time-consuming problems than I was able to without them.

        This is not my experience at all. My experience is that the moment I stop using them as google or search on steroids and let them generate code, I start losing the grip of what is being built.

        As in, when it’s time for a PR, I never feel 100% confident that I’m requesting a review on something solid. I can listen to that voice and sort of review it myself before going public, but that usually takes as much time as writing it myself and is way less fun; or I can just submit and be dishonest, since then I’m offloading that effort onto a teammate.

        In other words, I feel that the productivity gain only comes if you’re willing to remove yourself from the picture and let others deal with any consequence. I’m not.

        • simonw 8 hours ago ago

          Clearly you and I are having different experiences here.

          Maybe a factor here is that I've invested a huge amount of effort over the last ~10 years in getting better at reading code?

          I used to hate reading code. Then I found myself spending more time in corporate life reviewing code than writing it myself... and then I realized the huge unlock I could get from using GitHub search to find examples of the things I wanted to do, if only I could overcome my aversion to reading the resulting search results!

          When LLMs came along they fit my style of working much better than they would have earlier in my career.

          • kace91 6 hours ago ago

            I mean, I wouldn’t say that’s a personal limitation. I read and review code on the daily and have done so for years.

            The point is exactly that, that ai feels like reviewing other people’s code, only worse because bad ai written code mimics good code in a way that bad human code doesn’t, and because you don’t get the human factor of mentoring someone when you see they lack a skill.

            If I wanted to do that for a living it’s always been an option, being the “architect” overseeing a group of outsourced devs for example. But I stay an individual contributor to do quite different work.

            • simonw 6 hours ago ago

              > The point is exactly that, that ai feels like reviewing other people’s code, only worse because bad ai written code mimics good code in a way that bad human code doesn’t, and because you don’t get the human factor of mentoring someone when you see they lack a skill.

              Yeah, that's a good way to put it.

              I've certainly felt the "mimics good code" thing in the past. It's been less of a problem for me recently, maybe because I've started forcing Claude Code into a red/green TDD cycle for almost everything which makes it much less likely to write code that it hasn't at least executed via the tests.
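
              To make that concrete, the loop I ask for looks roughly like this (a toy sketch with a hypothetical slugify function, not code from one of my actual projects; run with pytest):

                # Step 1: the agent writes and runs this test first, and has to show it
                # failing (red) before any implementation exists.
                def test_slugify_lowercases_and_joins_words():
                    assert slugify("Hello  World") == "hello-world"

                # Step 2: only then is it allowed to write the implementation, re-running
                # the suite until it passes (green).
                def slugify(text: str) -> str:
                    return "-".join(text.lower().split())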

              The mentoring thing is really interesting - it's clearly the biggest difference between working with a coding agent and coaching a human collaborator.

              I've managed to get a weird simulacrum of that by telling the coding agents to take notes as they work - I even tried "add to a til.md document of things you learned" on a recent project - and then condensing those lessons into an AGENTS.md later on.

    • falloutx 10 hours ago ago

      Learning, solving puzzles, and understanding something were bigger desires for me than building another to-do list. In fact, most of my building effort has been used by corporations to make software worse for users.

  • robot-wrangler 10 hours ago ago

    Let's maybe avoid all the hype, whether it is for or against, and just have thoughtful and measured stances on things? Fairly high points for that on this piece, despite the title. It has the obligatory remark that manually writing code is pointless now but also the obligatory caveat that it depends on the kind of code you're writing.

  • 28ahsgT7 an hour ago ago

    It is somewhat amusing that the pro-LLM faction increasingly co-opts their opponents' arguments—now they are turning AI-hype into anti-AI hype.

    They did the same with Upton Sinclair's quote, which is now used against any worker who dares to hope for a salary.

    There is not much creativity in the pro-LLM faction, which is guided by monetary interests and does not mind burning its social capital and credibility in exchange for money.

  • dbacar an hour ago ago

    There are different opinions on this:

    https://spectrum.ieee.org/ai-coding-degrades

  • yndoendo 4 hours ago ago

    I want to know if any content has been made using AI or not.

    There really should be a label on the product to let the consumer know. This would be similar to Norway requiring disclosure of retouched images. I can't think of any other way to help with the body-image issues that arise from pictures of people who could never look that way in real life.

  • falloutx 9 hours ago ago

    Where is this anti-AI hype? We are seeing 100x videos of Claude Code & vibe coding, and then maybe we get 1 or 2 people saying "Maybe we should be cautious".

    • simonw 9 hours ago ago

      I would count about two-thirds of the comments in this thread as anti-AI hype, and this thread is pretty mild in that regard compared to most other threads here about AI for code.

      And this is Hacker News, which you might expect to attract people who thrive on exploring the edges of weird new technology!

      • tuesdaynight 7 hours ago ago

        I don't have decades of experience under my belt, but I feel like the reaction is happening mostly because it is the first time that developers are at risk of being automated out of work. "Learn a new field" is easy to say when you are not the one who will need to do it. Now a lot of developers are afraid of having to follow the advice that they gave to a lot of workers.

        I don't believe that AI will put most of the working force out of jobs. That would be so different from what we had in history that I think the chances are minimal. However, they are not zero, and that is scary as fuck for a lot of people.

        • falloutx 5 hours ago ago

          This is literally true: we have been automating other people out of their jobs without empathy for ages, so it makes sense that at some point the knife would fall on us. Given the low solidarity we have shown with others, and even with our fellow programmers, I guess we deserve it. My real worry at this point is that the most destructive ones will continue and only the destructive programmers will be safe.

      • falloutx 9 hours ago ago

        I mean, most of us don't work on our own thing or on open source, so making badly thought-out and badly designed features faster isn't really a dream. Software already has so much bloat and slop that this way of working just scares us.

    • tucnak 9 hours ago ago

      Honestly, "Maybe we should be cautious" seems akin to concern trolling.

  • didip 2 hours ago ago

    The paragraph that starts with this sentence:

    > However, this technology is far too important to be in the hands of a few companies.

    I wholeheartedly agree 1000%. Something needs to change this landscape in the US.

    Furthermore, open source models being entirely dominated by China is also problematic.

  • ChrisMarshallNY 9 hours ago ago

    I generally have a lot of respect for this guy. He’s an excellent coder, and really cares about his craft. I can relate to him (except he’s been more successful than me, which is fine - he deserves it).

    Really, one of the first things he said, sums it up:

    > facts are facts, and AI is going to change programming forever.

    I have been using it in a very similar manner to how he describes his workflow, and it’s already greatly improved my velocity and quality.

    I also can relate to this comment:

    > I feel great to be part of that, because I see this as a continuation of what I tried to do all my life: democratizing code, systems, knowledge.

    • archagon 3 minutes ago ago

      “Facts are facts” or “this is inevitable” is what you say when you don’t have a convincing argument. I can’t respect this kind of propagandistic writing. Among other things, it suggests an ulterior motive or, at the very least, lazy thinking.

  • theturtletalks 9 hours ago ago

    LLMs are breaking open-source monetization.

    Group 1 is untouched since they were writing code for the sake of writing and they have the reward of that altruism.

    Group 2 are those that needed their projects to bring in some revenue so they can continue writing open-source.

    Group 3 are companies that used open-source as a way to get market share from proprietary companies, using it more in a capitalistic way.

    Over time, I think groups 2 and 3 will leave open-source and group 1 will make up most of the open-source contributors. It is up to you to decide if projects like Redis would be built today with the monetary incentives gone.

    • antirez 4 hours ago ago

      Please note that the majority of OSS efforts were already non-monetized and deeply exploited. At least what is happening now has the potential to change the model towards a more correct one. What you see with Tailwind and similar cases is not really an open source business model issue, it is a "low barrier to entry" business model issue, since with AI a lot of things can be done without effort and without purchasing PRO products. And documentation is also less useful, but this is a general thing, not just related to OSS software. In general, people who write OSS are, for the most part, not helped enough by the companies using their code to make money, by users, by everybody else, basically.

  • sreekanth850 9 hours ago ago

    People here generalise vibe coders into a single category. I don’t write code line-by-line the traditional way, but I do understand architecture deeply. Recently I started using AI to write code: not by dumping random prompts and copy-pasting blindly, but inside VS Code, reviewing what it generates, understanding why it works, and knowing exactly where each feature lives and how it fits. I also work with a frontend developer (as I do backend only and am not interested in building UI and CSS) to integrate things properly, and together we fix bugs and security issues. Every feature built with AI works flawlessly because it’s still being reviewed, tested, and owned by humans.

    If I have a good idea and use AI to code it, without the friction of depending on a developer due to a limited budget, why do people think it’s a sin? Is the implication that if you don’t have VC money to hire a team of developers, you’re supposed to just lose? I saw the exact same sentiment when tools like Elementor started getting popular among business owners. Same arguments, same gatekeeping. The market didn’t care. It feels more like insecurity about losing an edge. And if the edge was “I type code myself”, that edge was always fragile.

    Edit: The biggest advantage is that you don’t lose anything in translation. There’s no gap between the idea in your head and what gets built.

    You don’t spend weeks explaining intent, edge cases, or what I really meant to a developer. You iterate 1:1 with the system and adjust immediately when something feels off.

  • zkmon 9 hours ago ago

    So, by "AI", you mean programming AI. Generalizing it as "AI" and "anti-AI" is adding great confusion to the already dizzying level of hype.

    At its core, AI has the capability to extract structure/meaning from unstructured content and vice-versa. Computing systems and other machines required inputs with limited context. So far, it was a human's job to prepare that structure and context and provide it to the machines. That structure could be called a "program" or "form data" or "a sequence of steps or lever operations or button presses".

    Now the machines got this AI wrapper or adapter that enables them to extract the context and structure from the natural human-formatted or messy content.

    But all that works only if the input has the required amount of information and inherent structure to it. Try giving a prompt with a jumbled-up sequence of words. So it's still the human's job to provide that input to the machine.

  • elktown 8 hours ago ago

    I wonder whether, being a literal AI sci-fi author, antirez acknowledges that there's possible bias and a willingness to extrapolate here? That said, I respect his work immensely and I do put a lot of weight on his recommendations. But I'd really prefer the hype fog that's clouding the signal [for me] to dissipate a bit - maybe economic realities will sort this out soon.

    There's also a short-termism aspect of AI generated code that's seemingly not addressed as much. Don't pee your pants in the winter to keep warm.

  • bwfan123 4 hours ago ago

    I am not sure why the OP is painting it as an "us-vs-them", pro- or anti-AI. AI is a tool. Use it if it helps.

    I would draw an analogy here between building software and building a home.

    When building a home we have a user providing the requirements, the architect/structural engineer providing the blueprint to satisfy the reqs, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may have a project-manager coordinating these activities.

    Building software is similar in many aspects to building a structure. If developers think of themselves as a mason they are limiting their perspective. If AI can help lay the bricks use it ! If it can help with the blueprint or the design use it. It is a fantastic tool in the tool belt of the profession. I think of it as a power-tool and want to keep its batteries charged to use it at any time.

  • dzonga 6 hours ago ago

    antirez gave us reddit - but somehow I think the part that he and other smart folks who talk about A.I so much forget is agency | self-sufficiency.

    If A.I writes everything for you - cool, you can produce faster? But is that really true if you're renting capacity? What if costs go up and now you can't rent anymore - but you can't code anymore either, and the documentation is no longer there, because the MCP-style assumption is that everything will be done by agents. Then what?

    What about the people who work on messy 'Information Systems'? Things like redis are impressive, but it's closed-loop software, just like compilers.

    Some smart guy back in the 80s wrote that it's always a people problem.

    • otterley 4 hours ago ago

      Redis, not Reddit. :)

  • insane_dreamer an hour ago ago

    The reason I am "anti-AI" is not because I think LLMs are bad at what they do, nor because I'm afraid they'll take my job. I use CC to accelerate my own work (it's improved by leaps and bounds, though I still find I have to keep it on a short leash because it doesn't always think things through enough). It's also a great research tool (search on steroids). It's excellent at summarizing long documents, editing and proofreading, etc. I use it for all those things. It's useful.

    The reason I am anti-AI is because I believe it poses a net-negative to society overall. Not because it is inherently bad, but because of the way it is being infused into society by large corps (and eventually governments). Yes, it makes me, and other developers, more productive. And it can more quickly solve certain problems that were time consuming or laborious to solve. And it might lead to new and greater scientific and technological advances.

    But those gains do not outweigh all of the negatives: the concentration of power and capital into an increasingly small group; the eventual loss of untold millions of jobs (with, as of yet, not even a shred of indication of what might replace them); the loss of skills in the next generations, who are delegating much of their critical thinking (or thinking, period) to ChatGPT; the loss of trust in society now that any believable video can be easily generated; the concentration of power in the control of information if everyone is getting their info from LLMs instead of the open internet (and ultimately, potentially, the death of the open internet); the explosion in energy consumption by data centers, which exacerbates rather than mitigates global warming; and plenty more.

    AI might allow us to find better technological solutions to world hunger, poverty, mental health, water shortages, climate change, and war. But none of those problems are technological problems; technology only plays a small part. And the really important part is being negatively exacerbated by the "AI arms race". That's why I, who was my whole life a technological optimist, am no longer hopeful for the future. I wish I was.

  • Ekaros 9 hours ago ago

    I think the best hope against AI is copyright. That is, AI-generated software has none. Everyone is free to steal and resell it. And those who generated it have zero rights to complain or take legal action.

  • threethirtytwo 5 hours ago ago

    > the more isolated, and the more textually representable, the better: system programming is particularly apt

    I’ve written complete GUIs in 3D on the front end. This GUI was non-traditional. It allows you to play back, pause, speed up, slow down, and rewind a GPS track like a movie. There is real-time color changing and drawing of the track as the playback occurs.

    Using mapbox to do this straight would be too slow. I told the AI to optimize it by going straight into shader extensions for mapbox to optimize the GPU code.

    Make no mistake: LLMs are incredible even for things that are not systems-based and that require interaction with 3D and GUIs.

    • antirez 4 hours ago ago

      Yep, they work especially well if you instruct them to add to your program ways for them to "see" what is happening. And as embedding models get better, we will get better results too, from their ability to "see". For now Gemini 3 is the best at this, but it is not the best at coding as an agent, so we will have to wait a bit.

  • galdauts 10 hours ago ago

    I feel like the use of the term "anti-AI hype" is not really fully explored here. Even limiting myself to tech-related applications - I'm frankly sick of companies trying to shove half-baked "AI features" down my throat, and the enshittification of services that ensues. That has little to do with using LLMs as coding assistants, and yet I think it is still an essential part of the "anti-AI hype".

    • falloutx 9 hours ago ago

      The dreaded summarize feature: it's in places you wouldn't expect, not to mention the whole "let's record every meeting and then summarize it for leaders" thing. Big Brother at work is back and it's even more powerful.

  • anovikov 5 hours ago ago

    I'm sure it will go in the worst way possible: demand for code will not expand at nearly the rate at which coding productivity increases, and the vast majority of coders will become permanently jobless; the rest will become disposable cheap labor just due to the overabundance of them.

    This is already happening.

    AI had an impact on the simplest coding first; this is self-evident. So any impact it had, had to be on the quantity of software created, and only then on its quality and/or complexity. And mobile apps are/were a tedious job with a lot of scaffolding and a lot of "blanks to fill" to make them work and get accepted by stores. So the first thing that had to skyrocket in numbers with the arrival of AI had to be mobile apps.

    But the number of apps on the Apple App Store is essentially flat, and the rate of increase is barely distinguishable from past years: +7% instead of +5%. Not even visible.

    Apparently the world doesn't need/can't make monetisable use of much more software than it already does. Demand wasn't quite satisfied say 5 years ago, but the gap wasn't huge. It is now covered many times over.

    Which means, most of us will probably never get another job/gig after the current one - and if it's over, it's over and not worth trying anymore - the scraps that are left of the market are not worth the effort.

  • esperent 10 hours ago ago

    > But I'm worried for the folks that will get fired. It is not clear what the dynamic at play will be: will companies try to have more people, and to build more?

    This is the crux. AI suddenly became good and society hasn't caught on yet. Programmers are a bit ahead of the curve here, being closer to the action of AI. But in a couple of years, if not already, all the other technical and office jobs will be equally affected. Translators, admin, marketing, scientists, writers of all sorts and on and on. Will we just produce more and retain a similar level of employment, or will AI be such a force multiplier that a significant number or even most of these jobs will be gone? Nobody knows yet.

    And yet, what I'm even more worried about, for their society-upending abilities, is robots. These are coming soon and they'll arrive with just as much suddenness and inertia as AI did.

    The robots will be as smart as the AI running them, so what happens when they're cheap and smart enough to replace humans in nearly all physical jobs?

    Nobody knows the answer to this. But in 5 years, or 10, we will find out.

    • falloutx 8 hours ago ago

      In one of the scenarios programmers get replaced and then progress slows, thus saving the jobs of writers, lawyers, marketers, scientists, and artists. At this point I am okay with that scenario, seeing how programmers have shown no solidarity while every other field has been rejecting AI. Lawyers have even started hiring junior lawyers back, and the art industry has basically shoved AI into a bin of irrelevance.

      • esperent 7 hours ago ago

        > Art industry

        I don't agree, unless by "art industry" what you actually mean is "art establishment".

        If we broaden it to mean "anywhere that money is paid, or used to be paid, to people for any kind of artistic endeavor" - even if we limit that to things related to drawing, painting, illustrating, graphic design, 3d design etc. - then AI is definitely replacing or augmenting a ton of human work. Just go on any Photoshop forum. It's all about AI now, just like everywhere else.

  • kruuuder 6 hours ago ago

    What happens if the bubble bursts - can we still use all the powerful models to create all this code? Aren't all the agents effectively using venture capital today? Is this sustainable?

    If I can run an agent on my machine, with no remote backend required, the problem is solved. But right now, aren't all developers throwing themselves into agentic software development betting that these services will always be available to them at a relatively low cost?

    • simonw 6 hours ago ago

      If the bubble bursts we club together to buy one of those big GPU servers (now available at rock bottom prices thanks to the bubble bursting) and run a shared instance of GLM-4.7 (the current best-at-coding Chinese open weight model) on it.

  • expedition32 7 hours ago ago

    There is too much money invested in AI. You can't trust anyone talking about it.

  • richardjennings 9 hours ago ago

    SOTA LLMs are now quite good at typing out code that passes tests. If you are able to instruct the creation of sufficient tests and understand the code generated structurally, there is a significant multiplier in productivity. I have found LLMs to be hugely useful in understanding codebases more quickly. Granted it may be necessary to get 2nd opinions and fact check what is stated, but there is a big door now open to anyone to educate themselves.

    I think there are some negative consequences to this; perhaps a new form of burn out. With the force multiplier and assisted learning utility comes a substantial increase in opportunity cost.

  • JackSlateur 10 hours ago ago

    "Die a hero or live long enough to see yourself become the villain"

    AI is both a near-perfect propaganda machine and, on the programming front, a self-fulfilling prophecy: yes, AI will be better at coding than humans. Mostly because humans are made worse by using AI.

    • gentooflux 10 hours ago ago

      It's a zero sum game. AI cannot innovate, it can only predictively generate code based on what it's already seen. If we get to a point where new code is mostly or only written by AI, nothing new emerges. No new libraries, no new techniques, no new approaches. Fewer and fewer real developers means less and less new code.

      • edg5000 10 hours ago ago

        Nonsense indeed. The model knowledge is the current state of the art. Any computation it does, advances it. It re-ingests work of prior agents every time you run it on your codebase, so even though the model initializes the same way (until they update the model), upon repeated calls it ingests more and more novel information, inching the state of the art ever forwards.

        • JackSlateur 9 hours ago ago

          Current state of the art? You must be joking... I see the code it has generated; some interns do better.

          Obviously, you are also joking about the claim that AI is immune to consanguinity, right?

          • simonw 9 hours ago ago

            If you have had interns who can write better code than Opus 4.5 I would very much like to hire them.

      • vanviegen 10 hours ago ago

        Nonsense. LLMs can easily build novel solutions based on my descriptions. Even in languages and with (proprietary) frameworks they have not been trained on, given a tiny bit of example code and the reference docs.

        • gentooflux 9 hours ago ago

          That's not novel, it's still applying techniques it's already seen, just on a different platform. Moreover, it has no way of knowing if its approach is anywhere near idiomatic on that new platform.

          • vanviegen 39 minutes ago ago

            I didn't say the platform was the novel aspect. And I'm getting pretty idiomatic code actually, just based on a bit of example code that shows it how. It's rather good at extrapolating.

    • simonw 9 hours ago ago

      > Mostly because humans are made worse by using AI.

      I'm confident you are wrong about that.

      AI makes people who are intellectually lazy and like to cheat worse, in the same way that a rich kid who hires someone to do their university homework for them is hurting their ability to learn.

      A rich kid who hires a personal tutor and invests time with them is spending the same money but using it to get better, not worse.

      Getting worse using AI is a choice. Plenty of people are choosing to use it to accelerate and improve their learning and skills instead.

    • zinodaur 10 hours ago ago

      [not an ai booster] I think you are the target of this article. I believe you are misunderstanding the current capacity of AI.

      • JackSlateur 10 hours ago ago

        I think I spend too much time at work fixing the greatness of AI.

        • edg5000 10 hours ago ago

          Are you hand-fixing the issues or having AI do it? I've found that second-pass quality is miles better than the initial implementation. If you're experienced, you'll know exactly where the code smells are. Point them out, and the agents will produce a much better implementation in this second pass. And have those people store the prompts in the repo! I put my specifications in ./doc/spec/*.md

          Every time I got bad results, looking back I noticed my spec was just vague or relied on assumptions. Of course you can't fix your colleagues; if they suck, they suck, and somebody's gotta do the mopping :)

        • vouwfietsman 10 hours ago ago

          I think it would make sense to have these issues bubble up into the public consciousness of hackernews.

          I've never used AI to code; I'm a software architect and currently assume I get little value out of an LLM. It would be useful for me if this debate had a vaguely engineering-smelling quality to it, because it's currently just two groups shouting at each other and handwaving criticism away.

          If you actually deal with AI generated problems, I love it, please make a post about it so we have something concrete to point to.

          • insin 5 hours ago ago

            PRs where somebody clearly doesn't know the tech being used well enough, or enough about how the complex app they're working on really works, and thus isn't able to tell a good design from a bad one for the feature they're working on, but has AI-assisted themselves to something which "works", can become an absolute death spiral.

            I wasted so much work time trying to steer one of these towards the light, which is very demotivating when design and "why did you do this?" questions are responded to with nothing but another flurry of commits. Even taking the time to fully understand the problem and suggest an alternative design which would fix most of the major issues did nothing (nothing useful must have emerged when that was fed into the coin slot...)

            Since I started the review, I ended up becoming the "blocker" for this feature when people started asking why it wasn't landed yet (because I also have my own work to do), to the point where I just hit Approve because I knew it wouldn't work at all for the even more complex use cases I needed to implement in that area soon, so I could just fix/rewrite it then.

            From my own experience, the sooner you accept code from an LLM, the worse a time you're going to have. If it wasn't a good solution, or was even the wrong solution from the get-go, no amount of churning away at the code with an LLM will fix it. If you _don't know_ how to fix it yourself, you can't suddenly go from reporting your great progress in stand-ups to "I have nothing" - maybe backwards progress is one of those new paradigms we'll have to accept?

          • JackSlateur 8 hours ago ago

            Here is a sample

            We are talking about a "stupid" tool that parses a google sheet and makes calls to a third-party API

            So there is one google sheet per team, with one column per person

            One line per day

            And each day, someone is in charge of the duty

            The tool grabs the data from the sheet and configures pagerduty so that alerts go to the right person

            Very basic, no cleverness needed, really straightforward actually

            So we have 1 person that wrote the code, with AI. Then we have a second person that checked the code (with AI). Then the shit comes to my desk. To see this kind of cruft:

              def create_headers(api_token: str) -> dict:
                """Create headers for PagerDuty API requests.
            
                Args:
                    api_token: PagerDuty API token.
            
                Returns:
                    Headers dictionary.
                """
                return {
                    "Accept": "application/vnd.pagerduty+json;version=2",
                    "Authorization": f"Token token={api_token}",
                    "Content-Type": "application/json",
                }
            
            And then, we have 5 usages like this:

              def delete_override(
                base_url: str,
                schedule_id: str,
                override_id: str,
                api_token: str,
              ) -> None:
                """Delete an override from a schedule.
            
                Args:
                    base_url: PagerDuty API base URL.
                    schedule_id: ID of the schedule.
                    override_id: ID of the override to delete.
                    api_token: PagerDuty API token.
                """
                headers = create_headers(api_token)
            
                override_url = f"{base_url}/schedules/{schedule_id}/overrides/{override_id}"
                response = requests.delete(override_url, headers=headers, timeout=60)
                response.raise_for_status()
            
            
            
            No HTTP keep-alive, no TCP connection reuse, the API key is passed down to every method, and so is the API's endpoint. The timeout is defined in each method. The file is ~800 lines of Python code, contains 19 methods, and only deals with PagerDuty (not the Google sheet). It took two full-time days.
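
            For contrast, a minimal sketch of the shape I would have expected (hypothetical class name, just to make the point): one requests.Session, so you get keep-alive and connection reuse for free, with the token, base URL and timeout stored exactly once:

              import requests

              class PagerDutyClient:
                  """Wraps one Session: connection reuse, shared headers, a single timeout."""

                  def __init__(self, base_url: str, api_token: str, timeout: int = 60):
                      self.base_url = base_url
                      self.timeout = timeout
                      self.session = requests.Session()
                      self.session.headers.update({
                          "Accept": "application/vnd.pagerduty+json;version=2",
                          "Authorization": f"Token token={api_token}",
                          "Content-Type": "application/json",
                      })

                  def delete_override(self, schedule_id: str, override_id: str) -> None:
                      """Delete an override from a schedule."""
                      url = f"{self.base_url}/schedules/{schedule_id}/overrides/{override_id}"
                      response = self.session.delete(url, timeout=self.timeout)
                      response.raise_for_status()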

            These people fail to produce anything meaningful, which is not really a surprise given their failure to do sane things with such a basic topic.

            Does AI bring good ideas? Obviously no, but we knew this. Does AI improve the quality of the result (regardless of the quality of the idea)? Apparently no. Does AI improve productivity? Again, given this example: no. Are these people better, more skilled, or anything else? No.

            Am I too demanding? Am I asking too much?

            • simonw 5 hours ago ago

              Try pasting that full code into Claude and prompting:

              > No HTTP keep-alive, no TCP reuse, the API key is passed down to every method, so is the API's endpoint. Timeout is defined in each method. Fix all of those issues.

              • JackSlateur 2 hours ago ago

                AI is a wonderful tool that will answer all of your questions, as long as you give it the right answer? That's probably right.

                • minimaxir 2 hours ago ago

                  Even in normal human-written code, it's not guaranteed to get the code completely correct in one-shot. That's why code review and QA still exists.

                  The issue here is more organizational with the engineers not getting the code up to standards before handing off, not the capabilities of the AI itself.

            • ej88 5 hours ago ago

              I'm sorry your teammates have skill issues when it comes to using these tools.

    • vanviegen 10 hours ago ago

      > Mostly because humans are made worse by using AI.

      For the type of work I do, I found it best to tightly supervise my LLMs: giving lots of design guidance upfront, and being very critical towards the output. This is not easy work. In fact, this was always the hard part, and now I'm spending a larger percentage of my time doing it. Since the impact of design mistakes is a lot smaller - I can just revert after 20 minutes instead of 3 days - I also get to learn from mistakes quicker. So I'd say, I'm improving my skills faster than before.

      For juniors though, I think you are right. By relying on this tech from early on in their careers, I think it will be very hard to grow their skills, taste and intuition. But maybe I'm just an old guy yelling at the clouds, and the next generation of developers will do just fine building careers as AI whisperers.

  • lofaszvanitt 6 hours ago ago

    People are afraid, because while AI seemingly gobbles up programmer jobs, on the economic side there are no guardrails visible or planned whatsoever.

  • honeybadger1 8 hours ago ago

    I've found awesome use cases for quick prototyping. It saves me days when I can just describe the final step and iterate on it backwards to perfection and showcase an idea.

  • oulipo2 9 hours ago ago

    > Writing code is no longer needed for the most part.

    Said by someone who spent his career writing code, it lacks a bit of detail... a more correct way to phrase it is: "if you're already an expert in good coding, now you can use these tools to skip most of the code writing".

    LLMs today are mostly some kind of "fill-in-the-blanks automation". As a coder, you try to create constraints (define types for typechecking constraints, define tests for testing constraints, define the general ideas you want the LLM to code because you already know about the domain and how coding works), then you let the model "fill-in the blanks" and you regularly check that all tests pass, etc
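
    To make that concrete, here is a toy sketch of that constraint-first shape (hypothetical names; the point is the workflow, not the function): the typed signature and the test are the constraints you write, and the body is the blank the model fills in.

      from datetime import date

      def business_days_between(start: date, end: date) -> int:
          """Count Mon-Fri days in the half-open range [start, end)."""
          ...  # deliberately left blank: this is the part handed to the LLM

      def test_business_days_between():
          # 2024-01-01 and 2024-01-08 are both Mondays, so exactly one work week.
          assert business_days_between(date(2024, 1, 1), date(2024, 1, 8)) == 5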

  • Juliate 9 hours ago ago

    > What is the social solution, then? Innovation can't be taken back after all.

    It definitely can.

    The innovation that was the open, social web of 20 years ago? still an option, but drowned between closed ad-fueled toxic gardens and drained by AI illegal copy bots.

    The innovation that was democracy? Purposely under attack in every single place it still exists today.

    Insulin at almost no cost (because it costs next to nothing to produce)? Out of the question for people who live under the regime of pharmaceutical corporations that are not reined in by government, by collective rules.

    So, a technology that has a dubious ROI relative to the energy, water, and land consumed, that incites illegal activities and suicides, and that is in the process of killing the consumer IT market for the next 5 years if not more, because one unprofitable company without solid, verifiable prospects managed to place dubious orders with unproven money that lock up memory components for unproven data centers... yes, it definitely can be taken back.

    • Philpax 6 hours ago ago

      You cannot stop someone from running llama-server -m glm-4.7.gguf on their own hardware. That is the argument: even if all the AI companies go bust and the datacenters explode, the technology has been fundamentally proliferated and it is impossible to return to a world in which it does not exist.

      • Juliate 4 hours ago ago

        Of course not. But that's only the raw tech.

        The tech will still be there. As much as blockchains, crypto, NFTs and such, whose bubbles have not yet burst (well, the NFT one did, it was fast).

        But (Gen)AI today is much less about the tech, and much more about the illegal actions (harvesting copyrighted works) that permit it to run, and the disastrous impact it has on... everything (resources, jobs, mistaken prospects, distorted IT markets, culture, politics), because it is not (yet) regulated to the extent it should be.

  • bakugo 10 hours ago ago

    > LLMs are going to help us to write better software

    No, I really don't think they will. Software has only been getting worse, and LLMs are accelerating the rate at which incompetent developers can pump out low quality code they don't understand and can't possibly improve.

    • Trasmatta 7 hours ago ago

      Exactly. Many of us have learned, after decades of experience, that more code and more features are not a net positive. Lots of additional code is a liability that you carefully accept given the value it provides.

  • artemonster 10 hours ago ago

    I see the AI effect as the exact opposite: a turbo version of the "lisp curse".

  • imiric 9 hours ago ago

    This is the first time I've heard sentiment against "AI" hype referred to as hype itself. Yes, there are people ignoring this technology altogether, possibly to their own detriment, but at the stage where we are now it is perfectly reasonable to want to avoid the actual hype.

    What I would really urge people to avoid doing is listening to what any tech influencer has to say, including antirez. I really don't care what famous developers think about this technology, and it doesn't influence my own experience of it. People should try out whatever they're comfortable with, and make up their own opinions, instead of listening what anyone else has to say about it. This applies to anything, of course, but it's particularly important for the technology bubble we're currently in.

    It's unfortunate that some voices are louder than others in this parasocial web we've built. Those with larger loudspeakers should be conscious of this fact, and moderate their output responsibly. It starts by not telling people what to do.

  • BoredPositron 10 hours ago ago

    We are 5 years in... it's fine to be sceptical. The model advancements are in the single digits now. It's not on us that they promised the world 3 years ago. It's fine and will be just fine for the next few years. A real breakthrough is at least another 5 years away and if it comes everything you do now will be obsolete. Nobody will need or care about the dude that Sloperatored Claude Code on release and that's the reality everyone who goes full AI evangelist needs to understand. You are just a stopgap. The knowledge you are accumulating now is just worthless transitional knowledge. There is no need for FOMO and there is nothing hard operating LLMs for coding and it will get easier by the day.

    • danielbln 10 hours ago ago

      5 years ago we had GPT-3, not even instruction-following GPT yet, a mere completion model. ChatGPT release was late 2022 (3 years ago). True agentic systems with reliable tool calling in a loop, that came maybe a year ago, agentic coding harnesses less than a year ago.

      Model improvements may have flattened, the quality improvements due to engineering work around those models certainly have not.

      If we always wait for technology to calcify and settle before we interact with it, then that would be rather boring for some of us. Acquiring knowledge is not such a heavy burden that it's a problem if it's outdated a year in. But that's maybe just a mindset thing.

    • baq 10 hours ago ago

      I haven't been listening to any promises, I'm simply trying out the models as they get released. I agree with the article wholeheartedly - you can't pretend these tools are not worth learning anymore. It's irresponsible if you're a professional.

      Next breakthrough will happen in 2030 or it might happen next Tuesday; it might have already happened, it's just that the lab which did it is too scared to release it. It doesn't matter: until it happens, you should work with what you've got.

    • oncallthrow 10 hours ago ago

      I would have wholeheartedly agreed with this comment one year ago. Now, not so much.

    • prodigycorp 10 hours ago ago

      Where we're at is a lot better than we expected to be three years ago TBH.

  • threethirtytwo an hour ago ago

    This has nothing to do with whether programming is enjoyable, elegant, creative, or personally meaningful.

    It is about economic value.

    Programming exists at scale because it reliably produces money. It creates leverage for businesses, reduces costs, and accelerates execution. For a long time that value could only be produced by human labor. That assumption is now breaking.

    When a skill consistently converts into income, it stops being just a skill. It becomes a foundation for people’s lives. Mortgages, families, social standing, and long term security get built on top of it. At that point the work is no longer something you do. It becomes something you are.

    That is why people do not say “I write code sometimes.” They say “I am a software engineer,” in the same way someone says “I am a pilot” or “I am a doctor.” The label carries status. Programming has been culturally framed as difficult, intelligence signaling, and exclusive. Historically it has rewarded that framing with high compensation and prestige. Identity attachment is the expected outcome.

    Once identity is involved, objectivity becomes extremely difficult.

    Most resistance to AI is framed as technical concern. Accuracy, edge cases, hallucinations, safety. Those arguments exist, but they are secondary. They are not the cause. The real issue is identity threat.

    These systems are not just automating tasks. They are encroaching on the thing many people have used to define their value. A machine that can generate code, reason about systems, and produce usable solutions challenges the belief that “this is what makes me special and irreplaceable.” That registers as an existential problem, not a technical one.

    When people experience an existential threat, they do not calmly update their beliefs. They defend. They minimize progress. They fixate on flaws. They shift standards. They demand perfection from machines that was never required from humans. This is not unique to programmers. It is a universal response to displacement.

    The loudest critics are rarely the weakest programmers. They are often the most invested ones. The people whose self concept, status, and personal narrative are tightly bound to the idea that what they do cannot be replicated by a machine.

    That is why the discourse feels evasive and dishonest. It is not really about whether AI is good at programming today. It is about resisting a trajectory that points toward a future where the economic value of programming is no longer tightly coupled to human identity.

    This is not a moral failure. It is a psychological reaction. But pretending it is something else only delays adaptation.

    AI is not attacking programming. It is attacking the assumption that a lucrative skill guarantees permanence. What people are defending is not the craft itself, but a story about who they are and why they matter.

    That is the real conflict. And places like HN are full of people running straight into it.

  • on_the_train 7 hours ago ago

    Another one of these sickening pieces, framing opposition to an expensive tech that doesn't work as being "anti". I tried letting the absolute newest models write C++ again today: GPT 5.1 and Opus 4.5. A single function with two or fewer input parameters, a nice return value, doing simple geometry with the glm library. Yes, the code worked. But I took as long fixing the weird parts as writing it myself would have taken. And I still don't trust the result, because reviewing is so much harder than writing.

    There's still no point. ReSharper and clang-tidy still have more value than all the LLMs. It's not just hype, it's a bloody cult, right beside those NFT and church-of-COVID people.

    • simonw 7 hours ago ago

      Did you try telling the model to write the unit tests first, watch them fail, then write a function that passes them?

  • senko 10 hours ago ago

    The anti-AI hype, in the context of software development, seems to focus on a few things:

    > AI code is slop, therefore you shouldn't use it

    You should learn how to responsibly use it as a tool, not a replacement for you. This can be done, people are doing it, people like Salvatore (antirez), Mitchell (of Terraform/Ghostty fame), Simon (swillison) and many others are publicly talking about it.

    > AI can't code XYZ

    It's not all-or-nothing. Use it where it works for you, don't use it where it doesn't. And btw, do check that you actually described the problem well. Slop-in, slop-out. Not sayin' this is always the case, but turns out it's the case surprisingly often. Just sayin'

    > AI will atrophy your skills, or prevent you from learning new ones, therefore you shouldn't use it

    Again, you should know where and how to use it. Don't tune out while doing coding. Don't just skim the generated code. Be curious, take your time. This is entirely up to you.

    > AI takes away the fun part (coding) and intensifies the boring (management)

    I love programming but TBH, for non-toy projects that need to go into production, at least three-quarters of the code is boring boilerplate. And making that part interesting is one of the worst things you can do in software development! Down that path lie resume-driven development, architecture astronautics, abusing the design pattern du jour, and other sins that will make maintaining that code a nightmare! You want boring, stable, simple. AI excels at that. Then you can focus on the tiny bit that's fun and hand-craft that!

    Also, you can always code for fun. Many people with boring coding jobs code for fun in the evenings. AI changes nothing here (except possibly improving the day job drudgery).

    > AI is financially unsustainable, companies are losing money

    Perhaps, and we're probably in a bubble. That doesn't detract from the fact that these things exist, are here now, and work. OpenAI and Anthropic could go out of business tomorrow; the few TB of weights would easily be reused by someone else. The tech will stay.

    > AI steals your open source code, therefore you shouldn't write open-source

    Well, use AI to write your closed-source code. You don't need to open source anything if you're worried someone (AI or human) will steal it. If you don't want to use something on moral grounds, that's a perfectly fine thing to do. Others may have a different opinion on this.

    > AI will kill your open source business, therefore you shouldn't write open-source

    Open source is not a business model (I've been saying this for longer than the median user of this site has been alive). AI doesn't change that.

    As @antirez points out, you can use AI or not, but don't go hiding under a rock and then be surprised in a few years when you come out and find the software development profession completely unrecognizable.

    • zahlman 9 hours ago ago

      > at least three-quarters of the code is boring boilerplate. And making that part interesting is one of the worst things you can do in software development! Down that path lie resume-driven development, architecture astronautics, abusing the design pattern du jour, and other sins that will make maintaining that code a nightmare!

      You apparently see "making the boilerplate interesting" as doing a bunch of overengineering. Strange. To my mind, the overengineering is part of the boilerplate. "Making the boilerplate interesting" is, in my mind, not possible; rather, the goal is to fix the system such that it doesn't require boilerplate any more. (Sometimes that just means a different implementation language.)

      • senko 8 hours ago ago

        I agree with what you said, but I think we might be talking about slightly different things. Let me give a few examples in an attempt to better explain myself:

        A company I worked with a while ago had a microservices architecture and had decided not to use one of the few standard API serialization/deserialization options but to write their own, because it was going to be more performant, easier to maintain, and a better fit for their use case. A few years on, after growing organically to support all the edge cases, it's more convoluted, slower, and buggier than if they had gone with the boring option that ostensibly had "a bit more boilerplate" from the start.

        A second example is from a friend, whose coworker decided to write a backend-agnostic, purpose-agnostic, data-agnostic message broker/routing library. They spent a few months on this and delivered a beautifully architected solution in a few dozen thousand lines of code. The problem is that the solution solves many problems the company didn't have and won't have, and it will be a maintenance drag forevermore. Meanwhile, they could have done it in a few hundred lines of code that would be coupled to the problem domain, but still fairly decent from most people's point of view.

        These two are from real projects. But you can also notice that, in general, people often pick a fancy solution over a boring one, ostensibly because it has something "out of the box". The price of the "out of the box"-ness (aside from potential SaaS/infra costs and vendor lock-in) is that you now need to adapt your own code to work with the mental model (domain) of the fancy solution.

        Or to harp on something trivial, you end up depending on left-pad because writing it yourself was boring.

        > fix the system such that it doesn't require boilerplate any more.

        I think perhaps I used a broader meaning for "boilerplate" than you had in mind. If we're talking about boilerplate as enumerating all the exceptions a Java method may throw, or whatever unholy sad thing we have to do in C to use GTK/GObject, then I agree.

        But I also meant something closer to "glue code that isn't the primary carrier of value in the project", or, to misuse financial language in this context, the code that's a cost center, not a profit center.

  • metalman 9 hours ago ago

    The end run around copyright is TOS forced on users through distribution channels (platforms), service providers, and actual "patented" hardware, so money will continue to flow up, not sideways. Given that there are a very limited number of things that can actually be done with computers/phones, and that "AI" can clearly arrange those in any possible configuration, the rest is deciding whether it will jibe with users, and noticing when it doesn't, which I believe AI will be unable to discern from other AI slop imitating actual users.

  • echelon 4 hours ago ago

    I love Antirez.

    > However, this technology is far too important to be in the hands of a few companies.

    This is the most important assessment and we should all heed this warning with great care. If we think hyperscalers are bad, imagine what happens if they control and dictate the entire future.

    Our cellphones are prisons. We have no fundamental control, and we can't freely distribute software amongst ourselves. Everything flows through funnels of control and monitoring. The entire internet and all of technology could soon become the same.

    We need to bust this open now or face a future where we are truly serfs.

    I'm excited by AI and I love what it can do, but we are in a mortally precarious position.

  • vmaurin 9 hours ago ago

    > facts are facts, and AI is going to change programming forever

    Show me these "facts"

    • antirez 9 hours ago ago

      If you can't see this by working with Claude Code for a few weeks, I don't want to go to greater effort than writing a blog post to convince you. It's not a mission of mine. I just want to communicate with the people who are open enough to challenge their ideas and willing to touch with their hands what is happening. Also, if you tried and failed, it means that either AI is not good enough for your domain, or you are not able to extract the value. The fact is, this does not matter: a bigger percentage of programmers is using AI with success every day, and as it progresses this will happen more, and in more diverse programming fields and tasks. If you disagree and are happy to avoid LLMs, well, that's ok as well.

      • vmaurin 9 hours ago ago

        I am waiting for people to commit their prompt/agent setups instead of the code before calling this a paradigm change. So far it is "just" machines generating code, and generating code doesn't solve all software problems (but yeah, they are getting pretty good at generating code).

      • timmytokyo 2 hours ago ago

        Replace "Claude Code" or "AI" with "Jesus". It all sounds very familiar.

      • oulipo2 9 hours ago ago

        okay, but again: if you say in your blog that those are "facts", then... show us the facts?

        You can't just hand-wavily say "a bigger percentage of programmers is using AI with success every day" and not give a link to a study that shows it's true

        as a matter of fact, we know that a lot of companies have fired people by pretending that they are no longer needed in the age of AI... only to re-hire offshored people for much cheaper

        for now, there hasn't been a documented sudden increase in code velocity/robustness, just a few anecdotal cases

        I use it myself, and I admit it saves some time to develop some basic stuff and get a few ideas, but so far nothing revolutionary. So let's take it at face value:

        - a tech which helps slightly with some tasks (basically "in-painting code" once you've defined the "border constraints" sufficiently well)

        - a tech which might cause massive disruption of people's livelihoods (and safety) if used incorrectly, which might FAR OUTWEIGH the small benefits and be a good enough reason for people to fight against AI

        - a tech which emits CO2, increases inequalities, depends on the quasi-slave work of annotators in third-world countries, etc

        so you can talk all day long about not dismissing AI, but you should also take it with everything that comes with it

        • antirez 8 hours ago ago

          1. If you can't convince yourself, after downloading Claude Code or Codex and playing with them for a week, that programming has been completely revolutionized, there is nothing I can do: you have it at your fingertips, yet you ask for facts that I should communicate to you.

          2. Air conditioning usage in the US alone is around 4 times the energy/CO2 usage of all the world's data centers (not just AI) combined. AI is about 10% of data center usage, so AC alone is 40 times that.
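
          To spell out the arithmetic behind that last step (taking the 4x premise at face value): if US AC ≈ 4 × (energy of all data centers) and AI ≈ 10% of data center energy, then US AC ≈ 4 / 0.10 = 40 × AI's energy.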

          • keybits 4 hours ago ago

            I enjoyed your blog post, but I was curious about the claim in point 2 above. I asked Claude, and it seems the claim is false:

            # Fact-Checking This Climate Impact Claim

            Let me break down this claim with actual data:

            ## The Numbers

            *US Air Conditioning:*
            - US A/C uses approximately *220-240 TWh/year* (2020 EIA data)
            - This represents about 6% of total US electricity consumption

            *Global Data Centers:*
            - Estimated *240-340 TWh/year globally* (IEA 2022 reports)
            - Some estimates go to 460 TWh including cryptocurrency

            *AI's Share:*
            - AI represents roughly *10-15%* of data center energy (IEA estimates this is growing rapidly)

            ## Verdict: *The claim is FALSE*

            The math doesn't support a 4:1 ratio. US A/C and global data centers use *roughly comparable* amounts of energy: somewhere between 1:1 and 1:1.5, not 4:1.

            The "40 times AI" conclusion would only work if the 4x premise were true.

            ## Important Caveats

            1. *Measurement uncertainty*: Data center energy use is notoriously difficult to measure accurately
            2. *Rapid growth*: AI energy use is growing much faster than A/C
            3. *Geographic variation*: This compares one country's A/C to global data centers (apples to oranges)

            ## Reliable Sources

            - US EIA (Energy Information Administration) for A/C data
            - IEA (International Energy Agency) for data center estimates
            - Lawrence Berkeley National Laboratory studies

            The quote significantly overstates the disparity, though both are indeed major energy consumers.

          • oulipo2 7 hours ago ago

            1. "if you can't convince yourself by playing anecdotically" is NOT "facts"

            2. The US being incredibly bad at energy spending on AC doesn't somehow justify adding another, mostly unnecessary, polluting source, even if it's currently slightly lower. ACs have existed for decades. AI has been exploding for only a few years, so we could well see it go way, way past AC usage.

            Also, there's the idea of "accelerationism". Why do we need all this tech? What good does it do to have 10 more silly AI slop videos and disinformation campaigns during elections? Just so that antirez can be a little bit faster at writing his code... that's not what the world is about.

            Our world should be about humans, connecting together (more slowly, not "faster"), about having meaningful work, and caring about planetary resources

            The exact opposite of what capitalistic accelerationism / AI is trying to sell us

            • simonw 7 hours ago ago

              If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.

              > Why do we need all this tech?

              Slightly odd question to be asking here on Hacker News!

              • lunar_mycroft 2 hours ago ago

                > If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.

                That doesn't mean that we have to accept claims that LLMs drastically increase productivity without good evidence (or in the presence of evidence to the contrary). If anything, it means the opposite.

              • oulipo2 6 hours ago ago

                Sure, but I wasn't the one pretending to have "facts" on AI...

                > Slightly odd question to be asking here on Hacker News!

                It's absolutely not? The first line of questioning when you work in a domain SHOULD BE "why am I doing this" and "what is the impact of my work on others"

                • simonw 6 hours ago ago

                  Yeah, I think I quoted you out of context there. I'm very much in agreement about asking "what is the impact of my work on others".

            • akomtu 4 hours ago ago

              This is obviously a collision between our human culture and the machine culture, and on the surface its intent is evil, as many have guessed already. But what it also does is separate the two sides cleanly, as they want to pursue different and wildly incompatible futures. Some want to herd sheep, others want to unite with tech, and the two can't live under one sky. The AI wedge is a necessity in this sense.

        • senordevnyc 2 hours ago ago

          Just dismiss what he says and move on, he's already made it clear he's not trying to convince you.

        • simonw 7 hours ago ago

          How does widespread access to AI tools increase inequalities?

          • AstroBen 5 hours ago ago

            It's pretty clear that if AI delivers on its promise it'll decimate the income of all but the top 1% of developers

            Labor is worth less; capital and equity ownership make more or the same

            • simonw 3 hours ago ago

              I don't think that's a foregone conclusion yet.

              I continue to hope that we see the opposite effect: the drop in the cost of software development drives massively increased demand for both software and our services.

              I wrote about that here: https://simonwillison.net/2026/Jan/8/llm-predictions-for-202...

              • AstroBen 2 hours ago ago

                I keep flip-flopping between being optimistic and pessimistic on this, but yeah we just need to wait and see

          • oulipo2 7 hours ago ago

            Because as long as it is done in a capitalistic economy, it will exclude the many from work while driving profits to a few

  • yard2010 10 hours ago ago

    This is making me sad. The people who are going to lose their jobs will be literally weaponized against minorities by the crooked politicians doing their thing right now; it's going to be a disaster, I can tell. I just wish I could go back in time. I don't want to live in this timeline anymore. I lost my passion for the job before any of it even happened, on paper.

    • falloutx 9 hours ago ago

      We may already have hit the point where the easier it is to make software, the harder it is to sell it (or make money from it).

      There is no way I can convince a user that my vibe-coded version of Todolist is better than the 100 others made this week

    • tim333 7 hours ago ago

      Industries have come and gone for centuries and it doesn't always go horribly wrong.

  • mehdi1964 9 hours ago ago

    The shift isn’t about replacing programmers, it’s about changing what programming means—from writing every line to designing, guiding, and validating. Excited to see how open source and small teams can leverage this without being drowned by centralization.