What makes Claude Code so damn good

(minusx.ai)

333 points | by samuelstros 17 hours ago ago

236 comments

  • brokegrammer 6 hours ago ago

    I don't get it. The title says "What makes Claude Code so damn good", which implies that they will show how Claude Code is better than other tools, or just better in general. But they go about repeating the Claude Code documentation using different wording.

    Am I missing something here? Or is this just Anthropic shilling?

    • slimebot80 8 minutes ago ago

      Nowhere in the title does it compare to other tools? Just that it's damn good.

    • whazor 5 hours ago ago

      I think this article is targeted towards readers who subjectively agree that Claude Code is the best.

    • nuwandavek 6 hours ago ago

      (blogpost author here) Haha, that's totally fair. I've read a whole bunch of posts comparing CC to other tools, or with a dump of the architecture. This post was mainly for people who've used CC extensively, know for a fact that it is better, and wonder how to ship such an experience in their own apps.

      • brokegrammer 6 hours ago ago

        I've used Claude Code, Cursor, and Copilot in VS Code, and I don't "know" that Claude Code is better, apart from the fact that it runs in the terminal, which makes it a little faster but less ergonomic than tools running inside the editor. All of the context tricks can be done with Copilot instructions as well, so I simply can't see how Claude Code is superior.

        • techwiz137 5 hours ago ago

          For code generation, nothing so far beats Opus. More often than not, it generates working code and fixes bugs that Gemini 2.5 Pro, or even Gemini Code Assist, couldn't solve. Gemini Code Assist is better than 2.5 Pro, but has much tighter per-prompt limits and often truncates output.

          • d4rkp4ttern 32 minutes ago ago

            Don’t sleep on Codex-CLI + gpt-5. While the Codex-CLI scaffolding is far behind CC, the gpt-5 code seems solid from what I’ve seen (you can adjust thinking level using /model).

          • baq 4 hours ago ago

            I found Anthropic’s models untrustworthy with SQL (e.g. confused AND and OR operator precedence - or simply forgot to add parens, multiple times), Gemini 2.5 pro has no such issues and identified Claude’s mistakes correctly.

          • jonasft 4 hours ago ago

            Let’s say that is correct; you can still just use Opus in Cursor or whatever.

          • rendx 4 hours ago ago

            The article is not comparing models, but how the models are used by tools, in this case Claude Code. It's not merely a thin wrapper around an API.

          • faangguyindia 3 hours ago ago

            for me gemini 2.5 pro with thinking tokens enabled blows Opus out of the water for "difficult problems".

  • the_mitsuhiko 16 hours ago ago

    Unfortunately, Claude Code is not open source, but there are some tools to better figure out how it is working. If you are really interested in how it works, I strongly recommend looking at Claude Trace: https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...

    It dumps out a JSON file as well as a very nicely formatted HTML file that shows you every single tool and all the prompts that were used for a session.

    • rbren 9 hours ago ago

      If you’re looking for an OSS alternative check out OpenHands CLI: https://github.com/All-Hands-AI/OpenHands?tab=readme-ov-file

    • CuriouslyC 15 hours ago ago

      https://github.com/anthropics/claude-code

      You can see the system prompts too.

      It's all how the base model has been trained to break tasks into discrete steps and work through them patiently, with some robustness to failure cases.

      • the_mitsuhiko 15 hours ago ago

        > https://github.com/anthropics/claude-code

        That repository does not contain the code. It's just used for the issue tracker and some example hooks.

        • koakuma-chan 15 hours ago ago
          • throwaway314155 13 hours ago ago

            That's been DMCA'd since you posted it. Happen to know where I can find a fork?

            • koakuma-chan 13 hours ago ago

              > That's been DMCA'd since you posted it.

              I know, thus the :trollface:

              > Happen to know where I can find a fork?

              I don't know where you can find a fork, but even if there is a fork somewhere that's still alive, which is unlikely, it would be for a really old version of Claude Code. You would probably be better off reverse engineering the minified JavaScript or whatever that ships with the latest Claude Code.

            • mlrtime 8 hours ago ago

              Just search dnakov/claude-code mirror and there is a path to the source code, I found it in 2 minutes.

        • CuriouslyC 15 hours ago ago

          It's a javascript app that gets installed on your local system...

          • the_mitsuhiko 14 hours ago ago

            I'm aware of how it works since I have been spending a lot of time over the last two months working with Claude's internals. If you have spent some time with it, you know that it is a transpiled and minified mess that is annoyingly hard to detangle. I'm very happy that claude-trace (and claude-bridge [1]) exists because it makes it much easier to work with the internals of Claude than if you have to decompile it yourself.

            [1]: https://github.com/badlogic/lemmy/tree/main/apps/claude-brid...

  • 0xpgm 3 hours ago ago

    So, what great new products or startups have these amazing coding agents helped create so far (and not on the AI supply side)?

    Anywhere to check?

    • anonzzzies 3 hours ago ago

      You really should not check that... I saw some dude on reddit saying that you can build your own SaaS in 20 days and launch and sell it. I checked out some of his apps; Claude Code can do each in a few hours. So can I without AI, as I have a batteries-included framework ready that has all the plumbing done. But Claude can do those from scratch in hours. So, 1 day with me doing some testing and fixing. That is not a product or a startup: it's a grift. But glory to him for getting it done anyway. Not many people launch and then actually make a few bucks.

      • noduerme 2 hours ago ago

        >> launch and sell it

        What AI can definitely not do is launch or sell anything.

        I can write some arbitrary SaaS in a few hours with my own framework, too - and know it's much more secure than anything written by AI. I also know how to launch it. (I'm not so good at the "selling" part).

        But if anyone could do all of this - including the launching and the selling - then they would not be selling themselves on Reddit or Youtube. Once you see someone explaining to you how to get rich quickly, you must assume that they have failed, or else they would not be wasting their time trying to sell you something. And from that you should deduce that it's not wise to take their advice.

        • anonzzzies 2 hours ago ago

          > What AI can definitely not do is launch or sell anything.

          Sure but he was particularly talking about the technical side of things.

          > (I'm not so good at the "selling" part).

          In person I am, but this newfangled 'influencer' selling or whatnot I do not understand and cannot do (yet) (I'm in my 50s so I can still learn).

          > But if anyone can do all of this - including the launching the selling - then they would not be selling themselves on Reddit or Youtube

          Yeah but most don't actually name the url of the product and he does. So that's a difference.

  • ahmedhawas123 15 hours ago ago

    Thanks for sharing this. At a time when there is a rush towards multi-agent systems, it's helpful to see how an LLM-first organization is going after it. Lots of the design aspects here are things I experiment with day to day, so it's good to see others use them as well.

    A few takeaways for me from this: (1) Long prompts are good - and don't forget basic things like explaining in the prompt what the tool is, how to help the user, etc. (2) Tool calling is basic af; you need more context (when to use, when not to use, etc.). (3) Using messages as the state of the memory for the system is OK; I've thought about fancy ways (e.g., persisting dataframes, passing variables between steps, etc.), but it seems like as context windows grow, messages should be OK.
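
    To make (3) concrete, here is roughly what messages-as-the-state looks like; a minimal sketch where call_model and run_tool are stand-ins for your provider SDK and tool dispatcher, not any real API:

      def call_model(messages):
          # Stand-in: a real version calls your provider's chat API and
          # returns the assistant message, which may request tool calls.
          return {"role": "assistant", "content": "done", "tool_calls": []}

      def run_tool(call):
          # Stand-in: dispatch to a real tool (read file, run tests, ...).
          return "ran " + call["name"]

      messages = [{"role": "system", "content": "You are a coding agent."}]

      def run_turn(user_input):
          messages.append({"role": "user", "content": user_input})
          while True:
              reply = call_model(messages)
              messages.append(reply)            # model output is state too
              if not reply["tool_calls"]:       # plain answer: turn is done
                  return reply["content"]
              for call in reply["tool_calls"]:  # run tools, feed results back
                  messages.append({"role": "tool", "content": run_tool(call)})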

    • chazeon 8 hours ago ago

      I want to note that long prompts are good only if the model is optimized for them. I have tried swapping out the underlying model for Claude Code. Most local models, even those claimed to work with long context and tool use, don't work well once the instructions get too long. This becomes an issue for tool use: it works well in small chatbot-type conversation demos, but at Claude Code's prompt lengths it just fails, either forgetting what tools are there, forgetting to use them, or returning the wrong formats. Only OpenAI's models and Google's Gemini kind of work, but not as well as Anthropic's own models. Besides, they feel much slower.

    • nuwandavek 13 hours ago ago

      (author of the blogpost here) Yeah, you can extract a LOT of performance from the basics and don't have to do any complicated setup for ~99% of use cases. Keep the loop simple, have clear tools (it is ok if tools overlap in function). Clarity and simplicity >>> everything else.

      • samuelstros 12 hours ago ago

        does a framework like vercel's ai sdk help, or is handling the loop + tool calling so straightforward that a framework is overcomplicating things?

        for context, i want to build a claude code like agent in a WYSIWYG markdown app. that's how i stumbled on your blog post :)

        • brabel 4 hours ago ago

          Check the OpenAI REST API reference. Most engines implement that and you can see how tool calls work. It’s just a matter of understanding the responses they give you, how to put them in the messages history and how to invoke a tool when the LLM asks for it.
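
          A rough sketch of that flow with the openai Python package (the model name and the read_file tool here are just examples):

            import json
            from openai import OpenAI

            client = OpenAI()
            tools = [{"type": "function", "function": {
                "name": "read_file",
                "description": "Read a text file from disk",
                "parameters": {"type": "object",
                               "properties": {"path": {"type": "string"}},
                               "required": ["path"]}}}]

            messages = [{"role": "user", "content": "Summarize README.md"}]
            resp = client.chat.completions.create(
                model="gpt-4o", messages=messages, tools=tools)
            msg = resp.choices[0].message
            while msg.tool_calls:
                messages.append(msg)  # the turn requesting tools goes into history
                for tc in msg.tool_calls:
                    args = json.loads(tc.function.arguments)
                    with open(args["path"]) as f:
                        result = f.read()  # invoke the tool the LLM asked for
                    messages.append({"role": "tool", "tool_call_id": tc.id,
                                     "content": result})
                resp = client.chat.completions.create(
                    model="gpt-4o", messages=messages, tools=tools)
                msg = resp.choices[0].message
            print(msg.content)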

        • ahmedhawas123 10 hours ago ago

          Function / tool calling is actually super simple. I'd honestly recommend doing it through a single LLM provider (e.g., OpenAI or Gemini) without a hard framework first, and then moving to one of the simpler frameworks if you feel the need (e.g., LangChain). Frameworks like LangGraph and others can get really complicated really quickly.

        • nuwandavek 6 hours ago ago

          There may be other reasons to use ai sdk, but I'd highly recommend starting with a simple loop + port most relevant tools from Claude Code before using any framework.

          Nice, do share a link, would love to check out your agent!

  • alex1138 16 hours ago ago

    What do people think of Google's Gemini (Pro?) compared to Claude for code?

    I really like a lot of what Google produces, but they can't seem to keep a product around without shutting it down, and they can be pretty ham-fisted, both with corporate control (Chrome and corrupt practices) and censorship.

    • CuriouslyC 15 hours ago ago

      Gemini is amazing for taking a merge file of your whole repo, dropping it in there, and chatting about stuff. The level of whole codebase understanding is unreal, and it can do some amazing architectural planning assistance. Claude is nowhere near able to do that.

      My tactic is to work with Gemini to build a dense summary of the project and create a high level plan of action, then take that to gpt5 and have it try to improve the plan, and convert it to a hyper detailed workflow xml document laying out all the steps to implement the plan, which I then hand to claude.

      This avoids pretty much all of Claude's unplanned bumbling.

    • koakuma-chan 16 hours ago ago

      I don't think Gemini Pro is necessarily worse at coding, but in my experience Claude is substantially better at "terminal" tasks (i.e. working with the model through a CLI in the terminal) and most of the CLIs use Claude, see https://www.tbench.ai/leaderboard.

    • jsight 16 hours ago ago

      For the web ui (chat)? I actually really like gemini 2.5 pro.

      For the command line tool (claude code vs gemini code)? It isn't even close. Gemini code was useless. Claude code was mostly just slow.

      • lifthrasiir 3 hours ago ago

        Yeah, the main strength of gemini-cli is that it's open source, but it still needs a lot of polish. I ended up building my own web-based interactive agent based on gemini-cli [1] out of frustration.

        [1] https://github.com/lifthrasiir/angel

      • upcoming-sesame 13 hours ago ago

        You mean Gemini CLI. Yeah it's confusing

        • jsight 12 hours ago ago

          Thanks, that's the one!

      • Herring 13 hours ago ago

        Yeah I was also getting much better results on the Gemini web ui compared to the Gemini terminal. Haven't gotten to Claude yet.

    • esafak 9 hours ago ago

      I used to like it a lot but I feel like it got dumber lately. Am I imagining things or has anyone else observed this too?

    • jonfw 16 hours ago ago

      Gemini is better at helping to debug difficult problems that require following multiple function calls.

      I think Claude is much more predictable and follows instructions better- the todo list it manages seems very helpful in this respect.

    • donperignon 6 hours ago ago

      Personally, Gemini has been giving me better results. Claude keeps trying to generate React code even when the whole context and my command are Svelte, and it constantly fails to give me something that can at least run. Gemini, on the other hand, has been pretty good with styling and useful with the business logic. I don't get all the hype around Claude.

    • divan 15 hours ago ago

      In my recent tests I found it quite smart at analyzing the bigger picture (e.g. "hey, the test is failing not because of that, but because the whole assumption has changed - let me rewrite this test from scratch"). But it also got stuck a few times ("I can't edit the file, I'm stuck, let me try something completely different"). The biggest difference so far is the communication style - it's a bit... snarky? I.e. comments like "yeah, tests are failing - as I suspected". Why the f did it suspect failing tests on a project it sees for the first time? :D

    • Keyframe 16 hours ago ago

      It's doing rather well at thinking, but not at coding. When it codes, often enough it runs in circles and ignores input. Where I find it useful is reading through larger codebases and distilling what I need to find out from them. I even call Gemini from Claude to consult it on certain things. Opus is also like that, btw, but a bit better at coding. Sonnet, though, excels at coding, in my experience.

    • yomismoaqui 16 hours ago ago

      According to the guys from Amp, Claude Sonnet/Opus are better at tool use.

    • ezfe 16 hours ago ago

      Gemini frequently didn't write code for me for no explicable reason, and just talked about a hypothetical solution. Seems like a tooling issue though.

      • djmips 16 hours ago ago

        Sounds almost human!

        • brabel 4 hours ago ago

          LLMs are built on human content and they do behave similarly to humans sometimes, including both the good and the bad.

    • nicce 15 hours ago ago

      If you could control the model with system commands, it would be very good. But so far I have failed miserably. The model is too verbose and helpful.

    • stabbles 16 hours ago ago

      In my experience it's better at lower level stuff, like systems programming. A pass afterwards with claude makes the code more readable.

    • filchermcurr 14 hours ago ago

      The Gemini CLI tool is atrocious. It might work sometimes for analyzing code, but for modifying files, never. The inevitable conclusion of every session I've ever tried has been an infinite loop. Sometimes it's an infinite loop of self-deprecation, sometimes just repeating itself to failure, usually repeating the same tool failure until it catches it as an infinite loop. Tool usage frequently (we're talking 90% of the time) fails. It's also, frankly, just a bummer to talk to. The "personality" is depressed, self-deprecating, and just overall really weird.

      That's been my experience, anyway. Maybe it hates me? I sure hate it.

      • klipklop 11 hours ago ago

        This matches my experience with it. I won’t let it touch any code I have not yet safely checked in before firing up Gemini. It will commonly get into a death loop mid session that can’t be recovered from.

    • KaoruAoiShiho 16 hours ago ago

      It sucks.

      • KaoruAoiShiho 14 hours ago ago

        Lol downvoted, come on anyone who has used gemini and claude code knows there's no comparison... gimme a break.

        • bitpush 13 hours ago ago

          You're getting downvoted because of the curt "it sucks", which shows a level of shallowness in your understanding.

          Nothing in the world is simply outright garbage. Even the seemingly worst products exist for a reason and are used for a variety of use cases.

          So, take a step back and reevaluate whether your reply could have been better. Because, as it stands, it simply "just sucks".

  • 1zael 15 hours ago ago

    I've literally built the entire MVP of my startup on Claude Code and now have paying customers. I've got an existential worry that I'm going to have a SEV incident that brings the whole house of cards down, but until then I'm constantly leveraging Claude for fixing security vulnerabilities, implementing test-driven development, and planning out the software architecture in accordance with my long-term product roadmap. I hope this story becomes more and more common as time passes.

    • ComputerGuru 15 hours ago ago

      > but until then I'm constantly leveraging Claude for fixing security vulnerabilities

      That it authored in the first place?

      • dpe82 15 hours ago ago

        Do you ever fix your own bugs?

        • janice1999 14 hours ago ago

          Humans have the capacity to learn from their own mistakes without redoing a lifetime of education.

        • ComputerGuru 14 hours ago ago

          Bugs, yes. Security vulnerabilities? Rarely enough that it wouldn’t make my HN list. It’s not remotely hard to avoid the most common issues.

    • davepeck 7 hours ago ago

      > I've literally built the entire MVP of my startup on Claude Code and now have paying customers.

      Would you mind linking to your startup? I’m genuinely curious to see it.

      (I won’t reply back with opinions about it. I just want to know what people are actually building with these tools!)

      • jaggederest 4 hours ago ago

        My github has examples of work I've done recently that are open source.

        I'm deliberately trying not to do too much manual coding right now so I can figure out these (infuriating/wonderful) tools.

    • lajisam 15 hours ago ago

      “Implementing test-driven development, and planning out software architecture in accordance with my long-term product roadmap” can you give some concrete examples of how CC helped you here?

      • 1zael 8 hours ago ago

        Yeah, so I continuously maintain a claude.md file with the feature roadmap for my product (which changes every week but acts as a source of truth). I feed that into a Claude software-architecture agent that I created, which reviews proposed changes for my current feature build against the longer-term roadmap to ensure that 1) I don't create tech debt with my current approach and 2) I identify opportunities to parallelize work that could help with multiple upcoming features at once.

        I also have a code reviewer agent in CC that writes all my unit and integration tests, which feeds into my CI/CD pipeline. I use the "/security" command that Claude recently released to review my code for security vulnerabilities, while also leveraging a red team agent that tests my codebase for vulnerabilities to patch.

        I'm starting to integrate Claude into Linear so I can assign Linear tickets to Claude to start working on while I tackle core stuff. Hope that helps!

    • imiric 15 hours ago ago

      Well, don't be shy, share what CC helped you build.

      • orsorna 15 hours ago ago

        You're speaking to a wall. For whatever reason, the type of people to espouse the wonders of their LLM workflow never reveal what kind of useful output they get from it, never mind substantiate their claims.

        • jaggederest 4 hours ago ago

          I generally work as openly as possible on github, and I am deliberately avoiding manual coding for a while to try to learn these (infuriating/wonderful) tools more thoroughly.

          Unfortunately I can't always share all of my work, but everything on github after perhaps 2025-06-01 is as vibe-coded as I can get it to be. (I manually review commits before they're pushed, and PRs once in a complete state, but I always feed those reviews back into the tooling, not fix them manually, unless I get completely fed up.)

        • turnsout 9 hours ago ago

          There’s still a stigma. I think people are worried that if it gets out that their startup was built with the help of an LLM, they’ll lose customers who don’t want to pay for something “vibe coded.”

          Honestly I don’t think customers care.

          • mlrtime 8 hours ago ago

            I used the analogy of how online dating started. I remember [some] people were embarrassed to say they met online, so they would make up a story. We're in that phase of AI development; it will pass.

      • 1zael 8 hours ago ago

        Answered above, but to be concrete on features: it helped me build an end-to-end multi-stage pipeline architecture for video and audio transcription, LLM analysis, content generation, and evals. It took care of stuff like Postgres storage and pgvector for RAG-powered semantic search, background job orchestration with intelligent retry logic, Celery workers for background jobs, and MCP connectors.

    • lifestyleguru 15 hours ago ago

      duh, I ordered Claude Code to simply transfer money monthly to my bank account and it does.

    • foobarbecue 15 hours ago ago

      > I hope this story becomes more and more common as time passes.

      Why????????????

      Why do you want devs to lose cognizance of their own "work" to the point that they have "existential worry"?

      Why are people like you trying to drown us all in slop? I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.

      Is it because you're lazy?

      • 1zael 8 hours ago ago

        Congratulations, you've replaced my pile of "slop" (which really is functional, tight code written by AI in 1/1000th of the time it would take me to write it) with your "shorter" code that has the exact same functionality and performance. Congrats? The reality is that no one (except maybe in competitive programming) cares about the length of your code so long as it's maintainable.

      • PUSH_AX 6 hours ago ago

        Clean code? Fewer lines? Found the intermediate.

      • BeetleB 14 hours ago ago

        > I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.

        Actually, no. When LLMs produce good, working code, it also tends to be efficient (in terms of lines, etc).

        May vary with language and domain, though.

        • stavros 14 hours ago ago

          Eh, when is that, though? I'm always worrying about the bugs that I haven't noticed if I don't review the changes. The other day, I gave it a four-step algorithm to implement, and it skipped three of the steps because it didn't think they were necessary (they were).

          • BeetleB 14 hours ago ago

            Hmm...

            It may be the size of the changes you're asking for. I tend to micromanage it. I don't know your algorithm, but if it's complex enough, I may have done 4 separate prompts - one for each step.

            • foobarbecue 14 hours ago ago

              Isn't it easier to just write the code???

              • BeetleB 13 hours ago ago

                Depends on the algorithm. When you've been coding for a few decades, you really, really don't want to write yet another trivial algorithm you've written multiple tens of times in your life. There's no joy in it.

                Let the LLM do the boring stuff, and focus on writing the fun stuff.

                Also, setting up logging in Python is never fun.

                • foobarbecue 10 hours ago ago

                  Right-- it's only really capable of trivial code and boilerplate, which I usually just copy from one of my older programs, examples in docs, or a highly-ranked recent SO answer. Saves me from having to converse with an expensive chatbot, and I don't have to worry about random hallucinations.

                  If it's a new, non-trivial algorithm, I enjoy writing it.

                  • BeetleB 7 hours ago ago

                    For me, it's a lot easier getting the LLM to do it than browsing through multiple SO answers, or even finding some old code of mine.

                    Oh, and the chatbot is cheap. I pay for API usage. On average I'm paying less than $5 per month.

                    > and I don't have to worry about random hallucinations.

                    For boilerplate code, I don't think I've ever had to fix anything. It's always worked the first time. If it didn't, my prompt was at fault.

                • a5c11 7 hours ago ago

                  > Also, setting up logging in Python is never fun.

                  import logging

                  • BeetleB 7 hours ago ago

                    Not fun at all.

                    Configuring it to produce useful stuff (e.g. timestamps, autologging exceptions, etc). Very boilerplate and tedious.
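
                    Roughly this, every time (timestamps plus routing uncaught exceptions through the logger):

                      import logging
                      import sys

                      logging.basicConfig(
                          level=logging.INFO,
                          format="%(asctime)s %(levelname)s %(name)s: %(message)s",
                          handlers=[logging.StreamHandler(),
                                    logging.FileHandler("app.log")],
                      )
                      log = logging.getLogger(__name__)

                      def log_uncaught(exc_type, exc, tb):
                          # Route uncaught exceptions into the log, not bare stderr.
                          log.critical("Uncaught exception",
                                       exc_info=(exc_type, exc, tb))

                      sys.excepthook = log_uncaught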

            • stavros 14 hours ago ago

              It was really simple, just traversing a list up and down twice. It just didn't see the reason why, so it skipped it all (the reason was to prevent race conditions).

      • Mallowram 15 hours ago ago

        second

  • OtherShrezzing 16 hours ago ago

    I think it’s just that the base model is good at real world coding tasks - as opposed to the types of coding tasks in the common benchmarks.

    If you use GitHub Copilot - which has its own system level prompts - you can hotswap between models, and Claude outperforms OpenAI’s and Google’s models by such a large margin that the others are functionally useless in comparison.

    • ec109685 16 hours ago ago

      Anthropic has opportunities to optimize their models / prompts during reinforcement learning, so the advice from the article to stay close to what works in Claude code is valid and probably has more applicability for Anthropic models than applying the same techniques to others.

      With a subscription plan, Anthropic is highly incentivized to be efficient in their loops beyond just making it a better experience for users.

    • paool 8 hours ago ago

      It's not just the base model.

      Try using opus with cline in vs code. Then use Claude code.

      I don't know the best way to quantify the differences, but I know I get more done in CC.

    • badestrand 8 hours ago ago

      I read all the praise about Claude Code, tried it for a month and was very disappointed. For me it doesn't work any better than Cursor's sidebar and has worse UX on top. I wonder if I am doing something wrong because it just makes lots of stupid mistakes when coding for me, in two different code bases.

  • sdsd 16 hours ago ago

    Oof, this comes at a hard moment in my Claude Code usage. I'm trying to have it help me debug some Elastic issues on Security Onion but after a few minutes it spits out a zillion lines of obfuscated JS and says:

      Error: kill EPERM
          at process.kill (node:internal/process/per_thread:226:13)
          at Ba2 (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19791)
          at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19664
          at Array.forEach (<anonymous>)
          at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19635
          at Array.forEach (<anonymous>)
          at Aa2 (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19607)
          at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19538
          at ChildProcess.W (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:20023)
          at ChildProcess.emit (node:events:519:28) {
        errno: -1,
        code: 'EPERM',
        syscall: 'kill'
      }
    
    I'm guessing one of the scripts it runs kills Node.js processes, and that inadvertently kills Claude as well. Or maybe it feels bad that it can't solve my problem and commits suicide.

    In any case, I wish it would stay alive and help me lol.

    • schmookeeg 12 hours ago ago

      Claude and some of the edgier parts of localstack are not friends either. It's pretty okay at rust which surprised me.

      It makes me think that the language/platform/architecture that is "most known" by LLMs will soon be the preferred -- sort of a homogenization of technologies by LLM usage. Because if you can be 10x as successfully vibey in, say, nodejs versus elixir or go -- well, why would you opt for those in a greenfield project at all? Particularly if you aren't a tech shop and that choice allows you to use junior coders as if they were midlevel or senior.

      • actsasbuffoon 10 hours ago ago

        This mirrors a weird thought I’ve had recently. It’s not a thing I necessarily agree with, but just an idea.

        I hear people say things like, “AI isn’t coming for my job because LLMs suck at [language or tech stack]!”

        And I wonder, does that just mean that other stacks have an advantage? If a senior engineer with Claude Code can solve the problem in Python/TypeScript in significantly less time than you can solve it in [tech stack] then are you really safe? Maybe you still stack up well against your coworkers, but how well does your company stack up against the competition?

        And then the even more distressing thought accompanies it: I don’t like the code that LLMs produce because it looks nothing like the code I write by hand. But how relevant is my handwritten code becoming in a world where I can move 5x faster with coding agents? Is this… shitty style of LLM generated code actually easier for code agents to understand?

        Like I said, I don’t endorse either of these ideas. They’re just questions that make me uncomfortable because I can’t definitively answer them right now.

        • majormajor 8 hours ago ago

          All the disadvantages of those stacks still exist.

          So if you need to avoid GC issues, or have robust type safety, or whatever it is, to gain an edge in a certain industry or scenario, you can't just switch to the vibe tool of choice without (best case) giving up $$$ to pay to make up for the inefficiency or (worst case) having more failures that your customers won't tolerate.

          But this means the gap between the "hard" work and the "easy" work may become larger - compensation included. Probably most notably in FAANG companies where people are brought in expected to be able to do "hard" work and then frequently given relatively-easy CRUD work in low-ROI ancillary projects but with higher $$$$ than that work would give anywhere else.

          And the places currently happy to hire disaffected ex-FAANG engineers who realized they were being wasted on polishing widgets may start having more hiring difficulty as the pipeline dries up. Like trying to hire for assembly or COBOL today.

        • dgunay 10 hours ago ago

          Letting go of the particulars of the generated code is proving difficult for me. I hand edit most of the code my agents produce for taste even if it is correct, but I feel that in the long term that's not the optimal use of my time in agent-driven programming. Maybe the models will just get so good that they know how I would write it myself.

          • bilekas 9 hours ago ago

            I would argue this approach will help you in the long term with code maintainability. Which I feel will be one of the biggest issues down the line with AI generated codebases as they get larger.

            • monkpit 9 hours ago ago

              The solution is to codify these sorts of things in prompts, tool use, and gateways like linters, etc. You have to let go…

              • bilekas 7 hours ago ago

                What do you mean, "you have to let go"?

                I use some ai tools and sometimes they're fine, but I won't in my lifetime anyway hand over everything to an AI, not out of some fear or anything, but even purely as a hobby. I like creating things from scratch, I like working out problems, why would I need to let that go?

                • jaggederest 4 hours ago ago

                  Well, the point is, if it's not a hobby, you have to encode your preferences in lint and formatters, rather than holding onto manually messing with the output.

                  It's really freeing to say "Well, if the linter and the formatter don't catch it, it doesn't matter". I always update lint settings (writing new rules if needed) based on nit PR feedback, so the codebase becomes easier to review over time.

                  It's the same principle as any other kind of development - let the machine do what the machine does well.

        • hoyo1s 8 hours ago ago

          Sometimes one just needs [language or tech stack] to do something, especially for performance or security considerations.

          For now, LLMs still suffer from hallucination and a lack of generalizability. The large amount of generated code is sometimes not a benefit but a technical debt.

          LLMs are good for quick, open-ended prototype web applications, but if we need a stable, consistent, maintainable, secure framework, or scientific computing, pure LLMs are not enough; one can't vibe everything without checking details.

        • fragmede 10 hours ago ago

          LLMs write python and typescript well, because of all the examples in their training data. But what if we made a new programming language whose goal was to be optimal for an LLM to generate? Would it be closer to assembly? If we project that the future is vibe coded, and we scarcely look at the outputted code, testing instead that the output matches the input correctly, what would that language look like?

          • alankarmisra 9 hours ago ago

            They’d presumably do worse. LLMs have no intrinsic sense of programming logic. They are merely pattern matching against a large training set. If you invent a new language that doesn’t have sufficient training examples for a variety of coding tasks, and is syntactically very different from all the existing languages, the LLMs wouldn’t have enough training data and would do very badly.

          • majormajor 8 hours ago ago

            What is it that you think would make a certain non-Python language "more optimal" for an LLM? Is there something inherently LLM-friendly about certain language patterns or is "huge sets of training examples" and "a robust standard library" (the latter to conserve tokens/attention vs having to spit out super-verbose 20x longer assembly all day) all "optimality" means?

          • metrix 9 hours ago ago

            I have thought the same thing. How would it be created? Would an LLM come up with the idea for the language, or would a dev create a language designed for an LLM?

            And how do we get the LLM to gain knowledge of this new language when we have no example usage of it?

          • hoyo1s 8 hours ago ago

            Strict type-checking, and at least some dependent types and inductive types.

    • yc-kraln 14 hours ago ago

      I get this issue when it uses sudo to run a process with root privileges, and then times out.

    • triyambakam 15 hours ago ago

      I would try upgrading or wiping away your current install and re-installing it. There might be some cached files somewhere that are in a bad state. At least that's what fixed it for me when I recently came across something similar.

    • sixtyj 16 hours ago ago

      Jumping to another LLM helps me find out what happened. *This is not official advice :)

    • idontwantthis 15 hours ago ago

      I have had zero good results with any LLM and Elasticsearch. Everything it spits out is a hallucination because there aren’t very many examples of anything complete and in context on the internet.

  • whazor 5 hours ago ago

    I think the key to the success of Claude Code is Unix.

    Claude can run commands to search code, test compilation, and perform various other operations.

    Unix is great because its commands are well-documented, and the training data is abundant with examples.
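
    In agent terms, that can be as small as one tool wrapping the shell (a sketch; the timeout and truncation limit are arbitrary choices):

      import subprocess

      def bash_tool(command: str, timeout: int = 120) -> str:
          """Run a shell command and return its combined output for the model."""
          proc = subprocess.run(["bash", "-lc", command],
                                capture_output=True, text=True, timeout=timeout)
          out = proc.stdout + proc.stderr
          return out[:30_000]  # truncate so huge outputs don't flood the context

      # e.g. bash_tool("rg -n 'def main' src/") to search code,
      #      bash_tool("make test") to check compilation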

  • 12ian34 an hour ago ago

    claude code is a nightmare compared to cursor. terminal is not an appropriate UX unless you want to do stuff from your phone in a pinch. the main thing they got right is selling the idea of vibing to skeptical engineers by making it a CLI. i think it has more sensible defaults than cursor though which is another reason folks like it out of the box. cursor with a planner/executor system prompt works much nicer and is way less destructive. cc more for vibing IMO

  • anonzzzies 3 hours ago ago

    What's the best current CLI (with a non-interactive option) that is on par with Claude Code but can work with other LLMs like Ollama, OpenRouter, etc.? I tried stuff like aider, but it cannot discover files, and the open source Gemini one, but it was terrible. What is a good one that would maybe match CC if you plug in Opus?

  • athrowaway3z 16 hours ago ago

    > "THIS IS IMPORTANT" is still State of the Art

    Had similar problems until I saw the advice "Don't say what it shouldn't do; focus on what it should".

    i.e. make sure when it reaches for the 'thing', it has the alternative in context.

    Haven't had those problems since then.

    • amelius 15 hours ago ago

      I mean, if advice like this worked, then why wouldn't Anthropic let the LLM say it, for instance?

      • donperignon 6 hours ago ago

        Because it’s embarrassing, and probably nobody understands why this works. Depending on heuristics that can completely change in the next model is really bad…

        • amelius an hour ago ago

          I'd say it's exactly because behavior might change that you have to include proper instructions for each model.

          And depending on people in forums to provide these instructions is of course not great.

  • noduerme 2 hours ago ago

    Claude Code has definitely attracted me, as in: I would like to try it on a new project. But just speaking as a lone coder, it absolutely terrifies me to give something access to my whole system and CLI. I have one main laptop and everything is on it: all my repos and API keys and SSH keys, my carefully tuned dev environment... I have no idea what it might read or upload, let alone what it might try to execute. I'm tempted enough to try it that I might set up a completely walled-off virtual machine for the purpose, but then I don't know how much benefit I'd get from it.

    Do you just let it run rampant on your system and do whatever it thinks it should, installing whatever it wants and sucking all your config files into the cloud or what?

  • yumraj 15 hours ago ago

    I made insane progress with CC over the last several weeks, but lately I’ve noticed progress stalling.

    I’m in the middle of some refactoring/bug fixing/optimization, but it’s constantly running into issues, making half-baked changes, not able to fix regressions, etc. Still trying to figure out how to make it do a better job. Might have to break the work into smaller chunks or something. It’s been a pretty frustrating couple of weeks.

    If anyone has pointers, I’m all ears!!

    • jampa 9 hours ago ago

      I felt that too. It turns out I was getting 'too comfortable' while using CC. The best way is to treat CC like a junior engineer and overexplain things before letting it do anything. With time, you start to trust CC, but you shouldn't, because it is still the same LLM it was when you started.

      Another thing is that before, you were in a greenfield project, so Claude didn't need any context to do new things. Now, your codebase is larger, so you need to point out to Claude where it should find more information. You need to spoon-feed the relevant files with "@" where you want it to look up things and make changes.

      If you feel Claude is lazy, force it to use a bigger thinking budget: "think" < "think hard" < "think harder" < "ultrathink". Sometimes I like to throw in "ultrathink" and do something else while it codes. [1]

      [1]: https://www.anthropic.com/engineering/claude-code-best-pract...

    • fourthark 7 hours ago ago

      I ran into this too.

      In my case it was exactly the kind of situation where I would also run into trouble on my own - trying to change too many things at once.

      It was doing superbly for smaller, more contained tasks.

      I may have to revert and approach each task on its own.

      I find I need to know better than Claude what is going on, and guide it every step. It will figure out the right code if I show it where it should go, that kind of thing.

      I think people may be underestimating / underreporting how much they have to be in the loop, guiding it.

      It’s not really autonomous or responsible. But it can still be very useful!

    • swader999 7 hours ago ago

      Sometimes I take a repomix dump of a slice where there are issues and then get ChatGPT to analyze it and come up with a step-by-step guide for Claude to follow to fix it. That has worked.

    • imiric 15 hours ago ago

      > If anyone has pointers, I’m all ears!!

      Give programming a try, you might like it.

      • yumraj 13 hours ago ago

        Yeah, have been doing that for 30 years.

        Next…

  • nojs 5 hours ago ago

    I’ve noticed that custom subagents in CC often perform noticeably worse than the main agent, even when told to use Opus and despite extreme prompt tuning. This seems to concur with the “keep it flat” logic here. But why should this be the case?

    • nuwandavek 5 hours ago ago

      (blogpost author here) I've noticed this too. My top guess would be that this type of sub-agent routing is outside the training distribution. It's possible that this gets better overnight with a model update. The second reason is that sub-agents make things very hard to debug - was the issue with the router prompt or the agent prompt? A flat set of tools and a flat loop make this a non-issue without loss of any real capability.

  • erelong 9 hours ago ago

    > The main takeaway, again, is to keep things simple.

    if true this seems like a bloated approach but tbh I wouldn't claim to know totally how to use Claude like the author here...

    I find you can get a lot of mileage out of "regular" prompts, I'd call them?

    Just asking for what you need one prompt at a time?

    I still can't visualize how any of the complexity on top of that, as discussed in the article, adds anything to carefully crafted prompts one at a time.

    I also still can't really visualize how claude works compared to simple prompts one at a time.

    Like, wouldn't it be more efficient to generate a prompt and then check it by looping through the appendix sections ("Main Claude Code System Prompt" and "All Claude Code Tools"), or is that basically what the LLM does somewhat mysteriously (it just works)? So like "give me while loop equivalent in [new language I'm learning]" is the entirety of the prompt... then if you need to you can loop through the appendix section? Otherwise isn't that a massive over-use of tokens, and the requests might even be ignored because they're too complex?

    The control flow eludes me a bit here. I otherwise get the impression that the LLM does not use the appendix sections correctly by adding them to prompts (like, couldn't it just ignore them at times)? It would seem like you'd get more accurate responses by separating that from whatever you're prompting and then checking the prompt through looping over the appendix sections.

    Does that make any sense?

    I'm visualizing coding an entire program as prompting discrete pieces of it. I have not needed elaborate .md files to do that, you just ask for "how to do a while loop equivalent in [new language I'm learning]" for example. It's possible my prompts are much simpler for my uses, but I still haven't seen any write-ups on how people are constructing elaborate programs in some other way.

    Like how are people stringing prompts together to create whole programs? (I guess is one question I have that comes to mind)

    I guess maybe I need to find a prompt-by-prompt breakdown of some people building things to get a clearer picture of how LLMs are being used

    • zackify 9 hours ago ago

      How you see and use it is the same way I do. So interested to hear other replies

      • zackify 8 hours ago ago

        Wow. Auto correct. I meant “interested”

  • syntaxing 16 hours ago ago

    I don’t know if I’m doing something wrong. I was using Sonnet 4 with GitHub Copilot. A week ago I switched to Claude Code. I find GitHub Copilot solves problems and bugs way better than Claude Code. For some reason, Claude Code seems very lazy. Has anyone experienced something similar?

    • riazrizvi 7 hours ago ago

      I use ChatGPT, and I have used Claude several times. I’ve not found Claude to be better. I’ve come to the conclusion that all these posts asking why Claude is so good at coding are part of some marketing approach. I think it’s tied to how Claude prefers to hook into repos; maybe it’s tied to a business strategy of acquiring a mega code dataset. So they are especially motivated to push this narrative versus, say, OpenAI or other players.

      (I don’t use any clients that answer coding questions by using the context of my repos).

      • tcoff91 6 hours ago ago

        If you aren’t using a client that automatically uses the context of your repos then you don’t understand why people like Claude. You need to use the Claude Code CLI in order to really get the best results.

    • libraryofbabel 16 hours ago ago

      The consensus is the opposite: most people find copilot does less well than Claude with both using sonnet 4. Without discounting your experience, you’ll need to give us more detail about what exactly you were trying to do (what problem, what prompt) and what you mean by “lazy” if you want any meaningful advice though.

      • sojournerc 13 hours ago ago

        Where do you find this "consensus"?

        • rsanek 11 hours ago ago

          Read HN threads, talk to people using AI a lot. I have the same perception.

    • StephenAshmore 16 hours ago ago

      It may be a configuration thing. I've found quite the opposite. Github Copilot using Sonnet 4 will not manage context very well, quite frequently resorting to running terminal commands to search for code even when I gave it the exact file it's looking for in the copilot context. Claude code, for me, is usually much smarter when it comes to reading code and then applying changes across a lot of files. I also have it integrated into the IDE so it can make visual changes in the editor similar to GitHub Copilot.

      • syntaxing 16 hours ago ago

        I do agree with you; GitHub Copilot uses more tokens, like you mentioned, with redundant searches. But at the end of the day, it solves the problem. Not sure the cost outweighs the benefit compared to Claude Code, though. Going to try Claude Code more and see if I'm prompting it incorrectly.

    • cosmic_cheese 16 hours ago ago

      I haven’t tried other LLMs but have a fair amount of experience with Claude Code, and there are definitely times when you have to be explicit about the route you want it to take and tell it to not take shortcuts.

      It’s not consistent, though. I haven’t figured out what they are but it feels like there are circumstances where it’s more prone to doing ugly hacky things.

    • wordofx 15 hours ago ago

      I have most of the tools set up so I can switch between them and test which is better. So far Amp and Claude Code are on top. GH Copilot is the worst. I know MS is desperately trying to copy its competitors, but the reality is, they are just copying features. They haven’t solved the system prompts. So the outcomes are just inferior.

  • gervwyk 16 hours ago ago

    We’re considering building a coding agent for Lowdefy[1], a framework that lets you build web apps with YAML config.

    For those who’ve built coding agents: do you think LLMs are better suited for generating structured config vs. raw code?

    My theory is that agents producing valid YAML/JSON schemas could be more reliable than code generation. The output is constrained, easier to validate, and when it breaks, you can actually debug it.

    I keep seeing people create apps with vibe-coding tools but then get stuck when they need to modify the generated code.

    Curious if others think config-based approaches are more practical for AI-assisted development.

    [1] https://github.com/lowdefy/lowdefy
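
    Part of why I suspect this: validating the model's output is a single function call, and the error message can be fed straight back to it (a sketch using pyyaml + jsonschema; the schema here is a toy, not Lowdefy's real one):

      import yaml
      from jsonschema import ValidationError, validate

      schema = {
          "type": "object",
          "required": ["pages"],
          "properties": {"pages": {"type": "array", "items": {
              "type": "object", "required": ["id", "type"]}}},
      }

      llm_output = """
      pages:
        - id: home
          type: PageHeaderMenu
      """

      try:
          validate(instance=yaml.safe_load(llm_output), schema=schema)
          print("config is valid")
      except ValidationError as e:
          feedback = e.message  # hand this back to the model to self-correct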

    • hamandcheese 15 hours ago ago

      > easier to validate

      This is essential to productivity for humans and LLMs alike. The more reliable your edit/test loop, the better your results will be. It doesn't matter if it's compiling code, validating yaml, or anything else.

      To your broader question. People have been trying to crack the low-code nut for ages. I don't think it's solvable. Either you make something overly restrictive, or you are inventing a very bad programming language which is doomed to fail because professional coders will never use it.

      • gervwyk 15 hours ago ago

        Good point. I’m making the assumption that if the LLM has a more limited feature space to produce as output, then the output is more predictable, and thus changes are faster to comprehend. Similar to when devs use popular libraries: there is a well-known abstraction, therefore less “new” code to comprehend, as I see familiar functions, making the code predictable to me.

    • ec109685 16 hours ago ago

      I wouldn’t get hung up on one-shotting anything. Output to a format that can be machine-verified, ideally one there are plenty of industry examples for.

      Then add a grader step to your agentic loop that is triggered after the files are modified. Give feedback to the model if there are any errors and it will fix them.
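
      A minimal shape for that loop (generate and run_checks are stand-ins for the model call and your machine verification, e.g. a linter or schema validator):

        def generate(messages):
            return "proposed edit"  # stand-in: call the model, apply its edits

        def run_checks():
            return ""  # stand-in: run linter/compiler/tests, return error text

        def agent_with_grader(task, max_rounds=5):
            messages = [{"role": "user", "content": task}]
            for _ in range(max_rounds):
                output = generate(messages)
                errors = run_checks()  # grader step, after files are modified
                if not errors:
                    return output      # machine-verified: done
                messages.append({"role": "user",
                                 "content": "Checks failed:\n" + errors +
                                            "\nPlease fix."})
            raise RuntimeError("still failing after max_rounds")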

    • amelius 16 hours ago ago

      How do you specify callbacks?

      Config files should be mature programming languages, not Yaml/Json files.

      • gervwyk 15 hours ago ago

        Callback: Blocks (React components) can register events with action chains (a sequential list of async functions) that will be called when the event is triggered. So it is defined in the React component. This abstraction of blocks, events, actions, operations, and requests is the only abstraction required in the schema to build fully functional web apps.

        Might sound crazy, but we've built full web apps in just YAML. Been doing this for about 5 years now, and it helps us scale to build many web apps, fast, that are easy to maintain. We at Resonancy[1] have found many benefits in doing so. I should write more about this.

        [1] - https://resonancy.io

  • BobSonOfBob 6 hours ago ago

    KISS always wins. Great breakdown article. Thanks!

  • itbeho 8 hours ago ago

    I use Claude Code with Elixir and Phoenix. It's been mostly great, but a short way into a project it seems to break something unrelated to the task at hand.

    • mike1o1 8 hours ago ago

      If you haven’t yet, you should try out the usage_rules mix package. I mostly use Ash, which has great support for usage rules, and it’s a night-and-day difference in effectiveness. Tidewave is also really nice as an MCP, as it lets the agent query hexdocs or your schema directly.

      https://hexdocs.pm/usage_rules/readme.html

      • itbeho 8 hours ago ago

        Thank you! I'll definitely check that out.

        • arcanemachiner 4 hours ago ago

          Also check out the AGENTS.MD file that's been added to Phoenix 1.8.

          Make sure you read it first though... I believe it expects Req to be present as a dependency when generating code that makes HTTP requests.

  • conception 15 hours ago ago

    I’ve seen that context forge has a way to use hooks to keep CC going after context condensing. Are there any other patterns or tools people are using with CC to keep it on task, with current context, until it has a validated completion of its task? I feel like we have all these tools separately, but nothing brings it all together while also not being crazy buggy.

    • kroaton 15 hours ago ago

      Load up the context with your information + task list (broken down into phases). Have Sonnet implement phase one tasks and mark phase 1 as done. Go into planning mode, have Opus review the work (you should ideally also review it at this point). Double press escape and go back to the point in the conversation where you loaded up the context with your information + task list. Tell it to do phase 2. Repeat until you run out of usage.

      • conception 13 hours ago ago

        Yes, I can manage CC through a task list, but there’s nothing technically stopping all your steps from happening automatically. That tool just doesn’t exist yet as far as I can tell, but it’s not a very advanced tool to build. I’m surprised no one has put those steps together.

        Also if the task runs out of context it will get progressively worse rather than refresh its own context from time to time.

      • kroaton 15 hours ago ago

        From time to time, go into Opus planning mode, have it review your entire codebase and tell it to go file by file and look for bugs, security issues, logical problems, etc. Have it make a list. Then load up the context + task list...

    • rolls-reus 15 hours ago ago

      What’s context forge?

  • siva7 16 hours ago ago

    It would be more interesting to compare what gemini cli and codex cli did wrong (though I haven't used either of them for weeks to months).

  • marmalade2413 15 hours ago ago

    I would be remiss if, after reading this, I didn't point people towards talk-box (https://github.com/rich-iannone/talk-box) from one of the creators of great tables.

  • diego_sandoval 16 hours ago ago

    It shocks me when people say that LLMs don't make them more productive, because my experience has been the complete opposite, especially with Claude Code.

    Either I'm worse than them at programming, to the point that I find an LLM useful and they don't, or they don't know how to use LLMs for coding.

    • timr 16 hours ago ago

      It depends very much on your use case, language popularity, experience coding, and the size of your project. If you work on a large, legacy code base in COBOL, it's going to be much harder than working on a toy greenfield application in React. If your prior knowledge writing code is minimal, the more amazing the results will seem, and vice-versa.

      Despite the persistent memes here and elsewhere, it doesn't depend very much on the particular tool you use (with the exception of model choice), how you hold it, or your experience prompting (beyond a bare minimum of competence). People who jump into any conversation with "use tool X" or "you just don't understand how to prompt" are the noise floor of any conversation about AI-assisted coding. Folks might as well be talking about Santeria.

      Even for projects that I initiate with LLM support, I find that the usefulness of the tool declines quickly as the codebase increases in size. The iron law of the context window rules everything.

      Edit: one thing I'll add, which I only recently realized exists (perhaps stupidly) is that there is a population of people who are willing to prompt expensive LLMs dozens of times to get a single working output. This approach seems to me to be roughly equivalent to pulling the lever on a slot machine, or blindly copy-pasting from Stack Overflow, and is not what I am talking about. I am talking about the tradeoffs involved in using LLMs as an assistant for human-guided programming.

      • ivan_gammel 15 hours ago ago

        Overall I would agree with you, but I'm starting to feel that this „iron law“ isn't as simple as that. After all, humans have a limited „context window“ too — we don't remember every small detail of a large project we have been working on for several years. Loose coupling and modularity help us, and can help an LLM, keep the size of the task manageable if you don't ask it to rebuild the whole thing. It's not the size that makes LLMs fail, but something else, probably the same things that make us fail.

        • timr 15 hours ago ago

          Humans have a limited short-term memory. Humans do not literally forget everything they've ever learned after each Q&A cycle.

          (Though now that I think of it, I might start interrupting people with “SUMMARIZING CONVERSATION HISTORY!” whenever they begin to bore me. Then I can change the subject.)

          • ivan_gammel 15 hours ago ago

            LLMs do not „forget“ everything completely either. By now, probably all major tools consume information from some form of memory (system prompt, Claude.md, project files, etc.) before your prompt. Claude Code rewrites the Claude.md, ChatGPT may modify its chat memory if it finds it necessary, etc.

            • timr 15 hours ago ago

              Writing stuff in a file is not “memory” (particularly if I have to do it), and in any case, it consumes context. Overrun the context window, and the tool doesn’t know about what is lost.

              There are various hacks these tools take to cram more crap into a fixed-size bucket, but it’s still fundamentally different than how a person thinks.

              • ivan_gammel 14 hours ago ago

                > Writing stuff in a file is not “memory”

                Do you realize what you just said? A file is, by definition, a way to organize data in a computer's memory. When you write instructions to an LLM, they persistently modify your prompts, making the LLM „remember“ certain stuff like coding conventions or explanations of your architectural choices.

                > particularly if I have to do it

                You have to communicate with the LLM about the code. You either do it persistently (must remember) or contextually (should know only in the context of the current session). So the word „particularly“ is out of place here. You choose one way or the other instead of being able to just say that some information is important or unimportant long-term. This communication would happen with humans too. LLMs have a different interface for it, a more explicit one (giving the perception of more effort, when it is in fact the same; and let's not forget that an LLM is able to decide for itself whether to remember something or not).

                > and in any case, it consumes context

                So what? Generalization is an effective way to compress information. Because of it, persistent instructions consume only a tiny fraction of the context, but they reduce the need for the LLM to do a full analysis of your code.

                > but it’s still fundamentally different than how a person thinks.

                Again, so what? Nobody can keep an entire codebase in short-term memory. Having this ability should not be the expectation, nor should lacking it be considered a major disadvantage. Yes, we use our „context windows“ differently in the thinking process. What matters is what information we pack in there and what we make of it.

          • faangguyindia 3 hours ago ago

            the "context" is the short term memory equivalent of LLM.

            Long term memory is its training data.

          • BeetleB 14 hours ago ago

            Both true and irrelevant.

            I've yet to have the "forgets everything" issue be a limiting factor. In fact, when using Aider, I aggressively make it forget everything several times per session.

            To me, it's a feature, not a drawback.

            I've certainly had coworkers I've had to tell, "Look, will you forget about X? That use case, while it looks similar, is actually quite different in its assumptions, etc. Stop invoking your experiences there!"

    • Aurornis 16 hours ago ago

      I’ve found LLMs useful at some specific tasks, but a complete waste of time at others.

      If I only ever wrote small Python scripts, did small to medium JavaScript front end or full stack websites, or a number of other generic tasks where LLMs are well trained I’d probably have a different opinion.

      Drop into one of my non-generic Rust codebases that does something complex, and I could spend hours trying to keep the LLM moving in the right direction and away from all of the dead ends and thought loops.

      It really depends on what you’re using them for.

      That said, there are a lot of commenters who haven’t spent more than a few hours playing with LLMs and see every LLM misstep as confirmation of their preconceived ideas that they’re entirely useless.

    • SXX 16 hours ago ago

      This heavily depends on what project and stack you're working on. LLMs are amazing for building MVPs or self-contained microservices on modern, popular, well-defined stacks. Every single dependency, every legacy or proprietary library, and every extra MCP makes them less usable. It gets much worse if the codebase itself is legacy, unless you can literally upload documentation for each used API into context.

      A lot of programmers work on maintaining huge monolithic codebases, built on top of 10-year-old tech using obscure proprietary dependencies. Usually they don't have most of the code to begin with, and the APIs are often not well documented.

    • majormajor 8 hours ago ago

      > It is extremely important to identify the most important task the LLM needs to perform and write out the algorithm for it. Try to role-play as the LLM and work through examples, identify all the decision points and write them explicitly. It helps if this is in the form of a flow-chart.

      I get lost a bit at things like this, from the link. The lessons in the article match my experience with LLMs and tools around them (see also: RAG is a pain in the ass and vector embedding similarity is very far from a magic bullet), but the takeaway - write really good prompts instead of writing code - doesn't ring true.

      If I need to write out all the decision points and steps of the change I'm going to make, why am I not just doing it myself?

      Especially when I have an editor that can do a lot of automated changes faster/safer than grep-based text-first tooling? If I know the language the syntax isn't an issue; if I don't know the language it's harder to trust the output of the model. (And if I 90% know the language but have some questions, I use an LLM to plow through the lines I used to have to go to Google for - which is a speedup, but a single-digit-percentage one.)

      My experience is that the tools fall down pretty quickly because I keep trying to get them to let me skip the details of every single task. That's how I work with real human coworkers. And then something goes sideways. When I try to pseudocode the full flow vs. actually writing the code, I lose the speed advantage, and often end up in a nasty 80%-there-but-I-don't-really-know-how-to-fix-the-other-20%-without-breaking-the-80% situation, because I noticed a case I didn't explicitly talk about that it guessed wrong on. So then it's either slow and tedious, or `git reset` and try again.

      (99% of these issues go away when doing greenfield tooling or scripts for operations or prototyping, which is what the vast majority of compelling "wow" examples I've seen have been, but only applies to my day job sometimes.)

    • breuleux 12 hours ago ago

      Speaking for myself, LLMs are reasonably good at writing tests or adapting existing structures, but they are not very good at doing what I actually want to do (design, novelty, trying to figure out the very best way to do a thing). I gain some productivity from the reduction of drudgery, but that's never been much of a bottleneck to begin with.

      The thing is, a lot of the code that people write is cookie-cutter stuff. Possibly the entirety of frontend development. It's not copy-paste per se, but it is porting and adapting common patterns on differently-shaped data. It's pseudo-copy-paste, and of course AI's going to be good at it, this is its whole schtick. But it's not, like, interesting coding.

    • jsight 16 hours ago ago

      What is performance like for you? I've been shocked at how many simple requests turn into >10 minutes of waiting.

      If people are getting faster responses than this regularly, it could account for a large amount of the difference in experiences.

      • totalhack 16 hours ago ago

        Agree with this, though I've mostly been using Gemini CLI. Some of the simplest things, like applying a small diff, take many minutes, as it loses track of the current file state and either takes minutes to figure it out or fails entirely.

    • tjr 16 hours ago ago

      What do you work on, and what do LLMs do that helps?

      (Not disagreeing, but most of these comments -- on both sides -- are pretty vague.)

      • SXX 16 hours ago ago

        For one, LLMs are good for building game prototypes. When all you care about is checking whether something is fun to play, it really doesn't matter how much tech debt you generate in the process.

        And you start from scratch all the time, so you can generate all the documentation before you ever start to generate code. And when the LLM slop becomes overwhelming, you just drop it and go check the next idea.

    • lambda 15 hours ago ago

      It can be more than one reason.

      First of all, keep in mind that research has shown that people generally overestimate the productivity gains of LLM coding assistance. Even when using a coding assistant makes them less productive, they feel like they are more productive.

      Second, yeah, experience matters, both with programming and with LLM coding assistants. The better you are, the less helpful the coding assistant will be; it can take less work to just write what you want than to convince an LLM to do it.

      Third, some people are more sensitive to the kinds of errors or style that LLMs tend to produce. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live up to my personal standards.

      • pton_xd 15 hours ago ago

        > Third, some people are more sensitive to the kinds of errors or style that LLMs tend to produce. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live up to my personal standards.

        I've noticed the stronger my opinions are about how code should be written or structured, the less productive LLMs feel to me. Then I'm just fighting them at every step to do things "my way."

        If I don't really have an opinion about what's going on, LLMs churning out hundreds of lines of mostly-working code is a huge boon. After all, I'd rather not spend the energy thinking through code I don't care about.

      • Uehreka 15 hours ago ago

        > research has shown that people generally overestimate the productivity gains of LLM coding assistance.

        I don’t think this research is fully baked. I don’t see a story in these results that aligns with my experience and makes me think “yeah, that actually is what I’m doing”. I get that at this point I’m supposed to go “the effect is so subtle that even I don’t notice it!” But experience tells me that’s not normally how this kind of thing works.

        Perhaps we’re still figuring out how to describe the positive effects of these tools or what axes we should really be measuring on, but the idea that there’s some sort of placebo effect going on here doesn’t pass muster.

    • socalgal2 16 hours ago ago

      I’m trying to learn jj. Both Gemini and ChatGPT gave me incorrect instructions 4 of 5 times

      https://jj-vcs.github.io/jj/

      • BeetleB 14 hours ago ago

        That's because jj is relatively new and constantly changing. The official tutorial is (by their own admission) out of date. People's blog posts also differ quite a bit in what commands/usage they recommend.

        I know it, because I recently learned jj, with a lot of struggling.

        If a human struggles learning it, I wouldn't expect LLMs to be much better.

        • esafak 9 hours ago ago

          That's ironic considering jj is supposed to make version control easier.

          • BeetleB 7 hours ago ago

            It does make it easier. Don't conflate documentation with the tool itself.

    • d-lisp 15 hours ago ago

      Basic engineering tasks (frontend development, Python, even some kinds of high-level 3D programming) are covered. If you do C/C++, or even Java, in a preexisting project, then you will have a hard time constantly explaining to the LLM why <previous answer> is absolute nonsense.

      Every time I tried LLMs, I had the feeling of talking with an ignoramus trying to sound VERY CLEVER: terrible mistakes on every line, surrounded by punchlines, rocket emojis, and tons of bullshit. (I'm partly kidding.)

      Maybe there are situations where LLMs are useful, e.g. if you can properly delimit and isolate your problem; but when you have to write code that is meant to mess with the internals of some piece of software, they don't do well.

      It would be nice to hear from both the "happy users" and the "malcontent users" of LLMs about the contexts in which they experimented with them, to be better informed on this question.

    • ta12653421 16 hours ago ago

      The productivity boost is unbelievable! If you handle it right, it's a boon - it's like having 3 junior devs at hand. And I'm talking about using the web interface.

      I guess most people are not paying and therefore can't use the project space (which is one of the best features), which unleashes its full magic.

      Even though I'm currently without a job, I'm still paying because it helps me.

      • ta12653421 15 hours ago ago

        LOL why do I get downvoted for explaining my experience? :-D

        • fourthark 6 hours ago ago

          So describe your experience without being a booster

        • pawelduda 13 hours ago ago

          Because you posted a success story about LLM usage on HN

          • ta12653421 13 hours ago ago

            Well, understood, but that part between the lines is not my fault?

            • pawelduda 11 hours ago ago

              Nah, never implied that

    • dsiegel2275 16 hours ago ago

      Agreed. I only started using Claude Code about a week and a half ago and I'm blown away by how productive I can be with it.

      • pawelduda 16 hours ago ago

        I've had occasions where a relatively short prompt saved me an entire day of debugging and fixing things, because it was a tech stack I barely knew. The most impressive part was when CC knew the changes might take some time to be applied, just ran `sleep 60; check logs;` 2-3 times, and then started checking elsewhere to see if something was stuck. It was; CC cleaned it up, and a minute later someone pinged me that it works.

    • cpursley 16 hours ago ago

      I feel like I could have written this myself; I'm truly dumbfounded. Maybe I am just a crappy coder but I don't think I'd be getting such good results with Claude Code if I were.

    • AaronAPU 15 hours ago ago

      If you're working with a massive, complicated C++ repository, you have to take the time to collect the right context and describe the problem precisely enough. Then you should actually read the code to verify it even makes sense. And at that point, if you're a principal-level developer, you could just as easily do it yourself.

      But the situation is very different if you’re coding slop in the first place (front end stuff, small repo simple code). The LLMs can churn that slop out at a rapid clip.

    • exe34 16 hours ago ago

      It makes me very productive with new prototypes in languages/frameworks that I'm not familiar with. Conversely, a lot of my work involves coding as part of understanding the business problem in the first place. Think making a plot to figure out how two things relate, and then, based on that understanding, trying out some other operation. It doesn't matter how fast the machine can write code; my slow meat brain is still the bottleneck. The coding is trivial.

    • wredcoll 16 hours ago ago

      The best part about LLM coding is that you feel productive even when you aren't, which makes coding a lot more fun.

  • myflash13 16 hours ago ago

    CC is so damn good I want to use its agent loop in my agent loop. I'm planning to build a browser agent for some specialized tasks and I'm literally just bundling a docker image with Claude Code and a headless browser and the Playwright MCP server.

  • gauravvppnd 4 hours ago ago

    Honestly, Claude’s code feels so good because it’s clean, logical, and easy to follow. It doesn’t just work—it makes sense when you read it, which saves a ton of time when debugging or building on top of it.

  • kristianp 7 hours ago ago

    Just fyi, at the end of the article there is a link to minusx.com which has an expired certificate.

    This server could not prove that it is minusx.com; its security certificate expired 553 days ago

  • faangguyindia 3 hours ago ago

    I am curious if any good existing solution exists for this tool:

        Tool name: WebFetch
        Tool description:
        - Fetches content from a specified URL and processes it using an AI model
        - Takes a URL and a prompt as input
        - Fetches the URL content, converts HTML to markdown
        - Processes the content with the prompt using a small, fast model
        - Returns the model's response about the content
        - Use this tool when you need to retrieve and analyze web content

    I came up with this one:

        import asyncio

        from playwright.async_api import async_playwright
        from readability import Document
        from markdownify import markdownify as md

        async def web_fetch_robust(url: str, prompt: str) -> str:
            """
            Fetches content from a URL using a headless browser to handle
            JS-heavy sites, processes it, and returns a summary.
            """
            try:
                async with async_playwright() as p:
                    # Launch a headless browser (Chromium is a good default)
                    browser = await p.chromium.launch()
                    page = await browser.new_page()

                    # --- Avoiding blocks ---
                    # Set a realistic User-Agent to mimic a real browser
                    await page.set_extra_http_headers({
                        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                                      'Chrome/91.0.4472.124 Safari/537.36'
                    })

                    # Navigate; wait_until='networkidle' is key for JS-heavy sites
                    await page.goto(url, wait_until='networkidle', timeout=15000)

                    # --- Extracting content ---
                    # Get the fully rendered HTML content
                    html_content = await page.content()
                    await browser.close()

                # --- Processing for token minimization ---
                # 1. Extract the main content using readability
                doc = Document(html_content)
                main_content_html = doc.summary()

                # 2. Convert to clean Markdown; strip links/images to save tokens
                markdown_content = md(main_content_html, strip=['a', 'img'])

                # 3. Process the clean content with the prompt using a small, fast model
                # summary = small_model.process(prompt, markdown_content)  # placeholder for your model call

                # For demonstration, just return a message
                summary = f"A summary of the JS-rendered content from {url} would be generated here."
                return summary

            except Exception as e:
                return f"Error fetching or processing URL with headless browser: {e}"

        # To run this async function:
        # result = asyncio.run(web_fetch_robust("https://example.com", "Summarize this."))
        # print(result)

  • radleta 15 hours ago ago

    I’d be curious to know what MCPs you’ve found useful with CC. Thoughts?

    • nuwandavek 13 hours ago ago

      (blogpost author here) I actually found none of them useful. I think MCP is an incomplete idea. Tools and the system prompt cannot be so cleanly separated (at least not yet). Just slapping on tools hurts performance more than it helps.

      I've now gone back to just using vanilla CC with a really really rich claude.md file.

    • faangguyindia 3 hours ago ago

      One area of improvement is being able to plug in GitHub issues.

      I run into bugs that are not documented anywhere except GitHub issues.

      Is it legal to search GitHub issues using an LLM? If yes, how?

  • whoknowsidont 15 hours ago ago

    It's not that good, most developers are just really that subpar lol.

  • roflyear 15 hours ago ago

    Claude Code is hilarious because often it'll say stuff that's basically "that's too hard, here's a bandaid fix" and implement it lol

  • revskill 4 hours ago ago

    Smart tool use.

  • sergiotapia 15 hours ago ago

    Is Claude Code better than Amp?

  • system2 7 hours ago ago

    As expected, many graybeard gatekeepers are telling others not to use LLMs for any type of coding or assistance.

  • HacklesRaised 16 hours ago ago

    Delusional asshats trying to draft the grift?

  • dingnuts 16 hours ago ago

    the article says CC doesn't use RAG, but then describes how it uses tools to Retrieve context to Aid Generation... RAG

    what am I missing here?

    edit: lol I "love" that I got downvoted for asking a simple question that might have an open answer. "be curious" says the rules. stay classy HN

    • ebzlo 16 hours ago ago

      Yes, technically it is RAG, but a lot of the community associates RAG with vector search specifically.

      • dingnuts 16 hours ago ago

        It does? Why? The term RAG, as I understand it, leaves the retrieval methodology vague so that different techniques can be used depending on the, er, context... which makes a lot more sense to me.

        • koakuma-chan 16 hours ago ago

          > why?

          Hype. There's nothing wrong with using, e.g., full-text search for RAG.

    • faangguyindia 3 hours ago ago

      It doesn't use RAG in the most obvious way, i.e., taking the whole text/code, generating embeddings, and performing vector search on it.

    • nuwandavek 13 hours ago ago

      (blogpost author here) You're right! I did make the distinction in an earlier draft, but decided to use "RAG" interchangeably with vector search, as it is popularly known today in code-gen systems. I'd probably go back to the previous version too.

      But I do think there is a qualitative difference between retrieving candidates and adding them to context before generating (retrieval-augmented generation) vs. the LLM searching for context until it is satisfied.
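
      A toy sketch of that difference (everything here is invented for illustration: retrieval is faked with a substring match, and the "model" calls are stubs, not any real API):

          # Stub "model" calls, standing in for real LLM requests
          def llm_generate(question: str, context: list[str]) -> str:
              return f"answer to {question!r} using {context}"

          def llm_next_query(question: str, context: list[str]) -> str | None:
              # Pretend the model is satisfied after one round of searching
              return None if context else question.split()[0]

          CORPUS = {
              "auth.py": "def login(user): ...",
              "billing.py": "def charge(card): ...",
          }

          def search(query: str) -> list[str]:
              # Stand-in for vector search (classic RAG) or grep (agentic search)
              return [p for p, src in CORPUS.items() if query in p or query in src]

          def classic_rag(question: str) -> str:
              # Retrieve once, up front; generate with whatever came back
              return llm_generate(question, search(question))

          def agentic_search(question: str, max_steps: int = 5) -> str:
              # The model picks each query and decides when it has enough context
              context: list[str] = []
              for _ in range(max_steps):
                  query = llm_next_query(question, context)
                  if query is None:  # model is satisfied
                      break
                  context += [hit for hit in search(query) if hit not in context]
              return llm_generate(question, context)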

    • BoorishBears 16 hours ago ago

      If you want to be really stringent, RAG originally referred to going from the user query to retrieving information directly based on that query, then passing it to an LLM. With CC, the LLM takes the raw user query and then crafts its own searches.

      But realistically, lots of RAG systems have LLM calls interleaved for various reasons, so what they probably mean is not doing the usual chunking + embeddings thing.

      • theptip 16 hours ago ago

        Yeah, TFA clearly explains their point. They mean RAG=vector search, and contrast this with tool calling (eg Grep).

  • LaGrange 16 hours ago ago

    [flagged]

    • dang 16 hours ago ago

      Please don't post unsubstantive comments to Hacker News, and especially not putdowns.

      The idea here is: if you have a substantive point, make it thoughtfully. If not, please don't comment until you do.

      https://news.ycombinator.com/newsguidelines.html

      • dingnuts 16 hours ago ago

          I appreciate the vague negative takes on tools like this, where it feels like there is so much hype it's impossible to have a different opinion. "It's bad" is perfectly substantive in my opinion; this person tried it, didn't like it, and doesn't have much more to say because of that, but it's still a useful perspective.

          Is this why HN is so dang pro-AI? The negative comments, even small ones, are moderated away? Explains a lot TBH.

        • danielbln 15 hours ago ago

          There is no value in a single poster saying "it's bad". I don't know this person, there is zero context on why I should care that this user thinks it's bad. Unless they state why they think it's bad, it adds nothing to the conversation and is just noise

        • dang 10 hours ago ago

          HN is by no means "pro-AI". It's sharply divided, and (as always with these things) each side assumes the other side is dominant.

        • h4ch1 16 hours ago ago

          I think this comment would be a little better by specifying WHY it's bad instead of just a "it's bad" like it's a Twitter thread.

          • LaGrange 16 hours ago ago

            The subject is pretty exhausted. The reason I post "it's bad" is that, honestly, expanding on it just feels like a waste of time and energy. The point is demonstrating that this _isn't_ a consensus, and not much more than that.

            Edit: bonus points if this gets me banned.

            • dang 10 hours ago ago

              (We don't ban people for posting like this!)

              If it felt like a waste of time and energy to post something substantive, rather than the GP comment (https://news.ycombinator.com/item?id=44998577), then you should have just posted nothing. That comment was obviously neither substantive nor thoughtful. This is hardly a borderline call!

              We want substantive, thoughtful comments from people who do have the time and energy to contribute them.

              Btw, to avoid a misunderstanding that sometimes shows up: it's fine for comments to be critical; that is, it's possible to be substantive, thoughtful, and critical all at the same time. For example, I skimmed through your account's most recent comments and saw several of that kind, e.g. https://news.ycombinator.com/item?id=44299479 and https://news.ycombinator.com/item?id=42882357. If your GP comment had been like that, it would have been fine; you don't have to like Claude Code (or whatever the $thing is).

        • exe34 16 hours ago ago

          That wasn't a negative comment though. A negative comment would explain what they didn't like about it. This was the digital equivalent of fly-tipping.

  • on_the_train 15 hours ago ago

    The lengths people will go to in order to avoid writing code are astonishing.

    • apwell23 15 hours ago ago

      Writing code is not the fun part of coding. I only realized that after using Claude Code.