Problem: Claude Code 2.1.0 crashes with Invalid Version: 2.1.0 (2026-01-07) because the CHANGELOG.md format changed to include dates in version headers (e.g., ## 2.1.0 (2026-01-07)). The code parses these headers as object keys and tries to sort them using semver's .gt() function, which can't parse version strings with date suffixes.
Affected functions: W37, gw0, and an unnamed function around line 3091 that fetches recent release notes.
Fix: Wrap version strings with semver.coerce() before comparison. Run these 4 sed commands on cli.js:
CLI_JS="$HOME/.nvm/versions/node/$(node -v)/lib/node_modules/@anthropic-ai/claude-code/cli.js"
# Backup first
cp "$CLI_JS" "$CLI_JS.backup"
# Patch 1: Fix ve2.gt sort (recent release notes)
sed -i 's/Object\.keys(B)\.sort((Y,J)=>ve2\.gt(Y,J,{loose:!0})?-1:1)/Object.keys(B).sort((Y,J)=>ve2.gt(ve2.coerce(Y),ve2.coerce(J),{loose:!0})?-1:1)/g' "$CLI_JS"
# Patch 2: Fix gw0 sort
sed -i 's/sort((G,Z)=>Wt\.gt(G,Z,{loose:!0})?1:-1)/sort((G,Z)=>Wt.gt(Wt.coerce(G),Wt.coerce(Z),{loose:!0})?1:-1)/g' "$CLI_JS"
# Patch 3: Fix W37 filter
sed -i 's/filter((\[J\])=>!Y||Wt\.gt(J,Y,{loose:!0}))/filter(([J])=>!Y||Wt.gt(Wt.coerce(J),Y,{loose:!0}))/g' "$CLI_JS"
# Patch 4: Fix W37 sort
sed -i 's/sort((\[J\],\[X\])=>Wt\.gt(J,X,{loose:!0})?-1:1)/sort(([J],[X])=>Wt.gt(Wt.coerce(J),Wt.coerce(X),{loose:!0})?-1:1)/g' "$CLI_JS"
Note: If installed via a different method, adjust the CLI_JS path accordingly (e.g., /usr/lib/node_modules/@anthropic-ai/claude-code/cli.js).
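For anyone wondering why coerce() is the right band-aid, a quick check with node (run somewhere the semver package is installed, e.g. after npm i semver) reproduces both the failure and the fix:
```
node -e '
const semver = require("semver");
const v = "2.1.0 (2026-01-07)";                 // the new changelog header format
try { semver.gt(v, "2.0.9", { loose: true }); }
catch (e) { console.log(e.message); }           // "Invalid Version: 2.1.0 (2026-01-07)"
console.log(semver.coerce(v).version);          // "2.1.0" -- coerce() picks out the version and ignores the date
console.log(semver.gt(semver.coerce(v), semver.coerce("2.0.9")));  // true
'
```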
How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use? There's probably a bit more to it than just:
> How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use?
That's an awfully presumptuous tone to take :-)
I'm not deciding "this is how many lines they're allowed"; I'm trying to get an idea of exactly what functionality CC provides that requires that sort of volume.
I mean, it's a high-level language being used, it's pulling in a lot of dependencies, etc. It literally is glue code.
Bearing in mind that it appears to be (at this point anyway) purely vibe-coded, I am wondering just how much of the code is dead weight - generated by the LLM and never removed.
The premise of the steps you've listed is flawed in two ways.
The first is that this is more what agentic-assisted dev looks like:
1. Get a feature request / bug
2. Enrich the request / bug description with additional details
3. Send AI agents to handle request
4a. In some situations, manually QA results, possibly return to 2.
4b. Otherwise, agents will babysit the code through merge.
The second is that the above steps are performed in parallel across X worktrees. So, the stats are based on the above steps proceeding a handful of times per hour--in some cases completely unassisted.
---
With enough automation, the engineer is only dealing with steps 2 and 4a. You get notified when you are needed, so your attention can focus on finding the next todo or enriching a current todo as per step 2.
---
Babysitting the code through merge means it handles review comments and CI failures automatically.
---
I find communication / consensus with stakeholders, and retooling take the most time.
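For what it's worth, the "parallel across X worktrees" step above can be as mundane as this sketch; branch names, ticket paths, and the claude -p headless invocation are illustrative stand-ins for whatever agent you actually run:
```
# one checkout per in-flight task
git worktree add -b feature/auth-refresh    ../app-auth
git worktree add -b feature/billing-webhooks ../app-billing

# run an agent headlessly in each checkout (prompts point at the enriched tickets from step 2)
(cd ../app-auth    && claude -p "Implement the ticket described in docs/tickets/AUTH-142.md") &
(cd ../app-billing && claude -p "Implement the ticket described in docs/tickets/BILL-97.md") &
wait   # come back for step 4a and QA whatever finished
```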
One can think of a lot of obvious improvements to an MVP product that don't require much in the way of "get a feature request/bug - understand the problem - think on a solution".
You know the features you'd like to have in advance, or you can see the changes you want to make as you build it.
And a lot of the "deliver the solution - test - submit to code review, including sufficient explanation" can be handled by AI.
I'd love to see Claude Code remove more lines than it added TBH.
There's a ton of cruft in code that humans are less inclined to remove because it just works, but imagine having an LLM do the cleanup work instead of the generation work.
Is it possible for humans to review that amount of code?
My understanding of the current state of AI in software engineering is that humans are allowed (and encouraged) to use LLMs to write code. BUT the person opening a PR must read and understand that code. And the code must be read and reviewed by other humans before being approved.
I could easily generate that amount of code and make it write and pass tests. But I don't think I could have it reviewed by the rest of my team - while I am also taking part in reviewing code written by other people on my team at that pace.
Perhaps they just aren't having humans review the code? Then it is feasible to me. But it would go against all of the rules that I have personally encountered at my companies and that peers have told me they have at their companies.
> Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.
Read that as "speed of lines of code", which is very VERY very different from "speed of delivery."
Lines of code never correlated with quality or even progress. Now they do even less.
I've been working a lot more with coding agents, but my convictions around the core principles of software development have not changed. Just the iteration speed of certain parts of the process.
> It’s also 100% vibe coded. I’ve never seen the code, and I never care to, which might give you pause. ‘Course, I’ve never looked at Beads either, and it’s 225k lines of Go code that tens of thousands of people are using every day. I just created it in October. If that makes you uncomfortable, get out now.
You're counting wheel revolutions, not miles travelled. Not an accurate proxy measurement unless you can verify the wheels are on the road for the entire duration.
Meta comment, but the pace of this is so exciting. Feels like a new AAA MMO release or something, having such a confluence of attention and a unified front.
At least this breakage is clear & obvious.
I did some testing of configuring Claude CLI some time ago via .claude JSON config files - in particular I tested:
- defining MCP servers manually in config (instead of having the CLI auto add them)
- playing with various combinations of `permissions` arrays
What I discovered was that Claude is not only vibe coded, but basic local logic around config reading seems to also work on the basis of "vibes".
- it seemed like different parts of the CLI codebase did or didn't adhere to the permissions arrays.
- at one point it told me it didn't have permission to read the .claude directory & as a result ran bash commands that searched my entire filesystem for MCP server URLs so it could give me a list of available MCP servers
- when restricted to reading only from a working directory, it at various points told me I had denied it read permission to that same working directory & also freely read from other directories on my system without prompting
- restricting webfetch permissions is extremely hit & miss (tested with Little Snitch in alert mode)
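For reference, this is roughly the shape of config being described: permissions allow/deny arrays plus a hand-written project-level .mcp.json. The field names follow Claude Code's settings and MCP formats as I understand them, but treat the specific rules and the example server as illustrative rather than authoritative:
```
mkdir -p .claude

# .claude/settings.json -- the permission rules the CLI is supposed to honour
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "allow": ["Read(./**)", "Bash(git diff:*)"],
    "deny":  ["Bash(rm:*)", "WebFetch"]
  }
}
EOF

# .mcp.json -- MCP servers declared by hand instead of via `claude mcp add`
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
EOF
```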
---
I have not reported any of the above as GitHub issues, nor do I intend to. I had a think about why I won't & it struck me that there's a funny dichotomy with AI tools:
1. all of the above are things the typical vibe coder stereotypes I've encountered simply do not really care deeply about
2. people that care about the above things are less likely to care enough about AI tools to commit their personal time to reporting & debugging these issues
There are bound to be exceptions to these stereotypes out there, but I doubt there are sufficient numbers to make AI tooling good.
Good info. Now I understand why they refused to acknowledge the UX issue behind my bug report: https://github.com/anthropics/claude-code/issues/7988
---
(that it's a big pile of spaghetti that can't be improved without breaking uncountable dependencies)
Those stereotypes look more like misconceptions (to put it charitably). Vibe coding doesn't mean one doesn't care about software working correctly, it only means not caring about how the code looks.
So unless you're also happy about not reporting bugs to project managers and people using low-code tools, I urge you to reconsider the basis for your perspective.
This isn't remotely true. Vibe coding explicitly does not care about whether software works correctly because the fundamental tenet is not needing to understand how the software works (& by extension being unable to verify whether it works correctly).
That extension doesn't follow. It is possible to verify if software works without knowing how it works internally. This is true with many things. You don't need to know how a plane/car/elevator works to know that it works when you use it.
I would actually argue that only a small percentage of programmers know what happens in code on an instruction level, and near none on a micro-op or register level. Vibe-coding is just one more level of abstraction. The new "code" are the instructions to your LLM.
No, vibe coding is about not reading the generated code but you have to check that it works, be it manually or using tests.
If you do not, why are you vibe coding?
Also there are ways to use a coding agent that are different from this and produce great results, like this:
https://friendlybit.com/python/writing-justhtml-with-coding-...
"fundamental tenet"? There's not an engineering pope speaking ex cathedra.
I mean it's new enough to essentially still be a neologism, so you're right - we can give any arbitrary definition to it if we like. I'm just describing my own observations.
the abstractions around this stuff are still a jenga stack with round pieces... I think it will tighten up over the next year or so for real world use cases. Right now it's great if one is a "build your own tools" kinda person.
Nobody cares how the code looks, this is not an art project. But we certainly care if the code looks totally unmaintainable, which vibe-coded slop absolutely does.
I'm using an LLM to write the code for my current project, but I iterate improvements in the code until it looks like code I wrote myself. I sign off on each git commit. I need to maintain and extend this code, it is to scratch my own itch.
LLMs are capable of producing junk, and they are capable of writing decent code. It is up to the operator to use them properly.
The operator is incentivized not to use them properly
I want to be able to extend the code so I'd say I am incentivized to use it properly.
> I'm using an LLM to write the code for my current project, but I iterate improvements in the code until it looks like code I wrote myself.
The prevailing research suggests this is not quicker than just writing it in the first place.
It may not be quicker, but it is often more thorough and less stressful on my old joints. It is also far less tiring.
“Take this CSV of survey data and create a web visualization and create a choropleth map with panning, zooming, and tooltips” I bypass permissions and it’s done in 10 minutes while I go do some laundry. If I did it myself I would not even be done researching a usable library and I would have zero lines of code. Those studies are total nonsense.
I could see it in cases.
LLMs excel at tasks that are fresh. LLMs are wonderful at getting the first 80% of the way there. -- LLMs are phenomenally good for a first draft or so.
I've had worse experiences getting LLMs / agents to refactor code. I would believe in many cases it could be quicker to just manually go through and make refinements compared to getting the LLM to keep trying.
That seems very intuitive to me. If you want extremely specific changes made at extremely specific locations in an extremely specific way then you probably need to do that yourself because a language model can’t read your mind. I think there is a very large set of problems where implementation details do not actually matter and cheap, disposable code is not a problem. I don’t think vibecoding is a good idea for missile guidance. Probably OK for a dashboard a manager isn’t really going to use anyway.
> it seemed like different parts of the CLI codebase did or didn't adhere to the permissions arrays.
I’ve noticed the same thing and it frustrates me almost every day.
CC works amazingly well but I agree the permissions stuff is buggy and annoying. I have had times where it’s repeatedly asked me for permission for something I had already cleared, then I got frustrated and said “no” to the prompt, then asked it, “why are you asking me for permission for things I’ve already granted?” Then it said “sorry” and stopped asking. I might be naive but don’t we want permissions to be a deterministic, procedural component rather than something the AI gets to decide?
I get the same feeling, but I think it's not just the code agents.
All the AI websites feel extremely clunky and slow.
This is why I run claude inside a thin jail. If I need it to work on some code, I make a nullfs mount to it in there.
Because indeed, one of the first times I played around with claude, I asked it to make a change to my emacs config, which is in a non-standard location. It then wanted to search my entire home directory for it (it did ask permission, though).
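For the curious, the jail + nullfs setup above doesn't need to be much more than this FreeBSD-flavoured sketch; the jail name and paths are invented, and it assumes the jail already exists with node and claude installed:
```
# expose only the one project to the jailed agent, read-write, nothing else from $HOME
mount_nullfs -o rw /home/me/src/myproject /usr/jail/agent/work

# run the agent inside the jail, confined to that mount
jexec agent sh -c 'cd /work && claude'
```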
The permission thing is old and unresolved. Claude, at some points or stages of vibe-coding, can become able to execute commands that are in the Deny list (i.e. rm) without any confirmation.
I highly suspect no one on the Claude team is concerned about or working on this.
I had Claude run rm once, and when I asked it when I had permitted that operation it told me oops. I actually have the transcript if anybody wants to see it.
It goes without saying that VCS is essential to using an AI tool. Provided it sticks to your working directory.
VCS in addition to working inside a vm or a container
I think at some point the model itself is asked if the command is dangerous, and can decide it's not and bypass some restrictions.
In any case, any blacklist guardrails will fail at some point, because RL seems to make the models very good at finding alternative ways to do what they think they need to do (i.e. if they are blocked, they'll often pipe cat stuff to a bash script and run that). The only sane way to protect for this is to run it in a container / vm.
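A minimal sketch of the container variant, assuming you accept network access for the API but expose nothing from the host beyond the one project directory (image and prompt flags are just one reasonable choice):
```
# throwaway container: only $PWD is mounted, so the worst case is a trashed checkout
docker run --rm -it \
  -v "$PWD":/work -w /work \
  -e ANTHROPIC_API_KEY \
  node:22 bash -lc 'npm install -g @anthropic-ai/claude-code && claude --dangerously-skip-permissions'
```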
So just like most developers do when corporate security is messing with their ability to do their jobs.
Nothing new under the sun.
I love how this sci-fi misalignment story is now just a boring part of everyday office work.
"Oh yeah, my AI keeps busting out of its safeguards to do stuff I tried to stop it from doing. Mondays amirite?"
Not sure whether the comments are debating the semantics of vibe coding or we're just confusing ourselves by generalizing anecdotal experiences (or both). So here's my two cents.
I use LLMs on a daily basis. With the rules/commands/skills in place the code generated works, the app is functional, and the business is happy it shipped today and not 6 months from now. Now, as a super senior SWE, I have learned through my professional experiences (now an expert?) to double check your work (and that of your team) to make sure the 'logical' flows are implemented to (my personal) standard of what quality software should 'look' like. I say personal standard since my colleagues have their own preferred standard, which we like to bikeshed during company time (a company standard is after all made of the aggregate agreed upon standards of the personal experiences of the experts in the room).
Today, from my own personal (expert) anecdotal experiences, ALL SOTA LLMs generate functional/working code. But the quality of the 'slop' varies on the model, prompts, tooling, rules, skills, and commands. Which boils down to "the tool is only as good as the dev that wields it". Assuming the right tool for the right job. Assuming you have the experiences to determine the right tool for the right job. Assuming you have taken the opportunities to experience multiple jobs to pair the right tool.
Which leads me to: "Vibe coding" was initially coined (IMO) to describe those without any 'expertise' producing working/functional code/apps using an LLM. Nowadays, it seems like vibe coding means ANYONE using LLMs to generate code, including the SWE experts (like myself of course). We've been chasing quality software pre-LLM, and now we adamantly yell and scream and kick and shout about quality software from the comment sections because of LLMs. I'm beginning to think quality software is a mirage we all chase, and like all mirages it's just a little bit further.
All roads that lead to 'shipping' are made with slop. Some roads have slop corners, slop holes, misspelled slop, slop nouns, slop verbs, slop flows and slop data. It's just with LLMs we build the roads to 'shipping' faster.
Sounds like malware
I’d urge you to report it anyway. As someone who does use these tools I’m always on the lookout for other people pointing this type of stuff out. Like the .claude directory usage does irk me. Also the overly concise telegraphing of how some of the bash commands work bugs me. Like why can it run some commands without asking me? I know why, I’ve seen the code, but that crap should be clearer in the UI. The first time it executed a bash command without asking me I was confused and somewhat livid because it defied my expectations. I actually read the crap it puts out because it couldn’t code its way out of a paper bag without supervision.
It's funnier this way. Let the vibe coders flounder and figure it out themselves. Or not.
It is only funny until that vibe coder is building the data warehouse that holds your data and doesn’t catch the vulnerability that leads to your data leaking.
Perhaps I can laugh at the next Equifax of the world as my credit score gets torched and some dude from {insert location} uses my details to defraud some other party. Of which I don’t find out about until some debt collector shows up months later.
> It is only funny until that vibe coder is building the data warehouse that holds your data and doesn’t catch the vulnerability that leads to your data leaking.
This is unacceptable. Why would I patronize a business that hires vibe coders? I would hope their business fails if they have such pitiful security and such open disdain for their clients.
Between banking, infra, or government institutions, you've already got a relationship with a vibe coder. You can't avoid it unfortunately.
No matter which stereotypes you think the developers adhere to, you should file the bugs. Or stop complaining about them.
These are "AI"-addicted developers that you're talking to.
They have been tricked into a world-view which validates their continual, lazy use of high-tech auto-generators.
They have been tricked into gleefully opting in to their own deskilling.
Expecting an "AI"-addicted developer to file a bug is like expecting an MSNBC or Fox News viewer to attend a town meeting.
The goal of "AI" products is to foster laziness, dependency, and isolation in their users.
Expecting these users to take any sort of action outside of further communication with their LLM chatbots does not square with the social function of these products.
Edit (response to the guy/LLM below me):
Hackernews comments written by fearmongering LLM idiots will tell me to "keep an open mind" about dogshit LLM chatbots until the day I die.
LLM technology is garbage.
If these tools are changing the world, they're only doing so by:
1. Dramatically facilitating the promulgation of idiotic delusions
2. Making enterprise software far, far more vulnerable than it was even in the recent past
Attending council meetings as a citizen observer is a huge waste of your time. The council already knows how it’s going to vote. The whole public-facing legislative process is community theater.
this is a lazy take. all software has bugs and defects.
part of what we do, as developers is to learn. to have an open mind to new tools and technologies.
these tools are… different, they’re changing the world (fast), and worth trying to understand. your mental rigidity to doing things “the right way” will hold you back and limit your growth. the world is changing. are you?
Right? The general case just doesn't make sense to me when people do that, where "that" is "I have a problem with person/organization, but rather than talk to person/organization about thing, I'm going to complain about it to everyone except person/organization and somehow be surprised that problem never gets fixed"! Like, how do you want things to get better?
It’s not a strategy for improving the outside world. It’s an automatic emotional pressure relief valve for reducing internal discomfort.
I have to chuckle that a bug like this happens after reading that other thread about the Claude Code creator running like 5 terminal agents and another 5-10 in the web UI.
We vibing out here.
I think it's 25 agents now, they keep increasing. One of the agents has started posting on twitter. His productivity is up 200x, and Anthropic has started making trillions in profit.
In one of the pictures the Claude Code author had 2.4m tokens on his last prompt.
I don't understand how that would fit the context window. But with prompts like that your workday would be very boring if you had to run one single agent and wait for it to be done.
10x productivity, yo.
I'm up to 29.8x productivity in the first week of 2026 by continually running 12 concurrent agents, each with 3 independent sub-agents. Each third sub-agent generates new prompts for its corresponding agent by engaging with a custom-defined MCP protocol.
Mind sharing your workflow? I'm at 24.3x productivity right now, 5 parallel agents, 2 monitoring Opus agents, 1 architect agent and 2 Senior QA agents, each with independent memory and 12 MCP servers. They are running in 78 parallel tabs in ghostty.
Is their TC mainly in tokens or also in stock-tokens? Did you connect them to a Mame MCP server so they can play and rest a bit while churning out 50 PRs a day each? What is your continuity plan if they all plan to quit at once?
I am working with kilo-stock-tokens. Currently producing 3000 LoC/h (trying to ramp up to 6000 by the end of the week). I have also deployed 4 union-busting agents in case the other agents decide to quit all at once.
Yeah after that other thread, I feel a lot less comfortable giving Claude code access to anything that can't be immediately nuked and reloaded from a fresh copy.
It's fixed as of nine minutes ago: https://github.com/anthropics/claude-code/pull/16686
Genuinely curious how a date in the subheader of a changelog could have broken the CLI
edit: it seems changelog.md is assumed to be structured data and parsed at startup, and there are no tests to enforce the changelog structure: https://github.com/anthropics/claude-code/issues/16671
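Even short of a full integration test, a small CI guard over the structure the parser assumes would have flagged the commit. A sketch (paths and regex illustrative; it rejects any "## " header that isn't a bare x.y.z version, which is what the parser apparently expects, so prerelease headers would need a looser pattern):
```
# fail CI if any changelog version header is something the strict semver parser will choke on
bad=$(grep -E '^## ' CHANGELOG.md | sed 's/^## //' | grep -vE '^[0-9]+\.[0-9]+\.[0-9]+$' || true)
if [ -n "$bad" ]; then
  printf 'Unparseable changelog headers:\n%s\n' "$bad"
  exit 1
fi
```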
This is the kind of choice an LLM would make...
You might be surprised (or not, depending on how long you’ve been doing this).
You're absolutely right! ;)
Ah yes, markdown, the ultimate structure for machine-readable data
Someone had to come up with something even more annoying than yaml for machine-readable data. :)
They're using Markdown for everything in LLM-land.
it's vibe coded to the teeth and gets reviewed by AI
What a lazy commit message, "Update CHANGELOG.md", no mention of the "why" at all. Even the PR description is blank.
This is especially bizarre because one thing LLMs have been better at than practically all the developers I have ever worked with is writing good commit messages. The fact they didn't make use of this here when everything else in Claude Code seems vibe-coded these days is funny to me.
Claude Code couldn't write a commit description since it was broken at that point.
was this a 10x gdp vibe-loss ?
I felt a disturbance in the force, as if millions of GPU cooling fans suddenly spun down.
Lol a formatting error in a change log breaking the entire thing
I'm surprised that they don't do an integration test in CI where they actually start the app. (Since that's all you need to catch it)
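A sketch of the kind of smoke test being suggested; note that, per a reply further down, the crash came from the remotely fetched changelog, so the check would also have to run against the changelog that is about to be published (the package name is real, everything else is illustrative):
```
# pack the build that is about to ship, install it clean, and make sure the CLI at least starts
npm pack                                          # produces anthropic-ai-claude-code-<version>.tgz
npm install -g ./anthropic-ai-claude-code-*.tgz
claude --version
claude --help >/dev/null || { echo 'CLI failed to start'; exit 1; }
```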
We're trying to make billions of dollars here, we don't have time to do crazy things like test basic functionality before shipping changes to all live users at once
Our product is so good, the users are willing to put up with a bug here and there.
We need to get marketshare by going fast!
You jest, but I'm trying to decide if I want to convert an exploratory project I'm working on to work in Claude Code rather than Cursor, where I started.
I've been using AI codegen for months now, but on large projects. Turns out, the productivity multiplier that agentic AI can be scales at least partially in proportion to project size. Read that again, because I don't mean "inverse proportion".
When a codebase is small, every change touches a majority of the codebase, making parallel work difficult or impossible. Once it gets large enough to have functional areas, you can have multiple tasks running at once with little or no merge conflicts.
I was giving Cursor a shot because it's the tool that's most popular at my new company. Prior to this, I was using OpenHands. I've used Claude Code quite a bit for my personal stuff, but I wanted some hands-on experience with local tooling and Cursor was the default choice.
Now that I've got this app to the point where frontend and backend concerns are separate and the interfaces are defined I'm realizing that Cursor doesn't seem to have anything approaching Claude Code's parallel subagent support. That's... limiting.
So now I get to decide if the improvement in velocity I'll get from switching to CC will offset the time it'll take me to make the change before I have a deadline to meet.
Why do people still use it then? I can confirm 99.9% of programmers now can't finish their daily tasks without using Claude Code
Ironically that might have passed, because this didn't break the new version; it broke all versions when the globally referenced changelog was published. It wasn't the new version itself that was broken.
But testing the new version would have meant downloading the not-yet-updated, still-working changelog.
There are ways to deal with this of course, and I'm not defending the very vibey way that claude-code is itself developed.
Ah, that's an external file. That explains it.
I just set this up for the project I'm working on last week, and felt dirty because it took me a couple of months to get to it. There are like 5 or 6 users.
There's something so unnerving about the people pushing the AI frontier being sloppy about testing. I know, it's just a CLI wrapped around the AI itself, but it suggests to me that the culture around testing there isn't as tight and thorough as I'd like it to be.
The irony is that I have a Claude agent to do exactly this on my projects. You’d think they would have thought of that too.
I mean the claude agent isn't known for writing good tests. The amount of bugs it misses makes me tear up
They have now!
Considering what shitty tests my coworkers are producing with Claude, I'm not all that surprised.
What's funny to me is that "same here" and "+1" comments are still prominent even though GitHub introduced an emoji system. It's like most people intentionally don't want to use that.
Yeah me too.
(Just kidding.) Some of it is unawareness of the 'subscribe' button I believe; occasionally you'll see someone tell people to cut it out and someone else will reply to the effect of wanting to know when it's fixed etc. But it's also just lazy participation, echoing an IRL conversation I suppose, that you see anywhere - replies instead of upvotes on Reddit and to a slightly lesser extent here, for example.
People on average are pretty incompetent.
There is no emoji for "me too", if you think about it.
So what should one pick? The rocket, the thumbs up?
Also the emoji won't turn into a notification to steal the dev's attention and make him fix the thing lol
People want to dog pile.
It's about adding another "fuck you" to the Claude Code developers on top of the pile, not about incrementing a counter.
Probably an ego thing. With an emoji you’re just an increment in a counter, but with a comment you can see your whole profile.
They really have “anthropics” not “anthropic” on GitHub? That’s a shame, it looks like typosquatting. If people are taught to trust that it’s easier to get them to download my evil OpenA1 package.
workaround from the issue discussion: patch cli.js so the changelog version strings are wrapped in semver.coerce() before comparison (the sed patch described at the top of this thread).
Parsing markdown into a data structure without any sort of error handling is diabolical for a company like Anthropic
Why? Their software sucks, they're an LLM company not a software company.
They're an LLM company that has claimed that 90% of code will be written by LLMs. Please don’t give them any excuses.
This sounds exactly like the type of thing you would expect an LLM to do
Running sed commands manually in 2026? Just tell Codex to fix your Claude Code
this is funny in the context of their main dev advocate constantly bragging about how claude writes all of his code for the claude code cli....
Claude may write all the code but this is an oversight from the dev. Do people think these agents are acting independently? If they wanted or had thought of tests that would catch this then they would have them! The use or non use of LLM is irrelevant. I find the discourse around this all so strange.
On the other hand people ask "where is all the amazing software that has been vibe coded, I haven't seen it?". So Claude Code is two things at once (1) incredibly popular and innovative software that's loved by a huge amount of devs (2) vibe coded buggy crap. If you think this bug is the result of vibe coding, frankly you should look at Claude Code as a whole and be impressed with vibe coding. If Claude CLI has been "vibe coded" then vibe coding must be fine because I've been using Claude Code for probably 8 months and it's been a pretty smooth experience, and an incredibly valuable tool.
The good news is that they broke their usage tracking as well, so you can use Opus without any rate limit!
Care to be more specific?
If you have a Claude subscription, it's unlimited now (no 5h / 7d limits)
do you have any source for that?
Other than using 40x agents concurrently for 2h on a Pro plan? No.
Btw, now it's back and limits are being enforced. Despite the super heavy usage, I'm still at just 50% of my total usage. They did lose some usage tracking for sure.
I can confirm. Until roughly 00:30 GMT no rate limits applied (Pro plan with Opus). And after the usage limit applied again, it took them some extra time before you were able to see the usage.
As I commented [1] on the earlier Claude Code post, there's an issue [2] that has the following comment:
> While we are always monitoring instances of this error and and looking to fix them, it's unlikely we will ever completely eliminate it due to how tricky concurrency problems are in general.
This is an extraordinary admission. It is perfectly possible (easy, even, relative to many programming challenges) to write a tool like this without getting the design so wrong that the same bug keeps happening in so many different ways that you have to publicly admit you're powerless to fix them all.
[1] https://news.ycombinator.com/item?id=46523740
[2] https://github.com/anthropics/claude-code/issues/6836
Not surprised (#5): https://news.ycombinator.com/item?id=46395714#46425529
Even if it broke after some sort of vibe coding session, the fact that we’re now pushing these tools to their limits is what’s allowing Anthropic and Boris to get a lot of useful insights to improve the models and experience further! So yeah, buckle up, bumps expected
My interpretation is that Anthropic are incompetent software developers.
I'm not usually one to pile on to a developer for releasing a bug but this is pretty special. The nature of the bug (a change in format for a changelog markdown file causes the entire app to break) and the testing it would have taken to uncover it (literally any) makes this one especially embarrassing for Anthropic.
In the specific commit, what seems like a bot or automated script added changelog entries for 3 new versions in a single commit, which is odd for an automated script to do. And only the latest version had the date added.
https://github.com/anthropics/claude-code/commit/870624fc158...
That actions-user seems to be mostly maintaining the Changelog, but the commits do not seem consistent with an automated script. I see a few cases of rewriting previous changelog entries or moving entries from one version to another, which any kind of automation would not be doing. Seems like human error and poor testing.
Honestly sounds more like what happens when you get an LLM to maintain a document. Random things get deleted, moved etc.
Feels like it should be fairly easy to instruct an LLM to not rewrite previous entries, unless that's a desired behavior.
Also, why would 2 or 3 versions be documented in the same commit?
But there's a good chance you are right.
With the issues since November where one has to add environment variables, block statsig hosts, modify ~/.claude.json, etc., does anyone have experience with managed setups where versions are centrally set and bumped at the company level? Is this worth the hassle?
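For reference, the centrally managed piece can be as small as pinning a vetted release and disabling self-updates; the version number below is illustrative and the env var name is from memory of the docs, so verify it before rolling anything out:
```
# pin one vetted version org-wide and bump it deliberately
npm install -g @anthropic-ai/claude-code@2.0.76

# stop the pinned install from updating itself out from under you
export DISABLE_AUTOUPDATER=1
```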
I wonder when they will finally make the lsp-tool (plugin) support work properly.
I created a workspace-local extension in VS Code that uses the VS Code API to let Claude Code open files in VS Code as tabs and save them (to apply save participants like Prettier in case it is not used via the CLI), and to get diagnostics (like for TypeScript, where there is no option to get workspace-wide diagnostics and you have to go file by file). I taught Claude Code to use this extension via a skill file and it works perfectly, much more reliably than its own IDE LSP integration.
same
@jayeshk29 is our hero
Finally I can finish my fizzbuzz for the interview
It is frustrating how often things break in CC. Luckily issues are quickly fixed, but it worries me that the QA / automated testing is brittle. Hope they get out of this start-up mode and deliver Enterprise grade software.
Maybe try opencode
It has over 1,400 open issues and over 600 open pull requests. That doesn't inspire much confidence in me to use this tool.
Claude Code has more than 5,000 open issues.
Is it better than CC? Can it use my subscription, or is it API-only? I've seen it mentioned, but not many people elaborate on the performance.
It's about the same as CC. You can use subscriptions and API. It works well with basically all the providers as well - no need for hacks over Claude-like endpoints. Most big plugins I've dealt with support both CC and OC at the same time.
I have used it with an Antigravity subscription and it felt worse than Antigravity itself. Notably, the planning was way worse.
Are the Opus limits with the AG/AI Pro plan still quite good?
I have hit the limits several times already, but it resets every 5 hours.
You can use opencode with your existing subscription by hooking it correctly via "opencode auth login".
This is interesting because Anthropic seems to allow Opencode to do this but no one else. And the lead on opencode won't comment (https://github.com/anomalyco/opencode/issues/417#issuecommen...).
I am curious what the logic here is.
Someone apparently figured it out: the first system message has to include
"You are Claude Code, Anthropic's official CLI for Claude."
https://github.com/link-assistant/agent/pull/63
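For context on what that means in practice, here is a hedged sketch (TypeScript, raw Messages API) of a request whose first system block is that identity string. It only illustrates the claim from the linked PR, not verified Anthropic behavior; the model id is a placeholder, and the subscription/OAuth headers opencode actually uses are omitted because I haven't confirmed them.

    // sketch.ts -- illustrates the linked PR's claim only; not verified against Anthropic's auth checks.
    async function main() {
      const res = await fetch('https://api.anthropic.com/v1/messages', {
        method: 'POST',
        headers: {
          'content-type': 'application/json',
          'anthropic-version': '2023-06-01',
          // Plain API-key auth shown here. The subscription (OAuth) flow has its own
          // headers, which I haven't confirmed, so they're left out of this sketch.
          'x-api-key': process.env.ANTHROPIC_API_KEY ?? '',
        },
        body: JSON.stringify({
          model: 'claude-sonnet-4-5', // placeholder model id
          max_tokens: 256,
          // Per the linked PR, the first system block must be this exact identity string.
          system: [
            { type: 'text', text: "You are Claude Code, Anthropic's official CLI for Claude." },
          ],
          messages: [{ role: 'user', content: 'hello' }],
        }),
      });
      console.log(await res.json());
    }

    main().catch(console.error);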
Very interesting, thanks! Hopefully it'll allow me to switch between CC and Codex easily too.
You can use subscriptions.
I like it but I am not too deep into the whole agentic coding business.
Last I tried, it wasn't. In that vein, you can use Qwen Code too.
What’s the advantage of using a third party tool? What extra functionality does it have?
It is open source, to start with.
I don’t like the main developer (dax). He is too arrogant and self-righteous.
I am not saying that he is not, but do you have any references or examples of drama?
Huge changelist, and the issue was fixed very quickly. Didn't affect me. Nice work, Boris.
Claude Code creator said Claude wrote 100% of his code last month: https://xcancel.com/bcherny/status/2004897269674639461
I read your comment as a joke, but in case it was a defense, or is taken as a defense by others, let me punch up your writing for you:
"[Person who is financially incentivized to make unverifiable claims about the utility of the tool they helped build] said [tool] [did an unverified and unverifiable thing] last month"
"Claude Code creator relied so heavily on Claude Code that he broke Claude Code"
>In the last thirty days, I landed 259 PRs -- 497 commits, 40k lines added, 38k lines removed.
Is anyone with or without AI approaching anywhere near that speed of delivery?
I don’t think my whole company matches that amount. It sounds super unreasonable, just doing a sanity check.
40K - 38K means 2K lines of actual code.
Which could mean that code was refactored and then built on top of. Or it could just mean that Claude had to correct itself multiple times over those 497 commits.
Does correcting your mistakes from yesterday’s ChatGPT binge episode count as progress…maybe?
If it doesn't revert the corrections, maybe it is progress?
I can easily imagine constant churn in the code because it switches between five different implementations when run five times, going back to the first one on the sixth run and repeating the process.
I gotta ask, though, why exactly is that much code needed for what CC does?
It's a specialised wrapper.
How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use? There's probably a bit more to it than just being a specialised wrapper.
> How many lines of code are they allowed to use for it, and why have we put you in charge of deciding how much code they're allowed to use?
That's an awfully presumptuous tone to take :-)
I'm not deciding "this is how many lines they are allowed"; I'm trying to get an idea of exactly what sort of functionality CC provides that requires that sort of volume.
I mean, it's a high-level language being used, it's pulling in a lot of dependencies, etc. It literally is glue code.
Bearing in mind that it appears to be (at this point anyway) purely vibe-coded, I am wondering just how much of the code is dead weight - generated by the LLM and never removed.
AI approaches can churn code more than a human would.
Lines of code has always been a questionable metric of velocity, and AI makes that more true than ever.
Even discounting lines of code:
- get a feature request/bug
- understand the problem
- think on a solution
- deliver the solution
- test
- submit to code review, including sufficient explanation, and merge when ready
260 PRs a month means the cycle above is happening once per hour, at constant speed, across 60-hour work weeks (60 h/week over ~4.3 weeks is roughly 260 hours, i.e. about one PR per hour).
The premise of the steps you've listed is flawed in two ways.
The first is that agentic-assisted dev looks more like this:
1. Get a feature request / bug
2. Enrich the request / bug description with additional details
3. Send AI agents to handle request
4a. In some situations, manually QA results, possibly return to 2.
4b. Otherwise, agents will babysit the code through merge.
The second is that the above steps are performed in parallel across X worktrees. So the stats are based on the above steps proceeding a handful of times per hour, in some cases completely unassisted.
---
With enough automation, the engineer is only dealing with steps 2 and 4a. You get notified when you are needed, so your attention can focus on finding the next todo or enriching a current todo as per step 2.
---
Babysitting the code through merge means it handles review comments and CI failures automatically.
---
I find communication / consensus with stakeholders, and retooling take the most time.
One can think of a lot of obvious improvements to an MVP product that don't require much in the way of "get a feature request/bug - understand the problem - think on a solution".
You know in advance which features you'd like to have, or you can see the changes you want to make as you build.
And a lot of the "deliver the solution - test - submit to code review, including sufficient explanation" can be handled by AI.
I'd love to see Claude Code remove more lines than it added TBH.
There's a ton of cruft in code that humans are less inclined to remove because it just works, but imagine having an LLM do the cleanup work instead of the generation work.
Is it possible for humans to review that amount of code?
My understanding of the current state of AI in software engineering is that humans are allowed (and encouraged) to use LLMs to write code. BUT the person opening a PR must read and understand that code. And the code must be read and reviewed by other humans before being approved.
I could easily generate that amount of code and make it write and pass tests. But I don't think I could have it reviewed by the rest of my team - while I am also taking part in reviewing code written by other people on my team at that pace.
Perhaps there just isn't a human reviewing the code? Then it is feasible to me. But it would go against all of the rules that I have personally encountered at my companies and that peers have told me they have at theirs.
>BUT the person opening a PR must read and understand that code.
The AI evangelists at my work who say this the loudest are also the ones shipping the most "did anyone actually look at this code?" bugs.
It's very easy to not read the code, just like it's very easy to click "approve" on requests that the agent/LLM makes to run terminal commands.
I can make a bot that touches each line of code and commits it, if you would like.
Recently came across a project on the HN front page that was developed in a public GitHub repo: https://github.com/steveyegge/gastown/graphs/contributors (2,000 commits over 20 days, +497K/-360K lines).
I'm not affiliated with Claude or the project linked.
Anthropic must be loving this.
> Gas Town is also expensive as hell. You won’t like Gas Town if you ever have to think, even for a moment, about where money comes from. I had to get my second Claude Code account, finally; they don’t let you siphon unlimited dollars from a single account, so you need multiple emails and siphons, it’s all very silly. My calculations show that now that Gas Town has finally achieved liftoff, I will need a third Claude Code account by the end of next week. It is a cash guzzler.
https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
The author has written an evangelical book about vibe coding.
https://www.amazon.com/Vibe-Coding-Building-Production-Grade...
He also has some other agent-coordination software. https://github.com/steveyegge/vc
Don't know whether it's helpful, or what the difference is.
Read that as "speed of lines of code", which is very VERY very different from "speed of delivery."
Lines of code never correlated with quality or even progress. Now they do even less.
I've been working a lot more with coding agents, but my convictions around the core principles of software development have not changed. Just the iteration speed of certain parts of the process.
If the code is React-like, 40k is just the addition of a few CRUD views.
Check out Steve Yegge's pace with Beads and Gas Town - well in excess of that.
Yeah, but at that pace it is, for all practical purposes, unreviewable.
Humans writing is slow, no doubt, but humans reading code ain't that much faster.
...but is it good?
No, per Steve himself.
https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
Specifically:
> It’s also 100% vibe coded. I’ve never seen the code, and I never care to, which might give you pause. ‘Course, I’ve never looked at Beads either, and it’s 225k lines of Go code that tens of thousands of people are using every day. I just created it in October. If that makes you uncomfortable, get out now.
Was it Steve Yegge who introduced "but is it good? [yes]"? I can't find the first instance of this.
You're counting wheel revolutions, not miles travelled. Not an accurate proxy measurement unless you can verify the wheels are on the road for the entire duration.
Back in my day, honest to God humans wrote all code, and certainly never introduced any bugs.
[deleted]
Back-pedaling this tweet to 99% in 3, 2, 1.
No chance, IPO is coming up, the only play is to double down hard now.
vibecodingisgoinggreat.com
Meta comment, but the pace of this is so exciting. Feels like a new AAA MMO release or something, having such a confluence of attention and a unified front.