I wonder how. Everything I let Claude Code majorly write, whether Go, F#, C, or Python, I eventually end up at a point where I systematically rip it apart and start rewriting it.
In my student days, we talked of “spikes”: software or components which functionally addressed some need but were often badly written and badly architected.
That’s what I think Claude Code output most resembles.
And I ask the LLM to write to-do lists, break tasks into phases, and maintain both larger docs on individual features and a highly condensed overview doc. I have also written Claude Code-like tools myself, run local LLMs, and so on. That is to say, I may still be “doing it wrong”, but I’m not entirely clueless.
The only place where Claude Code has nearly done the whole thing and largely left me with workable code was some React front-end work I did (and no, it wasn’t great either, just fair enough).
Because companies/users don’t pay for “great code”. They pay for results.
Does it work? How fast can we get it? How much does it cost to use it?
> Because companies/users don’t pay for “great code”
Unless you work in an industry with standards, like medical or automotive. Setting ISO compliance aside, you could also work for a company that values long-term maintainability, uptime, etc. I'm glad I do. Not everyone is stuck writing disposable web apps.
Cool, the person who financially benefits from hyping AI is hyping AI.
What's with the ad here though?
The tweet from Dec 24 was interesting; why is Boris only now deciding to engage?
I refuse to believe real AI conversations of any value are happening on X.
Hi I'm Boris and I work on Claude Code. I am going to start being more active here on X, since there are a lot of AI and coding related convos happening here.
https://xcancel.com/bcherny/status/2003916001851686951
> I landed 259 PRs -- 497 commits, 40k lines added, 38k lines removed
I wonder how many of those 40k lines added / 38k lines removed were just replacing the complete code of a previous PR created by Claude Code.
I'm happy that it's working for them (whatever that means), but shouldn't we see an exponential improvement in Claude Code in this case?
One could dive deep into the philosophical here, but how different is that from “I recompiled the code, which removed 500 kloc of assembly and created 503 kloc of assembly”?
No one posts that as a LinkedIn metric, though.
Claude Code user¹ says Claude Code has written incorrect code continuously for the last hour.
I asked it to write Python code to retrieve a list of Kanboard boards using the official API, and I gave it a link to the API docs. First, it wrote a wrong JSON-RPC call. Then it invented a Python API call that does not exist. On a new try, I mentioned that there is an official Python package it could use (which is prominently described in the API docs). Claude proceeded to search the web and then used the wrong API call. Only after prompting it again did it use the correct API call - but still with an inelegant approach.
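For reference, a minimal sketch of the straightforward approach using the official `kanboard` package (the URL, user, and token below are placeholders; in Kanboard, each board belongs to a project, so you list projects first):

```python
# pip install kanboard
# Minimal sketch, assuming a standard Kanboard install; URL, user, and
# token are placeholders. The official client maps snake_case method
# names onto Kanboard's JSON-RPC API (get_all_projects -> getAllProjects,
# get_board -> getBoard).
import kanboard

kb = kanboard.Client(
    "https://kanboard.example.org/jsonrpc.php", "admin", "your-api-token"
)

# Each board hangs off a project, so list projects, then fetch the
# board for each one.
for project in kb.get_all_projects():
    board = kb.get_board(project_id=project["id"])
    print(project["name"], "->", len(board), "swimlane(s)")
```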
I still find some value in using Claude Code, but I'm much happier writing code myself, and I would rather teach kids and colleagues how to do stuff correctly than teach a machine.
¹) me
I’m nearly the same. Though I do find I’m still writing code, just not the code that ends up in the commit. I’ll write pseudocode, example code, and rough function signatures, then Claude writes the rest.
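As a hypothetical illustration of that workflow (all names invented), the human-written scaffold might be no more than a typed signature and a docstring stating intent, with the body left for Claude to fill in:

```python
# Hypothetical scaffold handed to Claude: the signature, types, and
# intent are human-written; the body is left for the model.
from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    amount_cents: int
    paid: bool


def summarize_unpaid(invoices: list[Invoice]) -> dict[str, int]:
    """Return total unpaid cents per customer_id.

    Skip invoices with amount_cents <= 0; single pass, no sorting.
    """
    ...  # <- Claude writes the rest
```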
It shows; I have to force-kill it more than 10 times a day.
View the full thread without Twitter/X account: https://xcancel.com/bcherny/status/2004897269674639461
"If the AI builds the house, the human must become the Architect who understands why the house exists."
In Japanese traditional carpentry (Miya-daiku), the master doesn't just cut wood. He reads the "heart of the tree" and decides the orientation based on the environment.
The author just proved that "cutting wood" (coding) is now automated. This is not the end of engineers, but the beginning of the "Age of Architects."
We must stop competing on syntax speed and start competing on Vision and Context.
Taste, Aesthetics, Gestalt Synergy now matter more.
Precisely.
AI optimizes for "Accuracy" (minimizing error), but it cannot optimize for "Taste" because Taste is not a variable in its loss function.
As code becomes abundant and cheap, "Aesthetics" and "Gestalt" will become the only scarcity left. The Architect's job is not to build, but to choose what is beautiful.
I use the house analogy a lot these days. A colleague vibe-coded an app, and it does what it is supposed to, but the code really is an unmaintainable hodgepodge of files. I compare this to a house that looks functional on the surface but has the toilet in the middle of the living room, an unsafe electrical system, water leaks, etc. I am afraid only the facade of the house will need to be beautiful, and people will realize too late that they accepted shaky foundations in exchange for glittery paint.
>Taste, Aesthetics, Gestalt Synergy now matter more.
The AI is better at that too. Truth is, nothing matters except the maximal delusion. Only humans can generate that. Only humans can make a goal they find meaningful.
Honestly, I've been becoming too lazy. I know exactly what I want, and AI is at a point where it can turn that into code. It's good enough that I've started to design code around the AI, making it easier for the AI to understand (less DRY, fewer abstractions, closer to C).
And it's probably a bad thing? Not sure yet.
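A hypothetical sketch of that style (names invented for illustration): instead of one clever generic helper, two flat, near-duplicate functions that a model can read and regenerate in isolation:

```python
# Hypothetical example of "AI-friendly" style: flat, explicit, slightly
# repetitive. Each function stands alone, with no indirection to chase.
import json


def load_active_users(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        rows = json.load(f)
    return [r for r in rows if r.get("active")]


def load_open_orders(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        rows = json.load(f)
    return [r for r in rows if r.get("status") == "open"]

# The DRY version would be a single load_filtered(path, predicate)
# helper; the flattened version trades duplication for locality.
```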
I only let myself use AI on non-critical software: personal projects and projects without deadlines or high quality standards.
If it uses anything I don't know, some tech I haven't grasped yet, I write a markdown summary of the conversation and make sure to include an overview of the technical solutions. I then shove that into my note-taking software and, at a convenient time, use it in study mode to make sure I understand the implications of whatever the AI chose. I'm mostly a backend developer, and this has been a great HTML+CSS primer for me.
It is not bad. It is mastery.
You are treating the AI not as a tool, but as a "Material" (like wood or stone).
A master carpenter works with the grain of the wood, not against it. You are adapting your architectural style to the grain of the AI model to get the best result.
That is exactly what an Architect should do. Don't force the old rules (DRY) on a new material.
At first I thought CC wrote all its own code, but it’s about the engineer’s contributions to CC, which is quite different.
I'm sure it's unrelated (right, guys? right?), but they had to revert a big update to CC this month.
https://x.com/trq212/status/2001848726395269619
What percentage of his reverts this month were done by Claude? ;)
Not sure why you are getting downvoted, but this IS the key worry: that people lose contact with the code and really don’t understand what is going on, increasing “errors” in production (for some definition of error), which results in much more production firefighting, which in turn reduces the time left to write code.
Losing contact with the code is definitely on my mind too. Just as writing can be a method of thinking, so can programming. I fear that only by suffering through the implementation will you realise the flaws of your solution. If this is done by an LLM, you are robbed of that opportunity and produce a worse solution.
Still, I use LLM-assisted coding fairly frequently, but this is a nagging feeling I have.
> Not sure why you are getting downvoted
A: The comment is bad for business.
I mean, that’s possible, but the more interesting data point would be “and then how much did you have to delete and/or redo because it was slop”.
IMHO it's very misleading to claim that some LLM wrote all the code when it's a compression of thousands of people's code that led to this very LLM having anything to output in the first place.
Is a human engineer not the same way?
No. LLMs can only reorder what they've seen in training data in novel ways. Humans can have original ideas that aren't in their training data. As a trivial example, John Carmack invented raycasting for Wolfenstein 3D. No matter how much prompting you gave an LLM, it could never have done that, because there was no prior art for it to have been trained on.
In pragmatic terms though, innovation like that doesn't happen often. LLMs could do the things that most developers do.
That said, I don't agree with the notion that LLMs simply generate content based on their training data. That ignores all the work of the AI devs who build systems that take the training data and turn it into something that creates new (but not innovative) things from it.
Humans can have original ideas because they forget 99% of their input. I am of the opinion that there are no original ideas. Most of what most humans do is just remixing and reshaping, like a potter shaping clay.
> John Carmack invented raycasting for Wolfenstein 3D.
No. He merely reimplemented it.