Read your code

(etsd.tech)

189 points | by noeclement 4 days ago ago

116 comments

  • simonw 4 days ago ago

    New vibe coding definition just dropped! "Vibe-Coding is a dialogue-based coding process between a human and an AI where the human guides and the AI implements."

    Reminds me of Steve Yegge's short-lived CHOP - Chat Oriented Programming: https://sourcegraph.com/blog/chat-oriented-programming-in-ac...

    I remain a Karpathy originalist: I still define vibe coding as where you don't care about the code being produced at all. The moment you start reviewing the code you're not vibe coding any more, by the definition I like.

    • neutronicus 4 days ago ago

      IDK.

      My wife used the $20 claude.ai and Claude Code (the latter at my prompting) to vibe-code an educational game to help our five-year-old with phonics and basic math.

      She noticed that she was constantly hitting token limits and that tweaking or adding new functionality was difficult. She realized that everything was in index.html, and she scrolled through it, and it was clear to her that there was a bunch of duplicated functionality.

      So she embarked on a quest to refactor the application, move stuff from code to config, standardize where the code looks for assets, etc. She did all this successfully - she's not hitting token limits anymore and adding new features seems to go more smoothly - without ever knowing a lick of JS.
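      The kind of refactor described above (moving duplicated values out of code into config, standardizing where code looks for assets) might look roughly like this. A hypothetical sketch; the actual game's code isn't shown in the thread:

```javascript
// Hypothetical sketch: one config object replaces values that were
// previously duplicated across features in index.html.
const config = {
  assetDir: "assets",
  levels: [
    { name: "phonics", items: ["a", "b", "c"] },
    { name: "counting", maxNumber: 10 },
  ],
};

// A single shared helper standardizes asset lookup, instead of each
// feature building paths its own way.
function assetPath(file) {
  return `${config.assetDir}/${file}`;
}

console.log(assetPath("letter-a.mp3")); // "assets/letter-a.mp3"
```

      Adding a new level then means editing config, not code, which also keeps prompts (and token usage) smaller.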

      She's a UX consultant so she has lots of coding-adjacent skills, and talks to developers enough that she understands the code / content division well.

      What would you call what she's doing (she still calls it vibe coding)?

      • simonw 4 days ago ago

        No, I think the moment she "embarked on a quest to refactor the application" she graduated from vibe-coding to coding.

        • neutronicus 4 days ago ago

          I kind of agree that she isn't just "vibe-coding" anymore.

          But I think the fact that she's managing without even knowing (or caring to know) the language the code base is written in means that it isn't really "coding" either.

          She's doing the application architecture herself without really needing to know how to program.

          • simonw 4 days ago ago

            I frequently tell people with complicated Excel spreadsheets that they are software developers without them realizing it, so I have a broad personal definition!

          • wonger_ 4 days ago ago

            LLM-assisted coding? Pairing with LLMs?

            I've copy-pasted snippets for tools and languages that I do not know. Refactored a few parameters, got things working. I think that counts as programming in a loose sense. Maybe not software development or engineering, but programming.

          • CodeMage 4 days ago ago

            It's coding, it just isn't professional software engineering.

            The first non-toy program I ever wrote was in BASIC, on ZX Spectrum 48, and although I don't have it anymore, I remember it was one of the shittiest, most atrocious examples of spaghetti code that I've ever seen in my life.

            Everyone starts somewhere.

            • Izkata 4 days ago ago

              How exactly do people think those of us who are self-taught got started?

              I was picking apart other people's JavaScript to see how it worked years before I was taught anything about coding in a formal setting.

              • skydhash 3 days ago ago

                Pretty much this. My first programming experience was going through books/forums and copy-pasting stuff. It wasn't until years later that I cared to know how to create an actual project with an IDE.

        • theshrike79 2 days ago ago

          I'd go with this old one: https://en.wikipedia.org/wiki/Extreme_programming

          Also I feel old now, I think we _just_ did XP at a company, but that was almost a quarter century ago :D

      • lubujackson 3 days ago ago

        "Vibe architecting" or maybe less pretentiously, "vibe orchestrating". I think that nicely encompasses the workflow and required skillset. A very knowledgeable eng can orchestrate all the way, but clearly no-code people are able to do this as well.

        I think orchestration may be a step that can't really be magicked away by advancements, beyond toy implementations. Because at the end of the day it is just adding specific details to the idea. Sure, you can YOLO an idea and the LLMs can get better at magically cleaning things up, but the deeper you go the larger the drift will be from the concept and the reality without continued guidance.

        If LLMs could construct buildings, you might describe what you want room by room, and the underlying structure would need to be heavily revamped at each addition. Unless you start with "I am making a 20 floor building" you are going to waste a lot of time having the LLM re-architect.

        I think the real new skill people are going to get scary good at is rapid architecting without any strong awareness of how things work underneath the hood. This is the new "web programming isn't real programming" moment where future developers might not ever look at (or bother learning about!) variables or functions.

      • savanaly 4 days ago ago

        I mean this out of genuine curiosity, not as criticism: did she consider telling claude code to look for any duplicate functionality and remove it? Or even edit CLAUDE.md to tell it to check for duplicates after major refactors and remove them? I use CC every day at the moment and I would expect these sorts of instructions to work extremely well, especially in what appears to be relatively small codebase.
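        For instance, a CLAUDE.md instruction along these lines (a hypothetical sketch; the exact wording is up to you) gets picked up at the start of every session:

```markdown
<!-- CLAUDE.md (project root) - hypothetical example instructions -->
## Refactoring rules
- After any major refactor, scan for duplicated functionality and remove it.
- When factoring logic out into a shared helper, delete the pre-existing
  inline copies; do not leave both versions in place.
```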

        • neutronicus 4 days ago ago

          Once it became plain that there was duplicate functionality and that it was a problem, yeah, the first thing she did was tell Claude at a high level to factor out duplicate functionality.

          This didn't really work - it would often implement a factored-out piece of logic but just leave the pre-existing code alone. So she had to be pretty specific about problem areas in order to actually push the refactoring plan forward.

    • kmac_ 4 days ago ago

      Nah, we need defined levels like for autonomous driving. Level 5 is live deployment of a change where a business need was discovered by AI, the change specified and planned by AI, implemented by AI, and reviewed by AI. Vibe coding (without reading) would be even lower, as a human is in the loop.

      • dllthomas 4 days ago ago

        Do you want HypnoDrones? This is how you get HypnoDrones.

    • js8 4 days ago ago

      I used to work with a guy who double-checked the machine code generated by the assembler (he was an old mainframe programmer, and allegedly used to work on a linker for ES EVM).

      So, clearly, almost nobody does that anymore. So according to Karpathy's definition, we have all been vibe coding for quite some time now. (An aside - if AIs were any good, they would just skip human languages entirely and go straight to binary.)

      So I think the "vibe" in vibe coding refers to inputting a fuzzy/uncertain/unclear/incomplete specification to a computer, where the computer will fill in details using an algorithm which in itself is incomprehensible for humans (so they can only "feel the vibes").

      Personally, I don't find the fuzziness of the specification to be the problem; on some level it might be desirable, having a programming tool like that. But the unpredictability of the output is IMHO a real issue.

      • skydhash 3 days ago ago

        > So, clearly, almost nobody does that anymore. So according to Karpathy's definition, we have all been vibe coding for quite some time now.

        Because the compiler is deterministic, and the cost of getting something better (based on the processor capability) is higher than just going with the compiled version, which has like a 99.9% chance of being correct (compiler bugs are rare). It's not vibecoding. It's knowing your axioms are correct, when viewing the programming language as a proof system (which it is). You go from some low-level semantics, upon which you build higher-level semantics, which form your business rules.

        So giving the LLM fuzzy specs is hoping that the stars will align and your ancestors' spirits will awaken to hear your prayers for a sensible output.

        • js8 3 days ago ago

          I am not sure what you're saying that I am not. Read the vibe coding definition given by the grandparent (according to Karpathy) - you can see that compilers already satisfy that definition - we don't read the final code being produced, we just trust it blindly. That's my problem with that definition, and I'd argue it is more about the predictability/understandability of the output based on the input.

          And I agree with the logic part too. I think we could have humans input fuzzy specification in some formal logic that allows for different interpretations, like fuzzy logic or various modal logics or a combination of them. But then you have a defined and understandable set of rules of how to resolve what would be the contradictions in classical logic.

          The problem with LLMs is they are using a completely unknown logic system, which can shift wildly from version to version, and is not even guaranteed to be consistent. They're the opposite of where the software engineering, as an engineering discipline, should be going - to formalize the production process more, so it can be more rigorously studied and easier to reproduce.

          I think what SW engineering needs is more metaprogramming. (What we typically call metaprogramming - macros - is just a tip of the iceberg.) What I mean is making more programs that study, modify and transform the resulting programs in a certain way. Most of our commonly used tools are woefully incapable of metaprogramming. But LLMs are decent at it, that's why they're so interesting.

          For example, we don't publish version modifications to language runtimes as programs. We could, for example, produce a program that would automatically transform your code to a new version of the programming language. But we don't do that. It is mostly because we have only just started to formalize mathematics, and it will take some time until we completely logically formalize all the legacy computer systems we have. Then we will be able to prove, for instance, that a certain program can transform a program from one programming runtime to another at a certain maximal incurred extra execution cost.

          • skydhash 3 days ago ago

            > LLMs is they are using a completely unknown logic system, which can shift wildly from version to version, and is not even guaranteed to be consistent.

            It's not unknown. And it's not a logic system. Roughly, it takes your prompt, adds it to the system prompts, and runs it through a generative program that pattern-matches it to an output. It's like saying your MP3 player (+ your mp3 files) is a logic system. It's just data and its translator. And having enough storage to hold all the sounds in the world just means you have all the sounds in the world, not that you automatically have a composer.

            And consistency is the basic condition for formalism. You don't change your axioms, nor your rules, so that everyone can understand whatever you said was what you intended to say.

            > What I mean is making more programs that study, modify and transform the resulting programs in a certain way.

            That certain way is usually fully defined and spec-ed out (again, formalism). It's not about programming roulette, even if the choices are mostly common patterns. Even casinos don't want their software to be unpredictable.

            > Most of our commonly used tools are woefully incapable of metaprogramming.

            Because no one wants it. Lisp has been there for ages and only macros have seen extensive use, and mostly as a way to cut down on typing. Almost no one has the need to alter the basic foundation of the language to implement a new system (CLOS is kinda the exception there). It's a lot of work to be consistent, and if the existing system is good enough, you just go with it.

            > we don't publish version modifications to language runtimes as programs

            Because patching binaries is hazardous, and loading programs at runtime (plugins) is nerfed on purpose. Not because we can't. It's a very big can of worms (we saw with the CrowdStrike incident what happens when you're not careful about it).

      • slavik81 3 days ago ago

        Compiler Explorer (godbolt.org) is quite popular. It's not uncommon for anyone working on performance sensitive code to give the compiler output a quick sanity check.

    • elieteyssedou 4 days ago ago

      Hi Simon, thanks for your comment. I’m the author, just discovering the thread here on HN (and thanks everyone for the enthusiasm!).

      I do think we need a new definition for vibe-coding, because the way the term is used today shouldn’t necessarily include “not even reading the code”.

      I’m aware that Karpathy’s original post included that idea, but I think we now have two options:
      - Let the term vibe-coding evolve to cover both those who read the code and those who don’t.
      - Or define a new term — something that also reflects production-grade coding where you actually read the code. If that’s not vibe-coding, then what is it? (To me, it still feels different from traditional coding.)

      • simonw 4 days ago ago

        I've been calling it AI-assisted development, but that's clearly not snappy enough.

        I have a few problems with evolving "vibe coding" to mean "any use of LLMs to help write code":

        1. Increasingly, that's just coding. In a year or so I'll be surprised if there are still a large portion of developers who don't have any LLM involvement in their work - that would be like developers today who refuse to use Google or find useful snippets on Stack Overflow.

        2. "Vibe coding" already carries somewhat negative connotations. I don't want those negative vibes to be associated with perfectly responsible uses of LLMs to help write code.

        3. We really need a term that means "using prompting to write unreviewed code" or "people who don't know how to code who are using LLMs to produce code". We have those terms today - "vibe coding" and "vibe coders"! It's useful to be able to say "I just vibe-coded this prototype" and mean "I got it working but didn't look at the code" - or "they vibe-coded it" as a warning that a product might not be reviewed and secure.

        • ckcheng 4 days ago ago

          I propose: Software Development/Engineering with “Gen AI Assistance” (yes, “Gaia” [1]. Also spelled “Gaea” in engineering communities).

          Just like no one speaks of vibe-aeronautics-engineering when they’re “just” using CAD.

          More specifically, GAIA in SDE produces code systematically, with a human in the loop to ensure correctness. e.g. Like the systematic way tptacek has been describing recently [2].

          [1] https://en.m.wikipedia.org/wiki/Gaia

          [2] https://news.ycombinator.com/item?id=44163063

          Briefly summarized here I guess: https://news.ycombinator.com/item?id=44296550

          • 8n4vidtmkvmk 4 days ago ago

            I don't think Googlers will be very fond of that acronym.

            • ckcheng 3 days ago ago

              Why? I doubt they’d confuse Gen AI Assistance with an ID management system?

              • 8n4vidtmkvmk 3 days ago ago

                Still doesn't help to overload the term and ruin search results.

      • visarga 4 days ago ago

        > If that’s not vibe-coding, then what is it?

        Blind-coding.

    • lysecret 4 days ago ago

      Fully agree, from the original post: "and forget that the code even exists."

    • siva7 4 days ago ago

      We professionals need to stop with slogans like "vibe coding is bad". If you don't have dev skills and experience, your code will be shit - no matter if vibe coded or manually written. If you can't guide another dev - here the AI - the result will be as good as the clueless leading the clueless.

      • the_af 4 days ago ago

        Karpathy's definition was no guidance, just blindly copying and pasting error messages and code back and forth, hammering it into place until it kinda worked.

      • josephg 4 days ago ago

        Vibe coding can make it a lot harder to learn programming though, depending on how you use an AI. If you're a beginner and you can't read code very well, you're going to struggle a lot more when you have thousands of lines of the stuff written badly by an AI.

        • visarga 4 days ago ago

          Which means real experience still takes years. But you need to consider the speed at which coding agents improve. Maybe next year they will be more reliable to use without domain experience. Today? You can get a small app or POC without knowing how to code.

      • tempodox 4 days ago ago

        If you don't guide the LLM, vibe coding is Russian roulette, no matter how good you think you are. If you look at the code, it's not vibe coding any more by its original definition. So vibe coding is only “good” for code that gets thrown away real quick.

  • ericpauley 4 days ago ago

    Personally, I find that if a model can vibe-code the functionality I'm working on then it's not very high-value functionality. Perhaps (a) it's boilerplate (fine), (b) I'm not creating enough/the right abstraction, or (c) the code could easily be written by a junior dev. If I'm working on truly new functionality, modeling complex states and assumptions, or producing something that generalizes to many settings, the model does poorly beyond being a souped up auto-complete.

    That's not to say that these models don't provide value, especially when writing code that is straightforward but can't be easily generalized/abstracted (e.g., some test-case writing, lots of boilerplate idioms in Go, and basic CRUD).

    In terms of labor, I potentially see this increasing the value (and therefore cost) of actual experienced developers who can approach novel and challenging problems, because their productivity can be dramatically amplified through proper use of AI tooling. At the other end of the spectrum, someone who just writes CRUD all day is going to be less and less valuable.

    • skydhash 3 days ago ago

      Boilerplate code is my signal to take a moment and ask: do I really need all that code? Or should I write some generators/parsers instead? Even when I'm not able to, copy-pasting it myself is often easier.
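      As a tiny illustration of the generator idea (a hypothetical JavaScript sketch): when the same handler shape repeats per resource, a few lines of generation can replace pages of copy-pasted boilerplate:

```javascript
// Hypothetical sketch: generate repetitive CRUD handler stubs from a
// list of resource names instead of copy-pasting them by hand.
const resources = ["user", "lesson", "score"];

function stubsFor(name) {
  const cap = name[0].toUpperCase() + name.slice(1);
  return ["create", "read", "update", "delete"].map(
    (verb) => `function ${verb}${cap}(req, res) { /* TODO */ }`
  );
}

const generated = resources.flatMap(stubsFor);
console.log(generated.length); // 12 stubs from 3 resource names
console.log(generated[0]);     // "function createUser(req, res) { /* TODO */ }"
```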

  • BadBadJellyBean 4 days ago ago

    We have a simple rule: You commit it, you own it. If you vibe coded it that's okay. If it's garbage that's on you. Blaming the LLM doesn't count. So of course you have to read the code. You have to read it and understand it. Works well for us.

    • yosito 4 days ago ago

      Another related rule: never commit code to the main branch that hasn't been read by at least two humans.

      • AdieuToLogic 3 days ago ago

        > Another related rule: never commit code to the main branch that hasn't been read by at least two humans.

        Passive code reviews ("read by at least two humans") are fraught with error. IMHO, a better mantra is:

          Never allow a merge to the main branch that does
          not have some amount of documented test coverage,
          be it unit, functional, and/or integration specs.
        • BadBadJellyBean 3 days ago ago

          Of course that is true, but with AI you can add documentation and tests too. You can have absolute garbage with all-green tests. We do human reviews on PRs, but in the end it's your code that you have to stand behind.

  • yanis_t 4 days ago ago

    If you write code yourself your productivity is limited by how much code you can write. If you read/review code generated by AI, then it's limited by how much you can read/review. The latter is not that much bigger than the former. I believe at some point we'll have to let go.

    • tempodox 4 days ago ago

      > productivity is limited by how much code you can write.

      So you measure “productivity” in lines of code? Say no more.

    • BrunoRB 4 days ago ago

      I'm not clear on the point you're making, but I do know that reading/reviewing is often more time-consuming and harder than actually writing code - at least if you're doing it right. Anyhow, I think we all just got too mesmerized by the LLM magic tricks and are slowly realizing that for real software writing we simply got a "slightly better Stack Overflow". The claims of hyper-productivity are the same old ones of "I just copy and paste stuff other people wrote", i.e. people are just writing simple PoCs and really bad software - it's mostly bullshitters who couldn't and still can't do FizzBuzz. The real stuff continues to be written the same way as before - and this is going to become more and more apparent as the LLM garbage sneaks into more and more software.

      • skydhash 3 days ago ago

        > "slightly better stackoverflow"

        I'm still not sold on that. The Stack Overflow UI has a lot of signals for a good response: the number of answers, the upvotes on those answers, the comments... With a quick scan, you can quickly see if you need to go back to the web search page (I've never used SO search) or do a slower read.

  • jedimastert 4 days ago ago

    This comment section is unbelievable lol. I'm genuinely struggling to believe this many people would let code into a codebase without review.

    • exasperaited 4 days ago ago

      I am not struggling to believe it at all. Because I am an old guy with a CS degree but decades of web development experience. Apparently this is called being "a legacy developer".

      This is the logical conclusion of the indiscipline of undereducated developers who have copied and pasted throughout their career.

      This reality is then expressed as "but humans also copy and paste", as if that makes it OK to just hand the task over to an AI that might do it better, when the actual solution is to train people not to copy and paste.

      Everything about AI is the same story, over and over again: just pump it out. Consequences are what lawyers are for.

      It's really interesting to me that within basically a generation we've gone from people sneering at developers with old fashioned, niche development skills and methodologies (fortran, cobol, Ada) to sneering at people with the old-fashioned mindset that knowing what your code is doing is a fundamental facet of the job.

      • tempodox 4 days ago ago

        +1. By now I've given up hope that software development will ever become a true engineering discipline, or even just somewhat close to it. Bungling it is so much cheaper and practically everyone seems to accept the current bad state of affairs. Only small subfields are exceptions to this.

        • ozim 3 days ago ago

          I pretty much don’t know a team that doesn’t have at least testing/staging environments. I don’t know a team that doesn’t use Git and some issue tracker.

          That’s already engineering. Your sentiment is cute but I think you have some romantic vision of what „real engineering” is.

          • exasperaited 3 days ago ago

            Those are (trivial) software project management. It's a tiny part of the picture.

            Software engineering goes a lot deeper than that; look at any serious accredited syllabus.

            But almost nobody practises it these days, you are right. The web kind of blurred the line between software and document for a while and a lot of stuff got lost.

            That's not a reason to practise even less.

            • ozim 2 days ago ago

              Can you give an example because your „accredited syllabus” is vague.

              Keep in mind I have an engineering degree and 15 years of experience, so you are not writing back to a 20-year-old hack who just learned PHP.

    • apwell23 4 days ago ago

      why should we give even a tiny bit of a F about our employer when they are using AI to do interviews?

  • andix 4 days ago ago

    Coding with AI agents is very similar to working with junior developers (or even other developers) in a team. Someone needs to give the project direction, keep the code organized, suggest refactorings, and many more things.

    I guess it's really hard to work with AI agents, if you don't have real project experience in a more senior position.

    • funkyfourier 4 days ago ago

      In my experience it is more like a quite skilled developer, but who is dead drunk.

    • apwell23 4 days ago ago

      junior developer doesn't delete tests to "make tests pass"

      • michaelrpeskin 4 days ago ago

        You haven't been in the industry long enough :)

      • spauldo 4 days ago ago

        I do that and I'm senior :)

        (To be fair, my kind of testing is a lot different than unit tests, and the tests I'm cancelling are multi-page forms that require three signatures.)

  • redhale 4 days ago ago

    > It’s always easier to straighten a sapling than a grown tree.

    Ok, but in this case you can just throw away the tree and a new one will grow immediately for you to review anew. Rinse and repeat.

    I'm not saying the author's proposed approach is never the right one, but there is a meaningfully different approach between the two suggested. You can look at the result of a fully autonomous agent, note the issues, tweak your prompt + inputs and then start over. You get the benefits of more closely-steered output without the drag of having to constantly monitor it synchronously. For some things, this approach is token-wasteful, but for small (yet critical / high-value) features, I have found it to work quite well. And an ancillary benefit is that your starting prompts and inputs improve over time.

    • aDyslecticCrow 3 days ago ago

      But once you have carefully inspected 8 trees and thrown them away, you may need to start from a sapling after all.

      And what about when you need a forest? Seeds from trees grow new trees. A whole forest cannot be inspected and discarded over and over.

      > but for small (yet critical / high-value) features

      These are the features that need the human attention. These are the features that are the most fun for a human to make. These are the features that make the human improve the most. So they're the last ones I would leave to the AI.

  • vermon 4 days ago ago

    Part of the definition of vibe coding is not looking at the code. If you read code you are not "vibing" anymore, you are just using AI tools to write code.

    • Uehreka 4 days ago ago

      “using AI tools to write code” is too clunky of a phrase. Unless someone proposes a sticky two/three-syllable verb (that people actually want to use, inb4 some sarcastic person suggests “poop-coding” or something) then people are going to fall back on vibe-coding because it’s quick to say, its definition will become blurred, and we’ll have no one to blame but ourselves.

      • achierius 4 days ago ago

        AI coding? Seems straightforward enough.

  • joe8756438 4 days ago ago

    Currently, the only way to understand code is to read it. You no longer need to understand code to produce it (maybe in some pre-AI cases that was also true).

    So no, you don’t _need_ to read code anymore. But not reading code is a risk.

    That risk is proportional to characteristics that are very difficult, and in many cases impossible, to measure.

    So currently best practice would be to continue reading code. Sigh.

  • ollysb 4 days ago ago

    You can be an architect without reading the code. My process involves building a detailed plan before starting, at which point I ask lots of questions about architecture. After implementation I have a custom agent for reviewing the architecture and I usually ask a few questions at this point as well.

  • fxtentacle 4 days ago ago

    That article is like advising a blind person to thoroughly look at the road before crossing it.

  • kbrannigan 4 days ago ago

    Aren't LLMs compilers that generate computer-understandable instructions? I think of how, back in the day, people used to chisel assembly code by hand. Then came systems that allowed people to code in more English-like commands (prompts).

  • AdieuToLogic 3 days ago ago

    It is funny how "vibe coding" best practices are indistinguishable from stakeholders defining functional requirements. Both require sufficient formalization of what is needed such that it can be delivered and neither is overly concerned with how the solution is encoded.

    It is almost as if understanding the problem to be solved is the hard part.

  • broast 4 days ago ago

    Reading code is just as fun as writing it, to be honest

    • eddd-ddde 3 days ago ago

      Personally I love reading code and reviewing code. I always find it funny when people say that LLMs result in their job having less of what they like (coding) and more of what they hate (reviewing).

  • _jab 4 days ago ago

    I've found that there's a pretty consistent relationship between how clearly I can imagine what the code should look like, and how effective vibe coding is. Part of the reason for that is that it means I'll be more opinionated about the output of the model, and can more quickly tell whether it's done something reasonable.

  • alphazard 4 days ago ago

    I'm not going to read code created by an AI. The AI exists to prevent me from having to deal with the complexity of a task. The absolute most I want to read and write are type signatures. I'll set those up, let the AI go, see if it works. If it doesn't, maybe retry with a better prompt. If it still doesn't work, then I'll have to get involved. Start implementing from the top down, and once there is enough architectural structure, the AI can usually finish up.
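    A sketch of that signatures-first workflow in JavaScript (using JSDoc, since plain JS has no type syntax; the function and its body here are hypothetical):

```javascript
// Contract written by the human up front; body left for the agent.

/**
 * @param {string[]} words
 * @returns {Map<string, number>} word -> occurrence count
 */
function wordCounts(words) {
  // (agent-filled body; judged by whether it satisfies the signature
  // and passes a quick "does it work" check, not by close reading)
  const counts = new Map();
  for (const w of words) counts.set(w, (counts.get(w) ?? 0) + 1);
  return counts;
}

console.log(wordCounts(["a", "b", "a"]).get("a")); // 2
```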

    This still happens quite a bit, and it's just like taking away a hard task from someone less experienced. The difference is there is no point in investing your time teaching or explaining anything to the AI. It can't learn in that way, and it's not a person.

    • jleask 4 days ago ago

      The problem is that hidden among that complexity/detail is often the value that software developers bring e.g. security issues, regulatory compliance, diagnosability/monitoring, privacy, scalability, resilience etc.

      There will be bugs that the AI cannot fix, especially in the short term, which will mean that code needs to be readable and understandable by a human. Without human review that will likely not be the case.

      I'm also intrigued by "see if it works". How is this being evaluated? Are you writing a test suite, manually testing?

      Don't get me wrong, this approach will likely work in lower risk software, but I think you'd be brave to go no human review in any non-trivial domain.

    • exasperaited 4 days ago ago

      > see if it works

      But this is, properly expressed, "see if it works, based on my incomplete understanding of the code that I haven't worked through and corrected by trying to write it, but I have nevertheless communicated directly to an AI that may not have correctly understood it and therefore may not have even the vaguest inkling of the edge cases I haven't thought to mention yet".

      Vibe-coded output could only properly be said to "work" if we were communicating our intentions in formal methods logic.

      What you mean is "apparently works".

      • ath3nd 4 days ago

        LLMs have finally taken the last ounce of "engineering" out of software engineering. Now what we have is "lgtm-ing", which is a great selling point for LLMs.

        Notice how LLMs do well on small tasks and small projects? The more LLM code you add to your project, the slower and worse (read: more tokens) they perform. If this were by design (create bigger, unmaintainable projects so you can slowly squeeze more and more tokens out of your users), I'd have applauded the LLM creators, but I think it's by accident. Still funny, though.

    • nsxwolf 4 days ago

      Might as well have it code in assembler then. Get the performance boost.

      • stirfish 4 days ago

        Too many tokens, imo.

        I like to vibe code single self-contained pages in html, css, and JavaScript, because there's a very slim chance that something in the browser is going to break my computer.

    • dwringer 4 days ago

      In my experience, it's still possible to learn from the AI's mistakes. I've seen others comment similarly: sometimes the AI will invent a feature, variable, exception type, etc., and while this is wrong for the specific implementation, it's often a pretty good idea for something that may have been overlooked with the initial spec. Reading the AI outputs can still be a valuable step in an interactive feedback process, even if you only ever start a new context from scratch with the AI each time. The user's context is what grows.

      • crinkly 4 days ago

        Assuming, of course, that you are both competent enough and attentive enough to identify the AI's mistakes.

        This is the problem I have seen a lot. From professionals to beginners, unless you actually carved the rock yourself, you don't really have insight into every detail of what you see. This is, incidentally, why programmers would rather just rewrite stuff than fix it. We're lazy.

    • jazzyjackson 4 days ago

      > just like taking away a hard task from someone less experienced

      except someone less experienced never gets to try: all the experienced programmers are now busy shepherding AI agents around, so they aren't available to mentor the next generation.

    • Cheer2171 4 days ago

      Then enjoy your security holes, placeholder fake features, and miles of atrocious 'clever' hacks.

      • bbqfog 4 days ago

        [flagged]

        • chrisjj 4 days ago

          > Sounds like every piece of human generated code I’ve ever seen.

          Good code is less visible.

        • exasperaited 4 days ago

          The obvious advantage of making your own framework, or your own libraries, say, is that you can centralise all the security implementation. You either will or you won't do that security work, but you can't centralise it if you don't write some sort of library/framework to do it. And if you don't, you're more likely to leave the hole open in some situations.

          Vibe coding is "don't write any libraries, just copy-paste and use these existing libraries uncritically".

          The worst outcome of this will be NPM full of vibe-coded copy-paste code shoved into opaque modules that nobody has ever really read, that other vibe-coding tools will use uncritically.

          • bbqfog 4 days ago

            This makes the assumption that vibe-coded software is worse than human-coded software. I think that’s already false on average and will quickly become “no contest”. We’ve already seen the best humans can do; AI is just getting started.

            • exasperaited 4 days ago

              I am making no such assumption, nor does my argument need it.

              My concern is that the act of surrendering ownership itself is always worse.

    • chrisjj 4 days ago

      > I'll set those up, let the AI go, see if it works.

      Hmm. And maintenance?

      • gorbachev 4 days ago

        Pitch deck comes first. /s

  • chanux 4 days ago

    Related reading: Programming as theory building ~ Peter Naur

    [PDF] https://pages.cs.wisc.edu/~remzi/Naur.pdf

  • chrisjj 4 days ago

    > Not reviewing AI-generated code will lead to serious problems.

    A requirement to do so might lead to more. Like loss of job for the illiterate "programmer".

  • SubiculumCode 3 days ago

    Anyone else getting a warning when they go to the linked https site that there is a security risk?

  • MomsAVoxell 4 days ago

    I have found the following methods to be workable and productive when using AI/ML in a project:

    * Treat the AI/ML as a junior programmer, not a senior - albeit a junior willing to take a leap on basically any subject. A junior is someone whose code must always be questioned, reviewed, and understood before execution. Senior code is only admissible from a human being. However, human beings may have as many junior AIs under their arm as they want, as long as those humans do not break this rule.

    * Have good best practices in the first f’in place!!

    Vibe-coding is crap because ‘agile hacking’ is crap. Put your code through a proper software process, with a real workflow - i.e. don’t just build it and ship it. Like, ever. Even if you’ve written every line of code yourself - but especially if you haven’t - never ship code you haven’t tested, reviewed, proven, demonstrated in a production-analog environment, and certified before release. Yes, I mean it: your broken FNORD hacking habits will be force-magnified immediately by any AI/ML system you puke them into. Waterfall or gtfo, vibe-coders…

    * Embrace Reading Code. Look, if you’re gonna churn milk into butter, know what both milk and butter taste like, at least for your sake. Don’t ship sour butter unless you’re making cheese, and even then, taste your own cheese. AI/ML is there to make you a more competent human being; if you’re doing it to avoid actually doing any work, you’re doing it wrong. Do it to make work worth doing again….

  • davidmurdoch 4 days ago

    Title made me think this was about writing code for vibrators.

    • tempodox 4 days ago

      That would be safety-critical, so you had better not vibe code it.

      • patrickmay 4 days ago

        Just how powerful are your... Never mind, I don't want to know.

  • bbqfog 4 days ago

    These tools continue to increase in power. It’s incredible, and there’s no use denying it. Asking people to read AI-generated code will soon make as much sense as asking people to read compiler-generated ASTs.

    • danielbln 4 days ago

      Soon, but not yet. At this point one should at least skim the code, and have a veritable zoo of validation and correction mechanisms in place (tests, LSP, complexity eval, completion eval, bot review, human review etc).

      That said, if you spend most of your time sussing out function signatures and micromanaging every little code decision the LLM makes, then that's time wasted imo and something that will become unacceptable before long.

      Builders will rejoice, artisan programmers maybe not so much.
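
      One cheap member of that validation zoo can be sketched as human-written acceptance tests gating an AI-generated implementation (slugify and the cases are invented for illustration, not from the comment):

      ```typescript
      // AI-generated candidate implementation (hypothetical example).
      function slugify(s: string): string {
        return s
          .toLowerCase()
          .trim()
          .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to "-"
          .replace(/^-+|-+$/g, "");    // strip leading/trailing hyphens
      }

      // Human-written acceptance tests: one mechanical check (alongside
      // LSP, complexity eval, review bots) that stands in for reading
      // every line of the generated body.
      const cases: Array<[string, string]> = [
        ["Hello World", "hello-world"],
        ["  Already-slugged  ", "already-slugged"],
        ["Trailing!!!", "trailing"],
      ];
      for (const [input, expected] of cases) {
        if (slugify(input) !== expected) throw new Error(`slugify(${input})`);
      }
      ```

      The point is that the gate is authored and owned by the human even when the implementation is not.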

      • Rooster61 4 days ago

        > Builders will rejoice, artisan programmers maybe not so much.

        Maintainers definitely not so much.

        • danielbln 4 days ago

          At first, maybe not. But maintenance is not a task immune to automation.

  • glitchc 4 days ago

    I'm waiting for LLMs to vibe-code new algorithms.

  • energy123 4 days ago

    In what other area is it fashionable to use your own shortcomings and lack of knowledge as evidence that something is a bad idea?

  • vdupras 3 days ago

    > Everyone keeps saying it these days: treat your AI like a (brilliant) new junior dev.

    Really? You mean that person that drains senior developer time, but in which we still invest time because we're confident they're going to turn out great after the onboarding?

    Until then, they're a net time sink, aren't they? Unless the projects you've got to deliver are fantastically boring.

    So is this really what it's all about? Perpetual onboarding?

  • goodspc 4 days ago

    we can arrange them, not be arranged by them

  • davidmurdoch 4 days ago

    "You cannot delegate the act of thinking"

    This is very dumb. Of course you can.

    • exasperaited 4 days ago

      It is a philosophical remark and it is true on that level for sure.

      You cannot delegate the act of thinking because the process of delegation is itself a decision you have made in your own mind. The thoughts of your delegates are implicitly yours.

      Just like if you include a library in your code, you are implicitly hiring the developers of that library onto your project. Your decision to use the library is hiring the people who wrote it, to implicitly write the code it replaces. (This is something I wish more people understood)

    • crinkly 4 days ago

      Indeed. You can however be a moron and delegate it to another moron.

    • visarga 4 days ago

      > This is very dumb. Of course you can.

      When it's your problem being delegated, you can't delegate consequences away. I can eat for you, but you won't get satiated this way.
