Using Claude Code to modernize a 25-year-old kernel driver

(dmitrybrant.com)

897 points | by dmitrybrant 2 days ago ago

281 comments

  • theptip 2 days ago ago

    A good case study. I have found these two to be good categories of win:

    > Use these tools as a massive force multiplier of your own skills.

    Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.

    > Use these tools for rapid onboarding onto new frameworks.

    I’m also more productive here, this is an enabler to explore new areas, and is also a boon at big tech companies where there are just lots of tech stacks and frameworks in use.

    I feel there is an interesting split forming in ability to gauge AI capabilities - it kinda requires you to be on top of a rapidly-changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0 you likely don’t have an accurate picture of its capabilities.

    “Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.

    • bicx 2 days ago ago

      This is a good takeaway. I use Claude Code as my main approach for making changes to a codebase, and I’ve been doing so every day for months. I have a solid system I’ve developed through trial and error, and overall it’s been a massive boon to my productivity and willingness to attempt larger experiments.

      One thing I love doing is developing a strong underlying data structure, schema, and internal API, then having CC essentially one-shot a great UI for internal tools.

      Being able to think at a higher level beyond grunt work and framework nuances is a game-changer for my career of 16 years.

      • kccqzy 2 days ago ago

        This is more of a reflection of how our profession has not meaningfully advanced. OP talks about boilerplate. You talk about grunt work. We now have AI to do these things for us. But why do such things need to exist in the first place? Why hasn't there been a minimal-boilerplate language and framework and programming environment? Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?

        • abathologist 2 days ago ago

          This is the glaring fallacy! We are turning to unreliable stochastic agents to churn out boilerplate and do toil that should just be abstracted or automated away by fully deterministic, reliably correct programs. This is, prima facie, a degenerative and wasteful way to develop software.

          • jama211 2 days ago ago

            Saying boilerplate shouldn’t exist is like saying we shouldn’t need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is “I mean, sure, that’d be great, not sure how you’ll actually accomplish that though”.

            • philjackson 2 days ago ago

              Great analogy. We've attempted to produce these systems and every time what emerges is software which makes easy things easy and hard things impossible.

            • mejutoco a day ago ago

              There are construction systems, for example in Japanese traditional architecture, that use no nails or screws. Good joinery often removes their need.

            • coldtea a day ago ago

              I can tell you about 1000 ways, the problem is there are no corporate monetary incentives to follow them, and not much late-90s-era FOSS ethos going around either...

            • jampekka a day ago ago

              Saying boilerplate should exist is like saying every nail should have its own hammer.

              Some amount of boilerplate probably needs to exist, but in general it would be better off minimized. For a decade or so there's sadly been a trend of deliberately increasing it.

              • coldtea a day ago ago

                >Saying boilerplate should exist is like saying every nail should have its own hammer

                It's rather saying that we should have parts that join without nailing by now, especially for things we do again and again and again and again.

              • kazinator a day ago ago

                Rather, it is boilerplate that replicates hammers along with nails.

            • kazinator a day ago ago

              Since we invented the tree and control its parameters and features, this is actually correct.

            • philsnow a day ago ago

              Even Star Trek has self-sealing stem bolts, they don't just 3d print their ships

              • joombaga a day ago ago

                They do sometimes 3D print at least smaller ships by the 2380s.

            • Ygg2 a day ago ago

              You can design furniture without nails or screws. See https://en.m.wikipedia.org/wiki/Japanese_carpentry

              The reason Japanese carpenters do (or did) that is that sea air and high humidity would absolutely rot anything held together with nails or screws.

              No furniture is really made from a single tree, though. Trees aren't massive enough.

              I agree with the overall sentiment, but the analogy is highly flawed. You can't compare physical things with software. Physical things are way more constrained, while software is super abstract.

              • oldsecondhand a day ago ago

                > Reason Japanese carpenters do or did that is that sea air + high humidity would absolutely rot anything with nail and screw.

                The other reason was that iron was very expensive in Japan as they had only low quality iron ore.

            • jonstewart a day ago ago

              Carpenters/framers are less skilled and paid less than cabinetmakers. But the world needs more carpenters.

              • namibj 14 hours ago ago

                While that sounds likely true for the US, it's the opposite in Germany: likely due to societal expectations around "creature comforts" and the fact that German homes aren't framed with 2x4s; instead, guild-approved craftsmen construct a roof for a brick building (often with precast concrete slabs forming the intermediate floors, segmented along the non-bridging direction to be less customized).

              • j45 a day ago ago

                The value is where the demand is, or where the market values it, and not just in the skill of working with wood and tools to create nearly anything.

            • okr 2 days ago ago

              Love this analogy.

          • jazzyjackson 2 days ago ago

            Yes, and it's why AI fills me with impending doom: handing over the reins to an AI that can deal with the bullshit for us means we will get stuck in a groundhog day scenario of waking up with the same shitty architecture for the foreseeable future. Automation is the opposite of plasticity.

            • bicx 5 hours ago ago

              Maybe if you fully hand over the reins and go watch YouTube all day.

              LLMs allow us to do large but cheap experiments that we would never attempt otherwise. That includes new architectures. Automation in the traditional sense is the opposite of plasticity (because it's optimizing and crystallizing around a very specific process), but what we're doing with LLMs isn't that. Every new request can be different. Experiments are more possible, not less. We don't have to tear down years of scaffolding like old automated systems. We just nudge it in a new direction.

            • ako 2 days ago ago

              I don’t think that will happen. It’s more like a 3d printer where you can feed in a new architecture and new design every day and it will create it. More flexibility instead of less.

            • Chris2048 10 hours ago ago

              I find it more likely it will result in an influx of new architectures.

              Eventually, prog-lang designers will figure out how to get llms to create new prog-langs.

          • jclarkcom 2 days ago ago

            When humans are in the loop everything pretty much becomes stochastic as well. What matters more is the error rate and result correctness. I think this shifts the focus towards test cases, measurement, and outcome.

            • elzbardico 2 days ago ago

              No. This is a fundamentally erroneous analogy. We don't generate code by a stochastic process.

              • aargh_aargh 2 days ago ago

                You don't? I do.

                A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.

              • jxf 2 days ago ago

                Everything we do is a stochastic process. If you throw a dart 100 times at a target, it's not going to land at the same spot every time. There is a great deal of uncertainty and non-deterministic behavior in our everyday actions.

                • discreteevent a day ago ago

                  > throw a dart ... great deal of uncertainty and non-deterministic behavior in our everyday actions.

                  Throwing a dart could not be further away from programming a computer. It's one of the most deterministic things we can do. If I write if(n>0) then the computer will execute my intent with 100% accuracy. It won't compare n to 0.005.

                  You see arguments like yours a lot. It seems to be a way of saying "let's lower the bar for AI". But suppose I have a laser guided rifle that I rely on for my food and someone comes along with a bow and arrow and says "give it a chance, after all lots of things we do are inaccurate, like throwing darts for example". What would you answer?

                • utyop22 a day ago ago

                  Go say this to a darts player who has hit a 9 darter…..

                  Actually no wait let’s expand it. Why not go say this to Ronnie O’Sullivan too!

                  The way you’re describing it implies there is no determinism behind what is being done. That's simply not true.

                  • tankenmate a day ago ago

                    A stochastic system can have deterministic sub-parts; a deterministic system cannot have stochastic sub-parts.

                    • Chris2048 10 hours ago ago

                      If we are talking in terms of IRL/physics, there is no such thing as a deterministic system outside of theory - everything is stochastic to differing degrees, including your brain that came up with these thoughts.

                    • utyop22 a day ago ago

                      There's nothing stochastic about a human that hits a 147, mate, nor a 9-darter. I can't believe people seriously post this nonsense.

                • jay-barronville a day ago ago

                  As much as it’s true that there’s stochasticity involved in just about everything that we do, I’m not sure that that’s equivalent to everything we do being a stochastic process. With your dart example, a very significant amount of the stochasticity involved in the determination of where the dart lands is external to the human thrower. An expert human thrower could easily make it appear deterministic.

              • MostlyStable 2 days ago ago

                We don't understand how human minds work anywhere close to well enough to say this.

              • tankenmate 2 days ago ago

                I have a strong suspicion that the world is not as deterministic as you'd like it to be.

                • lukan 2 days ago ago

                  Or it is deterministic, but infinitely complex, so that also leaves us only with stochastic.

                  • Chris2048 10 hours ago ago

                    Stochastic vs deterministic is arguably a property of modelling, not reality.

                    Something so complex that we cannot model it as deterministic is hence stochastic. We can just as easily model a stochastic thing as deterministic by ignoring the stochastic parts.

                    Separating the subjective appearance of things from how we can conceptualise them as models begs a deeper philosophical question of how you can talk about the nature of things you cannot perceive.

              • jcelerier a day ago ago

                I remember one of my ex-bosses in 2015 telling us he was basically doing "intuitive programming" instead of rational programming. It worked quite well.

              • flir a day ago ago

                Not interested in joining a pile-on, but I just wanted to point out how difficult reproducible builds are. I think there's still a bit of unpredictability in there, unless we go to extraordinary lengths (see also: software proofs).

              • jay-barronville a day ago ago

                I think that both of you are right to some extent.

                It’s undeniable that humans exhibit stochastic traits, but we’re obviously not stochastic processes in the same sense as LLMs and the like. We have agency, error-correction, and learning mechanisms that make us far more reliable.

                In practice, humans (especially experts) have an apparent determinism despite all of the randomness involved (both internally and externally) in many of our actions.

          • zer00eyz 2 days ago ago

            > This is the glaring fallacy!

            It feels like toil because it's not the interesting or engaging part of the work.

            If you're going to build a piece of furniture, the cutting, nailing, and gluing are the "boilerplate" you have to do around the act of creation.

            LLMs are just nail guns.

            • nickserv a day ago ago

              At least for me when woodworking, the cutting, nailing, and gluing are the fun bits. The sanding and finishing are the grunt work/boilerplate.

              • peteforde a day ago ago

                The AI BAD folks camping in this thread would be angry that you're still producing work that requires sanding.

            • baq 2 days ago ago

              and sanding. don't forget sanding. 90% of building furniture is sanding.

            • ori_b a day ago ago

              Great analogy. As someone else pointed out in a different subthread, quality furniture isn't held together with nails.

            • jamesnorden a day ago ago

              Maybe nail guns that have a chance to randomly shoot nails into your leg and apologize when you ask why it did that.

          • baq 2 days ago ago

            nothing prevents stochastic agents from producing reliable, deterministic and correct programs. it's literally what the agents are designed for. it's much less wasteful than me doing the same work, and much, much less wasteful than trying to find a framework for all frameworks.

          • Wowfunhappy a day ago ago

            Isn’t trying to remove boilerplate how we end up with situations like left-pad?

            I actually think I like the idea that, by handing my boilerplate over to AI, we can be more comfortable with having boilerplate to begin with.

          • eru a day ago ago

            Reliably correct is good, but why does it need to be fully deterministic?

            • skydhash a day ago ago

              Reduced mental load. When it’s proven that a set of inputs will always result in the same output, you don’t have to verify the output. And you can just chain processes together without having to worry about time wasted because of deviations.

          • nurettin 2 days ago ago

            Great point, but there is absolutely no way of doing this for every framework and then maintaining it for ages. It is logistically impossible.

          • mquander 2 days ago ago

            I guess this is probably what Lucifer said to God about why it was stupid to give humans free will.

          • j45 a day ago ago

            This is very true for the most basic approaches to using stochastic agents for this purpose, especially with generalized agents and approaches.

            It is possible to get much higher quality with not just oversight, but by creating alignment so the stochastic agents have no choice but to converge reliably towards the desired vector of work.

            Human-in-the-loop AI is fine. I'm not sure that everything needs to be automated; it's entirely possible to get further and get more reps in on a problem with the tool, as long as the human is the driver and uses the stochastic agent as a thinking partner and not the other way around.

        • travisgriggs 2 days ago ago

          My take: money. Years ago, when I was cutting my teeth in software, efficiency was a real concern. Not just efficiency for limited CPU, memory, and storage. But also how you could maximize the output of smaller head count of developers. There was a lot of debate over which methodologies, languages, etc, gave the biggest bang for buck.

          And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible and see what stuck, deal with the consequences later. And to be fair, there were studies showing that choice of language didn’t actually make as big of a difference as the emotions behind the debates suggested. And then the web… designed by committee over years and years, with never the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specialization. It all becomes an unabated positive feedback loop.

          I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)

          • cestith a day ago ago

            Many of the languages that allow people to quickly develop software end up with their own tradeoffs. Some of them have unusual syntax, at least in part of the language. Many of them allow duck typing, which many consider a major detriment to production reliability. Some of them are only interpreted. Some of them have a syntax people just don’t like. Some of them are just really big languages with lots of features, because getting rid of the boilerplate often means more builtins or a bigger standard library. For some of them, either the runtime or the build time leaves a lot to be desired.

            Here’s an incomplete list for those traits. For unusual syntax, there’s many of the FP languages, Ada, APL, Delphi/Object Pascal, JS, and Perl. For duck typing, there’s Ruby, Python, PHP, JS, and Perl. For only interpreted, there are Ruby, PHP, and Perl (and formerly for some time Python and JS). For syntax that’s not necessarily odd (but may be) but lots of people find distasteful, there’s Perl, any form of Lisp, APL, Haskell, the ML family, Fortran, JS, and in some camps Python, PHP, Ruby, Go, or anything from the Pascal family. For big languages with lots of interacting parts, there’s Perl, Ada, PHP, Lisp with CLOS, and Julia. For slowdowns, there’s Julia, Python, PHP, and Ruby. The runtime for Perl is actually pretty fast once it’s up and running, but having to build the app before running it on every invocation makes for a slow start time.

            All that said, certain orgs do impressive projects pretty quickly with some of these languages. Some do impressively quick work with even less popular languages like Pike, Ponie, Elixir, Vala, AppScript, Forth, IPL, Factor, Raku, or Haxe. Notice some of those are very targeted, which is another reason boilerplate is minimal. It’s built into the language or environment. That makes development fast, but general reuse of the code pretty low.

        • jalk 2 days ago ago

          We have been emphasizing creating abstractions since forever. We now have several different hardware platforms, programming languages, OSes, a gazillion web frameworks, tons of databases, build tools, clustering frameworks, and on and on and on. We haven't done so entirely collectively, but I don't think the amount of choice here reflects that we are stupid, but rather that "one size doesn't fit all". Think about the endless debates and flame wars about the "best" of those abstractions. I'm sure that Skynet will end that discussion and come up with the one true and only abstraction needed ;)

        • mikepurvis 2 days ago ago

          I feel this some days, but honestly I’m not sure it’s the whole answer. Every piece of code has some purpose or expresses a decision point in a design, and when you “abstract” away those decisions, they don’t usually go away — often they’re just hidden in a library or base class, or become a matter of convention.

          Python’s subprocess, for example, has a lot of args, and that reflects the reality that creating processes is finicky and there are a lot of subtly different ways to do it. Getting an llm to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do, or where we’ve standardized on only one way to do it and one thing that happens with the pipes and one thing for the return code and all the rest of it.
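
          For example (a generic sketch, nothing specific to any one use case), even a routine subprocess.run() call has to spell out several of those decisions explicitly:

          ```python
          import subprocess

          # Each keyword argument below is one of those "finicky" decisions made explicit.
          result = subprocess.run(
              ["git", "status", "--porcelain"],  # argv list, no shell parsing
              capture_output=True,               # capture stdout/stderr instead of inheriting them
              text=True,                         # decode output to str rather than bytes
              timeout=30,                        # kill the child if it hangs
              check=False,                       # don't raise on a non-zero exit code
              cwd=None,                          # inherit the current working directory
          )
          print(result.returncode, result.stdout)
          ```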

        • lukaslalinsky a day ago ago

          I actually prefer the world with boilerplate connecting the more important pieces of code together over opinionated frameworks, because the boilerplate can evolve; changing the opinionated frameworks is much harder, and it's probably done by full rewrite. The thing is, the boilerplate needs to be kept to a minimum; that's what I consider good API design. It allows you to do custom things, so you need some glue code, but not so much that you are writing a new framework each time you use it.

        • jcelerier a day ago ago

          > Why hasn't there been a minimal-boilerplate language and framework and programming environment

          Because everyone needs boilerplate, but it's a different boilerplate for everyone unless you're doing the most basic toy apps.

        • anyfoo 2 days ago ago

          Because people think learning Haskell is too hard.

          • do_not_redeem 2 days ago ago

            Haskell isn't immune to boilerplate. Luckily if you're stuck using Haskell there's a package to help you deal with it all: https://hackage.haskell.org/package/boilerplate

            • anyfoo 2 days ago ago

              I find that, of all languages, Haskell often allows me to get by with the least boilerplate. Packages like lenses/optics (and yes, Scrap Your Boilerplate/Generics) help. Funny package, though!

            • wyager 2 days ago ago

              It's very minimal-boilerplate. It's done an exceptional job of eliminating procedural, tedious work, and it's done it in a way that doesn't even require macros! "Template Haskell" is Haskell's macro system and it's rarely used anymore.

              These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.

              If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.

        • chii 19 hours ago ago

          > Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?

          I think it has. How much easier is it today than yester-decade to write and deploy an application to multiple platforms (and have it look/run similarly)?

          How much less knowledge does it require now than before?

        • kwanbix 2 days ago ago

          It used to be. When I learned to program for Windows, I would basically learn Delphi or Visual Basic at the time, maybe some database like Paradox. But I was reading a website that lists the skills needed to write a backend, and it was like 30 different things to learn.

          • kccqzy a day ago ago

            That's exactly what I had in mind when I wrote the original comment. I learned Visual Basic as a kid faffing around on a computer, and it took so little boilerplate to make an app. It's been a regression since then.

        • ZYbCRq22HbJ2y7 2 days ago ago

          > Why hasn't there been a minimal-boilerplate language and framework and programming environment?

          There are? For example, Rails has had boilerplate generation commands for a couple of decades.

          • mhluongo 2 days ago ago

            There's boilerplate in Rails too. We move the goal posts for what we define as boilerplate as we better explore and solve a class of problems.

            • dymk 2 days ago ago

              What boilerplate is there in rails?

              • TheDong 2 days ago ago

                HTML is like 90% boilerplate, and so .html.erb in Rails is mostly boilerplate.

                • skydhash a day ago ago

                  We have the component architecture pattern to reduce the amount of HTML we have to write. If you’re duplicating HTML elements on every page, that’s mostly on you. There’s a reason every template language has an include statement. That’s a problem that’s been solved for ages.

          • yencabulator a day ago ago

            Generating boilerplate is the worst of both worlds. The point is to not need so much of it.

        • andoando 2 days ago ago

          There's a million different environments. This includes OSes, languages, frameworks, and setups within those frameworks: Spring, Java or Kotlin, REST or gRPC, MySQL or Postgres, OkHttp or Ktor, etc. etc.

          There is no software you could possibly write that works for everything that'd be as good as "Give me an internal dashboard with these features".

        • Aperocky a day ago ago

          > collectively emphasized the creation of new tools

          In fact, we've collectively created thousands of them, and all of them are a various flavor of mid.

        • jimbokun a day ago ago

          Because no one wants to develop and use Lisp macros.

        • anbende a day ago ago

          I think this is one way of looking at what your parent was describing.

          They weren’t just saying ‘AI writes the boilerplate for me.’ They were saying: once you’ve written the same glue the 3rd, 4th, 5th time, you can start folding that pattern into your own custom dev tooling.

          AI not as a boilerplate writer but as an assistant to build out a personal scaffolding toolset quickly and organically. Or maybe you think that should be more systemized and less personal?

        • logicchains a day ago ago

          >Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?

          Lisp completely eliminates boilerplate and has been around for decades, but hardly anyone uses it because programs that use macros to eliminate boilerplate aren't easy to read.

        • codeulike a day ago ago

          > Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?

          You don't understand how things evolve.

          There have been plenty of platforms that got rid of boilerplate - e.g. Ruby on Rails about 20 years ago.

          But once they become the mainstream, people can get a competitive edge by re-adding loads of complexity and boilerplate again, e.g. complex front-end frameworks like React.

          If you want your startup to look good you've got to use the latest trendy front end thingummy

          Also, to be fair, it's not just fashion. Features that would have been advanced 20 years ago become taken for granted as time goes on, hence we are always working at the current limit of complexity (and that's why we're always overrun with bugs and always coming up with new platforms to solve all the problems and get rid of all the boilerplate so that we can invent new boilerplate).

        • zipzapzip 2 days ago ago

          Because of the obsession with backwards compatibility and not breaking code. The web development industry is the prime example: HTML, JavaScript, CSS, a backend/frontend architecture - an absolutely terrible stack.

          • lenkite 2 days ago ago

            I don't even know why things like templating and inclusion are not just part of the core web stack (ideally declaratively with no JS). There should be no need for an external tool or build process or third-party framework.

            • skydhash a day ago ago

              HTML is a rendered document. It’s OK to write it by hand if you only need one document, but it’s better to use an actual template language or some generator if you’re going to have the same layout and components across many pages.

              You’re asking to shift this job from the editor (you) to the viewer (the browser).

              • lenkite a day ago ago

                Maybe it was a "viewer" in the 90s. The viewer is no longer just a viewer - it is a full-fledged application runtime that has a developer environment and media stack, along with several miscellaneous runtimes. A standard template language and document-inclusion feature is very small peanuts compared to that: a teeny house compared to the galaxy already built in, with several planets' worth of features being added yearly.

                • cestith a day ago ago

                  You both make good points, and I come down on the side of adding some template mechanism to web standards. Of course, that all starts with an RFC and a reference implementation. Any volunteers?

                  • lenkite a day ago ago

                    Would raise my hand to volunteer for the reference implementation. I guess it would need to be in C++/Rust? An RFC, however, involves way too much talking and also needs solid networking amongst the web crowd. Not qualified for that. For a template language, it would be better to copy a subset of an existing de-facto standard like jinja2, which already has a lean, performant subset implementation at https://github.com/Keats/tera.

                    The document/template inclusion model should be OK now in the modern era thanks to HTTP/3. Not really sure how that should ideally look, though.
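
                    For a rough sense of the inclusion model, here's a tiny sketch of what jinja2's include already looks like today (using the Python implementation purely for illustration, not tera):

                    ```python
                    from jinja2 import Environment, DictLoader

                    # Two templates; page.html pulls in header.html declaratively via include.
                    templates = {
                        "header.html": "<header><h1>{{ title }}</h1></header>",
                        "page.html": "{% include 'header.html' %}\n<main>{{ body }}</main>",
                    }

                    env = Environment(loader=DictLoader(templates))
                    print(env.get_template("page.html").render(title="Docs", body="Hello"))
                    ```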

          • baq 2 days ago ago

            if the simplest web page pulls in react in an attempt to be a small OS unto itself, that's what you get.

        • wyager 2 days ago ago

          > Why hasn't there been a minimal-boilerplate language and framework and programming environment?

          Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).

          To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.

          The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.

          • skydhash a day ago ago

            Lisp can be very productive, but it requires actual design skills to wield it. It’s easier to teach Python.

        • IanCal 2 days ago ago

          Because the set of problems we set out to solve with code is huge and the world is messy. Many of these things really are at a very high level of abstraction, and the boilerplate feels boilerplatey but is actually slightly different in ways that aren't automatable. Or it is, but the configuration for that automation becomes the new bit you look at and see as grunt work.

          Now we have a way we can get computers to do it!

      • player1234 13 hours ago ago

        How did you measure this productivity gain? Please share your methodology

      • JOnAgain a day ago ago

        Can you share more?

    • nine_k 2 days ago ago

      Yes. The author essentially asked Claude to port a driver from Linux 2.4 to Linux 6.8. Certainly there must be sufficient amounts of training material, and web-searchable material, that describe such tasks. The author provided his own expertise where Claude could not find a good analogue in the training corpus, that is, the few actually non-trivial bits of porting.

      "Use these tools as a massive force multiplier of your own skills" is a great way to formulate it. If your own skills in the area are near-zero, multiplying them by a large factor may still yield a near-zero result. (And negative productivity.)

      • rmoriz 2 days ago ago

        You can still ask it questions, generate a list of things to learn, etc. - basically generate a streamlined course based on all the tutorials, READMEs, and source code available when the model was trained. You can call your tutor 24/7 as long as you've got tokens.

        • seba_dos1 a day ago ago

          You have to stay on guard at each step to notice the inconsistencies and call out your tutor's mistakes though, or you'll inevitably learn some garbage. This is a use case that certainly "feels" like it's boosting your learning (it sure does to me), but I'd like to read an actual study on whether it really does before reaching any conclusions.

          It seems to me that LLMs help the most at the initial step of getting into some rabbit hole - when you're getting familiar with the jargon, so you can start reading some proper resources without being confused too much. The sooner you manage to move there, the better.

          • rmoriz 14 hours ago ago

            You overestimate hallucinations in known settings. If you ask it to show source code, it‘s easy to check the sources (of a framework, language, or local code).

        • theshrike79 2 days ago ago

          ChatGPT even has a specific "Study mode" where it refrains from telling you the answer directly and kinda guides you to figure it out yourself.

    • not_that_d 2 days ago ago

      For me it's not so. It makes me way faster in languages that I don't know, but it makes me slower in the ones I know, because a lot of the time it creates code that will eventually fail.

      Then I need to spend extra time following everything it did so I can "fix" the problem.

      • peteforde a day ago ago

        My daily experience suggests that this happens primarily when the developer isn't as good as they assume they are at turning the ideas in their head into a structure that the LLM can run with. That's not intended to be a jab, just an opportunity for reflection.

        • skydhash a day ago ago

          But the moment I’ve got the idea in my head is the moment I’ve got the code for it. The time spent is mostly checking the library semantics, or whether there isn’t already some function written for a specific bit. There’s also checking that you’re not violating some contract somewhere.

          A lot of people have a try-it-and-see-if-it-works approach. That can be insanely wasteful in any moderately complex system. The scientist’s way is to have a model that reduces the system to a few parameters. Then you’ll see that a lot of libraries are mostly surface work and slightly modified versions of the same thing.

    • ZYbCRq22HbJ2y7 2 days ago ago

      We have members on my team who definitely feel empowered to wade into new territory, but they make so much misdirected code with LLMs, even when we make everyone use Claude 4 thinking agents.

      It seems to me that if you have been pattern matching for the majority of your coding career, and then you have an LLM agent pattern match on top of that, it results in a lot of headaches for the people on the team who haven't been doing that.

      I think LLM agents are supremely faster at pattern matching than humans, but are not as good at it in general.

      • baq 2 days ago ago

        > they make so much misdirected code with LLMs

        just points to the fact that they've no idea what they're doing and would produce different, pointless code by hand, though much slower. this is the paradigm shift - you need a much bigger sieve to filter out the many more orders of magnitude of crap that inexperienced operators of LLMs create.

        • matwood a day ago ago

          It'll be interesting to see whether tools like Claude end up being used to highlight the people who have no clue what they are doing.

          • johnisgood a day ago ago

            I think you can do this already. If you do not know the underlying concepts, or have no idea how to architect your project and so forth, then you will have problems with LLMs. So I think for many if not most people who have problems with LLMs, it is most likely due to their lack of knowledge and/or their expectation that they can simply write two sentences and it will figure out what they want and how they want it.

            You cannot outsource thinking to LLMs, at least not yet, if ever. You have to be part of the whole process. You need to have knowledge. If you have no idea what it is doing or what you want it to do, you are going to have a difficult time.

            • skydhash a day ago ago

              The thing is, is it slower to code with LLMs if you already have the knowledge? I think it is. Coding is formal. There’s usually one correct way to tell the computer to do something (all the alternatives are equivalent through abstraction or transposition). The other ways are what we call bugs, and there’s an infinity of them.

              The programming language eliminates some (incorrect syntax) while the type system gets rid of others (contract errors). We also have linters that help us with harmful patterns. But the range of errors is still enormous. So what’s the probability of the LLM being error-free, or as close as possible to the intended result?

              We as humans have reduced the probability of error by having libraries of correct code (or by outsourcing the correction of code), thus having a firmer and cognitively manageable foundation on which to create new code. As well as by not having to rely on language to solve problems.

              • johnisgood 3 hours ago ago

                > is it slower to code with LLMs if you already have the knowledge? I think it is so.

                In my case it is not slower, so it works for me. I cannot speak for others.

              • theptip 20 hours ago ago

                > Coding is formal

                I just don’t see it like this; code is craft, and there are ten ways to solve any given problem. Reasonable people can select different tradeoffs, and taste is also a big factor.

                Maybe if you are working in very low-level algorithmic, compiler, or protocol development it’s less ambiguous. But almost all software is written many layers above that.

                I’m sure if you already sat down and thought through every detail, you might find LLMs slow you down vs typing. Many people use the process of writing, or the process of iterating with customers, to flesh out the ambiguous detail; in which case improving cycle time can improve your time to PMF.

              • matwood a day ago ago

                > The thing is, is it slower to code with LLMs if you already have the knowledge?

                Maybe if all you do is code, but that’s not how most people work. Being able to write "I need these things done in this way" and then attend a meeting or start researching the next thing is valuable. And because of my other obligations, there’s no way I could do more without Claude.

      • maccard a day ago ago

        One of the things I’ve noticed is that those people are the same people who would previously spend 3 weeks on something before coming back with a copy of the docs that doesn’t actually solve the problem at hand but spits out a result that almost matches what you asked for. They never understood the problem in the first place; they always just hammered until the nail went in - now they just have a different tool.

        • skydhash a day ago ago

          Every time I have to mentor juniors, it’s more productive to get them to articulate the problem and their initial solution. It’s often sufficient to highlight (mostly for them) how little they actually know about the problem they’re rushing to solve.

    • meesles 2 days ago ago

      > Use these tools as a massive force multiplier of your own skills.

      I've felt this lesson just this week - it took me creating a small project with 10 clear repetitions, messily made from AI input. But then the magic is creating 'consolidation' tasks where you can just guide it into unifying markup, styles/JS, whatever you may have on your hands.

      I think it was less obvious to me in my day job because in a startup with a lack of strong coding conventions, it's harder to apply these pattern-matching requests since there are fewer patterns. I can imagine in a strict, mature codebase this would be way more effective.

      • rmoriz 2 days ago ago

        In times of Rust and TypeScript (just examples), coding standards are explicit. It‘s not optional anymore. All my vibe coding projects use CI with tests, including style and type checks. The agent makes mistakes, but it sees the failing tests and fixes them. If you vibe code like we did Perl and PHP in 1999, you‘re gonna have a bad time.

        • baq 2 days ago ago

          In a startup, especially early, the only thing that isn't optional is shipping. You can fix any and all issues later, first you have to ensure there is a 'later'. (That's not to say you shouldn't do it at all, but definitely focus on the business before the tech.)

          • rmoriz 2 days ago ago

            I‘ve been with a couple of startups that shipped and not a single one was cutting corners in this area anymore. Last time I saw this was probably around 2005-2008.

            • bcrosby95 a day ago ago

              Cutting corners here is one of the worst decisions you can make. Especially in an environment where you don't know your end product and you're likely to rework your code over and over and over again.

              Piling shit on top of shit only pays off on very short time scales - like a month or two. Because once you revisit that shit code all your time savings are out the window. If you have to revisit it more than once you probably slowed yourself down already.

              • rmoriz a day ago ago

                And you can add an AI code agent (Copilot, Opencode, Claude) to the workflow just to deal with failing tests and automatically create PRs to fix the issues. It can boost productivity a lot, doing the housekeeping while you code manually.

    • marcus_holmes 2 days ago ago

      >> Use these tools for rapid onboarding onto new frameworks.

      Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code. I have to make all the decisions, and guide it, but I don't need to learn Ruby to write acceptable-level code [0]. I get to be immediately productive in an unfamiliar environment, which is great.

      [0] acceptable-level as defined by the rest of the team - they're checking my PRs.

      • AdieuToLogic 2 days ago ago

        >>> Use these tools for rapid onboarding onto new frameworks.

        > Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code.

        If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?

        > ... but I don't need to learn Ruby to write acceptable-level code [0].

        Since the team you work with uses Ruby, why do you not need to learn it?

        > [0] acceptable-level as defined by the rest of the team - they're checking my PRs.

        Ah. Now I get it.

        Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PRs will not obviously fail.

        Here's a thought - has it crossed your mind that team members needing to determine whether your PRs are acceptable is "a bad thing", in that it may indicate a lack of trust in the changes you have been introducing?

        Furthermore, does this situation qualify as "immediately productive" for the team or only for yourself?

        EDIT:

        If you are not a software engineer by trade and are instead a stakeholder wanting to formally specify desired system changes to the engineering team, an approach to consider is authoring RSpec[0] specs to define feature/integration specifications instead of PRs.

        This would enable you to codify functional requirements such that their satisfaction is provable, assist the engineering team's understanding of what must be done in the context of existing behavior, identify conflicting system requirements (if any) before engineering effort is expended, provide a suite of functional regression tests, and serve as executable documentation for team members.

        0 - https://rspec.info/features/6-1/rspec-rails/feature-specs/fe...

        • maccard a day ago ago

          > Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PR's will not obviously fail.

          I lead the engineering team at my org and we hire almost exclusively C++ engineers (we make games). Our build system, by happenstance, is written in C#, as are all the automation scripts. That's out of our control to change. Should we require every engineer to be competent and write fluent C#, or should we let them just get on with their value adds?

          • skydhash a day ago ago

            Programming languages are not actually that different. There are only a few models of computation and paradigms. The effort is mostly about learning the syntax, the standard library, and whatever abstractions are built around those paradigms and computation models. And learning the standard library is the tough one.

            I would expect every engineer to be able to read C#. It’s not that hard.

            • marcus_holmes 18 hours ago ago

              This. Reading a language (and not only programming languages) is very different from being able to construct good, elegant routines (or sentences) in that language.

        • hamdingers a day ago ago

          > If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?

          Reading code doesn't mean you can write it, as any programmer will tell you.

          If I want to know whether a string in Ruby begins with another string, is the method starts_with or start_with or startwith like Python, or is it like Perl where I have to use some completely different method? I don't know, better google it.

          But if I'm reading and see `str.start_with?("https://")` I know instantly what it's doing.

        • ponector 2 days ago ago

          That is what I observe at work: people who heavily use LLMs in their coding don't read, review, and test their code, pushing this work onto teammates and testers.

          Great skill multiplier, right?

        • nchmy 2 days ago ago

          are you advocating for not having code reviews...? Just straight force push to main?

          • AdieuToLogic 2 days ago ago

            > are you advocating for not having code reviews...? Just straight force push to main?

            No, not at all.

            What I was speaking about was that if the person to whom I replied is not a s/w engineer, then perhaps a better contribution to their project would be to define requirements in the form of RSpec specifications (since Ruby is in use) and allow the engineering team to satisfy them as they determine appropriate.

            I have seen product/project managers attempt to "contribute" to a development effort much like what was described. Usually there is a power dynamic such that engineers cannot overtly tell the manager(s), "you define the 'what' and we will define the 'how'." Instead, something like the PR flow described is grudgingly accepted and then worked around.

            • marcus_holmes 18 hours ago ago

              I'm the person you replied to. I've been developing software for >30 years now. In this case I have domain knowledge, architecture knowledge, experience with the type of systems we're building, but not the language (it's an odd situation). I'm using an LLM to avoid the weeks/months of getting up to speed with Ruby myself, and it appears to be working.

              To address your comments about PRs: without the LLM I would be submitting shitty PRs with lots of basic Ruby mistakes. With the LLM I am submitting PRs that are on a par with everyone else's PRs (Ruby has many ways of doing the same thing, so most suggested changes to my PRs are the usual "or you could do it this way and that might be more elegant" discussions). It's not that the rest of the team are picking up my slack, it's actually better this way.

              I was a bit sceptical when I started, and like you I assumed that I would end up having to learn Ruby, but in fact it's working well.

          • cyphar a day ago ago

            Code reviews (especially internal ones) generally assume that the person writing the original code has an idea of what they are doing, and are designed to catch mistakes that humans might make. Just because they probably work to improve codebases with human submissions doesn't mean that they are a good enough filter for LLM-generated code that the submitter doesn't sufficiently understand and has submitted without their own review. Same goes for CI and testing.

            This reminds of some of the comments made by reviewers during the infamous Schön scientific fraud case. The scientific review process is designed to catch mistakes and honest flaws in research. It is not designed to catch fraud, and the evidence shows that it is bad at it.

            Another applicable example would be the bad patches fiasco with the Linux kernel. (And there is going to be a session at the upcoming maintainers' summit about LLM-generated kernel patches.)

    • davidw a day ago ago

      I'm feeling quite wary of the fact that if it's a real productivity booster, it's all in the hands of one company. Perhaps some of the others will be able to compete with it, but: still all big corporations.

    • faangguyindia a day ago ago

      Those who use Claude Code: what do you think are its best features, the ones you cannot live without and that make your life so much easier? I am using Claude Code but I am not sure what I should look into.

    • player1234 13 hours ago ago

      How did you measure this productivity gain? Please share your methodology.

    • emilecantin a day ago ago

      One area where it really shines for me is personal projects. You know, the type of project you might get to spend a couple of hours on once the kids are in bed... Spending that couple of hours guiding Claude to do what I want is way quicker than doing it all myself. Especially since I do have the skills to do it all myself, just not the time. It's been particularly effective around UI stuff, since I've selected a popular UI library (MUI) but I don't use it in my day job; I had to keep looking up documentation, but Claude just bangs it out very easily.

      One thing where it hasn't shone is configuring my production deployment. I had set this project up with a docker-compose, but my selected CI/CD (Gitlab) and my selected hosting provider (DigitalOcean) seemed to steer me more towards Kubernetes, which I don't know anything about. Gitlab's documentation wanted me to setup Flux (?) and at some point referred to a Helm chart (?)... All words I've heard but their documentation is useless to newcomers ("manage containers in production!": yes, that's obviously what I'm trying to do... "Getting started: run this obscure command with 5 arguments": wth is this path I need to provide? what's this parameter? etc.) I honestly can't believe how complex the recommended setup is, to ultimately run 2 containers that I already have defined in ~20 lines of docker-compose...

      Claude got me through it. Took it about 5-6 hours of trying stuff, build failing, trying again. And even then, it still doesn't deploy when I push. It builds, pushes the new container images, and spins up a new pod... which it then immediately kills because my older one is still running and I only want one pod running... Oh well, I'll just keep killing the old pod until I have some more energy to throw at it to try and fix it.

      TL;DR: it's much better at some things than others.

      • j45 a day ago ago

        Totally. Being able to start shipping from the first commit using something like Picocss and just add features helps get things out of the design stage while shipping features individually.

        Some folks seem to like Docker Swarm before kubernetes as well and I've found it's not bad for personal projects for sure.

        AI will always return the average of its corpus given the chance (or no clear direction in the prompt). I usually let my opinions rip and say to avoid building myself a stack temple to my greatness. It often comes back with a nice lean stack.

        I usually avoid or minimize JavaScript libraries because of their brittleness, and because the complexity can eat up more of the AI's context and awareness mapping the abstractions vs. something it knows incredibly well.

        Python is great, but web stuff is still emerging, FastAPI is handy though, and putting something like Pico/HTMX/alpine.js on the front seems reasonable.

        Laravel is also really hard to overlook sometimes when working with LLMs on quick things, there's so much working code out there that it can really get a ton done for an entire production environment with all of the built in tools.

        Happy to learn about what other folks are using and liking.

    • mettamage a day ago ago

      I don’t have a lot of experience with your first point. I do have a lot of experience with your second point, and I would say that you hit the nail on the head.

    • tonkinai 2 days ago ago

      It’s less about AI vs boilerplate and more about having good tests. If the code works and you can move fast, who cares who typed it.

      • skydhash a day ago ago

        Code working is a very high bar, and the only way to get close for most projects is formal verification.

    • mattfrommars 2 days ago ago

      Do you get to use Claude Code through your employer, giving you the opportunity to spend 100 hours with it? Or do you do this on your own personal projects?

    • stevex a day ago ago

      I had an Amiga disk image (*.adf) that I wanted to extract the files from. There are probably tools to do this but I was just starting with Claude Code, so I asked it to write a tool to extract the files by implementing the filesystem.

      It took a few prompts but I know enough about FFS (the Amiga filesystem) to guide it, and it created exactly the tool I wanted.

      "force multiplier of your own skills" is a great description.

  • jillesvangurp a day ago ago

    I think this is illustrative of the kind of productive things you can do with an LLM if you know what you are doing. Is it perfect, no. Can they do useful things if you prompt correctly, absolutely. It helps knowing what you are doing and having enough skill to make good judgment calls yourself.

    There are currently multiple posts per day on HN that escalate into debates over whether LLMs are useful or not. I think this is a clear example that they can be. And results count. Porting and modernizing some ancient driver is not that easy. There's all sorts of stuff that gets dropped from the kernel because it's just too old to bother maintaining, and when nobody does, deleting the code becomes the only option. This is a good example. I imagine there are enough crusty corners in the kernel that could benefit from similar treatment.

    I've had similar mixed results with agentic coding sometimes impressing me and other times disappointing me. But if you can adapt to some of these limitations it's alright. And this seems to be a bit of a moving goalpost thing as well. Things that were hard a few months ago are now more doable.

    • ASinclair a day ago ago

      > There are currently multiple posts per day on HN that escalate into debates on LLMs being useful or not.

      My main worry is whether they will still be useful when priced above actual cost. I worry about becoming dependent on these tools only for them to get prohibitively expensive.

    • mexicocitinluez a day ago ago

      The more you use the tools, the more you're able to recognize the situations in which they're useful.

      These studies keep popping up where they randomly decide whether someone will use AI to assist with a feature or not, and it's hard for me to explain just how stupid that is, and how it's a fundamental misunderstanding of when and how you'd want to use these tools.

      It's like being a person who hangs drywall with screws and having your boss go, "Hey, I'm gonna flip a coin, and if it's heads you'll have to use the hammer instead of a screwdriver," and that being the method by which the hammer is judged.

      I don't wake up and go "I'm going to use AI today". I don't use it to create entire features. I use it like a dumb assistant.

      > I've had similar mixed results with agentic coding sometimes impressing me and other times disappointing me. But if you can adapt to some of these limitations it's alright. And this seems to be a bit of a moving goalpost thing as well. Things that were hard a few months ago are now more doable.

      Exactly my experience too.

      • jillesvangurp a day ago ago

        > I don't use it to create entire features.

        I actually do this now. That's one of those things that went from impossible to doable under some circumstances. Still a bit of a coin flip but it can work well in some code bases. I still have a mental block even asking for these things under the assumption it would not work anyway. But I've been pleasantly surprised a few times where this actually works.

        • mexicocitinluez a day ago ago

          I sorta misspoke. I use Bolt to create UI designs for entire features, but write the back-end code by hand (with Copilot as autocomplete).

          Honestly, I'm amazed at how good it is at getting something going. I always had issues extrapolating from existing designs, so the ability to get EXACT screens built without having a designer yell at me for being stupid has been a godsend.

  • eisa01 2 days ago ago

    I've used Claude Code in the past month to do development on CoMaps [1] using the 20 USD/month plan.

    I've been able to do things that I would not have the competence for otherwise, as I do not have a formal software engineering background and my main expertise is writing Python data processing scripts.

    E.g., yesterday I fixed a bug [2] by having Claude compare the CarPlay and iOS search implementations. At first it suggested a different code change than the one that fixed it, but that felt just like a normal part of debugging (you may need to try different things).

    Most of my contributions [3] have been enabled by Claude, and it's also been critical for identifying where the code for certain things is located; it's a very powerful way to search the code base.

    And it is just amazing if you need to write a simple Python script to do something, e.g., in [4].

    Now this would obviously not be possible if everyone used AI tools and no one knew the existing code base, so the future for real engineers and architects is bright!

    [1] https://codeberg.org/comaps/comaps [2] https://codeberg.org/comaps/comaps/pulls/1792 [3] https://codeberg.org/comaps/comaps/pulls?state=all&type=all&... [4] https://codeberg.org/comaps/comaps/pulls/1782

    • maelito 2 days ago ago

      Thanks for your contributions to Comaps. As the main developer of cartes.app, I'm happy to see libre traction in the world of maps.

      Hope to make the bridge soon with i18n of cartes.app.

      I also use LLMs to work on it. Mistral, mostly.

  • lukaslalinsky a day ago ago

    I mainly use Claude Code for things I know, where I just don't want to focus on the coding part. However, I recently found a very niche use. I had a small issue with an open source project. Instead of just accepting it, it occurred to me that I could just clone the repo and ask CC to look into my issue. For example, I was annoyed that in Helix/Zed, replacing a parameter in Zig code only works for function declarations, not function calls. I suspected it would be in the tree-sitter grammar, but I let it go through the Zed source code; then it asked for the grammar, so I cloned that, gave it access, and it happily fixed the grammar and tested the results. It needed a few nudges to make the fix properly, but I spent maybe 5 minutes on this, while CC was probably working for half an hour. I even had it fork the repo and open the PR for me. In the end I have a useful change that people will benefit from, one that I'd never attempt myself.

  • d4rkp4ttern a day ago ago

    > using these tools as a massive force multiplier…

    Even before tools like CC, it was the case that LLMs enabled venturing into projects/areas that would be intimidating otherwise. But Claude Code (and codex-cli as of late) has made this massively more true.

    For example I recently used CC to do a significant upgrade of the Langroid LLM-Agent framework from Pydantic V1 to V2, something I would not have dared to attempt before CC:

    https://github.com/langroid/langroid/releases/tag/0.59.0

    I also created nice collapsible HTML logs [2] for agent interactions and tool calls, inspired by @badlogic/Zechner's Claude-trace [3] (which, incidentally, is a fantastic tool!).

    [2] https://github.com/langroid/langroid/releases/tag/0.57.0

    [3] https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...

    And added a DSL to specify agentic task termination conditions based on event-sequence patterns:

    https://langroid.github.io/langroid/notes/task-termination/

    Needless to say, the docs are also made with significant CC assistance.

  • codedokode 2 days ago ago

    LLMs are also good for writing quick experiments and benchmarks to satisfy someone's curiosity. For example, once I was wondering how much time it takes to migrate a cache line between cores when several processes access the same variable; after I wrote a detailed benchmark algorithm, the LLM generated the code instantly. Note that I described the algorithm completely, and all it did was translate it into code. Obviously I could have written the code myself, but I might have needed to look up a function (how does one measure elapsed time?), I might have made mistakes in C, etc. Another time I made a benchmark to compare linear vs. tree search for finding a value in a small array.

    It's very useful when you get the answer in several minutes rather than half an hour.
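
    For reference, a minimal sketch of that kind of cache-line ping-pong benchmark (using two pinned threads rather than separate processes, with arbitrary core numbers and iteration count, so this illustrates the idea rather than the exact algorithm described above):

      /* Two threads pinned to different cores take turns bumping a shared
         atomic counter, so every access forces the cache line holding it to
         migrate between the two cores. Core IDs 0/1 and ITERS are arbitrary.
         Compile with: gcc -O2 -pthread pingpong.c */
      #define _GNU_SOURCE
      #include <pthread.h>
      #include <sched.h>
      #include <stdatomic.h>
      #include <stdio.h>
      #include <time.h>

      #define ITERS 1000000

      static _Atomic long shared;   /* the contended variable */

      static void pin_to_cpu(int cpu)
      {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(cpu, &set);
          pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
      }

      static void *worker(void *arg)
      {
          long parity = (long)arg;  /* 0 or 1: which turns belong to this thread */
          pin_to_cpu((int)parity);
          for (long i = 0; i < ITERS; i++) {
              while (atomic_load(&shared) % 2 != parity)
                  ;                              /* wait for our turn */
              atomic_fetch_add(&shared, 1);      /* hand the line back */
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t a, b;
          struct timespec t0, t1;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          pthread_create(&a, NULL, worker, (void *)0L);
          pthread_create(&b, NULL, worker, (void *)1L);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          clock_gettime(CLOCK_MONOTONIC, &t1);

          double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
          printf("%.1f ns per round trip\n", ns / ITERS);
          return 0;
      }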

    • codedokode 2 days ago ago

      Also, I wanted to add that LLMs (at least free ones) are pretty dumb sometimes and do not notice obvious things. For example, when writing tests they generate a lot of duplicated code and do not move it into a helper function, or do not combine tests using parametrization. I have to do it manually every time.

      Maybe it is because they generate the code in one pass and cannot go back and fix the issues. LLM makers, you should allow LLMs to review and edit the generated code.

      • kelnos a day ago ago

        I see that often enough too, but if I then ask it to review what it's done and look for opportunities to factor out duplicated code, it does a decent job.

      • nikki93 a day ago ago

        https://github.com/terryyin/lizard has been useful for tracking when functions get too convoluted or long, or when there's too much duplication, in code generated by agents. I still have to see how well it works long term, but it's caught things here and there. I have it in the build steps in my scripts so the agent sees its output.

      • jlei523 2 days ago ago

          Also, I wanted to add that LLMs (at least free ones) are pretty dumb sometimes and do not notice obvious things. For example, when writing tests they generate a lot of duplicated code and do not move it into a helper function, or do not combine tests using parametrization. I have to do it manually every time.
        
        Do you prompt it to reduce duplicated code?
        • codedokode a day ago ago

          I can prompt anything but I would prefer it not to make obvious mistakes from the start.

          • jlei523 a day ago ago

            "Use DRY coding". 3 words can solve this problem. Maybe put it in the parent prompt.

      • scotty79 a day ago ago

        > I have to do it manually every time.

        You can tell it to move the duplicated code into a helper, and it'll move it and use this shared code from now on.

        • codedokode a day ago ago

          Sometimes it seems like explaining what I want could take more time than actually editing the code.

          For example, imagine you're testing a vector-like collection. In every test case the dumb LLM creates the vector manually and makes inserts/deletes. That could be replaced by adding a helper function that accepts a sequence of operations and returns the processed vector. Furthermore, once you have that function, you can merge multiple tests with parametrization, by having a test function accept a sequence of operations and an expected result:

              parametrize('actions, result', (
                  # Test that remove removes items from vector
                  ([Ins(1, 2, 3, 4), Remove(4)], [1, 2, 3]),
                  ...
              ))
          
          But it takes time to write this explanation, and a dumb LLM might not merge all the tests on the first try.
          • scotty79 a day ago ago

            Try something like:

            "Don't create vector manually inline in every test case, make a helper function for that."

            and see what the agent does. It might do something smart. It might do something a bit dumb, but by understanding exactly why it's dumb, you can communicate what correction is needed pretty smoothly.

  • meander_water 2 days ago ago

    > Be as specific as possible, making sure to use the domain-specific keywords for the task.

    If you don't have the technical understanding of a language or framework, there is going to be a lot of ambiguity in your prompts.

    This specificity gap leads the LLM to fill in those gaps for you, which may not be what you intended. And that's usually where bugs hide.

    I think this is the flip side of being a "force multiplier".

    • SV_BubbleTime 2 days ago ago

      “I need a C class that has a constructor for a tuple”

      I would hope Claude would be like “Hold on man…”

      • qayxc 2 days ago ago

        Hm. That was interesting, so I ran it as a prompt to Qwen3-Coder (which I run locally). It spat out a bunch of C functions together with a C struct and commented:

          This implementation provides:
          Generic Tuple Structure: A flexible tuple that can hold different types of elements (int, double, string, float)
          Constructor Function: tuple_create() initializes a new tuple with specified size
          Type-Safe Setters:
            tuple_set_int(), tuple_set_double(), tuple_set_string(), tuple_set_float()
          Type-Safe Getters:
            tuple_get_int(), tuple_get_double(), tuple_get_string(), tuple_get_float()
          Memory Management: Automatic memory allocation and deallocation
          Utility Functions:
           tuple_print() for displaying tuple contents
           tuple_destroy() for cleanup
        
        Seems to be a common enough request to be present in the model's training data :D Even the code looked fine, apart from being a very naive and thus slow implementation.
      • codedokode a day ago ago

        I asked ChatGPT to make a system of macros that would generate a struct with given fields (for example: int a, bool b, char *c) and a function that would print the struct contents. Instead of refusing, it started inventing recursive macros that were never going to work. It seems LLMs are especially bad with C macros.

        If you have access to state-of-the-art coding assistants, try asking for this. You can additionally add a requirement to generate a destructor function (one that frees dynamically allocated fields).
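
        For comparison, the classic hand-written answer to this is the X-macro pattern: the field list is written once, and both the struct definition and the print function are generated from it. A minimal sketch (the MY_STRUCT_FIELDS / my_struct names are made up for illustration):

          #include <stdbool.h>
          #include <stdio.h>

          /* field list, written exactly once: type, name, printf format */
          #define MY_STRUCT_FIELDS(X)  \
              X(int,    a, "%d")       \
              X(bool,   b, "%d")       \
              X(char *, c, "%s")

          /* expand the list into the struct definition */
          #define DECLARE_FIELD(type, name, fmt) type name;
          struct my_struct {
              MY_STRUCT_FIELDS(DECLARE_FIELD)
          };

          /* expand the same list into a print function */
          #define PRINT_FIELD(type, name, fmt) \
              printf("  " #name " = " fmt "\n", s->name);
          static void my_struct_print(const struct my_struct *s)
          {
              printf("my_struct {\n");
              MY_STRUCT_FIELDS(PRINT_FIELD)
              printf("}\n");
          }

          int main(void)
          {
              struct my_struct s = { .a = 42, .b = true, .c = "hello" };
              my_struct_print(&s);
              return 0;
          }

        A destructor can be generated the same way, by adding a per-field cleanup expression to the list (e.g. free() for dynamically allocated pointer fields, nothing for the scalars).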

      • petesergeant 2 days ago ago

        > I need a C class that has a constructor for a tuple

        | I'll create a simple C++ class with a constructor that takes a tuple. Here's an example...

        > nah C

        | C doesn't have classes or constructors - those are C++ features. In C, you would use a struct and create an initialization function. Here's how you'd handle a tuple-like structure:
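
        For reference, the struct-plus-initialization-function pattern it's describing looks roughly like this (a sketch of the general idea, not the model's actual output; the pair names are made up):

          /* a fixed-shape "tuple" holding an int and a double */
          struct pair {
              int    first;
              double second;
          };

          /* the "constructor": just an ordinary initialization function */
          static struct pair pair_make(int first, double second)
          {
              struct pair p = { .first = first, .second = second };
              return p;
          }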

  • jabl 2 days ago ago

    Blast from the past! When I was a kid we had such a floppy tape device connected to a 386 or 486 computer my parents had. I think it was a Colorado Jumbo 250. I believe the actual capacity was 125MB, but the drive or the backup software had some built-in compression, which is why it was marketed as a 250MB drive. Never tried to use it with the Linux ftape driver, though.

    It wouldn't surprise me if the drive and the tapes are still somewhere in my parents' storage. Could be a fun weekend project to try it out, though I'm not sure I have any computer with a floppy interface anymore. And I don't think there's anything particularly interesting on those tapes either.

    In any case, cool project! Kudos to the author!

    • driverdan a day ago ago

      I've been trying to remember the tape drive we had on our 486 when I was a kid and that's it. Thank you!

  • Brendinooo 2 days ago ago

    When I read an article like this, it makes me think about how the demand for work to be done was nowhere close to being fully supplied by the pre-LLM status quo.

    • theshrike79 2 days ago ago

      LLM-assisted coding can get you from an idea to an MVP in an evening (within maybe one or two of Claude's 5-hour quota windows).

      I've done _so_ many of these where I go "hmm, this might be useful", plan the project into a markdown project file with the free versions of Gemini/ChatGPT, and then sic Claude on it while I catch up on my shows.

      Within a few prompts I've got something workable and I can determine if it was a good idea or not.

      Without an LLM I never would've even tried it; I have better and more urgent things to do than code a price watcher for a very niche Blu-ray seller =)

      • jason-johnson a day ago ago

        This, for me, is the actual gain, and I don't see a lot of people talking about it: it's not that I finish a project faster with the LLMs. From what I've read and personally experienced, it probably takes about as long to complete a project with or without them. But the difference is, without them I spend all that time deeply engaged, unable to do anything else. With the LLMs I no longer require continuous focus. It may be the same wall-clock time, but my own mental capacity is no longer being used at or near its limit.

      • matwood a day ago ago

        This right here. It's pretty amazing tbh. I'm typing this comment while Claude churns on an idea I had...

    • measurablefunc 2 days ago ago

      It's never about lack of work but lack of people who have the prerequisite expertise to do it. If you don't have experience w/ kernel development then no amount of prompting will get you the type of results that the author was able to achieve. More specifically, in theory it should be possible to take all the old drivers & "modernize" them to carry them forward into each new version of the kernel, but the problem is that none of the LLMs are capable of doing this work w/o human supervision, & the number of people who can actually supervise the LLMs is very small compared to the number of unmaintained drivers that could be ported into newer kernels.

      There is a good discussion/interview¹ between Alan Kay & Joe Armstrong about how most code is developed backwards b/c none of the code has a formal specification that can be "compiled" into different targets. If there were a specification other than the old driver code, then porting over the driver would be a matter of recompiling the specification for a new kernel target. In the absence of such a specification you have to substitute human expertise to make sure the invariants in the old code are maintained in the new one, b/c the LLMs have no understanding of any of it other than pattern matching against other drivers w/ similar code.

      ¹https://www.youtube.com/watch?v=axBVG_VkrHI

      • ekidd 2 days ago ago

        There is usually a specification for how hardware works. But:

        1. The original hardware spec is usually proprietary, and

        2. The spec is often what the hardware was supposed to do. But hardware prototype revisions are expensive. So at some point, the company accepts a bunch of hardware bugs, patches around them in software, ships the hardware, and reassigns the teams to a newer product. The hardware documentation won't always be updated.

        This is obviously an awful process, but I've seen and heard of versions of it for over 20 years. The underlying factors driving this can be fixed, if you really want to, but it will make your product totally uncompetitive.

      • DrewADesign 2 days ago ago

        AI doesn't need to replace a specialist in their entirety for it to tank demand for a skill. If the people that currently do the work are significantly more productive, fewer people will be necessary to do the same amount of work. Then, people trying to escape obsolescence in different, more popular specialties move into the niche ones. You could easily pass the threshold of having less work than people to do it without having replaced a single specialist.

    • bandrami 2 days ago ago

      IDK, the bottleneck really still seems to be "marketable ideas" rather than their implementation. There's only so much stuff people are willing to actually pay for.

    • pluto_modadic 2 days ago ago

      things were on the backlog, but more important things absolutely needed to be done.

    • mercenario a day ago ago

      Demand is infinite; we will always want new things, and things that are faster, smaller/bigger, lighter, cheaper.

  • 0xbadcafebee 2 days ago ago

    I had a suspicion AI would lower the barrier to entry for kernel hacking. Glad to see it's true. We could soon see much wider support for embedded/ARM hardware. Perhaps even completely new stripped-down OSes for smart devices.

    • eviks 2 days ago ago

      Nothing was lowered because there was no barrier:

      > As a giant caveat, I should note that I have a small bit of prior experience working with kernel modules, and a good amount of experience with C in general

      But yeah, the dream of new OSes is sweet...

      • baq 2 days ago ago

          I'd bet a couple of dollars that it'd take a week for someone who hasn't hacked on the kernel at all but knows some C, and two weeks for someone who doesn't even know C but is a proficient programmer. This would previously take months.

        We're talking about an order of magnitude quicker onboarding. This is absolutely massive.

        • eviks 2 days ago ago

          It's so massive that your own fantasy bet is just a couple of dollars...

        • hulitu a day ago ago

          > This is absolutely massive.

          Just like security holes generated by those LLMs. /s

    • neop1x 11 hours ago ago

      I fear it hallucinating shitty code that will introduce bugs and vulnerabilities.

    • giancarlostoro 2 days ago ago

      If used correctly it can help you get up to speed quicker, sadly most people just want it to build the house instead of using it to help them hammer nails.

    • mrheosuper 2 days ago ago

      >new stripped-down OSes for smart devices.

      What's wrong with the existing ones?

      • 0xbadcafebee a day ago ago

        The popular OSes/kernels are too bloated and don't have wide embedded support (and porting to new devices takes considerable time), and the few free embedded OSes don't have much traction (and aren't used on more powerful platforms). Would be nice to have a middle ground.

        For example, FreeRTOS doesn't support 64-bit Intel. And you don't "ship an app on FreeRTOS"; it's more of an API and framework you use, where you sort of write a module in C and compile one big app. Quite different from non-embedded app design/shipping. You won't be able to run an Android app on an ESP32, but it should be possible to write apps for an ESP32 and run them on Android-compatible hardware. But FreeRTOS would need optional MMU support, and you'd need extra components to load the app, in addition to hardware support.

        If you're asking "why would you do that", it's because I want to write simple purpose-built apps without all the trappings of a larger OS and run them on all types of hardware. You could technically build a 'smart watch' that isn't so smart but runs on a single battery charge for 1 year. But not if you use a power-hungry SoC. Want a more efficient SoC? Good luck figuring that out. Making that whole process easier unlocks more technical solutions and products.

        • mrheosuper 18 hours ago ago

          There are countless different RTOSes. The most polished one is Zephyr. FreeRTOS is more like a scheduler than a proper OS.

          Also, am I correct to assume you want an OS that's truly "cross-platform", one that can run on every architecture? Or do you just want to "have an app on ESP32 run on Android"? Because the latter can be done with a proper abstraction layer.

  • rmoriz 2 days ago ago

    I was banned from an open-source project [1] recently because I suggested a bug fix. Their "code of conduct" forbids not only PRs but also comments on issues containing information that was retrieved by any AI tool or resource.

    Thinking about asking Claude to reimplement it from scratch in Rust…

    [1] https://codeberg.org/superseriousbusiness/gotosocial/src/bra...

    • lordhumphrey 2 days ago ago

      > 2. We will not accept changes (code or otherwise) created with the aid of "AI" tooling. "AI" models are trained at the expense of underpaid workers filtering inputs of abhorrent content, and does not respect the owners of input content. Ethically, it sucks.

      Do you disagree with some part of the statement regarding "AI" in their CoC? Do you think there's a fault in their logic, or do you yourself personally just not care about the ethics at play here?

      I find it refreshing personally to see a project taking a clear stance. Kudos to them.

      Recently enjoyed reading the Dynamicland project's opinion on the subject very much too[0], which I think is quite a bit deeper of an argument than the one above.

      Ethics seems to be, unfortunately, quite low down on the list of considerations of many developers, if it factors in at all to their decisions.

      [0] https://dynamicland.org/2024/FAQ/#What_is_Realtalks_relation...

      • KingMob a day ago ago

        Setting aside the categories of art and literature, training LLMs on FOSS software seems aligned with the spirit, if not the letter, of the licenses.

        It does nothing to fix the issues of unpaid FOSS labor, though, but that was a problem well before the recent rise of LLMs.

        • creesch a day ago ago

          > FOSS software seems aligned with the spirit, if not the letter, of the licenses.

          Yeah, only if you look at permissive licenses like MIT and Apache; it most certainly doesn't follow the spirit of other licenses.

        • vbarrielle a day ago ago

          I'm not sure it's very well aligned with the spirit of copyleft licenses.

      • wordofx 2 days ago ago

        I disagree with their CoC on AI. There are so many projects which are important and don't let you contribute, or make the barrier to entry so high, and so you make a best effort to file a detailed bug description only for it to sit there for 14 years or for them to tell you to get fucked. So anyone who complains about AI isn't worth the time of day, and I support them not getting paid as much, if at all.

    • pluto_modadic 2 days ago ago

      You disobeyed a code of conduct? That's not a good look.

    • QuadmasterXLII 2 days ago ago

      That must be so hard for you.

      • rmoriz 2 days ago ago

        The bugs are on them. I've fixed them in my fork, but of course I'll migrate to a non-discriminating alternative.

        • skydhash a day ago ago

          Your fork works, so why are you so unhappy? You can always publish your diff to help other people if you really want to do so.

          • rmoriz a day ago ago

            I don't want others to get trapped, hence I've unpublished my fixes. I'll also migrate to other software, as I clearly have no time for dealing with such exclusionary politics. There is no point in discussing with stubborn and brainwashed people; the only solution is to move forward and warn others.

            That’s the reason I posted my comment.

    • sreekanth850 2 days ago ago

      Suddenly I saw this: "Update regarding corporate sponsors: we are open to sponsorship arrangements with organizations that align with our values; see the conditions below." They should know that beggars can't be choosers.

      • 3836293648 2 days ago ago

        That's not begging. That's a preemptive rejection of people who think they can take control of the project through money.

      • driverdan a day ago ago

        It's pretty funny that they say "We are not interested in input from right-wingers, nazis, ... or capitalists." and then say they're open to corporate sponsorships. If they want to be consistent they'd only be open to government or individual sponsors, not corps.

    • ok123456 a day ago ago

      "You used AI!" is now being weaponized by project maintainers who don't want to accept contributions, regardless of how innocuous.

      A large C++ emulator project was failing to build with a particular compiler with certain -Werror flags enabled. It came down to reordering a few members (that matters in C++) and using uniform initializer syntax in a few places. It was a +3/-3 diff. I got lambasted. One notoriously hostile maintainer accused me of making AI slop. The others didn't understand why the order mattered and referred to it as "churn."

    • bgwalter a day ago ago

      There is no "from scratch" for "AI". Claude will read the original, launder it, strip the license and pass it off as its own work.

      • TuxSH a day ago ago

        Indeed, LLMs cannot do truly novel thinking, and the laundering analogy is spot-on.

          However, they're able to do more than just regurgitate code: I can have them explain to me the underlying (mathematical or whatever) concept behind the code and then write new code from scratch myself, with that knowledge.

        Can/should this new code be considered as derivative work, if the underlying principles were already documented in literature?

        • wizzwizz4 a day ago ago

          They can regurgitate explanations as well as code. I'd strongly recommend doing actual research: you'll find better (less-distorted, better laid out, more complete) explanations.

    • encom a day ago ago

      That particular CoC is a colossal red flag that the maintainers are utterly deranged. This might actually be the worst CoC I've ever seen. Any CoC is a red flag, but people often get pressured into it, so it's a sliding scale.

  • csmantle 2 days ago ago

    It's a good example of a developer who knows what to do with AI and what to expect from it, plus a healthy sprinkle of skepticism, which is why he chose to make the driver a separate module.

  • sedatk 2 days ago ago

    Off-topic, but I wish Linux had a stable ABI for loadable kernel modules. Obviously the kernel would have to provide shims for internal changes because internal ABI constantly evolves, so it would be costly and the drivers would probably run slower over time. Yet, having the ability to use a driver from 15 years ago can be a huge win at times. That kind of compatibility is one of the things I love about Windows.

    • fruitworks a day ago ago

      I think this would be terrible for the driver ecosystem. I don't want to run 15-year-old binary blob drivers just because they technically still work.

      Just get the source code published into mainline.

      • dd_xplore 10 hours ago ago

        And publishing shitty code invites the wrath of Linus.

      • sedatk 17 hours ago ago

        Ideally, yes. But that's obviously not possible for every driver in existence.

  • mintflow 2 days ago ago

    When I was porting fd.io VPP to Apple platforms for my app, there was code implementing coroutines in inline ASM in a C file, but not in an Apple-supported syntax. I successfully used the Claude web interface to get the job done (Claude Code was not yet released), though, as in this article, I had strong domain-specific knowledge that let me provide relevant prompts about the code.

    Nowadays I rely heavily on Claude Code to write code. I start a task by creating a design, then I write a bunch of prompts covering the design details, detailed requirements, and the interactions/interfaces with other components. So far so good; it boosts productivity a lot.

    But I am still worried, or still not quite able to believe, that this is the new norm of coding.

  • tedk-42 2 days ago ago

    Really is an exciting future ahead. So many lost arts that don't need a dedicated human to relearn deep knowledge required to make an update.

    A reminder, though: these LLM calls cost energy, and we need reliable power generation to iterate through this next tech cycle.

    Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)

    • rvz 2 days ago ago

      > Really is an exciting future ahead. So many lost arts that don't need a dedicated human to relearn deep knowledge required to make an update.

      You would certainly need an expert to make sure your air traffic control software is working correctly and not 'vibe coded' the next time you decide to travel abroad safely.

      We don't need a new generation who can't read code and are heavily reliant on whatever a chat bot said because: "you're absolutely right!".

      > Hopefully all that useless crypto wasted clock cycle burn is going to LLM clock cycle burn :)

        Useful enough for Stripe to build their own blockchain, and even that, and the rest of them, are more energy-efficient than a typical LLM cycle.

        But the LLM grift (or even the AGI grift) will not only cost even more than crypto; the whole purpose of its 'usefulness' is the mass displacement of jobs, with no realistic economic alternative other than achieving >10% global unemployment by 2030.

      That's a hundred times more disastrous than crypto.

      • peteforde a day ago ago

        Have you ever read David Graeber's Bullshit Jobs? Because if not, you really should.

    • konfusinomicon 2 days ago ago

      Yes, they do! Those are the humans who pass down those lost arts, even if the audience is only a handful. To trust an amalgamation of neurally organized binary carved intricately into metal with deep and often arcane knowledge, and the lineage of lessons that produced it, is so absurd that if a catastrophe that destroyed life as we know it did occur, we would deserve our fate of devolution back to stone tools and such.

  • MrContent04 10 hours ago ago

    It’s fascinating to see LLMs breathe new life into legacy code. But I wonder — if AI rewrites outpace human review, are we just creating a new layer of technical debt? Maybe the real challenge is balancing modernization with long-term maintainability.

  • brainless 2 days ago ago

    Empowering people is a lovely thing.

    Here the author has a passion/side project they have been working on for a while. Upgrading the tooling is a great thing. The community may not support this since the niche is too narrow, but the LLM comes in and helps with the upgrade. This is exactly what we want: software to be custom, for people to solve their unique edge cases.

    Yes, the author is technical, but we are lowering the barrier and it will be lowered even more. Semi-technical people will be able to solve some simpler edge cases, and so on. More power to everyone.

  • athrowaway3z 2 days ago ago

    > so I loaded the module myself, and iteratively pasted the output of dmesg into Claude manually,

    One of the things that makes Claude my go-to option is its ability to start long-running processes, whose output it can read to debug things.

    There are a bunch of hacks you could have used here to skip the manual part, like piping dmesg to a local UDP port and having Claude start a listener.

    • mattmanser 2 days ago ago

      I think that's the thing holding a lot of coders back on agentic coding: these little tricks are still hard to get working. And that feedback loop is so important.

      Even something simple like getting it to run a dev server in React can have it opening multiple servers and getting confused. I've watched streams where the programmer is constantly telling it to use an already-running server.

  • AdieuToLogic 2 days ago ago

    Something not yet mentioned by other commenters is the "giant caveat":

      As a giant caveat, I should note that I have a small bit of 
      prior experience working with kernel modules, and a good 
      amount of experience with C in general, so I don’t want to 
      overstate Claude’s success in this scenario. As in, it 
      wasn’t literally three prompts to get Claude to poop out a 
      working kernel module, but rather several back-and-forth 
      conversations and, yes, several manual fixups of the code. 
      It would absolutely not be possible to perform this 
      modernization without a baseline knowledge of the internals 
      of a kernel module.
    
    Of note is the last sentence:

      It would absolutely not be possible to perform this 
      modernization without a baseline knowledge of the internals 
      of a kernel module.
    
    This is critical context when using a code generation tool, no matter which one chosen.

    Then the author states in the next section:

      Interacting with Claude Code felt like an actual 
      collaboration with a fellow engineer. People like to 
      compare it to working with a “junior” engineer, and I think 
      that’s broadly accurate: it will do whatever you tell it to 
      do, it’s eager to please, it’s overconfident, it’s quick to 
      apologize and praise you for being “absolutely right” when 
      you point out a mistake it made, and so on.
    
    I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with.

    Finally, the author asserts:

      I’m sure that if I really wanted to, I could have done this 
      modernization effort on my own. But that would have 
      required me to learn kernel development as it was done 25 
      years ago.
    
    This could also be described as "understanding the legacy solution and what needs to be done" when the expressed goal identified in the article title is:

      ... modernize a 25-year-old kernel driver
    
    Another key activity identified as a benefit to avoid in the above quote is:

      ... required me to learn ...
    • rmoriz 2 days ago ago

      Gatekeeping is toxic. I love agents explaining me projects I don‘t know. Recently I cloned sources of Firefox and asked qwen-code (tool not significant) about the AI features of Firefox and how it‘s implemented. Learning has become awesome.

      • AdieuToLogic 2 days ago ago

        > Gatekeeping is toxic.

        Learning what must be done to implement a device driver in order for it to operate properly is not "gatekeeping." It is a prerequisite.

        > I love agents explaining me projects I don‘t know.

        Awesome. This is one way to learn about implementations and I applaud you for benefiting from same.

        > Recently I cloned sources of Firefox and asked qwen-code (tool not significant) about the AI features of Firefox and how it‘s implemented. Learning has become awesome.

        Again, this is not the same as implementing an OS device driver. Even though one could justify saying Firefox is way more complicated than a Linux device driver (and I would agree), the fact is that a defective device driver can lock-up the machine[0], corrupt internal data structures resulting in arbitrary data corruption, and/or cause damage to peripheral devices.

        0 - https://en.wikipedia.org/wiki/Kernel_panic

        • kelnos a day ago ago

          > Learning what must be done to implement a device driver in order for it to operate properly is not "gatekeeping." It is a prerequisite.

          Apparently it's not, though. The author here had some baseline knowledge of how Linux kernel modules work, but the impression I got is that they would not have been able to do this on their own without a lot of learning.

          > the fact is that a defective device driver can lock-up the machine[0], corrupt internal data structures resulting in arbitrary data corruption, and/or cause damage to peripheral devices.

          Now that's some gatekeeping right there. "Only experts can write kernel modules" is a pretty toxic attitude to have.

          • skydhash a day ago ago

            Anyone can write a kernel module.

            On their computers.

            Not mine.

    • rgoulter 2 days ago ago

      > I don't know what "fellow engineers" the author is accustomed to collaborating with, junior or otherwise, but the attributes enumerated above are those of a sycophant and not any engineer I have worked with.

      I read "junior" as 'subordinate' and 'lacking in discernment'.. -- Sycophancy is a good description. I also like "bullshit" (as in 'for the purpose of convincing'). https://en.wikipedia.org/wiki/Bullshit#In_the_philosophy_of_...

      The point being, there's nuance to "it felt like a collaboration with another developer (some caveats apply)". -- It's not a straightforward hype of "LLM is perfect for everything", nor is it so simple as "LLM has imperfections, it's not worth using".

      > Another key activity identified as a benefit to avoid in the above quote is: > > ... required me to learn ...

      It would be bad to avoid learning fundamentals, or things which will be useful later.

      But, it's not bad to say "there are things I didn't need to know to solve a problem".

    • badsectoracula 2 days ago ago

      > Another key activity identified as a benefit to avoid in the above quote is: ... required me to learn ...

      "...kernel development as it was done 25 years ago."

      Not "...kernel development as it is done today".

      That "25 years ago" is important and one might be interested in the latter but not in the former.

    • kelnos a day ago ago

      To be fair, a "baseline knowledge of the internals of a kernel module" is not that difficult to acquire.

      I think a moderately-skilled developer with experience in C could have done this, with Claude's help, even if they had little or no experience with the Linux kernel. It would probably take longer to do, and debugging would be harder, but it would still be doable.

  • wg0 a day ago ago

    I have used Gemini and OpenAI models too, but at this point Sonnet is the next-level, undisputed king.

    I was able to port a legacy thermal-printer user-mode driver from convoluted legacy JS to pure modern TypeScript in two to three days, at the end of which the printer did work.

    The same caveats apply: I have a decent understanding of both languages, specifically the various legacy JavaScript patterns for modularity used to emulate language features that didn't exist in JavaScript, such as classes.

    • piskov a day ago ago

      Check the SWE-bench results, but for C#.

      It's literally pathetic how these things just memorize rather than achieve any actual problem-solving.

      https://arxiv.org/html/2506.12286v3

      • antonvs a day ago ago

        You've misunderstood the study that you linked. LLMs certainly memorize, and this can certainly skew benchmarks, but that's not all they do.

        Anyone with experience with LLMs will have experienced their actual problem solving ability, which is often impressive.

        You'd be better off learning to use them, than speculating without basis about why they won't work.

        • piskov a day ago ago

          What exactly did I misunderstand?

          Also, "learn to use them" has "you're holding it wrong" vibes.

          See also

          https://machinelearning.apple.com/research/illusion-of-think...

          • wg0 a day ago ago

            You did not misunderstand anything. Sure, LLMs have no cognitive abilities. So even with widely used languages, they hit a wall and need lots of hand-holding.

          • antonvs 14 hours ago ago

            The study doesn't show that "these things just memorize, not achieve any actual problem-solving."

            Re learning to use them, I'm more suggesting that you should actually try to use them, because if you believe that they don't "achieve any actual problem-solving," you clearly haven't done so.

            There are plenty of reports in this thread alone about how people are using them to solve problems. For coding applications, most of us are working on proprietary code that the LLMs haven't been trained on, yet they're able to exhibit strong functional understanding of large, unfamiliar codebases, and they can correctly solve many problems that they're asked to solve.

            The "Illusion of Thinking" paper you linked seems to imply another misunderstanding on your part. All it's pointing out is a fact that's fairly obvious to anyone paying attention: if you use a text generation model to generate the text of supposed "thoughts", those aren't necessarily going to reflect the model's internal functioning.

            Functionally, the models can clearly understand almost arbitrary domains and solve problems within them. If you want to claim that's not "thinking", that's really just semantics, and doesn't really matter except philosophically. The point is their functional capabilities.

  • miki123211 a day ago ago

    IMO, the most under-appreciated trick when working with these coding agents is to give them an automatic way to check their work.

  • DrNosferatu a day ago ago

    Uses like this will only get more pervasive.

    • fruitworks a day ago ago

      If we allow it

      • DrNosferatu a day ago ago

        If it delivers what we need, isn't it a net positive?

        And clearly define what we need with specs and thorough tests.

        • fruitworks a day ago ago

          Do we need code generated from a stochastic model of previous code? I think we need actual people who are familiar with the kernel and hardware and are capable of reasoning about it.

          We are constantly reminded that LLMs are the future despite the real-world evidence to the contrary. Look at what happens when LLMs are trained on the output of other LLMs, such as the low-quality code flooding the internet. It is all a self-solving problem set in motion.

          • jason-johnson a day ago ago

            As sad as it makes some version of me to say, the majority of code that gets written every day doesn't need to be good code. I am doing a pet side project right now, completely with copilot. I haven't written a line of code, documentation or anything else. The code is pretty poor but I don't care because I just need it to take a specific input and produce a specific output and I only need this to happen once and then I'll throw the whole thing away. As I mentioned elsewhere: this didn't get me the result faster than I could have done by hand, but it did let me not spend the entire time in deep focus trying to write all this.

      • DrNosferatu a day ago ago

        Compilers have also come a long way.

        • fruitworks a day ago ago

          compilers are deterministic

          • DrNosferatu a day ago ago

            barely so - in practice they display (local) chaotic behaviour.

            And LLMs are deterministic too if you freeze the seed.

      • rob_c a day ago ago

        The alternative is to go back to stone tablets in the cave with Plato.

  • fourthark 2 days ago ago

    Upgrades and “collateral evolution” are very strong use cases for Claude.

    I think the training data is especially good, and ideally no logic needs to change.

  • grim_io a day ago ago

    What a great use case.

    It demonstrates how much LLM use can boost productivity on specific tasks where a complete manual implementation would take much longer than verification does.

  • anonymousiam 2 days ago ago

    I hope Dmitry did a good job. I've got a box of 2120 tapes with old backups from > 20 years ago, and I'm in the process of resurrecting the old (486) computer with both of my tape drives (floppy T-1000 and SCSI DDS-4). It would be nice to run a modern kernel on it.

  • fho a day ago ago

    I wonder if the author could now go one step further and write some code to interface the tape drive with an ESP32, thereby removing the floppy controller from the equation and going straight to USB.

    • rob_c a day ago ago

      I imagine pull requests are welcome :p

  • criticalfault a day ago ago

    Would be good to do the same to 'modernize' disassembled drivers for various devices in mobile phones.

    Would give postmarketOS a boost.

  • IshKebab a day ago ago

    > From this point forward, since loading/unloading kernel modules requires sudo, I could no longer let Claude “iterate” on such sensitive operations by itself.

    Hilarious! https://xkcd.com/1200/

    • undebuggable a day ago ago

      Imagine the horror when a random stranger finally installs and sets up that bloody printer on your laptop.

  • vkaku 2 days ago ago

    Excellent. This is the kind of W that needs more people to jump into.

  • aussieguy1234 2 days ago ago

    AI works better when it has an example. In this case, all the code needed for the driver to work was already there as the example. It just had to update the code to reflect modern kernel development practices.

    The same approach can be used to modernise other legacy codebases.

    I'm thinking of doing this with a 15 year old PHP repo, bringing it up to date with Modern PHP (which is actually good).

  • MagicMoonlight 2 days ago ago

    Is Claude Code better than ChatGPT?

    • prameshbajra 2 days ago ago

      I have been testing both Claude Code and Codex CLI for the past few weeks, and I found Codex's output to be better than Claude's.

      I like how Claude Code is more advanced in terms of CLI functionality, but I prefer Codex's output (with the model set to high).

      If you do not want to pay for both, then you can pick either one and go with it. I don't think the difference is huge.

    • Amadiro 2 days ago ago

      In my experiments Claude 4 Opus generated by far the best code (for my taste and purposes), but it's also a pretty expensive model. I think I used up $40 in one evening of frantic vibe-coding.

  • globular-toast 2 days ago ago

    I don't think we really need an article a day fawning over LLMs. This is what they do. Yep.

    The only thing I got from this is nostalgia from the old PC with its internals sprawled out everywhere. I still use desktop PCs as much as I can. My main rig is almost ten years old and has been upgraded countless times, although it is now essentially "maxed out". Thank god for PC gamers, otherwise I'm not sure we'd still have PCs at all.

  • yieldcrv 2 days ago ago

    I've been writing assembly subroutines in Solidity for years with LLMs; I wouldn't even have tried beforehand.

  • lloydatkinson a day ago ago

    I hope it gets mainlined again!

  • rob_c a day ago ago

    Kudos to the author.

    I keep beating the drum on the point they correctly make: it's not perfect, but generating code and then debugging small conceptual mistakes saves hours and hours of work.

    The era of _needing_ teams of people to spit out boilerplate is coming to an end. I'm not saying don't learn to write it; learning demands doing, making mistakes, and personal growth. But after you've mastered this, there's no need to waste time writing boilerplate on the clock unless you truly enjoy it.

    This is a perfect example of time taken to debug small mistakes << time to start from scratch as a human.

    The time, the equivalent money, and the energy saved are all a testament to what is possible with huge context windows and generic modern LLMs :) :) :)

  • unethical_ban 2 days ago ago

    Neat stuff. I just got Claude Code and am teaching myself Rails. I'm excited to have assistance working through some ideas I have, and seeing it handle this kind of iterative testing is great.

    One note: I think the author could have modified the sudoers file to allow loading and unloading the module* without a password prompt.

    • nico 2 days ago ago

      Claude is really good with frameworks like Rails, both because it's probably seen a lot of Rails code in its training set and because it works way better when there is a very well-defined structure.

    • anyfoo 2 days ago ago

      ... which would allow you to load arbitrary code into the kernel, pretty much bypassing any and all security. You might as well not have a password at all. Which, incidentally, can be a valid strategy for isolated external dev boards, or QEMU VMs. But on a machine with stuff you care about? You're basically ripping it open.

      • unethical_ban 2 days ago ago

        He was already loading "arbitrary" Claude code, no? I'm suggesting there was a way to skip password entry by narrowly tailoring an exception.

          Another thought: IIRC, in the Claude Code plugins for my IDE, you can "authorize" actions and intervene manually without having to leave the tool.

        My point is there were ways I think they could have avoided copy/paste.

        • anyfoo 2 days ago ago

          While I personally would have used a dedicated development target, the workflow he had at least allowed him to have a good look at any and all code changes, before approving with the root password.

          That is a bit different than allowing unconfirmed loading of arbitrary kernel code without proper authentication.

    • frumplestlatz 2 days ago ago

      > One note: I think the author could have modified sudoers file to allow loading and unloading the module* without password prompt.

      Even a minor typo in kernel code can cause a panic; that’s not a reasonable level of power to hand directly to Claude Code unless you’re targeting a separate development system where you can afford repeated crashes.

  • bgwalter a day ago ago

    There is literally a GitHub repository, six years old, that ports an out-of-tree ftape driver to modern Linux:

    https://github.com/Godzil/ftape

    Could it be that Misanthropic has trained on that one?

    • lloydatkinson a day ago ago

      > Maybe this driver have problems on SMP machines.

      > Maybe this driver have problems on 64Bit x86 machines.

      Ouch. The part where it says it’s not possible to use a normal floppy and the tape flip anymore seemed odd enough, but those last points should scare anyone away from trying these on anything important.

      • bgwalter a day ago ago

        Yes, Godzil's repo could have the issues you point out but still give Claude hints about which APIs to replace. Or the latest, possibly-Claude-plagiarized version perhaps has the same issues.

  • Keyframe 2 days ago ago

    Pipe dream: now automate Asahi development for M3, M4, and onwards.

    • mschuster91 2 days ago ago

      the problem here is that Apple, while at least not standing actively in the way (like console manufacturers), provides zero documentation on how stuff works internally. You gotta reverse-engineer everything, and that either takes at least a dozen highly qualified and thus rare and expensive-to-hire people or someone hard on the autism-hyperfixation spectrum with lots of free time to spare and/or the ability to turn it into an academic project. AI can't help at all here because even if it were able to decompile Apple's driver code, it would not be able to draft a coherent mental model on how things work.

      M3 onwards, to answer the second part of why AI won't be of much help, uses a massively different GPU architecture that needs to be worked out, again, from scratch. And all of that while a substantial number of subsystems on M1, M2, and their variants aren't supported at all, are only partially supported or require serious workarounds, or need major code-quality work to get upstreamed into Linux.

      And on top of that, a number of contributors burned out along the way, some from dealing with the ultra-neckbeard faction amongst Linux kernel developers, some from other mental health issues, and Alyssa departed for Intel recently.

      • Keyframe a day ago ago

        You mean to tell me those agents aren't PhD-level experts in every field as we were told by OpenAI?? I'm shocked!

        Seriously though, it does seem a menial task in itself to reverse-engineer what's going on. It would be a really powerful show of force by one of the leading AI providers if they set up shop like that to do it in the open... if they could.

        • mschuster91 a day ago ago

          The menial work used to be decompiling, and that can be automated... but that's maybe a third of the game. You still need to figure out and observe what happens for each kind of external input. That is, for now, far beyond the capability of any AI.

    • flykespice 2 days ago ago

      How long would the prompt be? Longer than the C++ standard specification?

  • punnerud 2 days ago ago

    What was the new speed after the upgrade?

    • qayxc 2 days ago ago

      Since it's still the same driver addressing the same hardware, it should be identical.

    • Cthulhu_ 2 days ago ago

      ...it's a tape drive, they have mechanically fixed speeds. Why do you ask?

      • punnerud a day ago ago

        He wrote, "The tradeoff, of course, is that the data rate is limited by the speed of the floppy controller," implying it could be faster with a different controller. I guess these tape drives could in theory have way, way faster transfer speeds, as other tape drives do.

  • rvz 2 days ago ago

    No tests whatsoever. This isn't getting close to being merged into mainline and it will stay out-of-tree for a long time.

    That's even before taking on the brutal Linux kernel mailing lists for code review, explaining what that C code does, which could be riddled with bugs that Claude generated.

    No thanks and no deal.

    • geor9e 2 days ago ago

      "The intention is to compile this driver as an out-of-tree kernel module, without needing to copy it into the kernel source tree. That's why there's just a simple Makefile, and no other affordances for kernel inclusion. I can't really imagine any further need to build this driver into the kernel itself.

      The last version of the driver that was included in the kernel, right up until it was removed, was version 3.04.

      BUT, the author continued to develop the driver independently of kernel releases. In fact, the last known version of the driver was 4.04a, in 2000.

      My goal is to continue maintaining this driver for modern kernel versions, 25 years after the last official release." - https://github.com/dbrant/ftape

      • fock 2 days ago ago

        and there have been continuous ports since then: https://github.com/Godzil/ftape/tree/master - note the caveats, which apparently all disappeared here...

        • kelnos a day ago ago

          Looks like that hasn't been updated in 6 years, and only supports the 2.6.x kernel.

          I doubt it would have been significantly easier to start the porting effort from that vs. the original 2.4.x source.

        • fock 2 days ago ago

          and of course this didn't take into account you posted that, because I got directed straight here by AI!

    • kelnos a day ago ago

      > No tests whatsoever.

      Test coverage between subsystems in the Linux kernel varies widely. I don't think a lack of tests would prevent inclusion.

      > No thanks and no deal.

      I mean, now we have a driver for this old hardware that runs on a modern kernel, which we didn't before. I imagine you don't even have that hardware, so why do you care if someone else gets some use out of it?

      The negativity here in many of these comments is just staggering. I've only recently started adopting LLM coding tools, and I still remain a skeptic about the whole thing overall, but... damn. Seems like most people aren't thinking critically and are just regurgitating "durrrr LLMs bad" over and over.

      • cmpxchg8b a day ago ago

        Yes, the negativity is infuriating. This is the mindset that is going to get left behind. I'm no LLM maximalist but they clearly have their uses in the right context and the right hands.