33 comments

  • mwkaufma 7 hours ago

    A little strange to write up a bug hunt that was resolved by the ffi upstream already, and not by the hunt itself. OP didn't fix the bug, though identifying that the upgrade was relevant is of some interest. Writing could have been clearer.

    • mbac32768 6 hours ago

      The bug that was fixed in upstream manifested differently than what he was experiencing so the journey was to validate it for his case.

      OTOH I'm a bit surprised he didn't pull back earlier, suggest that his user update to the latest version, and let him know.

      • eichin 5 hours ago

        15 or so years ago I had a similar journey - a single python interpreter "impossible" segfault in production that turned out to be a bug in glibc realloc that had already been fixed in an update; we just didn't even think to look for one until we'd narrowed it down that far. (We were shipping custom Debian installs on DVD, and a fair number of our customer installs weren't internet accessible, so casual upgrades were both impossible and unwanted, but it was also a process mistake on my part to not notice the existence of the upgrade sooner.)

        Never wrote it up externally because it was already solved and "Debian updates to existing releases are so rare that you really want to pay attention to all of them" (1) was already obvious (2) was only relevant to a really small set of people (3) this somewhat tortured example wasn't going to reach that small set anyway. (Made a reasonable interview story, though.)

  • Animats an hour ago

    So they turned on GC after every allocate ("GC stress"), and

    "With GC.stress = true, the GC runs after every possible allocation. That causes immediate segfaults because objects get freed before Ruby can even allocate new objects in their memory slots."

    That would seem to indicate a situation so broken that you can't expect anything to work reliably. The wrong-value situation would seem to be a subset of a bigger problem. It's like finding C code that depends on use-after-free working and which fails when you turn on buffer scrubbing at free.
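
    For readers who haven't used that knob: `GC.stress` is a standard Ruby setting, and a minimal sketch (not code from the article) shows how it turns a rare, timing-dependent crash into an immediate one by forcing a collection at every allocation opportunity:

```ruby
# GC.stress = true makes Ruby's collector run at every possible
# allocation, so an object that a C extension holds without a write
# barrier gets reclaimed (and its slot reused) almost immediately,
# surfacing use-after-free bugs that would otherwise only appear
# under rare, unlucky GC timing.
GC.stress = true
begin
  # Every Hash and String allocated here triggers a full GC cycle.
  100.times { |i| { id: i, label: "obj-#{i}" } }
ensure
  GC.stress = false # always restore; stress mode is extremely slow
end
puts "stress run completed"
```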

  • philipp-gayret 5 hours ago

    Had me in the first half. But from "The Microsecond Window" chapter on...

    > No warning. No error. Just different methods that make no sense.

    > This is why write barriers exist. They're not optional extras for C extension authors. They're how you tell the garbage collector: "I'm holding a reference. Don't free this

    It's all ChatGPT LinkedIn and Instagram spam type slop. An unfortunate end to an otherwise interesting writeup.

  • khazhoux 3 hours ago

    I don’t understand why people are saying this article was AI generated. Do you think the author told chatgpt “Write me an article (with diagrams) about a Ruby hash race condition” and pasted that to their blog?

    • Jweb_Guru 3 hours ago

      Parts of it being generated by Claude or ChatGPT (which they very clearly were) does not necessarily mean that the whole article was fabricated.

  • alexnewman 7 hours ago

    I don’t get it. Also it reads llmish

  • fleshmonad 6 hours ago

    LLM slop. Why do people (presumably) take the time to debug something like this, do tests and go to great lengths, but are too lazy to do a little manual writeup? Maybe the hour saved makes up for being associated with publishing AI slop under your own name? Like there is no way the author would have written a text that reads more convoluted than what we have here.

    • fn-mote 5 hours ago

      > Why do people […] take the time to debug […] but are too lazy to do a little manual writeup[?]

      They like to code. They don’t like to write.

      I’m not excusing it, but after you asked the question the conclusion seems logical.

      • PKop an hour ago

        > They like to code. They don’t like to write.

        People like reading LLM slop less than either of those. So it should become a common understanding not to waste your (or our) time to "write" this. It's frustrating to give it a chance then get rug-pulled with nonsense and there's really no reason to excuse it.

    • sb8244 6 hours ago

      I read it just fine and everything made sense in it.

      I would spend similar time debugging this if I were the author. It's a pretty serious bug, a non obvious issue, and would be impossible to connect to the ffi fix unless you already knew the problem.

    • iberator 2 hours ago

      For example, being a non-native english speaker :)

    • michaelcampbell 4 hours ago

      > LLM slop

      Is this the new "looks shopped. I can tell by the pixels."?

    • dpark 6 hours ago

      Sorry, why is this LLM slop? I only got about halfway through because I don’t care about this enough to finish the read, but I don’t see the “obvious LLM” signal you do.

      • scmccarthy 6 hours ago

        It's clearest in the conclusion.

        • dpark 5 hours ago

          I still don’t see it.

          I feel like the “this is AI” crowd is getting ridiculous. Too perfect? Clearly AI. Too sloppy? That’s clearly AI too.

          Rarely is there anything concrete that the person claiming AI can point to. It’s just “I can tell”. Same confident assurance that all the teachers trusting “AI detectors” have.

          • dkdcio 3 hours ago

            I came to this thread hoping to read an interesting discussion of a topic I don’t understand well; instead it’s this

            I have opened a wager re: detecting LLM/AI use in blogs: https://dkdc.dev/posts/llm-ai-blog-challenge/

            • dpark 2 hours ago

              I feel like it’s on every other article now. The “this is ai” comments detract way more from the conversation than whatever supposed ai content is actually in the article.

              These ai hunters are like the transvestigators who are certain they can always tell who’s trans.

              • PKop an hour ago

                No. These articles are annoying to read, the same dumb patterns and structures over and over again in every one. It's a waste of time; the content gives off a generic tone and it's not interesting.

                • dkdcio 29 minutes ago

                  say that! that’s independent of whether AI/LLM tools were used to write it and more valuable (“this was boring and repetitive” vs “I don’t like the tool I suspect you may have used to write this”)

            • internetter 2 hours ago

              > I will make a bet for $1,000,000!

              > I won't actually make this bet!

              > But if I did make this bet, I would win!

              ???

              • dkdcio 30 minutes ago

                if two parties put up $1,000,000 each and I get a large cut I’ll do the work! one commenter already wagered $1,000, which I’d easily win, but I suspect this would take me idk at least a few days of work (not worth the time). and, again, for a million dollars I’d make sure I win

                see other comment though, the point is that assessing quality of content on whether AI was used is stupid (and getting really annoying)

            • _dain_ 2 hours ago

              I don't have a million dollars but I'll take you up on it for like a grand. I'm serious, email me.

              • dkdcio 2 hours ago

                the problem is it’s a lot of work (not actually worth it for me for a thousand dollars) — but you cannot win

                just one scenario, I write 100 rather short, very similar blog posts. run 50 through Claude Code with instructions “copy this file”. have fun distinguishing! of course that’s an extreme way to go about it, but I could use the AI more and end up at the same result trivially

                • _dain_ 2 hours ago

                  This is so childish and pathetic it doesn't deserve a response.

                  • dkdcio an hour ago

                    why? LLM/AI use doesn’t denote anything about the style or quality of a blog, that’s the point — and why this type of commentary all over HackerNews and elsewhere is so annoying.

                    obviously if a million dollars are on the line I’m going to do what I can to win. I’m just pointing out how that can be taken to the extreme, but again I can use the tools more in the spirit of the challenge and (very easily) end up with the same results

                    • Panzer04 an hour ago

                      People object to using AI to write their articles (poorly). Your answer to them saying it's obvious when it's AI-written is to... write it yourself, then pretend copy-pasting that article via an AI counts as AI-written?

                      That's a laughable response.

                      • dkdcio 32 minutes ago

                        my point is that using AI is distinct from the quality of blog posts. these frequent baseless, distracting claims of AI use are silly

                        this wager is a thought exercise to demonstrate that. want to wager $1,000,000 or think you’ll lose? if you’ll lose, why is it ok to go around writing “YoU uSeD aI” instead of actually assessing the quality of a post?

          • PKop an hour ago

            That's your issue not ours. It's obvious; if you don't have a problem with it, enjoy reading slop; many people can't stand it and we don't have to apologize for recognizing or not liking it.

            • dpark 30 minutes ago

              I don’t believe you can recognize anything. Like everyone else claiming they can clearly identify AI you can’t actually point to why it’s AI or what parts are clearly AI.

              If you could actually identify AI deterministically you would have a very profitable product.

      • Jweb_Guru 3 hours ago

        Parts of it were 100% LLM written. Like it or not, people can recognize LLM-generated text pretty easily, and if they see it they are going to make the assumption that the rest of the article is slop too.

        • dpark 2 hours ago

          And yet you don’t call out any parts that are 100% AI and how you recognize them as such.

          I’m not saying there’s no AI here. I am asking for some evidence to back up the claim though.