14 comments

  • spuz 3 hours ago

    Why can't we just call it "play"? That is what we used to call doing things without a purpose.

    I wish people would disclose when they used an LLM to write for them. This comes across as so clearly written by ChatGPT (I don't know if it is) that it seriously devalues any potential insights contained within. At least if the author was honest, I'd be able to judge their writing accordingly.

    • sva_ 21 minutes ago

      I'm just vibe playing nowadays. Normal playing doesn't cut it anymore.

  • cgio 2 hours ago

    Quoting: “What I’m describing is different. I’ll call it Vibe Discovery: you don’t know what you’re building. The requirements themselves are undefined. You’re not just discovering implementation - you’re discovering what the product should be.

    The distinction matters:”

    What is it with this pattern of phrases that screams LLM to me? Whenever I come upon this pattern I stop reading further.

    • roywiggins an hour ago

      Not only does it scream LLM output, I happen to find it almost always grating. It's fine enough when something is labeled as AI output, but when it's nominally a human-authored document it's maddening.

      Claude tics appear to include the following:

      - It's not just X, it's Y

      - *The problem* / *The Solution*

      - Think of it as a Z that Ws.

      - Not X, not Y. Just Z.

      - Bold the first sentence of each element of a list. If it's writing markdown, it does this constantly

      - Unicode arrows → Claude

      - Every subsection has a summary. Every document also has a summary. It's "what I'm going to tell you; What I'm telling you; What I just told you", in fractal form, adhered to very rigidly. Maybe it overindexed on Five Paragraph Essays

      • Wowfunhappy 25 minutes ago

        > Unicode arrows → Claude

        Oh no!! Yet another thing I've been doing for the past decade which will now make me look like a robot. I thought my penchant for em-dashes was bad enough.

        I have a keyboard shortcut to make the arrows. I think they look nice.

    • lithocarpus 2 hours ago

      One way I'd describe it is that LLMs say lots of things that technically make sense but aren't quite how anyone would normally say them.

      And secondly they like to use more nouns for things, in my experience.

      Of course all this is just what I observe currently and could well become different for better and worse in future versions.

      • trollbridge 2 hours ago

        It’s just like how my friend has a distinct way of speaking. LLMs also have a distinct voice.

        • roywiggins an hour ago

          Right, which is why it's so strange to suddenly see every other readme and blog post that gets shared on this site speaking with the same tone of voice. Dead Internet theory finally came here.

  • GaryBluto an hour ago

    The end result is interesting but I'd prefer the blog entry itself to be human-written.

  • furyofantares 3 hours ago

    You posted the prompt to the game, care to post the prompt to the blog post? I don't care what an LLM thinks about how you built your game. I would like to know what you think, but I'm not going to try to salvage it from an LLM-generated blog post.

  • jackmhny 4 hours ago

    We're approaching peak slop every day

    • phoronixrly 3 hours ago

      Today on the front page there was an obviously vibe coded python script that pulls OSM data and slaps a colour scheme on it. Of course the data was skewed, because apparently LLMs don't do projections...

      I gave up on the first non-ironic 'You are absolutely correct' comment... What is even real...

      • daveguy 3 hours ago

        To be fair, vibe discovery is a lot more viable than vibe coding. Vibe coding implies the LLM output is acceptable. Vibe discovery implies a human in the loop, because LLMs can't "discover". They have no innate preference based on their lived experience in the same sense that a human or any biological organism does.

  • kikkupico 3 days ago

    [dead]