6 comments

  • aaronbrethorst an hour ago

    Here's the part nobody talks about

    This feels like such an obvious LLM tell; it has that sort of breathless TED Talk vibe that was so big in the late oughts.

  • kburman 13 minutes ago

    This is a recipe for model collapse/poisoning.

  • Darkskiez 35 minutes ago

    Yay, we've found another way to give LLMs biases.

    I can also see how obscure but useful nuggets of information, the kind you rarely need but that are critical when you do, would get lost.

    If the weighting were shared between users, an attacker could exploit this feedback loop to promote their product or ideology by executing fake interactions that look successful.

  • sbinnee 13 minutes ago

    AI slop indeed. But it caught my eye nonetheless, because I've been doing some work around the same concept. Lately I've found that GitHub is flooded with "AI Agent Memory" modules, when in fact they are all skills-based solutions (meaning just text instructions).

  • stephantul 14 minutes ago

    Stop the slop!

  • littlestymaar 17 minutes ago

    Why do people submit AI slop like this here?