I can see how obscure but useful nuggets of information, the kind you rarely need but that are critical when you do, will be lost too.
If the weighting were shared between users, an attacker could use this feedback loop to promote their product or ideology by executing fake interactions that look successful.
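To make that attack concrete, here is a minimal hypothetical sketch of a naively shared weighting scheme (the names, the update rule, and the numbers are all assumptions, not taken from the article): a flood of fabricated "successful" interactions is enough to push the attacker's item to the top of what every user sees.

    # Hypothetical sketch only; not any real system's API.
    from collections import defaultdict

    shared_weights = defaultdict(float)  # item -> weight, shared across all users

    def record_interaction(item: str, successful: bool, lr: float = 0.1) -> None:
        """Naive shared feedback update: a 'successful' interaction nudges the item's weight up."""
        shared_weights[item] += lr if successful else -lr

    def top_items(n: int = 3) -> list[str]:
        """What the system would surface to everyone, based on the shared weights."""
        return sorted(shared_weights, key=shared_weights.get, reverse=True)[:n]

    # Organic usage from real users.
    for _ in range(20):
        record_interaction("useful-library", successful=True)

    # Attacker floods the loop with fabricated "successful" interactions.
    for _ in range(500):
        record_interaction("attackers-product", successful=True)

    print(top_items())  # the attacker's item now outranks genuinely useful ones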
AI slop indeed. But it caught my eye nonetheless, because I have been doing some work around the same concept. Lately I have found that GitHub is flooded with "AI Agent Memory" modules that are in fact all skills-based solutions, i.e. nothing more than text instructions.
"Here's the part nobody talks about"
This feels like such an obvious LLM tell; it has that sort of breathless TED Talk vibe that was so big in the late oughts.
This is a recipe for model collapse/poisoning.
Yay, we've found another way to give LLMs biases.
Stop the slop!
Why do people submit AI slop like that in here?