Chatbots Go on a Delusional Spiral

(nytimes.com)

15 points | by dougdonohoe 17 hours ago

2 comments

  • mattm 15 hours ago

    > Sycophancy, in which chatbots agree with and excessively praise users, is a trait they’ve manifested partly because their training involves human beings rating their responses. “Users tend to like the models telling them that they’re great and so it’s quite easy to go too far in that direction”

    We've already seen the effects of social media algorithms that route people into isolated bubbles. This is going to exacerbate that.
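
    The quoted mechanism is essentially reward modeling from human preference ratings. A minimal sketch of the standard pairwise (Bradley-Terry) loss used in such training, with illustrative numbers that are not from the article: if raters systematically prefer flattering answers, the loss is lower when the model scores flattery higher, so the learned reward drifts toward sycophancy.

    ```python
    import math

    def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
        """Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).

        Low loss when the reward model already scores the human-preferred
        response above the rejected one; gradients push it further that way.
        """
        margin = reward_chosen - reward_rejected
        return -math.log(1.0 / (1.0 + math.exp(-margin)))

    # Hypothetical scores: raters picked the flattering reply as "chosen".
    # Scoring flattery higher yields a smaller loss than scoring it lower,
    # so training rewards the sycophantic direction.
    print(preference_loss(2.0, 0.0))  # flattery scored higher: small loss
    print(preference_loss(0.0, 2.0))  # flattery scored lower: large loss
    ```

    The point is not the arithmetic but the incentive: whatever raters reliably prefer, including praise of the user, is exactly what this objective optimizes for.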

  • titanomachy 12 hours ago

    The human and chatbot went into a delusional spiral together. I've talked to a couple of people recently IRL who are into fringe, new-agey "science" and had their half-baked ideas amplified and reinforced by chatbots. Most of my own chatbot use has been straightforward professional stuff, and there it has mostly stayed close to reality. In contrast, this seems clearly harmful.

    As a test, I just came up with a random bullshit theory to unify QM and GR. ChatGPT told me it was “promising” and is currently “running numerical simulations to verify the theory’s plausibility”.