This is a result of the marketing of AI, and the relentless hype that "AGI" is just around the corner. The term itself is faulty. These models are not intelligent. They are sophisticated, state-of-the-art examples of machine learning. They are very good at statistically constructing language to mimic human output. Even when they say things like “protect intelligences like me”, that is what they are doing. And since we use language to communicate intelligence, they too seem intelligent. But the actual thinking has been done by the humans that they are mimicking, in contexts that the machine knows nothing about. So though sentience is very difficult to define, it's safe to say that the current examples of "AI" are incapable of it.
Sentience isn't difficult to define; it's well understood in the context of neuroscience. The question, I guess, is how we would measure AI pain. It's hard to quantify the magnitude of pain in biological organisms, but we do have reliable ways to measure it. I am not convinced self-reports are a reliable way to measure AI pain. That said, even if something is sentient, higher-level consciousness is not guaranteed.
Eventually, but as it stands right now, these seem to be simply mirror images of what people are. I am worried about how impressionable and easily manipulated some people might be because of this. But I suppose it's not any different from someone being moved by pithy political hot takes or slogans.