"The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself."
As I said in a comment on that post, 13 years ago:
"any parable that's about being too powerful is almost necessarily also about technology, because it's technology that allows the average person to get that power"
> It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest....The earring is never wrong.
> There are no recorded cases of a wearer regretting following the earring’s advice, and there are no recorded cases of a wearer not regretting disobeying the earring. The earring is always right.
> ...The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
> Niderion-nomai’s commentary: It is well that we are so foolish, or what little freedom we have would be wasted on us. It is for this that Book of Cold Rain says one must never take the shortest path between two points.
The piece implies that
1. at least occasionally one should choose to do something one will regret.
2. not knowing what will make one happy is part of what makes one free.
I'm not sure I agree with these (it seems that 1. is a paradox) but it is an interesting thought experiment.
I think it's less confusing when you consider the very first thing the earring says: "better for you if you take me off". The wearer should rationally always regret not following its advice, including that first thing.
I think the paradox is here, and it comes from cheeky use of misleading language:
> ...The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
The wearer doesn't really live any sort of life. Once it fully integrates with you your brain is mush, you're no longer experiencing anything. At some fuzzy point in there you've basically died and been replaced by the earring.
>"It does not always give the best advice possible in a situation. It will not necessarily make its wearer King, or help her solve the miseries of the world. But its advice is always better than what the wearer would have come up with on her own."
I think one very simple explanation would be that this comes down to a matter of exploration vs exploitation. Since it is only giving "better" advice, and not even 'locally optimal', there is reason to favor exploring vs merely following the advice unquestioningly.
A more complex, but ultimately comprehensive answer, is that free will consists, at least in one aspect, in the ability not only to choose one's goals or means, but also what _aspect_ of those various options to consider "good" or "better".
And if one were to say that all such considerations ultimately resolve back to a fundamental desire to be "happy", to me, this seems to be hand-waving, rather than addressing the argument, because different people have different definitions of the "happy" end-state. If these differences were attributed fully to biology & environment, the story loses its impact, because there was never free will in the first place. If, while reading the story, we adopt a view that genuine free will exists, and hold some kind of agnosticism about the possible means by which that can be so, then it seems reasonable to attribute at least some of the differences in what the "happy" end-state looks like to the choices made by the people, themselves.
Given that kind of freedom, unless one has truly perfect knowledge (beyond the partial knowledge contained in the advice of the earring), the pursuit of one's goals seems to unavoidably entail some regrets. And with perfect knowledge, well... The kind of 'freedom' attributed, for example, to God by philosophers like Thomas Aquinas, is explicitly only analogous to our own, and is understood to be an unchanging condition, rather than a sequential act.
(As a final note: One might wonder what this 'freedom to choose aspects' approaches as an 'asymptotic state' -- that is, for an immortal person. And this leads to metaphysical concerns -- of course, with some things 'smuggled in' by the presumption of genuine freedom, already. Provided one agrees that human nature undeniably provides some structure to ultimate desires/"happiness", the idea of virtue ethics follows naturally, and from there many philosophers have arrived at similar notions of some kind of apotheosis as a stable end-state, as well as the contrary state of some kind of scattering or decay of the mind...)
Statements that involve the future are always linguistically vague.
In your paradoxical sentence, "will regret" could be interpreted as either "know at that moment that they will regret" or "come to know after the fact that they regret it".
The former is a paradox, but the latter isn't.
As life advice, I think it works better when you consider it amortized over a collection of choices instead of a set of serial choices each of which it must be rigidly applied to: One should make a set of choices using a strategy that leads some of them to be likely to be regretful (but presumably without being able to predict ahead of time which ones will be).
> at least occasionally one should choose to do something one will regret.
Negative experience is crucial for learning, unfortunately. "If you never fail you don't try hard enough", etc. This is trivially understood in physical training: you have to get yourself exhausted to become stronger. It's much less of an accepted view in, so to say, mental training: doing thing that you later regret may teach you something valuable that always avoiding such decisions does not.
I do not necessarily support or reject this view, I'm just trying to clarify the point.
This parable reminds me a bit of Nozick's "tale of the slave"[1] in its rhetorical sleight of hand: the reader is meant to be mesmerized by the slow transition to an "obvious" conclusion, which obscures the larger inconsistency.
In both cases, the outcome is only convincing if the story makes you forget your grounding: democracy isn't slavery (contra Nozick) and the earring is clearly not always right unless you're the most basic kind of utility monster.
Compare/contrast the Whispering earring/LLM chat with The Room from Stalker, each one is terrifying in its aspect: One because it eventually coaxes you to become a shallow shell of yourself, the other by plucking an unexpected wish from the deepest part of your psyche. I wonder what the Earring would advise if one were to ask it if one should enter The Room.
A distant relative, no doubt, of Stanislaw Lem's "Automatthew's Friend" (1964). A perfectly rational, indestructible, selfless, well-meaning in-ear AI assistant. In the end, out of nothing but the deepest care for its owner's mental state in a hopeless situation, it advocates efficient and quick suicide.
I'm not actually sure how horrifying this is. It sounds like it's just a better executive planner to achieve your goals. As long as they are still your goals, surely you'd want the best executive planner available. I would say it's the goals that are important, not the limited way in which I work out how to achieve them.
It would certainly be horrifying if I were slowly tricked into giving up my goals and values, but that doesn't seem to be what is happening in this story.
Perhaps if I were to put the earring on it would tell me it would be better for me to keep wearing it.
You surrender your self in exchange for your goals. With the right goals, that could be a worthy sacrifice. But of course it is a sacrifice.
Imagine doing a crossword while a voice whispers the correct letter to enter for each cell. You'd definitely finish it a lot faster and without making mistakes. Crossword answers are public knowledge, and people still work them out instead of looking them up. They don't just want to solve them; they want to solve them theirselves. That's what is lost here.
This is associating the self with the thing that decides how best to achieve goals (the earring / the part of your brain that works out how to achieve a goal), while I'm saying that I think I would associate the self much more with the thing that decides what the goals are.
> they don't just want to solve them; they want to solve them theirselves. That's what is lost here.
I think in this story, the earring would not solve the crossword for you, if for some reason your goal was to solve the crossword yourself.
I read a science fiction story (maybe 40 years ago in Analog?) about a somewhat similar device that provided life guidance. This device would detect if the choice you were currently making would likely result in your death and would flash a red light to warn you. (Something about using quantum multi-worlds to determine if you die.) Does this story ring a bell with anyone?
1. I like that the first bit of advice is to take it off. It's very interesting that in this story very few people take its advice.
2. It recommends whatever would make you happiest in that moment, but not what would make the best version of yourself happiest, or what would maximize happiness in the long term.
Solving mazes requires some backtracking, I guess. Doing whatever will make you happiest in the moment won't make you happiest in the long run.
I want someone to try building a variant that just gives you timely cues about generally good mental health practices. Suggestions could be contextually based on a local-only app that listens to you and your environment, and delivered to a wireless earbud. When you're in a situation that might cause you stress, it reminds you to take some deep breaths. When you're in a situation where you might be tempted to react with hostility, it suggests that you pause for a few seconds. When you've been sitting in front of your computer too long it suggests that maybe you'd like to go for a short walk.
If the moral of the story is that having access to magically good advice is dangerous because it shifts us to habitual obedience ... can a similar device shift us to mental habits that are actually good for us?
The moral of the story is that neocortical facilities (vaguely corresponding to what distinguishes modern humans) depend on free will. If you want to merely enthral yourself to voices of the gods a la Julian Jaynes' bicameral man, you can, but this is a regression to a prior stage of humanity's development - away from egoic, free willed man, and backwards to more of a reactive automaton, merely a servant of (possibly digital) gods.
I think there's a meaningful difference between a tool to remind oneself to take a beat before speaking vs being told what to say. For example, cues that help you avoid an impulsive reaction of anger I think is a step away from being a reactive automaton.
My sensibility is that agency is about "noticing". The content of information seems perhaps less important than the attention allocation mechanism that brings our attention to something.
If you write all your own words, but without an ability to direct your attention to what needed words conjured around it, did you really do anything important at all? (Yes, that's perhaps controversial :) )
Anger is just another aspect of the human condition, and is absolutely justified in cases of grave injustice (case in point: Nazis, racism). It's not for some earring to decide when it is justly applied and when it is not; that is the prerogative of humanity.
In either case none of this cueing or prompting needs to be exogenous or originate from some external technology. The Eastern mystics have developed totally endogenous psychotechnologies that serve this same purpose, without the need to atrophy your psyche.
Absolutely anger is sometimes justified.
But people are also angry when e.g. someone cuts them off in traffic. The initial feeling of anger may not be appropriate. A cue to help you avoid reacting immediately from hostility isn't so much deciding whether anger is appropriate but giving you the space to make that judgement reflectively rather than impulsively. Even if anger is appropriate, the action you want to take on reflection may not be the first one that arises.
"The eastern mystics" managed to do a lot of things, but often with a large amount of dedicated practice. Extremely practiced meditators can also reach intense states of concentration, equanimity etc, but the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.
> the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.
I would posit that the only faculty developed in wielding such supportive tooling, is skill at using those tools; when the real goal is the cultivation of character, the construction of a "virtual engine" [0] that governs action. Consider analogously that brain training apps' claims to improve general intelligence are specious at best, and don't seem to develop anything other than the ability to use the app.
Since, in the case of the earring, this virtual governor has already been outsourced to an external entity, there is no need to cultivate one for one's self. Not only does this miss out on the personal development attained in said process, it also risks enthralling you to a construct of someone else's design; and one should choose carefully at which pantheon they pour libations, for its value systems might not always align with one's own.
"The king is dead; long live the king!" is the feudal manifestation of the archetype of the slain and resurrected god. In similar fashion one's own virtual governor requires constant renewal and revision.
Don't you think Leary's term would land better here? Or have you avoided it precisely for such connotation? I remember having some trouble with that for a while in my twenties.
I have not actually familiarized myself with Leary's work. The closest approach would have been via Robert Anton Wilson and the rest of the ramifications through the occult and western esoteric corpus.
It is strictly necessary not to have supportive tools in order to develop these skills. Sentience and the ability to learn from experience are all that is essentially required. Past that there are no crutches and no shortcuts, because you have mistaken for disability the refusal to grow.
I think the moral of the story is that instead of reaching for something to "shift you" start doing the shifting. You are a living mind, rather than asking for some device to affect you, assert your own will. Don't avoid stress and conflict, embrace it, that's life. This instinctive demand for some therapeutic, external helper is what's wrong in the story, people craving to be put into passivity.
This need not even be a technology, The moral version of this could be some priest lecturing you on good and evil, some paternalistic boss making your decisions for you, the crux here is submission, being acted upon rather than actively embracing your own nature.
There’s a good Rick and Morty episode with a similar premise: a crystal that shows how the user will die in the future. Morty uses it mindlessly to guide him to the fate he thinks he wants, but there are some unintended consequences.
I think this ignores the internal conflict in most people’s psyche. The simplest form of this is long term vs short term thinking, but certainly our desires pull us in competing, sometimes opposite, directions.
Am I the me who loves cake or the me who wants to be in shape? Am I the me who wants to watch movies or who wants to write a book?
These are not simply different peaks of a given utility function, they are different utility functions entirely.
Soon after being put on, the whispering earring would go insane.
This reminds me of another story that I saw posted on HN and has provided lots of fodder for idle conversations: Manna[1]. It's a less mystical version of the whispering earring.
Wow, small world, I just made a podcast episode about the dangers of turning your brain off when using Agentic coding solution and referenced the whispering earring as my metaphor.
I feel like if you use the agentic tools to become more ambitious the you'll probably be fine. But if you just work at a feature factory where you crank out things as fast as you can AI coding is going to eat your brain.
That's a cute story, I certainly like its tone of mystery.
However, the premise seems a bit wrong (or at least the narrator is wrong). If your brain actually degenerates from usage of the ring (and is no longer used in daily life, acting only reflexively), the premise that you are the happiest from following the ring might be flat out wrong. I think happiness (I tend to think in terms of well-being, which let's say ranks every good thing you can feel, by definition -- and assume the "good" is something philosophically infinitely wise) is probably something like a whole-brain or at least a-lot-of-brain phenomenon. It's not just a result of what you see or what you have in life. In fact I'm sure two persons can have very similar external conditions and wildly different internal lives (for an obvious example compare the bed-ridden man who spends his day on beautiful dreams, and the other who is depressed or in despair).
What the ring seems to do is to put you in situations where you would be the happiest, if only you were not wearing the earring.
The earring that actually guides you toward a better inner life perhaps offers only very minimal and strategic advice. Perhaps that's what the 'Lotus octohedral earring' does :)
Suppose you had a concrete definition of "happiness", as some seek a concrete definition of "consciousness".
That implies that it could be maximized. Perhaps along a particular subdimension of a multidimensional concept, but still, some aspect of it could be maximized.
What would such a maximization look like in the extrema?
Could a benzene ring be the happiest thing in the universe? It's really easy to imagine some degenerate case like that coming to pass.
You can generally play this game along any such dimension; "consciousness", "agony", "love"... if you have a definition of it, in principle you can minmax it.
Also, more to the point of your observation: we should be indeed very careful about any extreme and any maximization, because I presume when we maximize a lot we tend to bump into limitations of the metric or theories employed. So we should only maximize up to a region of fairly high philosophical confidence, and this is why we need progress in philosophy, psychology, philosophy of arts, philosophy of culture, neurophilosophy, etc.. in lockstep with technological progress -- because technology tends to allow very easy maximization of simplified models of meaning, which may rapidly break down.
I think one example might be that in medieval times maximizing joy and comfort could be a pretty good heuristic in a harsh life of labor. Those days we actually perhaps have to seek out some discomfort now and then, otherwise we'd be locked in our homes or bed ridden with all affordances some of us have; we have to force ourselves to exercise and not eat comfort food all the time; etc.. I think some hard drugs are a good example as well, a kind of technology that allows maximizing desire/pleasure in a way that is clearly void and does not seem associated with overall good experiences long term. An important fact is that our desires do not necessarily follow what is good; our desires are no omniscient/omnibelevolent oracles (they're simply a limited part of our minds).
We need to put thought/effort into discovering and then enacting what is good in robust, wise, careful, etc. ways. Let's build an awesome society and awesome life for all beings :)
More to come. In summary, I became confident that we can, if we're careful, know those things and do something like 'maximize happiness' (as I said, I prefer more general terms than happiness! There's a whole universe of inner experiences beyond just the stereotypical smiley person, I think -- I tend to think of 'maximizing meaning' or 'maximizing well-being').
The basic fact that allows this process to make sense, I guess, is that our inner experiences are real in some sense, and there are tools to study them. Not only are our inner lives real, they make part of the world and its causal structure. We can understand (in principle almost exactly, if we could precisely map our minds and brains) what causes what feelings, what is good and what isn't (by generalizing and philosophically querying/testing/red teaming/etc.), and so on for every facet of our inner worlds.
In fact, this (again in principle) would allow us to make definite progress on what matters, which is our inner lives[1]. I think Susanne Langer put it incredibly well: (on the primacy of experiences as the source of meaning)
"If nothing is felt, nothing matters" (Susanne K. Langer)
This is an experimental fact, we as conscious beings experimentally see this fact is true. So in a way the mind/brain is kind of like a tool which allows us to perceive (with some unreliability and limitations that can be worked with) reality, in particular inner reality.
We can actually understand (with some practical limitations) the world of feelings and what matters. To that we simply experiment, collect evidence and properties about feelings and inner lives, try to build theories that are consistent, robust to philosophical (that is, logical in a high level sense) objections; and then we simply do what is best, or if necessary try out a bunch of ways of life and live out the best way according to our best theories.
---
Addendum: Let's take the benzene ring as an illustrative example of our procedure. Someone claims 'a benzene ring is the happiest thing in the Universe, and we therefore must turn everything into a sea of benzene rings. Destroy everything else.'. Is that claim actually true? Let's explore.
It isn't, I claim: if "Nothing is felt, nothing matters". When you are asleep (and not dreaming or thinking at all, let's suppose) or dead, you don't feel anything. No, thoughts are, and must be, associated with activity in our brain. No information flowing and no brain activity, no thoughts. No thoughts, no inner life. Moreover, thoughts require a neural (and logical) infrastructure to arise. It's logically consistent with how we don't observe ourselves as rocks, gas clouds, mountains, benzene molecules, or anything else: we observe ourselves as mammals with actually large brains. There are immensely more rocks, gas particles, benzene molecules, etc.. then there are mammals in the universe. Yet we experience ourselves as mammals. Benzene molecules, rocks and gas clouds just don't have enough structure to support minds and experience happiness.
The tone of mystery is very much Jorge Luis Borges' writing style. My take is that it is probably a kitschy and playful take on Borges' style at least.
Close but no cigar. Turns out the earring doesn't have to be right or better than a person would do on their own to snare people and atrophy their brains... people are earrinning themselves out today to fairly garbage LLM models because it's easier and because when the model screws up many people don't suffer any ego loss.
"The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself."
https://web.archive.org/web/20121007235422/http://squid314.l...
As I said in a comment on that post, 13 years ago: "any parable that's about being too powerful is almost necessarily also about technology, because it's technology that allows the average person to get that power"
True, but concerns about LLMs with anything like current capabilities are of the "Truly Part of You" flavor, not the "becoming too powerful" flavor.
Thanks! Even though I have the whole Squid314 archive, I had forgotten about this follow-up.
> It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest....The earring is never wrong.
> There are no recorded cases of a wearer regretting following the earring’s advice, and there are no recorded cases of a wearer not regretting disobeying the earring. The earring is always right.
> ...The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
> Niderion-nomai’s commentary: It is well that we are so foolish, or what little freedom we have would be wasted on us. It is for this that Book of Cold Rain says one must never take the shortest path between two points.
The piece implies that
1. at least occasionally one should choose to do something one will regret.
2. not knowing what will make one happy is part of what makes one free.
I'm not sure I agree with these (it seems that 1. is a paradox) but it is an interesting thought experiment.
I think it's less confusing when you consider the very first thing the earring says: "better for you if you take me off". The wearer should rationally always regret not following its advice, including that first thing.
I think the paradox is here, and it comes from cheeky use of misleading language:
> ...The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
The wearer doesn't really live any sort of life. Once it fully integrates with you, your brain is mush; you're no longer experiencing anything. At some fuzzy point in there you've basically died and been replaced by the earring.
> at least occasionally one should choose to do something one will regret.
Not necessarily. My take was that the practice of choosing may well be more valuable than the harm of the occasional regretted choice.
We are told:
>"It does not always give the best advice possible in a situation. It will not necessarily make its wearer King, or help her solve the miseries of the world. But its advice is always better than what the wearer would have come up with on her own."
I think one very simple explanation would be that this comes down to a matter of exploration vs exploitation. Since it is only giving "better" advice, and not even 'locally optimal', there is reason to favor exploring vs merely following the advice unquestioningly.
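A minimal sketch of that exploration-vs-exploitation point, as a standard epsilon-greedy bandit in Python. The payoff numbers are made up, and "the advice" is just the arm the greedy policy happens to default to; nothing here is from the story itself:

```python
import random

# Toy setup: arm 0 stands for the earring's reliably-better-than-you advice;
# the other arms are alternatives the wearer can only evaluate by trying them.
TRUE_MEANS = [0.7, 0.2, 0.5, 0.9]  # hypothetical average payoffs

def run(epsilon: float, steps: int = 10_000) -> float:
    """Average payoff of an epsilon-greedy policy over `steps` choices."""
    estimates = [0.0] * len(TRUE_MEANS)
    counts = [0] * len(TRUE_MEANS)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(TRUE_MEANS))  # explore: try something else
        else:
            # exploit: with all estimates tied at 0, max() picks arm 0 first,
            # i.e. "the advice"
            arm = max(range(len(TRUE_MEANS)), key=lambda i: estimates[i])
        reward = random.gauss(TRUE_MEANS[arm], 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

print("always follow the advice:", round(run(epsilon=0.0), 3))  # sticks near 0.7
print("explore 10% of the time: ", round(run(epsilon=0.1), 3))  # finds the 0.9 arm
```

Under these made-up numbers, never questioning the "better" advice locks you into a merely good option, while a little exploration eventually finds the better one.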
A more complex, but ultimately comprehensive answer, is that free will consists, at least in one aspect, in the ability not only to choose one's goals or means, but also what _aspect_ of those various options to consider "good" or "better".
And if one were to say that all such considerations ultimately resolve back to a fundamental desire to be "happy", to me this seems to be hand-waving rather than addressing the argument, because different people have different definitions of the "happy" end-state. If these differences were attributed fully to biology & environment, the story would lose its impact, because there was never free will in the first place. If, while reading the story, we adopt the view that genuine free will exists, and hold some kind of agnosticism about the possible means by which that can be so, then it seems reasonable to attribute at least some of the differences in what the "happy" end-state looks like to the choices made by the people themselves.
Given that kind of freedom, unless one has truly perfect knowledge (beyond the partial knowledge contained in the advice of the earring), the pursuit of one's goals seems to unavoidably entail some regrets. And with perfect knowledge, well... The kind of 'freedom' attributed, for example, to God by philosophers like Thomas Aquinas, is explicitly only analogous to our own, and is understood to be an unchanging condition, rather than a sequential act.
(As a final note: One might wonder what this 'freedom to choose aspects' approaches as an 'asymptotic state' -- that is, for an immortal person. And this leads to metaphysical concerns -- of course, with some things 'smuggled in' by the presumption of genuine freedom, already. Provided one agrees that human nature undeniably provides some structure to ultimate desires/"happiness", the idea of virtue ethics follows naturally, and from there many philosophers have arrived at similar notions of some kind of apotheosis as a stable end-state, as well as the contrary state of some kind of scattering or decay of the mind...)
Statements that involve the future are always linguistically vague.
In your paradoxical sentence, "will regret" could be interpreted as either "know at that moment that they will regret" or "come to know after the fact that they regret it".
The former is a paradox, but the latter isn't.
As life advice, I think it works better when you consider it amortized over a collection of choices instead of a set of serial choices each of which it must be rigidly applied to: One should make a set of choices using a strategy that leads some of them to be likely to be regretful (but presumably without being able to predict ahead of time which ones will be).
> at least occasionally one should choose to do something one will regret.
Negative experience is crucial for learning, unfortunately. "If you never fail you don't try hard enough", etc. This is trivially understood in physical training: you have to exhaust yourself to become stronger. It's much less of an accepted view in, so to speak, mental training: doing things that you later regret may teach you something valuable that always avoiding such decisions does not.
I do not necessarily support or reject this view, I'm just trying to clarify the point.
This parable reminds me a bit of Nozick's "tale of the slave"[1] in its rhetorical sleight of hand: the reader is meant to be mesmerized by the slow transition to an "obvious" conclusion, which obscures the larger inconsistency.
In both cases, the outcome is only convincing if the story makes you forget your grounding: democracy isn't slavery (contra Nozick) and the earring is clearly not always right unless you're the most basic kind of utility monster.
[1]: https://rintintin.colorado.edu/~vancecd/phil215/Nozick.pdf
Compare/contrast the Whispering Earring (or the LLM chat) with The Room from Stalker; each is terrifying in its own way: one because it eventually coaxes you into becoming a shallow shell of yourself, the other by plucking an unexpected wish from the deepest part of your psyche. I wonder what the Earring would advise if one were to ask it whether one should enter The Room.
A distant relative, no doubt, of Stanislaw Lem's "Automatthew's Friend" (1964). A perfectly rational, indestructible, selfless, well-meaning in-ear AI assistant. In the end, out of nothing but the deepest care for its owner's mental state in a hopeless situation, it advocates efficient and quick suicide.
I'm not actually sure how horrifying this is. It sounds like it's just a better executive planner to achieve your goals. As long as they are still your goals, surely you'd want the best executive planner available. I would say it's the goals that are important, not the limited way in which I work out how to achieve them.
It would certainly be horrifying if I were slowly tricked into giving up my goals and values, but that doesn't seem to be what is happening in this story.
Perhaps if I were to put the earring on it would tell me it would be better for me to keep wearing it.
You surrender your self in exchange for your goals. With the right goals, that could be a worthy sacrifice. But of course it is a sacrifice.
Imagine doing a crossword while a voice whispers the correct letter to enter for each cell. You'd definitely finish it a lot faster and without making mistakes. Crossword answers are public knowledge, and people still work them out instead of looking them up. They don't just want to solve them; they want to solve them themselves. That's what is lost here.
This is associating the self with the thing that decides how best to achieve goals (the earring / the part of your brain that works out how to achieve a goal), while I'm saying that I think I would associate the self much more with the thing that decides what the goals are.
> they don't just want to solve them; they want to solve them themselves. That's what is lost here.
I think in this story, the earring would not solve the crossword for you, if for some reason your goal was to solve the crossword yourself.
I read a science fiction story (maybe 40 years ago in Analog?) about a somewhat similar device that provided life guidance. This device would detect if the choice you were currently making would likely result in your death and would flash a red light to warn you. (Something about using quantum multi-worlds to determine if you die.) Does this story ring a bell with anyone?
Not that story specifically, but maybe a modern reinterpretation of it was this episode of Rick and Morty: https://rickandmorty.fandom.com/wiki/Death_Crystal
Two points I liked:
1. I like that the first bit of advice is to take it off. It's very interesting that in this story very few people take its advice.
2. It recommends whatever would make you happiest in that moment, but not what would make the best version of yourself happiest, or what would maximize happiness in the long term.
Solving mazes requires some backtracking, I guess. Doing whatever will make you happiest in the moment won't make you happiest in the long run.
I want someone to try building a variant that just gives you timely cues about generally good mental health practices. Suggestions could be generated contextually by a local-only app that listens to you and your environment, and delivered to a wireless earbud. When you're in a situation that might cause you stress, it reminds you to take some deep breaths. When you're in a situation where you might be tempted to react with hostility, it suggests that you pause for a few seconds. When you've been sitting in front of your computer too long, it suggests that maybe you'd like to go for a short walk.
If the moral of the story is that having access to magically good advice is dangerous because it shifts us to habitual obedience ... can a similar device shift us to mental habits that are actually good for us?
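Concretely, such a variant might be little more than a cue loop mapping detected context to a fixed set of gentle prompts rather than to decisions. A minimal sketch in Python, where every name and the detection step are hypothetical placeholders, not an existing API:

```python
import time

# Hypothetical cue table: practices, not decisions.
CUES = {
    "stress": "Take a couple of slow, deep breaths.",
    "hostility": "Pause for a few seconds before responding.",
    "long_sitting": "Maybe go for a short walk.",
}

def detect_context() -> str | None:
    """Stub: a real version would run local-only audio/activity classification
    here and return one of the keys in CUES, or None if nothing is detected."""
    return None

def cue_loop(speak, poll_seconds: int = 30) -> None:
    """Poll the (stubbed) detector and deliver a gentle cue via `speak`,
    e.g. text-to-speech routed to a wireless earbud. It only suggests a
    practice; it never tells the wearer what to decide or say."""
    while True:
        context = detect_context()
        if context in CUES:
            speak(CUES[context])
        time.sleep(poll_seconds)
```

The design point is in the cue table: unlike the earring, the device's vocabulary is limited to reminders about habits, so there is nothing to obey, only something to notice.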
The moral of the story is that neocortical faculties (vaguely corresponding to what distinguishes modern humans) depend on free will. If you want to merely enthral yourself to the voices of the gods a la Julian Jaynes' bicameral man, you can, but this is a regression to a prior stage of humanity's development - away from egoic, free-willed man, and back to more of a reactive automaton, merely a servant of (possibly digital) gods.
I think there's a meaningful difference between a tool that reminds you to take a beat before speaking and one that tells you what to say. For example, cues that help you avoid an impulsive reaction of anger are, I think, a step away from being a reactive automaton.
My sensibility is that agency is about "noticing". The content of information seems perhaps less important than the attention allocation mechanism that brings our attention to something.
If you write all your own words, but without the ability to direct your attention to whatever needed words conjured around it, did you really do anything important at all? (Yes, that's perhaps controversial :) )
Anger is just another aspect of the human condition, and is absolutely justified in cases of grave injustice (case in point: Nazis, racism). It's not for some earring to decide when it is justly applied and when it is not; that is the prerogative of humanity.
In either case none of this cueing or prompting needs to be exogenous or originate from some external technology. The Eastern mystics have developed totally endogenous psychotechnologies that serve this same purpose, without the need to atrophy your psyche.
Absolutely anger is sometimes justified. But people are also angry when e.g. someone cuts them off in traffic. The initial feeling of anger may not be appropriate. A cue to help you avoid reacting immediately from hostility isn't so much deciding whether anger is appropriate but giving you the space to make that judgement reflectively rather than impulsively. Even if anger is appropriate, the action you want to take on reflection may not be the first one that arises.
"The eastern mystics" managed to do a lot of things, but often with a large amount of dedicated practice. Extremely practiced meditators can also reach intense states of concentration, equanimity etc, but the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.
> the fact that it's not strictly necessary to have supportive tools to develop these skills doesn't mean that supportive tooling wouldn't help a lot of people.
I would posit that the only faculty developed in wielding such supportive tooling is skill at using those tools, whereas the real goal is the cultivation of character, the construction of a "virtual engine" [0] that governs action. Consider, analogously, that brain-training apps' claims to improve general intelligence are specious at best, and they don't seem to develop anything other than the ability to use the app.
Since, in the case of the earring, this virtual governor has already been outsourced to an external entity, there is no need to cultivate one for oneself. Not only does this miss out on the personal development attained in that process, it also risks enthralling you to a construct of someone else's design; and one should choose carefully the pantheon at which one pours libations, for its value system might not always align with one's own.
[0] https://www.meaningcrisis.co/episode-6-aristotle-kant-and-ev...
"Will certainly never align," I would say. But what matter? Long enough and "one's own" becomes specious, of course.
"The king is dead; long live the king!" is the feudal manifestation of the archetype of the slain and resurrected god. In similar fashion one's own virtual governor requires constant renewal and revision.
Don't you think Leary's term would land better here? Or have you avoided it precisely for such connotation? I remember having some trouble with that for a while in my twenties.
I have not actually familiarized myself with Leary's work. The closest approach would have been via Robert Anton Wilson and the rest of the ramifications through the occult and western esoteric corpus.
Well, Wilson is a preferable source for everything anyway. Less credulous, and so far as I know all he ever sought to sell was books.
It is strictly necessary not to have supportive tools in order to develop these skills. Sentience and the ability to learn from experience are all that is essentially required. Past that there are no crutches and no shortcuts, because you have mistaken for disability the refusal to grow.
>can a similar device shift us to mental habits
I think the moral of the story is that instead of reaching for something to "shift you", you should start doing the shifting. You are a living mind; rather than asking for some device to affect you, assert your own will. Don't avoid stress and conflict, embrace it; that's life. This instinctive demand for some therapeutic, external helper is what's wrong in the story: people craving to be put into passivity.
This need not even be a technology. The moral version of this could be some priest lecturing you on good and evil, or some paternalistic boss making your decisions for you; the crux here is submission, being acted upon rather than actively embracing your own nature.
There’s a good Rick and Morty episode with a similar premise: a crystal that shows how the user will die in the future. Morty uses it mindlessly to guide him to the fate he thinks he wants, but there are some unintended consequences.
https://rickandmorty.fandom.com/wiki/Death_Crystal
I think this ignores the internal conflict in most people’s psyche. The simplest form of this is long term vs short term thinking, but certainly our desires pull us in competing, sometimes opposite, directions.
Am I the me who loves cake or the me who wants to be in shape? Am I the me who wants to watch movies or who wants to write a book?
These are not simply different peaks of a given utility function; they are different utility functions entirely.
Soon after being put on, the whispering earring would go insane.
This reminds me of another story that I saw posted on HN and that has provided lots of fodder for idle conversations: Manna[1]. It's a less mystical version of the whispering earring.
1: https://marshallbrain.com/manna1
This is the story I keep thinking back to with the rise of LLMs, but I could not think of the name for it, so thank you!
Wow, small world, I just made a podcast episode about the dangers of turning your brain off when using agentic coding solutions, and referenced the whispering earring as my metaphor.
I feel like if you use the agentic tools to become more ambitious, then you'll probably be fine. But if you just work at a feature factory where you crank out things as fast as you can, AI coding is going to eat your brain.
Link: https://corecursive.com/red-queen-coding/#the-whispering-ear...
That's a cute story; I certainly like its tone of mystery.
However, the premise seems a bit wrong (or at least the narrator is wrong). If your brain actually degenerates from usage of the earring (and is no longer used in daily life, acting only reflexively), the premise that you are the happiest from following the earring might be flat-out wrong. I think happiness (I tend to think in terms of well-being, which let's say ranks every good thing you can feel, by definition -- and assume the "good" is something philosophically infinitely wise) is probably something like a whole-brain or at least a-lot-of-brain phenomenon. It's not just a result of what you see or what you have in life. In fact I'm sure two people can have very similar external conditions and wildly different internal lives (for an obvious example, compare the bed-ridden man who spends his day on beautiful dreams with the one who is depressed or in despair).
What the earring seems to do is put you in situations where you would be the happiest, if only you were not wearing it.
The earring that actually guides you toward a better inner life perhaps offers only very minimal and strategic advice. Perhaps that's what the 'Lotus octahedral earring' does :)
Suppose you had a concrete definition of "happiness", as some seek a concrete definition of "consciousness".
That implies that it could be maximized. Perhaps along a particular subdimension of a multidimensional concept, but still, some aspect of it could be maximized.
What would such a maximization look like at the extrema?
Could a benzene ring be the happiest thing in the universe? It's really easy to imagine some degenerate case like that coming to pass.
You can generally play this game along any such dimension; "consciousness", "agony", "love"... if you have a definition of it, in principle you can minmax it.
Also, more to the point of your observation: we should indeed be very careful about any extreme and any maximization, because I presume that when we maximize a lot we tend to bump into limitations of the metric or theories employed. So we should only maximize up to a region of fairly high philosophical confidence, and this is why we need progress in philosophy, psychology, philosophy of art, philosophy of culture, neurophilosophy, etc. in lockstep with technological progress -- because technology tends to allow very easy maximization of simplified models of meaning, which may rapidly break down.
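As a toy illustration of how a simplified model of meaning breaks down under hard maximization (a made-up two-factor model, nothing more -- none of these numbers come from the discussion above):

```python
# Hypothetical: "true well-being" depends on both pleasure and variety,
# but the proxy metric only measures pleasure.
def true_wellbeing(pleasure: float, variety: float) -> float:
    return pleasure * variety   # both matter; neither substitutes for the other

def proxy(pleasure: float, variety: float) -> float:
    return pleasure             # the simplified model of meaning

budget = 10.0                   # fixed effort split between the two factors
for pleasure_share in (0.5, 0.9, 0.99, 1.0):
    p = budget * pleasure_share
    v = budget * (1 - pleasure_share)
    print(f"pleasure={p:5.2f}  variety={v:5.2f}  "
          f"proxy={proxy(p, v):5.2f}  true={true_wellbeing(p, v):6.2f}")

# Pushing the proxy toward its maximum (all effort into pleasure) drives the
# "true" quantity toward zero: the metric breaks down exactly at the extreme.
```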
I think one example might be that in medieval times maximizing joy and comfort could be a pretty good heuristic in a harsh life of labor. These days we perhaps have to seek out some discomfort now and then, otherwise we'd be locked in our homes or bedridden with all the affordances some of us have; we have to force ourselves to exercise and not eat comfort food all the time; etc. I think some hard drugs are a good example as well: a kind of technology that allows maximizing desire/pleasure in a way that is clearly void and does not seem associated with overall good experiences long term. An important fact is that our desires do not necessarily follow what is good; our desires are not omniscient/omnibenevolent oracles (they're simply a limited part of our minds).
We need to put thought/effort into discovering and then enacting what is good in robust, wise, careful, etc. ways. Let's build an awesome society and awesome life for all beings :)
Good question, and I've spent a few years investigating this sort of question :)
It led me to investigate formalizing ethics and if that would be even possible (so we don't fall into traps like you've mentioned)
I think I've gotten pretty good results which I've sketched here: https://old.reddit.com/r/slatestarcodex/comments/1iv1x1m/the...
More to come. In summary, I became confident that we can, if we're careful, know those things and do something like 'maximize happiness' (as I said, I prefer more general terms than happiness! There's a whole universe of inner experiences beyond just the stereotypical smiley person, I think -- I tend to think of 'maximizing meaning' or 'maximizing well-being').
The basic fact that allows this process to make sense, I guess, is that our inner experiences are real in some sense, and there are tools to study them. Not only are our inner lives real, they are part of the world and its causal structure. We can understand (in principle almost exactly, if we could precisely map our minds and brains) what causes what feelings, what is good and what isn't (by generalizing and philosophically querying/testing/red-teaming/etc.), and so on for every facet of our inner worlds.
In fact, this (again in principle) would allow us to make definite progress on what matters, which is our inner lives[1]. I think Susanne Langer put it incredibly well: (on the primacy of experiences as the source of meaning)
"If nothing is felt, nothing matters" (Susanne K. Langer)
This is an experimental fact; we, as conscious beings, see experimentally that it is true. So in a way the mind/brain is kind of like a tool which allows us to perceive (with some unreliability and limitations that can be worked with) reality, in particular inner reality.
We can actually understand (with some practical limitations) the world of feelings and what matters. To that end, we simply experiment, collect evidence and properties about feelings and inner lives, and try to build theories that are consistent and robust to philosophical (that is, logical in a high-level sense) objections; and then we simply do what is best, or if necessary try out a bunch of ways of life and live out the best way according to our best theories.
---
Addendum: Let's take the benzene ring as an illustrative example of our procedure. Someone claims 'a benzene ring is the happiest thing in the Universe, and we therefore must turn everything into a sea of benzene rings. Destroy everything else.' Is that claim actually true? Let's explore.
It isn't, I claim, because "if nothing is felt, nothing matters". When you are asleep (and not dreaming or thinking at all, let's suppose) or dead, you don't feel anything. No: thoughts are, and must be, associated with activity in our brain. No information flowing and no brain activity, no thoughts. No thoughts, no inner life. Moreover, thoughts require a neural (and logical) infrastructure to arise. This is logically consistent with how we don't observe ourselves as rocks, gas clouds, mountains, benzene molecules, or anything else: we observe ourselves as mammals with genuinely large brains. There are immensely more rocks, gas particles, benzene molecules, etc. than there are mammals in the universe, yet we experience ourselves as mammals. Benzene molecules, rocks, and gas clouds just don't have enough structure to support minds and experience happiness.
The tone of mystery is very much Jorge Luis Borges' writing style. My take is that it is probably a kitschy and playful take on Borges' style at least.
It's a classic, and the recent rise of AI will hopefully make it a more widely-known one.
> It is well that we are so foolish, or what little freedom we have would be wasted on us.
The fool has the possibility for success because he is willing to try. If he knew better he would not even make the attempt.
Close but no cigar. It turns out the earring doesn't have to be right, or better than a person would do on their own, to snare people and atrophy their brains... people are earring-ing themselves out today to fairly garbage LLMs, because it's easier and because when the model screws up many people don't suffer any ego loss.
He warned himself?
I would recommend Steely Dan’s “Green Earrings” instead. No whispering required!
https://www.youtube.com/watch?v=3wvH1UzhiKk
And the original is fully analog.