21 comments

  • Terr_ 2 days ago

    If I had my legal 'druthers, such, er, "brain-derived mental content" would be flatly illegal to obtain without a specific and discrete sharing decision by the person, and such decisions may not be part of any contract, so:

    1. You can buy a tool and use it to monitor yourself, whether daily-logger, dream-recorder, a fetish-detector, whatever.

    2. You can share specific results with others on a case-by-case basis, but it's illegal for them to obtain it any other way.

    3. It is illegal (or at least unenforceable) for someone to require you to share results in exchange for something else, like requiring employees to wear a disloyalty-detector headband.

    The question of how it applies to the 5th amendment right against self-incrimination... Hmm. Someone placing a Guilt-O-Meter on your head would be illegal, but if you did it yourself and left log files around...

    • survirtual 2 days ago

      You're close but the idea gets much broader and extends to augmented intelligence as well.

      Once the brain is readable, it actually becomes easier to justify legal protections for augmented compute intelligence, ironically enough.

      The inevitable end result for a benevolent manifested planet / universe is that we all have an equal share of impenetrable compute.

      This result is unavoidable if we want to live a free and prosperous life.

      It requires encoding certain human rights, freedoms, privacy protections, and corporate / government limitations that human beings are not remotely ready to encode yet.

      So by extension, I won't hold my breath on brain scanning being a technology I would be comfortable with in the hands of this current world and thinking.

  • hereme888 2 days ago

    And then wait for the discounted $5k deal* for an automated robotic surgery to implant a NeuraLink device.

    *You agree to allow the company to collect anonymized data, to help improve* the device.

    *Our lawyers are still working on this.

  • 8474_s 2 days ago

    What is the real-world fidelity of this "decoding both perceptual and mental content"? Can it record a dream as video?

  • briga 2 days ago

    Is this the future technology that anyone wants?

    • juris 2 days ago

      if only to screen suitable material for the presidency.

      • Terr_ 2 days ago

        System output: "Person. Woman. Man. Camera. TV."

  • fouc 2 days ago

    In the future, we'll probably lose the ability to verbalize or construct sentences, because our thoughts will be understood directly by LLMs; it'll be too easy and convenient.

    • Terr_ 2 days ago

      The shareholders yearn for the Torment Nexus.

      • HollowVoice 2 days ago

        And to think, I grew up thinking Greg Egan and Iain Banks were (mostly) trying to write hopeful stories. It was dystopian all along!

        Oh well, time to kill all the weirdos.

    • pessimizer 2 days ago

      They'll give up talking to us too, and just interface through our ears. The LLM earpiece will just make some 2400 baud modem noises and we'll move around like marionettes.

      • Terr_ 2 days ago

        Not quite a wordless scenario, but after seeing some people today already scrolling for dopamine, I'm still worried:

        > I can remember putting on the headset for the first time and the computer talking to me and telling me what to do. It was creepy at first, but that feeling really only lasted a day or so. Then you were used to it, and the job really did get easier. Manna never pushed you around, never yelled at you. The girls liked it because Manna didn’t hit on them either. Manna simply asked you to do something, you did it, you said, “OK”, and Manna asked you to do the next step. Each step was easy. You could go through the whole day on autopilot, and Manna made sure that you were constantly doing something. At the end of the shift Manna always said the same thing. “You are done for today. Thank you for your help.” Then you took off your headset and put it back on the rack to recharge. The first few minutes off the headset were always disorienting — there had been this voice in your head telling you exactly what to do in minute detail for six or eight hours. You had to turn your brain back on to get out of the restaurant.

        -- https://marshallbrain.com/manna1

        • Marshferm 12 hours ago

          It’s entirely reliant on symbols, i.e. it’s irrelevant in terms of brain-ecology processes.

    • UltraSane 2 days ago

      I can see many people not learning how to write when speech to text gets good enough.

    • Marshferm 2 days ago

      It can’t be LLMs; they’re incompatible with thought.

      • fouc a day ago

        you didn't look at the paper? or you're taking umbrage with the "understanding" part?

        • Marshferm 14 hours ago

          It’s not understanding, it’s explanation. I read the paper, I posted it.

          Start at what are human explanations:

          https://www.alisongopnik.com/Papers_Alison/Explain%20final.p...

          Now what are words in relation to that drive?

          “We refute (based on empirical evidence) claims that humans use linguistic representations to think.” — Ev Fedorenko, Language Lab, MIT, 2024

          What are LLMs?

          Since language and tokens cannot accurately represent anything of merit in brains, the interpretation of what is semantic versus what is a task-variable action potential is subject to the Gopnik problem.

          It’s an enforced circularity that never allows the brain/ecology to speak for itself in its native process.