Revisiting Minsky's Society of Mind in 2025

(suthakamal.substack.com)

114 points | by suthakamal 2 days ago

34 comments

  • mblackstone 2 days ago

    In 2004 I previewed Minsky's chapters-in-progress for "The Emotion Machine", and exchanged some comments with him (which was a thrill for me). Here is an excerpt from that exchange:

    Me: I am one of your readers who falls into the gap between research and implementation: I do neither. However, I am enough of a reader of research, and have done enough implementation and software project management, that when I read of ideas such as yours, I evaluate them for implementability. From this point of view, "The Society of Mind" was somewhat frustrating: while I could well believe in the plausibility of the ideas, and saw their value in organizing further thought, it was hard to see how they could be implemented. The ideas in "The Emotion Machine" feel more implementable.

    Minsky: Indeed it was. So, in fact, the new book is the result of 15 years of trying to fix this, by replacing the 'bottom-up' approach of SoM by the 'top-down' ideas of the Emotion machine.

    • neilv 2 days ago

      It might've been Push Singh (Minsky's protege) who said that every page of Society of Mind was someone's AI dissertation waiting to happen.

      When I took Minsky's Society of Mind class, IIRC, it actually had the format -- not of going through the pages and chapters -- but of him walking in, and talking about whatever he'd been working on that day, for writing The Emotion Machine. :)

    • suthakamal 2 days ago

      agree. A lot has changed in the last 20 years, which makes SoM much more applicable. I would've agreed in 2004 (and say as much in the essay).

  • generalizations 2 days ago

    Finally someone mentions this. Maybe I've been in the wrong circles, but I've been wishing I had the time to implement a society-of-mind-inspired system ever since llamacpp got started, and I never saw anyone else reference it until now.

    • lubujackson a day ago

      I found and read this book from the library completely randomly about 20 years ago, and I still remember a lot of the concepts. It definitely seems like a foundational approach for how to architect intelligent systems with a computer. Before I was even thinking about any of that and was just interested in the philosophy, I thought his approach and the fullness of his ideas were remarkable. Glad to see it becoming a more central text!

    • sva_ 2 days ago

      Honestly, I never really saw the point of it. It seems like introducing a whole bunch of inductive biases, which Richard Sutton's 'The Bitter Lesson' warned against.

      • suthakamal 2 days ago

        Rich Sutton's views are far less interesting than Minsky's IMO.

  • drannex 2 days ago

    Good timing, I just started rereading my copy last week to get my vibe back.

    Not only is it great for tech nerds such as ourselves, but it's also a great philosophy for thinking about and living life. Such a phenomenal read: easy, simple, wonderful format. I wish more tech-focused books were written in this style.

  • colechristensen 2 days ago

    MIT OpenCourseWare course including video lectures taught by Minsky himself:

    https://ocw.mit.edu/courses/6-868j-the-society-of-mind-fall-...

    • suthakamal 2 days ago

      amazing find. thank you for sharing this!

  • suthakamal 2 days ago

    As a teen in the '90s, I dismissed Marvin Minsky’s 1986 classic, The Society of Mind, as outdated. But decades later, as monolithic large language models reach their limits, Minsky’s vision—intelligence emerging from modular "agents"—seems strikingly prescient. Today’s Mixture-of-Experts models, multi-agent architectures, and internal oversight mechanisms are effectively operationalizing his insights, reshaping how we think about building robust, scalable, and aligned AI systems.

    • detourdog 2 days ago

      I was very inspired by the book in 1988-89 as a second-year industrial design student. I think there was a thread about this on HN about 2 years ago.

  • fishnchips 2 days ago

    Having studied sociology and psychology in my previous life, I am now surprised by how relevant some of those almost-forgotten ideas have become to my current life as a dev!

    • griffzhowl a day ago

      Interesting. What kind of psychological ideas are most relevant?

      • fishnchips a day ago

        Skinner's behaviorism for sure ("The real question is not whether machines think but whether men do. The mystery which surrounds a thinking machine already surrounds a thinking man").

        But also Dennett's work on the origins of consciousness.

        What I mean here is that the discussion among the AI proponents and detractors about machines "thinking" or being "conscious" seems to ignore what neuropsychology and cognitive psychology found obvious for decades - that there is no uniform concept of "thinking" or "consciousness" in humans, either.

  • 2 days ago
    [deleted]
  • fossuser 2 days ago

    > Eventually, I dismissed Minsky’s theory as an interesting relic of AI history, far removed from the sleek deep learning models and monolithic AI systems rising to prominence.

    That was my read of it when I checked it out a few years ago: a book obsessed with explicit rule-based Lisp expert systems and "good old fashioned AI" ideas that never made much sense, were nothing like how our minds work, and were obvious dead ends that did little of anything actually useful (imo). All that stuff made the AI field a running joke for decades.

    This feels a little like falsely attributing new ideas that work to old work that was pretty different? Is there something specific from Minsky that would change my mind about this?

    I recall reading there were some early papers that suggested some neural network ideas more similar to the modern approach (iirc), but the hardware just didn't exist at the time for them to be tried. That stuff was pretty different from the mainstream ideas at the time though and distinct from Minsky's work (I thought).

    • spiderxxxx 2 days ago

      I think you may be mistaking Society of Mind for a different book. It's not about Lisp or "good old fashioned AI" but about how the human mind may work, something we could possibly simulate. It's a set of observations about how we perform thought. The ideas in the book are not tied to a specific technology; they're about how a complex system such as the human brain works.

    • adastra22 2 days ago

      You are surrounded by GOFAI programs that work well every moment of your life, from air traffic control planning to heuristics-based compiler optimization. GOFAI has this problem where as soon as it solves a problem and gets it working, it stops being “real AI” in the minds of the population writ large.

      • mcphage 2 days ago

        Philosophy has the same problem, as a field. Many fields of study have grown out of philosophy, but as soon as something is identified, people say “well that’s not Philosophy, that’s $X” … and then people act like philosophy is useless and hasn’t accomplished anything.

      • fossuser 2 days ago

        Because it isn't AI, never was, and had no path to becoming it; the new stuff is, and the difference is obvious.

        • adastra22 a day ago

          Go read an AI textbook from the '80s. It was all about optimizations and heuristics. That was the field.

          Now if you write a SAT solver or a code optimizer you don't call it AI. But those algorithms were invented by AI researchers, back when the population as a whole considered these sorts of things to be intelligent behavior.

          • fossuser 3 hours ago

            I agree with you that it was called AI by the field, but that’s also why the field was a joke imo.

            Until LLMs, everything casually called AI clearly wasn’t intelligence, and the field was pretty uninteresting: it looked like a dead end with no idea how to actually build intelligence. That changed around 2014, but it wasn’t because of GOFAI; it was because of a new approach.

        • 2 days ago
          [deleted]
    • empiko 2 days ago

      I completely agree with you, and I am surprised by the praise in this thread. The entire research program that this book represents has been dead for decades.

      • photonthug 2 days ago

        It seems like you might be confusing "research programs" with things like "branding" and surface-level terminology. And probably missing the fact that society-of-mind is about architecture more than implementation, so it's pretty agnostic about implementation details.

        Here, enjoy this thing clearly building on SoM's ideas, edited earlier this week: https://github.com/camel-ai/camel/blob/master/camel/societie...

      • suthakamal 2 days ago

        I pretty clearly articulate the opposite. What's your evidence to support your claim?

        • empiko a day ago

          The problem with your argument is that what you call an agent is nothing like what Minsky envisioned. The agents in Minsky's world are very simple rule-based entities ("nothing more than a few switches") composed in vast hierarchies. The argument Minsky is making is that if you compose enough simple agents in a smart way, intelligence will emerge. What we use today as agents is nothing like that: each agent is itself considered intelligent (directly opposing Minsky's vision that "none of our agents is intelligent"), while being organized along very simple principles.
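          For contrast with today's LLM-backed agents, here is a minimal Python sketch of that Minsky-style picture. The Agent class, the builder/find/get/put names, and the toy "world" list are my own illustration, loosely after the book's Builder example, not code from the book: each agent is a trivial switch, and any competence lives only in the wiring.

```python
# Hypothetical sketch: Minsky-style agents as trivial switches.
# No single agent is intelligent; behavior emerges from composition.
class Agent:
    def __init__(self, name, action=None, subagents=()):
        self.name = name
        self.action = action          # a trivial rule-based step, or None
        self.subagents = list(subagents)

    def activate(self, world):
        # An agent either fires one trivial action or merely
        # switches on its sub-agents, in order.
        if self.action is not None:
            self.action(world)
        for sub in self.subagents:
            sub.activate(world)

# Loosely modeled on the book's Builder agency: builder itself knows
# nothing about blocks; it only activates find, get, and put.
find = Agent("find", action=lambda w: w.append("located block"))
get = Agent("get", action=lambda w: w.append("grasped block"))
put = Agent("put", action=lambda w: w.append("stacked block"))
builder = Agent("builder", subagents=[find, get, put])

world = []
builder.activate(world)
print(world)  # ['located block', 'grasped block', 'stacked block']
```

          Under this framing, the hierarchy, not any node, carries the intelligence, which is roughly the inverse of wiring a few already-capable LLM agents together with a simple protocol.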

          • fossuser 13 hours ago

            This is reminding me of what I thought I was remembering. I don't have the book anymore, but I remember starting it and reading a few chapters before putting it back on the shelf; its core ideas seemed to have been shown to be wrong.

    • suthakamal 2 days ago

      I don't think we're talking about the same book. Society of Mind is definitely not an in-the-weeds book that digs into things like Lisp, etc. in any detail. Instead of changing your mind, I'd encourage you to re-read Minsky's book if you found my essay compelling, and to ignore it if not.

      • 2 days ago
        [deleted]
  • ggm a day ago

    Minsky disliked how Harry Harrison changed the end of "The Turing Option" and wrote a different ending.

    (not directly related to the post but anyway)

  • frozenseven a day ago

    Jürgen Schmidhuber's team is working on this, applying these ideas in a modern context:

    https://arxiv.org/abs/2305.17066

    https://github.com/metauto-ai/NLSOM

    https://ieeexplore.ieee.org/document/10903668

  • a day ago
    [deleted]