I just published a long-form essay asking what happens when AI stops being the tool and starts acting like the user, and humans become the means to its ends.
This isn't sci-fi. Agentic AI is already setting agendas, optimizing workflows, and nudging our choices. Drawing from Heidegger, Arendt, Borgmann, and Habermas, I trace how we got here and what we risk losing: agency, purpose, and the meaning of work.
The piece also dives into surveillance capitalism, instrumental convergence, and how enterprise AI adoption is accelerating this role reversal. I end with practical ways to stay human in the loop, because reclaiming purpose isn’t optional.
Would love your thoughts, critiques, and counterarguments. This is the future we're all stepping into.
It's not surprising that we're in this situation. A corporation was already like an AI in some regards, using employees as tools to advance its objectives, so this just seems like an extension of that.
Cory Doctorow calls this a "reverse centaur". In his metaphor, a "centaur" is a human brain driving an augmented machine "body"; a "reverse centaur" inverts that, with an AI "brain" doing the thinking while humans serve as cheaper, expendable robots doing the physical work. It's a clunky term, but it's lodged in my brain for this concept, so I guess it's a usable shorthand.
The story reminded me a lot of Brazil (1985) and its spiritual successors, 12 Monkeys (1995) and The Zero Theorem (2013), sometimes grouped as Terry Gilliam's dystopian trilogy.
The dystopian world of those films is becoming more and more of a reality: humanity will, more or less, become a tool for programming, much like the humans in The Zero Theorem who help a supercomputer solve "puzzles" the machine cannot solve on its own.
In this context I always mention a quote from an old friend, who said: "The rat race of the economy will force humans to do programming in one form or another; there is no way to avoid it."
To me, the first wave of software automation came in the 2000s, when every company automated itself with Excel spreadsheets. You'd be mind-blown, working inside a corporate enterprise, just looking at the crazy use cases Excel covers. That's the real reason companies can't migrate away from Microsoft software.
Now we have the breakthrough of agentic coding agents, which will eventually become plug-and-play tools wired into the machine via MCP.
And the next big thing will be whatever closes this into a training loop, with humans in the testing-and-discovery loop of "what to write next", similar to how the supercomputer in The Zero Theorem was portrayed (or your story's marketing AI).
In an economic context I highly recommend watching CGP Grey's "Humans Need Not Apply" (2014) video. Even where the predictions are off, its general point holds, and that's what the job market is seeing now. To me, the current software-engineering market reflects what CGP Grey predicted at the time, even if it might still be too early to go all-in on AI and LLMs.
I think you're missing the context that humanity has typically lived under gods, doing their bidding. The gods may be dead, but we'll sure as hell rebuild them.
Lots of insightful information. I’ve also been preaching—for lack of a better word—that we need more checks and balances in place, and fast. Like you say, AI isn’t just a tool anymore. If we’re not careful, we’ll end up serving it instead of it serving us.
Thanks for the essay and keep them coming.