The Turing Option: A Prophetic Vision of Modern AI
Recently, I found myself reminiscing about a sci-fi novel I read in high school: The Turing Option by Marvin Minsky and Harry Harrison. Published in 1992, this book has been on my mind as I’ve watched the recent explosion in development of modern AI.
What stands out isn’t just that it was co-authored by Minsky—one of the founding fathers of artificial intelligence—but how many concepts in the book parallel today’s AI landscape.
Agents Before Agents Were Cool #
Throughout The Turing Option, Minsky and Harrison describe a core AI concept built on “agents” working together. Here’s a quote that stands out:
Thinking is the result of all those agents being connected in ways that make them help each other […] even though each one can do very little, it can still carry a little fragment of knowledge to share with the others
Minsky was famously an advocate for the “society of mind” theory that intelligence emerges from the interaction of many simple processes—agents—each specialised for different tasks.
Sound familiar? It should. The explosion of AI agents we’re seeing today—from coding assistants like Lovable, Cursor, and Windsurf to research agents like Google’s Deep Research or NotebookLM that can search, summarise, and “reason”—follows this exact paradigm. Especially when you bring MCP (Model Context Protocol, more on that later) into the mix. MCP gives Cursor or Claude Desktop the ability to talk to other tools, resources, and even other agents for more context.
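To make the idea concrete, here is a deliberately tiny toy sketch (mine, not the book’s, and not tied to any real framework): a couple of trivial “agents”, each carrying a single fragment of knowledge, that only produce a useful answer when an orchestrator stitches their fragments together.

```python
# Toy "society of mind": each agent knows almost nothing on its own,
# but the fragments they share combine into an answer.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    def contribute(self, question: str) -> str | None:
        return None  # the base agent has nothing to share

class DateAgent(Agent):
    def contribute(self, question: str) -> str | None:
        return "published in 1992" if "when" in question.lower() else None

class AuthorAgent(Agent):
    def contribute(self, question: str) -> str | None:
        return "written by Minsky and Harrison" if "who" in question.lower() else None

def society_answer(question: str, agents: list[Agent]) -> str:
    # Collect each agent's little fragment of knowledge and stitch them together.
    fragments = [f for a in agents if (f := a.contribute(question)) is not None]
    return "; ".join(fragments) or "no agent had anything to contribute"

print(society_answer("Who wrote it, and when?", [DateAgent("date"), AuthorAgent("author")]))
```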
Beyond the Turing Trap #
Erik Brynjolfsson’s 2022 article “The Turing Trap” provides a lens through which to reconsider Minsky’s (and Harrison’s) novel. Brynjolfsson argues that our focus on human-like AI (passing the Turing Test) may be limiting AI’s true potential:
“We already have intelligence that can carry on a conversation, it’s called a human being. We don’t have an abundance of intelligence that can simultaneously consider multiple viewpoints or alternative scenarios, and synthesize robust strategies or novel approaches.”
Minsky and Harrison weren’t just writing about AI that mimics humans—they were envisioning something more transformative—the interaction of agents performing many small tasks simultaneously while sharing context. This aligns with Brynjolfsson’s view that AI’s most profound contribution could arise from what Minsky called a “Society of Mind”:
“The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.”
The Turing Option didn’t just anticipate conversational AI—it envisioned systems that are genuinely new and potentially more powerful: an “agentic” network sharing context.
LAMA vs. LLaMA: A Cosmic Coincidence #
In one of those strange moments of synchronicity, the novel also describes a programming language called LAMA (with one L!) that:
“lets you write programs that write and run other programs”
Fast forward to today, and we have Meta’s LLaMA (Large Language Model Meta AI). While the real-world LLaMA isn’t exactly what Minsky described, the parallel is fun—modern LLMs effectively enable programs to write and run other programs, especially with function calling, agent frameworks, and the new hotness, MCP…
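A hedged sketch of that idea, assuming nothing about any particular model API: ask_llm below is a hypothetical stand-in for whatever LLM you call (it returns a canned completion so the snippet runs as-is), and the host program simply executes whatever code comes back.

```python
# Programs that write and run other programs, 2020s-style: an LLM writes
# the code, and the host program executes it.

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (OpenAI, Anthropic, a local
    # LLaMA, ...). A canned response keeps the example self-contained.
    return "def fib(n):\n    return n if n < 2 else fib(n - 1) + fib(n - 2)"

generated_source = ask_llm("Write a Python function fib(n) for the nth Fibonacci number.")

namespace: dict = {}
exec(generated_source, namespace)  # run the program the program just wrote
print(namespace["fib"](10))        # 55
```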
MCP: The Missing Piece of the Agent Puzzle #
If Minsky were alive today, I bet he’d be excited about the Model Context Protocol (MCP). This emerging standard provides the “connective tissue” that enables different AI agents to communicate meaningfully—forming a kind of agentic society of mind, as he envisioned.
MCP creates a standardised way for AI systems to exchange context, share knowledge, and coordinate actions. It’s the technological embodiment of Minsky’s vision where “agents are connected in ways that make them help each other.” With MCP, one agent can seamlessly pass relevant information to another, preserving context and enabling more complex collaborative workflows.
Tools like Claude Desktop, Cursor, and Windsurf that implement MCP demonstrate precisely what Minsky described: systems where specialised agents carry “fragments of knowledge to share with others.” For example, one agent might retrieve information from documentation, another might analyse code structure, and a third might handle natural language understanding—all seamlessly sharing context through MCP to solve problems collaboratively.
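As a rough illustration, here is what one of those specialised agents might look like as an MCP server, based on my reading of the official MCP Python SDK’s FastMCP helper; the tool and resource are made up for the example, and the exact decorator names are worth double-checking against the SDK docs.

```python
# A minimal MCP server sketch: one specialised "agent" exposing a tool and a
# resource (its fragments of knowledge) that other agents or hosts can call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-agent")

@mcp.tool()
def search_docs(query: str) -> str:
    """Return a documentation snippet for the query (faked here)."""
    return f"Pretend this is the relevant doc excerpt for {query!r}."

@mcp.resource("docs://readme")
def readme() -> str:
    """A static fragment of knowledge this agent can share."""
    return "The Turing Option was published in 1992."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, the transport local MCP hosts speak
```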
The Society of Mind Made Real #
With MCP as an enabling technology, today’s AI systems are increasingly built as collections of specialised models and tools working in concert:
- Large language models provide reasoning and generation capabilities
- Tool-using agents extend these abilities with specific functions
- Retrieval augmented systems access specialised knowledge stores
- Multi-agent architectures allow different AI instances to work together via MCP
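To show that last point end to end, here is a hedged client-side sketch that connects to the toy server above (the file name docs_agent_server.py is my assumption) and calls its tool, following the MCP Python SDK quickstart as I understand it; treat the exact signatures as something to verify against the docs.

```python
# One agent (the host) talking to another (the MCP server) over stdio.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumes the FastMCP server sketch above was saved as docs_agent_server.py.
server = StdioServerParameters(command="python", args=["docs_agent_server.py"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools offered:", [t.name for t in tools.tools])
            result = await session.call_tool("search_docs", {"query": "agents"})
            print("tool result:", result.content)

if __name__ == "__main__":
    asyncio.run(main())
```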
What we’re seeing aligns perfectly with Brynjolfsson’s perspective—the most promising AI systems aren’t just single models trying to mimic humans but complex ecosystems of specialised components working together. This is Minsky’s “Society of Mind” theory finally finding its implementation, with MCP serving as the communication protocol that makes it all possible.
From The Turing Test through The Turing Trap and back to The Turing Option #
The novel’s title itself suggests moving beyond the simple human-mimicry of the Turing Test toward something more profound—the option to create truly intelligent systems that aren’t just imitations of human cognition.
In 2025, we’re facing exactly that option as AI capabilities accelerate. MCP and similar technologies are helping us transcend the limitations of single-model AI and move toward genuinely collaborative agent networks. The theoretical frameworks Minsky proposed decades ago are now being implemented, refined, and extended in ways that could help us escape the “Turing Trap” Brynjolfsson warns about—not by making AIs more human-like, but by embracing their unique potential as societies of interconnected agents.
Time for a Re-read #
Despite some silliness that dates it (it is a 33-year-old book after all), The Turing Option deserves a place on your bookshelf. It reminds us that many of today’s “breakthroughs” have deep roots in theories and concepts that pioneers like Minsky were exploring (and sometimes derided for) decades ago.
I’m grabbing my old copy for a full re-read. Sometimes looking back helps us better understand where we’re headed—beyond the Turing Trap and toward the true Turing Option.