The Metaphors We Use for AI Matter
- Matt Arnold

- Feb 1
I’ve been thinking a lot about the metaphors we use for AI. Not the features. The frames.
This question surfaced during a conversation with a trusted colleague about an agentic AI concept I’m prototyping for futures work and horizon scanning. From the beginning, I’ve been trying not to let the technology lead. I want the system to be enabled by AI, not driven by it.
That distinction matters, because how we describe technology often determines how we design with it, what we trust it to do, and what responsibility we quietly hand over.
This reflection took me back to something Bill Moggridge once observed about paper. For centuries, paper functioned as both interface and storage medium. Because it was so familiar, we stopped seeing the work it was doing. Early digital systems struggled in part because designers tried to replicate paper rather than re-examining the underlying jobs it supported.
AI feels similar.
When we treat AI as software, we look for bugs. When we treat it as a black box, we outsource judgment. But increasingly, AI systems feel less like tools and more like organizations.
You do not debug an organization. You shape incentives, constraints, norms, and feedback. Behavior emerges over time.
That lens has helped me think more clearly about how different AI environments support different kinds of work.
Conversation spaces feel like whiteboards for thinking and articulation. Knowledge bases feel like databases for grounded recall. Sensemaking labs feel like research studios for pattern discovery. Agent workspaces feel like background processes for repeated scanning.
Each environment has a role. Each has limits.

A former colleague once described AI as an over-zealous intern. Desperate to help. Willing to put in enormous effort. But lacking wisdom and occasionally making things up. That metaphor stuck with me because it immediately clarifies where humans still matter. High output is not the same as judgment.
Another friend recently reminded me to fall in love with the problem, not the solution. That advice forced a harder question. What is the real problem when it comes to collaborative sensemaking in futures and strategy work? What genuinely requires human judgment, and where does assistance help without collapsing complexity too early?
I’m increasingly convinced that how we talk about AI shapes what we build with it, and what we prematurely hand over.
That belief goes back further than the current moment. In the early 1990s, as an undergraduate, I took an English class where we did not turn in papers. We built interactive presentations stored on SyQuest cartridges. The focus of the course was the 1893 World’s Fair, formally the World’s Columbian Exposition, and how language was used to explain, legitimize, and shape new technologies.

One lesson stuck with me. The way we describe technology can help us move quickly, but it can also quietly constrain how we think, design, and act.
That class was my first exposure to Structuration Theory. The framework later carried into my master’s research on computer-augmented group decision-making, where the central question was never whether technology was powerful, but where and how it could best augment human work.
Which brings me back to AI.
The metaphors we choose are not neutral. They shape expectations, responsibility, and agency.
So I’m trying to stay disciplined. Fall in love with the problem. Stay clear on the human work that actually matters. Let the technology serve that purpose, not define it.
The images below were generated as I work through my prototype (build to learn). They helped me think. The matrix is my attempt to clarify what I’m actually working with.
It’s a reminder that we have always needed language, structure, and shared meaning to make sense of new technologies. AI does not change that. If anything, it makes the work more important.