Fluid Futures: Navigating an AI-Mediated World

What Happens When AI Stops Being a Tool and Starts Being the World?

There's a useful distinction that keeps getting lost in conversations about artificial intelligence: the difference between augmentation and mediation.

Augmentation is familiar. It's the calculator model — AI helps you work faster, smarter, better. You remain the agent. The tool amplifies your capacity.

Mediation is something else. When AI mediates your world, it's not just helping you do things — it's shaping the system you're doing them inside of. What information surfaces. What options appear. What feels like the obvious next move. You're not using the environment anymore. You're inside one that AI has constructed, and it's shifting around you in real time.

This distinction is at the heart of Exploring the Futures of Technology 2.0, the new report from the Copenhagen Institute of Future Studies — and it's the central thread of the latest episode of Modem Futura.

On this episode, my co-host Andrew Maynard, fresh from attending the report's launch in Copenhagen, joined me to work through the ten signals the report identifies as defining the near future, among them: the shift from static to liquid content, the rise of agentic organizations, neurotechnology and cognitive integration, synthetic simulations replacing real-world research populations, physical AI entering embodied space, the geopolitics of technological access, AI-mediated cybersecurity threats, the sustainability challenges of AI infrastructure, and quantum computing as the wildcard at the edge of everything.

What holds these signals together isn't a single prediction. It's a pattern: the world is becoming fluid, and the frameworks we built for a more static environment — static reports, static institutions, static skillsets — are increasingly inadequate for navigating it.

One of the episode's sharpest observations is about the cost of cognitive offloading. As we hand more of our decision-making and information retrieval to AI systems, we risk losing the capacity to recognize when something's wrong. Not because AI is malicious, but because we've stopped practicing the skills that would let us notice. Like losing the ability to read a map. Except the stakes are considerably higher.

The conversation doesn't resolve these tensions — and that's exactly the point. Futures thinking, at its best, isn't about prediction. It's about staying awake to what's changing, naming the tensions, and refusing to optimize for a world that no longer exists.

If you want the full report, the Copenhagen Institute has made it freely available. And if you want the conversation around it — the episode is a good place to start.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bSGsZP

🎧 Spotify: https://open.spotify.com/episode/4sdx83QUD6pIs9IXb9G0VY?si=JdbwVHUKRg2mFO0Gsi_EFw

📺 YouTube: https://youtu.be/-2enUvPYmHo

🌐 Website: https://www.modemfutura.com/   

The Invisible Upgrade: What AI Is Actually Doing to the People Who Use It

[Image: a person at a computer, the scene split between everyday life and a version augmented by AI]

The loudest part of the conversation about artificial intelligence right now is focused on what AI produces. Can you detect it? Does it have tells? Is this essay, image, or report human-made or machine-made?

It's a reasonable place to start. But it's not where the most important transformation is happening.

In Episode 75 of Modem Futura, host Sean Leahy and co-host Andrew Maynard explore what Sean calls the "invisible upgrade" — the quiet, compounding cognitive shift taking place not in AI-generated artifacts, but in the minds and workflows of the people who have fully integrated these tools into how they think, create, and decide.

The Seam-Scanning Problem

Sean introduces the concept of "seam scanning" — the practice of looking for signs of AI in a piece of work. Early on, those seams were easy to spot: nine-fingered hands in AI images, suspicious em-dashes, the word "delve" where it didn't belong. But as AI systems become more sophisticated and more deeply woven into human workflows, those tells are disappearing. Not because the AI is getting better at hiding — but because the line between human and AI output is becoming genuinely indistinguishable when the integration is deep enough.

The question "how much AI did you use?" is becoming as meaningful, Sean argues, as asking a writer how much spellcheck they used. The tool has become part of the process.

Constitutive Resonance

Andrew brings a concept he's been developing to the conversation: constitutive resonance. Unlike a calculator, which you use and put down, AI reconfigures you as you use it — and is reconfigured in return. The relationship is recursive and dynamic. The term draws on physics: when two coupled systems resonate at matched frequencies, the energy exchanged between them can be transformative. Applied to human cognition and AI systems, this suggests that those who engage deeply with AI tools aren't just more productive — they are thinking differently, possibly in ways that are difficult to reverse.

This maps directly onto McLuhan's 1967 insight: all media work us over completely. AI, as Andrew and Sean explore, is the most cognitively coupled medium humanity has ever produced.

The Productivity Gap

What emerges from this isn't just a philosophical concern — it's a structural divergence. A growing group of knowledge workers, students, and researchers are operating with what Sean calls a "multiplier effect" — not because they are inherently smarter, but because the speed and depth of their synthesis, ideation, and iteration has dramatically expanded their total cognitive output. Meanwhile, those still debating whether to engage are falling further behind — not necessarily in skill, but in thinking capacity.

The episode also explores the rise of multi-agent AI systems, a step change Andrew suggests may prove bigger than the launch of ChatGPT, and what it means for institutions, education, and our understanding of what individual human contribution actually looks like in a world where AI is already inside the walls.