
Fluid Futures: Navigating an AI-Mediated World

What Happens When AI Stops Being a Tool and Starts Being the World?

There's a useful distinction that keeps getting lost in conversations about artificial intelligence: the difference between augmentation and mediation.

Augmentation is familiar. It's the calculator model — AI helps you work faster, smarter, better. You remain the agent. The tool amplifies your capacity.

Mediation is something else. When AI mediates your world, it's not just helping you do things — it's shaping the system you're doing them inside of. What information surfaces. What options appear. What feels like the obvious next move. You're not using the environment anymore. You're inside one that AI has constructed, and it's shifting around you in real time.

This distinction is at the heart of Exploring the Futures of Technology 2.0, the new report from the Copenhagen Institute for Futures Studies, and it's the central thread of the latest episode of Modem Futura.

On this episode, my co-host Andrew Maynard, fresh from attending the report's launch in Copenhagen, joined me to work through ten signals the report identifies as defining the near future, among them: the shift from static to liquid content, the rise of agentic organizations, neurotechnology and cognitive integration, synthetic simulations replacing real-world research populations, physical AI entering embodied space, the geopolitics of technological access, AI-mediated cybersecurity threats, the sustainability challenges of AI infrastructure, and quantum computing as the wildcard at the edge of everything.

What holds these signals together isn't a single prediction. It's a pattern: the world is becoming fluid, and the frameworks we built for a more static environment — static reports, static institutions, static skillsets — are increasingly inadequate for navigating it.

One of the episode's sharpest observations is about the cost of cognitive offloading. As we hand more of our decision-making and information retrieval to AI systems, we risk losing the capacity to recognize when something's wrong. Not because AI is malicious, but because we've stopped practicing the skills that would let us notice. Like losing the ability to read a map. Except the stakes are considerably higher.

The conversation doesn't resolve these tensions — and that's exactly the point. Futures thinking, at its best, isn't about prediction. It's about staying awake to what's changing, naming the tensions, and refusing to optimize for a world that no longer exists.

If you want the full report, the Copenhagen Institute has made it freely available. And if you want the conversation around it — the episode is a good place to start.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bSGsZP

🎧 Spotify: https://open.spotify.com/episode/4sdx83QUD6pIs9IXb9G0VY?si=JdbwVHUKRg2mFO0Gsi_EFw

📺 YouTube: https://youtu.be/-2enUvPYmHo

🌐 Website: https://www.modemfutura.com/   

Futures of Agentic AI and the 2025 AI Action Plan – Episode 42

A Wet Hot AI Summer: Decoding the U.S. AI Action Plan & the Agentic‑Bot Boom

If you stepped away from your screen or feed for even a moment this July, you might have missed two massive AI stories that could shape near-term AI innovation. First, the White House released its 2025 AI Action Plan, a 20-plus-page blueprint built on three pillars: (1) accelerate AI innovation, (2) build national AI infrastructure, and (3) lead global AI diplomacy. If that wasn't news enough, on July 17th OpenAI announced the rollout of its new "Agent" mode: autonomous-ish bots that promise to book your travel, manage your calendar, and even spend your money while you sleep.

Joking aside, please be VERY careful about what sort of access, privacy, and information you give any automated service. Ask yourself, "What would be the worst that could happen?" If the answer makes you cringe or sweat, don't do that thing. Okay, PSA cautionary rant over... back to the episode notes.

In our latest Modem Futura episode, Andrew and I pull these threads together. We ask whether the Action Plan's "build-baby-build" mantra, complete with massive semiconductor subsidies and calls to "remove regulatory barriers," is a bold vision or a reckless speed run. We also spotlight what's missing: robust guardrails for deepfakes, algorithmic bias, and the colossal energy footprint of new data centers.

Switching to agentic AI, we run real-time tests on OpenAI's new Agent Mode and compare it with Manus's more mature workflow. Yes, watching a bot open browser tabs for you is technically impressive, until you realize you can still do most tasks faster yourself. That friction sparks a wider debate:

Productivity paradox – Early studies suggest teachers and coders are spending more time fact-checking AI output than they would drafting from scratch.

Privacy trade‑offs – Granting an agent access to your email or bank account may save clicks now, but what’s the long‑term cost to autonomy?

Deepfake backlash – The Plan flags courtroom deepfakes as a national‑security risk, yet leaves broader social harms largely unaddressed.

Behind the policy prose and flashy demos lurks a wider narrative of tech nationalism. The document casts AI as a race the United States must win, positioning allies as followers and China as the ultimate adversary. That framing risks turning open research into a geopolitical arms sprint—one where ethical reflection gets lapped by hype.

So where does that leave forward-thinking professionals, educators, and creators? We advocate starting these conversations now; here are a few good places to begin:

Stay curious but critical. Piloting new agent tools is the best way to spot real value—and red flags—early.

Advocate for “responsible speed.” Innovation and regulation are not mutually exclusive; demand both from vendors and policymakers.

Own your data literacy. Whether you're vetting deepfake evidence or AI-generated lesson plans, skepticism is fast becoming a core career skill.

🎧 Tune in for the full discussion—including Hitchhiker’s Guide jokes, live agent fails, and pragmatic optimism about building a flourishing, not merely faster, future.

🎧 Listen on Apple Podcasts: https://apple.co/4l7eCKC

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you'd like to dive deeper, follow the links above to listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on your favorite podcast platform, follow us on LinkedIn, and join the conversation by sharing your thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture, and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4l7eCKC

🎧 Spotify: https://open.spotify.com/episode/2fI044VpiPE3t4Y9MXrZjJ?si=mJ-xb414R3Ww7IkTOIlT0Q

📺 YouTube: https://youtu.be/6fcOiRYnIK8

🌐 Website: https://www.modemfutura.com/