Modem Futura

Thriving with AI: Two Futures Thinking Tools for Navigating Uncertainty

Illustration of Sean and Andrew presenting their workshop title slide

The question is no longer whether AI will reshape education. It already has. The more interesting question — and the harder one — is how educators, leaders, and institutions can navigate that transformation with clarity, purpose, and agency.

In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard walk listeners through a workshop they developed for ASU's 2026 Folk Fest titled "Thriving with AI: Ethical, Transparent, and Human-Centered Learning." Rather than demonstrating AI platforms or advocating for a particular stance, the session offers two practical thinking tools designed to help individuals make sense of complexity and make intentional decisions — regardless of where they fall on the AI adoption spectrum.

Foresight Methodologies

The Futures Triangle, originally developed by futurist Sohail Inayatullah, is a foresight method that maps three forces shaping any change landscape: the pull of the future (emerging visions and possibilities), the push of the present (trends, pressures, and mandates driving change), and the weight of history (the traditions, values, and institutional structures that resist or ground that change). By making these forces visible, individuals and teams can better orient themselves within the dynamics of change rather than simply reacting to them.

The Intent Map, drawn from Jefferey Abbott and Andrew Maynard's book AI and the Art of Being Human, complements the triangle by shifting from orientation to action. A simple two-by-two matrix, it asks users to identify four elements: their core values (what they won't compromise), their desired outcomes (what success looks like), their guardrails (the hard boundaries they won't cross), and their metrics (how they'll know if it's working). Critically, the framework recognizes that metrics don't have to be numerical — sometimes the most meaningful indicators of success are qualitative, like a student who can't stop thinking about what they learned.

What makes these tools particularly valuable is their accessibility. Both can be sketched on a scrap of paper. Both work for individuals and teams. And both are domain-agnostic — while the episode frames them in the context of education, they apply equally well to organizational strategy, technology adoption, and personal decision-making.
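For readers who think in code as readily as on a scrap of paper, the two frameworks can be captured as plain data structures. This is a minimal sketch: the field names and the worked teacher example are our own illustration of how the tools fit together, not something from the episode.

```python
from dataclasses import dataclass


@dataclass
class FuturesTriangle:
    """Sohail Inayatullah's three forces shaping any change landscape."""
    pull_of_future: list[str]    # emerging visions and possibilities
    push_of_present: list[str]   # trends, pressures, and mandates driving change
    weight_of_history: list[str] # traditions and structures that resist or ground change


@dataclass
class IntentMap:
    """The two-by-two from 'AI and the Art of Being Human'."""
    values: list[str]     # what you won't compromise
    outcomes: list[str]   # what success looks like
    guardrails: list[str] # hard boundaries you won't cross
    metrics: list[str]    # how you'll know it's working (may be qualitative)


# Hypothetical example: a teacher weighing AI-assisted feedback
triangle = FuturesTriangle(
    pull_of_future=["rich, personalized feedback for every student"],
    push_of_present=["grading workload", "institutional AI mandates"],
    weight_of_history=["belief that feedback is a human relationship"],
)
intent = IntentMap(
    values=["students' own voices stay central"],
    outcomes=["faster, more substantive feedback loops"],
    guardrails=["no AI-written grades without human review"],
    metrics=["students say the feedback feels personal"],
)

print(len(intent.values + intent.outcomes + intent.guardrails + intent.metrics))  # prints 4
```

The point of the sketch is the same as the paper version: orientation first (the triangle), then intention (the map), with qualitative metrics treated as first-class entries rather than an afterthought.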

The episode is anchored by two provocative 2035 headlines: one in which AI tutors outperform human teachers and faculty roles come under review, and another in which human-AI partnership produces the most critically thinking generation in history. The question the workshop poses isn't which headline is more likely. It's which one you want — and what intentional choices you need to make to move toward it.

Thriving with AI, as the hosts frame it, isn't about mastering the latest platform. It's about staying awake to what matters.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3ZXgT2P

🎧 Spotify: https://open.spotify.com/episode/1b1Q0W7YVSGZA2ELYj6g6C?si=wL1sXb-DQsSluBkLYCu9tg

📺 YouTube: https://youtu.be/zi_zvXCt9sY

🌐 Website: https://www.modemfutura.com/   


Asimov's "The Fun They Had" and the Real Cost of AI-Driven Education

Illustration of a boy from Asimov's "The Fun They Had" reading beside a mechanical teacher

The History of our Future

More than seventy years ago, Isaac Asimov imagined a future where children learn in isolation, guided by personalized mechanical tutors, and books are relics of a forgotten age. His 1951 short story, "The Fun They Had," is set in 2155, but its questions feel startlingly current.

In the story, a young girl named Margie discovers a paper book and learns about a time when children went to school together—sat in classrooms, were taught by human teachers, and shared the experience of learning with their peers. Her own education is efficient, personalized, and lonely. Her mechanical teacher can diagnose her struggles and recalibrate its approach, but it cannot inspire her, connect with her, or make her feel like she belongs to something larger than a lesson plan.

Asimov didn’t predict AI as we know it. But he predicted the question that matters most: in our rush to optimize education, are we designing out the very things that make learning meaningful?

This is precisely the tension at the heart of today's conversation about AI in education. The promise of AI-powered tutors is real and, in many cases, genuinely valuable: adaptive pacing, instant feedback, content tailored to individual needs. But when personalization becomes the dominant paradigm—when every learner is on a separate track, in a separate space, at a separate time—the communal dimensions of education begin to disappear.

Natural Human Impulses for Learning (not schooling)

John Dewey argued more than a century ago that learning is driven by four natural impulses: inquiry, communication, construction, and expression. Most of these are inherently social. They depend on friction, dialogue, surprise, and the presence of other people. No amount of algorithmic sophistication can fully replicate the moment a teacher's unexpected enthusiasm shifts a student's entire trajectory, or the experience of working through difficulty alongside peers who share the same struggle.

Asimov's story also raises a subtler question about what endures. The book Margie discovers has survived two centuries. The static words on the page—unchanging, tactile, physical—carry a kind of permanence that digital media cannot easily match. This resonates with the growing cultural appetite for analog experiences: vinyl records, film photography, even old iPods. These are not acts of technological rejection. They are expressions of a deeper need for embodied engagement, deliberate choice, and the kind of friction that gives experience its texture.

Where do we go next?

None of this means AI has no place in education. It does, and increasingly will. But Asimov's story is a quiet reminder that the most important things about learning—curiosity, connection, belonging, the joy of shared discovery—are not problems to be optimized. They are human experiences to be protected.

The question is not whether AI can teach us. It's whether, in building systems that teach us more efficiently, we are designing out the very things that made learning worth having in the first place.

*Episode 71 of Modem Futura explores these themes through Asimov's story and a wider conversation about technology, nostalgia, and what it means to learn as a human being.*

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4s1lDk1

🎧 Spotify: https://open.spotify.com/episode/20I5j2DliUnZAbWDiVw7y8?si=WoEW_Zb2SPiynHYb4d8XHA

📺 YouTube: https://youtu.be/TDQc15Muwto

🌐 Website: https://www.modemfutura.com/   

The Future From a Kid's Perspective: What a 10-Year-Old Thinks About AI, Jobs, and Meaningful Work

We spend a lot of time talking about young people when we discuss the future of technology. We debate how AI will affect their education, reshape their careers, and transform the world they'll inherit. But we rarely stop to ask them what they think.

In this special episode of Modem Futura, we did exactly that. Freddie Leahy—co-host Sean's almost-10-year-old son—joined us for an unscripted conversation about artificial intelligence, meaningful work, and the questions that don't have easy answers.

Already Thinking About Job Displacement

When asked what he thinks about when he imagines the future, Freddie's first response wasn't about flying cars or space travel. It was about jobs.

"I kind of more think about the AI part of the future," he said. "And I'm just wondering what jobs will be overran by AI."

He's almost ten. And he's already calculating whether his dream career—paleontology—will exist by the time he's ready to pursue it.

This isn't abstract concern. Freddie has a specific vision: he wants to be like Alan Grant from Jurassic Park, out in the field, hands in the dirt, discovering fossils himself. When we suggested that AI might help him find more dinosaur bones faster, he didn't immediately embrace the idea. His worry isn't about efficiency—it's about being separated from the work itself.

"I would be doing it not for the money," he explained, "just because of the experience."

The Limits of AI Creativity

Freddie has firsthand experience with generative AI. He and I have spent time creating AI-generated images—D&D characters, fantasy creatures, book covers. But he's noticed something that many adults are also discovering: the gap between imagination and output.

"Every time you create an AI image," he said, "you never feel like it's quite right. So you just keep making these, and then you have to choose one, but in the end it never feels like the perfect cover you wanted."

When asked why, his answer was simple: "AI isn't our heads."

This observation—from a fourth-grader—gets at something fundamental about the current state of generative tools. They can produce impressive outputs, but they can't access the specific vision in your mind. The friction between prompt and result isn't just a technical limitation; it's a gap between human intention and machine interpretation.

When it comes to his own writing—Freddie is working on stories—he's clear that he doesn't want AI assistance. The temptation exists, especially when facing a blank page. But he recognizes something important: "It's the point about using your own creativity."

Suspicious of AI Companions

One of the most revealing exchanges came when we explored the idea of AI friendship. What if Freddie could have an AI companion who shared all his interests—someone who wanted to talk about dinosaurs as much as he does?

His response was immediate skepticism.

"That would be weird," he said, "because nobody likes what I like."

The very thing that might make an AI friend appealing—perfect alignment with his interests—is exactly what made it feel inauthentic. Part of what makes his interests meaningful is that they're his, distinct from the people around him. An AI that mirrored them perfectly would feel hollow.

When pressed further about whether he'd want an AI as a secret companion—a sort of digital spirit animal—Freddie remained uncertain. "Who knows what it could do," he noted. "It could hack everything."

There's healthy skepticism there, but also something deeper: a sense that friendship involves more than shared interests. It involves trust, vulnerability, and the unpredictability of another mind.

"I Refuse": Mind Uploading at Nine

During our Futures Improv segment, we posed a classic transhumanist scenario: What if you could upload your consciousness to a computer and live forever digitally, while your biological body remained behind?

Freddie's answer required no deliberation:

"I refuse. I will not upload my brain into a digital computer."

His reasoning was practical but profound. At nine years old, why would he abandon a body that works? The theoretical benefits of digital immortality don't outweigh the immediate reality of physical experience.

This perspective offers a useful counterweight to futures discourse that sometimes treats technological transcendence as obviously desirable. From Freddie's vantage point, the question isn't whether we can escape biological limitations, but whether we'd want to—and what we might lose in the process.

Questions Without Right Answers

Perhaps the most important takeaway from this conversation came near the end, when Freddie observed something about the nature of our questions.

"Because of all these questions," he said, "there is no wrong or right answer."

That's exactly right. The value of futures thinking isn't in predicting what will happen or determining the "correct" response to emerging technologies. It's in learning to sit with uncertainty, explore tensions, and develop our capacity for navigating complexity.

At almost ten years old, Freddie already understands this. He's not looking for definitive answers about AI and jobs and creativity. He's learning to ask better questions—and to recognize that asking them is more important than resolving them.

What the Future Thinks About Itself

We often frame conversations about technology and youth as adults preparing children for a world we're creating. But this episode suggests something different: young people are already thinking about these issues, often with more nuance than we might expect.

Freddie isn't anti-technology. He plays VR games, makes AI art, and follows developments in the field. But he's also holding onto something—a sense that some experiences are valuable precisely because we do them ourselves, that the struggle of creation is part of its meaning, and that efficiency isn't the only measure of a good life.

These aren't lessons we taught him. They're insights he's developing on his own, as he navigates a world where these technologies are simply part of the landscape.

Maybe the best thing we can do isn't to tell young people what the future will look like. Maybe it's to listen to what they already think about it—and learn from their perspective.

I don't know what the future holds for his generation. But if this conversation is any indication, they're thinking about it more carefully than we might expect.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4khmVES

🎧 Spotify: https://open.spotify.com/episode/5nKjpEVZcaUDisdZpGGaMZ?si=YgWp_O84T1yVlBSloedV1w

📺 YouTube: https://youtu.be/mfumkJZav-M

🌐 Website: https://www.modemfutura.com/   

Understanding Global Risk: What the WEF's 2026 Report Reveals About Our Collective Anxieties

How 1,300 experts see the world's greatest threats—and what their blind spots tell us

Each year, the World Economic Forum surveys over a thousand experts worldwide—business leaders, academics, policymakers, and institutional leaders—to map perceived global risks. The resulting Global Risks Report isn't a prediction of what will happen. It's something potentially more valuable: a snapshot of collective concern, a reading of the signals building across economic, environmental, technological, and societal domains.

The 2026 edition reveals tensions worth examining closely.

Short-Term Fears: The Present Pressing In

The two-year risk horizon is dominated by immediate geopolitical and informational concerns. Geoeconomic confrontation leads the list, having jumped eight positions from the previous year—a signal that trade conflicts, sanctions regimes, and economic nationalism have moved from background noise to foreground crisis for many observers.

Misinformation and disinformation hold second position, reflecting growing unease about information integrity in an age where AI-generated content becomes indistinguishable from authentic material and where social permission for deception seems to be expanding. Societal polarization follows in third place—and importantly, these three risks appear deeply interconnected. Misinformation accelerates polarization, polarization enables economic nationalism, economic nationalism generates more opportunities for information warfare.

Extreme weather events, state-based armed conflict, and cyber insecurity round out the top concerns for the immediate future.

Figure 3 from the World Economic Forum's 2026 Global Risks Report

Long-Term Concerns: The Environment Reasserts Itself

Expand the time horizon to ten years, and the risk landscape transforms. Environmental concerns claim five of the top ten positions, with extreme weather events, biodiversity loss and ecosystem collapse, and critical changes to Earth's systems occupying the top three spots.

This shift reveals something important about human risk perception: we consistently discount slow-moving catastrophes. Biodiversity loss lacks the urgency of trade wars, even though its cascading effects may ultimately prove more consequential. We've evolved to respond to immediate threats; we struggle to mobilize against dangers that unfold across decades.

Notably, societal polarization—ranked third in the short term—drops to ninth in the long-term view. Whether this reflects optimism that current divisions will heal, or simply the statistical reality that other risks seem more severe, remains an open question.

Different Lenses, Different Risks

Perhaps the report's most valuable contribution is its disaggregation of risk perception across demographics and geographies.

Age shapes perception. Respondents under 30 prioritize misinformation, extreme weather, and inequality. Those over 40 consistently rank geoeconomic confrontation as their primary concern. Generational experience matters: those who remember previous periods of great power competition read current signals differently than those encountering these dynamics for the first time.

Figure 15 from the WEF Global Risks Report

Geography shapes perception even more dramatically. AI risks that dominate American concerns rank 30th globally. In Brazil, Chile, and much of the world, more immediate concerns—inequality, pollution, resource access—take precedence. This isn't a failure of foresight; it's a reminder that risk is contextual. What threatens your community depends on where your community sits.

Figure 53 from the WEF Global Risks Report

Using Signals, Not Consuming Forecasts

Reports like this serve best as prompts for reflection rather than prescriptions for action. The value lies not in accepting these rankings as authoritative, but in using them to surface questions:

  • What assumptions am I making about stability that geoeconomic confrontation might disrupt?

  • How might misinformation affect my organization, my industry, my community's cohesion?

  • Which long-term environmental risks am I discounting because they feel distant?

  • Whose risk perceptions am I ignoring because they don't match my own context?

Human beings are, as far as we know, the only species capable of anticipating futures and adjusting present behavior accordingly. That capacity for foresight is a genuine superpower—but only if we use it. Signals become valuable when they prompt better questions. The work isn't to predict what happens next; it's to prepare ourselves for navigating uncertainty with more wisdom than our instincts alone would allow.

Modem Futura explores the intersection of technology, society, and human futures.

Download the full WEF Global Risks Report 2026: [PDF Web Link]

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4sUwhdG

🎧 Spotify: https://open.spotify.com/episode/0UoLHYJa8KHzbNbP564Qwy?si=h9WD1rE4Q6WTu6wOWlEQhA

📺 YouTube: https://youtu.be/-5PQMaqweNU

🌐 Website: https://www.modemfutura.com/   

Modem Futura Year in Review: What 2025 Taught Us About Being Human

As we step toward 2026, we recorded a “Year in Review” episode of Modem Futura to pause the treadmill, look back, and ask a bigger question: what did this year reveal about the future of being human?

This wasn’t a victory lap. It was a reflection on what resonated, what surprised us, and what it means to build a future-focused show while the future keeps moving.

Metrics matter… and they don’t

Yes, growth matters — it helps ideas travel. But podcast analytics are often incomplete and inconsistent, and they rarely capture what impact actually looks like. The most meaningful signals are still human: messages, emails, thoughtful disagreement, and reviews that help someone new discover the show.

If you want to support the show: subscribing, sharing, and leaving a rating/review are still the most helpful actions.
— Modem Futura

The themes that defined our year:

AI, beyond the hype: We kept returning to the same tension — generative tools are everywhere, but “AI” isn’t just a feature set. It’s a cultural force that shapes identity, agency, creativity, and values. We try hard to avoid both the hype machine and the doom loop, and instead stay in the messy middle where the most useful questions live.

Education and learning: We lean into what learning actually is (not just schooling), including John Dewey’s idea that humans are wired for inquiry, communication, construction, and expression. When AI arrives in every document and device, what does it do to those impulses — especially for kids?

Technology in the physical world: From autonomous‑vehicle safety systems that quietly drift out of calibration, to EVs and the persistent “flying car” dream, we explore what happens when shiny promises meet real‑world constraints.

Big questions, no apologies: Yes, we go there — simulation hypotheses, black holes, de‑extinction, space travel, and the edges of what science can (and can’t) explain. These episodes aren’t about “being right.” They’re about expanding the space of possible futures we can imagine.

If there’s one takeaway, it’s this: the future isn’t something that happens to us — it’s something we build together. That’s why we keep showing up each week: to create a shared space for curiosity, skepticism, wonder, and responsible imagination.

If you’ve been listening, thank you. If you’re new here, welcome. And if an episode sparked a thought you can’t shake — share it with a colleague, a student, a friend, or your community. As we step into 2026, we’re excited to keep exploring the possible, probable, and preferable futures — with you.

The Hidden Costs of “That Was Easy”: AI Slop, Creative Friction, and the Future of Human Craft

In this Modem Futura episode, hosts Sean Leahy and Andrew Maynard examine the rise of “AI slop” and the growing cultural pressure to accept frictionless creation as the norm. Drawing on examples from coding, design, futures thinking, and psychology, they unpack how satisficing, homogenization, and inherited power threaten to erode human craft and understanding. The article explores why creative friction is essential for mastery, agency, and meaning — and offers futures-oriented insights into how we can use AI intentionally without losing what makes us human.

ChatGPT Illustrated version of Modem Futura YouTube Thumbnail

Generative AI has ushered in an era where producing text, images, video, and code is no longer a challenge — it’s a button press. And in this week’s episode of Modem Futura, Andrew and I wrestle with a growing cultural tension: if everything is easy, what happens to the things that matter?

It began with a shared frustration. Both of us have noticed an explosion of what we call AI slop (content that is technically competent but devoid of care, intention, and personality). You’ve seen it too: the LinkedIn posts with identical emojis, the slide decks that all look like NotebookLM, the essays with no point of view. These things aren’t wrong, they’re just empty. And the emptiness is the point.

We discuss a concept called satisficing: the act of choosing something “good enough” rather than something excellent. In the age of AI, satisficing has increasingly become the default mode of creation. Why craft an idea when you can generate one? Why wrestle with a blank page when you can autocomplete your way to the finish line?

But here’s the problem: friction is where learning happens. It’s where creativity lives. It’s the sanding that polishes the stone. When you remove friction, you remove the struggle — and without struggle, there is no mastery, no depth, and no meaning.

Throughout the episode, we explore how this plays out across domains. Coders relying on AI-generated code they can’t understand. Designers accepting images that are “close enough.” Writers sharing posts they didn’t write. And organizations flirting with a future where expertise is replaced by button-pressing.

We draw on Michael Crichton’s concept of inherited power from Jurassic Park: the idea that wielding abilities you never earned leads to carelessness, overconfidence, and danger. AI gives us power we didn’t work for — and without wisdom, that power is hollow.

But this isn’t a pessimistic episode. We explore how AI can amplify creativity when used intentionally, how friction can be designed back into workflows, and why people may ultimately push back against frictionless living. Humans crave meaning, not efficiency. And meaning takes work.

If you’re navigating how to use AI thoughtfully — in your craft, your teaching, your leadership, or your creative life — this episode offers a grounded, futures-focused lens on what we stand to lose and what we still have time to protect.

🎧 Listen to the full episode of Modem Futura — and join the conversation on what we should preserve in an age that wants to eliminate every struggle.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/48WCGgh

🎧 Spotify: https://open.spotify.com/episode/1BajA2SvDWVyY0mRSQ9Flk?si=wvCFhWlgQtC2kye3bGz5Kg

📺 YouTube: https://youtu.be/1V9PD7j8iu8

🌐 Website: https://www.modemfutura.com/

The AI Sustainability Paradox – Promise, Peril, and Planetary Futures – Episode 58

AI, Sustainability, and the Planet Under Pressure: Can Technology Help Us Navigate the Future?

In this week’s episode of Modem Futura, Andrew and I take on one of the most urgent and complex questions of our time: Can artificial intelligence meaningfully help humanity navigate planetary crises — without deepening them?

Our jumping-off point is the newly released 2025 synthesis report AI for a Planet Under Pressure, produced by the Stockholm Resilience Centre and the Potsdam Institute for Climate Impact Research. The report asks a deceptively simple but high-stakes question: Can AI be used responsibly and effectively to address climate change, biodiversity loss, freshwater stress, and other accelerating environmental pressures?

It’s the kind of question that seems tailor-made for futures thinking — a toolset we rely on heavily throughout the show. Because as we discuss, we’re not just talking about one technology or one problem. We’re talking about wicked problems: challenges that mutate as we try to solve them. Climate change, plastics pollution, ecosystem collapse, global energy transitions — these are dynamic, interconnected systems that resist silver-bullet solutions.

AI shows real promise. We now have models that can detect complex patterns in climate systems, accelerate protein discovery, optimize renewable-energy grids, and reveal future pathways humans simply cannot see on their own. These are powerful breakthroughs — and the report highlights dozens of examples where AI is already pushing sustainability science forward in meaningful ways.

But as we explore in the episode, this promise raises a difficult paradox:
AI requires enormous amounts of water, energy, and material resources. Data centers heat cities, strain local water supplies, and demand extractive mineral supply chains. Are we burning fossil fuels to solve the fossil-fuel crisis? And what does it mean when our sustainability solutions come with unsustainable footprints?

We also dig into the human side: the behaviors, incentives, and limitations that so often undermine long-term environmental action. Could AI help foster better cooperation? Could it assist governments, regions, and communities in seeing shared pathways forward that remain invisible today? Or does outsourcing too much responsibility risk numbing the very agency we need most?

These aren’t easy questions — but they’re necessary ones. And as Andrew points out, failing to have these conversations guarantees that someone else (or something else) will make those decisions for us.

If you’re curious about the intersection of AI, planetary futures, and the human condition, this is a conversation worth spending time with.

🎧 Listen to the full episode here 👇

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/43Y4Wwn

🎧 Spotify: https://open.spotify.com/episode/195UbUOIUv8oF587yNo1FM?si=d6d7cd6b05034703

📺 YouTube: https://youtu.be/O8gGpJZO-g4

🌐 Website: https://www.modemfutura.com/

Through the lens: Spatial Computing with Apple Vision Pro – Episode 56

Just a couple of guys wearing nerd helmets and talking about the future of tech.

Inside Spatial Computing: Living (and Working) with Apple Vision Pro

We finally did it — we recorded inside Apple Vision Pro.

In this new episode of Modem Futura, Andrew Maynard and I decided to take spatial computing off the keynote stage and into real life — from multi-monitor workflows and long-haul flights to immersive video, panoramic memories, and even telepresence “personas.” We wanted to know: is this the start of a new computing era, or simply a beautiful distraction in search of a use case?

What we discovered surprised us.

Apple’s Vision Pro doesn’t want to be “VR.” It’s spatial — a computer that understands the world around you. Through pass-through video, eye-tracking, and hand-gesture control, it creates a workspace that’s not just 3D but responsive to you. One look or small pinch replaces the keyboard and mouse. It’s impressive, sometimes uncanny, and often quietly magical.

But behind the magic are deep questions about comfort, value, and human need. The headset’s design reveals how far we’ve come in rendering, latency, and foveated focus — and how far we still are from true wear-all-day computing. The device itself sparks larger conversations: What does “presence” mean when you can blank out reality at will? How will social norms adapt when everyone’s wearing cameras? And where does accessibility fit in when interaction becomes multimodal — eyes, hands, voice, and environment all working together?

Want to see what we've been up to? Here you can see a collection of spatial videos of our podcast, all recorded with a three-camera multicam setup, each camera filming in spatial video format.

One of the biggest challenges for spatial video right now (a deep dive for later) is that, beyond the fact that far fewer people own headsets than smartphones, most video platforms offer no way to consume spatial video, including, of all things, Apple's own visionOS. You can share a video file directly (these are massive, by the way, on the order of 9–20 GB each), but there is currently no Apple-supported, cloud-based viewer for watching spatial videos posted by friends and family. Personally, I really hope YouTube will start allowing spatial video playback (assuming, of course, an officially supported YouTube app eventually arrives on Apple Vision Pro).

We also talk about what comes after the headset. Think of a layered ecosystem:

  • Audio AR through your earbuds for subtle ambient context.

  • Lightweight AR glasses for glanceable, social interaction.

  • Full headsets for immersive creativity, co-presence, and exploration.

Rather than a single “device to rule them all,” spatial computing might evolve into a stack of experiences that adapt to how human attention, comfort, and curiosity really work.

It’s easy to be dazzled by tech specs, but the future of spatial computing depends less on what’s rendered and more on what it means to be present in digital space. That’s why we’re inviting developers, designers, and curious explorers to join us — to prototype, play, and imagine what spatial experiences could look like when they’re built for humans first.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/47Arkwv

🎧 Spotify: https://open.spotify.com/episode/3V40dbWcrKZq9RCCmoP7Zh?si=s0CVT5aQS8WJ_CgbfMTBcg

📺 YouTube: https://youtu.be/IF3juEp9l_I

🌐 Website: https://www.modemfutura.com/

Sora, Slop, and the AI Economy: When ChatGPT Meets Walmart – Episode 54

Illustrated image adaptation by ChatGPT

If you’ve ever wondered what happens when the world’s largest retailer merges with the world’s most talked-about AI, you’re not alone. In our newest episode of Modem Futura, Andrew Maynard and I explore how OpenAI’s latest moves—from Sora 2’s eerily lifelike videos to Walmart’s direct-to-ChatGPT shopping partnership—signal a seismic shift in how we live, learn, and buy in a post-search world.

Andrew also just returned from Lisbon for the global launch of his new book AI and the Art of Being Human, a deeply personal and practical guide to thriving with AI while staying grounded in what makes us human. Together we discuss the book’s central question: how can we build a meaningful life amid tools that increasingly think, speak, and create like us?

We also dive into futurist Amy Webb’s sharp warning that the financial plumbing of the internet is changing fast. As she notes, when an AI company built on venture debt begins replacing Google’s ad-based model, we risk building the next era of commerce on borrowed money—and borrowed trust.

From there, the episode ranges widely: we unpack the ethics of open-source AI groups like Nous Research, debate what “guardrails” really mean, and share our growing fatigue with synthetic content—the endless churn of what the internet now calls “AI-slop.”

But it’s not all doomscrolling. We end with a new round of Futures Improv—our playful segment imagining speculative scenarios like subscription-based immortality and AI DJs reading your neural signals. It’s improv, futures-style: serious ideas approached with humor and imagination.

Whether you’re an AI enthusiast, a creative technologist, or simply trying to stay human in a rapidly transforming world, this episode captures the heart of Modem Futura: thoughtful conversations that remind us the future isn’t just something that happens to us—it’s something we co-create, signal by signal.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/48C8PtS

🎧 Spotify: https://open.spotify.com/episode/38IopMk5XBA9xVsKU71UlM?si=wdAtLLaaQxGvNEjPVfqWTQ

📺 YouTube: https://youtu.be/aK-ev5T8Tu8

🌐 Website: https://www.modemfutura.com/