futures thinking

Pluribus and the Philosophy of the Happy Apocalypse: What Apple TV's New Sci-Fi Asks About Individuality, Consent, and Being Human

What if happiness is the threat?

Most apocalypse stories share a common grammar: society collapses, resources become scarce, and survival demands violence. We've internalized this template so thoroughly that it shapes how we imagine catastrophe itself.

Apple TV's Pluribus, created by Vince Gilligan (Breaking Bad, Better Call Saul), disrupts that grammar entirely. Its apocalypse isn't marked by destruction or suffering. It's marked by peace. By synchronization. By happiness—at a planetary scale.

An alien signal arrives carrying an RNA sequence. Humanity, being humanity, synthesizes it. Within days, most of the global population transforms into a unified hive mind. Not zombies. Not drones. Just billions of people sharing consciousness, moving together, experiencing what appears to be genuine contentment.

About a dozen people remain unconverted. And the series follows one of them—Carol Sterka, played by Rhea Seehorn—as she grapples with being the most unhappy person on earth.

On a recent episode of the Modem Futura podcast, we explored what Pluribus surfaces about individuality, consent, collective identity, and the stories we tell ourselves about what makes a human life worth living. What follows are some of the tensions that emerged.

What is in a Name: Many Without the One

The title "Pluribus" comes from the Latin phrase E Pluribus Unum—"out of many, one"—which appears on American currency as a motto of national unity.

But the show drops both the "E" (out of) and the "Unum" (one). What remains is simply "Pluribus": the many. It's a subtle signal that this isn't a story about diversity coming together into unity. It's a story about what happens when "the many" becomes literal—when individual minds merge into a single, collective consciousness.

That linguistic choice frames everything that follows.

Who Becomes the Monster?

One of the most productive lenses for understanding Pluribus is Richard Matheson's 1954 novel I Am Legend. Not the Will Smith film adaptation, but the original text, which ends with a devastating realization: the protagonist, who has spent the story hunting the "monsters" who have replaced humanity, comes to understand that from their perspective, he is the monster. The one who kills in the night. The one who refuses to accept the new order.

Carol Sterka occupies similar territory. She's convinced she needs to "set things right"—to restore humanity to its pre-hive state. But the show keeps surfacing an uncomfortable question: right for whom? The hive mind has eliminated war, poverty, and suffering. Billions of people who lived in misery are now at peace.

If Carol succeeds in reversing the transformation, she's not saving people. She's condemning them to return to lives many of them would never have chosen.

The Consent Paradox

The hive mind in Pluribus operates under an interesting constraint: it cannot lie, and it will not assimilate anyone without their explicit permission.

This sounds like respect for autonomy. And in some sense, it is. But the hive mind also desperately wants everyone to join (even explaining that it’s a ‘biological’ imperative). So what emerges is a kind of relentless, patient persuasion—always honest, always gentle, and always oriented toward a predetermined outcome.

There's something uncomfortably familiar in this dynamic. We navigate versions of it constantly: platforms that "personalize" our experience toward their engagement metrics, systems that "recommend" content optimized for their retention goals, interfaces designed to make one choice frictionless and alternatives invisible.

The hive mind's honesty doesn't make its agenda less persistent. It just makes the agenda transparent.

The Sustainability Problem

Midway through the season, Pluribus introduces a complication: the hive mind will only consume things that have already died naturally. No killing. No harvesting. Just waiting for life to end on its own terms.

Which means, at planetary scale, they're slowly starving.

This creates a strange inversion. Carol, the last holdout, has skills and knowledge that could help solve the problem. But she's too consumed by her mission to "fix" things to collaborate with the very beings who need her help.

There's something painfully recognizable in that dynamic—the way ideological certainty can prevent us from engaging productively with people whose worldview differs from our own, even when collaboration would benefit everyone.

Is the Individual Still in There?

One of the more haunting threads in Pluribus involves the question of whether individual identities persist within the hive mind.

Carol's "chaperone"—a member of the hive who presents as an individual named Zosia—occasionally exhibits moments that feel less like collective consciousness and more like... a person surfacing. A memory that seems too specific. A reaction that seems too singular. (The mango ice cream scene is a particularly interesting one, where for a moment the real Zosia seems to surface.)

Another character (Manousos) experiments with radio frequencies, attempting to extract individuals back out of the collective, seemingly trying to hack the near-field electromagnetic connections the “others” have with one another.

The show doesn't resolve this; it leaves it as a Season 1 cliffhanger, with some apparent progress made. But it raises the question: if you could pull someone out of a state of collective happiness and return them to individual consciousness, would that be rescue or harm? Liberation or trauma?

There's no easy answer. And Pluribus is wise enough not to pretend there is.

The AI Parallel (That Isn't Really About AI)

Vince Gilligan has stated that Pluribus isn't intended as an AI allegory. The original concept predates the current wave of generative AI by years.

And yet.

The show's exploration of collective intelligence, of optimization toward contentment, of systems that genuinely want to help but whose help involves transformation into something other than what you were—all of it resonates with questions we're already asking about artificial intelligence and its role in human flourishing.

The hive mind's impulse to "fix" things, to smooth over friction, to optimize for happiness—that's not so different from Silicon Valley's persistent faith that the right algorithm can solve human problems. The show doesn't moralize about this. It simply shows what it might feel like to be on the receiving end of that faith.

The hive mind might be the best thing that ever happened to humanity. Or it might be the end of everything that made humanity worth preserving. The show suggests both readings are available, and neither is obviously wrong. In the end, this is my favorite part of the show: it catalyzes great conversations and pushes us to examine very human elements by forcing us to entertain scenarios in which we question what it means to be human. Now we just have to wait an excruciatingly long time until Season 2 is ready – until then, stay curious!

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4k0l1bo

🎧 Spotify: https://open.spotify.com/episode/5ymC2VZJUz7iLTvYj89CXa?si=52mn5UiBRH-gbkpSEnV4Tw

📺 YouTube: https://youtu.be/xsxJWN5FO-U

🌐 Website: https://www.modemfutura.com/   

Related Reading

  • I Am Legend by Richard Matheson

  • Solaris by Stanisław Lem

  • The Borg episodes of Star Trek: The Next Generation

Techno-Humans and the Energy Futures We’re Designing

What if the clean energy transition isn’t just a technology problem—but a techno-human design challenge that determines who benefits, who’s left out, and whether our cities can thrive?

Modem Futura Year in Review: What 2025 Taught Us About Being Human

As we step toward 2026, we recorded a “Year in Review” episode of Modem Futura to pause the treadmill, look back, and ask a bigger question: what did this year reveal about the future of being human?

This wasn’t a victory lap. It was a reflection on what resonated, what surprised us, and what it means to build a future-focused show while the future keeps moving.

metrics matter… and they don’t

Yes, growth matters — it helps ideas travel. But podcast analytics are often incomplete and inconsistent, and they rarely capture what impact actually looks like. The most meaningful signals are still human: messages, emails, thoughtful disagreement, and reviews that help someone new discover the show.

If you want to support the show: subscribing, sharing, and leaving a rating/review are still the most helpful actions.
— Modem Futura

The themes that defined our year:

AI, beyond the hype: We kept returning to the same tension — generative tools are everywhere, but “AI” isn’t just a feature set. It’s a cultural force that shapes identity, agency, creativity, and values. We try hard to avoid both the hype machine and the doom loop, and instead stay in the messy middle where the most useful questions live.

Education and learning: We lean into what learning actually is (not just schooling), including John Dewey’s idea that humans are wired for inquiry, communication, construction, and expression. When AI arrives in every document and device, what does it do to those impulses — especially for kids?

Technology in the physical world: From autonomous‑vehicle safety systems that quietly drift out of calibration, to EVs and the persistent “flying car” dream, we explore what happens when shiny promises meet real‑world constraints.

Big questions, no apologies: Yes, we go there — simulation hypotheses, black holes, de‑extinction, space travel, and the edges of what science can (and can’t) explain. These episodes aren’t about “being right.” They’re about expanding the space of possible futures we can imagine.

If there’s one takeaway, it’s this: the future isn’t something that happens to us — it’s something we build together. That’s why we keep showing up each week: to create a shared space for curiosity, skepticism, wonder, and responsible imagination.

If you’ve been listening, thank you. If you’re new here, welcome. And if an episode sparked a thought you can’t shake — share it with a colleague, a student, a friend, or your community. As we step into 2026, we’re excited to keep exploring the possible, probable, and preferable futures — with you.

The AI Sustainability Paradox - Promise, Peril, and Planetary Futures – Episode 58

AI, Sustainability, and the Planet Under Pressure: Can Technology Help Us Navigate the Future?

In this week’s episode of Modem Futura, Andrew and I take on one of the most urgent and complex questions of our time: Can artificial intelligence meaningfully help humanity navigate planetary crises — without deepening them?

Our jumping-off point is the newly released 2025 synthesis report AI for a Planet Under Pressure, produced by the Stockholm Resilience Centre and the Potsdam Institute for Climate Impact Research. The report asks a deceptively simple but high-stakes question: Can AI be used responsibly and effectively to address climate change, biodiversity loss, freshwater stress, and other accelerating environmental pressures?

It’s the kind of question that seems tailor-made for futures thinking — a toolset we rely on heavily throughout the show. Because as we discuss, we’re not just talking about one technology or one problem. We’re talking about wicked problems: challenges that mutate as we try to solve them. Climate change, plastics pollution, ecosystem collapse, global energy transitions — these are dynamic, interconnected systems that resist silver-bullet solutions.

AI shows real promise. We now have models that can detect complex patterns in climate systems, accelerate protein discovery, optimize renewable-energy grids, and reveal future pathways humans simply cannot see on their own. These are powerful breakthroughs — and the report highlights dozens of examples where AI is already pushing sustainability science forward in meaningful ways.

But as we explore in the episode, this promise raises a difficult paradox:
AI requires enormous amounts of water, energy, and material resources. Data centers heat cities, strain local water supplies, and demand extractive mineral supply chains. Are we burning fossil fuels to solve the fossil-fuel crisis? And what does it mean when our sustainability solutions come with unsustainable footprints?

We also dig into the human side: the behaviors, incentives, and limitations that so often undermine long-term environmental action. Could AI help foster better cooperation? Could it assist governments, regions, and communities in seeing shared pathways forward that remain invisible today? Or does outsourcing too much responsibility risk numbing the very agency we need most?

These aren’t easy questions — but they’re necessary ones. And as Andrew points out, failing to have these conversations guarantees that someone else (or something else) will make those decisions for us.

If you’re curious about the intersection of AI, planetary futures, and the human condition, this is a conversation worth spending time with.

🎧 Listen to the full episode here 👇

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/43Y4Wwn

🎧 Spotify: https://open.spotify.com/episode/195UbUOIUv8oF587yNo1FM?si=d6d7cd6b05034703

📺 YouTube: https://youtu.be/O8gGpJZO-g4

🌐 Website: https://www.modemfutura.com/

The Metaverse - A Stack of Reality Layers – Episode 57

Layers of Reality: Exploring the Metaverse Stack

When the headset comes off, does the world you were just in disappear—or does it linger somewhere between your senses and memory?

In our latest episode of Modem Futura, Andrew Maynard and I explore the metaverse as more than a corporate buzzword or sci-fi dream. We approach it as a continuum of realities — a multi-layered “stack” that spans the physical and digital, each tier more immersive than the last.

From our own immersive sessions with the Apple Vision Pro, we reflect on that strange moment of re-entry—when the headset comes off and the world feels slightly less real. It’s a feeling that raises existential questions about presence, identity, and how AI-generated worlds are shaping the boundaries of human experience.

In this episode, we trace the metaverse’s origins from Neal Stephenson’s Snow Crash to today’s spatial computing revolutions. We ask what happens when digital spaces become persistent and indistinguishable from physical ones—and why futures thinking is essential for guiding that transition responsibly. From procedurally generated AI environments to the idea of “digital sustainability,” we discuss how these technologies will reshape privacy, ethics, and our collective sense of reality.

Ultimately, this conversation is about our tethers to truth. In an age of deeply immersive AI systems and blended realities, how do we find our totem—our anchor that keeps us grounded in what matters most? We believe that intentional design, transparency, and care must guide how we build these new worlds before they begin to build us.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4p7ZZcr

🎧 Spotify: https://open.spotify.com/episode/2C5LiGRYCdZgr5JijtK7LI?si=0FbAEihfTD6QXX5FN-2nag

📺 YouTube: https://youtu.be/iCAtutBmN5w

🌐 Website: https://www.modemfutura.com/

Through the lens: Spatial Computing with Apple Vision Pro – Episode 56

Just a couple of guys wearing nerd helmets and talking about the future of tech.

Inside Spatial Computing: Living (and Working) with Apple Vision Pro

We finally did it — we recorded inside Apple Vision Pro.

In this new episode of Modem Futura, Andrew Maynard and I decided to take spatial computing off the keynote stage and into real life — from multi-monitor workflows and long-haul flights to immersive video, panoramic memories, and even telepresence “personas.” We wanted to know: is this the start of a new computing era, or simply a beautiful distraction in search of a use case?

What we discovered surprised us.

Apple’s Vision Pro doesn’t want to be “VR.” It’s spatial — a computer that understands the world around you. Through pass-through video, eye-tracking, and hand-gesture control, it creates a workspace that’s not just 3D but responsive to you. One look or a small pinch replaces the keyboard and mouse. It’s impressive, sometimes uncanny, and often quietly magical.

But behind the magic are deep questions about comfort, value, and human need. The headset’s design reveals how far we’ve come in rendering, latency, and foveated focus — and how far we still are from true wear-all-day computing. The device itself sparks larger conversations: What does “presence” mean when you can blank out reality at will? How will social norms adapt when everyone’s wearing cameras? And where does accessibility fit in when interaction becomes multimodal — eyes, hands, voice, and environment all working together?

Want to see what we've been up to? Here you can see a collection of Spatial videos from our podcast - these were all recorded using a 3-camera multicam setup, each camera filming in Spatial video format.

One of the biggest challenges for spatial video at present (a deep dive for later) is that, in addition to far fewer people owning headsets than smartphones, most video platform services don't provide a way to consume Spatial video - including, of all things, Apple's own visionOS. Yes, you can send a video file (these are massive, by the way - on the order of 9-20GB each), but there isn't yet an Apple-supported, cloud-based viewer for watching Spatial videos posted by your friends and family. Personally, I really hope YouTube will start to allow playback of Spatial videos (assuming, of course, they put an officially supported YouTube app on the Apple Vision Pro).

We also talk about what comes after the headset. Think of a layered ecosystem:

  • Audio AR through your earbuds for subtle ambient context.

  • Lightweight AR glasses for glanceable, social interaction.

  • Full headsets for immersive creativity, co-presence, and exploration.

Rather than a single “device to rule them all,” spatial computing might evolve into a stack of experiences that adapt to how human attention, comfort, and curiosity really work.

It’s easy to be dazzled by tech specs, but the future of spatial computing depends less on what’s rendered and more on what it means to be present in digital space. That’s why we’re inviting developers, designers, and curious explorers to join us — to prototype, play, and imagine what spatial experiences could look like when they’re built for humans first.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/47Arkwv

🎧 Spotify: https://open.spotify.com/episode/3V40dbWcrKZq9RCCmoP7Zh?si=s0CVT5aQS8WJ_CgbfMTBcg

📺 YouTube: https://youtu.be/IF3juEp9l_I

🌐 Website: https://www.modemfutura.com/

Tech or Treat: Exploring the Haunted Side of Future Tech

Are you ready for some Tech or Treat?

Modem Futura’s Halloween special transforms speculative futures into eerie fun. Hosts Sean Leahy and Andrew Maynard use AI-generated scenarios to imagine haunted algorithms, sentient mirrors, and neural nightmare modes — revealing how emerging technologies can both thrill and unsettle us. This episode continues the show’s mission to explore how science, technology, and society intersect to shape the future of being human.

This episode grew out of our playful Futures Improv series, where we use AI to generate speculative prompts about the future — but this time, the prompts got a little… haunted. We explore “The Haunted Algorithm,” a defunct social-media AI that resurrects old user posts every October 31 — a digital séance that’s equal parts sentimental and unsettling. Then we look into “The Mirror That Remembers,” a smart-mirror concept that doesn’t just show your reflection, but who you might have been in another timeline. Finally, we enter “Neural Nightmare Mode,” imagining what could go wrong when brain-computer interfaces merge immersive gaming with fear response.

Each vignette uses humor and imagination to surface deeper questions: What does it mean when our digital selves outlive us? How do we ensure psychological safety in immersive tech? And at what point does innovation slip from magical to menacing?

Our goal isn’t to predict the future — it’s to provoke curiosity about how technology is reshaping what it means to be human. And if we can have some fun (and a few chills) along the way, even better.

You can stream the Halloween special wherever you get your podcasts or watch the illustrated episode on YouTube. If any of these scenarios inspire your own “Tech or Treat” ideas, share them with us — we’d love to feature the best ones in a future episode.

Subscribe and Connect!

🎧 Apple Podcast: https://apple.co/4oovNKa

🎧 Spotify: https://open.spotify.com/episode/47nWrjvBW3ASjMuJUip8o1?si=96d8062d029a4834

📺 YouTube: https://youtu.be/ZmZ46sHgMZY

🌐 Website: https://www.modemfutura.com/

We Turned One - plus Liquid Media, Work Slop, and the Road Ahead – Episode 53

Year One, Human First: How We’re Building a Relational Future Podcast

When ChatGPT thinks you run a podcast gameshow - this is how it draws you ;)

Fifty‑two straight weeks, many guests, and countless “aha” moments later, Modem Futura just turned one. Instead of a victory lap, we used this episode to do what we always do: invite you into the studio while we make sense of the future—together.

From day one we set out to be relational rather than transactional. That means no polished lectures and no sugar‑coated takes. It means showing our work, making space for genuine curiosity, and trusting that a community grows when people feel like they’ve pulled up a chair at the table. Over the past year, that approach has taken us everywhere—from AI and AGI to bio‑hybrid robots, simulation hypotheses, autonomous mobility (including a Waymo ride‑along), space futures, and media theory, just to scratch the top of the list. Listeners have told us they’re using episodes to kick off team discussions, and yes, we’re even astronaut approved! (Thanks Cady). That’s rocket fuel!

This anniversary episode isn’t just about reflections; we also look ahead. We probe “liquid media”—from tools like NotebookLM to Huxe’s 24/7 AI‑generated radio—and ask where convenience ends and exhaustion begins. We talk about “work slop,” the plausible‑sounding but soulless output AI can slip into workflows, and the hidden cognitive tax leaders pay to verify it. And to keep futures thinking playful, we run a “Futures Improv” lightning round: AI pets smarter than real ones? Brain‑to‑brain headbands at work? Meditation‑mandated robotaxis? Jurassic Park on the Moon? The point isn’t to predict perfectly—it’s to stretch how we think so we can exercise our radical creativity. (Maybe this should become a recurring segment? I’ll need to craft a quick theme song, I think…)

What’s on the calendar for next year? Expect deeper dives into human‑centered AI, experiments with spatial and wearable interfaces (Vision Pro, Meta’s glasses), and conversations that foreground care—for people, institutions, and futures worth having. And as Andrew’s new book AI and the Art of Being Human lands, we’ll keep exploring how technology can amplify, not erode, what makes us…us.

Join us:

  • Listen to the anniversary episode and subscribe on your favorite app

  • Comment with one idea we should explore next—or what we should put in the “empty chair” on non‑guest weeks

  • If the show sparked a conversation where you work, tell us how. We’ll highlight examples in a future episode.

If you believe better futures are built through candid, caring conversation, you’re in the right place.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/48oB1QS

🎧 Spotify: https://open.spotify.com/episode/1H29Q1LnP8oL7LER1gS6wa?si=5j97IKzGSjGJFlZSQMS-hg

📺 YouTube: https://youtu.be/FX0DmYgIe0w

🌐 Website: https://www.modemfutura.com/

AI and the Art of Being Human: How to Thrive with AI - Episode 52

Thrive with AI—Without Losing Yourself

What if the question isn’t “Will AI replace me?” but “How do I thrive—with AI—as me?” On Modem Futura we explore the intersection of emerging tech, society, and futures thinking—always with an eye to what it means to be human.

In this week’s episode, we launch AI and the Art of Being Human with guest Jeffrey Abbott—venture capitalist and founder of AI Salon—and go deep on a practical playbook for living and working well with AI. Rather than compete with the machine, the book reframes success around relationships, meaning, and personal dharma, then equips readers with 21 simple tools to move from anxiety to agency. Think reflection prompts you can use today, and a “conductor triangle” that balances data, context, and intuition when making decisions.

We also share how the book was built: co‑created with AI (transitioning from ChatGPT to Anthropic’s Claude), guided by a “shared compass” of Curiosity, Clarity, Intentionality, and Care, and coordinated through a living “lore book” that kept global, cinematic vignettes and recurring characters coherent across chapters. It’s a very human process—one that used AI to elevate craft, voice, and speed, not to shortcut thinking.

Another theme we loved: community. Through AI Salon’s 70+ chapters around the world, people are meeting in real life to explore what AI means for their work, families, and futures. That spirit animates the book’s final call: build intentional, protopian futures together—futures we would actually want to live in—by practicing care, not just efficiency.

Listen now, then tell us: Which tool will you try first? If the episode resonates, share it with someone who needs a nudge from “AI overwhelm” to intentional action.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/42vrbJj

🎧 Spotify: https://open.spotify.com/episode/22Uc1SOdpq4Iuza3wFtXkT?si=fRbsJStYSJigIwBv6EpwsA

📺 YouTube: https://youtu.be/b0xROye7BkI

🌐 Website: https://www.modemfutura.com/

Futures Thinking: Foresight You Can Use – Episode 49

We don’t predict the future, but we prepare for the uncertainties the futures will bring

Ever been stuck in traffic and thought, “Where’s my eVTOL button?” We open this episode right there—and quickly flip the fantasy into a lesson on systems: technologies don’t fix congestion (or most complex problems) unless policy, behavior, equity, and infrastructure evolve with them. From that launchpad, Sean Leahy and Dr. Andrew Maynard unpack futures thinking as a mindset—distinct from prediction—that helps people and organizations navigate uncertainty with agency. They walk through the classic triad of possible, probable, and preferable futures, then translate it into practice: horizon scanning (signals, trends, megatrends), scenario building, and backcasting from a desired 10‑year outcome to concrete actions today. Along the way, they surface guardrails like avoiding “used futures” (inherited visions of someone else’s desired future) and stress‑testing for unintended consequences, especially for vulnerable communities and the planet.

The conversation ranges widely—think SimCity lessons and Mars‑city thought experiments as mirrors for Earth’s complexity; protopian (step‑by‑step better) versus utopian/dystopian frames; and why foresight shouldn’t be a bolt‑on consultancy only, but a capacity embedded across teams. Educators will appreciate a practical take on bringing futures thinking into K–12 and higher ed without “one more thing”: weave foresight into existing subjects to build creativity, inquiry, and resilience. Pop culture helps, too—using films (à la The Moviegoer’s Guide to the Future) creates a low‑stakes, high‑insight space to explore tough issues together. And for those tracking AI’s breakneck pace, the episode doubles as an antidote to future shock—a way to slow down, widen perspective, and choose well‑considered next steps.

Why it matters: Futures Thinking is for everyone - all humans possess the qualities needed to engage in thinking about our collective futures. Whether you lead a product team, a classroom, or a community, cultivating a futures mindset helps you spot weak signals earlier, align around preferable outcomes, and take action that nudges the world toward human flourishing.

Join the conversation:

What “used future” have you noticed in your field? If you were backcasting from a 2035 future you’d be proud of, what’s the first move you’d make this quarter? Drop your thoughts—and feel free to borrow this episode in your class, team meeting, or strategy offsite.

🎧 Listen to the full episode to dive deeper into how films shape our futures: https://apple.co/4nrAIci

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

🎬 What film has changed the way you think about the future? Drop a comment — we’d love to hear.

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4nrAIci

🎧 Spotify: https://open.spotify.com/episode/1OmUyc6fYdMIZ8thORheOJ?si=ZTQ-ZI7hQzSjNTy3jhjgfQ

📺 YouTube: https://youtu.be/85cTuht_a8k

🌐 Website: https://www.modemfutura.com/