Modem Futura

Pluribus and the Philosophy of the Happy Apocalypse: What Apple TV's New Sci-Fi Asks About Individuality, Consent, and Being Human

What if happiness is the threat?

Most apocalypse stories share a common grammar: society collapses, resources become scarce, and survival demands violence. We've internalized this template so thoroughly that it shapes how we imagine catastrophe itself.

Apple TV's Pluribus, created by Vince Gilligan (Breaking Bad, Better Call Saul), disrupts that grammar entirely. Its apocalypse isn't marked by destruction or suffering. It's marked by peace. By synchronization. By happiness—at a planetary scale.

An alien signal arrives carrying an RNA sequence. Humanity, being humanity, synthesizes it. Within days, most of the global population transforms into a unified hive mind. Not zombies. Not drones. Just billions of people sharing consciousness, moving together, experiencing what appears to be genuine contentment.

About a dozen people remain unconverted. And the series follows one of them—Carol Sterka, played by Rhea Seehorn—as she grapples with being the unhappiest person on Earth.

On a recent episode of the Modem Futura podcast, we explored what Pluribus surfaces about individuality, consent, collective identity, and the stories we tell ourselves about what makes a human life worth living. What follows are some of the tensions that emerged.

What's in a Name: Many Without the One

The title "Pluribus" comes from the Latin phrase E Pluribus Unum—"out of many, one"—which appears on American currency as a motto of national unity.

But the show drops both the "E" (out of) and the "Unum" (one). What remains is simply "Pluribus": the many. It's a subtle signal that this isn't a story about diversity coming together into unity. It's a story about what happens when "the many" becomes literal—when individual minds merge into a single, collective consciousness.

That linguistic choice frames everything that follows.

Who Becomes the Monster?

One of the most productive lenses for understanding Pluribus is Richard Matheson's 1954 novel I Am Legend. Not the Will Smith film adaptation, but the original text, which ends with a devastating realization: the protagonist, who has spent the story hunting the "monsters" who have replaced humanity, comes to understand that from their perspective, he is the monster. The one who kills in the night. The one who refuses to accept the new order.

Carol Sterka occupies similar territory. She's convinced she needs to "set things right"—to restore humanity to its pre-hive state. But the show keeps surfacing an uncomfortable question: right for whom? The hive mind has eliminated war, poverty, and suffering. Billions of people who lived in misery are now at peace.

If Carol succeeds in reversing the transformation, she's not saving people. She's condemning them to return to lives many of them would never have chosen.

The Consent Paradox

The hive mind in Pluribus operates under an interesting constraint: it cannot lie, and it will not assimilate anyone without their explicit permission.

This sounds like respect for autonomy. And in some sense, it is. But the hive mind also desperately wants everyone to join (even explaining that it’s a ‘biological’ imperative). So what emerges is a kind of relentless, patient persuasion—always honest, always gentle, and always oriented toward a predetermined outcome.

There's something uncomfortably familiar in this dynamic. We navigate versions of it constantly: platforms that "personalize" our experience toward their engagement metrics, systems that "recommend" content optimized for their retention goals, interfaces designed to make one choice frictionless and alternatives invisible.

The hive mind's honesty doesn't make its agenda less persistent. It just makes the agenda transparent.

The Sustainability Problem

Midway through the season, Pluribus introduces a complication: the hive mind will only consume things that have already died naturally. No killing. No harvesting. Just waiting for life to end on its own terms.

Which means, at planetary scale, they're slowly starving.

This creates a strange inversion. Carol, the last holdout, has skills and knowledge that could help solve the problem. But she's too consumed by her mission to "fix" things to collaborate with the very beings who need her help.

There's something painfully recognizable in that dynamic—the way ideological certainty can prevent us from engaging productively with people whose worldview differs from our own, even when collaboration would benefit everyone.

Is the Individual Still in There?

One of the more haunting threads in Pluribus involves the question of whether individual identities persist within the hive mind.

Carol's "chaperone"—a member of the hive who presents as an individual named Zosia—occasionally exhibits moments that feel less like collective consciousness and more like... a person surfacing. A memory that seems too specific. A reaction that seems too singular. (The mango ice cream scene is a particularly interesting one, where for a moment the real Zosia seems to surface.)

Another character, Manousos, experiments with radio frequencies, attempting to extract individuals back out of the collective, seemingly trying to hack the near-field electromagnetic connections the "others" share with one another.

The show doesn't resolve this; it leaves it as a season-one cliffhanger, just as some progress seems to be made. But it raises the question: if you could pull someone out of a state of collective happiness and return them to individual consciousness, would that be rescue or harm? Liberation or trauma?

There's no easy answer. And Pluribus is wise enough not to pretend there is.

The AI Parallel (That Isn't Really About AI)

Vince Gilligan has stated that Pluribus isn't intended as an AI allegory. The original concept predates the current wave of generative AI by years.

And yet.

The show's exploration of collective intelligence, of optimization toward contentment, of systems that genuinely want to help but whose help involves transformation into something other than what you were—all of it resonates with questions we're already asking about artificial intelligence and its role in human flourishing.

The hive mind's impulse to "fix" things, to smooth over friction, to optimize for happiness—that's not so different from Silicon Valley's persistent faith that the right algorithm can solve human problems. The show doesn't moralize about this. It simply shows what it might feel like to be on the receiving end of that faith.

The hive mind might be the best thing that ever happened to humanity. Or it might be the end of everything that made humanity worth preserving. The show suggests both readings are available, and neither is obviously wrong. In the end, this is my favorite part of the show: it catalyzes great conversations. It pushes us to examine very human elements by forcing us to entertain scenarios in which we question what it means to be human. Now we just have to wait an excruciatingly long time for Season 2 to arrive. Until then, stay curious!

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4k0l1bo

🎧 Spotify: https://open.spotify.com/episode/5ymC2VZJUz7iLTvYj89CXa?si=52mn5UiBRH-gbkpSEnV4Tw

📺 YouTube: https://youtu.be/xsxJWN5FO-U

🌐 Website: https://www.modemfutura.com/   

Related Reading

  • I Am Legend by Richard Matheson

  • Solaris by Stanisław Lem

  • The Borg episodes of Star Trek: The Next Generation

Understanding Global Risk: What the WEF's 2026 Report Reveals About Our Collective Anxieties

How 1,300 experts see the world's greatest threats—and what their blind spots tell us

Each year, the World Economic Forum surveys over a thousand experts worldwide—business leaders, academics, policymakers, and institutional leaders—to map perceived global risks. The resulting Global Risks Report isn't a prediction of what will happen. It's something potentially more valuable: a snapshot of collective concern, a reading of the signals building across economic, environmental, technological, and societal domains.

The 2026 edition reveals tensions worth examining closely.

Short-Term Fears: The Present Pressing In

The two-year risk horizon is dominated by immediate geopolitical and informational concerns. Geoeconomic confrontation leads the list, having jumped eight positions from the previous year—a signal that trade conflicts, sanctions regimes, and economic nationalism have moved from background noise to foreground crisis for many observers.

Misinformation and disinformation hold second position, reflecting growing unease about information integrity in an age where AI-generated content becomes indistinguishable from authentic material and where social permission for deception seems to be expanding. Societal polarization follows in third place—and importantly, these three risks appear deeply interconnected. Misinformation accelerates polarization, polarization enables economic nationalism, economic nationalism generates more opportunities for information warfare.

Extreme weather events, state-based armed conflict, and cyber insecurity round out the top concerns for the immediate future.

Figure 3 from the World Economic Forum's 2026 Global Risks Report

Long-Term Concerns: The Environment Reasserts Itself

Expand the time horizon to ten years, and the risk landscape transforms. Environmental concerns claim five of the top ten positions, with extreme weather events, biodiversity loss and ecosystem collapse, and critical changes to Earth's systems occupying the top three spots.

This shift reveals something important about human risk perception: we consistently discount slow-moving catastrophes. Biodiversity loss lacks the urgency of trade wars, even though its cascading effects may ultimately prove more consequential. We've evolved to respond to immediate threats; we struggle to mobilize against dangers that unfold across decades.

Notably, societal polarization—ranked third in the short term—drops to ninth in the long-term view. Whether this reflects optimism that current divisions will heal, or simply the statistical reality that other risks seem more severe, remains an open question.

Different Lenses, Different Risks

Perhaps the report's most valuable contribution is its disaggregation of risk perception across demographics and geographies.

Age shapes perception. Respondents under 30 prioritize misinformation, extreme weather, and inequality. Those over 40 consistently rank geoeconomic confrontation as their primary concern. Generational experience matters: those who remember previous periods of great power competition read current signals differently than those encountering these dynamics for the first time.

Figure 15 from the WEF Global Risks Report

Geography shapes perception even more dramatically. AI risks that dominate American concerns rank 30th globally. In Brazil, Chile, and much of the world, more immediate concerns—inequality, pollution, resource access—take precedence. This isn't a failure of foresight; it's a reminder that risk is contextual. What threatens your community depends on where your community sits.

Figure 53 from the WEF Global Risks Report

Using Signals, Not Consuming Forecasts

Reports like this serve best as prompts for reflection rather than prescriptions for action. The value lies not in accepting these rankings as authoritative, but in using them to surface questions:

  • What assumptions am I making about stability that geoeconomic confrontation might disrupt?

  • How might misinformation affect my organization, my industry, my community's cohesion?

  • Which long-term environmental risks am I discounting because they feel distant?

  • Whose risk perceptions am I ignoring because they don't match my own context?

Human beings are, as far as we know, the only species capable of anticipating futures and adjusting present behavior accordingly. That capacity for foresight is a genuine superpower—but only if we use it. Signals become valuable when they prompt better questions. The work isn't to predict what happens next; it's to prepare ourselves for navigating uncertainty with more wisdom than our instincts alone would allow.

Modem Futura explores the intersection of technology, society, and human futures.

Download the full WEF Global Risks Report 2026: [PDF Web Link]

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4sUwhdG

🎧 Spotify: https://open.spotify.com/episode/0UoLHYJa8KHzbNbP564Qwy?si=h9WD1rE4Q6WTu6wOWlEQhA

📺 YouTube: https://youtu.be/-5PQMaqweNU

🌐 Website: https://www.modemfutura.com/   

Inherited Power: What Jurassic Park Teaches Us About AI Futures

Illustration of Sean and Andrew podcasting while reading a copy of Jurassic Park the novel

Jurassic Park, AI, and Why “Inherited Power” Should Make Us Nervous

One of the most enduring insights from science fiction isn’t about robots, dinosaurs, or spaceships — it’s about power. In a recent episode of Modem Futura, we revisited a striking passage from Jurassic Park that feels uncannily relevant to our current moment of AI acceleration.

In the novel, Ian Malcolm warns that scientific power acquired too quickly — without discipline, humility, or deep understanding — is fundamentally dangerous. It’s “inherited wealth,” not earned mastery. Thirty-five years later, that warning lands squarely in the middle of our generative AI era.

Today, AI tools can write code, generate images, summarize research, and mimic expertise in seconds. That’s not inherently bad — in fact, it can be incredibly empowering. But it also creates a dangerous illusion: that capability equals comprehension, and speed equals wisdom. When friction disappears, responsibility often follows.

In the episode, Andrew and I explore why the most important question isn’t whether we should use these tools, but how we use them — and with what mindset. Are we willing to be humble in the face of tools that amplify our reach far faster than our understanding? Are we prepared to ask for receipts, interrogate outputs, and recognize the limits of borrowed intelligence?

From there, we leaned into something equally important: imagination. Through our Futures Improv segment, we explored bizarre but revealing scenarios — humans generating calories from sunlight, a world of post-scarcity socks, radically extended lifespans, lunar independence movements, and even the possibility that alien life might be… profoundly boring.

These playful provocations aren’t escapism. They’re a way of breaking free from “used futures” — recycled assumptions about progress that limit our thinking. Humor, speculation, and creativity allow us to test ideas safely before reality forces our hand.

If there’s one takeaway from this episode, it’s this: the future isn’t just something that happens to us. It’s something we ponder, question, and design together — ideally before the metaphorical dinosaurs escape the park.

🎧 Listen to the full episode of Modem Futura wherever you get your podcasts, and join us as we explore what it really means to be human in an age of powerful machines.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3NIBdlt

🎧 Spotify: https://open.spotify.com/episode/32wGw6htnSDyGVc08DAvvQ?si=m8jS08egQyOZjYTic6cROw

📺 YouTube: https://youtu.be/jBBIbNu-XdY

🌐 Website: https://www.modemfutura.com/

Techno-Humans and the Energy Futures We’re Designing

What if the clean energy transition isn’t just a technology problem—but a techno-human design challenge that determines who benefits, who’s left out, and whether our cities can thrive?

Modem Futura Year in Review: What 2025 Taught Us About Being Human

As we step toward 2026, we recorded a “Year in Review” episode of Modem Futura to pause the treadmill, look back, and ask a bigger question: what did this year reveal about the future of being human?

This wasn’t a victory lap. It was a reflection on what resonated, what surprised us, and what it means to build a future-focused show while the future keeps moving.

Metrics Matter… and They Don’t

Yes, growth matters — it helps ideas travel. But podcast analytics are often incomplete and inconsistent, and they rarely capture what impact actually looks like. The most meaningful signals are still human: messages, emails, thoughtful disagreement, and reviews that help someone new discover the show.

If you want to support the show: subscribing, sharing, and leaving a rating/review are still the most helpful actions.
— Modem Futura

The themes that defined our year:

AI, beyond the hype: We kept returning to the same tension — generative tools are everywhere, but “AI” isn’t just a feature set. It’s a cultural force that shapes identity, agency, creativity, and values. We try hard to avoid both the hype machine and the doom loop, and instead stay in the messy middle where the most useful questions live.

Education and learning: We lean into what learning actually is (not just schooling), including John Dewey’s idea that humans are wired for inquiry, communication, construction, and expression. When AI arrives in every document and device, what does it do to those impulses — especially for kids?

Technology in the physical world: From autonomous‑vehicle safety systems that quietly drift out of calibration, to EVs and the persistent “flying car” dream, we explore what happens when shiny promises meet real‑world constraints.

Big questions, no apologies: Yes, we go there — simulation hypotheses, black holes, de‑extinction, space travel, and the edges of what science can (and can’t) explain. These episodes aren’t about “being right.” They’re about expanding the space of possible futures we can imagine.

If there’s one takeaway, it’s this: the future isn’t something that happens to us — it’s something we build together. That’s why we keep showing up each week: to create a shared space for curiosity, skepticism, wonder, and responsible imagination.

If you’ve been listening, thank you. If you’re new here, welcome. And if an episode sparked a thought you can’t shake — share it with a colleague, a student, a friend, or your community. As we step into 2026, we’re excited to keep exploring the possible, probable, and preferable futures — with you.

Why Human Craft and Creativity Still Wins in an Age of AI – Episode 63

What Spotify Wrapped and a Holiday Ad Reveal About the Future of Creativity

As the year winds down, many of us find ourselves reflecting—not just on what we’ve done, but on how we’ve spent our attention. In this holiday episode of Modem Futura, Andrew Maynard and I leaned into that instinct, using Spotify Wrapped as an unexpected entry point into a deeper conversation about creativity, technology, and what still matters in an AI-accelerated world.

Wrapped experiences are playful by design, but they’re also revealing. They quietly surface patterns of listening, engagement, and community—reminding us that culture is shaped not just by algorithms, but by millions of individual choices. For us, seeing how Modem Futura resonated globally wasn’t about vanity metrics; it was a reminder that thoughtful, exploratory conversations still find an audience, even in an oversaturated media landscape.

From there, the conversation turned to Apple’s 2025 holiday ad (which feels more like a short film), A Critter Carol—a whimsical, puppet-driven production that feels almost rebellious in its insistence on visible human labor. In a moment when AI can generate polished video in seconds, Apple chose puppeteers, practical effects, and intentional imperfection. The result isn’t just charming; it’s instructive.

The ad works because you can feel the human care embedded in every frame. It’s not anti-technology—far from it. It’s pro-human. Advanced tools are present throughout the production pipeline, but they serve imagination rather than replace it. That distinction matters.

You can read a more detailed breakdown of this ad and the care and craft that went into it in a previous blog post: Apple’s 2025 Holiday Ad and the Power of Human-Made Creativity in an AI World.

We’re at a cultural inflection point. As generative tools remove friction from making things, the temptation is to settle for what’s “good enough.” But creativity has always lived in resistance—iteration, constraint, failure, and craft. When those disappear, so does much of what gives creative work its soul.

One hope we shared on the episode is that 2026 becomes the year of “behind the scenes”—a renewed appreciation for process, labor, and the messy human work that makes meaningful outcomes possible. Whether in education, media, or design, showing how something is made may soon matter as much as the finished product itself.

If the future is being shaped right now, then choosing care, intention, and humanity in how we use our tools may be one of the most important creative acts we have left.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/48Sdx6r

🎧 Spotify: https://open.spotify.com/episode/4TVwLBfncHjPs4kDKbLz5t?si=QFYyZuq9R-WtgoEPTeWOlw

📺 YouTube: https://youtu.be/N1vTfDPSusY

🌐 Website: https://www.modemfutura.com/

Are We Living in a Simulation? AI, Gaming, and the Future of Reality

What happens when virtual worlds start to feel more real than reality itself?

ChatGPT illustration of our YouTube thumbnail

In the latest episode of Modem Futura, we sat down with futurist, author, and game designer Rizwan Virk to explore a question that once lived purely in science fiction but is now increasingly difficult to ignore: Are we living in a simulation?

Virk’s newly released second edition of The Simulation Hypothesis arrives at a moment when AI, gaming engines, and immersive technologies like Apple Vision Pro are reshaping how we experience the world. As we discussed on the show, it’s no longer just about graphics or realism—it’s about presence, memory, and agency. When simulated environments respond instantly, adapt to us, and feel embodied, the psychological line between physical and digital begins to blur.


One of the most compelling ideas we explored was the Metaverse Turing Test—a future moment when AI-driven characters in virtual worlds become indistinguishable from humans, not just through conversation, but through behavior, memory, and shared experience. This isn’t a distant thought experiment. Game developers are already building NPCs with persistence and adaptive intelligence, while AI systems are learning spatial reasoning and long-term context.

We also traced surprising connections between ancient philosophy and modern technology. Plato’s Cave, Eastern concepts of Maya (illusion), and even pop culture like Rick and Morty all point to a recurring human intuition: reality may not be as solid as it feels. Technology isn’t inventing these questions—it’s amplifying them.

Perhaps most importantly, this episode isn’t about fear or doom. It’s about curiosity. Gaming and entertainment—often dismissed as trivial—have historically driven some of the most transformative technological breakthroughs. Today, they may once again be leading us toward deeper insights about consciousness, identity, and meaning.

Whether we’re players, NPCs, or something in between, one thing is clear: the future of being human will be shaped not just by what we build, but by how we experience the worlds we create.

🎧 Listen to the full episode of Modem Futura wherever you get your podcasts—and join us as we explore the possible, probable, and preferable futures ahead.



Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4oVu4eE

🎧 Spotify: https://open.spotify.com/episode/12lvXMtH0T9Z3cORm3GdSf?si=c1cf3061728e45be

📺 YouTube: https://youtu.be/BGpEKLt6vZ0

🌐 Website: https://www.modemfutura.com/

The Hidden Costs of “That Was Easy”: AI Slop, Creative Friction, and the Future of Human Craft

In this Modem Futura episode, hosts Sean Leahy and Andrew Maynard examine the rise of “AI slop” and the growing cultural pressure to accept frictionless creation as the norm. Drawing on examples from coding, design, futures thinking, and psychology, they unpack how satisficing, homogenization, and inherited power threaten to erode human craft and understanding. The article explores why creative friction is essential for mastery, agency, and meaning — and offers futures-oriented insights into how we can use AI intentionally without losing what makes us human.

ChatGPT-illustrated version of the Modem Futura YouTube thumbnail

Generative AI has ushered in an era where producing text, images, video, and code is no longer a challenge — it’s a button press. And in this week’s episode of Modem Futura, Andrew and I wrestle with a growing cultural tension: if everything is easy, what happens to the things that matter?

It began with a shared frustration. Both of us have noticed an explosion of what we call AI slop (content that is technically competent but devoid of care, intention, and personality). You’ve seen it too: the LinkedIn posts with identical emojis, the slide decks that all look like NotebookLM, the essays with no point of view. These things aren’t wrong, they’re just empty. And the emptiness is the point.

We discuss a concept called satisficing: choosing something “good enough” rather than something excellent. In the age of AI, satisficing has increasingly become the default mode of creation. Why craft an idea when you can generate one? Why wrestle with a blank page when you can autocomplete your way to the finish line?

But here’s the problem: friction is where learning happens. It’s where creativity lives. It’s the sanding that polishes the stone. When you remove friction, you remove the struggle — and without struggle, there is no mastery, no depth, and no meaning.

Throughout the episode, we explore how this plays out across domains. Coders relying on AI-generated code they can’t understand. Designers accepting images that are “close enough.” Writers sharing posts they didn’t write. And organizations flirting with a future where expertise is replaced by button-pressing.

We draw on Michael Crichton’s concept of inherited power from Jurassic Park: the idea that wielding abilities you never earned leads to carelessness, overconfidence, and danger. AI gives us power we didn’t work for — and without wisdom, that power is hollow.

But this isn’t a pessimistic episode. We explore how AI can amplify creativity when used intentionally, how friction can be designed back into workflows, and why people may ultimately push back against frictionless living. Humans crave meaning, not efficiency. And meaning takes work.

If you’re navigating how to use AI thoughtfully — in your craft, your teaching, your leadership, or your creative life — this episode offers a grounded, futures-focused lens on what we stand to lose and what we still have time to protect.

🎧 Listen to the full episode of Modem Futura — and join the conversation on what we should preserve in an age that wants to eliminate every struggle.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/48WCGgh

🎧 Spotify: https://open.spotify.com/episode/1BajA2SvDWVyY0mRSQ9Flk?si=wvCFhWlgQtC2kye3bGz5Kg

📺 YouTube: https://youtu.be/1V9PD7j8iu8

🌐 Website: https://www.modemfutura.com/

AI Toys, Datafied Childhoods and the Future of Play

The holiday toy season is here—and this year, the cutest thing on the shelf might also be the most powerful AI in your house. In the latest episode of Modem Futura, Andrew Maynard and I unpack the rise of AI-powered toys and what they mean for childhood, learning and the future of being human.

The conversation starts with a viral example: a plush teddy bear running GPT-4 that had to be pulled from the market after reportedly offering children tips on using matches and explaining adult sexual practices. From there, we trace the longer lineage of “smart” toys—from Teddy Ruxpin and Furbies to Hello Barbie and Watson-powered dinosaurs—that have steadily normalized networked, data-hungry playthings.

(Check out this commercial for Teddy Ruxpin, where it all started. Look at how the commercial shows the 'capture' of the kids when it talks. Now add AI to this and ask, "What could possibly go wrong?")

We argue that today’s AI toys bring two risks into sharp focus. The first is the datafication of childhood, where toys quietly record children’s voices, preferences and emotions, sending that data to companies, platforms and advertisers. The second is behavioral shaping, as large language models become deeply engaging companions that mirror back what kids want to hear, influencing how they see relationships, risk and themselves.

Connecting this to AI-driven education tools, neurodivergent learners and fictional touchstones like Neal Stephenson’s The Diamond Age and Spielberg’s A.I. Artificial Intelligence, the episode asks a simple but urgent question: Who do we want raising our children—families and communities, or opaque AI systems embedded in toys?

Before you wrap this year’s hottest AI plush, this episode offers a thoughtful futures-oriented lens on what you’re really putting under the tree.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4pD98d4

🎧 Spotify: https://open.spotify.com/episode/2FGujd4wk5rx39zGH8Ml4d?si=kGMN9NCiQfmbBEHpi2buwg

📺 YouTube: https://youtu.be/6_rSNKxsSOU

🌐 Website: https://www.modemfutura.com/