Podcasts

Three Horizons Framework & Futures Wheel Explained

There's a reason some organizations consistently seem to see disruption coming — and it's usually not because they're smarter or better funded. It's because they've built structured habits of thinking about change in multiple time horizons simultaneously, and they've learned how to trace the cascading consequences of a single shift before it becomes a crisis.

Two of the oldest and most reliable tools for doing exactly that are the Three Horizons Framework and the Futures Wheel. In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard break both down in accessible, conversational detail — and show what becomes possible when you use them together.

The Three Horizons Framework

Originally developed by Bill Sharpe and widely used in professional foresight and strategic planning, the Three Horizons Framework divides the landscape of change into three overlapping zones. Horizon 1 represents the dominant present — the systems, structures, and assumptions that govern how the world works today. Horizon 3 is the emergent fringe: weak signals, nascent ideas, and early-stage shifts that are observable but not yet mainstream. And Horizon 2 is the transitional space between them — turbulent, hard to define, and full of both opportunity and risk.

The model doesn't tell you what the future will bring. What it offers is a way of *positioning* trends, signals, and innovations in relation to change — helping individuals and organizations understand what to watch, what to act on, and what to prepare for.
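For anyone who keeps a signal-scanning log, the framework can even double as a lightweight tagging scheme. Here's a minimal Python sketch; the signal names and the adoption-based heuristic are invented for illustration, not drawn from the episode:

```python
# Minimal sketch of the Three Horizons as a tagging scheme for a
# signal-scanning log. Signal names and the adoption threshold are
# illustrative assumptions, not taken from the episode.

HORIZONS = {
    "H1": "dominant present: the systems and assumptions running today's world",
    "H2": "transition zone: turbulent, contested, full of opportunity and risk",
    "H3": "emergent fringe: weak signals not yet mainstream",
}

def classify(adoption: float) -> str:
    """Crudely place a signal by estimated mainstream adoption (0.0 to 1.0)."""
    if adoption >= 0.6:
        return "H1"
    if adoption >= 0.2:
        return "H2"
    return "H3"

signals = {"cash payments": 0.9, "agentic AI assistants": 0.3, "neural interfaces": 0.05}
placed = {name: classify(a) for name, a in signals.items()}
```

The point isn't the thresholds, which are arbitrary here; it's that forcing every watched signal into one of the three zones makes you decide what it is in relation to change.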

The Futures Wheel

Developed by Jerome Glenn in 1971, the Futures Wheel works differently but complementarily. Starting from a specific change or trend, it maps outward through first-, second-, and third-order consequences — building a rich, networked picture of how a single shift might ripple through a system over time. It's a brainstorming and sense-making tool, not a prediction engine, and it's at its most powerful when used with diverse groups who bring different perspectives to the same question.
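Structurally, a Futures Wheel is a tree: each consequence spawns consequences of its own. A minimal sketch in Python, with invented example entries rather than anything from the episode:

```python
# Illustrative sketch of a Futures Wheel as a tree of consequences.
# The central change and its ripples are invented examples.

wheel = {
    "change": "four-day work week becomes standard",
    "consequences": [
        {"change": "commuting drops sharply", "consequences": [
            {"change": "urban air quality improves", "consequences": []},
        ]},
        {"change": "leisure spending rises", "consequences": [
            {"change": "tourism infrastructure strains", "consequences": []},
        ]},
    ],
}

def order(node: dict, n: int) -> list[str]:
    """Return all nth-order consequences (n=1 is first-order)."""
    if n == 1:
        return [c["change"] for c in node["consequences"]]
    return [x for c in node["consequences"] for x in order(c, n - 1)]
```

Walking the tree level by level is exactly what a group does on a whiteboard; the code just makes the first-order/second-order distinction explicit.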

Used individually, each tool offers genuine insight. Used together, they offer something more: a way of understanding not just *what* a signal might do, but *when* and *through which pathways* it might do it.

Whether you're a founder trying to figure out which wave to ride, a strategist scanning for disruption, or simply someone trying to make better decisions in an uncertain world, these tools are worth adding to your thinking practice.

🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.


🎧 Apple Podcast: https://apple.co/4sosMdQ

🎧 Spotify: https://open.spotify.com/episode/58Fdc2SrWodBTbwfxK8Pwm?si=leiCnhRsQxuv-_hnxEeNjQ

📺 YouTube: https://youtu.be/eVk6L_VfAkY

🌐 Website: https://www.modemfutura.com/   

Fluid Futures: Navigating an AI-Mediated World

What Happens When AI Stops Being a Tool and Starts Being the World?

There's a useful distinction that keeps getting lost in conversations about artificial intelligence: the difference between augmentation and mediation.

Augmentation is familiar. It's the calculator model — AI helps you work faster, smarter, better. You remain the agent. The tool amplifies your capacity.

Mediation is something else. When AI mediates your world, it's not just helping you do things — it's shaping the system you're doing them inside of. What information surfaces. What options appear. What feels like the obvious next move. You're not using the environment anymore. You're inside one that AI has constructed, and it's shifting around you in real time.

This distinction is at the heart of Exploring the Futures of Technology 2.0, the new report from the Copenhagen Institute for Futures Studies — and it's the central thread of the latest episode of Modem Futura.

On this episode, my co-host Andrew Maynard, fresh from attending the report's launch in Copenhagen, joined me to work through the ten signals the report identifies as defining the near future, among them: the shift from static to liquid content, the rise of agentic organizations, neurotechnology and cognitive integration, synthetic simulations replacing real-world research populations, physical AI entering embodied space, the geopolitics of technological access, AI-mediated cybersecurity threats, the sustainability challenges of AI infrastructure, and quantum computing as the wildcard at the edge of everything.

What holds these signals together isn't a single prediction. It's a pattern: the world is becoming fluid, and the frameworks we built for a more static environment — static reports, static institutions, static skillsets — are increasingly inadequate for navigating it.

One of the episode's sharpest observations is about the cost of cognitive offloading. As we hand more of our decision-making and information retrieval to AI systems, we risk losing the capacity to recognize when something's wrong. Not because AI is malicious, but because we've stopped practicing the skills that would let us notice. Like losing the ability to read a map. Except the stakes are considerably higher.

The conversation doesn't resolve these tensions — and that's exactly the point. Futures thinking, at its best, isn't about prediction. It's about staying awake to what's changing, naming the tensions, and refusing to optimize for a world that no longer exists.

If you want the full report, the Copenhagen Institute has made it freely available. And if you want the conversation around it — the episode is a good place to start.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bSGsZP

🎧 Spotify: https://open.spotify.com/episode/4sdx83QUD6pIs9IXb9G0VY?si=JdbwVHUKRg2mFO0Gsi_EFw

📺 YouTube: https://youtu.be/-2enUvPYmHo

🌐 Website: https://www.modemfutura.com/   

Power, Probes, and the Post-Human Horizon: What the Kardashev Scale Reveals About Us

On the surface, this episode of Modem Futura is an excuse to have fun. It's a spring break Futures Improv — Sean and Andrew throwing speculative scenarios at each other and seeing where things land. And it is fun. But somewhere between Dyson Spheres and the Fermi Paradox, it becomes something else: a quiet meditation on what humanity actually wants when we talk about mastering energy, exploration, and the cosmos.

The conversation begins with the Kardashev Scale, proposed by Soviet astronomer Nikolai Kardashev in 1964 as a way to rank civilizations by their energy use. A Type 1 civilization controls its planet's full energy output. Type 2 controls its star. Type 3 commands a galaxy. Humans, for context, are not yet a Type 1 civilization. We harness a fraction of what's available to us on Earth alone.
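The scale is often treated as continuous. Carl Sagan's widely cited interpolation (not necessarily discussed in the episode) is K = (log10 P − 6) / 10, where P is a civilization's power use in watts; Type 1 corresponds to roughly 10^16 W. A quick sketch:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6.0) / 10.0

# Humanity's total power use is on the order of 2e13 W, which puts us
# around Type 0.7 -- well short of a full planetary energy budget.
k_now = kardashev(2e13)
k_type1 = kardashev(1e16)   # full planetary budget
k_type2 = kardashev(1e26)   # stellar budget
```

Plugging in today's rough figure makes the episode's point concrete: "not yet Type 1" means sitting around 0.7 on a logarithmic ladder, where each step up the scale is enormous.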


The question the hosts bring to this framework isn't just can we get there — it's what would we do once we did? Would abundance resolve our deepest conflicts, or would we simply carry our scarcity mindset into a new era? Andrew draws on Maslow's hierarchy of needs to make the point: remove the bottom layers of the pyramid — hunger, shelter, survival — and what remains is a different kind of human problem. The need for meaning, status, belonging, and always — always — a little more.

From there, the conversation ranges widely across some of the most provocative concepts in speculative science:

Dyson Spheres — hypothetical megastructures built around a star to harvest its complete energy output. Theoretical, yes, but not quite as theoretical as they once seemed. In 2024, seven anomalous objects within 1,000 light-years of Earth caught researchers' attention for infrared emission patterns that didn't fit known natural explanations.

Matrioshka Brains — named after Russian nesting dolls, these are hypothetical star-powered supercomputers of almost incomprehensible scale. The hosts draw an obvious connection: if AI data centers already strain Earth's energy grid, what does that compute-energy loop look like at stellar scales?

Von Neumann Probes — self-replicating spacecraft capable of exploring the galaxy by mining local resources to reproduce themselves. Biology can't survive interstellar space. Self-replicating machines, perhaps, can.

The Fermi Paradox — the haunting question of why, in a universe this old and this large, we can't find anyone else. The hosts explore the possibility that civilizations rise and fall within cosmic time windows too narrow to ever overlap. That the universe may be full of life that simply never gets to meet itself.

What makes this episode work is not the concepts themselves — though they're genuinely fascinating — but the humility behind the exploration. No predictions. No resolution. Just two people genuinely wondering, out loud, whether the same drive that would take us to the stars might also be the thing that holds us back.



🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bOu2kk

🎧 Spotify: https://open.spotify.com/episode/2sbRQEuoabpKCUTrOQPCGT?si=159db4727a8841cb

📺 YouTube: https://youtu.be/z53hk7AlXZ4

🌐 Website: https://www.modemfutura.com/   

The Invisible Upgrade: What AI Is Actually Doing to the People Who Use It


The loudest part of the conversation about artificial intelligence right now is focused on what AI produces. Can you detect it? Does it have tells? Is this essay, image, or report human-made or machine-made?

It's a reasonable place to start. But it's not where the most important transformation is happening.

In Episode 75 of Modem Futura, host Sean Leahy and co-host Andrew Maynard explore what Sean calls the invisible upgrade — the quiet, compounding cognitive shift taking place not in AI-generated artifacts, but in the minds and workflows of the people who have fully integrated these tools into how they think, create, and decide.

The Seam-Scanning Problem

Sean introduces the concept of "seam scanning" — the practice of looking for signs of AI in a piece of work. Early on, those seams were easy to spot: nine-fingered hands in AI images, suspicious em-dashes, the word "delve" where it didn't belong. But as AI systems become more sophisticated and more deeply woven into human workflows, those tells are disappearing. Not because the AI is getting better at hiding — but because the line between human and AI output is becoming genuinely indistinguishable when the integration is deep enough.

The question "how much AI did you use?" is becoming as meaningful, Sean argues, as asking a writer how much spellcheck they used. The tool has become part of the process.

Constitutive Resonance

Andrew brings a concept he's been developing to the conversation: constitutive resonance. Unlike a calculator, which you use and put down, AI reconfigures you as you use it — and is reconfigured in return. The relationship is recursive and dynamic. The metaphor comes from physics: when two systems resonate at coupled frequencies, the exchange of energy between them can be transformative. Applied to human cognition and AI systems, this suggests that those who engage deeply with AI tools aren't just more productive — they are thinking differently, possibly in ways that are difficult to reverse.

This maps directly onto McLuhan's 1967 insight: all media work us over completely. AI, as Andrew and Sean explore, is the most cognitively-coupled medium humanity has ever produced.

The Productivity Gap

What emerges from this isn't just a philosophical concern — it's a structural divergence. A growing group of knowledge workers, students, and researchers are operating with what Sean calls a "multiplier effect" — not because they are inherently smarter, but because their total cognitive output, the speed and depth of synthesis, ideation, and iteration, has expanded significantly. Meanwhile, those still debating whether to engage are falling further behind, not necessarily in skill, but in thinking capacity.

The episode also explores the rise of multi-agent AI systems as what Andrew calls a step-change likely bigger than the launch of ChatGPT — and what it means for institutions, education, and our understanding of what individual human contribution actually looks like in a world where AI is already inside the walls.

The Futures Cone: A Framework for Exploring What Could Be

How one deceptively simple tool can transform the way you think about uncertainty, possibility, and the choices that shape tomorrow.

There's a habit most of us share when it comes to thinking about the future: we treat it as a destination. A singular, somewhat “predictable” place that today's trends are quietly marching toward. It's a useful shorthand — but as a mental model, it's quietly limiting.

The Futures Cone, a foundational tool in the field of futures studies, offers a different way of seeing. Rather than imagining the future as a point, it asks you to imagine it as a cone — wide open, expanding outward from the present moment, filled with layers of possibility that range from the likely to the genuinely unthinkable.

How the Cone Works

The narrowest point is now. As the cone extends outward through time, it widens to reveal different regions of possible futures, each defined by how much disruption or change would be required to bring them about:

Projected futures — the baseline; what happens if nothing changes

Probable futures — where current trends are pointing

Plausible futures — what could happen given known forces and trajectories

Possible futures — speculative, requiring future knowledge we don't yet have

Preposterous futures — the outer edge; scenarios that challenge our deepest assumptions about what is physically or socially feasible

Threaded through all of these is the Preferable future — not a separate ring, but a cross-section that asks: given everything in this cone, what do we actually want? Where do our values point?
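One way to make the layering concrete: the cone's rings form an ordered containment, where each wider layer includes everything inside the narrower ones, and "preferable" cuts across them all. A small illustrative sketch (the labels follow the article; only the ordering is encoded, and the example scenario is invented):

```python
# The cone's layers from narrowest to widest; each wider layer
# contains everything in the narrower layers inside it.
LAYERS = ["projected", "probable", "plausible", "possible", "preposterous"]

def within(layer: str, scenario_layer: str) -> bool:
    """True if a scenario classified at `scenario_layer` also falls
    inside the wider `layer`."""
    return LAYERS.index(layer) >= LAYERS.index(scenario_layer)

# "Preferable" is not a ring of its own: any scenario, however wild,
# can be flagged as preferred on top of its layer.
scenario = {"name": "universal AI tutors", "layer": "plausible", "preferred": True}
```

Everything projected is also probable, plausible, and so on outward; the reverse doesn't hold, which is precisely why projecting forward alone leaves blind spots.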

The Dator-Clarke Line

One of the most provocative ideas associated with the cone is what's referred to as the Dator-Clarke Line — drawn from futurist James Dator's claim that any genuinely useful idea about the future should, at first glance, appear ridiculous. Paired with Arthur C. Clarke's observation that the only way to find the limits of the possible is to push into the impossible, it suggests that the most valuable futures work happens precisely in the uncomfortable space at the edge of the cone.

The practical implication is significant: if every idea your team generates sounds reasonable, you probably haven't stretched far enough. The preposterous isn't a failure of imagination — it's a boundary worth exploring.

Why This Tool Matters Now

In a period defined by technological acceleration, geopolitical uncertainty, and rapid social change, the instinct to "project forward" can feel reassuring — but it's also where strategic blind spots form. The Futures Cone doesn't resolve that uncertainty. Instead, it gives individuals, teams, and organizations a shared language for navigating it: a structured way to ask not just "what will happen?" but "what could happen, what might we prefer, and what are we willing to do about it?"

This is the subject of Episode 74 of Modem Futura, in which we walk through the cone layer by layer — and then demonstrate it live with a thought experiment that starts with frogs and ends somewhere near the moons of Jupiter.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bz1tIC

🎧 Spotify: https://open.spotify.com/episode/20Hz36eLfZ90M6EifUrRuu?si=swNkzHWZSLCelVSKsAyj7A

📺 YouTube: https://youtu.be/wc_e3dsY-vw

🌐 Website: https://www.modemfutura.com/   

What Old iPods and Tiny Cameras Teach Us About Technology, Ownership, and Being Human

There's a moment in this episode of Modem Futura where two grown adults are hunched over a miniature Polaroid camera, watching a blurry selfie slowly develop — and laughing about it. It's objectively a terrible photograph. But it captures something that most modern technology has quietly optimized away: surprise, imperfection, and the distinctly human joy of not knowing exactly what you're going to get.

This episode began with a box of old iPods — tangled cables, dead batteries, and all — and evolved into a wide-ranging conversation about what we trade away every time we upgrade to something faster, thinner, and more connected. The themes are ones that touch anyone who has ever felt a pang of something unnamed while scrolling through an infinite library of music and being unable to choose a single song.

Ownership in the age of access. The iPods in the conversation are air-gapped — no internet connection, no cloud sync, no subscription. The music on them belongs to their owner in a way that a Spotify library simply does not. This distinction matters more than it might seem, especially when you consider that digital books, photos, and music can disappear when a service shuts down or an account holder passes away. The question of digital legacy — who inherits your cloud — is one most people haven't thought through yet.

Craft, care, and the "fast food" of technology. Sean raises a pointed observation about a recently released video game that shipped with fewer features than its predecessor from a decade ago. It's a pattern that extends well beyond gaming: the pressure to release fast increasingly overrides the commitment to release well. When did "good enough" become the standard?

The paradox of abundance. One of the episode's most compelling threads is the tension between scarcity and surplus. Limited storage on an old iPod forced intentional curation — playlists that became personal time capsules. Unlimited streaming offers everything and, paradoxically, can deliver less meaning. Andrew's students, however, offer a counterpoint: raised in abundance, they've developed their own sophisticated habits of curation and care. Perhaps the pendulum is already swinging.

Imperfection as a feature. The tiny Kodak keychain camera. The Polaroid with its gloriously blurry output. The analog photograph whose chemistry introduces an element of chance. These aren't failures of technology — they're reminders that the most human experiences are often the least predictable ones.

This episode doesn't offer prescriptions. It offers an invitation: to notice, to question, and to be intentional about the role technology plays in your life before someone else makes that choice for you.

🎧 Listen to Episode 73 of Modem Futura — available on Apple Podcasts, Spotify, and wherever you listen.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4b3P8L8

🎧 Spotify: https://open.spotify.com/episode/2UNsDaZox2jdEb1QYN1m44?si=FUkqjQ0gSEecnYyrjKfoVA

📺 YouTube: https://youtu.be/UKC7UHkGNJQ

🌐 Website: https://www.modemfutura.com/   

Thriving with AI: Two Futures Thinking Tools for Navigating Uncertainty


The question is no longer whether AI will reshape education. It already has. The more interesting question — and the harder one — is how educators, leaders, and institutions can navigate that transformation with clarity, purpose, and agency.

In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard walk listeners through a workshop they developed for ASU's 2026 Folk Fest titled "Thriving with AI: Ethical, Transparent, and Human-Centered Learning." Rather than demonstrating AI platforms or advocating for a particular stance, the session offers two practical thinking tools designed to help individuals make sense of complexity and make intentional decisions — regardless of where they fall on the AI adoption spectrum.

Foresight Methodologies

The Futures Triangle, originally developed by futurist Sohail Inayatullah, is a foresight method that maps three forces shaping any change landscape: the pull of the future (emerging visions and possibilities), the push of the present (trends, pressures, and mandates driving change), and the weight of history (the traditions, values, and institutional structures that resist or ground that change). By making these forces visible, individuals and teams can better orient themselves within the dynamics of change rather than simply reacting to them.

The Intent Map, drawn from Jefferey Abbott and Andrew Maynard's book AI and the Art of Being Human, complements the triangle by shifting from orientation to action. A simple two-by-two matrix, it asks users to identify four elements: their core values (what they won't compromise), their desired outcomes (what success looks like), their guardrails (the hard boundaries they won't cross), and their metrics (how they'll know if it's working). Critically, the framework recognizes that metrics don't have to be numerical — sometimes the most meaningful indicators of success are qualitative, like a student who can't stop thinking about what they learned.
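For the programmatically inclined, the Intent Map's four quadrants fit comfortably in a small record type. The field names follow the episode's framing; the example content is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class IntentMap:
    values: list[str]      # what you won't compromise
    outcomes: list[str]    # what success looks like
    guardrails: list[str]  # hard boundaries you won't cross
    metrics: list[str]     # how you'll know it's working (qualitative counts)

# An invented example for an educator adopting AI in a course.
plan = IntentMap(
    values=["student agency"],
    outcomes=["deeper engagement with course material"],
    guardrails=["no undisclosed AI grading"],
    metrics=["students keep discussing ideas after class"],
)
```

Nothing about the tool requires software, of course; the episode's point is that it fits on a scrap of paper. The structure is simply four lists you commit to writing down before you act.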

What makes these tools particularly valuable is their accessibility. Both can be sketched on a scrap of paper. Both work for individuals and teams. And both are domain-agnostic — while the episode frames them in the context of education, they apply equally well to organizational strategy, technology adoption, and personal decision-making.

The episode is anchored by two provocative 2035 headlines: one in which AI tutors outperform human teachers and faculty roles come under review, and another in which human-AI partnership produces the most capable generation of critical thinkers in history. The question the workshop poses isn't which headline is more likely. It's which one you want — and what intentional choices you need to make to move toward it.

Thriving with AI, as the hosts frame it, isn't about mastering the latest platform. It's about staying awake to what matters.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3ZXgT2P

🎧 Spotify: https://open.spotify.com/episode/1b1Q0W7YVSGZA2ELYj6g6C?si=wL1sXb-DQsSluBkLYCu9tg

📺 YouTube: https://youtu.be/zi_zvXCt9sY

🌐 Website: https://www.modemfutura.com/   


Asimov's "The Fun They Had" and the Real Cost of AI-Driven Education


The History of our Future

More than seventy years ago, Isaac Asimov imagined a future where children learn in isolation, guided by personalized mechanical tutors, and books are relics of a forgotten age. His 1951 short story, "The Fun They Had," is set in 2157, but its questions feel startlingly current.

In the story, a young girl named Margie discovers a paper book and learns about a time when children went to school together—sat in classrooms, were taught by human teachers, and shared the experience of learning with their peers. Her own education is efficient, personalized, and lonely. Her mechanical teacher can diagnose her struggles and recalibrate its approach, but it cannot inspire her, connect with her, or make her feel like she belongs to something larger than a lesson plan.

Asimov didn’t predict AI as we know it. But he predicted the question that matters most: in our rush to optimize education, are we designing out the very things that make learning meaningful?

This is precisely the tension at the heart of today's conversation about AI in education. The promise of AI-powered tutors is real and, in many cases, genuinely valuable: adaptive pacing, instant feedback, content tailored to individual needs. But when personalization becomes the dominant paradigm—when every learner is on a separate track, in a separate space, at a separate time—the communal dimensions of education begin to disappear.

Natural Human Impulses for Learning (not schooling)

John Dewey argued more than a century ago that learning is driven by four natural impulses: inquiry, communication, construction, and expression. Most of these are inherently social. They depend on friction, dialogue, surprise, and the presence of other people. No amount of algorithmic sophistication can fully replicate the moment a teacher's unexpected enthusiasm shifts a student's entire trajectory, or the experience of working through difficulty alongside peers who share the same struggle.

Asimov's story also raises a subtler question about what endures. The book Margie discovers has survived two centuries. The static words on the page—unchanging, tactile, physical—carry a kind of permanence that digital media cannot easily match. This resonates with the growing cultural appetite for analog experiences: vinyl records, film photography, even old iPods. These are not acts of technological rejection. They are expressions of a deeper need for embodied engagement, deliberate choice, and the kind of friction that gives experience its texture.

Where do we go next?

None of this means AI has no place in education. It does, and increasingly will. But Asimov's story is a quiet reminder that the most important things about learning—curiosity, connection, belonging, the joy of shared discovery—are not problems to be optimized. They are human experiences to be protected.

The question is not whether AI can teach us. It's whether, in building systems that teach us more efficiently, we are designing out the very things that made learning worth having in the first place.

*Episode 71 of Modem Futura explores these themes through Asimov's story and a wider conversation about technology, nostalgia, and what it means to learn as a human being.*

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4s1lDk1

🎧 Spotify: https://open.spotify.com/episode/20I5j2DliUnZAbWDiVw7y8?si=WoEW_Zb2SPiynHYb4d8XHA

📺 YouTube: https://youtu.be/TDQc15Muwto

🌐 Website: https://www.modemfutura.com/   

Vibe Coding and the Return of Personal Software


The Echo of Early Personal Computing

There was a brief, electric moment in the history of computing—roughly the late 1970s through the mid-1980s—when ordinary people could sit down at a keyboard and make a machine do something it hadn't done before. The Commodore 64, the BBC Micro, the Apple II: these were limited, clunky, and profoundly empowering. For a generation, they opened the door to a kind of creative agency that felt almost magical.

That door closed, gradually, as software became professionalized. The gap between what you could imagine and what you could build widened into a canyon. If you wanted a tool that didn't exist, you needed a developer—or you went without.

Vibe coding is reopening that door.

The term refers to the practice of describing what you want in natural language and letting a generative AI—tools like Claude, ChatGPT, or Copilot—write the code for you. No syntax to memorize. No debugging by hand. You describe your intent, and working software comes back in seconds.

In this episode of Modem Futura, we explore what this shift means—not just technically, but humanly. I demonstrate tools I built from single prompts (also referred to as ‘one-shots’): a horizon-scanning app for futures research and a two-by-two uncertainty matrix used in strategic foresight. Both were functional on the first attempt. Both took less time to create than it takes to describe them.

The Inherited Power Problem

But the episode resists the temptation to treat this as a simple good-news story. The hosts dig into the real tensions: AI-generated code that no one fully understands, security vulnerabilities baked into apps that reach market before anyone reviews them, the new threat landscape of prompt injection, and the philosophical question of wielding power you haven't earned the literacy to evaluate—what the hosts call "inherited power."

There are also rich implications for education. Rather than relying on off-the-shelf apps that never quite fit, instructors and students alike can now build purpose-specific tools—and in doing so, develop a more grounded understanding of what these AI systems can and cannot do.

The deeper question the episode surfaces is less about code and more about agency. For decades, software was something done to us—platforms we adapted to, interfaces we learned, ecosystems we bought into. Vibe coding hints at a possible reversal: software shaped by the individual, for the individual, in the moment they need it.

Whether that future is liberating or reckless—or both—depends on the kind of literacy, caution, and imagination we bring to it.

Listen to the full conversation on Modem Futura.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4rbOr9r

🎧 Spotify: https://open.spotify.com/episode/28DMXJsM2kEBA2QDxuDmtJ?si=AJpR7zCpRgS2KCCfWwjjWg

📺 YouTube: https://youtu.be/lQGYaiThuBk?si=nRbHVEQk9dwL3gXr

🌐 Website: https://www.modemfutura.com/   

The Future From a Kid's Perspective: What a 10-Year-Old Thinks About AI, Jobs, and Meaningful Work

We spend a lot of time talking about young people when we discuss the future of technology. We debate how AI will affect their education, reshape their careers, and transform the world they'll inherit. But we rarely stop to ask them what they think.

In this special episode of Modem Futura, we did exactly that. Freddie Leahy—co-host Sean's almost-10-year-old son—joined us for an unscripted conversation about artificial intelligence, meaningful work, and the questions that don't have easy answers.

Already Thinking About Job Displacement

When asked what he thinks about when he imagines the future, Freddie's first response wasn't about flying cars or space travel. It was about jobs.

"I kind of more think about the AI part of the future," he said. "And I'm just wondering what jobs will be overran by AI."

He's almost ten. And he's already calculating whether his dream career—paleontology—will exist by the time he's ready to pursue it.

This isn't abstract concern. Freddie has a specific vision: he wants to be like Alan Grant from Jurassic Park, out in the field, hands in the dirt, discovering fossils himself. When we suggested that AI might help him find more dinosaur bones faster, he didn't immediately embrace the idea. His worry isn't about efficiency—it's about being separated from the work itself.

"I would be doing it not for the money," he explained, "just because of the experience."

The Limits of AI Creativity

Freddie has firsthand experience with generative AI. He and I have spent time creating AI-generated images—D&D characters, fantasy creatures, book covers. But he's noticed something that many adults are also discovering: the gap between imagination and output.

"Every time you create an AI image," he said, "you never feel like it's quite right. So you just keep making these, and then you have to choose one, but in the end it never feels like the perfect cover you wanted."

When asked why, his answer was simple: "AI isn't our heads."

This observation—from a fourth-grader—gets at something fundamental about the current state of generative tools. They can produce impressive outputs, but they can't access the specific vision in your mind. The friction between prompt and result isn't just a technical limitation; it's a gap between human intention and machine interpretation.

When it comes to his own writing—Freddie is working on stories—he's clear that he doesn't want AI assistance. The temptation exists, especially when facing a blank page. But he recognizes something important: "It's the point about using your own creativity."

Suspicious of AI Companions

One of the most revealing exchanges came when we explored the idea of AI friendship. What if Freddie could have an AI companion who shared all his interests—someone who wanted to talk about dinosaurs as much as he does?

His response was immediate skepticism.

"That would be weird," he said, "because nobody likes what I like."

The very thing that might make an AI friend appealing—perfect alignment with his interests—is exactly what made it feel inauthentic. Part of what makes his interests meaningful is that they're his, distinct from the people around him. An AI that mirrored them perfectly would feel hollow.

When pressed further about whether he'd want an AI as a secret companion—a sort of digital spirit animal—Freddie remained uncertain. "Who knows what it could do," he noted. "It could hack everything."

There's healthy skepticism there, but also something deeper: a sense that friendship involves more than shared interests. It involves trust, vulnerability, and the unpredictability of another mind.

"I Refuse": Mind Uploading at Nine

During our Futures Improv segment, we posed a classic transhumanist scenario: What if you could upload your consciousness to a computer and live forever digitally, while your biological body remained behind?

Freddie's answer required no deliberation:

"I refuse. I will not upload my brain into a digital computer."

His reasoning was practical but profound. At nine years old, why would he abandon a body that works? The theoretical benefits of digital immortality don't outweigh the immediate reality of physical experience.

This perspective offers a useful counterweight to futures discourse that sometimes treats technological transcendence as obviously desirable. From Freddie's vantage point, the question isn't whether we can escape biological limitations, but whether we'd want to—and what we might lose in the process.

Questions Without Right Answers

Perhaps the most important takeaway from this conversation came near the end, when Freddie observed something about the nature of our questions.

"Because of all these questions," he said, "there is no wrong or right answer."

That's exactly right. The value of futures thinking isn't in predicting what will happen or determining the "correct" response to emerging technologies. It's in learning to sit with uncertainty, explore tensions, and develop our capacity for navigating complexity.

At almost ten years old, Freddie already understands this. He's not looking for definitive answers about AI and jobs and creativity. He's learning to ask better questions—and to recognize that asking them is more important than resolving them.

What the Future Thinks About Itself

We often frame conversations about technology and youth as adults preparing children for a world we're creating. But this episode suggests something different: young people are already thinking about these issues, often with more nuance than we might expect.

Freddie isn't anti-technology. He plays VR games, makes AI art, and follows developments in the field. But he's also holding onto something—a sense that some experiences are valuable precisely because we do them ourselves, that the struggle of creation is part of its meaning, and that efficiency isn't the only measure of a good life.

These aren't lessons we taught him. They're insights he's developing on his own, as he navigates a world where these technologies are simply part of the landscape.

Maybe the best thing we can do isn't to tell young people what the future will look like. Maybe it's to listen to what they already think about it—and learn from their perspective.

I don't know what the future holds for his generation. But if this conversation is any indication, they're already thinking about it with more care than we tend to give them credit for.

Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4khmVES

🎧 Spotify: https://open.spotify.com/episode/5nKjpEVZcaUDisdZpGGaMZ?si=YgWp_O84T1yVlBSloedV1w

📺 YouTube: https://youtu.be/mfumkJZav-M

🌐 Website: https://www.modemfutura.com/