Modem Futura

Three Horizons Framework & Futures Wheel Explained

There's a reason some organizations consistently seem to see disruption coming — and it's usually not because they're smarter or better funded. It's because they've built structured habits of thinking about change in multiple time horizons simultaneously, and they've learned how to trace the cascading consequences of a single shift before it becomes a crisis.

Two of the oldest and most reliable tools for doing exactly that are the Three Horizons Framework and the Futures Wheel. In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard break both down in accessible, conversational detail — and show what becomes possible when you use them together.

The Three Horizons Framework

Originally developed by Bill Sharpe and widely used in professional foresight and strategic planning, the Three Horizons Framework divides the landscape of change into three overlapping zones. Horizon 1 represents the dominant present — the systems, structures, and assumptions that govern how the world works today. Horizon 3 is the emergent fringe: weak signals, nascent ideas, and early-stage shifts that are observable but not yet mainstream. And Horizon 2 is the transitional space between them — turbulent, hard to define, and full of both opportunity and risk.

The model doesn't tell you what the future will bring. What it offers is a way of *positioning* trends, signals, and innovations within that landscape of change — helping individuals and organizations understand what to watch, what to act on, and what to prepare for.

The Futures Wheel

Developed by Jerome Glenn in 1971, the Futures Wheel works differently but complementarily. Starting from a specific change or trend, it maps outward through first-, second-, and third-order consequences — building a rich, networked picture of how a single shift might ripple through a system over time. It's a brainstorming and sense-making tool, not a prediction engine, and it's at its most powerful when used with diverse groups who bring different perspectives to the same question.
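
To make the wheel's structure concrete, here is a minimal sketch (the starting change and its consequences are invented for the example, not drawn from the episode) of a Futures Wheel represented as a tree, where each ring outward is one more order of consequence:

```python
# Minimal sketch: a Futures Wheel as a nested tree of consequences.
# The starting change and consequences below are illustrative placeholders.

futures_wheel = {
    "change": "Widespread adoption of a four-day work week",
    "consequences": [
        {"change": "More leisure time",
         "consequences": [
             {"change": "Growth in hobby and travel spending", "consequences": []},
             {"change": "Higher demand for weekday childcare", "consequences": []},
         ]},
        {"change": "Compressed schedules",
         "consequences": [
             {"change": "Longer, more fatiguing workdays", "consequences": []},
         ]},
    ],
}

def walk(node, order=0):
    """Print each node with its order: 0 is the starting change,
    1 is a first-order consequence, 2 is second-order, and so on."""
    label = "start" if order == 0 else f"{order}-order"
    print("  " * order + f"[{label}] {node['change']}")
    for child in node["consequences"]:
        walk(child, order + 1)

walk(futures_wheel)
```

In a real workshop the branches come from a group's brainstorming rather than a script; the value is in the conversation that fills the rings, not the data structure.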

Used individually, each tool offers genuine insight. Used together, they offer something more: a way of understanding not just *what* a signal might do, but *when* and *through which pathways* it might do it.

Whether you're a founder trying to figure out which wave to ride, a strategist scanning for disruption, or simply someone trying to make better decisions in an uncertain world, these tools are worth adding to your thinking practice.

🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.


🎧 Apple Podcast: https://apple.co/4sosMdQ

🎧 Spotify: https://open.spotify.com/episode/58Fdc2SrWodBTbwfxK8Pwm?si=leiCnhRsQxuv-_hnxEeNjQ

📺 YouTube: https://youtu.be/eVk6L_VfAkY

🌐 Website: https://www.modemfutura.com/   

Power, Probes, and the Post-Human Horizon: What the Kardashev Scale Reveals About Us

On the surface, this episode of Modem Futura is an excuse to have fun. It's a spring break Futures Improv — Sean and Andrew throwing speculative scenarios at each other and seeing where things land. And it is fun. But somewhere between Dyson Spheres and the Fermi Paradox, it becomes something else: a quiet meditation on what humanity actually wants when we talk about mastering energy, exploration, and the cosmos.

The conversation begins with the Kardashev Scale, proposed by Soviet astronomer Nikolai Kardashev in 1964 as a way to rank civilizations by their energy use. A Type 1 civilization harnesses all the energy available on its home planet. Type 2 captures the full output of its star. Type 3 commands the energy of an entire galaxy. Humans, for context, are not yet a Type 1 civilization. We harness a fraction of what's available to us on Earth alone.
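
For a rough, back-of-the-envelope sense of the gap, Carl Sagan's interpolation of the Kardashev scale (a standard extension of the original, not something covered in the episode) rates a civilization by its total power use:

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's continuous version of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's primary energy use is roughly 20 terawatts (an order-of-magnitude
# estimate for illustration, not a figure quoted in the episode).
print(f"Humanity today:   K ≈ {kardashev_rating(2e13):.2f}")  # ≈ 0.73
print(f"Type 1 threshold: K = {kardashev_rating(1e16):.1f}")  # 1.0
print(f"Type 2 threshold: K = {kardashev_rating(1e26):.1f}")  # 2.0
```

On that rough measure, humanity sits somewhere around 0.7, still well short of Type 1.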


The question the hosts bring to this framework isn't just can we get there — it's what would we do once we did? Would abundance resolve our deepest conflicts, or would we simply carry our scarcity mindset into a new era? Andrew draws on Maslow's hierarchy of needs to make the point: remove the bottom layers of the pyramid — hunger, shelter, survival — and what remains is a different kind of human problem. The need for meaning, status, belonging, and always — always — a little more.

From there, the conversation ranges widely across some of the most provocative concepts in speculative science:

Dyson Spheres — hypothetical megastructures built around a star to harvest its complete energy output. Theoretical, yes, but not quite as theoretical as they once seemed. In 2024, seven anomalous objects within 1,000 light-years of Earth caught researchers' attention for infrared signatures that didn't fit known stellar behavior.

Matrioshka Brains — named after Russian nesting dolls, these are hypothetical star-powered supercomputers of almost incomprehensible scale. The hosts draw an obvious connection: if AI data centers already strain Earth's energy grid, what does that compute-energy loop look like at stellar scales?

Von Neumann Probes — self-replicating spacecraft capable of exploring the galaxy by mining local resources to reproduce themselves. Biology can't survive interstellar space. Self-replicating machines, perhaps, can.

The Fermi Paradox — the haunting question of why, in a universe this old and this large, we can't find anyone else. The hosts explore the possibility that civilizations rise and fall within cosmic time windows too narrow to ever overlap. That the universe may be full of life that simply never gets to meet itself.

What makes this episode work is not the concepts themselves — though they're genuinely fascinating — but the humility behind the exploration. No predictions. No resolution. Just two people genuinely wondering, out loud, whether the same drive that would take us to the stars might also be the thing that holds us back.



🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bOu2kk

🎧 Spotify: https://open.spotify.com/episode/2sbRQEuoabpKCUTrOQPCGT?si=159db4727a8841cb

📺 YouTube: https://youtu.be/z53hk7AlXZ4

🌐 Website: https://www.modemfutura.com/   

The Invisible Upgrade: What AI Is Actually Doing to the People Who Use It

[Image: a person at a computer, one half of the scene showing everyday life and the other augmented by AI]

The loudest part of the conversation about artificial intelligence right now is focused on what AI produces. Can you detect it? Does it have tells? Is this essay, image, or report human-made or machine-made?

It's a reasonable place to start. But it's not where the most important transformation is happening.

In Episode 75 of Modem Futura, host Sean Leahy and co-host Andrew Maynard explore what Sean calls the invisible upgrade — the quiet, compounding cognitive shift taking place not in AI-generated artifacts, but in the minds and workflows of the people who have fully integrated these tools into how they think, create, and decide.

The Seam-Scanning Problem

Sean introduces the concept of "seam scanning" — the practice of looking for signs of AI in a piece of work. Early on, those seams were easy to spot: nine-fingered hands in AI images, suspicious em-dashes, the word "delve" where it didn't belong. But as AI systems become more sophisticated and more deeply woven into human workflows, those tells are disappearing. Not because the AI is getting better at hiding — but because the line between human and AI output is becoming genuinely indistinguishable when the integration is deep enough.

The question "how much AI did you use?" is becoming as meaningful, Sean argues, as asking a writer how much spellcheck they used. The tool has become part of the process.

Constitutive Resonance

Andrew brings a concept he's been developing to the conversation: constitutive resonance. Unlike a calculator, which you use and put down, AI reconfigures you as you use it — and is reconfigured in return. The relationship is recursive and dynamic. The metaphor comes from physics: when two systems resonate at coupled frequencies, the exchange of energy between them can be transformative. Applied to human cognition and AI systems, this suggests that those who engage deeply with AI tools aren't just more productive — they are thinking differently, possibly in ways that are difficult to reverse.

This maps directly onto McLuhan's 1967 insight: all media work us over completely. AI, as Andrew and Sean explore, is the most cognitively-coupled medium humanity has ever produced.

The Productivity Gap

What emerges from this isn't just a philosophical concern — it's a structural divergence. A growing group of knowledge workers, students, and researchers are operating with what Sean calls a "multiplier effect" — not because they are inherently smarter, but because their total cognitive output, the speed and depth of synthesis, ideation, and iteration, has expanded significantly. Meanwhile, those still debating whether to engage are falling further behind, not necessarily in skill, but in thinking capacity.

The episode also explores the rise of multi-agent AI systems as what Andrew calls a step-change likely bigger than the launch of ChatGPT — and what it means for institutions, education, and our understanding of what individual human contribution actually looks like in a world where AI is already inside the walls.

The Futures Cone: A Framework for Exploring What Could Be

How one deceptively simple tool can transform the way you think about uncertainty, possibility, and the choices that shape tomorrow.

There's a habit most of us share when it comes to thinking about the future: we treat it as a destination. A singular, somewhat “predictable” place that today's trends are quietly marching toward. It's a useful shorthand — but as a mental model, it's quietly limiting.

The Futures Cone, a foundational tool in the field of futures studies, offers a different way of seeing. Rather than imagining the future as a point, it asks you to imagine it as a cone — wide open, expanding outward from the present moment, filled with layers of possibility that range from the likely to the genuinely unthinkable.

How the Cone Works

The narrowest point is now. As the cone extends outward through time, it widens to reveal different regions of possible futures, each defined by how much disruption or change would be required to bring them about:

Projected futures — the baseline; what happens if nothing changes

Probable futures — where current trends are pointing

Plausible futures — what could happen given known forces and trajectories

Possible futures — speculative, requiring future knowledge we don't yet have

Preposterous futures — the outer edge; scenarios that challenge our deepest assumptions about what is physically or socially feasible

Threaded through all of these is the Preferable future — not a separate ring, but a cross-section that asks: given everything in this cone, what do we actually want? Where do our values point?
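
As a toy illustration of what placing candidate futures in these layers can look like (the scenarios below are invented for the example, not taken from the episode):

```python
# Toy sketch: tagging candidate scenarios with the cone layer they sit in.
# The scenarios are invented placeholders, not examples from the episode.

CONE_LAYERS = ("projected", "probable", "plausible", "possible", "preposterous")

scenarios = {
    "Current trends continue largely unchanged": "projected",
    "AI tutoring is routine in most schools by 2035": "probable",
    "A major economy adopts a four-day work week": "plausible",
    "Commercial fusion power becomes commonplace": "possible",
    "Formal schooling is abandoned entirely": "preposterous",
}

# "Preferable" is not a ring of its own; it's a value judgment you can
# attach to a scenario in any layer.
for scenario, layer in scenarios.items():
    assert layer in CONE_LAYERS
    print(f"{layer:>12}: {scenario}")
```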

The Dator-Clarke Line

One of the most provocative ideas associated with the cone is what's referred to as the Dator-Clarke Line — drawn from futurist James Dator's claim that any genuinely useful idea about the future should, at first glance, appear ridiculous. Paired with Arthur C. Clarke's observation that the only way to find the limits of the possible is to push into the impossible, it suggests that the most valuable futures work happens precisely in the uncomfortable space at the edge of the cone.

The practical implication is significant: if every idea your team generates sounds reasonable, you probably haven't stretched far enough. The preposterous isn't a failure of imagination — it's a boundary worth exploring.

Why This Tool Matters Now

In a period defined by technological acceleration, geopolitical uncertainty, and rapid social change, the instinct to "project forward" can feel reassuring — but it's also where strategic blind spots form. The Futures Cone doesn't resolve that uncertainty. Instead, it gives individuals, teams, and organizations a shared language for navigating it: a structured way to ask not just "what will happen?" but "what could happen, what might we prefer, and what are we willing to do about it?"

This is the subject of Episode 74 of Modem Futura, in which we walk through the cone layer by layer — and then demonstrate it live with a thought experiment that starts with frogs and ends somewhere near the moons of Jupiter.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bz1tIC

🎧 Spotify: https://open.spotify.com/episode/20Hz36eLfZ90M6EifUrRuu?si=swNkzHWZSLCelVSKsAyj7A

📺 YouTube: https://youtu.be/wc_e3dsY-vw

🌐 Website: https://www.modemfutura.com/   

Thriving with AI: Two Futures Thinking Tools for Navigating Uncertainty

[Image: illustration of Sean and Andrew presenting their workshop title slide]

The question is no longer whether AI will reshape education. It already has. The more interesting question — and the harder one — is how educators, leaders, and institutions can navigate that transformation with clarity, purpose, and agency.

In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard walk listeners through a workshop they developed for ASU's 2026 Folk Fest titled "Thriving with AI: Ethical, Transparent, and Human-Centered Learning." Rather than demonstrating AI platforms or advocating for a particular stance, the session offers two practical thinking tools designed to help individuals make sense of complexity and make intentional decisions — regardless of where they fall on the AI adoption spectrum.

Foresight Methodologies

The Futures Triangle, originally developed by futurist Sohail Inayatullah, is a foresight method that maps three forces shaping any change landscape: the pull of the future (emerging visions and possibilities), the push of the present (trends, pressures, and mandates driving change), and the weight of history (the traditions, values, and institutional structures that resist or ground that change). By making these forces visible, individuals and teams can better orient themselves within the dynamics of change rather than simply reacting to them.

The Intent Map, drawn from Jefferey Abbott and Andrew Maynard's book AI and the Art of Being Human, complements the triangle by shifting from orientation to action. A simple two-by-two matrix, it asks users to identify four elements: their core values (what they won't compromise), their desired outcomes (what success looks like), their guardrails (the hard boundaries they won't cross), and their metrics (how they'll know if it's working). Critically, the framework recognizes that metrics don't have to be numerical — sometimes the most meaningful indicators of success are qualitative, like a student who can't stop thinking about what they learned.
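
As a concrete sketch of the Intent Map's shape (the field names follow the description above, and the filled-in example is illustrative rather than drawn from the book or the workshop):

```python
from dataclasses import dataclass

@dataclass
class IntentMap:
    """Minimal sketch of the Intent Map's four elements, as described above."""
    core_values: list[str]       # what we won't compromise
    desired_outcomes: list[str]  # what success looks like
    guardrails: list[str]        # hard boundaries we won't cross
    metrics: list[str]           # how we'll know it's working (can be qualitative)

# Illustrative example for an instructor weighing AI use in a course.
ai_in_my_course = IntentMap(
    core_values=["Student agency", "Transparency about how AI is used"],
    desired_outcomes=["Students critique AI output rather than just accepting it"],
    guardrails=["No AI grading of reflective or personal writing"],
    metrics=["Students keep wrestling with ideas after class ends"],
)
```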

What makes these tools particularly valuable is their accessibility. Both can be sketched on a scrap of paper. Both work for individuals and teams. And both are domain-agnostic — while the episode frames them in the context of education, they apply equally well to organizational strategy, technology adoption, and personal decision-making.

The episode is anchored by two provocative 2035 headlines: one in which AI tutors outperform human teachers and faculty roles come under review, and another in which human-AI partnership produces the most critically thinking generation in history. The question the workshop poses isn't which headline is more likely. It's which one you want — and what intentional choices you need to make to move toward it.

Thriving with AI, as the hosts frame it, isn't about mastering the latest platform. It's about staying awake to what matters.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3ZXgT2P

🎧 Spotify: https://open.spotify.com/episode/1b1Q0W7YVSGZA2ELYj6g6C?si=wL1sXb-DQsSluBkLYCu9tg

📺 YouTube: https://youtu.be/zi_zvXCt9sY

🌐 Website: https://www.modemfutura.com/   


Asimov's "The Fun They Had" and the Real Cost of AI-Driven Education

[Image: illustration from Asimov's "The Fun They Had" of a boy reading beside a mechanical teacher]

The History of our Future

More than seventy years ago, Isaac Asimov imagined a future where children learn in isolation, guided by personalized mechanical tutors, and books are relics of a forgotten age. His 1951 short story, "The Fun They Had," is set in 2157, but its questions feel startlingly current.

In the story, a young girl named Margie discovers a paper book and learns about a time when children went to school together—sat in classrooms, were taught by human teachers, and shared the experience of learning with their peers. Her own education is efficient, personalized, and lonely. Her mechanical teacher can diagnose her struggles and recalibrate its approach, but it cannot inspire her, connect with her, or make her feel like she belongs to something larger than a lesson plan.

Asimov didn’t predict AI as we know it. But he predicted the question that matters most: in our rush to optimize education, are we designing out the very things that make learning meaningful?

This is precisely the tension at the heart of today's conversation about AI in education. The promise of AI-powered tutors is real and, in many cases, genuinely valuable: adaptive pacing, instant feedback, content tailored to individual needs. But when personalization becomes the dominant paradigm—when every learner is on a separate track, in a separate space, at a separate time—the communal dimensions of education begin to disappear.

Natural Human Impulses for Learning (not schooling)

John Dewey argued more than a century ago that learning is driven by four natural impulses: inquiry, communication, construction, and expression. Most of these are inherently social. They depend on friction, dialogue, surprise, and the presence of other people. No amount of algorithmic sophistication can fully replicate the moment a teacher's unexpected enthusiasm shifts a student's entire trajectory, or the experience of working through difficulty alongside peers who share the same struggle.

Asimov's story also raises a subtler question about what endures. The book Margie discovers has survived two centuries. The static words on the page—unchanging, tactile, physical—carry a kind of permanence that digital media cannot easily match. This resonates with the growing cultural appetite for analog experiences: vinyl records, film photography, even old iPods. These are not acts of technological rejection. They are expressions of a deeper need for embodied engagement, deliberate choice, and the kind of friction that gives experience its texture.

Where do we go next?

None of this means AI has no place in education. It does, and increasingly will. But Asimov's story is a quiet reminder that the most important things about learning—curiosity, connection, belonging, the joy of shared discovery—are not problems to be optimized. They are human experiences to be protected.

The question is not whether AI can teach us. It's whether, in building systems that teach us more efficiently, we are designing out the very things that made learning worth having in the first place.

*Episode 71 of Modem Futura explores these themes through Asimov's story and a wider conversation about technology, nostalgia, and what it means to learn as a human being.*

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4s1lDk1

🎧 Spotify: https://open.spotify.com/episode/20I5j2DliUnZAbWDiVw7y8?si=WoEW_Zb2SPiynHYb4d8XHA

📺 YouTube: https://youtu.be/TDQc15Muwto

🌐 Website: https://www.modemfutura.com/   

The Future From a Kid's Perspective: What a 10-Year-Old Thinks About AI, Jobs, and Meaningful Work

We spend a lot of time talking about young people when we discuss the future of technology. We debate how AI will affect their education, reshape their careers, and transform the world they'll inherit. But we rarely stop to ask them what they think.

In this special episode of Modem Futura, we did exactly that. Freddie Leahy—co-host Sean's almost-10-year-old son—joined us for an unscripted conversation about artificial intelligence, meaningful work, and the questions that don't have easy answers.

Already Thinking About Job Displacement

When asked what he thinks about when he imagines the future, Freddie's first response wasn't about flying cars or space travel. It was about jobs.

"I kind of more think about the AI part of the future," he said. "And I'm just wondering what jobs will be overran by AI."

He's almost ten. And he's already calculating whether his dream career—paleontology—will exist by the time he's ready to pursue it.

This isn't abstract concern. Freddie has a specific vision: he wants to be like Alan Grant from Jurassic Park, out in the field, hands in the dirt, discovering fossils himself. When we suggested that AI might help him find more dinosaur bones faster, he didn't immediately embrace the idea. His worry isn't about efficiency—it's about being separated from the work itself.

"I would be doing it not for the money," he explained, "just because of the experience."

The Limits of AI Creativity

Freddie has firsthand experience with generative AI. He and I have spent time creating AI-generated images—D&D characters, fantasy creatures, book covers. But he's noticed something that many adults are also discovering: the gap between imagination and output.

"Every time you create an AI image," he said, "you never feel like it's quite right. So you just keep making these, and then you have to choose one, but in the end it never feels like the perfect cover you wanted."

When asked why, his answer was simple: "AI isn't our heads."

This observation—from a fourth-grader—gets at something fundamental about the current state of generative tools. They can produce impressive outputs, but they can't access the specific vision in your mind. The friction between prompt and result isn't just a technical limitation; it's a gap between human intention and machine interpretation.

When it comes to his own writing—Freddie is working on stories—he's clear that he doesn't want AI assistance. The temptation exists, especially when facing a blank page. But he recognizes something important: "It's the point about using your own creativity."

Suspicious of AI Companions

One of the most revealing exchanges came when we explored the idea of AI friendship. What if Freddie could have an AI companion who shared all his interests—someone who wanted to talk about dinosaurs as much as he does?

His response was immediate skepticism.

"That would be weird," he said, "because nobody likes what I like."

The very thing that might make an AI friend appealing—perfect alignment with his interests—is exactly what made it feel inauthentic. Part of what makes his interests meaningful is that they're his, distinct from the people around him. An AI that mirrored them perfectly would feel hollow.

When pressed further about whether he'd want an AI as a secret companion—a sort of digital spirit animal—Freddie remained uncertain. "Who knows what it could do," he noted. "It could hack everything."

There's healthy skepticism there, but also something deeper: a sense that friendship involves more than shared interests. It involves trust, vulnerability, and the unpredictability of another mind.

"I Refuse": Mind Uploading at Nine

During our Futures Improv segment, we posed a classic transhumanist scenario: What if you could upload your consciousness to a computer and live forever digitally, while your biological body remained behind?

Freddie's answer required no deliberation:

"I refuse. I will not upload my brain into a digital computer."

His reasoning was practical but profound. At nine years old, why would he abandon a body that works? The theoretical benefits of digital immortality don't outweigh the immediate reality of physical experience.

This perspective offers a useful counterweight to futures discourse that sometimes treats technological transcendence as obviously desirable. From Freddie's vantage point, the question isn't whether we can escape biological limitations, but whether we'd want to—and what we might lose in the process.

Questions Without Right Answers

Perhaps the most important takeaway from this conversation came near the end, when Freddie observed something about the nature of our questions.

"Because of all these questions," he said, "there is no wrong or right answer."

That's exactly right. The value of futures thinking isn't in predicting what will happen or determining the "correct" response to emerging technologies. It's in learning to sit with uncertainty, explore tensions, and develop our capacity for navigating complexity.

At almost ten years old, Freddie already understands this. He's not looking for definitive answers about AI and jobs and creativity. He's learning to ask better questions—and to recognize that asking them is more important than resolving them.

What the Future Thinks About Itself

We often frame conversations about technology and youth as adults preparing children for a world we're creating. But this episode suggests something different: young people are already thinking about these issues, often with more nuance than we might expect.

Freddie isn't anti-technology. He plays VR games, makes AI art, and follows developments in the field. But he's also holding onto something—a sense that some experiences are valuable precisely because we do them ourselves, that the struggle of creation is part of its meaning, and that efficiency isn't the only measure of a good life.

These aren't lessons we taught him. They're insights he's developing on his own, as he navigates a world where these technologies are simply part of the landscape.

Maybe the best thing we can do isn't to tell young people what the future will look like. Maybe it's to listen to what they already think about it—and learn from their perspective.

I don't know what the future holds for his generation. But if this conversation is any indication, they're thinking about it more carefully than we might expect.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4khmVES

🎧 Spotify: https://open.spotify.com/episode/5nKjpEVZcaUDisdZpGGaMZ?si=YgWp_O84T1yVlBSloedV1w

📺 YouTube: https://youtu.be/mfumkJZav-M

🌐 Website: https://www.modemfutura.com/   

Understanding Global Risk: What the WEF's 2026 Report Reveals About Our Collective Anxieties

How 1,300 experts see the world's greatest threats—and what their blind spots tell us

Each year, the World Economic Forum surveys over a thousand experts worldwide—business leaders, academics, policymakers, and institutional leaders—to map perceived global risks. The resulting Global Risks Report isn't a prediction of what will happen. It's something potentially more valuable: a snapshot of collective concern, a reading of the signals building across economic, environmental, technological, and societal domains.

The 2026 edition reveals tensions worth examining closely.

Short-Term Fears: The Present Pressing In

The two-year risk horizon is dominated by immediate geopolitical and informational concerns. Geoeconomic confrontation leads the list, having jumped eight positions from the previous year—a signal that trade conflicts, sanctions regimes, and economic nationalism have moved from background noise to foreground crisis for many observers.

Misinformation and disinformation hold second position, reflecting growing unease about information integrity in an age where AI-generated content becomes indistinguishable from authentic material and where social permission for deception seems to be expanding. Societal polarization follows in third place—and importantly, these three risks appear deeply interconnected. Misinformation accelerates polarization, polarization enables economic nationalism, economic nationalism generates more opportunities for information warfare.

Extreme weather events, state-based armed conflict, and cyber insecurity round out the top concerns for the immediate future.

[Figure 3, World Economic Forum Global Risks Report 2026]

Long-Term Concerns: The Environment Reasserts Itself

Expand the time horizon to ten years, and the risk landscape transforms. Environmental concerns claim five of the top ten positions, with extreme weather events, biodiversity loss and ecosystem collapse, and critical changes to Earth's systems occupying the top three spots.

This shift reveals something important about human risk perception: we consistently discount slow-moving catastrophes. Biodiversity loss lacks the urgency of trade wars, even though its cascading effects may ultimately prove more consequential. We've evolved to respond to immediate threats; we struggle to mobilize against dangers that unfold across decades.

Notably, societal polarization—ranked third in the short term—drops to ninth in the long-term view. Whether this reflects optimism that current divisions will heal, or simply the statistical reality that other risks seem more severe, remains an open question.

Different Lenses, Different Risks

Perhaps the report's most valuable contribution is its disaggregation of risk perception across demographics and geographies.

Age shapes perception. Respondents under 30 prioritize misinformation, extreme weather, and inequality. Those over 40 consistently rank geoeconomic confrontation as their primary concern. Generational experience matters: those who remember previous periods of great power competition read current signals differently than those encountering these dynamics for the first time.

[Figure 15, World Economic Forum Global Risks Report 2026]

Geography shapes perception even more dramatically. AI risks that dominate American concerns rank 30th globally. In Brazil, Chile, and much of the world, more immediate concerns—inequality, pollution, resource access—take precedence. This isn't a failure of foresight; it's a reminder that risk is contextual. What threatens your community depends on where your community sits.

[Figure 53, World Economic Forum Global Risks Report 2026]

Using Signals, Not Consuming Forecasts

Reports like this serve best as prompts for reflection rather than prescriptions for action. The value lies not in accepting these rankings as authoritative, but in using them to surface questions:

  • What assumptions am I making about stability that geoeconomic confrontation might disrupt?

  • How might misinformation affect my organization, my industry, my community's cohesion?

  • Which long-term environmental risks am I discounting because they feel distant?

  • Whose risk perceptions am I ignoring because they don't match my own context?

Human beings are, as far as we know, the only species capable of anticipating futures and adjusting present behavior accordingly. That capacity for foresight is a genuine superpower—but only if we use it. Signals become valuable when they prompt better questions. The work isn't to predict what happens next; it's to prepare ourselves for navigating uncertainty with more wisdom than our instincts alone would allow.

Modem Futura explores the intersection of technology, society, and human futures.

Download the full WEF Global Risks Report 2026: [PDF Web Link]

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4sUwhdG

🎧 Spotify: https://open.spotify.com/episode/0UoLHYJa8KHzbNbP564Qwy?si=h9WD1rE4Q6WTu6wOWlEQhA

📺 YouTube: https://youtu.be/-5PQMaqweNU

🌐 Website: https://www.modemfutura.com/   

Modem Futura Year in Review: What 2025 Taught Us About Being Human

As we step toward 2026, we recorded a “Year in Review” episode of Modem Futura to pause the treadmill, look back, and ask a bigger question: what did this year reveal about the future of being human?

This wasn’t a victory lap. It was a reflection on what resonated, what surprised us, and what it means to build a future-focused show while the future keeps moving.

Metrics Matter… and They Don’t

Yes, growth matters — it helps ideas travel. But podcast analytics are often incomplete and inconsistent, and they rarely capture what impact actually looks like. The most meaningful signals are still human: messages, emails, thoughtful disagreement, and reviews that help someone new discover the show.

If you want to support the show: subscribing, sharing, and leaving a rating/review are still the most helpful actions.
— Modem Futura

The themes that defined our year:

AI, beyond the hype: We kept returning to the same tension — generative tools are everywhere, but “AI” isn’t just a feature set. It’s a cultural force that shapes identity, agency, creativity, and values. We try hard to avoid both the hype machine and the doom loop, and instead stay in the messy middle where the most useful questions live.

Education and learning: We lean into what learning actually is (not just schooling), including John Dewey’s idea that humans are wired for inquiry, communication, construction, and expression. When AI arrives in every document and device, what does it do to those impulses — especially for kids?

Technology in the physical world: From autonomous‑vehicle safety systems that quietly drift out of calibration, to EVs and the persistent “flying car” dream, we explore what happens when shiny promises meet real‑world constraints.

Big questions, no apologies: Yes, we go there — simulation hypotheses, black holes, de‑extinction, space travel, and the edges of what science can (and can’t) explain. These episodes aren’t about “being right.” They’re about expanding the space of possible futures we can imagine.

If there’s one takeaway, it’s this: the future isn’t something that happens to us — it’s something we build together. That’s why we keep showing up each week: to create a shared space for curiosity, skepticism, wonder, and responsible imagination.

If you’ve been listening, thank you. If you’re new here, welcome. And if an episode sparked a thought you can’t shake — share it with a colleague, a student, a friend, or your community. As we step into 2026, we’re excited to keep exploring the possible, probable, and preferable futures — with you.

The Hidden Costs of “That Was Easy”: AI Slop, Creative Friction, and the Future of Human Craft

In this Modem Futura episode, hosts Sean Leahy and Andrew Maynard examine the rise of “AI slop” and the growing cultural pressure to accept frictionless creation as the norm. Drawing on examples from coding, design, futures thinking, and psychology, they unpack how satisficing, homogenization, and inherited power threaten to erode human craft and understanding. The article explores why creative friction is essential for mastery, agency, and meaning — and offers futures-oriented insights into how we can use AI intentionally without losing what makes us human.

[Image: ChatGPT-illustrated version of the Modem Futura YouTube thumbnail]

Generative AI has ushered in an era where producing text, images, video, and code is no longer a challenge — it’s a button press. And in this week’s episode of Modem Futura, Andrew and I wrestle with a growing cultural tension: if everything is easy, what happens to the things that matter?

It began with a shared frustration. Both of us have noticed an explosion of what we call AI slop (content that is technically competent but devoid of care, intention, and personality). You’ve seen it too: the LinkedIn posts with identical emojis, the slide decks that all look like NotebookLM, the essays with no point of view. These things aren’t wrong, they’re just empty. And the emptiness is the point.

We discuss a concept called satisficing: the act of choosing something “good enough” rather than something excellent. In the age of AI, satisficing is increasingly becoming the default mode of creation. Why craft an idea when you can generate one? Why wrestle with a blank page when you can autocomplete your way to the finish line?

But here’s the problem: friction is where learning happens. It’s where creativity lives. It’s the sanding that polishes the stone. When you remove friction, you remove the struggle — and without struggle, there is no mastery, no depth, and no meaning.

Throughout the episode, we explore how this plays out across domains. Coders relying on AI-generated code they can’t understand. Designers accepting images that are “close enough.” Writers sharing posts they didn’t write. And organizations flirting with a future where expertise is replaced by button-pressing.

We draw on Michael Crichton’s concept of inherited power from Jurassic Park: the idea that wielding abilities you never earned leads to carelessness, overconfidence, and danger. AI gives us power we didn’t work for — and without wisdom, that power is hollow.

But this isn’t a pessimistic episode. We explore how AI can amplify creativity when used intentionally, how friction can be designed back into workflows, and why people may ultimately push back against frictionless living. Humans crave meaning, not efficiency. And meaning takes work.

If you’re navigating how to use AI thoughtfully — in your craft, your teaching, your leadership, or your creative life — this episode offers a grounded, futures-focused lens on what we stand to lose and what we still have time to protect.

🎧 Listen to the full episode of Modem Futura — and join the conversation on what we should preserve in an age that wants to eliminate every struggle.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/48WCGgh

🎧 Spotify: https://open.spotify.com/episode/1BajA2SvDWVyY0mRSQ9Flk?si=wvCFhWlgQtC2kye3bGz5Kg

📺 YouTube: https://youtu.be/1V9PD7j8iu8

🌐 Website: https://www.modemfutura.com/