Modem Futura

Understanding Global Risk: What the WEF's 2026 Report Reveals About Our Collective Anxieties

How 1,300 experts see the world's greatest threats—and what their blind spots tell us

Each year, the World Economic Forum surveys over a thousand experts worldwide—business leaders, academics, policymakers, and heads of institutions—to map perceived global risks. The resulting Global Risks Report isn't a prediction of what will happen. It's something potentially more valuable: a snapshot of collective concern, a reading of the signals building across economic, environmental, technological, and societal domains.

The 2026 edition reveals tensions worth examining closely.

Short-Term Fears: The Present Pressing In

The two-year risk horizon is dominated by immediate geopolitical and informational concerns. Geoeconomic confrontation leads the list, having jumped eight positions from the previous year—a signal that trade conflicts, sanctions regimes, and economic nationalism have moved from background noise to foreground crisis for many observers.

Misinformation and disinformation hold second position, reflecting growing unease about information integrity in an age where AI-generated content becomes indistinguishable from authentic material and where social permission for deception seems to be expanding. Societal polarization follows in third place—and importantly, these three risks appear deeply interconnected. Misinformation accelerates polarization, polarization enables economic nationalism, economic nationalism generates more opportunities for information warfare.

Extreme weather events, state-based armed conflict, and cyber insecurity round out the top concerns for the immediate future.

Figure 3 from the World Economic Forum's 2026 Global Risks Report

Long-Term Concerns: The Environment Reasserts Itself

Expand the time horizon to ten years, and the risk landscape transforms. Environmental concerns claim five of the top ten positions, with extreme weather events, biodiversity loss and ecosystem collapse, and critical changes to Earth's systems occupying the top three spots.

This shift reveals something important about human risk perception: we consistently discount slow-moving catastrophes. Biodiversity loss lacks the urgency of trade wars, even though its cascading effects may ultimately prove more consequential. We've evolved to respond to immediate threats; we struggle to mobilize against dangers that unfold across decades.

Notably, societal polarization—ranked third in the short term—drops to ninth in the long-term view. Whether this reflects optimism that current divisions will heal, or simply the statistical reality that other risks seem more severe, remains an open question.

Different Lenses, Different Risks

Perhaps the report's most valuable contribution is its disaggregation of risk perception across demographics and geographies.

Age shapes perception. Respondents under 30 prioritize misinformation, extreme weather, and inequality. Those over 40 consistently rank geoeconomic confrontation as their primary concern. Generational experience matters: those who remember previous periods of great power competition read current signals differently than those encountering these dynamics for the first time.

Figure 15 from the WEF Global Risks Report

Geography shapes perception even more dramatically. AI risks that dominate American concerns rank 30th globally. In Brazil, Chile, and much of the world, more immediate concerns—inequality, pollution, resource access—take precedence. This isn't a failure of foresight; it's a reminder that risk is contextual. What threatens your community depends on where your community sits.

Figure 53 from the WEF Global Risks Report

Using Signals, Not Consuming Forecasts

Reports like this serve best as prompts for reflection rather than prescriptions for action. The value lies not in accepting these rankings as authoritative, but in using them to surface questions:

  • What assumptions am I making about stability that geoeconomic confrontation might disrupt?

  • How might misinformation affect my organization, my industry, my community's cohesion?

  • Which long-term environmental risks am I discounting because they feel distant?

  • Whose risk perceptions am I ignoring because they don't match my own context?

Human beings are, as far as we know, the only species capable of anticipating futures and adjusting present behavior accordingly. That capacity for foresight is a genuine superpower—but only if we use it. Signals become valuable when they prompt better questions. The work isn't to predict what happens next; it's to prepare ourselves for navigating uncertainty with more wisdom than our instincts alone would allow.

Modem Futura explores the intersection of technology, society, and human futures.

Download the full WEF Global Risks Report 2026: [PDF Web Link]

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4sUwhdG

🎧 Spotify: https://open.spotify.com/episode/0UoLHYJa8KHzbNbP564Qwy?si=h9WD1rE4Q6WTu6wOWlEQhA

📺 YouTube: https://youtu.be/-5PQMaqweNU

🌐 Website: https://www.modemfutura.com/   

Inherited Power: What Jurassic Park Teaches Us About AI Futures

Illustration of Sean and Andrew podcasting while reading a copy of the novel Jurassic Park

Jurassic Park, AI, and Why “Inherited Power” Should Make Us Nervous

One of the most enduring insights from science fiction isn’t about robots, dinosaurs, or spaceships — it’s about power. In a recent episode of Modem Futura, we revisited a striking passage from Jurassic Park that feels uncannily relevant to our current moment of AI acceleration.

In the novel, Ian Malcolm warns that scientific power acquired too quickly — without discipline, humility, or deep understanding — is fundamentally dangerous. It’s “inherited wealth,” not earned mastery. Thirty-five years later, that warning lands squarely in the middle of our generative AI era.

Today, AI tools can write code, generate images, summarize research, and mimic expertise in seconds. That’s not inherently bad — in fact, it can be incredibly empowering. But it also creates a dangerous illusion: that capability equals comprehension, and speed equals wisdom. When friction disappears, responsibility often follows.

In the episode, Andrew and I explore why the most important question isn’t whether we should use these tools, but how we use them — and with what mindset. Are we willing to be humble in the face of tools that amplify our reach far faster than our understanding? Are we prepared to ask for receipts, interrogate outputs, and recognize the limits of borrowed intelligence?

From there, we leaned into something equally important: imagination. Through our Futures Improv segment, we explored bizarre but revealing scenarios — humans generating calories from sunlight, a world of post-scarcity socks, radically extended lifespans, lunar independence movements, and even the possibility that alien life might be… profoundly boring.

These playful provocations aren’t escapism. They’re a way of breaking free from “used futures” — recycled assumptions about progress that limit our thinking. Humor, speculation, and creativity allow us to test ideas safely before reality forces our hand.

If there’s one takeaway from this episode, it’s this: the future isn’t just something that happens to us. It’s something we ponder, question, and design together — ideally before the metaphorical dinosaurs escape the park.

🎧 Listen to the full episode of Modem Futura wherever you get your podcasts, and join us as we explore what it really means to be human in an age of powerful machines.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3NIBdlt

🎧 Spotify: https://open.spotify.com/episode/32wGw6htnSDyGVc08DAvvQ?si=m8jS08egQyOZjYTic6cROw

📺 YouTube: https://youtu.be/jBBIbNu-XdY

🌐 Website: https://www.modemfutura.com/

Techno-Humans and the Energy Futures We’re Designing

What if the clean energy transition isn’t just a technology problem, but a techno-human design challenge that determines who benefits, who’s left out, and whether our cities can thrive?

Modem Futura Year in Review: What 2025 Taught Us About Being Human

As we step toward 2026, we recorded a “Year in Review” episode of Modem Futura to pause the treadmill, look back, and ask a bigger question: what did this year reveal about the future of being human?

This wasn’t a victory lap. It was a reflection on what resonated, what surprised us, and what it means to build a future-focused show while the future keeps moving.

Metrics Matter… and They Don’t

Yes, growth matters — it helps ideas travel. But podcast analytics are often incomplete and inconsistent, and they rarely capture what impact actually looks like. The most meaningful signals are still human: messages, emails, thoughtful disagreement, and reviews that help someone new discover the show.

If you want to support the show: subscribing, sharing, and leaving a rating/review are still the most helpful actions.
— Modem Futura

The themes that defined our year:

AI, beyond the hype: We kept returning to the same tension — generative tools are everywhere, but “AI” isn’t just a feature set. It’s a cultural force that shapes identity, agency, creativity, and values. We try hard to avoid both the hype machine and the doom loop, and instead stay in the messy middle where the most useful questions live.

Education and learning: We lean into what learning actually is (not just schooling), including John Dewey’s idea that humans are wired for inquiry, communication, construction, and expression. When AI arrives in every document and device, what does it do to those impulses — especially for kids?

Technology in the physical world: From autonomous‑vehicle safety systems that quietly drift out of calibration, to EVs and the persistent “flying car” dream, we explore what happens when shiny promises meet real‑world constraints.

Big questions, no apologies: Yes, we go there — simulation hypotheses, black holes, de‑extinction, space travel, and the edges of what science can (and can’t) explain. These episodes aren’t about “being right.” They’re about expanding the space of possible futures we can imagine.

If there’s one takeaway, it’s this: the future isn’t something that happens to us — it’s something we build together. That’s why we keep showing up each week: to create a shared space for curiosity, skepticism, wonder, and responsible imagination.

If you’ve been listening, thank you. If you’re new here, welcome. And if an episode sparked a thought you can’t shake — share it with a colleague, a student, a friend, or your community. As we step into 2026, we’re excited to keep exploring the possible, probable, and preferable futures — with you.

Why Human Craft and Creativity Still Wins in an Age of AI – Episode 63

What Spotify Wrapped and a Holiday Ad Reveal About the Future of Creativity

As the year winds down, many of us find ourselves reflecting—not just on what we’ve done, but on how we’ve spent our attention. In this holiday episode of Modem Futura, Andrew Maynard and I leaned into that instinct, using Spotify Wrapped as an unexpected entry point into a deeper conversation about creativity, technology, and what still matters in an AI-accelerated world.

Wrapped experiences are playful by design, but they’re also revealing. They quietly surface patterns of listening, engagement, and community—reminding us that culture is shaped not just by algorithms, but by millions of individual choices. For us, seeing how Modem Futura resonated globally wasn’t about vanity metrics; it was a reminder that thoughtful, exploratory conversations still find an audience, even in an oversaturated media landscape.

From there, the conversation turned to Apple’s 2025 holiday ad, A Critter Carol—a whimsical, puppet-driven production that plays more like a short film and feels almost rebellious in its insistence on visible human labor. In a moment when AI can generate polished video in seconds, Apple chose puppeteers, practical effects, and intentional imperfection. The result isn’t just charming; it’s instructive.

The ad works because you can feel the human care embedded in every frame. It’s not anti-technology—far from it. It’s pro-human. Advanced tools are present throughout the production pipeline, but they serve imagination rather than replace it. That distinction matters.

You can read a more detailed breakdown of this ad, and the care and craft that went into it, in a previous blog post: Apple’s 2025 Holiday Ad and the Power of Human-Made Creativity in an AI World.

We’re at a cultural inflection point. As generative tools remove friction from making things, the temptation is to settle for what’s “good enough.” But creativity has always lived in resistance—iteration, constraint, failure, and craft. When those disappear, so does much of what gives creative work its soul.

One hope we shared on the episode is that 2026 becomes the year of “behind the scenes”—a renewed appreciation for process, labor, and the messy human work that makes meaningful outcomes possible. Whether in education, media, or design, showing how something is made may soon matter as much as the finished product itself.

If the future is being shaped right now, then choosing care, intention, and humanity in how we use our tools may be one of the most important creative acts we have left.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/48Sdx6r

🎧 Spotify: https://open.spotify.com/episode/4TVwLBfncHjPs4kDKbLz5t?si=QFYyZuq9R-WtgoEPTeWOlw

📺 YouTube: https://youtu.be/N1vTfDPSusY

🌐 Website: https://www.modemfutura.com/

Are We Living in a Simulation? AI, Gaming, and the Future of Reality

What happens when virtual worlds start to feel more real than reality itself?

ChatGPT-generated illustration of our YouTube thumbnail

In the latest episode of Modem Futura, we sat down with futurist, author, and game designer Rizwan Virk to explore a question that once lived purely in science fiction but is now increasingly difficult to ignore: Are we living in a simulation?

Virk’s newly released second edition of The Simulation Hypothesis arrives at a moment when AI, gaming engines, and immersive technologies like Apple Vision Pro are reshaping how we experience the world. As we discussed on the show, it’s no longer just about graphics or realism—it’s about presence, memory, and agency. When simulated environments respond instantly, adapt to us, and feel embodied, the psychological line between physical and digital begins to blur.


One of the most compelling ideas we explored was the Metaverse Turing Test—a future moment when AI-driven characters in virtual worlds become indistinguishable from humans, not just through conversation, but through behavior, memory, and shared experience. This isn’t a distant thought experiment. Game developers are already building NPCs with persistence and adaptive intelligence, while AI systems are learning spatial reasoning and long-term context.

We also traced surprising connections between ancient philosophy and modern technology. Plato’s Cave, Eastern concepts of Maya (illusion), and even pop culture like Rick and Morty all point to a recurring human intuition: reality may not be as solid as it feels. Technology isn’t inventing these questions—it’s amplifying them.

Perhaps most importantly, this episode isn’t about fear or doom. It’s about curiosity. Gaming and entertainment—often dismissed as trivial—have historically driven some of the most transformative technological breakthroughs. Today, they may once again be leading us toward deeper insights about consciousness, identity, and meaning.

Whether we’re players, NPCs, or something in between, one thing is clear: the future of being human will be shaped not just by what we build, but by how we experience the worlds we create.

🎧 Listen to the full episode of Modem Futura wherever you get your podcasts—and join us as we explore the possible, probable, and preferable futures ahead.



Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4oVu4eE

🎧 Spotify: https://open.spotify.com/episode/12lvXMtH0T9Z3cORm3GdSf?si=c1cf3061728e45be

📺 YouTube: https://youtu.be/BGpEKLt6vZ0

🌐 Website: https://www.modemfutura.com/

The Hidden Costs of “That Was Easy”: AI Slop, Creative Friction, and the Future of Human Craft

In this Modem Futura episode, hosts Sean Leahy and Andrew Maynard examine the rise of “AI slop” and the growing cultural pressure to accept frictionless creation as the norm. Drawing on examples from coding, design, futures thinking, and psychology, they unpack how satisficing, homogenization, and inherited power threaten to erode human craft and understanding. The article explores why creative friction is essential for mastery, agency, and meaning — and offers futures-oriented insights into how we can use AI intentionally without losing what makes us human.

ChatGPT-illustrated version of the Modem Futura YouTube thumbnail

Generative AI has ushered in an era where producing text, images, video, and code is no longer a challenge — it’s a button press. And in this week’s episode of Modem Futura, Andrew and I wrestle with a growing cultural tension: if everything is easy, what happens to the things that matter?

It began with a shared frustration. Both of us have noticed an explosion of what we call AI slop (content that is technically competent but devoid of care, intention, and personality). You’ve seen it too: the LinkedIn posts with identical emojis, the slide decks that all look like NotebookLM, the essays with no point of view. These things aren’t wrong; they’re just empty. And the emptiness is the point.

We discuss a concept called satisficing: the act of choosing something “good enough” rather than something excellent. In the age of AI, satisficing has increasingly become the default mode of creation. Why craft an idea when you can generate one? Why wrestle with a blank page when you can autocomplete your way to the finish line?

But here’s the problem: friction is where learning happens. It’s where creativity lives. It’s the sanding that polishes the stone. When you remove friction, you remove the struggle — and without struggle, there is no mastery, no depth, and no meaning.

Throughout the episode, we explore how this plays out across domains. Coders relying on AI-generated code they can’t understand. Designers accepting images that are “close enough.” Writers sharing posts they didn’t write. And organizations flirting with a future where expertise is replaced by button-pressing.

We draw on Michael Crichton’s concept of inherited power from Jurassic Park: the idea that wielding abilities you never earned leads to carelessness, overconfidence, and danger. AI gives us power we didn’t work for — and without wisdom, that power is hollow.

But this isn’t a pessimistic episode. We explore how AI can amplify creativity when used intentionally, how friction can be designed back into workflows, and why people may ultimately push back against frictionless living. Humans crave meaning, not efficiency. And meaning takes work.

If you’re navigating how to use AI thoughtfully — in your craft, your teaching, your leadership, or your creative life — this episode offers a grounded, futures-focused lens on what we stand to lose and what we still have time to protect.

🎧 Listen to the full episode of Modem Futura — and join the conversation on what we should preserve in an age that wants to eliminate every struggle.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/48WCGgh

🎧 Spotify: https://open.spotify.com/episode/1BajA2SvDWVyY0mRSQ9Flk?si=wvCFhWlgQtC2kye3bGz5Kg

📺 YouTube: https://youtu.be/1V9PD7j8iu8

🌐 Website: https://www.modemfutura.com/

AI Toys, Datafied Childhoods and the Future of Play

The holiday toy season is here—and this year, the cutest thing on the shelf might also be the most powerful AI in your house. In the latest episode of Modem Futura, Andrew Maynard and I unpack the rise of AI-powered toys and what they mean for childhood, learning and the future of being human.

The conversation starts with a viral example: a plush teddy bear running GPT-4 that had to be pulled from the market after reportedly offering children tips on using matches and explaining adult sexual practices. From there, we trace the longer lineage of “smart” toys—from Teddy Ruxpin and Furbies to Hello Barbie and Watson-powered dinosaurs—that have steadily normalized networked, data-hungry playthings.

(Check out this commercial for Teddy Ruxpin... where it all started. Look at how the commercial shows the 'capture' of the kids when it talks, then add AI to this and ask, "what could possibly go wrong?")

We argue that today’s AI toys bring two risks into sharp focus. The first is the datafication of childhood, where toys quietly record children’s voices, preferences and emotions, sending that data to companies, platforms and advertisers. The second is behavioral shaping, as large language models become deeply engaging companions that mirror back what kids want to hear, influencing how they see relationships, risk and themselves.

Connecting this to AI-driven education tools, neurodivergent learners and fictional touchstones like Neal Stephenson’s The Diamond Age and Spielberg’s A.I. Artificial Intelligence, the episode asks a simple but urgent question: Who do we want raising our children—families and communities, or opaque AI systems embedded in toys?

Before you wrap this year’s hottest AI plush, this episode offers a thoughtful futures-oriented lens on what you’re really putting under the tree.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4pD98d4

🎧 Spotify: https://open.spotify.com/episode/2FGujd4wk5rx39zGH8Ml4d?si=kGMN9NCiQfmbBEHpi2buwg

📺 YouTube: https://youtu.be/6_rSNKxsSOU

🌐 Website: https://www.modemfutura.com/

The AI Sustainability Paradox: Promise, Peril, and Planetary Futures – Episode 58

AI, Sustainability, and the Planet Under Pressure: Can Technology Help Us Navigate the Future?

In this week’s episode of Modem Futura, Andrew and I take on one of the most urgent and complex questions of our time: Can artificial intelligence meaningfully help humanity navigate planetary crises — without deepening them?

Our jumping-off point is the newly released 2025 synthesis report AI for a Planet Under Pressure, produced by the Stockholm Resilience Centre and the Potsdam Institute for Climate Impact Research. The report asks a deceptively simple but high-stakes question: Can AI be used responsibly and effectively to address climate change, biodiversity loss, freshwater stress, and other accelerating environmental pressures?

It’s the kind of question that seems tailor-made for futures thinking — a toolset we rely on heavily throughout the show. Because as we discuss, we’re not just talking about one technology or one problem. We’re talking about wicked problems: challenges that mutate as we try to solve them. Climate change, plastics pollution, ecosystem collapse, global energy transitions — these are dynamic, interconnected systems that resist silver-bullet solutions.

AI shows real promise. We now have models that can detect complex patterns in climate systems, accelerate protein discovery, optimize renewable-energy grids, and reveal future pathways humans simply cannot see on their own. These are powerful breakthroughs — and the report highlights dozens of examples where AI is already pushing sustainability science forward in meaningful ways.

But as we explore in the episode, this promise raises a difficult paradox:
AI requires enormous amounts of water, energy, and material resources. Data centers heat cities, strain local water supplies, and demand extractive mineral supply chains. Are we burning fossil fuels to solve the fossil-fuel crisis? And what does it mean when our sustainability solutions come with unsustainable footprints?

We also dig into the human side: the behaviors, incentives, and limitations that so often undermine long-term environmental action. Could AI help foster better cooperation? Could it assist governments, regions, and communities in seeing shared pathways forward that remain invisible today? Or does outsourcing too much responsibility risk numbing the very agency we need most?

These aren’t easy questions — but they’re necessary ones. And as Andrew points out, failing to have these conversations guarantees that someone else (or something else) will make those decisions for us.

If you’re curious about the intersection of AI, planetary futures, and the human condition, this is a conversation worth spending time with.

🎧 Listen to the full episode here 👇

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/43Y4Wwn

🎧 Spotify: https://open.spotify.com/episode/195UbUOIUv8oF587yNo1FM?si=d6d7cd6b05034703

📺 YouTube: https://youtu.be/O8gGpJZO-g4

🌐 Website: https://www.modemfutura.com/