AI in education

Thriving with AI: Two Futures Thinking Tools for Navigating Uncertainty

Illustration of Sean and Andrew presenting their workshop title slide

The question is no longer whether AI will reshape education. It already has. The more interesting question — and the harder one — is how educators, leaders, and institutions can navigate that transformation with clarity, purpose, and agency.

In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard walk listeners through a workshop they developed for ASU's 2026 Folk Fest titled "Thriving with AI: Ethical, Transparent, and Human-Centered Learning." Rather than demonstrating AI platforms or advocating for a particular stance, the session offers two practical thinking tools designed to help individuals make sense of complexity and make intentional decisions — regardless of where they fall on the AI adoption spectrum.

Foresight Methodologies

The Futures Triangle, originally developed by futurist Sohail Inayatullah, is a foresight method that maps three forces shaping any change landscape: the pull of the future (emerging visions and possibilities), the push of the present (trends, pressures, and mandates driving change), and the weight of history (the traditions, values, and institutional structures that resist or ground that change). By making these forces visible, individuals and teams can better orient themselves within the dynamics of change rather than simply reacting to them.

The Intent Map, drawn from Jeffrey Abbott and Andrew Maynard's book AI and the Art of Being Human, complements the triangle by shifting from orientation to action. A simple two-by-two matrix, it asks users to identify four elements: their core values (what they won't compromise), their desired outcomes (what success looks like), their guardrails (the hard boundaries they won't cross), and their metrics (how they'll know if it's working). Critically, the framework recognizes that metrics don't have to be numerical — sometimes the most meaningful indicators of success are qualitative, like a student who can't stop thinking about what they learned.

What makes these tools particularly valuable is their accessibility. Both can be sketched on a scrap of paper. Both work for individuals and teams. And both are domain-agnostic — while the episode frames them in the context of education, they apply equally well to organizational strategy, technology adoption, and personal decision-making.

The episode is anchored by two provocative 2035 headlines: one in which AI tutors outperform human teachers and faculty roles come under review, and another in which human-AI partnership produces the most critically thinking generation in history. The question the workshop poses isn't which headline is more likely. It's which one you want — and what intentional choices you need to make to move toward it.

Thriving with AI, as the hosts frame it, isn't about mastering the latest platform. It's about staying awake to what matters.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3ZXgT2P

🎧 Spotify: https://open.spotify.com/episode/1b1Q0W7YVSGZA2ELYj6g6C?si=wL1sXb-DQsSluBkLYCu9tg

📺 YouTube: https://youtu.be/zi_zvXCt9sY

🌐 Website: https://www.modemfutura.com/   


Asimov's "The Fun They Had" and the Real Cost of AI-Driven Education

Illustration of Asimov's Fun They Had boy reading by mechanical teacher

The History of our Future

More than seventy years ago, Isaac Asimov imagined a future where children learn in isolation, guided by personalized mechanical tutors, and books are relics of a forgotten age. His 1951 short story, "The Fun They Had," is set in 2155, but its questions feel startlingly current.

In the story, a young girl named Margie discovers a paper book and learns about a time when children went to school together—sat in classrooms, were taught by human teachers, and shared the experience of learning with their peers. Her own education is efficient, personalized, and lonely. Her mechanical teacher can diagnose her struggles and recalibrate its approach, but it cannot inspire her, connect with her, or make her feel like she belongs to something larger than a lesson plan.

Asimov didn’t predict AI as we know it. But he predicted the question that matters most: in our rush to optimize education, are we designing out the very things that make learning meaningful?

This is precisely the tension at the heart of today's conversation about AI in education. The promise of AI-powered tutors is real and, in many cases, genuinely valuable: adaptive pacing, instant feedback, content tailored to individual needs. But when personalization becomes the dominant paradigm—when every learner is on a separate track, in a separate space, at a separate time—the communal dimensions of education begin to disappear.

Natural Human Impulses for Learning (not schooling)

John Dewey argued more than a century ago that learning is driven by four natural impulses: inquiry, communication, construction, and expression. Most of these are inherently social. They depend on friction, dialogue, surprise, and the presence of other people. No amount of algorithmic sophistication can fully replicate the moment a teacher's unexpected enthusiasm shifts a student's entire trajectory, or the experience of working through difficulty alongside peers who share the same struggle.

Asimov's story also raises a subtler question about what endures. The book Margie discovers has survived two centuries. The static words on the page—unchanging, tactile, physical—carry a kind of permanence that digital media cannot easily match. This resonates with the growing cultural appetite for analog experiences: vinyl records, film photography, even old iPods. These are not acts of technological rejection. They are expressions of a deeper need for embodied engagement, deliberate choice, and the kind of friction that gives experience its texture.

Where do we go next?

None of this means AI has no place in education. It does, and increasingly will. But Asimov's story is a quiet reminder that the most important things about learning—curiosity, connection, belonging, the joy of shared discovery—are not problems to be optimized. They are human experiences to be protected.

The question is not whether AI can teach us. It's whether, in building systems that teach us more efficiently, we are designing out the very things that made learning worth having in the first place.

*Episode 71 of Modem Futura explores these themes through Asimov's story and a wider conversation about technology, nostalgia, and what it means to learn as a human being.*

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4s1lDk1

🎧 Spotify: https://open.spotify.com/episode/20I5j2DliUnZAbWDiVw7y8?si=WoEW_Zb2SPiynHYb4d8XHA

📺 YouTube: https://youtu.be/TDQc15Muwto

🌐 Website: https://www.modemfutura.com/   

Vibe Coding and the Return of Personal Software

Vintage styled computer terminal with the text "What will you build today?" displayed on screen

The Echo of Early Personal Computing

There was a brief, electric moment in the history of computing—roughly the late 1970s through the mid-1980s—when ordinary people could sit down at a keyboard and make a machine do something it hadn't done before. The Commodore 64, the BBC Micro, the Apple II: these were limited, clunky, and profoundly empowering. For a generation, they opened the door to a kind of creative agency that felt almost magical.

That door closed, gradually, as software became professionalized. The gap between what you could imagine and what you could build widened into a canyon. If you wanted a tool that didn't exist, you needed a developer—or you went without.

Vibe coding is reopening that door.

The term refers to the practice of describing what you want in natural language and letting a generative AI—tools like Claude, ChatGPT, or Copilot—write the code for you. No syntax to memorize. No debugging by hand. You describe your intent, and working software comes back in seconds.

In this episode of Modem Futura, we explore what this shift means—not just technically, but humanly. I demonstrate tools I built from single prompts (also referred to as "one-shots"): a horizon-scanning app for futures research and a two-by-two uncertainty matrix used in strategic foresight. Both were functional on the first attempt. Both took less time to create than it takes to describe them.

The Inherited Power Problem

But the episode resists the temptation to treat this as a simple good-news story. The hosts dig into the real tensions: AI-generated code that no one fully understands, security vulnerabilities baked into apps that reach market before anyone reviews them, the new threat landscape of prompt injection, and the philosophical question of wielding power you haven't earned the literacy to evaluate—what the hosts call "inherited power."

There are also rich implications for education. Rather than relying on off-the-shelf apps that never quite fit, instructors and students alike can now build purpose-specific tools—and in doing so, develop a more grounded understanding of what these AI systems can and cannot do.

The deeper question the episode surfaces is less about code and more about agency. For decades, software was something done to us—platforms we adapted to, interfaces we learned, ecosystems we bought into. Vibe coding hints at a possible reversal: software shaped by the individual, for the individual, in the moment they need it.

Whether that future is liberating or reckless—or both—depends on the kind of literacy, caution, and imagination we bring to it.

Listen to the full conversation on Modem Futura.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4rbOr9r

🎧 Spotify: https://open.spotify.com/episode/28DMXJsM2kEBA2QDxuDmtJ?si=AJpR7zCpRgS2KCCfWwjjWg

📺 YouTube: https://youtu.be/lQGYaiThuBk?si=nRbHVEQk9dwL3gXr

🌐 Website: https://www.modemfutura.com/   

AI in Elementary Education: Teaching Tech to Our Youngest Learners - Episode 51

Teaching AI in Elementary School: Preparing the Youngest Learners for a Digital Future

What happens when a kindergartener feels more comfortable with an iPad than a pair of scissors? Or when a fourth grader wonders aloud whether Alexa is “watching” them? These are not hypotheticals — they are real stories from today’s classrooms, where the line between technology, childhood, and learning is shifting faster than ever.

In our latest episode of Modem Futura, Andrew Maynard and I were joined by educator Tara Menghini, who has spent more than 25 years teaching technology and computer science to K–6 students. Tara’s perspective is invaluable: she sees firsthand how children engage with digital tools, how myths like the “digital native” mislead us, and why teaching judgment and balance is just as important as teaching coding.

A few highlights from our conversation:

The myth of the digital native. Just because kids swipe naturally on a tablet doesn’t mean they understand how technology works — or the trade-offs it creates. They need explicit guidance and context. As I share in the episode, my own anecdotal experience is that many students who would be labeled “digital natives” are poorly equipped to learn how the technology actually works, having accepted its existence as-is without deeper inquiry. Younger generations of students might feel more at home picking up and using a digital technology or service at face value, but that comfort is by no means a measure of their understanding of, or literacy with, that technology.

Balancing screens and hands-on learning. Tara described how kids light up when coding off-screen through design thinking and project-based learning. The goal is not to reject technology, but to show that creativity and problem-solving exist both on and off the screen. Screen time is not a zero-sum game: it can be structured, and, equally important, all kids (and adults, for that matter) are different and can handle varying levels of interaction (yes, I see you at 11 p.m., on your second hour of doom scrolling…). Finding balance is not an instant win; it might take time to land on the right amount. A tip for fellow parents with smaller kids: depending on the platform (Apple, Google, etc.), there are good parental controls to help you manage screen time and content for young learners.

Digital citizenship starts young. From group chats to online games like Roblox and Minecraft, students face social and ethical challenges earlier than ever. Teaching consent around photos, navigating online friendships, and recognizing privacy trade-offs are essential life skills.

  • Roblox Parental Controls [website]

  • Nerdy Birdy Tweets by Aaron Reynolds and Matt Davies (a cautionary tale of impacts of making mistakes online and with social media) [Amazon]

AI in the classroom. While her district limits direct hands-on AI use for students under 13, Tara has found creative ways to teach AI literacy — from classroom debates on “Would you rather read with a human or an AI?” to storybooks that highlight what machines cannot feel or know. This conversation raises many ideas, among them the open question of when it is appropriate to have students directly engage with AI tools and platforms. It is not an easy question: its variables keep changing, and not all AI tools are the same. Is it okay to use a generative AI platform to create images for a project or story? What about generating language? Or using voice models to bring a historic figure “back to life” to make learning more engaging? Where do we draw the line, who draws it, and how do we know when we’ve gone too far?

Parents and teachers as partners. Perhaps most importantly, Tara reminds us that preparing kids for an AI-shaped world isn’t just the job of schools; it will take a literacy village. Parents need to understand the tools their children use, ask questions, and engage in open conversations. Fundamentally this is a societal challenge, and one that cannot be placed squarely on the shoulders of an already taxed educational system.

This episode is as much about the future of learning as it is about the future of being human. Kids today will grow up in a world where AI is a constant presence — but it’s the values we nurture, the skills we model, and the curiosity we encourage that will matter most.

Join the conversation:

We’d love to hear your thoughts: when do you think the most appropriate time for kids to start intentionally engaging with AI is?

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/3KqJ4CJ

🎧 Spotify: https://open.spotify.com/episode/6huSRZQxI8SMUu1ja5BO61?si=pQDBzsqzQTqiWGIcGF5mgg

📺 YouTube: https://youtu.be/d1l_X-7ygbM

🌐 Website: https://www.modemfutura.com/

Futures Thinking: Foresight You Can Use – Episode 49

We don’t predict the future, but we prepare for the uncertainties the futures will bring

Ever been stuck in traffic and thought, “Where’s my eVTOL button?” We open this episode right there—and quickly flip the fantasy into a lesson on systems: technologies don’t fix congestion (or most complex problems) unless policy, behavior, equity, and infrastructure evolve with them. From that launchpad, Sean Leahy and Dr. Andrew Maynard unpack futures thinking as a mindset—distinct from prediction—that helps people and organizations navigate uncertainty with agency. They walk through the classic triad of possible, probable, and preferable futures, then translate it into practice: horizon scanning (signals, trends, megatrends), scenario building, and backcasting from a desired 10‑year outcome to concrete actions today. Along the way, they surface guardrails like avoiding “used futures” (inherited visions of someone else’s desired future) and stress‑testing for unintended consequences, especially for vulnerable communities and the planet.

The conversation ranges widely—think SimCity lessons and Mars‑city thought experiments as mirrors for Earth’s complexity; protopian (step‑by‑step better) versus utopian/dystopian frames; and why foresight shouldn’t be a bolt‑on consultancy only, but a capacity embedded across teams. Educators will appreciate a practical take on bringing futures thinking into K–12 and higher ed without “one more thing”: weave foresight into existing subjects to build creativity, inquiry, and resilience. Pop culture helps, too—using films (à la The Moviegoer’s Guide to the Future) creates a low‑stakes, high‑insight space to explore tough issues together. And for those tracking AI’s breakneck pace, the episode doubles as an antidote to future shock—a way to slow down, widen perspective, and choose well‑considered next steps.

Why it matters: Futures Thinking is for everyone — all humans possess the qualities needed to engage in thinking about our collective futures. Whether you lead a product team, a classroom, or a community, cultivating a futures mindset helps you spot weak signals earlier, align around preferable outcomes, and take action that nudges the world toward human flourishing.

Join the conversation:

What “used future” have you noticed in your field? If you were backcasting from a 2035 future you’d be proud of, what’s the first move you’d make this quarter? Drop your thoughts—and feel free to borrow this episode in your class, team meeting, or strategy offsite.

🎧 Listen to the full episode to dive deeper into practical futures thinking: https://apple.co/4nrAIci

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

🎬 What film has changed the way you think about the future? Drop a comment — we’d love to hear.

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4nrAIci

🎧 Spotify: https://open.spotify.com/episode/1OmUyc6fYdMIZ8thORheOJ?si=ZTQ-ZI7hQzSjNTy3jhjgfQ

📺 YouTube: https://youtu.be/85cTuht_a8k

🌐 Website: https://www.modemfutura.com/

Summer School with AI: Rethinking Learning in the Age of GPT – Episode 41

Summer School with AI: Why “Back to Basics” Isn’t Enough

Why does this episode matter? If you’re charting strategy for schools, workforce development, or lifelong learning, this discussion offers a candid roadmap—and a few provocative questions—for navigating the next decade of educational transformation.

This month on Modem Futura I welcomed Rachna Mathur, Ed.D.—engineer, artist, lifelong learner, and Senior STEM Strategist at ASU Preparatory Academy—to a scorching‑hot Arizona studio for a free‑flowing “summer session” on the future of learning in the age of generative AI.

Our conversation touches on many aspects of learning and AI, but lasers in on what living in a “digital world” means for learning, partly inspired by a headline that shocked many educators: Sweden’s decision to pull back from screens and re‑embrace handwriting and printed books after seeing declines in comprehension and critical‑thinking benchmarks. We explore the move as an important—but incomplete—signal. We argue that the real challenge is finding a sustainable balance between analog depth and digital acceleration: not retreating wholesale from technology, and not leaning into a purely technological solution just for technology’s sake.

The theme of moderation threads the entire episode. We swap Montessori childhood stories—self‑directed, community‑anchored, and surprisingly common among tech leaders—before examining how that philosophy might translate to AI‑rich classrooms where personalization risks isolation if community norms aren’t protected.

We then fast‑forward 50 years to imagine two stark futures: a post‑scarcity Star‑Trek‑style society of flourishing creativity, or the WALL‑E “hover‑chair” dystopia where humans outsource thinking, writing and even curiosity to autonomous agents. In both scenarios, today’s policy and design choices in K‑12 systems carve the path. Should we double‑down on foundational literacies—or teach students how to audit machine output for bias, hallucination and relevance?

We highlight the rising cognitive load on teachers, who are expected to master every “shiny new doodad” while still wearing a dozen other hats. We discuss realistic guardrails: cell‑phone moderation policies; AI readers that empower dyslexic learners; and iterative, living guidelines that evolve alongside the tech itself rather than one‑and‑done declarations.

Finally, we confront the looming content‑collapse problem (the recursive nightmare that may be building right in front of us): models now train on data increasingly generated by other models, a self‑referential “snake eating its own tail” that threatens originality and human perspective. Our shared conclusion? Educators, parents and technologists must collaborate on a middle path that preserves human agency, cultivates critical judgment, and leverages AI as an amplifier—not a crutch.

🎧 Listen on Apple Podcasts: https://apple.co/4o6LwOc

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4o6LwOc

🎧 Spotify: https://open.spotify.com/episode/1E9LMkkOTYvTJVwlb6Ey0F?si=ADMbNGEWSUW-Jdx-h0oBsA

📺 YouTube: https://youtu.be/cvxHCJxahlg

🌐 Website: https://www.modemfutura.com/

Futures of Learning: AI in Education with Punya Mishra – Episode 33

Friction Required: How will emerging technologies like AI reshape the world of education? Sean Leahy, Andrew Maynard, and special guest Punya Mishra cut through the hype to reveal the creative tension, hidden risks, and big-picture futures for AI-powered, human-centered education. How can the power of AI be harnessed without losing the soul of learning?

Friction Required: Re-imagining Learning in an AI World

Generative AI burst onto campuses promising personalized tutoring, instant lesson plans, and anytime feedback. Yet beneath the buzz lies a more provocative question: What, exactly, makes education worth the effort once answers are a prompt away? In this week’s Modem Futura, hosts Sean Leahy and Andrew Maynard sit down with educator-innovator Dr. Punya Mishra to look past the shiny tools and into the messy, human heart of learning.

🎧 Listen on Apple Podcasts: https://apple.co/3ZDH8vg

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

Over an energetic hour they explore why purposeful “friction”—the struggle, inquiry, and face-to-face negotiation of meaning—is still essential. Punya and Sean draw on John Dewey’s four impulses—Inquiry, Communication, Construction, Expression—as a compass for designing AI-infused classrooms that amplify (rather than automate) these deep-learning moments. The trio swap stories of chatbots that spark creativity, debate whether banning tools curbs cheating or curiosity, and ask whether transparency beats top-down rules when it comes to academic integrity.

But the conversation zooms further out. What happens when large language models become persuasive co-teachers? Could Universal Basic Income turn learning into a lifelong pursuit instead of a credentialing race? And might universities act as society’s “flywheel”—a deliberate drag that buys time to think before technology rewrites the rules? The answers aren’t neat, yet they underscore a shared conviction: the future of education must be AI-powered and human-centered.

Key Takeaways

  • Friction is a feature, not a bug. Struggle fosters agency, resilience, and creativity—qualities that instant answers risk eroding.

  • Design for Dewey’s impulses. Use AI to scaffold inquiry, amplify student expression, and make thinking visible, not to short-circuit it.

  • Radical transparency > blanket bans. Open dialog about capabilities, limitations, and ethics beats whack-a-mole policies.

  • Cheating vs. caring. Focus on cultivating authentic motivation; surveillance tech alone can’t fix a trust gap.

  • Universities as sandboxes and speed-bumps. Higher ed can prototype responsible uses and slow premature adoption that harms society.

Whether you’re an instructor drafting next semester’s syllabus, a student exploring new creative tools, or a policymaker worried about the automation of learning, this episode offers frameworks—and questions—to keep humans at the center of the AI revolution.

🎧 Ready for the full conversation? Click below to listen or watch, then let us know how you’re embracing (or resisting) AI in your own learning spaces. And if the discussion sparks ideas, consider sharing this newsletter with a colleague—friction loves company!

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Podcast: https://apple.co/3ZDH8vg

📺 YouTube: https://www.youtube.com/@ModemFutura