Podcasts

AI in Elementary Education: Teaching Tech to Our Youngest Learners - Episode 51

Teaching AI in Elementary School: Preparing the Youngest Learners for a Digital Future

What happens when a kindergartener feels more comfortable with an iPad than a pair of scissors? Or when a fourth grader wonders aloud whether Alexa is “watching” them? These are not hypotheticals — they are real stories from today’s classrooms, where the line between technology, childhood, and learning is shifting faster than ever.

In our latest episode of Modem Futura, Andrew Maynard and I were joined by educator Tara Menghini, who has spent more than 25 years teaching technology and computer science to K–6 students. Tara’s perspective is invaluable: she sees firsthand how children engage with digital tools, how myths like the “digital native” mislead us, and why teaching judgment and balance is just as important as teaching coding.

A few highlights from our conversation:

The myth of the digital native. Just because kids swipe naturally on a tablet doesn’t mean they understand how technology works — or the trade-offs it creates. They need explicit guidance and context. As I share in the episode, my own anecdotal experience is that many students who would be labeled “digital natives” are less equipped with the skills to learn how the technology actually works, having been satisfied with accepting its existence as-is without deeper inquiry. While younger generations of students might feel more at home picking up and using a digital technology or service at face value, that ease is by no means a measure of their understanding of, or literacy with, that technology.

Balancing screens and hands-on learning. Tara described how kids light up when coding off-screen through design-thinking and project-based learning. The goal is not to reject technology, but to show that creativity and problem-solving exist both on and off the screen. Screen time is not a zero-sum game: it can be structured and just as valuable. All kids (and adults, for that matter) are different and can handle varying levels of interaction (yes, I see you at 11 p.m., on your second hour of doom-scrolling). Finding balance is not an instant win; it might take time to find the right amount. A reminder or tip for fellow parents out there with smaller kids: depending on the platform (Apple, Google, etc.), there are great parental controls to help you manage screen time and content for young learners.

Digital citizenship starts young. From group chats to online games like Roblox and Minecraft, students face social and ethical challenges earlier than ever. Teaching consent around photos, navigating online friendships, and recognizing privacy trade-offs are essential life skills.

  • Roblox Parental Controls [website]

  • Nerdy Birdy Tweets by Aaron Reynolds and Matt Davies (a cautionary tale of impacts of making mistakes online and with social media) [Amazon]

AI in the classroom. While her district limits direct hands-on AI use for students under 13, Tara has found creative ways to teach AI literacy — from classroom debates on “Would you rather read with a human or an AI?” to storybooks that highlight what machines cannot feel or know. This conversation raises many thoughts and ideas, among them the open question of when it is appropriate to have students directly engage with various AI tools or platforms. It is certainly not an easy question to answer: the variables behind it are constantly changing, and not all AI tools are the same. Is it okay to use a generative AI platform to create images for a project or story? What about generating language? Or using voice models to bring a historic figure “back to life” to make learning more engaging? Where do we draw the line, who draws it, and how do we know when we’ve gone too far?

Parents and teachers as partners. Perhaps most importantly, Tara reminds us that preparing kids for an AI-shaped world isn’t just the job of schools; it will take the whole literacy village. Parents need to understand the tools their children use, ask questions, and engage in open conversations. Fundamentally this is a societal challenge, and one that cannot be placed squarely on the shoulders of an already taxed educational system.

This episode is as much about the future of learning as it is about the future of being human. Kids today will grow up in a world where AI is a constant presence — but it’s the values we nurture, the skills we model, and the curiosity we encourage that will matter most.

Join the conversation:

We’d love to hear your thoughts: when do you think is the most appropriate time for kids to start intentionally engaging with AI?

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/3KqJ4CJ

🎧 Spotify: https://open.spotify.com/episode/6huSRZQxI8SMUu1ja5BO61?si=pQDBzsqzQTqiWGIcGF5mgg

📺 YouTube: https://youtu.be/d1l_X-7ygbM

🌐 Website: https://www.modemfutura.com/

Sloppy Clankers: Is This AI’s Frankenfood Moment? – Episode 50

Sloppy Clankers: Are We Witnessing AI’s Frankenfood Moment?

In the latest episode of Modem Futura, Andrew Maynard and I explore a cultural shift, and a growing social backlash, that says a lot about where society might be heading with artificial intelligence: the rise of the word clanker.

For those who haven’t stumbled across it on social media sites like TikTok or Reddit, clanker started as a Star Wars term for mindless battle droids. Today, it’s becoming shorthand for AI tools—and increasingly, for the people who use them. Its sibling insult, slopper, has emerged for AI-generated content that feels shallow or mass-produced. At first glance, this may seem like internet silliness. But dig deeper, and it looks like something far more significant: a signal of growing social backlash to generative AI.

We ask a provocative question: could clanker be AI’s “Frankenfood” moment?

Back in the 1990s, a single term—“Frankenfood”—sparked widespread opposition to genetically modified organisms (GMOs), reshaping public perception and consumer habits for decades. Even today, you’ll see “Non-GMO” labels on supermarket shelves, not because of scientific consensus, but because of public unease, mistrust, and a sense of lost agency.

That same dynamic is bubbling around AI. As companies rush to integrate generative tools, public sentiment is turning cautious, even hostile. People are starting to question: Do I trust the content I’m seeing? Is this authentic? Am I being replaced—or manipulated? Labels like “Non-AI” may soon emerge as creators and organizations scramble to signal authenticity.

We also dig into what happens when AI steps into the deeply human spaces of communication and relationships. Should managers outsource sensitive workplace emails to ChatGPT? Should someone rely on AI to write a condolence message? The temptation is real, but the relational costs can be enormous. Outsourcing care, empathy, or creativity risks eroding the trust that makes organizations, friendships, and communities work.

And then there are the legal battles. In this episode, we explore Anthropic’s recent $1.5B settlement with authors whose pirated works were used to train its AI models. It’s a watershed moment in the debate over creativity, copyright, and fair use. Yet it also raises thorny questions: where do we draw the line between inspiration, influence, and appropriation?

So, are we at an inflection point? Will terms like clanker and slopper fade as fleeting memes, or will they crystallize into rallying cries of resistance—like “Frankenfood” did 30 years ago?

As always on Modem Futura, Andrew and I don’t offer final answers, but rather open the space for reflection. These small shifts in language often reveal much larger undercurrents in how we understand technology, society, and ultimately what it means to be human in a rapidly changing world.

Join the conversation:

We’d love to hear your thoughts: do you see clanker as harmless internet slang—or the first sparks of a broader social reckoning with AI? Drop your thoughts—and feel free to borrow this episode in your class, team meeting, or strategy offsite.

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/3KCubgw

🎧 Spotify: https://open.spotify.com/episode/52vm7ThI4AfSPZvINuu3mq?si=xs6G8RnaTtG-pVCewVB_Rw

📺 YouTube: https://youtu.be/2199nHf_PVQ

🌐 Website: https://www.modemfutura.com/

Futures Thinking: Foresight You Can Use – Episode 49

We don’t predict the future; we prepare for the uncertainties that many possible futures will bring

Ever been stuck in traffic and thought, “Where’s my eVTOL button?” We open this episode right there—and quickly flip the fantasy into a lesson on systems: technologies don’t fix congestion (or most complex problems) unless policy, behavior, equity, and infrastructure evolve with them. From that launchpad, Sean Leahy and Dr. Andrew Maynard unpack futures thinking as a mindset—distinct from prediction—that helps people and organizations navigate uncertainty with agency. They walk through the classic triad of possible, probable, and preferable futures, then translate it into practice: horizon scanning (signals, trends, megatrends), scenario building, and backcasting from a desired 10‑year outcome to concrete actions today. Along the way, they surface guardrails like avoiding “used futures” (inherited visions of someone else’s desired future) and stress‑testing for unintended consequences, especially for vulnerable communities and the planet.

The conversation ranges widely—think SimCity lessons and Mars‑city thought experiments as mirrors for Earth’s complexity; protopian (step‑by‑step better) versus utopian/dystopian frames; and why foresight shouldn’t be a bolt‑on consultancy only, but a capacity embedded across teams. Educators will appreciate a practical take on bringing futures thinking into K–12 and higher ed without “one more thing”: weave foresight into existing subjects to build creativity, inquiry, and resilience. Pop culture helps, too—using films (à la The Moviegoer’s Guide to the Future) creates a low‑stakes, high‑insight space to explore tough issues together. And for those tracking AI’s breakneck pace, the episode doubles as an antidote to future shock—a way to slow down, widen perspective, and choose well‑considered next steps.

Why it matters: Futures Thinking is for everyone - all humans possess the qualities needed to engage in thinking about our collective futures. Whether you lead a product team, a classroom, or a community, cultivating a futures mindset helps you spot weak signals earlier, align around preferable outcomes, and take action that nudges the world toward human flourishing.

Join the conversation:

What “used future” have you noticed in your field? If you were backcasting from a 2035 future you’d be proud of, what’s the first move you’d make this quarter? Drop your thoughts—and feel free to borrow this episode in your class, team meeting, or strategy offsite.

🎧 Listen to the full episode to dive deeper into futures thinking you can use: https://apple.co/4nrAIci

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura


If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4nrAIci

🎧 Spotify: https://open.spotify.com/episode/1OmUyc6fYdMIZ8thORheOJ?si=ZTQ-ZI7hQzSjNTy3jhjgfQ

📺 YouTube: https://youtu.be/85cTuht_a8k

🌐 Website: https://www.modemfutura.com/

Films from the Future: Moviegoer’s Guide to Tomorrow – Episode 48

Films from the Future: How Sci-Fi Movies Shape the Way We See Tomorrow

Why do movies like Jurassic Park, Minority Report, or Ex Machina stay with us long after the credits roll? It’s not just the dinosaurs, futuristic tech, or special effects — it’s because these films reflect back to us the deeper questions of what it means to be human in a rapidly changing world.

In our latest Modem Futura episode, Andrew Maynard and I revisit his book Films from the Future: The Technology and Morality of Sci-Fi Movies and the class it inspired, The Moviegoer’s Guide to the Future. The central idea? Films are not only entertainment — they’re cultural tools that help us grapple with profound questions about technology, ethics, and identity.

Take Never Let Me Go, a haunting exploration of cloning and the value of life itself. Or Minority Report, which foreshadowed today’s debates over predictive policing and surveillance technologies. Ex Machina pushes us to consider how easily humans can be manipulated by AI that learns our cognitive biases. And Elysium asks us to confront inequality in access to innovation, healthcare, and privilege. Even Contact, Carl Sagan’s love letter to science, brings us face-to-face with the tension between faith, science, and the human search for meaning.

What makes these films powerful isn’t scientific accuracy — it’s storytelling. Stories give us a playground for exploring possible futures. They allow us to ask “what if?” and to examine how technological choices shape human lives, for better and for worse. And when these stories are shared communally — in theaters, classrooms, or even podcasts like ours — they become catalysts for conversations that spill over into dinner tables, workplaces, and beyond.

For us, this is the heart of futures thinking. By examining the stories we tell, we can better understand the world we’re building, and perhaps make wiser choices about where we’re headed.

🎧 Listen to the full episode to dive deeper into how films shape our futures: https://apple.co/3VHmkka

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

🎬 What film has changed the way you think about the future? Drop a comment — we’d love to hear.

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/3VHmkka

🎧 Spotify: https://open.spotify.com/episode/2cpIyNfnbveNmKC4eh4dV6?si=NiwZzFYNR5-fVrr0jgoNvg

📺 YouTube: https://youtu.be/YHR1xEG4kAo

🌐 Website: https://www.modemfutura.com/

Up in the Air: The Future of eVTOLs and Urban Air Mobility – Episode 47

Flying Cars or Futile Fantasy? The Future of eVTOLs

What if your morning commute didn’t mean fighting through traffic, but instead lifting off above the streets, gliding quietly past congestion, and landing minutes later at your destination? That’s the promise of eVTOLs—electric vertical takeoff and landing aircraft—a technology that feels straight out of The Jetsons yet is edging closer to reality.

In the latest episode of Modem Futura, Andrew Maynard and I explore this emerging frontier of personal and urban mobility. From one-seat recreational flyers like the Jetson One to multi-passenger air taxis under development by companies like Joby and Archer, eVTOLs are being positioned as a transformative solution to congestion, emissions, and urban mobility. But as with every bold vision of the future, the story is more complex.

On the upside, eVTOLs hold real promise. They’re electric, meaning zero emissions at point of use. They could cut cross-town trips from hours to minutes. And they tap into decades of technological advances in drone stabilization, batteries, and sensors. Their potential for reshaping how we think about movement in cities—and even how we design those cities—is tantalizing.

Yet the risks and challenges are significant. Safety tops the list: what happens when experimental craft sharing city skies experience failure? Noise, privacy, and equity loom large as well. Who benefits when public infrastructure is built for vehicles that cost $100,000+? And what does it mean for the rest of society if only the wealthy soar above while others remain stuck below?

History reminds us that every major shift in transportation—from horse-drawn carriages to automobiles—has reshaped not just how we travel, but the very fabric of our communities. eVTOLs could do the same, but we must ask: are we building futures that benefit the few, or the many?

In our conversation, we balance fascination with skepticism. Could eVTOLs open doors to greener, more flexible mobility—or are they an expensive distraction from more equitable solutions like walkable cities, cycling infrastructure, and public transit? Perhaps, as with so many futures, the answer lies in imagination: using today’s innovations not simply to recreate cartoon fantasies, but to envision transportation that elevates all of us.

Special Acknowledgment 

We'd like to acknowledge the partial funding support provided by the US Department of Transportation-sponsored Travel Behavior and Demand National University Transportation Center, led by The University of Texas at Austin. Arizona State University is a consortium member of the Center.

🎧 Listen on Apple Podcasts: https://apple.co/41xnEJS

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/41xnEJS

🎧 Spotify: https://open.spotify.com/episode/07YryAJIk5P6hytdKVHTGq?si=oAfh_V7PR42lMDCr6q26KA

📺 YouTube: https://youtu.be/A3UWx5_LGgg

🌐 Website: https://www.modemfutura.com/

Agentic AI in Education & the Art of Becoming with Punya Mishra – Episode 46

Agentic AI ≠ Automated Learning: Building Real Student Agency

What if AI “agents” did 30% of a student’s work—would the student still be learning? In this week’s Modem Futura, hosts Sean Leahy and Andrew Maynard sit down with ASU’s Punya Mishra for a lively, practical look at agentic AI—not as dashboards and nudges, but as tools that help people become. It’s a warm, grounded conversation for educators, ed‑leaders, parents, and anyone shaping the future of learning.

In this episode, we explore the distinction between agentic AI and automation: if an AI writes the paper, who is developing judgment, taste, and identity? We unpack why offloading tasks is not the same as cultivating agency. Drawing on John Dewey’s four natural impulses—inquiry, construction, communication, and expression—and Seymour Papert’s call for “playgrounds, not playpens,” we frame learning with AI as making and meaning, not simply “study mode.” Punya shares how he used AI to begin reading Odia in order to engage with his mother’s writing, a story that illustrates motivation over gamification—depth no badge can match.

We also challenge “learning management” mindsets, emphasizing that courses are crafted experiences that shape community and identity well beyond content delivery. We contrast classic intelligent tutoring systems with today’s large language models—the brittleness of the former versus the hallucinations of the latter—and identify where each genuinely helps learners. We examine the privacy, surveillance, and “efficiency” traps that datafication creates in schools, making the case for transparent, local, personal AIs—with an explicit kill switch. Finally, we underscore that craft and creativity still matter: Sean’s Final Cut Pro example shows how AI auto‑courses can nail mechanics yet miss the art (think J/L cuts), reminding us that human taste and critique remain essential.

Why this matters

Education is not a pipeline to “becoming X.” It’s a lifelong process of becoming—discovering interests, building capability, and strengthening belonging. AI is powerful precisely when it amplifies these human aims. Our invitation: design AI‑supported playgrounds where learners build, reflect, and share—safely and with agency.

🎧 Listen & share: If you care about AI’s role in real learning, this episode is for you. Share it with a teacher, professor, or parent who’s wrestling with where AI truly helps—and where it doesn’t.

🎧 Listen on Apple Podcasts: https://apple.co/4mA5z6x

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4mA5z6x

🎧 Spotify: https://open.spotify.com/episode/5HBXeUCV9Qf02NqKOfQ7WO?si=DE2hVgX4T_eETxaDxd_vjg

📺 YouTube: https://youtu.be/7FRPXozBEYI

🌐 Website: https://www.modemfutura.com/

AI, Not AI: Riding the Hype Cycle – Episode 45

Agents at the Peak, Humans in the Loop: Navigating the AI Hype Cycle

Every week brings another “breakthrough” headline—agent modes, study modes, version bumps—and it’s getting harder to tell hype from progress. In this new Modem Futura episode, we take a candid, summer‑mode breather to map where AI really sits on the Gartner Hype Cycle, what open‑weight releases mean for builders, and how to keep the human voice intact when co‑authoring with machines. (Yes, we tried not to talk about AI…and failed—because it’s interwoven into every aspect of human activity now.)

What open weights really unlock

Setting aside the current drama around GPT-5, recent open‑weight releases under permissive licenses are a quiet game‑changer. OpenAI has released a pair of open‑weight models (120B and 20B) under the Apache‑2.0 license that you can download from Hugging Face. Translation: you can download the models, run them locally, and adapt them for your own needs—no cloud required (except to download). With capable personal computers (think Apple’s M‑series) or home-built rigs, GenAI LLMs can be run locally on device, and as hardware capacity increases and the sophistication of the models improves, the barrier to entry keeps dropping. This matters because it enables “garage‑scale” innovation: students, labs, startups, and curious tinkerers can now build for their own unique (or weird) local needs rather than waiting for a platform update.

Writing with AI—and protecting the voice

We also dig into human‑AI co‑authoring. Andrew shares a writer’s perspective—AI can draft moving, polished prose, but a subtle sameness creeps in. The fix isn’t anti‑AI; it’s pro‑craft: re‑introduce your “tells,” rhythm, and variance so readers feel a human mind at work. Think editorial sculpture—chipping away until the voice has texture and life. When even an AI editor flags your draft as “too consistent,” it’s a nudge to put the messiness back in. This is what happens when the pendulum swings too far toward perfectly polished AI-generated prose: readers crave authenticity and style, and we need to introduce our human touch back into the machine.

So…where are we on the Hype Cycle?
Whether you’re looking to learn how to interpret this powerful model (tool) or just get some new band name ideas, we explain the curve (innovation trigger → peak of inflated expectations → trough of disillusionment → slope of enlightenment → plateau of productivity) and why agentic AI feels perched at the peak, while day‑to‑day generative AI is edging into the trough—not because it’s useless, but because the shine (overhyped, exaggerated claims of impact) wears off and the real work begins (just look at the backlash to GPT-5). Layer in the diffusion‑of‑innovation model and you’ll see different communities (VCs, educators, enterprises) living on different parts of the curve at the same time.

Image source: pasqal.com

Image source: Gartner

Beyond screens: ambient intelligence

We explore the exciting space of spatial/ambient computing and sensing (I even got to briefly mention LANs, WANs, and PANs)—environments saturated with signals (Wi-Fi, Bluetooth, cellular, NFC) that AIs can interpret in ways we can’t. It raises the question: what happens when machines can interpret the data‑saturated world beyond our comprehension and act within it? That’s where “AI‑not‑AI” lives: less chatbot magic, more embedded intelligence shaping everyday environments. That’s both exciting and unsettling: it demands new conversations about design, privacy, agency, and the futures we actually want to build.


If it resonates, help broaden the conversation: subscribe, share with a colleague, and tell us where you place AI on the Hype Cycle—and where you’re craving more human messiness. As we joked in‑studio, Modem Futura is “on the slope of enlightenment—accepting social investment via ratings and reviews.”


🎧 Listen on Apple Podcasts: https://apple.co/47mypmb

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/47mypmb

🎧 Spotify: https://open.spotify.com/episode/4ReAdtrV7o8WfxeZ0vaKH9?si=dGnDFD03QiW9A3UiUStEEg

📺 YouTube: https://youtu.be/cfHqBJKnGZo

🌐 Website: https://www.modemfutura.com/

Show Me the Receipts: the Futures of AI Super-intelligence – Episode 44

Superintelligence “In Sight”? Cutting Through Hype to Keep Humans in the Loop

If you’ve felt whiplash from this summer’s AI headlines, you’re not alone. In our latest episode of Modem Futura, we unpack Big Tech’s bolder‑than‑bold claim that “developing superintelligence is now in sight”—and ask for something simple before we all sprint into the future: receipts. We break down what companies are signaling when they talk about AI systems that “improve themselves,” why that sounds momentous, and where the marketing ends and the evidence begins.

First, definitions that matter. Today’s tools remain narrow—powerful, yes, but specialized. AGI is the hypothetical jump to general capability; superintelligence (ASI) is the further leap beyond any human capability. We explore why those terms are often moved like goalposts, and why declaring ASI “near” without a stable definition confuses the public, policymakers, and practitioners alike; from my perspective, it is also irresponsible.

Then we zoom into a practical pain point: reliability. When platforms silently change models, tools, or defaults, workflows break (hello GPT-5). In education and professional settings, that unpredictability isn’t just irritating—it’s costly. We share real examples (from transcripts labeled with the wrong speakers to model behavior shifting overnight) and discuss what “enterprise‑grade” should mean for LLMs people depend on.

We also play with the upside—digital twins and imaginative design. If a campus has a high‑fidelity digital twin, why stop at mirroring reality? Why not prototype preferable futures—safer, more inclusive, more sustainable spaces—and test them before we build? Of course, reliability matters there too; when operational systems depend on simulations, unintended tweaks can ripple into the real world.

Across the hour, we push back on technological solutionism—the reflex to cast AI as the single answer to complex, “wicked” problems. Yes, we’re excited about AI’s potential; no, it won’t magically resolve conflict, poverty, or disease without broader social, economic, and political work. Framing ASI as our only lifeboat risks narrowing our imaginations right when we need them most.

Ultimately, we return to our favorite question: What does it mean to be human when machines can emulate so much of what we do? For us, that means staying curious and critical, inviting more diverse perspectives into the conversation, and insisting on transparent claims we can evaluate—before ceding agency to systems we don’t fully understand.

If this episode gave you a useful lens on the AI noise, share it with a colleague, drop a comment with the boldest AI claim you’ve heard and the evidence that would convince you, and subscribe so you don’t miss what’s next.

🎧 Listen on Apple Podcasts: https://apple.co/41Ayf6J

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, jump into the link and listen to the podcast or watch the YouTube video. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on a favorite podcast platform, follow on LinkedIn, and join the conversation by sharing thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/41Ayf6J

🎧 Spotify: https://open.spotify.com/episode/0o1vePcG8wh2VmMtETRKjy?si=Fu0xyDEkSHKB-R3gkKKWoQ

📺 YouTube: https://youtu.be/XO9dLoYhIvY

🌐 Website: https://www.modemfutura.com/

Machines That Bleed: Inside the Future World of Bio‑Hybrid Robotics – Episode 43

Why Bio‑Hybrid Robotics Will Reshape Our Relationship with Technology

What if tomorrow’s rescue drone flapped real muscle, sniffed chemicals with a moth’s antenna, and healed itself like living tissue? This isn’t science fiction—it’s the fast‑emerging world of bio‑hybrid robotics, where biological systems and engineered hardware merge to create devices that are simultaneously alive and machine. In the latest episode of Modem Futura, hosts Sean Leahy and Andrew Maynard sit down with Sean Dudley (Associate Vice President, ASU Knowledge Enterprise) to map this fascinating frontier.

Bio‑hybrid robots could transform minimally invasive surgery, disaster‑zone reconnaissance, and environmental stewardship. Yet they also force us to rethink what counts as life and who (or what) deserves care.

Throughout the conversation, Sean D. walks us through the four pillars of bio-hybrid robotics. They’re briefly outlined here, but to get the full sense of wonder (or ick factor) you’ll need to watch or listen to the full episode. Oh, and be warned: you’ll probably be thinking about this for the rest of the week.

Four Pillars of a Living Machine Future

  1. Micro‑Robots Powered by Microbes. Imagine algae‑propelled drug‑delivery bots navigating the bloodstream. Dudley explains how harnessing microbial metabolism can eliminate the need for bulky batteries while opening doors to precision medicine and targeted environmental cleanup.

  2. Muscle‑Integrated “Musclebots”. By 3‑D‑printing biodegradable scaffolds and seeding them with cultured muscle cells, researchers are building actuators that contract like real tissue—creating soft robots capable of delicate tasks from organ‑on‑chip testing to next‑gen prosthetics.

  3. Cyborg Systems. Neural or electrical interfaces are already steering beetles, eels, and jellyfish, turning animals into agile, low‑power platforms for search‑and‑rescue, deep‑sea exploration, and even atmospheric data‑collection. DARPA’s new HYBRID program is accelerating this work—raising equal measures of excitement and ethical concern.

  4. Living Sensors. Daphnia “canaries” that change swimming patterns in polluted water, plant‑based detectors that fluoresce when exposed to explosives—the conversation highlights how living organisms can outperform silicon in sensitivity, selectivity, and energy efficiency.

Beyond the Lab: Opportunities & Obligations

AI as a Design Partner: Advanced generative models are speeding up “shopping‑list biology,” letting engineers mix‑and‑match tissues, genes, and materials in silico before ever touching a petri dish.

Ethical Imperatives: Where do we draw lines of agency and dignity for augmented organisms? The hosts probe cultural attitudes toward animal welfare, military use cases, and DIY “bio‑punk” experimentation.

Global Governance Gaps: From intellectual‑property battles to cross‑border regulation, the trio stresses the need for international collaboration—before unintended consequences eclipse the technology’s promise.

🎧 Listen on Apple Podcasts: https://apple.co/40Qzfn5

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura


Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/40Qzfn5

🎧 Spotify: https://open.spotify.com/episode/1njCiTX40Q9XQoMpivuiqO?si=647fb6c5e9114d36

📺 YouTube: https://youtu.be/7MhOsxPPb7U

🌐 Website: https://www.modemfutura.com/

Futures of Agentic AI and the 2025 AI Action Plan – Episode 42

A Wet Hot AI Summer: Decoding the U.S. AI Action Plan & the Agentic‑Bot Boom

If you stepped away from your screens and feeds for even a moment this July, you might have missed two massive stories that could shape near-term AI innovation. First, the White House released its 2025 AI Action Plan—a 20-plus-page blueprint built on three pillars: (1) Accelerate AI innovation, (2) Build national AI infrastructure, and (3) Lead global AI diplomacy. If that wasn’t news enough, on July 17th OpenAI announced the rollout of its new “Agent” modes—autonomous-ish bots that promise to book your travel, manage your calendar, and even spend your money while you sleep. Joking aside: please be VERY careful about what sort of access, privacy, and information you give any automated service. Ask yourself, “What’s the worst that could happen?” If the answer makes you cringe or sweat, don’t do that thing. Okay, PSA cautionary rant over… back to the episode notes.

In our latest Modem Futura episode, Andrew and I pull these threads together. We ask whether the Action Plan’s “build‑baby‑build” mantra—complete with massive semiconductor subsidies and calls to “remove regulatory barriers”—is a bold vision or reckless speed run. We also spotlight what’s missing: robust guard‑rails for deepfakes, algorithmic bias, and the colossal energy footprint of new data‑centers.

Switching to agentic AI, we run real‑time tests on OpenAI’s new Agent Mode and compare it with Manus’s more mature workflow. Yes, watching a bot open browser tabs for you is technically impressive—until you realize you can still do most tasks faster yourself. That friction sparks a wider debate:

Productivity paradox – Studies already show teachers and coders spending more time fact‑checking AI output than drafting from scratch.

Privacy trade‑offs – Granting an agent access to your email or bank account may save clicks now, but what’s the long‑term cost to autonomy?

Deepfake backlash – The Plan flags courtroom deepfakes as a national‑security risk, yet leaves broader social harms largely unaddressed.

Behind the policy prose and flashy demos lurks a wider narrative of tech nationalism. The document casts AI as a race the United States must win, positioning allies as followers and China as the ultimate adversary. That framing risks turning open research into a geopolitical arms sprint—one where ethical reflection gets lapped by hype.

So where does that leave forward‑thinking professionals, educators, and creators? We advocate starting the conversations now. Here are some great topics to begin with:

Stay curious but critical. Piloting new agent tools is the best way to spot real value—and red flags—early.

Advocate for “responsible speed.” Innovation and regulation are not mutually exclusive; demand both from vendors and policymakers.

Own your data literacy. Whether you’re vetting deepfake evidence or AI‑generated lesson plans, skepticism is becoming a core career skill.

🎧 Tune in for the full discussion—including Hitchhiker’s Guide jokes, live agent fails, and pragmatic optimism about building a flourishing, not merely faster, future.

🎧 Listen on Apple Podcasts: https://apple.co/4l7eCKC

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura


Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4l7eCKC

🎧 Spotify: https://open.spotify.com/episode/2fI044VpiPE3t4Y9MXrZjJ?si=mJ-xb414R3Ww7IkTOIlT0Q

📺 YouTube: https://youtu.be/6fcOiRYnIK8

🌐 Website: https://www.modemfutura.com/