Futures Thinking

The Future From a Kid's Perspective: What a 10-Year-Old Thinks About AI, Jobs, and Meaningful Work

We spend a lot of time talking about young people when we discuss the future of technology. We debate how AI will affect their education, reshape their careers, and transform the world they'll inherit. But we rarely stop to ask them what they think.

In this special episode of Modem Futura, we did exactly that. Freddie Leahy—co-host Sean's almost-10-year-old son—joined us for an unscripted conversation about artificial intelligence, meaningful work, and the questions that don't have easy answers.

Already Thinking About Job Displacement

When asked what he thinks about when he imagines the future, Freddie's first response wasn't about flying cars or space travel. It was about jobs.

"I kind of more think about the AI part of the future," he said. "And I'm just wondering what jobs will be overran by AI."

He's almost ten. And he's already calculating whether his dream career—paleontology—will exist by the time he's ready to pursue it.

This isn't abstract concern. Freddie has a specific vision: he wants to be like Alan Grant from Jurassic Park, out in the field, hands in the dirt, discovering fossils himself. When we suggested that AI might help him find more dinosaur bones faster, he didn't immediately embrace the idea. His worry isn't about efficiency—it's about being separated from the work itself.

"I would be doing it not for the money," he explained, "just because of the experience."

The Limits of AI Creativity

Freddie has firsthand experience with generative AI. He and I have spent time creating AI-generated images—D&D characters, fantasy creatures, book covers. But he's noticed something that many adults are also discovering: the gap between imagination and output.

"Every time you create an AI image," he said, "you never feel like it's quite right. So you just keep making these, and then you have to choose one, but in the end it never feels like the perfect cover you wanted."

When asked why, his answer was simple: "AI isn't our heads."

This observation—from a fourth-grader—gets at something fundamental about the current state of generative tools. They can produce impressive outputs, but they can't access the specific vision in your mind. The friction between prompt and result isn't just a technical limitation; it's a gap between human intention and machine interpretation.

When it comes to his own writing—Freddie is working on stories—he's clear that he doesn't want AI assistance. The temptation exists, especially when facing a blank page. But he recognizes something important: "It's the point about using your own creativity."

Suspicious of AI Companions

One of the most revealing exchanges came when we explored the idea of AI friendship. What if Freddie could have an AI companion who shared all his interests—someone who wanted to talk about dinosaurs as much as he does?

His response was immediate skepticism.

"That would be weird," he said, "because nobody likes what I like."

The very thing that might make an AI friend appealing—perfect alignment with his interests—is exactly what made it feel inauthentic. Part of what makes his interests meaningful is that they're his, distinct from the people around him. An AI that mirrored them perfectly would feel hollow.

When pressed further about whether he'd want an AI as a secret companion—a sort of digital spirit animal—Freddie remained uncertain. "Who knows what it could do," he noted. "It could hack everything."

There's healthy skepticism there, but also something deeper: a sense that friendship involves more than shared interests. It involves trust, vulnerability, and the unpredictability of another mind.

"I Refuse": Mind Uploading at Nine

During our Futures Improv segment, we posed a classic transhumanist scenario: What if you could upload your consciousness to a computer and live forever digitally, while your biological body remained behind?

Freddie's answer required no deliberation:

"I refuse. I will not upload my brain into a digital computer."

His reasoning was practical but profound. At nine years old, why would he abandon a body that works? The theoretical benefits of digital immortality don't outweigh the immediate reality of physical experience.

This perspective offers a useful counterweight to futures discourse that sometimes treats technological transcendence as obviously desirable. From Freddie's vantage point, the question isn't whether we can escape biological limitations, but whether we'd want to—and what we might lose in the process.

Questions Without Right Answers

Perhaps the most important takeaway from this conversation came near the end, when Freddie observed something about the nature of our questions.

"Because of all these questions," he said, "there is no wrong or right answer."

That's exactly right. The value of futures thinking isn't in predicting what will happen or determining the "correct" response to emerging technologies. It's in learning to sit with uncertainty, explore tensions, and develop our capacity for navigating complexity.

At almost ten years old, Freddie already understands this. He's not looking for definitive answers about AI and jobs and creativity. He's learning to ask better questions—and to recognize that asking them is more important than resolving them.

What the Future Thinks About Itself

We often frame conversations about technology and youth as adults preparing children for a world we're creating. But this episode suggests something different: young people are already thinking about these issues, often with more nuance than we might expect.

Freddie isn't anti-technology. He plays VR games, makes AI art, and follows developments in the field. But he's also holding onto something—a sense that some experiences are valuable precisely because we do them ourselves, that the struggle of creation is part of its meaning, and that efficiency isn't the only measure of a good life.

These aren't lessons we taught him. They're insights he's developing on his own, as he navigates a world where these technologies are simply part of the landscape.

Maybe the best thing we can do isn't to tell young people what the future will look like. Maybe it's to listen to what they already think about it—and learn from their perspective.

I don't know what the future holds for his generation. But if this conversation is any indication, they're thinking about it more carefully than we might expect.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4khmVES

🎧 Spotify: https://open.spotify.com/episode/5nKjpEVZcaUDisdZpGGaMZ?si=YgWp_O84T1yVlBSloedV1w

📺 YouTube: https://youtu.be/mfumkJZav-M

🌐 Website: https://www.modemfutura.com/   

Inherited Power: What Jurassic Park Teaches Us About AI Futures

Illustration of Sean and Andrew podcasting while reading a copy of Jurassic Park the novel

Jurassic Park, AI, and Why “Inherited Power” Should Make Us Nervous

One of the most enduring insights from science fiction isn’t about robots, dinosaurs, or spaceships — it’s about power. In a recent episode of Modem Futura, we revisited a striking passage from Jurassic Park that feels uncannily relevant to our current moment of AI acceleration.

In the novel, Ian Malcolm warns that scientific power acquired too quickly — without discipline, humility, or deep understanding — is fundamentally dangerous. It’s “inherited wealth,” not earned mastery. Thirty-five years later, that warning lands squarely in the middle of our generative AI era.

Today, AI tools can write code, generate images, summarize research, and mimic expertise in seconds. That’s not inherently bad — in fact, it can be incredibly empowering. But it also creates a dangerous illusion: that capability equals comprehension, and speed equals wisdom. When friction disappears, responsibility often follows.

In the episode, Andrew and I explore why the most important question isn’t whether we should use these tools, but how we use them — and with what mindset. Are we willing to be humble in the face of tools that amplify our reach far faster than our understanding? Are we prepared to ask for receipts, interrogate outputs, and recognize the limits of borrowed intelligence?

From there, we leaned into something equally important: imagination. Through our Futures Improv segment, we explored bizarre but revealing scenarios — humans generating calories from sunlight, a world of post-scarcity socks, radically extended lifespans, lunar independence movements, and even the possibility that alien life might be… profoundly boring.

These playful provocations aren’t escapism. They’re a way of breaking free from “used futures” — recycled assumptions about progress that limit our thinking. Humor, speculation, and creativity allow us to test ideas safely before reality forces our hand.

If there’s one takeaway from this episode, it’s this: the future isn’t just something that happens to us. It’s something we ponder, question, and design together — ideally before the metaphorical dinosaurs escape the park.

🎧 Listen to the full episode of Modem Futura wherever you get your podcasts, and join us as we explore what it really means to be human in an age of powerful machines.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3NIBdlt

🎧 Spotify: https://open.spotify.com/episode/32wGw6htnSDyGVc08DAvvQ?si=m8jS08egQyOZjYTic6cROw

📺 YouTube: https://youtu.be/jBBIbNu-XdY

🌐 Website: https://www.modemfutura.com/

The Metaverse – A Stack of Reality Layers – Episode 57

Layers of Reality: Exploring the Metaverse Stack

When the headset comes off, does the world you were just in disappear—or does it linger somewhere between your senses and memory?

In our latest episode of Modem Futura, Andrew Maynard and I explore the metaverse as more than a corporate buzzword or sci-fi dream. We approach it as a continuum of realities — a multi-layered “stack” that spans the physical and digital, each tier more immersive than the last.

From our own immersive sessions with the Apple Vision Pro, we reflect on that strange moment of re-entry—when the headset comes off and the world feels slightly less real. It’s a feeling that raises existential questions about presence, identity, and how AI-generated worlds are shaping the boundaries of human experience.

In this episode, we trace the metaverse’s origins from Neal Stephenson’s Snow Crash to today’s spatial computing revolution. We ask what happens when digital spaces become persistent and indistinguishable from physical ones—and why futures thinking is essential for guiding that transition responsibly. From procedurally generated AI environments to the idea of “digital sustainability,” we discuss how these technologies will reshape privacy, ethics, and our collective sense of reality.

Ultimately, this conversation is about our tethers to truth. In an age of deeply immersive AI systems and blended realities, how do we find our totem—our anchor that keeps us grounded in what matters most? We believe that intentional design, transparency, and care must guide how we build these new worlds before they begin to build us.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4p7ZZcr

🎧 Spotify: https://open.spotify.com/episode/2C5LiGRYCdZgr5JijtK7LI?si=0FbAEihfTD6QXX5FN-2nag

📺 YouTube: https://youtu.be/iCAtutBmN5w

🌐 Website: https://www.modemfutura.com/

Atlas, Higher Education, and How We Really Feel About AI – Episode 55

Generated by ChatGPT

How We Really Feel About AI

Artificial intelligence isn’t just reshaping industries—it’s reshaping emotions. In our latest episode of Modem Futura, Andrew Maynard and I unpack Pew Research Center’s new international survey on how people view AI across 25 countries. The results are striking: while AI dominates headlines, 61 percent of respondents have heard “little or nothing” about it—and in the U.S., more people are concerned than excited.

We explore what this says about the bubbles we all inhabit. In our professional worlds, AI feels inescapable—a “black hole” pulling every conversation toward it. But outside these circles, many remain unaware of how deeply algorithmic systems already shape their lives. This disconnect raises profound questions about agency: if people don’t understand a technology that governs so much of modern life, who’s really steering the future?

The conversation also turns to trust. Europeans report more faith in their governments to regulate AI responsibly than Americans do—a difference that mirrors the EU’s proactive stance on ethics and guardrails. In contrast, U.S. policy remains fragmented, driven more by economic competition than by social cohesion. That tension—between speed and safety, innovation and inclusion—is at the heart of our discussion.

Dangers of Atlas (it's not so much the world he's holding... it's all of your data!)

We then zoom in on universities. As public institutions, we argue they must act as transparent laboratories for AI exploration—spaces where successes and failures alike can be shared openly. The path forward isn’t boosterism; it’s honesty. We can’t earn public trust without showing our messy work.

Adding to this, we explore OpenAI’s newly released Atlas browser and its privacy implications, and stumble upon a very shocking realization while recording: private student data is easily exfiltrated when using LMS platforms such as Canvas inside this browser. What started as a curious road test of Atlas turned into a genuinely alarming discovery. As we installed and explored it, the first surprise was how quickly the “privacy” pitch unraveled. Despite promising user control, Atlas behaved much like any other data-hungry app, prompting unusual permissions and positioning itself between us and the open web. That sparked a bigger question: if your browser is the main keyhole to the internet, what happens when an AI system sits in the keyhole, quietly learning from everything you do (including sending and processing private user data: think anything that comes and goes from your activity on the web)?

The situation quickly got worse when I was unable to get Atlas to show me any website’s security certificate. Specifically, at the time of testing (I hope they patch this soon), I could not see a site’s SSL/TLS (HTTPS) certificate—the little lock icon in most browsers’ address bars—that tells you and the browser that your connection to the site’s servers is “safe” via web-standard encryption protocols. You know, for things like your bank account info, passwords, and any other data you want to send or receive.

The takeaway? Treat Atlas like a red-hot security hole for you and any organization or person connected to you via the internet. Think twice before using this browser, since everything you do on the web is potentially egressed (a fancy word for taken). Until there are transparent safeguards in place, the safest assumption is that anything you open in an AI browser like Atlas could be seen, summarized, and stored beyond your intent.
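For what it’s worth, you don’t have to take a browser’s word for any of this. Here is a minimal, illustrative sketch (my own addition, not something we walked through in the episode; the hostname is just a placeholder) showing how Python’s standard ssl module can open a connection and print the same certificate details that the lock icon normally summarizes:

# Minimal sketch, assuming Python 3; the hostname below is only a placeholder.
# Opens a TLS connection and prints the certificate details a browser's
# lock icon would normally summarize for you.
import socket
import ssl

def show_certificate(hostname: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # verifies against the system's trusted CAs
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("TLS version:", tls.version())
            print("Issued to: ", dict(pair[0] for pair in cert["subject"]))
            print("Issued by: ", dict(pair[0] for pair in cert["issuer"]))
            print("Expires:   ", cert["notAfter"])

show_certificate("example.com")  # swap in whatever site you want to check

If the handshake fails or the issuer looks wrong, that is exactly the warning a browser’s lock icon is supposed to surface for you.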



And because no Modem Futura episode is complete without a bit of speculative play, we end with Futures Improv, where we imagine AI zombies, memory economies, and spaghettified timelines—all to remind ourselves that foresight and humor often travel hand in hand.

If you care about how humanity and technology co-evolve, this episode offers a grounded yet playful map of where we stand—and where we might go next.



Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4nuRsit

🎧 Spotify: https://open.spotify.com/episode/04NLQcVfJpoM444bQQfQ42?si=b9adf3740a0944fe

📺 YouTube: https://youtu.be/tYoLRZH5iH8

🌐 Website: https://www.modemfutura.com/

Futures Futures Futures - ShapingEDU GCSS23

Artist live drawing of concepts from talk

In February I was invited to present a handful of lectures and workshops to the ShapingEDU community at the ShapingEDU Global Community Solutioneering Summit 2023. The event was split over several days, starting with a virtual component where I presented a short lecture on Becoming A Citizen Futurist.


What could we accomplish together that we couldn’t accomplish alone?

For this event, we chose the theme of Education As Jazz. The Smithsonian Institution eloquently summarized the importance of jazz: “Often acclaimed as America’s greatest art form, jazz has become accepted as a living expression of the nation’s history and culture, still youthful, difficult to define and impossible to contain, a music of beauty, sensitivity, and brilliance that has produced (and been produced by) an extraordinary progression of talented artists.”
— ShapingEDU GCSS23

February 16 - Virtual Component

Lecture on Becoming A Citizen Futurist: Preparing for Uncertainty (a few select slides shared below)

February 23 - In Person Conference Event

Lecture on Educating for Sustainable Futures: Scanning the Futures Horizon (select images from presentation)

February 24 - In Person Workshop

Workshop on Strategic Foresight (Becoming a Citizen Futurist). This workshop focused on using the Axes of Uncertainty as an introductory foresight tool. The session was followed by an extended “Ask a Futurist” Q&A, where I fielded a wide range of questions from the conference participants. (A few select slides shared below)

Futures Thinking: Exploring the adjacent possible (new book chapter)

There has never been a time of greater promise, or greater peril
— Professor Klaus Schwab

How can the educational system shift to a proactive–participant model in exploring the adjacent possible ushered in through the inherent uncertainty of the Fourth Industrial Revolution? How can we look to historical patterns of disruption to gain insights into the challenges of preparing for future uncertainties? And how can all of this lend itself to more sustainable futures? Find out in the ‘exciting’ new book chapter referenced below, published in Uncertainty: A Catalyst for Creativity, Learning and Development (edited by Beghetto and Jaeger).

We (Punya Mishra, Ben Scragg, and I) invite you to read along through our recently published book chapter and join the conversation around this growing field of inquiry in educational futures and futures thinking.

[APA Citation]

Leahy, S. M., Scragg, B., & Mishra, P. (2022). Creatively Confronting the Adjacent Possible: Educational Leadership and the Fourth Industrial Revolution. In: Beghetto, R. A., & Jaeger, G. J. (Eds.), Uncertainty: A Catalyst for Creativity, Learning and Development. Creativity Theory and Action in Education, vol. 6. Springer, Cham. https://doi.org/10.1007/978-3-030-98729-9_17

Abstract

In this chapter we explore the unknown possibilities that lie in the shadows of disruptions and innovations known as the adjacent possible. We frame the challenges educational leaders face when trying to prepare for an increasingly volatile, uncertain, complex, and ambiguous world that is propelled into the Fourth Industrial Revolution imbued with rapidly changing and unevenly distributed technological proliferation. Throughout our chapter, we offer strategic mindsets in design and futures thinking to combat the growing challenges of preparing educational systems that are rife with existing deep and complexly interwoven wicked problems for uncertainty. We propose that looking to the past, we can discover insights into meta-patterns and the ways we failed to predict the futures that emerged from previous discoveries and innovations. Using this frame, we discuss the potential of combining the interconnected mindsets of futures thinking and design, not to predict the future, but to prepare our educational systems for the uncertainty of the future.