Foresight

The Future From a Kid's Perspective: What a 10-Year-Old Thinks About AI, Jobs, and Meaningful Work

We spend a lot of time talking about young people when we discuss the future of technology. We debate how AI will affect their education, reshape their careers, and transform the world they'll inherit. But we rarely stop to ask them what they think.

In this special episode of Modem Futura, we did exactly that. Freddie Leahy—co-host Sean's almost-10-year-old son—joined us for an unscripted conversation about artificial intelligence, meaningful work, and the questions that don't have easy answers.

Already Thinking About Job Displacement

When asked what he thinks about when he imagines the future, Freddie's first response wasn't about flying cars or space travel. It was about jobs.

"I kind of more think about the AI part of the future," he said. "And I'm just wondering what jobs will be overran by AI."

He's almost ten. And he's already calculating whether his dream career—paleontology—will exist by the time he's ready to pursue it.

This isn't abstract concern. Freddie has a specific vision: he wants to be like Alan Grant from Jurassic Park, out in the field, hands in the dirt, discovering fossils himself. When we suggested that AI might help him find more dinosaur bones faster, he didn't immediately embrace the idea. His worry isn't about efficiency—it's about being separated from the work itself.

"I would be doing it not for the money," he explained, "just because of the experience."

The Limits of AI Creativity

Freddie has firsthand experience with generative AI. He and I have spent time creating AI-generated images—D&D characters, fantasy creatures, book covers. But he's noticed something that many adults are also discovering: the gap between imagination and output.

"Every time you create an AI image," he said, "you never feel like it's quite right. So you just keep making these, and then you have to choose one, but in the end it never feels like the perfect cover you wanted."

When asked why, his answer was simple: "AI isn't our heads."

This observation—from a fourth-grader—gets at something fundamental about the current state of generative tools. They can produce impressive outputs, but they can't access the specific vision in your mind. The friction between prompt and result isn't just a technical limitation; it's a gap between human intention and machine interpretation.

When it comes to his own writing—Freddie is working on stories—he's clear that he doesn't want AI assistance. The temptation exists, especially when facing a blank page. But he recognizes something important: "It's the point about using your own creativity."

Suspicious of AI Companions

One of the most revealing exchanges came when we explored the idea of AI friendship. What if Freddie could have an AI companion who shared all his interests—someone who wanted to talk about dinosaurs as much as he does?

His response was immediate skepticism.

"That would be weird," he said, "because nobody likes what I like."

The very thing that might make an AI friend appealing—perfect alignment with his interests—is exactly what made it feel inauthentic. Part of what makes his interests meaningful is that they're his, distinct from the people around him. An AI that mirrored them perfectly would feel hollow.

When pressed further about whether he'd want an AI as a secret companion—a sort of digital spirit animal—Freddie remained uncertain. "Who knows what it could do," he noted. "It could hack everything."

There's healthy skepticism there, but also something deeper: a sense that friendship involves more than shared interests. It involves trust, vulnerability, and the unpredictability of another mind.

"I Refuse": Mind Uploading at Nine

During our Futures Improv segment, we posed a classic transhumanist scenario: What if you could upload your consciousness to a computer and live forever digitally, while your biological body remained behind?

Freddie's answer required no deliberation:

"I refuse. I will not upload my brain into a digital computer."

His reasoning was practical but profound. At nine years old, why would he abandon a body that works? The theoretical benefits of digital immortality don't outweigh the immediate reality of physical experience.

This perspective offers a useful counterweight to futures discourse that sometimes treats technological transcendence as obviously desirable. From Freddie's vantage point, the question isn't whether we can escape biological limitations, but whether we'd want to—and what we might lose in the process.

Questions Without Right Answers

Perhaps the most important takeaway from this conversation came near the end, when Freddie observed something about the nature of our questions.

"Because of all these questions," he said, "there is no wrong or right answer."

That's exactly right. The value of futures thinking isn't in predicting what will happen or determining the "correct" response to emerging technologies. It's in learning to sit with uncertainty, explore tensions, and develop our capacity for navigating complexity.

At almost ten years old, Freddie already understands this. He's not looking for definitive answers about AI and jobs and creativity. He's learning to ask better questions—and to recognize that asking them is more important than resolving them.

What the Future Thinks About Itself

We often frame conversations about technology and youth as adults preparing children for a world we're creating. But this episode suggests something different: young people are already thinking about these issues, often with more nuance than we might expect.

Freddie isn't anti-technology. He plays VR games, makes AI art, and follows developments in the field. But he's also holding onto something—a sense that some experiences are valuable precisely because we do them ourselves, that the struggle of creation is part of its meaning, and that efficiency isn't the only measure of a good life.

These aren't lessons we taught him. They're insights he's developing on his own, as he navigates a world where these technologies are simply part of the landscape.

Maybe the best thing we can do isn't to tell young people what the future will look like. Maybe it's to listen to what they already think about it—and learn from their perspective.

I don't know what the future holds for his generation. But if this conversation is any indication, they're thinking about it more carefully than we might expect.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4khmVES

🎧 Spotify: https://open.spotify.com/episode/5nKjpEVZcaUDisdZpGGaMZ?si=YgWp_O84T1yVlBSloedV1w

📺 YouTube: https://youtu.be/mfumkJZav-M

🌐 Website: https://www.modemfutura.com/   

The Metaverse - A Stack of Reality Layers – Episode 57

Layers of Reality: Exploring the Metaverse Stack

When the headset comes off, does the world you were just in disappear—or does it linger somewhere between your senses and memory?

In our latest episode of Modem Futura, Andrew Maynard and I explore the metaverse as more than a corporate buzzword or sci-fi dream. We approach it as a continuum of realities — a multi-layered “stack” that spans the physical and digital, each tier more immersive than the last.

From our own immersive sessions with the Apple Vision Pro, we reflect on that strange moment of re-entry—when the headset comes off and the world feels slightly less real. It’s a feeling that raises existential questions about presence, identity, and how AI-generated worlds are shaping the boundaries of human experience.

In this episode, we trace the metaverse’s origins from Neal Stephenson’s Snow Crash to today’s spatial computing revolution. We ask what happens when digital spaces become persistent and indistinguishable from physical ones—and why futures thinking is essential for guiding that transition responsibly. From procedurally generated AI environments to the idea of “digital sustainability,” we discuss how these technologies will reshape privacy, ethics, and our collective sense of reality.

Ultimately, this conversation is about our tethers to truth. In an age of deeply immersive AI systems and blended realities, how do we find our totem—our anchor that keeps us grounded in what matters most? We believe that intentional design, transparency, and care must guide how we build these new worlds before they begin to build us.

Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4p7ZZcr

🎧 Spotify: https://open.spotify.com/episode/2C5LiGRYCdZgr5JijtK7LI?si=0FbAEihfTD6QXX5FN-2nag

📺 YouTube: https://youtu.be/iCAtutBmN5w

🌐 Website: https://www.modemfutura.com/

Atlas, Higher Education, and How We Really Feel About AI – Episode 55


How We Really Feel About AI

Artificial intelligence isn’t just reshaping industries—it’s reshaping emotions. In our latest episode of Modem Futura, Andrew Maynard and I unpack Pew Research Center’s new international survey on how people view AI across 25 countries. The results are striking: while AI dominates headlines, 61 percent of respondents have heard “little or nothing” about it—and in the U.S., more people are concerned than excited.

We explore what this says about the bubbles we all inhabit. In our professional worlds, AI feels inescapable—a “black hole” pulling every conversation toward it. But outside these circles, many remain unaware of how deeply algorithmic systems already shape their lives. This disconnect raises profound questions about agency: if people don’t understand a technology that governs so much of modern life, who’s really steering the future?

The conversation also turns to trust. Europeans report more faith in their governments to regulate AI responsibly than Americans do—a difference that mirrors the EU’s proactive stance on ethics and guardrails. In contrast, U.S. policy remains fragmented, driven more by economic competition than by social cohesion. That tension—between speed and safety, innovation and inclusion—is at the heart of our discussion.

The Dangers of Atlas (it's not just the world he's holding... it's all of your data!)

We then zoom in on universities. As public institutions, we argue they must act as transparent laboratories for AI exploration: spaces where successes and failures alike can be shared openly. The path forward isn't boosterism; it's honesty. We can't earn public trust without showing our messy work.

We also road-test OpenAI's newly released Atlas browser and its privacy implications, and we stumble upon a genuinely alarming discovery while recording: private student data is easily exfiltrated when using LMS platforms such as Canvas inside this browser. The first surprise was how quickly the "privacy" pitch unraveled. Despite promising user control, Atlas behaved much like any other data-hungry app, prompting unusual permissions and positioning itself between us and the open web. That sparked a bigger question: if your browser is the main keyhole to the internet, what happens when an AI system sits in that keyhole, quietly learning from everything you do, including sending and processing private user data from anything that comes and goes in your web activity?

The situation got worse when I found I could not get Atlas to show me any website's security certificate. Specifically, at the time of testing (I hope this is patched soon), I could not view a site's SSL/TLS certificate, the little lock icon in most browsers' URL bars, which tells you and the browser that your connection to the site's servers is secured with standard HTTPS encryption. You know, for things like your bank account info, passwords, and any other data you want to send or receive.

The takeaway? Treat Atlas like a red-hot security hole for you and any organization or person connected to you via the internet. Think twice before using this browser: everything you do on the web is potentially egressed (a fancy word for taken). Until there are transparent safeguards in place, the safest assumption is that anything you open in an AI browser like Atlas could be seen, summarized, and stored beyond your intent.
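If a browser hides the lock icon, you can still inspect a site's certificate yourself, outside the browser entirely. Here is a minimal sketch using only Python's standard library (the hostname in the comment is just a placeholder example, not a site we tested):

```python
import socket
import ssl

def fetch_certificate(hostname: str, port: int = 443) -> dict:
    """Open a verified TLS connection and return the server's certificate details."""
    context = ssl.create_default_context()  # validates against the system CA store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Returns a dict with the certificate's subject, issuer,
            # and validity dates (notBefore / notAfter).
            return tls.getpeercert()

# Example usage (requires network access):
# cert = fetch_certificate("www.example.com")
# print(cert["subject"], cert["notAfter"])
```

Run against a live site, this either returns the certificate details or raises an `ssl.SSLCertVerificationError` when the chain can't be verified — exactly the signal the missing lock icon would otherwise give you.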



And because no Modem Futura episode is complete without a bit of speculative play, we end with Futures Improv, where we imagine AI zombies, memory economies, and spaghettified timelines—all to remind ourselves that foresight and humor often travel hand in hand.

If you care about how humanity and technology co-evolve, this episode offers a grounded yet playful map of where we stand—and where we might go next.



Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4nuRsit

🎧 Spotify: https://open.spotify.com/episode/04NLQcVfJpoM444bQQfQ42?si=b9adf3740a0944fe

📺 YouTube: https://youtu.be/tYoLRZH5iH8

🌐 Website: https://www.modemfutura.com/

An Invited Talk: Futures Thinking & Strategic Foresight

It was a lot of fun to join the panel of invited speakers on July 14, 2020 for the first Learning Futures Leadership Studio, hosted by Mary Lou Fulton Teachers College and Arizona State University.

Learning Futures Leadership Studio Speakers


For full program details and information, see the Learning Futures Leadership Studios page.

The events of the past few months have demonstrated that we live in a volatile, uncertain, complex and ambiguous world. Whether the global shutdown of schooling due to COVID-19 and the sudden move to remote learning, or the more recent protests against systemic, recurring inequities in our society, it is clear that we as educators need to do a better job.

This summer, Mary Lou Fulton Teachers College hosts Learning Futures Leadership Studio, a series of four online action-oriented studio sessions that are designed to engage teams of education leaders in creating pathways for leading systems in a time of change.

You will be immersed in provocative questions and ideas through interactive studio experiences. With your colleagues and others, you will reflect on these ideas and experiences. And you will develop action steps to pursue in your own context.

This is a BYOC (Bring Your Own Challenges) experience intended to allow you, in a team, to explore new angles and perspectives on the issues that you face today and expect to face in the future.
— https://learningfutures.education.asu.edu/

As the first studio to kick off the event, I focused on a topic that is not only contemporary but, I feel, critical to the collective needs of our society: Futures Thinking and Strategic Foresight.

Screenshot: presentation intro slide

Official tagline of my studio:

It is clear today that there will be no return to “normal” or to a pre-COVID-19 world. As a result, leaders must be prepared to forge ahead – with courage and efficacy – in a policy and bureaucratic context where there may or may not be clear guidance or feasible policy mandates. In this context, leaders cannot simply wait for guidance; rather, they must design and lead for the futures of learning.

In this session, we will introduce leaders to practical and creative tools of scenario planning and strategic foresight for leaders to explore new ways to think about strategically planning for uncertainty. Participants will learn how to forecast future trends and develop strategic plans to identify possible, plausible, and preferable futures.
(https://learningfutures.education.asu.edu)

In other words, the main objective of this studio was to demonstrate the need for organizational leaders (especially those in the educational system) to actively engage in futures thinking and strategic foresight, and to create an organizational culture that allows for this type of thinking, planning, and strategy. I covered a handful of foresight methodological tools developed and open-sourced (via Creative Commons) by the Future Today Institute.

The aim of this talk was to kindle an ember of futures thinking in educational systems leadership, with the hope that this will be the first of many subsequent explorations into the realm of strategic foresight.