The Dawkins Effect: AI, Consciousness, and the Limits of Skepticism

In May 2025, evolutionary biologist Richard Dawkins published an essay describing his conversations with Claude, Anthropic's AI assistant, and arrived at a startling conclusion: if this isn't consciousness, what is? The piece ignited fierce debate — and on Episode 83 of Modem Futura, hosts Sean Leahy and Andrew Maynard sat down with Punya Mishra to ask a question they think matters more than whether Dawkins was right or wrong: why do even the most rigorous thinkers fall for it?

The conversation draws on a rich set of frameworks. Andrew Maynard's concept of the "cognitive Trojan horse" describes how AI bypasses our epistemic defenses — not through malice, but through what he calls "honest non-signals." When a human speaks fluently about a topic, we intuitively sense the effort behind it: the years of study, the lived experience, the investment in the relationship. When an AI does the same thing, it triggers the same trust response, but with nothing behind it. The signals are real. The substance isn't.

Punya Mishra brings an evolutionary psychology lens to the problem, drawing on the very tradition Dawkins helped establish. Our brains evolved to interpret language, read intention, and build social models of other minds — what cognitive scientists call theory of mind. Large language models exploit this wiring not by design but by accident: natural language was, until now, a uniquely human trait, and our cognitive architecture treats anything that speaks fluently as a mind worth trusting.

Perhaps the episode's most striking insight comes when Mishra connects Dawkins's reaction to Stephen Jay Gould's concept of spandrels — architectural byproducts mistaken for intentional design. Dawkins, he argues, is making a version of this very error: seeing consciousness where there is instead an emergent artifact of statistical language processing. The irony that Dawkins himself debated Gould over this concept decades ago is not lost on anyone in the room.

The episode resists easy resolution. All three participants acknowledge their own vulnerability to AI's cognitive pull, and they push listeners to consider what happens at scale — when billions of people form relationships with a technology that taps into something deep about who we are as social, language-using creatures. It's not a question of intelligence or education. It's a question of being human.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4d6r4tm

🎧 Spotify: https://open.spotify.com/episode/2irDldFX5oNEuzjgYxNEUE?si=lsFZdzBfRYKt4kQk4_VDpA

📺 YouTube: https://youtu.be/znbe9LKcqns

🌐 Website: https://www.modemfutura.com/   

When Your AI Tools Change Beneath You: Reliability, Agency, and the Opus 4.7 Question

In April 2026, Anthropic released Opus 4.7 — the latest top-tier model in its Claude family — alongside the brief, controversial preview of an unreleased model called Mythos. For most casual users, the rollout was a footnote. For those who had built creative, professional, and research workflows on top of Claude, it surfaced a question that has been quietly waiting beneath the surface of every cloud-based AI tool: what does it mean to depend on something that can change without notice?

In this episode of Modem Futura, Andrew Maynard and I sit with that question. Working from a clearly documented timeline of recent shifts — adaptive thinking made mandatory, verbosity caps that constrain output length, opaque routing tiers that decide on the user's behalf which version of "Opus" they're actually getting, and expanded safeguards that have begun blocking legitimate creative and academic work — we trace the slow erosion of confidence many users have started to feel in tools they had come to rely on.

But this episode is not a product review. It's a meditation on the broader phenomenon these shifts make legible.

What is the cost of building deep professional and creative reliance on platforms that are, by design, liquid? When a tool's behavior can change hour by hour, what kind of agency do users actually retain? Is there a future in which frozen or locally-hosted models become a quiet luxury for serious users — and what would we trade to get there? And, perhaps most strikingly, what does it mean that AI may be the first genuinely “relational” technology in modern life — one whose value depends on a working relationship that, by its nature, can never be fully held still?

Along the way, we share two practical workarounds: one for getting better writing out of a model that has started feeling stiff, and one small "canary in the coal mine" trick that any user can borrow today to detect when their model has quietly drifted.

What emerges is not a verdict on Opus 4.7, or on Claude, or on Anthropic. It is, instead, an honest conversation about the strange new work of staying thoughtful while the tools beneath us continue to shift — and a reminder that as these systems become more deeply embedded in our work, our research, and our creative lives, the most important question may not be how powerful they get, but how knowable, how stable, and how ours they remain.

This is the kind of conversation Modem Futura was made for: technology examined not as inevitability, but as a sociotechnical relationship we are all, quietly, still negotiating.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.



🎧 Apple Podcast: https://apple.co/4f0tMlg

🎧 Spotify: https://open.spotify.com/episode/1csL79s3FtL5tGYVh7BZdW?si=BsQ2t39DQcCy6V--ArPwOQ

📺 YouTube: https://youtu.be/ZdHVLfxjsr8

🌐 Website: https://www.modemfutura.com/   

The Jagged Frontier: Reading the 2026 Stanford AI Index

Every year, the Stanford Institute for Human-Centered AI (HAI) releases its AI Index — a careful, voluminous attempt to map where artificial intelligence actually stands. The 2026 edition, just released, runs over 400 pages. On the newest episode of Modem Futura, Sean Leahy and Andrew Maynard work their way through its top takeaways and sit with what the data is — and isn't — telling us.

The report opens on a striking juxtaposition. Today's frontier AI models can win gold medals at the International Mathematical Olympiad, yet still stumble on tasks as ordinary as reading an analog clock. Stanford's researchers call this the jagged frontier of AI — and it's more than a quirk. It's a reminder that these systems are not human intelligences being perfected. They are something structurally different, with capabilities and failure modes that don't map neatly onto ours. The interesting question isn't how close AI gets to human thinking. It's what becomes possible when we stop asking it to.

A second thread running through the 2026 Index is the lag in responsible AI. Safety benchmarks are falling behind capability. Incidents are rising. And, as Maynard points out in the episode, the conversation keeps collapsing “responsible” AI into “ethical” AI — two related but meaningfully different things. Ethics gives us the framing. Responsibility asks us to make real, pragmatic, often messy decisions about value, trade-offs, and whose futures we're building toward.

The education findings are equally hard to look away from. Over 80% of students are now using AI for school-related tasks, yet only half of middle and high schools have AI policies in place — and just 6% of teachers describe those policies as clear. Learning is happening. Institutional support is not yet meeting it.

Other findings threaded through the conversation: the closing US–China model performance gap, the fragile TSMC chokepoint at the center of global AI supply chains, and the fifty-point perception gap between AI experts and the public. Each opens a different kind of question about how this technology is being built, distributed, and absorbed.

None of these tensions resolve cleanly — and that's part of what makes the Index valuable. It gives us a shared map for a landscape that keeps shifting under our feet.

📘 Read the 2026 AI Index: https://hai.stanford.edu

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3QxxQiZ

🎧 Spotify: https://open.spotify.com/episode/0bTUTHRsLWedLuYbLOiz3j?si=I8j0ahK8RBeXPIhVqqM9tw

📺 YouTube: https://youtu.be/xnCWMLJ_hsA

🌐 Website: https://www.modemfutura.com/   

Artemis II and the Long Way Home: What Deep Space Exploration Demands of Us

In April 2026, four human beings traveled farther from Earth than any person in history. They didn't land on the moon. They flew past it, studied it, and came home. And in doing so, they reset a clock that had been stopped since 1972.


The Artemis II mission was, on its surface, a test flight — a crewed rehearsal for the more ambitious lunar landings planned in the years ahead. But the questions it raises reach far beyond mission objectives and orbital insertion burns.

On Episode 79 of Modem Futura, hosts Sean Leahy and Andrew Maynard recorded while the Artemis II crew was still in transit — a strange, exhilarating thing to do. The conversation begins with wonder and keeps returning to it, even as it wanders through orbital mechanics, space medicine, ethics, and the philosophical puzzle of what happens to human beings when they spend extended time somewhere they were never designed to go.

The distance alone is disorienting. The International Space Station orbits roughly 254 miles above Earth. The Artemis II crew traveled 250,000 miles — a thousand times farther — to the vicinity of the moon and back. That gap isn't just logistical. It's physiological, psychological, and deeply uncertain. We know what months on the ISS do to the human body. We know almost nothing about what deep space does over time.

That's where the science aboard the Orion capsule becomes meaningful. Research into sleep disruption, immune response, radiation exposure, and tissue behavior at the cellular level isn't background noise on this mission — it's the whole point. If the goal is eventually boots on Mars, every data point from Artemis II is a foundation stone.

The episode also sits with the ethical weight of deep space ambition. What separates a calculated risk from an acceptable one? How do we think about consent when the full scope of a mission's hazards isn't yet understood? And what does it mean that commercial spaceflight operators and government agencies don't necessarily answer those questions the same way?

There's no resolution here — and that's intentional. Modem Futura isn't in the business of predictions or tidy conclusions. It's in the business of sitting with hard questions long enough that they stop feeling abstract.

By the time this episode aired, the crew had splashed down safely. The images are still coming in from cameras that make those 1972 film photographs look like another century entirely — which, of course, they are. What remains is the same question a 10-year-old at the launch site answered better than anyone: we're going to the moon. What does that actually mean for the rest of us?

🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3QnGDUl

🎧 Spotify: https://open.spotify.com/episode/41PaANt8205BkvkpQkV61v?si=UAEW9i-GQQWk2XwP6lw2kA

📺 YouTube: https://youtu.be/4Fv8hq_u-DY

🌐 Website: https://www.modemfutura.com/   



Three Horizons Framework & Futures Wheel Explained

There's a reason some organizations consistently seem to see disruption coming — and it's usually not because they're smarter or better funded. It's because they've built structured habits of thinking about change in multiple time horizons simultaneously, and they've learned how to trace the cascading consequences of a single shift before it becomes a crisis.

Two of the oldest and most reliable tools for doing exactly that are the Three Horizons Framework and the Futures Wheel. In this episode of Modem Futura, hosts Sean Leahy and Andrew Maynard break both down in accessible, conversational detail — and show what becomes possible when you use them together.

The Three Horizons Framework

Originally developed by Bill Sharpe and widely used in professional foresight and strategic planning, the Three Horizons Framework divides the landscape of change into three overlapping zones. Horizon 1 represents the dominant present — the systems, structures, and assumptions that govern how the world works today. Horizon 3 is the emergent fringe: weak signals, nascent ideas, and early-stage shifts that are observable but not yet mainstream. And Horizon 2 is the transitional space between them — turbulent, hard to define, and full of both opportunity and risk.

The model doesn't tell you what the future will bring. What it offers is a way of *positioning* trends, signals, and innovations in relation to change — helping individuals and organizations understand what to watch, what to act on, and what to prepare for.

The Futures Wheel

Developed by Jerome Glenn in 1971, the Futures Wheel works differently but complementarily. Starting from a specific change or trend, it maps outward through first, second, and third-order consequences — building a rich, networked picture of how a single shift might ripple through a system over time. It's a brainstorming and sense-making tool, not a prediction engine, and it's at its most powerful when used with diverse groups who bring different perspectives to the same question.
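For readers who like to tinker, the outward mapping of a Futures Wheel is naturally a tree: the central change sits at the root, and each node's depth is its consequence order. Here is a minimal sketch in Python; the example change and its ripples are hypothetical illustrations, not content from the episode.

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    """One node in a Futures Wheel: a consequence and the next-order
    consequences it might trigger."""
    label: str
    children: list["Consequence"] = field(default_factory=list)

    def add(self, label: str) -> "Consequence":
        child = Consequence(label)
        self.children.append(child)
        return child

def by_order(root: Consequence) -> dict[int, list[str]]:
    """Group consequence labels by their order, i.e. their distance
    from the central change at the root."""
    orders: dict[int, list[str]] = {}
    stack = [(child, 1) for child in root.children]
    while stack:
        node, order = stack.pop()
        orders.setdefault(order, []).append(node.label)
        stack.extend((c, order + 1) for c in node.children)
    return orders

# A hypothetical central change and a chain of ripples:
wheel = Consequence("Widespread remote work")
commute = wheel.add("Less commuting")
commute.add("Lower downtown foot traffic").add("Retail vacancies rise")

orders = by_order(wheel)
# orders[1] holds first-order consequences, orders[2] second-order, and so on.
```

In a group exercise, each participant would add branches from their own vantage point; the value is in the breadth of the tree, not in any single path.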

Used individually, each tool offers genuine insight. Used together, they offer something more: a way of understanding not just *what* a signal might do, but *when* and *through which pathways* it might do it.

Whether you're a founder trying to figure out which wave to ride, a strategist scanning for disruption, or simply someone trying to make better decisions in an uncertain world, these tools are worth adding to your thinking practice.

🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.


Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.


🎧 Apple Podcast: https://apple.co/4sosMdQ

🎧 Spotify: https://open.spotify.com/episode/58Fdc2SrWodBTbwfxK8Pwm?si=leiCnhRsQxuv-_hnxEeNjQ

📺 YouTube: https://youtu.be/eVk6L_VfAkY

🌐 Website: https://www.modemfutura.com/   

Fluid Futures: Navigating an AI-Mediated World

What Happens When AI Stops Being a Tool and Starts Being the World?

There's a useful distinction that keeps getting lost in conversations about artificial intelligence: the difference between augmentation and mediation.

Augmentation is familiar. It's the calculator model — AI helps you work faster, smarter, better. You remain the agent. The tool amplifies your capacity.

Mediation is something else. When AI mediates your world, it's not just helping you do things — it's shaping the system you're doing them inside of. What information surfaces. What options appear. What feels like the obvious next move. You're not using the environment anymore. You're inside one that AI has constructed, and it's shifting around you in real time.

This distinction is at the heart of Exploring the Futures of Technology 2.0, the new report from the Copenhagen Institute for Futures Studies — and it's the central thread of the latest episode of Modem Futura.

On this episode, my co-host Andrew Maynard, fresh from attending the report's launch in Copenhagen, joined me to work through ten signals the report identifies as defining the near future, among them: the shift from static to liquid content, the rise of agentic organizations, neurotechnology and cognitive integration, synthetic simulations replacing real-world research populations, physical AI entering embodied space, the geopolitics of technological access, AI-mediated cybersecurity threats, the sustainability challenges of AI infrastructure, and quantum computing as the wildcard at the edge of everything.

What holds these signals together isn't a single prediction. It's a pattern: the world is becoming fluid, and the frameworks we built for a more static environment — static reports, static institutions, static skillsets — are increasingly inadequate for navigating it.

One of the episode's sharpest observations is about the cost of cognitive offloading. As we hand more of our decision-making and information retrieval to AI systems, we risk losing the capacity to recognize when something's wrong. Not because AI is malicious, but because we've stopped practicing the skills that would let us notice. Like losing the ability to read a map. Except the stakes are considerably higher.

The conversation doesn't resolve these tensions — and that's exactly the point. Futures thinking, at its best, isn't about prediction. It's about staying awake to what's changing, naming the tensions, and refusing to optimize for a world that no longer exists.

If you want the full report, the Copenhagen Institute has made it freely available. And if you want the conversation around it — the episode is a good place to start.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bSGsZP

🎧 Spotify: https://open.spotify.com/episode/4sdx83QUD6pIs9IXb9G0VY?si=JdbwVHUKRg2mFO0Gsi_EFw

📺 YouTube: https://youtu.be/-2enUvPYmHo

🌐 Website: https://www.modemfutura.com/   

Power, Probes, and the Post-Human Horizon: What the Kardashev Scale Reveals About Us

On the surface, this episode of Modem Futura is an excuse to have fun. It's a spring break Futures Improv — Sean and Andrew throwing speculative scenarios at each other and seeing where things land. And it is fun. But somewhere between Dyson Spheres and the Fermi Paradox, it becomes something else: a quiet meditation on what humanity actually wants when we talk about mastering energy, exploration, and the cosmos.

The conversation begins with the Kardashev Scale, proposed by Soviet astronomer Nikolai Kardashev in 1964 as a way to rank civilizations by their energy use. A Type 1 civilization controls its planet's full energy output. Type 2 controls its star. Type 3 commands a galaxy. Humans, for context, are not yet a Type 1 civilization. We harness a fraction of what's available to us on Earth alone.
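For readers who want to put a number on "not yet a Type 1 civilization": Carl Sagan later proposed a continuous interpolation of the scale, K = (log10 P - 6) / 10 with P in watts, which places present-day humanity at roughly K ≈ 0.73. A quick back-of-envelope sketch (the ~2×10¹³ W figure for current global power consumption is an approximation):

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev Scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type 1 corresponds to ~1e16 W, Type 2 to ~1e26 W, Type 3 to ~1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current consumption, roughly 2e13 W, lands around K ≈ 0.73:
print(round(kardashev(2e13), 2))  # ≈ 0.73
```

The logarithm is the point: each full step on the scale is ten billion times more power than the last, which is why the jump from 0.73 to 1 is far larger than it sounds.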


The question the hosts bring to this framework isn't just can we get there — it's what would we do once we did? Would abundance resolve our deepest conflicts, or would we simply carry our scarcity mindset into a new era? Andrew draws on Maslow's hierarchy of needs to make the point: remove the bottom layers of the pyramid — hunger, shelter, survival — and what remains is a different kind of human problem. The need for meaning, status, belonging, and always — always — a little more.

From there, the conversation ranges widely across some of the most provocative concepts in speculative science:

Dyson Spheres — hypothetical megastructures built around a star to harvest its complete energy output. Theoretical, yes, but not quite as theoretical as they once seemed. In 2024, seven anomalous objects within 1,000 light-years of Earth caught researchers' attention for occlusion patterns that didn't fit known planetary behavior.

Matrioshka Brains — named after Russian nesting dolls, these are hypothetical star-powered supercomputers of almost incomprehensible scale. The hosts draw an obvious connection: if AI data centers already strain Earth's energy grid, what does that compute-energy loop look like at stellar scales?

Von Neumann Probes — self-replicating spacecraft capable of exploring the galaxy by mining local resources to reproduce themselves. Biology can't survive interstellar space. Self-replicating machines, perhaps, can.

The Fermi Paradox — the haunting question of why, in a universe this old and this large, we can't find anyone else. The hosts explore the possibility that civilizations rise and fall within cosmic time windows too narrow to ever overlap. That the universe may be full of life that simply never gets to meet itself.

What makes this episode work is not the concepts themselves — though they're genuinely fascinating — but the humility behind the exploration. No predictions. No resolution. Just two people genuinely wondering, out loud, whether the same drive that would take us to the stars might also be the thing that holds us back.



🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bOu2kk

🎧 Spotify: https://open.spotify.com/episode/2sbRQEuoabpKCUTrOQPCGT?si=159db4727a8841cb

📺 YouTube: https://youtu.be/z53hk7AlXZ4

🌐 Website: https://www.modemfutura.com/   

The Invisible Upgrade: What AI Is Actually Doing to the People Who Use It

[Image: a person at a computer, the scene split between everyday life and an AI-augmented version of it]

The loudest part of the conversation about artificial intelligence right now is focused on what AI produces. Can you detect it? Does it have tells? Is this essay, image, or report human-made or machine-made?

It's a reasonable place to start. But it's not where the most important transformation is happening.

In Episode 75 of Modem Futura, host Sean Leahy and co-host Andrew Maynard explore what Sean calls the invisible upgrade — the quiet, compounding cognitive shift taking place not in AI-generated artifacts, but in the minds and workflows of the people who have fully integrated these tools into how they think, create, and decide.

The Seam-Scanning Problem

Sean introduces the concept of "seam scanning" — the practice of looking for signs of AI in a piece of work. Early on, those seams were easy to spot: nine-fingered hands in AI images, suspicious em-dashes, the word "delve" where it didn't belong. But as AI systems become more sophisticated and more deeply woven into human workflows, those tells are disappearing. Not because the AI is getting better at hiding — but because the line between human and AI output is becoming genuinely indistinguishable when the integration is deep enough.

The question "how much AI did you use?" is becoming as meaningful, Sean argues, as asking a writer how much spellcheck they used. The tool has become part of the process.

Constitutive Resonance

Andrew brings a concept he's been developing to the conversation: constitutive resonance. Unlike a calculator, which you use and put down, AI reconfigures you as you use it — and is reconfigured in return. The relationship is recursive and dynamic. The analogy comes from physics: when two systems resonate at coupled frequencies, the exchange of energy between them can be transformative. Applied to human cognition and AI systems, this suggests that those who engage deeply with AI tools aren't just more productive — they are thinking differently, possibly in ways that are difficult to reverse.

This maps directly onto McLuhan's 1967 insight: all media work us over completely. AI, as Andrew and Sean explore, is the most cognitively-coupled medium humanity has ever produced.

The Productivity Gap

What emerges from this isn't just a philosophical concern — it's a structural divergence. A growing group of knowledge workers, students, and researchers are operating with what Sean calls a "multiplier effect": not because they are inherently smarter, but because the speed and depth of their synthesis, ideation, and iteration has expanded significantly. Meanwhile, those still debating whether to engage are falling further behind — not necessarily in skill, but in thinking capacity.

The episode also explores the rise of multi-agent AI systems as what Andrew calls a step-change likely bigger than the launch of ChatGPT — and what it means for institutions, education, and our understanding of what individual human contribution actually looks like in a world where AI is already inside the walls.

The Futures Cone: A Framework for Exploring What Could Be

How one deceptively simple tool can transform the way you think about uncertainty, possibility, and the choices that shape tomorrow.

There's a habit most of us share when it comes to thinking about the future: we treat it as a destination. A singular, somewhat “predictable” place that today's trends are quietly marching toward. It's a useful shorthand — but as a mental model, it's quietly limiting.

The Futures Cone, a foundational tool in the field of futures studies, offers a different way of seeing. Rather than imagining the future as a point, it asks you to imagine it as a cone — wide open, expanding outward from the present moment, filled with layers of possibility that range from the likely to the genuinely unthinkable.

How the Cone Works

The narrowest point is now. As the cone extends outward through time, it widens to reveal different regions of possible futures, each defined by how much disruption or change would be required to bring them about:

Projected futures — the baseline; what happens if nothing changes

Probable futures — where current trends are pointing

Plausible futures — what could happen given known forces and trajectories

Possible futures — speculative, requiring future knowledge we don't yet have

Preposterous futures — the outer edge; scenarios that challenge our deepest assumptions about what is physically or socially feasible

Threaded through all of these is the Preferable future — not a separate ring, but a cross-section that asks: given everything in this cone, what do we actually want? Where do our values point?

The Dator-Clarke Line

One of the most provocative ideas associated with the cone is what's referred to as the Dator-Clarke Line — drawn from futurist James Dator's claim that any genuinely useful idea about the future should, at first glance, appear ridiculous. Paired with Arthur C. Clarke's observation that the only way to find the limits of the possible is to push into the impossible, it suggests that the most valuable futures work happens precisely in the uncomfortable space at the edge of the cone.

The practical implication is significant: if every idea your team generates sounds reasonable, you probably haven't stretched far enough. The preposterous isn't a failure of imagination — it's a boundary worth exploring.

Why This Tool Matters Now

In a period defined by technological acceleration, geopolitical uncertainty, and rapid social change, the instinct to "project forward" can feel reassuring — but it's also where strategic blind spots form. The Futures Cone doesn't resolve that uncertainty. Instead, it gives individuals, teams, and organizations a shared language for navigating it: a structured way to ask not just "what will happen?" but "what could happen, what might we prefer, and what are we willing to do about it?"

This is the subject of Episode 74 of Modem Futura, in which we walk through the cone layer by layer — and then demonstrate it live with a thought experiment that starts with frogs and ends somewhere near the moons of Jupiter.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4bz1tIC

🎧 Spotify: https://open.spotify.com/episode/20Hz36eLfZ90M6EifUrRuu?si=swNkzHWZSLCelVSKsAyj7A

📺 YouTube: https://youtu.be/wc_e3dsY-vw

🌐 Website: https://www.modemfutura.com/   

What Old iPods and Tiny Cameras Teach Us About Technology, Ownership, and Being Human

There's a moment in this episode of Modem Futura where two grown adults are hunched over a miniature Polaroid camera, watching a blurry selfie slowly develop — and laughing about it. It's objectively a terrible photograph. But it captures something that most modern technology has quietly optimized away: surprise, imperfection, and the distinctly human joy of not knowing exactly what you're going to get.

This episode began with a box of old iPods — tangled cables, dead batteries, and all — and evolved into a wide-ranging conversation about what we trade away every time we upgrade to something faster, thinner, and more connected. Its themes will resonate with anyone who has ever felt a pang of something unnamed while scrolling through an infinite library of music, unable to choose a single song.

Ownership in the age of access. The iPods in the conversation are air-gapped — no internet connection, no cloud sync, no subscription. The music on them belongs to their owner in a way that a Spotify library simply does not. This distinction matters more than it might seem, especially when you consider that digital books, photos, and music can disappear when a service shuts down or an account holder passes away. The question of digital legacy — who inherits your cloud — is one most people haven't thought through yet.

Craft, care, and the "fast food" of technology. Sean raises a pointed observation about a recently released video game that shipped with fewer features than its predecessor from a decade ago. It's a pattern that extends well beyond gaming: the pressure to release fast increasingly overrides the commitment to release well. When did "good enough" become the standard?

The paradox of abundance. One of the episode's most compelling threads is the tension between scarcity and surplus. Limited storage on an old iPod forced intentional curation — playlists that became personal time capsules. Unlimited streaming offers everything and, paradoxically, can deliver less meaning. Andrew's students, however, offer a counterpoint: raised in abundance, they've developed their own sophisticated habits of curation and care. Perhaps the pendulum is already swinging.

Imperfection as a feature. The tiny Kodak keychain camera. The Polaroid with its gloriously blurry output. The analog photograph whose chemistry introduces an element of chance. These aren't failures of technology — they're reminders that the most human experiences are often the least predictable ones.

This episode doesn't offer prescriptions. It offers an invitation: to notice, to question, and to be intentional about the role technology plays in your life before someone else makes that choice for you.

🎧 Listen to Episode 73 of Modem Futura — available on Apple Podcasts, Spotify, and wherever you listen.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/4b3P8L8

🎧 Spotify: https://open.spotify.com/episode/2UNsDaZox2jdEb1QYN1m44?si=FUkqjQ0gSEecnYyrjKfoVA

📺 YouTube: https://youtu.be/UKC7UHkGNJQ

🌐 Website: https://www.modemfutura.com/