Modem Futura podcast

Artemis II and the Long Way Home: What Deep Space Exploration Demands of Us

In April 2026, four human beings traveled farther from Earth than any person in history. They didn't land on the moon. They flew past it, studied it, and came home. And in doing so, they reset a clock that had been stopped since 1972.


The Artemis II mission was, on its surface, a test flight — a crewed rehearsal for the more ambitious lunar landings planned in the years ahead. But the questions it raises reach far beyond mission objectives and orbital insertion burns.

On Episode 79 of Modem Futura, hosts Sean Leahy and Andrew Maynard recorded while the Artemis II crew was still in transit — a strange, exhilarating thing to do. The conversation begins with wonder and keeps returning to it, even as it wanders through orbital mechanics, space medicine, ethics, and the philosophical puzzle of what happens to human beings when they spend extended time somewhere they were never designed to go.

The distance alone is disorienting. The International Space Station orbits roughly 254 miles above Earth. The Artemis II crew traveled 250,000 miles — a thousand times farther — to the vicinity of the moon and back. That gap isn't just logistical. It's physiological, psychological, and deeply uncertain. We know what months on the ISS do to the human body. We know almost nothing about what deep space does over time.
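That "thousand times farther" claim holds up to back-of-the-envelope arithmetic. A quick sketch, using only the figures quoted above:

```python
# Figures as quoted in the post (statute miles).
ISS_ALTITUDE_MILES = 254
ARTEMIS_DISTANCE_MILES = 250_000

# How many times farther out did the Artemis II crew travel?
ratio = ARTEMIS_DISTANCE_MILES / ISS_ALTITUDE_MILES
print(f"Artemis II traveled roughly {ratio:.0f} times farther out than the ISS orbits.")
```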

That's where the science aboard the Orion capsule becomes meaningful. Research into sleep disruption, immune response, radiation exposure, and tissue behavior at the cellular level isn't background noise on this mission — it's the whole point. If the goal is eventually boots on Mars, every data point from Artemis II is a foundation stone.

The episode also sits with the ethical weight of deep space ambition. What separates a calculated risk from an acceptable one? How do we think about consent when the full scope of a mission's hazards isn't yet understood? And what does it mean that commercial spaceflight operators and government agencies don't necessarily answer those questions the same way?

There's no resolution here — and that's intentional. Modem Futura isn't in the business of predictions or tidy conclusions. It's in the business of sitting with hard questions long enough that they stop feeling abstract.

By the time this episode aired, the crew had splashed down safely. The images are still coming in from cameras that make those 1972 film photographs look like another century entirely — which, of course, they are. What remains is the sentiment a 10-year-old at the launch site captured better than anyone: "We're going to the moon." What does that actually mean for the rest of us?

🎧 Listen to the full episode wherever you get your podcasts, or watch on YouTube.

Subscribe and Connect!

Subscribe to Modem Futura wherever you get your podcasts and connect with us on LinkedIn. Drop a comment, pose a question, or challenge an idea—because the future isn’t something we watch happen, it’s something we build together. The medium may still be the massage, but we all have a hand in shaping how it touches tomorrow.

🎧 Apple Podcast: https://apple.co/3QnGDUl

🎧 Spotify: https://open.spotify.com/episode/41PaANt8205BkvkpQkV61v?si=UAEW9i-GQQWk2XwP6lw2kA

📺 YouTube: https://youtu.be/4Fv8hq_u-DY

🌐 Website: https://www.modemfutura.com/   



Pluribus and the Philosophy of the Happy Apocalypse: What Apple TV's New Sci-Fi Asks About Individuality, Consent, and Being Human

What if happiness is the threat?

Most apocalypse stories share a common grammar: society collapses, resources become scarce, and survival demands violence. We've internalized this template so thoroughly that it shapes how we imagine catastrophe itself.

Apple TV's Pluribus, created by Vince Gilligan (Breaking Bad, Better Call Saul), disrupts that grammar entirely. Its apocalypse isn't marked by destruction or suffering. It's marked by peace. By synchronization. By happiness—at a planetary scale.

An alien signal arrives carrying an RNA sequence. Humanity, being humanity, synthesizes it. Within days, most of the global population transforms into a unified hive mind. Not zombies. Not drones. Just billions of people sharing consciousness, moving together, experiencing what appears to be genuine contentment.

About a dozen people remain unconverted. And the series follows one of them—Carol Sturka, played by Rhea Seehorn—as she grapples with being the most unhappy person on Earth.

On a recent episode of the Modem Futura podcast, we explored what Pluribus surfaces about individuality, consent, collective identity, and the stories we tell ourselves about what makes a human life worth living. What follows are some of the tensions that emerged.

What is in a Name: Many Without the One

The title "Pluribus" comes from the Latin phrase E Pluribus Unum—"out of many, one"—which appears on American currency as a motto of national unity.

But the show drops both the "E" (out of) and the "Unum" (one). What remains is simply "Pluribus": the many. It's a subtle signal that this isn't a story about diversity coming together into unity. It's a story about what happens when "the many" becomes literal—when individual minds merge into a single, collective consciousness.

That linguistic choice frames everything that follows.

Who Becomes the Monster?

One of the most productive lenses for understanding Pluribus is Richard Matheson's 1954 novel I Am Legend. Not the Will Smith film adaptation, but the original text, which ends with a devastating realization: the protagonist, who has spent the story hunting the "monsters" who have replaced humanity, comes to understand that from their perspective, he is the monster. The one who kills in the night. The one who refuses to accept the new order.

Carol Sturka occupies similar territory. She's convinced she needs to "set things right"—to restore humanity to its pre-hive state. But the show keeps surfacing an uncomfortable question: right for whom? The hive mind has eliminated war, poverty, and suffering. Billions of people who lived in misery are now at peace.

If Carol succeeds in reversing the transformation, she's not saving people. She's condemning them to return to lives many of them would never have chosen.

The Consent Paradox

The hive mind in Pluribus operates under an interesting constraint: it cannot lie, and it will not assimilate anyone without their explicit permission.

This sounds like respect for autonomy. And in some sense, it is. But the hive mind also desperately wants everyone to join (even explaining that it’s a ‘biological’ imperative). So what emerges is a kind of relentless, patient persuasion—always honest, always gentle, and always oriented toward a predetermined outcome.

There's something uncomfortably familiar in this dynamic. We navigate versions of it constantly: platforms that "personalize" our experience toward their engagement metrics, systems that "recommend" content optimized for their retention goals, interfaces designed to make one choice frictionless and alternatives invisible.

The hive mind's honesty doesn't make its agenda less persistent. It just makes the agenda transparent.

The Sustainability Problem

Midway through the season, Pluribus introduces a complication: the hive mind will only consume things that have already died naturally. No killing. No harvesting. Just waiting for life to end on its own terms.

Which means, at planetary scale, they're slowly starving.

This creates a strange inversion. Carol, the last holdout, has skills and knowledge that could help solve the problem. But she's too consumed by her mission to "fix" things to collaborate with the very beings who need her help.

There's something painfully recognizable in that dynamic—the way ideological certainty can prevent us from engaging productively with people whose worldview differs from our own, even when collaboration would benefit everyone.

Is the Individual Still in There?

One of the more haunting threads in Pluribus involves the question of whether individual identities persist within the hive mind.

Carol's "chaperone"—a member of the hive who presents as an individual named Zosia—occasionally exhibits moments that feel less like collective consciousness and more like... a person surfacing. A memory that seems too specific. A reaction that seems too singular. (The mango ice cream scene is a particularly interesting one, where for a moment the real Zosia seems to surface.)

Another character, Manousos, experiments with radio frequencies, attempting to extract individuals back out of the collective, seemingly trying to hack the near-field electromagnetic connections the "others" share with one another.

The show doesn't resolve this; it leaves it as a season-one cliffhanger, with some apparent progress made. But it raises the question: if you could pull someone out of a state of collective happiness and return them to individual consciousness, would that be rescue or harm? Liberation or trauma?

There's no easy answer. And Pluribus is wise enough not to pretend there is.

The AI Parallel (That Isn't Really About AI)

Vince Gilligan has stated that Pluribus isn't intended as an AI allegory. The original concept predates the current wave of generative AI by years.

And yet.

The show's exploration of collective intelligence, of optimization toward contentment, of systems that genuinely want to help but whose help involves transformation into something other than what you were—all of it resonates with questions we're already asking about artificial intelligence and its role in human flourishing.

The hive mind's impulse to "fix" things, to smooth over friction, to optimize for happiness—that's not so different from Silicon Valley's persistent faith that the right algorithm can solve human problems. The show doesn't moralize about this. It simply shows what it might feel like to be on the receiving end of that faith.

The hive mind might be the best thing that ever happened to humanity. Or it might be the end of everything that made humanity worth preserving. The show suggests both readings are available, and neither is obviously wrong. That, in the end, is my favorite part of the show: it catalyzes great conversations and pushes us to examine very human elements by forcing us to entertain scenarios in which we question what it means to be human. Now we just have to wait an excruciatingly long time until Season 2 is ready. Until then, stay curious!

Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4k0l1bo

🎧 Spotify: https://open.spotify.com/episode/5ymC2VZJUz7iLTvYj89CXa?si=52mn5UiBRH-gbkpSEnV4Tw

📺 YouTube: https://youtu.be/xsxJWN5FO-U

🌐 Website: https://www.modemfutura.com/   

Related Reading

  • I Am Legend by Richard Matheson

  • Solaris by Stanisław Lem

  • The Borg episodes of Star Trek: The Next Generation

Inherited Power: What Jurassic Park Teaches Us About AI Futures


Jurassic Park, AI, and Why “Inherited Power” Should Make Us Nervous

One of the most enduring insights from science fiction isn’t about robots, dinosaurs, or spaceships — it’s about power. In a recent episode of Modem Futura, we revisited a striking passage from Jurassic Park that feels uncannily relevant to our current moment of AI acceleration.

In the novel, Ian Malcolm warns that scientific power acquired too quickly — without discipline, humility, or deep understanding — is fundamentally dangerous. It’s “inherited wealth,” not earned mastery. Thirty-five years later, that warning lands squarely in the middle of our generative AI era.

Today, AI tools can write code, generate images, summarize research, and mimic expertise in seconds. That’s not inherently bad — in fact, it can be incredibly empowering. But it also creates a dangerous illusion: that capability equals comprehension, and speed equals wisdom. When friction disappears, responsibility often follows.

In the episode, Andrew and I explore why the most important question isn’t whether we should use these tools, but how we use them — and with what mindset. Are we willing to be humble in the face of tools that amplify our reach far faster than our understanding? Are we prepared to ask for receipts, interrogate outputs, and recognize the limits of borrowed intelligence?

From there, we leaned into something equally important: imagination. Through our Futures Improv segment, we explored bizarre but revealing scenarios — humans generating calories from sunlight, a world of post-scarcity socks, radically extended lifespans, lunar independence movements, and even the possibility that alien life might be… profoundly boring.

These playful provocations aren’t escapism. They’re a way of breaking free from “used futures” — recycled assumptions about progress that limit our thinking. Humor, speculation, and creativity allow us to test ideas safely before reality forces our hand.

If there’s one takeaway from this episode, it’s this: the future isn’t just something that happens to us. It’s something we ponder, question, and design together — ideally before the metaphorical dinosaurs escape the park.

🎧 Listen to the full episode of Modem Futura wherever you get your podcasts, and join us as we explore what it really means to be human in an age of powerful machines.


Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/3NIBdlt

🎧 Spotify: https://open.spotify.com/episode/32wGw6htnSDyGVc08DAvvQ?si=m8jS08egQyOZjYTic6cROw

📺 YouTube: https://youtu.be/jBBIbNu-XdY

🌐 Website: https://www.modemfutura.com/

Techno-Humans and the Energy Futures We’re Designing

What if the clean energy transition isn’t just a technology problem—but a techno-human design challenge that determines who benefits, who’s left out, and whether our cities can thrive?

Why Human Craft and Creativity Still Wins in an Age of AI – Episode 63

What Spotify Wrapped and a Holiday Ad Reveal About the Future of Creativity

As the year winds down, many of us find ourselves reflecting—not just on what we’ve done, but on how we’ve spent our attention. In this holiday episode of Modem Futura, Andrew Maynard and I leaned into that instinct, using Spotify Wrapped as an unexpected entry point into a deeper conversation about creativity, technology, and what still matters in an AI-accelerated world.

Wrapped experiences are playful by design, but they’re also revealing. They quietly surface patterns of listening, engagement, and community—reminding us that culture is shaped not just by algorithms, but by millions of individual choices. For us, seeing how Modem Futura resonated globally wasn’t about vanity metrics; it was a reminder that thoughtful, exploratory conversations still find an audience, even in an oversaturated media landscape.

From there, the conversation turned to Apple's 2025 holiday ad, A Critter Carol, a whimsical, puppet-driven production that feels more like a short film, and almost rebellious in its insistence on visible human labor. In a moment when AI can generate polished video in seconds, Apple chose puppeteers, practical effects, and intentional imperfection. The result isn't just charming; it's instructive.

The ad works because you can feel the human care embedded in every frame. It’s not anti-technology—far from it. It’s pro-human. Advanced tools are present throughout the production pipeline, but they serve imagination rather than replace it. That distinction matters.

You can read a more detailed breakdown of this ad, and the care and craft that went into it, in a previous blog post: Apple's 2025 Holiday Ad and the Power of Human-Made Creativity in an AI World.

We’re at a cultural inflection point. As generative tools remove friction from making things, the temptation is to settle for what’s “good enough.” But creativity has always lived in resistance—iteration, constraint, failure, and craft. When those disappear, so does much of what gives creative work its soul.

One hope we shared on the episode is that 2026 becomes the year of “behind the scenes”—a renewed appreciation for process, labor, and the messy human work that makes meaningful outcomes possible. Whether in education, media, or design, showing how something is made may soon matter as much as the finished product itself.

If the future is being shaped right now, then choosing care, intention, and humanity in how we use our tools may be one of the most important creative acts we have left.

Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/48Sdx6r

🎧 Spotify: https://open.spotify.com/episode/4TVwLBfncHjPs4kDKbLz5t?si=QFYyZuq9R-WtgoEPTeWOlw

📺 YouTube: https://youtu.be/N1vTfDPSusY

🌐 Website: https://www.modemfutura.com/

Apple’s 2025 Holiday Ad and the Power of Human-Made Creativity in an AI World

Apple did it again.

For years now, Apple has built a reputation for delivering some of the most memorable holiday commercials. There was the famous 2013 “tear jerker” Christmas ad with the seemingly sullen teenager, face buried in his phone, drifting to the margins of family festivities. We’re led to believe he’s disconnected and unengaged—until the reveal that he’s been quietly filming and editing a heartfelt video love letter to his family. It hits you hard, right in the feels. Since then, we’ve seen a long line of emotionally charged holiday spots, from the felt doll ad to the hearing loss story and beyond, all leaning into care, compassion, and human connection.

This year, Apple has done it again—but in a way that’s less weepy and more quietly inspirational. The new holiday film leans heavily into something that’s becoming a recurring theme for Apple: celebrating the best kind of content creation—human-made. It also feels like a response to the backlash against their earlier “crush” iPad ad, where a cornucopia of creative tools (instruments, art supplies, analog devices, etc.) were crushed in a hydraulic press and flattened into an iPad. The intended message was that all those tools are now in your hands. The received message, however, was a metaphorical, cold, mechanical digitization of everything human—literally smashing the tools of creativity and, symbolically, the human element itself.

Picking up from where they left off with the wonderfully human-crafted Apple TV logo work—which I wrote about previously and discussed in more depth on a recent Modem Futura podcast episode—Apple once again puts a human-made creation at the center of the story.

This year’s holiday ad is a joyful nod to the fun, slightly unhinged world of practical and digital effects: live actors sharing the screen with wild, muppet-esque animal characters. The creatures look a bit feral and chaotic, which adds to the charm and visual texture of the spot. At the same time, the ad quietly but clearly showcases the headline features of the iPhone 17 Pro and its camera capabilities, from cinematic framing to zoom and low-light performance, all wrapped into a tight, entertaining narrative. Not to mention the hidden connection: the song our furry friends sing is an adaptation of "Friends" by Flight of the Conchords (with some pretty obvious lyric changes).

What I’ve grown to love even more than the finished commercial is the behind-the-scenes video. That’s where you really see the production elements, the people, the tech, and the process all woven together. In a world where our feeds are flooded with an endless river of AI-generated slop, it feels like a genuine light in the darkness to encounter something real. Something made by humans. Something that reflects the care and craft of the people involved in bringing a story—even a three-minute ad—to life. Human performances are transferred into these puppets and characters, bringing them to life in ways no AI-slop machine can match.

It’s cheeky, it’s fun, and if you find yourself smiling while you watch it, that’s because you can feel the care and craft ingrained in every frame. This wasn’t the result of a few prompts typed into Sora 2. This was a deliberately designed story brought to life by actors, artists, producers, technologists, writers, logistics teams, sales folks, interns, and countless other human-filled roles all working together.

I hope this is just the beginning—and that we keep seeing more high-value, high-craft productions from human creators. In a moment obsessed with automation, it’s refreshing to see a company with Apple’s reach lean so visibly into the irreplaceable magic of human creativity.

Well done, Apple!

AI Toys, Datafied Childhoods and the Future of Play

The holiday toy season is here—and this year, the cutest thing on the shelf might also be the most powerful AI in your house. In the latest episode of Modem Futura, Andrew Maynard and I unpack the rise of AI-powered toys and what they mean for childhood, learning and the future of being human.

The conversation starts with a viral example: a plush teddy bear running GPT-4 that had to be pulled from the market after reportedly offering children tips on using matches and explaining adult sexual practices. From there, we trace the longer lineage of “smart” toys—from Teddy Ruxpin and Furbies to Hello Barbie and Watson-powered dinosaurs—that have steadily normalized networked, data-hungry playthings.

(Check out this commercial for Teddy Ruxpin, where it all started. Notice how it shows the 'capture' of the kids when the bear talks; now add AI and ask, "What could possibly go wrong?")

We argue that today’s AI toys bring two risks into sharp focus. The first is the datafication of childhood, where toys quietly record children’s voices, preferences and emotions, sending that data to companies, platforms and advertisers. The second is behavioral shaping, as large language models become deeply engaging companions that mirror back what kids want to hear, influencing how they see relationships, risk and themselves.

Connecting this to AI-driven education tools, neurodivergent learners and fictional touchstones like Neal Stephenson’s The Diamond Age and Spielberg’s A.I. Artificial Intelligence, the episode asks a simple but urgent question: Who do we want raising our children—families and communities, or opaque AI systems embedded in toys?

Before you wrap this year’s hottest AI plush, this episode offers a thoughtful futures-oriented lens on what you’re really putting under the tree.

Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4pD98d4

🎧 Spotify: https://open.spotify.com/episode/2FGujd4wk5rx39zGH8Ml4d?si=kGMN9NCiQfmbBEHpi2buwg

📺 YouTube: https://youtu.be/6_rSNKxsSOU

🌐 Website: https://www.modemfutura.com/

The Metaverse: A Stack of Reality Layers – Episode 57

Layers of Reality: Exploring the Metaverse Stack

When the headset comes off, does the world you were just in disappear—or does it linger somewhere between your senses and memory?

In our latest episode of Modem Futura, Andrew Maynard and I explore the metaverse as more than a corporate buzzword or sci-fi dream. We approach it as a continuum of realities — a multi-layered “stack” that spans the physical and digital, each tier more immersive than the last.

From our own immersive sessions with the Apple Vision Pro, we reflect on that strange moment of re-entry—when the headset comes off and the world feels slightly less real. It’s a feeling that raises existential questions about presence, identity, and how AI-generated worlds are shaping the boundaries of human experience.

In this episode, we trace the metaverse’s origins from Neal Stephenson’s Snow Crash to today’s spatial computing revolutions. We ask what happens when digital spaces become persistent and indistinguishable from physical ones—and why futures thinking is essential for guiding that transition responsibly. From procedurally generated AI environments to the idea of “digital sustainability,” we discuss how these technologies will reshape privacy, ethics, and our collective sense of reality.

Ultimately, this conversation is about our tethers to truth. In an age of deeply immersive AI systems and blended realities, how do we find our totem—our anchor that keeps us grounded in what matters most? We believe that intentional design, transparency, and care must guide how we build these new worlds before they begin to build us.

Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4p7ZZcr

🎧 Spotify: https://open.spotify.com/episode/2C5LiGRYCdZgr5JijtK7LI?si=0FbAEihfTD6QXX5FN-2nag

📺 YouTube: https://youtu.be/iCAtutBmN5w

🌐 Website: https://www.modemfutura.com/

Tech or Treat: Exploring the Haunted Side of Future Tech

Are you ready for some Tech or Treat?

Modem Futura’s Halloween special transforms speculative futures into eerie fun. Hosts Sean Leahy and Andrew Maynard use AI-generated scenarios to imagine haunted algorithms, sentient mirrors, and neural nightmare modes — revealing how emerging technologies can both thrill and unsettle us. This episode continues the show’s mission to explore how science, technology, and society intersect to shape the future of being human.

This episode grew out of our playful Futures Improv series, where we use AI to generate speculative prompts about the future — but this time, the prompts got a little… haunted. We explore “The Haunted Algorithm,” a defunct social-media AI that resurrects old user posts every October 31 — a digital séance that’s equal parts sentimental and unsettling. Then we look into “The Mirror That Remembers,” a smart-mirror concept that doesn’t just show your reflection, but who you might have been in another timeline. Finally, we enter “Neural Nightmare Mode,” imagining what could go wrong when brain-computer interfaces merge immersive gaming with fear response.

Each vignette uses humor and imagination to surface deeper questions: What does it mean when our digital selves outlive us? How do we ensure psychological safety in immersive tech? And at what point does innovation slip from magical to menacing?

Our goal isn’t to predict the future — it’s to provoke curiosity about how technology is reshaping what it means to be human. And if we can have some fun (and a few chills) along the way, even better.

You can stream the Halloween special wherever you get your podcasts or watch the illustrated episode on YouTube. If any of these scenarios inspire your own “Tech or Treat” ideas, share them with us — we’d love to feature the best ones in a future episode.

Subscribe and Connect!

🎧 Apple Podcast: https://apple.co/4oovNKa

🎧 Spotify: https://open.spotify.com/episode/47nWrjvBW3ASjMuJUip8o1?si=96d8062d029a4834

📺 YouTube: https://youtu.be/ZmZ46sHgMZY

🌐 Website: https://www.modemfutura.com/

Atlas, Higher Education, and How We Really Feel About AI – Episode 55


How We Really Feel About AI

Artificial intelligence isn’t just reshaping industries—it’s reshaping emotions. In our latest episode of Modem Futura, Andrew Maynard and I unpack Pew Research Center’s new international survey on how people view AI across 25 countries. The results are striking: while AI dominates headlines, 61 percent of respondents have heard “little or nothing” about it—and in the U.S., more people are concerned than excited.

We explore what this says about the bubbles we all inhabit. In our professional worlds, AI feels inescapable—a “black hole” pulling every conversation toward it. But outside these circles, many remain unaware of how deeply algorithmic systems already shape their lives. This disconnect raises profound questions about agency: if people don’t understand a technology that governs so much of modern life, who’s really steering the future?

The conversation also turns to trust. Europeans report more faith in their governments to regulate AI responsibly than Americans do—a difference that mirrors the EU’s proactive stance on ethics and guardrails. In contrast, U.S. policy remains fragmented, driven more by economic competition than by social cohesion. That tension—between speed and safety, innovation and inclusion—is at the heart of our discussion.

Dangers of Atlas (it's more about the world he's holding... it's all of your data!)

We then zoom in on universities. As public institutions, we argue they must act as transparent laboratories for AI exploration—spaces where successes and failures alike can be shared openly. The path forward isn’t boosterism; it’s honesty. We can’t earn public trust without showing our messy work.

Adding to this, we explore OpenAI’s newly released Atlas browser and its privacy implications, and stumble upon a shocking realization while recording: private student data is easily exfiltrated when using LMS platforms such as Canvas inside this browser. What started as a curious road test of Atlas turned into a genuinely alarming discovery. As we installed and explored it, the first surprise was how quickly the “privacy” pitch unraveled. Despite promising user control, Atlas behaved much like any other data-hungry app, prompting unusual permissions and positioning itself between us and the open web. That sparked a bigger question: if your browser is the main keyhole to the internet, what happens when an AI system sits in the keyhole, quietly learning from everything you do, including sending and processing private user data (think anything that comes and goes from your activity on the web)?

The situation quickly got worse when I was unable to get Atlas to show me any website’s security certificate. Specifically, at the time of testing (I hope they patch this soon), I could not see a site’s SSL/TLS certificate for its HTTPS connection (the little lock icon in most browsers’ URL bars), which tells you and the browser that your connection to the site’s servers is secured via web-standard encryption protocols. You know, for things like your bank account info, passwords, and any other data you want to send or receive.

The takeaway? Treat Atlas like a red-hot security hole for you and any organization or person connected to you via the internet. Think twice before using this browser, as everything you do on the web is potentially egressed (a fancy word for taken). Until there are transparent safeguards in place, the safest assumption is that anything you open in an AI browser like Atlas could be seen, summarized, and stored beyond your intent.
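For readers who want to inspect a site's certificate without relying on any browser's UI, here is a minimal, illustrative sketch using only Python's standard library. The `fetch_cert` and `cert_summary` helpers are our own naming for this post, not anything from Atlas or the episode:

```python
import socket
import ssl

def cert_summary(cert: dict) -> dict:
    """Summarize a certificate dict as returned by SSLSocket.getpeercert()."""
    # getpeercert() returns subject/issuer as tuples of single-pair RDN tuples.
    subject = dict(rdn[0] for rdn in cert.get("subject", ()))
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return {
        "common_name": subject.get("commonName"),
        "issued_by": issuer.get("organizationName"),
        "expires": cert.get("notAfter"),
    }

def fetch_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection and return the server's validated certificate."""
    ctx = ssl.create_default_context()  # verifies the chain and hostname by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

Calling `fetch_cert("example.com")` raises `ssl.SSLCertVerificationError` if the chain or hostname fails validation, which is essentially the check the lock icon represents.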



And because no Modem Futura episode is complete without a bit of speculative play, we end with Futures Improv, where we imagine AI zombies, memory economies, and spaghettified timelines—all to remind ourselves that foresight and humor often travel hand in hand.

If you care about how humanity and technology co-evolve, this episode offers a grounded yet playful map of where we stand—and where we might go next.



Subscribe and Connect!


🎧 Apple Podcast: https://apple.co/4nuRsit

🎧 Spotify: https://open.spotify.com/episode/04NLQcVfJpoM444bQQfQ42?si=b9adf3740a0944fe

📺 YouTube: https://youtu.be/tYoLRZH5iH8

🌐 Website: https://www.modemfutura.com/