Show Me the Receipts: The Futures of AI Superintelligence – Episode 44

Superintelligence “In Sight”? Cutting Through Hype to Keep Humans in the Loop

If you’ve felt whiplash from this summer’s AI headlines, you’re not alone. In our latest episode of Modem Futura, we unpack Big Tech’s bolder‑than‑bold claim that “developing superintelligence is now in sight”—and ask for something simple before we all sprint into the future: receipts. We break down what companies are signaling when they talk about AI systems that “improve themselves,” why that sounds momentous, and where the marketing ends and the evidence begins.

First, definitions that matter. Today’s tools remain narrow—powerful, yes, but specialized. AGI is the hypothetical jump to general capability; superintelligence (ASI) is the further leap beyond any human capability. We explore why those terms are so often moved like goalposts, and why declaring ASI “near” without a stable definition confuses the public, policymakers, and practitioners alike and, from our perspective, is irresponsible.

Then we zoom in on a practical pain point: reliability. When platforms silently change models, tools, or defaults, workflows break (hello, GPT-5). In education and professional settings, that unpredictability isn’t just irritating—it’s costly. We share real examples (from transcripts labeled with the wrong speakers to model behavior shifting overnight) and discuss what “enterprise‑grade” should mean for LLMs people depend on.
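One practical hedge, for anyone building on an LLM API, is to pin a dated model snapshot instead of a floating alias and to log which model actually served each request. Here’s a minimal Python sketch, assuming the OpenAI Python SDK; the snapshot name is illustrative, so substitute whatever dated version your vendor actually publishes:

```python
# A minimal sketch of defending a workflow against silent model changes.
# Assumes the OpenAI Python SDK; the pinned model string is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A dated snapshot, not a floating alias like "gpt-4o" that can be
# re-pointed server-side without warning.
PINNED_MODEL = "gpt-4o-2024-08-06"

response = client.chat.completions.create(
    model=PINNED_MODEL,
    messages=[{"role": "user", "content": "Label the speakers in this transcript: ..."}],
)

# Log which model actually answered, so drift shows up in your records
# instead of in your results.
print(response.model, response.choices[0].message.content)
```

It isn’t a cure-all—snapshots eventually deprecate too—but it at least turns a silent swap into a visible, scheduled migration.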

We also play with the upside—digital twins and imaginative design. If a campus has a high‑fidelity digital twin, why stop at mirroring reality? Why not prototype preferable futures—safer, more inclusive, more sustainable spaces—and test them before we build? Of course, reliability matters there too; when operational systems depend on simulations, unintended tweaks can ripple into the real world.

Across the hour, we push back on technological solutionism—the reflex to cast AI as the single answer to complex, “wicked” problems. Yes, we’re excited about AI’s potential; no, it won’t magically resolve conflict, poverty, or disease without broader social, economic, and political work. Framing ASI as our only lifeboat risks narrowing our imaginations right when we need them most.

Ultimately, we return to our favorite question: What does it mean to be human when machines can emulate so much of what we do? For us, that means staying curious and critical, inviting more diverse perspectives into the conversation, and insisting on transparent claims we can evaluate—before ceding agency to systems we don’t fully understand.

If this episode gave you a useful lens on the AI noise, share it with a colleague, drop a comment with the boldest AI claim you’ve heard and the evidence that would convince you, and subscribe so you don’t miss what’s next.

🎧 Listen on Apple Podcasts: https://apple.co/41Ayf6J

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, follow the links above to listen to the podcast or watch the video on YouTube. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on your favorite podcast platform, follow us on LinkedIn, and join the conversation by sharing your thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/41Ayf6J

🎧 Spotify: https://open.spotify.com/episode/0o1vePcG8wh2VmMtETRKjy?si=Fu0xyDEkSHKB-R3gkKKWoQ

📺 YouTube: https://youtu.be/XO9dLoYhIvY

🌐 Website: https://www.modemfutura.com/

Futures of Agentic AI and the 2025 AI Action Plan – Episode 42

A Wet Hot AI Summer: Decoding the U.S. AI Action Plan & the Agentic‑Bot Boom

If you stepped away from the screen or the feed for even a moment this July, you might have missed two massive AI stories that could shape near-term AI innovation. First, the White House released its 2025 AI Action Plan—a 20-plus-page blueprint built on three pillars: (1) Accelerate AI innovation, (2) Build national AI infrastructure, and (3) Lead global AI diplomacy. If that wasn’t news enough, on July 17th OpenAI announced the rollout of its new “Agent” mode—autonomous-ish bots that promise to book your travel, manage your calendar, and even spend your money while you sleep.

Joking aside: please be VERY careful about what sort of access, privacy, and information you give any automated service. Ask yourself, “What would be the worst that could happen?” If the answer makes you cringe or sweat, don’t do that thing. Okay, PSA cautionary rant over… back to the episode notes.
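To make that “worst that could happen” test concrete, here’s a hypothetical Python sketch of a human-in-the-loop gate: the agent can read and search freely, but anything that spends money or touches private data pauses for explicit sign-off. None of these names come from a real agent framework’s API—they’re illustrative only.

```python
# A hypothetical human-in-the-loop gate for agent actions. The action names
# and the gate itself are illustrative, not any real framework's API.
SENSITIVE_ACTIONS = {"send_email", "make_payment", "share_calendar"}

def run_agent_action(action: str, details: str) -> str:
    """Execute an agent-requested action, pausing for a human on anything risky."""
    if action in SENSITIVE_ACTIONS:
        answer = input(f"Agent wants to {action}: {details!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human declined"
    # Non-sensitive actions (searching, reading) proceed without interruption.
    return f"executed: {action}"

# Example: the flight search runs freely; the payment waits for you.
print(run_agent_action("search_flights", "PHX -> SFO, Sept 12"))
print(run_agent_action("make_payment", "$412.00 to AirCo"))
```

The design choice is the point: defaulting sensitive actions to “blocked until a human says yes” is exactly the kind of guardrail the flashy demos tend to skip.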

In our latest Modem Futura episode, Andrew and I pull these threads together. We ask whether the Action Plan’s “build‑baby‑build” mantra—complete with massive semiconductor subsidies and calls to “remove regulatory barriers”—is a bold vision or a reckless speed run. We also spotlight what’s missing: robust guardrails for deepfakes, algorithmic bias, and the colossal energy footprint of new data centers.

Switching to agentic AI, we run real‑time tests on OpenAI’s new Agent Mode and compare it with Manus’s more mature workflow. Yes, watching a bot open browser tabs for you is technically impressive—until you realize you can still do most tasks faster yourself. That friction sparks a wider debate:

Productivity paradox – Early studies already suggest teachers and coders can spend more time fact‑checking AI output than drafting from scratch.

Privacy trade‑offs – Granting an agent access to your email or bank account may save clicks now, but what’s the long‑term cost to autonomy?

Deepfake backlash – The Plan flags courtroom deepfakes as a national‑security risk, yet leaves broader social harms largely unaddressed.

Behind the policy prose and flashy demos lurks a wider narrative of tech nationalism. The document casts AI as a race the United States must win, positioning allies as followers and China as the ultimate adversary. That framing risks turning open research into a geopolitical arms sprint—one where ethical reflection gets lapped by hype.

So where does that leave forward‑thinking professionals, educators, and creators? We advocate starting these conversations now; here are some good places to begin:

Stay curious but critical. Piloting new agent tools is the best way to spot real value—and red flags—early.

Advocate for “responsible speed.” Innovation and regulation are not mutually exclusive; demand both from vendors and policymakers.

Own your data literacy. Whether you’re vetting deepfake evidence or AI‑generated lesson plans, skepticism is fast becoming a core career skill.

🎧 Tune in for the full discussion—including Hitchhiker’s Guide jokes, live agent fails, and pragmatic optimism about building a flourishing, not merely faster, future.

🎧 Listen on Apple Podcasts: https://apple.co/4l7eCKC

📺 Watch us on YouTube: https://www.youtube.com/@ModemFutura

If you’d like to dive deeper, follow the links above to listen to the podcast or watch the video on YouTube. Join us as we explore the forces shaping our collective future and the urgent need to keep human values at the heart of innovation.

Subscribe and Connect!

Subscribe to Modem Futura on your favorite podcast platform, follow us on LinkedIn, and join the conversation by sharing your thoughts and questions. The medium may still be the massage, but everyone has a chance to shape how it kneads modern culture—and to decide what kind of global village we ultimately build.

🎧 Apple Podcast: https://apple.co/4l7eCKC

🎧 Spotify: https://open.spotify.com/episode/2fI044VpiPE3t4Y9MXrZjJ?si=mJ-xb414R3Ww7IkTOIlT0Q

📺 YouTube: https://youtu.be/6fcOiRYnIK8

🌐 Website: https://www.modemfutura.com/