Talk Python To Me - #536: Fly inside FastAPI Cloud

You've built your FastAPI app, it's running great locally, and now you want to share it with the world. But then reality hits -- containers, load balancers, HTTPS certificates, cloud consoles with 200 options. What if deploying was just one command? That's exactly what Sebastián Ramírez and the FastAPI Cloud team are building. On this episode, I sit down with Sebastián, Patrick Arminio, Savannah Ostrowski, and Jonathan Ehwald to go inside FastAPI Cloud, explore what it means to build a "Pythonic" cloud, and dig into how this commercial venture is actually making the open-source FastAPI project stronger than ever.

Episode sponsors

Command Book
Python in Production
Talk Python Courses

Guests
Sebastián Ramírez: github.com
Savannah Ostrowski: github.com
Patrick Arminio: github.com
Jonathan Ehwald: github.com

FastAPI labs: fastapilabs.com
quickstart: fastapicloud.com
an episode on diskcache: talkpython.fm
Fastar: github.com
FastAPI: The Documentary: www.youtube.com
Tailwind CSS Situation: adams-morning-walk.transistor.fm
FastAPI Job Meme: fastapi.meme
Migrate an Existing Project: fastapicloud.com
Join the waitlist: fastapicloud.com

Talk Python CLI
Talk Python CLI Announcement: talkpython.fm
Talk Python CLI GitHub: github.com

Command Book
Download Command Book: commandbookapp.com
Announcement post: mkennedy.codes

Watch this episode on YouTube: youtube.com
Episode #536 deep-dive: talkpython.fm/536
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython

Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

Code Story: Insights from Startup Tech Leaders - S12 E5: Marc Gyöngyösi, OneTrack

Marc Gyöngyösi has had a lifelong passion for building and technology, shaped early on by time spent crafting wooden projects and tinkering with remote-controlled vehicles... before progressing to constructing a full 737 flight-simulator cockpit in his parents' basement as a young teen. His interests have consistently centered on blending the physical and digital worlds, from open-source flight-simulator development to modern explorations in AI, which now occupies most of his free time. Outside of tech, he enjoys running, skiing, golf, and staying active, and although he has spent time flying, he's stepped back from it due to time constraints. He's especially fond of a well-made Austrian Wiener Schnitzel — an elusive treasure in the U.S., but one he happily tracks down whenever possible.

In 2017, Marc launched his company Intelligent Flying Machines, which began as a college project focused on building autonomous drones for warehouses. After dealing with crashes, and 12 stitches from said crashes, Marc shifted his focus from flying robots to a broader computer-vision platform capturing real-world data.

This is the creation story of OneTrack.

Sponsors

  • Unblocked
  • TECH Domains
  • Mezmo
  • Braingrid.ai
  • Alcor
  • Equitybee
  • Terms and conditions: Equitybee executes private financing contracts (PFCs) allowing investors a certain claim to ESO upon a liquidation event; could limit your profits. Funding is not guaranteed. PFCs brokered by EquityBee Securities, member FINRA.


Links



Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy

The Stack Overflow Podcast - The logos, ethos, and pathos of your LLMs

Ryan is joined by Professor Tom Griffiths, the head of Princeton University’s AI Lab, to dive into findings from his new book The Laws of Thought, which explores the history of the philosophy, mathematics, and logic that underlie artificial intelligence, and scientists' efforts to describe our minds using mathematics. They discuss the challenges of understanding human cognition, the implications of probabilistic AI “thinking,” and where Aristotle fits into the philosophical discussions we’re having on consciousness and sentience in AI. 

Episode notes: 

The Laws of Thought details our quest to use mathematics to describe the ways we think, from its origins three hundred years ago to the ideas behind modern AI systems and how our human minds differ from the neural networks of AI. 

Connect with Tom on LinkedIn and find more of his work at the Princeton website

Congrats to user Andreas Rayo Kniep for winning a Populist badge for their answer to "Is there a difference between the UTC and Etc/UTC time zones?".

We want to know what you're using to upskill and learn in the age of AI. Take this five-minute survey on learning and AI to have your voice heard in our next Stack Overflow Knows Pulse Survey.

TRANSCRIPT

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Code Story: Insights from Startup Tech Leaders - S12 Bonus: Johnny Halife, Southworks

Johnny Halife was born and raised in Argentina. As such, he takes soccer very seriously. He is a die-hard fan of Boca and has taken his family to live games in Miami and Nashville. He is the father of two young boys, which he notes completely changed his life. He has been slowly introducing them to soccer, as an Argentine father would do, and they love the roar of the stadium during a game. He also claims to be a really bad golfer, which I can relate to.

Twenty-one years ago, Johnny started working for Microsoft Engineering behind the scenes, helping them shape products. Eventually, he and his team started asking the question - if we are helping Microsoft, why don't we help other companies?

This is Johnny's creation story at Southworks.

Sponsors

Links



Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy

African Tech Roundup - Natasha Blycha of Stirling & Rose / Nooriam / LexChip: Why AI without law is just code without conscience


Episode overview:

Natasha Blycha's path into emerging technology law started in an unlikely place. As a gap-year volunteer teaching English and economics at a school outside Gweru, Zimbabwe, circa 2000, she was simultaneously working for a small rural law firm on constitutional questions — an experience she credits with shaping the questions that have driven her career since.

In conversation with Andile Masuku, Blycha — who co-authored the Oxford Smart Legal Contracts textbook and was named the Financial Times' Most Innovative Lawyer — traces a line from those early days to advising global banks on whether their crypto experiments were even legal, to building LexChip: technology that embeds enforceable contracts directly into AI-powered devices.

The conversation spans smart contracts (the technical kind and the legally binding kind — they're different), why crypto adoption in Nigeria and Ghana has less to do with speculation and more to do with broken banking infrastructure, and what Jensen Huang's "five-layer AI cake" means for nations trying to build sovereign AI stacks without the energy, chips, or legal infrastructure to hold them together.

Blycha's central argument: if we can't put code in jail, and AI systems are becoming economic stakeholders that can book a million flights or displace entire workforces, then the law as currently designed has a problem. Her proposed contribution — smart legal contracts that act as referees inside AI systems, capable of stopping a device when it breaches its own rules — sits at the intersection of contract law and responsible AI.


Key insights:

  • On why this isn't Y2K: "This is so much more complicated, so much more geopolitically complicated. And if we said that Y2K didn't happen, it was one day we got to find out. What we're seeing already with AI systems is we're already getting the proof in the pudding that they are working." Blycha argues Y2K was a manageable vector of complexity compared to AI. The difference: AI systems are actively delivering on their promise, and big tech's mandate to reach AGI means we can't simply wait for one day to find out.
  • On why Africa's slower adoption might be an advantage, not a liability: "If I cannot keep the power on, am I really talking about agentic AI?" But Blycha points to a counterintuitive upside: countries without legacy infrastructure can leapfrog, just as India and parts of Africa bypassed landlines for mobile. Crypto adoption in Nigeria and Ghana demonstrates this — populations using blockchain not as a speculative instrument but as functional money in economies where traditional banking fails them.
  • On the difference between smart contracts and smart legal contracts: A smart contract is code that executes on a blockchain — "if this happens, do this." It's a technical term, not a legal one. A smart legal contract, by contrast, is a real, enforceable agreement where specific clauses are automated. Blycha uses the example of a lease where rent adjusts automatically based on CPI. The distinction matters because conflating the two obscures where legal accountability actually sits.
  • On the fundamental legal problem AI creates: "The law needs a person to ascribe responsibility to." Bitcoin was invented by someone who may not exist. Decentralised autonomous organisations insist the code is responsible, not them. But you can't put code in jail. As AI agents proliferate — booking flights, managing finances, making hiring decisions — the gap between what the technology does and who the law can hold accountable is widening faster than regulators can respond.
  • On smart legal contracts as AI's conscience: Through LexChip, Blycha's team is embedding contracts directly into AI edge devices — robotics, autonomous vehicles, hardware with embodied AI. These contracts can monitor behaviour in real time and, critically, act as a referee: stopping a device safely when it breaches its rules. "You've taken an analog thing, you've turned it into a performance-based contract and it can speak to an AI system."
  • On Ubuntu as an AI governance framework — with a warning: Blycha was moved by the Ubuntu principle of interconnectedness during a family visit to South Africa. She sees it as a potentially powerful ethical framework for AI policy — but cautions against using it as "window dressing for someone to write a wishy-washy policy that then doesn't deal with the hard stuff." The hard stuff: GPU clusters, cloud compute, sovereign data infrastructure. Values without investment are just declarations.
  • On who opposes all of this — and why: Peter Thiel and a portion of Silicon Valley divide the world into accelerators and decelerators. In their framing, lawyers like Blycha are slowing down progress toward a post-human, transhumanist future of brain-computer interfaces and infinite lifespan. Blycha's response: "This is not a lawyers versus the tech bros conversation because there is an extremely large majority of the tech bros who are also saying there is a big problem here."


Notable moments:

1. The first text message: At the Bata Club in Gweru, Zimbabwe, circa 2000 — a social venue attached to a Canadian shoe factory — Blycha saw her first SMS travel between England and Zimbabwe on a feature phone. "It wasn't a smartphone, it was a dead phone." She'd bought her flight to Zimbabwe on the day of the Y2K bug because tickets were cheap. That moment — witnessing a communication revolution in a country experiencing currency crisis and fuel shortages — frames the conversation's central question about technology adoption in constrained environments.

2. The Mennonite test: Visiting Amish communities in Ohio, Blycha learned their approach to technology adoption. "They don't prohibit technology at all. They ask two questions: does this technology bring me closer to my family and does this technology bring me closer to God?" Asked how everyday people should think about adopting AI tools, Blycha offered this as her "heart answer" — a striking conclusion from someone who has spent her career at technology's legal frontier.

3. The McKinsey displacement reality: Blycha points to McKinsey's replacement of significant portions of its workforce with AI agents as evidence that displacement is not theoretical. The legal question this raises: how do you write an employment contract with an AI agent? And when that agent — operating at a scale no human can oversee — breaches the law, the "human in the loop" principle that underpins every AI governance framework starts to break down.


Connect and engage:


Resources referenced in this episode:

  • Natasha Blycha on Shirtloads of Science podcast

Python Bytes - #469 Commands, out of the terminal

Topics covered in this episode:
Watch on YouTube

About the show

Sponsored by us! Support our work through:

Michael #1: Command Book App

  • New app from Michael
  • Command Book App is a native macOS app for developers, data scientists, AI enthusiasts and more.
  • This is a tool I've been using lately to help build Talk Python, Python Bytes, Talk Python Training, and many more applications.
  • It's a bit like advanced terminal commands or complex shell aliases, but hosted outside of your terminal. This leaves the terminal there for interactive commands, exploration, short actions.
  • Command Book manages commands like "tail this log while I'm developing the app", "Run the dev web server with true auto-reload", and even "Run MongoDB in Docker with exactly the settings I need"
  • I'd love it if you gave it a look, shared it with your team, and sent me feedback.
  • Has a free version and paid version.
  • Built with Swift and SwiftUI
  • Check it out at https://commandbookapp.com

Brian #2: uvx.sh: Install Python tools without uv or Python

  • Tim Hopper

Michael #3: Ending 15 years of subprocess polling

  • by Giampaolo Rodola
  • The standard library's subprocess module has relied on a busy-loop polling approach since the timeout parameter was added to Popen.wait() in Python 3.3, around 15 years ago
  • The problem with busy-polling
    • CPU wake-ups: even with exponential backoff (starting at 0.1ms, capping at 40ms), the system constantly wakes up to check process status, wasting CPU cycles and draining batteries.
    • Latency: there's always a gap between when a process actually terminates and when you detect it.
    • Scalability: monitoring many processes simultaneously magnifies all of the above.
    • + L1/L2 CPU cache invalidations
  • It’s interesting to note that waiting via poll() (or kqueue()) puts the process into the exact same sleeping state as a plain time.sleep() call. From the kernel's perspective, both are interruptible sleeps.
  • Here is the merged PR for this change.
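
As a rough illustration of the old behavior (not the stdlib's actual implementation), the busy-poll loop described above, with its 0.1 ms starting delay and 40 ms cap, looks something like this; `wait_with_backoff` is a hypothetical helper, not part of `subprocess`:

```python
import subprocess
import sys
import time

def wait_with_backoff(proc: subprocess.Popen, timeout: float) -> int:
    """Sketch of the busy-poll strategy: repeatedly check whether the
    child has exited, sleeping with exponential backoff starting at
    0.1 ms and capped at 40 ms."""
    deadline = time.monotonic() + timeout
    delay = 0.0001  # 0.1 ms initial sleep
    while proc.poll() is None:  # non-blocking status check
        if time.monotonic() > deadline:
            raise subprocess.TimeoutExpired(proc.args, timeout)
        time.sleep(delay)  # the repeated CPU wake-up the fix eliminates
        delay = min(delay * 2, 0.04)  # cap backoff at 40 ms
    return proc.returncode

child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
print(wait_with_backoff(child, timeout=5))  # → 0
```

The merged change replaces this sleeping loop with a blocking OS-level wait (poll()/kqueue, as noted above), so the kernel wakes the waiter once, when the child actually exits.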

Brian #4: monty: A minimal, secure Python interpreter written in Rust for use by AI

  • Samuel Colvin and others at Pydantic
  • Still experimental
  • “Monty avoids the cost, latency, complexity and general faff of using a full container based sandbox for running LLM generated code.”
  • “Instead, it lets you safely run Python code written by an LLM embedded in your agent, with startup times measured in single digit microseconds not hundreds of milliseconds.”

Extras

Brian:

Michael:

Joke: Silence, current side project!

Big Technology Podcast - Software In Shambles, OpenAI vs. Anthropic Super Brawl, Amazon’s Struggles

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) AI worries crush software stocks 2) Why is software in the crosshairs? 3) Is it vibecoding or that software becomes an input into AI bots 4) Why software might make it through 5) Anthropic's legal plugin that set it off 6) Okay, so no AI bubble? 7) Anthropic's Super Bowl ad attacks OpenAI 8) OpenAI's response 9) OpenAI losing share to rival chatbots 10) What the hell is happening to Bitcoin? 11)

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Learn more about your ad choices. Visit megaphone.fm/adchoices

The Government Huddle with Brian Chidester - 201: The One with the Marin County, CA Innovation Chief

Crosby Burns, Chief Digital & Innovation Officer for Marin County, California, joins the show for a candid and forward-looking conversation on what digital innovation really means in government today. Together, we explore how a service-first mindset can help governments balance security, accessibility, and trust in an era increasingly shaped by AI, bots, and new digital entry points. We also discuss how Marin County is rethinking public websites as repositories of structured information designed for both humans and machines, and how accessibility goes far beyond compliance to removing friction across the entire citizen journey.


The Stack Overflow Podcast - AI attention span so good it shouldn’t be legal

We have another two-for-one special this week, with two more interviews from the floor of re:Invent. First, Ryan welcomes Pathway CEO Zuzanna Stamirowska and CCO Victor Szczerba to dive into their development of Baby Dragon Hatchling, the first post-transformer frontier model, from how continual learning and memory will transform AI to the real-world use cases for longer LLM attention span. 

In the second part of this episode, Ryan is joined by Rowan McNamee, co-founder and COO of Mary Technology, to discuss bringing AI into the carefully governed world of litigation and how LLMs are helping lawyers manage and interpret the vast amounts of legal evidence that pass across their desks every day.

Episode notes: 

Pathway is building the first post-transformer frontier model that solves for attention span and continual learning.

Mary Technology is an AI for attorneys that turns evidentiary documents into structured, easy-to-review facts.

Connect with Zuzanna on LinkedIn and Twitter

Reach out to Victor at his email: victor@pathway.com 

Connect with Rowan on LinkedIn.

We want to know what you're using to upskill and learn in the age of AI. Take this five-minute survey on learning and AI to have your voice heard in our next Stack Overflow Knows Pulse Survey.

TRANSCRIPT

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Code Story: Insights from Startup Tech Leaders - Impact: How to Inspire, Align and Amplify Innovative Teams with Keith Lucas

Today, we have a special guest on the podcast, Keith Lucas, a startup advisor specializing in product, growth, people and culture. Keith led product and engineering at Roblox, helping scale its infrastructure, product offerings, team and business. Most recently, Keith published a book entitled Impact: How to Inspire, Align, and Amplify Innovative Teams. All book proceeds go to charities to help young entrepreneurs, so make sure you check the link in the notes and grab the book today.

In our chat, Keith walks us through key concepts in the book, centering your team around the vision and mission of what you are driving towards, from recruiting to execution to "coaching out".

Questions:

  • What was your goal in writing this book? What were you hoping to accomplish?
  • In Chapter 1, you mention purpose inspiring action. How does aligning to purpose drive urgency, without resulting in burnout or being an "antiquated mandate", like you mention in Chapter 2?
  • You state "Culture is what you do, not what you say"... How does a leader's daily behavior - especially around micromanagement or decision-making speed - define the team's realized values, overriding the company's codified ones?
  • I found the idea of The Cascade (Chapter 5) interesting, mapping core beliefs to execution alignment. In terms of feedback, what is the difference between "belief busting" and "hypothesis busting" feedback? How should leaders respond to each in order to maintain trust and agility?
  • How often should entrepreneurial teams deliberately challenge and re-org autonomous pods to optimize for agility and opportunity, over long term stability?
  • Now this is interesting - the "okay contributor", whom you define as a person who meets standards in all areas but shows no exceptionalism. Why is this person more damaging to a culture of mastery than the high-talent disrupter?
  • What is a Mission Athlete? When recruiting, how does preparing a vision doc for a role shift the recruiting conversation from transactional to one focused on strategic alignment and ownership?
  • You mention in Chapter 8 that compensation can be a distraction. What core mistakes do scaling startups make with compensation that turn it from a non-issue into an energy-sapping problem that erodes retention?
  • You define Coaching Out as the intentional process of protecting the productive from the disruptive, treating an exit as a non-personal assessment that maintains decency and clarity. Can you describe the GYOR continuum?
  • Why should leaders avoid formal PIPs when dealing with a struggling team member? What must replace them to ensure accountability and decency?

Sponsors

  • TECH Domains
  • Mezmo
  • Braingrid.ai
  • Alcor
  • Equitybee
  • Terms and conditions: Equitybee executes private financing contracts (PFCs) allowing investors a certain claim to ESO upon a liquidation event; could limit your profits. Funding is not guaranteed. PFCs brokered by EquityBee Securities, member FINRA.


Links



Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy