Ryan welcomes Nic Benders to discuss the complexity and abstraction crisis in software development, the importance of going beyond observability into understandability, and how demystifying AI's opacity can restore understanding and control.
Episode notes:
New Relic is a full-stack observability platform that helps engineers plan, build, deploy, and run software. Read their 2025 observability forecast.
Connect with Nic on LinkedIn or email him at nic@newrelic.com.
Erez Druk grew up in Israel but has lived in the Bay Area for many years. A common theme in his life is obsessing over his current thing: in fourth grade it was the saxophone, later it was becoming Israel's board game champion, and then he became obsessed with startups. Outside of tech, he is married and expecting his first child. He's into exercising, reading, and coffee. His favorite outing is going to a coffee shop with his wife for a cappuccino and a pastry, but at home he leans toward his AeroPress.
Eight years ago, Erez met his wife as she was heading into medical school. He got to see firsthand how people in the healthcare system work and how hard their jobs are. After wrapping up his prior startup, he started down the path of building a solution that improves the lives of these clinicians.
CodeCrafters helps you become a better engineer by building real-world, production-grade projects. Learn hands-on by creating your own Git, Redis, HTTP server, SQLite, or DNS server from scratch. Sign up for free today using this link and enjoy 40% off.
M.G. Siegler of Spyglass is back for our monthly tech news discussion. Today we dig into OpenAI’s newly cleared path to an IPO, what trillion-scale capex vs. current revenue implies, and how Microsoft’s 27% stake, IP rights, and fresh AWS entanglements complicate the story. We debate whether the market can stomach years of heavy losses, why “AGI or bust” creates systemic risk, and what happens if model gains plateau, compute economics flip, or fast followers erase any AGI edge. Finally, we look at Apple’s iPhone 17 resurgence—why it’s hitting now and whether it’s enough without a breakthrough assistant. Tune in for a clear walkthrough of tech’s biggest news with one of the industry’s sharpest analysts.
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
Today, we have another episode in our series, sponsored by our good friends at Railsware. Railsware is a leading product studio with two main focuses: services and products. They have created amazing products like Mailtrap, Coupler, and TitanApps, while also partnering with teams like Calendly and Bright Bytes. They deliver amazing products and have happy customers to prove it.
In this series, we are digging into the company's methods around product engineering and development. In particular, we will cover topics that not only highlight their expertise but also shed light on industry trends through their experience.
In today's episode, we are talking again with Sergiy Korolov, Co-CEO of Railsware and Co-founder of Mailtrap. In my conversation with Sergiy, we dive into how Railsware delivers value, not just features, by following its BRIDGeS framework to keep the team focused on value delivery.
Questions:
Railsware is proud of its product development approaches, so let's pave the way to our topic through one of your prominent cases. In its early days, Calendly reached out to you to deliver their product, with a tight budget and a large set of requirements. You've mentioned that several of those initially expected features were never built. This leads me to the question: to you, what's the difference between shipping features and delivering value, and why do so many product teams get this wrong?
You've worked on several client products, as well as on Railsware's own. How do you identify what "value" actually means for different stakeholders?
Railsware is known for its BRIDGeS framework, a useful tool for getting the team on the same page and setting the product process straight. Can you walk us through the BRIDGeS framework and how it helps teams focus on value delivery?
What role does user research and validation play in the BRIDGeS approach?
Can you share a specific example where applying BRIDGeS helped a team pivot from building the wrong features to delivering real value?
What's the biggest challenge teams face when transitioning from feature delivery to value delivery?
Mrinal Wadhwa grew up in India with a dad in the armed forces, so he moved around a lot. His mother was a teacher for more than 40 years and greatly influenced his love of teaching. He also grew up loving to build things; his cousin introduced him to computers and the internet, and from that point on he was hooked. Outside of tech, he is married and enjoys attending concerts in the Bay Area. He plays pool very seriously; in fact, he's the guy who shows up at a party carrying a little bag with his own cue.
Mrinal is one of the minds behind Ockam, a popular open source Rust toolkit for building secure communication between applications. Late last year, he noticed people trying to build the communication layer between agents... and decided to build something that does it the right way.
Ryan is joined by Greg Foster, CTO of Graphite, to explore how much we should trust AI-generated code to be secure, why tooling matters for code security whether the code is AI-assisted or not, and why AI-generated code still needs context and readability for humans.
Episode notes:
Graphite is an AI code review platform that helps you get context on code changes, fix CI failures, and improve your PRs right from your PR page.
Connect with Greg on LinkedIn and keep up with Graphite on their Twitter.
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.
In Jan 2025, the PSF submitted a proposal for a US NSF grant under the Safety, Security, and Privacy of Open Source Ecosystems program. After months of work by the PSF, the proposal was recommended for funding.
If the PSF accepted it, however, they would need to agree to some terms and conditions, including affirming that the PSF doesn't support diversity. The restriction wouldn't apply just to the security work, but to all activity of the PSF as a whole. And further, any deemed violation would give the NSF the right to ask for the money back.
That just won't work, as the PSF would have already spent the money.
The PSF mission statement includes "The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers." The money would obviously have been very valuable, but the restrictions were simply unacceptable.
The PSF withdrew the proposal. This couldn't have been an easy decision; that was a lot of money. But I think the PSF did the right thing.
The Lean TDD book will be written in the open. The table of contents, some details, and a 10-page introduction are now available. I'm hoping for the first pass to be complete by the end of the year.
I’d love feedback to help make it a great book, and keep it small-ish, on a very limited budget.
Today, we’re talking about building real AI products with foundation models. Not toy demos, not vibes. We’ll get into the boring dashboards that save launches, evals that change your mind, and the shift from analyst to AI app builder. Our guide is Hugo Bowne-Anderson, educator, podcaster, and data scientist, who’s been in the trenches from scalable Python to LLM apps. If you care about shipping LLM features without burning the house down, stick around.
Hugo Bowne-Anderson: x.com
Vanishing Gradients Podcast: vanishinggradients.fireside.fm
Fundamentals of Dask: High Performance Data Science Course: training.talkpython.fm
Building LLM Applications for Data Scientists and Software Engineers: maven.com
marimo: a next-generation Python notebook: marimo.io
DevDocs (Offline aggregated docs): devdocs.io
Elgato Stream Deck: elgato.com
Sentry's Seer: talkpython.fm
The End of Programming as We Know It: oreilly.com
LorikeetCX AI Concierge: lorikeetcx.ai
Text to SQL & AI Query Generator: text2sql.ai
Inverse relationship enthusiasm for AI and traditional projects: oreilly.com
Dan Houser is a co-founder of Rockstar Games and the legendary creative mind behind the Grand Theft Auto (GTA) and Red Dead Redemption series of video games.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep484-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
OUTLINE:
(00:00) – Introduction
(01:29) – Sponsors, Comments, and Reflections
(11:32) – Greatest films of all time
(23:45) – Making video games
(26:36) – GTA 3
(29:55) – Open world video games
(32:42) – Character creation
(36:09) – Superintelligent AI in A Better Paradise
(45:21) – Can LLMs write video games?
(49:41) – Creating GTA 4 and GTA 5
(1:01:16) – Hard work and Rockstar’s culture of excellence
(1:04:56) – GTA 6
(1:21:46) – Red Dead Redemption 2
(2:01:39) – DLCs for GTA and Red Dead Redemption
(2:07:58) – Leaving Rockstar Games
(2:17:22) – Greatest game of all time
(2:22:10) – Life lessons from father
(2:24:29) – Mortality
(2:41:47) – Advice for young people
(2:47:49) – Future of video games
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI converts to a public benefit corporation 2) Why this is big news 3) Satya Nadella's wise OpenAI maneuver 4) Microsoft wants every AI model on Azure 5) Is AGI dead? 6) Inside Microsoft and OpenAI's negotiations 7) Sam Altman charts out OpenAI's next three years 8) Is building automated AI researchers a worthwhile and ambitious goal? 9) OpenAI also wants to be its own AI cloud 10) OpenAI has become Facebook, kinda 11) OpenAI employees say they don't want to be engagement farmers 12) Meta's threat from OpenAI 13) Instead of the AI bubble, how about the AI wobble? 14) Do we want the 1X Technologies Neo humanoid robot?
---
Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com