Big Technology Podcast - The Big GPT-5 Debate, Sam Altman’s AI Bubble, OnlyFans Chatbots

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Did AI take a step back with GPT-5? 2) Is AI hype going to cool off? 3) GPT-5's switching problem 4) Do we need AI agents? 5) Thinking Vs. Doing AI 6) Sam Altman says parts of AI are a bubble 7) Eric Schmidt says the U.S. should stop overindexing on AGI and instead build it into products 8) GPT-6 is going to have much better memory 9) MIT study says 95% of AI projects fail to achieve their goals 10) AI may replace OnlyFans outsourced 'chatters' 11) Is love AI's real use case?

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Talk Python To Me - #517: Agentic AI Programming with Python

Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define “agentic,” then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean and creator of Full Stack Python and Plushcap, shares hard-won tactics. We unpack what breaks, from brittle “generate a bunch of tests” requests to agents amplifying technical debt and uneven design patterns, and we discuss a sane git workflow for AI-sized diffs. You’ll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: the destination is humans as editors of systems, not just typists of code.

Episode sponsors

Posit
Talk Python Courses

Matt Makai: linkedin.com

Plushcap Developer Content Analytics: plushcap.com
DigitalOcean Gradient AI Platform: digitalocean.com
DigitalOcean YouTube Channel: youtube.com
Why Generative AI Coding Tools and Agents Do Not Work for Me: blog.miguelgrinberg.com
AI Changes Everything: lucumr.pocoo.org
Claude Code - 47 Pro Tips in 9 Minutes: youtube.com
Cursor AI Code Editor: cursor.com
JetBrains Junie: jetbrains.com
Claude Code by Anthropic: anthropic.com
Full Stack Python: fullstackpython.com
Watch this episode on YouTube: youtube.com
Episode #517 deep-dive: talkpython.fm/517
Episode transcripts: talkpython.fm
Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong

--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy

The Stack Overflow Podcast - Learning in the flow: Unlocking employee potential through continuous learning

In this episode of Leaders of Code, Stack Overflow CEO Prashanth Chandrasekar and Christina Dacauaziliqua, Senior Learning Specialist at Morgan Stanley, talk about the importance of experiential learning in fast-paced environments. They emphasize the value of creating intentional learning environments where innovative tools meet collaborative communities to support growth for both individuals and organizations. 

The discussion also:

  • Explores why leaders need to model continuous learning to inspire their teams.
  • Explains three practical principles for successfully embedding a culture of ongoing learning into everyday operations.
  • Touches on Morgan Stanley's multi-year strategic initiatives centred on talent excellence and how they empower employees through an intentional learning framework and metric tracking. 

Notes:

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Stack Overflow Podcast - Robots in the skies (and they use Transformer models)

Ryan welcomes Nathan Michael, CTO at Shield AI, to discuss what AI looks like in defense technologies, both technically and ethically. They cover how the Hivemind technology works in coordinating the autonomous decisions of drones in the field while keeping humans in the loop, whether Shield AI is building Terminators, and how software security works on an edge device that could fall into enemy hands. 

Episode notes:

Shield AI produces Hivemind, a resilient autonomy platform intended to protect service members and civilians.

Congrats to Great Answer badge winner tmdavison for dropping a 100+ point answer on “Set max value for color bar on seaborn heatmap.”



See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Code Story: Insights from Startup Tech Leaders - S11 Bonus: Goutham (Gou) Rao, Neubird

Goutham Rao grew up in Brooklyn, a nerd all his life. Back in the day, his Dad bought him a Commodore 64, on which he learned to write code in BASIC. Eventually, he attended the University of Pennsylvania to get his master's in Computer Science. Outside of tech, he is married with 2 kids. He likes to travel and run. He used to compete in half marathons, but nowadays he does his running as more of a mental exercise.

Goutham saw that IT telemetry logs carry a lot of complexity in their data. Fast forward to today: he and his co-founders noticed how LLMs were processing data and realized they could build something to interpret it, "cloning" themselves to create a system that mimics issue triage.

This is the creation story of Neubird.

Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy

Big Technology Podcast - Inside Google’s Generative AI Reinvention — With Nick Fox and Liz Reid

Nick Fox is the SVP of Knowledge and Information at Google. Liz Reid is the VP of Search at Google. The two join Big Technology Podcast to discuss the way Google plans and builds in the generative AI era, including how it chooses what to ship and when. We also cover publisher traffic, search monetization and ads, shopping and product research, and the near-term future of the web. Hit play for a clear, no-fluff conversation with the leaders building search’s next chapter.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Code Story: Insights from Startup Tech Leaders - S11 E13: Matt Hamann, Rownd

Matt Hamann knew he was going to be in tech way back in his younger days. His Dad worked for IBM, so there were always fun things to talk about and play with. His family got their first computer when he was 4 years old, and he started programming in BASIC when he was 8. Eventually, they got dial-up through AOL - and he took off building websites with PHP & MySQL. Outside of tech, he is married with 3 kids. He loves to travel and spend time with his family. He also plays several instruments, including the piano and pipe organ, and enjoys tinkering with smart home devices.

Right around the time of the pandemic, Matt and his co-founder were pitching a new company idea, around data privacy, at Y Combinator. After receiving feedback that there wasn't a big market for the original idea, they started to jam on ways to pivot - and quickly landed on how cool it would be to have passwordless authentication.

This is the creation story of Rownd.

Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy

Talk Python To Me - #516: Accelerating Python Data Science at NVIDIA

Python’s data stack is getting a serious GPU turbo boost. In this episode, Ben Zaitlen from NVIDIA joins us to unpack RAPIDS, the open source toolkit that lets pandas, scikit-learn, Spark, Polars, and even NetworkX execute on GPUs. We trace the project’s origin and why NVIDIA built it in the open, then dig into the pieces that matter in practice: cuDF for DataFrames, cuML for ML, cuGraph for graphs, cuXfilter for dashboards, and friends like cuSpatial and cuSignal. We talk real speedups, how the pandas accelerator works without a rewrite, and what becomes possible when jobs that used to take hours finish in minutes. You’ll hear strategies for datasets bigger than GPU memory, scaling out with Dask or Ray, Spark acceleration, and the growing role of vector search with cuVS for AI workloads. If you know the CPU tools, this is your on-ramp to the same APIs at GPU speed.
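
Below is a minimal sketch of the drop-in pandas acceleration described above, assuming a CUDA-capable GPU and the cudf package installed; the toy dataset is made up for illustration, so check the RAPIDS docs for the supported setup on your hardware.

    # Drop-in acceleration sketch: patch pandas via cuDF before importing it.
    # In a notebook you'd use `%load_ext cudf.pandas` instead of install().
    import cudf.pandas
    cudf.pandas.install()

    import pandas as pd  # same import, same API as plain pandas

    # Toy data standing in for a real workload.
    df = pd.DataFrame({
        "channel": ["talkpython", "pythonbytes"] * 500_000,
        "downloads": range(1_000_000),
    })

    # Ordinary pandas code: supported operations run on the GPU,
    # and anything unsupported falls back to CPU pandas.
    totals = df.groupby("channel")["downloads"].sum()
    print(totals)

There is also a module-runner form, `python -m cudf.pandas your_script.py`, for running existing scripts unchanged.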

Episode sponsors

Posit
Talk Python Courses

RAPIDS: github.com/rapidsai
Example notebooks showing drop-in accelerators: github.com
Benjamin Zaitlen - LinkedIn: linkedin.com
RAPIDS Deployment Guide (Stable): docs.rapids.ai
RAPIDS cuDF API Docs (Stable): docs.rapids.ai
Asianometry YouTube Video: youtube.com
cuDF pandas Accelerator (Stable): docs.rapids.ai
Watch this episode on YouTube: youtube.com
Episode #516 deep-dive: talkpython.fm/516
Episode transcripts: talkpython.fm
Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong

--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy

The Stack Overflow Podcast - The server-side rendering equivalent for LLM inference workloads

Ryan is joined by Tuhin Srivastava, CEO and co-founder of Baseten, to explore the evolving landscape of AI infrastructure and inference workloads, how the shift from traditional machine learning models to large-scale neural networks has made GPU usage challenging, and the potential future of hardware-specific optimizations in AI. 

Episode notes:

Baseten is an AI infrastructure platform giving you the tooling, expertise, and hardware needed to bring AI products to market fast.

Connect with Tuhin on LinkedIn or reach him by email at tuhin@baseten.co.

Shoutout to user Hitesh for winning a Populist badge for their answer to “Cannot drop database because it is currently in use.”

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Python Bytes - #445 Auto-activate Python virtual environments for any project

Topics covered in this episode:
Watch on YouTube

About the show

Python Bytes 445

Sponsored by Sentry: pythonbytes.fm/sentry - Python Error and Performance Monitoring

Connect with the hosts

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.

Michael #1: pyx - optimized backend for uv

  • via John Hagen (thanks again)
  • I’ll be interviewing Charlie in 9 days on Talk Python → Sign up to get notified about the livestream here.
  • Not a PyPI replacement, more of a middleware layer to make it better, faster, stronger.
  • pyx is a paid service, with maybe a free option eventually.

Brian #2: Litestar is worth a look

  • James Bennett
  • Michael brought up Litestar in episode 444 when talking about rewriting TalkPython in Quart
  • James brings up
    • scaling - it’s easy to split a Litestar app into multiple files
    • Not using pydantic - You can use pydantic with Litestar, but you don’t have to. Maybe attrs is right for you instead.
  • Michael brought up
    • Litestar seems like a “more batteries included” option.
    • Somewhere between FastAPI and Django (a minimal Litestar sketch follows this list).
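
As a point of reference (not from the episode), here is a minimal Litestar sketch showing a route handler that returns a plain dataclass, so pydantic stays optional; the names are illustrative.

    # Minimal Litestar app; serializes a plain dataclass, no pydantic required.
    from dataclasses import dataclass

    from litestar import Litestar, get

    @dataclass
    class Episode:
        number: int
        title: str

    @get("/episodes/{number:int}")
    async def get_episode(number: int) -> Episode:
        # A real app would load this from a database.
        return Episode(number=number, title=f"Python Bytes {number}")

    app = Litestar(route_handlers=[get_episode])
    # Run with: uvicorn example:app  (assuming this file is example.py)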

Brian #3: Django remake migrations

  • Suggested by Bruno Alla on BlueSky
  • In response to a migrations topic last week
  • django-remake-migrations is a tool to help you with migrations, and its docs describe the problem far better than I did last week (a hedged setup sketch follows this list)
  • “The built-in squashmigrations command is great, but it only works on a single app at a time, which means that you need to run it for each app in your project. On a project with enough cross-apps dependencies, it can be tricky to run.”
  • “This command aims at solving this problem, by recreating all the migration files in the whole project, from scratch, and mark them as applied by using the replaces attribute.”
  • Also of note
    • The package was created with Copier
    • Michael brought up Copier in 2021 in episode 219
    • It has a nice comparison table with Cookiecutter and Yeoman
    • One difference from Cookiecutter is YAML vs. JSON.
    • I’m actually not a huge fan of hand-writing either format, but I guess I’d rather hand-write YAML.
    • So I’m thinking of trying Copier for my future project template needs.
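
For orientation, a hedged setup sketch for django-remake-migrations follows; the app label and command name are assumptions based on the package docs, so verify them before copying.

    # settings.py -- enable the package (app label assumed, check the docs)
    INSTALLED_APPS = [
        # ... your existing apps ...
        "django_remake_migrations",
    ]

    # Then, from the project root (command name assumed):
    #   python manage.py remakemigrations
    # This regenerates migrations for every app at once and marks the new
    # files as applied via the `replaces` attribute, instead of squashing
    # one app at a time like the built-in squashmigrations.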

Michael #4: django-chronos

  • Django middleware that shows you how fast your pages load, right in your browser.
  • Displays request timing and query counts for your views and middleware.
  • Times middleware, view, and total time per request (CPU and DB); a hedged settings sketch follows this list.
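
A hedged settings sketch for a timing middleware like django-chronos follows; the dotted middleware path is hypothetical, so take the real one from the project's README.

    # settings.py -- the chronos path below is hypothetical, check the README
    MIDDLEWARE = [
        "django_chronos.middleware.ChronosMiddleware",  # placed first so it times everything below it
        "django.middleware.security.SecurityMiddleware",
        "django.contrib.sessions.middleware.SessionMiddleware",
        "django.middleware.common.CommonMiddleware",
        # ... the rest of your middleware ...
    ]
    # With this enabled, each page shows request timing and query counts
    # in the browser, per the description above.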

Extras

Brian:

  • Test & Code 238: So Long, and Thanks for All the Fish
    • After 10 years, this is the goodbye episode.

Michael:

Joke: python is better than java