Rich Harris, creator of Svelte and software engineer at Vercel, joins Ryan on the show to dive into the evolution and future of web frameworks. They discuss the birth and growth of Svelte during the rise of mobile, the challenges of building robust and efficient web applications, how companies can back more open-source community projects, and the dirty little secret about asynchronous operations and component frameworks.
Episode notes:
Svelte is a UI framework that uses a compiler to let you write components using HTML, CSS and JavaScript. It’s ranked as one of developers’ most admired web frameworks in this year’s Developer Survey.
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.
Brian #1: pypistats.org was down, is now back, and there’s a CLI
pypistats.org is a cool site to check the download stats for Python packages.
“Thanks to @jezdez for suggesting the @ThePSF takes stewardship and connecting the right people, to @EWDurbin for migrating, and of course to Christopher Flynn for creating and running it for all these years!”
“The aim of the wrapt module is to provide a transparent object proxy for Python, which can be used as the basis for the construction of function wrappers and decorator functions.
An easy to use decorator factory is provided to make it simple to create your own decorators that will behave correctly in any situation they may be used.”
Why not just use functools.wraps()?
“The wrapt module focuses very much on correctness. It therefore goes way beyond existing mechanisms such as functools.wraps() to ensure that decorators preserve introspectability, signatures, type checking abilities etc. The decorators that can be constructed using this module will work in far more scenarios than typical decorators and provide more predictable and consistent behaviour.”
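As a rough illustration (not from the show notes), here is the same timing decorator written both ways: once with functools.wraps and once with wrapt. The decorator and function names (timed_classic, timed, greet) are made up for the example; the wrapt version uses the documented wrapped/instance/args/kwargs signature and returns a transparent proxy, which is why it also behaves correctly on instance methods, classmethods, and the like.

```python
# A minimal sketch: the same timing decorator via functools.wraps and via wrapt.
import functools
import time

import wrapt  # pip install wrapt


def timed_classic(func):
    """Classic approach: functools.wraps copies metadata onto the wrapper."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
    return wrapper


@wrapt.decorator
def timed(wrapped, instance, args, kwargs):
    """wrapt approach: the wrapper is a transparent object proxy, so the
    decorated callable keeps its signature and introspectability, and the
    same decorator works on plain functions, methods, and classmethods."""
    start = time.perf_counter()
    try:
        return wrapped(*args, **kwargs)
    finally:
        print(f"{wrapped.__name__} took {time.perf_counter() - start:.4f}s")


@timed
def greet(name: str) -> str:
    return f"Hello, {name}!"


print(greet("Python Bytes"))
```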
Scan your Python dependencies for known security vulnerabilities with a Rust-powered scanner.
PySentry audits Python projects for known security vulnerabilities by analyzing dependency files (uv.lock, poetry.lock, Pipfile.lock, pyproject.toml, Pipfile, requirements.txt) and cross-referencing them against multiple vulnerability databases. It provides comprehensive reporting with support for various output formats and filtering options.
External Resolver Integration: Leverages uv and pip-tools for accurate requirements.txt constraint solving
Multiple Data Sources:
PyPA Advisory Database (default)
PyPI JSON API
OSV.dev (Open Source Vulnerabilities)
Flexible Output for different workflows: Human-readable, JSON, SARIF, and Markdown formats
Performance Focused:
Written in Rust for speed
Async/concurrent processing
Multi-tier intelligent caching (vulnerability data + resolved dependencies)
Comprehensive Filtering:
Severity levels (low, medium, high, critical)
Dependency scopes (main only vs all [optional, dev, prod, etc] dependencies)
Direct vs. transitive dependencies
Enterprise Ready: SARIF output for IDE/CI integration
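To give a feel for what one of those data-source lookups involves, here is a hedged sketch of querying OSV.dev for a single pinned package. This is not PySentry's code; it just hits the public api.osv.dev/v1/query endpoint that scanners like this can consult, using only the standard library, and the package/version shown are examples.

```python
# A minimal sketch of an OSV.dev lookup for one pinned dependency.
import json
import urllib.request


def query_osv(name: str, version: str) -> list[dict]:
    """Return the list of known vulnerabilities OSV.dev reports for a
    PyPI package at a specific version (empty list if none)."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])


for vuln in query_osv("aiohttp", "3.12.15"):
    print(vuln["id"], "-", vuln.get("summary", "no summary"))
```

PySentry's value-add over a one-off query like this is doing it concurrently across an entire lock file, caching the results, and merging in the PyPA advisory database and the PyPI JSON API.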
I tried it on pythonbytes.fm and found only one issue, which sadly can’t be fixed:
PYSENTRY SECURITY AUDIT
=======================
SUMMARY: 89 packages scanned • 1 vulnerable • 1 vulnerabilities found
SEVERITY: 1 LOW
UNFIXABLE: 1 vulnerabilities cannot be fixed
VULNERABILITIES
---------------
1. PYSEC-2022-43059 aiohttp v3.12.15 [LOW] [source: pypa-zip]
AIOHTTP 3.8.1 can report a "ValueError: Invalid IPv6 URL" outcome, which can lead to a Denial of Service (DoS). NOTE:...
Scan completed
Scott Horton is the director of the Libertarian Institute, editorial director of Antiwar.com, host of The Scott Horton Show, co-host of Provoked, and for the past three decades a staunch critic of U.S. military interventionism.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep478-sc
See below for timestamps, and to give feedback, submit questions, contact Lex, etc.
OUTLINE:
(00:00) – Introduction
(00:35) – Sponsors, Comments, and Reflections
(09:14) – From the Cold War to the War on Terror
(1:02:13) – Iraq War 1
(1:30:17) – Bin Laden
(2:29:39) – Afghanistan War
(2:44:35) – Iraq War 2
(3:10:59) – Military Industrial Complex
(3:50:25) – Scott’s life story
(4:20:15) – Iraq War 2 (continued)
(5:11:43) – Syria
(6:05:01) – Iraq War 3
(6:17:28) – Somalia
(6:22:56) – Iran
(7:12:41) – Israel-Palestine
(9:02:19) – Cold War 2.0
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover:
1) Did AI take a step back with GPT-5?
2) Is AI hype going to cool off?
3) GPT-5's switching problem
4) Do we need AI agents?
5) Thinking vs. Doing AI
6) Sam Altman says parts of AI are a bubble
7) Eric Schmidt says the U.S. should stop overindexing on AGI and instead build it into products
8) GPT-6 is going to have much better memory
9) MIT study says 95% of AI projects fail to achieve their goals
10) AI may replace OnlyFans outsourced 'chatters'
11) Is love AI's real use case?
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define “agentic,” then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean and creator of Full Stack Python and Plushcap, shares hard-won tactics. We unpack what breaks, from brittle “generate a bunch of tests” requests to agents amplifying technical debt and uneven design patterns. We also discuss a sane git workflow for AI-sized diffs. You’ll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: The destination is humans as editors of systems, not just typists of code.
In this episode of Leaders of Code, Stack Overflow CEO Prashanth Chandrasekar and Christina Dacauaziliqua, Senior Learning Specialist at Morgan Stanley, talk about the importance of experiential learning in fast-paced environments. They emphasize the value of creating intentional learning environments where innovative tools meet collaborative communities to support growth for both individuals and organizations.
The discussion also:
Explores why leaders need to model continuous learning to inspire their teams.
Explains three practical principles for successfully embedding a culture of ongoing learning into everyday operations.
Touches on Morgan Stanley's multi-year strategic initiatives centred on talent excellence and how they empower employees through an intentional learning framework and metric tracking.
Ryan welcomes Nathan Michael, CTO at Shield AI, to discuss what AI looks like in defense technologies, both technically and ethically. They cover how the Hivemind technology works in coordinating the autonomous decisions of drones in the field while keeping humans in the loop, whether Shield AI is building Terminators, and how software security works on an edge device that could fall into enemy hands.
Episode notes:
Shield AI produces Hivemind, a resilient autonomy platform intended to protect service members and civilians.
Goutham Rao grew up in Brooklyn, a nerd all his life. Back in the day, his Dad bought him a Commodore 64, on which he started learning to write code in BASIC. Eventually, he attended the University of Pennsylvania to get his Master's in Computer Science. Outside of tech, he is married with 2 kids. He likes to travel and to run; he used to compete in half marathons, but nowadays his running is more of a mental exercise.
Goutham saw that IT telemetry logs carry a lot of complexity in their data. Fast forward to today: he and his co-founders noticed how LLMs were processing data and thought they could build something to interpret it and, in effect, "clone" themselves to create something that mimics issue triage.
Nick Fox is the SVP of Knowledge and Information at Google. Liz Reid is the VP of Search at Google. The two join Big Technology Podcast to discuss the way Google plans and builds in the generative AI era, including how it chooses what to ship and when. We also cover publisher traffic, search monetization and ads, shopping and product research, and the near-term future of the web. Hit play for a clear, no-fluff conversation with the leaders building search’s next chapter.
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
Matt Hamann knew he was going to be in tech way back in his younger days. His Dad worked for IBM, so there were always fun things to talk about and play with. He got his first family computer when he was 4 years old, and started programming BASIC when he was 8. Eventually, they got dialup through AOL - and he took off building websites with PHP & MySQL. Outside of tech, he is married with 3 kids. He loves to travel and spend time with his family. He also plays several instruments, including the piano and pipe organ, and enjoys tinkering with smart home devices.
Right around the time of the pandemic, Matt and his co-founder were pitching a new company idea in Y Combinator around data privacy. After receiving feedback that there wasn't a big market for the original idea, they started jamming on ways to pivot - and quickly landed on how cool it would be to have passwordless authentication.