Code Story: Insights from Startup Tech Leaders - S11 Bonus: Shamba Chowdhury, DeForge

Shamba Chowdhury got his first computer at an early age. He was the kid who explored every button and every setting, trying to figure out how it all worked. His curiosity exploded when he was 15 and the internet came around. After that, his first foray into programming came from his love of playing video games. Outside of tech, he loves to read, in particular crime thrillers. He noted that his favorite is A Minute to Midnight by David Baldacci.

Shamba and his co-founder have participated in many hackathons, where they noticed how difficult it was to stitch ideas together using AI technology. At that point, they decided to build a no-code builder for wiring AI agents together.

This is the creation story of DeForge.

Support this podcast at — https://redcircle.com/code-story-insights-from-startup-tech-leaders/donations

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy

Big Technology Podcast - How AI Is Changing Writing — With Tony Stubblebine

Tony Stubblebine is the CEO of Medium. He joins Big Technology to discuss the future of writing in the age of AI and how platforms should handle AI-generated content. Tune in to hear fresh data on ChatGPT vs. Google referral quality, Gemini’s impact on click-throughs, and Medium’s anti-spam approach. We also cover Cloudflare AI blocking, creator payouts, and Medium’s writing app. Hit play for a candid operator’s view of what survives—and thrives—as AI floods the web.

---

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com

Code Story: Insights from Startup Tech Leaders - The Railsware Way – Vibe Coding vs. Traditional SDLC, with Sergiy Korolov

Today, we are kicking off a new series, sponsored by our good friends at Railsware. Railsware is a leading product studio with two main focuses - services and products. They have created amazing products like Mailtrap, Coupler and TitanApps, while also partnering with teams like Calendly and Bright Bytes. They deliver amazing products, and have happy customers to prove it.

In this series, we are digging into the company's methods around product engineering and development. In particular, we will cover relevant topics to not only highlight their expertise, but to educate you on industry trends alongside their experience.

In today's episode, we are talking with Sergiy Korolov, Co-CEO of Railsware and Co-founder of Mailtrap. In this conversation, we are bringing up a popular, but somewhat controversial, topic: vibe-coding vs. traditional software development approaches.

Questions:

  • You’ve been in tech for over two decades, and have definitely seen many trends come and go. How would you define "vibe-coding" and how does it differ from traditional software development approaches?
  • What drove the emergence of vibe-coding? Could it be a response to the overly rigid development processes many companies have, or is it a fundamental shift in engineering?
  • What do engineers on your team think about vibe-coding? Have you practiced this approach on some of your products?
  • What types of products or development contexts are best suited for vibe-coding?
  • Is it possible to create successful and scalable products through vibe-coding? For instance, can people balance vibe-coding with business requirements, deadlines, and stakeholder expectations?
  • To wrap up, is vibe-coding actually sustainable long-term, or is it just a trendy reaction to over-engineering?


Code Story: Insights from Startup Tech Leaders - S11 E23: Dr. Zohar Bronfman, Pecan AI

Zohar Bronfman spends most of his time in Tel Aviv, Israel these days. He has a focused academic background, specifically in philosophy and neuroscience. He was always intrigued by the question - how do we know what we know? - which led him to get a PhD in Philosophy. While doing that, he also became fascinated with the human mind and empirical decision making, which took him down the road of obtaining another PhD in AI & Neuroscience, essentially emulating brain processes. Outside of tech, he has 3 kids and a startup. He loves a good book in the philosophy or neuroscience space, and is a big fan of sports. Specifically, he loves the NBA and claims to be a Knicks fan.

Zohar and his now co-founder were digging into predictive models, as an extension of their academic studies. They were curious as to why companies, though they were running predictive models, were not making accurate predictions. They soon realized that this was because AI modeling expertise was centralized at a couple of well-known companies.

This is the creation story of Pecan AI.


The Stack Overflow Podcast - Craft and quality beat speed and scale, with or without agents

Ryan welcomes Tom Moor, head of engineering at Linear, to discuss AI agents’ mixed results for productivity in the development lifecycle, the importance of context for maximizing agents’ effectiveness, and the role that junior developers need to take in a world increasingly driven by AI.

Episode notes:

Linear is a tool for planning and building products that streamlines issues, projects, and product roadmaps.

Connect with Tom on Twitter

This episode’s shoutout goes to user ozz, who won a Populist badge for their answer to Column width not working in DataTables bootstrap.

TRANSCRIPT

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Big Technology Podcast - Google Research Head Yossi Mathias: AI For Cancer Research, Quantum’s Progress, Researchers’ Future

Yossi Matias is the head of Google Research. He joins Big Technology Podcast to discuss the company's research efforts in areas like cancer treatment and quantum computing, and the relationship between research and product. Tune in to hear how Google used LLMs to generate a cancer hypothesis validated in living cells, what a “13,000×” quantum result really means, and how the research-product loop turns papers into products. We also cover whether AI can automate a researcher's job. This conversation was recorded in front of a live audience at Google's Mountain View headquarters.


Talk Python To Me - #525: NiceGUI Goes 3.0

Building a UI in Python usually means choosing between "quick and limited" or "powerful and painful." What if you could write modern, component-based web apps in pure Python and still keep full control? NiceGUI (pronounced "Nice Guy") sits on FastAPI with a Vue/Quasar front end, gives you real components and live updates over websockets, and runs in production at Zauberzeug, a German robotics company. On this episode, I’m talking with NiceGUI’s creators, Rodja Trappe and Falko Schindler, about how it works, where it shines, and what’s coming next. With version 3.0 releasing around the same time this episode comes out, we spend the end of the episode celebrating the 3.0 release.

Episode sponsors

Posit
Agntcy
Talk Python Courses

Rodja Trappe: github.com
Falko Schindler: github.com

NiceGUI 3.0.0 release: github.com
Full LLM/Agentic AI docs instructions for NiceGUI: github.com

Zauberzeug: zauberzeug.com
NiceGUI: nicegui.io
NiceGUI GitHub Repository: github.com
NiceGUI Authentication Examples: github.com
NiceGUI v3.0.0rc1 Release: github.com
Valkey: valkey.io
Caddy Web Server: caddyserver.com
JustPy: justpy.io
Tailwind CSS: tailwindcss.com
Quasar ECharts v5 Demo: quasar-echarts-v5.netlify.app
AG Grid: ag-grid.com
Quasar Framework: quasar.dev
NiceGUI Interactive Image Documentation: nicegui.io
NiceGUI 3D Scene Documentation: nicegui.io

Watch this episode on YouTube: youtube.com
Episode #525 deep-dive: talkpython.fm/525
Episode transcripts: talkpython.fm

Theme Song: Developer Rap
🥁 Served in a Flask 🎸: talkpython.fm/flasksong

---== Don't be a stranger ==---
YouTube: youtube.com/@talkpython

Bluesky: @talkpython.fm
Mastodon: @talkpython@fosstodon.org
X.com: @talkpython

Michael on Bluesky: @mkennedy.codes
Michael on Mastodon: @mkennedy@fosstodon.org
Michael on X.com: @mkennedy

Python Bytes - #455 Gilded Python and Beyond

Topics covered in this episode:
Watch on YouTube

About the show

Sponsored by us! Support our work through:

Connect with the hosts

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.

Michael #1: Cyclopts: A CLI library

Brian #2: The future of Python web services looks GIL-free

  • Giovanni Barillari
  • “Python 3.14 was released at the beginning of the month. This release was particularly interesting to me because of the improvements on the "free-threaded" variant of the interpreter.

    Specifically, the two major changes when compared to the free-threaded variant of Python 3.13 are:

    • Free-threaded support now reached phase II, meaning it's no longer considered experimental
    • The implementation is now completed, meaning that the workarounds introduced in Python 3.13 to make code sound without the GIL are now gone, and the free-threaded implementation now uses the adaptive interpreter as the GIL enabled variant. These facts, plus additional optimizations make the performance penalty now way better, moving from a 35% penalty to a 5-10% difference.”
  • Lots of benchmark data, both ASGI and WSGI
  • Lots of great thoughts in the “Final Thoughts” section, including
    • “On asynchronous protocols like ASGI, despite the fact the concurrency model doesn't change that much – we shift from one event loop per process, to one event loop per thread – just the fact we no longer need to scale memory allocations just to use more CPU is a massive improvement. ”
    • “… for everybody out there coding a web application in Python: simplifying the concurrency paradigms and the deployment process of such applications is a good thing.”
    • “… to me the future of Python web services looks GIL-free.”
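Giovanni's point about scaling with threads instead of processes is easy to sketch. The example below is illustrative, not from his post (the `gil_enabled` helper and `partial_sum` are my own names): it checks for a GIL via `sys._is_gil_enabled()`, which exists on CPython 3.13+, then splits a CPU-bound sum across two threads. The answer is correct on any build; only a free-threaded build actually runs the two threads' bytecode in parallel.

```python
# Sketch: detect a free-threaded interpreter and split CPU-bound work
# across threads. On a GIL-enabled build the threads still compute the
# right answer; they just can't execute Python bytecode in parallel.
import sys
import threading


def gil_enabled() -> bool:
    # sys._is_gil_enabled() was added in CPython 3.13; older builds
    # always have the GIL.
    check = getattr(sys, "_is_gil_enabled", None)
    return True if check is None else check()


def partial_sum(start: int, stop: int, out: list, idx: int) -> None:
    out[idx] = sum(range(start, stop))


N = 1_000_000
results = [0, 0]
threads = [
    threading.Thread(target=partial_sum, args=(0, N // 2, results, 0)),
    threading.Thread(target=partial_sum, args=(N // 2, N, results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(results) == sum(range(N)))  # True on any build
```

The same pattern is what an ASGI server gains from one event loop per thread: more CPUs used without duplicating per-process memory.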

Michael #3: Free-threaded GC

  • The free-threaded build of Python uses a different garbage collector implementation than the default GIL-enabled build.
  • The Default GC: In the standard CPython build, every object that supports garbage collection (like lists or dictionaries) is part of a per-interpreter, doubly-linked list. The list pointers are contained in a PyGC_Head structure.
  • The Free-Threaded GC: Takes a different approach. It scraps the PyGC_Head structure and the linked list entirely. Instead, it allocates these objects from a special memory heap managed by the "mimalloc" library. This allows the GC to find and iterate over all collectible objects using mimalloc's data structures, without needing to link them together manually.
  • The free-threaded GC does NOT support "generations"
  • By marking all objects reachable from these known roots, we can identify a large set of objects that are definitely alive and exclude them from the more expensive cycle-finding part of the GC process.
  • Overall speedup of the free-threaded GC collection is between 2 and 12 times faster than the 3.13 version.
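The mimalloc bookkeeping and mark-from-roots pass aren't directly visible from Python, but the cycle collector they speed up is. A minimal sketch (the `Node` class is illustrative) of a reference cycle that reference counting alone can't reclaim, and that `gc.collect()` does, on both the default and free-threaded builds:

```python
# Sketch: watch the cycle collector reclaim a reference cycle. The
# free-threaded build finds these objects via mimalloc's heaps instead
# of the PyGC_Head linked list, but the observable behavior is the same.
import gc
import weakref


class Node:
    def __init__(self):
        self.other = None


a, b = Node(), Node()
a.other, b.other = b, a   # reference cycle: a -> b -> a
probe = weakref.ref(a)    # lets us observe when `a` is reclaimed

del a, b                  # refcounts never reach zero (the cycle holds them)
gc.collect()              # the cycle detector breaks and frees the cycle
print(probe() is None)    # True: the cycle was reclaimed
```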

Brian #4: Polite lazy imports for Python package maintainers

  • Will McGugan commented on a LinkedIn post by Bob Belderbos regarding lazy importing
  • “I'm excited about this PEP.

    I wrote a lazy loading mechanism for Textual's widgets. Without it, the entire widget library would be imported even if you needed just one widget. Having this as a core language feature would make me very happy.”

    https://github.com/Textualize/textual/blob/main/src/textual/widgets/__init__.py

  • Well, I was excited about Will’s example for how to, essentially, allow users of your package to import only the part they need, when they need it.

  • So I wrote up my thoughts and an explainer for how this works.
  • Special thanks to Trey Hunner’s Every dunder method in Python, which I referenced to understand the difference between __getattr__() and __getattribute__().
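Will's Textual example boils down to PEP 562's module-level `__getattr__`, which is only called when normal attribute lookup fails. Here is a self-contained sketch of the pattern; the `lazy_pkg` module and its name map are hypothetical stand-ins (in a real package like Textual's widgets, this logic lives in `__init__.py`):

```python
# Sketch of PEP 562-style lazy imports. We build the module object
# in-line so the example runs standalone; in a package, only the
# _LAZY map and __getattr__ would appear in __init__.py.
import sys
import types

# Public names mapped to the modules that actually define them.
_LAZY = {"sqrt": "math", "dumps": "json"}

pkg = types.ModuleType("lazy_pkg")


def __getattr__(name):
    # Invoked only for names missing from the module's namespace.
    if name in _LAZY:
        module = __import__(_LAZY[name])   # import happens on first access
        value = getattr(module, name)
        setattr(pkg, name, value)          # cache: __getattr__ won't run again
        return value
    raise AttributeError(f"module 'lazy_pkg' has no attribute {name!r}")


pkg.__getattr__ = __getattr__
sys.modules["lazy_pkg"] = pkg

import lazy_pkg
print(lazy_pkg.sqrt(9.0))  # -> 3.0; math was imported only at this point
```

Caching the resolved attribute on the module is what keeps subsequent accesses at normal speed, since `__getattr__` is a fallback, never the fast path.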

Extras

Brian:

  • Started writing a book on Test Driven Development.
    • Should have an announcement in a week or so.
    • I want to give folks access while I’m writing it, so I’ll be opening it up for early access as soon as I have 2-3 chapters ready to review. Sign up for the pythontest newsletter if you’d like to be informed right away when it’s ready. Or stay tuned here.

Michael:

Joke: You're absolutely right

Big Technology Podcast - OpenAI’s Risky Browser Bet, Amazon’s Mass Automation Plan, Clippy’s Back

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) OpenAI's Atlas browser is here 2) Atlas plays 2048 3) The danger of AI browser prompt injection 4) Will Atlas be around in five years? 5) Why Dave's Hot Chicken is the world's top app 6) Amazon has plans to automate hundreds of thousands of jobs 7) OpenAI is paying investment bankers to train its models 8) If we automate all the work, who will be left to buy stuff? 9) Meta cuts 100 AI jobs 10) Reddit fools AI crawlers and shows theft 11) Clippy returns!


The Stack Overflow Podcast - Your runbooks are obsolete in the age of agents

Ryan is joined by Spiros Xanthos, CEO and founder of Resolve AI, to talk about the future of AI agents in incident management and troubleshooting, the challenges of maintaining complex software systems with traditional runbooks, and the changing role of developers in an AI-driven world.

Episode notes:

Resolve AI is building agents to help you troubleshoot alerts, manage incidents, and run your production systems. 

Connect with Spiros on Linkedin or email him at spiros@resolve.ai. 

Congrats to user larsks for winning a Stellar Answer badge for their answer to How do I get into a Docker container's shell?.

TRANSCRIPT
