Today, we are releasing another episode for Cybersecurity Awareness Month in our series entitled the Haunted House of APIs, sponsored by our friends at Traceable AI. In this series, we are building awareness around APIs, their security risks – and what you can do about them. Traceable AI is building One Platform to secure every API, so you can discover, protect, and test all your APIs with contextual API security, enabling organizations to minimize risk and maximize the value APIs bring to their customers.
Phantom Threats: The Ghosts Haunting Your API Security
Today’s episode is titled Phantom Threats: The Ghosts Haunting Your API Security, with Adam Arellano. API security threats often go unnoticed, hiding like ghosts in your infrastructure. Bots, sophisticated fraud attempts, account takeovers, and attackers disguising themselves within legitimate traffic all pose risks to your organization, and they can bypass traditional security measures, wreaking havoc without detection – until it’s too late. Adam is a tech advisor and Global CISO at Traceable, and he will guide listeners through the world of phantom threats haunting API security.
What are "phantom threats" in the context of API security, and how do they go undetected?
Can you explain how advanced botnets and fraud attempts exploit APIs while blending into legitimate traffic?
Do you have a real-world example of a phantom threat that caused significant damage to an organization?
What makes detecting these phantom threats so challenging, and why do traditional security measures often fail?
What are the best strategies or technologies organizations can adopt to detect and eliminate these hidden threats before they cause harm?
Shashwat Sehgal has been in the tech industry broadly for 15 years. He started out as an engineer but eventually worked his way toward product and the business side. Outside of tech, he enjoys spending time with his family. He's into sports and loves to play tennis, but admits he hasn't played pickleball yet because the courts are always booked. He also enjoys reading, in particular historical narratives and autobiographies.
In his prior roles, Shashwat noticed that developers spend a large amount of time securing business assets in the cloud. He dreamt of a world where this was just an abstraction layer on top of the cloud, making the task easier for developers.
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover:

1) AI agents are here
2) Anthropic's Sonnet 3.5 model
3) Why we're underwhelmed with AI agents so far
4) The long-term bull case for agents
5) OpenAI's Orion model
6) Sam Altman's fake news tweet, and his cryptic preview of that news
7) Elon Musk and Putin speak regularly
8) China, Russia, Iran, North Korea vs. U.S. and Europe about to get weird
9) Tesla's blowout earnings
10) Waymo raises $5.6 billion
11) Teen takes life after falling in love with Character.ai bot
12) Perplexity vs. The Media
13) Big Technology and ElevenLabs make a deal
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/
Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Linda Miller, former Deputy Director of the Pandemic Response Accountability Committee and Founder and CEO of the Audient Group, and Amy Crawford, Director of Market Planning for Fraud and Identity Services at LexisNexis Risk Solutions, join the show to discuss the real cost of security threats by uncovering their true impact on residents and government organizations.
The Data Provenance Initiative is a collective of volunteer AI researchers from around the world. They conduct large-scale audits of the massive datasets that power state-of-the-art AI models with a goal of mapping the landscape of AI training data to improve transparency, documentation, and informed use of data. Their Explorer tool allows users to filter and analyze the training datasets typically used by large language models.
The Haunted Web: Navigating API Sprawl and Creepy Crawlers
Today’s episode is titled The Haunted Web: Navigating API Sprawl and Creepy Crawlers, with Traceable’s Chief Security Officer, Richard Bird. As organizations scale and evolve, so does the complexity of their APIs. API sprawl – the uncontrolled expansion of APIs – creates a tangled web where vulnerabilities linger in the shadows. These unseen APIs become the “creepy crawlers” of your digital infrastructure, creeping through your systems and posing security risks. Richard will discuss how unmanaged and undocumented APIs contribute to blind spots in security, the risks they create for organizations, and the best strategies for securing a sprawling ecosystem.
Discussion questions:
Can you explain what we mean by "unknown APIs" and the different types, like shadow, rogue, zombie, and undocumented?
Why do these APIs often go unnoticed, and how do they become security risks?
What makes these APIs such an attractive target for attackers, and can you share an example of how one has been exploited?
How can organizations begin to uncover these hidden APIs, and what tools or strategies are effective in doing so?
In your experience, what are some common mistakes organizations make that lead to these unknown APIs being created or overlooked?