Johnny Halife was born and raised in Argentina. As such, he takes soccer very seriously. He is a die-hard fan of Boca, and has taken his family to live games in Miami and Nashville. He is the father of two young boys, which he notes completely changed his life. He has been slowly introducing them to soccer, as an Argentine father would, and they love the roar of the stadium during a game. He also claims to be a really bad golfer, which I can relate to.
Twenty-one years ago, Johnny started working behind the scenes with Microsoft engineering, helping shape its products. Eventually, he and his team started asking the question: if we're helping Microsoft, why don't we help other companies?
According to surveys by the FINRA Foundation, our knowledge of personal finance here in the U.S. declined by 15% between 2009 and 2021. But what if it actually didn't? What if the technology we use to answer the questions is now getting in the way? In 2021, over half of all respondents used a smartphone to fill out the survey. In 2009, none of them did, according to data from FINRA's National Financial Capability Study. A new working paper finds that when people use smartphones for surveys, they're more likely to give a wrong answer or say they don't know. Marketplace's Stephanie Hughes spoke with Montana State University economics professor Carly Urban, one of the authors of the paper, to learn more.
Recently, several listeners have written to us wondering about the large flocks of crows they see darkening the skies over the Bay Area at sunset each night. The crows are like clockwork, swarming to the same locations night after night, often hundreds of them at a time. What's going on? We get answers and along the way explore why we're seeing more crows in the Bay Area in recent years and whether they're having a negative impact on other bird species.
This story was reported by Dan Brekke. Bay Curious is made by Olivia Allen-Price, Katrina Schwartz and Christopher Beale. Additional support from Jen Chien, Katie Sprenger, Maha Sanad, Ethan Toven-Lindsay and everyone on Team KQED.
It's the first day of the Milan Cortina Winter Olympics. Hockey, curling, alpine skiing, luge, and now a new sport: ski mountaineering, also known as "skimo." Another storyline to follow is the return of superstar skier Lindsey Vonn, who was on the sidelines for five years before returning for this year's Olympics. Meanwhile, a $16 billion plan called the Hudson River Tunnel Project is kaput for now after President Trump announced he's withholding its funding. It was seen as one of the biggest infrastructure projects in the country. Also, in Los Angeles, traffic jams don't just happen on the freeways; they're happening in the sky too, with the airspace over Hollywood Burbank Airport among the most congested in the country. In business, the graffiti towers, officially known as Oceanwide Plaza, reached a bankruptcy agreement that may open the path to their sale and cleanup, and the Teamsters of California are calling for the state to ban Waymo cars after one struck a child in Santa Monica. Read more at LATimes.com.
The search for Nancy Guthrie, the mother of Today show co-host Savannah Guthrie, enters a second week as her family says they’ve received a message from the people who took her and investigators continue to look for suspects. Ghislaine Maxwell is set to be questioned by members of Congress about Jeffrey Epstein, his crimes, and the powerful figures connected to him, even as she continues to challenge her own conviction. And the Seattle Seahawks win Super Bowl 60, beating the New England Patriots 29-13, using a dominant defense to secure the franchise’s second championship.
Want more analysis of the most important news of the day, plus a little fun? Subscribe to the Up First newsletter.
Today’s episode of Up First was edited by James Doubek, Megan Pratz, Russell Lewis, Mohamad ElBardicy, and Adriana Gallardo.
It was produced by Ziad Buchh and Ava Pukatch.
Our director is Christopher Thomas.
We get engineering support from Neisha Heinis. Our technical director is Carleigh Strange.
The question of what to do about undocumented immigrants has long bonded President Trump and his supporters — and an overwhelming majority of them backed his all-out crackdown over the past year.
But then came the extraordinary events of the past few weeks in Minneapolis. Since then, some of Mr. Trump’s voters have begun to have misgivings about his agenda.
“The Daily” spoke with more than a dozen people who voted for him in the last election about how they are making sense of the recent events in Minneapolis.
The White House deletes a racist social media post from the president’s account, after vociferously defending it. The Guthrie family adjusts its language in messages to potential kidnappers. And Bad Bunny’s Super Bowl Halftime Show sends a stark message, both inside and outside the United States.
Games are supposed to be fun — so what happens when the logic of games, points and competition escapes the playground and starts reshaping everyday life? The novelist and games-writer Naomi Alderman and her guests explore how the joy of play collides with the pressures of a gamified society.
Philosopher C. Thi Nguyen introduces The Score, his examination of how ranking systems and numerical targets can both sharpen and warp our values, revealing how life becomes less playful when everything is reduced to points.
Journalist and critic Keza MacDonald discusses Super Nintendo, her cultural history of the iconic console, tracing how its games, aesthetics and innovations transformed the medium and helped define what play means for generations of players.
Financial Times commentator Stephen Bush examines the growing role of games and game-like incentives in public life, exploring how the techniques of play — from reward structures to competitive framing — are reshaping political behaviour and communication.
Natasha Blycha's path into emerging technology law started in an unlikely place. As a gap-year volunteer teaching English and economics at a school outside Gweru, Zimbabwe, circa 2000, she was simultaneously working for a small rural law firm on constitutional questions — an experience she credits with shaping the questions that have driven her career since.
In conversation with Andile Masuku, Blycha — who co-authored the Oxford Smart Legal Contracts textbook and was named the Financial Times' Most Innovative Lawyer — traces a line from those early days to advising global banks on whether their crypto experiments were even legal, to building LexChip: technology that embeds enforceable contracts directly into AI-powered devices.
The conversation spans smart contracts (the technical kind and the legally binding kind — they're different), why crypto adoption in Nigeria and Ghana has less to do with speculation and more to do with broken banking infrastructure, and what Jensen Huang's "five-layer AI cake" means for nations trying to build sovereign AI stacks without the energy, chips, or legal infrastructure to hold them together.
Blycha's central argument: if we can't put code in jail, and AI systems are becoming economic stakeholders that can book a million flights or displace entire workforces, then the law as currently designed has a problem. Her proposed contribution — smart legal contracts that act as referees inside AI systems, capable of stopping a device when it breaches its own rules — sits at the intersection of contract law and responsible AI.
Key insights:
On why this isn't Y2K: "This is so much more complicated, so much more geopolitically complicated. And if we said that Y2K didn't happen, it was one day we got to find out. What we're seeing already with AI systems is we're already getting the proof in the pudding that they are working." Blycha argues Y2K was a manageable vector of complexity compared to AI. The difference: AI systems are actively delivering on their promise, and big tech's mandate to reach AGI means we can't simply wait for one day to find out.
On why Africa's slower adoption might be an advantage, not a liability: "If I cannot keep the power on, am I really talking about agentic AI?" But Blycha points to a counterintuitive upside: countries without legacy infrastructure can leapfrog, just as India and parts of Africa bypassed landlines for mobile. Crypto adoption in Nigeria and Ghana demonstrates this — populations using blockchain not as a speculative instrument but as functional money in economies where traditional banking fails them.
On the difference between smart contracts and smart legal contracts: A smart contract is code that executes on a blockchain — "if this happens, do this." It's a technical term, not a legal one. A smart legal contract, by contrast, is a real, enforceable agreement where specific clauses are automated. Blycha uses the example of a lease where rent adjusts automatically based on CPI. The distinction matters because conflating the two obscures where legal accountability actually sits.
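Blycha's CPI-indexed lease can be sketched in a few lines of code. This is a hypothetical illustration of the idea of an automated clause, not the actual drafting or tooling discussed in the episode; the class and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class LeaseContract:
    """Illustrative sketch: one automated clause (CPI-indexed rent)
    inside an otherwise conventional lease. The surrounding agreement
    remains ordinary legal prose; only this clause self-executes."""
    base_rent: float   # rent agreed at signing
    base_cpi: float    # CPI index value at signing

    def current_rent(self, current_cpi: float) -> float:
        # The automated clause: rent scales with the CPI ratio.
        return round(self.base_rent * (current_cpi / self.base_cpi), 2)

lease = LeaseContract(base_rent=1000.0, base_cpi=100.0)
print(lease.current_rent(103.5))  # rent after CPI rises 3.5% -> 1035.0
```

The point of the distinction survives even in this toy: the code is just the executable fragment; the enforceable obligation sits in the legal agreement that references it.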
On the fundamental legal problem AI creates: "The law needs a person to ascribe responsibility to." Bitcoin was invented by someone who may not exist. Decentralised autonomous organisations insist the code is responsible, not them. But you can't put code in jail. As AI agents proliferate — booking flights, managing finances, making hiring decisions — the gap between what the technology does and who the law can hold accountable is widening faster than regulators can respond.
On smart legal contracts as AI's conscience: Through LexChip, Blycha's team is embedding contracts directly into AI edge devices — robotics, autonomous vehicles, hardware with embodied AI. These contracts can monitor behaviour in real time and, critically, act as a referee: stopping a device safely when it breaches its rules. "You've taken an analog thing, you've turned it into a performance-based contract and it can speak to an AI system."
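The "referee" pattern described above can be sketched as a contract object that reviews each proposed action against its rules and halts the device on a breach. This is a minimal illustration of the concept, not the actual LexChip design; all names and the flight-booking rule are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ContractReferee:
    """Illustrative sketch: a contract embedded alongside an AI device
    that checks each action against its rules and stops the device
    when a rule is breached."""
    rules: list                    # (name, predicate) pairs
    halted: bool = False
    log: list = field(default_factory=list)

    def review(self, action: dict) -> bool:
        if self.halted:            # a stopped device takes no actions
            return False
        for name, ok in self.rules:
            if not ok(action):
                self.halted = True # stop the device safely
                self.log.append(f"breach: {name}")
                return False
        self.log.append(f"allowed: {action['type']}")
        return True

# One hypothetical rule: the agent may book at most 3 flights per day.
state = {"flights_today": 0}
def under_daily_cap(action):
    if action["type"] == "book_flight":
        state["flights_today"] += 1
        return state["flights_today"] <= 3
    return True

ref = ContractReferee(rules=[("daily flight cap", under_daily_cap)])
for _ in range(5):
    ref.review({"type": "book_flight"})
print(ref.halted, ref.log[-1])  # True breach: daily flight cap
```

The design choice the example surfaces is the one Blycha describes: the contract is not advisory documentation but an in-line gate, so a breach changes the device's behaviour rather than merely being recorded after the fact.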
On Ubuntu as an AI governance framework — with a warning: Blycha was moved by the Ubuntu principle of interconnectedness during a family visit to South Africa. She sees it as a potentially powerful ethical framework for AI policy — but cautions against using it as "window dressing for someone to write a wishy-washy policy that then doesn't deal with the hard stuff." The hard stuff: GPU clusters, cloud compute, sovereign data infrastructure. Values without investment are just declarations.
On who opposes all of this — and why: Peter Thiel and a portion of Silicon Valley divide the world into accelerators and decelerators. In their framing, lawyers like Blycha are slowing down progress toward a post-human, transhumanist future of brain-computer interfaces and infinite lifespan. Blycha's response: "This is not a lawyers versus the tech bros conversation because there is an extremely large majority of the tech bros who are also saying there is a big problem here."
Notable moments:
1. The first text message: At the Bata Club in Gweru, Zimbabwe, circa 2000 — a social venue attached to a Canadian shoe factory — Blycha saw her first SMS travel between England and Zimbabwe on a feature phone. "It wasn't a smartphone, it was a dead phone." She'd bought her flight to Zimbabwe on the day of the Y2K bug because tickets were cheap. That moment — witnessing a communication revolution in a country experiencing currency crisis and fuel shortages — frames the conversation's central question about technology adoption in constrained environments.
2. The Mennonite test: Visiting Amish communities in Ohio, Blycha learned their approach to technology adoption. "They don't prohibit technology at all. They ask two questions: does this technology bring me closer to my family and does this technology bring me closer to God?" Asked how everyday people should think about adopting AI tools, Blycha offered this as her "heart answer" — a striking conclusion from someone who has spent her career at technology's legal frontier.
3. The McKinsey displacement reality: Blycha points to McKinsey's replacement of significant portions of its workforce with AI agents as evidence that displacement is not theoretical. The legal question this raises: how do you write an employment contract with an AI agent? And when that agent — operating at a scale no human can oversee — breaches the law, the "human in the loop" principle that underpins every AI governance framework starts to break down.