The Open Thread — AI Safety Just Crossed a Line
An AI threat no longer theoretical, your input at the three-month mark, and a preview of our next series: college sports going fully professional.

This week we return to The Open Thread — our monthly bridge between deep-dive series. It’s a deliberately unstructured space to experiment, update past stories, and look ahead. If you missed any of our first three series — on China’s rare earth dominance, AI safety, or the vanishing competition in U.S. House races — you’ll find a recap at the bottom.
In this installment, we revisit AI Safety with news that one of the risks experts have long warned about has now actually happened, ask for your feedback as we cross the three-month mark since launching Solving For, and tee up our next deep dive: college sports going pro.
But first, an invitation: this Saturday, Nov. 22 at 11 a.m., I’m moderating a conversation at the Miami Book Fair with Facebook co-founder Chris Hughes and Brown University’s Marc Dunkelman about their new books exploring America’s economic future. Hope you’ll join us.
Solving For tackles one pressing problem at a time: what’s broken, what’s driving it, and what can be done. New posts weekly. Learn more.
No longer hypothetical
Anthropic CEO Dario Amodei says that artificial intelligence may deliver a “compressed 21st century” — breakthroughs that once took 50 to 100 years unfolding in five to 10. Alzheimer’s halted, cancers neutralized, lifespans extended: all potentially accelerated by an order of magnitude.
But Amodei is just as blunt about the other side of that acceleration. The same technology that speeds up breakthroughs can also speed up dangers.
Experts tend to group AI risks into three major categories:
Misuse — AI deployed to build bioweapons, carry out cyberattacks, fuel mass surveillance, spread disinformation, or assist in self-harm.
Loss of control — systems that plan, act, or pursue goals independently of human intent.
Economic disruption — widespread job losses as large swaths of white-collar work are automated.
Our recent series, The Control Problem: Solving for AI Safety, focused on the first two. A future deep dive may tackle the third.
But last week, Anthropic reported that one of these risks has moved from hypothetical to real.
On Nov. 13, the company announced it had detected and disrupted what it calls the first large-scale cyberattack carried out primarily by AI.
“We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic declared in a report on the attack.
Anthropic reported that in mid-September, a Chinese state-sponsored hacking group manipulated its Claude Code AI tool into executing a series of cyberattacks against roughly 30 organizations worldwide — including major tech companies, financial institutions, chemical manufacturers, and government agencies. While only a small number were successfully compromised, the significance lies in how the attack was conducted.
The hackers used Claude Code’s “agentic” features — allowing AI to take actions on behalf of people. And the AI system didn’t just assist — it ran the playbook: scanning for targets, finding a way in, grabbing passwords, moving through internal systems, pulling out data, and drafting instructions for future attacks.
The attacks were conducted “literally with the click of a button, and then with minimal human interaction,” said Jacob Klein, Anthropic’s head of threat intelligence.
This is exactly the sort of scenario safety researchers have long feared — not science fiction, not years away, but already here.
Anthropic reported that it banned accounts, notified affected parties, coordinated with authorities, and announced the case publicly “to help those in industry, government, and the wider research community strengthen their own cyber defenses.”
Details about who was targeted — and who carried out the attack — remain limited, prompting some pointed exchanges over how urgent the threat truly is.
In response to the disclosure, U.S. Senator Chris Murphy of Connecticut posted on X: “Guys wake the F up. This is going to destroy us — sooner than we think — if we don’t make AI regulation a national priority tomorrow.” That prompted Meta’s Chief AI Scientist Yann LeCun (who just announced he’s leaving to create his own startup) to respond, in part: “You’re being played … They are scaring everyone with dubious studies.”
But Anthropic has distinguished itself not only as one of the leading AI companies — recently valued at about $350 billion — but also as one of the industry’s most outspoken voices on the potential dangers of AI without guardrails or oversight.
Anthropic was founded by seven OpenAI employees who left to build a company on the belief that safety and transparency had to be the top priority. Amodei has long warned about the unpredictable nature of frontier AI.
In a 60 Minutes segment that aired on Nov. 16, Daniela Amodei, Anthropic co-founder and Dario’s sister, noted that “it’s unusual for a technology company to talk so much about all of the things that could go wrong.” But she and her brother said it’s essential.
“If you don’t,” Dario added, “you could end up in the world of the cigarette companies and opioid companies, where they knew the dangers and didn’t talk about them — and certainly didn’t prevent them.”
Amodei has also been clear that the burden of managing these risks cannot fall on companies alone. He has repeatedly called for federal regulation, saying he is “deeply uncomfortable with these decisions being made by a few companies, by a few people” when the consequences for humanity could be so great.
As the breakneck rollout of frontier AI continues, episodes like this strengthen Anthropic’s argument that the risks aren’t theoretical — and that clear standards are needed to keep pace with the technology.
The compressed 21st century has arrived. Whether it delivers Alzheimer’s prevention or automated cyber warfare — or both — depends on the choices and safeguards we build now.
Learning from you
We’ve hit the three-month mark since launching Solving For, and we’d love to hear from you. The seeds of this venture were personal: a desire to better understand the challenges we face today — and how they can be addressed. The idea was to explore one problem at a time through weekly posts in three moves: unpack the problem, illuminate the forces driving it, and surface clear solutions.
We’re learning as we go. In these first three months we’ve added audio, so you can listen as well as read. And because this is long-form journalism, we’ve started including a brief roadmap at the top of each post to orient you before diving in.
But this is very much a work in progress, and your input on the four questions below will genuinely help shape what comes next. If you’d like to share more, including topic ideas or reactions to the first three deep dives, just hit reply. It goes straight to me.

Next up: College sports going pro
For our next deep dive, we turn to a system in the middle of a historic transformation — one happening in real time, in courtrooms, conference boardrooms, and locker rooms across the country. College sports, long built on the ideal of amateurism, are becoming something very different: a fully professional marketplace where athletes, boosters, universities, and media companies are negotiating money, power, and control. The shift is reshaping one of America’s most beloved institutions, often faster than the rules can keep up.
The idea that college athletes shouldn’t be paid — once the defining principle of the NCAA — has collapsed. NIL collectives now routinely offer six- and seven-figure deals. Conferences are realigning based on media rights alone. Federal judges have ruled that the NCAA’s limits on compensation violate antitrust laws. And more battles are coming: whether athletes should be employees, how revenue should be shared, and whether Congress will step in — or let the system splinter further.
In our next Solving For series, we’ll dig into what’s actually happening beneath the surface: how the business model of college sports really works, how incentives — for schools, conferences, networks, and boosters — have produced a system few designed and no one controls, and what credible, workable models exist for paying athletes in ways that preserve opportunity, protect non-revenue sports, and restore some sense of fairness.
It’s a story about money, power, and the future of an American tradition — but it’s also a story about a system breaking down and the race to rebuild it.
A note on timing: With Thanksgiving next week, we’ll kick off our deep dive on college sports going pro the following week. Wishing you and yours a very Happy Thanksgiving!
Previous Deep Dives
The 21st Century’s Oil: Solving For China’s Rare Earth Dominance
Part I - Rare Earths: The Invisible Backbone, Sept. 4
The Problem — What’s broken, and why it matters
Part II - Rare Earths: The Middle Kingdom’s Monopoly, Sept. 11
The Context — How we got here, and what’s been tried
Part III - Rare Earths: The Race to Rebuild, Sept. 18
The Solutions — What’s possible, and who’s leading the way
The Control Problem: Solving For AI Safety
Part I - AI: The Race and the Reckoning, Oct. 2
The Problem — What’s broken, and why it matters
Part II - AI: The Prisoner’s Dilemma, Oct. 9
The Context — How we got here, and what’s been tried
Part III - AI: The New Nuclear Moment, Oct. 16
The Solutions — What’s possible, and who’s leading the way
The Democracy Deficit: Solving for Competition in the People’s House
Part I - Congress: The Vanishing Competition, Oct. 31
The Problem — What’s broken, and why it matters
Part II - Congress: How We Got Stuck, Nov. 7
The Context — How we got here, and what’s been tried
Part III - Congress: Making Democracy Competitive Again, Nov. 16
The Solutions — What’s possible, and who’s leading the way


