I’ve been writing a lot lately about AI and how it’s reshaping the world in ways most people aren’t watching closely enough. This one sits right at the intersection of Apple and that bigger picture — and I think it deserves attention from this community specifically.
Last week I wrote about how AI has fundamentally shifted the balance between those who find security vulnerabilities and those who defend against them — and why the next few years represent a meaningful window of elevated risk, not just for personal devices but for critical infrastructure.
This week, Anthropic announced Project Glasswing.
Apple is a founding partner — alongside Microsoft, Google, Amazon Web Services, Cisco, JPMorganChase, NVIDIA, CrowdStrike, Palo Alto Networks, and the Linux Foundation.
For a community that follows Apple closely: Apple does not join coordinated multi-company initiatives lightly. When they do, it’s worth asking why.
What’s Project Glasswing?
Anthropic has developed a new AI model (not publicly available) that can autonomously find security vulnerabilities — the kind that have been hiding undetected for years, sometimes decades. In recent weeks it found thousands of critical vulnerabilities across every major operating system and browser, including:
- A 27-year-old flaw in OpenBSD — one of the most security-hardened operating systems in use
- A 16-year-old flaw in FFmpeg that had survived five million automated test runs without detection
- A chain of Linux kernel vulnerabilities that, when combined, allowed escalation from ordinary user access to full machine control
The initiative exists because those same capabilities will inevitably become more widely available — including to malicious actors. Project Glasswing is an attempt to get defenders ahead of that curve.
Why Apple?
Apple’s software runs on over two billion active devices. A significant vulnerability in macOS, iOS, or Safari — the kind this model can now find autonomously — would be a consequential problem. Their presence as a founding partner suggests they’ve assessed the risk and decided that active participation in the defence effort is worth more than waiting on the sidelines.
The bigger picture
The announcement is both sobering and genuinely hopeful. Sobering because it confirms what CrowdStrike described plainly in the announcement: “The window between a vulnerability being discovered and being exploited has collapsed — what once took months now happens in minutes.” Hopeful because the explicit goal of Project Glasswing is to give defenders a durable advantage in the long run — not just to patch the current gap but to build better security infrastructure for the AI era.
The transition period is the risk. The destination is potentially a more secure world than we’ve had before.
I’ve been thinking and writing about the broader implications of this kind of AI-driven change — for infrastructure, for communities, for how people prepare. If that’s a conversation that interests anyone here, I host a community called Future Together focused on exactly these discussions. Our next online meetup is Tuesday 15 April at 5:00 pm AEST — open to anyone, no technical background required.
Curious what the TidBITS community makes of this. Are Apple’s security teams already using AI for vulnerability detection at scale? Does the scope of the Glasswing partnership change how you think about the risk?