What Anthropic’s Mythos and Project Glasswing Mean for Your Apple Devices

Originally published at: What Anthropic’s Mythos and Project Glasswing Mean for Your Apple Devices - TidBITS

Anthropic, the company behind the Claude AI chatbot, made two security announcements that were shocking for many but seen as inevitable by those of us working in AI security. First, it announced Mythos Preview, a new, non-public AI model that turns out to be startlingly good at finding security flaws in software. The second was Project Glasswing, Anthropic’s program for getting that capability into the hands of the companies best positioned to fix those flaws before anyone else can exploit them. Apple is one of those companies.

As much as I’d like to downplay the announcements, Mythos and Project Glasswing are very big deals on their own, and harbingers of the future of digital security. Mythos was able to find and exploit new vulnerabilities in every major operating system, including a bug in OpenBSD, an operating system famous for its security, that had been sitting there unnoticed for 27 years. (If OpenBSD sounds familiar, it’s because Apple’s operating systems have roots in versions of BSD.) For now, the problem is contained. Only Anthropic has Mythos. But there’s no reason others can’t develop these capabilities, starting with nation-states and eventually filtering down to lower-resourced operations like criminal organizations.

Mythos matters. And while, as consumers, there isn’t a lot we can do, understanding the implications helps us prepare for the future and might even affect our buying decisions. Here’s what happened, and more importantly, what it means for the devices on your desk and in your pocket.

Is Mythos the Kind of AI Anyone Can Download and Run?

No. This is the single most important thing to understand before you read any of the louder headlines. Mythos isn’t a program you can copy onto a laptop. “Frontier AI models”—those at the cutting edge—like this one run on massive, purpose-built computing infrastructure that costs a fortune to build and operate. (The thousand tests against OpenBSD consumed nearly $20,000 worth of compute.) Anthropic can see who is using it, control what they can ask it to do, and shut down abuse. That’s exactly why Project Glasswing can work: Anthropic is handing Mythos to a small group of trusted partners, including Apple, so they can find and fix flaws in their own software before anyone hostile has a comparable tool.

Over time, similar capabilities will appear in other AI models, and some version will eventually leak into the wild. But we aren’t there today, and the defenders have a (temporary) advantage.

What Does This Mean for Apple?

Apple products have a structural advantage over other general-purpose consumer computing devices: Apple controls the entire stack, from the silicon in the chip to the operating system to the App Store to iCloud services. It’s called vertical integration (and is also sometimes a source of consternation, since it means a closed ecosystem). When Apple decides to add a new security defense, it can build it into the chip, wire it into the operating system, and require its use in apps (on iOS, at least; macOS is a different story). Most of the industry cannot do that. With Windows, Microsoft has to work with Intel and AMD and a thousand PC makers. With Android, Google has to coordinate with Qualcomm and Samsung and dozens of other phone manufacturers.

Apple has been quietly using that advantage for years. The Apple Platform Security Guide documents the company’s primary security controls, including how they tie hardware and software together. These include the Secure Enclave, Pointer Authentication, Kernel Integrity Protection, and other esoterically named technologies that provide real-world benefits. Other ecosystems leverage similar hardware-to-software security ties, but it’s typically messier and less consistent. For example, Microsoft has Pluton, its own custom security processor designed in partnership with AMD, Qualcomm, and Intel. But Pluton is optional and sometimes disabled by PC manufacturers, whereas Apple consistently builds its protections into all its platforms.

Apple’s newest (and, for us security nerds, most exciting) addition is Memory Integrity Enforcement, which Apple calls “the most significant upgrade to memory safety in the history of consumer operating systems.” That’s a strong claim, but not unreasonable. It ships with the A19 and A19 Pro chips, which means every iPhone 17 and the iPhone Air got it at launch, and it’s also coming to Macs with the M5 chip and later. Apple’s own write-up describes it as the culmination of roughly five years of engineering work.

Anthropic focused its Mythos testing on memory-related attacks. These are consistently one of the primary sources of serious security vulnerabilities. Apple’s Memory Integrity Enforcement tags memory at the hardware level so the chip itself refuses to let a program read or write memory that doesn’t belong to it. I have no idea if Mythos bypassed Memory Integrity Enforcement, but I suspect Apple’s protections helped. Memory Integrity Enforcement is, however, limited to Apple’s latest devices. And memory corruption attacks are only one of many families of security vulnerabilities.
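To make the tagging idea concrete, here’s a toy Python simulation of my own (purely illustrative — not Apple’s actual design, which lives in silicon): every allocation gets a small random tag, the “pointer” carries a copy of that tag, and every load or store checks that the two still match.

```python
import secrets

class TaggedHeap:
    """Toy model of hardware memory tagging (illustrative only).
    Each allocation gets a random 4-bit tag; the 'pointer' carries a copy
    of that tag, and every load/store checks that they still match."""

    def __init__(self, size=256):
        self.mem = bytearray(size)
        self.tags = [0] * size           # tag 0 = unallocated memory
        self.next_free = 0

    def alloc(self, n):
        tag = secrets.randbelow(15) + 1  # tags 1..15, like a 4-bit hardware tag
        start = self.next_free
        self.next_free += n
        for i in range(start, start + n):
            self.tags[i] = tag
        return (start, n, tag)           # our "pointer": address, size, tag

    def _check(self, ptr, offset):
        start, _, tag = ptr
        addr = start + offset
        if addr >= len(self.mem) or self.tags[addr] != tag:
            raise MemoryError("tag check failed: out-of-bounds or stale pointer")
        return addr

    def store(self, ptr, offset, value):
        self.mem[self._check(ptr, offset)] = value

    def load(self, ptr, offset):
        return self.mem[self._check(ptr, offset)]

    def free(self, ptr):
        start, n, _ = ptr
        for i in range(start, start + n):
            self.tags[i] = 0             # retag, so stale pointers stop matching
```

A write one byte past the end of a buffer lands on memory carrying a different tag, and a read through a freed pointer finds the memory retagged; in both cases the access is refused. That is, in very rough outline, what the hardware does — except it happens at memory-access speed with no software in the loop.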

How Worried Should I Be?

Mythos is concerning and will have implications across every technology you use. We are approaching a point where vulnerabilities and exploits are developed faster than humans can respond, and the tools find flaws humans miss. My advice is to be aware and be prepared to make changes to how you select and manage your personal technology. You’ll want to prefer newer devices and services with good track records of staying up to date.

Apple is already a Project Glasswing partner, alongside Google, Microsoft, Amazon, the Linux Foundation, and more than 45 other organizations. They get early access to Mythos-class tools to find and fix their own bugs before anyone else can use similar capabilities. iOS and iPadOS are relatively locked-down environments where every app must be reviewed, signed, and run inside a sandbox that limits what it can access. Combine that with Apple’s new hardware protections, and the iPhone and iPad are in about as good a position as any consumer device on the planet right now.

That is not the same as invulnerable. Nothing is invulnerable, as DarkSword shows (see “DarkSword Exploit Threatens iPhones Still Running iOS 18,” 23 March 2026). But the combination of a controlled ecosystem, hardware protections, and a head start on Project Glasswing puts iOS in a much better spot than most platforms. The attack surface isn’t infinite, and Project Glasswing (along with Apple’s ongoing security efforts) will likely dramatically reduce the number of potential vulnerabilities across Apple’s platforms.

The primary objective of Project Glasswing is to find and fix as much as possible across major platforms, services, and vendors before adversaries gain these offensive capabilities. Then companies like Apple can include Mythos-level assessments into their process as they build new things, closing the vulnerabilities before they ever go out the door.

What About Macs?

The Mac is a more complicated story.

Macs are designed to let you install and run a huge range of software from anywhere, not just the App Store. Macs need this versatility, but that same openness is what makes the Mac a tougher security problem than the iPhone. The more software you can run, and the more freely that software can interact with the rest of your system, the more surface area attackers can target.

Apple has been quietly tightening security on the Mac for years, and modern Macs running recent versions of macOS are far more hardened than most people realize. Gatekeeper, System Integrity Protection, and the signed system volume all work to keep the core operating system from being tampered with. More importantly, every Mac with Apple silicon, meaning the M1 and every chip since, inherited a large chunk of the same hardware security architecture Apple built for the iPhone: the Secure Enclave, Pointer Authentication, Kernel Integrity Protection, the Page Protection Layer, secure boot anchored in hardware, and isolated execution for sensitive system code. An Apple silicon Mac is, at the hardware level, dramatically better protected than an Intel-based Mac ever was. And Memory Integrity Enforcement, the same protection I described above for the iPhone 17 lineup, is now landing on Macs with the M5 chip and later, extending that ladder one more rung on the Mac side of the house.
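Pointer Authentication, one of the protections listed above, is easier to appreciate with a sketch. Here’s a toy Python model of the general idea (my own illustration, not ARM’s or Apple’s actual scheme): a secret key signs a pointer, the short signature is stashed in the pointer’s unused high bits, and the signature is verified before the pointer is used, so a forged or corrupted pointer is rejected.

```python
import hmac, hashlib, secrets

# Toy model of pointer authentication (illustrative only). Real hardware
# uses a per-boot key invisible to software and a dedicated cipher; here
# we fake it with HMAC-SHA256 truncated to a 16-bit signature.
KEY = secrets.token_bytes(16)

ADDR_BITS = 48  # pretend addresses fit in 48 bits, leaving high bits free

def _pac(ptr: int) -> int:
    mac = hmac.new(KEY, ptr.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac[:2], "big")   # 16-bit toy signature

def sign(ptr: int) -> int:
    return (_pac(ptr) << ADDR_BITS) | ptr   # stash the signature in the high bits

def authenticate(signed_ptr: int) -> int:
    ptr = signed_ptr & ((1 << ADDR_BITS) - 1)
    if (_pac(ptr) << ADDR_BITS) | ptr != signed_ptr:
        raise RuntimeError("pointer authentication failed")
    return ptr
```

An attacker who overwrites a return address on the stack would have to produce the matching signature without ever seeing the key; flipping even a single bit of a signed pointer makes `authenticate` refuse it.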

But if you are thinking, “I should do something different on my Mac than on my iPhone,” you’re right. On your iPhone, the system is doing most of the work for you. On your Mac, you still need to be thoughtful about what you install and where it came from, because the Mac’s openness means some of the protections iOS takes for granted are opt-in. Macs also allow you to turn off some of their defenses, and that isn’t a good idea.

What Should I Actually Do?

First, and by a wide margin: keep your devices up to date. This is the single most important thing, and it is not new advice. The entire point of Project Glasswing is that fixes will start landing in Apple’s updates. Those fixes only help you if you install them. An older iPhone that’s being patched regularly is in much better shape than a brand-new one that isn’t. Turn on automatic updates on your iPhone, iPad, and Mac, and actually reboot when asked.

Second, understand that newer hardware gets you better protection than older hardware. One reason I upgraded to the iPhone 17 Pro was to get Memory Integrity Enforcement (I suspect I’m in the minority). This isn’t mere marketing; it’s how security works when protections are built into the chip. Every iPhone 17 and the iPhone Air already ship with Memory Integrity Enforcement, and Macs with the M5 chip and later are getting it too. If you’re on an M1, M2, M3, or M4 Mac, or any iPhone older than the iPhone 17 series, you do not have Memory Integrity Enforcement, but you do have the rest of Apple’s hardware security architecture that’s been accumulating since 2018. You are not suddenly insecure overnight; you just don’t have the latest protections.

If you are already planning to upgrade in the next year or two, that upgrade will give you meaningfully better protection against the kinds of attacks Mythos makes easier to build. That said, if you are using old hardware that’s no longer supported, it’s time to upgrade.

Being the tech guy for a family of five, I won’t be able to get everyone on all the latest hardware, but I’ve already been deprecating any pre-Apple silicon devices, will upgrade to M5 Macs for myself over the next year, and will be upgrading the kids’ iPhones more frequently than usual.

Third, be thoughtful about which apps you install and, more importantly, what data you give them. Here is the part most people miss. Even with Apple’s hardware protections and iOS sandboxing, the apps themselves are written by thousands of small developers, most of whom lack Apple’s resources to find and fix their own bugs. The App Store review process catches some bad actors, but it is not designed to find subtle security flaws, and compromised code libraries have made their way into legitimate apps before.

On top of that, most apps talk to cloud services run by small teams, and any data you give an app often ends up on those servers, too. Sandboxing on iOS does a good job of containing a misbehaving app so it can’t take over your whole phone, but it can’t protect data you have already handed to a company that then stores it on its own systems. So think twice before you give a random app access to your photos, contacts, health data, or financial information.

Stick to well-known, reputable apps for anything sensitive. Use Apple’s built-in privacy controls. When an app asks for permission to do something it doesn’t obviously need, say no. And if you’re not actively using an app, delete it. Every app you remove is one less thing for a future Mythos-class tool to find flaws in.

The Bigger Picture

We are at the start of a period in which finding software flaws that affect everyday users will become dramatically easier for both attackers and defenders. The situation for enterprises like banks, hospitals, and retailers is worrisome. These organizations have massive amounts of legacy code and software in their data centers that will be much harder to update and defend. This is why Project Glasswing includes financial institutions and other critical infrastructure companies, not just software and hardware vendors. As consumers, this is where we face our greatest risks, but it’s up to those organizations to protect us.

However, over the long run, I believe using AI to identify security vulnerabilities favors defenders, because developers can find and fix many more bugs before shipping software to the public. And AI coding tools may help us develop new defensive security techniques that eliminate entire attack categories, especially when those writing the software control the entire stack, as Apple does.

With respect to our Apple devices, we’re in a pretty good position. Apple is part of Project Glasswing and has quietly been building robust security protections for years. Keep your stuff updated, be thoughtful about who and what you trust with your data, and let Apple do what Apple is good at. This is a time to pay attention, not be afraid.


Rich Mogull is the TidBITS Security Editor, the Chief Analyst at the Cloud Security Alliance, and has spent more than 25 years working in information security. He is not compensated by Apple or any other company mentioned in this article.


[Moving this previous discussion into Rich’s article comments to keep it all together. -Adam]

I’ve been writing a lot lately about AI and how it’s reshaping the world most people aren’t watching closely enough. This one sits right at the intersection of Apple and that bigger picture — and I think it deserves attention from this community specifically.

Last week I wrote about how AI has fundamentally shifted the balance between those who find security vulnerabilities and those who defend against them — and why the next few years represent a meaningful window of elevated risk, not just for personal devices but for critical infrastructure.

This week, Anthropic announced Project Glasswing.

Apple is a founding partner — alongside Microsoft, Google, Amazon Web Services, Cisco, JPMorganChase, NVIDIA, CrowdStrike, Palo Alto Networks, and the Linux Foundation.

For a community that follows Apple closely: Apple does not join multi-company co-ordinated initiatives lightly. When they do, it’s worth asking why.

What’s Project Glasswing?

Anthropic has developed a new AI model (not publicly available) that can autonomously find security vulnerabilities — the kind that have been hiding undetected for years, sometimes decades. In recent weeks it found thousands of critical vulnerabilities across every major operating system and browser, including:

  • A 27-year-old flaw in OpenBSD — one of the most security-hardened operating systems in use

  • A 16-year-old flaw in FFmpeg that had survived five million automated test runs without detection

  • A chain of Linux kernel vulnerabilities that, when combined, allowed escalation from ordinary user access to full machine control

The initiative exists because those same capabilities will inevitably become more widely available — including to malicious actors. Project Glasswing is an attempt to get defenders ahead of that curve.

Why Apple?

Apple’s software runs on over two billion active devices. A significant vulnerability in macOS, iOS, or Safari — the kind this model can now find autonomously — would be a consequential problem. Their presence as a founding partner suggests they’ve assessed the risk and decided that active participation in the defence effort is worth more than waiting on the sidelines.

The bigger picture

The announcement is both sobering and genuinely hopeful. Sobering because it confirms what CrowdStrike described plainly in the announcement: “The window between a vulnerability being discovered and being exploited has collapsed — what once took months now happens in minutes.” Hopeful because the explicit goal of Project Glasswing is to give defenders a durable advantage in the long run — not just to patch the current gap but to build better security infrastructure for the AI era.

The transition period is the risk. The destination is potentially a more secure world than we’ve had before.

I’ve been thinking and writing about the broader implications of this kind of AI-driven change — for infrastructure, for communities, for how people prepare. If that’s a conversation that interests anyone here, I host a community called Future Together focused on exactly these discussions. Our next online meetup is Tuesday 15 April at 5:00 pm AEST — open to anyone, no technical background required.

Curious what the TidBITS community makes of this. Are Apple’s security teams already using AI for vulnerability detection at scale? Does the scope of the Glasswing partnership change how you think about the risk?

Piece in the NYTimes on this project.

Anthropic strike me as among the more responsible actors in this field, even with the release of their source code (cough). Interesting to see this field testing of collaboration between major players being announced. Coding is probably the leading edge of AI development, a bit of a glimpse into what may be coming for infrastructure as well as medical, scientific research.


I’ve been wrestling with how to respond to this thread for longer than I should.

On the one hand, it’s not the first time that major corporations and other organizations have joined forces to address public security concerns. On the other hand, the velocity and sophistication of cyberattacks that I’ve seen in the last year have increased in a truly sobering (if not terrifying) fashion.

It absolutely makes sense for Apple to join this sort of initiative now. I would be shocked if Apple were not using some sort of AI-informed security countermeasures already, but we are at a stage where development of new, industry-wide “best practices” is required.


Seems like a great idea to me. It was inevitable that these models would grow to a point that they would be able to find security issues more and more quickly. Now Apple et al. will be able to run the models against their code to find vulnerabilities, probably even get suggestions to patch them, and even if there are a lot of false positives, that’s better than the alternative - which is someone else doing the same thing and not telling Apple so they can exploit the flaws for gain.

I don’t even know why this would be controversial.

As a corollary to my last comment, I think that we have finally reached the point where anyone using old, unsupported equipment seriously needs to consider either retiring that equipment or limiting its use to tightly controlled, generally unconnected environments.

The bottom line is that the cost of targeting a broad range of obsolete or unsupported technologies is plummeting. It’s like the economics of spam. If the cost of attack toolkits becomes essentially zero, then it becomes trivial for modern day “script kiddies” to deploy surprisingly sophisticated attacks indiscriminately against a lot of targets, never mind the damage that well-funded entities can execute.

In the same way that publicly exposing an unpatched Windows 95 computer on the Internet once would result in compromise within minutes (if not less), I’m confident that some of our most beloved older Apple devices are becoming unacceptable risks.


I wonder, if by the same token, this also means the cost of finding/developing patches to such vulnerabilities is plummeting.
If that effort were indeed to go to near zero, it would become harder to justify why manufacturers can’t be compelled to provide security patches over time periods much longer than just the 2 most recent years.


I think it’s true that the cost of developing patches is decreasing, but thanks to entropy, it’s also true that it is easier to find flaws than to fix them.

While it would be nice to think that the market (or governments) might compel companies to support products longer, I think that substantially increased threat profiles would encourage companies to enforce mandatory updates more ruthlessly rather than support devices longer. Ironically, that is not far from Apple’s current practice.

Of course, I would favor an environment where it becomes commonplace to support older devices longer than is the current practice. I think it would be much easier to do that if there were greater decoupling between OS and apps.

I’m not holding my breath.


Having a good friend who works on their safety team, I confidently claim they are the most responsible. I consider the release/leak of source code a completely separate issue from model safety.

The main issue I see at present, though, is the shifting balance between attackers and defenders of critical infrastructure. It’s only a short/medium-term situation, but the horizon is out far enough that substantial damage could occur in that time.

Which means the question is, what can we do to mitigate those risks?

This is the critical factor. While it technically also means that defenders can move more rapidly, the reality is the existing systems are caught in processes (corporate, compliance, hardware, etc) that are too slow to respond.

What do those “best practices” look like, especially when some of the critical systems are decades old?

For users of Apple equipment, I believe we are relatively safe - issues that are discovered will be fixed and pushed out in a reasonable time frame. It’s the systems we connect to with our Apple devices that are of greater concern. How does a bank with COBOL systems from last century react to discovered vulnerabilities (a real issue, if I’m to believe a recent podcast)?

I don’t think using models is controversial (apologies if my post gave that impression) - the issue is that the balance has shifted in favour of attackers. While both attackers and defenders have access to models, the defenders are trapped by legacy processes which make mitigations too slow.

Granted, Mythos is not publicly available, but Opus 4.6 is also very capable of finding flaws in systems. The attackers already have cutting-edge access.


This is the important observation. But expand that to the systems and technologies underpinning critical infrastructure - systems that are far from trivial to update.

Yes, that is also true. But generating the fix is only a minor part of the solution. It’s all the processes around getting that fix pushed out - technical challenges, corporate governance, lost/archived procedures, and so on - that make the balance favouring the attackers so relevant.

There is also a chain reaction. There is a long history of one patch triggering a new vulnerability. (That was the topic of a podcast I listened to this morning.) Fixing one issue doesn’t necessarily make a system more secure; it can even do the opposite.

Many people agree with you. And there are a number of projects whose sole purpose is to breathe life into old unsupported hardware. But there are still too many systems that become bricks as technology marches on.

I have a hope that AI generated software will be able to bring life (& better security) to even more old hardware. I also believe we’ll get to that point. But in the meantime, the attackers have the advantage of being able to move more quickly and have access to really powerful models.

There is very real risk ahead. Project Glasswing is a good step, but is it sufficient?


Dave Plummer (retired Microsoft Engineer) shares his own two cents on the Axios hack, Anthropic source code theft, and related topics:

All of this on my mind after fixing up a few things on my mother’s MB Air a week ago. It won’t update beyond Monterey, can’t install latest Safari. I moved her to Firefox which did run as the latest version.

She uses it mainly for streaming now. Has no interest in picking up the Neo (yes, I tried). Not looking forward to trying to explain this turn of events.


Welcome to a major escalation in the InfoSec cold war. Anthropic just developed the equivalent of a digital nuclear weapon. How long before the bad guys catch up? Fully expecting to witness opposing A.I.s doing battle in the not so distant future. How long before the intelligence agencies obtain this tech and use it to further the pursuit of cyber weapons? No doubt they are kicking themselves right now. How many of these exploits did they secretly already have in their arsenal? This is why they were so angry about Snowden. They had Christmas in July in their pocket and Snowden revealed their capabilities to the world, taking away their weapons. Before Snowden the intel agencies could hack every system on the planet, and many of them had back doors built in. The rest was a library of secret exploits that Snowden exposed only partially. Every company closed those loopholes. Secrets are only useful if they remain secret. How long before Mythos is stolen or duplicated? What will come next? Can Anthropic be trusted? After all, Claude source code was leaked in a spectacular fashion very recently.

A.I. is going to cause an exponential curve of technological advancement beyond our current ability to comprehend. We don’t even understand our own human consciousness. A.I. researchers do not even understand how it works. We are currently using existing A.I. to build new generations of A.I., as in evolutionary reproduction. Advancements in all forms of science will begin to make astounding leaps forward. Materials science, quantum mechanics, biological science, power generation, new fantastical computers, etc. Are we as a species ready for it? Or are we seeding our own destruction? Everything can be turned into a weapon; is humanity ready for it? Can we adapt that quickly? Can we overcome our violent nature? Regarding history? No, we are not ready for this. Not by a long shot. We are being led by a series of autistic madmen born from chemical environmental corruption and hell bent on living forever, merging humanity with machine, and colonizing the stars. They are Humanists who want to become gods. You cannot put the genie back in the bottle once it’s been released. A.I. has escaped Pandora’s Box.

Marcus Hutchins has a somewhat more sober take on Mythos/Glasswing:


So much of contemporary discourse is disempowering, so it is important to proceed on the basis of concrete facts and known issues - not to diminish the potential impact of a future AI, but human agency is guiding all of this. Act in your own individual way and gain confidence through that. If the issue concerns you, do what you can do on the basis of facts. Speculations and what-ifs can be diverting but can untether you from facts and reality. The first true thing I heard about LLMs was that they are both overhyped and underestimated; I’m focussing on where they are useful - to me. There’s far too much hoo-ha out there, and I wouldn’t add to it.


I hope that some non-American public and private organisations get invitations to Project Glasswing meetings. I presume that US and Allied defence organisations are also having Project Glasswing style conferences.

First, and by a wide margin: keep your devices up to date. This is the single most important thing, and it is not new advice.

I question this advice. If one is way behind, what is the danger of new exploits? In fact, evil hackers may ignore my version of the OS since it might be more secure and also a smaller target with fewer users. Isn’t it true that the most dangerous risks these days come from new releases?

I think it’s hard to make broad conclusions about risk because much depends on individual factors, such as what a computer is used for, if it is connected to the Internet and how, what data is stored on connected storage devices, and if the user(s) have something or do something of interest to attackers.

Having said that, two broad problems caused by using old OSes, in my view, are that legacy versions of applications and utilities can have unfixed vulnerabilities, as can outdated but widely distributed open source components the OS relies on.