The ImageIO vulnerability, which Apple credits as being discovered internally, could allow an attacker to use a maliciously crafted image file to corrupt memory in an exploitable way. Apple acknowledged that this security flaw “may have been exploited in an extremely sophisticated attack against specific targeted individuals.” This wording suggests that the vulnerability was weaponized for nation-state use in operations against high-value targets, likely involving spyware like Pegasus.
Given the active exploitation, we recommend updating your devices promptly, especially if you are in a position that might be targeted by sophisticated attackers. While most users are extremely unlikely to be affected, there’s no benefit in leaving your devices vulnerable in case the exploit is resold to less discerning attackers.
At this point, I would hope Apple punishes the individuals responsible for this vulnerability. There is NO EXCUSE for memory corruption given how well understood the means of preventing it are by now.
Although there are very good tools for identifying and fixing memory problems, we are not even close to the point where an expert programmer can be expected to produce perfect code all the time on a system as big and complicated as a major operating system.
I have a few comments in response to your comment:
I was a software developer for many years at a major aerospace company. I don’t want to escalate this discussion into a “who’s right, who’s wrong” debate, however:
I agree with you totally about not expecting perfect code from developers on such a huge product set (that is, Apple’s various Operating Systems). On the other hand, if Apple could focus some of its massive software development army on developing more and better regression tests, some bugs (perhaps not all) might be caught and avoided prior to release.
The “problem” is, developing a better and more robust regression-test suite is not sexy work, but it is valuable work.
A good example: in this particular case, releasing potentially dangerous bugs that can quite literally endanger the lives of certain individuals (journalists, political activists, etc.) is also not sexy.
Ask yourself: how many times have you seen, in the various TidBITS commentary threads, comments such as “Why doesn’t Apple spend more time and effort fixing bugs, in addition to focusing so much on developing major updates for the annual Operating System releases (and introducing yet more bugs in the process)?”
Because Apple does have a massive software development army, it could address both bug eradication and major Operating System releases.
I updated my Sequoia Mac this morning when I got the update prompt. Currently updating several iDevices. No update yet for an older Mac running Monterey.
Cupertino/Redmond: A vast team works around the clock, laying siege to their own software — bounds checking, unit testing, pounding it for integration flaws.
Pyongyang/GRU: A vast team works around the clock, laying siege to the same software — bounds checking, unit testing, pounding it for integration flaws.
Most often, Team A uncovers the vulnerability before Team B. Sometimes Team B, with the resources of an entire nation-state, uncovers the vulnerability before Team A.
I’m with Dave. The size of Team B is greater than the entire population of Cupertino by a large factor. (Cyber-incompetence at our own state and federal government level is generally not a matter of resources.)
Pyongyang is a poor example. Apple makes all of North Korea’s estimated annual GDP in just 18 days.
I’m not satisfied with this explanation anyway. Team B could disappear into a hole tomorrow and I couldn’t care less. Team A on the other hand runs our lives. I expect much more from Team A. Allowing Team B to score a preventable win every once in a while and then just shrugging it off as no big deal is to me entirely unacceptable. At least as long as Team A wants to remain in charge of our tech. In my book, if you’re king of the hill and want to remain there, that comes with expectations.
While I won’t disagree, I will note that regression tests only catch regressions, i.e. bugs that were found, fixed, and have reappeared. I haven’t seen anything suggesting this particular bug is a regression (although I could have missed it), so regression tests wouldn’t have helped.
These sorts of “maliciously crafted” images (or whatever) are particularly hard to catch, as the existing test cases will be a collection of valid inputs and inputs that have caused (different) problems in the past. Fuzzing can help (a rough sketch follows at the end of this comment), although it’s difficult both to develop a fuzzer that creates inputs that aren’t immediately rejected and to know how many times to run an inherently probabilistic test. Code reviews can also help, although it takes particularly detail-oriented people (who have their own work to do) to catch these kinds of things, and even detail-oriented people’s minds wander reading through 1,000 lines of code, making them miss the subtle vulnerability on line 1,001.
While I’ve been critical about Apple’s QA here and elsewhere, this seems like something particularly hard to catch, and requires putting resources on a problem that might not even exist, so I can’t be too critical in this case.
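To show what I mean by fuzzing, here’s a minimal, hypothetical sketch in Swift. `parseImage`, the seed bytes, and the mutation strategy are all invented for illustration; this is not ImageIO or any Apple code, just the shape of a mutation-based fuzz loop and why most random mutants die at the first validity check.

```swift
// Minimal mutation-fuzzing sketch. parseImage is a hypothetical stand-in
// for a real decoder; it is not Apple's code.
import Foundation

struct MalformedInput: Error {}

func parseImage(_ bytes: [UInt8]) throws -> Int {
    // Toy "decoder": requires a magic header, then reads a claimed length.
    guard bytes.count >= 5, bytes[0] == 0x89, bytes[1] == 0x50 else {
        throw MalformedInput()
    }
    return Int(bytes[4])
}

// Start from a valid seed so mutants aren't all rejected by the first check.
let seed: [UInt8] = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]
var rejected = 0

for _ in 0..<10_000 {
    var mutant = seed
    // Flip a few random bytes; real fuzzers mutate structure-aware fields
    // and use coverage feedback to decide which mutants to keep.
    for _ in 0..<Int.random(in: 1...3) {
        mutant[Int.random(in: 0..<mutant.count)] = UInt8.random(in: 0...255)
    }
    if (try? parseImage(mutant)) == nil { rejected += 1 }
}

print("rejected \(rejected) of 10000 mutants at the first validity check")
```

The run count (10,000 here) is arbitrary, which is exactly the problem: the whole exercise is probabilistic, and there’s no obvious point at which you can declare the decoder clean.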
Yes, but I worked on mission- and safety-critical stuff, where bugs were NOT ACCEPTABLE. I’m always disgusted by the attitude that “bugs are inevitable and therefore acceptable.”
And yeah, software QA is probably the only part of software where you CAN throw bodies at the problem and get a positive result.
We’ve had the technology to prevent memory corruption for many years. We’re just not willing to use it, because it would require more thinking and less hacking.
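To make that concrete, here’s a minimal, hypothetical sketch in Swift of the kind of bounds checking that memory-safe practice demands when parsing untrusted input. `readField` and the byte layout are invented for illustration (not ImageIO or any real Apple API); the point is that a malicious length claim gets rejected instead of reading past the end of a buffer.

```swift
// Hypothetical sketch of bounds-checked parsing of untrusted input.
import Foundation

struct ParseError: Error {}

func readField(from data: [UInt8]) throws -> [UInt8] {
    // The first byte claims the payload length; never trust it blindly.
    guard let claimed = data.first.map(Int.init) else { throw ParseError() }
    let payload = data.dropFirst()
    // Bounds check: reject any length the buffer cannot actually satisfy.
    // This is exactly the check a memory-corruption bug omits.
    guard claimed <= payload.count else { throw ParseError() }
    return Array(payload.prefix(claimed))
}

// A malicious input claiming 200 bytes but supplying only 3 is rejected
// instead of being read past the end of the buffer.
let malicious: [UInt8] = [200, 0x41, 0x42, 0x43]
print((try? readField(from: malicious)) == nil)  // true: rejected safely
```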
You can get results, but not by throwing the wrong bodies at the problem. It’s all too easy to rat-hole on testing what’s easy to test (or worse, what’s easy to automate), and miss more important areas. The decline in written (and reviewed) test plans makes this a lot worse.
So does the low prestige of testing, with managers appointed for reasons other than competence, and testers too junior (or not good enough) for engineering jobs. AI has started making this even worse; to upper management unfamiliar and unconcerned with testing, lots of low-priority tests written by a bot can seem an adequate substitute for (rather than supplement to) fewer tests written by an intelligent human.
Yeah, that would be my question… my MacBook Air is an early 2015 model and I’m stuck on Monterey. Are older OSes like Monterey potentially in trouble from this attack?
Unfortunately, there’s no way to know—that’s the downside of using operating systems that no longer receive security updates. That said, I think there are three reasons not to worry:
It was used in a highly targeted attack, which means it isn’t likely to be reused in a general attack.
Although the exploit affected iOS, iPadOS, and macOS because all three share code, it was very likely used against iOS since iPhones are vastly more prevalent than Macs. Most (all?) of the high-profile vulnerabilities where we know something about the person targeted were aimed at the iPhone.
Monterey is used by so few people at this point that no attacker would bother to put the work into targeting a Monterey user unless they wanted to compromise one specific Monterey user.
As long as you take precautions, such as avoiding sketchy sites and running something like Malwarebytes every so often, you should be fine.
And use a robust ad blocker, since a lot of malware (or at least phishing screens that make you think you’ve been attacked) seems to be distributed through ad networks.