Apple’s January 2025 OS Updates Enhance Apple Intelligence, Fix Bugs

Originally published at: Apple’s January 2025 OS Updates Enhance Apple Intelligence, Fix Bugs - TidBITS

Apple has released its third major set of operating system updates, including macOS 15.3 Sequoia, iOS 18.3, iPadOS 18.3, watchOS 11.3, visionOS 2.3, tvOS 18.3, and HomePod Software 18.3. Enhancements to Apple Intelligence dominate the release notes, though Apple also fixed a few bugs and addressed numerous security vulnerabilities.

The company also released iPadOS 17.7.4, macOS 14.7.3 Sonoma, macOS 13.7.3 Ventura, and Safari 18.3 for Sonoma and Ventura, all of which include security updates from the current operating systems. For the second consecutive release, Apple did not update iOS 17 to align with iPadOS 17, presumably because all iPhones capable of running iOS 17 can also support iOS 18.

Apple Intelligence Changes

Although we’re still waiting for Siri to gain onscreen awareness, understand personal context, and work more deeply with apps, these updates bring changes to Apple Intelligence notification summaries, Visual Intelligence, and Genmoji.

Notification Summaries

After complaints that Apple Intelligence’s notification summaries generated blatantly incorrect news summaries and misidentified spouses, Apple responded by changing the style of summarized notifications to italics. Previously, the only indicator of a summarized notification was a glyph.

More tellingly, the company temporarily turned off notification summaries for all apps in the App Store’s News & Entertainment category. We presume Apple’s engineers are putting more effort into summarizing news articles and verifying that the results match the source. Apple says that “users who opt-in will see them again when the feature becomes available.”

Finally, Apple made it easier to manage settings for notification summaries from the Lock Screen. On an iPhone, for instance, you can swipe right to reveal an Options button, tap it, and then tap Turn Off AppName Summaries. You can also report a concern with a summary.

Notification summary changes in iOS 18.3

Visual Intelligence Adds Scheduling, Plant and Animal Identification

Apple Intelligence enhances the new Camera Control button on iPhone 16 models, allowing it to take action based on what’s in the viewfinder. Initially, it could only ask ChatGPT about what it saw or perform a Google image search. Now, when you press and hold the Camera Control, Visual Intelligence can also detect if you’re pointing at a poster or flyer and suggest creating a calendar event. Additionally, if it recognizes a plant or animal within the frame, it will identify it and provide more information with a tap.

Visual Intelligence scheduling an event and identifying a plant
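
Apple hasn’t explained how Visual Intelligence extracts event details from a poster, but you can get a feel for the general approach using Apple’s public APIs. The rough Swift sketch below is purely illustrative (the suggestEventDates function is my invention, not Apple’s implementation): it reads the poster’s text with Vision’s OCR and then uses NSDataDetector to find dates that could seed a calendar event.

```swift
import CoreGraphics
import Foundation
import Vision

// Hypothetical illustration only, not Apple's implementation of Visual
// Intelligence. It reads the text on a photographed poster with Vision's
// OCR, then scans that text for dates that could seed a calendar event.
func suggestEventDates(in poster: CGImage) throws -> [Date] {
    // Step 1: recognize the text in the image.
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: poster, options: [:]).perform([request])

    let text = (request.results ?? [])
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")

    // Step 2: look for dates in the recognized text.
    let detector = try NSDataDetector(types: NSTextCheckingResult.CheckingType.date.rawValue)
    let range = NSRange(text.startIndex..., in: text)
    return detector.matches(in: text, options: [], range: range).compactMap { $0.date }
}
```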

I generally use the Seek and Leaf Identification apps to learn plant names, so once spring arrives, I’ll be curious to see if Visual Intelligence does as well. Poster scanning may be a bigger win because Ithaca is a college town with many events advertised on local bulletin boards.

Genmoji Arrive on the Mac

iOS 18.2 and iPadOS 18.2 introduced the custom emoji that Apple calls Genmoji, but macOS 15.2 lacked the feature. With macOS 15.3, the Mac has now caught up. The feature remains the same—you describe what you want to see in a few words, and you can base the emoji on a picture of a person. The Genmoji you create are actually stickers, but you can use them just like regular emojis.

Genmoji splash screen

Calculator Gains Repeated Operations

The Calculator app on the Mac, iPhone, and iPad now repeats the last mathematical operation each time you click or tap the equals sign. In other words, if you use it to multiply 2 by 2, clicking the equals sign the first time gives you 4. Clicking it again multiplies 4 by 2, then 8 by 2, then 16 by 2, and so forth. Although I can’t imagine using this feature on a calculator (as opposed to in a spreadsheet), it could be useful for cumulative multiplication or division (such as when resizing an image), calculating compound interest, or modeling exponential growth.

Calculator
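
To spell the behavior out, here’s a minimal Swift sketch (illustrative only, not Apple’s code) of a calculator that remembers the last operation and reapplies it each time you press equals:

```swift
// Minimal sketch of "repeated equals" behavior; illustrative, not Apple's code.
// After 2 x 2 =, pressing = again reapplies "x 2" to the running result.
struct RepeatingCalculator {
    private var result = 0.0
    private var lastOperation: ((Double) -> Double)?

    mutating func multiply(_ value: Double, by factor: Double) -> Double {
        lastOperation = { $0 * factor }   // remember "x factor"
        result = value * factor
        return result
    }

    mutating func equals() -> Double {
        if let repeatLast = lastOperation { result = repeatLast(result) }
        return result
    }
}

var calc = RepeatingCalculator()
print(calc.multiply(2, by: 2))  // 4.0
print(calc.equals())            // 8.0
print(calc.equals())            // 16.0
```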

2025 Black Unity Collection

To honor Black History Month, Apple unveiled its Black Unity Collection for 2025, which includes a special-edition Apple Watch Black Unity Sport Loop, a matching watch face, and iPhone and iPad wallpapers. I mention these because the new Unity Rhythm watch face is the main change in watchOS 11.3. Otherwise, Apple merely states that it offers improvements and bug fixes.

2025 Black Unity collection from Apple

Bug and Security Fixes

Apple admitted to only two bug fixes in iOS 18.3 and iPadOS 18.3, which:

  • Fix an issue where the keyboard might disappear when initiating a typed Siri request
  • Resolve an issue where audio playback continues until the song ends, even after closing Apple Music

Although the macOS 15.3 release notes don’t mention any bug fixes, Apple reportedly resolved the Apple Software Restore bug that prevented SuperDuper, Carbon Copy Cloner, and ChronoSync from creating bootable backups (see “It’s Time to Move On from Bootable Backups,” 23 December 2024). I’ve confirmed that SuperDuper can once again complete a bootable backup, and I assume the others can as well. That said, while I could select my backup drive in the macOS startup options screen, when my M1 MacBook Air restarted, it booted from the internal drive and displayed a kernel panic dialog. When I consulted ChatGPT and Claude about the panic log, they indicated it was related to a missing library. So perhaps Apple Software Restore isn’t fully functional yet.

SuperDuper backup success

The remaining releases—visionOS 2.3 (I assume, since Apple hasn’t updated its release notes page), tvOS 18.3, and HomePod Software 18.3—only acknowledge “performance and stability improvements.”

On the security front, the most notable item is a zero-day vulnerability in the CoreMedia frameworks shared by most of Apple’s operating systems. Apple states that a malicious application may exploit this vulnerability to gain elevated privileges, and the company says it’s aware of a report that the vulnerability “may have been actively exploited against versions of iOS before iOS 17.2.”

Updating Advice

I’ve been running macOS 15.3 and iOS 18.3 betas for a while with no issues. Although it’s tough to get excited about the new features or bug fixes in these releases, I recommend updating soon since some of these updates address a zero-day vulnerability.

Just updated. Did NOT activate AI.

Guess they got enough heat?

Sorry, should have been more specific; updated macOS to 15.3. Haven’t done iOS yet.

iOS 18.3, on the other hand, DID activate AI.

Not for me…15 Pro Max. I had not turned it on, so apparently it respected that.

Listening to OpenAI’s CEO tonight talking about how AI is a game changer that leapfrogs society to a new level.

Meanwhile, it’s so stupid it thinks I have 5 fingers on my right hand.

While odd, it can be viewed as artistic license that it has me emerging from the INSIDE of the piano, rather than sitting in front of the keyboard. But there’s no world where giving me 6 fingers is anything but a design error that would give a 5th grade art homework assignment a “C” or lower.

But we’ve all been seeing these extra-finger AI graphics for a couple years now. I’m mainly disappointed that the magic of Apple pouring their billions into this didn’t give us any better outcome than the multi-fingered junk that’s been going around the Net for a while.

But more, I don’t get how software can screw up this bad. I’m a career software engineer. What is the algorithm that creates people? Doesn’t it model people in some way, kind of like 3-D animation/rendering software? With rules about how to articulate elbows, general proportions, and little details like, ya know, HOW MANY FINGERS to generate on each hand??

Like, how does this happen?? Does the software have trouble counting to 5?? Or does it have a +/-1 degree of freedom when assembling phalanges? And for goodness’ sake, why don’t both hands match?? I’ve never seen a genmoji with 3 eyes or 2 mouths…

So, sorry Sam Altman, I’m not feelin’ it. I’d be really embarrassed to put software like this out. But if I got paid that much, I probably wouldn’t care either :-)

Because it doesn’t have any concept of what a “human” or a “finger” might be. It has been trained on millions of tagged pictures, not all of which are going to be realistic. It then uses probabilities to “generate” an image based on your requested description.

When you ask for a picture of a person, it doesn’t know what a person is supposed to look like. It simply knows that out of however many millions of pictures it was trained on, a particular subset were tagged as “person”, and it probabilistically extrapolates from there.

The result is therefore only as good as the data it is trained on, and we have absolutely no clue what that might be. And I’m sure it includes all kinds of artistic renderings that defy real-world biology.
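
To make that concrete, here’s a toy Swift sketch with made-up tag counts. The “generator” knows nothing about anatomy; it just samples from the label frequencies in its training data, so a handful of mislabeled images is enough to make it occasionally emit the wrong finger count:

```swift
// Toy illustration, not a real model: a "generator" that only knows how often
// each finger count appeared in its (made-up) training tags. With a few bad
// tags, it will occasionally produce the wrong number of fingers.
let trainingFingerCounts = [5: 9_400, 4: 300, 6: 300]  // invented tag counts

func sampleFingerCount() -> Int {
    let total = trainingFingerCounts.values.reduce(0, +)
    var pick = Int.random(in: 0..<total)
    for (fingers, frequency) in trainingFingerCounts {
        if pick < frequency { return fingers }
        pick -= frequency
    }
    return 5
}

// About 6% of sampled "hands" get the wrong finger count, purely from the data.
print((0..<10).map { _ in sampleFingerCount() })
```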

Mmm. Probably should (finally) update to iOS 18 on my primary device, and macOS 15 on both my desktop and server machines. I would have (should have) done it over the holidays as I promised myself, but Tab was put down in late December and the trauma and grief of that hasn’t left me yet. I’ll do the server machine first, I think, on the grounds that the opportunity for total surprise should be fairly minimal.

Apple Intelligence acronym:

iASS

Any takers for what it stands for :thinking:

You might find this interesting:
Once a model is trained, it generates an image by starting with an image of random noise and refining it into a coherent picture through gradually removing noise.

How to Distinguish AI-Generated Images from Authentic Photographs
https://arxiv.org/pdf/2406.08651
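
If it helps, here’s a deliberately oversimplified Swift sketch of that loop: start from random noise and repeatedly subtract a predicted noise estimate. The denoiser closure is a stand-in for the trained neural network, which is where all the real work (and all the learned quirks) happens.

```swift
// Deliberately oversimplified sketch of diffusion-style generation: begin with
// random noise and repeatedly subtract a predicted noise estimate. A real
// denoiser is a large trained neural network; the closure here is a stand-in.
func generateImage(pixels: Int, steps: Int,
                   denoiser: ([Double], Int) -> [Double]) -> [Double] {
    var image = (0..<pixels).map { _ in Double.random(in: -1...1) }
    for step in (1...steps).reversed() {
        let predictedNoise = denoiser(image, step)
        let scale = 1.0 / Double(step)  // remove a fraction of the noise per step
        for i in image.indices {
            image[i] -= scale * predictedNoise[i]
        }
    }
    return image
}

// Dummy denoiser that just predicts the current values, so the output collapses
// toward flat gray; a trained model would instead steer it toward a picture.
let demo = generateImage(pixels: 4, steps: 10, denoiser: { image, _ in image })
print(demo)
```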

You may be right. But I’m skeptical. I have never seen a picture, or even an artist’s rendering, of someone with 6 fingers. And if it’s using the aggregate of millions of pictures, its “intelligence” would have easily told it to eliminate the couple of oddballs out there, causing it to ALWAYS render humans with 5 fingers.

Yes we do: billions of photos of people with 5 fingers on each hand.

I’m not impressed. Again, a 7-year-old would not even draw a stick figure with 6 fingers. OpenAI has the world of data at its “fingertips”. And we’re going to be gaslit into giving it a pass on something this ridiculously unintelligent?

Not happening.

The problem has to do with the many different ways fingers can appear in training images.

It’s also improving in the top systems—Image Playground would seem to be, like much of Apple Intelligence, well behind the times. I asked ChatGPT for some images of hands playing the piano, and while they’re not necessarily perfect, they all had the right number of fingers.

We don’t know what those photos actually look like.

As @ace pointed out in the article he cited, photos don’t usually include beauty shots of hands. Even data sets designed for hand recognition don’t consist entirely of clear, well-posed hand shots. Fingers (especially thumbs) may be obscured. Wrists may not be visible. Multiple hands (from multiple people) may be close together and partially obscured, making it not obvious what parts connect where.

You and I understand human physiology enough to know that the overwhelming majority of people have two arms, terminating in two hands, each with five fingers in a mirror-image configuration. But the image-generation ML models don’t have any such knowledge - they just have the images they’re given and they don’t have any of the meta-knowledge that you and I use to make sense of the ambiguous images.

And then just to make things extra confusing, there is a lot of artwork out there that deliberately distorts physiology (e.g. https://ring.cdandlp.com/fast54/photo_grande/115298867.jpg - how many fingers does the caricature of Ian Anderson, on the left, have?). I’m sure all kinds of images like that are also in the training data, but can you train a neural net to learn the artistic style without also picking up wrong facts (like the number of fingers a person should have)? I don’t think the tech is close to being there.

People and companies with a product to sell always exaggerate the product’s pros and try to pretend the cons don’t exist. That’s marketing. I’m not the least bit surprised that “AI” vendors are doing it.

With most products (e.g. when someone claims that their car, or beer or deodorant spray is going to make you irresistible to the opposite sex), we assume that this is taking place and we usually ignore the hype. And I’m sure the public will soon get just as jaded with respect to generative AI, as its failures make more and more headlines.

And yet if you ask a 5-year-old how many fingers are on each hand, you will always get the correct answer.

The world is full of confusing images. That’s why scientists don’t gain intelligence by casual web surfing. If scanning Google Images gives your algorithm embarrassingly dumb results, then you might consider tweaking your algorithm to tap into the trillions of searchable facts, also online.

For example, if you Google “how many fingers are on the human hand” worded a million different ways and compile and statistically analyze the results, I bet you will never, ever, conclude the answer is 6.
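
Just to illustrate the kind of aggregation I mean, here’s a toy Swift sketch (the answers array is made up): collect a pile of responses, normalize them, and take the most common one.

```swift
// Toy sketch of that aggregation: the answers array is made up, but the idea
// is to normalize many responses and take the most common one.
let answers = ["5", "five", "5", "5", "six", "5", "5 fingers", "5"]

let normalized = answers.map { answer -> String in
    let a = answer.lowercased()
    if a.contains("5") || a.contains("five") { return "5" }
    if a.contains("6") || a.contains("six") { return "6" }
    return "other"
}

let counts = Dictionary(grouping: normalized, by: { $0 }).mapValues(\.count)
let consensus = counts.max { $0.value < $1.value }?.key
print(consensus ?? "no answer")  // "5"
```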

Still not sold that this is “intelligent”.

These AI models are significantly younger than five years old. We’re in early times still. Let’s see if things get better.

Absolutely correct. Which is why I never call these software products “AI” (except when referencing what other people call them), because there is no intelligence there, not at the level of a 5-year-old, not even at the level of a small animal.

The software generates output based on probabilities derived from its training data and has absolutely zero understanding of anything it processes.

These products are much more accurately described as “ML” (machine learning) or “NN” (neural network) software, terms that better reflect what they are actually doing.

As for what people call it, people are and always have been easily fooled. Lots of people have been fooled into thinking primitive chatbots like Eliza are intelligent, so it’s not the least bit surprising that people jump to those same (wrong) conclusions with other natural-language scripting systems (e.g. Alexa and classic Siri). And it’s even less surprising that more people are fooled once vendors integrate ML algorithms and advertise the products as intelligent.

There is actual AI research going on, attempting to design software capable of understanding the problem domain it works within, but these systems are about as intelligent as a mouse, and even then only within their specific problem domains. They’re nowhere near the level of being able to exhibit near-human levels of intelligence in a broad problem domain. (A good book on the subject from 1995 is Douglas Hofstadter’s Fluid Concepts and Creative Analogies. Not an easy read, but a great introduction to the subject.)

But this AI research has absolutely nothing to do with the ML/NN software being sold today as “AI”.

Although I can’t imagine using this feature on a calculator (as opposed to in a spreadsheet)

I’m fairly sure that many (if not most) electronic calculators worked this way, because I remember it tripping me up decades ago.

If anyone works with managed Macs that have Falcon (aka CrowdStrike) deployed, you’ll be glad that 15.3 fixes the AirPlay firewall-blocking issue.
I’m still miffed that Apple released 15.2 with Private Wi-Fi Address set to Fixed instead of defaulting to off. At least you can change that. (It might be nice for home users, but not always for managed networks and Macs.)
It’s like someone delivering your car with the heater on full…during the summer. “Look, I use this, BUT I’ll turn on what I need, not what you think I need.”

I really wouldn’t get hung up on terminology. Whatever the name implies, AI is just software, and it works well for some things and badly for others. In that, it’s just like all other software, though the ways that it works well and badly are quite different because it’s not deterministic like most algorithms.

True…but the fact remains that if the AI can recognize what it’s drawing as a hand…that should be enough not to put 6 fingers on it…because probably more than 95% of all hands have 5. And since it adds fingers…it seems pretty obvious that it knows it is a hand.

And again I totally agree…today’s AI is not intelligence and we are a long way from an actual artificial intelligence IMO. What is called AI is actually just a more sophisticated algorithm than we had 10 years ago running on more powerful hardware that allows some ‘learning’ for lack of a better term…but it’s not anything close to actual intelligence or real learning. But the media hype has turned ‘more sophisticated algorithm running on more powerful hardware’ into the way more hype-able and sexier term AI.