Apple’s January 2025 OS Updates Enhance Apple Intelligence, Fix Bugs

You’re comparing two different models. I’m using the hand-detection model as an example to show how training images don’t always show all fingers, and that there is often ambiguity in the images.

But that model isn’t generating new images. That’s something completely different, and I don’t think those apps know anything about anatomy. They just have probabilities that certain patterns of pixels appear near certain other patterns.

And if there is ambiguous training data (e.g. two hands held close together with the wrists obscured or out of frame), those probabilities could easily produce fingers where they don’t belong.

I think you’re getting so upset over this because you are assuming a level of understanding that simply does not exist.

Maybe comparing “AI” to mice is unfair (to mice). This article discusses the intelligence of an amoeba!

Apple does not offer me the option to upgrade my iPad Air 4 from 17.7.2 to 17.7.4; the only option is 18.3, which I don’t want to install because it will offer me nothing (except possibly new bugs), as none of the AI options will be available to me (I live in Europe and my devices are too old anyway). The same thing applies to my iPhone 12.
IMHO this Apple policy of ‘forcing’ users to upgrade to the latest major version of the OS is bad and irresponsible. Now people like me, who don’t want the hassle and risk of a big upgrade that provides them no benefits, have to ‘accept’ using a device that is less secure than it could be.

Actually, no. Most of the time the answer will be 5, which is wrong. A human hand has 4 fingers and a thumb. Usually, that is; some people manage to lose a few during their life :wink:

I knew this one was coming :slight_smile:

Well then hey, maybe that explains it. The AI read online that the human hand has 5 fingers, so it gave me 5 fingers… and a thumb… but only on one hand… ;-)

They certainly will.

Meanwhile, I will spare you all the other ways I continue to be underwhelmed. But here’s one from yesterday: I wasn’t sure Maps via CarPlay was taking me the right way; it almost seemed like I was headed in the complete opposite direction. So, to avoid a ton of fiddling with my screen while driving, I asked Siri/AI a simple question: “what direction am I heading?”, hoping to hear “north”. But it had no idea what I was talking about.

And that’s one of so many stupid, simple things I should be able to accomplish with my voice that don’t work at all. Things that basic Siri should have been able to answer. And now the marquee feature of 2024 was Apple AI, and still nothing, even with my iPhone 16 Pro.

Sure, but that just goes back to a post I made a few days ago - we all know that Siri is bad, has been for far longer than it should have been, and some improvements are finally coming with 18.4 in several weeks, with reportedly a complete rewrite as an LLM coming with iOS 19 sometime in the first half of 2026 (so, likely iOS 19.4 if iOS update numbers progress as they have for a few years.) So continued reporting of its failures to me is just old news, “Generalissimo Francisco Franco is still dead”. Let’s see what improves with 18.4 / 15.4.


As for me, I’m holding out for a Duke feature before turning on Ap.Intel.
;-)


Nah, let’s not get people’s hopes up when we know better. Adding some LLM sauce to Siri will not teach her how to open a settings panel or how to switch view mode in Maps. LLM sauce is here to placate the markets and say “hey we’re doing the ChatGPT thing too”, but I can pretty much guarantee you it will do nothing to make Siri more useful to me and many others like me. That will likely take 19.x, which isn’t due for at least another half year.

If I want a chatbot I can already use free LLMs all day long. What I cannot do is replace iOS Settings app with a free (and trustworthy) clone with voice support or hack Maps to implement reasonable voice support for safe hands-off driving. How about Apple gives me what only Apple can and stops trying to play catchup with stuff others have done for longer and do better? Focus, Apple.


Exactly. It doesn’t take LLM or even AI at all to give us the basics that are grossly missing. What it does take is corporate vision and attention to customer needs, both of which are clearly lacking.

I disagree. To me, continuing to tell me to wait and see what comes out next, and never does, is old news. In this case, I waited for the feature that was announced and released and tried it and reported on its shortcomings. Call it a product review.


I’d say that Apple isn’t paying attention to a specific customer segment’s needs (here on TidBITS, people who are closer to Apple’s former hobbyist focus rather than the current lifestyle and luxury focus) that it has decided doesn’t drive significant growth in its business.

I miss the summaries and hope they “fix” it soon. (I didn’t think they were terribly broken.) Now if I have a collection of notifications from, say, The NY Times, it only shows me the text from the last notification. The summary for me was very good about summarizing each notification in the group until 18.3. It was pretty common for me with 18.1 and 18.2 to dismiss the groups without expanding the notifications from reading the summary.


I guess we can disagree to disagree…if the AI is smart enough to recognize hands…it should be as smart as a 5 year old who knows there are only 5 fingers on a hand. Proves my point that AI isn’t nearly as smart as proponents claim.

the one bit of a/i (aka apple idiocy – not faulting apple here: entirely driven by the money morons on wallstreet) enabled on my mac is the “clean up” button in apple photos. pinged support asking for a way to remove that. also how to delete the related plugin that i accidentally downloaded. so far the answer is you can’t.

anyone have a better answer?

Again, you’re assuming a level of intelligence (meaning anything greater than zero) that doesn’t actually exist.

The hand-detection model doesn’t know what a hand is or what a finger is or anything like that.

It is a “segmentation” model. For each pixel in the image, it computes the probability that the given pixel belongs to a “hand” as defined by the training data (a folder full of images where some human has manually highlighted all the pixels that belong to hands). The application software using the model then takes that output and applies a threshold (maybe 75%) and reports all the pixels whose probability is greater than that threshold as a “hand”. But without any knowledge whatsoever about what a hand is.
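To make the thresholding step concrete, here is a minimal sketch (in Python, with made-up probability values; this is not Apple’s or any real model’s code) of how application software might turn a segmentation model’s per-pixel output into a “hand” mask:

```python
# Illustrative sketch: converting per-pixel "hand" probabilities from a
# hypothetical segmentation model into a boolean mask via a threshold.
# The probability values below are invented for demonstration.

def threshold_mask(probabilities, threshold=0.75):
    """Mark each pixel as 'hand' if its probability exceeds the threshold.

    The model only outputs numbers; the notion of 'hand' exists solely
    because the training images were labeled that way.
    """
    return [[p > threshold for p in row] for row in probabilities]

# A tiny 3x3 "image" of hand probabilities.
probs = [
    [0.10, 0.80, 0.92],
    [0.05, 0.76, 0.88],
    [0.02, 0.30, 0.60],
]

mask = threshold_mask(probs)
# Only pixels above 0.75 are reported as "hand"; the rest are background.
```

Note that nothing in this step involves any concept of a hand; changing the threshold just changes which pixels get reported.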

That exact same segmentation model can be used to detect cars, park benches, traffic lights, bicycles and anything else you can think of. And the model doesn’t know what any of those objects are. It is simply trained with thousands (or millions) of images, where the target objects’ pixels are highlighted, in the hopes that this will let it make an accurate prediction when applied to new images that it hasn’t seen before.

Any “proponent” that claims the software has any intelligence at all is either a liar or someone who doesn’t actually understand what he’s talking about. Or both.


Yes! My strong preference is to spend each year bothering only with security updates, and I do want to get them.

I was permitted to update my 2018 mini to v14.7.3 instead of v15.3.
Yet Apple refuses to allow my iPad Air (M2) to update to v17.7.4 rather than v18.3.

I’d be grateful if someone could explain why the Apple silicon iPad is treated differently to the Intel Mac mini.

I am bracing myself for the iOS 18 upgrade on my iPhone. I too would love to have yet another stopgap patch.

But, to be honest, the longer I hold out, the harder it’s going to be to upgrade. Ever thus. I would rather not be so conservative, but the real issue is identifying all the changes that have user-visible controls before starting, and Apple makes this needlessly difficult. The PDF they’ve published with the iOS 18 changes was not updated with more recent updates. So I’m going to have to do the upgrade “blind”, then go delving into every corner of the system to see what’s changed, methodically. This is not my idea of time well spent. Sigh.

I’ve never called it intelligent at all…it’s just a more sophisticated algorithm than we had a couple years back. However…I think it’s probably a reasonable assumption that it recognizes the thing in the picture as a hand…and if it does that, then it should know that hands have 5 fingers in total. If it didn’t recognize it as a hand, it wouldn’t be able to add fingers to complete the picture.

But we’re talking in circles here…humankind may eventually figure out how to develop an actual artificial intelligence with self awareness…but it’s going to take a lot more computer horsepower than any of these pseudo intelligences have. I most definitely agree with the second statement…AI is today’s buzzword but it’s still just hype and overinflated claims.


That’s not a reasonable assumption.

These models don’t say “this is a hand”. They return data like “this pixel has a 54% chance of being object type 1, a 10% chance of being object type 2, a 3% chance of being object type 3, etc.”.

The fact that type 1 is a hand, type 2 is a bicycle and type 3 is a helicopter is simply an artifact of the fact that pixels in the training images were tagged this way. The model doesn’t know what any of these objects actually are.
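A tiny sketch of what this looks like in practice (hypothetical numbers and class names, not any real model’s API): the model hands back a probability per numeric class id, and the human-readable labels only exist in a lookup table that people attached to the training data.

```python
# Hypothetical per-pixel output from a segmentation/classification model:
# a probability for each numeric class id. The names below are just tags
# humans attached to the training data; the model never sees them.

CLASS_NAMES = {1: "hand", 2: "bicycle", 3: "helicopter"}

def most_likely_class(pixel_probs):
    """Return the class id with the highest probability for one pixel."""
    return max(pixel_probs, key=pixel_probs.get)

pixel = {1: 0.54, 2: 0.10, 3: 0.03}
class_id = most_likely_class(pixel)
label = CLASS_NAMES[class_id]
# label is "hand" only because of the tag mapping, not because the model
# has any concept of what a hand is.
```

Swap the entries in `CLASS_NAMES` and the exact same numbers would be reported as bicycles; nothing in the model changes.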
