Examining Apple Intelligence

Originally published at: Examining Apple Intelligence - TidBITS

Apple devoted a large part of its WWDC keynote to Apple Intelligence, a collection of new AI-driven features that it plans to introduce throughout the next year in iOS 18, iPadOS 18, and macOS 15 Sequoia.


Great description of what’s coming in Apple Intelligence. I’m especially interested in using Siri to control my Apple devices reliably. I’d love to say, “Siri, read me today’s article in Bloomberg about Nvidia.”

It sounds like most apps will have to be updated to support the new App Intents API. I’m sure Apple’s own apps will be updated right away; we’ll see how fast third-party apps follow.
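For the curious, the developer side of this is fairly small. Here’s a minimal sketch of an App Intent; the ReadArticleIntent name and its parameters are made up for illustration, not any shipping app’s API:

```swift
import AppIntents

// Hypothetical intent for a news-reading app; only the AppIntents types are real.
struct ReadArticleIntent: AppIntent {
    static var title: LocalizedStringResource = "Read Article"

    @Parameter(title: "Publication")
    var publication: String

    @Parameter(title: "Topic")
    var topic: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would locate the article and hand it to speech output here.
        return .result(dialog: "Reading today’s \(publication) article about \(topic).")
    }
}
```

Once an app declares intents like this, Siri (and Shortcuts) can drive it without the app ever being on-screen.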

Voice control could also affect the app economy. Many apps depend on ad revenue. If no one sees an app on-screen because they’re using Siri voice control instead, in-app ads could die, and even more apps could switch to subscriptions.


I don’t care about ChatGPT (and if I did, I would just go to chatgpt.com right now without first needing to spend >$1k for a new 15 Pro) or crowdsourcing data or any of that. All I want is for Siri to understand me better (as in, get what I actually want) and for it to gain the right hooks into the built-in apps I routinely use so it can actually help them help me.

So one day I want this to work:

Hey Siri, stop navigating to Gott’s and instead get me directions to Bongo Burger on Euclid, but make sure you stop me at an ATM on the way there. … No wait, I have enough cash, forget the ATM. After Bongo get me to West Coast Sporting Goods without taking the freeway.

Every 12-year-old would understand what I’m trying to do, and Maps can handle such navigation just fine right now (albeit requiring a whole bunch of taps and typing, which I’m not going to do while driving), but present-day Siri is hopelessly lost with that. All I’d need is for Siri to get what I’m saying and then be able to set up Maps to get it done. If come September I get iOS 18 and it properly deals with the above, I will be ecstatic.
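For what it’s worth, the Maps side of that request is already expressible in code; the hard part is Siri turning the speech into it. A sketch using MapKit, with placeholder locations (highwayPreference shipped in iOS 16):

```swift
import MapKit

// “Get me to West Coast Sporting Goods without taking the freeway,” roughly.
func directionsAvoidingFreeways(from start: MKMapItem,
                                to destination: MKMapItem) async throws -> MKRoute? {
    let request = MKDirections.Request()
    request.source = start
    request.destination = destination
    request.transportType = .automobile
    request.highwayPreference = .avoid  // the “no freeway” part

    let response = try await MKDirections(request: request).calculate()
    return response.routes.first
}
```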

OTOH if that still doesn’t work, but instead I’ll be able to write some silly nonsense and get ChatGPT-powered Apple Intelligence to make it sound smart (while still being wrong), again at the expense of at least a new iPhone 15 Pro, I’ll be excruciatingly meh. Your move, Apple.

An interesting article, and I’m looking forward to trying Apple Intelligence features. I was somewhat disappointed to read elsewhere that only the very newest iPhones will support all the features and that even my iPhone 13 Pro is too old for some of them. However, my M1 iPad from 2021 is apparently OK.

It is funny how Apple is trying to hijack the abbreviation AI. 🙂

So what is better about Apple Intelligence than using ChatGPT directly? I also have enough options for image creation without the silly restrictions of Image Playground.

Siri needs to be improved. It fails badly with mixed languages in particular, which I, as a developer, use all the time. The Apple devs can’t even fix simple problems: every couple of weeks or so, Mail still gives me reminders I did not set.

@doug2 Ben Thompson speculates that Apple’s LLM requires 8 GB of RAM to run. If you look at the phones that support Apple Intelligence, the cutoff is 8 GB of memory.
John Gruber asked John Giannandrea, Apple’s SVP of Machine Learning and AI Strategy, about this on The Talk Show, and he didn’t deny it.
Every Mac with an Apple silicon processor has 8 GB or more.
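If the 8 GB theory is right, the check itself is trivial; here’s a quick sketch of reading installed RAM (the 8 GB floor is speculation, not an Apple spec):

```swift
import Foundation

// physicalMemory reports bytes; 8 GB is the rumored, not official, cutoff.
let ramGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824
print(String(format: "%.1f GB of RAM: %@", ramGB,
             ramGB >= 8 ? "meets the rumored cutoff" : "below the rumored cutoff"))
```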


I watched an interview with The Woz yesterday (https://www.youtube.com/watch?v=CxlOdljFu6g&t=186s) where he discussed this. He seems excited, saying it approaches the Knowledge Navigator. He also said that HE has AI: Actual Intelligence! Go Woz!

Thanks for the article. The things that particularly interest me are the transcription features (lots of oral history interviews that I won’t have to listen to word for word), the summarization features (triaging scholarly articles that don’t have abstracts), the automated searching and sorting features in Photos and Mail, and a better Siri.
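On transcription, Apple’s existing Speech framework can already transcribe audio files on-device, and presumably the new features build on something like it. A minimal sketch:

```swift
import Speech

// Transcribe a recorded interview without sending the audio off-device.
func transcribe(fileAt url: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else { return }
        let request = SFSpeechURLRecognitionRequest(url: url)
        request.requiresOnDeviceRecognition = true  // keep the oral history local
        recognizer.recognitionTask(with: request) { result, _ in
            if let result, result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```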

If Apple can take advantage of AI and integrate it thoroughly, but in a disciplined way, into their systems, I think this will be very helpful indeed.

The focus on your personal context is the key difference: the AI chatbots are very good at global knowledge and very poor at knowing anything about you that would inform or improve the results.


I’d love to have location-relevant Siri requests: in Hawaii, where I live, Siri is hopeless with any address, directions, or anything not spelled the “Siri way.” I’m sure the same is true for many First Nations place names as well. I’m also aware that everything everyone everywhere has said to Siri has been recorded, giving Apple a data advantage much like the one Tesla has with so many cars driving on so many roads, uploading every night. That could make a “local Siri” much more intelligent, since it would know how you speak, to whom, and even in which languages. Hopeful, yet I still have many texts that read “Siri sucks.”

While ChatGPT can write code, I’m hoping Siri/AI will be able to create Shortcuts. I find it maddening trying to figure out how to create what IMHO should be a simple Shortcut.
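Agreed. The closest thing today is developers pre-wiring intents into Siri and Shortcuts with App Shortcuts, so users never have to open the Shortcuts editor at all. A sketch, with an entirely made-up MakeEspressoIntent:

```swift
import AppIntents

// Hypothetical intent; only the AppIntents types and initializers are real.
struct MakeEspressoIntent: AppIntent {
    static var title: LocalizedStringResource = "Make Espresso"
    func perform() async throws -> some IntentResult { .result() }
}

// Ships a ready-made shortcut with spoken trigger phrases.
struct CoffeeShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: MakeEspressoIntent(),
            phrases: ["Make espresso with \(.applicationName)"],
            shortTitle: "Make Espresso",
            systemImageName: "cup.and.saucer"
        )
    }
}
```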


In that interview Giannandrea said:

“So these models, when you run them at run times, it’s called inference, and the inference of large language models is incredibly computationally expensive. And so it’s a combination of bandwidth in the device, it’s the size of the Apple Neural Engine, it’s the oomph in the device to actually do these models fast enough to be useful. You could, in theory, run these models on a very old device, but it would be so slow that it would not be useful.”

which makes it sound a lot more like performance than RAM. That said, I still think RAM is Occam’s razor here. I wonder why Apple is beating around the bush on this one. Perhaps, after years of snark about skimping on RAM, they didn’t want to admit that, snark or not, the critics were right, being essentially more forward-looking than Apple itself had been. Federighi, at least, did appear more willing to hint at RAM, according to this MR piece.
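The RAM theory is at least easy to sanity-check with arithmetic. Assuming, as has been reported but not confirmed by Apple, an on-device model of roughly 3 billion parameters:

```swift
// Back-of-envelope: memory for model weights alone at a given quantization.
func weightFootprintGB(parameters: Double, bitsPerWeight: Double) -> Double {
    parameters * bitsPerWeight / 8 / 1_000_000_000
}

print(weightFootprintGB(parameters: 3e9, bitsPerWeight: 16))  // 6.0 GB at fp16
print(weightFootprintGB(parameters: 3e9, bitsPerWeight: 4))   // 1.5 GB at 4-bit
```

Either way, the weights and the inference working set compete with the OS and every running app for unified memory, which makes an 8 GB floor plausible even if raw speed also matters.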

The interview was really meh though IMHO. Gruber asked:

“So it’s not a scheme to sell new iPhones?”

to which Joswiak answered:

“No, not at all. Otherwise, we would have been smart enough just to do our most recent iPads and Macs, too, wouldn’t we?”

(good answer)

But heck, lobbing softballs much? I mean, seriously, Gruber, what the heck would you expect their answer to such a question to be? “Aw shucks, you got us. Sorry, man. But hey, can’t blame a guy for trying, right?” Not exactly iJustine levels of brown-nosing here, but it’s sure starting to feel a bit close. LOL 😆

Adam’s assessment is very helpful and sober. It tells me what I wanted to know, and he understands why professional writers are critical. I’ve been making my living writing about science and technology for decades, and yes, I’m critical.

I agree that Siri has been a disappointment and seems to have gotten worse. I was a fairly early adopter of computers, so I learned to search by looking for particular names or words, which a simple word search can handle cleanly and easily. That’s what I want from a search, yet Siri can’t find things I know are somewhere on my Mac. I have the same problem using Google and other search engines on the net, and it’s getting worse. With Google, it’s painfully obvious that searches are slanted toward its advertising business. With Siri, the problem is that it seems to want a sentence rather than a name or quote to search for. I don’t think AI will make this better unless it lets you request a particular type of search, and I don’t know if that’s possible.
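To illustrate the difference, a literal word search is deterministic and dumb in the good way; it either matches or it doesn’t. A toy sketch with made-up filenames:

```swift
import Foundation

// Exact, case-insensitive matching: no guessing at intent, no ranking games.
let documents = ["laser_textbook_ch3.txt", "fiber_optics_notes.md", "kids_book_draft.txt"]
let hits = documents.filter { $0.localizedCaseInsensitiveContains("fiber") }
print(hits)  // ["fiber_optics_notes.md"]
```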

One thing I would worry about with writing tools is that they are likely to be “one size fits all,” with a limited number of modes of editing. I have written books for kids and I have written textbooks on lasers and fiber optics, and I write very differently for the two audiences. Likewise, a physician would use professional terminology talking with another doctor, and speak or write differently to patients. So Apple Intelligence tools may be useful in dashing off notes to friends and family, but I would not expect them to be suitable for editing a scholarly paper or a big report on your big project for your boss.

It’s going to be interesting to watch, but the current hype level worries me. Look at the growing skepticism about self-driving cars, which have fallen far behind the original projections and still keep running into fire trucks with their lights flashing, which is an AI failure.

Yes and no. The chatbots are pretty good at tailoring their output to different styles and reading levels now, but that’s for generated text. I’ve noticed that Grammarly’s editing suggestions tend to stay in one tone, so some effort or settings may be required to get these tools to change tone.

I’m becoming quite put out with both extremes. No, AI won’t be taking your job, and no, Boston Dynamics robot dogs armed with guns won’t be roaming the streets. It doesn’t do anything on its own.

Simultaneously, every prediction Elon Musk and Ray Kurzweil make seems to be cluelessly optimistic or, in Musk’s case, more aimed at trolling a competitor or the world in general.

AI is just technology, and like all technologies, it will be used by humans for good and ill.


I’m with you on that.

I would much rather see clear-eyed, impartial analysis than doomsayers or singularity optimists. And we need to get serious about climate change and the impact of the high energy demands of large language models.

I’ll believe it when I see it. Developing code WELL requires a deep understanding of the problem, and generally, AI is not yet at that point of understanding.

Maybe AI can “write code” today, but if it’s based on LLMs, the result is probably extremely buggy. Code that compiles is not the same as code that works, does something useful, or means anything.

Never forget what an LLM is actually doing. It has no understanding of your problem, nor is it “writing” anything.

It is (effectively) using a massive database of probabilities in order to generate a sequence of words and phrases that are statistically likely to follow each other.

So if you ask it to write code (or anything else), you will only get something that was previously written by a human, or a mash-up of stuff written by multiple humans. But there’s no guarantee that the result will be correct, or even coherent.
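To make that concrete, here’s a toy version of the core loop, with a hand-written probability table standing in for a trained neural network; real models condition on far more context, but the principle is the same:

```swift
import Foundation

// Pick each next token according to its probability of following the last one.
let bigram: [String: [(token: String, p: Double)]] = [
    "func":   [("main()", 0.6), ("test()", 0.4)],
    "main()": [("{", 0.9), ("->", 0.1)],
    "{":      [("return", 0.5), ("print", 0.5)],
]

func nextToken(after token: String) -> String? {
    guard let candidates = bigram[token] else { return nil }
    var r = Double.random(in: 0..<1)
    for (tok, p) in candidates {
        r -= p
        if r < 0 { return tok }
    }
    return candidates.last?.token
}

var output = ["func"]
while output.count < 4, let next = nextToken(after: output.last!) {
    output.append(next)
}
// Plausible-looking, not necessarily correct, e.g. "func main() { return"
print(output.joined(separator: " "))
```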

So if you ask an LLM to write code, you’d better review that code really carefully before trusting it with anything. And I would argue that the effort needed to do that is likely going to be more than the effort you’d need to write that code yourself (assuming you’re a good programmer, of course).

In total agreement!!

What question would you have asked that would have gotten a different answer?

This is one of the reasons I never try to get interviews with Apple executives. They’re not going to answer anything they don’t want to, and they’re highly practiced at dodging any such questions. So I’d doubt that anything new would come out of such an interview.
