I do not use any AI intentionally.
Being outside the US, I had to change the system language to US English in System Settings to enable Apple Intelligence; it then appears in the Siri settings panel and can be enabled there. I found out how to do this through ChatGPT; this advice was not provided by Apple itself.
I have been using ChatGPT for some time, and I do believe Adam got it right when he described it as an answering function rather than a search function like Google Search (although Google’s AI responses are becoming quite good).
Now that I have managed to get Apple Intelligence working on my Mac, I have tried it out. The Mail sorting and delivery monitoring is quite good, and I am very pleased that Spotlight seems to be able to now find a file on my Mac, but I am indifferent to most of the other offerings by Apple Intelligence. To some extent I would like an Apple Intelligence app that I can just open and ask for responses rather than its embedded approach.
My preference is to continue using ChatGPT, particularly as I can dig into its responses by narrowing or expanding my interrogations.
I haven’t used it at all. One of the uses Apple would be looking at is allowing more natural language to control Siri. Want to set up a more complex trip in Apple Maps? Just tell it what to do. Set up something in Calendar by asking for the first free Saturday in July, etc. Find a photo of X. What a great ad it would make to show someone having a fairly normal conversation with their phone.
This is a conundrum for me. I CAN’T use it on my iMac or MacBook Pro, as they are maxed out at High Sierra and Monterey respectively. However, my iPad mini 7 is at 18.5, while my iPhone is at 17.7.2 (I plan on installing 18.5 in August). So I guess “Haven’t Used It” is the best response.
Something I’m assuming about AI in general is that it won’t work particularly well at first for whatever it’s used for. Unlike programmed tools, which continue to work the same way, AI supposedly learns from your interactions with whatever function it’s supporting, and thus over a few weeks (for fairly frequently used functions) it should become much better at working with you. I also assume I’ve used Apple Intelligence, in that Apple has presumably incorporated aspects of it into its apps. For example, I noticed that shortly after Apple Intelligence was introduced, Apple’s spam filter started filtering out noticeably more emails I’d have wanted in my inbox. If that’s due to Apple adding AI to its spam filtering (the issue I noticed could just be Mail getting worse), I presume that as Mail relearns which emails I move to my inbox or mark Not Junk, and perhaps also how I otherwise interact with my email, that feature will improve (maybe even becoming better than it used to be).
I filled out the whole form, showing I use it more than I thought. However, in order to submit the answers, I had to log into Google. Why? I don’t use anything Google. So I guess the main places I use it are Mail (although I’m not too sure about the categories yet) and Siri (ChatGPT), but not yet Photos. Overall, it hasn’t made a great impact on my computer life.
You only need to sign into Google if you want the ability to save your answers to a partially completed survey before completing it or to revise a completed survey.
I did not login when I completed my survey response.
Ideally, that is the case, but it doesn’t always work that way. There are several well-known phenomena that cause AI models to decrease in accuracy with increased usage. It’s one of the things that can really slow down advancement of AIs. Probably the best known example is model collapse, but there are others.
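For anyone curious, the flavor of model collapse can be shown with a toy simulation (my own illustrative sketch, not anything specific to real LLMs): repeatedly refit a simple “model”, here just a normal distribution, to samples drawn from the previous generation’s fit, and the estimated variance tends to shrink toward zero, so the model gradually loses its diversity.

```python
import random
import statistics

# Toy sketch of model collapse: each generation, a "model" (a normal
# distribution) is refit to a finite sample drawn from the previous
# generation's model instead of from real data. The variance estimate
# is slightly biased low each time, the errors compound, and the
# fitted distribution gradually loses nearly all of its spread.
random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(100):
    samples = [random.gauss(mu, sigma) for _ in range(10)]  # small sample
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)

print(f"variance after 100 generations: {sigma ** 2:.4f}")
# The variance ends up far below the original 1.0.
```

With a larger sample per generation the shrinkage is slower, but the drift is in the same direction; real model collapse in generative models is analogous, with models trained on their predecessors’ output losing the tails of the original data distribution.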
One of the risks of Apple aggressively pushing the “Apple Intelligence” branding is that end users may not always know which features actually use it, so any case where an Apple device doesn’t behave as expected, or exhibits an actual bug, gets blamed on “Apple Intelligence”.
In my opinion, Apple (and other purveyors of AI) would have been better off avoiding talk of “transformations” and “revolutions” while periodically introducing individual features once they are proven to work as intended. Of course, many marketing teams hate that. (I say that as someone who respects and appreciates the importance of marketing when done well. Despite the numerous Dilbert jokes about marketing departments, it is a very difficult job to do well.)
I have never found the “automatic” setting on anything to be very satisfying. Cameras on automatic mostly produce reasonable pictures, but not as good as I can do manually. And sometimes they completely stuff up.
Voice recognition, predictive text and writing tools are similar. Words are offered that are not words I would ever use. Many have a distinct American vernacular (say “going to” and it gives “gonna”). This is because they are generic and do not take account of the individual. Until they can take account of the individual, I cannot see them being helpful in many cases.
AI just seems to be a “better” automatic that still does not produce what I would create myself.
I’m not aware that Apple Intelligence, at least in its current incarnation, enhances Spotlight in any significant way.
There’s a really interesting article in the New York Times Magazine:
A.I. Is Poised to Rewrite History. Literally.
It addresses the complexity of LLM usage better than many current articles and I find myself wondering whether I shouldn’t spend more time really exercising ChatGPT and the like.
As for myself, I’m always open to trying new things, particularly when they’re free, so I’ve tried almost all the Apple Intelligence thingies. Meh.
But with the exception of proofreading, which seems genuinely useful, I find almost all of them rather useless. It is indeed amazing when you search for “red car” in Photos and it finds them. It is not amazing when it misses four obvious ones.
I’ve come to the conclusion that you have to evaluate all this AI explosion by a simple metric: If you hired a bright young person (AKA real money involved) who instantly provided you with grammatical or logically correct reams of information and their product consistently turned out to be 30% wrong, you’d fire them the next day.
The NYT article shows why one shouldn’t wholesale dismiss the LLM tsunami, but what catches my eye is their choice of interviewees. They’re all experts who can verify the data either explicitly or intuitively. What deeply concerns me is the huge, monstrous number of non-expert users of LLMs: they do not know that 30% of the responses are crap. That’s a recipe for disaster, both personal and societal.
Dave
As I filled out the survey, I realized how many third-party (non-Apple) apps I use on my Mac and Apple devices. I use Arc for browsing, DuckDuckGo for web searches, Spark for email, Adobe Lightroom and Topaz PhotoAI for photos, and Excel and Word. So since I don’t use Mail, Safari, and Photos, I had to say “Haven’t Used It” for most of the questions.
Most of these apps have added “AI features,” as Apple has, but I haven’t used them much, apart from the features in the two photo apps.
I also have been using Perplexity for AI-based information searches, and keep being surprised at how often it finds and shows me information (real, not made up) that traditional web searches never found.
I don’t view LLMs and Generative AI to be any more of a threat to society than other communication channels were when they were new. Printed books, the telegraph, the telephone, radio, television, USENET, email, websites, instant messaging, push notifications, blogs, wikis, social media…just about every major advance in how people obtain information caused consternation and, yes, examples of misuse and abuse. But humans do learn and adapt. And let’s face it: most worries and predictions of doom related to new things become quaint and dated with the passage of time. The world survived the publication of books by Charles Darwin, D.H. Lawrence, and Henry Miller, the broadcasts of Elvis dancing, and students looking things up in Wikipedia, no?
I am not sure, but the Spotlight improvement coincided with the ‘new’ Apple Intelligence, and this was alluded to at the recent WWDC.
Interesting observations, both about AI and about marketing. I think the significant thing to consider is that AI isn’t intended to keep reacting to your input the same way it does on first use. The intention (in personal-computing AI) should ideally be to adapt to your own computing behavior to provide added benefits. I don’t know enough about AI to speak to the decrease in accuracy with increased usage that you mention, but even so, expecting AI to produce the same results after weeks of use that it does when you first try it out isn’t an accurate expectation. A person might not like the results on an initial trial but might really like them after continued use. Or, given your caveat, might hate it even more after continued use.
Theoretically, AI’s adaptation to a person’s personal computing style could mean that someone else using that person’s computer might find the AI on it really frustrating, with quite different behavior than on their own computer.
Filled it in with every box ticked as “Haven’t Used It”: in the Netherlands not all options are available, and I do not trust the results of AI chatbots. However, now that I think of it, I do use descriptive searches in Photos to my advantage. It kind of works. Not perfect, but better than finding the date of some past event in Calendar and then hoping to find an associated picture.
Interesting point!
It’s not uncommon today to be frustrated when sitting down in front of a computer owned by someone with very different user interface preferences from your own. I notice it particularly when someone else has very different trackpad settings on a laptop, e.g., “natural scrolling”, gesture customizations, etc. I suppose eventually “AI” customizations will follow your account around, rather than be device specific.
I use Genmoji occasionally and it’s both fun and ‘useful’. Sometimes there’s not quite the right emoji I’m looking for or I want to send something intentionally humorous. Genmoji is quick and easy and does the job. This is where I think AI works well – specific and well integrated into existing tools.
I too am still struggling to find a use case for Apple Intelligence for my personal needs. I’ve seen examples of Apple Intelligence working in videos but fully expect it not to work as well in the real world (for me).
LLMs provide much broader applicability than expert systems ever did. And while LLMs can provide wildly inaccurate information at times, and hence need to be used with caution, expert systems were narrow by design and never really lived up to early claims of efficacy. I don’t think LLMs are going to be relegated to low-level jobs. They are evolving and improving rapidly. Still with many serious warts, but nonetheless quite useful once you learn how to talk with them.