How Siri Could Become the Mac’s New Help System

Originally published at: How Siri Could Become the Mac’s New Help System - TidBITS

Using modern software is often less about doing the work than about figuring out how to do the work. Even those of us with decades of experience regularly find ourselves leaving an app to search the Web, skim documentation when it exists at all, read forum threads, or watch YouTube videos to discover how a feature works, where an option lives, or what a cryptic setting actually does. The more powerful the app, the more likely this is to happen. I’m not a graphics guy, so as much as I like having access to all the capabilities in Canva’s Affinity app, nearly everything I want to do requires research (see “Canva’s Affinity Combines Photo, Designer, and Publisher into One Free App,” 31 October 2025).

TidBITS exists largely because of that gap. For decades, we’ve explained how macOS and Mac software works, what features really do, and how to accomplish tasks that aren’t obvious from the interface. We started TidBITS Talk in part to provide answers for more specific concerns, with users asking (and answering!) questions that have caused consternation. Many of these questions aren’t inherently difficult, but built-in help systems are seldom useful, and users often don’t know how to frame a question well enough to find an answer through search. Sometimes, the easiest approach is simply to ask a smart friend. The problem is not ignorance or lack of effort—it’s discoverability.

To be clear, most tech support questions have good answers, just not ones the system can provide directly. The information is scattered across developer documentation, release notes, blog posts, forum replies, and half-remembered advice from friends. Users are forced to assemble understanding from fragments, often without knowing which sources are trustworthy or if the advice applies to the version of the app they’re actually running. The process is inefficient, error-prone, and increasingly out of step with how capable our devices have otherwise become.

That good help is hard to find isn’t a new problem, and Apple has tried to solve it before. What is new is the opportunity to approach it differently. In this article, I argue that Apple Intelligence and Siri could finally fulfill a long-standing promise: letting the system itself explain how its apps work, accurately, contextually, and without forcing users to leave what they’re doing.

Apple has already taken a step in this direction. Starting in macOS 15 Sequoia, Siri gained some awareness of macOS help content and can answer some questions about macOS and bundled Apple apps without deferring to a Web search. (When it does defer to a Web search, it often presents Windows or Android answers. Embarrassing!) That change matters less for what it accomplishes today than for what it signals: Apple is trying something new with system-level help.

However, the current implementation also demonstrates why this problem remains unsolved. Siri’s responses are inconsistent and don’t extend beyond Apple’s own apps, so few users have gotten in the habit of asking. System-wide help systems succeed only when they are broadly and reliably helpful; sporadic insight into a subset of apps isn’t enough to change behavior.

Previous Help Systems

Apple has long recognized the need for a system-wide help system and has made repeated, serious attempts to address it. One of the earliest and most ambitious was Balloon Help, introduced with System 7.0 in 1991 (see “What Good is the Help Menu?,” 29 July 1991). When enabled, hovering over interface elements revealed explanatory text describing each button, menu item, or control. By System 7.5, Balloon Help was often paired with Apple Guide information, which went further by offering task-oriented walkthroughs (see “Future System Software,” 28 March 1994). Together, they represented a coherent effort to make the system explain itself in context.

Balloon Help was genuinely forward-looking but ultimately collapsed under its own weight. It was slow and cumbersome to toggle, which quickly spawned third-party utilities that made switching it on and off easier. It also often obscured the very interface it was meant to clarify. More importantly, the utility of Balloon Help depended on developers writing good descriptions, which many did unevenly or not at all. Apple Guide, while conceptually elegant, suffered from the same problem.

In Mac OS 8.5, Apple tried again with Help Viewer, an HTML-based online help viewer (see “Delving Further into Mac OS 8.5,” 26 October 1998) that Apple opened up to developers in the next release (see “Apple Rolls Out Mac OS 8.6,” 10 May 1999). Help Viewer made the jump to Mac OS X, though Balloon Help and Apple Guide did not (see “Mac OS X Report Card: October 2002,” 7 October 2002). Help books shipped within apps, were indexed by the system, and were searchable, all things requested by developers and users.

In theory, Help Viewer provided richer content, consistent presentation, and centralized access. In practice, Help Viewer quietly faded into the background of most users’ everyday Mac experience. People rarely opened Help Viewer proactively, search results were often vague or outdated, and many developers treated help as an afterthought, both because the tooling was awkward and because writing good documentation is genuinely hard. As Web search improved, attention shifted from in-app help to Google results, forum posts, and video tutorials.

What survived from Balloon Help is its least ambitious descendant: tooltips and menu item descriptions. (We also joked that it’s where Apple got the idea for conversation styling in iChat, now Messages; see “Jaguar: Mac OS X Prepares to Pounce,” 6 May 2002). Tooltips are still genuinely useful, and they embody the part of Balloon Help that worked best—brief, contextual hints. But they are intentionally shallow. They explain what a command does in a word or two, not how or why to use it. They improve discoverability in small ways but don’t meaningfully reduce the need to look elsewhere.

The takeaway is that Apple has genuinely tried to solve this problem multiple times over the decades. Developers often produce weak documentation and in-app help, but that’s usually due to limited time, resources, and ability rather than a lack of desire; all developers want to reduce support costs. Even today, Apple continues to address the problem through online user guides, the Tips app, and Siri answering user questions. What has been missing is a help system that can adapt to context, stay aligned with the running version of an app, and draw on multiple sources of knowledge rather than relying on static pages—qualities that Apple Intelligence, if applied carefully, is positioned to provide through Siri.

A More Helpful Siri, Not an Agent

Apple keeps talking about a “more personalized” Siri, but an easier—and likely more helpful—step would be to give Siri deeper awareness of its machine environment so it can explain how to use all available apps and tools, rather than focusing primarily on personal data. Many questions users have aren’t about themselves at all; they’re about the software in front of them. What does this feature do? Where is that setting? Why does this option behave the way it does? How would I accomplish this task?

Answering these questions does not require Siri to take actions on the user’s behalf or attempt to drive interfaces. That approach has historically been brittle, and the recent focus on autonomous agents—such as the security issues now surrounding OpenClaw—underscores how quickly complexity and risk can outweigh practical benefit. For now, the safer and more valuable role for Siri lies in explanation, not execution.

Explanation is entirely achievable today. A more helpful Siri that can answer questions in plain language—naming features, clarifying concepts, and describing where to look—would address a large fraction of users’ everyday friction. Even something as simple as identifying what a feature is called can be enough to unlock further exploration, whether through menus, settings search, or additional documentation.

In addition, explaining software helps users build mental models of how an app works; performing actions for them bypasses that understanding. For tasks users expect to repeat, learning how something works is always more valuable than having it done once on their behalf.

Just as importantly, explanation fails more gracefully. If Siri’s role is to advise, being incomplete or occasionally wrong is an inconvenience, not a catastrophe. Users can verify, adapt, and continue. That kind of failure is tolerable—and familiar to anyone who has read an outdated article or received bad advice on a discussion forum. By keeping Siri on the explanatory side of the line, Apple can make it useful without eroding trust when it inevitably gets things wrong.

Where Will a More Helpful Siri Get Its Information?

For Siri to explain how apps work in a way users can trust, it must draw on multiple sources of information. No single repository of knowledge is sufficient, and pretending otherwise results in incomplete or even misleading answers. A more helpful Siri should rely on a three-layer model that acknowledges that different kinds of information are authoritative in different ways:

  • Layer 1: Developer-Provided Documentation: The most important source is documentation provided by the developer and bundled with the app. When developers take the time to describe features, workflows, and constraints in a structured way, that information should be treated as canonical. It reflects how the app is intended to work, applies to the version installed on the user’s device, and eliminates the guesswork that comes with using external sources. If Siri is going to explain the impact of a setting or how a feature functions, documentation from the horse’s mouth is the place to start.
  • Layer 2: Implicit Information: Some questions don’t require documentation at all, just orientation. Users often want to know what a feature is called, where it lives, or whether a capability even exists. macOS already has access to a limited but reliable set of structural information about every app it runs: menu hierarchies, toolbar items, keyboard shortcuts, accessibility labels, automation dictionaries, supported file types, and configuration panes. This information isn’t deep, but it is always present and version-accurate. Siri should be able to use it to answer basic “what” and “where” questions, identify the correct terminology, and point users in the right direction, reducing friction without pretending to replace real documentation.
  • Layer 3: External, User-Generated Knowledge: In reality, developer documentation is often incomplete, overly abstract, or focused on how features work rather than how people actually use them. That’s where Web-based, user-generated knowledge becomes essential. Forum posts, blog articles, and troubleshooting guides often capture how software is actually used, not just how it is intended to work. They reveal edge cases, workarounds, and limitations that developers either don’t document or prefer not to acknowledge. Developer documentation rarely says that an app can’t do something, whereas users are quick to explain what doesn’t work and how they cope. User-generated knowledge is less authoritative, but more candid and often more useful. It shouldn’t replace official documentation, but it’s essential when local sources are incomplete or lack task-oriented guidance.

Siri needs to draw from each of these layers appropriately. Developer documentation should generally carry more weight than a random forum comment, but a detailed blog post describing an expert workflow is likely more useful than a terse official description. In its responses, Siri should indicate whether specific details reflect documented behavior or common practice, and any external information should be cited and linked so users can assess its relevance.
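To make the weighting idea concrete, here is a minimal sketch of how a help engine might rank answer snippets drawn from the three layers. Everything here is an illustrative assumption, not a real Apple API: the `Snippet` fields, the numeric weights, and the labels are invented. The point is simply that layer authority and task-specific detail can trade off, so a detailed community post can outrank a terse official description while still being labeled as common practice rather than documented behavior.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str      # the candidate answer text
    layer: int     # 1 = bundled docs, 2 = app structure, 3 = web/community
    detail: float  # 0..1, how task-specific the snippet is
    source: str    # citation shown to the user

# Base authority by layer: bundled docs > app structure > community.
# These weights are arbitrary illustrative values.
LAYER_WEIGHT = {1: 1.0, 2: 0.8, 3: 0.5}

def rank(snippets):
    """Order snippets by authority plus detail, so a highly detailed
    community post can outrank a terse official description."""
    return sorted(
        snippets,
        key=lambda s: LAYER_WEIGHT[s.layer] + s.detail,
        reverse=True,
    )

def answer(snippets):
    """Render ranked snippets with provenance labels and citations so
    users can judge documented behavior versus common practice."""
    lines = []
    for s in rank(snippets):
        label = "documented" if s.layer < 3 else "community"
        lines.append(f"[{label}] {s.text} ({s.source})")
    return "\n".join(lines)
```

A terse official note (layer 1, low detail) scores 1.1 under these weights, while a step-by-step blog workflow (layer 3, high detail) scores 1.4 and surfaces first, still tagged as community knowledge with its source attached.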

Getting Developers on Board Through Xcode and AI

A strength of this proposal is that it benefits from developer participation without depending on it. Our more helpful Siri can still fall back on the structure of the app and widely available community knowledge to answer many basic questions. These Layer 2 and Layer 3 information sources ensure that there’s always some sort of answer.

That said, developer participation would dramatically improve the quality, precision, and reliability of Siri’s answers. When developers describe their app’s features, particularly in a structured way, that information can underpin explanations about how the app is intended to work. How then can we encourage developers to write in-app documentation when such efforts have failed in the past?

First, we have to acknowledge that developers who document their apps typically do so on the Web, using whatever tools fit their workflow, because that’s where documentation has the clearest payoff in supporting existing users and providing pre-sales information to potential buyers. Asking developers to write separate help content just for Siri would repeat the mistakes of Apple’s earlier help systems. Instead, Apple should support developers in writing documentation that can be reused for multiple purposes.

Apple’s Xcode development environment is well-positioned to help here with the help of AI. Xcode already understands enough about an app’s structure to generate a useful starting point: a map of features, their names, and where they live in the interface, supported where possible by comments in the code. What if Xcode automatically drafted this baseline and then prompted developers with focused questions about intent, usage, and limitations—information only the developer can provide? (I’ve found that asking a chatbot to ask interview questions, one at a time, and answering them via dictation is a good way to collect thoughts.) The result would be structured information that the developer could then refine.

From that source, Xcode could produce bundled, versioned help for Siri to consult locally and export clean, search-friendly Web documentation for online discovery. Developers who opt in would get better Siri answers, fewer repetitive support questions, and cleaner onboarding for new users. That’s the carrot that Apple Intelligence is positioned to dangle.
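To make this concrete, here is one possible shape for the structured help information Xcode could bundle with an app. Every detail is invented for illustration; Apple has announced no such format, and the file name, field names, and values are all assumptions.

```json
{
  "helpVersion": "1.0",
  "appVersion": "3.2.1",
  "features": [
    {
      "name": "Smart Filters",
      "location": "View > Filters",
      "summary": "Saved searches that update automatically as your data changes.",
      "intent": "Keep frequently used searches one click away.",
      "limitations": ["Not synced across devices"]
    }
  ]
}
```

In a scheme like this, Siri could consult the bundled manifest locally for version-accurate answers, while Xcode rendered the same source into search-friendly Web documentation.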

Making Apps Deeply Discoverable

The payoff for users from an Apple Intelligence-powered help system should be obvious. A more helpful Siri that can reliably answer questions about how apps work would reduce the time users spend searching the Web, skimming forum threads, or watching videos just to solve small, specific problems. It would also slow the steady stream of basic questions that flood support forums, freeing those spaces for deeper and more interesting discussion.

More importantly, good help leads to better understanding. When users can easily learn what features exist, what they’re called, and where they live, they develop stronger mental models of the software they use. That confidence makes it easier to explore new capabilities, use advanced features, and become expert with complex apps.

Apple has been circling around this problem for decades through various help systems, none of which have quite solved it. With today’s large language models, richer app metadata, online resources, and tighter system integration, it’s now possible to provide accurate, contextual help directly within apps.

The real question is whether Apple is now willing to treat help as a first-class operating system feature open to all developers, rather than an afterthought or something limited to bundled apps. If it does, Apple Intelligence could give Siri a genuinely useful new role—not as an agent or a personal assistant, but as the help system Apple users have needed all along.


All software suffers from this. Good software developers are rarely good tech writers or graphic designers. And vice versa.

You rarely get really good documentation from small projects, because they can’t afford a team of experts who can write the documentation. And even the big companies often don’t bother, because they consider documentation to be an added expense that doesn’t add value to a product.

No amount of OS-level features can ever change this.

A chatbot isn’t likely to get better Layer 1 developer-provided documentation, because it will be subject to the same problems that cause all other help/documentation systems to fail.

The other two categories may do a lot better, but the layer-3 data won’t exist until the product has been around long enough and has gotten popular enough for a pool of knowledge to build up.


Assuming Apple follows Adam’s suggestions, I think the part about “Getting Developers on Board Through Xcode and AI” offers a solid way to improve layer 1 in a win-win way for both Apple and its developers. I know that I have numerous pieces of software that are much more powerful and useful than the way I normally use them; often because it’s too hard or time consuming to figure out what they are and how to use them.

Another thought, and this would likely be down the road a bit; a next step might be an intelligent Siri that could watch how I use an app and, when appropriate, offer a suggestion on how I might use the app’s features more effectively. The key here for me would be that it “suggest” without trying to do it for me.

Good article Adam!


If this ever comes to pass, please let it be optional — we don’t need an Apple version of Clippy.

With respect to Adam’s article: for the most part, I’ve replaced web searches via traditional search engines with inquiries to an answer engine. When it comes to figuring out software, I can post an inquiry and refine it to get a more precise answer. The engine I use, Perplexity, also provides a source reference for its statements, so I can verify its assertions (it sometimes goes beyond what the claimed source says).

You may have noticed that many of my responses to questions raised on TidBITS Talk provide links to sources. That usually comes from me posing the question to Perplexity and finding a source that provides a more definitive answer than I can write.


Reminded me of a University of Maryland course I took when I was stationed in Germany back in the 80s. It was on Technical Report Writing. Of course, 40-some years later, I don’t remember much of it! Too bad small developers couldn’t take a similar course in their local college extension classes.

Brilliant proposal, Adam. This is a strategic one for Apple to implement, as this approach will take time and require a long-term commitment. Levels 2 and 3 are the kinds of things that can be built soonest. I’ve been impressed with my experiences with Anthropic’s Claude when I need to crack a coding nut. The references and proposed code always beat me digging through my own notes and online searches, and save me from the occasional warring arguments online. I think Levels 2 and 3 are doable now.

Level 1, especially if it’s ‘auto-coded’ through Xcode, is the exciting idea.

In my mind, an analogy: inter-app communication was the carrot that drove (drives?!) AppleScript and the desire for app developers to integrate Dictionaries in their apps. Do all app developers provide Dictionaries? No. Do many? Yes.

So back to the idea of ‘auto-coding’ the Level 1 help system. It might not be adopted immediately by all developers, but will some developers adopt it? I’m betting they will. Developers like Culture Code, Omni, and Bare Bones have solid documentation. Would they be pioneers of this? I hope they would.

The idea that Siri could become the OS and App help system is a worthy experiment for Apple to adopt.

Well done, Adam!


By the way, Adam:

“as much as I like having access to all the capabilities in Canva’s Affinity app, nearly everything I want to do requires research”

It’s not that you’re not a graphics-oriented user. The original Serif applications, Publisher, Photo & Design, while very good in many ways, were hardly transparent; and Affinity is a stew, frankly.

I’ve used Publisher extensively for five years or more, setting a fairly big bimonthly newsletter in it – and use Photo regularly. While I have ready understanding of the fundamental DTP controls in Publisher, there are corners of the app I don’t make sense of; and in Design plenty eludes me. And the ‘Help’ isn’t very helpful…

It’s not just you! :face_holding_back_tears:


This is extremely well-thought-through and correct in every detail. Excellent! I hope Apple listens!


What I am desperately waiting for is some clever application of AI in Apple Mail. I have loads of nested mailboxes that contain my work mails and I spend a lot of time sorting my incoming mail into these mailboxes, so that my Inbox only contains mail that still requires some action. Mail proposes a mailbox in the menu, but this is invariably based only on the sender’s email address. There is a wealth of information available in the subject header that would be much more useful, but which is completely ignored. Most of the time I have to bypass the suggestion and manually scroll through the list of mailboxes, which takes multiple seconds per message.


Great article. I am sure people at Apple read your stuff, so let’s hope they take this on board.

Speaking as someone who has from time to time had to write help, I can confirm that it is genuinely hard, and yes, a skill that not all developers possess. One approach is via ‘use cases’ – what might users want to do in an app, and how to do it. In the days of printed documentation we used to discriminate between a ‘user guide’ – how to do stuff – and a ‘reference guide’ – what all the bits are for.

I have generally found most in-app help all but useless right across the board, and forums (fora?) might give you the answers you need if you are prepared to spend a whole lifetime looking for them. Now we have chatbots, my go-to one being Anthropic’s Claude, and Siri has a long way to go to catch up.

The worst examples I can remember are searches in Microsoft’s on-line ‘help database’, with its 400,000 answers to a simple question. Many of you will know this already, but it’s worth retelling the old joke about M$ help.

An executive is travelling by helicopter from his home to Seattle airport when fog comes down. This is unfortunate, as the heli pilot and his machine are only cleared for clear-air flight. Panic ensues when a tall building looms out of the mist. On inspiration, the pilot gets his passenger to write ‘WHERE ARE WE?’ on a large sheet of paper and hold it to the window while the pilot flies slowly around the top of the building. The guys inside scrabble around and soon hold up their own sheet of paper, which reads ‘YOU’RE IN A HELICOPTER’.

Without hesitation, the pilot plots a new course and flies safely through the fog to the airport. His passenger is amazed: “How did you do that?!” “Simple,” replies the pilot. “The answer was factually correct but damn all use. That told me it was the Microsoft building.”


It’s really a shame, because (a long time ago), their documentation was wonderful.

When I started college in 1987, I received a copy of Microsoft Word for DOS (version 3, 1986) as a part of my computer bundle. In addition to having a good printed manual, it included several floppies containing interactive tutorial apps. These apps walked you through all of the app’s features and by the end, you knew how to use everything. It took me about two days (part-time) to go through all the lessons.

A few releases later (version 5.1, 1991, I think), they dropped the tutorial discs, forcing you to learn the app from the manual, which didn’t work nearly as well.


A good justification for their excellent, original printed manuals!

“A good justification for their excellent, original printed manuals”

Which no one read.

Making a better Level 3: prioritizing ‘trusted information’. Sometimes, what I need to know is in, e.g., a Take Control book that I already own (and meant to read/have forgotten a detail of). If my own collection of trusted resources were a prioritized part of the search and presentation of the solution, it would be both more likely to yield a correct and useful solution for me and an encouragement to buy (and to remember to read) more help books that I know are trustworthy and useful.


I have found AI (ChatGPT, often) to be such a time-saver this way. I fear the suits will say the same, and blow off your excellent thoughts on this.


There are no easy answers here. You have to know your audience as well as your subject, and knowing your audience is tougher. When I wrote Understanding Lasers, Understanding Fiber Optics, and The Laser Guidebook, I was writing for industry trade magazines and knew the audience. The first editions of all three came out in the 1980s, and I was introducing new technologies to readers of my books as I had been explaining new technology to readers of laser magazines. It was a big broad task and all told I reached around 150,000 readers.

But when you’re writing for Mac users you have to reach a much much larger audience and a vastly more diverse one. You have to write for people from 10 to 100 with educations from grade school to PhDs, and from immigrants whose English is shaky to top scientists and scholars with PhDs in Byzantine History. You can’t assume much about your readers, so you don’t know whether or not you can use an acronym or industry buzzword. I could get away with using laser geek speak – industry jargon – as long as I explained it first, but you can’t do that with an audience in the hundreds of millions. You can’t do it right without doing hard work, and the industry does not want to pay for that.


You are a better person than me. Photo and Design are both readily understandable to me because they are modeled after a certain 800-pound-gorilla’s functionally-equivalent applications.

Publisher is still poorly thought out and organized. The huge deficit is in importing content. The 800-pound’s InDesign, and PageMaker before it, were both conceived as Rosetta-Stone-style content hubs, where content is created in the text or graphics application of one’s choice and then integrated into a publishable document. Import (integration) is aided by a myriad of content translation filters. I’m a now-lapsed expert in both systems, including their scripting languages.

Publisher seems to be more of a copy-and-paste integrator, at least in my experience, and I am never quite sure how things are going to land, be altered, or otherwise macerated on their way in to my document. I find myself using the documentation way more when I’m trying to lay out a publication, and it doesn’t help that Publisher has defaults like “12 pts space after paragraphs” baked into its DNA.

So yes, it’s not just you, me, or them. :slight_smile:


Here’s a use case that involves AI, no Siri, but is in the spirit of this excellent proposal:

I produce music notation these days using Dorico. It’s a stellar application produced by brilliant humans, and includes help in the form of an exhaustive HTML-based system meticulously developed and curated by a very gifted human.

Dorico has its very own world of notation standards, and if you are grounded in other notation software as this 30-odd year veteran of Finale is, you have to learn how to “think like Dorico.” And Dorico, er, thinks different.

One blind spot in the docs is that they sometimes presume the user is approaching a problem the same way Dorico does. An example is Tuplets. (Tuplets are those notes you see in engraved music that have a number over them.)

Musicians know that standard meter (“beats”) is divided like so: 1 whole bar, divided in 2, 4, 8, 16, 32, 64, and so on, exactly like base 2 numbers. If you want the performer to divide a beat differently, then you need some form of tuplet, such as a “3” placed over a group of notes. Tuplets can divide a beat into any ratio of divisions, such as 2 or 7 into 3, or even be nested within another tuplet. Good luck and well done to performers handling those!

Dorico by default will divide a beat into 3 when a tuplet is called for. But what is presumed is that Dorico is calling for 3 sub-beats in the metrical space of 2 sub-beats. Because you can indicate the tuplet with the single character “3”, it is not obvious that Dorico is actually notating this with the specific ratio “3:2” (or “3 sub-beats in the space of 2”).

The documentation presumes the user knows this, and so does not provide that explanation or any other to show how the tuplet may be called for. The examples presume the user is “thinking like Dorico” without revealing how Dorico thinks about a tuplet. In the Dorico user forum, asking for an explanation would yield a mix of unhelpful responses and, maybe 20 replies in, one user who accidentally reveals the construct the program uses and is very helpful indeed. No knock on that forum—much like TidBITS Talk which is also platformed on Discourse, the conversation is mostly civil and moderated.

So, I turned to ChatGPT, and asked the very specific question:

I am using Dorico Pro 6.1 on MacOS. Please explain how tuplet ratios work as though I were learning the concept for the first time. You may use as an example my current project where I am working in a 12/8 meter, and need to enter eighth note duplets.

This is a question I could ask of ChatGPT without shame. And the response was brilliantly laid out, conceptually accurate, and included both specific examples and a general rule that was specific to the Dorico 6.1 application.

I can only presume that OpenAI allowed its LLMs to read and train on both the HTML help files, which are carefully tagged for application version and platform, and the Dorico help forums. But the result was exactly what @ace Adam was describing in his essay, at least for Layers 2 and 3, and I would argue Layer 1 was heavily incorporated as well. Unlike canned documentation, the ChatGPT bot allowed for a conversational followup that made it easy to ensure I understood what it was saying.

Orienting Siri and Apple Intelligence toward a system-aware help system in this manner would lift a lot of veils, because the response can be contextually tailored to the user’s knowledge level based on the prompt. “I don’t understand how to get Mail to sort my messages for me! Show me how Mail does that, and tell me what I need to do to get started.” Because Siri would “know” which version of xOS and Mail are in use, it could start with a response tailored to the user’s specific circumstances rather than a general survey of Rules.

I think Adam’s insight may be a key to making Siri relevant instead of a novelty.


I attended a programming conference a few years ago when ChatGPT was new and LLM-based AI was just getting going. One presenter did a talk on AI and basically pointed out that if computers can really understand us via voice, it changes everything about programming. Basically, user interface, as we know it, is done.

  • Experts in a topic (such as you with music notation) can just describe what they want in their own way, and the AI can understand and implement it. No more need of complex UI that is only “intuitive” to a select set of minds.

  • More than half of programming today is creating the user interface. With no UI needed, just voice, programmers can have more time to focus on the core product.

  • Language/culture becomes less of a barrier. Instead of having to write software that works in multiple languages while still omitting rare languages, such as those spoken in a small African country, the user can just talk in their own language and the AI can understand.

  • “Help,” so to speak, is not really necessary since the AI does all the work. If something doesn’t work right, you just tell it what you really wanted and it does it.

I think this idea is brilliant and definitely the future… though it may not happen as fast as we think and there will always be some need for user interface in certain situations (or as a backup for voice as voice isn’t always the ideal interface).

This also potentially changes the complaints in the anti-Liquid Glass camp and those who think Apple’s UI skills have gone downhill. Even if that’s true, it might not matter in a world where voice control is the OS!

Who cares what the Pages icon looks like or whether the menu icons are inscrutable if you aren’t seeing any of that because you’re talking to your computer instead? :joy:


I have hacked up Keyboard Maestro to replicate the function of the sadly defunct Mailhub, so that I press “/”, then type the first few letters of the relevant mailbox, then Return (or Command-Return, which is my habit), to stick it where it goes. Saves me tons of time.