Originally published at: How Siri Could Become the Mac’s New Help System - TidBITS
Using modern software is often less about doing the work than about figuring out how to do the work. Even those of us with decades of experience regularly find ourselves leaving an app to search the Web, skim documentation when it exists at all, read forum threads, or watch YouTube videos to discover how a feature works, where an option lives, or what a cryptic setting actually does. The more powerful the app, the more likely this is to happen. I’m not a graphics guy, so as much as I like having access to all the capabilities in Canva’s Affinity app, nearly everything I want to do requires research (see “Canva’s Affinity Combines Photo, Designer, and Publisher into One Free App,” 31 October 2025).
TidBITS exists largely because of that gap. For decades, we’ve explained how macOS and Mac software works, what features really do, and how to accomplish tasks that aren’t obvious from the interface. We started TidBITS Talk in part to provide answers for more specific concerns, with users asking (and answering!) questions that have caused consternation. Many of these questions aren’t inherently difficult, but built-in help systems are seldom useful, and users often don’t know how to frame a question well enough to find an answer through search. Sometimes, the easiest approach is simply to ask a smart friend. The problem is not ignorance or lack of effort—it’s discoverability.
To be clear, most tech support questions have good answers, just not ones the system can provide directly. The information is scattered across developer documentation, release notes, blog posts, forum replies, and half-remembered advice from friends. Users are forced to assemble understanding from fragments, often without knowing which sources are trustworthy or if the advice applies to the version of the app they’re actually running. The process is inefficient, error-prone, and increasingly out of step with how capable our devices have otherwise become.
That good help is hard to find isn’t a new problem, and Apple has tried to solve it before. What is new is the opportunity to approach it differently. In this article, I argue that Apple Intelligence and Siri could finally fulfill a long-standing promise: letting the system itself explain how its apps work, accurately, contextually, and without forcing users to leave what they’re doing.
Apple has already taken a step in this direction. Starting in macOS 15 Sequoia, Siri gained some awareness of macOS help content and can answer some questions about macOS and bundled Apple apps without deferring to a Web search. (When it does defer to a Web search, it often presents Windows or Android answers. Embarrassing!) That change matters less for what it accomplishes today than for what it signals: Apple is trying something new with system-level help.
However, the current implementation also demonstrates why this problem remains unsolved. Siri’s responses are inconsistent and don’t extend beyond Apple’s own apps, so few users have gotten in the habit of asking. System-wide help systems succeed only when they are broadly and reliably helpful; sporadic insight into a subset of apps isn’t enough to change behavior.
Previous Help Systems
Apple has long recognized the need for a system-wide help system and has made repeated, serious attempts to address it. One of the earliest and most ambitious was Balloon Help, introduced with System 7.0 in 1991 (see “What Good is the Help Menu?,” 29 July 1991). When enabled, hovering over interface elements revealed explanatory text describing each button, menu item, or control. By System 7.5, Balloon Help was often paired with Apple Guide information, which went further by offering task-oriented walkthroughs (see “Future System Software,” 28 March 1994). Together, they represented a coherent effort to make the system explain itself in context.
Balloon Help was genuinely forward-looking but ultimately collapsed under its own weight. Functionally, it was slow and cumbersome to toggle, which quickly spawned third-party utilities that made switching it on and off easier. It also often obscured the very interface it was meant to clarify. More importantly, the utility of Balloon Help depended on developers writing good descriptions, which many did unevenly or not at all. Apple Guide, while conceptually elegant, suffered from the same problem.
In Mac OS 8.5, Apple tried again with Help Viewer, an HTML-based help system (see “Delving Further into Mac OS 8.5,” 26 October 1998) that Apple opened up to developers in the next release (see “Apple Rolls Out Mac OS 8.6,” 10 May 1999). Help Viewer made the jump to Mac OS X, though Balloon Help and Apple Guide did not (see “Mac OS X Report Card: October 2002,” 7 October 2002). Help books shipped within apps, were indexed by the system, and were searchable, all things developers and users had requested.
In theory, Help Viewer provided richer content, consistent presentation, and centralized access. In practice, Help Viewer has quietly faded into the background of most users’ everyday Mac experience. People rarely opened Help Viewer proactively, search results were often vague or outdated, and many developers treated help as an afterthought, both because the tooling was awkward and because writing good documentation is genuinely hard. As Web search improved, attention shifted from in-app help to Google results, forum posts, and video tutorials.
What survived from Balloon Help is its least ambitious descendant: tooltips and menu item descriptions. (We also joked that it’s where Apple got the idea for conversation styling in iChat, now Messages; see “Jaguar: Mac OS X Prepares to Pounce,” 6 May 2002.) Tooltips are still genuinely useful, and they embody the part of Balloon Help that worked best—brief, contextual hints. But they are intentionally shallow. They explain what a command does in a word or two, not how or why to use it. They improve discoverability in small ways but don’t meaningfully reduce the need to look elsewhere.
The takeaway is that Apple has genuinely tried to solve this problem multiple times over the decades. Developers often produce weak documentation and in-app help, but that’s usually due to limited time, resources, and ability rather than a lack of desire; all developers want to reduce support costs. Even today, Apple continues to address the problem through online user guides, the Tips app, and Siri answering user questions. What has been missing is a help system that can adapt to context, stay aligned with the running version of an app, and draw on multiple sources of knowledge rather than relying on static pages—qualities that Apple Intelligence, if applied carefully, is positioned to provide through Siri.
A More Helpful Siri, Not an Agent
Apple keeps talking about a “more personalized” Siri, but an easier—and likely more helpful—step would be to give Siri deeper awareness of its machine environment so it can explain how to use all available apps and tools, rather than focusing primarily on personal data. Many questions users have aren’t about themselves at all; they’re about the software in front of them. What does this feature do? Where is that setting? Why does this option behave the way it does? How would I accomplish this task?
Answering these questions does not require Siri to take actions on the user’s behalf or attempt to drive interfaces. That approach has historically been brittle, and the recent focus on autonomous agents—such as the security issues now surrounding OpenClaw—underscores how quickly complexity and risk can outweigh practical benefit. For now, the safer and more valuable role for Siri lies in explanation, not execution.
Explanation is entirely achievable today. A more helpful Siri that can answer questions in plain language—naming features, clarifying concepts, and describing where to look—would address a large fraction of users’ everyday friction. Even something as simple as identifying what a feature is called can be enough to unlock further exploration, whether through menus, settings search, or additional documentation.
In addition, explaining software helps users build mental models of how an app works; performing actions for them bypasses that understanding. For tasks users expect to repeat, learning how something works is usually more valuable than having it done once on their behalf.
Just as importantly, explanation fails more gracefully. If Siri’s role is to advise, being incomplete or occasionally wrong is an inconvenience, not a catastrophe. Users can verify, adapt, and continue. That kind of failure is tolerable—and familiar to anyone who has read an outdated article or received bad advice on a discussion forum. By keeping Siri on the explanatory side of the line, Apple can make it useful without eroding trust when it inevitably gets things wrong.
Where Will a More Helpful Siri Get Its Information?
For Siri to explain how apps work in a way users can trust, it must draw on multiple sources of information. No single repository of knowledge is sufficient, and pretending otherwise results in incomplete or even misleading answers. A more helpful Siri should rely on a three-layer model that acknowledges that different kinds of information are authoritative in different ways:
- Layer 1: Developer-Provided Documentation: The most important source is documentation provided by the developer and bundled with the app. When developers take the time to describe features, workflows, and constraints in a structured way, that information should be treated as canonical. It reflects how the app is intended to work, applies to the version installed on the user’s device, and eliminates the guesswork that comes with using external sources. If Siri is going to explain the impact of a setting or how a feature functions, documentation from the horse’s mouth is the place to start.
- Layer 2: Implicit Information: Some questions don’t require documentation at all, just orientation. Users often want to know what a feature is called, where it lives, or whether a capability even exists. macOS already has access to a limited but reliable set of structural information about every app it runs: menu hierarchies, toolbar items, keyboard shortcuts, accessibility labels, automation dictionaries, supported file types, and configuration panes. This information isn’t deep, but it is always present and version-accurate. Siri should be able to use it to answer basic “what” and “where” questions, identify the correct terminology, and point users in the right direction, reducing friction without pretending to replace real documentation.
- Layer 3: External, User-Generated Knowledge: In reality, developer documentation is often incomplete, overly abstract, or focused on how features work rather than how people actually use them. That’s where Web-based, user-generated knowledge becomes essential. Forum posts, blog articles, and troubleshooting guides often capture how software is actually used, not just how it is intended to work. They reveal edge cases, workarounds, and limitations that developers either don’t document or prefer not to acknowledge. Developer documentation rarely says that an app can’t do something, whereas users are quick to explain what doesn’t work and how they cope. User-generated knowledge is less authoritative, but more candid and often more useful. It shouldn’t replace official documentation, but it’s essential when local sources are incomplete or lack task-oriented guidance.
Siri needs to draw from each of these layers appropriately. Developer documentation should generally carry more weight than a random forum comment, but a detailed blog post describing an expert workflow is likely more useful than a terse official description. In its responses, Siri should indicate whether specific details reflect documented behavior or common practice, and any external information should be cited and linked so users can assess its relevance.
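As a rough illustration of this weighting, here is a minimal Python sketch of the three-layer model. Everything in it is invented for illustration; none of these names or types reflect real Siri or Apple Intelligence internals.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

# Hypothetical source layers, ordered from most to least authoritative.
class Layer(IntEnum):
    DEVELOPER_DOCS = 1   # Layer 1: bundled, version-matched documentation
    APP_STRUCTURE = 2    # Layer 2: menus, shortcuts, accessibility labels
    COMMUNITY = 3        # Layer 3: forum posts, blog articles

@dataclass
class Answer:
    text: str
    layer: Layer
    citation: Optional[str] = None  # external answers must carry a link

def best_answer(candidates: list[Answer]) -> Optional[Answer]:
    """Prefer the most authoritative layer, but keep provenance attached
    so a response can say whether it reflects documentation or practice."""
    return min(candidates, key=lambda a: a.layer, default=None)

candidates = [
    Answer("Use File > Export As...", Layer.COMMUNITY,
           citation="https://example.com/forum-thread"),
    Answer("Export As writes a copy without changing the open document.",
           Layer.DEVELOPER_DOCS),
]

best = best_answer(candidates)
print(f"[{best.layer.name}] {best.text}")
```

In a real system the choice would be subtler than a strict ranking—as noted above, a detailed community workflow can beat a terse official description—but the key idea survives: every answer travels with its layer and, where applicable, a citation.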
Getting Developers on Board Through Xcode and AI
A strength of this proposal is that it benefits from developer participation without depending on it. Our more helpful Siri can still fall back on the structure of the app and widely available community knowledge to answer many basic questions. These Layer 2 and Layer 3 information sources ensure that there’s always some sort of answer.
That said, developer participation would dramatically improve the quality, precision, and reliability of Siri’s answers. When developers describe their app’s features, particularly in a structured way, that information can underpin explanations about how the app is intended to work. How then can we encourage developers to write in-app documentation when such efforts have failed in the past?
First, we have to acknowledge that developers who document their apps typically do so on the Web, using whatever tools fit their workflow, because that’s where documentation has the clearest payoff in supporting existing users and providing pre-sales information to potential buyers. Asking developers to write separate help content just for Siri would repeat the mistakes of Apple’s earlier help systems. Instead, Apple should help developers write documentation once and reuse it in multiple contexts.
Apple’s Xcode development environment is well-positioned to help here with the help of AI. Xcode already understands enough about an app’s structure to generate a useful starting point: a map of features, their names, and where they live in the interface, supported where possible by comments in the code. What if Xcode automatically drafted this baseline and then prompted developers with focused questions about intent, usage, and limitations—information only the developer can provide? (I’ve found that asking a chatbot to ask interview questions, one at a time, and answering them via dictation is a good way to collect thoughts.) The result would be structured information that the developer could then refine.
From that source, Xcode could produce bundled, versioned help for Siri to consult locally and export clean, search-friendly Web documentation for online discovery. Developers who opt in would get better Siri answers, fewer repetitive support questions, and cleaner onboarding for new users. That’s the carrot that Apple Intelligence is positioned to dangle.
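To make the idea concrete, here is a speculative Python sketch of what a bundled, versioned help entry might look like. The schema and every field name are invented for illustration; Apple defines no such format today.

```python
import json
from dataclasses import dataclass, field, asdict

# A purely hypothetical schema for bundled, versioned help content.
@dataclass
class HelpEntry:
    feature: str     # user-visible feature name
    location: str    # where it lives in the interface
    summary: str     # what it does, in the developer's words
    limitations: list[str] = field(default_factory=list)  # what it won't do

@dataclass
class HelpBundle:
    app_version: str           # ties answers to the installed release
    entries: list[HelpEntry]

bundle = HelpBundle(
    app_version="2.4.1",
    entries=[
        HelpEntry(
            feature="Export As",
            location="File > Export As",
            summary="Writes a copy in another format without altering "
                    "the open document.",
            limitations=["Cannot export password-protected files"],
        )
    ],
)

# Round-trip through JSON, as a bundled help file might be stored on disk.
stored = json.dumps(asdict(bundle), ensure_ascii=False, indent=2)
decoded = json.loads(stored)
print(decoded["entries"][0]["feature"])
```

The point of the `app_version` field is the one that earlier help systems lacked: an answer drawn from this data is guaranteed to describe the release actually installed, and the explicit `limitations` list captures exactly the kind of candid information that official documentation rarely includes.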
Making Apps Deeply Discoverable
The payoff for users from an Apple Intelligence-powered help system should be obvious. A more helpful Siri that can reliably answer questions about how apps work would reduce the time users spend searching the Web, skimming forum threads, or watching videos just to solve small, specific problems. It would also slow the steady stream of basic questions that flood support forums, freeing those spaces for deeper and more interesting discussion.
More importantly, good help leads to better understanding. When users can easily learn what features exist, what they’re called, and where they live, they develop stronger mental models of the software they use. That confidence makes it easier to explore new capabilities, use advanced features, and become expert with complex apps.
Apple has been circling around this problem for decades through various help systems, none of which have quite solved it. With today’s large language models, richer app metadata, online resources, and tighter system integration, it’s now possible to provide accurate, contextual help directly within apps.
The real question is whether Apple is now willing to treat help as a first-class operating system feature open to all developers, rather than an afterthought or something limited to bundled apps. If it does, Apple Intelligence could give Siri a genuinely useful new role—not as an agent or a personal assistant, but as the help system Apple users have needed all along.
