Originally published at: Are AI Subscription Models Sustainable? - TidBITS
When it comes to AI, what should be available for free? We’re accustomed to getting many core Internet services, such as search and email, for free, but running anything at scale requires massive investment and operating costs. To give you a sense of the scale, OpenAI recently reported 700 million weekly active users of ChatGPT, four times the number accessing the service at the end of last year.
You can use ChatGPT, Claude, Gemini, and others to an extent without paying. But all of them come with a $20-per-month paid tier that offers better models, higher or no limits, and additional features. (For power users who need even more capabilities, there are premium tiers ranging from $100 to $250 per month that provide enhanced features, priority access, and significantly higher usage limits.) AI companies are willing to provide sufficient capabilities to showcase what their chatbots can do, but they must attract enough paying users to cover their infrastructure costs and move toward a sustainable business model.
The Evolution of Internet Business Models
This freemium model, although relatively common now, is historically quite unusual in its direct connection between revenue and perceived value to users. In the early days of the Internet, websites like Yahoo and Google instead followed in the footsteps of more magazine-like sites in supporting themselves with advertising.
(Here’s where I point out that TidBITS pioneered advertising on the Internet in 1992, before websites were even a thing (see “TidBITS Sponsorship Program,” 20 July 1992). The idea was too obvious for me to claim significant influence on the evolution of the Internet, but I’m still sorry that Internet advertising has become so prevalent, trashy, and prone to abuse.)
I’d argue that this early reliance on advertising stemmed from the value proposition not being high enough to support monthly subscription fees, the desire of companies to focus on attracting users over initial profit, and the lack of a reliable payment system infrastructure at the necessary scale. Whatever the reasons, the advertising model has dominated mainstream Internet services for decades.
But it’s much harder to display sensible ads alongside chatbot conversations, and the value proposition is much higher. Many people consider $20 per month a reasonable fee to access the benefits of a powerful chatbot, particularly now that chatbots can search the Web for current information that goes beyond whatever ended up in their training data.
App-Based AI Subscription Challenges
Many people are also happy to pay for AI-powered features in other apps. For instance, the popular Mac launcher Raycast can supercharge its capabilities with AI for $10 or $20 per month (the higher tier offers advanced models), the Lex.page online word processor integrates an AI editorial assistant for $18 per month, and Notion AI adds an assistant that helps with writing, analysis, and more in Notion’s $20 per month Business plan. These fees are necessary because the developers of Raycast, Lex, and Notion all have to pay the model providers—such as OpenAI, Anthropic, and Google—for API access to their models. In essence, they’re repackaging the models’ capabilities in a new form and passing on the costs.
Although it may seem that charging for AI functionality is an indication of a functional business model, much of what these apps provide is convenience and context. They’re offering custom interfaces, fine-tuning prompts, and leveraging local context, but at the base level, the models are doing the heavy lifting. That leads to a few challenges that may impact users:
- Subscription stacking: In the short term, users may accumulate multiple $20 monthly subscriptions, resulting in significant ongoing costs. That’s already an issue with app subscriptions, but most of those are well under $20 per month.
- Constant feature justification: These apps must continuously persuade users that their additional functionality warrants another subscription when simply pasting content into a chatbot would produce similar results. Plus, many of these AI-enhanced apps overlap in functionality, particularly in terms of writing features, making it more challenging to justify multiple subscriptions.
- Competition from model providers: The model providers have a significant cost advantage if they want to compete directly. My experience with Gemini in Google Docs is that it isn’t nearly as useful as Lex, but Google has the resources to change that if it wants to.
- Pricing vulnerability: Third-party apps are at the mercy of model providers’ pricing decisions. If OpenAI or Anthropic were to raise their API fees, apps would need to either absorb the costs (threatening their viability) or pass them on to users (potentially losing customers).
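To make the pricing-vulnerability squeeze concrete, here’s a back-of-the-envelope sketch. Every number in it is a made-up illustration, not any app’s actual economics; the point is simply how quickly a provider’s rate increase eats a reseller’s margin.

```python
# Hedged sketch of the "pricing vulnerability" squeeze: what happens to a
# third-party app's per-user margin if its model provider raises API fees?
# All figures are hypothetical assumptions for illustration only.

SUBSCRIPTION = 20.00  # what the app charges each user per month (assumed)
OTHER_COSTS = 4.00    # hosting, support, payment processing per user (assumed)

def margin(api_cost_per_user: float) -> float:
    """Monthly profit per user after API fees and other costs."""
    return SUBSCRIPTION - OTHER_COSTS - api_cost_per_user

base_api_cost = 8.00  # assumed current API spend per user per month

# Show the margin as the provider raises rates by 0%, 25%, 50%, and 100%.
for increase in (0.0, 0.25, 0.50, 1.0):
    cost = base_api_cost * (1 + increase)
    print(f"API fees +{increase:4.0%}: ${margin(cost):6.2f}/user/month margin")
```

Under these assumed numbers, a 100% API price increase takes the app’s per-user margin from $8 to zero, which is why the only real options are absorbing the hit or raising prices.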
There are two other approaches supported by some apps, such as Raycast and DEVONthink:
- API access: You can pay for direct API access to a particular chatbot, which you connect to a third-party app using an API key. Instead of a monthly subscription fee, you pay for each prompt. It might take a month or two of usage to discover whether API access is more or less expensive than the subscription.
- Local models: If you have a sufficiently beefy Mac with Apple silicon, you could use Ollama to install and run local models. Local models require significant disk space and memory, and likely won’t be as powerful as those that run in the cloud. Again, testing would be required to determine their utility, but there would be no per-use cost beyond the hardware you already own.
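The API-versus-subscription question above is really a break-even calculation. Here’s a minimal sketch of it; the per-token rates and usage figures are illustrative assumptions, not any provider’s actual pricing, so you’d substitute the real numbers from your chosen model’s rate card.

```python
# Hedged sketch: compare a flat $20/month chatbot subscription against
# pay-per-use API pricing. All rates and usage patterns below are
# illustrative assumptions, not any provider's actual prices.

SUBSCRIPTION_PER_MONTH = 20.00  # USD, typical paid chatbot tier

# Assumed API rates in USD per million tokens (hypothetical).
INPUT_PER_MTOK = 3.00
OUTPUT_PER_MTOK = 15.00

def monthly_api_cost(prompts_per_day: int,
                     input_tokens: int = 500,
                     output_tokens: int = 800,
                     days: int = 30) -> float:
    """Estimate a month of API spend for a given usage pattern."""
    total_in = prompts_per_day * input_tokens * days
    total_out = prompts_per_day * output_tokens * days
    return (total_in / 1_000_000) * INPUT_PER_MTOK + \
           (total_out / 1_000_000) * OUTPUT_PER_MTOK

# Light, moderate, and heavy usage under the assumed rates.
for prompts in (5, 25, 100):
    cost = monthly_api_cost(prompts)
    cheaper = "API" if cost < SUBSCRIPTION_PER_MONTH else "subscription"
    print(f"{prompts:>3} prompts/day -> ${cost:6.2f}/month ({cheaper} is cheaper)")
```

Under these assumed rates, light users come out well ahead with per-prompt API billing, while heavy daily users blow past the flat subscription price, which is why a month or two of real-world measurement is the only reliable way to decide.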
How Apple Could Fit into the Picture
Interestingly, these challenges point to a huge opportunity for Apple. Imagine if Apple Intelligence were built on an LLM good enough to compete with ChatGPT, Claude, and Gemini, and that could perform Web searches like Perplexity. Enabling Apple developers to leverage such capabilities could significantly enhance apps for the iPhone, iPad, Mac, Apple Watch, Vision Pro, and even Apple TV. If Apple Intelligence were sufficiently compelling and broadly adopted by developers, Apple could even use it to boost Services revenue by offering it as a separate subscription that would unlock AI features across many apps throughout the ecosystem.
Apple isn’t the only company with this opportunity. Google and Microsoft have already built impressive AI capabilities and are working to extend them more broadly across their own ecosystems. Meanwhile, companies like OpenAI (with a rumored browser), Anthropic (which just started talking about a Claude browser extension), and Perplexity (with its beta Comet browser) are expanding beyond being mere model providers by building agentic browsers that turn the Web itself into their platform.
Given this intense competition in AI, it’s concerning that Apple seems to be rearranging deck chairs with Liquid Glass instead of prioritizing the development or acquisition of a top-notch LLM for its next-generation operating systems under Apple Intelligence. It’s not that Liquid Glass precludes work on a competitive version of Apple Intelligence, but the former is happening, and the latter isn’t shipping.