Be Cautious with Agentic Web Browsers

Originally published at: Be Cautious with Agentic Web Browsers - TidBITS

At The Verge, Robert Hart writes:

In the past few weeks alone, researchers have uncovered vulnerabilities in Atlas allowing attackers to take advantage of ChatGPT’s “memory” to inject malicious code, grant themselves access privileges, or deploy malware. Flaws discovered in Comet could allow attackers to hijack the browser’s AI with hidden instructions. Perplexity, through a blog, and OpenAI’s chief information security officer, Dane Stuckey, acknowledged prompt injections as a big threat last week, though both described them as a “frontier” problem that has no firm solution.

Read Hart’s article for more details on the security and privacy concerns plaguing agentic browsers, but in short, they’re all somewhat vulnerable to “prompt injection” attacks, in which malicious instructions are concealed within content read by an AI. These instructions could be hidden in HTML comments, white text on a white background, or in the page metadata. They might trick the chatbot into requesting personal information or instruct the browser to download and execute malware.

Right now, agentic browsers have limited defenses against prompt injections. While AI systems can distinguish between system instructions and user content at the architectural level, they can’t reliably identify malicious instructions hidden within legitimate content encountered on Web pages. To an LLM, all text is tokens, and all tokens carry essentially equal weight. These browsers do employ input sanitization and prompt classification, and there are guardrails in place, but we’re talking about an entirely new attack space, making it impossible to anticipate and block all potential attacks.

However, there aren’t yet enough users of these agentic browsers to attract sophisticated cybercriminals, and the browsers don’t work well enough to be reliably exploited, so I’m comfortable using one occasionally for experimentation (see “Can Agentic Web Browsers Count?,” 30 October 2025). I think it’s safest to avoid using an agentic browser as your daily driver for now, though. At best, they are one-trick AI ponies that offer few features to enhance the human-powered Web browsing we all do.

2 Likes

Thanks for that, Adam. Yes, better to be safe than sorry. I use Brave, and it works well. But along those same lines, I do not make use of the so-called Cloud for anything, and especially for storing any of my personal/financial information. Just way too risky.

1 Like

User: Make me a dinner reservation…

AI: Is 7:30 Friday ok? Please give me a credit card to hold your table.

Simon Willison, who is doing some of the smartest writing about the AI space, has a blog post about a Meta AI paper about AI agent security. In it, the Meta researchers suggest the Agents Rule of Two, which basically presents a Venn diagram showing three properties of an agent session, no more than two of which can be satisfied while remaining safe. It shows just what aspects of usage we should pay attention to when evaluating the trustworthiness of AI agents.

And another good post from Simon responding to OpenAI’s discussion of security in ChatGPT Atlas.