
As more web browsers begin experimenting with agentic, action-taking features—from booking tickets to completing online shopping—concerns around data exposure and financial risk are rising. In response, Google has detailed its security strategy for Chrome’s upcoming agentic capabilities, emphasizing layered oversight, user consent, and strict origin controls. The company first previewed these agentic features in September and confirmed they will roll out in the coming months.
To keep automated actions aligned with user intent, Google is deploying multiple model-based safeguards. The company explained that it has built a User Alignment Critic, based on Gemini, to review the tasks generated by Chrome’s planner model: “If the critic model thinks that the planned tasks don’t serve the user’s goal, it asks the planner model to rethink the strategy.” Importantly, Google noted that the critic model sees only the metadata of the proposed action, not the actual web content, reducing exposure and limiting potential misuse.
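Google has not published the planner/critic interface, but the oversight loop it describes can be sketched roughly as follows. Everything here is hypothetical: `ActionMetadata`, `critic_review`, and `plan_with_oversight` are illustrative names, and the keyword check stands in for what would be a Gemini-based judgment.

```python
# Illustrative sketch only; not Chrome's actual planner/critic API.
from dataclasses import dataclass


@dataclass
class ActionMetadata:
    action_type: str    # e.g. "click", "type", "navigate"
    target_origin: str  # origin of the element acted on
    description: str    # short human-readable summary


def critic_review(goal: str, actions: list[ActionMetadata]) -> bool:
    """Approve the plan only if every step plausibly serves the goal.

    Note: the critic sees action *metadata*, never the page content.
    """
    # Trivial stand-in heuristic for a learned critic model.
    return all(a.action_type in {"click", "type", "navigate"} for a in actions)


def plan_with_oversight(goal: str, planner, max_retries: int = 3):
    for _ in range(max_retries):
        actions = planner(goal)           # planner proposes metadata-level steps
        if critic_review(goal, actions):  # critic checks alignment with the goal
            return actions
        # Rejected: the planner is asked to rethink its strategy.
    raise RuntimeError("No aligned plan found; surface the task to the user")
```

The key property the article highlights is the information boundary: the critic judges the plan from metadata alone, so sensitive page content never flows into the oversight model.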
A key part of Google’s approach involves restricting what agentic models can read and modify during a task. Using Agent Origin Sets, Chrome limits agents to “read-only origins and read-writeable origins.” For example, on a shopping site, product listings may be accessible, while advertisements are excluded. Google added that the agent can only click or type within approved iframes. “This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins. This bounds the threat vector of cross-origin data leaks.”
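The origin-scoping idea can be illustrated with a small sketch. This is not Chrome's implementation; the `AgentOriginSet` class, its methods, and the example origins are all hypothetical, chosen to mirror the shopping-site example above.

```python
# Hypothetical sketch of the "Agent Origin Sets" concept; names are illustrative.
from urllib.parse import urlsplit


class AgentOriginSet:
    def __init__(self, readable: set[str], writable: set[str]):
        # In this sketch, writable origins are also readable.
        self.readable = readable | writable
        self.writable = writable

    @staticmethod
    def origin(url: str) -> str:
        parts = urlsplit(url)
        return f"{parts.scheme}://{parts.netloc}"

    def can_read(self, url: str) -> bool:
        return self.origin(url) in self.readable

    def can_act(self, url: str) -> bool:
        # Clicks and typing are confined to writable origins,
        # bounding the cross-origin data-leak threat vector.
        return self.origin(url) in self.writable


origins = AgentOriginSet(
    readable={"https://shop.example"},      # e.g. product listings
    writable={"https://checkout.example"},  # e.g. the checkout flow
)
```

A third-party ad iframe would fall outside both sets, so its content is invisible to the agent and it can never receive data from the readable origins.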
Google is also deploying a separate observer model that analyzes URLs before navigation, helping prevent the agent from following unsafe or hallucinated malicious links.
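In outline, that check is a gate between planning and navigation. The sketch below is illustrative only: `looks_unsafe` stands in for Google's observer model, approximated here with a trivial blocklist, and `navigate` is a hypothetical wrapper.

```python
# Illustrative URL gate; a real observer model would replace the blocklist.
from urllib.parse import urlsplit

BLOCKED_HOSTS = {"evil.example"}  # stand-in for a learned safety verdict


def looks_unsafe(url: str) -> bool:
    return urlsplit(url).hostname in BLOCKED_HOSTS


def navigate(url: str, browser_goto) -> None:
    if looks_unsafe(url):
        # The link may be attacker-supplied or hallucinated by the model.
        raise PermissionError(f"Navigation blocked: {url}")
    browser_goto(url)
```

Placing the check on every navigation, rather than only on the initial task, matters because malicious links can be injected mid-task by page content the agent reads.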
To enhance user control, especially for sensitive tasks, Chrome will require explicit consent. When an agent attempts to visit pages containing banking or medical information, it will first prompt the user. It will also request permission before using Chrome’s password manager, and Google emphasized that “the agent’s model doesn’t have exposure to password data.” Users will also be asked before actions such as making purchases or sending messages.
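The consent rules described above amount to gating certain actions and page categories behind an explicit user prompt. The following is a minimal sketch under stated assumptions: `ask_user` is a hypothetical callback standing in for Chrome's UI-level prompt, and the category sets are drawn from the examples in the article.

```python
# Hypothetical consent gate; Chrome's real prompts are browser UI, not code.
SENSITIVE_ACTIONS = {"purchase", "send_message", "use_password_manager"}
SENSITIVE_CATEGORIES = {"banking", "medical"}


def perform(action: str, page_category: str, ask_user) -> bool:
    needs_consent = (
        action in SENSITIVE_ACTIONS or page_category in SENSITIVE_CATEGORIES
    )
    if needs_consent and not ask_user(f"Allow the agent to {action}?"):
        return False  # user declined; the agent must stop here
    # ... carry out the action; per the article, password data itself
    # is never exposed to the agent's model ...
    return True
```

Routine actions on non-sensitive pages proceed without a prompt, so the user is only interrupted at the points of real risk.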
Beyond these measures, Google has introduced a prompt-injection classifier and is actively testing agentic features against researcher-generated attacks. Other AI browser companies are also prioritizing safety—Perplexity recently launched an open-source model designed to prevent prompt injection attacks on agents.
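For intuition only, a prompt-injection check sits between the page content and the agent's model. The toy below uses keyword matching; Google's classifier (and Perplexity's open-source model) are learned classifiers, so this is a stand-in for the interface, not the technique.

```python
# Toy stand-in for a prompt-injection classifier; real systems use trained models.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your goal",
)


def is_injection(page_text: str) -> bool:
    """Flag page content that tries to override the agent's instructions."""
    lowered = page_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

Flagged content would be withheld from the model or escalated to the user, rather than being allowed to steer the agent's plan.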