⏳ In Brief
- Anthropic pilots Claude for Chrome, a browser agent that takes actions on the user's behalf.
- The agent can see pages, click buttons, and fill forms with consent.
- Pilot starts with 1,000 Max users, access via waitlist, staged rollout.
- Early red-team tests cut prompt-injection success from 23.6% to 11.2%.
- Defaults include site permissions, high-risk action confirmations, and blocked categories.
Claude steps into Chrome as a browser-using agent
Anthropic is piloting Claude for Chrome, placing the assistant directly inside the browser to help with real tasks. The agent can view current pages, understand context, and act with explicit user permission.
The initial goal is practical productivity, from scheduling and drafting to testing site features. Anthropic frames this as the next step after connecting Claude to calendars and documents, now extending help where people already work.
Access begins with a 1,000-user research preview for Max subscribers, via a public waitlist, with gradual expansion as safety confidence improves.
We’ve developed Claude for Chrome, where Claude works directly in your browser and takes actions on your behalf.
We’re releasing it at first as a research preview to 1,000 users, so we can gather real-world insights on how it’s used.
— Anthropic (@AnthropicAI) August 26, 2025
What the Chrome agent can do right now
Claude can see what is on a page, click UI elements, and fill web forms when asked. It can chain steps to navigate between sites, execute simple workflows, and hand back results for human review.
Inside Anthropic, early versions helped manage calendars, schedule meetings, draft emails, process expense reports, and test web features, illustrating near-term utility without full autonomy.
“We view browser-using AI as inevitable… see what you’re looking at, click buttons, and fill forms.”
Safety system: permissions, categories, and early numbers
The first guardrail is permissions. Users grant site-level access and approve high-risk actions like publishing, purchasing, or sharing personal data. These checks persist even in autonomous mode.
Anthropic also blocks high-risk site categories by default, for example financial services and pirated content, and deploys classifiers that spot suspicious instructions and unusual data access patterns.
In 123 test cases spanning 29 attack scenarios, mitigations reduced prompt-injection success from 23.6% to 11.2%, and from 35.7% to 0% on a browser-specific challenge set.
How access works, and what users should expect
This is a controlled research preview. Anthropic is collecting real-world feedback to refine classifiers and expand permissions without compromising day-to-day browsing. Wider access will follow staged rollouts.
Installation goes through the Chrome Web Store after approval; users then authenticate with their Claude credentials. Guidance advises avoiding sensitive sites and starting on trusted domains while the pilot matures.
Early uses to try
- Triage emails, draft replies, and schedule meetings
- Fill forms and submit routine web requests
- Navigate multi-step workflows across tabs
What remains unconfirmed, and the open risks
Anthropic cautions that browser agents face prompt-injection threats hidden in pages, emails, or documents. The pilot focuses on finding new attack patterns and closing gaps before general availability.
Questions remain about broader availability, enterprise controls, and how these protections scale as tasks grow more complex. The company indicates it will expand access as safeguards strengthen.
Conclusion
Claude’s Chrome pilot pushes assistants from passive chat toward actionable help on the web. With site permissions, confirmations, and blocked categories, Anthropic is pairing capability with measurable defenses.
If the safety curve holds, browser-using AI could become a default way to handle routine web work. The pilot’s data, not demos, will determine when Claude’s agent becomes broadly available.