
Before You Click “Agree”: 3 Things to Check in Claude’s Updated Privacy Settings — Act Before It’s Too Late

  • August 29, 2025
    Updated

⏳ In Brief

  • Anthropic updated its Consumer Terms, asking Claude users whether to allow model training on their chats.
  • Applies to Free, Pro, and Max plans, including Claude Code; excludes Work, Gov, and Education.
  • Users must choose by September 28, 2025; a pop-up presents the training toggle.
  • Five-year retention applies to participants; otherwise the current 30-day policy remains.
  • Older chats are excluded unless you resume them in a new session.


Anthropic now asks consumer users to share chats for Claude’s training

Anthropic updated its Consumer Terms and Privacy Policy, giving users a choice to let chats and coding sessions improve future Claude models. The company says the setting can be changed anytime in Privacy Settings.

The change covers Claude Free, Pro, and Max, including Claude Code on those accounts. It does not apply to Claude for Work, Claude Gov, Claude for Education, or API usage through Bedrock or Vertex AI.

Starting today, existing users will see an in-app prompt. You have until September 28, 2025 to accept the updated terms and make a training decision; after that date, continued access will require selecting a preference.



What exactly changes, and who is affected

If you participate, Anthropic may train new models on your new or resumed chats and coding sessions. If you do not, you continue under the existing 30-day retention, and your data is excluded from future training.

Training only considers new interactions, plus old threads you resume. Dormant conversations remain outside the scope unless you reactivate them by continuing the chat or coding session.

“We’re now giving users the choice to allow their data to be used to improve Claude… and strengthen our safeguards against harmful usage.”


The pop-up, the default, and how the opt-out works

The pop-up displays a large Accept button with a smaller toggle that permits training. Reports indicate the toggle is set to On by default, so an inattentive click can enable training unintentionally.

You can change your selection anytime in Settings → Privacy → Privacy Settings. Turning it Off stops future use, although data already used in a completed training run is not pulled back from models.

Quick steps to opt out

  • Open Settings → Privacy → Privacy Settings, switch Help improve Claude Off
  • Avoid resuming old chats you want excluded from future training


Retention rules, filtering, and what Anthropic says about privacy

For participants, Anthropic extends retention to five years to support long-cycle training and safety classifier improvements. Non-participants remain on the 30-day schedule. Deleting a chat excludes it from future training.

Anthropic states it uses automated tools to filter or obfuscate sensitive data and does not sell user data to third parties. Users can change their choice later; the new setting applies only to future interactions.

Five-year retention applies only if you participate. Commercial tiers and API usage remain outside these consumer training terms, with their own data-handling policies.


Context: the legal backdrop and timing

The policy shift arrives as the company resolves a copyright class action brought by U.S. authors. A proposed settlement avoids a December trial after a mixed pre-trial ruling on fair use and data handling. Terms remain confidential.

Observers see continued uncertainty around training data and rights. The settlement removes one early appellate test, while consumer controls, like this opt-out flow, try to balance capability and privacy expectations.


What remains constant, and what to watch next

Commercial customers, including Work, Gov, and Education, plus API traffic through Bedrock and Vertex, are not included. Providers may still revise usage policies as agentic features expand.

Watch whether the default-On toggle in the pop-up drives meaningful opt-in rates, and whether retention and filtering practices address user concerns about long-term storage and reuse.


Conclusion

Anthropic is moving model improvement closer to everyday use, asking consumer users to share new chats and coding sessions unless they choose otherwise. Controls are present, but defaults matter in real behavior.

For now, the clearest path is informed choice. Decide before September 28, keep sensitive threads separate, and review Privacy Settings so Claude’s training lines up with your actual intent.



Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

He’s known for turning dense papers and public filings into plain-English explainers, quick on-the-day updates, and practical takeaways. His work includes live coverage of major announcements and concise weekly briefings that track what actually matters.

Outside of work, Khurram squads up in Call of Duty and spends downtime tinkering with PCs, testing apps, and hunting for thoughtful tech gear.

Personal Quote

“Chase the facts, cut the noise, explain what counts.”

Highlights

  • Covers model releases, safety notes, and policy moves
  • Turns research papers into clear, actionable explainers
  • Publishes a weekly AI briefing for busy readers
