⏳ In Brief
- Anthropic updated its Consumer Terms, asking Claude users to decide whether to allow model training.
- Applies to Free, Pro, Max, including Claude Code, excludes Work, Gov, Education.
- Users must choose by September 28, 2025; an in-app pop-up shows the training toggle.
- Five-year retention for participants, otherwise current 30-day policy remains.
- Older chats are excluded, unless you resume them in a new session.
Anthropic now asks consumer users to share chats for Claude’s training
Anthropic updated its Consumer Terms and Privacy Policy, giving users the choice to let their chats and coding sessions be used to improve future Claude models. The company says the setting can be changed anytime in Privacy Settings.
We’re updating our consumer terms and privacy policy to help us deliver even more capable, useful AI models.
Our users will now be given the choice to allow their data to be used to improve Claude.
— Claude (@claudeai) August 28, 2025
The change covers Claude Free, Pro, and Max, including Claude Code on those accounts. It does not apply to Claude for Work, Claude Gov, Claude for Education, or API usage through Bedrock or Vertex AI.
Starting today, existing users will see an in-app prompt. You have until September 28, 2025 to accept the updated terms and make a training decision; after that date, continued access requires selecting a preference.

What exactly changes, and who is affected
If you participate, Anthropic may train new models on your new or resumed chats and coding sessions. If you do not, you continue under the existing 30-day retention, and your data is excluded from future training.
Training covers only new interactions, plus old threads you resume. Dormant conversations remain out of scope unless you reactivate them by continuing the chat or coding session.
“We’re now giving users the choice to allow their data to be used to improve Claude… and strengthen our safeguards against harmful usage.”
The pop-up, the default, and how the opt-out works
The pop-up displays a large Accept button, with a smaller toggle that controls whether your data may be used for training. Reports show the toggle is On by default, which means inattentive clicks may permit training unintentionally.
You can change your selection anytime in Settings → Privacy → Privacy Settings. Turning it Off stops future use, although data already used in a completed training run is not pulled back from models.
Quick steps to opt out
- Open Settings → Privacy → Privacy Settings, then switch Help improve Claude to Off
- Avoid resuming old chats you want excluded from future training
Retention rules, filtering, and what Anthropic says about privacy
For participants, Anthropic extends retention to five years to support long-cycle training and safety classifier improvements. Non-participants remain on the 30-day schedule. Deleting a chat excludes it from future training.
Anthropic states it uses automated tools to filter or obfuscate sensitive data and does not sell user data to third parties. Users can change their choice later, which applies to future interactions.
Five-year retention applies only if you participate. Commercial tiers and API usage remain outside these consumer training terms, with their own data-handling policies.
Context: the legal backdrop and timing
The policy shift arrives just as the company moves to resolve a copyright class action brought by U.S. authors. A proposed settlement avoids a December trial after a mixed pre-trial ruling on fair use and data handling. Terms remain confidential.
Observers see continued uncertainty around training data and rights. The settlement removes one early appellate test, while consumer controls, like this opt-out flow, try to balance capability and privacy expectations.
What remains constant, and what to watch next
Commercial customers, including Work, Gov, and Education, plus API traffic through Bedrock and Vertex, are not included. Providers may still revise usage policies as agentic features expand.
Watch whether the default-On toggle in the pop-up drives meaningful opt-in rates, and whether retention and filtering practices address user concerns about long-term storage and reuse.
Conclusion
Anthropic is moving model improvement closer to everyday use, asking consumer users to share new chats and coding sessions unless they choose otherwise. Controls are present, but defaults shape real behavior.
For now, the clearest path is informed choice. Decide before September 28, keep sensitive threads separate, and review Privacy Settings so Claude’s training lines up with your actual intent.
📈 Trending News
29th August 2025
- Taco Bell hits pause on AI drive-thru
- OpenAI Codex arrives for ChatGPT subscribers for free
- WhatsApp’s AI “Writing Help” rolls out to fix tone, typos, and phrasing
- ‘What if’ AI video of Mount Fuji erupting over Tokyo
- Meet Plaud’s Note Pro — a card-sized AI recorder with a display
For more AI stories, visit AI News on our site.